2019 SiPS Paper
Published in: 2019 IEEE International Workshop on Signal Processing Systems (SiPS)
Date of Conference: October 20–23, 2019
Authors: Meiqi Wang; Jianqiao Mo*; Jun Lin; Zhongfeng Wang; Li Du
Abstract: Early-exit is a technique that terminates a pre-specified computation at an early stage depending on the input sample, and it has been introduced to reduce the energy consumption of Deep Neural Networks (DNNs). Previous early-exit approaches suffered from the burden of manually tuning the early-exit loss weights to find a good trade-off between complexity reduction and system accuracy. In this work, we first propose DynExit, a dynamic loss-weight modification strategy for ResNets, which adaptively adjusts the loss-weight ratio among the different exit branches and searches for a proper operating point for both accuracy and cost. We then develop an efficient hardware unit for early-exit branches, which can be easily integrated into existing DNN hardware architectures to reduce average computing latency and energy cost. Experimental results show that the proposed DynExit strategy reduces FLOPs by up to 43.6% compared to state-of-the-art approaches. Moreover, it achieves a 1.2% accuracy improvement over the existing end-to-end fixed loss-weight training scheme at a comparable computation reduction ratio. The proposed hardware architecture for DynExit is evaluated on a Xilinx Zynq-7000 ZC706 development board. Synthesis results demonstrate that the architecture achieves high speed with low hardware complexity. To the best of our knowledge, this is the first hardware implementation of early-exit techniques for DNNs in the open literature.
Note: Received the Best Paper Award (first prize).
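The sketch below illustrates the two ideas summarized in the abstract: training a ResNet-style backbone with weighted losses from multiple exit branches, where the weights are updated dynamically rather than hand-tuned, and inference that terminates at the first sufficiently confident branch. It is a minimal illustrative sketch, not the paper's method: the branch placement, the loss-renormalization update rule, and the 0.9 confidence threshold are all assumptions, and the names EarlyExitNet, train_step, and predict_early_exit are hypothetical.

```python
# Minimal sketch of dynamic loss-weight training and early-exit inference.
# The weight-update rule (renormalizing by relative branch losses) is an
# illustrative assumption, NOT the exact DynExit formulation from the paper.
import torch
import torch.nn as nn

class EarlyExitNet(nn.Module):
    """Toy backbone with two early-exit branches plus the final classifier."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.exit1 = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                   nn.Linear(16, num_classes))
        self.stage2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.exit2 = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                   nn.Linear(32, num_classes))
        self.stage3 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        self.final_exit = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                        nn.Linear(64, num_classes))

    def forward(self, x):
        h1 = self.stage1(x)
        h2 = self.stage2(h1)
        h3 = self.stage3(h2)
        return [self.exit1(h1), self.exit2(h2), self.final_exit(h3)]

def train_step(model, optimizer, x, y, weights,
               criterion=nn.CrossEntropyLoss()):
    """One step of multi-exit training with a weighted sum of branch losses."""
    optimizer.zero_grad()
    branch_losses = [criterion(logits, y) for logits in model(x)]
    total = sum(w * l for w, l in zip(weights, branch_losses))
    total.backward()
    optimizer.step()
    # Assumed update rule: shift weight toward branches with larger loss so
    # that no fixed, hand-tuned ratio is needed (stand-in for DynExit).
    with torch.no_grad():
        ls = torch.tensor([l.item() for l in branch_losses])
        weights = (ls / ls.sum()).tolist()
    return total.item(), weights

@torch.no_grad()
def predict_early_exit(model, x, threshold=0.9):
    """Run stages incrementally and stop at the first confident exit.
    Batch size 1 is assumed; the 0.9 threshold is an illustrative choice."""
    h = model.stage1(x)
    logits = model.exit1(h)
    if logits.softmax(-1).max() >= threshold:
        return logits  # terminate early: later stages are never computed
    h = model.stage2(h)
    logits = model.exit2(h)
    if logits.softmax(-1).max() >= threshold:
        return logits
    return model.final_exit(model.stage3(h))
```

A typical loop would initialize the weights uniformly, e.g. weights = [1/3, 1/3, 1/3], and pass the updated list returned by train_step into the next iteration; at deployment, predict_early_exit is what yields the latency and energy savings, since "easy" inputs never reach the deeper stages.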