On-Device Training Under 256KB Memory

Ji Lin* 1 , Ligeng Zhu* 1 , Wei-Ming Chen 1 , Wei-Chen Wang 1 , Chuang Gan 2 , Song Han 1
MIT, MIT-IBM Watson AI Lab
(* indicates equal contributions)

If you are interested in getting updates, please sign up to get notified and check out tinyml.mit.edu for related work!

News

  • [11/28/2022] Our poster session is on Wed, Nov 30, 11:30am-1:00pm (New Orleans time) @ Hall J #702. Stop by if you are interested!
  • [10/04/2022] Our paper on tiny on-device training is highlighted on the MIT homepage!
  • [09/16/2022] Our paper is accepted to NeurIPS 2022!
  • [06/30/2022] Our video demo of on-device training on microcontrollers is now available online!
  • [06/30/2022] Our paper is released on arXiv.

Abstract

On-device training enables the model to adapt to new data collected from the sensors by fine-tuning a pre-trained model. However, the training memory consumption is prohibitive for IoT devices that have tiny memory resources. We propose an algorithm-system co-design framework to make on-device training possible with only 256KB of memory. On-device training faces two unique challenges: (1) the quantized graphs of neural networks are hard to optimize due to mixed bit-precision and the lack of normalization; (2) the limited hardware resources (memory and computation) do not allow full backward computation. To cope with the optimization difficulty, we propose Quantization-Aware Scaling to calibrate the gradient scales and stabilize quantized training. To reduce the memory footprint, we propose Sparse Update to skip the gradient computation of less important layers and sub-tensors. The algorithm innovation is implemented by a lightweight training system, Tiny Training Engine, which prunes the backward computation graph to support sparse updates and offloads the runtime auto-differentiation to compile time. Our framework is the first practical solution for on-device transfer learning of visual recognition on tiny IoT devices (e.g., a microcontroller with only 256KB SRAM), using less than 1/100 of the memory of existing frameworks while matching the accuracy of cloud training + edge deployment for the tinyML application VWW. Our study enables IoT devices to not only perform inference but also continuously adapt to new data for on-device lifelong learning.

Demo


We also provide a Bilibili version.

Methods & Results

Historically, DNN training happens in the cloud. Can we learn on the edge? The large memory usage is the main challenge. We enable on-device training under 256KB memory, using less than 1/1000 of the memory of PyTorch while matching the accuracy on the Visual Wake Words application.

Figure 1: Algorithm and system co-design reduces the training memory from 303MB (PyTorch) to 149KB with the same transfer learning accuracy, a 2300x reduction. The numbers are measured with MobileNetV2-w0.35, batch size 1, and resolution 128x128. The model can be deployed to a microcontroller with 256KB SRAM.


The first challenge is optimizing a real quantized graph, which is difficult due to the low bit-precision (INT8) and the lack of batch normalization. We propose Quantization-Aware Scaling (QAS) to automatically scale the gradients, which effectively stabilizes training and matches the FP32 accuracy.

Figure 2: Fake vs. real quantized graphs in the dataflow.
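
To make the rescaling rule concrete, here is a minimal NumPy sketch of the QAS idea for a per-channel quantized linear layer (rounding omitted). It is only an illustration under our own assumptions, not the Tiny Training Engine implementation; the function and variable names are ours.

    import numpy as np

    def qas_rescale(grad_w_bar, grad_b_bar, s_w, s_x):
        # In the real quantized graph, w_bar = w / s_w (per output channel) and
        # b_bar = b / (s_w * s_x), so the chain rule inflates their gradients by
        # s_w and s_w * s_x. QAS compensates with the squared scales so the
        # weight/gradient ratio matches the FP32 graph.
        grad_w = grad_w_bar * (s_w ** -2)[:, None]
        grad_b = grad_b_bar * (s_w ** -2) * (s_x ** -2)
        return grad_w, grad_b

    # Toy check: an SGD step on w_bar with QAS matches the FP32 step on w.
    rng = np.random.default_rng(0)
    w = rng.normal(size=(4, 8)).astype(np.float32)    # FP32 reference weight
    g_w = rng.normal(size=(4, 8)).astype(np.float32)  # FP32 gradient dL/dw
    s_w = np.abs(w).max(axis=1) / 127.0               # per-channel INT8 scale
    s_x = 0.05                                        # activation scale (made up)

    w_bar = w / s_w[:, None]                          # scaled weight (rounding omitted)
    g_w_bar = g_w * s_w[:, None]                      # chain rule: dL/dw_bar = dL/dw * s_w
    g_w_qas, _ = qas_rescale(g_w_bar, np.zeros(4), s_w, s_x)

    lr = 0.1
    fp32_step = w - lr * g_w
    qas_step = (w_bar - lr * g_w_qas) * s_w[:, None]  # map the update back to FP32 scale
    assert np.allclose(fp32_step, qas_step, atol=1e-5)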

To reduce the memory footprint, we propose sparse layer and sparse tensor updates, which skip the gradient computation of less important layers and sub-tensors. We developed an automated method to find the best sparse update scheme under varying memory budgets.

Figure 3: Sparse update only performs partial back-propagation, leading to less memory usage but comparable accuracy on downstream tasks.
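
The paper searches for the update scheme with contribution analysis and an evolutionary search; the sketch below is a much-simplified greedy stand-in for that search, with made-up layer names, memory costs, and contribution scores purely for illustration.

    # Hypothetical candidates: the extra memory (bytes) of updating a bias or a
    # fraction of a weight tensor, and an offline-measured accuracy contribution.
    candidates = [
        # (name, extra_memory_bytes, accuracy_contribution)
        ("block10.bias",          512,    0.8),
        ("block11.bias",          512,    0.9),
        ("block11.weight[:0.25]", 9216,   2.1),
        ("block12.bias",          512,    1.0),
        ("block12.weight[:0.50]", 18432,  3.0),
        ("classifier.weight",     12800,  2.5),
    ]

    def select_update_scheme(candidates, budget_bytes):
        """Greedy knapsack: pick the updates with the best contribution per byte
        until the extra-memory budget is exhausted (a simplification of the
        contribution analysis / evolutionary search used in the paper)."""
        chosen, used = [], 0
        ranked = sorted(candidates, key=lambda c: c[2] / c[1], reverse=True)
        for name, cost, gain in ranked:
            if used + cost <= budget_bytes:
                chosen.append(name)
                used += cost
        return chosen, used

    scheme, used = select_update_scheme(candidates, budget_bytes=32 * 1024)
    print(f"update {scheme} using {used} extra bytes")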

These innovations are implemented in the Tiny Training Engine (TTE), which offloads auto-differentiation from runtime to compile time and uses code generation to minimize runtime overhead. It also prunes and reorders the computation graph to support sparse updates, achieving measured memory savings and speedup.

Figure 4: Measured peak memory and latency. (a) Sparse update with our graph optimization reduces the measured peak memory by 20-21$\times$. (b) Graph optimization consistently improves the peak memory. (c) Sparse update with our operators achieves 23-25$\times$ faster training speed. For all numbers, we choose the config that achieves the same accuracy as the full update.
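
As a toy illustration of the compile-time pruning idea (not TTE's actual IR, scheduling, or code generation; the layer indexing and op names are ours), the sketch below keeps weight-gradient ops only for the layers in the sparse-update scheme and stops propagating activation gradients below the shallowest updated layer, so the pruned ops never produce any runtime code.

    def prune_backward_graph(num_layers, trainable_layers):
        """Return the backward ops to keep, walking from the loss toward the input.

        'dW i' computes layer i's weight gradient; 'dX i' propagates the activation
        gradient through layer i to the layer below. We keep dW only for layers in
        the sparse-update scheme, and dX only as far down as the shallowest such
        layer; deeper propagation toward the input is pruned at compile time.
        """
        if not trainable_layers:
            return []
        shallowest = min(trainable_layers)
        kept = []
        for i in range(num_layers - 1, -1, -1):   # loss -> input order
            if i in trainable_layers:
                kept.append(f"dW {i}")            # weight gradient is needed
            if i > shallowest:
                kept.append(f"dX {i}")            # still need to back-propagate
        return kept

    # Example: a 12-layer toy net where only the last two layers are updated;
    # everything below layer 10 produces no backward code at all.
    print(prune_backward_graph(num_layers=12, trainable_layers={10, 11}))
    # -> ['dW 11', 'dX 11', 'dW 10']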

Conclusion

As a result, we can enable tiny on-device training under 256KB SRAM and 1MB Flash while achieving higher accuracy than MLPerf Tiny on the VWW dataset. This suggests that tiny IoT devices can not only perform inference but also continuously adapt to new data for lifelong learning. Tiny training can enable a wide range of exciting applications: mobile phones can learn customized language models, and cameras can continually recognize new faces/objects, while the personal data is never uploaded to the cloud, thus protecting privacy. With on-device training, AI can also continuously learn over time, adapting to a world that is changing fast. This would also empower IoT applications where there is no physical connection to the internet.

Citation

 @inproceedings{lin2022ondevice,
    title     = {On-Device Training Under 256KB Memory},
    author    = {Lin, Ji and Zhu, Ligeng and Chen, Wei-Ming and Wang, Wei-Chen and Gan, Chuang and Han, Song},
    booktitle = {Annual Conference on Neural Information Processing Systems (NeurIPS)},
    year      = {2022}
} 


Acknowledgments: We thank the National Science Foundation (NSF), MIT-IBM Watson AI Lab, MIT AI Hardware Program, Amazon, Intel, Qualcomm, Ford, and Google for supporting this research.