Intelligent edge devices with rich sensors (e.g., billions of mobile phones and IoT devices) are now ubiquitous in our daily lives. Combining artificial intelligence (AI) with these edge devices enables a vast range of real-world applications, such as smart home, smart retail, and autonomous driving. However, state-of-the-art deep learning systems typically demand tremendous resources for both training and inference (e.g., large labeled datasets, massive computation, and scarce AI expertise), which hinders their deployment on edge devices. The TinyML project aims to improve the efficiency of deep learning systems by requiring less computation, fewer engineers, and less data, to unlock the giant market of edge AI and AIoT.
If you would like to receive updates, please sign up here to get notified!
12.07.2022: Our TinyEngine now supports patch-based inference.
10.04.2022: Our paper on tiny on-device training is highlighted on the MIT homepage!
9.15.2022: Our paper On-Device Training Under 256KB Memory is accepted by NeurIPS 2022!
8.30.2022: We have released the code for TinyEngine. Check our GitHub for more information!
8.29.2022: Our new course on TinyML and Efficient Deep Learning will be released in September 2022: efficientml.ai.
6.1.2022: We have launched a website, mcunet.mit.edu, to introduce our series of TinyML research.
12.8.2021: Delayed Gradient Averaging: Tolerate the Communication Latency for Federated Learning is accepted by NeurIPS 2021.
12.8.2021: MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning is accepted by NeurIPS 2021.
12.7.2021: MCUNet: Tiny Deep Learning on IoT Devices is accepted by NeurIPS 2020 as spotlight presentation.
6.1.2020: TinyTL: Reduce Memory, Not Parameters for Efficient On-Device Learning is accepted by NeurIPS 2020.