TinyML Projects

[NeurIPS’22] On-Device Training Under 256KB Memory

[Paper] [Website] [Demo]

#On-device Learning, #Memory, #Training, #System, #Compiler

[arXiv] BEVFusion: Multi-Task Multi-Sensor Fusion with Unified Bird’s-Eye View Representation

[Paper] [Website] [Demo]

#Point-Cloud, #Self-driving, #3D Vision

[NeurIPS’21] Delayed Gradient Averaging: Tolerate the Communication Latency for Federated Learning

[Paper] [Slides] [Poster] [Website]

#On-device Learning, #Latency, #Federated Learning

[NeurIPS’21] MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning

[Paper] [Slides] [Website] [Demo] [Use Cases]

#Inference, #Micro-Controller

[NeurIPS’20] TinyTL: Reduce Activations, Not Trainable Parameters for Efficient On-Device Learning

[Paper] [Slides] [Code] [Website]

#On-device Learning, #Memory-Efficient, #Transfer Learning

[NeurIPS’20] MCUNet: Tiny Deep Learning on IoT Devices

[Paper] [Slides] [Poster] [Code] [Website] [Demo]

#Inference, #Micro-Controller

[NeurIPS’19] Deep Leakage from Gradients

[Paper] [Slides] [Code] [Website]

#Training, #Federated Learning, #Privacy

[ICLR’18] Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training

[Paper] [Code]

#Distributed Training, #Bandwidth