PointAcc: Efficient Point Cloud Accelerator

Deep learning on point clouds plays a vital role in a wide range of applications such as autonomous driving and AR/VR. These applications interact with people in real time on edge devices and thus require low latency and low energy. Compared to projecting the point cloud to 2D space, directly processing the 3D point cloud yields higher accuracy and fewer MACs. However, the extremely sparse nature of point clouds poses challenges to hardware acceleration. For example, we need to explicitly determine the nonzero outputs and search for the nonzero neighbors (mapping operation), which is unsupported in existing accelerators. Furthermore, explicit gather and scatter of sparse features are required, resulting in large data movement overhead. In this paper, we comprehensively analyze the performance bottlenecks of modern point cloud networks on CPU/GPU/TPU. To address these challenges, we then present PointAcc, a novel point cloud deep learning accelerator. PointAcc maps diverse mapping operations onto one versatile ranking-based kernel, streams the sparse computation with configurable caching, and temporally fuses consecutive dense layers to reduce the memory footprint. Evaluated on 8 point cloud models across 4 applications, PointAcc achieves 3.7X speedup and 22X energy savings over an RTX 2080Ti GPU. Co-designed with light-weight neural networks, PointAcc outperforms the prior accelerator Mesorasi by 100X speedup with 9.1% higher accuracy running segmentation on the S3DIS dataset. PointAcc paves the way for efficient point cloud recognition.

Moreover, the sparsity in point clouds is fundamentally different from that in traditional CNNs, which comes from weight pruning and the ReLU activation function. The sparsity patterns of point clouds are constrained by the physical objects in the real world. That is to say, the nonzero points never dilate during the computation.
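A toy 1-D illustration of this property (coordinates and offsets below are made up for demonstration): a dense-style convolution produces a nonzero wherever any kernel tap reaches, dilating the sparsity pattern, while a point-cloud (submanifold-style) convolution keeps outputs only at the input coordinates.

```python
# Hypothetical 1-D example: dense convolution dilates nonzeros,
# while point-cloud convolution restricts outputs to input coordinates,
# so the sparsity pattern never grows.
inputs = {3, 7}            # coordinates of nonzero points
offsets = [-1, 0, 1]       # kernel offsets

# Dense-style: any coordinate reached by any offset becomes nonzero.
dense_out = {c + o for c in inputs for o in offsets}
# Point-cloud-style: outputs restricted to the input coordinate set.
sub_out = {c for c in inputs if any((c - o) in inputs for o in offsets)}

print(sorted(dense_out))   # [2, 3, 4, 6, 7, 8] -- nonzeros dilated
print(sorted(sub_out))     # [3, 7] -- sparsity pattern unchanged
```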

Therefore, point cloud processing requires a variety of mapping operations, such as ball query and kernel mapping, to establish the relationship between input and output points for computation, which has not been explored by existing deep learning accelerators. Moreover, the strictly restricted sparsity pattern in point cloud networks leads to irregular sparse computation patterns. It thus requires explicit gather and scatter of point features for matrix computation, which results in a massive memory footprint.
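The gather-matmul-scatter pattern can be sketched as follows. This is a simplified illustration, not the paper's exact kernel: the `pairs` list stands in for the (input index, output index) map that a mapping operation would produce for one weight offset.

```python
# Hypothetical sketch of the explicit gather -> matmul -> scatter
# pattern that sparse point-cloud convolution performs per weight offset.
import numpy as np

rng = np.random.default_rng(0)
num_in, num_out, c_in, c_out = 5, 6, 4, 3
feats = rng.random((num_in, c_in))        # sparse input point features
weight = rng.random((c_in, c_out))        # weight matrix of one offset

# (input index, output index) pairs found by the mapping operation
pairs = [(0, 1), (2, 3), (4, 1)]
in_idx = [i for i, _ in pairs]
out_idx = [o for _, o in pairs]

gathered = feats[in_idx]                  # gather: (len(pairs), c_in)
partial = gathered @ weight               # one dense matmul
out = np.zeros((num_out, c_out))
np.add.at(out, out_idx, partial)          # scatter-accumulate partial sums
```

The gather and scatter steps are pure data movement; on general-purpose hardware they dominate memory traffic, which is exactly the overhead the abstract refers to.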

Due to the extreme sparsity, mapping operations and data movement overhead together take up more than 50% of the total runtime. Worse, because existing accelerators do not support mapping operations, data movement between co-processors (CPU and TPU) further exacerbates the bottleneck.

The following figure shows an example of searching for the (input, output) pairs of weight w(-1, -1). The search is converted into a coordinate intersection and implemented with a parallelizable merge-sort-based operation.
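A minimal sketch of this idea, using made-up toy coordinates: output coordinates are shifted by the weight offset, both lists are sorted, and a two-pointer merge (the merge step of merge sort) finds the intersection.

```python
# Assumed toy example: find (input, output) pairs for weight w = (-1, -1)
# by intersecting sorted coordinate lists with a two-pointer merge.
w = (-1, -1)
inputs  = [(3, 3), (0, 0), (1, 2)]
outputs = [(4, 4), (1, 1), (2, 3)]

# An output at (x, y) pairs with the input at (x + wx, y + wy).
shifted = sorted(((x + w[0], y + w[1]), (x, y)) for x, y in outputs)
sorted_in = sorted(inputs)

pairs, i, j = [], 0, 0
while i < len(sorted_in) and j < len(shifted):
    if sorted_in[i] == shifted[j][0]:            # coordinates intersect
        pairs.append((sorted_in[i], shifted[j][1]))
        i += 1; j += 1
    elif sorted_in[i] < shifted[j][0]:
        i += 1
    else:
        j += 1

print(pairs)  # [((0, 0), (1, 1)), ((1, 2), (2, 3)), ((3, 3), (4, 4))]
```

Because both the sorting and the merge are built from compare-and-swap steps, the whole search parallelizes well in hardware, which is why one ranking-based kernel can serve many different mapping operations.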

Performance gain of the full-version PointAcc over server-level products:

Performance gain of the edge-version PointAcc over edge-level devices:

Furthermore, PointAcc runs kernel mapping 1.4X faster than a hash-table-based ASIC implementation, and k-nearest-neighbor search 1.18X faster than the quick-sort-based TopK engine in prior work. One versatile architecture thus outperforms designs specialized for each mapping operation independently.

On GPU, fetching only the on-demand input features cuts the data movement cost by 3X. However, the matrix-matrix multiplication is then decomposed into matrix-vector multiplications, which significantly increases the computation overhead due to low utilization of GPU cores. PointAcc removes this overhead thanks to the computational power of its systolic array.

The left figure shows the probability density of the per-layer DRAM footprint in MinkowskiUNet. A wider region indicates a higher frequency of the given data size. The shapes of the distributions are nearly the same with and without caching, which indicates that the caching works consistently across different layers and different datasets. On average, the configurable cache reduces the per-layer DRAM access by 3.5X to 6.3X, with each point's features fetched only about once on average.

The right figure shows the reduction ratio of DRAM access when running PointNet++-based networks with layer fusion. Compared against running the networks layer by layer independently, our layer fusion cuts DRAM access by 33% to 41%.

@inproceedings{lin2021pointacc,
  title     = {{PointAcc: Efficient Point Cloud Accelerator}},
  author    = {Lin, Yujun and Zhang, Zhekai and Tang, Haotian and Wang, Hanrui and Han, Song},
  booktitle = {54th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO '21)},
  year      = {2021}
}

**Acknowledgments**: This work was supported by the National Science Foundation, Hyundai, Qualcomm, and the MIT-IBM Watson AI Lab. We also thank the AWS Machine Learning Research Awards for the computational resources.