Open Access
Convolutional neural networks (CNNs) play an important role in many computer vision applications such as object classification and recognition. To achieve high recognition rates, these networks are usually implemented on high-performance computing platforms with fast processors and large memory. This is a major obstacle to deploying such models on devices with limited hardware resources, such as embedded computers. Convolution layers require a large number of multiply-accumulate operations to extract useful features from input images, and floating-point multiplication has long latency and incurs a large hardware overhead. In this paper, we analyze and identify the causes that limit the performance of CNNs, and then present a method for implementing convolutional networks on hardware with limited resources. Performance is evaluated in detail in terms of power, execution time, and recognition rate. Experimental results on both an FPGA hardware platform and an ARM Cortex-A embedded processor indicate that CNNs using the XNOR-popcount approach can be optimized to achieve a 1000-fold increase in computational performance and approximately a 24-fold reduction in power consumption compared to the traditional implementation of CNNs on common embedded computer systems.
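The XNOR-popcount idea mentioned in the abstract can be illustrated with a small sketch (this is a generic illustration of the technique, not the paper's implementation): when weights and activations are binarized to ±1 and packed into machine words, an entire dot product reduces to one XNOR and one popcount, since the product of two ±1 entries is +1 exactly when their bit encodings agree.

```python
def bin_dot(a_bits: int, w_bits: int, n: int) -> int:
    """Dot product of two n-element ±1 vectors packed as n-bit integers.

    Bit 1 encodes +1 and bit 0 encodes -1. The elementwise product is
    +1 exactly where the bits agree (XNOR = 1), so the dot product is
    (#agreements) - (#disagreements) = 2 * popcount(XNOR) - n.
    """
    mask = (1 << n) - 1
    xnor = ~(a_bits ^ w_bits) & mask      # 1 wherever the bits agree
    return 2 * bin(xnor).count("1") - n   # popcount via bin().count


def float_dot(a, w):
    """Reference floating-point multiply-accumulate for comparison."""
    return sum(x * y for x, y in zip(a, w))


def pack(v):
    """Pack a ±1 vector into an integer: bit i is 1 iff v[i] == +1."""
    bits = 0
    for i, x in enumerate(v):
        if x == +1:
            bits |= 1 << i
    return bits


a = [+1, -1, +1, +1, -1, -1, +1, -1]
w = [+1, +1, -1, +1, -1, +1, -1, -1]
assert bin_dot(pack(a), pack(w), len(a)) == float_dot(a, w)
```

On hardware, the XNOR and popcount each cost a single cheap operation per word, replacing dozens of floating-point multiply-accumulates, which is the source of the speed and power gains the paper reports.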

Article Details

Issue: Vol 5 No 1 (2022)
Page No.: 1332-1341
Published: Mar 31, 2022
Section: Research article

 Copyright Info


Copyright: The Authors. This is an open access article distributed under the terms of the Creative Commons Attribution License CC-BY 4.0, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

 How to Cite
Pham, K., Tran, Q., & Nguyen, L. (2022). Optimizing the convolutional neural networks for resource-constraint hardwares. Science & Technology Development Journal - Engineering and Technology, 5(1), 1332-1341.

