Low-precision deep-learning-based automatic modulation recognition system

Authors: Satish Kumar, Aakash Agarwal, Neeraj Varshney, Rajarshi Mahapatra
Status: Final
Date of publication: 22 September 2022
Published in: ITU Journal on Future and Evolving Technologies, Volume 3 (2022), Issue 2, Pages 214-223
Article DOI: https://doi.org/10.52953/CTYJ2699
Abstract:
Convolutional Neural Network (CNN)-based deep learning models have recently been employed in Automatic Modulation Classification (AMC) systems with excellent results. However, deploying these CNN-based AMC models on hardware such as Field-Programmable Gate Arrays (FPGAs) is very difficult due to their large size, floating-point weights and activations, and real-time processing requirements. In this study, we designed CNN-based AMC techniques for complex-valued temporal radio signals and made them less complex, with a small memory footprint, for FPGA implementation. This work mainly focuses on quantized CNNs, low-precision arithmetic, and quantization-aware CNN training to overcome the problems of large model size and floating-point weights and activations. Low-precision weights and activations and CNN quantization, however, have a considerable impact on model accuracy. Thus, we propose an iterative pruning-based training mechanism that keeps the overall accuracy above a chosen threshold while decreasing the model size for hardware implementation. The proposed schemes are 21.55 times less complex and achieve at least 1.6% higher accuracy than the baseline. Moreover, results show that our convolution-layer-based Quantized Modulation Classification Network (QMCNet) with pruning has 92.01% fewer multiply-accumulate bit operations (bit_operations), 61.39% fewer activation bits, and 87.58% fewer weight bits than the 8-bit quantized baseline model, whereas the quantized and pruned Residual-Unit-based model (RUNet) has 95.36% fewer bit_operations, 29.97% fewer activation bits, and 98.22% fewer weight bits than the 8-bit quantized baseline model.
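To make the two techniques in the abstract concrete, the following is a minimal PyTorch sketch of quantization-aware training (via uniform fake quantization with a straight-through gradient estimator) combined with iterative magnitude pruning on a small 1D CNN over I/Q samples. This is not the authors' QMCNet or RUNet implementation: the FakeQuant and QuantConv1d classes, the 4-bit width, the layer sizes, the 11-class output, and the three-round 20% pruning schedule are all illustrative assumptions.

```python
# Sketch only: quantization-aware training + iterative magnitude pruning
# for an AMC-style CNN. Not the paper's QMCNet/RUNet code. Requires PyTorch.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune


class FakeQuant(torch.autograd.Function):
    """Uniform fake quantization with a straight-through gradient estimator."""

    @staticmethod
    def forward(ctx, x, bits):
        qmax = 2 ** (bits - 1) - 1
        scale = x.detach().abs().max().clamp(min=1e-8) / qmax
        return torch.round(x / scale).clamp(-qmax - 1, qmax) * scale

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output, None  # straight-through: pass gradient unchanged


class QuantConv1d(nn.Conv1d):
    """Conv1d whose weights are fake-quantized to `bits` during training."""

    def __init__(self, *args, bits=4, **kwargs):
        super().__init__(*args, **kwargs)
        self.bits = bits

    def forward(self, x):
        w_q = FakeQuant.apply(self.weight, self.bits)
        return self._conv_forward(x, w_q, self.bias)


# Tiny CNN over complex-valued I/Q samples: 2 channels (I and Q), length 128.
model = nn.Sequential(
    QuantConv1d(2, 16, kernel_size=7, padding=3, bits=4),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(16, 11),  # assumed 11 modulation classes
)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
x = torch.randn(32, 2, 128)      # dummy I/Q batch
y = torch.randint(0, 11, (32,))  # dummy labels

# Iterative pruning: alternate short fine-tuning phases with magnitude
# pruning of the smallest remaining weights.
for round_ in range(3):
    for _ in range(10):  # fine-tune between pruning rounds
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    # Prune 20% of the smallest-magnitude remaining conv weights.
    prune.l1_unstructured(model[0], name="weight", amount=0.2)
    # In practice, evaluate on a validation set here and stop pruning
    # once accuracy falls below the chosen threshold, as the abstract
    # describes.
```

The straight-through estimator lets gradients bypass the non-differentiable rounding step, which is the standard trick behind quantization-aware training. For actual FPGA deployment, a dedicated toolchain such as Brevitas with FINN is commonly used for this kind of low-precision CNN, though the abstract does not state the paper's specific toolchain.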

Keywords: Automatic modulation classification, convolution neural network, FPGA, iterative pruning, quantization-aware training
Rights: © International Telecommunication Union, available under the CC BY-NC-ND 3.0 IGO license.
Available in English: PDF format, free download.