Implementation of An Extremely Effective Modified Reconfigurable Constant Coefficient Multiplier for Neural Network Architecture using FPGA
Abstract
Low-precision computation has been widely studied as a way to accelerate deep learning applications on field-programmable gate arrays (FPGAs), owing to its potential to reduce silicon area or boost throughput. These advantages, however, come at the cost of precision. This work demonstrates that modified reconfigurable constant coefficient multipliers (MRCCMs) can save more silicon area than low-precision arithmetic. MRCCMs multiply input values by a constrained set of coefficients using only adders, subtractors, bit shifts, and multiplexers (MUXs), so they can be highly optimized for FPGAs. A family of MRCCMs designed specifically for FPGA logic elements is proposed to guarantee their efficient use. To reduce the information loss caused by quantization, novel training methods are developed that map the feasible MRCCM coefficient patterns to the weight-value ranges of neural networks; as a result, the hardware can still use MRCCMs while maintaining high accuracy. The advantages of these methods are illustrated using the ResNet-18, ResNet-50, and AlexNet networks. The resulting implementations reduce resource consumption by up to 50% compared to conventional 8-bit quantized networks, yielding substantial speedups and power savings. All MRCCM variants achieve accuracy at least comparable to an 8-bit uniformly quantized system while significantly reducing resource usage, and the proposed MRCCM has the lowest resource consumption while surpassing 6-bit fixed-point accuracy. The MRCCM approach is additionally evaluated on a Xilinx FPGA across multiple MRCCM sizes, namely ADD-2, ADD-3, and ADD-4.
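As a rough illustration of the shift-and-add principle behind a reconfigurable constant coefficient multiplier (a minimal sketch, not the paper's actual MRCCM design), the Python below models a datapath restricted to coefficients of the form 2^s + 1: it needs only one shift, one adder, and a MUX selecting the shift amount. The coefficient table and function name are hypothetical, chosen for illustration.

```python
# Sketch of a reconfigurable constant-coefficient multiplier (RCCM):
# a MUX selects a shift amount s, and one adder computes x*((1<<s)+1).
# Illustrative only -- the paper's MRCCM structures (ADD-2/3/4) support
# richer coefficient sets via additional adders/subtractors.

SHIFTS = [1, 2, 3, 4]  # hypothetical MUX configuration table

def rccm_multiply(x: int, sel: int) -> int:
    """Multiply x by the coefficient (1 << SHIFTS[sel]) + 1 using
    only a bit shift and an addition, as the RCCM datapath would."""
    s = SHIFTS[sel]
    return (x << s) + x

# Coefficients reachable with this one-adder structure: 3, 5, 9, 17
coeffs = [(1 << s) + 1 for s in SHIFTS]
```

In the paper's setting, training would then constrain network weights to such reachable coefficient sets, so that each multiply in inference maps onto this shift/add/MUX structure instead of a full multiplier.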
This work is licensed under a Creative Commons Attribution 4.0 International License.