
Improving the Accuracy of Analog-Based In-Memory Computing Accelerators Post-Training

Abstract

Analog-Based In-Memory Computing (AIMC) inference accelerators can be used to efficiently execute Deep Neural Network (DNN) inference workloads. However, to mitigate accuracy losses due to circuit and device non-idealities, Hardware-Aware (HWA) training methodologies must be employed. These typically require significant information about the underlying hardware. In this paper, we propose two Post-Training (PT) optimization methods to improve accuracy after training is performed. For each crossbar, the first optimizes the conductance range of each column, and the second optimizes the input, i.e., Digital-to-Analog Converter (DAC), range. It is demonstrated that, when these methods are employed, the complexity during training and the amount of information required about the underlying hardware can be reduced, with no notable change in accuracy (≤0.1%) when fine-tuning the pretrained RoBERTa transformer model for all General Language Understanding Evaluation (GLUE) benchmark tasks. Additionally, it is demonstrated that further optimizing learned parameters post-training improves accuracy.
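The abstract only summarizes the two methods; the paper details the exact procedures. As a rough, hypothetical sketch of the general idea (plain PyTorch, not the paper's implementation; all function and variable names are invented), the snippet below calibrates a DAC clipping bound by sweeping candidates on a calibration batch, then fits a per-column output scale, standing in for a per-column conductance-range adjustment, by least squares against the ideal floating-point output.

```python
import torch

def quantize_input(x, bound, bits=8):
    """Clip to [-bound, bound] and round to a uniform grid, mimicking a
    DAC of the given resolution (illustrative model only)."""
    levels = 2 ** (bits - 1) - 1
    step = bound / levels
    return torch.round(x.clamp(-bound, bound) / step) * step

def calibrate_ranges(weight, x_cal, n_candidates=64, bits=8):
    """Hypothetical PT calibration sketch (not the paper's exact method).

    1. Sweep DAC clipping bounds and keep the one minimizing MSE of the
       crossbar output versus the ideal matrix-vector product.
    2. Fit a per-column output scale in closed form (least squares), a
       stand-in for optimizing each column's conductance range.
    """
    y_ideal = x_cal @ weight.T  # ideal floating-point output

    # Step 1: input (DAC) range search over fractions of the max input.
    max_abs = x_cal.abs().max().item()
    best_bound, best_err = max_abs, float("inf")
    for frac in torch.linspace(0.1, 1.0, n_candidates):
        bound = frac.item() * max_abs
        y_hat = quantize_input(x_cal, bound, bits) @ weight.T
        err = torch.mean((y_hat - y_ideal) ** 2).item()
        if err < best_err:
            best_bound, best_err = bound, err

    # Step 2: per-column scale s_j minimizing ||s_j * y_q[:, j] - y_ideal[:, j]||^2.
    y_q = quantize_input(x_cal, best_bound, bits) @ weight.T
    col_scale = (y_q * y_ideal).sum(0) / (y_q * y_q).sum(0).clamp(min=1e-12)
    return best_bound, col_scale
```

For a layer weight of shape (out_features, in_features) and a small calibration batch x_cal of shape (batch, in_features), calibrate_ranges(weight, x_cal) returns the chosen clipping bound and one scale per output column. In an actual AIMC setting, the same search would be driven by measured (or simulated) crossbar outputs, which include device noise and other non-idealities, rather than this idealized quantization model.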


Authors

  • Corey Lammie*
  • Athanasios Vasilopoulos*
  • Julian Büchel*
  • Giacomo Camposampiero*
  • Manuel Le Gallo*
  • Malte J. Rasch
  • Abu Sebastian*

*External Authors

Venue

ISCAS 2024

Date

2024
