
Improving the Accuracy of Analog-Based In-Memory Computing Accelerators Post-Training

Corey Lammie*

Athanasios Vasilopoulos*

Julian Büchel*

Giacomo Camposampiero*

Manuel Le Gallo*

Malte J. Rasch

Abu Sebastian*

* External authors

ISCAS 2024

Abstract

Analog-Based In-Memory Computing (AIMC) inference accelerators can be used to efficiently execute Deep Neural Network (DNN) inference workloads. However, to mitigate accuracy losses due to circuit and device non-idealities, Hardware-Aware (HWA) training methodologies must be employed. These typically require significant information about the underlying hardware. In this paper, we propose two Post-Training (PT) optimization methods to improve accuracy after training is performed. For each crossbar, the first optimizes the conductance range of each column, and the second optimizes the input range, i.e., the Digital-to-Analog Converter (DAC) range. It is demonstrated that, when these methods are employed, the complexity of training and the amount of information required about the underlying hardware can be reduced with no notable change in accuracy (≤0.1%) when fine-tuning the pretrained RoBERTa transformer model on all General Language Understanding Evaluation (GLUE) benchmark tasks. Additionally, it is demonstrated that further optimizing the learned parameters post-training improves accuracy.
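The abstract does not spell out how the two ranges are optimized, but the general idea can be illustrated with a minimal NumPy sketch. Everything below is an assumption for illustration, not the paper's implementation: the toy crossbar model, the function names (aimc_matmul, calibrate), and the greedy grid search are all hypothetical. The sketch calibrates, on held-out data, (1) a per-column conductance range and (2) a shared input (DAC) clipping range by minimizing output error against the ideal floating-point matrix multiplication.

```python
import numpy as np

rng = np.random.default_rng(0)

def aimc_matmul(x, W, g_max, x_clip, g_levels=256, dac_levels=256):
    """Toy crossbar: quantize inputs to a clipped DAC range and weights to
    per-column conductance ranges, then perform an otherwise ideal matmul."""
    half_dac = dac_levels // 2
    xq = np.clip(x, -x_clip, x_clip)
    xq = np.round(xq / x_clip * half_dac) / half_dac * x_clip
    half_g = g_levels // 2
    Wq = np.clip(W, -g_max, g_max)            # g_max broadcasts per column
    Wq = np.round(Wq / g_max * half_g) / half_g * g_max
    return xq @ Wq

def calibrate(x_cal, W, scales=np.linspace(0.3, 1.2, 19)):
    """Post-training grid search: pick a conductance-range scale for each
    column, then a shared input/DAC clipping range, minimizing MSE against
    the ideal (float) output on calibration data."""
    y_ref = x_cal @ W
    g_base = np.abs(W).max(axis=0)            # per-column weight maxima
    x_base = np.abs(x_cal).max()

    # (1) Optimize the conductance range of each column independently.
    g_max = g_base.copy()
    for j in range(W.shape[1]):
        errs = []
        for s in scales:
            g_try = g_max.copy()
            g_try[j] = s * g_base[j]
            y = aimc_matmul(x_cal, W, g_try, x_base)
            errs.append(np.mean((y[:, j] - y_ref[:, j]) ** 2))
        g_max[j] = scales[int(np.argmin(errs))] * g_base[j]

    # (2) Optimize the shared input (DAC) clipping range.
    errs = [np.mean((aimc_matmul(x_cal, W, g_max, s * x_base) - y_ref) ** 2)
            for s in scales]
    x_clip = scales[int(np.argmin(errs))] * x_base
    return g_max, x_clip

# Usage: calibrate on held-out data, then reuse the ranges at inference time.
W = rng.standard_normal((64, 32))
x_cal = rng.standard_normal((256, 64))
g_max, x_clip = calibrate(x_cal, W)
print(f"mean column-range scale: {np.mean(g_max / np.abs(W).max(axis=0)):.2f}, "
      f"input clip: {x_clip:.2f}")
```

Because such a search needs only a small calibration set and a forward model of the hardware, it can be run after training, which is consistent with the paper's claim that HWA training can be simplified when these PT methods are applied.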

Related Publications

Towards Exact Gradient-based Training on Analog In-memory Computing

NeurIPS, 2024
Zhaoxian Wu*, Tayfun Gokmen*, Malte J. Rasch, Tianyi Chen*

Analog in-memory accelerators present a promising solution for energy-efficient training and inference of large vision or language models. While the inference on analog accelerators has been studied recently, the analog training perspective is under-explored. Recent studies …

Fast and robust analog in-memory deep neural network training

Nature Communications, 2024
Malte J. Rasch, Fabio Carta*, Omobayode Fagbohungbe*, Tayfun Gokmen*

Analog in-memory computing is a promising future technology for efficiently accelerating deep learning networks. While using in-memory computing to accelerate the inference phase has been studied extensively, accelerating the training phase has received less attention, despi…

Analog AI as a Service: A Cloud Platform for In-Memory Computing

SSE, 2024
Kaoutar El Maghraouir*, Kim Tran*, Kurtis Ruby*, Borja Godoy*, Jordan Murray*, Manuel Le Gallo-Bourdeau*, Todd Deshane*, Pablo Gonzalez*, Diego Moreda*, Hadjer Benmeziane*, Corey Liam Lammie*, Julian Büchel*, Malte J. Rasch, Abu Sebastian*, Vijay Narayanan*

This paper introduces the Analog AI Cloud Composer platform, a service that allows users to access Analog In-Memory Computing (AIMC) simulation and computing resources over the cloud. We introduce the concept of an Analog AI as a Service (AAaaS). AIMC offers a novel approach…
