Wednesday, September 24, 2025 - 02:00 pm
Online

DISSERTATION DEFENSE

Author: Md Hasibul Amin
Advisor: Dr. Ramtin Zand
Date: September 24, 2025
Time: 2:00 pm
Place: Teams Meeting

Abstract

There has been rapid growth in the computational demands of machine learning (ML) workloads in recent years. Conventional von Neumann architectures cannot keep up with the high cost of data movement between the processor and memory, a bottleneck well known as the memory wall problem. Researchers have turned to in-memory computing (IMC) as a solution, in which computation is performed inside memory devices such as SRAM, MRAM, and RRAM. Most commonly, the memory devices are arranged in a crossbar configuration, where the matrix-vector multiplication (MVM) operation is performed through the intrinsic parallelism of analog computation. Conventional IMC systems require high-power signal conversion blocks to interface the analog crossbars with digital processing units, hindering efficient computation. In this dissertation, we propose In-Memory Analog Computing (IMAC) architectures that perform MVM and nonlinear vector (NLV) operations consecutively using analog functional units, eliminating the need for costly signal conversions. Despite its advantages, computing an entire deep neural network (DNN) in the analog domain introduces critical usability and reliability challenges. This dissertation systematically investigates these challenges and presents a set of circuit-, system-, and architecture-level solutions to mitigate their impact. Furthermore, we develop a comprehensive simulation framework to enable cross-layer design and performance optimization of IMAC systems tailored to user-defined ML workloads. Our results demonstrate that IMAC can achieve significant energy and latency savings with negligible accuracy loss, making it a compelling direction for next-generation ML hardware acceleration.
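For context, the crossbar MVM operation mentioned in the abstract can be summarized as follows: each memory cell stores a conductance, input voltages are applied to the rows, and each column current accumulates the sum of voltage-conductance products by Ohm's and Kirchhoff's laws. The sketch below is a minimal, idealized model of this behavior, assuming ideal devices with no wire parasitics or device nonidealities; the conductance and voltage ranges are hypothetical values chosen only for illustration.

    import numpy as np

    # Idealized analog crossbar MVM sketch: cell (i, j) stores conductance
    # G[i][j]; applying input voltages V to the rows makes each column
    # accumulate current I_j = sum_i V_i * G[i][j] (Ohm's and Kirchhoff's
    # laws), i.e., a matrix-vector multiply in one analog step.
    rng = np.random.default_rng(0)

    G = rng.uniform(1e-6, 1e-4, size=(4, 3))  # cell conductances (S), hypothetical range
    V = rng.uniform(0.0, 0.5, size=4)         # row input voltages (V), hypothetical range

    I = V @ G  # column output currents (A): the analog MVM result
    print(I)

In a physical crossbar, all of these multiply-accumulate operations occur simultaneously in the analog domain, which is the intrinsic parallelism the abstract refers to.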