Although machine learning and artificial intelligence excel at repetitive processes and simple tasks, they consume vast amounts of data, which creates memory bottleneck issues: a system's memory resources become insufficient to meet data processing and storage demands. Under a memory bottleneck, transferring data back and forth between processors and memory consumes a significant amount of energy, particularly in large data centers.
Zand plans to address this issue using in-memory analog computing (IMAC), which reduces the processor-memory bottleneck by performing computations directly where the data resides, saving both time and energy. His team will develop a novel IMAC architecture and a corresponding framework for deploying machine learning workloads on it, eliminating the need for a separate memory and processor.
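To make the idea concrete, here is a minimal illustrative sketch (not Zand's actual architecture, and all numbers are hypothetical): one common form of in-memory analog computing stores neural-network weights as conductances in a crossbar array, so a matrix-vector multiply happens physically at the memory via Ohm's law (I = G·V) and Kirchhoff's current law (currents summing along each column), rather than by shuttling every weight to a separate processor.

```python
# Illustrative model of an analog crossbar multiply-accumulate.
# Weights live in the array as conductances; applying input voltages
# to the rows yields the full matrix-vector product as column currents,
# with no weight ever moving to a separate processor.

def crossbar_mvm(conductances, voltages):
    """Column output currents: I_j = sum_i G[i][j] * V[i]."""
    rows = len(conductances)
    cols = len(conductances[0])
    return [sum(conductances[i][j] * voltages[i] for i in range(rows))
            for j in range(cols)]

# Hypothetical 2x3 weight array (conductances) and row input voltages.
G = [[0.1, 0.2, 0.3],
     [0.4, 0.5, 0.6]]
V = [1.0, 2.0]

print(crossbar_mvm(G, V))  # approximately [0.9, 1.2, 1.5]
```

In a digital system, each of these multiply-accumulate operations would require fetching a weight from memory; in the analog crossbar, the entire product is computed in one step where the weights are stored, which is the source of the time and energy savings described above.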
Read the full article here. It discusses Dr. Zand's and Dr. Bakos's research into IMAC architectures.