Jung Ho Ahn

A RoMe paper accepted at HPCA 2026 (along with three other papers)

Delighted to announce that we have a second paper accepted at HPCA 2026: RoMe.

The Problem
There is a fundamental granularity mismatch in modern AI hardware. While HBM is the de facto standard for AI accelerators, its 32-byte access granularity has remained unchanged for over a decade. In contrast, Large Language Models (LLMs) operate on hidden states and weight matrices that require streaming contiguous blocks ranging from kilobytes to megabytes.
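To make the mismatch concrete, here is a back-of-the-envelope sketch of how many 32-byte transactions a single contiguous transfer fragments into; the transfer sizes are hypothetical LLM-scale examples, not figures from the paper:

```python
# Illustrative arithmetic only: how many 32 B transactions a contiguous
# transfer fragments into under a conventional HBM interface.
ACCESS_GRANULARITY = 32  # bytes per HBM transaction

def num_transactions(block_bytes: int) -> int:
    """Transactions needed to move one contiguous block."""
    return (block_bytes + ACCESS_GRANULARITY - 1) // ACCESS_GRANULARITY

# Hypothetical LLM-scale transfer sizes (not taken from the paper):
for label, size in [("8 KB activation", 8 * 1024),
                    ("1 MB weight tile", 1 << 20)]:
    print(f"{label}: {num_transactions(size):,} transactions")
# 8 KB activation: 256 transactions
# 1 MB weight tile: 32,768 transactions
```
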
The Solution
We propose RoMe (Row-granularity-access Memory system). RoMe shifts DRAM access to row granularity, effectively eliminating the complex column and bank group structures required for fine-grained access.
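As a conceptual illustration of what this simplification buys, compare the command stream needed to read one 1 KB region at column granularity versus row granularity. This is a simplified sketch, not the paper's actual command protocol, and the timing details are elided:

```python
# Conceptual sketch, not the paper's actual protocol: command streams for
# reading one 1 KB region at column granularity vs. row granularity.
REGION_BYTES = 1024
COLUMN_BURST = 32  # bytes returned per conventional RD command

def conventional_stream() -> list[str]:
    # One ACT, then one RD per 32 B column; each RD is subject to
    # bank-group timing rules (e.g., tCCD_L vs. tCCD_S) that the
    # scheduler must track across commands.
    return ["ACT row"] + [f"RD col{i}" for i in range(REGION_BYTES // COLUMN_BURST)]

def row_granularity_stream() -> list[str]:
    # With row-granularity access, activating the row streams it out;
    # no column addresses or bank-group interleaving to schedule.
    return ["ACT row (stream)"]

print(len(conventional_stream()), "commands vs.", len(row_granularity_stream()))
# 33 commands vs. 1
```
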

By simplifying the interface, we can free up command/address pins and repurpose them to create additional memory channels. The result is a 12.5% increase in bandwidth with minimal hardware overhead. Think of it as a “huge-page only” system for the terabyte-scale memory era.
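A quick sanity check of the 12.5% figure: one extra channel per eight existing ones yields exactly that gain. The channel counts below are assumptions for illustration, not the configuration evaluated in the paper:

```python
# Back-of-the-envelope check of the 12.5% bandwidth figure; the channel
# counts below are assumptions, not the paper's configuration.
baseline_channels = 16  # channels on a conventional stack (assumed)
extra_channels = 2      # channels formed from freed command/address pins (assumed)
print(f"gain = {extra_channels / baseline_channels:.1%}")  # -> 12.5%
```
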

Full credit goes to my students: Hwayong Nam, Seungmin Baek, Jumin Kim, and Michael Jaemin Kim.

See you all in Sydney!

RoMe

Title

RoMe: Row Granularity Access Memory System for Large Language Models

Authors

Hwayong Nam, Seungmin Baek, Jumin Kim, Michael Jaemin Kim, and Jung Ho Ahn

Abstract

Modern HBM-based memory systems have evolved over generations while retaining cache-line-granularity accesses. Preserving this fine granularity necessitated the introduction of bank groups and pseudo channels. These structures expand the set of timing parameters and add control overhead, significantly increasing memory controller scheduling complexity. Large language models (LLMs) now dominate deep learning workloads, streaming contiguous data blocks ranging from several kilobytes to megabytes per operation. In a conventional HBM-based memory system, these transfers are fragmented into hundreds of 32B cache-line transactions, forcing the memory controller to employ unnecessarily intricate scheduling and leading to growing inefficiency.

To address this problem, we propose RoMe. RoMe accesses DRAM at row granularity and removes columns, bank groups, and pseudo channels from the memory interface. This design simplifies memory scheduling and shrinks the per-channel command/address interface, requiring fewer pins per channel. The freed pins are aggregated to form additional channels, increasing overall bandwidth by 12.5% with minimal extra pins. RoMe demonstrates how memory scheduling logic can be significantly simplified for representative LLM workloads, and presents an alternative approach for next-generation HBM-based memory systems that achieves increased bandwidth with minimal hardware overhead.