DRAM: Architectures, Interfaces, and Systems
Bruce Jacob, University of Maryland, College Park
Half-day tutorial

Since the early 1980s, we have enjoyed an exponential increase in microprocessor performance of roughly 50% per year. This growth has enabled many technologies that we take for granted today, including affordable personal computing and networks fast enough to support high-bandwidth interactive use (e.g., the World Wide Web). In addition, the rapid turnover from high performance to obsolescence ensures a generous supply of microprocessors that are no longer competitive in the desktop arena but are quite adequate for embedded-system needs. These processors cannot command high-performance price premiums and are instead sold for mere dollars apiece, enabling high-performance embedded applications such as digital cellular phones, portable CD and MP3 players, and high-resolution video games.

However, this phenomenal pace of system-performance improvement is expected to come to a grinding halt within the next decade. The memory and processor components of computer systems are both improving in performance, but at significantly different rates. It has been predicted that, within the next decade, processors will be so far ahead of DRAM systems that overall system performance will be limited by memory, which improves at only about 7% per year.

It is therefore extremely important to understand DRAM systems, their implementation, and their place within a system framework. This tutorial addresses that need: it discusses DRAM internals, their interfaces, their performance, and their use within high-performance systems as well as embedded systems.
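A minimal sketch of the arithmetic behind this divergence, using the growth rates stated above (50% per year for processors, 7% per year for DRAM); the function name is illustrative, not from the tutorial:

```python
# Compound growth rates taken from the abstract: the rates are
# approximate historical trends, not measured values.
CPU_GROWTH = 1.50   # ~50% performance improvement per year
DRAM_GROWTH = 1.07  # ~7% performance improvement per year

def performance_gap(years: int) -> float:
    """Factor by which processor performance outpaces DRAM after `years`."""
    return (CPU_GROWTH / DRAM_GROWTH) ** years

# The relative gap widens by about 1.4x each year, so after a decade
# processors would be roughly 30x further ahead of memory than today.
for y in (1, 5, 10):
    print(f"after {y:2d} year(s): processor/memory gap grows {performance_gap(y):5.1f}x")
```

Compounding is what makes the gap dramatic: even a modest annual difference in growth rates produces an order-of-magnitude divergence within ten years, which is why memory, not the processor, is expected to dominate system performance.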