
“It’s a miracle it works at all.” Not the most inspiring words from someone who helped define the latest DDR spec, but that’s the state of today’s memory systems. Closed eye diagrams and mV voltage swings are the topics of today’s electrical engineering podcast. Daniel Bogdanoff (@Keysight_Daniel) and Mike Hoffman sit down with Perry Keller to talk about the state of memory today and its inevitable march into the future.
Agenda:
00:00 Today’s guest is Perry Keller, who works a lot with standards committees and on making next-generation technology happen.
00:50 Perry has been working with memory for 15 years.
1:10 He has also done ASIC design and project management for software and hardware.
1:25 Perry is on the JEDEC board of directors.
JEDEC is one of the oldest standards bodies, possibly older than the IEEE.
1:50 JEDEC was established to create standards for semiconductors. This was an era when vacuum tubes were being replaced by solid state devices.
2:00 JEDEC started by working on instruction set standards
2:15 There are two main groups: a wide bandgap semiconductors group and a memory group.
3:00 Volatile memory vs. nonvolatile memory. An SSD is nonvolatile storage, like the storage in a phone. But if you look at a DIMM in a PC, that’s volatile.
3:40 Nonvolatile memory is everywhere, even in light bulbs.
4:00 Even a DRAM can hold its contents for quite some time. JEDEC had discussions about doing massive erases because spooks will try to recover data from it.
DRAM uses capacitors for storage, so the colder they are the longer they hold their charge.
4:45 DRAM is the last vestige of the classical wide, single-ended parallel bus. “It’s a miracle that it works at all.”
5:30 Perry showed a friend a GDDR5 bus and challenged him to get an eye diagram on it, and he couldn’t.
6:10 Even though DDR signals look awful, the system depends on reliable data transfer. The timing and clocking are set up to deal with all of the various factors.
7:00 DDR specifications continue to march forward. There’s always something going on in memory.
8:00 Perry got involved with JEDEC through a conversation with the board chairman.
8:35 When DDR started, 144 MT/s (megatransfers per second) was considered fast. But DDR5 has an end-of-life goal of 6.5 GT/s on an 80+ bit wide, single-ended parallel bus (see the bandwidth sketch after the agenda).
9:05 What are the big drivers for memory technology? Power. Power is everything. LPDDR – low power DDR – is a big push right now.
9:30 If you look at the memory ecosystem, the big activity is in mobile. Server applications are becoming focused on the cloud, but the new technology and investment is in mobile.
10:00 If you look at a DRAM, you can divide it into three major categories. Mainstream PC memory, low power memory, and GDDR. GDDR is graphics memory. The differences are in both power and cost.
For example, LPDDR uses static designs: you can clock it down to DC, which you can’t do with normal DDR.
The first DDR was essentially TTL compatible. Now, we’re looking at 1.1V power supplies and voltage swings in the mV.
Semiconductor technology is driving the voltages down to a large degree.
11:45 DRAM and GDDR are a big deal for servers.
A company from China tried to get JEDEC to increase the operating temperature range of DRAMs by 10 °C, so data centers could run warmer and spend less energy on cooling. China fires up one new coal-fired power plant per week to meet growing demand, and they found this change in the temperature specs could cut that to only 3 per month.
13:10 About 5 years ago, the industry realized that simply increasing I/O speeds wouldn’t help system performance that much, because the core memory access time hasn’t changed in 15 years. The I/O rate has increased, but basically by pulling more bits at once out of the core and shifting them out (see the prefetch sketch after the agenda). The latency is what really hurts at a system level.
14:15 Development teams say that their entire budget for designing silicon is paid for out of smaller electric bills.
15:00 Wide bandgap semiconductors are happy running at very high temperatures. If these temperatures end up in the data centers, you’ll have to have moon suits to access the servers.
16:30 Perry says there’s more interesting stuff going on in computing than he’s seen in his whole career.
The interface between the different levels of the memory hierarchy is not very smooth. The magic in a spinning disk is in the cache-optimizing algorithms. That whole 8-level structure is being rethought.
18:00 Von Neumann architectures are not constraining people any more.
18:10 Anything that happens architecturally in the computing world affects and is affected by memory.
22:10 When we move from packaged semiconductors to 3D silicon, we will see the end of DDR. The first successful step is called High Bandwidth Memory (HBM), which is essentially a replacement for GDDR5.
23:00 To move to a new DDR spec, you basically have to double the burst size (again, see the prefetch sketch after the agenda).
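
Bandwidth sketch (for the 8:35 numbers): a rough back-of-the-envelope Python calculation, assuming a 64-bit data payload per transfer (the 80+ bit bus width also carries ECC and other signals); the helper function and figures are illustrative, not taken from the episode.

# Peak bandwidth = transfers per second * bits per transfer / 8 bits per byte.
def peak_bandwidth_gb_s(transfer_rate_mt_s, data_bits=64):
    return transfer_rate_mt_s * 1e6 * data_bits / 8 / 1e9

print(peak_bandwidth_gb_s(144))   # early DDR era: ~1.2 GB/s
print(peak_bandwidth_gb_s(6500))  # DDR5 end-of-life target: ~52 GB/s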
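
Prefetch sketch (for the 13:10 and 23:00 points): a minimal Python illustration of how the I/O rate keeps climbing while the core array access time stays roughly flat, because each generation roughly doubles the prefetch/burst length per data pin. The core cycle time and resulting rates below are assumed round numbers, not figures quoted in the episode.

core_cycle_ns = 10  # assumed core array cycle time, roughly flat across generations

# Approximate prefetch depth: bits fetched from the core per data pin per access.
prefetch = {"DDR": 2, "DDR2": 4, "DDR3/DDR4": 8, "DDR5": 16}

for gen, n in prefetch.items():
    io_rate_mt_s = n / core_cycle_ns * 1000  # per-pin transfer rate in MT/s
    print(f"{gen}: prefetch {n}n -> ~{io_rate_mt_s:.0f} MT/s per pin, core latency still ~{core_cycle_ns} ns")

The latency line barely moves, which is why Perry says latency is what really hurts at the system level.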