DDR5 and 3D Silicon – #25

DDR5 marks a huge shift in thinking for traditional high-tech memory and IO engineering teams. The implications of this are just now being digested by the industry, and opening up doors for new technologies. In today’s electrical engineering podcast, Daniel Bogdanoff and Mike Hoffman sit down with Perry Keller to discuss how engineers should “get their game on” for DDR5.

“You reach certain critical thresholds that are driven by the laws of physics and material science” – Perry Keller


Audio:

Sign up for the DDR5 Webcast with Perry on April 24, 2018!

Agenda:

00:20 Getting your game on with DDR5

LPDDR5 targets 6.4 gigatransfers per second (GT/s).

“You reach certain critical thresholds that are driven by the laws of physics and material science” – Perry Keller

1:00 We’re running into the limits of what physics allows

2:00 DDR3 at 1600 MT/s – the timing budget was starting to close.

2:30 With DDR5, a whole new set of concepts needs to be embraced.

3:00 DesignCon is the trade show for high-speed digital design and signal integrity – Mike is famous for his picture with ChipHead.

4:00 Rick Eads talked about DesignCon in the PCIe electrical engineering podcast

4:40 The DDR5 paradigm shift is being slowly digested

4:50 DDR (double data rate) introduced source synchronous clocking

All previous memory generations had a system clock that governed when data was transferred.

Source synchronous clocking is when the device driving the data also drives the clock that travels with it. Source synchronous clocking is also known as forward clocking.

This was the start of high speed digital design.
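
As a rough illustration of the forwarded-clock idea described above (not something from the episode; the names and timing values are invented), here is a small Python sketch in which the transmitter sends a strobe alongside its data and the receiver latches on strobe edges, so skew relative to any shared system clock stops mattering:

```python
# Illustrative sketch of source-synchronous (forwarded-clock) capture.
# The transmitter drives a data bit together with a strobe toggle; the
# receiver latches data whenever it sees a strobe transition, so skew
# relative to a shared system clock no longer matters directly.

from dataclasses import dataclass

@dataclass
class Transfer:
    time_ns: float  # when the transmitter drove the bus (its own clock domain)
    strobe: int     # forwarded strobe level (toggles each transfer, like DQS)
    data: int       # data bit driven together with the strobe

def transmit(bits, period_ns=1.25, skew_ns=0.3):
    """Drive bits with a forwarded strobe; skew shifts the whole burst."""
    strobe, burst = 0, []
    for i, bit in enumerate(bits):
        strobe ^= 1                          # strobe toggles once per transfer
        burst.append(Transfer(skew_ns + i * period_ns, strobe, bit))
    return burst

def receive(burst):
    """Latch data on every strobe transition, ignoring absolute time."""
    captured, last = [], None
    for xfer in burst:
        if xfer.strobe != last:              # strobe edge -> capture the data
            captured.append(xfer.data)
            last = xfer.strobe
    return captured

bits = [1, 0, 1, 1, 0, 0, 1, 0]
# Capture works regardless of how much the whole burst is skewed.
assert receive(transmit(bits, skew_ns=0.3)) == bits
assert receive(transmit(bits, skew_ns=5.0)) == bits
```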

At 1600 megatransfers per second (MT/s), this all started falling apart.

For DDR5, you have to go from high speed digital design concepts to concepts in high speed serial systems, like USB.

The reason is that you can't control the timing as tightly. So, you have to count on where the data eye is.

As long as the receiver can follow where that data eye is, you can capture the information reliably.
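
A minimal sketch of that idea (purely illustrative, not any real controller's training algorithm): sweep the sampling delay, note which settings return correct data, and park the sampling point in the middle of the passing window – the center of the eye.

```python
# Illustrative read-training sweep: find the window of sampling delays
# that pass a read test and center the sampling point in it (the "eye").

def read_test_passes(delay_ps, eye_start_ps=120, eye_stop_ps=280):
    """Stand-in for a real read-back test at one candidate sampling delay."""
    return eye_start_ps <= delay_ps <= eye_stop_ps

def train_sampling_point(step_ps=10, span_ps=400):
    passing = [d for d in range(0, span_ps, step_ps) if read_test_passes(d)]
    return (passing[0] + passing[-1]) // 2   # park in the middle of the eye

print("sampling point centered at", train_sampling_point(), "ps")  # 200 ps
```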

DRAM doesn't use an embedded clock because of the latency it would add. An embedded clock also carries a lot of overhead, which reduces channel efficiency.

9:00 DDR is single ended for data, but over time more signals become differential.

You can’t just drop High Speed Serial techniques into DDR and have it work.

The problem is, the eye is closed. The old techniques won’t work anymore.

10:45 DDR is the last remaining wide parallel communication system.

There’s a controller on one end, which is the CPU. The other end is a memory device.

11:15 With DDR5, the eye is closed. So, the receiver will play a bigger part. It's important to understand the concepts of equalizing receivers.

You have to think about how the controller and the receiver work together.
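
To give a feel for what an equalizing receiver does, here is a one-tap decision-feedback equalizer sketch in Python. The toy channel model and tap value are invented for illustration and are not from the DDR5 spec; the point is that the receiver subtracts the interference left over from its previous decision before slicing the current sample.

```python
# One-tap decision-feedback equalizer (DFE) sketch. The toy channel
# smears each symbol into the next one (inter-symbol interference) so
# badly that a plain slicer fails; the DFE cancels the smear using its
# own previous decision before slicing the current sample.

def channel(symbols, isi=1.2):
    """Toy channel: each received sample = symbol + isi * previous symbol."""
    out, prev = [], 0.0
    for s in symbols:
        out.append(s + isi * prev)
        prev = s
    return out

def dfe_receive(samples, tap=1.2):
    """Subtract the tail of the previous decision, then slice."""
    decisions, prev = [], 0.0
    for x in samples:
        corrected = x - tap * prev           # cancel ISI from the last bit
        bit = 1.0 if corrected > 0 else -1.0
        decisions.append(bit)
        prev = bit
    return decisions

tx = [1.0, -1.0, -1.0, 1.0, 1.0, -1.0, 1.0, -1.0]
rx = channel(tx)
plain = [1.0 if x > 0 else -1.0 for x in rx]  # un-equalized slicer
assert plain != tx                            # the closed eye causes errors
assert dfe_receive(rx) == tx                  # the DFE recovers every bit
```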

12:20 Historically, the memory folks and IO folks have been different teams. The concepts were different. Now, those teams are merging.

13:00 DDR5 is one of the last steps before people have to start grappling with communication theory – modulation, etc.

14:10 Most PCs now will have two channels of communication that are dozens or hundreds of bits wide.

14:45 What is 3D silicon?

If 3D silicon doesn’t come through, we’ll have to push more bits through copper.

3D silicon is nice because you can pack more into a smaller space.

3D silicon is multiple chips bonded together. Vias running through the chips connect them instead of traces.

The biggest delay for 3D silicon is that it turns the entire value delivery system on its head.

About 7 years ago, JEDEC started working on Wide I/O.

17:15 What's the difference between 3D silicon and having it all built right into the processor?

It's the difference between working in two dimensions and three dimensions. If you go 3D, you can minimize footprint and connections.

18:45 In flash memory, the big deal has been building multiple active layers.

19:45 The ability to stack would be useful for mobile.

21:45 Where is technology today with DDR?

DDR4 is now mainstream, and JEDEC started on DDR5 a year ago (2017).

Memory, DDR5+, and JEDEC – #24

“It’s a miracle it works at all.” In this electrical engineering podcast, we discuss the state of memory today and its inevitable march into the future.

Hosted by Daniel Bogdanoff and Mike Hoffman, EEs Talk Tech is a twice-monthly engineering podcast discussing tech trends and industry news from an electrical engineer’s perspective.

“It’s a miracle it works at all.” Not the most inspiring words from someone who helped define the latest DDR spec. But, that’s the state of today’s memory systems. Closed eyes and mV voltage swings are the topic of today’s electrical engineering podcast. Daniel Bogdanoff (@Keysight_Daniel) and Mike Hoffman sit down with Perry Keller to talk about the state of memory today and its inevitable march into the future.

Agenda:

00:00 Today's guest is Perry Keller, who works a lot with standards committees and making next generation technology happen.

00:50 Perry has been working with memory for 15 years.

1:10 He has also done ASIC design and project management for software and hardware.

1:25 Perry is on the JEDEC board of directors.

JEDEC is one of the oldest standards bodies, maybe older than the IEEE.

1:50 JEDEC was established to create standards for semiconductors. This was an era when vacuum tubes were being replaced by solid state devices.

2:00 JEDEC started by working on instruction set standards.

2:15 There are two main groups: a wide bandgap semiconductor group and a memory group.

3:00 Volatile memory vs. nonvolatile memory. An SSD is nonvolatile storage, like the storage in a phone. But if you look at a DIMM in a PC, that's volatile.

3:40 Nonvolatile memory is everywhere, even in light bulbs.

4:00 Even a DRAM can hold its contents for quite some time. JEDEC had discussions about doing massive erases because spooks might try to recover data from it.

DRAM uses capacitors for storage, so the colder they are, the longer they hold their charge.
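
A back-of-the-envelope sketch of that point, with every number invented for illustration (real retention depends heavily on the process): treat the cell as a capacitor drained by leakage, where leakage falls roughly by half for each ~10 °C drop in temperature.

```python
# Back-of-the-envelope DRAM retention sketch; every number is illustrative.
# A cell is a tiny capacitor drained by leakage, and leakage roughly
# halves per ~10 C drop, so colder cells keep their charge much longer.

def retention_s(cap_f, margin_v, leak_a):
    """Time until leakage bleeds off the margin the sense amp needs."""
    return cap_f * margin_v / leak_a

C = 25e-15          # assumed ~25 fF storage capacitor
dV = 0.3            # assumed sense margin, volts
leak_85c = 50e-15   # assumed leakage at 85 C, amps

for temp_c in (85, 45, 5, -35):
    leak = leak_85c / 2 ** ((85 - temp_c) / 10)   # crude "halve per 10 C"
    print(f"{temp_c:>4} C: retention ~ {retention_s(C, dV, leak):.2f} s")
```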

4:45 DRAM is the last vestige of the classical wide single ended parallel bus. “It’s a miracle that it works at all.”

5:30 Perry showed a friend a GDDR5 bus and challenged him to get an eye on it and he couldn’t.

6:10 Even though DDR signals look awful, the system depends on reliable data transfer. The timing and clocking are set up in a way that deals with all of the various factors.

7:00 DDR specifications continue to march forward. There’s always something going on in memory.

8:00 Perry got involved with JEDEC through a conversation with the board chairman.

8:35 When DDR started, 144 MT/s (megatransfers per second) was considered fast. But DDR5 has an end-of-life goal of 6.5 GT/s on an 80+ bit wide single ended parallel bus.
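
For a sense of scale, the raw bandwidth arithmetic looks roughly like this (illustrative only; it ignores protocol overhead and assumes the usual 64 data bits per DIMM, with the remaining wires carrying ECC and strobes):

```python
# Rough peak-bandwidth arithmetic for the figures quoted above.
# Illustrative only: ignores refresh, bus turnaround, and protocol
# overhead, and assumes 64 of the 80+ wires carry data.

def peak_gb_per_s(transfers_per_s, data_bits):
    return transfers_per_s * data_bits / 8 / 1e9  # bytes/s -> GB/s

print(peak_gb_per_s(144e6, 64))   # early days: ~1.2 GB/s of raw data
print(peak_gb_per_s(6.5e9, 64))   # DDR5 end-of-life goal: ~52 GB/s
```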

9:05 What are the big drivers for memory technology? Power. Power is everything. LPDDR – low power DDR – is a big push right now.

9:30 If you look at the memory ecosystem, the big activity is in mobile. Server applications are becoming focused on the cloud, but the new technology and investment is in mobile.

10:00 If you look at DRAM, you can divide it into three major categories: mainstream PC memory, low power memory, and GDDR (graphics memory). The differences are in both power and cost.

For example, LPDDR uses static designs. You can clock it down to DC, which you can't do with normal DDR.

The first DDR was essentially TTL compatible. Now, we’re looking at 1.1V power supplies and voltage swings in the mV.

Semiconductor technology is driving the voltages down to a large degree.

11:45 DRAM and GDDR are a big deal for servers.

A company from China tried to get JEDEC to increase the operating temperature range of DRAMs by 10 °C. China fires up one new coal-fired power plant per week to meet growing demand, and they found they could cut that down to only 3 per month with this change in temperature specs.

13:10 About 5 years ago, the industry realized that simply increasing I/O speeds wouldn’t help system performance that much because the core memory access time hasn’t changed in 15 years. The I/O rate has increased, but basically they do that by pulling more bits at once out of the core and shifting them out. The latency is what really hurts at a system level.
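
A small numeric sketch of that trade-off (the rates and the ~14 ns core access time are illustrative; the prefetch lengths are the commonly cited ones): the time to first data barely moves across generations, while the prefetch and pin rate carry the bandwidth.

```python
# Why a faster I/O rate alone doesn't fix latency (numbers illustrative).
# The core still needs roughly the same time to produce data; higher pin
# rates come from prefetching more bits per core access and shifting
# them out faster, which raises bandwidth but not responsiveness.

core_access_ns = 14.0   # assumed row-to-data time, roughly flat for years

for name, rate_gt_s, prefetch_bits in [("DDR3-1600", 1.6, 8),
                                       ("DDR4-3200", 3.2, 8),
                                       ("DDR5-6400", 6.4, 16)]:
    burst_ns = prefetch_bits / rate_gt_s   # time to shift one prefetch out
    print(f"{name}: first data after ~{core_access_ns:.0f} ns, "
          f"then {prefetch_bits} bits per pin drain in {burst_ns:.1f} ns")
```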

14:15 Development teams say that their entire budget for designing silicon is paid for out of smaller electric bills.

15:00 Wide bandgap semiconductors are happy running at very high temperatures. If these temperatures end up in the data centers, you’ll have to have moon suits to access the servers.

16:30 Perry says there's more interesting stuff going on in computing now than he's seen in his whole career.

The interface between the different levels of the memory hierarchy is not very smooth. The magic in a spinning disk is in the cache-optimizing algorithms. That whole 8-level structure is being rethought.

18:00 Von Neumann architectures are not constraining people any more.

18:10 Anything that happens architecturally in the computing world affects and is affected by memory.

22:10 When we move from packaged semiconductors to 3D silicon, we will see the end of DDR. The first successful step is called high bandwidth memory (HBM), which is essentially a replacement for GDDR5.

23:00 To move to a new DDR spec, you basically have to double the burst size.
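
A quick sanity check of that statement, under the commonly described DDR4-to-DDR5 channel arrangement (treat the exact widths as assumptions here): doubling the burst length while halving the channel width keeps each burst equal to one 64-byte cache line.

```python
# Burst-size arithmetic behind that statement (widths are assumptions,
# per the commonly described DDR4 -> DDR5 channel arrangement).

CACHE_LINE_BYTES = 64   # what the CPU actually fetches at a time

configs = {
    "DDR4 channel":     {"bits": 64, "burst_length": 8},   # 64*8/8  = 64 B
    "DDR5 sub-channel": {"bits": 32, "burst_length": 16},  # 32*16/8 = 64 B
}

for name, cfg in configs.items():
    bytes_per_burst = cfg["bits"] * cfg["burst_length"] // 8
    assert bytes_per_burst == CACHE_LINE_BYTES
    print(f"{name}: {bytes_per_burst} bytes per burst")
```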