The Huge Challenge of Testing USB 3.2 – #33

USB 3.2 doubles the data rate of previous USB specs, but makes the testing process significantly harder. Find out why in this electrical engineering podcast.

USB 3.2 testing is darn hard! We talk compliance test specs, USB 3.2 testing BKMs, and pre-spec silicon. Guest Jit Lim sits down with Mike Hoffman and Daniel Bogdanoff to talk about the new difficulties engineers are facing as they develop USB 3.2 silicon.

 

Agenda:

In the last electrical engineering podcast, we talked about how USB 3.2 runs in x2 mode (“by two”)

This means there’s a lot of crosstalk. The USB Type-C connector is great, but its small size and fast edges mean crosstalk is a serious concern.

When we test USB, we want to emulate real-world communications. This means you have to check, connect, and capture signals from four lanes.

You always have to do this for Thunderbolt testing, too.

Early silicon creators and early adopters are already creating IP and chips for a spec that isn’t released yet.

2:00 They’re testing based on the BKM (Best Known Method)

3:30 Jit was just at Keysight World Japan, where many people presented BKMs for current technologies. Waiting for a test spec to be released is no excuse for delaying work on a technology.

4:50 How many companies are actually developing USB 3.2 products? The answer isn’t straightforward – the ecosystem is very complex and there are multiple vendors for a single system (like a cable).

6:30 Many USB silicon vendors will develop an end-product and get it certified to prove that their silicon will work. They then sell the silicon and IP to other companies for use in their products.

7:50 Daniel listened to an interesting podcast about how Monoprice reverse engineers complex products and sells them for cheaper:
https://www.npr.org/sections/money/2014/11/28/366793693/episode-586-how-stuff-gets-cheaper

9:40 There are some BNC cables at the Keysight Colorado Springs site that were literally wire pulled and built in the building.

10:00 Has anything changed as USB technology advances? There are a lot of new challenges: retimers, multiple test modes, and more.

Testing retimers is nontrivial; they are full receivers and full transmitters.

11:30 When a new spec is developed, what does that look like? How far does the test group go when setting a new spec?

The spec doesn’t cover how to test a device; it just defines what the device should do.

Then, there’s a compliance test specification (CTS). This is developed by a test group that determines how things should be tested.

So, there are two groups: the first asks “what should the spec be?” and the second asks “how do we test that spec?”

13:30 How many people are testing USB 3.2, even though the compliance test specification isn’t developed yet? No products are shipping yet, but there is a lot of activity!

14:30 What are the main challenges? Basics. When you have 10 Gbps over copper on a PCB, people are failing spec! Some devices pass only intermittently, especially over long cables and traces.

15:45 Cheap PCBs make things even more tricky. So, there’s very sophisticated transmitter equalization and even more sophisticated receiver equalization. It’s crucial to keep using low-cost PCB materials and processes to keep the overall end-product cost low. Using higher-end materials would dramatically increase the cost of consumer products.

17:30 The first TV Mike bought was after his internship at Intel. He bought a $30-ish 1080i TV for $1600. Now, you couldn’t give away that TV.

18:30 Stupid questions for Jit:
What is your favorite national park and why?
What is your favorite PCB material and why?


DDR5 Rx Testing is a Whole New Ballgame – #28

Receiver testing (Rx) was never a concern for DDR design. Until now. The margin for error ran out, and now Rx testing is getting standardized. We sit down with Stephanie Rubalcava to explore the challenges of this new ground.



Agenda:

1:00
This is the first time in the industry that high-accuracy, standardized receiver measurements need to be done

2:20
DDR is very different from traditional memory in terms of testing

3:10
Process of getting specs defined

3:50
What a DDR receiver test (DDR Rx Test) looks like

4:50
Even being just 100 mV off when testing can make a part appear to fail

5:20
The BERT sends out a signal to test the channel, but what’s really being tested is the DIMM and device’s ability to receive data under certain conditions

6:30
Receiver types across different devices? There’s a DQS data strobe signal and a DQ data signal. There are also command and address lines in DDR.

6:50
For Rx testing, we’re calibrating the signal going into the receiver

7:30
JEDEC develops a lot of the testing standards

8:10
Two components of test standards: compliance and characterization. Compliance asks “do I meet the spec?” Characterization asks “how well does my system perform, and where is my fail point?”

9:35
Receiver testing as a whole is a challenge for engineers

They need new kinds of calibration, DDR fixtures, and tests.

12:20
DDR transmitters (DDR Tx) are progressing with DDR5, as are receivers. DDR Tx testing has a history going all the way back to DDR1.

There are similar specifications for characteristics of DDR transmitters and DDR receivers.

13:20
DDR Transmitter testing is at “the ball of the part” and checks for signal characteristics.


Secret Specs, LPDDR5, and Interposers – #26

Keeping specs secret is just part of the job. Getting a usable, working spec is another. Learn why JEDEC guards a spec, the basic DDR architecture, and geek out with us about the challenges of probing DDR.

Hosted by Daniel Bogdanoff and Mike Hoffman, EEs Talk Tech is a twice-monthly engineering podcast discussing tech trends and industry news from an electrical engineer’s perspective.

Keeping specs secret is just part of the job. Getting a usable, working spec is another. We sat down with Jennie Grosslight to learn why JEDEC guards a spec, the basic DDR architecture, and geek out  about the challenges of probing DDR.



Agenda:

1:00
How are electrical engineering and protocol specifications defined?

2:00
Bigger companies tend to drive specifications because they can afford to put money into new products

Sometimes small or midsize companies with an idea can make something new happen, but they have to push it

2:50
Most memory technologies have a couple players:
1. The chipset and the memory controller industry
2. The actual devices that store data (DRAM)

3:30
There’s a tremendous amount of work between all the players to make all the parts work together.

5:00
Why JEDEC keeps information about new products private as they’re being developed:
If you spread your information too wide then you can get a lot of misinformation. Fake news!
Early discussions also might not resemble the end product

6:20
DDR5, LPDDR, and 3D silicon die stacking are new and exciting in memory

7:00
We keep pushing physics to new edges

7:20
Heat management in 3D silicon is a big challenge

8:20
LPDDR5 is the new low power memory for devices like cell phones and embedded devices

9:10
5G devices will likely depend on low power memory

10:20
Once the RF challenges of 5G are figured out there will be even more challenges on the digital side. Systems have to deal with large bandwidths and low latencies

11:10
Higher performance and lower power is driving development of LPDDR5

It will be interesting to see if improvements are made in jumps or very slowly

12:00
Dropping voltage swing and increasing speed both make the eye smaller
Making the eye smaller makes you more vulnerable to crosstalk

12:20 – Completely closed eyes for DDR5

13:00
How to probe DDR?
We use a lot of simulation because the circuits are so sensitive

14:20
Crosstalk is often a problem when making DDR and LPDDR measurements

14:50
Economics drives everything so new technology is often based on existing systems

15:40
What comes next is up to who comes up with the best idea

16:40
What will drive change is when the existing materials can no longer meet performance

17:50
Power is important for big data farms as well as cell phones

19:50
GDDR and DDR

21:00
Chip ranks on a DIMM

The pieces share a common data bus so you need to know the order to properly test

24:20
DIMM interposer used for logic measurements for servers

25:50
With a scope, a ball grid array (BGA) interposer is used under a device, or the pins are probed directly

Oscilloscope interposers are available that work similarly to the logic analyzer interposers

The logic analyzer looks at all the signals at once, typically the oscilloscope only looks at a few

28:10
When testing you want to validate that the device followed the protocal in the right sequence

29:10
Data rates of DDR

DDR5 is supposed to get to 6400 MT/s
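
As a quick sanity check on that number, peak bandwidth follows directly from the transfer rate (the 64-bit bus width below is a standard DIMM assumption, not stated in the episode):

```python
# Peak DDR bandwidth = transfer rate x bus width.
# 64-bit data bus is an assumed standard DIMM width.
transfers_per_sec = 6400e6       # 6400 MT/s, the DDR5 target mentioned
bytes_per_transfer = 64 // 8     # 64-bit bus -> 8 bytes per transfer

peak_bytes_per_sec = transfers_per_sec * bytes_per_transfer
print(f"{peak_bytes_per_sec / 1e9:.1f} GB/s")  # 51.2 GB/s per channel
```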

 

DDR5 and 3D Silicon – #25

DDR5 marks a huge shift in thinking for traditional high-tech memory and IO engineering teams. The implications of this are just now being digested by the industry, and opening up doors for new technologies. In today’s electrical engineering podcast, Daniel Bogdanoff and Mike Hoffman sit down with Perry Keller to discuss how engineers should “get their game on” for DDR5.

“You reach certain critical thresholds that are driven by the laws of physics and material science” – Perry Keller



Sign up for the DDR5 Webcast with Perry on April 24, 2018!

Agenda:

00:20 Getting your game on with DDR5

LPDDR5 6.4 gigatransfers per second (GT/s)

“You reach certain critical thresholds that are driven by the laws of physics and material science” – Perry Keller

1:00 We’re running into the limits of what physics allows

2:00 DDR3 at 1600 – the timing budget was starting to close.

2:30  With DDR5, a whole new set of concepts need to be embraced.

3:00 DesignCon is the trade show – Mike is famous for his picture with ChipHead

4:00 Rick Eads talked about DesignCon in the PCIe electrical engineering podcast

4:40 The DDR5 paradigm shift is being slowly digested

4:50 DDR (double data rate) introduced source synchronous clocking

All the previous memories had a system clock that governed when data was transferred.

Source synchronous clocking is when the device sending the data also sends the clock. Source synchronous clocking is also known as forwarded clocking.

This was the start of high speed digital design.

At 1600 Megatransfers per second (MT/s), this all started falling apart.

For DDR5, you have to go from high speed digital design concepts to concepts in high speed serial systems, like USB.

The reason is that you can’t control the timing as tightly. So, you have to track where the data eye is.

As long as the receiver can follow where that data eye is, you can capture the information reliably.

DRAM doesn’t use an embedded clock due to latency. There’s a lot of overhead, which reduces channel efficiency

9:00
DDR is single ended for data, but over time more signals become differential.

You can’t just drop High Speed Serial techniques into DDR and have it work.

The problem is, the eye is closed. The old techniques won’t work anymore.

10:45
DDR is the last remaining wide parallel communication system.

There’s a controller on one end, which is the CPU. The other end is a memory device.

11:15
With DDR5, the eye is closed. So, the receiver will play a bigger part. It’s important to understand the concepts of equalizing receivers.

You have to think about how the controller and the receiver work together.

12:20
Historically, the memory folks and IO folks have been different teams. The concepts were different. Now, those teams are merging

13:00
DDR5 is one of the last steps before people have to start grappling with communication theory. Modulation, etc.

14:10
Most PCs now will have two channels of communication that are dozens or hundreds of bits wide.

14:45
What is 3D silicon?

If 3D silicon doesn’t come through, we’ll have to push more bits through copper.

3D silicon is nice because you can pack more into a smaller space.

3D silicon is multiple chips bonded together. Vias connect through the chips instead of traces.

The biggest delay for 3D silicon is that it turns the entire value delivery system on its head.

7 years ago, JEDEC started working on wide IO

17:15
What’s the difference between 3D silicon and having it all built right into the processor?

It’s the difference between working in two dimensions and three dimensions. If you go 3D, you can minimize footprint and connections

18:45
For flash memory, the big deal has been building multiple active layers.

19:45
The ability to stack would be useful for mobile.

21:45
Where is technology today with DDR?

DDR4 is now mainstream, and JEDEC started on DDR5 a year ago (2017)

Memory, DDR5+, and JEDEC – #24

“It’s a miracle it works at all.” In this electrical engineering podcast, we discuss the state of memory today and its inevitable march into the future.

Hosted by Daniel Bogdanoff and Mike Hoffman, EEs Talk Tech is a twice-monthly engineering podcast discussing tech trends and industry news from an electrical engineer’s perspective.

“It’s a miracle it works at all.” Not the most inspiring words from someone who helped define the latest DDR spec. But, that’s the state of today’s memory systems. Closed eyes and mV voltage swings are the topic of today’s electrical engineering podcast. Daniel Bogdanoff (@Keysight_Daniel) and Mike Hoffman sit down with Perry Keller to talk about the state of memory today and its inevitable march into the future.

Agenda:

00:00 Today’s guest is Perry Keller. He works a lot with standards committees and making next-generation technology happen.

00:50 Perry has been working with memory for 15 years.

1:10 He also did ASIC design, project management for software and hardware

1:25
Perry is on the JEDEC board of directors

JEDEC is one of the oldest standards bodies, maybe older than the IEEE

1:50 JEDEC was established to create standards for semiconductors. This was an era when vacuum tubes were being replaced by solid state devices.

2:00 JEDEC started by working on instruction set standards

2:15 There are two main groups. A wide bandgap semiconductors group and a memory group.

3:00 Volatile memory vs. nonvolatile memory. An SSD is nonvolatile storage, like in a phone. But if you look at a DIMM in a PC that’s volatile.

3:40 Nonvolatile memory is everywhere, even in light bulbs.

4:00 Even a DRAM can hold its contents for quite some time. JEDEC had discussions about doing massive erases because spooks will try to recover data from it.

DRAM uses capacitors for storage, so the colder they are the longer they hold their charge.

4:45 DRAM is the last vestige of the classical wide single ended parallel bus. “It’s a miracle that it works at all.”

5:30 Perry showed a friend a GDDR5 bus and challenged him to get an eye on it and he couldn’t.

6:10 Even though DDR signals look awful, the system depends on reliable data transfer. The timing and clocking are set up in a way to deal with all of the various factors.

7:00 DDR specifications continue to march forward. There’s always something going on in memory.

8:00 Perry got involved with JEDEC through a conversation with the board chairman.

8:35 When DDR started, 144 MT/s (megatransfers per second) was considered fast. But DDR5 has an end-of-life goal of 6.5 GT/s on an 80+ bit wide single-ended parallel bus.

9:05 What are the big drivers for memory technology? Power. Power is everything. LPDDR – low power DDR – is a big push right now.

9:30 If you look at the memory ecosystem, the big activity is in mobile. Server applications are becoming focused on the cloud, but the new technology and investment is in mobile.

10:00 If you look at a DRAM, you can divide it into three major categories. Mainstream PC memory, low power memory, and GDDR. GDDR is graphics memory. The differences are in both power and cost.

For example, LPDDR uses static designs. You can clock it down to DC, which you can’t do with normal DDR.

The first DDR was essentially TTL compatible. Now, we’re looking at 1.1V power supplies and voltage swings in the mV.

Semiconductor technology is driving the voltages down to a large degree.

11:45 DRAM and GDDR are a big deal for servers.

A company from China tried to get JEDEC to increase the operating temperature range of DRAMs by 10 °C. China fires up one new coal-fired power plant per week to meet growing demand; they found this change in temperature specs could cut that down to only 3 per month.

13:10 About 5 years ago, the industry realized that simply increasing I/O speeds wouldn’t help system performance that much because the core memory access time hasn’t changed in 15 years. The I/O rate has increased, but basically they do that by pulling more bits at once out of the core and shifting them out. The latency is what really hurts at a system level.

14:15 Development teams say that their entire budget for designing silicon is paid for out of smaller electric bills.

15:00 Wide bandgap semiconductors are happy running at very high temperatures. If these temperatures end up in the data centers, you’ll have to have moon suits to access the servers.

16:30 Perry says there’s more interesting stuff going on in computing than he’s seen in his whole career.

The interface between different levels is not very smooth. The magic in a spin-up disk is in the cache-optimizing algorithms. That whole 8-level structure is being re-thought.

18:00 Von Neumann architectures are not constraining people any more.

18:10 Anything that happens architecturally in the computing world affects and is affected by memory.

22:10 When we move from packaged semiconductors to 3D silicon we will see the end of DDR. The first successful step is called high bandwidth memory, which is essentially a replacement for GDDR5.

23:00 To move to a new DDR spec, you basically have to double the burst size.
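
A sketch of that arithmetic (the 32-bit subchannel split is how DDR5 is commonly described; treat the exact widths here as illustrative rather than a quote from the episode):

```python
def bytes_per_burst(bus_width_bits, burst_length):
    """Data delivered by one burst on one (sub)channel."""
    return (bus_width_bits // 8) * burst_length

# DDR4-style: 64-bit channel, burst length 8 -> one 64-byte cache line
ddr4 = bytes_per_burst(64, 8)
# DDR5-style: 32-bit subchannel with the burst length doubled to 16,
# so each burst still delivers one 64-byte cache line
ddr5 = bytes_per_burst(32, 16)
```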

Wide Bandgap Semiconductors for Power Electronics – Electrical Engineering Podcast #20

Wide bandgap semiconductors, like Gallium Nitride (GaN) and Silicon Carbide (SiC) are shaping the future of power electronics by boosting power efficiency and reducing physical footprint. Server farms, alternative energy sources, and electrical grids will all be affected!

Wide bandgap semiconductors, like Gallium Nitride (GaN) and Silicon Carbide (SiC) are shaping the future of power electronics by boosting power efficiency and reducing physical footprint. Server farms, alternative energy sources, and electrical grids will all be affected! Mike Hoffman and Daniel Bogdanoff sit down with Kenny Johnson to discuss in today’s electrical engineering podcast.

 

Links:

Fact Sheet: https://energy.gov/eere/articles/infographic-wide-bandgap-semiconductors

Fact Sheet
https://energy.gov/sites/prod/files/2013/12/f5/wide_bandgap_semiconductors_factsheet.pdf

Tech Assessment (Good timeline information)
https://energy.gov/sites/prod/files/2015/02/f19/QTR%20Ch8%20-%20Wide%20Bandgap%20TA%20Feb-13-2015.pdf

Agenda – Wide Bandgap Semiconductors

Use in Power Electronics

3:00 What is a wide bandgap semiconductor? GaN (Gallium Nitride) devices and SiC (Silicon Carbide) can switch on and off much faster than typical silicon power devices. Wide bandgap semiconductors also have better thermal conductivity. And, wide bandgap semiconductors have a significantly lower drain-source resistance (R-on).
For switch mode power supplies, the transistor switch time is the key source of inefficiency. So, switching faster makes things more efficient.
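
A first-order loss model shows why the transition time matters so much. All component values below are made up for illustration; this is a sketch, not a datasheet calculation:

```python
def switch_losses(v_bus, i_load, r_on, t_transition, f_switch, duty=0.5):
    """Rough MOSFET loss model: conduction term + switching term."""
    p_conduction = i_load ** 2 * r_on * duty               # I^2 * R while on
    p_switching = 0.5 * v_bus * i_load * t_transition * f_switch
    return p_conduction + p_switching

# Hypothetical 400 V / 10 A converter switching at 100 kHz
silicon = switch_losses(400, 10, r_on=0.10, t_transition=100e-9, f_switch=100e3)
gan = switch_losses(400, 10, r_on=0.05, t_transition=10e-9, f_switch=100e3)
# Faster transitions and lower R-on cut the total loss dramatically
```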

4:00 They will also reduce the size of power electronics.

6:30 Wide bandgap semiconductors have a very fast rise time, which can cause EMI and RFI problems. The high switching speed also means they can’t handle much parasitic inductance. So, today’s IC packaging technology isn’t ideal.

8:30 Wide bandgap semiconductors are enabling the smart grid. The smart grid essentially means only turning on things that are being used, and turning power off completely when they aren’t.

9:35 Wide bandgap semiconductors will probably be integrated into server farms before they are used in power grid distribution or in homes.

10:20 Google uses a lot of power – 2.3 TWh (terawatt-hours) per year.
NYT article: http://www.nytimes.com/2011/09/09/technology/google-details-and-defends-its-use-of-electricity.html

It’s estimated Google has 900,000 servers, and that accounts for maybe 1% of the world’s servers.
So, they are willing to put in the investment to work out the details of this technology.
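
Back-of-envelope, those two figures imply an average per-server draw (this lumps cooling and all other datacenter overhead into the per-server number, so treat it as a rough upper bound):

```python
# Average power implied by 2.3 TWh/year spread over ~900,000 servers
annual_energy_wh = 2.3e12            # 2.3 TWh
hours_per_year = 365 * 24            # 8760

avg_power_w = annual_energy_wh / hours_per_year
per_server_w = avg_power_w / 900_000

print(f"~{avg_power_w / 1e6:.0f} MW total, ~{per_server_w:.0f} W per server")
# ~263 MW total, ~292 W per server
```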

11:50 The US Department of Energy wants more people to get advanced degrees in power electronics. Countries want to have technology leadership in this area.

13:00 Wide bandgap semiconductors are also very important for wind farms and other alternative forms of energy.

Having a solid switch mode power supply means that you don’t have to have extra capacity.

US Department of Energy: if wide bandgap semiconductors took over industrial motor systems, it would save a huge amount of energy.

14:45 A huge percentage of the world’s power is consumed by electrical pumps.

16:20 Kenny’s oldest son works for a company that goes around and shows companies how to recover energy costs.

There aren’t many tools available for measuring wide bandgap semiconductor power electronics.

19:30 Utilities and servers are the two main industries that will initially adopt wide bandgap semiconductors

20:35 When will this technology get implemented in the real world? There are parts available today, but it probably won’t be viable for roughly 2-5 years.

21:00 Devices with fast switching are beneficial, but have their own set of problems. The faster a device switches, the more EMI and RFI you have to deal with.

Spread spectrum clocking is a technique used to pass EMI compliance.

24:00 Band gaps of different materials: diamond 5.5 eV, gallium nitride (GaN) 3.4 eV, silicon carbide (SiC) 3.3 eV
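
One way to build intuition for those numbers is to convert each bandgap to the photon wavelength needed to hop the gap. Silicon’s ~1.1 eV is added for comparison; it isn’t from the episode:

```python
import math  # not strictly needed, kept for clarity of the constant's origin

# lambda (nm) = h*c / E  ~  1239.84 / E(eV)
HC_EV_NM = 1239.84

for name, gap_ev in [("Si", 1.1), ("SiC", 3.3), ("GaN", 3.4), ("Diamond", 5.5)]:
    print(f"{name}: {gap_ev} eV -> {HC_EV_NM / gap_ev:.0f} nm")
# Wider gaps correspond to deep-UV photon energies and, practically,
# to devices that keep working at higher temperatures and fields.
```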

PAM4 and 400G Ethernet – #18

Learn how PAM4 is allowing some companies to double their data rate – and the new challenges this brings up for engineers. (electrical engineering podcast)

Today’s systems simply can’t communicate any faster. Learn how some companies are getting creative and doubling their data rates using PAM4 – and the extra challenge this technology means for engineers.

Mike Hoffman and Daniel Bogdanoff sit down with PAM4 transmitter expert Alex Bailes and PAM4 receiver expert Steve Reinhold to discuss the trends, challenges, and rewards of this technology.

 

1:00
PAM isn’t just cooking spray.

What is PAM4? PAM stands for Pulse Amplitude Modulation, and is a serial data communication technique in which more than one bit of data can be communicated per clock cycle. Instead of just a high (1) or low (0) value, in PAM4 a voltage level can represent 00, 01, 10, or 11. NRZ is essentially just PAM2.
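
A minimal sketch of the encoding idea (the bit-to-level mapping and amplitudes here are illustrative; real specs typically use Gray-coded mappings):

```python
# PAM4: each symbol period carries 2 bits as one of 4 amplitude levels.
LEVELS = {"00": -3, "01": -1, "10": +1, "11": +3}   # normalized, illustrative

def pam4_encode(bits: str) -> list:
    assert len(bits) % 2 == 0, "PAM4 consumes bits two at a time"
    return [LEVELS[bits[i:i + 2]] for i in range(0, len(bits), 2)]

symbols = pam4_encode("00011110")
print(symbols)  # [-3, -1, 3, 1] -- 8 bits in 4 symbol periods, 2x NRZ
```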

We are reaching the limit of NRZ communication capabilities over the current communication channels.

2:10 PAM has been around for a while; it was used in 1000BASE-T. 10GBASE-T uses PAM16, which means it has 16 different possible voltage levels per clock cycle. It acts a bit like an analog-to-digital converter.

2:55 Many existing PAM4 specifications have voltage swings of 600-800 mV

3:15 What does a PAM4 receiver look like?  A basic NRZ receiver just needs a comparator, but what about multiple levels?

3:40 Engineers add multiple slicers and do post-processing to clean up the data or put an ADC at the receiver and do the data analysis all at once.

PAM4 communicates 2-bits per clock cycle, 00, 01, 10, or 11.

4:25 Radio engineers have been searching for better modulation techniques for some time, but now digital people are starting to get interested.

4:40 With communications going so fast, the channel bandwidth limits the ability to transmit data.

PAM4 allows you to effectively double your data rate by doubling the amount of data per clock cycle.

5:05 What’s the downside of PAM4? The signal-to-noise ratio (SNR) for PAM4 is worse than traditional NRZ. In a perfect world, the SNR penalty would be 9.6 dB (for four levels instead of two). In reality, it’s worse.

5:30 Each eye may not be the same height, so that also has an effect on the total SNR.
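
That penalty falls out of simple geometry: PAM4 squeezes four levels into the swing NRZ used for two, so each eye is one third the height:

```python
import math

# Each PAM4 eye opening is 1/3 of the NRZ eye height for the same swing,
# giving a theoretical SNR penalty of 20*log10(3).
penalty_db = 20 * math.log10(3)
print(f"{penalty_db:.2f} dB")  # 9.54 dB -- close to the ~9.6 dB often quoted
```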

6:05 What’s the bit error ratio (BER) of a PAM4 vs. NRZ signal if the transmission channel doesn’t change?

6:45 The channels were already challenged, even for many NRZ signals. So, it doesn’t look good for PAM4 signals. Something has to change.

7:00 PAM4 is designed to operate at a high BER. NRZ typically specified a 1E-12 or 1E-15 BER, but many PAM4 specs are targeting 1E-4 or 1E-5. It uses forward error correction (or other schemes) to get accurate data transmission.
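
To make those BER targets concrete, here's the raw error rate at an assumed 50 Gb/s lane rate (the lane rate is illustrative, not from the episode):

```python
def raw_errors_per_second(ber, bit_rate):
    """Expected raw bit errors per second before error correction."""
    return ber * bit_rate

lane_rate = 50e9  # assumed 50 Gb/s PAM4 lane, for illustration only

nrz_target = raw_errors_per_second(1e-12, lane_rate)   # 0.05/s, ~1 every 20 s
pam4_target = raw_errors_per_second(1e-4, lane_rate)   # ~5 million errors/s
# At millions of raw errors per second, forward error correction is
# the only way to deliver clean data to the application.
```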

7:50 Companies are designing more complex receivers and more robust computing power to make PAM4 work. This investment is worth it because they don’t have to significantly change their existing hardware.

8:45 PAM is being driven largely by Ethernet. The goal is to get to a 1 Tb/s data rate.

9:15 Currently 400 GbE is the next step towards the 1 Tbps Ethernet rate (terabit per second).

10:25 In Steve’s HP days, the salesmen would e-mail large pictures (1 MB) to him to try to fill up his drive.

11:10 Is there a diminishing rate of return for going to higher PAM levels?

PAM3 is used in automotive Ethernet, and 1000BASE-T uses PAM5.

Broadcom pushed the development of PAM3. The goal was to have just one pair of cables going through a vehicle instead of the 4 pairs in typical Ethernet cables.

Cars are an electrically noisy environment, so Ethernet is very popular for entertainment systems and less critical systems.

Essentially, Ethernet is replacing FlexRay. There was a technology battle for different automotive communication techniques. You wouldn’t want your ABS running on Ethernet because it’s not very robust.

14:45 In optical communication systems there is more modulation, but those systems don’t have the same noise constraints.

For digital communications, PAM8 is not possible over today’s channels because of the noise.

15:20 PAM4 is the main new scheme for digital communications

15:50 Baseband digital data transmission covers a wide frequency range. It goes from DC (all zeroes or all ones) up to half the baud rate (e.g. 101010). This causes intersymbol interference (ISI) jitter that has to be corrected for – which is why we use transmitter equalization and receiver equalization.

16:55 PAM4 also requires clock recovery, and it is much harder to recover a clock when you have multiple possible signal levels.

17:35 ISI is easier to think about on an NRZ signal. If a signal has ten 0s in a row, then transitions up to ten 1s in a row,  the channel attenuation will be minimal. But, if you put a transition every bit, the attenuation will be much worse.

19:15 To reduce ISI, we use de-emphasis or pre-emphasis on the transmit side, and equalization on the receiver side. Engineers essentially boost the high frequencies at the expense of the low frequencies. It’s very similar to Dolby audio.

20:40 How do you boost only the high frequencies? There are circuits you can design that react based on the history of the bit stream. At potentially error-inducing transition bits, this circuitry drives a higher amplitude than a normal bit.

22:35 Clock recovery is a big challenge, especially for collapsed eyes. In oscilloscopes, there are special techniques to recover the eye and allow system analysis.

With different tools, you can profile an impulse response and detect whether you need to de-emphasize or modify the signal before transmission. Essentially, you can get the transfer function of your link.

23:45 For Ethernet systems, there are usually three equalization taps. Chip designers can modify the tap coefficients to tweak their systems and get the chip to operate properly. They have to design in enough compensation flexibility to make the communication system operate properly.
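
A toy 3-tap transmit FIR shows the effect. The tap weights below are made up for illustration; real designs expose coefficients like these for per-channel tuning:

```python
# 3-tap FIR "de-emphasis": pre-cursor, main cursor, post-cursor weights.
TAPS = (-0.1, 0.8, -0.1)  # illustrative values only

def equalize(symbols):
    # Pad the ends so every output sample sees a full 3-tap window
    padded = [symbols[0]] + list(symbols) + [symbols[-1]]
    return [sum(t * padded[i + k] for k, t in enumerate(TAPS))
            for i in range(len(symbols))]

out = equalize([1, 1, 1, -1, -1, -1])
# Long runs settle to 0.6 while the symbols at the transition are driven
# to +/-0.8 -- extra amplitude exactly where the channel attenuates most.
```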

25:00 PAM vs. QAM? Is QAM just an RF and optical technique, or can it be used in a digital system?

25:40 Steve suspects QAM will start to be used for digital communications instead of just being used in coherent communication systems.

26:30 PAM4 is mostly applicable to 200 GbE and 400 GbE, and something has to change for us to get faster data transfer.

26:48 Many other technologies are starting to look into PAM4 – InfiniBand, Thunderbolt, and PCIe for example.

You can also read the EDN article on PAM4 here. If you’re working on PAM4, you can also check out how to prepare for PAM4 technology on this page.


Heterogeneous Computing & Quantum Engineering – #17

Learn about parallel computing, the rise of heterogeneous processing (also known as hybrid processing), and quantum engineering in today’s EEs Talk Tech electrical engineering podcast!

Learn about parallel computing, the rise of heterogeneous processing (also known as hybrid processing), and the prospect of quantum engineering as a field of study!


00:40

Parallel computing used to be a way of sharing tasks between processor cores.

When processor clock rates stopped increasing, the response of the microprocessor companies was to increase the number of cores on a chip to increase throughput.

01:44

But now, specialized processing elements have become more popular.

A GPU is a good example of this. A GPU is very different from an x86 or ARM processor and is tuned for a different type of processing.

GPUs are very good at matrix math and vector math. Originally, they were designed to process pixels. They use a lot of floating point math because the math behind how a pixel  value is computed is very complex.

A GPU is very useful if you have a number of identical operations you have to calculate at the same time.
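
The same idea shows up in array programming, where one operation is applied across millions of elements at once. NumPy here stands in for a GPU's wide parallel ALUs, and the 3x3 matrix is an illustrative RGB-to-YUV-style color transform:

```python
import numpy as np

# A full HD frame of RGB pixels: ~6 million float values
pixels = np.random.rand(1080, 1920, 3).astype(np.float32)

# One identical operation across every element (gamma correction)
corrected = pixels ** 2.2

# One 3x3 matrix applied to every pixel's color vector at once
m = np.array([[0.299, 0.587, 0.114],
              [-0.147, -0.289, 0.436],
              [0.615, -0.515, -0.100]], dtype=np.float32)
yuv = pixels @ m.T   # batched matrix-vector multiply, a GPU's bread and butter
```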

4:00

GPUs used to be external daughter cards, but in the last year or two the GPU manufacturers are starting to release low power parts suitable for embedded applications. They include several traditional cores and a GPU.

So, now you can build embedded systems that take advantage of machine learning algorithms that would have traditionally required too much processing power and too much thermal power.

 

4:50

This is an example of a heterogeneous processor (AMD) or hybrid processor. A heterogeneous processor contains cores of different types, and a software architect figures out which types of workloads are processed by which type of core.

Andrew Chen, a professor, has predicted that this will increase in popularity because it’s become difficult to take advantage of shrinking semiconductor feature sizes.

6:00

This year or next year, we will start to see heterogeneous processors with multiple types of cores.

Traditional processors are tuned for algorithms on integer and floating point operations where there isn’t an advantage to doing more than one thing at a time. The dependency chain is very linear.

A GPU is good at doing multiple computations at the same time so it can be useful when there aren’t tight dependency chains.

Neither processor is very good at doing real-time processing. If you have real time constraints – the latency between an ADC and the “answer” returned by the system must be short – there is a lot of computing required right now. So, a new type of digital hardware is required. Right now, ASICs and FPGAs tend to fill that gap, as we’ve discussed in the All about ASICs podcast.

9:50

Quantum cores (like we discussed in the what is quantum computing podcast) are something we could see on processor boards at some point. Dedicated quantum computers that exceed the performance of traditional computers will likely be introduced within the next 50 years, and perhaps as soon as the next 10 or 15 years.

To be a consumer product, a quantum computer would have to be a solid-state device, but such devices are purely speculative at this point in time.

11:50

Quantum computing is reinventing how processing happens. And, quantum computers are going to tackle very different types of problems than conventional computers.

12:50

There is a catalog on the web of problems and algorithms that would run substantially better on a quantum computer than on a traditional computer.

13:30

People are creating algorithms for computers that don’t even exist yet.

The Economist estimated that the total global spend on quantum computing research is over 1 billion dollars per year. A huge portion of that interest is driven by the promise of these algorithms and papers.

Quantum computers will not completely replace typical processors.

15:00

Lee’s opinion is that the quantum computing industry is still very speculative, but the upsides are so great that neither the incumbent large computing companies nor the industrialized countries want to be left behind if it does take off.

The promise of quantum computing is beyond just the commercial industry, it’s international and inter-industry. You can find long whitepapers from all sorts of different governments laying out a quantum computing research strategy. There’s also a lot of venture capitalists investing in quantum computing.

17:40

Is this research and development public, or is there a lot of proprietary information out there? It's a mixture: many of the startups and companies are open-sourcing software components and claim to have "bits of physics" working (quantum bits, or qubits), but they are definitely keeping trade secrets.

19:50 Quantum communication means space lasers.

Engineering with quantum effects has promise as an industry. One can send photons with entangled states. The Chinese government has a satellite that can generate entangled photons and send them to base stations. If anyone intercepts and reads them, it's detectable, because the wave function collapses too soon.

Quantum sensing promises to develop accelerometers and gyroscopes that are orders of magnitude more sensitive than what’s commercially available today.

21:35

Quantum engineering could become a new field, much like electrical engineering was born roughly 140 years ago, electronics roughly 70 years ago, and computer science out of math and electrical engineering. It's possible that the birth of quantum engineering will be dated to some point in the next 5 years – or the last 5.

23:00

Lee’s favorite quantum state is the Bell state. It’s the equal probability state between 1 and 0, among other interesting properties. The Bell state encapsulates a lot of the quantum weirdness in one snippet of math.

Quantum Bits and Cracking RSA – #16

What does a quantum computer look like? What does the future of cyber security hold? We sit down with Lee Barford to discuss.

Hosted by Daniel Bogdanoff and Mike Hoffman, EEs Talk Tech is a twice-monthly engineering podcast discussing tech trends and industry news from an electrical engineer’s perspective

 

How will quantum computing change the future of security? What does a quantum computer look like? Mike and Daniel sit down with Lee Barford to get some answers.

Video Version:

Audio version:

Last time we looked at "what is quantum computing" and talked about quantum bits and storing data in superposition states.

00:40 Lee talks about how to crack RSA and Shor’s algorithm (wikipedia)

00:50 The history of quantum computing (wiki). The first person to propose it was Richard Feynman in the early 1980s. There was some interest, but it died out.

In the 1990s, Peter Shor published a paper pointing out that if you could build a quantum computer with certain operational properties (machine code instructions), then you could find one factor of a number no matter how long it is.

Then, he outlined a number of other things he would need, like a quantum Fast Fourier Transform (FFT).
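Shor's insight is often summarized as: a quantum computer only needs to do the order-finding step; everything around it is ordinary arithmetic. As a toy illustration (not from the episode), this sketch runs those classical steps, with a brute-force loop standing in for the quantum order-finding, which only works for tiny numbers:

```python
from math import gcd

def order(a, n):
    """Multiplicative order of a mod n -- the step a quantum computer would
    accelerate. Here it's found by brute force, feasible only for tiny n."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical_part(a, n):
    """Given a coprime to n, use the order r of a mod n to extract a factor.
    Succeeds when r is even and a^(r/2) is not congruent to -1 mod n."""
    r = order(a, n)
    if r % 2 != 0:
        return None                      # unlucky choice of a; retry with another
    candidate = gcd(pow(a, r // 2) - 1, n)
    return candidate if candidate not in (1, n) else None

# Example: factoring 15 with a = 7 (the order of 7 mod 15 is 4) yields 3.
```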

Much of the security we use every day relies on both the RSA public-key system and the Diffie-Hellman key exchange algorithm.

HTTPS connections use the Diffie-Hellman key exchange algorithm. RSA doesn't stand for "really secure algorithm" – it stands for its inventors: Rivest, Shamir, and Adleman.

4:00

RSA only works if the recipients know each other, but Diffie Hellman works for people who don’t know each other but still want to communicate securely. This is useful because it’s not practical for everyone to have their own RSA keys.

5:00

Factoring numbers that are the product of large primes is the basis of RSA's security. The processing power required for factoring is too large to be practical. People have been working on factoring for 2,500 years.
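To see why factoring is the whole ballgame, here's a toy RSA sketch using the classic textbook numbers (p = 61, q = 53) – deliberately tiny, and not from the episode. Anyone who can factor the public modulus n back into p and q can recompute the private exponent d and decrypt everything:

```python
# Toy RSA with deliberately tiny textbook primes. Real keys use primes
# hundreds of digits long, which is exactly what makes factoring n hard.
p, q = 61, 53
n = p * q                      # 3233 -- the public modulus
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent
d = pow(e, -1, phi)            # 2753 -- private exponent (Python 3.8+ modular inverse)

message = 65
ciphertext = pow(message, e, n)     # encrypt: m^e mod n
recovered = pow(ciphertext, d, n)   # decrypt: c^d mod n, recovers 65
```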

6:45

Shor’s algorithm is theoretically fast enough to break RSA. If you could build a quantum computer with enough quantum bits and operate with a machine language cycle time that is reasonable (us or ms), then it would be possible to factor thousand bit numbers.

7:50

Famous professors and famous universities have a huge disparity of opinion as to when a quantum computer of that size could be built. Some say 5-10 years, others say up to 50.

8:45

What does a quantum computer look like? It’s easier to describe architecturally than physically. A quantum computer isn’t that much different from a classical computer, it’s simply a co-processor that has to co-exist with current forms of digital electronics.

9:15

If you look at Shor’s algorithm, there are a lot of familiar commands, like “if statements” and “for loops.” But, quantum gates, or quantum assembly language operations, are used in the quantum processor. (more about this)

10:00

Lee thinks that because a quantum gate operates in time instead of space, the term “gate” isn’t a great name.

10:30

What quantum computers exist today? Some have been built, but with only a few quantum bits. The current claim is that people have created quantum computers with up to 21 quantum bits. But, there are potentially a lot of errors and noise. For example, can they actually maintain a proper setup and hold time?

11:50

Continuing the Schrodinger’s Cat analogy…

In reality, if you have a piece of physics that you’ve managed to put into a superimposed quantum state, any disturbance of it (photon impact, etc.) will cause it to collapse into an unwanted state or to collapse too early.

13:15

So, quantum bits have to be highly isolated from their environments – in vacuums, or at extremely cold temperatures (well below 1 kelvin!).

13:45

The research companies making big claims about the quantity of bits are not using solid state quantum computers.

The isolation of a quantum computer can’t be perfect, so there’s a limited lifetime for the computation before the probability of getting an error gets too high.

14:35

Why do we need a superposition of states? Why does it matter when the superimposed states collapse? If they collapse at the wrong time, you'll get a wrong answer. With Shor's algorithm, it's easy to check for the right answer: you get either a remainder of 0 or you don't. If you get 0, the answer is correct. The computation only has to be reliable enough for you to check the answer.

16:15

If the probability of getting the right answer is high enough, you can afford to get the wrong answer on occasion.

16:50

The probability of the state of a quantum bit isn't just 50%, so how do you set it? It depends on the physical system. You can write to a quantum bit by injecting energy into the system – for example, a pulse made of a very small number of photons with carefully controlled timing and phase.

18:15

Keysight helps quantum computer researchers generate and measure pulses with metrological levels of precision.

The pulses have to be very carefully timed and correlated with sub-nanosecond accuracy. You need time synchronization across all the bits at once for it to be useful.

19:40

What is a quantum bit? Two common kinds of quantum bits are:

1. Ions trapped in a vacuum by lasers. The ions can't move because they are held in place by standing waves of laser light. The vacuum chamber can be at room temperature, but the ions are at a low temperature because they can't move.

2. Josephson junctions in tank circuits (a coil and capacitor), which produce oscillations at microwave frequencies. Under the right physical conditions, these can be designed to behave like an abstract two-state quantum system: you simply designate zero and one to be different states of the system.
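For a feel of why these circuits end up at microwave frequencies, a tank circuit resonates at f = 1/(2π√(LC)). The component values below are hypothetical, chosen only to land in the gigahertz range; real superconducting qubits are engineered, not looked up from a formula this simple:

```python
from math import pi, sqrt

def resonant_frequency(inductance, capacitance):
    """Resonant frequency of an ideal LC tank circuit: f = 1/(2*pi*sqrt(L*C))."""
    return 1.0 / (2 * pi * sqrt(inductance * capacitance))

# Hypothetical values: a 1 nH coil and a 0.4 pF capacitor resonate at
# roughly 8 GHz -- i.e., microwave frequencies.
f = resonant_frequency(1e-9, 0.4e-12)
```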

Strictly speaking, "probabilities" is the wrong description – quantum states are described by complex quantum amplitudes.

23:00

Josephson junctions were talked about in an earlier electrical engineering podcast discussing SI units.

23:40

After working with quantum computing, it’s common to walk away feeling a lot less knowledgeable.

24:30

Stupid question section:

“If you had Schrodinger’s cat in a box, would you look or not?”

Lee says the cat’s wave function really collapsed as it started to warm up so the state has already been determined.


The World’s Fastest ADC – #13

Learn about designing the world’s fastest ADC in today’s electrical engineering podcast! We sit down with Mike to talk about ADC design and ADC specs. Hosted by Daniel Bogdanoff and Mike Hoffman, EEs Talk Tech is a twice-monthly engineering podcast discussing tech trends and industry news from an electrical engineer’s perspective.

 

We talk to ASIC Planner Mike Beyers about what it takes to design the world’s fastest ADC in today’s electrical engineering podcast.

Video Version (YouTube):

 

Audio Only:

Intro:
Mike is an ASIC planner on the ASIC Design Team.

Pre-study: learn about making an ASIC.

00:30

What is an ADC?

An ADC is an analog-to-digital converter: it takes analog inputs and produces digital data outputs.
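As a quick illustration (not from the episode), an ideal ADC can be modeled in a few lines: it maps a voltage in the input range onto one of 2^N digital codes. The function and parameter names here are made up for the sketch:

```python
def adc_convert(voltage, v_ref=1.0, bits=8):
    """Ideal ADC model: map an analog voltage in [0, v_ref) to an N-bit code.
    2**bits quantization levels; out-of-range inputs clamp to the end codes."""
    levels = 2 ** bits
    code = int(voltage / v_ref * levels)
    return max(0, min(levels - 1, code))

# Mid-scale input on an 8-bit converter lands at code 128 of 0..255.
```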

What’s the difference between analog and digital ASICs?

1:00
There are three types of ASICs:
1. Signal conditioning ASICs
2. Converters – either digital-to-analog (DAC) or analog-to-digital (ADC) – which sit between types 1 and 3
3. Signal processing ASICs, also known as digital ASICs

1:50
Signal conditioning ASICs can be very simple or very complicated.
e.g. stripline filters are simple; the front end of an oscilloscope can be complicated

2:45
There’s a distinction between a converter vs. an analog chip with some digital functionality
A converter has both digital and analog. But there are some analog chips with a digital interface, like an I2C or SPI interface.

4:25
How do you get what’s happening into the analog world onto a digital interface, and how fast can you do it?

4:35
Mike Hoffman designed a basic ADC in school using a chain of operational amplifiers (op-amps).
A ladder converter, or "thermometer code" converter, is the most basic of ADC designs.
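A minimal sketch of that idea (my illustration, not the episode's): a flash converter compares the input against a ladder of thresholds at once, producing a run of 1s – the "thermometer code" – and the count of 1s is the output code:

```python
def flash_adc(voltage, v_ref=1.0, bits=3):
    """Flash/ladder ADC sketch: one comparator per threshold produces a
    thermometer code (a run of 1s); counting the 1s gives the output code."""
    n_comparators = 2 ** bits - 1
    thresholds = [(i + 1) * v_ref / 2 ** bits for i in range(n_comparators)]
    thermometer = [1 if voltage > t else 0 for t in thresholds]
    return sum(thermometer)   # e.g. [1, 1, 1, 0, 0, 0, 0] -> code 3
```

Note the cost that makes real flash ADCs power-hungry: a 3-bit converter needs 7 comparators, and each extra bit doubles that count.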

6:00
A slow ADC can use a single-ended CMOS interface, a faster ADC might use parallel LVDS, and the highest-performance chips now almost always use SERDES.

6:35
The world’s fastest ADC?

6:55
Why do we design ADCs? We usually don’t make what we can buy off the shelf.

The Nyquist rate determines the necessary sample rate. For example, a 10 GHz signal needs to be sampled at 20–25 gigasamples per second.
1/(25 GHz) = 40 ps between samples
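The arithmetic above is just Nyquist plus a reciprocal; spelled out for clarity:

```python
# Nyquist: a signal of bandwidth f must be sampled at more than 2*f.
signal_freq = 10e9                  # 10 GHz signal
min_sample_rate = 2 * signal_freq   # 20 GSa/s Nyquist minimum
sample_rate = 25e9                  # 25 GSa/s, leaving some margin
sample_interval = 1 / sample_rate   # 40 ps between samples
```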

8:45
ADC vertical resolution is the number of bits.

So, ADCs generally have two main specs, speed (sample rate) and vertical resolution.

9:00
The ability to measure time very accurately is often most important, but people often miss the noise side of things.

9:45
It’s easy to oversimplify into just two specs. But, there’s more that hast to be considered. Specifications like bandwidth, frequency flatness, noise, and SFDR

10:20
It’s much easier to add bits to an ADC design than it is to decrease the ADCs noise.

10:42
Noise floor, SFDR, and SNR measure how good an analog to digital converter is.

SFDR means “spurious free dynamic range” and SNR means “signal to noise ratio”

11:00
Other things you need to worry about are error codes, especially for instrumentation.

For some ADC folding architectures and successive-approximation architectures, there can be big errors. This is acceptable for communication systems but not for instruments that visualize signals.

12:30
So, there are a lot of factors to consider when choosing an ADC.

12:45
Where does ADC noise come from? It comes from both the ADC and from the support circuitry.

13:00
We start with a noise budget for the instrument and allocate the budget to different blocks of the oscilloscope or instrument design.

13:35
Is an ADC the ultimate ASIC challenge? It’s both difficult analog design and difficult high-speed digital design, so we have to use fine geometry CMOS processes to make it happen.

15:00
How fast are our current ADCs? 160 Gigasamples per second.

15:45
We accomplish that with a chain of ADCs, not just a single ADC.

16:15
ADC interleaving. If you think about it simply, if you want to double your sample rate you can just double the number of ADCs and shift their sampling clocks.

But this has two problems. First, the ADCs still have the same bandwidth – you don't get an increase. Second, you need a very good clock and must offset the sampling instants carefully.
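The clock-offset idea can be sketched in a few lines (my illustration, not the episode's): with n identical ADCs whose clocks are evenly staggered, the combined stream samples n times faster, even though each individual ADC – and the combined system – still sees only the original analog bandwidth:

```python
def interleaved_sample_times(n_adcs, adc_rate, n_samples):
    """Sample instants of n_adcs identical ADCs with evenly staggered clocks.
    The combined stream runs at n_adcs * adc_rate; sample i is taken by
    ADC number (i mod n_adcs). Interleaving raises sample rate, not bandwidth."""
    combined_period = 1.0 / (n_adcs * adc_rate)
    return [i * combined_period for i in range(n_samples)]

# Two 10 GSa/s ADCs offset by 50 ps behave like one 20 GSa/s converter:
times = interleaved_sample_times(2, 10e9, 4)   # 0, 50 ps, 100 ps, 150 ps
```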

17:00
To get higher bandwidth, you can use a sampler – basically a very fast, higher-bandwidth switch – that delivers the signal to the ADCs at a lower bandwidth.

But, you have to deal with new problems like intersymbol interference (ISI).

18:20
So, what are the downsides of interleaving?

Getting everything to match up is hard, so you have to have a lot of adjustability to calibrate the samplers.

For example, if the quantization levels of one ADC are higher than the other's, you'll get problems like frequency spurs and gain spurs.

We can minimize this with calibration and some DSP (digital signal processing) after the capture.

20:00
Triple interleaving and double interleaving – the devil is in the details

21:00
Internally, our ADCs are made up of a number of slices of smaller, slower ADC blocks.

21:15
Internally, we have three teams. An analog ASIC team, a digital ASIC team, and also an ADC ASIC team.

22:15
Technology for ADCs is “marching forward at an incredible rate”

The off-the-shelf ADC technologies are enabling new technologies like 5G, 100G/400G/1T Ethernet, and DSP processing.

23:00
Is processing driven by ADCs, or are ADCs advancing processor technology? Both!

24:00
Predictions?

Mike H.: New “stupid question for the guest” section
What is your favorite sample rate and why?
400 MSa/s – one of the first scopes Mike B. worked on. Remember "4 equals 5"