New 110 GHz Oscilloscope – UXR Q&A #35

Brig Asay, Melissa, and Daniel Bogdanoff sit down to answer the internet’s questions about the new 110 GHz UXR oscilloscope. How long did it take? What did it cost? Find out!

 

Some of the questions & comments

S K on YouTube: How long does it take to engineer something like this? With custom ASICs all over the place and what not…

Glitch on YouTube: Can you make a budget version of it for $99?

Steve Sousa on YouTube: But how do you test the test instrument?? It’s already so massively difficult to make this, how can you measure and qualify it’s gain, linearity etc?

TechNiqueBeatz on YouTube: About halfway through the video now.. what would the practical application(s) of an oscilloscope like this be?

Alberto Vaudagna on YouTube: Do you know what happen to the data after the dsp? It go to the CPU motherboard and processed by the CPU or the data is overlayed on the screen and the gui is runner’s by the CPU?

How does a piece of equipment like that get delivered? I just don’t think UPS or Fedex is going to cut it for million+ dollar prototype. It would be nice to see some higher magnification views of the front end.

Ulrich Frank: Nice sturdy-looking handles at the side of the instrument – to hold on to and keep you steady when you hear the price…

SAI Peregrinus: That price! It costs less than half the price of a condo in Brooklyn, NY! (Search on Zillow, sort by price high to low. Pg 20 has a few for $2.7M, several of which are 1 bedroom…)

RoGeorgeRoGeorge: Wow, speechless!

R Bhalakiya: THIS IS ALL VOODOO MAGIC

Maic Salazar Diagnostics: This is majestic!!

Sean Bosse: Holy poop. Bet it was hard keeping this quiet until the release.

jonka1: Looking at the front end it looks as if the clock signal paths are of different lengths. How is phase dealt with? Is it in this module or later in software?

cims: The Bugatti Veyron of scopes with a price to match, lol

One scope to rule them all…wow! Keysight drops the proverbial mic with this one

Mike Oliver: That is a truly beautiful piece of equipment. It is more of a piece of art work than any other equipment I have ever seen.

Gyro on EEVBlog: It’s certainly a step change in just how bad a bad day at the office could really get!
TiN: I have another question, regarding the input. Are there any scopes that have waveguide input port, instead of very pricey precision 1.0mm/etc connectors?
Or in this target scope field, that’s not important as much, since owner would connect the input cable and never disconnect? Don’t see those to last many cable swaps in field, even 2.4mm is quite fragile.

User on EEVBlog: According to the specs, it looks like the 2-channel version he looked at “only” requires 1370 VA and can run off 120 V. The 4-channel version only works off 200-240 V.

The really interesting question: how do they calibrate that calibration probe.
They have to characterize the imperfections in its output to a significantly better accuracy than this scope can measure. Unless there’s something new under the sun in calibration methodology?

Mikes Electric Stuff‏ @mikelectricstuf: Can I get it in beige?

Yaghiyah‏ @yaghiyah: Does it support Zone Triggering?

User on Twitter:

It’ll be a couple paychecks before I’m in the market, but I’d really be interested in some detail on the probes and signal acquisition techniques. Are folks just dropping a coax connector on the PCB as a test point? The test setup alone has to be a science in itself.

I’d also be interested in knowing if the visiting aliens that you guys mugged to get this scope design are alive and being well cared for.

Hi Daniel, just out of curiosity and within any limits of NDAs, can you go into how the design process goes for one of these bleeding-edge instruments? Mostly curious how much of the physical design, like the channels in the hybrid, are designed by a human versus designed parametrically and synthesized

One Protocol to Rule Them All!? – #34

The USB Type-C brings a lot of protocols into one physical connector, but is there room for one protocol to handle all our IO needs? Mike Hoffman and Daniel Bogdanoff sit down with high speed digital communications expert Jit Lim to find out.

 

0:00 This is Jit’s 3rd podcast of the series

1:00 We already have one connector to rule them all with USB Type-C, but it’s just a connector. Will we ever have one specification to rule them all?

2:00 Prior to USB Type-C, each protocol required its own connector. With USB Type-C, you can run multiple protocols over the same physical connector

3:00 This would make everything simpler for engineers; they would only need to test and characterize one technology.

3:30 Jit proposes a “Type-C I/O”

4:00 Thunderbolt already allows displayport to tunnel through it

4:30 Thunderbolt already has a combination of capabilities. It has a display mode – you can buy a Thunderbolt display. This means you can run data and display using the same technology

6:30 There’s a notion of muxed signals

7:00 The PHY speed is the most important. Thunderbolt is running 20 Gb/s

7:15 What would the physical connection look like? Will the existing USB Type-C interface work? Currently we already see 80 Gb/s ports (4 lanes) in existing consumer PCs

9:20 Daniel hates charging his phone without fast charging

9:40 The USB protocol is for data transfer, but is there going to be a future USB display protocol? There are already some audio and video modes in current USB, like a PC headset

11:30 Why are we changing? The vision is to plug it in and have it “just work”

12:00 Today, standards groups are quite separate. They each have their own ecosystems that they are comfortable in. So, this is a big challenge for getting to a single spec

13:15 Performance capabilities, like cable loss, are also a concern and a challenge

14:00 If a tech like this were to exist, will the groups have to merge? Or, will someone just come out with a spec that obsoletes all of the others?

15:30 Everyone has a cable hoard. Daniel’s is a drawer, Mike’s is a shoebox

16:30 You still have to be aware of the USB Type-C cables that you buy. There’s room for improvement

17:30 Mike wants a world of only USB Type-C connectors and 3.5mm headphone jacks

18:30 From a test and measurement perspective, it’s very attractive to have a single protocol. You’d only have to test at one rate, one time

19:30 Stupid questions

USB 3.2 + Why You Only Have USB Ports On One Side of Your Laptop – #32

USB 3.2 DOUBLES the data transfer capabilities of previous USB specifications, and could mean the end of having USB ports on just one side of your computer. Find out more in today’s electrical engineering podcast with Jit Lim, Daniel Bogdanoff, and Mike Hoffman.

 

1:00
Jit is the USB and Thunderbolt lead for Keysight.

1:30
USB 3.2 specifications were released in Fall 2017 and introduced two main capabilities.

USB 3.2 doubles the performance of USB 3.1. You can now run 10 Gb/s x2. It uses both sides of the Type-C connector.

In the x2 mode, both sides of the connectors are used instead of just one.
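As a rough sketch of the doubling, here is a back-of-envelope throughput calculation. The per-lane rate and the 128b/132b line encoding are assumed figures, not stated in the episode notes:

```python
# Back-of-envelope throughput for a two-lane USB 3.2 link.
# Assumed figures (not all stated above): 10 Gb/s per lane, two lanes,
# and 128b/132b line encoding.
def payload_rate_gbps(lanes=2, line_rate_gbps=10.0, enc_payload=128, enc_total=132):
    """Raw line rate times encoding efficiency, summed over lanes."""
    return lanes * line_rate_gbps * enc_payload / enc_total

print(round(payload_rate_gbps(), 2))  # 19.39 -> roughly 20 Gb/s before protocol overhead
```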

4:00
The other new part of USB 3.2 is that it adds the ability to have the USB silicon farther away from the port. It achieves this using retimers, which makes up for the lossy transmission channel.

5:00
Why laptops only have USB ports on one side! The USB silicon has to be close to the connector.

6:30
If the silicon is 5 or 6 inches away from the connector, it will fail the compliance tests. That’s why we need retimers.

7:15
USB is very good at maintaining backwards compatibility

The USB 3.0 and USB 3.1 specs no longer exist as separate documents. There is only USB 3.2.

The USB 3.2 specification includes the 3.0 and 3.1 specs as part of it; they act as modes within it.

9:00
At the protocol layer and the PHY layer, not much has changed. USB 3.2 simply adds communication abilities.

9:55
Who is driving the USB spec? There’s a lot of demand! USB Type C is very popular for VR and AR.

12:00
There’s no benefit to using legacy devices with modern USB 3.2 ports.

13:45
There’s a newly released variant of USB Type C that does not have USB 2.0 support. It repurposes the USB 2 pins. It won’t be called USB, but it’ll essentially be the same thing. It’s used for a new headset.

15:20
USB Type C is hugely popular for VR and AR applications. You can send data, video feeds, and power.

17:00
Richie’s Vive has an audio cable, a power cable, and an HDMI cable. The new version, though, has a USB Type-C that handles some of this.

18:00
USB 3.2 will be able to put a retimer on a cable as well. You can put one at each end.

What is a retimer? A retimer is used when a signal traverses a lossy board or transmission line. A retimer acquires the signal, recovers it, and retransmits it.

It’s a type of repeater. Repeaters can be either redrivers or repeaters. A redriver just re-amplifies a signal, including any noise. A retimer does a full data recovery and re-transmission.
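The redriver/retimer distinction above can be sketched with a toy model. The signal levels, noise amount, and function names here are all hypothetical, just to illustrate that a redriver amplifies noise along with the signal while a retimer slices back to clean logic levels:

```python
import random

def noisy_bits(bits, noise=0.3, seed=1):
    """NRZ levels of +/-1 V plus uniform noise, standing in for a lossy channel."""
    rng = random.Random(seed)
    return [(1.0 if b else -1.0) + rng.uniform(-noise, noise) for b in bits]

def redrive(samples, gain=2.0):
    """A redriver just re-amplifies: the noise is amplified along with the signal."""
    return [gain * s for s in samples]

def retime(samples):
    """A retimer recovers each bit and retransmits a clean logic level."""
    return [1.0 if s > 0 else -1.0 for s in samples]

bits = [1, 0, 1, 1, 0]
rx = noisy_bits(bits)
print(retime(rx))  # clean +/-1 levels; redrive(rx) would still carry the noise
```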

21:20
Stupid Questions:
What is your favorite alt mode, and why?
If you could rename Type-C to anything, what would you call it?

 

 

 

Memory, DDR5+, and JEDEC – #24

“It’s a miracle it works at all.” In this electrical engineering podcast, we discuss the state of memory today and its inevitable march into the future.

Hosted by Daniel Bogdanoff and Mike Hoffman, EEs Talk Tech is a twice-monthly engineering podcast discussing tech trends and industry news from an electrical engineer’s perspective.

“It’s a miracle it works at all.” Not the most inspiring words from someone who helped define the latest DDR spec. But, that’s the state of today’s memory systems. Closed eyes and mV voltage swings are the topic of today’s electrical engineering podcast. Daniel Bogdanoff (@Keysight_Daniel) and Mike Hoffman sit down with Perry Keller to talk about the state of memory today and its inevitable march into the future.

Agenda:

00:00 Today’s guest is Perry Keller, he works a lot with standards committees and making next generation technology happen.

00:50 Perry has been working with memory for 15 years.

1:10 He also did ASIC design, project management for software and hardware

1:25
Perry is on the JEDEC board of directors

JEDEC is one of the oldest standards bodies, maybe older than the IEEE

1:50 JEDEC was established to create standards for semiconductors. This was an era when vacuum tubes were being replaced by solid state devices.

2:00 JEDEC started by working on instruction set standards

2:15 There are two main groups. A wide bandgap semiconductors group and a memory group.

3:00 Volatile memory vs. nonvolatile memory. An SSD is nonvolatile storage, like in a phone. But if you look at a DIMM in a PC that’s volatile.

3:40 Nonvolatile memory is everywhere, even in light bulbs.

4:00 Even a DRAM can hold its contents for quite some time. JEDEC had discussions about doing massive erases because spooks will try to recover data from it.

DRAM uses capacitors for storage, so the colder they are the longer they hold their charge.

4:45 DRAM is the last vestige of the classical wide single ended parallel bus. “It’s a miracle that it works at all.”

5:30 Perry showed a friend a GDDR5 bus and challenged him to get an eye on it and he couldn’t.

6:10 Even though DDR signals look awful, it depends on reliable data transfer. The timing and clocking is set up in a way to deal with all of the various factors.

7:00 DDR specifications continue to march forward. There’s always something going on in memory.

8:00 Perry got involved with JEDEC through a conversation with the board chairman.

8:35 When DDR started, 144 MT/s (megatransfers per second) was considered fast. But, DDR5 has an end-of-life goal of 6.5 GT/s on an 80+ bit wide single-ended parallel bus.
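A quick sanity check on what those numbers imply for peak bus bandwidth (using the 6.5 GT/s and 80-bit figures from the discussion; note that on DDR DIMMs the extra bits beyond 64 typically carry ECC, so usable data bandwidth is lower):

```python
# Peak bandwidth of a parallel memory bus: transfer rate x bus width.
# Figures assumed from the discussion above: 6.5 GT/s on an 80-bit bus.
def peak_bandwidth_gbps(transfers_per_sec, bus_width_bits):
    """Peak bus bandwidth in gigabytes per second."""
    return transfers_per_sec * bus_width_bits / 8 / 1e9

print(peak_bandwidth_gbps(6.5e9, 80))  # 65.0 GB/s, including any ECC bits
```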

9:05 What are the big drivers for memory technology? Power. Power is everything. LPDDR – low power DDR – is a big push right now.

9:30 If you look at the memory ecosystem, the big activity is in mobile. Server applications are consolidating into the cloud, but the new technology and investment is in mobile.

10:00 If you look at a DRAM, you can divide it into three major categories. Mainstream PC memory, low power memory, and GDDR. GDDR is graphics memory. The differences are in both power and cost.

For example, LPDDR uses static designs. You can clock it down to DC, which you can’t do with normal DDR.

The first DDR was essentially TTL compatible. Now, we’re looking at 1.1V power supplies and voltage swings in the mV.

Semiconductor technology is driving the voltages down to a large degree.

11:45 DRAM and GDDR are a big deal for servers.

A company from China tried to get JEDEC to increase the operating temperature range of DRAMs by 10 °C. China fires up about one new coal-fired power plant per week to meet growing demand; they found this change in temperature specs could cut that to only three per month.

13:10 About 5 years ago, the industry realized that simply increasing I/O speeds wouldn’t help system performance that much because the core memory access time hasn’t changed in 15 years. The I/O rate has increased, but basically they do that by pulling more bits at once out of the core and shifting them out. The latency is what really hurts at a system level.

14:15 Development teams say that their entire budget for designing silicon is paid for out of smaller electric bills.

15:00 Wide bandgap semiconductors are happy running at very high temperatures. If these temperatures end up in the data centers, you’ll have to have moon suits to access the servers.

16:30 Perry says there’s more interesting stuff going on in computing than he’s seen in his whole career.

The interface between different levels is not very smooth. The magic in a spinning disk is in the cache-optimizing algorithms. That whole 8-level structure is being re-thought.

18:00 Von Neumann architectures are not constraining people any more.

18:10 Anything that happens architecturally in the computing world affects and is affected by memory.

22:10 When we move from packaged semiconductors to 3D silicon we will see the end of DDR. The first successful step is called high bandwidth memory, which is essentially a replacement for GDDR5.

23:00 To move to a new DDR spec, you basically have to double the burst size.

Data Analytics for Engineering Projects – #23

Learn some best practices for engineering projects that have huge amounts of data. Data analytics tools are crucial for project success! Listen in on today’s EEs Talk Tech electrical engineering podcast.

It seems most large labs have a go-to data person. You know, the one who had to upgrade his PC so it could handle insanely complex Excel pivot tables? In large electrical engineering R&D labs, measurement data can often be inaccessible and unreliable.

In today’s electrical engineering podcast, Daniel Bogdanoff (@Keysight_Daniel) sits down with Ailee Grumbine and Brad Doerr to talk about techniques for managing test & measurement data for large engineering projects.

 

Agenda:

1:10 – Who is using data analytics?

2:00 – for a hobbyist in the garage, they may still have a lot of data. But, because it’s a one-person team, it’s much easier to handle the data.

Medium and large size teams generate a lot of data. There are a lot of prototypes, tests, etc.

3:25 – The best teams manage their data efficiently. They are able to make quick, informed decisions.

4:25 – A manager told Brad, “I would rather re-make the measurements because I don’t trust the data that we have.”

6:00 – Separate the properties from the measurements. Separate the data from the metadata. Separating data from production lines, prototype units, etc. helps us at Keysight make good engineering decisions.

9:30 – Data analytics helps for analyzing simulation data before tape out of a chip.

10:30 – It’s common to have multiple IT people managing a specific project.

11:00 – Engineering companies should use a data analytics tool that is data and domain agnostic.

11:45 – Many teams have an engineer or two that manage data for their teams. Often, it’s the team lead. They often get buried in data analytics instead of engineering and analysis work. It’s a bad investment to have engineers doing IT work.

14:00 – A lot of high speed serial standards have workshops and plugfests. They test their products to make sure they are interoperable and how they stack up against their competitors.

15:30 – We plan to capture industry-wide data and let people see how their project stacks up against the industry as a whole.

16:45 – On the design side, it’s important to see how the design team’s simulation results stack up against the validation team’s empirical results.

18:00 – Data analytics is crucial for manufacturing. About 10% of our R&D tests make it to manufacturing. And, manufacturing has a different set of data and metrics.

19:00 – Do people get hired/fired based on data? In one situation, there was a lack of data being shared that ended up costing the company over $1M and 6 months of time-to-market.


Quantum Bits and Cracking RSA – #16

What does a quantum computer look like? What does the future of cyber security hold? We sit down with Lee Barford to discuss.

Hosted by Daniel Bogdanoff and Mike Hoffman, EEs Talk Tech is a twice-monthly engineering podcast discussing tech trends and industry news from an electrical engineer’s perspective

 

How will quantum computing change the future of security? What does a quantum computer look like? Mike and Daniel sit down with Lee Barford to get some answers.

Video Version:

Audio version

Last time we looked at “what is quantum computing” and talked about quantum bits and storing data in superstates.

00:40 Lee talks about how to crack RSA and Shor’s algorithm (wikipedia)

00:50 The history of quantum computing (wiki). The first person to propose it was Richard Feynman in the early 1980s. There was some interest, but it died out.

In the 1990s, Peter Shor published a paper pointing out that if you could build a quantum computer with certain operational properties (machine code instructions), then you could find one factor of a number no matter how long it is.

Then, he outlined another number of things he would need, like a quantum Fast Fourier Transform (FFT).

Much of the security we use every day relies on both the RSA public key system and the Diffie-Hellman key exchange algorithm.

HTTPS connections use the Diffie-Hellman key exchange algorithm. RSA stands for its inventors, Rivest, Shamir, and Adleman (not, as the joke goes, “really secure algorithm”).

4:00

RSA only works if the recipients know each other, but Diffie Hellman works for people who don’t know each other but still want to communicate securely. This is useful because it’s not practical for everyone to have their own RSA keys.
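A toy Diffie-Hellman exchange makes the idea concrete. The numbers here are deliberately tiny, hypothetical values for illustration; real deployments use 2048-bit (or larger) groups or elliptic curves:

```python
# Toy Diffie-Hellman key exchange with a tiny prime -- illustrative only.
p, g = 23, 5                  # public modulus and generator (hypothetical small values)
a, b = 6, 15                  # Alice's and Bob's private keys
A = pow(g, a, p)              # Alice sends g^a mod p over the open channel
B = pow(g, b, p)              # Bob sends g^b mod p
shared_alice = pow(B, a, p)   # Alice computes (g^b)^a mod p
shared_bob = pow(A, b, p)     # Bob computes (g^a)^b mod p
assert shared_alice == shared_bob
print(shared_alice)           # both sides derive the same secret: 2
```

An eavesdropper sees only p, g, A, and B; recovering the secret requires solving the discrete-log problem, which is what makes the exchange secure at realistic key sizes.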

5:00

Factoring numbers that are made up of large prime numbers is the basis for RSA. The processing power required for factoring is too large to be practical. People have been working on this for 2500 years.

6:45

Shor’s algorithm is theoretically fast enough to break RSA. If you could build a quantum computer with enough quantum bits and a reasonable machine-language cycle time (µs or ms), then it would be possible to factor thousand-bit numbers.
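Only the period-finding step of Shor's algorithm needs the quantum computer; the rest is classical number theory. A minimal sketch for a toy modulus, with the period found by brute force where the quantum FFT would go:

```python
from math import gcd

def order(a, n):
    """Smallest r > 0 with a^r = 1 (mod n), found by brute force here;
    this period-finding step is the part a quantum computer accelerates."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_factor(n, a=2):
    """Classical post-processing of Shor's algorithm on a toy modulus."""
    r = order(a, n)
    if r % 2:
        return None                      # odd period: retry with another base
    f = gcd(pow(a, r // 2) - 1, n)
    return f if 1 < f < n else None      # trivial factor: retry

print(shor_factor(15))  # period of 2 mod 15 is 4, so gcd(2**2 - 1, 15) = 3
```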

7:50

Famous professors and famous universities have a huge disparity of opinion as to when a quantum computer of that size could be built. Some say 5-10 years, others say up to 50.

8:45

What does a quantum computer look like? It’s easier to describe architecturally than physically. A quantum computer isn’t that much different from a classical computer, it’s simply a co-processor that has to co-exist with current forms of digital electronics.

9:15

If you look at Shor’s algorithm, there are a lot of familiar commands, like “if statements” and “for loops.” But, quantum gates, or quantum assembly language operations, are used in the quantum processor. (more about this)

10:00

Lee thinks that because a quantum gate operates in time instead of space, the term “gate” isn’t a great name.

10:30

What quantum computers exist today? Some have been built, but with only a few quantum bits. The current claim is that people have created quantum computers with up to 21 quantum bits. But, there are potentially a lot of errors and noise. For example, can they actually maintain a proper setup and hold time?

11:50

Continuing the Schrodinger’s Cat analogy…

In reality, if you have a piece of physics that you’ve managed to put into a superimposed quantum state, any disturbance of it (photon impact, etc.) will cause it to collapse into an unwanted state or to collapse too early.

13:15

So, quantum bits have to be highly isolated from their environments – in vacuums or at extremely cold temperatures (well below 1 kelvin!).

13:45

The research companies making big claims about the quantity of bits are not using solid state quantum computers.

The isolation of a quantum computer can’t be perfect, so there’s a limited lifetime for the computation before the probability of getting an error gets too high.

14:35

Why do we need a superposition of states? Why does it matter when the superimposed states collapse to one state? If it collapses at the wrong time, you’ll get a wrong answer. With Shor’s algorithm it’s easy to check for the right answer: you get either a remainder of 0 or you don’t. If you get 0, the answer is correct. The computation only has to be reliable enough for you to check the answer.

16:15

If the probability of getting the right answer is high enough, you can afford to get the wrong answer on occasion.

16:50

The probability of the state of a quantum bit isn’t just 50%, so how do you set the probability of the state? It depends on the physical system. You can write to a quantum bit by injecting energy into the system, for example using a very small number of photons as a pulse with a carefully controlled timing and phase.

18:15

Keysight helps quantum computer researchers generate and measure pulses with metrological levels of precision.

The pulses have to be very carefully timed and correlated with sub nanosecond accuracy. You need time synchronization between all the bits at once for it to be useful.

19:40

What is a quantum bit? Two common kinds of quantum bits are

1. Ions trapped in a vacuum with laser trapping. The ions can’t move because they are held in place by standing waves of laser beams. The vacuum can be at room temperature, but the ions are low temperature because they can’t move.

2. Josephson junctions in tank circuits (a coil and capacitor) produce oscillations at microwave frequencies. Under the right physical conditions, those can be designed to behave like an abstract two-state quantum system. You just designate zero and one to different states of the system.

Probabilities are actually the wrong description; they should be complex quantum amplitudes.

23:00

Josephson junctions were talked about in an earlier electrical engineering podcast discussing SI units.

23:40

After working with quantum computing, it’s common to walk away feeling a lot less knowledgeable.

24:30

Stupid question section:

“If you had Schrodinger’s cat in a box, would you look or not?”

Lee says the cat’s wave function really collapsed as it started to warm up so the state has already been determined.


What is Quantum Computing?- #15

Learn about the basics of quantum computing and quantum computers from Dr. Lee Barford. We discuss Schrodinger’s cat and more!

Hosted by Daniel Bogdanoff and Mike Hoffman, EEs Talk Tech is a twice-monthly electrical engineering podcast discussing tech trends and industry news from an electrical engineer’s perspective.

What is a quantum computer and what is quantum computing? In this week’s episode, Daniel Bogdanoff and Mike Hoffman are joined by quantum computing expert Lee Barford.

Video Version (YouTube):

Audio Only:

0:45 Intro

Lee Barford helps to guide Keysight into the quantum computing business + enables the quantum computing experts at Keysight

 

2:00 The importance of quantum computing

Clock rates in all types of digital processors stopped going up in 2006 due to heating limits

The processor manufacturers realized the need for more parallelism.

Today, Lee helps engineers at Keysight take advantage of this parallelism.

Graphics processors can be used as vector and matrix machines

Bitcoin utilizes this method.

 

6:00 The implications of advancements in quantum computing

Today, there are parts being made with digital transistor feature sizes of 10, maybe 7, nanometers (depending on who you believe)

So we are heading below 5 nanometers, and there aren’t many unit cells of silicon left at that point. (a unit cell of silicon is 0.5 nanometer)

The uncertainty principle comes into play, since there are so few atoms left that quantum mechanical effects will disturb the electronics.

There are many concerns including a superposition of states (Schrodinger’s cat) and low error tolerance.

 

10:20 Is Moore’s law going to fail? 

Quantum computing is one way of moving the computer industry past this barrier

Taking advantage of quantum mechanical effects, and engineering with them, to build a new kind of computer that, for certain problems, promises to do better than what we currently do.

 

15:20 Questions for future episodes:

What sort of technology goes into a quantum computer?

What’s the current state of experimentation?

What are some of the motivations for funding quantum computing research?

How is Keysight involved in this industry?

What problems is quantum computing aiming to solve?

 

17:30 Using quantum effects to our advantage

Quantum computers likely won’t be used in consumer devices because they require very low temperatures and/or a vacuum.

18:00

A quantum computer’s fundamental storage unit is the qubit (quantum bit). A qubit can be either 1 or 0, each with some finite probability

19:00
A quantum register can store multiple qubits; when read, it has some probability of yielding each of the stored numbers. A quantum register can hold more than one state at a time, but only one value can be read out.
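A minimal state-vector sketch of that idea, using a hypothetical Bell-like two-qubit state (not any specific hardware): the register holds complex amplitudes for every basis state at once, but a read-out collapses it to a single value with probability equal to the squared amplitude.

```python
import math, random

# A 2-qubit register holds 4 complex amplitudes at once, but a measurement
# collapses it to a single 2-bit value with probability |amplitude|^2.
amps = [1 / math.sqrt(2), 0, 0, 1 / math.sqrt(2)]   # Bell-like state |00> + |11>

probs = [abs(a) ** 2 for a in amps]
print([round(p, 2) for p in probs])  # [0.5, 0.0, 0.0, 0.5]

def measure(amplitudes, rng=random.Random(0)):
    """Collapse: pick one basis state, weighted by |amplitude|^2."""
    return rng.choices(range(len(amplitudes)),
                       weights=[abs(a) ** 2 for a in amplitudes])[0]

print(format(measure(amps), "02b"))  # always '00' or '11' for this state
```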

21:00 How does one get a useful value out of a quantum register? You do as much of the computation as possible before reading the state, and then read the quantum computer’s quantum register.

This works because the quantum computer either has such a high probability of being correct that you don’t need to verify the answer, or it’s simple to double-check whether the answer is correct.

22:30 Quantum computers can factor very large numbers (breaking RSA in cryptography)


The World’s Fastest ADC – #13

Learn about designing the world’s fastest ADC in today’s electrical engineering podcast! We sit down with Mike to talk about ADC design and ADC specs. Hosted by Daniel Bogdanoff and Mike Hoffman, EEs Talk Tech is a twice-monthly engineering podcast discussing tech trends and industry news from an electrical engineer’s perspective.

 

We talk to ASIC Planner Mike Beyers about what it takes to design the world’s fastest ADC in today’s electrical engineering podcast.

Video Version (YouTube):

 

Audio Only:

Intro:
Mike is an ASIC planner on the ASIC Design Team.

Prestudy, learn about making an ASIC.

00:30

What is an ADC?

An ADC is an analog-to-digital converter; it takes analog data inputs and provides digital data outputs.

What’s the difference between analog and digital ASICs?

1:00
There are three types of ASICs:
1. Signal conditioning ASICs
2. Converters, either digital-to-analog (DAC) or analog-to-digital (ADC), which sit between 1 and 3
3. Signal processing ASICs, also known as digital ASICs

1:50
Signal conditioning ASICs can be very simple or very complicated
e.g. stripline filters are simple; the front end of an oscilloscope can be complicated

2:45
There’s a distinction between a converter and an analog chip with some digital functionality.
A converter has both digital and analog sections. But there are also analog chips with just a digital interface, like an I2C or SPI interface.

4:25
How do you get what’s happening into the analog world onto a digital interface, and how fast can you do it?

4:35
Mike Hoffman built a basic ADC in school using a chain of operational amplifiers (op-amps)
A ladder converter, or “thermometer code” converter, is the most basic of ADC designs
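A sketch of that thermometer-code idea, assuming an idealized 3-bit flash converter with an evenly spaced resistor ladder (the function and values are illustrative, not a real part):

```python
# Sketch of a flash ("thermometer code") ADC: a resistor ladder sets 2^N - 1
# reference levels, and one comparator per level fires when the input exceeds it.
def flash_adc(vin, vref=1.0, bits=3):
    n_levels = 2 ** bits - 1
    thresholds = [vref * (i + 1) / (n_levels + 1) for i in range(n_levels)]
    thermometer = [vin > t for t in thresholds]   # e.g. [T, T, T, T, F, F, F]
    return sum(thermometer)                       # count of tripped comparators

print(flash_adc(0.6))  # 0.6 V input with a 1 V reference -> code 4 of 7
```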

6:00
A slow ADC can use single ended CMOS, a faster ADC might use parallel LVDS, now it’s almost always SERDES for highest performance chips

6:35
The world’s fastest ADC?

6:55
Why do we design ADCs? We usually don’t make what we can buy off the shelf.

The Nyquist rate determines the necessary sample rate. For example, a 10 GHz signal needs to be sampled at 20-25 gigasamples per second.
At 25 GSa/s, 1 / (25 GSa/s) = 40 ps between samples
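The arithmetic above can be sketched directly (a simple Nyquist calculation, assuming a 2x sampling margin):

```python
# Nyquist criterion: sample at more than twice the highest signal frequency.
def min_sample_rate(f_max_hz, nyquist_factor=2.0):
    """Minimum sample rate for a given maximum signal frequency."""
    return nyquist_factor * f_max_hz

print(min_sample_rate(10e9) / 1e9)  # 20.0 GSa/s minimum for a 10 GHz signal
print(1 / 25e9)                     # 4e-11 s = 40 ps between samples at 25 GSa/s
```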

8:45
ADC Vertical resolution, or the number of bits.

So, ADCs generally have two main specs, speed (sample rate) and vertical resolution.

9:00
The ability to measure time very accurately is often most important, but people often miss the noise side of things.

9:45
It’s easy to oversimplify into just two specs, but there’s more that has to be considered: specifications like bandwidth, frequency flatness, noise, and SFDR

10:20
It’s much easier to add bits to an ADC design than it is to decrease the ADC’s noise.

10:42
Noise floor, SFDR, and SNR measure how good an analog to digital converter is.

SFDR means “spurious free dynamic range” and SNR means “signal to noise ratio”
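One way to quantify the bits-versus-noise trade-off is the well-known ideal SNR of an N-bit converter, roughly 6.02·N + 1.76 dB for a full-scale sine limited only by quantization noise; real converters fall short of this once noise and spurs are included. A small sketch:

```python
# Ideal SNR of an N-bit converter limited only by quantization noise,
# for a full-scale sine input: SNR ~ 6.02*N + 1.76 dB.
def ideal_snr_db(bits):
    return 6.02 * bits + 1.76

for n in (8, 10, 12):
    print(f"{n} bits -> {ideal_snr_db(n):.2f} dB ideal")
```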

11:00
Other things you need to worry about are error codes, especially for instrumentation.

For some ADC folding architectures and successive approximation architectures, there can be big errors. This is acceptable for communication systems but not for visualizing equipment.

12:30
So, there are a lot of factors to consider when choosing an ADC.

12:45
Where does ADC noise come from? It comes from both the ADC and from the support circuitry.

13:00
We start with a noise budget for the instrument and allocate the budget to different blocks of the oscilloscope or instrument design.

13:35
Is an ADC the ultimate ASIC challenge? It’s both difficult analog design and difficult high-speed digital design, so we have to use fine geometry CMOS processes to make it happen.

15:00
How fast are our current ADCs? 160 Gigasamples per second.

15:45
We accomplish that with a chain of ADCs, not just a single ADC.

16:15
ADC interleaving. If you think about it simply, if you want to double your sample rate you can just double the number of ADCs and shift their sampling clocks.

But this has two problems. First, they still have the same bandwidth, you don’t get an increase. Second, you have to get a very good clock and offset them carefully.
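The basic merge step of two-way interleaving can be sketched as follows. This is an idealized model that deliberately ignores the bandwidth and clock-matching problems just mentioned; the sample values are arbitrary:

```python
# Two-way time interleaving: each sub-ADC samples on alternating clock phases,
# doubling the aggregate sample rate. (Bandwidth does not double, and any
# gain/offset mismatch between the sub-ADCs shows up as spurs.)
def interleave(adc_even, adc_odd):
    """Merge two half-rate sample streams into one full-rate stream."""
    out = []
    for e, o in zip(adc_even, adc_odd):
        out.extend([e, o])
    return out

signal = [0, 1, 2, 3, 4, 5, 6, 7]
even = signal[0::2]   # sub-ADC A samples t = 0, 2, 4, ...
odd = signal[1::2]    # sub-ADC B samples t = 1, 3, 5, ...
print(interleave(even, odd))  # [0, 1, 2, 3, 4, 5, 6, 7] -- original order restored
```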

17:00
To get higher bandwidth, you can use a sampler, which is basically just a very fast switch with higher bandwidth that then delivers the signal to the ADCs at a lower bandwidth

But, you have to deal with new problems like intersymbol interference (ISI).

18:20
So, what are the downsides of interleaving?

Getting everything to match up is hard, so you have to have a lot of adjustability to calibrate the samplers.

For example, if the quantization levels of one ADC are higher than the other’s, you’ll get a lot of problems, like gain-mismatch spurs in the frequency domain.

We can minimize this with calibration and some DSP (digital signal processing) after the capture.
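A toy model of that effect (numbers invented): if the second ADC’s gain is 1% high, the combined record is the true signal amplitude-modulated at fs/2, which shows up as a spur at fs/2 − fin. A per-slice gain trim in DSP collapses it.

```python
import numpy as np

fs, n, fin = 1024.0, 1024, 100.0        # 1 s record -> 1 Hz per FFT bin
t = np.arange(n) / fs
x = np.sin(2 * np.pi * fin * t)

y = x.copy()
y[1::2] *= 1.01                          # ADC B's gain is 1% high

spec = np.abs(np.fft.rfft(y)) / (n / 2)
spur_bin = int(fs / 2 - fin)             # interleave image at 412 Hz

y_cal = y.copy()
y_cal[1::2] /= 1.01                      # DSP gain trim from calibration
spec_cal = np.abs(np.fft.rfft(y_cal)) / (n / 2)

print(f"spur before cal: {spec[spur_bin]:.4f}, after: {spec_cal[spur_bin]:.2e}")
```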

20:00
Triple interleaving and double interleaving – the devil is in the details

21:00
Internally, our ADCs are made up of a number of slices of smaller, slower ADC blocks.

21:15
Internally, we have three teams. An analog ASIC team, a digital ASIC team, and also an ADC ASIC team.

22:15
Technology for ADCs is “marching forward at an incredible rate”

The off-the-shelf ADC technologies are enabling new applications like 5G, 100G/400G/1T Ethernet, and advanced DSP.

23:00
Is processing driven by ADCs, or are ADCs advancing processor technology? Both!

24:00
Predictions?

Mike H.: New “stupid question for the guest” section
What is your favorite sample rate and why?
400 MSa/s – one of the first scopes Mike B. worked on. Remember “4 equals 5”

Copper vs. Fiber Optic Cable and Optical Communication Techniques – #11

Stefan Loeffler discusses the latest optical communication techniques and advances in the industry, as well as the use of fiber optic cable in electronics and long-range telecommunication networks. Hosted by Daniel Bogdanoff and Mike Hoffman, EEs Talk Tech is a twice-monthly engineering podcast discussing tech trends and industry news from an electrical engineer’s perspective.

Mike Hoffman and Daniel Bogdanoff continue their discussion with Stefan Loeffler about optical communication. In the first episode, we looked at “what is optical communication?” and “how does optical communication work?” This week we dig deeper into some of the latest optical communication techniques and advances in the industry as well as the use of fiber optic cable in electronics and long-range telecommunication networks.

Video version (YouTube):

 

Audio Version:

 

Discussion Overview:

 

Installation of optical fiber and maintenance of optical fiber

We can use optical communication techniques such as phase multiplexing

There’s a race between using more colors and higher bitrates to increase data communication rates.

Erbium-doped fiber amplifiers can amplify multiple channels at different colors on the same optical fiber.

You can use up to 80 colors on a single fiber optic channel! 3:52

How is optical communication similar to RF? Optical communication is a lot like WiFi 4:07

Light color in optical fiber is the equivalent of carrier frequencies in RF
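The “color = carrier” analogy in numbers (band edges and spacing are approximate, used here only for illustration): DWDM channels sit on a frequency grid just like RF carriers, and each grid frequency maps to a wavelength via c = λf.

```python
C = 299_792_458.0                     # speed of light, m/s

def wavelength_nm(freq_thz: float) -> float:
    """Convert an optical carrier frequency in THz to wavelength in nm."""
    return C / (freq_thz * 1e12) * 1e9

# Approximate optical C-band edges and the common 50 GHz DWDM grid spacing
f_lo, f_hi, spacing_ghz = 191.6, 195.9, 50.0
n_channels = round((f_hi - f_lo) * 1000 / spacing_ghz) + 1

print(n_channels, "channels in the C-band at 50 GHz spacing")   # ~87
print(f"ITU anchor 193.1 THz = {wavelength_nm(193.1):.2f} nm")  # ~1552.52 nm
```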

 

How do we increase the data rate in optical fiber?

There are many multiplexing methods such as multicore, wavelength division, and polarization 4:50

Practically, only two polarization modes can be used at once. The limiting factor is the separation technology on the receiver side. 6:20

But, this still doubles our bandwidth!

What about dark fiber? Dark fiber is the physical piece of optical fiber that is unused. 7:07

Lighting up dark fiber in an existing cable is the first step to increasing fiber optic bandwidth.

But wavelengths can also be added.

Optical C-band vs L-band 7:48

Optical C-band was the first long-distance band. It is now joined by the L-band.

Is there a difference between using different colors and different wavelengths?

Optical fibers are a light show for mosquitos! 8:30

 

How do we fix optical fibers? 10:36

For short distances, an OTDR (optical time-domain reflectometer) or a visual fault locator is often used; the fault locator sends red light into the fiber, which lights up where there’s a break.

 

Are there other ways to extend the amount of data we can push through a fiber? 11:35

Pulses per second can be increased, but we will eventually bleed into neighboring channels

Phase modulation is also used

PAM-4 comes into play with coding (putting multiple bits in a symbol)

And QAM which relies on both amplitude and phase modulation
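A small sketch of “more bits per symbol” (the Gray-coded bit-to-level mapping below is conventional, not taken from the episode): PAM-4 packs 2 bits into one of 4 amplitude levels, and 16-QAM packs 4 bits into a point with both amplitude and phase.

```python
# Gray-coded PAM-4: 2 bits -> one of four levels {-3, -1, 1, 3}
PAM4 = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}

def pam4_encode(bits):
    assert len(bits) % 2 == 0
    return [PAM4[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

print(pam4_encode([0, 0, 1, 1, 1, 0, 0, 1]))   # [-3, 1, 3, -1]

# 16-QAM: one PAM-4 level on I, one on Q -> 16 amplitude/phase points
def qam16_encode(bits):
    assert len(bits) % 4 == 0
    return [complex(PAM4[(bits[i], bits[i + 1])],
                    PAM4[(bits[i + 2], bits[i + 3])])
            for i in range(0, len(bits), 4)]

print(qam16_encode([0, 0, 1, 1]))   # [(-3+1j)]
```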

PAM-4 test solutions

How do we visualize optical fibers?  14:05

We can use constellation diagrams which plot magnitude and phase

 

Do we plan for data error? 15:00

Forward error correction is used, but this redundancy involves significant overhead
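The overhead math is simple to sketch: an (n, k) block code transmits n symbols for every k data symbols. Using the classic Reed-Solomon RS(255, 223) code as an example (the code choice is illustrative, not from the episode):

```python
def fec_overhead_pct(n: int, k: int) -> float:
    """Redundancy overhead of an (n, k) block code, as a percentage."""
    return (n - k) / k * 100.0

def effective_data_rate(line_rate_gbps: float, n: int, k: int) -> float:
    """Payload throughput left after FEC overhead at a given line rate."""
    return line_rate_gbps * k / n

print(f"RS(255,223) overhead: {fec_overhead_pct(255, 223):.1f}%")       # ~14.3%
print(f"100 Gb/s line rate carries "
      f"{effective_data_rate(100, 255, 223):.1f} Gb/s of payload")      # ~87.5
```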

 

QAM vs PAM

64 Gbaud (QAM-64) was the buzzword at OFC 2017 16:52

PAM is used for shorter links while QAM is used for longer links

 

How do we evaluate fiber? 18:02

We can calculate cost per managed bit and energy per managed bit

Energy consumption is a real concern 18:28

 

The race between copper and fiber 19:13

Fiber wins on long distance because of power consumption

But does fiber win on data rate?

Google Fiber should come to Colorado Springs…and Germany!

To compensate for signal loss over distance, you push more power into transmitting and detecting.

Fibers attenuate the signal much less than copper does

But the problem comes when we have to translate the signal back into electrical on the receiving end
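The attenuation gap can be made concrete (the fiber figure of ~0.2 dB/km at 1550 nm is typical for single-mode fiber; the copper figure is a rough stand-in for coax loss at multi-GHz rates, assumed here for illustration):

```python
FIBER_DB_PER_KM = 0.2       # typical single-mode fiber at 1550 nm
COPPER_DB_PER_KM = 100.0    # assumed coax loss at high frequency (illustrative)

def loss_db(db_per_km: float, km: float) -> float:
    """Total attenuation over a link of the given length."""
    return db_per_km * km

for km in (0.1, 1, 10, 80):
    print(f"{km:>5} km: fiber {loss_db(FIBER_DB_PER_KM, km):6.1f} dB, "
          f"copper {loss_db(COPPER_DB_PER_KM, km):8.1f} dB")
```

At 80 km the fiber loses only ~16 dB, while any copper medium is hopeless; at tens of meters, copper stays competitive because the electrical-optical conversion cost dominates.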

Is there a break-even point with fiber and copper? 22:15

 

Optical communication technology in the future

What speed are we at now and what’s the next technology? 23:05

600 G technology will be here eventually

We can expect 1.5 years between iterations in bandwidth. This is really slow in terms of today’s fast-paced technology.

We typically see 100 G speeds today

 

Predictions 26:00

 

All About ASICs – #10

Chip sage and ASIC planner Mike Beyers discusses the challenges and trends in integrated circuit design in this week’s electrical engineering podcast.

Hosted by Daniel Bogdanoff and Mike Hoffman, EEs Talk Tech is a twice-monthly engineering podcast discussing tech trends and industry news from an electrical engineer’s perspective.

The future will be built using ASICs! Daniel Bogdanoff and Mike Hoffman sit down with chip sage and planner Mike Beyers to discuss the challenges of building custom application specific integrated circuits. This podcast was inspired by the blog post “Creating an ASIC – Our Quest to Make the Best Cheap Oscilloscope.”

Video version (YouTube):

 

Audio version:

Discussion Overview:

We’re finally a real podcast now!

What is an ASIC? An ASIC is an application specific integrated circuit, an IC designed for a specific task.

Why do we use ASICs?

ASIC architecture 101 2:46
The main specification people talk about is the size of the smallest feature you can find on a chip – like the gate of a CMOS transistor

Effective gate length is shorter than the drawn gate length because of the manufacturing process.

Another key spec is how many transistors you can fit in a square millimeter
Metal layers for interconnects are also important, but more layers make the mask sets more expensive
Do we care more about a gate’s footprint or its depth? 4:11

Will Moore’s Law hit a ceiling? 4:29
What about using three dimensional structures? 5:37
Is Moore’s Law just a marketing number? 5:51
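Whatever one calls it, the arithmetic behind the headline number is simple compounding (the 2-year doubling period below is the textbook figure, used purely for illustration):

```python
def density_growth(years: float, doubling_period_years: float = 2.0) -> float:
    """Transistor-density multiple after a span of years, assuming
    doubling every `doubling_period_years` (classic Moore's-law figure)."""
    return 2.0 ** (years / doubling_period_years)

print(f"10 years: {density_growth(10):.0f}x")   # 32x
print(f"20 years: {density_growth(20):.0f}x")   # 1024x
```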

Does technology ever slow down? 6:29

Power is often the largest limiter 6:58
Google builds data centers next to hydroelectric dams 7:34
Battery power 7:43
Power drives cost 7:53

How does the power problem affect ASICs? 8:25
There are power integrity and thermal management concerns
Dedicated routes on an ASIC vs switching on an FPGA 8:14

Who actually uses ASICs? 10:14
IoT technology – 7 nm and 14 nm chips

A lot of people are using older technology because it’s much more affordable (like 45 nm)

ASICs on your bike could be a thing? 11:16
SRAM wireless electronic bike shifters 11:57
Is bike hacking a real thing? Yes! Encrypted wireless communication helps prevent it.

Is an opamp (operational amplifier) an ASIC?

What to consider when investing in an ASIC 13:23
What’s the next best alternative to building this ASIC?
With an ASIC, you can often drive lower cost while also increasing performance and reliability
Is there a return on investment? 14:24

What happens when Moore’s Law hits a dead end with transistors? 14:46
Could we replace electrical with optical? 15:30
Is it possible that there are other fundamental devices out there, waiting to be discovered? 16:20
The theoretical fourth device, the memristor 17:00

Will analog design ever die? Mike was told to get into digital design.

Non-binary logic could be the future 18:23

If someone wants an ASIC, how do they get one? 18:50
In-house design vs. external fabs/foundries, total turnkey solutions vs. the foundry model

You can get a cheaper chip by going to a larger process geometry, but the chip will run hotter and slower.

RTL – the most common languages are Verilog and VHDL, vs. higher-level languages like C 22:50

Behavioral Verilog vs. Structural Verilog 24:00

The history of Keysight ASICs 25:45

Predictions 28:40
How to connect with us 29:00