IC Packaging – #37

Packaging engineers are the unsung heroes of the IC world. Packaging expert Jesse Rebeck sits down to explore the complexities of IC packaging.

The pictures I promised:

The UXR Amplifier Fanout Package:

Bert Signal Conditioning Hybrid Packaging:

UXR Data Processor Flip Chip Packaging:

New 110 GHz Oscilloscope – UXR Q&A #35

Brig Asay, Melissa, and Daniel Bogdanoff sit down to answer the internet’s questions about the new 110 GHz UXR oscilloscope. How long did it take? What did it cost? Find out!

Some of the questions & comments

S K on YouTube: How long does it take to engineer something like this? With custom ASICs all over the place and what not…

Glitch on YouTube: Can you make a budget version of it for $99?

Steve Sousa on YouTube: But how do you test the test instrument?? It’s already so massively difficult to make this, how can you measure and qualify it’s gain, linearity etc?

TechNiqueBeatz on YouTube: About halfway through the video now.. what would the practical application(s) of an oscilloscope like this be?

Alberto Vaudagna on YouTube: Do you know what happen to the data after the dsp? It go to the CPU motherboard and processed by the CPU or the data is overlayed on the screen and the gui is runner’s by the CPU?

How does a piece of equipment like that get delivered? I just don’t think UPS or Fedex is going to cut it for million+ dollar prototype. It would be nice to see some higher magnification views of the front end.

Ulrich Frank: Nice sturdy-looking handles at the side of the instrument – to hold on to and keep you steady when you hear the price…

SAI Peregrinus: That price! It costs less than half the price of a condo in Brooklyn, NY! (Search on Zillow, sort by price high to low. Pg 20 has a few for $2.7M, several of which are 1 bedroom…)

RoGeorgeRoGeorge: Wow, speechless!

R Bhalakiya: THIS IS ALL VOODOO MAGIC

Maic Salazar Diagnostics: This is majestic!!

Sean Bosse: Holy poop. Bet it was hard keeping this quiet until the release.

jonka1: Looking at the front end it looks as if the clock signal paths are of different lengths. How is phase dealt with? Is it in this module or later in software?

cims: The Bugatti Veyron of scopes with a price to match, lol

One scope to rule them all…wow! Keyesight drops the proverbial mic with this one

Mike Oliver: That is a truly beautiful piece of equipment. It is more of a piece of art work than any other equipment I have ever seen.

Gyro on EEVBlog: It’s certainly a step change in just how bad a bad day at the office could really get!

TiN: I have another question, regarding the input. Are there any scopes that have waveguide input port, instead of very pricey precision 1.0mm/etc connectors? Or in this target scope field, that’s not important as much, since owner would connect the input cable and never disconnect? Don’t see those to last many cable swaps in field, even 2.4mm is quite fragile.

User on EEVBlog: According to the specs, It looks like the 2 channel version he looked at “only” requires 1370 VA and can run off 120V.  The 4 channel version only works off 200-240V

The really interesting question: how do they calibrate that calibration probe.
They have to characterize the imperfections in it’s output to a significantly better accuracy than this scope can measure.  Unless there’s something new under the sun in calibration methodology?

Mikes Electric Stuff‏ @mikelectricstuf: Can I get it in beige?

Yaghiyah‏ @yaghiyah: Does it support Zone Triggering?

User on Twitter:

It’ll be a couple paychecks before I’m in the market, but I’d really be interested in some detail on the probes and signal acquisition techniques. Are folks just dropping a coax connector on the PCB as a test point? The test setup alone has to be a science in itself.

I’d also be interested in knowing if the visiting aliens that you guys mugged to get this scope design are alive and being well cared for.

Hi Daniel, just out of curiosity and within any limits of NDAs, can you go into how the design process goes for one of these bleeding-edge instruments? Mostly curious how much of the physical design, like the channels in the hybrid, are designed by a human versus designed parametrically and synthesized

One Protocol to Rule Them All!? – #34

USB Type-C brings a lot of protocols into one physical connector, but is there room for one protocol to handle all our IO needs? Mike Hoffman and Daniel Bogdanoff sit down with high speed digital communications expert Jit Lim to find out.

 

0:00 This is Jit’s 3rd podcast of the series

1:00 We already have one connector to rule them all with USB Type-C, but it’s just a connector. Will we ever have one specification to rule them all?

2:00 Prior to USB Type-C, each protocol required its own connector. With USB Type-C, you can run multiple protocols over the same physical connector.

3:00 This would make everything simpler for engineers; they would only need to test and characterize one technology.

3:30 Jit proposes a “Type-C I/O”

4:00 Thunderbolt already allows DisplayPort to tunnel through it

4:30 Thunderbolt already has a combination of capabilities. It has a display mode – you can buy a Thunderbolt display. This means you can run data and display using the same technology

6:30 There’s a notion of muxed signals

7:00 The PHY speed is the most important. Thunderbolt is running 20 Gb/s

7:15 What would the physical connection look like? Will the existing USB Type-C interface work? Currently we already see 80 Gb/s ports (4 lanes) in existing consumer PCs

9:20 Daniel hates charging his phone without fast charging

9:40 The USB protocol is for data transfer, but is there going to be a future USB display protocol? There are already some audio and video modes in current USB, like a PC headset

11:30 Why are we changing? The vision is to plug it in and have it “just work”

12:00 Today, standards groups are quite separate. They each have their own ecosystems that they are comfortable in. So, this is a big challenge for getting to a single spec

13:15 Performance capabilities, like cable loss, are also a concern and a challenge

14:00 For a tech like this to exist, will the groups have to merge? Or will someone just come out with a spec that obsoletes all of the others?

15:30 Everyone has a cable hoard. Daniel’s is a drawer, Mike’s is a shoebox

16:30 You still have to be aware of the USB Type-C cables that you buy. There’s room for improvement

17:30 Mike wants a world of only USB Type-C connectors and 3.5mm headphone jacks

18:30 From a test and measurement perspective, it’s very attractive to have a single protocol. You’d only have to test at one rate, one time

19:30 Stupid questions

USB 3.2 + Why You Only Have USB Ports On One Side of Your Laptop – #32

USB 3.2 DOUBLES the data transfer capabilities of previous USB specifications, and could mean the end of having USB ports on just one side of your computer. Find out more in today’s electrical engineering podcast with Jit Lim, Daniel Bogdanoff, and Mike Hoffman.

1:00
Jit is the USB and Thunderbolt lead for Keysight.

1:30
USB 3.2 specifications were released in Fall 2017 and introduced two main capabilities.

USB 3.2 doubles the performance of USB 3.1. You can now run 10 Gb/s x2. It uses both sides of the Type-C connector.

In the x2 mode, both sides of the connector are used instead of just one.
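
For a rough feel of what the doubling buys, here is a back-of-the-envelope sketch (the 128b/132b line-code efficiency is an assumption about the Gen 2 signaling, not a figure from the episode):

```python
# Back-of-the-envelope math for the x2 mode described above (illustrative only).
LANE_RATE_GBPS = 10              # 10 Gb/s per lane
LANES = 2                        # x2 mode drives both sets of SuperSpeed pairs
ENCODING_EFFICIENCY = 128 / 132  # assumed 128b/132b line coding

raw_rate = LANE_RATE_GBPS * LANES
payload_rate = raw_rate * ENCODING_EFFICIENCY
print(f"raw: {raw_rate} Gb/s, payload after line coding: {payload_rate:.1f} Gb/s")
# -> raw: 20 Gb/s, payload after line coding: 19.4 Gb/s
```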

4:00
The other new part of USB 3.2 is that it adds the ability to have the USB silicon farther away from the port. It achieves this using retimers, which make up for the lossy transmission channel.

5:00
Why laptops only have USB ports on one side! The USB silicon has to be close to the connector.

6:30
If the silicon is 5 or 6 inches away from the connector, it will fail the compliance tests. That’s why we need retimers.

7:15
USB is very good at maintaining backwards compatibility

The USB 3.0 spec and the USB 3.1 spec no longer exist. It’s only USB 3.2.

The USB 3.2 specification includes the 3.0 and 3.1 specs as part of it; they act as special modes.

9:00
From a protocol-layer and PHY-layer perspective, not much has changed; USB 3.2 simply adds new communication abilities.

9:55
Who is driving the USB spec? There’s a lot of demand! USB Type-C is very popular for VR and AR.

12:00
There’s no benefit to using legacy devices with modern USB 3.2 ports.

13:45
There’s a newly released variant of USB Type-C that does not have USB 2.0 support. It repurposes the USB 2.0 pins. It won’t be called USB, but it’ll essentially be the same thing. It’s used for a new headset.

15:20
USB Type-C is hugely popular for VR and AR applications. You can send data, video feeds, and power.

17:00
Richie’s Vive has an audio cable, a power cable, and an HDMI cable. The new version, though, has a USB Type-C that handles some of this.

18:00
USB 3.2 will be able to put a retimer on a cable as well. You can put one at each end.

What is a retimer? A retimer is used when a signal traverses a lossy board or transmission line. A retimer acquires the signal, recovers it, and retransmits it.

It’s a type of repeater. Repeaters can be either redrivers or retimers. A redriver just re-amplifies a signal, including any noise. A retimer does a full data recovery and re-transmission.
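
A toy model of that distinction (purely illustrative Python, not from the episode or any real device): the redriver scales the degraded waveform, noise included, while the retimer slices each bit back to clean logic levels before retransmitting.

```python
import random

def redriver(samples, gain=2.0):
    """Re-amplify the waveform as-is: signal and accumulated noise both grow."""
    return [gain * s for s in samples]

def retimer(samples, threshold=0.5):
    """Recover the data by slicing, then retransmit clean logic levels.
    (A real retimer also recovers the clock to realign bit timing.)"""
    return [1.0 if s > threshold else 0.0 for s in samples]

# A noisy, attenuated NRZ bit stream after a lossy channel (toy model).
bits = [1, 0, 1, 1, 0, 0, 1, 0]
received = [0.6 * b + random.gauss(0, 0.05) for b in bits]

print("redriver out:", [round(s, 2) for s in redriver(received)])
print("retimer out: ", retimer(received))  # clean 1.0 / 0.0 levels again
```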

21:20
Stupid Questions:
What is your favorite alt mode, and why?
If you could rename Type-C to anything, what would you call it?

Battlebots 2018 & the Hardcore Robotics Team – #27

“I tend to not turn Tombstone on outside of the arena. It scares the crap out of me…” – Ray Billings, Hardcore Robotics team captain. We sit down with BattleBots’ resident bad boy to talk about the engineering behind the world’s meanest fighting robots. We also talk robot carnage. Because we know you’re really here for robot carnage.

Agenda:

00:03 Ray Billings leads the Hardcore Robotics Battlebots team, and is the “resident villain” on Battlebots.

00:40 Mike went to high school with Ray’s son

01:15 Ray’s robot, “Tombstone,” is ranked #1 on the BattleBots circuit. Highlights here.

1:34 The winner’s trophy for BattleBots is a giant nut.

2:00 Ray doesn’t turn on the robot very often outside of the arena

2:35 Ray’s carnage story: he bent a 1” thick titanium plate

3:20 You have to see combat robots live to get the full experience

4:10 The first match of Battlebots 2018 should be one of the most epic Battlebots fights of all time

4:30 Ray has done over 1,000 combat robot matches in 17 years

5:00 How Ray got into Battlebots

6:25 The main robot is called an offset horizontal spinner. It spins a 70-75 lb bar at 2500 rpm.

7:40 The body is 4130 chromoly tubing. The drive motors were intended for an electric wheelchair, and the weapons motor is from an electric golf cart.

8:20 Normal electrical motors are not designed to work for combat robots. Ray significantly stresses the motors.

8:50 The weapon motor was designed to be used at 48V 300A, but Ray uses it at 60V and 1100A (at spinup). This would overheat and destroy the motor, so it shouldn’t be done long-term.

9:40 – 70-80kW at spinup, and no start capacitor. He just uses a big marine relay.

10:00 Ray’s robot has 1 second to be lethal

10:30 If there’s a motor-stall potential mid match, Ray will turn off the motor to save batteries/electronics

11:00 What’s the weak point of Ray’s robot? One match, the weapon bar snapped in half.

11:40 Ray uses tool-grade steel, so it won’t bend, it’ll just snap.

12:40 The shock loads can break the case. The weapon motor looks like it’s rigidly mounted, but because it’s on a titanium plate it has some shock absorption. There’s also a clutch system in the sprocket to help offset shock.

13:40 Ray’s robot has to take all of the force that the opponents’ robots do (equal and opposite), but if you know which direction a hit will come from, you can design in protection against it.

14:40 What test challenges were faced during assembly and design?

It’s been highly iterated. There are no shortcuts for designing combat robots. You have to see where something breaks, then adjust.

15:45 When Ray started in 2004, his robot was just a “middle of the pack” robot. With years of iteration, it’s now a class-dominant robot.

16:45 Ray spins up the robot at least once before a competition. It’ll pick up debris from the ground and throw it around.

17:50 Battery technology and batteries for combat robots: originally they used lead-acid batteries for their ability to deliver current. Now, almost everyone uses lithium chemistries. The sport is about power-to-weight ratio, so the lighter batteries have given people much more flexibility.

19:00 Why aren’t there gas powered combat robots? There are some that have flamethrowers, and there are a couple gas powered ones. However, they aren’t as dependable.

20:15 Ray has wrecked arenas. The arena rails are 1/2” steel, and Ray can cut a soda-can sized hole in them. He’s wrecked panels and ceiling lights.

21:20 Combat robot communication systems: today everything runs on 2.4 GHz digitally encoded systems. They often use RC plane controls because they are highly customizable and there are a lot of available channels.

22:00 Drive systems: the wheels & motors come together. They use a hard foam in the tires so you can’t get a flat.

22:45 Centrifugal force – not a huge problem because the blade spins in-plane. But when the robot gets bumped up, the blade fights gravity before it can self-right.

24:40 The Hardcore Robotics team is three people: Ray, his son Justin, and his friend Rick. Rick used to run his own team, but has more fun fabricating and building robots than he does driving them.

25:30 There will be 6 fights per hour, and the show will premiere May 11th on the Discovery Channel and the Science Channel.

26:15 The first fight got leaked in some promo footage, Tombstone vs. Minotaur.

26:35 Would Ray rather fight a good robot or a bad one? Ray says “anyone.”

Battlebots 2018 (season 3) will have “fight card” fights, then a playoff of the top 16 robots.

27:50 A given frame only lasts an event or two before needing to be replaced. This many fights is really hard on the robot.

29:20 Combat robot kits are a great way to get into the sport, especially ant-weight and beetle weight kits.

30:00 Stupid questions

31:15 Ray wants to try a new hammer robot, a full-shell spinner, and a vertical spinner.

32:40 Support Ray by getting Hardcore Robotics gear from battlebots.com and the toys from Target, Amazon, hexbugs, etc.

33:15 Ray is also an engineer at Intel.

Memory, DDR5+, and JEDEC – #24

“It’s a miracle it works at all.” In this electrical engineering podcast, we discuss the state of memory today and its inevitable march into the future.

Hosted by Daniel Bogdanoff and Mike Hoffman, EEs Talk Tech is a twice-monthly engineering podcast discussing tech trends and industry news from an electrical engineer’s perspective.

“It’s a miracle it works at all.” Not the most inspiring words from someone who helped define the latest DDR spec. But that’s the state of today’s memory systems. Closed eyes and mV voltage swings are the topic of today’s electrical engineering podcast. Daniel Bogdanoff (@Keysight_Daniel) and Mike Hoffman sit down with Perry Keller to talk about the state of memory today and its inevitable march into the future.

Agenda:

00:00 Today’s guest is Perry Keller, who works a lot with standards committees and with making next-generation technology happen.

00:50 Perry has been working with memory for 15 years.

1:10 He also did ASIC design, project management for software and hardware

1:25
Perry is on the JEDEC board of directors

JEDEC is one of the oldest standards bodies, maybe older than the IEEE

1:50 JEDEC was established to create standards for semiconductors. This was an era when vacuum tubes were being replaced by solid state devices.

2:00 JEDEC started by working on instruction set standards

2:15 There are two main groups. A wide bandgap semiconductors group and a memory group.

3:00 Volatile memory vs. nonvolatile memory. An SSD is nonvolatile storage, like in a phone. But if you look at a DIMM in a PC that’s volatile.

3:40 Nonvolatile memory is everywhere, even in light bulbs.

4:00 Even a DRAM can hold its contents for quite some time. JEDEC had discussions about doing massive erases because spooks will try to recover data from it.

DRAM uses capacitors for storage, so the colder they are the longer they hold their charge.

4:45 DRAM is the last vestige of the classical wide single ended parallel bus. “It’s a miracle that it works at all.”

5:30 Perry showed a friend a GDDR5 bus and challenged him to get an eye on it and he couldn’t.

6:10 Even though DDR signals look awful, it depends on reliable data transfer. The timing and clocking is set up in a way to deal with all of the various factors.

7:00 DDR specifications continue to march forward. There’s always something going on in memory.

8:00 Perry got involved with JEDEC through a conversation with the board chairman.

8:35 When DDR started, 144 MT/s (megatransfers per second) was considered fast. But DDR5 has an end-of-life goal of 6.5 GT/s on an 80+ bit wide single-ended parallel bus.
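
To put those transfer rates in perspective, here is some rough arithmetic only (assuming a 64-bit data path; the 80+ bit bus above includes extra signals such as ECC):

```python
# Peak raw bandwidth = transfers per second * bits per transfer / 8 bits per byte.
def peak_bandwidth_gb_per_s(transfer_rate_mts, data_bits=64):
    return transfer_rate_mts * data_bits / 8 / 1000

print(peak_bandwidth_gb_per_s(144))    # early DDR era: ~1.2 GB/s
print(peak_bandwidth_gb_per_s(6500))   # 6.5 GT/s end-of-life goal: ~52 GB/s
```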

9:05 What are the big drivers for memory technology? Power. Power is everything. LPDDR – low power DDR – is a big push right now.

9:30 If you look at the memory ecosystem, the big activity is in mobile. Server applications are becoming focused on the cloud, but the new technology and investment is in mobile.

10:00 If you look at a DRAM, you can divide it into three major categories. Mainstream PC memory, low power memory, and GDDR. GDDR is graphics memory. The differences are in both power and cost.

For example, LPDDR uses static designs. You can clock it down to DC, which you can’t do with normal DDR.

The first DDR was essentially TTL compatible. Now, we’re looking at 1.1V power supplies and voltage swings in the mV.

Semiconductor technology is driving the voltages down to a large degree.

11:45 DRAM and GDDR are a big deal for servers.

A company from China tried to get JEDEC to increase the operating temperature range of DRAMs by 10 C. They fire up one new coal fired power plant per week in China to meet growing demand. They found they could cut it down to only 3 per month with this change in temperature specs.

13:10 About 5 years ago, the industry realized that simply increasing I/O speeds wouldn’t help system performance that much because the core memory access time hasn’t changed in 15 years. The I/O rate has increased, but basically they do that by pulling more bits at once out of the core and shifting them out. The latency is what really hurts at a system level.

14:15 Development teams say that their entire budget for designing silicon is paid for out of smaller electric bills.

15:00 Wide bandgap semiconductors are happy running at very high temperatures. If these temperatures end up in the data centers, you’ll have to have moon suits to access the servers.

16:30 Perry says there’s more interesting stuff going on in computing than he’s seen in his whole career.

The interface between different levels is not very smooth. The magic in a spinning disk is in the cache-optimizing algorithms. That whole 8-level structure is being re-thought.

18:00 Von Neumann architectures are not constraining people any more.

18:10 Anything that happens architecturally in the computing world affects and is affected by memory.

22:10 When we move from packaged semiconductors to 3D silicon we will see the end of DDR. The first successful step is called high bandwidth memory, which is essentially a replacement for GDDR5.

23:00 To move to a new DDR spec, you basically have to double the burst size.

Data Analytics for Engineering Projects – #23

Learn some best practices for engineering projects that have huge amounts of data. Data analytics tools are crucial for project success! Listen in on today’s EEs Talk Tech electrical engineering podcast.

It seems most large labs have a go-to data person. You know, the one who had to upgrade his PC so it could handle insanely complex Excel pivot tables? In large electrical engineering R&D labs, measurement data can often be inaccessible and unreliable.

In today’s electrical engineering podcast, Daniel Bogdanoff (@Keysight_Daniel) sits down with Ailee Grumbine and Brad Doerr to talk about techniques for managing test & measurement data for large engineering projects.

 

Agenda:

1:10 – Who is using data analytics?

2:00 – A hobbyist in the garage may still have a lot of data. But because it’s a one-person team, the data is much easier to handle.

Medium and large size teams generate a lot of data. There are a lot of prototypes, tests, etc.

3:25 – The best teams manage their data efficiently. They are able to make quick, informed decisions.

4:25 – A manager told Brad, “I would rather re-make the measurements because I don’t trust the data that we have.”

6:00 – Separate the properties from the measurements. Separate the data from the metadata. Separating data from production lines, prototype units, etc. helps us at Keysight make good engineering decisions.

9:30 – Data analytics helps for analyzing simulation data before tape out of a chip.

10:30 – It’s common to have multiple IT people managing a specific project.

11:00 – Engineering companies should use a data analytics tool that is data and domain agnostic.

11:45 – Many teams have an engineer or two that manage data for their teams. Often, it’s the team lead. They often get buried in data analytics instead of engineering and analysis work. It’s a bad investment to have engineers doing IT work.

14:00 – A lot of high speed serial standards have workshops and plugfests. Companies test their products to make sure they are interoperable and to see how they stack up against their competitors.

15:30 – We plan to capture industry-wide data and let people see how their project stacks up against the industry as a whole.

16:45 – On the design side, it’s important to see how the design team’s simulation results stack up against the validation team’s empirical results.

18:00 – Data analytics is crucial for manufacturing. About 10% of our R&D tests make it to manufacturing. And, manufacturing has a different set of data and metrics.

19:00 – Do people get hired/fired based on data? In one situation, there was a lack of data being shared that ended up costing the company over $1M and 6 months of time-to-market.

PAM4 and 400G – Ethernet #18

Learn how PAM4 is allowing some companies to double their data rate – and the new challenges this brings up for engineers. (electrical engineering podcast)

Today’s systems simply can’t communicate any faster. Learn how some companies are getting creative and doubling their data rates using PAM4 – and the extra challenges this technology creates for engineers.

Mike Hoffman and Daniel Bogdanoff sit down with PAM4 transmitter expert Alex Bailes and PAM4 receiver expert Steve Reinhold to discuss the trends, challenges, and rewards of this technology.

 

1:00
PAM isn’t just cooking spray.

What is PAM4? PAM stands for Pulse Amplitude Modulation, and it is a serial data communication technique in which more than one bit of data can be communicated per clock cycle. Instead of just a high (1) or low (0) value, in PAM4 a voltage level can represent 00, 01, 10, or 11. NRZ is essentially just PAM2.
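
A toy sketch of that mapping (illustrative only; the level values and Gray-coded bit order are assumptions, not taken from any particular PAM4 spec):

```python
# Toy PAM4 mapper: two bits per symbol, four evenly spaced levels.
# Gray coding (00, 01, 11, 10 from bottom to top) is commonly used so that
# adjacent levels differ by one bit, but the exact mapping is spec-dependent.
PAM4_LEVELS = {(0, 0): -1.0, (0, 1): -1/3, (1, 1): 1/3, (1, 0): 1.0}

def pam4_encode(bits):
    """Group the bit stream into pairs and map each pair to a voltage level."""
    pairs = zip(bits[0::2], bits[1::2])
    return [PAM4_LEVELS[pair] for pair in pairs]

print(pam4_encode([0, 0, 0, 1, 1, 1, 1, 0]))   # -> [-1.0, -0.33..., 0.33..., 1.0]
```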

We are reaching the limit of NRZ communication capabilities over the current communication channels.

2:10 PAM has been around for a while; it was used in 1000BASE-T. 10GBASE-T uses PAM16, which means it has 16 different possible voltage levels per clock cycle. It acts a bit like an analog-to-digital converter.

2:55 Many existing PAM4 specifications have voltage swings of 600-800 mV

3:15 What does a PAM4 receiver look like?  A basic NRZ receiver just needs a comparator, but what about multiple levels?

3:40 Engineers either add multiple slicers and do post-processing to clean up the data, or put an ADC at the receiver and do the data analysis all at once.

PAM4 communicates 2-bits per clock cycle, 00, 01, 10, or 11.

4:25 Radio engineers have been searching for better modulation techniques for some time, but now digital people are starting to get interested.

4:40 With communications going so fast, the channel bandwidth limits the ability to transmit data.

PAM4 allows you to effectively double your data rate by doubling the amount of data per clock cycle.

5:05 What’s the downside of PAM4? The signal-to-noise ratio (SNR) for PAM4 is worse than for traditional NRZ. In a perfect world, the SNR penalty would be about 9.6 dB (for four levels instead of two). In reality, it’s worse than that.
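
That figure falls out of simple geometry: with the same full-scale swing, each PAM4 eye is one third the height of the NRZ eye. A quick check of the arithmetic (illustrative only):

```python
import math

# With a fixed peak-to-peak swing, PAM4 squeezes 4 levels where NRZ has 2,
# so each eye opening is 1/3 of the NRZ eye. Expressed in dB:
penalty_db = 20 * math.log10(3)
print(f"ideal PAM4 SNR penalty vs. NRZ: {penalty_db:.2f} dB")   # ~9.54 dB
```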

5:30 Each eye may not be the same height, so that also has an effect on the total SNR.

6:05 What’s the bit error ratio (BER) of a PAM4 vs. NRZ signal if the transmission channel doesn’t change?

6:45 The channels were already challenged, even for many NRZ signals. So, it doesn’t look good for PAM4 signals. Something has to change.

7:00 PAM4 is designed to operate at a high BER. NRZ specs typically targeted a 1E-12 or 1E-15 BER, but many PAM4 specs are targeting 1E-4 or 1E-5. Forward error correction (or other schemes) is used to get accurate data transmission.

7:50 Companies are designing more complex receivers and more robust computing power to make PAM4 work. This investment is worth it because they don’t have to significantly change their existing hardware.

8:45 PAM is being driven largely by Ethernet. The goal is to get to a 1 Tb/s data rate.

9:15 Currently 400 GbE is the next step towards the 1 Tbps Ethernet rate (terabit per second).

10:25 In Steve’s HP days, the salesmen would e-mail large pictures (1 MB) to him to try to fill up his drive.

11:10 Is there a diminishing rate of return for going to higher PAM levels?

PAM3 is used in automotive Ethernet, and 1000BASE-T uses PAM5.

Broadcom pushed the development of PAM3. The goal was to have just one pair of cables going through a vehicle instead of the 4 pairs in typical Ethernet cables.

Cars are an electrically noisy environment, so Ethernet is very popular for entertainment systems and less critical systems.

Essentially, Ethernet is replacing FlexRay. There was a technology battle for different automotive communication techniques. You wouldn’t want your ABS running on Ethernet because it’s not very robust.

14:45 In optical communication systems there is more modulation, but those systems don’t have the same noise constraints.

For digital communications, PAM8 is not possible over today’s channels because of the noise.

15:20 PAM4 is the main new scheme for digital communications

15:50 Baseband digital data transmission covers a wide frequency range. It goes from DC (all zeroes or all ones) to a frequency of the baud rate over 2 (e.g. 101010). This causes intersymbol interference (ISI) jitter that has to be corrected for – which is why we use transmitter equalization and receiver equalization.

16:55 PAM4 also requires clock recovery, and it is much harder to recover a clock when you have multiple possible signal levels.

17:35 ISI is easier to think about on an NRZ signal. If a signal has ten 0s in a row, then transitions up to ten 1s in a row,  the channel attenuation will be minimal. But, if you put a transition every bit, the attenuation will be much worse.

19:15 To reduce ISI, we use de-emphasis or pre-emphasis on the transmit side, and equalization on the receiver side. Engineers essentially boost the high frequencies at the expense of the low frequencies. It’s very similar to Dolby audio.

20:40 How do you boost only the high frequencies? There are circuits you can design that react based on the history of the bit stream. At potentially error-inducing transition bits, this circuitry drives a higher amplitude than a normal bit.

22:35 Clock recovery is a big challenge, especially for collapsed eyes. In oscilloscopes, there are special techniques to recover the eye and allow system analysis.

With different tools, you can profile an impulse response and detect whether you need to de-emphasize or modify the signal before transmission. Essentially, you can get the transfer function of your link.

23:45 For Ethernet systems, there are usually three equalization taps. Chip designers can modify the tap coefficients to tweak their systems and get the chip to operate properly. They have to design in enough compensation flexibility to make the communication system operate properly.
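
As a sketch of what “three taps” means (a minimal feed-forward equalizer with made-up coefficients, not values from any Ethernet spec): the filter has a pre-cursor, a main cursor, and a post-cursor tap, and the net effect is to give transition bits extra amplitude relative to long runs.

```python
TAPS = [-0.1, 0.8, -0.2]   # pre-cursor, main cursor, post-cursor (made-up values)

def ffe(symbols, taps=TAPS):
    """Minimal 3-tap FIR equalizer. The middle tap is the main cursor, so the
    first tap weights the symbol after it and the last tap the symbol before it."""
    out = []
    for n in range(len(symbols)):
        acc = 0.0
        for k, c in enumerate(taps):
            if 0 <= n - k < len(symbols):
                acc += c * symbols[n - k]
        out.append(acc)
    return out

nrz = [1, 1, 1, -1, -1, 1, -1, -1]        # long runs plus single-bit transitions
print([round(y, 2) for y in ffe(nrz)])    # transitions exceed the ±0.5 steady state
```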

25:00 PAM vs. QAM? Is QAM just an RF and optical technique, or can it be used in a digital system?

25:40 Steve suspects QAM will start to be used for digital communications instead of just being used in coherent communication systems.

26:30 PAM4 is mostly applicable to 200 GbE and 400 GbE, and something has to change for us to get faster data transfer.

26:48 Many other technologies are starting to look into PAM4 – InfiniBand, Thunderbolt, and PCIe for example.

You can also read the EDN article on PAM4 here. If you’re working on PAM4, you can also check out how to prepare for PAM4 technology on this page.

The World’s Fastest ADC – #13

Learn about designing the world’s fastest ADC in today’s electrical engineering podcast! We sit down with Mike to talk about ADC design and ADC specs. Hosted by Daniel Bogdanoff and Mike Hoffman, EEs Talk Tech is a twice-monthly engineering podcast discussing tech trends and industry news from an electrical engineer’s perspective.

 

We talk to ASIC Planner Mike Beyers about what it takes to design the world’s fastest ADC in today’s electrical engineering podcast.

Intro:
Mike is an ASIC planner on the ASIC Design Team.

Prestudy: learn about making an ASIC.

00:30

What is an ADC?

An ADC is an analog-to-digital converter; it takes analog inputs and provides digital data outputs.

What’s the difference between analog and digital ASICs?

1:00
There are three types of ASICs:
1. Signal conditioning ASICs
2. Converters, either digital-to-analog (DAC) or analog-to-digital (ADC), which sit between types 1 and 3
3. Signal processing ASICs, also known as digital ASICs

1:50
Signal conditioning ASICs can be very simple or very complicated.
e.g. stripline filters are simple; the front end of an oscilloscope can be complicated.

2:45
There’s a distinction between a converter and an analog chip with some digital functionality.
A converter has both digital and analog sections. But there are also analog chips with just a digital control interface, like an I2C or SPI interface.

4:25
How do you get what’s happening into the analog world onto a digital interface, and how fast can you do it?

4:35
Mike Hoffman designed a basic ADC in school using a chain of operational amplifiers (op-amps).
A ladder converter, or “thermometer code” converter, is the most basic of ADC designs.
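
A minimal sketch of the thermometer-code idea (illustrative Python, not how any real converter is implemented): a ladder of reference levels feeds one comparator per level, and the output code is simply how many comparators trip.

```python
def flash_adc(v_in, v_ref=1.0, bits=3):
    """Toy thermometer-code (flash) ADC: 2**bits - 1 comparators, each with a
    threshold tapped off a resistor ladder; the code is how many of them trip."""
    thresholds = [(i + 1) * v_ref / 2**bits for i in range(2**bits - 1)]
    thermometer = [v_in > t for t in thresholds]
    return sum(thermometer)          # convert thermometer code to a binary count

for v in (0.05, 0.30, 0.62, 0.95):
    print(v, "->", flash_adc(v))     # 0, 2, 4, 7 for this 3-bit toy converter
```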

6:00
A slow ADC can use single-ended CMOS outputs, a faster ADC might use parallel LVDS, and the highest-performance chips now almost always use SERDES.

6:35
The world’s fastest ADC?

6:55
Why do we design ADCs? We usually don’t make what we can buy off the shelf.

The Nyquist rate determines the necessary sample rate. For example, a 10 GHz signal needs to be sampled at 20 – 25 gigasamples per second.
1/(25 GHz) = 40 ps
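
Restating that arithmetic (nothing beyond the numbers already given above):

```python
f_signal_ghz = 10
nyquist_min_gsa = 2 * f_signal_ghz            # Nyquist minimum: 20 GSa/s
practical_gsa = 25                            # with some margin: 25 GSa/s
sample_spacing_ps = 1 / practical_gsa * 1000  # 1/(25 GHz) = 40 ps between samples
print(nyquist_min_gsa, "GSa/s minimum,", sample_spacing_ps, "ps per sample")
```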

8:45
ADC vertical resolution is the number of bits.

So, ADCs generally have two main specs: speed (sample rate) and vertical resolution.

9:00
The ability to measure time very accurately is often most important, but people often miss the noise side of things.

9:45
It’s easy to oversimplify into just two specs. But there’s more that has to be considered: specifications like bandwidth, frequency flatness, noise, and SFDR.

10:20
It’s much easier to add bits to an ADC design than it is to decrease the ADC’s noise.

10:42
Noise floor, SFDR, and SNR measure how good an analog to digital converter is.

SFDR means “spurious free dynamic range” and SNR means “signal to noise ratio”
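
One textbook yardstick that ties these specs together (a standard formula, not something stated in the episode): an ideal N-bit ADC driven by a full-scale sine has a quantization-limited SNR of 6.02·N + 1.76 dB, and a measured SINAD can be folded back into an effective number of bits (ENOB).

```python
def ideal_snr_db(bits):
    """Quantization-noise-limited SNR of an ideal ADC with a full-scale sine input."""
    return 6.02 * bits + 1.76

def enob(measured_sinad_db):
    """Effective number of bits implied by a measured SINAD."""
    return (measured_sinad_db - 1.76) / 6.02

print(round(ideal_snr_db(8), 1))   # ~49.9 dB for an ideal 8-bit converter
print(round(enob(40.0), 1))        # ~6.4 effective bits
```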

11:00
Other things you need to worry about are error codes, especially for instrumentation.

For some ADC folding architectures and successive approximation architectures, there can be big errors. This is acceptable for communication systems but not for visualization equipment.

12:30
So, there are a lot of factors to consider when choosing an ADC.

12:45
Where does ADC noise come from? It comes from both the ADC and from the support circuitry.

13:00
We start with a noise budget for the instrument and allocate the budget to different blocks of the oscilloscope or instrument design.

13:35
Is an ADC the ultimate ASIC challenge? It’s both difficult analog design and difficult high-speed digital design, so we have to use fine geometry CMOS processes to make it happen.

15:00
How fast are our current ADCs? 160 Gigasamples per second.

15:45
We accomplish that with a chain of ADCs, not just a single ADC.

16:15
ADC interleaving. If you think about it simply, if you want to double your sample rate you can just double the number of ADCs and shift their sampling clocks.

But this has two problems. First, the ADCs still have the same bandwidth; you don’t get an increase. Second, you have to get a very good clock and offset the sampling phases carefully.
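
A toy model of two-way time interleaving (illustrative only, with made-up rates): two ADCs sample the same input on opposite clock phases and the two capture streams are woven back together. The matching problems mentioned above show up as soon as the two paths’ gain or clock phase disagree.

```python
import math

def interleave_2x(signal, fs_each, n):
    """Two ADCs at fs_each, clocks offset by half a period -> 2*fs_each overall."""
    ts = 1 / fs_each
    adc_a = [signal(k * ts) for k in range(n)]           # phase 0
    adc_b = [signal(k * ts + ts / 2) for k in range(n)]  # phase 180 degrees
    combined = []
    for a, b in zip(adc_a, adc_b):                       # weave the two streams
        combined.extend([a, b])
    return combined

tone = lambda t: math.sin(2 * math.pi * 1e9 * t)         # 1 GHz test tone
print([round(x, 2) for x in interleave_2x(tone, fs_each=4e9, n=8)])
```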

17:00
To get higher bandwidth, you can use a sampler, which is basically just a very fast switch with higher bandwidth that then delivers the signal to the ADCs at a lower bandwidth

But, you have to deal with new problems like intersymbol interference (ISI).

18:20
So, what are the downsides of interleaving?

Getting everything to match up is hard, so you have to have a lot of adjustability to calibrate the samplers.

For example, if the quantization levels of one ADC are higher than the other’s, you’ll get a lot of problems, like frequency spurs and gain spurs.

We can minimize this with calibration and some DSP  (digital signal processing) after the capture.

20:00
Triple interleaving and double interleaving – the devil is in the details

21:00
Internally, our ADCs are made up of a number of slices of smaller, slower ADC blocks.

21:15
Internally, we have three teams. An analog ASIC team, a digital ASIC team, and also an ADC ASIC team.

22:15
Technology for ADCs is “marching forward at an incredible rate”

The off-the-shelf ADC technologies are enabling new technologies like 5G, 100G/400G/1T Ethernet, and DSP processing.

23:00
Is processing driven by ADCs, or are ADCs advancing processor technology? Both!

24:00
Predictions?

Mike H.: New “stupid question for the guest” section
What is your favorite sample rate and why?
400 MSa/s – one of the first scopes Mike B. worked on. Remember “4 equals 5”

Copper vs. Fiber Optic Cable and Optical Communication Techniques – #11

Stefan Loeffler discusses the latest optical communication techniques and advances in the industry as well as the use of fiber optic cable in electronics and long-range telecommunication networks. Hosted by Daniel Bogdanoff and Mike Hoffman, EEs Talk Tech is a twice-monthly engineering podcast discussing tech trends and industry news from an electrical engineer’s perspective.

Mike Hoffman and Daniel Bogdanoff continue their discussion with Stefan Loeffler about optical communication. In the first episode, we looked at “what is optical communication?” and “how does optical communication work?” This week we dig deeper into some of the latest optical communication techniques and advances in the industry as well as the use of fiber optic cable in electronics and long-range telecommunication networks.

Discussion Overview:

 

Installation and maintenance of optical fiber

We can use optical communication techniques such as phase multiplexing

There’s a race between using more colors and higher bitrates to increase data communication rates.

Erbium-doped fiber amplifiers can amplify multiple channels at different colors on the same optical fiber.

You can use up to 80 colors on a single fiber optic channel! 3:52

How is optical communication similar to RF? Optical communication is a lot like WiFi 4:07

Light color in optical fiber is the equivalent of carrier frequencies in RF
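
To put numbers on that analogy, here is a rough calculation (the C-band edges and the 50 GHz grid spacing are assumed values, not from the episode):

```python
C = 299_792_458  # speed of light, m/s

def carrier_freq_thz(wavelength_nm):
    """Convert an optical wavelength ('color') into its carrier frequency."""
    return C / (wavelength_nm * 1e-9) / 1e12

span_ghz = (carrier_freq_thz(1530) - carrier_freq_thz(1565)) * 1000  # C-band, approx.
print(f"C-band span: ~{span_ghz:.0f} GHz")
print(f"channels on a 50 GHz DWDM grid: ~{span_ghz / 50:.0f}")  # on the order of 80+
```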

 

How do we increase the data rate in optical fiber?

There are many multiplexing methods such as multicore, wavelength division, and polarization 4:50

Practically, only two polarization modes can be used at once. The limiting factor is the separation technology on the receiver side. 6:20

But, this still doubles our bandwidth!

What about dark fiber? Dark fiber is the physical piece of optical fiber that is unused. 7:07

Lighting up dark fiber in an existing cable is the first step to increasing fiber optic bandwidth.

But wavelengths can also be added.

Optical C-band vs L-band 7:48

Optical C-band was the first long-distance band. It is now joined by the L-band.

Is there a difference between using different colors and different wavelengths?

Optical fibers are a light show for mosquitos! 8:30

 

How do we fix optical fibers? 10:36

For short distances, an OTDR or a visual fault locator is often used. The visual fault locator sends red light into the fiber, and the fiber lights up where there’s a break.

 

Are there other ways to extend the amount of data we can push through a fiber? 11:35

Pulses per second can be increased, but we will eventually bleed into neighboring channels

Phase modulation is also used

PAM-4 comes into play with coding (putting multiple bits in a symbol)

And QAM, which relies on both amplitude and phase modulation

PAM-4 test solutions

How do we visualize optical fibers?  14:05

We can use constellation diagrams which plot magnitude and phase
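
A small illustration of what a constellation diagram encodes (a generic QAM-16 example, not tied to any instrument): each symbol is a complex number, and the plot shows its magnitude and phase (equivalently, its I and Q components).

```python
import cmath

# Ideal QAM-16 constellation: a 4x4 grid of complex symbol points.
levels = [-3, -1, 1, 3]
constellation = [complex(i, q) for i in levels for q in levels]

for point in constellation[:4]:                  # print a few of the 16 points
    magnitude, phase = cmath.polar(point)
    print(f"I={point.real:+.0f} Q={point.imag:+.0f} -> "
          f"magnitude {magnitude:.2f}, phase {phase:+.2f} rad")
```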

 

Do we plan for data error? 15:00

Forward error correction is used, but this redundancy involves significant overhead

 

QAM vs PAM

64 Gbaud (QAM-64) was the buzzword at OFC 2017 16:52

PAM is used for shorter links while QAM is used for longer links

 

How do we evaluate fiber? 18:02

We can calculate cost per managed bit and energy per managed bit

Energy consumption is a real concern 18:28

 

The race between copper and fiber 19:13

Fiber wins on long distance because of power consumption

But does fiber win on data rate?

Google Fiber should come to Colorado Springs…and Germany!

To compensate for signal loss over distance, you push more power into transmitting and detecting

Fibers attenuate the signal much less than copper does

But the problem comes when we have to translate the signal back into electrical on the receiving end

Is there a break-even point with fiber and copper? 22:15

 

Optical communication technology in the future

What speed are we at now and what’s the next technology? 23:05

600 G technology will be here eventually

We can expect 1.5 years between iterations in bandwidth. This is really slow in terms of today’s fast-paced technology.

We typically see 100 G speeds today

 

Predictions 26:00