PAM4 and 400G – Ethernet #18

Learn how PAM4 is allowing some companies to double their data rate – and the new challenges this brings up for engineers. (electrical engineering podcast)

Today’s systems simply can’t communicate any faster. Learn how some companies are getting creative and doubling their data rates using PAM4 – and the extra challenges this technology creates for engineers.

Mike Hoffman and Daniel Bogdanoff sit down with PAM4 transmitter expert Alex Bailes and PAM4 receiver expert Steve Reinhold to discuss the trends, challenges, and rewards of this technology.


PAM isn’t just cooking spray.

What is PAM4? PAM stands for Pulse Amplitude Modulation, and it is a serial data communication technique in which more than one bit of data can be communicated per clock cycle. Instead of just a high (1) or low (0) value, in PAM4 a voltage level can represent 00, 01, 10, or 11. NRZ is essentially just PAM2.
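The two-bits-per-level idea can be sketched in a few lines of code. The level values and the bit-to-level mapping below are illustrative; real PAM4 specs typically Gray-code the levels so adjacent voltage levels differ by only one bit:

```python
# Minimal PAM4 encoder sketch. The Gray-coded mapping below is typical
# (adjacent voltage levels differ by one bit), but the exact mapping and
# level values are specification-dependent.
LEVELS = {"00": -3, "01": -1, "11": +1, "10": +3}  # normalized amplitudes

def pam4_encode(bits):
    """Map a bit string to a list of PAM4 symbol levels (2 bits/symbol)."""
    assert len(bits) % 2 == 0, "PAM4 needs an even number of bits"
    return [LEVELS[bits[i:i + 2]] for i in range(0, len(bits), 2)]

print(pam4_encode("00011110"))  # [-3, -1, 1, 3]
```

Eight bits become four symbols, which is exactly where the data-rate doubling comes from.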

We are reaching the limit of NRZ communication capabilities over the current communication channels.

2:10 PAM has been around for a while; it was used in 1000BASE-T. 10GBASE-T uses PAM16, which means it has 16 different possible voltage levels per clock cycle. It acts a bit like an analog-to-digital converter.

2:55 Many existing PAM4 specifications have voltage swings of 600-800 mV

3:15 What does a PAM4 receiver look like?  A basic NRZ receiver just needs a comparator, but what about multiple levels?

3:40 Engineers add multiple slicers and do post-processing to clean up the data or put an ADC at the receiver and do the data analysis all at once.

PAM4 communicates 2-bits per clock cycle, 00, 01, 10, or 11.

4:25 Radio engineers have been searching for better modulation techniques for some time, but now digital people are starting to get interested.

4:40 With communications going so fast, the channel bandwidth limits the ability to transmit data.

PAM4 allows you to effectively double your data rate by doubling the amount of data per clock cycle.

5:05 What’s the downside of PAM4? The signal-to-noise ratio (SNR) for PAM4 is worse than for traditional NRZ. Even in a perfect world, the ideal SNR penalty would be about 9.5 dB (for four levels instead of two). In reality, it’s worse.
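That ideal penalty comes from each of the three PAM4 eyes being one third the height of the single NRZ eye for the same total swing, i.e. 20·log10(3) dB (commonly rounded to 9.5 or 9.6 dB):

```python
import math

# Ideal PAM4-vs-NRZ SNR penalty: for the same total swing, each of the
# three PAM4 eyes is 1/3 the height of the one NRZ eye, so the amplitude
# penalty is 20*log10(3) ~ 9.5 dB.
penalty_db = 20 * math.log10(3)
print(f"{penalty_db:.2f} dB")  # 9.54 dB
```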

5:30 Each eye may not be the same height, so that also has an effect on the total SNR.

6:05 What’s the bit error ratio (BER) of a PAM4 vs. NRZ signal if the transmission channel doesn’t change?

6:45 The channels were already challenged, even for many NRZ signals. So, it doesn’t look good for PAM4 signals. Something has to change.

7:00 PAM4 is designed to operate at a high BER. NRZ specs typically targeted a 1E-12 or 1E-15 BER, but many PAM4 specs are targeting 1E-4 or 1E-5. PAM4 relies on forward error correction (or other schemes) to get accurate data transmission.
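A back-of-the-envelope calculation shows why FEC is unavoidable at these raw error ratios. The 53.125 GBd line rate below is just an illustrative figure, not one quoted in the episode:

```python
# Expected raw bit errors per second at a given line rate. The rate is
# illustrative: 53.125 GBd * 2 bits/symbol for a PAM4 lane.
line_rate = 53.125e9 * 2  # bits per second (assumed example)
for ber in (1e-12, 1e-4):
    print(f"BER {ber:g}: ~{ber * line_rate:.3g} errors/s")
# At 1e-12 that's roughly one error every ten seconds; at 1e-4 it's
# about ten million errors per second, which FEC must then correct.
```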

7:50 Companies are designing more complex receivers and more robust computing power to make PAM4 work. This investment is worth it because they don’t have to significantly change their existing hardware.

8:45 PAM is being driven largely by Ethernet. The goal is to get to a 1 Tb/s data rate.

9:15 Currently, 400 GbE is the next step toward the 1 Tb/s (terabit per second) Ethernet rate.

10:25 In Steve’s HP days, the salesmen would e-mail large pictures (1 MB) to him to try to fill up his drive.

11:10 Is there a diminishing rate of return for going to higher PAM levels?

PAM3 is used in automotive Ethernet, and 1000BASE-T uses PAM5.

Broadcom pushed the development of PAM3. The goal was to have just one wire pair going through a vehicle instead of the four pairs in typical Ethernet cables.

Cars are an electrically noisy environment, so Ethernet is very popular for entertainment systems and less critical systems.

Essentially, Ethernet is replacing FlexRay. There was a technology battle for different automotive communication techniques. You wouldn’t want your ABS running on Ethernet because it’s not very robust.

14:45 In optical communication systems there is more modulation, but those systems don’t have the same noise constraints.

For digital communications, PAM8 is not possible over today’s channels because of the noise.

15:20 PAM4 is the main new scheme for digital communications

15:50 Baseband digital data transmission covers a wide frequency range. It goes from DC (all zeroes or all ones) up to half the baud rate (e.g. a 101010… pattern). The channel’s uneven loss across that range causes intersymbol interference (ISI) jitter that has to be corrected for – which is why we use transmitter equalization and receiver equalization.
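Those spectral endpoints are easy to compute. The baud rate below is illustrative, not from the episode:

```python
# Baseband spectrum endpoints: the fastest pattern (1010...) toggles every
# symbol, so its fundamental sits at half the baud rate; the slowest
# pattern (all 0s or all 1s) sits at DC. Baud rate is illustrative.
baud = 26.5625e9                 # symbols/s (assumed example)
nyquist_fundamental = baud / 2   # Hz, fundamental of a 1010... pattern
print(f"{nyquist_fundamental / 1e9:.5g} GHz")  # 13.281 GHz
```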

16:55 PAM4 also requires clock recovery, and it is much harder to recover a clock when you have multiple possible signal levels.

17:35 ISI is easier to think about on an NRZ signal. If a signal has ten 0s in a row and then transitions to ten 1s in a row, the channel attenuation will be minimal. But if there is a transition every bit, the attenuation will be much worse.

19:15 To reduce ISI, we use de-emphasis or pre-emphasis on the transmit side, and equalization on the receiver side. Engineers essentially boost the high frequencies at the expense of the low frequencies. It’s very similar to Dolby audio.

20:40 How do you boost only the high frequencies? There are circuits you can design that react based on the history of the bit stream. At potentially error-inducing transition bits, this circuitry drives a higher amplitude than a normal bit.

22:35 Clock recovery is a big challenge, especially for collapsed eyes. In oscilloscopes, there are special techniques to recover the eye and allow system analysis.

With different tools, you can profile an impulse response and detect whether you need to de-emphasize or modify the signal before transmission. Essentially, you can get the transfer function of your link.

23:45 For Ethernet systems, there are usually three equalization taps. Chip designers can modify the tap coefficients to tweak their systems and get the chip to operate properly, so they have to design in enough compensation flexibility.
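The tap idea can be sketched as a small feed-forward equalizer. The tap weights below are purely illustrative; real designs tune the coefficients per channel:

```python
# Sketch of a 3-tap feed-forward equalizer (FFE) as used for transmit
# de-emphasis: each output is a weighted sum of the next, current, and
# previous symbols. Tap weights are illustrative, not from any spec.
def ffe_3tap(symbols, pre=-0.1, main=0.8, post=-0.1):
    """Apply pre/main/post cursor tap weights to a symbol stream."""
    padded = [0] + symbols + [0]  # zero-pad the stream edges
    return [pre * padded[i + 2] + main * padded[i + 1] + post * padded[i]
            for i in range(len(symbols))]

# A lone transition gets its edge samples boosted relative to flat runs,
# i.e. the high-frequency content is emphasized:
print(ffe_3tap([-1, -1, -1, 1, 1, 1]))
```

The samples next to the transition come out with larger magnitude than those in the middle of a run, which is exactly the high-frequency boost described above.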

25:00 PAM vs. QAM? Is QAM just an RF and optical technique, or can it be used in a digital system?

25:40 Steve suspects QAM will start to be used for digital communications instead of just being used in coherent communication systems.

26:30 PAM4 is mostly applicable to 200 GbE and 400 GbE, and something has to change for us to get faster data transfer.

26:48 Many other technologies are starting to look into PAM4 – InfiniBand, Thunderbolt, and PCIe for example.

You can also read the EDN article on PAM4 here. If you’re working on PAM4, check out how to prepare for PAM4 technology on this page.





Heterogeneous Computing & Quantum Engineering – #17

Learn about parallel computing, the rise of heterogeneous processing (also known as hybrid processing), and quantum engineering in today’s EEs Talk Tech electrical engineering podcast!

Learn about parallel computing, the rise of heterogeneous processing (also known as hybrid processing), and the prospect of quantum engineering as a field of study!


Audio link:


Parallel computing used to be a way of sharing tasks between processor cores.

When processor clock rates stopped increasing, the response of the microprocessor companies was to increase the number of cores on a chip to increase throughput.


But now, specialized processing elements have become more popular.

A GPU is a good example of this. A GPU is very different from an x86 or ARM processor and is tuned for a different type of processing.

GPUs are very good at matrix math and vector math. Originally, they were designed to process pixels. They use a lot of floating point math because the math behind how a pixel value is computed is very complex.

A GPU is very useful if you have a number of identical operations you have to calculate at the same time.


GPUs used to be external daughter cards, but in the last year or two the GPU manufacturers have started to release low power parts suitable for embedded applications. These include several traditional cores and a GPU.

So, now you can build embedded systems that take advantage of machine learning algorithms that would have traditionally required too much processing power and too much thermal power.



This is an example of a heterogeneous (or hybrid) processor, such as those from AMD. A heterogeneous processor contains cores of different types, and a software architect figures out which types of workloads are processed by which type of core.

Professor Andrew Chen has predicted that this will increase in popularity because it’s become difficult to take advantage of shrinking the semiconductor feature size.


This year or next year, we will start to see heterogeneous processors with multiple types of cores.

Traditional processors are tuned for algorithms on integer and floating point operations where there isn’t an advantage to doing more than one thing at a time. The dependency chain is very linear.

A GPU is good at doing multiple computations at the same time so it can be useful when there aren’t tight dependency chains.

Neither processor is very good at doing real-time processing. If you have real time constraints – the latency between an ADC and the “answer” returned by the system must be short – there is a lot of computing required right now. So, a new type of digital hardware is required. Right now, ASICs and FPGAs tend to fill that gap, as we’ve discussed in the All about ASICs podcast.


Quantum cores (like we discussed in the what is quantum computing podcast) are something that we could see on processor boards at some point. Dedicated quantum computers that can exceed the performance of traditional computers will likely be introduced within the next 50 years, possibly as soon as the next 10 or 15.

To be a consumer product, a quantum computer would have to be a solid state device, but their existence is purely speculative at this point in time.


Quantum computing is reinventing how processing happens. And, quantum computers are going to tackle very different types of problems than conventional computers.


There is a catalog on the web of problems and algorithms that would run substantially better on a quantum computer than on a traditional computer.


People are creating algorithms for computers that don’t even exist yet.

The Economist estimated that the total spend on quantum computing research is over $1 billion per year globally. A huge portion of that interest is driven by the promise of these algorithms and papers.

Quantum computers will not completely replace typical processors.


Lee’s opinion is that the quantum computing industry is still very speculative, but the upsides are so great that neither the incumbent large computing companies nor the industrialized countries want to be left behind if it does take off.

The promise of quantum computing is beyond just the commercial industry, it’s international and inter-industry. You can find long whitepapers from all sorts of different governments laying out a quantum computing research strategy. There’s also a lot of venture capitalists investing in quantum computing.


Is this research and development public, or is there a lot of proprietary information out there? It’s a mixture: many of the startups and companies have software components that they are open sourcing and claim to have “bits of physics” working (quantum bits, or qubits), but they are definitely keeping trade secrets.

19:50 Quantum communication means space lasers.

Engineering with quantum effects has promise as an industry. One can send photons with entangled states. The Chinese government has a satellite that can generate these photons and send them to base stations. If anyone intercepts them, the recipients can tell, because the wave function collapsed too soon.

Quantum sensing promises to develop accelerometers and gyroscopes that are orders of magnitude more sensitive than what’s commercially available today.


Quantum engineering could become a new field. Electrical engineering was born 140 years ago, electronics roughly 70 years ago, and computer science was born out of math and electrical engineering. Similarly, it’s possible that the birth of quantum engineering will be dated to some point in the last five years or the next five.


Lee’s favorite quantum state is the Bell state. It’s the equal probability state between 1 and 0, among other interesting properties. The Bell state encapsulates a lot of the quantum weirdness in one snippet of math.
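For the curious, that snippet of math fits in a few lines of plain Python (a minimal sketch of the Bell state (|00⟩ + |11⟩)/√2, with no quantum library):

```python
import math
import random

# Bell state sketch: equal amplitudes on |00> and |11>, none on |01> or
# |10>. Measurement probabilities are the squared magnitudes of the
# amplitudes, so the two qubits always agree when measured.
amp = 1 / math.sqrt(2)
bell = {"00": amp, "01": 0.0, "10": 0.0, "11": amp}
probs = {state: a * a for state, a in bell.items()}

def measure():
    """Sample one measurement outcome from the Bell state."""
    return random.choices(list(probs), weights=probs.values())[0]

assert measure() in ("00", "11")  # the outcomes are perfectly correlated
print(probs)
```

The "equal probability between 1 and 0" Lee mentions shows up as the 50/50 split between the |00⟩ and |11⟩ outcomes.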








Quantum Bits and Cracking RSA – #16

What does a quantum computer look like? What does the future of cyber security hold? We sit down with Lee Barford to discuss.

Hosted by Daniel Bogdanoff and Mike Hoffman, EEs Talk Tech is a twice-monthly engineering podcast discussing tech trends and industry news from an electrical engineer’s perspective


How will quantum computing change the future of security? What does a quantum computer look like? Mike and Daniel sit down with Lee Barford to get some answers.

Video Version:

Audio version

Last time we looked at “what is quantum computing” and talked about quantum bits and storing data in superposition states.

00:40 Lee talks about how to crack RSA and Shor’s algorithm (wikipedia)

00:50 The history of quantum computing (wiki). The first person to propose it was Richard Feynman in the early 1980s. There was some interest, but it died out.

In the 1990s, Peter Shor published a paper pointing out that if you could build a quantum computer with certain operational properties (machine code instructions), then you could find one factor of a number no matter how long it is.

He also outlined a number of other things he would need, like a quantum Fast Fourier Transform (FFT).

Much of the security we use every day relies on both the RSA public key system and the Diffie-Hellman key exchange algorithm.

HTTPS connections use the Diffie-Hellman key exchange algorithm. RSA stands for its inventors, Rivest, Shamir, and Adleman (not “really secure algorithm”).


RSA only works if the recipients know each other, but Diffie Hellman works for people who don’t know each other but still want to communicate securely. This is useful because it’s not practical for everyone to have their own RSA keys.
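The key exchange itself is simple enough to sketch with toy numbers. Real deployments use moduli of 2048 bits or more; every value below is purely illustrative:

```python
# Toy Diffie-Hellman key exchange. Both parties derive the same shared
# secret without ever transmitting it; only A and B cross the wire.
p, g = 23, 5      # public prime modulus and generator (toy-sized)
a, b = 6, 15      # Alice's and Bob's private keys (normally random)

A = pow(g, a, p)  # Alice publishes g^a mod p
B = pow(g, b, p)  # Bob publishes g^b mod p

shared_alice = pow(B, a, p)  # (g^b)^a mod p
shared_bob = pow(A, b, p)    # (g^a)^b mod p
assert shared_alice == shared_bob
print(shared_alice)  # 2 -- the same secret on both sides
```

An eavesdropper who sees only p, g, A, and B must solve a discrete logarithm to recover the secret, which is infeasible at real key sizes (on classical computers).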


Factoring numbers that are made up of large prime numbers is the basis for RSA. The processing power required for factoring is too large to be practical. People have been working on this for 2500 years.


Shor’s algorithm is theoretically fast enough to break RSA. If you could build a quantum computer with enough quantum bits that operates with a reasonable machine-language cycle time (microseconds or milliseconds), then it would be possible to factor thousand-bit numbers.
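The quantum speedup in Shor’s algorithm lies only in the period-finding step; the classical reduction from a period to a factor can be sketched directly. Here the period is brute-forced classically on the textbook toy case N = 15:

```python
from math import gcd

# Classical half of Shor's algorithm: once the period r of
# f(x) = a^x mod N is known, a factor of N falls out via gcd.
# The period finding below is brute force, standing in for the
# quantum part, on the textbook toy case N = 15, a = 7.
def find_period(a, N):
    """Smallest r > 0 with a^r = 1 (mod N) -- the quantum speedup target."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

N, a = 15, 7
r = find_period(a, N)                 # r = 4
factor = gcd(pow(a, r // 2) - 1, N)   # gcd(7^2 - 1, 15) = 3
print(r, factor)                      # 4 3  (and 15 / 3 = 5)
```

As noted below, checking the result is trivial: either the candidate divides N with remainder 0 or it doesn’t.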


Famous professors at famous universities have a huge disparity of opinion as to when a quantum computer of that size could be built. Some say 5-10 years, others say up to 50.


What does a quantum computer look like? It’s easier to describe architecturally than physically. A quantum computer isn’t that much different from a classical computer; it’s essentially a co-processor that has to co-exist with current forms of digital electronics.


If you look at Shor’s algorithm, there are a lot of familiar commands, like “if statements” and “for loops.” But, quantum gates, or quantum assembly language operations, are used in the quantum processor. (more about this)


Lee thinks that because a quantum gate operates in time instead of space, the term “gate” isn’t a great name.


What quantum computers exist today? Some have been built, but with only a few quantum bits. The current claim is that people have created quantum computers with up to 21 quantum bits. But, there are potentially a lot of errors and noise. For example, can they actually maintain a proper setup and hold time?


Continuing the Schrodinger’s Cat analogy…

In reality, if you have a piece of physics that you’ve managed to put into a superimposed quantum state, any disturbance of it (photon impact, etc.) will cause it to collapse into an unwanted state or to collapse too early.


So, quantum bits have to be highly isolated from their environments – in vacuums or at extremely cold temperatures (well below 1 kelvin!).


The research companies making big claims about the quantity of bits are not using solid state quantum computers.

The isolation of a quantum computer can’t be perfect, so there’s a limited lifetime for the computation before the probability of getting an error gets too high.


Why do we need a superposition of states? And why does it matter when the superimposed states collapse to one state? If they collapse at the wrong time, you’ll get a wrong answer. With Shor’s algorithm it’s easy to check for the right answer: you get either a remainder of 0 or you don’t. If you get 0, the answer is correct. The computation only has to be reliable enough for you to check the answer.


If the probability of getting the right answer is high enough, you can afford to get the wrong answer on occasion.


The probability of the state of a quantum bit isn’t just 50%, so how do you set the probability of the state? It depends on the physical system. You can write to a quantum bit by injecting energy into the system, for example using a very small number of photons as a pulse with carefully controlled timing and phase.


Keysight helps quantum computer researchers generate and measure pulses with metrological levels of precision.

The pulses have to be very carefully timed and correlated with sub nanosecond accuracy. You need time synchronization between all the bits at once for it to be useful.


What is a quantum bit? Two common kinds of quantum bits are:

1. Ions trapped in a vacuum by lasers. The ions can’t move because they are held in place by standing waves of laser beams. The vacuum can be at room temperature, but the ions are at a low temperature because they can’t move.

2. Josephson junctions in tank circuits (a coil and capacitor), which produce oscillations at microwave frequencies. Under the right physical conditions, these can be designed to behave like an abstract two-state quantum system. You just designate zero and one to different states of the system.

Probabilities are actually the wrong description; it should be complex quantum amplitudes.


Josephson junctions were talked about in an earlier electrical engineering podcast discussing SI units.


After working with quantum computing, it’s common to walk away feeling a lot less knowledgeable.


Stupid question section:

“If you had Schrodinger’s cat in a box, would you look or not?”

Lee says the cat’s wave function really collapses as it starts to warm up, so the state has already been determined.



What is Quantum Computing? – #15

Learn about the basics of quantum computing and quantum computers from Dr. Lee Barford. We discuss Schrodinger’s cat and more!

Hosted by Daniel Bogdanoff and Mike Hoffman, EEs Talk Tech is a twice-monthly electrical engineering podcast discussing tech trends and industry news from an electrical engineer’s perspective.

What is a quantum computer and what is quantum computing? In this week’s episode, Daniel Bogdanoff and Mike Hoffman are joined by quantum computing expert Lee Barford.

Video Version (YouTube):

Audio Only:

0:45 Intro

Lee Barford helps guide Keysight into the quantum computing business and enables the quantum computing experts at Keysight.


2:00 The importance of quantum computing

Clock rates in all types of digital processors stopped going up in 2006 due to heating limits

The processor manufacturers realized the need for more parallelism.

Today, Lee helps engineers at Keysight take advantage of this parallelism.

Graphics processors can be used as vector and matrix machines

Bitcoin utilizes this method.


6:00 The implications of advancements in quantum computing

Today, there are parts being made with digital transistor feature sizes of 10, maybe 7, nanometers (depending on who you believe).

So we are heading below 5 nanometers, and there aren’t many unit cells of silicon left at that point (a unit cell of silicon is about 0.5 nanometers).

The uncertainty principle comes into play since there are few enough atoms where quantum mechanical effects will disturb the electronics.

There are many concerns including a superposition of states (Schrodinger’s cat) and low error tolerance.


10:20 Is Moore’s law going to fail? 

Quantum computing is one way of moving the computer industry past this barrier

The idea is to take advantage of quantum mechanical effects – to engineer with them – to build a new kind of computer that, for certain problems, promises to do better than what we currently do.


15:20 Questions for future episodes:

What sort of technology goes into a quantum computer?

What’s the current state of experimentation?

What are some of the motivations for funding quantum computing research?

How is Keysight involved in this industry?

What problems is quantum computing aiming to solve?


17:30 Using quantum effects to our advantage

Quantum computers likely won’t be used in consumer devices because they require very low temperatures and/or a vacuum.


A quantum computer’s fundamental storage unit is the qubit (quantum bit). When read, a qubit yields either 1 or 0, each with some probability.

A quantum register stores multiple qubits and, when read, has some probability of yielding each of its possible values. A quantum register can hold more than one state at a time, but only one value can be read from it.
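The read-out limitation can be sketched numerically (a minimal illustration with plain Python, not a simulation of real hardware):

```python
import math

# An n-qubit register is described by 2**n complex amplitudes, but a
# single read yields only one n-bit value, drawn with probability
# |amplitude|**2. Here: a uniform superposition over 3 qubits.
n = 3
num_amplitudes = 2 ** n  # 8 basis states held at once
uniform_amp = 1 / math.sqrt(num_amplitudes)
state = {format(i, f"0{n}b"): uniform_amp for i in range(num_amplitudes)}

print(num_amplitudes, state["101"] ** 2)  # 8 basis states, 12.5% each
```

Eight amplitudes are "stored" simultaneously, but any single read collapses the register to just one of the eight 3-bit values.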

21:00 How does one get a useful value out of a quantum register? You do as much of the computation as possible before reading the state, and then read the quantum computer’s quantum register.

This works because the answer either has such a high probability of being correct that you don’t need to verify it, or it’s simple to double-check whether it is correct.


22:30 Quantum computers can factor very large numbers (breaking RSA in cryptography)







How to Price Your Electronics Hardware Project – #14

Pricing a new hardware product in a global economy with regional pricing, psychological factors, and the challenges of pricing in white space. This week’s guest is Brig Asay. Hosted by Daniel Bogdanoff and Mike Hoffman, EEs Talk Tech is a twice-monthly electrical engineering podcast discussing tech trends and industry news from an electrical engineer’s perspective.

Daniel Bogdanoff and Mike Hoffman sit down with Brig Asay to talk about how to price a hardware project. Listen in as they discuss the complexities of pricing a new hardware product in a global economy.

Follow Brig Asay on Yelp @baasay.

Video Version (YouTube):

Audio Only:

0:00 Intro

How should you price hardware?


Tell us in the comments what you think our green screen should be!


Economics 101: Supply & Demand

This is how we generally set prices for hardware


Top down pricing takes into account your cost of manufacturing.

But if you price based on production costs, you’re going to fail in your pricing.

It’s all about what consumers are willing to pay.


Pharmaceutical companies are a classic example of bad pricing schemes. They justify high prices based on high R&D costs.

But the reality is that consumers don’t care about R&D costs. They care about how badly they need the product, and that determines how much they are willing to pay.


Someone on EEVblog hacked a 3000T, reverse engineering it to make it a 1 GHz scope.



The newer the idea, the harder it is to price because there’s no real market value.

Talking to potential customers is a good way to start pricing in white space.


Marketing 101: Who are your customers?

Determining who you are trying to sell to and talking with them can help with pricing.

Competitor pricing is a good baseline, but then you often get into value-based pricing.


Spreadsheets are the killer of pricing. They compete with your gut feeling.

$10K per GHz of bandwidth is a standard in oscilloscope pricing, but it doesn’t always apply. When we came out with the Infiniium Z-Series, a 63 GHz scope, we knew the market couldn’t support a $630K price.


Price/volume curve = Supply and demand chart


Different regions have different pricing expectations.

Currency, cultural expectations, and import taxes all come into play when considering regional pricing.

Should a small company even worry about regional pricing?


You need to be willing to adjust pricing.

Dynamics of the market and the value of your product can change over time.

If you’re not selling anything, you need to adjust your price.

Price too low and people may have the perception that you’re selling a low-quality product.


Pricing too low may also inadvertently shrink your market size.

Overly undercutting your competitor may hurt you in the long run.


Does the psychological side to pricing always apply?

What’s the stigma around prices ending in a 9 or 8?


Stupid Questions with Mike:

What is your favorite price and why?

What is your favorite currency and why?


Tell us about your software or hardware project in the comments!




The World’s Fastest ADC – #13

Learn about designing the world’s fastest ADC in today’s electrical engineering podcast! We sit down with Mike to talk about ADC design and ADC specs. Hosted by Daniel Bogdanoff and Mike Hoffman, EEs Talk Tech is a twice-monthly engineering podcast discussing tech trends and industry news from an electrical engineer’s perspective.


We talk to ASIC Planner Mike Beyers about what it takes to design the world’s fastest ADC in today’s electrical engineering podcast.

Video Version (YouTube):


Audio Only:

Mike is an ASIC planner on the ASIC Design Team.

Pre-study: learn about making an ASIC.


What is an ADC?

An ADC is an analog-to-digital converter: it takes analog data inputs and provides digital data outputs.

What’s the difference between analog and digital ASICs?

There are three types of ASICs:
1. Signal conditioning ASICs
2. Converters – either digital-to-analog (DAC) or analog-to-digital (ADC) – which sit between types 1 and 3
3. Signal processing ASICs, also known as digital ASICs

Signal conditioning ASICs can be very simple or very complicated.
e.g. Stripline filters are simple; the front end of an oscilloscope can be complicated.

There’s a distinction between a converter and an analog chip with some digital functionality.
A converter has both digital and analog signal paths. But there are some analog chips that merely have a digital control interface, like an I2C or SPI interface.

How do you get what’s happening into the analog world onto a digital interface, and how fast can you do it?

Mike Hoffman built a basic ADC in school using a chain of operational amplifiers (op-amps).
A ladder converter, or “thermometer code” converter, is the most basic of ADC designs.
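The thermometer-code idea can be sketched in a few lines. The resolution and reference voltage below are illustrative:

```python
# Sketch of a flash ("thermometer code") ADC: a resistor ladder sets one
# comparator threshold per level, and the count of comparators that trip
# is the output code. 3 bits and a 1 V reference are illustrative values.
def flash_adc(v_in, bits=3, v_ref=1.0):
    """Return the digital code for v_in using 2**bits - 1 comparators."""
    thresholds = [(i + 1) * v_ref / 2 ** bits for i in range(2 ** bits - 1)]
    return sum(v_in > t for t in thresholds)  # thermometer-to-binary count

print(flash_adc(0.40))  # 0.40 V with 0.125 V steps -> code 3
```

All the comparators fire in parallel, which is why flash converters are fast but grow exponentially with resolution.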

A slow ADC can use single-ended CMOS outputs, a faster ADC might use parallel LVDS, and now it’s almost always SERDES for the highest-performance chips.

The world’s fastest ADC?

Why do we design ADCs? We usually don’t make what we can buy off the shelf.

The Nyquist rate determines the necessary sample rate. For example, a 10 GHz signal needs to be sampled at 20–25 gigasamples per second.
1 / 25 GSa/s = 40 ps between samples

ADC Vertical resolution, or the number of bits.

So, ADCs generally have two main specs, speed (sample rate) and vertical resolution.

The ability to measure time very accurately is often most important, but people often miss the noise side of things.

It’s easy to oversimplify into just two specs. But there’s more that has to be considered: specifications like bandwidth, frequency flatness, noise, and SFDR.

It’s much easier to add bits to an ADC design than it is to decrease the ADC’s noise.

Noise floor, SFDR, and SNR measure how good an analog to digital converter is.

SFDR means “spurious free dynamic range” and SNR means “signal to noise ratio”

Other things you need to worry about are error codes, especially for instrumentation.

For some ADC folding architectures and successive approximation architectures, there can be big errors. This is acceptable for communication systems but not for instruments that visualize signals.

So, there are a lot of factors to consider when choosing an ADC.

Where does ADC noise come from? It comes from both the ADC and from the support circuitry.

We start with a noise budget for the instrument and allocate the budget to different blocks of the oscilloscope or instrument design.

Is an ADC the ultimate ASIC challenge? It’s both difficult analog design and difficult high-speed digital design, so we have to use fine geometry CMOS processes to make it happen.

How fast are our current ADCs? 160 Gigasamples per second.

We accomplish that with a chain of ADCs, not just a single ADC.

ADC interleaving. If you think about it simply, if you want to double your sample rate you can just double the number of ADCs and shift their sampling clocks.
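The basic interleaving idea can be sketched as follows (all values are illustrative):

```python
# Two-way ADC interleaving sketch: two ADCs sample the same input with
# clocks offset by half a period, and their streams are woven together
# to double the effective sample rate.
def interleave(adc_even, adc_odd):
    """Merge two half-rate sample streams into one full-rate stream."""
    merged = []
    for e, o in zip(adc_even, adc_odd):
        merged += [e, o]
    return merged

adc0 = [0.0, 0.2, 0.4]  # samples at t = 0, 2, 4 (in sample periods)
adc1 = [0.1, 0.3, 0.5]  # samples at t = 1, 3, 5 (offset clock)
print(interleave(adc0, adc1))  # [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]
```

Any gain, offset, or clock-phase mismatch between the two ADCs shows up as spurs in the merged stream, which is why the calibration described below matters so much.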

But this has two problems. First, the ADCs still have the same bandwidth – you don’t get an increase. Second, you have to get a very good clock and offset the sampling phases carefully.

To get higher bandwidth, you can use a sampler, which is basically just a very fast switch with higher bandwidth that then delivers the signal to the ADCs at a lower bandwidth

But, you have to deal with new problems like intersymbol interference (ISI).

So, what are the downsides of interleaving?

Getting everything to match up is hard, so you have to have a lot of adjustability to calibrate the samplers.

For example, if the quantization levels of one ADC are higher than the other’s, you’ll get a lot of problems, like frequency spurs and gain spurs.

We can minimize this with calibration and some DSP (digital signal processing) after the capture.

Triple interleaving and double interleaving – the devil is in the details

Internally, our ADCs are made up of a number of slices of smaller, slower ADC blocks.

Internally, we have three teams. An analog ASIC team, a digital ASIC team, and also an ADC ASIC team.

Technology for ADCs is “marching forward at an incredible rate”

The off-the-shelf ADC technologies are enabling new technologies like 5G, 100G/400G/1T Ethernet, and DSP processing.

Is processing driven by ADCs, or are ADCs advancing processor technology? Both!


Mike H.: New “stupid question for the guest” section
What is your favorite sample rate and why?
400 MSa/s – one of the first scopes Mike B. worked on. Remember “4 equals 5”

How Internet is Delivered – Data Centers and Infrastructure – #12

Laser Netflix delivery, backyard data centers, and how the internet gets delivered to homes and businesses. This week’s podcast guest is optical guru Stefan Loeffler. Hosted by Daniel Bogdanoff and Mike Hoffman, EEs Talk Tech is a twice-monthly engineering podcast discussing tech trends and industry news from an electrical engineer’s perspective.

Laser-delivered Netflix and backyard data centers!

The conversation continues with optical communications guru, Stefan Loeffler. In this episode, Daniel Bogdanoff and Mike Hoffman discuss optical infrastructure today and what the future holds for optics.

Video version (YouTube):


Audio Version:

Discussion Overview:

Optical Communication Infrastructure 00:30

Optics = Laser-driven Netflix delivery system

Client-side vs line-side 1:00

Line-side is the network that transports the signals from the supplier to the consumer

Client-side is the equipment that is either a consumer or business, accepting the data from the network provider.


Yellow cables in your wall indicate the presence of fiber 1:40

Technically, optics is communication using radiation! But it is invisible to us as humans. 2:20


Getting fiber all the way to the antenna is one of the major new technologies 2:30

But this requires you to have power at the antenna 2:45

However, there is typically a “hotel,” or base station, at the bottom of the antenna where the power is and where fiber traditionally connects, instead of running up to the antenna

Really new or experimental antennas have fiber running all the way up the pole  3:28


Network topologies- star, ring, and mesh 3:42

Base stations are usually organized in star-form, or a star network pattern. A star network starts at a single base station and distributes data to multiple cells

Rings (ring networks) are popular in metro infrastructure because you can encircle an entire area 4:20

Optical rings are like traffic circles for data.

Is ring topology the most efficient or flexible? 6:20

An advantage of ring and mesh topologies is built-in resilience

Mesh topologies have more bandwidth but require more fiber optic cable 7:10

How often is the topology or format of a network defined by geography or regulations? 8:30


How consumers get fiber 9:20

Business or academic campuses typically utilize mesh networks on the client side, subscribing to a fiber provider

Fiber itself or a certain bandwidth using that fiber can be leased

If you’re a business, like a financial institution, and latency or bandwidth is critical, leasing fiber is necessary so you have control over the network 9:45


What’s the limiting factor of optical? 

What are the limitations of the hardware that’s sending/receiving optical signals? 11:08

Whatever we do in fiber, at some point, it is electrical 11:27

There will be a tipping point where quantum computing and photon-computing (optical computing) comes into play 11:40

Will optical links ever compete with silicon? Maybe we will have optical computers in the future 12:02

The limiting factor is the power supply 12:40

What’s costing all this energy? 12:58

The more data (bits and bytes) we push through, the more energy, in the form of optical photons or electrons, we must push through. We also must use a DSP for decoding, which costs energy

One of the first 100 Gb links between two clients was between the New York Stock Exchange and the London Stock Exchange 14:00


The evolution of the transmission of data 14:45 

Will we ever have open-air optical communication? 15:50

RF technology uses open-air communication today, but it is easy to disturb

The basic material fiber is made of is cheap (silica, quartz) and can be found on any beach 16:08

Copper, by contrast, has a supply problem and thus continues to increase in price


Other uses for optical 16:33

Crystal fiber and multicore fiber are being experimented with to increase the usable bandwidth

Optical waveguides can be built into small wafer sections 17:15

Optics is used in electrical chips when photons are easier to push through than electrons

Cross-talk can happen with optical, too 18:13

Testing is done with optical probing, which works because of optical coupling

Optical-to-electrical converter solution 


Optical satellite communication 19:48

Hollow fiber could be used in a vacuum, such as in space

The refractive index of the fiber’s core is higher than the cladding, which guides the optical signal through 21:05

A hollow fiber would be like a miniature mirrored tube
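The guiding principle above can be put into numbers: light stays in the core via total internal reflection because the core’s refractive index is slightly higher than the cladding’s. A minimal Python sketch, using typical single-mode index values that are illustrative assumptions, not figures from the episode:

```python
import math

# Total internal reflection keeps light in the core: rays hitting the
# core/cladding boundary at more than the critical angle (measured from
# the boundary normal) are reflected back into the core.
n_core = 1.4682  # typical single-mode core index (assumed)
n_clad = 1.4629  # typical cladding index (assumed)

# Critical angle for total internal reflection
theta_c = math.degrees(math.asin(n_clad / n_core))

# Numerical aperture: how wide a cone of light the fiber accepts
na = math.sqrt(n_core**2 - n_clad**2)

print(f"critical angle: {theta_c:.1f} degrees")
print(f"numerical aperture: {na:.3f}")
```

Because the two indices are so close, the critical angle is very shallow, which is why only light launched nearly parallel to the fiber axis is guided.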


Optical data transmission 21:25 

Higher carrier frequencies mean you can modulate faster, but there’s more loss and dispersion

This means optical communication could be harder in open-air vs. in traditional fiber 22:45

70-80% headroom is typical

The congested part of a network drives the change in technology. 24:25


Mega data centers vs. distributed data centers 

Cooling and power are important, so big data centers are being built by Google, Facebook, and Netflix in places where cheap, cool water is abundant 24:30

Distributed data centers are becoming more popular than mega-data centers 24:55

All images on Facebook have “cdn” in the URL because the images are hosted on a content delivery network (CDN), or cloud

Data centers are described by megawatts (MW) of power, not size or amount of data processed 26:20

Internal traffic accounts for about 75% of data center traffic 27:47

Distributed networks utilize a mesh network and require communication between networks


Telecom starts using faster fiber when about 20% of the fiber is used 28:55

This 20% utilization is also common in CAN buses because of safety-critical data communication

Uptime guarantees require the Telecom industry to keep this number at 20%


Keysight optical resources and solutions  31:00

Predictions 31:45

Also, check out our previous conversations with Stefan about Optical Communication 101 and Optical Communication Techniques.

Copper vs. Fiber Optic Cable and Optical Communication Techniques – #11

Stefan Loeffler discusses the latest optical communication techniques and advances in the industry, as well as the use of fiber optic cable in electronics and long-range telecommunication networks. Hosted by Daniel Bogdanoff and Mike Hoffman, EEs Talk Tech is a twice-monthly engineering podcast discussing tech trends and industry news from an electrical engineer’s perspective.

Mike Hoffman and Daniel Bogdanoff continue their discussion with Stefan Loeffler about optical communication. In the first episode, we looked at “what is optical communication?” and “how does optical communication work?” This week we dig deeper into some of the latest optical communication techniques and advances in the industry as well as the use of fiber optic cable in electronics and long-range telecommunication networks.

Video version (YouTube):


Audio Version:


Discussion Overview:


Installation of optical fiber and maintenance of optical fiber

We can use optical communication techniques such as phase multiplexing

There’s a race between using more colors and higher bitrates to increase data communication rates.

Erbium-doped fiber amplifiers (EDFAs) can amplify multiple channels at different colors on the same optical PHY.

You can use up to 80 colors on a single fiber optic channel! 3:52
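The “80 colors” figure falls out of simple arithmetic if you assume the common 50 GHz ITU grid across a roughly 4 THz wide C-band. Both values are standard industry figures assumed here, not stated in the episode:

```python
# DWDM packs many "colors" (wavelength channels) onto one fiber.
# Rough channel count across the C-band on the standard 50 GHz grid.
c_band_ghz = 4_000        # C-band is roughly 4 THz wide (~1530-1565 nm)
channel_spacing_ghz = 50  # common ITU-T grid spacing

channels = c_band_ghz // channel_spacing_ghz
print(channels)  # 80 channels, matching the figure above
```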

How is optical communication similar to RF? Optical communication is a lot like WiFi 4:07

Light color in optical fiber is the equivalent of carrier frequencies in RF


How do we increase the data rate in optical fiber?

There are many multiplexing methods such as multicore, wavelength division, and polarization 4:50

Practically, only two polarization modes can be used at once. The limiting factor is the separation technology on the receiver side. 6:20

But, this still doubles our bandwidth!

What about dark fiber? Dark fiber is the physical piece of optical fiber that is unused. 7:07

Using dark fiber on an existing optical fiber is the first step to increasing fiber optic bandwidth.

But wavelengths can also be added.

Optical C-band vs L-band 7:48

Optical C-band was the first long-distance band. It is now joined by the L-band.

Is there a difference between using different colors and different wavelengths?

Optical fibers are a light show for mosquitos! 8:30


How do we fix optical fibers? 10:36

For short distances, an OTDR or a visual fault locator is often used: red light is sent into the fiber, and the fiber lights up where there’s a break


Are there other ways to extend the amount of data we can push through a fiber? 11:35

Pulses per second can be increased, but we will eventually bleed into neighboring channels

Phase modulation is also used

PAM-4 comes into play with coding (putting multiple bits in a symbol)

And QAM, which relies on both amplitude and phase modulation
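The “multiple bits in a symbol” idea is just a logarithm: a format with N distinct symbol states carries log2(N) bits per symbol. A quick sketch:

```python
import math

# Each symbol can take one of N distinct states, carrying log2(N) bits.
formats = {"NRZ (PAM-2)": 2, "PAM-4": 4, "QAM-16": 16, "QAM-64": 64}
bits_per_symbol = {name: int(math.log2(n)) for name, n in formats.items()}

for name, bits in bits_per_symbol.items():
    print(f"{name}: {bits} bits/symbol")
```

This is why PAM-4 doubles the data rate of NRZ at the same symbol rate, and why QAM-64 carries 6 bits per symbol.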

PAM-4 test solutions

How do we visualize optical fibers?  14:05

We can use constellation diagrams which plot magnitude and phase


Do we plan for data error? 15:00

Forward error correction is used, but this redundancy involves significant overhead
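The overhead of a block FEC code is the ratio of redundant symbols to payload symbols. As an illustration, the RS(544,514) “KP4” code used in 400G Ethernet (named here as an example, not discussed in the episode) costs about 5.8%:

```python
# Forward error correction adds redundant symbols so the receiver can
# detect and repair errors. The cost is overhead: extra symbols that
# carry no payload.
n, k = 544, 514   # total symbols per block, payload symbols per block

overhead = (n - k) / k
print(f"FEC overhead: {overhead:.1%}")  # extra line rate needed
```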



64 Gigabaud (QAM-64) was the buzzword at OFC 2017 16:52

PAM is used for shorter links while QAM is used for longer links


How do we evaluate fiber? 18:02

We can calculate cost per managed bit and energy per managed bit

Energy consumption is a real concern 18:28
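Energy per managed bit is straightforward to compute: divide power draw by throughput. The numbers below are made-up illustrations, not figures from the episode:

```python
# "Energy per bit" is one way to compare links: total power draw
# divided by throughput.
power_w = 10.0    # transceiver + DSP power draw (hypothetical)
rate_bps = 400e9  # 400 Gb/s link

pj_per_bit = power_w / rate_bps * 1e12  # convert joules to picojoules
print(f"{pj_per_bit:.0f} pJ/bit")
```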


The race between copper and fiber 19:13

Fiber wins on long distance because of power consumption

But does fiber win on data rate?

Google Fiber should come to Colorado Springs…and Germany!

To compensate for signal loss over distance, you push more power into transmitting and decoding

Fibers attenuate the signal much less than copper does

But the problem comes when we have to translate the signal back into electrical on the receiving end

Is there a break-even point with fiber and copper? 22:15


Optical communication technology in the future

What speed are we at now and what’s the next technology? 23:05

600G technology will be here eventually

We can expect about 1.5 years between iterations in bandwidth, which is slow by the standards of today’s fast-paced technology.

We typically see 100G speeds today


Predictions 26:00


All About ASICs – #10

Chip sage and ASIC planner Mike Beyers discusses the challenges and trends in integrated circuit design in this week’s electrical engineering podcast.

Hosted by Daniel Bogdanoff and Mike Hoffman, EEs Talk Tech is a twice-monthly engineering podcast discussing tech trends and industry news from an electrical engineer’s perspective.

The future will be built using ASICs! Daniel Bogdanoff and Mike Hoffman sit down with chip sage and planner Mike Beyers to discuss the challenges of building custom application specific integrated circuits. This podcast was inspired by the blog post “Creating an ASIC – Our Quest to Make the Best Cheap Oscilloscope

Video version (YouTube):


Audio version:

Discussion Overview:

We’re finally a real podcast now!

What is an ASIC? An ASIC is an application specific integrated circuit, an IC designed for a specific task.

Why do we use ASICs?

ASIC architecture 101 2:46
The main specification people talk about is the size of the smallest thing you can find on a chip – like the gate of a CMOS transistor

Effective gate length is shorter than the drawn gate length because of the manufacturing process.

Another key spec is how many transistors you can fit in a square mm
Metal layers for interconnects are also important, but can make the mask sets more expensive
Do we care more about a gate’s footprint or its depth? 4:11

Will Moore’s Law hit a ceiling? 4:29
What about using three dimensional structures? 5:37
Is Moore’s Law just a marketing number? 5:51

Does technology ever slow down? 6:29

Power is often the largest limiter 6:58
Google builds data centers next to hydroelectric dams 7:34
Battery power 7:43
Power drives cost 7:53

How does the power problem affect ASICs? 8:25
There are power integrity and thermal management concerns
Dedicated routes on an ASIC vs switching on an FPGA 8:14

Who actually uses ASICs? 10:14
IoT technology – 7 nm and 14 nm chips

A lot of people are using older technology because it’s much more affordable (like 45 nm)

ASICs on your bike could be a thing? 11:16
SRAM wireless electronic bike shifters 11:57
Is bike hacking a real thing? Yes! Encrypted wireless communication helps prevent it.

Is an opamp (operational amplifier) an ASIC?

What to consider when investing in an ASIC 13:23
What’s the next best alternative to building this ASIC?
With an ASIC, you can often drive lower cost while also increasing performance and reliability
Is there a return on investment? 14:24

What happens when Moore’s Law hits a dead end with transistors? 14:46
Could we replace electrical with optical? 15:30
Is it possible that there are other fundamental devices out there, waiting to be discovered? 16:20
The theoretical fourth device, the memristor 17:00

Will analog design ever die? Mike was told to get into digital design.

Non-binary logic could be the future 18:23

If someone wants an ASIC, how do they get one? 18:50
In-house design vs. external fabs/foundries, total turnkey solutions vs. the foundry model

You can get a cheaper chip by going to a larger architecture, but the chip will run hotter and slower.

RTL – the most common languages are Verilog and VHDL, vs. higher-level languages like C 22:50

Behavioral Verilog vs. Structural Verilog 24:00

The history of Keysight ASICs 25:45

Predictions 28:40
How to connect with us 29:00

Optical 101 – #9

How does optical communication work? We sit down with Stefan Loeffler to discuss the basics of optics and its uses for electrical engineering.

Optical communication 101 – learn about the basics of optics! Daniel Bogdanoff and Mike Hoffman interview Stefan Loeffler.

Video Version (YouTube):

Audio version:

Discussion overview:

Similarities between optical and electrical

Stefan was at OFC
What is optics? 1:21
What is optical communication? 1:30
There’s a sender and a receiver (optical telecommunication)
Usually we use fiber optic cable with a 9 µm core, but sometimes we use lasers and air as a medium

The transmitter is typically a laser
LEDs don’t work for optical

Optical fiber alignment is challenging, and is often accomplished using robotics

How is optical different from electrical engineering?

Photodiodes act as receivers and use a transimpedance amplifier. It is essentially “electrical in, electrical out” with optical in the middle.

Optical used to be binary, but now it’s QAM-64

Why do we have optical communication?
A need for long distance communication led to the use of optical.
Communication lines used to follow train tracks, and there were huts every 80 km, so signals could be regenerated every 80 km.

In the 1990s, a new optical amplifier was introduced.

Optical amplifier test solutions

Signal reamplification vs. signal regeneration

There’s a .1 dB per km loss in modern fiber optic cable 11:20
This enables undersea fiber optic communication, which has to be very reliable
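To see why low attenuation enables undersea links, convert dB loss to a power ratio. This sketch uses the 0.1 dB/km figure quoted above and the 80 km regenerator spacing mentioned in the episode (modern fiber at 1550 nm is typically closer to 0.2 dB/km):

```python
# How much optical power survives a long span at a given loss rate?
loss_db_per_km = 0.1  # figure quoted in the episode
span_km = 80          # classic regenerator/amplifier spacing

total_loss_db = loss_db_per_km * span_km
fraction_remaining = 10 ** (-total_loss_db / 10)  # dB -> linear ratio
print(f"{total_loss_db:.0f} dB loss -> {fraction_remaining:.1%} of power remains")
```

Even after 80 km, enough power survives for an amplifier to boost the signal without regenerating it, which is what makes amplified undersea spans practical.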

How does undersea communication get implemented?
Usually by consortium: I-ME-WE, SEA-ME-WE

AT&T was originally a network provider

What is dark fiber (also known as dark fibre)?
Fiber is cheap, installation and right-of-way is expensive

What happens if fiber breaks?

Dark fiber can be used as a sensor by observing the change in its refractive index

Water in a fiber optic line is bad, and anchors often break fiber optic cables 17:30

Fiber optic cable can be made out of a lot of different things

Undersea fiber has to have some extra slack in the cable
Submarines are often used to inspect fiber optic cable

You can find breaks in the line using OTDR – “Optical time domain reflectometry”

A “distributed reflection” means a mostly linear loss. The slope of the reflection tells you the loss rate.

The refractive index in fiber optic cable is about 1.5

Latency and delay 23:00
The main issue is the data processing, not the data transmission
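The refractive index of ~1.5 noted above directly sets the propagation floor on latency, even though processing dominates in practice. A quick sketch:

```python
# A refractive index of ~1.5 means light travels at about 1/1.5 of its
# vacuum speed inside the fiber, which sets a minimum link latency.
c_vacuum_km_s = 299_792.458
n_fiber = 1.5

v_fiber = c_vacuum_km_s / n_fiber    # ~200,000 km/s in the glass
latency_us_per_km = 1e6 / v_fiber    # microseconds per km

print(f"~{latency_us_per_km:.0f} us of latency per km of fiber")
```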

A lot of optical engineers started in RF engineering 24:00

Environmental factors influence the channel; these include temperature, pressure, and physical bends
Recently thunderstorms were found to have an effect on the fiber channel

Distributed fiber sensing is used in drilling

Polarization in fiber, polarization multiplexing techniques
Currently, we’re using 194 THz, which gives 50 nm windows

Future challenges for optical 28:25
It’s cost driven. Laying fiber is expensive. And, when all dark fiber is being used, you have to increase bandwidth on existing fiber.

Shannon relation 30:00
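The Shannon relation, C = B * log2(1 + SNR), bounds what any channel can carry. A sketch with illustrative numbers; the bandwidth and SNR below are assumptions, not figures from the episode:

```python
import math

# Shannon capacity: the maximum error-free data rate for a channel of
# bandwidth B and signal-to-noise ratio SNR.
bandwidth_hz = 50e9  # one 50 GHz optical channel slot (hypothetical)
snr_db = 20          # signal-to-noise ratio in dB (hypothetical)

snr_linear = 10 ** (snr_db / 10)
capacity_bps = bandwidth_hz * math.log2(1 + snr_linear)

print(f"capacity ~= {capacity_bps / 1e9:.0f} Gb/s per channel")
```

This is why the industry chases both wider bandwidth (more channels, new bands) and higher SNR (better lasers, amplifiers, and DSP) to grow fiber capacity.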

Predictions 31:10

Watch the previous episode here!