Space Technology – #36

Space requires new technologies. Much like the space race of the 1950s, engineers are feverishly working to gain a competitive advantage. Mark Lombardi sits down to explore rad hardening, thermal vacuum chambers, space mining, CubeSats, and battery technology.

Hosted by Daniel Bogdanoff and Mike Hoffman, EEs Talk Tech is a twice-monthly engineering podcast discussing tech trends and industry news from an electrical engineer’s perspective.



Mark Lombardi – 25 years at HP/Agilent/Keysight. He also worked at RT Logic for a few years, where he got into the space industry.

2:00 – Your odds of surviving a trip to space are better than your odds of making it to the top of Everest.

2:30 – Space mining in the asteroid belt has the potential to create the world’s first trillionaire.

3:20 – We need to establish manufacturing in space. For example, what if you manufactured satellites on the moon instead of on earth?

4:00 – The main driver is price-per-pound

6:10 – The Space Force – it sounds a little silly at first but is very reasonable when you take a closer look.

7:45 – How do you test objects bound for space?

8:30 – Space is transitioning from government-only to commercial. Businesses are starting to explore how to add value to society and make a profit from space.

9:15 – Phased arrays, reusable rockets, LEO satellites are all changing space technology.

10:00 – Low earth orbit satellites have much lower delay. Geosynchronous satellites have a 250 ms propagation delay.

This has interesting implications for 5G: 250 ms of latency is too long for 5G requirements, so LEO satellites are what will be used.
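The delay numbers fall straight out of the speed of light. A minimal sketch (the altitudes are illustrative round numbers, not figures from the episode; the episode’s 250 ms corresponds to the round trip up to GEO and back down):

```python
# One-way propagation delay for GEO vs. LEO satellite links.
C = 299_792_458  # speed of light, m/s

def one_way_delay_ms(altitude_km: float) -> float:
    """Straight-up, one-way propagation delay in milliseconds."""
    return altitude_km * 1000 / C * 1000

geo_ms = one_way_delay_ms(35_786)  # geostationary altitude
leo_ms = one_way_delay_ms(550)     # a typical LEO constellation altitude

print(f"GEO one-way: {geo_ms:.0f} ms, LEO one-way: {leo_ms:.1f} ms")
```

Doubling the GEO figure for the up-and-down hop gives roughly the 250 ms quoted in the episode, while LEO stays in the low single-digit milliseconds.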

12:00 – LEO satellites will be deployed in force instead of as singles, as mentioned in the Weather CubeSat podcast.

13:45 – Ghana launched their own satellite, which is a huge step. They eventually won’t be dependent on others for their space access. And, they can do specialized things for reasonable prices.

15:00 – Announcements – we haven’t podcasted in a long time, sorry! We are switching to once per month.

16:45 – Radiation hardening for electronics, sometimes called electronics hardening. Historically, you had to plan for a long life in a satellite. Now, you don’t have to.

17:30 – It’s also hard to get rad-hardened versions of cutting-edge technology.

18:00 – LEO satellites get less radiation, so it’s less of a problem. And, since they are cheaper, you can build in an expected mortality rate.

19:00 – You can also rev hardware faster, allowing you to use newer technology. Think about imagers: the technology has moved a long way in seven years.

19:55 – Space is cold. Space is a vacuum. So, to test our gear you have to reproduce that on earth. To do that, we use special chambers.

20:50 – Thermal vacuum chambers (T vac) are used to test space objects. Automotive parts are actually very resilient to temperature changes and can be leveraged in space designs.

21:30 – What happens to electronics in space? The vacuum is a bigger challenge than the temperature changes.

23:30 – To get more bandwidth, we have to increase frequency. This leads to attenuation in the air and in cables. Some designers are switching to waveguides.

25:00 – With modular test equipment, you could potentially have test gear that can survive in space.

27:00 – What is the current and projected size of the space industry?

28:10 – What batteries are used in space? What factors into battery decisions? – Lithium ion batteries work well in space, and are used when we can charge them with solar energy.

28:40 – Deep space exploration uses all sorts of obscure battery technology.

29:10 – Electric propulsion

30:05 – Over 150V, things get interesting. The breakdown voltage is different in space than it is on earth. So, designers have to be very careful.
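The “breakdown voltage is different in space” point is usually modeled with Paschen’s law, which makes breakdown depend on pressure times gap distance. A rough sketch with approximate textbook constants for air (the constants vary by source and are not from the episode):

```python
import math

# Paschen's-law sketch: breakdown voltage of air between parallel plates
# as a function of pressure * distance. Approximate constants for air.
A = 15.0      # 1/(Torr*cm), ionization saturation constant (approx.)
B = 365.0     # V/(Torr*cm), related to ionization energy (approx.)
GAMMA = 0.01  # secondary-electron emission coefficient (approx.)

def breakdown_voltage(pd_torr_cm: float) -> float:
    """Breakdown voltage (V) for a given pressure*distance product."""
    return (B * pd_torr_cm) / (
        math.log(A * pd_torr_cm) - math.log(math.log(1 + 1 / GAMMA))
    )

# Sweep pressure*distance (starting where the model is valid) and find
# the Paschen minimum -- the lowest voltage at which air breaks down.
pds = [0.35 + 0.01 * i for i in range(200)]
v_min = min(breakdown_voltage(pd) for pd in pds)
print(f"Minimum breakdown voltage (air, these constants): ~{v_min:.0f} V")
```

At very low pressures the curve climbs steeply again, which is why designers of high-voltage space electronics have to be so careful: a gap that is safe at sea level can arc at the partial pressures seen during ascent.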

Intro to RF – EEs Talk Tech Electrical Engineering Podcast #21

Learn about RF designs, radio frequencies, RADAR, GPS, and RF terms you need to know in today’s electrical engineering podcast!

We sit down with Phil Gresock to talk about the basics of RF for “DC plebeians.”



RF stands for radio frequency

00:40 Phil Gresock was an RF application engineer

1:15 Everything is time domain, but a lot of RF testing tools end up being frequency domain oriented

2:15 Think about radio, for example. A tall radio tower isn’t actually one big antenna!

3:50 Check out the FCC spectrum allocation chart

4:10 RF communication is useful when we want to communicate and it doesn’t make sense to run a cable to the thing we’re communicating with.

4:50 When you tune your radio to a frequency, you are tuning to a center frequency. The center frequency is then downconverted into a range.

6:30 Check out Mike’s blog on how signal modulation works:

7:00 Communication is just one use case. RADAR also is an RF application.

8:10 The principles behind RF and DC or digital use models are very similar, but the words we use tend to be different.

Bandwidth for oscilloscopes means DC to a frequency, but for RF it means the analysis bandwidth around a center frequency

9:22 Cellular standards and the FCC allocation chart talk about different “channels.”

Channel in the RF world refers to frequency ranges, but in the DC domain it typically refers to a specific input.

10:25 Basic RF block diagram:

First, there’s an input from an FPGA or data creating device. Then, the signal gets mixed with a local oscillator (LO). That then connects to a transmission medium, like a fiber optic cable or through the air.

Cable TV is an RF signal that is cabled, not wireless.

Then, the transmitted signal connects to an RF downconverter, which is basically another mixer, and that gets fed into a processing block.
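The mixer stage in that block diagram can be sketched in a few lines: multiplying a signal by a local oscillator shifts it in frequency, producing sum and difference tones. All the frequencies below are illustrative, not from the episode:

```python
import numpy as np

# Mixing an RF tone with a local oscillator (LO) produces sum and
# difference frequencies -- the basis of up- and down-conversion.
fs = 1_000_000                  # sample rate: 1 MHz
n = 1000
t = np.arange(n) / fs
f_rf, f_lo = 200_000, 180_000   # RF input and LO frequencies

mixed = np.cos(2 * np.pi * f_rf * t) * np.cos(2 * np.pi * f_lo * t)

spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(n, 1 / fs)
peaks = freqs[np.argsort(spectrum)[-2:]]  # the two strongest tones
print(sorted(peaks))  # difference (20 kHz) and sum (380 kHz) products
```

A receiver keeps the difference product (the downconverted signal) and filters away the sum; a transmitter does the reverse.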

13:50 Tesla created a remote control boat and pretended it was voice controlled.

15:30 Does the military arena influence consumer electronics, or does the consumer electronics industry influence military technology?

16:00 GPS is a great example of military tech moving to consumer electronics

17:00 IoT (internet of things) is also driving a lot of the technology

18:00 The ISM band is unlicensed!

19:15 A router uses a regulated frequency band and hops off a channel when it’s being used for emergency communications

20:50 RADAR, how does it work?

22:22 To learn more about RF, check out App Note 150 here:




Heterogeneous Computing & Quantum Engineering – #17

Learn about parallel computing, the rise of heterogeneous processing (also known as hybrid processing), and quantum engineering in today’s EEs Talk Tech electrical engineering podcast!



Audio link:


Parallel computing used to be a way of sharing tasks between processor cores.

When processor clock rates stopped increasing, the response of the microprocessor companies was to increase the number of cores on a chip to increase throughput.


But now, the increased use of specialized processing elements has become more popular.

A GPU is a good example of this. A GPU is very different from an x86 or ARM processor and is tuned for a different type of processing.

GPUs are very good at matrix math and vector math. Originally, they were designed to process pixels. They use a lot of floating point math because the math behind how a pixel value is computed is very complex.

A GPU is very useful if you have a number of identical operations you have to calculate at the same time.
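The “many identical operations at once” point can be felt even on a CPU with NumPy-style vectorization, which is the same data-parallel programming model GPUs accelerate. A small sketch (the pixel-adjustment numbers are illustrative):

```python
import numpy as np

# Data-parallel style: one identical operation applied across many
# elements at once -- here, a brightness/contrast tweak over a whole
# "image" of a million pixel values.
pixels = np.linspace(0.0, 1.0, 1_000_000)          # stand-in for image data
adjusted = np.clip(pixels * 1.2 + 0.05, 0.0, 1.0)  # same math on every pixel

print(adjusted.shape, float(adjusted.min()), float(adjusted.max()))
```

On a GPU each of those million multiply-adds can run on its own thread, which is exactly the workload shape the episode describes.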


GPUs used to be external daughter cards, but in the last year or two GPU manufacturers have started to release low power parts suitable for embedded applications. They include several traditional cores and a GPU.

So, now you can build embedded systems that take advantage of machine learning algorithms that would traditionally have required too much processing power and dissipated too much heat.



This is an example of a heterogeneous processor, also called a hybrid processor (AMD ships parts like this). A heterogeneous processor contains cores of different types, and a software architect figures out which types of workloads are processed by which type of core.

Andrew Chen (professor) has predicted that this will increase in popularity because it’s become difficult to take advantage of shrinking the semiconductor feature size.


This year or next year, we will start to see heterogeneous processors with multiple types of cores.

Traditional processors are tuned for algorithms on integer and floating point operations where there isn’t an advantage to doing more than one thing at a time. The dependency chain is very linear.

A GPU is good at doing multiple computations at the same time so it can be useful when there aren’t tight dependency chains.

Neither processor is very good at doing real-time processing. If you have real time constraints – the latency between an ADC and the “answer” returned by the system must be short – there is a lot of computing required right now. So, a new type of digital hardware is required. Right now, ASICs and FPGAs tend to fill that gap, as we’ve discussed in the All about ASICs podcast.


Quantum cores (like we discussed in the what is quantum computing podcast) are something that we could see on processor boards at some point. Dedicated quantum computers that can exceed the performance of traditional computers will be introduced within the next 50 years, and as soon as the next 10 or 15 years.

To be a consumer product, a quantum computer would have to be a solid state device, but their existence is purely speculative at this point in time.


Quantum computing is reinventing how processing happens. And, quantum computers are going to tackle very different types of problems than conventional computers.


There is a catalog on the web of problems and algorithms that would run substantially better on a quantum computer than on a traditional computer.


People are creating algorithms for computers that don’t even exist yet.

The Economist estimated that the total spend on quantum computing research is over $1 billion per year globally. A huge portion of that interest is generated by the promise of these algorithms and papers.

Quantum computers will not completely replace typical processors.


Lee’s opinion is that the quantum computing industry is still very speculative, but the upsides are so great that neither the incumbent large computing companies nor the industrialized countries want to be left behind if it does take off.

The promise of quantum computing is beyond just the commercial industry, it’s international and inter-industry. You can find long whitepapers from all sorts of different governments laying out a quantum computing research strategy. There’s also a lot of venture capitalists investing in quantum computing.


Is this research and development public, or is there a lot of proprietary information out there? It’s a mixture: many of the startups and companies have software components that they are open sourcing and claim to have “bits of physics” working (quantum bits, or qubits), but they are definitely keeping trade secrets.

19:50 Quantum communication means space lasers.

Engineering with quantum effects has promise as an industry. One can send photons with entangled states. The Chinese government has a satellite that can generate these photons and send them to base stations. If anyone reads them they can tell because the wave function collapsed too soon.

Quantum sensing promises to develop accelerometers and gyroscopes that are orders of magnitude more sensitive than what’s commercially available today.


Quantum engineering could become a new field. Electrical engineering was born roughly 140 years ago, electronics roughly 70 years ago, and computer science grew out of math and electrical engineering. It’s possible that the birth of quantum engineering will be dated to some point in the next 5 years or the last 5 years.


Lee’s favorite quantum state is the Bell state. It’s the equal probability state between 1 and 0, among other interesting properties. The Bell state encapsulates a lot of the quantum weirdness in one snippet of math.
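The Bell state can be built in a few lines of statevector math using the standard Hadamard-then-CNOT construction (a sketch, not code from the episode):

```python
import numpy as np

# Build the Bell state (|00> + |11>)/sqrt(2) from |00>:
# apply a Hadamard to the first qubit, then a CNOT.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.array([1.0, 0.0, 0.0, 0.0])   # |00>
state = CNOT @ (np.kron(H, I) @ state)   # Bell state

probs = state ** 2  # measurement probabilities for |00>, |01>, |10>, |11>
print(probs)        # [0.5, 0, 0, 0.5]
```

The outcomes |00> and |11> each occur with probability 1/2, and |01> and |10> never occur: the qubits are perfectly correlated, which is the “quantum weirdness in one snippet of math” Lee mentions.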








The World’s Fastest ADC – #13

Learn about designing the world’s fastest ADC in today’s electrical engineering podcast! We sit down with Mike to talk about ADC design and ADC specs. Hosted by Daniel Bogdanoff and Mike Hoffman, EEs Talk Tech is a twice-monthly engineering podcast discussing tech trends and industry news from an electrical engineer’s perspective.


We talk to ASIC Planner Mike Beyers about what it takes to design the world’s fastest ADC in today’s electrical engineering podcast.

Video Version (YouTube):


Audio Only:

Mike is an ASIC planner on the ASIC Design Team.

As pre-study, learn about making an ASIC.


What is an ADC?

An ADC is an analog to digital converter, it takes analog data inputs and provides digital data outputs.

What’s the difference between analog and digital ASICs?

There are three types of ASICs:
1. Signal conditioning ASICs
2. Converters, either digital-to-analog (DAC) or analog-to-digital (ADC), which sit between types 1 and 3
3. Signal processing ASICs, also known as digital ASICs

Signal conditioning ASICs can be very simple or very complicated
e.g. Stripline filters are simple; the front end of an oscilloscope can be complicated

There’s a distinction between a converter vs. an analog chip with some digital functionality
A converter has both digital and analog. But there are some analog chips with a digital interface, like an I2C or SPI interface.

How do you get what’s happening into the analog world onto a digital interface, and how fast can you do it?

Mike Hoffman built a basic ADC in school using a chain of operational amplifiers (op-amps)
A ladder converter, or “thermometer code” converter, is the most basic of ADC designs
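The thermometer-code idea can be sketched in a few lines: a bank of comparators, each with a slightly higher reference voltage, fires in order as the input rises, like mercury in a thermometer. The parameter choices below are illustrative:

```python
# Toy flash ("thermometer code") ADC: 2**N - 1 comparators give N bits.
def flash_adc(v_in: float, v_ref: float = 1.0, bits: int = 3) -> int:
    levels = 2 ** bits - 1                           # number of comparators
    thresholds = [(i + 1) * v_ref / (levels + 1) for i in range(levels)]
    thermometer = [v_in > th for th in thresholds]   # comparator outputs
    return sum(thermometer)                          # encode to a binary count

print(flash_adc(0.0), flash_adc(0.5), flash_adc(0.99))  # 0, 3, 7
```

Real flash converters work the same way but in parallel analog hardware, which is why they are fast and why the comparator count explodes as you add bits.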

A slow ADC can use single ended CMOS, a faster ADC might use parallel LVDS, now it’s almost always SERDES for highest performance chips

The world’s fastest ADC?

Why do we design ADCs? We usually don’t make what we can buy off the shelf.

The Nyquist rate determines the necessary sample rate, for example, a 10 GHz signal needs to be sampled at 20 – 25 Gigasamples per second
1/25 GHz = 40 ps
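That back-of-the-envelope math as a quick script:

```python
# Nyquist says sample at least 2x the highest signal frequency.
signal_ghz = 10
nyquist_min_gsa = 2 * signal_ghz        # 20 GSa/s minimum
sample_rate_gsa = 25                    # comfortably above the minimum
sample_period_ps = 1000 / sample_rate_gsa  # 1/(25 GHz), in picoseconds

print(nyquist_min_gsa, sample_period_ps)  # 20, 40.0
```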

ADC vertical resolution is the number of bits.

So, ADCs generally have two main specs, speed (sample rate) and vertical resolution.

The ability to measure time very accurately is often most important, but people often miss the noise side of things.

It’s easy to oversimplify into just two specs, but there’s more that has to be considered: specifications like bandwidth, frequency flatness, noise, and SFDR.

It’s much easier to add bits to an ADC design than it is to decrease the ADC’s noise.

Noise floor, SFDR, and SNR measure how good an analog to digital converter is.

SFDR means “spurious free dynamic range” and SNR means “signal to noise ratio”
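The standard textbook relation between bits and noise (not stated in the episode, but it underlies the “adding bits is easier than lowering noise” point) is that an ideal N-bit ADC has SNR of about 6.02N + 1.76 dB, and a measured SNR can be inverted into an effective number of bits (ENOB):

```python
# Ideal quantization SNR for an N-bit ADC, and the inverse (ENOB).
def ideal_snr_db(bits: int) -> float:
    return 6.02 * bits + 1.76

def enob(snr_db: float) -> float:
    return (snr_db - 1.76) / 6.02

print(f"Ideal 10-bit SNR: {ideal_snr_db(10):.1f} dB")
print(f"ENOB at 50 dB measured SNR: {enob(50.0):.2f} bits")
```

So a nominal 10-bit converter whose measured SNR is only 50 dB is really delivering about 8 effective bits: extra marketing bits buy nothing if the noise floor doesn’t come down with them.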

Other things you need to worry about are error codes, especially for instrumentation.

For some ADC folding architectures and successive approximation architectures, there can be big errors. This is acceptable for communication systems but not for equipment that visualizes signals.

So, there are a lot of factors to consider when choosing an ADC.

Where does ADC noise come from? It comes from both the ADC and from the support circuitry.

We start with a noise budget for the instrument and allocate the budget to different blocks of the oscilloscope or instrument design.

Is an ADC the ultimate ASIC challenge? It’s both difficult analog design and difficult high-speed digital design, so we have to use fine geometry CMOS processes to make it happen.

How fast are our current ADCs? 160 Gigasamples per second.

We accomplish that with a chain of ADCs, not just a single ADC.

ADC interleaving. If you think about it simply, if you want to double your sample rate you can just double the number of ADCs and shift their sampling clocks.

But this has two problems. First, they still have the same bandwidth, you don’t get an increase. Second, you have to get a very good clock and offset them carefully.

To get higher bandwidth, you can use a sampler, which is basically just a very fast switch with higher bandwidth that then delivers the signal to the ADCs at a lower bandwidth

But, you have to deal with new problems like intersymbol interference (ISI).

So, what are the downsides of interleaving?

Getting everything to match up is hard, so you have to have a lot of adjustability to calibrate the samplers.

For example, if the quantization levels of one ADC are higher than the other’s, you’ll get a lot of problems, like frequency spurs and gain errors.

We can minimize this with calibration and some DSP (digital signal processing) after the capture.
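The interleaving-mismatch spur is easy to reproduce in simulation. Here two interleaved ADCs with slightly different gains digitize one clean tone; the mismatch shows up as a spur at fs/2 minus the input frequency (all values illustrative):

```python
import numpy as np

# Two time-interleaved ADCs with a 5% gain mismatch between them.
fs = 1000.0                 # combined sample rate (arbitrary units)
n = 1024
t = np.arange(n) / fs
f_in = 104 * fs / n         # input tone placed exactly on an FFT bin
signal = np.sin(2 * np.pi * f_in * t)

# Even samples go through ADC A (gain 1.00), odd through ADC B (gain 0.95).
gains = np.where(np.arange(n) % 2 == 0, 1.00, 0.95)
captured = signal * gains

spectrum = np.abs(np.fft.rfft(captured))
freqs = np.fft.rfftfreq(n, 1 / fs)
tone_bin = np.argmax(spectrum)
spur_bin = np.argmax(np.where(freqs > freqs[tone_bin], spectrum, 0))
print(freqs[tone_bin], freqs[spur_bin])  # tone at f_in, spur at fs/2 - f_in
```

Calibration (equalizing the two gains) or post-capture DSP, as mentioned above, is exactly what pushes that spur back down into the noise floor.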

Triple interleaving and double interleaving – the devil is in the details

Internally, our ADCs are made up of a number of slices of smaller, slower ADC blocks.

Internally, we have three teams. An analog ASIC team, a digital ASIC team, and also an ADC ASIC team.

Technology for ADCs is “marching forward at an incredible rate”

The off-the-shelf ADC technologies are enabling new technologies like 5G, 100G/400G/1T Ethernet, and DSP processing.

Is processing driven by ADCs, or are ADCs advancing processor technology? Both!


Mike H.: New “stupid question for the guest” section
What is your favorite sample rate and why?
400 MSa/s – one of the first scopes Mike B. worked on. Remember “4 equals 5”

How Internet is Delivered – Data Centers and Infrastructure – #12

Laser Netflix delivery, backyard data centers, and how the internet gets delivered to homes and businesses. This week’s podcast guest is optical guru Stefan Loeffler. Hosted by Daniel Bogdanoff and Mike Hoffman, EEs Talk Tech is a twice-monthly engineering podcast discussing tech trends and industry news from an electrical engineer’s perspective.


The conversation continues with optical communications guru, Stefan Loeffler. In this episode, Daniel Bogdanoff and Mike Hoffman discuss optical infrastructure today and what the future holds for optics.

Video version (YouTube):


Audio Version:

Discussion Overview:

Optical Communication Infrastructure 00:30

Optics = Laser-driven Netflix delivery system

Client-side vs line-side 1:00

Line-side is the network that transports the signals from the supplier to the consumer

Client-side is the equipment that is either a consumer or business, accepting the data from the network provider.


Yellow cables in your wall indicate the presence of fiber 1:40

Technically, optics is communication using radiation! But it’s radiation that is invisible to us as humans. 2:20


Getting fiber all the way to the antenna is one of the major new technologies 2:30

But this requires you to have power at the antenna 2:45

However, there is typically a “hotel,” or base station, at the bottom of the antenna where the power is and where fiber traditionally connects, instead of running up to the antenna

Really new or experimental antennas have fiber running all the way up the pole  3:28


Network topologies- star, ring, and mesh 3:42

Base stations are usually organized in star-form, or a star network pattern. A star network starts at a single base station and distributes data to multiple cells

Rings (ring networks) are popular in metro infrastructure because you can encircle an entire area 4:20

Optical rings are like traffic circles for data.

Is ring topology the most efficient or flexible? 6:20

An advantage of ring and mesh topologies is built-in resilience

Mesh topologies have more bandwidth but require more fiber optic cable 7:10

How often is the topology or format of a network defined by geography or regulations? 8:30


How consumers get fiber 9:20

Business or academic campuses typically utilize mesh networks on the client side, subscribing to a fiber provider

Fiber itself or a certain bandwidth using that fiber can be leased

If you’re a business, like a financial institution, and latency or bandwidth is critical, leasing fiber is necessary so you have control over the network 9:45


What’s the limiting factor of optical? 

What are the limitations of the hardware that’s sending/receiving optical signals? 11:08

Whatever we do in fiber, at some point, it is electrical 11:27

There will be a tipping point where quantum computing and photon-computing (optical computing) comes into play 11:40

Will optical links ever compete with silicon? Maybe we will have optical computers in the future 12:02

The limiting factor is the power supply 12:40

What’s costing all this energy? 12:58

The more data (bits and bytes) we push through, the more energy in the form of optical photons or electrons we are pushing through. We also must use a DSP for decoding which costs energy

One of the first 100 Gb/s links between two clients was between the New York Stock Exchange and the London Stock Exchange 14:00


The evolution of the transmission of data 14:45 

Will we ever have open-air optical communication? 15:50

RF technology uses open-air communication today, but it is easy to disturb

The basic material fiber is made of is cheap (silica, quartz), and can be found on any beach 16:08

Copper, by contrast, has a supply problem and thus continues to increase in price


Other uses for optical 16:33

Crystal fiber (photonic-crystal fiber) and multicore fiber are being experimented with to increase the usable bandwidth

Optical, as waveguides, can be built into small wafer sections 17:15

Optics is used in electrical chips when photons are easier to push through than electrons

Cross-talk can happen with optical, too 18:13

Testing is done with optical probing, which works because of optical coupling

Optical-to-electrical converter solution 


Optical satellite communication 19:48

Hollow-fiber could be used in a vacuum, such as space

The refractive index of the fiber’s core is higher than the cladding, which guides the optical signal through 21:05

A hollow-fiber would be like a mini mirror tube
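The core-versus-cladding guiding condition at 21:05 reduces to a couple of formulas: light stays trapped by total internal reflection beyond the critical angle, and the index contrast sets the fiber’s numerical aperture. A sketch with illustrative indices close to real silica fiber (not figures from the episode):

```python
import math

# Why a higher-index core guides light: total internal reflection.
n_core = 1.4682   # illustrative core index
n_clad = 1.4629   # illustrative cladding index (slightly lower)

# Rays hitting the core/cladding boundary beyond this angle are trapped.
critical_angle_deg = math.degrees(math.asin(n_clad / n_core))

# Numerical aperture: how steep an input cone the fiber will accept.
numerical_aperture = math.sqrt(n_core**2 - n_clad**2)

print(f"Critical angle: {critical_angle_deg:.1f} deg")
print(f"Numerical aperture: {numerical_aperture:.3f}")
```

The tiny index difference is deliberate: it keeps the acceptance cone narrow so that single-mode fiber carries only one spatial mode.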


Optical data transmission 21:25 

Higher carrier frequencies means you can modulate faster, but there’s more loss and dispersion

This means optical communication could be harder in open-air vs. in traditional fiber 22:45

70-80% headroom is typical

The congested part of a network drives the change in technology. 24:25


Mega data centers vs. distributed data centers 

Cooling and power are important, so big data centers are being built by Google, Facebook, and Netflix in places where cheap, cool water is abundant 24:30

Distributed data centers are becoming more popular than mega-data centers 24:55

All images on Facebook have “cdn” in the URL because the image is hosted on a content delivery network (CDN)

Data centers are described by megawatts (MW) of power, not size or amount of data processed 26:20

Internal data center traffic takes up about 75% of the traffic 27:47

Distributed networks utilize a mesh network and require communication between networks


Telecom starts using faster fiber when about 20% of the fiber is used 28:55

This 20% utilization is also common in CAN buses because of safety-critical data communication

Uptime guarantees require the Telecom industry to keep this number at 20%


Keysight optical resources and solutions  31:00

Predictions 31:45

Also, check out our previous conversations with Stefan about Optical Communication 101 and Optical Communication Techniques.

Optical 101 – #9

How does optical communication work? We sit down with Stefan Loeffler to discuss the basics of optics and its uses for electrical engineering.


Video Version (YouTube):

Audio version:

Discussion overview:

Similarities between optical and electrical

Stefan was at OFC
What is optics? 1:21
What is optical communication? 1:30
There’s a sender and a receiver (optical telecommunication)
Usually we use a 9 µm fiber optic cable, but sometimes we use lasers and air as a medium

The transmitter is typically a laser
LEDs don’t work for high-speed, long-distance optical links

Optical fiber alignment is challenging, and is often accomplished using robotics

How is optical different from electrical engineering?

Photodiodes act as receivers, paired with a transimpedance amplifier. It is essentially “electrical in, electrical out” with optics in the middle.

Optical used to be binary (on-off keying), but now it’s 64-QAM

Why do we have optical communication?
A need for long distance communication led to the use of optical.
Communication lines used to follow train tracks, and there were huts every 80 km where signals could be regenerated.

In the 1990s, a new optical amplifier, the erbium-doped fiber amplifier (EDFA), was introduced.

Optical amplifier test solutions

Signal re-amplification vs. signal regeneration

There’s a 0.1 dB per km loss in modern fiber optic cable 11:20
This enables undersea fiber optic communication, which has to be very reliable
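The 0.1 dB/km figure turns into a quick link budget: how far can a signal go before it drops by some allowable loss? (The 8 dB budget below is illustrative; it happens to reproduce the 80 km hut spacing mentioned later in the notes.)

```python
# Simple span-length calculation from per-km fiber loss.
LOSS_DB_PER_KM = 0.1  # modern fiber, per the episode

def span_km(budget_db: float) -> float:
    """Distance reachable before the signal drops by budget_db."""
    return budget_db / LOSS_DB_PER_KM

print(span_km(8))   # 80.0 km per amplifier span with an 8 dB budget
```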

How does undersea communication get implemented?
Usually by consortium (e.g., I-ME-WE, SEA-ME-WE)

AT&T was originally a network provider

What is dark fiber (also known as dark fibre)?
Fiber is cheap, installation and right-of-way is expensive

What happens if fiber breaks?

Dark fiber can be used as a sensor by observing the change in its refractive index

Water in a fiber optic line is bad, and anchors often break fiber optic cable 17:30

Fiber optic cable can be made out of a lot of different things

Undersea fiber has to have some extra slack in the cable
Submarines are often used to inspect fiber optic cable

You can find breaks in the line using OTDR – “Optical time domain reflectometry”

A “distributed reflection” means a mostly linear loss. The slope of the reflection tells you the loss rate.

The refractive index in fiber optic cable is about 1.5
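That index of ~1.5 means light in fiber travels at roughly two-thirds of the vacuum speed of light, which sets a hard floor on latency. A quick sketch (the 6000 km distance is an illustrative long-haul figure):

```python
# Propagation delay through fiber from the refractive index.
C_KM_PER_S = 299_792.458  # speed of light in km/s
N_FIBER = 1.5             # refractive index of the fiber core

def fiber_delay_us(distance_km: float) -> float:
    """One-way propagation delay through fiber, in microseconds."""
    return distance_km * N_FIBER / C_KM_PER_S * 1e6

print(f"{fiber_delay_us(1):.2f} us per km")
print(f"{fiber_delay_us(6000) / 1000:.0f} ms one-way over ~6000 km")
```

About 5 µs per kilometer, so roughly 30 ms one-way across an ocean: as the next note says, the processing, not the transmission, is usually the bigger latency problem.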

Latency and delay 23:00
The main issue is the data processing, not the data transmission

A lot of optical engineers started in RF engineering 24:00

Environmental factors influence the channel, these include temperature, pressure, and physical bends
Recently, thunderstorms were found to have an effect on the fiber channel

Distributed fiber sensing is used in drilling

Polarization in fiber, polarization multiplexing techniques
Currently, we’re using carriers around 194 THz, which gives ~50 nm windows

Future challenges for optical 28:25
It’s cost driven. Laying fiber is expensive. And, when all dark fiber is being used, you have to increase bandwidth on existing fiber.

Shannon relation 30:00
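The Shannon relation mentioned here is C = B·log2(1 + SNR): capacity grows linearly with bandwidth but only logarithmically with signal-to-noise ratio, which is why squeezing more out of existing fiber gets progressively harder. A quick sketch with illustrative numbers:

```python
import math

# Shannon channel capacity: C = B * log2(1 + SNR).
def shannon_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    return bandwidth_hz * math.log2(1 + snr_linear)

# e.g. a 50 GHz optical channel at 20 dB SNR (linear SNR = 100):
cap = shannon_capacity_bps(50e9, 100)
print(f"~{cap / 1e9:.0f} Gbit/s upper bound")
```

Doubling the SNR here adds less than one extra bit per symbol, while doubling the bandwidth doubles capacity outright, which is the economic pressure behind lighting more spectrum on dark fiber.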

Predictions 31:10

Watch the previous episode here!