Speed has become synonymous with quality of life, especially in the digital domain. Speed terms such as gigabits, nanoseconds, and now even picoseconds have become so common that when we encounter a speed-related hurdle it can be difficult to fathom, or even believe. We have come to expect that getting to the next level of speed just requires putting “the pedal to the metal.” In many ways that is true, but certain speed transitions, such as moving from 28 or 32 Gbps to 56 Gbps, present very real hardware design challenges. In his 2015 EDN magazine article, “Strike Up the Bandwidth,” Scott McMorrow of Samtec noted that “The first given is that even if we are successfully achieving information transfer rates of 28 Gbps, as an industry we have to accept that even with the best materials available today we can just barely get to 56 Gbps, which is the next level on the data transfer rate ladder.”
Of course, eight years is ancient history in the computing world, but the reality is that while 28 Gbps is ubiquitous these days, 56 Gbps (and the steps beyond it) remains a hardware design challenge. The reasons for this are described below.
The 56 Gbps Factors
The issues that come into play when moving to 56 Gbps can be quite daunting. This article focuses on some of the most critical factors facing hardware developers for 56 Gbps implementations: developing a skew budget, managing loss in the PCB, connectors, and the move to PAM4 signaling.
Developing a Skew Budget
In the early 2000s, skew was a huge issue in the development of what were then high-speed products. In a nutshell, skew problems in differential pairs arose when one trace of the pair ran over the glass fiber bundles of the weave for some distance while the other ran between them. The resulting mismatch produced skew between the two sides of the pair and degraded the signal at the receiver. The best solution came in the form of mechanically spreading the glass weave so that it was uniform across both sides of the differential pair. This approach worked well until we moved to 56 Gbps. At this speed, the bit periods are small enough that there is almost no margin for skew: a bit period is a little less than 18 picoseconds, and only about a quarter of that, roughly 4.5 picoseconds, can be tolerated as skew. The common knowledge within the industry is that it isn’t possible to spread glass uniformly enough to meet this budget. This has given rise to the use of twinax cable and related products for 56 Gbps implementations from companies such as Samtec. Of course, these cables are more expensive than traces in a PCB, and in addition to buying the cable, there must be connectors on both ends of it.
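The skew arithmetic above can be sketched in a few lines of Python. The 25% allowance is the rule-of-thumb fraction described above, not a figure mandated by any standard:

```python
# Sketch of a skew budget: the unit interval (UI) at a given NRZ data
# rate, and the tolerable intra-pair skew as a fraction of that UI.
def skew_budget_ps(data_rate_gbps: float, fraction: float = 0.25):
    ui_ps = 1000.0 / data_rate_gbps   # one bit period, in picoseconds
    return ui_ps, fraction * ui_ps

ui, budget = skew_budget_ps(56.0)
print(f"UI = {ui:.2f} ps, tolerable skew = {budget:.2f} ps")
# At 56 Gbps the UI is about 17.9 ps, so the budget is roughly 4.5 ps.
```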
By their very nature, PCBs are quite lossy, especially in high-speed implementations. The good news is that transceivers have become good enough that at 56 Gbps the loss in a board trace is acceptable, as long as a very low-Df (loss tangent, or dissipation factor) laminate is used and the surface roughness of the copper is managed. As speeds have climbed, low-Df laminates have become available from several vendors (thereby avoiding sole-supplier risks and costs). Historically, dielectric loss has been much larger than copper loss. Now, with laminate loss tangents of 0.002 and 0.0015, copper losses are comparable in magnitude to dielectric losses. This means it’s time to spend money on foils that are as smooth as can be had. It’s also important to make sure the PCB fabricator doesn’t, either by accident or intention, roughen up the copper (which can happen with fabricators accustomed to using an oxide treatment in their manufacturing processes).
Connectors for 56 Gbps
It’s important to note that the only place we see 56 Gbps is in Internet infrastructure products, where a great deal of data has to be moved very fast. In these products, 56 Gbps is the signal that goes from the SFP module to the first IC on the PCB; the rest of the PCB operates at 28 Gbps. (The same holds true for 128 Gbps products.) In these instances, connectors as we commonly think of them are not in the signal path. The concern is the 56 Gbps signal that goes from the IC to the SFP module and back. Here, twinax cable and flyover technologies, such as those provided by Samtec, avoid the physical and performance limitations of traditional PCB technology.
PCI Express and PAM4
Beyond traditional Ethernet protocol implementations, the other place we see high-speed products is PCI Express. The top speed specified by the released PCI Express standard (revision 5.0) has been 32 GT/s. While the specification was originally designed to expand PC bandwidth, it has been, and still is, used for many other implementations. One reason for this is that the PCI Express protocol is one everyone understands. PCI Express rev 7.0 is now projected to be completed in the 2024-2025 time frame. Rev 6 of the specification specifies 64 GT/s (a rate that already exists in Ethernet chips) and rev 7 specifies 128 GT/s. These high speeds bring PAM4 into the conversation.
As a signal encoding technique, PAM4 has been around for quite a while. However, implementing it in high-speed, high-data-rate products has always been more of a promise than a reality, because it didn’t have enough noise margin and the bit error rate was not low enough to satisfy the standards. This can be seen by comparing the “eye” diagrams for NRZ and PAM4. NRZ uses two signal levels to represent the digital logic signal, with logic 0 being the negative voltage and logic 1 the positive voltage. One bit of information is transmitted or received in each clock period.
In comparison, PAM4 uses four different signal levels for signal transmission, so each symbol period carries two bits of information. This is achieved with a waveform whose four levels each represent a two-bit value: 00, 01, 10, or 11, as shown below.
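The two-bits-per-symbol mapping can be illustrated with a short sketch. The normalized level values (-1, -1/3, +1/3, +1) and the straight-binary bit-to-level assignment follow the 00/01/10/11 ordering above; real links often use Gray coding instead, so treat this as illustrative only:

```python
# Map a bit stream onto NRZ (1 bit/symbol, 2 levels) and PAM4
# (2 bits/symbol, 4 levels). Level values are normalized/illustrative.
NRZ_LEVELS = {0: -1.0, 1: +1.0}
PAM4_LEVELS = {(0, 0): -1.0, (0, 1): -1/3, (1, 0): +1/3, (1, 1): +1.0}

def to_nrz(bits):
    return [NRZ_LEVELS[b] for b in bits]

def to_pam4(bits):
    # pair up consecutive bits; assumes len(bits) is even
    return [PAM4_LEVELS[pair] for pair in zip(bits[0::2], bits[1::2])]

bits = [0, 1, 1, 1, 0, 0, 1, 0]
print(to_nrz(bits))   # 8 symbol periods
print(to_pam4(bits))  # 4 symbol periods for the same bits
```

Note that adjacent PAM4 levels sit only 2/3 apart where the two NRZ levels sit 2 apart: that one-third vertical opening is what lies behind the smaller eyes in Figure 1.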
As shown in Figure 1, the PAM4 eye diagram is significantly smaller. This is what makes the error rate go up and why there is less noise margin. The good news is that PAM4 implementations now include error correction, built into the transceiver, to mitigate the noise issues. Revs 6 and 7 of the PCI Express specification both require PAM4 with error correction. (Until we had transistors by the millions, we couldn’t afford error correction.) The additional signal levels associated with PAM4 do make it a signal integrity challenge.
Figure 1. NRZ vs PAM4 signaling technologies (Information and graphics from FS Community article: NRZ vs. PAM4 Modulation Techniques)
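One reason PAM4 becomes attractive despite the smaller eye is that, for a given bit rate, packing two bits into each symbol halves the symbol rate, and therefore halves the Nyquist frequency the channel must carry. A quick sketch (the 64 GT/s figure comes from the PCIe rev 6 discussion above; the arithmetic is generic):

```python
# Nyquist frequency for a given bit rate and modulation: the symbol
# (baud) rate is bit_rate / bits_per_symbol, and Nyquist is half of that.
def nyquist_ghz(bit_rate_gbps: float, bits_per_symbol: int) -> float:
    return (bit_rate_gbps / bits_per_symbol) / 2.0

print(nyquist_ghz(64, 1))  # NRZ at 64 Gbps  -> 32.0 GHz
print(nyquist_ghz(64, 2))  # PAM4 at 64 Gbps -> 16.0 GHz
```

Halving the bandwidth requirement lets the channel loss problems described earlier stay manageable, at the cost of the noise margin that error correction then has to win back.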
It’s pretty much a given that each time we bump up to the next level of “high speed,” a number of additional factors have to be taken into account. The reality is that there are no “plug-and-play” solutions for the ever-evolving speed spectrum. But understanding what works, what doesn’t, and the requirements for meeting the speed challenges goes a long way toward designing the next generation of 56 Gbps products.