5G is often heralded as the future of communications technology. It is actually the antithesis: the anti-Internet, clawing intelligence back into the network and limiting innovation. With 5G, consumers would once again be limited to a menu of offerings, and a new generation would rediscover the busy signal.
From Services to Opportunity
5G represents a threat to the level playing field and innovation of the Internet. It is the new face of the battle over network neutrality.
At its inception in the 19th century, the talking telegraph (AKA the telephone) was an amazing feat of engineering using analog technology. One could speak into a microphone in one city and be heard in another. Accomplishing this required a very large investment in technology. The customers were consumers of the service.
Today, we have generic computing platforms and generic connectivity (The Internet). This has made it possible for anyone to write a telephony application and share it with others. The nature of consumer technology has changed.
We no longer need to depend on a provider for services such as telephony. We have many companies offering not only voice over IP (VoIP) but video too. Using open APIs, those with programming skills (or toolkits) can implement such applications themselves. Despite the term “Consumer Technology”, we can each be creators and contributors.
5G is at odds with this movement of functionality from the providers’ networks into our devices. We are moving from intelligence in the network to intelligent devices. As I see it, 5G seems more like the past of networking than the future.
My view is that the IEEE has a responsibility to take a policy-neutral stand and treat 5G as one of many approaches to connectivity.
From Consuming to Creating
In 1956 the D.C. Circuit Court of Appeals ruled against AT&T and in favor of Hush-a-Phone (nothing more than a box used to provide some privacy). AT&T provided telephony as a service and argued that Hush-a-Phone degraded that service. Every element of a trillion-dollar worldwide phone system was built towards supporting that one service. AT&T was the provider, and customers consumed its telephony service.
5G is an attempt to return to the days before Hush-a-Phone, using the 5G radio as a MacGuffin. The argument is that without 5G we can’t have a world in which people casually assume video conferencing and connected devices will just work, because only a phone company can do voice and video. This from the industry that couldn’t make a business of Picturephone after forty years of trying. Zoom and others succeeded because they changed the rules and didn’t try to capture value in the network. The consumers have won, and that’s an existential crisis for telecom.
Today a “telephone” is merely an app using generic connectivity (such as Wi-Fi). While we still use the word consumer, we are as much producers as consumers. We can harness the technology to create and, by sharing software, we can share our creativity with others. Many have the skills and potential to write a telephone app.
In that sense, everything has become consumer technology as we use software to reinvent the world. You can get a phone app from Skype rather than your service provider.
As I wrote in “From Broadband to Infrastructure,” this shift from telephony being a network service to being implemented as an app requires a shift from purpose-built infrastructure to generic connectivity and generic computing, from special-purpose circuits to generic microprocessors.
The Internet has changed how we think about connectivity. With 5G, the traditional providers are trying to put the genie back in the bottle. The IEEE should play a leadership role in helping policymakers to understand this new landscape. This is why I’m concerned when I see 5G declared as the future of networks.
The Saga of Red/Green
Before we get to 5G, it’s worth looking back at why the red/green analog interface was so powerful. In the 1980s, the carriers introduced their digital service — ISDN (Integrated Services Digital Network). It extended the intelligent phone network all the way to the premises (home or office) and, at first glance, that seemed wonderful.
But users (such as myself) had taken advantage of the simplicity of the red/green analog wire interface to attach our own devices (even before the 1968 Carterfone decision made it legal). By the 1990s, modems ran at 56Kbps — the same speed as an ISDN B channel. The only reason they didn’t go faster was that the repurposed digital telephone network had a hard limit of 56Kbps built into its protocols.
Consumer innovation outpaced telecom. This is a recurring theme. I had proposed replacing the standard equipment in the central office (line cards) with digital versions that would automatically offer DSL capabilities at a very low cost. At $100/line, we could’ve had universal broadband in the 1990s!
From Telephony to The Internet. And Back Again.
Telephony was an amazing achievement of 19th-century engineering. Different telephone companies had competed for customers, but getting them to cooperate and interoperate was difficult, so AT&T was given stewardship of the phone network as a regulated utility.
Television, in the 1930s, was another amazing feat of engineering. It required every element of the system to be engineered to microsecond precision using analog electronics. Unlike the telephone system, you owned your television just like you owned your radio. Interoperability was achieved through standards and licensing technology.
The IEEE played a vital role in creating these standards. Today’s Consumer Technology Society is a direct descendant of the IEEE Broadcast Society. The IEEE itself was formed by the merger of the IRE (Institute of Radio Engineers) and the AIEE (American Institute of Electrical Engineers). Value was created using electronics (radios) and electricity (power engineering).
Getting all the elements of a telephone network to work together and preserve the waveform over a long distance is very difficult and expensive. Note my careful use of words — “telephone network” rather than “communications network” and “waveform” rather than “signal”. The latter words were borrowed from day-to-day language because their technical and common meanings were in tight alignment. Words like “network” are so generic that they make it easy to talk past each other and confuse networking-as-a-service with social networks.
Continued use of those words blinds us to how today’s connected world is based on fundamentally different principles from traditional telecommunications networks. Rather than using purpose-built systems, we use generic connectivity and software to create disparate solutions. On the surface, nothing has changed, but behind the scenes, the story is very different.
The Discovery of "Best-Efforts"
Digital technology was developed, in part, to reduce the cost of telephony. A digital signal could be regenerated because it has discrete states; in the simplest case, a signal is one or zero. The von Neumann architecture represented a big breakthrough since a string of bits could be interpreted as an instruction (an opcode, or operation code), allowing the use of a generic interpreter instead of purpose-built logic.
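To make that idea concrete, here is a minimal Python sketch of a generic interpreter. The two-bit opcodes and the little stack machine are invented for illustration; they are not any historical machine’s instruction set.

```python
# A minimal sketch of a generic interpreter: the same bits that could be data
# are decoded as instructions. The two-bit opcodes here are invented for
# illustration, not taken from any historical machine.
PROGRAM = [0b01_01, 0b01_10, 0b10_00, 0b11_00]  # push 1, push 2, add, halt

def run(program):
    stack = []
    for word in program:
        opcode, operand = word >> 2, word & 0b11   # decode the bit string
        if opcode == 0b01:                         # PUSH the literal operand
            stack.append(operand)
        elif opcode == 0b10:                       # ADD the top two values
            stack.append(stack.pop() + stack.pop())
        elif opcode == 0b11:                       # HALT and return the result
            return stack.pop()

print(run(PROGRAM))  # prints 3
```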
In a period of about twenty years from the 1940s to the 1960s, we went from plug-board computing (wired logic) to recognizably modern operating systems (Multics and Unix).
We also used modems (Modulator/Demodulator) to repurpose the telephone network as a data communications network.
At first, this was a poor fit since staying dialed up occupied dedicated resources. Dedicated circuits made sense in the days of analog technology and, initially, the digital technology was used to emulate the analog phone network, including preserving the characteristics of a dedicated circuit by dedicating resources along the path.
The idea that we need to dedicate a path to a particular connection is at the heart of 5G. In the 1990s, using those voice circuits for modem connections created a crisis. The resources were unavailable as long as the user was connected, even if the connection was idle most of the time. By sharing the connection without making latency promises, the scarcity disappeared.
In effect, 5G brings back the busy signal. If you can’t get the resources you need, you are out of luck. This is one of the ironies of 5G: by making promises, it guarantees failure. Dedicating resources to some leaves nothing available to others. At its heart, the Internet is a way to share a common infrastructure using protocols such as TCP (Transmission Control Protocol). 5G replaces this distributed control with a central authority demanding payment.
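To illustrate what that distributed control looks like, here is a toy Python sketch of the additive-increase/multiplicative-decrease idea behind TCP congestion control. The link capacity, starting rates, and the simplistic shared loss signal are invented for illustration.

```python
# Toy sketch of TCP-style additive-increase/multiplicative-decrease (AIMD):
# each sender adjusts its own rate from local feedback (loss), with no
# central authority allocating the link. Capacity and rates are arbitrary.
LINK_CAPACITY = 100.0

def share(rates, rounds=200):
    for _ in range(rounds):
        congested = sum(rates) > LINK_CAPACITY    # "loss" signal: link overloaded
        for i, r in enumerate(rates):
            if congested:
                rates[i] = r / 2                  # multiplicative decrease
            else:
                rates[i] = r + 1                  # additive increase
    return rates

print(share([90.0, 5.0]))  # the two rates converge toward an even split of the link
```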
The idea of grouping bits together into packets and sending them over a computer network seems to have arisen independently in a number of places, including the work of Donald Davies in Britain. What is telling is that his goal was to interconnect computers rather than emulate traditional phone networks.
As I see it, the development of Packet Radio networks forced further innovation and brought software to bear on the problem of exchanging messages between computers using an unreliable medium — radios. The traditional approach was to build reliability into the network. But ALOHAnet was do-it-yourself and used software to program around the limitations of packet radios. I wrote about this in my first column.
At first, programming-around seemed like a clever one-off hack, but the idea turned out to be transformational — a new paradigm. Traditional engineering is layered in that you build elements and layers of abstraction for a purpose. Instead, we can use software to harness whatever resources are available and discover what we can do with them.
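As a rough illustration of programming-around, here is a small Python sketch of the retry-with-random-backoff idea that lets software cope with a lossy radio channel. The loss rate and backoff parameters are invented stand-ins; this is not the actual ALOHA protocol.

```python
import random

# Sketch of programming around an unreliable medium: just retransmit after a
# randomized backoff until a send succeeds. The 30% loss rate and the backoff
# window are invented stand-ins, not parameters of the real ALOHAnet.
def unreliable_send(frame):
    """Pretend radio channel: frames are sometimes lost or collide."""
    return random.random() > 0.3

def send_with_retries(frame, max_tries=10):
    for attempt in range(max_tries):
        if unreliable_send(frame):
            return attempt + 1                     # delivered on this try
        # In a real system we would sleep here for a randomized backoff,
        # e.g. random.uniform(0, 2 ** attempt), before trying again.
    raise RuntimeError("gave up after %d tries" % max_tries)

print(send_with_retries("hello"))                  # usually 1 to 3 tries
```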
One of telephony’s secrets is that compositing the elements across disparate providers never really worked without analog shims. Complexity doesn’t scale very well.
One reason that IP (Internet Protocol) won was because it didn’t make any promises. It punted and put the burden on applications to do what they could with the available resources. The conventional wisdom was that this could not work for voice because the ear was so sensitive to glitches.
Or so it seemed. There was no need to challenge this assumption because there was already a perfectly fine and profitable voice network. I use the term complacent engineering for accepting the givens rather than challenging them.
The Discovery of Voice over IP as a Service
One VoIP origin story is that VocalTec, a small company in Israel, developed a simple software solution for handling jitter and packet loss on their local network, thus enabling voice to work. They were surprised to find that their customers were using the app across the wider Internet. It happened to work because, by that time, the capacity of much of the Internet had grown as a byproduct of the demand for broadband to support the web. (“Broadband” is another technical term that has been repurposed to mean “fat pipe”.)
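As an illustration of how little cooperation from the network such a fix needs, here is a minimal Python sketch of a playout (jitter) buffer that reorders packets and fills gaps with silence entirely at the endpoint. The sequence numbers and chunk sizes are invented; this is not VocalTec’s actual design.

```python
import heapq

# Minimal playout (jitter) buffer sketch: packets arrive out of order and with
# variable delay; the receiver reorders them and plays them back in sequence,
# substituting a chunk of silence for anything that never arrived.
class JitterBuffer:
    def __init__(self, chunk_size=2):
        self.heap = []               # (sequence_number, audio_chunk), reordered
        self.next_seq = 0
        self.silence = b"\x00" * chunk_size

    def push(self, seq, chunk):
        heapq.heappush(self.heap, (seq, chunk))

    def pop(self):
        """Return the next chunk in playback order, filling gaps with silence."""
        if not self.heap:
            return self.silence                    # nothing buffered yet
        seq, chunk = self.heap[0]
        if seq > self.next_seq:                    # that packet was lost
            self.next_seq += 1
            return self.silence
        heapq.heappop(self.heap)
        self.next_seq = seq + 1
        return chunk

buf = JitterBuffer()
for seq, chunk in [(1, b"BB"), (0, b"AA"), (3, b"DD")]:  # out of order; 2 is lost
    buf.push(seq, chunk)
print([buf.pop() for _ in range(4)])  # [b'AA', b'BB', b'\x00\x00', b'DD']
```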
VoIP, in itself, was an invention. The discovery was that it could act like the traditional phone network without needing to build voice into the network. Today VoIP calls are bridged to the traditional network.
Broadband was attractive to providers because it allowed them to sell additional services without the additional cost of dedicated facilities.
I played a role by getting Microsoft to ship IP (Internet Protocol) support as a standard part of Windows along with support for Network Address Translators (NATs), which, today, we call routers (even though technically they are not routers). “Internet” was meant to be just one service in the mix along with web, interactive television, e-commerce, etc. The NAT changed this by enabling the user to do everything with a single connection. That came to include voice and video. The problem is that the provider didn’t get additional revenue from that value created. Oops.
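For readers unfamiliar with what a NAT actually does, here is a rough Python sketch of the core idea: many private hosts share one public address, and the box rewrites addresses and ports while remembering the mapping. The addresses and port numbers are invented for illustration.

```python
import itertools

# Rough sketch of Network Address Translation: many private hosts share one
# public address; the NAT rewrites the source address/port on the way out and
# uses its mapping table to deliver replies on the way back in.
class Nat:
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.ports = itertools.count(40000)   # next available public port
        self.table = {}                       # public port -> (private ip, port)

    def outbound(self, src_ip, src_port, dst):
        public_port = next(self.ports)
        self.table[public_port] = (src_ip, src_port)
        return (self.public_ip, public_port, dst)   # packet as seen by dst

    def inbound(self, public_port):
        return self.table.get(public_port)          # where to forward the reply

nat = Nat("203.0.113.7")
print(nat.outbound("192.168.1.20", 5060, "voip.example.net"))
print(nat.inbound(40000))   # -> ('192.168.1.20', 5060)
```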
In 2003 Skype was founded. Though they were not the first to do VoIP, their app provided phone calls as a global service despite the telecom providers’ efforts to limit competition. This puts the lie to the idea that VoIP needs a special network. Not only that, Skype could offer video! This is very counterintuitive. The reason Skype could offer video at no additional cost is that they didn’t guarantee it would work. It just happened that, as the generic capacity of the Internet grew, new services became possible. Netflix is another beneficiary of the new opportunities.
Video from the telcos failed because they had to charge a high price for dedicated facilities on a specially built network — just like 5G.
Today we casually expect video conferencing to just work everywhere! And, again, none of the added value goes to the traditional providers. Video is now consumer technology. https://jitsi.org/ is an open-source video conference capability that you can host yourself! Buying video as a service is an option but not a requirement.
All this value is created outside the network. Providing the infrastructure to enable all these new services has become an economic problem rather than a technical one. We need connectivity to be available as infrastructure, but we can’t finance it out of service revenue. In that sense, it is like roads and sidewalks.
Yet public policy is still centered on the notion that we can fund the infrastructure by selling services and that the Internet is just another television channel.
5G is the fifth generation of cellular telephony. The big surprise of the fourth generation (LTE or Long Term Evolution) is that we didn’t need a special voice network because we could use VoIP techniques over a data network.
The defining premise of 5G is that the network should be aware of each application’s needs. This was indeed true in the early days when every element had to be tuned to each application. The case against Hush-a-Phone made sense in that context — any attachment was simply an element of the telephone network. Another special network was needed for video. This doesn’t mean that you had a separate physical network: it is the job of the network control plane to resolve conflicts between the competing requirements for the shared facilities.
The fast lane is one way for applications to buy priority. Selling such lanes and hosting applications is the business model of 5G.
Versus Best-Efforts
This is why the concept of best efforts is so disruptive and why it is so hard to understand. It allows the applications to resolve conflicts among themselves without the need for a control plane. In fact, it can work better than a control plane because, instead of getting a busy signal, the application can adapt, such as by using text instead of voice. We’ve had situations where people were unable to communicate because they couldn’t get a good cellular signal, yet there was enough connectivity for diagnostic messages. That capacity could’ve allowed texting for help.
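A toy Python sketch of that kind of application-level adaptation follows; the bandwidth thresholds and the measurement stub are placeholders rather than real measurements.

```python
# Toy sketch of application-level adaptation under best efforts: rather than
# demanding a guaranteed path (and failing with a busy signal), the app picks
# the richest medium the measured conditions will support. The thresholds and
# the measure_bandwidth_kbps() stub are placeholders, not real measurements.
def measure_bandwidth_kbps():
    return 24.0    # stub: imagine an estimate derived from recent packet timings

def choose_medium(kbps):
    if kbps >= 500:
        return "video call"
    if kbps >= 30:
        return "voice call"
    return "text messages"     # even a trickle of connectivity is useful

print(choose_medium(measure_bandwidth_kbps()))   # -> "text messages"
```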
Recently T-Mobile had a major outage. One of the reasons they cited was their dependency on IMS. Another reminder that traditional telecommunications architecture is not very resilient.
Embracing best efforts requires changing our metric of success. In the 1970s, we could measure call completion by determining whether the other phone rang. By that measure, there was no need for an answering machine to take a message. Once users could create their own solutions, answering machines became the norm.
Without the ability to take capacity from the commons and sell it to the highest bidder, how does a provider make money? There are many examples of how to fund public infrastructure such as roads and sewer systems. How should the IEEE navigate a transition which threatens the current shareholders?
5G – The Future that Was to be
In 2004 we saw IMS (IP Multimedia Subsystem), based on the premise that we need a special control plane in order to make multimedia work. This was despite the fact that multimedia was already working quite well. At some point, people figured this out, and Lucent’s stock price plummeted. But the basic idea that you need to build applications into the network did not die. It simply hibernated and emerged as 5G, even though its use cases are already working quite well.
The 5G radio
As Cisco’s John Apostolopoulos observed, Wi-Fi 6 and 5G radios are basically the same; the main differences are the frequency bands and the economic model, and those are policy decisions. If anything, Wi-Fi 6 has a compelling advantage because it can interoperate with the existing Wi-Fi infrastructure, whereas 5G requires spending billions of dollars to achieve the same thing without any of the synergies of Wi-Fi!
The primary difference is economic: telecom providers must recoup billions of dollars invested in a brand-new infrastructure. App developers, on the other hand, can’t depend on 5G being available and thus do not drive demand for 5G radios in the interim.
Both radios offer very high performance over the local radio link. The problem is extending such high-performance guarantees beyond the radio.
But, before we leave the subject of the radio, isn’t it strange that 5G is entirely about wireless? If the protocols are so important, why aren’t they available via fiber (or other wires) and Wi-Fi? T-Mobile further muddies the water by rebranding its service as 5G using existing radios but with 5G network protocols.
These are all tell-tale signs of what I call marketecture. It looks like system architecture but is designed by the marketing department. I wrote about a striking example in my previous column. The Android TV box that Verizon Wireless provides is branded as “5G” but contains no 5G technology. It can work with any Internet connection.
The 5G network
The story of 5G is reminiscent of IMS in its assumption that there is a need to extend such promises. Creating the perception of that need is a marketing challenge. Part of this is getting researchers on board by labeling their work on radio technology as 5G, thus creating the appearance of a strong body of research supporting the business model. The 5G radio becomes a MacGuffin, an element that serves to further the larger narrative.
This chart from Huawei is the best I’ve found for explaining the real reason for 5G — the ability to violate network neutrality and sell fast lanes to the highest bidder. The more they can get engineers to develop systems that are dependent upon brittle promises, the more money they make.
This works at cross-purposes to innovation and consumer technology and takes capacity off the table. The worse they make the open Internet, the more money they make.
Perhaps the bigger goal is to get into the information services business. When AT&T developed Unix and Minitel started offering services in France, there was a concern that AT&T’s control of the network would allow it to stifle competition. 5G is another attempt to get into the information services business. Hosting services in their facilities means their customers — Amazon Web Services, Microsoft Azure, and others — are their competitors. This is an inherent conflict of interest.
I recently came across the term MEC, or Mobile Edge Computing. In a previous column, I was skeptical and asked where the edge is. In reading about 6G, I see that the term is being used for provider-owned facilities on the customer’s premises. This is a commercial version of the failed residential gateway — another attempt to return to the days before Hush-a-Phone, when the carrier, not the customer, owned the customer-premises gear (to use a telecom term).
There is another audience for 5G — those who want to control what people do with the network. 5G is very good for authoritarian governments. It makes worries about Facebook and Google surveillance seem mild by comparison. If you depend on the network for security, you are really choosing who can snoop, not whether.
We can take a quick look at some of the applications used to justify 5G:
- Speed and Reach. Yes, 5G radios can offer higher performance than LTE if you are close enough, but that’s also true of Wi-Fi 6. The longer range of 5G comes from getting first dibs on frequency bands, not from better technology. To the extent we treat frequency bands as property (a problematic idea), they should be part of our commons and not sold off to the highest bidder. Wi-Fi has shown how well the shared-medium approach can work.
- Remote Virtual Reality. We already do remote gaming with Steam and other services. Remote VR seems to be based on the idea that because 5G radios can give you the very low latency you can’t get from older versions of Wi-Fi, such promises can be extended over a network. But the very characteristics that make low latency hard with older versions of Wi-Fi make it even harder to extend such promises over an arbitrary network. There’s also the business question of why the focus on a zero-billion-dollar industry. But it’s a nice story.
- Remote Control. If these applications require such precise timing, then they fail if there is the slightest network glitch. Oops. Any good systems engineer should focus on resilience rather than burst performance.
- Remote Surgery. Really? Take remote control to the next level and let people die if there is a network glitch? ‘Nuff said.
- Connecting Vehicles. There are a few strains of this:
- Robot Driving. This is the idea that there would be drivers housed in a building, remotely driving and operating vehicles. Whether this makes economic sense, I can’t say. But, again, the remote-control systems must be designed to be resilient. When something goes wrong at 200 kph, you can’t rely on a remote driver. And what if the vehicle loses the signal in a tunnel?
- Highly connected autonomous vehicles. The goal is autonomy. It is indeed useful to assure connectivity, but the vehicles should use generic connectivity rather than having a brittle dependency on 5G protocols.
- Coordinating cars. Having every car track every other car through an intelligent network adds complexity, and that complexity is at odds with resilience. What we need is the simplest connectivity, without the gratuitous complexity of 5G and without the failure point of a billing system in the path. Hasn’t anyone learned the lesson of the failure of ATM (Asynchronous Transfer Mode)? It can be useful to have ad-hoc routing to interconnect cars, but such ad-hoc routing requires a best-efforts approach in order to scale.
- Software-Defined Networking. Wait, hasn’t all networking been software-defined for the last half-century? Oh, you mean a control plane. Didn’t we learn that that is a terrible idea? And unnecessary?
- Network Function Virtualization. This is, perhaps, the big motivation. It’s just another term for cloud services. Calling them network functions obscures the fact that they are competing with their customers.
The use cases are all about making applications brittle by tying them to a network provider. What they have in common is the assumption that each application needs a purpose-built infrastructure or, at least, special accommodations. This is why understanding the discovery of VoIP is so important: it forces us to rethink that assumption.
It’s tempting to want to buy a dedicated lane on the highway, but those of us sharing the common facilities shouldn’t subsidize those who want to buy an advantage.
We also need to heed the examples of large projects that spend hundreds of millions of dollars and then fail. Generic infrastructure is a far safer bet and is highly leveraged because it allows for rapid iteration at low risk.
Building smarts into networks or cities rewards deep pockets and prevents innovation. That is at odds with the new world of consumer technology in which we are creators and not just consumers.
Alternatives to 5G
Imagine if we had an infrastructure that rewards innovation (such as VoIP) and gives new ideas a chance by creating a level playing field. If I had written this column even a few years ago, I would have had the burden of explaining that the best-efforts Internet could scale. Today I don’t need to — we take video conferencing for granted. It isn’t even remarkable! And that is remarkable.
The major lesson of the Internet is that we can composite a path out of locally funded facilities that are open to all. Instead of a provider owning the entire path, we simply interconnect facilities and use software to program-around issues such as congestion. We already have the protocol in TCP and are improving upon it.
The term “5G” won’t go away — there has been too much of an investment in selling something called “5G” as the future. But we must think critically about what it means. Despite my concerns, 5G will play a role in the future, but it is not the one future of connectivity. The IEEE has a role to play in presenting 5G in a larger context. At the very least, it should distinguish between the 5G radio and the 5G network protocols. For the radio itself, Wi-Fi 6 must be presented as another option — one that puts users in control.
The Consumer Technology Society has a particular role in a world in which consumers create and don’t just consume. Let’s assure that people and companies, both large and small, have the tools to be contributors.
This article was originally published in the IEEE Journal.