Technology vendors and major carriers are selling products and services that aren’t necessarily aligned with what utilities fundamentally require. Not surprisingly, this has created confusion around spectrum, technology naming conventions and more. A few typical questions include:
- What is provided by 5G that I wasn’t getting with 4G, and do I need it?
- What new mission-critical applications can I expect from 5G?
- What standards and frequency bands are required? Are there inconsistencies I should look for?
- Are we discussing frequency bands and band designations, or are we conflating them with technology standards?
- What do you specifically mean when you say 5G?
Navigating the details of today’s technology standards and spectrum options (i.e., available channel bandwidth per frequency band and standards bandwidth requirements) will require asking the right questions. The goal is to identify the optimal design solution to fit your utility’s requirements.
Frequency Versus Technology
Frequency bands are often referred to interchangeably with the names that have become attached to them. Many of these names originated in FCC nomenclature for low- and high-frequency bands, or in marketing campaigns run by major carriers. This has long been a source of confusion and errors.
The FCC’s recent ruling to reband part of the 900 MHz land mobile radio (LMR) spectrum is a good example. Intended to allow for broadband operations, this segment was designated as Band 8 because it occupies the same spectrum as the Third Generation Partnership Project (3GPP) global Band 8. This created an issue, however, because the U.S. version of Band 8 has a different uplink and downlink separation than the global Band 8 standard. This distinction must be noted when buying Band 8 radios and devices: buyers need to verify whether a device supports the U.S.-required frequency separation or is fixed to the global version. So, while 3GPP designates this spectrum segment as Band 8, device and RF equipment vendors must now specify the exact network configuration each device supports.
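To make the distinction concrete, the sketch below compares the duplex spacing of global 3GPP Band 8 with the rebanded U.S. 900 MHz broadband segment. The global Band 8 edges come from the 3GPP specifications; the U.S. segment values reflect our understanding of the FCC’s 3x3 MHz broadband allocation and should be confirmed against the current rules before any procurement decision.

```python
# Sketch only: band-edge figures are illustrative and must be verified
# against 3GPP TS 36.101 and the current FCC 900 MHz rules.

def duplex_spacing(uplink_mhz, downlink_mhz):
    """Duplex spacing: offset between the paired uplink and downlink edges (MHz)."""
    return downlink_mhz[0] - uplink_mhz[0]

# Global 3GPP Band 8: 880-915 MHz uplink paired with 925-960 MHz downlink
global_band8 = {"uplink_mhz": (880.0, 915.0), "downlink_mhz": (925.0, 960.0)}

# U.S. 900 MHz broadband segment (3x3 MHz) as we understand the FCC rebanding
us_900_broadband = {"uplink_mhz": (897.5, 900.5), "downlink_mhz": (936.5, 939.5)}

print(duplex_spacing(**global_band8))      # 45.0 MHz
print(duplex_spacing(**us_900_broadband))  # 39.0 MHz
```

A radio hard-wired to the global 45 MHz spacing cannot simply be retuned to the U.S. segment’s narrower separation, which is exactly the verification question a buyer must put to the vendor.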
Wireless cellular communications originated within the 850 MHz band when the FCC published 47 CFR Part 22 in 1981. This was the innovation that launched an industry and changed telecommunications forever. At that point, cellular and 850 MHz were referred to interchangeably.
Then, broadband personal communication services (PCS) technology was introduced within the 1900 MHz band in 1993. This became a branding campaign, with many companies claiming that PCS was not cellular. This was one of the first sources of misinformation. In reality, this was a similar technology working on a different frequency band — which was in fact an inferior frequency band for coverage. However, it was available and added network capacity.
Following this, the industry added the 700 MHz spectrum when AT&T and Verizon bought huge nationwide licenses in 2008 that allowed them to deploy the same frequency band across the entire country. With the same technology and same frequency licenses uniformly deployed, this effectively ended the era of dropped calls and spotty connectivity, a common problem with 850-band cellular and 1900-band PCS, whose coverage often ended at county lines. The 700 MHz spectrum was routinely conflated with 4G/LTE, especially among consumers and people outside the industry, as it was the only band using the technology standard at the time. As more carriers purchased spectrum and deployed the technology, the industry continued to conflate the two.
This 700 MHz rollout also became the first time that long-term evolution (LTE) technology was deployed. Phones and devices running on LTE could connect across wide swaths of the country because the low-frequency base station signal in the 700 MHz range propagated much better than anything that had previously been introduced in the cellular communications space. It also had relatively large channels that could support the benefits of LTE in driving a lower latency and higher throughput experience for customers.
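The propagation advantage of a lower frequency band can be illustrated with the standard free-space path loss formula. This is only the frequency term of a link budget; real coverage also depends on terrain, clutter, antenna heights and gains.

```python
import math

def fspl_db(distance_km, freq_mhz):
    """Free-space path loss in dB (standard formula: d in km, f in MHz)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

d = 10.0  # example distance in km (illustrative assumption)
loss_700 = fspl_db(d, 700.0)
loss_1900 = fspl_db(d, 1900.0)

print(f"700 MHz:  {loss_700:.1f} dB")
print(f"1900 MHz: {loss_1900:.1f} dB")
print(f"700 MHz advantage: {loss_1900 - loss_700:.1f} dB")  # ~8.7 dB at any distance
```

The roughly 8.7 dB difference holds at every distance, which is why a 700 MHz base station covers a much larger footprint than a 1900 MHz PCS site with the same power and antennas.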
Next came a new frequency band, Advanced Wireless Services (AWS), which pairs 1700 MHz uplink spectrum with 2100 MHz downlink spectrum. Verizon marketed its AWS deployment as XLTE, though it was simply LTE technology running in a different band of spectrum.
Given this history of confusing messaging, it should surprise no one that the confusion continues. We have seen 4G, 4G Advanced Pro, 4.9G (a designation used by some vendors), a 5G logo illuminated on devices before the standard was 3GPP-ratified (by some carriers), and now the true 5G era. Even when we are told something supports 5G, or our device says it is connected to 5G, that tells us nothing about which features and functions are actually enabled.
Many in the industry talk about feature compatibility on existing telecommunications infrastructure while simultaneously touting features supported only by new deployments of millimeter wave (mmWave) frequencies. Carriers have also used this as an opportunity to diversify their offerings by combining their 5G solutions and spectrum holdings into one brand. The industry has now seen carriers like Verizon do this with the Ultra Wideband (UWB) moniker. Note: This refers to a channel size or bandwidth, NOT a spectrum band or a technology standard.
5G is often spoken of in terms of ubiquitous, highly touted features, while in reality some of those features may only be available in limited areas. It all depends on the channel bandwidth, the proximity of end devices to infrastructure, how the feature is implemented, and where we observe the benefit, such as in the application or at the radio air interface. We may actually need a scorecard to keep track of the new jargon and claimed benefits. These are only a few examples:
- Ultra-low latency/ultra-high reliability
- MU-MIMO (multi-user multiple input/multiple output)
- eMBB (enhanced mobile broadband)
- Side link
- Order of magnitude improvements in data rates and latency
- Millions of connected devices
- Vehicle-to-everything communications
- Machine to machine (M2M) communication
- 10-year-plus M2M battery life
- Edge computing
- Flattened RAN architecture
- Better system performance, efficiency and capacity
- Ultra-low-cost sensors
We often hear that we can use existing hardware and all the network infrastructure after a simple software upgrade. This is typically not the case. Some deployments can do some of what is wanted, and other deployments can realize different advantages. Keep in mind, we can’t necessarily take all these network pieces and make them achieve every new feature and functionality, particularly in today’s era of complexity.
We must contemplate what bands and features are supported in the chipset. Which of those were enabled by the devices, and how will they function in real-world applications? The capability is there, but it isn’t simple.
Why Does It Matter?
Given the noise and confusion in the marketplace, focusing on the things that really matter is of utmost importance. What problem must be solved and what features and functionalities are required? What spectrum bands are being utilized? What technologies are available to solve the problem?
If utilities believe they must have 5G because it is the newest technology, they may start down a path that can lead to overspending on devices, overbuilding infrastructure and deploying systems that require larger spectrum segments. While 4G and 5G are both based on 3GPP standards, they are not interchangeable, and 4G may be the right choice for certain needs and use cases.
The spectrum band will dictate much of what can be deployed. If it is low band (sub-1 GHz) and has a 3GPP band designation, then a large device ecosystem is available with commonly understood deployment scenarios. This spectrum is utilized to cover large areas with a foundational network.
Large wavelengths in some portions of the spectrum preclude flashy new features like MU-MIMO or eMBB. At sub-1 GHz frequencies, those features would require massive antenna arrays with enough elements to enable beamforming and beam steering, and the resulting wind and tower loading requirements would be cost-prohibitive. It is highly unlikely this would be sustainable, even assuming a utility could afford to buy enough devices to build out such a network, or that any manufacturer would make the antennas needed to support it.
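A quick wavelength calculation shows why. The sketch below computes the physical side length of a hypothetical 8x8 antenna panel with half-wavelength element spacing; the panel size and spacing are illustrative assumptions, not any vendor’s design.

```python
C = 299_792_458.0  # speed of light, m/s

def wavelength_m(freq_hz):
    """Free-space wavelength in meters."""
    return C / freq_hz

def panel_side_m(elements_per_side, freq_hz, spacing_wavelengths=0.5):
    """Side length of a square array with the given element spacing (in wavelengths)."""
    return (elements_per_side - 1) * spacing_wavelengths * wavelength_m(freq_hz)

# Hypothetical 8x8 (64-element) panel at sub-1 GHz, mid-band and mmWave
for f in (900e6, 3.5e9, 28e9):
    print(f"{f / 1e9:.1f} GHz: wavelength {wavelength_m(f) * 100:.1f} cm, "
          f"8x8 panel side ~{panel_side_m(8, f):.2f} m")
```

At 900 MHz the same element count yields a panel well over a meter on a side, which is where the wind and tower loading problem comes from; at 28 GHz it shrinks to a few centimeters, which is why massive arrays are practical only at higher frequencies.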
If the spectrum band is mmWave, then we know these deployments are only available in limited areas, and on new purpose-built equipment that predominantly covers line-of-sight paths. Devices available today, or the ones deployed throughout the service territory, likely do not support these bands. Utilities will see significant limitations in coverage and deployment. Bear in mind this is not a substitute for the broad coverage provided by sub-1 GHz solutions.
The cost of updating existing infrastructure and devices must be carefully evaluated. Limitations of the current device ecosystem and today’s high chipset prices will drive the cost of newer devices above what we see in 4G-only devices. Over time we could see pricing parity as the number of devices in this ecosystem surpasses the 4G ecosystem, but that will likely take many years.
Focus on What Matters
Once spectrum questions are resolved, delay tolerances are understood, and other specific requirements are known, meaningful conversations can begin. This should diminish the focus on which “G” is needed. There are a lot of exciting things coming in 5G, but not all of these features and benefits are useful or needed by utilities.
Focusing on what matters and ignoring the marketing hype is the best way to get your utility network where you and your customers need it to be.
Learn more about important decisions utilities face as they begin building out foundational communications systems.