The hyperventilating around 5G aside, why the future of wireless is still FAST

There has been a lot of unrealistic hype around fifth generation (5G) wireless technology. There has also been a lot of ink spilled outlining why that hype is unjustified. Both views have some merit. Unfortunately, both are often based on an incomplete view of wireless technology. For example, it is obvious many writers don't even understand exactly what 5G means. The fifth generation of what?

So let’s begin with some history before we get into the more technical weeds. Over the years cellular mobile phone carriers have used a series of mutually incompatible methods of providing wireless services to their customers. As technology has advanced, each carrier has occasionally added new systems and then slowly retired the older ones.

Many times these additions are minor and cause few issues for the average subscriber. But in some cases the new technology is substantially better yet completely incompatible with the old, so much so that previous devices won’t function on the new system. Since these types of changes are quite disruptive, they don’t occur often. But when they do occur they are quite noticeable to everyone, and so they have become known as generational upgrades.

What is now known as first generation or 1G wireless technology mainly refers to the first of several analog cell phone systems that were deployed in the late 1980s and early 1990s (mostly the AMPS system in the US). While each functioned a little differently, most were just wireless versions of a landline phone. These systems allowed you to dial a number and talk to someone, but that was about it.

In the mid 1990s the first 2G digital systems began to be introduced. In this case “digital” mainly referred to the digitization of voice calls, although these systems also introduced text messaging as an added service. The main advantage was the ability to squeeze more phone calls into the same amount of radio spectrum, increasing the total number of calls the system could handle.

While most systems were eventually expanded to include some form of packet switched data connections to the internet (think glorified dial-up), the focus was primarily on maximizing the number of circuit switched voice calls to landline phones. In the US four competing systems were deployed: GSM, CDMA/IS-95, D-AMPS/TDMA (Cingular) and iDEN (Nextel).

With the increased use of the internet starting in the late 1990s, two of the previous 2G systems, GSM and CDMA, evolved to include high speed data connections. However, the focus was still mainly on voice calling. While most people still referred to these 3G systems by their previous 2G names, technically they were known as UMTS/HSPA+ and CDMA2000 1X/EVDO respectively.

By the mid 2000s the amount of internet data being carried over the various wireless networks began rapidly overtaking the amount of voice traffic. A new wireless system was needed, and by this time almost everyone in the wireless industry was finally willing to support a single worldwide standard. To do this, most of the organizations that had developed the previous standards joined together in a group called the 3rd Generation Partnership Project (3GPP).

What the group developed was a system called Long Term Evolution or LTE. Unlike previous standards, LTE was an entirely packet switched data system. In fact, until the introduction of voice over LTE technology years later (or more accurately the GSMA IR.92 IMS Profile for Voice and SMS), you couldn’t even make a traditional voice call over the system. Most carriers had to continue operating their old 2G and 3G systems for this purpose.

The LTE standard was first outlined in 2008 in a series of documents known as 3GPP Release 8. The standard has been updated and extended several times since in Releases 9, 10, 11, 12, 13 and 14. However the new enhancements have all retained backward compatibility with Release 8.

After years of successful use, the industry is now at a point where it is starting to bump up against the limits of the LTE standard. One of the big ones is that the LTE standard never really defined how carriers should use radio frequencies beyond 6 GHz. Nature makes working with frequencies beyond 6 GHz difficult (the wavelengths at these frequencies are small so even the tiniest objects can block or absorb them) and in 2008 using them just didn’t seem terribly practical.

But technology has continued to march forward and using extremely high radio frequencies is now at least technically viable. In addition new techniques have been developed over the past 10 years to increase spectral efficiency (the ability to squeeze more bits on a given amount of radio bandwidth). The problem is most of these new methods are completely different from the methods used by the LTE standard.

So after much deliberation the members of the 3GPP decided it was time to make a jump. They decided that 3GPP Release 15 would introduce a new method of transmitting data over radio waves that unfortunately would also be completely incompatible with LTE. That new method is called 5G New Radio or 5GNR (linguistic creativity obviously not being a strong trait among wireless engineers).

In addition to increasing spectral efficiency, the 5GNR standard also makes working with frequencies up to 100 GHz easier. This basically makes available 94 GHz of extra bandwidth to work with, more than 15 times the roughly 6 GHz LTE was designed to use. As a result the potential amount of data that 5G devices will be able to squeeze out of the air is many times greater than anything that would have been possible with LTE devices. The optimists are correct on this point.

But as always, the devil is in the details. Theoretically 5GNR makes it possible to transmit 15-30 bits per second for every cycle (Hz). But in the real world this will likely translate to only about 30% more data than LTE.  So when the mobile carriers upgrade their existing bandwidth to use the 5GNR method, most users will only see about a 30% bump in performance. Significant, but hardly earth shattering. Most people won’t even notice the change.
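To put those numbers in perspective, here is a minimal back-of-the-envelope sketch. The 20 MHz carrier width and the 1.5 bits/s/Hz real-world LTE average are my own illustrative assumptions, not figures from the standard or any carrier; the point is simply how far the theoretical peak sits from the real-world averages most users will actually experience.

# A rough sketch, not a simulation. The carrier width and LTE baseline
# efficiency below are illustrative assumptions.

def throughput_mbps(bandwidth_mhz, bits_per_second_per_hz):
    """Data rate in Mbit/s: MHz of bandwidth times bits/s/Hz of efficiency."""
    return bandwidth_mhz * bits_per_second_per_hz

channel_mhz = 20                        # a common LTE carrier width (assumption)
lte_real_world = 1.5                    # assumed LTE real-world average, bits/s/Hz
nr_real_world = lte_real_world * 1.3    # the roughly 30% real-world bump discussed above
nr_theoretical_peak = 30                # upper end of the 15-30 bits/s/Hz figure

print(f"LTE, real world:   {throughput_mbps(channel_mhz, lte_real_world):.0f} Mbit/s")
print(f"5GNR, real world:  {throughput_mbps(channel_mhz, nr_real_world):.0f} Mbit/s")
print(f"5GNR, peak theory: {throughput_mbps(channel_mhz, nr_theoretical_peak):.0f} Mbit/s")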

[Image: Summary of 5G characteristics]

Also, the natural challenges of transmitting data over radio frequencies beyond 6 GHz still exist. While we can cram tons of data into the 94 GHz that will potentially be available, earth’s atmosphere makes it difficult to send it very far (think feet/meters vs miles/kilometers). Here is a chart (courtesy of Verizon) that shows the relative distance signals can travel at the existing frequencies, all of which are below 2.5 GHz (the actual distance depends on a combination of the effective radiated power of the transmitters, the sensitivity of the receiving antennas, terrain and several other factors).

[Chart: relative signal propagation distance by frequency band]

What this means is that to take advantage of all that bandwidth above 6 GHz the carriers will have to build huge numbers of new towers very close together (probably roughly every 1000 feet (300 meters) or less). While these won’t have to be the big towers we often see today, each of them will still normally need to be connected to the internet via its own fiber cable (if you can’t send a signal very far anyway, there is no point in having an antenna 250 feet in the air).
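To get a rough feel for what “huge numbers” means, here is a quick sketch assuming a simple square grid at the roughly 300 meter spacing mentioned above. Real RF planning depends on terrain, power and many other factors, so treat it as an order-of-magnitude estimate only.

# Crude grid estimate of small-cell density at ~1000 ft (300 m) spacing.
spacing_m = 300
sites_per_km2 = 1_000_000 / (spacing_m ** 2)   # one site per 300 m x 300 m square
sites_per_mi2 = sites_per_km2 * 2.59           # 1 square mile is about 2.59 km^2

print(f"~{sites_per_km2:.0f} sites per square km, ~{sites_per_mi2:.0f} per square mile")
# roughly 11 per square kilometer, about 29 per square mile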

But in the end, no matter how you look at it, building all these radio sites will be extremely expensive. This might be financially viable in extremely dense urban areas (this technology would work great in a packed football stadium), but it is unlikely to be workable in suburban or rural areas anytime soon. In those cases it will probably just be cheaper to string fiber to every home. So the pessimists are correct on this point.

So why am I optimistic? The problem with the 5G pessimists is that they imply that all the bandwidth below 6 GHz that can be used for mobile phone service has already been deployed and is being used. The reality is only a fraction of this bandwidth is currently in use. Of the four major US carriers, only Sprint has rights to more than 200 MHz (0.2 GHz) of that 6 GHz (here is how it is being used).

The bottom line is that regardless of the efficiency of the method used to transmit data, overall performance still mostly depends on the amount of bandwidth the carriers have available. Give them more bandwidth and your mobile device will naturally work faster.

While much of the sub 6 GHz bandwidth is dedicated to other important uses (e.g. airplanes, broadcast television and radio, GPS, the military, etc.), a surprising amount of new bandwidth will still likely become available soon. Even much of the existing mobile spectrum is underutilized.

Here is a list of the places this much needed extra bandwidth will come from. Each alone will only give an incremental boost, but together they will ensure a fast 5G future even if carriers never transmit a single bit on a frequency over 6 GHz.

850 MHz – refarming 2G and 3G spectrum

As I mentioned, LTE is data only. Until recently most carriers have had to continue operating their old, less efficient 2G and 3G networks so their customers could still make calls. This changed with the widespread deployment of voice over LTE (VoLTE) technology, which enabled users to make traditional voice calls over LTE. Since LTE has much better spectral efficiency than the old 2G and 3G networks, that old bandwidth can now be put to much better use. Over the next year or two most carriers will completely shut down these old networks, freeing that spectrum for use by either LTE or 5GNR.

600 MHz

A few years ago the FCC auctioned off almost all of the bandwidth between 600 MHz and 700 MHz (formerly known as UHF TV channels 38-51). Most of this bandwidth was purchased by T-Mobile and is now being gradually deployed as legacy TV broadcasters slowly vacate the spectrum. The process should be complete by the middle of 2020.

T-Mobile was the only US carrier that lacked significant low band spectrum (below 1000 MHz (1.0 GHz)). In addition to dramatically improving T-Mobile’s coverage in rural and suburban areas, this new spectrum will also significantly increase the company’s total capacity in urban areas.

Personally I think the rest of the UHF TV band (channels 14-36 or roughly between 500 and 600 MHz) will also eventually be auctioned off. I have serious doubts most traditional linear television will survive the cord cutting era. But it will probably be another 10 years before all of that fully plays out.

600 MHz and 1.7/2.1 GHz – Squatters

This is a particular annoyance of mine. During the past couple of major bandwidth auctions (600 MHz and AWS-3) a couple of parties purchased big chunks of spectrum just to sit on it. These parties are speculating that the value of this spectrum will eventually increase substantially, and they intend to sell it at a significant profit when it does (looking at you, Charlie Ergen).

Technically the FCC has rules requiring buyers to put any spectrum they purchase from the government to use within a certain amount of time. But the reality is people often play games to get around these rules for years or even decades. This forces everyone else to unnecessarily pay more for substandard service.

This is not a technical problem. It is a political problem. Charlie Ergen’s Dish Network alone is currently sitting on almost 100 MHz of quality bandwidth, nearly as much as Verizon holds. This needs to stop. We need an FCC with the guts to tell these people to either deploy now or give the spectrum back.

2.5-2.7 GHz – former Broadband Radio Service (BRS) and Educational Broadcast Service (EBS) bands.

Many years ago the FCC made available about 200 MHz of spectrum for “Wireless Cable” service. About half the spectrum was set aside for commercial use and the other half for educational use. Only a few TV services were ever built using the spectrum, and the FCC eventually changed the rules, making it available to wireless carriers. Through a series of purchases and long term leases Sprint has ended up with most of it.

But Sprint, as the smallest of the four major carriers in the US, has never been able to raise the capital necessary to fully deploy the spectrum. As a result, in most areas of the country the spectrum sits unused. However, the pending merger of Sprint with T-Mobile should provide the capital needed to start making use of this resource. My guess is T-Mobile will start using the spectrum shortly after the merger’s approval.

But I don’t think the FCC should assume it will happen. At the very least the FCC should make approval of the merger dependent on T-Mobile quickly deploying this spectrum. If it doesn’t, it should be forced to sell the spectrum to someone who will deploy it. It is far too valuable a resource to waste.

3.5-3.7 GHz – Citizens Broadband Radio Service (CBRS)

This is a new band of spectrum that should become available in the US shortly. These frequencies have historically been used by the US military, the biggest user being the US Navy, which uses the band for radar.

For the most part this radar is used at sea and rarely inland. Because of this the FCC decided the spectrum could potentially be shared. While the US Navy will continue to have priority in this band, that will probably only be a significant issue near its ports. Everywhere else both wireless carriers and unlicensed users will be able to use it. In some ways the system will work much like WiFi. But it differs in that carriers will have the option of paying for the right to force unlicensed users to switch channels, and in that users will be able to deploy high power base stations (50 watt ERP vs WiFi’s 1 watt limit).

The FCC is hoping to encourage smaller companies and individuals to make the most use of this spectrum (hence “Citizens” in the name). Personally I will be interested in seeing how this plays out. Since it is mid-band spectrum, my own guess is that one or more of the major carriers will quickly deploy it in the major urban areas (Verizon seems the most interested). But the FCC is capping how much of this bandwidth they can use. Also, the big carriers will probably continue to ignore most rural areas under the (probably mostly correct) belief that their low band towers can provide sufficient service.

As a result this may open up opportunities for communities that find themselves on the fringes of existing carrier coverage. The cost of priority licenses in these areas should be modest, and they may not be needed at all. With most new phones likely to support the CBRS band out of the box (LTE Band 48), small providers (like small town telephone and cable companies) should be able to just put up a high power tower and start handing out SIM cards. Newer phones containing a second eSIM should make deployment even easier, since these devices can use two carriers simultaneously (few people are likely to be willing to give up the service of a major carrier or carry a second device). It’s a little too early to tell how things will turn out, but this at least has the potential to be a win-win for both urban and rural users.

3.7-4.2 GHz – the “C” Band

Older readers may remember that about 30 years ago people put large dish antennas in their backyards to receive what were, for a short time, freely available cable TV channels like HBO. Those big dishes were pointed at satellites that were broadcasting analog transmissions to cable TV providers across the country. These providers picked up the signals with their own dishes and then passed them on to subscribers. Needless to say, companies like HBO weren’t happy with people other than cable TV subscribers watching their shows for free. It wasn’t long before these signals were encrypted and the backyard viewers were asked to pay up.

While the backyard pirates have now disappeared, this radio band is still used by cable broadcasters to send their now encrypted digital content to cable providers. But the FCC has recently noticed that the band isn’t being used as efficiently as it could be. As a result it has opened discussions with the cable industry about sharing at least part of the band with mobile phone carriers. The negotiations are still at an early stage, but the industry has already mostly agreed to share at least part of this band. It is mainly a question of how much of the bandwidth they are willing to offer given various monetary incentives.

Currently it appears the wireless carriers will gain access to somewhere between 200 and 300 MHz of bandwidth (out of a total of nearly 500 MHz). This is nearly as much as the entire industry is currently using. Potentially this amount of bandwidth could finally enable true gigabit level service on a mobile device. Unfortunately it will probably be about another 5 years before everything is finalized.
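As a rough sanity check on the gigabit claim, here is a small sketch. Both spectral efficiency figures are my own illustrative assumptions (a modest real-world average and a more generous number for a well-placed device on a lightly loaded cell), not anything from the FCC or the carriers.

# Bandwidth in MHz times spectral efficiency in bits/s/Hz gives Mbit/s.
c_band_mhz = 300        # upper end of the 200-300 MHz discussed above
typical_eff = 2.0       # assumed real-world average, bits/s/Hz
favorable_eff = 10.0    # assumed figure for a well-placed device, bits/s/Hz

print(f"Typical:   {c_band_mhz * typical_eff / 1000:.1f} Gbit/s")    # ~0.6 Gbit/s
print(f"Favorable: {c_band_mhz * favorable_eff / 1000:.1f} Gbit/s")  # ~3.0 Gbit/s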

Conclusion

As you can see, there is plenty of available bandwidth to enable near gigabit service at some point in the next 5 or 6 years without having to use anything over 6 GHz. Indeed, any deployments of extremely high frequency bandwidth will likely be just a bonus. And since the propagation characteristics of all this new sub 6 GHz bandwidth aren’t substantially different from those of the spectrum already deployed, most of it should work fine on existing towers.

But there is one caveat. Deploying all this bandwidth still won’t be cheap even if the carriers won’t need to build huge numbers of new towers. What gets built will ultimately be based on how much money the carriers can justify spending.

Personally my guess is most of the available bandwidth over 2 GHz will end up being deployed almost exclusively in urban areas. The propagation characteristics of this bandwidth (read: short range) make it hard to justify in rural areas. That said, the combination of T-Mobile’s 600 MHz spectrum, the 600 MHz spectrum Charlie Ergen is squatting on, and the original 850 MHz cellular band (CLR) that both Verizon and AT&T should soon be refarming should provide substantial capacity. Throw in the rest of the 500-600 MHz UHF TV band in the future and even deep rural areas should be fine.

The problem is the further you are from a tower, the slower the speed. There is still a limit on how far apart towers can be placed while retaining reasonable service. Based on my own experience working in rural areas of southeast Minnesota, the current arrangement is far from ideal. Local deployment of CBRS base stations may help fill in some of the gaps, but even a handful of additional low band towers would likely do a much better job.

Unfortunately the economics of rural areas make these towers difficult to justify. Similar to the rural electrification efforts of the 1930s through 1950s, there just aren’t enough users living in these areas to make the investment profitable. So some level of subsidization is likely necessary to make something like this possible.

In addition, note that wireless technology, no matter how good it gets, will probably never fully replace a direct fiber or even coaxial cable link. Even cheap coaxial cable using the DOCSIS 3.1 standard can potentially provide a 10 Gbit connection in both directions. We are a long way from wireless carriers being able to provide that kind of performance at a reasonable price. I will agree that most cable providers need to up their game (something cord cutting should speed along). But if you like watching lots of Netflix on a big 4K TV, I wouldn’t look for Verizon, AT&T or T-Mobile to replace them anytime soon.

Finally don’t look for big price drops. The cost of providing wireless service is relatively fixed. To provide X amount of coverage/capacity you need X amount of towers for a given amount of spectrum. The use of 5G won’t change the equation much. On the other hand I don’t think you will see significantly higher prices either. What you should see is increasingly better coverage and progressively faster and more reliable service.

Will Tesla do it alone?

There is an assumption many make that it will be impossible for one company to electrify our transportation system alone. It is accepted as common sense that at some point the legacy automakers will jump in to close the gap. It is presumed that eventually, once electric cars can better compete with gasoline powered vehicles, the automakers will hit some magic switch and start making electric cars en masse.

I admit that at one point I found the argument compelling. But with the cost of electric cars dropping rapidly and the legacy manufacturers still not making any substantial moves, I’m no longer sure we should assume this will be the case. What I am slowly beginning to suspect is that most of them may simply not be financially able to make this transition.

What Tesla is making increasingly clear is that building an electric car is different from building a gasoline powered one. It requires a different supply chain and different production lines. In particular it requires a huge up front investment in battery production, electric motor production and software. There is some overlap in things like car interiors and body panels, but that is not where most of the value lies in any vehicle, electric or otherwise.

Production lines for these things require more than just big buildings. They require new tooling and staff with specialized skills. Indeed, trying to retrofit existing plants may actually be more expensive than building new facilities. Needless to say, all of this is highly capital intensive. It will require either borrowing loads of money or selling tons of new equity.

Here is the problem. Many automakers are already up to their eyeballs in debt, debt that was borrowed with the intent of paying it back with the profits from gasoline powered vehicles. Indeed many of the automakers are not even using current profits to pay down that debt. They are using those profits for dividends and share buybacks, and are borrowing even more money to fund current operations. Operations that will soon be mostly worthless.

The only automobile profits that will exist in the near future will mainly come from selling electric cars. But the legacy automakers have yet to even start acquiring the capital needed to build them. Perhaps we should consider that it is because no one will give it to them.

Remember, electric cars won’t add to the bottom line of the legacy manufacturers. At best they will just replace current revenue. This means that the future profits from new electric cars will not just have to pay back the new investor capital needed to build them, they will also have to pay back the billions currently owed. Debt that will typically have to be paid off ahead of the new investors.

In other words, if you had substantial money to invest in building electric cars, would you give it to someone like Ford, which already owes huge sums and will soon have to take huge write-offs on things like internal combustion engine plants? Or would you rather give it to someone like Tesla, which has proven it can build great electric cars and which has none of the overhead of soon to be worthless legacy assets or the debt used to acquire them?

I agree that over the long run Tesla probably won’t be the only company producing electric cars. But I expect the others will more likely be new companies built from the ground up than current manufacturers of gasoline powered cars. Indeed I’m no longer convinced the legacy manufacturers will survive long enough to even field a Tesla competitor.

Perhaps one or two legacy manufacturers have enough capital on hand to pull this off, along with the guts to write off almost their entire current asset base. But if they exist, I have yet to identify them.

Capitalism vs. Socialism: Toward building a new model to describe our economic world.

A response to Overcoming Individualism by Matt Hartman, Co-chair of the North Carolina chapter of the Democratic Socialists of America

I’ve been following the activities of the Democratic Socialists of America for several months. While I am definitely sympathetic to the values and aspirations of the organization, I have been less impressed with its strategy and vision for the future. Based on Matt’s article, I suspect its leaders are beginning to have similar reservations. But the article also appears to reveal that its leaders are still struggling to grasp the nature of the problems the nation and its members are facing. In my opinion what they haven’t yet recognized is how substantially our world has changed. They don’t yet see that a 19th century solution simply won’t solve our 21st century problems.

What people like Bhaskar Sunkara don’t recognize is that the reason most people are “not being drawn to the movement as workers” is that the very concept of “worker” in the traditional economic sense is disappearing. What he and others don’t seem to see is that this change is a big source of the social discontent spreading throughout the developed world. The transfer of wealth from workers to capitalists is real (probably more real than it was in the 19th century), but the causes are much different.

What needs to be recognized is that “labor” or “work”, from an economic point of view, is just another resource used in the overall production process. It is an expense to be minimized. Strictly speaking it is no different from any other resource like coal, grain, iron or silicon. And just as different materials can often be substituted to solve the same problem, human effort isn’t the only possible source of “work”. Much like the production process can be engineered to eliminate a resource like fossil fuels, it can also be changed to reduce and even eliminate human effort.

Now, the automation of human labor has been happening for centuries. But as I have pointed out before, what most people have failed to recognize is that for the most part we have only been automating and augmenting one aspect of work: physical effort.

But work involves more than physical effort. Any productive activity also requires intelligent direction, which has typically required human cognitive effort (what most people call thinking). It has always been extremely difficult to automate this aspect of work, and so human beings have remained essential to the production process. Until, that is, we developed what Steve Jobs liked to call “a bicycle for the mind”.

Think of it this way (and I suspect most employers probably already are):

Imagine all the world’s workers have banded together and formed a company called Workers, Inc. Workers, Inc. sells labor to companies throughout the world. It sends humans to them to perform services and collects payment for that work (the company’s revenue). The humans that Workers, Inc. sends out need to be maintained and taught in order to be productive. So the company spends money to feed, house, repair and educate them (the company’s expenses).

The company is profitable and growing. While it has always competed with machines for physical labor, until recently it had a monopoly on the market for cognitive labor.

But a competitor has appeared. Intelligent Robots, Inc. is now offering its services to the world’s businesses. Its robots currently don’t work quite as well as the humans provided by Workers, Inc., but they are typically cheaper, work longer, are often easier to operate, can do things the humans can’t and are improving rapidly. Workers, Inc. is still competitive for most tasks, but it is being forced to drop prices for several services. It has also been forced to spend greater and greater amounts of money on education which is reducing profitability.

To make this a little more interesting now imagine both these companies decide to issue an IPO. You are an investment manager of a large hedge fund. The bankers representing Workers, Inc. and Intelligent Robots, Inc. approach you about buying shares. How would you view these opportunities? What do you think about their profit potential? What are their risk profiles? Would Workers, Inc. be a great widows and orphans stock? Is Intelligent Robots, Inc. just a risky startup? Would you buy one or the other, both or neither? Would you short either one? Do you think both companies will be around in 25 years?

I would argue you can already make this choice. Whose stock would you rather own, Amazon or Walmart; Tesla or GM? Wanna take a guess on which of these companies are most likely to be doing a couple of trillion in revenue in 25 years with only a handful of employees?

The bottom line is that the concept of a “workers’ paradise” obviously doesn’t make a whole lot of sense in a world without workers. Because of this I view both capitalism and socialism much the same way I view Newtonian physics. Neither is wrong per se so much as each is an incomplete view of how economic production can work. Like Newtonian physics, capitalism and socialism are sets of social rules that work well within certain parameters. In this case, they both work as long as we assume human effort is required for our survival.

But once you push outside those boundaries the rules no longer apply. You need new rules to deal with the new environment. For the first time it may be possible for humans to survive without working. We are moving from a world of scarcity where a great deal of brute force was required to make anything of significance to one of abundance that requires little from us.

This could be wonderful. But it will only be wonderful if we all agree that everyone is entitled to this future. This is the fight I think we need to be having (and the battles for “Medicare for all” and a UBI are a great start). If we don’t, I think there is a very real risk that this future will slip through our fingers in a series of senseless battles over rules and values that are no longer relevant (e.g. socialism vs capitalism, left vs right, etc.).

Like physics during the 20th century, society needs to develop a new model (or explanation) of how our economy works and then figure out how to base our solutions on it. Our economy is much larger and becoming increasingly more complicated than it was in the 19th century. We need the economic equivalent of General Relativity and the Standard Model (an understanding of which was needed to enable technologies like GPS and semiconductor manufacturing).

My guess is that at least parts of this new model already exist, although we haven’t recognized and accepted them yet. But before we can accept the new model, we need to recognize the limitations of the old ones.

We need to stop looking to Marx or Smith, as brilliant as they were, for solutions to our current problems. Their solutions no longer scale. Our world is a thousand times more complex than anything they could have possibly conceived. Personally I would begin by looking to people like Piketty, Raworth and Kelly, who I think have captured aspects of the new model. But I have yet to see anyone who has pulled together all the pieces and would encourage the search for others.

Finally, to circle back to Matt’s article, I would like to point out that the rapidly changing rules of our economic game cut sharply across almost all racial, cultural and political lines. Anyone whose survival is based on their own employment or the employment of others (e.g. children and Social Security recipients) is at risk. If we continue to divide against each other in a desperate attempt to preserve our own positions, we will all lose. This, in my opinion, is the common ground on which to unite not just those on the left, but the entire nation. At this point I don’t think we have much choice.

The Decline of Brick and Mortar Retailing

As I’ve discussed earlier, I’m seeing mounting evidence that eventually machines will replace all economically significant human labor. But while I suspect this will happen relatively quickly (at least from a historical perspective), it won’t happen overnight or all at once.

It also won’t happen evenly. Some fields and industries will eliminate large numbers of workers relatively soon while others will hang on and even expand their numbers for many years to come.

One of the ways I look at this is by imagining the production path, from raw materials to finished product, of an item that I might want to buy.  All along that path humans are involved to one degree or another. But gradually, at each step along the way, human labor is slowly being squeezed out of the process as machines replace our effort.

Eventually what we will create, for all intents and purposes, is the equivalent of Star Trek’s replicator. To obtain a product we will punch a few buttons on our phone and, without any human involvement, it will appear at our door.  This could be a wonderful development.  But this will only be the case if everyone has access to those buttons.  Unfortunately I have real concerns that we may be building a world where only a few will be able to use them.

This process of replacing human labor has been slowly happening for decades in many different ways.  But things have begun to change rapidly and it has become impossible for one person to track all the different developments.

While I try to watch events in many different fields along multiple different supply chains, I find myself currently focusing on three in particular. These three – the introduction of self driving vehicles, the replacement of fossil fuels by solar energy and the rise of online shopping – are what I view as the coalmine canaries in the process of automating our economy.  I’ll address the first two in other articles, but in this one I’d like to focus on the decline of brick and mortar retailing and its replacement by online shopping.

Whenever a given industry is first disrupted, people often find it difficult to believe that it could be entirely replaced.  You will often see statements, typically made by those working in the declining industry, like the following:

There will always be a need for X.   It will never completely go away.

Needless to say, brick and mortar retailing is no different. People often fail to recognize that the winds have suddenly changed direction until it is too late.

What has changed and is driving the push toward online shopping is the same force that drives most other economic changes: cost.  Because of automation, online retailers have simply developed systems that can get products to consumers cheaper and more efficiently than brick and mortar retailers.

In addition it isn’t just the final price of a given item that makes online shopping cheaper for consumers (although that is a big part of it).  It’s the entire shopping experience that has become faster and more efficient.

For example most shopping endeavors begin with researching what to buy.  In the past, depending on the product, this could be a lengthy process.  It often involved talking to friends and relatives, reading books and magazines and ultimately traveling to multiple stores to find exactly what you need at the lowest possible price.  It was an extremely inefficient process.  Most people simply didn’t have the time or resources to examine all possible alternatives and often settled on a less than ideal product at a greater than necessary price.

Online shopping is changing this process.  It is rapidly condensing the research process from  hours or even days to in some cases minutes.  It has become increasingly easy to find the ideal product at an optimal price.

And those prices are typically significantly less than what people often pay in a brick and mortar store.  While online retailers have to pay shipping costs, they don’t need to maintain multiple distribution points (stores) or pay clerks and salespeople to help customers.  This means online merchants have a significantly lower cost of doing business and they can afford to pass those savings to customers in the form of substantially lower prices.

The result has been a rapid shift of sales moving from brick and mortar stores to online merchants.  According to the most recent US Government tally, online shopping now accounts for about 8.4% of total retail sales (about $100 billion in the 3rd quarter of 2016). Since the end of the 2008-2009 recession online sales revenue has been growing at a fairly consistent pace of 15% per year (pdf).  (It should be noted that this number includes both online only merchants like Amazon along with the portion of goods that traditional brick and mortar merchants have started selling online.)

But looking at total retail sales is a bit deceptive since this number includes things like automobiles, restaurant meals and building supplies that either can’t, or are not yet, sold online in any significant quantity. A more accurate view would be to exclude those items and look mainly at items typically found in the average mall or shopping center.

In the Government’s monthly retail report this information is mostly captured by two main categories.   The first is a category called “GAFO” (General Merchandise, Apparel and Accessories, Furniture and Other Sales). This includes items sold both online and offline by traditional brick and mortar merchants.  It is identified as government categories 442, 443, 448, 451, 452, and 4532. The second category is what is called “Non-Store Retailers”. This category includes mostly GAFO type merchandise sold by pure online sellers like Amazon and is identified as government category 454.   The total of these categories was about $450 billion in the 3rd quarter of 2016.  This means about 22% of all “GAFO” type items are currently sold online (100/450=0.22).

With online sales increasing at a rate of about 15% per year, this could mean that in about 11 years all GAFO type merchandise will be sold online. Although even this is probably optimistic for brick and mortar retailers. Profit margins for a typical retail business are rarely more than 5% (e.g. Walmart averages about 3%). Because retail stores have fairly significant fixed costs, even a relatively small decline in total revenue is enough to make them unprofitable.
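Here is the arithmetic behind those two figures, as a minimal sketch. It uses the roughly $100 billion of online sales against the roughly $450 billion GAFO plus non-store total, and assumes the total stays flat in real terms while online sales keep growing about 15% per year.

import math

online_sales = 100e9    # online ("non-store") sales, Q3 2016, dollars
total_gafo = 450e9      # GAFO plus non-store total, Q3 2016, dollars
annual_growth = 0.15    # roughly 15% per year growth in online sales

share_today = online_sales / total_gafo
print(f"Online share today: {share_today:.0%}")           # about 22%

# Years until online sales equal today's total, assuming a flat total.
years_to_all_online = math.log(total_gafo / online_sales) / math.log(1 + annual_growth)
print(f"Years to reach 100%: {years_to_all_online:.0f}")  # about 11 years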

Therefore I expect brick and mortar merchants to begin closing stores at a fairly rapid clip. This of course would force even more consumers to begin shopping online whether they want to or not and actually accelerate the growth rate of online shopping. As a result huge numbers of traditional brick and mortar stores could disappear from our cities in as few as 5-6 years.

Why do I think this might be the case? Let’s take a look at a fairly typical shopping mall near where I live. If we examine its directory (click “Map View” for a clearer look – pdf) we can see that this mall has four major anchor tenants (Macy’s, Sears, JC Penney and Herberger’s) and about 65 smaller locations, including a fairly large Barnes and Noble.

Unfortunately for this mall, Macy’s just announced that it is closing its store there. Surprisingly they beat Sears to the punch, although probably not by much. It is reasonable to expect the Sears store will also close within a year. Herberger’s, a part of the Bon-Ton chain, has also been struggling for the last several years. Finally, there are rumors the JC Penney chain may close almost a third of its stores over the next 2 years.

To be fair both JC Penney and Herberger’s could see a fairly significant bump in their business after Macy’s and Sears close.  This means they could remain fairly profitable for at least a few more years.

This probably won’t help most of the smaller stores in the mall, though, most of which likely rely on the customer traffic generated by the anchor stores to survive (in this case at least 10 locations already appear to be vacant). The survival of Barnes and Noble is particularly questionable. It is doubtful most of the rest could survive the closing of 2 or 3 anchor stores.

So how long before this mall is almost entirely vacant?  Based on what I’m seeing this could happen in less than 3 years.  If this is the case, I have a hard time seeing most similar malls surviving more than 5-6 years.

There are a couple of factors that could slow the transition to online shopping. One thing that I have noticed is that the retailers most impacted by online sellers are those brick and mortar merchants that have traditionally sold their goods at relatively high markups. Most mall based retailers fall squarely into this category.

On the other hand discount retailers like the dollar stores, TJX and the fast fashion stores like H&M appear to be holding their own.  At least for now, online retailers haven’t yet been able to sufficiently undercut these types of stores enough to make a significant impact.

The problem is the operational model of online retailers is still a moving target (this is particularly true of Amazon). So there is no reason to believe that online retailers won’t continue to expand and move downmarket. But there is the possibility of some type of floor – either in terms of price or service – below which online sellers will never be able to compete.

My own guess is that this floor, if it exists at all, is probably quite low. Personally, the only type of brick and mortar store that I think might have a fighting chance is the convenience store. But even here I have my doubts.

So how many jobs will be lost because of this shift? At this point I have a hard time coming up with an exact number. But I think we can get some sense of the possible scale by looking at the revenue per employee numbers for different types of retailers.

Currently online behemoth Amazon generates about $550,000 of revenue per employee.  Although admittedly Amazon is more than just a retailer and does business in other areas.  As a result the ratio between Amazon’s retail sales revenue and its retail employees could be significantly lower.  We don’t know because, as far as I know, Amazon doesn’t release this type of information.  Although personally I suspect the actual number is reasonably close and is probably even increasing.

Compare this to current brick and mortar retail giant Walmart.  Walmart currently generates more revenue than the entire online retail sector.  However Walmart gets only about $210,000 of revenue out of each of its 2.3 million employees.  Assuming Amazon’s revenue per employee numbers are roughly accurate, this means Walmart would have to figure out how to either eliminate nearly 1 million employees or double its revenue in order to remain cost competitive with Amazon.

Companies like Macy’s and TJX fare even worse, generating only about $165,000 and $150,000 of revenue per employee respectively. The only major retailer that appears competitive with Amazon is Costco, which is currently able to match Amazon at about $550,000 of revenue per employee.

As I mentioned, these numbers are just a rough proxy for what is really happening and should be viewed with caution. But until I can find a study that directly compares the efficiency of online and offline retailers, they are the best I can do. If this analysis is even roughly accurate, I don’t think it is difficult to see how several million retail jobs could be lost over the course of the next several years. Online merchants simply don’t need that many workers.

Will our economy grow fast enough to create jobs for those who currently work for brick and mortar retailers? Based on past history, this is possible, at least over the short term. But people working these jobs will likely need to find their new jobs in some other industry (healthcare?). Unless the total GAFO retail industry more than doubles in size in real terms over the next several years (which would require an unlikely CAGR of 7%-10% or more), there is simply no way the world of online retailing could possibly absorb them.
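For reference, here is the doubling-time arithmetic behind that CAGR range, again just a sketch:

import math

# Years required to double at a given compound annual growth rate.
for cagr in (0.07, 0.10):
    years_to_double = math.log(2) / math.log(1 + cagr)
    print(f"{cagr:.0%} CAGR doubles in about {years_to_double:.0f} years")
# about 10 years at 7%, about 7 years at 10%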

Happy MLK Day

Ran across a short post by Chris Boeskool that seems appropriate for today.  In it Chris tells a simple story to which I suspect most people can relate.  He talks about going to work one day and suddenly finding himself working alongside someone who insists the people around him always step aside when he is coming through.

Chris comes to realize how what for many people likely began as a childhood sense of entitlement can escalate into an adult sense of privilege. A belief that we deserve some form of special treatment. A belief that can be quite painful to give up when those around us finally decide they are no longer willing to accommodate it. For those who fall into that situation:

Equality can feel like oppression. But it’s not. What you’re feeling is just the discomfort of losing a little bit of your privilege — the same discomfort that an only child feels when she goes to preschool and discovers that there are other kids who want to play with the same toys as she does.

It’s like an old man being used to having a community pool all to himself, having that pool actually opened up to everyone in the community, and then that old man yelling, “But what about MY right to swim in a pool all by myself?!”

And what we’re seeing politically right now is a bit of anger from both sides. On one side, we see people who are angry about “those people” being let into “our” pool. They’re angry about sharing their toys with the other kids in the classroom.

They’re angry about being labeled a “racist,” just because they say racist things and have racist beliefs. They’re angry about having to consider others who might be walking toward them, strangely exerting their right to exist.

On the other side, we see people who believe that pool is for everyone. We see people who realize that when our kids throw a fit in preschool, we teach them about how sharing is the right thing to do. We see people who understand being careful with their language as a way of being respectful to others. We see people who are attempting to stand in solidarity with the ones who are claiming their right to exist — the ones who are rightfully angry about having to always move out of the way, people who are asking themselves the question, “What if I just keep walking?”

The only thing I would add is that in the real world these two groups usually aren’t that crisply defined. The reality is most of us have a foot in each camp. For whatever reason we all usually have some sense of privilege. It is usually something that has been a part of us for so long that we just take it for granted.

It isn’t until we suddenly slam into someone who has finally decided that they are no longer willing to be “nice” and just move aside that we are given the opportunity to realize what has been happening.  Unfortunately even then many of us refuse to recognize it.

What the Russian attempt to influence the US election says about our new President

I spent my afternoon reading over the recent Intelligence Community Assessment of the Russian government’s efforts to influence the US presidential election. Most of the report isn’t entirely unexpected. The fact that the Kremlin has set up an extensive system to spread disinformation in order to advance its own agenda should come as a surprise to no one. There is a reason countries make significant investments in propaganda.

The more interesting question, to me, is what the Russian propaganda tells us about our current situation. Why were the Russians attempting to influence the US election? What were they attempting to accomplish?

As the report states, Mr. Putin clearly has a goal of destabilizing the United States. Mr. Putin has long wanted to reestablish Russia as a world superpower, and part of that process obviously involves reducing the US role as the world’s economic and military leader. He is particularly interested in driving a wedge between the United States and Europe. As Fiona Hill stated in recent testimony before the U.S. House of Representatives Armed Services Committee:

The preferred scenario for Russia in Europe, as Putin has repeatedly made clear, would be one without NATO and without any other strategic alliances that are embedded in the European Union’s security concepts. Putin has repeatedly described NATO enlargement as driven by the United States and aimed at bring U.S. military bases and forces up to Russian borders to contain Russia. Although this narrative is flawed, much of the Russian elite accept it as ground truth—and many, including Putin, have done so since the NATO bombing of Belgrade in 1999, and especially since the expansion of NATO to Eastern Europe in 2004. Putin has thus consistently pushed for a renegotiation of European security structures to downgrade the conventional military and nuclear role of the United States and NATO, and give Russia military and security parity with European forces.

Again, though, none of this is particularly surprising. The main question is how his attempt to influence the election helps him achieve these goals.

The ODNI report also states that Mr. Putin had a “clear preference for President-elect Trump”.  Why?  It doesn’t seem plausible that Mr. Putin would simply prefer a Republican President over a Democratic one.  For example would he have also had a “clear preference” for Jeb Bush or Ted Cruz had they been nominated?  This seems doubtful.

More likely it seems Mr. Putin saw (and continues to see) in Mr Trump a unique opportunity to destabilize the United States and reduce its influence in the world.  He saw that for the first time the United States might elect a President who lacked most of the skills needed to govern and decided he needed to take advantage of the situation.

He seems to have quickly realized that Mr. Trump was extremely volatile and polarizing and would probably quickly divide the country.  He also likely soon figured out that Mr. Trump isn’t particularly intelligent and that he has a poor understanding of the world around him (and knows Mr. Trump probably doesn’t even recognize this).  Most importantly, he appears to strongly believe  that Mr. Trump is weak, insecure and easy to manipulate.

Mr. Trump’s response to this report appears to confirm Mr. Putin’s suspicions. Instead of being horrified by the report, he is trying to downplay it. Instead of asking for more information, he is disparaging our intelligence gathering community. He even appears to see Putin’s activity as a backhanded compliment, instead of recognizing how badly he is being played.

Although it is clear Mr. Trump does appear to have one thing right.  Mr. Putin is definitely “very smart”.  I suspect he will have a very productive four years.

No man is so foolish but he may sometimes give another good counsel, and no man so wise that he may not easily err if he takes no other counsel than his own. He that is taught only by himself has a fool for a master. – Ben Jonson


Automation and Inequality

Over the past few months I have become increasingly concerned about rising levels of automation in the US and world economies. In my opinion it is becoming obvious that human labor will become progressively less valuable from an economic standpoint. At some point in the near future it is clear that almost all tasks of economic value will be done better by a machine than by a human.

Some will argue that this will never happen because even though increasing automation will replace some jobs, advancing technology will always create more new tasks than it destroys.  Clearly this is what has happened in the past.  But I have come to suspect that there are good reasons why this probably won’t continue in the future.

Productive work typically involves some combination of physical labor (moving stuff around) and intelligent direction.  Until recently, most machines were really only good at helping us move stuff around.  Humans were still needed to figure out how to move all of that stuff around in a productive manner.

This meant that as machines began making it possible to increase the total amount of goods and services being produced by augmenting human strength, the value of intelligent human effort increased along with it.  This was because every new machine always required intelligent human operators in a relatively fixed ratio and those operators needed to be paid for their efforts.  (Note: the role of “human operator” refers to both those directly operating the machines and all the behind the scenes support staff like bookkeepers and engineers that were needed to keep the machines running.)

But this appears to have started changing about 30 years ago when computers began to be used in the production process as shown below.

[Chart: US employee compensation as a share of GDP, 1948 to present (FRED)]

This chart shows the ratio between all compensation paid to employees (mostly wages) and GDP (a measure of the value of all goods and services produced in the US during a year, currently about $18.5 trillion). You will notice that from 1948 to about 1975 the relative value of employee compensation remained fairly constant at roughly 50% of GDP. It then began a slow decline and currently stands at about 43% of GDP. Assuming my hypothesis is correct, the relative value of employee compensation should continue falling and eventually begin approaching 0% of GDP as intelligent machines like self-driving trucks quickly push human operators out of the way.
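To put that roughly seven percentage point decline in dollar terms, here is a one-line sketch using the $18.5 trillion GDP figure above:

gdp = 18.5e12            # current US GDP, dollars
decline = 0.50 - 0.43    # compensation share then vs. now

print(f"Roughly ${decline * gdp / 1e12:.1f} trillion per year "
      "no longer flowing to employee compensation")
# about $1.3 trillion per year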

Btw, who pocketed the 7% of GDP that is no longer being used to pay workers?   Need you ask?

[Chart: Saez and Zucman (2014), top wealth shares in the United States]

What is even more concerning is the fact that in most cases today the decision on how our machines are used is made solely by the owners of those machines. Until recently this hasn’t been much of a problem because this decision has always been subject to a very significant check. Since all machines required some form of intelligent human operator, workers always had a say in how they were used. This was because a significant part of a machine’s output needed to be used to pay workers. By necessity the output of any machine had to be shared between the owner of that machine and the people hired to operate it.

Those workers of course subsequently used that compensation to trade for the output of other machines run by other workers.  This meant that at least some of the output of our economic system had to be dedicated to the needs and desires of workers as well as the needs and desires of machine owners.

But what happens when humans are no longer required to operate the machines? What is to stop machine owners from using their machines any way they wish? If you don’t need to pay anyone, why produce anything at all for the masses? Since you are no longer paying those workers, they likely won’t have any money to spend anyway. Why not just redirect the machines to create things you and your fellow machine owners want and desire? Why bother creating houses, cars or other products for the mass of former workers? Since you own the machines, why not use them instead to build super yachts or mansions on Mars?

If I am correct, at some point soon the value of wages received by all workers will start falling below the total value of the goods and services needed to sustain those workers.  This will likely happen because of a combination of rising unemployment, increasing competition with new intelligent machines driving existing wages lower, and rising costs for basic necessities as machine owners divert more and more resources toward their own desires.  Needless to say the results of this happening are likely to be catastrophic for all but the wealthy.

But this doesn’t have to happen.  There is no reason ownership of the machines needs to concentrate solely into the hands of an economic elite.  This ownership could just as easily be distributed in some way to everyone.

For better or worse I suspect that over the next decade or two we will increasingly be asked to choose the type of post-work future we would like to see. One alternative could be an abundant Star Trek like future where everyone’s physical needs are met by a massive, solar powered, automated network of machines (although probably minus the warp drive and transporter, at least for now). The other extreme, toward which for whatever reason we currently seem to be headed, is a Galt’s Gulch for the wealthy few who will increasingly redirect the world’s resources toward their own desires.

Note that unlike the past this choice is no longer being driven by the question of fairness. If my analysis is correct, it is now a survival question.  And like it or not, the only way I see to make this choice is through our political process.  If we wish to survive, somehow we will need to come together and figure this one out.