The hyperventilating around 5G aside, why the future of wireless is still FAST

There has been a lot of unrealistic hype around fifth generation (5G) wireless technology. There has also been a lot of ink spilled outlining why that hype is unjustified. Both views have some merit. Unfortunately, both are often based on an incomplete understanding of wireless technology. For example, it is obvious that many writers don’t even know exactly what 5G means. The fifth generation of what?

So let’s begin with some history before we get into the more technical weeds. Over the years cellular mobile phone carriers have used a series of mutually incompatible methods of providing wireless service to their customers. As technology has advanced, each carrier has occasionally added new technology and then slowly retired the old.

Many times these additions are minor and cause few issues for the average subscriber. But in some cases the new technology is substantially better yet completely incompatible with the old, so much so that previous devices won’t function on the new system. Since these types of changes are quite disruptive, they don’t occur often. But when they do occur they are quite noticeable to everyone, and so they have become known as generational upgrades.

What is now known as first generation or 1G wireless technology mainly refers to the first of several analog cell phone systems deployed in the late 1980s and early 1990s (mostly the AMPS system in the US). While each functioned a little differently, most were just wireless versions of a landline phone. These systems allowed you to dial a number and talk to someone, but that was about it.

In the mid 1990s the first 2G digital systems began to be introduced. In this case “digital” mainly referred to the digitization of voice calls, although these systems also introduced text messaging as an added service. The main advantage was the ability to squeeze more phone calls into the same amount of radio space, increasing the total number of calls the system could handle.

While most systems were eventually expanded to include some form of packet switched data connections to the internet (think glorified dial-up), the focus was primarily on maximizing the number of circuit switched voice calls to landline phones. In the US four competing systems were deployed: GSM, CDMA/IS-95, D-AMPS/TDMA (Cingular) and iDEN (Nextel).

With internet use growing rapidly in the late 1990s, two of the previous 2G systems, GSM and CDMA, evolved to include high speed data connections. However the focus was still mainly on voice calling. While most people still referred to the 3G systems by their previous 2G names, technically they were known as CDMA2000 1X/EVDO and UMTS/HSPA+.

By the mid 2000s the amount of internet data being transmitted over the various wireless networks was rapidly overtaking the amount of voice traffic. A new wireless system was needed, and by this time almost everyone in the wireless industry was finally willing to support a single worldwide standard. To do this, most of the organizations that had developed the previous standards joined a group called the 3rd Generation Partnership Project (3GPP).

What the group developed was a system called Long Term Evolution or LTE. Unlike previous standards, LTE was an entirely packet switched data system. In fact, until the introduction of voice-over-LTE technology years later (or more accurately the GSMA IR.92 IMS Profile for Voice and SMS), you couldn’t even make a traditional voice call over the system. Most carriers had to continue operating their old 2G and 3G systems for this purpose.

The LTE standard was first outlined in 2008 in a series of documents known as 3GPP Release 8. The standard has been updated and extended several times since in Releases 9, 10, 11, 12, 13 and 14. However the new enhancements have all retained backward compatibility with Release 8.

After years of successful use, the industry is now at a point where it is starting to bump up against the limits of the LTE standard. One of the big ones is that the LTE standard never really defined how carriers should use radio frequencies beyond 6 GHz. Nature makes working with frequencies beyond 6 GHz difficult (the wavelengths at these frequencies are small so even the tiniest objects can block or absorb them) and in 2008 using them just didn’t seem terribly practical.

But technology has continued to march forward and using extremely high radio frequencies is now at least technically viable. In addition new techniques have been developed over the past 10 years to increase spectral efficiency (the ability to squeeze more bits on a given amount of radio bandwidth). The problem is most of these new methods are completely different from the methods used by the LTE standard.

So after much deliberation the members of the 3GPP decided it was time to make a jump. They decided that 3GPP Release 15 would introduce a new method of transmitting data over radio waves that unfortunately would also be completely incompatible with LTE. That new method is called 5G New Radio or 5GNR (linguistic creativity obviously not being a strong trait among wireless engineers).

In addition to increasing spectral efficiency, the 5GNR standard also makes working with frequencies up to 100 GHz easier. This makes available an extra 94 GHz of bandwidth to work with, more than 15 times the roughly 6 GHz LTE was designed to use. As a result the potential amount of data that 5G devices will be able to squeeze out of the air is many times greater than anything that would have been possible with LTE devices. The optimists are correct on this point.

But as always, the devil is in the details. Theoretically 5GNR makes it possible to transmit 15-30 bits per second for every hertz (cycle) of spectrum. But in the real world this will likely translate to only about 30% more data than LTE. So when the mobile carriers upgrade their existing spectrum to use the 5GNR method, most users will only see about a 30% bump in performance. Significant, but hardly earth shattering. Most people won’t even notice the change.
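If it helps to see the arithmetic, here is a quick back-of-the-envelope sketch in Python. The spectral efficiency figures (about 2 bits/s/Hz for real-world LTE and roughly 30% more for 5GNR) and the channel widths are illustrative assumptions of mine, not measured values. The point is simply that throughput is spectral efficiency times bandwidth, so extra bandwidth moves the needle far more than the efficiency gain does.

```python
# Back-of-the-envelope throughput estimate: bits/s = spectral efficiency (bits/s/Hz) x bandwidth (Hz).
# All numbers below are illustrative assumptions, not measurements.

def peak_throughput_gbps(efficiency_bps_per_hz, bandwidth_mhz):
    """Rough peak throughput in Gbit/s for a given spectral efficiency and channel width."""
    return efficiency_bps_per_hz * bandwidth_mhz * 1e6 / 1e9

# A hypothetical carrier with 100 MHz of sub-6 GHz spectrum:
lte_real_world = peak_throughput_gbps(2.0, 100)   # assume ~2 bits/s/Hz real-world LTE efficiency
nr_real_world  = peak_throughput_gbps(2.6, 100)   # assume ~30% better with 5GNR on the same spectrum
nr_more_band   = peak_throughput_gbps(2.6, 800)   # assume the same efficiency over 800 MHz of new spectrum

print(f"LTE,  100 MHz: ~{lte_real_world:.1f} Gbit/s")
print(f"5GNR, 100 MHz: ~{nr_real_world:.1f} Gbit/s  (the ~30% bump most users will see)")
print(f"5GNR, 800 MHz: ~{nr_more_band:.1f} Gbit/s  (why extra bandwidth matters more than efficiency)")
```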

[Image: Summary of 5G characteristics]

Also, the natural challenge of transmitting data over radio frequencies beyond 6 GHz still exists. While we can cram tons of data into the 94 GHz that will potentially be available, the earth’s atmosphere makes it difficult to send that data very far (think feet/meters rather than miles/kilometers). Here is a chart (courtesy of Verizon) that shows the relative distances signals travel on the frequencies in use today, all of which sit below 2.5 GHz (the actual distance depends on a combination of the effective radiated power of the transmitter, the sensitivity of the receiving antenna, terrain and several other factors).

[Chart: Relative propagation distance by frequency band]

What this means is that to take advantage of all that bandwidth above 6 GHz the carriers will have to build huge numbers of new towers very close together (probably roughly every 1000 feet (300 meters) or less). While these won’t have to be the big towers we often see today, each of them will still normally need its own fiber connection to the internet (if you can’t send a signal very far anyway, there is no point in having an antenna 250 feet in the air).

But in the end, no matter how you look at it, building all these radio towers will be extremely expensive. This might be financially viable in extremely dense urban areas (the technology would work great in a packed football stadium), but it is unlikely to be workable in suburban or rural areas anytime soon. In those cases it will probably just be cheaper to string fiber to every home. So the pessimists are correct on this point.

So why am I optimistic? The problem with the 5G pessimists is that they imply that all the bandwidth below 6 GHz that can be used for mobile phone service has already been deployed and is being used. In reality only a fraction of it is currently in use. Of the four major US carriers, only Sprint has rights to more than 200 MHz (0.2 GHz) of that 6 GHz (here is how it is being used).

The bottom line is that regardless of the efficiency of the method used to transmit data, overall performance still depends mostly on the amount of bandwidth the carriers have available. Give them more bandwidth and your mobile device will naturally work faster.

While much of the sub-6 GHz bandwidth is being used for other important purposes (e.g. airplanes, broadcast television and radio, GPS, the military, etc.), a surprising amount of new bandwidth will still likely become available soon. Even among the existing spectrum, much of it is underutilized.

Here is a list of the places this much needed extra bandwidth will come from. Each alone will only give an incremental boost, but together they will ensure a fast 5G future even if carriers never transmit a single bit on a frequency over 6 GHz.

850 MHz – refarming 2G and 3G spectrum

As I mentioned, LTE is data only. Until recently most carriers have had to continue operating their old, less efficient 2G and 3G networks so their customers could make calls. This changed with the widespread deployment of voice-over-LTE (VoLTE) technology, which lets users make traditional voice calls over LTE. Since LTE has much better spectral efficiency than the old 2G and 3G networks, that old spectrum can now be put to much better use. Over the next year or two most carriers will completely shut down these old networks, freeing the spectrum for use by either LTE or 5GNR.

600 MHz

A few years ago the FCC auctioned off almost all of the bandwidth between 600 MHz and 700 MHz (formerly known as UHF TV channels 38-51). Most of this bandwidth was purchased by T-Mobile and is now being gradually deployed as legacy TV broadcasters slowly vacate the spectrum. The process should be complete by the middle of 2020.

T-Mobile was the only US carrier that lacked significant low band spectrum (below 1000 MHz (1.0 GHz)). In addition to dramatically improving T-Mobile’s coverage in rural and suburban areas, the new spectrum will also significantly increase its total capacity in urban areas.

Personally I think the rest of the UHF TV band (channels 14-36 or roughly between 500 and 600 MHz) will also eventually be auctioned off. I have serious doubts most traditional linear television will survive the cord cutting era. But it will probably be another 10 years before all of that fully plays out.

600 MHz and 1.7/2.1 GHz – Squatters

This is a particular annoyance of mine. During the past couple of major bandwidth auctions (600 MHz and AWS-3), a few parties have purchased big chunks of spectrum just to sit on it. They are speculating that the value of this spectrum will eventually increase substantially, and they intend to sell it at a significant profit when it does (looking at you, Charlie Ergen).

Technically the FCC has rules requiring buyers to put any spectrum they purchase from the government to use within a certain amount of time. But the reality is that people often play games to get around these rules for years or even decades. This forces everyone else to unnecessarily pay more for substandard service.

This is not a technical problem. It is a political problem. Charlie Ergen’s Dish Network alone is currently sitting on almost 100 MHz of quality spectrum, nearly as much as Verizon holds. This needs to stop. We need an FCC with the guts to tell these people to either deploy now or give the spectrum back.

2.5-2.7 GHz – former Broadband Radio Service (BRS) and Educational Broadband Service (EBS) bands

Many years ago the FCC made available about 200 MHz of spectrum for “Wireless Cable” service. About half the spectrum was set aside for commercial use and the other half for educational use. Only a few TV services were ever built using the spectrum, and the FCC eventually changed the rules, making it available to wireless carriers. Through a series of purchases and long term leases Sprint has ended up with most of it.

But Sprint, as the smallest of the four major carriers in the US, has never been able to raise the capital necessary to fully deploy the spectrum. As a result, in most areas of the country the spectrum sits unused. However, the pending merger of Sprint with T-Mobile should provide the capital needed to start making use of this resource. My guess is T-Mobile will start using the spectrum shortly after the merger’s approval.

But I don’t think the FCC should assume it will happen. At the very least the FCC should make the approval of this merger dependent on T-Mobile quickly deploying this spectrum. If they don’t they should be forced to sell it to someone who will. It is far too valuable a resource to waste.

3.5-3.7 GHz – Citizens Broadband Radio Service (CBRS)

This is a new band of spectrum that should become available in the US shortly. These frequencies have historically been used by the US military, the biggest user being the US Navy, which uses the band for radar.

For the most part this radar is used at sea and rarely inland. Because of this the FCC decided the spectrum could be shared. While the US Navy will continue to have priority in this band, that will probably only be a significant issue near its ports. Everywhere else both wireless carriers and unlicensed users will be able to use it. In some ways the system will work much like WiFi. But it differs in that carriers will have the option of paying for the right to force unlicensed users to switch channels, and in that users will be able to deploy high power base stations (50 watt ERP vs WiFi’s 1 watt limit).

The FCC is hoping to encourage smaller companies and individuals to make the most use of this spectrum (hence the term “Citizens Band”). Personally I will be interested in seeing how this plays out. Since it is mid-band spectrum, my guess is that one or more major carriers will quickly deploy it in the major urban areas (Verizon seems the most interested), although the FCC is capping how much of the bandwidth they can use. The big carriers will also probably continue to ignore most rural areas under the (probably mostly correct) belief that their low band towers can provide sufficient service.

As a result this may open up opportunities for communities that find themselves on the fringes of existing carrier coverage. The cost of priority licenses in these areas should be modest, and they may not be needed at all. With most new phones likely to have support for the CBRS band built in (LTE Band 48), small providers (like small town telephone and cable companies) should be able to just throw up a high power tower and start handing out SIM cards. Newer phones containing a second eSIM should make deployment even easier, since these devices can use two carriers simultaneously (few people are likely to be willing to sacrifice the service of a major carrier or carry a second device). It’s a little too early to tell how things will turn out, but this at least has the potential to be a win-win for both urban and rural users.

3.7-4.2 GHz – the “C” Band

Older readers may remember that about 30 years ago people were putting large dish antennas in their backyards in order to receive what were, for a short time, freely available cable TV channels like HBO. Those big dishes were pointed at satellites that were broadcasting analog transmissions to cable TV providers across the country. The providers picked up the signals with their own dishes and then passed them on to subscribers. Needless to say, companies like HBO weren’t happy with people other than cable TV subscribers watching their shows for free. It wasn’t long before these signals were encrypted and the backyard viewers were asked to pay up.

While the backyard pirates have now disappeared, this radio band is still used by cable broadcasters to send their now-encrypted digital content to cable providers. But the FCC has recently noticed that the band isn’t being used as efficiently as it could be. As a result it has opened discussions with the cable industry about sharing at least part of the band with mobile phone carriers. The negotiations are still at an early stage, but the industry has already mostly agreed to share at least part of this band. It is mainly a question of how much bandwidth they are willing to offer given various monetary incentives.

Currently it appears the wireless carriers will gain access to somewhere between 200 and 300 MHz of bandwidth (out of a total of nearly 500 MHz). That is nearly as much as the entire industry is currently using, and this kind of bandwidth could finally enable true gigabit level service on a mobile device. Unfortunately it will probably be another 5 years before everything is finalized.

Conclusion

As you can see, there is plenty of available bandwidth to enable near gigabit service at some point in the next 5 or 6 years without having to use any bandwidth over 6 GHz. Indeed any deployments of extremely high frequency bandwidth will likely be just a bonus. Since the propagation characteristics of all this new sub-6 GHz spectrum shouldn’t be substantially different from what is already being deployed, most of it should work fine on existing towers.

But there is one caveat. Deploying all this bandwidth still won’t be cheap even if the carriers won’t need to build huge numbers of new towers. What gets built will ultimately be based on how much money the carriers can justify spending.

Personally my guess is that most of the available bandwidth over 2 GHz will end up being deployed almost exclusively in urban areas. The propagation characteristics of this bandwidth (read: short range) make it hard to justify in rural areas. That said, the combination of T-Mobile’s 600 MHz spectrum, the 600 MHz spectrum Charlie Ergen is squatting on, and the original 850 MHz cellular band (CLR) that both Verizon and AT&T should soon be refarming should provide substantial capacity. Throw in the rest of the UHF TV band (roughly 500-600 MHz) in the future and even deep rural areas should be fine.

The problem is that the further you are from a tower, the slower the speed. There is still a limit on how far apart towers can be placed while retaining reasonable service. Based on my own experience working in rural areas of southeast Minnesota, the current arrangement is far from ideal. Local deployment of CBRS base stations may help fill in some of the gaps, but even a handful of additional low band towers would likely do a much better job.

Unfortunately the economics of rural areas make these towers difficult to justify. As with the rural electrification efforts of the 1930s through 1950s, there just aren’t enough users living in these areas to make the investment profitable. So some level of subsidy is likely necessary to make something like this possible.

In addition, note that wireless technology, no matter how good it gets, will probably never fully replace a direct fiber or even coaxial cable link. Even cheap coaxial cable using the DOCSIS 3.1 standard can potentially provide a 10 Gbit connection in both directions. We are a long way from wireless carriers being able to provide that kind of performance at a reasonable price. I will agree that most cable providers need to up their game (something cord cutting should speed along). But if you like watching lots of Netflix on a big 4K TV, I wouldn’t look for Verizon, AT&T or T-Mobile to replace them anytime soon.

Finally, don’t look for big price drops. The cost of providing wireless service is relatively fixed: to provide a given amount of coverage and capacity, you need a certain number of towers for a given amount of spectrum. 5G won’t change that equation much. On the other hand I don’t think you will see significantly higher prices either. What you should see is increasingly better coverage and progressively faster, more reliable service.

Integrating legacy applications into a Chrome OS environment – Updated 01-26-2020

Why some people need to continue running an old OS

New versions of an operating system offer many advantages.   They usually fix bugs and annoyances in the previous system and offer many new features to both users and developers.  Attracted to these advantages, most users soon replace an older system with the new update.

As users increasingly switch to the newer version, programmers put less and less effort into writing and updating programs for the older system. Because of this, new programs or major updates to older programs increasingly either won’t run on the older OS or run poorly. This in turn causes even more people to abandon the older system. It’s a natural progression that has played out many times since the first computer was powered up.

But there is another side to this situation. In addition to adding new features, the developers of newer versions of an operating system also often remove features that older programs may have relied upon. As a result, older programs sometimes no longer run properly on the newly updated OS.

In some cases programmers will update older programs to run on the newer system. But no matter how popular a program may be, many developers still find they lack the time, resources, ability and/or desire to upgrade it.

So when it comes time to update an older computer, users sometimes face a dilemma.  While they may be excited about running all the new software that their older operating system won’t support, they may also need to continue using older programs that the updated operating system won’t run.

This is a problem I began facing several years ago when I started to consider upgrading my aging Windows XP computers. I knew, based on past experience, that upgrading would not be a simple task because of several legacy applications I used to run my business. But I also believed I could eventually get everything transitioned. Transitioning to a new version of Windows was never trouble free, but up until that time I had always found workarounds to keep my older programs running.

Unfortunately I soon found that this time my assumptions were not entirely correct. Starting with Windows Vista, Microsoft became more aggressive about removing support for legacy software. As a result I (and others) found that the programs I relied upon every day to do my job simply would not run properly on either Windows Vista or Windows 7, much less Windows 10. I also realized that keeping my older programs running on future versions of Windows would likely become an uphill battle, since Microsoft seems determined to continue leaving its past behind.

I wasn’t the only person who discovered what Microsoft was doing. Businesses in particular realized that this time the cost of transitioning to a newer version of Windows wasn’t going to be just the cost of buying a new computer. This time many organizations also needed to develop entirely new business processes based on new software. This is a particular problem for businesses that bought expensive industrial equipment designed to last decades and that requires custom applications to operate. Not surprisingly, many have dragged their feet as long as possible. A process that is repeating itself now with Windows 7.

Keeping key software running while moving forward

At first I thought my only choice was to bite the bullet and find replacements for all my legacy software.  But I decided that if I was going to go to all that trouble, why stick with Windows at all?  When I really thought about it, I realized I wasn’t finding any of the post Windows XP upgrades to be all that compelling.  If I was going to be forced into a forklift upgrade, why not take my time and explore other options?

At first I tried several different versions of Linux.  But eventually I took a chance and bought a Chromebook.  I found, much to my surprise, that I really loved how Chrome OS worked.  The system isn’t perfect, but  I like the direction it is heading and it was a good fit for my lifestyle.

What I particularly liked was the fact that I typically don’t install applications on Chrome OS like I did with all my previous operating systems. Instead of running applications natively, Chrome OS is designed to access applications running on other computers. Instead of my having to be responsible for maintaining and updating all my applications, I can now rely on others to do it for me.

Are there risks? Of course. Obviously the maintainers of online applications can (and have) shut down their software at any time. But as I pointed out above, continuing to rely on traditional software is hardly a safe bet either. What good is having a physical copy of a piece of software if you can’t find a machine to run it on?

So while I decided I wanted to move forward with Chrome OS, I still needed to figure out what to do about the older applications I had to continue running.  I could find replacements for a few applications, but several were specialized programs that I knew would never be updated.  Abandoning them all at once would be costly and wasn’t a viable option.

For better or worse I needed to find a way to run both a new, modern operating system like Chrome OS and my old Windows system at the same time. The challenge was to do this as seamlessly as possible. Instead of having two independent systems, what I wanted was something that would appear to be a single solution. I don’t have this entirely figured out and it is still a work in progress, but for those who might be interested, what follows are a few things I have discovered about how to make this work.

New on new, old on old

One of the first things I noticed doesn’t even involve installing new hardware or software. It was simply that things work more smoothly when I use a given application on only one operating system or the other. Even when it is possible, trying to keep two versions of the same application running on two different systems is complicated and time consuming. In most cases it simply isn’t worth the effort.

For example, I quickly found there was little benefit in trying to maintain the ability to check my email on both my Chrome OS devices and my legacy Windows system. Checking email is much easier and more convenient on my Chrome devices, so why fight to maintain that ability on my Windows machine? The same was true of browsing the web, and these were among the first activities I stopped doing on my legacy OS.

A second related issue is that I have given up on trying to keep programs on my legacy system updated.  This is particularly true with large complicated programs like web browsers and office suites.  Even when these programs continue to be officially supported on my older Windows system, I often find developers have little interest in doing any more than the minimum work needed to keep them running.  Because of this I have increasingly noticed new updates are more unstable than the versions they replace.

So as a rule I try to only run newer and updated applications on Chrome OS and only older legacy applications on Windows.   Also since most of these older applications rarely access the Internet, I no longer consider the lack of new security updates on a legacy OS like Windows XP to be a significant concern.

Running a single desktop

I knew early on that the only way I could run both Chrome OS applications and Windows applications was by running two separate machines. In my case I decided to retain my existing legacy desktop machine (uninstalling all but my legacy applications) and buy new Chrome OS devices that would become my daily drivers.

Side note: One issue I have discovered since I began this project is that older hardware doesn’t run forever, and it is almost impossible to buy new hardware that will run old operating systems. The way around this is to purchase a new machine running a current operating system, install virtual machine software like VirtualBox, and then install the legacy operating system and applications within a virtual machine.

Since virtual machines are assigned their own IP addresses, at the network level they are treated as separate machines. As a result you can generally either ignore or use the newer host operating system as you see fit. Installing a virtual machine is a bit beyond the scope of this article, but the process is not particularly difficult. Oddly enough, while Chrome OS does use virtual machine software to run both the Android and Linux environments, it is not currently possible to fire up your own virtual machine and install something like Windows. End side note.
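For what it’s worth, here is a rough sketch of what the VirtualBox route in the side note can look like when scripted from Python via the VBoxManage command line tool. It assumes VirtualBox is installed and VBoxManage is on your PATH; the VM name, memory and disk sizes, and installer ISO path are all placeholders, so treat it as an outline rather than a recipe.

```python
# Rough outline only: creating a VM for a legacy OS with VirtualBox's VBoxManage CLI.
# Assumes VirtualBox is installed and VBoxManage is on the PATH.
# The VM name, memory/disk sizes and installer ISO path below are placeholders.
import subprocess

VM = "LegacyXP"           # hypothetical VM name
ISO = "winxp_retail.iso"  # path to your own legacy OS installer ISO

def vbox(*args):
    """Run a single VBoxManage command, raising an error if it fails."""
    subprocess.run(["VBoxManage", *args], check=True)

vbox("createvm", "--name", VM, "--ostype", "WindowsXP", "--register")
vbox("modifyvm", VM, "--memory", "1024", "--cpus", "1")
vbox("createmedium", "disk", "--filename", f"{VM}.vdi", "--size", "20480")  # ~20 GB virtual disk
vbox("storagectl", VM, "--name", "IDE", "--add", "ide")
vbox("storageattach", VM, "--storagectl", "IDE", "--port", "0", "--device", "0",
     "--type", "hdd", "--medium", f"{VM}.vdi")
vbox("storageattach", VM, "--storagectl", "IDE", "--port", "1", "--device", "0",
     "--type", "dvddrive", "--medium", ISO)
vbox("startvm", VM)  # boot the VM and install the legacy OS from the attached ISO
```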

Having two separate devices usually means dealing with the headache of rotating between two different keyboard/mouse/monitor setups, one connected to each system. I quickly realized I needed to figure out some way to integrate my legacy Windows desktop directly into the Chrome OS desktop.

My first thought was to use Chrome Remote Desktop (CRD) to access my legacy machine. While this is a functional solution, I found it less than ideal for a couple of reasons. The first is that CRD (at least as currently implemented) tends to be slow even over a local network. In the world of networking tools it is what is known as a “screen scraper”, and screen scraping tends to have relatively high latency and be bandwidth intensive.

The second issue was that CRD relies on the Chrome browser being installed on both machines. With Google having ended Chrome updates for Windows XP in early 2016 (with Windows 7 to follow shortly), I knew this wasn’t a viable long term solution.

I decided a better alternative was to utilize a desktop sharing solution that is already available on many Windows machines. Remote Desktop Protocol (RDP, the protocol behind what was formerly called Terminal Services) is a Microsoft protocol for viewing a Windows desktop across a network. It is a multiplatform, multiuser solution that even works over the Internet if you open port 3389 on your router. Because it is based on a lower level protocol, performance tends to be quick even in low bandwidth situations (an RDP server sends compact information that describes a desktop rather than what is essentially a series of screenshots of the desktop).
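If a client ever refuses to connect, the first thing worth checking is whether the legacy machine is even reachable on the RDP port. Here is a minimal sketch of that check in Python; the IP address is a placeholder for your own Windows box.

```python
# Quick reachability check for an RDP server (TCP port 3389).
# The address below is a placeholder; substitute your legacy machine's LAN IP.
import socket

HOST = "192.168.1.50"   # hypothetical address of the legacy Windows machine
PORT = 3389             # default RDP port

try:
    with socket.create_connection((HOST, PORT), timeout=3):
        print(f"RDP port {PORT} on {HOST} is open; an RDP client should be able to connect.")
except OSError as err:
    print(f"Could not reach {HOST}:{PORT} ({err}). Check that Remote Desktop is enabled "
          "and that the Windows firewall allows the connection.")
```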

An RDP server is built into all Pro versions of Windows. All I needed was a Chrome OS RDP client. A quick search of the Chrome Web Store soon found two. Chrome RDP is the least expensive option ($10) but lacks the ability to pass sound. Xtralogic makes a client that passes sound as well for a slightly higher price ($10/year). Both work fast enough over a local network that it feels like I am directly connected to my Windows computer.

There is one minor issue to be aware of if you are trying to use the Chrome RDP client to access an RDP server running on Windows XP or earlier. By default Chrome RDP won’t log into these versions of Windows. But there is an easy fix: you just need to change a setting to allow connections without Network Level Authentication (Options:Advanced, check “Allow Non-NLA Connections”).

The only downside for some users is that an RDP server is not available in either Windows XP Home or Windows 7 Home. In this case there are two alternatives. Probably the best bet is to just upgrade to a Pro version of Windows. For Windows XP you need to find a retail copy of Windows XP Pro (either the full or upgrade version – the OEM versions usually won’t work). You can commonly find copies of this and other Windows Pro versions on places like eBay if you look around. [Yes, I am also aware that technically an RDP server is already built into most versions of Windows, including the Home editions, and that there are various hacks to enable it. I’m sure most of these probably work fine, but since there are always possible issues with any hack I won’t make any recommendations here. Google and proceed at your own risk if you want to explore this option.]

If you can’t upgrade to Windows Pro, find Chrome RDP to be a less than ideal solution, and/or are running a legacy OS other than Windows, the other alternative is to use something similar to CRD called VNC. The VNC protocol has been around for years and there are multiple clients and servers available for different operating systems, including a free client in the Chrome Web Store.

I have found the open source version of RealVNC server works well on Windows XP.  There are also versions available for various Linux distributions.  My understanding is that a VNC server is also included in Mac OS X version 10.4 and later.

Overall VNC is lightweight and easy to install.  While it is still a “screen scraper”, personally I found the performance to be better than CRD and for me it is more stable and easier to use.  Also unlike CRD, VNC doesn’t require the installation of the Chrome browser on your legacy machine.  Keep in mind that in many situations you also have the option of running both Chrome RDP and VNC and can rotate between them as you see fit.

While using either a RDP or VNC client to access my legacy Windows machine works well most of the time, there are a couple of minor issues that I have run across mainly related to the keyboard layout used by Chrome devices.  Even though I might see a legacy application on my screen, the keys on my keyboard still function like they would if I was using a Chrome OS application.

In my case some of my legacy Windows programs require the function keys (F1, F2, etc.) to run properly. By default Chrome OS doesn’t enable these keys even if you plug in a PC/Windows keyboard. Fortunately there are two easy solutions. The first is to just hold down the “Search” key and press one of the keys in the top row of your keyboard. For example, Search plus the fifth top-row key (the “show windows” key on most Chromebooks) acts as F5 (if you are using a Windows keyboard on a Chromebox, press the “Windows” key and the F5 key instead). The other alternative is to change a setting in Chrome OS to force it to recognize the top row of keys as function keys. In that case, if you want to use a Chrome key as usual (e.g. to mute sound or darken the screen), simply hold down Search and press the appropriate key to temporarily restore that function.

Sharing files between Chrome OS and a legacy OS

The next issue I faced was how to share files between Chrome OS and my legacy Windows OS.  In Windows files are typically stored on a local hard drive and shared via the SMB/CIFS protocol (aka Windows File Sharing).  Macs and most Linux systems also usually support this protocol via some implementation of Samba.

When Chrome OS was first released, the only way to store files was either on a small local drive or remotely on Google Drive.  However support for other file systems has been added to Chrome OS through the File System Provider API.  While this API doesn’t directly support any particular file system, it does allow third party developers to add support for many alternative systems.

In addition, Chrome OS now has a built-in SMB/CIFS client. Assuming your legacy OS machine has been set up to share files, using the built-in SMB/CIFS client is easy. Simply go to Settings|Advanced|File|Network File Shares and click the “Add file share” button. All you need to do is fill in the URL of your Windows share (usually “//ip address/sharename”) and enter your username and password.
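If the Chrome OS dialog refuses to mount a share, I find it helps to first confirm the share path and credentials from another machine. Here is a hedged sketch using the third-party smbprotocol package’s smbclient module (pip install smbprotocol); the server address, share name and credentials are placeholders. Note that, like the Chrome OS client, this package speaks SMB 2/3 only, so it won’t reach an SMB 1-only machine like Windows XP.

```python
# Sanity-check an SMB share before adding it in Chrome OS.
# Requires the third-party package: pip install smbprotocol
# The server address, share name and credentials below are placeholders.
import smbclient  # high-level client installed by the smbprotocol package

SERVER = "192.168.1.50"   # hypothetical legacy machine or NAS
SHARE = "shared_docs"     # hypothetical share name
USERNAME = "myuser"
PASSWORD = "mypassword"

share_path = "\\\\" + SERVER + "\\" + SHARE   # UNC path, i.e. \\server\share

# Register credentials for this server, then list the top level of the share.
smbclient.register_session(SERVER, username=USERNAME, password=PASSWORD)
for name in smbclient.listdir(share_path):
    print(name)

# If this lists your files, the same //server/share URL plus username and password
# should work in the Chrome OS "Add file share" dialog.
```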

Unfortunately there is one annoying bug in both the File System Provider API and the built-in SMB/CIFS client that for some reason Google seems to be in no hurry to fix. The native Files app in Chrome OS can always access a remote file system, but the Chrome browser itself cannot. What this means is that if you find a file on a website that you want to save, the Chrome “Save” dialog box will only give you two choices – the local “Downloads” folder on your machine and Google Drive. A remote file system won’t be an option. So you have to download the file locally and then open the Files app to transfer it to your remote file system (highly annoying). Oddly enough, in most cases you are able to upload a file from your remote file system to a website, so adding an attachment to Gmail thankfully works.

A more significant problem is that the built-in SMB/CIFS client only accesses files using the newer versions of the SMB protocol (SMB 2 and 3). Older operating systems (like XP) only speak version 1 of the SMB protocol and so can’t be reached with the built-in SMB/CIFS client.

There are multiple ways around both of these issues. But I strongly recommend pursuing only one. Keep all of your data on a separate file server. Doing so will make maintaining and securing your data much easier.

Years ago setting up a file server was complicated and expensive. But today you can purchase a small NAS appliance for a few hundred dollars depending on the size and number of the hard drives you need.

Currently I’m running one of Synology’s inexpensive dual drive units in a RAID 1 configuration (each drive is a duplicate of the other in case one fails). It speaks all versions of the SMB protocol, so it can be accessed by both older and newer operating systems. That said, over the years I’ve worked with Chrome OS I’ve found it works best when you can keep all or most of your data in Google Drive.

Fortunately my Synology box makes this easy. It allows me to synchronize any folder on my NAS with any folder on Google Drive. Personally I just create a folder called “NAS” on Google Drive that I synchronize with most of the contents of my NAS. The only things I don’t put in this folder are things like Google Docs and Sheets since these aren’t typically readable on other operating systems. Since I started doing this I don’t even bother using Chrome OS’s built-in SMB/CIFS client.

Finally, no discussion about file management would be complete without addressing backup. While each situation is different, in most cases you want a backup system to accomplish three things. First, you want copies of your files off site (in case your house burns down and/or your online account is closed or hacked). Second, you ideally want additional copies stored over time (in case you suddenly need something you accidentally deleted last week or last month). Finally, both of these things should happen automatically.
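For anyone whose NAS doesn’t handle this automatically, here is a minimal sketch of the “copies over time” idea using only the Python standard library. The source and destination paths are placeholders, and scheduling (the “automatic” part) is left to cron, Task Scheduler or the NAS itself.

```python
# Minimal dated-snapshot backup: copy a folder into a destination named by date,
# then prune snapshots older than a retention window. Paths are placeholders.
import shutil
from datetime import date, datetime, timedelta
from pathlib import Path

SOURCE = Path("/data/documents")    # hypothetical folder to protect
DEST = Path("/backups/documents")   # hypothetical backup location (ideally off site)
KEEP_DAYS = 30                      # how long to keep old snapshots

def take_snapshot():
    """Copy SOURCE into a folder named after today's date, e.g. /backups/documents/2020-01-26."""
    snapshot = DEST / date.today().isoformat()
    if not snapshot.exists():
        shutil.copytree(SOURCE, snapshot)

def prune_old_snapshots():
    """Delete dated snapshot folders older than KEEP_DAYS."""
    cutoff = datetime.now() - timedelta(days=KEEP_DAYS)
    for snap in DEST.iterdir():
        try:
            taken = datetime.fromisoformat(snap.name)
        except ValueError:
            continue  # skip anything that isn't a dated snapshot folder
        if taken < cutoff:
            shutil.rmtree(snap)

if __name__ == "__main__":
    take_snapshot()
    prune_old_snapshots()
```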

My Synology box makes all of this easy. First of all, by synchronizing with Google Drive I have effectively moved most of my data off site. Synology also has a small app that makes it easy to keep backups of my data over time. I usually keep a snapshot of my data from one day ago, one month ago and one year ago in case something gets accidentally deleted. For extra protection I also have the Synology back up all my files (including my timed snapshots) to Backblaze’s inexpensive B2 cloud storage. My Synology box handles all of this automatically. Highly recommended.

Finally, I also recommend backing up Google accounts using a cloud-to-cloud backup service. While backups of G Suite accounts are the most commonly available (through places like Backupify), typical Google accounts can also be backed up at Spinbackup or Upsafe. If for some reason you find yourself locked out of your Google account or have all your data wiped out, these services will allow you to restore everything to a new account if necessary.

Sharing printers and scanners

Printers

Printing on Chrome OS while also being able to print from a legacy operating system is a bit of a challenge because the native printing system on Chrome OS has traditionally been mostly network based. In the past all printing was done via a service called Google Cloud Print (GCP). As a result you typically needed a printer that was both Google Cloud Print ready and that also had legacy driver support for your older operating system. This limited your selection and typically prevented most people from using older printers.

Today things are a bit more flexible. Google has decided to largely abandon Google Cloud Print and has instead replaced it with the “CUPS” printing system from Linux. What this means is that Chrome OS now has built in drivers for a number of different printers. In addition Chrome OS now has excellent support for most network based printing protocols. As a result Chrome OS will see and connect to many wireless printers automatically (Go to Settings|Advanced|Printing|Printers).

As a result, finding a printer that still has drivers available for your legacy system is quickly becoming the more significant problem. Fortunately most major manufacturers seem to be doing a pretty good job of maintaining legacy drivers, but I would recommend verifying this before making a purchase.

In most cases I would recommend a good quality, network based wireless printer that supports your legacy operating system. Keep in mind printing over a network has been happening for a long time and almost all older operating systems will support some type of network printing protocol (support for LPD was particularly common).

Scanners

The most convenient way to add scanning capability is to use a network connected multifunction device that can print, scan, copy and/or fax.  Similar to stand alone network printers, these devices usually come with all the software needed to handle most scanning, copying and faxing functions already installed in the machine itself.  Unlike in the past,  you no longer need to install separate software packages on your computer (either Chrome OS or your legacy operating system) in order to use them (one example).

What this means is that you can walk up to these machines and scan a document without dealing directly with your computer.  Depending on the device settings, scanned documents are immediately converted to a file (pdf, jpeg, tiff, etc) and sent to some type of local or remote storage device (network share, ftp site, website (e.g. Google Drive), flash drive, etc).  Some even have built in optical character recognition (OCR).  Unless you have unique requirements, odds are you can find a device that will fit your needs.

My current machine allows me to scan either directly to my Google Drive account or to a network share on my NAS.  Having set up shortcuts with common default settings, I can usually scan documents by only hitting a few buttons.  I then just look for the file in either a Google Drive folder or a folder on my legacy machine.  In many cases this can eliminate the need for dedicated scanning drivers or software on either your Chrome OS or legacy machine.

When choosing a scanner there are a few limitations on some machines that you may want to be aware of and avoid. First of all, I have found that some scanners don’t scan edge to edge. This is particularly true of automatic document feeders, which sometimes cut off the top and bottom of a page (this is usually not a problem when you scan using a device’s flatbed). Depending on what you are scanning, this may or may not matter.

In addition image quality can vary significantly.  Text based documents usually look fine, but detailed grayscale and color photographs will quickly reveal a marginal scanner.  Unfortunately price isn’t always a reliable indicator in this regard and testing directly isn’t always an option today when most are bought online.  But if you ask around you might find someone who has the model you are interested in and is willing to post or email you a sample scan in the format and resolution you typically use.  I usually find 300 dpi grayscale or color is the minimum necessary to archive common paper documents in a readable form.  In most cases avoid the black and white setting.

Conclusion

Of course not all of the above advice is entirely unique to Chrome OS. Much of it could apply to almost anyone who has invested heavily in a particular operating system and now needs to move on. Perhaps a few of those people will find some of these strategies helpful.