The hyperventilating around 5G aside, why the future of wireless is still FAST

There has been a lot of unrealistic hype around fifth generation (5G) wireless technology. There has also been a lot of ink spilled outlining why that hype is unjustified. Both views have some justification, but both are often based on an incomplete understanding of wireless technology. For example, it is obvious many writers don't understand exactly what 5G even means. The fifth generation of what?

So let's begin with some history before we get into the more technical weeds. Over the years, cellular mobile phone carriers have used a series of mutually incompatible methods of providing wireless service to their customers. As technology has advanced, each carrier has occasionally added new technology and then slowly retired the old.

Many times these additions are minor and cause few issues for the average subscriber. But in some cases the new technology is substantially better yet completely incompatible with the old, so much so that previous devices won't function on the new system. Because these changes are so disruptive, they don't occur often. But when they do, they are noticeable to everyone and so have become known as generational upgrades.

What is now known as first generation or 1G wireless technology mainly refers to the first of several analog cell phone systems deployed in the late 1980s and early 1990s (mostly the AMPS system in the US). While each functioned a little differently, most were just wireless versions of a landline phone. These systems allowed you to dial a number and talk to someone, but that was about it.

In the mid-1990s the first 2G digital systems began to be introduced. In this case "digital" mainly referred to the digitization of voice calls, although these systems also introduced text messaging as an added service. The main advantage was the ability to squeeze more phone calls into the same amount of radio spectrum, increasing the total number of calls the system could handle.

While most systems were eventually expanded to include some form of packet switched data connections to the internet (think glorified dial-up), the focus was primarily on maximizing the number of circuit switched voice calls to landline phones. In the US four competing systems were deployed: GSM, CDMA/IS-95, D-AMPS/TDMA (Cingular) and iDEN (Nextel).

As internet use grew in the late 1990s, two of the previous 2G systems, GSM and CDMA, evolved to include higher speed data connections. However, the focus was still mainly on voice calling. While most people continued to refer to these 3G systems by their previous 2G names, technically they were known as CDMA2000 1X/EV-DO and UMTS/HSPA+.

By the mid-2000s the amount of internet data being carried over the various wireless networks was rapidly overtaking the amount of voice traffic. A new wireless system was needed, and by this time almost everyone in the wireless industry was finally willing to support a single worldwide standard. To that end, most of the organizations that had developed the previous standards joined a group called the 3rd Generation Partnership Project (3GPP).

What the group developed was a system called Long Term Evolution, or LTE. Unlike previous standards, LTE was an entirely packet switched data system. In fact, until the introduction of voice over LTE technology years later (or more accurately, the GSMA IR.92 IMS Profile for Voice and SMS), you couldn't even make a traditional voice call over the system. Most carriers had to continue operating their old 2G and 3G systems for this purpose.

The LTE standard was first outlined in 2008 in a series of documents known as 3GPP Release 8. The standard has since been updated and extended several times in Releases 9 through 14. However, the new enhancements have all retained backward compatibility with Release 8.

After years of successful use, the industry is now at a point where it is starting to bump up against the limits of the LTE standard. One of the big ones is that the LTE standard never really defined how carriers should use radio frequencies beyond 6 GHz. Nature makes working with frequencies beyond 6 GHz difficult (the wavelengths at these frequencies are small so even the tiniest objects can block or absorb them) and in 2008 using them just didn’t seem terribly practical.

But technology has continued to march forward and using extremely high radio frequencies is now at least technically viable. In addition new techniques have been developed over the past 10 years to increase spectral efficiency (the ability to squeeze more bits on a given amount of radio bandwidth). The problem is most of these new methods are completely different from the methods used by the LTE standard.

So after much deliberation the members of the 3GPP decided it was time to make a jump. They decided that 3GPP Release 15 would introduce a new method of transmitting data over radio waves that unfortunately would also be completely incompatible with LTE. That new method is called 5G New Radio or 5GNR (linguistic creativity obviously not being a strong trait among wireless engineers).

In addition to increasing spectral efficiency, the 5GNR standard also makes it easier to work with frequencies up to 100 GHz. That opens up roughly 94 GHz of extra bandwidth, or more than 15 times the roughly 6 GHz LTE was designed to use. As a result, the potential amount of data 5G devices will be able to squeeze out of the air is many times greater than anything that would have been possible with LTE devices. The optimists are correct on this point.

But as always, the devil is in the details. Theoretically 5GNR makes it possible to transmit 15-30 bits per second for every hertz of bandwidth. But in the real world this will likely translate to only about 30% more data than LTE. So when the mobile carriers upgrade their existing spectrum to use the 5GNR method, most users will only see about a 30% bump in performance. Significant, but hardly earth shattering. Most people won't even notice the change.
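For the curious, here is the back-of-the-envelope arithmetic behind that claim in Python. The efficiency and bandwidth figures below are illustrative assumptions, not carrier specifications:

```python
# Rough throughput estimate: throughput = spectral efficiency x bandwidth.
# The numbers below are illustrative assumptions, not carrier measurements.

def throughput_gbps(efficiency_bps_per_hz, bandwidth_mhz):
    """Return throughput in Gbit/s for a given efficiency and bandwidth."""
    return efficiency_bps_per_hz * bandwidth_mhz * 1e6 / 1e9

# A 20 MHz LTE carrier at a typical real-world efficiency of ~1.5 b/s/Hz.
print(throughput_gbps(1.5, 20))    # ~0.03 Gbit/s (30 Mbit/s)

# The same 20 MHz upgraded to 5GNR with ~30% better efficiency.
print(throughput_gbps(1.95, 20))   # ~0.039 Gbit/s -- a modest bump

# 400 MHz of millimeter-wave spectrum at the same efficiency.
print(throughput_gbps(1.95, 400))  # ~0.78 Gbit/s -- bandwidth is what matters
```

The last line is the real story: the big gains come from adding bandwidth, not from the efficiency bump alone.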

[Figure: Summary of 5G characteristics]

Also, the natural challenges of transmitting data over radio frequencies beyond 6 GHz still exist. While we can cram tons of data into the 94 GHz that will potentially be available, earth's atmosphere makes it difficult to send it very far (think feet/meters vs miles/kilometers). Here is a chart (courtesy of Verizon) that shows the relative distances signals travel on the existing bands below 2.5 GHz (the actual distance depends on a combination of the effective radiated power of the transmitters, the sensitivity of the receiving antennas, terrain and several other factors).

[Chart: Relative propagation distance by frequency band]
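The chart is just physics at work. As a rough illustration, here is the standard free-space path loss formula in Python. This ignores terrain, rain, foliage, building penetration and antenna gains, all of which matter enormously in the real world, so treat it only as a sketch of why higher frequencies fall off so quickly:

```python
import math

def fspl_db(distance_km, freq_mhz):
    """Free-space path loss in dB (ignores terrain, rain, foliage, antennas)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# Loss over 1 km at a low-band, mid-band and millimeter-wave frequency.
for f in (700, 2500, 28000):          # MHz
    print(f, "MHz:", round(fspl_db(1.0, f), 1), "dB")

# In free space, every extra 6 dB of loss roughly halves the usable distance,
# so the ~32 dB gap between 700 MHz and 28 GHz is enormous.
```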

What this means is that to take advantage of all that bandwidth above 6 GHz, the carriers will have to build huge numbers of new sites very close together (probably roughly every 1,000 feet (300 meters) or less). While these won't have to be the big towers we often see today, each of them will still normally need its own fiber connection to the internet (if you can't send a signal very far anyway, there is no point in having an antenna 250 feet in the air).
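To get a feel for why that spacing matters financially, here is a quick back-of-the-envelope site count. The grid spacing and coverage area are illustrative assumptions, not an engineering plan:

```python
import math

def sites_needed(area_km2, spacing_m):
    """Approximate number of sites to cover an area with a simple square grid."""
    cell_area_km2 = (spacing_m / 1000.0) ** 2
    return math.ceil(area_km2 / cell_area_km2)

# Covering a 10 km x 10 km city core at ~300 m spacing, versus a traditional
# macro grid with towers roughly every 3 km (both figures are assumptions).
print(sites_needed(100, 300))    # ~1112 small cells
print(sites_needed(100, 3000))   # ~12 macro towers
```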

But in the end, no matter how you look at it, building all these radio sites will be extremely expensive. It might be financially viable in extremely dense urban areas (this technology would work great in a packed football stadium), but it is unlikely to be workable in suburban or rural areas anytime soon. In those cases it will probably just be cheaper to string fiber to every home. So the pessimists are correct on this point.

So why am I optimistic? The problem with the 5G pessimists is that they are implying that all the bandwidth below 6 GHz that can be used for mobile phone service has already been deployed and is being used. The reality is only a fraction of this bandwidth is currently being used. Of the four major US carriers, only Sprint has rights to more than 200 MHz (0.2 GHz) of that 6 GHz (here is how it is being used).

The bottom line is that regardless of the efficiency of the method used to transmit data, overall performance still mostly depends on the amount of bandwidth the carriers have available. Give them more bandwidth and your mobile device will naturally work faster.

While much of the sub-6 GHz bandwidth is reserved for other important uses (e.g. aviation, broadcast television and radio, GPS, the military, etc.), a surprising amount of new bandwidth will still likely become available soon. Even among the existing spectrum, much of it is underutilized.

Here is a list of places the much needed extra bandwidth will come from. Each alone will only give an incremental boost, but together they will ensure a fast 5G future even if carriers never transmit a single bit on a frequency over 6 GHz.

850 MHz – refarming 2G and 3G spectrum

As I mentioned, LTE is data only. Until recently most carriers have had to continue operating their old, less efficient 2G and 3G networks so that their customers could make traditional phone calls. This changed with the widespread deployment of voice over LTE (VoLTE) technology, which lets users make those calls over LTE. Since LTE has much better spectral efficiency than the old 2G and 3G networks, that old spectrum can now be put to much better use. Over the next year or two most carriers will completely shut down these old networks, freeing the spectrum for use by either LTE or 5GNR.

600 MHz

A few years ago the FCC auctioned off almost all of the bandwidth between 600 MHz and 700 MHz (formerly known as UHF TV channels 38-51). Most of this bandwidth was purchased by T-Mobile and is now being gradually deployed as legacy TV broadcasters slowly vacate the spectrum. The process should be complete by the middle of 2020.

T-Mobile was the only major US carrier that lacked significant low band spectrum (below 1000 MHz (1.0 GHz)). In addition to dramatically improving T-Mobile's coverage in rural and suburban areas, the new spectrum will also significantly increase its total capacity in urban areas.

Personally I think the rest of the UHF TV band (channels 14-36 or roughly between 500 and 600 MHz) will also eventually be auctioned off. I have serious doubts most traditional linear television will survive the cord cutting era. But it will probably be another 10 years before all of that fully plays out.

600 MHz and 1.7/2.1 GHz – Squatters

This is a particular annoyance of mine. During the past couple of major bandwidth auctions (600 MHz and AWS-3) a couple of parties have purchased big chunks of spectrum just to sit on it. The parties are speculating that eventually the value of this spectrum will increase substantially and they intend to sell it at a significant profit when it does (looking at you Charlie Ergen).

Technically the FCC has rules requiring buyers to put any spectrum they purchase from the government to use within a certain amount of time. But in reality people often play games to get around these rules for years or even decades. This forces everyone else to unnecessarily pay more for substandard service.

This is not a technical problem; it is a political problem. Charlie Ergen's Dish Network is currently sitting on almost 100 MHz of quality spectrum, nearly as much as Verizon holds in total. This needs to stop. We need an FCC with the guts to tell these people to either deploy now or give the spectrum back.

2.5-2.7 GHz – former Broadband Radio Service (BRS) and Educational Broadcast Service (EBS) bands.

Many years ago the FCC made about 200 MHz of spectrum available for "Wireless Cable" service. About half the spectrum was set aside for commercial use and the other half for educational use. Only a few TV services were ever built using it, and the FCC eventually changed the rules, making the spectrum available to wireless carriers. Through a series of purchases and long term leases, Sprint has ended up with most of it.

But Sprint, as the smallest of the four major US carriers, has never been able to raise the capital necessary to fully deploy the spectrum. As a result, in most areas of the country it sits unused. However, the pending merger of Sprint with T-Mobile should provide the capital needed to start making use of this resource. My guess is T-Mobile will start using the spectrum shortly after the merger's approval.

But I don't think the FCC should simply assume that will happen. At the very least, the FCC should make approval of the merger dependent on T-Mobile quickly deploying this spectrum. If they don't, they should be forced to sell it to someone who will. It is far too valuable a resource to waste.

3.5-3.7 GHz – Citizens Broadband Radio Service (CBRS)

This is a new band of spectrum that should become available in the US shortly. These frequencies have historically been used by the US military, the biggest user being the US Navy, which uses them for radar.

For the most part this radar is only used at sea and rarely inland. Because of this the FCC decided the spectrum could potentially be shared. While the US Navy will continue to have priority in this band, that will probably only be a significant issue near its ports. Everywhere else both wireless carriers and unlicensed users will be able to use it. In some ways the system will work similarly to WiFi, but with two differences: carriers will have the option of paying for the right to force unlicensed users to switch channels, and users will be allowed to deploy high power base stations (50 watt ERP vs WiFi's 1 watt limit).

The FCC is hoping to encourage smaller companies and individuals to make the most use of this spectrum (hence the term "Citizens Band"). Personally I will be interested to see how this plays out. Since it is mid-band spectrum, my guess is that one or more of the major carriers will quickly deploy it in major urban areas (Verizon seems the most interested), though the FCC is capping how much of the band each can use. The big carriers will also probably continue to ignore most rural areas under the (probably mostly correct) belief that their low band towers provide sufficient service.

As a result this may open up opportunities for communities that find themselves on the fringes of existing carrier coverage. The cost of priority licenses in these areas should be modest and may not be needed at all. With most new phones likely to support the CBRS band (LTE Band 48), small providers (like small town telephone and cable companies) should be able to just put up a high power base station and start handing out SIM cards. Newer phones containing a second eSIM should make deployment even easier since these devices can use two carriers simultaneously (few people are likely to be willing to sacrifice the service of a major carrier or carry a second device). It's a little too early to tell how things will turn out, but this at least has the potential to be a win-win for both urban and rural users.

3.7-4.2 GHz – the “C” Band

About 30 years ago, as older readers may remember, people put large dish antennas in their backyards to receive what were, for a short time, freely available cable TV channels like HBO. Those big dishes were pointed at satellites broadcasting analog transmissions to cable TV providers across the country. The providers picked up the signals with their own dishes and passed them on to subscribers. Needless to say, companies like HBO weren't happy with people other than cable TV subscribers watching their shows for free, and it wasn't long before these signals were encrypted and the backyard viewers were asked to pay up.

While the backyard pirates have now disappeared, this radio band is still used by broadcasters to send their now encrypted digital content to cable providers. But the FCC has recently noticed that the band isn't being used as efficiently as it could be. As a result it has opened discussions with the industry about sharing at least part of the band with mobile phone carriers. The negotiations are still at an early stage, but the industry has already mostly agreed to share some of this band. It is mainly a question of how much bandwidth they are willing to give up in exchange for various monetary incentives.

Currently it appears the wireless carriers will gain access to somewhere between 200 and 300 MHz of bandwidth (out of a total of nearly 500 MHz). This is nearly as much as the entire industry is currently using. Bandwidth on this scale could finally enable true gigabit level service on a mobile device. Unfortunately, it will probably be another 5 years or so before everything is finalized.

Conclusion

As you can see, there is plenty of available bandwidth to enable near gigabit service at some point in the next 5 or 6 years without having to use anything over 6 GHz. Indeed, any deployments of extremely high frequency spectrum will likely be just a bonus. Since the propagation characteristics of this new sub-6 GHz spectrum aren't substantially different from those of the spectrum already deployed, most of it should work fine on existing towers.

But there is one caveat. Deploying all this bandwidth still won’t be cheap even if the carriers won’t need to build huge numbers of new towers. What gets built will ultimately be based on how much money the carriers can justify spending.

Personally, my guess is that most of the available bandwidth over 2 GHz will end up being deployed almost exclusively in urban areas. The propagation characteristics of this spectrum (read: short range) make it hard to justify in rural areas. That said, the combination of T-Mobile's 600 MHz spectrum, the 600 MHz spectrum Charlie Ergen is squatting on and the original 850 MHz cellular band (CLR) that both Verizon and AT&T should soon be refarming should provide substantial capacity. Throw in the rest of the UHF TV band (roughly 500-600 MHz) in the future and even deep rural areas should be fine.

The problem is that the further you are from a tower, the slower the speed. There is still a limit on how far apart towers can be placed while retaining reasonable service. Based on my own experience working in rural areas of southeast Minnesota, the current arrangement is far from ideal. Local deployment of CBRS base stations may help fill in some of the gaps, but even a handful of additional low band towers would likely do a much better job.

Unfortunately, the economics of rural areas make these towers difficult to justify. As with the rural electrification efforts of the 1930s through 1950s, there just aren't enough users living in these areas to make the investment profitable. So some level of subsidization is likely necessary to make something like this possible.

In addition, note that wireless technology, no matter how good it gets, will probably never fully replace a direct fiber or even coaxial cable link. Even cheap coaxial cable using the DOCSIS 3.1 standard can potentially provide a 10 Gbit connection in both directions. We are a long way from wireless carriers being able to provide that kind of performance at a reasonable price. I will agree that most cable providers need to up their game (something cord cutting should speed along). But if you like watching lots of Netflix on a big 4K TV, I wouldn't look for Verizon, AT&T or T-Mobile to replace them anytime soon.

Finally, don't look for big price drops. The cost of providing wireless service is relatively fixed: a given amount of coverage and capacity requires a given number of towers and a given amount of spectrum. The use of 5G won't change that equation much. On the other hand, I don't think you will see significantly higher prices either. What you should see is increasingly better coverage and progressively faster and more reliable service.

Integrating legacy applications into a Chrome OS environment – Updated

Why some people need to continue running an old OS

New versions of an operating system offer many advantages.   They usually fix bugs and annoyances in the previous system and offer many new features to both users and developers.  Attracted to these advantages, most users soon replace an older system with the new update.

As users increasingly switch to the newer version, programmers put less and less effort into writing and updating programs for the older system.  Because of this, new programs or major updates to older programs increasingly either won't run on the older OS or run poorly.  This in turn causes even more people to abandon the older system.  It's a natural progression that has played out many times since the first computer was powered up.

But there is also another side to this situation.  In addition to adding new features, developers of newer versions of an operating system also often remove features that older programs relied upon to run.  As a result, older programs sometimes no longer run properly on the newly updated OS.

In some cases programmers will update older programs to run on the newer system.  But no matter how popular a program may be, many developers may still find they lack either the time, resources, ability and/or desire to upgrade their programs.

So when it comes time to update an older computer, users sometimes face a dilemma.  While they may be excited about running all the new software that their older operating system won’t support, they may also need to continue using older programs that the updated operating system won’t run.

This is a problem I began facing several years ago when I started to consider upgrading my aging Windows XP computers.  I knew, based on past experience, that upgrading would not be a simple task because of several legacy applications I used to run my business.  But I also believed I could eventually get everything transitioned.  Transitioning to a new version of Windows was never trouble free, but up until that time I had always found workarounds to keep my older programs running.

Unfortunately I soon found that this time my assumptions were not entirely correct.  Starting with Windows Vista, Microsoft became more aggressive about removing support for legacy software.  As a result, I and others found that the programs I relied upon every day to do my job simply would not run properly on Windows Vista or, later, on Windows 7.  I knew I could find workarounds for some programs (like XP Mode), but even the ones that did work were often more complicated and troublesome than before.  I also realized that keeping my older programs running on future versions of Windows would likely become an uphill battle since Microsoft seems determined to keep leaving its past behind.

I wasn't the only person who discovered what Microsoft was doing.  Businesses in particular realized that this time the cost of transitioning to a newer version of Windows wasn't going to be just the cost of buying a new computer.  This time many organizations also needed to develop entirely new business processes based on new software.  This is a particular problem for businesses that bought expensive industrial equipment designed to last decades and that requires custom Windows XP (or earlier) applications to operate.  Not surprisingly, many have dragged their feet as long as possible, a process that seems to be repeating itself now with Windows 7.

Keeping key software running while moving forward

At first I thought my only choice was to bite the bullet and find replacements for all my legacy software.  But I decided that if I was going to go to all that trouble, why stick with Windows at all?  When I really thought about it, I realized I wasn’t finding any of the post Windows XP upgrades to be all that compelling.  If I was going to be forced into a forklift upgrade, why not take my time and explore other options?

At first I tried several different versions of Linux.  But eventually I took a chance and bought a Chromebook.  I found, much to my surprise, that I really loved how it worked.  The system isn’t perfect, but it is improving quickly.  I like the direction it is heading and it was a good fit for my lifestyle.

But I still needed to figure out what to do about the older applications I had to continue running.  I could find replacements for a few applications, but several were specialized programs that I knew would never be updated.  Abandoning them all at once would be costly and wasn’t a viable option.

For better or worse, I needed to find a way to run both a new, modern operating system like Chrome OS and my old Windows XP system at the same time.  The challenge was to do this as seamlessly as possible.  Instead of having two independent systems, what I wanted was something that would appear to be a single solution.  I don't entirely have this figured out and it is still a work in progress, but for those who might be interested, what follows are a few things I have discovered about how to make this work.

New on new, old on old

One of the first things I noticed doesn't even involve installing new hardware or software.  It is simply that things work more smoothly when I use a given application on only one operating system or the other.  Even when it is possible, trying to keep two versions of the same application running on two different systems is complicated and time consuming.  In most cases it simply isn't worth the effort.

For example, I quickly found there was little benefit in trying to maintain the ability to check my email on both my Chrome OS and legacy Windows systems.  Checking email is much easier and more convenient on my Chrome devices, so why fight to maintain that ability on my Windows machine?  The same was true of browsing the web, and these were among the first activities I stopped doing on my legacy OS.

A second related issue is that I have given up on trying to keep programs on my legacy system updated.  This is particularly true with large complicated programs like web browsers and office suites.  Even when these programs continue to be officially supported on my system, I often find developers have little interest in doing any more than the minimum work needed to keep them running.  Because of this I have increasingly noticed new updates are more unstable than the versions they replace.

So as a rule I try to only run newer and updated programs on Chrome OS and only older legacy applications on Windows.   Also since most of these older applications rarely access the Internet, I no longer consider the lack of new security updates on a legacy OS like Windows XP to be a significant concern.

Running a single desktop

I knew early on that the only way I could run both Chrome OS applications and Windows XP applications was by running two separate machines.  In my case I decided to retain my existing legacy desktop machine (uninstalling all but my legacy applications) and buy both a Chromebox and a Chromebook.  But I also realized that having to move between two different keyboard/mouse/monitor setups, one connected to each system, would be a headache.  I needed to figure out some way to integrate my legacy Windows desktop directly into the Chrome OS desktop.

My first thought was to use Chrome Remote Desktop (CRD) to access my legacy machine.  While this is a functional solution, I found it to be less than ideal for a couple of reasons.  The first is that CRD (at least as currently implemented) tends to be slow even over a local network.  In the world of networking tools it is what is known as a "screen scraper", and screen scraping tends to have relatively high latency and be bandwidth intensive.

The second issue is that CRD relies on the Chrome browser being installed on both machines.  With Google having ended Chrome updates for Windows XP in early 2016, I knew this wouldn't remain a viable long term solution.

I decided a better alternative was to use a desktop sharing solution that is already available on many Windows machines.  Remote Desktop Protocol (RDP, the protocol behind what was once called Terminal Services) is Microsoft's protocol for viewing a Windows desktop across a network.  It is a multiplatform, multiuser solution that even works over the Internet if you open port 3389 on your router.  Because an RDP server sends compact instructions that describe the desktop rather than what is essentially a series of screenshots of it, performance tends to be quick even in low bandwidth situations.
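As a side note, before blaming a client it can be handy to confirm the legacy machine is actually listening for remote desktop connections.  Here is a minimal sketch you can run from any machine on your network that has Python; the address is a placeholder, 3389 is RDP's default port and 5900 is the default for the VNC alternative discussed below:

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# 192.168.1.50 is a placeholder for the legacy machine's address.
print("RDP reachable:", port_open("192.168.1.50", 3389))
print("VNC reachable:", port_open("192.168.1.50", 5900))
```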

An RDP server is built into all Pro versions of Windows.  All I needed was a Chrome OS RDP client.  A quick search of the Chrome Store turned up Chrome RDP.  Chrome RDP isn't free, but it is well worth the $10 price.  For me it is fast enough over a local network that it feels like I am directly connected to my Windows computer.

There is one minor issue to be aware of if you are trying to use the Chrome RDP client to access an RDP server running on Windows XP or earlier.  By default Chrome RDP won't log into these versions of Windows.  But there is an easy fix: you just need to change a setting that disables Network Level Authentication (Options: Advanced, check "Allow Non-NLA Connections").

The only downside for some users is that an RDP server is not available on Windows XP Home.  In this case there are two alternatives.  Probably the best bet is to just upgrade to a Pro version of Windows.  For Windows XP you need to find a retail copy of Windows XP Pro (either the full or upgrade version – the OEM versions won't work).  You can commonly find copies of this and other Windows Pro versions on places like eBay if you look around.  [Yes, I am also aware that technically an RDP server is already built into most versions of Windows, including the Home editions, and that there are various hacks to enable it.  I'm sure most of these probably work fine, but since there are always possible issues with any hack I won't make any recommendations here.  Google it and proceed at your own risk if you want to explore this option.]

Be aware also that the Chrome RDP client does have a few limitations.  The big one is that it won't, at this time, pass any sound from a Windows system even though this is technically possible.  For me this isn't a problem since none of my legacy applications rely on sound.  But I recognize that this could be a significant problem for others.  I've also run into a few rare Windows applications that for some reason don't play nice with RDP (e.g. there is an odd bug in Word 97 where the "cut" command, but not the "copy" command, occasionally fails).

If you can't upgrade to Windows Pro, find Chrome RDP to be a less than ideal solution, and/or are running a legacy OS other than Windows, the other alternative is to use something similar to CRD called VNC.  The VNC protocol has been around for years and there are multiple clients and servers available for different operating systems, including a free client in the Chrome store.

I have found the open source version of RealVNC server works well on Windows XP.  There are also versions available for various Linux distributions.  My understanding is that a VNC server is also included in Mac OS X version 10.4 and later.

Overall VNC is lightweight and easy to install.  While it is still a “screen scraper”, personally I found the performance to be better than CRD and for me it is more stable and easier to use.  Also unlike CRD, VNC doesn’t require the installation of the Chrome browser on your legacy machine.  Keep in mind that in many situations you also have the option of running both Chrome RDP and VNC and can rotate between them as you see fit.

While using either an RDP or VNC client to access my legacy Windows machine works well most of the time, there are a couple of minor issues I have run across, mainly related to the keyboard layout used by Chrome devices.  Even though I might see a legacy application on my screen, the keys on my keyboard still function as they would in a Chrome OS application.

In my case some of my legacy Windows programs require the function keys (F1, F2, etc.) to run properly.  By default Chrome OS doesn't enable these keys even if you plug in a PC/Windows keyboard.  Fortunately there are two easy solutions.  The first is to just hold down the "Search" key and press one of the keys in the top row of your keyboard; for example, Search plus the fifth key in the top row acts as F5 (if you are using a Windows keyboard on a Chromebox, hold the "Windows" key and press F5 instead).  The other alternative is to change a setting in Chrome OS that forces it to treat the top row of keys as function keys.  In that case, if you want to use a Chrome key as usual (e.g. to mute sound or darken the screen), simply hold down Search and press the appropriate key to temporarily restore that function.

Sharing files between Chrome OS and a legacy OS

The next issue I faced was how to share files between Chrome OS and my legacy Windows OS.  In Windows files are typically stored on a local hard drive and shared via the SMB/CIFS protocol (aka Windows File Sharing).  Macs and most Linux systems also usually support this protocol via some implementation of Samba.

When Chrome OS was first released, the only way to store files was either on a small local drive or remotely on Google Drive.  However support for other file systems has been added to Chrome OS through the File System Provider API.  While this API doesn’t directly support any particular file system, it does allow third party developers to add support for many alternative systems.

Assuming your legacy OS machine has been set up to share files, using an SMB/CIFS plugin is easy.  All you need to do is download and install one of the two SMB/CIFS clients in the Chrome Store and run the app.  These programs will ask you for the IP address (or DNS name) of your legacy OS machine, the "share" or file folder you want to connect to, and your username and password if any.  If you are on a large Windows network you may also need to enter your Windows domain name (most people can leave this blank).
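If the Chrome OS app refuses to connect, it can help to verify the share name and credentials from another machine first.  Here is a minimal sketch using the third-party pysmb package; the address, machine names, share name and credentials are all placeholders for your own network:

```python
# pip install pysmb
from smb.SMBConnection import SMBConnection

# All of these values are placeholders for your own network.
conn = SMBConnection("myuser", "mypassword", "test-client", "LEGACY-PC",
                     use_ntlm_v2=True)

if conn.connect("192.168.1.50", 139):
    # List the top level of the shared folder to confirm access works.
    for entry in conn.listPath("SharedDocs", "/"):
        print(entry.filename)
    conn.close()
else:
    print("Could not connect -- check sharing settings and credentials")
```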

Unfortunately there is one annoying bug in the File System Provider API that for some reason Google seems to be in no hurry to fix (the bug report was first filed over 2 years ago).  The native Files app in Chrome OS can always access a remote file system, but the Chrome browser itself cannot.  What this means is that if you find a file on a website that you want to save, Chrome will only give you two choices – the local "Downloads" folder on your machine and Google Drive.  Your remote file system won't be an option.  You have to download the file locally and then open the Files app to transfer it to your remote file system (highly annoying).  Oddly enough, you are able to upload a file from your remote file system to a website in most cases (flash based upload tools seem to be the exception, but at least these are slowly going away).  So adding an attachment to Gmail thankfully works.

Feel free to visit this bug report and click the “star” in the upper left corner to encourage Google to fix this issue if you are so inclined.  You must be logged into your Google account for this to work and will be emailed any updates on the issue if they ever start working on it.

If you want, in some cases you may also be able to access your Google Drive files from your legacy system.  There are two ways this can be done.  The first, of course, is to use a web browser.  But since keeping a modern web browser running under a legacy system becomes increasingly difficult, this may be a less than ideal solution.  Using files this way is also often a two step process since you typically can't use a browser to directly open most files.  Instead you first have to download the file to a local folder and then open it using a legacy application.

The other option is to install the Google Drive Sync application.  The Google Drive Sync application will create a folder on your system called “Drive” and keep that folder synchronized with the contents of your Google Drive account.  By default the “Drive” folder will be placed in your “My Documents” folder in Windows, but you can override this during install by selecting the “Advanced” tab (note: you typically cannot change this after installation without reinstalling the program).

For the most part Google Drive Sync works well and unlike CRD it doesn’t require the Chrome browser to be installed.  Hopefully this means it will remain a relatively long term solution.  The only issue that you might want to keep in mind is that, since multiple users could potentially be syncing different Google accounts on the same machine, the program is designed to only run and sync your files when you are logged into your account.  If this is a problem you may want to consider either scheduling Google Drive Sync as a Windows task or using Microsoft’s srvany application to run it as a Windows service.

Bear in mind that my rule of “old on old, new on new” also seems to apply to data files as well as applications.  In other words if you primarily work with certain files on either your legacy or Chrome OS system you should store your files on the relevant system.  The problem of course is certain files (like images and pdfs) may be needed on both systems so it can sometimes be difficult to decide where to store them.

Although each situation will be different, in most cases I have found it is usually easier to access files on an SMB/CIFS share from a Chrome OS device than it is for a legacy system to access files on Google Drive.  As a result I usually default to storing my files on my legacy Windows system.  I definitely try to avoid placing a file in both locations since this creates synchronization problems (you are never quite sure which one is the one you need).

The downside is that this is not the best alternative if you need remote access to your files.  But there are a couple of workarounds.  The first is to buy a router with a built in VPN server and use the VPN client built into Chrome OS to access an SMB/CIFS share sitting behind the router.  The problem is that setting up a VPN server usually requires networking skills beyond those of the average Chrome OS user.

A second option is to install a web server like Apache on your legacy system (ideally enabling password protection and encryption via https) and then open a port on your router to expose the server to the internet.  For Windows based systems I recommend using XAMPP (the last version to support Windows XP is 1.8.2).  The default install of only Apache and PHP is typically sufficient.

I have found this is usually a faster and more reliable way to access your files, particularly over flaky internet connections.  But you won't be able to use the standard "open file" dialog box to access your files.  As with using a browser to access your Drive files, you will need to use the browser to first download your files to the local Downloads folder and then access them from there.

The problem with setting up a web server is again that it usually involves skills beyond the average Chrome OS user.  But for any company or organization with access to a good system administrator, this and setting up a VPN are both viable alternatives.  I have placed some instructions on setting up Apache and a few tiny but helpful php files in this zip file for those who might be interested (the php files are just small, human readable text files).  A Chrome OS text editor can be found here.
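For what it's worth, a much smaller (and much less capable) stand-in for a full Apache install is Python's built-in web server.  The sketch below serves a single folder read-only with no password protection and no HTTPS, so unlike the Apache setup above it is only suitable for quick tests or use behind a VPN.  The folder path is a placeholder, and on Windows XP itself you would need one of the last Python releases that still supported that OS:

```python
# Minimal read-only file server -- no authentication, no encryption.
# Suitable only for quick tests or use behind a VPN, not the open internet.
import os
from http.server import HTTPServer, SimpleHTTPRequestHandler

SHARE_DIR = r"C:\SharedFiles"   # placeholder: folder to expose

os.chdir(SHARE_DIR)             # SimpleHTTPRequestHandler serves the cwd
HTTPServer(("0.0.0.0", 8080), SimpleHTTPRequestHandler).serve_forever()
```

Once it is running you can browse and download the folder's contents from the Chrome browser at the legacy machine's address on port 8080.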

Finally, no discussion about file management would be complete without addressing the issue of backup.  While each situation is different, in most cases you want a backup system to accomplish three things.  The first is to get copies of your files off site (in case your house burns down and/or your online account is closed or hacked).  The second is to make additional copies over time (in case you suddenly need something you accidentally deleted last week or last month).  Finally, both of these things should happen automatically.
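To make the "copies over time" idea concrete, here is a minimal sketch of the kind of dated snapshot a backup tool creates behind the scenes.  The paths are placeholders, and a real backup service also handles retention, deduplication and getting the copies off site for you:

```python
import datetime
import shutil
from pathlib import Path

SOURCE = Path(r"C:\ImportantFiles")   # placeholder: what to protect
DEST_ROOT = Path(r"E:\Backups")       # placeholder: an external drive

# Each run lands in its own timestamped folder, so last week's copy survives
# even if a file is later deleted or corrupted at the source.
snapshot = DEST_ROOT / datetime.datetime.now().strftime("%Y-%m-%d_%H%M%S")
shutil.copytree(SOURCE, snapshot)
print("Backup written to", snapshot)
```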

Keep in mind you will likely have files both on your legacy system and in Google Drive and should always have backups of both.  This typically will require you to have two different types of backup systems.

If you want, you can set up a third computer in a remote location and use it for your backups.  But this process can be fairly complicated, more expensive than you might think and rather high maintenance (at least it has been for me).  Today I usually recommend using one of the many cloud based backup systems now available rather than rolling your own.  For a legacy system I personally like Backblaze (if you also decide to use Google Drive Sync, be sure to see the additional instructions here).  For backing up a Google account I'd recommend taking a look at something like Backupify (if you have a G Suite type account) or CloudAlly (if you have a standard Google account).

Sharing printers and scanners

Printers

Printing on Chrome OS is a bit of a challenge because the native printing system in Chrome OS is Google Cloud Print (GCP).  As a result, printing from both Chrome OS and a legacy OS means figuring out how to enable GCP while still being able to print using a legacy OS print driver.

One option is to connect a printer to your legacy box as normal, install the Chrome browser and then enable GCP (with Windows XP you also need to make sure to install the Microsoft XML Paper Specification pack).  While this works, it probably won't be a long term solution because it again relies on installing the Chrome browser, which may not be supported and/or functional over the long haul on your legacy system.

Because of this I usually recommend buying a GCP ready printer.  GCP ready printers are network printers which means they are essentially a computer with a printer built in.  As a result they aren’t dependent on a connection to a separate computer to control them or run the GCP software.  Instead network printers today are usually configured via a web browser.  You no longer need to install and use the custom configuration software that was typically needed in the past.

The only issue here is to make sure the printer you buy still has drivers available for your legacy system.  While most major manufacturers seem to be doing a pretty good job of maintaining legacy drivers, I would recommend verifying this before making a purchase.

In many cases, to enable GCP you just connect the printer to your network, open the printer's configuration pages in your web browser, find and click the GCP menu and register the printer.  Registering a printer usually just involves entering your Google username and password.

One of the problems with GCP of course is that it makes printing offline impossible.  But there is a workaround if you have a relatively modern GCP ready printer.  One nice thing about most modern GCP ready printers is that they typically also implement a newer printing protocol called IPP Everywhere (see my previous comments in this post for more background).  An app in the Web Store called “Wifi printer driver for Chromebooks” taps into this capability via Wifi Direct (which is based on IPP Everywhere) allowing you to print offline.  You can even use the printer at a friend’s house or in a hotel if they have enabled Wifi Direct on their printer.

Once this is done you need to decide how you want to connect the printer to your legacy machine.  There are a couple of possible ways of doing this, including using a USB cable.  But I find the easiest and most flexible way is to connect to it over the network using LPR (aka LPD/LPR or Line Printer Remote protocol).  Doing so allows me to place my printers in more convenient locations and makes sharing them with other users easier.

The LPR protocol has been around for decades and is built into almost every network printer ever constructed.  Almost every major legacy OS can connect to an LPR printer.  The only thing that is a little tricky about using LPR on a legacy OS like Windows XP is that the LPR software may not be installed by default.  In the case of Windows XP you need to use the Add/Remove Windows Components applet in the Windows Control Panel; in the applet it is listed under "Other Network File and Print Services".  Once the LPR software is installed, you install your printer driver via the "Add Printer" wizard in Windows as normal.  For other legacy OS's you will need to search for the relevant instructions.

Scanners

The most convenient way to add scanning capability is to use a network connected multifunction device that can print, scan, copy and/or fax.  Similar to standalone network printers, these devices usually come with all the software needed to handle most scanning, copying and faxing functions already installed in the machine itself.  Unlike in the past, you no longer need to install separate software packages on your computer in order to use them (one example).

What this means is that you can walk up to these machines and scan a document without dealing directly with your computer.  Depending on the device settings, scanned documents are immediately converted to a file (pdf, jpeg, tiff, etc) and sent to some type of local or remote storage device (network share, ftp site, website (e.g. Google Drive), flash drive, etc).  Some even have built in OCR.  Unless you have unique requirements, odds are you can find a device that will fit your needs.

My current machine allows me to scan either directly to my Google Drive account or to an FTP folder.  Having set up shortcuts with common default settings, I can usually scan documents by only hitting a few buttons.  I then just look for the file in either a Google Drive folder or a folder on my legacy machine.  In many cases this can eliminate the need for dedicated scanning drivers or software on either your Chrome OS or legacy machine.

Obviously being able to scan to a Google Drive folder is the easiest way to make scanned files available on Chrome OS.  While in my case I didn’t find this difficult to do, unfortunately this isn’t yet a common feature on most multifunction devices.  I am hoping that perhaps someday Google will create something like a “Google Cloud Scan” program to encourage manufacturers to add this capability in a consistent manner (although I’ll admit I’m not holding my breath).

The easiest way for a network scanner to send a file to a Windows machine is to have the scanner upload a file to a shared folder via SMB/CIFS.  However like Google Drive support, this is also not supported in all multifunction devices.

Instead uploading files to an FTP site seems to be a more common alternative and this of course requires installing some type of FTP server on your legacy system.  Fortunately there are several FTP servers available for Windows and most other legacy OS systems.   Most are easy to install and use.

I have found FileZilla Server works well (version 0.9.42 was the last to officially support Windows XP).  The server even comes with a nice Windows program that makes configuring it easy.  SlimFTPd is also a super small FTP server that I have found to be fast and stable (basic instructions).  Because you are typically running these servers within your own network (not exposing them to the internet), security usually isn't a major concern.
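If you would rather run something scriptable than a packaged server, here is a minimal sketch using the third-party pyftpdlib package.  The account name, password and scan folder are placeholders, and as noted above this is meant for use inside your own network only:

```python
# pip install pyftpdlib
from pyftpdlib.authorizers import DummyAuthorizer
from pyftpdlib.handlers import FTPHandler
from pyftpdlib.servers import FTPServer

# Placeholder account the scanner will log in with, limited to one folder.
authorizer = DummyAuthorizer()
authorizer.add_user("scanner", "scan-password", r"C:\Scans", perm="elradfmw")

handler = FTPHandler
handler.authorizer = authorizer

# Listen on the standard FTP port on all interfaces.
FTPServer(("0.0.0.0", 21), handler).serve_forever()
```

Point the multifunction device at the machine's address with that username and password, and scans will land in the folder you specified.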

When choosing a scanner there are a few limitations on some machines that you may want to be aware of and avoid.  First of all, I have found that some scanners don't always scan edge to edge.  This is particularly true of automatic document feeders, which sometimes cut off the top and bottom of a page (this is usually not a problem when you scan using the device's flatbed).  Depending on what you are scanning, this may or may not be a problem.

In addition image quality can vary significantly.  Text based documents usually look fine, but detailed grayscale and color photographs will quickly reveal a marginal scanner.  Unfortunately price isn’t always a reliable indicator in this regard and testing directly isn’t always an option today when most are bought online.  But if you ask around you might find someone who has the model you are interested in and is willing to post or email you a sample scan in the format and resolution you typically use.  I usually find 300 dpi grayscale or color is the minimum necessary to archive common paper documents in a readable form.  In most cases avoid the black and white setting.

Conclusion

Of course not all of the above advice is entirely unique to Chrome OS.  Much of it could also apply to almost anyone who has invested heavily in a particular operating system and now needs to move on.

With the release of Windows 10, I suspect it may not be long before those who have spent years pouring their data into Windows 7 start facing many of the same issues Windows XP users had to deal with over the past few years.  Perhaps a few of them will find some of these strategies helpful.

Updated March 5, 2017