The Web—that thin veneer of human-readable design on top of the machine babble that constitutes the Internet—is dying. And the way it’s dying has more far-reaching implications than almost anything else in technology today.
Think about your mobile phone. All those little chiclets on your screen are apps, not websites, and they work in ways that are fundamentally different from the way the Web does.
Mountains of data tell us that, in aggregate, we are spending time in apps that we once spent surfing the Web. We’re in love with apps, and they’ve taken over. On phones, 86% of our time is spent in apps, and just 14% is spent on the Web, according to mobile-analytics company Flurry.
This might seem like a trivial change. In the old days, we printed out directions from the website MapQuest that were often wrong or confusing. Today we call up Waze on our phones and are routed around traffic in real time. For those who remember the old way, this is a miracle.
Everything about apps feels like a win for users—they are faster and easier to use than what came before. But underneath all that convenience is something sinister: the end of the very openness that allowed Internet companies to grow into some of the most powerful and important companies of the 21st century.
Take that most essential of activities for e-commerce: accepting credit cards. When Amazon.com made its debut on the Web, it had to pay a few percentage points in transaction fees. But Apple takes 30% of every transaction conducted within an app sold through its app store, and “very few businesses in the world can withstand that haircut,” says Chris Dixon, a venture capitalist at Andreessen Horowitz.
App stores, which are shackled to particular operating systems and devices, are walled gardens where Apple, Google, Microsoft and Amazon get to set the rules. For a while, that meant Apple banned Bitcoin, an alternative currency that many technologists believe is the most revolutionary development on the Internet since the hyperlink. Apple regularly bans apps that offend its politics or taste, or that compete with its own software and services.
But the problem with apps runs much deeper than the ways they can be controlled by centralized gatekeepers. The Web was invented by academics whose goal was sharing information. Tim Berners-Lee was just trying to make it easy for scientists at CERN, the European laboratory that operates the world’s biggest particle accelerator, to publish the data they were putting together.
No one involved knew they were giving birth to the biggest creator and destroyer of wealth anyone had ever seen. So, unlike with app stores, there was no drive to control the early Web. Standards bodies arose—like the United Nations, but for programming languages. Companies that would have liked to wipe each other off the map were forced, by the very nature of the Web, to come together and agree on revisions to the common language for Web pages.
The result: Anyone could put up a Web page or launch a new service, and anyone could access it. Google was born in a garage. Facebook was born in Mark Zuckerberg’s dorm room.
But app stores don’t work like that. Consumer adoption is now driven by lists of the most-downloaded apps, and search within app stores is broken.
The Web is built of links, but apps don’t have a functional equivalent. Facebook and Google are trying to fix this by creating a standard called “deep linking,” but there are fundamental technical barriers to making apps behave like websites.
The Web was intended to expose information. It was so devoted to sharing above all else that it didn’t include any way to pay for things—something some of its early architects regret to this day, since it forced the Web to survive on advertising.
The Web wasn’t perfect, but it created a commons where people could exchange information and goods. It forced companies to build technology that was explicitly designed to be compatible with competitors’ technology. Microsoft’s Web browser had to faithfully render Apple’s website. If it didn’t, consumers would use another one, such as Firefox or Google’s Chrome, which has since taken over.
Today, as apps take over, the Web’s architects are abandoning it. Google’s newest experiment in email nirvana, called Inbox, is available for both Android and Apple’s iOS, but on the Web it doesn’t work in any browser except Chrome. The process of creating new Web standards has slowed to a crawl. Meanwhile, companies with app stores are devoted to making those stores better than—and entirely incompatible with—app stores built by competitors.
“In a lot of tech processes, as things decline a little bit, the way the world reacts is that it tends to accelerate that decline,” says Mr. Dixon. “If you go to any Internet startup or large company, they have large teams focused on creating very high quality native apps, and they tend to de-prioritize the mobile Web by comparison.”
Many industry watchers think this is just fine. Ben Thompson, an independent tech and mobile analyst, told me he sees the dominance of apps as the “natural state” for software.
Ruefully, I have to agree. The history of computing is companies trying to use their market power to shut out rivals, even when it’s bad for innovation and the consumer.
That doesn’t mean the Web will disappear. Facebook and Google still rely on it to furnish a stream of content that can be accessed from within their apps. But even the Web of documents and news items could go away. Facebook has announced plans to host publishers’ work within Facebook itself, leaving the Web nothing but a curiosity, a relic haunted by hobbyists.
I think the Web was a historical accident, an anomalous instance of a powerful new technology going almost directly from a publicly funded research lab to the public. It caught existing juggernauts like Microsoft flat-footed, and it led to the kind of disruption today’s most powerful tech companies would prefer to avoid.
It isn’t that today’s kings of the app world want to quash innovation, per se. It is that in the transition to a world in which services are delivered through apps, rather than the Web, we are graduating to a system that makes innovation, serendipity and experimentation that much harder for those who build things that rely on the Internet. And today, that is pretty much everyone.
—Follow Christopher Mims on Twitter @Mims; write to him at firstname.lastname@example.org.
Ever since the debut of the iPad nearly five years ago, pundits have been talking about the possibility of a post-PC professional existence. But I’m actually living it; I haven’t touched a personal computer in six months and I’m more productive than ever.
If you could peek over my shoulder at the device I’m writing this column on, you might call me a liar. By all appearances, my notebook computer, with its 13-inch screen, trackpad and keyboard, is a PC.
And yet Gartner, the most influential company charged with determining what is and is not a personal computer, has declared that my Samsung Electronics Chromebook 2 isn’t a PC. Gartner doesn’t include sales of Chromebooks in its quarterly tally of how many PCs are sold.
“We define a PC as a device which is capable for both content consumption and creation, regardless of form factor,” says Mikako Kitagawa, Gartner’s lead PC analyst.
I guess I’m not a content creator.
Chromebooks, in case you haven’t touched one—and market research indicates that you haven’t—are Google’s answer to Windows and Mac computers. Gadget reviewers who use Chromebooks only when they are paid to often describe them as more limited than a typical PC. But people who use Chromebooks regularly are more likely to observe that they can do pretty much everything that the average PC user needs.
To be fair to Gartner, many Chromebooks, including my own, have the same innards as smartphones so, at least on paper, they seem underpowered. Samsung’s Chromebook 2 has the same processor, amount of memory and even number of screen pixels as Samsung’s flagship smartphone, the Galaxy S5. The only reason the Chromebook 2 works as a PC is that Google’s Chrome operating system is incredibly lightweight—smaller and less taxing on hardware than Windows or Mac OS X.
I don’t mean to shill for Chromebooks. It’s just that Google is in the vanguard of creating PCs that function like smartphones: light, portable, always on, always connected and relying on the cloud to do their heavy lifting. It’s pretty obvious that in the not-too-distant future, Apple and Microsoft are going to free their fans from the PC in the same way.
Apple Chief Executive Tim Cook has said that he does 80% of his work on an iPad. I bet it would be 100% if the iPad possessed the characteristics that allow you to create content rather than just consume it: true multitasking and fast switching between applications, plus a bigger screen. But there’s evidence that a larger, so-called iPad Pro is coming. And I bet that Apple eventually will give us a version of its mobile operating system that makes iPads true replacements for notebooks, even those made by Apple.
Then there’s Microsoft’s Surface Pro 3, which is a full PC in tablet form, one of the many two-in-one notebooks that PC makers have been rolling out lately. From the processing power these hybrid devices pack to their snap-on keyboards, they are clearly designed to get real work done.
Yet were I a Surface Pro user, I still wouldn’t be using a PC, according to IDC, Gartner’s leading competitor in tallying how many PCs are sold each year. According to Jay Chou, IDC’s senior analyst in charge of tracking PCs, the firm doesn’t consider anything with a detachable keyboard a PC. Gartner, by contrast, does count the Surface Pro as a PC. And IDC does consider Chromebooks to be PCs, even though the Surface Pro 3 is far more powerful than most Chromebooks.
That Gartner and IDC can’t even agree on the definition of a PC speaks volumes about the strange times in which we live. Is a smartphone stretched into the shape of a laptop a PC? No, says Gartner. What about a PC crammed into the shape of a tablet? Nope, counters IDC.
These delineations are ridiculous from the perspective of the end user. That’s because most of us are entertaining ourselves and getting work done in the one place absolutely all of these devices can access—the cloud.
I store and edit all my photos in the cloud, which also is where all my media are streamed from. Unless you’re editing video, building 3-D models, playing elaborate games or dependent on legacy Windows applications that your company hasn’t moved to the cloud, you don’t strictly need a PC anymore.
At this point in history, booting up a full-fledged PC operating system to write an email is like using a nuclear sub to go on a weekend fishing trip. And a good Chromebook can be had for $300. Personally, I’d rather spend my technology budget on the one thing I truly can’t do without—my smartphone. Surveys indicate that in this respect, I’m typical of every generation to follow the baby boomers.
In short, I’m done with PCs—at least as they are conventionally defined. And I think the majority of long-suffering PC users would be too if they weren’t so accustomed to thinking of computers in the same way they have for decades. Building new technology is easy compared with changing the habits of those who use it.