By James Kwak
This week, Apple passed Microsoft to become the most valuable technology company in the world (measured by the market value of its stock).* I’ve been wondering about Apple and, in particular, why “apps” — which at first glance struck me as a giant step backward in computing technology — have gotten so much buzz in the media. Then I bought an iPad, and while I understand apps a little better, I’m still perplexed. But since this isn’t a particularly technology-savvy audience, this is going to take some setting up. The background is here in Part 1; Part 2 will be coming shortly.
(Note that here I’m talking about personal computing, which is what people like you and I do on our own; enterprise computing is something very different that I’ve written about before, and still largely takes place on mainframe computers.)
A Little Background
Rather than recap the entire history of computing (hilarious synopsis here, hat tip Brad DeLong), I’ll start in the early 1990s. At this point, many people had personal computers, but for the most part they weren’t connected to anything except maybe a printer. (Actually, in the early 1980s my father brought home one of those primitive acoustic-coupler modems where you actually placed your phone receiver into a socket to communicate, so we could log into the mainframe at his university, but that was the exception.)
A personal computer has an operating system (Windows, OS X, Linux, etc.). This isn’t quite correct, but you can think of the OS as the software that manages the physical parts of a computer: it runs the internal parts, like the CPU and the hard disk drive, and it controls the interface to the parts that you interact with, like the keyboard and the screen. There are also applications that run on a computer (Excel, Photoshop, Half-Life, etc.). These applications don’t directly manage the physical parts of the computer; instead, they talk to the operating system, which in turn talks to the physical parts. They do this via the application programming interface, or API, that is published (made accessible) by the operating system.
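To make that layering concrete, here is a minimal Python sketch (the filename and message are my own illustration): the application never touches the disk hardware itself; it asks the operating system to do the work through published API calls.

```python
import os

# An application never drives the disk hardware directly. It asks the
# operating system to do the work through API calls -- here, POSIX-style
# system calls wrapped by Python's os module.
fd = os.open("example.txt", os.O_CREAT | os.O_WRONLY | os.O_TRUNC)
os.write(fd, b"hello from an application\n")  # the OS moves the bytes to disk
os.close(fd)

# Reading the file back also goes through the OS, never the hardware.
with open("example.txt") as f:
    print(f.read())
```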
For our purposes, there are two important features of this structure. First, each operating system has a different API, so you have to write programs differently for each OS. That doesn’t mean every line of code has to be different, but the way you call lower-level functions will differ across operating systems. On top of this, each OS developer (Microsoft, Apple, etc.) provides a different set of tools that you use to write programs for its OS. Software developers tend to become better at using one set of tools than another, and hence more likely to write programs for one OS than another.
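A small Python sketch of that first point (the branch below is illustrative; real cross-OS differences run much deeper than this): even a trivial task like finding the user's home directory is spelled differently on Windows and on Unix-like systems, so portable code has to branch, or rely on a library that branches for it.

```python
import os
import sys

# The same task -- locating the current user's home directory -- is exposed
# differently by different operating systems, so portable code must branch.
if sys.platform.startswith("win"):
    # Windows publishes it in the USERPROFILE environment variable
    home = os.environ.get("USERPROFILE")
else:
    # Unix-like systems (macOS, Linux) publish it in HOME
    home = os.environ.get("HOME")

print(home)
```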
Second, programs that can access the operating system’s API can do a lot of different things to your computer — this is what makes software powerful. At the same time, that means they can do damage to you.
So in the early to mid-1990s, we had self-contained personal computers (Windows or Mac) that ran programs that were written specifically for the operating systems they ran on. (A given program, like Excel, might exist in both Windows and Mac versions, but those were two completely different pieces of software that just looked the same on the outside.) Microsoft dominated this world for a couple of reasons, most importantly that many more programs were being written for Windows than for Mac. I believe this is partly because it was easier to write programs for Windows (Microsoft did a better job providing tools for developers), and partly because the Windows installed base was a lot bigger than the Mac installed base, so a new Windows application had a lot more potential buyers. The Windows installed base was bigger, in turn, because of Microsoft’s business model: it licensed Windows to any hardware manufacturer who wanted it, and therefore you had more diversity, more innovation, and lower price points for Windows PCs than for Macs. There were other factors as well, but those are the basics.
Then Tim Berners-Lee gave us the World Wide Web, and Marc Andreessen gave us the browser, and everything changed.
Ever since the mid-1990s, the Internet has played a bigger and bigger role in our daily computing. And so the most important application of all became the Internet browser (Internet Explorer, Netscape, Firefox, Safari, Chrome). This is an application that has the ability to find, display, and interact with resources on the Internet. Like all applications, it talks to the operating system via its API. But it’s special in a few respects.
- One is simply that many people spend more time in their browsers than in all their other applications put together.
- Another is that the Internet is largely built around a few basic standards, like HTML (a language that web pages are written in). All browsers have to be able to interpret those standards. So if you build web pages using those standards, you know that all browsers will be able to access them; you don’t have to worry about what operating system your visitor’s computer is running.**
- A third is that the browser can be designed in such a way as to minimize risk to the computer it is running on. Ordinarily, code running inside a browser cannot modify data on your filesystem. This is for security reasons; the goal is to prevent web sites from automatically launching attacks on your computer. Of course, web sites are constantly asking if you want to save files to your computer, and then you’re on your own. And there are technologies that can be added to a browser, like ActiveX, that give programs on web sites the ability to get at your hard drive. But in principle, it is harder for a program that lives on a web site and runs inside a browser to do damage than for a program that you install on your computer and that has direct access to the operating system via the API.
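The sandbox principle in that third point can be sketched in a few lines of Python (a toy model of my own, not how any real browser is implemented): untrusted code gets a short whitelist of vetted operations instead of the full OS API, so anything touching the filesystem is simply refused.

```python
# A toy sandbox: untrusted code may only call a small, vetted set of
# operations; everything else -- including anything that touches the
# filesystem -- is refused.
SAFE_OPS = {
    "render": lambda text: f"[page shows: {text}]",
    "fetch": lambda url: f"[downloaded {url}]",
}

def run_untrusted(op, *args):
    if op not in SAFE_OPS:
        raise PermissionError(f"operation {op!r} is not allowed in the sandbox")
    return SAFE_OPS[op](*args)

print(run_untrusted("render", "hello"))  # → [page shows: hello]

try:
    run_untrusted("delete_file", "/etc/passwd")
except PermissionError as err:
    print(err)  # the sandbox refuses the filesystem operation
```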
The result was the golden age of web-based computing. Around a decade ago, during the Internet boom, the idea became popular in the technology community that all computing would move “to the Web.” That is, instead of installing standalone applications that ran directly on our computers and accessed the operating system’s API, the interesting software would live on web sites on the Internet, would conform to Internet standards, and would therefore run properly in any browser. This was supposed to have several benefits:
- Computing would be safer, since our computers would be protected by our browsers.***
- People wouldn’t have to worry about installing and updating software — just about keeping track of their bookmarks.
- Programs would be easier to learn and use for ordinary people, since browsers offer a consistent and intuitive way of interacting with programs.
- We wouldn’t have to worry about carrying our data around, backing it up, and syncing it between computers, because it would all be on the Internet.
- Developers would only have to write each program once, because then it would automatically work on all browsers (assuming everyone conformed to standards) and hence on all operating systems.
- As a corollary, the Age of Microsoft would come to an end, since one pillar of its dominance — the huge community of developers writing for Windows — would now be irrelevant.
To some degree, this has happened. I’m writing this post using Firefox at WordPress.com. The computers in my house have three different operating systems and I use three different browsers (Firefox, Safari, and Chrome), which I keep synchronized using XMarks. I spend the vast majority of my computer time in a browser, and not just for consuming information; besides the blog (WordPress), my email, tasks, calendar, and contacts all belong to Google, I try to do most of my lightweight work in Google Documents, I share photos using Flickr, etc. Much of the modern, interactive computing that people do (like Facebook) is done in a browser.
This is, roughly speaking, what Google is all about: a world where the OS and the browser don’t matter because they are just tools to get us onto the Internet, where we keep our data and do all our work. It’s why Google is writing two operating systems, Android and Chrome OS, that will both be free, and is developing a suite of Web-based “productivity” applications; they want to cripple Microsoft’s business model by giving away their versions of the two things that make Microsoft so profitable: Windows and Office.
Microsoft is still a big, profitable company, because PCs will be around for a long time, most companies use Windows, Office, and other Microsoft products for networking, email, etc., and those products can be very sticky, especially in a corporate environment. But the world is moving away from the 1990s model. Microsoft recognizes this, of course. This is why they fought so hard to crush Netscape in the 1990s — they wanted control of the browser. And it’s why they’ve spent so much money — Hotmail, MSN, .NET, Windows Live, Bing — trying to establish a presence on the Internet. But they just haven’t been very good at it.
So at a high level, this is the story of personal computing over the past fifteen years. But recently there has been a new plot twist, which will be the subject of Part 2.
* Great quote by Steve Ballmer in the New York Times story: “Windows phone – boom! We have to deliver devices with our partners this Christmas.” Does he realize that he talks like Ari Gold on Entourage?
** This can be thought of as a kind of isolation layer. With Windows, software developers don’t need to worry about whether the customer has a Dell, HP, or Acer computer; as long as it has Windows, it will behave in a predictable way. With Internet standards, now you don’t need to worry about what OS the customer has, just what browser she has.
*** Yes, browsers have security flaws, so this isn’t a perfect system.