The Internet will not continue to exist in its current form. Right now, we run our word processors, music players and games on our own local machines, and access the Internet for news, email, blog posts, forums, and many other things.
All that’s about to change.
The Internet of tomorrow is going to be a platform, just like our current computers are now. It will no longer be a network made up of many complex computer systems attached as nodes. It will itself be one complex computer system that serves web services, which we will access through so-called thin clients.
Thin clients are client machines that have little application logic of their own; instead, they depend on applications delivered to them from a remote source. In this case, that remote source will be Web 2.0.
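To make the thin-client idea concrete, here is a minimal sketch in Python (all names are hypothetical, and a real thin client would render a delivered interface rather than execute downloaded code): a "platform" server owns the application logic, and the client merely fetches whatever the platform delivers and displays the result.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# The "platform" side: a server that owns the application logic.
APP_LOGIC = "result = sum(range(1, 11))  # the 'application' computes 1+2+...+10"

class AppServer(BaseHTTPRequestHandler):
    def do_GET(self):
        # Deliver the application to whichever client asks for it.
        self.send_response(200)
        self.end_headers()
        self.wfile.write(APP_LOGIC.encode())

    def log_message(self, *args):
        pass  # keep the example's output quiet

server = HTTPServer(("127.0.0.1", 0), AppServer)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "thin client" side: no local application logic at all.
url = f"http://127.0.0.1:{server.server_address[1]}/"
delivered = urllib.request.urlopen(url).read().decode()
namespace = {}
exec(delivered, namespace)  # run whatever the platform handed us
print(namespace["result"])  # the client only displays the result
server.shutdown()
```

The point of the sketch is the division of responsibility: the client keeps nothing but the ability to connect and display, while all the logic lives on the remote platform.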
According to O’Reilly, this transformation will bring us advantages such as:
- Services, not packaged software, with cost-effective scalability
- Control over unique, hard-to-recreate data sources that get richer as more people use them
- Trusting users as co-developers
- Harnessing collective intelligence
- Leveraging the long tail through customer self-service
- Software above the level of a single device
- Lightweight user interfaces, development models, and business models
(Read O’Reilly’s article if you want a more detailed understanding.)
I have three advantages of my own to add to the list:
- Because future thin clients will be so simple, they are likely to be much smaller. We are moving towards a truly mobile world.
- You won’t store your data locally anymore, but on the new Web 2.0 platform instead, so it will always be available to you from anywhere.
- And not only your own data will be accessible. Because of plans to mass-digitize the world’s media, you’ll likely be able to stream any piece of media to your thin client whenever you like. That means only one TV channel will exist: your own.
The main idea is that Web 2.0 will be the new computing platform.
Web 1.0 was commerce. Web 2.0 is people.
How will all of this come to pass, you ask?
If it’s up to Google, here’s how:
The same goes for the rumor that Google, as a dark fiber buyer, will turn itself into some kind of super ISP. Won’t happen. It won’t happen because ISPs are lousy businesses, and building one as anything more than an experiment (as they are doing in San Francisco with wireless) would only hurt Google’s earnings.
So why buy up all that fiber, then?
The probable answer lies in one of Google’s underground parking garages in Mountain View. There, in a secret area off-limits even to regular GoogleFolk, is a shipping container. But it isn’t just any shipping container. This shipping container is a prototype data center. Google hired a pair of very bright industrial designers to figure out how to cram the greatest number of CPUs, the most storage, memory and power support into a 20- or 40-foot box. We’re talking about 5,000 Opteron processors and 3.5 petabytes of disk storage that can be dropped off overnight by a tractor-trailer rig. The idea is to plant one of these puppies anywhere Google owns access to fiber, basically turning the entire Internet into a giant processing and storage grid.
Google has big plans, and is likely to become an even larger and more influential player than it already is. Many speculate that Google will even deal some big fat blows to the current Microsoft monopoly.
As The Future Of Computers argues, the future of computing is going to be interesting.