So what is this thing called ‘Web 2.0’? Glad you asked. It is an evolution in the way we experience the web, a tidal wave taking the web by storm. It has been discussed in industry circles for a few years, but I believe its fullness is only beginning to show. Web 2.0 is a concept that came into being from a brainstorming session between O’Reilly and MediaLive International; it goes back to about 2004, when the first Web 2.0 conference was held.
So what does it mean? According to a paper by Tim O’Reilly (read it here or stream the audio here), Web 2.0 is characterized by a number of principles that we will get into shortly. One way to learn what something IS is to first make clear what it IS NOT. A simple way of seeing what Web 2.0 is NOT is to look at its ‘predecessor’, ‘Web 1.0’; this is what O’Reilly proposes:
Web 1.0 –> Web 2.0
DoubleClick –> Google AdSense
Ofoto –> Flickr
Akamai –> BitTorrent
mp3.com –> Napster
Britannica Online –> Wikipedia
personal websites –> blogging
evite –> upcoming.org and EVDB
domain name speculation –> search engine optimization
page views –> cost per click
screen scraping –> web services
publishing –> participation
content management systems –> wikis
directories (taxonomy) –> tagging (“folksonomy”)
stickiness –> syndication
Though the list is not exhaustive, it does show a significant shift: what predominantly characterizes the web today is noticeably different from what was familiar before.
According to Tim O’Reilly’s paper, the following are key distinguishing ‘principles’ that are emergent in Web 2.0:
1. ‘A Platform Beats an Application Every Time’:
Here Tim makes use of some examples and the ‘Web 2.0 lessons’ that are evident in them:
Netscape vs. Google: The value of the software is proportional to the scale and dynamism of the data it helps to manage.
DoubleClick vs. Overture and AdSense: Leverage customer-self service and algorithmic data management to reach out to the entire web, to the edges and not just the center, to the long tail and not just the head.
2. Harnessing Collective Intelligence:
Web 2.0 shows an incredible enhancement in the leveraging of collective intelligence: the collection, distribution, and sharing of information, as well as finding it and making sense of it. Here, Tim mentions the role played by Wikipedia in collective content creation and editing; del.icio.us, Flickr, and the concept of folksonomy (a style of collaborative categorization of sites using freely chosen keywords, often referred to as tags); and especially blogging, RSS, permalinks, and sites such as Bloglines that aggregate RSS content.
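To make the folksonomy idea concrete, here is a minimal sketch (the data and function names are hypothetical, not taken from any real service) of how del.icio.us-style tagging works: each user attaches freely chosen keywords to a URL, and the community’s categorization emerges bottom-up from the aggregate counts rather than from a predefined taxonomy.

```python
from collections import defaultdict

# url -> tag -> number of users who applied that tag
bookmarks = defaultdict(lambda: defaultdict(int))

def tag(user, url, tags):
    """Record one user's freely chosen tags for a URL (folksonomy style)."""
    for t in tags:
        bookmarks[url][t.lower()] += 1  # normalize case; no fixed vocabulary

tag("alice", "http://example.com/photo1", ["sunset", "beach"])
tag("bob",   "http://example.com/photo1", ["Sunset", "vacation"])

# The most popular tags for a URL surface automatically from aggregate use.
popular = sorted(bookmarks["http://example.com/photo1"].items(),
                 key=lambda kv: -kv[1])
print(popular)  # [('sunset', 2), ('beach', 1), ('vacation', 1)]
```

The point of the sketch is that no editor ever defined ‘sunset’ as a category; it became the top tag simply because two users independently chose it.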
3. Data is the Next Intel Inside:
The race is on to own certain classes of core data: location, identity, calendaring of public events, product identifiers and namespaces. In many cases, where there is significant cost to create the data, there may be an opportunity for an Intel Inside style play, with a single source for the data. In others, the winner will be the company that first reaches critical mass via user aggregation, and turns that aggregated data into a system service.
4. End of the Software Release Cycle:
Here, Tim O’Reilly takes notice of some key aspects that Web 2.0 companies have to embrace in their business/software development models. He claims:
Operations must become a core competency. Google’s or Yahoo!’s expertise in product development must be matched by an expertise in daily operations. So fundamental is the shift from software as artifact to software as service that the software will cease to perform unless it is maintained on a daily basis.
Users must be treated as co-developers, in a reflection of open source development practices (even if the software in question is unlikely to be released under an open source license.) The open source dictum, “release early and release often” in fact has morphed into an even more radical position, “the perpetual beta,” in which the product is developed in the open, with new features slipstreamed in on a monthly, weekly, or even daily basis. It’s no accident that services such as Gmail, Google Maps, Flickr, del.icio.us, and the like may be expected to bear a “Beta” logo for years at a time… Real time monitoring of user behavior to see just which new features are used, and how they are used, thus becomes another required core competency.
5. Lightweight Programming Models:
Simplicity is the name of the new game!
A case in point being RSS and REST (Representational State Transfer)! Tim O’Reilly clearly notes the following key aspects of the Web 2.0 era in this regard:
Support lightweight programming models that allow for loosely coupled systems… The Web 2.0 mindset is very different from the traditional IT mindset!
Think syndication, not coordination. Simple web services, like RSS and REST-based web services, are about syndicating data outwards, not controlling what happens when it gets to the other end of the connection; this is the end-to-end principle at work.
Design for “hackability” and remixability.
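The syndication idea above can be illustrated with RSS itself. Below is a hand-written minimal RSS 2.0 feed (the feed content is invented for illustration) consumed with nothing but the Python standard library; the publisher and consumer never coordinate beyond agreeing on the simple, well-known format, which is exactly what makes the model so loosely coupled and remixable.

```python
import xml.etree.ElementTree as ET

# A minimal RSS 2.0 feed, as a publisher might syndicate it outwards.
FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <link>http://example.com/</link>
    <item>
      <title>Hello Web 2.0</title>
      <link>http://example.com/hello</link>
    </item>
  </channel>
</rss>"""

# Any consumer can parse it with a generic XML parser; no vendor SDK,
# no negotiation with the publisher about what happens to the data next.
root = ET.fromstring(FEED)
items = [(i.findtext("title"), i.findtext("link"))
         for i in root.iter("item")]
print(items)  # [('Hello Web 2.0', 'http://example.com/hello')]
```

An aggregator like Bloglines is, at heart, just this loop run over thousands of feeds.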
6. Software Above the Level of a Single Device:
According to Dave Stutz, “Useful software written above the level of the single device will command high margins for a long time to come.”
7. Rich User Experiences:
Browser-based applications such as Gmail and Google Maps show that web software can now deliver the interactivity and responsiveness we once expected only from desktop applications.
So, really, Web 2.0 is a paradigm shift in the way we look at the web, the way we get information from the web, the way we find information on the web, the way we develop the web, the way we build business models around the web!