The Web, version 1.0, and the forgo-profits-for-market-share companies that inflated it past the bursting point are now distant memories. Today, a newish set of internet technologies, and fresh philosophies on how to harness the interconnected nature of the network, are gaining significant attention from consumers and venture capitalists. It’s the web, version 2.0, or Web 2.0, as many have dubbed it.
Online companies and services are harnessing the network effect of the web, combined with syndication, micro-content, social networks, collective intelligence, and open standards, to change the face of interactivity both online and offline. There will be serious implications for businesses focused on information and people.
So what is Web 2.0? Will it make as big a splash on the future of business as its hype predicts? What companies are poised to take advantage of the trend? And who is most vulnerable to the upstart Web 2.0 companies and the business models of the new web world?
The Appearance of Web 2.0
It is hard to pinpoint the moment of Web 2.0’s emergence, because there was not one. There was no release of a single new technology. There was no single economic insight or business model that completely disrupted existing markets. There was no industry consortium that defined a new standard. If forced to pick a moment, perhaps Google’s IPO will serve historically as the marker, in the same way that Netscape’s IPO is often used to signify the birth of the dot-com era. At any rate, at some point over the past 12 to 18 months several factors have aligned, and there is now recognition that the companies and web services moving the internet market today share many common characteristics.
Formally, “Web 2.0” as a label was first used by Tim O’Reilly and Dale Dougherty of O’Reilly Media. The story goes that the two were brainstorming about the shared characteristics of today’s successful internet companies compared with those of the dot-com bubble companies that collapsed in 2000. Their brainstorming led to the birth of a conference, Web 2.0, held in October of 2004. The evolution of internet-focused companies and services in the 13 months since that conference has helped to reinforce, and extend, their observations.
Web 2.0 Characteristics
Ultimately, Web 2.0 is an industry buzzword. Like any buzzword, its usefulness can be debated, because buzzwords are easily misused and misunderstood. I find it best to think of “Web 2.0” as a handy moniker for several characteristics of two things: companies and software (though, ironically, “Web 2.0” itself is partly about redefining software).
Because it describes both companies and software, we can divide Web 2.0’s defining characteristics into the “conceptual” and the “technical”. The conceptual characteristics describe a company’s approach, its way of thinking about its product; the technical characteristics describe the shared programming and architecture principles of the product itself.
It should be noted that these characteristics are not a checklist. Successful companies exhibit some or all of them, and O’Reilly notes that deep adherence to one may be more advantageous than shallow adherence to all (O’Reilly, 5).
The conceptual characteristics:

- Service the Long Tail
- Users as a Source of Value: The Network Effect, Collaborative Value, Collective Intelligence
- The Web as a Platform and Software as a Service (not a package)

The technical characteristics:

- Delivered over the web (but not necessarily through a web browser)
- Rich Application Interfaces
- Syndication and Micro-Content
- Simple, Lightweight Programming Models
- Open Data/Information APIs
The Web as a Platform and Software as a Service
Today, most software is distributed as a “package” and operates on a desktop computer with limited reliance upon the web. Web 2.0 companies have recognized the web’s capacity to deliver “desktop-like” functionality without the user ever installing a piece of traditional software. The architecture of Web 2.0 applications relies upon centrally managed software, pushed to the edges of the network and delivered as a service. This thin-client architecture is not new, but the ubiquity of the web as a delivery medium, and its essentially zero cost of distribution, is making the “web as a platform” incredibly attractive for the software companies of the future.
Beyond simply leveraging the web as a distribution channel to a thin client, Web 2.0 companies are, as stated above, baking the network effect right into their applications. So “web as platform” means more than just delivering a service to a thin client; it means enabling that service to take advantage of the collaborative value of its users to create a positive feedback loop, a capability inherent in the platform. Web 2.0 applications are not simply a collection of tools handed to a user (like Word or PowerPoint, for example) to be used in isolation. They are typically tools whose users perform tasks against a central datastore, and the users’ interaction with the tool is itself a valuable source of information in that datastore. Flickr, for example, uses an algorithm that combines several user-contributed data points (the number of times a photo has been viewed or commented on, for example) to determine a photo’s “interestingness”, and makes these “interestingness” rankings available to photo browsers.
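Flickr’s actual formula is proprietary, so the sketch below is purely illustrative: the fields and weights are hypothetical. What it demonstrates is the pattern itself, user-contributed signals accumulating in a central datastore and driving a ranking that is served back to every user.

    from dataclasses import dataclass

    @dataclass
    class Photo:
        title: str
        views: int
        comments: int
        favorites: int

    def interestingness(photo: Photo) -> float:
        # Hypothetical weights: active signals (comments, favorites)
        # count for more than passive views.
        return 0.1 * photo.views + 2.0 * photo.comments + 5.0 * photo.favorites

    photos = [
        Photo("sunset", views=900, comments=2, favorites=1),
        Photo("street cat", views=300, comments=40, favorites=25),
    ]

    # Rank the shared datastore by score, most "interesting" first:
    # the kind of ranking a photo browser would be served.
    for p in sorted(photos, key=interestingness, reverse=True):
        print(p.title, round(interestingness(p), 1))

Note that every view, comment, and favorite a user contributes feeds back into the ranking all users see; that feedback loop is the point.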
Delivering software as a service also disrupts the traditional software upgrade cycle. Software as a service means continual improvement: no installs means no patching, and no technical support burden beyond supporting the end users themselves. Granted, the model presents its own operational challenges, scalability and service maintenance in particular. But the point is that a paradigm shift has been realized and, based on early returns, it presents a very formidable challenge to the incumbent models.
Service the Long Tail

The Long Tail, as a noun, was coined by Chris Anderson in a 2004 Wired magazine article. Technically, the Long Tail is a type of “power curve” (figure 1, below): a statistical distribution characterized by a dense clustering of a population that then “tails off”. Interestingly, this curve describes many distributions commonly occurring on the web.
Figure 1: The Long Tail
For example, if you were to graph iTunes song titles on the X axis against each title’s sales on the Y, you would end up with a Long Tail curve. Plot search terms against the number of Google searches for each, and you get the Long Tail. Web sites against their page views: the Long Tail. Book titles against Amazon sales: again, the Long Tail. Basically, when you plot inventory against popularity, you end up here. What does this mean?
One can use the curve to describe markets in general, with markets along the X axis and market value on the Y. Traditionally, companies like to identify one or two markets at the “head” of the tail (the tall red spike in figure 1) and play there. Historically, it has proven difficult for a company to profitably service “the long tail”: servicing many markets takes more capital and a more diverse inventory, among other things. The internet, however, has significantly improved a company’s ability to profitably service the Long Tail. And the Long Tail (the yellow portion of figure 1) collectively represents a market far larger than the cluster of markets at the tail’s head. As Joe Kraus, founder of Excite and present CEO of JotSpot, puts it: “It’s no longer about [servicing] twelve markets of millions, it’s about [servicing] a million markets of twelve”.
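To make that intuition concrete, here is a purely illustrative formalization (the exponent is hypothetical, chosen only to show the mechanism). Suppose the value of the market at popularity rank $r$ follows a power curve:

$$v(r) = \frac{c}{r^{\alpha}}, \qquad c > 0,\; \alpha > 0$$

Call the first few ranks the head and everything past some rank $k$ the tail. The tail’s combined value is $\sum_{r=k}^{n} v(r)$, and for $\alpha \le 1$ that sum grows without bound as the number of markets $n$ grows. With a large enough inventory of markets, the aggregate tail eventually dwarfs any fixed head, which is exactly the opportunity Kraus describes.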
Where are we seeing this work? The most impressive example is Google’s AdWords program. The ad-serving companies that preceded Web 2.0 were never able to extract value from the long tail. Companies like DoubleClick based their business model on creating a network of the web’s largest and most popular destinations (the head of the tail). They wooed those sites into their “advertising network”, then sold advertisement space to advertisers looking for exposure on those major destinations. DoubleClick had a network of hundreds of the largest web sites. Compare that with Google AdWords. Google recognized that while there were only a handful of very popular web sites, there were millions of web sites populating the long tail. It created a simple, and very importantly, free way for sites occupying the long tail (those with low page views, serving narrow content markets) to carry advertising. Any site purveyor can, through the AdSense side of Google’s ad program, place advertisements on his site and generate revenue from them, for free. Google handles all of the ad serving, and serves up advertisements that are contextually relevant to the site on which they appear. As a result, Google can offer advertisers space on web sites covering virtually any topic. Google’s ad revenue is projected to top $9 billion in 2006.
Users as a Source of Value: The Network Effect, Collaborative Value, Collective Intelligence

The network effect is a simple idea: some things increase dramatically in utility as more of them join a network. Fax machines are the perfect example. A single fax machine is a useless item, but a network of millions of fax machines has tremendous value.
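The classic way to quantify this is Metcalfe’s Law: a network of $n$ compatible machines supports

$$\binom{n}{2} = \frac{n(n-1)}{2}$$

distinct pairwise connections, so a network’s potential value grows roughly with $n^2$ while its cost grows roughly with $n$. One fax machine supports zero connections; a million fax machines support nearly half a trillion.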
Web 2.0 companies have recognized that user interaction, in and of itself, represents value to their services. Your service’s users are a network, and you can leverage that network to significantly strengthen your service. In fact, for some of the most successful companies on the web, the network essentially is the service (think eBay). You can harness the network’s collective intelligence (Amazon.com book reviews), you can harness the network’s behavior as data (inbound links as a determining factor of relevancy in Google search results), and you can harness the network to perform work (Wikipedia, MoveOn.org’s Bush in 30 Seconds campaign).
Successful Web 2.0 companies and applications are baking the power of the network right into their offerings.
Let’s use another comparative example to illustrate the point more clearly. Snapfish.com is a photo sharing and printing service that lets subscribers upload and store their digital photos, share them with selected individuals, and order prints. Flickr.com offers the same core services. By default, however, photos on Flickr.com are shared with the world (the network) unless restricted; on Snapfish, the opposite is true, and a photo can only be seen by those the owner invites to see it. Further, Flickr.com allows users to describe each uploaded photo with “tags”, and allows all users to browse and discover photos by tag, not just view photos they have been “invited” to see by friends. These seemingly simple distinctions make a large difference. Flickr is leveraging its network of users directly within the application itself. As a result, it is the fastest growing photo-sharing service on the web, with roughly 775,000 users growing 30% monthly and a database of roughly 18.5 million photos, of which 80% are “tagged” and publicly available. Flickr.com was bought by Yahoo in March of this year. Snapfish, despite having been around for years before Flickr, has seen much slower growth; it was recently acquired by HP.
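A minimal sketch of the data structure behind that kind of tag-driven discovery (an illustration of the pattern, not Flickr’s implementation): each public upload enriches a shared index that any visitor can browse.

    from collections import defaultdict

    # Toy inverted index from tag -> photo ids: the structure that lets any
    # visitor browse public photos by tag rather than by invitation.
    tag_index = defaultdict(set)

    def publish(photo_id, tags):
        """Record a public photo's tags; every upload enriches the shared index."""
        for tag in tags:
            tag_index[tag.lower()].add(photo_id)

    publish("p1", ["Sunset", "beach"])
    publish("p2", ["beach", "surf"])
    publish("p3", ["city", "night"])

    # Discovery: any user, not just invited friends, can pull photos by tag.
    print(sorted(tag_index["beach"]))   # -> ['p1', 'p2']

The design choice that matters is the default: because photos and tags are public unless restricted, every user’s small labeling effort compounds into a discovery feature for the whole network.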
Technical Characteristics

What are the technologies frequently used to support the conceptual characteristics that make a company or a piece of software “Web 2.0”? The interesting thing about Web 2.0 is that, technologically, very little is new: most of these technologies have been around since at least 1998. They include:
- AJAX (Asynchronous JavaScript + XML) for rich application interfaces

AJAX, a term coined by Jesse James Garrett, combines long-available browser technologies so that a page can exchange small amounts of data with the server asynchronously, behind the scenes, instead of reloading the whole page. The result is a web page that behaves much like a desktop application.

Figure 2: AJAX vs Classic Web Application Architecture
- REST- and SOAP-driven open data/information APIs
Web 2.0 applications are giving programmers access to their information datastores and basic functions through simple application programming interfaces (APIs) based on service-oriented architecture principles. A program’s content and functions can be exposed to other programs running on the web by “wrapping” them with these APIs.
There are two common API standards: SOAP (Simple Object Access Protocol) and REST (Representational State Transfer). SOAP has a more formal set of standards for handling many programming interface requirements and is often used for connecting larger business systems. REST is essentially just making XML-structured content available via documented HTTP requests. Amazon.com, for example, uses the SOAP architecture to integrate with partners such as Toys R Us, but makes all of its product catalogue, user reviews, and customer wish list data available via simple XML over HTTP, enabling any site to become a book reseller by embedding Amazon content directly. Amazon reports that roughly 80% of its API use is REST-based, not SOAP-based.
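In code, the REST style really is that simple: parameters ride in a plain HTTP GET, and XML comes back. The sketch below shows the pattern only; the host, path, and parameter names are hypothetical stand-ins, not Amazon’s actual API.

    from urllib.parse import urlencode
    from urllib.request import urlopen
    import xml.etree.ElementTree as ET

    # Hypothetical REST endpoint: the request is an ordinary HTTP GET.
    params = urlencode({"operation": "ItemSearch", "keywords": "web 2.0"})
    url = "https://api.example.com/catalog?" + params

    with urlopen(url) as response:       # plain HTTP request, no SOAP envelope
        tree = ET.parse(response)

    for item in tree.iter("item"):       # walk the XML the service returned
        print(item.findtext("title"), item.findtext("price"))

Contrast this with SOAP, where the same call would be wrapped in an XML envelope with a formal service contract; the low ceremony of REST is a large part of why it dominates actual API usage.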
- RSS and other lightweight “micro-content” formats for information syndication
The example above, of Amazon making its products available for integration by third parties, is a perfect example of “micro-content” and syndication. Amazon has broken its content into small, well-structured chunks and, using the open API standards mentioned above, enables the simple syndication of that content to virtually any third party.
This openness of information sharing is viewed as extremely dangerous by many content organizations because it strips the content owner of control over the information’s presentation. Still, organizations like Amazon, Google, Flickr, Yahoo, and even the BBC are moving quickly to make much of their information available in small, easily distributable chunks.
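RSS itself is exactly such a chunk format. The sketch below parses a minimal, hypothetical RSS 2.0 document using nothing but the Python standard library, which is much of the point: the format is light enough that any third party can consume it in a few lines.

    import xml.etree.ElementTree as ET

    # A minimal, hypothetical RSS 2.0 feed: content broken into small,
    # well-structured "micro-content" items any third party can consume.
    FEED = """\
    <rss version="2.0">
      <channel>
        <title>Example Headlines</title>
        <item><title>First story</title><link>http://example.com/1</link></item>
        <item><title>Second story</title><link>http://example.com/2</link></item>
      </channel>
    </rss>"""

    channel = ET.fromstring(FEED).find("channel")
    for item in channel.iter("item"):
        # Each <item> is an independently addressable chunk of syndicated content.
        print(item.findtext("title"), "->", item.findtext("link"))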
- Creating “mashups” or “remixes” that combine third-party data and information in innovative ways using the technologies described above
When you combine lightweight programming models, freely available third-party content, and the new ability to make a web page behave almost identically to a desktop application, interesting things start to happen. The result is a surge of applications that take data and services available through open APIs and “mash” them together, creating amazingly innovative uses of information that its original distributors never envisioned.
For example, housingmaps.com provides a map-based interface for finding homes for sale or rent in 37 cities in the United States. But HousingMaps.com doesn’t own any of the listing data, and it doesn’t own the GIS mapping tools responsible for its interactive interface. It pulls listing data from craigslist through craigslist’s open API and “mashes” it up with Google’s mapping API to create the application.
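The pattern reduces to a join between two services you don’t own. In the sketch below, both fetch functions are hypothetical stand-ins for the third-party APIs (listing data and geocoding); neither reflects a real service’s actual interface. The mashup’s only original contribution is the combination.

    # Both functions below are hypothetical stand-ins for third-party APIs
    # (a listings service and a geocoding service).

    def fetch_listings(city):
        """Stand-in for a call to an open listings API (craigslist-style data)."""
        return [{"address": "123 Main St", "price": 1850},
                {"address": "9 Oak Ave", "price": 2400}]

    def geocode(address):
        """Stand-in for a call to a mapping/geocoding API (Google Maps-style)."""
        return (37.77, -122.41)

    # The "mash": neither dataset is ours, but the combination is new.
    for listing in fetch_listings("sf"):
        lat, lon = geocode(listing["address"])
        print(f"marker at ({lat}, {lon}): {listing['address']} ${listing['price']}/mo")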
Who Will These New Concepts and Technologies Affect?
Companies at Risk
Who is at risk of having their existing business threatened by Web 2.0 companies? They share these characteristics:
- Organizations that rely upon the Web 1.0 model for controlling access to their information.
This includes the traditional print media, online news media, and the recording industry. The nature of the internet makes information openness a competitive advantage, and it is increasingly difficult for content organizations to dictate the channels through which their information is consumed. Businesses whose models require that consumers “come through the front door” to consume their information will end up as losers in the Web 2.0 economy.
In contrast, companies with simple, open APIs to their content will see their information distributed through innovative mashups and other third-party channels, offering them opportunities to explore new revenue streams. Consumers, of course, will not decide to abandon a service based on API availability, but that will be the net effect: they will switch their attention to services that let them consume information on their own terms.
- Organizations that do not recognize their user base as a valuable network will be left behind by Web 2.0 companies that do.
Users of today’s Web 2.0 companies are essentially co-developers of the products they use. Web 2.0 products have a built-in positive feedback loop powered by the network effect. Purveyors of traditional stand-alone software that cannot take advantage of the network effect will be at a competitive disadvantage in the years to come.
Even traditional software markets like ERP and CRM, which would appear insulated from the Web 2.0 threat by their complexity and heavy per-instance configuration and customization, are not safe over the long haul. Companies like Siebel, if they do not move to embrace the Web 2.0 concepts, will continue to lose market share to innovators like SalesForce.com that have fully embraced the web-as-a-platform and software-as-a-service principles. The SalesForces of the world will have the advantage of profitably servicing the Long Tail as an ever-growing revenue stream while chipping away at the enterprise market dominated by the likes of Siebel (in CRM).
Companies Poised to Capitalize
What organizations are poised to capitalize on the Web 2.0 concepts and technologies? They share these characteristics:
- Existing participatory user base –
Yahoo, Microsoft (MSN), AOL, Google, eBay. Companies with a large number of existing registered users have a built-in advantage in leveraging the network effect: they can jump-start their Web 2.0 services by tapping into an existing set of users, reaching critical mass more quickly.
- Mash Up Innovators and Mash Up Content Providers –
innovators: Trulia.com, Rollyo.com, GroupHop, diggdot.us, Flock, etc.; providers: BBC, Flickr (Yahoo!), Google, Del.icio.us, directory services, traditional media (if willing to relinquish some control). There will undoubtedly be a shake-out in this space, but those who add significant value for consumers by integrating third-party content in innovative ways will become Web 2.0 winners. Interestingly, the winners in this space tend to become acquisition targets for the companies in the first bullet above, which would rather buy that capability than build their own.
- Desktop Application Replacements and Software-as-a-Service Leaders –
Zimbra.com, SalesForce.com, Writely, JotSpot, SocialText, etc. Organizations already delivering their software as a service over the web are blazing the early-adopter trail. Large companies with traditional installed-software business models should be watching closely; Microsoft, Intuit (Quicken), and Adobe are good examples. They can either learn quickly from the trailblazers or lose out to them, depending upon how they respond to the disruption those trailblazers will pose to their market share.
- Network Effect Leveragers –
NetFlix, Technorati, Del.icio.us, Digg.com, Wikipedia, SugarCRM, Salesforce.com, ESPN.com (fantasy sports). These companies already enjoy the exponential growth in usage generated by the network effect embedded in their services.
Does this Change Everything or Nothing?

There is still much debate as to whether Web 2.0 is more marketing hype than substance. Web 2.0 is being billed as a revolution, a major shift in how web companies and web applications are built. However, the technology to build “Web 2.0” applications has been available since 1998, and some of the biggest Web 2.0 players are survivors of the dot-com bubble. So which is it? Revolution or business as usual? Hype or substance?
It’s obviously still early, but from what I can see, the Web 2.0 concepts and approaches are the closest we have come to aligning business models and tools directly with the core strengths of the internet. The dot-com boom was filled with companies looking to move offline models onto the online world, trying to grab market share with huge traditional mass-marketing campaigns. The Web 2.0 players are more natively in tune with the web’s capabilities and benefits, and they are being built to take advantage of them.
However, as in the first dot-com boom, the business models for turning usage into revenue are still young and largely unproven. Which business models will most effectively exploit these concepts in the future? Try-before-you-buy marketing and monthly subscription services are starting to prove their capacity to profitably service the long tail, but with the exception of ad networks like Google’s AdWords and Yahoo!’s Overture, no one has figured out a way to scale these models into the billions. Over the next three years we will see a glut of companies enter the marketplace using the principles outlined above to attempt to answer that very question.
References

1. Chris Anderson, “The Long Tail”, Wired Magazine, October 2004, http://www.wired.com/wired/archive/12.10/tail.html
2. Joe Kraus, Long Tail presentation, http://bnoopy.typepad.com/bnoopy/2005/03/the_long_tail_o.html
3. MoveOn.org, “Bush in 30 Seconds” campaign, http://www.moveon.org/bushin30seconds/
4. Jesse James Garrett, “Ajax: A New Approach to Web Applications”, Adaptive Path, http://www.adaptivepath.com/publications/essays/archives/000385.php