      Web 2.0 is cheap: supply exceeds demand

Research article published in Prometheus (Pluto Journals)

            Abstract

            The aim of this paper is to evaluate, from an economic perspective, the efficiency of Web 2.0. It demonstrates that, because of the non‐monetary nature of Web 2.0, several sources of inefficiencies (search costs, externalities, crowding out and adverse selection) exist. Nonetheless, the economic nature of digital products and the expected low value of most online content make it impossible to adopt a simple market scheme for Web 2.0. In contrast, this paper introduces a concept of demand‐driven Web 2.0 (as opposed to the current Web 2.0, which is supply‐driven) that is expected to provide stronger incentives, through financial reward, for high quality content within a Web 2.0 environment.

            Main article text

            Introduction

It is undeniable that Web 2.0 is a success and that it has considerably changed the Internet and the way online products and services are created. From the highly centralised and, in fact, not very interactive paradigm that ruled the initial version of the web, there has been a switch towards a highly decentralised and fully interactive model of the Internet.

The massive adoption of Web 2.0 technologies, as well as the tremendous amount of material produced using these technologies, are testament to the success of Web 2.0 and to its superiority over the traditional way of building the web that existed beforehand. Nonetheless, although Web 2.0 has undoubtedly enabled further growth of the web and bettered Web 1.0 in many ways, it has also left some existing problems unsolved. It is fair to say that some problems have, in fact, been aggravated by the advent of Web 2.0.

One problem, so far unaddressed, is that of funding the production of material and content. While this problem existed before the advent of Web 2.0, it has since been amplified. From the early days of the web, it became evident that the structure of the Internet and the economic nature of the goods and services present on the web made it difficult to price and charge for goods and services as before. The profitability of entire parts of the economy has been wiped out (encyclopaedias, news reports, directories) (Shapiro and Varian, 1999). Although many marketing strategies have been elaborated and tested, it is widely accepted that the most successful strategy is often to give free access to web material and to finance the production of digital goods and services through advertising.

Before Web 2.0, the recipient of the advertising money was, most of the time, the producer of the good or service. However, Web 2.0 has singularly affected this, in the sense that the recipients are now, very often, only intermediaries. Producers do not, usually, benefit financially from their production. Although this might, at first, seem a non‐issue, in the light of the massive amount of material provided within Web 2.0 it raises important economic and social questions. Furthermore, Web 2.0 is in its early days and there has been growing concern that, unless alternatives to the free‐for‐ads business model are found, Web 2.0 might not, in the long run, be sustainable (Economist, 2009).

The economic rationale of the free market is that the right incentives are provided to all participants, through the price mechanism, for a (privately and socially) efficient outcome to prevail (Smith, 1904; Arrow and Debreu, 1954). The original version of the web was a serious departure from this, since incentives were provided to some participants (in this case, to producers only) not through prices, but through advertising income instead. Since the relationship between advertising and value is, to say the least, rather loose, market distortions and inefficiencies were likely to arise. Web 2.0 goes one step further, in the sense that most participants do not receive any incentive from the market, but instead are driven by personal motivations (such as ego, reputation, altruism and self‐gratification) that have no a priori reason to correspond to the social value of their production.

In many respects, Web 2.0 may look like a horn of plenty: a (virtually) unlimited amount of content available for nothing. This should not hide the fact that, from a social and economic standpoint, more is not always merrier. It may be that the current version of Web 2.0 is inefficient or even wasteful. Furthermore, today's success of Web 2.0 and the current abundance of material should not obscure the fact that sustainable success certainly requires finding new business models which provide the right incentives.

While some of the shortcomings of Web 2.0 have been identified and discussed in the literature, designing business models that address all the sources of inefficiency in the way Web 2.0 operates nowadays, as well as their consequences, remains a challenge. After considering the challenges raised by the economic characteristics of the goods and services exchanged on the web, alterations to the Web 2.0 paradigm will be envisaged to reconcile the strengths of Web 2.0 with economic and social efficiency.

            Identifying Web 2.0 inefficiencies

The term Web 2.0 was coined some time during 2004 (presumably for the first O'Reilly Media Web 2.0 conference, held that year). Although there are many definitions of what exactly Web 2.0 is, it usually refers to blogs, social networks, content communities, forums/bulletin boards and content aggregators (Constantinides and Fountain, 2008). Despite this short history, Web 2.0 applications have become a routine part of everyday life.

            While individual users have always provided online content by creating websites and participating in forums, new technologies progressively enabled a higher level of user participation. The online Web 2.0 encyclopaedia, Wikipedia, provides a perfect example of how user participation has developed over the years. In 2003, only two years after its birth, there were already more than 100,000 articles in Wikipedia, all written by users. By 2008, there were more than 7 million and Wikipedia.org is currently the ninth most popular site on the web.1 Similarly, there were only 23 known weblogs at the beginning of 1999 (Blood, 2000), while in 2007, about 120,000 new weblogs were created each day (3000–7000 of them being splogs) (Sifry, 2007). Another example of Web 2.0, YouTube, was created in 2005: by 2008, there were 83.4 million users and YouTube was hosting 6.1 million videos.

However, most online content is currently uploaded by a rather small number of participants. Comments on YouTube, the most active form of participation, account for only 16% of views (Cha et al., 2007). Furthermore, the 10% most popular videos account for 80% of views and the remaining 90% of videos receive few requests. The results for Wikipedia show a higher degree of participation: 4.6% of visitors edit entries. The Hitwise Report, however, indicates that, in addition to the users who create new content, 10–15% of visitors make minor contributions (e.g. add comments or tags) (Dahlquist, 2007).

            Incentives in Web 2.0

            As Web 2.0 services do not usually require users to contribute in order to access content produced by other users, and as contributors seldom benefit directly from producing content, it might be surprising to see any material at all being available on Web 2.0. Without any incentive (because there is no direct benefit) or obligation (because there is free access) to publish, content in Web 2.0 is akin to a public good (Samuelson, 1954) to which Web 2.0 users are asked to contribute. In such a situation, the rational decision is to free ride (Samuelson, 1954; Buchanan, 1965) and to access content without providing any. Based on this theory, Web 2.0 should be totally empty, while it is, in fact, full of content.

            To understand the incentives in Web 2.0, it is worthwhile considering those related to open source software. Like Web 2.0, open source software can be used without contributing and should, theoretically, not be produced at all. Yet open source software has been increasingly successful since its introduction, in the same way Web 2.0 has. To understand why open source is so successful, in spite of the temptation to free ride, the incentives to contribute to open source software have to be considered. These incentives can be related to immediate benefits (bug fixing, improvement of software, or enjoyment of working on the project) or delayed benefits (signalling incentives related to career concerns, ego gratification and reputation) (Lerner and Tirole, 2002). It would appear that altruism, community identification and human capital are reasons to contribute to open source projects (Bernheim, 1994).

            Although it might be expected that the incentives to contribute to Web 2.0 are similar to those related to open source, there are significant differences between the two. The major difference between open source and Web 2.0 is that open source is mostly related to a professional context (although open source games, for example, exist), while Web 2.0 is mostly related to a leisure context (although professional Web 2.0 applications exist). Nonetheless, one might expect that immediate and delayed benefits (ego gratification and reputation) as well as altruism and community identification play an important role in the decision to contribute to Web 2.0. While career concerns and human capital may play a role in particular cases, it can be assumed that there are, indeed, enough incentives to produce Web 2.0 content, even in the absence of proper remuneration.

Several studies have recently been conducted to investigate the incentives to contribute to Web 2.0. For example, the main motivation of Wikipedia contributors is their desire to publish the true facts about the world (Forte and Bruckman, 2005), and Wikipedia users edit articles when these articles correspond to their own interests or activities (Bryant et al., 2005). Furthermore, positive feedback received on a contribution motivates users to contribute further (Cheshire, 2007). Likewise, users are 12% more likely to post again when there has been a reply to their message (Joyce and Kraut, 2006).

In rare cases, Web 2.0 can also be a source of actual financial gains, and the annual salaries of some professional bloggers are within the range of $US90,000–120,000.2 In such cases, money is generated through various means, such as advertising (paid per click or sold for a fixed amount per month), affiliate commissions, product sales, donations, etc. However, before a blog starts to generate money, it has to achieve reputation and traffic. Reputation can be obtained through participation in forums and online communities and through promoting one's profile on social networking sites. Traffic can be increased by marketing the blog. A professional blogger should therefore be familiar with concepts such as blog publishing software, feed aggregators, blog carnivals, search engine optimisation, tagging, etc. Basically, in order to create a successful blog, the time spent on marketing is expected to be at least equal to the time spent creating the blog (Garrett, 2006).

Web 2.0 and search costs: a reversed tragedy of the commons

It has been generally acknowledged that the Internet has significantly reduced search costs (Bakos, 1997; Shapiro and Varian, 1999; Pereira, 2005). Search costs are part of transaction costs (Coase, 1937; Williamson, 1981) and are a source of market frictions and inefficiencies (Coase, 1960). However, to comprehend recent trends in Internet search costs, it is important to consider how search costs are composed.

Search costs can, in fact, be subdivided into two categories (Smith et al., 1999): external search costs, related to the monetary expenses or opportunity cost of search; and internal search costs, related to cognitive costs. While external costs depend on exogenous factors (such as market structure, technology, etc.) and are, most likely, the same for everybody, internal costs reflect the cognitive effort of individuals or firms to direct search queries and sort information to make decisions. These costs are related to the cognitive ability to process incoming information, which, in turn, is determined by prior knowledge, as well as by factors such as intelligence, education and training.
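This decomposition can be written compactly (the notation is introduced here purely for illustration and is not part of the original study):

$$C_{\text{search}} = C_{\text{ext}} + C_{\text{int}}(I), \qquad \frac{\partial C_{\text{int}}}{\partial I} > 0,$$

where $C_{\text{ext}}$ denotes the external (monetary and opportunity) costs, $C_{\text{int}}$ the cognitive costs, and $I$ the quantity of information to be processed.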

While the Internet has been greatly beneficial with regard to external search costs, it is rather doubtful that it has, in any way, affected internal search costs. Of course, search engines have, over the years, become more ‘intelligent’ and have converted some internal search costs into external ones (since they process and sort information for the users). Comparing the results obtained, with the same query, from the 1990s market leader, Altavista, and from the current market leader, Google, provides a striking example. While Altavista's engine was based solely on indexing web pages (and thus did not process information for the user), Google's algorithms have made search results more relevant. However, there is a limit to how much processing can be done by engines in place of users, and significant internal search costs still have to be borne by Internet users.

This issue is, in fact, directly related to the amount of information available on the Internet. Indeed, for overall search costs to remain constant, the ratio of external to internal search costs has to remain the same. External search costs are now minimal and it is unlikely they can be decreased further; in any case, the current level of external search costs makes any further improvement nothing but marginal. At the same time, cognitive search costs are positively correlated with the quantity of information available: the more information there is, the more costly it is to process. Search costs are thus expected to increase, unless new algorithms take a more important part in information processing. However, it is unlikely that search engine algorithms will improve at the same rate as the growth of information available online. While a steady growth in content available online has been observed since the advent of the Internet, Web 2.0, by making the creation of web content accessible to the masses, has accelerated this trend. For this reason alone, search costs are bound to increase. Another reason relates to the search engines' databases. Although search engines were never able to index fully all the content available on the Internet, Web 2.0, because of its dynamic nature and because it is not based on traditional websites, is even more likely to leave a large amount of content out of the reach of search engines (for obvious privacy reasons, Facebook personal pages are not accessible to search engines). As the proportion of content not referenced in search engines increases, (external) search costs increase as well.

Furthermore, the changes brought by Web 2.0 go far beyond text content. The new generation of Internet services has enabled users to publish multimedia content on the Internet easily (video with YouTube, photos with Flickr, etc.). Multimedia content creates a far more difficult challenge than textual content. Search engines are unaware of the content of multimedia files unless users provide information (a relevant file name, tags, etc.). With textual content, creators do not have to add information, since all relevant information is already part of the content itself. With multimedia content, creators need to provide information; otherwise their content remains invisible. The problem, of course, is that providing such information (tagging) is time consuming and costly.

One could argue that this problem existed long before the advent of Web 2.0, and it is true that indexing multimedia content has always been a challenge. However, Web 2.0 differs in the sense that (relatively) more content is created, the proportion of multimedia is greater and creators are (in most cases) not rewarded for their contributions. In contrast, a typical Web 1.0 website would be likely to benefit directly (through advertising based on higher usage, or through subscribed content) from providing exhaustive information related to its multimedia content. This would give incentives to tag content, despite the cost. Furthermore, untagged content is likely to be less valuable for such a website, so there is, at the same time, an incentive to tag (the marginal benefit brought by each additional piece of tagged content published) and a disincentive to publish too much content (since there is a marginal cost of tagging new content and also extra hosting costs for each new piece of content published), which corresponds to a form of economic rationality. A Web 1.0 website is likely to publish a quantity of tagged multimedia content such that the marginal benefit of the last piece of content published is equal to the marginal cost of its publication.
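In standard textbook notation (again introduced here purely for illustration), the Web 1.0 publisher chooses the quantity of tagged content $q$ such that

$$MB(q^*) = MC(q^*) = MC_{\text{tag}}(q^*) + MC_{\text{host}}(q^*),$$

whereas the typical Web 2.0 contributor, as the next paragraph argues, faces a tagging benefit close to zero and a publication cost close to zero, so there is no interior optimum at which publication stops.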

            Unlike Web 1.0 publishers, Web 2.0 contributors are unlikely to see any direct benefit from tagging the content they publish. At the same time, if they choose not to tag their content, they do not face any marginal cost when publishing new material (hosting is, generally, free of charge in Web 2.0 platforms). This explains the current content of Web 2.0: untagged (because of the lack of marginal benefit) and abundant (because of the lack of marginal cost). This situation is, of course, socially inefficient. Untagged multimedia content dramatically increases search costs (both internal and external) and it is clear that less, but fully tagged, content would be preferable at a social level.

            In this respect, the current situation of Web 2.0 is a reversed tragedy of the commons. Traditionally, the tragedy of the commons relates to a common resource being over‐consumed because of a lack of cost (or a private cost lower than the social cost) (Hardin, 1968). In the case of Web 2.0, because the private marginal cost of contributors is zero (or, at least, smaller than the increase in social cost created by the increase in search costs), too much is produced and this results in a waste of social welfare. The existence of this phenomenon has been confirmed empirically. For instance, a study conducted in 2008 (Sigurbjörnsson and van Zwol, 2008) showed that in a sample of 52 million photos hosted on Flickr, 64% contained fewer than four tags. In fact, 29% (which amounts to 15 million pictures) had a single tag (most likely the year, which may have been automatically added) and less than 12% had more than six tags. Furthermore, not only is the amount of tagging clearly inadequate (it is generally acknowledged that at least a dozen tags are required to provide an accurate description of a picture), but so is the quality of tags. Indeed, the five most frequent tags in the sample were ‘2006’, ‘2005’, ‘wedding’, ‘party’, and ‘2004’ (Sigurbjörnsson and van Zwol, 2008), which are far too broad to enable an accurate search. Moreover, because of idiosyncratic tags, ambiguities (does ‘London’ refer to the city or to the author?) and misspelling, almost half the tags analysed could not be classified and are, therefore, unlikely to be meaningful in the search process. Similar insufficient tagging, both in terms of quantity and quality, can be observed on other Web 2.0 outlets, such as YouTube.3

However, it is important to note that, though search costs have in all probability increased since the advent of Web 2.0, this is not necessarily reflected in the search results provided by search engines. The reason is twofold. First, untagged multimedia content may remain invisible (unless the file has, at least, a meaningful name), as it is less likely to be indexed by search engines. For instance, if only untagged photos are uploaded to Flickr, the search results remain the same, but cover an ever smaller proportion of the content available. Hence, obtaining an exhaustive listing of all relevant photos on Flickr would require browsing the millions of pictures hosted there one by one.

Second, today's search engines (for instance, Google's or Flickr's) are biased. Indeed, they do not aim to provide an exhaustive listing of even loosely relevant content, but aim instead to be user‐friendly by purposely limiting the results returned to what they think is relevant for the user. Logically, poorly tagged content is more likely to be considered unworthy by search engines.

For these two reasons, it is likely that relevant content is not listed by search engines. Hence the increased quantity of untagged or poorly tagged content also has a serious impact on external search costs, since it makes search engine results gradually less relevant. Thus, the increase in untagged or poorly tagged content is expected to have significantly raised search costs. Where search engines have been able to remain exhaustive, this has led to an increase in cognitive search costs (more results to process). Where they have not (by choice or because of technological limitations), this has led to an increase in external search costs (the need to search beyond the results of the search engines).

            Web 2.0 and externalities: double trouble

            The current structure of Web 2.0 is such that the increase in search costs caused by the publication of new content is not reflected in the cost of publishing this content. There is, thus, a negative externality. The (marginal) private cost incurred by the producers/contributors differs from the (marginal) social cost. The difference is the (marginal) external cost that is borne by society, but not by the creators of the externality.

            As expected, in this case, without a proper corrective mechanism, the good that gives rise to the externality is overproduced (Buchanan and Stubblebine, 1962). Unless a solution is adopted to make the cost of publishing content equal to the cost borne by society (that is, unless the additional search cost is borne by the creator of the content), the amount of content produced by Web 2.0 is socially wasteful.

A straightforward solution to this problem would be to force the cost of tagging upon contributors. In this case, one could imagine that less content would be produced. However, there is probably no way to check that contributors do indeed produce meaningful tags for their content. Besides, this would only be a solution for the increase in external search costs (i.e. the expenses of retrieving information); it would not reduce internal costs (i.e. the expenses of processing information). Furthermore, this would assume that the social value of all content is the same, which is obviously not the case. As a result, it might be that socially valuable content is not published, while some totally useless content is. In Web 2.0, the incentives received by contributors are not related to the social value of their contributions.

            This dissociation between incentive received and social value also gives rise to positive externalities. Indeed, the reward obtained for publishing Web 2.0 content is, most of the time, subjective and has very little reason to correspond to the actual social benefit of the publication. Since the (marginal) private benefit is likely to differ from the (marginal) social benefit, there is a positive externality equal to the difference (marginal external benefit). In such a case, it can be expected that the products at the source of the positive externality will be under‐produced because producers misperceive the actual value of their products (and, besides, do not have enough incentives to produce more) (Buchanan and Stubblebine, 1962).
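Both externalities can be summarised with the usual decompositions (illustrative notation, not the paper's):

$$MSC = MPC + MEC, \qquad MSB = MPB + MEB.$$

Whenever the marginal external cost $MEC$ is positive (the search costs imposed on others), content is over‐produced relative to the social optimum; whenever the marginal external benefit $MEB$ is positive (social value not perceived by the contributor), content is under‐produced.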

            It is peculiar that Web 2.0 is, at the same time, a source of both negative and positive externalities. This should not come as a surprise, though. There is no reason why the incentives and costs perceived by contributors should correspond to actual social benefits and costs. Unfortunately, this means that low (social) value content is very likely to be over‐produced, while high (social) value content is probably produced in insufficient quantity, thereby leading to an inefficient outcome.

            This does not mean that valuable content cannot be found on Web 2.0 sites; quite the contrary. The incentives provided by the Web 2.0 environment might well be enough for valuable content to be published. Furthermore, it might even be the case that the incentives perceived by some publishers actually correspond to the social value of their product. However, there are, at the moment, no Web 2.0 mechanisms that systematically ensure that incentives correspond to social value.

Of course, it could be argued that, because of the ‘long tail’ effect (Brynjolfsson et al., 2007), which is often associated with Web 2.0, every single piece of content is, in fact, socially valuable, so there is no problem of efficiency. However, a study conducted on YouTube and Daum traffic (Cha et al., 2007) revealed that 10% of the videos account for 90% of the views, which is even more skewed than the traditional ‘80–20’ Pareto principle. Furthermore, despite the growing number of files, the distribution is still skewed towards a few very popular files.

            Quality in Web 2.0: crowding out and adverse selection

Although the Web 2.0 environment is largely non‐monetary, it nonetheless follows economic rules. Therefore, the large increase in the supply of content has, undoubtedly, led to a decrease in the ‘market value’ of online content (for this not to happen, the demand for content would have had to grow at least as fast as the supply). Indeed, Web 2.0 content has not replaced previously existing content, but has instead established itself as a new way to obtain content.

            Even though online products are never perfect substitutes for one another, there is, most of the time, some degree of substitutability between them. Even if amateur, lower quality content is not as valuable as professional high quality content, its ready availability and its low cost (it is, most of the time, free) drive the overall value of content down. Web 2.0 is, thus, at the root of a ‘crowding out’ effect. Lower quality content crowds out good quality content by way of very low prices for online content.

This crowding effect can be easily understood from a consumer's perspective. All online products are competing for a part of consumers' leisure time. Although, for consumers, low quality content and high quality content do not lead to the same level of utility, their substitutability enables consumers to substitute a large quantity of (free) low quality content for a small quantity of (costly) high quality content. There is so much content available that consumers can easily fill all their leisure time with free content (and are even likely to be able to consume the best quality free content), leaving neither time nor incentive to consume high quality content. Of course, if less low quality content were available, consumers would eventually run out of new free content (of reasonable quality) and would devote more time to consuming paid‐for content.

            That good quality content might be driven out by low quality content is reminiscent of Akerlof's (1970) market for lemons. Since there is a risk that high quality content is not (or is less) present in the market because of an insufficient market value caused by the presence of lower quality content, Web 2.0 may give rise to adverse selection.

Adverse selection, however, usually arises because of information asymmetries. In the case of Web 2.0, it should be noted that online content is, in general, an experience good (Shapiro and Varian, 1999): its actual value for the consumer cannot be evaluated (or is too costly to evaluate) before consumption (Nelson, 1970, 1974; Klein, 1998). In the same way, in the market for lemons (Akerlof, 1970), the actual quality of the car is unknown to the buyer until the car has been purchased and ‘consumed’. There are thus information asymmetries in Web 2.0 related to the quality of online content. Obviously, all content providers claim that their content is of good quality. However, consumers are aware of these information asymmetries and are likely to base their valuation (and thus the amount they are willing to pay for content) on their past experience and observations.

What is particularly interesting, when one considers traditional models of adverse selection, is that consumers' willingness to pay is a weighted average of the quality present in the market. It is thus directly related to the proportion of low quality content: if a large amount of low quality content is produced, consumers' willingness to pay becomes close to zero. In the case of Web 2.0, it is undeniable that, while very valuable content is published, the significant amount of content that is socially not valuable (even if it may still be valuable to its author and close relatives or friends) is expected to drive consumers' willingness to pay to such a low point that high quality content may be driven out of (or never reach) the market.
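To make the mechanism concrete (the figures below are purely illustrative and are not drawn from any data in this paper): if a fraction $\lambda$ of available content is of high quality, worth $v_H$ to consumers, and the rest is of low quality, worth $v_L$, a risk‐neutral consumer's willingness to pay for an unseen piece of content is the weighted average

$$WTP = \lambda v_H + (1-\lambda)\, v_L.$$

With $v_H = 10$, $v_L = 0$ and $\lambda = 0.02$, the willingness to pay is only $0.2$, far below what producers of high quality content would need to charge, so they exit the market and $\lambda$ falls further.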

The usual solution to adverse selection is signalling. Producers should, somehow, be able to signal their quality through an objective and unambiguous signal. Although there is a clear distinction between professional content (available for a fee) and amateur content (usually available for free), the paid/unpaid distinction cannot be used as a signal, since there is high quality content available free of charge and paid content of low quality.

The problem with signalling is that it is costly. Even though the cost should decrease with quality, in an environment where the benefits obtained from publishing are mostly non‐monetary, it is quite doubtful that contributors would be willing to bear additional costs. For instance, the usual signalling strategies, such as guarantees and money‐back offers, are largely inapplicable in a Web 2.0 context. Besides, it is doubtful that a universal signalling criterion is possible: unlike Akerlof's lemons, where value is fully objective, the value of online content is mostly subjective.

A further complication with Web 2.0 is that the multitude of amateur providers means that providers are likely to lack knowledge of the market and do not have enough experience to assess accurately the market (or social) value of their products. In traditional models of adverse selection, sellers are fully aware of the quality and value of their products and, based on the cost of signalling, take signalling decisions accordingly. In Web 2.0, quality information is more imperfect than it is asymmetric. Consequently, even if an adequate signalling mechanism could be designed, it might still not be sufficient, since some producers of low value content might overestimate its value and decide to signal, while providers of high value content might underestimate its value and decide against signalling.

Finally, professional bloggers usually have to devote as much time to promoting their blogs as to writing them. Ultimately, the question is whether a successful blogger is someone who understands technology and keeps up with change, or someone who provides quality content. The current incentive system, even for professional bloggers, rewards not so much the quality of the content as the ability of bloggers to make their blogs better known than others.

            Why does it all matter? Efficiency in non‐monetary environments

            While the previous sections have identified several sources of inefficiency in Web 2.0, there is generally, nowadays, little concern about these inefficiencies and their consequences. The argument usually brought forward is that, since Web 2.0 mostly relates to non‐commercial/non‐monetary applications (it is ‘C2C’, consumer‐to‐consumer), the issue of efficiency is of little importance.

            This argument can be understood in two ways, either that Web 2.0 is sufficiently efficient, considering its main objective, or that inefficiencies are of little or even no consequence. With regard to the former, it can be noted that, while monetary exchange is not a prerequisite for efficiency, achieving a Pareto optimal outcome nonetheless requires bartering between participants (Madden, 1975). Since this generally does not take place within Web 2.0, an efficient outcome is unlikely to arise.

With regard to the second part of the argument – that inefficiency in Web 2.0 does not matter because it relates to non‐commercial applications – there are two important points to consider. First, even in an environment entirely devoid of commercial implications, it is reasonable to assume that the utility obtained by the participants depends positively on the value published content has for them. What was demonstrated in the previous sections is that the near‐total absence of market incentives and the lack of any relation between perceived incentives and social value in Web 2.0 are likely to result in a low social value for Web 2.0 content. Since the consumption of Web 2.0 content has increased to become a major source of leisure and entertainment, considering its sources of inefficiency in order to increase social welfare is all the more important.

Of course, it could still be argued that, since Web 2.0 is mostly related to leisure, improving its efficiency should not really be a priority. Such an argument, however, runs against the analysis presented in the previous sections, for two reasons. The first is that, regardless of what is published in Web 2.0, and regardless of its value, an external cost (in terms of search costs) is borne by the whole of society. Improving the efficiency of Web 2.0 would thus (even leaving aside the value of its content) improve social welfare.

            The second reason why efficiency of Web 2.0 matters is that, while Web 2.0 is mostly non‐monetary and non‐commercial, it is a substitute for commercial applications. It is likely that content that was previously paid‐for (e.g. stock pictures) will increasingly be provided, most likely for free, by Web 2.0. Hence, as commercial firms are increasingly competing with and are, sometimes, replaced by Web 2.0, the question of its efficiency is crucial. In this sense, it does not really matter that Web 2.0 is mostly non‐monetary and non‐commercial, simply because it has an impact on sectors of the economy which are monetary and commercial.

            Finally, it is important to consider that Web 2.0 has affected the artistic production process so radically that it has become necessary to redefine the concept of creative industries (Potts et al., 2008). If creative industries are progressively ‘dissolving’ into Web 2.0, then establishing socially efficient incentives will become increasingly crucial.

            The challenge of monetising Web 2.0

In the light of the arguments presented in the previous sections, it could be thought that simply monetising Web 2.0 would be sufficient to reconcile the Web 2.0 outcome with economic efficiency. Unfortunately, the particular economic nature of digital products, on the one hand, and the existence of transaction costs, on the other, make it particularly difficult to apply a simple market scheme to Web 2.0. This should not come as a surprise: there is certainly a reason why Web 2.0 was born, and why it has succeeded, in a mostly non‐market environment.

            The public nature of digital goods

            All Internet content has a particular feature – its digital format. Because of its digital nature, Internet content is fully replicable, i.e. can be copied without loss of quality or information. Furthermore, the cost of duplicating digital goods is, nowadays, so low that it has become negligible.

            This replicability (and its low cost) has important consequences for the economic nature of digital products. Indeed, although they are usually provided privately, digital goods are public goods (i.e. non‐rival and non‐excludable). While the medium (CD, hard drive, etc.) used to store digital goods is rival, digital goods are non‐rival, since consumers are able to make copies of the same unit of digital good and everyone can consume it at the same time (a good is said to be rival when the consumption of one individual decreases the potential consumption of other individuals).

This non‐rivalness results in digital goods being non‐excludable. Indeed, although the producers of digital goods are always able to prevent (exclude) consumers from consuming digital goods, they cannot prevent consumers from obtaining digital goods from other consumers. Thus, although digital goods are directly excludable, they are indirectly non‐excludable. Since digital goods are non‐rival, consumers have little reason not to let other consumers copy the digital goods they own (letting them copy will not deprive them of the use of the good). This means that, as the number of consumers who own a copy of a particular digital good increases, the actual excludability of this digital good progressively decreases, to the point where the digital good is virtually non‐excludable. Who would try to obtain (buy) a digital good from the producer when the same good can be obtained (for free) from other consumers?
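A simple illustrative model (ours, not the paper's) captures this erosion of excludability. If each of the $n$ consumers who already own a copy independently agrees to share it with probability $p$, the probability that a new consumer can obtain the good without paying the producer is

$$P(n) = 1 - (1-p)^n,$$

which tends to 1 as $n$ grows: excludability vanishes as diffusion proceeds.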

Since digital goods are (evolutionarily) public, they are subject to free riding, which, in the case of digital goods, takes the form of piracy (Ramello, 2005). The main problem arising from monetising Web 2.0 is, therefore, the large amount of free‐riding/piracy that would arise. The effect of piracy on Web 2.0 might be expected to be even more dramatic than in other sectors. Considering that very powerful and large firms, such as Microsoft, have been unable to curb piracy, how could small Web 2.0 contributors protect themselves? Furthermore, large firms have means to earn money in spite of piracy that Web 2.0 contributors are unlikely to have (for instance, the OEM contracts signed by Microsoft ensure a minimum level of revenue). The very nature of Web 2.0 is collaboration and diffusion. How, then, can property rights over content be reconciled with the nature of Web 2.0?

            Transaction costs and micro‐payment

The idea that pay‐per‐use and micro‐payment in the web environment would be a better model than the widely used free‐access advertising model has been popular in the e‐commerce literature. It can even be argued that the success of content providers depends on the availability of secure and convenient micro‐payments (Wirtz and Lihotzky, 2003). Such a model would have the potential to lead to a more efficient Web 2.0. However, were Web 2.0 to be monetised, there would be considerable transaction costs. Web 2.0 is characterised by a multitude of content and a multitude of suppliers. The large amount of content alone would be sufficient to generate large transaction costs; moreover, it is coupled with a very large number of individual providers. As if this were not enough, users consume a very large amount of diverse Web 2.0 content every day. Monetising Web 2.0 would, therefore, result in a very large number of transactions.

Another particular aspect of Web 2.0 is, as discussed in the previous sections, the relatively low value of individual pieces of online content. If users had to pay to access Web 2.0 content, it is likely that most of the payments would fall within the micro‐payment category. The issue, then, is whether each micro‐payment would offset the costs generated by the extra transaction it requires.
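The break‐even condition is simple to state (the notation is ours, for illustration): a monetised exchange is worthwhile for the consumer only if the value $v$ of the content exceeds its price $p$ plus the full transaction cost $c_t$ (monetary, opportunity and cognitive):

$$v \ge p + c_t.$$

With micro‐payments, $p$ is typically a few cents, so even a modest $c_t$ is enough to make the inequality fail and the transaction not worth undertaking.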

            Although some work has been done in recent years on how to optimise payments systems to lower transaction costs within a micro‐payment setting (Hwang et al., 2001), the question of whether transaction costs could be reduced sufficiently to make micro‐payment worthwhile remains open. More importantly, most studies consider only the financial aspects of transaction costs.

            Monetary transaction cost is only one of the components of transaction costs. The other main components are opportunity costs (related to the time spent for the transaction to happen) and cognitive costs (Coase, 1937; Williamson, 1981; Smith et al., 1999). These costs are caused not only by the transaction itself, but also by the search for opportunities, coordination, monitoring and enforcement (Tirole, 1988). Switching from free‐access content to pay‐per‐use content in Web 2.0 is likely to lead to a dramatic increase in transaction costs.

From the user's perspective, the increase in cost, beyond the actual price paid for the content, is obvious. So far, the only trade‐off consumers face relates to time and the expected quality of content: they base their decisions solely on the time they can devote to consuming content and on the expected utility derived from consuming particular content. Under pay‐per‐use, the problem becomes more complex, since the price of content becomes part of the trade‐off. While consumers nowadays have to consider only the quality of content, they would then have to consider the relative prices of all online products. Currently, one can reasonably assume that users always consume the content most valuable to them; introducing pay‐per‐use would mean that users would have to choose between consuming fewer goods of higher value or more goods of lower value.

            Consequently, one can expect that the behaviour of consumers would change. From casual browsing (made possible by the free access to content), they would have to switch to investigating all possible options and collecting all relevant information to make a rational choice. This, of course, takes time, money and cognitive resources. Furthermore, as online content is, for the most part, an experience good (i.e. actual value cannot be realised prior to consumption), even an extensive search might only consume resources.

Collecting information would only be the first step, though. Users would then have to coordinate their actions with the chosen seller. A contract would have to be drawn up (what is going to be delivered, how, for how long, using which payment system?), the transaction would have to be made (completing a form and entering credit card details, using a one‐click method, using PayPal?) and, finally, the contract would have to be enforced. If the contract were breached by the seller (e.g. the agreed content, quality of content and/or quantity of content was not delivered), the buyer would have to spend, again, time, money and cognitive resources taking action against the seller. Currently, if a blog does not deliver as expected, this is not costly, since there is free access and no contractual relationship.

            Can users reasonably be expected to accept all these costs for a short blog or video? Even if monetary transaction costs have been minimised, will users be willing to go through the hassle of (even leaving search costs aside) going through a one‐click payment (which still requires an authentication procedure) for something that is worth very little to them?

            Micro‐payment figures in the e‐business literature because it is generally thought that the value of online content for users does not exceed a few cents. However, even if technology improves to such a point that the monetary part of transaction costs becomes negligible, the opportunity cost and cognitive costs induced by monetising Web 2.0 make it unlikely that users will be willing to participate in such a scheme.

The costs for sellers in a pay‐per‐use Web 2.0 would be even greater. Indeed, sellers need to collect the same amount of information as buyers (what are competitors doing? what are the substitutes?) and can be expected to spend far more time and cognitive resources devising an adequate pricing and product strategy. Although the same strategy can be applied to many users, the very few successful content providers would have to adapt their strategies almost in real time in order to maximise profit. Even a very successful provider would probably find that micro‐payments do not offset the transaction costs incurred.

It is, thus, necessary to diminish transaction costs in order to enable a monetary Web 2.0. Reducing the number of actual transactions may not be sufficient: since search costs are a part of transaction costs, the number of potential transactions would have to be reduced as well. This means that pay‐per‐use is unlikely to be the best option for Web 2.0.

            Alternative options, such as subscription to content feed, are better in terms of transaction cost volume, but are not generally applicable to all Web 2.0 content. For example, it would be difficult to price a subscription for content published irregularly. Furthermore, the very large number of Web 2.0 contributors means that large transaction costs are still to be expected.

            The only remaining option would be to operate through intermediaries who would charge a subscription fee to access content. A typical example would be YouTube charging users an access fee4 and then paying content providers according to a formula (based on usage, for example). This would not be very different from what happens when ISPs and mobile operators bundle content in their telecommunication offers. Nonetheless, all attempts to use such a scheme on the Internet have so far been unsuccessful.

In Web 2.0, participants are both consumers and providers. Charging for the constant and multiple exchanges among participants would be extremely costly. Besides, participants who consume each other's products in equal measure would simply end up paying each other as much as they are paid. At the very least, a balancing mechanism between participants would have to be set up to save on transaction volumes. This, however, would have little impact on the transaction costs related to the time and cognitive resources spent searching and making decisions.

There are, however, other situations in which participants are both consumers and producers, and are constantly exchanging. Firms are a very good example of this. The example is significant, since transaction costs theory (Coase, 1937; Williamson, 1981) was built precisely to explain why firms and institutions exist, as opposed to individuals trading goods and services. The explanation is that there is a point at which the interactions between individuals are such that the transaction costs become too high to operate within a market environment. This is why, in such cases, non‐market entities, such as firms, are created. It is not surprising that a heavily collaborative environment, such as Web 2.0, has evolved into a mostly non‐monetary environment.

On a final note, even if the transaction costs associated with pay‐per‐use could actually be lowered, the profitability of such a marketing strategy for providers is debatable. Indeed, the rule for profiting from information goods is, in most cases, to bundle information goods and/or usages of information goods (Shapiro and Varian, 1999; Bakos and Brynjolfsson, 1999). Of course, it is usually difficult for small providers to offer bundles, and larger providers are more likely to apply this marketing strategy successfully. Nonetheless, the profitability of bundling is so strong that many small providers do pool their products and offer them as a bundle.
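The intuition behind the profitability of bundling can be sketched as follows (a textbook argument in the spirit of Bakos and Brynjolfsson, 1999, not a derivation from this paper). If consumers' valuations $v_1, \dots, v_n$ for $n$ information goods are independent with mean $\mu$, the per‐good valuation of the bundle concentrates around $\mu$ by the law of large numbers:

$$\frac{1}{n}\sum_{i=1}^{n} v_i \;\longrightarrow\; \mu.$$

A seller pricing the bundle just below $n\mu$ therefore sells to almost all consumers and captures nearly the entire surplus, which is generally impossible when the goods are priced separately.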

            From supply‐driven Web 2.0 to demand‐driven Web 2.1

            General conditions

            The challenge of monetising Web 2.0 is to find a mechanism that is able to:

1. better align individual incentives with social value;

2. take into account the economic nature of digital products; and

3. avoid generating large transaction costs.

            It is probably not possible to align individual incentives and social value fully without undermining the benefits of Web 2.0. However, the issues described in the second section above are mainly related to the risk that Web 2.0 would hinder the production of high quality content. If incentives targeted at producers of high quality content could be created, this could improve the way Web 2.0 works.

            At the same time, the economic nature of digital products needs to be taken into account. Because of the public nature of digital goods, it is illusory to think that piracy can be prevented, so it is unlikely that a lot of money can be made on reproduction and distribution of existing digital content. Finally, since attempting to charge for each access to online content within Web 2.0 leads to large transaction costs, the success of any attempt to monetise Web 2.0 is dependent on reducing transactions.

A way to achieve these three goals at once would be to move from the purely supply‐driven Web 2.0, as it is at the moment, towards a demand‐driven Web 2.0. The idea is that, instead of publishing content and hoping that this content will meet a demand (or in parallel with doing so, since demand‐driven and supply‐driven Web 2.0 can complement each other), content is published on demand. The content can either exist prior to publication (for example, holiday pictures of a monument) or be created on demand (following the exact specification of the demander).

Such a mechanism would indeed satisfy all the conditions defined above. First, it would create incentives for high quality content, since content would be paid for. Moreover, if such a system were to coexist with the traditional supply‐driven Web 2.0, it is likely that providers who anticipate that their content has a high value would wait for demand to materialise before publishing, while users aware of the low value of their content would publish it the traditional way.

Second, such a mechanism would acknowledge the public nature of digital products. Indeed, digital goods, like any other public goods, become non‐excludable only once they have been produced (published, in the case of digital goods). The first unit of a public good (and of a digital good) remains fully excludable, and this is when it is still possible to charge for access to the good. However, after a particular piece of content has been published, it quickly becomes non‐excludable and no attempt should be made to prevent access and/or copying by other users.

Third, such a scheme addresses the issue of high transaction costs. Indeed, it is suggested that only the initial publication should be charged for, which would lead to fewer transactions (and lower transaction costs) than charging for each usage. Even if we assume that the producer and the initial buyers attempt to charge for access to the content while it is still partially excludable, this should not lead to a major increase in transaction costs.

            This idea is, in fact, compatible with a new stream of research in economics (Boldrin and Levine, 2002; Quah, 2002) which demonstrates that an efficient competitive market outcome can be achieved with digital goods (which was thought to be impossible because of the publicness of these goods) as long as there is finite expansibility (i.e. the goods do not spread instantly in the economy). Since it takes time for digital goods to spread among consumers, these goods remain partially excludable for some time and it is possible in the meantime to charge people for accessing them. Furthermore, the initial buyer is able to identify downstream profits (since it is possible for this buyer to sell access to the good) and is willing to pay even more. In this sense, it is not unlike the premium that news agencies charge important clients who are willing to pay a lot to access crucial pieces of information first.

            On‐demand Web 2.0 in practice

In practice, such a system would consist of people registering existing potential content (holiday pictures with detailed tags) or the ability to provide content (such as coding skills, or the location of their next holiday) with an intermediary. Meanwhile, users could register demands and needs with the same intermediary. Demanders and potential suppliers would then be matched and would agree on price and modalities. Once the digital good has been delivered, although the producer remains the holder of the copyright, both producers and buyers may diffuse or resell it as they wish. A reputation system could be included to provide further incentives.
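A minimal sketch of such an intermediary might look as follows. The Python code below is purely illustrative: all names, data structures and the tag‐overlap matching rule are hypothetical, chosen only to make the registration‐and‐matching flow concrete; a real system would add negotiation, delivery and the reputation mechanism mentioned above.

from dataclasses import dataclass

@dataclass
class Offer:
    supplier: str
    description: str
    tags: set          # tags are what make an offer matchable
    price: float       # asking price for the initial publication

@dataclass
class Request:
    demander: str
    tags: set
    budget: float

class OnDemandIntermediary:
    """Hypothetical intermediary matching content demand with potential supply."""

    def __init__(self):
        self.offers = []
        self.requests = []

    def register_offer(self, offer):
        self.offers.append(offer)

    def register_request(self, request):
        """Store the request and return candidate offers, best match first."""
        self.requests.append(request)
        candidates = [o for o in self.offers
                      if o.tags & request.tags and o.price <= request.budget]
        # Rank by tag overlap: better tagged content is more likely to be
        # matched with demand, which is precisely the incentive to tag
        # discussed in the text.
        return sorted(candidates,
                      key=lambda o: len(o.tags & request.tags),
                      reverse=True)

# Usage: a supplier registers tagged holiday pictures; a demander asks for them.
broker = OnDemandIntermediary()
broker.register_offer(Offer("alice", "Eiffel Tower at night",
                            {"paris", "eiffel", "night"}, 5.0))
matches = broker.register_request(Request("bob", {"paris", "night"}, 10.0))
print([m.description for m in matches])   # ['Eiffel Tower at night']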

Obviously, such a system is a source of network externalities, since people are unlikely to put publishing content on hold unless they expect demand for it to materialise. Since most people would expect not a large demand for the content they can publish, but a long tail effect instead, a large number of potential users would certainly be required for the service to be adopted. Furthermore, as transactions are involved, the reputation of the company operating the on‐demand system is certainly important.

Therefore, it can be expected that only a large company would be able to deploy such a system successfully. Leading providers of Web 2.0 platforms (Facebook, YouTube, etc.) could clearly implement such a system. Other large companies with a solid reputation (e.g. eBay) would certainly be able to do so. In fact, considering the large network externalities likely to arise, it would be interesting for a large IT/Internet company (one thinks of Microsoft, Google or Apple) to build a meta Web 2.0 system that would integrate the main Web 2.0 platforms into the on‐demand system.

            The first difficulty of establishing on‐demand Web 2.0 is thus that it requires a form of standardisation to trigger sufficient adoption. The second problem is that it does not suit all content providers. In particular, companies for which downstream licensing is crucial are unlikely to be willing to participate. Minimising transaction costs basically requires that a transaction occurs only during the first publication of the content (in a way, the publication of the content is rewarded instead of the production of the content). If either or both parties want to continue selling the content afterwards, this involves much more negotiation as well as (potentially) enforcement costs.

            While consumers may be happy to let other consumers use one of their pictures after having been rewarded for its initial publication, firms or individual entrepreneurs may be reluctant to be paid only once and then see their product used by others for free. Hence, companies whose business model relies on appropriation of returns from copyrighted material are likely to continue operating as they do now.

However, it is important to keep in mind that the aim of demand‐driven Web 2.0 is not to replace the existing supply‐driven Web 2.0, but to complement it. If it were to be implemented, demand‐driven Web 2.0 would be an intermediate step between the free content provided by supply‐driven Web 2.0 and the paid‐for content provided through other means. It can be conjectured that, even if the usage of on‐demand Web 2.0 were to remain much lower than that of supply‐driven Web 2.0 and paid‐for content, its sole existence could improve the efficiency of the Web 2.0 sphere. The existence of a potential reward (through on‐demand Web 2.0) might be a sufficient incentive for users to put publishing content on hold and to provide adequately tagged content. Although it is not a first best solution, in the sense that it does not guarantee an efficient outcome, it could increase the relative quantity of high quality content and curb search costs, because less low value content is published and/or because it provides incentives to tag content (untagged content is less likely to be matched with demand).

            Concluding remarks

The aim of this paper has been to evaluate, from an economic and social perspective, the efficiency of Web 2.0. It has demonstrated that, because of the non-monetary nature of Web 2.0, several sources of inefficiency (search costs, externalities, crowding out and adverse selection) exist. Furthermore, it has been shown that the economic nature of digital products and the expected low value of most online content make it impossible to adopt a simple market model for Web 2.0.

This paper has introduced the concept of demand-driven Web 2.0 (as opposed to the current Web 2.0, which is supply-driven). By taking the economic characteristics of digital products seriously and keeping transaction costs low, such a system is expected to provide, through financial reward, stronger incentives for high-quality content within a Web 2.0 environment.

The findings of this paper have several implications for managers. For providers of Web 2.0 platforms (e.g. Facebook, YouTube), whose activity is complementary to user-generated content, the consequences are twofold. First, they bear the cost of hosting and distributing Web 2.0 content. Hosting low-value content costs as much as hosting high-value content (and in the case of photos and videos this cost is clearly not negligible), while such content adds little value to the service offered. Furthermore, as the value of the service is directly related to the ease of finding relevant content, Web 2.0 providers also bear a significant part of the additional search cost created by low-value content. While Web 2.0 providers, as large players, are the best placed to develop demand-driven Web 2.0, their incentive to do so might still be low. At this still rather early stage of Web 2.0 adoption, acquiring more customers is likely to be more important than minimising costs; indeed, the more content consumers put in, the more switching costs and network externalities are created. Nonetheless, when the market matures, Web 2.0 providers will probably see an interest in adopting demand-driven Web 2.0 to increase efficiency.

For content providers, the consequences of Web 2.0 inefficiencies are ambiguous. On the one hand, if they provide high-quality content, the low efficiency of Web 2.0 should work in their favour. On the other hand, because of the massive amount of substitute products provided by Web 2.0, it may be difficult to compete, even with high-quality content. While large companies may be able (because of brand recognition, marketing, etc.) to differentiate their products sufficiently, small and medium-sized companies (SMEs) may be driven out of the market regardless of the quality of their products. The problem is that content providers are unlikely to be in a position to implement demand-driven Web 2.0 themselves. Until a large provider puts it into practice, small providers often have no choice but to publish some of their content for free on Web 2.0 and hope this will generate sales of paid content. In some cases, however, SMEs might still be able to apply the demand-driven concept by diverting some of their activities towards services (for instance, switching from providing stock pictures to on-demand photography).

Finally, it is important to note that the analysis conducted in this paper is based largely on the mainstream concept of economic rationality, which assumes that individuals base their decisions on their self-interest. But individuals are, at times, altruistic (or less rational, from a substantive perspective), and this makes the future of Web 2.0 a little less bleak than this paper suggests.

            However, this does not, by any means, diminish the importance of the conclusions drawn as they describe the long‐term trends of Web 2.0 (and related industries) unless adequate incentives are developed. Otherwise, contributing to Web 2.0 will remain akin to contributing to a public good. As many studies have shown (Isaac et al., 1985; Andreoni, 1988; Weimann, 1994; Haan and Kooreman, 2002), although individuals, in contrast to homo economicus, do contribute voluntarily to the provision of a public good, this provision is never sufficient and eventually decreases over time. The drop in contributions to some Web 2.0 outlets (for instance, Wikipedia) may well indicate that such a stage has, in their case, already been reached (Suh et al., 2009).


            References

1. Akerlof, G. (1970) 'The market for lemons: quality uncertainty and the market mechanism', Quarterly Journal of Economics, 84(3), pp.488–500.

2. Andreoni, J. (1988) 'Why free ride? Strategies and learning in public goods experiments', Journal of Public Economics, 37, pp.291–304.

3. Arrow, K. and Debreu, G. (1954) 'Existence of an equilibrium for a competitive economy', Econometrica, 22(3), pp.265–90.

4. Bakos, J. (1997) 'Reducing buyer search costs: implications for electronic marketplaces', Management Science, 43(12), pp.1676–92.

5. Bakos, Y. and Brynjolfsson, E. (1999) 'Bundling information goods: pricing, profits, and efficiency', Management Science, 45(12), pp.1613–30.

6. Bernheim, B. (1994) 'A theory of conformity', Journal of Political Economy, 102(5), pp.841–77.

7. Blood, R. (2000) 'Weblogs: a history and perspective', Rebecca's Pocket.

8. Boldrin, M. and Levine, D. (2002) 'The case against intellectual property', American Economic Review Papers and Proceedings, 92(2), pp.209–12.

9. Bryant, S., Forte, A. and Bruckman, A. 'Becoming Wikipedian: transformation of participation in a collaborative online encyclopaedia' in Proceedings of the GROUP International Conference on Supporting Group Work, Sanibel Island, FL.

10. Brynjolfsson, E., Hu, Y. and Simester, D. (2007) Goodbye Pareto Principle, Hello Long Tail: The Effect of Search Costs on the Concentration of Product Sales, MIT Sloan School of Management, Massachusetts Institute of Technology.

11. Buchanan, J. (1965) The Demand and Supply of Public Goods, Rand McNally, Chicago, IL.

12. Buchanan, J. and Stubblebine, W. (1962) 'Externality', Economica, 29(116), pp.371–84.

13. Cha, M., Kwak, H., Rodriguez, P., Ahn, Y.-Y. and Moon, S. (2007) 'I tube, you tube, everybody tubes: analyzing the world's largest user generated content video system', IMC'07.

14. Cheshire, C. (2007) 'Social psychological selective incentives and the emergence of generalized information exchange', Social Psychology Quarterly, 70, pp.82–100.

15. Coase, R. (1937) 'The nature of the firm', Economica, 4(16), pp.386–405.

16. Coase, R. (1960) 'The problem of social cost', Journal of Law and Economics, 3, pp.1–44.

17. Constantinides, E. and Fountain, S. (2008) 'Web 2.0: conceptual foundations and marketing issues', Journal of Direct, Data and Digital Marketing Practice, 9, pp.231–44.

18. Dahlquist, D. (2007) 'Web 2.0: who really participates?', CMS Wire.

19. Economist (2009) 'The end of free lunch – again', Economist.

20. Forte, A. and Bruckman, A. 'Why do people write for Wikipedia? Incentives to contribute to open content publishing' in Proceedings of the 41st Annual Hawaii International Conference on System Sciences.

21. Garrett, C. (2006) 'Killer flagship content', mimeo.

22. Haan, M. and Kooreman, P. (2002) 'Free riding and the provision of candy bars', Journal of Public Economics, 83, pp.277–91.

23. Hardin, G. (1968) 'The tragedy of the commons', Science, 162(3859), pp.1243–48.

24. Hwang, M.-S., Lin, I.-C. and Li, L.-H. (2001) 'A simple micro-payment system', Journal of Systems and Software, 55(3), pp.221–29.

25. Isaac, R., McCue, K. and Plott, C. (1985) 'Public goods provision in an experimental environment', Journal of Public Economics, 26, pp.51–74.

26. Joyce, E. and Kraut, R. (2006) 'Predicting continued participation in newsgroups', Journal of Computer-Mediated Communication, 11(3), pp.723–47.

27. Klein, L. (1998) 'Evaluating the potential of interactive media through a new lens: search versus experience goods', Journal of Business Research, 41(3), pp.195–203.

28. Lerner, J. and Tirole, J. (2002) 'Some simple economics of open source', Journal of Industrial Economics, 50(2), pp.197–234.

29. Madden, P. (1975) 'Efficient sequences of non-monetary exchange', Review of Economic Studies, 42(4), pp.581–96.

30. Nelson, P. (1970) 'Information and consumer behavior', Journal of Political Economy, 78(2), pp.311–29.

31. Nelson, P. (1974) 'Advertising as information', Journal of Political Economy, 82(4), pp.729–54.

32. Pereira, P. (2005) 'Do lower search costs reduce prices and price dispersion?', Information Economics and Policy, 17(1), pp.61–72.

33. Potts, J., Cunningham, S., Hartley, J. and Ormerod, P. (2008) 'Social network markets: a new definition of the creative industries', Journal of Cultural Economics, 32(3), pp.167–85.

34. Quah, D. (2002) 'Matching demand and supply in a weightless economy: market-driven creativity with and without IPRs', De Economist, 150(4), pp.381–403.

35. Ramello, G. (2005) 'Property rights, firm boundaries, and the republic of science – a note on Ashish Arora and Robert Merges', Industrial and Corporate Change, 14(6), pp.1195–204.

36. Samuelson, P. (1954) 'The pure theory of public expenditure', Review of Economics and Statistics, 36(4), pp.387–89.

37. Shapiro, C. and Varian, H. (1999) Information Rules: A Strategic Guide to the Network Economy, Harvard Business School Press, Boston, MA.

38. Sifry, D. (2007) 'The state of the live web', Technorati.

39. Sigurbjörnsson, B. and van Zwol, R. 'Flickr tag recommendation based on collective knowledge' in Proceedings of the 17th International Conference on the World Wide Web, pp.327–36.

40. Smith, A. (1904) An Inquiry into the Nature and Causes of the Wealth of Nations, 5th edn, Methuen, London.

41. Smith, G., Venkatraman, M. and Dholakia, R. (1999) 'Diagnosing the search cost effect: waiting time and the moderating impact of prior category knowledge', Journal of Economic Psychology, 20(3), pp.285–314.

42. Suh, B., Convertino, G., Chi, E. and Pirolli, P. (2009) 'The singularity is not near: slowing growth of Wikipedia' in WikiSym '09: Proceedings of the 5th International Symposium on Wikis and Open Collaboration, pp.1–10.

43. Tirole, J. (1988) The Theory of Industrial Organization, MIT Press, Cambridge, MA.

44. Weimann, J. (1994) 'Individual behaviour in a free riding experiment', Journal of Public Economics, 54(2), pp.185–200.

45. Williamson, O. (1981) 'The economics of organization: the transaction cost approach', American Journal of Sociology, 87(3), pp.548–77.

46. Wirtz, B. and Lihotzky, N. (2003) 'Customer retention management in the B2C electronic business', Long Range Planning, 36(6), pp.517–32.

            Footnotes

3. Preliminary results of an ongoing study carried out by the authors show that nearly half (49.09%) of the videos uploaded on YouTube contain fewer than four tags, and some 80% of the YouTube videos in the sample contain fewer than eight tags. Furthermore, a significant proportion of tags are redundant information (year, date, user name), relate to the means of production (such as 'video', 'webcam' or the name of the software used) instead of the content, are split words or phrases (resulting in a large number of prepositions being set as tags), or are meaningless numbers.

            4. Note that this access fee would probably have to be a flat fee. Otherwise, the transaction costs, although lower than with individual pay‐per‐use, are likely to remain high.
