“ACAP and the Online Challenges Facing Newspapers”

Featuring Members of the Advisory Council of The Media Institute’s National CyberEducation Project

Reactions to a Speech by Thomas C. Rubin, Chief Counsel for Intellectual Property Strategy, Microsoft Corporation

BACKGROUND 

This “Virtual Panel Discussion” originally took place online, as a series of e-mail exchanges among members of the National CyberEducation Project’s Advisory Council.  These experts in intellectual property (some of the most notable names in the field) were reacting to a speech by Thomas C. Rubin, chief counsel for intellectual property strategy at Microsoft.  Rubin’s primary focus at Microsoft “has been to accelerate the creation of sustainable business models for content creators online.”

Rubin had delivered the speech in London on Nov. 20, 2008, at a meeting of the UK Association of Online Publishers.  The full speech, titled “The Change We Need,” can be viewed here.  As Rubin told Media Institute President Patrick Maines: “The subject was the challenges facing the newspaper industry online and ways to create a more financially viable approach.”

One option Rubin discusses is the Automated Content Access Protocol (ACAP), which he characterizes as “a more flexible means of offering content online.”  Maines circulated Rubin’s speech to NCEP Advisory Council members and asked for their thoughts on his ACAP proposal.  What follows are their lightly edited comments, responding to Rubin and each other, in the order they were received.   Most of this online discussion took place between Dec. 4 and Dec. 23, 2008.
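[Editor’s note: For readers unfamiliar with the mechanics, ACAP works by extending the familiar robots.txt convention, adding machine-readable fields that express usage permissions rather than a bare crawl/no-crawl choice.  The fragment below is an illustrative sketch only; the field names follow the general shape of the ACAP 1.0 drafts, and the published specification should be consulted for the exact vocabulary.]

```
# Plain robots.txt can only say "crawl" or "don't crawl":
User-agent: *
Disallow: /archive/

# An ACAP-style extension (illustrative sketch) adds finer-grained
# permissions on top of that crawl decision:
ACAP-crawler: *
ACAP-allow-crawl: /news/
ACAP-disallow-crawl: /archive/
```

The question the panelists return to below is whether search engines would actually honor such declarations, which the protocol itself cannot guarantee.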

PARTICIPANTS

Prof. James Gibson, School of Law, University of Richmond
Prof. Jane C. Ginsburg, School of Law, Columbia University
Prof. Justin Hughes, Benjamin N. Cardozo School of Law, Yeshiva University
Prof. Douglas Lichtman, School of Law, University of California – Los Angeles
Prof. Stan Liebowitz, School of Management, University of Texas – Dallas
Prof. Peter Menell, Boalt Hall School of Law, University of California – Berkeley
Prof. Robert P. Merges, Boalt Hall School of Law, University of California – Berkeley
Prof. Randal C. Picker, School of Law, University of Chicago
Dean Rodney A. Smolla, School of Law, Washington and Lee University

DISCUSSION

Doug Lichtman, UCLA:

The ACAP idea is certainly attractive from my perspective, in that it lowers the costs incurred by content owners to make clear their preferences about sharing and reusing their work.  That speaks directly to one of the main arguments frequently raised in fights like the Google Book Search fight: namely, the concern that it would cost too much for a would-be aggregator to actually make case-by-case determinations about what content can and cannot be used.

Of course, the devil remains in the details.  For instance, ACAP reduces those costs only if it is widely adopted; were Google to have to obey 15 different look-alike regimes, or 50, then high costs could return.  Similarly, ACAP itself would have to be simple enough to scale; were it to offer a thousand variations on the basic approvals, again the costs skyrocket.  But, in theory, a simple and widely adopted regime like ACAP could be an intelligent next step above and beyond the current robots.txt approach, just as Tom Rubin suggests.

This is only part of the analysis, however.  There are a lot of arguments out there about why content owners ought not be given full flexibility to determine the uses of their content.  And there is the obvious public relations constraint that has beaten down a wide range of other DRM-like protocols, even ones where it’s hard to see what everyone is so upset about.  (Poor EA Games.)

Thus, from a public policy perspective, I’m certainly interested in ACAP – but it strikes me neither as earth-shatteringly new nor as a panacea for the bigger struggle here over the proper relationships among aggregator, consumer, and content producer.

Jane Ginsburg, Columbia University:

I agree with Doug that the concept of ACAP has a lot of potential, but I would like to know more about how it works.  A visit to the ACAP website was not as illuminating as I’d hoped, especially since the “response to critics” page was last updated Jan. 18, 2008.  For example, I don’t know how much an improved robots.txt instruction can “teach” search engines about content-offering options, nor, equally importantly, how much a web crawler, say Google’s, “wants” to learn about those options.  If I offer my content by choosing among a series of standard options (à la Creative Commons icons), will the search engine respect those instructions, or will it ignore them (or some of them)?

James Gibson, University of Richmond:

Like Doug and Jane, I think the notion of a universal protocol for content-sharing online has a lot of potential.

As Doug mentions, however, I think one needs to consider exactly what content ACAP would control.  One problem with DRM is that it implements control in a way that’s very different from copyright law.  For example, it’s pretty much impossible to design a DRM system that can differentiate between fair uses (which copyright permits) and infringing uses (which it forbids), or between facts (unprotected) and expression (protected).  That doesn’t necessarily mean DRM is bad, but it does make me dubious when organizations like ACAP conflate controlling content through copyright and controlling content through DRM.

For example, I doubt that Google News violates copyright law when it displays headlines and short excerpts from newspaper articles in its search results.  But I am sure that ACAP is meant to keep Google from accessing and using such material anyway.  Indeed, I imagine that much of the impetus for ACAP’s DRM exists precisely because copyright does not protect content that the newspaper sites would like to be able to prevent others from exploiting.  Again, this does not mean that ACAP is wrong; maybe it’s copyright law that needs to adjust, or maybe an Internet mediated by DRM and social norms (e.g., respecting the restrictions in a robots.txt file) is the optimal result.

I also wonder whether investigative journalism is really at risk here, and if so whether the Internet is really the culprit.  (Wasn’t the newspaper industry in pretty bad shape before the Internet came along?)  New distribution technologies often have a negative effect on established business models, but that’s not necessarily a bad thing.  It may be that we are simply witnessing a new way to generate and access news, and that those with a vested interest in the old model are understandably resistant – a resistance that is surely in their best interest, but may not be in the public’s.

P.S.  I have some other issues with the claims and comparisons that Rubin makes – for example, Google is nothing like Napster – but they don’t have much to do with ACAP.

P.P.S.  When I did a couple of sample searches on Google News, I noticed that there were no ads on the search results page.  Why not, I wonder?

Jane Ginsburg, Columbia University: 

Regarding Google News, I assume the last P.P.S. is faux naiveté.  In any event, I’m not so sure that taking the headline and first two sentences isn’t at least prima facie infringement under U.S. law (qualitatively substantial taking of expression).  Moreover, under many European copyright laws, Google’s copying is likely not only to be prima facie infringing but disqualified from any applicable exceptions.

Randal Picker, University of Chicago:

Google often returns results without ads, which very well might be an exercise of market power.  I talk about this a bit in a debate I did a few weeks ago on Google (“Does Google Violate Its ‘Don’t Be Evil’ Motto?”).  Video on YouTube if you are so inclined, http://www.youtube.com/watch?v=j9yHwf21w0w , or audio on NPR, http://www.npr.org/templates/story/story.php?storyId=97216369 .

Justin Hughes, Yeshiva University:  

I agree with Jane that what Google has been doing is more clearly a problem under continental copyright laws, right or wrong, than U.S. law.

But I was really struck by Jim’s comment “whether investigative journalism is really at risk here, and if so whether the Internet is really the culprit.”  Today, the Tribune Company, parent of the Chicago Tribune, filed for bankruptcy – and the long-standing woes of the Los Angeles Times mean people on the West Coast have lost a lot of the investigative journalism base they had there, separate from what exists on the D.C.-NYC corridor.

Having grown up in a town with a wretched newspaper (Cincinnati) and having enjoyed a non-N.Y. metropolis with a great paper (L.A.), I find what’s happening sad and disturbing.  At the cusp of the Internet, the newspaper industry was in no great shape, but its business model has definitely been put at risk by Internet activities.

What I’d love to read is a careful, empirical comparison of the effects on newspapers [or investigative journalism] of Internet information aggregators that are authorized/participatory (like Craigslist at one end and the original content of the Huffington Post at the other) vs. the effects on newspapers of Internet business models that seek to monetize other people’s content without authorization/license.  Does anyone know whether such a comparison is out there in the literature?

Robert Merges, Berkeley:  

The topic raised in Rubin’s speech is near and dear to my heart.  In fact, I just completed an essay on many of the same themes, called “The Concept of Property in the Digital Era.”  I take aim at the consequences of the weak-IP school of thought now so dominant in our field, in large part because of the negative impact of Internet technologies on what I call “creative professionals.”  These are the people who create much of what Rubin calls “quality content,” and they are the ones being squeezed by current trends.

I did not include investigative reporters, but they fit what I do say quite well to the extent they are being crowded out by ubiquitous amateur “substitutes.”  IP law, I argue, has as one of its traditional goals the nurturing of a creative professional class – a view I defend against the implicit charge of elitism, which is latent in much of the current scholarship emphasizing the “democratic” nature of the Internet.

Anyway, I would be happy to send the essay to anyone who is interested.  It is forthcoming in the Houston Law Review, because I presented it in the Baker & Botts lecture at Houston, which I know some of you have given in the past.

Peter Menell, Berkeley:

I have followed the comments with interest.  I join/reinforce/commend the sentiments that have been expressed.  I would add two elements.

(1)  There is a lot of focus in IP scholarship on the positive spillovers of digital/Internet technology.  The changes in media markets suggest that there are negative spillovers as well.  Investigative reporting has long been cross-subsidized by mundane things like classified advertisements.  Some of the most important checks on our political system – such as Woodward and Bernstein’s groundbreaking revelations that toppled a presidency – were, at some levels, made possible by such cross-subsidization.

Professional newsgathering and reporting depend on enterprises that are increasingly endangered.  That does not necessarily mean that the Internet (and news search/aggregators) is hurting society on net, but it does mean that we need to be sensitive to the overall creative eco-system.

(2)  The Google Book Search settlement creates some valuable infrastructure for maintaining incentives for creativity.  But it is important that such institutions promote competition and creativity.  And, as I have written elsewhere, there may be an important public role in ensuring that such infrastructure serves the larger public interest.  [See Peter S. Menell, “Knowledge Access and Preservation Policy in the Digital Age,” 44 Houston Law Review 1013 (2007).]

There is obviously a lot more that can be said about these issues, but it is notable that Google is recognizing the role and importance of the content-supply pipeline.  The growing use and refinement of filtering on UGC sites also reflects a more balanced eco-system for incentives and access.

Jane Ginsburg, Columbia University:

Peter’s reply, together with its predecessors, prompts the following thoughts:

(1)  Information wants to be cross-subsidized.  Perhaps the popular press celebrants of Napster (original version) et al. now recognize they’ve formed the cheering squad for their own unemployment.  Will one of the chic ideas then circulating – the “Grateful Dead strategy” – come to their aid?  If content doesn’t pay for itself, and can’t be cross-subsidized with other content because not enough content is paying for itself any more, how about selling something whose supply the seller can control?  But what would that be?  What’s the equivalent in this context of concert tickets and t-shirts?

(2)  The Google book settlement illustrates the growing importance of collective licensing.  But while the Book Rights Registry is a welcome (and perhaps necessary) innovation for authors and book publishers, I’m not sure how it would work in the newspaper context, particularly given the importance of cross-subsidization.  That is, the Book Rights Registry enables individual authors to determine what kinds of uses of out-of-print works will be permitted (even allowing them to veto the publisher’s authorization), and dictates the revenue splits.

Thus, each creator should be able to be compensated in proportion to the actual use of her work.  Viewed from an individual author’s perspective, this is very good; viewed from the perspective of the publisher of a collective work, this may be less good.  Publishers could not claim those authors’ shares (and devote them to other ventures) unless they contract for them (as some have, post-Tasini) or assert work-for-hire where applicable.  I’m not a fan of either approach, so I would prefer to find a solution that does not encourage authors to be divested of their rights.

(3)  Regarding ACAP’s improved version of robots.txt, I’d like to know more about how this would work.  What instructions are communicated, and how do we know (if we do), that search engines respect them?  How much detail can the instructions communicate and still be obeyed?  Are we talking about the equivalent of a 4-icon Creative Commons-type set of options (but – importantly – with pay), or a much broader range of options?

(4)  What, if any, is the relationship of an ACAP-like system to fair use?  Will some level of free copying be programmed in?  If all copying can be licensed, would no copying be fair use anyway?

James Gibson, University of Richmond:

I think Peter is right to focus on the negative spillovers from Internet tech and their effect on newspapers.  Some of those spillovers, however, are different from what Rubin was talking about in his speech.  For example, I would think that some portion of newspapers’ revenue loss comes not from content competition (including copyright infringers), but from advertising competition.  Why would someone pay $30 for a classified ad in the printed version of the Washington Post when they can put a much more detailed ad on Craigslist for free – and probably reach more people?  This is a negative Internet spillover, but one that has nothing to do with copyright law.

Certainly content competitors have played a role in newspapers’ demise.  But the diverse nature of Internet spillovers may inform the breadth of the project that Patrick proposes.

Stan Liebowitz, University of Texas – Dallas:

I didn’t find much new content in the speech, though I agree with much of what was said.  I don’t know enough about the potential newspaper standard – and there wasn’t enough information in the speech – to draw any conclusions about the feasibility of using ACAP to control usage of newspaper content.

I see that Microsoft has thrown out DRM, which is unfortunate.  Although it generated bad publicity and reams of negative blather from the anti-copyright crowd, I don’t think DRM had much of a negative impact on sales.  Granted, it didn’t make much sense to use DRM for digital sales when CDs had no such protection.  But now that DRM has been removed, there is no evidence that sales of digital songs are any higher than they would have been had DRM been kept in place.  That is probably because anyone who was annoyed enough by DRM could easily overcome it.  I believe that if you don’t have locks, property will be taken, particularly in the digital realm.

Switching to a different business model, such as concerts or advertising (or performing on the street with a hat) is almost certainly an inferior model, or it would have occurred before digital stealing became so prevalent.  There are claims that these newer models are Schumpeterian gales of creative destruction, but that is a misunderstanding of the concept.  Those models were always available, just inferior, which is why they were not used.

In one sense, newspapers would be in a more difficult situation than the recording industry even in a world where digital locks worked.  The latter at least has products that are clearly differentiated from one another.  If the locks worked, the recording industry would be in pretty good shape.  Newspapers, on the other hand, largely offer highly substitutable national and international information.

Of course, much of that comes from the AP or a handful of other organizations that collect information at more than the local level.  Without classified ads (moved to eBay) and with more and more retailing being performed by national chains (therefore, national advertising), there is less and less reason for readers to purchase local newspapers.  Allowing their local content on the Web for free is just the stupidity on the cake.

Finally, I also want to let people know of a paper I have written attempting to measure the impact of copyright.  Strangely, it appears that marginal changes in the law (such as doubling or halving its life) may have no impact on the access side of the access/incentive tradeoff, making increases in copyright unambiguously positive.  If correct, much of the debate over copyright has been off-kilter.  Even in the most charitable case for finding a positive impact on price, the impact is still small and all of the price differential is swallowed up by payments to the author.  You can find the paper here: http://ssrn.com/abstract=1266486.

Rod Smolla, Washington and Lee University:

The discussion in recent days has been invigorating.  I wear multiple hats germane to the debate, including that of director in what was once a very traditional and profitable legacy media company (Media General) that contributes enormously to the public good at the local level through newspapers, TV stations, and local Internet portals, yet struggles in today’s market.  We are at a fateful break point in the history of our IP and communications policies.  I worry deeply about the future of creative professionals, including those who toil to support important journalism at the local community level.  Suffice it to say that this rich thread of comments is enough to convince me that there would be great value in our undertaking a program of some sort to explore these issues.

