Peter S. Menell*
May 3, 2012
Nearly a decade after the emergence of user-generated content (UGC) websites, appellate courts have at last, in the past several months, rendered their interpretations of how the Digital Millennium Copyright Act’s (DMCA) safe harbor applies to such entities. These much-anticipated decisions highlight the difficulties of interpreting copyright law in a rapidly evolving technological age. They also indicate how the risk of highly disproportionate liability can distort statutory interpretation.
In enacting the DMCA at the dawning of the Internet Age 14 years ago, Congress afforded online service providers (OSPs) immunity from monetary damages for storing content “at the direction of a user” so long as they:
(i) do not have actual knowledge that the material or an activity using the material on the system or network is infringing;
(ii) in the absence of such actual knowledge, [are] not aware of facts or circumstances from which infringing activity is apparent; or
(iii) upon obtaining such knowledge or awareness, act expeditiously to remove, or disable access to, the material.
17 U.S.C. §512(c)(1)(A).
At the time that Congress crafted this regime, the World Wide Web was a simpler place. OSPs hosted websites managed by webmasters who actively controlled the materials made available on webpages. Industry negotiators had a relatively clear sense of how the safe harbor would function in this Web 1.0 ecosystem. Subsection (i) would eject the OSP from the safe harbor if it had actual knowledge of specific infringing content on its system and did not expeditiously remove or disable access to the unauthorized material, as provided in subsection (iii). Subsection (ii) – which Congress called the “red flag” provision – excluded OSPs from the safe harbor based on awareness of “facts or circumstances from which infringing activity is apparent.” Unlike subsection (i), the red flag provision was not limited to knowledge of “the” infringing material, but rather was triggered by awareness of facts or circumstances from which “infringing activity” was apparent – a more general class of conduct. As with actual knowledge of “the” material in subsection (i), OSPs who became aware of red flags under subsection (ii) could preserve their safe harbor status by expeditiously removing or disabling access to infringing material pursuant to subsection (iii).
The emergence of Web 2.0 applications, such as UGC sites, in 2004 complicated application of this regime in ways not fully anticipated. As users gained the ability to upload, edit, and collaborate in disseminating information, webmasters came to be replaced by automated systems, and the potential liability of OSPs became more uncertain. On the one hand, Web 2.0 websites greatly expanded Internet functionality and the ability of amateur creators, fans, and the public at large to reach worldwide audiences quickly and easily. On the other hand, they greatly expanded the level of infringing activity.
The applicability of subsection (ii) took on particular significance to the emerging Web 2.0 marketplace. The most prominent UGC sites – Veoh and YouTube – ultimately implemented filtering technologies to dramatically reduce the number of infringing works uploaded to their systems. Prior to those adaptations, reports circulated that these websites hosted massive amounts of infringing user-uploaded videos. A critical question remained: Did general knowledge that an OSP hosted infringing content – as might be inferred from media reports that YouTube had unauthorized clips of Viacom’s “The Daily Show” – deprive the OSP of the Section 512(c) safe harbor, thereby exposing it to potentially crushing statutory damages for each of thousands of unauthorized clips that it hosted? Or was the OSP ejected from the safe harbor only when it had specific knowledge – i.e., a uniform resource locator (URL) address – of particular infringing works on its service and failed to remove or disable them expeditiously?
Appellate court answers to these questions began to emerge late last year. In December 2011, the Ninth Circuit affirmed the district court’s grant of summary judgment for Veoh Networks on the ground that a copyright owner must establish “specific knowledge of particular infringing activity” in order to eject a UGC host from the DMCA’s Sec. 512(c) safe harbor. See UMG Recordings, Inc. v. Shelter Capital Partners LLC, 667 F.3d 1022, 1037 (9th Cir. 2011) (emphasis added). Without any effort to reconcile Sec. 512(c)(1)(A)’s text, the court based its narrow interpretation of subsection (ii) on ambiguous general legislative history from the DMCA[1] and its prior conclusory ruling in Perfect 10, Inc. v. CCBill LLC, 488 F.3d 1102 (9th Cir. 2007). Applying this specific knowledge standard, the Ninth Circuit ruled that the CEO’s awareness of a news article characterizing Veoh as “a haven for pirated content” was insufficient to create a triable issue of whether the company was “aware of facts or circumstances from which infringing activity [was] apparent.” The court apparently did not regard an internal company e-mail stating that “the majority of Veoh content” “infringe[d] copyright” as sufficient to raise a triable issue that the red flag was waving.[2]
Last month, the Second Circuit followed and expanded upon the Shelter Capital interpretation that Sec. 512(c)(1)(A)(ii) requires “specific knowledge or awareness” of “facts or circumstances from which infringing activity is apparent.” See Viacom Intern., Inc. v. YouTube, Inc., ___ F.3d ___, 102 U.S.P.Q.2d 1283 (2d Cir. 2012). Unlike the Ninth Circuit, the Second Circuit confronted the concern that requiring specific knowledge under subsection (ii) renders the red flag provision superfluous – i.e., “the provision would be satisfied only when the ‘actual knowledge’ provision is also satisfied.” The court rejected this argument for reading subsection (ii) to allow a broader trigger for ejecting an OSP from the safe harbor than subsection (i) in the following passage:
The phrase “actual knowledge,” which appears in §512(c)(1)(A)(i), is frequently used to denote subjective belief. See, e.g., United States v. Quinones, 635 F.3d 590, 602 (2d Cir. 2011) (“[T]he belief held by the defendant need not be reasonable in order for it to defeat … actual knowledge.”). By contrast, courts often invoke the language of “facts or circumstances,” which appears in §512(c)(1)(A)(ii), in discussing an objective reasonableness standard. See, e.g., Maxwell v. City of New York, 380 F.3d 106, 108 (2d Cir. 2004) (“Police officers’ application of force is excessive … if it is objectively unreasonable in light of the facts and circumstances confronting them, without regard to their underlying intent or motivation.” (internal quotation marks omitted)).
The difference between actual and red flag knowledge is thus not between specific and generalized knowledge, but instead between a subjective and an objective standard. In other words, the actual knowledge provision turns on whether the provider actually or “subjectively” knew of specific infringement, while the red flag provision turns on whether the provider was subjectively aware of facts that would have made the specific infringement “objectively” obvious to a reasonable person. The red flag provision, because it incorporates an objective standard, is not swallowed up by the actual knowledge provision under our construction of the §512(c) safe harbor. Both provisions do independent work, and both apply only to specific instances of infringement.
Viacom Intern., Inc. v. YouTube, Inc., ___ F.3d at ___.
Unfortunately, this purported distinction between subjective belief and objective reasonableness is without a difference. The Second Circuit’s interpretation boils down to: “In the absence of actual knowledge [that the material or an activity using the material on the system or network is infringing],” the OSP must have been aware of facts that would have made the specific infringement “objectively” obvious to a reasonable person. It is difficult to see how the second clause adds anything significant beyond what is covered by the first clause: In the absence of actual knowledge of specific infringing material, awareness of specific infringement (as judged by a reasonable person). More importantly, the focus on this evanescent distinction overlooks the most pertinent text. Unlike subsection (i), which refers to actual knowledge of “the” material that is infringing, subsection (ii) omits the crucial definite article. Rather, it requires only awareness of facts or circumstances from which “infringing activity” is apparent, not from which “the” “infringing activity” is apparent. By its plain terms, subsection (ii) is triggered by any facts or circumstances – specific or general – from which infringing activity is apparent. Awareness of facts or circumstances from which infringing activity is apparent ejects the OSP from the safe harbor unless the OSP expeditiously removes or disables infringing material as provided in subsection (iii).
The Second Circuit limits subsection (ii) to specific knowledge by conjoining subsections (ii) and (iii):
Under §512(c)(1)(A), knowledge or awareness alone does not disqualify the service provider; rather, the provider that gains knowledge or awareness of infringing activity retains safe-harbor protection if it “acts expeditiously to remove, or disable access to, the material.” 17 U.S.C. §512(c)(1)(A)(iii). Thus, the nature of the removal obligation itself contemplates knowledge or awareness of specific infringing material, because expeditious removal is possible only if the service provider knows with particularity which items to remove. Indeed, to require expeditious removal in the absence of specific knowledge or awareness would be to mandate an amorphous obligation to “take commercially reasonable steps” in response to a generalized awareness of infringement. Viacom Br. 33. Such a view cannot be reconciled with the language of the statute, which requires “expeditious[ ]” action to remove or disable “the material” at issue. 17 U.S.C. §512(c)(1)(A)(iii) (emphasis added).
Viacom Intern., Inc. v. YouTube, Inc., ___ F.3d at ___. This conjoining, however, overlooks the statute’s deliberate insertion of the alternative conjunction “or” between clauses (ii) and (iii).
In effect, the Second Circuit reads §512(c)(1)(A) as providing two bases for ejecting an OSP from the safe harbor: (i) the OSP has actual knowledge of specific infringing material and fails to expeditiously remove or disable such material; and (ii) the OSP is aware of facts or circumstances from which specific infringing activity is apparent and fails to expeditiously remove or disable such infringing material. This reading overlooks an important third option: that the OSP is aware of facts or circumstances from which general infringing activity is apparent and does not remove or disable infringing material. The Second Circuit circumvents this possibility by reading subsection (iii) into subsection (ii). But that distorts the statute’s plain meaning as well as the “red flag” concept. Congress stated that when the red flag is waving, the OSP loses the safe harbor unless it can bring its service into compliance with copyright law. That might be very difficult to do in the Web 2.0 context, but that is what the statute provides. And there is a valid reason. OSPs that are aware of facts or circumstances from which infringing activity is apparent have a responsibility to eradicate the infringement. If they cannot, then they face responsibility for the infringing activity. Subsection (iii) affords OSPs with specific knowledge the option of preserving the safe harbor by expeditiously removing or disabling infringing content. Failure to do so exposes them to monetary liability for direct and/or indirect infringement. Where the OSP has general knowledge of potentially pervasive infringing activity, it can preserve its safe harbor status only by expeditiously disabling the domain in which that pervasive infringement occurs.
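The structural difference between the two readings can be stated as simple boolean logic. The following sketch is purely illustrative – the function and parameter names are my own hypothetical simplification of §512(c)(1)(A), not anything drawn from the statute or the opinions:

```python
# Illustrative sketch only: a hypothetical simplification of the competing
# readings of 17 U.S.C. §512(c)(1)(A). All names are invented for
# exposition; this is not legal analysis.

def safe_harbor_second_circuit(knows_specific_infringement: bool,
                               aware_of_specific_red_flags: bool,
                               expeditiously_removed: bool) -> bool:
    """Second Circuit reading: only specific knowledge or specific
    red-flag awareness matters, and either can be cured under (iii)."""
    if knows_specific_infringement or aware_of_specific_red_flags:
        return expeditiously_removed
    return True  # general awareness of infringement is never a trigger


def safe_harbor_plain_text(knows_specific_infringement: bool,
                           aware_of_red_flags: bool,  # specific OR general
                           expeditiously_removed: bool) -> bool:
    """Plain-text reading: awareness of facts or circumstances from which
    any "infringing activity" is apparent triggers the (iii) obligation."""
    if knows_specific_infringement or aware_of_red_flags:
        return expeditiously_removed
    return True


# The "third option": general awareness of pervasive infringement, no removal.
# Under the Second Circuit's reading, general awareness does not count as
# specific red-flag awareness, so the OSP stays in the safe harbor:
print(safe_harbor_second_circuit(False, False, False))  # True
# Under the plain-text reading, the same general awareness ejects it:
print(safe_harbor_plain_text(False, True, False))       # False
```

The two predicates differ only in what counts as a qualifying red flag; that single input is precisely where the interpretive dispute lies.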
That could well produce unjust liability and chilling effects. But it also discourages highly parasitic websites that hide their heads in the sand and merely remove specific infringing content brought to their attention by copyright owners. Suppose, for example, that a UGC website – let’s call it GS – became a haven for unauthorized sound recordings. Reliable, widely distributed news outlets regularly reported on the rampant availability of all manner of popular sound recordings on the GS website. All a user had to do was enter the name of a favorite artist or song into GS’s search engine and the sound recording would instantaneously begin streaming. Under the Second Circuit’s interpretation of §512(c)(1)(A)(ii), the website would be immune from liability so long as it removed individual recordings specifically brought to its attention by the copyright owners, notwithstanding that new versions were appearing as fast as or faster than the specifically identified recordings could be taken down. Such an interpretation, however, overlooks the plain meaning of subsection (ii) – which is not limited to specific knowledge – and treats subsection (iii) as affording OSPs a mandatory notice-and-takedown procedure. That procedure, however, is provided for separately. See §512(c)(1)(C). Congress noted that although an OSP has no obligation to seek out copyright infringement, “it would not qualify for the safe harbor if it had turned a blind eye to ‘red flags’ of obvious infringement.” See Senate Report No. 105–190, at 48. Thus, both the text and the legislative history indicate that Congress did not intend to limit the “red flag” provision to specific URL information.
Subsection (ii)’s broader standard might or might not be good policy. Although §512(c) may be obsolete in light of Web 2.0 functionality, it is up to Congress, not the courts, to change the red flag standard. The text leaves some room for courts to interpret “awareness,” “facts or circumstances,” “infringing activity,” and “apparent,” but not to read additional requirements – such as specific knowledge – into the text, especially where such a requirement would render the provision effectively superfluous.
Even if the text of subsection (ii) were ambiguous in this regard, the other windows into legislative intent fail to support the Second and Ninth Circuits’ reading. The Ninth Circuit refers to §512(m) – providing in a section entitled “Protection of privacy” that “[n]othing in this section shall be construed to condition the applicability of subsections (a) through (d) on … a service provider monitoring its service or affirmatively seeking facts indicating infringing activity” – to support its narrow interpretation of subsection (ii). But that section is not inconsistent with general knowledge casting an OSP out of the safe harbor. It merely states that the DMCA does not force an OSP to monitor its service. An OSP is certainly free to monitor its service, and given the risks of UGC sites not doing so, it is not surprising that Veoh and YouTube eventually chose to implement filtering technologies. Section 512(m) cannot be fairly read to limit subsection (ii) to specific knowledge of infringing activity.
The legislative history to which the Ninth Circuit points (see supra n.1) does not directly address the reach of subsection (ii). Rather, it highlights the general purposes of the DMCA and notes the challenges of assessing whether certain activity (such as use of a photograph) is infringing. Nothing in either of these passages, however, indicates that Congress intended to impose a specific knowledge requirement in subsection (ii).
The Second Circuit pushed imposition of a “specific knowledge” requirement a large and questionable step further. The YouTube record contained specific instances of infringing activity known to YouTube founders:
YouTube founder Jawed Karim prepared a report in March 2006 which stated that, “[a]s of today[,] episodes and clips of the following well-known shows can still be found [on YouTube]: Family Guy, South Park, MTV Cribs, Daily Show, Reno 911, [and] Dave Chapelle [sic].” Karim further opined that, “although YouTube is not legally required to monitor content … and complies with DMCA takedown requests, we would benefit from preemptively removing content that is blatantly illegal and likely to attract criticism.” He also noted that “a more thorough analysis” of the issue would be required. At least some of the TV shows to which Karim referred are owned by Viacom. A reasonable juror could conclude from the March 2006 report that Karim knew of the presence of Viacom-owned material on YouTube, since he presumably located specific clips of the shows in question before he could announce that YouTube hosted the content “[a]s of today.” A reasonable juror could also conclude that Karim believed the clips he located to be infringing (since he refers to them as “blatantly illegal”), and that YouTube did not remove the content from the website until conducting “a more thorough analysis,” thus exposing the company to liability in the interim.
Furthermore, in a July 4, 2005 e-mail exchange, YouTube founder Chad Hurley sent an e-mail to his co-founders with the subject line “budlight commercials,” and stated, “we need to reject these too.” Steve Chen responded, “can we please leave these in a bit longer? another week or two can’t hurt.” Karim also replied, indicating that he “added back in all 28 bud videos.” Similarly, in an August 9, 2005 e-mail exchange, Hurley urged his colleagues “to start being diligent about rejecting copyrighted / inappropriate content,” noting that “there is a cnn clip of the shuttle clip on the site today, if the boys from Turner would come to the site, they might be pissed?” Again, Chen resisted:
but we should just keep that stuff on the site. i really don’t see what will happen. what? someone from cnn sees it? he happens to be someone with power? he happens to want to take it down right away. he gets in touch with cnn legal. 2 weeks later, we get a cease & desist letter. we take the video down.
And again, Karim agreed, indicating that “the CNN space shuttle clip, I like. we can remove it once we’re bigger and better known, but for now that clip is fine.”
Viacom Intern., Inc. v. YouTube, Inc., ___ F.3d at ___. Based on this evidence, the Second Circuit concluded that Viacom had raised “a material issue of fact regarding YouTube’s knowledge or awareness of specific instances of infringement.” Consequently, it overturned the district court’s grant of summary judgment and remanded for further proceedings.
As part of its remand order, the Second Circuit implicitly construed the §512(c) safe harbor on what might be called a transactional basis. Building off its questionable interpretation that §512(c)(1)(A)(ii) applies only to specific instances of infringing activity, the court suggests that exposure to liability is limited to only those instances where the OSP has been made aware of facts or circumstances from which specific infringing activity was apparent. In so doing, the court effectively applies the safe harbor on a work-by-work basis. The effect is to dramatically limit YouTube’s potential exposure to perhaps a few dozen clips.
While this interpretation may well make good policy sense by shielding OSPs from disproportionate statutory damages, it fails to comport with the structure of the safe harbor. Whereas subsection (iii) specifically provides that an OSP can maintain immunity by expeditiously removing copyrighted works upon gaining actual knowledge, there is nothing to suggest that failure to comply with subsections (ii) and (iii), as occurred in the YouTube case, limits exposure to only those works for which YouTube had specific knowledge. The plain meaning of §512(c)(1)(A) suggests that failure to comply with subsections (i) or (iii) on the one hand, or subsections (ii) or (iii) on the other, would eject the OSP from the safe harbor entirely. The legislative history refers to “safe harbor status” as a binary concept – an OSP is either in or out. See Senate Report No. 105–190, at 48. The Second Circuit suggests that the OSP remains in except for those infringing works for which it had specific knowledge.
What is needed instead is a middle ground: a transactional approach where the OSP has only specific knowledge and takes no action, and a broader approach where the OSP has general knowledge indicating widespread infringing activity and takes no action. Courts could address this issue through their interpretation of “expeditiously” under §512(c)(1)(A)(iii). Implementation of filtering technologies could also bear on compliance with this means of preserving the safe harbor. While that may effectively limit the range of UGC website designs, it seeks to effectuate the DMCA’s overall policy balance. Section 512(c) does not establish a right to operate any form of website so long as the OSP responds to takedown notices. Rather, it dictates the conditions under which a website can enjoy the safe harbor. It seeks to promote a balanced ecosystem for OSPs and copyright owners. The takedown process is a central feature of the regime, but it is not the sole constraint on OSPs.
Granted, the plain meaning of the statute could produce an overly harsh result in the context of UGC websites. The introduction to the Second Circuit’s YouTube opinion offers a telling clue as to what may have inclined the court toward such a narrow construction of §512(c)(1)(A)(ii): “The plaintiffs alleged direct and secondary copyright infringement based on the public performance, display, and reproduction of approximately 79,000 audiovisual ‘clips’ that appeared on the YouTube website between 2005 and 2008 [before YouTube implemented the ContentID filtering system]. They demanded, inter alia, statutory damages pursuant to 17 U.S.C. §504(c)….” Section 504(c) provides for the award of “not less than $750 or more than $30,000” per infringed work and up to $150,000 per work for willful infringement in the court’s discretion – creating a liability range from $59 million to nearly $12 billion. Under the Supreme Court’s decision in Feltner v. Columbia Pictures Television, Inc., 523 U.S. 340 (1998), the determination of statutory damages is a question for a jury.
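The liability range in that passage can be reproduced with back-of-the-envelope arithmetic, using only the figures stated in the text (79,000 clips and the §504(c) per-work range):

```python
# Back-of-the-envelope statutory-damages arithmetic from figures in the text.
clips = 79_000

floor = clips * 750                # statutory minimum per infringed work
ordinary_ceiling = clips * 30_000  # ordinary statutory maximum per work
willful_ceiling = clips * 150_000  # willful-infringement maximum per work

print(f"minimum:      ${floor:,}")             # $59,250,000 (~$59 million)
print(f"ordinary max: ${ordinary_ceiling:,}")  # $2,370,000,000
print(f"willful max:  ${willful_ceiling:,}")   # $11,850,000,000 (~$12 billion)
```

The floor and the willful ceiling match the article’s “$59 million to nearly $12 billion” range.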
While the concern with statutory damage windfalls is understandable, neither the DMCA’s text nor its legislative history supports the Second and Ninth Circuits’ statutory construction. Part of the problem relates to the rapid technological change in web hosting that unfolded shortly after the DMCA was enacted. The emergence of Web 2.0 applications may well have gone beyond Congress’s prescience, although a robust counterargument can be made that the “red flag” provision directly addresses the increased risk of piracy in Web 2.0’s viral technological era, encourages innovation in web technologies (such as filtering) that strike a responsible balance between the DMCA’s goals, and discourages abusive OSP behavior.
As I have argued elsewhere, cross-industry “best practices” collaborations – such as those that led to the Principles for User Generated Content Services and the Copyright Alert Memorandum of Understanding between ISPs and major content industries – may provide the most responsive solution to the inevitable technological changes affecting Web services. See Peter S. Menell, Design for Symbiosis: Promoting More Harmonious Paths for Technological Innovators and Expressive Creators in the Internet Age, Communications of the Association for Computing Machinery (forthcoming 2012); available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1995598. But the pressure to work collaboratively depends to a significant extent on the default liability regime.
While I am sympathetic with the Second and Ninth Circuits’ desire to avoid disproportionate statutory damage windfalls made possible by Web 2.0 functionality, that desire cannot justify distortion of the statutory language to achieve this end. First, as noted above, the courts lack the authority to rewrite the legislation. Second, there are other policy levers within the courts’ authority to effectuate a better, even if imperfect, balance. In addition to the previously noted leeway courts have to interpret such terms as “awareness,” “facts or circumstances,” and “infringing activity,” courts possess substantial leeway in determining statutory damages. And although the Supreme Court has put that discretion in the first instance with the jury, the common law doctrine of remittitur as well as constitutional limits on damages afford judges significant leeway. See Sony BMG Music Entertainment v. Tenenbaum, 660 F.3d 487 (1st Cir. 2011). Third, as suggested by the GS hypothetical above, reading a specific knowledge requirement into the statute opens up a potentially massive loophole in the DMCA safe harbor regime. See Peter S. Menell, Jumping the Grooveshark: A Case Study in DMCA Safe Harbor Abuse (December 2011) <http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1975579>. It rewards willful blindness by imposing an undue burden on copyright owners to continually monitor websites that are designed to facilitate repeat infringement, resulting in an inefficient allocation of responsibility for policing infringing activity.
The Shelter Capital and YouTube cases expose several shortcomings of copyright protection in the Web 2.0 Age. The DMCA sought to foster “reasonable assurance” to copyright owners “that they will be protected against massive piracy” while insulating OSPs from copyright liability in the ordinary course of their operations so that they will make “the necessary investment in the expansion of the speed and capacity of the Internet.” See Senate Report No. 105–190, at 8. Congress can better effectuate these dual goals in the Web 2.0 Age by tightening the responsibilities of Web 2.0 OSPs while significantly recalibrating the statutory damage regime to avoid undue digital copyright windfalls.