Prof. Douglas Lichtman, UCLA School of Law
June 25, 2009

Ten years ago, a meaningful discussion of copyright law could focus almost exclusively on the federal copyright statute and its related case law.  At that time, the primary powers wielded by copyright holders were rights granted explicitly by the statute, such as the exclusive right to authorize duplication, and the exclusive right to authorize distribution.

The primary constraints on copyright power, meanwhile, were similarly found in statutory text.  Section 107, for example, forbade copyright holders from enforcing their rights against “fair uses” like parody and scholarship.  Section 102 made clear that copyright protection could not be used to restrict access to ideas, concepts, and principles.

In short, the relationships between and among authors, readers, viewers, and listeners were for the most part dictated by explicit statutory rules.

Today, while those traditional rules are obviously still important, a meaningful discussion of the law can’t help but also include technological protections.  I’m thinking here of things like the encryption technologies that serve to discourage consumers from making unauthorized copies of their DVDs, or the watermarks that to some degree allow copyright holders to detect when their audio or video shows up without permission on a site like YouTube.

These so-called “digital rights management” or DRM technologies are impacting every aspect of the copyright equation.  When they work, they can expand a copyright holder’s rights, giving that content provider the ability to control things like use and access, two aspects of control that are not formally covered by the copyright statute.

At the same time, DRM has been known to get in the way of things like fair use and first sale, two types of flexibility that copyright law used to defend when it was the statute that determined what was and wasn’t permissible.  On top of all that, DRM opens a Pandora’s box of other problems — security problems, for instance, if some DRM system gets hacked, and privacy problems when a DRM system is in part implemented by monitoring who does exactly what, with what content, and when.

In a new audio podcast, I pick up on these themes and present an hour-long, in-depth look at the law, technology, and strategy of DRM.

The audio thinks about DRM as a tool to discourage piracy; but, just as important, it also considers DRM’s other uses, for example DRM systems that track content not with an eye toward stopping the flow, but instead with an eye toward understanding who’s watching — something akin to what Nielsen ratings do for television.  I’m joined in the conversation by Professor Ed Felten, from Princeton University, and also my co-blogger here, Professor Randy Picker, from the law school at the University of Chicago.

The full audio can be streamed or downloaded.

Although the show engages a wide range of issues, I want to focus here on just two: first, the technology; and second, why people seem to hate DRM so much.

On the technology, three examples help to frame the space.  First, there are strategies like “watermarking” where a content owner in essence adds something to its content and then can watch for that special something as content travels through a player, a storage device, or the Internet.  The added something can be very specific — maybe the name of the original buyer — or something more general, like the name of the copyright holder.

Drawbacks to a watermarking strategy are that watermarks can be removed; watermarks don’t help for content that was released prior to the decision to use watermarking; and watermarking requires that some hardware or software system cooperate with the copyright holder, look for the mark, and then do something when it is detected.
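To make the idea concrete, here is a toy sketch in Python.  It is not any real watermarking scheme — the function names and the least-significant-bit trick are illustrative assumptions — but it shows the basic shape: something specific (say, a buyer ID) is folded into the content itself, and a cooperating system can later read it back out.

```python
def embed_watermark(pixels: bytes, mark: str) -> bytes:
    """Hide `mark` in the least-significant bit of each byte of `pixels`.

    Toy scheme only: a 2-byte length prefix, then the UTF-8 payload,
    one bit per cover byte.  Zeroing the low bits strips the mark --
    the removability drawback noted above.
    """
    payload = mark.encode("utf-8")
    header = len(payload).to_bytes(2, "big")
    bits = [(byte >> i) & 1
            for byte in header + payload
            for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("cover data too small for this mark")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite the low bit only
    return bytes(out)

def extract_watermark(pixels: bytes) -> str:
    """Read the mark back out of the low bits."""
    bits = [b & 1 for b in pixels]
    def read_bytes(start_bit: int, n: int) -> bytes:
        vals = []
        for j in range(n):
            v = 0
            for k in range(8):
                v = (v << 1) | bits[start_bit + j * 8 + k]
            vals.append(v)
        return bytes(vals)
    length = int.from_bytes(read_bytes(0, 2), "big")
    return read_bytes(16, length).decode("utf-8")
```

A watcher — a player, a storage device, a filter on a website — would run something like `extract_watermark` and react when a recognizable mark appears.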

A second strategy concept is fingerprinting.  This time, nothing is added to the content; instead, some computer algorithm looks or listens to the content and tries to identify it based on (say) the pattern of sounds involved, or the range of colors shown.  Fingerprinting is wonderfully flexible in the sense that you can take the fingerprint of content even after it’s been released, and, for the same reasons, you can change a fingerprinting algorithm again and again without needing to touch the actual content itself.

Drawbacks, though, come in the category of precision.  Fingerprinting systems have to be able to treat as the same two clips that differ along only a trivial dimension, or else clever copyists will avoid detection by making an inconsequential tweak; but they can’t be too trigger-happy, or else the system will get bogged down with false positives.
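That precision trade-off can be sketched in a few lines of Python.  This is a deliberately crude stand-in for a real fingerprinting algorithm (the block size and quantization step are arbitrary assumptions), but it captures the core move: summarize the content coarsely enough that trivial tweaks wash out, then hash the summary.

```python
import hashlib

def fingerprint(samples: list, block: int = 100) -> str:
    """Toy content fingerprint: average coarse blocks, quantize hard, hash.

    Coarse quantization lets the print survive inconsequential tweaks;
    make it too coarse, though, and unrelated clips start to collide --
    the false-positive problem described above.
    """
    sig = []
    for i in range(0, len(samples) - block + 1, block):
        avg = sum(samples[i:i + block]) / block
        sig.append(round(avg / 10))  # heavy quantization absorbs small edits
    return hashlib.sha256(str(sig).encode()).hexdigest()[:16]
```

Two clips that differ only by a small perturbation map to the same fingerprint, while a genuinely different clip does not — and, because nothing was added to the content, the algorithm can be swapped out at any time.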

A third strategy concept is containment, where this time the approach is to encrypt the content and then control access to the decryption information.  Limitations here are the obvious ones: The decryption key can get out; and, even if it doesn’t, at some point you have to allow customers to watch, hear, or otherwise experience the content — and when that happens, by definition the content is no longer encrypted.  Ed Felten refers to this as the “analog hole.”
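A minimal sketch of the containment idea, in Python, might look like the following.  The XOR-with-hashed-keystream construction here is a toy stand-in, not a real DRM cipher, and the key and content names are invented for illustration — but it shows both halves of the strategy: the content travels encrypted, and the moment of playback is exactly the moment the protection falls away.

```python
import hashlib
from itertools import count

def keystream(key: bytes):
    """Endless keystream derived from the key (toy stand-in for a real cipher)."""
    for i in count():
        yield from hashlib.sha256(key + i.to_bytes(8, "big")).digest()

def xor_crypt(data: bytes, key: bytes) -> bytes:
    """XOR the data against the keystream; the same call encrypts and decrypts."""
    return bytes(b ^ k for b, k in zip(data, keystream(key)))

content = b"feature film bits"
key = hashlib.sha256(b"license-for-device-1234").digest()  # hypothetical license key

locked = xor_crypt(content, key)        # content ships in this form
# at playback the licensed player decrypts -- and at that instant the
# cleartext exists again, ready for capture: the "analog hole"
assert xor_crypt(locked, key) == content
```

Everything then turns on who holds the key and what the decrypting device is willing to do — which is why containment systems lean so heavily on cooperative, tamper-resistant players.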

That all sounds somewhat reasonable.  So why do people hate DRM so much?

One problem is that consumers have grown accustomed to certain practices and flexibilities that were par for the course back when (a) the boundary between permissible and impermissible acts was drawn not by technology, but by statute, and (b) those legal constraints mattered only to the extent that they could actually be enforced anyway.  Technology thus marks a real change to the rules of the road.  It shifts the aforementioned boundary, and it makes the resulting rules far more likely to be enforced.

A second, related problem is that copyright owners have had a hard time honestly forewarning potential consumers about exactly what a given DRM system will and will not allow.  Jason Schultz, who directs the Samuelson Law, Technology & Public Policy Clinic at Berkeley, raises this issue by pointing to the huge pile of mouse-print legalese that stands in the way of even a relatively simple digital transaction, like downloading a song or application to an iPhone.

This difficulty of communicating, combined with those historical expectations mentioned above, in the end sets up a lot of interactions where consumers think they are buying one thing, actually get another, and thus end up understandably hostile toward DRM.

But there’s more.

Another reason that consumers hate DRM is that DRM systems have sometimes caused real harm.  The main example here is security problems.  A typical DRM system will reduce the relevant user’s control over and understanding of his own computer system.  This is almost unavoidable, given that a DRM system is typically trying to limit the user’s degrees of freedom, and if it did that in a completely above-the-board implementation the user would simply disable it.

But when your computer keeps secrets from you, security is of course compromised, because malicious parties can plausibly hide in those same shadows and then use the darkness to do far worse than merely make an unauthorized extra copy.

There are yet additional explanations for why consumers hate DRM — and, beyond that, a whole host of interesting strategic questions about when a content owner might nevertheless want to use DRM, and when consumers are in fact (begrudgingly) better off because of what DRM can do.  For the details, and free CLE credit too, join us.