UC Berkeley Report: Revitalize the Library and Empower the People Who Operate It

In 2010, the University of California, Berkeley Libraries began a “self review” process, which grew into a much more comprehensive review endorsed by the campus and led by a “blue ribbon” faculty committee: the Commission on the Future of the UC Berkeley Library. For a full year, the commission studied the library, the profession, scholarly communications, and the many opportunities and challenges that lie ahead. It delivered its report to the campus administration on October 14, and the report is now available online.

Many eyes will be drawn to the financial recommendations, which are broad in scope: five million dollars in one-time funding to make up for lost time, and new funding each year at that same level. This proposal is not unprecedented; in the late 1990s, Chancellor Robert Berdahl made a similarly-sized investment in revitalizing the campus libraries. Now, newly inaugurated Chancellor Nicholas Dirks will have an opportunity to once again revitalize a great institution. Moreover, the recommendations of the report—read the executive summary—are audaciously forward-looking and paint a picture of an academic enterprise that is lean, innovative, creative, assertive, and not least, keenly interested in empowering people.

Point of disclosure: I manage an Affiliated Library, which is outside the reporting structure of the University Librarian. Affiliated Libraries report to deans, directors and department heads, but their collections data and other metrics are reported to the Association of Research Libraries as part of campus totals. The Affiliated Libraries work in close cooperation with The University Library.

A Key Finding: Librarians are GREAT

It’s heartening to see a group of faculty members state, in unanimity, that the human beings who run research libraries are a vital resource. Moreover, in an era of digital information (and misinformation), they have become more important to scholars, not less, in the business of helping people find, analyze, and interpret what they need to know. In my view this is the most exciting of the many proactive recommendations made in the report. I urge anyone who reads my blog to check it out.

The press has also taken note.  Steve F. Brown, writing in the San Francisco Business Times, offers a very good “Cliff Notes” summary of the report and the issues.

Read the SF Business Times article: 


Read the Report:


Notes from a 2012 User Survey

(This article appeared in the May issue of Computers in Libraries magazine.)

By now nearly every large research library is deeply involved in user surveys, and we depend on the data they produce as we craft flexible goals for changing times. Survey data are even more important when external review committees are charged to review the library’s mission, because evidence-based metrics provide a crucial basis for understanding and rating performance and service goals. Indeed, the data often fall out in our favor, as local history has shown. In 1997 UC Berkeley charged a “blue ribbon committee” to review The University Library in an era of state budget shortfalls. The University Library engaged its community quite effectively, relying on survey data as part of the process. In 2012, another blue ribbon committee was formed for the same reasons. The 1997 venture garnered strong support for building the staff and collections budgets, and raised consciousness levels in general. The current committee, hard at work as we go to press, has an updated charge to identify the most important objectives for all library services. Intensive survey work preceded the formation of the blue ribbon committee and definitely influenced its charge.

Although research libraries vary quite a bit in their operational structure, there are commonalities among peer institutions, and so survey work is always broadly interesting. With that in mind, I will leave aside my thoughts about the “destacked” library, and focus on the survey responses that were gathered on the Berkeley campus during fall 2012.

“Central to the University’s Mission”

After years of tight budgets, the University Library at UC Berkeley found itself once again facing many hard decisions about its near and long-term future. Library leaders responded proactively in 2012, launching an internal “re-envisioning” process. Once they had data in hand, they took their findings to the faculty and campus administration. The survey reached more than 4,000 faculty, students, and staff. Read all about it on the Library’s web site (see Attitudes toward Re-Envisioning the UC Berkeley Library, an Online Survey of the UC Campus Community, http://www.lib.berkeley.edu/AboutLibrary/re_envision.html ).

The re-envisioning process and accompanying survey data attracted favorable attention, and a blue ribbon committee was charged to study the situation and make strategic recommendations. Survey data were influential from the get-go. The survey produced one of those “keystone” metrics we always hope to achieve: seven in 10 users said they used the library a “great deal” or a “fair amount.” The data suggest that the library is central to the University’s mission, to be sure, but another intriguing mindset was revealed. It turns out that “liking” the library inevitably means liking librarians, and the majority of survey respondents hold them in high esteem. Indeed, the survey also confirmed that the interactions we have with patrons leave them better off than they were before. This finding is a crucial value point that we need to return to over and over again, especially since debates about library space and budgets often tilt toward self-service and self-directed research models as a solution to staffing challenges.

The value of “high touch” services is confirmed elsewhere in the survey, too. At its core, the surveyors wanted to discover whether patrons preferred “stand alone, full service” subject libraries (of which there are many on campus), or whether they would accept “hub and cluster” service points. The latter would allow a much-reduced staff to pool their efforts—a big savings, as it would give the University a basis for permanently reducing the total FTE in the library. But graduate and undergraduate students expressed greater comfort with the full-service approach, which was welcome news. The faculty split 50-50 on this point, but a 50 percent favorable rating for stand alone, full service locations is nonetheless a compelling number.

“High Quality Collections?”

The responses on collection issues are particularly intriguing. The need to maintain high quality collections was a clear priority among all segments of the respondents. Moreover, they agreed that keeping collection funding levels robust was more important than trying to keep every service location open. This implies that in order to preserve collections and help them grow, the campus user community accepts—on some level—that service locations may need to be consolidated.

However, it is hard for library patrons to grasp that collections require a substantial amount of working hours to develop, maintain and preserve. It is easier to view collections as complete artifacts that are just waiting to be tapped, whether in digital or print form, and to miss the fact that they are the work of collection developers. But that belief may be changing. Another metric shows that even though “collections” stand as a distinct priority for the library, librarians were once again highly valued. When queried, users ranked the tasks of selection, cataloging and archiving, reference, and (interestingly) instruction as important.

These kinds of responses cannot help but direct the attention of external committees to the value of the library staff. This was timely information in fall 2012, as the University Library had lost a significant number of librarians—including many selectors—to emeritus status. Ultimately the data show that library users are beginning to perceive that “high quality collections” cannot be achieved without a high quality academic staff.

“Space” and “Place” Still Matter

If everybody can study and access information resources online, who needs library space? Well, it turns out that a lot of the people surveyed value that space a lot. One out of four undergraduates stated that 24-hour access at least five days a week was a top priority. At the same time, one out of four respondents also said that they prefer to access information solely by digital means.

It would appear that student study habits are by no means homogeneous, and it is vital to remember this. Indeed, the optics of library space “in action” are glaringly clear. In any library on the central UC Berkeley campus, many if not most seats are taken by students who are studying—online. Even as digital scholarship advances, there is continued value to the premise of the library commons. It may be that libraries are a bellwether of the limits of just how “digital” students wish to be; congregating together to study, whether in quiet or social space, remains a priority and a key aspect of student life.

We’re Still Digital-Plus-Print

There’s another question that arises from the experience and preferences of the 25 percent group of digital-only users: what does this low (but significant) percentage imply about the habits of the other 75 percent? Evidence from this survey and from other sources indicates that students continue to read textbooks and course readers, even if their e-readers are more lightweight. This is a datum worth paying attention to, as it confirms that individual preferences still dictate media choices.

I was particularly struck by a scene I recently encountered at the campus gym during a lunch break. In the stretching area, one student was doing “core strengthening” exercises (holding herself rigid in the form of a push-up) while studying a course reader, on the floor below. This double duty workout lasted as long as I was in the stretching room, about 15 minutes. Meanwhile, out in the aerobics atrium, every cardio machine user had either a book or a smartphone to study with or listen to music. But the hardware did not outnumber the print books—quite the opposite.

So: it would seem that libraries still need to plan for costly digital services while maintaining equally costly print acquisition programs. In order to do that, there is simply no way around the fact that funds are needed, and that cutting them means cutting something that is valued. Tough decisions abound, but this one is among the toughest, because administrators would like to save funds by going digital. What they need to hear—whether from survey data or from us—is that we are still in a hybrid environment, and we should be cautious in our conclusions about the right mix of print-plus-digital media.

Willingness to Compromise

My sense is that this particular survey uncovered a widely held set of beliefs among a diverse survey population, which included a balance of faculty, students, and staff. The users demonstrated broad awareness that years of budget cuts and staff reductions have had far-reaching consequences all over campus, and they are concerned. But they also perceive that it may no longer be possible to recreate the perhaps “elysian” dream of a fully staffed and fully funded research library. Instead, they are signaling acceptance of the idea that a new library service model might pose challenges, but that it might also bring new benefits to the campus.

This openness to change brings at least one major challenge, but also two opportunities, in my opinion. The challenge lies in how to frame a vision for the library’s future that folds negotiation and compromise by all stakeholders into a transparent process. That is never easy, but it is always worth the effort.

The opportunities are surely worth that effort. First, open minds create space for dialogue and learning, and if we emphasize communication and openness, a new library service model could integrate what is important both for users and for the profession. Second, if we are going to launch a new library service model, we had better make sure that it provides ample opportunity for “high touch” experiences. It is through meeting the library staff that users gain an appreciation for what they do, and the survey data bear this out. Whether the new service model deploys hubs and clusters, or full service branch libraries, success will hinge on building the model around the people who work in libraries because they are the agents who build loyalty, offer information counsel, and can help to interpret the digital future.

Data Discovery and Data Curation Go Hand in Hand

In just a few short years, data curation has been widely embraced by the profession and is recognized by many as an emerging core competency. The reasons are many, but the power of the web as a platform for mashing up diverse data sources is certainly a key factor. New government regulations require researchers to share data compiled in grant-funded research, which also provides a powerful incentive for taking a fresh look at how data can be preserved. In 2011, the Association of Research Libraries published an excellent summation of the potential of data curation for the library profession, titled “New Roles for New Times: Digital Curation for Preservation” (See http://www.arl.org/bm~doc/nrnt_digital_curation17mar11.pdf). This report was prescient in arguing that the volume of data and the need to preserve it is opening new opportunities for librarians to take center stage as collaborators.

Exciting times to be sure, but with all the new energy surrounding data curation of web- and crowd-sourced information, it is important to remember that new discovery techniques can also uncover fresh value in conventional data resources, particularly those that are generated by public mandate. For my part, I believe that there are significant “sleeper cells” of useful data—much of it gathered by public institutions—and these data can add value when they are added to born digital, linked data sets.

Many public information databases are compiled with a single need in mind: regulating construction permits, monitoring the growth of electrical grids, and so on. These data are often in digital formats, and they can be added to web- or cloud-based resources and used in ways that may not have been foreseen by the agencies that compile them. The trick is to recognize not only the primary purpose for which the data are collected, but also the value the data might have in different contexts. With that in mind, I will offer two examples of how data resources can empower new ideas in the broadest sense, and I will also share an old-fashioned data acquisition story “from the trenches.” The story shows how local data gathered by a public agency made the crucial difference in a research project—and suggests how it might gain value as part of larger-scale data analysis.

Big Data, Big Results

One of the best aspects of working with linked data is the ability to combine diverse sources of information and then extrapolate more nuanced meaning from the improved data set. This trend is accelerating, and currently it focuses on “new” and exciting areas such as crowd-sourced data generation and online consumer behavior-tracking. Rightly so: President Obama’s reelection campaign used data-driven strategies alongside its political and rhetorical vision, to considerable advantage. The 2012 U.S. elections proved beyond a doubt that smart data, carefully deployed, was worth more than the hundreds of millions of dollars that were hurled at the general electorate. The overall electoral cycle demonstrated that big data is recognized by politicians and entrepreneurs, as well as academics.

In the academic sphere, big data have created all-new approaches to research. The New York Times published an interesting update on how humanists can now analyze thousands of online novels (see The New York Times, January 27, 2013, p. B3). The article describes how Matthew L. Jockers at the University of Nebraska-Lincoln conducted word- and phrase-level textual analysis of digital books to study long-term language patterns. The much larger sample revealed not only how authors use words, but also how they inspire other authors over the years. One surprise finding: a relatively small number of authors have had an outsized impact on other writers, with Jane Austen and Sir Walter Scott at the forefront. This analytical approach is groundbreaking, insofar as it goes beyond the limitations imposed by much smaller samples of literature. The data application enables researchers to place authors in a larger historical context in ways that were not possible before.
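Jockers’ actual methods are far more sophisticated, but the core move of counting words and phrases across a corpus too large for any human to read can be sketched in a few lines of Python. Everything below (the two-“novel” toy corpus, the simple tokenizer) is illustrative only, not his implementation.

```python
from collections import Counter
import re

def word_frequencies(text):
    """Tokenize a text into lowercase words and count them."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

# Toy "corpus" standing in for thousands of digitized novels.
corpus = {
    "novel_a": "It is a truth universally acknowledged that a single man...",
    "novel_b": "The truth of the matter is that the man was single.",
}

# Aggregate counts across the whole corpus; at scale, the same
# loop simply runs over thousands of full-text files.
totals = Counter()
for title, text in corpus.items():
    totals += word_frequencies(text)

print(totals.most_common(3))
```

At corpus scale, comparing each author’s frequency profile against the aggregate is one simple way such analyses surface outsized influences like Austen and Scott.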

Data driven political campaigns and large scale literature analysis demonstrate the blue sky nature of big data—and the attendant opportunities to curate the data that is being produced. Yet even as the new frontier expands at a rapid rate, it is still possible to find value in existing data sources. In my opinion, big data applications and data curation will reach their fullest potential when all sources, both old and new, are reexamined with the new tools.

New Value from Not-So-New Data

Not all data worth curating are born on the web. Agencies that oversee construction variances, hospitals, nursing homes, public works, and public health all gather data, but in many cases, their charge is to gather data for a single, specific purpose. The expected “data deliverable” might be tabular information for policy makers and urban planners, flowing from the stream of new construction permits, or other relatively mundane activities. It is easy to assume that such data may be well-targeted, but do not have transferable value. The following example of wage research proves the opposite.

During the 2012 election season, one of our researchers was monitoring “living wage” campaigns across the country and was very interested to see how they would fare. In the political discourse surrounding this issue, many voices argue that increasing the minimum wage is bad for business, raising costs and placing a burden on small firms in particular. Others argue that increasing low wages in nominal increments—75 cents, for example—has a negligible effect on the economy, and yet they help household incomes significantly. Our researcher wanted to assess the actual performance and policy ramifications of living wages to shed light on the debate, and needed help.

He needed to gather employee data on every fast food restaurant in a specific metropolitan region. Easily accessible sources indicated that there were more than 3500 establishments in all. Yet within that category, movie theaters, gas station convenience stores, and other purveyors of food-on-the-go needed to be winnowed out. None of the obvious data sources could provide such a pinpointed sample.

One of the library staff contacted the county agency that monitors food safety in restaurants, and eventually got through to their information technology department. She learned that the agency had detailed data on every establishment, including the exact number of employees at each location. This was the data our researcher needed to analyze low wage market dynamics and write a policy brief—just three weeks before the election.

The agency monitors restaurants for compliance with public health regulations. But—and this is a big but—that is literally all they are concerned about. They gather detailed data, but the data are only of interest when they find a safety infraction and must fine the offending restaurant. In our case, we had no interest in restaurant health and safety, but we very much wanted to know employee counts at every restaurant location. This sample would be useful as a basis for testing how living wage policies played out “on the ground.” The agency had exactly what we wanted, and we asked if they would be willing to share the data set with us.

The IT manager agreed, with the proviso that no information about regulatory compliance would be sent to us—just the whole list of restaurants and their employee counts. Once this was agreed upon, it took a few days to receive a data file that had all of what we wanted.

These data provide a comprehensive resource for labor economists, and they will retain their value over the long term. Moreover, good relations with the regulatory agency have established a foundation for receiving data updates periodically. The dataset will also have added value if it is mashed together with other resources, such as state- and national level employee data, or coupled with Web- and cloud-based news and information about restaurants in the region.
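As a sketch of the kind of mash-up described above, the snippet below joins a simplified agency-style extract to a second data source keyed on zip code. The field names, establishments, and figures are all invented for illustration; the real agency file would of course be larger and messier.

```python
import csv
import io

# Hypothetical extract of the county agency file: one row per
# establishment, with the employee count the researcher needed.
agency_csv = """establishment,zip,employees
Burger Barn,94702,14
Taco Time,94703,9
Pizza Plaza,94702,22
"""

# A second, broader source: hypothetical local wage data by zip code.
wage_by_zip = {"94702": 11.50, "94703": 12.25}

# Join the local agency data to the wage data on zip code, the kind
# of merge that adds value to both sources at once.
merged = []
for row in csv.DictReader(io.StringIO(agency_csv)):
    row["local_wage"] = wage_by_zip.get(row["zip"])
    merged.append(row)

total_employees = sum(int(r["employees"]) for r in merged)
print(total_employees)  # 45
```

The same join logic scales up naturally: swap the toy wage table for state- or national-level employment series, and the pinpointed local sample enriches the larger picture.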

Curate—But Counsel Too

This reference story drives home the fact that even while we are moving full-speed into an era when crowd-sourced, web-crawled, and tagged data are creating wholly new avenues for research, value still remains in ongoing data acquisition programs. Many public agencies produce data, and more often than not, they are well-managed and have a service mentality. When locally-gathered data of this nature are obtained and merged with other larger sources, the specificity of the local enriches the “big picture” that big data can reveal.

The emergence of big data research practices, which is revolutionizing how people parse data sets large and small, can actually strengthen the impact of library discovery skills. As a result, information professionals stand to benefit not only through digital curation and involvement in big data analysis, but also through the ongoing practice of reference and resource discovery. Because of this, I believe it is important to promote our research and discovery acumen in the same manner that we are currently promoting the library as the “solution lab” for data curation. As admirable as that effort is, curation alone is, in my opinion, just half of the needed strategy. The crucial balance may be found by remembering that the skills inherent in reference work—discovery, pattern recognition, and analysis—offer a powerful means to convey our value proposition not only as data curators, but also as information counselors with advanced data acquisition skills.

This column appeared in Computers in Libraries, Vol. 33 (No. 3), April 2013.

Duking it Out in the E-Book’s “Wild West” Marketplace

(NOTE: This article appeared in Computers in Libraries 33 (No. 1), Jan-Feb 2013. In light of current litigation, I’m posting it to information | mixology.)


The e-book is a new medium, but it follows many other breakthrough products with histories of disruption, adoption, market acceptance, and the forging of new business relationships. Perennials such as CD-ROMs, DVDs, and iPods come to mind, as each of these technologies triggered important changes in commerce and entertainment. The disruption was real and has caused serious distress for publishers, but there is no getting around the fact that we are in a new era now. Publishers have gained expertise in digital media and are engaged in intensive experimentation. They are taking big risks with e-books and trying innovative new pricing models. And they are playing a tough game to protect revenue.

The e-book market is moving at “warp speed” and it is hard to stay abreast of events. Fortunately, librarians have been lucky in our leadership. The American Library Association has been very assertive in advocating for the most expansive model for acquiring and loaning e-books. The result has been a lot of “dialogue,” some tough new policies from the largest publishers, and a sense that it is hard to know what is going to happen next. Authors are involved too, and have their own turf to protect.

With so much ferment, what strategies should librarians adopt to become central to the e-book market? Also, what are the best avenues for revitalizing our long-term relationships with publishers? I see two fundamental strengths that might inform our actions. The first is our close relationship with our user communities. The second is a combination of two related sources of knowledge: how to perceive the e-book “market” from a user perspective, and how to collaborate. A collaborator understands the importance of keeping a balance between open access and making a profit—and that kind of awareness may be the “glue” that keeps libraries and publishers in conversation. Even so, the next few years might be bumpy for e-book collaborators. Here are a few signposts of the times, and some thoughts about where we are collectively going.

Borrowing, Buying and Both

2012 has been a year for learning a lot about e-books—and recognizing that we need to know even more. We need better data, too. Blogger Jeremy Greenfield is one great source of intelligence. He is a journalist who follows the e-book industry, both on his own blog and for Forbes (see digitalbookworld.com). In June 2012 he reported on something many of us probably know:  libraries and publishers don’t understand each other. Publishers don’t “get” the operational side of libraries, and ALA President Molly Raphael allowed that librarians have more to learn about the e-book market and its effect on publishers and distributors. The result has been a great deal of dialogue, and that’s a good thing. We are talking intensively to publishers to advocate for better access to e-books and reasonable curbs on pricing.

We have good company in our quest to understand how to deploy e-books, too. Greenfield looks to the research findings of the Pew Internet & American Life Project, which has long been a leading voice in analyzing disruptive technologies. Pew reports that e-book consumers are likely both to buy and borrow e-books. What a fascinating approach; it suggests that there is personal joy and comfort in “owning” a digital copy of, say, Tolkien’s The Silmarillion, while you might just want to “borrow” a new book by Dean Koontz to have an enjoyable, one-time read.

So: buying, borrowing, and both: it’s a user’s solution to a complex market, and it works. As an iPad Mini user, I enjoy checking out what e-books other people keep on their tablets whenever they allow me to have a look. Here’s what I find: a collection of favorite books that tends to grow, slowly but surely. I also see a smart shopper’s independent streak concerning “where” they do their buying and “borrowing.” The Kindle app is widely used on iPads, even though it is an arch-competitor to iBooks. This open-minded “collection building” and shopping suggests a deep love for the artifact, even in digital form, as well as interest in value-shopping in every direction.

It’s hard not to love books that enrich our lives, and it looks like the digital version, read on a retina screen, preserves that crucial value. But “process” matters too. Some people favor bookstores, others prefer Amazon Prime, and still more trawl through Apple’s ecosystem of goodies. Many are still in the process of deciding what they like best. That’s a very big unknown factor for publishers, and our familiarity with user behavior gives us an edge. For example, a friend of mine recently bought a Nook e-reader and set herself up with a library of favorite titles and authors. Within a month, she was back to print, because she didn’t enjoy the process and experience of e-reading.

The Risk of Rhetoric

It would be an understatement to say that libraries and publishers are worlds apart in how they approach the challenge of e-reading. One of the biggest risks is that frank and committed dialogue might give way to rhetorical warfare. Many advocates of open access already feel that large publishers, such as HarperCollins with its 26-times-only policy, or Random House, with its mammoth price increase to compensate for simultaneous and persistent access, have gone over to the “dark side.” Fortunately, ALA has taken a lead in trying to forge common understandings, which is helpful, since Jeremy Greenfield reports that 67.2 percent of libraries have been loaning e-books since 2011. Moreover, experimentation is essential, but publishers face a serious obstacle: they cannot collaborate to set prices without facing antitrust litigation. In the resulting free-for-all, every e-book publisher must come up with its own pricing plan. In some ways, the current e-book market has a wild west, “Dodge City” feel.

What strategies can librarians use in the “Dodge City” of e-book pricing? I can think of two: stay close to our user communities and make sure they know we are advocating for them, and keep a place at the table to debate a fair balance that addresses the needs of publishers, distributors, and libraries as collaborators.


Other entertainment industries, particularly cinema and music, offer some guidance on how to sell and how to price. But once again it is worth noting that conditions change fast. The iTunes library faces competition from subscription services such as Spotify, and the market may change in the near term. But some of the lessons learned may be worth a look. As blogger Mike Shatzkin reports, Hollywood has perfected the art of “windowing”—delaying the release of DVDs until new movies have had a chance to earn their keep at the box office. Movie studios are reluctant to hand over their entire catalogs to Amazon and Netflix, for good reason, if they can still sell DVDs first.

The windowing approach is an intriguing alternative for publishers, distributors, and libraries, but it has some built-in shortcomings. Most people want to read their favorite authors right away, and many people (myself included) reserve copies of new releases months before they appear in print. Would library patrons accept waiting one or two years to borrow an e-book? That seems like a stretch. My theory, therefore, is that publishers can certainly try a windowing approach, delaying the release of e-books and perhaps employing a significant markdown, but I think they may face a reader backlash. Social media give activists a very handy tool for registering their dissatisfaction. Perhaps the e-book market will spawn a “reader’s guild” of activists, who could use the power of social media to shape policy.

What’s Needed: Unity

My first career was in independent bookselling, and for that reason I follow the publishing business closely. I find that the many “year of the e-book” debates that are running at full steam follow common threads that go back as far as the release of the mass-market paperback, which was seen as a force of doom for publishers—but was anything but that. The eventual outcome of the e-book debate carries high stakes for publishers, distributors, and libraries, but there is some good news too, showing up among all three stakeholders. Publishers have become much more skilled in handling digital media, and this is making them less conservative. We can now expect some healthy innovation from them. Distributors are crucial players in the sales process, and they have gained more clout. Perhaps they too will push back on pricing and access restrictions as a form of self-preservation. If so this may help consumers. Librarians have become the most articulate advocates for the importance of open access and fair use; we have done our homework and have a compelling “social good” to use as a rallying cry.

Each group has gained through innovation, and yet each has more to learn about a very important function of markets: mutual benefit. At a time when the e-books debate threatens to push players into armed camps, it is vital to find common ground and build unity. If we fail to do that, we should have the courage to admit that the real losers will be readers themselves, who rightly expect us to do a better job of managing the emerging e-book market.

Mobile Apps: New Library Branding Opportunity

From my March 2013 column in Computers in Libraries magazine. Column title: “Using Apps to Extend the Library’s Brand”

“Our first reaction to the app revolution was to design our own, particularly to link to online catalogs and e-journals. But in a few short years, the array of apps, both those that got their start in the library world and those that are commercial, has enabled tablets and smartphones to do two things quite effectively. The first is to create an enjoyable online experience that fits on your phone and tablet, offering many kinds of bibliographic services in a convenient way. The second is the more tantalizing: the app ecosystem is giving libraries a new opportunity to “brand” their services by placing them on a treasured high-tech “toy” that people carry with them everywhere.”

Computers in Libraries 33, No. 2, March 2013, pp. 27-29.

“From Bibliometrics to ‘Altmetrics’”

The latest C&RL News has a very useful article that describes how we are quickly moving beyond the traditional “journal impact factor” as a single, definitive means of ranking scholarly works. The article also explores new resources and strategies for ranking and evaluating works in new media. This is not a new concept, but the ascendance of social media and new ways of publishing online have accelerated it, and as a result faculty members are much more concerned about how to establish credit for their work than they were just a few years ago. Have a look at:
Robin Chin Roemer & Rachel Borchardt,

“From bibliometrics to altmetrics: A changing scholarly landscape.” C&RL News 73, No. 10, November 2012, pp. 596-600. http://crln.acrl.org/content/current


Random House Revises its “Libraries ‘Own’ Their Ebooks” Stance | Techdirt

Thanks go to Blake Carver for spotting this well-written analysis of the tension between publishers’ current goals for ebooks and library advocacy for fair use and the importance of building e-collections.

Turns Out When Random House Said Libraries ‘Own’ Their Ebooks, It Meant, ‘No, They Don’t Own Them’ | Techdirt.

Authors’ Guild, et al. v. HathiTrust et al. – Decision Summary

This summary was prepared by Brandon Butler for the Association of Research Libraries, and it provides a succinct but thorough overview of the decision.

The full decision is available on Scribd at http://www.scribd.com/doc/109647049/HathiTrust-Opinion

–This is a momentous and favorable development that strengthens the principle of fair use–TKH

Authors’ Guild, et al. v. HathiTrust et al. – Decision Summary

Prepared by Brandon Butler for the Association of Research Libraries


Authors’ Guild (AG) and individual authors sued HathiTrust (HT) and individual members, alleging that mass digitization was an infringement of copyright, as was the (suspended) Orphan Works Project. HT responded that fair use applied, among other defenses. The parties filed motions for summary judgment on these questions. The opinion was issued 10/10/2012 by Judge Harold Baer, Southern District of New York.


1. AG lacks standing – The court held that the AG does not have standing to sue because the Copyright Act allows only the “legal or beneficial owner of a copyright” to bring a lawsuit for infringement. AG is not suing over rights it owns, but rather on behalf of rights its members own. Some laws allow this kind of lawsuit, but the Copyright Act does not. (Judge Chin has allowed the AG to sue Google on behalf of its members, but Judge Baer argues that this is only because Google did not raise the issue of standing under the Copyright Act; if a defendant doesn’t raise the issue, the judge need not decide it.)

2. Section 108 does not preempt Fair Use – The court held that fair use is a supplement to Section 108, and, contrary to the AG’s arguments, libraries are entitled to a full fair use defense and are not required to rely only on §108. The Library Copyright Alliance amicus brief was mentioned in support of this holding.

3. Authors cannot sue over a future Orphan Works Project – The court held that because the project had been suspended, there was no way to judge what harm, if any, a renewed Project might cause. The dispute is not ripe for decision. Authors can sue over orphan works if and when a new program gets under way.

4. Mass digitization for search, preservation, and accessibility is a fair use – The court found that all of HT’s uses are decisively fair. They are non-profit, educational uses, and two of HT’s purposes (search and accessibility) are “transformative,” because the works are used for a purpose different from the original, intended purpose. The court said use of the entire work is fair where appropriate to the purpose, as it is here. Finally, the court pointed to evidence showing that a market likely could not develop for licensing these kinds of uses, and further that, again because they are transformative, these uses cannot be made subject to licenses. The court also dismissed as unsubstantiated the security concerns that had been a central part of the AG’s public statements about HT; AG had provided no reason to doubt the effectiveness of the complex security system that HT described at trial.

5. The ADA requires, and the Chafee Amendment allows, mass digitization for accessibility – Making library collections equally accessible is required for equal access to education for the print disabled. The market will not satisfy the need. Chafee arguably applies because the ADA makes accessibility a “primary mission” for all libraries. Even if Chafee does not apply, fair use does.

New JISC (UK) Report on Effective Use of eBooks

While we’re on the topic of eBooks:

Stephen Abram reported on JISC’s upcoming report on the effective use of eBooks, which promises to be excellent reading. Interestingly, as of fall 2012 a large percentage of undergraduates surveyed at UC Berkeley expressed a preference for print textbooks. However, I foresee that those numbers will remain fluid for the rest of the academic year.

You can read and comment on the report at the following link: