Data Discovery and Data Curation Go Hand in Hand

In just a few short years, data curation has been widely embraced by the profession and is recognized by many as an emerging core competency. The reasons are many, but the power of the web as a platform for mashing up diverse data sources is certainly a key factor. New government regulations require researchers to share data compiled in grant-funded research, which also provides a powerful incentive for taking a fresh look at how data can be preserved. In 2011, the Association of Research Libraries published an excellent summation of the potential of data curation for the library profession, titled “New Roles for New Times: Digital Curation for Preservation” (see http://www.arl.org/bm~doc/nrnt_digital_curation17mar11.pdf). The report was prescient in arguing that the volume of data, and the need to preserve it, are opening new opportunities for librarians to take center stage as collaborators.

Exciting times to be sure, but with all the new energy surrounding the curation of web- and crowd-sourced information, it is important to remember that new discovery techniques can also uncover fresh value in conventional data resources, particularly those generated by public mandate. For my part, I believe there are significant “sleeper cells” of useful data—much of it gathered by public institutions—and these data can add value when combined with born-digital, linked data sets.

Many public information databases are compiled with a single need in mind: regulating construction permits, monitoring the growth of electrical grids, and so on. These data are often in digital formats, and they can be added to web- or cloud-based resources and used in ways the compiling agencies may never have foreseen. The trick is to recognize not only the primary purpose for which the data were collected, but also the value they might have in different contexts. With that in mind, I will offer two examples of how data resources can empower new ideas in the broadest sense, and I will also share an old-fashioned data acquisition story “from the trenches.” The story shows how local data gathered by a public agency made the crucial difference in a research project—and suggests how it might gain value as part of larger-scale data analysis.

Big Data, Big Results

One of the best aspects of working with linked data is the ability to combine diverse sources of information and then extrapolate more nuanced meaning from the improved data set. This trend is accelerating, and much of the current attention focuses on “new” and exciting areas such as crowd-sourced data generation and online consumer behavior tracking. Rightly so: President Obama’s reelection campaign used data-driven strategies alongside its political and rhetorical vision, to considerable advantage. The 2012 U.S. elections proved beyond a doubt that smart data, carefully deployed, were worth more than the hundreds of millions of dollars hurled at the general electorate. The electoral cycle also demonstrated that big data is now taken seriously by politicians and entrepreneurs as well as academics.

In the academic sphere, big data have created all-new approaches to research. The New York Times published an interesting update on how humanists can now analyze thousands of online novels (see The New York Times, January 27, 2013, p. B3). The article describes how Matthew L. Jockers at the University of Nebraska-Lincoln conducted word- and phrase-level textual analysis of digital books to study long-term language patterns. The much larger sample revealed not only how authors use words, but also how they inspire other authors over the years. One surprise finding: a relatively small number of authors have had an outsized impact on other writers, with Jane Austen and Sir Walter Scott at the forefront. This analytical approach is groundbreaking insofar as it moves beyond the limitations of much smaller samples of literature, enabling researchers to place authors in a larger historical context in ways that were not possible before.
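For readers who want a concrete sense of what word- and phrase-level counting involves, a minimal sketch in Python might look like the following. It is only an illustration, not Jockers' actual methodology, and the "novels" folder of plain-text files is a hypothetical stand-in for a real corpus.

    # Illustrative sketch only: count word and two-word-phrase frequencies
    # across a hypothetical folder of plain-text novels.
    import re
    from collections import Counter
    from pathlib import Path

    word_counts = Counter()
    bigram_counts = Counter()

    for path in Path("novels").glob("*.txt"):   # one novel per file (hypothetical layout)
        text = path.read_text(encoding="utf-8", errors="ignore").lower()
        words = re.findall(r"[a-z']+", text)    # crude tokenization
        word_counts.update(words)
        bigram_counts.update(zip(words, words[1:]))  # two-word phrases

    print(word_counts.most_common(20))
    print(bigram_counts.most_common(20))

Real literary analysis at scale adds far more (stylistic features, influence modeling, and so on), but even counts this simple hint at the long-term patterns Jockers describes.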

Data-driven political campaigns and large-scale literature analysis demonstrate the blue-sky nature of big data—and the attendant opportunities to curate the data being produced. Yet even as this new frontier expands at a rapid rate, it is still possible to find value in existing data sources. In my opinion, big data applications and data curation will reach their fullest potential when all sources, both old and new, are reexamined with the new tools.

New Value from Not-So-New Data

Not all data worth curating are born on the web. Agencies that oversee construction variances, hospitals, nursing homes, public works, and public health all gather data, but in many cases their charge is to collect it for a single, specific purpose. The expected “data deliverable” might be tabular information for policy makers and urban planners, flowing from the stream of new construction permits or other relatively mundane activities. It is easy to assume that such data, however well targeted, have no transferable value. The following example of wage research proves the opposite.

During the 2012 election season, one of our researchers was monitoring “living wage” campaigns across the country and was very interested to see how they would fare. In the political discourse surrounding this issue, many voices argue that increasing the minimum wage is bad for business, raising costs and placing a burden on small firms in particular. Others argue that increasing low wages in nominal increments—75 cents, for example—has a negligible effect on the economy yet significantly boosts household incomes. Our researcher wanted to assess the actual performance and policy ramifications of living wages to shed light on the debate, and he needed help.

He needed to gather employee data on every fast food restaurant in a specific metropolitan region. Easily accessible sources indicated that there were more than 3500 establishments in all. Yet within that category, movie theaters, gas station convenience stores, and other purveyors of food-on-the-go needed to be winnowed out. None of the obvious data sources could provide such a pinpointed sample.

One of the library staff contacted the county agency that monitors food safety in restaurants, and eventually got through to its information technology department. She learned that the agency had detailed data on every establishment, including the exact number of employees at each location. These were exactly the data our researcher needed to analyze low-wage market dynamics and write a policy brief—just three weeks before the election.

The agency monitors restaurants for compliance with public health regulations. But—and this is a big but—that is literally all they are concerned about. They gather detailed data, but the data are only of interest when they find a safety infraction and must fine the offending restaurant. In our case, we had no interest in restaurant health and safety, but we very much wanted to know the employee counts at every restaurant location. This sample would be useful as a basis for testing how living wage policies played out “on the ground.” The agency had exactly what we wanted, and we asked if they would be willing to share the data set with us.

The IT manager agreed, with the proviso that no information about regulatory compliance would be sent to us—just the full list of restaurants and their employee counts. Once this was agreed upon, it took only a few days to receive a data file that had everything we wanted.

These data provide a comprehensive resource for labor economists, and they will retain their value over the long term. Moreover, good relations with the regulatory agency have established a foundation for receiving periodic data updates. The data set will also gain value if it is mashed together with other resources, such as state- and national-level employee data, or coupled with web- and cloud-based news and information about restaurants in the region.
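To give a concrete sense of what such a mash-up might involve, here is a minimal Python sketch using the pandas library. Every file name and column name is invented for illustration; none of them come from the agency's actual data.

    # Illustrative sketch only: file names and column names are hypothetical.
    import pandas as pd

    # Per-restaurant records obtained from the county agency
    local = pd.read_csv("county_restaurants.csv")    # columns: name, category, zip, employees

    # Winnow out movie theaters, gas station stores, and other non-restaurants
    restaurants = local[local["category"] == "fast_food"]

    # State-level wage data keyed by ZIP code
    wages = pd.read_csv("state_wages_by_zip.csv")    # columns: zip, median_wage

    # Mash the two sources together on the shared ZIP code field
    merged = restaurants.merge(wages, on="zip", how="left")

    # Average employee count and median wage by ZIP code
    print(merged.groupby("zip")[["employees", "median_wage"]].mean())

The point is not the particular tool but the principle: a local, single-purpose data set becomes far more useful once it shares a key (here, ZIP code) with larger resources.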

Curate—But Counsel Too

This reference story drives home the fact that even while we are moving full-speed into an era when crowd-sourced, web-crawled, and tagged data are creating wholly new avenues for research, value still remains in ongoing data acquisition programs. Many public agencies produce data, and more often than not, those agencies are well managed and have a service mentality. When locally gathered data of this nature are obtained and merged with other, larger sources, the specificity of the local enriches the “big picture” that big data can reveal.

The emergence of big data research practices, which is revolutionizing how people parse data sets large and small, can actually strengthen the impact of library discovery skills. As a result, information professionals stand to benefit not only through digital curation and involvement in big data analysis, but also through the ongoing practice of reference and resource discovery. Because of this, I believe it is important to promote our research and discovery acumen in the same manner that we are currently promoting the library as the “solution lab” for data curation. As admirable as that effort is, curation alone is, in my opinion, just half of the needed strategy. The crucial balance may be found by remembering that the skills inherent in reference work—discovery, pattern recognition, and analysis—offer a powerful means to convey our value proposition not only as data curators, but also as information counselors with advanced data acquisition skills.

This column appeared in Computers in Libraries, Vol. 33 (No. 3), April 2013.

A Trailblazer’s Second Thoughts on Big Data

First the Bad News

Big data enthusiasts will want to read Janet Maslin’s review in The New York Times of Jaron Lanier’s newest book, Who Owns the Future?, and perhaps the book itself. Many of us have a tendency to look for the upside of social media and crowd-sourced information, so it can be helpful to be reminded of the “dark” side by someone who knows it best.

Read all about it: “Fighting Words Against Big Data”

And Now the Long View

But when you are done with the review and the (e)book, don’t miss the extremely interesting and highly useful “Big Data Compendium” that the Times has organized for folks like us:

Big Data Compendium: http://www.nytimes.com/compendium/collections/576/big_data

Steve Lohr on the Origins of the Term “Big Data”

Data hounds will appreciate reading Steve Lohr’s concise but informative article in the February 1 edition of The New York Times, in which he takes a look at the origins of the moniker “big data.” It’s fun insofar as the term has drifted into common parlance after being mentioned here and there, but it may not be so easy to find a single individual to credit for its creation. The first time I regarded it seriously was when it appeared in an NBER working paper that addressed future career opportunities for economists in big data (I’ll add the cite once I track it down again).

It reminds me of a local story involving moniker-manufacturing on a grand scale. During the late 1970s, the Oakland-Berkeley regional newspaper East Bay Express published an article by humorist Alice Kahn in which she coined the term “Yuppie.” So far as anyone could tell, she was the first person to use the term, which meme-exploded across the USA within a few months. In subsequent issues of the Express, she turned it into an ongoing gag, because everybody she knew kept telling her, “We think you should sue” for rights to the term. Humor being an “open source” product first and foremost, she didn’t sue, but she did “work it” for what it was worth.

Back to big data. Here’s a quote from the article, given by Fred R. Shapiro, Associate Librarian at Yale Law School and editor of the Yale Book of Quotations:

“The Web… opens up new terrain. What you’re seeing is a marriage of structured databases and novel, less structured materials. It can be a powerful tool to see far more.”

This is exactly the point that Autonomy and other e-discovery firms such as Recommind make: to analyze the full output of a given company, corporation, or legal case, you now have to look at all of the data. That includes the easier-to-parse world of structured data, but more and more it also includes social media, email, recorded telephone conversations, and many other casual (but critical) information resources.


Big Data Meets Literary Scholarship

The New York Times published a very interesting update on how humanists are applying big data approaches to their scholarship (see The New York Times, January 27, 2013, p. B3). The article begins with a description of research by Matthew L. Jockers at the University of Nebraska-Lincoln. He conducted word- and phrase-level textual analysis on thousands of novels, enabling longer-term patterns to emerge in how authors use words and find inspiration. This kind of textual analysis revealed the influence of a few major authors on many others, and identified the outsized impact of Jane Austen and Sir Walter Scott in particular.

Jockers said, “Traditionally, literary history was done by studying a relative handful of texts … What this technology does is let you see the big picture–the context in which a writer worked–on a scale we’ve never seen before.”

The implications for comparative literature and other fields that bump up against disciplinary boundaries are compelling. This kind of data analysis has long been the domain of sociologists, linguists, and other social scientists, but it is increasingly finding a home in the humanities.

Steve Lohr, the Times article’s author, provides a number of other examples. One of my favorites is the research conducted by Jean-Baptiste Michel and Erez Lieberman Aiden, who are based at Harvard. They used the publicly available Google Books Ngram Viewer to chart the evolution of word use over long periods of time. One interesting example: for centuries, references to “men” vastly outnumbered references to “women,” but in 1985 references to women began to outnumber references to men (Betty Friedan, are you there?).
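For the curious, the underlying Ngram data can also be downloaded as tab-separated files (ngram, year, match count, volume count), so a comparison like the men-versus-women crossover can be sketched in a few lines of Python. The local file names below are hypothetical stand-ins for the much larger real shards, and this is only an approximation of what the Ngram Viewer computes.

    # Sketch only: compare yearly counts of "men" and "women" from downloaded
    # Google Books 1-gram files (tab-separated: ngram, year, match_count,
    # volume_count). File names here are hypothetical local extracts.
    import csv
    from collections import defaultdict

    counts = {"men": defaultdict(int), "women": defaultdict(int)}

    for fname in ("1gram-m.tsv", "1gram-w.tsv"):
        with open(fname, newline="") as fh:
            for ngram, year, match_count, _volumes in csv.reader(fh, delimiter="\t"):
                if ngram in counts:
                    counts[ngram][int(year)] += int(match_count)

    # First year in which "women" appears more often than "men"
    for year in sorted(counts["men"]):
        if counts["women"].get(year, 0) > counts["men"][year]:
            print(year)
            break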

Studying literature on this scale is indicative of the power and potential of big data to revolutionize how scholarship is done. Indeed, the availability of useful data is subtly transforming humanist scholars, to the point that some interested humanists are taking on a second identity as computer programmers.

Lohr also points out that quantitative methods are most effective when experts with deep knowledge of the subject matter guide the analysis, and even second-guess the algorithms.

What is new and distinctive is the ability to ramp up the study of literature from a handful of texts to many thousands. The trick will be to keep the “humanity” in humanism.

I also draw considerable inspiration from the growing awareness that pattern recognition, a daily exercise for information professionals, is gaining new attention as part of the research process in general.

Perhaps it’s time for some of us to collaborate as co-principal investigators….