data, data, data: extracting something useful from the Big “mess”

Listening to the radio this morning on my usual bike ride to work, I couldn’t believe I had missed the announcement of Wikileaks’ release yesterday of a query-able database of the so-called “Kissinger Cables”! Come on, Twitterers, I rely on you! I guess everyone was busy reacting to the Mendeley takeover by Elsevier (see Mendeley’s take & Elsevier’s statement).

We could have a long discussion about Wikileaks’ politics, the implications of releasing and organizing such a database, and so forth. If you want to go down that road, I suggest you check out yesterday’s Democracy Now segment, which features interviews with two Wikileakers and discusses the implications of creating such a database. NOTE: these ~1.7 million U.S. diplomatic and intelligence documents from 1973 to 1976, aka “The Kissinger Cables”, were already publicly available via the National Archives.

Instead, as a social scientist working with Big data, I was intrigued by how databases such as this one not only allow journalists to strategically dig through the documents, with stories popping up all over the world (just do a news search on “The Kissinger Cables”), but also by how the organization of such text data holds great potential for research. That potential includes:

a) the technical techniques used to organize such a massive set of text documents;

b) given those boundaries, how we can operationalize the data: can we think of it in network terms and identify relations not only through statements but through shared event presence (two-mode/bipartite affiliations of the individuals or entities involved), or through other text analysis techniques that Humanities scholars work with? (A small sketch of the two-mode idea follows below.)

c) and of course, how these insights contribute to our understanding of the history and social phenomena of this specific period.

Making this massive amount of text files query-able for public use is a sign not only that the digitization of such documents is an asset to public knowledge, but also that a methodological framework needs to be developed for each unique dataset, given its characteristics; only then is the potential for knowledge in multiple domains truly unlocked.
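To make point b) concrete, here is a minimal sketch of a two-mode (bipartite) network built from shared document presence, using Python and networkx. The names, cable IDs, and ties are entirely hypothetical, invented for illustration; nothing here comes from the actual dataset.

```python
# A toy two-mode network: people linked to the cables they appear in.
# All names and cable IDs below are made up for illustration.
import networkx as nx
from networkx.algorithms import bipartite

# Each (person, cable) pair records that a person appears in a cable
appearances = [
    ("Kissinger", "cable_001"), ("Kissinger", "cable_002"),
    ("Ambassador_A", "cable_001"), ("Ambassador_B", "cable_002"),
    ("Ambassador_A", "cable_003"), ("Ambassador_B", "cable_003"),
]

B = nx.Graph()
B.add_nodes_from({p for p, _ in appearances}, bipartite=0)  # people
B.add_nodes_from({c for _, c in appearances}, bipartite=1)  # cables
B.add_edges_from(appearances)

# Project onto the people: two individuals become tied when they
# share presence in at least one cable; edge weights count co-presences
people = {n for n, d in B.nodes(data=True) if d["bipartite"] == 0}
ties = bipartite.weighted_projected_graph(B, people)
print(ties.edges(data=True))
```

The same projection logic scales from this toy list to millions of document-entity pairs; the hard part, as argued above, is deciding what counts as an entity and what counts as co-presence for a given dataset.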

Let’s get digging? 🙂


research in the “eHumanities” needs social and computer scientists

Last week I attended the “Get going: The Nijmegen Spring School in eHumanities”. The school focused on three programs and/or skill sets: Python, R, and Gephi. I thoroughly enjoyed getting my hands dirty with Python, and can’t believe I have lived my scientific life, up ’til now, without it! I would recommend learning this language to all social scientists and humanities scholars, those working with Big data in particular, as it is a fairly straightforward language with an increasing number of online tutorials. Stop doing everything manually, or within the boundaries of Excel, and learn a tool that will speed up processes such as variable recoding (see the sketch below).
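As one small illustration of what I mean by recoding, here is a sketch in Python with pandas. The column name, values, and category cut-offs are all invented for the example.

```python
# A toy example of variable recoding: binning a numeric variable into
# categories in one step, instead of editing cells by hand in Excel.
# Column name, values, and cut-offs are hypothetical.
import pandas as pd

df = pd.DataFrame({"age": [23, 35, 47, 62, 71]})

df["age_group"] = pd.cut(df["age"],
                         bins=[0, 30, 50, 120],
                         labels=["young", "middle", "older"])
print(df)
```

The point is not this particular recode but that the rule is written once and applied to every row, however many there are, which is exactly where spreadsheets start to break down.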

Beyond learning these skills, I also learned a lot from the participants about the Humanities, as most attendees came from that field, with the exception of a few social scientists and computer scientists. I myself am a trained social scientist and had little knowledge of the Humanities and the giant push for so-called eHumanities/digital Humanities and the like. The surge in funding and interest within the academic community seems to come from two sources: the increasing digitization of the sources used in the Humanities, combined with a lack of training (of course there are exceptions) in how to conceptualize, operationalize, and analyze such data. Let me be clear: I am NOT saying that pre-digital or non-digital work in the Humanities is or was fruitless, but rather that the field is challenged to formulate a new methodology, and thus a new skill set, for researchers. This became apparent within the workshop’s small group of mainly Humanities scholars: few had experience with statistics, with operationalizing data in network terms, or with thinking in such terms/schemas. I am not criticizing the attendees; this was the goal of the workshop, after all. But I was struck, as someone versed in these techniques, by how helpful they could be, and thus by the truly great need to fill this knowledge gap.

A common way to bridge this gap is to bring in computer scientists: experts in automating everything, organizing data, analyzing large data sets, and modelling problems; on paper, certainly the most logical step to aid Humanities scholars. But after this three-day workshop I see a missing piece that may be essential to bridging the gap even further: integrating social scientists as well. Of course, you will probably say this is self-promotion, as the topic has certainly sparked my interest, but hear me out. Social Science as a field has a long tradition of discerning valid and reliable methodologies for analyzing all sorts of data types, origins, sample sizes, and the like. There is a strong tradition among quantitative social scientists of training in various statistical methods that allow the questioning of causal mechanisms (relationships between sets of variables). Social Science is also faced with the increasing availability of Big data, and social scientists are thus also teaming up with computer scientists to expand their applications. It seems obvious to me that the three disciplines need to address this e-fying of the Humanities together: redefining the boundaries by looking at different (multidisciplinary) research questions, as well as building frameworks for integrating methodologies for using such data.

These discussions also challenged me to rethink how I think about my data, and thus my research questions. Although I actively attempt to look past my disciplinary blinders through work with computer scientists in particular, I now certainly see the advantages of considering a combined Humanities approach; particularly brainstorming about, and thus exploring, the increasing amount of data produced with the Humanities in mind.

Any suggestions about where to start?