Research in the “eHumanities” needs social and computer scientists

Last week I attended “Get going: The Nijmegen Spring School in eHumanities”. The school focused on three programs and skill sets: Python, R, and Gephi. I thoroughly enjoyed getting my hands dirty with Python, and can’t believe I have lived my scientific life, up ’til now, without it! I would recommend learning this language to all social scientists and humanities scholars, particularly those working with Big data, as it is a fairly straightforward language with a growing number of online tutorials. Stop doing everything manually, or within the boundaries of Excel, and learn a tool that will speed up processes such as variable recoding.
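To give a flavor of the kind of task I mean, here is a minimal sketch of variable recoding in plain Python — the variable name, categories, and mapping are invented for illustration, not taken from any real dataset:

```python
# Recode a hypothetical categorical "education" variable into numeric levels.
# Collapsing all post-secondary degrees into one level is an illustrative choice.
recode_map = {
    "primary": 1,
    "secondary": 2,
    "bachelor": 3,
    "master": 3,
    "phd": 3,
}

responses = ["secondary", "phd", "primary", "bachelor", "unknown"]

# Unmapped values become None rather than silently breaking the analysis,
# which makes coding errors easy to spot.
recoded = [recode_map.get(r) for r in responses]
print(recoded)  # [2, 3, 1, 3, None]
```

A few lines like these replace hours of manual find-and-replace in a spreadsheet, and unlike Excel edits, the mapping is documented and repeatable.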

Beyond learning these skills, I also learned a lot from the participants about the Humanities, as most attendees came from that field, with the exception of a few social and computer scientists. I myself am a trained social scientist and had little knowledge of the Humanities and the giant push for the so-called eHumanities/digital Humanities. The surge in funding and interest within the academic community seems to come from two sources: the increasing digitization of the sources used in the Humanities, combined with a lack of training (of course there are exceptions) in how to conceptualize, operationalize, and analyze such data. Let me be clear: I am NOT saying that pre-digital or non-digital work in the Humanities is or was fruitless, but rather that the field is challenged to formulate a new methodology, and thus a new skill set, for researchers. This was evident within the small group of mainly Humanities scholars at the workshop: few had experience with statistics, with operationalizing data in network terms, or with thinking in such schemas. I am not criticizing the attendees — this was the goal of the workshop, after all — but I was struck, as someone versed in these techniques, by how helpful they could be, and thus by the truly great need to fill this knowledge gap.

One way to bridge this gap is to bring in computer scientists: experts in automating everything, organizing data, analyzing large data sets, and modelling problems; on paper, certainly the most logical step to aid Humanities scholars. But after this three-day workshop, I see a missing piece that may be essential to bridge the gap even further: integrating social scientists as well. Of course, you will probably say this is self-promotion, as it has certainly sparked my interest, but hear me out. Social Science as a field has a long tradition of discerning valid and reliable methodologies for analyzing all sorts of data types, origins, and sample sizes. There is a strong tradition among quantitative social scientists of training in statistical methods that allow the questioning of causal mechanisms (relationships between sets of variables). Social Science is also faced with the increasing availability of Big data, and is thus likewise teaming up with computer scientists to expand applications. It seems quite obvious to me that the three disciplines need to address this e-fying of the Humanities together: redefining boundaries by asking different (multidisciplinary) research questions, as well as building frameworks for integrating methodologies for using such data.

These discussions also challenged me to think about how I approach my data and thus my research questions. Although I actively attempt to look beyond my disciplinary blinders through work with computer scientists in particular, I now see the advantages of considering a combined Humanities approach; particularly brainstorming about, and thus exploring, the increasing amount of data produced with the Humanities in mind.

Any suggestions about where to start?


Scientists’ use of the Web

Scientists are increasingly using the Web to exchange, share, and accumulate knowledge. The use of the Web by scientists is a field of growing interest. This made us ask: who is using these Web platforms? All scientists, or specific groups, ages, or disciplines? With a group of computer scientists within the Network Institute, we developed a method and tool to identify a set of known scientists, so as to be able to reflect on the representativeness of Web studies of scientists online. This work was recently presented at the Sixth Chinese Semantic Web Symposium (CSWS2012) and the First Chinese Web Science Conference (CWSC2012) in Shenzhen, China. It will be published shortly in the conference proceedings; for now, you can find the publication here.


Yesterday Times Higher Education published an interesting article by Matthew Gamble, a computer scientist working on web science questions. Gamble’s article addresses the need for Web 2.0 scholarship – the use of online metrics for evaluating science – piggybacking on other discussions in the field such as altmetrics (which Gamble also mentions).

This discussion opens the door to a number of questions about knowledge production processes, as well as about what is valued in science and what should or could be measured as impact. These questions were also the topic of the recent altmetrics workshop at the Web Science Conference in Koblenz, Germany, in June 2011, which I attended. The workshop itself was a first step towards building a recognized community of researchers studying alternative metrics for science. It brought together researchers from multiple disciplines and facilitated great discussions on a wide range of topics: not only the Web behaviors of scientists, but also the collection and disambiguation problems of Web data, and how to understand the implications of science and knowledge production moving onto the Web. Overall, one of the best workshops I have attended yet, and one that perfectly fit my growing area of expertise.

I presented some exploratory research on the validity of online metrics in science. The work was completed with my colleague Shenghui Wang, a talented computer scientist with whom I developed a crawler (she did the actual building, I did the informing) to investigate a community of scientists online. The title was “Who are we talking about?: the validity of online metrics for commenting on science”. You can find the complete abstract here; the paper is in the works.

Preliminary/exploratory results indicated that, in our sample of Dutch computer scientists and their co-authors from 2007 to March 2011, the higher your h-index (a measure of performance), the more likely you are to be found on LinkedIn and Slideshare and to have a blog. Additionally, the higher your citation score (a measure of tenure and performance), the more likely you are to be on LinkedIn and to have a blog. This suggests that, in this community, measures of scientists’ web behaviors mainly reflect the dynamics of scientists with both higher tenure and higher performance. So when discussing the implications of altmetrics, or analyzing behavior on these social media sites, we need to be explicit about whom we can generalize to and how these findings relate to broader dynamics in science; for this sample, we can only speak to the behaviors of high-performing, tenured scientists. Further research is needed to test this in other research communities, and to further develop the precision and recall of the techniques used in the web crawler to obtain data on scientists’ presence on these sites. That said, if this pattern holds in other communities, altmetrics would provide a unique avenue for analyzing those leading the pack in their respective fields, allowing more immediate impact measures for understanding science and overcoming the delay inherent in citation-based impact measures.
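For readers unfamiliar with the precision/recall evaluation mentioned above, here is a minimal sketch of how a crawler’s profile-matching step can be scored — the names and sets below are entirely made up for illustration and do not come from our actual data:

```python
# Toy evaluation of a crawler's profile-matching step.
# "relevant"  = scientists who truly have a profile on a given site (ground truth);
# "retrieved" = scientists the crawler claims to have found there.
relevant = {"alice", "bob", "carol", "dave"}
retrieved = {"alice", "bob", "eve"}

true_positives = relevant & retrieved

# Precision: of the profiles the crawler returned, how many were correct?
precision = len(true_positives) / len(retrieved)
# Recall: of the profiles that actually exist, how many did the crawler find?
recall = len(true_positives) / len(relevant)

print(f"precision={precision:.2f}, recall={recall:.2f}")
```

Low precision here would mean mistaking namesakes for our scientists (a disambiguation problem); low recall would mean missing real profiles — and either error would bias claims about who is, or is not, visible online.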