Category Archives: trainings

Let’s talk about #OpenScience (with a medieval touch)

Workshop Report: How To Make Your Medieval Research More Visible With Open Scholarship Methods and Tools

As a medievalist, it was a tremendous honour and pleasure for me to be invited to give an Open Science workshop at the Annual Meeting of CARMEN (The Worldwide Medieval Network) at Tampere University, organised by Trivium, the Tampere Centre for Classical, Medieval and Early Modern Studies.

The Open Science workshop fitted very well with the CARMEN Annual Meeting’s general theme “Passages: Beyond the Boundaries of Medieval Studies”, as Open Science is all about opening up research beyond boundaries.

The CARMEN workshop offered me an excellent opportunity to talk directly with researchers at all career stages about Open Scholarship and to show how it can be fruitfully implemented in medieval research practices. This is, in my opinion, extremely important, because the Open Science movement will only thrive if it is embraced bottom-up by researchers.

The focus of the workshop was on academic publishing and changes in scholarly communication. First, we talked about Open Access to scientific publications (Open Access journals and Open Access monographs) and Open Data & Research Data Management, topics that are rapidly gaining momentum for all researchers because they are increasingly supported by institutions and becoming a default requirement of research funders. Then we switched to another Open Science method: the communication of scientific results via social networks, blogs, videos, podcasts, etc. While some consider these channels a waste of time, used strategically they can positively impact research dissemination and increase the number of citations. Last but not least, they are vital channels for demonstrating the relevance of medieval studies against the backdrop of dwindling research budgets.

Medievalists have great stories to tell, and public interest is almost assured if the research is “translated” for various channels. While there is still need and space for the monograph, the edited volume, and articles, the potential of other channels to engage a broader audience and create societal research impact is high and waiting to be explored. During the CARMEN meeting there were many exciting examples of medieval research projects engaging with the broader public, for example those presented by the Trivium researcher Jenni Kuuliala on Dis/ability History. EU citizens demand a better understanding of the society they live in, and the Humanities & Social Sciences can proactively contribute to fulfilling this demand if they stop telling the Cinderella story and change the narrative, as Gabi Lombardo from EASSH had reminded us earlier during the CARMEN meeting; they only need to seize the day. In my opinion, practising Open Science is one facet of doing exactly this.

After having explored Open Science in theory, it was time for action. First, the participants discussed in groups which Open Scholarship method they would like to try out next. I “overheard” some discussing starting to use Twitter, publishing a bibliography on a specific aspect of saints, musing about making a video about their projects, or engaging in Open Peer Review. Naturally, I am very curious to hear about any follow-up developments!

The last part of the workshop was dedicated to a plenary discussion about doubts and needs and how to overcome them. For this discussion, James L. Smith from Trinity College Dublin, advocacy coordinator for @openlibhums, joined me.

I could hardly keep up taking notes during the lively discussion. For me, the main points came down to these categories:

  • Awareness Raising:
    – Authors need to be aware that there are alternatives to Closed Access for articles and monographs (Gold Open Access, i.e. immediate Open Access; Green Open Access, i.e. publication of preprints, postprints or the authorised version, possibly after an embargo period)
  • Education:
    – Authors need to know their rights when engaging with publishers (Green Open Access, SPARC addendum, etc.)
    – OA seems to add an extra challenge to teaching students which sources are reliable (peer-reviewed = good quality); if the digitally published material is more diverse (and not always peer-reviewed), one has to teach digital literacy skills and (digital) source criticism, which one may in fact consider a core skill of a good historian anyway
  • Need for a Paradigm Change within the Scholarly Community:
    – We need to be more aware of, and critically discuss, where prestige comes from: the name of the journal or the name of the publisher? Maybe open peer review could offer a solution here? Should researchers still support closed journals and series?
    – OA publications often have a low reputation by default and are perceived as less valuable scholarship, although they frequently undergo strict peer review (which is often not recognised); digital publishing in general also carries less prestige
    – Many prestigious publishers do not offer OA (or only at very high cost)
    – It would be unfair to oblige early career researchers (ECRs) to publish OA and put their careers at risk, even though there is a “Kamikaze Open Access” school of thought; instead, established researchers should promote OA wherever they can (“senior scholars should pave the way”)
  • Policy Making
    – Choosing Open Access or open scholarly methods often does not count towards tenure
  • Practical Solutions
    – OA for books is important for the humanities; we need good hybrid publication models (e.g. an OA version published by the library/publisher, with the book printed on demand)
    – At the moment, journals often bring in the revenue for scholarly societies; making them Open Access poses a problem for the societies’ sustainability
    – Article Processing Charges are often very high (besides this being a problem of the commercialised system), and even low APCs can cause problems (scholars need support to pay APCs)
    – Open Peer Review could change the way we do and evaluate research, but how can it be organised in practice?

What I learned from this workshop (and from the excellent FOSTER Open Science Trainer Bootcamp): setting the goals and the scene for the workshop participants is very important, and creating a space for discussion and leaving enough time for it really gets the participants going!

What I also learned from this workshop: let’s talk about Open Science more with researchers, and let’s talk more about practical steps, because Rome was not built in a day either!

Resources

  • In the spirit of Open Educational Resources (OER), the slides of my presentation, including the practical parts, are published online as PPTX on Zenodo; feel free to reuse and share them. I have gratefully reused brilliant material from the Open Science/OER community and would like to encourage everyone to do the same.
  • While you are here: Get started now and check out the PARTHENOS Training Module Manage, Improve and Open Up Your Research and Data!

What do you think? What is Open Science for the Humanities and how can we foster it? Leave a comment below or discuss with me on Twitter or Facebook!  


What does Open Science (#openscience) mean for you?

Offene Wissenschaft, Open Science, Open Scholarship: these terms have become impossible to ignore in the current scholarly meta-discourse. At the latest since the announcement of Open Science as one of the main pillars of the next EU research funding programme, Horizon Europe, it has been clear what importance science policy attaches to Open Science. At the same time, the move to Open Science requires a fundamental transformation of the research process and of the discourse about research, and it cannot be denied that there is a gap between ideal and reality, one that varies in size depending on discipline and country.

Open Science starts with each individual researcher. Only if open research practices are carried by the community itself do they have a chance of becoming an intrinsic part of everyday research and of the research system. It is therefore important, on the one hand, to discuss Open Science, its significance and the practical options for opening up one’s research practices, and, on the other hand, to seek dialogue about the potentials and risks researchers themselves see.

Scientific revolutions take time. Even though nobody today has to fear ending up before the Inquisition or at the stake, like Galileo Galilei or Giordano Bruno, many early career researchers wonder what consequences practising Open Science will have for their careers in a system that is strongly shaped by old paradigms and seems to offer few incentives for open research. Early career researchers are therefore a particularly important target group for Open Science trainings and discussion rounds, since they find themselves right in the field of tension between long-established methods and practices and the new requirements and ideals.

I was therefore delighted to give an impulse talk on this topic at the University of Potsdam on 19 July 2018 as part of the “Uni Potsdam Career Talks” series of the Potsdam Graduate School, and afterwards to take questions in a panel discussion together with Niklas Hartman of the Potsdam University Library (subject librarian for the natural sciences and research data coordinator), Caroline Fischer (research associate at the Chair of Public and Nonprofit Management) and the audience.

The title of the impulse talk, “Was bedeutet Offene Wissenschaft für Sie?” (“What does Open Science mean for you?”), was deliberately ambiguous: on the one hand, Open Science means something slightly different to everyone; on the other hand, Open Science can indeed mean something for researchers’ careers, and something positive at that. Better documentation of the research process can help you become a better researcher, greater competence in research data management protects against data loss, and Open Access publications make your research more visible and allow it to be received and reused immediately.

Slides of the impulse talk (What is Open Science? Why is this topic important? Challenges and barriers, best-practice examples for opening up one’s own research practice, and many further links):

The panel discussion mainly revolved around practical questions, for example concerning copyright, as well as the obstacles that the academic evaluation system puts in the way of practising Open Science. Take Open Access publishing, for example. In many disciplines the power of the impact factor is still unbroken, few Open Access journals with a high impact factor exist, and the Article Processing Charges of the commercial big players, who have by now embraced Open Access as a lucrative additional source of revenue, are astronomical (even though many free or inexpensive Open Access journals do exist). One solution can therefore be to make use of the additional legal routes of secondary publication as Green Open Access and to negotiate this specifically with the respective publishers. Often more is possible than one initially thinks.

For concrete questions about Open Access and Open Science, researchers can often turn to contact persons at their institutions or university libraries (for example, staff of university presses, subject and research data librarians, and specialists in electronic publishing). However, the panel discussion also showed that a wider uptake of Open Access and the further opening of research processes in the sense of Open Science require not only suitable methods and services, but also a reform of the academic evaluation system, so that it offers incentives for a culture of sharing and collaboration. Among other things, this means that data publications, with the extra effort they entail for documentation etc., or the deliberate choice of Open Access publication channels, are rewarded in hiring and appointment procedures. Only then will research be open by default in the future, and then we will simply talk about research again.

What do you think? What are your experiences, tips, best-practice examples? I look forward to an exchange, for example via the comment field below!

 


Source Picture: https://zenodo.org/record/1285575#.W09yZH59jOR (Melanie Imming, John Tennant, CC0)

Leipzig and DH: Impressions

Summary: A lot has changed in the Humanities since I had my first academic job in the context of an edition project at Leipzig University. The ongoing digital transformation of all humanities disciplines calls for more self-reflection on methodologies and for early as well as lifelong training. With the European Summer University in Digital Humanities and other important DH activities and actors, Leipzig is a DH hotspot and was therefore a very fitting place for a presentation of the PARTHENOS Training Suite.

From 2003 to 2005 I held my first position as a research assistant in Dutch Studies at Leipzig University, where I worked on two editions of Middle Dutch texts. This July I finally had the opportunity to return to Leipzig. The last time, the DHd conference in Leipzig (2016), which I still remember very fondly, not least because of my first visit to the legendary Faustian Auerbachs Keller (!), but that is just an aside, was already a while ago. The occasion was an invitation to present the PARTHENOS Training Suite at the European Summer University in Digital Humanities 2017. The GWZ in Beethovenstraße, my old workplace, still exists, but a lot has changed, at Leipzig University and in scholarly editing. Exactly the right backdrop for a personal reflection.


ESU 2017 Poster

In 2003, digital scholarly editing, and above all the TEI (Text Encoding Initiative), was still in its infancy, or rather still far from the enormous methodological influence it would later gain (link: history of the TEI). Even then, it went without saying that the Middle Dutch editions should be produced digitally. But within the project, digital meant using a word processor, not XML editors or TUSTEP, which was still widespread at the time but far too complex for most use cases.

The editions of the two Middle Dutch texts were published long ago. I do not know whether the text files still exist, but even if they do, the end product was not a digital or hybrid edition but a print edition. Four thoughts:

  • For the text files, only the “look” of the print version mattered, not standardised markup such as the TEI makes possible, which would allow these texts to be reused in other contexts, in portals or with tools, quite apart from online access.
  • On the other hand, the editor had an easy time of it and could combine the scholarly and technical workflows in one person. If you knew the “quirks” of your word processor reasonably well, the learning curve was fairly gentle.
  • Today the roles are much more differentiated, which is not least one of the reasons why digital editions are far more often produced by teams than by individuals.
  • Version control was a nightmare, especially when others took over checking or partial tasks in between, since everything happened in a single document and (long before cloud-based collaboration tools) offline.

It is by now almost common knowledge that the increased scholarly use of digital tools, methods and standards such as the TEI in the humanities calls for additional qualifications and methodological reflection. Of course, subject expertise must not be neglected in the process: if nobody can read historical manuscripts anymore and the editorial and subject-specific know-how and methodological understanding are missing, the TEI Guidelines and XML editors will not help either… That is why tailored training and continuing-education offerings, whether as part of university curricula or as workshops, summer and winter schools and online courses, for students, research assistants and professors alike, are immensely important. Not only to teach the practice, but also to reflect on the advantages, disadvantages and potential for improvement. In the end, every product lives off user feedback, and its “market value” rises with how widely it is known and used. In Dutch there is a very apt saying for this: “Onbekend maakt onbemind” (unknown makes unloved)… During my Master’s in scholarly editing, for example, I was taught “only” TUSTEP and InDesign in the area of digital editions; I only became aware of the power of the TEI later, when I started to take a stronger interest in digital scholarly editing. Thanks to DH Oxford!

The European Summer University (ESU) in Digital Humanities, directed by Prof. Elisabeth Burr, is a very successful example of a format that, on the one hand, demonstrates the potential of digital research and, on the other, teaches the required competencies hands-on and critically interrogates the methods. Particularly exciting about the ESU is the broad “spread” of the audience, both geographically and sociologically (in the sense that the participants really do range from students to professors). A fitting place, then, to present the training materials and formats developed by the H2020 project PARTHENOS, the PARTHENOS Training Suite, in a project presentation. Once again, many thanks for the invitation and the perfect organisation!


Display at the ESU registration desk

The ESU and Prof. Burr’s chair are, however, not the only DH hotspot in Leipzig. Particularly worth mentioning is, of course, the Humboldt Chair of Digital Humanities, held by Prof. Gregory Crane and his team, but also the DH activities of the Leipzig University Library. Regarding the latter, one could almost say that DH and libraries are a “match made in heaven”. Libraries usually hold exactly the collections with which one can “do” DH. But how does the DH community get at these data? It cannot digitise everything itself; that would not only be inefficient, it is also not always possible. One task that libraries are therefore increasingly taking on is digitisation and the provision and archiving of the data from digitisation projects. It is particularly welcome for research when this happens within the framework of an Open Digitization Strategy, as at the Leipzig University Library, and when the data are, for example, presented via digital collections and passed on to other search and processing systems.


Leipzig University Library (Albertina)

Last but not least, Leipzig is also home to one of the two sites of the Deutsche Nationalbibliothek (German National Library), whose Strategic Priorities 2017-2020 are strongly shaped by digital innovation.

If you now feel like travelling to Leipzig: besides DH, the impressive terminus station and the atmospheric city centre alone are worth the trip. And if the journey has to wait a little longer, a virtual visit to the DNB is recommended, more precisely to the exhibition of the German Museum of Books and Writing, Zeichen – Bücher – Netze (2014). Leipzig is worth it!


Leipzig Hauptbahnhof

My first Processing visualisation

Tooling around!

GCDH Summer School Visual Analysis with Digital Tools, Göttingen, 28th July to 1st August 2014


With a little help… My first visualisation with Processing!

I was lucky to participate in a great Summer School organised by the Göttingen Centre for Digital Humanities on Visual Analysis with Digital Tools, and I couldn’t have spent this week better! Even though one might think that temperatures around 30 °C are tempting enough to exchange the classroom for a swim, I truly enjoyed all the hands-on sessions and lectures, and our group evenings as well! In the following I would like to share some impressions.

The morning sessions were subdivided into two strands (visual network analysis with VennMaker and Gephi; 3D documentation for cultural heritage), of which I followed the first. In this strand, Michael Kronenwett and Martin Starck introduced network analysis and visualisation with VennMaker and Gephi. VennMaker is a great tool if you want to draw your own customised social network visualisations, whereas Gephi is a bit more flashy and makes it easier to import data (but the VennMaker people are working on a data import feature!). However, Gephi, as trendy and popular as it is at the moment, is currently not being updated and was rather unstable on my Mac (with Mac OS 10.8.5). In the end I even lost the window where you can see the network visualisation… I guess only a completely fresh installation would solve this strange problem. Anyway, I learned not only how to use these tools, but also how to interpret my nice-looking, shiny visualisations, and that some background knowledge of statistics always comes in handy…
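For those who like to prepare their network data in code before styling it in Gephi, here is a minimal sketch, assuming Python with the networkx library; the tiny letter network and the file name are invented for illustration and are not from the workshop materials.

```python
# Minimal sketch: build a small (invented) network with networkx, compute a few
# of the statistics mentioned above, and export it as GEXF for layouting in Gephi.
import networkx as nx

# Hypothetical correspondence data: pairs of people who exchanged letters
edges = [
    ("Anna", "Bert"), ("Anna", "Clara"), ("Bert", "Clara"),
    ("Clara", "Dirk"), ("Dirk", "Eva"),
]

G = nx.Graph()
G.add_edges_from(edges)

# Basic statistics that help to interpret the visualisation later on
print("Degree centrality:", nx.degree_centrality(G))
print("Betweenness centrality:", nx.betweenness_centrality(G))

# GEXF is one of the formats Gephi can open directly
nx.write_gexf(G, "letter_network.gexf")
```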

Our collective meals after the morning sessions at the refectory of Göttingen University were followed by a diverse array of afternoon sessions and evening activities.

On Monday, Dirk Rieke-Zapp of Breuckmann/Aicon introduced the whole group to the technical aspects of 3D scanning with structured light. He had brought with him a 70,000 (!) euro scanning installation and showed how it works in practice. Every scanning situation is different, and every excursion should be carefully planned. Test before you ship your equipment! A truly nice anecdote he told us was about scanning the Mona Lisa. Do you know how many people it takes to scan the Mona Lisa? Answer: five. The curator, the scientist, the scanning professional, someone to hold the tripod and someone to hold the cables…

The keynote on Monday evening was given by Daniel A. Keim of the University of Konstanz, who showed us some fascinating DH visualisations of the Bible, Mark Twain’s books, and Twitter-based recognition of emergency situations… He drew our attention to the fact that data visualisation is especially difficult in the humanities, as humanities researchers are often confronted with fuzzy data in which they try to see something new. He strongly encouraged DH researchers to combine the (dis)advantages of computers (fast! accurate! stupid!) with the (dis)advantages of humans (slow! inaccurate! brilliant!), to stay adventurous and innovative, and not to forget to have fun as well.

On Tuesday afternoon, Norbert Winnige and Alexei Matveev of the MPI MMG introduced us to the research into visualisations at their institute, especially how to visualise migration “without arrows”, which always remind one a bit of battle maps. Alexei Matveev demonstrated in a hands-on session how to make visualisations using Processing. This is quite tough stuff for people who are not used to programming, but he did a great job! He also uttered the wise words that you should not be seduced by “tool trends”, but always consider carefully which tool will serve you best for the job. (And if you haven’t found the right tool yet, have a look at the valuable list of DH tools provided by Project DiRT.)
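As a rough idea of what “without arrows” can mean in practice, here is a minimal sketch of my own, assuming Python with matplotlib rather than Processing; the coordinates and flow volumes are invented, and line width stands in for arrowheads to encode the size of each flow.

```python
# Minimal sketch: migration flows drawn without arrows; line width encodes volume.
import matplotlib.pyplot as plt

# Hypothetical flows: (origin_xy, destination_xy, number_of_migrants)
flows = [
    ((0.0, 0.0), (4.0, 2.0), 1200),
    ((0.0, 0.0), (3.0, -1.5), 400),
    ((1.0, 2.5), (4.0, 2.0), 800),
]

fig, ax = plt.subplots()
for (x0, y0), (x1, y1), volume in flows:
    # No arrowhead: the width and transparency of the line carry the information
    ax.plot([x0, x1], [y0, y1], linewidth=volume / 200, alpha=0.6, color="steelblue")

ax.set_title("Migration flows (line width = volume)")
fig.savefig("flows.png", dpi=150)
```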

But we did not only dive into visualisations for the humanities and social sciences! On Wednesday afternoon, Sven Truckenbrodt of the Rizzoli Lab told us more about his lab’s latest scientific breakthrough, the first 3D visualisation of a synapse, which was even featured in Science! He gave an inspiring lecture on the history of visualisations in the sciences, on what can go wrong, and on whether you can trust them at all. He strongly recommended the book Objektivität by Lorraine Daston and Peter Galison (there also seems to be an English translation). It is good that researchers in the sciences are also starting to appreciate visualisations, because they say so much more than “a big chunk of data on a sheet”! But 3D visualisations are also very expensive: their model comprises 5-6 years of pure research, an incredible amount of time spent by Burghard Rammner, who programmed the visualisation, and a lot of very expensive supercomputer time! You might want to watch the video on the website of Science!

After the spacey 3D synapse, a team from the SUB (Stefan Funk and Ubbo Veentjer) took the stage to present the Dariah-DE Geobrowser. With the Geobrowser you can visualise time-space relations, for example in a network of letters, very easily and very beautifully. You can upload different datasets and overlay and compare them if, for example, you want to compare the spread of the books of two authors. It works with the Getty Thesaurus of Geographic Names to recognise the place names and put them on the map (which even works with historic place names), but you have to check carefully: sometimes the first option in the Getty Thesaurus seems to be quite US-centric, why else would it initially put Edinburgh in the States?!? The city tour in the evening was “boycotted” by a city run, so we were not really able to walk around much outside, but our visit to the old city hall was definitely a highlight! My first time ever with a “medieval” key… And a key it was, I guess it was some 30 cm and 0.5 kg!!!
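The lesson about checking gazetteer hits can be illustrated with a minimal sketch, assuming Python; the tiny gazetteer and the preferred-country rule below are my own invented stand-ins, not the Geobrowser’s or Getty’s actual lookup logic.

```python
# Minimal sketch: never take the first gazetteer hit blindly; prefer a candidate
# in the expected country (here: Great Britain) before falling back to the first hit.
GAZETTEER = {
    "Edinburgh": [
        {"country": "US", "lat": 35.97, "lon": -83.92},  # a US hit that may be listed first
        {"country": "GB", "lat": 55.95, "lon": -3.19},   # Edinburgh, Scotland
    ],
}

def resolve(place, preferred_country="GB"):
    """Return the candidate in the preferred country, else the first hit, else None."""
    candidates = GAZETTEER.get(place, [])
    for candidate in candidates:
        if candidate["country"] == preferred_country:
            return candidate
    return candidates[0] if candidates else None

print(resolve("Edinburgh"))  # -> the Scottish Edinburgh, not the first (US) hit
```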

Thursday afternoon was completely dedicated to an introduction to R. The task of leading an already somewhat tired crowd of DH apprentices to places where almost none of us had been before was boldly taken on by Andreas Cordes (Institute for Psychology, Göttingen). R and RStudio are a bit tricky to install, so this took some time. I have to admit, even though R was developed as an easy programming language for non-programmers, it still requires you to truly think “computationally” to get the job done. The good thing is that the R community is very active and very willing to share little routines to get a specific job done (these are called packages ;-)). We were all really impressed by what R enables you to do, so learning the basics will really provide you with a powerful tool for all kinds of analyses. If you understand Dutch, I would strongly recommend reading, for example, Karina van Dalen-Oskam’s inaugural lecture or some of her other publications, to which my attention was drawn by Astrid Kulsdom. I am very curious to see what else you can do with R for computational literary analysis.

The last day’s afternoon was dedicated to the presentations of the achievements of the two Summer School strands. The 3D visualisation strand had set up a virtual museum of their scans which was truly impressive. My strand, the network analysis strand, also showed some of our visualisations and Ingo Boerner and me even demonstrated live how you can make a very easy, but nevertheless quite impressive analysis of your Facebook network with Gephi. I had found a tutorial for this on Youtube, so if you like to try it out, just follow this link to the Video of Data J Lab of Tilburg University. As we also learned during this week about sensitive data and not intruding other people’s privacy (anonymisation!), I won’t post my result …

Many thanks to the organisation team of the Summer School (Frank, Ele, and Andrea) and to all the presenters and participants who made this an unforgettably rich learning and fun week!

Impressions from ‘Historical Documents Digital Approaches’, 5-7 September 2013

Better late than never, some impressions from the HDDA 2013 workshop. This workshop took place at Ghent University at the beginning of September and not only included fantastic speakers (hdda_leaflet), but was also very well organised, including fantastic sandwiches for lunch and splendid sunshine (though the organisers may not be held responsible for the latter). It was only a bit unfortunate for a DH workshop that the WiFi was not working in the lecture room, but again I am sure that the organisers didn’t have a hand in this, and I can only speculate that the planners of the UFO (the modern housing of the history department and parts of the UGent administration) meant it to be like that, to prevent people from checking their mails in the lecture rooms.

Unfortunately I missed the first sessions due to some private circumstances. The rest of the morning lectures over these three days ranged from Bert van Raemdonck (Ghent), who lectured on editing letters in TEI, to Caroline Macé (Leuven), who lectured on how to use digital tools to analyse and visualise the history of texts (stemmata).
A very relevant point was that TEI is only one of the available encoding schemes, but also the most widely used one, so if it fits your needs, USE it. This will make your results shareable (and please do share your code!) and easily mineable, and it also offers the advantage that more TEI-advanced scholars are usually very willing to lend newbies a hand (for this one can, for example, join the TEI list).
Another important point was that scholars who use digital documents and tools have to be aware of what they are doing and of the implications this has for their scientific work. Worst-case scenarios mentioned here included scholars who measured properties of medieval manuscripts using digital facsimiles without taking into account that measuring a picture may not yield correct measurements. The lectures on computational topic recognition and computational authorship attribution were also very fascinating. I have to admit that at first this sounded like magic to me, but after hearing and reading more about the methods and tools, I am starting to understand the underlying logic.

The afternoons were reserved for hands-on training in TEI-conformant XML with David Birnbaum (Pittsburgh) using Oxygen. In contrast to earlier trainings in which I have participated, he focussed more on actually encoding the body of the text than on the TEI header, so that in the end all participants had some idea of how to practically encode a historical document in TEI, including abbreviations and variants in the transmission history. And yes, we sighed… because if you want to encode all this, your TEI code starts looking very unattractive, that is to say, so chaotic that you almost cannot see anymore what you are doing. That is why the advice to first think very carefully about what you actually need, who your audience is, and what kinds of uses you want to enable, is very good advice indeed.
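To give a flavour of what such encoding looks like, here is a minimal sketch, assuming only Python’s standard library to keep it self-contained; the sample abbreviation is invented, but the choice/abbr/expan pattern is standard TEI.

```python
# Minimal sketch: encode an abbreviation and its expansion in TEI-conformant XML.
import xml.etree.ElementTree as ET

TEI_NS = "http://www.tei-c.org/ns/1.0"
ET.register_namespace("", TEI_NS)  # serialise TEI as the default namespace

def tei(tag):
    return f"{{{TEI_NS}}}{tag}"

# <choice> pairs the abbreviated form found on the page with its editorial expansion
p = ET.Element(tei("p"))
p.text = "In the year of our "
choice = ET.SubElement(p, tei("choice"))
ET.SubElement(choice, tei("abbr")).text = "dni"
ET.SubElement(choice, tei("expan")).text = "domini"

print(ET.tostring(p, encoding="unicode"))
# -> <p xmlns="http://www.tei-c.org/ns/1.0">In the year of our <choice>…</choice></p>
```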

To sum up, to DO Digital Humanities in many cases means learning to handle code and tools that are rather unfamiliar to the traditional humanities scholar. It is a barrier one has to overcome, and not being afraid to ask questions and make mistakes is probably an essential part of the process. To remind us in future of the fact that editing with TEI is a lot of work (so rather start sooner than later), David Birnbaum gave everybody a “Shut up and Code” button. I don’t regret having spent my birthday coding… it was fun! Thanks to the organisers (especially Tjamke Snijders and Els De Paermentier, UGent) and sponsors, and I really hope that a follow-up will take place next year. Why not have hands-on experience with XSLT or MALLET then?

Button

Upcoming: DigHum13 Summer School 2013

I just submitted my first-ever poster abstract for the DH Summer School that will take place in Leuven (Belgium) from 18 to 20 September 2013! Pretty exciting, I must say. I am really curious how the jury will like my idea about a modular digital edition of the Vierde Partie of the High German Spiegel Historiael. Of course I strongly hope they will accept it, so that I will be able to receive feedback and practical advice during the poster presentation from the senior researchers who will be present. But even without a poster, I cannot wait for the end of September. The programme is, so to say, mouthwatering…


The Berlin manuscript in its original binding

DHOXSS 2013: Digital Humanities in a nutshell

During the last evening of this year’s Digital Humanities @ Oxford Summer School, when the last survivors had gathered on the grounds of a lovely pub somewhere in the middle of nowhere in the fields of Oxford, someone completely uninvolved in the Summer School, a researcher in the field of, let me call it, “computer-related stuff” and a philosopher by origin, asked THE question: “So what are you doing as Digital Humanists? What makes you Digital Humanists?” Astonished silence followed the question for a while, then everybody stated their case…

What have we been doing during this week of Digital Humanities Summer School? I do not at all aim to resolve in this little reflection the theoretical discussion of “What are the Digital Humanities?” (as a start, I suggest checking the answers provided on the site http://whatisdigitalhumanities.com/), but rather aim to give some random impressions of what the Oxford Summer School was all about for me. First of all, I really appreciate that I was able to attend the 2013 edition of the Digital Humanities @ Oxford Summer School, which provided me with a great opportunity to learn more about several aspects of what “the humanities can do with computational methods” and about the specific concerns librarians and people working at museums and archives have in this area.

Maybe the most important lesson for me was how crucial it is to look beyond one’s own nose and to communicate with each other. Another important lesson was that using a computer does not mean doing Digital Humanities… Digital Humanities is all about making “stuff” available digitally, performing tasks that were traditionally very work-intensive or maybe not possible at all, asking new research questions, and sharing and connecting the results with fellow researchers and the public, as was stressed in the closing lecture on Friday by Lorna Hughes. A problem that was addressed again and again, especially by librarians, was the threat of data loss and the silent death of unlinked, difficult-to-access data. Securing data from digital projects is seen as one of the important new tasks of libraries, but one should not forget that securing the data in a sustainable way for the future is very expensive. It would be good if researchers considered these costs from the beginning and included them in their research grant proposals. Lorna also brought a slightly apocalyptic touch into her lecture on the last day when she began speculating about the Beast of Digitization and how it would look in a bestiary. In her opinion it would have two heads: 1) users who always want more data/content, and 2) the material, which gets finished at a certain point, together with the problem of needing more funding for new projects, also to secure jobs.

My specific aim in participating in this Summer School was to learn more about TEI-conformant XML, that is, learning the specific language to describe documents digitally for several purposes, like digital editions and data mining. The very practically minded workshop An Introduction to XML and the Text Encoding Initiative was given by Sebastian Rahtz, James Cummings, and Ian Matzen, and they really did their best to prepare us in a very short time to work independently with Oxygen and the TEI Guidelines on our own projects. The next step would be to also learn how to influence the output of the transcribed text, but that will be for another time… One point that became clear to me during the discussions in the workshop sessions is that although TEI is really a great tool for digital editing, it is not a magic weapon. We are still struggling with the problem of trying to describe in a linear way something that is in many cases not linear, think of pages scribbled all over the place, and then the encoding gets a bit messy…

To sum up, I have spent a fantastic, very energetic week, learning many things, having some beliefs shaken, and meeting many amazing people! And last but not least, I slipped away one morning to examine some manuscripts in the Bodleian Library: such fantastic staff, such a great experience!

Hope to see you back next year!

Short glosses: the food at lunch at Wolfson College was marvellous; the teaching facilities were generally OK (though tables were missing in the Wolfson College Lecture Theatre, how to take notes, especially with a laptop…), but really cold due to the air conditioning; the dinner at Queens College was quite an experience… yes, it looked very nice and indeed almost like Harry Potter, but the food was rather bad, the wine was finished very quickly, and we were all forced to leave very soon. I know this kind of place under the name of tourist trap…

Most hilarious moments: missing the boat to go punting with Sebastian Rahtz, and misunderstanding Irish English… I thought the guy was talking about kicking some poor dogs when he was actually saying: “Did you see me given the ducks a bit of a cake?”