Archive for the ‘Tools’ Category

Digital Classicist Wiki editing sprints

Wednesday, May 15th, 2019

The regular Digital Classicist Wiki editing sprints that we used to run have stalled in the last year or so, but we will be restarting them as of next month.

For now, sprints will run on the first Tuesday of every month, at 16:00–18:00 UK time.

  • June 4, 2019
  • July 2, 2019
  • August 6, 2019

Information on what we get up to and what we would like to achieve can be found at the Wiki Editing page.

If you want to chat with other sprinters in real time, you may join the DigiClass IRC Channel.

If you don’t yet have an account on the Digital Classicist Wiki and would like one, please contact any of the administrators named at the Members page and we will create an account for you.

We would be happy to receive suggestions of themed sprints in the future. (In the past we have run sprints on geography, papyrology, language technologies, and other topics.) Maybe suggest them here on the Digiclass list, and see who else might be interested.

Other suggestions and ideas welcome!

Sunoikisis Digital History and Archaeology, Fall 2016

Wednesday, October 5th, 2016

The fall programme of the Sunoikisis Digital Classics course has just started, with online sessions via YouTube on Thursdays at 16h00 UK/11h00 New York. This semester, focussed on objects, space and heritage data, runs in parallel with courses taught at the Institute of Classical Studies, University of London (ICS03), and the Institute for the Study of the Ancient World, New York University (ISAW-GA-3024), and includes contributions from colleagues and students worldwide.

The full programme can be found on GitHub at SunoikisisDC, and the sessions will be streamed on our YouTube channel.

  1. Sep 29. Introduction: Object artefact script (Gabriel Bodard)
  2. Oct 6. 3D Imaging, Photogrammetry (Sebastian Heath)
  3. Oct 13. Geography 1: Gazetteers (Valeria Vitale, Usama Gad and Gabriel Bodard)
  4. Oct 20. 3D Modelling, Computer Aided Design (Valeria Vitale)
  5. Oct 27. Geography 2: Carto (Tom Elliott)
  6. Nov 3. Geography 3: GIS (Leif Isaksen)
  7. Nov 10. Ontologies and Data Modelling (Arianna Ciula and Charlotte Tupman)
  8. Nov 17. Data Structuring and Querying (Tom Elliott and Sebastian Heath)
  9. Nov 24. Data Visualization (Jonathan Blaney, Sarah Milligan, Jane Winters)
  10. Dec 1. Network Analysis (Silke Vanbeselaere and Greg Woolf)
  11. Dec 8. Crowdsourcing Heritage and Conservation (John Pearce)
  12. Dec 15. Historical sources (Monica Berti)

All are welcome to follow these sessions remotely (live or after the event). Please get in touch if you would like to get involved more directly with these or future Sunoikisis DC programmes.

The Digital Fragmenta Historicorum Graecorum (DFHG) is online

Sunday, September 11th, 2016

The Digital Fragmenta Historicorum Graecorum (DFHG) is a project directed by Monica Berti at the Alexander von Humboldt Chair of Digital Humanities at the University of Leipzig for producing the digital version of the five volumes of the Fragmenta Historicorum Graecorum (FHG) edited by Karl Müller in the 19th century.

The FHG consists of a survey of excerpts from many different sources pertaining to more than 600 Greek fragmentary authors. Excluding the first volume, authors are chronologically distributed and cover a period of time from the 6th century BC through the 7th century CE. Fragments are numbered sequentially and arranged according to works and book numbers, when these pieces of information are available in the source texts preserving the fragments. Every Greek fragment is accompanied by a Latin translation or summary.

The digital version of FHG vol. 1 is now available online with search functionalities and citation extraction (CTS and CITE URNs). It collects the fragments of 6th-4th century authors (Hecataeus of Miletus, Charon of Lampsacus, Xanthus of Lydia, Hellanicus of Lesbos, Pherecydes of Athens, Acusilaus of Argos, Ephorus of Cuma, Theopompus of Chius, and Phylarchus), Apollodorus of Athens (with fragments of the Bibliotheca), historians of Sicily (Antiochus of Syracuse, Philistus of Syracuse, Timaeus of Tauromenius), and the Atthidographers (Clidemus, Phanodemus, Androtio, Demo, Philochorus, and Ister).


Harpokration Online

Thursday, February 11th, 2016

Posted for Joshua Sosin:

About eight months ago we announced a lightweight tool to support collaborative translation of Harpokration—we called it ‘Harpokration On Line.’ Well, we took our time (Mack finished a dissertation, John made serious progress on his, Josh did his first 24+ hour bike ride), and as of this afternoon there is at least one rough translation (in some cases more than one) for every entry.
We had help from others; I mention especially Chris de Lisle, whom we have never met, but who invested considerable effort, for which all should be grateful! And many special thanks to Matthew Farmer (U Missouri) who signed on at the moment when our to-do pile contained mainly entries that we had back-burnered, while we chewed through the easier ones!
So, we are done, but far from done. Now begins the process of correcting errors and infelicities, of which there will be many; adding new features to the tool (e.g. commentary, easy linking out to related digital resources such as Jacoby Online or Pleiades, enhanced encoding in the Greek and features built atop that, perhaps eventual reconciliation of text with Keaney as warranted). This is just a start really.
For next year we (Sosin & Duke Collaboratory for Classics Computing) plan a course at Duke in which the students will (1) start translating their way through Photios’ Lexicon in similar fashion and (2) work with Ryan Baumann and Hugh Cayless of the DC3 to help design and implement expanded features for the translation tool. We will welcome collaborators on that effort as well!
So, here again, please feel free to log in, fix, add, correct, disagree and so on. Please note that we do handle login via Google; so, if that is a deal-breaker for you, we apologize. We have a rough workaround for that and would be happy to test it out with a few folks, if any should wish.
Matthew C. Farmer
John P. Aldrup-MacDonald
Mackenzie Zalin

BL Labs competition 2016

Tuesday, January 26th, 2016

Forwarded for Mahendra Mahey:

It’s that time of year again and we are proud to announce that the annual BL Labs Competition and BL Labs Awards, celebrating the use of the British Library’s digital collections, are open for 2016!

The BL Labs Competition, which closes on 11 April 2016, is looking for innovative project ideas which use our digital collections in new and exciting ways. Two winners will be selected, who will get the opportunity to work on their projects in residence with BL Labs at the British Library for 6 months, where they will receive the necessary support to make their ideas happen. The Competition winner and runner-up will receive £3000 and £1000 respectively.

The BL Labs Awards, which close on 5 September 2016, recognise outstanding and innovative work carried out using our digital content in four key areas: Research, Commercial, Artistic and Teaching / Learning. A prize of £500 for the winner and £100 for the runner-up will be awarded in each category.

The winners, runners-up and other entrants’ work will be showcased and the prizes given at the annual BL Labs Symposium on 7 November at the British Library, St Pancras, London.

More information about the Competition and Awards is available via the Digital Scholarship blog:

Why not come to one of the 15 ‘BL Labs Roadshow 2016’ UK events we are running between February and April 2016, to learn more about our digital collections and discuss your ideas? This year, we are visiting institutions in: Aberystwyth, Birmingham, Brighton, Bristol, Cambridge, Edinburgh, Lancaster, London, Milton Keynes, Norwich, Sheffield and Wolverhampton. Please find further Roadshow details below:

Casaubon-Kaibel reference converter for the Deipnosophists of Athenaeus of Naucratis

Wednesday, December 9th, 2015

The Casaubon-Kaibel reference converter is a tool for finding concordances between the numerations used in the two editions of the 15 books of the Deipnosophists of Athenaeus of Naucratis by Isaac Casaubon and Georg Kaibel.

By inserting at least one of the two references (by Casaubon or by Kaibel), you get the corresponding reference with links to the relevant pages in the two editions.

Casaubon reference system – Isaac Casaubon (1597)
After the reference to the book number, this system includes an arabic numeral referring to the page of the edition of Casaubon followed by a letter (a-f) corresponding to the subdivision of the page into sections of about ten lines of text (e.g., 12.530d).

Kaibel reference system – Georg Kaibel (1887-1890)
In this system each book is logically divided into paragraphs corresponding to units of sense and the paragraphs are referred to with arabic numerals whose numeration starts again at the beginning of each book (e.g., 12.40).
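
The two systems described above can be captured in a short sketch. Assuming references always take the forms given as examples (e.g. ‘12.530d’ for Casaubon and ‘12.40’ for Kaibel), the first step of any converter is to parse them; the function names below are invented for illustration and this is not the converter’s actual code:

```python
import re

def parse_casaubon(ref):
    """Split a Casaubon reference such as '12.530d' into (book, page,
    section): book number, page of Casaubon's edition, letter a-f."""
    m = re.fullmatch(r"(\d+)\.(\d+)([a-f])", ref)
    if not m:
        raise ValueError(f"not a Casaubon reference: {ref!r}")
    return int(m.group(1)), int(m.group(2)), m.group(3)

def parse_kaibel(ref):
    """Split a Kaibel reference such as '12.40' into (book, paragraph);
    paragraph numeration restarts at the beginning of each book."""
    m = re.fullmatch(r"(\d+)\.(\d+)", ref)
    if not m:
        raise ValueError(f"not a Kaibel reference: {ref!r}")
    return int(m.group(1)), int(m.group(2))
```

The actual concordance between the two numerations is of course a lookup, not a calculation, which is why a tool like this converter is needed at all.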

Treebanking Ancient Greek in High School: what my students learned, what I learned

Monday, October 26th, 2015

Treebanking methodology has proven to be successful in the linguistic analysis of ancient Greek and Latin texts, and it has attracted steadily increasing interest over the last few years. It is certainly one of the most exciting innovations in the field of Classics. Now it is time to see if it can also play a role in improving traditional education, leading didactics into the digital world.

In spring 2015, I was doing a training placement at the Italian high school “Liceo Classico Socrate”, where I was allowed to run a little experiment. The school has a solid tradition in Classical education, focused on Greek and Latin culture and language, so it was the ideal environment in which to test the potential impact of digital tools on the students’ learning process. The theoretical premise for this attempt was that the methodology of translating ancient Greek in Italian high schools is similar to the process of dependency treebanking. The traditional method has a strong linguistic orientation: it requires a complete analysis of the text, divided into single sentences according to a specific hierarchical structure, and the assignment of morphological and syntactic values to every single element according to traditional grammars. These tasks are performed with very limited use of the dictionary; a complete translation then follows this established order, ensuring an easier conversion of each sentence into the new language. Treebanking seems to reproduce this process closely, while providing an alternative interface that is aesthetically enjoyable and visually useful.

The experiment was performed with 22 students of 14 years of age, who were beginners in ancient Greek. These young guinea pigs were involved in a four-day workshop on the Ancient Greek Dependency Treebank, which took place in the school’s informatics lab. Their regular teacher was an additional and enthusiastic participant.


Call for Participation: Text reuse workshop at DH Estonia 2015

Friday, September 4th, 2015

Text Reuse Workshop at DH Estonia 2015
21 October 2015

Hosted by the Estonian Literary Museum, Tartu, Estonia.
Organised by: Dr. Marco Büchler, Emily Franzini, Greta Franzini and Maria Moritz (eTRAP Early Career Research Group).

The Conference on translingual and transcultural digital humanities is hosting a one-day Text Reuse Workshop for participants interested in learning more about semi-automatic detection of text reuse in digital textual corpora. The workshop builds on eTRAP’s research activities, some of which deploy Marco Büchler’s TRACER tool. TRACER is a suite of algorithms aimed at investigating text reuse in multifarious corpora, be they prose or poetry, in Arabic or Estonian. TRACER provides researchers with statistical information about the texts under investigation, and its integrated reuse visualiser, the TRACER Debugger, displays occurrences of text reuse in a more readable format for further study.
This workshop seeks to teach participants to independently understand, use and run the TRACER tool on their own data-sets.
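
TRACER’s algorithms are far more sophisticated, but the basic intuition behind n-gram-based text reuse detection can be shown in a few lines. This is a deliberately crude sketch of the general technique, not TRACER’s method:

```python
def ngrams(text, n=3):
    """Set of word n-grams of a text, lowercased."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def reuse_score(a, b, n=3):
    """Jaccard overlap of word-trigram sets: a crude proxy for text
    reuse between two passages in the same language."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga or not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)
```

Real systems add normalisation (lemmatisation, stopword handling, fuzzy matching) on top of this kind of overlap measure, which is exactly the sort of refinement the workshop addresses.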

Eligibility & requirements
If you’re interested in exploring text reuse between two or multiple texts (in the same language) and would like to learn how to do it semi-automatically, this workshop is for you.
In order to provide everyone with adequate (technical) assistance, the workshop can only accommodate 10 participants.
To apply to the workshop, please send your CV and motivation letter to etrap-applications(at)gcdh(dot)de by September 2015.

For more information, please visit:

Linked Data for the Humanities Workshop in Oxford

Thursday, May 21st, 2015

Via Terhi Nurmikko:

Linked Data for the Humanities Workshop: A semantic web of scholarly data
Part of the Digital Humanities Oxford Summer School, held 20th – 24th July 2015.
Book your place via

Come and learn from experts and engage with participants from around the world, from every field and career stage. Develop your knowledge and acquire new skills to support your interest in Linked Data for the Humanities. Immerse yourself in this specialist topic for a week, and widen your horizons through the keynote and additional sessions.

The Linked Data in the Humanities workshop introduces the concepts and technologies behind Linked Data and the Semantic Web and teaches attendees how they can publish their research so that it is available in these forms for reuse by other humanities scholars, and how to access and manipulate Linked Data resources provided by others. The Semantic Web tools and methods described over the week use distinct but interwoven models to represent services, data collections, workflows, and the domain of an application. Topics covered will include: the RDF format; modelling your data and publishing to the web; Linked Data; querying RDF data using SPARQL; and choosing and designing vocabularies and ontologies.
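
To give a flavour of the ‘querying RDF data using SPARQL’ topic, here is a toy illustration of the underlying idea, triples matched against a basic graph pattern, in plain Python. The data and names are invented for the example; real work would use an RDF library and a SPARQL engine:

```python
# A miniature triple store: (subject, predicate, object) statements.
triples = [
    ("Homer", "wrote", "Iliad"),
    ("Homer", "wrote", "Odyssey"),
    ("Vergil", "wrote", "Aeneid"),
]

def match(pattern, store):
    """Match one triple pattern; strings starting with '?' are
    variables. Returns one binding dict per matching triple."""
    results = []
    for triple in store:
        binding = {}
        ok = True
        for term, value in zip(pattern, triple):
            if term.startswith("?"):
                binding[term] = value
            elif term != value:
                ok = False
                break
        if ok:
            results.append(binding)
    return results

# Rough analogue of: SELECT ?work WHERE { :Homer :wrote ?work }
works = [b["?work"] for b in match(("Homer", "wrote", "?work"), triples)]
```

A SPARQL engine generalises this to joins of many patterns over millions of triples, but the variable-binding idea is the same.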

The workshop comprises a series of lectures and hands-on tutorials. Lectures introduce theoretical concepts in the context of Semantic Web systems deployed in and around the humanities, many of which are introduced by their creators. Each lecture is paired with a practical session in which attendees are guided through their own exploration of the topics covered.

Book your place via

For more information, see the Digital Humanities Oxford Summer School website.

Sunoikisis DC Planning Seminar, Leipzig, February 16-18

Monday, February 16th, 2015

Sunoikisis is a successful national consortium of Classics programs developed by Harvard’s Center for Hellenic Studies. The goal is to extend Sunoikisis to a global audience and contribute to it with an international consortium of Digital Classics programs (Sunoikisis DC). Sunoikisis DC is based at the Alexander von Humboldt Chair of Digital Humanities at the University of Leipzig. The aim is to offer collaborative courses that foster interdisciplinary paradigms of learning. Master’s students of both the humanities and computer science are welcome to join the courses and work together by contributing to digital classics projects in a collaborative environment.


Reflecting on our (first ever) Digital Classicist Wiki Sprint

Wednesday, July 16th, 2014

From (Print) Encyclopedia to (Digital) Wiki

According to Denis Diderot and Jean le Rond d’Alembert, the purpose of an encyclopedia in the 18th century was ‘to collect knowledge disseminated around the globe; to set forth its general system to the people with whom we live, and transmit it to those who will come after us, so that the work of preceding centuries will not become useless to the centuries to come’. Encyclopedias have existed for around 2,000 years; the oldest is in fact a classical text, Naturalis Historia, written ca 77 CE by Pliny the Elder.

Following the (recent) digitization of raw data, new, digital forms of encyclopedia have emerged. In our very own, digital era, a wiki is a wider, electronic encyclopedia that is open to contributions and edits by interested parties. It contains concept analyses, images, media, and so on, and it is freely available, thus making the creation, recording, and dissemination of knowledge a democratised process, open to everyone who wishes to contribute.


A Sprint for Digital Classicists

For us Digital Classicists, scholars and students interested in the application of humanities computing to research in the ancient and Byzantine worlds, the Digital Classicist Wiki is a hub composed and edited by a community of such scholars and students. This wiki collects guidelines and suggestions on major technical issues, and catalogues digital projects and tools of relevance to classicists. The wiki also lists events, bibliographies and publications (print and electronic), and other developments in the field. A discussion group serves as grist for a list of FAQs. As members of the community provide answers and other suggestions, some of these may evolve into independent wiki articles providing work-in-progress guidelines and reports. The scope of the wiki follows the interests and expertise of collaborators, in general, and of the editors, in particular. The Digital Classicist is hosted by the Department of Digital Humanities at King’s College London, and the Stoa Consortium, University of Kentucky.

So how did we end up editing this massive piece of work? On Tuesday July 1, 2014, at around 16:00 GMT (or 17:00 CET), a group of interested parties gathered on several digital platforms. The idea was that most of the action would take place in the DigiClass chatroom on IRC, our very own channel called #digiclass. Alongside the traditional chat window, there was also a Skype voice call to get us started and discuss approaches before editing. On the side, we had a GoogleDoc where people simultaneously added what they thought should be improved or created. I was very excited to interact with members old and new. It was a fun break during my mini trip to the Netherlands, and, as it proved, very much in keeping with the general attitude of the Digital Classicist team: knowledge is open to everyone who wishes to learn, and can be the outcome of a joyful collaborative process.


The Technology Factor

As a researcher of digital history, and I suppose most information systems scholars would agree, technology is never neutral in the process of ‘making’. The magic of the wiki consists in the fact that it is a rather simple platform that can be easily tweaked. All users were invited to edit any page or create new pages within the wiki website, using only a regular web browser without any extra add-ons. A wiki makes page-link creation easy by showing whether an intended target page exists or not, and enables communities to write documents collaboratively, using a simple markup language and a web browser. A single page in a wiki website is referred to as a wiki page, while the entire collection of pages, usually well interconnected by hyperlinks, is ‘the wiki’: essentially a database for creating, browsing, and searching through information, allowing non-linear, evolving, complex and networked text, argument and interaction. Edits can be made in real time and appear almost instantly online, which can facilitate abuse of the system; private wiki servers (such as the Digital Classicist one) require user identification to edit pages, making the process somewhat more controlled. Most importantly, as researchers of the digital we understood in practice that a wiki is not a carefully crafted site for casual visitors. Instead, it seeks to involve the visitor in an ongoing process of creation and collaboration that constantly changes the website landscape.
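
The ‘red link’ mechanism, by which a wiki shows whether an intended target page exists, can be sketched in a few lines of Python. This is an illustration of the idea, not MediaWiki’s actual implementation, and the page names are invented:

```python
import re

# Invented example data: the set of pages that already exist.
existing_pages = {"EpiDoc", "Treebanking"}

def render_links(wikitext, pages):
    """Replace [[Target]] markup with a marker showing whether the
    target page exists (a wiki would style these blue vs. red)."""
    def repl(m):
        target = m.group(1)
        status = "existing" if target in pages else "missing"
        return f"[{target}:{status}]"
    return re.sub(r"\[\[([^\]|]+)\]\]", repl, wikitext)
```

Because a missing-page link is itself an invitation to create that page, this tiny mechanism is what makes wiki growth so frictionless.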


Where Technology Shapes the Future of Humanities

In terms of human resources, some people with little prior involvement in the Digital Classicist community got involved in several tasks, including correcting pages, suggesting new projects, adding pages to the wiki, helping others with information and background, and approaching project owners and leaders to suggest adding or improving information. Collaboration, a practice usually reserved for science scholars, made the process easier and intellectually stimulating. Moreover, within these overt cyber-spaces of ubiquitous interaction one could identify a strong sense of productive diversity within our own scholarly community; it was visible both in the IRC chat channel and over Skype. Several different accents and spellings, British, American, and continental, gathered to push forward this incredibly fast-paced process. There was a need to address research projects, categories, and tools found in non-English-speaking academic cultures. As a consequence of this multivocal procedure, more interesting questions arose, not least methodological ones: ‘What projects are defined as digital, really?’, ‘Isn’t everything a database?’, ‘What is a prototype?’, ‘Shouldn’t there be a special category for dissertations, or visualisations?’. The beauty of collaboration in all its glory, plus expanding our horizons with technology! And so much fun!

MediaWiki recorded almost 250 changes made on 1 July 2014!

The best news, however, is that this first ever wiki sprint was not the last. In the words of the organisers, Gabriel Bodard and Simon Mahony:

‘We have recently started a programme of short intensive work-sprints to improve the content of the Digital Classicist Wiki. A small group of us this week made about 250 edits in a couple of hours in the afternoon, and added dozens of new projects, tools, and other information pages.

We would like to invite other members of the Digital Classicist community to join us for future “sprints” of this kind, which will be held on the first Tuesday of every month, at 16h00 London time (usually =17:00 Central Europe; =11:00 Eastern US).

To take part in a sprint:

1. Join us in the DigiClass chatroom during the scheduled slot, and we’ll decide what to do there;

2. You will need an account on the Wiki; if you don’t already have one, please email one of the admins to be invited;

3. You do not need to have taken part before, or to come along every month; occasional contributors are most welcome!’

The next few sprints are scheduled for:
* August 5th
* September 2nd
* October 7th
* November 4th
* December 2nd

Please, do join us, whenever you can!



EpiDoc Latest Release (8.17)

Thursday, August 8th, 2013

Scott Vanderbilt has just announced the latest release of the EpiDoc Guidelines, Schema, and Example Stylesheets.

Details are available on the Latest Release page of the EpiDoc wiki at SourceForge.

Perseus Catalog Released

Friday, June 21st, 2013

From Lisa Cerrato via the Digital Classicist List:

The Perseus Digital Library is pleased to announce the 1.0 Release of the Perseus Catalog.

The Perseus Catalog is an attempt to provide systematic catalog access to at least one online edition of every major Greek and Latin author (both surviving and fragmentary) from antiquity to 600 CE. Still a work in progress, the catalog currently includes 3,679 individual works (2,522 Greek and 1,247 Latin), with over 11,000 links to online versions of these works (6,419 in Google Books, 5,098 to the Internet Archive, 593 to the Hathi Trust). The Perseus interface now includes links to the Perseus Catalog from the main navigation bar, and also from within the majority of texts in the Greco-Roman collection.

The metadata contained within the catalog has utilized the MODS and MADS standards developed by the Library of Congress as well as the Canonical Text Services and CTS-URN protocols developed by the Homer Multitext Project.  The Perseus catalog interface uses the open source Blacklight Project interface and Apache Solr. Stable, linkable canonical URIs have been provided for all textgroups, works, editions and translations in the Catalog for both HTML and ATOM output formats. The ATOM output format provides access to the source CTS, MODS and MADS metadata for the catalog records. Subsequent releases will make all catalog data available as RDF triples.
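
For readers unfamiliar with CTS URNs, their general shape is urn:cts:namespace:textgroup.work.version:passage. The minimal parser below is an illustration of that structure under that assumption, not part of the catalog’s codebase:

```python
def parse_cts_urn(urn):
    """Split a CTS URN into namespace, work component, and optional
    passage. The work component is dot-separated:
    textgroup.work.version."""
    parts = urn.split(":")
    if parts[:2] != ["urn", "cts"] or len(parts) < 4:
        raise ValueError(f"not a CTS URN: {urn!r}")
    namespace, work = parts[2], parts[3]
    passage = parts[4] if len(parts) > 4 else None
    return {"namespace": namespace,
            "work": tuple(work.split(".")),
            "passage": passage}
```

For example, a URN such as urn:cts:greekLit:tlg0012.tlg001.perseus-grc2:1.1 resolves to the Homeric textgroup (tlg0012), the Iliad (tlg001), a specific edition, and a passage reference, which is what makes such identifiers stable and machine-actionable.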

Other major plans for the future of the catalog include not only adding more authors and works, with links to online versions, but also opening up the catalog to contributions from users. Currently the catalog does not include any user contribution or social features other than standard email contact information, but the goal is soon to support the creation of user accounts and the contribution of recommendations, corrections, and/or new metadata.

The Perseus Catalog blog features documentation, a user guide, and contact information as well as comments from Editor-in-Chief Gregory Crane on the history and purpose of the catalog.

The Perseus Digital Library Team

Open Philology Project Announced

Thursday, April 4th, 2013

Via Marco Büchler, Greg Crane has just posted “The Open Philology Project and Humboldt Chair of Digital Humanities at Leipzig” at Perseus Digital Library Updates.

Abstract: The Humboldt Chair of Digital Humanities at the University of Leipzig sees in the rise of Digital Technologies an opportunity to re-assess and re-establish how the humanities can advance the understanding of the past and to support a dialogue among civilizations. Philology, which uses surviving linguistic sources to understand the past as deeply and broadly as possible, is central to these tasks, because languages, present and historical, are central to human culture. To advance this larger effort, the Humboldt Chair focuses upon enabling Greco-Roman culture to realize the fullest possible role in intellectual life. Greco-Roman culture is particularly significant because it contributed to both Europe and the Islamic world and the study of Greco-Roman culture and its influence thus entails Classical Arabic as well as Ancient Greek and Latin. The Humboldt Chair inaugurates an Open Philology Project with three complementary efforts that produce open philological data, educate a wide audience about historical languages, and integrate open philological data from many sources: the Open Greek and Latin Project organizes content (including translations into Classical Arabic and modern languages); the Historical Language e-Learning Project explores ways to support learning across barriers of language and culture as well as space and time; the Scaife Digital Library focuses on integrating cultural heritage sources available under open licenses.

Details of the project, its components, and rationale are provided in the original post.

Diccionario Griego-Español online

Friday, December 21st, 2012

Forwarded for Sabine Arnaud-Thuillier:

The members of the Diccionario Griego-Español project (DGE, CSIC, Madrid) are pleased to announce the release of DGE online, the first digital edition of the published section (α-ἔξαυος) of our Lexicon. Although still in progress, the DGE, written under the direction of Prof. F.R. Adrados, is currently becoming the largest bilingual dictionary of ancient Greek: it already includes about 60,000 entries and 370,000 citations of ancient authors and texts. Simultaneously, we are releasing LMPG online, the digital version of the Lexicon of Magic and Religion in the Greek Magical Papyri, written by Luis Muñoz Delgado (Supplement V of DGE). The digitization of this smaller lexicon is considered a successful prototype of this ambitious digitization initiative: in due course DGE online will be improved with similar advanced features, such as the implementation of a customized search engine. Any criticism and suggestions on that matter will be very welcome. We hope these new open-access dictionaries will be of interest and will become, to some extent, valuable tools for Ancient Greek studies.

Juan Rodríguez Somolinos (Main Researcher) and Sabine Arnaud-Thuillier (responsible for the digital edition)

Workshop on Canonical Text Services: Furman May 19-22, 2013

Tuesday, December 18th, 2012

Posted for Christopher Blackwell:

What · With funding from the Andrew W. Mellon Foundation, Furman University’s Department of Classics is offering a workshop on the Canonical Text Services Protocol.

When · May 19 – 22, 2013.

Where · Greenville, South Carolina, (Wikipedia); Furman University.

Who · Applications will be accepted from anyone interested in learning about exposing canonically cited texts online with CTS. We have funds to pay for travel and lodging for six participants.

How · Apply by e-mail to by January 31, 2013.

For more information, see or contact

“Europeana’s Huge Dataset Opens for Re-use”

Friday, September 14th, 2012

According to this press-release from Europeana Professional, the massive European Union-funded project has released 20 million records on cultural heritage items under a Creative Commons Zero (Public Domain) license.

The massive dataset is the descriptive information about Europe’s digitised treasures. For the first time, the metadata is released under the Creative Commons CC0 Public Domain Dedication, meaning that anyone can use the data for any purpose – creative, educational, commercial – with no restrictions. This release, which is by far the largest one-time dedication of cultural data to the public domain using CC0, offers a new boost to the digital economy, providing electronic entrepreneurs with opportunities to create innovative apps and games for tablets and smartphones and to create new web services and portals.

Upon registering for access to the Europeana API, developers can build tools or interfaces on this data, download metadata into new platforms for novel purposes, make money off it, perform new research, create artistic works, or almost anything else.
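
As a sketch of what a first call against the Europeana API might look like, the snippet below only constructs a search URL; the endpoint path and parameter names are my assumptions to be checked against the current API documentation, and the key placeholder is obviously hypothetical:

```python
from urllib.parse import urlencode

def europeana_search_url(query, api_key, rows=12):
    """Build a search URL for the Europeana REST API (endpoint and
    parameter names assumed; verify against the API docs)."""
    base = "https://api.europeana.eu/record/v2/search.json"
    params = {"wskey": api_key, "query": query, "rows": rows}
    return base + "?" + urlencode(params)

# "YOUR_API_KEY" is a placeholder for a real registered key.
url = europeana_search_url("greek inscription", "YOUR_API_KEY")
```

Fetching that URL would return a JSON result set of item metadata records, which is the raw material for the apps and mashups the press release envisages.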

It’s important to note that it’s only the metadata that is being freely released here: I did a search for some Greek inscriptions, and although photographs and transcriptions are available, these are all fiercely copyrighted by the Greek Ministry of Culture: “As for all monuments of cultural heritage, permission from the Greek Ministry of Culture is required for the reproduction of photographs of the inscriptions.” (According to this same license statement, even the metadata are licensed: “Copyright for all data in the collection belongs to the Institute for Greek and Roman Antiquity of the National Hellenic Research Foundation. These data may be used freely, provided that there is explicit reference to their provenance.” This seems slightly at odds with the CC0 claim of the Europeana site; no doubt closer examination of the legal terms would reveal which claim supersedes in this case.)

(It was lovely to be reminded that inscriptions provided by the Pandektis project [like this funerary monument for Neikagoras] have text made available in EpiDoc XML as well as a Leiden-formatted edition.)

It would be really good to hear about any implementations, tools or demos built on top of this data, especially if they had a classics component. Any pointers yet? Or do we need to organize a hackfest to get it started…?

Editing Athenaeus Hackathon: Berlin/Leipzig, October 10-12

Saturday, September 1st, 2012

The Banquet of the Digital Scholars

Humanities Hackathon on editing Athenaeus and on the Reinvention of the Edition in a Digital Space

October 10-12, 2012, Universität Leipzig (ULEI) & Deutsches Archäologisches Institut (DAI) Berlin

Co-directors: Monica Berti – Marco Büchler – Gregory Crane – Bridget Almas

The University of Leipzig will host a hackathon that addresses two basic tasks. On the one hand, we will focus upon the challenges of creating a digital edition for the Greek author Athenaeus, whose work cites more than a thousand earlier sources and is one of the major sources for lost works of Greek poetry and prose. At the same time, we will use the case of Athenaeus to develop our understanding of how to organize a truly born-digital edition, one that not only includes machine-actionable citations and variant readings but also collations of multiple print editions, metrical analyses, named entity identification, linguistic features such as morphology, syntax, word sense, and co-reference analysis, and alignment between the Greek original and one or more later translations.

Official Release of the Virtual Research Environment TextGrid

Friday, April 27th, 2012

TextGrid is a platform for scholars in the humanities which makes possible the collaborative analysis, evaluation, and publication of cultural materials (literary sources, images, and codices) in a standardized way. The central idea was to bring together instruments for working with texts under a common user interface. The workbench offers a range of tools and services for scholarly editing and linguistic research, which are extensible via open interfaces: editors for linking texts to one another or text sequences to images, tools for musical score editing, for gloss editing, for automatic collation, etc.

On the occasion of the official release of TextGrid 2.0, a summit will take place from the 14th to the 15th of May 2012. The summit will open on the 14th with a workshop day, on which participants can get an insight into some of the new tools; lectures and a discussion group are planned for the following day.

For more information and registration see this German website:

With kind regards

Celia Krause
Technische Universität Darmstadt
Institut für Sprach- und Literaturwissenschaft
Hochschulstrasse 1
64289 Darmstadt
Tel.: 06151-165555

TILE 1.0 released

Friday, July 22nd, 2011

Those who have been waiting impatiently for the first stable release of the Text Image Linking Environment (TILE) toolkit need wait no longer: the full program can be downloaded from: <>. From that site:

The Text-Image Linking Environment (TILE) is a web-based tool for creating and editing image-based electronic editions and digital archives of humanities texts.

TILE features tools for importing and exporting transcript lines and images of text, an image markup tool, a semi-automated line recognizer that tags regions of text within an image, and a plugin architecture for extending the functionality of the software.

I haven’t tried TILE out for myself yet, but I’m looking forward to doing so.

Digital Papyrology

Tuesday, October 26th, 2010

The following is a lightly edited version of a talk that I delivered at the 26th Congress of the International Association of Papyrologists, 19 August 2010, in Geneva (program), posted here upon nudging of G. Bodard.

Colleagues. It is a great honor and a privilege to be able to speak with you today. An honor and a privilege that, I hasten to add, I did not seek, but which a number of our colleagues insisted some months back that the members of this research team must try to live up to. If I approach this distinguished body with some trepidation, it is perhaps because my training as an epigraphist has conditioned me to a tone less attuned to collegiality than that which informs the papyrologists’ discipline. I should also add that I am here not to present my own work, but the fruits of a team whose members are in Heidelberg, London, New York, North Carolina, Alabama, and Kentucky, and who have been working heroically for more than three years now.

I shall aim to speak for no more than 40 minutes so that we may at least start discussions, which I know the rest of the team and I will be more than happy to carry on via email, Skype, phone, and separate face-to-face meetings. I will also add that, since the matters arising from this talk are highly technical in nature, we shall be more than happy to field questions as a team (I and my colleagues Rodney Ast, James Cowey, Tom Elliott, and Paul Heilporn) and in any of the languages within our competence.

First some background. I don’t need to tell you very much about the history of the Duke Data Bank of Documentary Papyri. It was founded in 1983, as a collaboration between William Willis and John Oates of Duke University, and the Packard Humanities Institute. A decade and a half later, around the time, as it happens, that APIS was also starting, the DDbDP decided to migrate from the old CD platform to the web. John in particular was committed to making the data available for free, to anyone who wanted access. The Perseus Project, at Tufts University, very kindly agreed to host the new online DDbDP, to develop a search interface, and to convert the data from the old Beta Code to a markup language called SGML, all at no cost to us. The DDbDP added a few thousand texts after switching from the Packard CD-ROM to Perseus. But the landscape changed dramatically from this point onward, and the DDbDP began to fall behind. The end of the CD-ROM meant the end of regular revenues to support data entry and proofreading. And of course, ongoing development of the search interface was not without cost to Perseus, whose generous efforts on our behalf were, as I mentioned, unremunerated. Within a few years the DDbDP was behind in data entry, and the search interface was not able to grow and mature in the ways that papyrologists wanted.


Citation in Digital Scholarship: A Conversation

Monday, October 4th, 2010

I’m writing to bring readers’ attention to a series of pages that is coming together on the Digital Classicist wiki under the rubric “Citation in digital scholarship” (category). I take responsibility/blame for initiating the project, but it has already benefitted from input by Matteo Romanello (author of CRefEx) and from comments by my colleagues at NYU’s Institute for the Study of the Ancient World. You’ll also see the influence of the Canonical Text Services.

A slight preview of what you’ll find there and of where this all might go:

  1. The goal is to provide a robust and simple convention for indicating that citations are present. How robust? How simple? At a bare minimum, just wrap a citation in '<a class="citation" href="">…</a>'. That will distinguish intellectually significant citations from other links (such as to a home page for the hosting project). I cribbed the 'class="citation"' string from Matteo’s articles cited at the bottom of the wiki page. Please also consider adding a 'title' and 'lang' attribute as described.
  2. We are also interested in encouraging convergence on best practices for communicating information about the entities being cited and about the nature of the citation itself:
    1. There is a page “Citations with added RDFa” that suggests conventions for using RDFa to add markup. It encourages use of Dublin Core terms.
    2. Matteo has begun a page “Citations with CTS and Microformats”. CTS, developed by Neel Smith and Chris Blackwell, is important by way of its potential to provide stable URIs to well-known texts.

    Merging these conventions is of ongoing interest. And they do illustrate that one goal is to converge on best practices that are extendable and not in unnecessary conflict with existing work.

  3. While it isn’t represented on the wiki yet, I intend to start a javascript library that will identify citations in a page (e.g. jQuery’s "$('.citation')") in order to present information about, along with options for following, a particular citation. Or to list and map all the dc:Location’s cited in a text. Etc.
  4. Closing the loop: this work overlaps with a meeting held by the ISAW Digital Projects Team in NYC last week. The preliminary result is a tool for managing URIs in a shared bibliographic infrastructure. This is one example of an entity that can produce embeddable markup conforming to the ‘class=”citation”‘ convention. Such markup would be consumable by the planned js library. Any project that produces stable URIs can have an “Embed a link to here.” (vel sim) widget that produces conforming html for authors to re-use.
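To make the convention concrete, here is a small sketch (the URLs, citation text, and helper function below are invented examples of my own, not part of any released library). In a browser, jQuery’s $('.citation') or the standard document.querySelectorAll('a.citation') would do the real work; the regex scan here is just a dependency-free stand-in that runs in plain Node:

```javascript
// A hypothetical page fragment using the 'class="citation"' convention,
// with the suggested 'title' and 'lang' attributes on the citation link.
const html = `
  <p>As noted in <a class="citation"
        href="http://example.org/texts/hdt/1.1"
        title="Herodotus, Histories 1.1" lang="grc">Hdt. 1.1</a>,
     and discussed <a href="http://example.org/">on our home page</a>.</p>
`;

// Stand-in for $('.citation'): collect every <a> tag whose class list
// contains "citation", and pull out its href and title attributes.
function findCitations(markup) {
  const anchors = markup.match(/<a\b[^>]*>/g) || [];
  return anchors
    .filter((tag) => /class="[^"]*\bcitation\b[^"]*"/.test(tag))
    .map((tag) => ({
      href: (tag.match(/href="([^"]*)"/) || [])[1],
      title: (tag.match(/title="([^"]*)"/) || [])[1],
    }));
}

console.log(findCitations(html));
// → [ { href: 'http://example.org/texts/hdt/1.1',
//       title: 'Herodotus, Histories 1.1' } ]
```

Note that the plain home-page link is ignored: only links explicitly marked with the citation class are treated as intellectually significant. A real implementation would of course use a proper DOM parser rather than regular expressions; the point is only to show what the convention makes mechanically recoverable.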

I’m grateful to Gabriel Bodard for letting me use the Digital Classicist wiki to start these pages and for encouraging me to summarize here. The effort is inspired by the observation that a little bit of common documentation, sharing, and tool building can lead to big wins for users and developers, as well as to greater interoperability for our citation practices going forward.

Comments here are very welcome.

Digital Classics Bibliography

Monday, September 6th, 2010

As part of my PhD in Digital Humanities I’m working on a literature review of  publications related to the theme “Classics and Computers”.

I’m looking specifically at general surveys, studies and discussions about the history of the relationship between classics and computers, a disciplinary field that has recently emerged as Digital Classics.

However, as Tom Elliott pointed out to me, Alison Babeu (Perseus Project) has recently published on CiteULike a much broader bibliography “as part of an IMLS-funded planning project that’s looking at digital infrastructure needs for Classics (Perseus and CLIR are the co-recipients of the grant)”.

For the time being, in order to allow anyone with an interest in this to contribute, I have created a group on Zotero called digitalclassics. The group is open (i.e. my authorisation is not needed to join), so please join it and start contributing your entries to the list. I’m thinking in particular of publications that I have unintentionally neglected and/or publications in other languages that I was not aware of.

Currently, the entries in the Zotero library are divided into two main categories: general studies and applications, where the latter is meant to host publications concerning specific Digital Classics-related applications. More subcategories may be added as we go along, and members of the group can even add new ones themselves. As soon as the bibliography reaches a reasonably stable shape, I will update the page I have already created on the Digital Classicist wiki.

I want to thank in advance the Digital Classicist community for the support they have shown me on the list and for the entries they have already started contributing.

Give a Humanist a Supercomputer…

Tuesday, December 22nd, 2009

The “Wired Campus” section of the Chronicle of Higher Education is reporting on the uses that humanities scholars have found for the U.S. Department of Energy’s High Performance Computing resources. The short article describes the efforts of several people who have made use of the resources, including Gregory Crane of the Perseus Project; David Bamman, a computational linguist who has been mining data from classical texts; and David Koller, a researcher with the Digital Sculpture Project, which has developed ways to coalesce numerous images of an object into a high-resolution 3D image. The article reports that, according to Mr. Koller, intermediaries are needed who can help humanities and computer researchers communicate with each other.

Ruins of Pompeii now in Google Street View

Friday, December 4th, 2009

The title says it all.  Check it out here.