Archive for the ‘Projects’ Category

Treebanking Ancient Greek in High School: what my students learned, what I learned

Monday, October 26th, 2015

Treebanking methodology has proven successful in the linguistic analysis of ancient Greek and Latin texts, and it has attracted steadily increasing interest over the last few years. It is certainly one of the most exciting innovations in the field of Classics. Now it is time to see whether it can also play a role in improving traditional education, leading didactics into the digital world.

In spring 2015, I was on a training placement at the Italian high school “Liceo Classico Socrate”, where I was allowed to run a small experiment. The school has a solid tradition of Classical education, focused on Greek and Latin culture and language, so it was the ideal environment in which to test the potential impact of digital tools on the students’ learning process. The theoretical premise for this attempt was that the methodology used for translating ancient Greek in Italian high schools resembles the process of dependency treebanking. The traditional method has a strong linguistic orientation: it requires a complete analysis of the text, divided into single sentences, according to a specific hierarchical structure, and the assignment of morphological and syntactic values to every single element according to traditional grammars. These tasks are performed with very limited use of the dictionary; a complete translation then follows this established order, so as to make the conversion of each sentence into the new language easier. Treebanking seems to reproduce this process closely, while providing an alternative interface that is aesthetically pleasing and visually useful.
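For readers unfamiliar with what a dependency annotation actually looks like, here is a minimal sketch of a single treebanked sentence, using a simplified, AGDT-style token list; the element and attribute names are illustrative assumptions rather than the official Ancient Greek Dependency Treebank schema.

```python
# Minimal sketch of one sentence in a dependency-treebank style annotation.
# The element and attribute names are simplified, AGDT-style assumptions,
# not the official Ancient Greek Dependency Treebank schema.
import xml.etree.ElementTree as ET

# "head" holds the id of the governing word; "0" marks the sentence root.
tokens = [
    {"id": "1", "form": "ὁ",        "relation": "ATR",  "head": "2"},
    {"id": "2", "form": "ἄνθρωπος", "relation": "SBJ",  "head": "3"},
    {"id": "3", "form": "γράφει",   "relation": "PRED", "head": "0"},
]

sentence = ET.Element("sentence", id="1")
for tok in tokens:
    ET.SubElement(sentence, "word", **tok)

print(ET.tostring(sentence, encoding="unicode"))
```

Each word points to the word that governs it, so the whole sentence resolves into a single hierarchical tree, which is exactly the structure students are asked to recover before translating.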

The experiment was carried out with 22 students, aged 14, who were beginners in ancient Greek. These young guinea pigs took part in a four-day workshop on the Ancient Greek Dependency Treebank, which was held in the school’s informatics lab. Their regular teacher was an additional and enthusiastic participant.


Harpokration On Line

Tuesday, June 2nd, 2015

The Duke Collaboratory for Classics Computing (DC3) is pleased to announce the Harpokration On Line project, which aims to provide open-licensed collaboratively-sourced translation(s) for Harpokration’s “Lexicon of the Ten Orators”.

Users can view and contribute translations on the project site, or download the project data from GitHub. Detailed instructions for contributing translations can also be found in the announcement blog post.

The project (and name) draw inspiration from the Stoa-hosted Suda On Line project.

The code used to run the project is also openly available. The project leverages the existing CTS/CITE architecture. This architecture, pioneered for other Digital Classics projects, allows us to build on well-developed concepts for organizing texts and translations, concepts which are transformable to other standards such as OAC and RDF. Driving a translation project using CITE annotations against passages of a “canonical” CTS text seemed a natural fit. Using Google Fusion Tables, Google authentication, and client-side JavaScript for the core of our current implementation has also allowed us to develop relatively lightweight contribution mechanisms rapidly, using freely available hosting and tools (GitHub Pages, Google App Engine) for the initial phases of the project.
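To illustrate the general shape of a CITE-style annotation against a passage of a canonical CTS text, here is a minimal sketch of how one contributed translation might be recorded; the URN, field names, and values are invented for illustration and are not the project's actual schema.

```python
# Minimal sketch of a translation contribution modelled as an annotation
# against a CTS passage. The URN and field names are hypothetical examples,
# not the actual Harpokration On Line data model.
import json

annotation = {
    # CTS URN identifying the passage being translated (hypothetical URN).
    "target": "urn:cts:greekLit:tlgX.tlg001.example:a.1",
    "body": "Sample translation text for the passage goes here ...",
    "contributor": "translator@example.org",
    "license": "CC-BY-SA",
}

# Rows of this shape could be appended to a shared table (the current
# implementation uses Google Fusion Tables) and re-serialised for the
# GitHub data dump.
print(json.dumps(annotation, indent=2, ensure_ascii=False))
```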

The DC3 is excited to see where this project leads, and hopes to also lead by example in publishing this project using open tools under an open license, with openly-licensed contributions.

Sunoikisis DC Planning Seminar, Leipzig, February 16-18

Monday, February 16th, 2015

Sunoikisis is a successful national consortium of Classics programs developed by Harvard’s Center for Hellenic Studies. The goal is to extend Sunoikisis to a global audience and to contribute to it with an international consortium of Digital Classics programs (Sunoikisis DC). Sunoikisis DC is based at the Alexander von Humboldt Chair of Digital Humanities at the University of Leipzig. The aim is to offer collaborative courses that foster interdisciplinary paradigms of learning. Master’s students in both the humanities and computer science are welcome to join the courses and work together by contributing to digital classics projects in a collaborative environment.


CFP: Seminar on Latin textual criticism in the digital age

Wednesday, October 1st, 2014

The Digital Latin Library, a joint project of the Society for Classical Studies, the Medieval Academy of America, and the Renaissance Society of America, with funding from the Andrew W. Mellon Foundation, announces a seminar on Latin textual criticism in the digital age. The seminar will take place on the campus of the University of Oklahoma, the DLL’s host institution, on June 25–26, 2015.

We welcome proposals for papers on all subjects related to the intersection of modern technology with traditional methods for editing Latin texts of all eras. Suggested topics:

  • Keeping the “critical” in digital critical editions
  • The scholarly value of editing texts to be read by humans and machines
  • Extending the usability of critical editions beyond a scholarly audience
  • Visualizing the critical apparatus: moving beyond a print-optimized format
  • Encoding different critical approaches to a text
  • Interoperability between critical editions and other digital resources
  • Dreaming big: a wishlist of features for the optimal digital editing environment

Of particular interest are proposals that examine the scholarly element of preparing a digital edition.

The seminar will be limited to ten participants. Participants will receive a stipend, and all travel and related expenses will be paid by the DLL.

Please send proposals of no more than 650 words to Samuel J. Huskey by December 1, 2014. Notification of proposal status will be sent in early January.

Postdoc: Hero’s Automata (Glasgow)

Thursday, August 7th, 2014

Posted for Ian Ruffell.

The post can be found on the University of Glasgow website via the search page (search within the College of Arts). Some more details about the project follow below.

Research Associate
Reference Number 009086
Closing date: August 24, 2014
Location Gilmorehill Campus / Main Building
College / Service COLLEGE OF ARTS
Job Family Research And Teaching
Position Type Full Time
Salary Range £32,590 – £36,661

Job Purpose

This post is part of the project ‘Hero of Alexandria and his Theatrical Automata’, funded by the Leverhulme Trust (PI: Dr Ian Ruffell, School of Humanities; Co-I: Dr Euan McGookin, School of Engineering). Based in the University of Glasgow (Classics, School of Humanities), the project runs from 1 October 2014 to 30 September 2017. The project investigates Hero of Alexandria’s treatise on the making of automata, and will design, build and test the models described in that work. The post is full-time and available for 36 months from 1 October 2014. The post holder will prototype, build and test versions of the automata, working in collaboration with the rest of the project team on the technical analysis of the text. The successful candidate will i) use 3D modelling (training will be provided) and rapid prototyping equipment to explore possible designs of the automata; ii) with the aid of technicians in the School of Engineering, build full-scale working models of the automata; iii) combine practical data with textual and contextual elements in the project website; and iv) test the scope and limitations of the models in performance, in dialogue with practitioners and audiences.

Suda On Line milestone reached

Thursday, July 31st, 2014

The Suda On Line: Byzantine Lexicography, affectionately known as SOL and one of the host of innovative projects begun by Ross Scaife and his collaborators, has now reached the amazing milestone of 100% translation coverage.

A translation of the last of the Suda’s 31,000+ entries was submitted to the database on July 21, 2014 and vetted the next day. This milestone is very gratifying, but the work of the project is far from over. As mentioned above, one of the founding principles of the project is that the process of improving and annotating our translations will go on indefinitely. Much important work remains to be done. We are also constantly thinking of ways to improve SOL’s infrastructure and to add new tools and features. If you are interested in helping us with the continuing betterment of SOL, please read about how you can register as an editor and/or contact the managing editors.

Although I was never involved in this project myself, I often use SOL as an example and case study in my teaching. With so much discussion nowadays about so-called ‘crowdsourcing’ and ‘community-sourcing’, this is surely the forerunner.

Reflecting on our (first ever) Digital Classicist Wiki Sprint

Wednesday, July 16th, 2014

From (Print) Encyclopedia to (Digital) Wiki

According to Denis Diderot and Jean le Rond d’Alembert the purpose of an encyclopedia in the 18th century was ‘to collect knowledge disseminated around the globe; to set forth its general system to the people with whom we live, and transmit it to those who will come after us, so that the work of preceding centuries will not become useless to the centuries to come’.  Encyclopedias have existed for around 2,000 years; the oldest is in fact a classical text, Naturalis Historia, written ca 77 CE by Pliny the Elder.

Following the (recent) digitisation of raw data, new, digital forms of the encyclopedia have emerged. In our very own, digital era, a wiki is a broader, electronic encyclopedia that is open to contributions and edits by interested parties. It contains concept analyses, images, media, and so on, and it is freely available, thus making the creation, recording, and dissemination of knowledge a democratised process, open to everyone who wishes to contribute.


A Sprint for Digital Classicists

For us Digital Classicists, scholars and students interested in the application of humanities computing to research on the ancient and Byzantine worlds, the Digital Classicist Wiki is composed and edited by a hub of scholars and students. The wiki collects guidelines and suggestions on major technical issues, and catalogues digital projects and tools of relevance to classicists. It also lists events, bibliographies and publications (print and electronic), and other developments in the field. A discussion group serves as grist for a list of FAQs. As members of the community provide answers and other suggestions, some of these may evolve into independent wiki articles offering work-in-progress guidelines and reports. The scope of the wiki follows the interests and expertise of collaborators in general, and of the editors in particular. The Digital Classicist is hosted by the Department of Digital Humanities at King’s College London and by the Stoa Consortium at the University of Kentucky.

So how did we end up editing this massive piece of work? On Tuesday, July 1, 2014, at around 16:00 GMT (or 17:00 CET), a group of interested parties gathered on several digital platforms. The idea was that most of the action would take place in the DigiClass chatroom on IRC, our very own channel called #digiclass. Alongside the traditional chat window, there was also a Skype voice call to get us started and to discuss approaches before editing. On the side, we had a Google Doc where people simultaneously added what they thought should be improved or created. I was very excited to interact with members old and new. It was a fun break during my mini trip to the Netherlands and, as it proved, very much in keeping with the general attitude of the Digital Classicist team: knowledge is open to everyone who wishes to learn and can be the outcome of a joyful collaborative process.


The Technology Factor

As a researcher of digital history, and I suppose most information-systems scholars would agree, technology is never neutral in the process of ‘making’. The magic of the wiki consists in the fact that it is a rather simple platform that can be easily tweaked. All users were invited to edit any page or to create new pages within the wiki website, using only a regular web browser without any extra add-ons. A wiki makes page-link creation easy by showing whether an intended target page exists or not, and it enables communities to write documents collaboratively, using a simple markup language and a web browser. A single page in a wiki website is referred to as a ‘wiki page’, while the entire collection of pages, which are usually well interconnected by hyperlinks, is ‘the wiki’. A wiki is essentially a database for creating, browsing, and searching through information, and it allows non-linear, evolving, complex and networked text, argument and interaction. Edits can be made in real time and appear almost instantly online, which can facilitate abuse of the system; private wiki servers (such as the Digital Classicist one) therefore require user identification to edit pages, making the process somewhat more controlled. Most importantly, as researchers of the digital we understood in practice that a wiki is not a carefully crafted site for casual visitors. Instead, it seeks to involve the visitor in an ongoing process of creation and collaboration that constantly changes the website’s landscape.
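As a small illustration of the mechanics described above, the “does the target page exist?” check that lets a wiki distinguish links to existing pages from links to pages still to be created can be reproduced against the standard MediaWiki web API; in the sketch below the endpoint URL is a placeholder, and you would substitute the real api.php address of the wiki in question.

```python
# Minimal sketch: ask a MediaWiki installation whether a page exists, the
# same information the wiki itself uses to render a link as "existing" or
# "still to be created". The API_URL below is a placeholder.
import requests

API_URL = "https://wiki.example.org/w/api.php"  # substitute the real api.php

def page_exists(title: str) -> bool:
    resp = requests.get(
        API_URL,
        params={"action": "query", "titles": title, "format": "json"},
        timeout=10,
    )
    resp.raise_for_status()
    pages = resp.json()["query"]["pages"]
    # MediaWiki marks non-existent pages with a "missing" key.
    return all("missing" not in page for page in pages.values())

print(page_exists("EpiDoc"))
```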


Where Technology Shapes the Future of Humanities

In terms of human resources, some participants with little previous involvement in the Digital Classicist community got themselves involved in several tasks, including correcting pages, suggesting new projects, adding pages to the wiki, helping others with information and background, and approaching project owners and leaders to suggest adding or improving information. Collaboration, a practice usually reserved for science scholars, made the process easier and intellectually stimulating. Moreover, within these overt cyber-spaces of ubiquitous interaction one could identify a strong sense of productive diversity within our own scholarly community; it was visible both in the IRC chat channel and over Skype. Scholars with different accents and spellings, British, American and continental European, were gathering to expand this incredibly fast-paced process. There was a need to address research projects, categories, and tools found in non-English-speaking academic cultures. As a consequence of this multivocal procedure, more interesting questions arose, not least methodological ones: ‘What projects are really defined as digital?’, ‘Isn’t everything a database?’, ‘What is a prototype?’, ‘Shouldn’t there be a special category for dissertations, or visualisations?’. The beauty of collaboration in all its glory, plus expanding our horizons with technology! And so much fun!

MediaWiki recorded almost 250 changes made on July 1, 2014!

The best news, however, is that this first ever wiki sprint was not the last. In the words of the organisers, Gabriel Bodard and Simon Mahony:

‘We have recently started a programme of short intensive work-sprints to improve the content of the Digital Classicist Wiki. A small group of us this week made about 250 edits in a couple of hours in the afternoon, and added dozens of new projects, tools, and other information pages.

We would like to invite other members of the Digital Classicist community to join us for future “sprints” of this kind, which will be held on the first Tuesday of every month, at 16h00 London time (usually =17:00 Central Europe; =11:00 Eastern US).

To take part in a sprint:

1. Join us in the DigiClass chatroom (instructions at <>) during the scheduled slot, and we’ll decide what to do there;

2. You will need an account on the Wiki; if you don’t already have one, please email one of the admins to be invited;

3. You do not need to have taken part before, or to come along every month; occasional contributors are most welcome!’

The next few sprints are scheduled for:
* August 5th
* September 2nd
* October 7th
* November 4th
* December 2nd

Please, do join us, whenever you can!



SNAP:DRGN introduction

Thursday, May 8th, 2014

Standards for Networking Ancient Prosopography: Data and Relations in Greco-Roman Names (SNAP:DRGN) is a one-year pilot project, based at King’s College London in collaboration with colleagues from the Lexicon of Greek Personal Names (Oxford), Trismegistos (Leuven), Duke, and Pelagios (Southampton), and it hopes to include many more data partners by the end of this first year. Much of the early discussion of this project took place at the LAWDI school in 2013. Our goal is to recommend standards for sharing relatively minimalist data about classical and other ancient prosopographical and onomastic datasets in RDF, thereby creating a huge graph of person-data that scholars can:

  1. query to find individuals, patterns, relationships, statistics and other information;
  2. follow back to the richer and fuller source information in the contributing database;
  3. contribute new datasets or individual persons, names and textual references/attestations;
  4. annotate to declare identity between persons (or co-reference groups) in different source datasets (a minimal RDF sketch of such an assertion appears after this list);
  5. annotate to express other relationships between persons/entities in different or the same source dataset (such as familial relationships, legal encounters, etc.);
  6. use URIs to annotate texts and other references to names with the identity of the person to whom they refer (similar to Pelagios’s model for places using Pleiades).
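To make point 4 slightly more concrete, here is a minimal sketch of a co-reference assertion expressed as RDF with the rdflib library; the namespace, the URIs, and the predicate are placeholders invented for the example, not the published SNAP:DRGN ontology.

```python
# Minimal sketch of a co-reference assertion between person records in two
# source datasets, expressed as RDF with rdflib. The URIs and the predicate
# are illustrative placeholders, not the published SNAP:DRGN ontology.
from rdflib import Graph, Namespace, URIRef

SNAPX = Namespace("http://example.org/snap-sketch/")  # placeholder namespace

lgpn_person = URIRef("http://example.org/lgpn/person/12345")
trismegistos_person = URIRef("http://example.org/trismegistos/person/67890")

g = Graph()
g.bind("snapx", SNAPX)
# Declare that the two database records refer to the same ancient person.
g.add((lgpn_person, SNAPX["identicalTo"], trismegistos_person))

print(g.serialize(format="turtle"))
```

Annotations of this kind, aggregated across many contributing datasets, are what would knit the individual databases into the single queryable graph described above.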

More detailed description (plus successful funding bid document, if you’re really keen) can be found at <>.

Our April workshop invited a handful of representative data-holders and experts in prosopography and/or linked open data to spend two days in London discussing the SNAP:DRGN project, their own data and work, and approaches to sharing and linking prosopographical data in general. We presented a first draft of the SNAP:DRGN “Cookbook”, the guidelines for formatting a subset of prosopographical data in RDF for contribution to the SNAP graph, and received some extremely useful feedback on individual technical issues and the overall approach. A summary of the workshop, and slides from many of the presentations, can be found at <>.

In the coming weeks we shall announce the first public version of the SNAP ontology, the Cookbook, and the graph of our core and partner datasets and annotations. For further discussion about the project, and linked data for prosopography in general, you can also join the Ancient-People Googlegroup (where I posted a summary similar to this post earlier today).

Leipzig Open Fragmentary Texts Series (LOFTS)

Monday, December 16th, 2013

The Humboldt Chair of Digital Humanities at the University of Leipzig is pleased to announce a new effort within the Open Philology Project: the Leipzig Open Fragmentary Texts Series (LOFTS).

The Leipzig Open Fragmentary Texts Series is a new effort to establish open editions of ancient works that survive only through quotations and text re-uses in later texts (i.e., those pieces of information that humanists call “fragments”).

As a first step in this process, the Humboldt Chair announces the Digital Fragmenta Historicorum Graecorum (DFHG) Project, whose goal is to produce a digital edition of the five volumes of Karl Müller’s Fragmenta Historicorum Graecorum (FHG) (1841–1870), the first large collection of fragments of the Greek historians ever produced.

For further information, please visit the project website.

Volunteers with excellent Latin sought

Sunday, December 8th, 2013

Ostrakon from Bu Njem

A few volunteers have started gathering for an interesting project, and it occurs to me that others may like to join us. This might be especially appropriate to someone with excellent Latin, a love for the subject, but no current involvement with the classics, and some spare time on their hands. A retired Latin teacher might fit the bill, or someone who completed an advanced classics degree some years ago, but now works in an unrelated field and misses working with ancient texts. Current students and scholars are also more than welcome to participate.

The site includes some 52,000 transcribed texts, of which about 2,000 are in Latin, and very few of these have been translated into English or any other modern language. The collaborative editing tool SoSOL (deployed on the site) allows users to add to or improve existing editions of papyrological texts, for example by adding new translations.

If you think you might like to take part in this exercise, take a look for instance at O. Bu Njem, a corpus of 150 ostraka from the Roman military base at Golas in Libya. The Latin texts (often fragmentary) are already transcribed; do you think you could produce an English translation of a few of these texts, which will be credited to you? Would you like a brief introduction to the SoSOL interface to enable you to add the translations yourself (pending approval by the editorial board)?

Publishing Text for a Digital Age

Friday, December 6th, 2013

March 27-30, 2014
Tufts University
Medford MA
perseus_neh (at)

Call for contributions!

As a follow-on to Working with Text in a Digital Age, an NEH-funded Institute for Advanced Technologies in the Digital Humanities and in collaboration with the Open Philology Project at the University of Leipzig, Tufts University announces a two-day workshop on publishing textual data that is available under an open license, that is structured for machine analysis as well as human inspection, and that is in a format that can be preserved over time. The purpose of this workshop is to establish specific guidelines for digital publications that publish and/or annotate textual sources from the human record. The registration for the workshop will be free but space will be limited. Some support for travel and expenses will be available. We particularly encourage contributions from students and early-career researchers.

Textual data can include digital versions of traditional critical editions and translations but such data also includes annotations that make traditional tasks (such as looking up or quoting a primary source) machine-actionable, annotations that may build upon print antecedents (e.g., dynamic indexes of places that can be used to generate maps and geospatial visualizations), and annotations that are only feasible in a digital space (such as alignments between source text and translation or exhaustive markup of morphology, syntax, and other linguistic features).
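To make these kinds of annotation a little more concrete, the sketch below records an alignment between a source passage and its translation, together with a morphosyntactic analysis of a single token; the URN and the field names are illustrative assumptions, not a prescribed interchange format.

```python
# Simplified sketch of two annotation types mentioned above: a source-to-
# translation alignment and a per-token morphosyntactic analysis. The URN
# and field names are illustrative, not a prescribed interchange format.

alignment = {
    "source_urn": "urn:cts:latinLit:phi0690.phi003:1.1",  # hypothetical example
    "source_text": "Arma virumque cano",
    "translation": "Arms and the man I sing",
    "license": "CC-BY-SA",
}

morphology = {
    "token": "cano",
    "lemma": "cano",
    "analysis": {"person": "1", "number": "sg", "tense": "pres",
                 "mood": "ind", "voice": "act"},
}

for record in (alignment, morphology):
    print(record)
```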

Contributions can be of two kinds:

  1. Collections of textual data that conform to existing guidelines listed below. These collections must include a narrative description of their contents, how they were produced and what audiences and purposes they were designed to serve.
  2. Contributions about formats for publication. These contributions must contain sufficient data to illustrate their advantages and to allow third parties to develop new materials.

All textual data must be submitted under a Creative Commons license. Where documents reflect a particular point of view by a particular author and where the original expression should for that reason not be changed, they may be distributed under a CC-BY-ND license. All other contributions must be distributed under a CC-BY-SA license. Most publications may contain data represented under both categories: the introduction to an edition or a data set, reflecting the reasons why one or more authors made a particular set of decisions, can be distributed under a CC-BY-ND license. All data sets (such as geospatial annotation, morphosyntactic analyses, reconstructed texts with textual notes, diplomatic editions, translations) should be published under a CC-BY-SA license.

Contributors should submit abstracts of up to 500 words to EasyChair. We particularly welcome abstracts that describe data already available under a Creative Commons license. 


January 1, 2014:  Submissions are due. Please submit via EasyChair.

January 20, 2014:  Notification.

Perseus Catalog Released

Friday, June 21st, 2013

From Lisa Cerrato via the Digital Classicist List:

The Perseus Digital Library is pleased to announce the 1.0 Release of the Perseus Catalog.

The Perseus Catalog is an attempt to provide systematic catalog access to at least one online edition of every major Greek and Latin author (both surviving and fragmentary) from antiquity to 600 CE. Still a work in progress, the catalog currently includes 3,679 individual works (2,522 Greek and 1,247 Latin), with over 11,000 links to online versions of these works (6,419 in Google Books, 5,098 to the Internet Archive, 593 to the Hathi Trust). The Perseus interface now includes links to the Perseus Catalog from the main navigation bar, and also from within the majority of texts in the Greco-Roman collection.

The metadata contained within the catalog utilizes the MODS and MADS standards developed by the Library of Congress, as well as the Canonical Text Services (CTS) and CTS-URN protocols developed by the Homer Multitext Project. The catalog interface is built on the open-source Blacklight project and Apache Solr. Stable, linkable canonical URIs are provided for all textgroups, works, editions and translations in the catalog, in both HTML and ATOM output formats. The ATOM output format provides access to the source CTS, MODS and MADS metadata for the catalog records. Subsequent releases will make all catalog data available as RDF triples.
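As an illustration of how the stable URIs and ATOM output might be consumed programmatically, here is a minimal sketch; the base URL and URL pattern are placeholders (consult the catalog documentation for the actual addresses), while the URN itself is the CTS identifier for Homer's Iliad.

```python
# Minimal sketch: fetch an ATOM record for a catalogued work by CTS URN and
# list the entry titles. The base URL and path are placeholders; see the
# Perseus Catalog documentation for the real addresses.
import xml.etree.ElementTree as ET
import requests

BASE = "https://catalog.example.org"     # placeholder base URL
urn = "urn:cts:greekLit:tlg0012.tlg001"  # Homer, Iliad (textgroup.work)

resp = requests.get(f"{BASE}/catalog/{urn}.atom", timeout=10)
resp.raise_for_status()

feed = ET.fromstring(resp.content)
ns = {"atom": "http://www.w3.org/2005/Atom"}
for title in feed.findall(".//atom:title", ns):
    print(title.text)
```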

Other major plans for the future of the catalog include not only adding more authors and works, along with links to further online versions, but also opening up the catalog to contributions from users. Currently the catalog does not include any user contribution or social features beyond standard email contact information, but the goal is soon to support the creation of user accounts and the contribution of recommendations, corrections, and/or new metadata.

The Perseus Catalog blog features documentation, a user guide, and contact information as well as comments from Editor-in-Chief Gregory Crane on the history and purpose of the catalog.

The Perseus Digital Library Team

Duke Collaboratory for Classics Computing (DC3)

Wednesday, May 8th, 2013


We are very pleased to announce the creation of the Duke Collaboratory for Classics Computing (DC3), a new Digital Classics R&D unit embedded in the Duke University Libraries, whose start-up has been generously funded by the Andrew W. Mellon Foundation and Duke University’s Dean of Arts & Sciences and Office of the Provost.

The DC3 goes live 1 July 2013, continuing a long tradition of collaboration between the Duke University Libraries and papyrologists in Duke’s Department of Classical Studies. The late Professors William H. Willis and John F. Oates began the Duke Databank of Documentary Papyri (DDbDP) more than 30 years ago, and in 1996 Duke was among the founding members of the Advanced Papyrological Information System (APIS). In recent years, Duke led the Mellon-funded Integrating Digital Papyrology effort, which brought together the DDbDP, the Heidelberger Gesamtverzeichnis der Griechischen Papyrusurkunden Ägyptens (HGV), and APIS in a common search and collaborative curation environment, and which collaborates with other partners, including Trismegistos, Bibliographie Papyrologique, the Brussels Coptic Database, and the Arabic Papyrology Database.

The DC3 team will see to the maintenance and enhancement of data and tooling, cultivate new partnerships in the papyrological domain, experiment in the development of new complementary resources, and engage in teaching and outreach at Duke and beyond.

The team’s first push will be in the area of Greek and Latin Epigraphy, where it plans to leverage its papyrological experience to serve a much larger community. The team brings a wealth of experience in fields like image processing, text engineering, scholarly data modeling, and building scalable web services. It aims to help create a system in which the many worldwide digital epigraphy projects can interoperate by linking into the graph of scholarly relationships while maintaining the full force of their individuality.

The DC3 team is:

Ryan BAUMANN: Has worked on a wide range of Digital Humanities projects, from applying advanced imaging and visualization techniques to ancient artifacts, to developing systems for scholarly editing and collaboration.

Hugh CAYLESS: Has over a decade of software engineering expertise in both academic and industrial settings. He also holds a Ph.D. in Classics and a Master’s in Information Science. He is one of the founders of the EpiDoc collaborative and currently serves on the Technical Council of the Text Encoding Initiative.

Josh SOSIN: Associate Professor of Classical Studies and History, Co-Director of the DDbDP, Associate editor of Greek, Roman, and Byzantine Studies; an epigraphist and papyrologist interested in the intersection of ancient law, religion, and the economy.


HESTIA2: Exploring spatial networks through ancient sources

Thursday, April 25th, 2013

Copied from the Digital Classicist list on behalf of the organisers:


HESTIA2: Exploring spatial networks through ancient sources

University of Southampton 18th July 2013
Organisers: Elton Barker, Stefan Bouzarovski, Leif Isaksen and Tom Brughmans, in collaboration with The Connected Past

A free one-day seminar on spatial network analysis in archaeology, history, classics, teaching and commercial archaeology.

Spatial relationships are everywhere in our sources about the past: from the ancient roads that connect cities, or ancient authors mentioning political alliances between places, to the stratigraphic contexts archaeologists deal with in their fieldwork. However, as datasets about the past become increasingly large, these spatial networks become ever more difficult to disentangle. Network techniques allow us to address such spatial relationships explicitly and directly through network visualisation and analysis. This seminar aims to explore the potential of such innovative techniques for research, public engagement and commercial purposes.
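To give a flavour of what such network analysis involves in practice, here is a minimal sketch that turns place co-occurrences in a source into a network and ranks the places by degree centrality, using the networkx library; the passages are invented for illustration.

```python
# Minimal sketch: build a network of places mentioned together in the same
# passage of a source, then rank them by degree centrality. The passage
# data below are invented for illustration.
from itertools import combinations

import networkx as nx

passages = [
    ["Athens", "Sparta", "Corinth"],
    ["Athens", "Delphi"],
    ["Sparta", "Corinth"],
]

G = nx.Graph()
for places in passages:
    for a, b in combinations(places, 2):
        # Weight an edge by how often the two places co-occur.
        weight = G[a][b]["weight"] + 1 if G.has_edge(a, b) else 1
        G.add_edge(a, b, weight=weight)

print(nx.degree_centrality(G))
```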

The seminar is part of Hestia2, a public engagement project aimed at introducing a series of conceptual and practical innovations to the spatial reading and visualisation of texts. Following on from the AHRC-funded “Network, Relation, Flow: Imaginations of Space in Herodotus’s Histories” (Hestia), Hestia2 represents a deliberate shift from experimenting with geospatial analysis of a single text to making Hestia’s outcomes available to new audiences and widely applicable to other texts, through a seminar series, online platform, blog and learning materials, with the purpose of fostering knowledge exchange between researchers and non-academics, and generating public interest and engagement in this field.

For this first Hestia2 workshop we welcome contributions addressing any of (but not restricted to) the following themes:

Spatial network analysis techniques
Spatial networks in archaeology, history and classics
Techniques for the discovery and analysis of networks from textual sources
Exploring spatial relationships in classical and archaeological sources
The use of network visualisations and linked datasets for archaeologists active in the commercial sector and teachers
Applications of network analysis in archaeology, history and classics

Please email proposed titles and abstracts (max. 250 words) to the organisers by May 13th 2013.

Open Philology Project Announced

Thursday, April 4th, 2013

Via Marco Büchler, Greg Crane has just posted “The Open Philology Project and Humboldt Chair of Digital Humanities at Leipzig” at Perseus Digital Library Updates.

Abstract: The Humboldt Chair of Digital Humanities at the University of Leipzig sees in the rise of Digital Technologies an opportunity to re-assess and re-establish how the humanities can advance the understanding of the past and to support a dialogue among civilizations. Philology, which uses surviving linguistic sources to understand the past as deeply and broadly as possible, is central to these tasks, because languages, present and historical, are central to human culture. To advance this larger effort, the Humboldt Chair focuses upon enabling Greco-Roman culture to realize the fullest possible role in intellectual life. Greco-Roman culture is particularly significant because it contributed to both Europe and the Islamic world and the study of Greco-Roman culture and its influence thus entails Classical Arabic as well as Ancient Greek and Latin. The Humboldt Chair inaugurates an Open Philology Project with three complementary efforts that produce open philological data, educate a wide audience about historical languages, and integrate open philological data from many sources: the Open Greek and Latin Project organizes content (including translations into Classical Arabic and modern languages); the Historical Language e-Learning Project explores ways to support learning across barriers of language and culture as well as space and time; the Scaife Digital Library focuses on integrating cultural heritage sources available under open licenses.

Details of the project, its components, and rationale are provided in the original post.

Official Release of the Virtual Research Environment TextGrid

Friday, April 27th, 2012

TextGrid is a platform for scholars in the humanities which makes possible the collaborative analysis, evaluation and publication of cultural remains (literary sources, images and codices) in a standardized way. The central idea was to bring together instruments for working with texts under a common user interface. The workbench offers a range of tools and services for scholarly editing and linguistic research, which can be extended through open interfaces: editors for linking texts to one another or text sequences to images, tools for musical score edition, for gloss editing, for automatic collation, and so on.

On the occasion of the official release of TextGrid 2.0, a summit will take place from the 14th to the 15th of May 2012. The summit will begin on the 14th with a workshop day, on which participants can get an insight into some of the new tools; lectures and a discussion group are planned for the following day.

For more information and registration, see the (German-language) summit website.

With kind regards

Celia Krause

Celia Krause
Technische Universität Darmstadt
Institut für Sprach- und Literaturwissenschaft
Hochschulstrasse 1
64289 Darmstadt
Tel.: 06151-165555

Linking Open Data: the Pelagios Ontology Workshop

Friday, March 18th, 2011

(To register to attend this workshop, please visit the registration page.)

The Pelagios workshop is an open forum for discussing the issues associated with and the infrastructure required for developing methods of linking open data (LOD), specifically geodata. There will be a specific emphasis on places in the ancient world, but the practices discussed should be equally applicable to contemporary named locations. The Pelagios project will also make available a proposal for a lightweight methodology prior to the event in order to focus discussion and elicit critique.
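As a rough illustration of the kind of lightweight link under discussion, the sketch below attaches a stable ancient-place URI (the Pleiades identifier for Rome) to a record in a hypothetical dataset using rdflib; the item URI and the predicate are placeholders invented for the example, not the methodology the project will be proposing.

```python
# Minimal sketch: link an item in a hypothetical dataset to a stable
# ancient-place URI (here, the Pleiades page for Rome). The item URI and
# predicate are placeholders, not Pelagios's recommended model.
from rdflib import Graph, Namespace, URIRef

EX = Namespace("http://example.org/vocab/")                # placeholder vocabulary
item = URIRef("http://example.org/mydataset/coin/42")      # placeholder record
rome = URIRef("https://pleiades.stoa.org/places/423025")   # Pleiades: Roma

g = Graph()
g.bind("ex", EX)
g.add((item, EX["refersToPlace"], rome))

print(g.serialize(format="turtle"))
```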

The one-day event will have 3 sessions dedicated to:
1) Issues of referencing ancient and contemporary places online
2) Lightweight ontology approaches
3) Methods for generating, publishing and consuming compliant data

Each session will consist of several short (15 min) papers followed by half an hour of open discussion. The event is FREE to all but places are LIMITED so participants are advised to register early. This is likely to be of interest to anyone working with digital humanities resources with a geospatial component.

Preliminary Timetable
10:30-1:00 Session 1: Issues
2:00-3:30 Session 2: Ontology
4:00-5:30 Session 3: Methods

Confirmed Speakers:

Johan Åhlfeldt (University of Lund) Regnum Francorum Online
Ceri Binding (University of Glamorgan) Semantic Technologies Enhancing Links and Linked Data for Archaeological Resources
Gianluca Correndo (University of Southampton) EnAKTing
Claire Grover (University of Edinburgh) Edinburgh Geoparser
Eetu Mäkelä (Aalto University) CultureSampo
Adam Rabinowitz (University of Texas at Austin) GeoDia
Sebastian Rahtz (University of Oxford) CLAROS
Sven Schade (European Commission)
Monika Solanki (University of Leicester) Tracing Networks
Humphrey Southall (University of Portsmouth) Great Britain Historical Geographical Information System
Jeni Tennison

Pelagios Partners also attending are:

Mathieu d’Aquin (KMi, The Open University) LUCERO
Greg Crane (Tufts University) Perseus
Reinhard Foertsch (University of Cologne) Arachne
Sean Gillies (Institute for the Study of the Ancient World, NYU) Pleiades
Mark Hedges, Gabriel Bodard (KCL) SPQR
Rainer Simon (DME, Austrian Institute of Technology) EuropeanaConnect
Elton Barker (The Open University) Google Ancient Places
Leif Isaksen (The University of Southampton) Google Ancient Places

Cultural Heritage Imaging workshop

Tuesday, January 25th, 2011

You are warmly invited to attend
“Digital Transformations: New developments in cultural heritage imaging”
a workshop on digital imaging to be held at the University of Oxford on Friday, 25 February 2011.

The workshop will focus on documentary evidence, from 3D capture techniques to reflectance transformation imaging (RTI). This workshop is part of the collaborative University of Oxford and University of Southampton pilot project “Reflectance Transformation Imaging (RTI) System for Ancient Documentary Artefacts”, supported by the AHRC DEDFI scheme.

Friday, 25 February 2011
Lecture Theatre, The Ioannou Centre for Classical and Byzantine Studies, 66 St. Giles’, Oxford OX1 3LU
Time: tbc

For free registration, further details and any queries, please see the workshop webpage.

Best wishes,
The RTISAD Team:
Alan Bowman, Charles Crowther
Jacob Dahl, Graeme Earl
Leif Isaksen, Kirk Martinez
Hembo Pagi, Kathryn E. Piquette

Three-year IT position: digitization of the Berlin papyrus collection

Wednesday, September 8th, 2010

Seen in a post, on various lists, by Fabian Reiter:

Dear colleagues,

As part of the digitisation project for the Berlin papyrus collection, funded by the Deutsche Forschungsgemeinschaft, the position of an IT specialist is to be filled for three years; see the job advertisement.

Virtual museum guide

Monday, February 22nd, 2010

Researchers at the Fraunhofer Institute for Computer Graphics Research have developed a computer system to recognize images in a museum and enhance them with digital information, and have deployed such a system in Amsterdam’s Allard Pierson Museum.  When visitors point a flat-screen computer’s digital camera at an image in the exhibition (for example, an image of the Roman Forum, the Temple of Saturn, or the Colosseum), the system overlays the scene with ancillary information, including a possible reconstruction of ruins.   Such a technology is called “augmented reality,” and it may become available to tourists via smart phones.

More spatial analysis…

Monday, January 4th, 2010

While on the subject of spatial analysis, I’m sure there are archaeologists and geographers here who would have useful suggestions for what we can do with the hi-res 3-D images of the Earth that the NASA SRTM project has made available. There’s a nice overview of the imagery, and some of the uses to which it’s already been put, in the post “Reading the world in Braille” at Integrity-Logic (coincidence that it’s International Braille Day today?).
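If you want to poke at the raw data yourself, here is a minimal sketch of loading a single SRTM tile with NumPy; it assumes a 3-arc-second “.hgt” tile, which is a raw 1201 by 1201 grid of big-endian 16-bit integers, and the tile filename is just an example.

```python
# Minimal sketch: load one SRTM .hgt tile into a NumPy array and report a
# few elevation statistics. Assumes a 3-arc-second tile (a 1201 x 1201 grid
# of big-endian signed 16-bit integers); 1-arc-second tiles are 3601 x 3601.
import numpy as np

SIZE = 1201  # samples per side for SRTM3

def load_hgt(path: str) -> np.ndarray:
    data = np.fromfile(path, dtype=">i2")  # big-endian int16
    return data.reshape((SIZE, SIZE))

elevations = load_hgt("N41E012.hgt")     # example tile (covers Rome)
valid = elevations[elevations > -32768]  # -32768 marks voids in SRTM data
print(valid.min(), valid.max(), valid.mean())
```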

So we’ve discussed what to do with a million books; now what do we do with quadrillions of bytes of geodata? Answers on the back of a postcard (or in a comment) please.

Give a Humanist a Supercomputer…

Tuesday, December 22nd, 2009

The “Wired Campus” section of the Chronicle of Higher Education is reporting on the uses that humanities scholars have found for the U.S. Department of Energy’s High Performance Computing resources.  The short article reports on the efforts of several people who have made use of the resources, including Gregory Crane of the Perseus Project, David Bamman, a computational linguist who has been mining data from classical texts, and David Koller, a researcher with the Digital Sculpture Project, which has developed ways to coalesce numerous images of an object into a high-resolution 3D image.  The article reports that, according to Mr. Koller, intermediaries are needed who can help humanities and computer researchers communicate with each other.

History in 3D

Wednesday, November 25th, 2009

Scientists in the European joint project 3D-COFORM are creating three-dimensional digital models of artifacts such as statues and vases.  Besides making for an exciting viewing experience, the 3D models constitute comprehensive documentation of objects that is useful to conservators.  The longer-term goal of correlating 3D data between different objects is still a long way off.  Read about it here.

Rome was built in a day…

Friday, September 18th, 2009

… with hundreds of thousands of digital photos.

University of Washington researchers have developed a computer system to combine tourist photos gathered from a photo-sharing site into a 3D digital model. Using advanced techniques, they were able to ‘build’ a model of Rome from photos tagged with “Rome” or “Roma” in just 21 hours.

Read an article about it  here, and visit the project’s web page here.

UK team digs into data from scroll scans

Saturday, August 29th, 2009

Here’s a recent article from the Lexington (Kentucky) Herald-Leader about the activities of the EDUCE project.  It sounds like they’re at an exciting and critical point.  According to lead researcher and computer science professor Brent Seales:

“We’re starting the serious work now,” Seales said. “In a few weeks, we should know whether we’ll be able to tease out some of the writing. Seeing the text is going to be the trick, but we have some tricks of our own that we think will help.”

The story links to an informative video on YouTube, entitled “Reading the Unreadable,” apparently published in January of this year.