Archive for the ‘Conferences’ Category

CALL FOR PARTICIPATION: eHumanities Workshop at 40th Annual Meeting of the German Computer Science Society in Leipzig, Germany

Friday, July 23rd, 2010

Marco Büchler asked me to post the following notice:

Workshop: eHumanities – How does computer science benefit?
Organisers: Prof. Gerhard Heyer and Marco Büchler (Natural Language Processing / CS, University of Leipzig)

The workshop comprises presentations not only by computer scientists but also by researchers from the humanities and from infrastructure projects. Humanists are very welcome!

Conference: Sept. 27th – Oct. 1st, 2010
eHumanities workshop: Thursday Sept. 30th.

Registration details:
Early bird registration deadline: July 30th, 2010
Registration page:

Workshop description:
In recent years the text-based humanities and social sciences have experienced a synthesis between the increasing availability of digitized texts and algorithms from the fields of information retrieval and text mining. This synthesis has resulted in novel tools for text processing and analysis, and has enabled entirely new questions and innovative methodologies.

The goal of this workshop is to investigate which consequences and potentials for computer science have emerged in turn from the digitization of the social sciences and humanities.


Papyrology and technology

Monday, July 5th, 2010

(Thanks to Gregg Schwendner for posting the papyrological congress programme at What’s New in Papyrology.)

Thursday August 19th, morning
88. DIGITAL TECHNOLOGY AND TOOLS OF THE TRADE I (Adam Bülow-Jacobsen presiding)
89. Herwig Maehler Die Zukunft der griechischen Papyrologie
90. Bart Van Beek Papyri in bits & bytes – electronic texts and how to use them
91. Marius Gerhardt Papyrus Portal Deutschland

101. Reinhold Scholl Textmining und Papyri
102. Herbert Verreth Topography of Egypt online

107. Joshua Sosin / James Cowey Digital papyrology: a new platform for collaborative control of DDbDP, HGV, and APIS data (plenary session in Room MR080, 1 hour)

Friday August 20th, morning
133. Giovanna Menci Utilità di un database di alfabeti per lo studio della scrittura greca dei papiri
134. Marie-Hélène Marganne Les extensions du fichier Mertens-Pack3 du CEDOPAL
135. Robert Kraft Imaging the papyri collection at the University of Pennsylvania Museum (Philadelphia PA, USA)

146. Roger T. Macfarlane / Stephen M. Bay Multi-Spectral Imaging and Papyrology: Advantages and Limitations
147. Adam Bülow-Jacobsen Digital infrared photography of papyri and ostraca

So this astonishingly rich programme of digital topics at the International Papyrological Congress this year makes me wonder: what would it take to get this much digital interest at a major epigraphic meeting, or the annual Classics meetings, for that matter? (A couple of Digital Classicist panels at recent APA/AIA and CA conferences notwithstanding–there’s nothing as diverse and in-the-wild as the above at any Classics conference I’ve been to in recent years.) Can we do anything about this with top-down encouragement, or does it have to be a natural ground-swell? Or is papyrology just a naturally more technical subdiscipline than the rest of Classics?

TEI Annual Meeting: Call for Papers

Monday, May 10th, 2010

Deadline extended to May 15, see original call.

DH2010 and workshops

Thursday, March 4th, 2010

Registration for the 2010 Digital Humanities conference (July 7-10, 2010, King’s College London) is now open.

In addition to the conference programme, seven workshops are offered between July 5 and 7. All are free for conference attendees.

  • Access to the Grid: Interfacing the Humanities with Grid technologies (Stuart Dunn)
  • Text Mining in the Digital Humanities (Marco Buechler et al., eAQUA Project)
  • Service-Oriented Computing in the Humanities (Nicolas Gold et al.)
  • Content, Compliance, Collaboration and Complexity: Creating and Sustaining Information (Joanne Evans et al.)
  • Designing a Digital Humanities Lab (Angela Veomett et al.)
  • Peer Reviewing Digital Archives: the NINES model (Dana Wheeles et al.)
  • Introduction to Text Analysis using JiTR and Voyeur (Stéfan Sinclair et al.)

To find out more about these workshops, see Workshop Programme.

TEI Call for Papers

Monday, March 1st, 2010

Call for proposals
2010 Annual Meeting of the TEI Consortium

TEI Applied: Digital Texts and Language Resources

  • Meeting dates: Thu 11 November to Sun 14 November, 2010
  • Workshop dates: Mon 08 November to Wed 10 November, 2010

The Program Committee of the 2010 Annual Meeting of the Text Encoding Initiative Consortium invites proposals for individual papers, panel sessions, poster sessions, and tool demonstrations, particularly, but not exclusively, on digital texts, language resources, and any topic that applies the TEI to research.

III Incontro di Filologia Digitale (Verona, March 3-5)

Thursday, February 25th, 2010

Posted for Roberto Rosselli del Turco:

III Incontro di Filologia Digitale – Verona 3-5 marzo 2010
Sala Conferenze
Banco Popolare di Verona
Via san Cosimo, 10 Verona

Conference Programme

Mercoledì 3 marzo 2010

14.30 Saluti delle Autorità
15.00 Apertura dei lavori

15.00-15.45 Federico Giusfredi / Alfredo Rizza (Hethitisches Wörterbuch, Institut für Assyriologie und Hethitologie, Ludwig-Maximilians-Universität München – Dep. of Linguistics, UCB, Rotary International Ambassadorial Scholar)
Zipf’s Law and the Distribution of Signs

15.45-16.30 Manuela Anelli / Marta Muscariello / Giulia Sarullo (Istituto di Scienze dell’Uomo, del Linguaggio e dell’Ambiente, Libera Università di Lingue e Comunicazione IULM, Milano)
The Digital Edition of Epigraphic Texts as Research Tool: the ILA Project

16.30-17.15 Margherita Farina (Dipartimento di Scienze Storiche del Mondo Antico, Università di Pisa)
Electronic analysis and organization of the Syro-Turkic Inscriptions of China and Central Asia

Digital Imaging of Ancient Textual Heritage

Wednesday, February 17th, 2010

Posting this on behalf of the organisers.

Digital Imaging of Ancient Textual Heritage: Technological Challenges and Solutions

The Academy of Finland research unit ‘Ancient Greek written sources’ (CoE) is organizing a symposium “Digital Imaging of Ancient Textual Heritage: Technological Challenges and Solutions”. The symposium takes place on 28-29 October, 2010, in Helsinki, Finland.

The programme comprises two plenary sessions that are open to the public, two workshops that are intended for the speakers only, and one open session on the end-user perspective.

Participation in the symposium is free of charge (however, registration is compulsory). For accepted speakers, the CoE will cover travel and accommodation costs.

We would be grateful if the following short ad could be included in the web site of Digital Classicist to promote our symposium.

Maarit Kinnunen
tel. + 358 50 577 9153


Digital Imaging of Ancient Textual Heritage: Technological Challenges and Solutions 28-29 October 2010 in Helsinki, Finland.
Organizer: The Academy of Finland Research Unit “Ancient Greek written sources” (CoE)
Partner: The National Library of Finland
For more information, see

Digital Research and Collaborative Work (APA panel)

Wednesday, January 6th, 2010

There was a lot of talk of Digital Humanities at the MLA last week; as Hugh pointed out, though, there seems to be only one explicitly digital panel at our subject meeting, the APA/AIA in Anaheim. However, it should be a good one, and I’d encourage anyone with digital or collaborative interests to make sure to attend. The text below is taken from the APA programme, annotated by me:

Digital Research and Developments in Collaborative Work in Classics
FRIDAY January 8, 11:15 A.M. – 1:15 P.M. Elite Ballroom 3
Gabriel Bodard and Alex Lee, Organizers

The papers in this panel concern themselves with the implications of digital editing for the research process. ‘Editing’ in this context includes the collection, research, sharing, and preparation for publication of textual, historical, or archaeological material. The digital work, which is often seen as a tool en route to creating an online publication, also transforms the editor’s research—both in terms of the speed and the sequence with which we can perform certain tasks, and of the different and new sorts of questions that the data throws up for us to consider.

1. Valentina Asciutti & Stuart Dunn, King’s College London
Mapping Evidence for Roman Regionalism and Regional Literacy in Roman Britain from the Inscribed and Illustrated Objects (20 mins.)
*Read by Sebastian Heath*

2. Gabriel Bodard & Irene Polinskaya, King’s College London
A Digital Edition of IOSPE: Collaboration and Interoperability Enabled by e-Science Methods (20 mins.)
*Read by Tom Elliott*

3. Alex Lee, University of Chicago
Scholarly Editing in the Digital Age: the Archimedes Palimpsest as a Case Study (20 mins.)

Although two of the three papers will be read by someone other than their authors, the readers are themselves experts in closely related areas, and Alex, Tom and Sebastian (and other expert attendees, to be announced) will be conducting a round table discussion on the subject of digital research and collaboration for the remaining time of the session.

Computer Applications in Archaeology Conference (CAA2010)

Monday, October 12th, 2009

Conference: CAA 2010
XXXVIII Annual Conference on Computer Applications and Quantitative Methods in Archaeology “Fusion of Cultures”

Conference Dates: April 6-9, 2010
Conference Location: Granada, Spain

Upcoming Deadlines:

– Session proposals submission deadline November 15, 2009
– Round tables proposals submission deadline December 15, 2009
– Workshops proposals submission deadline January 31, 2010

Other important dates:
– Full papers submission will open on November 20th, 2009
– Full papers submission deadline December 15, 2009
– Short papers submission deadline January 31, 2010
– Poster submission deadline January 31, 2010
– Virtual theatre videos submission deadline January 31, 2010

The XXXVIII Annual CAA Conference will be held in Granada, Spain, from April 6 to 9, 2010, and is expected to bring together archaeologists, computer scientists, and mathematicians to explore and exchange knowledge in order to enhance our understanding of the past. Classical disciplines such as archaeology, anthropology, and geography, and more modern ones such as computer science, geomatics, and museology, exchange their most recent advances at the conference.
CAA 2010 is inspired by the concept of a “Fusion of Cultures”, which identifies the scope of the conference and the spirit of the historic city of Granada. The aim of the conference is to create a collaborative atmosphere among all disciplines, with participation via papers, posters, round tables, workshops, short papers, and a novel non-stop virtual theatre show.

DH2010: Digital Humanities 2010 CFP

Wednesday, October 7th, 2009

Forwarded from DH2010 committee:

We are pleased to announce the Call for Papers for the Digital Humanities 2010 Conference.

Alliance of Digital Humanities Organizations Digital Humanities 2010
Call for Papers
Abstract Deadline: Oct. 31, 2009

Proposals must be submitted electronically using the system which will be available at the conference web site from October 8th. Presentations may be any of the following:

• Single papers (abstract max of 1500 words)
• Multiple paper sessions (overview max of 500 words)
• Posters (abstract max of 1500 words)

Call for Papers Announcement

The International Programme Committee invites submissions of abstracts of between 750 and 1500 words on any aspect of humanities computing, broadly defined to encompass the common ground between information technology and problems in humanities research and teaching. We welcome submissions in all areas of the humanities, particularly interdisciplinary work. We especially encourage submissions on the current state of the art in humanities computing, and on recent developments.

Suitable subjects for proposals include, for example,

* text analysis, corpora, language processing, language learning
* IT in librarianship and documentation
* computer-based research in cultural and historical studies
* computing applications for the arts, architecture and music
* research issues such as: information design and modelling; the cultural impact of the new media
* the role of digital humanities in academic curricula

The special theme of the 2010 conference is cultural heritage old and new.


Codicology and Palaeography in the Digital Age (Munich, July 3-4, 2009)

Wednesday, May 20th, 2009

International Conference

Codicology and Palaeography in the Digital Age

Munich, 3-4 July 2009

The conference will focus on the challenges and consequences of using IT and the internet for codicological and palaeographic research. The authors of selected articles from an anthology to be published this summer by the Institute for Documentology and Scholarly Editing (IDE) will present and discuss their research results with scholars and experts working on ancient books and manuscripts. The presentations will address current issues in the following fields: manuscript catalogues and descriptions, digitization of manuscripts, collaborative systems for research on manuscripts, codicological databases, research based on digital resources, e-learning in palaeography, palaeographic databases (characters, scripts, scribes), (semi-)automatic recognition of scripts and scribes, digital tools for transcription, and visions and prototypes of other digital tools.

A panel discussion with renowned exponents of codicology and palaeography and contributors of cutting-edge research will provide an overview of the state of the art and open up new perspectives for codicological and palaeographic research in the “digital age”.

(More information including preliminary programme)

Ancient World and e-Science (report)

Tuesday, April 21st, 2009

On Saturday April 4, 2009, a panel on “Ancient World and e-Science”, organized by the Digital Classicist, was held at the Classical Association Annual Meeting at the University of Glasgow (full abstracts in GoogleDoc). The speakers and titles listed were:

  • Ryan Baumann & Gabriel Bodard, 3D Visualization and Digitization of Epigraphic Materials
  • Stuart Dunn, Seeing into the Past: Visualization, the ancient world, and the e-Science programme
  • Brian Fuchs, Rashmi Singhal, Jazz Mack Smith, & Gregory Crane, PhiloGrid: A Web Toolkit for the Ancient World
  • Caroline Macé, Ilse deVos, & Philippe Baret, Can phylogenetics methods help to cure contaminated textual traditions?

There was a slight change to the line-up on the day as Stuart Dunn’s attempts to reach Glasgow were scuppered by the incompetence of a budget airline: the three remaining papers were followed by 20 minutes open discussion, and then slightly early adjournment to the hotel bar.

Baumann spoke about the difficulties of reading, photographing, and visualizing curse tablets in general, and especially the steatite fragments from Amathous in Cyprus, which are translucent and therefore resistant both to normal photography and to the laser imaging used to take high-resolution 3-D images of inscribed objects. He then showed examples of a lead tablet (DT 25) which has degraded further in the century since it was transcribed, and argued that the high-quality imaging this project is piloting is an important conservation exercise, as well as having the potential to improve the interpretation and transcription of the texts. The remainder of the presentation was a demonstration of some of the techniques for taking and manipulating 3-D readings using the laser scanner.

Fuchs gave a detailed history of and report on the PhiloGrid services, created by Imperial College London and the Perseus Project as part of a JISC/NEH Transatlantic collaborative digitization grant from 2008-09. He summarised the objectives and achievements of the project, including the mounting of Perseus web services such as lexical and morphological tools, the construction of a citation framework based on FRBR, and the digitization of new content. He also gave an introduction to and invited all present to attend a workshop on Arabic web services to be held at Imperial College London on Wednesday May 13 (further details to be announced here soon).

Macé and de Vos introduced the work carried out by classicists and biologists at the Université Catholique de Louvain on using statistical and probabilistic phylogenetic software to try to reconstruct the stemma of a contaminated manuscript tradition. They tested the phylogenetic algorithms for fitness for this task by creating a fictional manuscript tradition for a small section of the text of Proclus, including both horizontal and vertical contamination. Two phylogenetic methods, parsimony analysis and bootstrap analysis, were applied to the data, with mixed results. Vertical contamination in particular still defeats these generic technologies, but further work may improve the accuracy of such tools. (This work, needless to say, will also result in more robust algorithms and methodologies for the biologists, so this is a true e-Science interdisciplinary collaboration that really does have research interest for both fields.)
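As a toy illustration of the parsimony principle mentioned above (this is not the Louvain team’s actual pipeline; the witnesses, readings, and stemmata below are entirely invented), the classic Fitch small-parsimony algorithm counts the minimum number of reading changes a candidate stemma would force us to assume:

```python
# Fitch small-parsimony: given a candidate stemma (a rooted binary tree
# whose leaves are surviving witnesses) and the readings each witness
# carries at a set of variant passages, count the minimum number of
# changes (scribal innovations) the stemma requires. Lower scores mean
# the stemma explains the evidence more economically.

def fitch_score(tree, readings, site):
    """Return (possible_states, min_changes) for one variant passage."""
    if isinstance(tree, str):                    # leaf = witness siglum
        return {readings[tree][site]}, 0
    left_set, left_cost = fitch_score(tree[0], readings, site)
    right_set, right_cost = fitch_score(tree[1], readings, site)
    common = left_set & right_set
    if common:                                   # branches agree: no new change
        return common, left_cost + right_cost
    return left_set | right_set, left_cost + right_cost + 1

def total_changes(tree, readings):
    """Sum the parsimony cost over all variant passages."""
    n_sites = len(next(iter(readings.values())))
    return sum(fitch_score(tree, readings, s)[1] for s in range(n_sites))

# A miniature invented tradition: four witnesses, five variant passages,
# each letter standing for one reading of that passage.
readings = {
    "A": "xxyxz",
    "B": "xxyyz",
    "C": "xyyyw",
    "D": "xyyyw",
}

# Two candidate stemmata as nested pairs (internal nodes = lost exemplars).
stemma1 = (("A", "B"), ("C", "D"))   # groups A+B against C+D
stemma2 = (("A", "C"), ("B", "D"))   # cuts across the shared readings

print(total_changes(stemma1, readings))   # cost of the first stemma
print(total_changes(stemma2, readings))   # cost of the second stemma
```

On this invented data the grouping of A+B against C+D requires three changes, while the alternative requires five, so parsimony prefers the first stemma. Contamination complicates exactly this picture: a witness that copies readings from two exemplars carries a mixture that no single tree explains economically, which is why the real experiment reported above found these methods only partially successful.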

Many thanks to all who contributed to this panel, including the audience members who took part in the lively discussion afterward. Clearly there is a call for discussion of e-Science issues at Classics venues.

InterFace 2009: First Call for Papers

Wednesday, March 4th, 2009

Forwarded for Leif Isaksen from the Antiquist list:


First Call for Papers

InterFace 2009:
1st National Symposium for Humanities and Technology

9-10 July, University of Southampton, UK.

InterFace is a new type of annual event. Part conference, part workshop, part networking opportunity, it will bring together postdocs, early career academics and postgraduate researchers from the fields of Information Technology and the Humanities in order to foster cutting-edge collaboration. As well as having a focus on Digital Humanities, it will also be an important forum for Humanities contributions to Computer Science. The event will furthermore provide a permanent web presence for communication between delegates both during, and following, the conference.

Delegate numbers are limited to 80 (half representing each sector) and all participants will be expected to present a poster or a ‘lightning talk’ (a two minute presentation) as a stimulus for discussion and networking sessions. Delegates can also expect to receive illuminating keynote talks from world-leading experts, presentations on successful interdisciplinary projects, ‘Insider’s Guides’ and workshops. The registration fee for the two-day event is £30. For a full overview of the event, please visit the website.

Paper Submissions:

If you are interested in attending, please submit an original paper, of 1500 words or less, describing an idea or concept you wish to present. Please indicate whether you would prefer to produce a poster or perform a 2-minute lightning talk. Papers must be produced as a PDF or in Microsoft Word (.doc) format and submitted through our EasyChair page:

– Register for an EasyChair account:
– Log in:
– Click New Submission at the top of the page and fill in the form.

Make sure you:
– Select your preference of lightning talk or poster.
– Select whether you are representing humanities or technology.
– Attach and upload your paper.

If you encounter any problems, please e-mail

A number of travel bursaries may be available to successful applicants – if you would like to be considered for one, please email and provide grounds for consideration.

Papers should focus on potential (and realistic) areas for collaboration between the Technology and Humanities Sectors, either by addressing particular problems, new developments, or both. Prior work may be presented where relevant but the nature of the paper must be forward-looking. As such, the scope is extremely broad but topics might include:


* 3D immersive environments
* Pervasive technologies
* Online collaboration
* Natural language processing
* Sensor networks
* The Semantic Web
* Agent based modelling
* Web Science


* Spatial cognition
* Text editing and analysis
* New Media
* Linguistics
* Applied sociodynamics & social network analysis
* Archaeological reconstruction
* Information Ethics
* Dynamic logics
* Electronic corpora

Due to the limited number of places, papers will be subject to review by committee in order to maintain quality and a balanced programme. Applicants will be notified by email as to their acceptance. Accepted papers will be published online one week in advance of the conference.

Important Dates:

* Paper Submission Deadline: 1 May 2009
* Acceptances Announced: 18 May 2009
* Conference: 9th-10th July 2009

Confirmed Speakers

* Dame Wendy Hall, University of Southampton,
President of the Association for Computing Machinery

Insider’s Guides:
* Stephen Brown, De Montfort University
President of the Association for Learning Technology

* Ed Parsons
Geospatial Technologist, Google

* Sarah Porter
Head of Innovation, JISC

Project Showcase:

* Mary Orr & Mark Weal, University of Southampton
Digital Flaubert

* Adrian Bell
The Soldier in Later Medieval England


Workshops:

1) Text Encoding Initiative (TEI)
Arianna Ciula, European Science Foundation & Sebastian Rahtz, Oxford

2) Visualisation
Facilitator TBC

3) Data Management
Facilitator TBC

4) New Media
Facilitator TBC

For further information, please visit the conference website.

Special issue of the DHQ in honour of Ross Scaife

Friday, February 27th, 2009

copied from Humanist:

From: Julia Flanders
Subject: DHQ issue 3.1 now available
We’re very happy to announce the publication of the new issue of DHQ:

DHQ 3.1 (Winter 2009)
A special issue in honor of Ross Scaife: “Changing the Center of
Gravity: Transforming Classical Studies Through Cyberinfrastructure”
Guest editors: Melissa Terras and Gregory Crane

Table of Contents

Acknowledgements and Dedications
Gregory Crane, Tufts University; Brent Seales, University of
Kentucky; Melissa Terras, University College London

Ross Scaife (1960-2008)
Dot Porter, Digital Humanities Observatory

Cyberinfrastructure for Classical Philology
Gregory Crane, Tufts University; Brent Seales, University of
Kentucky; Melissa Terras, University College London

Technology, Collaboration, and Undergraduate Research
Christopher Blackwell, Furman University; Thomas R. Martin, College
of the Holy Cross

Tachypaedia Byzantina: The Suda On Line as Collaborative Encyclopedia
Anne Mahoney, Tufts University

Exploring Historical RDF with Heml
Bruce Robertson, Mount Allison University

Digitizing Latin Incunabula: Challenges, Methods, and Possibilities
Jeffrey A. Rydberg-Cox, University of Missouri-Kansas City

Citation in Classical Studies
Neel Smith, College of the Holy Cross

Digital Criticism: Editorial Standards for the Homer Multitext
Casey Dué, University of Houston, Texas; Mary Ebbott, College of the
Holy Cross

Epigraphy in 2017
Hugh Cayless, University of North Carolina; Charlotte Roueché, King’s
College London; Tom Elliott, New York University; Gabriel Bodard,
King’s College London

Digital Geography and Classics
Tom Elliott, New York University; Sean Gillies, New York University

What Your Teacher Told You is True: Latin Verbs Have Four Principal Parts
Raphael Finkel, University of Kentucky; Gregory Stump, University of Kentucky

Computational Linguistics and Classical Lexicography
Gregory Crane, Tufts University; David Bamman, Tufts University

Classics in the Million Book Library
Gregory Crane, Tufts University; Alison Babeu, Tufts University;
David Bamman, Tufts University; Thomas Breuel, Technical University of
Kaiserslautern; Lisa Cerrato, Tufts University; Daniel Deckers,
Hamburg University; Anke Lüdeling, Humboldt-University, Berlin; David
Mimno, University of Massachusetts, Amherst; Rashmi Singhal, Tufts
University; David A. Smith, University of Massachusetts, Amherst; Amir
Zeldes, Humboldt-University, Berlin

Conclusion: Cyberinfrastructure, the Scaife Digital Library and
Classics in a Digital Age
Christopher Blackwell, Furman University; Gregory Crane, Tufts University

Best wishes from the DHQ editorial team

Les historiens et l’informatique, Roma, December 4-6, 2008

Sunday, November 23rd, 2008

From an announcement circulated by Marjorie Burghart:

Les historiens et l’informatique : un métier à réinventer

Jeudi 4 décembre – 14 h 30
Marilyn Nicoud (École française de Rome)
Accueil des participants

Jean-Philippe Genet (Université de Paris I)
Peut-on prévoir l’impact des transformations de l’informatique sur le travail scientifique de l’historien ?

L’historien et ses sources : archives et bibliothèques – 15 h 00
Anna Maria Tammaro (Università di Parma)
La biblioteca digitale verso la realizzazione dell’infrastruttura globale per gli studi umanistici

Roberto Delle Donne (Università di Napoli Federico II)
Storia e Open Archive

Christophe Dessaux (Ministère de la Culture et de la Communication)
De la numérisation des collections à Europeana : des contenus culturels pour la recherche

Gino Roncaglia (Università della Tuscia)
Libri elettronici : un panorama in evoluzione

Stefano Vitali (Archivio di Stato di Firenze)
I mutamenti nel mondo degli archivi

17 h 45-18 h 45 : Discussion

Vendredi 5 décembre – 9 h 00
Michele Ansani (Università di Pavia) et Antonella Ghignoli (Università di Firenze)
Testi digitali : nuovi media e documenti medievali

Pierre Bauduin (Université de Caen) et Catherine Jacquemard (Université de Caen)
La pratique de l’édition en ligne : expériences et questionnements

Paul Bertrand (IRHT, CNRS)
Autour de l’édition électronique et des digital humanities : nouvelle érudition, nouvelle critique ?

10 h 30-11 h 00 : Discussion

Rolando Minuti (Università di Firenze)
Insegnare storia al tempo del web 2.0 : considerazioni su esperienze e problemi aperti

Giulio Romero (Atelhis)
Métier d’historiens, métiers d’historien : les impératifs d’une formation ouverte

12 h 45-13 h 15 : Discussion

Communiquer – 15 h 00
Pietro Corrao (Università di Palermo)
L’esperienza di Reti Medievali

Christine Ducourtieux (Université de Paris I) et Marc Smith (École nationale des Chartes),
L’expérience de Ménestrel

16 h 00 – 16 h. 30 Discussion

Les nouveaux horizons du métier d’historien
Aude Mairey (CESCM, CNRS-Université de Poitiers)
Quelles perspectives pour la textométrie ?

Julien Alerini (Université de Paris I) et Stéphane Lamassé (Université de Paris I)
Données et statistiques : l’avenir du travail en ligne pour l’historien

17 h 45-18 h 15 : Discussion

Samedi 6 décembre – 9 h 00
François Giligny (Université de Paris I)
L’informatique en archéologie : une révolution tranquille ?

Jean-Luc Arnaud (Telemme, CNRS-Université de Provence)
Nouvelles méthodes, nouveaux usages de la cartographie et de l’analyse spatiale en histoire

Margherita Azzari (Università di Firenze)
Geographic Information Systems and Science. Stato dell’arte, sfide future

10 h-30-11 h 00 : Discussion

L’historien et l’outil informatique
Serge Noiret (European University Institute)
Fare storia a più mani con il web 2.0 : cosa cambia nelle pratiche degli storici ?

Philippe Rygiel (Université de Paris I)
De quoi le web est-il l’archive ? Lectures historiennes de l’activité réseau

Jean-Michel Dalle (Université Pierre et Marie Curie, Paris VI)
Peut-on penser le futur d’une communauté scientifique sans tenir compte de l’économie de l’innovation et de la créativité ?

12 h 45-13 h 30 : Discussion

Conclusions d’Andrea Zorzi (Università di Firenze)

If you want to attend, please contact Marilyn Nicoud or Grazia Parrino.

Classical panels at DRHA

Sunday, September 21st, 2008

This year’s Digital Resources for the Humanities and Arts conference (Cambridge, September 14-17) included a two-part panel on Digital Classicist (sadly divided over two days), organized by Simon Mahony, Stuart Dunn, and myself. Despite some apparently last-minute (and unannounced) scheduling changes, the panel was very successful. I post here only my brief notes on the papers involved, and hope that some of my colleagues may post more detailed reactions or reports either in comments, or as posts to this or other blogs.

Gabriel Bodard

I kicked off the first Classicists’ session on Monday morning with a brief history of the Digital Classicist community and a discussion of the different approaches to studying the use of digital methods in the study of the ancient world (contrasting the historical approach of Solomon 1993 with the forward-looking theme of Crane/Terras 2008, for which authors were asked to imagine their field within Classics in 2018). I talked in general terms about the different trajectories of two very early digital classical projects, the TLG and LGPN, both of which were founded in 1972. The TLG, while a technologically innovative project from the get-go, and one which changed (and continues to be indispensable to) the study of Greek literature, has not made a great contribution to the Digital Humanities because of its closed, for-profit, and self-sufficient strategy. The LGPN, on the other hand, began life as a very technologically conservative project, geared to the production of paper volumes of the Lexicon, and has always been reactive to changes in technology rather than proactive as the TLG was; as a result, however, it has been able to change with the times, adopting new database and web technologies as they appeared, and it is now actively contributing to the development of standards in XML, onomastics, and geo-tagging, and sharing data and tools widely. Finally I argued that any study of the community of digital Classics needs both to consider history (lessons to be learned from projects such as those discussed above, and from other venerable projects that are still innovative today, such as Perseus and the DDbDP) and to consider the newest technologies, standards, and cyberinfrastructures that will drive our work forward in the future.

(David Robey pointed out that Classics has an important and unique position within the UK arts and humanities community, in that the subject associations give validity and respectability through their support of and recognition for digital resources and research.)

Stuart Dunn

In a paper titled The UK’s evolving e-infrastructure and the study of the past, Stuart discussed the national e-Science agenda and how it relates to the practices and needs of the humanities scholar, using as a basis the research process of data collection, analysis, and publication/dissemination. The essential definition of e-Science is that it centres on scholarly collaboration across and between disciplines, and on the advanced computational infrastructure that enables this collaboration. e-Science often involves working with huge bodies of data or processing-intensive operations on complex material, and the example of this kind of research Stuart offered was not Classical but Byzantine: the use of agent-based modelling by colleagues in Birmingham to simulate the climactic battle of Manzikert. After some general conclusions on the opportunities for advanced e-infrastructure in the study of the ancient world, there was lively discussion of geospatial resources in the British and European academic spheres.

Simon Mahony

Simon gave a detailed presentation of the Humslides 2.0 project that he is conducting with the Classics department at King’s College London. It builds upon a pilot project in 2006-7 to digitise the teaching slide collections of the Classics department (as a pilot study for the School of Humanities), which adopted a free trial version of the ContentDM management system (the trial license has now expired and was not renewed); the new project will utilize Web 2.0 tools to present and organize some 7000 slides, with more metadata and more input from students and other contributors. A Humslides Flickr group has been established, inspired in part by the Commons group set up by the Library of Congress and now contributed to by several other major institutions. As well as providing a teaching resource (currently restricted to KCL students until some thorny copyright issues have been ironed out), the collection will be used for assessed coursework: students will contribute to the tagging and annotation of its images.

Elpiniki Fragkouli

Due to illness, Elpiniki’s paper on Training, Communities of Practice, and Digital Humanities was not delivered at this conference. We hope she will be willing to upload her slides to the Digital Classicist website for discussion.

Amy Smith (Leif Isaksen, Brian Fuchs)

The paper on Lightweight Reuse of Digital Resources with VLMA: perspectives and challenges, originally commissioned for the Digital Classicist panel, was at the last minute, and for unknown reasons, moved into a panel on Digital Humanities on Tuesday morning. Amy presented this paper, which discussed lessons learned from the Virtual Lightbox for Museums and Archives project (discussed in detail in their article in the special issue of the Digital Medievalist journal we edited). Some conclusions and discussion followed on the topic of RDF and other metadata standards, and on browser-based versus desktop applications for viewing and organizing remote objects.

John Pybus (Alan Bowman, Charles Crowther and Ruth Kirkham)

John’s presentation on A Virtual Research Environment for the Study of Documents and Manuscripts gave a succinct and very useful summary of the history of the VRE research that has been carried out by the Centre for the Study of Ancient Documents and the humanities VRE team in Oxford. The project is one of four demonstrator projects in the second phase of work, which began with a user requirements survey in 2006-7. Built using uPortal, the VRE allows remote, parallel, and dynamic consultation and annotation of texts, images, and other resources by multiple scholars simultaneously. John showed some examples of the functionality of the VRE platform, including: the ability to show side-by-side parallel views of a tablet (different images, or different renderings of the same image); the juxtaposition of multiple fragments in a lightbox; and the ability for scholars to share views and exchange instant messages.

Emma O’Riordan (Michael Fulford, et al.)

In a paper that discussed another project related to the Oxford VRE programme, the Virtual Environment for Research in Archaeology: a Roman case study at Silchester, Emma discussed the origins of the VERA system in the Integrated Archaeological Database (IADB) that has been in use at Silchester for several years. The VERA system allows almost instant publication of the year’s results (as compared to waiting several months for paper notes to be transcribed), and is both cheaper and more reliable than manual transcription; perhaps most importantly, the system enables live communication and collaboration between the archaeologists in the field and scholars in other parts of the world. Emma stressed one lesson from this project: the importance of working alongside computer scientists, so that development of functionality can take into consideration the needs of the archaeologists as well as the research interests of the programmers. It was interesting, however, that she also noted the potential pitfalls of too much tinkering with a tool while at work in the field.

Claire Warwick (Melissa Terras, et al.)

Originally scheduled in the second “Digital Humanities” panel on Tuesday morning, this paper followed logically on from Emma’s, and discussed Virtual Environments for Research in Archaeology (VERA): Use and Usability of Integrated Virtual Environments in Archaeological Research. Claire focussed on the evaluation and documentation of the unique needs of archaeologists in the field, and on some conclusions the VERA team have been able to draw from questionnaires, diaries, and anonymized interviews with the Silchester workers. Learning new IT skills was considered a burden by students who were already having to learn fieldwork skills on the job; there were also new problems with the technology, as compared to the “pencil and paper” methods for which workflows and solutions had been developed over time. We look forward to a full report on the feedback and usability study that the UCL participants in the VERA project are conducting.

Leif Isaksen

Originally scheduled for the “Digital Tools” panel, this paper, Building a Virtual Community: The Antiquist Experience, saw Leif speak to a Digital Classicist audience about a parallel community, Antiquist (which focuses on digital approaches to cultural heritage and archaeology). The Antiquist community has an active mailing list (a Google group), a moribund blog, and a wiki whose main function is to announce events. Antiquist has multiple moderators, many of whom try to keep the list active, and from the start they actively invited heritage professionals known to them to join the community. There is no set agenda, and membership comes from a wide range of industries. Over time, traffic on the list has remained steady, with an unusually high percentage of active participants, but the content of the list traffic has recently tended toward announcements rather than long threads and discussions. They are currently considering inviting new moderators to join the team, in the hope of injecting fresh blood and enthusiasm into a team who now rarely introduce new discussions to the group. Compared to many mailing lists, however, the community is still very active and very healthy. (Leif has usefully uploaded his slideshow and commented in a thread on the Antiquist email group.)

Digitization and the Humanities: an RLG Programs Symposium

Thursday, April 17th, 2008

Is anyone here attending this?

As primary source materials move online, in both licensed and freely available form, what will be the impact on scholarship? On teaching and learning practice? On the collecting practices of research libraries? These are questions we are hoping to explore in the third day of our annual meeting (June 4th). This symposium, which we’re calling “Digitization and the Humanities: Impact on Libraries and Special Collections,” will feature perspectives from scholars on how digital collections are impacting both their research and teaching practice. We’ll also have perspectives from university librarians (Paul Courant, University of Michigan and Robin Adams, Trinity College Dublin) on the potential impact on library collecting practices.

The symposium will be held at the Chemical Heritage Foundation, and on Tuesday evening (June 3rd), the Philadelphia Museum of Art will host a reception for attendees. It should be a great event and a thought provoking conversation, and we hope you will join us. RLG Partners may register online.

Report on NEH Workshop “Supporting Digital Scholarly Editions”

Friday, April 4th, 2008

The official report on the NEH Workshop “Supporting Digital Scholarly Editions”, held on January 14, has been released and is available in PDF form:

Attendees included representatives from funding agencies and university presses, historians, just one or two literary scholars, one medievalist, and no classicists. It appears that much of the discussion focused on creating a service provider for scholarly editions, something to work between scholars and university presses to turn scholarship into digital publications.

I’m of two minds about this. On one hand, I know a lot of “traditional scholars” who find the idea of digital publication a little scary, just the idea of having to learn the technology. So it could be a good way to bring digital publication into the mainstream. But on the other hand, this kind of model could be stifling for creativity. One of the exciting things about digital projects is that, at this time, although there are standards there is no single model to follow for publication. There’s a lot of room for experimentation. It’s certainly not either/or – those of us doing more cutting-edge work will continue to do it whether there are mainstream service providers at university presses or not. But it’s interesting that this is being discussed.

Informatique et Egyptologie, I&E 2008

Wednesday, April 2nd, 2008

A date has been set for the next meeting of the International Association of Egyptologists Computer Group (Informatique et Egyptologie, I&E), which last met in Oxford in 2006.

Thanks to the kindness of Dr Wilfried Seipel, the meeting will take place in the Kunsthistorisches Museum, Vienna, Austria, on 8-11 July 2008, with the sessions on 9-10 July.

Further information can be found here

Problems and outcomes in digital philology (session 3: methodologies)

Thursday, March 27th, 2008

The Marriage of Mercury and Philology: Problems and outcomes in digital philology

e-Science Institute, Edinburgh, March 25-27 2008.

(Event website; programme wiki; original call)

I was asked to summarize the third session of papers in the round table discussion this afternoon. My notes (which I hope do not misrepresent anybody’s presentation too brutally) are transcribed below.

Session 3: Methodologies

1. Federico Meschini (De Montfort University) ‘Mercury ain’t what he used to be, but was he ever? Or, do electronic scholarly editions have a mercurial attitude?’ (Tuesday, 1400)

Meschini gave a very useful summary of the issues facing editors or designers of digital critical editions. The issues he raised included:

  • the need for good metadata standards to address the problems of (inevitable and to some extent desirable) incompatibility between different digital editions;
  • the need for a modularized approach that can include many very specialist tools (the “lego bricks” model);
  • the desirability of planning a flexible structure in advance so that the model can grow organically, along with the recognition that no markup language is complete, so all models need to be extensible.

After a brief discussion of the reference models available to the digital library world, he explained that digital critical editions are different from digital libraries, and therefore need different models. A digital edition is not merely a delivery of information; it is an environment with which a “reader” or “user” interacts. We need, therefore, to engage with the question: what are the functional requirements for text editions?

A final summary of some exciting recent movements, technologies, and discussions in online editions served as a useful reminder that, far from taking for granted that we know what a digital critical edition should look like, we need to think very carefully about the issues Meschini raises, along with other discussions of this question.

2. Edward Vanhoutte (Royal Academy of Dutch Language and Literature, Belgium) ‘Electronic editions of two cultures –with apologies to C.P. Snow’ (Tuesday, 1500)

Vanhoutte began with the rhetorical observation that our approach to textual editions is inadequate, because editions are not as intuitive to users, as flexible in what they can contain, or as extensible in use and function as a household amenity such as the refrigerator. If the edition is an act of communication, an object that mediates between a text and an audience, then it fails if we do not address the “problem of two audiences” (citing Lavagnino). We serve the audience of our peers fairly well (although we should be aware that even this is a more heterogeneous and varied group than we sometimes recognise), but the “common audience”, the readership who are not text editors themselves, are poorly served by current practice.

After some comments on different types of editions (a maximal edition containing all possible information would be too rich and complex for any one reader, so minimal editions of different kinds can be abstracted from this master, for example), and a summary of Robinson’s “fluid, cooperative, and distributed editions”, Vanhoutte made his own recommendation. We need, in summary, to teach our audience, preferably by example, how to use our editions and tools; how to replicate our work, the textual scholarship and the processes performed on it; how to interact with our editions; and how to contribute to them.

Lively discussion after this paper revolved around the question of what it means to educate your audience: writing a “how to” manual is not the best way to encourage engagement with one’s work, but providing multiple interfaces, entry points, and cross-references that illustrate the richness of the content might be more accessible.

3. Peter Robinson (ITSEE, Birmingham) ‘What we have been doing wrong in making digital editions, and how we could do better?’ (Tuesday, 1630)

Robinson began his provocative and speculative paper by considering a few projects that typify things we do and do not do well: we do not always distribute project output successfully; we do not always achieve the right level of scholarly research value. Most importantly, it is still near-impossible for a good critical scholar to create an online critical edition without technical support, funding for the costs of digitization, and a dedicated centre for the maintenance of a website. All of this means that grant funding is still needed for all digital critical work.

Robinson has a series of recommendations that, he hopes, will help to empower the individual scholar to work without the collaboration of a humanities computing centre to act as advisor, creator, librarian, and publisher:

  1. Make available high-quality images of all our manuscripts (this may need to be funded by a combination of government money, grant funding, and individual users paying for access to the results).
  2. Funding bodies should require the base data for all projects they fund to be released under a Creative Commons Attribution-ShareAlike license.
  3. Libraries and not specialist centres should hold the data of published projects.
  4. Commercial projects should be involved in the production of digital editions, bringing their experience of marketing and money-making to help make projects sustainable and self-funding.
  5. Most importantly, he proposes the adoption of common infrastructure, a set of agreed descriptors and protocols for labelling, pointing to, and sharing digital texts. An existing protocol such as the Canonical Text Services might do the job nicely.
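The Canonical Text Services protocol mentioned in the last recommendation identifies texts and passages with structured URNs. As a purely illustrative sketch (the URN shown and the field names are my own example, not taken from the talk), here is a minimal parser for the common `urn:cts:<namespace>:<work>[:<passage>]` shape:

```python
# Minimal sketch of parsing a Canonical Text Services (CTS) URN into its
# components, to illustrate the kind of agreed identifier scheme proposed.
# The example URN refers to Homer, Iliad, Book 1, line 1.

from dataclasses import dataclass
from typing import Optional

@dataclass
class CtsUrn:
    namespace: str            # e.g. "greekLit"
    work: str                 # e.g. "tlg0012.tlg001" (Homer, Iliad)
    passage: Optional[str]    # e.g. "1.1" (book 1, line 1), if present

def parse_cts_urn(urn: str) -> CtsUrn:
    """Split a CTS URN of the form urn:cts:<namespace>:<work>[:<passage>]."""
    parts = urn.split(":")
    if parts[:2] != ["urn", "cts"] or len(parts) < 4:
        raise ValueError(f"not a CTS URN: {urn!r}")
    passage = parts[4] if len(parts) > 4 else None
    return CtsUrn(namespace=parts[2], work=parts[3], passage=passage)

urn = parse_cts_urn("urn:cts:greekLit:tlg0012.tlg001:1.1")
```

Because every passage of every text gets a single canonical name, any repository that understands the scheme can resolve a citation made by any other, which is exactly the shared labelling and pointing infrastructure Robinson calls for.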

4. Manfred Thaller (Cologne) ‘Is it more blessed to give than to receive? On the relationship between Digital Philology, Information Technology and Computer Science’ (Wednesday, 0950)

Thaller gave the last paper, on the morning of the third day of this event, in which he asked (and answered) the over-arching question: Do computer science professionals already provide everything that we need? And underlying this: Do humanists still need to engage with computer science at all? He pointed out two classes of answer to this question:

  • The intellectual response: there are things that we as humanists need and that computer science is not providing. Therefore we need to engage with the specialists to help develop these tools for ourselves.
  • The political response: maybe we are getting what we need already, but we will experience profitable side effects from collaborating with computer scientists, so we should do it anyway.

Thaller demonstrated via several examples that we do not in fact get everything we need from computer scientists. He pointed out that two big questions were identified in his own work twelve years ago: the need for software for dynamic editions, and the need for mass digitization. Since 1996 mass digitization has come a long way in Germany, and many projects are now underway to image millions of pages of manuscripts and incunabula in that country. Dynamic editions, on the other hand, seem very little closer than they were twelve years ago, although there has been some valuable work on tools and publications.

Most importantly, we as humanists need to recognize that any collaboration with computer scientists is a reciprocal arrangement: we offer skills as well as receive services. One of the most difficult challenges facing computer scientists today, we hear, is to engage with, organise, and add semantic value to the mass of imprecise, ambiguous, incomplete, unstructured, and out-of-control data that is the Web. Humanists have spent the last two hundred years studying imprecise, ambiguous, incomplete, unstructured, and out-of-control materials. If we do not lend our experience and expertise to help the computer scientists solve this problem, then we cannot expect free help from them to solve ours.

Services and Infrastructure for a Million Books (round table)

Monday, March 17th, 2008

Million Books Workshop, Friday, March 14, 2008, Imperial College London.

The second of two round tables in the afternoon of the Million Books Workshop, chaired by Brian Fuchs (Imperial College London), asked a panel of experts what services and infrastructure they would like to see in order to make a Million Book corpus useful.

  1. Stuart Dunn (Arts and Humanities e-Science Support Centre): the kinds of questions that will be asked of the Million Books mean that the structure of this collection needs to be more sophisticated than just a library catalogue
  2. Alistair Dunning (Archaeological Data Service & JISC): powerful services are urgently needed to enable humanists both to find and to use the resources in this new collection
  3. Michael Popham (OULS but formerly director of e-Science Centre): large scale digitization is a way to break down the accidental constraints of time and place that limit access to resources in traditional libraries
  4. David Shotton (Image Bioinformatics Research Group): the emphasis is on accessibility and the semantic web. It is clear that manual building of ontologies does not scale to millions of items; therefore data mining and topic modelling are required, possibly assisted by crowdsourcing. It is essential to be able to integrate heterogeneous sources in a single, semantic infrastructure
    1. Dunning: citability and replicability of research becomes a concern with open publication on this scale
    2. Dunn: the archaeology world has similar concerns, cf. the recent LEAP project
  5. Paul Walk (UK Office for Library and Information Networking): concerned with what happens to the all-important role of domain expertise in this world of repurposable services: where is the librarian?
    1. Charlotte Roueché (KCL): learned societies need to play a role in assuring quality and trust in open publications
    2. Dunning: institutional repositories also need to play a role in long-term archiving. Licensing is an essential component of preservation—open licenses are required for maximum distribution of archival copies
    3. Thomas Breuel (DFKI): versioning tools and infrastructure for decentralised repositories exist (e.g. Mercurial)
    4. Fuchs: we also need mechanisms for finding, searching, identifying, and enabling data in these massive collections
    5. Walk: we need to be able to inform scholars when new data in their field of interest appears via feeds of some kind

(Disclaimer: this is only one blogger’s partial summary. The workshop organisers will publish an official report on this event.)

What would you do with a million books? (round table)

Sunday, March 16th, 2008

Million Books Workshop, Friday, March 14, 2008, Imperial College London.

In the afternoon, the first of two round table discussions concerned the uses to which massive text digitisation could be put by the curators of various collections.

The panellists were:

  • Dirk Obbink, Oxyrhynchus Papyri project, Oxford
  • Peter Robinson, Institute for Textual Scholarship and Electronic Editing, Birmingham
  • Michael Popham, Oxford University Library Services
  • Charlotte Roueché, EpiDoc and Prosopography of the Byzantine World, King’s College London
  • Keith May, English Heritage

Chaired by Gregory Crane (Perseus Digital Library), who kicked off by asking the question:

If you had all of the texts relevant to your field—scanned as page images and OCRed, but nothing more—what would you want to do with them?

  1. Roueché: analyse the texts in order to compile references toward a history of citation (and therefore a history of education) in later Greek and Latin sources.
  2. Obbink: generate a queriable corpus
  3. Robinson: compare editions and manuscripts for errors, variants, etc.
    1. Crane: machine annotation might achieve results not possible with human annotation (especially at this scale), particularly if learning from a human-edited example
    2. Obbink: identification of text from lost manuscripts and witnesses toward generation of stemmata. Important question: do we also need to preserve apparatus criticus?
  4. May: perform detailed place and time investigations into a site preparatory to performing any new excavations
    1. Crane: data mining and topic modelling could lead to the machine-generation of an automatically annotated gazetteer, prosopography, dictionary, etc.
  5. Popham: metadata on digital texts scanned by Google not always accurate or complete; not to academic standards: the scanning project is for accessibility, not preservation
    1. Roueché: Are we talking about purely academic exploitation, or our duty as public servants to make our research accessible to the wider public?
    2. May: this is where topic analysis can make texts more accessible to the non-specialist audience
    3. Brian Fuchs (ICL): insurance and price comparison sites, Amazon, etc., have sophisticated algorithms for targeting web materials at particular audiences
    4. Obbink: we will also therefore need translations of all of these texts if we are reaching out to non-specialists; will machine translation be able to help with this?
    5. Roueché: and not just translations into English, we need to make these resources available to the whole world.

(Disclaimer: this summary is partial and partisan, reflecting those elements of the discussion that seemed most interesting and relevant to this blogger. The workshop organisers will publish an official report on this event presently.)

Million Books Workshop (brief report)

Saturday, March 15th, 2008

Imperial College London.
Friday, March 14, 2008.

David Smith gave the first paper of the morning on “From Text to Information: Machine Translation”. The discussion included a survey of machine translation techniques (including the automatic discovery of existing translations by language comparison), and some of the value of cross-language searching.

[Please would somebody who did not miss the beginning of the session provide a more complete summary of Smith’s paper?]

Thomas Breuel then spoke on “From Image to Text: OCR and Mass Digitisation” (this would have been the first paper in the day, kicking off the developing thread from image to text to information to meaning, but transport problems caused the sequence of presentations to be altered). Breuel discussed the status of professional OCR packages, which are usually not very trainable and have their accuracy constrained by speed requirements, and explained how the Google-sponsored but Open Source OCRopus package intends to improve on this situation. OCRopus is highly extensible and trainable, but currently geared to the needs of the Google Print project (and so while effective at scanning book pages, may be less so for more generic documents). Currently in alpha-release and incorporating the Tesseract OCR engine, this tool currently has a lower error-rate than other Open Source OCR tools (but not the professional tools, which often contain ad hoc code to deal with special cases). A beta release is set for April 2008, which will demo English, German, and Russian language versions, and release 1.0 is scheduled for Fall 2008. Breuel also briefly discussed the hOCR microformat for describing page layouts in a combination of HTML and CSS3.
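The hOCR microformat Breuel describes encodes OCR layout information as ordinary HTML attributes, so the results can be processed with standard web tooling. As a minimal sketch (the sample markup below is hand-written for illustration, not real OCRopus output), words and their bounding boxes can be pulled out with nothing more than Python’s standard-library HTML parser:

```python
# A minimal sketch of reading word bounding boxes out of an hOCR document.
# In hOCR, each recognised word is a span of class "ocrx_word" whose title
# attribute carries a "bbox x0 y0 x1 y1" property describing its position.

from html.parser import HTMLParser

class HocrWords(HTMLParser):
    """Collect (text, bbox) pairs from hOCR 'ocrx_word' spans."""
    def __init__(self):
        super().__init__()
        self._bbox = None
        self.words = []   # list of (text, (x0, y0, x1, y1))

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if "ocrx_word" in attrs.get("class", ""):
            # title looks like: "bbox 10 10 52 24"
            fields = attrs.get("title", "").split()
            if fields[:1] == ["bbox"]:
                self._bbox = tuple(int(n) for n in fields[1:5])

    def handle_data(self, data):
        if self._bbox is not None and data.strip():
            self.words.append((data.strip(), self._bbox))
            self._bbox = None

sample = ('<span class="ocrx_word" title="bbox 10 10 52 24">MILLION</span> '
          '<span class="ocrx_word" title="bbox 60 10 110 24">BOOKS</span>')
parser = HocrWords()
parser.feed(sample)
```

The attraction of the format is exactly this: a page image, its transcription, and its geometry travel together in one self-describing HTML file.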

David Bamman gave the second in the “From Text to Information” sequence of papers, in which he discussed building a dynamic lexicon using automated syntax recognition, identifying the grammatical contexts of words in a digital text. With a training set of some thousands of words of Greek and Latin tree-banked by hand, auto-syntactic parsing currently achieves an accuracy rate somewhat above 50%. While this error rate is still too high for the automated process to be useful as an end in itself (to deliver syntactic tagging to language students, for example), it is good enough for testing against a human-edited lexicon, which provides a degree of control. Usage statistics and comparisons of related words and meanings give a good sense of the likely sense of a word or form in a given context.
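The evaluation Bamman describes amounts to scoring a parser’s output against the hand-treebanked gold standard. A toy sketch of that measurement (the three-word Latin sentence and its head assignments are invented for illustration, and this is my reconstruction of the idea, not the project’s actual code):

```python
# A minimal sketch of treebank evaluation: compare an automatic dependency
# parse against a hand-annotated gold standard, token by token.

def attachment_accuracy(gold_heads, auto_heads):
    """Fraction of tokens whose predicted head matches the treebank."""
    assert len(gold_heads) == len(auto_heads)
    correct = sum(g == a for g, a in zip(gold_heads, auto_heads))
    return correct / len(gold_heads)

# "arma virumque cano": 1-based head index per token (0 = sentence root).
gold = [3, 3, 0]   # hand-treebanked: both nouns depend on the verb
auto = [3, 2, 0]   # hypothetical parser output, with one wrong attachment

acc = attachment_accuracy(gold, auto)   # 2 of 3 tokens correct
```

An accuracy “somewhat above 50%” on this measure means roughly every other attachment is wrong, which explains why the output is useful for cross-checking a lexicon but not yet for presenting parses directly to students.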

David Mimno completed the thread with a presentation on “From Information to Meaning: Machine Learning and Classification Techniques”. He discussed automated classification based on typical and statistical features (usually binary indicators: is this email spam or not? Is this play a tragedy or a comedy?). Sequences of objects allow for a different kind of processing (for example spell-checking), including named entity recognition. Names need to be identified not only by their form but by their context, and machines do a surprisingly good job of identifying coreference and thus disambiguating between homonyms. A more flexible form of automatic classification is provided by topic modelling, which allows mixed classifications and does not require the definition of labels. Topic modelling is the automatic grouping of topics, keywords, components, and relationships by the frequency of clusters of words and references. This modelling mechanism is an effective means of organising a library collection, for example by automated topic clusters rather than by a one-dimensional and rather arbitrary classmark system. Generating multiple connections between publications might be a more effective and more useful way to organise a citation index for Classical Studies than the outdated project that is l’Année Philologique.

Simon Overell gave a short presentation on his doctoral research into the distribution of location references within different language versions of Wikipedia. Using the tagged location links as disambiguators, and using the language cross-reference tags to compare across the collections, he uses the statistics compiled to analyse bias (in a supposedly Neutral Point-Of-View publication) and provide support for placename disambiguation. Overell’s work is in progress, and he is actively seeking collaborators who might have projects that could use his data.

In the afternoon there were two round-table discussions on the subjects of “Collections” and “Systems and Infrastructure” that I may report on later if my notes turn out to be usable.

Registration: 3D Scanning Conference at UCL

Tuesday, February 26th, 2008

Kalliopi Vacharopoulou wrote, via the DigitalClassicist list:

I would like to draw to your attention the fact that registration for the 3D Colour Laser Scanning Conference at UCL on the 27th and 28th of March has now opened.

The first day (27th of March) will include a keynote presentation and papers on the themes of General Applications of 3D Scanning in the Museum and Heritage Sector and of 3D Scanning in Conservation.

The second day (28th of March) will offer a keynote presentation and papers on the themes of 3D Scanning in Display (and Exhibition) and Education and Interpretation. A detailed programme with the papers and the names of the speakers can be found on our website.

If you would like to attend the conference, I would kindly ask you to fill in the registration form, which you can find at this link, and return it to me as soon as possible.

There is no fee for participating in or attending the conference; coffee and lunch are provided free of charge. Please note that attendance is offered on a first-come, first-served basis.

Please feel free to circulate the information about the conference to anyone who you think might be interested.

In the meantime, do not hesitate to contact me with any inquiries.

International Seminar of Digital Philology: Edinburgh, March 25-27, 2008

Saturday, February 23rd, 2008

Seen on the AHeSSC mailing list:

The e-Science Institute Event Announcement

The e-Science Institute is delighted to host “The Marriage of Mercury and Philology: Problems and Outcomes in Digital Philology”. The conference welcomes both leading scholars and young researchers working on the problems of textual criticism and editorial scholarship in the electronic medium, as well as students, teachers, librarians, archivists, and computing professionals who are interested in the representation, access, exchange, management, and conservation of texts.

Organiser: Cinzia Pusceddu
Dates and Time: Tuesday 25th March 09.00 – Thursday 27th March 17.00
Place: e-Science Institute
University of Edinburgh
13-15 South College Street

For registration and more details see