Archive for the ‘Events’ Category

Digitizing Medieval and Early Modern Material Culture

Saturday, November 29th, 2008

Posted on the Digital Classicist list by Melissa Terras.

Call for Papers: Digitizing Medieval and Early Modern Material Culture

Editors Brent Nelson (University of Saskatchewan) and Melissa Terras
(University College London) invite submissions for a collection of
essays on “Digitizing Medieval and Early Modern Material Culture” to
be published in the New Technologies in Medieval and Renaissance
Studies Series edited by Ray Siemens and William Bowen.

This collection of essays will build on the accomplishments of recent
scholarship on materiality by bringing together innovative research
on the theory and praxis of digitizing material cultures from roughly
500 A.D. to 1700 A.D. Scholars of the medieval and early modern
periods have begun to pay more attention to the material world not
only as a means of cultural experience, but also as a shaping
influence upon culture and society, looking at the world of material
objects as both an area of study and a rich source of evidence for
interpreting the past. Digital media enable new ways of evoking,
representing, recovering, and simulating these materials in
non-traditional, non-textual (or para-textual) ways and present new
possibilities for recuperating and accumulating material from across
vast distances and time, enabling both preservation and comparative
analysis that is otherwise impossible or impractical. Digital
mediation also poses practical and theoretical challenges, both
logistical (such as gaining access to materials) and intellectual
(for example, the relationship between text and object). This volume
of essays will promote the deployment of digital technologies to the
study of material culture by bringing together expertise garnered
from complete and current digital projects, while looking forward to
new possibilities for digital applications; it will both take stock
of the current state of theory and practice and advance new
developments in digitization of material culture. The editors welcome
submissions from all disciplines on any research that addresses the
use of digital means for representing and investigating material
culture as expressed in such diverse areas as:

• travelers’ accounts, navigational charts and cartography
• collections and inventories
• numismatics, antiquarianism and early archaeology
• theatre and staging (props, costumes, stages, theatres)
• the visual arts of drawing, painting, sculpture, print making, and
architecture
• model making
• paper making and book printing, production, and binding
• manuscripts, emblems, and illustrations
• palimpsests and three-dimensional writing
• instruments (magic, alchemical, and scientific)
• arts and crafts
• the anatomical and cultural body

We welcome approaches that are practical and/or theoretical, general
in application or particular and project-based. Submissions should
present fresh advances in methodologies and applications of digital
technologies, including but not limited to:

• XML, databases, and computational interpretation
• three-dimensional computer modeling, Second Life and virtual worlds
• virtual research environments
• mapping technology
• image capture, processing, and interpretation
• 3-D laser scanning, synchrotron, or X-ray imaging and analysis
• artificial intelligence, process modeling, and knowledge representation

Papers might address such topics and issues as:

• the value of inter-disciplinarity (as between technical and
humanist experts)
• relationships between image and object; object and text; text and image
• the metadata of material culture
• curatorial and archival practice
• mediating the material object and its textual representations
• imaging and data gathering (databases and textbases)
• the relationship between the abstract and the material text
• haptic, visual, and auditory simulation
• tools and techniques for paleographic analysis

Enquiries and proposals should be sent to brent.nelson[at]usask.ca by
10 January 2009. Complete essays of 5,000-6,000 words in length will
be due on 1 May 2009.

CFP: Natural Language Processing for Ancient Language

Monday, November 24th, 2008

Chuck Jones has just posted a call for papers for a special issue of the TAL journal (Revue TAL) on the topic “Natural Language Processing for Ancient Language” over at AWBG.

Les historiens et l’informatique, Roma, December 4-6, 2008

Sunday, November 23rd, 2008

From an announcement circulated by Marjorie Burghart:

Les historiens et l’informatique : un métier à réinventer

Jeudi 4 décembre – 14 h 30
Marilyn Nicoud (École française de Rome)
Accueil des participants

Jean-Philippe Genet (Université de Paris I)
Peut-on prévoir l’impact des transformations de l’informatique sur le travail scientifique de l’historien ?

L’historien et ses sources : archives et bibliothèques – 15 h 00
Anna Maria Tammaro (Università di Parma)
La biblioteca digitale verso la realizzazione dell’infrastruttura globale per gli studi umanistici

Roberto Delle Donne (Università di Napoli Federico II)
Storia e Open Archive

Christophe Dessaux (Ministère de la Culture et de la Communication)
De la numérisation des collections à Europeana : des contenus culturels pour la recherche

Gino Roncaglia (Università della Tuscia)
Libri elettronici : un panorama in evoluzione

Stefano Vitali (Archivio di Stato di Firenze)
I mutamenti nel mondo degli archivi

17 h 45-18 h 45 : Discussion

Vendredi 5 décembre – 9 h 00
Éditer
Michele Ansani (Università di Pavia) et Antonella Ghignoli (Università di Firenze)
Testi digitali : nuovi media e documenti medievali

Pierre Bauduin (Université de Caen) et Catherine Jacquemard (Université de Caen)
La pratique de l’édition en ligne : expériences et questionnements

Paul Bertrand (IRHT, CNRS)
Autour de l’édition électronique et des digital humanities : nouvelle érudition, nouvelle critique ?

10 h 30-11 h 00 : Discussion

Enseigner
Rolando Minuti (Università di Firenze)
Insegnare storia al tempo del web 2.0 : considerazioni su esperienze e problemi aperti

Giulio Romero (Atelhis)
Métier d’historiens, métiers d’historien : les impératifs d’une formation ouverte

12 h 45-13 h 15 : Discussion

Communiquer – 15 h 00
Pietro Corrao (Università di Palermo)
L’esperienza di Reti Medievali

Christine Ducourtieux (Université de Paris I) et Marc Smith (École nationale des Chartes),
L’expérience de Ménestrel

16 h 00-16 h 30 : Discussion

Les nouveaux horizons du métier d’historien
Aude Mairey (CESCM, CNRS-Université de Poitiers)
Quelles perspectives pour la textométrie ?

Julien Alerini (Université de Paris I) et Stéphane Lamassé (Université de Paris I)
Données et statistiques : l’avenir du travail en ligne pour l’historien

17 h 45-18 h 15 : Discussion

Samedi 6 décembre – 9 h 00
François Giligny (Université de Paris I)
L’informatique en archéologie : une révolution tranquille ?

Jean-Luc Arnaud (Telemme, CNRS-Université de Provence)
Nouvelles méthodes, nouveaux usages de la cartographie et de l’analyse spatiale en histoire

Margherita Azzari (Università di Firenze)
Geographic Information Systems and Science. Stato dell’arte, sfide future

10 h 30-11 h 00 : Discussion

L’historien et l’outil informatique
Serge Noiret (European University Institute)
Fare storia a più mani con il web 2.0 : cosa cambia nelle pratiche degli storici ?

Philippe Rygiel (Université de Paris I)
De quoi le web est-il l’archive ? Lectures historiennes de l’activité réseau

Jean-Michel Dalle (Université Pierre et Marie Curie, Paris VI)
Peut-on penser le futur d’une communauté scientifique sans tenir compte de l’économie de l’innovation et de la créativité ?

12 h 45-13 h 30 : Discussion

Conclusions d’Andrea Zorzi (Università di Firenze)

If you want to attend, please contact Marilyn Nicoud or Grazia Parrino, secrma@efrome.it

Digital Classicist Occasional Seminars: Lamé on digital epigraphy

Tuesday, November 4th, 2008

For those who are not subscribed to the Digital Classicist podcast RSS, I’d like to call attention to the latest “occasional seminar” audio and slides online: Marion Lamé spoke about “Epigraphical encoding: from the Stone to Digital Edition” in the international video-conference series European Culture and Technology. Marion talked about her PhD project, which uses an XML-encoded edition of the Res Gestae Diui Augusti as an exercise in the digital recording and presentation of an extremely important and rich historical text, encoding historical features in the markup.

We shall occasionally record and upload (with permission) presentations of interest to digital classicists that are presented in other venues and series. If you would be interested in contributing a presentation to this series, please contact me or someone else at the Digital Classicist.

Classical panels at DRHA

Sunday, September 21st, 2008

This year’s Digital Resources for the Humanities and Arts conference (Cambridge, September 14-17) included a two-part panel on Digital Classicist (sadly divided over two days), organized by Simon Mahony, Stuart Dunn, and myself. Despite some apparently last-minute (and unannounced) scheduling changes, the panel was very successful. I post here only my brief notes on the papers involved, and hope that some of my colleagues may post more detailed reactions or reports either in comments, or as posts to this or other blogs.

Gabriel Bodard

I kicked off the first Classicists’ session on Monday morning with a brief history of the Digital Classicist community and a discussion of the different approaches to studying the use of digital methods in the study of the ancient world (contrasting the historical approach of Solomon 1993 with the forward-looking theme of Crane/Terras 2008, for which authors were asked to imagine their field within Classics in 2018). I talked in general terms about the different trajectories of two very early digital classical projects, the TLG and the LGPN, both founded in 1972. The TLG, while a technologically innovative project from the get-go, and one which changed (and continues to be indispensable to) the study of Greek literature, has not made a great contribution to the Digital Humanities because of its closed, for-profit, and self-sufficient strategy. The LGPN, on the other hand, began life as a very technologically conservative project, geared to the production of paper volumes of the Lexicon, and has always been reactive to changes in technology rather than proactive as the TLG was; as a result, however, it has been able to change with the times, adopting new database and web technologies as they appeared, and is now actively contributing to the development of standards in XML, onomastics, and geo-tagging, and sharing data and tools widely. Finally I argued that any study of the community of digital Classics needs both to consider history (lessons to be learned from projects such as those discussed above, and from other venerable projects that are still innovative today, such as Perseus and the DDbDP) and to consider the newest technologies, standards, and cyberinfrastructures that will drive our work forward in the future.

(David Robey pointed out that Classics has an important and unique position within the UK arts and humanities community, in that the subject associations give validity and respectability to digital resources and research through their support and recognition.)

Stuart Dunn

In a paper titled The UK’s evolving e-infrastructure and the study of the past, Stuart discussed the national e-Science agenda and how it relates to the practices and needs of the humanities scholar, using as a basis the research process of data collection, analysis, and publication/dissemination. The essential definition of e-Science is that it centres on scholarly collaboration across and between disciplines, and on the advanced computational infrastructure that enables this collaboration. e-Science often involves working with huge bodies of data or processing-intensive operations on complex material; the example of this kind of research Stuart offered was not Classical but Byzantine: the use of agent-based modelling by colleagues in Birmingham to simulate the climactic battle of Manzikert. After some general conclusions on the opportunities for advanced e-infrastructure in the study of the ancient world, there was some lively discussion of geospatial resources in the British and European academic spheres.

Simon Mahony

Simon gave a detailed presentation of the Humslides 2.0 project that he is conducting with the Classics department at King’s College London. Building upon the work carried out in a 2006-7 pilot project to digitise the teaching slide collections of the Classics department (a pilot study for the School of Humanities), which adopted a free trial version of the ContentDM management system (the trial licence has now expired and was not renewed), the new project will use Web 2.0 tools to present and organize some 7,000 slides with richer metadata and more input from students and other contributors. A Humslides Flickr group has been established, inspired in part by the Commons group set up by the Library of Congress and now contributed to by several other major institutions. As well as providing a teaching resource (currently restricted to KCL students until some thorny copyright issues have been ironed out), students will be set assessed coursework tasks to contribute to the tagging and annotation of images in this collection.

Elpiniki Fragkouli

Due to illness, Elpiniki’s paper on Training, Communities of Practice, and Digital Humanities was not delivered at this conference. We shall see whether she would be willing to upload her slides on the Digital Classicist website for discussion.

Amy Smith (Leif Isaksen, Brian Fuchs)

The paper on Lightweight Reuse of Digital Resources with VLMA: perspectives and challenges, originally commissioned for the Digital Classicist panel, was at the last minute, and for unknown reasons, moved to a panel on Digital Humanities on Tuesday morning. Amy presented this paper, which discussed lessons learned from the Virtual Lightbox for Museums and Archives project (discussed in detail in their article in the special issue of the Digital Medievalist journal we edited). Some conclusions and discussion followed on the topic of RDF and other metadata standards, and on browser-based versus desktop applications for viewing and organizing remote objects.

John Pybus (Alan Bowman, Charles Crowther and Ruth Kirkham)

John’s presentation on A Virtual Research Environment for the Study of Documents and Manuscripts gave a succinct and very useful summary of the history of the VRE research that has been carried out by the Centre for the Study of Ancient Documents and the humanities VRE team in Oxford. The project is one of four demonstrator projects in the second phase of work, which began with a user requirements survey in 2006-7. Built using uPortal, the VRE allows remote, parallel, and dynamic consultation and annotation of texts, images, and other resources by multiple scholars simultaneously. John showed some examples of the functionality of the VRE platform, including: the ability to show side-by-side parallel views of a tablet (different images, or different renderings of the same image); the juxtaposition of multiple fragments in a lightbox; and the ability for scholars to share views and exchange instant messages.

Emma O’Riordan (Michael Fulford, et al.)

In a paper that discussed another project related to the Oxford VRE programme, the Virtual Environment for Research in Archaeology: a Roman case study at Silchester, Emma discussed the origins of the VERA system in the Integrated Archaeological Database (IADB) that has been in use at Silchester for several years. The VERA system allows almost instant publication of the year’s results (as compared to waiting several months for paper notes to be transcribed), and is cheaper and more reliable than manual transcription; perhaps most importantly, the system enables live communication and collaboration between the archaeologists in the field and scholars in other parts of the world. Emma stressed one lesson from this project: the importance of working alongside computer scientists, so that development of functionality can take into consideration the needs of the archaeologists as well as the research interests of the programmers. It was interesting, however, that she also noted the potential pitfalls of too much tinkering with a tool while at work in the field.

Claire Warwick (Melissa Terras, et al.)

Originally scheduled in the second “Digital Humanities” session on Tuesday morning, this paper followed logically on from Emma’s, and discussed Virtual Environments for Research in Archaeology (VERA): Use and Usability of Integrated Virtual Environments in Archaeological Research. Claire focussed on the evaluation and documentation of the unique needs of archaeologists in the field, and on some conclusions the VERA team have been able to draw from questionnaires, diaries, and anonymized interviews with the Silchester workers. Learning new IT skills was considered a burden by students who were already having to learn fieldwork skills on the job; there were also new problems with the technology, as compared to the “pencil and paper” methods for which workflows and solutions had been developed over time. We look forward to a full report on the feedback and usability study that the UCL participants in the VERA project are conducting.

Leif Isaksen

Originally scheduled for the “Digital Tools” panel, this paper, Building a Virtual Community: The Antiquist Experience, had Leif speaking to a Digital Classicist audience about a parallel community, Antiquist (which focuses on digital approaches to cultural heritage and archaeology). The Antiquist community has an active mailing list (a Google group), a moribund blog, and a wiki whose main function is the announcement of events. Antiquist boasts multiple moderators, many of whom try to keep the list active, and from the start they actively invited heritage professionals who were known to them to join the community. There is no set agenda, and membership is drawn from a wide range of industries. Over time, traffic on the list has remained steady, with an unusually high percentage of active participants, but the content of the list traffic has recently tended towards announcements rather than long threads and discussions. They are currently considering inviting new moderators to join the team, in the hope of injecting fresh blood and enthusiasm into a team who now rarely innovate or introduce new discussions to the group. Compared to many mailing lists, however, the community is still very active and very healthy. (Leif has usefully uploaded his slideshow and commented in a thread on the Antiquist email group.)

2009 Conference of Computer Applications to Archaeology (CFP)

Sunday, September 14th, 2008

Via Centernet:

CALL FOR PAPERS AND PROPOSALS FOR SESSIONS, WORKSHOPS, AND ROUNDTABLES at the 2009 Conference of Computer Applications to Archaeology (CAA)
Deadline: October 15, 2008

The 37th annual conference on Computer Applications to Archaeology (CAA) will take place at the Colonial Williamsburg Foundation in Williamsburg, Virginia from March 22 to 26, 2009. The conference will bring together students and scholars to explore current theory and applications of quantitative methods and information technology in the field of archaeology. CAA members come from a diverse range of disciplines, including archaeology, anthropology, art and architectural history, computer science, geography, geomatics, historic preservation, museum studies, and urban history.

The full CFP is available here: http://www.caa2009.org/PapersCall.cfm

CFP: Digital Humanities 09

Thursday, September 11th, 2008

The Call for Papers for Digital Humanities 09, scheduled for 22-25 June at the University of Maryland, has just been issued. Abstracts are due on 31 October 2008.

MITH’s Digital Dialogues schedule

Thursday, September 4th, 2008

The Maryland Institute for Technology in the Humanities (MITH) has released the fall schedule for their “digital dialogues” lecture series. There are a number of interesting talks. I wonder if any of these will be podcast?

Since the full schedule is only available as a PDF at the moment, I’m taking the liberty of pasting the contents here:

Maryland Institute for Technology in the Humanities
an applied think tank for the digital humanities
Digital Dialogues Schedule
Tuesdays @12:30-1:45
Fall 2008 in MITH’s Conference Room
B0135 McKeldin Library, U. Maryland

  • 9.9 Doug Reside (MITH and Theatre), “The MITHological AXE: Multimedia Metadata Encoding with the Ajax XML Encoder”
  • 9.16 Stanley N. Katz (Princeton University), “Digital Humanities 3.0: Where We Have Come From and Where We Are Now?”
  • 9.23 Joyce Ray (Institute of Museum and Library Services), “Digital Humanities and the Future of Libraries”
  • 9.30 Tom Scheinfeldt and Dave Lester (George Mason University), “Omeka: Easy Web Publishing for Scholarship and Cultural Heritage”
  • 10.7 Brent Seales (University of Kentucky), “EDUCE: Enhanced Digital Unwrapping for Conservation and Exploration”
  • 10.14 Zachary Whalen (University of Mary Washington), “The Videogame Text”
  • 10.21 Kathleen Fitzpatrick (Pomona College), “Planned Obsolescence: Publishing, Technology, and the Future of the Academy”
  • 10.28 “War (and) Games” (a discussion in conjunction with the ARHU semester on War and Representations of War, facilitated by Matthew Kirschenbaum [English and MITH])
  • 11.4 Bethany Nowviskie (University of Virginia), “New World Ordering: Shaping Geospatial Information for Scholarly Use”
  • 11.11 Merle Collins (English), Saraka and Nation (film screening and discussion)
  • 11.18 Ann Weeks (iSchool and HCIL), “The International Children’s Digital Library: An Introduction for Scholars”
  • 11.25 Clifford Lynch (Coalition for Networked Information), title TBA
  • 12.2 Elizabeth Bearden (English), “Renaissance Moving Pictures: From Sidney’s Funeral materials to Collaborative, Multimedia Nachleben”
  • 12.9 Katie King (Women’s Studies), “Flexible Knowledges, Reenactments, New Media”

All talks are free and open to the public!

University of Maryland
McKeldin Library B0131
College Park, MD 20742

Neil Fraistat, Director

http://www.mith.umd.edu/

tel: 301.405.8927
fax: 301.314.7111
mith@umd.edu

Open Access Day Announced: 14 October 2008

Friday, August 29th, 2008

By way of Open Access News we learn of the announcement of Open Access Day 2008:

SPARC (the Scholarly Publishing and Academic Resources Coalition), the Public Library of Science (PLoS), and Students for Free Culture have jointly announced the first international Open Access Day. Building on the worldwide momentum toward Open Access to publicly funded research, Open Access Day will create a key opportunity for the higher education community and the general public to understand more clearly the opportunities of wider access and use of content.

Open Access Day will invite researchers, educators, librarians, students, and the public to participate in live, worldwide broadcasts of events.

Digital Classicist Podcast

Friday, July 18th, 2008

The Institute of Classical Studies and Digital Classicist summer seminar series is about half-way through, and the first several audio recordings of the proceedings are now available as part of the Digital Classicist podcast. You can find a list of all seminars in this series, along with links for those that have audio and/or presentations uploaded, at:

Or you can subscribe to the podcast feed itself by pointing your RSS aggregator, iTunes subscription, aut sim., at:

We should welcome ideas for further events to add to this podcast series, and/or partnerships to podcast the results of seminar series of interest to Digital Classicists in the future.

Digitizing Early Material Culture, new deadline

Wednesday, May 7th, 2008

The Digitizing Early Material Culture conference, for which we posted a CFP back in February, has a new deadline and a slightly changed line-up of speakers (Meg Twycross replaces Melissa Terras). See the new programme here (PDF).

Institute of Classical Studies Work-in-Progress seminars (London)

Thursday, May 1st, 2008

Digital Classicist Work-in-Progress seminars
Institute of Classical Studies

Fridays at 16:30 in NG16, Senate House, Malet St, London, WC1E 7HU
(June 20th, July 4th-18th seminars in room B3, Stewart House)
(June 27th seminar room 218, Chadwick Bdg, UCL, Gower Street)

**ALL WELCOME**

6 June (NG16): Elaine Matthews and Sebastian Rahtz (Oxford), The Lexicon of Greek Personal Names and classical web services

13 June (NG16) Brent Seales (University of Kentucky), EDUCE: Non-invasive scanning for classical materials

20 June (STB3) Dot Porter (University of Kentucky), The Son of Suda On Line: a next generation collaborative editing tool

27 June (UCL Chadwick 218) Bruce Fraser (Cambridge), The value and price of information: reflections on e-publishing in the humanities

4 July (STB3) Andrew Bevan (UCL), Computational Approaches to Human and Animal Movement in the Archaeological Record

11 July (STB3) Frances Foster (KCL), A digital presentation of the text of Servius

18 July (STB3) Ryan Bauman (University of Kentucky), Towards the Digital Squeeze: 3-D imaging of inscriptions and curse tablets

25 July (NG16) Charlotte Tupman (KCL), Markup of the epigraphy and archaeology of Roman Libya

1 Aug (NG16) Juan Garcés (British Library), Digitizing the oldest complete Greek Bible: The Codex Sinaiticus project

8 Aug (NG16) Charlotte Roueché (KCL), From Stone to Byte

15 Aug (NG16) Ioannis Doukas (KCL), Towards a digital publication for the Homeric Catalogue of Ships

22 Aug (NG16) Peter Heslin (Durham), Diogenes: Past development and future plans

**ALL WELCOME**

We are inviting both students and established researchers involved in the application of the digital humanities to the study of the ancient world to come and introduce their work. The focus of this seminar series is the interdisciplinary and collaborative work that results at the interface of expertise in Classics or Archaeology and Computer Science.

The seminar will be followed by wine and refreshments.

Audio recordings and slideshows will be uploaded after each event.

(Sponsored by the Institute of Classical Studies, University of London, and the Centre for Computing in the Humanities, King’s College London.)

For more information please contact gabriel.bodard@kcl.ac.uk or simon.mahony@kcl.ac.uk, or visit the seminar website at http://www.digitalclassicist.org/wip/wip2008.html

EpiDoc Summer School, July 14th-18th, 2008

Wednesday, April 23rd, 2008
The Centre for Computing in the Humanities, King’s College London, is again offering an EpiDoc Summer School, on July 14th-18th, 2008. The training is designed for epigraphers or papyrologists (or related text editors such as numismatists, sigillographers, etc.) who would like to learn the skills and tools required to mark up ancient documents for publication (online or on paper) and for interchange under international academic standards. You can learn more about EpiDoc from the EpiDoc home page and the Introduction for Epigraphers; you will find a recent and user-friendly article on the subject in the Digital Medievalist. (If you want to go further, you can learn about XML and about the principles of the TEI: Text Encoding Initiative.) The Summer School does not expect any technical expertise, and training in basic XML will be provided.
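To give a flavour of what the markup taught at the Summer School involves: EpiDoc expresses the Leiden Conventions as TEI XML elements (for instance, editorially restored text in square brackets becomes a <supplied> element, a lacuna becomes <gap>, and dotted uncertain letters become <unclear>). The following is a minimal sketch, not official EpiDoc training material: the sample inscription text is invented, and the small Python helper is hypothetical, shown only to illustrate how such a fragment can be processed with standard XML tooling.

```python
# A minimal sketch (not a full EpiDoc toolchain): parse a tiny
# EpiDoc-style fragment and count the Leiden features it encodes.
# Element usage follows common EpiDoc practice (<supplied> for
# editorial restorations, <gap> for lacunae, <unclear> for
# uncertain letters); the text itself is invented for illustration.
import xml.etree.ElementTree as ET

fragment = """
<ab xmlns="http://www.tei-c.org/ns/1.0">
  imp caesar <supplied reason="lost">divi f</supplied> augustus
  <gap reason="lost" quantity="3" unit="character"/>
  pontifex <unclear>maximus</unclear>
</ab>
"""

TEI_NS = "{http://www.tei-c.org/ns/1.0}"

def leiden_features(xml_text):
    """Return a count of each encoded Leiden feature in the fragment."""
    root = ET.fromstring(xml_text)
    counts = {}
    for el in root.iter():
        tag = el.tag.replace(TEI_NS, "")  # strip the TEI namespace
        if tag in ("supplied", "gap", "unclear"):
            counts[tag] = counts.get(tag, 0) + 1
    return counts

print(leiden_features(fragment))  # {'supplied': 1, 'gap': 1, 'unclear': 1}
```

Because the semantics live in the elements rather than in brackets and dots, the same encoded text can drive both a conventional Leiden-style print edition and searches or statistics over editorial interventions.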

Attendees (who should be familiar with Greek/Latin and the Leiden Conventions) will need to bring a laptop on which has been installed the Oxygen XML editor (available at a reduced academic price, or for a free 30-day demo).

The EpiDoc Summer School is free to participants; we can try to help you find cheap (student) accommodation in London. If any students participating would like to stay on afterwards and acquire some hands-on experience marking up some texts for the Inscriptions of Roman Cyrenaica project, they would be most welcome!

All interested please contact both charlotte.roueche@kcl.ac.uk and gabriel.bodard@kcl.ac.uk as soon as possible. Please pass on this message to anyone who you think might benefit.

Digitization and the Humanities: an RLG Programs Symposium

Thursday, April 17th, 2008

Is anyone here attending this?

As primary source materials move online, in both licensed and freely available form, what will be the impact on scholarship? On teaching and learning practice? On the collecting practices of research libraries? These are questions we are hoping to explore in the third day of our annual meeting (June 4th). This symposium, which we’re calling “Digitization and the Humanities: Impact on Libraries and Special Collections,” will feature perspectives from scholars on how digital collections are impacting both their research and teaching practice. We’ll also have perspectives from university librarians (Paul Courant, University of Michigan and Robin Adams, Trinity College Dublin) on the potential impact on library collecting practices.

The symposium will be held at the Chemical Heritage Foundation, and on Tuesday evening (June 3rd), the Philadelphia Museum of Art will host a reception for attendees. It should be a great event and a thought provoking conversation, and we hope you will join us. RLG Partners may register online.

Report on NEH Workshop “Supporting Digital Scholarly Editions”

Friday, April 4th, 2008

The official report on the NEH Workshop “Supporting Digital Scholarly Editions”, held on January 14, has been released and is available in PDF form:

http://www.virginiafoundation.org/NEH%20Workshop%20Report%20FINAL-3.pdf

Attendees included representatives from funding agencies and university presses, historians, just one or two literary scholars, one medievalist, and no classicists. It appears that much of the discussion focused on creating a service provider for scholarly editions, something to work between scholars and university presses to turn scholarship into digital publications.

I’m of two minds about this. On one hand, I know a lot of “traditional scholars” who find the idea of digital publication a little scary, just the idea of having to learn the technology. So it could be a good way to bring digital publication into the mainstream. But on the other hand, this kind of model could be stifling for creativity. One of the exciting things about digital projects is that, at this time, although there are standards there is no single model to follow for publication. There’s a lot of room for experimentation. It’s certainly not either/or – those of us doing more cutting-edge work will continue to do it whether there are mainstream service providers at university presses or not. But it’s interesting that this is being discussed.

Informatique et Egyptologie, I&E 2008

Wednesday, April 2nd, 2008

A date has been set for the next meeting of the International Association of Egyptologists Computer Group (Informatique et Egyptologie, I&E), which last met in Oxford in 2006.

Thanks to the kindness of Dr Wilfried Seipel, the meeting will take place in the Kunsthistorisches Museum, Vienna, Austria, on 8-11 July 2008, with the sessions on 9-10 July.

Further information can be found here

Problems and outcomes in digital philology (session 3: methodologies)

Thursday, March 27th, 2008

The Marriage of Mercury and Philology: Problems and outcomes in digital philology

e-Science Institute, Edinburgh, March 25-27 2008.

(Event website; programme wiki; original call)

I was asked to summarize the third session of papers in the round table discussion this afternoon. My notes (which I hope do not misrepresent anybody’s presentation too brutally) are transcribed below.

Session 3: Methodologies

1. Federico Meschini (De Montfort University) ‘Mercury ain’t what he used to be, but was he ever? Or, do electronic scholarly editions have a mercurial attitude?’ (Tuesday, 1400)

Meschini gave a very useful summary of the issues facing editors or designers of digital critical editions. The issues he raised included:

  • the need for good metadata standards to address the problems of (inevitable and to some extent desirable) incompatibility between different digital editions;
  • the need for a modularized approach that can include many very specialist tools (the “lego bricks” model);
  • the desirability of planning a flexible structure in advance so that the model can grow organically, along with the recognition that no markup language is complete, so all models need to be extensible.

After a brief discussion of the reference models available to the digital library world, he explained that digital critical editions are different from digital libraries, and therefore need different models. A digital edition is not merely a delivery of information, it is an environment with which a “reader” or “user” interacts. We need, therefore, to engage with the question: what are the functional requirements for text editions?

A final summary of some exciting recent movements, technologies, and discussions in online editions served as a useful reminder that, far from taking it for granted that we know what a digital critical edition should look like, we need to think very carefully about the issues Meschini raises and about other discussions of this question.

2. Edward Vanhoutte (Royal Academy of Dutch Language and Literature, Belgium) ‘Electronic editions of two cultures –with apologies to C.P. Snow’ (Tuesday, 1500)

Vanhoutte began with the rhetorical observation that our approach to textual editions is inadequate because the editions are not as intuitive to users, flexible in what they can contain, and extensible in use and function as a household amenity such as the refrigerator. If the edition is an act of communication, an object that mediates between a text and an audience, then it fails if we do not address the “problem of two audiences” (citing Lavagnino). We serve the audience of our peers fairly well–although we should be aware that even this is a more heterogeneous and varied group than we sometimes recognise–but the “common audience”, the readership who are not text editors themselves, are poorly served by current practice.

After some comments on different types of editions (a maximal edition containing all possible information would be too rich and complex for any one reader, so minimal editions of different kinds can be abstracted from this master, for example), and a summary of Robinson’s “fluid, cooperative, and distributed editions”, Vanhoutte made his own recommendation. We need, in summary, to teach our audience, preferably by example, how to use our editions and tools; how to replicate our work, the textual scholarship and the processes performed on it; how to interact with our editions; and how to contribute to them.

Lively discussion after this paper revolved around the question of what it means to educate your audience: writing a “how to” manual is not the best way to encourage engagement with one’s work, but providing multiple interfaces, entry-points, and cross-references that illustrate the richness of the content might be more accessible.

3. Peter Robinson (ITSEE, Birmingham) ‘What we have been doing wrong in making digital editions, and how we could do better?’ (Tuesday, 1630)

Robinson began his provocative and speculative paper by considering a few projects that typify things we do and do not do well: we do not always distribute project output successfully; we do not always achieve the right level of scholarly research value. Most importantly, it is still near-impossible for a good critical scholar to create an online critical edition without technical support, funding for the costs of digitization, and a dedicated centre for the maintenance of a website. All of this means that grant funding is still needed for all digital critical work.

Robinson has a series of recommendations that, he hopes, will help to empower the individual scholar to work without the collaboration of a humanities computing centre to act as advisor, creator, librarian, and publisher:

  1. Make available high-quality images of all our manuscripts (this may need to be funded by a combination of government money, grant funding, and individual users paying for access to the results).
  2. Funding bodies should require the base data for all projects they fund to be released under a Creative Commons Attribution-ShareAlike license.
  3. Libraries and not specialist centres should hold the data of published projects.
  4. Commercial projects should be involved in the production of digital editions, bringing their experience of marketing and money-making to help make projects sustainable and self-funding.
  5. Most importantly, he proposes the adoption of common infrastructure, a set of agreed descriptors and protocols for labelling, pointing to, and sharing digital texts. An existing protocol such as the Canonical Text Services might do the job nicely.
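Robinson’s fifth recommendation is the most concrete of the list. A Canonical Text Services URN packs a text’s namespace, work, and passage reference into a single citable string, which is what makes pointing to and sharing digital texts across projects feasible. As a rough illustration of the idea (this sketch is mine, not from the talk, and is not a reference implementation of the CTS protocol):

```python
# Minimal, illustrative parser for Canonical Text Services (CTS) URNs,
# e.g. urn:cts:greekLit:tlg0012.tlg001:1.1-1.10
# A sketch of the idea only, not part of any CTS software.

from dataclasses import dataclass
from typing import Optional

@dataclass
class CtsUrn:
    namespace: str          # e.g. "greekLit"
    work: str               # e.g. "tlg0012.tlg001"
    passage: Optional[str]  # e.g. "1.1-1.10", or None for a whole work

def parse_cts_urn(urn: str) -> CtsUrn:
    parts = urn.split(":")
    if len(parts) < 4 or parts[0] != "urn" or parts[1] != "cts":
        raise ValueError(f"not a CTS URN: {urn!r}")
    namespace, work = parts[2], parts[3]
    passage = parts[4] if len(parts) > 4 and parts[4] else None
    return CtsUrn(namespace, work, passage)

urn = parse_cts_urn("urn:cts:greekLit:tlg0012.tlg001:1.1-1.10")
print(urn.namespace, urn.work, urn.passage)  # greekLit tlg0012.tlg001 1.1-1.10
```

The point of the scheme is exactly this machine-readability: any project that can parse the URN can resolve the same passage, whoever holds the data.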

4. Manfred Thaller (Cologne) ‘Is it more blessed to give than to receive? On the relationship between Digital Philology, Information Technology and Computer Science’ (Wednesday, 0950)

Thaller gave the last paper, on the morning of the third day of this event, in which he asked (and answered) the over-arching question: Do computer science professionals already provide everything that we need? And underlying this: Do humanists still need to engage with computer science at all? He pointed out two classes of answer to this question:

  • The intellectual response: there are things that we as humanists need and that computer science is not providing. Therefore we need to engage with the specialists to help develop these tools for ourselves.
  • The political response: maybe we are getting what we need already, but we will experience profitable side effects from collaborating with computer scientists, so we should do it anyway.

Thaller demonstrated via several examples that we do not in fact get everything we need from computer scientists. He pointed out that two big questions were identified in his own work twelve years ago: the need for software for dynamic editions, and the need for mass digitization. Since 1996 mass digitization has come a long way in Germany, and many projects are now underway to image millions of pages of manuscripts and incunabula in that country. Dynamic editions, on the other hand, despite some valuable work on tools and publications, seem very little closer than they were twelve years ago.

Most importantly, we as humanists need to recognize that any collaboration with computer scientists is a reciprocal arrangement, that we offer skills as well as receive services. One of the most difficult challenges facing computer scientists today, we hear, is to engage with, organise, and add semantic value to the mass of imprecise, ambiguous, incomplete, unstructured, and out-of-control data that is the Web. Humanists have spent the last two hundred years studying imprecise, ambiguous, incomplete, unstructured, and out-of-control materials. If we do not lend our experience and expertise to help the computer scientists solve this problem, then we cannot expect free help from them to solve our problems.

DHI Now Known as Office of Digital Humanities (ODH)

Tuesday, March 25th, 2008

Not specifically classics, but this news from the National Endowment for the Humanities should be of interest, at least to those of us in the US: The Digital Humanities Initiative (DHI) has been made permanent, and is now the Office of Digital Humanities (ODH).
From the ODH Webpage:

The Office of Digital Humanities (ODH) is an office within the National Endowment for the Humanities (NEH). Our primary mission is to help coordinate the NEH’s efforts in the area of digital scholarship. As in the sciences, digital technology has changed the way scholars perform their work. It allows new questions to be raised and has radically changed the ways in which materials can be searched, mined, displayed, taught, and analyzed. Technology has also had an enormous impact on how scholarly materials are preserved and accessed, which brings with it many challenging issues related to sustainability, copyright, and authenticity. The ODH works not only with NEH staff and members of the scholarly community, but also facilitates conversations with other funding bodies both in the United States and abroad so that we can work towards meeting these challenges.

Digital Classicist seminars update

Tuesday, March 25th, 2008

To bring you all up to date with what is going on with the Digital Classicist seminar series:

Some papers from the DC seminar series held at the Institute of Classical Studies in London in the summer of 2006 have been published as a special issue of the Digital Medievalist (4:2008).

 See: http://www.digitalmedievalist.org/index.html

The dedication reads: In honour of Ross Scaife (1960-2008), without whose fine example of collaborative spirit, scrupulous scholarship, and warm friendship none of the work in this volume would be what it is.

Gabriel and I are putting together a collection of papers from the DC summer series of 2007 and working on the programme for the coming summer (2008). With the continued support of the Institute of Classical Studies (London) and the Centre for Computing in the Humanities, King’s College London it is anticipated that this seminar series will continue to be an annual event.  

Services and Infrastructure for a Million Books (round table)

Monday, March 17th, 2008

Million Books Workshop, Friday, March 14, 2008, Imperial College London.

The second of two round tables in the afternoon of the Million Books Workshop, chaired by Brian Fuchs (Imperial College London), asked a panel of experts what services and infrastructure they would like to see in order to make a Million Book corpus useful.

  1. Stuart Dunn (Arts and Humanities e-Science Support Centre): the kinds of questions that will be asked of the Million Books mean that the structure of this collection needs to be more sophisticated than just a library catalogue
  2. Alistair Dunning (Archaeological Data Service & JISC): powerful services are urgently needed to enable humanists both to find and to use the resources in this new collection
  3. Michael Popham (OULS but formerly director of e-Science Centre): large scale digitization is a way to break down the accidental constraints of time and place that limit access to resources in traditional libraries
  4. David Shotton (Image Bioinformatics Research Group): emphasis is on accessibility and the semantic web. It is clear that manual building of ontologies does not scale to millions of items; therefore data mining and topic modelling are required, possibly assisted by crowdsourcing. It is essential to be able to integrate heterogeneous sources in a single, semantic infrastructure
    1. Dunning: citability and replicability of research becomes a concern with open publication on this scale
    2. Dunn: the archaeology world has similar concerns, cf. the recent LEAP project
  5. Paul Walk (UK Office for Library and Information Networking): concerned with what happens to the all-important role of domain expertise in this world of repurposable services: where is the librarian?
    1. Charlotte Roueché (KCL): learned societies need to play a role in assuring quality and trust in open publications
    2. Dunning: institutional repositories also need to play a role in long-term archiving. Licensing is an essential component of preservation—open licenses are required for maximum distribution of archival copies
    3. Thomas Breuel (DFKI): versioning tools and infrastructure for decentralised repositories exist (e.g. Mercurial)
    4. Fuchs: we also need mechanisms for finding, searching, identifying, and enabling data in these massive collections
    5. Walk: we need to be able to inform scholars when new data in their field of interest appears via feeds of some kind

(Disclaimer: this is only one blogger’s partial summary. The workshop organisers will publish an official report on this event.)

What would you do with a million books? (round table)

Sunday, March 16th, 2008

Million Books Workshop, Friday, March 14, 2008, Imperial College London.

In the afternoon, the first of two round table discussions concerned the uses to which massive text digitisation could be put by the curators of various collections.

The panellists were:

  • Dirk Obbink, Oxyrhynchus Papyri project, Oxford
  • Peter Robinson, Institute for Textual Scholarship and Electronic Editing, Birmingham
  • Michael Popham, Oxford University Library Services
  • Charlotte Roueché, EpiDoc and Prosopography of the Byzantine World, King’s College London
  • Keith May, English Heritage

Chaired by Gregory Crane (Perseus Digital Library), who kicked off by asking the question:

If you had all of the texts relevant to your field—scanned as page images and OCRed, but nothing more—what would you want to do with them?

  1. Roueché: analyse the texts in order to compile references toward a history of citation (and therefore a history of education) in later Greek and Latin sources.
  2. Obbink: generate a queriable corpus
  3. Robinson: compare editions and manuscripts for errors, variants, etc.
    1. Crane: machine annotation might achieve results not possible with human annotation (especially at this scale), particularly if learning from a human-edited example
    2. Obbink: identification of text from lost manuscripts and witnesses toward generation of stemmata. Important question: do we also need to preserve apparatus criticus?
  4. May: perform detailed place and time investigations into a site preparatory to performing any new excavations
    1. Crane: data mining and topic modelling could lead to the machine-generation of an automatically annotated gazetteer, prosopography, dictionary, etc.
  5. Popham: metadata on digital texts scanned by Google not always accurate or complete; not to academic standards: the scanning project is for accessibility, not preservation
    1. Roueché: Are we talking about purely academic exploitation, or our duty as public servants to make our research accessible to the wider public?
    2. May: this is where topic analysis can make texts more accessible to the non-specialist audience
    3. Brian Fuchs (ICL): insurance and price comparison sites, Amazon, etc., have sophisticated algorithms for targeting web materials at particular audiences
    4. Obbink: we will also therefore need translations of all of these texts if we are reaching out to non-specialists; will machine translation be able to help with this?
    5. Roueché: and not just translations into English, we need to make these resources available to the whole world.

(Disclaimer: this summary is partial and partisan, reflecting those elements of the discussion that seemed most interesting and relevant to this blogger. The workshop organisers will publish an official report on this event presently.)

Million Books Workshop (brief report)

Saturday, March 15th, 2008

Imperial College London.
Friday, March 14, 2008.

David Smith gave the first paper of the morning on “From Text to Information: Machine Translation”. The discussion included a survey of machine translation techniques (including the automatic discovery of existing translations by language comparison), and some of the value of cross-language searching.

[Please would somebody who did not miss the beginning of the session provide a more complete summary of Smith’s paper?]

Thomas Breuel then spoke on “From Image to Text: OCR and Mass Digitisation” (this would have been the first paper of the day, kicking off the developing thread from image to text to information to meaning, but transport problems caused the sequence of presentations to be altered). Breuel discussed the status of professional OCR packages, which are usually not very trainable and have their accuracy constrained by speed requirements, and explained how the Google-sponsored but Open Source OCRopus package intends to improve on this situation. OCRopus is highly extensible and trainable, but currently geared to the needs of the Google Print project (and so while effective at scanning book pages, may be less so for more generic documents). Currently in alpha-release and incorporating the Tesseract OCR engine, the tool has a lower error-rate than other Open Source OCR tools (but not than the professional tools, which often contain ad hoc code to deal with special cases). A beta release is set for April 2008, which will demo English, German, and Russian language versions, and release 1.0 is scheduled for Fall 2008. Breuel also briefly discussed the hOCR microformat for describing page layouts in a combination of HTML and CSS3.
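For a sense of what hOCR looks like in practice: the microformat records OCR layout properties, such as the bounding box of a word or line, inside ordinary HTML `title` attributes (e.g. `<span class="ocr_line" title="bbox 36 92 618 184">`). A small illustrative helper for pulling out such a box (my sketch, not from the talk or the OCRopus codebase):

```python
# Illustrative: extract the "bbox" property that the hOCR microformat
# stores in an element's title attribute.
import re
from typing import Optional, Tuple

def hocr_bbox(title: str) -> Optional[Tuple[int, int, int, int]]:
    """Return (x0, y0, x1, y1) from an hOCR title string, or None."""
    m = re.search(r"bbox (\d+) (\d+) (\d+) (\d+)", title)
    if not m:
        return None
    return tuple(int(v) for v in m.groups())

print(hocr_bbox("bbox 36 92 618 184"))  # (36, 92, 618, 184)
```

Because the properties piggyback on plain HTML, an hOCR page remains viewable in any browser while staying machine-parseable.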

David Bamman gave the second in the “From Text to Information” sequence of papers, in which he discussed building a dynamic lexicon using automated syntax recognition, identifying the grammatical contexts of words in a digital text. With a training set of some thousands of words of Greek and Latin tree-banked by hand, auto-syntactic parsing currently achieves an accuracy rate of something above 50%. While the error rate is still too high for this automated process to be useful as an end in itself (to deliver syntactic tagging to language students, for example), it is good for testing against a human-edited lexicon, which provides a degree of control. Usage statistics and comparisons of related words and meanings give a good sense of the likely sense of a word or form in a given context.
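The accuracy figure above comes from exactly the kind of evaluation any treebank project runs: align the parser’s proposed tags against the hand-edited gold standard and count matches. A minimal sketch of that comparison (the tag names and the five-token example are invented for illustration, not Bamman’s data):

```python
# Illustrative sketch: scoring automatic syntactic tags against a
# hand-edited treebank. Tags and data here are invented examples.

def tagging_accuracy(gold: list, predicted: list) -> float:
    """Fraction of tokens whose predicted tag matches the gold tag."""
    if len(gold) != len(predicted):
        raise ValueError("token sequences must align")
    correct = sum(g == p for g, p in zip(gold, predicted))
    return correct / len(gold)

# Hand-edited tags for five tokens of a (hypothetical) Latin sentence,
# and the tags an automatic parser might propose:
gold      = ["SBJ", "OBJ", "ATR", "PRED", "ADV"]
predicted = ["SBJ", "ATR", "ATR", "PRED", "OBJ"]
print(tagging_accuracy(gold, predicted))  # 3 of 5 match -> 0.6
```

Per-token accuracy like this is the coarsest measure; treebank evaluations typically also score attachment (which word each token depends on), but the counting principle is the same.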

David Mimno completed the thread with a presentation on “From Information to Meaning: Machine Learning and Classification Techniques”. He discussed automated classification based on typical and statistical features (usually binary indicators: is this email spam or not? Is this play a tragedy or a comedy?). Sequences of objects allow for a different kind of processing (for example spell-checking), including named entity recognition. Names need to be identified not only by their form but by their context, and machines do a surprisingly good job at identifying coreference and thus disambiguating between homonyms. A more flexible form of automatic classification is provided by topic modelling, which allows mixed classifications and does not require the definition of labels. Topic modelling is the automatic grouping of topics, keywords, components, relationships by the frequency of clusters of words and references. This modelling mechanism is an effective means for organising a library collection by automated topic clusters, for example, rather than by a one-dimensional and rather arbitrary classmark system. Generating multiple connections between publications might be a more effective and more useful way to organise a citation index for Classical Studies than the outdated project that is l’Année Philologique.
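The binary classification Mimno starts from (spam or not, tragedy or comedy) can be made concrete with a toy word-statistics classifier. The sketch below is a tiny multinomial Naive Bayes with add-one smoothing; the three-word “plays” are invented caricatures, and nothing here is claimed to be from Mimno’s own tools:

```python
# Illustrative word-statistics classifier in the spirit of the binary
# examples (tragedy vs. comedy). Multinomial Naive Bayes, add-one smoothing.
import math
from collections import Counter, defaultdict

def train(docs):
    """docs: list of (label, list-of-words). Returns model parts."""
    counts = defaultdict(Counter)   # label -> word frequency
    labels = Counter()              # label -> document count
    for label, words in docs:
        labels[label] += 1
        counts[label].update(words)
    vocab = {w for _, ws in docs for w in ws}
    return counts, labels, vocab

def classify(words, counts, labels, vocab):
    """Return the label with the highest smoothed log-probability."""
    total = sum(labels.values())
    best, best_lp = None, -math.inf
    for label in labels:
        lp = math.log(labels[label] / total)
        denom = sum(counts[label].values()) + len(vocab)
        for w in words:
            lp += math.log((counts[label][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

docs = [("tragedy", ["death", "fate", "grief"]),
        ("tragedy", ["grief", "fate", "blood"]),
        ("comedy",  ["wedding", "jest", "feast"]),
        ("comedy",  ["feast", "jest", "mistaken"])]
model = train(docs)
print(classify(["fate", "grief"], *model))  # tragedy
```

Topic modelling generalises this: instead of two fixed labels, the clusters of co-occurring words are themselves discovered from the corpus, which is what makes it suitable for organising a collection without a predefined classmark scheme.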

Simon Overell gave a short presentation on his doctoral research into the distribution of location references within different language versions of Wikipedia. Using the tagged location links as disambiguators, and using the language cross-reference tags to compare across the collections, he uses the statistics compiled to analyse bias (in a supposedly Neutral Point-Of-View publication) and provide support for placename disambiguation. Overell’s work is in progress, and he is actively seeking collaborators who might have projects that could use his data.

In the afternoon there were two round-table discussions on the subjects of “Collections” and “Systems and Infrastructure” that I may report on later if my notes turn out to be usable.

Changing the Center of Gravity

Tuesday, March 4th, 2008

Changing the Center of Gravity: Transforming Classical Studies Through Cyberinfrastructure

http://www.rch.uky.edu/CenterOfGravity/

University of Kentucky, 5 October 2007

This is the full audio record of “Changing the Center of Gravity: Transforming Classical Studies Through Cyberinfrastructure”, a workshop funded by the National Science Foundation, sponsored by the Center for Visualization and Virtual Environments at the University of Kentucky, and organized by the Perseus Digital Library at Tufts University.

1) Introduction (05:13)
– Gregory Crane
(download this presentation as an mp3 file – 4.78 MB)

2) Technology, Collaboration, & Undergraduate Research (26:23)
– Christopher Blackwell and Thomas Martin, respondent Kenny Morrell
(download this presentation as an mp3 file – 24.1 MB)

3) Digital Criticism: Editorial Standards for the Homer Multitext (29:02)
– Casey Dué and Mary Ebbott, respondent Anne Mahoney
(download this presentation as an mp3 file – 26.5 MB)

4) Digital Geography and Classics (20:23)
– Tom Elliot, respondent Bruce Robertson
(download this presentation as an mp3 file – 18.6 MB)

5) Computational Linguistics and Classical Lexicography (39:16)
– David Bamman and Gregory Crane, respondent David Smith
(download this presentation as an mp3 file – 35.9 MB)

6) Citation in Classical Studies (38:34)
– Neel Smith, respondent Hugh Cayless
(download this presentation as an mp3 file – 35.3 MB)

7) Exploring Historical RDF with Heml (24:10)
– Bruce Robertson, respondent Tom Elliot
(download this presentation as an mp3 file – 22.1 MB)

8) Approaches to Large Scale Digitization of Early Printed Books (24:38)
– Jeffrey Rydberg-Cox, respondent Gregory Crane
(download this presentation as an mp3 file – 22.5 MB)

9) Tachypaedia Byzantina: The Suda On Line as Collaborative Encyclopedia (20:45)
– Anne Mahoney, respondent Christopher Blackwell
(download this presentation as an mp3 file – 18.9 MB)

10) Epigraphy in 2017 (19:00)
– Hugh Cayless, Charlotte Roueché, Tom Elliot, and Gabriel Bodard, respondent Bruce Robertson
(download this presentation as an mp3 file – 17.3 MB)

11) Directions for the Future (50:04)
– Ross Scaife et al.
(download this presentation as an mp3 file – 45.8 MB)

12) Summary (01:34)
– Gregory Crane
(download this presentation as an mp3 file – 1.44 MB)

CFP: DRHA 2008: New Communities of Knowledge and Practice

Monday, March 3rd, 2008

By way of a long string of reposts, originally to AHESSC:

Date: Fri, 29 Feb 2008 17:37:17 -0000
From: Stuart Dunn
To: AHESSC@JISCMAIL.AC.UK

CALL FOR PAPERS AND PERFORMANCES

Forthcoming Conference

DRHA 2008: New Communities of Knowledge and Practice

The DRHA (Digital Resources in the Humanities and Arts) conference is held annually at various academic venues throughout the UK. The conference theme this year is to promote discussion around new collaborative environments, collective knowledge and redefining disciplinary boundaries. The conference, hosted by Cambridge with its fantastic choice of conference venues, will take place from Sunday 14th September to Wednesday 17th September.

The aim of the conference is to:

  • Establish a site for mutually creative exchanges of knowledge.
  • Promote discussion around new collaborative environments and collective knowledge.
  • Encourage and celebrate the connections and tensions within the liminal spaces that exist between the Arts and Humanities.
  • Redefine disciplinary boundaries.
  • Create a forum for debate around notions of the ‘solitary’ and the collaborative across the Arts and Humanities.
  • Explore the impact of the Arts and Humanities on ICT: design and narrative structures and vice versa.

There will be a variety of sessions concerned with the above but also with a particular emphasis on interdisciplinary collaboration and theorising around practice. There will also be various installations and performances focussing on the same theme. Keynote talks will be given by our plenary speakers who we are pleased to announce are Sher Doruff, Research Fellow (Art, Research and Theory Lectoraat) and Mentor at the Amsterdam School for the Arts, Alan Liu, Professor of English, University of California Santa Barbara and Sally Jane Norman, Director of the Culture Lab, Newcastle University. In addition to this, there will be various round table discussions together with a panel relating to ‘Second Life’ and a special forum ‘Engaging research and performance through pervasive and locative arts projects’ led by Steve Benford, Professor of Collaborative Computing, University of Nottingham. Also planned is the opportunity for a more immediate and informal presentation of work in our ‘Quickfire’ style events. Whether papers, performance or other, all proposals should reflect the critical engagement at the heart of DRHA.

Visit the website for more information and a link to the proposals website.

The deadline for submissions is 30 April 2008, and abstracts should be approximately 1000 words.

Cambridge’s venues range from the traditional to the contemporary all situated within walking distance of central departments, museums and galleries. The conference will be based around Cambridge University’s Sedgwick Site, particularly the West Road concert hall, where delegates will have use of a wide range of facilities including a recital room and a ‘black box’ performance space, to cater for this year’s parallel programming and performances.

Sue Broadhurst DRHA Programme Chair

Dr Sue Broadhurst
Reader in Drama and Technology, Head of Drama, School of Arts
Brunel University
West London, UB8 3PH
UK
Direct Line:+44(0)1895 266588 Extension: 66588
Fax: +44(0)1895 269768
Email: susan.broadhurst@brunel.ac.uk.

Rieger, Preservation in the Age of Large-Scale Digitization

Sunday, March 2nd, 2008

CLIR (the Council on Library and Information Resources in DC) have published in PDF the text of a white paper by Oya Rieger titled ‘Preservation in the Age of Large-Scale Digitization’. She discusses large-scale digitization initiatives such as Google Books, Microsoft Live, and the Open Content Alliance. This is more of a diplomatic/administrative than a technical discussion, with questions of funding, strategy, and policy, and the tension between depth and scale, rearing higher than issues of technology, standards, or protocols (all of which were questions raised during our Open Source Critical Editions conversations).

The paper ends with thirteen major recommendations, all of which deserve close reading; the most important is the need for collaboration, sharing of resources, and generally working closely with other institutions and projects involved in digitization, archiving, and preservation.

One comment hit especially close to home:

The recent announcement that the Arts and Humanities Research Council and Joint Information Systems Committee (JISC) will cease funding the Arts and Humanities Data Service (AHDS) gives cause for concern about the long-term viability of even government-funded archiving services. Such uncertainties strengthen the case for libraries taking responsibility for preservation—both from archival and access perspectives.

It is actually a difficult question to decide who should be responsible for long-term archiving of digital resources, but I would argue that this is one place where duplication of labour is not a bad thing. The more copies of our cultural artefacts that exist, in different formats, contexts, and versions, the more likely we are to retain some of our civilisation after the next cataclysm. This is not to say that coordination and collaboration are not desiderata, but that we should expect, plan for, and even strive for redundancy on all fronts.

(Thanks to Dan O’Donnell for the link.)