Archive for the ‘Conferences’ Category

Digital Imaging of Ancient Textual Heritage

Wednesday, February 17th, 2010

Posting this on behalf of the organisers.

Digital Imaging of Ancient Textual Heritage: Technological Challenges and Solutions

The Academy of Finland research unit ‘Ancient Greek written sources’ (CoE) is organizing a symposium “Digital Imaging of Ancient Textual Heritage: Technological Challenges and Solutions”. The symposium takes place on 28-29 October, 2010, in Helsinki, Finland.

The programme comprises two plenary sessions that are open to the public, two workshops intended for the speakers only, and one open session on the end-user perspective.

Participation in the symposium is free of charge (however, registration is compulsory). The CoE will cover travel and accommodation costs for accepted speakers.

We would be grateful if the following short ad could be included on the Digital Classicist website to promote our symposium.

Maarit Kinnunen
tel. + 358 50 577 9153
maarit.kinnunen@expericon.fi

============================================

Digital Imaging of Ancient Textual Heritage: Technological Challenges and Solutions 28-29 October 2010 in Helsinki, Finland.
Organizer: The Academy of Finland Research Unit “Ancient Greek written sources” (CoE)
Partner: The National Library of Finland
For more information, see www.eikonopoiia.org

Digital Research and Collaborative Work (APA panel)

Wednesday, January 6th, 2010

There was a lot of talk of Digital Humanities at the MLA last week; as Hugh pointed out, though, there seems to be only one explicitly digital panel at our subject meeting, the APA/AIA in Anaheim. However, it should be a good one, and I’d encourage anyone with digital or collaborative interests to make sure to attend. The text below is taken from the APA programme, annotated by me:

SECTION 28
Digital Research and Developments in Collaborative Work in Classics
FRIDAY January 8, 11:15 A.M. – 1:15 P.M. Elite Ballroom 3
Gabriel Bodard and Alex Lee, Organizers

The papers in this panel concern the implications of digital editing for the research process. ‘Editing’ in this context includes the collection, research, sharing, and preparation for publication of textual, historical, or archaeological material. The digital work, which is often seen as a tool en route to creating an online publication, also transforms the editor’s research, both in terms of the speed and the sequence with which we can perform certain tasks, and of the different and new sorts of questions that the data throws up for us to consider.

1. Valentina Asciutti & Stuart Dunn, King’s College London
Mapping Evidence for Roman Regionalism and Regional Literacy in Roman Britain from the Inscribed and Illustrated Objects (20 mins.)
*Read by Sebastian Heath*

2. Gabriel Bodard & Irene Polinskaya, King’s College London
A Digital Edition of IOSPE: Collaboration and Interoperability Enabled by e-Science Methods (20 mins.)
*Read by Tom Elliott*

3. Alex Lee, University of Chicago
Scholarly Editing in the Digital Age: the Archimedes Palimpsest as a Case Study (20 mins.)

Although two of the three papers will be read by someone other than their authors, the readers are themselves experts in closely related areas, and Alex, Tom and Sebastian (and other expert attendees, to be announced) will be conducting a round table discussion on the subject of digital research and collaboration for the remaining time of the session.

Computer Applications in Archaeology Conference (CAA2010)

Monday, October 12th, 2009

Conference: CAA 2010
XXXVIII Annual Conference on Computer Applications and Quantitative Methods in Archaeology “Fusion of Cultures”

Conference Dates: April 6-9, 2010
Conference Location: Granada, Spain
URL: http://www.caa2010.org

Upcoming Deadlines:

- Session proposals submission deadline November 15, 2009
- Round tables proposals submission deadline December 15, 2009
- Workshops proposals submission deadline January 31, 2010

Other important dates:
- Full papers submission will open on November 20th, 2009
- Full papers submission deadline December 15, 2009
- Short papers submission deadline January 31, 2010
- Poster submission deadline January 31, 2010
- Virtual theatre videos submission deadline January 31, 2010

The XXXVIII Annual CAA Conference will be held in Granada, Spain, from April 6 to 9, 2010, and is expected to bring together archaeologists, computer scientists, and mathematicians to explore and exchange knowledge in order to enhance our understanding of the past. Classical disciplines such as archaeology, anthropology, and geography, and more modern ones such as computer science, geomatics, and museology, will exchange their most recent advances during the conference.
CAA 2010 is inspired by the concept “Fusion of Cultures”, which identifies the scope of the conference and the spirit of the historical city of Granada. The aim of the conference is to create a collaborative atmosphere among all disciplines, through participation via papers, posters, round tables, workshops, short papers, and a novel virtual theatre non-stop show.

DH2010: Digital Humanities 2010 CFP

Wednesday, October 7th, 2009

Forwarded from DH2010 committee:

We are pleased to announce the Call for Papers for the Digital Humanities 2010 Conference.

Alliance of Digital Humanities Organizations Digital Humanities 2010
Call for Papers
Abstract Deadline: Oct. 31, 2009

Proposals must be submitted electronically using the system which will be available at the conference web site from October 8th. Presentations may be any of the following:

• Single papers (abstract max of 1500 words)
• Multiple paper sessions (overview max of 500 words)
• Posters (abstract max of 1500 words)

Call for Papers Announcement

The International Programme Committee invites submissions of abstracts of between 750 and 1500 words on any aspect of humanities computing, broadly defined to encompass the common ground between information technology and problems in humanities research and teaching. We welcome submissions in all areas of the humanities, particularly interdisciplinary work. We especially encourage submissions on the current state of the art in humanities computing, and on recent developments.

Suitable subjects for proposals include, for example,

* text analysis, corpora, language processing, language learning
* IT in librarianship and documentation
* computer-based research in cultural and historical studies
* computing applications for the arts, architecture and music
* research issues such as: information design and modelling; the cultural impact of the new media
* the role of digital humanities in academic curricula

The special theme of the 2010 conference is cultural heritage old and new.


Codicology and Palaeography in the Digital Age (Munich, July 3-4, 2009)

Wednesday, May 20th, 2009

International Conference

Codicology and Palaeography in the Digital Age

Munich, 3-4 July 2009

The conference will focus on the challenges and consequences of using IT and the internet for codicological and palaeographic research. The authors of selected articles from an anthology to be published this summer by the Institute for Documentology and Scholarly Editing (IDE) will present and discuss their research results with scholars and experts working on ancient books and manuscripts. Presentations will address current issues in the following fields: manuscript catalogues and descriptions, digitization of manuscripts, collaborative systems for manuscript research, codicological databases, research based on digital resources, e-learning in palaeography, palaeographic databases (characters, scripts, scribes), (semi-)automatic recognition of scripts and scribes, digital tools for transcription, and visions and prototypes of other digital tools.

A panel discussion will be held with renowned exponents in the field of codicology and palaeography and contributors of cutting-edge research, to take stock of the state of the art and to open up new perspectives for codicological and palaeographic research in the “digital age”.

(More information including preliminary programme)

Ancient World and e-Science (report)

Tuesday, April 21st, 2009

On Saturday April 4, 2009, a panel on “Ancient World and e-Science”, organized by the Digital Classicist, was held at the Classical Association Annual Meeting at the University of Glasgow (full abstracts in GoogleDoc). The speakers and titles listed were:

  • Ryan Baumann & Gabriel Bodard, 3D Visualization and Digitization of Epigraphic Materials
  • Stuart Dunn, Seeing into the Past: Visualization, the ancient world, and the e-Science programme
  • Brian Fuchs, Rashmi Singhal, Jazz Mack Smith, & Gregory Crane, PhiloGrid: A Web Toolkit for the Ancient World
  • Caroline Macé, Ilse de Vos, & Philippe Baret, Can phylogenetics methods help to cure contaminated textual traditions?

There was a slight change to the line-up on the day as Stuart Dunn’s attempts to reach Glasgow were scuppered by the incompetence of a budget airline: the three remaining papers were followed by 20 minutes open discussion, and then slightly early adjournment to the hotel bar.

Baumann spoke about the difficulties of reading, photographing, and visualizing curse tablets in general, and the steatite fragments from Amathous in Cyprus especially, which are translucent and therefore resistant not only to normal photography but even to the laser imaging used to take high-resolution 3-D images of inscribed objects. He then showed examples of a lead tablet (DT 25) which has degraded further in the century since it was transcribed, and argued that the high-quality imaging this project is piloting is an important conservation exercise as well as having potential for improving the interpretation and transcription of the texts. The remainder of the presentation was a demonstration of some of the techniques for taking and manipulating 3-D readings using the laser scanner.

Fuchs gave a detailed history of and report on the PhiloGrid services, created by Imperial College London and the Perseus Project as part of a JISC/NEH Transatlantic collaborative digitization grant from 2008-09. He summarised the objectives and achievements of the project, including the mounting of Perseus web services such as lexical and morphological tools, the construction of a citation framework based on FRBR, and the digitization of new content. He also introduced, and invited all present to attend, a workshop on Arabic web services to be held at Imperial College London on Wednesday May 13 (further details to be announced here soon).

Macé and de Vos introduced the work carried out by classicists and biologists at the Université Catholique de Louvain on using statistical and probabilistic phylogenetic software to try to reconstruct the stemma of a contaminated manuscript tradition. They tested the phylogenetic algorithms for fitness for this task by creating a fictional manuscript tradition for a small section of the text of Proclus, including both horizontal and vertical contamination. Two phylogenetic methods, parsimony analysis and bootstrap analysis, were applied to the data, with mixed results. Vertical contamination in particular still defeats these generic technologies, but further work may improve the accuracy of such tools. (This work, needless to say, will also result in more robust algorithms and methodologies for the biologists, so this is a true e-Science interdisciplinary collaboration that really does have research interest for both fields.)
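To make the parsimony side of this concrete, here is a toy sketch in Python of Fitch's algorithm, the scoring step that underlies parsimony analysis. It is not the Louvain team's software: the witnesses, readings, and candidate stemma below are invented for illustration.

    # Toy parsimony scoring (Fitch's algorithm) for a fictional tradition.
    # Each witness is reduced to coded readings at fixed points of variation.

    def fitch_score(tree, readings, site):
        """Minimum number of reading changes implied by a tree at one site."""
        changes = 0
        def postorder(node):
            nonlocal changes
            if isinstance(node, str):             # leaf: a manuscript siglum
                return {readings[node][site]}
            left, right = (postorder(child) for child in node)
            if left & right:                      # children can agree
                return left & right
            changes += 1                          # a change is forced here
            return left | right
        postorder(tree)
        return changes

    # Four invented witnesses, three points of variation (0/1 = readings)
    readings = {"A": [0, 0, 1], "B": [0, 1, 1], "C": [1, 1, 0], "D": [1, 0, 0]}
    stemma = (("A", "B"), ("C", "D"))             # one candidate grouping

    total = sum(fitch_score(stemma, readings, s) for s in range(3))
    print("parsimony score:", total)              # lower = fewer assumed changes

A stemma minimising this score is the parsimony candidate; contamination is hard precisely because a contaminated witness draws readings from more than one branch, so no single tree fits all points of variation well.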

Many thanks to all who contributed to this panel, including the audience members who took part in the lively discussion afterward. Clearly there is a call for discussion of e-Science issues at Classics venues.

InterFace 2009: First Call for Papers

Wednesday, March 4th, 2009

Forwarded for Leif Isaksen from the Antiquist list:

—————————–

First Call for Papers

InterFace 2009:
1st National Symposium for Humanities and Technology

9-10 July, University of Southampton, UK.

http://www.interface09.org.uk

InterFace is a new type of annual event. Part conference, part workshop, part networking opportunity, it will bring together postdocs, early career academics and postgraduate researchers from the fields of Information Technology and the Humanities in order to foster cutting-edge collaboration. As well as having a focus on Digital Humanities, it will also be an important forum for Humanities contributions to Computer Science. The event will furthermore provide a permanent web presence for communication between delegates both during, and following, the conference.

Delegate numbers are limited to 80 (half representing each sector) and all participants will be expected to present a poster or a ‘lightning talk’ (a two minute presentation) as a stimulus for discussion and networking sessions. Delegates can also expect illuminating keynote talks from world-leading experts, presentations on successful interdisciplinary projects, ‘Insider’s Guides’ and workshops. The registration fee for the two-day event is £30. For a full overview of the event, please visit the website.

Paper Submissions:

If you are interested in attending, please submit an original paper, of 1500 words or less, describing an idea or concept you wish to present. Please indicate whether you would prefer to produce a poster or perform a 2-minute lightning talk. Papers must be produced as a PDF or in Microsoft Word (.doc) format and submitted through our EasyChair page:

- Register for an EasyChair account:
http://www.easychair.org/conferences/account_apply.cgi
- Log in: https://www.easychair.org/?conf=interface09
- Click New Submission at the top of the page and fill in the form.

Make sure you:
- Select your preference of lightning talk or poster.
- Select whether you are representing humanities or technology.
- Attach and upload your paper.

If you encounter any problems, please e-mail
submissions@interface09.org.uk

A number of travel bursaries may be available to successful applicants – if you would like to be considered for one, please email bursaries@interface09.org.uk and provide grounds for consideration.

Papers should focus on potential (and realistic) areas for collaboration between the Technology and Humanities sectors, addressing particular problems, new developments, or both. Prior work may be presented where relevant, but the nature of the paper must be forward-looking. As such, the scope is extremely broad, but topics might include:

Technology

* 3D immersive environments
* Pervasive technologies
* Online collaboration
* Natural language processing
* Sensor networks
* The Semantic Web
* Agent based modelling
* Web Science

Humanities

* Spatial cognition
* Text editing and analysis
* New Media
* Linguistics
* Applied sociodynamics & social network analysis
* Archaeological reconstruction
* Information Ethics
* Dynamic logics
* Electronic corpora

Due to the limited number of places, papers will be subject to review by committee in order to maintain quality and a balanced programme. Applicants will be notified by email as to their acceptance. Accepted papers will be published online one week in advance of the conference.

Important Dates:

* Paper Submission Deadline: 1 May 2009
* Acceptances Announced: 18 May 2009
* Conference: 9th-10th July 2009

Confirmed Speakers

Keynote:
* Dame Wendy Hall, University of Southampton,
President of the Association for Computing Machinery

Insider’s Guides:
* Stephen Brown, De Montfort University
President of the Association for Learning Technology

* Ed Parsons
Geospatial Technologist, Google

* Sarah Porter
Head of Innovation, JISC

Project Showcase:

* Mary Orr & Mark Weal, University of Southampton
Digital Flaubert

* Adrian Bell
The Soldier in Later Medieval England

Workshops:

1) Text Encoding Initiative (TEI)
Arianna Ciula, European Science Foundation & Sebastian Rahtz, Oxford
University

2) Visualisation
Facilitator TBC

3) Data Management
Facilitator TBC

4) New Media
Facilitator TBC

For further information, please visit the conference website
(http://www.interface09.org.uk) or
e-mail info@interface09.org.uk

Special issue of the DHQ in honour of Ross Scaife

Friday, February 27th, 2009

copied from Humanist:

From: Julia Flanders
Subject: DHQ issue 3.1 now available
We’re very happy to announce the publication of the new issue of DHQ:

DHQ 3.1 (Winter 2009)
A special issue in honor of Ross Scaife: “Changing the Center of
Gravity: Transforming Classical Studies Through Cyberinfrastructure”
Guest editors: Melissa Terras and Gregory Crane
http://www.digitalhumanities.org/dhq/

Table of Contents

Acknowledgements and Dedications
Gregory Crane, Tufts University; Brent Seales, University of
Kentucky; Melissa Terras, University College London

Ross Scaife (1960-2008)
Dot Porter, Digital Humanities Observatory

Cyberinfrastructure for Classical Philology
Gregory Crane, Tufts University; Brent Seales, University of
Kentucky; Melissa Terras, University College London

Technology, Collaboration, and Undergraduate Research
Christopher Blackwell, Furman University; Thomas R. Martin, College
of the Holy Cross

Tachypaedia Byzantina: The Suda On Line as Collaborative Encyclopedia
Anne Mahoney, Tufts University

Exploring Historical RDF with Heml
Bruce Robertson, Mount Allison University

Digitizing Latin Incunabula: Challenges, Methods, and Possibilities
Jeffrey A. Rydberg-Cox, University of Missouri-Kansas City

Citation in Classical Studies
Neel Smith, College of the Holy Cross

Digital Criticism: Editorial Standards for the Homer Multitext
Casey Dué, University of Houston, Texas; Mary Ebbott, College of the
Holy Cross

Epigraphy in 2017
Hugh Cayless, University of North Carolina; Charlotte Roueché, King’s
College London; Tom Elliott, New York University; Gabriel Bodard,
King’s College London

Digital Geography and Classics
Tom Elliott, New York University; Sean Gillies, New York University

What Your Teacher Told You is True: Latin Verbs Have Four Principal
Parts
Raphael Finkel, University of Kentucky; Gregory Stump, University of
Kentucky

Computational Linguistics and Classical Lexicography
Gregory Crane, Tufts University; David Bamman, Tufts University

Classics in the Million Book Library
Gregory Crane, Tufts University; Alison Babeu, Tufts University;
David Bamman, Tufts University; Thomas Breuel, Technical University of
Kaiserslautern; Lisa Cerrato, Tufts University; Daniel Deckers,
Hamburg University; Anke Lüdeling, Humboldt-University, Berlin; David
Mimno, University of Massachusetts, Amherst; Rashmi Singhal, Tufts
University; David A. Smith, University of Massachusetts, Amherst; Amir
Zeldes, Humboldt-University, Berlin

Conclusion: Cyberinfrastructure, the Scaife Digital Library and
Classics in a Digital age
Christopher Blackwell, Furman University; Gregory Crane, Tufts
University

Best wishes from the DHQ editorial team

Les historiens et l’informatique, Roma, December 4-6, 2008

Sunday, November 23rd, 2008

From an announcement circulated by Marjorie Burghart:

Les historiens et l’informatique : un métier à réinventer

Jeudi 4 décembre – 14 h 30
Marilyn Nicoud (École française de Rome)
Accueil des participants

Jean-Philippe Genet (Université de Paris I)
Peut-on prévoir l’impact des transformations de l’informatique sur le travail scientifique de l’historien ?

L’historien et ses sources : archives et bibliothèques – 15 h 00
Anna Maria Tammaro (Università di Parma)
La biblioteca digitale verso la realizzazione dell’infrastruttura globale per gli studi umanistici

Roberto Delle Donne (Università di Napoli Federico II)
Storia e Open Archive

Christophe Dessaux (Ministère de la Culture et de la Communication)
De la numérisation des collections à Europeana : des contenus culturels pour la recherche

Gino Roncaglia (Università della Tuscia)
Libri elettronici : un panorama in evoluzione

Stefano Vitali (Archivio di Stato di Firenze)
I mutamenti nel mondo degli archivi

17 h 45-18 h 45 : Discussion

Vendredi 5 décembre – 9 h 00
Éditer
Michele Ansani (Università di Pavia) et Antonella Ghignoli (Università di Firenze)
Testi digitali : nuovi media e documenti medievali

Pierre Bauduin (Université de Caen) et Catherine Jacquemard (Université de Caen)
La pratique de l’édition en ligne : expériences et questionnements

Paul Bertrand (IRHT, CNRS)
Autour de l’édition électronique et des digital humanities : nouvelle érudition, nouvelle critique ?

10 h 30-11 h 00 : Discussion

Enseigner
Rolando Minuti (Università di Firenze)
Insegnare storia al tempo del web 2.0 : considerazioni su esperienze e problemi aperti

Giulio Romero (Atelhis)
Métier d’historiens, métiers d’historien : les impératifs d’une formation ouverte

12 h 45-13 h 15 : Discussion

Communiquer – 15 h 00
Pietro Corrao (Università di Palermo)
L’esperienza di Reti Medievali

Christine Ducourtieux (Université de Paris I) et Marc Smith (École nationale des Chartes),
L’expérience de Ménestrel

16 h 00-16 h 30 : Discussion

Les nouveaux horizons du métier d’historien
Aude Mairey (CESCM, CNRS-Université de Poitiers)
Quelles perspectives pour la textométrie ?

Julien Alerini (Université de Paris I) et Stéphane Lamassé (Université de Paris I)
Données et statistiques : l’avenir du travail en ligne pour l’historien

17 h 45-18 h 15 : Discussion

Samedi 6 décembre – 9 h 00
François Giligny (Université de Paris I)
L’informatique en archéologie : une révolution tranquille ?

Jean-Luc Arnaud (Telemme, CNRS-Université de Provence)
Nouvelles méthodes, nouveaux usages de la cartographie et de l’analyse spatiale en histoire

Margherita Azzari (Università di Firenze)
Geographic Information Systems and Science. Stato dell’arte, sfide future

10 h 30-11 h 00 : Discussion

L’historien et l’outil informatique
Serge Noiret (European University Institute)
Fare storia a più mani con il web 2.0 : cosa cambia nelle pratiche degli storici ?

Philippe Rygiel (Université de Paris I)
De quoi le web est-il l’archive ? Lectures historiennes de l’activité réseau

Jean-Michel Dalle (Université Pierre et Marie Curie, Paris VI)
Peut-on penser le futur d’une communauté scientifique sans tenir compte de l’économie de l’innovation et de la créativité ?

12 h 45-13 h 30 : Discussion

Conclusions d’Andrea Zorzi (Università di Firenze)

If you want to attend, please contact Marilyn Nicoud or Grazia Parrino, secrma@efrome.it

Classical panels at DRHA

Sunday, September 21st, 2008

This year’s Digital Resources for the Humanities and Arts conference (Cambridge, September 14-17) included a two-part panel on Digital Classicist (sadly divided over two days), organized by Simon Mahony, Stuart Dunn, and myself. Despite some apparently last-minute (and unannounced) scheduling changes, the panel was very successful. I post here only my brief notes on the papers involved, and hope that some of my colleagues may post more detailed reactions or reports either in comments, or as posts to this or other blogs.

Gabriel Bodard

I kicked off the first Classicists’ session on Monday morning with a brief history of the Digital Classicist community and a discussion of the different approaches to studying the use of digital methods in the study of the ancient world (contrasting the historical approach of Solomon 1993 with the forward-looking theme of Crane/Terras 2008, for which authors were asked to imagine their field within Classics in 2018). I talked in general terms about the different trajectories of two very early digital classical projects, the TLG and LGPN, both of which were founded in 1972. The TLG, while a technologically innovative project from the get-go, and one which changed (and continues to be indispensable to) the study of Greek literature, has not made a great contribution to the Digital Humanities because of its closed, for-profit, and self-sufficient strategy. The LGPN, on the other hand, began life as a very technologically conservative project, geared to the production of paper volumes of the Lexicon, and has always been reactive to changes in technology rather than proactive as the TLG was; as a result of this, however, they have been able to change with the times, adopting new database and web technologies as they appeared, and are now actively contributing to the development of standards in XML, onomastics, and geo-tagging, and sharing data and tools widely. Finally I argued that any study of the community of digital Classics needs both to consider history (lessons to be learned from projects such as those discussed above, and from other venerable projects that are still innovative today, such as Perseus and the DDbDP), and to consider the newest technologies, standards, and cyberinfrastructures that will drive our work forward in the future.

(David Robey pointed out that Classics has an important and unique position within the UK arts and humanities community, in that the subject associations confer validity and respectability through their support of and recognition for digital resources and research.)

Stuart Dunn

In a paper titled The UK’s evolving e-infrastructure and the study of the past, Stuart discussed the national e-Science agenda and how it relates to the practices and needs of the humanities scholar, using as a basis the research process of data collection, analysis, and publication/dissemination. The essential definition of e-Science is that it centres around scholarly collaboration across and between disciplines, and the advanced computational infrastructure that enables this collaboration. e-Science often involves working with huge bodies of data or processing-intensive operations on complex material, and the example of this kind of research Stuart offered was not Classical but Byzantine: the use of agent-based modelling by colleagues in Birmingham to simulate the climactic battle of Manzikert. After some general conclusions on the opportunities for advanced e-infrastructure to be used in the study of the ancient world, there was some lively discussion of geospatial resources in the British and European academic spheres.

Simon Mahony

Simon gave a detailed presentation of the Humslides 2.0 project that he is conducting with the Classics department at King’s College London. Building upon the work carried out in a pilot project in 2006-7 to digitise the teaching slide collections of the Classics department (as a pilot study for the School of Humanities), which adopted a free trial version of the ContentDM management system (the trial license has now expired and was not renewed), the new project will utilize Web 2.0 tools to present and organize some 7000 slides with more metadata and more input from students and other contributors. A Humslides Flickr group has been established, inspired in part by the Commons group set up by the Library of Congress and now contributed to by several other major institutions. As well as providing a teaching resource (currently restricted to KCL students until some thorny copyright issues have been ironed out), students will be set assessed coursework tasks to contribute to the tagging and annotation of images in this collection.

Elpiniki Fragkouli

Due to illness, Elpiniki’s paper on Training, Communities of Practice, and Digital Humanities was not delivered at this conference. We shall see whether she would be willing to upload her slides to the Digital Classicist website for discussion.

Amy Smith (Leif Isaksen, Brian Fuchs)

The paper on Lightweight Reuse of Digital Resources with VLMA: perspectives and challenges, originally commissioned for the Digital Classicist panel, was at the last minute and for unknown reasons moved into a panel on Digital Humanities on Tuesday morning. Amy presented this paper, which discussed lessons learned from the Virtual Lightbox for Museums and Archives project (discussed in detail in their article in the special issue of the Digital Medievalist journal we edited). Some conclusions and discussion followed on the topic of RDF and other metadata standards, and on browser-based versus desktop applications for viewing and organizing remote objects.

John Pybus (Alan Bowman, Charles Crowther and Ruth Kirkham)

John’s presentation on A Virtual Research Environment for the Study of Documents and Manuscripts gave a succinct and very useful summary of the history of the VRE research that has been carried out by the Centre for the Study of Ancient Documents and the humanities VRE team in Oxford. The project is one of four demo projects conducted in the second phase of work, which began with a user requirements survey in 2006-7. Built using uPortal, the VRE allows remote, parallel, and dynamic consultation and annotation of texts, images, and other resources by multiple scholars simultaneously. John showed some examples of the functionality of the VRE platform, including: the ability to show side-by-side parallel views of a tablet (different images or different renderings of the same image); the juxtaposition of multiple fragments in a lightbox; and the ability to share views and exchange instant messages between scholars.

Emma O’Riordan (Michael Fulford, et al.)

In a paper that discussed another project related to the Oxford VRE programme, the Virtual Environment for Research in Archaeology: a Roman case study at Silchester, Emma discussed the origins of the VERA system in the Integrated Archaeological Database (IADB) that has been in use at Silchester for several years. The VERA system allows almost instant publication of the year’s results (as compared to waiting several months for paper notes to be transcribed), and is both cheaper and more reliable than manual transcription; perhaps most importantly, the system enables live communication and collaboration between the archaeologists in the field and scholars in other parts of the world. Emma stressed one lesson from this project: the importance of working alongside computer scientists, so that development of functionality can take into consideration the needs of the archaeologists as well as the research and interests of the programmers. It was interesting, however, that she also noted the potential pitfalls of too much tinkering with a tool while at work in the field.

Claire Warwick (Melissa Terras, et al.)

Originally scheduled in the second “Digital Humanities” session on Tuesday morning, this paper followed logically on from Emma’s, and discussed Virtual Environments for Research in Archaeology (VERA): Use and Usability of Integrated Virtual Environments in Archaeological Research. Claire focussed on the evaluation and documentation of the unique needs of archaeologists in the field, and some conclusions the VERA team have been able to draw from questionnaires, diaries, and anonymized interviews with the Silchester workers. Learning new IT skills was considered a burden by students who were already having to learn fieldwork skills on the job; there were also new problems with the technology, as compared to the “pencil and paper” methods for which workflows and solutions had been developed over time. We look forward to a full report on the feedback and usability study that the UCL participants in the VERA project are conducting.

Leif Isaksen

Originally scheduled for the “Digital Tools” panel, this paper, Building a Virtual Community: The Antiquist Experience, saw Leif speak to a Digital Classicist audience about a parallel community, Antiquist (which focuses on digital approaches to cultural heritage and archaeology). The Antiquist community has an active mailing list (a Google group), a moribund blog, and a wiki whose main function is the announcement of events. Antiquist boasts multiple moderators, many of whom try to keep the list active, and from the start they actively invited heritage professionals who were known to them to join the community. There is no set agenda, and membership is drawn from a wide range of industries. Over time, traffic on the list has remained steady, with an unusually high percentage of active participants, but the content of the list traffic has recently tended towards announcements rather than long threads and discussions. They are currently considering inviting new moderators to join the team, in the hope of injecting fresh blood and enthusiasm into a team who now rarely innovate or introduce new discussions to the group. Compared to many mailing lists, however, the community is still very active and very healthy. (Leif has usefully uploaded his slideshow and commented in a thread on the Antiquist email group.)

Digitization and the Humanities: an RLG Programs Symposium

Thursday, April 17th, 2008

Is anyone here attending this?

As primary source materials move online, in both licensed and freely available form, what will be the impact on scholarship? On teaching and learning practice? On the collecting practices of research libraries? These are questions we are hoping to explore in the third day of our annual meeting (June 4th). This symposium, which we’re calling “Digitization and the Humanities: Impact on Libraries and Special Collections,” will feature perspectives from scholars on how digital collections are impacting both their research and teaching practice. We’ll also have perspectives from university librarians (Paul Courant, University of Michigan and Robin Adams, Trinity College Dublin) on the potential impact on library collecting practices.

The symposium will be held at the Chemical Heritage Foundation, and on Tuesday evening (June 3rd), the Philadelphia Museum of Art will host a reception for attendees. It should be a great event and a thought provoking conversation, and we hope you will join us. RLG Partners may register online.

Report on NEH Workshop “Supporting Digital Scholarly Editions”

Friday, April 4th, 2008

The official report on the NEH Workshop “Supporting Digital Scholarly Editions”, held on January 14, has been released and is available in PDF form:

http://www.virginiafoundation.org/NEH%20Workshop%20Report%20FINAL-3.pdf

Attendees included representatives from funding agencies and university presses, historians, just one or two literary scholars, one medievalist, and no classicists. It appears that much of the discussion focused on creating a service provider for scholarly editions, something to work between scholars and university presses to turn scholarship into digital publications.

I’m of two minds about this. On one hand, I know a lot of “traditional scholars” who find the idea of digital publication a little scary, just the idea of having to learn the technology. So it could be a good way to bring digital publication into the mainstream. But on the other hand, this kind of model could be stifling for creativity. One of the exciting things about digital projects is that, at this time, although there are standards there is no single model to follow for publication. There’s a lot of room for experimentation. It’s certainly not either/or – those of us doing more cutting-edge work will continue to do it whether there are mainstream service providers at university presses or not. But it’s interesting that this is being discussed.

Informatique et Egyptologie, I&E 2008

Wednesday, April 2nd, 2008

A date has been set for the next meeting of the International Association of Egyptologists Computer Group (Informatique et Egyptologie, I&E), which last met in Oxford in 2006.

Thanks to the kindness of Dr Wilfried Seipel, the meeting will take place in the Kunsthistorisches Museum, Vienna, Austria, on 8-11 July 2008, with the sessions on 9-10 July.

Further information can be found here

Problems and outcomes in digital philology (session 3: methodologies)

Thursday, March 27th, 2008

The Marriage of Mercury and Philology: Problems and outcomes in digital philology

e-Science Institute, Edinburgh, March 25-27 2008.

(Event website; programme wiki; original call)

I was asked to summarize the third session of papers in the round table discussion this afternoon. My notes (which I hope do not misrepresent anybody’s presentation too brutally) are transcribed below.

Session 3: Methodologies

1. Federico Meschini (De Montfort University) ‘Mercury ain’t what he used to be, but was he ever? Or, do electronic scholarly editions have a mercurial attitude?’ (Tuesday, 1400)

Meschini gave a very useful summary of the issues facing editors or designers of digital critical editions. The issues he raised included:

  • the need for good metadata standards to address the problems of (inevitable and to some extent desirable) incompatibility between different digital editions;
  • the need for a modularized approach that can include many very specialist tools (the “lego bricks” model);
  • the desirability of planning a flexible structure in advance so that the model can grow organically, along with the recognition that no markup language is complete, so all models need to be extensible.

After a brief discussion of the reference models available to the digital library world, he explained that digital critical editions are different from digital libraries, and therefore need different models. A digital edition is not merely a delivery of information; it is an environment with which a “reader” or “user” interacts. We need, therefore, to engage with the question: what are the functional requirements for text editions?

A final summary of some exciting recent movements, technologies, and discussions in online editions served as a useful reminder that, far from taking for granted that we know what a digital critical edition should look like, we need to think very carefully about the issues Meschini raises and other discussions of this question.

2. Edward Vanhoutte (Royal Academy of Dutch Language and Literature, Belgium) ‘Electronic editions of two cultures –with apologies to C.P. Snow’ (Tuesday, 1500)

Vanhoutte began with the rhetorical observation that our approach to textual editions is inadequate because the editions are not as intuitive to users, flexible in what they can contain, and extensible in use and function as a household amenity such as the refrigerator. If the edition is an act of communication, an object that mediates between a text and an audience, then it fails if we do not address the “problem of two audiences” (citing Lavagnino). We serve the audience of our peers fairly well (although we should be aware that even this is a more heterogeneous and varied group than we sometimes recognise), but the “common audience”, the readership who are not text editors themselves, are poorly served by current practice.

After some comments on different types of editions (a maximal edition containing all possible information would be too rich and complex for any one reader, so minimal editions of different kinds can be abstracted from this master, for example), and a summary of Robinson’s “fluid, cooperative, and distributed editions”, Vanhoutte made his own recommendation. We need, in summary, to teach our audience, preferably by example, how to use our editions and tools; how to replicate our work, the textual scholarship and the processes performed on it; how to interact with our editions; and how to contribute to them.

Lively discussion after this paper revolved around the question of what it means to educate your audience: writing a “how to” manual is not the best way to encourage engagement with one’s work, but providing multiple interfaces, entry-points, and cross-references that illustrate the richness of the content might be more accessible.

3. Peter Robinson (ITSEE, Birmingham) ‘What we have been doing wrong in making digital editions, and how we could do better?’ (Tuesday, 1630)

Robinson began his provocative and speculative paper by considering a few projects that typify things we do and do not do well: we do not always distribute project output successfully; we do not always achieve the right level of scholarly research value. Most importantly, it is still near-impossible for a good critical scholar to create an online critical edition without technical support, funding for the costs of digitization, and a dedicated centre for the maintenance of a website. All of this means that grant funding is still needed for all digital critical work.

Robinson has a series of recommendations that, he hopes, will help to empower the individual scholar to work without the collaboration of a humanities computing centre to act as advisor, creator, librarian, and publisher:

  1. Make available high-quality images of all our manuscripts (this may need to be funded by a combination of government money, grant funding, and individual users paying for access to the results).
  2. Funding bodies should require the base data for all projects they fund to be released under a Creative Commons Attribution-ShareAlike license.
  3. Libraries and not specialist centres should hold the data of published projects.
  4. Commercial projects should be involved in the production of digital editions, bringing their experience of marketing and money-making to help make projects sustainable and self-funding.
  5. Most importantly, he proposes the adoption of common infrastructure, a set of agreed descriptors and protocols for labelling, pointing to, and sharing digital texts. An existing protocol such as the Canonical Text Services might do the job nicely (a brief sketch follows this list).
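To illustrate recommendation 5: a Canonical Text Services URN names a text and a passage hierarchically, independently of where copies are held. The URN below is a real Perseus-style identifier for Iliad 1.1-1.10, but the service endpoint is hypothetical; this is a sketch of the protocol's shape, not of any particular installation.

    # Sketch: addressing a passage via a CTS URN and a GetPassage request.
    from urllib.parse import urlencode

    # urn:cts:<namespace>:<textgroup>.<work>.<edition>:<passage>
    urn = "urn:cts:greekLit:tlg0012.tlg001.perseus-grc2:1.1-1.10"

    endpoint = "http://cts.example.org/api"   # hypothetical CTS endpoint
    print(endpoint + "?" + urlencode({"request": "GetPassage", "urn": urn}))

Because the URN, not the URL, is the citation, any library holding the same edition can resolve the same reference, which is what would make recommendation 3 (libraries, not specialist centres, holding the data) workable.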

4. Manfred Thaller (Cologne) ‘Is it more blessed to give than to receive? On the relationship between Digital Philology, Information Technology and Computer Science’ (Wednesday, 0950)

Thaller gave the last paper, on the morning of the third day of this event, in which he asked (and answered) the over-arching question: Do computer science professionals already provide everything that we need? And underlying this: Do humanists still need to engage with computer science at all? He pointed out two classes of answer to this question:

  • The intellectual response: there are things that we as humanists need and that computer science is not providing. Therefore we need to engage with the specialists to help develop these tools for ourselves.
  • The political response: maybe we are getting what we need already, but we will experience profitable side effects from collaborating with computer scientists, so we should do it anyway.

Thaller demonstrated via several examples that we do not in fact get everything we need from computer scientists. He pointed out that two big questions were identified in his own work twelve years ago: the need for software for dynamic editions, and the need for mass digitization. Since 1996 mass digitization has come a long way in Germany, and many projects are now underway to image millions of pages of manuscripts and incunabula in that country. Dynamic editions, on the other hand, although there has been some valuable work on tools and publications, seem very little closer than they were twelve years ago.

Most importantly, we as humanists need to recognize that any collaboration with computer scientists is a reciprocal arrangement, in which we offer skills as well as receive services. One of the most difficult challenges facing computer scientists today, we hear, is to engage with, organise, and add semantic value to the mass of imprecise, ambiguous, incomplete, unstructured, and out-of-control data that is the Web. Humanists have spent the last two hundred years studying imprecise, ambiguous, incomplete, unstructured, and out-of-control materials. If we do not lend our experience and expertise to help the computer scientists solve this problem, then we cannot expect free help from them to solve our problems.

Services and Infrastructure for a Million Books (round table)

Monday, March 17th, 2008

Million Books Workshop, Friday, March 14, 2008, Imperial College London.

The second of two round tables in the afternoon of the Million Books Workshop, chaired by Brian Fuchs (Imperial College London), asked a panel of experts what services and infrastructure they would like to see in order to make a Million Book corpus useful.

  1. Stuart Dunn (Arts and Humanities e-Science Support Centre): the kinds of questions that will be asked of the Million Books mean that the structure of this collection needs to be more sophisticated than just a library catalogue
  2. Alistair Dunning (Archaeological Data Service & JISC): powerful services are urgently needed to enable humanists both to find and to use the resources in this new collection
  3. Michael Popham (OULS but formerly director of e-Science Centre): large scale digitization is a way to break down the accidental constraints of time and place that limit access to resources in traditional libraries
  4. David Shotton (Image Bioinformatics Research Group): emphasis is on accessibility and the semantic web. It is clear that manual building of ontologies does not scale to millions of items; therefore data mining and topic modelling are required, possibly assisted by crowdsourcing. It is essential to be able to integrate heterogeneous sources in a single, semantic infrastructure
    1. Dunning: citability and replicability of research become a concern with open publication on this scale
    2. Dunn: the archaeology world has similar concerns, cf. the recent LEAP project
  5. Paul Walk (UK Office for Library and Information Networking): concerned with what happens to the all-important role of domain expertise in this world of repurposable services: where is the librarian?
    1. Charlotte Roueché (KCL): learned societies need to play a role in assuring quality and trust in open publications
    2. Dunning: institutional repositories also need to play a role in long-term archiving. Licensing is an essential component of preservation—open licenses are required for maximum distribution of archival copies
    3. Thomas Breuel (DFKI): versioning tools and infrastructure for decentralised repositories exist (e.g. Mercurial)
    4. Fuchs: we also need mechanisms for finding, searching, identifying, and enabling data in these massive collections
    5. Walk: we need to be able to inform scholars when new data in their field of interest appears via feeds of some kind

(Disclaimer: this is only one blogger’s partial summary. The workshop organisers will publish an official report on this event.)

What would you do with a million books? (round table)

Sunday, March 16th, 2008

Million Books Workshop, Friday, March 14, 2008, Imperial College London.

In the afternoon, the first of two round table discussions concerned the uses to which massive text digitisation could be put by the curators of various collections.

The panellists were:

  • Dirk Obbink, Oxyrhynchus Papyri project, Oxford
  • Peter Robinson, Institute for Textual Scholarship and Electronic Editing, Birmingham
  • Michael Popham, Oxford University Library Services
  • Charlotte Roueché, EpiDoc and Prosopography of the Byzantine World, King’s College London
  • Keith May, English Heritage

Chaired by Gregory Crane (Perseus Digital Library), who kicked off by asking the question:

If you had all of the texts relevant to your field—scanned as page images and OCRed, but nothing more—what would you want to do with them?

  1. Roueché: analyse the texts in order to compile references toward a history of citation (and therefore a history of education) in later Greek and Latin sources.
  2. Obbink: generate a queryable corpus
  3. Robinson: compare editions and manuscripts for errors, variants, etc.
    1. Crane: machine annotation might achieve results not possible with human annotation (especially at this scale), particularly if learning from a human-edited example
    2. Obbink: identification of text from lost manuscripts and witnesses toward generation of stemmata. Important question: do we also need to preserve apparatus criticus?
  4. May: perform detailed place and time investigations into a site preparatory to performing any new excavations
    1. Crane: data mining and topic modelling could lead to the machine-generation of an automatically annotated gazetteer, prosopography, dictionary, etc.
  5. Popham: metadata on digital texts scanned by Google not always accurate or complete; not to academic standards: the scanning project is for accessibility, not preservation
    1. Roueché: Are we talking about purely academic exploitation, or our duty as public servants to make our research accessible to the wider public?
    2. May: this is where topic analysis can make texts more accessible to the non-specialist audience
    3. Brian Fuchs (ICL): insurance and price comparison sites, Amazon, etc., have sophisticated algorithms for targeting web materials at particular audiences
    4. Obbink: we will also therefore need translations of all of these texts if we are reaching out to non-specialists; will machine translation be able to help with this?
    5. Roueché: and not just translations into English, we need to make these resources available to the whole world.

(Disclaimer: this summary is partial and partisan, reflecting those elements of the discussion that seemed most interesting and relevant to this blogger. The workshop organisers will publish an official report on this event presently.)

Million Books Workshop (brief report)

Saturday, March 15th, 2008

Imperial College London.
Friday, March 14, 2008.

David Smith gave the first paper of the morning on “From Text to Information: Machine Translation”. The discussion included a survey of machine translation techniques (including the automatic discovery of existing translations by language comparison) and touched on the value of cross-language searching.

[Please would somebody who did not miss the beginning of the session provide a more complete summary of Smith's paper?]

Thomas Breuel then spoke on “From Image to Text: OCR and Mass Digitisation” (this would have been the first paper of the day, kicking off the developing thread from image to text to information to meaning, but transport problems caused the sequence of presentations to be altered). Breuel discussed the status of professional OCR packages, which are usually not very trainable and have their accuracy constrained by speed requirements, and explained how the Google-sponsored but Open Source OCRopus package intends to improve on this situation. OCRopus is highly extensible and trainable, but currently geared to the needs of the Google Print project (and so while effective at scanning book pages, it may be less so for more generic documents). Currently in alpha release and incorporating the Tesseract OCR engine, the tool has a lower error rate than other Open Source OCR tools (but not than the professional tools, which often contain ad hoc code to deal with special cases). A beta release is set for April 2008, which will demo English, German, and Russian language versions, and release 1.0 is scheduled for Fall 2008. Breuel also briefly discussed the hOCR microformat for describing page layouts in a combination of HTML and CSS3.
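For readers who have not used these tools, here is a minimal sketch of the image-to-text step, driving the Tesseract engine (which OCRopus incorporates) from Python. The file names are illustrative, and the hOCR option assumes a Tesseract build that supports it; the result is an HTML file in which every recognised line and word carries its page coordinates.

    # Sketch: OCR a page image to plain text, then to hOCR (text + layout).
    import subprocess

    # Plain text recognition; writes out.txt
    subprocess.run(["tesseract", "page.png", "out"], check=True)

    # hOCR output; writes out.hocr (out.html on some older builds)
    subprocess.run(["tesseract", "page.png", "out", "hocr"], check=True)

The point of hOCR is the one Breuel made: plain text output discards the page layout, while hOCR keeps text and geometry together in ordinary HTML/CSS, so downstream tools need no special parser.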

David Bamman gave the second in the “From Text to Information” sequence of papers, in which he discussed building a dynamic lexicon using automated syntax recognition, identifying the grammatical contexts of words in a digital text. With a training set of some thousands of words of Greek and Latin tree-banked by hand, auto-syntactic parsing currently achieves an accuracy rate somewhat above 50%. While this is still too high a rate of error for the automated process to be useful as an end in itself (to deliver syntactic tagging to language students, for example), it is good for testing against a human-edited lexicon, which provides a degree of control. Usage statistics and comparisons of related words and meanings give a good sense of the likely sense of a word or form in a given context.
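The accuracy figure quoted is presumably of this kind: the parser's output is compared token by token against the hand-made treebank. A minimal sketch of such a check (an unlabeled attachment score) follows; the tiny Latin example and its numbers are invented for illustration.

    # Sketch: score an automatic parse against a gold treebank by counting
    # tokens whose syntactic head was identified correctly.

    def attachment_score(gold_heads, predicted_heads):
        correct = sum(g == p for g, p in zip(gold_heads, predicted_heads))
        return correct / len(gold_heads)

    # "arma virumque cano": each token mapped to the index of its head
    # (0 = sentence root); tokens are numbered 1-4 in order. Invented data.
    gold = {"arma": 4, "virum": 4, "-que": 2, "cano": 0}
    pred = {"arma": 4, "virum": 4, "-que": 4, "cano": 0}

    print(attachment_score(list(gold.values()), list(pred.values())))  # 0.75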

David Mimno completed the thread with a presentation on “From Information to Meaning: Machine Learning and Classification Techniques”. He discussed automated classification based on typical and statistical features (usually binary indicators: is this email spam or not? Is this play a tragedy or a comedy?). Sequences of objects allow for a different kind of processing (for example spell-checking), including named entity recognition. Names need to be identified not only by their form but by their context, and machines do a surprisingly good job at identifying coreference and thus disambiguating between homonyms. A more flexible form of automatic classification is provided by topic modelling, which allows mixed classifications and does not require the definition of labels. Topic modelling is the automatic grouping of topics, keywords, components, and relationships by the frequency of clusters of words and references. This modelling mechanism is an effective means of organising a library collection by automated topic clusters, for example, rather than by a one-dimensional and rather arbitrary classmark system. Generating multiple connections between publications might be a more effective and more useful way to organise a citation index for Classical Studies than the outdated project that is l’Année Philologique.
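As a concrete illustration of topic modelling in this sense, here is a minimal sketch using scikit-learn's LDA implementation, a modern stand-in for the techniques described; the toy "catalogue records" are invented, and this is not the software Mimno presented.

    # Sketch: group a tiny 'library' into topics by co-occurring vocabulary.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    records = [
        "homer iliad epic hexameter achilles",
        "pottery stratigraphy excavation trench finds",
        "vergil aeneid epic hexameter dido",
        "survey excavation ceramics stratigraphy site",
    ]

    vectorizer = CountVectorizer()
    counts = vectorizer.fit_transform(records)
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

    # Print the most heavily weighted words for each discovered topic
    vocab = vectorizer.get_feature_names_out()
    for i, weights in enumerate(lda.components_):
        top = [vocab[j] for j in weights.argsort()[-4:][::-1]]
        print(f"topic {i}:", ", ".join(top))

No labels are defined in advance; the clusters (here, roughly an "epic poetry" topic and an "excavation" topic) emerge from word co-occurrence alone, which is what makes the approach attractive for organising a collection by topic rather than by classmark.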

Simon Overell gave a short presentation on his doctoral research into the distribution of location references within different language versions of Wikipedia. Using the tagged location links as disambiguators, and using the language cross-reference tags to compare across the collections, he uses the statistics compiled to analyse bias (in a supposedly Neutral Point-Of-View publication) and provide support for placename disambiguation. Overell’s work is in progress, and he is actively seeking collaborators who might have projects that could use his data.

In the afternoon there were two round-table discussions on the subjects of “Collections” and “Systems and Infrastructure” that I may report on later if my notes turn out to be usable.

Registration: 3D Scanning Conference at UCL

Tuesday, February 26th, 2008

Kalliopi Vacharopoulou wrote, via the DigitalClassicist list:

I would like to draw to your attention the fact that registration for the 3D Colour Laser Scanning Conference at UCL on the 27th and 28th of March has now opened.

The first day (27th of March) will include a keynote presentation and papers on the themes of General Applications of 3D Scanning in the Museum and Heritage Sector and of 3D Scanning in Conservation.

The second day (28th of March) will offer a keynote presentation and papers on the themes of 3D Scanning in Display (and Exhibition) and Education and Interpretation. A detailed programme with the papers and the names of the speakers can be found on our website.

If you would like to attend the conference, I would kindly ask you to fill in the registration form, which you can find at this link, and return it to me as soon as possible.

There is no fee for participating in or attending the conference; coffee and lunch are provided free of charge. Please note that attendance is offered on a first-come, first-served basis.

Please feel free to circulate the information about the conference to anyone who you think might be interested.

In the meantime, do not hesitate to contact me with any inquiries.

International Seminar of Digital Philology: Edinburgh, March 25-27, 2008

Saturday, February 23rd, 2008

Seen on the AHeSSC mailing list:

The e-Science Institute Event Announcement

The e-Science Institute is delighted to host “The Marriage of Mercury and Philology: Problems and Outcomes in Digital Philology”. The conference welcomes both leading scholars and young researchers working on the problems of textual criticism and editorial scholarship in the electronic medium, as well as students, teachers, librarians, archivists, and computing professionals who are interested in the representation, access, exchange, management, and conservation of texts.

Organiser: Cinzia Pusceddu
Dates and Time: Tuesday 25th March 09.00 – Thursday 27th March 17.00
Place: e-Science Institute
University of Edinburgh
13-15 South College Street
Edinburgh
EH8 9AA

For registration and more details see http://www.nesc.ac.uk/esi/events/854/.


Humanities GRID Workshop (30-31 Jan; Imperial College London)

Monday, January 21st, 2008

By way of the Digital Classicists List:

Epistemic Networks and GRID + Web 2.0 for Arts and Humanities
30-31 January 2008
Imperial College Internet Centre, Imperial College London

http://www.internetcentre.imperial.ac.uk/events

Data-driven science has emerged as a new model that enables researchers to move from experimental, theoretical, and computational distributed networks to a new paradigm for scientific discovery based on large-scale GRID networks (NSF/JISC Digital Repositories Workshop, AZ 2007). Hundreds of thousands of new digital objects are placed in digital repositories and on the web every day, supporting and enabling research processes not only in science, but in medicine, education, culture, and government. It is therefore important to build interoperable infrastructures and web services that will allow for the exploration, data-mining, semantic integration, and experimentation of arts and humanities resources on a large scale. There is a growing consensus that GRID solutions alone are too heavy, and that coupling them with Web 2.0 allows for the development of a more lightweight service-oriented architecture (SOA) that can adapt readily to user needs by using on-demand utility computing, such as morphological tools, mash-ups, surf clouds, annotation, and automated workflows for composing multiple services. The goal is not just to have fast access to digital resources in the arts and humanities, but to have the capacity to create new digital resources, interrogate data, and form hypotheses about its meaning and wider context. Clearly what needs to emerge is a mixed model of GRID + Web 2.0 solutions for the arts and humanities which creates an epistemic network that supports a four-step iterative process: (i) retrieval, (ii) contextualisation, (iii) narrative and hypothesis building, and (iv) creating contextualised digital resources in semantically integrated knowledge networks. What is key here is not just managing new data, but the capacity to share, order, and create knowledge networks from existing resources in a semantically accessible form.

To create epistemic networks in the arts and humanities there are core technologies that must be developed.  The aim of this expert METHNET Workshop is to focus on developing a strategy for the implementation of these core technologies on an international scale by bringing together GRID computing specialists with researchers from Classics, Literature and History who have been involved in the creation and use of electronic resources.  The core technologies we will focus on in this two-day workshop are: (i) infrastructure, (ii) named entity, identity and co-reference services, (iii) morphological services and parallel texts, (iv) epistemic networks and virtual research environments.  The idea is to bring together expertise from UK-, US-, and European-funded projects to agree upon a common strategy for the development of core infrastructure and web services for the arts and humanities that will enable the use of GRID technologies for advanced research.
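
Of these, the identity and co-reference services of item (ii) are perhaps the easiest to picture: at their simplest they map the many surface forms of a name onto a single shared identifier, so that records from different collections can be merged. A minimal sketch, in which the lookup table and all URIs are invented examples:

    # Minimal sketch of an identity/co-reference lookup: variant name
    # forms resolve to one canonical identifier. All URIs are invented.
    from typing import Optional

    COREF_TABLE = {
        "P. Vergilius Maro": "http://example.org/person/vergil",
        "Virgil": "http://example.org/person/vergil",
        "Vergil": "http://example.org/person/vergil",
    }

    def resolve(name: str) -> Optional[str]:
        """Return the canonical identifier for a name form, if known."""
        return COREF_TABLE.get(name)

    # Two variant spellings resolve to the same person.
    assert resolve("Virgil") == resolve("P. Vergilius Maro")

A production service would of course sit behind a web API and handle fuzzy matching and disputed identifications, but the contract is the same: name form in, shared identifier out.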

DAY ONE: 10:00 – 6:00

SESSION I: GRID + Web 2.0 Infrastructure

SESSION II: Computational and Semantic Services: Named Entity, Identity and Co-reference

  • Paul Watry – Named Entity and Identity Services for the National Archives (www.liv.ac.uk)
  • Greg Crane – Co-Reference (Perseus)
  • Hamish Cunningham / Kalina Bontcheva – AKT and GATE: GRID-Web Services
  • Martin Doerr – Co-Reference and Semantic Services for GRID + Web 2.0 (FORTH)

DAY TWO: 10:00 – 6:00

SESSION I:  Morphological, Parallel Texts and Citation Services

  • Greg Crane – “Latin Dependency Treebank”, Perseus Project
  • Marco Passarotti – “Index Thomisticus” Treebank
  • Notis Toufexis – ‘Neither Ancient, nor Modern:  Challenges for the creation of a Digital Infrastructure for Medieval Greek’
  • Rob Iliffe – Intelligent Tools for Humanities Researchers, The Newton Project

SESSION II: Epistemic Networks and Virtual Research Environments

Registration fee is £60 and places are limited.

Please contact Dolores Iorizzo (d.iorizzo@ic.ac.uk) to secure a place or for further information.  Please send registration to Glynn Cunin (g.cunin@imperial.ac.uk).

The Imperial College Internet Centre would like to acknowledge generous support from the AHRC METHNET for co-hosting this conference.

International School in Archaeology and Cultural Heritage

Friday, January 18th, 2008

By way of Jack Sasson’s Agade list:

We would like to bring your attention to the International School in Archaeology and Cultural Heritage that we’re organizing in May 2008, in Ascona, Switzerland.

It is jointly organized by:

The School will address the role of modern technologies in the heritage field, giving participants the opportunity to obtain a detailed overview of the main methods and their applications in archaeological and conservation research and practice. Furthermore, the School will give participants the chance to get quickly to the heart of the scientific discussion on 3D technologies (surveying methods, documentation, data management and data interpretation) in archaeological research and practice.

The School will be open to ca. 60 participants at graduate level, to those carrying out doctoral or specialist research, to established researchers, to members of State Archaeology Services, and to professionals specializing in the study, documentation, modeling and conservation of the archaeological heritage.

The deadline for registration is 31st March 2008.

Grants provided by UNESCO and ISPRS will be available for students with limited budgets and travel possibilities. The deadline for grant applications is 15th February 2008.

The grant application and registration form are available online [pdf].

The School is to be held in the congress centre Centro Stefano Franscini, Monte Verità, Ascona, Switzerland. The centre is an ETH-affiliated seminar complex located in a superb botanical park on the historic and cultural Monte Verità area, which will also be the residence of the participants with its integrated hotel and restaurant.

We would be grateful if you could also circulate this announcement to all the possible participants.

If you have any questions, please don’t hesitate to contact the organizers by email at info@3darchaeology.org.

Thank you and best regards,

Prof. Armin Gruen

Dr. Stefano Campana

Dr. Fabio Remondino

Prof. Maurizio Forte

THATCamp: May 31 – June 1, 2008

Thursday, January 17th, 2008

See further http://thatcamp.org/:

a BarCamp-style, user-generated “unconference” on digital humanities … organized and hosted by the Center for History and New Media at George Mason University, Digital Campus, and THATPodcast

NEH/JISC joint event at King’s College, London

Thursday, January 17th, 2008

Monday, 21 January 2008 in room 2B08, Strand Campus, King’s College London:

Digital Humanities Tool Survey

Thursday, January 17th, 2008

In a recent email, Brett Bobley alerted us to Susan Schreibman’s survey:

Colleagues,

Over the past few years, the idea of tool development as a scholarly activity in the digital humanities has been gaining ground. It has been the subject of numerous articles and conference presentations. There has not been, however, a concerted effort to gather information about the perceived value of tool development, not only as a scholarly activity, but in relation to the tenure and promotion process, as well as for the advancement of the field itself.

Ann Hanlon and I have compiled such a survey and would be grateful if those of you who are or have been engaged in tool development for the digital humanities would take the time to complete an online Digital Humanities Tool Developers’ Survey.

You will need to fill out a consent form before you begin, and there is an opportunity to provide us with feedback on more than one tool (you simply take the survey again). The survey should not take more than 10-15 minutes. It is our intention to present the results of our survey at Digital Humanities 2008.

With all best wishes,

Susan Schreibman
Assistant Dean
Head of Digital Collections and Research
McKeldin Library
University of Maryland, College Park

Tutorial: The CIDOC Conceptual Reference Model

Tuesday, January 15th, 2008

Noted by way of JISC-REPOSITORIES:

DCC Tutorial: The CIDOC Conceptual Reference Model – A New Standard for Knowledge Sharing
January 29 2008
University of Glasgow

The DCC and FORTH are delighted to announce that they will be delivering a joint one-day tutorial on the CIDOC Conceptual Reference Model.

This tutorial will introduce the audience to the CIDOC Conceptual Reference Model, a core ontology and ISO standard (ISO 21127) for the semantic integration of cultural information with library, archive and other information. The CIDOC CRM concentrates on the definition of relationships, rather than terminology, in order to mediate between heterogeneous database schemata and metadata structures. The result is a compact model of 80 classes and 130 relationships, easy to comprehend and suitable to serve as a basis for mediating cultural and library information, thereby providing the semantic ‘glue’ needed to transform today’s disparate, localised information sources into a coherent and valuable global resource.

The model comprises the concepts characteristic of the data structures employed in most museum, archive and library documentation. Its central idea is the explicit modelling of events, both for the representation of metadata, such as creation, publication and use, and for content summarization and the creation of integrated knowledge bases. It is not prescriptive, but provides a framework of common high-level semantics that allows for information integration at the schema level across a wide range of domains.
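
To make the event-centric idea concrete, here is a small sketch in Python using rdflib. The class and property identifiers (E12_Production, P108i_was_produced_by, P14_carried_out_by, P7_took_place_at) follow the published CRM, but the namespace URI, the example resources and the modelling shortcuts should be read as illustrative assumptions rather than normative CRM usage:

    # Sketch of CIDOC CRM event-centric modelling in RDF with rdflib.
    # CRM identifiers follow the published model; the namespace URI and
    # all example data are illustrative assumptions only.
    from rdflib import Graph, Namespace
    from rdflib.namespace import RDF

    CRM = Namespace("http://www.cidoc-crm.org/cidoc-crm/")
    EX = Namespace("http://example.org/")

    g = Graph()
    g.bind("crm", CRM)

    # Instead of a flat "creator" field on the object, the CRM reifies
    # the production event and attaches agent and place to the event.
    g.add((EX.vase1, RDF.type, CRM["E22_Man-Made_Object"]))
    g.add((EX.production1, RDF.type, CRM.E12_Production))
    g.add((EX.potter1, RDF.type, CRM.E21_Person))
    g.add((EX.athens, RDF.type, CRM.E53_Place))

    g.add((EX.vase1, CRM.P108i_was_produced_by, EX.production1))
    g.add((EX.production1, CRM.P14_carried_out_by, EX.potter1))
    g.add((EX.production1, CRM.P7_took_place_at, EX.athens))

    print(g.serialize(format="turtle"))

Routing everything through the event node is what makes later integration cheap: a date, an alternative attribution or a bibliographic reference can be attached to the same production event without touching the object record itself.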

The CIDOC CRM, an effort of the museums community, is paralleled for the library community by IFLA’s Functional Requirements for Bibliographic Records (FRBR). The two Working Groups have been collaborating since 2003 on a common harmonized model. The first draft version is now available as a compatible extension of the CRM, the ooFRBR, covering libraries and museums equally.

The tutorial aims to impart the knowledge needed to understand the potential of applying the CRM: where it can be useful, and what the major technical issues of an application are. It will present an overview of the concepts and relationships covered by the CRM. As an example of a simple application, it will present the CRM Core Metadata Element Set, a minimal metadata schema of about 20 elements that is still compatible with the CRM, and demonstrate how even this simple schema can be used to create large networks of integrated knowledge about physical and digital objects, persons, places and events. As an example of a simple compatible extension, it will present the core model of digitization processes used in the CASPAR project to describe digital provenance.

In part two, the tutorial will present the draft ooFRBR model in detail. This model describes the intellectual creation process from first conception to publication, whether in industrial form such as books or electronically. It should be of equal interest to the digital libraries community, and it is a fine example of the extensibility of the CRM to dedicated domains.

There will be enough time for questions and discussion.

Presenter:
Martin Doerr, Information Systems Lab, Institute of Computer Science, Foundation for Research and Technology – Hellas (FORTH), Vassilika Vouton.

Target audience: ontology experts, digital library designers, data warehouse designers, system integrators, and portal designers who work in the wider area of cultural and library information, as well as IT staff of libraries, museums and archives, and vendors of cultural and other information systems. Basic knowledge of object-oriented data models is required.

Duration: Part one: 3 hours
Part two: 1.5 hours
Cost: £50 for DCC Associate Network members and £75 for non-members.

If you are interested in taking part, please email british.editor@erpanet.org. Please feel free to forward this message to any interested parties.