Archive for March, 2008

Scholarly legacy: an argument for open licensing now?

Monday, March 31st, 2008

Back in November, Gabriel Bodard posted about the importance of attaching explicit licenses (or public domain declarations) to on-line works so as to clarify for users how they can, and can’t, use these works. A new post by Cathy Davidson (“Permission Denied”, Cat in the Stack, 31 March 2008) highlights the case of an academic author who has been unable to include in his book various images of artworks created by the subject of that book, because the artists’ heirs have refused permission.

Which all makes me wonder: is explicit release, in one’s own lifetime, of a work into the public domain or under license terms that permit redistribution and remixing, sufficient to prevent post-mortem claw-back by one’s institutional or personal heirs?

Problems and outcomes in digital philology (session 3: methodologies)

Thursday, March 27th, 2008

The Marriage of Mercury and Philology: Problems and outcomes in digital philology

e-Science Institute, Edinburgh, March 25-27 2008.

(Event website; programme wiki; original call)

I was asked to summarize the third session of papers in the round table discussion this afternoon. My notes (which I hope do not misrepresent anybody’s presentation too brutally) are transcribed below.

Session 3: Methodologies

1. Federico Meschini (De Montfort University) ‘Mercury ain’t what he used to be, but was he ever? Or, do electronic scholarly editions have a mercurial attitude?’ (Tuesday, 1400)

Meschini gave a very useful summary of the issues facing editors or designers of digital critical editions. The issues he raised included:

  • the need for good metadata standards to address the problems of (inevitable and to some extent desirable) incompatibility between different digital editions;
  • the need for a modularized approach that can include many very specialist tools (the “lego bricks” model);
  • the desirability of planning a flexible structure in advance so that the model can grow organically, along with the recognition that no markup language is complete, so all models need to be extensible.

After a brief discussion of the reference models available to the digital library world, he explained that digital critical editions are different from digital libraries, and therefore need different models. A digital edition is not merely a delivery of information, it is an environment with which a “reader” or “user” interacts. We need, therefore, to engage with the question: what are the functional requirements for text editions?

A final summary of some exciting recent movements, technologies, and discussions in online editions served as a useful reminder that, far from taking for granted that we know what a digital critical edition should look like, we need to think very carefully about the issues Meschini raises and about other discussions of this question.

2. Edward Vanhoutte (Royal Academy of Dutch Language and Literature, Belgium) ‘Electronic editions of two cultures –with apologies to C.P. Snow’ (Tuesday, 1500)

Vanhoutte began with the rhetorical observation that our approach to textual editions is inadequate, because the editions are not as intuitive to users, as flexible in what they can contain, or as extensible in use and function as a household amenity such as the refrigerator. If the edition is an act of communication, an object that mediates between a text and an audience, then it fails if we do not address the “problem of two audiences” (citing Lavagnino). We serve the audience of our peers fairly well–although we should be aware that even this is a more heterogeneous and varied group than we sometimes recognise–but the “common audience”, the readership who are not text editors themselves, are poorly served by current practice.

After some comments on different types of editions (a maximal edition containing all possible information would be too rich and complex for any one reader, so minimal editions of different kinds can be abstracted from this master, for example), and a summary of Robinson’s “fluid, cooperative, and distributed editions”, Vanhoutte made his own recommendation. We need, in summary, to teach our audience, preferably by example, how to use our editions and tools; how to replicate our work, the textual scholarship and the processes performed on it; how to interact with our editions; and how to contribute to them.

Lively discussion after this paper revolved around the question of what it means to educate your audience: writing a “how to” manual is not the best way to encourage engagement with one’s work, but providing multiple interfaces, entry-points, and cross-references that illustrate the richness of the content might be more accessible.

3. Peter Robinson (ITSEE, Birmingham) ‘What we have been doing wrong in making digital editions, and how we could do better?’ (Tuesday, 1630)

Robinson began his provocative and speculative paper by considering a few projects that typify things we do and do not do well: we do not always distribute project output successfully; we do not always achieve the right level of scholarly research value. Most importantly, it is still near-impossible for a good critical scholar to create an online critical edition without technical support, funding for the costs of digitization, and a dedicated centre for the maintenance of a website. All of this means that grant funding is still needed for all digital critical work.

Robinson has a series of recommendations that, he hopes, will help to empower the individual scholar to work without the collaboration of a humanities computing centre to act as advisor, creator, librarian, and publisher:

  1. Make available high-quality images of all our manuscripts (this may need to be funded by a combination of government money, grant funding, and individual users paying for access to the results).
  2. Funding bodies should require the base data for all projects they fund to be released under a Creative Commons Attribution-ShareAlike license.
  3. Libraries and not specialist centres should hold the data of published projects.
  4. Commercial projects should be involved in the production of digital editions, bringing their experience of marketing and money-making to help make projects sustainable and self-funding.
  5. Most importantly, he proposes the adoption of common infrastructure, a set of agreed descriptors and protocols for labelling, pointing to, and sharing digital texts. An existing protocol such as the Canonical Text Services might do the job nicely; a sketch of such a request follows this list.
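
An aside from this blogger, not from Robinson’s paper: the Canonical Text Services protocol identifies passages of text with CTS URNs and serves them over plain HTTP requests. A minimal sketch of composing such a request, assuming a hypothetical endpoint URL:

```python
# A minimal sketch of composing a Canonical Text Services (CTS) request.
# The endpoint is hypothetical; the URN cites Homer's Iliad in standard
# CTS notation (urn:cts:greekLit:tlg0012.tlg001 = the Iliad).
from urllib.parse import urlencode

CTS_ENDPOINT = "http://cts.example.org/api"  # hypothetical service

def cts_get_passage(urn):
    """Build a CTS GetPassage request URL for the given text URN."""
    return CTS_ENDPOINT + "?" + urlencode({"request": "GetPassage", "urn": urn})

# The trailing citation selects Book 1, lines 1-10.
print(cts_get_passage("urn:cts:greekLit:tlg0012.tlg001:1.1-1.10"))
```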

4. Manfred Thaller (Cologne) ‘Is it more blessed to give than to receive? On the relationship between Digital Philology, Information Technology and Computer Science’ (Wednesday, 0950)

Thaller gave the last paper, on the morning of the third day of this event, in which he asked (and answered) the over-arching question: Do computer science professionals already provide everything that we need? And underlying this: Do humanists still need to engage with computer science at all? He pointed out two classes of answer to this question:

  • The intellectual response: there are things that we as humanists need and that computer science is not providing. Therefore we need to engage with the specialists to help develop these tools for ourselves.
  • The political response: maybe we are getting what we need already, but we will experience profitable side effects from collaborating with computer scientists, so we should do it anyway.

Thaller demonstrated via several examples that we do not in fact get everything we need from computer scientists. He pointed out that two big questions were identified in his own work twelve years ago: the need for software for dynamic editions, and the need for mass digitization. Since 1996 mass digitization has come a long way in Germany, and many projects are now underway to image millions of pages of manuscripts and incunabula in that country. Dynamic editions, on the other hand, seem very little closer than they were twelve years ago, despite some valuable work on tools and publications.

Most importantly, we as humanists need to recognize that any collaboration with computer scientists is a reciprocal arrangement: we offer skills as well as receive services. One of the most difficult challenges facing computer scientists today, we hear, is to engage with, organise, and add semantic value to the mass of imprecise, ambiguous, incomplete, unstructured, and out-of-control data that is the Web. Humanists have spent the last two hundred years studying imprecise, ambiguous, incomplete, unstructured, and out-of-control materials. If we do not lend our experience and expertise to help the computer scientists solve this problem, then we cannot expect free help from them to solve our problems.

DHI Now Known as Office of Digital Humanities (ODH)

Tuesday, March 25th, 2008

Not specifically classics, but this news from the National Endowment for the Humanities should be of interest, at least to those of us in the US: The Digital Humanities Initiative (DHI) has been made permanent, and is now the Office of Digital Humanities (ODH).
From the ODH Webpage:

The Office of Digital Humanities (ODH) is an office within the National Endowment for the Humanities (NEH). Our primary mission is to help coordinate the NEH’s efforts in the area of digital scholarship. As in the sciences, digital technology has changed the way scholars perform their work. It allows new questions to be raised and has radically changed the ways in which materials can be searched, mined, displayed, taught, and analyzed. Technology has also had an enormous impact on how scholarly materials are preserved and accessed, which brings with it many challenging issues related to sustainability, copyright, and authenticity. The ODH works not only with NEH staff and members of the scholarly community, but also facilitates conversations with other funding bodies both in the United States and abroad so that we can work towards meeting these challenges.

Digital Classicist seminars update

Tuesday, March 25th, 2008

To bring you all up to date with what is going on with the Digital Classicist seminar series:

Some papers from the DC seminar series held at the Institute of Classical Studies in London in the summer of 2006 have been published as a special issue of the Digital Medievalist (4:2008).

 See: http://www.digitalmedievalist.org/index.html

The dedication reads: In honour of Ross Scaife (1960-2008), without whose fine example of collaborative spirit, scrupulous scholarship, and warm friendship none of the work in this volume would be what it is.

Gabriel and I are putting together a collection of papers from the DC summer series of 2007 and working on the programme for the coming summer (2008). With the continued support of the Institute of Classical Studies (London) and the Centre for Computing in the Humanities, King’s College London, it is anticipated that this seminar series will continue to be an annual event.

Ross Scaife Memorial Services

Friday, March 21st, 2008

There will be two memorial services held in honor of Ross Scaife. The first will be held at Belmont (the home and studio of Fredericksburg artist Gari Melchers) in Fredericksburg, Virginia on Wednesday, April 2, at 2 pm (http://www.umw.edu/gari_melchers/). The second service will be held in Memorial Hall on the University of Kentucky Campus in Lexington, Kentucky on Saturday, April 12, at 1pm (http://ukcc.uky.edu/cgi-bin/dynamo?maps.391+campus+0049). Feel free to contact me if you’d like more information (Dot Porter, dporter@uky.edu).

A note from the blog editors

Friday, March 21st, 2008

The authors and editors of the Stoa blog have been hesitant to post a new item to this blog that would take the obituary and memorial to Ross Scaife off the top of the page. However, we are determined that the blogging should go on, and that this site should continue to serve the functions for which Ross founded it.

This blog exists to report, highlight, and comment upon issues of interest to Classicists and Digital Humanists (since 2005 it has also been the official blog of the Digital Classicist community). Its core themes include digital research and publication, events, publications and jobs. We place particular focus on standards, Open Access, Open Source, and other issues that are vital to the future of our fields.

We are in communication with the Stoa Advisory Board, whose members are communicating with the various Stoa project leaders concerning steps for the maintenance and preservation of Stoa content. As their plans formalize, we will report on them here.

Ross Scaife (1960-2008)

Tuesday, March 18th, 2008

Allen Ross Scaife, 47, Professor of Classics at the University of Kentucky and founding editor of the Stoa Consortium for Electronic Publication in the Humanities, died of cancer on March 15, at his home in Lexington, Kentucky.

[Photo of Ross Scaife, taken in January 2007]

Ross was born in Fredericksburg, VA on March 31, 1960. He graduated from the Tilton School in Tilton, New Hampshire in 1978 and from the College of William and Mary in 1982 with a major in Classics and Philosophy. He earned a PhD in 1990 in Classical Studies at the University of Texas at Austin. In 1988 he participated in the summer program at the American Academy in Rome, and in 1985 was awarded a Fulbright Fellowship for a year of study at the American School of Classical Studies in Athens, Greece.

From 1991 to the time of his death, Ross was on the faculty at the University of Kentucky in the Department of Modern and Classical Languages, Literature, and Cultures where he taught courses on women in the ancient world, Greek art, Aristophanes, and the Greek historians, as well as Greek and Latin language courses.

A pioneer in using computer technology to advance scholarship in the humanities, Ross is perhaps best known as the founding editor of the Stoa Consortium for Electronic Publication in the Humanities. The Stoa, established in 1997, set the standard for Open Access publication of digital humanities work in the classics, serving as an umbrella project for many diverse projects that provide functionality, and have requirements, not supported by traditional (print) publishers. In addition to providing Open Access publication for the work of other scholars, Ross strove to make his own work (and the raw materials behind that work) freely available to others. He was the co-creator of Diotima: Materials for the Study of Women and Gender in the Ancient World and of the Neo-Latin Colloquia collection, both of which are published on The Stoa.

True to his principled belief in Open Access, Ross was always a stern critic of models of scholarship that were needlessly exclusionary in their presentation or implementation. He firmly believed in the potential afforded by technology to bring the highest levels of scholarship to the widest possible audience, and in the obligation of learned societies to make their work freely available to all interested readers.

Ross’s influence is most noticeable in his long-standing belief in the power of collaborative work. With humor, generosity, and a keen editor’s discretion, he worked throughout his career to build working relationships among an international circle of collaborators, for his own projects as well as for others. As a founding editor of the Suda On Line, a web-accessible database for work on Byzantine Greek lexicography, Ross helped to build a framework that allowed a large number of people to work together on a single edition. SOL was founded in 1998, at a time when such large-scale collaborative editing was rare, if not unheard of. The influence of the SOL is still being felt as the next generation of collaborative editing tools is developed. Ross had long-term associations with Harvard’s Center for Hellenic Studies, the Perseus Project, and more recently with the Digital Classicist. Those who knew him will remember him for his generosity and willingness to offer advice, and for his ability to see connections and build bridges between projects and people.

Most recently, Ross was instrumental in forging the collaboration that resulted in the high-resolution digital imaging of the Venetus A, a 10th-century manuscript of the Iliad located at the Biblioteca Marciana in Venice, and was a co-Principal Investigator of project EDUCE, which aims to use non-invasive, volumetric scanning technologies for virtually “unwrapping” and visualizing ancient papyrus scrolls. From July 2005, Ross was the director of the Collaboratory for Research in Computing for Humanities, a research unit at the University of Kentucky which provides technical assistance to faculty who wish to undertake humanities computing projects and encourages and supports interdisciplinary partnerships between faculty at UKY and researchers around the world.

His many interests included sailing in the Northern Neck of Virginia, hunting, cooking, woodworking, and photography.

Ross was the proud father of three sons, Lincoln (16), Adrian (13), and Russell (9). In addition, Ross is survived by his wife, Cathy Edwards Scaife, his parents, William and Sylvia Scaife, and three siblings, Bill Scaife, Susan Duerksen, and John Scaife, as well as their spouses and children.

Two memorial services are planned. The first will be held at Belmont (home and studio of Fredericksburg artist Gari Melchers) in Fredericksburg, Virginia on Wednesday, April 2, at 2 pm. The second service will be held in Memorial Hall on the University of Kentucky Campus in Lexington, Kentucky on Saturday, April 12, at 1pm.

Memorial donations may be made to the Swift/Longacre Scholarship Fund which provides support for students of classical studies at the University of Kentucky. Please make checks payable to the University of Kentucky and send in care of Dr. Jane Phillips, Department of MCLLC, 1055 Patterson Office Tower, University of Kentucky, Lexington, KY 40506-0027.

Services and Infrastructure for a Million Books (round table)

Monday, March 17th, 2008

Million Books Workshop, Friday, March 14, 2008, Imperial College London.

The second of two round tables in the afternoon of the Million Books Workshop, chaired by Brian Fuchs (Imperial College London), asked a panel of experts what services and infrastructure they would like to see in order to make a Million Book corpus useful.

  1. Stuart Dunn (Arts and Humanities e-Science Support Centre): the kinds of questions that will be asked of the Million Books mean that the structure of this collection needs to be more sophisticated than just a library catalogue
  2. Alistair Dunning (Archaeological Data Service & JISC): powerful services are urgently needed to enable humanists both to find and to use the resources in this new collection
  3. Michael Popham (OULS but formerly director of e-Science Centre): large scale digitization is a way to break down the accidental constraints of time and place that limit access to resources in traditional libraries
  4. David Shotton (Image Bioinformatics Research Group): emphasis is on accessibility and the semantic web. It is clear that manual building of ontologies does not scale to millions of items; therefore data mining and topic modelling are required, possibly assisted by crowdsourcing. It is essential to be able to integrate heterogeneous sources in a single, semantic infrastructure
    1. Dunning: citability and replicability of research becomes a concern with open publication on this scale
    2. Dunn: the archaeology world has similar concerns, cf. the recent LEAP project
  5. Paul Walk (UK Office for Library and Information Networking): concerned with what happens to the all-important role of domain expertise in this world of repurposable services: where is the librarian?
    1. Charlotte Roueché (KCL): learned societies need to play a role in assuring quality and trust in open publications
    2. Dunning: institutional repositories also need to play a role in long-term archiving. Licensing is an essential component of preservation—open licenses are required for maximum distribution of archival copies
    3. Thomas Breuel (DFKI): versioning tools and infrastructure for decentralised repositories exist (e.g. Mercurial)
    4. Fuchs: we also need mechanisms for finding, searching, identifying, and enabling data in these massive collections
    5. Walk: we need to be able to inform scholars when new data in their field of interest appears via feeds of some kind

(Disclaimer: this is only one blogger’s partial summary. The workshop organisers will publish an official report on this event.)

What would you do with a million books? (round table)

Sunday, March 16th, 2008

Million Books Workshop, Friday, March 14, 2008, Imperial College London.

In the afternoon, the first of two round table discussions concerned the uses to which massive text digitisation could be put by the curators of various collections.

The panellists were:

  • Dirk Obbink, Oxyrhynchus Papyri project, Oxford
  • Peter Robinson, Institute for Textual Scholarship and Electronic Editing, Birmingham
  • Michael Popham, Oxford University Library Services
  • Charlotte Roueché, EpiDoc and Prosopography of the Byzantine World, King’s College London
  • Keith May, English Heritage

Chaired by Gregory Crane (Perseus Digital Library), who kicked off by asking the question:

If you had all of the texts relevant to your field—scanned as page images and OCRed, but nothing more—what would you want to do with them?

  1. Roueché: analyse the texts in order to compile references toward a history of citation (and therefore a history of education) in later Greek and Latin sources.
  2. Obbink: generate a queriable corpus
  3. Robinson: compare editions and manuscripts for errors, variants, etc.
    1. Crane: machine annotation might achieve results not possible with human annotation (especially at this scale), particularly if learning from a human-edited example
    2. Obbink: identification of text from lost manuscripts and witnesses toward generation of stemmata. Important question: do we also need to preserve apparatus criticus?
  4. May: perform detailed place and time investigations into a site preparatory to performing any new excavations
    1. Crane: data mining and topic modelling could lead to the machine-generation of an automatically annotated gazetteer, prosopography, dictionary, etc.
  5. Popham: metadata on digital texts scanned by Google not always accurate or complete; not to academic standards: the scanning project is for accessibility, not preservation
    1. Roueché: Are we talking about purely academic exploitation, or our duty as public servants to make our research accessible to the wider public?
    2. May: this is where topic analysis can make texts more accessible to the non-specialist audience
    3. Brian Fuchs (ICL): insurance and price comparison sites, Amazon, etc., have sophisticated algorithms for targeting web materials at particular audiences
    4. Obbink: we will also therefore need translations of all of these texts if we are reaching out to non-specialists; will machine translation be able to help with this?
    5. Roueché: and not just translations into English, we need to make these resources available to the whole world.

(Disclaimer: this summary is partial and partisan, reflecting those elements of the discussion that seemed most interesting and relevant to this blogger. The workshop organisers will publish an official report on this event presently.)

Million Books Workshop (brief report)

Saturday, March 15th, 2008

Imperial College London.
Friday, March 14, 2008.

David Smith gave the first paper of the morning on “From Text to Information: Machine Translation”. The discussion included a survey of machine translation techniques (including the automatic discovery of existing translations by language comparison), and some of the value of cross-language searching.

[Please would somebody who did not miss the beginning of the session provide a more complete summary of Smith's paper?]

Thomas Breuel then spoke on “From Image to Text: OCR and Mass Digitisation” (this would have been the first paper of the day, kicking off the developing thread from image to text to information to meaning, but transport problems caused the sequence of presentations to be altered). Breuel discussed the status of professional OCR packages, which are usually not very trainable and have their accuracy constrained by speed requirements, and explained how the Google-sponsored but Open Source OCRopus package intends to improve on this situation. OCRopus is highly extensible and trainable, but currently geared to the needs of the Google Print project (and so, while effective at scanning book pages, may be less so for more generic documents). Now in alpha release and incorporating the Tesseract OCR engine, the tool already has a lower error-rate than other Open Source OCR tools (though not than the professional tools, which often contain ad hoc code to deal with special cases). A beta release is set for April 2008, which will demo English, German, and Russian language versions, and release 1.0 is scheduled for Fall 2008. Breuel also briefly discussed the hOCR microformat for describing page layouts in a combination of HTML and CSS3.
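
To make the hOCR idea concrete, here is a minimal sketch (my illustration, not Breuel’s code) of extracting line text and page coordinates from an hOCR fragment. The sample markup is invented, but follows the format’s convention of recording layout as bounding boxes in title attributes:

```python
# hOCR embeds OCR results in ordinary HTML; each ocr_line span carries its
# page coordinates in a title attribute of the form "bbox x0 y0 x1 y1".
# The sample below is invented for illustration.
from html.parser import HTMLParser

SAMPLE_HOCR = """
<span class='ocr_line' title='bbox 36 92 580 122'>Arma virumque cano</span>
<span class='ocr_line' title='bbox 36 130 574 160'>Troiae qui primus ab oris</span>
"""

class HocrLines(HTMLParser):
    """Collect (bounding box, text) pairs from hOCR ocr_line spans."""
    def __init__(self):
        super().__init__()
        self.lines, self._bbox = [], None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if a.get("class") == "ocr_line" and a.get("title", "").startswith("bbox"):
            self._bbox = tuple(int(v) for v in a["title"].split()[1:5])

    def handle_data(self, data):
        if self._bbox and data.strip():
            self.lines.append((self._bbox, data.strip()))
            self._bbox = None

parser = HocrLines()
parser.feed(SAMPLE_HOCR)
for bbox, text in parser.lines:
    print(bbox, text)
```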

David Bamman gave the second in the “From Text to Information” sequence of papers, in which he discussed building a dynamic lexicon using automated syntax recognition, identifying the grammatical contexts of words in a digital text. With a training set of some thousands of words of Greek and Latin tree-banked by hand, auto-syntactic parsing currently achieves an accuracy rate somewhat above 50%. While the error rate is still too high for the automated process to be useful as an end in itself (to deliver syntactic tagging to language students, for example), it is good enough for testing against a human-edited lexicon, which provides a degree of control. Usage statistics and comparisons of related words and meanings give a good sense of the likely sense of a word or form in a given context.
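
To make the evaluation concrete, a minimal sketch (mine, not Bamman’s): if both the hand-made treebank and the automatic parse are reduced to a mapping from each token to the index of its syntactic head, parsing accuracy is simply the fraction of tokens whose heads agree. The sample data is invented:

```python
# Attachment accuracy: compare an automatic parse against a hand-treebanked
# gold standard, both represented as {token index: head index} (0 = root).
def attachment_accuracy(gold_heads, predicted_heads):
    """Fraction of tokens whose predicted head matches the treebank."""
    correct = sum(1 for tok, head in gold_heads.items()
                  if predicted_heads.get(tok) == head)
    return correct / len(gold_heads)

# "arma virumque cano": token 3 (cano) is the root; tokens 1 and 2
# depend on it in the hand-made gold standard.
gold = {1: 3, 2: 3, 3: 0}
predicted = {1: 3, 2: 1, 3: 0}  # the parser mis-attaches token 2

print(f"accuracy: {attachment_accuracy(gold, predicted):.0%}")  # 67%
```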

David Mimno completed the thread with a presentation on “From Information to Meaning: Machine Learning and Classification Techniques”. He discussed automated classification based on typical and statistical features (usually binary decisions: Is this email spam or not? Is this play a tragedy or a comedy?). Sequences of objects allow for a different kind of processing (for example spell-checking), including named entity recognition. Names need to be identified not only by their form but by their context, and machines do a surprisingly good job of identifying coreference and thus disambiguating between homonyms. A more flexible form of automatic classification is provided by topic modelling, which allows mixed classifications and does not require the definition of labels. Topic modelling is the automatic grouping of topics, keywords, components, and relationships by the frequency of clusters of words and references. This modelling mechanism is an effective means of organising a library collection by automated topic clusters, for example, rather than by a one-dimensional and rather arbitrary classmark system. Generating multiple connections between publications might be a more effective and more useful way to organise a citation index for Classical Studies than the outdated project that is l’Année Philologique.
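
For readers curious about the shape of such a classifier, here is a minimal sketch (mine, not Mimno’s) of the tragedy-or-comedy decision as a bag-of-words naive Bayes model; the toy training texts are invented, and the sketch assumes the scikit-learn library:

```python
# Statistical classification on bag-of-words features: is this play a
# tragedy or a comedy? The miniature "plays" are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

train_texts = [
    "death grief fate chorus lament",     # tragedy
    "wedding feast joke slave trickery",  # comedy
    "murder revenge blood oracle doom",   # tragedy
    "mistaken identity lovers banquet",   # comedy
]
train_labels = ["tragedy", "comedy", "tragedy", "comedy"]

vectorizer = CountVectorizer()
features = vectorizer.fit_transform(train_texts)
classifier = MultinomialNB().fit(features, train_labels)

# Classify an unseen "play" by its vocabulary.
unseen = vectorizer.transform(["oracle foretells doom and lament"])
print(classifier.predict(unseen))  # ['tragedy']
```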

Simon Overell gave a short presentation on his doctoral research into the distribution of location references within different language versions of Wikipedia. Using the tagged location links as disambiguators, and using the language cross-reference tags to compare across the collections, he uses the statistics compiled to analyse bias (in a supposedly Neutral Point-Of-View publication) and provide support for placename disambiguation. Overell’s work is in progress, and he is actively seeking collaborators who might have projects that could use his data.

In the afternoon there were two round-table discussions on the subjects of “Collections” and “Systems and Infrastructure” that I may report on later if my notes turn out to be usable.

Signs that social scholarship is catching on in the humanities

Friday, March 14th, 2008

By way of Peter Suber’s Open Access News:

Spiro, Lisa. “Signs that social scholarship is catching on in the humanities.” Digital Scholarship in the Humanities, March 11, 2008. http://digitalscholarship.wordpress.com/2008/03/11/signs-that-social-scholarship-is-catching-on-in-the-humanities/.

Spiro asks: “To what extent are humanities researchers practicing ‘social scholarship’ … embracing openness, accessibility and collaboration in producing their work?” By way of a provisional answer, she makes observations about “several [recent] trends that suggest increasing experimentation with collaborative tools and approaches in the humanities:”

  1. Individual commitment by scholars to open access
  2. Development of open access publishing outlets
  3. Availability of tools to support collaboration
  4. Experiments with social peer review
  5. Development of social networks to support open exchanges of knowledge
  6. Support for collaboration by funding agencies
  7. Increased emphasis on “community” as key part of graduate education

She also points to the “growth in blogging” and the proliferation of collaborative bibliographic tools.

Changing the Center of Gravity

Tuesday, March 4th, 2008

Changing the Center of Gravity: Transforming Classical Studies Through Cyberinfrastructure

http://www.rch.uky.edu/CenterOfGravity/

University of Kentucky, 5 October 2007

This is the full audio record of “Changing the Center of Gravity: Transforming Classical Studies Through Cyberinfrastructure”, a workshop funded by the National Science Foundation, sponsored by the Center for Visualization and Virtual Environments at the University of Kentucky, and organized by the Perseus Digital Library at Tufts University.

1) Introduction (05:13)
– Gregory Crane
(download this presentation as an mp3 file – 4.78 MB)

2) Technology, Collaboration, & Undergraduate Research (26:23)
– Christopher Blackwell and Thomas Martin, respondent Kenny Morrell
(download this presentation as an mp3 file – 24.1 MB)

3) Digital Criticism: Editorial Standards for the Homer Multitext (29:02)
– Casey Dué and Mary Ebbott, respondent Anne Mahoney
(download this presentation as an mp3 file – 26.5 MB)

4) Digital Geography and Classics (20:23)
– Tom Elliot, respondent Bruce Robertson
(download this presentation as an mp3 file – 18.6 MB)

5) Computational Linguistics and Classical Lexicography (39:16)
– David Bamman and Gregory Crane, respondent David Smith
(download this presentation as an mp3 file – 35.9 MB)

6) Citation in Classical Studies (38:34)
– Neel Smith, respondent Hugh Cayless
(download this presentation as an mp3 file – 35.3 MB)

7) Exploring Historical RDF with Heml (24:10)
– Bruce Robertson, respondent Tom Elliot
(download this presentation as an mp3 file – 22.1 MB)

8) Approaches to Large Scale Digitization of Early Printed Books (24:38)
– Jeffrey Rydberg-Cox, respondent Gregory Crane
(download this presentation as an mp3 file – 22.5 MB)

9) Tachypaedia Byzantina: The Suda On Line as Collaborative Encyclopedia (20:45)
– Anne Mahoney, respondent Christopher Blackwell
(download this presentation as an mp3 file – 18.9 MB)

10) Epigraphy in 2017 (19:00)
– Hugh Cayless, Charlotte Roueché, Tom Elliot, and Gabriel Bodard, respondent Bruce Robertson
(download this presentation as an mp3 file – 17.3 MB)

11) Directions for the Future (50:04)
– Ross Scaife et al.
(download this presentation as an mp3 file – 45.8 MB)

12) Summary (01:34)
– Gregory Crane
(download this presentation as an mp3 file – 1.44 MB)

CFP: DRHA 2008: New Communities of Knowledge and Practice

Monday, March 3rd, 2008

By way of a long string of reposts, originally to AHESSC:

Date: Fri, 29 Feb 2008 17:37:17 -0000
From: Stuart Dunn
To: AHESSC@JISCMAIL.AC.UK

CALL FOR PAPERS AND PERFORMANCES

Forthcoming Conference

DRHA 2008: New Communities of Knowledge and Practice

The DRHA (Digital Resources in the Humanities and Arts) conference is held annually at various academic venues throughout the UK. This year’s theme is to promote discussion around new collaborative environments, collective knowledge, and the redefinition of disciplinary boundaries. The conference, hosted by Cambridge with its fantastic choice of conference venues, will take place from Sunday 14th September to Wednesday 17th September.

The aim of the conference is to:

  • Establish a site for mutually creative exchanges of knowledge.
  • Promote discussion around new collaborative environments and collective knowledge.
  • Encourage and celebrate the connections and tensions within the liminal spaces that exist between the Arts and Humanities.
  • Redefine disciplinary boundaries.
  • Create a forum for debate around notions of the ‘solitary’ and the collaborative across the Arts and Humanities.
  • Explore the impact of the Arts and Humanities on ICT: design and narrative structures, and vice versa.

There will be a variety of sessions concerned with the above, but with a particular emphasis on interdisciplinary collaboration and theorising around practice. There will also be various installations and performances focussing on the same theme. Keynote talks will be given by our plenary speakers, who we are pleased to announce are Sher Doruff, Research Fellow (Art, Research and Theory Lectoraat) and Mentor at the Amsterdam School for the Arts; Alan Liu, Professor of English, University of California Santa Barbara; and Sally Jane Norman, Director of the Culture Lab, Newcastle University. In addition, there will be various round table discussions, together with a panel relating to ‘Second Life’ and a special forum, ‘Engaging research and performance through pervasive and locative arts projects’, led by Steve Benford, Professor of Collaborative Computing, University of Nottingham. Also planned is the opportunity for a more immediate and informal presentation of work in our ‘Quickfire’ style events. Whether papers, performances, or other formats, all proposals should reflect the critical engagement at the heart of DRHA.

Visit the website for more information and a link to the proposals website.

The deadline for submissions is 30 April 2008, and abstracts should be approximately 1000 words.

Cambridge’s venues range from the traditional to the contemporary all situated within walking distance of central departments, museums and galleries. The conference will be based around Cambridge University’s Sedgwick Site, particularly the West Road concert hall, where delegates will have use of a wide range of facilities including a recital room and a ‘black box’ performance space, to cater for this year’s parallel programming and performances.

Sue Broadhurst DRHA Programme Chair

Dr Sue Broadhurst
Reader in Drama and Technology, Head of Drama, School of Arts
Brunel University
West London, UB8 3PH
UK
Direct Line:+44(0)1895 266588 Extension: 66588
Fax: +44(0)1895 269768
Email: susan.broadhurst@brunel.ac.uk.

Rieger, Preservation in the Age of Large-Scale Digitization

Sunday, March 2nd, 2008

CLIR (the Council on Library and Information Resources in DC) have published in PDF the text of a white paper by Oya Rieger titled ‘Preservation in the Age of Large-Scale Digitization’. She discusses large-scale digitization initiatives such as Google Books, Microsoft Live, and the Open Content Alliance. This is more of a diplomatic/administrative than a technical discussion, with questions of funding, strategy, and policy looming larger than issues of technology, standards, or protocols, or the tension between depth and scale (all of which were questions raised during our Open Source Critical Editions conversations).

The paper ends with thirteen major recommendations, all of which are important and deserve close reading, and the most important of which is the need for collaboration, sharing of resources, and generally working closely with other institutions and projects involved in digitization, archiving, and preservation.

One comment hit especially close to home:

The recent announcement that the Arts and Humanities Research Council and Joint Information Systems Committee (JISC) will cease funding the Arts and Humanities Data Service (AHDS) gives cause for concern about the long-term viability of even government-funded archiving services. Such uncertainties strengthen the case for libraries taking responsibility for preservation—both from archival and access perspectives.

It is actually a difficult question to decide who should be responsible for long-term archiving of digital resources, but I would argue that this is one place where duplication of labour is not a bad thing. The more copies of our cultural artefacts that exist, in different formats, contexts, and versions, the more likely we are to retain some of our civilisation after the next cataclysm. This is not to say that coordination and collaboration are not desiderata, but that we should expect, plan for, and even strive for redundancy on all fronts.

(Thanks to Dan O’Donnell for the link.)