Archive for May, 2007

Job: Research Associate at Telota Initiative, BBAW

Thursday, May 24th, 2007

Alexander Czmiel posts the following job to the Digital Classicist list:

I would like to point you to an open position at the Telota-Initiative (“The electronic life of the Academy”) of the Berlin-Brandenburg Academy of Sciences and Humanities. Though the job advertisement is in German and knowledge of German is advantageous, the project language is not necessarily German.

Basic data:
– position: research associate
– duration: as soon as possible until 31 December 2008
– qualifications:
– knowledge in producing digital scholarly editions
– knowledge of XML and related technologies
– knowledge of a programming language

You can find an overview of our recent projects at

Applications should be addressed to Regina Reimann (

Please don’t hesitate to contact me ( for further information.

Computing Panel at European Association of Archaeologists

Thursday, May 24th, 2007

A Call for Proposals from Geoff Carver (seen on Antiquist). Send abstracts or suggestions to

I still need a few abstracts for a session I’ve organised for the European Association of Archaeologists conference, to be held in Zadar, Croatia in September; I’ve included my session abstract, and if you have any more questions, let me know.

Is Invention the Mother of Necessity?

Sometimes it seems like all the recent developments in computer applications for archaeology are technology-driven: increasingly realistic graphics, higher resolution cameras and scanners, new uses for existing software, etc.

At its worst, this approach can result in technology for its own sake: cool innovations that might impress the “geeks” and “nerds,” but don’t seem to take the real needs of archaeologists into consideration.

This session aims to turn things around by discussing not just what we can do with computers in archaeology, but what we would like to do, if the technology should someday become available. We want to discuss why we use computers – our aims and goals – and why some of us feel threatened not just by the machines we use, but also by the jargon that surrounds them.

Ultimately, the goal is to begin addressing the apparent paradox that – although in some ways archaeologists escape the modern world by retreating into the past – we still study the past largely in terms of technological changes (stone, bronze, iron ages, etc.), without necessarily understanding the relationships between technology and modern archaeology.

This is a valuable discussion which touches on the perennial question Digital Humanists face about whether the “digital” or the “humanities” drives our research. On the one hand, we must never lose touch with the fact that we are scholars in a humanistic discipline (be that Classics, Archaeology, History, or whatever), and that the history and expectations of scholarship in that discipline must be at the forefront of our endeavors. On the other hand, it is generally not the classicists or archaeologists who invent new technologies, but either disciplines with better funding than ours, or science, medicine, and industry. It would be irresponsible of us not to borrow and build upon these technologies as they become available, so it is inevitable that digital technology (and the expertise of information scientists) will to some extent also drive developments in humanistic scholarship. Where we allow the balance to be drawn will decide the future of our disciplines.

BASP licensing

Thursday, May 17th, 2007

In a post yesterday I complained that I couldn’t find a license statement associated with the newly announced online version of the Bulletin of the American Society of Papyrologists. Gregg Schwendner had a look and found it.

The table-of-contents view for each issue (e.g., no. 25 2006) includes the following statement:

Permission must be received for any subsequent distribution in print or electronically. Please contact for more information.

I still think that this information should also appear at the level of the individual article (e.g., John Oates, “Sale of a Donkey”), and that the sitewide text access policy page should include some general guidance.

BASP goes open access … or something

Wednesday, May 16th, 2007

I was filled with glee when I saw Chuck Jones’ post on Blegen Library News to the effect that the Bulletin of the American Society of Papyrologists is now “online open access.”

I eagerly clicked through to the BASP holdings (actually served up by the University of Michigan Digital Library Production Service). I found that, yes, all but the two most recent issues are available as page images (not online fulltext, but raw text can be downloaded from the “bookbag”).

I salute the appearance of this resource; however, I’m troubled by the absence of a license statement. What am I allowed to do with these documents? Can I copy them to give to students? Can I put them into my institution’s digital repository?

The site-wide “text access policy” is somewhat less than enlightening:

Access policy:

Coming soon

The help page for the “holdings display” presents a non-BASP example in which there is a full statement of “availability.” No such statements accompany the BASP articles I put in my bookbag.

Dear editors of BASP: thanks for making this resource available online. Please clarify the rights and licensing for us, and make this information part of the metadata records associated with the online version.

Dear UMDL and UMich Scholarly Publishing Office Management: thanks for this valuable suite of services and tools. Are licensing and availability left solely to the discretion of contributors, with no business rules to prevent surfacing of content that lacks an availability statement? If so, it would be very helpful to see an explanation of this policy replacing the “coming soon” notice pointed out above, or — better yet — incorporated in some basic but ubiquitous form throughout the interface.

It is actually hard to find online a good primer on why rights and licenses are important things to address when placing content online. For the moment, I’ll just cite the following …

The NISO Framework Advisory Group’s A Framework of Guidance for Building Good Digital Collections (2nd edition, 2004) is endorsed by the Digital Library Federation (of which the University of Michigan is a member). The framework clearly states (emphasis mine):

Collections principle 5: A good collection respects intellectual property rights. Collection managers should maintain a consistent record of rightsholders and permissions granted for all applicable materials.

Intellectual property law must be considered from several points of view: what rights the owners of the original source materials retain in their materials, what rights or permissions the collection developers have to digitize content and make it available, what rights collection owners have in their digital content, and what rights or permissions the users of the digital collection have to make subsequent use of the materials.

New Project – Image, Text, Interpretation: e-Science, Technology and Documents

Tuesday, May 15th, 2007

Oxford University and UCL are pleased to announce the recent funding of a new project to aid scholars in reading and interpreting ancient texts. Three-year funding, including one additional PhD studentship, has been secured from the AHRC-EPSRC-JISC Arts and Humanities e-Science Initiative to work on new computational tools and techniques to aid papyrologists and palaeographers in their complex task.

Original documents are primary, often unique, resources for scholars working in literature, history, archaeology, language and palaeography of all periods and cultures. The complete understanding and interpretation of textual documents is frequently elusive because of damage or degradation, which is generally more severe the older the document. Building on successful earlier research at the Centre for the Study of Ancient Documents (Professor Alan Bowman) and Engineering Science (Professor Sir Mike Brady) at Oxford University, in collaboration with UCL SLAIS (Dr Melissa Terras), this project aims to construct a signal-to-symbol system that will aid scholars in propagating interpretations of texts through a combination of image processing, computational interactive reasoning under uncertainty, the provision of tools to construct datasets of palaeographical information for dissemination in the research community, and the provision of training methods and resources in the application of e-science technology to texts and documents.

The project will begin in October 2007, with studentship and postdoc details to be posted shortly.

Open Source Critical Editions Workshop

Tuesday, May 15th, 2007

Gabriel Bodard and Juan Garces have reported on the aims, discussions and conclusions of the Open Source Critical Editions Workshop. This is published on the AHRC ICT Methods Network site:

The workshop addressed the technological, legal and administrative issues that digital critical editions present:

The Open Source Critical Editions Workshop was held on 22 September 2006 at the Centre for Computing in the Humanities, King’s College London under the auspices of the AHRC ICT Methods Network; the meeting was also supported in part by the Perseus Project and the Digital Classicist (1). This workshop was set up with the aim of exploring the possibilities, requirements for, and repercussions of a new generation of digital critical editions of Greek and Latin texts with underlying code made available under an open license such as Creative Commons or GPL (2). This topic broached many technological, legal, and administrative issues, and the participants were selected for their interest and/or expertise in these areas, and asked to consider how such editions advance classical philology as a whole, both in terms of the internal value to the subject itself, and in terms of outreach, interdisciplinarity, and the value of philology to the wider world outside the academy.

Technological questions discussed at this event included: the status of open critical editions within a repository or distributed collection of texts; the need for and requirements of a registry to bind together and provide referencing mechanisms for such texts (the Canonical Text Services protocols being an obvious candidate for such a function (3)); the authoritative status of this class of edition, whether edited by a single scholar or collaboratively; and the role of e-Science and Grid applications in the creation and delivery of editions.
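To make the referencing mechanism concrete: a Canonical Text Services URN packs a text inventory, a work hierarchy, and a passage citation into one machine-actionable string. The sketch below is illustrative only (it follows the published CTS URN examples, assumes a passage-bearing URN, and is not part of any actual CTS implementation):

```python
def parse_cts_urn(urn: str) -> dict:
    """Split a CTS URN, e.g. 'urn:cts:greekLit:tlg0012.tlg001:1.1',
    into its namespace, work, and passage components."""
    scheme, proto, namespace, work, passage = urn.split(':')
    if (scheme, proto) != ('urn', 'cts'):
        raise ValueError('not a CTS URN: %s' % urn)
    return {
        'namespace': namespace,         # text inventory, e.g. greekLit
        'work': work.split('.'),        # textgroup, work, (optional) edition
        'passage': passage.split('.'),  # hierarchical citation, e.g. book.line
    }
```

A registry built on identifiers like these can resolve any citation to the editions that contain it, which is exactly the binding role the workshop discussed.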

Legal issues largely revolved around the question of copyright and licensing: what status should the data behind digital critical editions have? It was an assumption of this group that source texts should be both Open Source and Public Domain, but the specifics remain to be discussed. Attribution of scholarship is clearly desirable, but the automatic granting of permission to modify and build upon scholarly work is also essential. There were also questions regarding the classical texts upon which such editions are based: what is the copyright status of a recently published critical edition of a text or manuscript that the editor of a new edition needs to incorporate?

Administrative questions posed by open critical editions included: issues of workflow and collaboration (in which Ross Scaife of the Stoa Consortium has considerable experience, for example through the Suda Online and other projects (4)); protocols for publication and reuse of source data: a genealogy of reuse and citation could be generated using version control tools, or a system of passive link-back generating an automatic citation index through a web search engine. Issues of peer review and both pre- and post-publication validation of scholarship were also discussed.

Papers on critical editions, technologies and protocols were presented, and the dichotomies existing within digital publication of texts were explored. These included the scale of projects and the detail in which data should be presented; the types of edition that should be made available digitally; and to what extent classical scholars need to create their own tools rather than using those that are already used in, and funded by, other fields.

The authors conclude that

while the format of and contributions to the workshop were a success, there is still a need to develop and test further some of the discussions initiated at the workshop, before undertaking a large, collaborative, and international project.

Encyclopedia of Life

Tuesday, May 15th, 2007

The New Scientist this week reports on the Encyclopedia of Life, a new, massive, collaborative, evolving resource to catalogue the 1.8 million known species of life on the planet. Although this is a biology resource and so, for example, has access to greater funding sources than most of us in the humanities dream of (E. O. Wilson has apparently already raised $50 million in set-up funds), a lot of the issues of collaborative research and publication, of evolving content, of citability, of authority, of copyright, of free access, and of winning the engagement of the research community as a whole are exactly the same as we face. It would serve us well to watch how this resource develops.

It is a truism that we can learn a lot from the way scientists conduct their research, as they are better-funded than we are. But, dare I say it, the builders of this project could also do worse than to consult and engage with digital humanists who have spent a lot of time thinking about innovative and robust solutions to these problems in ways that scientists have not necessarily had to.

Text Mining for Historians (workshop)

Monday, May 14th, 2007

Just announced by the Methods Network:

TEXT MINING FOR HISTORIANS workshop at University of Glasgow 17 – 18 July 2007

A workshop organized by Zoe Bliss, AHDS History and the Association for History and Computing UK (AHC-UK)

Texts are central to historical research, and an increasing body of historical texts is becoming available in electronic format. Despite a long-standing interest in computer-aided text analysis, the use of computer-assisted methods and tools is not widespread amongst historians.

This workshop aims to:

  • Introduce participants to the methods and tools developed and currently employed by corpus linguists
  • Provide practical hands-on experience of using these tools
  • Enable participants to explore the pros and cons of employing these tools and methods in historical research.

It builds upon the successful Methods Network Workshop on Historical Text Mining in Lancaster in July 2006 (

The workshop is aimed at academic staff and postgraduates whose research involves the analysis of significant bodies of textual material and who would like to know more about computerised techniques and tools that they could potentially use to aid their research. Moreover, the workshop will be particularly useful for researchers who would like practical hands-on experience of using these tools. The workshop is free of charge, with lunch and refreshments included.
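For readers wondering what the corpus linguists’ methods mentioned above look like in practice, the classic starting point is a keyword-in-context (KWIC) concordance: every occurrence of a search term displayed with a fixed window of surrounding text. A minimal sketch (purely illustrative; not any of the workshop’s actual tools):

```python
import re

def kwic(text: str, keyword: str, width: int = 30):
    """Return one aligned keyword-in-context line per match of `keyword`."""
    rows = []
    for m in re.finditer(r'\b%s\b' % re.escape(keyword), text, re.IGNORECASE):
        left = text[max(0, m.start() - width):m.start()]
        right = text[m.end():m.end() + width]
        # Pad so the keywords line up in a vertical column
        rows.append(f"{left:>{width}} [{m.group()}] {right:<{width}}")
    return rows
```

Run over a digitised charter collection, say, this kind of display lets a historian scan hundreds of usages of a term at a glance, which is where questions about collocation and changing usage begin.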

For more information about the programme, and details of how to register, please visit:

EpiDoc Summer School, 11-15 June, 2007

Sunday, May 13th, 2007

Over the last few years an international group of scholars has been developing a set of conventions for marking up ancient documents in XML for publication and interchange. The EpiDoc Guidelines started from the case of inscriptions, but the principles are also being applied to papyri and coins, and the aim has always been to produce standards consistent with those of the Text Encoding Initiative, used for all literary and linguistic texts.

Following on from the interest we have seen in EpiDoc training events (including recent sessions in Rome and San Diego) and the success of the London EpiDoc Summer School over several years now, we shall be holding another week-long workshop here at King’s College London, from the 11th-15th June this year.

* The EpiDoc Guidelines provide a schema and associated tools and recommendations for the use of XML to publish epigraphic and papyrological texts in interchangeable format. For a fuller description of the project and links to tools and guidelines see
* The Summer School will offer an in-depth introduction to the use of XML and related technologies for publication and interchange of epigraphic and papyrological editions.
* The event will be hosted by the Centre for Computing in the Humanities, King’s College London, which will provide the venue and tuition. The school is free of charge, but attendees will need to fund their own travel, accommodation, and subsistence. (There may be cheap accommodation available through KCL; please inquire.)
* The summer school is targeted at epigraphic and papyrological scholars (including professors, post-docs, and advanced graduate students) with an interest and willingness to learn some of the hands-on technical aspects necessary to run a digital project (even if they would not be marking-up texts by hand very much themselves). Knowledge of Greek/Latin, the Leiden Conventions and the distinctions expressed by them, and the kinds of data and metadata that need to be recorded by philologists and ancient historians, will be an advantage. Please enquire if you’re unsure. No particular technical expertise is required.
* Attendees will require the use of a relatively recent laptop computer (Win XP+ or Mac OSX 10.3+), with up-to-date Java installation, and should acquire a copy of the oXygen XML editor (educational discount and one-month free trial available); they should also have the means to enter Unicode Greek from the keyboard. Full technical specifications and advice are available on request. (CCH may be able to arrange the loan of a prepared laptop for the week; please inquire asap.)
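To give prospective attendees a flavour of what EpiDoc markup involves, Leiden conventions map onto TEI elements: text lost but restored by the editor becomes <supplied reason="lost">, and an unrestorable lacuna becomes <gap>. The sketch below (in Python, for illustration; it is not schema-validated and the authoritative element set is the Guidelines themselves) encodes a fragment on the pattern “[Caes]ar fecit [...]”:

```python
import xml.etree.ElementTree as ET

def leiden_to_epidoc() -> str:
    """Encode '[Caes]ar fecit [...]' along EpiDoc/TEI lines."""
    ab = ET.Element('ab')                       # anonymous block of text
    supplied = ET.SubElement(ab, 'supplied', reason='lost')
    supplied.text = 'Caes'                      # restored by the editor
    supplied.tail = 'ar fecit '                 # text read on the stone
    ET.SubElement(ab, 'gap', reason='lost', extent='unknown')
    return ET.tostring(ab, encoding='unicode')
```

The point of the exercise is that the brackets and dots of Leiden become explicit, queryable semantics, which is what makes interchange between projects possible.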

Places on the workshop will be limited so if you are interested in attending the summer school, or have a colleague or student who might be interested, please contact as soon as possible with a brief statement of qualifications and interest.

Google Earth with Audio

Sunday, May 13th, 2007

This interesting post over at New Scientist Tech:

Bernie Krause has spent 40 years collecting over 3500 hours of sound recordings from all over the world, including bird and whale song and the crackle of melting glaciers. His company, Wild Sanctuary in Glen Ellen, California, has now created software to embed these sound files into the relevant locations in Google Earth. Just zoom in on your chosen spot and listen to local sounds.

“Our objective is to bring the world alive,” says Krause. “We have all the continents of the world, high mountains and low deserts.”

He hopes it will make virtual visitors more aware of the impact of human activity on the environment in the years since he began making and collecting the recordings. Users will be able to hear various modern-day sounds at a particular location, then travel back in time to compare them with the noises of decades gone by.

This is more than just a cool mashup of sounds with locations; the idea has repercussions in all sorts of departments, not least technical. At the end of the NS article is a note:

Another project, called Freesound, is making contributors’ sound files available on Google Earth. Unlike these recordings, Krause’s sound files are of a consistent quality and enriched with time, date and weather information.

Freesound is a Creative Commons site and more interesting from the Web 2.0 perspective, as content is freely user-generated. What is exciting is the way that sites can now make all sorts of media available through georeferences in Google Earth/Maps (as, for example, the Pleiades Project is doing with classical sites). The question will be how such rich results are filtered: will Google provide overlays that filter by more than just keywords, or will third-party sites (like Wild Sanctuary and Pleiades) need to create web services that take advantage of the open technologies but provide their own filters? (Tom can probably answer these questions already…)
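Mechanically, attaching media to a location in Google Earth comes down to publishing KML: a placemark carries coordinates plus a description balloon, and the balloon can link out to the media file. A minimal sketch (the place name and audio URL are hypothetical; real overlays like Wild Sanctuary’s will be far richer):

```python
import xml.etree.ElementTree as ET

KML_NS = "http://www.opengis.net/kml/2.2"

def sound_placemark(name: str, lon: float, lat: float, audio_url: str) -> str:
    """Build a minimal KML Placemark whose balloon links to a sound file."""
    ET.register_namespace('', KML_NS)
    kml = ET.Element(f'{{{KML_NS}}}kml')
    pm = ET.SubElement(kml, f'{{{KML_NS}}}Placemark')
    ET.SubElement(pm, f'{{{KML_NS}}}name').text = name
    desc = ET.SubElement(pm, f'{{{KML_NS}}}description')
    desc.text = f'<a href="{audio_url}">Listen to this location</a>'
    point = ET.SubElement(pm, f'{{{KML_NS}}}Point')
    # KML coordinates are longitude,latitude,altitude
    ET.SubElement(point, f'{{{KML_NS}}}coordinates').text = f'{lon},{lat},0'
    return ET.tostring(kml, encoding='unicode')
```

Anything that can serve a file like this over the web can surface its content in Google Earth, which is precisely what makes the filtering question above pressing.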

Google Books Pushback

Thursday, May 10th, 2007

Robert B. Townsend’s post on the AHA blog is getting noticed in various places. His complaints center around poor scanning quality, shoddy metadata, and not providing full access to Public Domain works.

While it’s true Google is structuring their business around organizing the world’s information, it is worth keeping in mind that they are doing so as a business centered on search. Their views on cost effectiveness vs. error rate are not going to be the same as a scholar’s or librarian’s views. The conclusion may be that while Google will be very good at finding things, they won’t be so good about keeping them. Libraries and other institutions concerned with digital preservation and access are therefore not excused from taking charge of those tasks.

New work on Johannes Tinctoris

Wednesday, May 9th, 2007

Ronald Woodley, Professor of Music in the Department of Research at the Birmingham Conservatoire, University of Central England, has substantially upgraded his site here at the Stoa on Johannes Tinctoris, prominent music theorist of the Renaissance. The project includes a Biographical Outline, a Work List, a Bibliography, original Latin treatises by Tinctoris, a selection of scholarly articles, and numerous sound files and graphical aids.

Image [&] Narrative issue on The Digital Archive

Wednesday, May 9th, 2007

The April 2007 issue of the online journal Image [&] Narrative is devoted to ‘The Digital Archive’. The table of contents includes:

Georg Vogeler on digital tools, visualisation, and history

Tuesday, May 8th, 2007

An interview posted in the Methods Network Digital Historian forum, with an open invitation to participate in the discussion there.

There are lots of historians who would say: OK, it’s good to write e-mail and very convenient to use the OPAC for requesting books from library stacks, but who would never admit that the computer has changed historical research. Theo Kölzer, who isn’t explicitly arguing against the use of computers for historical research, recently said in his presentation at the digital diplomatics conference: the tools have changed, but no new diplomatics has emerged; the tools only make the “Diktatvergleich” of Theodor Sickel easier.

The discussion goes on much longer, and I haven’t read it all yet, but it looks to be worth following and engaging with.

Stop teaching historians to use computers!

Tuesday, May 8th, 2007

Bill Turkel has started what looks to be an important and potentially influential thread on the nexus of history and the digital. His opening salvo:

Teaching history students how to use computers was a really good idea in the early 1980s. It’s not anymore. Students who were born in 1983 have already graduated from college. If they didn’t pick up the rudiments of word processing and spreadsheet and database use along the way, that’s tragic. But if we concentrate on teaching those things now, we’ll be preparing our students for the brave new world of 1983.

Posts so far:

Creative Commons and research

Tuesday, May 8th, 2007

A post on the Creative Commons blog draws together four articles on the value of Creative Commons licensing for newspapers, scientists, film students, and Wikipedia “SEOers” respectively. All are worth reading, but it is the article on scientists that is of most interest here. This article, posted at ScienceBlogs on 1st May by Rob Knop, makes the case that:

Scientists do not need, and indeed should not have, exclusive (or any) control over who can copy their papers, and who can make derivative works of their papers.

The very progress of science is based on derivative works! It is absolutely essential that somebody else who attempts to reproduce your experiment be able to publish results that you don’t like if those are the results they have. Standard copyright, however, gives the copyright holders of a paper at least a plausible legal basis on which to challenge the publication of a paper that attempts to reproduce the results— clearly a derivative work!

I would extend this argument (and indeed have done so repeatedly and vocally) to assert that it applies equally to all academic research, including the Humanities. This is a key part of the philosophy behind the Open Source Critical Editions network that I helped convene last year. All published research includes the requirement to publish the “source code” (by way of citations, arguments, primary and secondary references, retraceable argumentation), and the expectation that others will use this “source” to verify, reproduce, modify, or refute your work. Copyright, and especially digital copyright and crippleware, should not be allowed to get in the way of this process, because without this freedom a publication cannot be considered research.

Fair use and blogs

Wednesday, May 2nd, 2007

This post seen in the Creative Commons blog is relevant to the discussion we had here a few weeks ago about copyright and blogs. What is the status of all these copy-n-pasted paragraphs below?

Copyright and fair use in the blogosphere

April 30th, 2007 by Kaitlin Thaney

A recent incident in the blogosphere has sparked a discussion on the role of copyright and fair use laws in the digital world.

Last week, Shelley Batts – a PhD student – was accused of a fair use violation for pulling a figure and a chart from a scientific paper to post on her blog. Soon after Batts posted the data on her site, she received a cease-and-desist letter via e-mail from lawyers from the Journal of the Science of Food and Agriculture, a journal owned by John Wiley. The representative who contacted her accused her of violating fair use by reproducing the material from the journal on her blog. Batts soon took down the figures, reproduced the data in an Excel format, and avoided legal penalty.

Her experience raises a larger question, though. In the world of blogging where cutting and pasting is common practice, how do copyright and fair use laws apply? Katherine Sharpe addressed this very question on ScienceBlogs, calling on Springer Publishing’s Johannes Velterop and Science Commons’ John Wilbanks to comment.

The full article at the (cc) Science Commons blog.

Junicode update released

Wednesday, May 2nd, 2007

Keeping with the Unicode/font theme, here is the announcement of the latest release of the useful Junicode font package. Although it focuses on medieval characters (correction: and *does* now cover polytonic Greek), it has very good coverage of Latin, symbols, and ligatures, as well as Runic, etc.

Junicode 0.6.13 is now available. Here are the release notes:

This release continues to add characters from the MUFI recommendation to benefit medievalists. Many messy outlines have been cleaned up, improving efficiency and reducing the likelihood of bugs. Most of the goodies in this release are for users of OpenType-aware programs such as InDesign and XeTeX: the OpenType features list has been thoroughly worked over and rationalized, and consistency imposed across all four faces (though it is still true that there are more OpenType features in Regular than in the other three). Use of ccmp, mark and mkmk has been greatly expanded, making use of combining diacritics more practical than before. Many MUFI glyphs have been made accessible via OpenType features, especially ccmp (for glyph+diacritic combination) and hlig (Historical Ligatures). Fractions, Roman numbers, subscripts and the various “Enclosed Alphanumerics” have been made accessible as ligatures (either liga, Standard Ligatures, or dlig, Discretionary Ligatures).

Type Greek

Tuesday, May 1st, 2007

Type Greek is a web-based software tool that converts text from a standard keyboard into beautiful, polytonic Greek characters as you type. Using an easy-to-learn and standardized system called beta code, TypeGreek converts your keystrokes into Unicode-compliant Greek in real-time… The TypeGreek code is released under a Creative Commons license, so you are free to download it, modify it, or host it on your own site.
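The beta code idea is simple enough to sketch: each ASCII keystroke maps to a Greek letter or a combining diacritic, and Unicode normalization then composes the result into precomposed characters. The toy converter below is my own illustration, not TypeGreek’s code (which handles far more, including capitals, punctuation, and the full diacritic repertoire):

```python
import re
import unicodedata

# Basic beta code letter map (lowercase only; a sketch, not the full standard)
LETTERS = {
    'a': 'α', 'b': 'β', 'g': 'γ', 'd': 'δ', 'e': 'ε', 'z': 'ζ',
    'h': 'η', 'q': 'θ', 'i': 'ι', 'k': 'κ', 'l': 'λ', 'm': 'μ',
    'n': 'ν', 'c': 'ξ', 'o': 'ο', 'p': 'π', 'r': 'ρ', 's': 'σ',
    't': 'τ', 'u': 'υ', 'f': 'φ', 'x': 'χ', 'y': 'ψ', 'w': 'ω',
}
# Diacritics are typed after the vowel and map to combining characters
DIACRITICS = {
    '/': '\u0301',   # acute
    '\\': '\u0300',  # grave
    '=': '\u0342',   # circumflex (perispomeni)
    ')': '\u0313',   # smooth breathing
    '(': '\u0314',   # rough breathing
    '|': '\u0345',   # iota subscript
}

def beta_to_greek(text: str) -> str:
    out = []
    for ch in text:
        if ch in LETTERS:
            out.append(LETTERS[ch])
        elif ch in DIACRITICS:
            out.append(DIACRITICS[ch])
        else:
            out.append(ch)
    s = ''.join(out)
    s = re.sub(r'σ\b', 'ς', s)              # word-final sigma
    return unicodedata.normalize('NFC', s)  # compose precomposed glyphs
```

So typing `lo/gos` yields λόγος: the `/` becomes a combining acute on the omicron, and NFC normalization folds the pair into the single precomposed character a font expects.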

Social Scholarship on the Rise

Tuesday, May 1st, 2007

As an academic librarian, I’ve been trying to get a handle on the emerging parameters of social scholarship. This is the practice of scholarship in which the use of social tools is an integral part of the research and publishing process. The process gains a number of characteristics, including openness, conversation, collaboration, access, sharing and transparent revision.

In this entry, I’m going to paint an idealized picture of this process, gathering together both observations and speculations. I’m not suggesting that any one individual would do all of these things. I’m just looking at the options – or better yet, the opportunities. This list is by no means comprehensive, but rather a starting point toward considering practices of scholarship that reflect a 2.0 social mindset and make use of 2.0 social tools.

Worth a look.