Sunoikisis is a successful national consortium of Classics programs developed by Harvard’s Center for Hellenic Studies. The goal is to extend Sunoikisis to a global audience and to contribute to it through an international consortium of Digital Classics programs (Sunoikisis DC). Sunoikisis DC is based at the Alexander von Humboldt Chair of Digital Humanities at the University of Leipzig. The aim is to offer collaborative courses that foster interdisciplinary paradigms of learning. Master’s students of both the humanities and computer science are welcome to join the courses and work together by contributing to digital classics projects in a collaborative environment.
Archive for the ‘Projects’ Category
The Digital Latin Library, a joint project of the Society for Classical Studies, the Medieval Academy of America, and the Renaissance Society of America, with funding from the Andrew W. Mellon Foundation, announces a seminar on Latin textual criticism in the digital age. The seminar will take place on the campus of the University of Oklahoma, the DLL’s host institution, on June 25–26, 2015.
We welcome proposals for papers on all subjects related to the intersection of modern technology with traditional methods for editing Latin texts of all eras. Suggested topics:
- Keeping the “critical” in digital critical editions
- The scholarly value of editing texts to be read by humans and machines
- Extending the usability of critical editions beyond a scholarly audience
- Visualizing the critical apparatus: moving beyond a print-optimized format
- Encoding different critical approaches to a text
- Interoperability between critical editions and other digital resources
- Dreaming big: a wishlist of features for the optimal digital editing environment
Of particular interest are proposals that examine the scholarly element of preparing a digital edition.
The seminar will be limited to ten participants. Participants will receive a stipend, and all travel and related expenses will be paid by the DLL.
Please send proposals of no more than 650 words to Samuel J. Huskey at email@example.com by December 1, 2014. Notification of proposal status will be sent in early January.
Posted for Ian Ruffell.
(The post can be found on the University of Glasgow vacancies page (search under the College of Arts): http://www.gla.ac.uk/about/jobs/vacancies/. More details about the project are available here: http://www.gla.ac.uk/schools/humanities/research/classicsresearch/researchprojects/heroandhisautomata/.)
Reference Number: 009086
Closing date: August 24, 2014
Location: Gilmorehill Campus / Main Building
College / Service: College of Arts
Department: School of Humanities
Job Family: Research and Teaching
Position Type: Full Time
Salary Range: £32,590 – £36,661
This post is part of the project ‘Hero of Alexandria and his Theatrical Automata’, funded by the Leverhulme Trust (PI: Dr Ian Ruffell, School of Humanities; Co-I: Dr Euan McGookin, School of Engineering). Based in the University of Glasgow (Classics, School of Humanities), the project runs from 1 October 2014 to 30 September 2017. The project investigates Hero of Alexandria’s treatise on the making of automata, and will design, build and test the models described in that work. The post is full-time and available for 36 months from 1 October 2014. The post holder will prototype, build and test versions of the automata, working in collaboration with the rest of the project team on technical analysis of the text. The successful candidate will i) use 3D modelling (training will be provided) and rapid prototyping equipment to explore possible designs of the automata; ii) with the aid of technicians in the School of Engineering, build full-scale working models of the automata; iii) combine practical data with textual and contextual elements in the project website; iv) test the scope and limitations of the models in performance in dialogue with practitioners and audiences.
The Suda On Line: Byzantine Lexicography, affectionately known as SOL and one of the host of innovative projects founded by Ross Scaife and colleagues, has now reached the remarkable milestone of 100% translation coverage.
A translation of the last of the Suda’s 31,000+ entries was submitted to the database on July 21, 2014 and vetted the next day. This milestone is very gratifying, but the work of the project is far from over. As mentioned above, one of the founding principles of the project is that the process of improving and annotating our translations will go on indefinitely. Much important work remains to be done. We are also constantly thinking of ways to improve SOL’s infrastructure and to add new tools and features. If you are interested in helping us with the continuing betterment of SOL, please read about how you can register as an editor and/or contact the managing editors. (http://www.stoa.org/sol/history.shtml)
Although I was never involved in this project myself, I often use SOL as an example and case study in my teaching. With so much discussion nowadays about so-called ‘crowdsourcing’ and ‘community-sourcing’, SOL is surely the forerunner.
From (Print) Encyclopedia to (Digital) Wiki
According to Denis Diderot and Jean le Rond d’Alembert the purpose of an encyclopedia in the 18th century was ‘to collect knowledge disseminated around the globe; to set forth its general system to the people with whom we live, and transmit it to those who will come after us, so that the work of preceding centuries will not become useless to the centuries to come’. Encyclopedias have existed for around 2,000 years; the oldest is in fact a classical text, Naturalis Historia, written ca 77 CE by Pliny the Elder.
Following the (recent) digitization of raw data, new, digital forms of the encyclopedia have emerged. In our own digital era, a wiki is a broader, electronic encyclopedia that is open to contributions and edits by interested parties. It contains concept analyses, images, media, and so on, and it is freely available, thus making the creation, recording, and dissemination of knowledge a democratised process, open to everyone who wishes to contribute.
A Sprint for Digital Classicists
For us Digital Classicists, scholars and students interested in the application of humanities computing to research in the ancient and Byzantine worlds, the Digital Classicist Wiki is a hub composed and edited by scholars and students in the field. This wiki collects guidelines and suggestions on major technical issues, and catalogues digital projects and tools of relevance to classicists. The wiki also lists events, bibliographies and publications (print and electronic), and other developments in the field. A discussion group serves as grist for a list of FAQs. As members of the community provide answers and other suggestions, some of these may evolve into independent wiki articles providing work-in-progress guidelines and reports. The scope of the wiki follows the interests and expertise of collaborators, in general, and of the editors, in particular. The Digital Classicist is hosted by the Department of Digital Humanities at King’s College London, and the Stoa Consortium, University of Kentucky.
So how did we end up editing this massive piece of work? On Tuesday 1 July 2014, at around 16:00 GMT (or 17:00 CET), a group of interested parties gathered on several digital platforms. The idea was that most of the action would take place in the DigiClass chatroom on IRC, our very own channel called #digiclass. Alongside the traditional chat window, there was also a Skype voice call to get us started and to discuss approaches before editing. On the side, we had a Google Doc where people simultaneously added what they thought should be improved or created. I was very excited to interact with members old and new. It was a fun break during my mini trip to the Netherlands and, as it proved, very much in keeping with the general attitude of the Digital Classicist team: knowledge is open to everyone who wishes to learn, and can be the outcome of a joyful collaborative process.
The Technology Factor
As a researcher of digital history, and I suppose most information systems scholars would agree, I believe technology is never neutral in the process of ‘making’. The magic of the wiki lies in the fact that it is a rather simple platform that can be easily tweaked. All users were invited to edit any page, or to create new pages within the wiki website, using only a regular web browser without any extra add-ons. A wiki makes page-link creation easy by showing whether an intended target page exists or not. It enables communities to write documents collaboratively, using a simple markup language and a web browser. A single page in a wiki website is referred to as a ‘wiki page’, while the entire collection of pages, which are usually well interconnected by hyperlinks, is ‘the wiki’. A wiki is essentially a database for creating, browsing, and searching through information. It allows non-linear, evolving, complex and networked text, argument and interaction. Edits can be made in real time and appear almost instantly online, which can facilitate abuse of the system; private wiki servers (such as the Digital Classicist one), however, require user identification to edit pages, making the process somewhat more controlled. Most importantly, as researchers of the digital we understood in practice that a wiki is not a carefully crafted site for casual visitors. Instead, it seeks to involve the visitor in an ongoing process of creation and collaboration that constantly changes the website landscape.
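The link mechanism mentioned above is easy to illustrate. Here is a toy sketch in Python (my own illustration, not MediaWiki’s actual implementation) that scans wiki markup for [[…]] links and flags those whose target page does not yet exist, the way a wiki renders ‘red links’; the page names are invented for the example:

```python
import re

# Pages that already exist in our toy wiki (hypothetical example data).
existing_pages = {"EpiDoc", "Pleiades"}

def classify_links(wikitext):
    """Return (target, exists) for every [[Target]] or [[Target|label]]
    link found in the wiki markup."""
    targets = re.findall(r"\[\[([^\]|]+)(?:\|[^\]]*)?\]\]", wikitext)
    return [(t, t in existing_pages) for t in targets]

markup = "See [[EpiDoc]] and the new [[SNAP:DRGN|SNAP]] project."
print(classify_links(markup))  # SNAP:DRGN has no page yet: a "red link"
```

This is exactly the affordance that made the sprint productive: a contributor can see at a glance which project pages are still missing and create them on the spot.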
Where Technology Shapes the Future of Humanities
In terms of human resources, some participants with little prior involvement in the Digital Classicist community got themselves involved in several tasks, including correcting pages, suggesting new projects, adding pages to the wiki, helping others with information and background, and approaching project owners and leaders to suggest adding or improving information. Collaboration, a practice usually reserved for science scholars, made the process easier and intellectually stimulating. Moreover, within these overt cyber-spaces of ubiquitous interaction one could identify a strong sense of productive diversity within our own scholarly community; it was visible both in the IRC chat channel and over Skype. Scholars with different accents and spellings, British, American, and several continental, gathered to expand this incredibly fast-paced process. There was a need to address research projects, categories, and tools found in non-English-speaking academic cultures. As a consequence of this multivocal procedure, more interesting questions arose, not least methodological ones: ‘What projects are defined as digital, really?’, ‘Isn’t everything a database?’, ‘What is a prototype?’, ‘Shouldn’t there be a special category for dissertations, or visualisations?’. The beauty of collaboration in all its glory, plus expanding our horizons with technology! And so much fun!
MediaWiki recorded almost 250 changes made on 1 July 2014!
The best news, however, is that this first ever wiki sprint was not the last. In the words of the organisers, Gabriel Bodard and Simon Mahony:
‘We have recently started a programme of short intensive work-sprints to improve the content of the Digital Classicist Wiki (http://wiki.digitalclassicist.org/). A small group of us this week made about 250 edits in a couple of hours in the afternoon, and added dozens of new projects, tools, and other information pages.
We would like to invite other members of the Digital Classicist community to join us for future “sprints” of this kind, which will be held on the first Tuesday of every month, at 16h00 London time (usually =17:00 Central Europe; =11:00 Eastern US).
To take part in a sprint:
1. Join us in the DigiClass chatroom (instructions at <http://wiki.digitalclassicist.org/DigiClass_IRC_Channel>) during the scheduled slot, and we’ll decide what to do there;
2. You will need an account on the Wiki – if you don’t already have one, please email one of the admins to be invited;
3. You do not need to have taken part before, or to come along every month; occasional contributors are most welcome!’
The next few sprints are scheduled for:
* August 5th
* September 2nd
* October 7th
* November 4th
* December 2nd
Please, do join us, whenever you can!
Standards for Networking Ancient Prosopography: Data and Relations in Greco-Roman Names (SNAP:DRGN) is a one-year pilot project, based at King’s College London in collaboration with colleagues from the Lexicon of Greek Personal Names (Oxford), Trismegistos (Leuven), Papyri.info (Duke) and Pelagios (Southampton), and hopes to include many more data partners by the end of this first year. Much of the early discussion of this project took place at the LAWDI school in 2013. Our goal is to recommend standards for sharing relatively minimalist data about classical and other ancient prosopographical and onomastic datasets in RDF, thereby creating a huge graph of person-data that scholars can:
- query to find individuals, patterns, relationships, statistics and other information;
- follow back to the richer and fuller source information in the contributing database;
- contribute new datasets or individual persons, names and textual references/attestations;
- annotate to declare identity between persons (or co-reference groups) in different source datasets;
- annotate to express other relationships between persons/entities in different or the same source dataset (such as familial relationships, legal encounters, etc.);
- use URIs to annotate texts and other references to names with the identity of the person to whom they refer (similar to Pelagios’s model for places using Pleiades).
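The kind of graph sketched above can be illustrated with a toy example in plain Python, using (subject, predicate, object) tuples rather than a real RDF triple store; the person URIs and the relationship predicate are invented for the illustration, and only `owl:sameAs` is a real vocabulary term:

```python
# Toy SNAP-style person graph: triples as plain tuples.
# All example.org URIs below are hypothetical.
SAME_AS = "http://www.w3.org/2002/07/owl#sameAs"

triples = {
    # An identity annotation linking "the same" person in two source datasets:
    ("http://example.org/lgpn/person/1001", SAME_AS,
     "http://example.org/trismegistos/person/2002"),
    # A hypothetical familial relationship within a single dataset:
    ("http://example.org/lgpn/person/1001",
     "http://example.org/snap/brotherOf",
     "http://example.org/lgpn/person/1003"),
}

def coreferent(uri):
    """Return every URI declared identical to `uri` (one sameAs hop)."""
    out = set()
    for s, p, o in triples:
        if p == SAME_AS:
            if s == uri:
                out.add(o)
            elif o == uri:
                out.add(s)
    return out

print(coreferent("http://example.org/lgpn/person/1001"))
```

The point of the minimalist model is visible even at this scale: the graph records only identities and relationships, while the rich source records stay in the contributing databases, reachable by following the URIs back.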
More detailed description (plus successful funding bid document, if you’re really keen) can be found at <http://snapdrgn.net/about>.
Our April workshop invited a handful of representative data-holders and experts in prosopography and/or linked open data to spend two days in London discussing the SNAP:DRGN project, their own data and work, and approaches to sharing and linking prosopographical data in general. We presented a first draft of the SNAP:DRGN “Cookbook”, the guidelines for formatting a subset of prosopographical data in RDF for contribution to the SNAP graph, and received some extremely useful feedback on individual technical issues and the overall approach. A summary of the workshop, and slides from many of the presentations, can be found at <http://snapdrgn.net/archives/110>.
In the coming weeks we shall announce the first public version of the SNAP ontology, the Cookbook, and the graph of our core and partner datasets and annotations. For further discussion about the project, and linked data for prosopography in general, you can also join the Ancient-People Googlegroup (where I posted a summary similar to this post earlier today).
The Humboldt Chair of Digital Humanities at the University of Leipzig is pleased to announce a new effort within the Open Philology Project: the Leipzig Open Fragmentary Texts Series (LOFTS).
The Leipzig Open Fragmentary Texts Series is a new effort to establish open editions of ancient works that survive only through quotations and text re-uses in later texts (i.e., those pieces of information that humanists call “fragments”).
As a first step in this process, the Humboldt Chair announces the Digital Fragmenta Historicorum Graecorum (DFHG) Project, whose goal is to produce a digital edition of the five volumes of Karl Müller’s Fragmenta Historicorum Graecorum (FHG) (1841–1870), the first large-scale collection of fragments of the Greek historians ever produced.
For further information, please visit the project website at: http://www.dh.uni-leipzig.de/wo/open-philology-project/the-leipzig-open-fragmentary-texts-series-lofts/
A few volunteers have started gathering for an interesting project, and it occurs to me that others may like to join us. This might be especially appropriate to someone with excellent Latin, a love for the subject, but no current involvement with the classics, and some spare time on their hands. A retired Latin teacher might fit the bill, or someone who completed an advanced classics degree some years ago, but now works in an unrelated field and misses working with ancient texts. Current students and scholars are also more than welcome to participate.
The Papyri.info site includes some 52,000 transcribed texts, of which about 2,000 are in Latin, and very few have been translated into English or any other modern language. The collaborative editing tool SoSOL (deployed at papyri.info/editor) allows users to add to or improve existing editions of papyrological texts, for example by adding new translations.
If you think you might like to take part in this exercise, take a look for instance at O. Bu Njem, a corpus of 150 ostraka from the Roman military base at Golas in Libya. The Latin texts (often fragmentary) are already transcribed; do you think you could produce an English translation of a few of these texts, which will be credited to you? Would you like a brief introduction to the SoSOL interface to enable you to add the translations yourself (pending approval by the editorial board)?
- See the translation assignments doc for some of the texts being focussed on at the moment.
- We have created basic instructions for adding a new translation.
- Please get in touch with me if you’d like more information, or suggestions for where to start.
- But don’t feel the need to ask for permission or approval; go ahead and start translating whenever you’re ready!
March 27-30, 2014
perseus_neh (at) tufts.edu
As a follow-on to Working with Text in a Digital Age, an NEH-funded Institute for Advanced Technologies in the Digital Humanities and in collaboration with the Open Philology Project at the University of Leipzig, Tufts University announces a two-day workshop on publishing textual data that is available under an open license, that is structured for machine analysis as well as human inspection, and that is in a format that can be preserved over time. The purpose of this workshop is to establish specific guidelines for digital publications that publish and/or annotate textual sources from the human record. The registration for the workshop will be free but space will be limited. Some support for travel and expenses will be available. We particularly encourage contributions from students and early-career researchers.
Textual data can include digital versions of traditional critical editions and translations but such data also includes annotations that make traditional tasks (such as looking up or quoting a primary source) machine-actionable, annotations that may build upon print antecedents (e.g., dynamic indexes of places that can be used to generate maps and geospatial visualizations), and annotations that are only feasible in a digital space (such as alignments between source text and translation or exhaustive markup of morphology, syntax, and other linguistic features).
Contributions can be of two kinds:
- Collections of textual data that conform to existing guidelines listed below. These collections must include a narrative description of their contents, how they were produced and what audiences and purposes they were designed to serve.
- Contributions about formats for publication. These contributions must contain sufficient data to illustrate their advantages and to allow third parties to develop new materials.
All textual data must be submitted under a Creative Commons license. Where documents reflect a particular point of view by a particular author and where the original expression should for that reason not be changed, they may be distributed under a CC-BY-ND license. All other contributions must be distributed under a CC-BY-SA license. Most publications may contain data represented under both categories: the introduction to an edition or a data set, reflecting the reasons why one or more authors made a particular set of decisions, can be distributed under a CC-BY-ND license. All data sets (such as geospatial annotation, morphosyntactic analyses, reconstructed texts with textual notes, diplomatic editions, translations) should be published under a CC-BY-SA license.
Contributors should submit abstracts of up to 500 words to EasyChair. We particularly welcome abstracts that describe data already available under a Creative Commons license.
January 1, 2014: Submissions are due. Please submit via EasyChair.
January 20, 2014: Notification.
From Lisa Cerrato via the Digital Classicist List:
The Perseus Catalog is an attempt to provide systematic catalog access to at least one online edition of every major Greek and Latin author (both surviving and fragmentary) from antiquity to 600 CE. Still a work in progress, the catalog currently includes 3,679 individual works (2,522 Greek and 1,247 Latin), with over 11,000 links to online versions of these works (6,419 in Google Books, 5,098 to the Internet Archive, 593 to the Hathi Trust). The Perseus interface now includes links to the Perseus Catalog from the main navigation bar, and also from within the majority of texts in the Greco-Roman collection.
The metadata contained within the catalog utilizes the MODS and MADS standards developed by the Library of Congress, as well as the Canonical Text Services (CTS) and CTS-URN protocols developed by the Homer Multitext Project. The Perseus catalog interface uses the open-source Blacklight Project interface and Apache Solr. Stable, linkable canonical URIs have been provided for all textgroups, works, editions and translations in the catalog, in both HTML and ATOM output formats. The ATOM output format provides access to the source CTS, MODS and MADS metadata for the catalog records. Subsequent releases will make all catalog data available as RDF triples.
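The CTS-URN scheme mentioned above identifies texts hierarchically: a namespace, then a textgroup (author), work, and version, and optionally a passage reference. A minimal sketch of how such a URN decomposes, in Python (the parser is my own illustration, not part of any Perseus tooling; the URN is the standard Homer example):

```python
def parse_cts_urn(urn):
    """Split a CTS URN of the form
    urn:cts:<namespace>:<textgroup>.<work>[.<version>][:<passage>]
    into its named components."""
    parts = urn.split(":")
    if parts[:2] != ["urn", "cts"] or len(parts) < 4:
        raise ValueError("not a CTS URN: %r" % urn)
    work_parts = parts[3].split(".")
    return {
        "namespace": parts[2],
        "textgroup": work_parts[0],
        "work": work_parts[1] if len(work_parts) > 1 else None,
        "version": work_parts[2] if len(work_parts) > 2 else None,
        "passage": parts[4] if len(parts) > 4 else None,
    }

# Iliad 1.1 in the Perseus Greek edition:
print(parse_cts_urn("urn:cts:greekLit:tlg0012.tlg001.perseus-grc1:1.1"))
```

Because each level of the hierarchy is addressable on its own, a catalog URI can point at a whole textgroup, a single work, or one specific edition, which is what makes the stable linking described above possible.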
Other major plans for the future of the catalog include not only adding more authors and works, and links to further online versions, but also opening up the catalog to contributions from users. Currently the catalog does not include any user contribution or social features other than standard email contact information, but the goal is soon to support the creation of user accounts and the contribution of recommendations, corrections, and/or new metadata.
The Perseus Digital Library Team
We are very pleased to announce the creation of the Duke Collaboratory for Classics Computing (DC3), a new Digital Classics R&D unit embedded in the Duke University Libraries, whose start-up has been generously funded by the Andrew W. Mellon Foundation and Duke University’s Dean of Arts & Sciences and Office of the Provost.
The DC3 goes live 1 July 2013, continuing a long tradition of collaboration between the Duke University Libraries and papyrologists in Duke’s Department of Classical Studies. The late Professors William H. Willis and John F. Oates began the Duke Databank of Documentary Papyri (DDbDP) more than 30 years ago, and in 1996 Duke was among the founding members of the Advanced Papyrological Information System (APIS). In recent years, Duke led the Mellon-funded Integrating Digital Papyrology effort, which brought together the DDbDP, Heidelberger Gesamtverzeichnis der Griechischen Papyrusurkunden Ägyptens (HGV), and APIS in a common search and collaborative curation environment (papyri.info), and which collaborates with other partners, including Trismegistos, Bibliographie Papyrologique, Brussels Coptic Database, and the Arabic Papyrology Database.
The DC3 team will see to the maintenance and enhancement of papyri.info data and tooling, cultivate new partnerships in the papyrological domain, experiment in the development of new complementary resources, and engage in teaching and outreach at Duke and beyond.
The team’s first push will be in the area of Greek and Latin Epigraphy, where it plans to leverage its papyrological experience to serve a much larger community. The team brings a wealth of experience in fields like image processing, text engineering, scholarly data modeling, and building scalable web services. It aims to help create a system in which the many worldwide digital epigraphy projects can interoperate by linking into the graph of scholarly relationships while maintaining the full force of their individuality.
The DC3 team is:
Ryan BAUMANN: Has worked on a wide range of Digital Humanities projects, from applying advanced imaging and visualization techniques to ancient artifacts, to developing systems for scholarly editing and collaboration.
Hugh CAYLESS: Has over a decade of software engineering expertise in both academic and industrial settings. He also holds a Ph.D. in Classics and a Master’s in Information Science. He is one of the founders of the EpiDoc collaborative and currently serves on the Technical Council of the Text Encoding Initiative.
Josh SOSIN: Associate Professor of Classical Studies and History, Co-Director of the DDbDP, Associate editor of Greek, Roman, and Byzantine Studies; an epigraphist and papyrologist interested in the intersection of ancient law, religion, and the economy.
Copied from the Digital Classicist list on behalf of the organisers:
CALL FOR PAPERS
HESTIA2: Exploring spatial networks through ancient sources
University of Southampton, 18 July 2013
Organisers: Elton Barker, Stefan Bouzarovski, Leif Isaksen and Tom Brughmans, in collaboration with The Connected Past
A free one-day seminar on spatial network analysis in archaeology, history, classics, teaching and commercial archaeology.
Spatial relationships are everywhere in our sources about the past: from the ancient roads that connect cities, or ancient authors mentioning political alliances between places, to the stratigraphic contexts archaeologists deal with in their fieldwork. However, as datasets about the past become increasingly large, these spatial networks become ever more difficult to disentangle. Network techniques allow us to address such spatial relationships explicitly and directly through network visualisation and analysis. This seminar aims to explore the potential of such innovative techniques for research, public engagement and commercial purposes.
The seminar is part of Hestia2, a public engagement project aimed at introducing a series of conceptual and practical innovations to the spatial reading and visualisation of texts. Following on from the AHRC-funded “Network, Relation, Flow: Imaginations of Space in Herodotus’s Histories” (Hestia: http://www.open.ac.uk/Arts/hestia/ ), Hestia2 represents a deliberate shift from experimenting with geospatial analysis of a single text to making Hestia’s outcomes available to new audiences and widely applicable to other texts through a seminar series, online platform, blog and learning materials with the purpose of fostering knowledge exchange between researchers and non-academics, and generating public interest and engagement in this field.
For this first Hestia2 workshop we welcome contributions addressing any of (but not restricted to) the following themes:
Spatial network analysis techniques
Spatial networks in archaeology, history and classics
Techniques for the discovery and analysis of networks from textual sources
Exploring spatial relationships in classical and archaeological sources
The use of network visualisations and linked datasets for archaeologists active in the commercial sector and teachers
Applications of network analysis in archaeology, history and classics
Please email proposed titles and abstracts (max. 250 words) to:
firstname.lastname@example.org by May 13th 2013.
Via Marco Büchler, Greg Crane has just posted “The Open Philology Project and Humboldt Chair of Digital Humanities at Leipzig” at Perseus Digital Library Updates.
Abstract: The Humboldt Chair of Digital Humanities at the University of Leipzig sees in the rise of Digital Technologies an opportunity to re-assess and re-establish how the humanities can advance the understanding of the past and to support a dialogue among civilizations. Philology, which uses surviving linguistic sources to understand the past as deeply and broadly as possible, is central to these tasks, because languages, present and historical, are central to human culture. To advance this larger effort, the Humboldt Chair focuses upon enabling Greco-Roman culture to realize the fullest possible role in intellectual life. Greco-Roman culture is particularly significant because it contributed to both Europe and the Islamic world and the study of Greco-Roman culture and its influence thus entails Classical Arabic as well as Ancient Greek and Latin. The Humboldt Chair inaugurates an Open Philology Project with three complementary efforts that produce open philological data, educate a wide audience about historical languages, and integrate open philological data from many sources: the Open Greek and Latin Project organizes content (including translations into Classical Arabic and modern languages); the Historical Language e-Learning Project explores ways to support learning across barriers of language and culture as well as space and time; the Scaife Digital Library focuses on integrating cultural heritage sources available under open licenses.
Details of the project, its components, and rationale are provided in the original post.
TextGrid (http://www.textgrid.de) is a platform for scholars in the humanities which makes possible the collaborative analysis, evaluation and publication of cultural remains (literary sources, images and codices) in a standardized way. The central idea is to bring together instruments for dealing with texts under a common user interface. The workbench offers a range of tools and services for scholarly editing and linguistic research, which are extensible through open interfaces: editors for linkage between texts or between text sequences and images, tools for musical score edition, for gloss editing, for automatic collation, and so on.
On the occasion of the official release of TextGrid 2.0, a summit will take place from 14 to 15 May 2012. On the 14th the summit will start with a workshop day, on which participants can gain insight into some of the new tools. For the following day, lectures and a discussion group are planned.
For more information and registration see this German website:
With kind regards
Technische Universität Darmstadt
Institut für Sprach- und Literaturwissenschaft
(To register to attend this workshop, please visit http://pelagios.eventbrite.com)
The Pelagios workshop is an open forum for discussing the issues associated with and the infrastructure required for developing methods of linking open data (LOD), specifically geodata. There will be a specific emphasis on places in the ancient world, but the practices discussed should be equally applicable to contemporary named locations. The Pelagios project will also make available a proposal for a lightweight methodology prior to the event in order to focus discussion and elicit critique.
The one-day event will have 3 sessions dedicated to:
1) Issues of referencing ancient and contemporary places online
2) Lightweight ontology approaches
3) Methods for generating, publishing and consuming compliant data
Each session will consist of several short (15 min) papers followed by half an hour of open discussion. The event is FREE to all but places are LIMITED so participants are advised to register early. This is likely to be of interest to anyone working with digital humanities resources with a geospatial component.
10:30-1:00 Session 1: Issues
2:00-3:30 Session 2: Ontology
4:00-5:30 Session 3: Methods
Johan Åhlfeldt (University of Lund) Regnum Francorum online
Ceri Binding (University of Glamorgan) Semantic Technologies Enhancing Links and Linked data for Archaeological Resources
Gianluca Correndo (University of Southampton) EnAKTing
Claire Grover (University of Edinburgh) Edinburgh Geoparser
Eetu Mäkelä (University of Aalto) CultureSampo
Adam Rabinowitz (University of Texas at Austin) GeoDia
Sebastian Rahtz (University of Oxford) CLAROS
Sven Schade (European Commission)
Monika Solanki (University of Leicester) Tracing Networks
Humphrey Southall (University of Portsmouth) Great Britain Historical Geographical Information System
Jeni Tennison (Data.gov.uk)
Pelagios Partners also attending are:
Mathieu d’Aquin (KMi, The Open University) LUCERO
Greg Crane (Tufts University) Perseus
Reinhard Foertsch (University of Cologne) Arachne
Sean Gillies (Institute for the Study of the Ancient World, NYU) Pleiades
Mark Hedges, Gabriel Bodard (KCL) SPQR
Rainer Simon (DME, Austrian Institute of Technology) EuropeanaConnect
Elton Barker (The Open University) Google Ancient Places
Leif Isaksen (The University of Southampton) Google Ancient Places
You are warmly invited to attend
“Digital Transformations: New developments in cultural heritage imaging”
a workshop on digital imaging to be held at the University of Oxford on Friday, 25 February 2011.
The workshop will focus on documentary evidence, from 3D capture techniques to reflectance transformation imaging (RTI). This workshop is part of the collaborative University of Oxford and University of Southampton pilot project “Reflectance Transformation Imaging (RTI) System for Ancient Documentary Artefacts”, supported by the AHRC DEDFI scheme.
Friday, 25 February 2011
Lecture Theatre, The Ioannou Centre for Classical and Byzantine Studies, 66 St. Giles’, Oxford OX1 3LU
For free registration, further details and any queries, please go to: http://rtisad-oxford.eventbrite.com/
The RTISAD Team:
Alan Bowman, Charles Crowther
Jacob Dahl, Graeme Earl
Leif Isaksen, Kirk Martinez
Hembo Pagi, Kathryn E. Piquette
Seen in a post by Fabian Reiter on various lists:
Dear colleagues, as part of the digitization project of the Berlin Papyrus Collection, funded by the Deutsche Forschungsgemeinschaft, a three-year position for an IT specialist (Fachinformatiker) is to be filled; see the announcement at the following address: http://hv.spk-berlin.de/deutsch/beruf_karriere/freie_stellen/museum/AeMP2-2010.pdf
Researchers at the Fraunhofer Institute for Computer Graphics Research have developed a computer system to recognize images in a museum and enhance them with digital information, and have deployed such a system in Amsterdam’s Allard Pierson Museum. When visitors point a flat-screen computer’s digital camera at an image in the exhibition (for example, an image of the Roman Forum, the Temple of Saturn, or the Colosseum), the system overlays the scene with ancillary information, including a possible reconstruction of ruins. Such a technology is called “augmented reality,” and it may become available to tourists via smart phones.
While on the subject of spatial analysis, I’m sure there are archaeologists and geographers here who would have useful suggestions for what we can do with the hi-res 3-D images of the Earth that the NASA SRTM project has made available. There’s a nice overview of the imagery and some of the uses to which it’s already been put in this post, “Reading the world in Braille” at Integrity-Logic (coincidence that it’s International Braille Day today?).
So we’ve discussed what to do with a million books; now what do we do with quadrillions of bytes of geodata? Answers on the back of a postcard (or in a comment) please.
The “Wired Campus” section of the Chronicle of Higher Education is reporting on the uses that humanities scholars have found for the U.S. Department of Energy’s High Performance Computing resources. The short article reports on the efforts of several people who have made use of the resources, including Gregory Crane of the Perseus Project, David Bamman, a computational linguist who has been mining data from classical texts, and David Koller, a researcher with the Digital Sculpture Project, which has developed ways to coalesce numerous images of an object into a high-resolution 3D image. The article reports that, according to Mr. Koller, intermediaries are needed who can help humanities and computer researchers communicate with each other.
Scientists in the European joint project 3D-COFORM are creating three-dimensional digital models of artifacts such as statues and vases. Besides making for an exciting viewing experience, the 3D models constitute comprehensive documentation of objects that is useful to conservators. The longer-term goal of correlating 3D data between different objects is still a long way off. Read about it here.
… with hundreds of thousands of digital photos.
University of Washington researchers have developed a computer system to combine tourist photos lifted from the Flickr.com photo-sharing site into a 3D digital model. Using advanced techniques, they were able to ‘build’ a model of Rome from photos tagged with Rome or Roma in just 21 hours.
Here’s a recent article from the Lexington (Kentucky) Herald-Leader about the activities of the EDUCE project. It sounds like they’re at an exciting and critical point. According to lead researcher and computer science professor Brent Seales:
“We’re starting the serious work now,” Seales said. “In a few weeks, we should know whether we’ll be able to tease out some of the writing. Seeing the text is going to be the trick, but we have some tricks of our own that we think will help.”
The story links to an informative video on YouTube, entitled “Reading the Unreadable,” apparently published in January of this year.
At the suggestion of one of our translators, Nick Nicholas, I have added a link to the SOL front page called “Entire list of entries”. If you go there, you will find a list of all the Suda entries, whether translated or not. Each is a link that takes you to the current translation; if there is none (as with phi,849 for instance), you get the source text instead.
There are two things to note. First, the links are in the form http://www.stoa.org/sol-entries/alpha/3, which is new. I have introduced a URL-rewrite rule in the web server that converts this short form into the older, less memorable internal URL.
You can use this new form of URL if you wish to embed pointers to the SOL in other web pages.
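For anyone embedding such pointers, the new short URLs can be constructed mechanically from an entry’s letter and number. A minimal sketch (the helper name is mine, not part of SOL; the pattern follows the example URL given above):

```python
def sol_entry_url(letter, number):
    """Build a short-form SOL entry URL for embedding in other web pages.

    Follows the pattern of the example in the post:
    http://www.stoa.org/sol-entries/alpha/3
    """
    return "http://www.stoa.org/sol-entries/%s/%d" % (letter, number)

# For example, the untranslated entry mentioned above:
print(sol_entry_url("phi", 849))
```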
Second, the real purpose of this list of entries is so that web crawlers like Google will find it and index the contents of the entire SOL. Within a short time, we should be able to use a search engine with a search like “aaron biography omicron kurion” and find this same entry. We’ll see if that works.
Very exciting news – the complete dataset of the Archimedes Palimpsest project (ten years in the making) has been released today. The official announcement is copied below, but I’d like to point out what I think it is that makes this project so special. It isn’t the object – the manuscript – or the content – although I’m sure the previously unknown texts are quite exciting for scholars. It isn’t even the technology, which includes multispectral imaging used to separate the erased undertext from the overlying text, and the XML transcriptions mapped to those images (although that’s a subject close to my heart).
What’s special about this project is its total dedication to open access principles, and an implied trust that open access will work. There is no user interface. Instead, all project data is being released under a Creative Commons Attribution 3.0 license. Under this license, anyone can take this data and do whatever they want with it (even sell it), as long as they attribute it to the Archimedes Palimpsest project. The thinking behind this is that, by making the complete project data available, others will step up and build interfaces… create searches… make visualizations… do all kinds of cool stuff with the data that the developers might not even consider.
To be fair, this isn’t the only project I know of that is operating like this; the complete high-resolution photographs and accompanying metadata for manuscripts digitized through the Homer Multitext project are available freely, as the other project data will be when it’s completed, although the HMT as far as I know will also have its own user interface. There may be others as well. But I’m impressed that the project developers are releasing just the data, and trusting that scholars and others will create user environments of their own.
The Stoa was founded on principles of open access. It’s validating to see a high-visibility project such as the Archimedes Palimpsest take those principles seriously.
Ten years ago today, a private American collector purchased the Archimedes Palimpsest. Since that time he has guided and funded the project to conserve, image, and study the manuscript. After ten years of work, involving the expertise and goodwill of an extraordinary number of people working around the world, the Archimedes Palimpsest Project has released its data. It is a historic dataset, revealing new texts from the ancient world. It is an integrated product, weaving registered images in many wavebands of light with XML transcriptions of the Archimedes and Hyperides texts that are spatially mapped to those images. It has pushed boundaries for the imaging of documents, and relied almost exclusively on current international standards. We hope that this dataset will be a persistent digital resource for the decades to come. We also hope it will be helpful as an example for others who are conducting similar work. It is published under a Creative Commons Attribution 3.0 license, to ensure ease of access and the potential for widespread use. A complete facsimile of the revealed palimpsested texts is available on Google Books as “The Archimedes Palimpsest”. It is hoped that this is the first of many uses to which the data will be put.
For information on the Archimedes Palimpsest Project, please visit: www.archimedespalimpsest.org
For the dataset, please visit: www.archimedespalimpsest.net
We have set up a discussion forum on the Archimedes Palimpsest Project. Any member can invite anybody else to join. If you want to become a member, please email:
I would be grateful if you would circulate this to your friends and colleagues.
Thank you very much
The Walters Art Museum
October 29th, 2008.