Archive for the ‘Tools’ Category

Paper on Archaeology in Second Life

Saturday, June 23rd, 2007

Shawn Graham (at the Electric Archaeology blog) has uploaded a copy of his paper from the recent Immersive Worlds conference at Brock. The paper can be downloaded (as a .wav) here: ‘On Second Lives and Past Lives: Archaeological Thoughts on the Metaverse’ (via the EA post).

This is obviously a huge and very relevant topic at the moment: the Digital Classicist Seminar in London yesterday was addressed by Timothy Hill under the title ‘Wiser than the Undeceived? Past Worlds as Virtual Worlds in the Electronic Media’. (And Dunstan Lowe will also address recreational software in the same series in three weeks’ time.)

Encyclopedia of Life

Tuesday, May 15th, 2007

The New Scientist this week reports on the Encyclopedia of Life, a new, massive, collaborative, evolving resource to catalogue the 1.8 million known species of life on the planet. Although this is a biology resource and so, for example, has access to greater funding sources than most of us in the humanities dream of (E. O. Wilson has apparently already raised $50 million in set-up funds), a lot of the issues of collaborative research and publication, of evolving content, of citability, of authority, of copyright, of free access, and of winning the engagement of the research community as a whole are exactly the same as those we face. It would serve us well to watch how this resource develops.

It is a truism that we can learn a lot from the way scientists conduct their research, as they are better-funded than we are. But, dare I say it, the builders of this project could also do worse than to consult and engage with digital humanists who have spent a lot of time thinking about innovative and robust solutions to these problems in ways that scientists have not necessarily had to.

Google Earth with Audio

Sunday, May 13th, 2007

From this interesting post over at New Scientist Tech:

Bernie Krause has spent 40 years collecting over 3500 hours of sound recordings from all over the world, including bird and whale song and the crackle of melting glaciers. His company, Wild Sanctuary in Glen Ellen, California, has now created software to embed these sound files into the relevant locations in Google Earth. Just zoom in on your chosen spot and listen to local sounds.

“Our objective is to bring the world alive,” says Krause. “We have all the continents of the world, high mountains and low deserts.”

He hopes it will make virtual visitors more aware of the impact of human activity on the environment in the years since he began making and collecting the recordings. Users will be able to hear various modern-day sounds at a particular location, then travel back in time to compare them with the noises of decades gone by.

This is more than just a cool mashup of sounds with locations; the idea has repercussions in all sorts of departments, not least technical ones. At the end of the NS article is a note:

Another project, called Freesound, is making contributors’ sound files available on Google Earth. Unlike these recordings, Krause’s sound files are of a consistent quality and enriched with time, date and weather information.

Freesound is a Creative Commons site and more interesting from the Web 2.0 perspective, as its content is freely user-generated. What is exciting is the way that sites can now make all sorts of media available through georeferences in Google Earth/Maps (as, for example, the Pleiades Project are doing with classical sites). The question will be how such rich results are filtered: will Google provide overlays that filter by more than just keywords, or will third-party sites (like Wild Sanctuary and Pleiades) need to create web services that take advantage of the open technologies but provide their own filters? (Tom can probably answer these questions already…)
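For anyone who wants to experiment, the overlay mechanism itself is simple: Google Earth reads KML files, so a site only needs to publish placemarks whose descriptions link to its media. Below is a minimal sketch in Python; the coordinates, sound-file URL and output file name are invented for illustration.

    # Generate a one-placemark KML overlay that georeferences a sound file.
    # Everything specific here (name, coordinates, URL) is made up.
    KML_TEMPLATE = """<?xml version="1.0" encoding="UTF-8"?>
    <kml xmlns="http://earth.google.com/kml/2.1">
      <Document>
        <Placemark>
          <name>{name}</name>
          <description><![CDATA[<a href="{media_url}">Listen to the recording</a>]]></description>
          <Point><coordinates>{lon},{lat},0</coordinates></Point>
        </Placemark>
      </Document>
    </kml>
    """

    def placemark_kml(name, lat, lon, media_url):
        """Return a KML document containing one placemark that links to a media file."""
        return KML_TEMPLATE.format(name=name, lat=lat, lon=lon, media_url=media_url)

    if __name__ == "__main__":
        kml = placemark_kml("Glacier recording (hypothetical)", 61.2, -149.9,
                            "http://example.org/sounds/glacier.mp3")
        with open("overlay.kml", "w") as f:
            f.write(kml)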

Stop teaching historians to use computers!

Tuesday, May 8th, 2007

Bill Turkel has started what looks to be an important and potentially influential thread on the nexus of history and the digital. His opening salvo:

Teaching history students how to use computers was a really good idea in the early 1980s. It’s not anymore. Students who were born in 1983 have already graduated from college. If they didn’t pick up the rudiments of word processing and spreadsheet and database use along the way, that’s tragic. But if we concentrate on teaching those things now, we’ll be preparing our students for the brave new world of 1983.

Posts so far:

Type Greek

Tuesday, May 1st, 2007

Type Greek is a web-based software tool that converts text from a standard keyboard into beautiful, polytonic Greek characters as you type. Using an easy-to-learn and standardized system called beta code, TypeGreek converts your keystrokes into Unicode-compliant Greek in real-time… The TypeGreek code is released under a Creative Commons license, so you are free to download it, modify it, or host it on your own site.
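For the curious, the core of any beta-code converter is little more than a transliteration table plus combining diacritics. The toy Python sketch below shows the idea; it is not TypeGreek’s own code (which is JavaScript) and it ignores capitals, final sigma and much else.

    # Toy beta-code-to-Unicode converter: illustrative only, and far from complete.
    BETA_TO_GREEK = {
        'a': 'α', 'b': 'β', 'g': 'γ', 'd': 'δ', 'e': 'ε', 'z': 'ζ',
        'h': 'η', 'q': 'θ', 'i': 'ι', 'k': 'κ', 'l': 'λ', 'm': 'μ',
        'n': 'ν', 'c': 'ξ', 'o': 'ο', 'p': 'π', 'r': 'ρ', 's': 'σ',
        't': 'τ', 'u': 'υ', 'f': 'φ', 'x': 'χ', 'y': 'ψ', 'w': 'ω',
    }
    COMBINING = {'/': '\u0301',   # acute
                 '\\': '\u0300',  # grave
                 '=': '\u0342',   # circumflex
                 ')': '\u0313',   # smooth breathing
                 '(': '\u0314',   # rough breathing
                 '|': '\u0345'}   # iota subscript

    def beta_to_unicode(text):
        """Convert lowercase beta code to (decomposed) Unicode Greek."""
        out = []
        for ch in text.lower():
            if ch in BETA_TO_GREEK:
                out.append(BETA_TO_GREEK[ch])
            elif ch in COMBINING:
                out.append(COMBINING[ch])  # diacritic attaches to the preceding letter
            else:
                out.append(ch)             # spaces and punctuation pass through
        return ''.join(out)

    print(beta_to_unicode('mh=nin a)/eide qea/'))  # μῆνιν ἄειδε θεά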

Propylaeum-DOK

Friday, April 27th, 2007

An interesting new project at Heidelberg:

Propylaeum-DOK, the full-text repository of Propylaeum, the Virtual Library for Classical Studies, is provided by Heidelberg University Library. The publication platform offers scholars worldwide the opportunity to make their publications from all fields of classical studies available on the web free of charge and in electronic form, in accordance with the principles of Open Access. The works are archived with standardized addresses (URN) and metadata (OAI-PMH) so that they remain permanently citable, and they can therefore be searched in various library catalogues and search engines worldwide.
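Since the works are archived with standard URNs and OAI-PMH metadata, the repository should be harvestable with a few lines of code. Here is a minimal Python sketch; the base URL is only my guess at the endpoint (check the site for the real one), and a proper harvester would also follow resumptionTokens to page through the full set.

    # Minimal OAI-PMH harvesting sketch: fetches the first page of Dublin Core
    # records and prints their titles. The base URL below is a guess.
    import urllib.request
    import xml.etree.ElementTree as ET

    BASE_URL = "http://archiv.ub.uni-heidelberg.de/propylaeumdok/cgi/oai2"  # hypothetical
    OAI = "{http://www.openarchives.org/OAI/2.0/}"
    DC = "{http://purl.org/dc/elements/1.1/}"

    def list_record_titles(base_url):
        """Issue a ListRecords request and yield the title of each record."""
        url = base_url + "?verb=ListRecords&metadataPrefix=oai_dc"
        with urllib.request.urlopen(url) as response:
            tree = ET.parse(response)
        for record in tree.iter(OAI + "record"):
            title = record.find(".//" + DC + "title")
            if title is not None:
                yield title.text

    for title in list_record_titles(BASE_URL):
        print(title)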

Announcing TAPoR version 1.0

Tuesday, April 24th, 2007

Seen on Humanist:

Announcing TAPoR version 1.0
http://portal.tapor.ca

We have just updated the Text Analysis Portal for Research (TAPoR) to
version 1.0 and invite you to try it out.

The new version will not appear that different from previous
versions. The main difference is that we are now tracking data about
tool usage and have a survey that you can complete after trying the
portal in order to learn more about text analysis in humanities
research.

You can get a free account from the home page of the portal. If you
want an introduction you can look at the following pages:

Streaming video tutorials are at
http://training.tapor.ca

Text analysis recipes to help you do things are at:
http://tada.mcmaster.ca/Main/TaporRecipes

Starting points can be found at
http://tada.mcmaster.ca/Main/TAPoR

The survey is at
http://taporware.mcmaster.ca/phpESP/public/survey.php?name=TAPoR_portal

A tour, tutorial, and useful links are available on the home page,
portal.tapor.ca.

Please try the new version and give us feedback.

Yours,

Geoffrey Rockwell

Open Source OCR

Wednesday, April 11th, 2007

Seen in Slashdot and Google Code updates:

Google has just announced work on OCRopus, which it says it hopes will ‘advance the state of the art in optical character recognition and related technologies.’ OCRopus will be available under the Apache 2.0 License. Obviously, there may be search and image search implications from OCRopus. ‘The goal of the project is to advance the state of the art in optical character recognition and related technologies, and to deliver a high quality OCR system suitable for document conversions, electronic libraries, vision impaired users, historical document analysis, and general desktop use. In addition, we are structuring the system in such a way that it will be easy to reuse by other researchers in the field.’

Interestingly:

The project is expected to run for three years and support three Ph.D. students or postdocs. We are announcing a technology preview release of the software under the Apache license (English-only, combining the Tesseract character recognizer with IUPR layout analysis and language modeling tools), with additional recognizers and functionality in future releases.

It would be interesting to learn how this application compares in accuracy and power with commercial OCR systems (which have apparently gotten much better since the days when I used to get very frustrated with Omnipage and the like).
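For anyone who wants to run that comparison themselves, the Tesseract engine at the heart of the preview is easy to drive from a script. A quick Python sketch, assuming the tesseract command-line tool is installed (the file names are invented):

    # Run Tesseract on a scanned page and return the recognised text.
    # Assumes the `tesseract` command-line tool is on the PATH.
    import subprocess

    def ocr_page(image_path, output_base="page"):
        """OCR one image; Tesseract writes its output to <output_base>.txt."""
        subprocess.run(["tesseract", image_path, output_base], check=True)
        with open(output_base + ".txt", encoding="utf-8") as f:
            return f.read()

    text = ocr_page("scanned_page.png")
    print(text[:500])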

ImaNote – Image and Map Annotation Notebook

Wednesday, April 4th, 2007

This looks like a useful tool. Has anyone tried it? It claims to allow annotations and links to be added to images, with RSS feeds to keep track of everything.

Following text copied from Humanist:

We are really happy to announce the release of ImaNote 1.0 version.

ImaNote – (Image and Map Annotation Notebook) is a web-based multi-user tool that allows you, and your friends, to display a high-resolution image or a collection of images online and add annotations and links to them. You simply mark an area on an image (e.g. a map) and write an annotation related to the point.

You can keep track of the annotations using RSS (Really Simple Syndication) or link to them from your own blog/web site/email. The links lead right to the points in the image.

The user management features include resetting lost passwords and account email verification. Through the group management features you can create communities that share images and publish annotations.

ImaNote is Open Source and Free Software released under the GNU General Public Licence (GPL).

ImaNote is a Zope product, written in Python, with a javascript-enhanced interface. Zope and ImaNote run on almost all Operating Systems (GNU/Linux, MacOS X, *BSD, etc.) and Microsoft Windows. It currently works with most modern browsers including Mozilla Firefox, IE7 and Opera.

Imanote was developed as a collaboration between the Systems of Representation and the Learning Environments research groups of the Media Lab at the University of Art and Design Helsinki, Finland.

For more information go to http://imanote.uiah.fi
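If the RSS feeds work as advertised, pulling annotations into your own scripts should be trivial. A hypothetical Python sketch using the external feedparser library; the feed URL is invented, and the real one will depend on where the image collection is hosted.

    # Follow an (invented) ImaNote annotation feed and list the annotations.
    import feedparser

    feed = feedparser.parse("http://imanote.example.org/my-map/annotations.rss")
    for entry in feed.entries:
        # Each entry's link should lead straight to the annotated point on the image.
        print(entry.title, "->", entry.link)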

Conference: Historic Environment Information Resources Network

Monday, March 26th, 2007

Data Sans Frontières: web portals and the historic environment

25 May 2007: The British Museum, London

Organised by the Historic Environment Information Resources Network (HEIRNET) and supported by the AHRC ICT Methods Network and the British Museum, this one-day conference takes a comprehensive look at exciting new opportunities for disseminating and integrating historic environment data using portal technologies and Web 2.0 approaches. It brings together speakers from national organisations, national and local government, and academia, and will explore options for cooperation at both national and international levels.

The aims of the conference are:

  • To raise awareness of current developments in the online dissemination of Historic Environment Data
  • To set developments in the historic environment sector in a wider national and European information context
  • To raise awareness of current portal and interoperability technologies
  • To create a vision for a way forward for joined up UK historic environment information provision

This conference should be of interest to heritage professionals, researchers and managers from all sectors.

The conference costs £12; a full programme and online registration facilities are available at http://www.britarch.ac.uk/HEIRNET/. There may be tickets available on the day, but space is limited, so please register as soon as possible.

Tools and Methods for the Digital Historian

Sunday, March 25th, 2007

Posted by the Methods Network:

The AHRC ICT Methods Network, a UK initiative for the exchange and dissemination of expertise in the use of ICT for arts and humanities research, has just launched an online community forum on ‘digital’ history:

‘Tools and Methods for the Digital Historian’

(http://www.digital-historian.net) is the first of a set of integrated online communities related to Methods Network activities and resources and is a forum for open discussion of all issues relating to digital history. In particular we invite comments on a working paper by Neil Grindley (Methods Network) entitled ‘Tools and Methods for Historical Research’ which we hope will become the basis of a community resource. We are keen on getting more input and would very much like to include your feedback in future versions of the paper.

Wikis and Blogs in Education

Sunday, March 25th, 2007

Seen in the Creative Commons Feed:

“The wiki is the center of my classroom”

That’s a quote from Wikis and Blogs in Education, one of three educational remixes from students of open content pioneer David Wiley.

The other two are Interviewing Basics and the Open Water Project, an excellent disaster preparedness video that probably everyone should watch.

Each project is licensed under CC Attribution-NonCommercial-ShareAlike and incorporates CC licensed and public domain audio, images, and video as well as original materials.

Wikis and Blogs in Education, potentially the most interesting of the three for readers of this forum, combines text and video in an animated Flash and JavaScript framework. It seems to run smoothly, but I don’t know whether that framework has implications for the free reuse of the material.

Gentium resurgens: refined Cyrillic, Unicode 5, smart rendering

Monday, February 19th, 2007

From Victor Gaultney, on the latest regarding the Gentium font, by way of the Gentium-Announce list (links mine):

Update #4 – Gentium project revived, Cyrillic, Charis

Dear friends of Gentium,

No – there’s not a new version out yet. :-) But we’re pleased to report that Gentium is under development again after a while in hibernation. We’re actively refining the Cyrillic, adding support for Unicode 5, and preparing the font for the addition of smart rendering support using three different smart font technologies – OpenType, Graphite and Apple AAT.

If you want to see the target character, glyph set and behavior we’ll be supporting in the next version, you can take a look at our Doulos SIL and Charis SIL fonts:

Gentium will support every character and behavior that these fonts do, plus Greek. This also means that if you’re wondering whether the next version will support a specific character, see if it’s in Doulos SIL or Charis SIL. Note that since these fonts do not support full Greek, some of the Greek improvements (digamma, etc.) will not be there, but will be in Gentium.

Because we want to get this major upgrade to you as soon as possible, the next version will still be only regular and italic. We hope, however, to get it to you sometime mid-year (that’s 2007, if we’re able to keep on track).

One more little note: Since Gentium has been released under the SIL Open Font License, it has gained lots of support in the GNU/Linux community. It has also made its way into some Linux distributions, and even has been shown on the OLPC (One Laptop Per Child). There’s a good pic (in both large and small resolutions) of Gentium Greek on the OLPC at:

Thanks for your continued interest in Gentium!
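Incidentally, the advice above about checking Doulos SIL or Charis SIL for a particular character can be automated, since a font’s cmap table records exactly which codepoints it covers. A small sketch using the external fontTools library; the font file name is simply whatever your local copy is called.

    # Check whether a font file maps a given character, via its cmap table.
    from fontTools.ttLib import TTFont

    def font_supports(font_path, char):
        """Return True if the font has a glyph mapped to this character."""
        cmap = TTFont(font_path)["cmap"].getBestCmap()
        return ord(char) in cmap

    print(font_supports("DoulosSILR.ttf", "ϝ"))  # digamma, for example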

100 Alternative Search Engines

Wednesday, January 31st, 2007

Seen in Read/Write Web (by Charles S. Knight):

Ask anyone which search engine they use to find information on the Internet and they will almost certainly reply: “Google.” Look a little further, and market research shows that people actually use four main search engines for 99.99% of their searches: Google, Yahoo!, MSN, and Ask.com (in that order). But in my travels as a Search Engine Optimizer (SEO), I have discovered that in that .01% lies a vast multitude of the most innovative and creative search engines you have never seen. So many, in fact, that I have had to limit my list of the very best ones to a mere 100.

But it’s not just the sheer number of them that makes them worthy of attention; each one of these search engines has that standard “About Us” link at the bottom of the homepage. I call it the “why we’re better than Google” page. And after reading dozens and dozens of these pages, I have come to the conclusion that, taken as a whole, they are right!

Worth investigating these. There used to be several search engines that touted themselves as academic resources (Northernlights?), or as having cleverer algorithms than the big ones (Teoma?). I wouldn’t know what to look for any more, though. Do we actually need cleverer search engines, or is raw power all that matters?

Humanities Computing Links from TAPoR

Monday, January 29th, 2007

Geoffrey Rockwell has put up a collection of tagged links to online works about humanities computing. It’s a good complement to Bill Turkel’s Readings in Digital History. And, best of all, it’s TAPoRized, so you can search the collection and run its contents through any of the TAPoR text analysis tools.

Google maps and millions of books

Friday, January 26th, 2007

It was only a matter of time: Google has added overview maps for full-view books in Google Book Search.

Even though Google is not the first organization to employ geoparsing technologies and autogenerated maps in the interface to a digital library, they certainly are the biggest media darling to do so. Consequently — and because of the prominent role Google plays in web search, earth visualization and on-going mass digitization efforts — the average person is likely to be introduced to this class of information interaction via Google’s new feature.
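For readers unfamiliar with the term, ‘geoparsing’ simply means spotting place names in a text and resolving them to coordinates. A toy Python sketch of the idea; the gazetteer is tiny and invented, and real systems need large gazetteers and proper disambiguation.

    # Toy geoparser: naive string matching against a three-entry gazetteer.
    GAZETTEER = {
        "Rome": (41.89, 12.49),
        "Athens": (37.98, 23.73),
        "Carthage": (36.85, 10.33),
    }

    def geoparse(text):
        """Return (place, lat, lon) for each gazetteer entry mentioned in the text."""
        return [(place, lat, lon)
                for place, (lat, lon) in GAZETTEER.items()
                if place in text]

    print(geoparse("The road from Rome to Carthage passed through many ports."))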

But what good is it? Will it get better? Why should humanists care?

I’m contemplating a series of posts offering some idiosyncratic answers to these questions … but first off, let’s just focus on what it does …

Second Life experiment in social copyright

Sunday, January 21st, 2007
I spotted this several weeks ago in Wired magazine, but have only just gotten around to taking it in fully. The scenario:

Businesses in Second Life are in an uproar over a rogue [ed. note: modified from Open Source] software program that duplicates “in world” items. They should be. But the havoc sown by Copybot promises to transform the virtual world into a bold experiment in protecting creative work without the blunt instrument of copyright law.

Linden Labs, the owners of Second Life, decided against employing DRM (which “won’t work”) or adjudicating copyright disputes themselves, and have instead added creator and creation-date indicators to all items.

The next phase of Linden’s response is more interesting. The company plans to develop an infrastructure to enable Second Life residents and landowners to enforce IP-related covenants within certain areas, or as a prerequisite for joining certain groups. In effect, Second Life’s inhabitants will self-police their world, according to rules and social norms they develop themselves.

There are some interesting comments in the full article about the innovation incentive value of copyright, and the possible success of social norms as against enforceable law as a means of controlling this.

Second Life to open code

Wednesday, January 10th, 2007

I’ve posted here several times about the educational fun to be had with ancient and other reconstructions in Second Life (see e.g. 3D Egyptian Archaeology in Second Life). Now more good news from Linden Labs, which may make SL an even more user-designed and progressive virtual world environment. This announcement was seen in Lawrence Lessig’s blog:

I’ve been a long time supporter of SecondLife. Yesterday, they made me proud. SecondLife announced it will GPL its client software. And it committed itself to freeing the back-end as well. How significant is SecondLife? Here’s a really interesting empirical study by Tristan Louis about SecondLife activity.

Thesaurus linguae Latinae CD-Version workshop

Wednesday, January 10th, 2007

Something of interest for all Latinists in or near London:

The Centre for Computing in the Humanities and the Digital Classicist would like to invite all those interested to a workshop on the CD-Version of the Thesaurus linguae Latinae.

Dr Bianca Schröder (Munich) will be giving a seminar titled ‘A Traditional Dictionary in a New Medium’ (abstract) as part of the Humanities Computing series at CCH on 15th February at 1 pm.

In addition, Dr Schröder will be running two workshops, to be held in the seminar room at CCH (address below):

13:00 to 16:00 on Wednesday 14th February

and

14:00 to 17:00 on Friday 16th February

Workshop description:

The Thesaurus linguae Latinae is the most comprehensive dictionary of the Latin language; it covers every author and work from the earliest Latin up to 600 AD. The long-term project, based at the Bayerische Akademie der Wissenschaften, began work in 1894, and the first fascicle was printed in 1900. At present the staff in Munich are treating words beginning with the letter ‘p’. The articles are still published in printed form, but they are now also available in a CD version.

After an introduction to the highly elaborate, principally dichotomic, structure of the articles and a short exercise in using them, participants in the workshop will have the opportunity to work on a lemma by themselves. We will look at material illustrating the Latin verb ‘computare’ and think about general questions of lexicography: the meaning of a word in different contexts, its various syntactic usages, the change of meaning and usage over time, and the presentation of a word’s development in a TLL article. One important issue will be to compare the printed edition with the digital version and to discuss the questions and needs that can be served by a digital Latin dictionary.

If you wish to book a place on one of these workshops please contact:
simon.mahony@kcl.ac.uk
I will send out further details to those registered nearer the time.

regards
Simon Mahony

Centre for Computing in the Humanities
King’s College London
Kay House
7 Arundel St
London WC2R 3DX

New technologies for Euclid’s Elements

Tuesday, December 19th, 2006

Greg Crane points out a new paper by Mark J. Schiefsky:

The specific purpose of this paper is to describe a set of new software tools and some of their applications to the study of Euclid’s Elements. More generally, it is intended as a case study to illustrate some of the ways in which recent developments in information technology can open up new perspectives for the study of source materials in the history of mathematics and science. I argue that the creative and judicious use of such technology can make important contributions to historical scholarship, both by making it possible to pursue old questions in new ways and by raising new questions that cannot easily be addressed using traditional means of investigation.

Threats to Preservation

Tuesday, December 19th, 2006

Jill Hurst-Wahl at Digitization 101 lists some Bad Things That Can Happen:

  • Media failure
  • Hardware failure
  • Software failure
  • Network failure
  • Obsolescence
  • Natural disaster
  • Operator error
  • Internal attack
  • External attack
  • Organization failure
  • Economic failure

and she notes that LOCKSS is one of several possible responses.
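On a much humbler scale than LOCKSS, a first line of defence against several of these threats (media failure, silent corruption, operator error) is plain fixity checking: record a checksum for every file at ingest and re-verify on a schedule. A minimal Python sketch, with invented paths:

    # Record checksums for an archive directory, then re-verify them later.
    import hashlib, json, os

    MANIFEST = "checksums.json"

    def sha256(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    def record(directory):
        """Walk a directory tree and store a checksum manifest."""
        manifest = {}
        for root, _, files in os.walk(directory):
            for name in files:
                path = os.path.join(root, name)
                manifest[path] = sha256(path)
        with open(MANIFEST, "w") as f:
            json.dump(manifest, f, indent=2)

    def verify():
        """Report files that are missing or whose checksum has changed."""
        with open(MANIFEST) as f:
            manifest = json.load(f)
        for path, digest in manifest.items():
            if not os.path.exists(path) or sha256(path) != digest:
                print("PROBLEM:", path)

    record("digital_archive/")  # at ingest
    verify()                    # on a schedule thereafter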

Teaching and learning scenarios for Google Earth

Tuesday, November 14th, 2006

EduCause has a two-pager, and Google has its own page for educators.

Firefox extension for humanities scholars

Friday, November 3rd, 2006

Zotero is a free, easy-to-use Firefox extension to help you collect, manage, and cite your research sources. It lives right where you do your work — in the web browser itself.

Has anyone tried using this? Is it actually useful?

On avoiding death-by-PowerPoint

Thursday, August 31st, 2006

Just encountered this engaging blog: Presentation Zen.

Google’s Writely now available (again)

Friday, August 18th, 2006

from Ars Technica:

Editing is straightforward enough, and not noticeably different from working in good old MS Word or OpenOffice. Tables, images, the usual lineup of fonts—it’s all there. The right-click menu tends to be obscured by the browser’s equivalent. That’s slightly annoying, but you can work around that by hitting ESC once, and the menu isn’t all that useful to begin with. You’ll likely do fine with just the toolbar buttons and pulldown menus at the top of the editing window. The usual keyboard shortcuts work, too—CTRL-S for save, CTRL-Z for undo, et cetera.

The most notable feature of the editing process is the AJAXified collaboration. You can invite others to co-edit your document and see their additions or subtractions with a slight time delay, live in your window. The editor autosaves every ten seconds, which pushes out changes and pulls down new versions from the central repository. That could certainly come in handy. You’ll also always see who else is working on your document right now because they’re listed at the bottom of the screen. Writely keeps a revision history, and you can revert to any earlier version you like.
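The push-and-pull cycle described there is easy to picture in code. A rough Python sketch; the endpoint URLs and payload shape are invented, and the real client is of course browser-side JavaScript talking to Google’s servers.

    # Naive ten-second autosave loop: push local edits, pull remote changes.
    import json, time, urllib.request

    SAVE_URL = "http://docs.example.org/api/save"     # hypothetical
    FETCH_URL = "http://docs.example.org/api/latest"  # hypothetical

    def sync_loop(get_local_text, set_local_text, interval=10):
        last_pushed = None
        while True:
            text = get_local_text()
            if text != last_pushed:
                req = urllib.request.Request(
                    SAVE_URL,
                    data=json.dumps({"text": text}).encode("utf-8"),
                    headers={"Content-Type": "application/json"})
                urllib.request.urlopen(req)       # push local changes
                last_pushed = text
            with urllib.request.urlopen(FETCH_URL) as resp:
                remote = json.load(resp)["text"]  # pull the latest shared version
            if remote != text:
                set_local_text(remote)            # naive: remote always wins
                last_pushed = remote
            time.sleep(interval)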

When you’re done editing, you can download the document in .doc, .rtf, or .odt formats, as a PDF file, or as a self-contained zipped HTML file with all images included… You can also publish the document and send a link to whoever you want to read it, or publish an RSS feed of document revisions.