Archive for the ‘Open Source’ Category

Open Source Critical Editions workshop

Thursday, September 21st, 2006

A workshop on Open Source Critical Editions will be held on Friday 22nd September at King’s College London. The workshop is co-organised by the AHRC ICT Methods Network, the Perseus Project, and the Digital Classicist. The workshop programme is available online, and we have also made the text of the position papers available in full. Responses may also be posted in the Wiki, and discussion will continue beyond the workshop itself, either here or on the Digital Classicist mailing list.

Oral Tradition

Thursday, September 21st, 2006

Via rogueclassicism comes the news that The Center for Studies in Oral Tradition now offers universal, free access to its academic journal.

Whose side are they on, anyway?

Monday, September 11th, 2006

Timidity and obsequiousness watch; or, Peter Suber nails it:

Universities take industry word for copyright law

By Peter Suber

Cory Doctorow, USC Copyright rules are flawed, Daily Trojan, September 11, 2006. Excerpt:

As students were returning to the USC campus for the 2006-2007 year, they were sent an ominous memo on “Copyright Compliance,” signed by Michael Pearce, USC deputy chief information officer and Michael L. Jackson, vice president for Student Affairs. This extraordinary document set out a bizarre, nonlegal view of copyright’s intent and the university’s purpose, and made it clear that in its authors’ views, scholarship takes a backseat to copyright….

The memo’s purpose was to warn the student body away from peer-to-peer programs and other file-sharing tools. It warned them not against using these tools to infringe copyright, but against using them at all, on pain of losing their Internet access. The memo equates file sharing with infringement.

But this is a narrow and inaccurate view of P2P. P2P systems are the largest libraries of human creativity ever assembled. Even Grokster, the system shut down by the Supreme Court in a highly publicized case last year, was found by the Ninth Circuit U.S. Court of Appeals to have more noninfringing documents than were held in the world’s largest library collections – millions, tens of millions of works that were lawful to search and download.

P2P is a collection of material that might have reduced an earlier generation of scholars to tears. As a science-fiction writer, I’ve grown up with grandiose predictions about the future, but no jet-pack futurist was so audacious as to imagine a repository of knowledge as rich and potent as P2P….

Why would USC trumpet this one-sided, extremist view of copyright? Isn’t the university’s purpose to promote scholarship? Shouldn’t a university be aggressively defending scholarship against organizations like the Recording Industry Association of America, whose indiscriminate enforcers send sloppy takedown notices to university profs named “Usher” whose lecture audio files called “usher.mp3” are mistaken for songs by the artist Usher?

The answer is that, according to the memo, “USC’s purpose is to promote and foster the creation and lawful use of intellectual property.”

It’s hard to imagine a more shocking statement in an official university communique. If this statement were true, then the measure of USC’s success would be the number of patents filed and the number of copyrights registered rather than the amount of original research undertaken, the number of diplomas granted, the volume of citations in scholarly journals…

Comment. Cory is right and the problem extends far beyond USC. Universities routinely accept propaganda from the copyright industry as an accurate statement of copyright law. This causes two kinds of harm. First, universities needlessly shrink the scope of fair use and retreat from permissible (i.e. licensed) copying and redistribution, both for entertainment and for scholarship. Second, they abdicate their responsibility to understand the actual rules and teach them to students.

Google Book Search grants some PDF downloads

Wednesday, August 30th, 2006

from ars technica:

Google went ahead and did it. Books no longer in copyright are now available for download from the Google Book Search site. If you’re looking for something tasty, might we recommend an early English translation of Montaigne’s provocative essay “On Some Verses of Virgil”? (Hint: the naughtiest bits are in the Latin epigrams, the worst of which aren’t even translated).

There’s plenty of precedent for this sort of thing. Project Gutenberg provides access to 19,000 classic books, but in a text-only format. The Christian Classics Ethereal Library offers both text and PDF versions of a massive collection of source material, but only on one particular topic. There’s also the Perseus Project, which offers ancient and Renaissance texts. Google could top all of these projects by providing fully-searchable versions of a much wider selection of books, many of which can also be downloaded as PDFs that are ready to print.

While this only applies to older books, it’s still a great way of democratizing access to the world’s knowledge (in English, at any rate), and it can’t raise any objections from publishers. Books that were previously available only on the shelves of large academic libraries are now available to anyone with a Web connection and some curiosity. Scientia vincit omnia!

But not everyone is thrilled with the results so far. From Planet PDF:

There’s no doubt Google needs to be applauded for the idea, but the execution (i.e. the books they’ve produced) could definitely do with some work. The PDF books are difficult to download, large in size, of such low resolution they’re difficult to read, unsearchable, and do not allow the user to copy text from them. It’s left me wondering what Google expects people to do with the books.

And more critique here.

DAI Archaeological Bibliography goes OA

Thursday, August 3rd, 2006

Sayonara, Dyabola:

We would like to inform you that the German Archaeological Institute (DAI) compiles the four most important bibliographies on archaeology:

  • Archaeological Bibliography (Realkatalog)
  • Bibliography of the Archaeology of the Iberian Peninsula
  • Subject catalogue of the Roman-German Commission
  • Bibliography of the Archaeology of Eurasia (completed bibliography)

You are now able to search the Archaeological Bibliography, which will be continuously updated, free of charge at the website of the DAI (previously accessible via Dyabola). Please take a look at our central online catalogue ZENON: opac.dainst.org.

We are also very pleased to inform you that the English version of the revised “Guidelines for Contributors to Publications of the German Archaeological Institute” is now available at the website of the DAI under RESEARCH > Guidelines for Contributors to Publications. Please take note of the editors’ general remarks and the several lists of detailed information (checklist, keyword list, list of abbreviations). The guidelines may be viewed online or downloaded as PDF files; they take effect immediately.

Surprising History of Copyright

Tuesday, July 25th, 2006

This Thursday, 27 July 2006, Karl Fogel (of Google) is scheduled to chair a session at OSCON entitled: The (Surprising) History of Copyright, and What It Means for Open Source. You can view the abstract online, whence the following:

Much of today’s copyright debate is predicated on the notion that copyright was invented to subsidize authors, when it was actually invented to subsidize distributors … viewing copyright in this new light transforms the question from “Does copying hurt artists?” (no, and anyway copyright wasn’t about the artists) to “What kind of support mechanisms should distribution have today?”

Open Access on the ANE-2 List

Monday, July 17th, 2006

A subscriber to the ANE-2 List has reposted there, with permission, an e-circular attributed to the M S Swaminathan Research Foundation [MSSRF] (Chennai, India) which calls for the:

[proactive promotion] of ‘open access’ to scientific and scholarly literature so even those working in institutions whose libraries cannot afford to subscribe to many journals can have free and unfettered access to all research papers

This post has touched off a familiar sort of discussion on the subject, with a recent post from E. Bruce Brooks (of the Warring States Project) asking pointed questions about:

  • Discipline
  • Money
  • Printability
  • Prestige
  • Double-publishing
  • Typography

Can History Be Open Source?

Thursday, July 13th, 2006

The Center for History and New Media at George Mason University has recently posted to the web a long and thoughtful article by Roy Rosenzweig entitled “Can History be Open Source? Wikipedia and the Future of the Past.” It was originally published in The Journal of American History Volume 93, Number 1 (June, 2006): 117-46. [Spotted on the Maps History discussion list in a post by Joel Kovarsky].

Rosenzweig does a good job explaining the origins, development and practices of Wikipedia for a professional, academic audience unfamiliar with the details. R. goes on to examine “Wikipedia as History,” comparing the breadth, depth, accuracy and style of its treatment of historical topics to that found in other popular and professional encyclopedic works.

R. concludes with a section entitled “Why Should We Care? Implications for Historians” in which he opines:

Still, Wikipedia and Linux show that there are alternative models to producing encyclopedias and software than the hierarchical, commercial model …. And whether or not historians consider alternative models for producing their own work, they should pay closer attention to their erstwhile competitors at Wikipedia than Microsoft devoted to worrying about an obscure free and open-source operating system called Linux.

Fair Use Day

Wednesday, July 12th, 2006

So, only 364 days left until the next Fair Use Day!
(Thanks for the tip, Tom.)

Microsoft bends on OpenDocument

Thursday, July 6th, 2006

(hat tip Peter Suber)

Microsoft said it plans to sponsor an open-source project to create software that will convert Office documents to OpenDocument, a rival format gaining ground, particularly among governments.

The software giant on Thursday launched the Open XML Translator project on SourceForge.net, a popular site for hosting code-sharing projects. The software will be available under the BSD open-source license.

The software, developed by a France-based Microsoft partner, will allow people to use Microsoft Office to open and save documents in the OpenDocument, or ODF, format.

… The goal is to have a Word plug-in for Office 2007 by the end of this year and translators for Excel and PowerPoint next year, said Jean Paoli, the general manager of interoperability and XML architecture at Microsoft. The conversions will be based on Microsoft’s Office Open XML, the XML-based file formats that will be the default in Office 2007, due next year.

Open Context: Sharing Archaeological Data Digitally

Friday, June 30th, 2006

from About Archaeology:

A new tool in the open source arsenal announced its beta launch last week. Called Open Context, the project involves scientists from Cambridge University (UK), Harvard University, the Smithsonian Institution, U.C. Berkeley, and the University of Chicago, and is supported by grants from the William and Flora Hewlett Foundation, in-kind services from Deloitte and Touche and the Electronic Frontier Foundation, and help from individual donors.

Open Context is a project out of the Alexandria Archive Institute (AAI), a nonprofit institute named after the famous Ptolemaic Library of Alexandria in Egypt. The AAI is intent on building a place to share data on world history and archaeology. Developed by Eric Kansa, Sarah Whicher Kansa and Jeanne Loipiparo, the AAI’s demonstration system, Open Context, combines “reports, observations, maps, plans, analyses, digital files and images of excavations and surveys” generated by archaeological research, and makes them available to students and researchers around the globe.

Jill Coffin: Analysis of open source principles in diverse collaborative communities

Friday, June 30th, 2006

From Infobits:

The June 2006 issue of FIRST MONDAY features selected papers from “FM10 Openness: Code, Science, and Content,” a conference held in May and sponsored by First Monday journal, the University of Illinois at Chicago University Library, and the Maastricht Economic Research Institute on Innovation and Technology (MERIT). The theme of the conference was open access (in journals, communities, and science) and open source. Links to the online papers, along with citations to those not available online, are available at http://www.firstmonday.org/issues/issue11_6/.

The paper by Jill Coffin caught my eye for its useful list of characteristics.

This paper applies traits common to successful free software and open source hacker communities as a framework to analyze three non–hacker collaborative communities. These traits were distilled from my analysis of various open source communities including the Linux, Debian and Mozilla communities. While this framework may not tell the complete story of these communities, the analysis yields observations relevant to the design of collaborative systems. The framework consists of the following characteristics of successful free software/open source communities:

  • open and widespread membership based upon participation
  • geographically distributed, asynchronous, networked collaboration
  • project transparency, particularly open, recorded dialog and peer review of project materials, discussion and decisions
  • a compelling foundational artifact to organize participation and build upon
  • collaborative, iteratively clarified, living documents and project artifacts
  • a mechanism for institutional history
  • a community–wide sense of project ownership
  • a hybrid political system based upon meritocracy
  • a trusted benevolent dictator, typically the project founder
  • foundational developers and early adopters who, along with the benevolent dictator, set project ethos
  • consensus as a decision–making tool
  • upholding the right to fork.

New book on OA

Friday, June 30th, 2006

from the mailbox:

A new book, documenting the major strands and issues of open access, will be published 17th July.

Jacobs, N. (ed.) (2006) Open Access: Key Strategic, Technical and Economic Aspects. Chandos Publishing.

It covers the rationale, history, economics, technology and culture of open access, views from major stakeholders, updates from around the world, and visions of the future. The following authors have contributed:

Alma Swan, Charles W. Bailey, Jr., Jean-Claude Guédon, Andrew Odlyzko, Michael Kurtz, Tim Brody, Chris Awre, Stevan Harnad, Arthur Sale, Robert Terry, Robert Kiley, Matthew Cockerill, Mary Waltham, Colin Steele, Leo Waaijers, Peter Suber, Frederick J. Friend, John Shipp, D. K. Sahu, Ramesh C. Parmar, Clifford Lynch, Nigel Shadbolt and Les Carr.

Many of the chapters are, of course, available open access on the web. Further details of the book are available at:
http://www.chandospublishing.com/catalogue/record_detail.php?recordID=103

To pre-order a copy, please contact:

Turpin Distribution Services Limited
Pegasus Drive
Stratton Business Park
Biggleswade
Bedfordshire SG18 8TQ
U.K.
Tel: +44 (0)1767 604951
Fax: +44 (0)1767 601640
General e-mail: custserv@turpin-distribution.com

A new Blog: Digging Digitally

Wednesday, May 31st, 2006

Eric Kansa of the Alexandria Archive Institute has started a new blog, Digging Digitally: Archaeology, data sharing, digitally enabled research and education, on behalf of the Digital Data Interest Group of the SAA.

“DDIG members can use this blog to share news and announcements about their programs and activities. Hopefully, DDIG members will post suggestions on developing data sharing standards, intellectual property frameworks, policies, and other issues. DDIG members are also invited to use this weblog as a way to share links to individuals, projects, programs and organizations…”

FRPAA 2006

Tuesday, May 30th, 2006

From Current Cites:

Sternstein, Aliya. “Bill Demands Free Public Access to Science Reports,” Federal Computer Week 20(15) (15 May 2006): 56. – It only makes sense, right? Taxpayers should have free access to the science research that they’ve paid for. Well, that access would be guaranteed if a bill introduced by Sens. John Cornyn (R-TX) and Joe Lieberman (D-CT) — the Federal Research Public Access Act of 2006 — makes it into law. Says the article, “It mandates that agencies with annual research budgets of more than $100 million to implement a public access policy granting swift access to research supported by those agencies.” Basically, this means that articles reporting on publicly funded research must be made freely available online six months after publication in a scholarly journal. Some 11 agencies are covered: the departments of Agriculture, Commerce, Defense, Education, Energy, Health and Human Services, Homeland Security and Transportation; the Environmental Protection Agency; NASA; and the National Science Foundation. The article notes that “some publishers believe the six-month provision will disrupt their business models, and they remain skeptical that legislation is needed.” The Association of American Publishers (AAP), which opposes the bill, “is urging that an independent study be conducted to measure the bill’s potential impact on scientific quality, the peer-review process, and the financial standing of journals…” – SK

Arxiv includes some research on matters related to the ancient world

Tuesday, May 30th, 2006

Alun Salt at blogographos notes the presence of certain academic papers on classics-related topics in the physicists’ OA repository.

Baidu Baike

Friday, May 19th, 2006

from a CHE piece on a rip-off of Wikipedia in China:

It’s worth noting that China’s censorship policies make it virtually impossible for a true open-source encyclopedia to exist.

New Mellon prizes

Wednesday, May 17th, 2006

from the CHE:

The Andrew W. Mellon Foundation is seeking nominations for the 2006 Mellon Awards for Technology Collaboration, a new contest that will recognize leaders in the field of open-source software. The awards will recognize nonprofit groups that have made great strides in collaborative software development over the past year. And the winners will collect handsome (and, the foundation hopes, helpful) prizes: Noteworthy projects will receive grants of $25,000 or $100,000, depending on how widespread their appeal is. The Mellon Foundation has recruited a virtual who’s who of computing experts to judge the entrants. The panel includes Vinton G. Cerf, a key figure in the founding of the Internet, and Timothy J. Berners-Lee, the creator of the World Wide Web.

A new translation of Euripides’ Medea, bearing a CC license

Tuesday, May 16th, 2006

Celia Luschnig has produced a new translation of the Medea as part of the Diotima anthology. Special thanks to John T. Quinn, Translation Editor for Diotima, for his help. (There’s also a pretty-print PDF version.)

This work bears a Creative Commons Attribution-NonCommercial-NoDerivs 2.5 License.

Other works by Celia Luschnig:

And by John T. Quinn:

Speakers at Convocation on Humanities Warn About Privatization of Materials (CHE)

Monday, May 15th, 2006

(An excerpt from an article by Richard Byrne in the CHE — subscription needed)

A joint convocation held by the American Council of Learned Societies and the Association of American Universities to assess the state of the humanities drew over 200 scholars and administrators — as well as two prominent Congressional advocates for arts and letters — to a hotel here on Friday.

The convocation, which was pegged in part to a 2004 report issued by the association, “Reinvigorating the Humanities,” eschewed much of the doom and gloom that has surrounded such gatherings in recent decades. Speakers largely agreed that scholarship in the humanities was vigorous, but that the disciplines still faced serious challenges posed by the digital revolution, a rigidity in academic organization, and a lack of public outreach.

Ideas that struck the strongest chord at the convocation included a call from some speakers to resist the increasing privatization of the raw material of scholarship by corporations as such material is digitized.

Changes in copyright law to extend the length of time that material remains in copyright and efforts by companies such as Google to digitize books into privately controlled databases have increasingly placed the source material that scholars in the humanities use in private control for longer periods of time. Both access to such material and permission to reproduce it in published scholarly work have been tightened significantly.

Paul N. Courant, a professor of economics and public policy at the University of Michigan at Ann Arbor, argued that such trends are leading to “a pervasive inaccessibility of cultural materials.”

“The humanities are at risk here,” he said at one of the convocation sessions. “We risk losing our own source material. There will be a hole in our history.”

He recommended that universities wage an aggressive campaign to defend and extend the “fair use” provisions of copyright law.

“Scholarship is fair use,” Mr. Courant declared. “Period.”

Copyright term and the public domain

Monday, May 15th, 2006

A handy chart I had not yet seen, from the Cornell Copyright Information Center.

Footnote 7 is useful:

A 1961 Copyright Office study found that fewer than 15% of all registered copyrights were renewed. For books, the figure was even lower: 7%. See Barbara Ringer, “Study No. 31: Renewal of Copyright” (1960), reprinted in Library of Congress Copyright Office, Copyright law revision: Studies prepared for the Subcommittee on Patents, Trademarks, and Copyrights of the Committee on the Judiciary, United States Senate, Eighty-sixth Congress, first [-second] session (Washington: U.S. Govt. Print. Off., 1961), p. 220. A good guide to investigating the copyright and renewal status of published work is Samuel Demas and Jennie L. Brogdon, “Determining Copyright Status for Preservation and Access: Defining Reasonable Effort,” Library Resources and Technical Services 41:4 (October 1997): 323-334. See also Library of Congress Copyright Office, How to investigate the copyright status of a work, Circular 22 (Washington, D.C.: Library of Congress, Copyright Office, 2004). The Online Books Page FAQ, especially “How Can I Tell Whether a Book Can Go Online?” and “How Can I Tell Whether a Copyright Was Renewed?”, is also very helpful.

Scan this Book!

Monday, May 15th, 2006

Kevin Kelly, Scan This Book! New York Times, May 14, 2006:

For 2,000 years, the universal library, together with other perennial longings like invisibility cloaks, antigravity shoes and paperless offices, has been a mythical dream that kept receding further into the infinite future. Until now. When Google announced in December 2004 that it would digitally scan the books of five major research libraries to make their contents searchable, the promise of a universal library was resurrected…. Brewster Kahle, an archivist overseeing another scanning project, says that the universal library is now within reach. “This is our chance to one-up the Greeks!” he shouts. “It is really possible with the technology of today, not tomorrow. We can provide all the works of humankind to all the people of the world. It will be an achievement remembered for all time, like putting a man on the moon.” And unlike the libraries of old, which were restricted to the elite, this library would be truly democratic, offering every book to every person…. Ideally, in such a complete library we should also be able to [go beyond books and] read any article ever written in any newspaper, magazine or journal. And why stop there?…

From the days of Sumerian clay tablets till now, humans have “published” at least 32 million books, 750 million articles and essays, 25 million songs, 500 million images, 500,000 movies, 3 million videos, TV shows and short films and 100 billion public Web pages. All this material is currently contained in all the libraries and archives of the world. When fully digitized, the whole lot could be compressed (at current technological rates) onto 50 petabyte hard disks. Today you need a building about the size of a small-town library to house 50 petabytes. With tomorrow’s technology, it will all fit onto your iPod. When that happens, the library of all libraries will ride in your purse or wallet — if it doesn’t plug directly into your brain with thin white cords….

Turning inked letters into electronic dots that can be read on a screen is simply the first essential step in creating this new library. The real magic will come in the second act, as each word in each book is cross-linked, clustered, cited, extracted, indexed, analyzed, annotated, remixed, reassembled and woven deeper into the culture than ever before…. Buoyed by [the] success [of Wikipedia], many nerds believe that a billion readers can reliably weave together the pages of old books, one hyperlink at a time. Those with a passion for a special subject, obscure author or favorite book will, over time, link up its important parts. Multiply that simple generous act by millions of readers, and the universal library can be integrated in full, by fans for fans…. When books are deeply linked, you’ll be able to click on the title in any bibliography or any footnote and find the actual book referred to in the footnote. The books referenced in that book’s bibliography will themselves be available, and so you can hop through the library in the same way we hop through Web links, traveling from footnote to footnote to footnote until you reach the bottom of things….

Science is on a long-term campaign to bring all knowledge in the world into one vast, interconnected, footnoted, peer-reviewed web of facts. Independent facts, even those that make sense in their own world, are of little value to science. (The pseudo- and parasciences are nothing less, in fact, than small pools of knowledge that are not connected to the large network of science.) In this way, every new observation or bit of data brought into the web of science enhances the value of all other data points. In science, there is a natural duty to make what is known searchable. No one argues that scientists should be paid when someone finds or duplicates their results. Instead, we have devised other ways to compensate them for their vital work. They are rewarded for the degree that their work is cited, shared, linked and connected in their publications, which they do not own…. To a large degree, they make their living by giving away copies of their intellectual property in one fashion or another.

(hat tip, Peter Suber)

Lessig speech: Who Owns Culture?

Friday, May 5th, 2006

from the CHE:

Who Owns Culture?

Lawrence Lessig, a Stanford University law professor and cyberspace theorist, is well-known for challenging traditional notions of copyright. A 20-minute video of a recent speech given by Mr. Lessig is making the rounds on some popular blogs. The speech, “Who Owns Culture?,” provides a brief look at how new technologies, starting with the player piano, have challenged traditional models of how copyrighted materials are distributed and how artists are paid. Mr. Lessig says that we’re now in a “remix culture” where people find creative ways to meld existing creative works to make something completely new. He argues that copyright laws need to be reformed to allow such digital creativity to thrive.

OpenDocument Format accepted as ISO standard

Thursday, May 4th, 2006

from Peter Suber:

The OASIS OpenDocument Format (ODF) has been approved as ISO/IEC 26300. ODF is an XML-based, Open Source file specification for the storage of files produced by office productivity applications (word processor documents, spreadsheets, presentations, drawings, etc.). ODF is already fully supported by the OpenOffice.org productivity suite, an Open Source software bundle issued under the GNU Lesser General Public License (GNU LGPL). OpenOffice.org editions are available in 65 languages and may be run on Windows (98/ME/NT/2000/XP), Mac OS, Linux, and Solaris, among other operating systems, even Windows 95. OpenOffice.org software can read and write to the proprietary document storage formats employed in the Microsoft Office suite.
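One practical consequence of the standard is that an ODF document is simply a ZIP package of XML parts, with the main body in content.xml, so its contents can be inspected with ordinary tools. The sketch below is a minimal illustration, not a full ODF consumer: it builds a toy ODF text package in memory and pulls the paragraph text back out. (The helper name `extract_paragraphs` and the toy package are our own; real .odt files carry additional parts such as styles.xml and a manifest.)

```python
import io
import zipfile
import xml.etree.ElementTree as ET

# An ODF document (now ISO/IEC 26300) is an ordinary ZIP package whose main
# body lives in content.xml; text paragraphs are text:p elements in the
# OpenDocument text namespace.
TEXT_NS = "urn:oasis:names:tc:opendocument:xmlns:text:1.0"

def extract_paragraphs(odf_bytes):
    """Return the plain text of every text:p paragraph in an ODF package."""
    with zipfile.ZipFile(io.BytesIO(odf_bytes)) as pkg:
        root = ET.fromstring(pkg.read("content.xml"))
    return ["".join(p.itertext()) for p in root.iter(f"{{{TEXT_NS}}}p")]

# Build a minimal, toy ODF text package in memory.
content = (
    '<office:document-content '
    'xmlns:office="urn:oasis:names:tc:opendocument:xmlns:office:1.0" '
    'xmlns:text="urn:oasis:names:tc:opendocument:xmlns:text:1.0">'
    '<office:body><office:text>'
    '<text:p>Scientia vincit omnia</text:p>'
    '</office:text></office:body></office:document-content>'
)
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as pkg:
    pkg.writestr("mimetype", "application/vnd.oasis.opendocument.text")
    pkg.writestr("content.xml", content)

print(extract_paragraphs(buf.getvalue()))  # ['Scientia vincit omnia']
```

This transparency, an open package format readable with nothing more than a ZIP library and an XML parser, is a large part of why governments have favoured ODF over proprietary binary formats.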

EpiDoc: Epigraphic Documents in TEI XML

Monday, April 24th, 2006

There’s a new home on SourceForge for EpiDoc, and the EpiDoc Guidelines themselves are available here on the Stoa server.

Principles:

Five important principles have governed the elaboration of EpiDoc techniques and tools from the beginning:

  • EpiDoc and its tools should be open and available to the widest possible range of individuals and groups; therefore, all documents and software produced by the EpiDoc Community are released under the GNU Public License
  • Insofar as possible, EpiDoc should be compliant or compatible with other published standards: we should strive to avoid re-inventing wheels or creating data silos
  • Insofar as possible, EpiDoc projects should work collaboratively and supportively with other digital epigraphy initiatives, especially those sanctioned by the Association Internationale d’Épigraphie Grecque et Latine
  • In the arena of transcription, EpiDoc must facilitate the encoding of all editorial observations and distinctions signaled in traditional print editions through the use of sigla and typographic indicia
  • We avoid encoding the appearance of these sigla and indicia; rather, we encode the character (or semantics) of the distinction or observation the human editor is making. The rendering of typographic representations of these distinctions is accomplished using XSLT or other methods.
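The last two principles can be illustrated with a small sketch (Python, not part of EpiDoc itself; the element names follow TEI usage, but the rendering rules here are deliberately simplified assumptions). The markup records the editorial judgement, restored letters and an unrestorable gap, while a separate transform, standing in for the XSLT stage, supplies the Leiden sigla:

```python
import xml.etree.ElementTree as ET

# A toy EpiDoc-style transcription: the markup records what the editor
# judged (letters restored in a lacuna, a gap that cannot be restored),
# not the brackets and dashes of the printed edition.
fragment = (
    '<ab>'
    'imp caesari '
    '<supplied reason="lost">tra</supplied>iano '
    '<gap reason="lost" unit="character" quantity="3"/>'
    '</ab>'
)

def to_leiden(elem):
    """Render semantic markup back into conventional Leiden sigla."""
    out = elem.text or ""
    for child in elem:
        if child.tag == "supplied":
            out += "[" + (child.text or "") + "]"   # restored text: [abc]
        elif child.tag == "gap":
            out += "[---]"                          # lost, unrestored text
        out += child.tail or ""
    return out

print(to_leiden(ET.fromstring(fragment)))
# imp caesari [tra]iano [---]
```

Because the typography is generated rather than encoded, the same transcription can be rendered with different sigla conventions, or queried semantically ("find all restorations"), without touching the source file.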