Every book ever published in every language
Writing about web page http://news.bbc.co.uk/1/hi/magazine/6924022.stm
The BBC article raises some interesting issues about the Open Library project, contrasting it with Google’s own library project and the way information about books on the web often links back to Amazon.
It will be interesting to see whether the concept of the Open Library takes off. I think there are dangers of malicious editing and spam, as with any collaborative project, and there is a possibility that the owners of Open Library will try to make a profit from it in some way in the future. But then, if they have invested in the database design as the article describes, and they have to police and protect it, maybe they deserve some of the profit. Which leads me back to the question: why don't we just use Google's project or Amazon anyway?
I think it would be far better to allow people to edit library catalogues that already exist, to contribute their own reviews and tags on top of the professionally created information, but not in place of it.
Library catalogues already adhere to an international standard for machine-readable bibliographic data (known as MARC) and are therefore theoretically cross-searchable. To collate every book ever published, all you would need is a cross-searching platform and a selection of libraries wide enough to cover every book ever published… and an eternity to wait for the results of your search :-) Alternatively, you would need a metadata harvesting tool with access to the library catalogue records (via the OAI-PMH protocol), a super-huge database to store all the records in, and a super-fast machine to return results to you quickly.
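To give a flavour of what that harvesting step involves: an OAI-PMH repository returns batches of XML records in response to a `ListRecords` request, with a `resumptionToken` telling the harvester where to pick up the next batch. Here is a minimal sketch of parsing one such response. The XML below is a hand-made sample (not from a real repository), and a real harvester would fetch each page over HTTP and loop until no token is returned.

```python
# Minimal sketch: parse one page of an OAI-PMH ListRecords response.
# The sample XML is hand-made for illustration, not from a real repository.
import xml.etree.ElementTree as ET

OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

sample_response = """<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <header><identifier>oai:example:1</identifier></header>
      <metadata>
        <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>A Sample Catalogue Record</dc:title>
        </oai_dc:dc>
      </metadata>
    </record>
    <resumptionToken>page-2</resumptionToken>
  </ListRecords>
</OAI-PMH>
"""

def harvest_page(xml_text):
    """Return (identifier, title) pairs plus the resumptionToken, if any."""
    root = ET.fromstring(xml_text)
    records = []
    for rec in root.iter(OAI + "record"):
        ident = rec.find(OAI + "header/" + OAI + "identifier").text
        title = rec.find(".//" + DC + "title").text
        records.append((ident, title))
    # The resumptionToken (if present) is the cursor for the next request.
    token = root.find(".//" + OAI + "resumptionToken")
    return records, (token.text if token is not None else None)

records, token = harvest_page(sample_response)
print(records)  # [('oai:example:1', 'A Sample Catalogue Record')]
print(token)    # page-2
```

A full harvester would repeat the request with `verb=ListRecords&resumptionToken=…` until the token is empty, which is exactly why the "super-huge database" matters: the records arrive in a trickle, page by page.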
Not that I would want to search the records of every book ever published in every language anyway. Imagine the time it would take you to find what you were actually looking for. Imagine how overwhelmed you would feel once you got the results set. Only someone with a very precise query, and the advanced information skills to express it accurately, would be able to handle such a search with any degree of success.
It's never likely to be necessary for someone to search every book ever published, surely? That is why we have small branch libraries, subject libraries, etc., and ways of selecting what goes into a library in the first place. The library acts as a filter for you, and its content reflects the interests and needs of its patrons… which is why I like the idea of putting users' reviews and tags onto library records.
Wouldn’t you rather read a review from someone who has studied on the same course as you, than from a random person on the Amazon website or Open Library?
If you weren't finding what you needed from one library, you could then look for another library with a different specialty or focus. So a search for a library, perhaps based on the libraries' own collection descriptions (libraries have standards for those, too), would be a good way to identify which catalogues you could or should be searching (whether separately or through a cross-searching platform). What you would need alongside that is access to those other specialist libraries and their content. Which is when reciprocal visiting/loan arrangements and digitisation initiatives become interesting.
If Open Library or Google were ever to succeed in creating a collection of every book ever published in every language, who would use it, and how? In order to simplify the search process, and in order for people to handle the number of results returned, someone somewhere is making decisions for you about what you should find… and in Google's case, not being very open about how they do it. Wouldn't you rather that someone be your friendly librarian, whom you can speak to and who can explain how they chose a particular book or collection, than someone you've never heard of doing techie things with algorithms you don't understand?