All entries for July 2007
July 31, 2007
Writing about web page http://news.bbc.co.uk/1/hi/magazine/6924022.stm
The BBC article raises some interesting issues about the Open Library project, contrasting it with Google’s own library project and the way information about books on the web often links back to Amazon.
It will be interesting to see if the concept of the Open Library takes off. I think there are dangers of malicious editing & spam, as with any collaborative project, and there is a possibility that the owners of Open Library will try to make a profit from it in some way in the future. But then, if they invested in the database design as the article describes, and they are having to police and protect it, then maybe they deserve some of the profit. Which leads me back to the question of why we don’t just use Google’s project or Amazon anyway?
I think it would be far better to allow people to edit library catalogues that already exist, to contribute their own reviews and tags on top of the professionally created information, but not in place of it.
Library catalogues already adhere to an international standard of machine readable bibliographic data (known as MARC) and are therefore theoretically cross-searchable. All you would need to achieve the aim of collating all the books ever published is a cross-searching platform, and a selection of libraries wide enough to cover every book ever published… and an eternity to wait for the results of your search :-) or else a metadata harvesting tool with access to the library catalogue records (via the OAI protocol) and a super-huge database to store all the records in, on a super-fast machine to return results to you quickly.
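The harvesting route described above can be sketched in a few lines. This is a minimal, illustrative sketch of how an OAI harvester talks to a repository: it sends a ListRecords request, reads the batch of records out of the XML reply, and follows the resumptionToken to fetch the next batch. The endpoint URL and the sample response below are invented for illustration, not any real library’s service.

```python
# Sketch of OAI-PMH harvesting: request a batch of records, then follow
# the resumptionToken until the repository's record set is exhausted.
import urllib.parse
import xml.etree.ElementTree as ET

OAI = "http://www.openarchives.org/OAI/2.0/"
DC = "http://purl.org/dc/elements/1.1/"

def list_records_url(base_url, metadata_prefix="oai_dc", token=None):
    """Build a ListRecords request URL; after the first batch, only the
    resumptionToken is sent back to the repository."""
    params = {"verb": "ListRecords"}
    if token:
        params["resumptionToken"] = token
    else:
        params["metadataPrefix"] = metadata_prefix
    return base_url + "?" + urllib.parse.urlencode(params)

def parse_titles(response_xml):
    """Pull the Dublin Core titles and any resumptionToken out of one
    ListRecords response."""
    root = ET.fromstring(response_xml)
    titles = [t.text for t in root.iter("{%s}title" % DC)]
    tok = root.find(".//{%s}resumptionToken" % OAI)
    return titles, (tok.text if tok is not None else None)

# A cut-down example response, shaped like a real OAI-PMH reply:
SAMPLE = """<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record><metadata>
      <dc xmlns:dc="http://purl.org/dc/elements/1.1/">
        <dc:title>An Example Catalogue Record</dc:title>
      </dc>
    </metadata></record>
    <resumptionToken>batch-2</resumptionToken>
  </ListRecords>
</OAI-PMH>"""

print(list_records_url("http://example.org/oai"))
titles, token = parse_titles(SAMPLE)
print(titles, token)
```

The harvester would keep calling `list_records_url` with each new token until the repository returns none, which is why the “super-huge database” is the hard part, not the protocol.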
Not that I would want to search the records of every book ever published in every language anyway. Imagine the time it would take to find what you were actually looking for, and how overwhelmed you would feel once you got the results set. Only someone with a very precise query, and the advanced information skills to express it accurately, would be able to handle such a search with any degree of success.
It’s not ever likely to be necessary for someone to search every book ever published, surely? That is why we have small branch libraries and subject libraries, etc and ways of selecting what goes into a library in the first place. The library acts as a filter for you, and its content reflects the interests and needs of its patrons… which is why I like the idea of putting users’ reviews and tags onto library records.
Wouldn’t you rather read a review from someone who has studied on the same course as you, than from a random person on the Amazon website or Open Library?
If you weren’t finding what you needed from one library, you could then look for another library with a different specialty or focus. So a search for a library, perhaps based on the libraries’ own collection descriptions (libraries have standards for those, too), would be a good place to identify which catalogues you could/should be searching (whether separately or through a cross-searching platform). What you would need alongside that is access to those other specialist libraries and their content. Which is when reciprocal visiting/loan arrangements and digitisation initiatives become interesting.
If Open Library or Google were ever to succeed with creating a collection of every book ever published in every language, who would use it and how? In order to simplify the search process, and in order for people to handle the number of results returned, someone somewhere is making decisions for you about what you should find… and in Google’s case not being very open about how they do it. Wouldn’t you rather that that someone was your friendly librarian who you can speak to, who can explain how they chose a particular book or collection, than someone you’ve never heard of doing techie things with algorithms that you don’t understand?
July 27, 2007
Writing about web page http://www.library.cornell.edu/about/announce.html
Digitise your out-of-print holdings and sell them as print-on-demand facsimile publications. Then come back to me and let me know how the business is doing!
Cornell University Library has partnered with BookSurge, a subsidiary of Amazon.com, to publish over 6,000 titles from its collections in just this way. “When a book is retrieved online from the library’s Web site, records now indicate whether it is also available as a print-on-demand title via Amazon.com” and there is also a dedicated online bookstore homepage.
July 24, 2007
Writing about web page http://chronicle.com/wiredcampus/article/2202/facebook-shuns-some-library-search-tools
The blog posting referenced has an interesting take on Facebook applications and how they get approved, and the comments consider whether libraries should even be on Facebook at all. Personally I don’t think it would do any harm and would love to put a Facebook application that searches our library catalogue onto my profile.
Not sure what is involved with creating such an application but I’d be interested to know how it works out for the libraries that have done it.
July 18, 2007
I’ve just finished a report on Web 2.0 technologies in libraries that is now available online. We’re now waiting to see what our library management group make of the recommendations…
July 17, 2007
Researcher support in academic libraries is a current interest of mine. I would like to hear about different models used in Universities anywhere in the world.
In particular I am interested in how a physical space in a library can integrate into one single hub or point of access all the research support services, including a virtual research environment, that researchers may need. By researchers I mean the whole range from Masters by Research and PhD students to Post-Doctoral and Research Fellows as well as the established academics.
This range of services is not limited to library services, or to the services provided by the institution for its researchers. I am sure there are already many universities where such ideas have developed and are successful.
July 16, 2007
Writing about web page http://userslib.com/?p=74
Every day I learn something new about how these social networking technologies can be or are being used in libraries. It definitely still feels like early days, though.
This poll illustrates how even Facebook users are reluctant to contact their librarians through Facebook. I note that the survey was carried out in July this year, but there is no mention of whether any Facebook Librarian application was available for the University of Michigan Library, which carried out the survey. It is very telling that most people want to see a librarian face to face, though.
This particular blog posting is a follow up to an earlier one about their experience of using a Facebook flyer and a Facebook marketplace ad to advertise the library home page. The marketplace ad is free, and to my mind it sounds like a better idea than creating a library “profile” on Facebook.
I’m not keen on a library “profile” because I don’t think that people are realistically going to make “friends” with the library, and that is what the profile is supposed to be all about. Using this technology in a way that is so obviously not what it’s supposed to be used for would make us look like embarrassing old people! Besides, the terms and conditions prohibit me from signing up for a library profile as I have to agree that I’m creating a personal account.
So I might investigate the marketplace ad idea, as well as the Librarian application, and a catalogue search application that I am currently in favour of trying out.
July 12, 2007
Writing about web page http://artfossett.blogspot.com/2007/07/big-red-button.html
I really like the idea of the Twitterbot on Eduserv island that Andy Powell (Art Fossett) describes in his blog. It could work as a way of embedding a library enquiry service into SL educational environments. I’ve not tried it out though, so perhaps I should…
July 11, 2007
Writing about web page http://www.rss4lib.com/2007/07/rss_focuses_site_readership.html
By using my Bloglines account to read RSS feeds of interest to me, I found this blog posting about a report on the impact of RSS feeds on visits to websites. I find it interesting that someone thought this worth measuring. When I read RSS feeds in Bloglines, I’m not actually visiting the site they came from: most of the time I just read the entry in Bloglines without ever clicking through. So presumably RSS feeds ought really to be reducing the number of visits to a web page, even while the content reaches the desired audience? Or does my access through Bloglines get counted? I don’t really understand how RSS works, nor how web page accesses are counted. The report itself discusses the results, but not the methods used.
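The puzzle above comes down to what an aggregator actually fetches. This sketch shows what a reader like Bloglines does with a feed: it downloads the feed document itself, so every entry can be displayed without a single request going to the article’s own page. The feed below is a made-up example, not a real blog’s feed.

```python
# An aggregator fetches the RSS feed document; the article pages named in
# the <link> elements are never requested unless the reader clicks through,
# which is why feed reads may not show up in a site's page-view counts.
import xml.etree.ElementTree as ET

SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Library Blog</title>
    <item>
      <title>RSS and site readership</title>
      <link>http://example.org/posts/rss-readership</link>
      <description>Full text of the post travels inside the feed.</description>
    </item>
  </channel>
</rss>"""

def read_feed(feed_xml):
    """Return (title, link, description) for each item in an RSS 2.0 feed."""
    channel = ET.fromstring(feed_xml).find("channel")
    return [(i.findtext("title"), i.findtext("link"), i.findtext("description"))
            for i in channel.findall("item")]

for title, link, text in read_feed(SAMPLE_FEED):
    print(title, "->", link)
```

So whether a feed read is “counted” depends entirely on whether the site measures requests for the feed file as well as for its pages, which may be why the report’s methods matter so much.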
July 10, 2007
Writing about web page http://alumni.media.mit.edu/~fviegas/papers/history_flow.pdf
Last week I had a few days off, but I also attended a talk on “Transliteracies” at De Montfort’s Institute of Creative Technologies, by Professor Alan Liu from the University of California. I thought it sounded relevant to my investigations into the potential of Web 2.0 in the academic library setting.
It was a fairly highbrow talk, and I confess to feeling a little lost, as I had never heard the term “transliteracy” before and I still can’t be sure what it means after the lecture! Professor Liu did not attempt to explain it, so no doubt it is a term well understood by others that I should look up. A quick Google search returns many results from the US, so perhaps it is more commonly used over there. In any case, my lack of understanding didn’t stop me following the content of the lecture.
Professor Liu’s perspective is that online reading is really networked reading. It is non-linear reading, involving browsing, scanning and cursory reading. With Web 2.0 technologies collective and social reading practices are encouraged. Such practices have existed for a very long time before the advent of the Web or Web 2.0, eg with reading groups and reading aloud, etc. This long term view of reading and of published information was a theme throughout Professor Liu’s lecture.
The Web has need of information quality assurance, and the wider community of users of that information can “police” the content to ensure some kind of quality. Apparently under 5% of visitors to Wikipedia have contributed content to it. According to Professor Liu, the community of users should police the quality of web content, and be separate from the community of authors. Really what we need is for all users of web information to be aware of the provenance and reliability of the information they are reading.
The next part of the lecture got a bit more technical, talking about mark-up languages and ontologies to help computers to understand the text that we can read on a web page, and to follow who is reading what, who those people are linked to and so on. Such “social mark-up” can show what underlies authority and currency and help us in our evaluation of the quality of web resources. Professor Liu mentioned various projects relating to such developments.
One very interesting piece of work that Professor Liu highlighted was some research being done to graphically illustrate how Wikipedia entries are edited over time. I’ve linked to a pdf article describing this. The graphs created by researchers at MIT show patterns that indicate edit wars (eg for the “chocolate” page) and instances when a page had to be taken down. Such patterns indicate that the content of a particular page is controversial and therefore should be considered in a wider context. Professor Liu advocated highly visible flags on Wikipedia entries to indicate such usage patterns, or filters for searching based on the patterns. (Contentious pages on Wikipedia that are locked to prevent edit wars have a small padlock on the top right. This is a very discreet symbol and is easily missed.)
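A toy version of the kind of pattern the MIT visualisations surface can be written in a few lines. This is my own illustrative sketch, not the researchers’ method: it flags likely edit wars by counting reverts, i.e. revisions whose text exactly restores an earlier version of the page. The revision history used below is invented.

```python
# Flag a likely edit war by counting reverts: revisions whose text
# exactly matches a previously seen version of the page.
def count_reverts(revisions):
    """Count revisions that restore a previously seen version of the page."""
    seen, reverts = set(), 0
    for text in revisions:
        if text in seen:
            reverts += 1
        seen.add(text)
    return reverts

# Invented revision history for a contested page:
history = ["A", "B", "A", "C", "A"]   # version "A" is restored twice
print(count_reverts(history))          # -> 2
```

A count like this (or a revert rate over time) is exactly the sort of signal that could drive the visible flags Professor Liu advocated.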
“Social mark-up” could be added to web pages off the back of Web 2.0 technologies, to help us to evaluate what we find… it sounds to me a bit like the complicated ways in which Google calculates which search results to present first. No doubt there will be new methods of assessing information quality and new ways of circumventing or taking advantage of those methods that will lead to more invention and so on… Information begets information (about the information) and all of it should be made available on the Web. It looks like the role of the information professional should be secure as the Web becomes ever more complicated to navigate, evaluate and understand.