1984 and Brave New LIS

I have been describing library and information science as an understanding and study of the information communication chain for several years now. More recently, I have promoted the effusive declaration that LIS underpins civilized society: no organization and access to information, no civilization. Having had the good fortune to be working in China earlier this year, I was naively stunned when I couldn’t access Twitter there – it’s so easy to take for granted that we can make our own decisions about what we read and write, isn’t it?

This lovely infographic (not mine, linked to its authors), Orwell vs Huxley, reminds me of why I study and teach LIS: facilitating understanding of the information communication chain at least allows us to know what is out there, even if it is not allowed.

Information is the new black

It must be the popularising effect of James Gleick’s new book “The Information”, because suddenly everyone I meet wants to talk about information: its history, its epistemology and Shannon’s 1948 mathematical theory of communication (MTC), which became known as the mathematical theory of information. This is certainly good news for our information science course, where information has been considered from an academic perspective since 1961. I feel my time has come; all those hours spent memorizing equations to show that I truly, deeply understood how many signals you can push down a channel of a certain size, allowing for noise, have finally been rewarded, and I can now brandish my information-science credentials with a superior air of I told you so. Information is the new black, and everyone is wearing it.
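Those capacity calculations are easily revisited. As a minimal sketch (my own illustration, not drawn from any of the texts discussed here; the telephone-line figures are typical textbook values), the Shannon–Hartley theorem gives the maximum error-free rate of a channel of given bandwidth in the presence of Gaussian noise:

```python
import math

def channel_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley theorem: C = B * log2(1 + S/N), the maximum
    error-free data rate (bits per second) of a noisy channel."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A telephone-grade line: ~3000 Hz of bandwidth at ~30 dB signal-to-noise.
snr = 10 ** (30 / 10)  # convert 30 dB to a linear power ratio (1000)
print(channel_capacity(3000, snr))  # roughly 30,000 bits per second
```

Note that the theorem says nothing about what the bits mean – only how many of them the channel can carry, which is precisely the point at issue below.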

I believed that I would forget Shannon’s theory entirely, as soon as the exam was over. It did not seem so relevant to my work at the time, which was with information resources in toxicology. Life, however, with a patient smirk, ensured that the ashes of the MTC rose like a phoenix 20 years later, when I was faced with presenting the mathematical good news to contemporary LIS students taking our Library and Information Science Foundation module as part of their masters. I dusted off my 1986 copy of Robert Cole’s “Computer Communications”, my notes still there in the margins of page 10, where I left them.

The issue I faced was one of presenting a definition of ‘information science’, and of outlining its history as a discipline, to modern LIS students. Many of the papers considering the origins of information science gaze back in time to illuminate Shannon’s equations with a rosy pink glow, suggesting that his theory somehow led to the birth of information science as a true science (Shera 1968, Meadows 1987). This was the story in the 1980s, but in the 21st century a more plausible thread is emphasized: the work of Kaiser, Otlet and Farradane on the indexing of documents, which suggests that the MTC was a bit of a red herring in respect of the history of information science. Rather, information science grew out of a need to control scientific information, coupled with the feeling amongst scientists that this activity was somehow separate from either special librarianship or the more continental term for dealing with the literature, documentation (see Gilchrist 2009, Vickery 2004, Webber 2003).

MTC

A look back at the original ideas and documents shows that Shannon’s work was built on that of Hartley (1928). Stonier (1990 p 54) refers to Hartley:

“.. who defined information as the successive selection of signs or words from a given list. Hartley, concerned with the transmission of information, rejected all subjective factors such as meaning, since his interest lay in the transmission of signs or physical signals.”

Consequently, Shannon used the term information, even though his emphasis was on signalling. The interpretation of the MTC as a theory of information was thus somewhat coincidental, but this did not prevent it being embraced as a foundation of a true ‘information science’.

Shannon himself suggested that there were likely to be many theories of information. More recently, contemporary authors such as Stonier (1992) and Floridi (2010) have reiterated that MTC is about data communication rather than meaningful information.

Floridi (2010 p 42 and 44) explains:

“MTC is primarily a study of the properties of a channel of communication, and of codes that can efficiently encipher data into recordable and transmittable signals.”

“.. since MTC is a theory of information without meaning, (not in the sense of meaningless, but in the sense of not yet meaningful), and since [information – meaning = data], mathematical ‘theory of data communication’ is a far more appropriate description…”

He quotes Weaver as confirming:

“The mathematical theory of communication deals with the carriers of information, symbols and signals, not with information itself.”

Floridi’s definition of information as ‘meaningful data’ is more closely aligned with the field of information science as understood for our LIS-related courses. Whilst we can still argue about what is data and what is meaning, we can see that the MTC treats ‘information’ as a physical quantity, more akin to the bit than to the meaningful information handled by library and information scientists.

This difference is set out by Stonier (1990, p 17):

“In contrast to physical information, there exists human information which includes the information created, interpreted, organised or transmitted by human beings.”

Nonetheless, the MTC is still relevant to today’s information science courses because it has played a pivotal role in the subsequent definitions and theories about information per se. And it is rather hard to have information science without an understanding of ‘information’. Many papers have been written on theories of information, and on the relevance of such theories to information science (see, for example, Cornelius 2002).

MTC and other disciplines

The MTC provides the background for signalling and communication theory within fields as diverse as engineering and neurophysiology. At the same time that Shannon was writing, Norbert Wiener was independently considering the problems of signalling and background noise. Wiener (1948 p 18) writes that they:

“.. had to develop a statistical theory of the amount of information, in which the unit amount of information was that transmitted as a single decision between equally probable alternatives.”

Further (p 19), that

“This idea occurred at about the same time to several writers, among them the statistician R.A. Fisher, Dr. Shannon of the Bell Telephone Laboratories, and the author.”

Wiener decided to:

“call the entire field of control and communication theory, whether in the machine or in the animal, by the name Cybernetics”.

The relationship of information to statistical probability (the amount of information being a statistical probability) meant that information in Shannon and Wiener’s sense related readily to entropy (anecdotally von Neumann is said to have suggested to Shannon that he use the term entropy, as it was already in use within the field of thermodynamics, but not widely understood).

“The quantity which uniquely meets the natural requirements that one sets up for ‘information’ turns out to be exactly that which is known in thermodynamics as entropy.”

Shannon and Weaver (1949) p 103

“As the amount of information in a system is a measure of its degree of organization, so the entropy of a system is a measure of its degree of disorganization; and the one is simply the negative of the other.”

Wiener (1948) p 18
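The quantity both men are describing can be written down in a couple of lines. As a minimal sketch of my own (not drawn from either text), Shannon’s entropy H = −Σ p·log2(p) measures the average unpredictability, in bits, of a source with the given symbol probabilities:

```python
import math

def shannon_entropy(probabilities):
    """H = -sum(p * log2(p)): average information per symbol, in bits."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# A fair coin is maximally unpredictable: one full bit per toss.
print(shannon_entropy([0.5, 0.5]))  # 1.0
# A heavily biased coin is more 'organized' and carries less per toss.
print(shannon_entropy([0.9, 0.1]))  # about 0.47
```

The uniform (most disordered) distribution maximizes H, which is why Shannon’s measure maps so naturally onto thermodynamic entropy.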

The link between information and entropy had been around for some time. In 1929, Szilard wrote about Maxwell’s demon, which could sort out the faster molecules from the slower ones in a chamber of gas. Szilard concluded that the demon had information about the molecules of gas, and was converting information into a form of negative entropy.

The term ‘negentropy’ was coined in 1956 by Brillouin:

“… information can be changed into negentropy, and that information, whether bound or free, can be obtained only at the expense of the negentropy of some physical system.”

Brillouin (1956) p 154

Brillouin’s conclusion was that information is associated with order or organization, and that as one system becomes organized (entropy decrease), another system must become more disorganized (entropy increase).

Stonier (1992 p 10) agrees:

“Any system exhibiting organization contains information.”

A well-known anomaly becomes apparent, however, when over 60 years later we try to understand the correlation between information and either entropy or probability. A trawl through the original equations and explanations, and subsequent revisitations, reveals that an increase in information can be associated with either an increase or decrease in entropy/probability according to your viewpoint. Tom Stonier (1990) refers to this in chapter 5, but Qvortrup (1993) gives a more detailed explanation:

“In reality, however, Wiener’s theory of information is not the same, but the opposite of Shannon’s theory. While to Shannon information is inversely proportional to probability, to Wiener it is directly proportional to probability. To Shannon, information and order are opposed; to Wiener they are closely related.”
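Shannon’s half of this opposition is easy to demonstrate: his self-information, or ‘surprisal’, of an event grows as the event becomes less probable. A small sketch of my own:

```python
import math

def surprisal(p: float) -> float:
    """Shannon self-information -log2(p): the rarer the event, the more
    bits of information its occurrence conveys."""
    return -math.log2(p)

print(surprisal(0.5))    # 1.0 bit: an ordinary coin toss
print(surprisal(0.001))  # about 9.97 bits: a rare, highly informative event
```

In Wiener’s usage, by contrast, essentially the same mathematics is read with the opposite sign, as a measure of order.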

The correlation between the measurement of entropy and information did, however, lead to the separate field of information physics, where information is considered to be a fundamental, measurable property of the universe, similar to energy (Stonier 1990).

This field stimulates much debate, and is currently enjoying what passes for popularity in science. A recent article in New Scientist tells how Shannon’s entropy provides a reliable indicator of the unpredictability of information, and thus of uncertainty, and how this has been related to the quantum world and Heisenberg’s uncertainty principle (Ananthaswamy 2011).

Information biology also appears to stem from work undertaken around the MTC. The connection between signalling in engineering and physiology was made by Wiener in the 1940s, and in 1944 Schrödinger, in his book “What is Life?”, made a connection with entropy as he considered that a living organism:

“… feeds upon negative entropy.”

Further that:

“.. the device by which an organism maintains itself stationary at a fairly high level of orderliness (= fairly low level of entropy) really consists in continually sucking orderliness from its environment.”

In the same book, Schrödinger outlined the way in which genetic information might be stored, although the molecular structure of DNA was not published until 1953, by Watson and Crick (see Crick 1988). The genetic information coded in the nucleotides of DNA is transcribed by messenger RNA and used to synthesize proteins. Information contained in genetic sequences also plays a role in the inheritance of phenotypes, so that informational approaches have been made within the study of biology (see Floridi 2010, also for discussion of neural information).
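As a toy illustration of that flow of genetic information (my own sketch; the codon table below is deliberately truncated to the few codons used in the example, not the full genetic code):

```python
# Codons (triplets of mRNA bases) map to amino acids; UAA is a stop signal.
CODON_TABLE = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP"}

def transcribe(dna: str) -> str:
    """Simplified transcription: the mRNA copy replaces thymine with uracil."""
    return dna.replace("T", "U")

def translate(mrna: str) -> list:
    """Read the mRNA three bases at a time until a stop codon appears."""
    amino_acids = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODON_TABLE.get(mrna[i:i + 3], "?")
        if residue == "STOP":
            break
        amino_acids.append(residue)
    return amino_acids

mrna = transcribe("ATGTTTGGCTAA")
print(mrna)             # AUGUUUGGCUAA
print(translate(mrna))  # ['Met', 'Phe', 'Gly']
```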

Information and LIS

For the purposes of our library and information science courses here at City University, we consider information as that which is ‘recorded for the purposes of meaningful, human communication’. Although I personally find Floridi’s definition helpful, information in our model is open to definition and interpretation, and is often used interchangeably with the term ‘knowledge’. In either case we regard the information as being instantiated within a ‘document’. The term ‘document’ also does not demand a definitive explanation; it merely needs to be understood as the focus of ‘information science’, its practitioners and researchers.

To complete the picture, when I became Program Director for #citylis at City University London, I wanted to strengthen and clarify the way in which we defined ‘information science’, and particularly to explain its relationship with library science (Robinson 2009). I suggested that library science and information science were part of the same disciplinary spectrum, and that information science (used here to include library science) could be understood as the study of the information communication chain, represented below:

Author  —> Publication and Dissemination —> Organisation —> Indexing and Retrieval —>  User

The chain represents the flow of recorded information, instantiated as documents, from the original author or creator, to the user. The understanding and development of the activities within the communication chain is what library and information specialists do in both practice and research. As a point of explanation, I take organisation in the model to include the working of actual organisations such as libraries and institutions, information management and policy, and information law. Information organisation per se, fits within the indexing and retrieval category.

Our subject is thus a very broad area of study, one which is perhaps better referred to as the information sciences. The question of how we study the activities of the model can be answered by applying Hjorland’s underlying theory for information science, domain analysis (Hjorland 2002). The domain analytic paradigm describes the competencies of information specialists, such as knowledge organization, bibliometrics, epistemology and user studies. The competencies or aspects distinguish what is unique about the information specialist, in contrast to the subject specialist. Further, domain analysis can be seen as the bridge between academic theory and vocational practice; each competency of domain analysis can be approached from either the point of view of research or of practice.

There are many definitions of information science, and there are other associated theories or meta-theories, the latter of which may also be associated with a philosophical stance. Nonetheless, the model portrayed above has proved to be a robust foundation for teaching and research, yet it is flexible enough to accommodate diverse opinions and debate as to what is meant by ‘information’. It allows for diverse theories of information.

It is interesting to reflect on whether ‘information’ as understood for the purposes of library and information science has any connection with ‘information’ as understood by physics and/or biology, or whether it is a standalone concept. Indeed, later authors such as Bateson (1972) have suggested that if information is inversely related to probability, as Shannon says, then it is also related to meaning, as meaning is a way of reducing complexity. Cornelius (2002) reviews the literature attempting to elucidate a theory of information for information science (see also Zunde 1981, Meadow and Yuan 1997).

At a recent conference in Lyon, Birger Hjorland’s (2011) presentation considered the question of whether it was possible to have information science without information. He writes that there should at least be some understanding of the concept that supports our aims, but concludes:

“.. we cannot start by defining information and then proceed from that definition. We have to consider which field we are working in, and what kind of theoretical perspectives are best suited to support our goals.”

I agree with him. I do not think we can have information science without a consideration of what we mean by information – but information is a complex concept, and one that can be interpreted in several ways, according to the discipline doing the interpretation, and then again within any given discipline per se. It is not an easy subject to study, despite its sudden popularity. The literature of information theory is extensive, and scary maths can be found in most of it. Nonetheless, it is essential for anyone within our profession to have in mind an understanding of what we are working with; otherwise it is impossible to justify what we are doing, and we appear nondescript. Understanding information is like wearing black. Any colour will do, but black makes you look so much taller and slimmer.

References

Ananthaswamy A (2011). Uncertainty untangled. New Scientist, 30th April 2011, 28-31

Bateson G (1972). Steps to an ecology of mind. Ballantine: New York

Brillouin L (1956). Science and information theory. Academic Press: New York

Cornelius I (2002). Theorizing information science. Annual Review of Information Science and Technology 2002, 393-425

Crick F (1988). What mad pursuit. A personal view of scientific discovery. Penguin: London

Floridi L (2010). Information: a very short introduction. Oxford University Press: Oxford

Gilchrist A (2009). Editorial. In: Information science in transition. Facet: London

Hartley RVL (1928). Transmission of information. Bell System Technical Journal, vol 7 535-563

Hjorland B (2002). Domain analysis in information science: eleven approaches – traditional as well as innovative. Journal of Documentation, vol 58(4) 422-462

Hjorland B (2011). The nature of information science and its core concepts. Paper presented at: Colloque sur l’épistémologie comparée des concepts d’information et de communication dans les disciplines scientifiques (EPICIC), Université Lyon3, April 8th 2011. Available from: http://isko-france.asso.fr/epicic/en/node/18

Meadow CT and Yuan W (1997). Measuring the impact of information: defining the concepts. Information Processing and Management, vol 33(6) 697-714

Meadows AJ (1987). Introduction. In: The origins of information science. Taylor Graham: London

Qvortrup L (1993). The controversy of the concept of information. Cybernetics and Human Knowing, vol 1(4) 3-24

Robinson L (2009). Information science: communication and domain analysis. Journal of Documentation, vol 65(4) 578-591

Schrödinger E (1944). What is life? The physical aspect of the living cell. Cambridge University Press: Cambridge

Shannon CE and Weaver W (1949). The mathematical theory of communication. University of Illinois Press: Urbana

Shera JH (1968). Of librarianship, documentation and information science. Unesco Bulletin for Libraries, 22(2) 58-65

Stonier T (1992). Beyond information. The natural history of intelligence. Springer-Verlag: New York

Stonier T (1990). Information and the internal structure of the universe. Springer-Verlag: New York

Szilard L (1929). Über die Entropieverminderung in einem thermodynamischen System bei Eingriffen intelligenter Wesen. Zeitschrift für Physik, vol 53 840-856

Vickery B (2004). The long search for information. Occasional Papers no. 213. Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign

Webber S (2003). Information science in 2003: a critique. Journal of Information Science, vol 29(4) 311-330

Wiener N (1948). Cybernetics: or control and communication in the animal and the machine. Wiley: New York

Zunde P (1981). Information theory and information science. Information Processing and Management, vol 17(6) 341-347

Libraries in a Digital Age

Royal Astronomical Society, London

The title of this one-day event caught my eye because of its relevance to the content of our library science masters course, and the sessions, arranged by the Association of Independent Libraries, did not disappoint. An added bonus was that the lectures were delivered at the very lovely Royal Astronomical Society in Mayfair – a significantly motivating factor in persuading me to attend.

The day focused on web 2.0 applications, the fate of public libraries in the face of funding cuts, the (apparently) idealistic aspirations of Google Books, changes in the publishing industry, and the restrictions on knowledge access resulting from copyright – all topical aspects that any LIS professional, as well as masters student, should find compelling.

Gwyneth Price, Institute of Education, took us through her thoughts on web 2.0 applications and their use in the library and for information literacy. I have long been an advocate of social media, and was interested to hear Gwyneth’s experience of introducing blogs, wikis, media-sharing, social-networking and current awareness tools to facilitate new ways for the library to engage with its users. LIS workers have been associated with the promotion of information literacy for many years now, although I think it is something they have always done. There is a much greater recognition of the role of library professionals as teachers these days, and Gwyneth highlighted the work of the 9-month LASSIE project (undertaken with Jane Secker, LSE, completed Jan 2008), funded by the Centre for Distance Education at the University of London. This project explored how social software (used as a synonym for web 2.0 applications) might enhance distance learners’ use of libraries. The resulting case study reports, available from the LASSIE website, suggested ways in which web 2.0 tools could be used in the broader capacity of library engagement and outreach. If you are not familiar with this project, I think it is worth a look as part of any effort both to get to grips with what is meant by web 2.0 tools and to understand library-related approaches to their implementation.

Outlining her experience with specific tools, Gwyneth referred to the overheads in time and effort needed to set up and maintain a library blog – her response, to encourage a regular supply of postings, was to create a staff blogging rota; once engaged with the blog, staff enthusiasm rose considerably. With respect to social networking software, Gwyneth felt that this was an area best covered by the VLE, as students tended to use Facebook for social contacts and the VLE for academic ‘networking’. Mention was made of LinkedIn, for professional use. Wikis were used as a repository for library FAQs – i.e. as a way of capturing the effort made to answer queries and avoiding duplication – and media-sharing tools were helpful for distributing resources, especially YouTube for training videos and Delicious for web links. Gwyneth also suggested a viewpoint which I share, in observing a move away from RSS feeds to Twitter. RSS feeds and readers, although entirely workable and useful, somehow always cause the greatest number of puzzled looks in my CPD web 2.0 classes – and I concede that I, too, glean most of my current awareness from Twitter – although in their defence, feed readers are still the best solution if you follow selected blogs or websites for updates. Gwyneth did not mention Netvibes – which is good for pulling all your information sources into one place – so I will do so for completeness. Finally came a mention for TAGXEDO, a tag cloud generator, and its ability to produce tag clouds in a variety of shapes – find one to suit your mood.

Tim Coates presented his plans to secure the future of our public library service, in which he announced the formation of Library Alliance, a new, not-for-profit, non-governmental body being launched to help improve the public library service, funded by charitable donation (Tim’s speech). Tim is often described as ‘controversial’, and whilst his views may not always suit everyone, his consistent support for public libraries is undeniable. Tim asks the question “can libraries survive in times of austerity?” and suggests that we should consider the reasons why people use libraries. This is one of the places where differences of opinion can creep in, as the exact reasons why people do or do not use public libraries are not agreed upon anywhere (though doubtless studies aiming to elucidate these reasons exist). Tim’s suggestions as to why people use libraries include reading the books (!) and an appreciation of an inspiring space. It is not just for the technology. Tim emphasizes the need to improve stock, access and opening hours, and that it is important for public libraries to do what the public wants – but this is another area where controversial opinions enter the arena; not everyone agrees on what the public wants. Tim notes the trend to link public library services with the agendas or ambitions of local councils, and ultimately government – he counters this with the 1964 Public Libraries Act, which says that ‘public libraries are for the benefit of those people who wish to use them’. This, arguably, does not necessarily align with the ambitions of government targets. Should not public libraries address the needs of the individuals who use them, rather than the state? And whilst co-location of social (and other) services within libraries may be convenient for those services, it does not improve the library service per se.
Tim is a very well-known speaker, and I will not attempt to digest his speech further; please see the link above to his actual words. To end with, however, Tim feels that the financial management of public libraries is rather poor, and that with some improvement, libraries can indeed survive.

Michael Popham, Oxford Digital Library, talked about Oxford’s collaboration with Google Books, and the lessons learned from their combined efforts to digitize the Bodleian’s estimated 1 million holdings of out-of-copyright, and mostly out-of-print, 19th century material (this arrangement was different from Google’s projects with Harvard and Michigan). The project stemmed from the desire to widen access; currently 60% of those who use and work in the Bodleian Libraries have no direct connection with the university. The card catalogue offers only limited information to readers, and access would be greatly enhanced by digitizing and indexing entire books.

Michael reminded us that Oxford’s “digital library” began in the 1960s, when machine readable text was made available for scholarly research purposes. The Oxford Text Archive was founded in the 1970s.

The project built on work with ProQuest to convert Early English Books Online from microfilm images into fully-searchable texts. This effort proved slow and expensive, so the offer from Google to assist with digitization was viewed as a chance to cut the waiting time and costs for improved access. Once an item is digitized by Google, one copy of its files goes into Google Books, and a second into the Oxford Digital Asset Management System. There is a link from the Oxford Libraries Information Service (OLIS) catalogue to the copy in the management system.

The details of the project revealed that digitization relies on good planning and a lot of work, requiring effort from all the staff concerned. Google provides the metadata checks, the digitization, quality assurance, OCR and indexing, reprocessing, mounting of files in Google Books, and preservation of the master files. Bodleian staff carry out the item selection (33% of the items are too fragile to scan, and a further 33% are the wrong size), handling and re-shelving. Google retains the master files, as the images are large and storage requirements are onerous. Google committed to the project (commencing in 2005) for 20 years, and for now the content is free – what happens after 20 years is as yet undecided. So far, 388,000 of the 1 million items have been digitized.

The process of digitization is a craft, with highly specialized equipment being used by skilled operators to obtain accurate images whilst not damaging the original works in any way – devices that hold pages in place using gentle air pressure, for example. Even so, not every item is suitable for digitization.

There are some issues, one of which is that Oxford has no control over how Google uses the images; Google aims to promote the material to end-users, not just scholars. The digitization process itself can have hiccups, resulting in images of the operator’s hands and blank pages in odd places. There are things that cannot be digitized, including fold-out pages and missing pages. Copyright is another minefield, as laws differ between countries. Whilst the UK has the 70-year rule (i.e. copyright ends 70 years after the author’s death), other parts of the world (e.g. the US) do not. For the moment Google attempts to determine where in the world a reader is situated, and to apply copyright restrictions accordingly – not always with the greatest accuracy, as sometimes the date of the author’s death is not known. However, despite the drawbacks, there is now the potential to analyse texts linguistically, and to find things beyond the possibilities offered by print on paper. Michael gave the example of attempting to locate an early use of the phrase “.. beginning of the end and the end of the beginning ..”. Google’s skill at marketing also helps to draw attention to special items in the collection, including first editions (Emma, On the Origin of Species), which can be seen by anyone all over the world.

Moving on to publishing, John B Thompson, Professor of Sociology at Cambridge University, considered the changes in the industry between the 1960s and the present day. Publishing has an intrinsic link with libraries: how authors disseminate their work and how users find it. LIS is concerned both with scholarly publishing and with what John Thompson refers to as ‘general trade publishing’ – i.e. book publishing in general. The talk focused on changes brought about by the dual action of the economic downturn and the digital revolution. Both these aspects are well known as drivers for change within the LIS field too, and the consequences for the publishing industry echo throughout the information industry as a whole. The talk centered around John’s research into the book trade, detailed in his new publication “Merchants of Culture“, which I have added to my reading list for our library science masters. Whilst John’s previous book, Books in the Digital Age (also on our reading list), considered scholarly book publishing in today’s society, Merchants of Culture considers the wider world of “general trade publishing”,

“..that is the world of general interest books that are aimed at a wider public and sold through high street bookstores”

how it is organised and how it is changing. John introduced “the logic of the field” as his model for how US/UK publishing works, emphasizing that publishing in other locations works to different rules. His fluid and engaging presentation summarized his book very well in a short space of time, starting with changes from the 1960s, including A) the growth of retail chains, leading to a decrease in independent booksellers and in the number of people in the trade choosing books, a shift in the way books are stocked and sold, and the hardback revolution as mass marketing increases sales, B) the rise of literary agents (not seen in Europe), and C) the emergence of publishing corporations. This has all led to a polarization of the field, where there are no medium-sized publishing houses, just very large enterprises and very small ‘indie’ presses. You either have a lot of money to fund winners, or a very small amount to fund esoteric chancers. Once a book is a success, major funding will be required to publish a second tome. We face a preoccupation with ‘big books’ – the hoped-for best sellers.

“Hype is the talking up of books by those who have an interest in generating excitement about them…. buzz exists when the recipients of hype respond with affirmative talk backed up by money.”

It matters what people think. But we now have “extreme publishing” and “shrinking windows” where a book has six weeks to sell or face withdrawal. High returns though – 30% on average.

The day ended with Martyn Everett, former librarian and Chairman of Saffron Walden Town Library Society, talking about how re-interpretation of copyright is restricting access to information.

“Knowledge is stifled by restriction and censorship”

Martyn drew a comparison between today’s knowledge commons and the historical commons where land was shared for the general good, highlighting the desire to share that underlies the ethos of many of today’s information workers. Businesses such as Amazon and Abe Books offer realistic alternatives to public libraries, as books can be sourced cheaply and quickly. The “long tail” purpose of libraries is eroded by such services, as they are often quicker than ILL. Where does this leave libraries? Furthermore, research collections are often denied to members of the public – what is the role of open access here? And what of copyright held by organizations rather than the individual authors? Should not publicly funded research be readily and freely available to the public? A good question, and one which is regularly in the discussion forums. And what of the content of public libraries? Martyn noted the lack of books on anthropology available on shelves for public browsing – is this true?

Library at the Royal Astronomical Society, London

I was fortunate to be able to have a look around the library before I left – an amazing collection, including star charts and paintings of comets, and very early photos of the solar eclipse.

As I was leaving, we returned to the comment that there are no anthropology books in the public libraries anymore. I suggested that dumbing down was the reason – ‘no’, replied our speaker – it was a more complex issue than merely dumbing down, and one which deserves more exploration. However, I do wonder whether the public gets what the public asks for from its libraries, which is clearly not anthropology. There is a feeling that librarians should protect the public from their own failings, stocking the Times Literary Supplement recommendations just in case someone has an epiphany. (What does the public library act say?) Should they though? Do people want to chance upon something life-changing, improving, inspirational or even just useful from the library? I hope so. I mean, isn’t that why we do this?