UK Web Focus (Brian Kelly)

Innovation and best practices for the Web

Archive for December, 2011

My Predictions for 2012

Posted by Brian Kelly on 29 December 2011

Predictions for 2012

How will the technology environment develop during 2012? I’m willing to set myself up for a fall by outlining my predictions for 2012 :-)

Tablet Computers …

After a couple of years in which use of smartphones, whether based on Apple’s iOS or Google’s Android operating system, became mainstream for many when away from the office, 2012 will see use of tablets becoming mainstream, with competition from Android vendors continuing to bring down prices for those reluctant to pay a premium for an iPad.

Once the new term starts we’ll see increased numbers of students who received a tablet PC for Christmas making use of them, not only for watching videos and listening to music in their accommodation, but also in lectures. As well as note-taking, the devices, together with smartphones, will be used for recording lectures. In some cases this will lead to concerns regarding ownership and privacy infringements, but students will argue that they are paying for their education and should be entitled to time-shift their lectures. Since it will be difficult to prevent students from making such recordings, lecturers will start to encourage such practices and will seek to develop an understanding of when comments made during lectures and tutorials should be treated as ‘off-the-record’.

Open Practices …

Such lecturers will be providing one example of an ‘open practice’. Such encouragement of recording or broadcasting lectures will become the norm in several research areas, with organisers of research conferences acknowledging that they will need to provide an event amplification infrastructure (including free WiFi for participants, an event hashtag, and live streaming or recording of key talks) in order to satisfy the expectations of those who actively participate in research events.

Such open practices will complement more well-established examples of openness including open access and open content, such as open educational resources. We’ll see much greater use of Creative Commons licences, especially licences which minimise barriers to reuse.

Social Applications …

Social applications will become ubiquitous, although the term may be rebranded in order to avoid the barrier to use faced by those who regard the term ‘social’ as meaning ‘personal’ or ‘trivial’. Just as Web 2.0 became rebranded as the Social Web and the Semantic Web as Linked Data, we shall see such applications being marketed as collaborative or interactive services.

Social networking services will continue to grow in importance across the higher education sector. However the view that the popularity of such services will be dependent on conformance with a particular set of development criteria (open source and distributed) or ownership criteria (must not be owned by a successful multi-national company) will be seen to be of little significance. Rather than a growth in services such as Diaspora, we will see Facebook continue to develop (with its use by organisations helped by mandatory legal requirements regarding conformance with EU privacy legislation, described in a post on 45 Privacy Changes Facebook Will Make To Comply With Data Protection Law). In addition to Facebook, Twitter and Google+ will continue to be of importance across the sector.

Learning and Knowledge Analytics ….

The ubiquity of mobile devices coupled with greater use of social applications as part of a developing culture of open practices will lead to an awareness of the importance of learning and knowledge analytics. Just as in the sporting arena we have seen huge developments in using analytic tools to understand and maximise sporting performances, we will see similar approaches being taken to understand and maximise intellectual performance, in both teaching and learning and research areas.

Collective Intelligence

Just as the combination of developments will help us to have a better understanding of intellectual performance, so too will these developments help in the growth of Collective Intelligence, described in Wikipedia as the “shared or group intelligence that emerges from the collaboration and competition of many individuals and appears in consensus decision making in bacteria, animals, humans and computer networks“. The driving forces behind Collective Intelligence will be the global players which have access to large volumes of data and the computational resources (processing power and storage) to analyse the data.

How Will I Know If I’m Right?

In a way it is easy to make predictions. A greater challenge is being able to demonstrate that such predictions have come true. How might we go about deciding, in December 2012, whether these predictions reflect reality?

Monitoring Trends

There will be statistics which can help support the predictions. For example a few days ago Glyn Moody tweeted that:

Google announces 3.7m #Android activations over the Christmas weekend – impressive

But there is a range of other indicators which can help to spot relevant trends.

A Google Trends comparison of the terms ‘tablet computer’ and ‘smartphone’ currently shows the greater popularity of the latter term, although there was a peak in searches for ‘tablet computer’ after the news (labelled F in the screenshot) that “India launches $35 tablet computer“.

Using Wikipedia

Wikipedia articles may also have a role to play. For example we can compare the entries for tablet computer and collective intelligence between January and December 2011, which might help to provide a better understanding of how the Wikipedia community is describing these terms. Similarly, looking at the usage statistics for these two entries shows 40,567 visits in January and 73,181 in November 2011 for the entry for tablet computer, and 10,711 visits in January and 11,126 in November 2011 for the entry for collective intelligence.
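The relative growth in these visit counts can be quantified simply. A minimal sketch in Python, using only the figures quoted above:

```python
# Monthly Wikipedia visit counts quoted above (January 2011 vs November 2011)
visits = {
    "tablet computer": (40567, 73181),
    "collective intelligence": (10711, 11126),
}

def percentage_growth(start, end):
    """Percentage change from the start figure to the end figure."""
    return 100.0 * (end - start) / start

for term, (jan, nov) in visits.items():
    print(f"{term}: {percentage_growth(jan, nov):.0f}% growth")
```

The entry for tablet computer grew by roughly 80% over the period, against roughly 4% for collective intelligence, which supports the suggestion that tablets are the faster-moving term.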

In addition to the content coverage and usage statistics for Wikipedia articles, the creation of an article may also indicate that the term has become significant. It is interesting to note that there is currently no entry for ‘open practice’. Will this have changed by this time next year, I wonder?

Snapshots of Social Network Usage

I have previously provided snapshots of institutional use of Facebook from November 2007 up to January 2011, together with similar surveys of institutional use of services such as Twitter, YouTube and iTunes. It would be interesting to capture early examples of institutional uses of Google+, and Diaspora. However I am currently unaware of such institutional uses. Until I discover some examples I will provide a personal summary of my uses of these services.

Service    Nos. of posts   Nos. of followers   Nos. I follow
Google+    12              170                 476
Diaspora   1               5                   5
           5               10                  9

This data was gathered on 29 December 2011. It will be interesting to see how this compares with the data for the end of 2012. Of course the above table only indicates the extent of my interest and engagement with the services. I have documented these figures so that I will be able to benchmark any changes in my usage of these services over the year.

Institutional Trends

It will be interesting to see examples of institutional trends, perhaps by observing topics presented at conferences and also by reading about new developments. One useful source of new developments is Chris Sexton’s From a Distance blog. Chris, Director of Corporate Information and Computing Services at the University of Sheffield, has recently published a post entitled Tablet News in which she describes how:

Today sees the publication of our newsletter, myCiCSnews, which can be downloaded as a pdf from here. There’s articles on learning technologies, research on the campus compute cloud, information security, and many more.
For the first time we’ve made it available in a tablet version, which works really well on iPads and other tablets, and includes embedded video etc.

The Flip Side

The flip side of the growth in use of new services, and in discussions about their benefits, is the criticism of such developments.

Criticism and scepticism can take several forms. We can probably remember when mobile phones were large and expensive and, together with the yuppies and businessmen who could afford such devices, were the butt of jokes on comedy sketches.

Mike Ellis has provided his take on the development of online reputation tools such as Klout in his Klunt parody which he announced on Twitter back in September.

We are unlikely to see this example in the Daily Mail but I think we can expect middle England to express outrage at some of the developments I’ve described in this post.

We have already come across examples of the way in which Facebook, Twitter and Blackberry phones have been used to organise illegal events or promote riots. I wonder if the Android tablet will be next in line to face the wrath of the Daily Mail?

Or perhaps the success will be indicated by the backlash. Might we find that the move towards open practices beyond the early adopters will be met by opposition from those who point out the legal risks of such practices, with examples of such risks becoming widely tweeted and retweeted?

Revisiting Predictions

On 29 December 2010 I asked Will #Quora Be Big In 2011? It is difficult to provide an answer to that question. Looking at the Wikipedia article for Quora I find that others also felt that the service would be significant:

Quora has been praised by several publications such as the New York Times, USA Today, Time Magazine and The Daily Telegraph.[28][29][30][31]

According to Robert Scoble, Quora succeeded in combining attributes of Twitter, Facebook, Google Wave and various websites that employ a system of users voting content up.[32] Scoble later criticized Quora, however, saying that it was a “horrid service for blogging,” and while it was a decent question and answer website, it was not substantially better than competing sites.[33] The Daily Telegraph of the United Kingdom has predicted that Quora will go on to become larger than Twitter in the future.[31][34] Quora, along with Airbnb and Dropbox, has been named among the next generation of multibillion dollar start-ups by the New York Times.[35]

Quora itself hosts a question which asks How fast is Quora growing on a weekly basis? What are the growth metrics? However the responses fail to give a clear answer to this question.

I intend to revisit this post in December 2012. I’d welcome suggestions on additional ways in which it will be possible to detect if predictions have become true. I’d also welcome comments on the predictions I’ve outlined in this post.

Twitter conversation from Topsy: [View]

Posted in General, jiscobs | 8 Comments »

The Need for an Evidence-based Approach to Demonstrating Value

Posted by Brian Kelly on 28 December 2011

When I read the Editor’s View column in the current issue of IWR (Information World Review, Nov/Dec 2011) the words seemed familiar. The column began “Evaluating the shortlist for the IWR Information Professional of the Year Award, one of the judges noted that at a time when the library profession was suffering from the economic turmoil there was a need for an evidence-based approach to demonstrating the value for libraries“.

Checking my email it seems that these were the words I used when I voted for Ian Anstice as this year’s IWR Information Professional of the Year. As described in the announcement about the award published in IWR “The judges – all previous winners – gave Anstice, a branch manager of a public library in Cheshire, the honour for his work in documenting the changes taking place across the public library sector as a whole“. Ian Anstice was quoted as saying “In a time of cuts to library services and being aware that knowledge is power, I was surprised to see there was no publicly available site to show what was going in each authority. I started the blog [at] in October 2010. This includes all news articles on public library cuts, doing a map of the cuts, and producing a tally of cuts and proposals by authority.

But what does “evidence-gathering” entail? There is a real danger that selective evidence-gathering is used to justify a particular position. This approach was discredited when governments in the UK and US sought evidence to demonstrate Saddam Hussein’s possession of weapons of mass destruction. Quite clearly we expect a higher level of integrity from the library sector!

A great example of an honest and open approach to the current challenges facing the library sector can be seen in Aaron Tay’s recent post which asked “Is librarianship in crisis and should we be talking about it?” Aaron, a librarian at National University of Singapore, is a prolific blogger on his Musings About Librarianship blog. In his post Aaron described how:

Librarians are worriers, and one thing we like to worry a lot about is the future of libraries.

Veronica Arellano however thinks that we should stop writing about it. Why? She gives several reasons in “A Crisis of Our Own Making” but concludes with

“Writing about the ‘crisis’ in libraries tries to elicit change out of fear, rather than a desire to better serve our communities. By continuing to write our own obituaries, we are dissuading enthusiastic, forward-minded young scholars, technologists, and community leaders from entering the profession by painting ourselves as stuck in the past and obsolete.”

Really? Should discussions of the implications of the perfect storm – caused by the combination of the cuts being faced across many public sector organisations, the technical revolution driven initially by the first generation of the Web and subsequently by the popularity of Web 2.0 and the Social Web, and the changing expectations of the user community – simply be ignored?

Aaron feels that “thinking that everything is fine, and business as usual, always choosing the options with the least risk (when there is no such option in fact) will suffice is equally perhaps a recipe for disaster” and this is a view which I would support.

Aaron’s post asks how one should advise potential newcomers to the profession:

Imagine a young potential librarian-to-be contacts you and asks you for advice on whether he should enter the profession. What picture of librarianship should you paint? I believe it would be irresponsible not to at least mention the challenges and potential stumbling blocks that libraries are facing in the future, so they will know what they will be up against.

and concludes by encouraging a response which is honest about the changing context to the library profession:

For the record, I don’t think libraries are definitely doomed to extinction, but there is much to be done and the library world needs passionate and energetic librarians to fight for the future of libraries and the last thing we need is for recruits to come in because they think libraries are a soft option or because the job outlook is stable.

We do need to continue to gather evidence of the value of services, and not just library services. But we need to understand that the evidence will not necessarily justify a continuation of established approaches to providing services. And if evidence is found which supports the view that libraries will be extinct by 2020 (PDF format) then the implications need to be openly and widely discussed. I’m pleased that Aaron is helping to encourage such a debate. And in light of Aaron’s post I’d like to slightly modify the reason why I supported Ian Anstice’s well-deserved award:

At a time when the library profession is suffering from the economic turmoil there is a need for an evidence-based approach to demonstrating the value for libraries and for open debate on the interpretation of such evidence and the implications of policy decisions based on such interpretations.

Twitter conversation from Topsy: [View]

Posted in General | 10 Comments »

How can universities ensure that they dispose of their unwanted IT equipment in a green and socially responsible way?

Posted by ukwebfocusguest on 26 December 2011

Christmas is a time for sharing and thinking of others. In this guest blog post I’m pleased to provide a forum for Anja ffrench, Director of Marketing and Communications at Computer Aid International. I met Anja at the recent Computer Weekly Social Media Awards and we discussed ways in which universities could ensure that their unwanted IT equipment is disposed of in a green and socially responsible way. Whilst I’m sure most universities will have appropriate policies and procedures in place, I would like to use this opportunity to raise the visibility of Computer Aid International.

The Environmental Cost of using Computers

At every step of the PC’s product life-cycle a carbon footprint is left behind: during the initial extraction of minerals from the environment; the processing of raw materials; the production of sub-components; PC assembly and manufacture; global distribution; and power consumption in usage.

The production of every PC requires 10 times its own weight in fossil fuels. According to empirical research published by Williams and Kerr from the UN University in Tokyo, the average PC requires 240kg of fossil fuels, 22kg of chemicals and 1,500kg of water. That’s over 1.7 metric tonnes of materials consumed to produce each and every PC. PCs require so much energy and materials because of the complex internal structure of microchips.
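The arithmetic behind that headline figure can be checked directly from the Williams and Kerr numbers quoted above:

```python
# Materials consumed in producing a single PC, using the Williams and Kerr
# figures quoted above
fossil_fuels_kg = 240
chemicals_kg = 22
water_kg = 1500

total_kg = fossil_fuels_kg + chemicals_kg + water_kg
total_tonnes = total_kg / 1000
print(f"{total_kg} kg, i.e. {total_tonnes:.2f} metric tonnes per PC")
```

That gives 1,762 kg per machine, consistent with the “over 1.7 metric tonnes” stated above.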

Why it is better to reuse rather than recycle

Given the substantial environmental cost of production it is important that we recover the full productive value of every PC through reuse before eventually recycling it to recover parts and materials at its true end-of-life. A refurbished computer can provide at least another three years’ productive life.

How does the WEEE directive affect UK Universities?

Since July 2007 the Waste Electrical and Electronic Equipment (WEEE) Directive has been in force. The WEEE directive is an EU initiative which aims to minimise the impact of electrical and electronic goods on the environment, by increasing reuse and recycling and reducing the amount of WEEE going to landfill.

The WEEE directive affects every organisation and business that uses electrical equipment in the workplace. The regulations cover all types of electrical and electronic equipment including the obvious computers, printers, fax machines and photocopiers, as well as fridges, kettles and electronic pencil sharpeners. The regulations state that business users are responsible, along with producers, for ensuring their WEEE is correctly treated and reprocessed. The regulations encourage the reuse of whole appliances over recycling. When you are disposing of your IT equipment you must ensure that it is sent to an organisation that has been approved by the Environment Agency to take in WEEE, and which will provide you with Waste Transfer Notes for your equipment.

Do I need to worry about data security?

Under the Data Protection Act 1998 it is your responsibility to destroy any data that may be stored on the machines. Just hitting the delete button is not enough to wipe the data. To ensure you are protected make sure any organisation you use to dispose of your IT equipment uses a professional data wiping solution that has been approved by CESG or similar.

An environmentally friendly and socially responsible solution to your unwanted IT equipment

Donating your unwanted IT equipment to a charity such as Computer Aid International is both environmentally friendly and socially responsible. You will be fully complying with the WEEE directive and benefiting from a professional low cost PC decommissioning service, which includes free UK Secret Services approved Ontrack Eraser data wiping.

Computer Aid is the world’s largest provider of professionally refurbished PCs to the not-for-profit sector in the developing world. It has been in the business of IT refurbishing for over 14 years. The charity’s aim is to reduce poverty through practical ICT solutions.

To date Computer Aid has provided just under 200,000 fully refurbished PCs and laptops – donated by UK universities and businesses – to where they are most needed in schools, hospitals and not-for-profit organisations in over 100 countries, predominantly in Africa and Latin America. In order for Computer Aid to continue with its work it relies on universities and companies donating their unwanted computers to them.

Schools and universities in the developing world using a PC professionally refurbished by Computer Aid will enjoy at least three years’ more productive PC use. This effectively doubles the life of a PC, halving its environmental footprint, whilst enabling some of the poorest and most marginalised people in the world to have access to computers.

Anja ffrench

Director of Marketing and Communications
Computer Aid International
10 Brunswick Industrial Park
Brunswick Way, London, N11 1JL
Registered Charity no. 1069256

Tel: +44 (0) 208 361 5540
Fax: +44 (0) 208 361 7051



Computer Aid International is the world’s largest and most experienced not-for-profit provider of professionally refurbished PCs to developing countries. We have provided over 185,000 computers to educational institutions and not-for-profit organisations in over 100 different countries since 1998. Our aim is to reduce poverty through practical ICT solutions.

Posted in Gadgets, Guest-post | Leave a Comment »

My Technological Highlight of 2011

Posted by Brian Kelly on 23 December 2011

What has been the big new thing of 2011? Was this the year in which Facebook succumbed to personal concerns over privacy, ownership of content and legal threats, with users moving in large numbers to the safe environment provided by Diaspora? I think not. Similarly, although Google+ has had more of an impact than Diaspora, the early adopters still seem unconvinced that it can provide significant benefits over, say, Twitter.

Perhaps 2011 has been the year of the mobile, with a range of new devices and applications transforming our work and study environment? When I asked for a show of hands at the start of the IWMW 2011 event for people who had a mobile device with them, the sea of hands was unexpected. But I also found that significant numbers had brought along multiple mobile devices and, in response to a question as to whether people preferred to use a handheld device rather than, say, a laptop whilst at home in front of the TV, I was pleased to discover that I am not alone in using my mobile phone rather than my laptop when I wish to look up the TV guide, the football scores or take part in a Twitter discussion. But to be honest I feel that the growth in the importance of mobile has been gradual, with no sudden large-scale change being noticeable, not even after the subdued launch of the latest iPhone – although whether we will see the expected large numbers of Android tablet PCs being bought this Christmas (and cheaper models in the January sales) making 2012 the year of mobile remains to be seen.

Or did 2011 see the belated arrival of Linked Data? Again, despite the feeling that more pragmatic approaches to linking data from disparate sources are becoming accepted, Linked Data doesn’t seem to have set the world alight yet.

I don’t think there has been a significant new major technical development during 2011. But for me 2011 has been the year in which amplified events have started to grow beyond their roots in technologically-focussed events to become more widely embedded.

But what evidence do I have to back up this assertion? It does seem that we are finding that delegates at conferences:

  • Expect events to have a WiFi network so that they can discuss talks with other attendees and share their thoughts with a remote audience.
  • Expect event organisers to provide an event hashtag to make the event back channel easy to find.

In addition we seem to be finding that speakers:

  • Are willing to be live streamed.
  • Appreciate that delegates who are using their mobile devices during their talks are likely to be actively engaged in the topic and helping to engage others in discussing the ideas.

There is also a growing expectation that large-scale events will provide dedicated effort to support such activities:

  • An event amplifier who will be responsible for expanding the audience, enhancing the experience and spreading and sharing ideas.
  • Technical support to manage video-streaming and/or recording of talks.

I’m looking forward to participating in more amplified events in 2012. But what have your technological highlights of 2011 been?

Posted in General | Leave a Comment »

The (Technology) Ghosts of Christmas Past and Present and Christmas Yet To Come

Posted by Brian Kelly on 22 December 2011

The Technology Ghosts of Christmas Past and Present

We are approaching not only the end of the year but also, if you start counting at ‘1’ rather than ‘0’, the end of the millennium’s first decade. It is therefore timely to consider the developments which may be influential for the next decade (I feel that large-scale collaborative and communications technologies will result in Collective Intelligence being significant for the sector, helped by a continuing trend towards Openness); the new technologies of a few years ago which were initially dismissed as irrelevant and unsustainable but are now used by many mainstream users (in December 2009 I asked 2009 – The Year Of Twitter?; I now wonder when not having a Twitter account will be regarded as odd); and also technologies which have been widely used in the past but now seem to be in decline.

In this post I’ll avoid temptations to be speculative about emerging and emerged technologies and reflect on an aspect of IT which I first started using in, if I recall correctly, 1983 and have used on a variety of platforms, from Prime and VAX mini-computers, Multics and IBM mainframes, through to today’s PC and Apple Macintosh desktop computers and Android and Apple phones and tablet computers. I’ve also used the default mail application on various platforms as well as Pegasus, Eudora, Outlook, Thunderbird and K-9 email clients. We can truly say that email has proved itself to be popular, ubiquitous, platform- and application-independent and clearly long-lived. Email, we can safely say, provides an example in which the IT profession should be pleased to have delivered such a well-liked and robust service.

But is this really the case? Are we starting to see weak signals which suggest that email may be in decline? Might we be in the early stages of a move away from use of email towards an environment in which other forms of collaboration, communication and dissemination tools may provide benefits which email may fail to provide?

“Email is Dying”

At the ILI 2005 conference in London in October 2005 I gave a talk entitled “Email Must Die!” in which I, rather provocatively, argued that information professionals, in particular, were well-placed to appreciate the implications of the suggestion that “E-mail is where knowledge goes to die” and should be willing to take a lead in exploiting the variety of Web 2.0 tools which were starting to emerge at the time and which could address the various well-known deficiencies of email: the spam; the duplication of information; office politics based on use of cc: and bcc:; the lack of structure; the difficulties of content reuse; etc.

A subsequent Ariadne article with the rather more hesitant question “Must Email Die?” discussed these issues in more depth and outlined how technologies such as blogs, wikis, instant messaging, RSS, Skype and other VoIP systems could all replace various uses for which email has traditionally been used.

Two years later, in May 2007, a post entitled “Email IS Dying” referenced an article on “Firms to embrace Web 2.0 tools” published in the Computing newsletter from an original article published in a Gartner report. This article reminded me of a UCISA Poll on Instant Messaging published in 2004 in which a correspondent from the University of Bath stated that “mail seen by younger people to be ‘boring’ ‘full of spam’, IM and SMS immediacy preferred“.

The Gartner report described how:

MySpace and FaceBook are the most successful community environments on the planet because they have pulled people away from email, which is the one thing that nothing else has managed to do so far.

Facebook has clearly developed significantly in its user base and functionality since Gartner published the report in 2007 although, on the other hand, MySpace has declined significantly. Perhaps the uncertainty as to who would ‘win’ in the battle over the social networking environments – a battle which is irrelevant for email users, for whom application independence has always been a key feature – has been a barrier to takeup of alternatives to email?

Are Email Lists on Life-Support?

The talks and articles which were presented and published over five years ago were meant to highlight to both early adopters and policy makers that there might be significant changes in the offing which advance planning would need to consider. At the time the suggestion of a growth in importance of instant messaging (in itself, not a new technology, but one which had previously had little significant role in mainstream university activities) was meant to highlight a possible need to change institutional acceptable use policies which may previously have banned instant messaging services as having no useful role in supporting teaching and learning or research activities. I suspect that use of instant messaging technologies is now widely permitted across the sector, perhaps because of an acknowledgment of the value of instant messaging, but also possibly due to the difficulty of banning such technologies, which now seem to be provided within many networked environments.

But although there is a need for advocacy and highlighting potential changes there is also a need to monitor changes in order to see if predictions are coming true or not.

In June 2010 a post on The Decline in JISCMail Use Across the Web Management Community documented evidence of 10 years’ use of two JISCMail lists which clearly demonstrated the decline in usage since about 2004 (illustrated in the accompanying graph of JISCMail usage).

A follow-up survey which explored use of JISCMail by the Dublin Core community was described in a post on DCMI and JISCMail: Profiling Trends of Use of Mailing Lists. This showed that although the overall number of lists is still growing, the total volume of traffic has been in decline since 2005. That survey caused me to speculate that new lists which have been created are failing to stimulate discussion and debate and are merely used to replicate postings advertising events, job vacancies and similar broadcast announcements across a range of lists. Although the limited interface options to JISCMail lists meant that I was not able to validate this speculation, in a post entitled Are Mailing Lists Now Primarily A Broadcast Medium? I did discover some small-scale evidence which backs up this assertion for a number of lists to which I subscribe.
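Detecting such a decline need not rely on eyeballing a graph. A minimal sketch of the approach in Python, using a least-squares trend over hypothetical monthly message counts (the real figures would need to be gathered from the JISCMail archive pages):

```python
def trend_slope(counts):
    """Least-squares slope of a series of monthly message counts.
    A negative slope indicates declining traffic on the list."""
    n = len(counts)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(counts) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, counts))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

# Hypothetical monthly totals for a list in decline (for illustration only)
monthly = [120, 115, 100, 90, 85, 70, 60, 55, 40, 35]
print("declining" if trend_slope(monthly) < 0 else "growing")
```

The same calculation run across a set of lists would allow the broadcast-only speculation to be tested at scale rather than anecdotally.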

Email lists are clearly still being used but evidence is starting to question their value. But at least email lists work across platforms. Or do they?

Are Email Lists Really Interoperable?

Client Limitations

A somewhat tongue-in-cheek post by Scott Wilson describes a Revolutionary messaging technology which will challenge FB, Twitter and IM, and which:

  • Works on all kinds of devices and across all networks
  • You can search, read and respond to messages even when you’re offline
  • Works with intelligent filtering services
  • You can send and receive messages with anyone on any network, not just the same service provider you use
  • The server code is open source so you can run your own
  • Completely distributed architecture with no central server or hub node
  • Uses open standards for pretty much everything
  • Clients for all platforms including mobile, even TV – and anyone can make their own client as the API isn’t proprietary

Of course Scott is describing email. Scott goes on to add that:

However, not everyone is convinced yet and think that we should stick with proprietary messaging silos tied to one service provider such as Facebook and Twitter, despite the obvious risk of these services being discontinued, monetized, tracking your communications for nefarious purposes, and spamming you with advertising at any opportunity. 

But is email really as interoperable as has been suggested? I used to think that email was interoperable – until I started to use email clients on a variety of platforms.

I’ve experienced particular problems with reading digests of messages from JISCMail lists. This is my preferred way of using mailing lists, as it helps to minimise the number of messages arriving in my incoming mail folder. However, despite having been able to view messages successfully using the digest’s MIME interface in the past, since moving to new email clients I have found that such messages either can’t be viewed (on an Apple Macintosh or iPod Touch email client) or have to be viewed in Notepad (using Thunderbird on an MS Windows platform).
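The underlying issue is how clients unpack the multipart/digest MIME structure in which such digests are delivered. As a sketch of what a well-behaved client has to do (using Python’s standard email library on a minimal, purely illustrative digest rather than a real JISCMail one):

```python
from email import message_from_string

# A minimal MIME digest of the kind list servers send (illustrative only)
raw_digest = """\
From: listserv@example.org
To: subscriber@example.org
Subject: Example list digest
MIME-Version: 1.0
Content-Type: multipart/digest; boundary="DIGEST"

--DIGEST
Content-Type: message/rfc822

From: alice@example.org
Subject: First message

Body of the first message.

--DIGEST
Content-Type: message/rfc822

From: bob@example.org
Subject: Second message

Body of the second message.

--DIGEST--
"""

def digest_subjects(raw):
    """Return the Subject header of each message bundled in a MIME digest."""
    digest = message_from_string(raw)
    subjects = []
    for part in digest.walk():
        # Each bundled message is wrapped in a message/rfc822 part
        if part.get_content_type() == "message/rfc822":
            inner = part.get_payload(0)  # the embedded Message object
            subjects.append(inner["Subject"])
    return subjects

print(digest_subjects(raw_digest))  # → ['First message', 'Second message']
```

Clients which fail to recognise the message/rfc822 parts fall back to showing the raw payload – which matches the ‘view it in Notepad’ behaviour described above.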

HTML and Email

A W3C Note on Conventions for use of HTML in email was published way back in January 1998. However, it wasn’t until May 2007 that the W3C organised a W3C HTML Mail Workshop, and the minutes failed to provide details of any actions which arose from the meeting. It does appear that, despite the paper on Web standards: a must for html email which was presented at the meeting, there is a lack of agreed standards for how HTML should be used in email, resulting in IT Service departments, such as Glasgow University’s, “recommend[ing] sending ‘plain text’ email instead of HTML or rich text email, particularly if sending email to a large distribution list“. Despite suggestions that we should be moving towards use of more semantically rich content, we do seem to often discard the simple structural elements provided in HTML when we make use of email.

Technical Challenges in Reusing Email Content

As well as the lack of visual cues which can be provided by HTML, I am also aware that software developers who wish to process content held in email archives can find it difficult to handle the variety of ways in which messages and accompanying attachments can be stored.

Email has been described as the place “where knowledge goes to die“. A cynic might also regard mailing lists as a DRM system which makes it harder for content to be reused!

Email is Happy in its Rest Home?

Two years ago Esther Steinfeld asked people to Stop Saying Email is Dying. It’s Not. But last week an article on the Financial Times Web site (free subscription needed to view the article) reported how:

When Thierry Breton, chief executive of Atos, said the IT services company would ban use of internal email by 2014, it caused a sensation across the media, with commentators describing the idea as either “brave”, “stupid” or doomed to failure.

but went on to point out that:

a number of companies have been quietly moving away from using email as the primary way of communicating within the company.

The article described how companies such as Capgemini are making use of social networking tools such as Yammer to replace some of the functionality traditionally provided by email, with Capgemini stating that “it has reduced its internal email traffic by 40 per cent in the 18 months since staff began using Yammer“.  Capgemini, together with companies such as Klick and Atos, continue to use email for communicating with people outside the companies and expect that email will continue to exist in some form for many years to come. However email management consultant Monica Seely suggested that “In three to five years we will see a more pluralistic landscape with messages being transferred to some kind of social media platform. But email will remain a bedrock of businesses for some time to come.“

A post on the Social Media in Organisations blog entitled The “End” of Email: Reflections from a Digital Era Thinker also highlighted “the recent statement made by Thierry Breton, CEO of Atos, about the “elimination” of email at the company [which] churned up quite a bit of controversy in cyberspace” and suggested that “It All Boils Down to Leadership“.

The (Technology) Ghosts of Christmas Yet To Come

This post was initially entitled “Reflections on the Slow Death of Email“. But since there have been 10 responses in May 2007, 14 responses in June 2010, 3 responses in December 2010 and 8 responses in May 2011 to previous posts on this topic, rather than revisiting the discussions on the flaws and merits of email we need to accept that there will be a divergence in views on the merits of email and on the merits of promoting changes or accepting user preferences.

It should also be clear that a move towards making greater use of richer alternatives to email isn’t simply a matter of leadership, as was suggested above. In the commercial sector companies may find it easier to enforce policy decisions about technologies, as was seen when WH Smiths made the business decision to stop selling LPs. In the public sector, however, there is a need to support sectoral needs rather than being driven purely by commercial interests. And since it is clear that there is no consensus in support of a move away from email, the suggestion that it all boils down to leadership does seem incorrect.

For me, therefore, a broader question raised by considering the slow decline in email is “What technologies do we have today which we might like to replace and how do we, if at all, address a reluctance to change?“

An example of a technology which some people expected to experience a sharp decline was Microsoft Windows and Microsoft Office applications. Back in the mid to late 1990s I can recall people arguing that due to factors including:

  • The cost of these products
  • The proprietary nature of the products
  • Legal moves within the EU and the US based on possible illegal selling practices
  • The growing maturity of open source alternatives such as Linux, StarOffice and OpenOffice

we would see Microsoft decline in importance.

This clearly didn’t happen. Microsoft is still around but is now facing other threats including a renewed popularity of Apple Macintosh computers and a growth in mobile devices, including smart phones and tablet computers, with Apple and Android providing the main threats.

But writing off Microsoft can be easy (and tempting) to do. It will be more interesting to think about other areas of technology in which we might expect innovations to replace existing well-established products and services, but subsequently find that users are content with the existing working patterns, even if flawed, and remain unconvinced that it is worth making a change. I’d welcome your suggestions.

Posted in General | 8 Comments »

Final Reports from UKOLN’s Evidence, Impact, Metrics Work

Posted by Brian Kelly on 21 December 2011

During 2010-11 I led UKOLN’s Evidence, Impact, Metrics work. The aim of this work was to identify best practices for gathering quantitative evidence and supporting metrics on the use of networked services to support institutional and project activities.

An Evidence, Impact, Metrics blog was set up on the UKOLN Web site, but the usage statistics for the first few blog posts provided evidence of a lack of use of the blog. This evidence led to a decision to set up an Evidence category on the UK Web Focus blog, which was used for related posts published on this blog. The aim of the blog posts was to raise awareness of the importance of metrics, explore ways of gathering and interpreting such metrics and encourage discussion on the advantages and disadvantages of using metrics, leading to recommendations on how metrics can be used.

As described in a report on Blog Posts about Evidence of Impact, by 13 December 2011 there had been 28,590 views of the 35 relevant posts published in this category. In addition there had been 275 comments, although the numbers for the comments include trackbacks and may also contain automatically-generated links from other WordPress blogs which may subsequently be deleted.

This example provides an illustration of how metrics can be used. It should be noted that this does not say anything about the quality or relevance of the posts. It also summarises ways in which the metrics may be misleading (note that it was only when updating the figures on the numbers of comments posted on the blog that I became aware that automatically generated links to posts on this blog may subsequently be deleted).

The final report on this work has been published on the Evidence, Impact, Metrics blog. The report has been produced as a series of self-contained documents which are suitable for printing as well as being published in HTML format.

The following sections of the report are available:

  • Why the Need for this Work?:  This document provides the background to the work.
    [HTML] – [MS Word]
  • Summary of Events:  This document provides a summary of the three one-day workshops and talks given at other events.
    [HTML] – [MS Word]
  • Summary of Blog Posts:  This document provides a summary of the blog posts published related to this work.
    [HTML] – [MS Word]
  • Feedback from the Second Workshop: This document provides a summary of the feedback received at the second one-day workshop.
    [HTML] – [MS Word]
  • Summary of the Final Workshop:  This document provides a report on the third and final one-day workshop.
    [HTML] – [MS Word]
  • A Framework For Metrics:  This document provides a summary of the lightweight framework developed for gathering quantitative evidence.
    [HTML] – [MS Word]
  • Metrics FAQ:  This document provides an FAQ about the metrics work.
    [HTML] – [MS Word]

Note that the MS Word files are intended for printing in A5 format on a printer which supports double-sided printing. For a number of the reports the content is duplicated to enable A5 summaries to be printed. The HTML versions contain the same information in a more universal format.

As can be seen from the altmetrics manifesto, the research community has strong interests in developing metrics which can help to identify evidence of value related to various aspects of research activities. The manifesto highlights the changes in the ways in which research is being carried out and points out that “as many as a third of scholars are on Twitter, and a growing number tend scholarly blogs“.

The Evidence, Impact, Metrics work has sought to engage in a related area of work for those involved in both project and service activities who wish to make use of new approaches, and for whom metrics can help to identify the value (or not) of new ways of working and share examples of appropriate best practices. Feedback on this work is welcomed.

Twitter conversation from Topsy:  [View]

Posted in Evidence | 2 Comments »

The Failure of Citizendium

Posted by Brian Kelly on 20 December 2011

Remembering Citizendium

A few days ago I read Steve Wheeler’s post on Content as Curriculum?, having been alerted to it by Larry Sanger’s post on An example of educational anti-intellectualism, to which Steve provided a riposte in which he argued the need to Play the ball, not the man.

From the blog posts I learnt that Larry Sanger is a co-founder of Wikipedia and, as described on his blog is the “‘Founding Editor-in-Chief’ of the Citizendium, the Citizens’ Compendium: a wiki encyclopedia project that is expert-guided, public participatory, and real-names-only”.

I have to admit that I had forgotten about Citizendium, but the little spat caused me to revisit the Web site. While searching I came across a discussion entitled Why did Citizendium fail? and yes, it does seem that this “endeavor to achieve the highest standards of writing, reliability, and comprehensiveness through a unique collaboration between Authors and Editors” has failed. But although we often talk about success criteria, it can be more difficult to identify failure. How, then, can we describe Citizendium as a failure?

Experiences With Citizendium

A few years ago I signed up for a Citizendium account. In order to register you need to provide your real name and include “a CV or resume … as well as some links to Web material that tends to support the claims made in the CV, such as conference proceedings, or a departmental home page. Both of these additional requirements may be fulfilled by a CV that is hosted on an official work Web page“.

I registered as I felt that if Citizendium became successful, being an author could provide a valuable dissemination channel for those areas in which I have expertise. In particular I had an interest in helping to manage the Web accessibility entry in Citizendium. However I found that I did not have the time – or inclination – to edit this article. Looking at the article today it seems that the “page was last modified 09:25, 10 January 2008” and “has been accessed 221 times“. It is perhaps good news that the page has been viewed so little, as it is not only very out-of-date but is also poorly written. It also seems that no content has been added to the Talk, Related Articles, Bibliography or External Links pages.

In comparison we can find that the Web Accessibility entry in Wikipedia has been edited 575 times by 277 users. There were also 10,911 views in November 2011.


There may be those who argue that Citizendium isn’t a failure, but instead has a valuable role to play in a particular niche which is not being addressed by Wikipedia. But how can this argument be made when Citizendium’s aim to “endeavor to achieve the highest standards of writing, reliability, and comprehensiveness through a unique collaboration between Authors and Editors” results in entries such as this one on Silverlight vs Flash:

With the rocket development of Internet, the techniques used for building web pages is improving all the time, which not only brings people more information but new experience of surfing on the Internet. Many techniques have been applied to enrich the web page these years, from totally the plaintext in early 90’s, first to web page with pictures and then that with embedded sounds. Later, Sun Microsystems proposed Java Applet, which was popular for not long time until being conquered by Adobe Flash.

Back in March 2008 the Citizendium FAQ asked the question:

How can you possibly succeed? Wikipedia is an enormous community. How can you go head-to-head with Wikipedia, now a veritable goliath?

The solid interest and growth of our project demonstrates that there are many people who love the vibrancy and basic concept of Wikipedia, but who believe it needs to be governed under more sensible rules, and with a special place for experts. We hope they will join the Citizendium effort. We obviously have a long way to go, but we just started. Give us a few years; Wikipedia has had a rather large head start.

Three and a half years later it seems clear that the online encyclopedia “governed under more sensible rules, and with a special place for experts” has been unable to compete with the “vibrancy and basic concept of Wikipedia“.

I’m pleased that Steve Wheeler’s link to Larry Sanger’s blog post helped me to remember my initial curiosity regarding the more managed approach to gathering experts’ knowledge provided by Citizendium and demonstrated the failings in such an approach. Let’s continue making Wikipedia even better is my call for 2012.

Posted in General, Wikipedia, Wikis | Tagged: | 8 Comments »

The Half Term Report on Cookie Compliance

Posted by Brian Kelly on 15 December 2011

The EU’s Privacy and Communications Directive

Back in May 2011 I wrote about the EU’s Privacy and Communications Directive, which officially came into force on 26 May 2011, the day the post was published. However, as I described at the time, “the good news is that the ICO has recognised the complexities in implementing this legislation“, with UK websites being given a year to comply with EU cookie laws.

My initial post was followed by a report on a survey of cookie usage on institutional home pages. This helped to identify how cookies are currently being used on the institutional home page for a selected group of institutions, explore a tool which can be used to report on the various types of cookies, and raise awareness of the importance of institutional activity in this area, in particular in identifying cookie usage and ensuring that documentation on such usage is provided for visitors to the institutional web site.

Update On Institutional Activities

Over six months since those two posts were published, how are institutions responding to the year’s grace which the ICO has granted?

There has been some discussion on the website-info-mgt JISCMail list on how institutions should respond. Back in May, Claire Gibbons, Senior Web and Marketing Manager at the University of Bradford, initiated a discussion on the Changes to the rules on using cookies and similar technologies for storing information which seems to have been the liveliest discussion on the list all year. The following month Web managers became aware of the news that 90% of visitors declined the ICO website’s opt-out cookie and were worried that implementation of the legislation would result in a similar loss of traffic to UK University Web sites.

Six months on, on 13 December the ICO announced a new set of Guidelines on the Rules on use of Cookies and Similar Technologies (available in PDF format) in a blog post entitled Half term report on cookies compliance. It seems that they have taken a pragmatic approach which describes realistic and implementable solutions for Web site managers.

If you have responsibility for managing a Web site I would advise you to read this 26-page report. Some of the key points are given below, with my personal comments.

  • “The changes to the Directive in 2009 were prompted in part by concerns about online tracking of individuals and the use of spyware. These are not rules designed to restrict the use of particular technologies as such, they are intended to prevent information being stored on people’s computers, and used to recognise them via the device they are using, without their knowledge and agreement.” [Page 2]
    Comment: Universities should recognise the benefits of these intentions.

  • “The initial effort is where the challenge lies – auditing of cookies, resolving problems with reliance on cookies built into existing systems and websites, making sure the information provided to users is clear and putting in place specific measures to obtain consent.” [Pages 3-4]
    Comment: A good summary of what institutions need to do.

  • “Most importantly user awareness will be likely to increase as people become used to being prompted to read about cookies and make choices. A variety of consumer initiatives – such as the use of icons to highlight specific uses of cookies will also help in this area.” [Page 4]
    Comment: User education is key.

  • “Setting cookies before users have had the opportunity to look at the information provided about cookies, and make a choice about those cookies, is likely to lead to compliance problems. The Information Commissioner does however recognise that currently many websites set cookies as soon as a user accesses the site. This makes obtaining consent before the cookie is set difficult. Wherever possible the setting of cookies should be delayed until users have had the opportunity to understand what cookies are being used and make their choice. Where this is not possible at present websites should be able to demonstrate that they are doing as much as possible to reduce the amount of time before the user receives information about cookies and is provided with options. A key point here is ensuring that the information you provide is not just clear and comprehensive but also readily available.” [Page 6]
    Comment: The guidelines acknowledge the difficulties in implementing best practices and provide a mechanism for documenting decisions.

  • “You should also consider whether users who might make a one-off visit to your site would have a persistent cookie set on their device. If this is the case, you could mitigate any risk that they would object to this by shortening the lifespan of these cookies or, where possible given the purpose for using them, making them session cookies.” [Page 6]
    Comment: The guidelines accept that a risk assessment strategy may be appropriate.

  • “This shared understanding is more likely to be achieved quickly if websites make a real effort to ensure information about cookies is made clearly available to their users, for example, displaying a prominent link to ‘More information about how our website works and cookies’ at the top of the page rather than through a privacy policy in the small print.” [Pages 6-7]
    Comment: This highlights the importance of a consistent user interface for privacy information.

  • “The Information Commissioner is aware that there has been discussion in Europe about the scope of this exception. The argument has been made in some areas that cookies that are used for resource planning, capacity planning and the operation of the website, for example, could come within the scope of the exemption. The difficulty with this argument is that it could equally be made for advertising and marketing cookies (whose activities help to fund websites). The intention of the legislation was clearly that this exemption is a narrow one and the Commissioner intends to continue to take the approach he has outlined clearly in published guidance since the 2003 Regulations were introduced.” [Page 9]
    Comment: Analytics code which uses cookies will be subject to the guidance.

  • “Government is working with the major browser manufacturers to establish which browser level solutions will be available and when. In future many websites may well be able to rely on the user’s browser settings as part, or all, of the mechanism for satisfying themselves of consent to set cookies.”
    Comment: Standards-based privacy solutions provided by browsers will be important in the future.

  • “First steps should be to: 1. Check what type of cookies and similar technologies you use and how you use them. 2. Assess how intrusive your use of cookies is. 3. Where you need consent – decide what solution to obtain consent will be best in your circumstances.” [Page 12]
    Comment: Clear instruction on what institutions should be doing now.

  • “The Information Commissioner will take a practical and proportionate approach to enforcing the rules on cookies. He has to enforce the law, but he does have some discretion in how he exercises his formal enforcement powers.” [Page 24]
    Comment: The Commissioner is more likely to exercise discretion if organisations can show they are seeking to implement best practices.

  • “We will be keeping the situation under review and will consider issuing more detailed advice if appropriate in future. However, we do not intend to issue prescriptive lists on how to comply. You are best placed to work out how to get information to your users, what they will understand and how they would like to show that they consent to what you intend to do. What is clear is that the more directly the setting of a cookie or similar technology relates to the user’s personal information, the more carefully you need to think about how you get consent.” [Page 26]
    Comment: Further guidance may be produced in light of experience.

  • “In our view the rules do not apply in the same way to intranets.” [Page 26]
    Comment: This seems to suggest that the legislation does not cover content which is hosted on intranets, VLEs, etc.
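The first of the recommended steps – checking what types of cookies and similar technologies you use – lends itself to a simple automated audit. As a minimal sketch (in Python; the Set-Cookie headers shown are illustrative, not taken from any particular institutional site), the session/persistent distinction drawn in the guidelines can be checked directly from a site’s response headers:

```python
from http.cookies import SimpleCookie

def classify_cookies(set_cookie_header):
    """Classify each cookie in a Set-Cookie header as 'session' or
    'persistent' - the distinction the ICO guidelines draw attention to."""
    cookie = SimpleCookie()
    cookie.load(set_cookie_header)
    results = {}
    for name, morsel in cookie.items():
        # A cookie with an Expires or Max-Age attribute outlives the session
        persistent = bool(morsel["expires"] or morsel["max-age"])
        results[name] = "persistent" if persistent else "session"
    return results

# Illustrative headers: an analytics-style persistent cookie, then a session cookie
print(classify_cookies("_ga=GA1.2.123; Expires=Wed, 01 Jan 2014 00:00:00 GMT; Path=/"))
print(classify_cookies("JSESSIONID=abc123; Path=/; HttpOnly"))
```

Running such a check across a site’s pages, and recording which cookies are persistent and why, would provide exactly the sort of documented audit trail the guidelines ask for.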

My optimistic interpretation of the guidelines seems to be shared by Matt Jukes who, on the Digital by Default blog, yesterday suggested that we might be seeing A crack in the cookie craziness? Matt felt that “The final entry in the FAQ offers a glimmer of hope for those of us stressing about losing access to our usage data“, although his views were tempered slightly by some concerns that “the wording seems intentionally vague and non-committal” which may “scare a lot of public sector organisations into total compliance“.  Overall, however, Matt was reassured that the guidelines “does at least seem to be saying that noone is going to prosecute you for using Google Analytics – especially if you make some concerted effort to inform and educate your users about the existence of those Cookies“.

Further commentary on the guidelines has been provided by Ranjit Sidhu on the Sidspace blog. Ranjit comments that:

This is the key statement “Which method (of consent) will be appropriate to get for cookies will depend in the first instance on what cookies you use” – In other words- ‘we are not making a blanket ban- check what you are doing, if you are not being evil and creating a profile on the user without them knowing with a persistent cookie, then be sensible, do all that we have told you to do and you will be ok. And to confirm….

On the last page (p 27) specifically on “analytical cookies” they say ” In practice we would expect you to provide clear information to users about analytical cookies and take what steps you can to seek their agreement…… Provided clear information is given about their activities we are highly unlikely to prioritise first party cookies used only for analytical purposes in any consideration of regulatory action.”

Ranjit’s post concludes:

As a last point as I know there has been a lot of talk on this, and plenty of scare stories peddled by legal practitioners in particular, make sure you and your bosses are aware as to the enforcement of this (p24 of the report). The ICO will first issue an information notice if they think the organisation is doing something wrong, then ask it to take an “undertaking” notice which asks the organisation to change some practice to comply or an “enforcement” notice to make it comply, only finally if your organisation totally doesn’t listen at all will be fined! In other words, it is about the ICO helping organisations comply and improve rather then jumping out of the blue on organisations naming them as illegal and shutting them down. There are some industries this is going to effect badly…newspapers etc.. but honestly, what you Uni’s do in tracking is very, very low in its privacy implications.

I should probably add that neither Ranjit nor I are lawyers so our posts should not be construed as providing legal advice! However we are both in agreement that the important step for institutions is to follow the guidelines which state:

First steps should be to:

    1. Check what type of cookies and similar technologies you use and how you use them.
    2. Assess how intrusive your use of cookies is.
    3. Where you need consent – decide what solution to obtain consent will be best in your circumstances.

followed by “provid[ing] clear information to users about analytical cookies and take what steps you can to seek their agreement“.

Many institutions will use technologies such as Google Analytics for which documentation will need to be provided. In addition there will be other commonly-used systems, such as content management systems, for which shared approaches in documenting information about the purposes of the information being gathered and the approaches to seeking user agreement would be beneficial.

Claire Gibbons, Senior Web and Marketing Manager at the University of Bradford, is currently developing guidelines for the University which she has described in a post on Cookies and legislation – some thoughts and a sector invite. As suggested by the title, Claire would like to invite others to contribute to:

 a Google spreadsheet … to store our info so we can share and learn from each other [in areas including:]

    • Institution name
    • Audit done
    • Types of cookies used
    • Technologies used
    • Where consent is needed
    • Any other comments
    • Link to published or draft policy

This initiative, which is being driven by practitioners, is to be welcomed. Textual information, such as details of policies, processes, etc. can be added to the Google Document on Cookie Policies. In addition a Google Spreadsheet on UK HEI Privacy Policies is also available which can be used to provide links to privacy policies and provide brief comments. Finally Delicious users may wish to add a link to their privacy policies using the privacy-uk-heis tag so that their contribution can be included in an aggregation of tagged resources (although note that following recent changes to the Delicious service the usefulness of this service is currently uncertain).

Why You Should Actively Engage

As Ranjit points out, the ICO will be “helping organisations comply and improve rather then jumping out of the blue on organisations naming them as illegal and shutting them down“. He also notes that some sectors are likely to be affected badly. If the higher education sector can be seen to be implementing appropriate and achievable best practices, respecting users’ needs whilst understanding the difficulties of a blunt implementation of the legislation, this will be beneficial for the sector as a whole. I do hope you will spend a small amount of time in commenting on this post and on Claire’s, in providing links to your policy statements so that others can learn from their peers, and in documenting other aspects of this work which may be useful to others.

It should also be remembered that responsibility for responding to the cookie legislation will extend beyond those working in institutional Web management teams. Clearly it will also be important for institutions which have a devolved approach to Web management. But responsibilities must also be shared by individuals who provide Web content, whether hosted within their institution or by third party services.

I have just added a widget in the right-hand sidebar of this blog which describes how the company which hosts the blog makes use of Google Analytics. I have gone beyond the issue of cookies by reminding people who leave comments on this blog that they are required to provide an email address. I have now published a policy which states that such email addresses will not be disclosed.

Is this an approach which we can recommend to others?

Posted in Legal | 12 Comments »

Beyond Blogging as an Open Practice, What About Associated Open Usage Data?

Posted by Brian Kelly on 14 December 2011


Should Projects Be Required To Blog? They Should Now!

A recent post on Blogging Practices Session at the JISC MRD Launch Event (#jiscmrd) provides access to the slides, hosted on Slideshare, used at the JISC MRD Programme Launch Meeting. In the talk I reflected on the discussion on Should Projects Be Required To Have Blogs? which took place initially on Twitter and then on this blog in February 2009.

The context to the discussion was described by Amber Thomas: “I should clarify that my colleagues and I were thinking of mandating blogs for a specific set of projects not across all our programmes“. During the discussion the consensus seemed to be that we should encourage a culture of openness rather than mandate a particular technology such as blogs. One dissenting voice was Owen Stephens, who commented “I note that Brian omitted one of my later tweets – not sure if this was by mistake or deliberately because he recognised it for a slightly more light-hearted comment ‘i say mandate – let them write blogs!’ – but I wasn’t entirely joking“.

Owen’s view is now becoming more widely accepted across the JISC development environment, with a number of programmes, including the recently established JISC Managing Research Data programme and the open JISC OER Rapid Innovation call, requiring funded projects to provide blogs. The current call (available in MS Word and PDF formats) states that:

In keeping with the size of the grants and short duration of the projects, the bidding process is lightweight (see the Bid Form) and the reporting process will be blog-based

and goes on to state that:

We would also expect to see projects making use of various media for dissemination and engagement with subject and OER communities, including via project blogs and twitter (tag: ukoer)

I’m pleased that JISC have formalised this requirement as I feel that blogs can help to embed open working practices in development activities, as well as providing access to information which is more easily integrated into other systems and viewed on a variety of devices than the formats normally used for reporting purposes.

But how should projects go about measuring the effectiveness of their blogging processes? And should the findings be made openly available, both as part of the open practices which projects are being encouraged to adopt and as data released under an appropriate open licence – as we might expect for data associated with these two programmes in particular – unencumbered by licensing restrictions which may be imposed by publishers or other content owners?

Openness for Blog Usage Data

In addition to providing project blogs there may be a need to be able to demonstrate the value of project blogs. And as well as the individual blogs, programme managers may wish to be able to demonstrate the value of the aggregation of blogs. But how might this be done?

A simple approach would be to display a public usage statistics widget on the blog. As well as providing usage statistics, such tools should also be able to provide answers to questions such as “Has IE6 gone yet?” and “What proportion of visitors use a mobile device?“. But beyond the tools which we will be familiar with in the context of traditional Web sites, there may be a need to measure aspects which are of particular relevance to blogs, such as comments posted on blogs and links to blogs posted from elsewhere.
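The question about mobile visitors can in fact be answered from server data even without an analytics service. The sketch below is a rough heuristic rather than a definitive classifier: the keyword list and the sample User-Agent strings are my own illustrative assumptions.

```python
# Sketch: estimate the proportion of visitors using a mobile device
# from User-Agent strings (e.g. taken from a web server access log).
# The keyword list is a crude heuristic, not a definitive classifier.

MOBILE_HINTS = ("iphone", "ipad", "android", "blackberry", "mobile")

def is_mobile(user_agent: str) -> bool:
    """Crude check: does the User-Agent mention a known mobile token?"""
    ua = user_agent.lower()
    return any(hint in ua for hint in MOBILE_HINTS)

def mobile_proportion(user_agents) -> float:
    """Fraction of the supplied User-Agent strings classed as mobile."""
    agents = list(user_agents)
    if not agents:
        return 0.0
    return sum(is_mobile(ua) for ua in agents) / len(agents)

if __name__ == "__main__":
    sample = [
        "Mozilla/5.0 (iPhone; CPU iPhone OS 5_0 like Mac OS X) AppleWebKit/534.46",
        "Mozilla/5.0 (Windows NT 6.1; rv:8.0) Gecko/20100101 Firefox/8.0",
        "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)",  # IE6: gone yet?
        "Mozilla/5.0 (Linux; U; Android 2.3.4) AppleWebKit/533.1",
    ]
    print(f"Mobile share: {mobile_proportion(sample):.0%}")
```

A real deployment would read the User-Agent column from the server's access logs rather than a hard-coded list, but the counting logic would be the same.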

A post on Blog Analytic Services for JISC MRD Project Blogs explored this issue and described how tools such as Technorati and eBuzzing may provide lightweight solutions which can help to provide a better understanding of a blog’s engagement across the blogosphere. It should be acknowledged that such tools do have limitations and can be ‘gamed’. However in some circumstances they may help to identify examples of good practice. In addition, gaining an understanding of the strengths and weaknesses of such analytic tools may be helpful in the context of the altmetrics initiative which, in its manifesto, describes how “the growth of new, online scholarly tools allows us to make new filters; these altmetrics reflect the broad, rapid impact of scholarship in this burgeoning ecosystem” and goes on to “call for more tools and research based on altmetrics“.

In a post The OER Turn (which is, according to the author, “the most read post of 2011 on [the JISC Digital Infrastructure] team blog“) Amber Thomas reflects on developments in the Open Educational Resources environment and describes how she now “find[s] [her]self asking what the “Open” in Open Content means” and concludes by asking “What questions should we be asking about open content?“.

My contribution to the discussion is a proposal that those adopting open practices should be willing to provide open access to the usage data associated with those practices.

This was an idea I explored in a post on Numbers Matter: Let’s Provide Open Access to Usage Data and Not Just Research Papers, in which I highlighted the comment published in the JISC-funded report on Splashes and Ripples: Synthesizing the Evidence on the Impacts of Digital Resources which said that:

Being able to demonstrate your impact numerically can be a means of convincing others to visit your resource, and thus increase the resource’s future impact. For instance, the amount of traffic and size of iTunesU featured prominently in early press reports.

which suggests how quantitative data can be used to support marketing activities. But beyond such marketing considerations, shouldn’t those who believe in the provision of open content and who, in addition, wish to minimise limitations on how the content can be reused (by removing non-commercial and share-alike restrictions from Creative Commons licences, for example) also be willing to make usage statistics similarly freely available? And is arguing that “my use case is unique and usage statistics won’t provide the nuanced understanding which is needed” really so different from the stance of those who wish to keep strict control over their data?

In other words, what is the limit to the mantra “set your data free“? Does this include setting your usage data free?

Twitter conversation from Topsy: [View]

Posted in Blog, Evidence | 7 Comments »

Responding to the Forthcoming Demise of TwapperKeeper

Posted by Brian Kelly on 11 December 2011

Twapper Keeper Archive Service to be Shut Down

On 8th December 2011 the following announcement was made on the Twapper Keeper Web site:

Transition update
Twapper Keeper’s archiving is now available in HootSuite! As a result, we will be shutting down Twapper Keeper. Existing archives will be kept running until Jan 6, 2012, after which you will not be able to access your archives anymore.

Twapper Keeper has been widely used within the UK’s higher education sector, especially for archiving tweets containing event hashtags at events aimed at the developer, researcher and library sectors.

The popularity of the service has helped to demonstrate the importance of Twitter archiving, something which was not necessarily widely appreciated a few years ago. But in light of, for example, the recent news item on the JISC Web site which announced that “Social media ‘not to blame’ for inciting rioters” and went on to describe how:

A study of 2.4 million Twitter messages from the time of the riots has found that politicians and other commentators were wrong to claim the website played an important role in inciting and organising the disturbances.  

we can see that the importance of Twitter archiving for a variety of purposes is now more widely understood. However it seems that Twapper Keeper will not be providing a long term repository of tweets. This does not necessarily mean that tweets will be lost since, as described in an article on Tweet Eternal: Pros and Cons of the Library of Congress Twitter Archive published in Time on 8 December 2011, “Thanks to a deal between Twitter and the United States Library of Congress, every public tweet sent on the social messaging service since its creation will become part of the Library of Congress’ digital archive, available to researchers and historians as an example of contemporary life and culture“. However, as highlighted in a Nature article on Social science: Open up online research, “Social media hold a treasure trove of information [but] the secretive methods of ethics review boards are hindering their analysis“, says Alexander Halavais.

Since it is unclear when, or if, the Library of Congress archives will be made publicly available, people and organisations which have made use of Twapper Keeper may wish to migrate the content of their archives. This post describes approaches for migrating existing data, ways of identifying which archives may need to be preserved and ways of identifying key stakeholders who may need to make such decisions.

Migration of Existing Archives


Since creators and users of Twapper Keeper archives have less than a month to migrate their content, this post will outline ways in which the archives can be managed; a discussion of the implications of the announcement of the closure of the service will follow at a later date.

Martin Hawksey has published a post on his MASHe blog which describes how you can Free the tweets! Export TwapperKeeper archives using Google Spreadsheet. Martin’s post also links to a post entitled LIBREAS.Library Grab your TwapperKeeper Archive before Shutdown! which describes a technique which can be used by those familiar with R. Tony Hirst has also described a technical solution based on R in his post on Rescuing Twapperkeeper Archives Before They Vanish.

For people who may not be familiar with Google Spreadsheets or with installing software applications for accessing Twitter archives, note that you can also use a Web browser to view archives of interest (having ensured that all items are displayed and not just the default 10). You can then view the HTML source and save the file, giving you an HTML representation of the tweets which you can manage locally. In addition, you can save an RSS representation of the tweets, which provides a more structured format amenable to subsequent processing, if you wish to do this. Examples of this approach can be seen in the copies of the IWMW10 and IWMW11 archives.
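For those comfortable with a little scripting, the RSS route lends itself to automation. The sketch below assumes you have already fetched or saved an RSS 2.0 representation of an archive; the inline feed sample is purely illustrative, not a real Twapper Keeper export format.

```python
# Sketch: keep a verbatim local copy of a tweet archive's RSS feed and
# extract the tweets into a structured form for later processing.
# The sample feed below is illustrative only -- a real archive would be
# fetched from the service's own RSS export URL.

import xml.etree.ElementTree as ET

def save_archive(rss_text: str, path: str) -> None:
    """Write the raw RSS to disk so the original representation survives."""
    with open(path, "w", encoding="utf-8") as f:
        f.write(rss_text)

def extract_tweets(rss_text: str):
    """Return (title, pubDate) pairs for each <item> in an RSS 2.0 feed."""
    root = ET.fromstring(rss_text)
    return [
        (item.findtext("title"), item.findtext("pubDate"))
        for item in root.iter("item")
    ]

if __name__ == "__main__":
    # In practice rss_text would come from the archive's export URL, e.g.
    # urllib.request.urlopen(archive_rss_url).read().decode("utf-8")
    rss_text = """<rss version="2.0"><channel>
      <title>#iwmw10 archive</title>
      <item><title>Example tweet</title>
            <pubDate>Mon, 12 Jul 2010 09:00:00 GMT</pubDate></item>
    </channel></rss>"""
    save_archive(rss_text, "iwmw10.rss")
    print(extract_tweets(rss_text))
```

Once the tweets are in (title, date) pairs they can be written to CSV, loaded into a spreadsheet, or fed to whatever analysis is wanted later.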

Selection Criteria

In addition to being aware of the tools which can be used, there will also be a need to decide which archives may still be of relevance and to identify who should take responsibility for migrating the content to an appropriate location. Tony Hirst, in his post on Rescuing Twapperkeeper Archives Before They Vanish, has suggested that “one approach might be to look to see what archives have been viewed using @andypowe11′s Summarizr“. However although the Summarizr home page lists recently viewed Summarizr summaries of Twapper Keeper archives, it is not clear whether a comprehensive list is available and, even if such a list could be made available, how it would inform decisions on the selection of archives to be migrated.

An alternative approach is to look at the Twapper Keeper archives which have been created by particular Twitter IDs. We can see, for example, that Tony Hirst (@psychemedia) has created 27 archives. Similarly, using Twapper Keeper’s search facility, I find that I have created a total of 62 Twapper Keeper archives. Perhaps the initial stage in identifying archives to be migrated is for active Twapper Keeper users to identify the archives they have created, and then to decide which archives are to be migrated, where the new archives are to be hosted and what to do with archives which will not be migrated, which might include informing key stakeholders.

Case Study

Rather than attempting to keep a copy of all of the Twapper Keeper archives I have created, in this post I will provide a summary of the archives I created and document the decisions I have taken regarding migration of the content, together with the reasons for these decisions.

Migrated to UKOLN Web site: The IWMW2009, IWMW10 and IWMW11 archives, which will be made publicly available, together with the UKOLN and Ariadne_Mag archives, which will be stored locally in case we decide at a later date to analyse the tweets.

Key stakeholders informed: A number of archives may be of interest to organisations such as JISC, CILIP, ALT, UCISA and CETIS. These organisations will be notified of the archives which I have created and informed of the techniques described in this post if they wish to migrate the content.

Archives of personal interest: Archives of personal tweets and personal interests have not been migrated.

Other archives: These include archives for broad subject areas (e.g. #a11y, #phdchat), for which a general tweet about the forthcoming demise of the Twapper Keeper archive will be posted, and archives for events and topics in which I had a short-term interest and wished to be able to view the tweets, but in which I no longer have a longer-term interest.

A summary of the Twitter archives and the decisions I have made is given below. Please note that:

  • The data given in the table was collected on 9 December 2011.
  • The decisions given in the table may be changed at a later date.
  • Twapper Keeper archives for other areas relevant to myself and UKOLN colleagues may also have been created. The #IWMW09 archive, for example, will be migrated and decisions about other archives will be made shortly.
Archive Type Name Description # of Tweets Create Date Comment
#Hashtag #a11y Accessibility (a11y) 96,491 04-25-10 #a11y community to be informed.
#Hashtag #a11yhack DevCSI hack day 329 06-21-11 One-off DevCSI event. Report has been published.
#Hashtag #accbc CETIS/BSI Accessibility SIG meeting 396 02-28-11 One-off event. CETIS SIG coordinator to be notified.
#Hashtag #altc2009 The ALTC 2009 conference 4,754 08-28-09 Large annual event. Report has been published. Event organisers to be notified.
#Hashtag #altc2012 The ALT-C 2012 conference (Association for Learning Technology) 104 09-12-11 Created for next year’s event. Content not migrated.
#Hashtag #altmetrics New approaches for developing metrics for scholarly research 1,393 01-15-11 #altmetrics community to be informed.
#Hashtag #Ariadne The Ariadne hashtag – which may be used for UKOLN’s Ariadne ejournal. 42,102 09-21-10 Content not migrated due to multiple uses of tag.
Keyword Ariadne Archive of tweets contains the string ‘Ariadne’ 79,991 09-21-10 Content not migrated due to multiple uses of keyword.
@Person ariadne_ukoln Tweets about the Ariadne web magazine. 2,792 05-28-10 Content to be migrated to UKOLN.
#Hashtag #Bathcr The University of Bath’s Connected Researcher activity. 296 04-14-11 #Bathcr community to be informed
#Hashtag #brdidc11 Symposium on Data Attribution and Citation Practices and Standards, August 22-23 2011, Berkeley 51 08-22-11 Content not migrated.
@Person briankelly Tweets about Brian Kelly 9,952 03-19-10 Content not migrated as alternative backup available.
#Hashtag #CETIS The CETIS service, based at the University of Bolton. 9,561 09-24-10 CETIS colleagues to be informed.
#Hashtag #CILIP CILIP, the Chartered Institute of Library and Information Professionals. 14,356 09-24-10 CILIP colleagues to be informed.
#Hashtag #CILIP1 Campaign on future of CILIP organisation based on CILIP’s 1-minute messages. 357 06-13-10 Content not migrated.
#Hashtag #CSR Comprehensive Spending Review 0 10-15-10 Content not migrated.
#Hashtag #dataprato Invitational workshop to identify & agree areas for joined-up international action in research data management. 128 04-11-11 Content not migrated.
#Hashtag #digdeath The conference on Death and Dying in a Digital Age held in Bath, UK 72 06-25-11 Content not migrated.
#Hashtag #eduwebconf The eduwebconf conference 33 11-07-11 Content not migrated.
#Hashtag #falt09 ALTC Fringe 219 08-28-09 Content not migrated.
#Hashtag #fbdevlove The Facebook developers hack day 1,297 03-26-11 Content not migrated.
#Hashtag #fpw11 The Future of the Past of the Web conference, British Library, London on 7 October 2011. 755 09-22-11 Event organisers to be notified.
#Hashtag #heweb10 Tag for the HigherEdWeb 2010 conference 8,812 09-28-10 Content not migrated.
#Hashtag #heweb11 The HighEdWeb 2011 conference, 23-26 October 2011 11,505 10-23-11 Content not migrated.
#Hashtag #ILI2011 Internet Librarian International 2011 conference held in London on 27-28 Oct 2011. 3,067 10-27-11 ILI organisers to be notified. Report has been published.
#Hashtag #ili2012 Tweets for the Internet Librarian International (ILI) 2012 conference 3 10-29-11 Created for next year’s event. Content not migrated.
#Hashtag #ipres10 Tweets for the iPres10 conference, Vienna, 19-24 Sept 2010. 5 08-27-10 Content not migrated.
#Hashtag #ipres2010 Archive for the IPres 2010 conference to be held in Vienna on 19-25 Sept 2010. 1,424 08-27-10 Content not migrated.
#Hashtag #ISKB A holder for the ISKB 27 09-17-11 Content not migrated.
#Hashtag #iwmw12 UKOLN’s Institutional Web Management Workshop (IWMW) 2012 event 2 10-29-11 Created for next year’s event. Content not migrated.
@Person iwmwlive IMWM live blogging account 3,744 04-30-10 Content to be migrated.
#Hashtag #jisc10 JISC 2010 conference 2,065 04-02-10 Event organisers to be notified.
#Hashtag #jiscHTML5 JISC HTML5 Case study work 18 11-18-11 Content not migrated.
#Hashtag #jiscpowr Archive of tweets related to the JISC PoWR project provided by UKOLN and ULCC 13 07-09-10 Content not migrated.
#Hashtag #jiscpowrguide Archive of tweets about the Guide to Web Preservation published by the JISC-funded PoWR project and launched on 12 July 2010. 2 07-09-10 Content not migrated.
#Hashtag #JISCPP The JISC-Funded Patients Participate project. 0 05-25-11 Content not migrated.
#Hashtag #ldow2010 Linked Data on the Web 2010 conference 530 04-25-10 Content not migrated.
#Hashtag #loveHE Times Higher Education campaign to support Higher Education in UK. 20,719 06-12-10 Content not migrated.
#Hashtag #mdforum UKOLN’s Metadata Forum 1,746 12-10-10 Content to be migrated.
#Hashtag #morris Tweets about Morris dancing 183,338 10-16-10 Content not migrated.
#Hashtag #OAweek Open Access week 4,603 10-19-11 Content not migrated.
#Hashtag #online11 The Online Information 2011 conference held in London on 29 November -1 December 3,915 11-29-11 Content not migrated.
#Hashtag #oxsmc09 socialmediaconference 1,063 09-18-09 Content not migrated.
#Hashtag #PhD Tweets for researchers using the #PhD tag 161,215 09-24-10 Content not migrated.
#Hashtag #s113 Workshop session at ALTC 2009. 1,417 09-03-09 Content not migrated.
#Hashtag #scl2010 Scholarly Communication Landscape (SCL): Opportunities and challenges symposium, held at Manchester Conference Centre on 30 November 2010. 0 12-02-10 Content not migrated.
#Hashtag #SHB11 Security and Human Behavior conference 1,117 06-18-11 Content not migrated.
#Hashtag #SLG2011 CILIP School Librarian Group conference. 283 04-03-11 Content not migrated.
#Hashtag #thatlondon People (Northerners?) talking about going to “that London” 1,781 07-09-11 Content not migrated.
#Hashtag #ucassm Social Media Marketing Conference organised by UCAS. 225 10-18-10 Content not migrated.
#Hashtag #ucsoc12 UCISA SSG (Support Services Group) event. 5 09-05-11 Content not migrated.
#Hashtag #udgamp10 What Can We Learn From Amplified Events seminar, given by Brian Kelly, UKOLN at the University of Girona 395 09-01-10 Content migrated.
#Hashtag #ukmw09 UKMuseumsandtheWeb 750 12-05-09 Content not migrated.
Keyword ukoln Tweets about UKOLN 3,385 03-19-10 Content to be migrated.
#Hashtag #ukolneim UKOLN’s Evidence, Impact, Metric work 523 11-05-10 Content to be migrated.
#Hashtag #UKOLNseminar UKOLN seminars 69 04-01-11 Content to be migrated.
#Hashtag #UniofBath Tweets about the University of Bath 1,798 06-15-11 Content not migrated.
#Hashtag #UniWeek The UK’s Universities Week campaign. 1,767 06-15-11 Content not migrated.
#Hashtag #Virtualfutures The Virtual Futures conference 2,216 06-18-11 Content not migrated.
#Hashtag #w3ctrack W3C Track at WWW 2010 conference 205 04-30-10 Content not migrated.
#Hashtag #W3CUKI W3C UK and Ireland Office 266 04-18-11 Content not migrated.
#Hashtag #ww2010 Misspelling of WWW2010 hashtag 904 04-29-10 Content not migrated.

I welcome suggestions on other tools and approaches which can be used for managing such archives and also approaches to selection and deletion criteria for Twitter archives.

Posted in Twitter | 16 Comments »

Paper on Metrics Accepted

Posted by Brian Kelly on 5 December 2011

“How can metrics be developed that fulfill requirements such as validity, reliability, and suitability?”

The Call for Papers was unambiguous about the importance of metrics:

The goal of this symposium is to bring researchers and practitioners together to scope the extent and magnitude of existing …. metrics, and to develop a roadmap for future research and development in the field.

although there was an acknowledgement of the challenges in developing appropriate metrics:

Using numerical metrics potentially allows a more continuous scale for [measurements] and, to the extent that the metrics are reliable, could be used for comparisons. However, it is unclear how metrics can be developed that fulfill requirements such as validity, reliability, and suitability.

I’m pleased to say that I’ve had a paper accepted for the online symposium which will take place on 5 December 2011. But what is the subject of the symposium? I have recently published posts about the complexity of metrics for research papers, including issues such as download statistics for papers which are distributed across multiple services and metrics for answering the question of “what makes a good repository?”. Or perhaps the paper concerned metrics associated with use of Social Web services, another area I have addressed in several posts over the past year.

Both areas are very complex, with people questioning the validity of the current approaches being taken to developing metrics which can be used to make comparisons – clearly areas worthy of research into how metrics can be developed, together with a questioning and critical appraisal of the approaches being proposed. But this wasn’t the area addressed in the paper or the symposium.

Online Symposium on Website Accessibility Metrics

The paper was, in fact, accepted for the Online Symposium on Website Accessibility Metrics (and is available in MS Word, PDF and HTML formats).

As the call for papers points out, “conformance to the Web Content Accessibility Guidelines (WCAG) is based on 4 ordinal levels of conformance (none, A, AA, and AAA) but these levels are too far apart to allow granular comparison and progress monitoring; if a websites satisfied many success criteria in addition to all Level A success criteria, the website would only conform to level A of WCAG 2.0 but the additional effort would not be visible.” It seems that rather than having four simple conformance levels, WAI are looking for more sophisticated algorithms which can differentiate between, for example, a Web page containing hundreds of images, none of which has the alt attributes needed to enhance access for assistive technologies, and a Web page which also contains hundreds of images, only one of which lacks a meaningful alt attribute. Currently both pages will fail WCAG conformance, since this requires all images to contain alt attributes.
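To illustrate the kind of granular metric the symposium seems to be seeking, the sketch below (my own illustration, not a WAI algorithm) scores a page by the proportion of its images which carry an alt attribute, so that a page missing one alt attribute in hundreds scores far better than a page with none, even though both fail strict conformance.

```python
# Sketch of a granular accessibility metric: the proportion of <img>
# elements carrying an alt attribute, rather than a binary pass/fail.
# This is an illustrative heuristic, not an official WCAG algorithm.

from html.parser import HTMLParser

class ImgAltCounter(HTMLParser):
    """Count <img> tags and how many of them have an alt attribute."""

    def __init__(self):
        super().__init__()
        self.total = 0
        self.with_alt = 0

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            self.total += 1
            if any(name == "alt" for name, _ in attrs):
                self.with_alt += 1

def alt_score(html: str) -> float:
    """Return the fraction of images with alt text (1.0 if no images)."""
    counter = ImgAltCounter()
    counter.feed(html)
    return 1.0 if counter.total == 0 else counter.with_alt / counter.total

if __name__ == "__main__":
    page = '<p><img src="a.png" alt="logo"><img src="b.png"></p>'
    print(f"alt coverage: {alt_score(page):.0%}")
```

A real metric would of course weight many success criteria, not just alt text, but the principle is the same: a continuous score rather than four ordinal levels.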

It seems that the goal is a Klout score for Web accessibility, but with the difference that the underlying algorithms will be made public. But just as with Klout there is, I feel, a need to question the underlying assumptions which underpin the belief that accessibility can be determined by conformance with a set of rules, developed as part of the WAI’s model based on conformance with guidelines for content (WCAG), authoring tools (ATAG) and browsers and other user agents (UAAG). It is worth, therefore, making some comparisons between metrics-based tools such as Klout and the range of Web accessibility measurement tools, of which the now defunct Bobby tool was an early example.

| | Metrics for Online Reputation (Twitter) | Metrics for Web Accessibility | Metrics for Impact of Scholarly Research |
|---|---|---|---|
| Example of tools | Klout, PeerIndex, … | A-Checker, Bobby (defunct) and others listed in the Complete list of accessibility evaluation tools (last updated in 2006, with several broken links) | Publish or Perish, Microsoft Academic Search, Google Scholar Citations, … |
| Purpose | Measurement of online influence | Measurement of accessibility of Web resources | Measurement of productivity and impact of published scientific works |
| Underlying model | Undocumented algorithms based on analysis of Twitter communities, posts, retweets, etc. | Based on conformance with the WAI model of three sets of guidelines, for content, authoring tools and user agents; conformance, however, focuses only on the WCAG guidelines | h-index, g-index, … |
| Legal status | No legal status. | Conformance required in several countries. | No legal status but may be used to determine research funding. |
| Limitations | The system can easily be ‘gamed’. Tools such as Klout reward use of the service itself in order to increase scores. The tools fail to take into account differences across communities (e.g. they use the same approaches for comparing the reputation of celebrities, brands and public sector organisations). | The system can easily be ‘gamed’. The WCAG 1.0 guidelines promoted use of technologies developed within the host consortium, even when such technologies were little used. The tools fail to take into account the different ways in which the Web can be used (e.g. to provide access to information, to support teaching and learning, to provide access to cultural resources, for games, …). | May be skewed by numbers of authors, self-citations, context of citations, … |

Using Metrics In Context

However I do feel that there is value in metrics, whether for helping to identify the quality of research publications, online reputation or the accessibility of online resources. The difficulty arises when the metric is regarded as the truth and becomes a goal in itself. So whilst I feel there is validity in publishing details of Klout, PeerIndex and Tweetstat statistics across a selection of institutional Twitter accounts in order to help understand patterns of usage and, I should add, to understand the limitations of such metrics-based tools, I also feel that institutions would be foolhardy to regard such statistics as concrete evidence of value. Rather, such statistics can be useful when used in conjunction with other evidence-based parameters.

The danger with Web accessibility metrics is that they have been used as a goal in their own right. In addition, sadly, the previous government mandated conformance with these metrics across Government Web sites. And back in 2004 WAI gave their views on Why Standards Harmonization is Essential to Web Accessibility, which seems to be leading to WCAG conformance being mandated across EU countries. If a proposal on “Why Online Reputation Standards Harmonisation is Essential” were published, especially by the body responsible for the only online reputation standard proposed for use, there would be uproar, with, I would hope, the research community seeking to explore the limitations of the proposed standard.

Fortunately the organisers of the WAI symposium do seem to be aware of criticisms of positioning their approach to Web accessibility as the only legitimate one. The Call for Papers invited contributions which “may include approaches for measuring ‘accessibility in terms of conformance‘ (metrics that reflect violations of conformance of web content with accessibility guidelines such as WCAG or derivatives such as Section 508) and ‘accessibility in use‘ (metrics that reflect the impact that accessibility issues have on real users, regardless of guidelines)” (my emphasis).

The fundamental objection which my co-author and I have made in our series of papers on this subject is that accessibility is not an innate characteristic of a digital object, but rather a function of the user’s difficulty in engaging with an object to fulfil a desired purpose. The view that all Web resources must be universally accessible to everyone, which underlies pressure on organisations to conform with the WCAG guidelines, is a flawed approach.

So if I’m critical of metrics related to conformance with guidelines, what do I feel is needed? Our paper argues for making use of metrics related to guidelines covering the processes surrounding the development of online resources. In the UK the BS 8878 guidelines provide the relevant Code of Practice. As Jonathan Hassell pointed out in a recent post, For World Usability Day: The state of accessibility, on the HassellInclusion blog:

[BS8878’s] goals were to share best practice in the first Standard about the process of accessibility rather than it’s technical aspects. It’s succeeded in helping harmonise the separate worlds of inclusive design, personalisation and WCAG approaches to accessibility.

Jonathan went on to add:

Uptake is always difficult to measure, and it’s still early days for organisations to go public and say they have changed the way they work to follow BS8878. However, some organisations already have including: Royal Mail, and Southampton University. And many others are working on it. BS8878 is one of the best-selling standards BSI have ever created – so it’s met their goals. I’ve trained many organisations globally and my BS8878 presentations on slideshare have been viewed by over 6000 people from over 25 countries.

There is a need to encourage greater take-up of BS 8878, and I hope our paper will help in describing ways in which such take-up can be measured.

But what of the development of new ways of measuring WCAG conformance? As described in a paper on Involving Users in the Development of a Web Accessibility Tool, the EU-funded European Internet Accessibility Observatory project developed, at a cost of over 2M Euros, a robot for measuring conformance with WCAG guidelines across a range of government Web sites in the EU. As described on the eGovernment Monitor Web site, an eAccessibility Checker which builds on the EU-funded project has now been released. However, looking at the results of a survey carried out last month across a number of Norwegian Web sites, it seems that a number of problems are experienced by over 80% of the Web sites! If such tools report a low level of conformance, can’t we use this as evidence of the failures of the WAI model rather than, as has been the case in the past, a failure of organisations to be willing to enhance the accessibility of their services?

Posted in Accessibility, Evidence | Leave a Comment »