UK Web Focus

Innovation and best practices for the Web

Archive for the ‘Web2.0’ Category

Making Effective Use of Google Docs (and Who Will Support Researchers?)

Posted by Brian Kelly on 20 November 2014

Looking to Make Google Docs a Richer Authoring Environment

Google Docs: table of contents addon

The Table of Contents add-on for Google Docs

These days I find myself making extensive use of Google Docs. This is the tool of choice for the LACE project I am involved in. Although Google Docs doesn’t have the power of MS Word, it does provide access control capabilities which are important for project work with partners working at different institutions across Europe.

As a long-standing user of MS Word (since the days when it competed with WordPerfect on MS DOS!) I have become accustomed to its functionality and user interface. As I described in a post which summarised the collaborative authoring approach my co-authors and I used in writing a “Paper Accepted for #W4A2012 Conference”, I made use of MS Word and Microsoft’s OneDrive (then called SkyDrive) so that we could edit the document using our preferred authoring tool. Since our paper was hosted in the Cloud we could edit a single copy and avoid the problem of authors editing multiple copies of a paper. However, although the approach worked for a small group of authors who were happy to use MS Word, it is not necessarily the best approach when there is a more diverse group of contributors.

In my current environment we use a shared Google Drive folder and I typically create project documents using Google Docs and receive contributions and comments from project partners. Some of the documents, which are intended for use by the project team, will continue to be hosted on Google Drive. However other documents, which are intended for submission to the European Commission, will migrate to an MS Word environment using the project’s template for submission of deliverables.

I have recently started to explore ways to enhance the Google Docs environment for producing documents. Some time ago I installed the Google Docs Table of Contents add-on which, as shown, provides a document outliner which can be useful, especially for longer documents, in depicting the document structure.

What More Do I Need to Do in Google Docs?

It seems that at some point I also installed the Gliffy Diagrams add-on, which can be used to “create professional looking diagrams and flowcharts in Google Docs“. As I often include diagrams in documents I produce using MS Word I have felt the need for such functionality, but I haven’t got around to using this tool on a regular basis. This may be because I use Google Docs as the initial authoring environment but produce the final version in MS Word, using MS Word tools for embedding images and producing the polished final version.

Google Docs add-ons

But what more do I need to make greater use of Google Docs, I wonder?

A TechCrunch article published in March 2014, “Google Launches Add-On Store For Google Docs“, explains how on 11 March 2014:

Google announced the launch of its add-on store for Google Docs’ spreadsheet and word processor apps. The store, which resembles the Chrome Web Store in its design, currently features about 50 add-ons, with more coming in the near future.

According to Google, the idea here is to provide users with new tools that will give them access to more features — especially features that aren’t currently available through Google’s own products.

I’d be interested to hear if anyone has experiences in use of these add-ons for Google Docs. Are there any power users who are using Google Docs in sophisticated ways and are making use of add-ons to enhance the functionality of the service?

Beyond the Tools – Managing Google Docs

As a long-standing user of MS Word I can remember when using a word processor was a solo experience. However nowadays tools such as Google Docs are designed to provide collaborative authoring environments. Such tools also provide collaborative commenting and viewing capabilities, with the ability to manage access to documents for co-authors, commenters or viewers.

There will therefore be a need to understand best practices for managing access to Google Docs. This will go beyond the use of folders and file naming conventions: there will be a need for scalable approaches which enable authors to manage large numbers of documents shared with a potentially wide range of contributors and viewers. Giving world write access to documents is one way of managing access, but this approach does have risks! Note that there will also be a need to manage access when collaborators leave projects or change their host institution.
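As a minimal sketch of what such a scalable approach might look like: the Google Drive v3 API exposes each file’s sharing settings as permission records, and the helper below (my own illustration, with hypothetical domain names; fetching the records requires an authenticated Drive service, which is omitted) flags world access and collaborators outside an agreed set of partner domains.

```python
# Sketch only: audit permission records of the kind returned by the Google
# Drive v3 API (permissions.list) for risky sharing settings. Fetching the
# records requires an authenticated Drive service, omitted here.

ALLOWED_DOMAINS = {"example.ac.uk", "partner-project.eu"}  # hypothetical partners

def risky_permissions(permissions):
    """Return permissions granting world access or access outside allowed domains."""
    flagged = []
    for perm in permissions:
        if perm.get("type") == "anyone":
            # World access: convenient, but the risk noted above
            flagged.append(perm)
        elif perm.get("type") == "user":
            domain = perm.get("emailAddress", "").rpartition("@")[2]
            if domain not in ALLOWED_DOMAINS:
                # e.g. a collaborator who has left the project or moved institution
                flagged.append(perm)
    return flagged

sample = [
    {"type": "user", "emailAddress": "alice@example.ac.uk", "role": "writer"},
    {"type": "user", "emailAddress": "bob@oldinstitution.ac.uk", "role": "commenter"},
    {"type": "anyone", "role": "writer"},
]
flagged = risky_permissions(sample)
```

Run across every file in a shared folder, a report of this kind would go some way towards the document-management practices described above.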

Supporting Researchers

Earlier today Dave Flanders alerted me to the Research Bazaar Conference (#ResBaz) which aims to “kick-start a training programme in Australia assuring the next generation of researchers are equipped with the digital skills and tools to make their research better“. The event is described as:

an academic training conference (i.e. think of this event as a giant Genius Bar at an Apple store), where research students and early career researchers can come to acquire the digital skills (e.g. computer programming, data analysis, etc.) that underpin modern research

I suspect there will be a lot of sharing of open source tools at the event. But I wonder if making effective use of mainstream tools such as Google Docs will be covered? And if such issues aren’t addressed at events such as #ResBaz, who will take responsibility for the training of postgraduate students?


Posted in General, Web2.0 | Tagged: | Leave a Comment »

Preparing our Users for Digital Life Beyond the Institution

Posted by Brian Kelly on 3 March 2014

About This Post

This blog post provides background information on digital literacy and argues that digital literacy needs to go beyond student teaching and ensure that staff and researchers, who may wish to continue their professional activities when they leave their current institution, are able to migrate content and services to the Cloud, so that content and tools can be reused once access to institutional services is no longer available.

The post concludes with an invitation for those with responsibilities for or interest in digital literacy to complete a survey which aims to gather information about current work in providing digital literacy support for staff and researchers, especially in preparing for digital life outside the host institution. The results of the survey will be presented at the LILAC 2014 information literacy conference.

Common Craft and LILAC on Digital and Information Literacy

Digital literacy by Common Craft

I recently came across an animated cartoon on Digital Literacy published by Common Craft. The cartoon explains that:

… there is a new kind of literacy that touches almost everyone in our modern world. It’s not related to a specific industry or job title. This literacy matters to both young and old and has become more important as computers and electronic devices have become more of a necessity in daily life.

I’m talking about digital literacy – the ability to use technology to navigate, evaluate and create information. 

LILAC, the Librarians’ Information Literacy Annual Conference, defines information literacy as ‘the ability to find, use, evaluate and communicate information’. I find this latter definition more useful, as it includes the importance of using and not just evaluating information. However I prefer the term ‘digital literacy‘ as this goes beyond information and can include digital services and not just digital content.

The LILAC Web site goes on to describe how information literacy is “an essential skill in this digital age and era of life-long learning“. This emphasis on the importance of use of information to support life-long learning highlights the need to be able to use and manage digital information – and the digital services which manage the digital information – throughout one’s life, and not just when one is studying or working within an educational institution.

SCONUL, the main representative body for academic libraries in the UK and the Republic of Ireland, uses the term ‘Digital Literacies‘ in a page on the Jisc Web site. This describes how the SCONUL Working Group on Information Literacy has developed the 7 Pillars of Information Literacy through a Digital Literacy ‘lens’ (MS Word format) which includes the ability to “Use a range of digital retrieval tools and technology effectively“, “Use appropriate tools to organise digital content and data” and “Manage digital resources effectively taking account of version control, file storage and record keeping issues“. This emphasis on the need to be able to use tools to organise and manage digital resources is important. I therefore find a definition of digital literacy as “the ability to find, use, reuse, evaluate, manage and communicate digital information” helpful. I’ve expanded ‘use‘ to ‘use and reuse‘ to highlight the importance of addressing the life cycle of digital content, in which content may migrate to new services.

Digital Literacy for Members of Staff and Researchers

Many staff and researchers in higher educational institutions will make use of digital content and services and would regard themselves as digitally literate. Within the context of the services they use within their host institution this may be true. But what happens when they leave their host institution (which we all will at some stage) and wish to continue using content and services and their online communities? This may be particularly relevant for researchers on short term research contracts.

The ability of highly skilled academics and researchers to continue to be productive members of society is important when one considers that “Universities in the UK contributed £3.3 billion to the economy in 2010-11 through services to business, including commercialisation of new knowledge, delivery of professional training, consultancy and services” – might the commercial value to the economy provided by the sector be undermined if members of staff leave their host institution and are hindered from continuing to make use of their digital content due to a lack of expertise?

Ensuring that staff and researchers were able to continue to make use of their digital content and manage their online communities was probably not of great importance in the past, when one’s content could often be transported on floppy disks or memory sticks and the digital services which were used could only be accessed within the institution’s network. However there is now a need to respond to a radically changed environment in which Cloud services can be accessed by anyone, anywhere, there is much greater volatility in the job market, and the increasing importance of open content, open data and open source software is minimising licence barriers to the reuse of digital content and tools.

What Should Be Done and Who Should Do It?

Last year, in the run-up to my redundancy following the announcement of the Jisc cessation of core funding for UKOLN, I gave a talk on When Staff and Researchers Leave Their Host Institution at the LILAC 2013 Conference. The talk was based on personal experiences and described my views on the importance of researcher profiling services, such as ResearchGate, not only for providing a record of my research outputs but also for hosting the content so that I could continue to manage the papers and the metadata once I lost the ability to manage information for my content hosted on Opus, the University of Bath repository.

Who is Responsible?

Vitae's Concordat

But what is happening across the sector in terms of ensuring that members of staff and researchers are being provided with the skills and expertise needed to continue to be effective professionals when they leave their host institution?

Should it be the responsibility of the Library, which has responsibilities for information literacy? In light of the importance of digital tools and services, perhaps it should be the responsibility of IT Service departments? Or maybe research support units or careers advisory services? In cases in which staff are being made redundant, perhaps the UCU or other unions could have a role to play in ensuring that union members are provided with appropriate training.

National Strategies?

If it is felt that there is a need for approaches provided at a national level perhaps SCONUL should look to ensure that their 7 Pillars of Information Literacy through a Digital Literacy ‘lens’ goes beyond undergraduate teaching.

For researchers, it might be appropriate to ensure that Vitae’s Concordat to Support the Career Development of Researchers and, in particular, its principle on support and career development:

Principle 3: Researchers are equipped and supported to be adaptable and flexible in an increasingly diverse, mobile, global research environment

is implemented across the sector to address researchers’ ability to manage their digital content in a Cloud environment.

What is Being Done?

Information Literacy Policy Survey

Jenny Evans, the Maths and Physics Librarian at Imperial College London, and I have had a proposal accepted for the LILAC 2014 conference entitled “Are Institutions Preparing Staff for Digital Life Beyond the Institution?” This will be based on a survey of institutional practices in providing support for staff and researchers so that they will have the skills needed to make use of digital content and services when they leave their current institution. We have created an online survey in which we invite staff across the sector who may have responsibilities for developing policies in this area and delivering the appropriate training and support to summarise their current practices or their plans. We appreciate that there may be a number of groups with interests in this area, including:

  • Library departments within institutions.
  • Library organisations such as SCONUL and CILIP.
  • IT departments within institutions.
  • IT organisations such as UCISA.
  • Staff development departments within institutions.
  • Academic departments.
  • National bodies such as Jisc, Vitae, etc.
  • Research funding organisations.
  • Unions.
  • The BCS (British Computer Society) and its Digital Literacy for Life programme.

We invite feedback from anyone with strong interests and involvement in this area; it would be better to get duplicate information than to have gaps in the information we gather. We would also invite those working outside the UK to provide information on related activities happening outside the UK. We will, of course, provide a public summary of our findings. In addition to the invitation to complete the survey, comments on this topic are also welcome on this blog post. The comments may address the topic area, but suggestions on ways of sending an invitation to complete the survey to relevant groups would also be welcome.


Posted in Web2.0 | Tagged: | 4 Comments »

Beyond MOOCs: Sustainable Online Learning in Institutions

Posted by Brian Kelly on 22 January 2014

Personal Experiences of MOOCs

Cetis MOOC paper

Last year I completed the Hyperlinked Library MOOC. I had previously signed up for several MOOCs but this has been the only MOOC which I have completed.

I found the experiences I gained in participating in the MOOC useful and felt they were worth sharing, so I published posts on my Initial Reflections on The Hyperlinked Library MOOC and the Badges I Have Acquired and on my final Reflections On The Hyperlinked Library MOOC. In brief I felt that the Hyperlinked Library MOOC was valuable for staff development for those working in a library environment who wish to learn more about the potential of social media in a library context.

The Bigger Picture

But what of the bigger picture? How should institutions respond to the hype which has surrounded MOOCs? What impact can MOOCs have in enriching the teaching and learning activities which take place in institutions? What technological options need to be considered when deploying a MOOC? And what are the strategic challenges and opportunities which MOOCs can provide?

These issues are addressed in a 20 page white paper on “Beyond MOOCs: Sustainable Online Learning in Institutions” by my Cetis colleagues Li Yuan, Stephen Powell and Bill Oliver which was published yesterday.

The Executive Summary of the paper describes the opportunities which MOOCs can provide:

The key opportunity for institutions is to take the concepts developed by the MOOC experiment to date and use them to improve the quality of their face-to-face and online provision, and to open up access to higher education. Most importantly, the understanding gained should be used to inform diversification strategies including the development of new business models and pedagogic approaches that take full advantage of digital technologies.

It was interesting to note the emphasis placed on supporting diversification strategies, new business models and pedagogic approaches: although the paper mentions a number of MOOC platforms, the technological infrastructure is not felt to be the main challenge which institutions need to consider. Rather, the key themes which have emerged from uses of MOOCs to date are openness, revenue models and service disaggregation.

The technological options (the platforms and services used, the functions they provide and whether single platforms or a collection of integrated tools and services will be used) will need to be addressed, but such considerations cannot be divorced from other important areas including the pedagogic opportunities which may be provided and the learner choices which the provision of new and affordable ways for learners to access courses can provide.

The white paper is available from the Cetis Web site and is recommended reading for those involved in developing or supporting MOOCs, those with management and policy responsibilities, those who may be evaluating MOOCs or simply those with a general interest in MOOCs.

I would be interested in learning more about people’s experiences in using MOOCs. I therefore invite people to complete a brief survey (which is embedded below) and to share your experiences in the comments for this blog post.

View Twitter conversation from: [Topsy] – [Tweetreach]

Posted in Web2.0 | Tagged: | 4 Comments »

New Year Resolution: I Won’t Ditch Software on a Whim!

Posted by Brian Kelly on 9 January 2014

Why I’ll Still Explore New Tools and Services

In my previous role as UK Web Focus at UKOLN and my current position as Innovation Advocate at Cetis I’ve tried to be an early adopter of new technologies and services which seem to have the potential to enhance the range of activities carried out in the higher/further education sector. An important aspect of such evaluation is the open sharing of thoughts on the potential benefits of the innovations but also associated risks and concerns.

I will continue to evaluate new technologies. But there is a question as to what is being replaced if new technologies prove successful and become embedded in normal working practices. Over the years this has happened with technologies such as Skype. As discussed in a post published in 2009 which reflected on Skype, Two Years After Its Nightmare Weekend, at one stage institutions, and indeed JANET, were looking to provide standards-based VOIP services. Back in 2006 a UKERNA report (PDF format) described how “Uncontrolled use of Skype, and particularly its bandwidth-hungry super-node behaviour, is likely to breach one or both of these [Acceptable Use Policy] sections.” But now, I strongly suspect, use of Skype is widely embedded across the sector (are there any institutions which still block the service?).

There are other services which at one stage were considered risky by IT service staff but which have similarly become widely used by the user community: Google Docs is a good example of a tool which is often suggested when one is collaborating with people outside one’s host institution. Clearly changes to one’s IT infrastructure do happen and seem likely to continue. But what processes should one follow when choosing to replace an existing tool with an alternative?

Moving To New Tools – A Case Study

In a recent post Doug Belshaw, the Web Literacy Lead for the Mozilla Foundation, gave his thoughts on “Why I’m ditching Evernote for Simplenote (and Notational Velocity)”.

Since I am interested in new tools which can enhance my productivity or provide a richer working environment I was very interested in this post. As a long-standing user of Evernote, which I use on a range of devices, I have an interest in looking for signals which hint at problems with the application and at alternative solutions.

However, on further investigation, I’m unconvinced that it would be sensible to move away from Evernote or make use of Simplenote.

Doug’s post references Jason Kincaid’s post on “Evernote, the bug-ridden elephant“. Jason has experienced problems with Evernote which led to data loss. I was pleased he shared his experiences and the approaches he took in identifying the problem areas. I was previously unaware of Evernote’s activity log. Jason described how the activity log contained “Thousands of lines of gibberish, dates and upload counts” although, confusingly, he also complained that the file contained sensitive data. Anyway, I looked at my activity log file. It too contains thousands of lines such as these, from earlier today:

10:14:36 [8840] 0% Connecting to
10:14:36 [8840] 0% * loaded updateCount: 628
10:14:37 [8840] 0% Usage Metrics: sessionCount=0
10:14:37 [8840] 0% Client is up to date with the server, updateCount=628
10:14:37 [8840] 0% * saved updateCount: 628
10:14:37 [8840] 0% Skipping uploading shortcuts because local shortcuts are not newer than the server shortcuts.
10:14:37 [8840] 0% Session terminated normally, elapsed time: 0s
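Log lines of this sort are straightforward to scan programmatically. As a minimal sketch (my own illustration, assuming the `HH:MM:SS [pid] N% message` format shown in the excerpt above), each line can be broken into its parts so that, say, only error entries are kept:

```python
import re

# Sketch only: parse activity log lines in the "HH:MM:SS [pid] N% message"
# format assumed from the excerpt above.
LINE_RE = re.compile(r"^(\d{2}:\d{2}:\d{2}) \[(\d+)\] (\d+)% (.*)$")

def parse_line(line):
    """Return (time, pid, percent, message), or None for lines that don't match."""
    match = LINE_RE.match(line.strip())
    if match is None:
        return None
    time, pid, percent, message = match.groups()
    return time, int(pid), int(percent), message

entry = parse_line("10:14:37 [8840] 0% Session terminated normally, elapsed time: 0s")
```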

A useful debugging aid, it seems to me. Indeed the Activity log did help to identify Jason’s problem:

Turns out there’s a bug, this time compliments of Evernote for Mac’s ‘helper’ — an official mini app that’s meant for jotting down notes without having to switch to the hulking beast that is the desktop application.

Oh, so there’s a bug in the software. But it has been identified and therefore it should be possible to fix it.

But there are other problems:

They say to file another ticket.

As for the audio file: even more bad news.

It’s been nearly a month and the most substantive thing Evernote has said is that it is “seeing multiple users who have created audio notes of all sizes where they will not play on any platform.” The company has given me no information on what’s wrong with the corrupted file, and no indication that they might find a way to get it working in the future.

The problems seem to be confirmed in the comments and even the CEO of Evernote “apologized, saying the post rings true and that there is a lot of work to be done both on the application and service fronts. In the short-term the company will be implementing fixes for the issues above, with plans to focus on general quality improvements in the months ahead.”

So there are problems. But these have been acknowledged and Evernote have stated that they will work on improving the software.

But I’ve not had problems using Evernote – and as I don’t use audio notes I’m unlikely to encounter the bug mentioned above. But since Doug has suggested an alternative I felt it would be useful to investigate Simplenote further.

I read information about Simplenote. But since my data will be held in the Cloud I am more concerned about the sustainability of the company than about whether the software is open source. What do I find? The Wikipedia article for Simplenote is fairly basic. On further investigation it seems that a Mac app and an Android app were launched in September 2013.

Simplenote bug report

Since software is prone to bugs, it is not surprising that we can see examples of Simplenote users complaining of bugs. What is somewhat worrying is that, as illustrated, Simplenote’s official bug reporting service contains spam which has been there for over two weeks so far. And the bug report which was submitted in September 2013 has not been acknowledged.

Out of the frying pan into the fire? Isn’t there a need to investigate the business model for important tools and not just select a tool because it is open source? At least Evernote have acknowledged there is a problem and have said it will be addressed. Evernote also seem to have a sustainable business model. Sadly, I see no evidence that Simplenote do!

What To Do?

It seems to me that ditching an existing tool which provides a useful service but which appears to have bugs for an unproven alternative simply because it is open source would be a mistake.

The dangers of having an over-simplistic view of the merits of open source software were described in a paper on Openness in Higher Education: Open Source, Open Standards, Open Access by myself, my Cetis colleague Scott Wilson (who is now manager of the OSS Watch service) and Randy Metcalfe, former manager of OSS Watch. The paper describes how:

OSS Watch therefore avoids making specific software recommendations. Instead the principal task is to help universities and colleges understand legal, social, technical and economic issues that arise when they engage with free and open source software. The goal is not the promotion of open source software for its own sake. Indeed, for OSS Watch the choice of proprietary or open source solutions is immaterial. What matters is that institutions have the resources to think through their procurement, deployment, or development IT concerns in a sensible and rational fashion. The best solution for any single institution will depend upon local conditions and individual needs.

The paper goes on to add that:

This pragmatic approach to advice and guidance is consistent with that employed by UKOLN in its work on standards. It is also a guiding principle in the JISC Policy on Open source software for JISC projects and services (JISC, 2005). This policy is based on the UK government policy in this area and should be seen as an implementation of that policy.

The paper, which was published in 2007, focussed on institutional policies on use of open standards and open source software. Over 6 years later, in light of the importance of software which is not hosted within the institution but selected by individuals, there is a need to revisit the advice provided in the paper and explore how it can be applied when an individual is considering replacing use of an existing tool or service.

In the case of replacing Evernote in the comments on Doug’s blog post it was pointed out that:

Sadly there isn’t an alternative to Evernote if you store anything other than plaintext. PDF with annotation, automatic OCR on PDFs and image files, ability to attach MS Office files, audio notes etc. Evernote needs to stop with feature push and spend some time sanitizing what is already there.

And indeed, Jason Kincaid’s post was successful in getting his concerns acknowledged by the CEO of Evernote who responded by admitting that “that there is a lot of work to be done both on the application and service fronts.” A subsequent blog post  On Software Quality and Building a Better Evernote in 2014 was published on the Evernote blog which began:

I got the wrong sort of birthday present yesterday: a sincerely-written post by Jason Kincaid lamenting a perceived decline in the quality of Evernote software over the past few months. I could quibble with the specifics, but reading Jason’s article was a painful and frustrating experience because, in the big picture, he’s right. We’re going to fix this.

The post has generated a large number of comments (96 to date) which seem to be primarily from other Evernote users who are frustrated by bugs in the software and are unhappy that the official Evernote response came only following the publication of a post by Jason Kincaid, a blogger with a high profile.

In this post I don’t want to go into the ins and outs of Evernote’s limitations. Rather I’ll conclude that alternatives to existing tools may not address the limitations of the tools currently being used and, indeed, may have other disadvantages, which may get worse if the company is not able to handle increased usage. In the case of Simplenote, for example, the online support service appears to be non-existent. Indeed, looking at the Support Center home page I notice a post entitled “Is Simplenote dying?” I’ll conclude by quoting this post, published on 8 January, in full:

No blog posts since October.
Unanswered questions in support center.
Spam-filled support center.
Web app that CRAWLS.
Is Simplenote going away?

When announced it was closing up shop, I landed at Simplenote. I started using it for new notes right away, but I continued to look for an easy way to import my notes to another cloud-based note app. Yesterday, I gave up and did a manual copy-and-paste on all of my notes to get them into Simplenote. Now I’m experiencing performance issues I wasn’t experiencing before. I’m hoping this is a temporary thing, but based on posts I’m reading here in the “support center,” it looks like this has been going on for a while now. Please don’t tell me that I am going to have to migrate all my notes to another tool.

And no, there hasn’t been a response. Caveat emptor!


Posted in Web2.0 | 5 Comments »

“Your SlideShare account has been suspended”

Posted by Brian Kelly on 1 October 2013

Loss of Access to Content Hosted on Slideshare

Slideshare account suspended

On Wednesday 25 September 2013 I received an email message which informed me that my SlideShare account had been suspended. The reason given for this was that:

SlideShare activity was flagged as inappropriate by our community. We looked into it and found at least one of your activities (i.e. uploads, comments, follows or favorites) to be in violation of SlideShare’s Terms of Service or Community Guidelines.

To make matters worse:

… your account lisbk has been suspended and marked for deletion.

I received the message at 9.50pm on Wednesday evening. The following morning I contacted the Slideshare Support Desk complaining about the loss of access to my slides (which meant that Web sites which had embedded the content contained a message saying the account had been suspended) and asking for the files to be restored. I received the following automated response:

Thank you for contacting SlideShare. This email is to confirm we have received your inquiry and will respond within one business day.

I failed to receive a reply so yesterday evening I submitted another message to the support desk. Twelve hours later I received a reply:

Thank you for contacting us again about this issue. I sincerely apologize for the delay in getting back to you. It looks like the automated system has incorrectly marked your account. I have removed the suspension and your account should be working normally now. Thank you for your patience and understanding.

And now my Slideshare account has been restored. I was pleased to find that not only had the 148 slide decks been restored, but the slides still had their usage statistics and my 315 followers.

Lessons Learnt

I’m pleased that my Slideshare account has been restored with seemingly no data lost. All that seems to have been lost is 5 days’ access to the 148 slide decks which I have uploaded to the service. But this incident also gives rise to some concerns. Why did this happen? Could it happen again? Did I make a mistake in setting up my Slideshare account almost 7 years ago (my oldest slides, entitled Web 2.0: Addressing Institutional Barriers, were used in a talk given at the ILI 2006 conference and uploaded to Slideshare on 13 October 2006)?

Back in 2008/9 I was the lead author of a paper entitled “Library 2.0: balancing the risks and benefits to maximise the dividends”. The abstract described how:

The paper acknowledges that there are a variety of risks associated with such approaches. The paper describes the different types of risks and outlines a risk assessment and risk management approach which is being developed to minimize the dangers whilst allowing the benefits of Library 2.0 to be realized.

The risks and opportunities framework

The risks and opportunities framework was subsequently developed further and, later in 2009, a paper entitled “Empowering Users and Institutions: A Risks and Opportunities Framework for Exploiting the Social Web” provided a diagram depicting the framework, as illustrated.

How might this have been applied in the specific context of use of Slideshare?

Intended use: Slideshare will be used to provide a copy of slides used in significant presentations so that (a) the slides can be embedded in blogs, web pages, etc; (b) comments on the slides can be given; (c) the slides can be accessed using a popular service in order to enhance access to the slides to help maximise the take-up of the ideas provided in the slides and (d) the slides can be ‘favourited’ in order to identify individuals with interests in the content.

Perceived benefits: Use of Slideshare  should help maximise access to the resources and provide commenting facilities which may be useful for reports on the impact of associated work.

Perceived risks: There is a risk that the Slideshare service may prove unsustainable and data may be lost. Spam comments may be made which would be time-consuming to delete. It was felt that the risk of data loss was small since the Slideshare service appeared to be popular and sustainable.

Missed opportunities: Failing to use Slideshare would mean lost opportunities for reaching out to a large number of users.

Costs: The free version of Slideshare has been used. The only additional costs have been the time taken in uploading slides to the service and providing the relevant metadata.

Risk minimisation: The risks of data loss have been addressed by ensuring that the master copy of the slides is hosted on the UKOLN Web site.

Evidence base: The slide decks hosted on Slideshare have proved popular, with my three most popular slide decks having been viewed 24,536, 18,211 and 10,172 times. In addition a blog post entitled Evidence of Slideshare’s Impact highlighted the benefits of using Slideshare to host slides for an event. It should be noted, however, that a post on Understanding the Limits of Altmetrics: Slideshare Statistics did point out the need to treat these statistics with some caution.

I therefore feel that Slideshare has provided a valuable return on my investment. However just because Slideshare has proved useful in the past does not necessarily mean that this will continue to be true. Back in May 2012 TechCrunch announced that LinkedIn Acquires Professional Content Sharing Platform SlideShare For $119M. A concern might be that following the take-over there has been a lack of investment in the company, with asset-stripping of intellectual property, technical expertise, usage data  or other valuable assets taking place prior to the closure of the service or significant changes in its terms and conditions.

Quantcast stats for Slideshare

However the usage figures provided by Quantcast, available from the TechCrunch page about SlideShare, show no cause for concern. So perhaps my experience was a one-off glitch. However the experience has led me to consider some additional risks which I hadn’t thought about previously:

Service makes mistakes: Although this mistake did not have any significant adverse effect, what would have happened if my account had been unavailable during a large event, such as an IWMW event, in which slides hosted on Slideshare are used for event amplification?

Vexatious complaints: The automated email I received stated that my Slideshare content “was flagged as inappropriate by our community“. Could people submit anonymous complaints about content hosted on Slideshare, I wonder, leading to accounts being removed with an innocent Slideshare user having to make their case for the content to be restored?

Contentious content: Slideshare’s Community Guidelines state: “Don’t post content or comments about issues like child exploitation, animal abuse, drug abuse, bomb making etc. They will be removed and your account will get suspended.” But what if a lecturer is giving a talk about, say, drug abuse? The guidelines do not seem to provide any scope for flexibility.

I’d welcome feedback on my experiences. I’d also like to invite Slideshare to respond to  the concerns I’ve raised. As I have said, I’ve been a longstanding fan of the service; I would hope that Slideshare’s support desk will be proactive in responding to concerns.

My Slideshare statistics

NOTE: Shortly after publishing this post I received an email from Slideshare containing a summary of my usage statistics for the service. As illustrated, the figures provide an indication of significant levels of outreach for my slides (together with a small number of slides I have published on behalf of others). I hope that Slideshare will continue to provide benefits for me and that my concerns will be addressed.

Posted in Repositories, Web2.0 | Tagged: | 14 Comments »

Preparing For The Future: Helping Libraries Respond to Changing Technological, Economic and Political Change

Posted by Brian Kelly on 5 July 2013

Umbrella 2013 paper by Kelly and Hollins

I recently described a paper on “Reflecting on Yesterday, Understanding Today, Planning for Tomorrow” which I presented at the Umbrella 2013 conference – although in the 20 minutes available it was only possible to give a high level overview of the approaches which had been used in the work of the JISC Observatory.

Yesterday, however, I facilitated a 90 minute workshop session at the University of York on “Preparing For The Future: Helping Libraries Respond to Changing Technological, Economic and Political Change“. The workshop was part of a Staff Development Festival for University of York staff who work in the university Library and IT Services department. I have to admit I was pleased to see this level of commitment to staff development which, I heard, reflects an institutional commitment to continued professional development. Such ongoing staff development will be particularly relevant to the University of York since, last August, it joined the ranks of the Russell Group universities. Staff in the Library and IT Services will have a key role to play in supporting research activities across the university. My workshop aimed to help participants identify key development areas for the future, in light of lessons learnt from the past.

Since the session was a workshop this provided time for the participants to take part in a Delphi exercise. As Paul Hollins (CETIS) and I described in our paper presented at the Umbrella 2013 Conference:

The Delphi process is an established and structured communication technique for interactive forecasting reliant on a selected panel of experts. The technique has been adopted by the US-based New Media Consortium (NMC) for the NMC Horizon project centrepiece activity charting the international landscape of emerging technologies as they relate to “teaching, learning, research, creative inquiry and information management”.

The four groups each identified four technologies or technology-related developments which were felt likely to have a significant impact on library/IT activities in 2-4 years. The groups then voted on the technologies to identify the three most significant areas. At yesterday’s event these technology areas were mobile, social media and cloud services. Perhaps nothing surprising there, but this provided an opportunity for open discussions on the implications for Library and IT Services policies and practices. In addition we also discussed the implications of technologies, such as ‘gamification’, which had been mentioned but received few votes.

The workshop provided a valuable opportunity to make an institution aware of the methodology used by the JISC Observatory team prior to the cessation of the JISC Observatory work, following the withdrawal of JISC core-funding for UKOLN and CETIS. In light of moves away from centralised advice and support for the sector it will be important, I feel, that the sector is made aware of such methodologies so that these approaches can be used at an institutional level.
A post entitled I want to be a Dandelion! published yesterday on the Unravelled Bookshelves blog gave another perspective of the importance of supporting diversity across the sector and a move away from Big Projects. The post reported on a talk given by Ben Showers at the Umbrella 2013 conference:

The final thought is one that led to my title today. Ben Showers spoke about tooling up; I thought I would hear about what tools I need for the future but instead I got a wonderful talk about how the future is change and we must embrace as much as we can, that we are valuable and valued, we just need to keep up. He drew the talk to a close saying we should be less like mammals and more like dandelions: a mammal has few children and spends a long time nurturing them, putting a lot of care and thought into them, wanting as many as possible to survive to keep life going, while dandelions throw many many seeds out – no care, no energy, no worries – and they spread and get taken by all. We need to focus less on big things and more on smaller and freer items that float and go further, and need much much less from us.

I agree. And I’m looking forward to working across the sector to support institutions in identifying what those “smaller and freer items” may be and in establishing best practices for exploiting them. There are opportunities for consultants to work with the sector, I feel. I’m looking forward to such work after I am made redundant in less than 4 weeks’ time!

Note that the slides I used in the workshop session are available on Slideshare and embedded below:


Posted in Web2.0 | Tagged: | 1 Comment »

Reflecting on Yesterday, Understanding Today, Planning for Tomorrow

Posted by Brian Kelly on 3 July 2013

The Umbrella 2013 Conference

Plenary talk at Umbrella 2013

Yesterday I attended the first day of the Umbrella 2013 conference. The opening day of the two-day conference was full of fascinating talks and interesting discussions – the highlight of which was the closing plenary talk which asked “Is it a bird? Is it a plane? No it’s a librarian?“. But no ordinary librarian – Victoria Treadway, Clinical Librarian at the Wirral University Teaching Hospital NHS Foundation Trust, in an engrossing double act with Doctor Girendra Sadera, described how, by going beyond one’s comfort zone and working closely with the team in the hospital’s Critical Care Unit, librarians could literally save lives.

We’re All Information Professionals Now!

Umbrella tweet

If this was the highlight of the first day, there was also an undercurrent related to the uncertainties about the future of the library profession and CILIP, the professional organisation for librarians and information professionals. Perhaps it would appear strange for librarians and information professionals to be uncertain of their future in an information-rich society. But as Annie Mauger (CILIP CEO) tweeted during the opening plenary: “We’re all information professionals“. But if we are all information professionals (Channel 4 news journalists, researchers and, indeed, ordinary people, many of whom will now have to curate increasingly large volumes of digital resources) what differentiates information professionals who choose – or choose not – to belong to a professional organisation?

Reflecting on Yesterday, Understanding Today, Planning for Tomorrow

My contribution to the conference was to present a paper on “Reflecting on Yesterday, Understanding Today, Planning for Tomorrow” which argues that librarians need to adopt evidence-based approaches to planning for the implications of technological developments. The paper summarised the approaches which have been taken by the JISC Observatory and argued that, in light of the imminent demise of the JISC Observatory following the cessation of the core funding for UKOLN and CETIS, institutions may wish to adopt the methodology developed by the JISC Observatory team.

Since the presentation only lasted for 20 minutes it was only possible to give a brief overview of the JISC Observatory team’s work. However I hope that the paper (for which Paul Hollins, Director of CETIS, was a co-author) will be published shortly. In addition an extended version of the slides is available on Slideshare and embedded below.


Posted in Events, Evidence, Web2.0 | Tagged: | 2 Comments »

Using Social Media to Enhance Your Research Activities

Posted by Brian Kelly on 24 June 2013

SRA paper

Later today I’ll be presenting an invited paper on “Using Social Media to Enhance Your Research Activities” at the Social Media in Social Research conference which is being organised by the SRA (Social Research Association). The paper is available from the University of Bath repository in PDF and MS Word formats.

The abstract for the paper describes how:

In this paper the author summarises the benefits which can be gained from use of social media to support research activities. The paper is based on personal experiences in using social media to engage with fellow researchers, meet new collaborators and co-authors and enhance awareness and impact of research papers.

The accompanying slides are available on Slideshare and embedded below:

Posted in Web2.0 | 2 Comments »

This Year’s Experiment at #IWMW13 – the Bizzabo Mobile Event App

Posted by Brian Kelly on 30 May 2013

Experiments With Online Technologies at IWMW Events

Bizzabo mobile app

The mobile app for the IWMW 2013 event

A video summary entitled Use of Social Media at IWMW Events is available on YouTube. The brief video (which lasts for just over one minute) explains how, since 2005, we have tried to make use of new online technologies at UKOLN’s IWMW (Institutional Web Management Workshop) events. The video clip describes how the availability of a WiFi network at the University of Manchester, the venue for the IWMW 2005 event, provided our first opportunity to explore the benefits which use of communications technologies could provide at an event. Back then we were using IRC, which was available to the small number of people (about 18) who had brought along a laptop with WiFi capabilities.

I was one of those 18 people, and was therefore one of the first to hear the news of the London bombings. It was a strange experience to be aware of the news, but not the full extent of the news, whilst most people in the audience were listening to the speaker. I waited until the speaker had finished before announcing the news, with many of the London based participants then using the coffee break to ring home.

The incident brought home to me the importance of online communications at events, not only for significant incidents but also for more mundane occurrences such as missing keys, speaker delays and problems with public transport.

In addition to the need for event organisers to be able to communicate with speakers and delegates, the experiments a few years ago demonstrated the value of peer-to-peer communications using popular technologies such as Twitter for enriching the experience of events by allowing open discussions and questions to take place.

This Year’s Experiment: The Bizzabo Mobile App

Since mobile technologies are now mainstream, especially amongst Web professionals, this year we are experimenting with Bizzabo, a mobile app we are using to provide access to the IWMW 2013 timetable together with the event’s Twitter stream, as well as a communication channel for IWMW 2013 participants and other interested parties.

As can be seen from the screenshot, the opening page for the event shows its name and location, people who have signed up to the community, and recent tweets with the event hashtag.

The agenda for the three-day event is also available and you can bookmark your favourite sessions and add details to your mobile device.

One limitation I have found with the Bizzabo app is that the number of parallel session tracks is limited to ten. As the IWMW 2013 event has eleven parallel sessions on Wednesday 26 June and ten on Thursday 27 June, this causes a slight problem: one of the slots has to be allocated to the main plenary sessions.

Timetable shown in Bizzabo

The IWMW 2013 timetable for day 2 shown in Bizzabo

However this isn’t an insurmountable problem, and won’t be relevant for events which have fewer parallel sessions.

For me the success of apps such as this is whether they will be actively used by sufficient numbers of people. As described on the Bizzabo blog:

The community is the most important part of Bizzabo and what we’re all about. Once you join the community, you’ll be able to see all other members, go through their profiles, discover mutual connections and interact with the people you want to connect with. 

Note that the Bizzabo app is available for the iPhone and Android environments. The event organiser’s interface is available using a Web browser, which enables the event organiser to provide details about the event (name, location, programmes, times, etc.) as well as information about the speakers. It should be noted that speaker profiles can include details of the speaker’s Web site, blog, Twitter account and LinkedIn profile.

The programme for the IWMW 2013 event is also available on Lanyrd, which also provides a mobile interface. It will be interesting to see how Bizzabo compares with Lanyrd. The latter, to be fair, is more of a social directory for events, allowing you to see participants at events via their Twitter ID. However it will also be interesting to make a comparison between a responsive Web site (Lanyrd) and a dedicated mobile app (Bizzabo). From a provider’s perspective it can be advantageous to provide a single source of information which is available for both desktop and mobile browsers. However might users prefer a solution which could exploit a mobile phone’s characteristics more effectively and, arguably, is more easily found via the phone providers’ app store?

Bizzabo provides a simple way of ensuring, for free, that an event programme is available in a format suitable for viewing on a mobile device. However for me the important question is whether the community aspect of Bizzabo takes off. I’m willing to give it a go. If you are attending the IWMW 2013 event, or are simply interested in the event, why not download the app and try it for yourself. Your feedback would be welcomed, including comments on the mobile app versus mobile web approaches to providing information about events.

As mentioned above a brief video summary of the history of use of social media tools at IWMW events is available on YouTube and embedded below.


Posted in Events, Web2.0 | Tagged: , | 2 Comments »

Embedded Metadata in PDFs Hosted in Institutional Repositories: An Inside-Out & Outside-In View

Posted by Brian Kelly on 4 January 2013

PDF Metadata – Why Is it So Poor?

Metadata in PDF source

“PDF metadata – why so poor?” asked Ross Mounce in a blog post published on New Year’s Eve.

In the post Ross expressed surprise that although “with published MP3 files of audio you get rather good metadata … the results from a little preliminary survey of academic publisher PDF metadata” were poor: “Out of the 70 PDFs I’ve published (meta)data on over at Figshare, only 8 of them had Keywords metadata embedded in them“.

This made me wonder about the quality of the metadata for papers I have uploaded to Opus, the University of Bath repository.

I looked at a paper on A Challenge to Web Accessibility Metrics and Guidelines: Putting People and Processes First which is available in Opus in PDF and MS Word formats.

I first used Adobe Acrobat in order to display the metadata for the original source PDF file, prior to uploading to the repository. As can be seen from the accompanying screen shot the metadata included the title, the author details (with the email address for one of the authors) and two keywords.

Metadata for repository copy of paper

However looking at the display for the PDF downloaded from the repository we find that no metadata is available!

This PDF differs from the original source in that a cover page is added dynamically by the repository in order to provide appropriate institutional branding. It would appear that in the creation of the new PDF resource, the original metadata is lost.
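One quick way to check whether a workflow has stripped metadata is to look for the document information dictionary, which in an uncompressed PDF is stored as plain text. The following stdlib-only sketch (the sample bytes are fabricated for illustration) pulls out string-valued Info entries; note that real-world PDFs frequently compress or encode this object, in which case a proper PDF library would be needed instead of this crude diagnostic:

```python
import re

def extract_info_dict(pdf_bytes: bytes) -> dict:
    """Crudely pull string-valued entries out of a PDF's document
    information dictionary. A diagnostic sketch only: real PDFs may
    store the Info object compressed or hex-encoded, in which case
    a proper PDF library is required."""
    fields = {}
    for key in (b"Title", b"Author", b"Keywords"):
        match = re.search(rb"/" + key + rb"\s*\(([^)]*)\)", pdf_bytes)
        if match:
            fields[key.decode()] = match.group(1).decode("latin-1")
    return fields

# A fabricated fragment of an uncompressed Info dictionary, as it
# might appear in a PDF exported from MS Word:
sample = (b"1 0 obj\n"
          b"<< /Title (A Challenge to Web Accessibility Metrics)\n"
          b"   /Keywords (Web accessibility, guidelines) >>\nendobj\n")
print(extract_info_dict(sample))
# {'Title': 'A Challenge to Web Accessibility Metrics', 'Keywords': 'Web accessibility, guidelines'}
```

Running such a check against the source PDF and the repository's cover-sheeted PDF would make the metadata loss described above immediately visible.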

Metadata for MS Word master

Looking at the metadata in the original source document – an MS Word file – we can see how the authors’ names came to be concatenated into a single field. We can also see that although the title of the paper was given correctly, poor keywords had been included which did not reflect the keywords in the paper itself (Web accessibility, disabled people, policy, user experience, social inclusion, guidelines, development lifecycle, procurement).

I suspect that I am not alone in not spending much time in ensuring that appropriate metadata is embedded in the master source of a peer-reviewed paper. I have also previously not considered how such metadata might be lost in the workflow processes when uploading to an institutional repository: after all, surely the important metadata is added when the paper is deposited into the repository?

Ross’s blog post made me check the embedded metadata – and I discovered that the correct metadata is still included in the MS Word file which was uploaded to the repository along with the PDF copy.

Does the loss of the metadata embedded in the PDF matter? After all, surely people will use the search facilities provided in the repository in order to find papers of interest?

But people will not necessarily visit a repository to find papers of interest. A post which described A Survey of Use of Researcher Profiling Services Across the 24 Russell Group Universities showed that on 1 August 2012 there were over 18,000 users of ResearchGate in the 24 Russell Group universities and judging by the messages along the lines of “28 of your colleagues from University of Bath have joined ResearchGate in the last month. Why not follow them today?” which I am currently receiving, use of this service is growing.

As can be seen from the screenshot of my ResearchGate profile, the service provides access to PDF copies of my papers. I normally simply provide a link to the PDF hosted in the repository, but the example illustrated contains a copy of the original PDF which was uploaded to the service by one of the co-authors.

In the case of most of my papers it is clear from the thumbnail of the PDF that the paper contains the coversheet provided by the repository.

Researchgate Paper (hosted in Opus)


We can see that the PDF copy of a paper hosted in a repository should not be regarded as a final destination; rather the PDF may be surfaced in other environments.

It will therefore be important to ensure that workflow processes do not degrade the quality of the PDF. It will also be important to ensure that authors are made aware of how embedded metadata may be used by services beyond the institutional repository. But to what extent do repository managers feel they have a responsibility to advise on practices which will enhance the discoverability of content on services hosted outside the institution?

Taylor Francis

In a paper which asked “Can LinkedIn and Enhance Access to Open Repositories?” Jenny Delasalle and I commented on how “commercial publishers are encouraging authors to use social media to drive traffic to papers hosted on publishers’ web sites” and provided examples of such approaches from Taylor and Francis, Springer, Sage and Oxford Journals. As an example, Taylor and Francis describe how they are “committed to promoting and increasing the visibility of your article and would like to work with you to promote your paper to potential readers” and go on to document services which can help achieve this goal.

In a blog post which discussed the ideas described in the paper I described how we had failed to find significant evidence of similar approaches being employed by repository managers:

It was interesting that in Jenny’s research she found that a number of commercial publishers encourage their authors to use services such as LinkedIn and to link to their papers hosted behind the publishers paywalls – and yet we are not seeing institutional views of the benefits of coordinated use of such services by their researchers. Institutional repository managers, research support staff and librarians could be prompting their institutions to make the most of these externally provided services, to enhance the visibility of their researchers’ work in institutional repositories.

But that paper was limited to use of third-party services to provide access routes to research papers. What of the bigger picture in which institutional work flow processes should be designed to enhance discoverability?

The ‘inside-out and outside-in library’

On Wednesday in a post entitled Discovery vs discoverability … Lorcan Dempsey explored the idea of the “inside-out and outside-in library“. In the post Lorcan described how:

Throughout much of their existence, libraries have managed an outside-in range of resources: they have acquired books, journals, databases, and other materials from external sources and provided discovery systems for their local constituency over what they own or license.

However in a digital and network world, there have been two major changes, which shift the focus towards inside-out:

First access and discovery have now scaled to the level of the network: they are web scale. If I want to know if a particular book exists I may look in Google Book Search or in Amazon, or in a social reading site, in a library aggregation like Worldcat, and so on. … Secondly the institution is also a producer of a range of information resources: digitized images or special collections, learning and research materials, research data, administrative records (website, prospectuses, etc.), faculty expertise and profile data, and so on.

Lorcan goes on to describe the challenge facing libraries:

How effectively to disclose this material is of growing interest across libraries or across the institutions of which the library is a part. This presents an inside-out challenge, as here the library wants the material to be discovered by their own constituency but usually also by a general web population.

I would suggest that institutional repositories could usefully adopt the approach taken by Taylor and Francis:

“[The institution is] committed to promoting and increasing the visibility of your article and would like to work with you to promote your paper to potential readers”

But rather than simply encouraging researchers to add links from popular services such as LinkedIn and ResearchGate to papers deposited in the repository, might the institutional goal be better served by encouraging researchers to make the content of their papers available in such third-party services (subject to copyright considerations) – with the institutional repository providing both a destination and a component in a workflow, and papers being surfaced in services such as ResearchGate, as I have illustrated above?

If such an approach were to be embraced there would be a need to ensure that embedded metadata was not corrupted through repository workflow processes. If, however, the repository is regarded as the sole access point, there would be little motivation to address such limitations in the work flow.

Or to put it another way, repository managers will need to manage content hosted within the institution in ways that support its use by services over which they have no control.

To a certain extent, this has already been accepted: repositories were designed to have “cool URIs” which can help resources to be discovered by Google. I am suggesting that there is a need to observe usage patterns which indicate emerging ways in which users are finding content. The growing number of email alerts from ResearchGate suggests that it may be a service to monitor – with Ross Mounce’s recent post on the quality of metadata embedded in PDFs suggesting one area in which there will be a need to revisit existing workflow processes.

PS. Ross Mounce described “a little preliminary survey of academic publisher PDF metadata” and has published the data on Figshare. Has anyone harvested the metadata embedded in PDFs hosted on repositories and published the findings?


Posted in Repositories, Web2.0 | 21 Comments »

How I Learnt That “Google Scholar Has New Updates”

Posted by Brian Kelly on 10 August 2012

“Google Scholar Has New Updates For You”

Yesterday while visiting Google Scholar I noticed an alert which informed me that there were 10 new notifications for me (see image, but note that since I have now viewed the updates, the alert which was displayed in the top right is no longer shown).

I’d not seen this alert before so I followed the link and discovered a set of recommended papers based on my citations. The second recommended paper in this list seemed particularly interesting: a paper on How Well Do Ontario Library Web Sites Meet New Accessibility Requirements?

I viewed the paper (available in PDF and HTML formats) and found that a recent accessibility audit of library web sites in Ontario had found that, despite legal requirements for web sites to conform with WCAG 2.0 guidelines, “an average of 14.75 accessibility problems were found per web page“.
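Audits such as the Ontario study typically combine manual inspection with automated checks of machine-testable checkpoints. As a minimal stdlib sketch (not the tooling used in any of the studies cited here), the following counts images that lack an alt attribute, one of the simplest such checkpoints:

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Count <img> elements that lack an alt attribute entirely --
    one of the simplest machine-testable accessibility checkpoints."""
    def __init__(self):
        super().__init__()
        self.missing_alt = 0

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the start tag
        if tag == "img" and "alt" not in dict(attrs):
            self.missing_alt += 1

checker = AltTextChecker()
checker.feed('<p><img src="logo.png"><img src="chart.png" alt="Usage chart"></p>')
print(checker.missing_alt)  # 1
```

Note that such a crude check is satisfied by any alt text, however meaningless, which illustrates the wider point below: conformance metrics alone say little about the real accessibility of a resource.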

Back in 2002 I published An Accessibility Analysis of UK University Entry Points which found that only 3 University home pages out of 163 conformed with WCAG 1.0 AA guidelines. Two years later a follow-up survey was published which reported that 9 out of 161 home pages conformed with WCAG 1.0 AA guidelines. Since I was well aware of the importance University Web managers placed on addressing Web accessibility issues, especially since the Special Educational Needs and Disability Act (SENDA) accessibility legislation was enacted in 2002, I regarded this as evidence of the limitations of the WCAG guidelines. Around this time our first peer-reviewed paper on Web accessibility, Developing A Holistic Approach For E-Learning Accessibility, was published. In 2005 a paper on Forcing Standardization or Accommodating Diversity? A Framework for Applying the WCAG in the Real World documented the limitations of the WCAG guidelines and the WAI model. A series of accessibility papers followed, with the most recent, A Challenge to Web Accessibility Metrics and Guidelines: Putting People and Processes First, describing how:

This paper argues that web accessibility is not an intrinsic characteristic of a digital resource but is determined by complex political, social and other contextual factors, as well as technical aspects which are the focus of WAI standardisation activities. It can therefore be inappropriate to develop legislation or focus on metrics only associated with properties of the resource.

It was therefore disheartening to read the conclusion of the paper on the Ontario library web sites:

Since none of the library web sites examined in this study currently conform to WCAG 2.0, many changes will need to be made before sites can meet the new legal requirements for accessibility. Web accessibility guidelines and standards will need to be incorporated and integrated into the vocabulary, thinking, and processes of web content creators to successfully achieve WCAG 2.0 conformance. Complying with new web accessibility standards will involve a significant change in web development processes.

However the good news is that Google Scholar Updates correctly identified a paper of interest to me.

Learning More About Google Scholar Updates

This morning I spotted a tweet from Glyn Moody which stated:

Moody’s Microblog Daily Digest 120809 – yesterday’s tweets as a single Web page

Since I know that Glyn uses his Twitter account to post links to resources which are likely to be of interest to me (especially related to a variety of open practices) I followed the link to Glyn’s most recent tweets. There I spotted a timely tweet:

Wow – Google Scholar “Updates” a big step forward in sifting through the scientific literature – nice

This provided a link to a blog post by Jonathan Eisen, Professor at UC Davis who described his reaction when encountering this new service from Google:

Wow. Completely awesome if it works well. So, well, let’s see if it works well. For me the system recommends the following

Jonathan Eisen went on to share his experiences in identifying the value of the recommendations. After concluding that the first recommendation was of little interest, like me he then looked at another suggestion:

paper number 2 seems a bit closer to my heart: REGEN: Ancestral Genome Reconstruction for Bacteria. And bonus – it is freely available. And so, well, I read over it. And it is definitely related to what I do and I probably would not have seen it without this notification. Cool.


From a post entitled Scholar Updates: Making New Connections posted on the Google Scholar blog it seems that this new service was only released two days ago, on Wednesday 8 August. The post describes how:

We analyze your articles (as identified in your Scholar profile), scan the entire web looking for new articles relevant to your research, and then show you the most relevant articles when you visit Scholar. We determine relevance using a statistical model that incorporates what your work is about, the citation graph between articles, the fact that interests can change over time, and the authors you work with and cite. You don’t need to configure updates or enter any queries. We’ll notify you about new updates by displaying a preview on the homepage and highlighting a bell icon on search results pages.
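To make the description above more concrete, here is a toy sketch of how a recommender might blend two of the signals the Google Scholar post mentions: topical similarity to your own articles and overlap in the citation graph. This is purely my own illustration, with made-up function names and weights; it is not Google's actual model, which the post describes only in outline.

```python
# Toy relevance scorer: blends topical similarity with citation-graph overlap.
# All names, weights and data structures here are illustrative assumptions.
from collections import Counter
import math


def term_vector(text):
    """Bag-of-words term counts for a title or abstract."""
    return Counter(text.lower().split())


def cosine(u, v):
    """Cosine similarity between two term-count vectors."""
    shared = set(u) & set(v)
    dot = sum(u[t] * v[t] for t in shared)
    norm = math.sqrt(sum(c * c for c in u.values())) * \
           math.sqrt(sum(c * c for c in v.values()))
    return dot / norm if norm else 0.0


def jaccard(a, b):
    """Overlap between two sets of cited-paper identifiers."""
    return len(a & b) / len(a | b) if a | b else 0.0


def relevance(candidate, profile, w_topic=0.7, w_cites=0.3):
    """Score a candidate (text, cited_ids) against an author profile,
    a list of (text, cited_ids) pairs for the author's own papers:
    best topical match blended with best citation overlap."""
    text, cites = candidate
    topic = max((cosine(term_vector(text), term_vector(p)) for p, _ in profile),
                default=0.0)
    graph = max((jaccard(cites, c) for _, c in profile), default=0.0)
    return w_topic * topic + w_cites * graph
```

Under this sketch a paper on library web site accessibility scores highly against an accessibility-focused profile, while an unrelated genomics paper scores zero, which matches the behaviour described above for the papers recommended to me and to Jonathan Eisen.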

It therefore seems that researchers can gain value by ensuring that they have a Google Scholar account containing information about their research publications which Google’s sophisticated search algorithms can use to suggest other relevant papers. It’s therefore interesting to note that last week’s Survey of Use of Researcher Profiling Services Across the 24 Russell Group Universities reported that 5,115 users at Russell Group universities have claimed a Google Scholar account, ranging from 77 at the University of Exeter to 580 at UCL.

In addition to the value of Google Scholar Updates, it also occurred to me how valuable the links to resources provided by Glyn Moody in his tweets could be, if they were more easily accessed than via the daily updates posted on his blog.

Aaron Tay is another person I follow who also provides valuable links to resources using his Twitter account. Back in February 2012 in a post entitled My Trusted Social Librarian I described how I had set up a Twitter list containing just @aarontay. I used this list with the Smartr app to view the content of links which Aaron tweeted. However Smartr is no longer available and, in any case, such access to Aaron’s links required every individual user to install Smartr or a similar app. Wouldn’t it be useful if there were a web-based aggregation providing a summary of links which a Twitter user has tweeted? As I described last week, this is what RebelMouse provides. Even better, Aaron also uses RebelMouse. And, as can be seen, 19 hours ago Aaron also tweeted a link to the blog post about the Google Scholar Updates:

RT @figshare: Wow – Google Scholar “Updates” a big step forward in sifting through the scientific literature: by @p …

To conclude, if you use your Twitter account for sharing links, consider using a service such as RebelMouse to make it easier for others to see the content of the links you’ve shared.
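The aggregation such services perform is, at its core, simple: collect the URLs from a user's tweets and rank them by how often they were shared. As a rough sketch (my own illustration, not RebelMouse's implementation; the regex and helper name are assumptions):

```python
# Minimal link aggregation over a list of tweet texts -- a sketch of the kind
# of summary a service like RebelMouse builds from a Twitter stream.
import re
from collections import Counter

URL_RE = re.compile(r"https?://\S+")


def shared_links(tweets):
    """Return (url, count) pairs for links found in the tweets,
    most-shared first. Trailing punctuation is trimmed crudely."""
    counts = Counter(
        url.rstrip(".,)")
        for text in tweets
        for url in URL_RE.findall(text)
    )
    return counts.most_common()
```

A real service would also resolve shortened t.co links and fetch page titles and preview images, but the ranking idea is the same.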

Twitter conversation from Topsy: [View]

Posted in search, Web2.0 | Tagged: | 1 Comment »

A Survey of Use of Researcher Profiling Services Across the 24 Russell Group Universities

Posted by Brian Kelly on 1 August 2012

Looking Back

Back in March 2012 in a post on Profiling Staff and Researcher Use of Cloud Services Across Russell Group Universities I summarised usage of Academia.edu, LinkedIn, ResearcherID and Google Scholar Citations across the 20 Russell Group universities. The post highlighted complementary surveys which had been carried out by Jenny Delasalle, who in her Twitter profile describes herself as a “Research support Librarian: interested in bibliometrics, copyright, scholarly communications, and all sorts!” based at the University of Warwick. That connection subsequently led to Jenny and me writing a paper which asked “Can LinkedIn and Academia.edu Enhance Access to Open Repositories?” which was presented at the Open Repositories 2012 conference, OR 2012.

As described in a one-minute video summary and a 4-minute slidecast, in our paper Jenny and I described personal evidence which suggested that use of LinkedIn and Academia.edu can help to raise the profile of peer-reviewed papers hosted in institutional repositories if links to the papers are provided in these popular services, as this may enhance the Google ranking for the institutional repository.

As described on the Russell Group University Web site: “Through their outstanding research and teaching, unrivalled links with businesses and a commitment to civic responsibility, Russell Group universities make an enormous impact on the economic, social and cultural wellbeing of the UK“. But to what extent are the Russell Group universities making use of researcher profiling services to enhance access to their research outputs, especially, those hosted in institutional open access repositories?

Updated Survey of Russell Group University Use of Researcher Profiling Services

The methodology which was used in the previous blog posts, and repeated for the findings published in our paper, has been used again, this time to provide a benchmark for use of these services across the enlarged collection of Russell Group universities, which grew to 24 institutions on 1 August 2012 following the incorporation of Durham University, the University of Exeter, Queen Mary, University of London and the University of York.

In addition to benchmarking four additional institutions, following Jenny Delasalle’s blog post about ResearchGate the ResearchGate service was also included in the survey.

The findings are given in the following table. Note that the data for the Academia.edu, Google Scholar Citations, ResearcherID and ResearchGate services was collected on 25 July 2012.

| Ref. No. | Institution | Academia.edu | LinkedIn (Followers) | LinkedIn (Current) | ResearcherID | Google Scholar Citations | ResearchGate (Members) | ResearchGate (Impact Points) | ResearchGate (Publications) |
|---|---|---|---|---|---|---|---|---|---|
| 1 | University of Birmingham | 1,210 | 5,000 | 5,667 | 89 | 131 | 782 | 54,959.25 | 19,515 |
| 2 | University of Bristol | 1,018 | 4,320 | 3,477 | 254 | 170 | 641 | 64,661.22 | 21,249 |
| 3 | University of Cambridge | 3,020 | 8,741 | 7,220 | 460 | 330 | 972 | 157,728.66 | 39,713 |
| 4 | Cardiff University | 906 | 4,287 | 3,609 | 468 | 140 | 646 | 26,620.70 | 9,596 |
| 5 | Durham University | 1,001 | 2,620 | 1,904 | 148 | 131 | 273 | 13,151.25 | 1,151 |
| 6 | University of Exeter | 919 | 3,742 | 2,735 | 113 | 77 | 269 | 13,099.47 | 5,150 |
| 7 | University of Edinburgh | 2,079 | 7,090 | 6,123 | 263 | 236 | 1,181 | 87,934.30 | 25,918 |
| 8 | University of Glasgow | 1,004 | 3,802 | 4,099 | 293 | 219 | 613 | 59,662.76 | 20,041 |
| 9 | Imperial College | 798 | 8,981 | 6,914 | 465 | 362 | 1,096 | 105,989.84 | 30,404 |
| 10 | King’s College London | 1,420 | 5,994 | 27 | 380 | 174 | 1,406 | 60,114.47 | 18,264 |
| 11 | University of Leeds | 1,657 | 6,273 | 6,599 | 225 | 164 | 848 | 45,132.67 | 16,944 |
| 12 | University of Liverpool | 866 | 3,926 | 4,814 | 166 | 91 | 582 | 44,800.42 | 16,475 |
| 13 | London School of Economics | 1,131 | 8,464 | 2,075 | 20 | 95 | 191 | 2,825.73 | 1,838 |
| 14 | University of Manchester | 2,279 | 7,601 | 8,244 | 305 | 357 | 1,113 | 71,887.98 | 25,139 |
| 15 | Newcastle University | 906 | 4,275 | 3,347 | 173 | 143 | 704 | 51,783.84 | 17,307 |
| 16 | University of Nottingham | 1,299 | 6,269 | 6,703 | 355 | 160 | 970 | 56,478.57 | 20,513 |
| 17 | University of Oxford | 3,842 | 9,447 | 9,823 | 402 | 405 | 1,221 | 159,620.47 | 38,224 |
| 18 | Queen Mary | 715 | 3,519 | 2,267 | 20 | 139 | 228 | 15,556.27 | 5,232 |
| 19 | Queen’s University Belfast | 689 | 2,317 | 185 | 83 | 62 | 479 | 23,917.28 | 10,750 |
| 20 | University of Sheffield | 1,082 | 5,008 | 5,941 | 276 | 174 | 823 | 47,573.65 | 18,127 |
| 21 | University of Southampton | 1,083 | 4,935 | 5,162 | 287 | 182 | 670 | 37,618.63 | 16,887 |
| 22 | University College London | 2,776 | 10,866 | 7,164 | 709 | 580 | 1,624 | 138,134.10 | 35,035 |
| 23 | University of Warwick | 1,143 | 4,350 | 3,142 | 216 | 119 | 448 | 18,142.13 | 8,098 |
| 24 | University of York | 986 | 2,824 | 2,394 | 125 | 474 | 386 | 15,808.07 | 4,841 |
| | TOTAL | 33,829 | 134,669 | 109,634 | 6,147 | 5,115 | 18,166 | | 426,414 |


It was noted that the figures originally given in this table for Google Scholar Citations were an underestimate. This appears to be due to the design of the REST interface to the entries. The table has been updated with the correct figures.


  • The numbers may be skewed by errors or variants in names of institutions. For example there are 140 people in who are associated with the rather than the domain.
  • The numbers for and ResearcherID were obtained by a search for the institution’s name. However a link to the findings is not available.
  • Searches for ResearcherID were for institution name except for the University of Birmingham which included UK to avoid name clashes.
  • The findings for institutions such as Queen’s University Belfast and King’s College London with apostrophes in the institution’s name may be skewed due to different policies on resolving such names.


It should be noted that the five services covered in this survey are different and it would be inappropriate to make comparisons across them: in particular, although ResearcherID, Google Scholar Citations and ResearchGate are intended for the research community, LinkedIn has a wider remit and, understandably, a larger audience.

In addition, as described in the Notes, there may be flaws or inconsistencies in the way in which the data was gathered and displayed. In particular, it seems that the lack of an agreed institutional ID means that users may associate themselves with different variants of their institution’s name, with institutions containing apostrophes particularly affected.
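The name-variant problem described above (apostrophes, a leading "The", inconsistent spacing) is the kind of thing a small normalisation step can reduce when collating survey counts. As a rough sketch, with an assumed function name and rules of my own choosing rather than anything the services themselves apply:

```python
# Collapse common variants of an institution name before grouping survey
# counts. The specific rules here are illustrative assumptions.
import re


def normalise_institution(name):
    """Map variants such as "King's College London" / "Kings College London"
    or "The University of York" / "University of York" to one form."""
    name = name.lower().strip()
    name = name.replace("\u2019", "'")   # curly apostrophe -> straight
    name = name.replace("'", "")         # "King's" and "Kings" become one form
    name = re.sub(r"^the\s+", "", name)  # drop a leading "The"
    name = re.sub(r"\s+", " ", name)     # collapse internal whitespace
    return name
```

Grouping search results by the normalised form, rather than the raw string, would at least merge the apostrophe variants that the Notes and Paradata sections identify as a source of undercounting.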

The previous survey and subsequent paper suggested that use of popular social media services by researchers could enhance access to the researchers’ research outputs if links to their outputs were provided from the services.  I am still convinced that this is the case but appreciate that further evidence may be needed in order to convince decision-makers that a coordinated approach to providing links to the content of open access repositories would help to maximise access to the resources.  For now, however, this post is intended to provide a benchmark of use of the services on the launch day for the enlarged group of Russell Group Universities.  In addition I would welcome feedback on the survey methodology, especially from the Russell Group Universities who may find that their information is fragmented across several variants of the institution’s name.

I would also, of course, welcome comments on the implications of the findings and their relevance in the context of the 24 institutions referenced in the survey. ResearchGate, for example, appears to have information on over 426K papers, ranging from 1.8K at LSE to 39K at the University of Cambridge. What proportion of research papers hosted in institutional repositories does this cover? And if the numbers appear low for some institutions, does this mean that the institutions should seek to take appropriate actions to increase the numbers, or ignore such findings as they may simply demonstrate the lack of relevance of the services?

Paradata: As described in a post on Paradata for Online Surveys, blog posts which contain live links to data will include a summary of the survey environment in order to help ensure that survey findings are reproducible, with potentially misleading information being highlighted.

The data for the Academia.edu, LinkedIn, Google Scholar Citations, ResearchGate and ResearcherID services was collected on 25 July 2012.

The values for Google Scholar Citation for the universities of Birmingham and Newcastle include ‘UK’ in the search field in order to avoid including information from US and Australian universities with the same name.

It should also be noted that I was logged into the services when I gathered the information.

It should also be noted that the low values for LinkedIn followers for King’s College London and Queen’s University Belfast are felt to be due to the apostrophe used in the institutions’ names. For example, a search (carried out on 31 July 2012) on LinkedIn for King’s College London gives 3,758 hits but a search for Kings College London gives 328 hits.

Posted in Evidence, Web2.0 | 4 Comments »

Using RebelMouse to Summarise How You Use Twitter

Posted by Brian Kelly on 31 July 2012

Back in February 2011 I asked Who Needs Murdoch – I’ve Got Smartr, My Own Personalised Daily Newspaper! I was a fan of the Smartr app which provided a personalised newspaper based on the content of links tweeted by people I followed on Twitter or on Twitter lists I had created. Although Smartr no longer exists, a wide range of similar personalised news services are now available which appear to be particularly useful on tablet devices and mobile phones.

Yesterday I came across RebelMouse which initially appeared to provide a similar service. However after having had my application for an account approved I realised that RebelMouse was providing something slightly different – it was providing others with a display of the content of links which I tweet. As described in a post entitled Why This Month-Old Startup Is The Most Promising To Launch In A While “RebelMouse is a snapshot of your social media activity“. The article went on to explain how:

RebelMouse is like a Facebook profile page; it’s meant to help other people learn your interests.

When you scan other people’s Rebel Mouse pages, you learn a lot about them, even if you already follow them on Twitter. It resurfaces things you may have missed in social media streams in a visually compelling way.

In March 2010 in a post entitled It Started With A Tweet I described the importance of one’s Twitter bio and the link to further information, which can help potential followers to decide whether to follow an unknown Twitter user, especially in cases in which Twitter is used to support professional activities. Two years later I realise that such decision-making processes can be helped by providing an easy-to-digest summary of a Twitter user’s use of Twitter.

I’ve therefore created my RebelMouse page and have also embedded it within the UKOLN Web site. In addition I have  updated my Twitter biographical details to include a link to the RebelMouse account, as illustrated.

To summarise in an application-independent way:

As part of my open practices which support my professional activities I will make it easy for others to see how I use Twitter so that potential followers can decide whether to follow my Twitter account.

I’m currently using RebelMouse to achieve this goal but will be willing to use an alternative should I come across a service which I feel supports this goal more effectively.

Posted in Web2.0 | Tagged: | 3 Comments »

Making An Impression; Making Connections

Posted by Brian Kelly on 12 July 2012

Social Media: For Ourselves and For Our Customers

A recent post entitled IWMW 2012: The Feedback summarised the feedback we had received for the recent IWMW 2012 event. In addition to this summary, more detailed information was sent to the individual speakers and workshop facilitators on their talks and workshop sessions. Such feedback can be valuable in either showing the value of the contribution made at the event or providing suggestions on how the talk could be improved if repeated in future.

We published the feedback two weeks after the event as it is important that such information is available while the event is fresh in people’s memories. But, of course, there can be other ways of getting feedback. At the UCISA User Support Services Conference which took place a few days ago at the impressive Crewe Hall Hotel I was pleased to receive feedback on Twitter on the talk I gave on “Social Media: For Ourselves and For Our Customers”, which has been summarised on Storify. The feedback included:

  • Excellent presentation, you gave me a lot of new ideas for how I can communicate with my staff and customers. Thanks!
  • Brilliant presentation from @briankelly – good to have a push to tweet a bit more!
  • Brilliant talk from @briankelly – typically informative, insightful, and full of #lolz…
  • Also really enjoyed @briankelly talk about social media. Engaging. Had a chuckle. And I think he likes a real ale so is in my good books

together with an example of an action taken as a result of the talk:

  • Inspired to send my first tweet

Beyond the tweets, a post entitled What a difference a day makes published on the Musings from the frontline blog described how

Today we sat and listened to people who had not only aspired to do things differently and better but, most importantly, had achieved it.

and went on to conclude:

So, thank you @heloukee, @maffrigby, @briankelly and #ussc12 for the inspiration. You have provided the relationship counselling that I needed and me and conferences are now blissfully happy together again (for now anyway…)

It’s About Links; It’s About Connectedness!

The topic of my talk was the importance of social networks to facilitate more effective collaborative working by making use of the existing social networking infrastructure. Although this is a subject I have spoken about previously, as described recently in a post on It’s About Links; It’s About Connectedness! I was fortunate to see Cameron Neylon’s opening plenary talk at the Open Repositories 2012 conference. As described in the live blog of the closing session for the conference given by Peter Burnhill:

we need to think about connectivity, as flagged by Cameron. And these places, ie Twitter and Facebook… We don’t own them but we need to be in them, to make sure that citations come back to us from here.

The importance of use of such social media services to provide links to papers hosted in open repositories was also highlighted by Peter Burnhill in his observation that:

And there was talk of citation… LinkedIn, etc. is all about linking back to research to data

It was pleasing to see that the ideas described in a paper by myself and Jenny Delasalle which asked “Can LinkedIn and Academia.edu Enhance Access to Open Repositories?” had been highlighted in the conference conclusions. But these particular ideas were just a simple example of the bigger picture provided by Cameron Neylon on the importance of networks which, on a global scale, can enable researchers to address difficult research topics which cannot be tackled by a single researcher or research group.

The Video For Connecting, For Sharing

Cameron’s talk, which is available on YouTube and embedded below, makes the point about the importance of connectivity (the social web) and ease-of-use (the lack of ‘friction’ needed to embed social web tools in workflow practices) very eloquently and is well worth viewing (and I’d like to give my thanks to the OR 12 organisers for publishing this video recording so quickly – and also for making it available on YouTube so it can be embedded in this blog).

It would, however, be a mistake to regard social networks as being purely a tool for scientific researchers, just as some people mistakenly feel that social networks are just for young people or for purely ‘social’ purposes (a confusion caused by the different meanings of the term ‘social’). As I described in my talk, for which a video recording is also available, social networks can also be valuable for those working in support services; institutions should gain benefits from use of social networking services across teaching and learning, research, marketing and support areas if they are regarded as valuable tools rather than treated with suspicion, as is currently the case in some areas.

Another important point made by Cameron is the importance of openness for both facilitating connections and minimising the friction caused by licensing barriers. The videos of Cameron’s talk and my talk provide another example of the ways in which connections can be made and knowledge and ideas shared by facilitating access to videos of talks at conferences. As I have described in previous talks on amplified events, such approaches can help the ideas shared at conferences escape the constraints of space and time. Many thanks to the OR 2012 and UCISA conference organisers for providing the live video streams (escaping the constraints of space) and providing rapid access, with few barriers, to the recordings of the talks (escaping the constraints of time). Long may this continue – and if you are considering organising an amplified event the recent “Event Amplification Report” may be of interest.

Twitter conversation via Topsy: [View]

Posted in Repositories, Web2.0 | Tagged: | 1 Comment »

“Our students love Google!”: Thoughts on the Strategic Web Team

Posted by Brian Kelly on 5 July 2012

Last month I attended the second Google Apps for EDU European User Group (GEUG12) meeting which was held at the University of Portsmouth. The meeting was aimed at members of educational institutions which have signed up to Google Apps in Education, but I was invited to chair one of the sessions. I found a great deal of enthusiasm of the value which Google Apps can play not only in the panel discussion which I chaired on Embedding Google Apps in the Institution but also across the range of presentations which were given during the day.

[Note: After publishing this post I came across Sarah Horrigan’s Event Report: Google European User Group 2012 post in which she described how “One of the things that was most interesting from this session [on student portals] was the student response to it – they LOVED it” and pointed out that “Universities feel comfortable moving to Google when others have already moved. For example, 25% institutions in Spain now on Google Apps“].

I was particularly interested in the talk given by Sarah Horrigan, Learning Technologies Manager at the University of Sheffield on Opening up our Practices – Going Google. The event organisers streamed several of the sessions and have provided access to recordings of the talks so I was able to replay Sarah’s presentation.

Four minutes 20 seconds into the talk Sarah told us that:

Our students love Google. They don’t just like it, they love it. The Students Union did a survey on technology in learning and teaching. One of the questions they were asked was “What web sites or online services could you not live without?” Do you know what came number 1? It wasn’t our VLE! It was Google Apps. They love it: everything from Docs to Mail to Scholar – the whole shabang! They love Google Apps.

The popularity of the IT applications provided at the University of Sheffield didn’t come about by chance. Back in March 2011 Chris Sexton, head of CICS, the IT Services department at the University, left a comment on this blog:

We made the decision to move to Google for students nearly two years ago, and are just in the process of moving all of our staff over. That will be for mail, calendar, docs, chat, etc. we see it as much more than just mail. The data side isn’t an issue. Google store all of their data under the safe harbor agreement which is perfectly sufficient for UK data protection/privacy law – I have personally confirmed this with the ICO. And anyway, even if it was all held in Europe, it is still covered by the Patriot Act if it is a US company.

I can see no reason for any HE IT department to run their own email service. 

Further back, on 27 May 2009 Chris reported on the move to GMail for students:

Formally announced the Google mail for students option last night by sending an email to all staff and students. Replies are split almost 50/50. From students saying this is great news, and from staff saying why can’t we have it!

I picked up on the importance of being aligned with one’s user communities a few days after I attended the GEUG12 event. In the session on New to the Sector? New to Web Management? New to IWMW? I suggested that the perfect storm which has hit the sector in general, and IT and the Web in particular, means that there is a need to revisit assumptions about the role of the institutional Web team and the approaches taken to delivering this role – and, perhaps, to unlearn established beliefs and conventions.

I illustrated this point by giving a specific example: the role of events such as the IWMW series. UKOLN does not exist to provide a successful IWMW event; rather our aim is to ensure that the event delivers a specified objective for its community: “To keep web managers up-to-date with developments and best practices in order that institutions can exploit the web to its full potential“. In the talk I explained how technological developments were changing the nature of events and external factors, such as reduced levels of funding and environmental concerns, meant that we needed to not only acknowledge that the nature of our events might change, but that we should also be prepared to be instrumental in leading such changes – something we have been doing in our role in delivering amplified events and, in particular with our Greening Events II: Event Amplification Report” sharing best practices with others.

I went on to argue that institutional Web teams need to ensure that they are aligned with institutional aims and with the needs of their user communities. Easy words to say – but what if they are in conflict with well-established cultural norms in Web and IT teams? We have seen an example in which students are happy with the services provided by Google, staff do not want to be left behind and the IT Services department is aligned to support these preferences. But is this the norm in the sector? Are we more likely to see users, IT staff and perhaps Web teams arguing for their preferred technological environment? And perhaps the argument “we use open source solutions” in preference to licensed solutions is becoming increasingly redundant now that there are Cloud service providers?

Note that a video recording of Sarah’s talk on Opening up our Practices – Going Google is available on YouTube and embedded below.

In addition Sarah’s slides are hosted on Slideshare and also embedded below:

Twitter conversation from Topsy: [View]

Posted in Web2.0 | 4 Comments »

Paper Accepted for OR12: Can LinkedIn and Academia.edu Enhance Access to Open Repositories?

Posted by Brian Kelly on 3 July 2012

I’m pleased to say that a paper by myself and Jenny Delasalle, Academic Services Manager (Research) at the University of Warwick, which asked “Can LinkedIn and Academia.edu Enhance Access to Open Repositories?” has been accepted for the Open Repositories conference, OR 2012.

This paper, which is available from the University of Bath institutional repository, is based on work initially published on this blog.

A blog post entitled “How Researchers Can Use Inbound Linking Strategies to Enhance Access to Their Papers” published on 2 March 2012 described an inbound linking strategy to get to the top listing on Google fast. It occurred to me that my willingness to make use of researcher profiling services such as Academia.edu, ResearcherID, Scopus, ResearchGate, Mendeley, Microsoft Academic Search and Google Scholar Citations may have helped to enhance the visibility of my research papers which are hosted in the University of Bath repository. The blog post went on to describe how I found that I was the author of 15 of the most downloaded papers in the repository from my department.

More recent investigations reveal that, as illustrated, I have the largest number of downloads of any author at the University of Bath! This was recently brought to the attention of the PVC for Research who, in a departmental meeting, informed me that a University of Bath Research Group had discussed these figures and asked me to share the approaches with other researchers at Bath. In response I mentioned that the approaches I’d taken, the evidence I’d gathered, the hypothesis I had proposed for explaining the evidence, possible alternative hypotheses, the limitations of the approaches, the implications of the findings and areas for further work had been submitted to the Open Repositories 2012 conference – and if the paper was accepted the findings would be available to all, and not just researchers at my host institution.

The paper explores other possible reasons for the high visibility of these papers – one possibility worthy of further investigation is the provision of many papers in HTML format and not just PDF and MS Word. However the use of popular researcher profiling services such as LinkedIn and Academia.edu is felt to be worth recommending to researchers in order (a) to ensure that their research papers can be more easily found by their peers on these services and (b) so that links to the papers on their institutional repository can enhance the visibility of the papers to Google as well as enhancing the Google ranking of the repository itself.

Of course it probably needs to be said that that the number of downloads is not necessarily an indicator of quality. However the converse is also true: just because a paper in a repository is seldom viewed does not indicate that it must be a great paper! I am quite happy to promote the use of such approaches since increased numbers of views, especially for the target communities, can help to both embed the ideas given in the papers by practitioners and increase the likelihood that the papers will be cited by other researchers. In my case I’m pleased that, according to Google Scholar Citations, my most cited papers have been cited 87, 67, 54 and 40 times.

My co-author Jenny Delasalle has been investigating use of researcher profiling services at the University of Warwick, her host institution. It was interesting that in Jenny’s research she found that a number of commercial publishers encourage their authors to use services such as LinkedIn and Academia.edu to link to their papers hosted behind the publishers’ paywalls – and yet we are not seeing institutional recognition of the benefits of coordinated use of such services by researchers. Institutional repository managers, research support staff and librarians could be prompting their institutions to make the most of these externally provided services, to enhance the visibility of their researchers’ work in institutional repositories.

Surely it is time for the research community to develop inbound linking strategies for their research work, especially as this can be done so simply. Indeed the OR12 conference organisers have invited us to summarise the ideas in a poster and a one-minute presentation. The ideas have been summarised using the Pixton cartoon generation tool in four strips.


I’m not sure if it will be possible to use PowerPoint during the one-minute madness but I have prepared some slides which are available on Slideshare and embedded below.

NOTE: A one minute summary of this paper was given on the opening day of the OR 12 conference. A video recording of the summary is available on Vimeo and embedded below.

Also note that a slightly modified version of this post was published on the LSE Impact of Social Sciences blog on Thursday 23 August 2012. You can also view the statistics for access to the post via the URL.


Posted in Evidence, Repositories, Web2.0 | 7 Comments »

“Conferences don’t end at the end anymore”: What IWMW 2012 Still Offers

Posted by Brian Kelly on 25 June 2012

IWMW 2012 Is Over: Long Live IWMW 2012!

“Conferences don’t end at the end anymore” tweeted @markpower two days after IWMW 2012 delegates had left Edinburgh and returned home. In one sense this has always been the case: conference organisers will have evaluation forms to analyse and invoices to chase. But the point Mark was making related to the continuing discussion of the ideas raised at an event, the accompanying resources (which, increasingly, may have been created during the event itself) and the support for participants, all of which can help to ensure that an event is not just a collection of individuals who are co-located for a few days but, as I described in a recent post, a sustainable and thriving community of practice. A related point was made recently in a post on “#mLearnCon 2012 Backchannel – Curated Resources” in which David Kelly described how “The backchannel is an excellent resource for learning from a conference or event that you are unable to attend in-person” and went on to add that he finds “collecting and reviewing backchannel resources to be a valuable learning experience …, even when [he is] attending a conference in person. Sharing these collections on this blog has shown that others find value in the collections as well.” But what are the resources from IWMW 2012 which may be of interest to others, where can they be found and what value may they provide?

Key Resources


The slides used by the plenary speakers were uploaded to Slideshare in advance of the talks in order to allow the slides to be embedded in relevant Web pages and enable a remote audience to view them. It should also be added that this allowed participants at the event to view the slides if they were not able to see the main display. The slides have been tagged with the “iwmw12” tag on Slideshare, which enables the collection of slides to be accessed by a search for this string or by browsing slideshows which use this tag. Note that in previous years an event group had been used, but this service was discontinued recently, after Slideshare was bought by LinkedIn.

Creating a collection of slides used at the event enables a Slideshare presentation pack to be created, as illustrated, thus making it easy to access all slides used at the event which have been made available. As can be seen from the IWMW 2012 web site, the presentation pack can be embedded in Web pages. This service is being used because participants at IWMW have frequently asked to be able to access slides, including slides used in parallel sessions which they were not able to attend. Using Slideshare makes it easy to respond to this user need. In addition it helps to raise the profile and visibility of speakers at the event.


The IWMW 2012 Lanyrd page was set up in advance to provide a social directory for participants at the event so they could see who else was attending. The value of this grows as Lanyrd is used across a number of events: from my Lanyrd profile, for example, I can see that I have appeared at events on 12 occasions with my colleague Marieke Guy and on 5 occasions with Paul Boag, Tony Hirst, Andy Powell, Keith Doyle and Mike Nolan. In addition to the social dimension, Lanyrd also provides calendar entries for sessions at events. The dates and times of sessions at IWMW 2012 have been provided, together with links to the main pages on the IWMW 2012 web site, slideshows and links to those reports on the sessions which we are aware of. It should be noted that, as illustrated, Lanyrd has a Wiki-style environment for uploading resources which avoids the single-curator bottleneck. As the person who set up the IWMW 2012 Lanyrd entry, together with the IWMW guide for all IWMW events, I receive an email alert when new entries are added to the coverage, such as:

(In guide IWMW) [22nd Jun 2012 07:52]
@sheilmcn added coverage “Developing Digital Literacies and the role of institutional support services” (type: slides)
to session “B2: Developing Digital Literacies and the Role of Institutional Support Services”

This can help to spot if inappropriate entries are being added.


As described in a post on Streaming of IWMW 2012 Plenary Talks – But Who Pays? we used the service for the live video stream. The videos are currently being processed and will be made available via UKOLN’s Vimeo account shortly. This service will be used to widen access to the plenary talks so that they are available to those who were not present at the event – although, of course, they can also be viewed by people who were at the event and wish to watch the talks again. In addition to the video recordings of the talks we have also recorded a number of short interviews with participants at the event, which will enable their thoughts on the event to be shared with a wider audience.


With so many delegates now having digital cameras and smartphones there are a large number of photographs which have been uploaded to Flickr with the IWMW12 tag which can help to provide a collective memory of the event.
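As a sketch of how such a tagged photo collection might be gathered programmatically, the fragment below builds a request to Flickr’s public flickr.photos.search API method and pulls the photo titles out of its JSON response. Note that the API key shown is a placeholder and the sample response shape is a simplified version of Flickr’s documented envelope, so treat this as an illustration rather than production code:

```python
import json
import urllib.parse

FLICKR_ENDPOINT = "https://api.flickr.com/services/rest/"

def build_tag_search_url(tag, api_key, per_page=50):
    """Build a flickr.photos.search request URL for photos carrying a tag."""
    params = {
        "method": "flickr.photos.search",
        "api_key": api_key,       # placeholder - a real Flickr API key is needed
        "tags": tag,
        "format": "json",
        "nojsoncallback": "1",    # ask for plain JSON, not a JSONP wrapper
        "per_page": str(per_page),
    }
    return FLICKR_ENDPOINT + "?" + urllib.parse.urlencode(params)

def photo_titles(response_text):
    """Extract the photo titles from a flickr.photos.search JSON response."""
    payload = json.loads(response_text)
    return [photo["title"] for photo in payload["photos"]["photo"]]

# Build the request for the event tag; fetching it is left to the caller.
url = build_tag_search_url("iwmw12", api_key="YOUR_KEY")
```

The URL can then be fetched with any HTTP client and the response passed to `photo_titles` to list what delegates have uploaded.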

Having a large number of photographs, rather than a small set of selected ones taken by an official photographer, provides a much broader perspective on the event. It also means that image browsing services, such as Tag Galaxy, are more useful as they have a more diverse range of content to draw on.

The two images show a Tag Galaxy search for photographs on Flickr with the “iwmw12” tag and one of the many photographs taken by Sharon Steeples of the final conclusions session, during which I showed an image of the video stream captured earlier that morning when Dawn Ellis gave a summary of Web developments at the University of Edinburgh, subverting normal conference-style approaches to case studies by telling this as a fairy tale. The video recording of this talk will be particularly worth watching.


As can be seen from the image shown above, the lecture theatre also has a large blackboard. The opportunity to use a blackboard during the final session proved too much of a temptation to ignore – so in the summing up a tweet chalked on the blackboard was displayed, as a reminder that not everyone necessarily has a mobile device they could use for tweeting. However many people did use Twitter during the event. As is widely known, content posted on the Twitter stream becomes unavailable after a short period. There is therefore a need to analyse event tweets shortly after an event – or archive the tweets to allow them to be analysed subsequently.


As can be seen from the image of the Topsy search for #IWMW12 tweets posted over the past 7 days (click for a larger display) there were 666 mentions on 18 June and 574 on 19 June. The most highly tweeted link was to the IWMW 2012 video page, which was mentioned in 43 tweets during the week of 17-24 June 2012. In total Topsy reported that there were 748 tweets during that week, 808 in the month from 24 May to 24 June and an overall total of 846 tweets to date.
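Figures like these are straightforward to reproduce from a local archive of event tweets. The sketch below is a minimal illustration which assumes each archived tweet is held as a (timestamp, text) pair; the timestamp format is an assumption, so adjust it to whatever your archiving tool actually exports:

```python
import re
from collections import Counter
from datetime import datetime

def daily_counts(tweets):
    """Count archived tweets per calendar day.

    `tweets` is a list of (timestamp_string, text) pairs; the timestamp
    format used here is purely illustrative.
    """
    days = Counter()
    for created_at, _text in tweets:
        day = datetime.strptime(created_at, "%Y-%m-%d %H:%M:%S").date()
        days[day] += 1
    return days

def most_tweeted_link(tweets):
    """Return the URL mentioned in the most tweets (None if no links found)."""
    links = Counter()
    for _created_at, text in tweets:
        for url in re.findall(r"https?://\S+", text):
            links[url] += 1
    return links.most_common(1)[0][0] if links else None
```

Running `daily_counts` over a full hashtag archive gives the per-day mention figures, and `most_tweeted_link` the equivalent of the most highly tweeted link report.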

Other Commercial Twitter Analytics Tools

It should be noted that a large number of Twitter analytics tools are available which can be used to analyse how Twitter has been used. The Tweetreach service, for example, reports that tweets containing the #iwmw12 hashtag have reached 7,553 Twitter accounts. However, as is often the case with usage statistics, such figures need to be treated with a pinch of salt.

Beyond Commercial Twitter Analysis Tools

Topsy, Tweetreach and other Twitter analytics tools can provide a useful summary of the use of Twitter hashtags. However in the UK higher education development community we are fortunate to have the expertise of developers such as Martin Hawksey and Tony Hirst, who have a well-established track record in developing valuable Twitter analysis tools and who can continually develop their tools based on the particular needs and interests of the community.

As Martin described in a post entitled IWMW12 Data Hacks, for the IWMW 2012 event he was “collecting an archive of tweets which already gives you the TAGSExplorer view“.

Looking at Martin’s Twitter archive of #iwmw12 tweets, provided by the TAGS v4.0 service, we can see that the top five Twitterers were @iwmwlive (281 tweets), @PlanetClaire (149 tweets), @sharonsteeples (103 tweets), @mariekeguy (100 tweets) and @jessica_hobbs (81 tweets). Since the @iwmwlive Twitter account was managed by Kirsty Pitkin, it seems that the top tweeters at the event were all female: this seems particularly interesting in light of the fact that only about a quarter of the participants were female.

It should also be noted that this tool provides a display of the tweets over time. It can be seen (right) that tweeting peaked at 2pm on Tuesday, 19 June 2012 with 229 tweets.
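Summaries such as the top-tweeter ranking and the peak tweeting hour can be recomputed from any archive of this kind. The sketch below assumes each archived row is a dict with `from_user` and `created_at` fields; these column names and the timestamp format are illustrative assumptions rather than TAGS’s actual layout:

```python
from collections import Counter
from datetime import datetime

def top_tweeters(rows, n=5):
    """Rank senders by number of archived tweets, most prolific first."""
    return Counter(row["from_user"] for row in rows).most_common(n)

def peak_hour(rows):
    """Return the ((day, hour), count) bucket containing the most tweets."""
    buckets = Counter()
    for row in rows:
        ts = datetime.strptime(row["created_at"], "%Y-%m-%d %H:%M:%S")
        buckets[(ts.date(), ts.hour)] += 1
    return buckets.most_common(1)[0]
```

The hourly buckets returned by `peak_hour` are also exactly what is needed to plot the tweets-over-time display.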

Finally I should mention Martin’s most recent development:  a filterable/searchable archive of IWMW12 tweets. As illustrated below, this provides a clickable word cloud of the content of the tweets, together with a search box and browse interface for the tweets.  It was while browsing the tweets that I came across a comment from @JohnGreenway who, during the conclusions, tweeted:

As someone from a commercial background, #iwmw12 has been excellent – hope everyone in HE realises how rare this is in other industries!

Such live tweeting helped in providing useful real time feedback not only to the event organisers but also the plenary speakers.  Other comments received during the event included:

  • Excellent talk by Stephen Emmott – always a reliable IWMW speaker! #iwmw12 from @adriant
  • First time at #iwmw12 and had a brilliant time. Great ideas, great people, great weather, who could ask for more. from @millaraj
  • First time at IWMW: great speakers, interesting topics, fantastic Ceilidh. Many thanks to organisers and presenters. #IWMW12 #new #social from @seajays
  • Great summary by @sloands on how to build accessibility into project management processes using BS8878 #iwmw12 from @chistabel6

Further examples of tools which Martin Hawksey developed at the IWMW 2012 event can be accessed from his Delicious IWMW12 Hacks set of bookmarks.

The Daily newspaper

Finally I should mention the IWMW12 daily newspaper, which had been set up in advance of the event. This automated newspaper consisted of articles based on links which had been tweeted containing the event hashtag.


Conferences have never ended immediately after the final talk has been given – there is always the paperwork to be processed, the evaluation forms to be analysed and feedback to be given to the speakers and local event organisers. What is different nowadays is that event resources and discussions are no longer ‘trapped in space and time’. If an event has value, it should surely have value for those who may not have been able to attend.

It was therefore appropriate that during my opening talk I was able to announce the launch of the JISC-funded Greening Events II: Event Amplification report. As Mark Power put it: “Conferences don’t end at the end anymore” – you need to make plans for managing the resources after the conference is over. We hope the report will be useful for those planning amplified events.

NOTE: Shortly after this post was published a post entitled “But who is going to read 12,000 tweets?!” How researchers can collect and share relevant social media content at conferences was posted on the LSE Impact of Social Sciences blog which echoed the approaches described in this post.

Posted in Events, Evidence, preservation, Twitter, Web2.0 | 3 Comments »

Trends in Slideshare Views for IWMW Events

Posted by Brian Kelly on 31 May 2012

“Why does everybody ask for slides during/after a presentation?”

“Why does everybody ask for slides during/after a presentation? What do you do with them? I’m genuinely curious.” asked @MattMay last night. I use Slideshare for a number of reasons:

  • To enable a remote audience to view slides for a presentation they may be watching on a live video stream, on an audio stream or even simply following the tweets (I provide a slide number on the slides to make it easier for people tweeting to identify the slide being used).
  • To enable the slides to be viewed in conjunction with a video recording of the presentation.
  • To enable my slides to be embedded elsewhere, so that the content can be reused in a blog post or on a web page.
  • To enable the content of the slides to be reused, if it is felt to be useful to others. Note that I provide a Creative Commons licence for the text of my slides, try to provide links for screenshots and give the origin of images which I may have obtained from others.
  • To enable my slides to be viewed easily on a mobile device.
  • To provide a commenting facility for the slides.
  • To enable my slides to be related, via tags, to related slideshows.

It seems that I am not alone in wishing to share my slides in this way. Slideshare, the market leader in this area, was recently acquired by LinkedIn. As described in a TechCrunch article published on 3 May 2012: “LinkedIn has just acquired professional content sharing platform SlideShare for $119 million in cash and stock“.  The article went on to state that: “SlideShare users have uploaded more than nine million presentations, and according to comScore, in March SlideShare had nearly 29 million unique visitors”.

Slideshare is also widely used in higher education. But how is it being used, especially in the context of annual events for those involved in web management and web development activities?

Use of Slideshare at IWMW Events

A year ago today, on 31 May 2011, in a post entitled Evidence of Slideshare’s Impact I reported on the number of views of slides, hosted on Slideshare, for talks which had been given at UKOLN’s IWMW event since 2006. It is timely to update that survey.

The slideshows for each year are available in the following Slideshare event groups: IWMW-2006, IWMW-2007, IWMW2008, IWMW2009 and IWMW2010 (note we changed the naming convention in 2008 once Twitter started to gain in popularity). Note that since not all of the slideshows have been added to the event groups, the analysis also made use of the Slideshare tags: IWMW2006, IWMW2007, IWMW2008, IWMW2009, IWMW10 and IWMW11. It should also be noted that on 20 May Slideshare discontinued event groups, so we will not be able to use this approach for grouping slides used at IWMW 2012.

The numbers of views for each slide are available on Slideshare.  A Google Spreadsheet has been created which summarises the figures. The overall totals are given below.

| Year  | Nos. of views (May 2011) | Nos. of views (May 2012) | Total nos. of slides | Nos. of plenary slides | Nos. of slides from parallel sessions | Notes |
|-------|-------------------------:|-------------------------:|---:|---:|---:|-------|
| 2006  | 48,360  | 51,535  | 11 | 11 | 0  | Slides added retrospectively. Most popular plenary: 12,216 views (May 2012); 10,190 views (May 2011). |
| 2007  | 44,495  | 61,739  | 7  | 5  | 2  | Slides from 2 workshop sessions included. Most popular plenary: 27,814 views; workshop: 12,267 views (May 2012). Plenary: 21,679 views; workshop: 9,838 views (May 2011). |
| 2008  | 94,629  | 109,055 | 17 | 8  | 9  | Workshop facilitators encouraged to use Slideshare. Most popular plenary: 33,656 views; workshop: 18,369 views (May 2012). Plenary: 26,005 views; workshop: 22,525 views (May 2011). |
| 2009  | 38,877  | 46,238  | 29 | 10 | 19 | Most popular plenary: 2,489 views; barcamp: 2,839 views (May 2012). Plenary: 3,313 views; barcamp: 4,023 views (May 2011). |
| 2010  | 11,833  | 18,758  | 18 | 10 | 8  | Most popular plenary: 1,896 views; workshop: 1,601 views (May 2012). Plenary: 2,816 views; workshop: 2,599 views (May 2011). |
| 2011  | –       | 6,393   | 11 | 5  | 6  | Most popular plenary: 1,119 views; workshop: 944 views (May 2012). |
| TOTAL | 238,259 | 297,741 | 88 | 44 | 44 | Growth: 2011 to 2012 = 25% |

Note that these figures were mostly collected on 25 May 2012, but a small number of changes were made on 30 May. Also note that two different slideshows used in workshop sessions at IWMW 2008 had the largest numbers of views in May 2011 and May 2012.
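The “Growth: 2011 to 2012 = 25%” figure in the totals row is simply the year-on-year change in overall views, which can be checked with a couple of lines:

```python
views_2011 = 238_259   # total views recorded in May 2011
views_2012 = 297_741   # total views recorded in May 2012

# Relative growth over the year, as a fraction.
growth = (views_2012 - views_2011) / views_2011
print(f"{growth:.0%}")  # → 25%
```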


A paper on “Who are we talking about?: the validity of online metrics for commenting on science [v0]” presented at the altmetrics11: Tracking Scholarly Impact on the Social Web workshop described how:

… we are not searching in online bibliographic databases for evidence of publications but that we are isolating the existence of online activity on the social web including: blogs; micro-blogging (Twitter); activity on social platforms – LinkedIn, and Mendeley; and sharing of presentations through Slideshare. 

The potential importance of Slideshare metrics was also highlighted yesterday in an article entitled Scientists: your number is up published in Nature:

Herbert Van de Sompel at the Research Library of the Los Alamos National Laboratory in New Mexico, who is a long-standing proponent of author identifiers, hopes that the [ORCID] system might be used to generate alternative metrics by linking authors to their outputs in “less traditional venues of scholarly communication, such as tweets, blog posts, presentations on Slideshare and videos on SciTV”.

To illustrate the possible benefits of using Slideshare to host a slideshow, consider Kristen Fisher Ratan’s slides on “Metrics: The New Black?“. From this I can view Kristen’s other slideshows and discover that she is the Product Director at PLoS (Public Library of Science) and that her Twitter ID is @kristenratan. I can also find related slides hosted on Slideshare with the tags alms, metrics, publishing and altmetrics. This can be useful and I haven’t even looked at the slides yet! Slide 18 (illustrated) states that “Powerpoint download feature inadvertently tracked sub-article usage” which suggests that links to a PowerPoint presentation from a paper might provide usage information about the paper which might be difficult to find in other ways. I’m pleased that this slideshow has been uploaded to Slideshare!

But if Slideshare has a role to play in a portfolio of online metrics which may help to provide a better understanding of the impact of scientific research, what can be learnt from these metrics taken over a period of six years? Although the IWMW event is aimed at practitioners rather than researchers, it did occur to me that the experiences gained in collating these statistics might be of interest to those who are considering use of Slideshare statistics in an alt.metrics context. Some thoughts that occurred to me:

  • Fragmented statistics: A number of speakers uploaded slides to their own Slideshare account. In cases where this was done after the slides had been uploaded to our main IWMW Slideshare account, we did not always know about the alternative location, which could result in difficulties in aggregating the usage statistics.
  • Reuse of slides at other events: On a couple of occasions, slides used for presentations at an IWMW event were subsequently used at another event.

However there are clearly more significant things to consider when looking at Slideshare metrics: namely, what is it that is being measured? In this post I will not attempt to answer that question. Instead I will simply conclude by providing a simple answer to Matt May’s question (“Why does everybody ask for slides during/after a presentation? What do you do with them? I’m genuinely curious.”) by pointing out what the evidence tells us: “They ask for them because they wish to view them. Why, therefore, would you not provide access to the slides?“ Even if the slides don’t provide significant textual content, they may be useful by letting others see how you have designed your slides and structured your ideas.

As I concluded in last year’s post:

Martin Weller made [the] point in his post on The Slideshare Lessons when he said: “by sharing good Slideshare presentations you are sharing ideas, and people will react to these. It can be in the form of comments on your blog post which features the presentation, on the Slideshare site itself, or through other social media such as twitter“.  Why, I wonder, are people still hosting their slides in the silo of an institutional Web site when the slides can easily be made available as a social object?

Or to put it another way, why would you not publish your slides on Slideshare?

Posted in Events, Evidence, Web2.0 | Tagged: | 2 Comments »

Getting a Kik Messenger Account – and Assessing Risks and Benefits

Posted by Brian Kelly on 3 May 2012


I recently heard about the Kik Messenger app, an instant messaging application for mobile devices for which, according to Wikipedia, it “took only 15 days for Kik Messenger to reach one million user registrations“. Kik Messenger has been described as a BBM killer – and as someone who has never owned a Blackberry phone I was interested in evaluating a cross-platform application which appears to be a competitor to the Blackberry’s key selling point: instant messaging.

I have now installed the app on my Android phone and iPod Touch. I’m familiar with the benefits which messaging applications can provide over email through over five years of Twitter use and am interested in exploring the potential of an app which can be used with non-Twitter users.

However in order to use such communication tools, you need to have people to communicate with. At present I only know the Kik username of one person. My username is ukwebfocus and I’d be interested in seeing how this app might be used to support my professional activities. Perhaps a tool such as Kik Messenger could have a role to play at an event, such as UKOLN’s 3-day IWMW 2012 event, in which it might not be appropriate to use Twitter for, say, administrative queries.

When making use of such new services I use three guiding principles to assist the decision-making process which were described in a paper on “Empowering Users and Institutions: A Risks and Opportunities Framework for Exploiting the Social Web“:

  1. Understanding the reasons why a service will be used.
  2. Understanding possible risks in using the service.
  3. Identification of ways of minimising such risks.

A summary of how these principles have been applied in installing Kik Messenger is given below:

Reasons for using Kik Messenger
The reasons include:

    • A desire to evaluate instant messaging tools to complement use of Twitter.
    • A need to evaluate tools which can be used to support communication needs at an event.
    • A wish to be an early adopter in use of a social networking / communications tool in order to claim a meaningful identifier and to facilitate the development of a community.

Risks in using Kik Messenger

The risks in making use of the tool include:

    • The tool may fail to reach a critical mass.
    • The service may not be sustainable: the terms and conditions may change, or the service itself, together with the accompanying network and data, may be lost.
    • Use of the tool may result in a failure to make use of richer alternatives.
    • The tool may not address a significant need.
    • The benefits provided by the tool may not be sufficient to motivate others to use it.

Approaches for minimising risks in using Kik Messenger

The approaches being taken to minimising the risks include:

    • Raising awareness of the tool across my network.
    • Acceptance of possible loss of content and community (as is the case with use of Twitter and text messaging on my mobile phone).
    • Evaluation of use of the tool in different contexts.
    • A willingness to use the tool in a small-scale context if it fails to gain significant market penetration.
    • A willingness to accept the time lost in downloading and learning to use the tool if the service itself is not sustainable.

On his blog Doug Belshaw has documented his “3 principles for a more Open approach” which appear to have a similar goal in setting out principles to guide the selection of new services:

“I’ve come up three principles to guide me:

    1. I will use free and Open Source software wherever possible. (I’m after the sustainable part of OSS, not the ‘free’ part)
    2. If this is not possible then I will look for services which have a paid-for ‘full-fat’ offering.
    3. I will only use proprietary services and platforms without a paid-for option if not doing so would have a significant effect on my ability to connect with other people.”

It is interesting to note the differences between our two approaches. Doug, it seems, very much focusses on the service itself (it needs to be available as open source software) and a particular business model (a subscription service, rather than one which is funded through advertising, for example) although, like me, he provides an escape clause which acknowledges that there are risks in failing to use a service if doing so would mean he was unable to fulfil particular requirements. My approach, on the other hand, focusses on the outputs of the service and takes a disinterested view of the development approaches.

The principles which Doug mentions do, of course, have validity. However for me Open Source Software is simply software which should be evaluated alongside proprietary software, with an open source software licence being no guarantee of the value of the software or its sustainability. I agree with Doug on the value of services having a variety of business models for their sustainability. However although the availability of open source software which users can install on their own server may help Doug, who runs his own domain, and others who have the technical expertise, time and motivation to be system administrators, for many people this will not be the case. It should also be added that the availability of open source software is not necessarily a guarantee that one’s host institution, which has traditionally provided the IT infrastructure, will install the software. Indeed, even if software, including social software, is installed within one’s host institution, there is no guarantee that the service, the data or the community will be available if one leaves the institution. As Sarah Lewthwaite reminded research students who were about to finish their PhD in a post entitled University Email: A PhD Exit Strategy:

Your email account has been an academically sanctioned identity for three or more years. And, unless you have a particularly benevolent institution that guarantees email for life, your account is about to end. Full stop. You may receive a letter asking you to ‘forward all important emails to an external account’ before your account is sedated (suspended) and put out of its misery (erased). If, like me, you have come to rely on your university email, you need an exit strategy, fast.

Sarah went on to reiterate this point:

“Now, two essential factors come into play. They’re so important; so you can quote me.

    1. Your email is not yours. It belongs to your university.
    2. Your university email address constitutes and validates your academic identity. This signifier is about to expire.”

If, as is the case for me, you do not wish to become a system administrator, you should understand the alternative sustainability options. Many people will be happy to make use of free services for which advertising and other uses of activity data help to fund the service, whereas others, such as Doug, will be willing to pay a fee for such advertisements to be removed.

It will be interesting to see the approaches to sustainability which users will select. There will be personal factors which come into play – as someone who is happy to pay my TV licence fee and accept that when I watch ITV for ‘free’ “I’m the product, not the user”, I have chosen not to subscribe to Sky because of my antipathy towards Murdoch (although I have watched football on Sky in pubs).

Revisiting my initial comments about the Kik Messenger service, I should probably add that there would also be costs and risks in using an open alternative (perhaps Jabber/XMPP). But what if a proprietary approach, though not a platform-specific one such as Blackberry’s BBM, is needed in order to establish that there is a real user need and to establish appropriate technical requirements before the open alternatives are developed? Karl Marx suggested that there were a number of evolutionary stages in society’s development (the slave society, feudalism and capitalism) which had to be passed through before a more equitable society was reached. The evidence of Twitter’s success and of social networks such as Facebook hints at the difficulties of achieving the seemingly more equitable online environment which, as Doug describes in a post on Why we need open, distributed social networks, supporters of services such as Diaspora claim they will provide. But can we build Openness in one country, or might Blackberry BBM users benefit from moving to a more open cross-platform solution which has an API, albeit a solution which is not open source and for which, according to the FAQ, it does not seem possible to pay for an account?


Posted in openness, Web2.0 | 3 Comments »

Have You Got Your Free Google Drive, Skydrive & Dropbox Accounts?

Posted by Brian Kelly on 24 April 2012

A few hours ago I visited Microsoft’s Skydrive Web site in order to see if I was entitled to the free upgrade from 7Gb to 25Gb of storage. As an existing Skydrive user it seems that I was, so I’m pleased to have additional storage space which I can use for transferring files between my mobile devices (iPod Touch and Android phone) and desktop computers. As I described in a recent post on Paper Accepted for #W4A2012 Conference, Skydrive proved particularly useful for working with my co-authors on the final versions of a peer-reviewed paper which was produced using MS Word.

Whilst installing the Skydrive tool on my PC I noticed a tweet which announced that Google Drive had been released. Google Drive, like Skydrive and Dropbox (the utility I normally use for shipping files between various devices), provides cloud storage and, as described in a BBC News article, offers up to 16TB of storage, with 5Gb for free – not as much as Microsoft’s offering but, to be fair, I’m getting that deal as an early adopter.

Shortly after the initial tweet I encountered some scepticism, with a tweet from @sydlawrence saying:

Holy crap. Google owns everything on google drive. Tell me a business that will use it… … 

which linked to the following screenshot of the Google Drive terms and conditions:

There is clearly a discrepancy between the tweet and the terms and conditions: how is “Google owns everything on google drive” reconciled with “You retain ownership of any intellectual property that you hold in that content. In short, what belongs to you stays yours“?

But if we ignore such hyperbole, what should we make of the terms and conditions page which states:

When you upload or otherwise submit content to our Services, you give Google (and those we work with) a worldwide license to use, host, store, reproduce, modify, create derivative works (such as those resulting from translations, adaptations or other changes we make so that your content works better with our Services), communicate, publish, publicly perform, publicly display and distribute such content.

Although it was truncated in the screenshot I should add that the terms and conditions went on to say that:

 The rights you grant in this license are for the limited purpose of operating, promoting, and improving our Services, and to develop new ones. 

Indeed, as I asked on Twitter in a different context, though one related to terms and conditions for social media services, what should we make of terms and conditions which state:

We may update these Terms (including our Privacy Statement) from time to time. Changes will have immediate effect from the date of posting on this Site and you should therefore review these Terms regularly. Your continued use of this Site after changes have been made will be taken to indicate that you accept that you are bound by the updated Terms.

My view is that I will use these three Cloud storage services for both personal and work-related activities. I’m pleased that Google have been open about the fact that they may modify my content as this will include compressing my files – a Cloud storage service which did not do this would be guilty of using energy unnecessarily: something which should not be done in light of global warming concerns.

I’m also happy if Google decide to explore ways in which they can monetise my attention data, just as Facebook do when they observe my interests in beer and sport and present me with a personalised ad.

But what if they use the terms and conditions to take a copy of my content and sell it on? I don’t think this is likely, but I do accept that it is a risk. I will therefore assess such risks when I make use of the service – and would advise others to take a similar approach if they store content on the service. But I’m also aware of the opportunity costs if I don’t use such services.

So I’ll use Google Drive, once I’ve been given access to the service. What about you?


Posted in Legal, Web2.0 | 15 Comments »

Risk Register for Blogs

Posted by Brian Kelly on 17 February 2012


Bloggers’ Squabble Involves Lawyers

An article published in the Guardian the week before Christmas announced “Hacked climate emails: police seize computers at West Yorkshire home” and went on to describe how “Police officers investigating the theft of thousands of private emails between climate scientists from a University of East Anglia server in 2009 have seized computer equipment belonging to a web content editor based at the University of Leeds“. It seems that “detectives from Norfolk Constabulary entered the home of Roger Tattersall, who writes a climate sceptic blog under the pseudonym TallBloke, and took away two laptops and a broadband router“.

But rather than commenting on a climate denier’s blog, of more interest was Tattersall’s post “Greg Laden: Libellous article” which describes how “Blogger Greg Laden has libelled me [Tattersall] in a scurrilous article on his blog“. In brief, Greg Laden appears to have accused Roger Tattersall of illegal activities. However being a climate denier is not illegal, and Laden seems to have opened himself up to accusations of libel. He seems to have realised this and has updated his post so that it now begins:

I’ve decided to update this blog entry (20 Dec 2011) because it occurs to me that certain things could be misinterpreted, in no small part because of the common language that separates us across various national borders, and differences in the way debate and concepts of free speech operate in different lands.

I want to make it clear that I do not think that the blogger “TallBloke” a.k.a. Roger Tattersall has broken British law

I hope that will be the end of that matter, but it does highlight some additional legal risks related to publishing a blog, beyond the issue of the cookie legislation which was discussed in a recent post. This incident highlights possible reputational risks for an organisation which employs a blogger (even if, as in this case, the blog is published anonymously and is not related to work activities) and risks that impassioned debate may lead to libellous comments being posted.

A Risk Register For Blogs

There is a danger that risk-averse institutions may use such incidents as an opportunity to restrict or even ban blogs provided by their staff. In order to minimise such risks it may be advantageous to take a lead in providing a risk register which documents possible risks and the ways in which they may be minimised.

I am in the process of producing a risk register and the draft is given below. I welcome feedback on the risks listed below and the approaches described for minimising them. In addition I would welcome suggestions for additional risks which I may have failed to address – and suggestions for how such unforeseen risks can be minimised.

Legal Risks

  • Infringement of ‘cookie’ legislation. Description: Since the service uses cookies to measure Web site usage, this may be regarded as infringing the ‘cookie’ legislation enforced by the ICO. Risk minimisation: The ICO’s guidance suggests that, due to the technical difficulties in requiring users to opt in, it is unlikely to take further action provided appropriate measures to address privacy concerns are being taken. In the case of this blog, a sidebar widget provides information on cookie usage.
  • Publication of copyrighted materials. Description: Blog posts may contain copyrighted materials owned by others; images, such as screen shots, may be included without formal permission being granted. Risk minimisation: Where possible, links will be provided to the source. If copyright owners feel that use of their materials is inappropriate, the content will be removed, normally within a period of a week.
  • Plagiarism. Description: Blog posts may plagiarise content published by others. Risk minimisation: Where possible, links will be provided to content published by others and quoted content will be clearly identified.
  • Publication of inappropriate comments. Description: Inappropriate blog comments may be published. Risk minimisation: The policy for this blog states that inappropriate comments will be deleted.

Sustainability Risks

  • Loss of content due to changes in policies. Description: The host service may change its policies on content which can be hosted. Alternatively, since the service is based in the US, the US Government may force content published on this blog to be removed. Risk minimisation: Since this blog has a technical focus, it is felt unlikely that this will happen.
  • Loss of blog due to the service being unsustainable. Description: The service may go out of business or change its terms and conditions so that the blog cannot continue to be hosted on the service. Risk minimisation: It is felt unlikely that the service will go out of business in the short term. If the service does go out of business, or changes its terms and conditions, it is felt that due notice will be given, which will allow content to be exported and the blog hosted elsewhere.

Reputational Risks

  • Damage to the blog author’s reputation due to inappropriate posts being published. Description: The author’s professional reputation will be undermined if inappropriate posts are published. Risk minimisation: The blog’s policy states that “the blog will provide an opportunity for me to ‘think out loud’: i.e. describe speculative ideas, thoughts which may occur to me“. If such thoughts are felt to be inappropriate, or if incorrect or inappropriate content is published, an apology will be given.
  • Damage to the blog author’s host institution or funder due to inappropriate posts being published. Description: The reputation of the author’s host institution or funder will be undermined if inappropriate posts are published. Risk minimisation: The author will seek to ensure that the conversational style of the blog does not undermine the position of the author’s host institution or funder. Occasional surveys will be undertaken to ensure that the content provided on the blog is felt to be relevant for the blog’s target audience.


Posted in Blog, Legal, Web2.0 | 3 Comments »

The Failure of Citizendium

Posted by Brian Kelly on 20 December 2011

Remembering Citizendium

A few days ago I read Steve Wheeler’s post on Content as Curriculum?, having been alerted to it by Larry Sanger’s post on An example of educational anti-intellectualism, to which Steve provided a riposte in which he argued the need to Play the ball, not the man.

From the blog posts I learnt that Larry Sanger is a co-founder of Wikipedia and, as described on his blog is the “‘Founding Editor-in-Chief’ of the Citizendium, the Citizens’ Compendium: a wiki encyclopedia project that is expert-guided, public participatory, and real-names-only”.

I have to admit that I had forgotten about Citizendium, but the little spat caused me to revisit the Web site. While searching I came across a discussion entitled Why did Citizendium fail? and, yes, it does seem that this “endeavor to achieve the highest standards of writing, reliability, and comprehensiveness through a unique collaboration between Authors and Editors” has failed. But although we often talk about success criteria, it can be more difficult to identify failures. How, then, can we describe Citizendium as a failure?

Experiences With Citizendium

A few years ago I signed up for a Citizendium account. In order to register you need to provide your real name and include “a CV or resume … as well as some links to Web material that tends to support the claims made in the CV, such as conference proceedings, or a departmental home page. Both of these additional requirements may be fulfilled by a CV that is hosted on an official work Web page“.

I registered as I felt that, if Citizendium became successful, being an author could provide a valuable dissemination channel for those areas in which I have expertise. In particular I had an interest in helping to manage the Web accessibility entry in Citizendium. However I found that I did not have the time – or inclination – to edit this article. Looking at the article today it seems that the “page was last modified 09:25, 10 January 2008” and “has been accessed 221 times“. It is perhaps good news that the page has been viewed so little, as it is not only very out-of-date but also poorly written. It also seems that no content has been added to the Talk, Related Articles, Bibliography or External Links pages.

In comparison we can find that the Web Accessibility entry in Wikipedia has been edited 575 times by 277 users. There were also 10,911 views in November 2011.


There may be those who would argue that Citizendium isn’t a failure, but has a valuable role to play in a particular niche which is not being addressed by Wikipedia. But how can this argument be made when Citizendium’s aim to “endeavor to achieve the highest standards of writing, reliability, and comprehensiveness through a unique collaboration between Authors and Editors” results in entries such as this one on Silverlight vs Flash:

With the rocket development of Internet, the techniques used for building web pages is improving all the time, which not only brings people more information but new experience of surfing on the Internet. Many techniques have been applied to enrich the web page these years, from totally the plaintext in early 90’s, first to web page with pictures and then that with embedded sounds. Later, Sun Microsystems proposed Java Applet, which was popular for not long time until being conquered by Adobe Flash.

Back in March 2008 the Citizendium FAQ asked the question:

How can you possibly succeed? Wikipedia is an enormous community. How can you go head-to-head with Wikipedia, now a veritable goliath?

The solid interest and growth of our project demonstrates that there are many people who love the vibrancy and basic concept of Wikipedia, but who believe it needs to be governed under more sensible rules, and with a special place for experts. We hope they will join the Citizendium effort. We obviously have a long way to go, but we just started. Give us a few years; Wikipedia has had a rather large head start.

Three and a half years later it seems clear that the online encyclopedia “governed under more sensible rules, and with a special place for experts“ has been unable to compete with the “vibrancy and basic concept of Wikipedia“.

I’m pleased that Steve Wheeler’s link to Larry Sanger’s blog post helped me to remember my initial curiosity regarding the more managed approach to gathering experts’ knowledge provided by Citizendium and demonstrated the failings in such an approach. Let’s continue making Wikipedia even better is my call for 2012.

Posted in General, Wikipedia, Wikis | Tagged: | 8 Comments »

Signals From Sheffield

Posted by Brian Kelly on 7 November 2011

What are IT Service departments doing these days? Frustrated users sometimes regard IT Services as seemingly having responsibilities for developing barriers to the use of IT, with comedy sketches such as “Computer Says No” from Little Britain and Channel 4’s The IT Crowd illustrating that such views are commonplace. A few months ago, as described on the Communities and Local Government Web site, Local Government Minister Grant Shapps and Decentralisation Minister Greg Clark “called on a new generation of councillors to shake up their town halls in the interests of the people they serve and help banish the ‘computer says no’ culture that exists in some councils“.

Do University IT Service departments also need shaking up? A few days ago I came across a post on Social Media in CiCS on Chris Sexton’s From a Distance blog in which she described the use of social media within CiCS, the Corporate Information and Computing Services department. Chris, the CiCS director, explained:

Blogging is something some individual members of the department do. Some, like me, use commercial products like Blogger or WordPress and have them hosted off-site, some use our in-house blogging software, uSpace, based on a Jive product. Some blog regularly, some less often. What we haven’t had before is a departmental blog, so we’ve changed what used to be a static news page on our web pages into a blog. So much better – it’s easy to update, we can include pictures, links and videos, and, more importantly we can collect feedback in the comments field.

and went on to add that the department has “been using Twitter in the department for a few years” and has “finally taken the plunge and set up a [Facebook] page“.

In addition to running an IT Services department for a large Russell Group University, Chris has been a prolific blogger since she set up the blog in October 2007, having posted 62 posts in the first three months of the blog, 208 posts in 2008, 183 in 2009, 162 in 2010 and 147 to date this year. It seems to me that Chris’s blog will provide a good insight into IT departments in a large University, so I hope that the content of the blog, which is hosted on Google’s Blogspot service, will be preserved. But it also seems to me that the sector would benefit if such openness and transparency were the norm not only across IT Service departments but also across other service departments, including the Library. So whilst Chris’s recent post demonstrates a commitment to the use of social media to support the user community at Sheffield University, and a willingness to exploit both in-house and cloud services, perhaps the most important signal being sent from Sheffield University is the willingness to be open and to invite comments and feedback on development plans. I’d be interested in hearing if there are other IT Service departments which have taken a similar approach.

Posted in Web2.0 | 3 Comments »

“I Predict A Riot”: Thoughts on Collective Intelligence

Posted by Brian Kelly on 29 September 2011

Technology Outlook: UK Higher Education

The New Media Consortium’s “Technology Outlook: UK Higher Education” report, which was commissioned by UKOLN and CETIS, explores the impact of emerging technologies on teaching, learning, research or information management in UK tertiary education over the next five years. As described in a recent post on What’s On The Technology Horizon? Implications for Librarians, I’ll be summarising the technologies featured in the report which I feel will have particular relevance to those working in Libraries at the forthcoming Internet Librarian International (ILI 2011) conference.

The report highlights ‘Collective Intelligence‘ as one emerging technology which is predicted to have a time-to-adoption horizon of 4-5 years. But what exactly is ‘collective intelligence’ and what impact might it have on those working in libraries?

Collective intelligence is defined in Wikipedia as “a shared or group intelligence that emerges from the collaboration and competition of many individuals and appears in consensus decision making in bacteria, animals, humans and computer networks“. The article uses a social bookmarking service as an example of collective intelligence:

Recent research using data from a social bookmarking website has shown that collaborative tagging systems exhibit a form of complex systems (or self-organizing) dynamics. Although there is no central controlled vocabulary to constrain the actions of individual users, the distributions of tags that describe different resources have been shown to converge over time to stable power law distributions. Once such stable distributions form, examining the correlations between different tags can be used to construct simple folksonomy graphs, which can be efficiently partitioned to obtain a form of community or shared vocabularies. Such vocabularies can be seen as a form of collective intelligence, emerging from the decentralised actions of a community of users.
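The folksonomy graphs mentioned in the quote can be approximated with a simple tag co-occurrence count. The sketch below is a minimal illustration using made-up bookmark data (not drawn from any real service): it tallies how often pairs of tags appear together across bookmarks, and the heavily-weighted pairs would form the edges of such a graph:

```python
from collections import Counter
from itertools import combinations

# Each bookmark is the set of tags one user assigned to one resource
# (illustrative data only).
bookmarks = [
    {"web", "accessibility", "standards"},
    {"web", "accessibility", "wai"},
    {"web", "standards", "html"},
    {"accessibility", "wai"},
]

# Count individual tag frequency and pairwise co-occurrence.
tag_counts = Counter(tag for b in bookmarks for tag in b)
cooccurrence = Counter()
for b in bookmarks:
    for pair in combinations(sorted(b), 2):
        cooccurrence[pair] += 1

print(tag_counts.most_common(3))
print(cooccurrence.most_common(2))
```

With realistic volumes of data, the distribution of values in `tag_counts` is what the quoted research observes converging to a power law.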

Other examples of the relevance of social media in providing collective intelligence include:

Predicting flu epidemics by observing search terms in Google: Back in 2008 an article published in the Guardian entitled “Google predicts spread of flu using huge search data” described how “Google Flu Trends takes the general search tracking technology pioneered by Google Trends and applies it specifically to influenza. The firm’s engineers claim to have devised a way of analysing millions of individual searches related to the disease that in tests proved to correlate closely with the actual incidence of illness“. A Google Scholar search for “predicting flu epidemics using google” …

Predicting earthquakes using Twitter:  An article entitled “Twitter can predict earthquakes, typhoons and rainbows too..” described an  “academic paper introduced by Takeshi Sakaki, Makoto Okazaki and Yutaka Matsuo from the University of Tokyo [which] investigates the real-time interaction of events such as earthquakes in Twitter and proposes an algorithm to monitor tweets and to detect a target event“.

Predicting social unrest in the Middle East using social media: An article on “The Social Media Revolution” described how “the CIA has been criticized for not being ‘followers’ on Facebook and Twitter and therefore failing to capitalize on the information those sites could have provided in predicting the recent turmoil“.

“We Predict A Riot”

These examples illustrate how social media can be used for predictions. But predictions usually aren’t provided in isolation: rather, predictions are used to identify appropriate actions which may need to be taken. In the first example we might expect doctors to ensure that they stock up on medical supplies and, if a particularly severe flu epidemic is predicted, the NHS may decide to fund a marketing campaign aimed at the sectors of the population most at risk. The second example could also result in government action, such as mobilising emergency forces, which could help to save lives. The third example, however, could result in less benign interventions.

The Kaiser Chiefs sang “I predict a riot” but, as suggested in a blog post which hosted the accompanying cartoon, it might now be the crowds which are predicting upheavals, whether geo-physical or social.

This move to collective intelligence might seem to challenge notions of centralisation and authority and thus, returning to the talk I’ll be giving at the ILI 2011 conference, be challenging to the traditional roles of libraries.

But these examples also highlight both the potential benefits and risks associated with trends which may be predicted through large-scale use of social media. As has been highlighted in recent posts about privacy concerns for Facebook users, such issues are very relevant for mainstream users of social media today. (And yesterday’s announcement about the new range of Amazon Kindle devices and the Amazon Silk browser has raised additional privacy concerns.)

Facebook’s analysis of users’ attention data is clearly financially beneficial to Facebook in providing targeted advertising (which may also be beneficial for the end user) and of concern to users when information thought to be private is made available to others in unexpected ways (which tends to be the current focus of user education on the risks of using social media). But rather than the obvious embarrassing photos which people may be worried about, might it be the less obvious activities which have the more significant impact in the future?

If I update my status saying I’ll be celebrating with a few pints of Deuchars IPA if England beat Scotland in the Rugby World Cup game on Saturday (while I am in Glasgow), this might be used to suggest that I, and others in my demographic, like real ale, and used for targeting adverts (which might help me discover a Scottish real ale with which I am unfamiliar).

If I update my status saying I’m getting a sore throat this might help in providing signals of the flu (and could be more significant in terms of instigating change than my wasted vote in a General Election in Bath).

And if I update my status if I notice possibly illegal activities taking place, am I being helpful to society or could my status update be used by the authorities to justify unnecessary actions?  And could a provocative status update (which might be part of a large number of updates which cause people to riot) be therefore treated as incitement?  Has the future described in Minority Report (which addresses the theme of  “the role of preventative government in protecting its citizenry“) arrived?

Lots of questions, I know. But I also feel that information professionals should have an important role in engaging with the debate. I should also add, as suggested in the post on “The Facebook Chart That Freaks Google Out” and its accompanying chart, that Facebook’s popularity means it is a significant harvester of activity data, since people spend their time on the service and will often have provided their profile information. But if Facebook users migrated overnight to, say, Diaspora, would that mean that the benefits of analysing activity data and content updates could be lost, including the positive benefits? Or might it mean that although users will own their own data they, understandably, won’t be aware of the possible misuses which could be made of their content updates?

There is a need to address the concerns raised by Facebook’s dominance and their cavalier approaches to privacy – but there’s also a need to look at the wider issues and not assume that any service which provides an alternative to Facebook will necessarily provide benefits across all areas.

Posted in Web2.0 | 3 Comments »

Sharing Job Information More Effectively

Posted by Brian Kelly on 19 September 2011

Vacancies in Institutional Web Teams

Orla Weir, head of Digital Strategy at the University of Salford, recently asked me for suggestions on places to publish information about a number of vacancies which are available in the new central digital team at the University of Bath. My initial suggestion was to use the website-info-mgt and the web-support JISCMail lists and, as can be seen from the list archives, the message, which is summarised below, has been sent to the 564 members of the website-info-mgt list and the 588 members of the web-support list:

Digital Communications Officer
Grade 7 – £29,972 – £35,788
The Central Digital Team is part of the Communications Directorate and leads and manages the digital engagement, visual standards and digital presence of the University. As a small team, it is responsible for the creation and implementation of Digital Strategy including governance, platforms, innovation, and best practice and is essentially the glue that sits across many of the university engagement tools and services. The remit of the team includes: CMS, Web, Social, eLearning (promotion and visualisation), eCRM, Mobile, SEO and URL strategy.
The Digital Communications Officer will be a critical part of this small team and will be responsible for managing the digital presence and engagement model for the University. This includes internal customer relationships, project leadership, implementation, content delivery and evaluation. The role is a hybrid role which requires both technical and communications skills. It also requires an appetite and enthusiasm for all things digital and a desire to be part of creating an exemplar digital engagement presence.
The purpose of the role is to implement and manage the University’s web presence and digital engagement including content and platform implementation, design, communications and measurement. The scope of the role includes the appropriate use of all digital platforms using truly multichannel integration consistent with the University Digital Strategy.
Closing Date – 21/09/2011

But in an era of social media and syndicated content, might there be additional ways of making such information available to a wider range of potential applicants? And might we not expect those working in institutional Web teams within higher education to be pro-active in looking at communication channels which may provide additional benefits, such as being able to reach out to potential applicants who may not be members of these two mailing lists?

Careers 2.0: the Stack Overflow Careers Site

Coincidentally a recent tweet from @psychemedia (Tony Hirst) asked:

Wondering if any HEIs ever post developer job ads to stack overflow careers site? #devcsi @briankelly

I had a look at this site and found that, although a search for vacancies containing the string “Web” shows that there are currently 404 (!) positions available, a search for “University” in the UK results in only five hits, as illustrated.

However, as described in the FAQ, the Careers 2.0 service is intended for employers who are looking for programmers, and it suggests the service can have a role to play as “part of the process as the first technical interview. Instead of scheduling a screening call with a member of your technical staff, just have your staff review the candidate’s profile“. The service, which provides information on 38,127 developers, is aimed primarily at developers who have contributed to the Stack Overflow service – although there is a job listing service which employers may be interested in using as a means of reaching out to developers who are users of Stack Overflow.

IWTB, the Institutional Web Team Blog Aggregator

UKOLN’s IWTB (Institutional Web Team blog aggregator) service was officially launched at the IWMW 2011 event held at the University of Reading on 26-27 July. The service aggregates blogs provided by those working in (or with close affiliations to) institutional Web teams. The service can be used to help identify what one’s peers across the UK’s institutional Web teams are doing and what they are communicating to their users.

A search for ‘vacancies’ shows that several vacancies were advertised in July on the University of Bath Web services blog, with the University of Essex also advertising vacancies in their team in July.

As with many social Web services, the blog aggregator will become more effective as the number of users grows. We will shortly be promoting use of this service more actively. For now I will give a reminder that an online form for submitting the URL for an institutional Web team blog can be accessed from the IWMW home page.

Harvesting RSS Feeds

In a post entitled Autodiscoverable Feeds and UK HEIs (Again…) Tony Hirst revisited the provision of auto-discoverable RSS feeds on institutional Web sites. Tony’s post listed a number of areas in which RSS feeds can add value which  included:

jobs: if every UK HEI published a jobs/vacancies RSS feed, it would be trivial to build an aggregator and let people roll their own versions of …

Tony’s post also mentioned that he had developed a Scraperwiki tool to find auto-discoverable RSS feeds on University home pages, which builds on his previous work in this area using a Yahoo Pipe. The Scraperwiki tool now analyses the RSS feeds, and the output from the tool provides listings of news feeds, event feeds, research information feeds and Twitter feeds, as well as jobs feeds. A summary of the UK University home pages which provide autodiscoverable job feeds is given below:

Feed Title
  • Jobs at Bath
  • Great careers start here…
  • Great careers start here…
  • Great careers start here…
  • Great careers start here…
  • Edge Hill University Job Vacancies – latest vacancies
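Auto-discoverable feeds of this kind are declared via `<link rel="alternate">` elements in a page’s `<head>`. A minimal sketch of the kind of check such a tool performs – using only the Python standard library, and run here against an illustrative HTML snippet rather than a live University home page – might look like this:

```python
from html.parser import HTMLParser

FEED_TYPES = {"application/rss+xml", "application/atom+xml"}

class FeedFinder(HTMLParser):
    """Collect (title, href) pairs for auto-discoverable feeds."""
    def __init__(self):
        super().__init__()
        self.feeds = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if (tag == "link" and a.get("rel") == "alternate"
                and a.get("type") in FEED_TYPES):
            self.feeds.append((a.get("title"), a.get("href")))

# Illustrative home page markup (not a real institution's page).
html = """
<html><head>
<link rel="alternate" type="application/rss+xml"
      title="Jobs at Example" href="/jobs/feed.rss">
<link rel="alternate" type="application/atom+xml"
      title="News" href="/news/atom.xml">
<link rel="stylesheet" href="/style.css">
</head><body></body></html>
"""

finder = FeedFinder()
finder.feed(html)
print(finder.feeds)
# [('Jobs at Example', '/jobs/feed.rss'), ('News', '/news/atom.xml')]
```

A real crawler would fetch each institution’s home page first; the parsing step above is the part that decides whether a feed is auto-discoverable.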

The Future

In a recent guest post entitled Lend Me Your Ears Dear University Web Managers!, Dave Flanders, a JISC Programme Manager, summarised work carried out in the “Linking You“ project at the University of Lincoln. A survey of 40 Web sites across the domain (ten from each university group) was carried out in order to compare patterns of usage for URLs to key information sources. The project found there were inconsistencies in the representation of information for graduates and undergraduates. However there were also good conventions which have emerged across the sector. From this work the ‘Linking You’ project proposed a common set of URL syntaxes which could, in principle, be used across multiple corporate institutional Web sites.

The project outlined a number of benefits to the sector which can be gained from agreement on common URI practices, which included:

  • Provision of news feed aggregators: If we all knew where all the corporate news feeds were, e.g. we could create a UK University News Aggregation Service where the sector could have their news published on demand, let alone text mining goodness and other filters for highlighting key news developments across all higher and further education institutions.
  • A sector wide directory: Common information such as institutional policies, contact information, news, about, events, etc. could be aggregated into a searchable directory; useful to both the public and HEI data geeks.
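If institutions adopted such conventions, an aggregator would not even need feed autodiscovery: it could construct candidate feed URLs directly from a list of institutional domains. The sketch below illustrates the idea; the /news/feed path and the institution addresses are purely hypothetical assumptions, not the Linking You project’s actual recommendations:

```python
from urllib.parse import urljoin

# Hypothetical convention: every institution exposes its news feed
# at /news/feed relative to its home page.
FEED_PATH = "/news/feed"

# Illustrative institution home pages (not a real survey list).
institutions = [
    "https://www.example-uni.ac.uk/",
    "https://www.another.ac.uk",
]

def candidate_feed_urls(homepages, path=FEED_PATH):
    """Build the feed URL each institution would have under the convention."""
    return [urljoin(home, path) for home in homepages]

urls = candidate_feed_urls(institutions)
print(urls)
# ['https://www.example-uni.ac.uk/news/feed', 'https://www.another.ac.uk/news/feed']
```

This is the payoff of common URL syntaxes: the aggregator needs no per-institution configuration, only the convention itself.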

I can’t help but feel that Universities (and institutional Web teams) which are early adopters of such practices may gain advantages. Web teams which highlight their vacancies in a Web team blog will see the content surfaced to viewers of the IWTB service, and content linked from University home pages in ways which can be found by software will continue to be of interest to developers who are looking for institutional data. I wonder how long it will take before others start to follow the approaches taken at the Universities of Bath, Cumbria, St. Andrews and Edge Hill?

Posted in Web2.0 | 1 Comment »

Microattributions, Wikipedia and Dissemination

Posted by Brian Kelly on 9 September 2011

Microattributions Session at #SOLO11

One of the sessions I attended at the SOLO (Science Online London) 2011 event held in London last week addressed the role of ‘microattributions’ in science (note that there isn’t a specific page on the SOLO11 Web site which I can link to so I have created a Lanyrd page about the Microattributions breakout session).

Use of Microattributions in Wikipedia

The session began with Mike Peel (@Mike_Peel) showing how contributions to Wikipedia provide an example of a service which supports microattributions. Looking at an example with which I am familiar: a year ago, in a post entitled How Can We Assess the Impact and ROI of Contributions to Wikipedia?, I commented on the potential value of entries in Wikipedia, citing the example of Andy Powell’s update to the HTTP_303 entry. This entry has been viewed no fewer than 5,032 times in the past 30 days, which I think illustrates Wikipedia’s strengths in providing outreach. However I hadn’t been aware that it was possible to view details of the contributions made to Wikipedia articles. Looking at the list of contributors for the HTTP_303 entry I find that Andy Powell is the top contributor, having made 7 updates between 09:53 and 10:13 on 24 September 2010.

Looking at a more significant article, such as the Wikipedia entry for World Wide Web, we can see that the top contributor, Susan Lesch, has made 253 edits between March 2008 and July 2011. The next most prolific contributor, NigelJ, has made 127 updates followed by the Cluebot bot, which has made 70 automated updates (fixing vandalised updates to the article).

Mike Peel illustrated the importance of being able to identify significant contributors to Wikipedia in a story of Professor Gets Tenure With The Help Of His Wikipedia Contributions. The Wikimedia blog provided further information on the contributions which Professor Michel Aaij had made: “more than 60,000 edits, a couple of Good Articles, a Featured List, almost 150 Did You Knows“.

Microattributions in Scientific Research

Following Mike Peel’s very tangible example of both the use of microattributions and the value that they can provide for an individual, Martin Fenner (@mfenner) described the origin of the term. As Martin described in a recent blog post, one of the first mentions of the term appears to be an August 2007 Editorial in Nature Genetics (Compete, collaborate, compel). Martin provided a definition of the term:

Microattribution ascribes a small scholarly contribution to a particular author.

and went on to describe how a paper published in March 2011 in Nature Genetics (Systematic documentation and analysis of human genetic variation in hemoglobinopathies using the microattribution approach) concluded that “microattribution demonstrably increased the reporting of human variants, leading to a comprehensive online resource for systematically describing human genetic variation“.

A Microattribution Article in Wikipedia

During the Microattributions session we heard of several other examples of microattributions, including contributions to source code in software repositories such as GitHub.

During the session Mike Peel updated his personal page on Wikipedia with some of the ideas which were discussed. On the page Mike pointed out that there wasn’t a Wikipedia entry on Microattributions and invited volunteers to create a page.

I responded to this challenge and created the initial stub entry for the article, as illustrated.

In my initial draft, which (following the suggestion provided by the article creation wizard) I created in my personal Wikipedia space, I included the other examples of microattributions mentioned above. However since I wasn’t aware of any significant publication which had documented use of the term in these contexts, I defined microattributions in the context of its use in the Nature Genetics paper.

Making Use of Wikipedia in Other Areas

I don’t know if the Microattributions article will remain in Wikipedia. It might be deemed insufficiently noteworthy. Or perhaps it could be included in some other entry: what, for example, is the relationship between a microattribution and a nanopublication – a term coined, I think, by Barend Mons?

However I am convinced of the importance of Wikipedia for defining scientific and technical terms and documenting significant issues related to their origin and use. Should funders, such as Research Councils and JISC, encourage funded projects to make use of Wikipedia as a dissemination channel which can help to enhance the impact of funded work? If this does happen there will be a need to understand best practices for creating and maintaining sustainable items in Wikipedia, including concepts such as NPOV.

I also feel it would be useful to be able to monitor contributions to Wikipedia across sectors, such as JISC-funded project developments. Although it seems that we can identify individual contributors, I don’t know if it is possible to aggregate information relating to groups of individuals. Since Andy Powell and I both have profiles in Wikipedia, is it possible, I wonder, for statistical information about our contributions to be automatically gathered and analysed? I’ll leave that as a challenge to developers :-)
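As a starting point for that challenge: the MediaWiki API does expose per-user contribution lists (via action=query with list=usercontribs), so aggregating across a group of named contributors is largely a matter of tallying the responses. A hedged sketch, with a canned API-shaped response (the username and page titles are invented) standing in for the live call:

```python
import json
from urllib.parse import urlencode

API = "https://en.wikipedia.org/w/api.php"

def contribs_url(username, limit=500):
    """Build a MediaWiki usercontribs query URL."""
    return API + "?" + urlencode({
        "action": "query", "list": "usercontribs",
        "ucuser": username, "uclimit": limit, "format": "json",
    })

def tally(api_responses):
    """Count edits per user across a set of usercontribs API responses."""
    counts = {}
    for resp in api_responses:
        for edit in resp["query"]["usercontribs"]:
            counts[edit["user"]] = counts.get(edit["user"], 0) + 1
    return counts

# Canned response in the shape the API returns (fields trimmed for brevity).
sample = json.loads("""{"query": {"usercontribs": [
  {"user": "ExampleUser", "title": "HTTP 303"},
  {"user": "ExampleUser", "title": "Microattribution"}]}}""")

counts = tally([sample])
```

Running `tally` over responses fetched for each member of a project team would give the kind of group-level summary discussed above, though a real script would also need to page through results for prolific contributors.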

Twitter conversation from Topsy: [View]

Posted in Wikipedia, Wikis | 2 Comments »

Do We Want Technical Diversity or Harmonisation?

Posted by Brian Kelly on 25 July 2011

Current Diversity of Approaches to Mobile Access to Institutional Services

Do we want diversity in the technologies used to provide various institutional Web services or should we be seeking to gain benefits which may be provided by adoption of a small range of technological approaches?

The instinctive answer for some is likely to be a desire to embrace diversity and to encourage ‘a thousand flowers to bloom‘. Others, however, will be concerned that such approaches will be costly, will lead to confusion for user communities and will make it difficult to provide a harmonised service once the best approaches become accepted.

Such issues are likely to be revisited in the context of the approaches and technologies which will be used to deliver mobile Web services. UKOLN and CETIS recently carried out a survey on Institutional Use of the Mobile Web. Although we are still working on our report on the survey, the findings for the initial question “How is your institutional website(s) delivered to mobile devices?” appear interesting. It seems that the most widely used approach to providing access for mobile devices is no different from that taken for conventional devices. A significant number provide a separate site for mobile users, whilst a similar number have developed stylesheets for their Web sites which are specifically designed for mobile devices. One institution provides access to mobile devices using a mobile plugin provided by their institutional CMS.

Whilst CETIS’s Mobile Web Apps briefing paper (PDF format) (which is described on the CETIS blog) provides the advice that:

There is no such thing as the Mobile Web. Design for the usual Internet and then make your site adaptable for mobile devices, for example decreasing the screen size using CSS media queries, and then scaling up for larger devices like tablets and PCs by progressively enhancing access for larger audiences.

in reality it appears that such advice is not currently being widely implemented. There will be understandable reasons why such advice cannot easily be implemented: there will, for example, be existing technologies in place which cannot easily be updated or replaced, and resources will need to be found to carry out usability testing on a variety of devices. In addition, as discussed on this blog in the context of the Shhmooze app for Apple’s mobile devices, there may be business reasons for developing an app for a popular mobile device in order to validate the potential demand for a new service by providing a tool which maximises the usability provided on a specific device, before developing device-independent solutions once the demand has been established.

But whilst one can appreciate the current diversity, there will be a need to understand how the landscape may develop in the future. The comments in the survey describe how, in addition to existing implementation challenges, staff in institutions are still debating longer-term strategic policies.

Revisiting Decisions on Institutional Web Site Search Facilities

Might there be understandable reasons for diversity in the technical directions which institutions across the sector take? Is there value in welcoming a thousand mobile flowers blooming? In order to provide a historical context to such discussions I thought I would revisit the ideas which were being discussed regarding the provision of search engine technologies on institutional Web sites over ten years ago, and look at how institutions are currently providing such services.

In a short paper on Approaches to Indexing in the UK, which was delivered at a conference on Managing the Digital Future of Libraries hosted in Moscow in 2000, I presented the results of a survey of software used to provide search facilities on institutional Web sites in UK Universities. As shown in the accompanying table, the most widely used indexing tool was the open source ht://Dig solution. The survey showed that a wide range of applications were used, with 13 institutions using software which was used by only one or two institutions. It was also noticeable that no fewer than fifty higher education institutions in the UK were failing to provide a search facility on their institutional Web site back in July/August 1999.

My recollection is that the discussions on the mailing lists back then tended to focus on a variety of factors: ht://Dig was preferred by many as it was an open source solution, whereas others were happy to use the search facility bundled with their Web server software. I can recall that Ultraseek’s management capabilities were appreciated by institutions hosting multiple Web servers which were willing to pay the licence fee for this commercial product, whereas Harvest’s distributed indexing was felt to provide a scalable solution which could be used to provide a national index across UK University Web sites, known as AC/DC. Only three institutions, however, made use of an externally hosted solution (two used Freefind and one used the public Alta Vista search facility).

A survey of institutional search engines was carried out for the twenty Russell Group Universities in December 2010. As described in the post on Trends For University Web Site Search Engines we found that “15 Russell Groups institutions (75%) use Google to provide their main institutional Web site search facility, with no other search engine being used more than once“.

In this case we can clearly see that arguments for a diversity of solutions based on preferences for open source or bundled solutions, ease of management or the distributed architecture seem no longer to be relevant, with a Google solution now being the preferred option.

Will we see similar arguments for diversity in the ways in which institutions provide support for the Mobile Web until we eventually arrive at an approach which is used by most institutions? And whilst it may be dangerous to mandate a preferred solution too soon (after all, the majority of the search engines used in 1999 are probably no longer in existence) might there not also be risks in failing to engage with mainstream approaches?

Posted in Web2.0 | 1 Comment »

Memolane Timelines (Not Only For WordPress Blogs)

Posted by Brian Kelly on 20 July 2011

Last week’s news on the WordPress.com developer blog – “oEmbed Provider API Now Available” – will be appreciated by developers who feel that the WordPress platform provides a rich and interoperable environment, not only as a blogging platform but also as a content management system. The announcement describes how:

oEmbed is a format for allowing an embedded representation of a URL on third-party sites. The simple API allows a website to display embedded content (such as photos or videos) when a user posts a link to that resource, without having to parse the resource directly.
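In practice a consumer simply calls the provider’s endpoint with the target URL and parses the JSON it gets back. The request shape below follows the oEmbed specification; the endpoint address, resource URL and canned response are all illustrative assumptions, not WordPress.com’s actual values.

```python
import json
from urllib.parse import urlencode

def oembed_request(endpoint, resource_url, maxwidth=500):
    """Build an oEmbed request URL, as per the oEmbed specification."""
    return endpoint + "?" + urlencode({
        "url": resource_url, "format": "json", "maxwidth": maxwidth,
    })

# Illustrative endpoint; a real consumer would use the provider's documented one.
req = oembed_request("http://example.com/oembed",
                     "http://example.wordpress.com/2011/07/a-post/")

# Canned provider response in the standard oEmbed shape.
response = json.loads("""{"type": "rich", "version": "1.0",
  "title": "A post", "html": "<blockquote>...</blockquote>"}""")
embed_html = response["html"]
```

The `html` field is the embeddable representation the consumer drops into its own page, which is why, as the announcement says, no parsing of the original resource is needed.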

Whilst reading this news earlier today I followed a link to Third Party Applications on the WordPress.com Develop site, which currently lists only one application which is “built to work with and enable you to interact with your blog in new ways” – namely Memolane.

I registered with the Memolane service for producing timelines some time ago but the connection with WordPress made me revisit the service. A display of my timeline is illustrated.

I have configured Memolane to include a feed from this blog. In addition to a display of recent blog posts I have also included RSS feeds for areas of work in which, several years ago, I recognised that RSS could have a significant role to play. In particular I have included links to the RSS feeds for my forthcoming events, previous events (for every year since I started at UKOLN in 1997) and for my peer-reviewed and related papers.

As shown at the bottom of the image, you can quickly display previous events, so I can find that in the latter part of 2000 I gave a talk on “Externally Hosted Web Services” on 12 October 2000 (well before the current hype about Cloud Computing!) and a talk on “Approaches To Resource Discovery In The UK HE Community” at the Verity 2000 conference on 30 November 2000.

It seems from this timeline display that life was much more leisurely eleven years ago, with the record of public engagement suggesting a six-week gap between my activities! Of course I will have posted to email lists and written documents, but it is now difficult to see what I was doing back then.

RSS feeds provide a means of keeping a reusable record of activities which can be processed by a variety of applications. This is the reason why I maintain a page of RSS Feeds For UK Web Focus Web Site and provide similar links for the QA Focus project which I was the project director for from 2002-2004.

Despite a number of third party services having withdrawn support for RSS I am still convinced of its benefits. Those who make use of WordPress software, either as a blogging platform or as a CMS, will be able to exploit the feeds provided by the platform, and many other services still provide RSS. The most significant gap in the services I make use of, however, is ePrints, which drives our institutional repository service. Sadly ePrints support for RSS is very limited and so I am forced to maintain the RSS feed for my publications separately :-(  It would be great if ePrints were to support the interoperability provided in a Web 2.0 world by RSS and not just the much smaller Library world based around OAI-PMH. But, as I asked last year: Is It Too Late To Exploit RSS In Repositories?
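Where a service doesn’t provide a feed, producing one by hand is mechanical enough to script. A minimal sketch generating an RSS 2.0 feed from a list of publications (the channel title, paper title and link are invented for the example, and many optional RSS elements are omitted):

```python
import xml.etree.ElementTree as ET

def build_rss(channel_title, items):
    """Serialise (title, link) pairs as a minimal RSS 2.0 feed."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = channel_title
    for title, link in items:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = title
        ET.SubElement(item, "link").text = link
    return ET.tostring(rss, encoding="unicode")

feed = build_rss("Peer-reviewed papers", [
    ("A Paper on Web Accessibility", "http://example.org/papers/1"),
])
```

A script along these lines, fed from a repository’s export, is one way of keeping a hand-maintained publications feed in sync until the repository software supports RSS natively.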

Posted in Blog, rss, Web2.0 | Tagged: | 5 Comments »

Potential for at Events

Posted by Brian Kelly on 30 June 2011

Alan Cann is a fan of In a post on a “ masterclass” he described how he has “written about several times recently, but [is] still getting blank looks from lots of folks” and so went on to explain that “Curation, it’s all about curation. What is curation? Adding value to information“. In a subsequent post Alan reported that “The saga continues” and admits that, although he is a fan, “What I still haven’t figured out is how to use for education, beyond the informal contexts that I’m already using it for“.

I have also been exploring I am thinking about the potential the service may have for curating content related to an event, as opposed to subject areas such as “The latest news about microbiology” and “Annals of Botany: Plant Science Research”, which have been the focus of Alan’s curation activities.

I have therefore set up a topic on “IWMW 2011 (Institutional Web Management Workshop)”, UKOLN’s annual event, which this year takes place at the University of Reading on 26-27 July.

The page currently contains content published by the event organisers, primarily on the IWMW 2011 blog. The blog has been set up to highlight the key aspects of the event (the plenary talks and the parallel sessions) in advance. We hope that this will provide evidence of the relevance of the event for those involved in the important task of managing institutional Web services, and convince managers that, at a time when funding is tight, £250 for a two-day event (which includes accommodation) is a bargain for the professional development and networking opportunities which the event provides (especially in comparison with similar events for those involved in Web management activities).

I suspect, however, that the page will become more interesting as more varied content is published about the event (ideally with the #iwmw11 event hashtag so that such content can be easily discovered) by those intending to attend the event or who have an interest in the topics which will be addressed at the workshop.

Our intention is to update the IWMW 2011 page on a weekly basis over the next few weeks and then see if we can update it more frequently during the event itself. I should add that although the official programme for the event has been finalised, in light of various recent announcements (such as the Cookie legislation and the requirement for Universities to publish data related to the services they provide) we are exploring ways in which such topics may be addressed at the event.

If you do have an interest in either the topics which may be published on or, indeed, the opportunities which may provide, we invite you to follow our page. And if you’d like to read some more about this service, which, perhaps surprisingly, was developed in France, you may wish to read the guest post on the TechCrunch Europe blog by Guillaume Decugis, CEO of the company behind who explains “Why this could be the moment for the curators“.

Twitter conversation from Topsy: [View]

Posted in Web2.0 | Tagged: | 8 Comments »

Social Analytics for Russell Group University Twitter Accounts

Posted by Brian Kelly on 28 June 2011

“Students to get best-buy facts”

On a day on which the main headline on the BBC News Web site announces the Government’s Competition Plan For Universities, which “could bring more competition between universities and greater powers for students“, it would seem timely to publish a survey which makes use of a number of social media analytic tools to explore how Russell Group Universities are making use of their institutional Twitter accounts, and to invite discussion on the strengths and weaknesses of such approaches. After all, if, as described in an accompanying article, “Students [are] to get best-buy facts“, shouldn’t the facts about Universities’ online presence also be provided – especially if you believe in openness and transparency?


A survey of Institutional Use of Twitter by Russell Group Universities was published back in January 2011. This survey provided a snapshot of institutional use of Twitter across the twenty Russell Group Universities based on the statistics provided on Twitter account profile pages (numbers of followers, numbers of tweets, etc.). The survey was warmly received by those involved in managing institutional Twitter accounts or with an interest in activities in this area, with Mario Creatura expressing the view that the survey provided an “excellent gathering of data in an area that quite honestly is chock full of confusing stats“.

The interest in gathering further evidence of the value of Social Web services continues to grow. A recent study, for example, sought to answer the question “What’s the ROI with advertising on Facebook?” and concluded that “1 Facebook fan = 20 additional visits to your website“. But what approaches can institutions take to gain a better understanding of institutional use of Twitter?

Use of Social Analytic Services

In a recent post entitled Analysing influence .. the personal reputational hamsterwheel, Lorcan Dempsey highlighted three social media analytic services. The post described how it has been suggested that the “Klout score will become a new way of measuring people and their influence online“. In addition to Klout (which according to Crunchbase “allows users to track the impact of their opinions, links and recommendations across your social graph“), Lorcan’s post also referenced PeerIndex (which according to Crunchbase “identifies and ranks experts in business and finance based on their digital footprints“) and Twitalyzer (described in a Mashable article as “provid[ing] detailed metrics on things like impact, engagement, clout and velocity for individual Twitter accounts“).

Although Lorcan’s blog post addressed the relevance of such services for helping to understand personal reputation, I felt it would be useful to gain a better understanding of how these services work by using them to analyse institutional Twitter accounts. I have therefore used the Klout, PeerIndex and Twitalyzer social media analytic tools to analyse the twenty Russell Group University Twitter accounts. The table below summarises the findings of the survey, which was carried out on Thursday 23 June 2011. It should also be noted that the table contains live links to the services which will enable the current findings to be displayed (and also for any errors to be easily detected and reported).

| #  | Institution / Twitter Account | Klout: Score | Network | Amplification | True Reach | Description | PeerIndex: Score | Activity | Audience | Authority | Twitalyzer: Impact | Percentile | Type |
|----|-------------------------------|--------------|---------|---------------|------------|-------------|------------------|----------|----------|-----------|--------------------|------------|------|
| 1  | University of Birmingham      | 55 | 61 | 34 | 3K  | Thought Leader | 19 | 31 | 70 | 4  | 3.3% | 88.6 | Everyday |
| 2  | University of Bristol         | 49 | 54 | 28 | 2K  | Specialist     | 16 | 16 | 68 | 0  | 1.7% | 75.2 | Everyday |
| 3  | University of Cambridge       | 56 | 63 | 39 | 7K  | Thought Leader | 29 | 38 | 0  | 37 | 5.4% | 94.6 | Everyday |
| 4  | Cardiff University            | 48 | 52 | 26 | 3K  | Specialist     | 43 | 47 | 76 | 33 | 0.8% | 57.1 | Everyday |
| 5  | University of Edinburgh       | 52 | 60 | 35 | 2K  | Thought Leader | 14 | 6  | 69 | 0  | 1.7% | 75.2 | Everyday |
| 6  | University of Glasgow         | 51 | 58 | 29 | 3K  | Specialist     | 40 | 47 | 78 | 28 | 1.1% | 65.1 | Everyday |
| 7  | Imperial College              | 51 | 57 | 30 | 3K  | Specialist     | 39 | 24 | 74 | 24 | 2.8% | 85.7 | Everyday |
| 8  | King’s College London         | 46 | 53 | 26 | 1K  | Networker      | 16 | 19 | 53 | 4  | 1.3% | 69.1 | Everyday |
| 9  | University of Leeds           | 51 | 59 | 32 | 2K  | Specialist     | 23 | 37 | 62 | 12 | 1.8% | 76.4 | Everyday |
| 10 | University of Liverpool       | 43 | 48 | 21 | 2K  | Networker      | 2  | 40 | 0  | 0  | 1.4% | 70.9 | Everyday |
| 11 | LSE                           | 39 | 48 | 18 | 797 | Networker      | 33 | 43 | 0  | 43 | 0.4% | 38.8 | Everyday |
| 12 | University of Manchester      | 14 | 10 | 10 | 46  | Feeder         | 27 | ?  | ?  | ?  | ?%   | ?    | –        |
| 13 | Newcastle University (no official account found) | – | – | – | – | – | – | – | – | – | – | – | – |
| 14 | University of Nottingham      | 51 | 57 | 30 | 2K  | Specialist     | 41 | 41 | 65 | 33 | 1.9% | 77.6 | Everyday |
| 15 | University of Oxford          | 58 | 65 | 37 | 8K  | Specialist     | 58 | 44 | 83 | 52 | 2.7% | 85.1 | Everyday |
| 16 | Queen’s University Belfast    | 41 | 48 | 23 | 779 | Specialist     | 11 | 0  | 53 | 0  | 0.7% | 53.6 | Everyday |
| 17 | University of Sheffield       | 54 | 59 | 36 | 3K  | Networker      | 41 | 44 | 73 | 37 | 2.9% | 86.4 | Everyday |
| 18 | University of Southampton     | 46 | 55 | 27 | 1K  | Networker      | 46 | 46 | 57 | 44 | 0.9% | 60.1 | Everyday |
| 19 | University College London     | 54 | 63 | 39 | 2K  | Specialist     | 62 | 68 | 71 | 59 | 2%   | 78.7 | Everyday |
| 20 | University of Warwick         | 53 | 58 | 31 | 3K  | Thought Leader | 52 | 42 | 77 | 45 | 1.2% | 67.3 | Everyday |

Please note that you will need to sign in to Klout in order to view the findings.
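With the figures gathered in one place, simple cross-institutional comparisons can be scripted rather than eyeballed. A sketch averaging the Klout scores from the survey table above (Newcastle, which had no official account, is omitted; the short institution names are just labels for the example):

```python
# Klout scores transcribed from the survey table (19 accounts; Newcastle excluded).
klout_scores = {
    "Birmingham": 55, "Bristol": 49, "Cambridge": 56, "Cardiff": 48,
    "Edinburgh": 52, "Glasgow": 51, "Imperial": 51, "King's": 46,
    "Leeds": 51, "Liverpool": 43, "LSE": 39, "Manchester": 14,
    "Nottingham": 51, "Oxford": 58, "QUB": 41, "Sheffield": 54,
    "Southampton": 46, "UCL": 54, "Warwick": 53,
}

mean_score = sum(klout_scores.values()) / len(klout_scores)
top = max(klout_scores, key=klout_scores.get)
```

Making the underlying data openly available, as argued later in this post, is precisely what allows this kind of lightweight re-analysis without re-running the surveys.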

A Russell Group Universities PeerIndex group and two Klout groups have been set up (since there is a limit of ten entries per group, the latter are split into Russell Group Universities (1 of 2) and Russell Group Universities (2 of 2)); these should enable comparisons to be made across the institutions based on the particular social media analytic service selected.

It should be noted that since the original survey of institutional use of Twitter by Russell Group Universities, accounts for the Universities of Liverpool and Manchester have been identified. The University of Liverpool account (@livuni) seems to have replaced an older @liverpooluni account which was never used (although it did have over 2,000 followers). The University of Manchester account (@UniofManc) was set up on 14 March 2011 and there have been insufficient tweets for the PeerIndex and Twitalyzer services to provide meaningful reports.

About the Social Media Analytic Metrics

In Klout:

The Klout Score is the measurement of your overall online influence. The scores range from 1-100 with higher scores representing a wider and stronger sphere of influence.

Network Influence is the influence level of your engaged audience. Capturing the attention of influencers is no easy task, and those who are able to do so are typically creating spectacular content.

Amplification Probability is the likelihood that your content will be acted upon. The ability to create content that compels others to respond and high-velocity content that spreads into networks beyond your own is a key component of influence.

The True Reach does not appear to be defined.

PeerIndex is built up of three components: authority, activity and audience score (all three are normalised ranks out of 100):

Authority is the measure of trust; how much can you rely on that person’s recommendations and opinion on a given topic. The authority is calculated from eight benchmark topics for every profile: AME (arts, media and entertainment); TEC (technology and internet); SCI (science and environment); MED (health and medical); LIF (leisure and lifestyle); SPO (sports); POL (news, politics and society) and BIZ (finance, business and economics). These are used to generate the overall authority score as well as to produce the PeerIndex Footprint diagram.

The authority is a relative positioning against everyone else in each benchmark topic. The rank is a normalised measure against all the other authorities in the topic area.

Note that the PeerIndex findings for the University of Oxford are illustrated, with a comparison being made with the PeerIndex findings for the University of Cambridge. The analysis suggests that both institutions have a broadly similar ‘fingerprint’, but that Oxford tends to focus on news, politics and society whilst Cambridge focuses on technology and the Internet.

Audience is an indication of an individual’s reach. It is not simply determined by the number of people who follow you, but is instead generated from the number of people who listen and are receptive to what you are saying. Being followed by a large number of spam accounts, bots or inactive accounts will reduce an audience score. The audience score takes into account the relative size of the audience compared to the audiences of the rest of the community.

Activity is the measure of how much you do that is related to the topic area. Be too active and people will stop listening to you; be too inactive and people will never know to listen to you. The Activity Score takes this behaviour into account. Like the other scores, the Activity Score is calculated relative to the community: if you are part of a community that has lots of activity, your level of activity will need to be higher to achieve the same relative score as in a topic that has much less activity.

Realness is a metric that indicates the likelihood that the profile is of a real person, rather than a spambot or Twitter feed. A score above 50 means PeerIndex thinks the account is of a real person; a score below 50 means it is less likely to be a real person. When PeerIndex comes across a new profile it gives it a score of 50, since initially it doesn’t have the information to make any determination; as more information is gathered the number is modified accordingly. PeerIndex looks at a range of information to generate realness, such as whether the profile has been claimed and linked to Facebook or LinkedIn, and is continually adding new signals to improve the calculation. The other calculations are modified by the realness metric in order to penalise non-real people; claiming a profile will therefore boost the authority, audience and activity scores and consequently the PeerIndex as well.

Note that before the PeerIndex scores are displayed they are normalised. This means every number in PeerIndex is based on a scale of 1 to 100, showing relative positions. An aggressive normalisation calculation is used, which helps to discriminate between top authorities. The benefit is that you can more easily understand who the top authorities are; the trade-off is that many users end up with seemingly lower scores. Here’s an example: if you are in the top 20% by authority in a topic like climate change, it means you have higher authority than 80% of other people measured within this topic. Your normalised authority score for this topic will be in the range of 55 to 65 (that is, significantly lower than 80). Remember, however, that a score of 60 puts you higher than 80% of the people tracked in that topic, and a score of 65 means you rank higher than 95%. PeerIndex focuses on tracking the top people on a specific topic, not just anyone.

In Twitalyzer the Impact measure is a combination of the following factors:

  • The number of followers a user has.
  • The number of unique references and citations of the user in Twitter.
  • The frequency at which the user is uniquely retweeted.
  • The frequency at which the user is uniquely retweeting other people.
  • The relative frequency at which the user posts updates.
  • Twitalyzer’s “Impact Percentile” score provides insight into the relative rank of the individual within the service’s dataset. A ranking in the 69.8th percentile means that the user’s Twitalyzer Impact score is higher than 69.8 percent of the hundreds of thousands of active Twitter accounts the service is tracking.
  • Twitalyzer’s user profiles report 30-day trailing averages for Impact to help visualise how the user’s Impact trends over a longer period of time. This smooths out weekends, vacations, etc.
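The ‘Impact Percentile’ idea above is easy to make concrete: it is simply the share of tracked accounts whose score falls below yours. A sketch with an invented ten-account dataset standing in for Twitalyzer’s hundreds of thousands:

```python
def percentile_rank(score, all_scores):
    """Percentage of tracked scores that fall below the given score."""
    below = sum(1 for s in all_scores if s < score)
    return 100.0 * below / len(all_scores)

# Invented Impact scores standing in for Twitalyzer's tracked accounts.
tracked = [0.1, 0.4, 0.8, 1.3, 2.0, 2.7, 2.9, 3.3, 4.1, 5.4]

rank = percentile_rank(3.3, tracked)  # an Impact of 3.3 beats 7 of the 10 accounts
```

This also shows why the percentile can shift even when an account’s own Impact is unchanged: the rank depends on everyone else in the dataset as much as on the account itself.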

Thoughts on Openness of Social Media Analytics Data

We are starting to see a stream of social media analytic services being developed, together with companies offering to analyse institutional use of social media and advise on best practices. There is a danger, I feel, of unnecessary duplication of such analyses, with funds which could be used to enhance institutions’ teaching, learning and research services being used to pay for unnecessary consultancy work. Whilst there may be legitimate justifications for such consultancy, I feel that factual data which is gathered should be made openly available. In addition I feel that there is a need for open discussion on how social media analytic findings should be interpreted and used.

Issues for the “Metrics and Social Web Services: Quantitative Evidence for their Use and Impact” Workshop

On 11 July I am facilitating a one-day workshop on “Metrics and Social Web Services: Quantitative Evidence for their Use and Impact” which will be held at the Open University. The workshop aims to ensure that the participants:

  • Have a better appreciation of the importance of the need to gather and interpret evidence.
  • Understand how metrics can be used to demonstrate the value and ROI of services.
  • Have seen examples of how institutions are gathering and using evidence.
  • Be aware of the limitations of such approaches.
  • Have discussed ways in which such approaches can be used across the sector.
Some questions which I hope will be addressed at the workshop (which, incidentally, is now fully subscribed, indicating the interest across the sector in this area) include:
  • Do existing social media analytic services, such as those described above, have a role to play in helping to gain a better understanding of how social media services are being used to support institutional goals?
  • Can such existing social media analytic services be used to help identify personal professional reputation?
  • Should the higher education sector be developing its own social media analytic tools in order to ensure that the specific requirements of higher education institutions are being addressed?
  • What are the dangers and limitations of seeking to analyse and make use of social media metrics and how should such concerns be addressed?

If you have any answers to these questions, or general comments or queries you would like to raise, feel free to add a comment to this post.


Posted in Evidence, Twitter, Web2.0 | 5 Comments »

Don’t Just Embed Objects; Add Links To Source Too!

Posted by Brian Kelly on 16 June 2011

I’m a great fan of the JISC’s Access Management blog. Nicole Harris is the main contributor to the blog, and her interest in issues related to access management (a topic many may find rather dry and boring) helps to engage readers beyond the techies who may have an interest in the intricacies of Shibboleth and related access management technologies.

When I updated my RSS reader this morning and opened my JISC folder I noticed that there were several unread posts which had been published a few weeks ago. I looked at the post on “Early Findings for Shibboleth Futures” which told me that Nicole’s “slides are available below, and might be of interest!”. In my RSS reader, however, there was just a blank space. Not a problem, I thought, I can view it in the Safari browser. But, as can be seen in the accompanying image, nothing was displayed in the Web browser either.

The problem is that the embedded slideshow was hosted on Slideshare and the embedding technology uses Flash, which is not supported on my iPod Touch or on other Apple devices such as iPhones and iPads. Some may respond “You should use an Android device”, to which my response would be that I do own an Android phone but prefer the usability of my iPod Touch. Rather than getting drawn into such platform wars, however, there is a very simple solution which allows Slideshare resources to be embedded in blog posts whilst still allowing the slides to be viewed by users of Apple’s mobile devices.

A post published on this blog recently on Metrics for Understanding Personal and Institutional Use of the Social Web also contained an embedded Slideshare presentation. As can be seen when viewing the blog post on an iPod Touch, a blank screen was displayed where the embedded Flash object would appear on a typical desktop PC. However the post also contained a link to the resource hosted on Slideshare. Clicking on the link took me to a mobile-friendly version of the resource which made use of HTML5, so that the slides could be viewed on devices which don’t support Flash, as illustrated below.

My advice to people who wish to embed objects (which might include other types of images and videos and not just Slideshare resources) is:

  • Include a direct link to the hosted resource in the HTML of your page, alongside the embedded object.
  • Use linking phrases of the form “The slides for the talk are available on Slideshare” in which the link points directly to the resource itself, rather than linking only the word “Slideshare”, which implies a link to the Slideshare home page.
  • Avoid links such as “Click here to view the slides” as this is bad practice from an accessibility perspective.
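As a hypothetical illustration of this advice, the markup below embeds a Slideshare presentation and follows it with a direct, descriptively worded text link to the resource, so that users of devices without Flash support can still reach the slides. The URLs and dimensions are placeholders, not those of an actual presentation:

```html
<!-- Embedded Flash player: appears as a blank space on devices
     without Flash support, such as the iPod Touch, iPhone and iPad -->
<object width="425" height="355" type="application/x-shockwave-flash"
        data="http://static.slideshare.net/swf/player.swf?doc=example-presentation">
  <param name="movie"
         value="http://static.slideshare.net/swf/player.swf?doc=example-presentation" />
</object>

<!-- Direct link to the hosted resource: works everywhere, with the
     link text describing the destination rather than saying "click here" -->
<p>The <a href="http://www.slideshare.net/example-user/example-presentation">slides
for the talk are available on Slideshare</a>.</p>
```

Note that because the fallback link is ordinary HTML, it also benefits users of screen readers and anyone whose RSS reader strips embedded objects from posts.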
And if you are interested in the contents of the slides which Nicole Harris used at the recent TNC2011 meeting, in which she spoke about the creation of the Shibboleth Consortium and presented some early findings from the Shibboleth Futures Survey, her slides are available on Slideshare and are embedded below :-)

Posted in Web2.0 | Tagged: | 7 Comments »