UK Web Focus (Brian Kelly)

Innovation and best practices for the Web

Archive for the ‘Evidence’ Category

Should We Boycott Academia.edu?

Posted by Brian Kelly on 9 Dec 2015

The “Why Are We Not Boycotting Academia.edu?” Event

I recently came across a tweet which announced an event addressing the question “Why Are We Not Boycotting Academia.edu?“. As described on the Eventbrite booking page:

With over 36 million visitors each month, Academia.edu, the San Francisco-based platform-capitalist company, is hugely popular with researchers. Its founder and CEO Richard Price maintains it is the ‘largest social-publishing network for scientists’, and ‘larger than all its competitors put together’. Yet posting on Academia.edu is far from being ethically and politically equivalent to using an institutional open access repository, which is how it is often understood by academics. Academia.edu’s financial rationale rests on the ability of the venture-capital-funded professional entrepreneurs who run it to monetize the data flows generated by researchers. Academia.edu can thus be seen to have a parasitical relationship to a public education system from which state funding is steadily being withdrawn. Its business model depends on academics largely educated and researching in the latter system, labouring for Academia.edu for free to help build its privately-owned for-profit platform by providing the aggregated input, data and attention value.

The abstract concluded by summarising questions which would be addressed at the event, including:

  • Why have researchers been so ready to campaign against for-profit academic publishers such as Elsevier, Springer, Wiley-Blackwell, and Taylor & Francis/Informa, but not against for-profit platforms such as Academia.edu, ResearchGate and Google Scholar?
  • Should academics refrain from providing free labour for these publishing companies too?
  • Are there non-profit alternatives to such commercial platforms academics should support instead?

The event was organised by The Centre for Disruptive Media and took place at Coventry University on 8 December 2015 from 3-6pm. Unfortunately I was not able to attend the event, but as this is an area of interest to me I thought I would publish this post, in which I argue that rather than boycotting Academia.edu we should make use of it (and similar services) by complementing institutional repository services with such services.

Background to My Interests

[Image: Slide on my use of Academia.edu]

Over a year ago I was invited to give a talk on “Using Social Media to Build Your Academic Career” at a symposium in Brussels on “How to Build an Academic Career” for the five Flemish universities. Over the past two years I have also given modified versions of the talk at the annual DAAD conference, the IRISS Research Unbound conference and for the iSchool@northumbria’s Research Seminar Series. As can be seen from the accompanying screenshot of one of the slides in the presentations, I summarised the benefits which can be gained from making use of Academia.edu, based on personal experiences, and recommended best practices.

My advice, as well as that provided by librarians and research support staff who promote use of social media by early career researchers, would appear to be in conflict with the general theme of the “Why Are We Not Boycotting Academia.edu?” event. But rather than address the issue of whether universities should own the online services they use, I will present evidence of how existing services are being used and the implications of such usage patterns.

What Does The Evidence Suggest?

Personal Experiences

The slides for my talk on “Using Social Media to Build Your Academic Career” are available on Slideshare. In the talk I described the benefits of making one’s research content available in popular places, rather than restricting access to niche web sites such as institutional repositories. In particular I described the SEO benefits which can be gained by using popular sites which contain links to research papers hosted on an institutional repository. This advice was based on findings published in a paper which asked “Can LinkedIn and Academia.edu Enhance Access to Open Repositories?” by myself and Jenny Delasalle, presented at the Open Repositories 2012 conference. I Googled for the paper using the search term “linkedin researchgate opus” in order to find the copy of the paper hosted on Opus, the University of Bath institutional repository; however the first hit was for the copy hosted by ResearchGate. This suggested that hosting a research paper on a popular service such as Academia.edu or, in this case, ResearchGate, would provide better discoverability on Google than use of an institutional repository.

But since Google will remember previous searches, a more objective tool to use would be DuckDuckGo, which does not keep a record of previous searches. In this case the search for “linkedin researchgate opus” found the paper hosted on ResearchGate in second place. Using the full title of the paper, as shown in the DuckDuckGo search for “Can LinkedIn and Academia.edu Enhance Access to Open Repositories?”, the order of the search results was: (1) the paper hosted by Academia.edu; (2) the paper hosted by the Opus institutional repository; (3) the slides hosted on Slideshare; and (4) the paper hosted by ResearchGate.

Personally, therefore, I have found benefits through use of Academia.edu and ResearchGate in helping to raise the visibility of my research papers. But how popular is Academia.edu across the UK research sector?

Institutional Evidence

In order to answer this question a survey of Academia.edu use across the 24 Russell Group universities was carried out on 26 November 2015. The findings are given in the following table, with the link in the final column enabling the current results to be determined.

Ref. no.  Institution                        No. of people  Link
 1        University of Birmingham                   5,408  [Link]
 2        University of Bristol                      5,759  [Link]
 3        University of Cambridge                   12,770  [Link]
 4        Cardiff University                         5,372  [Link]
 5        Durham University                          5,198  [Link]
 6        University of Exeter                       5,346  [Link]
 7        University of Edinburgh                    9,252  [Link]
 8        University of Glasgow                      6,094  [Link]
 9        Imperial College London                    3,943  [Link]
10        King’s College London                      8,568  [Link]
11        University of Leeds                        8,396  [Link]
12        University of Liverpool                    4,911  [Link]
13        London School of Economics                 6,184  [Link]
14        University of Manchester                  11,249  [Link]
15        Newcastle University                       4,756  [Link]
16        University of Nottingham                   7,963  [Link]
17        University of Oxford                      19,709  [Link]
18        Queen Mary, University of London           4,083  [Link]
19        Queen’s University Belfast                 2,639  [Link]
20        University of Sheffield                    4,821  [Link]
21        University of Southampton                  5,646  [Link]
22        University College London                 13,481  [Link]
23        University of Warwick                      6,457  [Link]
24        University of York                         5,297  [Link]
          Total                                    173,301


  • This information was collected on 9 December.
  • The figures were obtained by entering the name of the institution and using the highest number listed. As can be seen from the accompanying image there may be other variants of the name of the institution: the figures shown will therefore give an under-estimate of the number of people associated with the institution (the total given in the table is for the largest variant of the institution’s name, i.e. 4,744 in this example).
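The counting rule in the second bullet can be sketched as follows. The variant names and the two smaller counts below are purely illustrative (only the 4,744 figure comes from the example above); they simply show why taking the largest variant under-estimates the true total.

```python
# Sketch of the survey's counting rule: when an institution appears under
# several name variants on Academia.edu, the table records the count for the
# largest single variant rather than the sum, so it is an under-estimate.
# The variant names and the smaller counts are illustrative only.

def headline_count(variant_counts):
    """Return the member count of the largest name variant."""
    return max(variant_counts.values())

variants = {
    "Example University": 4744,     # largest variant, as in the 4,744 example
    "The Example University": 310,  # hypothetical smaller variant
    "Example Univ.": 42,            # hypothetical smaller variant
}

print(headline_count(variants))  # figure that would appear in the table: 4744
print(sum(variants.values()))    # the true total is at least 5096
```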

Note that a post entitled A Survey of Use of Researcher Profiling Services Across the 24 Russell Group Universities, published in August 2012, summarised usage of several researcher profiling services (ResearchGate, ResearcherID, LinkedIn and Google Scholar Citations, as well as Academia.edu). That survey found 33,812 users of Academia.edu from the Russell Group universities, which suggests an increase of over 400% in just over 3 years.
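As a sanity check, the headline growth figure can be derived directly from the two survey totals quoted above:

```python
# Growth in Academia.edu accounts across the Russell Group universities
# between the August 2012 survey (33,812) and the 2015 survey (173,301).
users_2012 = 33_812
users_2015 = 173_301

increase = users_2015 - users_2012
growth_pct = 100 * increase / users_2012

print(f"{increase:,} more users, a {growth_pct:.0f}% increase")
# → 139,489 more users, a 413% increase
```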

Also note that the findings of a survey carried out in February 2013, which compared take-up of Academia.edu and ResearchGate, described in a post entitled Profiling Use of Third-Party Research Repository Services, found that ResearchGate appeared to have entries for 426,414 researchers from Russell Group universities, compared with 39,546 for Academia.edu.


My personal experiences, together with the institutional evidence, suggest that Academia.edu is popular with the research community. But what of the issues raised at yesterday’s meeting?

Academia.edu and ResearchGate are social networking sites

It seems to me that it will be difficult to find funding for the development of large-scale non-profit alternatives to commercial services such as Academia.edu and ResearchGate. And even if funding to develop and maintain the technical infrastructure were available, it may prove difficult to get researchers to see the benefits and make their research content available on a new, unproven service, especially in light of evidence such as that provided in a paper on “Open Access Meets Discoverability: Citations to Articles Posted to Academia.edu” which described how:

Based on a sample size of 34,940 papers, we find that a paper in a median impact factor journal uploaded to Academia.edu receives 41% more citations after one year than a similar article not available online, 50% more citations after three years, and 73% after five years.

Coincidentally a week ago I came across a tweet by Jon Tennant which stated that:

Reminder: @ResearchGate and @academia are networking sites, not #openaccess repositories

As shown in the accompanying screenshot this tweet contained an image which highlighted some concerns regarding use of Academia.edu and ResearchGate. However the first part of the tweet highlighted an important aspect of these services which is typically not provided by institutional repositories: @ResearchGate and @academia are networking sites.

It is worth expanding on this summary slightly, based on the evidence given above:

@ResearchGate and @academia are popular networking sites, with content likely to be more easily found using Google than content hosted on institutional repositories. 

In addition the services may also enhance the visibility of resources hosted on institutional repositories:

Providing links from @ResearchGate and @academia to content hosted on institutional repositories should provide SEO benefits, and make the content of institutional repositories more easily found using search engines such as Google.

[Image: Opus top author statistics for December 2015]

This was the conclusion of the survey published in the paper which asked “Can LinkedIn and Academia.edu Enhance Access to Open Repositories?“. Revisiting the University of Bath’s Opus repository usage statistics, it can be seen that my papers are the most-viewed of all researchers (and, interestingly, my former UKOLN colleagues Alex Ball and Emma Tonkin are to be found in the top 5 researchers based on download statistics).

This, of course, does not necessarily provide evidence of the quality of the papers; rather, as described in the paper cited above, it suggests that providing in-bound links from popular services will enhance the Google ranking of papers hosted by the repository.


Rather than developing open alternatives to Academia.edu and ResearchGate, my feeling is that the existing infrastructure of institutional repositories and services such as Academia.edu and ResearchGate can be used in conjunction: the institutional repository providing robust and secure management of content, and the researcher profiling services providing SEO benefits in addition to the community benefits these social networking services offer researchers.

Such use of multiple services will also help address the risk of service cessation, which is often highlighted as a danger of using commercial services where there is no formal contractual agreement. It should be noted, of course, that sectoral not-for-profit services may also be closed down, as happened with the Jorum OER repository service, whose closure Jisc announced in June 2015.

Of course, when researchers leave their host institution they may wish to ensure that they continue to have full read/write access to their publications. In this case, storing copies of the papers in the commercial services themselves will provide continued access after they have left their host institution and can no longer manage their publications there; this, incidentally, was the approach I took after leaving UKOLN, University of Bath, in July 2013.

I’d be interested to hear your thoughts on the relevance of commercial research profiling/repository services, whether the sector should look into providing open alternatives and the strategies needed to ensure that such approaches would be successful.


Posted in Evidence, Repositories | Leave a Comment »

MTSR 2015, the 9th Metadata & Semantics Research Conference (and its use of Facebook)

Posted by Brian Kelly on 24 Feb 2015

MTSR 2015, the 9th Metadata and Semantics Research Conference

[Image: MTSR 2015 conference web site]

The 9th Metadata and Semantics Research Conference has recently announced its call for papers, which is also available as a PDF document.

Metadata has been an area of interest to UKOLN, my former organisation. Cetis, my current organisation, also has interests in this area, including recent support for LRMI, the Learning Resource Metadata Initiative.

MTSR 2015, to use the conference’s abbreviation, will take place at the University of Manchester on 9-11th September 2015. As described on the conference home page:

the ninth International Conference on Metadata and Semantics Research (MTSR’15) aims to bring together scholars and practitioners that share a common interest in the interdisciplinary field of metadata, semantics, linked data and ontologies. Participants will share novel knowledge and best practice in the implementation of these semantic technologies across diverse types of Information Environments and applications. These include Cultural Informatics; Open Access Repositories & Digital Libraries; E-learning applications; Search Engine Optimization & Information Retrieval; Research Information Systems and Infrastructures; e-Science and e-Social Science applications; Agriculture, Food and Environment; Bio-Health & Medical Information Systems.

The deadline for submission is 9th May and authors will be notified of acceptance or rejection of their submission on 16th June.

What have you noticed is now a mainstream practice?

[Image: MTSR 2015 Facebook page]

In a post on his OUseful blog over a year ago Tony Hirst described the “What did you notice for the first time today?” exercise which he used in a workshop on Future Technologies and Their Applications which Tony and I co-facilitated at the ILI 2013 conference.

Tony described how this approach could be important for trend spotting: “it may signify that something is becoming mainstream that you hadn’t appreciated before“. However I found trying to reflect on something I had noticed for the first time today too constraining, so I proposed a tweaked version: What Have You Noticed Recently?

However another variant may be “What have you noticed is now a mainstream practice which may have been considered inappropriate in the recent past?“: this might be particularly useful in identifying acceptance of emerging practices and a willingness to accept some level of risk.

This came to me when I noticed that, as shown in the image at the top of this post, the MTSR 2015 conference home page provides details of the MTSR Metadata Semantics Research Conference Facebook page. The Facebook page, which currently has 217 ‘likes’, contains a small number of updates: the launch of the Facebook page, the first announcement of the call for papers, an update to the page’s photograph, details of the conference Twitter account, dates for the call for papers and award details.

In addition to this content (which is primarily links to content hosted on the conference web site), as can be seen from the screenshot the Facebook page also provides links to Facebook pages for related content, including Open Repositories 2015 (283 likes, including 3 researchers/librarians I am connected with), the Research Data Alliance (363 likes) and ICCMI 2014 (309 likes).

In answer to the question I posed, “What have you noticed is now a mainstream practice which may have been considered inappropriate in the recent past?”, I can answer “Use of Facebook to promote research conferences and apparent ‘liking’ of the pages by hundreds of researchers and practitioners“.

Or, to generalise this: “An acceptance of the risks of using Facebook by well-educated researchers and library practitioners and an acknowledgement of the benefits which can be gained“.

[Image: MTSR 2015 Facebook statistics]

The use of Facebook to promote research conferences seems no longer to be a question of “should we?” but instead one based on a cost-benefit analysis: can the effort of updating a Facebook page for a conference be justified? Fortunately the Facebook statistics for the page provide usage data to help answer this question (it should also be noted, incidentally, that the conference’s MTSR 2015 Twitter account currently has only 9 followers).

Would you agree that this is now a mainstream practice? Would you also agree that in the past this type of use was frowned upon?

Posted in Events, Evidence, Facebook | Leave a Comment »

Implications for Institutions of a Backlash Against Facebook

Posted by Brian Kelly on 10 Dec 2014

Moves Away from Facebook?

We are apparently seeing a backlash against use of Facebook. Over the weekend a post on Facebook’s struggle to keep teens described how, although in September 2014 Facebook had 864 million daily active users, 703 million mobile daily active users and 1.35 billion monthly active users, “the company faces a significant challenge keeping the service relevant to younger users“. Earlier this week Niall Sclater provided a personal anecdote in a post on How a fifteen year-old Scottish girl uses social media which suggested “Facebook is mainly for older people who want to share baby photos“.

Also over the weekend the Observer included an article by Ben Goldacre entitled When data gets creepy: the secrets we don’t realise we’re giving away. The popularity of this article can be gauged by the 323 comments made on the article to date as well as, ironically, the 7,390 shares made on social media.

But Continued Use By Professionals in Higher Education?

[Image: LinkedIn Facebook group]

The article on Facebook’s struggle to keep teens suggests that teenagers may be moving towards use of messaging applications, although a post published by Piper Jaffray earlier this year described how a Survey finds teens still tiring of Facebook, prefer Instagram.

However, moves towards messaging applications do not seem to be reflected in my professional use of social media services. Twitter continues to be a valuable tool, but I also seem to be making greater use of Facebook for professional purposes, in part helped by the release earlier this year of a bookmarking facility for Facebook posts. I also find that there are various Facebook groups for professional purposes which have been joined by my professional contacts who also make use of Twitter. A good example of this is the Linked Web Data for Education Facebook group, which has 102 members, including 35 I am connected with on various social networks.

In addition to use of Facebook by professionals to support their work as well as their social interests, we also see continued use of Facebook by institutions.

Back in July 2014 a post on Facebook Usage for Russell Group Universities, July 2014 provided figures related to Facebook usage by the 24 Russell Group universities and summarised growth since similar surveys were published in 2011 and 2012.

An updated survey was carried out on 8 December 2014 which includes metrics which weren’t collected in previous surveys, including the numbers of Facebook users ‘talking’ about the institutional Facebook page and the numbers of new ‘likes’ for the page. I hope that this data will provide a benchmark which could help to indicate a decline in interest in Facebook: we might expect Facebook ‘likes’ to continue to grow, since users probably don’t ‘unlike’ pages, but the number of new page ‘likes’ should provide a better indication of current levels of interest. Clearly Facebook page administrators will be able to view the metrics for their institutional Facebook page, but they will not be able to detect sector-wide trends unless such data is shared.
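One simple way to use the new-likes figure as a comparable indicator is to express it as a share of the all-time total, so that a falling share across successive surveys would suggest waning interest. A minimal sketch using the December 2014 Russell Group totals from Appendix 1 (the metric name is mine, not Facebook's; per-page rates would use each row's figures):

```python
# New page 'likes' as a share of all-time 'likes', using the Russell Group
# totals from the December 2014 survey in Appendix 1.
total_likes = 4_460_806  # all-time likes across the 24 institutional pages
new_likes = 37_731       # likes gained in the survey period

new_like_share = 100 * new_likes / total_likes
print(f"New likes were {new_like_share:.2f}% of the all-time total")
```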

It should also be noted that, as described in a Facebook support page, an apparent decline in the numbers of ‘likes’ may be due to Facebook users managing their privacy settings. Earlier this year Facebook announced new privacy controls for Making It Easier to Share With Who You Want, and we are now seeing a range of resources, such as this poster on “How to Stay Safe on Facebook”, which can help to educate users on how to manage Facebook more effectively.

Should Institutions and Experienced Professionals Cease Using Facebook?

However, rather than making decisions on professional use of social media services such as Facebook on business criteria (such as the level of use, the audience profiles, the costs of providing the services and the estimated benefits), might there be an argument that organisations and individuals who place a high value on ethical business practices should cease making use of services which infringe users’ privacy, exploit their intellectual property or unfairly use their dominant position in the market place? We have been here before: back in 2004 the EU ordered Microsoft to pay a fine of over £380 million for abuse of its dominant position in the market. But beyond such infringements of monopoly legislation, which are decided by the courts, might the business decisions being taken by social media companies such as Facebook be regarded as a social injustice which individuals and organisations should face up to, just as many did by disinvesting in South Africa during the apartheid era?

I’ve already mentioned Ben Goldacre’s Observer article which highlighted such dangers: When data gets creepy: the secrets we don’t realise we’re giving away.  And a few days ago Pete Johnston posted a comment on this blog in which he summarised his concerns:

Targetting, profiling and “personalisation” are central, and vast resources are ploughed into trying to gather or infer information about individuals’ activities and preferences based on our behaviour on the Web. Sometimes that data collection is overt and explicit: we are invited to volunteer personal data to a social media service in exchange for access to communication channels and the creation of an online profile without which we are told we are a second-class citizen. Sometimes it is rather more covert, as in the surreptitious tracking of our behaviour across Web sites through ever more complex digital sleight of hand tricks. And that tracking increasingly extends into our physical world behaviour (tracking mobile wireless signals in shopping malls, linking email addresses to “loyalty” cards and so on).

That personal information is gathered, stored, merged with other information, analysed/mined, transferred/bought/sold/brokered/requisitioned/intercepted/lost/found/stolen, and (re)used by different parties for purposes and in contexts over which we have no control – from “personalised offers” to spam to profiling to surveillance to identity theft to ending up on Theresa’s Big List of Domestic Extremists because someone on your “friends” list once “liked” a Bad Book. Services’ promises of ephemerality, security and anonymisation appear to mean little when the price is right (ba-dum-tish) and they can fall back on that get-out clause buried deep in the terms of service.

Your Thoughts

Facebook currently has 864 million daily active users and 703 million mobile daily active users. The data provided in Appendix 1 gives some figures for Facebook’s use across official pages for the 24 Russell Group universities. But is such data irrelevant? I’d welcome your thoughts.

Appendix 1: Institutional Use of Facebook by Russell Group Universities, December 2014

Note that the data provided in the following table is also available as a Google Spreadsheet.

All figures are for December 2014; a dash indicates that no figure was available. The [Link] in each row leads to the institutional Facebook page.

Ref. no.  Institution                       Facebook name                                Likes   Visits  Talking  New page likes
 1   University of Birmingham          unibirmingham                                   10,853  118,057    4,020    4,020  [Link]
 2   University of Bristol             bristoluniversity                               32,715    3,053      950      183  [Link]
 3   University of Cambridge           –                                            1,081,801        –   14,240    8,336  [Link]
 4   Cardiff University                cardiffuni                                      57,660        –      241      200  [Link]
 5   Durham University                 Durham-University/109600695725424               32,636   11,004        –        –  [Link]
 6   University of Exeter              exeteruni                                       34,572   47,314    2,242      202  [Link]
 7   University of Edinburgh           UniversityOfEdinburgh                           87,865   32,402    1,598      552  [Link]
 8   University of Glasgow             glasgowuniversity                               82,736   69,505    4,646      555  [Link]
 9   Imperial College                  imperialcollegelondon                           83,629        –    1,391      437  [Link]
10   King’s College London             Kings-College-London/54237866946                44,189   32,020      544      190  [Link]
11   University of Leeds               universityofleeds                               29,410   20,856    1,557      237  [Link]
12   University of Liverpool           University-of-Liverpool/103803892992025         70,647   48,447        –        –  [Link]
13   LSE                               lseps                                          163,854   50,554    2,140    1,070  [Link]
14   University of Manchester          TheUniversityOfManchester                       67,346   23,170    6,709      567  [Link]
15   Newcastle University              newcastleuniversity                             42,868   19,383    3,038      379  [Link]
16   University of Nottingham          TheUniofNottingham                              72,015   85,596    2,381      279  [Link]
17   University of Oxford              –                                            2,038,667        –   25,012   13,531  [Link]
18   Queen Mary, University of London  QMLNews                                         85,861   21,478    1,132      291  [Link]
19   Queen’s University Belfast        QueensUniversityBelfast                         25,162        –    2,930      209  [Link]
20   University of Sheffield           theuniversityofsheffield                        76,932   83,375    1,011      279  [Link]
21   University of Southampton         unisouthampton                                  58,409   71,726    2,039      384  [Link]
22   University College London         UCLOfficial                                    105,048   83,375    4,938      489  [Link]
23   University of Warwick             warwickuniversity                               55,146        –    1,459      285  [Link]
24   University of York                universityofyork                                20,785   14,605      467       56  [Link]
     TOTAL                                                                         4,460,806  835,920   84,685   37,731


Posted in Evidence, Facebook | 4 Comments »

#1amconf, Altmetrics and Raising the Visibility of One’s Research

Posted by Brian Kelly on 29 Sep 2014

1:AM, the First Altmetrics Conference

[Image: Lanyrd entry for the 1:AM altmetrics conference]

As described in a previous post, the 1:AM conference, the first dedicated altmetrics conference, took place in London last week.

This was a fascinating conference, with lively discussion taking place at the conference and on the #1amconf Twitter back channel.

The conference embraced event amplification technologies, with a number of remote speakers giving their talks using Google Hangouts and all of the plenary talks being live-streamed and made available on the conference’s YouTube channel.

With so much discussion taking place across a range of channels I created a Lanyrd entry for the conference and publicised it on the final day of the conference.

I’m pleased to say that many of the participants and event organisers used the Lanyrd page to provide access to the various reports on the sessions, access to slides used by the speakers and video recordings of the talks, photos of the event and archives of the discussions and arguments which took place on Twitter: at the time of writing links have been added to 35 separate resources.

Altmetrics as an Indicator of Quality or of Interest?

On the first morning of the conference in particular there were lively discussions on the value of altmetrics, with Professor David Colquhoun (@David_Colquhoun) being especially scathing in his criticisms:

To show that altmetrics trivialises and corrupts science is to look at high scoring papers

The blog post on Why you should ignore altmetrics and other bibliometric nightmares mentioned in this tweet generated much discussion on the blog and elsewhere. For those with an interest in this area I recommend reading the post and the follow-up comments, such as this response from Euan Adie, founder of the Altmetric.com company:

Hi David. Thanks for writing the post! I founded Altmetric.com. I think you and Andrew have some fair points, but wanted to clear up the odd bit of confusion.

I think your underlying point about metrics is fair enough (I am happy to disagree quietly!). You’re conflating metrics, altmetrics and attention though.

Before anything else, to be absolutely, completely clear: I don’t believe that you can tell the quality of a paper from numbers (or tweets). The best way to determine the quality of a paper is to read it. I also happen to agree about post publication review and that too much hype harms science. 

Euan concluded his comment by providing a link to his post which suggested that those with interests in the impact of scientific research should Broaden your horizons: impact doesn’t need to be all about academic citations.

The consensus at the conference seemed to be that the view (perhaps based on misunderstandings) that altmetrics would provide an alternative to citation analysis for determining the quality of research, and should determine how research is funded, is no longer widely accepted; instead altmetrics are regarded as complementary to citation data, providing a broader picture, especially of how research is being discussed and debated.

Raising the Visibility of One’s Research: Kudos

In discussions with other participants I heard how the view that researchers (and funders of research) have responsibilities for raising the visibility of their research is becoming accepted: the view that only one’s peers need be interested in the research was felt to be no longer relevant. “We need to be seen to be able to justify funding for research” was one comment I heard.

Back in March 2012 in a post on Marketing for Scientists Martin Fenner made a similar point:

Scientists may feel uncomfortable about marketing their work, but we all are doing it already. We know that giving a presentation at a key meeting can be a boost for our career, and we know about the importance of maintaining an academic homepage listing our research interests and publications. And people reading this blog will understand that a science blog can be a powerful marketing tool.

But if researchers have now accepted the need to raise the visibility of their research, the question is then what tools they can use to support this goal.

[Image: The Kudos dashboard]

The session on Altmetrics in the last year and what’s on the roadmap provided brief summaries of altmetrics applications, including talks about Altmetric, Plum Analytics, Impactstory, PLOS, Mendeley, the Open Access Scholarly Publishing Association and Kudos.

Kudos was the one tool which was new to me. A recent post which describes how Kudos Integrates Altmetric Data to Help Researchers see Online Dissemination of Articles summarised the aim of the service:

Kudos is a new service designed to help scholars and their institutions increase the impact of their published research articles. Altmetric tracks and collates mentions of research articles on social media, blogs, news outlets and other online sources. This integration means mentions are now incorporated on the Kudos metrics pages for individual authors, and accompanied by a short summary which further details the number of mentions per source. Each article is assigned a score based on the amount of attention it has received to date, and authors are able to click through to see a sample of the original mentions of their article.

I have created an account on Kudos and was able to quickly claim many of my research papers. As can be seen from the screenshot of the dashboard, a number of my papers already have an Altmetric score, which is defined as “a reflection of the amount of interest your publication has attracted across news outlets and social media”.

My paper on Accessibility 2.0: Next Steps for Web Accessibility, for example, has an Altmetric score of 6. If I wanted to raise the visibility and impact of the paper, the Kudos tool allows me to:

Explain: Explain your work and tell readers what it’s about and why it’s important.

Enrich: Enrich your publication by adding links to related materials.

Share: Share a link to your publication by email and social media.

Measure: Measure the impact on your publication performance.

Raising the Visibility of One’s Research: Wikipedia

In a recent post entitled Wikimedia and Metrics: A Poster for the 1:AM Altmetrics Conference I described the metrics for Wikipedia articles which may provide indications of the effectiveness of the outreach of an article. The post summarised a poster which was displayed at the conference and which is shown in this post.

As usage metrics may show, Wikipedia can provide a mechanism for raising the visibility of topics described in Wikipedia articles, which can include articles based on research work.

It would appear that Kudos and Wikipedia both provide mechanisms for enhancing interest in research work. But these two tools provide contrasting approaches to the way they support such dissemination work.

With Kudos, authors of research papers are expected to provide summaries of their work by (a) adding a short title to the publication, which makes it easier to find and can help increase citations; (b) adding a simple, non-technical explanation of the publication, which makes it easier to find and more accessible to a broader audience; and (c) adding an explanation of what is most unique and/or timely about the work and the difference it might make, which will help increase readership.

In contrast, content added to Wikipedia should be provided based on the fundamental principles of Wikipedia, known as the five pillars. In brief:

  1. Wikipedia is an encyclopedia: It combines many features of general and specialized encyclopedias, almanacs, and gazetteers. Wikipedia is not a soapbox, an advertising platform, a vanity press, an experiment in anarchy or democracy, an indiscriminate collection of information, or a web directory.
  2. Wikipedia is written from a neutral point of view: We strive for articles that document and explain the major points of view, giving due weight with respect to their prominence in an impartial tone. We avoid advocacy and we characterize information and issues rather than debate them.
  3. Wikipedia is free content that anyone can use, edit, and distribute: Since all editors freely license their work to the public, no editor owns an article and any contributions can and will be mercilessly edited and redistributed. Respect copyright laws, and never plagiarize from sources.
  4. Editors should treat each other with respect and civility: Respect your fellow Wikipedians, even when you disagree. Apply Wikipedia etiquette, and don’t engage in personal attacks. Seek consensus, avoid edit wars, and never disrupt Wikipedia to illustrate a point.
  5. Wikipedia has no firm rules: Wikipedia has policies and guidelines, but they are not carved in stone; their content and interpretation can evolve over time. Their principles and spirit matter more than their literal wording, and sometimes improving Wikipedia requires making an exception.

The second of these principles, which expects Wikipedia articles to be written from a neutral point of view, will be the most challenging for researchers who would like to use Wikipedia to raise the visibility of their research to a wider audience. One of the three core content policies for Wikipedia articles is that content should be provided from a neutral point of view – and it will be difficult to do this if you wish to publish or cite content based on your own research. Another challenge for researchers is a second core content policy which states that Wikipedia articles must not contain original research.

What Is To Be Done?

Perhaps a simple approach for open researchers who are willing to share their experiences openly would be to ensure that the initial desktop research which typically informs a literature review is also used to support existing Wikipedia articles.

However, the bigger challenge is to address the tension between funders’ requirements that the research they fund is widely disseminated and exploited by others and Wikipedia’s requirement for neutrality.

In a recent post on Links From Wikipedia to Russell Group University Repositories I highlighted similar challenges for universities which may be tempted to seek to exploit the SEO benefits which links from Wikipedia to institutional web pages may provide.

In that blog post I cited an article from the PR community which recognised the danger that PR companies can be tempted to provide links to clients’ web sites for similar reasons. In response to concerns raised by the Wikipedia community, Top PR Firms Promise[d] They Won’t Edit Clients’ Wikipedia Entries on the Sly. The article describes the Statement on Wikipedia from participating communications firms, which is hosted on Wikipedia. The following statement was issued on 10 June 2014:

On behalf of our firms, we recognize Wikipedia’s unique and important role as a public knowledge resource. We also acknowledge that the prior actions of some in our industry have led to a challenging relationship with the community of Wikipedia editors. Our firms believe that it is in the best interest of our industry, and Wikipedia users at large, that Wikipedia fulfill its mission of developing an accurate and objective online encyclopedia. Therefore, it is wise for communications professionals to follow Wikipedia policies as part of ethical engagement practices. We therefore publicly state and commit, on behalf of our respective firms, to the best of our ability, to abide by the following principles:

  • To seek to better understand the fundamental principles guiding Wikipedia and other Wikimedia projects.
  • To act in accordance with Wikipedia’s policies and guidelines, particularly those related to “conflict of interest.”
  • To abide by the Wikimedia Foundation’s Terms of Use.
  • To the extent we become aware of potential violations of Wikipedia policies by our respective firms, to investigate the matter and seek corrective action, as appropriate and consistent with our policies.
  • Beyond our own firms, to take steps to publicize our views and counsel our clients and peers to conduct themselves accordingly.

We also seek opportunities for a productive and transparent dialogue with Wikipedia editors, inasmuch as we can provide accurate, up-to-date, and verifiable information that helps Wikipedia better achieve its goals.

A significant improvement in relations between our two communities may not occur quickly or easily, but it is our intention to do what we can to create a long-term positive change and contribute toward Wikipedia’s continued success.

Might research councils and other funders of research find it useful to embrace similar principles? And is there a role for research librarians, and others with responsibilities for supporting the research community, in developing guidelines which help ensure that researchers use Wikipedia in ways that support the principles which have made the encyclopedia a valuable source of information?


Posted in Events, Evidence, Wikipedia | Tagged: | 2 Comments »

Analytics Events: For Learning and For Research

Posted by Brian Kelly on 24 Sep 2014

Moves Towards Analysis of Data

I suspect I am not alone in finding that my interests and activities in my professional life no longer focus primarily on digital content but now encompass data. Two events taking place over the next four weeks may be of interest to those concerned with the analysis of data to support learning and research.

The SoLAR Flare Event

The EU-funded LACE (Learning Analytics Community Exchange) project has organised a one-day event which will be held at the Open University on 24 October 2014.

As described on the event booking web site:

This is a networking gathering for everyone interested in learning analytics. Under the auspices of the Society for Learning Analytics Research (SoLAR) and organized by Europe’s Learning Analytics Community Exchange (LACE), this event forms part of an international series. SoLAR Flares provide opportunities to learn what’s going on in learning analytics research and practice, to share resources and experience, and to forge valuable new connections.

SoLAR defines learning analytics as ‘the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimizing learning and the environments in which it occurs’. The LACE project is working to promote knowledge exchange and increase the evidence base in this field, so that analytics can be used effectively in a wide range of educational settings, including schools, higher education, workplace learning and within MOOCs.
We therefore invite technology specialists, teachers, researchers, educators, ICT purchasing decision-makers, senior leaders, business intelligence analysts, policy makers, funders, students, and companies to join us in Milton Keynes.

The event is free to attend, so I suggest that you sign up quickly in order to guarantee a place.

1:AM: The First Altmetrics Conference

On Thursday and Friday of this week 1:AM London, the first altmetrics conference, is taking place at the Wellcome Collection, London.

Although the conference is fully subscribed, the organisers are seeking to maximise engagement through event amplification. As described on the event blog:

Can’t make the conference in person, or missed out on a delegate place? Fear not! Along with a blog write up of each session, we’ll be live-tweeting on the #1amconf hashtag, and live streaming on our YouTube channel:

Note that a Twubs archive for the event hashtag is available.

As illustrated in the screenshot, the conference programme begins with a review of recent altmetrics activities followed by a session on how people are currently using altmetrics. Further sessions on the first day cover research communications, ethical implications of research involving social media, impact assessment in the funding sector: the role of altmetrics, and uses of metrics within institutions.

The sessions on the second day cover altmetrics and publishers, lessons learnt, tracking other research outputs, update on standards and a group workshop session to review activity around altmetrics to date, and to propose ideas for future development.

I will be representing Wikimedia UK at the conference and will present a poster on Wikimedia and Metrics.

Are You Attending?

In a recent post I summarised the benefits of Using Twitter to Meet New People on the Way to Conferences. If you are attending either of these events and would be interested in making contact with others you may find the Lanyrd entry for these events of interest. Simply go to the Learning Analytics SoLAR Flare Event in UK or the 1:AM London Lanyrd entries and either track the events of interest or register yourself as a participant or speaker.

As I’ve found with the IWMW event series, the details for the 40 speakers, 59 attendees and 15 people who tracked the IWMW 2013 event can help to identify key members of a community of practice with shared interests. Use of Lanyrd may help, I feel, to support the community of open practitioners who have interests in learning analytics and altmetrics.


Posted in Events, Evidence | Tagged: , | 2 Comments »

The Launch of Twitter’s Analytics Service and Thoughts on Free Alternatives

Posted by Brian Kelly on 1 Sep 2014

The Launch of Twitter’s Analytics Service

It was via a tweet received last week that I first heard the news about the public launch of Twitter’s analytics service:

Today, opened its analytics platform to the public TLDR: Images get more engagement

This tweet was of particular interest as it not only provided news of the new service and a link to a post in which the service was reviewed, but also a brief summary of findings from the analysis of the poster’s use of Twitter, with a suggestion for best practice: “Images get more engagement”. The longer version explained how:

Finally, what Twitter Media and News staff had already told people who are listening is backed up by what they’re showing me: including pictures, maps and graphics in your tweets will raise your “engagement” numbers, at least as measured by people resharing tweets, favoriting them, @mentioning or @replying to them.

As illustrated, the service provides statistics on tweets (potential impressions, engagement and engagement rate). Additional tabs provide information on followers (changes in the numbers of followers and profiles of their gender, location and interests) and Twitter cards.

If you don’t use Twitter, make only small-scale use of the tool or use it purely for social purposes, you probably won’t have an interest in what the analytics may tell you about your usage. However, researchers will increasingly have an interest in altmetrics measures which provide indications of interest in their research outputs. In addition, research support librarians will have an interest in this area in order to support and advise their users. Finally, those involved in digital marketing are likely to be interested in the information provided by this new service.

Other Analytic Tools

There are, of course, a number of other Twitter analytics tools.

I use Tweepsmap which, as illustrated, provides a display of the locations of one’s followers. The free version of the tool also provides information on inactive Twitter followers and other profiles of one’s followers, although a subscription to the premium service is needed for the full range of services.

I also use Sumall which provides similar information for which ‘payment’ consists of a public tweet about one’s metrics which is published weekly:

My week on twitter: 2 New Followers, 3 Mentions, 3.66K Mention Reach, 6 Replies, 14 Retweets. via

The other tool I use is Twtrland. I receive a weekly email summary from this service but have not logged in to the dashboard for some time; however, as shown, the free version does appear to provide a comprehensive set of information.

Finally, I should mention SocialBro. I’ve used this tool in the past and found that the free version was useful in providing recommendations on the best time to tweet and in profiling my followers’ community (e.g. I found that people I follow typically tweet on a daily basis, publish between 1 and 5 tweets daily and follow between 100 and 500 Twitter accounts). This, for me, was a particularly useful insight into ‘normal’ Twitter use patterns and helped confirm my belief that to make effective use of Twitter to support one’s professional interests you will need to achieve a critical mass for your Twitter community.

Unfortunately the free version of this service is only available if you have a total Twitter community (your followers and the accounts you follow) of fewer than 5,000. Since my community exceeds this by a few hundred I am not able to give an update on the information the tool currently provides, but I did find it useful when I first used it.

Your Thoughts

I’d be interested in hearing about other Twitter analytics tools which people find useful, especially free services appropriate for researchers who will not be in a position to afford the premium accounts used by those who work in marketing departments. And is anyone advising researchers on such tools (including the dangers of reading too much into the information provided)?


Posted in Evidence, Twitter | 5 Comments »

Links From Wikipedia to Russell Group University Repositories

Posted by Brian Kelly on 28 Aug 2014

Wikipedia as the Front Matter to all Research

A session at the recent Wikimania conference provided an opportunity for discussion on the topics: “The fount of all knowledge – wikipedia as the front matter to all research“. The abstract describes how:

This discussion focuses on how Wikipedia could become the entry or discovery point to all significant research for the general public, and for scholars who are working just outside of the topic of interest. For most people, even researchers from closely related areas, summaries and explanations of a piece of research can be a crucial means both to discover and to begin to get into a new piece of research.

Currently overviews of research topics are supported through two mechanisms: reviews and “front matter” content. A review is a systematic summary of a field, written by an expert. These go out of date quickly, particularly in rapidly moving areas of research. Front matter is “News and Views” pieces, often found at the “front” of scientific journals that explain newly published research and put it in context. This often includes a discussion of explaining how the research is an important advance and its broader societal implications.

Both of these functions could easily be provided in a more up to date and scalable manner by tapping into a global community of experts. Wikipedia articles are often the top web search result for initial queries in many research areas and these articles are a major source of traffic for scientific journals. As the first port of call for many users of research and a significant discovery route the potential for Wikipedia as a form of dynamic, expertly curated “front matter” for the whole research literature is substantial. This facilitated discussion session will focus on how this role could be enhanced, what is currently missing and what risks exist in taking this route.

Reading this I wondered about the extent to which Wikipedia articles currently link to papers hosted in institutional repositories.

In order to explore this question I made use of Wikipedia’s External links search tool to monitor the number of links from Wikipedia pages to the institutional repositories provided by the Russell Group universities.

The survey was carried out on 28 August 2014 using this tool. Note that the current findings can be obtained by following the link in the final column.

Table 1: Numbers of Links from Wikipedia to Repositories Hosted at Russell Group Universities


Institutional Repository Details | Nos. of links from Wikipedia | View Results
1   2 [Link]
Institution: University of Bristol
Repository used: ROSE (
  6 [Link]
3  82  [Link]
Institution: Cardiff University
Repository used: ORCA (
   1  [Link]
Institution: University of Durham
Repository used: DRO (
109  [Link]
6  55 [Link]
Institution: University of Exeter
 17 [Link]
Institution: University of Glasgow
120 [Link]
Institution: Imperial College
   5 [Link]
Repository used: King’s Research Portal (
  45 [Link]
Institution: University of Leeds
  65 [Link]
12    1 [Link]
 186 [Link]
14    74 [Link]
Institution: Newcastle University
   4 [Link]
16   10 [Link]
Institution: University of Oxford
Repository used: ORA (
   19 [Link]
Repository used: QMRO (
  15 [Link]
19     3 [Link]
Repository used: The University of Sheffield also uses the White Rose repository, which is also used by Leeds and York. See the Leeds entry for the statistics.
 (65) [Link]
21  134 [Link]
22   98 [Link]
Institution: University of Warwick
Repository used: WRAP (
  57 [Link]
Institution: University of York
Repository used: The University of York uses the White Rose repository, which is also used by Leeds and Sheffield. See the Leeds entry for the statistics.
  (65) [Link]
Total: 1,108


  • The URLs of the repositories are taken from the OpenDOAR service.
  • Since the universities of Leeds, Sheffield and York share a repository the figures are provided in the entry for Leeds.
  • A number of institutions appear to host more than one research repository. In such cases the repository which appears to be the main research repository for the institution is used.


The Survey Methodology

It should be noted that this initial survey does not pretend to answer the question “How many research papers hosted by institutional repositories provided by Russell Group universities are cited in Wikipedia articles?” Rather, the survey reflects the use of this blog as an ‘open notebook’ in which the initial steps in gathering evidence are documented openly in order to solicit feedback on the methodology. This post also documents flaws and limitations in the methodology so that others who may wish to use similar approaches are aware of them. Possible ways in which such limitations can be addressed are given, and feedback is welcomed.

In particular it should be noted that the search engine used in the survey covers all public pages on the Wikipedia web site and not just Wikipedia articles. It includes Talk pages and user profile pages.

In addition the repository web sites include a variety of resources and not just research papers; for example it was observed that some user profile pages for researchers provide links to their profile on their institutional repository.

It was also noticed that some of the files linked to from Wikipedia were listed in the search results as PDFs. Since it seems likely that PDFs hosted on institutional repositories and referenced on Wikipedia will be research papers, a more accurate count of the research papers cited in Wikipedia may be obtained by filtering the findings to include only PDF results.

In addition, if the findings from the search tool were restricted to Wikipedia articles only (omitting Talk pages, user profile pages, etc.) we would get a better understanding of the extent to which Wikipedia is being used as the “front matter” to research hosted in Russell Group university institutional repositories.
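The two refinements described above can be applied programmatically. The sketch below is an illustration rather than the method used for Table 1: it builds a query for the MediaWiki `exturlusage` API (which backs the External links search tool), restricting results to the article namespace, and filters the hits to PDF URLs. The repository domain in the usage note is one of the White Rose hosts and is used purely as an example.

```python
def exturlusage_params(domain, namespace=0, limit=500):
    """Build query parameters for the MediaWiki `exturlusage` API.

    eunamespace=0 restricts results to Wikipedia articles, excluding
    Talk pages and user profile pages.
    """
    return {
        "action": "query",
        "list": "exturlusage",
        "euquery": domain,         # host to search for, without protocol
        "eunamespace": namespace,  # 0 = article namespace only
        "eulimit": limit,
        "format": "json",
    }

def count_pdf_links(results):
    """Count hits whose target URL looks like a PDF (likely a research paper)."""
    return sum(1 for hit in results
               if hit["url"].lower().split("?")[0].endswith(".pdf"))

# Example usage against the live API (requires network and the requests library):
#   import requests
#   r = requests.get("https://en.wikipedia.org/w/api.php",
#                    params=exturlusage_params("eprints.whiterose.ac.uk"))
#   hits = r.json()["query"]["exturlusage"]
#   print(len(hits), count_pdf_links(hits))
```

Results beyond `eulimit` would need to be fetched by following the API’s continuation parameter, so the counts above are a lower bound for heavily linked repositories.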

If any Wikipedia developers would be interested in taking up this challenge, this could help to provide a more meaningful benchmark which could be useful in monitoring trends.

Policy Implications of Encouraging Wikipedia to Act as the Front Matter to Research

There are risks when gathering such data that observers with vested interests will seek to make too much of the findings if they suggest a league table, particularly if there appear to be runaway leaders.

However, as can be seen from the accompanying pie chart, in this case no single institutional repository has more than 17% of the total number of links (and remember that these figures are flawed for the reasons summarised above).

However, there will be interesting policy implications if universities agree with the suggestion that Wikipedia can act as “the front matter to all research”, especially if links from Wikipedia to an institution’s repository result in increased traffic to the repository. Another way of characterising the proposal would be to suggest that Wikipedia can act as “the marketing tool for an institution’s research outputs”.

This could easily lead to institutions failing to abide by Wikipedia’s core principles regarding providing content updates from a neutral point of view and a failure to abide by the Wikimedia Foundation’s terms of use.

Earlier today I came across an article entitled “So who’s editing the SNHU Wikipedia page?” which described how analysis of editing patterns and deviations from the norm may be indicative of inappropriate Wikipedia editing strategies, such as pay-for updates to institutional Wikipedia articles.

The article also pointed out how the PR sector has responded to criticisms that PR companies have been failing to abide by the Wikimedia Foundation’s terms of use: Top PR Firms Promise They Won’t Edit Clients’ Wikipedia Entries on the Sly. The article describes the Statement on Wikipedia from participating communications firms which is hosted on Wikipedia. The following statement was issued on 10 June 2014:

On behalf of our firms, we recognize Wikipedia’s unique and important role as a public knowledge resource. We also acknowledge that the prior actions of some in our industry have led to a challenging relationship with the community of Wikipedia editors.

Our firms believe that it is in the best interest of our industry, and Wikipedia users at large, that Wikipedia fulfill its mission of developing an accurate and objective online encyclopedia. Therefore, it is wise for communications professionals to follow Wikipedia policies as part of ethical engagement practices.

We therefore publicly state and commit, on behalf of our respective firms, to the best of our ability, to abide by the following principles:

  • To seek to better understand the fundamental principles guiding Wikipedia and other Wikimedia projects.
  • To act in accordance with Wikipedia’s policies and guidelines, particularly those related to “conflict of interest.”
  • To abide by the Wikimedia Foundation’s Terms of Use.
  • To the extent we become aware of potential violations of Wikipedia policies by our respective firms, to investigate the matter and seek corrective action, as appropriate and consistent with our policies.
  • Beyond our own firms, to take steps to publicize our views and counsel our clients and peers to conduct themselves accordingly.

We also seek opportunities for a productive and transparent dialogue with Wikipedia editors, inasmuch as we can provide accurate, up-to-date, and verifiable information that helps Wikipedia better achieve its goals.

A significant improvement in relations between our two communities may not occur quickly or easily, but it is our intention to do what we can to create a long-term positive change and contribute toward Wikipedia’s continued success.

If we wish to see Wikipedia acting as the front matter to research provided by the university sector should we be seeking to develop a similar statement on how we will do this whilst ensuring that we act in accordance with Wikipedia’s policies and guidelines? Of course the challenge would then be to identify what the appropriate best practices should be.


Posted in Evidence, Repositories, Wikipedia | 2 Comments »

Facebook Usage for Russell Group Universities, July 2014

Posted by Brian Kelly on 14 Jul 2014

Facebook Usage for Russell Group Universities

In order to gather evidence of use of Facebook in the higher education sector, periodic surveys of usage of official institutional Facebook pages have been carried out for the Russell Group universities since January 2011. The last survey was carried out on 31 July 2012, the day before the number of Russell Group universities grew from 20 to 24.

The aim of the surveys is to provide factual evidence which can be used to inform policy decisions on institutional use of social media and corresponding operational practices and stimulate debate.

The latest survey has just been carried out. It is intended that the survey will help inform discussions at the IWMW 2014 event, which starts on Wednesday.

Note that the data provided in the following table is also available as a Google Spreadsheet.

Ref. No. | Institution and Web site link | Facebook name and link | Nos. of Likes (Jan 2011) | Nos. of Likes (Sep 2011) | Nos. of Likes (May 2012) | Nos. of Likes (Jul 2012) | Nos. of Likes (Jul 2014) | % increase since Aug 2012
 1 Institution: University of Birmingham
Fb name: unibirmingham
8,558  14,182  18,611   20,756   88,694    327%
 2 Institution: University of Bristol
Fb name: University-of-Bristol/108242009204639
2,186   7,913  11,480  12,357   27,071    219%
 3 Institution: University of Cambridge
58,392 105,645 153,000 168,000  787,347    369%
 4 Institution: Cardiff University
Fb name: cardiffuni
20,035  25,945   30,648  31,989   51,108      60%
 5 Institution: Durham University
Fb name: Durham-University/109600695725424
N.A.  N.A.   N.A.  10,843   31,153   187%
 6 Institution: University of Exeter
Fb name: exeteruni
N.A.  N.A.   N.A. 15,387    29,054    89%
 7 Institution: University of Edinburgh
Fb name: UniversityOfEdinburgh
(Page URL changed since first survey)
 12,053   24,507   27,574    70,667  156%
 8 Institution: University of Glasgow
Fb name: glasgowuniversity
  1,860   27,149  29,840    68,667  130%
 9 Institution: Imperial College
Fb name: imperialcollegelondon
5,490  10,257  16,444  19,020    68,347   259%
10 Institution: King’s College London
Fb name: Kings-College-London/54237866946
2,047   3,587   5,384   7,534   37,370   396%
11 Institution: University of Leeds
Fb name: universityofleeds
   899   2,143    3,091   20,722    570%
12 Institution: University of Liverpool
Fb name: livuni University-of-Liverpool/103803892992025
(Page URL changed since surveys in May and August 2012)
2,811  3,742   4,410   5,239    63,790   1,118%
13 Institution: LSE
Fb name: lseps
22,798  32,290 43,716   50,287  134,799     168%
14 Institution: University of Manchester
Fb name: University-Of-Manchester/365078871967 – TheUniversityOfManchester (Page URL changed for this survey)
1,978   4,734   9,356   13,751  51,659    278%
15 Institution: Newcastle University
Fb name: newcastleuniversity
    115      693    1,084    34,975   3,126%
16 Institution: University of Nottingham
Fb name: TheUniofNottingham
3,588    9,991  14,692   17,133   119,444      597%
17 Institution: University of Oxford
137,395 293,010 541,000 628,000 1,564,871     149%
18 Institution: Queen Mary, University of London
Fb name: Queen-Mary-University-of-London/107998909223423 – QMLNews (Page URL changed for this survey)
N.A.  N.A.  N.A.  13,362    55,545     316%
19 Institution: Queen’s University Belfast
Fb name: QueensUniversityBelfast
5,211   10,063   16,989    19,783       16%
20 Institution: University of Sheffield
Fb name: theuniversityofsheffield
6,646 12,412  19,308   22,746    67,472     197%
21 Institution: University of Southampton
Fb name: unisouthampton
3,328 6,387  18,062   19,790   49,876    152%
22 Institution: University College London
Fb name: UCLOfficial
977 4,346  33,853  37,493    91,152   143%
23 Institution: University of Warwick
Fb name: warwickuniversity
8,535 12,112 14,472   15,103    47,204     212%
24 Institution: University of York
Fb name: universityofyork
N.A.  N.A.   N.A.    11,212  19,256      73%
TOTAL 287,767 566,691 998,991 1,116,077   3,600,265    208%
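The final column of the table is simply the percentage growth in ‘likes’ between the July 2012 and July 2014 surveys. A minimal sketch of the calculation, using the University of Birmingham row from the table as a worked example:

```python
def pct_increase(old, new):
    """Percentage growth from `old` likes to `new` likes, rounded to a whole %."""
    return round((new - old) / old * 100)

# University of Birmingham: 20,756 likes (Jul 2012) -> 88,694 likes (Jul 2014)
print(pct_increase(20756, 88694))  # prints 327, matching the table's 327%
```

Note that rows whose page URL changed between surveys may not be strictly comparable, so per-row percentages should be read with the caveats given below the figures.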



Overall Facebook Usage over time: 2011-2014

Figure 1: Overall number of Facebook ‘likes’ for Russell Group universities from January 2011 – July 2014

As can be seen from Figure 1, which shows the growth in the overall number of Facebook ‘likes’ for Russell Group universities from January 2011 to July 2014, there has been significant growth since the last survey. However please note the following caveats:

  • There has been a gap of two years before the latest survey.
  • There are now 24 Russell Group universities as opposed to the 20 covered in the initial set of surveys.

It should also be noted that comparisons of the numbers of ‘likes’ across individual institutions are probably not very meaningful due to the differing numbers of staff and students at each institution. However the trends may be more meaningful, especially the trends across the aggregation of the institutions.

The survey published on 2 August 2012 reported that the number of Facebook ‘likes’ for the 24 Russell Group universities had exceeded 1 million for the first time. However, as shown in Figure 2, over half of these ‘likes’ were for the University of Oxford, with the University of Cambridge the next most popular: these two institutions represented 67% of the total. As can be seen from Figure 3, these two institutions have maintained their positions and now represent 65%.
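The institutional shares quoted above are straightforward to recompute from the survey data. A minimal Python sketch, using Oxford’s figure and the aggregate from the TOTAL row of the table above (I am assuming the column in which the aggregate is 1,116,077 is the survey being described):

```python
def share_of_total(count: int, total: int) -> float:
    """Percentage share of an aggregate, rounded to one decimal place."""
    return round(100 * count / total, 1)

# Figures from the survey table above: Oxford's 'likes' and the
# corresponding aggregate across all 24 Russell Group universities.
oxford_likes = 628_000
total_likes = 1_116_077
print(share_of_total(oxford_likes, total_likes))  # → 56.3, i.e. 'over half'
```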

Figure 2: Facebook ‘Likes’ for Russell Group universities in August 2012


When Facebook was first launched, access was restricted to approved institutions (with the University of Cambridge being the first in the UK to provide accounts for its students). In May 2007 John Kirriemuir felt that Something IS Going On With Facebook! after spotting weak signals of its potential importance. We then saw doubts expressed regarding its relevance for institutions, characterised, perhaps, by the statement “stay out of my space”. The popularity of the service subsequently led to suggestions that there was a need for an open alternative; Diaspora was felt to have the potential to provide one, but as the post on Whatever Happened to Facebook-Killer Diaspora? concluded, the answer was “nothing”.

Now, it would appear, institutional use of Facebook is no longer a policy issue (should we have an account?) but rather raises a number of operational issues to be addressed: How should we manage it? How much effort should we allocate to it? And what metrics should we measure to demonstrate the value we get from the service?

Perhaps these are questions which will be asked at IWMW 2014 later this week.

Posted in Evidence, Facebook | 4 Comments »

Video is now a ‘must have’ in Higher Education – but what are the implications for accessibility?

Posted by Brian Kelly on 22 Apr 2014

“Video is now a ‘must have’ in Higher Education”

A recent tweet from @OpenEduEU (described as ‘Open Education Europa portal is the gateway to European innovative learning’) caught my attention:

RT @RECall_LLP: Video now a ‘must have’ in Higher Education? Report by @Kaltura #lecturecapture #elearning #edtech

The article was based on a survey which received 550 responses. The respondents were drawn from IT, digital media, instructional design, senior administration and faculty departments of K-12 and HE institutions worldwide, and completed an online survey between January and March 2014.

It seems that there is broad agreement that “video has a significantly positive impact on all aspects of the student lifecycle, from attracting and retaining students to enhancing learning, boosting learning outcomes and building stronger alumni relations“.

Note that the full report can be downloaded after completing a registration form.

It should be noted that the report has been published by a company called Kaltura which describes itself as “The leading video platform: video solutions, software and services for video publishing, management, syndication and monetization“. A cynic might suggest that the company has a vested interest in commissioning a survey which shows significant interest in use of video in higher education. I feel that the implications of the survey findings are worth considering, but it would be helpful to have evidence of the popularity of video usage in the UK higher education sector.

YouTube Use in Selected UK Higher Education Institutions

Back in October 2010 in a post entitled How is the UK HE Sector Using YouTube? I explained how “It can be useful for the higher education sector to be able to identify institutional adoption of new services at an early stage so that institutions across the sector are aware of trends and can develop plans to exploit new dissemination channels once the benefits have been demonstrated“.

The post provided benchmark details on YouTube usage statistics for what appeared to be 15 official UK institutional YouTube channels which were easily identifiable at the time, together with details for the University of Bath and the Open University.

A comparison of the usage statistics recorded in the initial survey with the current findings is given in Table 1.

Table 1: Growth of YouTube Usage Across Selected Official UK Universities from October 2010 to April 2014
Institution Total Nos. of Views No. of Subscribers
Oct 2010 Apr 2014 %age
1 Adam Smith College  25,606 1,063,820  4,055% 39 1,758 4,408%
2 Cambridge University 1,189,778 7,200,870  505%  6,921  37,030  435%
3 Coventry University 1,039,817  2,904,121  179%  1,147  3,668 220%
4 Cranfield School of Management      20,607  459,196  2,128%      82  1,502  1,732%
5 Edinburgh University    236,884 1,759,174  643%  1,280  9,338 630%
6 Imperial College    353,355 2,682,861  659%     859  8,131  847%
7 LSBF (London School of Business and Finance)      96,212  676,297  603%     244  2,778  1,039%
8 Leeds Metropolitan University    589,659 1,675,534  184%     512  2,465  381%
9 Nottingham University    284,820 2,151,187  655%     596  7,038  1,081%
10 The Open University    392,720    872,706  122%  2,944  16,562  463%
11 Said Business School, University of Oxford    660,541  1,545,331  134%  1,808  6,598 265%
 12 St George’s, University of London    338,276  1,209,538   258%     825  2,650      221%
 13 UCL    287,198 1,491,114  419%     810  5,718  606%
 14 University of Derby    117,906  758,874  544%     106  1,144 979%
15 University of Warwick     90,608 439,492   385%     276  1,520 451%
TOTALS  5,722,987 26,890,115    370%  18,449  107,900 485%

The survey carried out in October 2010 also provided statistics for additional UK University YouTube accounts which were found. A comparison with the current findings is given in Table 2.

Table 2: Growth of YouTube Usage Across Selected UK Universities from October 2010 to April 2014
Institution Total Nos. of Views No. of Subscribers
Oct 2010 Apr 2014 %age
1 University of Bristol     18,171     56,651     212%     27      83     207%
2 Coventry University (CovStudent) 1,036,671 2,904,121     181% 1,139 3,668     222%
3 RHULLibrary       3,847      8,000     108%     10      27    170%
4 Aston University      (89,080)      –  –  (132)  –   –
5 UoL International Programmes 74,017 1,522,574  1,957% 499 5,640 1,030%
6 University of Greenwich          9,254     388,501   4,098%      19    712   3,647%
7 Northumbriauni          6,226     389,268   6,104%      23    412   1,691%
8 Huddersfield University International study 24,195 76,373     216% 22 111 405%
9 The University of Leicester 246,986 2,304,959 833% 320 5,019 1,468%
10 University of Kent 26,996 178,207 560% 102 935 817%
11 Canterbury Christ Church University 25,439 60,755 139% 36 244 578%
 12 Open University     391,625    872,706   139% 2,936 16,557      464%
 13 University of Bath 252,850 675,769   167% 93 1,196 1,186%
TOTALS 2,116,277 9,438,244  346%  5,226  34,604 562%

Note that the channel for Aston University from the initial survey no longer exists. In order to provide comparable statistics the data from the initial survey has been omitted. Also note that the data in the tables was collected on 7 October 2010 and 20 April 2014.
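The %age columns in both tables are simple growth figures; for example, Cambridge University’s total views in Table 1 grew from 1,189,778 to 7,200,870, i.e. by 505%. A short Python sketch of the calculation:

```python
def growth_pct(old: int, new: int) -> int:
    """Percentage growth from old to new, rounded to the nearest whole number."""
    return round(100 * (new - old) / old)

# Cambridge University's total views, Oct 2010 → Apr 2014 (Table 1)
print(growth_pct(1_189_778, 7_200_870))  # → 505

# Adam Smith College's total views over the same period (Table 1)
print(growth_pct(25_606, 1_063_820))  # → 4055
```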


The tables provide evidence of the, perhaps unsurprising, popularity of video usage in the UK higher education sector.

It should be pointed out that this information is based solely on use of YouTube. Institutions are likely to make use of a number of other video delivery services (the University of Leeds, for example, has an official YouTube channel which has 246,989 views and 949 subscribers and also a Lutube video service which currently hosts 3,447 public videos, although no download statistics appear to be available). Based on the sample evidence it would appear that we can agree with the statement “Video is now a ‘must have’ in Higher Education“.

This will have many implications for the sector including the question of what video management and delivery tools should be used. But in this post I wish to focus on the accessibility implications of greater use of video resources.

Accessibility Considerations

Institutional Accessibility Policy Statements

In a recent webinar on ‘MOOCs and Inclusive Practice’  I gave a brief presentation on Accessibility, Inclusivity and MOOCs: What Can BS 8878 Offer?.

In the presentation I suggested that institutional accessibility policy statements were likely to be based on WCAG conformance. A quick search for accessibility policies available at http:/// helped me to identify two ways in which WCAG policies are used:

  1. The University is committed to ensuring that all web pages are compliant with WCAG guidelines
  2. The University will seek to ensure that all web pages are compliant with WCAG guidelines

But are policy statements such as (1) achievable in an environment in which significant use is made of video resources? Will all video resources used on institutional web sites be captioned? In light of the greater use of video resources, it would appear to be timely to revisit accessibility statements – it should be noted, for example, that according to the Internet Archive the policy statement shown above is unchanged since at least September 2009.

But would a policy statement of the type shown in (2) be appropriate? Such statements do appear to be very vague. Are there not alternatives between these two extremes?

The Potential for BS 8878

In  my presentation on Accessibility, Inclusivity and MOOCs: What Can BS 8878 Offer? (which is available on Slideshare and embedded below) I suggested that the sector should explore the relevance of BS 8878 Web Accessibility Code of Practice, a British Standard which provides a framework in which appropriate policies can be determined for use in the development and deployment of Web products.

Due to the lack of time during the webinar it was not possible to discuss the details of how BS 8878 could be used in an elearning context. However at the Cetis 2014 conference on Building the Digital Institution I will be co-facilitating a session with Andy Heath which will address the challenge of Building an Accessible Digital Institution. In this session we will “explore how the BS 8878 Web Accessibility Code of practice may address limitations of the WAI approach and see how BS 8878 may be applied in a learning context” and go on to “explore emerging models of accessibility and developing architectures and technical standards“.

Note that the early bird rate (£100 for the 2-day event) for the conference is available until 1 May. I hope that those who have an interest in accessibility for elearning, as well as in the broad range of learning issues which will be addressed at the conference, will consider attending the event.  In the meantime I’d be interested to hear what your current policies and practices are for the accessibility of your elearning resources and, in particular, whether your practices reflect the policies. Feel free to leave a comment on this post.

View Twitter conversations and metrics using: [Topsy] – []

Posted in Accessibility, Evidence | Tagged: | 4 Comments »

Reflections on 16 years at UKOLN (part 5)

Posted by Brian Kelly on 26 Jul 2013

Overview of This Week’s Posts

This week I’ve been posting my reflections on working at UKOLN over the past 16 years. In the first post I described my early involvement with the Web, dating back to December 1992 and how the approaches I took to promoting take-up of the Web across the sector informed my job as UK Web Focus after I started at UKOLN in 1996.

The second post summarised my outreach activities, and this was followed by a post which reviewed my research activities. Yesterday I summarised my work with UKOLN’s core funders and used the work with standards to illustrate the important role which JISC played in adopting a hands-off approach, leaving the work activities to experts across the community.

Evidence-based Policies and Openness

In today’s post, the final one in the series, I’ll reflect on recent work – gathering evidence in order to inform policy and practice – and how the interpretation of the evidence and the formulation of policies and developments to operational practices should be based on a culture of openness.

My interest in this area dates back to 1997 following a successful bid to BLRIC to develop and use monitoring software to analyse trends in use of the Web across the UK’s higher education and library sectors. In 2001 a paper on “Automated Benchmarking Of Local Government Web Sites” was presented at the EuroWeb 2001 conference which described the work of the WebWatch project.

More recently UKOLN and CETIS were involved with the JISC in providing the JISC Observatory. As described in a paper entitled “Reflecting on Yesterday, Understanding Today, Planning for Tomorrow” :

The JISC Observatory provides horizon-scanning of technological developments which may be of relevant for the UK’s higher and further education sectors. The JISC Observatory team has developed systematic processes for the scanning, sense-making and synthesis activities for the work. This paper summarises the JISC Observatory work and related activities carried out by the authors. The paper outlines how the processes can be applied in a local context to ensure that institutions are able to gather evidence in a systematic way and understand and address the limitations of evidence-gathering processes. The paper describes use of open processes for interpreting the evidence and suggests possible implications of the horizon-scanning activities for policy-making and informing operational practices. The paper concludes by encouraging take-up of open approaches in gathering and interpretation of evidence used to inform policy-making in an institutional context.

A series of posts have been published on this blog which have sought to gather evidence of use of various Web technologies across the sector in order to detect trends and encourage discussion on the implication of such trends.

A few days ago I came across evidence of what may perhaps become a significant trend. It seems that the University of Bristol has recently announced a decision to provide Google Apps. Via a tweet they confirmed that this service will be available for both staff and students.

Other Russell Group universities also use Google Apps for Education. Back in May 2009 Chris Sexton, IT Services director at the University of Sheffield, in a post entitled “You can be a victim of your own success”, summarised local reaction to the decision to provide Google Mail for students at the University of Sheffield:

Formally announced the Google mail for students option last night by sending an email to all staff and students. Replies are split almost 50/50. From students saying this is great news, and from staff saying why can’t we have it!

In addition to these institutions I also understand that the universities and colleges at Cambridge, York, Loughborough, De Montfort, London Metropolitan, Leeds Metropolitan, Queen Mary College, Sheffield Hallam, Westminster, Brunel, Portsmouth, Keele, Bath Spa, Lincoln, Aston, Ravensbourne, Birkbeck, Oxford Brookes, SOAS and the Open University all provide Google Apps for Education. Note that additional information may be found using a Google search for “google apps”.


We seem to be seeing the start of what could be a significant trend. And if we were to gather information on institutional use of Microsoft’s Office 365 service it would appear that core office functionality is being migrated to the Cloud. In January 2010 a post entitled Save £1million and Move to the Cloud? summarised experiences at the University of Westminster:

When the University of Westminster asked students what campus email system they wanted, 90% requested Google Apps, which lets colleges and universities provide customized versions of Gmail, Google Docs, Google Calendar, and other services on their school domain

And yet in a recent discussion I heard two IT developers state strongly that “Google own your data if you use Google Apps“. I had to point out the Google terms and conditions which state:

Google claims no ownership or control over any Content submitted, posted or displayed by you on or through Google services. You or a third party licensor, as appropriate, retain all patent, trademark and copyright to any Content you submit, post or display on or through Google services and you are responsible for protecting those rights, as appropriate.

There are clearly many issues which need to be addressed if institutions are considering moving key services to the Cloud: reliability, security, performance, privacy, trust, copyright and other legal issues. But such discussions should, I feel, be carried out in an open and objective manner, which can help ensure that erroneous beliefs can be identified.

In brief, the evidence shows that institutions are migrating office functionality to Google (and perhaps Microsoft). The question may no longer be “Should we move to the Cloud?” but “Can we afford to run such services in-house?” I’d welcome your thoughts on this. I’d also welcome further evidence to inform the discussions – I appreciate that not all institutions I have listed are necessarily using Google Apps for all members of the institution.


Posted in Evidence, General, openness, UKOLN | Tagged: , | 9 Comments »

Reflecting on Yesterday, Understanding Today, Planning for Tomorrow

Posted by Brian Kelly on 3 Jul 2013

The Umbrella 2013 Conference

Yesterday I attended the first day of the Umbrella 2013 conference. The opening day of the two-day conference was full of fascinating talks and interesting discussions – the highlight of which was the closing plenary talk which asked “Is it a bird? Is it a plane? No it’s a librarian?“. But no ordinary librarian – Victoria Treadway, Clinical Librarian at the Wirral University Teaching Hospital NHS Foundation Trust, in an engrossing double act with Doctor Girendra Sadera, described how, by going beyond one’s comfort zone and working closely with others in a team in the hospital’s Critical Care Unit, librarians could literally save lives.

We’re All Information Professionals Now!

If this was the highlight of the first day, there was also an undercurrent related to the uncertainties of the future of the library profession and CILIP, the professional organisation for librarians and information professionals. Perhaps it would appear strange for librarians and information professionals to be uncertain of their future in an information-rich society. But as Annie Mauger (CILIP CEO) tweeted during the opening plenary: “We’re all information professionals“. But if we are all information professionals (Channel 4 news journalists, researchers and, indeed, ordinary people, many of whom will now have to curate increasingly large volumes of digital resources), what differentiates information professionals who choose – or choose not – to belong to a professional organisation?

Reflecting on Yesterday, Understanding Today, Planning for Tomorrow

My contribution to the conference was to present a paper on “Reflecting on Yesterday, Understanding Today, Planning for Tomorrow” which argues that librarians need to adopt evidence-based approaches to planning for the implications of technological developments. The paper summarised the approaches which have been taken by the JISC Observatory and argued that, in light of the imminent demise of the JISC Observatory following the cessation of the core funding for UKOLN and CETIS, institutions may wish to adopt the methodology developed by the JISC Observatory team.

Since the presentation only lasted 20 minutes it was only possible to give an overview of the JISC Observatory team’s work. However, I would hope that the paper (for which Paul Hollins, Director of CETIS, was a co-author) will be published shortly. In addition, an extended version of the slides is available on Slideshare and embedded below.


Posted in Events, Evidence, Web2.0 | Tagged: | 2 Comments »

#altmetrics, My Redundancy Post and the 1-9-90 Rule

Posted by Brian Kelly on 1 May 2013

Measuring Impact in the Digital Environment

How do you assess the impact of digital content which has been published? This is a question which is very relevant in the higher education sector, where indications of success often cannot be reduced to financial indicators. It is a question which is particularly relevant to researchers who have an interest in understanding the ways in which social media can be used to maximise the impact of research papers and scholarly publications. This was a topic which was addressed recently at the UKSG 2013 conference. At the conference Mike Taylor gave a presentation on “altmetrics and the Publisher” in which he admitted the lack of consensus on the value of such approaches:

  • they’re great for measuring impact in [the] diverse scholarly ecosystem
  • Altmetrics are cheap gimmickry that encourage gaming the system, ie dishonesty.

A second talk entitled “altmetrics: What Are They Good For?” was given at the session by Paul Groth. In his trip report Paul commented that “my main point was that altmetrics is at a stage where it can be advantageously used by scholars, projects and institutions not to rank but instead tell a story about their research“.

But there was also an awareness of the need to develop a better understanding of the strengths and weaknesses of altmetrics. We can see the importance of such metrics not only for researchers, but also for organisations which make extensive use of online technologies, through the example of W3C, the organisation responsible for the development of Web standards. In their recent weekly news digest they provided the following statistics:

Notably this week :

– over 900 stories about W3C on Twitter in 7 days.
– over 3000 mentions of W3C in 7 days.
– With 59840 Twitter followers, net increase of 521 followers in the past week.
– 19 posts that posted between Apr 14 – Apr 21 got 29.9K (+17.3%) clicks and reached 69.1K (+0.7%) connections. 

In light of my long-standing interest in metrics I felt it would be useful to explore metrics for blog posts and tweets.

Metrics For My Redundancy Blog Post

An Opportunity to Gather Evidence

Last Wednesday I noticed that on the day the “My Redundancy Letter Arrived Today” blog post was published my blog had received over 3,000 views (more than double the previous most popular daily visits). I realised that this provided an opportunity to explore one aspect of altmetrics: the impact of a blog post on a topic related to one’s professional activities. Since the post was published a week ago today this gives me an opportunity to collate the evidence using a variety of services and develop a better understanding of the strengths and weaknesses of such tools.

Importance of Metrics for Funders

In the past we have been asked to provide metrics related to the services we’ve provided to our funders. I recently updated the footer for blog posts, which previously included icons facilitating ‘frictionless sharing’, to include a number of links to services which provide evidence of the extent of such sharing (although, as pointed out by Alun Hughes, who chaired the review of UKOLN and CETIS, the work of the review group was subsequently overtaken by internal changes within Jisc and the review was not concluded).

The Potential Audience for the Blog Post: TweetReach

In order to estimate the potential audience for the blog post I used the TweetReach tool to obtain estimates of the numbers of Twitter users who may have seen a tweet with a link to the blog post.

As can be seen, the estimated reach at 08.30 today was 77,669, based on 50 of an estimated 145 tweets.

TweetReach also provided statistics on the size of the Twitter communities of those who have tweeted links. As can be seen, most had between 1,000 and 10,000 followers, followed by a significant group with between 10,000 and 100,000 followers.

TweetReach provides an indication of the total reach, with this potential reach being significant due to the numbers of Twitter users with large numbers of followers who included a link to the blog post in their tweets.

But, of course, many of the tweets will not have been seen – most experienced Twitter users will nowadays regard Twitter as a stream of information to be dipped into, and not as information which should always be processed.
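A naive model of how a headline ‘reach’ figure of this kind might be produced is to sum the follower counts of each unique account that tweeted the link. This is only a sketch: TweetReach’s actual methodology is not documented here, and the accounts and follower counts below are invented, chosen so that the example reproduces the 77,669 figure above.

```python
# (account, follower count) pairs for each tweet observed; the same
# account appears more than once if it tweeted the link repeatedly.
tweets = [
    ("alice", 12_400), ("bob", 870), ("alice", 12_400), ("carol", 64_399),
]

# De-duplicate by account so each follower community is counted once,
# then sum the follower counts to estimate the potential audience.
reach = sum(dict(tweets).values())
print(reach)  # → 77669
```

Note that this over-counts audiences which overlap (followers shared by two tweeters are counted twice) and, as discussed below, says nothing about how many followers actually saw a tweet.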

The Tweeters and Retweeters: Topsy

The Topsy tool provides a greater focus on Twitter users who tweet and retweet links to the blog post (although I should add that such information is also provided by TweetReach).

From Topsy it seems that there have been 142 tweets which include links to the blog post.

As well as this headline figure, as illustrated, Topsy also provides a graph of mentions of the post over the past thirty days, as well as an archive of the tweets which contain the link.

Statistics for the Shortened URL:

Finally I should mention the statistics which are provided by the URL shortening service I use in Twitter:

By appending a + to a URL you can get usage figures (by default for the past hour, but the information is also available for an extended period of time).

Looking at the statistics for (and selecting the global option) I find that there have been 1,090 clicks on the ‘bitmark’.

The service also provides location information: over a third of the clicks are from the UK and 12% from the US; since the largest group (39%) are of unknown origin, this leaves a long tail of other countries from which people have followed the link.


This blog post has summarised findings from a number of Twitter analytic services which may be of interest to others who have a need to provide evidence which may help to understand the ‘impact’ of a digital resource.

However, as I have described in a post on Paradata for Online Surveys, I feel that it is important to document the survey methodology and to be open about implied assumptions as well as documenting potential pitfalls for others who may wish to replicate the findings or apply the methodology for themselves in their own context.

Blog Usage Statistics

The first potential pitfall to be aware of is that the blog usage statistics relate to the entire blog, and will include visits during the week to any of the 1,199 posts which have been published previously. The following table therefore gives the number of visits to the Redundancy blog post as well as the number of visits to the blog’s home page during the week (when the post was shown at the top of the page).

Total nos. of blog views, 24-30 April 7,442
Nos. of views of individual post, 24-30 April 5,621
Nos. of views of blog home pages, 24-30 April    765
Total nos. of views of Redundancy post, 24-30 April 6,386

It therefore appears that there have been 6,386 views of the post during the past week, with 1,056 views of other posts on the blog.

Referrer Statistics

How did people arrive at the blog? Looking at the referrer traffic for the past 7 days for the entire blog, we can see that Twitter and Facebook were responsible for delivering most traffic, and that these two social media services were roughly comparable.

However, we need to remember that referrer traffic is only recorded when a web link is followed. If visitors arrive by following a link in an email message or a dedicated Twitter client, no referrer information is provided. Aggregating the referrer views, it seems that 2,043 came from an identifiable web site, with 5,399 views of all posts during the week coming either from a non-web source or, possibly, an anonymous web source (e.g. a user who visits sites using an anonymising tool).
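The arithmetic above can be made explicit (figures from this post’s statistics):

```python
total_views = 7_442     # all views of the blog, 24-30 April
referred_views = 2_043  # views arriving from an identifiable web site

# Whatever is left over must have come from a non-web (or anonymised) source.
non_web = total_views - referred_views
print(non_web)  # → 5399
```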

A summary of the top three ways in which people viewed content on this Web site during the past week is summarised below.

Twitter Web site    555
Facebook Web site    508
Potential non-Web traffic 5,399

This seems a clear indication of the role of the social Web in delivering traffic for, admittedly, a post with human interest. Such findings will not necessarily apply in other areas, but it seems to me that such small-scale indications might be useful in identifying ‘weak signals’ which would be worth investigating further.

Does the 1-9-90 Rule Apply?

As described in Wikipedia:

In Internet culture, the 1% rule or the 90–9–1 principle (sometimes also presented as 89:10:1 ratio)[1] reflects a hypothesis that more people will lurk in a virtual community than will participate. This term is often used to refer to participation inequality in the context of the Internet.

Does this apply in the context of engagement with blog posts, I wondered? In this context I used the following definitions:

  • Lurker: someone who only reads a post.
  • Contributor: someone who facilitates engagement with others by lightweight ‘frictionless’ sharing, such as a tweet, a RT, a vote on the blog post, a Facebook like or a Google +1.
  • Creator: someone who creates new content by submitting a blog comment or commenting on Facebook.

The findings are summarised below.

Role Activity Numbers  Percentage
‘Lurkers’ View blog post    6,386 96%
‘Contributors’ Tweet about post       142 2.3%
               Vote on blog post        11
‘Creators’ Blog comments                68 1.5%
           Comments on Facebook post    32
Total    6,639

One observation I would make is that tweets about the post are only included if they contained a link to the post. Since subsequent discussions were not included, due to the difficulties in finding such tweets, the Contributors count is understated. It therefore appears that the 1-9-90 rule may not be too far out in this case.

I’ll be the first to admit that the distinction between a contributor and a creator is somewhat arbitrary: someone who spends time composing a relevant tweet in 140 characters (such as “A poignant, perceptive and yet defiantly uplifting post from @briankelly“) is clearly being creative. However, posting a tweet will normally be a frictionless activity carried out in one’s current application environment, unlike posting a comment, which is likely to involve following a link, clicking a button and filling in authentication details before creating the content. I’m therefore happy to propose this as a possible approach for monitoring the extent of engagement with digital content. Might this be an approach which others may be interested in helping to develop and refine?
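The percentages in the table above can be reproduced in a few lines of Python, grouping the activities according to the lurker/contributor/creator definitions given earlier (figures from this post’s data):

```python
counts = {
    "lurkers": 6_386,          # viewed the blog post only
    "contributors": 142 + 11,  # tweets with a link + votes on the post
    "creators": 68 + 32,       # blog comments + Facebook comments
}
total = sum(counts.values())
pcts = {role: round(100 * n / total, 1) for role, n in counts.items()}
print(total, pcts)
# → 6639 {'lurkers': 96.2, 'contributors': 2.3, 'creators': 1.5}
```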


Posted in Evidence, General | 9 Comments »

Signals from Institutions: The University of Edinburgh’s Strategic Goals, Targets and KPIs

Posted by Brian Kelly on 2 Jan 2013

The University of Edinburgh Strategic Plan 2012-2016

As described in a paper on What Next for Libraries? Making Sense of the Future the JISC Observatory “provides horizon-scanning of technological developments which may be of relevance to the UK’s higher and further education sectors“. The paper, available in MS Word and PDF formats, describes the systematic processes for the scanning, sense-making and synthesis activities to support this work. The paper focuses on the processes for observing technical developments. However there is also a need to observe signals of institutional interests in IT developments, especially in light of the recent announcement of Jisc’s objective to “address a number of specific priorities for universities and colleges through the development of resources, tools and supported infrastructure“.

Edinburgh University's strategic goals

Strategic plans published by institutions can provide a valuable starting point to help identify areas of institutional interest. For example, Lorcan Dempsey recently drew attention to the strategic goals which have been identified by the University of Edinburgh:

mm.. U Edinburgh strategy targets include improving citation score in the THE World Uni Rankings. gasp/strategic…

The document, The University of Edinburgh Strategic Plan 2012-2016 (which is available in PDF format), is interesting not so much for the way it identifies strategic goals and the key enablers who will be needed to ensure the goals are attained, but for the list of specific KPIs (Key Performance Indicators) and the associated targets.

Of particular interest is the strategic goal of excellence in research for which the KPI is listed as “Russell Group market share of research income (spend)“. The corresponding targets are:

  • Increase our average number of PhD students per member of academic staff to at least 2.5
  • Increase our score (relative to the highest scoring institution) for the citations-based measure in the THE World University Rankings to at least 94/100

The strategic goal of excellence in innovation states that the KPIs are “Knowledge exchange metrics: number of disclosures, patents, licences and new company formation“. The targets for this goal are:

  • Achieve at least 200 public policy impacts per annum
  • Increase our economic impact, measured by GVA, by at least 8%

The Importance of Metrics

It is interesting to see how the University of Edinburgh clearly has targets which are based on measurable criteria (emphasis added):

  • “Increase our average number of PhD students per member of academic staff to at least 2.5“
  • “Increase our score … for the citations-based measure in the THE World University Rankings to at least 94/100“
  • “Achieve at least 200 public policy impacts per annum“
  • “Increase our economic impact, measured by GVA, by at least 8%“
  • “Increase the proportion of our building condition at grades A and B on a year-on-year basis, aiming for at least 90% by 2020“
  • “Increase our total income per staff FTE year-on-year, aiming for an increase of at least 10% in real terms“
  • “Increase the level of overall satisfaction expressed in responses to the NSS, PTES and PRES student surveys to at least 88%“
  • “Increase the number of our students who have achieved the Edinburgh Award to at least 500“
  • “Create at least 800 new opportunities for our students to gain an international experience as part of their Edinburgh degree“
  • “Increase our headcount of non-EU international students by at least 2,000“
  • “Increase our research grant income from EU and other overseas sources so that we enter the Russell Group upper quartile“
  • “Increase our number of masters students on programmes established through our Global Academies by at least 500“
  • “Reduce absolute CO2 emissions by 29% by 2020, against a 2007 baseline (interim target of 20% savings by 2015)“
  • “Increase our number of PhD students on programmes jointly awarded with international partners by at least 50%“
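Targets of this form reduce to simple progress arithmetic. A minimal sketch, with invented baseline and current figures (the Strategic Plan itself does not publish these):

```python
# Hedged sketch: progress towards a quantitative KPI target, as a
# fraction of the distance from a baseline. All figures are invented
# for illustration; they are not taken from the Edinburgh plan.

def progress(baseline, current, target):
    """Fraction of the distance from baseline to target achieved."""
    return (current - baseline) / (target - baseline)

# e.g. "at least 2.5 PhD students per member of academic staff"
print(f"{progress(baseline=2.0, current=2.3, target=2.5):.0%}")  # 60%
```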

The importance of metrics in the context of learning is being addressed by CETIS, with the CETIS Analytics Series being announced by Sheila MacNeill on 23 November 2012, with a follow-up post the next week addressing Legal, Risk and Ethical Aspects of Analytics in Education. The following week Sheila provided a broader perspective in a post on Analytics for Understanding Research, with the series of posts concluding with one on Institutional Readiness for Analytics – practice and policy.

Prior to CETIS’s work in this area the importance of metrics had been identified by the JISC in 2010 when they asked UKOLN to facilitate the Evidence, Impact, Metrics activity. A series of reports on this work were published just over a year ago. As described in the document on Why the Need for this Work?:

There is a need for publicly-funded organisations, such as higher education institutions, to provide evidence of the value of the services they provide. Such accountability has always been required, but at a time of economic concerns the need to gather, analyse and publicise evidence of such value is even more pressing.

Unlike commercial organisations it is not normally possible to make use of financial evidence (e.g. profits, turnover, etc) in public sector organisations. There is therefore a need to develop other approaches which can support evidence-based accounts of the value of our services.

A series of three workshops were held between November 2010 and July 2011. It was interesting to reflect on how, at the initial workshop, there was a feeling that an emphasis on metrics could be counter-productive in failing to appreciate the complexities of the work being carried out in the higher education sector. However the feedback from the second workshop included an awareness of the need for “More strategic consideration of gathering evidence (both for our own purposes and those of projects we work with/evaluate)“. The work concluded by highlighting the importance of metric-based approaches for projects:

Why should I bother with metrics?
Metrics can provide quantitative evidence of the value of aspects of project work. Metrics which indicate the success of a project can be useful in promoting the value of the work. Metrics can also be useful in helping to identify failures and limitations which may help to inform decisions on continued work in the area addressed by the metrics.

What are the benefits for funders?
In addition to providing supporting evidence of the benefits of successful projects funders can also benefit by obtaining quantitative evidence from a range of projects which can be used to help identify emerging patterns of usage.

What are the benefits for projects?
Metrics can inform project development work by helping to identify deviations from expected behaviours of usage patterns and inform decision-making processes.

What are the risks in using metrics?
Metrics only give a partial understanding and need to be interpreted carefully. Metrics could lead to the publication of league tables, with risks that projects seek to maximise their metrics rather than treating metrics as a proxy indicator of value.

It will be interesting to see if other institutions emulate the University of Edinburgh in stating specific targets for their institutional strategic plans – and how pressures on staff within the institutions to achieve the targets affects operational practices.

Is anyone aware of other institutions which are taking similar approaches?


Posted in Evidence, General | Tagged: | 1 Comment »

Performance Analytics: Twitter, 20Feet and Crowdbooster

Posted by Brian Kelly on 14 Dec 2012

CETIS Series of Analytics Briefing Papers

Adam Cooper, CETIS Director, recently published a post in which he tried to answer the question What does “Analytics” Mean? (or is it just another vacuous buzz word?). In the post Adam asks the question:

But is analytics like cloud computing, is the word itself useful? Can a useful and clear meaning, or even a definition, of analytics be determined?

Adam concludes that “the answer is ‘yes’” and describes how this definition is explained in a CETIS briefing paper on What is Analytics? Definition and Essential Characteristics. Adam’s post also introduces the CETIS series of briefing papers on Analytics, which includes papers on Analytics; what is changing and why does it matter?, Analytics for the whole institution, Analytics for Learning and Teaching, Legal, Risk and Ethical Aspects of Analytics in Higher Education, Analytics for Understanding Research and A Framework of Characteristics for Analytics.

The What is Analytics? Definition and Essential Characteristics briefing paper provides the following useful pithy definition:

Analytics is the process of developing actionable insights through problem definition and the application of statistical models and analysis against existing and/or simulated future data.

The CETIS work in this area has a focus on learning analytics, which reflects their core area of interest and expertise. However there are other areas of interest which are of relevance to the higher education sector. In addition there are approaches which have been taken to analytics beyond our sector which may provide useful insights.

Beyond Learning Analytics

Adam Cooper’s blog post concludes by encouraging people to focus on the applications of use of analytics, rather than seeking formal definitions:

Rather than say what business analytics, learning analytics, research analytics, etc is, I think we should focus on the applications, the questions and the people who care about these things.

I would agree with this approach. An example of the possible dangers in focussing on the terms being used and the associated definitions can be seen in the discussions surrounding altmetrics. As highlighted by Jean Liu in a post on Metrics and Beyond @ SpotOn London 2012:

A commonly held assumption about alt-metrics is that they are meant to replace traditional measures of research impact like citation counts. Actually most in the field (us included) think that alt-metrics should complement traditional metrics, not eliminate them altogether.

Although I recently commented on the need to understand the limits of altmetrics in a post on Understanding the Limits of Altmetrics: Slideshare Statistics in this post I want to focus on what I will refer to as performance analytics.

Performance Analytics in Sport and Hobbies

Wikipedia defines performance metrics as “a measure of an organization’s activities and performance. Performance metrics should support a range of stakeholder needs from customers, shareholders to employees“. We can see how such approaches can be applied in areas such as sport from an article published in the Guardian in August 2012 which described how Manchester City were to open the archive on player data and statistics.

On a personal level in a post entitled Personal Perspectives on How Metrics Can Influence Practice I described the judge’s marks for last year’s rapper sword dancing competition (in which we came bottom of our group). The evidence of our low marks led to a decision to change our approaches to the dance, to the structure of the team and our weekly practices. The scores provided us with ‘actionable insights‘ into our performance which led to subsequent changes in behaviour.

I have noticed a growing interest in performance analytics across people I follow on Twitter, with a number of people in my network having purchased a Fitbit gadget and, judging by the tweets I see in my Twitter stream, the Runkeeper app on their iPhone or Android device. If you’re looking for a Christmas present for a gadget-minded friend who is starting to think about their fitness, the Digital Trends Web site provides suggestions for Eight fitness gadgets that actually work.

Twitter Analytics

If metrics can provide insights into real world activities such as football, sword dancing, running and walking, then their relevance in a digital environment would appear obvious.

But should one care about performance analytics for activities such as use of Twitter?

John Spencer in a post entitled Twitter Isn’t a Tool has explained how he is unhappy with “organizations inquir[ing] about the best ways to maximize Twitter for professional development“. For John “Twitter isn’t a commodity“. Rather “Twitter is where I go when I want to talk to teacher friends … when I want to hang out with some teachers with my same quirky sense of humor [and] where people challenge my groupthink and push me to rethink my practice“.

@mr_brett_clark was in agreement: “I often describe Twitter like a party“. Curt Ress had a similar view: “I often see Twitter as a cocktail party. Lots of people having quick exchanges amidst a lot of noise. But through time, relationships are formed and real learning happens.“

But although Twitter may be an informal conversational medium which can enhance informal learning, I feel that others may agree with this characterisation and yet still find value in using analytics to “develop actionable insights”. After all, although my hobby, rapper sword dancing, is a fun activity, there is widespread, although by no means universal, agreement that the judging and the competitive nature can improve standards.

And, of course, beyond Twitter’s role in informal learning and social intercourse, the tool is also being used to support formal institutional activities, as can be seen from the survey in August 2012 which showed that there have been almost 50,000 tweets from official Russell Group university Twitter accounts, which have over 300,000 followers.


Crowdbooster: Impressions for Nov 2012

The Crowdbooster service allows you to:

Analyze the performance of your individual tweets and posts with an interactive graph and table to quickly understand what’s working. Customize the date range to understand the impact of your campaign. Drill down to view engagement and reach metrics on Facebook and Twitter.

Use of the free version of the service is illustrated in the accompanying screenshot.

As can be seen you can view the potential impact of tweets on a daily, weekly, monthly basis, over all your tweets or for a customised range.

Crowdbooster: Nos. of followers for Nov 2012

As an example, I have an interest in seeing how the initial announcement of the date of the IWMW 2013 event has been shared across Twitter.

It seems that there have been 11 retweets of the post, which have the potential to have been seen by 8.3K Twitter users. As Twitter users will know, this potential audience is unlikely to reflect reality. However it does provide an indication of outreach, and the 11 retweets (and 2 conversations) are based on reality.
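A figure such as the 8.3K ‘potential audience’ is typically just the sum of the follower counts of the accounts which tweeted or retweeted the link, so overlapping audiences are counted repeatedly. A minimal sketch (the eleven follower counts below are invented to illustrate the arithmetic):

```python
# Hedged sketch of a "potential reach" figure: an upper bound formed
# by summing follower counts, ignoring audience overlap. The eleven
# follower counts are invented for illustration.

def potential_reach(follower_counts):
    """Upper-bound estimate of accounts that could have seen a tweet."""
    return sum(follower_counts)

retweeter_followers = [3200, 1800, 950, 700, 500, 400, 300, 250, 100, 80, 20]
reach = potential_reach(retweeter_followers)
print(f"{reach / 1000:.1f}K")  # 8.3K
```

This is why the figure overstates the real audience: two retweeters with heavily overlapping followers contribute their full counts twice.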


The TwentyFeet Web site describes how:

TwentyFeet is an “egotracking” service that will help you keep track of your own social media activities and monitor your results. We aggregate metrics from different services, thus giving you the full picture of what happens around you on the web – all in one place.

The TwentyFeet service (also known as 20ft) provides a range of graphs which help to visualise one’s Twitter performance over time. These include:
20ft: Reputation for Nov 2012

  • Reputation influence: the numbers of followers gained and lost over a specified period together with the number of Twitter lists you are on.
  • Influence indicators: the number of mentions and retweets.
  • Conversations: including tweets, retweets and @ messages.
  • Followers analyses: the numbers following you, people not following back and the ratio of followers to following.
  • List analyses: the numbers of lists you own, the numbers of members of lists you own and the number of subscribers to one’s lists.
  • Additional information: the numbers of tweets you have favourited, the total number of tweets posted, the total number of links posted and the total number of lists followed.

Examples of TwentyFeet graphs for my numbers of followers in November 2012 are illustrated.
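The follower analyses listed above come down to simple set arithmetic. A sketch under the assumption that snapshots of follower IDs are available (TwentyFeet’s internal data model is not documented here):

```python
# Hedged sketch: follower churn and follow ratio computed from two
# snapshots of follower IDs. The snapshot representation (sets of
# account IDs) is an assumption for illustration.

def churn(before, after):
    """Return (gained, lost) followers between two snapshots."""
    return len(after - before), len(before - after)

def follow_ratio(followers, following):
    """Ratio of followers to accounts followed."""
    return len(followers) / len(following) if following else float("inf")

before = {"alice", "bob", "carol"}
after = {"bob", "carol", "dave", "eve"}
print(churn(before, after))  # (2, 1)
```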

Business Models

The basic Crowdbooster service is available for free. As described on the pricing page this can be used to analyse one Twitter and one Facebook account. A Professional account, costing $39/month, allows up to 10 accounts to be analysed, with the Business account, costing $99/month, allowing up to 30 accounts to be analysed. No additional functionality is available for the paid-for accounts, apart from access to a live chat and phone support service.

The basic TwentyFeet service is available for free and can be used to analyse a single Twitter and Facebook account. However users of the free service will also find that the service sends a weekly tweet summarising the week’s performance, along the lines of:

My week on twitter: 40 retweets received, 1 new listings, 37 new followers, 78 mentions. Via:

Some people find such automated tweets irritating (with the tweet from TwentyFeet perhaps being regarded as boastful). It is possible to buy a subscription which can be used to disable the public notifications as well as provide various other benefits. A subscription costs $12.45 for 5 credits; however, it is not clear how long the credits last.


As mentioned previously, many Twitter users may well have no interest in their Twitter metrics. However if you do have an interest, which service should you use? A simple answer would be to sign up for both. However the real decision to be made is probably whether to use the free version of TwentyFeet and accept the weekly automated tweets from one’s account.

Power Twitter users should have no difficulty in filtering out tweets which are of no interest if they have a well-formed and consistent string of characters – which is the case for the alert from TwentyFeet as well as services such as (“The foo Daily is out“) and FourSquare (“I’m at foo“). Back in 2009 Mashable published an article entitled Twitter Better: 20 Ways to Filter Your Tweet. More recently posts on How to Filter out Noise from your Twitter Timeline and How to Filter Your Twitter Stream and a question on Quora which asked What tools can one use to filter one’s Twitter stream? highlighted some tools and techniques for Twitter management.
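Because such automated tweets follow a fixed template, a few pattern matches are enough to drop them. A minimal sketch, based on the template strings quoted above (the regular expressions themselves are my own illustrative assumptions):

```python
import re

# Hedged sketch of client-side timeline filtering: automated service
# tweets have a consistent shape, so simple patterns can remove them.
# The patterns below are illustrative assumptions based on the
# template tweets quoted in the post.

NOISE_PATTERNS = [
    re.compile(r"^My week on twitter:"),  # TwentyFeet weekly summaries
    re.compile(r"Daily is out"),          # daily-digest services
    re.compile(r"^I'm at "),              # FourSquare check-ins
]

def filter_timeline(tweets):
    """Return only the tweets matching none of the noise patterns."""
    return [t for t in tweets if not any(p.search(t) for p in NOISE_PATTERNS)]

timeline = [
    "My week on twitter: 45 retweets received, 8 new followers",
    "I'm at the railway station",
    "Interesting new post on altmetrics",
]
print(filter_timeline(timeline))  # ['Interesting new post on altmetrics']
```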

However many users will not wish to use such advanced filtering techniques. Perhaps in response to the public Twitter alerts provided by TwentyFeet, Crowdbooster now provides a private email alert. A few days ago I received a message saying:

You gained 7 followers a day over the past week! (On average, you gain 2) View your follower growth now.

To reach the most people, schedule your tweets for 12PM today, 8PM today and 9PM today.

In light of the developments to Crowdbooster I have just withdrawn permissions for TwentyFeet to post to my Twitter stream. The last tweet from the service was published 30 minutes ago:

My week on twitter: 45 retweets received, 8 new followers, 108 mentions. Via:

For me, Crowdbooster provides the deeper understanding of how I use Twitter. I now know that my second most retweeted post ever was posted two years ago:

A classic for those who like spotting misuse of apostrophe’s – spotted in Bath charity shop.

It seems there are a lot of grammar pedants amongst my Twitter followers!


Posted in Evidence, Twitter | Tagged: , , | 3 Comments »

Observing Growth In Popularity of ORCID: An SEO Analysis

Posted by Brian Kelly on 15 Nov 2012

The ORCID (Open Researcher and Contributor ID) service was launched recently. From the ORCID Web site we learn that:

ORCID provides a persistent digital identifier that distinguishes you from every other researcher and, through integration in key research workflows such as manuscript and grant submission, supports automated linkages between you and your professional activities ensuring that your work is recognized

We would expect all public networked services to have an interest in monitoring take-up of the service, especially in the period after the launch. The ORCID team will be monitoring registrations on the service, but it is also possible to monitor the growth of a networked service by monitoring the links to the service.

The MajesticSEO tool can be used to monitor links to a Web site, and provides information on the number of links and domains as well as additional information such as the Alexa ranking of the domains, link text used, resources linked to, etc.

The findings from the MajesticSEO tool taken on 15 November 2012 are illustrated. As can be seen there are currently 521 domains linking to the service, with a total of 11,923 links, 2,295 of which are from educational institutions.
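The headline figures described above (total links, referring domains and links from educational institutions) can be reproduced from a raw list of backlink URLs. A minimal sketch with invented sample URLs (the real tool’s methodology is, of course, more involved, and the “.ac.uk”/“.edu” test is a crude proxy):

```python
from collections import Counter
from urllib.parse import urlparse

# Hedged sketch: summarise a list of backlink URLs into the three
# headline figures discussed above. Sample URLs are invented; the
# ".ac.uk"/".edu" test is a crude proxy for "educational institution".

def backlink_summary(referring_urls):
    """Return (referring domains, total links, links from edu domains)."""
    domains = Counter(urlparse(u).netloc for u in referring_urls)
    edu_links = sum(n for d, n in domains.items()
                    if d.endswith(".ac.uk") or d.endswith(".edu"))
    return len(domains), sum(domains.values()), edu_links

urls = [
    "http://www.example.ac.uk/research/orcid.html",
    "http://www.example.ac.uk/staff/profiles.html",
    "http://blog.example.org/posts/orcid-launch",
]
print(backlink_summary(urls))  # (2, 3, 2)
```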

The current findings can be viewed on the MajesticSEO Web site (a free subscription is needed to view the findings). The findings for the top ten referring domains are shown below.

# Referring Domains Backlinks Alexa Rank Flow Metrics
1 2,462      N/A 24 26
2 2,086 279,286 27 22
3 2,049   21,837 63 65
4    755        N/A 17 10
5    410 718,279 47 49
6    351        N/A 33 25
7    241        N/A   9   5
8    197        N/A 30 26
9    170        N/A 22 16
10    144        22 95 93

The domain appears to me to be an anomaly. Following discussions with the owner of this domain, a researcher at the University of Manchester, it appears he is not carrying out any ORCID development or harvesting activities, so perhaps there was a flaw in the data collection carried out by the MajesticSEO service. The other entries in the table give an indication of the organisations which seem to be early adopters of ORCID or, perhaps in the case of, suggest where blog posts about ORCID are being discussed.

Sorting the table by Alexa ranking shows the most highly ranked Web services which contain links to the ORCID site.

# Referring Domains Alexa Rank Backlinks Flow Metrics
1   2    1 99 99
2  11   37 98 94
3  22 144 95 93
4 177     1 79 77
5 192     6 91 88
6 238     4 62 52
7 281     1 82 69
8 290     2 64 55
9 317    21 87 83
10 342      4 67 54

The presence of two popular cloud-based blog platforms, and, suggests that researchers are either talking about ORCID on these blogs or perhaps even linking to ORCID records from blog posts. However the number of links is currently too small to draw any significant conclusions from the findings.

But perhaps of most interest is the geographical display of take-up of ORCID IDs. The global map probably reflects the location of leading research institutions and publishers of research journals. But zooming in on the UK provides a more interesting view of the location of Web sites which have links to the ORCID domain. Bath is currently represented by 22 links from the UKOLN Web site and one from the Ariadne ejournal. As mentioned above, the map is skewed by the large number of links from the domain, which is based in Manchester and has 2,462 links. Two locations are shown for Scotland: Edinburgh, with 9 links from the Edinburgh University Web site, 3 from EDINA and 3 from the DCC, and the city of ‘Heriot’ (which actually refers to Heriot-Watt University, based in Edinburgh).

It will be interesting to observe how this map develops as ORCID takes off.


Posted in Data, Evidence, Identifiers | 1 Comment »

The ‘Altmetrics everywhere – but what are we missing?’ #solo12impact Session

Posted by Brian Kelly on 10 Nov 2012

I’m looking forward to attending the session on “Altmetrics everywhere – but what are we missing?” which takes place on Monday at the SpotOn London (#SOLO12) conference.

In a post entitled Altmetrics everywhere – but what are we missing? #solo12impact Alan Cann, the workshop co-facilitator, has provided a taster for the session. In the post Alan describes how:

In the last couple of years altmetrics (the creation and study of new metrics based on social media for analyzing and informing scholarship) have popped up across the web. 

Alan refers to a recent guest post on this blog entitled Social Media Analytics for R&D: a Catalan Vision which suggests a range of parameters which may be relevant. However Alan feels that:

The reality is that this is too complex for those of us with lives and jobs. We need services / dashboards to provide and digest this information.

I agree, the research community will need similar dashboards which can provide indications of engagement and outreach. Alan mentions a number of possible solutions. He is dismissive of Klout (which I would agree is not appropriate in our context although if you are an advertising agency and wish to decide which Twitter star to employ to post sponsored tweets this might provide useful information to assist the selection process). Alan is more positive about  Kred, but his preferred tool seems to be CrowdBooster. Alan’s post includes screen shots which illustrate the data visualisation provided by the tool.

I have also recently started to make use of Crowdbooster. However I feel that the dashboard provided by the Twentyfeet service is better.

The screenshot illustrates one of the dashboard views of my Twitter engagement during October 2012.

However Twentyfeet (also known as is not popular in some quarters as the free version sends a single weekly tweet summarising the data over the previous week.

It is possible to disable this alert for a small annual fee (of, I think, ~$12 per year), although since this is only a single weekly tweet it is not too intrusive.

I will be making comparisons between these services once Crowdbooster has aggregated a sufficient number of my tweets to make valid comparisons. For now I hope this contribution to the #solo12impact session will be of interest to the participants.

View Twitter conversation from: [Topsy]

Posted in Evidence | Tagged: , | 1 Comment »

Understanding the Limits of Altmetrics: Slideshare Statistics

Posted by Brian Kelly on 8 Nov 2012

About AltMetrics

Cricketers like statistics, as we know from the long-standing popularity of Wisden, the cricketing almanack which was first published in 1864. Researchers have similar interests with, in many cases, their professional reputation being strongly influenced by statistics. For researchers the importance of citation data is now being complemented by a new range of metrics which are felt to be more relevant to today’s fast-moving digital environment, known as altmetrics. The altmetrics manifesto explains how:

Peer-review has served scholarship well, but is beginning to show its age. It is slow, encourages conventionality, and fails to hold reviewers accountable. 

and goes on to describe how:

Altmetrics expand our view of what impact looks like, but also of what’s making the impact. 

However the manifesto concludes with a note of caution:

Researchers must ask if altmetrics really reflect impact, or just empty buzz. Work should correlate between altmetrics and existing measures, predict citations from altmetrics, and compare altmetrics with expert evaluation. Application designers should continue to build systems to display altmetrics,  develop methods to detect and repair gaming, and create metrics for use and reuse of data. Ultimately, our tools should use the rich semantic data from altmetrics to ask “how and why?” as well as “how many?”

Altmetrics are in their early stages; many questions are unanswered. But given the crisis facing existing filters and the rapid evolution of scholarly communication, the speed, richness, and breadth of altmetrics make them worth investing in.

As I described in a post on “What Can Web Accessibility Metrics Learn From Alt.Metrics?” there can be a danger in uncritical acceptance of metrics. I therefore welcome this recognition of the need to explore the approaches which are currently being developed. In particular I am looking forward to the sessions on Altmetrics beyond the Numbers and Assessing social media impact which will be held at the Spot On London 2012 conference to be held in London on 11-12 November.  In a blog post entitled Altmetrics everywhere – but what are we missing? #solo12impact Alan Cann touches on the strengths and weaknesses of some of the well-known social analytics tools:

It astounds me that Klout continues to attract so much attention when it has been so thoroughly discredited – Gink is a more useful tool in my opinion ;-)

The best of this bunch is probably Kred, which at least has a transparent public algorithm. In reality, the only tool in this class I use is CrowdBooster, which has a number of useful functions.

But beyond Twitter analytics, what of metrics associated with the delivery of talks about one’s research activities? This is an area of interest to the Altmetrics community as can be seen from the development of the Impactstory service which “aggregates altmetrics: diverse impacts from your articles, datasets, blog posts, and more“. As described in the FAQ:

The system aggregates impact data from many sources and displays it in a single report, which is given a permaurl for dissemination and can be updated any time.

The service is intended for:

  • researchers who want to know how many times their work has been downloaded, bookmarked, and blogged
  • research groups who want to look at the broad impact of their work and see what has demonstrated interest
  • funders who want to see what sort of impact they may be missing when only considering citations to papers
  • repositories who want to report on how their research artifacts are being discussed
  • all of us who believe that people should be rewarded when their work (no matter what the format) makes a positive impact (no matter what the venue). Aggregating evidence of impact will facilitate appropriate rewards, thereby encouraging additional openness of useful forms of research output.

In addition to analysis of published articles, datasets, Web sites and software the service also aggregates slides hosted on Slideshare.

Metrics for Slideshare

Metrics for Slide Usage at Events

In May 2011 a post entitled Evidence of Slideshare’s Impact summarised use of slides hosted on Slideshare for talks which have been presented at UKOLN’s IWMW events from IWMW 2006 to IWMW 2010.

A year later, following a tweet in which @MattMay asked “Why does everybody ask for slides during/after a presentation? What do you do with them? I’m genuinely curious” I published an updated post on Trends in Slideshare Views for IWMW Events. In the post I suggested the following reasons for why speakers and event organisers may wish to host slides on Slideshare:

  • To enable a remote audience to view slides for a presentation they may be watching on a live video stream, on an audio stream or even simply following the tweets (I provide a slide number on the slides to make it easier for people tweeting to identify the slide being used).
  • To enable the slides to be viewed in conjunction with a video recording of the presentation.
  • To enable my slides to be embedded elsewhere, so that the content can be reused in a blog post or on a web page.
  • To enable the content of the slides to be reused, if it is felt to be useful to others. Note that I provide a Creative Commons licence for the text of my slides, try to provide links to screenshots and give the origin of images which I may have obtained from others.
  • To enable slides to be viewed easily on a mobile device.
  • To provide a commentable facility for the slides.
  • To enable my slides to be related, via tags, to related slideshows.

The usage statistics for talks given at IWMW events were published in order to demonstrate the interest in accessing such slides and to encourage speakers and workshop facilitators to make their slides available. But beyond the motivations for event organisers, what of the individual speaker?

Metrics for Individuals

My interest in metrics for Slideshare dates back to December 2010 when I published a post which asked What’s the Value of Using Slideshare? In August 2010 Steve Wheeler (@timbuckteeth) tweeted that:

Ironically there were 15 people in my audience for this Web 3.0 slideshow but >12,000 people have since viewed it

As can be seen, there have now been over 58,000 views of Steve’s slides on Web 3.0: The Way Forward?

In light of Steve’s experiences and the growing relevance of metrics for Slideshare suggested by the development of the Impactstory service, when a paper by myself, Martyn Cooper, David Sloan and Sarah Lewthwaite on “A Challenge to Web Accessibility Metrics and Guidelines: Putting People and Processes First” was accepted for the W4A 2012 conference earlier this year, the co-authors agreed to ensure that our professional networks were made aware of the paper and the accompanying slides. The aim was to maximise the number of downloads which, we hoped, would increase the number of citations in the future, but also to facilitate discussion around the ideas presented in the paper.

We monitored usage statistics for the slides and found that during the week of the conference there had been 1,391 views, compared with between 3 and 351 views for other slides which used the #W4A2012 conference hashtag. To date, as illustrated, there have been 7,603 views.

I used this example in a talk on Using Social Media to Promote ‘Good News’  which I gave at a one-day event organised by the AHRC (Arts and Humanities Research Council) which took place at the same time as the W4A 2012 conference. I was therefore able to observe how interest in the slides developed, which included use of the Topsy service. This service highlighted the following tweets:

@stcaccess (STC AccessAbilitySIG), 17 April 2012 (7 similar tweets):
Enjoyed “Challenge to Web Accessibility Metrics & Guidelines” slides from @sloandr & Co. #w4a12 #a11y #metrics

@nethermind (Elle Waters), 19 April 2012 (2 similar tweets):
We need more of this = #W4A slides by @martyncooper @briankelly @sloandr @slewth – Learner analytics & #a11y metrics

@crpdisabilities (Bill Shackleton), 16 April 2012 (2 similar tweets):
A Challenge to Web #Accessibility Metrics & Guidelines: Putting People & Processes First #A11y #Presentation

I’ve used this example to illustrate how analysis of Twitter use at conferences can help to show how people are engaging with talks. In this example the Twitter IDs STCAccess and CRPDisabilities indicated that those working in accessibility were engaging with our paper and spreading the ideas across their networks.

Do the Numbers Add Up?

In a series of talks given during Open Access 2012 week I described the importance of social media in raising the visibility of research papers, including papers hosted on institutional repositories. However, when I examined the statistics in more detail I realised that the numbers didn’t add up. According to Slideshare there have been 2,881 views of the slides from the post on A Challenge to Web Accessibility Metrics and Guidelines: Enhancing Access to Slides in which they had been embedded. However, as shown, there have only been 472 views of the blog post itself. Strange!

I subsequently realised that a Slideshare view will be recorded when the post is accessed, even if the individual slides are not viewed. And since a blog post continues to be shown on the blog’s home page until 30 subsequent posts have been published, each visit to the home page between 19 April (when the post was published) and 5 July 2012 (30 posts later) would seemingly have registered as a view of the slides, even though most visitors will not have scrolled down to see even the title slide!
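The double-counting effect described above can be expressed as a tiny model. This is purely an illustrative sketch: the number of front-page loads is a hypothetical assumption chosen so the totals match the figures quoted, not data from Slideshare.

```python
# Toy model of embed-inflated Slideshare view counts.
# Assumption: every load of a page embedding the deck registers one
# "view", whether or not the visitor ever looks at the slides.

def recorded_views(genuine_post_views, embedding_page_loads):
    """Views the service reports = genuine views + embed page loads."""
    return genuine_post_views + embedding_page_loads

post_views = 472        # views of the blog post itself (quoted above)
homepage_loads = 2409   # hypothetical front-page loads, 19 Apr - 5 Jul
print(recorded_views(post_views, homepage_loads))  # -> 2881
```

Under this (assumed) model, the gap between the 472 post views and the 2,881 reported slide views is accounted for entirely by front-page traffic.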
What, then, do Slideshare usage statistics tell us? Clearly, if the slides have been embedded in a blog they don’t tell us how many people have viewed the slides, although if slides are not embedded elsewhere, or have been embedded only in a static web page, the statistics may be more indicative. If the slides have been embedded in blog posts or other curated environments, the figures may give an indication of the popularity of the containing blog or similar environment. In Steve Wheeler’s case the popularity of his slides provides evidence of the popularity of Steve’s Learning with E’s blog, the Damn Digital Chinese-language blog, the Building e-Capability blog and two curation services – together with a spam farm.

Lies, Damned Lies and Altmetrics?

Where does this leave services such as Impactstory? Looking at the Impactstory findings for my resources I can see that the slides for a paper on “Accessibility 2.0: People, Policies and Processes” seem to be the most highly-ranked, with 73 downloads and 2,989 views.

But how many of those views were views of the slides, rather than of the containing resources? And how many views may have come from a spam farm?

I don’t have answers to these questions, or to the bigger question of “Will the value of altmetrics be undermined by the complex ways in which resources may be reused, misused or the systems gamed?”

This is a question I hope will be addressed at the Spot On London 2012 conference.

View Twitter conversation from: [Topsy]

Posted in Events, Evidence | 10 Comments »

The Sixth Anniversary of the UK Web Focus Blog

Posted by Brian Kelly on 1 Nov 2012

This blog was launched on 1 November 2006. A year after the launch I described The First Year of the UK Web Focus Blog. The following year I provided a summary in The Second Anniversary of the UK Web Focus Blog, in which I provided a link to a backup copy of the blog’s content, hosted on Scribd. In a post on The Third Anniversary of the UK Web Focus Blog I commented that “with over 600 posts published on the UK Web Focus blog, I can’t recall all of the things I have written about!“. In 2010 a post on Fourth Anniversary of this Blog – Feedback Invited provided a link to a SurveyMonkey form, and I subsequently published a post which gave an Analysis of the 2010 Survey of UK Web Focus Blog.

Last year’s anniversary post, entitled How People Find This Blog, Five Years On concluded that “most people now view posts on this blog following alerts they have come across on Twitter rather than via a Google search or by subscribing to the blog’s RSS feed“. I went on to say that “to put it more succinctly, social search is beating Google and RSS“.

Figure 1: Referrer Traffic to this blog, 2011-12

But what do usage statistics now tell us about the previous year? Looking at the referrer statistics for the last 365 days (as illustrated in Figure 1), it seems that WordPress has changed how it displays referrer statistics compared with last year.

Figure 2: Referrer Traffic, 2006-11

Last year’s findings (illustrated in Figure 2) had Twitter in first place, followed by Google Reader and the UKOLN Web site. However this year we find Search Engines in first place, by a significant margin.

This reflects comments made last year by Tony Hirst who felt that the statistics were somewhat misleading:

my stats from the last year show a lot of Twitter referrals, but also (following a three or four day experiment by WordPress a week or so ago), inflated referrals from “”. The experiment (or error?) that WordPress ran was to include RSS counts in the stats. The ‘normal’ stats are page views on; the views over the feed can be found by looking at the stats for each page.

It would appear that last year’s conclusion: “social search is beating Google and RSS” was incorrect. In fact Google continues to be significant in driving traffic to this blog. However I think we can say that “social services, especially Twitter, are beating RSS readers“.

The importance of Twitter is widely appreciated as a means of ensuring that the intended target audience  – those with whom you are likely to share similar professional interests – are alerted to your content. But the thing that surprised me was the importance of Facebook – in third place behind Search Engines and Twitter in referring traffic to this blog.
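The referrer categories discussed above (search engines, Twitter, Facebook) can be produced by bucketing referrer hostnames. A minimal sketch in Python; the hostname lists are illustrative assumptions, not the categories WordPress.com actually uses:

```python
# Classify a referrer URL into the broad categories discussed above.
from urllib.parse import urlsplit

SEARCH_HOSTS = {"www.google.com", "www.bing.com", "search.yahoo.com"}

def classify(referrer_url):
    host = urlsplit(referrer_url).hostname or ""
    if host in SEARCH_HOSTS:
        return "search"
    if host in ("twitter.com", "t.co"):  # t.co is Twitter's link shortener
        return "twitter"
    if host == "facebook.com" or host.endswith(".facebook.com"):
        return "facebook"
    return "other"

print(classify("https://t.co/abc123"))                    # -> twitter
print(classify("https://www.google.com/search?q=iwmw"))   # -> search
```

Tallying a referrer log through a function like this is one way to reproduce the kind of breakdown shown in Figures 1 and 2.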

Perhaps I should not have been surprised by Facebook’s high profile. After all, a post by Daniel Sharkov, an 18-year-old student and blogger, which provided a 9 Step Blog Checklist to Make Sure Your Posts Get Maximum Exposure, included the following suggestion:

Did You Share Your Post on Facebook?

An obvious one. What I do is share the post both on my personal wall and on my fan page right after publishing the article.

I appreciate that use of Facebook won’t be appropriate in all cases. But for blogs provided by individuals who have a Facebook account and who wish to see their content widely viewed, it appears that Facebook can play a role in supporting that objective; the evidence is clear to see – even, or perhaps especially, if you’re not a fan of Facebook.

Posted in Blog, Evidence, Facebook | 2 Comments »

SEO Analysis of Enlighten, the University of Glasgow Institutional Repository

Posted by Brian Kelly on 25 Oct 2012


In the third and final guest post published during Open Access Week, William Nixon, Head of the Digital Library Team at the University of Glasgow Library and the Service Development Manager of Enlighten, the University of Glasgow’s institutional repository service, gives his findings on use of the MajesticSEO tool to analyse the Enlighten repository.

SEO Analysis of Enlighten, University of Glasgow

This post takes an in-depth look at a search engine optimisation (SEO) analysis of Enlighten, the institutional repository of the University of Glasgow. This builds on an initial pilot survey of institutional repositories provided by Russell Group universities described in the post on MajesticSEO Analysis of Russell Group University Repositories.


University of Glasgow

Founded in 1451, the University of Glasgow is the fourth oldest university in the English-speaking world. Today we are a broad-based, research-intensive institution with a global reach, ranked in the top 1% of the world’s universities. The University is a member of the Russell Group of leading UK research universities. Our annual research grants and contracts income totals more than £128m, which puts us in the UK’s top 10 earners for research. Glasgow has more than 23,000 undergraduate and postgraduate students and 6,000 staff.


We have been working with repositories since 2001 (our first work was part of the JISC funded FAIR Programme) and we now have two main repositories, Enlighten for research papers (and the focus of this post) and a second for our Glasgow Theses.

Today we consider Enlighten to be an “embedded repository”, that is, one which has “been integrated with other institutional services and processes such as research management, library and learning services” [JISC Call, 10/2010]. We have done this in various ways including:

  • Enabling sign-on with institutional ID (GUID)
  • Managing author identities
  • Linking publications to funder data from Research System
  • Feeding institutional research profile pages

As an embedded repository Enlighten supports a range of activities, including our original Open Access aim to make as many of our research outputs freely available as possible, but it also acts as a publications database and supports the university’s submission to REF2014.

University Publications Policy

The University’s Publications Policy, introduced to Senate in June 2008, has two key objectives:

  • to raise the profile of the university’s research
  • to help us to manage research publications.

The policy (it is a mandate but we tend not to use that term) asks that staff:

  • deposit a copy of their paper (where copyright permits)
  • provide details of the publication
  • ensure the University is in the address for correspondence (important for citation counts and database searches)

Enlighten: Size and Usage

Size and coverage

In mid-October 2012 Enlighten had 4,700 full-text items covering a range of item types including journal articles, conference proceedings, books, reports and compositions. Enlighten has over 53,000 records, and the Enlighten Team works with staff across all four Colleges to ensure our publications coverage is as comprehensive as possible.


We monitor Enlighten’s usage primarily via Google Analytics for overall access (including numbers of visitors, page views, referrals and keywords) and via the EPrints IRStats package for downloads. Daily and monthly download statistics are provided in the records for items with full text, and we provide an overall listing of download statistics for the last one-month and 12-month periods.

Looking at Google Analytics for the period 1 Jan 2012 – 30 Sep 2012 (to tie in with this October snapshot) and the corresponding previous period, we had 201,839 unique visitors up to 30 Sep 2012, compared to 196,988 in 2011.

In the last year we have seen an increase in the number of referrals, and our search traffic is now around 62%. In 2012, 250,733 people visited this site: 62.82% was search traffic (94% of that from Google), with 157,503 visits, and 28.07% was referral traffic, with 70,392 visits.

In 2011, 232,480 people visited this site: 69.97% was search traffic, with 162,665 visits, and 18.98% came from referrals, with 44,128 visits.
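The 2012 percentages quoted above can be cross-checked against the raw visit counts. A quick sketch, using only figures copied from the text:

```python
# Cross-check the Google Analytics shares for 2012 quoted above.
visits_2012 = 250_733
search_visits = 157_503
referral_visits = 70_392

search_share = 100 * search_visits / visits_2012
referral_share = 100 * referral_visits / visits_2012
print(f"search: {search_share:.2f}%, referral: {referral_share:.2f}%")
# -> search: 62.82%, referral: 28.07%
```

The computed shares agree with the percentages Google Analytics reported, so the visit counts and percentages are internally consistent.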


Our experience with Google Analytics has shown that much of our traffic still comes from search engines, predominantly Google. It has been interesting, though, to note the increase in referral traffic, in particular from our local * domain. This rise has coincided with the rollout of staff publication pages, which are populated from Enlighten and provide links to the records held in Enlighten.

After * domain referrals our most popular external referrals come from:

  • Mendeley
  • Wikipedia
  • Google Scholar

We expected that these, along with Google itself, would feature most prominently in the Majestic results.

Majestic SEO Survey Results

The data for this survey was generated on 22 October 2012 using the ‘fresh index’; current data can be found from the Majestic SEO site with a free account. We do own the domain but haven’t added the code needed to create a free report. The summary for the site is given below, showing 632 referring domains and 5,099 external backlinks. Interestingly, it seems our repository is sufficiently mature for Majestic to also provide details for the last five years.

Since we were looking at the repository’s own domain rather than the wider institutional domain, we anticipated that our local referrals wouldn’t feature in this report. As a sidebar, a focus just on the institutional domain showed nearly 411,000 backlinks and over 42,000 referring domains.

Figure 1.  Majestic SEO Summary for

This includes 619 educational backlinks and 54 educational referring domains. It also shows a drop in the number of referring domains since Brian’s original post in August, which showed 680. The breakdown of the Top Five Domains (and number of links) is:

  • 5,880
  • 5,087
  • 322
  • 178
  • 135

These demonstrate a very strong showing for blog sites, news and Wikipedia.

Figure 2. Top 5 Backlinks

Referring domains were a challenge! We couldn’t replicate the Matched Links data which Warwick and the LSE used. Our default Referring Domains report is ordered by Backlinks; other options, including matches, are available, but none of our Site Explorer – Ref Domains options seemed able to replicate this. We didn’t use Create Report.

These Referring Domains ordered by Backlinks point us to full text content held in Enlighten from sites it’s unlikely we would have readily identified.

Figure 3a: Referring Domains by Backlinks

Figure 3b: Referring Domains by Matches (albeit by 1)

This report shows at number one with the blog sites holding spots 2 and 3 and then Bibsonomy (social bookmark and publication sharing system) and Mendeley at 4 and 5.

An alternative view of the Referring Domains report by Referring Domain shows the major blog services and Wikipedia in the top 3, with two UK universities Southampton and Aberdeen (featuring again) in positions 4 and 5.

The final report is a ranked list of Pages, downloaded as CSV file and then re-ordered by ReferringExtBacklinks.

URL        ReferringExtBackLinks  CitationFlow  TrustFlow
           584                    36            28
           198                    18            15
           77                     10            9
           70                     24            2
           69                     23            2
[1].pdf    61                     0             0

Table 1: Top 5 pages, sorted by Backlinks

These pages are:

  • Enlighten home page
  • PDF for “Arguments For Socialism”
  • PDF for “Language in Pictland”
  • A chronology of the Scythian antiquities of Eurasia based on new archaeological and C-14 data [Full text record]
  • Some problems in the study of the chronology of the ancient nomadic cultures in Eurasia (9th – 3rd centuries BC) [Full text record]
  • PDF for “87Sr/86Sr chemostratigraphy of Neoproterozoic Dalradian limestones of Scotland and Ireland: constraints on depositional ages and time scales” [Full text record]
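The re-ordering step described above (downloading the Pages report as a CSV file and sorting by ReferringExtBackLinks) can be scripted. A sketch using Python’s csv module; the column name follows the Majestic export shown in Table 1, while the filename in the usage comment is a placeholder:

```python
# Sort a Majestic SEO "Pages" CSV export by external backlink count.
import csv

def top_pages(csv_path, n=5):
    """Return the n rows with the most external backlinks."""
    with open(csv_path, newline="") as f:
        rows = list(csv.DictReader(f))
    rows.sort(key=lambda r: int(r["ReferringExtBackLinks"]), reverse=True)
    return rows[:n]

# Usage (hypothetical filename):
# top_pages("majestic_pages_export.csv") would return the five
# most-linked pages, led by the repository home page in the data above.
```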


Focusing in more detail on the results in Figure 2, the top 5 backlinks, 4 out of the 5 are from Wikipedia; the first two are to the same paper but from different Wikipedia entries. It’s interesting to see that our third-ranked backlink is the ROARmap registry.

Looking at the top 5 pages ranked by backlinks, none of the PDFs, or the records which have PDFs, currently appears in our IRStats-generated list of the most downloaded papers in the last 12 months. In this pilot sample, however, there does appear to be a correlation between ranking and the availability of full text rather than merely a metadata record.


While this initial work has focused on the Top 5, extending this to at least the Top 10 would be useful for further comparison. It was interesting to see that sites such as Mendeley appeared in variations of our Referring Domains report, which correlates with our Google Analytics reports indicating that they are a growing source of referrals.

Looking at Figure 3a, a Google search on each of the top referring domains (by backlinks) reveals that the first has 136,000 results on Google, the second didn’t match at all and the third had 5 results.

Social media sites such as Facebook and Twitter don’t appear in these initial results; it may be that the volume is insufficient for them to be ranked here, or there may be terms-of-service issues. Google Analytics now provides some social media tools, and we have been identifying our most popular papers from Facebook and Twitter.

This has been an interesting, challenging and thought-provoking exercise, with the opportunity to look at the results and experiences of Warwick and the LSE who, like us, use Google Analytics to provide measures of traffic and usage.

The overall results from this work provide some interesting counterpoints and data to the results which we get from both Google Analytics and IRStats. These will need further analysis as we explore how Majestic SEO could become part of the repository altmetrics toolbox and how we can leverage its data to enhance access to our research.

About the Author

William Nixon is the Head of the Digital Library Team at the University of Glasgow Library. He is also the Service Development Manager of Enlighten, the University of Glasgow’s institutional repository service. He has been working with repositories over the last decade and was the Project Manager (Service Development) for the JISC-funded DAEDALUS Project, which set up repositories at Glasgow using both EPrints and DSpace. William is now involved with the ongoing development of services for Enlighten and support for Open Access at Glasgow. Through JISC-funded projects, including Enrich and Enquire, he has worked to embed the repository into University systems. This work includes links to the research system for funder data and the re-use of publications data in the University’s web pages. He was part of the University’s team which provided publications data for the UK’s Research Excellence Framework (REF) Bibliometrics Pilot. William is now involved in supporting the University of Glasgow’s submission to the REF2014 national research assessment exercise. Enlighten is a key component of this exercise, enabling staff to select and provide further details on their research outputs.

Posted in Evidence, Guest-post, openness | 2 Comments »

SEO Analysis of LSE Research Online

Posted by ukwebfocusguest on 24 Oct 2012


The second in the series of guest blog posts which gives a summary of an SEO analysis of a repository hosted at a Russell Group university is provided by Natalia Madjarevic, the LSE Research Online Manager. As described in the initial post, the aim of this work is to enable repository managers to openly share their experiences in use of MajesticSEO, a freely-available SEO analysis tool to analyse their institutional repositories.

SEO Analysis of LSE Research Online

This post takes an in-depth look at a search engine optimisation (SEO) analysis of LSE Research Online, the institutional repository of LSE research outputs. This builds on Brian Kelly’s post published on this blog in August 2012 on MajesticSEO Analysis of Russell Group University Repositories.

The London School of Economics and Political Science


LSE is a specialist university with an international intake and a global reach. Its research and teaching span the full breadth of the social sciences, from economics, politics and law to sociology, anthropology, accounting and finance. Founded in 1895 by Beatrice and Sidney Webb, the School has a reputation for academic excellence. The School has around 9,300 full-time students from 145 countries and a staff of just under 3,000, with about 45 per cent drawn from countries outside the UK. In 2008 the RAE found that LSE had the highest percentage of world-leading research of any university in the country, topping or coming close to the top of a number of rankings of research excellence. LSE came top nationally by grade point average in Economics, Law, Social Policy and European Studies, and 68% of the submitted research outputs were ranked 3* or 4*.

LSE Research Online – a short history

LSE Research Online (LSERO) was set up in 2005 as part of the SHERPA-LEAP project. The aim of the project was to create EPrints repositories for each of the seven partner institutions, of which LSE was one, and to populate those repositories with full-text research papers. In June 2008 the LSE Academic Board agreed that records for all LSE research outputs would be entered into LSE Research Online. We have no full-text mandate but authors are encouraged to provide full-text deposits of journal articles in pre-publication form, clearly labelled as such, alongside references to publications. Research outputs included in LSE Research Online appear in LSE Experts profiles automatically, thereby reusing data collected by LSE Research Online.

LSE Research Online is to be the main source of bibliographic information for the Research Excellence Framework (REF) in 2014. This has served to further increase the impetus for deposit and visibility of the repository in the School and we have various repository champions throughout the School across departments.

LSE Research Online size and a brief look at usage statistics

As of September 2012, LSE Research Online contains 33,696 records, with 7,050 full-text items. We include a variety of item types such as articles, book chapters, working papers, data sets, blogs and conference proceedings. We have most recently begun collecting LSE blogs, to create a permanent home for this important content. We began tracking LSERO site usage with Google Analytics in 2007, and the site has received 2,268,135 visits since that date. According to Google Analytics, 76.55% of traffic to LSE Research Online (1,748,725 visits) comes from searches; only 16.13% of traffic is from referrals and 7.14% from direct traffic. We also use analog server statistics to monitor downloads: total downloads from May 2007 to September 2012 were 5,266,871.

Expectations of the survey

Before running the Majestic SEO report, I expected we would see plenty of traffic from Google, together with backlinks (i.e. incoming links), as, understandably, these are key sources of traffic to LSERO and are indicated as such on Google Analytics. Google Analytics also points to referrals from Wikipedia and Google Scholar and, most recently, our Summon implementation, which includes LSERO content. However, I was intrigued as to how LSERO would fare in an SEO analysis.

Majestic SEO survey results

The data was generated from Majestic SEO using a free account on 24th September 2012 using the ‘fresh’ index option. A summary of the results is shown below: there are 1,285 referring domains and 8,856 external backlinks. Note that the current findings can be viewed if you have a MajesticSEO account (which is free to obtain).

Figure 1: Majestic SEO analysis summary for

This includes 408 educational referring backlinks. If we look at the backlinks in more detail, patterns begin to emerge:

Figure 2: Top 5 Backlinks

This illustrates that a distinct majority of the top backlinks are Wikipedia pages linking to LSERO content, and yet Wikipedia is only ranked as the sixth most popular source of traffic in Google Analytics.

Top referring domains, sorted by matched links, can be found in the table shown below:

Referring domain   Matched links   Alexa rank   Citation flow   Trust flow
wordpress.com      14,502          21           95              93
blogspot.com       11,239          5            97              94
wikipedia.org      349             8            97              98
flickr.com         272             33           98              96
google.com         225             1            99              99

Table 1: Top 5 Referring Domains

Flickr makes a surprise appearance, with WordPress and Blogger dominating the top of the table.

Top 5 items sorted by Majestic’s flow metrics can be found here:

Figure 3: Top 5 Resources in Repository (sorted by flow metrics)

Perhaps more indicative, the Top 5 linked resources sorted by number of backlinks can be found in the table shown below:

Ref no.   URL   Ext. BackLinks   Ref. Domains   CitationFlow   TrustFlow
1               501              83             45             41
2               417              69             28             19
3               225              4              27             32
4               130              46             30             25
5               112              54             22             23

Table 2: Top 5 Linked Resources in Repository (sorted by no. of links)

These pages are:

  1. The LSE Research Online homepage.
  2. A PDF of a research paper on climate policy.
  3. The record for a paper on teenagers’ use of social networking sites.
  4. The record for a paper on climate policy.
  5. The record for a paper on open source software.


Looking in more detail at the top backlinks to the repository, as listed in Figure 2, we can see that Wikipedia represents four out of the five top pages. This includes the Wikipedia page on Free Software, which links back to a Government report on the cost of ownership of open source software. The Wikipedia pages on the European Commission and Proportional Representation are ranked second and third respectively; the Proportional Representation page links back to the full text of a 2010 workshop paper: Review of paradoxes afflicting various voting procedures where one out of m candidates (m ≥ 2) must be elected. The fifth backlink, the only one not from Wikipedia, is from an AIDS education site which links back to the record of an early LSERO paper: Peer education, gender and the development of critical consciousness: participatory HIV prevention by South African youth.

In Table 1, the Top 5 Referring Domains to LSE Research Online are WordPress, Blogspot, Wikipedia, Flickr and Google. We can see the dominance of international social platforms here, with WordPress (14,502 links) and Blogspot (11,239 links), followed by Wikipedia (349 links), Flickr (272 links) and, finally, a search engine, Google (225 links).

In Figure 3, Top 5 Resources in Repository (sorted by flow metrics), we can see several links to LSERO information pages, including the home page and the feed of latest additions. There are, however, several direct links to full-text papers, including an Economic History Working Paper on A dreadful heritage: interpreting epidemic disease at Eyam, 1666-2000. Sorting this data by number of backlinks, as shown in Table 2, the top item is the LSERO homepage with 501 backlinks. The second item is the PDF of one of our most downloaded papers of all time: The Hartwell Paper.


So what can I draw from the results of the Majestic SEO report of LSE Research Online? Analysing the top referring domains according to the Majestic report, it seems reasonable to suggest that adding links to repository content on blogging platforms such as WordPress and Blogspot may result in an increased SEO ranking. We often link to LSERO content in various LSE Library blogs hosted on Blogspot, including New Research Selected by LSE Library. Flickr is also listed as a top referring domain according to Majestic SEO, but running a Google search for links from Flickr retrieves zero results. It’s difficult to ascertain how MajesticSEO gets this result when Google does not confirm the findings – perhaps it uses very different algorithms from Google’s. The MajesticSEO top referring domains indicate that blogging platforms are the main referring domains to LSERO content; however, according to our Google Analytics statistics, 76.55% of traffic to LSERO is from searches. Furthermore, the Majestic report indicates that there are 349 matched links to LSERO content on Wikipedia, yet the corresponding Google search returned (on 11 October 2012) “About 92 results”, which fell to 80 hits when the search was repeated to include omitted results, while a search of Wikipedia itself retrieves 83 hits. How does MajesticSEO retrieve such varying results?
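One likely source of the discrepancies discussed above is that “backlinks”, “linking pages” and “referring domains” are three different counts, and different tools report different ones. A toy illustration, using entirely hypothetical backlink data:

```python
# Count the same hypothetical backlink data three different ways.
from urllib.parse import urlsplit

# (source page, target page) pairs - invented for illustration only
backlinks = [
    ("https://en.wikipedia.org/wiki/A", "http://repository.example/1/"),
    ("https://en.wikipedia.org/wiki/A", "http://repository.example/2/"),
    ("https://en.wikipedia.org/wiki/B", "http://repository.example/1/"),
    ("https://blog.example.com/post", "http://repository.example/1/"),
]

total_links = len(backlinks)                          # every link counted
linking_pages = len({src for src, _ in backlinks})    # unique source pages
referring_domains = len({urlsplit(src).hostname for src, _ in backlinks})
print(total_links, linking_pages, referring_domains)  # -> 4 3 2
```

If one tool counts individual links while another counts unique pages or domains, their headline numbers will rarely agree, even over identical data.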

Looking at backlinks, it’s important to note that the majority of top backlinks refer to papers that have the full-text attached and often link directly to the full-text PDF, of course resulting in a direct download. In addition, the Top 5 Resources in Repository (sorted by external backlinks) as seen in Table 2 tallies with our consistently popular papers according to Google Analytics and our analog statistics.

It is apparent that the inclusion of repository links on domains such as Wikipedia and on blogging platforms appears to have a positive impact on the relevancy ranking of LSERO content in web searches. That is before counting direct hits on the links themselves, which add directly to the site’s visitors and thus to the dissemination of LSE research outputs. However, whether we can draw firm conclusions from the Majestic report remains to be seen, particularly with such differing results from those found on Google.

Thanks to my colleague Peter Spring for his advice when writing this post.

About the Author

Natalia Madjarevic is the manager of LSE Research Online, LSE Theses Online and LSE Learning Resources Online, the repositories of The London School of Economics and Political Science.

Natalia is also the Academic Support Librarian for the Department of Economics and LSE Research Lab. She joined LSE in 2011; prior to that she worked at libraries including UCL, The Guardian and Queen Mary, University of London. Her professional interests include Open Access, research support, the REF, bibliometrics and digital developments in libraries.

Posted in Evidence, Guest-post, Repositories | 4 Comments »

SEO Analysis of WRAP, the Warwick University Repository

Posted by ukwebfocusguest on 23 Oct 2012

SEO Analysis of a Selection of Russell Group University Repositories

A post published in August 2012 on a MajesticSEO Analysis of Russell Group University Repositories highlighted the importance of search engine optimisation (SEO) for enhancing access to research papers and provided summary statistics of the SEO rankings for 24 Russell Group university repositories. This post is part of a series of follow-up articles on individual repositories.

This work adopted an open-practice approach, in which the initial findings were published at an early stage in order to solicit feedback on the value of such work and the methodology used. There was much interest in this initial work, especially on Twitter. Subsequent email discussions led to a number of repository managers at Russell Group universities agreeing to publish more detailed findings for their repositories, together with contextual information about the institution and the repository which I, as a remote observer, would not be privy to.

We agreed to publish these findings on this blog during Open Access Week. I am very grateful to the contributors for finding time to carry out the analysis and publish the findings during the start of the academic year – a very busy period for those working in higher education.

The initial post was written by Yvonne Budden, the repository manager for WRAP, the Warwick Research Archives Project. It is appropriate that this selection of guest blog posts begins with a contribution about the Warwick repository, as Jenny Delasalle, a colleague of Yvonne’s at the University of Warwick, and I will be giving a talk on “What Does The Evidence Tell Us About Institutional Repositories?” at the ILI 2012 conference to be held in London next week.

SEO Analysis of the University of Warwick’s Research Repositories

The following summary of a MajesticSEO survey of the University of Warwick’s research repositories, together with background information about the university and the repository environment has been provided by Yvonne Budden.

A Little Background on Warwick

The University of Warwick is one of the UK’s leading universities with an acknowledged reputation for excellence in research and teaching, for innovation and for links with business and industry. Founded in 1965 with an initial intake of 450 undergraduates, Warwick now has in excess of 22,000 students and employs close to 5,000 staff. Of those staff just fewer than 1,400 are academic or research staff. Warwick is a research intensive institution and our departments cover a wide range of disciplines, including medicine and WMG, a specialist centre dedicated to innovation and business engagement. In the 2008 RAE nineteen of our departments were ranked in the top ten for their unit of assessment and 65% of the submitted research outputs were ranked 3* or 4*.

University of Warwick’s Research Repositories

Warwick’s research repositories began in the summer of 2008 with the Warwick Research Archives Project (WRAP), a JISC-funded project that created a full text, open access archive for the University. Funding for WRAP was taken on by the Library and in April 2011 we launched the University of Warwick Publications service, which was designed to ‘fill the gaps’ around the WRAP content with a comprehensive collection of work produced by Warwick researchers. The services run on the same technical infrastructure but WRAP remains distinct and exposes only the full text open access material held. The system runs on the most recent version of the EPrints repository software, using a number of plugins for export, statistics monitoring and, most recently, to assist in the management of the REF2014 submission. To date we do not have a full text mandate for WRAP, and engagement with both WRAP and the Publications service varies across the departments. Deposit to the services is highly mediated through the repository team, so engagement is not necessarily reflected in the number of papers available per department, especially as some departments benefit more from the service’s policy of pro-active acquisition of new material where licenses allow. I would judge that our best engagement in terms of full text deposit comes from Social Science researchers, but we also have some strong champions in the Medical School, History, Life Sciences and Psychology.

Size and Usage Statistics

At the end of August 2012 WRAP contained 6,554 full text items covering a range of item types: journal articles, theses, conference papers, working papers and more. The Publications service contained a further 40,753 records. In terms of usage since its launch, the system has seen 900,997 visits according to Google Analytics, an average of just over 18,000 a month over the 50 months it has been active. To track downloads we use the EPrints plugin IR Stats, which counts file downloads either directly or through the repository interface. IR Stats will only count one download per twenty-four hours from each source, but will count multiple downloads if an item has multiple files attached. Over the life of WRAP the files held have been downloaded a grand total of 730,304 times, with 49.08% of downloads coming from Google or Google Scholar.
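The one-download-per-source-per-24-hours rule used by IR Stats can be sketched as follows. This is a simplified illustration rather than the actual EPrints plugin code, and the event fields (timestamp, source IP, file identifier) are assumptions:

```python
from datetime import datetime, timedelta

def count_downloads(events, window=timedelta(hours=24)):
    """Count downloads, ignoring repeats from the same source
    for the same file within a 24-hour window."""
    last_counted = {}  # (source, file) -> time of last counted download
    total = 0
    for when, source, file_id in sorted(events):
        key = (source, file_id)
        if key not in last_counted or when - last_counted[key] >= window:
            total += 1
            last_counted[key] = when
    return total

events = [
    (datetime(2012, 8, 1, 9, 0), "", "paper.pdf"),
    (datetime(2012, 8, 1, 11, 0), "", "paper.pdf"),    # repeat within 24h, not counted
    (datetime(2012, 8, 2, 10, 0), "", "paper.pdf"),    # more than 24h later, counted
    (datetime(2012, 8, 1, 9, 30), "", "appendix.pdf"), # different file, counted
]
print(count_downloads(events))  # 3
```

Note that, as the guest post says, an item with multiple attached files would generate one countable event per file under this scheme.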

Expectations of the Survey

Going into the survey using the MajesticSEO system I wasn’t sure what to expect from the results; the majority of the work we’ve done so far with statistics has used Google Analytics and the IR Stats package. Looking at the referral sources in our Google Analytics output I can identify a number of sources from which I might expect to see backlinks into the system, including our Business School and the Bielefeld Academic Search Engine (BASE), as well as a number of smaller sources. The Warwick Blogs service seems to have fallen out of favour over the past few years, with the number of hits from there dropping as people move to other platforms. Above all I’m most curious to see if the SEO analysis can help with the work I am doing in promoting the use of WRAP and the material within it. If this work can assist me in creating the kinds of ‘interest stories’ that help to persuade researchers to deposit, it could become another valuable source of information. We are also looking at expanding the range of metrics we have access to, looking at the IRUS project as well as the forthcoming updated version of IR Stats, recently demonstrated at Open Repositories 2012.

Our Survey Results

The data for this survey was generated on 10 September 2012 using the ‘fresh index’ option, although the images were captured on 19 October. The current results can be viewed if you have a MajesticSEO account (which is free to obtain). The summary for the site is given below, showing 413 referring domains and 2,523 backlinks.

Figure 1: MajesticSEO analysis summary for

At first glance this seems rather low in terms of backlinks; it also shows a fairly low number of educational domains linking to us. The top five backlinks into the system can be seen below, ranked as standard by the system by a combination of citation and trust flow:

Figure 2: Top 5 Backlinks

Interestingly this lists some of the popular referrers we see in Google Analytics driving traffic to us, but not some others I might have expected to see. The top referring domains are shown below:

Figure 3: Top Referring Domains

This is the only place in the results where Google features at all. The top five pages, as ranked by the flow metrics, show a fairly distinct anomaly: two of the pages do not list any flow metric information, despite this supposedly being the method by which they are ranked:

Figure 4: Findings Ranked by Flow Metrics

The top five pages as sorted by number of backlinks can be seen in the table below:

Ref No.   Ext. Backlinks   Ref. Domains   Citation Flow   Trust Flow
1         228              1              14              0
2         177              23             37              37
3         91               31             15              13
4         82               4              11              9
5         46               4              17              2

Table 1: Top 5 Pages, Sorted By Number of Links

These five items are as follows:

  1. A research paper on the impact of cotton in poor rural households in India.
  2. The WRAP homepage.
  3. A PDF of an economics working paper on currency area theory.
  4. A PDF of an economics working paper on happiness and productivity.
  5. The record for a PhD thesis on women poets.


The top ten backlinks into the WRAP system include a range of sources: this blog, two Wikipedia pages and two referrals from the PhilPapers repository, which monitors journals, personal pages and repositories for philosophy content. We also see two pages that collect literature on health topics linking back to us, a maths blog and the newsletter of the British Centre for Science Education.

Interestingly, in Figure 3 there is no mention of the University of Warwick or any of its related domains (the Business School’s domain, for instance). I assume this is because MajesticSEO excludes ‘self’ links, so as WRAP is a Warwick subdomain a lot of the links I am aware of are being excluded. This may also account for the lack of any backlinks from the Warwick Blogs service. Many of the domains listed here are blog platforms of one form or another, which may be because of the database-driven architecture of these platforms and the way the MajesticSEO system reads those links. For example, if a researcher puts a link to their most recent paper in WRAP in the frame of their blog and this propagates onto every post in the blog, does this count as a single link or as many? We are also seeing links from sources such as the BBC and Microsoft, where, again, it would be nice to be able to see who was linking to what, and from where, in these domains.

The top pages, as listed by number of backlinks in Table 1, show a trend for linking directly to the files of the full text material we hold in WRAP. This ties in nicely with the fact that item three is the most downloaded paper in WRAP over the lifetime of the repository, with 9,162 downloads to the end of August 2012. So in this case we can draw a tentative line between the number of downloads and the number of backlinks. However, we can’t follow this theory through: the most linked-to paper, paper 1 in Table 1, has been downloaded only a fraction of the number of times the currency working paper has. When listed by the flow metrics, as in Figure 4, the pages largely follow the results seen for the Opus repository at Bath and link to pages about the repository, apart from the two anomalous results which are ranked second and third despite having no citation or trust flow scores.
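The tentative link between backlinks and downloads could be probed more formally with a rank correlation across a sample of papers. A sketch, using the external backlink counts from Table 1 but entirely made-up download figures (WRAP's actual per-paper download counts are not reproduced here):

```python
def rank(values):
    """Assign ranks (1 = smallest); tied values get the average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the tied positions, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(xs, ys):
    """Spearman's rho: Pearson correlation computed on the ranks."""
    rx, ry = rank(xs), rank(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

backlinks = [228, 177, 91, 82, 46]        # external backlinks, from Table 1
downloads = [350, 2100, 9162, 4800, 900]  # illustrative figures only
print(round(spearman(backlinks, downloads), 2))  # -0.3
```

With these illustrative numbers rho is weakly negative, echoing the post's point that more backlinks do not straightforwardly mean more downloads; a real analysis would of course need the full download data and many more papers.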


I think when looking at metrics the most important thing for a repository manager is to be able to build stories around them, as these help researchers engage with the figures. Was this spike in downloads because the paper featured at a conference, or an author moved to a new institution, or for some other reason? What can I show my users that is going to help them decide to use us over other options, and to expend scarce time and resources maintaining a blog or Twitter account? The issue I have with the data we have discovered is that the number of backlinks into a repository will never conclusively prove that a paper will get more downloads, as ably illustrated by the example above. Many researchers are not interested in the fuzzy conclusions we can draw at this point; they want clear, conclusive proof that links = downloads = citations.

I also think that search engine performance is an increasingly difficult area to be really conclusive about, especially now users can ‘train’ their Google results to prefer the links they click on most often. This was recently a cause of concern for us when it was reported that our Department of Computer Science (DCS)’s EPrints repository was overtaking WRAP in the Google rankings and that WRAP no longer appeared until page two of the results. This wasn’t the case: because the user reporting this was heavily involved in computer science, his Google results had come to prefer the DCS repository over WRAP, as they were more relevant to his interests. In the same way, when I search for ‘RSP’ my top result is now the Repositories Support Project and not RSP the engineering company or the Peterborough health and safety firm, as it was initially.

We need always to be conscious of what researchers want from metrics and whether it is possible for us to give it to them. As with any metrics, we have to be explicit about what we are saying and what can be inferred from it. If we, as users of metrics, don’t understand how the metrics are derived or how the search engines’ ranking algorithms work, we won’t be able to confidently predict what we can do to improve them. It may also come down to the way researchers are using these services and for what purpose, which may be why we are not seeing any evidence of the use of services like and LinkedIn. I would imagine that if researchers are using services to showcase their work to prospective employers and other researchers, they may prefer to link to the publisher’s version of their work rather than the repository version. I suspect the interest story from the SEO data may be more about ‘who’ is linking to their work than where they are linking from, which is detail we cannot, and possibly should not, be able to provide.

About the Author

Yvonne Budden (@wrap_ed), the University of Warwick’s E-Repositories Manager, is responsible for WRAP, the Warwick Research Archive Portal, and is the current Chair of the UK Council for Research Repositories (UKCoRR).


Posted in Evidence, Guest-post, Repositories | 3 Comments »

Analysis of Google Search Traffic Patterns to Russell Group University Web Sites

Posted by Brian Kelly on 1 Oct 2012


How can we ensure that the wide range of information provided on university Web sites can be easily found? One answer is quite simple: ensure that such resources are easily found using Google. After all, when people are looking for resources on the Web they will probably use Google.

But what patterns do we find in search traffic to university Web sites? In a recent survey of search engine rankings it was observed that only one institutional Web site (at the University of Oxford) featured in the list of highly ranked Web sites which can help drive traffic to the institutional repository. It was also noticed that this Web site had a significantly lower Alexa ranking (6,187) than the other 15 Web sites listed, which had Alexa rankings ranging from 1 to 256.

In order to gain a better understanding of how Google may rank search results for resources hosted on university Web sites, the findings of a survey are published below. These provide graphs of recent search engine traffic and summarise the range of values found for the global and UK Alexa rankings and the Alexa ‘reputation’ scores across this sector.

About Alexa

From Wikipedia we learn that:

Alexa Internet, Inc. is a California-based subsidiary company of that is known for its toolbar and website. Once installed, the Alexa toolbar collects data on browsing behavior and transmits it to the website, where it is stored and analyzed, forming the basis for the company’s web traffic reporting. Alexa provides traffic data, global rankings and other information on thousands of websites, and claims that 6 million people visit its website monthly.

The article goes on to describe how:

Alexa ranks sites based on tracking information of users of its Alexa Toolbar for Internet Explorer and Firefox and from their extension for Chrome. 

This means that the Alexa findings should be treated with caution:

the webpages viewed are only ranked amongst users who have these sidebars installed, and may be biased if a specific audience subgroup is reluctant to do this. Also, the ranking is based on three-month data

Despite such limitations, the Alexa service can prove useful in helping those involved in providing large-scale Web sites gain a better understanding of the discoverability of their Web sites. The Alexa Web site describes how “Alexa is the leading provider of free, global web metrics. Search Alexa to discover the most successful sites on the web by keyword, category, or country“.

In light of the popularity of the service and the fact that, despite being a commercial service, it provides open metrics it is being used in this survey as part of an ongoing process which aims to provide a better understanding of the discoverability of resources on institutional Web sites.

Survey Using Alexa

The following definitions of the information provided by Alexa were obtained from the Alexa Web site:

The Global Alexa Traffic Rank is “An estimate of the site’s popularity. The rank is calculated using a combination of average daily visitors to the site and pageviews on the site over the past 3 months. The site with the highest combination of visitors and pageviews is ranked.”

The GB Alexa Traffic Rank is “An estimate of the site’s popularity in a specific country. The rank by country is calculated using a combination of average daily visitors to the site and pageviews on the site from users from that country over the past month. The site with the highest combination of visitors and pageviews is ranked #1 in that country.

The Reputation is based on the number of inbound links to the site: The number of links to the site from sites visited by users in the Alexa traffic panel. Links that were not seen by users in the Alexa traffic panel are not counted. Multiple links from the same site are only counted once. 

The graph showing traffic from search engines gives the percentage of site visits from search engines.

The average traffic is based on the traffic over the last 30 days.

The data was collected on 20 September 2012 using the Alexa service. Note that the current findings can be obtained by following the link in the final column.

The graphs for the traffic from search engines contain a snapshot taken on 20 September 2012 together with the live findings provided by the Alexa service. The range of findings for the Alexa rank and reputation is provided beneath the table.

Table 1: Alexa Findings for Russell Group University Web Sites
Institution                        Traffic from Search Engines   Current findings
                                   (18 Aug – 17 Sep 2012;
                                   captured 20 Sept 2012)
University of                      19.7%                         [Link]
University of                      22.8%                         [Link]
University of                      24.0%                         [Link]
Cardiff University                 26.1%                         [Link]
University of                      25.1%                         [Link]
University of                      26.9%                         [Link]
University of                      26.7%                         [Link]
University of
Imperial College                   31.0%                         [Link]
King’s College                     19.9%                         [Link]
University of                      31.7%                         [Link]
University of                      22.5%                         [Link]
LSE                                22.5%                         [Link]
University of                      25.8%                         [Link]
                                   15.7%                         [Link]
University of                      20.5%                         [Link]
University of Oxford               26.8%                         [Link]
Queen Mary, University of London   20.2%                         [Link]
                                   14.1%                         [Link]
University of                      17.4%                         [Link]
University of Southampton          21.9%                         [Link]
UCL                                26.7%                         [Link]
University of                      29.6%                         [Link]
University of                      23.5%                         [Link]

Survey Paradata

This survey was carried out using the Alexa service on Thursday 20 September. The Chrome browser running on a Windows 7 platform was used. The domain name used in the survey was taken  from the domain name provided on the Russell Group University Web site. The snapshot of the traffic shown in column 2 was captured on 20 September. Column 3 gives a live update of the findings from the Alexa service. Note that if the live update fails to work in the future this column will be deleted.


The Russell Group university Web sites have global Alexa rankings ranging from 6,318 to 75,000 and UK Alexa rankings ranging from 748 to 6,110. In comparison, in the global rankings Facebook is ranked at number 1, YouTube at 3, Wikipedia at 6, Twitter at 8, Blogspot at 22 and the BBC at 59.

The Russell Group university Web sites have “reputation” scores ranging from 4,183 to 43,917, based on the number of domains with links to the sites which have been followed in the past month. Although the algorithms used by Google to determine the ranking of search results are a closely-kept secret (and are liable to change to prevent misuse), the number of linking domains, together with the ranking of those domains, is used by Google in its algorithms for ranking search results. According to the survey, search engines delivered between 14% and 31% of the traffic to the Web sites during August-September 2012.


In addition to the limitations of the data provided by Alexa summarised above, it should be noted that we should not expect institutions to seek to maximise their Alexa rankings purely for their own sake. We would not expect university Web sites to be as popular as global social media services. Similarly it would be unreasonable to expect the findings to be used in a league table. However, universities may well be exploring SEO approaches, and perhaps commissioning SEO consultants to advise them. This post therefore aims to provide a factual summary of findings from a service which may be used for in-house analysis or by third parties who have been commissioned to advise on SEO strategies for enhancing access to institutional resources.


This survey was published in September since we might expect traffic to recover from a lull during the summer vacation and increase as students prepare to arrive at university. It will be interesting to see how the pattern changes over time and, since this page contains a live feed from Alexa in the final column of the table, it should be easy to compare the current patterns across the Russell Group universities.

This initial survey has been carried out in order to provide a benchmark for further work in this area and to invite feedback. Further work is planned which will explore in more detail the Web sites which drive search engine traffic to institutional Web sites, in order to identify strategies which might be used to enhance search engine traffic.

It should be noted that this data has been published in an open fashion so that the methodology can be validated and the wider community can benefit from the findings, from open discussion of the approaches taken to data collection, and from discussion of how such evidence might inform plans for enhancing the discoverability of content hosted on institutional Web sites. Feedback on these approaches would be appreciated.

Twitter conversation from: [Topsy]

Posted in Evidence, search | Leave a Comment »

Google Search Results for Russell Group Universities Highlight Importance of Freebase

Posted by Brian Kelly on 24 Sep 2012

About This Post

This post summarises the findings of a survey of the Google search engine results for Russell Group universities. The post provides access to the findings which were obtained recently, with live links which enable the current findings to be viewed. The post explains how additional content, beyond the standard search results snippet, is obtained and discusses ways in which Web managers can manage such information.


The Importance of Google Search

An important part of my work in supporting those who manage institutional Web services is evidence-gathering. The aim is to help identify approaches which can inform practice for enhancing the effectiveness of institutional Web services.

This post summarises the findings for Google searches for institutional Web sites. Google plays an important role in helping users find content on institutional Web sites. But Google nowadays not only acts as a search engine, it also provides navigational aids to key parts of an institutional Web site and hosts content about the institution.

An example of a typical search for a university is shown below; in this case a search for London School of Economics. As can be seen, the results contain navigational elements (known as ‘sitelinks‘); a search box (which enables the user to search the institutional Web site directly); a Google map; a summary from Wikipedia and additional factual content, provided by Google.

Findings of a Survey of Search Results for Russell Group Universities

Are the search results similar across all institutions? And if there are significant differences, should institutions be taking action to ensure that additional information is being provided or even removed?

In order to provide answers to such questions a search for the 24 Russell Group universities was carried out on 17 September 2012. The findings are given in the table shown below. Note that the table is in alphabetic order.  Column 2 gives the name of the institution and the search term used; column 3 gives the sitelinks provided; column 4 states whether a search box was embedded in the results; column 5 states whether a Google Map for the institution was provided; column 6 lists the titles of the factual content provided; column 7 provides a link to the Wikipedia entry if this was provided and column 8 provides a link to the search findings, so that up-to-date findings can be viewed (which may differ from those collected when the survey was carried out).

Table 1: Google Search Findings for Russell Group Universities
Columns: Institution / Search term; Main search results (on left of Google results page): Sitelinks, Search box?, Google Map?; Additional results (on right of Google results): Factual information categories from Google, Wikipedia content; View results.
University of Birmingham

Course finder – Jobs

Postgraduate study at … – Contact us

Accommodation – Schools and Departments

No Yes At a glance; Transit; More reviews [Search]
University of Bristol

Undergraduate Prospectus – Faculties and Schools

Jobs – Study

Contacting people – International students

No Yes Motto; Address; Enrollment; Phone; Mascot; Hours [Link]  [Search]
University of Cambridge

Job Opportunities – Contact us

Undergraduate – Visitors

Hermes Webmail Service – Staff & Students/

Yes Yes Motto; Address; Color; Phone; Enrollment; Hours [Link]  [Search]
Cardiff University

For… Current Students – International students

Job Opportunities – For… Staff

Prospective Students – Contact Us

Yes Yes Address; Phone; Enrollment; Colors [Link] [Search]
University of Durham

Undergraduate – Visit us

Postgraduate Study – Student Gateway

Staff Gateway – Courses

Yes Yes Address; Phone; Colors; Enrollment; Founded [Link] [Search]
University of Edinburgh – Staff and students

Studying at Edinburgh – Research

Schools & departments – Summer courses

Yes Yes Address; Acceptance rate; Phone; Enrollment; Founded; Colors [Link] [Search]
University of Exeter

Undergraduate study – Contact us

Postgraduate study – Visiting us

Working here – Studying

Yes Yes Address; Enrollment; Phone; Colors [Link] [Search]
University of Glasgow

Undergraduate degree … – MyGlasgow for students

Postgraduate taught degree … – Information for current students

Jobs at Glasgow – Courses

Yes Yes Address; Phone; Acceptance rate; Enrollment; Founded; Colors [Link] [Search]
Imperial College

Postgraduate Prospectus – My Imperial

Courses – Employment

Faculties & Departments – Prospective Students

Yes Yes Motto; Address; Phone; Acceptance rate; Enrollment; Colors [Link] [Search]
King’s College London

Postgraduate Study – Job opportunities

Undergraduate Study – Florence Nightingale School of …

Department of Informatics – School of Medicine

Yes Yes Address; Phone; Mascot; Enrollment; Founded; Colors [Link] [Search]
University of Leeds

Undergraduate – University jobs

Postgraduate – School of Mathematics

Portal – Coursefinder

Yes Yes Address; Phone; Enrollment; Founded; Colors [Link] [Search]
University of Liverpool

Students – Job vacancies

Postgraduate – Online degrees

Undergraduate – Departments and services

Yes Yes Address; Phone; Enrollment; Acceptance rate; Founded [Link] [Search]
London School of Economics

Impact of Social Sciences – Department of Economics

Undergraduate – Library

Graduate – LSE for You

Yes Yes Address; Phone; Enrollment; Mascot; Founded; Colors [Link] [Search]
University of Manchester

Postgraduate – Courses

Undergraduate – Contact us

Job opportunities – John Rylands Library

 Yes Yes Enrollment; Founded; Colors [Link] [Search]
Newcastle University

Undergraduate Study – Postgraduate Study

Student Homepage – Contact Us

Vacancies – Examinations

Yes Yes Address; Phone; Enrollment; Founded; Colors: [Link] [Search]
University of Nottingham

Undergraduate Prospectus – Open days

Postgraduate Study at the … – Visiting us

Jobs – Academic Departments A to Z 

Yes Yes Address; Phone; Enrollment; Founded; Colors: [Link] [Search]
University of Oxford

Jobs and Vacancies – Online and distance courses

Undergraduate admissions – Colleges

Graduate Admissions – Maps and Directions

Yes No Acceptance rate; Color; Enrollment [Link] [Search]
Queen Mary, University of London
 No Yes Address; Phone; Enrollment; Colors [Link] [Search]
Queen’s University Belfast

Course Finder – Queen’s Online

Postgraduate Students – Job Opportunities at Queen’s

Schools & Departments – The Library

Yes Yes Address; Phone; Enrollment; Founded [Link] [Search]
University of Sheffield

MUSE – Postgraduates

Jobs – Courses and Prospectuses

Undergraduates – Departments

Yes No

Enrollment; Founded; Colors:

[Link] [Search]
University of Southampton

Undergraduate study – University contacts

Postgraduate study – International students

Faculties – Medicine

No Yes Address; Enrollment; Founded [Link] [Search]
University College London

Prospective Students – Research

Philosophy – About UCL

Economics – Teaching and Learning Portal

Yes No Enrollment; Founder; Founded; Colors [Link] [Search]
University of Warwick

University Intranet – Undergraduate Study

Postgraduate Study – Visiting the University

Current Vacancies – Open Days

Yes Yes Address; Phone; Enrollment; Acceptance rate; Founded; Colors [Link] [Search]
University of York

Jobs – Postgraduate study

Undergraduate study – Staff home

Student home – Departments

 No Yes Address; Enrollment; Hours; Phone; Founded; Colors [Link] [Search]

Note: This information was collected on 17 September 2012 and checked on 18 September 2012. It should also be noted that, since Google search results can be personalised based on a variety of factors (previous searches, the client used to search, etc.), others carrying out the same search may get different results.


We can see that 21 Russell Group University Web sites have a Google Map; 19 have a search interface on Google. The following table summarises the areas of factual information provided. The table is listed in order of the numbers of entries for each category. Note that the American spellings for ‘enrollment‘ and ‘color‘ are used in the Google results.

Table 2: Summary of the Categories Found
Ref. No. Type      Number
 1  Enrollment 23
 2  Address 19
 3  Color(s) 19
 4  Phone 18
 5  Founded 16
 6  Acceptance rate   6
 7  Mascot   3
 8  Motto   3
 9  Hours   2
10  Founder   1
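A tally like Table 2 can be produced mechanically from the per-institution category lists in Table 1. A sketch using a subset of the survey rows:

```python
from collections import Counter

# Factual-information categories shown in the Google results for each
# institution (a subset of the survey rows, for illustration).
results = {
    "University of Bristol": ["Motto", "Address", "Enrollment", "Phone", "Mascot", "Hours"],
    "University of Cambridge": ["Motto", "Address", "Color", "Phone", "Enrollment", "Hours"],
    "Cardiff University": ["Address", "Phone", "Enrollment", "Colors"],
}

# Count how many institutions display each category, most frequent first.
tally = Counter(cat for cats in results.values() for cat in cats)
for category, count in tally.most_common():
    print(f"{category:12} {count}")
```

With the full 24 rows this reproduces the counts shown in Table 2; note that a real tally might want to merge the 'Color' and 'Colors' variants, which Table 2 treats as one category.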

In addition, the search results also included information on ratings and Google reviews: 15 Russell Group university Web sites have a Google rating and 17 have a Google review. The number of Google reviews ranged from 1 to 208. Note that this information may well be susceptible to the ‘TripAdvisor syndrome’, in which people with vested interests give either very high or very low scores.



The navigational elements are referred to as ‘sitelinks’ by Google. As described on the Google Webmaster Tools Web site:

Sitelinks are meant to help users navigate your site. Our systems analyze the link structure of your site to find shortcuts that will save users time and allow them to quickly find the information they’re looking for

The creation of sitelinks is an automated process. However, as described on the Google Webmaster Tools Web site, if a sitelink URL is felt to be inappropriate or incorrect, a Webmaster who has authenticated ownership of the Web site with the Google Webmaster Tools can demote up to 100 such links.

It should also be noted that during the final checking of the findings, carried out on 21 September 2012, it was found that the sitelinks for the University of Exeter had changed over a period of 5 days. The initial set of six sitelinks, listed above, was: Undergraduate study; Contact us; Postgraduate study; Visiting us; Working here; Studying. The more recent list is: Undergraduate study; Working here; Postgraduate study; Contact us; International Summer School; Studying.

Google Content

Although I suspect the findings for location maps won’t be a significant issue for universities (unlike, say, for small businesses), it was the factual content provided by Google which seemed to be of most interest. The display of such factual information is a recent development. On 16 May 2012 a post on the GigaOM blog announced Google shakes up search with new Wikipedia-like feature, which described how “the search giant is carving out a chunk of the site for “Knowledge Graph”, a tool that offers an encyclopedia-like package in response to a user’s query“. I highlighted the importance of the announcement in a post entitled Google Launches Knowledge Graph and, as Martin Hawksey commented, “As Freebase uses Wikipedia as its main data source having information in there is important but it’s in Freebase that structure is added to individual entities to make the knowledge graph“.

This factual information appeared to be the most interesting aspect of the survey. A summary of the Freebase service is given below, together with a discussion of the implications for management of content hosted in Freebase.

Thoughts on Freebase

It was back in 2007 when I first became aware of Freebase. As I described in a report on the WWW2007 conference Freebase is “an open Web 2.0 database, which has been exciting many Web developers recently“, with a more detailed summary being provided in Denny Vrandecic’s blog posting. However since then I have tended to focus my attention on the importance of Wikipedia and haven’t been following developments with Freebase apart from the announcement in 2010 of the sale of Freebase to Google.

Looking at the Freebase entry for the University of Oxford it seems there are close links between Freebase and Wikipedia. As shown in the screen image, the textual description for the University of Oxford is taken from the Wikipedia entry. Just like Wikipedia it is possible to edit the content (see the orange Edit This Topic button in the accompanying screen shot) which allows anyone with a Freebase account to update the information.

As with Wikipedia, Freebase provides a history of edits to entries. Looking at the edits to the University of Oxford entry we can see many edits have been made. However most of these related to the assignment of the entry to particular categories e.g. Education (Education Commons). It was initially unclear to me how easy it would be to detect incorrect updates to the entry, whether made by mistake or maliciously.

In order to understand the process for updating Freebase entries I updated, with the permission of Rob Mitchell, the University of Exeter Web Manager, the enrollment figure for his institution from 15,720 (the 2006 figure) to 18,542 (the 2011 figure). The updating process was simple to use and the new data was immediately made available in the University of Exeter Freebase entry. Rob will be monitoring the Google search results in order to see how long it takes before the update is available. We might reasonably expect (indeed hope) that there will be a manual process for verifying the accuracy of updates made to Freebase articles.

It does seem to me that those involved in university marketing activities, or those with responsibilities for managing a university’s online presence, may wish to take responsibility for managing the information provided on Freebase. Is the management of factual information about institutions hosted on Freebase something which institutions are currently doing? If so, is this limited to annual updates of enrollment figures, etc. or is new information being provided?

Twitter conversation from: [Topsy]

Posted in Evidence, search | Tagged: | Leave a Comment »

Posters, Infographics and Thoughts on JISC and C21st Scholarship

Posted by Brian Kelly on 12 Sep 2012

In a recent post on Wikipedia in universities and colleges? published on the JISC blog Amber Thomas mentioned her contribution to the Eduwiki 2012 conference which took place in Leicester last week.

Amber’s post included a poster entitled JISC on C21st Scholarship and the role of Wikipedia which I’ve embedded in this post.

Amber described the image as an “infographic” which generated some debate on Twitter regarding the difference between an infographic and a poster. This led to recollections of a passionate discussion at the IWMW 2012 event on the difference between infographics and data visualisation.

It seems that data visualisation provides a view of an entire data set, whereas an infographic is a lossy representation which focusses on the particular aspects of the data that its creator wishes to highlight. A poster might be described as an infographic without the data.

The accompanying image does contain, in its depiction of the education level of Wikipedia users, a certain amount of ‘infographical’ information, but the remainder is a poster. I think we can conclude that there are fuzzy boundaries between posters and infographics.

There is probably, however, less fuzziness between those who find infographics useful and those who dismiss them as marketing mechanisms for presenting a particular viewpoint while hiding the underlying complexities. This, at least, lay behind the passionate discussion that took place late one evening at IWMW 2012!

Such discussions frequently take place in the context of scientific communications. There are those who value the importance of communicating the implications of scientific research to the general public and feel that going into the details will tend to alienate the public. However such approaches can be dismissed by others who feel that they result in a dumbing-down of the complexities.

I came across these issues earlier this year when I spoke at a day’s event on “Dealing With the Media” organised by the AHRC (Arts and Humanities Research Council). The event was aimed at recipients of AHRC grants and outlined the experiences of those who had been successful in maximising the visibility of their research through engagement with mass media. Other speakers described strategies for ‘selling’ your story to those who commission articles for the BBC or publications such as the Guardian and the Times Higher Education. The importance of giving a brief and simple message was made by a number of the speakers.

I am in favour of the use of infographics to help put across complex arguments. I was particularly impressed with Amber’s approach, as she not only produced the infographic which I have illustrated but also integrated the points given in the infographic into the slides she used in her presentation. In addition Amber has provided a document giving the sources of the materials she used in her presentation.

Amber seems to be suggesting approaches which could benefit others who might wish to enhance the impact of their work. This is, of course, of importance across the sector, as can be seen from the ESRC’s recent announcement of their Impact Toolkit. This addresses areas such as: What is impact?; Why make an impact?; What the ESRC expects; How to maximise impact; Developing a strategy; Impact tools; Taking research to Westminster; Contact government organisations; Getting social science research into the evidence base in government; Knowledge exchange; Public engagement; Impact resources and ESRC Pathways to Impact for Je-S applications.

It strikes me that as well as learning from such resources, it may also be helpful to share the tools and the approaches taken in producing infographics and posters. It may be that such work will be provided by a graphics unit with expertise in this area, but this may only be a realistic solution for high-profile outputs. Perhaps we should all seek to develop expertise in this area? The tool Amber used in the production of her poster might provide a useful starting point. Examples of other tools for creating infographics are also available. But perhaps more importantly, besides Amber’s example, has anyone examples of good posters and infographics related to development work which we can learn from?

NOTE: Tony Hirst has provided Delicious bookmarks of services for creating infographics which include Piktochart- Infographic & Presentation and

Twitter conversation from Topsy: [View]

Posted in Evidence, General | 1 Comment »

MajesticSEO Analysis of Russell Group University Repositories

Posted by Brian Kelly on 29 Aug 2012

Investigation of SEO Rankings of Institutional Repositories

There is a need “to investigate whether links [from popular social media services] are responsible for enhancing SEO rankings of resources hosted in institutional repositories” concluded the paper by myself and Jenny Delasalle which asked “Can LinkedIn and Academia.edu Enhance Access to Open Repositories?“.

The importance of SEO rankings for surfacing content hosted in institutional repositories can be gauged from the responses to the query I asked on the JISC-Repositories JISCMail list: “Does anyone have any statistics on the proportion of traffic which arrives at institutional repositories from Google?”. I asked a similar question on Twitter and found that mature research repositories seem to get from 50-80% of their traffic from Google. This aligns with the findings reported by Les Carr for the University of Southampton back in 2006: “the majority of repository use, if I can equate eprint downloads with repository use, is due to external web search engines (64%)“. Indeed, since it has been reported that direct downloads of PDFs hosted in repositories may not be recorded unless Google Analytics has been configured appropriately, such figures may be an underestimate!
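As an aside, the proportion of repository traffic referred by Google can be estimated directly from server access logs, sidestepping the Google Analytics configuration issue for PDF downloads. The Python sketch below is a minimal, hypothetical illustration: the sample lines and the combined-log-format referrer field are invented examples, not taken from any real repository log.

```python
# Hypothetical sketch: estimate the share of repository requests referred by
# a Google domain, from combined-format access log lines. The sample lines
# are illustrative only.
import re

# Matches: "REQUEST" STATUS BYTES "REFERRER"
REFERRER_RE = re.compile(r'"[^"]*" \d{3} \d+ "(?P<ref>[^"]*)"')

def google_share(log_lines):
    """Return the fraction of parsed requests whose referrer is a Google domain."""
    total = 0
    from_google = 0
    for line in log_lines:
        m = REFERRER_RE.search(line)
        if not m:
            continue
        total += 1
        if "google." in m.group("ref"):
            from_google += 1
    return from_google / total if total else 0.0

sample = [
    '1.2.3.4 - - [27/Aug/2012:10:00:00 +0100] "GET /1234/ HTTP/1.1" 200 512 "http://www.google.com/search?q=test"',
    '1.2.3.5 - - [27/Aug/2012:10:01:00 +0100] "GET /1234/paper.pdf HTTP/1.1" 200 9000 "http://scholar.google.co.uk/"',
    '1.2.3.6 - - [27/Aug/2012:10:02:00 +0100] "GET /5678/ HTTP/1.1" 200 512 "http://example.ac.uk/staff/"',
    '1.2.3.7 - - [27/Aug/2012:10:03:00 +0100] "GET /9012/ HTTP/1.1" 200 512 "-"',
]
share = google_share(sample)
```

On the four sample lines, two requests arrive via Google domains (including Google Scholar), giving a share of 50%; a real repository would of course analyse logs over a much longer period.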

In light of the importance of Google in supporting repositories in their mission of making research papers easily accessible to others it will be useful to gain a better understanding of the factors which contribute to supporting the discoverability of the content hosted in institutional repositories.

The survey described in this post reports on summary SEO findings for the 24 Russell Group universities. The aims of the survey are to provide a benchmark for comparisons with surveys which may be carried out in the future, to attempt to identify any interesting usage patterns which may help to enhance the effectiveness of institutional repositories and to identify the highest ranked domains which provide links to institutional repositories.

Survey Using MajesticSEO

The data was collected on 27-28 August 2012 using the MajesticSEO service. Note that the current findings can be obtained by following the link in the final column; the findings can be viewed if you have signed up to the free service.

Table 1: MajesticSEO Findings for Repositories Hosted at Russell Group Universities
Institutional Repository Details Referring
Top Five Domains & Numbers of Links View Results
Repository used: eprint Repository
 116  499  146  16 6,424 4,658 200 82 67
Institution: University of Bristol
Repository used: ROSE
 159  691 144  21 7,871 6,692 273 98 89
Repository used: DSpace @ Cambridge
  86 7,339  283  97 33,276 17,241 1,771 449 442
Institution: Cardiff University
Repository used: ORCA
   22     58     9    4 1,874 883 250 85 60
Institution: University of Durham

Repository used: DRO

297 1,281   27   12 5,430 3,020 145 76 45
Repository used: ERA
747  3,943  247  71 14,380 9,845 470 401 296
Institution: University of Exeter
Repository used: ERIC
Note: Repository sub-domain not used. See footnote 2.
198   958  175   18 1,125 1,115 45 43 42
Institution: University of Glasgow
Repository used: Enlighten
 4,868 423  62 5,880 5,087 322 178 135
Institution: Imperial College
Repository used: Spiral
 139  702 329  11 3,363 1,883 121 119 65
 37 2,552 2,275 169 160 139
Institution: University of Leeds
Repository used: White Rose Research Online
 700 4,847 1,354    2 44 23 13 8 5
Repository used: Research Archive
 297   147    8 4,057 2,461 97 55 53
Repository used: LSE Research Online
 1,365 9,771  549   80 14,449 11,550 343 262 244

Repository used: eScholar

Note: Repository sub-domain not used. See footnote 3.
 (5)  (29)  – [Link]
Institution: Newcastle University

Repository used: Newcastle Eprints

 30  215  85    5 6,425 3,929 221 116 87
Repository used: Nottingham Eprints
 359 1,594 328   57 5,410 3,856 148 77 66
Institution: University of Oxford
Repository used: ORA
 299  1,116  94  35 42,008 39,798
1,437 548 504
Repository used: QMRO
  27  449  350   6 4,722 1,221 259 219 89

Note: Repository sub-domain not used. See footnote 4.
 (9)  (14)  –  – [Link]
Repository used: DCS Publications Archive

Note: Repository sub-domain not used. See footnote 5.

Note: The University of Sheffield also uses the White Rose repository which is also used by Leeds and York. See the Leeds entry for the statistics.

 (2)   (3)  –  –  [Link]
Repository used: eprints.soton
46,176 33,524 123 4,384 2,568 264 138 89
Repository used: UCL Discovery
 13,978 492   24 16,009 15,633 860 406 250
Institution: University of Warwick

Repository used: WRAP

 2,476 278    20 9,412 7,601 217 179 122
Institution: University of York
Repository used: YODL
Note: Repository sub-domain not used. See footnote 6.
Note: The University of York also uses the White Rose repository, which is shared with Leeds and Sheffield. See the Leeds entry for the statistics.
 (3)  (5)  –  –  [Link]
Range  14 – 1,369  37 – 46,176  9 – 33,524  2 – 123


  1. The list of repositories is taken from OpenDoar.
  2. The ERIC repository at the University of Exeter is hosted at Since the repository home page is a redirect from it was possible to analyse the SEO rankings and get appropriate results.
  3. The eScholar repository at the University of Manchester is hosted at  Figures for this home page are given but since the domains with incoming links may refer to pages hosted on the domain, these figures are not given in order to avoid skewing the findings.
  4. The Queen’s University Belfast repository is hosted at Figures which are available for this home page are given but since the domains with incoming links may refer to pages hosted on the domain, these figures are not given in order to avoid skewing the findings.
  5. The DCS repository at the University of Sheffield is hosted at Figures which are available for this home page are given but since the domains with incoming links may refer to pages hosted on the domain, these figures are not given in order to avoid skewing the findings.
  6. The YODL repository of the University of York is hosted at Figures which are available for this home page are given but since the domains with incoming links may refer to pages hosted on the domain, these figures are not given in order to avoid skewing the findings.

Table 2 gives the total number of links from the high-ranking domains which are listed in the survey, together with the Alexa ranking for these domains. Note that Google has the highest Alexa ranking and is listed at number 1. Figure 1 shows the significance of links from blog platforms compared with the other most highly-ranked domains.

Figure 1: Histogram of number of incoming links from top domains

Table 2: Nos. of Links from High-Ranking Domains
No. Domains No. of links Alexa Ranking
1 Blogspot  176,625       5
2 WordPress  153,809     21
3 Wikipedia     7,230       8
4 BBC     2,811     36
5 Google    1,447       1
6 Ask       769     46
7 YouTube       460       3
8 Guardian       334    187
9 Reddit       261    143
10       259    259
11 Typepad       250   212
12 CNN      135     43
13 Microsoft       89     26
14 Sourceforge       67    139
15 Ning       42    256
16 Oxford University         5 6,764
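To illustrate how the two columns of Table 2 might be combined, the Python sketch below ranks a handful of the surveyed domains by an ad-hoc ‘influence’ score (links divided by the logarithm of the Alexa rank). The scoring formula is purely illustrative – it is not part of MajesticSEO or Alexa – but it captures the intuition that many links from a highly-ranked domain matter most.

```python
# Illustrative only: combine link counts (Table 2) with Alexa rankings to
# flag domains that are both heavily linked and highly ranked. The score
# is an ad-hoc heuristic, not an official metric.
import math

domains = {  # domain: (links, alexa_rank), figures taken from Table 2
    "Blogspot": (176625, 5),
    "WordPress": (153809, 21),
    "Wikipedia": (7230, 8),
    "BBC": (2811, 36),
    "Google": (1447, 1),
    "Oxford University": (5, 6764),
}

def influence(links, rank):
    """Ad-hoc score: more links and a better (lower) Alexa rank score higher."""
    return links / math.log10(rank + 1)

ranked = sorted(domains, key=lambda d: influence(*domains[d]), reverse=True)
```

Unsurprisingly, the two blog platforms come out on top, while the Oxford University Web site, despite its prestige, comes last.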


In a previous post I suggested that since LinkedIn is so widely used across Russell Group universities, encouraging researchers to provide links to their papers hosted in their institutional repository would enhance the visibility of papers to Google, especially since LinkedIn has such a high Alexa ranking (it is currently listed at number 13 in the global ranking order).

However LinkedIn does not appear to have a significant presence according to the findings provided by MajesticSEO (although the free version does only list the top five domains).

Based on the information obtained in the survey it would appear that two blog platforms, Blogspot and WordPress, are primarily responsible for driving traffic to institutional repositories, having both high Alexa rankings and large numbers of links to the repositories.

Following these two platforms, but a long way behind, we find Wikipedia and the BBC and then, perhaps somewhat confusingly, Google itself (perhaps links from Google Scholar). The presence of media sites such as the BBC, CNN and the Guardian suggests that researchers (or their media advisers) are doing a good job in ensuring that these organisations provide links to original research papers when stories about university research are covered in the media.

But perhaps the most noticeable finding is that only one university Web site – Oxford’s – is included in the list of the top five domains across all of the Russell Group universities. The low Alexa ranking (6,764) for the Oxford University Web site in comparison with the other sites listed (which have Alexa rankings ranging from 1 to 259) suggests that links from university Web sites, even prestigious universities such as Oxford, will not have a significant impact on Google search results. It should also be noted that links from the University of Oxford Web site will not provide SEO benefits to the University of Oxford’s repository, which is hosted in the same domain.

Limitations of this Survey

It should be noted that these conclusions are based on just one SEO tool and only a small selection of the findings are available. A more comprehensive survey would make use of the licensed version of the service, and make use of other SEO tools to compare the findings.

In addition, Google do not publish the algorithms by which their search results are ranked, so there can be no guarantee that the findings provided by SEO tools will relate directly to users’ experiences of using Google.

In order to relate these findings to the ways users access resources hosted on a repository there will be a need to examine usage statistics for repositories. It would be interesting to see if the downloads for the most popular items show any correlation with links from the services listed above.
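Such a correlation check could use a rank correlation coefficient, which copes well with the skewed distributions typical of download counts. The Python sketch below computes Spearman’s rho from first principles; the per-item link and download figures are invented purely for illustration.

```python
# Minimal sketch: do items with more incoming links also see more downloads?
# Spearman's rank correlation, computed as Pearson correlation on the ranks.

def ranks(values):
    """Rank values from 1 (smallest), averaging ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(xs, ys):
    """Spearman rank correlation coefficient of two equal-length sequences."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

incoming_links = [120, 45, 300, 10, 80]  # hypothetical per-item link counts
downloads = [950, 400, 2100, 90, 700]    # hypothetical per-item downloads
rho = spearman(incoming_links, downloads)
```

In this toy example the two invented series rank identically, so rho is 1.0; real repository data would, of course, be far noisier.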

Survey Paradata: The findings given in Table 1 were collected on 27-28 August 2012 using the free version of MajesticSEO. The Alexa rankings listed in Table 2 were obtained from the Alexa service on 28 August 2012. Where the findings from MajesticSEO were incomplete, due to the repository not being hosted at the root of a repository sub-domain, this was recorded and any data collected was not included in further analysis.

Twitter conversation from: [Topsy] – [SocialMention] – [WhosTalkin]

Posted in Evidence, Repositories | 15 Comments »

Where Do You Go To (My Lovely)?

Posted by Brian Kelly on 24 Aug 2012

Where Do Visitors To This Blog Go To?

Where do visitors to this blog go when they click on a link published in a blog post? When I looked at the click statistics for the past year I was surprised that the top ten pages, with just one exception, were the home pages of a number of UK universities: Abertay, Aston, Cambridge, Bangor, Buckingham, Glasgow, ECA, Exeter and Falmouth. I subsequently found that these were nine of the 26 institutions which had been hyperlinked in a post on Best UK University Web Sites – According to Sixth Formers published in 2010.

Apart from the links followed from this single post the other top web sites visited in the past year are:,,,, and

What does this evidence tell us? Suggestions for the popularity of these Web sites are given below:

  • Twitter: I normally provide a link to tweets which I cite. This enables me to find the original source if I wish to make use of it in the future. In addition it helps people reading a post to see the source, see the context and find out more about the Twitter user. It would appear that my decision to do this has proved useful, as people do seem to be clicking on links to tweets.
  • Google Scholar: This initially appeared to be an anomaly. However I subsequently realised that a post giving Thoughts on Google Scholar Citations, published a few days after Google’s announcement that Google Scholar Citations Open To All, had proven very popular after I had left a comment linking to the post on Google’s blog post.
  • Opus: It is pleasing to see that links to Opus, the University of Bath’s institutional repository, feature so highly. These are primarily to copies of my peer-reviewed papers. Interestingly, a recent paper by myself and Jenny Delasalle asked Can LinkedIn and Academia.edu Enhance Access to Open Repositories? Although we feel the answer is “yes” it would appear that this blog also has a significant role to play in enhancing access to such papers.
  • IWMW: As might be expected there are significant numbers of visits to the Web site for UKOLN’s annual Institutional Web Management Workshop, IWMW, since the event is featured on this blog when we issue the call for submissions, when we open the event for bookings, and when we publish reflections on the event.
  • Computer Weekly: The reason for the significant number of visits to the Computer Weekly Web site is simple: readers will have seen the post in which I announced that this blog had been short-listed for the Computer Weekly IT Blogger of the Year award. Since I was the runner-up I know that large numbers must have followed the link and voted for this blog :-)
  • UKOLN: As might be expected there are significant numbers of visits to the UKOLN Web site, which hosts many of the resources for the work which I write about on this blog.

Redesign of the Blog’s Sidebar

It should be noted that visitors do not only follow links provided in blog posts; the blog’s sidebars and navigation bar also provide additional content and links to resources.

Some time ago I came across Markosweb, which provides information about Web sites, including the UK Web Focus blog. I was particularly interested in the heat map for the blog. As described on the Web site:

Heatmap – An F-shaped principle of how web-pages are read: two horizontal strips and one vertical. Using this principle we’ve suggested where your visitors’ eyes will first be directed to on the main page.

This data can help you in placing the most important site’s blocks in the hottest places. This will help you to increase the site’s traffic and raise profitability.

The left-hand sidebar provides information about the blog which I feel is important. However, as shown in the accompanying image of the heat map for a previous design of the blog, although the blog’s search box is likely to be used by people who wish to search for additional posts, the email subscription sign-up area was a waste of space, as this is something people will only do once, if at all.

In light of the suggestion that the heat map can help me to locate important content, I updated the design of the sidebar in March 2012. The blog now has a Featured Paper area beneath the search box (as illustrated) which summarises a paper and provides links to it. The featured paper is updated every couple of weeks.

It was not clear to me whether the redesign had any effect on users’ behaviour. Having for the first time analysed the statistics for users’ clicks, it would appear that this redesign has helped to raise the visibility of my papers (although it should be noted that the clicks may also have come from links provided in blog posts).

What Does the Evidence Tell Us?

Myself and Jenny are presenting a talk at the Internet Librarian International (ILI 2012) conference to be held in London on 30-31 October which will try to provide an answer to the question: What does the evidence tell us about institutional repositories? The evidence from analysis of the blog’s statistics tells us that the blog delivers significant traffic to the University of Bath’s repository. Given the significant relationship between this blog and the Opus repository it will be interesting to see if the links from this blog have any impact on the repository’s search engine rankings and the visibility of the repository itself, as well as my papers, for researchers who make use of Google to search for relevant information.

Perhaps my post which asked Can LinkedIn and Academia.edu enhance access to open repositories?, which was republished yesterday on the LSE Impact of Social Sciences blog, gave an incomplete view of the importance of social media for researchers seeking to maximise the impact of their work. Maybe it would be a mistake to ignore the importance of a researcher’s blog, not just as an open notebook for sharing ideas at an early stage and inviting feedback, but also as a means of supporting the dissemination of existing published work?

Twitter conversation via Topsy: [View]

Posted in Evidence, Repositories | 1 Comment »

Academia.edu Announces Analytics! But How Should Researchers Interpret the Findings?

Posted by Brian Kelly on 16 Aug 2012

Catching up with overnight tweets on a wet morning at the bus stop

At 7.30 am I was waiting in the rain for my bus to work. As normal I was catching up with the tweets I’d received overnight and had downloaded to my iPod Touch before leaving home. One of the tweets which was particularly interesting was from @KeitaBando. I met Keita, Digital Repository Librarian and Coordinator for Scholarly Communication for the My Open Archive service, at the recent Open Repositories (OR 2012) conference, following his poster presentation on Current and Future Effects of Social Media-Based Metrics on Open Access and IRs. Keita’s tweet announced news of relevance to many who attended the OR 2012 conference: Blog: Announcing @academia Analytics

Since one of the papers I had submitted to the OR 2012 conference asked “Can LinkedIn and Academia.edu Enhance Access to Open Repositories?” this announcement was of particular interest to me.

The blog post Announcing Academia.edu Analytics described how:

Today we are announcing the release of Academia.edu’s Analytics Dashboard [which] allows academics to view the real-time impact of their research.

The development is based on the changing environment provided by the Web:

Increasingly, the primary consumption experience for scientific content is the web, and yet scientists have not generally been aware of the metrics around this consumption. If you ask a Harvard biology professor with 200 publications how many downloads she experienced in the last 30 days, typically she will not know. Academia.edu’s Analytics Dashboard is changing this. It allows an academic to understand in sophisticated detail how their research is being used by the academic community. It shows them countries that are sending them the most traffic, search engines and other sites that are sending them the most traffic, and overall profile views and document views.

What does the new service tell me about my papers? It seems that on 11 August 2012 there were 5 views of my items available on Academia.edu, and that over the last 230 days there had been a total of 9 views of information about my papers and 11 views of my profile on Academia.edu.

Since the analytics service “allows academics to view the real-time impact of their research” we can explore the individual visits:

and then no other activities until 22.00 on 11 August when someone from Argentina read information about the paper on Open Metrics for Open Repositories.

Clearly such numbers are underwhelming! This would therefore seem to provide evidence suggesting that the answer to the question Jenny Delasalle and myself posed in our paper “Can LinkedIn and Academia.edu Enhance Access to Open Repositories?” would be “No” in the case of Academia.edu.

Since the metadata I have uploaded to Academia.edu provides links to papers hosted on Opus, the University of Bath repository, it will be interesting to make comparisons with the numbers of downloads of papers hosted on Opus over a similar period.

Since the Opus service provides statistics on a monthly basis it was not possible to make a direct comparison. However, looking at the download statistics for my papers during July 2012, I found that there had been a total of 679 downloads, with the top two items (which, as might be expected, were my two most recent papers) having been downloaded a total of 184 times.
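Note that the two sets of figures cover very different periods (roughly 230 days of Academia.edu document views against one month of Opus downloads), so converting to per-day rates gives a fairer comparison. A trivial Python sketch, using the figures quoted above:

```python
# Normalise the figures quoted in the post to per-day rates so that the
# differing measurement periods can be compared. The helper is illustrative.

def per_day(count, days):
    """Convert a total count over a period into a daily rate."""
    return count / days

academia_views_per_day = per_day(9, 230)   # 9 document views over ~230 days
opus_downloads_per_day = per_day(679, 31)  # 679 Opus downloads in July 2012

ratio = opus_downloads_per_day / academia_views_per_day
```

Even after normalisation, Opus delivers several hundred times more downloads per day than Academia.edu delivers views.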

From these personal experiences we might conclude that Academia.edu is not a significant driver of traffic to my papers and that it might therefore be questionable whether it is worth creating a profile on the service and adding links to one’s papers. I think it would be a mistake to draw such conclusions, for the following reasons:

  • These experiences may not be replicated by others.
  • I have chosen to replicate my research profile across a number of services, including Mendeley, LinkedIn and ResearchGate as well as Academia.edu. I would expect some of these services to be widely used, while others are less well used.
  • Using a variety of researcher profiling services with links to my papers will enhance the ‘Google juice’ for the papers (and the repository). Use of these services can therefore enhance the discoverability of the papers for people who use Google – and this is likely to be the majority of people!

I’d be interested to hear about other people’s experiences of Academia.edu. Is anybody finding that their pages on the service are being well-used?

Twitter conversation from Topsy: [View]

Posted in Evidence, Repositories | 1 Comment »

Searches for ‘Olympics’ are Popular! But What Other Trends are There?

Posted by Brian Kelly on 13 Aug 2012


A Four Year Cycle For Searches for ‘Olympics’ and ‘World Cup’

You will be unsurprised to hear that Google searches for ‘Olympics’ have peaked recently :-) Using the Google Insights tool to search for ‘Olympics’ we can spot a four-year cycle for such searches, together with a slightly smaller peak two years before each Olympics which probably corresponds to the Winter Olympics.
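For readers who wish to check such cycles for themselves, a periodic pattern can be confirmed by computing the autocorrelation of the monthly interest series which Google Insights allows you to export. The Python sketch below uses a synthetic series which simply spikes every 48 months, as a stand-in for real export data.

```python
# Sketch: find the dominant cycle length in a monthly search-interest series
# by picking the lag with the strongest autocorrelation. The series here is
# synthetic (a spike every 48 months), standing in for a real export.

def autocorrelation(series, lag):
    """Plain autocorrelation of a series at a given lag."""
    n = len(series) - lag
    mean = sum(series) / len(series)
    num = sum((series[i] - mean) * (series[i + lag] - mean) for i in range(n))
    den = sum((x - mean) ** 2 for x in series)
    return num / den

# 12 years of monthly values, spiking in each 'Olympic' month.
series = [100 if m % 48 == 0 else 10 for m in range(144)]
best_lag = max(range(12, 72), key=lambda lag: autocorrelation(series, lag))
```

On this synthetic series the strongest lag is 48 months, i.e. the four-year cycle; real Google Insights data would show the same peak, only noisier.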

The trends also help to identify a number of recent peaks which include:

A: Olympics: Rowers win Britain’s first gold at Olympics
B: Opening ceremony of the London
C: Olympics: London 2012 torch lit in Olympia
D: London Olympics to open with Duran Duran
E: 100 days to the London Olympics
F: Assad regime not welcome at Olympics
G: Queen to open 2012 Olympics

A similar search for “World Cup” again shows a clear 4-year cycle. But might the Google Insights tool help us to gain a better insight into trends for technological developments and help to provide indications of significant developments?

Helping to Spot Trends

The JISC Observatory provides a scanning function to detect early indications of technological developments which may have a significant impact on the higher education sector. How useful might Google Insights be for detecting or confirming trends? In order to seek an answer to this question, Google Insights was used to analyse trends for several of the developments listed in the post giving My Predictions for 2012, together with a number of other developments which have generated interest recently.

The Google Insights trend for searches for “tablet computers” shows a clear decline in interest until the beginning of 2010 – which coincided with speculation about the announcement of Apple’s first iPad tablet. However the sharp decline in searches since the start of 2012 might suggest that tablet computers have passed their peak, which would seem surprising. Looking more closely at the trends we see a similar decline in the early part of 2010 and 2011, which perhaps suggests that the peaks in December are due to Christmas shoppers. It will be interesting to observe how searches for the term develop over the rest of the year. Perhaps the lesson from this example is that trend analyses may well be significantly affected by consumer purchasing patterns.

Google Insights trends for searches for ‘tablet computers’

The second prediction I made for 2012 was that we would see a growth in a variety of “open practices” within the sector. However this term has not gained widespread acceptance, with Google Insights picking up on use of the term when the British Lions announced public access to their practice sessions. The lesson from this example is that it may not be appropriate to look for meaningful trends for a general expression which may have a particular meaning in a higher education context. This might also be the case for a search for ‘open access’, which shows no growth in recent years, even when the trend analysis is restricted to the UK.

Google Insights trends for searches for “learning analytics”

Although the term ‘open access‘ may be used in a number of contexts, “learning analytics” probably has a more specific meaning which is directly relevant to the higher education sector. A search for this term suggests that public interest began in September 2010, with significant growth taking place in January 2012, which coincided with the announcement that Blackboard Opens Field Trial for Learning Analytics Solution.

Google Insights search for “mobile web”

The trend for ‘mobile web’ is probably unsurprising, with the number of searches starting to grow in June 2010 and a sharp growth beginning in May 2012.

Google Insights trends for searches for “Big Data”

The trends for searches for “Big data” show that there has been a steady growth since 2010. It was interesting that these two common words do not appear to have been used outside of their technical usage, described in Wikipedia as “data sets so large and complex that they become awkward to work with using on-hand database management tools“.


These reflections on the use of Google Insights to detect trends have helped to identify things to consider when using the service to gain a better insight into technological developments:

  • Trend analyses for IT used by consumers may be significantly affected by consumer purchasing patterns.
  • It may not be appropriate to look for meaningful trends for use of an expression which may have a general meaning in addition to a specific meaning when used in a higher educational context.
  • It may be useful to look for trends in the UK if these may differ from global trends.
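The first of these caveats can be tested programmatically: given monthly search-interest figures, we can check whether December values are consistently inflated relative to the rest of each year, which would point to a Christmas-shopping effect rather than a genuine peak and decline. The Python sketch below is my own illustration and uses made-up figures, not real Google Insights data:

```python
from statistics import mean

def december_spike(monthly, threshold=1.5):
    """Return (is_seasonal, ratios): for each year in `monthly` (a dict
    mapping (year, month) -> search-interest index), compare the December
    value with the mean of that year's other months."""
    ratios = []
    for year in sorted({y for (y, _m) in monthly}):
        dec = monthly.get((year, 12))
        rest = [v for (y, m), v in monthly.items() if y == year and m != 12]
        if dec is None or not rest:
            continue
        ratios.append(dec / mean(rest))
    # A spike in every December suggests seasonality rather than decline
    return all(r >= threshold for r in ratios), ratios

# Illustrative figures only -- not real Google Insights data
data = {(2010, m): 40 for m in range(1, 12)}
data[(2010, 12)] = 80
data.update({(2011, m): 50 for m in range(1, 12)})
data[(2011, 12)] = 95
seasonal, ratios = december_spike(data)
```

If every December sits well above its year's baseline, a post-December drop should probably be read as the seasonal cycle ending rather than the technology passing its peak.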

Finally if we look at the trends for searches for “Semantic Web” and “Linked Data”, which are illustrated below, we might conclude that the Semantic Web has passed its prime but that Linked Data is growing in importance. Whilst some might argue that this is the case, another view is that the names given to IT developments, and how they are marketed, are important, in addition to the underlying value the developments may themselves have. Might Linked Data be perceived as important because, in comparison with the Semantic Web, it is being actively marketed and promoted?

Google Insights search for “Semantic Web”

Google Insights search for “Linked Data”

Twitter conversation from Topsy: [View]

Posted in Evidence, jiscobs | Tagged: | 1 Comment »

Social Analytics for Institutional Twitter Accounts Provided by the 24 Russell Group Universities

Posted by Brian Kelly on 3 Aug 2012


In June 2011 a survey was published on Social Analytics for Russell Group University Twitter Accounts. The survey built on a previous survey of Institutional Use of Twitter by Russell Group Universities published in January 2011. That survey provided a snapshot of institutional use of Twitter across the twenty Russell Group Universities based on the statistics provided on Twitter account profile pages (numbers of followers, numbers of tweets, etc.). The survey was warmly received by those involved in managing institutional Twitter accounts or with an interest in activities in this area, with Mario Creatura expressing the view that the survey provided an “excellent gathering of data in an area that quite honestly is chock full of confusing stats“.

In the week which sees the expansion of the Russell Group Universities from 20 to 24 institutions a series of surveys of use of a variety of social networking services by the Russell Group universities is being carried out in order to provide a benchmark of use of the services across the enlarged group, as well as providing an opportunity for reflection and discussion of the relevance of social media analytics to inform decisions on use of such services.

Use of Social Analytic Services

In May 2011, in a post entitled Analysing influence .. the personal reputational hamsterwheel, Lorcan Dempsey highlighted three social media analytic services. The post described how it had been suggested that the “Klout score will become a new way of measuring people and their influence online“. In addition to Klout (which according to Crunchbase “allows users to track the impact of their opinions, links and recommendations across your social graph“), Lorcan’s post also referenced PeerIndex (which according to Crunchbase “identifies and ranks experts in business and finance based on their digital footprints“) and Twitalyzer (described in a Mashable article as “provid[ing] detailed metrics on things like impact, engagement, clout and velocity for individual Twitter accounts“).

Lorcan’s blog post addressed the relevance of such services for helping to understand personal reputation on Twitter. However these services can also be used to analyse institutional Twitter accounts. I have therefore used the Klout, PeerIndex and Twitalyzer social media analytic tools to analyse the 24 Russell Group University Twitter accounts. The table below summarises the findings of the survey, which was carried out on Wednesday 1 August 2012. It should also be noted that the table contains live links to the services, which will enable the current findings to be displayed (and also for any errors to be easily detected and reported).

| Ref. | Institution / Twitter account | No. of Tweets | No. of Followers | Klout Score | Klout Network | Klout Amplification | Klout True Reach | Klout Description | PeerIndex Score | Twitalyzer Impact | Twitalyzer Percentile | Twitalyzer Type |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
|  1 | University of Birmingham | 3,814 | 17,373 | 57 | 39 | 11 | 6K | Specialist | 97 | 17.2% | 97 | Everyday |
|  2 | University of Bristol | 2,504 | 13,195 | 53 | 36 | 17 | 3K | Specialist | 90 | 5.0% | 90 | Everyday |
|  3 | University of Cambridge | 2,460 | 37,195 | 52 | 34 | 9 | 3K | Specialist | ? | 13.1% | 96 | Everyday |
|  4 | Cardiff University | 1,832 | 15,919 | 49 | 30 | 7 | 2K | Specialist | 58 | 9.4% | 94 | Everyday |
|  5 | University of Edinburgh | 2,135 | 15,077 | 51 | 32 | 10 | 3K | Specialist | 54 | 6.7% | 92 | Everyday |
|  6 | Durham University | 678 | 4,205 | 44 | 23 | 6 | 959 | Networker | 11 | 1.8% | 72 | Everyday |
|  7 | University of Exeter | 3,472 | 11,224 | 51 | 33 | 8 | 3K | Specialist | 43 | 6.6% | 92 | Everyday |
|  8 | University of Glasgow | 1,754 | 17,990 | 49 | 28 | 6 | 3K | Specialist | 43 | 6.1% | 92 | Everyday |
|  9 | Imperial College | 1,572 | 14,216 | 49 | 30 | 9 | 2K | Specialist | 47 | 6.1% | 92 | Everyday |
| 10 | King’s College London | 954 | 9,299 | 47 | 27 | 8 | 2K | Specialist | 34 | 6.9% | 93 | Everyday |
| 11 | University of Leeds | 2,151 | 14,284 | 50 | 31 | 7 | 2K | Specialist | 42 | 4.2% | 88 | Everyday |
| 12 | University of Liverpool | 4,105 | 10,593 | 48 | 28 | 8 | 2K | Specialist | 50 | 4.2% | 88 | Everyday |
| 13 | LSE | 389 | 6,177 | 41 | 19 | 6 | 622 | Networker | 27 | 1.5% | 69 | Everyday |
| 14 | University of Manchester | 33 | 537 | 27 | 10 | 5 | 101 | Conversationalist | 27 | 0.1% | 8 | Everyday |
| 15 | Newcastle University | 576 | 2,625 | 41 | 19 | 7 | 474 | Networker | 11 | 2.4% | 78 | Everyday |
| 16 | University of Nottingham | 5,214 | 12,269 | 51 | 57 | 30 | 2K | Specialist | 56 | 10.3% | 95 | Everyday |
| 17 | University of Oxford | 1,001 | 43,975 | 58 | 65 | 37 | 8K | Specialist | 49 | 12.0% | 96 | Everyday |
| 18 | Queen Mary | 1,668 | 8,113 | 49 | 31 | 11 | 2K | Thought Leader | 23 | 3.2% | 83 | Everyday |
| 19 | Queen’s University Belfast | 1,222 | 5,916 | 41 | 48 | 23 | 779 | Specialist | 15 | 2.4% | 78 | Everyday |
| 20 | University of Sheffield | 2,276 | 17,289 | 52 | 34 | 8 | 3K | Specialist | 50 | 12.9% | 96 | Everyday |
| 21 | University of Southampton | 1,898 | 8,746 | 50 | 32 | 9 | 2K | Specialist | 52 | 7.3% | 93 | Everyday |
| 22 | University College London | 3,384 | 10,113 | 59 | 30 | 10 | 2K | Specialist | 56 | 6.4% | 92 | Everyday |
| 23 | University of Warwick | 2,939 | 15,883 | 51 | 32 | 9 | 3K | Specialist | 57 | 7.1% | 93 | Everyday |
| 24 | University of York | 946 | 10,248 | 49 | 30 | 8 | 2K | Specialist | 61 | 4% | 87 | Everyday |
|    | TOTAL | 48,977 | 322,461 | | | | | | | | | |

It should be noted that the data provided by PeerIndex has changed since the analysis carried out last year. The values for Activity, Audience and Authority which had been provided previously no longer appear to be available. This information is therefore not available for this survey.

[NOTE: A summary of the meaning of the various rankings was given in the initial survey. Added 3 Aug 2012]

Figure 1: Klout scores for new Russell Group universities

Figure 2: PeerIndex scores for Russell Group universities

The two Klout groups set up last year (Russell Group Universities (1 of 3) and Russell Group Universities (2 of 3)) have been renamed and complemented by the Russell Group Universities (3 of 3) group. These groups should enable comparisons to be made across the institutions based on the particular social media analytic service selected. Figure 1 shows the Klout scores for the four new Russell Group universities. Also note that the Russell Group Universities PeerIndex group which was set up last year has been updated with details of the institutional Twitter accounts for the four new Russell Group universities. Figure 2 shows the PeerIndex scores for a selection of the Russell Group universities.


Despite the marketing rhetoric around Twitter analytic tools – with Klout, for example, stating that “Klout is the standard for influence” – as a means of measuring ‘value’ such automated analyses have well-known flaws. As an example, if you prune spam followers from your Twitter account, your apparent influence on Twitter will go down.

In the case of institutional Twitter accounts the number of followers, especially for Twitter accounts used to support internal communications, is likely to reflect the size of the institution rather than the influence of the Twitter account.
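As a rough illustration of this point, we can check how strongly Klout scores track raw follower counts. The Python sketch below is my own illustration (not part of any of the services' APIs): it computes a Pearson correlation over five accounts' figures from the table above, and the strong positive value suggests the scores largely echo audience size.

```python
from math import sqrt

def pearson(xs, ys):
    """Plain Pearson correlation coefficient, with no third-party libraries."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Follower counts and Klout scores for five accounts from the table above:
# Birmingham, Cambridge, Oxford, Manchester, LSE
followers = [17373, 37195, 43975, 537, 6177]
klout = [57, 52, 58, 27, 41]
r = pearson(followers, klout)  # a strong positive correlation
```

A correlation of around 0.8 on even this small sample is consistent with the caveat above: the scores may be telling us more about institutional reach than about how well an account is run.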

Despite such caveats, Twitter analytic tools can be useful if used in conjunction with local knowledge of the aims of the service and the particular approaches taken to using the tool. In addition, Twitter analytics may be useful for making comparisons with peer institutions.

It should also be added that the higher education sector is accustomed to university league tables: Wikipedia lists the Complete University Guide, the Guardian’s University Guide 2013 and the Sunday Times university league table (accessible behind a paywall), and the Times Higher Education also provides the World University Rankings. As suggested in a post on Bath is the University of the Year! But What if Online Metrics Were Included? we might expect such university ranking tables in future to include an element related to rankings of a university’s online presence.

The Sunday Times have documented the criteria for their university league table. Although the details are held behind the Sunday Times paywall, a summary was documented in last year’s blog post and the categories are given below:

Teaching excellence (250 points); Student satisfaction (+50 to -55 points); Peer assessment (100 points); Research quality (200 points); A-level/Higher points (250 points); Unemployment (200 points); Firsts/2:1s awarded (100 points) and Dropout rate (+57 to -74 points).

The Klout, PeerIndex and Twitalyzer services have been developed for analysing personal influence, and the approaches they use may be of interest to those involved in altmetrics work. As described in a paper on Altmetrics in the Wild: Using Social Media to Explore Scholarly Impact:

The online, public nature of [social media tools like blogs, Twitter, and Mendeley] exposes and reifies scholarly processes once hidden and ephemeral. Metrics based on this activity could inform broader, faster measures of impact, complementing traditional citation metrics.

However if the current set of popular Twitter analytics tools is not appropriate for developing a better understanding of use of Twitter for research purposes or in an institutional context, might there be a role for in-house development work? It was therefore very interesting to read Craig Russell’s post on UK Uni Twitter Data API, in which he described how “At the start of the month I began collecting data about UK university twitter accounts” and went on to add that “I’ve made this data available through a simple API“.

Rather than pointing out the limitations of social analytics tools such as Klout, might not the sector benefit from developing its own set of tools to help gain a better understanding of how Twitter is being used? And should we not encourage such work to take place in the open, with the data being made available under an open licence and, as Craig has done, open APIs being provided to encourage reuse by others?
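In the spirit of Craig's approach, collected account statistics could be published as reusable open data with very little code. The sketch below is my own illustration using only the Python standard library; the field names and figures are invented for the example and are not the fields of Craig's actual API.

```python
import csv
import io

# Illustrative rows only; a real collector would populate these from Twitter
rows = [
    {"account": "unibirmingham", "followers": 17373, "tweets": 3814},
    {"account": "uniofexeter", "followers": 11224, "tweets": 3472},
]

# Serialise the collected statistics as CSV, ready to publish under an
# open licence alongside a simple API
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["account", "followers", "tweets"])
writer.writeheader()
writer.writerows(rows)
open_data_csv = buf.getvalue()
```

Publishing the raw numbers in a plain format like this, rather than only a ranked score, would let others in the sector reproduce and question any analysis built on top of the data.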

Twitter conversation from Topsy: [View]

Posted in Evidence, Twitter | 5 Comments »

Over One Million ‘Likes’ of Facebook Pages for the 24 Russell Group Universities

Posted by Brian Kelly on 2 Aug 2012


On 1 August the Russell Group was enlarged from 20 to 24 universities, following the incorporation of Durham University, the University of Exeter, Queen Mary (University of London) and the University of York. As described on the Russell Group University Web site “[the] universities are to be found in all four nations and in every major city of the UK. They operate globally, attracting international students and academic staff from many different countries, but also have a strong role and influence within their regional and local community.” But how effective are they in using popular social media services to attract potential students, engage with existing students and staff and with the wider community? In order to provide a benchmark of use of the most popular social networking service a survey of the number of likes for the official institutional Facebook presence has been carried out.

Facebook Usage for Russell Group Universities

In order to gather evidence of use of Facebook in the higher education sector a survey of Facebook usage, determined by ‘likes’ for institutional pages, has been carried out for the Russell Group universities. This survey follows on from previous surveys carried out in January and September 2011 and May 2012 for the 20 Russell Group universities, which enabled trends to be detected which can inform discussions and policy decisions on institutional use of Facebook. Note that the data provided in the following table is also available as a Google Spreadsheet.

| Ref. No. | Institution | Facebook name | Likes (Jan 2011) | Likes (Sep 2011) | Likes (May 2012) | Likes (Aug 2012) | % increase since Sep 2011 |
|---|---|---|---|---|---|---|---|
|  1 | University of Birmingham | unibirmingham | 8,558 | 14,182 | 18,611 | 20,756 | 46% |
|  2 | University of Bristol | University-of-Bristol/108242009204639 | 2,186 | 7,913 | 11,480 | 12,357 | 56% |
|  3 | University of Cambridge | – | 58,392 | 105,645 | 153,000 | 168,000 | 59% |
|  4 | Cardiff University | cardiffuni | 20,035 | 25,945 | 30,648 | 31,989 | 23% |
|  5 | Durham University | Durham-University/109600695725424 | – | – | – | 10,843 | – |
|  6 | University of Exeter | intouniversityofexeter / exeteruni | – | – | – | 1,765 | – |
|  7 | University of Edinburgh | UniversityOfEdinburgh (page URL changed since first survey) | – | 12,053 | 24,507 | 27,574 | 112% |
|  8 | University of Glasgow | glasgowuniversity | – | 1,860 | 27,149 | 29,840 | 1,504% |
|  9 | Imperial College | imperialcollegelondon | 5,490 | 10,257 | 16,444 | 19,020 | 85% |
| 10 | King’s College London | Kings-College-London/54237866946 | 2,047 | 3,587 | 5,384 | 7,534 | 110% |
| 11 | University of Leeds | universityofleeds | – | 899 | 2,143 | 3,091 | 243% |
| 12 | University of Liverpool | livuni (page URL changed since last survey) | 2,811 | 3,742 | 4,410 | 4,655 / 5,239 | 40% |
| 13 | LSE | lseps (page URL changed for this survey) | 22,798 | 32,290 | 43,716 | 50,287 | 56% |
| 14 | University of Manchester | University-Of-Manchester/365078871967 | 1,978 | 4,734 | 9,356 | 13,751 | 190% |
| 15 | Newcastle University | newcastleuniversity | – | 115 | 693 | 1,084 | 840% |
| 16 | University of Nottingham | TheUniofNottingham | 3,588 | 9,991 | 14,692 | 17,133 | 71% |
| 17 | University of Oxford | – | 137,395 | 293,010 | 541,000 | 628,000 | 114% |
| 18 | Queen Mary, University of London | Queen-Mary-University-of-London/107998909223423 | – | – | – | 13,362 | – |
| 19 | Queen’s University Belfast | QueensUniversityBelfast (page URL changed for this survey) | – | 5,211 | 10,063 | 16,989 | 226% |
| 20 | University of Sheffield | theuniversityofsheffield | 6,646 | 12,412 | 19,308 | 22,746 | 83% |
| 21 | University of Southampton | unisouthampton | 3,328 | 6,387 | 18,062 | 19,790 | 209% |
| 22 | University College London | UCLOfficial | 977 | 4,346 | 33,853 | 37,493 | 760% |
| 23 | University of Warwick | warwickuniversity | 8,535 | 12,112 | 14,472 | 15,103 | 25% |
| 24 | University of York | universityofyork | – | – | – | 11,212 | – |
|    | TOTAL | | 287,767 | 566,691 | 998,991 | 1,184,958 | |



Facebook ‘Likes’ for Russell Group Universities in August 2012

There are now over a million ‘likes’ for the institutional presence on Facebook of the 24 Russell Group universities.

A post on this blog previously described a significant increase over a period of eight months in the number of ‘likes’ for the twenty UK Russell Group universities, which totalled about 999K in May. The current increase, over a period of about ten weeks, is primarily due to the additional numbers provided by the four new Russell Group universities, which contribute a total of over 37K likes.

It should be noted that, as illustrated in the accompanying chart, 67% of the likes are provided by just two institutions: the Facebook pages for the University of Oxford (with 628K likes) and the University of Cambridge (168K likes).
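For transparency, the 67% figure can be reproduced from the survey figures above with a couple of lines of Python:

```python
# Figures from the August 2012 survey above
likes = {
    "University of Oxford": 628_000,
    "University of Cambridge": 168_000,
}
total_likes = 1_184_958  # total across all 24 Russell Group accounts

# Share of all likes accounted for by the two largest pages
share = sum(likes.values()) / total_likes
print(f"Oxford + Cambridge account for {share:.0%} of all likes")
```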

Note that a Google Spreadsheet of these figures, together with the accompanying charts, is available.


In some circles providing evidence of Facebook usage is felt to be an activity which should be avoided, since Facebook is a ‘walled garden’ with a blatant disregard for individuals’ privacy.

In the higher education sector I would argue that policy decisions need to be informed by evidence. There is therefore a need to gather evidence of use of such services in order to inform decisions on their use, and also to learn from their strengths, weaknesses and popularity, so that such lessons can be applied to make more effective use of existing services and to be prepared for new social media services which could replace or complement today’s popular ones. Anyone who would like to see Facebook replaced by Diaspora, say (described in Wikipedia as “a nonprofit, user-owned, distributed social network that is based upon the free Diaspora software … is not owned by any one person or entity, keeping it safe from corporate take-overs, advertising, and other threats“), would surely benefit from gaining an understanding of Facebook’s popularity.

From looking at the names of institutional Facebook accounts, the corresponding URLs and the popularity of the accounts, it would appear beneficial to have an easily remembered name, to avoid fragmentation of official accounts and to avoid the need to rename an account’s address.

This might suggest that it would be useful for institutions to claim a meaningful name on social networks which may gain in popularity in the future. As suggested in a post on Institutional Use of Social Media in China this has been an approach which has been adopted by 19 of the first 20 institutions with an official presence on China’s Sina Wēibó social media service.

But at a time in which it is increasingly important to be able to justify the return on investment in new services, it will be important to document the intended purposes of such services and the benefits which may be gained. Back in May 2007 in a post entitled Something IS Going On With Facebook! I commented on early signals of growth in interest in Facebook following the launch of the Facebook Platform. A few months later, in November 2007, a post entitled UK Universities On Facebook reported that “a Facebook search for organisations containing the word ‘university’ revealed a total of 76 hits which included, in alphabetical order, the following UK Universities: Aston, Cardiff, Kent and the University of Central Lancashire (UCLan)” – and it is interesting to note that the links to the Facebook pages for these early adopters still work even though the URLs have changed.

The post generated a large number of comments with Patrick Lauke asking:

so, for those unis who have a “page” (with new revised Ts&Cs) on facebook…what are your strategic objectives? key performance indicators? external target audience, or a mix of internal and external?

Looking back it would be interesting to see whether an institutional Facebook presence has supported strategic objectives. Would the 24 Russell Group universities have regarded a total of over a million likes as providing a proxy measure of some objective? Or, on the other hand, might this be regarded as a failure? We have five years of experience of institutional use of Facebook, which includes a number of snapshots of quantitative evidence. It will be interesting to see how this evidence of the recent past can shape and inform discussions and decisions on use of social media over the next five years.

I should add that following the survey in May  2012 Tom Wright, Digital Engagement Manager at the University of Nottingham, commented:

Interesting to see these stats, but to gauge how successful universities are with Facebook you really need to look at other metrics around engagement, reach, influence, etc. You can have plenty of likes but very little engagement and measuring likes is very much like judging a web page’s success based on simple page view numbers – a very raw measure that doesn’t tell you an awful lot. 

I would agree with these comments, although I should add that since such information is restricted to Facebook page administrators it is not possible to get a picture across a community. However a follow-up post which provided a Survey of Institutional Use of Facebook was also published in May; it described a survey in which Tom and I invited those involved in using Facebook to support institutional activities to provide details of their work. In order to gain a broad picture of Facebook use across the sector this survey is still open.

Posted in Evidence, Facebook | 4 Comments »

A Survey of Use of Researcher Profiling Services Across the 24 Russell Group Universities

Posted by Brian Kelly on 1 Aug 2012

Looking Back

Back in March 2012 in a post on Profiling Staff and Researcher Use of Cloud Services Across Russell Group Universities I summarised usage of, LinkedIn, ResearcherID and Google Scholar Citations across the 20 Russell Group universities. The post highlighted complementary surveys which had been carried out by Jenny Delasalle, who in her Twitter profile describes herself as a “Research support Librarian: interested in bibliometrics, copyright, scholarly communications, and all sorts!” based at the University of Warwick. That connection subsequently led to Jenny and I writing a paper which asked “Can LinkedIn and Enhance Access to Open Repositories?”, which was presented at the Open Repositories 2012 conference, OR 2012.

As described in a one-minute video summary and a four-minute slidecast, in our paper Jenny and I described personal evidence which suggested that use of LinkedIn and can help to raise the profile of peer-reviewed papers hosted in institutional repositories: if links to the papers are provided in these popular services, this may enhance the Google ranking of the institutional repository.

As described on the Russell Group University Web site: “Through their outstanding research and teaching, unrivalled links with businesses and a commitment to civic responsibility, Russell Group universities make an enormous impact on the economic, social and cultural wellbeing of the UK“. But to what extent are the Russell Group universities making use of researcher profiling services to enhance access to their research outputs, especially, those hosted in institutional open access repositories?

Updated Survey of Russell Group University Use of Researcher Profiling Services

The methodologies which were used in the previous blog posts, and repeated for the findings published in our paper, have been used again, this time to provide a benchmark of use of these services across the enlarged collection of Russell Group universities, which grew to 24 institutions on 1 August 2012 following the incorporation of Durham University, the University of Exeter, Queen Mary (University of London) and the University of York.

In addition to benchmarking four additional institutions, following Jenny Delasalle’s blog post about ResearchGate the ResearchGate service was also included in the survey.

The findings are given in the following table. Note that the data for the, Google Scholar Citations, ResearcherID and ResearchGate services was collected on 25 July 2012.

| Ref. No. | Institution | (Followers) | LinkedIn (Followers) | LinkedIn (Current) | ResearcherID | Google Scholar Citations | ResearchGate Members | ResearchGate Impact Points | ResearchGate Publications |
|---|---|---|---|---|---|---|---|---|---|
|  1 | University of Birmingham | 1,210 | 5,000 | 5,667 | 89 | 131 | 782 | 54,959.25 | 19,515 |
|  2 | University of Bristol | 1,018 | 4,320 | 3,477 | 254 | 170 | 641 | 64,661.22 | 21,249 |
|  3 | University of Cambridge | 3,020 | 8,741 | 7,220 | 460 | 330 | 972 | 157,728.66 | 39,713 |
|  4 | Cardiff University | 906 | 4,287 | 3,609 | 468 | 140 | 646 | 26,620.70 | 9,596 |
|  5 | Durham University | 1,001 | 2,620 | 1,904 | 148 | 131 | 273 | 13,151.25 | 1,151 |
|  6 | University of Exeter | 919 | 3,742 | 2,735 | 113 | 77 | 269 | 13,099.47 | 5,150 |
|  7 | University of Edinburgh | 2,079 | 7,090 | 6,123 | 263 | 236 | 1,181 | 87,934.30 | 25,918 |
|  8 | University of Glasgow | 1,004 | 3,802 | 4,099 | 293 | 219 | 613 | 59,662.76 | 20,041 |
|  9 | Imperial College | 798 | 8,981 | 6,914 | 465 | 362 | 1,096 | 105,989.84 | 30,404 |
| 10 | King’s College London | 1,420 | 5,994 | 27 | 380 | 174 | 1,406 | 60,114.47 | 18,264 |
| 11 | University of Leeds | 1,657 | 6,273 | 6,599 | 225 | 164 | 848 | 45,132.67 | 16,944 |
| 12 | University of Liverpool | 866 | 3,926 | 4,814 | 166 | 91 | 582 | 44,800.42 | 16,475 |
| 13 | London School of Economics | 1,131 | 8,464 | 2,075 | 20 | 95 | 191 | 2,825.73 | 1,838 |
| 14 | University of Manchester | 2,279 | 7,601 | 8,244 | 305 | 357 | 1,113 | 71,887.98 | 25,139 |
| 15 | Newcastle University | 906 | 4,275 | 3,347 | 173 | 143 | 704 | 51,783.84 | 17,307 |
| 16 | University of Nottingham | 1,299 | 6,269 | 6,703 | 355 | 160 | 970 | 56,478.57 | 20,513 |
| 17 | University of Oxford | 3,842 | 9,447 | 9,823 | 402 | 405 | 1,221 | 159,620.47 | 38,224 |
| 18 | Queen Mary | 715 | 3,519 | 2,267 | 20 | 139 | 228 | 15,556.27 | 5,232 |
| 19 | Queen’s University Belfast | 689 | 2,317 | 185 | 83 | 62 | 479 | 23,917.28 | 10,750 |
| 20 | University of Sheffield | 1,082 | 5,008 | 5,941 | 276 | 174 | 823 | 47,573.65 | 18,127 |
| 21 | University of Southampton | 1,083 | 4,935 | 5,162 | 287 | 182 | 670 | 37,618.63 | 16,887 |
| 22 | University College London | 2,776 | 10,866 | 7,164 | 709 | 580 | 1,624 | 138,134.10 | 35,035 |
| 23 | University of Warwick | 1,143 | 4,350 | 3,142 | 216 | 119 | 448 | 18,142.13 | 8,098 |
| 24 | University of York | 986 | 2,824 | 2,394 | 125 | 474 | 386 | 15,808.07 | 4,841 |
|    | TOTAL | 33,829 | 134,669 | 109,634 | 6,147 | 5,115 | 18,166 | | 426,414 |


It was noted that the figures initially given in this table for Google Scholar Citations were an underestimate. This appears to be due to the design of the REST interface to the entries. The table has been updated with the correct figures.
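Although the design of the Google Scholar Citations interface is not documented here, an underestimate of this kind is characteristic of counting only the first page of a paged listing. The Python sketch below illustrates the general pattern, using a stand-in fetch function rather than the real service:

```python
def count_entries(fetch_page, page_size=20):
    """Aggregate a paged listing by requesting successive pages until one
    comes back short, rather than trusting the first page alone."""
    total, page = 0, 0
    while True:
        entries = fetch_page(page, page_size)
        total += len(entries)
        if len(entries) < page_size:
            return total
        page += 1

# Stand-in for a paged web interface: 53 entries served 20 at a time
DATA = list(range(53))
def fake_fetch(page, size):
    return DATA[page * size:(page + 1) * size]

full = count_entries(fake_fetch)
first_page_only = len(fake_fetch(0, 20))  # the kind of undercount noted above
```

Any survey methodology built on such interfaces needs to walk every page (or use a reported total) before publishing a figure.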


  • The numbers may be skewed by errors or variants in the names of institutions. For example there are 140 people in who are associated with a variant of the institution’s domain rather than the official institutional domain.
  • The numbers for and ResearcherID were obtained by a search for the institution’s name. However a link to the findings is not available.
  • Searches on ResearcherID were for the institution’s name, except for the University of Birmingham, which included ‘UK’ to avoid name clashes.
  • The findings for institutions such as Queen’s University Belfast and King’s College London, which have apostrophes in their names, may be skewed by the services’ differing policies on resolving such names.


It should be noted that the five services covered in this survey are different and it would be inappropriate to make comparisons across them – in particular, although, ResearcherID, Google Scholar Citations and ResearchGate are intended for the research community, LinkedIn has a wider remit and, understandably, a larger audience.

In addition, as described in the Notes, there may be flaws or inconsistencies in the way in which the data was gathered and displayed. In particular it seems that the lack of an agreed institutional ID means that users may associate themselves with different variants of their institution, with this seemingly being the case for institutions whose names contain apostrophes, in particular.

The previous survey and subsequent paper suggested that use of popular social media services by researchers could enhance access to the researchers’ research outputs if links to their outputs were provided from the services.  I am still convinced that this is the case but appreciate that further evidence may be needed in order to convince decision-makers that a coordinated approach to providing links to the content of open access repositories would help to maximise access to the resources.  For now, however, this post is intended to provide a benchmark of use of the services on the launch day for the enlarged group of Russell Group Universities.  In addition I would welcome feedback on the survey methodology, especially from the Russell Group Universities who may find that their information is fragmented across several variants of the institution’s name.

I would also, of course, welcome comments on the implications of the findings and their relevance in the context of the 24 institutions referenced in the survey. ResearchGate, for example, appears to have information on over 426K papers, ranging from 1.8K at LSE to 39K at the University of Cambridge. What proportion of the research papers hosted in institutional repositories does this cover? And if the numbers appear low for some institutions, does this mean that those institutions should seek to take appropriate actions to increase the numbers, or ignore such findings as they may simply demonstrate the lack of relevance of the services?

Paradata: As described in a post on Paradata for Online Surveys, blog posts which contain live links to data will include a summary of the survey environment in order to help ensure that survey findings are reproducible, with potentially misleading information being highlighted.

The data for the, LinkedIn, Google Scholar Citations, ResearchGate and ResearcherID services was collected on 25 July 2012.

The values for Google Scholar Citations for the universities of Birmingham and Newcastle included ‘UK’ in the search field in order to avoid including information from US and Australian universities with the same names.

It should also be noted that I was logged into the services when I gathered the information.

It should also be noted that the low LinkedIn values for King’s College London and Queen’s University Belfast are felt to be due to the apostrophe in the institutions’ names. For example a search (carried out on 31 July 2012) on LinkedIn for King’s College London gives 3,758 hits but a search for Kings College London gives 328 hits.
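A pragmatic workaround would be to search for each institution under all the apostrophe variants of its name and aggregate the results. The Python sketch below is my own illustration of generating such variants, not anything LinkedIn itself provides:

```python
def name_variants(name):
    """Generate the search strings an institution's name may appear under
    when services normalise apostrophes differently."""
    ascii_name = name.replace("\u2019", "'")  # typographic -> ASCII apostrophe
    variants = {
        ascii_name,
        ascii_name.replace("'", ""),        # apostrophe dropped: "Kings ..."
        ascii_name.replace("'", "\u2019"),  # typographic-apostrophe form
    }
    return sorted(variants)

kings = name_variants("King's College London")
```

Searching under each variant and summing the (deduplicated) hits would give a fairer count for institutions whose names contain apostrophes.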

Posted in Evidence, Web2.0 | 4 Comments »