UK Web Focus

Innovation and best practices for the Web

Social Analytics for Russell Group University Twitter Accounts

Posted by Brian Kelly (UK Web Focus) on 28 June 2011

“Students to get best-buy facts”

On a day on which the main headline on the BBC News Web site announces the Government’s Competition Plan For Universities, which “could bring more competition between universities and greater powers for students”, it would seem timely to publish a survey which makes use of a number of social media analytic tools to explore how Russell Group Universities are making use of their institutional Twitter accounts, and to invite discussion on the strengths and weaknesses of such approaches. After all, if, as described in an accompanying article, “Students [are] to get best-buy facts”, shouldn’t the facts about Universities’ online presence also be provided – especially if you believe in openness and transparency?


A survey of Institutional Use of Twitter by Russell Group Universities was published back in January 2011. This survey provided a snapshot of institutional use of Twitter across the twenty Russell Group Universities based on the statistics provided on Twitter account profile pages (numbers of followers, numbers of tweets, etc.). The survey was warmly received by those involved in managing institutional Twitter accounts or with an interest in activities in this area, with Mario Creatura expressing the view that the survey provided an “excellent gathering of data in an area that quite honestly is chock full of confusing stats“.

The interest in gathering further evidence of the value of Social Web services continues to grow. A recent study, for example, sought to answer the question “What’s the ROI with advertising on Facebook?” and concluded that “1 Facebook fan = 20 additional visits to your website“. But what approaches can institutions take to gain a better understanding of institutional use of Twitter?

Use of Social Analytic Services

In a recent post entitled Analysing influence .. the personal reputational hamsterwheel Lorcan Dempsey highlighted three social media analytic services. The post described how it has been suggested that the “Klout score will become a new way of measuring people and their influence online“. In addition to Klout (which according to Crunchbase “allows users to track the impact of their opinions, links and recommendations across your social graph“), Lorcan’s post also referenced PeerIndex (which according to Crunchbase “identifies and ranks experts in business and finance based on their digital footprints“) and Twitalyzer (described in a Mashable article as “provid[ing] detailed metrics on things like impact, engagement, clout and velocity for individual Twitter accounts“).

Although Lorcan’s blog post addressed the relevance of such services for helping to understand personal reputation, I felt it would be useful to gain a better understanding of how these services work by using them to analyse institutional Twitter accounts. I have therefore used the Klout, PeerIndex and Twitalyzer social media analytic tools to analyse the twenty Russell Group University Twitter accounts. The table below summarises the findings of the survey, which was carried out on Thursday 23 June 2011. It should also be noted that the table contains live links to the services which will enable the current findings to be displayed (and also for any errors to be easily detected and reported).

The columns are grouped by service: Klout (Score, Network, Amplification, True Reach, Description), PeerIndex (Score, Activity, Audience, Authority) and Twitalyzer (Impact, Percentile, Type).

| # | Institution / Twitter account | Klout Score | Network | Amplification | True Reach | Description | PI Score | Activity | Audience | Authority | Impact | Percentile | Type |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | University of Birmingham | 55 | 61 | 34 | 3K | Thought Leader | 19 | 31 | 70 | 4 | 3.3% | 88.6 | Everyday |
| 2 | University of Bristol | 49 | 54 | 28 | 2K | Specialist | 16 | 16 | 68 | 0 | 1.7% | 75.2 | Everyday |
| 3 | University of Cambridge | 56 | 63 | 39 | 7K | Thought Leader | 29 | 38 | 0 | 37 | 5.4% | 94.6 | Everyday |
| 4 | Cardiff University | 48 | 52 | 26 | 3K | Specialist | 43 | 47 | 76 | 33 | 0.8% | 57.1 | Everyday |
| 5 | University of Edinburgh | 52 | 60 | 35 | 2K | Thought Leader | 14 | 6 | 69 | 0 | 1.7% | 75.2 | Everyday |
| 6 | University of Glasgow | 51 | 58 | 29 | 3K | Specialist | 40 | 47 | 78 | 28 | 1.1% | 65.1 | Everyday |
| 7 | Imperial College | 51 | 57 | 30 | 3K | Specialist | 39 | 24 | 74 | 24 | 2.8% | 85.7 | Everyday |
| 8 | King’s College London | 46 | 53 | 26 | 1K | Networker | 16 | 19 | 53 | 4 | 1.3% | 69.1 | Everyday |
| 9 | University of Leeds | 51 | 59 | 32 | 2K | Specialist | 23 | 37 | 62 | 12 | 1.8% | 76.4 | Everyday |
| 10 | University of Liverpool | 43 | 48 | 21 | 2K | Networker | 2 | 40 | 0 | 0 | 1.4% | 70.9 | Everyday |
| 11 | LSE | 39 | 48 | 18 | 797 | Networker | 33 | 43 | 0 | 43 | 0.4% | 38.8 | Everyday |
| 12 | University of Manchester | 14 | 10 | 10 | 46 | Feeder | 27 | ? | ? | ? | ?% | ? | - |
| 13 | Newcastle University | No official account found | | | | | | | | | | | |
| 14 | University of Nottingham | 51 | 57 | 30 | 2K | Specialist | 41 | 41 | 65 | 33 | 1.9% | 77.6 | Everyday |
| 15 | University of Oxford | 58 | 65 | 37 | 8K | Specialist | 58 | 44 | 83 | 52 | 2.7% | 85.1 | Everyday |
| 16 | Queen’s University Belfast | 41 | 48 | 23 | 779 | Specialist | 11 | 0 | 53 | 0 | 0.7% | 53.6 | Everyday |
| 17 | University of Sheffield | 54 | 59 | 36 | 3K | Networker | 41 | 44 | 73 | 37 | 2.9% | 86.4 | Everyday |
| 18 | University of Southampton | 46 | 55 | 27 | 1K | Networker | 46 | 46 | 57 | 44 | 0.9% | 60.1 | Everyday |
| 19 | University College London | 54 | 63 | 39 | 2K | Specialist | 62 | 68 | 71 | 59 | 2% | 78.7 | Everyday |
| 20 | University of Warwick | 53 | 58 | 31 | 3K | Thought Leader | 52 | 42 | 77 | 45 | 1.2% | 67.3 | Everyday |

Please note that you will need to sign in to Klout in order to view the findings.

A Russell Group Universities PeerIndex group and two Klout groups (since there is a limit of ten entries per group, these are split into Russell Group Universities (1 of 2) and Russell Group Universities (2 of 2)) have been set up, which should enable comparisons to be made across the institutions based on the particular social media analytic service selected.

It should be noted that since the original survey of institutional use of Twitter by Russell Group Universities, accounts for the Universities of Liverpool and Manchester have been identified. The University of Liverpool account (@livuni) seems to have replaced an older @liverpooluni account which was never used (although it did have over 2,000 followers). The University of Manchester account (@UniofManc) was set up on 14 March 2011 and there have been insufficient numbers of tweets for the PeerIndex and Twitalyzer services to provide meaningful reports.

About the Social Media Analytic Metrics

In Klout:

The Klout Score is the measurement of your overall online influence. The scores range from 1-100 with higher scores representing a wider and stronger sphere of influence.

Network Influence is the influence level of your engaged audience. Capturing the attention of influencers is no easy task, and those who are able to do so are typically creating spectacular content.

Amplification Probability is the likelihood that your content will be acted upon. The ability to create content that compels others to respond and high-velocity content that spreads into networks beyond your own is a key component of influence.

The True Reach does not appear to be defined.

PeerIndex is built up of three components: authority, activity and audience score (all three are normalised ranks out of 100):

Authority is the measure of trust: how much can you rely on that person’s recommendations and opinions on a given topic. The authority is calculated from eight benchmark topics for every profile: AME (arts, media and entertainment); TEC (technology and internet); SCI (science and environment); MED (health and medical); LIF (leisure and lifestyle); SPO (sports); POL (news, politics and society) and BIZ (finance, business and economics). These are used to generate the overall authority score as well as produce the PeerIndex Footprint diagram.

The authority is a relative positioning against everyone else in each benchmark topic. The rank is a normalised measure against all the other authorities in the topic area.
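The idea of a normalised rank against a community is easy to illustrate in code. The sketch below shows only the general principle, not PeerIndex’s actual algorithm, and the sample authority figures are invented:

```python
def percentile_rank(value, community):
    """Percentage of the community whose raw authority is strictly lower."""
    below = sum(1 for v in community if v < value)
    return 100.0 * below / len(community)

# Invented raw authority figures for a small five-account community:
community = [4, 0, 37, 33, 52]
rank = percentile_rank(37, community)  # 60.0: higher than 3 of the 5 accounts
```

One consequence of this relative approach is that an account’s rank can change when the rest of the community changes, even if its own raw figures do not.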

Note that the PeerIndex findings for the University of Oxford are illustrated with a comparison with the PeerIndex findings for the University of Cambridge. The analysis suggests that both institutions have a broadly similar ‘fingerprint’, but Oxford tends to focus on news, politics and society whilst Cambridge focuses on technology and the Internet.

Audience is an indication of an individual’s reach. It is not simply determined by the number of people who follow you, but is instead generated from the number of people who listen and are receptive to what you are saying.
Being followed by a large number of spam accounts, bots or inactive accounts will reduce an audience score. The audience score also takes into account the relative size of the audience compared to the audiences of the rest of the community.

Activity is the measure of how much you do that is related to the topic area. Be too active and people will stop listening to you; be too inactive and people will never know to listen to you. The Activity Score takes this behaviour into account. Like the other scores, the Activity Score is calculated relative to the community: if you are part of a community that has lots of activity, your level of activity will need to be higher to achieve the same relative score as in a topic that has a lot less activity.

Realness is a metric that indicates the likelihood that the profile is of a real person, rather than a spambot or Twitter feed. A score above 50 means PeerIndex thinks this account is of a real person; a score below 50 means it is less likely to be a real person. When PeerIndex comes across a new profile, it gives it a score of 50: initially, PeerIndex doesn’t have the information to make any determination. As more information is gathered, PeerIndex modifies the number accordingly. PeerIndex looks at a range of information to generate realness, such as whether the profile has been claimed and linked to Facebook or LinkedIn, and is continually adding new signals to the realness calculations to improve them. The other calculations are modified by the realness metric in order to penalise non-real people: claiming a profile will boost the authority, audience and activity scores, and consequently the PeerIndex score as well.

Note that before the PeerIndex scores are displayed they are normalized. This means every number in PeerIndex is based on a scale of 1 to 100, showing relative positions. An aggressive normalization calculation is used which helps to discriminate between top authorities. The benefit is that you can more easily understand who the top authorities are; the trade-off is that many users end up with seemingly lower scores. Here’s an example: if you are in the top 20% by authority in a topic like climate change, it means you have higher authority than 80% of the other people measured within this topic. Your normalized authority score for this topic will be in the range of 55 to 65 (that is, significantly lower than 80). Remember, however, that a score of 60 puts you higher than 80% of the people tracked in that topic, and a score of 65 means you rank higher than 95% of them. PeerIndex focuses on tracking the top people on a specific topic, not just anyone.
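As a rough sketch of what such an ‘aggressive’ curve might look like in Python (PeerIndex’s actual normalisation function is not published; the two middle anchor points are read off the example above, and the end points are pure assumptions):

```python
# Hypothetical anchor points: (percentile rank, displayed score).
# The 0.80 -> 60 and 0.95 -> 65 pairs come from the worked example in the
# text; the end points are assumptions made only for this sketch.
ANCHORS = [(0.0, 0.0), (0.80, 60.0), (0.95, 65.0), (1.0, 100.0)]

def normalised_score(percentile):
    """Map a 0-1 percentile rank to a 0-100 score by linear
    interpolation between the anchor points above."""
    for (p0, s0), (p1, s1) in zip(ANCHORS, ANCHORS[1:]):
        if p0 <= percentile <= p1:
            return s0 + (s1 - s0) * (percentile - p0) / (p1 - p0)
    raise ValueError("percentile must be in the range 0-1")
```

Note how compressive the curve is in the middle: the median account (percentile 0.5) maps to 37.5, while moving all the way from the 80th to the 95th percentile adds only five points.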

In Twitalyzer the Impact measure is a combination of the following factors:

  • The number of followers a user has.
  • The number of unique references and citations of the user in Twitter.
  • The frequency at which the user is uniquely retweeted.
  • The frequency at which the user is uniquely retweeting other people.
  • The relative frequency at which the user posts updates.
  • Twitalyzer’s “Impact Percentile” score provides insight into the relative rank of the individual within the service’s dataset. A ranking in the 69.8th percentile means that the user’s Twitalyzer Impact score is higher than 69.8 percent of the hundreds of thousands of active Twitter accounts the service is tracking.
  • Twitalyzer’s user profiles report 30-day trailing averages for Impact to help visualize how the user’s Impact trends over a longer period of time. This smooths out the effect of weekends, vacations, etc.
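The exact formula Twitalyzer uses to combine these factors is not given. A composite metric of this general shape can nevertheless be sketched as a weighted sum of factors that have each been normalised to a 0–1 scale; the factor names and weights below are invented purely for illustration:

```python
# Invented weights: Twitalyzer's real weighting is not published.
WEIGHTS = {
    "followers": 0.20,    # number of followers (normalised to 0-1)
    "references": 0.20,   # unique references and citations of the user
    "retweeted": 0.25,    # frequency of being uniquely retweeted
    "retweeting": 0.15,   # frequency of uniquely retweeting others
    "updates": 0.20,      # relative frequency of updates
}

def impact_score(factors):
    """Weighted combination of factor values, each already scaled to 0-1."""
    return sum(WEIGHTS[name] * factors[name] for name in WEIGHTS)
```

Because the weights sum to 1, an account that maxed out every factor would score 1.0; shifting weight towards the retweet factors would be the obvious way for such a metric to privilege engagement over raw audience size.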

Thoughts on Openness of Social Media Analytics Data

We are starting to see a stream of social media analytic services being developed, together with companies offering to analyse institutional use of social media and advise on best practices. There is a danger, I feel, of unnecessary duplication of such analyses being carried out, with funds which could be used to enhance the teaching and learning and research services provided by institutions being used to pay for unnecessary consultancy work. Whilst there may be legitimate justifications for such consultancy, I feel that factual data which is gathered should be made openly available. In addition, I feel that there is a need for open discussion on how social media analytic findings should be interpreted and used.

Issues for the “Metrics and Social Web Services: Quantitative Evidence for their Use and Impact” Workshop

On 11 July I am facilitating a one-day workshop on “Metrics and Social Web Services: Quantitative Evidence for their Use and Impact” which will be held at the Open University. The workshop aims to ensure that the participants:

  • Have a better appreciation of the importance of the need to gather and interpret evidence.
  • Understand how metrics can be used to demonstrate the value and ROI of services.
  • Have seen examples of how institutions are gathering and using evidence.
  • Are aware of the limitations of such approaches.
  • Have discussed ways in which such approaches can be used across the sector.
Some questions which I hope will be addressed at the workshop (which, incidentally, is now fully subscribed, indicating the interest across the sector in this area) include:
  • Do existing social media analytic services, such as those described above, have a role to play in helping to gain a better understanding of how social media services are being used to support institutional goals?
  • Can such existing social media analytic services be used to help assess personal professional reputation?
  • Should the higher education sector be developing its own social media analytic tools in order to ensure that the specific requirements of higher education institutions are being addressed?
  • What are the dangers and limitations of seeking to analyse and make use of social media metrics and how should such concerns be addressed?

If you have any answers to these questions, or general comments or queries you would like to raise, feel free to add a comment to this post.

Twitter conversation from Topsy: [View]

Posted in Evidence, Twitter, Web2.0 | 5 Comments »