UK Web Focus

Innovation and best practices for the Web

Archive for the ‘Impact’ Category

Recognising, Appreciating, Measuring and Evaluating the Impact of Open Science

Posted by Brian Kelly (UK Web Focus) on 6 September 2011

The #SOLO11 Conference

As I mentioned in yesterday’s post on Use of Twitter at the SOLO11 Conference, I attended the Science Online London 2011 event, SOLO11, on Friday and Saturday, 2-3 September 2011.

We are now starting to see various posts about the event being published. One of the first reports was written by Alexander Gerber and published on the Germany-based Scienceblogs service. Alexander began his brief post by saying:

My sobering conclusion after two days of ScienceOnline London: The technologies are ready for take-off, the early-adopter-scientists are eager to kickstart the engine, but the runway to widespread usage of interactive technologies in science is still blocked by the debris of the traditional academic system. This system needs to be adapted to the new media paradigms, before web 2.0 / 3.0 can have a significant impact on both research and outreach. 

and went on to list three central questions which he feels need to be answered:

  • How can we recognise, appreciate, measure and evaluate the impact of outreach and open science in funding and evaluation practice?
  • Which new forms of citation need to be installed for that?
  • How can we create a reward system that goes way beyond peer-reviewed citations?

I’d like to address certain aspects of the first question, in particular the ways in which one might measure and evaluate the use of social media to support such outreach activities, since this issue was discussed during a workshop session on Online Communication Tools at which I spoke. However, I would first like to share some thoughts on the opening plenary talk at the event.

Plenary Talk on Open Science

For me the highlight of SOLO11 was the opening plenary talk on “Open Science” which was given by Michael Nielsen, a “writer; open scientist; geek; quantum physicist; writing a book about networked science“.

A number of blog posts about the event have already been listed in the Science Online wiki. I found Ian Mulvany’s thoughts on the Science Online London Keynote talk particularly helpful in reminding me of the key aspects of the talk.

Michael told the audience that he didn’t intend to rehearse the potential benefits of open science; rather, he would look at some examples of failures of open science approaches and then turn to other disciplines to see if there were parallels and strategies which could be applied in the science domain.

The example given described the use of open notebook science, in which a readership of ~100 in a highly technical area had been established but there was little active participation from others. The author, Tobias J. Osborne, was putting in a significant amount of effort but was failing to gain value from this work.

Michael gave an example of how a significant change can be made in a short period of time and bring significant benefits: the switch to driving on the right-hand side of the road in Sweden at 5am on Sunday, 3 September 1967.

However, although this example was successful and brought benefits (such as reduced costs), there are many other examples in which the potential benefits of collective action fail to be delivered, often because some potential beneficiaries choose to ‘freeload’ on the work of others.

We can learn from examples of successes in other areas, ranging from the establishment of trade unions and well-established practices for managing village water supplies through to the growth of the arXiv repository and of the Facebook social networking service. Successful approaches include:

Starting small: For example, arXiv’s success was due to its initial focus on a small subject area. Similarly, Facebook was initially available only to students at Harvard University before expanding first to other Ivy League universities, then to other higher education institutions, and finally to everyone.

Monitoring and sanctions: Michael concluded by describing the need to monitor use and, where necessary, to be able to apply sanctions.

The concept is that there is some action where if everyone changed it would be better for everyone, but you need everyone to change at the same time. There are incentives for people not to participate because there is some cost involved in changing for the individual but if the individual does not change, they get the benefit anyway from everyone else changing. This is the same kind of problem that we have with the move to open data.
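To make the incentive problem concrete, here is a minimal sketch in Python of the payoff structure described above. All of the numbers are illustrative assumptions of mine, not figures from the talk; the point is the ordering of the payoffs, which rewards waiting for others to move first.

```python
# A toy model of the collective action problem: everyone is better off
# if enough people switch (e.g. to open data), but each individual can do
# better still by letting others pay the switching cost.
# All numbers are illustrative assumptions, not from the talk.

BENEFIT = 10     # shared benefit each person receives if enough people switch
COST = 4         # cost an individual pays for switching
THRESHOLD = 0.8  # fraction of the community that must switch for the benefit to appear

def payoff(i_switch: bool, fraction_switching: float) -> float:
    """Payoff to one individual given the community's behaviour."""
    benefit = BENEFIT if fraction_switching >= THRESHOLD else 0
    return benefit - (COST if i_switch else 0)

# If 90% switch, a freeloader gets 10 while a participant gets 6:
print(payoff(False, 0.9))  # 10 -> freeloading pays
print(payoff(True, 0.9))   # 6
# If too few switch, participants pay the cost for nothing:
print(payoff(True, 0.5))   # -4
```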

In brief, therefore, Michael felt that those who believe open science can provide benefits tend to be too ambitious – there is a need to start with small, achievable aims and then to broaden the scope using approaches which have proven successful in other areas.

Analytics for Use of Social Media

The second day of the SOLO11 event provided a series of workshop sessions. I attended one which was billed as Scholarly HTML but in fact provided an introduction to blogging on WordPress :-( However, a workshop session on Online Communication Tools, which provided an introduction to Twitter, Google+, etc. in the morning, moved on in the afternoon sessions to:

… cover all angles from how to practically use the tools most beneficially in an institutional or academic environment, to how to measure their impact via statistics and online “kudos” tools

Alan Cann, one of the facilitators of the session, invited me to speak as he had attended a one-day workshop on “Metrics and Social Web Services: Quantitative Evidence for their Use and Impact” which I organised recently. I used the slides from a talk on “Surveying Our Landscape From Top to Bottom” which reviewed various analyses of the use of social media services by individuals and institutions, including tools such as Klout, PeerIndex and Twitalyzer.

Alan Cann also spoke in the session and in his presentation pointed out the statistical limitations of such services – similar concerns to those raised by Tony Hirst in a talk he gave at the “Metrics and Social Web Services: Quantitative Evidence for their Use and Impact” event.

Tony’s slides, which are available on Slideshare, illustrated the dangers of misusing statistics, including graphs of strikingly different datasets which can all be, incorrectly, reduced to the same linear fit.
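This is the classic point made by Anscombe’s quartet, and a few lines of Python reproduce it. The datasets below are the standard quartet values (not necessarily Tony’s actual examples): four visually very different datasets which share near-identical summary statistics and the same fitted line.

```python
import numpy as np

# Anscombe's quartet: four visually very different datasets with
# near-identical summary statistics and the same fitted line.
x = np.array([10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5])
datasets = {
    "I":   (x, np.array([8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68])),
    "II":  (x, np.array([9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74])),
    "III": (x, np.array([7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73])),
    "IV":  (np.array([8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8]),
            np.array([6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89])),
}

for name, (xs, ys) in datasets.items():
    slope, intercept = np.polyfit(xs, ys, 1)
    r = np.corrcoef(xs, ys)[0, 1]
    print(f"{name}: mean_y={ys.mean():.2f} r={r:.2f} "
          f"fit: y = {slope:.2f}x + {intercept:.2f}")
# All four print essentially the same numbers (y = 0.50x + 3.00, r = 0.82),
# yet plotting them reveals completely different shapes.
```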

Tony went on to describe Goodhart’s Law which states that:

once a social or economic indicator or other surrogate measure is made a target for the purpose of conducting social or economic policy, then it will lose the information content that would qualify it to play such a role.

and Campbell’s Law:

The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.

Lies, Damned Lies and Social Media Analytics?

Might we therefore conclude that social media analytics tools such as Klout, PeerIndex and Twitalyzer have no role to play in, for example, “measuring and evaluating the impact of outreach and open science”? Not only are the ways in which, for example, PeerIndex aggregates its scores for authority, activity and audience into a single value statistically flawed but, if such services are used for decision-making purposes, we will see users gaming the system.
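To see why collapsing several components into a single score loses information, consider a toy weighted average. The weights below are invented for illustration – PeerIndex’s actual algorithm is not public – but any such scheme allows very different profiles to collapse to similar scores.

```python
# Two very different profiles can collapse to near-identical "influence"
# scores under a weighted average. The weights here are invented for
# illustration; they are not PeerIndex's actual (unpublished) algorithm.
WEIGHTS = {"authority": 0.5, "activity": 0.3, "audience": 0.2}

def single_score(components: dict) -> float:
    """Aggregate component scores into one number, discarding their shape."""
    return sum(WEIGHTS[k] * v for k, v in components.items())

specialist = {"authority": 90, "activity": 10, "audience": 30}   # respected but quiet
broadcaster = {"authority": 30, "activity": 90, "audience": 90}  # noisy and popular

print(single_score(specialist))   # 54.0
print(single_score(broadcaster))  # 60.0 -- similar scores, very different people
```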

Whilst this is true, I also feel that there are dangers in trying to develop a perfect way of measuring such impact – and it was clear from the workshop that there is an acceptance of the need for such measurements.

There are many other approaches to measurement which we generally accept but which have underlying flaws. The university system, for example, may be regarded as evaluating its successful consumers by awarding first, upper second (2:1), lower second (2:2) or third class degrees. Yet despite the limitations of such assessment, its importance is accepted.

We might also wish to consider how such measuring schemes are used. The approaches taken by Klout and PeerIndex have parallels with Google’s ranking algorithms – and, again, can be gamed. But organisations are prepared to invest in ways of gaining high Google rankings since this provides business benefits, through Web sites being more easily found in Google searches.

We are starting to hear of examples of Klout and PeerIndex statistics being used in recruitment, with a recent article published in the New York Times inviting readers to:

IMAGINE a world in which we are assigned a number that indicates how influential we are. This number would help determine whether you receive a job, a hotel-room upgrade or free samples at the supermarket. If your influence score is low, you don’t get the promotion, the suite or the complimentary cookies.

I suspect that marketing departments will use such statistics and that people working in marketing and outreach activities will start to include personal social media analytics scores in their CVs. Note that, as can be seen from the image showing my PeerIndex scores, such tools can be used in a variety of ways – clearly you wouldn’t employ me on the basis of the diagram if you were looking for someone with demonstrable experience of outreach work using Twitter in the field of medicine (my areas tend to be technology, sport and politics).

I therefore feel that we should treat social media analytics with care and use them in conjunction with qualitative evidence of value. But to disregard such tools completely whilst waiting for the perfect solution to appear would be to fall into the trap which Michael Nielsen warned against: seeking broad acceptance of a universally applicable solution.

I’d welcome your thoughts.

Posted in Evidence, Impact | 2 Comments »

Event Report: Metrics and Social Web Services Workshop

Posted by Kirsty Pitkin on 18 July 2011

In this guest post, event amplifier Kirsty Pitkin reports on the key messages from the recent UKOLNeim workshop – Metrics and Social Web Services: Quantitative Evidence for their Use and Impact.


Introduction

In introducing the event, Brian Kelly emphasised that the aims were to explore ways of gathering evidence that can demonstrate the impact of services and to devise appropriate metrics to support the needs of the higher and further education sector.

Many people argue that you cannot reduce education to mere numbers, as it is really about the quality of the experience. However, Kelly argued that numbers do matter, citing the recent JISC-funded Impact Report, which found that the public and the media are influenced by metrics. As we have to engage with this wider community, metrics are going to become more relevant.

View the introduction in full on Vimeo

The slides to accompany this talk are available on Slideshare

Why Impact, ROI and Marketing are No Longer Dirty Words

Amber Thomas, JISC

Thomas mapped out the current landscape, drawing on her own experiences and those of colleagues working in other areas at JISC. She observed a dominant culture of resistance to measurement within education for a number of reasons, including the concern that caring about metrics will mean that only highly cited people or resources will be valued. She noted that the search for an effective impact model is taking place on shifting sands, as issues associated with the value, ownership and control of media channels are being contested, as is the fundamental role of the university within British society.

In discussing impact, Thomas noted that it would be tempting to use the language of markets – with education as a “product” – but stressed that this is not how we see ourselves in the education sector. One of the challenges we face is how to represent the accepted narrative of the sector as a nurturer and broker of knowledge through the use of metrics.

Thomas went on to describe some of the dirty words in this space and the measurements that are associated with them. However, she noted that these measurements can be used for good, as they can help to instigate change. To support this, she provided a model for the role of metrics in decision making, with metrics being one form of evidence, and evidence being only one form of influence on the decision maker.

She concluded by outlining our options for responding to the impact debate: we could deny the impact agenda is important, or we could deepen our understanding and improve our metrics so they work for us and are fit for purpose. The possible directions we could take include developing business intelligence approaches, improving data visualisation techniques and looking for better tools to give us deeper understanding of the metrics. She also stressed that we need to look more closely at the use and expectations of social media in the commercial sector, as we might find we are expecting too much of ourselves.

“I don’t think we can ignore the debate on impact and metrics… what we need to do is engage with the impact debate and use the sort of language that is expected of us to defend the values of the sector as we wish to defend them.”

View the presentation in full at Vimeo

The slides to accompany this talk are available on Slideshare

Surveying our Landscape from Top to Bottom

Brian Kelly, UKOLN

Kelly provided an overview of the surveys he has been carrying out using a variety of analytics tools.

He began with a personal view: discussing the picture of his own Twitter usage provided by the Tweetstats tool, and how this differs from his own memory. He noted that the data did not always correspond with other evidence, emphasising that we cannot always trust the data associated with such tools.

“You need to be a bit skeptical when looking at this data… you can’t always trust all the data that you have.”

From an institutional perspective, he asked: “What can commercial analytics tools tell us about institutional use of Twitter?” He compared the Klout scores of Oxford and Cambridge Universities’ Twitter accounts, showing how visualisations of the numbers can give a much better understanding of what those numbers really mean than the numbers themselves do in isolation.
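The kind of comparison Kelly showed can be sketched in a few lines of matplotlib: plotting component scores side by side rather than quoting two bare overall numbers. The component names roughly follow Klout’s terminology of the time; all of the figures are invented placeholders, not the real scores of the two accounts.

```python
import matplotlib.pyplot as plt
import numpy as np

# Compare two accounts' component scores side by side rather than as two
# bare overall numbers. All figures are invented placeholders, not the
# real Klout scores of the Oxford and Cambridge accounts.
components = ["true reach", "amplification", "network"]
account_a = [45, 30, 55]
account_b = [50, 48, 20]

x = np.arange(len(components))
plt.bar(x - 0.2, account_a, width=0.4, label="Account A")
plt.bar(x + 0.2, account_b, width=0.4, label="Account B")
plt.xticks(x, components)
plt.ylabel("score")
plt.legend()
plt.title("Similar overall scores, different shapes")
plt.show()
```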

He continued in this vein by demonstrating PeerIndex, which he used to analyse participants of the workshop. He noted that the top seven people were all people he knows and has had a drink with, and asked whether this showed that the gathering was really a self-referential circle. Kelly also noted how easy it can be to gain extra points and questioned whether it is ethical to boost your score in this way. However, he observed that research funding is already determined by flawed metrics, and gaming the system is nothing new. So will universities headhunt researchers with valuable social media scores?

Next he looked at Slideshare statistics, using a presentation by Steve Wheeler as a case study. Wheeler made a presentation to 15 people, but his slides were viewed by over 15,000 people on Slideshare. Kelly asked us to consider the relationship between the number of views and the value of this resource. He also examined statistics from the collection of IWMW slides, observing that the commercial speakers had higher view rates, and that the most popular slides were not in corporate look and feel. This evidence could be used to challenge standard marketing perspectives.

Finally, Kelly compared Technorati and Wikio results to demonstrate that four people in the room were in the top 67 English language technology blogs. He pondered whether they should share their success strategies, and how we could tell the story of this data in different ways.

To conclude, Brian emphasised that he believes this kind of analysis can inform decision making, so it is important to gather the data. However, the data can be flawed, so it is important to question it thoroughly.

View the presentation in full on Vimeo

The slides to accompany this talk are available on Slideshare

Learning From Institutional Approaches

Ranjit Sidhu, SiD

Sidhu focussed primarily on the role of pound signs in communicating particular messages and connecting social media metrics to reality in a powerful way.

He began by observing that the data is often vague. The analytics institutions receive look exactly the same as the analytics used by commercial organisations, despite the fact that their needs and objectives differ widely. He attributed this to the dominance of the technology, which has taken control over the information that gets delivered, thus ensuring everyone gets data that is easy to deliver, rather than data that is meaningful to them. Sidhu also observed that universities often fail to break down their data into relevant slices, instead viewing it at such a high level that it cannot usefully be interpreted in financial terms.

In a self-confessed rant, Sidhu emphasised that you have a chance to tell the narrative of your data. Most social media data is openly available, so if you don’t, someone else will and you will no longer have control over that narrative.

“You need to be proactive with your data. If you’re proactive, people don’t sack you.”

Sidhu went on to demonstrate the type of analytics dashboard he creates for universities, discussing the importance of design as well as the analysis itself. His dashboard features nine groups of data and only three key themes, which fit onto one A4 sheet and are arranged in an attractive way. He also discussed his methodology for creating these dashboards, which involves finding out what people want to know first, then finding the data to match those requirements. This is the reverse of common practice, where people take the data that is readily available and try to fit it to their requirements.

He explained the need to match up offline experience with online experience to help generate projections and quantify the savings produced by online tools and social media. He exemplified this by talking us through one of the most powerful statistics he creates: a calculation of the amount saved through online downloads of prospectuses compared to sending printed versions – usually around £500 per month. This takes the online data, combines it with existing data from the comparable offline process, and creates a tangible value.
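A back-of-the-envelope version of that calculation might look as follows. The unit costs and rates are hypothetical placeholders of mine – Sidhu did not give his figures – but they show how online data and offline costs combine into a tangible value.

```python
# Back-of-the-envelope version of the prospectus saving. The unit costs
# and rates are hypothetical placeholders; Sidhu did not give his figures.
online_downloads_per_month = 1000   # from web analytics
print_and_postage_cost = 2.50       # GBP per printed prospectus (assumed)
substitution_rate = 0.20            # assumed fraction of downloads replacing a print request

saving = online_downloads_per_month * substitution_rate * print_and_postage_cost
print(f"Estimated monthly saving: £{saving:.0f}")  # £500
```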

He extended this to show other types of story we could tell with such data, including the potential value of a website visit from a specific country. Once you have this, you can more effectively demonstrate the monetary value of social media by using referrer strings to show how a visitor from that country reached your site, and therefore make better decisions about how you attract those visitors.
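As a minimal sketch of the referrer-string idea: given the referrer field from a web server log, you can attribute a visit to the social media service which sent it. The domain list here is illustrative, not exhaustive.

```python
from urllib.parse import urlparse

# Attribute a visit to a social media source from its HTTP referrer.
# The domain list is illustrative, not exhaustive.
SOCIAL_DOMAINS = {
    "twitter.com": "Twitter",
    "t.co": "Twitter",
    "facebook.com": "Facebook",
    "plus.google.com": "Google+",
}

def social_source(referrer: str):
    """Return the social media service a referrer points to, or None."""
    host = urlparse(referrer).netloc.lower().removeprefix("www.")
    return SOCIAL_DOMAINS.get(host)

print(social_source("http://t.co/abc123"))                   # Twitter
print(social_source("http://www.facebook.com/l.php?u=..."))  # Facebook
print(social_source("http://www.google.com/search?q=..."))   # None (search, not social)
```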

“You have to justify your spend. Your justification has to be based on what you are trying to do at that particular time.”

View the presentation in full at Vimeo

The slides to accompany this talk are available on Slideshare

Identity, Scholarship and Metrics

Martin Weller, The Open University

Weller posed many questions and points to ponder, focussing on how academic identity is changing now that we are online.

He observed that identity is now distributed across different tools, with a greater tendency to intersect with the personal. There are more layers to consider: where once you had your discipline norms and your institutional norms, now there are more social media norms to observe to create cultural stickiness. You end up with a set of alternative representations of yourself, so your business card is now a much messier thing.

Weller went on to define impact as a change in behaviour, but emphasised that telling the story of impact online is actually very difficult. Your impact may be more about long term presence than an individual post. The metrics we currently use do not necessarily correspond to our traditional notions of academic impact: after all, what do views mean? What do links mean? What do embeds mean? How do they compare to citations?

He put forward the accepted view that blogging and tweeting provide you with an online identity, which drives attention to more traditional outputs. He placed this in the context of a digital academic footprint, which helps tell the story of the impact you are having within your community. Whilst metrics can be useful for this, he warned that they could also be dangerous, with official recognition leading to a gameable system.

He concluded by illustrating a sandwich model explaining why metrics will be increasingly important to what academics do: with top-down pressure from above to demonstrate impact when applying for funding, and bottom-up pressure from individuals asking why their impact via social media doesn’t count. Once you’ve got those two pressures, you have an inevitable situation.

View the presentation in full on Vimeo

The slides to accompany this talk are available on Slideshare

Impact of Open Media at the OU

Andrew Law, The Open University

Law discussed the activities of the Open University in monitoring the various media channels used to disseminate content, and how these metrics have led to real, significant funding decisions.

He observed that several of their online media channels did not necessarily have a very clear strategic remit. However, they found that the data was increasingly asking the question: “What is the purpose of all this activity?” Deeper analysis of this data led to the development of clearer strategies for these channels, based on their core institutional aims.

Law emphasised the importance of having all of the information about the different channels in one place to help dispel the myths that can grow up around particular tools. He used the example of iTunes U, which gets huge amounts of internal PR on campus, whilst channels like OpenLearn and YouTube sit very quietly in the background. However, the reality is very different and he observed that one of the challenges they face is ensuring that the broad story about the performance of all of these channels is well understood by the main stakeholders.

Law expanded on this, noting that whilst the iTunes U download statistics provide a positive story, the channel does not actually perform well against their KPIs compared to other channels, despite little or no investment in those other channels. He observed that their pedagogical approach to iTunes U – which includes offering multiple, small downloads, with transcripts and audio downloaded separately – can inflate the numbers. He compared this to their YouTube channel, which has received very little investment but is performing very effectively. He also discussed the OpenLearn story: OpenLearn has been quietly outstripping other channels against their KPIs – particularly in terms of conversions – because it has a lot of discoverable content. He emphasised that this is a very positive story for the university, which needs to be told and built upon.

By demonstrating these realities, the data has demanded of management a much clearer sense of purpose and strategy. This has led to real investment. The OU has massively increased the amount of money spent on YouTube and OpenLearn, representing a significant change in strategy.

In conclusion, Law did note that, so far, the data has only helped the university, not the end user, so their next steps include mapping journeys between these channels to identify the traffic blockages and better tune the service delivered across the board.

View the presentation in full on Vimeo

The Script Kiddie’s Perspective

Tony Hirst, The Open University

Hirst provided a set of observations and reflections, which ranged from ethical issues about the use of statistics through to practical demonstrations of visualised data.

He began by observing that social media are co-opting channels that were private and making them public, so there is nothing inherently new going on. He quoted Goodhart’s Law, emphasising that, whilst measuring things can be good, once measures are adopted as targets they distort what you are measuring and create systems open to corruption.

Hirst went on to discuss the perils of summary statistics and sampling bias. He emphasised that the way you frame your expectations about the data and the information that can be lost in the processing of that data are both vital considerations if you are to accurately tell the story of that data.

Hirst discussed the role of citations as a traditional measure of scholarly impact, and the ways your content can be discovered and thereby exert influence through citation. He highlighted three layers of discovery: the media layer, the social layer and the search engine layer, each of which enables your material to be discovered and therefore to influence behaviour. He noted that if links come through to your own domain, you can already track how visitors are reaching your content. What is difficult to track is when there is lots of social media activity but none of it comes back to your domain.

Hirst demonstrated some approaches to tracking this type of activity, including the Open University’s Course Profiles Facebook app; Google search results, which are including more personalisation; and social media statistics gleaned through APIs, many of which can be accessed via an authentication route using OAuth.
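As a sketch of that OAuth route, the snippet below uses the requests-oauthlib Python library to make an authenticated API call. The credentials are placeholders, and the endpoint shown is Twitter’s REST API of the era, which may no longer be available – treat the details as illustrative rather than as a recipe.

```python
from requests_oauthlib import OAuth1Session

# Fetch follower counts via an OAuth-authenticated API call.
# Credentials are placeholders; the endpoint is Twitter's REST API of the
# era and may no longer be available -- treat the details as illustrative.
twitter = OAuth1Session(
    client_key="CONSUMER_KEY",
    client_secret="CONSUMER_SECRET",
    resource_owner_key="ACCESS_TOKEN",
    resource_owner_secret="ACCESS_TOKEN_SECRET",
)

resp = twitter.get(
    "https://api.twitter.com/1.1/users/show.json",
    params={"screen_name": "some_user"},  # placeholder account name
)
user = resp.json()
print(user["followers_count"], user["statuses_count"])
```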

Hirst concluded by discussing some visualisations of Twitter communities to show how these can provide insight into external perspectives and how we are defined by others in our community.

View the presentation in full on Vimeo

The slides to accompany this talk are available on Slideshare

Conclusions

The workshop brought forward a number of concerns which were often less about the tools and technologies involved and more about the ethics and pitfalls of formalising the measurement of social media activity. The main concerns seemed to be the potential for creating a gameable system, and for metrics which do not reflect reality in a useful way. Ensuring that the metrics we use are fit for purpose will not be an easy challenge, but the discussions held within this workshop helped to identify some potential routes to improving the value and integrity of social media data.

Posted in Evidence, Guest-post, Impact, Social Networking | Tagged: | 4 Comments »

Plans for “Metrics and Social Web Services” Workshop on Monday

Posted by Brian Kelly (UK Web Focus) on 7 July 2011

Supporting a Remote Audience

On Monday 11th July I am facilitating a one-day workshop on “Metrics and Social Web Services: Quantitative Evidence for their Use & Impact” which will be held at the Open University. There has been a lot of interest in this workshop, which I think is indicative of the perceived importance of gathering evidence to demonstrate the use, impact and value of online services – in this case with a particular focus on Social Web services.

Since there is such interest in the workshop we have decided to attempt to video stream the talks. However, we cannot guarantee a high-quality video stream, since we will be setting up the infrastructure on the morning of the workshop and will be keeping our fingers crossed that the bandwidth is up to it and that there are no firewall problems.

We do intend to record the talks given at the workshop and make these available shortly afterwards. In addition, to help provide a context for the workshop, I have pre-recorded an audio presentation of the Welcome slides, which is available as a slidecast and embedded below.

The talk is also available on YouTube and embedded below (although note that the Moyea PPT to Video conversion tool used to create the video included a watermark which is embedded in the video).

If you feel this workshop is of interest to you please sign up on the Eventbrite booking system as a remote participant so that we can email you details of the video stream.

Evaluating Shhmooze for the Local Audience

I should also add that for those who will be physically present we will be evaluating the Shhmooze app. The Shhmooze marketing material states that:

Research by Shhmooze shows that 75% of conference delegates find networking to be hard work or ‘a nightmare’!

That’s because it’s really hard to find the right person to talk to within a crowd of dozens, hundreds or thousands of people. And, for many people, it’s even harder to strike a conversation out of nowhere with a complete stranger.

Shhmooze takes the pain out of networking by making it easy to find the 3 people at the event you really need to talk to! 

and goes on to suggest that event organisers can tailor the following:

But networking’s not always easy. That’s why we’ve teamed up with Shhmooze to bring you a free smartphone app you can use to

  • find useful, interesting people
  • broadcast your professional profile to other event attendees
  • privately contact the people you want to talk with face-to-face

Whilst the marketing rhetoric grates somewhat (how will it ensure that I find useful, interesting people and not useless, dull ones?!) I do feel it would be useful to explore the potential of geo-located social apps in the context of events. Perhaps a one-day workshop with 50 participants isn’t the ideal event, but we’d like to evaluate its potential prior to using it at a large event, such as the forthcoming IWMW 2011 event which will have about 150 participants (note that bookings are due to close on Friday).

If you are attending the workshop please consider installing the app (iPhone/iPod Touch/ iPad only at present) and try and track me down on Monday. I’m sure you are an interesting person and I’ll try and be useful :-)

Posted in Events, Impact | Tagged: | 3 Comments »

Metrics for Understanding Personal and Institutional Use of the Social Web

Posted by Brian Kelly (UK Web Focus) on 19 May 2011

Tomorrow I am giving an invited presentation on “Metrics for Understanding Personal and Institutional Use of the Social Web” at a workshop on “Digital Impacts: How to Measure and Understand the Usage and Impact of Digital Content” which is being organised by the Oxford Internet Institute.

The abstract for the event summarises the need to measure the usage and impact of electronic content in order to be able to demonstrate a return on the investment in providing such services:

The question of how we can measure and understand the usage and impact of digital content within the education sector is becoming increasingly important. Substantial investment goes into the creation of digital resources for research, teaching and learning and, in the current economic climate, both content creators, publishers as well as funding bodies are being asked to provide evidence of the value of the resources they’ve invested in.

But how do we go about defining value and impact? Which metrics should we adopt to understand usage? When is a digital resource a well used resource?

My contribution to the event will be to explore how Social Media channels can be used to enhance access to content – whether digital, physical or less tangible, such as ideas – and the ways in which metrics can be used to understand how those channels are being used, to inform the development of appropriate best practices and to provide indicators of usage and impact.

The slides for the talk are available on Slideshare and are also embedded below.

Posted in Impact | 4 Comments »

Impact, Openness and Libraries

Posted by Brian Kelly (UK Web Focus) on 3 December 2010

“Measuring Impact” is the theme of the December 2010 issue of CILIP’s Library and Information Update magazine. In an editorial piece entitled “Capturing Numeric Data to Make an Evidence Based Approach”, Elspeth Hyams provides a shocking revelation: school libraries have very little impact. Or at least that’s how a review commissioned by the Department for Culture, Media and Sport is being spun. The reality, as described in an article by Javier Stanziola published in CILIP Update, is that “studies of library impact are hard to find” – a quite different story. The article, “Numbers and Pictures: Capturing the Impact of Public Libraries”, suggests that “the sector is not playing the Prove Your Impact game well”. I agree, and this criticism can be applied to the higher education sector too.

Elspeth feels that future longitudinal research will depend on data collection by frontline services (and knowing what data to collect).  The editorial concludes “So whether we like it or not, we would be wise to learn the ways of social scientists and the language of policy making“.

The importance of gathering data in order to demonstrate impact and value underpinned a session I ran recently on “‘Sixty Minutes To Save Libraries’: Gathering Evidence to Demonstrate Library Services’ Impact and Value”. As described in a post on “Gathering and Using Evidence of the Value of Libraries” which reviewed the session, we did identify relevant sources of data – collated annually by SCONUL from information provided by academic libraries – which could be used to demonstrate value and impact and which, if aggregated, could raise the profile and value of the academic library sector.

As described on the SCONUL Web site:

SCONUL has been collecting and publishing statistics from university libraries for over twelve years, with the aim of providing sound information on which policy decisions can be based.

Further information informs readers that “All UK HE libraries are invited to complete the SCONUL Statistical Questionnaire, which forms the foundation of all SCONUL’s statistical reports and services. The questionnaire details library resources, usage, income and expenditure for any academic year.”

However, as was discussed at the session, the SCONUL data is not publicly available. It seems that the SCONUL Annual Library Statistics is published yearly – and copies cost £80.

And here we have a problem. As I write this post the SCONUL 2010 conference is taking place and, via the #sconul10 hashtag, I can see that Twitterers at the event are summarising the key aspects of the various talks:

SCONUL can help us promote the value of libraries to wider world/senior people (see tweet)

We need to be a more self-confident community – blow our own trumpet e.g. about our track record with shared services (see tweet)

Again I agree.  But the closed nature of the statistics is a barrier to blowing one’s own trumpet and promoting the value of libraries.

Perhaps more importantly in today’s climate, the closed nature of the report and the underlying data (closed by its price, closed by being available only to member organisations and closed by being available only in PDF format) creates perceptions of secrecy which go against expectations that public sector organisations should be open and transparent.

And whilst one might expect certain public sector organisations to have a tendency to be closed and protective (the Ministry of Defence, perhaps), one might expect libraries, with their characteristics of trust and openness, to see the advantages of openness as an underlying philosophy, as well as its appropriateness in today’s political environment.

A few days ago I attended the Online Information 2010 conference. I particularly enjoyed the talk on “The Good (and Bad) News About Open Data”  by Chris Taggart of openlylocal.com, “a prototype/proof-of-concept for opening up local authority data … [where] everything is open data, free for reuse by all (including commercially)“.

In his presentation Chris described the potential benefits which openness can provide and listed concerns which are frequently raised, together with responses to those concerns. Rather than trying to apply Chris’s approaches in the context of the academic library data which is collated by SCONUL, I will simply link to Chris’s presentation, which is available on Slideshare and embedded below.

So if the following arguments are being used to maintain the status quo, remember that increasing numbers of councils have already found their own ways of addressing such concerns:

  • People & organisations see it as a threat (and it is if you are wedded to the status quo, or an intermediary that doesn’t add anything)
  • The data is messy e.g. tied up in PDFs, Word documents, or arbitrary web pages
  • The data is bad
  • The data is complex
  • The data is proprietary
  • The data contains personal info
  • The data will expose incompetence
  • The tools are poor and data literacy in the community is low

I began this post by citing the sub-heading of an article published in CILIP Update: “the sector is not playing the Prove Your Impact game well”. Are academic libraries playing the game well? Can they change? Or will SCONUL be regarded as an intermediary which is wedded to the status quo? Or might the change be driven by a bottom-up approach? After all, since the individual institutions collate the information prior to submitting it to SCONUL, could the raw data be published by the institutions themselves? A sketch of what that might look like is given below.
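Such a bottom-up approach could be as simple as each library publishing the figures it already compiles as an open CSV file. The column names and numbers below are invented for illustration; they are not SCONUL’s actual questionnaire fields.

```python
import csv

# Publish the statistics a library already compiles as an open CSV file.
# Column names and figures are invented for illustration; they are not
# SCONUL's actual questionnaire fields.
rows = [
    {"year": "2009/10", "visits": 812000, "loans": 420000,
     "income_gbp": 5600000, "expenditure_gbp": 5400000},
]

with open("library-stats.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
```

Anyone could then aggregate these files across institutions – no £80 report, no PDF extraction – to tell the sector-wide story.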

Posted in Impact, openness | 5 Comments »

I’m A Top Influencer For The Open University! (Or Am I?)

Posted by Brian Kelly (UK Web Focus) on 22 June 2009

Metrics For Measuring Impact in the Social Web

Martin Weller has published a blog post on Connections versus Outputs which discusses a report produced by the Open University Online Services team in collaboration with external consultants (MarketSentinel). The aim of the work was to examine “the broader influence of various web sites and looking at sentiment mining. The idea from an official communications perspective being you can see how well regarded your institution is in different sectors, and maybe influence that perception“.

Their findings? Well it seems this UK Web Focus blog is:

  • In 6th place in a list of the Open University’s top 100 influencers in ‘distance learning’;
  • 4th in a ‘betweenness‘ category of “Stakeholders who are “stations” where information (on the issue in focus) is passed via in order to reach the constituency of said stakeholder”;
  • 8th in a ‘hubness‘ table which “is a characteristic of disproportionately linking to those who are authoritative on a given topic”.

Andy Powell responded to this post in a comment saying “Sorry… not meaning to pick on Brian here but the appearance of his blog, given this particular choice of topic [distance learning], stuck out a little”. Andy was correct to mention this strange result. I have a good awareness of the topics I have covered in my 580 posts and I know this isn’t a topic I write about – and a search for the term confirms this (although there may have been a couple of occurrences of the term in comments).

Andy’s comment also touched on the sensitivity of discussing an individual, and this concern was shared by others on Twitter. Let me make it clear that I think it is appropriate to explore both the reasons for my inclusion in this list and the relevance of such an approach. As Martin Weller commented, this is very appropriate academic debate.

Interpreting The Findings

Let’s begin by trying to explore the reasons why I’m listed so highly (Martin Weller and Tony Hirst are also featured highly in the tables, but this can probably be explained by the fact that they work at the Open University).

Collusion: Perhaps Martin Weller, Tony Hirst and I collude in linking to each other in order to boost our rankings. After all, we know each other and follow each other on Twitter. That could be a possibility – but we don’t.

Echoing: It may be, as was suggested in a second post on Martin Weller’s blog, that we are echoing each other’s views and the metrics simply reflect that. There may be some truth in that. As you can see from Martin Weller’s post on Web 2.0 – even if we’re wrong, we’re right, following a talk I gave on What If We’re Wrong? and my follow-up posts on Even If We’re Wrong, We’re Right and What If We’re Right?, we can see this in action. Now this reflecting on others’ views and adding new insights is, for me, part of the learning process. And although we’ve created something new in this process (we’re thinkers and not just linkers, as the saying goes) I appreciate that the metrics may give (undue?) weight to this.

Complementing: It may also be that the reason this blog is ranked so highly is that it complements the topics covered by Martin, Tony and others. This blog tends to reflect my background in working in IT Services and my interests in, say, Web accessibility – areas which tend not to be addressed in Martin or Tony’s blogs so much. So perhaps my ‘influence’ reflects this?

Being an early adopter: Although I wasn’t an early adopter of blogging (I started in November 2006) it may be that my high profile in the Open University reports simply reflects my presence on various Social Web technologies (Twitter, FriendFeed, etc.). This could mean that the survey is picking up on the technologies I’ve been using, rather than the content I publish on this blog.

Blog is outside the institution: This blog, like the blogs published by others mentioned in the report, is hosted outside my institution. Perhaps the high ranking is a manifestation of the hosting arrangements? Or perhaps the fact that we have chosen an external host indicates early adoption of blogging (before our host institutions provided a blogging service) and the survey is skewed by the presence of early adopters? Or perhaps a willingness to use a third-party service, when this may have been discouraged (it’s not open source; what about sustainability of the service? …), reflects a level of independence and willingness to take risks which the survey picks up on?

Social Web presence builds on peer-reviewed publications: I don’t just publish on Social Web services such as blogs, Twitter and Slideshare. I also write papers for peer-reviewed journals and conferences, and invited papers for conferences. I then reference the papers on this blog and make slides (and sometimes video recordings) of the accompanying presentations available on services such as Slideshare, Vimeo and Google Video. Perhaps the amplification of peer-reviewed ideas and approaches via the Social Web helps to enhance the impact I have, and this is being detected in the survey?

Writing style, linking style, etc.: It may be that my writing style and the ways I try to cite relevant posts, Web resources and even tweets contribute to the high ranking.

Relevant, useful and interesting content: In an attempt to document the range of possible explanations for this blog being identified as a significant influencer and hub for ideas related to ‘distance learning’, I should include the possibility that the content of the blog is felt to be relevant, timely, useful and interesting!

These are some thoughts which occur to me about my high ranking in the survey. But surely we simply need to find out what algorithms are being used. And, as Peter Murray-Rust has pointed out in a blog post on “Open Source increases the quality of science”, if we have access to the source code we will be better placed to spot any flaws in it.

This argument reminds me of the time I attended a WWW conference and heard a researcher describe how his team had reverse-engineered the algorithms used by a number of the global search engines. In the subsequent questions an engineer from Google said he wished the paper hadn’t been published, as Google would have to change its algorithms in order to prevent spammers from exploiting this knowledge. I suspect that we’d find institutions looking at ways to game Social Web metrics, especially if this became competitive. And, knowing how important an institution’s position in the University league tables is, I suspect this would happen.

Is This A Useful Starting Point?

If we have to accept that there are likely to be various metrics covering use of the Social Web, the question may be whether the approach which is being taken at the Open University provides a useful starting point.

Andy Powell agrees with Martin that metrics on how the Social Web can impact scholarly activities are needed: “I think we want to get to the same place (some sensible measure of scholarly impact on the social Web)” but goes on to add “I disagree with you that this is a helpful basis on which to build”.

Is this glass, as Martin feels, half full, or would you agree with Andy that it’s half empty? I’ll add a third alternative – I’ll finish off what’s in the glass while the rest of you are arguing! Or, to put it another way, while the academics go off in pursuit of the perfect metric, the marketing departments will make use of a variety of impact measurements in any case. I suspect we’ll find people in marketing departments asking “How can we use the Social Web to market our institutions, attract new students and new funding?” and then asking “How can we measure the impact – or ROI – of our presence in the Social Web?”. I’ll conclude by echoing Martin’s conclusions:

We’ve got to start somewhere – my take on this is that the output may have problems, but it’s a start. We could potentially develop a system focused on higher education, which is more nuanced and sophisticated than this. By analysing existing methodologies and determining problems with them (such as the three I’ve listed above) we could develop a better approach. I hold out hope that we can get interesting results from data analysis that reveals something about online scholarly activity.

And we should be analysing the existing methodologies in an open fashion. I hope my observations have contributed to this analysis.

Posted in Impact | 1 Comment »