Metrics For Measuring Impact in the Social Web
Martin Weller has published a blog post on Connections versus Outputs which discusses a report produced by the Open University Online Services team in collaboration with external consultants (MarketSentinel). The aim of the work was to examine “the broader influence of various web sites and looking at sentiment mining. The idea from an official communications perspective being you can see how well regarded your institution is in different sectors, and maybe influence that perception“.
Their findings? Well it seems this UK Web Focus blog is:
- 6th place in a list of the Open University’s top 100 influencers in ‘distance learning’;
- 4th in a ‘betweenness’ category of “Stakeholders who are “stations” where information (on the issue in focus) is passed via in order to reach the constituency of said stakeholder”;
- 8th in a ‘hubness’ table which “is a characteristic of disproportionately linking to those who are authoritative on a given topic”.
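The ‘betweenness’ and ‘hubness’ measures quoted above correspond to standard concepts from network analysis: betweenness centrality (how often a node lies on the shortest paths between other nodes) and the hub score from the HITS algorithm (hubs link to authoritative nodes). As a purely illustrative sketch (my own, not the consultants’ actual methodology, and with hypothetical blog names), these can be computed on a toy link graph using the networkx library:

```python
import networkx as nx

# Toy directed link graph: an edge A -> B means blog A links to blog B.
# The node names are hypothetical.
G = nx.DiGraph([
    ("blog_a", "blog_b"), ("blog_a", "blog_c"),
    ("blog_b", "blog_c"), ("blog_b", "blog_d"),
    ("blog_c", "blog_d"), ("blog_d", "blog_a"),
])

# Betweenness: how often a blog is a "station" that information
# passes through on the way between other blogs.
betweenness = nx.betweenness_centrality(G)

# HITS: hub scores reward linking to authoritative blogs;
# authority scores reward being linked to by good hubs.
hubs, authorities = nx.hits(G)

# Rank blogs by hub score, highest first.
ranked_hubs = sorted(hubs, key=hubs.get, reverse=True)
print(ranked_hubs)
```

Whether the survey used these exact measures, and over what crawl of links, is of course precisely the kind of detail that would need to be published for the results to be scrutinised.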
Andy Powell responded to this post in a comment saying “Sorry… not meaning to pick on Brian here but the appearance of his blog, given this particular choice of topic [distance learning], stuck out a little”. Andy was correct in mentioning this strange result. I have a good awareness of the topics I have covered in my 580 posts and I know this isn’t a topic I write about – and a search for the term confirms this (although there may have been a couple of occurrences of the term in comments).
Andy’s comment also touched on the sensitivity of discussing an individual, and this concern was shared by others on Twitter. Let me make it clear that I think it is appropriate to explore both the reasons for my inclusion in this list and the relevance of such an approach. As Martin Weller commented, this is very appropriate academic debate.
Interpreting The Findings
Let’s begin by trying to explore the reasons why I’m listed so highly (Martin Weller and Tony Hirst are also featured highly in the tables, but this can probably be explained by the fact that they work at the Open University).
Collusion: Perhaps Martin Weller, Tony Hirst and myself collude in linking to each other, in order to boost our rankings. After all we know each other and follow each other on Twitter. That could be a possibility – but we don’t.
Echoing: It may be, as was suggested in a second post on Martin Weller’s blog, that we are echoing each other’s views and the metrics simply reflect that. There may be some truth in that. You can see this in action in Martin Weller’s post on Web 2.0 – even if we’re wrong, we’re right, which followed a talk I gave on What If We’re Wrong? and my follow-up posts on Even If We’re Wrong, We’re Right and What If We’re Right? Now this reflecting on others’ views and adding new insights is, for me, part of the learning process. And although we’ve created something new in this process (we’re thinkers and not just linkers, as the saying goes) I appreciate that the metrics may give (undue?) weight to this.
Complementing: It may also be that the reason this blog is ranked so highly is that it complements the topics covered by Martin, Tony and others. This blog tends to reflect my background in working in IT Services and my interests in, say, Web accessibility – areas which tend not to be addressed in Martin or Tony’s blogs so much. So perhaps my ‘influence’ reflects this?
Being an early adopter: Although I wasn’t an early adopter of blogging (I started in November 2006) it may be that my high profile in the Open University reports simply reflects my presence across various Social Web technologies (Twitter, Friendfeed, etc.). This could mean that the survey is picking up on the technologies I’ve been using, rather than the content I publish on this blog.
Blog is outside the institution: This blog, as is the case for the blogs published by others mentioned in the report, is hosted outside my institution. Perhaps the high ranking is a manifestation of the hosting arrangements? Or perhaps the fact that we have chosen an external hosting service indicates early adoption of blogging (before our host institution provided a blogging service) and the survey is skewed by the presence of the early adopters? Or perhaps a willingness to use a third party service, when this may have been discouraged (it’s not open source; what about sustainability of the service? …), reflects a level of independence and willingness to take risks which the survey picks up on?
Social Web presence builds on peer-reviewed publications: I don’t just publish on Social Web services, such as blogs, Twitter, Slideshare, etc. I also write papers for peer-reviewed journals and conferences and invited papers for conferences. I then reference the papers on my blog and make slides (and sometimes video recordings) of the accompanying presentations available on services such as Slideshare, Vimeo and Google Video. Perhaps the amplification of peer-reviewed ideas and approaches via the Social Web helps to enhance the impact I have, which is being detected in the survey?
Writing style, linking style, etc.: It may be that my writing style, and the ways I try to cite relevant posts, Web resources and even tweets, contribute to the high ranking.
Relevant, Useful and Interesting Content: In an attempt to document the range of possibilities for this blog being identified as a significant influencer and hub for ideas related to ‘distance learning’ I should include the possibility that the content of the blog is felt to be relevant, timely, useful and interesting!
These are some possible explanations for my high ranking in the survey. But surely we simply need to find out what algorithms are being used. And, as Peter Murray-Rust has pointed out in a blog post on “Open Source increases the quality of science”, if we have access to the source code we will be better placed to spot any flaws in the code itself.
This argument reminds me of the time I attended a WWW conference and heard a researcher describe how his team had reverse engineered the algorithms used by a number of the global search engines. In the subsequent questions an engineer from Google said he wished the paper hadn’t been published, as Google would have to change the algorithms in order to prevent spammers from exploiting this knowledge. I suspect that we’d find institutions looking at ways to game Social Web metrics, especially if this became competitive. And as we know how important one’s position in the University league tables is to institutions, I suspect this would happen.
Is This A Useful Starting Point?
If we have to accept that there are likely to be various metrics covering use of the Social Web, the question may be whether the approach which is being taken at the Open University provides a useful starting point.
Andy Powell agrees with Martin that metrics on how the Social Web can impact scholarly activities are needed: “I think we want to get to the same place (some sensible measure of scholarly impact on the social Web)” but goes on to add “I disagree with you that this is a helpful basis on which to build.”
Is this glass, as Martin feels, half full or would you agree with Andy that it’s half empty? I’ll add a third alternative – I’ll finish off what’s in the glass while the rest of you are arguing! Or to put it another way, while the academics go off in pursuit of the perfect metric the marketing departments will make use of a variety of impact measurements in any case. I suspect we’ll find people in marketing departments asking “How can we use the Social Web to market our institutions, attract new students and new funding?” and then asking “How can we measure the impact – or ROI – of our presence in the Social Web?“. I’ll conclude by echoing Martin’s conclusions:
We’ve got to start somewhere – my take on this is that the output may have problems, but it’s a start. We could potentially develop a system focused on higher education, which is more nuanced and sophisticated than this. By analysing existing methodologies and determining problems with them (such as the three I’ve listed above) we could develop a better approach. I hold out hope that we can get interesting results from data analysis that reveals something about online scholarly activity.
And we should be analysing the existing methodologies in an open fashion. I hope my observations have contributed to this analysis.