UK Web Focus (Brian Kelly)

Innovation and best practices for the Web

Archive for December, 2012

Wishing You A Peaceful 2013

Posted by Brian Kelly on 31 December 2012

[Image: blog content in the shape of a dove]

My colleague Marieke Guy recently reminded me of Tagxedo – an online service which “lets you create shaped tag clouds from Twitter IDs, Delicious accounts, RSS feeds, Web sites and searches”.

As it’s New Year’s Eve I thought I’d provide this visualisation of the content of recent posts on the UK Web Focus blog.
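Under the hood, a shaped tag cloud starts from a plain word-frequency count. Here is a minimal sketch of that first step in Python (the stop-word list and example text are illustrative assumptions, not part of Tagxedo’s service):

```python
import re
from collections import Counter

# Illustrative stop-word list; a real service would use a much fuller one.
STOP_WORDS = {"the", "a", "an", "of", "and", "to", "in", "for", "on", "is"}

def tag_weights(text, top_n=10):
    """Return the top_n most frequent words, weighted relative to the most common."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOP_WORDS)
    if not counts:
        return {}
    max_count = counts.most_common(1)[0][1]
    return {w: c / max_count for w, c in counts.most_common(top_n)}

# A renderer would then map each weight to a font size inside the chosen shape.
weights = tag_weights("open access and open content support open practices")
```

A service such as Tagxedo then lays the weighted words out within an arbitrary outline, such as the dove shown above.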

Here’s looking forward to a peaceful 2013.

Posted in General | Leave a Comment »

2012 in review

Posted by Brian Kelly on 30 December 2012

The stats helper monkeys prepared a 2012 annual report for this blog.

Here’s an excerpt:

19,000 people fit into the new Barclays Center to see Jay-Z perform. This blog was viewed about 90,000 times in 2012. If it were a concert at the Barclays Center, it would take about 5 sold-out performances for that many people to see it.
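The comparison is simple division rounded up to whole sell-out shows; the figures come from the excerpt above:

```python
import math

views = 90_000      # blog views in 2012, from the report excerpt
capacity = 19_000   # Barclays Center capacity, from the excerpt
sold_out_shows = math.ceil(views / capacity)  # 4.7… rounds up to 5
```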

Click here to see the complete report.

Posted in Blog, blog-summary | Leave a Comment »

Importance of Social Media for Finding New Opportunities

Posted by Brian Kelly on 27 December 2012

The recent post which summarised the Announcement: UKOLN – Looking Ahead was based on the news of the cessation of UKOLN’s core funding after 31 July 2013. The announcement concluded:

From August 2013, we will continue to build on this reputation and we very much look forward to working with you again in the future.

In order to support UKOLN staff in exploiting new opportunities I recently gave a training session on “Managing Your Digital Profile“. In the talk I described the value of social media in developing relationships with potential new partners, co-authors and funders which can be of value in one’s current job as well as in finding new jobs and opportunities.

During the session I was asked if there was one key service to make use of. I highlighted the importance of LinkedIn and provided examples of effective uses of LinkedIn. Just before Christmas @suebecks alerted me to a post entitled For job recruiters, Monster out, LinkedIn in. This post provided evidence of the ways in which LinkedIn is being used:

LinkedIn, the biggest professional-networking website, got into the field early with the introduction of Recruiter in 2008. The service lets headhunters search its more than 187 million profiles and contact potential candidates.

Since last year, Adobe has found more than half its new hires through LinkedIn. Adobe, the biggest graphic-design software company, uses job boards to fill only about 5% of openings.

In the session I went on to describe how I felt it was a mistake to think there was a single key service to use. I argued that there were a range of services which provided different functions and were used by different communities. I went on to describe how researchers could find value in claiming a Google Scholar profile and providing access to their research publications using services such as ResearchGate, as well as claiming an ORCID ID.

I was asked if Facebook had a role to play. I described how this would relate to the personal ways in which one uses the service – but mentioned that Facebook is the third most important referrer of traffic to this blog. In addition I suggested that Facebook may have a role to play in finding new opportunities, and illustrated this by showing how a Google search for “Facebook Bath jobs” found a Facebook page for jobs at Future Publishing. The potential relevance of Facebook for job-seekers was highlighted in the article For job recruiters, Monster out, LinkedIn in:

Two-thirds of companies already use Facebook, the world’s largest social-networking service, to find recruits using the site’s friend-finding search function, according to a June survey of more than 1,000 human resources professionals by recruiting software maker Jobvite. Fifty-four percent use micro-blogging service Twitter to learn about potential candidates’ views and interests, the survey found.

The article then went on to suggest new developments we may see for people looking for new opportunities:

The next challenge is to develop advanced tools that find greater detail on candidates from more social networks, says Brian O’Malley, a general partner at Battery Ventures. His firm has invested in social job-search startup Entelo, which trawls Twitter, Google’s Google+ and other sites, using proprietary algorithms to find candidates for specific positions and predict who among them may be open to offers.

Can you afford not to make use of social media if you are looking for new business opportunities in the future?

Note that, as mentioned above, the slides on “Managing Your Digital Profile” are available on Slideshare and embedded below:

View Twitter conversation from: [Topsy]

Posted in General, Social Networking | 5 Comments »

Christmas Future: “Current monopoly of HE will be lost & just [a] few universities will survive”

Posted by Brian Kelly on 24 December 2012

The Ghosts of Christmas Past

A year ago, on 29 December 2011, I gave My Predictions for 2012. The post began “How will the technology environment develop during 2012? I’m willing to set myself up for a fall by outlining my predictions for 2012 :-)” To be honest the predictions were fairly predictable:

Tablet Computers …

After a couple of years in which use of smart phones, whether based on Apple’s iOS or Google’s Android operating system, became mainstream for many when away from the office, 2012 will see use of tablets becoming mainstream, with competition from vendors of Android devices continuing to bring down prices for those reluctant to pay a premium for an iPad.

Once the new term starts we’ll see increased numbers of students who received a tablet PC for Christmas making use of them, not only for watching videos and listening to music in their accommodation, but also in lectures. As well as for note-taking, the devices, together with smart phones, will be used for recording lectures. In some cases this will lead to concerns regarding ownership and privacy infringements, but students will argue that they are paying for their education and should be entitled to time-shift their lectures. Since it will be difficult to prevent students from making such recordings, lecturers will start to encourage the practice and will seek to develop an understanding of when comments made during lectures and tutorials should be treated as ‘off-the-record’.

Open Practices …

Such lecturers will be providing one example of an ‘open practice’. Such encouragement of recording or broadcasting lectures will become the norm in several research areas, with organisers of research conferences acknowledging that they will need to provide an event amplification infrastructure (including free WiFi for participants, an event hashtag, and live streaming or recording of key talks) in order to satisfy the expectations of those who actively participate in research events.

Such open practices will complement more well-established examples of openness including open access and open content, such as open educational resources. We’ll see much greater use of Creative Commons licences, especially licences which minimise barriers to reuse.

Social Applications …

Social applications will become ubiquitous, although the term may be rebranded in order to avoid the barrier to use faced by those who regard the term ‘social’ as meaning ‘personal’ or ‘trivial’. Just as Web 2.0 was rebranded as the Social Web and the Semantic Web as Linked Data, we shall see such applications being marketed as collaborative or interactive services.

Social networking services will continue to grow in importance across the higher education sector. However the view that the popularity of such services will be dependent on conformance with particular development criteria (open source and distributed) or ownership criteria (must not be owned by a successful multi-national company) will be seen to be of little significance. Rather than a growth in services such as Diaspora, we will see Facebook continue to develop (with its use by organisations helped by mandatory legal requirements regarding conformance with EU privacy legislation, described in a post on 45 Privacy Changes Facebook Will Make To Comply With Data Protection Law). In addition to Facebook, Twitter and Google+ will continue to be of importance across the sector.

Learning and Knowledge Analytics ….

The ubiquity of mobile devices coupled with greater use of social applications as part of a developing culture of open practices will lead to an awareness of the importance of learning and knowledge analytics. Just as in the sporting arena we have seen huge developments in using analytic tools to understand and maximise sporting performances, we will see similar approaches being taken to understand and maximise intellectual performance, in both teaching and learning and research areas.

With just one of the predictions being more speculative:

Collective Intelligence

Just as the combination of developments will help us to have a better understanding of intellectual performance, so too will these developments help in the growth of Collective Intelligence, described in Wikipedia as the “shared or group intelligence that emerges from the collaboration and competition of many individuals and appears in consensus decision making in bacteria, animals, humans and computer networks”. The driving forces behind Collective Intelligence will be the global players which have access to large volumes of data and the computational resources (processing power and storage) to analyse the data.

However rather than simply presenting a list of predictions the post went on to describe how “a greater challenge is being able to demonstrate that such predictions have come true. How might we go about deciding, in December 2012, whether these predictions reflect reality?“.

The methodology used to support the predictions of technological developments was one used to support the JISC Observatory and described in more detail in a paper on “What Next for Libraries? Making Sense of the Future” which was presented at EMTACL12, an international conference on Emerging Technologies in Academic Libraries held in Trondheim, Norway on 1-3 October 2012.

The Ghosts of Christmas Present

In this post I will not go into detail on the validity of the predictions. The importance of tablet computers and social applications should be self-evident whilst, as described in a post on Institutional Readiness for Analytics – practice and policy, CETIS have been pro-active in the areas of learning and knowledge analytics, having recently published a series of briefing papers on analytics. The prediction on collective intelligence was intended to be more speculative, so perhaps discussion would be best focussed on open practices.

However, in retrospect all of the predictions were based on an assumption that evidence would demonstrate the value of technological developments for higher education. Although the paper “What Next for Libraries? Making Sense of the Future” highlighted the need to distinguish between invention, innovation and improvements, there was an assumption that technological developments would continue to enhance the value of higher education. But is this a valid assumption? And what if other developments – economic, political, demographic, etc. – undermine the relevance of technical developments?

The Ghost of Christmas Yet To Come

These questions came to mind earlier today when I saw the following tweet from @phil_batty, the editor at large for Times Higher Education (@timeshighered) & editor of the World University Rankings (@THEWorldUniRank):

Current monopoly of HE will be lost & just few universities will survive RT @Lennie_SW: The Perfect Storm for Unis:

The post on The Perfect Storm for Universities was published on 3 December 2012 by Dr Stefan Popenici, an academic, public speaker, author and international consultant with extensive experience in leadership in the global higher education arena including the United States of America, the United Kingdom, Israel, Austria, Canada, the People’s Republic of China, France, Italy, Hungary, Philippines, Serbia, the Republic of Moldova, Portugal, Spain, Poland, Romania, Belgium, Georgia.

The post begins:

Even if universities may look well on the surface there is an increasing (and justified) concern that all will change soon. New data and analysis increase the anxiety that the current monopoly of higher education will be lost and just few universities will survive. No one knows which, how many or even if any university will have the chance to celebrate the middle of this century. Deafened by the noise of various bureaucrats and mediocre academics interested to say only what their masters like to hear, some universities and academic groups struggle to see beyond fads and slogans what is shaping the future that will change their existence. This hidden uneasiness is justified. An increasing number of disruptive factors – adding to the obvious and massive impact of Internet and online education – already are changing the landscape for higher education: the significant increase of youth isolation and marginalization, graduate unemployment and persistent underemployment, a concerning economic forecast of a constant slowdown of global growth (with implications for numbers of international students) and issues evolving from the global ageing population (and implications on lifelong learning strategies and numbers of local students). There is even more on the horizon and – while teaching and learning are still organized within university walls by models designed in early 1960s – the pace of change is accelerating.

I’d recommend that those who have an interest in the future of higher education should read this post. The (rather long) post concludes:

In the middle of this storm, universities that continue to glorify mediocrity and impose compliant thinking are condemned to perish. These victims of the storm may still consider that is safer to shut their eyes and stay comfortable within the limits of the status quo. After all, this is what has worked well for the last century. However, on the day after the storm, higher education will be anything but comfortable. The era of compliance and contentment is over!

It’s interesting to see how the damning conclusions are targeted at institutions which “glorify mediocrity and impose compliant thinking“. If that reflects the current culture within your organisation, I’d be worried.

It will be interesting to start observing signals of a future for higher education in which the “current monopoly of HE will be lost & just a few universities will survive”. As it’s Christmas Eve I’ll not comment on such signals today, but may revisit this post in a year’s time. To update the comment I made last year “a greater challenge is being able to demonstrate that such predictions have come true. How might we go about deciding, in December 2013, whether these predictions reflect reality?“.

Merry Christmas!

View Twitter conversation from: [Topsy]

Posted in General | 2 Comments »

Announcement: UKOLN – Looking Ahead

Posted by Brian Kelly on 21 December 2012

An official announcement was published yesterday on the UKOLN home page:

Following nearly 20 years of supporting Jisc innovation activities, UKOLN is now looking ahead to new challenges. In response to the Wilson review of Jisc, the organisation has confirmed that it will only provide core funding to the UKOLN Innovation Support Centre, up to July 2013 but not beyond.

Since Jisc’s inception in 1993, UKOLN has worked collaboratively to support the development and use of digital libraries and digital information management in many innovative areas. The decision to cease funding in no way reflects on the contribution of UKOLN to this agenda for education and research, but rather the new ways in which Jisc innovation activity will need to be taken forward into the future. There will be more targeted innovation where Jisc works directly with its stakeholders and although the scale of activity will be reduced, there will be new innovation taking place in line with the changes in the environment.

During these years, UKOLN has established a substantive global reputation, and has led innovation work to develop information environments, repositories, resource discovery, metadata registries, metadata standards, collection level descriptions and software tools. We are currently supporting innovation in areas such as research information management, repository metadata and infrastructure, and resource discovery. We continue to support and facilitate communities of practice, notably Web managers and software developers working in higher education. UKOLN has also published the Ariadne Web journal since 1996.

We would like to take this opportunity to thank the many people with whom we have worked closely, for your participation and engagement in our Innovation Support Centre activities. While the Innovation Support Centre will cease operating after July 2013, UKOLN will continue and as the organisation enters a new phase, it is a time to reflect on what we’ve achieved. We’d be interested to hear from you about how UKOLN’s work has made an impact. From August 2013, we will continue to build on this reputation and we very much look forward to working with you again in the future.

Dr Liz Lyon, Director UKOLN
Paul Walk, Deputy Director UKOLN

Note that a similar announcement has been published by CETIS. I think it is clear that 2013 will provide interesting challenges!

Merry Christmas.

View Twitter conversation from: [Topsy]

Posted in General | Tagged: | 9 Comments »

Commercial Exploitation of Content and the Instagram Story

Posted by Brian Kelly on 20 December 2012

Licence Conditions for this Blog

[Image: Creative Commons licence]

On 12 January 2011 I described how Non-Commercial Use Restriction [had been] Removed From This Blog. This post explained how:

The BY-NC-SA licence was chosen [in 2005] as it seemed at the time to provide a safe option, allowing the resources to be reused by others in the sector whilst retaining the right to commercially exploit the resources. In reality, however, the resources haven’t been exploited commercially and increasingly the sector is becoming aware of the difficulties in licensing resources which excludes commercial use, as described by Peter Murray-Rust in a recent post on “Why I and you should avoid NC licence“.

I have therefore decided that from 1 January 2011 posts and comments published on this blog will be licenced with a Creative Commons Attribution-ShareAlike 2.0 licence (CC BY-SA).

Later that year, on 24 October 2011 in a post entitled My Activities for Open Access Week 2011 I described how the licence conditions had been liberalised from CC-BY-SA to CC-BY. The post provided the background to the changes of the licence conditions:

… the share-alike clause can also provide difficulties in allowing others to reuse the content. Although I would encourage others to adopt a similar Creative Commons licence I realise that this may not also be achievable. So rather than requiring this as part of the licence, I will now simply encourage others who use posts published on this blog to make derived works available under a Creative Commons licence and limit the licence conditions to a CC-BY licence which states that:

You are free:

    • to copy, distribute, display, and perform the work
    • to make derivative works
    • to make commercial use of the work

Under the following conditions:

    • Attribution — You must give the original author credit.

These developments reflect a more general move towards the minimisation of barriers to the reuse of content, not just by others in the public sector but by the wider community. Such policies can help to stimulate growth in the economy by ensuring that resources are spent on development activities and not on negotiating licences. Such approaches are well-established in the software development environment, in which open source software products are freely available for everyone to use (large companies, such as Microsoft, thus benefit from using open source software products such as the Apache Web server). In the area of content, Peter Murray-Rust has argued that “Scientists should NEVER use CC-NC” and explained why.

Commercial Exploitation of Content

Whilst there is a growing, but by no means universal, understanding of the benefits of allowing commercial exploitation of content, moves towards licences which grant commercial companies the right to commercially exploit content uploaded to their services tend to generate anger, as we have seen from the recent changes to the terms and conditions for users of the Instagram photo-sharing service. “Instagram makes you the product” argued Josh Halliday in The Guardian whilst TechCrunch reported how “The Backlash Continues: Zuck’s Sis Doesn’t Seem To Like The Instagram Changes Either“.

But there is another angle to this story. Another TechCrunch article entitled “Quit Instagram, They Said. They’re Selling Your Photos, They Said.” poked fun at the outrage whilst in an article entitled “No, Instagram can’t sell your photos: what the new terms of service really mean” The Verge provided a more measured summary of the changes in the terms and conditions.

Yesterday Instagram responded to the storm in the blogosphere with a post, Thank you, and we’re listening, in which they acknowledged mistakes in announcing the changes. The post addresses some of the concerns which have been raised:

Ownership Rights Instagram users own their content and Instagram does not claim any ownership rights over your photos. Nothing about this has changed. We respect that there are creative artists and hobbyists alike that pour their heart into creating beautiful photos, and we respect that your photos are your photos.

Privacy Settings Nothing has changed about the control you have over who can see your photos. If you set your photos to private, Instagram only shares your photos with the people you’ve approved to follow you. We hope that this simple control makes it easy for everyone to decide what level of privacy makes sense.

The real change related to how Instagram would seek to make money, both to cover the costs of providing a global photo-sharing service, as well as to make money for the company:

Advertising on Instagram From the start, Instagram was created to become a business. Advertising is one of many ways that Instagram can become a self-sustaining business, but not the only one. Our intention in updating the terms was to communicate that we’d like to experiment with innovative advertising that feels appropriate on Instagram. Instead it was interpreted by many that we were going to sell your photos to others without any compensation.

Personally I’m quite happy to make use of a service such as Instagram for free. I also acknowledge that the company is neither a charity nor a public-sector organisation and has a legitimate need to make money. It has provided notification of changes to its terms and conditions which clarifies how it will seek to make money from “innovative advertising that feels appropriate on Instagram“.

I am also willing for others to commercially exploit content which I have released under a Creative Commons licence which does not exclude commercial use. I wonder if those who are unhappy with Instagram’s terms and conditions will apply the same arguments to content released under a CC-BY licence? Yes, such content could be used in ways you may not approve of. Accept this – and avoid applying discriminatory licence conditions. Open source software developers learnt this lesson long ago.

I’ll conclude by suggesting that anyone who wishes to respond to this post with the “If you’re not paying for the product, you are the product” cliché should first read the Powazek post on I’m Not The Product, But I Play One On The Internet, which describes how:

There are several subtextual assumptions present in “you are the product” I think are dangerous or just plain wrong that I’m going to attempt to tease out here. Many of these thoughts have been triggered by Instagram’s recent cluelessness, but they’re not limited to that. I also want to be clear that I’m not arguing that everything should be free or that we shouldn’t examine the business plans of the services we consume. Mostly I’m just trying to bring some scrutiny to this over-used truism.

Many thanks to Wilbert Kraan for alerting me to this post last night. The post could, of course, have pointed out the absurdity of applying the cliché to use of Creative Commons content.

View Twitter conversation from: [Topsy]

Posted in Finances, openness | Tagged: | 4 Comments »

‘Does He Take Sugar?’: The Risks of Standardising Easy-to-read Language

Posted by Brian Kelly on 19 December 2012

Back in September 2012 in a post entitled “John hit the ball”: Should Simple Language Be Mandatory for Web Accessibility? I described the W3C WAI’s Easy to Read activity and the online symposium on “Easy to Read” (e2r) language in Web Pages/Applications.

The article highlighted the risks of mandating easy-to-read language and, following subsequent discussions with Alastair McNaught of JISC TechDis, led to a submission to the online symposium. Although reviewers of the paper commented that the submission provided “very sound ideas about how to approach e2r on level with other accessibility issues” and that “the argument that the user perspective needs to be taken into account for discussing and defining ‘easy to read’ makes a lot of sense”, the paper was not accepted. Since the reviewers also suggested that “The authors should provide more material on how this step could be realized” and “More background on BS 8878 and a justification should be added”, we decided to submit an expanded version of our paper to the current issue of the Ariadne Web magazine.

In subsequent discussions when preparing the paper I came across Dominik Lukeš, Education and Technology Specialist at Dyslexia Action, who has published research in the areas of language and education policy. Dominik’s blog posts, in particular a post on The complexities of simple: What simple language proponents should know about linguistics, were very relevant to the arguments which Alastair and I had made in our original paper. I was therefore very pleased when Dominik agreed to contribute to an updated version of our paper. The paper, ‘Does He Take Sugar?’: The Risks of Standardising Easy-to-read Language, has been summarised by Richard Waller in his editorial for the current issue of Ariadne:

In “Does He Take Sugar?”: The Risks of Standardising Easy-to-read Language, Brian Kelly, Dominik Lukeš and Alistair McNaught highlight the risks of attempting to standardise easy-to-read language for online resources for the benefit of readers with disabilities. In so doing, they address a long-standing issue in respect of Web content and writing for the Web, i.e. standardisation of language. They explain how in the wake of the failure of Esperanto and similar artificial tongues, the latest hopes have been pinned on plain English, and ultimately standardised English, to improve accessibility to Web content. Their article seeks to demonstrate the risks inherent in attempts to standardise language on the Web in the light of the W3C/WAI Research and Development Working Group (RDWG) hosting of an online symposium on the topic. They describe the aids suggested by the RDWG such as readability assessment tools, as well as the beneficiaries of the group’s aims, such as people with cognitive, hearing and speech impairments as well as with readers with low language skills, including readers not fluent in the target language. To provide readers further context, they go on to describe earlier work which, if enshrined in WCAG Guidelines would have had significant implications for content providers seeking to comply with WCAG 2.0 AAA. They interpret what is understood in terms of ‘the majority of users’ and the context in which content is being written for the Web. They contend that the context in which transactional language should be made as accessible to everyone as possible differs greatly from that of education, where it may be essential to employ the technical language of a particular subject, as well as figurative language, and even on occasions, cultural references outside the ordinary. 
They argue that attempts to render language easier to understand, by imposing limitations upon its complexity, will inevitably lose sight of the nuances that form part of language acquisition. In effect they supply a long list of reasons why the use and comprehension of language is considerably more complex than many would imagine. However, the authors do not by any means reject out of hand the attempt to make communication more accessible. But they do highlight the significance of context. They introduce the characteristics that might be termed key to Accessibility 2.0 which concentrate on contextualising the use of content as opposed to creating a global solution, instead laying emphasis on the needs of the user. They proceed to detail the BS 8878 Code of Practice 16-step plan on Web accessibility and indicate where it overlaps with the WCAG guidelines. Having provided readers with an alternative path through the BS 8878 approach, they go on to suggest further research in areas which have received less attention from the WCAG guidelines approach. They touch upon the effect of lengthy text, figurative language, and register, among others, upon the capacity of some readers to understand Web content. The authors’ conclusions return to an interesting observation on the effect of plain English which might not have been anticipated – but is nonetheless welcome.

The article is of particular relevance since it brings home very clearly the limitations of WAI’s approach to Web accessibility and the belief that universal accessibility can be obtained by simply following a set of rules documented in the WCAG guidelines. As we’ve explained in the article, this isn’t the case for the language used in Web pages. However, although the approach developed by WAI has significant flaws, the BS 8878 Code of Practice enables guidelines developed by WAI and other organisations to be used in a more pragmatic fashion. We hope that the experiences in using this Code of Practice described by EA Draffan in her talk on Beyond WCAG: Experiences in Implementing BS 8878 at the IWMW 2012 event help in promoting greater use of this approach, including use of the standard to address the readability of Web pages.
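The symposium call mentioned readability assessment tools, which typically build on classic metrics such as the Flesch Reading Ease score. A minimal sketch in Python illustrates the idea; the syllable counter here is a crude illustrative heuristic, not how production tools work:

```python
import re

def count_syllables(word):
    """Crude heuristic: count runs of vowels, dropping one for a silent final 'e'."""
    runs = re.findall(r"[aeiouy]+", word.lower())
    n = len(runs)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def flesch_reading_ease(text):
    """Flesch Reading Ease: higher scores mean easier text (simple prose scores 90+)."""
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

simple = flesch_reading_ease("The cat sat on the mat.")
dense = flesch_reading_ease("Standardisation of linguistic complexity inevitably eliminates nuance.")
```

Such scores capture only sentence and word length, which is precisely the limitation the article discusses: context, register and figurative language are invisible to them.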

Posted in Accessibility | Tagged: | 13 Comments »

Reflections on the “Great Dropbox Space Race”

Posted by Brian Kelly on 17 December 2012

The Great Dropbox Space Race

Back on 15 October 2012 the Dropbox blog announced The Great Dropbox Space Race!. The post described how:

Space Race is a chance for you to support your school and compete against other schools for eternal glory (by eternal glory we mean up to 25 GB of free Dropbox space for two years).

Everyone who signed up with an institutional email address for a bona fide educational institution received a minimum of 3 GB of storage for two years. Additional storage, up to 25 GB for two years, was available based on the number of people who signed up from the institution.

The space race is now over. The leaderboard shows the top ten institutions which gained the largest amount of free disk space in the cloud for members of the institution.

No.  Institution                  “Space Racers”   Points
 1   University College London             4,020   10,977
 2   University of Cambridge               4,129   10,810
 3   University of Oxford                  3,999    9,817
 4   Imperial College London               3,566    9,284
 5   University of Edinburgh               2,545    6,662
 6   University of Southampton             2,515    6,429
 7   University of Manchester              2,025    6,224
 8   University of Nottingham              2,208    6,016
 9   Open University                       1,503    4,431
10   University of Warwick                 1,684    4,325
     TOTAL                                28,194

There is also a table for the top 100 institutions, which goes down as far as Dartington College of Arts which has 134 “space racers” with a total of 428 points.

Note, incidentally, that the number of points isn’t directly related to the number of users, as additional points could be scored in other ways, including reading the getting started manual!


Tweet from Plymouth

I have to admit that I am a fan of Dropbox. Its ease of use makes shipping files across my desktop computers and mobile devices trivial. I was therefore hopeful that there would be significant take-up of the service across the University of Bath, which would increase my storage capacity. However, after the closure of the space race, the University was only in 26th place. Perhaps we should have emulated the approach taken at the University of Portsmouth and been more pro-active in encouraging take-up of the offer.

The global league table appears surprising, with no UK institutions and only one US institution in the top ten. The top UK institution, UCL, is in 68th position in the global table.

No.  Country      Institution                                          "Space Racers"   Points
 1   Singapore    National University of Singapore                         20,406       42,354
 2   Taiwan       National Taiwan University                               16,485       38,044
 3   Italy        Politecnico di Milano                                    14,359       32,017
 4   Singapore    Nanyang Technological University                         14,875       31,355
 5   Mexico       Tecnológico de Monterrey                                 13,235       30,550
 6   Netherlands  Delft University of Technology                           13,226       30,511
 7   Brazil       Universidade de São Paulo                                13,469       28,307
 8   USA          University of California Berkeley                        12,126       28,214
 9   Ukraine      Sumy State University                                     7,303       27,007
10   Germany      Rheinisch Westfalische Technische Hochschule Aachen      10,038       25,777
     TOTAL                                                                135,522

What might the apparent low take-up of this offer tell us? It may be that institutions elsewhere in the world have been more pro-active in encouraging take-up of the service; it may simply be that institutions currently provide sufficient disk space for their staff and students; or it may be that institutions do not want their staff and students to make use of cloud-based storage services due to concerns regarding security, privacy and data protection.

These are legitimate issues, although when I hear people say “We can’t use Dropbox – it’s based in the US” I assume they are referring to data protection legislation. However there seems to be a lack of awareness of the Safe Harbor Agreement (a streamlined process for US companies to comply with the EU’s Directive 95/46/EC on the protection of personal data) and Dropbox’s announcement on 14 February 2012 that they had signed up to the Safe Harbor Agreement.

But what is being lost by not using such services? The 28,194 users at the top ten UK institutions are being provided with a minimum of 82.6 terabytes (according to this conversion table), or up to 688 terabytes if they each receive the maximum allowance of 25 GB. According to a Wikipedia page which provides a List of Storage hierarchy media with costs, disk storage provided by a reliable cloud service costs around $140 per terabyte per month. At the minimum allowance of 3 GB per user, the commercial cost of this storage would appear to be $11,564 per month, or $277,536 over the two years for which the free deal is available; at the maximum allowance of 25 GB each, the figure would be over eight times higher.


I’ll be the first to admit that my back-of-envelope calculations are likely to be flawed. Pat Parslow suggested I take a look at Amazon’s calculator to provide a sanity check. I would therefore invite others to provide feedback on the estimates of the disk storage which Dropbox are offering and do the sums of the costs of providing similar disk storage over two years within the institution, based on the many thousands of users listed in the top 100 UK institutions who have signed up to the Dropbox offer.
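For anyone who wants to check or extend the back-of-envelope sums, here is a short sketch of the calculation. The 1 TB = 1024 GB conversion and the flat $140 per terabyte per month rate are the figures quoted above; this is an illustration of the arithmetic, not a proper costing.

```python
# Back-of-envelope check of the Dropbox storage costs discussed above.
USERS = 28_194           # "space racers" across the top ten UK institutions
RATE_PER_TB_MONTH = 140  # USD per terabyte per month, from the Wikipedia list
MONTHS = 24              # the free deal runs for two years

def commercial_cost(gb_per_user: float) -> tuple[float, float]:
    """Return (terabytes, total USD cost over the deal) for a per-user allowance."""
    terabytes = USERS * gb_per_user / 1024
    return terabytes, terabytes * RATE_PER_TB_MONTH * MONTHS

tb_min, cost_min = commercial_cost(3)   # minimum allowance: 3 GB each
tb_max, cost_max = commercial_cost(25)  # maximum allowance: 25 GB each
print(f"Minimum: {tb_min:.1f} TB, ${cost_min:,.0f} over two years")
print(f"Maximum: {tb_max:.1f} TB, ${cost_max:,.0f} over two years")
```

Plugging in different per-user allowances, rates or user counts (for example the full top-100 list) only requires changing the constants.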

But in addition to the financial aspects, even if the service appears to be more popular outside the UK and US, the number of people who have subscribed to the service suggests that there will be a need to provide education on best practices for its use, including highlighting the risks. If you are a researcher, I would suggest you do not allow sensitive research data to be hosted on services hosted in the US, even if the company hosting the data has signed up to the Safe Harbor Agreement.

But if a key aspect regarding use of Dropbox relates to digital literacy and risk assessment, might there be a need to ask whether the popularity of Dropbox in countries such as Taiwan and Singapore suggests that the company might be well-placed to carry out espionage on research activities in these countries? Might Dropbox be a cost-effective way of the US intelligence services to monitor activities in universities around the world? Or am I being paranoid?

View Twitter conversation from: [Topsy]

Posted in General | Tagged: | 2 Comments »

Performance Analytics: Twitter, 20Feet and Crowdbooster

Posted by Brian Kelly on 14 December 2012

CETIS Series of Analytics Briefing Papers

Adam Cooper, CETIS Director, recently published a post in which he tried to answer the question What does “Analytics” Mean? (or is it just another vacuuous buzz word?). In the post Adam asks the question:

But is analytics like cloud computing, is the word itself useful? Can a useful and clear meaning, or even a definition, of analytics be determined?

Adam concludes that “the answer is ‘yes’” and describes how this definition is explained in a CETIS briefing paper on What is Analytics? Definition and Essential Characteristics. Adam’s post also introduces the CETIS series of briefing papers on Analytics, which includes papers on Analytics; what is changing and why does it matter?, Analytics for the whole institution, Analytics for Learning and Teaching, Legal, Risk and Ethical Aspects of Analytics in Higher Education, Analytics for Understanding Research and A Framework of Characteristics for Analytics.

The What is Analytics? Definition and Essential Characteristics briefing paper provides the following useful pithy definition:

Analytics is the process of developing actionable insights through problem definition and the application of statistical models and analysis against existing and/or simulated future data.

The CETIS work in this area has a focus on learning analytics, which reflects their core area of interest and expertise. However there are other areas of interest which are of relevance to the higher education sector. In addition there are approaches which have been taken to analytics beyond our sector which may provide useful insights.

Beyond Learning Analytics

Adam Cooper’s blog post concludes by encouraging people to focus on the applications of use of analytics, rather than seeking formal definitions:

Rather than say what business analytics, learning analytics, research analytics, etc is, I think we should focus on the applications, the questions and the people who care about these things.

I would agree with this approach. An example of the possible dangers in focussing on the terms being used and the associated definitions can be seen in the discussions surrounding altmetrics. As highlighted by Jean Liu in a post on Metrics and Beyond @ SpotOn London 2012:

A commonly held assumption about alt-metrics is that they are meant to replace traditional measures of research impact like citation counts. Actually most in the field (us included) think that alt-metrics should complement traditional metrics, not eliminate them altogether.

Although I recently commented on the need to understand the limits of altmetrics in a post on Understanding the Limits of Altmetrics: Slideshare Statistics, in this post I want to focus on what I will refer to as performance analytics.

Performance Analytics in Sport and Hobbies

Wikipedia defines performance metrics as “a measure of an organization’s activities and performance. Performance metrics should support a range of stakeholder needs from customers, shareholders to employees“. We can see how such approaches can be applied in areas such as sport from the article published in the Guardian in August 2012 which described how Manchester City are to open the archive on player data and statistics.

On a personal level in a post entitled Personal Perspectives on How Metrics Can Influence Practice I described the judge’s marks for last year’s rapper sword dancing competition (in which we came bottom of our group). The evidence of our low marks led to a decision to change our approaches to the dance, to the structure of the team and our weekly practices. The scores provided us with ‘actionable insights‘ into our performance which led to subsequent changes in behaviour.

I have noticed a growing interest in performance analytics across the people I follow on Twitter, with a number of people in my network having purchased a Fitbit gadget and, judging by the tweets I see in my Twitter stream, the Runkeeper app on their iPhone or Android device. If you’re looking for a Christmas present for a gadget-minded friend who is starting to think about their fitness, the Digital Trends Web site provides suggestions for Eight fitness gadgets that actually work.

Twitter Analytics

If metrics can provide insights into real world activities such as football, sword dancing, running and walking, then their relevance in a digital environment would appear obvious.

But should one care about performance analytics for activities such as use of Twitter?

John Spencer in a post entitled Twitter Isn’t a Tool has explained how he is unhappy with “organizations inquir[ing] about the best ways to maximize Twitter for professional development“. For John “Twitter isn’t a commodity“. Rather “Twitter is where I go when I want to talk to teacher friends … when I want to hang out with some teachers with my same quirky sense of humor [and] where people challenge my groupthink and push me to rethink my practice“.

@mr_brett_clark was in agreement: “I often describe Twiter like a party“. Curt Ress had a similar view: “I often see Twitter as a cocktail party. Lots of people having quick exchanges amidst a lot of noise. But through time, relationships are formed and real learning happens.“

But although Twitter may be an informal conversational medium which can enhance informal learning, I feel that others may agree with this characterisation and yet still find value in using analytics to “develop actionable insights”. After all, although my hobby, rapper sword dancing, is a fun activity, there is widespread, though by no means universal, agreement that the judging and the competitive nature of the competitions can improve standards.

And, of course, beyond Twitter’s role in informal learning and social intercourse, the tool is also being used to support formal institutional activities, as can be seen from the survey in August 2012 which showed that there have been almost 50,000 tweets from official Russell Group university Twitter accounts, which have over 300,000 followers.


Crowdbooster: Impressions for Nov 2012

The Crowdbooster service allows you to:

Analyze the performance of your individual tweets and posts with an interactive graph and table to quickly understand what’s working. Customize the date range to understand the impact of your campaign. Drill down to view engagement and reach metrics on Facebook and Twitter.

Use of the free version of the service is illustrated in the accompanying screenshot.

As can be seen, you can view the potential impact of tweets on a daily, weekly or monthly basis, over all your tweets or for a customised range.

Crowdbooster: Nos. of followers for Nov 2012

As an example, I have an interest in seeing how the initial announcement of the date of the IWMW 2013 event has been shared across Twitter.

It seems that there have been 11 retweets of the post, which have the potential to have been seen by 8.3K Twitter users. As Twitter users will know, this potential audience is unlikely to reflect reality. However, it does provide an indication of outreach, and the 11 retweets (and 2 conversations) are based on reality.
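Crowdbooster doesn’t document exactly how a “potential” figure like the 8.3K is derived, but a plausible sketch is simply to sum the follower counts of the accounts which retweeted the post. The account names and follower counts below are invented for illustration.

```python
# Hypothetical retweeters and their follower counts (in practice these would
# come from Twitter's API).
retweeters = {
    "alice": 5200,
    "bob": 450,
    "carol": 2650,
}

# "Potential reach": every follower of every retweeter might have seen the
# tweet. Overlapping audiences are double-counted, so this is an upper bound.
potential_reach = sum(retweeters.values())
print(f"{potential_reach / 1000:.1f}K potential impressions")  # → 8.3K potential impressions
```

This makes clear why such figures overstate the real audience: followers shared between retweeters are counted once per retweeter.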


The TwentyFeet Web site describes how:

TwentyFeet is an “egotracking” service that will help you keep track of your own social media activities and monitor your results. We aggregate metrics from different services, thus giving you the full picture of what happens around you on the web – all in one place.

The TwentyFeet service (also known as 20ft) provides a range of graphs which help to visualise one’s Twitter performance over time. These include:
20ft: Reputation for Nov 2012

  • Reputation influence: the numbers of followers gained and lost over a specified period together with the number of Twitter lists you are on.
  • Influence indicators: the number of mentions and retweets.
  • Conversations: including tweets, retweets and @ messages.
  • Followers analyses: the numbers following you, people not following back and the ratio of followers to following.
  • List analyses: the numbers of lists you own, the numbers of members of lists you own and the number of subscribers to one’s lists.
  • Additional information: the numbers of tweets you have favourited, the number of tweets posted, the total number of links posted and the total number of lists followed.
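As a sketch of how the “followers analyses” figures in the list above can be derived: given the set of accounts following you and the set you follow (which Twitter’s API exposes as ID lists), the derived numbers are simple set operations. The account names here are made up for illustration.

```python
# Hypothetical account IDs; real values would come from the Twitter API's
# followers and friends (following) lists.
followers = {"ukoln", "jisc", "cetis", "briankelly_fan"}  # accounts following you
following = {"jisc", "cetis", "mashable"}                 # accounts you follow

not_following_back = following - followers  # you follow them, they don't follow you
mutual = followers & following              # follow each other
ratio = len(followers) / len(following)     # followers-to-following ratio

print(sorted(not_following_back), sorted(mutual), round(ratio, 2))
# → ['mashable'] ['cetis', 'jisc'] 1.33
```

Services such as TwentyFeet presumably track figures like these over time to produce their graphs.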

Examples of TwentyFeet graphs for my numbers of followers in November 2012 are illustrated.

Business Models

The basic Crowdbooster service is available for free. As described on the pricing page, this can be used to analyse one Twitter and one Facebook account. A Professional account, costing $39/month, allows up to 10 accounts to be analysed, with the Business account, costing $99/month, allowing up to 30 accounts to be analysed. No additional functionality is available for the paid-for accounts, apart from access to a live chat and phone support service.

The basic TwentyFeet service is also available for free and can be used to analyse a single Twitter and Facebook account. However, users of the free service will find that the service sends a weekly tweet summarising the week’s performance, along the lines of:

My week on twitter: 40 retweets received, 1 new listings, 37 new followers, 78 mentions. Via:

Some people find such automated tweets irritating (with the tweet from TwentyFeet perhaps being regarded as boastful). It is possible to buy a subscription which can be used to disable the public notifications as well as provide various other benefits. A subscription costs $12.45 for 5 credits; however, it is not clear how long the credits last.


As mentioned previously, many Twitter users may well have no interest in their Twitter metrics. However, if you do have an interest, which service should you use? A simple answer would be to sign up for both. However, the real decision to be made is probably whether to use the free version of TwentyFeet and accept the weekly automated tweets from one’s account.

Power Twitter users should have no difficulty in filtering out tweets which are of no interest if they contain a well-formed and consistent string of characters – which is the case for the alert from TwentyFeet as well as services which post daily digests (“The foo Daily is out“) and FourSquare (“I’m at foo“). Back in 2009 Mashable published an article entitled Twitter Better: 20 Ways to Filter Your Tweet. More recently, posts on How to Filter out Noise from your Twitter Timeline and How to Filter Your Twitter Stream and a question on Quora which asked What tools can one use to filter one’s Twitter stream? highlighted some tools and techniques for Twitter management.
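Because these automated tweets contain a consistent string, the filtering described above can be done with a handful of patterns. A minimal sketch follows; the patterns are illustrative approximations, not the services’ exact formats.

```python
import re

# Patterns matching the consistent strings used by automated services
# (illustrative, not the services' exact wording).
NOISE_PATTERNS = [
    re.compile(r"^My week on twitter:"),  # TwentyFeet weekly summary
    re.compile(r"Daily is out"),          # automated daily-digest announcements
    re.compile(r"^I'm at "),              # check-in announcements
]

def is_noise(tweet: str) -> bool:
    """Return True if the tweet matches any known automated pattern."""
    return any(p.search(tweet) for p in NOISE_PATTERNS)

timeline = [
    "My week on twitter: 45 retweets received, 8 new followers, 108 mentions.",
    "Looking forward to IWMW 2013!",
    "I'm at Bath Spa station",
]
print([t for t in timeline if not is_noise(t)])
# → ['Looking forward to IWMW 2013!']
```

The same approach underlies most of the filtering tools mentioned above: match on the invariant part of the automated message and hide the rest.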

However, many users will not wish to use such advanced filtering techniques. Perhaps in response to the public Twitter alerts provided by TwentyFeet, Crowdbooster now provides a private email alert. A few days ago I received a message saying:

You gained 7 followers a day over the past week! (On average, you gain 2) View your follower growth now.

To reach the most people, schedule your tweets for 12PM today, 8PM today and 9PM today.

In light of the developments to Crowdbooster I have just withdrawn permissions for TwentyFeet to post to my Twitter stream. The last tweet from the service was published 30 minutes ago:

My week on twitter: 45 retweets received, 8 new followers, 108 mentions. Via:

For me, Crowdbooster provides the deeper understanding of how I use Twitter. I now know that my second most retweeted post ever was posted two years ago:

A classic for those who like spotting misuse of apostrophe’s – spotted in Bath charity shop.

It seems there are a lot of grammar pedants amongst my Twitter followers!

View Twitter conversation from: [Topsy]

Posted in Evidence, Twitter | Tagged: , , | 3 Comments »

Reflections on the “Top 10 Tips on How to Make Your Open Access Research Visible Online”

Posted by Brian Kelly on 13 December 2012

Top 10 Tips on How to Make Your Open Access Research Visible Online

Open Access

Yesterday I received an email which informed me that my contribution to the Jisc Inform online newsletter (issue 35, December 2012) had been published. The article on Top 10 Tips on How to Make Your Open Access Research Visible Online is based on a blog post originally published on the Networked Researcher blog, which was tweaked slightly and republished on the Jisc blog. The version published in the Jisc Inform newsletter includes a series of images to accompany each of the ten tips.

The tips were originally developed to accompany a series of presentations given at the universities of Exeter, Salford and Bath during Open Access Week. These presentations were based on the experiences gained in use of social media to help maximise access to peer-reviewed publications. In particular the tips documented the experiences of use of social media services such as blogs, Twitter and Slideshare to help maximise the readership of a paper entitled “A Challenge to Web Accessibility Metrics and Guidelines: Putting People and Processes First“.

The Complexities Behind the Tips

It is interesting to see how the advice initially given in a one-hour seminar can be distilled into a series of top tips. The sceptic may be dismissive of the value of reducing the complexities of open practices for researchers to a series of top tips. However, at the recent SpotOn 2012 conference, in sessions such as How to do Smart Journalism on Complex Science, the value of science writers being able to communicate complex scientific ideas in ways which can be understood by the general public was emphasised. The challenge, however, is to ensure that those with a deeper interest in the complexities are able to access resources which provide more in-depth discussions.

Slide on Slideshare statistics

In the case of the Top 10 Tips on How to Make Your Open Access Research Visible Online, more detailed information was provided in the slides of the original talk. In addition, as illustrated, the slides also contain links to further information. In the example shown, the large number of views of the slides during the week of the conference provides evidence of the value of being proactive in ensuring that the co-authors of the paper provided links to the presentation on their blog posts and Twitter channels.

The limitations of Slideshare statistics were mentioned, but the slide also contains a link to the usage statistics, which showed that the accompanying paper was, at the time, the most downloaded of my peer-reviewed papers deposited in the University of Bath repository this year.

In addition to the more detailed information provided in the slides during the presentation itself I expanded on a number of issues, including responding to questions raised during the talk. A post has been published on the JISC-funded Open Exeter blog about the Open Access Week @ Exeter which includes a series of videos of the invited presentations. The video of my talk is available on YouTube and embedded below. I hope this additional information complements the top 10 tips published in Jisc Inform.

View Twitter conversation from: [Topsy]

Posted in openness, Repositories | 6 Comments »

Good News From the UK Government: Launch of the Open Standards Principles

Posted by Brian Kelly on 11 December 2012

In April 2012 I wrote a post entitled Preparing a Response to the UK Government’s Open Standards: Open Opportunities Document which summarised my experiences of support for open standards in JISC development programmes since the 1990s and encouraged others to participate in the UK Government’s consultation exercise. A post by Simon Wardley entitled The UK’s battle for open standards which began:

Many of you are probably not aware, but there is an ongoing battle within the U.K. that will shape the future of the U.K. tech industry. It’s all about open standards.

motivated me to write a follow-up post entitled Oh What A Lovely War! in which I described the language which was being used to describe this consultation exercise:

In brief we are seeing a “battle for open standards” that will “shape the future of the UK tech industry” in which we are seeing “UK Government betrayal” which has led to a “proprietary lobby triumph” . The ugly secrets of “how Microsoft fought true open standards” have been revealed and now every man must do his duty and “get involved”! Who said standards were boring?

Yesterday I received the following email from Linda Humphries of the Government Digital Service, Cabinet Office.

Thank you for your response to the UK Government’s Open Standards: Open Opportunities public consultation. The consultation ran from 9 February to 4 June 2012. At the close of the consultation, we had received evidence from over 480 responses and we would like to take this opportunity to thank you for sharing your views and helping us to formulate new policy on this topic.
As you may know, the consultation process concluded with the publication of a government response and a new policy – the Open Standards Principles – on 1 November 2012. The government response covers the process we followed, a review of the key themes that emerged in the consultation, how they have been taken on board and the next steps for open standards in government IT.
Online submissions were published during the consultation period to encourage debate and we have now also made available the written responses submitted through other channels. The only exception to this is any submissions which explicitly requested confidentiality. Two independent reports commissioned by the Cabinet Office from Bournemouth University have also been published and are available on the Cabinet Office website – an analysis of the consultation responses and an evidence review of aspects of the proposed policy. The responses, reports and new policy are all available here.
In the new year, we shall be setting up the Open Standards Board, as described in the Open Standards Principles. We look forward to your continuing engagement through the Standards Hub during 2013.
Kind regards,
Linda
Linda Humphries
Government Digital Service
Cabinet Office


The Key Documents

The key documents which have been published are Open Standards Principles (PDF, MS Word and ODT formats), Open Standards Consultation – Government Response (PDF, MS Word and ODT formats), Statistical data (PDF, MS Word and ODT formats), An Analysis of the Public Consultation on Open Standards: Open Opportunities (PDF, MS Word and ODT formats), Open Standards in Government IT: A Review of the Evidence (PDF, MS Word and ODT formats) and B (PDF, MS Excel and CSV formats). The first document summarised the key principles:

Open Standards Principles

These principles are the foundation for the specification of standards for software interoperability, data and document formats in government IT:

1. We place the needs of our users at the heart of our standards choices
2. Our selected open standards will enable suppliers to compete on a level playing field
3. Our standards choices support flexibility and change
4. We adopt open standards that support sustainable cost
5. Our decisions on standards selection are well informed
6. We select open standards using fair and transparent processes
7. We are fair and transparent in the specification and implementation of open standards

The introduction to the document states that:

This policy becomes active on 1 November 2012. From this date government bodies [1] must adhere to the Open Standards Principles – for software interoperability, data and document formats in government IT specifications.

The other documents summarised the responses which had been received to the consultation (which included feedback from Adam Cooper, JISC CETIS, Rowan Wilson, JISC OSS Watch, Rob Englebright, JISC and Tony Hirst, Open University, in addition to myself and several others from the university sector). The document Open Standards in Government IT: A Review of the Evidence, an independent report for the Cabinet Office by the Centre for Intellectual Property & Policy Management at Bournemouth University, concluded:

Although there is a lack of quantitative evidence on expected cost savings from adopting open standards, abundant examples exist where an open standards policy has been adopted with various consequent benefits, and the literature identifies few downside risks. The challenges appear to lie in the manner of implementation so that potential pitfalls, such as adopting the wrong standard, are avoided while potential gains from increased interoperability, including more competitive procurement and benefits to SMEs and citizens are maximised.

Perhaps some unexpected good news from the Government for Christmas? Might we be able to announce that the standards battle is now over and cry out “Peace in our time”? Time to read the documents in more detail, I feel. But I’d welcome comments from anyone who may already have read the documents and digested the implications.

[1] Central government departments, their agencies, non-departmental public bodies (NDPBs) and any other bodies for which they are responsible.

View Twitter conversation from: [Topsy]

Posted in standards | 4 Comments »

“It Was 20 Years Ago Today”

Posted by Brian Kelly on 9 December 2012

On 9 December 1992 I saw the Web for the first time. As I described in a handbook entitled Running A World-Wide Web Service published in 1995:

[I] first came across the World-Wide Web (WWW) at a workshop on Internet tools organised by the Information Exchange Special Interest Group, University of Leeds on 9th December 1992. In January 1993 the Computing Service installed the CERN httpd server on its central Unix system – this was probably the first WWW service provided by a central service in the UK academic community.

The workshop included demonstrations of a number of Internet applications. The aim of the workshop was to raise awareness of the importance of the Internet in supporting institutional research, teaching and marketing activities.

At the time I was familiar with Gopher, Veronica, WAIS and Archie but the Web was new to me. The applications were probably demonstrated on Silicon Graphics or possibly Sun workstations. The Web browser I saw was Viola, which was publicly released in May 1992.

A screenshot of Viola running under X-Windows is illustrated. It should be noted, however, that this image shows a later release of the browser since, in December 1992, the Web was text-only with inline images only becoming available with the release of the NCSA Mosaic browser.

Despite its text-only origins the potential of the Web was apparent to me from the first time I saw it. The ability to have links within a document, as opposed to Gopher, which provided only links from menu items, was a clear strength of the application, as was the integration with a range of existing Internet services, such as FTP and Gopher, and with a variety of backend services, such as directory applications, which were already starting to be integrated with the Web.

At that time I was the Information Officer in the University Computing Service and was looking for a tool which could be used to provide access to online information provided by the Computing Service as well as, I hoped, form the basis of a Campus Wide Information Service (CWIS).

A small number of universities were at that time starting to explore the potential of Gopher to provide a CWIS, and that was the technology I expected would be used at Leeds. But on 9 December 1992 I saw the Web for the first time and was convinced that I had seen a new vision of the future. It was twenty years ago today, but it’s another set of Beatles lyrics which are more appropriate:

Roll up for the mystery tour.
The magical mystery tour is waiting to take you away,
Waiting to take you away.

When were you taken away by the Web?

Twitter conversation from Topsy: [View]

Posted in General | 9 Comments »

Guest Post: “1 billion people, 17 million students, 500+ colleges and millions of eager learners”

Posted by Brian Kelly on 7 December 2012

Today’s guest post is written by  Gwen van der Velden, Director, Learning and Teaching Enhancement at the University of Bath. Following a chat last night along our shared corridor on level 5 of the Wessex House building Gwen kindly agreed to write a guest post about her recent trip to India.

I work a few offices away from Brian Kelly and Paul Walk and other colleagues in UKOLN. We chat often in the corridors and today I told Brian about last week’s trip to Delhi, India. Because of my enthusiasm about what we found in relation to e-learning, new technologies and connectivity for the public good, Brian asked me to blog and share some of the inspiration. For context, when I say ‘we’ I am not being royal, I am just also referring to Kyriaki Anagnostopoulou, our Head of e-Learning at Bath who has the kind of international reputation that got us invited to India in the first place.

The Indian government works with the HE sector on increasing access to HE for learners who cannot access it at the moment. The HE system in India is highly regulated and it isn’t a market where entry is easily possible. Many UK universities are working to establish themselves there, but this is far from easy. Moreover, there are not enough Indian faculty members to grow the existing universities or establish new ones, and student places are very, very limited considering the level of interest in university study. We heard that for one of the Institutes of Technology, there are over 40 students for each available place. So, a different approach is required. Against this background there is a bigger drive to educate India out of poverty. Experiencing New Delhi, you can see what is possible. But driving into old Delhi, we saw what still is to be achieved. It is a country of zest, opportunity, large numbers (1 billion people) and great economic and social challenges…

The Ministry of Human Resources Development, which oversees HE, is investing $1 billion into growing HE. Crucial to their plan is the National Mission on Education through ICT. Growth is going to come through reaching all corners of India with connectivity, and that is why there is an incredible project of taking glass fibre cable into the farthest ends of India. A huge development, and often combined with putting solar energy provision in place where no electricity existed before. WiFi connections are going to become available through subscriptions of 40 rupees a year – about 50 pence. It shows some clear government financial commitment. And it’s all for learning, how inspiring is that?

Aakash tablet (image from Wikipedia)

The second step is to have the learning platforms that connect learners to the curriculum, teaching and assessment. This too is addressed in the most imaginative way. You may have heard of the Indian invention of a $30 tablet, the Aakash (illustrated). I understood that Aakash means ‘clouds’, or ‘sky’, which shows again how India is reaching for the sky here. The Aakash 1 apparently didn’t get past the pilot, but I’ve held the Aakash 2, played with it (thanks Prof Kannan Moudgalya) and sat in amazement at what a smart little thing it is. It’s less than half the size of an iPad but large enough to work with comfortably. It has some good processing power and I saw software on it that allows you to do programming, which is useful for Comp Sci students and e-developers. The current pilot means 100,000 learners are testing it out, and we understood from government officials that another 1.5 million are to be piloted in early spring next year.

With connectivity and the technology platform under way, the content needs to get out there, and this is where our discussions came in. At the moment universities are encouraged to make as much content available as possible, and they all do it in different ways. In some cases it is curriculum, sometimes just content, and in some cases there is a larger or smaller effort towards designing materials for learning. Designing content for learning is clearly a developing field and, again, full of challenges in India, such as the need for versions in various languages, adjustment for cultural context, and sensitivities (cultural, religious or relating to property rights) about what text, expressions or content may be used. (On that note, this entry is not a statement sanctioned or approved by the Indian government or any partners we have worked with. It’s just my own account!)

Interestingly, at the conference – courtesy of the British Council and Indira Gandhi National Open University – the Ministry’s Secretary told us that developments in universities now have to be about quality, not quantity. It isn’t good enough just to put content online if ICT is not used effectively to actually improve learning. Excellent.

The three-step approach is incredible considering the size of the country: 1 billion people, 17 million students, 500+ colleges and millions of eager learners wanting to get ahead. We were impressed by the university colleagues we met from all over India. They were genuinely driven by seeing universities as a public good: educating the country out of poverty and developing the technologies to do it. It explains where all these inspired e-ideas are coming from. Watch this space: I can’t help thinking there is more to come from the East.

Gwen van der Velden
Learning and Teaching Enhancement
University of Bath.

Web page:
Twitter: @gwenvdv

Kyriaki Anagnostopoulou
Head of e-Learning
Learning and Teaching Enhancement
University of Bath.

Web page:


Posted in General, Guest-post | Tagged: | 1 Comment »

Guest Post: Reflections on Open Access Week 2012 at the University of Oxford

Posted by Brian Kelly on 4 December 2012

During Open Access Week a series of guest blog posts were published on this blog in which three repository managers shared the findings of SEO analyses of their institutional repositories.

As a follow-up to those posts, which were motivated by a commitment to openness and sharing which is prevalent in the repository community, this post by Catherine Dockerty (Web and Data Services Manager, Radcliffe Science Library) and Juliet Ralph (Bodleian Libraries Life Sciences Librarian) provides a summary of the activities behind the Open Access Week event at the University of Oxford.

Open Access Week at Oxford

Open Access Week 2012 saw a determined effort from the Bodleian Libraries of Oxford University to shine a light on developments in Open Access with a full week-long programme of events. This was prompted by the need to assess the state of play in Open Access (OA) which, for major research institutions such as Oxford, is particularly urgent in the wake of the publication of the Finch Report. It was the second year we have participated in Open Access Week – last year we held a single event and we wanted to do a lot more this time round.

What We Were Trying To Do

We had a number of specific things we wanted to achieve through our programme:

  • Increasing the knowledge of library staff. All reader-facing staff will potentially deal with enquiries relating to Open Access.
  • Assembling and showcasing the expertise of Bodleian Libraries staff in Open Access. Readers need to know what we can do for them.
  • Raising awareness of publishing options to academic researchers.
  • Promoting submission to Oxford’s institutional repository ORA (Oxford Research Archive). Oxford currently has mandatory deposit for doctoral theses, but not for research papers.
  • Highlighting Oxford’s progress in the field of Open Data.

What We Did

We put together a programme of talks and other activities, most of which were lunchtime sessions and took place at the Radcliffe Science Library, one of the Bodleian Libraries and Oxford University’s main library for the sciences and engineering. The majority of speakers were library staff. The focus was on science, but events covering law and medicine were included and there were attendees from the humanities and social sciences.

An evening session, “Bodley’s ‘Republic of [Open] Letters’”, was hosted by the Oxford Open Science Group and highlighted the DaMaRO Project, which is developing a research data management policy and data archiving infrastructure for Oxford.

The presentations are available online.

Wikipedia Editathon

Ada Lovelace by Margaret Carpenter, 1836

The final event of the Open Access Week programme was a Wikipedia “Editathon” on the theme of Women in Science. The event was organised as a collaboration between the Bodleian Libraries and Oxford University’s IT Services, and was a follow-up to the Ada Lovelace Day event held at the Royal Society the previous week. This tied in neatly with Open Access Week as we were able to highlight open access sources for use in updating articles. Our event was publicised at the Royal Society event and on the Ada Lovelace Day Wikipedia page.

Having an Oxford-based Wikipedia event was also an opportunity to encourage academics and students to get involved in editing Wikipedia, which is reliant on expert contributors to add high quality articles and improve existing ones. Wikipedia has a readership vastly exceeding that of any academic journal, and presents an opportunity for academics to have an impact on a wider audience.

Juliet Ralph (Bodleian Libraries Life Sciences Librarian) opened the proceedings with a talk introducing Wikipedia and outlining the format of the session. Online resources for editing articles were suggested, with a focus on open access sources. The fact that the Royal Society was providing free access to all its publications until 29 November 2012 was highlighted. A collection of printed reference materials from the RSL’s collection was also provided.

A list of articles for adding/updating was provided as guidance to participants, but this was not intended to be prescriptive. The list was the same one as used at the Royal Society event, updated to reflect all the work done that day.

We were very pleased that Oxford-based Wikipedians James and Harry Burt were able to attend and assist the assembled editors. They also treated us to an impromptu presentation on their work as long-time Wikipedia editors.

Online participation via Twitter was encouraged using the hashtag #WomenSciWP (the same as for the Royal Society event). Note that a Twubs archive of the tweets is available. The event was also live-tweeted from the RSL’s Twitter feed (@radcliffescilib).

By the end of the session two new articles had been created and 12 updated. Attendees were mainly research staff and postgraduate students from the fields of science and medicine. Also present were two archivists from the Saving Oxford Medicine project, who published a blog post about the work.

Special thanks to:

  • James and Harry Burt for presenting and for the help they gave to other participants.
  • Izzie McMann and Karen Langdon (Radcliffe Science Library staff) for assisting participants on the day.
  • Janet McKnight (IT Services) and Alison Prince (Bodleian Libraries Web Manager) for help in organising and publicising the event.
  • Andrew Gray (British Library Wikipedian in Residence) and Daria Cybulska (Wikimedia UK) for publicising the Editathon and supplying learning materials for the session.


We certainly achieved the aim of increasing knowledge of OA issues among library staff within the sciences, several of whom attended more than one event. In future we will aim to actively promote the staff development benefits of participating to all Bodleian Libraries staff, not just those in the sciences. Our collaborations with the Open Science Group and IT Services were successful, and we hope to work with them on future events.

We fulfilled all our original intentions to some extent, but some events were not well attended in spite of being publicised widely, although they were positively received by those who did attend.

The timing of Open Access Week is a problem for Oxford as the start of the academic year is later than for most UK universities, which means the new term is just getting underway in earnest and there are many other events to compete with. Staff time in planning events is also in short supply as reader-facing staff will have been prioritising inductions for new students over the previous weeks.

The Wikipedia event was a success (well attended with positive feedback) and we would certainly hold a similar event in the future, although not necessarily as part of Open Access Week. The fact that it was a hands-on session went down well, and the Women in Science theme attracted interest.

Next Time

Holding events at lunchtime was evidently not popular and we may decide to move them to an afternoon slot (colleagues who run user education programmes had a higher take-up when they did this). We may also move the sessions out of the library into academic departments or colleges, and hold events at other times of year.

We will be making a concerted effort to involve well-known speakers, rather than relying heavily on library staff.

We will be looking to encourage other OA events in Oxford and elsewhere, and we will also think about using online chat as well as Twitter for online participation. The planning starts now!


Catherine Dockerty is the Web and Data Services Manager at the Radcliffe Science Library at Oxford University, where her role involves managing online content, social media and communications, and supporting colleagues in serving the University’s teaching and research in the sciences. She has spent 13 years working in various reader services roles at Oxford University, and has also worked in the civil engineering industry and the book trade.

Juliet Ralph is the Subject Librarian for Life Sciences and Medicine in the Bodleian Libraries at Oxford, where she has worked for over 15 years. She is one of many librarians involved in providing support for research at Oxford, including Open Access.

Posted in Guest-post, openness, Repositories | Tagged: , | 1 Comment »