UK Web Focus (Brian Kelly)

Innovation and best practices for the Web

Archive for the ‘openness’ Category

Guest Post: Reflections on Open Access Week 2012 at the University of Oxford

Posted by Brian Kelly on 4 Dec 2012

During Open Access Week a series of guest blog posts was published on this blog in which three repository managers shared the findings of SEO analyses of their institutional repositories.

As a follow-up to those posts, which were motivated by a commitment to openness and sharing which is prevalent in the repository community, this post by Catherine Dockerty (Web and Data Services Manager, Radcliffe Science Library) and Juliet Ralph (Bodleian Libraries Life Sciences Librarian) provides a summary of the activities behind the Open Access Week event at the University of Oxford.

Open Access Week at Oxford

Open Access Week 2012 saw a determined effort from the Bodleian Libraries of Oxford University to shine a light on developments in Open Access with a full week-long programme of events. This was prompted by the need to assess the state of play in Open Access (OA) which, for major research institutions such as Oxford, is particularly urgent in the wake of the publication of the Finch Report. It was the second year we had participated in Open Access Week – last year we held a single event and we wanted to do a lot more this time round.

What We Were Trying To Do

We had a number of specific things we wanted to achieve through our programme:

  • Increasing the knowledge of library staff. All reader-facing staff will potentially deal with enquiries relating to Open Access.
  • Assembling and showcasing the expertise of Bodleian Libraries staff in Open Access. Readers need to know what we can do for them.
  • Raising awareness of publishing options to academic researchers.
  • Promoting submission to Oxford’s institutional repository ORA (Oxford Research Archive). Oxford currently has mandatory deposit for doctoral theses, but not for research papers.
  • Highlighting Oxford’s progress in the field of Open Data.

What We Did

We put together a programme of talks and other activities, most of which were lunchtime sessions and took place at the Radcliffe Science Library, one of the Bodleian Libraries and Oxford University’s main library for the sciences and engineering. The majority of speakers were library staff. The focus was on science, but events covering law and medicine were included and there were attendees from the humanities and social sciences.

An evening session, “Bodley’s ‘Republic of [Open] Letters’”, was hosted by the Oxford Open Science Group and highlighted the DaMaRO Project, which is developing a research data management policy and data archiving infrastructure for Oxford.

The presentations are available online.

Wikipedia Editathon

Ada Lovelace by Margaret Carpenter, 1836

The final event of the Open Access Week programme was a Wikipedia “Editathon” on the theme Women in Science. The event was organised as a collaboration between the Bodleian Libraries and Oxford University’s IT Services, and was a follow-up to the Ada Lovelace Day event at the Royal Society the week before. This tied in neatly with Open Access Week as we were able to highlight open access sources for use in updating articles. Our event was publicised at the Royal Society event and on the Ada Lovelace Day Wikipedia page.

Having an Oxford-based Wikipedia event was also an opportunity to encourage academics and students to get involved in editing Wikipedia, which is reliant on expert contributors to add high quality articles and improve existing ones. Wikipedia has a readership vastly exceeding that of any academic journal, and presents an opportunity for academics to have an impact on a wider audience.

Juliet Ralph (Bodleian Libraries Life Sciences Librarian) kicked off the proceedings with an introductory talk outlining Wikipedia and the format of the session. Online resources for editing articles were suggested, focusing on open access, and the fact that the Royal Society was providing free access to all its publications until 29th November 2012 was highlighted. A collection of printed reference materials from the RSL’s collection was also provided.

A list of articles for adding/updating was provided as guidance to participants, but this was not intended to be prescriptive. The list was the same one as used at the Royal Society event, updated to reflect all the work done that day.

We were very pleased that Oxford-based Wikipedians James and Harry Burt were able to attend and assist the assembled editors. They also treated us to an impromptu presentation on their work as long-time Wikipedia editors.

Online participation via Twitter was encouraged using the hashtag #WomenSciWP (the same as for the Royal Society event). Note that a Twubs archive of the tweets is available. The event was also live-tweeted from the RSL’s Twitter feed (@radcliffescilib).

By the end of the session two new articles were created and 12 updated. Attendees were mainly research staff and postgraduate students from the fields of science and medicine. Also present were two archivists from the Saving Oxford Medicine project who posted a blog post about the work.

Special thanks to:

  • James and Harry Burt for presenting and for the help they gave to other participants.
  • Izzie McMann and Karen Langdon (Radcliffe Science Library staff) for assisting participants on the day.
  • Janet McKnight (IT Services) and Alison Prince (Bodleian Libraries Web Manager) for help in organising and publicising the event.
  • Andrew Gray (British Library Wikipedian in Residence) and Daria Cybulska (Wikimedia UK) for publicising the Editathon and supplying learning materials for the session.


We certainly achieved the aim of increasing the knowledge of OA issues in Library staff within the sciences, several of whom attended more than one event. In future we will aim to actively promote the staff development benefits from participating to all Bodleian Libraries staff, not just those in the sciences. Our collaborations with the Open Science Group and IT Services were successful, and we hope to work together with them on future events.

We fulfilled all our original intentions to some extent, but some events were not well attended in spite of being widely publicised, although they were positively received by those who did attend.

The timing of Open Access Week is a problem for Oxford as the start of the academic year is later than for most UK universities, which means the new term is just getting underway in earnest and there are many other events to compete with. Staff time in planning events is also in short supply as reader-facing staff will have been prioritising inductions for new students over the previous weeks.

The Wikipedia event was a success (well attended with positive feedback) and we would certainly hold a similar event in the future, although not necessarily as part of Open Access Week. The fact that it was a hands-on session went down well, and the Women in Science theme attracted interest.

Next Time

Holding events at lunchtime was evidently not popular and we may decide to move them to an afternoon slot (colleagues who run user education programmes had a higher take-up when they did this). We may also move the sessions out of the library into academic departments or colleges, and hold events at other times of year.

We will be making a concerted effort to involve well-known speakers, rather than relying heavily on library staff.

We will be looking to encourage other OA events in Oxford and elsewhere, and we will also think about using online chat as well as Twitter for online participation. The planning starts now!


Catherine Dockerty is the Web and Data Services Manager at the Radcliffe Science Library at Oxford University, where her role is to manage online content, social media and communications, and to support colleagues in serving the University’s teaching and research in the sciences. She has spent 13 years working in various reader services roles at Oxford University, and has also worked in the civil engineering industry and the book trade.

Juliet Ralph is the Subject Librarian for Life Sciences and Medicine in the Bodleian Libraries at Oxford, where she has worked for over 15 years. She is one of many librarians involved in providing support for research at Oxford, including Open Access.

Posted in Guest-post, openness, Repositories | 1 Comment »

Open Practices for Open Repositories

Posted by Brian Kelly on 29 Oct 2012


Open Access Week, which took place last week, was a busy period for me. Not only did I give talks on how social media can enhance access to research papers hosted in institutional repositories at the universities of Exeter, Salford and Bath, I also wrote accompanying posts which were published on the Networked Researcher and JISC blogs. But perhaps more importantly, last week I coordinated the publication of three guest posts on this blog: SEO Analysis of WRAP, the Warwick University Repository; SEO Analysis of LSE Research Online; and SEO Analysis of Enlighten, the University of Glasgow Institutional Repository.

Sharing of Repository Practices and Experiences

The background to this work was the two papers I co-authored for the Open Repositories OR 2012 conference. In the paper on “Open Metrics for Open Repositories” (available in PDF and MS Word formats) Nick Sheppard, Jenny Delasalle, Mark Dewey, Owen Stephens, Gareth Johnson, Stephanie Taylor and I concluded with a call for repository managers, developers and policy makers to be pro-active in providing open access to metrics for open repositories. In the paper which asked “Can LinkedIn and Enhance Access to Open Repositories?“, also available in PDF and MS Word formats, Jenny Delasalle and I described how popular social media services which are widely used by researchers can have a role to play in enhancing the visibility of papers hosted in repositories. However, although these services appeared to be widely used, we concluded by describing how “further work is planned to investigate whether such links are responsible for enhancing SEO rankings of resources hosted in institutional repositories“.

This work began with a post which described the findings of a MajesticSEO Analysis of Russell Group University Repositories. The post made use of the MajesticSEO service, which can report on SEO ranking factors for Web sites, and provided initial findings of a survey of institutional repositories hosted by the 24 Russell Group universities.

This initial post was intended to explore the capabilities of the tool and gauge the level of interest in further work. In response to the post the question was asked: “Are [the findings] correlated with amount of content, amount of full-text (or other non-metadata-only) content, breadth or depth of subject matter, what?” These were valid questions, and they were addressed in the more detailed follow-up surveys provided by repository managers at the universities of Warwick, Glasgow and LSE, who have the contextual knowledge needed to answer such questions.

In this initial series of guest blog posts, William Nixon concluded with the remarks:

This has been an interesting, challenging and thought-provoking exercise, with the opportunity to look at the results and experiences of Warwick and the LSE who, like us, use Google Analytics to provide measures of traffic and usage.

The overall results from this work provide some interesting counterpoints and data to the results which we get from both Google Analytics and IRStats. These will need further analysis as we explore how Majestic SEO could be part of the repository altmetrics toolbox and how we can leverage its data to enhance access to our research.

I feel the exercise has been valuable for the three contributors. But I also feel that the open descriptions of the experiences of using the MajesticSEO tool, the findings and the interpretation of those findings will be of value to the wider repository community, who may also have an interest in gaining a better understanding of the ways in which repository resources are found by users of popular search engines such as Google. There will also be a need for a better understanding of the tools used to carry out such analyses. How, for example, will SEO analysis tools address link farms and other ‘black hat’ SEO techniques which may provide significant volumes of links to resources which are, in reality, ignored by Google?

William Nixon’s post concluded by pointing out the need for:

further analysis as we explore how Majestic SEO could be part of the repository altmetrics toolbox and how we can leverage its data to enhance access to our research.

I suspect the University of Glasgow will not be alone in wishing to explore the potential of SEO analysis tools, which can help in understanding current patterns of traffic to repositories and in shaping practices to enhance such traffic. I hope the work which has been described by Yvonne Budden, Natalia Madjarevic and William Nixon has been useful to the repository community in summarising their initial experiences.

I should also add that Jenny Delasalle and I are giving a talk at the ILI 2012 conference which will ask “What Does The Evidence Tell Us About Institutional Repositories?” We are currently finalising the slides for the talk, which are available on Slideshare and embedded below. There is still an opportunity for us to update the slides, which might include a summary of plans for future work in this area, so we would very much welcome your feedback and suggestions. Perhaps you might be willing to publish a guest post on this blog which builds on last week’s posts?

Posted in openness, Repositories | 5 Comments »

SEO Analysis of Enlighten, the University of Glasgow Institutional Repository

Posted by Brian Kelly on 25 Oct 2012


In the third and final guest post published during Open Access Week, William Nixon, Head of the Digital Library Team at the University of Glasgow Library and the Service Development Manager of Enlighten, the University of Glasgow’s institutional repository service, gives his findings on the use of the MajesticSEO tool to analyse the Enlighten repository.

SEO Analysis of Enlighten, University of Glasgow

This post takes an in-depth look at a search engine optimisation (SEO) analysis of Enlighten, the institutional repository of the University of Glasgow. This builds on an initial pilot survey of institutional repositories provided by Russell Group universities described in the post on MajesticSEO Analysis of Russell Group University Repositories.


University of Glasgow

Founded in 1451, the University of Glasgow is the fourth oldest university in the English-speaking world. Today we are a broad-based, research-intensive institution with a global reach, ranked in the top 1% of the world’s universities. The University is a member of the Russell Group of leading UK research universities. Our annual research grants and contracts income totals more than £128m, which puts us in the UK’s top 10 earners for research. Glasgow has more than 23,000 undergraduate and postgraduate students and 6,000 staff.


We have been working with repositories since 2001 (our first work was part of the JISC funded FAIR Programme) and we now have two main repositories, Enlighten for research papers (and the focus of this post) and a second for our Glasgow Theses.

Today we consider Enlighten to be an “embedded repository”, that is, one which has “been integrated with other institutional services and processes such as research management, library and learning services” [JISC Call, 10/2010]. We have done this in various ways including:

  • Enabling sign-on with institutional ID (GUID)
  • Managing author identities
  • Linking publications to funder data from Research System
  • Feeding institutional research profile pages

As an embedded repository, Enlighten supports a range of activities: our original Open Access aim of making as many of our research outputs freely available as possible, but also acting as a publications database and supporting the university’s submission to REF2014.

University Publications Policy

The University’s Publications Policy, introduced to Senate in June 2008, has two key objectives:

  • to raise the profile of the university’s research
  • to help us to manage research publications.

The policy (it is a mandate but we tend not to use that term) asks that staff:

  • deposit a copy of their paper (where copyright permits)
  • provide details of the publication
  • ensure the University is in the address for correspondence (important for citation counts and database searches)

Enlighten: Size and Usage

Size and coverage

In mid-October 2012 Enlighten had 4,700 full text items covering a range of item types including journal articles, conference proceedings, books, reports and compositions. Enlighten has over 53,000 records, and the Enlighten Team work with staff across all four Colleges to ensure our publications coverage is as comprehensive as possible.


We monitor Enlighten’s usage primarily via Google Analytics for overall access (including numbers of visitors, page views, referrals and keywords) and via the EPrints IRStats package for downloads. Daily and monthly download statistics are provided in the records for items with full text, and we provide an overall listing of download stats for the last one-month and 12-month periods.

Looking at Google Analytics for 1 Jan 2012 – 30 Sep 2012 (to tie in with this October snapshot) and the corresponding previous period, we had 201,839 unique visitors up to 30 Sep 2012 compared to 196,988 in 2011.

In the last year we have seen an increase in the number of referrals and our search traffic is now around 62%. In 2012, 250,733 people visited this site: 62.82% was search traffic (94% of that from Google) with 157,503 visits, and 28.07% was referral traffic with 70,392 visits.

In 2011, 232,480 people visited this site: 69.97% was search traffic with 162,665 visits and 18.98% came from referrals with 44,128 visits.
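As a sanity check, percentages like these can be recomputed from the raw visit counts; a minimal sketch in Python (the figures are the 2012 Google Analytics numbers quoted above, while the variable and function names are illustrative only):

```python
# Recompute the 2012 search/referral traffic shares from raw visit counts.
# Figures are the Google Analytics numbers quoted in the text.
total_visits = 250733
search_visits = 157503
referral_visits = 70392

def share(part: int, whole: int) -> float:
    """Return `part` as a percentage of `whole`, rounded to 2 decimal places."""
    return round(100 * part / whole, 2)

print(share(search_visits, total_visits))    # search traffic share, ~62.82%
print(share(referral_visits, total_visits))  # referral traffic share, ~28.07%
```

The same two-line helper works for the 2011 figures, which is a quick way of spotting transcription errors when quoting analytics reports.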


Our experience with Google Analytics has shown that much of our traffic still comes from search engines, predominantly Google, but it has been interesting to note the increase in referral traffic, in particular from our local * domain. This rise has coincided with the rollout of staff publication pages, which are populated from Enlighten and provide links to the records held in Enlighten.

After * domain referrals our most popular external referrals come from:

  • Mendeley
  • Wikipedia
  • Google Scholar

We expected that these would feature most prominently in the Majestic results, alongside Google itself.

Majestic SEO Survey Results

The data for this survey was generated on 22nd October 2012 using the ‘fresh index’; current data can be found from the Majestic SEO site with a free account. We do own the domain but haven’t added the code to create a free report. The summary for the site is given below, showing 632 referring domains and 5,099 external backlinks. Interestingly, it seems our repository is sufficiently mature for Majestic to also provide details for the last five years.

Since we were looking at rather than * we anticipated that our local referrals wouldn’t feature in this report. As a sidebar a focus just on showed nearly 411,000 backlinks and over 42,000 referring domains.

Figure 1.  Majestic SEO Summary for

This includes 619 educational backlinks and 54 educational referring domains, and shows a drop in the number of referring domains since Brian’s original post in August, which showed 680. A breakdown of the Top Five Domains (and number of links) is:

  • 5,880
  • 5,087
  • 322
  • 178
  • 135

These demonstrate a very strong showing for blog sites, news and Wikipedia.

Figure 2. Top 5 Backlinks

Referring domains was a challenge! We couldn’t replicate the same Matched Links data which Warwick and the LSE used. Our default Referring Domains report is ordered by Backlinks (other options, including matches, are available) but none of our Site Explorer – Ref Domains options seemed able to replicate this. We didn’t use Create Report.

These Referring Domains ordered by Backlinks point us to full text content held in Enlighten from sites it’s unlikely we would have readily identified.

Figure 3a: Referring Domains by Backlinks

Figure 3b: Referring Domains by Matches (albeit by 1)

This report shows at number one with the blog sites holding spots 2 and 3 and then Bibsonomy (social bookmark and publication sharing system) and Mendeley at 4 and 5.

An alternative view of the Referring Domains report by Referring Domain shows the major blog services and Wikipedia in the top 3, with two UK universities Southampton and Aberdeen (featuring again) in positions 4 and 5.

The final report is a ranked list of Pages, downloaded as a CSV file and then re-ordered by ReferringExtBackLinks.

URL | ReferringExtBackLinks | CitationFlow | TrustFlow
 | 584 | 36 | 28
 | 198 | 18 | 15
 | 77 | 10 | 9
 | 70 | 24 | 2
 | 69 | 23 | 2
[1].pdf | 61 | 0 | 0

Table 1: Top 5 pages, sorted by Backlinks
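The re-ordering step described above is easily scripted; here is a sketch using Python’s csv module, assuming a hypothetical export file whose header row uses the column names shown in Table 1 (the filename argument and exact header spellings are assumptions, so check them against your own Majestic export):

```python
import csv

def top_pages(path, n=5):
    """Return the top-n (URL, backlinks) pairs from a Majestic 'Pages' CSV
    export, sorted by external backlinks in descending order.
    Column names are assumed to match the Table 1 headings."""
    with open(path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))
    # Each row is a dict keyed by the header row; cast the backlink
    # count to int so the ordering is numeric, not lexicographic.
    rows.sort(key=lambda r: int(r["ReferringExtBackLinks"]), reverse=True)
    return [(r["URL"], int(r["ReferringExtBackLinks"])) for r in rows[:n]]
```

Using DictReader rather than positional indexing means the sketch keeps working if the export gains or re-orders columns, as long as the two named columns are present.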

These pages are:

  • Enlighten home page
  • PDF for “Arguments For Socialism”
  • PDF for “Language in Pictland”
  • A chronology of the Scythian antiquities of Eurasia based on new archaeological and C-14 data [Full text record]
  • Some problems in the study of the chronology of the ancient nomadic cultures in Eurasia (9th – 3rd centuries BC) [Full text record]
  • PDF for “87Sr/86Sr chemostratigraphy of Neoproterozoic Dalradian limestones of Scotland and Ireland: constraints on depositional ages and time scales” [Full text record]


Focusing in more detail on the results in Figure 2, the top 5 backlinks, 4 out of the 5 are from Wikipedia; the first two are to the same paper but from different Wikipedia entries. It’s interesting to see that our third-ranked backlink is the ROARmap registry.

Looking at the top 5 pages ranked by backlinks, none of the PDFs or the records which have PDFs currently appear in our IRStats-generated list of most downloaded papers in the last 12 months. It is clear, however, even in this pilot sampling, that there is a correlation between ranking and the availability of full text rather than merely a metadata record.


While this initial work has focused on the Top 5, extending this to at least the Top 10 would be useful for further comparison. It was interesting to see that sites such as Mendeley appeared in variations of our Referring Domains reports, which correlated with our Google Analytics reports indicating that they are a growing source of referrals.

Looking at Figure 3a, a Google search on the first referring domain (by backlinks) reveals that the number one Ref Domain has 136,000 results on Google for “”, didn’t match at all and had 5 results.

Social media sites such as Facebook and Twitter don’t appear in these initial results; it may be because the volume is insufficient for them to be ranked here, or there may be breach-of-service issues. Google Analytics now provides some social media tools and we have been identifying our most popular papers from Facebook and Twitter.

This has been an interesting, challenging and thought-provoking exercise, with the opportunity to look at the results and experiences of Warwick and the LSE who, like us, use Google Analytics to provide measures of traffic and usage.

The overall results from this work provide some interesting counterpoints and data to the results which we get from both Google Analytics and IRStats. These will need further analysis as we explore how Majestic SEO could be part of the repository altmetrics toolbox and how we can leverage its data to enhance access to our research.

About the Author

William Nixon is the Head of the Digital Library Team at the University of Glasgow Library. He is also the Service Development Manager of Enlighten, the University of Glasgow’s institutional repository service. He has been working with repositories over the last decade and was the Project Manager (Service Development) for the JISC-funded DAEDALUS Project, which set up repositories at Glasgow using both EPrints and DSpace. William is now involved with the ongoing development of services for Enlighten and support for Open Access at Glasgow. Through JISC-funded projects including Enrich and Enquire he has worked to embed the repository into University systems. This work includes links to the research system for funder data and the re-use of publications data in the University’s web pages. He was part of the University’s team which provided publications data for the UK’s Research Excellence Framework (REF) Bibliometrics Pilot. William is now involved in supporting the University of Glasgow’s submission to the REF2014 national research assessment exercise. Enlighten is a key component of this exercise, enabling staff to select and provide further details on their research outputs.

Posted in Evidence, Guest-post, openness | 2 Comments »

Open Practices for the Connected Researcher

Posted by Brian Kelly on 22 Oct 2012

Today sees the start of Open Access Week, #OAWeek. As described on the Open Access Week Web site:

Open Access Week, a global event now entering its sixth year, is an opportunity for the academic and research community to continue to learn about the potential benefits of Open Access, to share what they’ve learned with colleagues, and to help inspire wider participation in helping to make Open Access a new norm in scholarship and research.

I am participating in Open Access Week by sharing my experiences of making use of the Social Web to maximise access to papers hosted in institutional repositories. Tomorrow (Tuesday 23 October 2012) I am giving a talk on “Open Practices for the Connected Researcher” in a seminar which is part of a series of Open Access Week events which are taking place at the University of Exeter.

On Thursday, as described in a news item published by the University of Salford, I am the invited guest speaker for an Open Access event which will take place at the  Old Fire Station at the University of Salford where I will give a talk on “Open Practices and Social Media for the Connected Researcher“.

The following day I will be giving a talk on “Open Access and Open Practices For Researchers” at the University of Bath. This event, which marks the launch of a Social Media programme for Researchers, will include, in addition to my talk, a presentation from Ross Mounce, a PhD student and Open Knowledge Foundation Panton Fellow at the University of Bath, who will talk about the need for true Open Access (as originally defined), why it matters and the plethora of options we have for OA publishing.

In addition to such ‘real-world’ activities in support of Open Access Week I am also taking part in the Networked Researcher Blogging Unconference and earlier today published the launch post for the unconference.

My slides for tomorrow’s talk are available on Slideshare and are embedded below.

Posted in openness, Repositories | 3 Comments »

Which University? The One With Good News or the One Which is Open and Transparent?

Posted by Brian Kelly on 13 Sep 2012

I came across the news first on Twitter from the @timeshighered account:

Which? launches university comparison website, featuring details of 30,000 courses and 262 HEIs: 

This announcement caused some slight concerns on Twitter, perhaps with a feeling that higher education shouldn’t be treated as a consumer good.

But shortly after the announcement on Twitter, Alison Kerwin, head of Web Services at the University of Bath, reminded me that she had predicted that we would see such consumer guides to selecting higher education courses when she gave a plenary talk on Let the Students do the Talking… at UKOLN’s IWMW 2007 event:

I don’t want to say I told you so but… Which Guide to Universities? … #iwmw2007

As can be seen from Alison’s slides (which are available on Slideshare) setting out her vision of the future, Alison predicted that we would see such commercial services. The service she envisaged now exists and is not too dissimilar in its aims from the newly launched site.

As one does, the first university to explore in such services is your host institution. As can be seen from the entry for the University of Bath, we see not only a picturesque display of the University campus but also some pleasing words about the University:

Bath University is consistently one of the highest ranked for student satisfaction in the UK. The University has an ideal blend of academia and a thriving campus with many activities to get involved in. With a reputation for exceptionally strong sports we’re national champions in netball, football and women’s tennis – we also have a brand new arts complex on the way.

However, the positive view of the university is not surprising when you notice that it has been provided by the Students’ Union.

Another view of the university is given by comments from students with one example of the downside being:

Faith support. We have a chaplaincy where I work part time but it is not advertised as a resource and is kept hidden by its location on campus so most students are unaware of the support offered by it.

Although it was good to read the positive comments:

Library facilities, teaching quality is generally high, communication is excellent.

Really good sports clubs due to the enthusiasm of the students involved.

But for me the most interesting aspect of the Which University Web site was the inclusion of the latest tweet from the institutional Twitter account. In this case it said:

RT @TeamBath The reception is over & the bus prepares to return to the @uniofbath . Thank you Bath.

and highlighted yesterday’s open-top bus parade of Olympians and Paralympians in Bath, including University of Bath Sports Performance student Michael Jamieson, who won swimming silver at the London Olympics, and Paralympic swimmer Stephanie Millward, who won an impressive haul of five medals in the pool at the Games.

Clearly a relevant story for the University. But what if there’s less good news to report? What if the announcement is “Severe delays in getting to Bath University today due to Open Day. Car park is full!“. Or, as happened a few years ago, “University closed due to snow. No traffic allowed up Bathwick Hill” – although that can clearly be described as a good news story :-)

But what we are seeing is that a university’s official Twitter channel will have multiple purposes including keeping current staff and students up-to-date with relevant news as well as providing a marketing channel for potential students. It strikes me that the providers of the official channel may find tensions between the informational and marketing aspects of such work.

It would be interesting to hear if any Universities have published policies on the purpose, content and scope of their official Twitter channel, and how they might use Twitter to communicate important information which could have negative connotations.

But perhaps technology could provide a means of detecting feeds which only publish good news. We are seeing Twitter analytics tools which provide sentiment analysis. Perhaps such tools could be tuned to analyse university feeds too. And if potential students find that 100% of weather-related tweets, especially from a northern university, describe sunny weather, they might detect a lack of openness! After all, since we know that many online reviews are fake, digital literacy courses provided for students may give advice on how to spot fake reviews. Let’s ensure that our channels are based on values of openness and transparency and not just the good news. Which is, of course, the point Alison made back in 2007 when she said Let the Students do the Talking….
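As a toy illustration of the idea, a feed that never tweets bad news could be flagged with nothing more than a word list. This is a deliberately naive sketch: the word lists are invented for illustration, and real sentiment analysis tools use far richer models than keyword matching.

```python
# Naive 'good news only?' check for a Twitter feed.
# The word lists below are illustrative, not a real sentiment lexicon.
POSITIVE = {"sunny", "champions", "silver", "medals", "thank"}
NEGATIVE = {"delays", "closed", "snow", "full"}

def positivity_ratio(tweets):
    """Fraction of sentiment-bearing tweets that are positive."""
    pos = neg = 0
    for tweet in tweets:
        words = set(tweet.lower().split())
        if words & POSITIVE:
            pos += 1
        elif words & NEGATIVE:
            neg += 1
    return pos / (pos + neg) if (pos + neg) else 0.0

feed = [
    "Sunny day on campus!",
    "Severe delays due to Open Day",
    "We are national champions in netball",
]
print(positivity_ratio(feed))  # 2 of the 3 sentiment-bearing tweets are positive
```

A feed whose ratio sits at or near 1.0 over a long window would be exactly the kind of relentlessly sunny channel the paragraph above warns about.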

Twitter conversation from: [Topsy] – [SocialMention] – [WhosTalkin]

Posted in openness, Twitter | 4 Comments »

“If a Tree Falls in a Forest”

Posted by Brian Kelly on 6 Sep 2012

If a paper is deposited in an institutional repository and nobody notices, can the associated work be seen to have any relevance? I wondered about this recently after looking at the download statistics for my papers hosted in Opus, the University of Bath repository. Normally I’m interested in the reasons for popular downloads (such as the evidence suggesting that large numbers of downloads are due to the ‘Google juice’ provided by links from popular Web sites). However as part of the preparation for a talk on “Open Practices for the Connected Researcher” I’m giving at the University of Exeter during Open Access Week I was interested in lessons to be learnt from papers which hardly anyone downloads.

In my case the papers nobody cares about are an article published in LA Record in 1997, a paper on Collection Level Description published in 1999 which I had forgotten about until I rediscovered it a few years ago and uploaded it to the repository, the final report for the QA Focus project and a peer-reviewed paper on Using Context to Support Effective Application of Web Content Accessibility Guidelines.

It was the peer-reviewed paper I was most interested in. This paper, written by myself, David Sloan, Helen Petrie, Fraser Hamilton and Lawrie Phipps and published in the Journal of Web Engineering (JWE), has only been downloaded twice. Clearly nobody is being deafened by the impact of this paper challenging the status quo!

Given that a total of 13,104 papers of mine have been downloaded from the repository what are the reasons for the lack of interest in this paper?

The obvious starting point would be the content. But this paper was a follow-up from previous papers on Web accessibility which have been well-read and widely-cited and the interest in our papers in this area has continued.

Looking at the email folder about this paper it seems that the first version of the paper was submitted to the publishers in July 2005. I seem to recall that we were invited to submit a paper based on an updated version of a paper on Forcing Standardization or Accommodating Diversity? A Framework for Applying the WCAG in the Real World by the same authors which had been presented at the W4A 2005 conference.

We received positive comments from the reviewers in August 2005 and responded with appropriate updates to the paper. But then everything went quiet. It wasn’t until August 2006 that we received the final proofs of the paper, and September 2006 when we received confirmation that the paper had been accepted and would be published in the Journal of Web Engineering, Vol. 5 No. 4 in December 2006. This was 17 months after we had submitted the first version of the paper!

By this time my co-authors and I had forgotten about the paper, and the ideas we described had been superseded by a paper on Contextual Web Accessibility – Maximizing the Benefit of Accessibility Guidelines presented at the W4A 2006 conference in May 2006.

Looking at the download statistics for my papers it seems that I began depositing items in the Opus repository in October 2008. My first set of papers were deposited by repository staff based on the links available from the UKOLN Web site. However it would appear that the JWE paper had not been uploaded, probably because I had failed to include it in my list of publications due to its long gestation period. A few months ago I noticed that the paper was missing from the repository, so on 17 May 2012 I uploaded it.

The reason for the lack of downloads is now clear: the paper wasn’t available until recently! And by the time the paper was available the ideas were no longer current.

What are the lessons which can be learnt which I can share in my talk on “Open Practices for the Connected Researcher“? I would suggest:

Repository items need to be made publicly available when the ideas are current. Depositing old papers may be useful for preserving the content and for record-keeping purposes, but not if the aim is to maximise the impact of the ideas.

Of course there is a bigger question about the value of peer-reviewed papers. In his 1,000th blog post Tony Hirst gave his reflections on The Un-academic. Tony pointed out that “Formal academic publications are a matter of record, and as such need to be self-standing, as well as embedded in a particular tradition” and contrasted this with blog posts which are “deliberately conversational: the grounding often coming from the current conversational context – recent previous posts, linked to sources, comments – as well as discussions ongoing in the community that the blog author inhabits and is known to contribute to“.

Tony argued the value of blogs in the support of the research process by pointing out that blog posts can provide:

“a contribution to a daily ongoing communication with a community that often mediates its interests through the sharing of links (that is, references); in part it’s a contribution of ideas at a finer resolution than a formal academic reference, and in completely different style to them, to the free flow of ideas that can be found through the searchable and sharable world wide web.

Since 2005 my colleagues and I have had peer-reviewed papers published at the W4A conferences in 2005, 2006, 2007, 2008, 2010 and 2012. This is part of the “annual ongoing communication with a community that often mediates its interests through the sharing of links (that is, references)”. However sometimes this process goes wrong, as has been described in this post. Although there are problems associated with the long time frames it can take for research work to be published, this doesn’t mean that the process of research publication is fundamentally flawed. However I think this example does illustrate the need for researchers to make a “contribution to a daily ongoing communication with a community that often mediates its interests through the sharing of links“.

Tony’s blog post concludes by referencing a number of recent posts by Alan Levine (@CogDog) in which he has shared his thinking on blogging: The question should be: why are you NOT blogging?, Every box you type in can be a doorway to creativity and, in a roundabout way, Gotta know when to walk. Alan’s first post provides his reflections on his blogging activities since he started on 19 April 2003. This long post is worth reading, but can be summarised very succinctly:

So here is why I blog. It is foolish and informationally selfish, not to.

Perhaps that should be the key message I give in my talk in Exeter during Open Access Week. Oh, and having reflected on the paper which nobody reads, I have decided that if a peer-reviewed paper is not read, this is a failure. My time and the time spent by my co-authors in writing the paper could have been more productively spent on other work. And no, unlike blog posts, in which writing up ideas may be a useful process in itself, peer-reviewed papers aren’t intended to assist in self-reflection.

Twitter conversation from Topsy:  [View]

Posted in openness, Repositories | 1 Comment »

Thoughts on Wolfram|Alpha Personal Analytics for Facebook

Posted by Brian Kelly on 5 Sep 2012

Recent News: Wolfram|Alpha Personal Analytics for Facebook

Tony Hirst alerted me to the recent post on Wolfram|Alpha Personal Analytics for Facebook. Facebook is, of course, one of those services which generates strong opinions, rather as Microsoft used to do. In the case of Microsoft the criticisms have been centred around its proprietary file formats and its misuse of its dominance in the desktop computer environment. For Facebook, the criticisms have focussed on Facebook being a “walled garden” and its misuse of personal data.

Facebook Was a Walled Garden

It was back in 1993 when Novell claimed that Microsoft was blocking its competitors out of the market through anti-competitive practices. However as described in Wikipedia the European Union Microsoft competition case resulted in the EU ordering Microsoft to divulge certain information about its server products and release a version of Microsoft Windows without Windows Media Player, in addition to paying a fine of £381 million. Microsoft also eventually migrated its proprietary file format to XML and the Office Open XML format, which became an Ecma standard in December 2006.

Might we see similar changes happening with Facebook? Back in December 2008 I asked Just What Is A “Walled Garden”? – a post which generated interesting discussion on the pros and cons of walled gardens, with Ben Toth commenting:

I don’t like the phrase at all. Firstly it’s one of those phrases which gives the impression of being meaningful but in practice doesn’t bear too much analysis. Secondly, walled gardens were a pretty clever Victorian technology for creating micro-climates in order to boost food production, so it seems a shame to use the term in a negative way. Finally, all gardens have walls of one sort or another – an un-walled garden wouldn’t be a garden. So the phrase is a tautology.

Max Norton concluded the discussion by observing that:

to leap to judgement just because something can be described as a walled garden is hasty. While my instinct is towards openness I try to be pragmatic about these things and where I feel there are gains to be had in using “walled garden” solutions I’ll use them.

A willingness to accept the benefits that can be provided by walled gardens can clearly be seen in fans of Apple products: as described by the Wikipedia entry for Walled Garden (technology), Apple’s iOS devices are “restricted to running pre-approved applications from a digital distribution service“.

In October 2010 I pointed out that Planet Facebook Becomes Less of a Walled Garden following the announcement that “Facebook lets users download data, create groups“; news that was welcomed as “A step in the right direction, by the vice-chair of the DataPortability Project“.

Back in September 2011 ZDNet published an article which provided an update on Facebook’s export options and argued that Facebook finally makes your exported data useful. Since there are also tools such as SafeGuard which enable you to export data from Facebook and other social networking services it seems that we can say that not only can a walled garden provide a safe managed environment, but that it would be wrong to describe Facebook as a walled environment.

Accessing Facebook Activity Data

There are now a number of ways of migrating one’s personal data from Facebook. Facebook provide advice on how to do this, and this approach has been described in an article published in C|net. Meanwhile applications such as Social Safe provide alternative ways of accessing one’s Facebook data – and I learnt that I updated my Facebook profile picture on 13 December 2007.

But it was Tony Hirst’s tweet which interested me the most, since the Wolfram|Alpha service goes beyond the simple exporting of one’s content (status updates and uploaded images and videos) and provides information and visualisations of one’s activity data.

Figure 1: Facebook activities, by time and day of week

Once you have given permission to the Wolfram|Alpha app to access your Facebook data visualisations of how you use Facebook are provided, such as the day of the week and time of posting status updates, posting links or uploading images. As shown in Figure 1 it seems that I tend to use Facebook mostly between 6pm and 9pm, which is not unexpected as I use it primarily for social purposes.
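The aggregation behind this kind of chart is straightforward to sketch. The snippet below is a hypothetical illustration – the timestamps are invented and Wolfram|Alpha’s actual processing is not public – but it shows how post times can be bucketed by weekday and three-hour band to reveal patterns such as my 6pm–9pm habit:

```python
# Sketch of the kind of aggregation behind Figure 1: bucket post
# timestamps by weekday and three-hour band. Timestamps are invented.
from collections import Counter
from datetime import datetime

posts = [
    datetime(2012, 9, 3, 18, 40),  # a Monday evening
    datetime(2012, 9, 3, 20, 15),  # a Monday evening
    datetime(2012, 9, 5, 8, 5),    # a Wednesday morning
]

# Count activity per (weekday, three-hour band), e.g. ('Mon', '18-21')
bands = Counter(
    (p.strftime("%a"), f"{p.hour - p.hour % 3:02d}-{p.hour - p.hour % 3 + 3:02d}")
    for p in posts
)
print(bands[("Mon", "18-21")])  # → 2, i.e. two posts in the Monday 6pm-9pm band
```

Run over months of activity data, a tally like this is all that is needed to draw the day-of-week/time-of-day heatmap shown in Figure 1.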

Figure 2: Facebook apps used

Figure 2 shows the Facebook apps which I use. It seems that the one I use most is the app which provides an automated status update when I publish a new post on this blog.

This information simply gives me a better understanding of my use of Facebook. This personal understanding of one’s Facebook use was the angle taken in a post on the Mashable tech blog which described how This App Knows More About Your Facebook Account Than You Do.

Figure 3: Visualisation of my Facebook community

However of greater interest to me is the way in which the Wolfram|Alpha app provides a visualisation of my Facebook community and the connections between the members of the community.

In Figure 3 you can see the various communities, which include my sword dancing and folk communities and my professional contacts. I can also see the various outliers: people who have few connections with others, including the landlady of a pub I often visit.

Such visualisation of one’s connections will be familiar to anyone who keeps an eye on Tony Hirst’s work in this area. In the past Tony has made use of Twitter APIs in order to visualise the growth and development of Twitter connections, including connections based around an event hashtag.
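The underlying idea can be sketched in a few lines: treat mutual friendships as a graph, take connected clusters as ‘communities’ and flag low-degree nodes as outliers. The sketch below is illustrative only – the names and edges are invented, and tools such as Gephi or the Wolfram|Alpha app use far richer community-detection algorithms:

```python
# Illustrative sketch of the idea behind Figure 3: given which of my
# contacts are friends with each other, find clusters and outliers.
from collections import defaultdict

edges = [
    ("alice", "bob"), ("bob", "carol"), ("alice", "carol"),  # one community
    ("dave", "erin"), ("erin", "frank"),                     # another
]
graph = defaultdict(set)
for a, b in edges:
    graph[a].add(b)
    graph[b].add(a)

def components(g):
    """Connected components via depth-first search: each is a 'community'."""
    seen, comps = set(), []
    for start in g:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            node = stack.pop()
            if node not in comp:
                comp.add(node)
                stack.extend(g[node] - comp)
        seen |= comp
        comps.append(comp)
    return comps

# Outliers: contacts with few links to the rest of my network
outliers = [n for n, friends in graph.items() if len(friends) < 2]
print(len(components(graph)), outliers)  # → 2 ['dave', 'frank']
```

Real friend graphs are denser, so visualisation tools typically use modularity-based clustering rather than simple connectivity, but the principle – clusters for communities, low-degree nodes for outliers – is the same.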

Facebook and Twitter Social Graphs

Assuming that you are willing to trust Wolfram|Alpha, their Facebook app may be of interest to anyone who would like to gain a better understanding of their own use of Facebook – as well as understanding what Facebook may know about you. Apart from the automated updates when I publish a new blog post, I update my Facebook status in the evening, often when I’m listening to live music in a local pub. Being able to process such information in an automated and global way will be of interest to the service providers who are looking to optimise targeted advertising.

Beyond the individual’s interest in such tools, clearly of greater interest will be developments around the global social graphs provided by Facebook, Twitter and, to a lesser extent, Google+.

Tony Hirst has addressed this issue recently when he asked Is Twitter Starting to Make a Grab for the Interest Graph? As Tony pointed out:

If targeted advertising is Twitter’s money play, then it’s obviously in their interest to keep hold of the data juice that lets them define audiences by interest. Which is to say, they need to keep the structure of the graph as closed as possible.

Will Twitter’s increased control over their APIs mean that there will be less opportunity for developers such as Tony Hirst (and Martin Hawksey, with his developments based on processing the Twitter data stream) to continue their work which helps to provide a better understanding of how social networks are being used to enhance teaching, learning and research activities? And, ironically, will we find that Facebook provides a more open environment for such work?

NOTE: Following publication of this post Tony Hirst informed me of his posts on Getting Started With The Gephi Network Visualisation App – My Facebook Network, Part I and Social Interest Positioning – Visualising Facebook Friends’ Likes With Data Grabbed Using Google Refine which described his experiments in analysing and visualising Facebook data.

Twitter conversation from Topsy: [View]

Posted in Facebook, openness | 2 Comments »

Getting a Kik Messenger Account – and Assessing Risks and Benefits

Posted by Brian Kelly on 3 May 2012


I recently heard about the Kik Messenger app, an instant messaging application for mobile devices which, according to Wikipedia, “took only 15 days for Kik Messenger to reach one million user registrations“. Kik Messenger has been described as a BBM killer – and as someone who has never owned a Blackberry phone I was interested in evaluating a cross-platform application which appears to be a competitor to the Blackberry’s key selling point: instant messaging.

I have now installed the app on my Android phone and iPod Touch. I’m familiar with the benefits which messaging applications can provide over email through over five years of Twitter use and am interested in exploring the potential of an app which can be used with non-Twitter users.

However in order to use such communication tools, you need to have people to communicate with. At present I only know the Kik username of one person. My username is ukwebfocus and I’d be interested in seeing how this app might be used to support my professional activities. Perhaps a tool such as Kik Messenger could have a role to play at an event, such as UKOLN’s 3-day IWMW 2012 event, in which it might not be appropriate to use Twitter for, say, administrative queries.

When making use of such new services I use three guiding principles to assist the decision-making process which were described in a paper on “Empowering Users and Institutions: A Risks and Opportunities Framework for Exploiting the Social Web“:

  1. Understanding the reasons why a service will be used.
  2. Understanding possible risks in using the service.
  3. Identification of ways of minimising such risks.

A summary of how these principles have been applied in installing Kik Messenger is given below:

Reasons for using Kik Messenger
The reasons include:

    • A desire to evaluate instant messaging tools to complement use of Twitter.
    • A need to evaluate tools which can be used to support communication needs at an event.
    • A wish to be an early adopter in use of a social networking / communications tool in order to claim a meaningful identifier and to facilitate the development of a community.

Risks in using Kik Messenger

The risks in making use of the tool include:

    • The tool may fail to reach a critical mass.
    • The service may not be sustainable: the terms and conditions may change, or the service itself, together with the accompanying network and data, may be lost.
    • Use of the tool may result in a failure to make use of richer alternatives.
    • The tool may not address a significant need.
    • The benefits provided by the tool may not be sufficient to motivate others to use it.

Approaches for minimising risks in using Kik Messenger

The approaches being taken to minimising the risks include:

    • Raising awareness of the tool across my network.
    • Acceptance of possible loss of content and community (as is the case with use of Twitter and text messaging on my mobile phone).
    • Evaluation of use of the tool in different contexts.
    • A willingness to use the tool in a small-scale context if it fails to gain significant market penetration.
    • A willingness to accept the time lost in downloading and learning use of the tool if the service itself is not sustainable.

On his blog Doug Belshaw has documented his “3 principles for a more Open approach” which appear to have a similar goal in documenting principles to guide the selection of new services:

“I’ve come up three principles to guide me:

    1. I will use free and Open Source software wherever possible. (I’m after the sustainable part of OSS, not the ‘free’ part)
    2. If this is not possible then I will look for services which have a paid-for ‘full-fat’ offering.
    3. I will only use proprietary services and platforms without a paid-for option if not doing so would have a significant effect on my ability to connect with other people.”

It is interesting to note the differences between our two approaches. Doug, it seems, very much focusses on the service itself (it needs to be available as open source software) and a particular business model (a subscription service, rather than one which is funded through advertising, for example) although, like me, he provides an escape clause which acknowledges that there are risks in failing to use a service if doing so would mean he was unable to fulfil particular requirements. My approach, on the other hand, focusses on the outputs of the service and takes a disinterested view of the development approaches.

The principles which Doug mentions do, of course, have validity. However for me Open Source Software is simply software which should be evaluated alongside proprietary software, with an open source software licence being no guarantee of the value of the software or its sustainability. I agree with Doug on the value of services having a variety of business models for their sustainability. However although the availability of open source software, so that users can install the software on their own server, may help Doug, who runs his own domain, and others who have the technical expertise, time and motivation to be system administrators, for many people this will not be the case. It should also be added that the availability of open source software is not necessarily a guarantee that one’s host institution, which has traditionally provided the IT infrastructure, will install the software. Indeed, even if software, including social software, is installed within one’s host institution, there is no guarantee that the service, the data or the community will be available if one leaves the institution. As Sarah Lewthwaite, in a post entitled University Email: A PhD Exit Strategy, reminded research students who were about to finish their PhD:

Your email account has been an academically sanctioned identity for three or more years. And, unless you have a particularly benevolent institution that guarantees email for life, your account is about to end. Full stop. You may receive a letter asking you to ‘forward all important emails to an external account’ before your account is sedated (suspended) and put out of its misery (erased). If, like me, you have come to rely on your university email, you need an exit strategy, fast.

Sarah went on to reiterate this point:

“Now, two essential factors come into play. They’re so important; so you can quote me.

    1. Your email is not yours. It belongs to your university.
    2. Your university email address constitutes and validates your academic identity. This signifier is about to expire.”

If (as is the case for me) you do not wish to become a system administrator, you should understand the alternative sustainability options. Many people will be happy to make use of free services for which advertising and other uses of activity data help to fund the service, whereas others, such as Doug, will be willing to pay a fee for such advertisements to be removed.

It will be interesting to see the approaches to sustainability which users will select. There will be personal factors which come into play – and as someone who is happy to pay my TV licence fee and accept that when I watch ITV for ‘free’ “I’m the product, not the user”, I have chosen not to subscribe to Sky because of my antipathy towards Murdoch (although I have watched football on Sky in pubs).

Revisiting my initial comments about the Kik Messenger service, I should probably add that there would also be costs and risks in using an open alternative (perhaps Jabber/XMPP). But what if a proprietary approach, though not platform-specific like Blackberry’s BBM, is needed in order to establish that there is a real user need and to establish appropriate technical requirements before the open alternatives are developed? Karl Marx suggested that there were a number of evolutionary stages in society’s development (the slave society, feudalism and capitalism) which had to be passed before a more equitable society was reached. The evidence of Twitter’s success and social networks such as Facebook hints at the difficulties of achieving the seemingly more equitable online environment which, as Doug describes in a post on Why we need open, distributed social networks, supporters of services such as Diaspora claim they will provide. But can we build Openness in one country, or might Blackberry BBM users benefit from moving to a more open cross-platform solution which has an API, albeit a solution which is not open source and for which, according to the FAQ, it does not seem possible to pay for an account?

Twitter conversation from Topsy: [View]

Posted in openness, Web2.0 | 4 Comments »

Openness in One Country

Posted by Brian Kelly on 10 Apr 2012

Reflections on the Openness Guest Blog Posts

A series of guest posts have been published on this blog over the past week or so. As described in the Announcement of a Series of Openness Guest Blog Posts the posts were published following a series of articles about openness which were published in the latest issue of JISC Inform. The guest posts were:

For me these posts, and the articles in JISC Inform, explored the benefits which could be gained through the adoption of a variety of open practices: open access for research papers, development of open educational resources (OERs), making content available on Wikipedia, consuming content provided by Massive Open Online Courses (MOOCs) to support personal staff development and embracing openness by supporting ‘amplified events‘, as well as exploring ways in which Creative Commons licences may be used to support such goals.

“Openness in Higher Education: Open Source, Open Standards, Open Access”

Openness was regarded as a means to an end, and not as a goal in itself. Such approaches reflect the ideas described in a paper on Openness in Higher Education: Open Source, Open Standards, Open Access by myself, Scott Wilson (JISC CETIS) and Randy Metcalf (JISC OSSWatch) in which we provided the following abstract:

For national advisory services in the UK (UKOLN, CETIS, and OSS Watch), varieties of openness (open source software, open standards, and open access to research publications and data) present an interesting challenge. Higher education is often keen to embrace openness, including new tools such as blogs and wikis for students and staff. For advisory services, the goal is to achieve the best solution for any individual institution’s needs, balancing its enthusiasm with its own internal constraints and long term commitments. For example, open standards are a genuine good, but they may fail to gain market acceptance. Rushing headlong to standardize on open standards may not be the best approach. Instead a healthy dose of pragmatism is required. Similarly, open source software is an excellent choice when it best meets the needs of an institution, but not perhaps without reference to those needs. Providing open access to data owned by museums sounds like the right thing to do, but progress towards open access needs to also consider the sustainability plan for the service. Regrettably institutional policies and practices may not be in step with the possibilities that present themselves. Often a period of reflection on the implications of such activity is what is needed. Advisory services can help to provide this reflective moment. UKOLN, for example, has developed a Quality Assurance (QA) model for making use of open standards. Originally developed to support the Joint Information Systems Committee’s (JISC) digital library development programmes, it has subsequently been extended across other programme areas. Another example is provided by OSS Watch’s contribution to the development of JISC’s own policy on open source software for its projects and services. The JISC policy does not mandate the use of open source, but instead guides development projects through a series of steps dealing with IPR issues, code management, and community development, which serve to enhance any JISC-funded project that takes up an open source development methodology. CETIS has provided a range of services to support community awareness and capability to make effective decisions about open standards in e-learning, and has informed the JISC policy and practices in relation to open standards in e-learning development. Again, rather than a mandate, the policy requires development projects to become involved in a community of practice relevant to their domain where there is a contextualised understanding of open standards.

Although the paper was written in 2007, such pragmatic approaches appear particularly relevant for today’s changed environment in which institutions need to make policy decisions which take into account not only a continually changing technical environment, but also reduced levels of funding and changing expectations from user communities, including students who will be paying significant sums of money to attend university and research councils which will be facing pressures to demonstrate the value of investment in research activities.

As described in the Enabling Open Scholarship blog:

The UK’s Research Councils have proposed a revised policy on Open Access (PDF format) which further clarifies RCUK’s definition of OA and strengthens some of the criteria that must be satisfied. In particular, the policy commits to libre Open Access as the agreed RCUK definition, and permits an embargo of not longer than 6 months except for research funded by the Arts and Humanities Research Council and the Economic and Social Research Council.

I welcome this policy, which was featured in yesterday’s Guardian in an article which described how the “Wellcome Trust joins ‘academic spring’ to open up science“. However I do acknowledge that some people, such as Tom Olijhoek, have expressed objections:

I do have strong objections to the acceptance of delayed open access as a valid form of open access. This may be a compromise so that (certain) publishers will accept the policy, however there are enough open access publishers that do not impose an embargo and I don’t see why we (scientists) should give in to the wishes of a specific group of publishers. 

The Hard Line Perspective

Others, such as Glyn Moody, have expressed similar strong objections to a perceived failure to mandate another form of openness – open standards – with Glyn Moody, in January 2012, making his views clear in an article published in Computer Weekly: UK Government Betrayal of Open Standards Confirmed. Glyn Moody’s post, which suggested that “The British government withdrew its open standards policy after lobbying from Microsoft, it has been revealed in a Cabinet Office brief leaked to Computer Weekly“, was based on posts by Mark Ballard published in January 2012, who initially argued that Microsoft hustled UK retreat on open standards, says leaked report but then went on to suggest that Hope shines through crack in lid of open standards coffin. This latter post described how “An informal public consultation [PDF format] meanwhile came out resoundingly in favour of open standards – giving the Cabinet Office a second mandate for its policy“.

I commented on the Government’s informal consultation in a post entitled “UK Government Will Impose Compulsory Open Standards”. In that post I described how fundamentally flawed the survey was: for example, as can be seen in a question on proposed Web service request delivery standards, SOAP v1.1 and v1.2 were given as options but, despite the form inviting alternatives, it was only possible to add a few words. As I concluded in the post:

sadly I see nothing to indicate that the government has an understanding of the implications of any decisions that may be taken as a result of this flawed information-gathering exercise.

The report on the survey acknowledged the survey’s many deficiencies: “Around a quarter of the additional comments were critical of the survey, especially the content and its structure, ease of handling and the time it took to complete“. In its analysis of 970 responses (which include responses to the various sections from me) the report (in a page which, strangely, seems to be scanned and therefore can’t be copied as text) states that “issues were raised regarding the difficulties in implementing an open standards approach … A plea was also made for Government not to impose regulatory constraints or red tape that would make it difficult for suppliers to comply, in particular smaller SMEs“. The so-called UK Government betrayal of open standards seems hardly to be due to lobbying by Microsoft but rather a recognition of the fundamentally flawed survey methodology which, ironically, seemed to regard Microsoft’s RTF format as an open standard but had no place for RSS (in any of its guises) which, whilst not recognised by a formal standards body (unlike Atom), is not a proprietary standard and is widely used on a global basis.

Openness in One Country?

Is it desirable to mandate a particular ideology (a set of ideas that constitute one’s goals, expectations and actions), such as an open standards ideology? Back in 2003 Alastair Dunning, Marieke Guy, Lawrie Phipps and I wrote a paper entitled Ideology Or Pragmatism? Open Standards And Cultural Heritage Web Sites in which we highlighted the risks of a top-down imposition of standards, particularly at a time of innovation. We developed these ideas further in papers on “A Standards Framework For Digital Library Programmes”, “A Contextual Framework For Standards“, “Addressing The Limitations Of Open Standards” and “What Does Openness Mean To The Museum Community?“.

In January 2010 JISC CETIS organised a “Future of Interoperability Standards” meeting. The reports on the meeting included the following comments:

  • “The second day attracted more people than expected: the good news is that quite a few people seem to care about the future of interoperability standards. The bad news is that the day was organized because of the feeling of dissatisfaction with how standardization of learning technologies is taking place. … the standardization process is far from optimal: it is slow, doesn’t always lead to results, or at least not always to results that matter to folks outside of these meetings” Published on Erik Duval’s blog.
  • “… it is generally agreed that the development and adoption of specifications and standards is not a simple and straightforward process …” Meeting report by Li Yuan [PDF format].
In addition, in his position paper Tore Hoel argued that:

… the interoperability standards in the LET domain failed miserably. Second, the ICT developed more to the benefit of Learning, Education and Training than anybody could dream of. All of sudden, anybody (well, so we claim) can do almost anything with technology to support what they want in learning, e.g., finding information, expressing views from different perspectives, building communities, etc. Who asks any more for standards? Well, the enduser shouldn’t anyway, but then the ones that should ask for LET standards are not very enthusiastic either!

It seems that whilst journalists and policy makers may welcome the certainties provided by commitments to open standards, experts in the field continue to have reservations. Experts who are well-versed in the history of mandating standards within the higher education sector may recall the difficulties caused when OSI networking standards were mandated and Coloured Book software was developed to provide a migration path to full use of the OSI network stack. However an alternative set of standards – developed not by ISO, the formal international standards body, but by the IETF, which publishes RFCs (Requests for Comments) – started to become popular, and eventually user pressure led to an embarrassing (and no doubt costly) move away from OSI standards and an adoption of TCP/IP. There is clearly a need to avoid repeating such mistakes!

And yet whilst I continue to warn against premature mandation of open standards, to highlight the value of ‘standards’ (such as RSS) which may not be endorsed by an open standards body, and to point to the benefits which can be gained by use of design principles (such as REST) rather than open standards (such as the Web Services stack), I have previously given my support for research councils’ mandates for open access. Is there not an inconsistency in these views?

For me, the difference is in prioritising the users’ perspectives. Open access can facilitate ease of access to resources by end users. As Ross Mounce pointed out in his guest blog post on Open Access to Science for Everyone:

it is not just academics who benefit from access to scientific literature … There are a huge number and variety of people that would benefit from legally unrestricted, free, Open Access to scientific publications e.g. patients, translators, artists, journalists, teachers and retired academics“.

But the withdrawal of support for standards such as RSS, which are not endorsed by an open standards body, or for standards such as the MP3 audio format, which are encumbered by patents that make them difficult to use in an open source environment, will cause problems for the end user.

Another difference is that policies on open access are primarily about business models for institutions, publishers and funders, rather than technical issues. In contrast policies on open standards will be influenced by marketplace considerations across a variety of sectors (e.g. software vendors, hardware vendors, mobile phone vendors, media companies, etc.) and will affect a much wider group of stakeholders, including academics, researchers and students, as consumers and individuals as well as within their place of work or study.

We can benefit from open practices. But when Engels asked “Will it be possible for this revolution to take place in one country alone?” we saw from Stalin’s doctrine of Socialism in One Country the dangers of such approaches. If we want the government to support open standards across our country, we need to ensure that the accompanying policies are flexible enough to embrace user needs and the complexities of the marketplace. And if this means that users will want to listen to podcasts produced by central and local government and other public sector bodies on their iPods, we should allow them to do so, even if this means continued support for RSS and MP3.

Posted in openness | 2 Comments »

Syndicated Post: The Commons Touch

Posted by Brian Kelly on 7 Apr 2012

As part of a series of guest posts on the broad theme of openness it seems appropriate to publish this blog post, on The Commons Touch, which has been published by Steve Wheeler, Associate Professor of learning technology in the Faculty of Health, Education and Society at Plymouth University, under a Creative Commons licence on his Learning with ‘E’s blog.

Steve’s post provides a useful introduction to Creative Commons and the benefits which Creative Commons can provide across the sector and concludes by suggesting that Creative Commons is “going to be very big news indeed for all web users in the near future“.

I agree, but how should one reuse resources published under a Creative Commons licence, as I’m doing here, and what are the associated risks?

The licence allows me to reuse the content for non-commercial purposes provided I give acknowledgement to the rights owner (as I have done) and I make my post available under the same licence conditions (and I have included the rights statement and Creative Commons logo from the source post).

Although I am under no legal obligation to inform Steve of my reuse of his post I have chosen to do so, so that he is not surprised if he sees the republished post.

I did point out that replicated web content may (slightly) undermine the Google ranking for the resource, as Google can treat replicated content as an attempt to spam Google’s index. However, as Steve is aware and has commented in his post, the value of providing an additional access path for such content will outweigh this slight concern.

Reusing content provided under a Creative Commons licence can also lead to the question of what the content actually is. In this case I have chosen to reuse the words, images and links, although the underlying HTML representation may have changed since we use different blog platforms. Since Steve has not applied a No-derivatives clause in the licence I could, however, have chosen to edit the content, which might have meant omitting the image and links provided in the source material. It should also be noted that in a comment made to the blog post Joscelyn pointed out a minor error in the original post – the post stated that “Much of the content on Wikipedia for example is licensed under Wikimedia Commons – a version of CC” but in fact “Wikipedia text is licensed with Creative Commons Attribution Sharealike (CC BY SA) licence not a version of a CC licence“. I could have edited the original post but chose to include an editor’s note.

The final comment I would make is that the licence which applies by default to content published on this blog is CC-BY; a more liberal Creative Commons licence which does not restrict reuse to non-commercial purposes or require reuse to apply the same licence. The blog now contains resources with a variety of licences which, ideally, would be described in a machine-understandable form through use of tools such as the WordPress Creative Commons License Manager or the Open Attribute plugins. The latter describes how:

OpenAttribute allows you to add licensing information to your WordPress site and individual blogs. It places information into posts and RSS feeds as well as other user friendly features. This plugin is a part of the OpenAttribute project which is part of Mozilla Drumbeat.

However these plugins are not available on the platform, so it does not currently seem to be possible to describe the rights for blog posts and embedded content in a machine-readable fashion. But since this is the case for many digital resources, this is not of great concern to me.
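Even without a plugin, machine-readable rights can be expressed with the `rel="license"` pattern which the Creative Commons licence chooser emits in its suggested HTML. A minimal sketch of how a consumer might detect such a statement, using only the Python standard library (the markup below is illustrative, not taken from this blog):

```python
from html.parser import HTMLParser

class LicenseLinkParser(HTMLParser):
    """Collect href values of <a>/<link> elements carrying rel="license"."""
    def __init__(self):
        super().__init__()
        self.licenses = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # rel is a space-separated list of link types in HTML.
        rels = (attrs.get("rel") or "").split()
        if tag in ("a", "link") and "license" in rels:
            self.licenses.append(attrs.get("href"))

# Illustrative markup in the style produced by the CC licence chooser.
html = """<p>This work is licensed under a
<a rel="license" href="http://creativecommons.org/licenses/by/3.0/">
Creative Commons Attribution 3.0 licence</a>.</p>"""

parser = LicenseLinkParser()
parser.feed(html)
print(parser.licenses)  # ['http://creativecommons.org/licenses/by/3.0/']
```

A search engine or aggregator crawling a blog could apply the same check per post, which is exactly the machine-readable signal the plugins mentioned above aim to provide.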

I am still in agreement with Steve that Creative Commons is “going to be very big news indeed for all web users in the near future” and we should all develop (and share) practices for consuming other people’s content which they have provided using such licences. I’d also welcome suggestions as to who should be described as the author of this post as, unlike other guest posts I’ve published this week, this contains significant intellectual content from me. I think this will have to be described as a post with joint authors.

The Commons Touch

Many people assume that because the web is open, any and all content is open for copying and reuse. It is not. Use some content and you could well be breaking copyright law. Many sites host copyrighted material, and many people are confused about what they can reuse or copy. My advice is this – assume that all content is copyrighted unless otherwise indicated. In the last few years, the introduction of Creative Commons licensing has ensured that a lot of web based content is now open for reuse, repurposing and even commercial use. The Stanford University law professor Lawrence Lessig is one of the prime movers behind this initiative. Essentially, Creative Commons has established a set of licences that enables content creators to waive their right to receive any royalties or other payment for their work. Many are sharing their content for free, in the hope that if others find it useful, they will feel free to take it and use it. Creative Commons is a significant part of the Copyleft movement, which seeks to use aspects of international copyright law to offer the right to distribute copies and modified versions of a work for free, as long as it is attributed to the creator. Any subsequent reiterations of the work must also be made available under identical conditions. In keeping with similar open access agreements, Copyleft promotes four freedoms:

Freedom 0 – the freedom to use the work,
Freedom 1 – the freedom to study the work,
Freedom 2 – the freedom to copy and share the work with others,
Freedom 3 – the freedom to modify the work, and the freedom to distribute modified and therefore derivative works.

Finding free-to-use images on the web is now fairly easy. A normal search will unearth lots of images, but these are not necessarily free images: many will have copyright restrictions. To find the free stuff go to Google and click on the cog icon at the top right of the screen. Select the Advanced Search option. Next, scroll down the screen until you find the drop-down box labelled ‘usage rights’. You will be presented with four options:

  1. Free to use or share
  2. Free to use or share, even commercially
  3. Free to use, share or modify
  4. Free to use, share or modify, even commercially

Whatever option you choose, you will be presented with a reduced collection of images that still meet the requirements of the search, but under the conditions of that specific licence. Now you have a collection of images you can use under the agreements of Creative Commons. Use them for free under these agreements and you are complying with international copyright law. Don’t forget to attribute the source!

So why would people wish to give away their content for nothing? I have previously written about my own personal and professional reasons for doing so in ‘Giving it all away‘, but just for the record, I will summarise:

Giving away your content for free under a CC licence ensures that anyone who is interested in your work does not have to pay for it or worry about whether they are licenced under copyright law to use your content. In today’s uncertain economic climate, it makes sense to be equitable and to give content away that others have a need to see and can make good use of. It also means that users will do some of your dissemination for you. Your ideas will be spread farther if you give them away for free than they necessarily will if you ask people to pay a copyright fee or royalty. If you allow repurposing of your content, the rewards can be even greater. Some of my slideshows have been translated into other languages. Having your content translated into Spanish, for example, opens up a huge new audience not only in Spain, but also most of the continent of South America. Many are now licensing their work under CC because they know it makes sense. Much of the content on Wikipedia for example is licensed under Wikimedia Commons – a version of CC [Note that in a comment on Steve Wheeler’s post Joscelyn has pointed out that “Wikipedia text is licensed with Creative Commons Attribution Sharealike (CC BY SA) licence not a version of a CC licence“]. So look out for Creative Commons licensing – it’s going to be very big news indeed for all web users in the near future.

Image source

Creative Commons Licence
The Commons touch by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

Posted in Guest-post, openness | 6 Comments »

Guest Post: Openly Commercial

Posted by ukwebfocusguest on 6 Apr 2012

Creative Commons has an important role to play in providing a legal framework which permits reuse of resources. But as Joscelyn Upendran describes in this guest blog post, how the Creative Commons NC (non-commercial) licences are interpreted can cause confusion. Will CC+ provide an answer?

Openly Commercial

The Non-commercial component of the Creative Commons (CC) licences has occasionally given rise to some uncertainty and debate amongst those interested in copyright licensing. (See About the Licences for a reminder of the different CC licences.)

The CC licences which contain the NC component refer to commercial use as use:

in any manner that is primarily intended for or directed toward commercial advantage or private monetary compensation.

So what does that cover exactly?

The CC guidance below and from @mollyali is very useful but, as with many things of a legal nature, it does not provide absolute certainty, as there are usually a number of factors at play. As described in the FAQ which asks ‘Does my use violate the NonCommercial clause of the licenses?‘ on the Creative Commons wiki:

In CC’s experience, whether a use is permitted is usually pretty clear, and known conflicts are relatively few considering the popularity of the NC licenses. However, there will always be uses that are challenging to categorize as commercial or noncommercial. CC cannot advise you on what is and is not commercial use. If you are unsure, you should either contact the creator or rightsholder for clarification, or search for works that permit commercial uses. Please note that CC’s definition does not turn on the type of user: if you are a non profit or charitable organization, your use of an NC-licensed work could run afoul of the NC restriction; and if you are a for-profit entity, your use of an NC-licensed work does not necessarily mean you have violated the term.

A CC-commissioned study on “how people understand ‘noncommercial use’” was published in 2009. @plagiarismtoday provides a good potted summary of the report. Notwithstanding the 2009 report and “known conflicts” relating to the NC licences being “relatively few”, the NC component of the CC licence still generates much deliberation and debate.

Some objections to the NC licences relate to a viewpoint that they are not truly ‘open’ as they block licence interoperability and frictionless remix and reuse of content. The NC licence remains popular, however, and some CC adopters may well experiment initially by using a NC licence before choosing more permissive licences in due course.

The CC BY NC SA licence is a popular choice of licence amongst Higher Educational Institutions (HEIs). The Open University’s OpenLearn, MIT Open Courseware (MITOCW) and Open Yale Courses (OYC) all use a Creative Commons (CC) BY NC SA licence for their open educational resources (OER).

The JORUM Final Report published in 2011, indicates that the majority of the resources deposited within the JORUM repository are from the Academy/JISC OER Programme and a high percentage is from HEIs and licensed with a CC BY NC SA licence.

Although OpenLearn, MITOCW and OYC all use a CC BY NC SA licence, all three institutions provide additional “guidelines intended to help users determine whether or not their use of OCW materials would be permitted”.

There are differences between the guidelines provided by the three institutions in the degree of permissiveness. For example OpenLearn permits “educational institutions, commercial companies or individuals to use the CC licensed content”, permits use of the “content as part of a course for which you charge an admission fee” and permits the charging of “a fee for any value added services you add in producing or teaching based around the content providing that the content itself is not licensed to generate a separate, profitable income”. This would therefore appear to permit a commercial training company to reuse OpenLearn CC BY NC SA licensed content as part of a fee-paying training course as long as the licensed content itself is not monetised.

OYC, by contrast, does not permit a site that “provides and/or promotes services for which the user will be charged a fee (e.g., tutor services)” to use the CC licensed content.

MITOCW, whilst stating that “A corporation may use OCW materials for internal professional development and training purposes“, also states “A commercial education or training business may not offer courses based on OCW materials if students pay a fee for those courses and the business intends to profit as a result“. So a commercial organisation can carry out staff development using MITOCW CC BY NC SA licensed content but may not provide chargeable external training.

Does it matter that, even though MIT, Yale and the Open University all use the CC BY NC SA licence, they intend and permit different uses of their licensed content?

Some of the benefits of CC licences include their ease of use, the familiarity of the symbols and the speed with which the human-readable Commons Deed can be understood. This enables the user of the licensed content to glean quite easily and quickly what their rights and obligations are in respect of the content. The provision of additional guidelines in the above examples may undermine some of these benefits and place an unnecessary burden on the user. It also contributes to uncertainty and detracts from any possibility of consensus on the use and understanding of a NC licence.

The reason many institutions choose the NC licence may be to control the potential, or perceived potential, commercialisation of the licensed content. There is quite a compelling argument that content arising from state-funded programmes should be licensed with the most permissive terms. For example the US Department of Labor is funding $2 billion over four years to create OER materials for career training programs in community colleges. Where new learning materials are created using the grant funds, those materials must be made available under a CC Attribution licence (CC BY).

I imagine it would not be easy in UK universities and colleges to demarcate “state-funded content” from a university’s “privately funded content”. Many HEIs and FEIs have a revenue-generating ‘business arm’. What is state-funded and what is the commercial arm of the institution may be quite blurred.

To achieve the widest possible access and participation in global education the most appropriate CC licence for ‘open’ educational resources is the CC BY licence. But it doesn’t always appear to be such an easy procedural or cultural step for organisations to take.

If an institution decides that a CC licence with a NC component is the most appropriate licence for its needs, the CC+ Protocol may be worth exploring – for example by universities which may be making moves towards becoming private.

Creative Commons developed its free licences to enable people to share their works as they choose. Using the CC+ protocol permits copyright owners to easily accommodate acceptable non-commercial uses while directing commercial traffic to their own fee-based agreement.

What is CC+?

CC+ is a Creative Commons license plus another agreement, for example:

A copyright owner may pair a CC Attribution-Non-Commercial license [that is the CC] with a non-exclusive commercial agreement [that is the +] enabling a copyright owner to license the work commercially for a fee.

The [+] is a means to provide a simple click through to rights or opportunities beyond those offered in the CC licence. The creator is able to leverage the expanded exposure that results from otherwise freely distributed content.

CC+ is not another CC licence; rather it is a means to point users toward the copyright owner’s own “extension” of rights that may be additional to the existing CC license. The copyright owner is responsible for constructing the license that expresses those additional terms and conditions.

CC+ has many uses and advantages for both commercial and non-commercial users, for example:

  • A copyright owner of content may choose to use a CC Attribution Non-Commercial (CC BY NC) Licence to make content available on the web so they can be shared easily and freely on a non-commercial basis providing attribution is given
  • The copyright owner in this example may pair this CC BY NC licence with a + click-through to non-exclusive rights beyond those permitted under the CC licence such as allowing commercial use in return for a fee.

Other additional permissions beyond those provided in CC licences may include: permission to reuse without providing attribution (paired with any of the six CC licences); or permission to use without having to share alike (paired with CC BY SA or CC BY NC SA licences) or permission to create derivative works (paired with the CC BY ND or CC BY NC ND licences).
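Conceptually, CC+ amounts to two statements about the same work: its baseline public licence and a pointer to further terms (the ccREL vocabulary calls this pointer cc:morePermissions). A sketch of that data model, with the work and commercial-terms URLs invented for illustration:

```python
# Two assertions a CC+ publisher makes about a work, modelled as simple
# (subject, predicate, object) triples in the style of the ccREL
# vocabulary. The work and terms URLs below are purely illustrative.
WORK = "http://example.org/photo.jpg"

triples = [
    # The baseline public licence: free non-commercial reuse with attribution.
    (WORK, "cc:license", "http://creativecommons.org/licenses/by-nc/3.0/"),
    # The "+": where to obtain rights beyond the public licence,
    # e.g. a fee-based commercial agreement.
    (WORK, "cc:morePermissions", "http://example.org/commercial-terms"),
]

for subject, predicate, obj in triples:
    print(subject, predicate, obj)
```

The public licence triple is what every reuser sees; the morePermissions triple is the click-through signpost described above, left entirely under the copyright owner’s control.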

CC+ is another means by which copyright owners are able to exercise their copyright as they choose, on their own terms. Using the CC licence enables the free, easy and legal means of sharing on the web whilst the “extension” of permissions provided by the + has the benefit of clear “signposting” to commercial terms for additional uses of the copyrighted works.

This is a guest post by Joscelyn Upendran (@Joscelyn on Twitter). Any views expressed are personal views and not that of any organisation or employer, and not intended to be legal advice nor should they be relied upon as such.

Posted in Guest-post, openness | 3 Comments »

Guest Post: Opening Up Events – The GEII Event Amplification Toolkit

Posted by ukwebfocusguest on 5 Apr 2012

In today’s guest blog post on openness Kirsty Pitkin introduces the JISC-funded Greening Events II project and describes her involvement in developing an event amplification toolkit which aims to document best practices for opening access to conferences which, as touched on recently in a post on Adventures in Space, Place and Time by my colleague Marieke Guy, have traditionally been “trapped in space and time”. It is particularly appropriate that this post is published today, the day after the Amplified Conferences Wikipedia entry has been reinstated.

Opening Up Events

Workshops, seminars, conferences: just some of the learning opportunities that are often closed, with any knowledge or resources contained therein accessible only to those who are able to physically attend a fixed point in time and space where the event takes place. Yet these are some of the key ways we can disseminate and share knowledge in a really interactive, practical way.

UKOLN has a well-established role at the forefront of what have become termed “amplified” or open events. These are events where the event materials and discussions are amplified out via the local audience to their own professional networks using online social networking tools. Such activities overlap neatly with the emergence of hybrid events, which are specially designed to allow a remote audience to participate in an event simultaneously with the local audience. Amplified events can often be used as a stepping stone for organisers who are consciously looking to move into hybrid events, or organisers who are just looking to increase their audience without substantially increasing the carbon impact of their event.

The JISC GEII Event Amplification Toolkit

Event amplification at IWMW 2012

I have been working with UKOLN in this area to help develop an Event Amplification Toolkit, as part of the JISC Greening Events II project. The toolkit is designed to help event organisers decide what type of event is most appropriate for their needs (a traditional, hybrid or a fully virtual event) and provides tools to help organisers approach the task of amplifying their event.

The toolkit has been developed using lessons drawn from a series of case study events, including the Institutional Web Management Workshop (IWMW 2011), UKOLN’s Metrics and Social Web Services workshop, and most recently the 7th International Digital Curation Conference (IDCC11). These lessons have been condensed into a number of simple templates and two-page best practice briefings, which can be mixed and matched according to the event organisers’ requirements. As new online services are emerging all the time, whilst others wane in popularity, these best practice briefings focus on general amplification activities, rather than specific third party tools. The toolkit covers approaches to live video streaming, live commentary, discussion, and curation tools, providing examples of existing services, business models, resourcing requirements and risks which need to be considered. The templates provide models for assessing risk and structuring an amplified event to achieve specific outcomes.

Open Approaches vs Open Tools

Whilst an event may be considered open by virtue of being amplified, many of the individual tools and services used to achieve this are third party commercial services, which may vary in their degree of openness and accessibility (depending how you define open, of course!). This means that organising an open event can become a pragmatic exercise – using open platforms where available and offering alternative options where necessary to help make the event accessible to the widest range of users.

A prime example of this is the most popular tool for use at amplified events: Twitter. Whilst Twitter is considered to be one of the more open social media platforms, participants must have an account with the service in order to take an active part in an event discussion. If you don’t have an account, you can only watch the discussion unfold, you cannot contribute. Opening up an event to the widest possible audience means you must consider those people who do not want to have a direct relationship with a service provider, like Twitter, by establishing an account with the service, no matter how little personal information is required in the process. Tools like CoverItLive and ScribbleLive can provide the option for remote participants to offer comments and questions publicly without a registered account and without having to part with any information about their identity. The role of an event amplifier would then involve integrating these comments into the wider discussion in a sensitive manner, particularly if that discussion is taking place prominently on Twitter.

As this example demonstrates, an amplified event may need to provide a mix of access points to open up all aspects of the event. This means that, in many ways, openness in an events context is less about the specific technologies employed and more about the attitude of the organisers and the way they blend a selection of tools to provide open access. An open attitude when running an event could be summarised as:

  • A commitment to the online audience as first class citizens, providing the same opportunities to access and interact within the live event as those physically in attendance.
  • A commitment to sharing resources in multiple contexts as an aid to future discovery and reuse.
  • A commitment to linking between resources so the audience has a clear path to guide them to other event resources or the same resources in alternative formats.
  • A commitment to the use of creative commons licences, with respect to the speaker or copyright holder.

Looking Forward

We intend to amplify the toolkit itself according to these same principles, using the same techniques detailed in the report. Our hope is that these resources will help others to approach the problem of opening up their events and to reduce the carbon impact of their events by enabling more people to engage from afar.

Kirsty Pitkin is a professional event amplifier. This is a newly emerging role, which involves working with conference organisers to help deliver an online dimension to traditional events by leveraging social media and other online tools to expand the audience for the event. She explores current research and best practice associated with amplified and hybrid events in her blog. Kirsty holds a Masters in Creative Writing and New Media from De Montfort University.

Twitter: @eventamplifier

Posted in Guest-post, openness | 4 Comments »

Guest Post: Professional Development Using Open Content

Posted by ukwebfocusguest on 4 Apr 2012

As described recently, a series of guest blog posts on open practices are being published this week on the UK Web Focus blog which build on ideas published in the latest issue of JISC Inform. Having explored what openness may mean in the context of research, education and libraries, in today’s guest post my colleague Marieke Guy explores “Professional Development Using Open Content“.

As a home worker Marieke takes a pro-active approach to her professional development as can be seen from her posts on her Ramblings of a Remote worker blog. In this post Marieke describes her participation in a Massive Open Online Course (MOOC).

Professional Development

For me professional development has always been about being proactive. Patience is not one of my virtues. I’m not the sort of person who would sit and wait for my team leader to send me on a course, though I’m always open to suggestions.

Professional development according to Wikipedia refers to “skills and knowledge attained for both personal development and career advancement“. The way I see it, there are areas that I need to know more about to make me better at my job, and then there are areas that I want to know more about to give my job context and meaning. The goal is to balance the two and also to fit them alongside my day job.

I work from home (see my Ramblings of a Remote worker blog) and already travel a reasonable amount so any activities I can do from the comfort of my own swivel chair suit me fine. Over the last few years online professional development has really taken off, in a similar way to online learning. Although many courses cost money, there is now a plethora of open content out there that can be used in any way you choose.



Massive Open Online Course crib sheet. This crib sheet was created for a workshop being presented at ISTE 2011 on using a MOOC model for professional development by Jeannette Shaffer

One recent addition is the Massive Open Online Course, or MOOC. These courses are free, open to all and comprise open content. They tend to be hosted by Higher Education institutions, and students from that particular institution are often encouraged to register. Often there is no credit for the course (though some use the Mozilla open badge system or similar approaches) and no feedback for participants from the course leaders. The approach taken is a fluid one in which participants are encouraged to blog about what they learn and to interact with other participants by commenting on their posts.

As described in “7 things you should know about MOOCs” (PDF format):

For the independent, lifelong learner, the MOOC presents a new opportunity to be part of a learning community, often led by key voices in education. It proves that learning happens beyond traditional school-age years and in a specific kind of room … Certainly as MOOCs develop, the scale on which these courses can be taught and the diversity of students they serve will offer institutions new territory to explore in opening their content to a wider audience and extending their reach into the community.

The Massive Open Online Course crib sheet which is illustrated was created by Jeannette Shaffer and is available from Flickr.

Openness in Education

My first MOOC learning endeavour has been the Introduction to Openness in Education course (see the #ioe12 tweets) co-ordinated by David Wiley, associate professor of instructional psychology and technology at Brigham Young University, US. This was an open course about openness in education – a little postmodern?! I came across the course via a colleague’s Twitter feed and after registering discovered that a couple of other colleagues were also giving MOOCs a go. We ended up meeting for coffee (See my post on #ioe12 Coffee Breaks with a Little Open Licensing Thrown In) to discuss how things had gone so far. Always good to have some support.

I’ve found the course a challenge, mainly due to time constraints, but also because the concept of ‘open’ is a complex one. What does being ‘open’ truly mean? Some of the more orthodox advocates of the open movement could offer up a checklist of criteria to help us decide whether a licence, piece of software, resource, data set, policy, … (add whatever takes your fancy) is strictly open. For them openness is an ideology and a goal. However, much of what is out there falls into the spaces in between, often for good reason.

I’d agree that the movement towards openness is a good thing, though I am still unsure about how I feel regarding many aspects of it. Openness is not always possible or desirable, and it brings with it responsibilities. My current work activities take me into the area of Research Data Management, where FOI has a big impact. Requests for data sets (such as the recent Philip Morris smoking research request) are becoming more frequent and are not always made for just reasons. A colleague of mine recently pointed me in the direction of a paper written back in 2000 by Marilyn Strathern entitled The Tyranny of Transparency. To summarise: transparency measures often have paradoxical outcomes, such as eroding trust and turning knowledge into information rather than information into knowledge. Openness, like free speech, is a double-edged sword, and we’d do well to ensure that we use the tool appropriately.


All my posts relating to my experiences of MOOCs and learning from open content are available from my blog. There’s no doubt that use of online courses and open content will significantly contribute to my professional development in the future. Learning in this way gives me the flexibility that my job and lifestyle require; however, I know that I need to be disciplined and stay motivated if I want to make the most of these opportunities. As Oscar Wilde, a man who held a fairly cynical view of formal education, once said: “Nothing that is worth knowing can be taught“. Maybe a pro-active approach using MOOCs would have been more up his street!


Posted in Guest-post, openness | 2 Comments »

Guest Post: Librarians meet Wikipedians: collaboration not competition!

Posted by Brian Kelly on 3 Apr 2012

As part of the series of guest blog posts which describe how the higher education sector is engaging with various aspects of openness, Simon Bains, Head of Research and Learning Support and Deputy Librarian at The John Rylands University Library, University of Manchester, describes how the university library is engaging with Wikipedia.

It isn’t really news to say that the world libraries inhabit has changed almost beyond recognition in less than 20 years. Perhaps with the benefit of hindsight it will be possible to make sense of the rapid technological change and resulting shift in behaviours which combine to challenge the collections, services and perhaps the very existence of libraries. Whilst we continue to live through this information revolution, we seek to make educated guesses at the next trend, respond as we can to the very different expectations of our user communities, and develop strategies to ensure we remain relevant and sustainable in challenging times.

Several trends in particular seem to me to have made a marked contribution to the seismic landscape disruption which has followed the invention of the Web:

  1. Transition to online from print – published content, particularly journals, being made available online and becoming, fairly quickly, the dominant delivery channel.
  2. Challenges to traditional models of publishing – the rise of the open access agenda, and a general trend towards widespread support for openness, not just for published material but for underlying data, with a view to fostering sharing, reuse and linking.
  3. The Social Web – interaction and conversation, sharing, tagging, developing personal networks for both social and business purposes. Publication is no longer primarily about dissemination, but about sharing, reuse and conversation.
  4. The development of large scale global public and commercial content hubs which have grown to dominate the ways in which information is published, discovered, and shared.

These, of course, aren’t entirely independent developments, and can instead be seen as components of an evolutionary (if not revolutionary) process which has brought us to today’s information landscape. Equally, it is clear that change continues, and recent challenges to traditional scholarly publishing models serve to underline that.

The creation of one of these ‘hubs’ is the focus of this blog post. In just a few years we have seen the very rapid ascendency of Wikipedia as the preferred starting point for the sort of reference enquiry that would once have been directed to a traditionally published encyclopaedia, or a library reference desk. Despite scepticism, it has become a hugely popular resource, with evidence to support the reliability of crowd-sourced factual information, as a result of strict editing policies and zealous, perhaps over-zealous, editors.

In 2007, whilst Digital Library Manager at the National Library of Scotland, I was interested to read of a project to use it to make library collections more widely known, and this encouraged me to initiate work there to do likewise. Unfortunately, the timing was not good: concern about the credentials of editors, and allegations about attempts to influence Wikipedia entries, had resulted in very careful vetting and an aversion to anything which even hinted at advertising, even from the cultural sector. Some forays into relevant Wikipedia entries in fact resulted in my web developer’s account being shut down almost immediately. Somewhat discouraged, we directed our effort at the more welcoming global networks, such as Flickr and YouTube.

Since then, Wikipedia seems to have adopted a more mature stance, still managing entries very carefully, but recognising that partnership with organisations with information which enriches its entries is to be welcomed rather than resisted (although a recent verbal exchange with a Wikipedia editor makes me think that this is still somewhat dependent on the outlook of individual editors). I was very interested to see the creation of the concept of the ‘Wikipedian in Residence’ at the British Museum, although my move from the National Library back into HE required a focus on other priorities.

Advertisements for the Wikipedia Lounge in the John Rylands University Library

An interior shot of the John Rylands Library in central Manchester

My move to The John Rylands University Library at the University of Manchester coincided with contact from Wikimedia UK, which was now actively seeking partnerships with education institutions, recognising the mutual benefit of working with students, academics and libraries to foster more effective use of Wikipedia as a resource, to encourage content creation and editing by experts, and to link entries to relevant resources. As a library at a major research-intensive institution, with the additional responsibility of stewarding an internationally important special collections library, we were identified as a particularly valuable pilot partner. For our part, influenced very much by the sort of strategic thinking coming from organisations like OCLC, which encouraged libraries to collaborate with large information hubs, we were very enthusiastic about a partnership which would help us connect to a global network-level hub and also address the digital literacy agenda.

We have begun the engagement process, which we hope will develop into a substantial project which includes a ‘Wikipedian in Residence’. To date, we have hosted a ‘Wikipedia Lounge’, which saw academics and students meet Wikipedians to learn more about getting involved and creating content. This event attracted academics, students and librarians, and we have plans to repeat it. We are now in discussions with Wikimedia UK about setting up a 12-month pilot project which would see a Wikipedian in Residence based at the John Rylands Library, working with our curators, students and academics to expose our collections, encourage further research and learning, develop a network of Wikipedians at Manchester (we already have some), and place Wikipedia within our digital literacy strategy as a powerful tool which, when used effectively, can play an important part in University teaching and research. There are already a number of references to our collections in Wikipedia entries – biographical pages such as that of the author Alison Uttley – which serve to demonstrate the very great untapped potential. Perhaps the best entry which focuses on a specific item in our collections is for the Rylands Library Papyrus P52, also known as the St John’s fragment (illustrated), which ranks as the earliest known fragment of the New Testament in any language.

Fragment of St John’s Gospel: recto

Of course there are concerns about Wikipedia: it may not be reliable; it can be used as an easy substitute for comprehensive research and study; it can be difficult to change erroneous content, etc. But to ignore it or dissuade students from its use reminds me of the approach that was sometimes taken in the face of the rapid rise of Google in the late 1990s. It is a battle we are unlikely to win, and so much more could be achieved by working with, not against, the new information providers, especially when so much of what we are about has synergy: open access, collaboration, no profit motive, etc.

It is early days for us in this engagement, but I have high hopes. And I’m sure that when we introduce our Wikimedia UK contacts to the wonders of the John Rylands Library, they will find it impossible not to see the obvious potential!

Simon Bains is Head of Research and Learning Support and Deputy Librarian, The John Rylands University Library, University of Manchester. You can see his Library Website staff page or follow him on Twitter: @simonjbains

Posted in Guest-post, openness, Wikipedia | 8 Comments »

Guest Post: Being Openly Selfish and Making “OER” Work for You

Posted by ukwebfocusguest on 2 Apr 2012

This is the second guest post in the series which, as described last week, explores various aspects of openness addressed in the current issue of the JISC Inform newsletter.

In this guest post James Burke (@deburca) explores what the term OER currently means to him, although he admits “I’m sure that it will mean something different to me 12 months from now…“.

What is/are OER?

Even though OER has a new global logo, it is one of those terms that appears to have no formally agreed definition, and people’s use of and reference to the term changes over time.

“The term OER is broad and still under discussion” and over the past few years OER has been used as a “supply-side term” and remained “largely invisible in the academy”. Metaphors (“Open Education and OER is like…?”) have been used to take a light-hearted look at potential issues and tensions, such as those between “Big OER and Little OER” and all in between. On the definition front, Stephen Downes has written a useful “Half an Hour” essay, “Open Educational Resources: A Definition”, and David Wiley (Open Content and the 4Rs) recently put forward “2017: RIP for OER?” (or not…)

The FAQ page for Open Education Week (held on 5-10 March 2012) provides a useful, current overview of OER and Open Education.

One of the “core attributes” of OER is that access to the “content is liberally licensed for re-use in educational activities, favourably free from restrictions to modify, combine and repurpose the content; consequently, that the content should ideally be designed for easy re-use in that open content standards and formats are being employed”. So, now that I have re-used the new and “liberally licensed” OER global logo in this post I have a number of options and queries regarding adherence to the licence and provision of any requested attribution such as “how do I properly attribute a work offered under a Creative Commons license?” leading me to “what are the best practices for marking content with Creative Commons licenses?”.

I’ll settle for using: “OER Logo” © 2012 Jonathas Mello, used under a Creative Commons license: BY-ND

…but maybe I should have included this attribution directly beneath the image to be less ambiguous to the human reader?, or maybe I should have associated the licence and attribution more “semantically” and unambiguously with the image for the “machine reader”?, or maybe I should have just have made my life simple and just used “Kevin” to add attribution directly to the image to cater for both human and machine readers?, and what is this “machine” anyway…?

Machine readable, but what “machine”?

The Creative Commons license-choosing tool provides you with a snippet of RDFa that you can embed in your web-based content with the idea that this “machine readable” metadata can be automatically identified and extracted by “machines” such as search engines and made available via their search, e.g. Google Advanced Search. This “machine readable” licence can also be used to facilitate accurate attribution via browser and CMS plugin “machines” such as Open Attribute as well as being used for automated cataloguing, depositing etc..
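To make the idea of a licence-reading “machine” concrete, here is a minimal sketch of the extraction step. The sample snippet below is illustrative, not the literal output of the Creative Commons chooser; the sketch uses Python’s standard html.parser to pull out the target of a rel="license" link, the core pattern the chooser’s RDFa snippet relies on:

```python
from html.parser import HTMLParser

class LicenseExtractor(HTMLParser):
    """Collect the href of any <a rel="license"> link, the pattern
    used in RDFa snippets produced by the Creative Commons chooser."""
    def __init__(self):
        super().__init__()
        self.licenses = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # rel may hold several space-separated tokens, so split before testing
        if tag == "a" and "license" in (attrs.get("rel") or "").split():
            self.licenses.append(attrs.get("href"))

# A simplified fragment of the kind of markup the licence chooser emits
snippet = (
    '<a rel="license" href="http://creativecommons.org/licenses/by-nd/3.0/">'
    '<img alt="Creative Commons Licence" '
    'src="http://i.creativecommons.org/l/by-nd/3.0/88x31.png" /></a>'
)

parser = LicenseExtractor()
parser.feed(snippet)
print(parser.licenses)
```

A crawler or an attribution tool such as Open Attribute does something similar in spirit, albeit with a full RDFa parser that can also recover the work’s title and attribution metadata.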

Creative Commons is not the only “machine readable” licence: many countries have their own “interoperable” Public Sector Information/Open Government Licences, such as the UK Government Licensing Framework, and many “vanity licenses” for content in both the public and private sectors have also emerged, but Creative Commons remains the most widely used technically and legally interoperable licensing framework.

The Google Advanced Search help refers to their usage rights filter but states that this filter is used to show “pages that are either labeled with a Creative Commons license or labeled as being in the public domain”. Bing does not have an equivalent usage rights filter, but its “advanced operators” can be used to derive similar results, e.g. inbody: “search term” loc:gb can be used to find UK content that likely has a Creative Commons licence deed link in the metadata or in the HTML body.
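As an illustration, such an operator-based query can be composed programmatically. The endpoint and parameter handling below are my assumptions about Bing’s public search URL rather than documented API behaviour, so treat this as a sketch:

```python
from urllib.parse import urlencode

def bing_query_url(phrase, location="gb"):
    """Build a Bing search URL combining the advanced operators mentioned
    above: inbody: to match text in the page body, and loc: to restrict
    results to a given country. Both operators travel inside the single
    q= parameter."""
    query = f'inbody:"{phrase}" loc:{location}'
    return "https://www.bing.com/search?" + urlencode({"q": query})

# e.g. look for UK pages that link to a Creative Commons licence deed
url = bing_query_url("creativecommons.org/licenses")
print(url)
```

The urlencode call percent-encodes the quotes, colons and slashes so the operators survive intact in the query string.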

The implementation of Creative Commons licences into content can be quite variable ranging from using a Creative Commons icon in a PDF file that contains no link to the license deed through to a complete snippet of RDFa containing the full works title together with attribution, source and more permissions URLs.

Mainstream Web Applications such as Flickr, Soundcloud, Vimeo, Scribd and SlideShare all allow the association of a Creative Commons licence with uploaded image, audio, video or “Office” document content, which is then publicly visible and searchable via Google, Bing et al with the site: operator and a usage rights filter. Oddly, for most of these Web Applications Google and Bing provide the best search results, and usage rights filters within the Web Applications themselves can be a rare find.

So, to me, the “machine” that is “reading” OER is really any Web application that can consume openly licensed content accessible via the Web, and for convenience the best way for me to find this “stuff” is via the mainstream search engines, even if I do have to use a usage rights filter…

Openly licensed resources and “stuff” are readily available on the Web

Arguably, the Internet and the Web would not be where they are today without being “open” and built upon a “stack” of standards and simplifications that are specifically free of patents and the associated licences that need to be paid for. The Web has significantly lowered the cost of software and content collaboration, creation and publishing, and has encouraged the embracing of serendipity.

“Most of the Internet is run by volunteers who do not get paid, most of the Internet is run by amateurs”. – (video: Innovation in Open Networks) Joi Ito, Thinking Digital May 2010 (@joi)

Joi Ito speaking at #TDC10 from Codeworks Ltd on Vimeo.

One of “open’s” main advantages over proprietary digital content has been the lowering of cost, including the cost of failure. The main source of friction in the production of digital content used to lie primarily at the content layer in the stack (see prezi and video above), but as this eased, the greatest cost and friction in consuming and publishing content shifted towards the legal domain. With the introduction of open licensing frameworks such as Creative Commons, which offer worldwide legal interoperability, this legal friction is being eased.

More and more educational content is going through a “rights clearance” process and being published by institutions under more permissive open licenses “openly” on the Web – and by “openly” I mean visible to search engines and not behind authentication “walls” such as learning platforms. Quite often this Web-published content is a copy, with attribution back to the institution and the institutionally held source, and is copied to more than one location – if you have a PowerPoint presentation, why not upload it to both Scribd and SlideShare?

This content can now be readily discovered and shared, promoted or “amplified” via social networks, and usage data – metrics, metadata and paradata from various sources – is readily and, in a lot of cases, openly available. Properly attributed derivative works should contain links back to the source, and if they do not there are various methods of monitoring and locating duplicate content “openly” via Web Applications such as Blekko. The consumption of this content can also surface the people consuming it, which can subsequently be used to discover how the re-used work is being deployed, whether in a different context from the original, in a different language, etc.

Derivative works are often created by “consumers” who are individuals rather than institutions or organisations, and attribution is made to them personally – so why not include attribution to the “authors” within the original Creative Commons license? For example, copyright may be held by the institution, but why not add acknowledgement of the people (with links to their preferred Social Graph “node”) who created the works, so that they get their “whuffie” and can be “openly selfish”?

I tend to follow people rather than organisations, and to me attribution to a person tends to be more important than attribution to the copyright owner, as it tends to be the person who provides the most context in how the content is being used, and from them I tend to “serendipitously” discover new content. This is nothing new, and it is fundamental to the emerging MOOCs.

What OER means to me at the moment

For me, at the moment, the most important aspect of OER is the availability of openly licensed content accessible via the Web, that has a clear provenance of all assets used with attribution to the people that created it as well as to the copyright owner, kind of “OeR”.

This “OeR” includes all “non academic institution” content such as that from Khan Academy, Peer 2 Peer University and Flat World Knowledge and ideally this “OeR” has more permissive Creative Commons licenses and avoids the NoDerivs and NonCommercial conditions that restrict my usage rights as per the “4Rs Framework”.

…but is this OER, and can this type of OER use that new global logo?


Posted in Guest-post, openness | 10 Comments »

Guest Post: Open Access to Science for Everyone

Posted by ukwebfocusguest on 30 Mar 2012

Yesterday I announced a series of guest blog posts on the theme of openness. I’m pleased to launch this series with a post by Ross Mounce, a PhD Candidate at the University of Bath. In the post Ross outlines his views on the importance of open access for not just the research community but for everyone.

Before the internet, there were non-trivial costs associated with disseminating paper-based research publications – each and every page of every article of every journal cost the publisher money to produce. Every single paper copy of those journals needed to be physically sent by post to all institutions, libraries and individuals that wanted those journals. This was both a costly and complex process, so it was sensibly outsourced to full-time professional publishers to deal with, some of whom were commercial for-profit enterprises – at first this didn’t cause any problems.

But now the internet allows unlimited copies of research publications to be created for zero cost and these can be advertised and disseminated at relatively insignificant costs – just the cost of bandwidth, keeping servers up and running, maintaining a user-friendly website that search engines can crawl, and providing an RSS feed to notify interested parties of new journal articles. Indeed, when Tim Berners-Lee created the Web in 1991, it was with the aim of better facilitating scientific communication and the dissemination of scientific research.

Note that for the sake of clarity we’ll ignore the role of manuscript submission, the organisation of peer review, and the peer-review process itself here – I contend these are only of minor administrative cost. Peer review is provided for free by other academics, and manuscript submission is a largely automated process often requiring little editorial input. Only the organisation of peer review is an administrative task that might conceivably have a significant real cost in time. Furthermore, these processes need not necessarily be performed by the same organisation that distributes the publications (decoupled scholarly publication), an idea popularised by Jason Priem.

Yet the models of payment for the publication and distribution of research works are still largely centred on paying-for-access rather than paying-to-publish. In the digital age this is inefficient and illogical. Why try to charge millions of separate customers (institutions, libraries, academics and other interested persons) for a resource – a complex undertaking to organise in itself – when you can simply ask the funders/authors of the content for a sustainably priced one-off publication charge? The latter author-pays model is clearly the simpler, easier-to-implement option. Yet I contend that the reader-pays model is currently dominant, especially among commercial for-profit publishers, because it can generate excessive profits through its opaqueness and inefficiency (relative to the ultimate goal of providing free, Open Access to scientific knowledge for everyone).

The interests of shareholders and board members of for-profit publishing companies now conflict hugely with those of research funders, institutions and academics. By definition, the primary goal of a for-profit publishing company is profit. In that respect, some academic publishers make Murdoch look like a socialist, with their unscrupulous profiteering as gatekeepers denying access to scientific knowledge, whereas the goal of STM researchers and funders is surely for knowledge to be created and shared with the world. To myself and thousands of other academics it is clear without further explanation that these two goals cannot simultaneously be maximised. One strategy works to maximise profit by proactively denying access to vital materials and punishing those caught sharing them, whilst the other works to maximise dissemination potential, so that all (who have access to a computer – unfortunately not everyone does, but this problem is out of scope) can read the material if they wish, whilst forfeiting maximum profit potential.

Of course, if research is entirely privately funded, it need not be openly published – one cannot force private companies to disclose all the research and development they do (although efforts by certain private companies to share research to cure malaria and address other humanitarian problems are certainly very welcome!). But as I understand it, the majority of scientific research is publicly funded, and thus there is a clear moral duty to share results with everyone, e.g. taxpayers. To paraphrase James Harvey: if you want to keep your research private, fund it yourself. That’s the privilege of private funding.

The tension between librarians (who have to negotiate to buy subscription-access to journals) and academics united on one side, and for-profit publishing companies on the other is particularly noticeable at the moment, hence The Economist’s labelling of this as a potential Academic Spring, analogous to the recent revolutions overthrowing malevolent incumbent powers – the Arab Spring.  Note that a cartoon representation of this debate can be seen on YouTube and is embedded below.

Indeed it is not just academics who benefit from access to scientific literature – as is being documented by a new initiative called Who Needs Access? There are a huge number and variety of people who would benefit from legally unrestricted, free, Open Access to scientific publications, e.g. patients, translators, artists, journalists, teachers and retired academics. When one hits a paywall asking for US$51 for just 24 hours’ access to a single article on palliative care, it’s no wonder people are often put off reading scientific literature. Thus everyone with even the slightest bit of curiosity about scientific research would stand to benefit from Open Access to scholarly publications, as achieved by the author-pays model.

So where would all these publications go, if not on servers owned and controlled by for-profit publishers? The ideal, natural home, as Björn Brembs argues, is libraries and university presses acting as institutional repositories for research publications, code and data. Currently IRs are used as Green OA archives, which achieve only limited success in providing free full-text access. But as Networked Repositories for Digital Open Access Publications perhaps they might enable Open Access for all, as well as reducing the overall cost of publishing research.

In areas of science that have already shifted to this model – e.g. parts of physics and related subjects with ArXiv (which is arguably analogous to a subject-specific Cornell University IR) – science is distributed pre-review with remarkable ease and cost-effectiveness, at under $7 per article submitted.

Some final thoughts:

We lose so many legal freedoms with closed-access publishing and its tendency to assign all copyright to publishers (not just mere access, but also text-mining rights and the right to re-use information in even vaguely commercial contexts) that we cannot and should not allow this to continue any longer, as it is causing irreparable damage to the future usability of the scientific literature.

Ross Mounce, a PhD Candidate at the University of Bath is an active member of the Open Science community, pushing for beneficial reforms in scholarly publishing. Having had trouble in the past getting research data from publications, he is very proactive in blogging and giving talks on how scientific publishing can improve utility and dissemination by making greater and better use of digital technologies.

Contact details

Twitter: @rmounce

Posted in Guest-post, openness | 10 Comments »

Announcement of a Series of Openness Guest Blog Posts

Posted by Brian Kelly on 29 Mar 2012

The latest issue of JISC Inform, which was published yesterday, features several articles on the theme of openness.

In an article entitled “Open Doors” several JISC programme managers describe aspects of openness of importance to them and the programmes they manage. Rachel Bruce, the Innovation Director of the Digital Infrastructure Team, provides an overview of “How your digital infrastructure supports open learning and research” which introduces the following contributions:

  • Amber Thomas on ‘Open resources’
  • Neil Jacobs on ‘Open communication’
  • Simon Hodson on ‘Open research data’
  • Andy McGregor on ‘Open developer communities’
  • Ben Showers on ‘Open standards’

In addition an article on “Making the most of the open web” provides advice from “three experts [who] discuss how to use the social web to increase traffic to your work and make it more discoverable“. I have contributed a piece on blogging as an open practice, based on the approaches taken in publishing this blog. In addition Grace Owen, JISC Communications Coordinator, provides a video summary on “Running a successful hybrid event” and Steph Gray, director of Helpful Technology, gives a podcast providing advice for “colleges and universities embarking on their first use of digital communication tools such as Twitter and Facebook through to those who are well established and looking for the next new tech trend“.

Finally Jennifer Jones (@jennifermjones) describes a day in the life of an open researcher.

The importance of openness as a means of achieving institutional business objectives in teaching and learning, research and related areas of work is frequently addressed on this blog. I’m therefore pleased to announce the launch of a series of guest blog posts, to be published tomorrow and next week, which will address a range of aspects related to openness, including open research, open educational resources, open scholarly practices and open licences.

The guest bloggers and I hope that these posts will encourage a discussion of the ways in which a variety of open practices can enhance the effectiveness and impact of activities across the higher and further education sector.

The following guest posts were published in this series from 30 March – 7 April 2012:

Posted in openness | 8 Comments »

Favouriting Tweets, Openness and Frictionless Sharing

Posted by Brian Kelly on 6 Feb 2012

Yesterday I favourited (or should I say ‘favorited’) a tweet from @lisaharris which had a link to an article on “Scholars Seek Better Ways to Track Impact Online” published in The Chronicle of Higher Education. An hour or so later I received a direct message (DM) asking me if I was interested in exploring possibilities of joint work in this area.  We exchanged a few messages and agreed to discuss this more using a technology which allows for more in-depth discussions – the telephone :-)

It occurred to me that this is an interesting example of frictionless sharing – I spotted a link to an interesting resource and decided to bookmark it (using Twitter’s ‘favorite’ function) for reading later.  The bookmarking takes place in public (as, for example, I also do when I wish to bookmark web resources using Delicious or Diigo). And as a result of this public action Lisa Harris, who posted the tweet on Sunday morning, got in touch with me.

I have found that being aware of such Twitter favouriting activities has become easier following recent developments to Twitter’s mobile client.  As shown in the accompanying image (on the right if viewing this post in a web browser), such activities are readily accessible via the web site on a desktop PC.  But since, as with increasing numbers of  other Twitter users, a mobile device is now my preferred method of using Twitter, it’s the Interactions tab on my iPod Touch which typically alerts me to similar activities, as shown below.

From this we can see, for example, that @lualnu10 (Marisa Alonso Nunez) favourited and then retweeted my comment:

Great post from @ambrouk on “Why I Blog”. Good to see open reflections based on “vanalytics” & “pimpact” (TM Amber :-)

It should be noted that access to such interactions is not available on all Twitter clients. A lack of awareness of Twitter’s more subtle aspects is perhaps one reason why people may fail to ‘get’ Twitter. As I mentioned in a recent post on Twitter? It’s Better Than Most Things (According to Sturgeon) there is a need to understand techniques for filtering Twitter content, which are best exploited by using a dedicated Twitter client. In this example, however, we can see that there can be benefits in accessing content (interactions) which may not be available on all clients.

It is appropriate that the screenshot of recent interactions mentions Amber Thomas’s blog post on “Why I Blog“. In the post Amber explains why she is embracing ‘open practices’ in her role as a JISC programme manager. She cites Lou McGill’s definition of open practices:

By Open practices I mean a broad range of practices which have an ‘open’ philosophy, intention or approach […] Informal and formal open practice takes place within wider societal contexts which are evolving rapidly. Open practices take place in, and are enabled by, a highly connected socially networked environment”

Amber’s post primarily addresses open practices within the context of blogging, and covers associated metrics which can demonstrate the ways in which the content is being used and shared. However, as we can see, Twitter also provides an example of open practices in which the value lies not just in the content shared in the 140 characters or the embedded links, but also in simple frictionless sharing actions such as favouriting and retweeting.

Of course there may also be risks in public bookmarking activities: if you favourite a tweet on “how to deal with a difficult boss” you may be sending unintended messages to your manager! But open practices will always entail risks – I suspect the question will be what your personal attitude to risk is. And perhaps if you are an optimist you will see the advantages which can be gained from open practices, as I suggested in a post on “A Tweet Takes Me To Catalonia“. But if you are at heart a pessimist, you may well worry about how your tweets could be used against you. I can’t help but think that embracing open practices says a lot about the individual rather than the technology. On reflection, this is an over-simplistic analysis, as I know several people I follow on Twitter who enjoy sharing their grumbles there, particularly related to public transport failures around the south west!

Posted in openness, Social Web, Twitter | 2 Comments »

My Activities for Open Access Week 2011

Posted by Brian Kelly on 24 Oct 2011

Open Access Week 2011: #OAWeek

Today marks the launch of Open Access Week. This is a global event, now in its 5th year, which promotes Open Access as a new norm in scholarship and research.  As described in last year’s summary about the event:

“Open Access” to information – the free, immediate, online access to the results of scholarly research, and the right to use and re-use those results as you need – has the power to transform the way research and scientific inquiry are conducted. It has direct and widespread implications for academia, medicine, science, industry, and for society as a whole. 

This year’s summary encourages people to become actively involved with the campaign:

Every year, research funders, academic institutions, libraries, research organizations, non-profits, businesses, and others use Open Access Week as a valuable platform to convene community events as well as to announce significant action on Open Access. The Week has served as a launching pad for new open-access publication funds, open-access policies, and papers reporting on the societal and economic benefits of OA.

I agree that it is important to become actively involved in open access activities – being a passive supporter can mean that one is consuming open resources provided by others, rather than actively engaging in the transformation of the research culture which the campaign is seeking to achieve. I’m looking forward to seeing the #OAWeek tweets (which are archived on TwapperKeeper) in which people will be describing what they are doing. In this post I’ll describe how I have engaged in open access in the past and how I am supporting the Open Access Week 2011 campaign, beyond registering on the Open Access Week web site.

Getting Involved

Back in 2005 in a paper entitled “Let’s Free IT Support Materials!” I argued that support service departments, which should include libraries as well as IT Service departments, should be taking a lead in embracing openness by making training materials, slides and documentation available with a Creative Commons licence.

For several years I have been making my slides available under a Creative Commons licence. As an example, on Thursday I will be giving a talk entitled “What’s On the Technology Horizon?” at the ILI 2011 conference. The talk will describe work commissioned by the JISC Observatory (which is being provided by UKOLN and CETIS) which has identified technological developments expected to have an impact on the higher education sector over the next four years or so. It is pleasing that open content has been listed as a development which is expected to have a significant impact across the sector, with a time-to-adoption horizon of one year or less. It is clearly appropriate that my slides for the talk are provided with a Creative Commons licence:

It should also be noted that permission will be granted for live-blogging and live streaming of the talk, with permission being clarified on the second slide of the presentation, as illustrated.

The licence to share live presentations is one aspect of UKOLN’s long-standing involvement in organising and participating in amplified events and in advising others of best practices in the provision of such events.  We are currently developing guidelines for amplified events as part of our involvement  in the JISC-funded Greening Events II project.

In addition to describing possible environmental benefits which can be gained by enabling a remote audience to participate in events, we will also describe additional benefits which can be gained by adopting a more open approach to events as described by my colleague Marieke Guy in a post on Openness and Event Amplification.

However, so far I have summarised ways in which my colleagues at UKOLN and I have supported differing aspects of open access in the past. I feel there is a need at the start of Open Access Week 2011 to outline new and additional ways in which the benefits of open access can be further enhanced.

A change to the licence conditions for posts on this blog was announced on 12 January 2011 in a post entitled Non-Commercial Use Restriction Removed From This Blog. This post described how:

The BY-NC-SA licence was chosen [in 2005] as it seemed at the time to provide a safe option, allowing the resources to be reused by others in the sector whilst retaining the right to commercially exploit the resources. In reality, however, the resources haven’t been exploited commercially and increasingly the sector is becoming aware of the difficulties in licensing resources which excludes commercial use, as described by Peter Murray-Rust in a recent post on “Why I and you should avoid NC licence“.

I have therefore decided that from 1 January 2011 posts and comments published on this blog will be licensed with a Creative Commons Attribution-ShareAlike 2.0 licence (CC BY-SA).

However the share-alike clause can also create difficulties in allowing others to reuse the content. Although I would encourage others to adopt a similar Creative Commons licence, I realise that this may not always be achievable. So rather than requiring this as part of the licence, I will now simply encourage others who use posts published on this blog to make derived works available under a Creative Commons licence, and limit the licence conditions to a CC BY licence which states that:

You are free:

  • to copy, distribute, display, and perform the work
  • to make derivative works
  • to make commercial use of the work

Under the following conditions:

  • Attribution — You must give the original author credit.

In addition to using this licence for blog posts from 24 October 2011, I also intend to use this licence for the presentations I give in the future – and, as can be seen from the above image, the licence has been applied to the resources for my talk at the ILI 2011 conference later this week.

That’s how I’m involved in Open Access Week 2011. What are you doing?

Posted in openness | Tagged: | 2 Comments »

Openness and Open Folk Culture

Posted by Brian Kelly on 11 Aug 2011

Open Content in Higher Education

I have been involved in promoting open access to resources for several years. Back in 2005 I presented a paper on Let’s Free IT Support Materials! at the EUNIS 2005 conference in which I suggested that:

Although interest in open access has initially focussed on research publications and datasets and teaching and learning resources, the authors feel that the education community can benefit if IT service departments take a pro-active role in making their support materials (e.g. documents, training materials, etc.) available under licensing conditions such as those available under Creative Commons.

IT service departments are well-placed to take a leading role in opening access to their support materials for several reasons:

  • The IT services community has a culture of collaboration and sharing.
  • Open access to support materials will complement interests in use of and provision of open source software.
  • From an institutional perspective, open access to IT support materials will be less contentious than open access to teaching and learning or research materials.

In addition IT service departments will benefit from the experiences gained through the provision of open access resources, including experiences of open access management tools, training needs, user acceptance, provision of a test bed, etc.

However the perceived difficulties of deploying Creative Commons licences meant that IT Service departments have, to the best of my knowledge, failed to take the leading role I suggested in opening up access to their resources.

Two years later Scott Wilson (CETIS), Randy Metcalfe (JISC OSS Watch) and I felt we were in a better position to appreciate the difficulties in embracing openness and presented a paper on Openness in Higher Education: Open Source, Open Standards, Open Access at the ELPub 2007 conference. In the paper we suggested that the contextual approach to use of open standards (which is illustrated) could be applied to policies and practices for providing open content.

But is such a softly-softly approach, which encourages organisations to take small steps towards using Creative Commons licences, the right one to take? Might we be able to learn from other sectors which have a long-standing tradition of openness and sharing?

Open Folk Culture

Last week I took part in the Sidmouth Folk Week as a dancer (and comic character) for the Newcastle Kingsmen Sword Dancers.  And this week I am having a few days off as a volunteer for the Bath Folk Festival, where I am providing the Bath Folk Facebook page and BathFolkFest Twitter account together with Nicola McNee and also working with Kirsty and Rich Pitkin who are taking videos of the festival which are being published on the Bath Folk Festival YouTube channel.

In our planning for use of social media at the Bath Folk Festival we were conscious of the need to respect the performers’ rights. During the first Bath Folk Festival Fundraiser concert (which took place on 6 July) all of the acts were recorded and we asked all of the acts if they were happy for us to publish the video on YouTube (where possible we asked in advance, but this was not always possible). All of the acts agreed, and so we were able to publish the individual performances as well as a 3 minute feature of all of the performers.

At Sidmouth Folk Festival Taffy Thomas, the UK’s first Laureate for Storytelling, ran a number of story-telling workshops. Although I couldn’t attend any of the workshops I did see him perform briefly and was able to take a few photographs, one of which I subsequently uploaded to Flickr and added to Taffy’s Wikipedia page (with an appropriate Creative Commons licence).

I heard that Taffy explained how the story-telling tradition is based on passing on stories for retelling by others. “Take as much as you want; use as little as you need” was, I understand, his advice to other story-tellers – but you should always try to acknowledge the source of your stories.

Isn’t that a wonderful way of describing a Creative Commons attribution licence, which formally allows others “to Share — to copy, distribute and transmit the work; to Remix — to adapt the work; to make commercial use of the work subject to the following condition: Attribution — You must attribute the work in the manner specified by the author or licensor (but not in any way that suggests that they endorse you or your use of the work)“?

This culture of sharing in order to enrich others is deeply embedded in folk culture, with many performers running workshops in which they share their expertise, knowledge, skills and tips.

An example of such willingness to share was seen at the Late Night Extra show at Sidmouth, when the Kingsmen and Gaorsach (a female rapper team based in Aberdeen – no, the sword tradition isn’t men-only!) put together a show which featured individual dances (a long sword dance from the Kingsmen and reels and jigs from the Gaorsach dancers) with a finale of a joint rapper dance featuring a 13-star lock.

Of course being open can also be risky. Some performers may wish to have a veto on video recordings if things go wrong. But the folk tradition tends to feel that once a performance has been made it belongs to the public, warts and all. And if you view the video, is the ending, 12 minutes in, a mistake or a feature?


It might be argued that whilst hobbyists may be willing to allow others to record and publish their activities, it will be different for professionals, who will need to consider issues such as business models and sustainability. But performers at the Sidmouth Folk Week and the Bath Folk Festival include those who are professional or semi-professional and can see the benefits of having their work promoted by others. If this is the case for folkies, what arguments can be made for those of us working in higher education for not taking similar approaches, and being willing for talks at conferences, for example, to be regarded as public property? I’ll conclude with a mashup of the first Bath Fundraiser concert which illustrates how others (in this case Rich and Kirsty Pitkin, who recorded the concert) can bring added value to such recordings, in this case to The Grey Blues, Katherine Mann & Marick Baxter, Martin Vogwell, Northgate Rapper Sword Dancers, Jumping Rooves, Angel Ridge and the Brian Finnegan Band.

Posted in openness | 4 Comments »

You Are Not Alone – You Do Not Live In A Vacuum!

Posted by Brian Kelly on 15 Jul 2011

Back in April 2011 I attended a Bathcamp Startup night. One of the talks which I found particularly interesting described the Frintr service. However the talk was of interest to me not as an example of best practices for setting up a startup company (the speaker admitted that he had lost money on the business) but in the service itself and the ideas which it generated.

Frintr is an online service which allows you to create an image based on a mosaic of portraits taken from one’s contacts on Twitter, Facebook or MySpace. As an example, the accompanying image was created by the service by taking the people I follow on Twitter and creating a mosaic from a portrait which I uploaded.

Of course this probably raises interesting legal issues: does Frintr have the rights to harvest the images in this way? What if one of the Twitter users wishes to change their image or delete their profile? In addition, as I discovered during the talk, there is also the question of patents for creating images based on mosaics (it seems that there is a patent which covers this technology).

But putting such issues to one side, for me this provided an interesting visual way of presenting the way in which creative work is not done in isolation – we are all influenced by others. In particular I feel that users of Twitter may be influenced by their engagement on the service – I know this is the case for myself, as I have described in posts on how A Tweet Takes Me To Catalonia and my reflections on 5,000 Tweets On.

The day before the Bathcamp meeting I came across a link to a video of a talk given by Cameron Neylon on “The Gatekeeper is dead. Long live the Gatekeeper“. The slides of this talk are available on Slideshare and I’ve read a report on the talk – but what I found interesting was Cameron’s licence for the talk and its presentation. Cameron had made the presentation available under a Creative Commons licence and, in the video, described how his ideas were the result of interactions with many other people. I agree with Cameron – I feel that a great deal of the activity which takes place in higher education is based on individual interpretations of existing knowledge rather than the creation of new knowledge.

Last week Tony Hirst published a post entitled A Map of My Twitter Follower Network which provided another visualisation of the way in which online communities are interacting. Tony described how he had created “a map of how the connected component of the graph of how my Twitter followers follow each other; it excludes people who aren’t followed by anyone in the graph (which may include folk who do follow me but who have private accounts)“. Tony concluded by volunteering to spend up to 20 minutes creating a similar map for a handful of people who were willing to donate to a charity such as Ovacome (an idea initially suggested by Martin Hawksey). I thought this was a great idea (the Big Society in action in the blogosphere, perhaps?). Tony has created a visualisation of my Twitter community and provided annotations on his thoughts on the different communities which are depicted.
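The graph filtering Tony describes can be sketched in a few lines of Python. This is only an illustration of the idea, not Tony’s actual method: the account names and follow relationships are invented, and in practice the follower map would come from the Twitter API.

```python
from collections import deque

# Invented sample: follower -> accounts they follow within the sample.
follows = {
    "alice": {"bob", "carol"},
    "bob": {"alice"},
    "carol": {"alice"},
    "dave": set(),          # follows nobody in the sample
    "erin": {"dave"},       # nobody follows erin, so she is dropped below
}

# Exclude people who aren't followed by anyone in the graph.
followed = set().union(*follows.values())
core = {a: targets & followed for a, targets in follows.items() if a in followed}

def component(graph, start):
    """Nodes reachable from `start`, treating follow edges as undirected."""
    undirected = {node: set() for node in graph}
    for a, targets in graph.items():
        for b in targets:
            undirected[a].add(b)
            undirected[b].add(a)
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for neighbour in undirected[node] - seen:
            seen.add(neighbour)
            queue.append(neighbour)
    return seen

print(sorted(component(core, "alice")))   # ['alice', 'bob', 'carol']
```

Starting a traversal from the map’s owner and keeping only the connected component is what leaves the clusters of mutually-following accounts that Tony annotates.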

I think Tony has correctly identified some of the key sectors I have engaged with on Twitter over the years. The Ed-tech, Libraries and Info pros sectors should be self-explanatory and will probably be well-represented by readers of this blog. The IWMW sector is shorthand for those involved in the provision of institutional Web services who are likely to attend UKOLN’s annual IWMW event. The Museums sector reflects involvement with the sector when UKOLN received funding from the MLA (Museums, Libraries and Archives Council), which finished in April prior to the MLA’s demise. The Spanish community, as Tony suggests, is interesting and is probably based on my trip to Spain last year and the talks I gave in Catalonia and the Basque Country.

There are an increasing number of social media analytic tools being developed. As Lorcan Dempsey pointed out in a post on “Analysing influence .. the personal reputational hamsterwheel”, in which he referred to services such as Klout, Twitalyzer and Peerindex, “analysing influence has been a central part of academic life“.

These analytic tools are worthy of further investigation. But there is also a need to step back in order to see the big picture and the relevance of social media for those of us working in higher education. These two images help me to understand the Twitter-friendly sound bite which I might use if I were asked why social media is relevant to higher education: “The relevance of social media in Higher Education? You are not alone – you do not live in a vacuum!“. I think this is a particularly timely message as the fifteenth annual Institutional Web Management Workshop, IWMW 2011, starts on Tuesday 26 July and I’m sure that those who are new to the sector will be pleased to discover that they are not alone.

Posted in openness | 3 Comments »

Government to Force Universities to Publish Data – Hurrah?

Posted by Brian Kelly on 26 Jun 2011

Earlier today in an article entitled “Universities forced to publish new data” the Daily Telegraph described how “Universities are to be forced to publish a range of new information – including how many hours’ lectures and tutorials students receive – as part of a sweeping set of reforms due to be announced this week“.

I was alerted to this news by a tweet from @karenblakeman, which led to a subsequent discussion as to whether universities should be concerned about the implications of the forthcoming announcement or should welcome the opportunity to embrace openness in order to demonstrate the diversity of services provided by the university sector.

I suspect there will be suspicions that this decision is being taken to provide an opportunity to bring into question the value for money provided by the institutions currently involved in providing higher education. But this would perhaps be a rather hypocritical position to be taken by those who heralded the commitment to open data made by the Labour Government in spring 2010, featured in articles such as “Gordon Brown launches big shift to open gov data and broadband but where’s the detail?“. We now seem to be seeing the detail – and it applies to the university sector and not just, for example, to opening up Ordnance Survey data in order to allow developers to provide a rich set of mashups.

We are also seeing arguments about the diversity of the services provided across the HE sector, with one person commenting in response to the Daily Telegraph article:

What is absolutely vital to know is the student-staff ratio (SSR) by department. The average SSR for a whole university is usually all that is currently available and is quite meaningless. My business school has an SSR of 30:1 (and this is a Russell Group university); other departments more favoured by the university centre have SSRs of 12:1 and still boast of offering tutorials

Whilst it is indisputably true that there is much diversity across the university sector – and also within individual institutions themselves – it could also be pointed out that the same argument could have been made (and, indeed, was made) when the Daily Telegraph first broke the story about MPs’ expenses claims and developers, such as Tony Hirst, subsequently provided analyses of the data once it had been released.

Surely if you believe that public sector organisations, in particular, should be open and transparent in areas which don’t conflict with data protection requirements, then such beliefs shouldn’t change when a new government is elected? And such openness shouldn’t just relate to institutional data about the student experience – as I’ve suggested in recent posts, it should also apply to data about the contents of institutional repositories and the services provided by academic libraries.

I particularly enjoyed the talk on “The Good (and Bad) News About Open Data” given by Chris Taggart at the Online Information 2010 conference.

In making this argument I am revisiting that talk, in which Chris described a service which aims to provide “a prototype/proof-of-concept for opening up local authority data … [where] everything is open data, free for reuse by all (including commercially)“. In his presentation Chris described the potential benefits which openness can provide and listed frequently-mentioned concerns, together with responses to those concerns. Rather than trying to apply Chris’s approaches in the context of the government’s forthcoming announcement, I will simply link to Chris’s presentation, which is available on Slideshare and embedded below.

So if the following arguments are being used to maintain the status quo, remember that increasing numbers of councils have already found their own ways of addressing concerns such as:

  • People & organisations see it as a threat (and it is if you are wedded to the status quo, or an intermediary that doesn’t add anything)
  • The data is messy e.g. tied up in PDFs, Word documents, or arbitrary web pages
  • The data is bad
  • The data is complex
  • The data is proprietary
  • The data contains personal info
  • The data will expose incompetence
  • The tools are poor and data literacy in the community is low

Chris Gutteridge has already welcomed the news in his response to the Daily Telegraph’s article: “Yay. The UK government has some amazing people working in the field of open data. The UK commitment to Open Data with Open standards is something to be proud of. If they help advise on the standard then it’ll be good.”

HEFCE have already announced that from September 2012 they will be providing Key Information Sets (KIS) which are “comparable sets of standardised information about undergraduate courses … designed to meet the information needs of prospective students and will be published ‘in context’ on the web-sites of universities and colleges“.

The KIS will contain areas of information that students have identified as useful, including student satisfaction; course information; employment and salary data; accommodation costs; financial information, such as fees; and students’ union information. A mockup of the KIS output is available (Adobe PDF 432K) or (MS Word 1.3 Mb) and is illustrated in this post.

If the exercise in collating this data from Universities results in the provision of access to the data in PDF format we will, I feel, have lost a valuable opportunity to take a significant move in providing open access to data in an accessible, open, structured, linkable and reusable form.
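To make concrete what an “accessible, open, structured, linkable and reusable form” might look like, here is a minimal sketch of a single KIS-style record expressed as JSON rather than locked inside a PDF. All of the field names and figures are invented for illustration – the real KIS fields are defined by HEFCE.

```python
import json

# A hypothetical KIS-style record; field names and values are invented.
record = {
    "institution": "Example University",
    "course": "BSc Example Studies",
    "student_satisfaction_pct": 87,
    "employed_after_6_months_pct": 92,
    "annual_fee_gbp": 9000,
    "accommodation_cost_gbp_per_week": 110,
}

# Unlike a PDF, this serialisation can be consumed directly by other code.
print(json.dumps(record, indent=2))
```

A PDF of the same record would need scraping before courses could be compared across institutions; a structured form can be aggregated, linked and reused directly.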

My view is that we should be looking at the lead which has been taken by the University of Southampton and the information they provide on Open Data from UK Academic Institutions – with the discussions focussing on well-understood technical arguments around open data and Linked Data. But David Kernohan, in a post entitled “The bubble of openness?”, has recently asked “Is openness (in the form of open access to knowledge, and the open sharing of distilled knowledge) a contemporary bubble, destined to collapse as universities and industries seek to tighten their budgets?“. Meanwhile a post entitled “The Paradox of Openness: The High Costs of Giving Online” provides details of a session to be held at the ALT-C conference which will examine the paradoxes of giving and receiving online in education in a changing economic climate, pointing out that “Ownership in the age of openness calls for clarity about mutual expectations between learners, communities and ourselves“.

Perhaps the benefits of openness do need to be questioned after all. But are the concerns related to the use of and access to Open Educational Resources (OERs) relevant to discussions about the openness of data about the institution and the student experience?

Twitter conversation from Topsy: [View]

Posted in openness | 2 Comments »

Numbers Matter: Let’s Provide Open Access to Usage Data and Not Just Research Papers

Posted by Brian Kelly on 9 Jun 2011

Numbers matter. Or, as the JISC-funded report on Splashes and Ripples: Synthesizing the Evidence on the Impacts of Digital Resources put it in its list of recommendations: “The media and the public are influenced by numbers and metrics“.

An example of how the media make use of numbers in their reports can be seen in the recent story on Apple’s series of announcements at the WWDC event. Despite there being no new iPhone or iPad announced, the media picked up on the statistics which Steve Jobs presented to highlight his view of the importance of various Apple developments, with the BBC effectively providing an advertisement for the company:

Sales of 25 million iPads in just 14 months. 200 million iOS devices in total. 15 billion songs downloaded since 2003. 130 million books. 14 billion apps downloaded from a store that now runs to 425,000 apps.

Note how the figures are nicely rounded: 25 million iPads and not 24,987,974; 130 million books and not 131,087,459. Similarly there are no caveats about the difficulties of gathering accurate figures. There is simply a clear understanding from the private sector that such approaches can be valuable in marketing activities to help ensure the growth and sustainability of the company – and the JISC report recommends that development work in the higher education sector learns from these approaches.
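The kind of headline rounding described above amounts to quoting figures to about two significant figures. A small sketch (my own illustration of the arithmetic, not anything Apple or the BBC use):

```python
from math import floor, log10

def headline(n, sig=2):
    """Round n to `sig` significant figures and express it in words."""
    magnitude = 10 ** (floor(log10(abs(n))) - sig + 1)
    rounded = round(n / magnitude) * magnitude
    for unit, name in ((10**9, "billion"), (10**6, "million"), (10**3, "thousand")):
        if rounded >= unit:
            return f"{rounded / unit:g} {name}"
    return str(rounded)

print(headline(24_987_974))    # 25 million
print(headline(131_087_459))   # 130 million
```

Two significant figures reproduces both of the press figures quoted above: the awkward exact counts become the memorable “25 million” and “130 million”.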

This is a reason why I feel we should be more willing to gather and use statistics related to the services we develop within the sector. Unfortunately, as described in a recent post on A Pilot Survey of the Numbers of Full-Text Items in Institutional Repositories it seems that we can’t provide a summary of the form:

UK repositories grew by xxxx  in just 14 months. yyy items in total. zzz full text items downloaded since 2003.

because, although we have such data and it is under our control, we do not make it available. There will be reasons for this: the complexity of the data; the difficulties of interpreting the data; and, perhaps most importantly, the fear that the data will be used against the host institution or those directly involved in providing the service.

Yet data about institutional use of Social Web services is, in many cases, freely available, as has been shown in recent posts on Institutional Use of Twitter by the 1994 Group of UK Universities, Use of Facebook by Russell Group Universities, Institutional Use of Twitter by Russell Group Universities and How is the UK HE Sector Using YouTube?.

UKOLN is hosting a one-day workshop on “Metrics and Social Web Services: Quantitative Evidence for their Use and Impact”, to be held at the Open University on 11 July (a few spaces are still available), which will seek to explore ways in which metrics related to use of Social Web services can be related to value. Since the event is being hosted at the Open University it would seem appropriate to provide the following summary of the institution’s usage of popular services:

Open University social media metrics:

  • Facebook – Likes
  • Twitter – Followers: 13,274; Tweets: 3,058
  • YouTube – Channel Views: 390,313; Total Upload Views: 515,581; Subscribers: 4,004 (3 other channels available)
  • iTunes – No statistics readily available

The statistics for Facebook, Twitter and YouTube are easily obtained – although I am not aware of a way of automating the gathering of such statistics across all UK University presences which would be helpful if we wished to provide a national picture of how UK Universities are using these services.  I do suspect, however, that institutions may well be employing Social Media consultants to provide advice on strategies for making use of such social media services, which will include an audit of findings for peer institutions.  It would therefore be cost-effective for the sector if such information were to be gathered and published centrally so that money is not being used to replicate such activities unnecessarily.
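If such figures were gathered and published centrally, producing a national picture would be a simple aggregation exercise. The following Python fragment is a minimal sketch of that idea; the Open University’s Twitter and YouTube figures are taken from the table above, while “Example University” and all of its figures are purely hypothetical.

```python
# Sketch: aggregating per-institution social media metrics into a
# sector-wide summary. The Open University figures come from the post;
# "Example University" and its numbers are entirely hypothetical.
metrics = {
    "Open University": {"twitter_followers": 13274, "twitter_tweets": 3058,
                        "youtube_subscribers": 4004},
    "Example University": {"twitter_followers": 5000, "twitter_tweets": 1200,
                           "youtube_subscribers": 800},
}

def sector_totals(data):
    """Sum each metric across all institutions."""
    totals = {}
    for figures in data.values():
        for name, value in figures.items():
            totals[name] = totals.get(name, 0) + value
    return totals

print(sector_totals(metrics))
```

The hard part, of course, is not the arithmetic but persuading institutions to publish the per-institution figures in the first place.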

It should be noted that it does not appear possible to obtain statistics from the iTunes service, so in this case we are reliant on the information published by the Open University, such as the press release which announced in November 2009 that “The 10 millionth Open University track on iTunes U, a dedicated area within the iTunes Store, was downloaded this week, making the OU a top provider of free university content on iTunes U. The Open University launched its first piece of educational content on iTunes U in June 2008 and now has an average of 375,000 downloads a week.” The press release went on to add that “Tracks from the OU’s 260 collections are consistently in the Top Twenty downloads and this week one in four of the top 100 downloads on iTunes U is from The Open University“.

But whilst the Open University clearly benefits from such marketing, the sector itself is failing to demonstrate how collectively we are making use of innovative IT developments – whether in the area of social media, institutional repositories or in other areas –  to support its teaching and learning and research activities.

The concerns mentioned previously (such as “the information could be used against us”) can lead to the sector facing a form of the prisoner’s dilemma, whereby people (or organisations) fail to collaborate even when it is in their interest to do so.

A danger for the sector is that if we fail to provide (and exploit) evidence of our services, others may make use of Freedom of Information (FOI) requests in order to gather such information and use it against the sector, as the Daily Telegraph did in an article entitled “Universities spending millions on websites which students rate as inadequate” which I described in a blog post last year.

In a plenary session at UKOLN’s IWMW 2011 event, to be held at the University of Reading on 26-27 July, Amber Thomas, a JISC Programme Manager, will give a talk on “Marketing and Other Dirty Words“. The talk will seek answers to the questions “How [can Web services] have maximum impact? How can they be effectively presented to aid in marketing and recruitment, and to increase engagement with the world outside the university? This session will bring together key messages from marketing, social media around content, usage tracking and strategy, with ideas for how we can present our intellectual assets online to get maximum effect.“

The JISC-funded report on Splashes and Ripples: Synthesizing the Evidence on the Impacts of Digital Resources which I described at the beginning of this post described how:

Being able to demonstrate your impact numerically can be a means of convincing others to visit your resource, and thus increase the resource’s future impact. For instance, the amount of traffic and size of iTunesU featured prominently in early press reports.

I agree. But let’s not just do this on an institutional basis: let’s open up access to our usage data so that the value of use of IT across the higher education sector can be demonstrated.  And let’s ensure that usage data for in-house services is made available (ideally in an open and easily reusable format) so that we are not reliant solely on usage data provided by commercial services (with uncertainties as to whether such data will continue to be provided for free) and so that there won’t be a need for FOI requests for such data.

Posted in Evidence, openness | 12 Comments »

How Should UK Universities Respond to EU Cookie Legislation?

Posted by Brian Kelly on 26 May 2011

Confusions Over Cookie Legislation

The EU’s Privacy and Communications Directive comes into force at midnight tonight (26 May 2011).  This requires users’ consent before using cookies – the text files which are used for various purposes, including storing browsing information.

The UK Government’s Information Commissioner’s Office (ICO) has provided guidelines on how Web site providers can implement such legislation.  However, as pointed out by the JISC Legal service, differences in interpretation of the legislation by Ministers, the Internet Advertising Bureau and the ICO have led to uncertainties as to what needs to be done.  The JISC Legal post concludes by highlighting such uncertainties:

This does leave website operators with a tricky decision:

  • make changes to their websites now in order to implement a belt-and-braces, but clumsy, can-we-use-cookies explicit permission each time a user visits;
  • wait until the government’s guidance on interpretation emerges, and take a view then as to whether to implement an explicit each-visit permission question;  or
  • hope that browser suppliers make the necessary changes soon enough such that website operators need do nothing.

Perhaps we should be looking to the ICO to see how it has implemented the legal requirements on its Web site. As can be seen from the following image the ICO’s Web site has introduced a new text area at the top of every page which requires users to click on the accept box.

I think it is clear that this is a very flawed solution. Not only is it very ugly, but it also appears to force users to accept cookies (note that the message “You must tick the ‘I accept cookies from this site’ box to accept” was displayed after clicking on the Continue box without selecting the option to confirm acceptance of cookies).

The Guardian has pointed out significant flaws in the legislation on its Technology blog:

One problem sites are wrestling with if the ICO insists on enforcement is a catch-22 where if people choose not to accept cookies, then sites will have to keep asking them if they want to accept cookies – because they will not be able to set a cookie indicating their preference.
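The circularity described here can be illustrated with a few lines of Python; this is purely a sketch of the decision logic, and the cookie name and function names are hypothetical rather than taken from any real framework.

```python
# Illustrative sketch of the consent catch-22: a refusal cannot be
# remembered, because remembering it would itself require setting a
# cookie. The cookie name "cookie_consent" is hypothetical.

def must_show_banner(request_cookies):
    """Ask for consent unless an acceptance cookie is already present."""
    return request_cookies.get("cookie_consent") != "accepted"

def cookies_to_set(user_accepted):
    """Only an acceptance may be stored; a refusal sets nothing."""
    return {"cookie_consent": "accepted"} if user_accepted else {}

# A visitor who declines is asked again on every subsequent visit,
# since cookies_to_set(False) leaves their cookie jar empty.
```

The only escape routes from this loop are the ones JISC Legal identifies: server-side records of consent, or browser-level preference settings.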

What, then, is to be done?

A Year’s Grace

The good news is that the ICO has recognised the complexities in implementing this legislation.  As described on the BBC Web site:

UK websites are being given one year to comply with EU cookie laws, the Information Commissioner’s Office has said.

The UK government also sought to reassure the industry that there would be “no overnight changes”.

This provides the UK higher education sector with an opportunity to develop and implement appropriate and implementable solutions. We are seeing the Government providing indications that it is looking to see “business-friendly solutions” being developed. Ed Vaizey, the Communications Minister, has suggested that the EU directive is  “a good example of a well-meaning regulation that will be very difficult to make work in practice“.  Perhaps this is an example of Government policies being in alignment with those working in higher education who wish to continue to make use of Web technologies to deliver a wide range of services.

How should the sector proceed?  I feel it would be a mistake for Universities to work on their own in attempting to implement individual solutions based on institutional interpretations of the EU directive  and trying to second-guess what may be deemed to be acceptable practices.

I am in agreement with those who suggest that the opt-in/opt-out requirement should be provided by the Web browser rather than on every individual Web site. It should be noted that Microsoft’s IE 9 and the latest version of Mozilla’s Firefox offer settings to protect users from services which collect browser data. In addition Google is working at integrating so-called ‘Do Not Track‘ technologies into their Chrome browser.

In addition to such developments to Web browsers it may be appropriate to explore the potential of machine-readable privacy policies such as W3C’s P3P standard which I discussed in a previous post.  Although this standard has seen little usage since it was first published in 2002 the EU legislation might provide the motivating force which can encourage greater take-up.
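By way of illustration, the P3P approach can be as lightweight as an extra HTTP response header advertising a “compact policy” (a full P3P policy is an XML file, typically referenced from /w3c/p3p.xml). The sketch below is in Python; the compact-policy tokens “CAO PSA OUR” are an arbitrary example of valid P3P tokens, not a recommended policy for any institution.

```python
# Sketch: exposing a machine-readable privacy policy via a P3P
# compact-policy HTTP header. The token string "CAO PSA OUR" is an
# illustrative example only, not advice on an appropriate policy.

def add_p3p_header(headers):
    """Return a copy of the response headers with a P3P header appended."""
    headers = list(headers)
    headers.append(("P3P", 'CP="CAO PSA OUR"'))
    return headers

response_headers = add_p3p_header([("Content-Type", "text/html")])
print(response_headers)
```

Because the policy travels with every response, browsers can evaluate it against user preferences without any per-site consent dialogue.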

At UKOLN’s IWMW 2011 event, which will be held at the University of Reading on 26-27 July, Dave Raggett will be giving a plenary talk on Online Privacy in which he will describe his EU-funded Privacy Dashboard work.  The event might also provide an opportunity for those working in Web-management who have a good understanding of the implications of privacy policies on the services they provide to agree on a sector-wide approach which can be deployed in a year’s time.

There is a slot which is currently vacant at the event.  There is therefore an opportunity for a small group of University Web managers to use the next two months to develop a proposal on how the sector might implement the cookie legislation in a year’s time.

Some thoughts on what could be addressed:

  • Why cookies are needed and what concerns they raise. A briefing paper explaining these issues to policy-makers and end users.  The briefing should have a Creative Commons licence which can help to demonstrate the efficiency savings being made across the sector by avoiding duplication of such work.
  • Documenting ways in which widely used applications and technologies currently use cookies (e.g. Google Analytics, CMS systems, portals and other personalisation tools, etc.). Documentation of the implications of users opting out of use of cookies in use of these applications
  • What privacy policies should cover and possibly provision  of privacy templates.
  • Policies on preferred browsers and education on use of privacy preferences.
  • Potential of use of machine-readable policies such as P3P.

I welcome your comments and feedback.

Posted in Legal, openness | 14 Comments »

Privacy Settings For UK Russell Group University Home Pages

Posted by Brian Kelly on 24 May 2011

On the website-info-mgt JISCMail List Claire Gibbons, Senior Web and Marketing Manager at the University of Bradford, today asked “Has anyone done anything in particular in response to the changes to the rules on using cookies and similar technologies for storing information from the ICO?” and went on to add that “We were going to update and add to our privacy policy in terms of what cookies we use and why“.

This email message was quite timely as privacy issues will be featured in a plenary talk at UKOLN’s forthcoming IWMW  2011 workshop which will be held at the University of Reading on 26-27 July with Dave Raggett giving the following talk:

Online Privacy:
This plenary will begin with a report on work on privacy and identity in the EU FP7 PrimeLife project which looks at bringing sustainable privacy and identity management to future networks and services. There will be a demonstration of a Firefox extension that enables you to view website practices and to set personal preferences on a per site basis. This will be followed by an account of what happened to P3P, the current debate around do not track, and some thoughts about where we are headed.

The Firefox extension mentioned in the abstract is known as the ‘Privacy Dashboard’ and is described as “a Firefox add-on designed to help you understand what personal information is being collected by websites, and to provide you with a means to control this on a per website basis“. The output for a typical home page is illustrated.

The dashboard was developed by Dave Raggett with funding from the European Union’s 7th Framework Programme for the PrimeLife project, a pan-European research project focusing on bringing sustainable privacy and identity management to future networks and services.

In order to observe patterns in UK Universities’ practices in online privacy I have used the W3C Privacy Dashboard to analyse the home pages of the twenty UK Russell Group University Web sites. The results are given in the following table.

Ref. No.  Institution  Cookies (session / lasting / external lasting)  External third party (sites / cookies / lasting cookies)  Invisible images
1 University of Birmingham 3 3 0 4 0 2 0
2 University of Bristol 0 0 0 4 0 6 8
3 University of Cambridge 1 3 0 3 1 2 0
4 Cardiff University 1 4 0 0 0 0 0
5 University of Edinburgh 1 4 0 0 0 0 0
6 University of Glasgow 2 3 0 2 1 6 2
7 Imperial College 3 3 0 3 0 2 0
8 King’s College London 3 3 0 3 1 6 0
9 University of Leeds 2 3 0 1 0 0 0
10 University of Liverpool 2 3 0 2 2 3 0
11 LSE 3 0 0 1 0 0 0
12 University of Manchester 3 0 0 1 0 0 0
13 Newcastle University 2 0 0 0 0 0 3
14 University of Nottingham 2 3 0 2 0 5 0
15 University of Oxford 1 5 0 1 0 0 1
16 Queen’s University Belfast 1 3 0 1 0 0 0
17 University of Sheffield 2 3 0 0 1 0 0
18 University of Southampton 1 3 0 3 0 0 0
19 University College London 1 2 7 0 0 0 0
20 University of Warwick 9 6 0 39 2 95 6
TOTAL 43 54 7 70   127 20 

It should be noted that the findings appear to be volatile, with significant differences being found when the findings were checked a few days after the initial survey.

How do these findings compare with other Web sites, including those in other sectors?  It is possible to query the Privacy Dashboard’s data on Web sites for which data is available, which include Fortune 100 Web sites. In addition I have used the tool on the following Web sites:

Ref. No.  Site  Cookies (session / lasting / external lasting)  External third party (sites / cookies / lasting cookies)  Invisible images  Additional Comments
1 W3C  0  0 0 2  0 4 1 P3P Policy
2 Facebook Home page  4 6 0  1 0  0  1
3 Google  0  7  0 0  0  1 0
4 No. 10 Downing Street 1  4  0  8  0 52 1 (Nos. updated after publication)
5 BP 1 1 0 0 0 0 2 P3P Policy
6 Harvard 3 4 1 0 0 0
7 2 3 0 1 0 0 1

I suspect that many Web managers will be following Claire Gibbons’ lead in seeking to understand the implications of the changes to the rules on using cookies and similar technologies for storing information and reading the ICO’s paper on Changes to the rules on using cookies and similar technologies for storing information (PDF format).  I hope this survey provides a context to the discussions and that policy makers find the Privacy Dashboard tool useful.  But in addition to ensuring that policy statements regarding use of cookies are adequately documented, might not this also provide an opportunity to implement a machine-readable version of such a policy? Is it time for P3P, the Platform for Privacy Preferences Project standard, to make a come-back?

Posted in Evidence, Legal, openness, standards, W3C | Tagged: | 15 Comments »

Another Simple Service For Annotating Content

Posted by Brian Kelly on 2 May 2011

I was recently alerted to a new Web-based service for annotating public Web sites. In his tweet Pat Lockley observed that this provided “another like tool for #ukoer #oer #ocw remixing“.

I installed the Chrome extension to use this service (a bookmarklet is available for other browsers) and annotated the home page for this blog. As can be seen the service creates a copy of the page on the service with annotations using simple drawings and text tools.

I recently mentioned the service and suggested that although there are obvious copyright concerns in allowing any public Web page to be copied and edited, such an easy-to-use service might be particularly useful in the context of open educational resources (OER) for which licences are available which permit such re-use. It should also be noted that additional annotations can also be added – although it does not appear to be possible to delete annotations, so there will be dangers of graffiti appearing (such as, for example, the name of a famous footballer who took out a super-injunction appearing on a BBC news article).

It does strike me, though, that the direct editing of a page does have risks, not least the ease with which content can be forged.  Although this service also takes a copy of a page and hosts it on its own servers, the annotation-only approach which the service provides seems to minimise the risks of forgery.  Perhaps this is a useful approach for annotating Web-based OER resources?

Posted in openness, Web2.0 | 1 Comment »

What I Like and Don’t Like About iamResearcher

Posted by Brian Kelly on 27 Apr 2011

I was recently told about iamResearcher, a repository of information about researchers and their research activities. “Not another one!” was one reaction I heard. But is there anything that can be learnt from this service, which has been developed by Mr Yang Yang, an MSc student at the University of Southampton? Les Carr, over on his Repository Man blog has been “Experimenting With Repository UI Design” and describes how he is “always on the lookout for engaging UI paradigms to inspire repository design“. Might this service provide any new UI design paradigms?

Things I Like

I have to admit that I was pleased with how easy it was to get started with the service. I signed up and asked the system to find papers associated with my email address. It found many of my papers, with much of the metadata being obtained from the University of Bath Opus repository. I then searched for other papers which weren’t included in the initial set and was able to claim them as belonging to me – including one short paper which had been published in the Russian Digital Libraries Journal in 2000 which I had forgotten about.

I can now view my 49 entries and sort them in various ways: in addition to the default date order I can also sort by item type; lead author; co-authors and keywords. The view of my co-authors (illustrated) was of particular interest: I hadn’t realised that I had written papers with 55 others.

In comparison the interface provided on my institutional repository service does now seem quite dated. However this is perhaps not unexpected as according to the Wikipedia entry the ePrints software (which is widely used across the UK) was created way back in 2000.

Revisiting the question as to whether we need another service which provides access to research information I would say ‘yes’. Such developments can help drive innovation. In this case ePrints developers are in a position to see more modern approaches to the user interface. In addition the service describes itself as a “Web 3.0 ready application” by which they seem to mean that the service “connects researcher and research students anywhere in the world using an intelligent network”.

I haven’t seen much evidence of Web 3.0 capabilities in the service, apart from being able to download details of my papers in FOAF format, but perhaps the “ready” word is providing a signal that such functionality is not yet available.

Things I Don’t Like

There are some typos on the data entry forms and some usability niggles, but nothing too significant – indeed after attending a recent Bathcamp Startup Night and hearing the suggestion that “If you’re not embarrassed about the launch version of your software then you released it too late” (a quote from the founder of LinkedIn) I welcome seeing this service before everything has been thoroughly checked.

The language used in the terms of service is somewhat worrying, however:

No Injunctive Relief.
In no event shall you seek or be entitled to rescission, injunctive or other equitable relief, or to enjoin or restrain the operation of the Service, exploitation of any advertising or other materials issued in connection therewith, or exploitation of the Services or any content or other material used or displayed through the Services.

It also seems that as a user of the service I undertake not to:

Duplicate, license, sublicense, publish, broadcast, transmit, distribute, perform, display, sell, rebrand, or otherwise transfer information found on iamResearcher (excluding content posted by you) except as permitted in this Agreement, iamResearcher’s developer terms and policies, or as expressly authorized by iamResearcher

Hmm. The service harvested its metadata from other repository services, such as the University of Bath’s Opus repository but does not allow others to reuse its content. This seems to undermine the benefits provided by permitting (indeed encouraging) others to make use of open data. In addition the service appears to be hypocritical, as the University of Bath’s repository policy (which was created using the OpenDOAR Policy tool) states that “The metadata must not be re-used in any medium for commercial purposes without formal permission“. Now the service does not appear to be a commercial service – but its privacy policy states that “To support the Services we provide at no cost to our Users, as well as provide a more relevant and useful experience for our Users, we serve our own ads and also allow third party advertisements on the site“. If advertising does appear on the service, won’t it then be breaching the terms and conditions of the service from which it harvested its data?

Personally I have no problem with advertising being used to fund services where, as in this case, there are multiple providers of services. Indeed those who argue for openness of data should be willing to accept that data may be used for commercial purposes. However services which accept the opportunities provided by open data should accept that they should be providing similar conditions of usage.

The final concern that I have about the service is that currently it can only be accessed if you sign in. I feel this is counter-productive – indeed one person I mentioned this service to asked why he should bother. That’s a fair comment, I think. And seeing that the terms and conditions also state that users of the service are not allowed to:

Deep-link to the Site for any purpose, (i.e. including a link to a iamResearcher web page other than iamResearcher’s home page) unless expressly authorized in writing by iamResearcher or for the purpose of promoting your profile or a Group on iamResearcher as set forth in the Brand Guidelines;

I now wonder what benefits this service can provide to the research community. Developers of other repository services, however, should be able to learn from the technological enhancements the service provides, even if the business model is questionable.


Posted in openness, Repositories | Tagged: | 8 Comments »

The BO.LT Page Sharing Service and OERs

Posted by Brian Kelly on 22 Apr 2011

Earlier today, having just installed the Pulse app on my iPod Touch, I came across a link to an article published in TechCrunch on the launch of a new service called BO.LT.  The article’s headline summarises what the service will provide: “Page Sharing Service Lets You Copy, Edit And Share Almost Any Webpage“.

The comments on the article were somewhat predictable; as seems to be the norm for announcements of new services published in TechCrunch some were clearly fans (“OMG! This is going to change everything!“) whilst others pointed out that the new service provides nothing new: “Shared Copy is a great service that’s been around for 4 years that does ~the same thing“.

Of particular interest to me, however, were the comments related to the potential for copyright infringements using a service which, as the TechCrunch article announced, “lets you copy, edit and share any page“. As the first comment to the article put it: “I can just see it…this will make it easier for 1) people to create fake bank statements, 2) awesome mocking of news headlines, 3) derivative web designs“.

In order to explore the opportunities and risks posed by this service I registered for the service and created a copy of the home page for my blog, subsequently editing it to remove the left-hand sidebar. As can be seen, an edited version of the page has been created, which can be viewed on the BO.LT site.

So it does seem that it will be easy for people to copy Web pages and edit them for a variety of purposes, including poking fun, creating parodies (has anyone edited a Government Web page yet?) as well as various illegal purposes.

But what about legitimate uses of a service which makes it easy to copy, edit, publish and share a Web resource?  The educational sector has strong interests in exploring the potential of open educational resources (OERs) which can be reused and remixed to support educational objectives.  We are seeing a growth in the number of OER repositories.  Might a service such as BO.LT have a role to play in enabling such resources to be reused, I wonder?  Will BO.LT turn out to be a threat to our institutions (allowing, for example, disgruntled students unhappy at having to pay £9,000 to go to University to create parodies of corporate Web pages) or a useful tool to allow learners to be creative without having to master complex authoring tools?

Posted in openness, Web2.0 | Tagged: | 2 Comments »

Archiving Blogs and Machine Readable Licence Conditions

Posted by Brian Kelly on 21 Apr 2011

Clarifying Licence Conditions When Archiving Blogs

UKOLN’s Cultural Heritage blog has recently been frozen following the cessation of funding from the MLA (a government body which is due to be shut down shortly).

As part of the closure process for our blog we have provided a Status of the Blog page which summarises the reasons for the closure, provides a history of the blog, outlines various statistics about the blog and provides some reflections on the effectiveness of the blog.

Another important aspect of the closure of a blog should be the clarification of the rights of the blog posts. This could be important if the blog contents were to be reused by others – which could, for example, include archiving by other agencies.

As shown, a human readable summary was included in the sidebar of the blog which states that the content of the blog is provided under a Creative Commons Attribution-Noncommercial-Share Alike 2.0 UK: England & Wales License.

The sidebar also defined the scope of this licence which covered the textual content of blog posts and comments which were submitted to the blog.  It was pointed out that other embedded objects, such as images, video clips, slideshows, etc, may have other licence conditions.

However, automated tools will not be able to understand such licence conditions.  What is needed is a definition of the licence in a format suitable for automated processing. This has been implemented using a simple piece of RDFa which is included in the sidebar description.  The HTML fragment used is shown below:

<img alt="Creative Commons License" src="http://i.creativecommons.org/l/by-nc-sa/2.0/uk/88x31.png" /> This blog is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/2.0/uk/">Creative Commons Attribution-Noncommercial-Share Alike 2.0 UK: England & Wales License</a>.

How might software process such information? One example is the OpenAttribute plugin which is available for the Firefox, Chrome and Opera browsers. This is described as a “suite of tools that makes it ridiculously simple for anyone to copy and paste the correct attribution for any CC licensed work“. Use of the OpenAttribute plugin on the Cultural Heritage blog is illustrated below.
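As a sketch of how such a tool might work, the following Python fragment (using only the standard library, against a simplified version of the sidebar markup) scans a page for links carrying the rel="license" attribute, which is how licence-aware tools typically discover the machine-readable licence:

```python
# Sketch: how an automated tool (such as a blog-archiving harvester or
# an attribution plugin) might discover a machine-readable licence by
# looking for rel="license" links in a page.
from html.parser import HTMLParser

class LicenceFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.licences = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "a" and "license" in a.get("rel", "").split():
            self.licences.append(a.get("href"))

html = ('This blog is licensed under a <a rel="license" '
        'href="http://creativecommons.org/licenses/by-nc-sa/2.0/uk/">'
        'Creative Commons Licence</a>.')
finder = LicenceFinder()
finder.feed(html)
print(finder.licences)
# → ['http://creativecommons.org/licenses/by-nc-sa/2.0/uk/']
```

A real harvester would of course also need to read the full RDFa (subject, licence scope and so on) rather than just the link relation, but the principle is the same.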

Assigning Multiple Licences To Embedded Objects in Blogs

The image above shows the licence for the blog in its entirety.  However the blog is a complex container of a variety of objects (blog posts from multiple authors; comments from readers; and embedded images and other objects from multiple sources) and each of these embedded objects may have its own set of licence conditions.

How might one specify the licence conditions of such embedded objects?  In the case of the Cultural Heritage blog there was a statement that any comments added to the blog would be published under a Creative Commons licence, so although anybody making a comment did not have to formally accept this licence condition, in practice we can demonstrate that we took reasonable measures to ensure that the licence conditions were made clear.

In order to specify the licence conditions for embedded images we initially looked at the Image Licenser WordPress plugin.   However this provides a mechanism for assigning licence conditions as images are embedded within a post, which are then made available as RDFa.  Since in our case we were looking at retrospectively assigning licence conditions to existing images (in total 151 items) it was not realistic to use this tool.

The Creative Commons Media Tagger provides the ability to “tag media in the media library as having a Creative Commons (CC) license“. But what licence should be assigned to images on the blog?  These include screen images and photographs which may have been included by guest bloggers but which have not been explicitly assigned a Creative Commons licence.  The question of Who owns the copyright to a screen grab of a website? was asked recently, with a lack of consensus and a patent and trade mark attorney providing the less than helpful suggestion that “It is better to include a link to the original work if it is on the Web rather than to copy it.” The uncertainties regarding ownership of screen shots are echoed in a Wikipedia article which states:

Some companies believe the use of screenshots is an infringement of copyright on their program, as it is a derivative work of the widgets and other art created for the software. Regardless of copyright, screenshots may still be legally used under the principle of fair use in the U.S. or fair dealing and similar laws in other countries.

In light of such confusions there is a question as to what licence, if any, should be assigned to images in the blog. As described in the Creative Commons Media Tagger FAQ it is possible to run the plugin in batch mode to “tag media that was already in your media library prior to installing and activating CC-Tagger“. It occurred to me that it would be best to assign a non-CC licence by default to all images and then to manually assign an appropriate CC licence to images such as those taken from Flickr Commons in a post entitled “Around the World in 80 Gigabytes“. However using the batch mode of the tool appeared not to change the content – and it is unclear to me whether there is a way of providing a machine-readable statement in RDFa stating that a resource is not available with a Creative Commons licence.

Using the Image Licenser tool on an individual image resulted in the following HTML fragment which illustrates how a machine readable statement of the licence conditions can be applied to an individual object:

<img class="size-medium wp-image-2206" title="Flickr Commons" src="×205.jpg" alt="image of flickr commons home page" width="300" height="205" />


Whilst finalising this post I asked on Twitter “Is it possible to use RDFa to provide a machine-readable statement that an image *doesn’t* have a CC licence? …” and followed this by describing the context: “.. i.e. have a blog post with CC licence for content but want to clarify licence for embedded objects. #creativecommons“.  Subsequent comments from @patlockley and @jottevanger helped to identify areas for further work which I hadn’t considered – I have kept an archive of the discussion to ensure that I don’t forget the points which were made. A summary of my thoughts is given below:

Purpose: Why should one be interested in ways in which the licence conditions of objects embedded in blog posts can be expressed? My interest relates to archiving policies and processes for blogs.  For example if an archiving service chooses to archive only blogs for which an explicit licence is available there will be a need to ensure that such licences are provided in a machine-readable format to allow for automated harvesting.  There will also be a need to understand the scope of such licences. In addition to my interests, those involved in the provision or reuse of OER resources will have similar interests in reusing blog posts if these are treated as OER resources.  Finally, as @jottevanger pointed out, this discussion is also relevant more widely, with Jeremy’s interests focussing on complex Web resources containing digitised museum objects.

Granularity: What level of granularity should be applied – or, perhaps better phrased, at what level of granularity is it feasible to apply machine-readable licence conditions to complex objects? Should this be at the collection level (the blog), the item level (the blog post) or for each component of the object (each individual embedded image)?
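One way of thinking about the granularity question is as a fallback chain: the most specific licence statement available wins, with the collection-level default applying otherwise. The sketch below models this in Python; all the names and licence values are illustrative assumptions, not statements about this blog’s actual licensing.

```python
# Sketch of licence granularity: resolve the effective licence of an
# embedded object by falling back from object, to item, to collection.

COLLECTION_LICENCE = "CC BY 3.0"                      # blog-wide default
POST_LICENCES = {"post-2206": "CC BY-NC 3.0"}         # per-post overrides
IMAGE_LICENCES = {"flickr-commons.jpg": "All rights reserved"}  # per-object


def effective_licence(image=None, post=None):
    """Most specific licence wins: image, then post, then collection."""
    if image in IMAGE_LICENCES:
        return IMAGE_LICENCES[image]
    if post in POST_LICENCES:
        return POST_LICENCES[post]
    return COLLECTION_LICENCE


print(effective_licence(image="flickr-commons.jpg"))  # All rights reserved
print(effective_licence(post="post-2206"))            # CC BY-NC 3.0
print(effective_licence())                            # CC BY 3.0
```

The design choice here reflects the risk question discussed below: a collection-level default is easy to state, but it is only safe if per-object exceptions can be expressed and detected.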

Risks: Should one take a risk-averse approach, avoiding use of a Creative Commons licence at the collection level since it may be difficult to ensure that each individual item has an appropriate Creative Commons licence? Or should one state that, by default, items in the collection are normally available under a Creative Commons licence, but that there may be exceptions?

Viewing tools: What tools are available for processing machine-understandable licence conditions? What are the requirements for such tools?

Creation tools: What tools are available for assigning machine-understandable licence conditions? What level of granularity should they provide? What default values can be applied?

I know that the OER community has an interest in these issues. I would be interested to hear how such issues are being addressed, and details of tools which may already exist – especially tools which can be used with blogs.

Posted in openness, preservation | Leave a Comment »

UKOLN Seminar On OER Open to All

Posted by Brian Kelly on 11 Apr 2011

UKOLN’s seminar programme continues on Thursday 14 April 2011. Vic Jenkins and Alex Lydiate of the e-Learning team in LTEO (Learning & Teaching Enhancement Office) will describe the JISC-funded OSTRICH (OER Sustainability through Teaching & Research Innovation Cascading across HEIs) project. As described in the abstract for the seminar:

The progress of the OSTRICH project so far at the University of Bath will be described by Vic Jenkins (Learning Technologist in the Learning and Teaching Enhancement Office). This will include highlights and challenges encountered, discussions around IPR for learning and teaching resources, and the sustainability of processes for managing the release of OERs on an institutional basis.

Alex Lydiate (Educational Software and Systems Developer) will present an overview of the design of the Drupal-based OSTRICH distributed repository and the rationale behind it.  This will include an outline of the proposed strategy for representing the OSTRICH OER records on the Web.

As with previous seminars this year, the event is open to others in the sector with an interest in the development of open educational resources. The seminar will also be streamed live. If you would like to attend, either in person or remotely, please complete the online booking form.

Note that following the most recent UKOLN seminar there was a suggestion that we should make use of the Ustream streaming video service rather than Bambuser.

In order to familiarise myself with this service I created a brief video clip which provides an announcement about the seminar. On replaying the clip (which, I should add, contains no additional information) I discovered that, as well as the advertisement for flights to Australia (illustrated), another advert is displayed as a caption on the screen and a video advert is played before my video starts.

It seems that:

Ustream is free because it is ad-supported, but if you want to get rid of ads on your stream ― no problem!

Going Ad-Free on Ustream is simple. With a few easy steps, you can remove ads from your channel to fully control the viewing experience.

And whilst going ad-free may be simple, it costs from $99 per month. The use of advertisements to fund online services is something we have tended to avoid in higher education in the past. But in light of reductions in funding, I wonder if we will start to see increased use of services which contain adverts, not only in sidebar widgets but also at the start of video clips. Will this, I wonder, be regarded as an appropriate response to reductions in funding?

Posted in Events, openness | 3 Comments »

Thoughts on the New WebGL Open API Standard

Posted by Brian Kelly on 7 Apr 2011

A Brief Introduction to WebGL

A post on the TechCrunch blog today asks “Who Needs Flash? New WebGL And HTML5 Browser Game Sets Tron’s Light Cycles In 3D“. It seems the Cycleblob browser game, which was released today, was written exclusively in JavaScript, using elements of WebGL and HTML5. WebGL, which was released in March 2011, is “a graphics library that basically extends the functionality of JavaScript to allow it to create interactive 3D graphics within ye olde browser“.

The TechCrunch article provides a summary of WebGL:

As a cross-platform API within the context of HTML5, it brings 3D graphics to the Web without using plug-ins. WebGL is managed and developed by The Khronos Group, a non-profit consortium of companies like Google, Apple, Intel, Mozilla, and more, dedicated to creating open standard APIs through which to display digital interactive media — across all platforms and devices.

Over the past decade or so the W3C’s approach to the development of open standards has focussed on declarative markup languages based on XML, such as SMIL and SVG. But here is another approach, based on providing open APIs with buy-in from browser vendors and other IT companies. Might WebGL have an impact on the development of interactive e-learning and research applications, I wonder?

But Is WebGL Really Open?

Investigations into the potential of WebGL for development work in higher and further education should consider its openness and its likely sustainability. Although it has been developed and is maintained by a non-profit consortium, it is questionable whether an API maintained by an industry consortium should be regarded as an open standard according to the definition of an open standard which the UK Government is currently attempting to establish. As described in a recent post, the UK Government’s first condition for an open standard is that it “result[s] from and [is] maintained through an open, independent process“. An industry consortium, even if non-profit-making, surely cannot be considered independent; if it could, Microsoft could set up a similar consortium responsible for the maintenance of their formats and code base, which they could then claim to be an open standard.

But such considerations are really only relevant for those who feel there is a simple binary divide between open standards and proprietary approaches. In my view there is a complex spectrum of openness, and for now I feel that WebGL is worth considering for development work. The fact that WebGL is not supported by Microsoft should be regarded as an interesting challenge for developers, but not necessarily a reason for discounting it.

Observing WebGL’s Development

It should be noted that there is an entry for WebGL in Wikipedia and, as is often the case, the article provides a useful brief summary of the standard:

WebGL is a Web-based Graphics Library. It extends the capability of the JavaScript programming language to allow it to generate interactive 3D graphics within any compatible web browser.

The development of this entry is interesting.  A stub entry for the article was created on 14 September 2009 and there have been regular updates ever since.

I must admit I hadn’t realised that statistics for revisions of Wikipedia articles are available.  The statistics for the WebGL article reveal that there have been 192 revisions from 104 users. It is also possible to view details for those who have edited the article and to discover how many users are watching the article.

The statistics page for the article also informs us that the WebGL article was viewed 40,009 times in March 2011 and is ranked 7,576 in traffic.
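Figures like the revision and contributor counts above can also be derived programmatically from the MediaWiki API (`action=query&prop=revisions`). The sketch below parses an abbreviated, made-up sample of the response shape to show the idea; the usernames, timestamps and counts are illustrative, not real data for the WebGL article.

```python
# Sketch: count revisions and distinct contributors from a MediaWiki
# API response. The JSON below is a simplified, invented sample.
import json

sample = json.loads("""
{"query": {"pages": {"12345": {"title": "WebGL",
  "revisions": [
    {"user": "Alice", "timestamp": "2009-09-14T10:00:00Z"},
    {"user": "Bob",   "timestamp": "2010-03-01T12:30:00Z"},
    {"user": "Alice", "timestamp": "2011-04-07T09:15:00Z"}
  ]}}}}
""")

# The API keys pages by numeric id; take the first (only) page entry
page = next(iter(sample["query"]["pages"].values()))
revs = page["revisions"]
print(len(revs), "revisions from", len({r["user"] for r in revs}), "users")
# → 3 revisions from 2 users
```

In practice the API paginates long revision histories, so a real script would need to follow the continuation parameters to reach totals like the 192 revisions quoted above.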

What have I learnt from observing the information about the WebGL Wikipedia article, as well as the information provided in the WebGL Wikipedia article itself?

The chart of the number of edits over time shows steady growth, which suggests that the article is being continually revised. The main contributors include people involved in computer games development, which may suggest that the priority for future development lies in that area. However, the article itself lists Google Body as an early application of WebGL, which perhaps suggests that WebGL could have a role to play in the development of teaching and learning applications.

Your Thoughts

Are there any examples of early use of WebGL within the higher education sector, I wonder? I would be interested in hearing about examples and, perhaps more importantly, about the experiences of those involved in WebGL development work.

In addition, I’d be interested in comments on observing use of, and changes in, Wikipedia articles as a means of providing early indications of new standards which may be of interest to developers. Is this an approach which could be used more widely?



Posted in openness | 4 Comments »