UK Web Focus

Innovation and best practices for the Web

Archive for the ‘Blog’ Category

Guest Post: Data Expeditions and Data Journalism project as OER in Russian

Posted by Irina Radchenko on 15 March 2014

The third annual Open Education Week (#openeducationwk) takes place from 10-15 March 2014. As described on the Open Education Week web site, “its purpose is to raise awareness about the movement and its impact on teaching and learning worldwide”.

My Cetis colleagues and I are supporting Open Education Week by publishing a series of blog posts about open education activities. The Cetis blog provides a series of posts from Cetis staff describing their work across a range of open education areas. These posts are complemented by a series of guest posts on the UK Web Focus blog from people I have worked with who are active in open education.

The fifth and final guest post in the series published on the UK Web Focus blog is written by Anna Sakoyan and Irina Radchenko. In this post Anna and Irina describe “Data Expeditions and Data Journalism project as OER in Russian“.


Data Expeditions and Data Journalism project as OER in Russian

As access to education expands internationally, in particular through the development of OERs and informal educational projects, predominantly in English, similar trends appear in other language environments. By their nature, such projects are less accessible to contributors from all over the world, but at the same time they often provide the only opportunity to participate for those for whom English-language sources are not an option due to the language barrier. What seems important here is to design such projects so that they are both oriented towards their own language audience and integrated into international knowledge exchange.

Our online educational project DataDrivenJournalism.RU and its data expeditions are an example of an attempt to adopt this approach.

Context

Before we discuss the details of this particular project, we find it necessary to introduce the context in which it was born and is currently developing. Basically, there were three deficiencies, or aspects of the problem, that we tried to address in this way.

First off, in the spring of 2013 there was a surge of interest in open data among the Russian media, primarily because the government was about to open its data officially. Many journalists turned to this subject simply because it was promoted and supported by the state, so it was a widely discussed topic by default. Following the coverage, their audience was becoming aware of these developments, but there was little understanding of what exactly they were about. Before the official move there had been an Open Data movement in Russia, but it was mostly promoted by a relatively small group of citizen activists and IT developers, with little response from the broader audience. All in all, by the time open data was about to be introduced officially, bottom-up initiatives were scarce, with deplorably weak horizontal connections.

Second, there was a lack of Russian-language projects for those who might be interested in learning how to deal with data from scratch. Clearly, there were programmers’ communities, and some of those were rather enthusiastic about building applications on open data. But outside this scope there were journalists, citizen activists and scholars who could well make use of the new developments, but who had neither sufficient technical skills nor any idea of where to start acquiring them. While there are numerous international English-language learning projects of this kind, they are hardly accessible to those facing a considerable language barrier. So there was a need for translated or newly created Russian-language manuals, as well as a supportive environment which would encourage people to learn something really new.

Third, and most generally, the project reflects a trend seen all over the world. When there is a considerable body of open materials (books, manuals, tutorials), as well as open/free tools, and there are people trying to use them, at a certain stage there is also a demand for further structuring and adaptation of such materials and tools for learning. This means not only collecting relevant links in one catalogue, which is sometimes very helpful by itself, but creating something more interactive that can provide more comfortable learning facilities.

DataDrivenJournalism.RU and its data expeditions

DataDrivenJournalism.RU

DataDrivenJournalism.RU was initially created as a blog to accumulate translated or originally written manuals on working with data. Its mission was formulated as promoting the broad use of data (Open Data first of all) in the Russian-language environment. As the number of published materials grew, it became necessary to structure them in a searchable way, which made it look more like a website. After almost a year of existence, the project functions in two ways. On the one hand, it operates as an educational resource with a growing collection of tutorials, a glossary and lists of helpful external links, as well as the central platform for its data expeditions; on the other hand, as a blog, it provides a broader context for the application of open data to various areas of activity, including data driven journalism itself.

First Data Expedition

Inspired by the School of Data example, we decided to try the format of online data expeditions soon after the blog was created. The first Russian-language Data Expedition (DE1) was launched in July 2013. It was a week long, and its objectives were to find, process and visualize datasets on universities, both Russian and international. The review of DE1 was published on DataDrivenJournalism.RU at http://datadrivenjournalism.ru/2013/08/04/first-russian-data-expedition-report/ (in Russian). Its English version can be found on Anna’s blog at http://ourchiefweapons.wordpress.com/2013/08/05/first-data-expedition-in-russian-mission-complete/.

Our second Data Expedition (DE2), launched in December 2013, was based on data collected in 2013 in a survey conducted by PSRAI Omnibus (http://www.psrai.com/omnibus.shtml). The dataset can be found on the Pew Internet & American Life Project site: http://pewinternet.org/Shared-Content/Data-Sets/2013/July-2013—Online-Video-%28onmibus%29.aspx. It was chosen primarily for its clear structure and large number of variables. DE2’s main idea was to get beginners to try working with data in a friendly and encouraging environment. Unlike DE1, which relied heavily on self-organisation, DE2 had a ready-made scenario for those who might find it difficult to conduct their own research.
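For readers curious about what the hands-on work in such an expedition looks like, here is a minimal sketch of a first pass over a survey file of this kind using Python and pandas. It is illustrative only: the file name and column names are hypothetical, and the real Pew omnibus dataset uses its own variable codes, documented in its accompanying codebook.

```python
# A minimal sketch of the kind of first-pass exploration a DE2 participant
# might do with a survey file, using pandas. The file name and column names
# below are hypothetical examples, not the real Pew variable codes.
import pandas as pd

# Load the survey responses (assuming a CSV export of the dataset)
df = pd.read_csv("pew_july_2013_online_video.csv")

# Get a feel for the structure: number of respondents and variables
print(df.shape)
print(df.dtypes.head(10))

# Cross-tabulate two hypothetical variables, e.g. age group against
# whether the respondent watches online video, as row percentages
watchers_by_age = pd.crosstab(df["age_group"], df["watches_online_video"],
                              normalize="index")
print(watchers_by_age)
```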

The review of DE2 can be found at DataDrivenJournalism.RU: http://datadrivenjournalism.ru/2014/01/02/de2-report/ (in Russian) and on Anna Sakoyan’s blog: http://ourchiefweapons.wordpress.com/2014/01/04/second-data-expedition-in-russian-mission-accomplished/ (in English).

Third Data Expedition

Our most recent Data Expedition (DE3) had a special feature. It was dedicated to researching the subject of orphan, or rare, diseases. DE3 was organised in partnership with the NGO “Teplitsa of Social Technologies”, so it was a joint project. Our partners helped us to involve experts in the field of rare diseases. The active participation of experts was an invaluable part of the research, because they provided extremely helpful navigation around the subject. This was the first time we had seen the combination of peer learning and research in action. We plan to publish the review of DE3 in the near future. Right now, its participants are working on a follow-up digital story based on the findings.

Conclusions

Undoubtedly, data expeditions, being a combination of a peer-learning project and a hackathon, can be an extremely helpful tool not only for learning (and teaching) data processing techniques, but also for researching the particular areas of knowledge or life posed as the subjects of these expeditions. In this respect, data expeditions could be a very flexible and promising format, equally applicable to things like an activist campaign or an educational project.

DataDrivenJournalism.RU was created as a response to the first two challenges, because it was designed to accumulate and generate Russian-language learning materials and also to contribute to building a community of people interested in learning more about making sense of data. As for the third, an interactive approach was implemented through the Russian-language online data expeditions as a subproject of DataDrivenJournalism.RU.

However, this is only one side of the project. Like any other open educational resource, DataDrivenJournalism.RU can’t exist in a vacuum. It needs to be integrated into broader OER networks and open data communities, both Russian-language and international. This might take the form of knowledge or experience exchange, or participation in international data expeditions or other project-based peer-learning initiatives. Thanks to the flexibility of open projects, the variety of possible cooperation formats is very wide.


About the authors

Anna Sakoyan is a co-founder of DataDrivenJournalism.RU. Anna is currently working as a journalist and translator for the Russian analytical resource Polit.ru and is also involved in the activities of the NGO InfoCulture.

Twitter: @ansakoy
Facebook: anna.sakoyan
LinkedIn: Anna Sakoyan
Blog (English): http://ourchiefweapons.wordpress.com/

Irina Radchenko is a consultant on Open Data at the NGO InfoCulture and an Associate Professor at the Higher School of Economics. She is a co-founder of DataDrivenJournalism.RU and a lecturer at the Open Data School.

Twitter: @iradche
Facebook: iradche
LinkedIn: Irina Radchenko
About.me: Irina Radchenko
Blog (Russian): iradche.ru



Posted in Guest-post, openness | Tagged: | 2 Comments »

Guest Post: Why the Opposite of Open isn’t Necessarily Broken

Posted by sheilmcn on 14 March 2014

The third annual Open Education Week (#openeducationwk) takes place from 10-15 March 2014. As described on the Open Education Week web site, “its purpose is to raise awareness about the movement and its impact on teaching and learning worldwide”.

My Cetis colleagues and I are supporting Open Education Week by publishing a series of blog posts about open education activities. The Cetis blog provides a series of posts from Cetis staff describing their work across a range of open education areas. These posts are complemented by a series of guest posts on the UK Web Focus blog from people I have worked with who are active in open education.

The fourth guest post in the series published on the UK Web Focus blog is written by Sheila MacNeill. In this post Sheila gives her reasons “Why the Opposite of Open isn’t Necessarily Broken”.


Why the Opposite of Open isn’t Necessarily Broken

When Brian approached me to write a guest post for Open Education Week, I was flattered, particularly when he told me about the other guest bloggers he had lined up. And I was relieved that my last excursion onto his blog hadn’t put him off! But more seriously, it seemed the perfect opportunity to share some of my recent experiences of open education and open educational practice. Later today (from 13.00-14.00 on Friday 14th March), along with Catherine Cronin, I’ll be taking part in a webinar organised by David Walker of the University of Sussex as part of their Open Education Week activities. This post will hopefully complement the webinar, as well as contributing to the discussions on this blog this week.

The title of our webinar is “Open and online: connections, community and reality“. It will give us an opportunity to explore the research and realities of open education, online identities, networks, communities and connections.

As some of you may know, I have fairly recently changed jobs, from Assistant Director with Cetis to Senior Lecturer in Blended Learning at Glasgow Caledonian University. A large part of my work with Cetis was increasingly predicated on engagement in open, online communities. My visibility in a number of networks played a key part in my getting my current position. Openness, from open software to OER to open educational practice, was and continues to be a core value not only for Cetis but for my own professional practice. However, I am increasingly conscious that my practice is changing in response to my institutional role and new physical networks. This ties in really well with Catherine’s research on open online identity and the role of the networked educator.

When Brian and I were talking about this post, I half-jokingly said to him that I felt a bit like a time traveller, and a bit like Marty McFly experiencing some Back to the Future moments. At other times I feel a bit like one of the Tomorrow People, who have to be very careful about where and when they use their special powers, particularly in relation to open education.

Over the past few years, I’ve heard in various places (both online and offline) that the “battle for open” has been won, or that open education is now “mainstream”. I’ve always been slightly skeptical about such grand claims. Whilst the open education movement has made considerable inroads in the past decade, OERs and open educational practice are still not universally known about and used. Now, I’ve not started working at some backwater on the edge of civilisation, but believe me, there are people here who aren’t even aware there has been a battle, let alone have any idea of who or what has won, or what the legacy of the war is. Perhaps the greatest Trojan horse for open education has been MOOCs, as nearly everyone has heard about them.

Of course we do have some pockets of excellent activity, not least from our library, who are currently developing an institutional OER policy. But open practice, and, to take an important step back, even just sharing “stuff”, doesn’t feature on the radar of many of my colleagues. It’s not because they are anti-open, or closed; it’s just not their practice. They haven’t developed open practices or habits in the way I have over the last however many years. And you know what? I think some of us in our open, care-y, share-y, OER-y community forget how hard it can be to start being open and to develop open habits. I am getting a bit of a reputation here for saying (perhaps slightly flippantly) “just slap a CC licence on it”, and then, more importantly, “stick it somewhere other people can find and use it”. It really is that simple. However, I am still met with wide eyes and doubting faces. Sharing and being open is a great thing in context, but the benefits aren’t always obvious, and there is a lot of confidence building and hand holding to be done yet. And that is always the part of “the war” that seems to be forgotten about. Developing people and habits is where any education battle is really won or lost.

I have come from an incredibly privileged position where I was able to be in almost at the start of developments, particularly in the UK, around OER and open practice. I had time to explore the issues, play in open playgrounds, build my online networks, be a very small part of the twitterati, build up my confidence around blogging and sharing my thoughts with others, share slides with images attributed, and try things just because I could. Most jobbing academics, learning technologists, librarians and other support staff don’t have that luxury. I now have even more respect for those who do make time to engage externally. With cutbacks to funding from bodies like Jisc, opportunities for experimentation and risk taking are becoming less and less common. However, I can (and am) doing as much as I can to support openness across our institution, from policy level to hand-holding level.

The irony of this is that as I connect and share more with my new internal networks, I feel that I am sharing less and less with my external networks. I certainly don’t spend as much time on Twitter, which maybe isn’t such a bad thing . . . In preparation for this week, I was heartened to see that people in my Twitter network do still consider me an open practitioner (this storify collates a few responses). My former Cetis colleague David Sherlock, in this response to a tweet from me, points out another side to why people might not be open: who controls our open communication networks, and who owns our data? That hadn’t been on my mind when thinking about this post, but it is a crucial point. Our networks and data aren’t only valuable to us; they have other economic values. We do need to remember that seemingly open and free services do have economic models.

Last year at the Open Scotland summit, Cable Green gave a great line: “the opposite of open is not ‘closed’, the opposite of open is ‘broken’”. However good a line that is, in reality things are more nuanced. In trying to support others to be open I may, for a time, appear closed, and may even feel a bit broken and bruised. I’m not working with broken people or systems, just ones that need time and support to be comfortable with being open in ways that work for them. It is my open practice, and the support from my open networks, that continues to give me what I need to keep being open and to contribute to our collective development and understanding of what being open actually means.


Biography and Contact Details:

Sheila MacNeill

Sheila is UK Learning Technologist of the Year, 2013.

She is interested in all aspects of the development and use of technology in education. She is a Senior Lecturer in Blended Learning at Glasgow Caledonian University.

Over the past 10 years her work has centred on developments in the Higher Education sector through her work with CETIS.

For further biographical details please see Sheila’s About.me page.

Sheila MacNeill
Senior Lecturer
Blended Learning
Glasgow Caledonian University
Glasgow

Blog: How Sheila Sees It
Twitter: @sheilamcn



Posted in Guest-post, openness | Tagged: | 10 Comments »

Guest Post: Open Education and Staff Development at the University of Salford

Posted by gdfielding on 13 March 2014

The third annual Open Education Week (#openeducationwk) takes place from 10-15 March 2014. As described on the Open Education Week web site, “its purpose is to raise awareness about the movement and its impact on teaching and learning worldwide”.

My Cetis colleagues and I are supporting Open Education Week by publishing a series of blog posts about open education activities. The Cetis blog provides a series of posts from Cetis staff describing their work across a range of open education areas. These posts are complemented by a series of guest posts on the UK Web Focus blog from people I have worked with who are active in open education.

The third guest post in the series is written by Gillian Fielding of the University of Salford. In this post Gillian reports on “Open Education and Staff Development at the University of Salford“.


Open Education and Staff Development at the University of Salford

Thanks to Brian for asking me to write a guest blog. Being ‘open’, I’ll admit I’ve never written a guest blog before, so it’s a bit scary. Thanks also to the thought-provoking guest bloggers: Doug Belshaw, who posted yesterday, and Ross Mounce, who wrote a guest post during Open Access Week 2012. Ross’s comment that “things were different before the Internet!” made me smile; it and Doug’s suggestion of writing a policy openly are both things I shall return to later.

I am not going to debate definitions of “open” and so on; these have been discussed elsewhere. I am going to focus on our staff development activity in open educational practice and on how the Internet has changed it.

I’d also like to say thank you to Tim Berners-Lee for the Web (happy 25th). It has not only kept me in work for the last 22 years but has made my work in learning and teaching even more interesting. Looking back I have loved (almost) every moment of it and I still feel as passionate about it today as I did in the early 1990s. Open education and open educational practices, e-learning, collaborative learning, blended learning, interactivity, multimedia, mobile learning … all open up new opportunities, debates and challenges. Sometimes we struggle to keep up, or should that be we always struggle to keep up? For example, it was only last year that we introduced a staff development workshop on Twitter, yet Twitter was launched in 2006. We do not have a social media policy yet; we just do it, and to great effect I might add. Salford is in fourth position in the “theunipod” national university rankings on social media use. Is that because we don’t have a policy, I wonder? Similarly, we do not have a University policy on open educational practice/resources; staff just do “it”.

We are currently developing both policies, better late than never. And I feel we need these policies to endorse open practices and social media use: to say to staff, yes, it’s great, embrace it, do it, it has huge benefits for you, your students and the University (just pay regard to the potential risks and drawbacks). In our development sessions we illustrate the benefits with a case study from one of our Professors. The month the Prof set up social media accounts, downloads of his open access articles (stored in the University’s Institutional Repository) tripled, and he has seen other tangible benefits too. (Incidentally, I am going to pass on Doug’s suggestion of open policy making to the teams concerned.)

The other staff development workshop we offer in this area is “Managing Your Digital Identity” (introduced late last year). Next month we are introducing a Facebook session and, hopefully, other workshops later in the year. We have always responded to individual requests for support.

Last year we introduced a new module on the Post Graduate Certificate in Education (PGCAP) called Flexible, Distance and Online Learning (FDOL). This embedded social media and included a unit on open educational practices.

The module was 13 weeks in duration. Eleven “classes” were delivered fully online (synchronously, using Collaborate). However, the initial class and the week 6 class were on-campus physical sessions (apart from three students who joined virtually – how flexible was that!?). It was only open to our PGCAP students; it is not an open course, although the original module designer has an open version available. We used a variety of tools, both synchronous and asynchronous: Twitter, Google+, Google Hangouts, Blackboard, Skype, YouTube, WordPress, Collaborate, etc. (“Things were certainly different before the Internet!”). The module used online problem-based learning (PBL); the groups decided amongst themselves what problems to solve, their roles in the group, what learning technologies to use, and so on.

During the module we held two “Twitter Journal Clubs” (twjc), a new concept introduced to me by a student (Chloe James). A twjc is generally open to anyone who wants to join in a discussion (via Twitter) of a journal article. These take place within a defined time period, from an hour up to 24 hours. The benefits were that it was open, educational and concise (140 characters forces that), though it can be challenging to put deep thought into 140 characters; it was fun and innovative, but could be frustrating for others, especially if they were new to Twitter or the session was unstructured (it works best if you go through the article in order and the facilitator keeps time and keeps people on topic). For more information on our first twjc see my post on Using a Twitter journal club for learning and teaching (and my first foray into twjc’s). The second twjc was more exciting, as the article’s author saw it as his opportunity to start using Twitter and joined our debate. “Things were different before the Internet!” – that wouldn’t have happened.

Assessment of the module was the creation of a reflective portfolio (in WordPress). Students were encouraged to be innovative and creative and to use tools they had not used before. Examples included cartoons, images, videos, and even a specially written and performed song. In the spirit of open educational practice, students were encouraged to make their portfolios open to the world; however, this was not compulsory. Publishing on the web can be a very daunting undertaking, and publishing your reflections on your own professional practice is even more daunting for those newer to this publishing medium.

On that note, it seems entirely appropriate to finish with links to some of my students’ portfolios. These include their reflections on open educational practices and on using Internet tools, and on how they are applying, or will apply, what they learnt in their own professional practice. Note that the unit included a webinar on open educational practices led by Brian. This can be viewed in the recording of the webinar, and Brian’s reflections were given in his post Open Educational Practices (OEP): What They Mean For Me and How I Use Them.

Links to students’ portfolios are available below:

Paul Crowe http://cpdpaulcrowe.wordpress.com/ 
Alex Fenton http://cpdalexfenton.wordpress.com/
Natalie Ferry http://nferry2013.wordpress.com/
Liz Hannaford http://pgcaplizhannaford.wordpress.com/
Joe Telles http://jtee78.wordpress.com/
Nadine Watson http://cpdnadinewatson.wordpress.com/
Juliette Wilson https://cpdjuliettewilson.wordpress.com/

Biography and Contact Details

Gillian Fielding is responsible for the development of the digital literacies of staff at the University of Salford. She is also a PhD candidate at Lancaster University. Gillian has a background in lecturing and has a strong passion for enhancing the student and staff experience by using open access, the Social Web, learning technologies, mobile devices, etc.

Gillian has presented on learning technologies at conferences including: SOLSTICE, CLTR, LILAC, ECE, UCISA, and Blackboard World.

Twitter: g_fielding
Facebook: Gillian D Fielding
Email: g.d.fielding@salford.ac.uk
Telephone: 0161 295 2451

Posted in Guest-post, openness | Tagged: , , , , | 8 Comments »

Guest Post: What Does Working Openly on the Web Mean in Practice?

Posted by Doug Belshaw on 12 March 2014

The third annual Open Education Week (#openeducationwk) takes place from 10-15 March 2014. As described on the Open Education Week web site, “its purpose is to raise awareness about the movement and its impact on teaching and learning worldwide”.

My Cetis colleagues and I are supporting Open Education Week by publishing a series of blog posts about open education activities. The Cetis blog provides a series of posts from Cetis staff describing their work across a range of open education areas. These posts are complemented by a series of guest posts on the UK Web Focus blog from people I have worked with who are active in open education.

The second guest post in the series is written by Doug Belshaw whom I’ve known in Jisc circles for several years. Last year Doug, who now works for the Mozilla Foundation, was a plenary speaker at the IWMW 2013 event. In this post Doug asks “What does working openly on the web mean in practice?“. This is a very timely post in light of today’s Guardian article on “An online Magna Carta: Berners-Lee calls for bill of rights for web“.


What Does Working Openly on the Web Mean in Practice?

I’m what’s known as a ‘paid contributor to the Mozilla project’. You may think that’s just a quirky way to describe being an employee of the Mozilla Foundation but I think it highlights something important that I’d like to explore in this post.

Image: “Open” (CC BY-NC-SA mag3737)

Mozilla is a mission-driven organisation. You can read the manifesto here. But it’s not only Mozilla’s mission that makes it different. After all, there are plenty of charities, NGOs, and even for-profit organisations that aim to change the world for the better. Something fundamentally different about Mozilla is its commitment to ‘working in the open’.

There are many definitions of what ‘open’ means. At one end of the spectrum are those who use the term to mean nothing more than something being ‘accessible to everyone’. People who take this approach allow you to access their resources if you have the required hardware and/or software. At the other end of the spectrum (where you will find Mozilla) is what might be called ‘open practice’. This goes several stages further. You may access the resource and use it under the terms of an open license. You may remix (or ‘fork’) the resource to improve it or better fit your context. And you may discuss and suggest changes to the resource with those responsible for maintaining it.

Many of Mozilla’s working practices are heavily influenced by the Free Software Definition. However, it’s applied more widely than just to the creation of software. For example, Mozilla uses it when creating teaching resources as part of our Webmaker programme. It’s used when planning the future of the Open Badges Infrastructure. Mozilla chooses open source tools and protocols like Bugzilla, IRC and Etherpad that default to publicly-accessible outputs. Unless there’s a very good reason for doing otherwise, anyone can see what’s going on within Mozilla projects.

Working open is not only in Mozilla’s DNA but leads to huge benefits for the project more broadly. While Mozilla has hundreds of paid contributors, it has tens of thousands of volunteer contributors — all working together to keep the web open and a platform for innovation. Working open means Mozilla can draw on talent no matter where in the world someone happens to live. It means people with what Clay Shirky would call cognitive surplus can contribute as much or as little free time and labour to projects as they wish. Importantly, it also leads to a level of trust that users can have in Mozilla’s products. Not only can they inspect the source code used to build the product, they can actually participate in discussions about its development.

There’s a well-known saying called Linus’s Law which states, “given enough eyeballs, all bugs are shallow.” In other words, problems can be fixed if you get enough people to work on solutions. Of course, there needs to be an architecture of participation to make the process distinct from chaos, but get this right and — like Wikipedia and Mozilla’s Firefox — you end up with a competitive advantage. The cognitive surplus can be channelled away from TV watching towards things that benefit humankind.

In practice, working open for Mozilla looks like this: if you’re interested in something (whoever you are and wherever you’re from) you can turn up and get involved. If the community find your input useful, then you are likely to be given more responsibility. There are many ways this can happen, but becoming a module owner is a good example. Module owners are people in charge of a module or sub-module of code within a particular codebase. They have responsibility and authority that has been earned through a meritocratic system. For more on this, I’d highly recommend reading Peer Participation and Software: What Mozilla Has to Teach Government (it’s a free download).

But what does all this mean for education? As someone who’s worked in both schools and universities, I know how different the brave new world of the web can feel from the lived reality of institutions. One way to shake things up is to continually ask the question, “can we make this public?” And if that’s too radical, how about “is there any reason why this shouldn’t be shared with everyone at the institution?” It’s a truism that innovation comes from the edges; you’re unlikely to know where the best ideas are residing unless you give people a platform to share them. And one of the easiest ways to provide such a platform is to use the web.

I won’t deny that there may be legitimate reasons for sometimes restricting access to resources, using closed-source software, and privileging top-down decision making. However, I’d suggest that these cases are probably rarer than we collectively admit. Why not try inviting comments from everyone connected with your institution or organisation next time you’re drafting a new policy? How about throwing open the doors (perhaps virtually?) of your next meeting? Next time you’re choosing a digital tool, is it worth considering privileging Open Source software?

There’s much to say on this issue, but if you’ll excuse me I’m going to have to go. A Mozilla contributor is pinging me on IRC…


Biography and Contact Details

Dr Doug Belshaw, Web Literacy Lead for the non-profit Mozilla Foundation, is an educator, researcher and writer.

Contact details:

Email: doug@mozillafoundation.org
Website: http://dougbelshaw.com/
Twitter: @dajbelshaw



Posted in Guest-post, openness | Tagged: , , | 3 Comments »

Guest Post: Open Education Data

Posted by mariekeguy on 11 March 2014

The third annual Open Education Week (#openeducationwk) takes place from 10-15 March 2014. As described on the Open Education Week web site, “its purpose is to raise awareness about the movement and its impact on teaching and learning worldwide”.

My Cetis colleagues and I are supporting Open Education Week by publishing a series of blog posts about open education activities. The Cetis blog provides a series of posts from Cetis staff describing their work across a range of open education areas. These posts are complemented by a series of guest posts on the UK Web Focus blog from people I have worked with who are active in open education.

The first guest post in the series is written by my former colleague Marieke Guy. After working at UKOLN for 13 years Marieke moved to the Open Knowledge Foundation last year. In this post Marieke reviews her work at the Open Knowledge Foundation on open education data.


Open Education Data

Hi, I’m Marieke Guy and I work for the Open Knowledge Foundation, a global not-for-profit organization that wants to open up knowledge around the world and see it used and useful.

My main area of interest is open education: I co-ordinate the Open Education Working Group and I work on a project called LinkedUp. LinkedUp is an EU-funded project that aims to push forward the exploitation of public, open data available on the Web, in particular by educational institutions and organizations. It is doing this through a series of competitions aimed at developers, called the LinkedUp Challenge. For the challenge we ask developers to create interesting and innovative tools and applications that analyse and/or integrate open web data for educational purposes.

Defining the terms…

Within the project we use terms like ‘open education data’, ‘open educational data’ and ‘open data in education’ fairly loosely, partly because the terms themselves are ill-defined. For the sake of this post I want to drill down and consider one particular characterization of open education data, and consider its use.

Open education data can refer specifically to the open data that comes out of educational institutions. By educational institutions I am referring here to all physical places of study, from schools to further education colleges and universities. One could broaden this out to include data from online courses, though that is a topic for another post!

So we are really talking about administrative data, which could include:

  • Reference data such as the location of academic institutions

  • Internal data such as staff names, resources available, personnel data, identity data, budgets

  • Course data, curriculum data, learning objectives

  • User-generated data such as learning analytics, assessments, performance data, job placements

Naturally these types of data can be classified in a variety of different ways, so you can think of them in terms of content, but also in terms of provenance, openness (some are more openly available than others), granularity, legal restrictions and so on. The World Economic Forum report Education and Skills 2.0: New Targets and Innovative Approaches sees there as being two types of education data: traditional and new. Traditional data sets include identity data and system-wide data, such as attendance information; new data sets are those created as a result of user interaction, which may include web site statistics, and inferred content created by mining data sets using questions.
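To make the idea of classifying datasets along these dimensions a little more concrete, here is a small illustrative sketch in Python. The field names and example values are my own choices for illustration, not a recognised metadata standard.

```python
# An illustrative sketch only: one way to record the classification
# dimensions mentioned above (content type, provenance, openness/licence,
# granularity) as structured metadata attached to each dataset.
from dataclasses import dataclass

@dataclass
class EducationDataset:
    title: str
    content_type: str   # e.g. "reference", "internal", "course", "user-generated"
    provenance: str     # who published the data
    licence: str        # openness / legal restrictions
    granularity: str    # e.g. "national", "institutional", "per-pupil"

# A hypothetical example entry
schools = EducationDataset(
    title="Locations of academic institutions",
    content_type="reference",
    provenance="government",
    licence="Open Government Licence",
    granularity="institutional",
)
print(schools)
```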

Whatever their classification it is clear that open education data sets are of interest to a wide variety of people including educators, learners, institutions, government, parents and the wider public.

Open Education Data Sets

Here in the UK you could start thinking about some of the datasets that fall under this definition. Many of them are held by the government, such as school performance data, data on the location of educational establishments, and pupil absenteeism figures. There is also data from individual institutions, such as that collated on Linked Universities and on data.ac.uk, and data from research into education, such as the Open Public Services Network report into Empowering Parents, Improving Accountability.

Previously, much of the release and use of open educational data sets was driven by the need for accountability and transparency. A well-cited global example is Uganda, where the government allocated funding for schools but corruption at various levels meant that much of the money never reached its intended destination. In response, the government initiated a programme of publishing data on how much was allocated to each school, and between 1995 and 2001 the proportion of allocated funding which actually reached the schools rose from 24% to 82%. There were other factors at work, but Reinikka and Svensson’s analysis showed that data publication played a significant part in the improvement.

However, recent developments, such as the current upsurge of open data challenges (see the ODI Education: Open Data Challenge and the LAK data challenge), mean that there is increasing innovation in data use, with opportunities for efficiency and for improvements to education more generally. The potential uses are broad. Data sets can support students through the creation of tools that enable new ways to analyse and access data (e.g. maps of disabled access) and by enriching resources, making them easier to share and find and personalising the way they are presented. Open data can also support those who need to make informed choices about education (e.g. by comparing scores), and support schools and institutions by enabling efficiencies in practice (e.g. library data can help support book purchasing).
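As a concrete illustration of the “comparing scores” use case, the sketch below ranks schools in one local authority from an open performance data file. It is only a sketch: the file name and column names are hypothetical, whereas the published UK performance tables use their own field names.

```python
# A sketch of the "comparing scores" use case: ranking schools in a local
# area using an open performance dataset. File name and columns are
# hypothetical placeholders, not the real published field names.
import pandas as pd

performance = pd.read_csv("school_performance.csv")

# Keep only schools in the area of interest, then rank them by a headline
# attainment measure
local = performance[performance["local_authority"] == "Salford"]
ranked = local.sort_values("avg_points_score", ascending=False)

# Show the top ten schools alongside an absence measure for context
print(ranked[["school_name", "avg_points_score", "pupil_absence_rate"]].head(10))
```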

Education technology providers are also starting to see the potential of data mining and app development. For example, open education data is a high priority area for the Pearson think tank; back in 2011 they published their blue-skies paper How Open Data, data literacy and Linked Data will revolutionise higher education. Ideas around how money, or savings, can be made from these data sets are slowly starting to surface.

Application of Data Sets

Some of the interesting UK applications of these data sets can be seen through services like Which? University, which builds on the annual NSS survey data held in Unistats, the Key Information Sets and other related data sets to help students select a university; Locrating, defined as ‘To locate by rating: they locrated the school using locrating.com’, which combines data on schools, area and commuting times; the London Schools Atlas, an interactive online map providing a comprehensive picture of London schools; and equipment.data.ac.uk, which allows searching across all published UK research equipment databases through one aggregation portal.

The UK is not alone in seeing the benefit of open education data. In Holland, for example, the education department of the city of Amsterdam commissioned an app challenge similar to the ODI one mentioned earlier. The goal of the challenge was to provide parents with tools that help them to make well-informed choices about their children’s schooling. A variety of tools were built, such as schooltip.net, 10000scholen.nl, scholenvinden.nl, and scholenkeuze.nl. The various apps are now displayed on an education portal focused on finding the ‘right school’.

Further afield, in Tanzania, Shule.info allows users to compare exam results across different regions of Tanzania, follow trends over time, and see the effect of the adjustments made to yearly exam results. The site was developed by young Tanzanian developers who approached Twaweza, an open development organisation, for advice rather than for funding. The result is beneficial to anyone interested in education in Tanzania.

The School of Data, through their data expeditions, are starting to do some important work in the area of education data in the developing world. And in January the World Bank released a new open data tool called SABER (the Systems Approach for Better Education Results), which enables comparison of countries’ education policies. The web tool helps countries collect and analyse information on their education policies, benchmark themselves against other countries, and prioritise areas for reform, with the goal of ensuring that all children and youth in those countries go to school and learn.

All over the world, prototypes and apps are being developed that use and build on open education data.

Data Challenges

However, there are still challenges facing those keen to develop applications using open education data. Privacy and data protection laws can often prevent access to some potentially useful data sets, yet many data sets that are not personal or controversial remain unavailable, or available only under a closed licence or in an inappropriate format. This may be for many reasons, with trust, concerns around quality, and cost being the biggest issues. Naturally there is a cost to releasing data, but in many cases this can be far outweighed by cost savings later down the line; for example, a proactive approach will save time and effort when FOI requests are made.

Open Education

So while you may find this all very interesting (I hope!), it’s possible you are still asking how all this relates to open education.

My answer would be that, firstly, open education is fundamentally about removing barriers to education, whether barriers to entry or barriers to content, data or knowledge. Opening up data of any sort fits with this agenda, and activities around open licensing in particular are both important and hugely supportive. But secondly, and possibly more importantly, opening up education data gives us the potential to see education and its components differently. This new perspective provides us with an opportunity to revolutionise education and make it better.

As David Lassner, interim president and former chief information officer at the University of Hawaii, explains:

“Our opportunities for improvement are immense, and data provide a powerful lens to understand how we are doing internally and relative to our peers. This applies across all segments of what we do, from teaching and learning to administrative support. Performance metrics and dashboards are the beginning, but using data to understand deeper correlations and causality so we can shape change will be critical as we strive to advance our effectiveness.”

The movement for open education is ultimately about wanting better education for all. Open education data is proving to be an important instrument in achieving that goal.

If you would like to participate in more discussions around open education data and its role in open education then do join the Open Education Working Group mailing list.


Biography and Contact Details

Marieke Guy is a Project Co-ordinator at the Open Knowledge Foundation. She leads on dissemination and community building on the LinkedUp Project and co-ordinates the Open Education Working Group.

Prior to joining the Open Knowledge Foundation she worked at UKOLN at the University of Bath on a number of digital information projects focussing on digital preservation, e-learning and social networking for communities such as the cultural heritage sector. She spent two years supporting higher education institutions with their research data management via the Digital Curation Centre institutional support work.

Marieke writes a blog about remote working.



Posted in Guest-post, openness | Tagged: , , | 2 Comments »

Guest Post: Sheila MacNeill’s reflections on the #byod4l “mini-MOOC”

Posted by sheilmcn on 31 January 2014

The #BYOD4L event took place this week. One of the aims of the five-day online course was to encourage collaboration. Brian Kelly and I have agreed to collaborate by writing guest posts on each other’s blogs. Brian’s post is available on my HowSheilaseesIT blog and my post is given below.


What was the byod4l event about?

The best place to get an overview of the event is the byod4l homepage, and last Sunday, in preparation for the week, I wrote this blog post which explains some of my thoughts and motivations for participating.

What did I learn?

To be honest I’m not completely sure yet, as I think there is another C that needs to be added to the list – contemplation. I think I need a couple of days to cogitate and reflect on the week. But a few things come to mind, including time and chaos – more on that later in the post.

Connecting

I’ve tried to instigate some f2f connections here and later today a few of us are having a MOOC meet-up to have a chat about our experiences. I’ve managed to join in a couple of the twitter chats at night and that has allowed me to connect with old friends and find some new ones via twitter. This has also been a great way for Brian and I to connect in a different context. My connecting blog post tho’ was about a different kind of connection.

Communicating

I’ve pretty much stuck to Twitter, my blog and Google+. I find the UI of the iPad Google+ app much nicer now, so I am more inclined to look at it than before. I also automagically publish blog posts to various places including Google+, so I’m there even when I’m not. Brian and I also experimented with a bit of video communication.

Curating

To be honest, I’m leaving curating to others; the team are doing a grand job of curating tweets, posts etc. I shared my thoughts on curating in this post.

Collaborating

I hope this post is a form of collaboration, and that the different approaches Brian and I have shared resonate with others. I also hope that focused interactions with others in my peer group online and within my institution will lead to more collaboration.

Creating

Well, I have created four posts over the week and one or two tweets :-) and I’ve created time for some f2f discussions with colleagues, which I think is really important.

Final Thoughts

This is the hardest bit to write. As I said earlier, I’m still processing the week. It’s been really useful to have some f2f chats with people and get different perspectives on things. It has reinforced the fact that I don’t mind a bit of chaos, and that I am confident enough online to “have a go” without always having a clear goal in mind. This is probably equally a good and a bad thing!

However, the one thing that I keep coming back to is time. Participating this week has required a time commitment. Some evenings I’ve been able to join the Twitter chat, others I haven’t. Some days I’ve been able to take a bit of time during the day to watch the videos or do a quick blog post, others I haven’t. Today a few of us have blocked some time out to discuss the experience. Creating that time is really important for us as academic staff, but I think we also need to find ways to give students more time to become comfortable with using their own devices in an educational context. If we are serious about integrating byod4l approaches into education, then we need to move beyond byod policies and think about how to redesign our courses to allow some time to just try things. We all need some space and time to play (or experiment, if you prefer) to develop the confidence and digital literacies needed to engage more fully with the potential that byod4l approaches to connecting, communicating, curating, creating and collaborating can offer.


This guest blog post was written by Sheila MacNeill, Senior Lecturer in Blended Learning at Glasgow Caledonian University, as an experiment for BYOD4L. Sheila normally publishes on the How Sheila Sees It blog.

Posted in Guest-post | Tagged: | 1 Comment »

Institutional Web Team Blog Aggregator: Advance Notice of Closure

Posted by Brian Kelly (UK Web Focus) on 3 June 2013

The Institutional Web Team Blog Aggregator

IWTB: Institutional Web Team blog aggregator

The Institutional Web Team Blog Aggregator was announced at the final session of the IWMW 2011 event held at the University of Reading.

The aim of the service was to provide a centralised location which aggregates blog posts provided by institutional Web teams, by individuals who post primarily about their work in supporting institutional Web services or by others who support members of the institutional Web management community.

In a post on “Sharing Job Information More Effectively” I gave an example of one additional use case: ensuring that members of Web teams at other institutions could easily find details of job vacancies.

However, it’s probably fair to say that the use of blog technologies as a simple mechanism for letting others know about the work being carried out in Web teams, plans for new areas of work and more general sharing of information hadn’t taken off to the extent I had hoped.

Advance Notice of Closure

In light of the forthcoming cessation of UKOLN’s core funding we are in the process of archiving our digital content and, where appropriate, shutting down services.

This post provides notification of the closure of the Institutional Web Team Blog Aggregator. It should be noted that this does not mean the loss of significant content: the aggregator is a collection of blog content published elsewhere. If you find this aggregation of content useful, I suggest that you visit the IWTB blog while it is still available, make a note of the RSS feeds of the blogs of interest to you and add them to your own blog reader.
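For anyone who would like to recreate a lightweight aggregation of their own, the sketch below uses the Python feedparser library to pull the latest entries from a hand-maintained list of feeds. The feed URLs shown are placeholders: substitute the RSS feeds you have noted down from the IWTB blog.

```python
# A minimal do-it-yourself aggregator, sketched with the feedparser
# library (pip install feedparser). The feed URLs are placeholders:
# substitute the RSS feeds you noted down from the IWTB aggregator.
import feedparser

feeds = [
    "https://example.ac.uk/webteam/feed/",
    "https://example.org/institutional-web/rss",
]

for url in feeds:
    parsed = feedparser.parse(url)
    print(parsed.feed.get("title", url))
    # Show the three most recent entries from each blog
    for entry in parsed.entries[:3]:
        print("  -", entry.get("title", "(untitled)"), entry.get("link", ""))
```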

Note that we cannot guarantee that the service will continue to be available after 30 June.

Posted in Blog, rss | Tagged: | 3 Comments »

Guest Post: Opening up University Space online using Google Street View

Posted by Brian Kelly (UK Web Focus) on 6 March 2013

The UK Web Focus blog invites occasional guest posts which cover topics likely to be of interest to readers of this blog. In this guest post Edward Miller, a graduate of the University of Sheffield, describes ways of opening up university space online using Google Street View. This post is based on his work for the University of Sheffield.


Google Street View inside Sheffield University Information Commons

Last month, Sheffield University became the first university to have Google Street View inside one of its buildings. So far, the ground and first floors of the university’s flagship learning space, the Information Commons, have been mapped out, with more buildings on their way.

To see the imagery, just drag the little man into the building or you can go directly to it. Don’t forget to explore both floors by going up or down the stairs.

Once more buildings have been made live on Street View, each building will be embedded into the university’s website, along with integration into the University’s Facebook page.

Edward photographing Street View inside Sheffield University Information Commons

Google started to roll out Street View inside buildings about a year and a half ago, initially just in the United States, and now has a roster of “Google Trusted Photographers” across several countries who are able to photograph the Street View imagery.

In addition to photographing Street View imagery, photographers also take still photos around each venue for use in offline and online marketing; these are uploaded to your Google Place Page to help improve a building’s or business’s web presence and SEO.

What began with a few streets in the States five years ago has now expanded to 5 million miles of road across 48 countries, with 96% coverage of all roads in the UK. We can travel from the rainforest to the Grand Canyon, or from caves in Japan to a hut in the Antarctic, in a matter of seconds. It allows us to visit places halfway across the globe that are inaccessible, whether because of time, money or practicality.

For universities, this means prospective students who are unable to visit a university in person can get a feel for the environment from the comfort of their homes on their computer or mobile device. For international students in particular, this could be an invaluable resource. In a world becoming increasingly digital, Street View allows universities to celebrate, promote and attract people to their physical home, online.


Edward Miller, a graduate of the University of Sheffield, started a business producing interactive photography in his third year of university whilst reading Philosophy and Psychology. He specializes in large-scale ‘gigapixel’ photos that can be tagged through Facebook and is trusted by Google to produce Street View imagery. Since leaving university, he has built a client list including The Mail, ESPN, the Press Association and Vogue.

Contact Details

Website: www.reaxive.com
Email: edward@reaxive.com
Telephone: +44 (0)20 3397 7989



Posted in Guest-post, mashups | 1 Comment »

2012 in review

Posted by Brian Kelly (UK Web Focus) on 30 December 2012

The WordPress.com stats helper monkeys prepared a 2012 annual report for this blog.

Here’s an excerpt:

19,000 people fit into the new Barclays Center to see Jay-Z perform. This blog was viewed about 90,000 times in 2012. If it were a concert at the Barclays Center, it would take about 5 sold-out performances for that many people to see it.

Click here to see the complete report.

Posted in Blog, blog-summary | Leave a Comment »

Guest Post: “1 billion people, 17 million students, 500+ colleges and millions of eager learners”

Posted by Brian Kelly (UK Web Focus) on 7 December 2012

Today’s guest post is written by Gwen van der Velden, Director of Learning and Teaching Enhancement at the University of Bath. Following a chat last night along our shared corridor on level 5 of the Wessex House building, Gwen kindly agreed to write a guest post about her recent trip to India.


I work a few offices away from Brian Kelly and Paul Walk and other colleagues in UKOLN. We chat often in the corridors, and today I told Brian about last week’s trip to Delhi, India. Because of my enthusiasm about what we found in relation to e-learning, new technologies and connectivity for the public good, Brian asked me to blog and share some of the inspiration. For context, when I say ‘we’ I am not being royal; I am also referring to Kyriaki Anagnostopoulou, our Head of e-Learning at Bath, who has the kind of international reputation that got us invited to India in the first place.

The Indian government works with the HE sector on increasing access to HE for learners who cannot access it at the moment. The HE system in India is highly regulated and it isn’t a market where entry is easily possible. Many UK universities are working to establish themselves there, but this is far from easy. Moreover, there are not enough Indian faculty members to grow the existing universities or establish new ones, and student places are very, very limited considering the level of interest in university study. We heard that for one of the Institutes of Technology there are over 40 applicants for each available place. So a different approach is required. Against this background there is a bigger drive to educate India out of poverty. Experiencing New Delhi, you can see what is possible. But driving into old Delhi, we saw what is still to be achieved. It is a country of zest, opportunity, large numbers (1 billion people) and great economic and social challenges…

The Ministry of Human Resources Development, which oversees HE, is investing $1 billion into growing HE. Crucial to their plan is the National Mission on Education through ICT. Growth is going to come through reaching all corners of India with connectivity, and that is why there is an incredible project of taking glass fibre cable into the farthest ends of India. A huge development, and often combined with putting solar energy provision in place where no electricity existed before. WiFi connections are going to become available through subscriptions of 40 rupees a year. That’s about 50 pence. It shows some clear government financial commitment. And it’s all for learning – how inspiring is that?

Aakash tablet (image from Wikipedia: http://en.wikipedia.org/wiki/Aakash_(tablet))

The second step is to have the learning platforms that connect learners to the curriculum, teaching and assessment. This too is addressed in the most imaginative way. You may have heard of the Indian invention of a $30 tablet, the Aakash (illustrated). I understood that Aakash means ‘clouds’, or ‘sky’, and that shows again how India is reaching for the sky here. The Aakash 1 apparently didn’t get past the pilot, but I’ve held the Aakash 2, played with it (thanks Prof Kannan Moudgalya) and sat in amazement at what a smart little thing this is. It’s less than half the size of an iPad but large enough to work comfortably with. It has some good processing power and I saw some software on it that allows you to do programming – useful for Comp Sci students and e-developers. The current pilot means 100,000 learners are testing it out, and we understood from government officials that another 1.5 million are to be piloted in early spring next year.

With connectivity and the technology platform under way, the content needs to get out there, and this is where our discussions came in. At the moment universities are encouraged to make as much content available as possible. They all do it in different ways. In some cases it is curriculum, sometimes just content, and in some cases there is a larger or smaller effort towards designing materials for learning. Designing content for learning is clearly a developing field and, again, full of challenges in India, such as the need for various language versions and cultural context adjustment; there are also issues about what text, expression or content may or may not be used, given cultural, religious or property-rights sensitivities. (On that note, this entry is not a statement sanctioned or approved by the Indian government or any partners we have worked with. It’s just my own account!)

Interestingly, at the conference – courtesy of the British Council and Indira Gandhi National Open University – the Ministry’s Secretary told us that developments now in universities have to be about quality, not quantity. It isn’t good enough to just put content online, if ICT is not used effectively to actually improve learning. Excellent.

This three-step approach is incredible considering the size of the country: 1 billion people, 17 million students, 500+ colleges and millions of eager learners wanting to get ahead. We were impressed by the university colleagues we met from all over India. They were genuinely driven by seeing universities as a public good: educating the country out of poverty and developing the technologies to do it. It explains where all these inspired e-ideas are coming from. Watch that space – I can’t help thinking there is more to come from the East.


Gwen van der Velden
Director
Learning and Teaching Enhancement
University of Bath.

Email: g.m.vandervelden@bath.ac.uk
Web page: http://www.bath.ac.uk/learningandteaching/about/staff/g.vandervelden.html
Twitter: @gwenvdv

Kyriaki Anagnostopoulou
Head of e-Learning
Learning and Teaching Enhancement
University of Bath.

Email: k.anagnostopoulou@bath.ac.uk
Web page: http://www.bath.ac.uk/learningandteaching/about/staff/k-anagnostopoulou.html



Posted in General, Guest-post | Tagged: | 1 Comment »

Guest Post: Reflections on Open Access Week 2012 at the University of Oxford

Posted by Brian Kelly (UK Web Focus) on 4 December 2012

During Open Access Week a series of guest blog posts was published on this blog in which three repository managers shared their findings from SEO analyses of their institutional repositories.

As a follow-up to those posts, which were motivated by a commitment to openness and sharing which is prevalent in the repository community, this post by Catherine Dockerty (Web and Data Services Manager, Radcliffe Science Library) and Juliet Ralph (Bodleian Libraries Life Sciences Librarian) provides a summary of the activities behind the Open Access Week event at the University of Oxford.


Open Access Week at Oxford

Open Access Week 2012 saw a determined effort from the Bodleian Libraries of Oxford University to shine a light on developments in Open Access with a full week-long programme of events. This was prompted by the need to assess the state of play in Open Access (OA) which, for major research institutions such as Oxford, is particularly urgent in the wake of the publication of the Finch Report. It was the second year we have participated in Open Access Week – last year we held a single event and we wanted to do a lot more this time round.

What We Were Trying To Do

We had a number of specific things we wanted to achieve through our programme:

  • Increasing the knowledge of library staff. All reader-facing staff will potentially deal with enquiries relating to Open Access.
  • Assembling and showcasing the expertise of Bodleian Libraries staff in Open Access. Readers need to know what we can do for them.
  • Raising awareness of publishing options to academic researchers.
  • Promoting submission to Oxford’s institutional repository ORA (Oxford Research Archive). Oxford currently has mandatory deposit for doctoral theses, but not for research papers.
  • Highlighting Oxford’s progress in the field of Open Data.

What We Did

We put together a programme of talks and other activities, most of which were lunchtime sessions and took place at the Radcliffe Science Library, one of the Bodleian Libraries and Oxford University’s main library for the sciences and engineering. The majority of speakers were library staff. The focus was on science, but events covering law and medicine were included and there were attendees from the humanities and social sciences.

An evening session, “Bodley’s ‘Republic of [Open] Letters’”, was hosted by the Oxford Open Science Group and highlighted the DaMaRO Project, which is developing a research data management policy and data archiving infrastructure for Oxford.

The presentations are available online.

Wikipedia Editathon

Ada Lovelace by Margaret Carpenter, 1836

The final event of the Open Access Week programme was a Wikipedia “Editathon” on the theme Women in Science. The event was organised as a collaboration between the Bodleian Libraries and Oxford University’s IT Services, and was a follow-up to the Ada Lovelace Day event at the Royal Society the week before. This tied in neatly with Open Access Week as we were able to highlight open access sources for use in updating articles. Our event was publicised at the Royal Society event and on the Ada Lovelace Day Wikipedia page.

Having an Oxford-based Wikipedia event was also an opportunity to encourage academics and students to get involved in editing Wikipedia, which is reliant on expert contributors to add high quality articles and improve existing ones. Wikipedia has a readership vastly exceeding that of any academic journal, and presents an opportunity for academics to have an impact on a wider audience.

Juliet Ralph (Bodleian Libraries Life Sciences Librarian) kicked off the proceedings with a talk introducing Wikipedia and outlining the format of the session. Online resources for editing articles were suggested, focusing on open access. The fact that the Royal Society was providing free access to all its publications until 29th November 2012 was highlighted. A collection of printed reference materials from the RSL’s collection was also provided.

A list of articles for adding/updating was provided as guidance to participants, but this was not intended to be prescriptive. The list was the same one as used at the Royal Society event, updated to reflect all the work done that day.

We were very pleased that Oxford-based Wikipedians James and Harry Burt were able to attend and assist the assembled editors. They also treated us to an impromptu presentation on their work as long-time Wikipedia editors.

Online participation via Twitter was encouraged using the hashtag #WomenSciWP (the same as for the Royal Society event). Note that a Twubs archive of the tweets is available. The event was also live-tweeted from the RSL’s Twitter feed (@radcliffescilib).

By the end of the session two new articles were created and 12 updated. Attendees were mainly research staff and postgraduate students from the fields of science and medicine. Also present were two archivists from the Saving Oxford Medicine project who posted a blog post about the work.

Special thanks to:

  • James and Harry Burt for presenting and for help they gave to other participants.
  • Izzie McMann and Karen Langdon (Radcliffe Science Library staff) for assisting participants on the day.
  • Janet McKnight (IT Services) and Alison Prince (Bodleian Libraries Web Manager) for help in organising and publicising the event.
  • Andrew Gray (British Library Wikipedian in Residence) and Daria Cybulska (Wikimedia UK) for publicising the Editathon and supplying learning materials for the session.

Reflections

We certainly achieved the aim of increasing knowledge of OA issues among library staff within the sciences, several of whom attended more than one event. In future we will aim to actively promote the staff development benefits of participating to all Bodleian Libraries staff, not just those in the sciences. Our collaborations with the Open Science Group and IT Services were successful, and we hope to work together with them on future events.

We fulfilled all our original intentions to some extent, but some events were not well attended in spite of being widely publicised, although they were positively received by those who did attend.

The timing of Open Access Week is a problem for Oxford as the start of the academic year is later than for most UK universities, which means the new term is just getting underway in earnest and there are many other events to compete with. Staff time in planning events is also in short supply as reader-facing staff will have been prioritising inductions for new students over the previous weeks.

The Wikipedia event was a success (well attended with positive feedback) and we would certainly hold a similar event in the future, although not necessarily as part of Open Access Week. The fact that it was a hands-on session went down well, and the Women in Science theme attracted interest.

Next Time

Holding events at lunchtime was evidently not popular and we may decide to move them to an afternoon slot (colleagues who run user education programmes had a higher take-up when they did this). We may also move the sessions out of the library into academic departments or colleges, and hold events at other times of year.

We will be making a concerted effort to involve well-known speakers, rather than relying heavily on library staff.

We will be looking to encourage other OA events in Oxford and elsewhere, and we will also think about using online chat as well as Twitter for online participation. The planning starts now!




Catherine Dockerty is the Web and Data Services Manager at the Radcliffe Science Library at Oxford University, where her role is to manage online content, social media and communications, and to support colleagues in serving the University’s teaching and research in the sciences. She has spent 13 years working in various reader services roles at Oxford University, and has also worked in the civil engineering industry and the book trade.

Juliet Ralph is the Subject Librarian for Life Sciences and Medicine in the Bodleian Libraries at Oxford, where she has worked for over 15 years. She is one of many librarians involved in providing support for research at Oxford, including Open Access.

Posted in Guest-post, openness, Repositories | Tagged: , | 1 Comment »

Social Media Analytics for R&D: a Catalan Vision

Posted by Brian Kelly (UK Web Focus) on 5 November 2012

In this guest post Xavier Lasauca i Cisa reviews how institutions that are part of the Catalan R&D environment make use of social media and describes the benefits of this approach. Xavier also discusses the metrics used by the Catalan Administration to evaluate and measure the impact of the government’s presence in this area and the benefits for the public.

This guest blog post builds on previous posts on this blog which have described use of social media in the UK higher education sector, including posts on Social Analytics for Institutional Twitter Accounts Provided by the 24 Russell Group Universities, Use of Facebook by Russell Group Universities and Links to Social Media Sites on Russell Group University Home Pages.

The post has been published in the run-up to the SpotOn London (SOLO12) conference, which includes sessions on Assessing social media impact (#solo12impact), Altmetrics beyond the Numbers (#solo12alt) and Using Twitter as a Means of Effective Science Engagement (#solo12Twitter). The post aims to provide a wider view on approaches to the use of social media and the evaluation of its impact beyond the UK.


Introduction

The Directorate General for Research is the unit of the Generalitat de Catalunya (Government of Catalonia) responsible for promoting science and technology research centres, planning training and career development of researchers, promoting Catalan participation in national, European and international research programmes, and designing actions on science communication and dissemination in Catalonia, among other functions. This unit, along with the Directorate General of Universities, is part of the Secretariat for Universities and Research, which in turn is part of the Ministry of Economy and Knowledge, headed by Minister Andreu Mas-Colell. The parallel with the British political system is that the Directorate General for Research is the equivalent of the Government Office for Science within the Department for Business, Innovation and Skills.

As the person responsible for Knowledge Management and ICT on R&D, I am in charge of the management of R&D computer applications at the Directorate General for Research, of the technical coordination of the research website of the Ministry of Economy and Knowledge, and of an electronic newsletter (RECERCAT). I am also the person responsible, in conjunction with the Communication department of the Secretariat, for  the administration of  the Directorate General for Research profiles on social media (Twitter, Facebook, Flickr, etc.). In addition, I maintain a personal blog (“L’ase quàntic” or “The quantum donkey“) where I write about innovation in Public Administration, the use of social media in universities and research, the Open Galaxy (Open Access, Open Science, Open Data, Open Courseware…) and the issue of women in science, among others.

This article focuses on the use of social media by the units within the departments of the Catalan Government (specifically the Secretariat for Universities and Research), research centres, large research support infrastructures and the reference networks in Catalonia. I would like to thank Professor Miquel Duran, from the University of Girona, for his support in the preparation of data on the number of Twitter, Klout and Kred followers of the organisations analyzed during the second week of October this year.

A General Overview of the Catalan R&D System

The Catalan public R&D system is primarily composed of universities, research centres, large research support infrastructures, hospitals, science and technology parks, networks of reference and research groups.

The central topics in science policies applied in Catalonia in recent years are, on the one hand, talent attraction and retention, with excellence and internationalization as their benchmarks (a good example of this line of action is ICREA, Catalan Institution for Research and Advanced Studies), and on the other hand, a sustained increase in research funding, with the bulk of the spending allocated to research structures, both research centres and large facilities (such as the Alba synchrotron light facility or the MareNostrum supercomputer).
A good sign of the health of the Catalan scientific system is that, although the population of Catalonia represents 1.5% of the EU-27, the system has managed to attract 2.2% of the financing available from the European Union Seventh Framework Programme, and has obtained 3.4% of European Research Council (ERC) grants. Another relevant fact is that 2.9% of scientific publications in the EU-27 have been written by Catalan researchers. You can find these data and more information on the Catalan research system in the article by the Secretary for Universities and Research of the Catalan Government, Antoni Castellà, published in issue 1 of the journal Global Scientia.

Institutional support

Three social media accounts are being managed from the Secretariat for Universities and Research: the Directorate General for Research account (@recercat), the Directorate General of Universities account (@universitatscat) and the Secretariat for Universities and Research account (@coneixementcat).

The Twitter account of the Directorate General for Research is used to disseminate the scholarships and research grants funded by the unit, as well as the publications of the institution (for example, the most important news published in the newsletter RECERCAT). It is also used to post news and updates from the web, to promote the scientific dissemination activities of the unit and of the Recerca en acció (Research in action) website, as well as events, awards, scholarships, publications and other information from agents related to the R&D system. Apart from this public information service, the Twitter account also serves to promote government action (with links to press releases) and to share institutional statements from events or interviews with policymakers.

The institutional account management of the Secretariat for Universities and Research, as well as of other departments of the Catalan government, is based on the Style and usage guide of the Government of Catalonia’s social networks, produced by the General Directorate for Citizen Services and Publicity (GDCSP), at the Ministry of the Presidency of the Government of Catalonia. This publication establishes common guidelines for a consistent presence of the Government  of Catalonia on social networks and lists the different social media utilities, their various uses, the purpose of each network, recommendations for an appropriate and productive presence, and criteria for finding the best communicative style for each tool.

One of the most important chapters in the guide is dedicated to metrics, an essential tool to monitor the activity that is being done and to assess and measure the impact, in this case, of the presence of the Administration in this environment and the benefits it represents for citizens. Metric indicators are based on the following key concepts:

  • Dialogue: measures the degree of dialogue that the Government of Catalonia maintains with citizens on different social networks.
  • Reach: information on the distribution of the Government of Catalonia contents to the people who are part of the social network.
  • Action: indicates whether the content shared on the networks promotes activity.
  • Interaction: shows the global relationship between an account and its audience.
  • Acceptance (Applause): quantifies the degree of satisfaction.

For each of these key concepts, the indicators shown in Table 1 (List of indicators for Twitter and Facebook) and Table 2 (List of indicators for YouTube, Flickr and Slideshare) are used:

Concept | Twitter | Facebook
Audience | Followers | Friends
 | Tweets sent | Entries
Interactions | Mentions | Comments
 | Retweets (RT) | Shares
 | Clicks to links | Likes
Interest – Dialogue | Mentions/tweets | Comments/entries
Interest – Reach | RT/tweets | Shares/entries
Interest – Action | Clicks to links/tweets | –
Interest – Applause | – | Likes/entries
Interest – Interactions | (Mentions+RT)/tweets | (Comments+shares+likes)/entries
Commitment – Dialogue | Mentions/followers | Comments/friends
Commitment – Reach | RT/followers | Shares/friends
Commitment – Action | Clicks to links/followers | –
Commitment – Applause | – | Likes/friends
Commitment – Interactions | (Mentions+RT)/followers | (Comments+shares+likes)/friends

Table 1: List of indicators for Twitter and Facebook

Tool | Indicators
YouTube | Total number of videos uploaded; videos uploaded during the month; number of views of all the videos uploaded; visits to the channel; subscribers
Flickr | Total number of photos published; photos published during the month; number of views of all the photos published
Slideshare | Total number of presentations and documents published; presentations and documents published during one month; number of downloads of all the presentations and documents published; number of visits of all the presentations and documents published

Table 2: List of indicators for YouTube, Flickr and Slideshare
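
To make the Twitter column of Table 1 concrete, here is a minimal Python sketch that computes the guide's per-tweet ("interest") and per-follower ("commitment") ratios from raw monthly counts. The function name and the sample figures are hypothetical, invented purely for illustration; only the ratio definitions come from Table 1 above.

```python
# Illustrative only: compute the "interest" (per tweet) and "commitment"
# (per follower) indicators for a Twitter account, using the ratio
# definitions from Table 1. All sample numbers below are invented.

def twitter_indicators(mentions, retweets, link_clicks, tweets_sent, followers):
    interest = {
        "dialogue": mentions / tweets_sent,
        "reach": retweets / tweets_sent,
        "action": link_clicks / tweets_sent,
        "interactions": (mentions + retweets) / tweets_sent,
    }
    commitment = {
        "dialogue": mentions / followers,
        "reach": retweets / followers,
        "action": link_clicks / followers,
        "interactions": (mentions + retweets) / followers,
    }
    return interest, commitment

# Hypothetical monthly counts for an institutional account
interest, commitment = twitter_indicators(
    mentions=120, retweets=340, link_clicks=860, tweets_sent=95, followers=3300
)
print(interest)
print(commitment)
```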

In order to facilitate a better interpretation of the metrics, the GDCSP prepares a quarterly report that shows the evolution of these indicators graphically and sends it to each of the units responsible for corporate social media accounts. These reports help the units to evaluate the effectiveness of their activity on social media and to consider whether the previously defined objectives are being achieved. In addition, the information obtained can serve as a basis for predicting future actions and planning campaigns. After all, assessment in the Administration must serve to identify public policies that work, to establish their impact and to determine the extent to which it is attributable to the intervention of the Public Administration. Table 3 shows the number of Twitter, Facebook and YouTube followers of the institutional accounts of the Universities and Research areas in the Catalan Government, as well as the Klout and Kred reputation indices:

Secretariat for Universities and Research accounts Twitter FB YT Klout Kred
Directorate General for Universities (@universitatscat) 3,300 859 52 718 5
Directorate General for Research (@recercat) 2,970 538 53 697 5
Secretariat for Universities and Research (@coneixementcat) 1,467 49 656 4
Research in action (@RecercaenAccio) 1,438 87 49 616 3

Table 3: Social Analytics for Institutional Twitter Accounts of  Secretariat for Universities and Research of Catalan Government

The Twitter and Facebook accounts of the Universities area lead the classification ahead of the Research accounts, probably because their target audience is considerably larger. The Twitter account of the Knowledge area of the Ministry of Economy and Knowledge of the Government of Catalonia ranks third, whereas the account of the science dissemination website Research in action closes the classification.

Research Centres in Catalonia: Increasingly Intensive Use of the Social Web

As regards research centres in Catalonia, it has to be mentioned that the CERCA Institute  is the Government of Catalonia’s technical service and its means for supervising, supporting and facilitating the activities of the 47 research centres in the CERCA system. These research centres are independent entities with their own independent legal status, partially-financed by the Government of Catalonia (which provides them with stable funding through programme contracts) and their main aim is excellence in scientific research. They follow a private sector management model that is totally flexible and based on multi-year activity programmes within the framework of a strategic plan and ex-post supervision  that respects the autonomy of each centre.

The aim of this model is to encourage co-ordination and strategic co-operation between  centres, to improve the positioning, visibility and impact of the research carried out and to facilitate communication between public and private agents. To illustrate  the efficiency of the system, out of the 60 ERC Starting Grants awarded in Catalonia during the 2007-2012 period, 34 were awarded to researchers from the CERCA centres (56%), whereas in the case of the ERC Advanced Grants, the percentage rises to 63% (19 out of 30) for the 2008-2011 period.

Out of all the 47 CERCA centres, 25 use social media tools, primarily Twitter, as part of their communication strategy. Table 4 summarises the most important indicators of their presence on social media:

CERCA centres accounts Twitter Facebook YouTube Klout Kred
1 IJC – Josep Carreras Leukemia Research Institute >2,344 >44,758 >245 >50 >730 >3
2 i2CAT – Internet and Digital Innovation in Catalonia 1,503 44 638 4
3 CTFC – Forest Technology Centre of Catalonia 1,195   957    21 49 650 3
4 CREAF – Centre for Ecological Research and Forestry Applications 1,191 51 681 5
5 IPHES – Catalan Institute for Human Palaeoecology and Social Evolution 1,097 1,048    29 49 677 5
6 IGTP – Health Sciences Research Institute of the Germans Trias i Pujol Foundation   923 50 667 5
7 CRG – Centre for Genomic Regulation   905  449    58 50 675 6
8 IDIBELL – Bellvitge Biomedical Research Institute   777  234     3 52 685 5
9 IDIBAPS – August Pi i Sunyer Biomedical Research Institute   772  185 43 597 2
10 ISGlobal-CRESIB-Barcelona Centre for International Health Research   758  353 52 694 6
11 IMIM – Municipal Institute for Medical Research Hospital del Mar   744    24 44 583 3
12 VHIR – Vall d’Hebron Research Institute   734  301    62 47 644 3
13 ICCC – Catalan Institute of Cardiovascular Sciences   617  217 39 595 4
14 ICIQ – Institute of Chemical Research of Catalonia   499  428    12 45 600 4
15 IRB Barcelona – Institute for Research in Biomedicine   295  448    13 43 576 3
16 IRSI-CAIXA – Institute for AIDS Research   283 46 597 5
17 ICP – Catalan Institute of Palaeontology Miquel Crusafont   258 2,683    20 42 545 3
18 CVC – Computer Vision Center   252    93 43 577 4
19 ICFO – Institute of Photonic Sciences   228  197 45 619 3
20 IMPPC – Institute of Predictive and Personalized Cancer Medicine    70 31 434 2
21 IC3 – Catalan Climate Sciences Institute    33  218 31 351 2
22 CTTC – Catalan Telecommunications Technology Centre    29    12     1 25 344 1
23 CMR[B] – Centre of Regenerative Medicine in Barcelona    21    21   52 0
24 IBEC – Institute for Bioengineering of Catalonia  265     7
25 CReSA – Centre for Animal Health Research     6

Table 4: Social Analytics for Institutional Twitter Accounts Provided by CERCA centres

As for the number of Twitter followers, the Josep Carreras Foundation, on which the Josep Carreras Leukaemia Research Institute depends, leads the account classification with 2,344 followers. At a certain distance, and above 1,000 followers, we find the i2CAT Foundation (Internet and Digital Innovation in Catalonia), the Forest Technology Centre of Catalonia and the Centre for Ecological Research and Forestry Applications.

Regarding the Josep Carreras Foundation, which also tops the rankings on Facebook (over 44,000 followers) and YouTube (with 245 subscribers), it should be noted that the Foundation probably generates a very significant number of emotional supporters, which may not occur in most other centres.

In the case of Facebook, 18 CERCA centres are present on this network. Apart from the aforementioned first position, the second goes to the Catalan Institute of Palaeontology Miquel Crusafont, and the third to the Catalan Institute for Human Palaeoecology and Social Evolution, both of them with over 1,000 followers.

As for YouTube, the top channels by number of subscribers are those of the Josep Carreras Foundation, the Centre for Genomic Regulation and the Vall d’Hebron Research Institute. As we can see, a wide variety of fields of knowledge occupy the top positions on the various social media.

As regards reputation indices, the Bellvitge Biomedical Research Institute and the Barcelona Institute for Global Health (ISGlobal-CRESIB) lead the Klout ranking (Klout 52), and there are four centres over 50: the Centre for Ecological Research and Forestry Applications, the Centre for Genomic Regulation, the Health Sciences Research Institute of the Germans Trias i Pujol Foundation, and the Josep Carreras Leukaemia Research Institute. Interestingly, when analyzing the Kred index, no substantial variations are observed with respect to the centres that occupy the top six positions of the Klout ranking, except for the entry of the Catalan Institute of Human Palaeoecology and Social Evolution into the Top 6, which moves the Health Sciences Research Institute of the Germans Trias i Pujol Foundation down to seventh position.

Apart from the 47 CERCA centres, Catalonia has 21 centres from the Spanish National Research Council (CSIC), which are public state-owned agencies. Among these research centres, we wish to highlight the Artificial Intelligence Research Institute (IIIA), with 369 followers on Twitter, the Institute of Materials Science of Barcelona (ICMAB), with 306 followers, and the Institute of Robotics and Industrial Computing (IRII), with 144 followers.

Large research support infrastructures

Large research support infrastructures require large investments for their construction and maintenance, with the aim of advancing cutting-edge experimental science. Catalonia has essentially two major infrastructures: the Alba synchrotron light facility at the CELLS Consortium and the MareNostrum supercomputer at the Barcelona Supercomputing Center – Centro Nacional de Supercomputación (BSC-CNS).

These major research support infrastructures in Catalan territory are mainly consortia in which the Government of Catalonia and the Spanish State participate, together with other organizations that hold a minority stake. Apart from the two major infrastructures mentioned above, there are up to 10 other major research support infrastructures. Only five of these 12 structures are present on social media, as shown in Table 5:

Catalan large infrastructures accounts  Twitter Facebook YouTube Klout Kred
Barcelona Supercomputing Center (BSC-CNS) 385 236 11 45 597 4
Center for Scientific and Academic Services of Catalonia (CESCA) 158  45  0 31 523 3
Ebre Observatory  99 137 38 494 2
National Centre for Genomic Analysis (CNAG)  64 40 395 3
Montsec Observatory 940

Table 5: Social Analytics for Institutional Twitter Accounts Provided by Catalan Large Infrastructures

The Barcelona Supercomputing Center – Centro Nacional de Supercomputación (BSC-CNS) ranks first in the number of Twitter followers, while the Montsec Astronomical Observatory leads the Facebook network.

Reference networks of R&D and innovation

Reference networks of R&D and innovation consist of a series of groups from different institutions that collaboratively carry out research and innovation projects and other activities. These groups have common goals, and the networks aim at promoting collaborative, interdisciplinary and multidisciplinary work, as well as the optimization of infrastructure and of R&D and innovation facilities in Catalonia. Four of the eight reference networks are present on Twitter, as shown in Table 6, with the Reference Network of R&D and innovation on Theoretical and Computational Chemistry leading the classification:

Catalan reference networks accounts Twitter Facebook Klout Kred
Reference Network of R&D&I on Theoretical and Computational Chemistry (XRQTC) 117 35 516 4
Catalan Biotechnology Reference Network (XRB) 112 41 498 4
Reference Network of R&D&I on Aquaculture (XRAq)  63 74 23 396 3
Reference Network of R&D&I on Food Technology (XaRTA)  39 17 220 1

Table 6: Social Analytics for Institutional Twitter Accounts Provided by Catalan Reference Networks

How Can We Measure the Reputation of a Research Network?

Is the number of Twitter followers a good indicator of the presence of an institution on the net? In my blog, I regularly analyze the presence of research structures in Catalonia on social media, based on the number of followers. I realized that this may not be a sufficiently complete indicator on its own, so I decided to also introduce the Klout and Kred indicators, in line with the analyses of Professor Miquel Duran, an expert in analyzing metrics in universities of the Catalan-speaking territories, and Brian Kelly, UKOLN, University of Bath, with his detailed analysis of the presence of the UK Russell Group universities (note the latter also includes indicators such as PeerIndex and Twitalyzer). Both Klout and Kred provide complementary and useful information for assessing the impact of bidirectional communication.

Klout is a social networking service that measures influence using data points from Twitter, such as the size of a person’s network, the content created and how other people interact with that content. This analysis is also done on data taken from Facebook, Google+, LinkedIn and other sites. Klout creates profiles on individuals and assigns them scores ranging from 1 to 100. Despite being criticized because of its opacity, this service has become quite popular and I believe it is a good complement.

Another interesting measure of influence is Kred. Unlike Klout, Kred provides a fully transparent view of the actions that compose any user’s score and it is the only influence measure to openly publish its algorithm. Kred’s scoring system, which is based on Twitter profiles, is composed of two scores: Influence measures a user’s ability to inspire action from others, such as retweets, replies or new follows, and is scored on a 1,000-point scale. Outreach reflects generosity in engaging with others and helping them spread their message, and is scored on a 10-point scale. The Outreach score is cumulative and always increases, and it is measured on Twitter by your retweets, replies and mentions of others.
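
Kred publishes its actual algorithm, and nothing below should be read as that algorithm. Purely to illustrate the two-score idea described above, here is a hypothetical Python sketch in which actions received from others raise an influence score (capped at 1,000) while actions given to others accumulate in an outreach score that never decreases; all weights are invented.

```python
# NOT Kred's algorithm: a toy illustration of keeping separate influence
# (from actions received) and outreach (from actions given) scores.
# All point values are invented for the example.

class ToyTwoScoreProfile:
    def __init__(self):
        self.influence = 0   # Kred scores influence on a 1,000-point scale
        self.outreach = 0    # Kred's outreach is cumulative and never decreases

    def register_received(self, retweets=0, replies=0, new_follows=0):
        """Actions received from others raise influence (illustrative weights)."""
        gained = 10 * retweets + 5 * replies + 2 * new_follows
        self.influence = min(1000, self.influence + gained)

    def register_given(self, retweets=0, replies=0, mentions=0):
        """Actions given to others only ever add to outreach."""
        self.outreach += retweets + replies + mentions

profile = ToyTwoScoreProfile()
profile.register_received(retweets=3, replies=2)
profile.register_given(mentions=5)
print(profile.influence, profile.outreach)  # 40 5
```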

The Importance of Being Present on Social Media

In late May 2012, I gave a presentation at the University of Barcelona on dissemination using Web 2.0 (Com divulgar en el web 2.0). The workshop aimed to provide tools and strategies for scientific knowledge dissemination to researchers and other agents linked to R&D and innovation system, so that they could be in a better position to spread the object of their research.

Although my remarks focused on the importance of having a blog and a Twitter account to disseminate research, I also mentioned other instruments that could contribute, such as repositories (Slideshare, YouTube, Flickr) or social networking tools (ResearchGate, LinkedIn, Google+). During the talk it was mentioned that, while Facebook has been considered a network more suitable for personal than professional purposes, a trend has now been detected among young people to use this network to disseminate research. Therefore, it should be considered in future studies.

Twitter and Facebook are the social networking tools where research structures in Catalonia are mostly present, although with slightly different communication strategies. The centres use these media mainly to disseminate research and, in some cases, to make dissemination activities organized by the centre more widely available and to engage with the public, even as a teaching support. Moreover, these tools are often used to post vacancies at the institution. According to Raül Toran, science writer at the Bellvitge Biomedical Research Institute (IDIBELL), “generally the topics that are forwarded mostly are the ones related to cutting-edge research and job vacancies at the institution“.

Social media at the research centres are basically managed by the Communication Departments of each institution, although in most centres some researchers use social media mostly for personal rather than professional activity. Cinta S. Bellmunt, Head of Communication at the Catalan Institute of Human Paleoecology and Social Evolution (IPHES), states that the researchers in the centre “are aware of the value and visibility that social networks provide to research, because quite often I have to refer questions that arise in the group to them, so they realize there is movement, interrelation“. To follow up on the impact of the communication strategy of the centre, tools such as Hootsuite and Tweetdeck are used.

A good example of the impact of the centres’ dissemination activity on social media is to be found in the increasing traffic to their websites, as well as in an increase in the number of job applications for vacancies. Inevitably, this communication activity results in a continuous increase in the number of Twitter and Facebook followers. As far as readers’ preferences are concerned, these vary considerably between centres: in the case of IDIBELL there is more interaction via Twitter (direct messages, mentions, RTs, etc.) than via Facebook (Likes or comments), while in the case of IPHES the situation is reversed.

As regards privacy, there is growing awareness that knowledge must flow, but with precaution in order not to affect the privacy of others, respecting authorship and quoting the source of what is being communicated.

Open social network tools such as Diaspora or identi.ca are still little known in Catalonia. In contrast, a growing number of Catalan researchers has been detected on the ResearchGate network.

In summary, we could say that research structures in Catalonia are consolidating and increasing their presence on social media, especially on Twitter and Facebook, which have become part of their communication strategy and are increasing their visibility. The main goals are to disseminate the research that is being carried out and to reach out to society. In addition, communication units are progressively incorporating metrics tracking tools designed to evaluate and measure the impact of the communication activity and its benefits to the target audience. And the good news is that research, often funded with public money, engages with the whole of society.


About the Author

Xavier Lasauca i Cisa is the person in charge of Knowledge Management and Information Technologies on R&D (Directorate General for Research, Ministry of Economy and Knowledge, Government of Catalonia).

Twitter: @xavierlasauca

Image: Research.cat 2.0, by Maricel Saball (CC BY 3.0), adapted from My social networks



Posted in Guest-post, Social Web | 12 Comments »

The Sixth Anniversary of the UK Web Focus Blog

Posted by Brian Kelly (UK Web Focus) on 1 November 2012

This blog was launched on 1 November 2006. A year after the launch I described The First Year of the UK Web Focus Blog. The following year I provided a summary in The Second Anniversary of the UK Web Focus Blog, in which I provided a link to a backup copy of the blog’s content, hosted on Scribd. In a post on The Third Anniversary of the UK Web Focus Blog I commented that “with over 600 posts published on the UK Web Focus blog, I can’t recall all of the things I have written about!“. In 2010 a post on Fourth Anniversary of this Blog – Feedback Invited provided a link to a SurveyMonkey form and I subsequently published a post which gave an Analysis of the 2010 Survey of UK Web Focus Blog.

Last year’s anniversary post, entitled How People Find This Blog, Five Years On concluded that “most people now view posts on this blog following alerts they have come across on Twitter rather than via a Google search or by subscribing to the blog’s RSS feed“. I went on to say that “to put it more succinctly, social search is beating Google and RSS“.

Figure 1: Referrer Traffic to this blog, 2011-12

But what do usage statistics now tell us about the previous year? Looking at the referrer statistics for the last 365 days (as illustrated in Figure 1) it seems that WordPress.com has changed how it displays the referrer statistics compared with last year.

Figure 2: Referrer Traffic, 2006-11

Last year’s findings (illustrated in Figure 2) had Twitter in first place, followed by Google Reader and the UKOLN Web site. However this year we find Search Engines in first place, by a significant margin.

This reflects comments made last year by Tony Hirst who felt that the statistics were somewhat misleading:

my stats from the last year show a lot of Twitter referrals, but also (following a three or four day experiment by WordPress a week or so ago), inflated referrals from “WordPress.com”. The experiment (or error?) that WordPress ran was to include RSS counts in the stats. The ‘normal’ stats are page views on wordpress.com; the views over the feed can be found by looking at the stats for each page.

It would appear that last year’s conclusion: “social search is beating Google and RSS” was incorrect. In fact Google continues to be significant in driving traffic to this blog. However I think we can say that “social services, especially Twitter, are beating RSS readers“.

The importance of Twitter is widely appreciated as a means of ensuring that the intended target audience – those with whom you are likely to share similar professional interests – are alerted to your content. But the thing that surprised me was the importance of Facebook – in third place behind Search Engines and Twitter in referring traffic to this blog.

Perhaps I should not have been surprised by Facebook’s high profile. After all, a post by Daniel Sharkov, an 18-year-old student and blogger, which provided a 9 Step Blog Checklist to Make Sure Your Posts Get Maximum Exposure included the following suggestion:

Did You Share Your Post on Facebook?

An obvious one. What I do is share the post both on my personal wall and on my fan page right after publishing the article.

I appreciate that use of Facebook won’t be appropriate in all cases, but for blogs provided by individuals who have a Facebook account and who wish to see their content widely viewed, it would appear that Facebook can have a role to play in supporting that objective; the evidence is clear to see – even, or perhaps especially, if you’re not a fan of Facebook.

Posted in Blog, Evidence, Facebook | 2 Comments »

SEO Analysis of Enlighten, the University of Glasgow Institutional Repository

Posted by Brian Kelly (UK Web Focus) on 25 October 2012

Background

In the third and final guest post published during Open Access Week William Nixon, Head of Digital Library Team at the University of Glasgow Library and the Service Development Manager of Enlighten, the University of Glasgow’s institutional repository service, gives his findings on use of  the MajesticSEO tool to analyse the Enlighten repository.


SEO Analysis of Enlighten, University of Glasgow

This post takes an in-depth look at a search engine optimisation (SEO) analysis of Enlighten, the institutional repository of the University of Glasgow. This builds on an initial pilot survey of institutional repositories provided by Russell Group universities described in the post on MajesticSEO Analysis of Russell Group University Repositories.

Background

University of Glasgow

Founded in 1451, the University of Glasgow is the fourth oldest university in the English-speaking world. Today we are a broad-based, research-intensive institution with a global reach, ranked in the top 1% of the world’s universities. The University is a member of the Russell Group of leading UK research universities. Our annual research grants and contracts income totals more than £128m, which puts us in the UK’s top 10 earners for research. Glasgow has more than 23,000 undergraduate and postgraduate students and 6,000 staff.

Enlighten

We have been working with repositories since 2001 (our first work was part of the JISC funded FAIR Programme) and we now have two main repositories, Enlighten for research papers (and the focus of this post) and a second for our Glasgow Theses.

Today we consider Enlighten to be an “embedded repository”, that is, one which has “been integrated with other institutional services and processes such as research management, library and learning services” [JISC Call, 10/2010]. We have done this in various ways including:

  • Enabling sign-on with institutional ID (GUID)
  • Managing author identities
  • Linking publications to funder data from Research System
  • Feeding institutional research profile pages

As an embedded repository Enlighten supports a range of activities, including our original Open Access aim of making as many of our research outputs as possible freely available, but it also acts as a publications database and supports the university’s submission to REF2014.

University Publications Policy

The University’s Publications Policy, introduced to Senate in June 2008, has two key objectives:

  • to raise the profile of the university’s research
  • to help us to manage research publications.

The policy (it is a mandate but we tend not to use that term) asks that staff:

  • deposit a copy of their paper (where copyright permits)
  • provide details of the publication
  • ensure the University is in the address for correspondence (important for citation counts and database searches)

Enlighten: Size and Usage

Size and coverage

In mid-October 2012 Enlighten had 4,700 full-text items covering a range of item types including journal articles, conference proceedings, books, reports and compositions. Enlighten has over 53,000 records and the Enlighten Team works with staff across all four Colleges to ensure our publications coverage is as comprehensive as possible.

Usage

We monitor Enlighten primarily via Google Analytics for overall access (including the number of visitors, page views, referrals and keywords) and via the EPrints IRStats package for downloads. Daily and monthly download statistics are provided in records for items with full text, and we provide an overall listing of download stats for the last one-month and 12-month periods.

Looking at Google Analytics for 1 January – 30 September 2012 (to tie in with this October snapshot) and the equivalent previous period, we had 201,839 unique visitors up to 30 September 2012 compared with 196,988 in 2011.

In the last year we have seen an increase in the number of referrals, and search now accounts for around 62% of our traffic. In 2012, 250,733 people visited the site: 62.82% was search traffic (94% of that from Google) with 157,503 visits, and 28.07% was referral traffic with 70,392 visits.

In 2011, 232,480 people visited the site: 69.97% of that was search traffic with 162,665 visits, and 18.98% came from referrals with 44,128 visits.
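
As a sanity check on the Google Analytics figures quoted above, the short sketch below shows the arithmetic behind the percentages: each channel’s share is simply its visits divided by the total. The 2012 visit counts are taken from the post; the “direct” figure is assumed to be the remainder, and the function itself is generic.

```python
# A minimal sketch of the arithmetic behind the Google Analytics percentages:
# channel share = channel visits / total visits.

def traffic_shares(visits_by_channel):
    total = sum(visits_by_channel.values())
    return {channel: 100.0 * visits / total
            for channel, visits in visits_by_channel.items()}

visits_2012 = {
    "search": 157_503,                      # ~94% of which came from Google
    "referral": 70_392,
    "direct": 250_733 - 157_503 - 70_392,   # assumed: remainder of the 250,733 visits
}

for channel, share in traffic_shares(visits_2012).items():
    print(f"{channel}: {share:.2f}%")       # search ~62.82%, referral ~28.07%
```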

Expectations

Our experience with Google Analytics has shown that much of our traffic still comes from search engines, predominantly Google, but it has been interesting to note the increase in referral traffic, in particular from our local *.gla.ac.uk domain. This rise has coincided with the rollout of staff publication pages, which are populated from Enlighten and provide links to the records held in Enlighten.

After *.gla.ac.uk domain referrals our most popular external referrals come from:

  • Mendeley
  • Wikipedia
  • Google Scholar

We expected that these would feature most prominently in the Majestic results, along with Google itself.

Majestic SEO Survey Results

The data for this survey was generated on 22nd October 2012 using the ‘fresh index’; current data can be found from the Majestic SEO site with a free account. We do own the eprints.gla.ac.uk domain but haven’t added the code to create a free report. The summary for the site is given below, showing 632 referring domains and 5,099 external backlinks. Interestingly, it seems our repository is sufficiently mature for Majestic to also provide details for the last five years.

Since we were looking at eprints.gla.ac.uk rather than *.gla.ac.uk we anticipated that our local referrals wouldn’t feature in this report. As a sidebar a focus just on gla.ac.uk showed nearly 411,000 backlinks and over 42,000 referring domains.



Figure 1.  Majestic SEO Summary for eprints.gla.ac.uk

This includes 619 educational backlinks and 54 educational referring domains. This shows a drop in the number of referring domains since Brian’s original post in August, which showed 680 referring domains and the following breakdown of the Top Five Domains (and number of links):

  • blogspot.com: 5,880
  • wordpress.com: 5,087
  • wikipedia.org: 322
  • bbc.co.uk: 178
  • cnn.com: 135

These demonstrate a very strong showing for blog sites, news and Wikipedia.


Figure 2. Top 5 Backlinks

Referring domains was a challenge! We couldn’t replicate the Matched Links data which Warwick and the LSE used. Our default Referring Domains report is ordered by Backlinks (other options, including matches, are available, but none of our Site Explorer – Ref Domains options seemed able to replicate this). We didn’t use Create Report.

These Referring Domains ordered by Backlinks point us to full text content held in Enlighten from sites it’s unlikely we would have readily identified.

Figure 3a: Referring Domains by Backlinks


Figure 3b: Referring Domains by Matches (albeit by 1)

This report shows wikipedia.org at number one with the blog sites holding spots 2 and 3 and then Bibsonomy (social bookmark and publication sharing system) and Mendeley at 4 and 5.

An alternative view of the Referring Domains report by Referring Domain shows the major blog services and Wikipedia in the top 3, with two UK universities Southampton and Aberdeen (featuring again) in positions 4 and 5.

The final report is a ranked list of Pages, downloaded as a CSV file and then re-ordered by ReferringExtBackLinks; a short sketch of this re-ordering step is included after the list of pages below.

URL | ReferringExtBackLinks | CitationFlow | TrustFlow
http://eprints.gla.ac.uk | 584 | 36 | 28
http://eprints.gla.ac.uk/58987/1/58987.pdf | 198 | 18 | 15
http://eprints.gla.ac.uk/2081/1/languagepictland.pdf | 77 | 10 | 9
http://eprints.gla.ac.uk/562 | 70 | 24 | 2
http://eprints.gla.ac.uk/431 | 69 | 23 | 2
http://eprints.gla.ac.uk/225/01/Thomas[1].pdf | 61 | 0 | 0

Table 1: Top 5 pages, sorted by Backlinks

These pages are:

  • Enlighten home page
  • PDF for “Arguments For Socialism”
  • PDF for “Language in Pictland”
  • A chronology of the Scythian antiquities of Eurasia based on new archaeological and C-14 data [Full text record]
  • Some problems in the study of the chronology of the ancient nomadic cultures in Eurasia (9th – 3rd centuries BC) [Full text record]
  • PDF for “87Sr/86Sr chemostratigraphy of Neoproterozoic Dalradian limestones of Scotland and Ireland: constraints on depositional ages and time scales” [Full text record]
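
The sketch below illustrates the re-ordering step mentioned earlier: reading a Pages report exported from Majestic SEO as CSV and sorting it by the ReferringExtBackLinks column. The file name is hypothetical and the column names are assumed to match those shown in Table 1; the actual export may differ.

```python
# Sketch only: sort a Majestic SEO Pages export (CSV) by external backlinks.
# "majestic_pages_export.csv" is a hypothetical file name; column names are
# assumed to match the headers shown in Table 1.

import csv

def top_pages(csv_path, limit=5):
    with open(csv_path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))
    rows.sort(key=lambda row: int(row["ReferringExtBackLinks"]), reverse=True)
    return rows[:limit]

for row in top_pages("majestic_pages_export.csv"):
    print(row["URL"], row["ReferringExtBackLinks"],
          row["CitationFlow"], row["TrustFlow"])
```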

Summary

Focusing in more detail on the results in Figure 2, the top 5 backlinks, 4 out of the 5 are from Wikipedia; the first two are to the same paper but from different Wikipedia entries. It’s interesting to see that our third-ranked backlink is the ROARmap registry.

Looking at the top 5 pages ranked by backlinks, none of the PDFs or the records which have PDFs currently appear in our IRStats-generated list of most downloaded papers in the last 12 months. It is clear, however, even in this pilot sampling, that there is a correlation between ranking and the availability of full text rather than merely a metadata record.

Discussion

While this initial work has focused on the Top 5, extending this to at least the Top 10 would be useful for further comparison. It was interesting to see that sites such as Mendeley appeared in variations of our Referring Domains report, which correlated with our Google Analytics reports indicating that they are a growing source of referrals.

Looking at Figure 3a, a Google search on the first referring domain (by backlinks), scientificcommons.org, returned 136,000 results for “eprints.gla.ac.uk”; salero.info didn’t match at all and abdn.ac.uk had 5 results.

Social media sites such as Facebook and Twitter don’t appear in these initial results; it may be because the volume is insufficient to be ranked here, or there may be breach-of-service issues. Google Analytics now provides some social media tools and we have been identifying our most popular papers from Facebook and Twitter.

This has been an interesting, challenging and thought-provoking exercise, with the opportunity to look at the results and experiences of Warwick and the LSE who, like us, use Google Analytics to provide measures of traffic and usage.

The overall results from this work provide some interesting counterpoints and data to the results which we get from both Google Analytics and IRStats. These will need further analysis as we explore how Majestic SEO could be part of the repository altmetrics toolbox and how we can leverage its data to enhance access to our research.


About the Author

William Nixon is the Head of Digital Library Team at the University of Glasgow Library. He is also the Service Development Manager of Enlighten, the University of Glasgow’s institutional repository service (http://eprints.gla.ac.uk). He has been working with repositories over the last decade and was the Project Manager (Service Development) for the JISC funded DAEDALUS Project that set up repositories at Glasgow using both EPrints and DSpace. William is now involved with the ongoing development of services for Enlighten and support for Open Access at Glasgow. Through JISC funded projects including Enrich and Enquire he has worked to embed the repository into University systems. This work includes links to the research system for funder data and the re-use of publications data in the University’s web pages. He was part of the University’s team which provided publications data for the UK’s Research Excellence Framework (REF) Bibliometrics Pilot. William is now involved in supporting the University of Glasgow’s submission to the REF2014 national research assessment exercise. Enlighten is a key component of this exercise, enabling staff to select and provide further details on their research outputs.

Posted in Evidence, Guest-post, openness | 2 Comments »

SEO Analysis of LSE Research Online

Posted by ukwebfocusguest on 24 October 2012

Background

The second in the series of guest blog posts which gives a summary of an SEO analysis of a repository hosted at a Russell Group university is provided by Natalia Madjarevic, the LSE Research Online Manager. As described in the initial post, the aim of this work is to enable repository managers to openly share their experiences in use of MajesticSEO, a freely-available SEO analysis tool to analyse their institutional repositories.


SEO Analysis of LSE Research Online

This post takes an in-depth look at a search engine optimisation (SEO) analysis of LSE Research Online, the institutional repository of LSE research outputs. This builds on Brian Kelly’s post published on this blog in August 2012 on MajesticSEO Analysis of Russell Group University Repositories.

The London School of Economics and Political Science

Background

LSE is a specialist university with an international intake and a global reach. Its research and teaching span the full breadth of the social sciences, from economics, politics and law to sociology, anthropology, accounting and finance. Founded in 1895 by Beatrice and Sidney Webb, the School has a reputation for academic excellence. The School has around 9,300 full time students from 145 countries and a staff of just under 3,000, with about 45 per cent drawn from countries outside the UK. In 2008, the RAE found that LSE has the highest percentage of world-leading research of any university in the country, topping or coming close to the top of a number of rankings of research excellence. LSE came top nationally by grade point average in Economics, Law, Social Policy and European Studies and 68% of the submitted research outputs were ranked 3* or 4*.

LSE Research Online – a short history

LSE Research Online (LSERO) was set up in 2005 as part of the SHERPA-LEAP project. The aim of the project was to create EPrints repositories for each of the seven partner institutions, of which LSE was one, and to populate those repositories with full-text research papers. In June 2008 the LSE Academic Board agreed that records for all LSE research outputs would be entered into LSE Research Online. We have no full-text mandate but authors are encouraged to provide full-text deposits of journal articles in pre-publication form, clearly labelled as such, alongside references to publications. Research outputs included in LSE Research Online appear in LSE Experts profiles automatically, thereby reusing data collected by LSE Research Online.

LSE Research Online is to be the main source of bibliographic information for the Research Excellence Framework (REF) in 2014. This has served to further increase the impetus for deposit and the visibility of the repository in the School, and we have repository champions in departments across the School.

LSE Research Online size and a brief look at usage statistics

As of September 2012, LSE Research Online contains 33,696 records, with 7,050 full-text items. We include a variety of item types such as articles, book chapters, working papers, data sets, blogs and conference proceedings. We most recently began collecting LSE blogs to create a permanent home for this important content. We began tracking LSERO site usage with Google Analytics in 2007 and the site has received 2,268,135 visits since this date. According to Google Analytics, 76.55% of traffic to LSE Research Online (1,748,725 visits) comes from searches. Only 16.13% of traffic is from referrals and 7.14% from direct traffic. We also use analog server statistics to monitor downloads; the total number of downloads from May 2007 to September 2012 was 5,266,871.

Expectations of the survey

Before running the Majestic SEO report, I expected we would see plenty of traffic from Google and backlinks (i.e. incoming links) from lse.ac.uk as, understandably, these are key sources of traffic to LSERO and are indicated as such on Google Analytics. Google Analytics also points to referrals from Wikipedia and Google Scholar, and most recently, our Summon implementation which includes LSERO content. However, I was intrigued as to how LSERO would fare in an SEO analysis.

Majestic SEO survey results

The data was generated from Majestic SEO on 24 September 2012, using a free account and the ‘fresh’ index option. A summary of the results is shown below: there are 1,285 referring domains and 8,856 external backlinks. Note that the current findings can be viewed if you have a MajesticSEO account (which is free to obtain).

Figure 1: Majestic SEO analysis summary for eprints.lse.ac.uk

This includes 408 educational referring backlinks. If we look at the backlinks in more detail, patterns begin to emerge:


Figure 2: Top 5 Backlinks

This shows a distinct majority of Wikipedia pages linking to LSERO content, and yet Wikipedia is only ranked as the sixth most popular source of traffic in Google Analytics.

Top referring domains, sorted by matched links, can be found in the table shown below:

Referring domain   Matched links   Alexa rank   Citation Flow   Trust Flow
wordpress.com      14,502          21           95              93
blogspot.com       11,239          5            97              94
wikipedia.org      349             8            97              98
flickr.com         272             33           98              96
google.com         225             1            99              99

Table 1: Top 5 Referring Domains

Flickr makes a surprise appearance, with WordPress and Blogger dominating the top of the table.

The top 5 items sorted by Majestic’s flow metrics are shown below:


Figure 3: Top 5 Resources in Repository (sorted by flow metrics)

Perhaps more indicative are the top 5 linked resources sorted by number of backlinks, shown in the table below:

Ref no. URL Ext. BackLinks Ref. Domains CitationFlow TrustFlow
1 http://eprints.lse.ac.uk 501 83 45 41
2 http://eprints.lse.ac.uk/27939/1/HartwellPaper_English_version.pdf 417 69 28 19
3 http://eprints.lse.ac.uk/27072 225 4 27 32
4 http://eprints.lse.ac.uk/27939 130 46 30 25
5 http://eprints.lse.ac.uk/39826 112 54 22 23

Table 2: Top 5 Linked Resources in Repository (sorted by no. of links)

These pages are:

  1. The LSE Research Online homepage.
  2. A PDF of a research paper on climate policy.
  3. The record for a paper on teenagers’ use of social networking sites.
  4. The record for a paper on climate policy.
  5. The record for a paper on open source software.

Summary

Looking in more detail at the top backlinks to the repository, as listed in Figure 2, we can see that Wikipedia represents four out of the five top pages. This includes the Wikipedia page on Free Software, which links back to a Government report on the cost of ownership of open source software. The Wikipedia pages on the European Commission and Proportional Representation are ranked second and third respectively. The Proportional Representation page links back to the full-text of a 2010 workshop paper: Review of paradoxes afflicting various voting procedures where one out of m candidates (m ≥ 2) must be elected. The fifth and only backlink not from Wikipedia is avert.org, an AIDS education site which links back to the record of an early LSERO paper: Peer education, gender and the development of critical consciousness: participatory HIV prevention by South African youth.

In Table 1, the Top 5 Referring Domains to LSE Research Online are WordPress, Blogspot, Wikipedia, Flickr and Google. We can see the dominance of international social platforms here, with WordPress (14,502 links) and Blogspot (11,239 links), followed by Wikipedia (349 links), Flickr (272 links) and, finally, a search engine, google.com (225 links).

In Figure 3, Top 5 Resources in Repository (sorted by flow metrics), we can see several links to LSERO information pages, including the home page and the feed of latest additions. There are, however, several direct links to full-text papers, including an Economic History Working Paper on A dreadful heritage: interpreting epidemic disease at Eyam, 1666-2000. Sorting this data by number of backlinks, as shown in Table 2, the top item is the LSERO homepage with 501 backlinks. The second item is the PDF of one of our most downloaded papers of all time: The Hartwell Paper.

Discussion

So what can I draw from the results of the Majestic SEO report of LSE Research Online? Analysing the top referring domains according to the Majestic report, it seems reasonable to suggest that adding links to repository content on blogging platforms such as WordPress and Blogspot may result in an increased SEO ranking. We often link to LSERO content in various LSE Library blogs hosted on Blogspot, including New Research Selected by LSE Library. Flickr is also listed as a top referring domain according to Majestic SEO, but running a Google search for site:flickr.com “eprints.lse.ac.uk” retrieves zero results. It is difficult to ascertain how MajesticSEO gets this result when Google does not confirm the findings – perhaps it uses very different algorithms to Google. The MajesticSEO top referring domains indicate that blogging platforms are the main referring domains to LSERO content. However, according to our Google Analytics stats, 76.55% of traffic to LSERO is from searches. Furthermore, the Majestic report indicates that there are 349 matched links to LSERO content on Wikipedia. Running the search site:wikipedia.org “eprints.lse.ac.uk” in http://www.google.co.uk/ gives (on 11 October 2012) “About 92 results”; from the last page of the results, repeating the search to include omitted results, Google ends up with 80 hits. Searching for eprints.lse.ac.uk in http://en.wikipedia.org/wiki/Main_Page retrieves 83 hits. How does MajesticSEO retrieve such varying results?
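As an aside, one pragmatic way of spotting such discrepancies is to line the two sources up programmatically rather than running ad hoc site: searches. The short Python sketch below is only an illustration of that approach, not part of the original analysis: the file names (majestic_referring_domains.csv, analytics_referrers.csv) and column headings are hypothetical and would need to match whatever the MajesticSEO and Google Analytics exports actually contain.

    import csv

    def load_counts(path, key_col, value_col):
        """Read a CSV export into a {domain: count} dictionary."""
        counts = {}
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                counts[row[key_col].strip().lower()] = int(row[value_col])
        return counts

    # Hypothetical exports: referring domains from MajesticSEO and referral
    # sources from Google Analytics.
    majestic = load_counts("majestic_referring_domains.csv", "Domain", "MatchedLinks")
    analytics = load_counts("analytics_referrers.csv", "Source", "Visits")

    # Domains reported by one tool but absent from the other are the ones
    # worth checking by hand (for example with a site: search).
    print("MajesticSEO only:", sorted(set(majestic) - set(analytics))[:10])
    print("Analytics only:", sorted(set(analytics) - set(majestic))[:10])
    for domain in sorted(set(majestic) & set(analytics)):
        print(f"{domain}: {majestic[domain]} backlinks, {analytics[domain]} referral visits")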

Looking at backlinks, it’s important to note that the majority of top backlinks refer to papers that have the full-text attached and often link directly to the full-text PDF, of course resulting in a direct download. In addition, the Top 5 Resources in Repository (sorted by external backlinks) as seen in Table 2 tallies with our consistently popular papers according to Google Analytics and our analog statistics.

It is apparent that the inclusion of repository links on domains such as Wikipedia and blogging platforms appears to have a positive impact on the relevancy ranking of LSERO content in search results. This is in addition to direct hits on the links themselves, which add directly to the site’s visitors and thus to the dissemination of LSE research outputs. However, whether we can draw firm conclusions from the Majestic report remains to be seen, particularly given such differing results to those found on Google.

Thanks to my colleague Peter Spring for his advice when writing this post.


About the Author

Natalia Madjarevic is the manager of LSE Research Online, LSE Theses Online and LSE Learning Resources Online, the repositories of The London School of Economics and Political Science.

Natalia is also the Academic Support Librarian for the Department of Economics and LSE Research Lab. She joined LSE in 2011; prior to that she worked at libraries including UCL, The Guardian and Queen Mary, University of London. Her professional interests include Open Access, research support, the REF, bibliometrics and digital developments in libraries.

Posted in Evidence, Guest-post, Repositories | 4 Comments »

SEO Analysis of WRAP, the Warwick University Repository

Posted by ukwebfocusguest on 23 October 2012

SEO Analysis of a Selection of Russell Group University Repositories

A post published in August 2012 on a MajesticSEO Analysis of Russell Group University Repositories highlighted the importance of search engine optimisation (SEO) for enhancing access to research papers and provided summary statistics of the SEO rankings for 24 Russell Group university repositories; it is part of a series of articles on different repositories.

This work adopted an open practice approach in which the initial findings were published at an early stage in order to solicit feedback on the value of such work and the methodology used. There was much interest in this initial work, especially on Twitter. Subsequent email discussions led to a number of repository managers at Russell Group universities agreeing to publish more detailed findings for their repository, together with contextual information about the institution and the repository which I, as a remote observer, would not be privy to.

We agreed to publish these findings on this blog during Open Access Week. I am very grateful to the contributors for finding time to carry out the analysis and publish the findings during the start of the academic year – a very busy period for those working in higher education.

The initial post was written by Yvonne Budden, the repository manager for WRAP, the Warwick Research Archives Project. It is appropriate that this selection of guest blog posts begins with a contribution about the Warwick repository, as Jenny Delasalle, a colleague of Yvonne’s at the University of Warwick, and I will be giving a talk on “What Does The Evidence Tell Us About Institutional Repositories?” at the ILI 2012 conference to be held in London next week.


SEO Analysis of the University of Warwick’s Research Repositories

The following summary of a MajesticSEO survey of the University of Warwick’s research repositories, together with background information about the university and the repository environment has been provided by Yvonne Budden.

A Little Background on Warwick

The University of Warwick is one of the UK’s leading universities with an acknowledged reputation for excellence in research and teaching, for innovation and for links with business and industry. Founded in 1965 with an initial intake of 450 undergraduates, Warwick now has in excess of 22,000 students and employs close to 5,000 staff. Of those staff just fewer than 1,400 are academic or research staff. Warwick is a research intensive institution and our departments cover a wide range of disciplines, including medicine and WMG, a specialist centre dedicated to innovation and business engagement. In the 2008 RAE nineteen of our departments were ranked in the top ten for their unit of assessment and 65% of the submitted research outputs were ranked 3* or 4*.

University of Warwick’s Research Repositories

Warwick’s research repositories began in the summer of 2008 with the Warwick Research Archives Project (WRAP), a JISC funded project that created a full text, open access archive for the University. WRAP funding was taken by the Library and in April 2011 we launched the University of Warwick Publications service, which was designed to ‘fill the gaps’ around the WRAP content with a comprehensive collection of work produced by Warwick researchers. The services work on the same technical infrastructure but WRAP remains distinct and exposes only the full text open access material held. The system runs on the most recent version of the EPrints repository software, using a number of plugins for export, statistics monitoring and most recently to assist in the management of the REF2014 submission. To date we do not have a full text mandate for WRAP and engagement with both WRAP and the Publications service varies across the departments. Deposit to the services is highly mediated through the repository team and so engagement is not necessarily reflected in the number of papers available per department, especially as some departments benefit more from the service’s policy of pro-active acquisition of new material where licenses allow. I would judge that our best engagement in terms of full text deposit comes from Social Science researchers but we also have some strong champions in the Medical School, History, Life Sciences and Psychology.

Size and Usage Statistics

At the end of August 2012 WRAP contained 6,554 full-text items covering a range of item types: journal articles, theses, conference papers, working papers and more. The Publications service contained a further 40,753 records. In terms of usage since its launch, the system has seen 900,997 visits according to Google Analytics, an average of just over 18,000 a month across the 50 months it has been active. To track downloads we use the EPrints plugin IR Stats, which counts file downloads made either directly or through the repository interface. IR Stats will only count one download per twenty-four hours from each source, but will count multiple downloads if an item has multiple files attached. Over the life of WRAP the files held have been downloaded a grand total of 730,304 times, with 49.08% of downloads coming from Google or Google Scholar.
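The 24-hour rule is easy to misread, so a small illustration may help. The following Python fragment is not the IR Stats code, merely a sketch of the counting rule described above (at most one counted download per file per source in any 24-hour window); the log format and example values are made up.

    from datetime import datetime, timedelta

    def count_downloads(log_entries):
        """Count downloads from (timestamp, source, file_id) tuples, assumed
        sorted by timestamp, applying the per-source, per-file 24-hour rule."""
        window = timedelta(hours=24)
        last_counted = {}  # (source, file_id) -> timestamp of last counted download
        total = 0
        for ts, source, file_id in log_entries:
            key = (source, file_id)
            if key not in last_counted or ts - last_counted[key] >= window:
                last_counted[key] = ts
                total += 1
        return total

    # Hypothetical example: one visitor fetching two files attached to one item.
    entries = [
        (datetime(2012, 8, 1, 9, 0), "198.51.100.7", "file1.pdf"),
        (datetime(2012, 8, 1, 9, 5), "198.51.100.7", "file2.pdf"),
        (datetime(2012, 8, 1, 17, 0), "198.51.100.7", "file1.pdf"),  # within 24 hours: not counted
    ]
    print(count_downloads(entries))  # prints 2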

Expectations of the Survey

Going into the survey using the MajesticSEO system I wasn’t sure what to expect from the results; the majority of the work we’ve done so far with statistics has been with Google Analytics and the IR Stats package. Looking at the referral sources in our Google Analytics output I can identify a number of sources from which I might expect to see backlinks into the system, including our Business School (wbs.ac.uk) and the Bielefeld Academic Search Engine (BASE), as well as a number of smaller sources. The Warwick Blogs service seems to have fallen out of favour over the past few years, with the number of hits from there dropping as people move to other platforms. Above all I’m most curious to see if the SEO analysis can help with the work I am doing in promoting the use of WRAP and the material within it. If this work can assist me in creating the kinds of ‘interest stories’ that help to persuade researchers to deposit, it could become another valuable source of information. We are also looking at expanding the range of metrics we have access to, looking at the IRUS project as well as the forthcoming updated version of IR Stats, recently demonstrated at Open Repositories 2012.

Our Survey Results

The data for this survey was generated on 10 September 2012 using the ‘fresh index’ option, although the images were captured on 19 October. The current results can be found if you have a MajesticSEO account (which is free to obtain). The summary for the site is given below, showing 413 referring domains and 2,523 backlinks.


Figure 1: MajesticSEO analysis summary for wrap.warwick.ac.uk

At first glance this seems rather low in terms of backlinks; it also shows a fairly low number of educational domains linking to us. The top five backlinks into the system can be seen below, ranked as standard by the system by a combination of citation and trust flow:


Figure 2: Top 5 Backlinks

Interestingly this lists some of the popular referrers we see in Google Analytics driving traffic to us, but not some others I might have expected to see. The top referring domains are shown below:

Figure 3: Top Referring Domains

This is the only place in the results where Google features at all. The top five pages, as ranked by the flow metrics, show a fairly distinct anomaly: two of the pages do not list any flow metric information, despite this supposedly being the method by which they are ranked:

Figure 4: Findings Ranked by Flow Metrics

The top five pages as sorted by number of backlinks can be seen in the table below:

Ref No. URL Ext. Backlinks Ref. Domains Citation Flow Trust Flow
1 http://wrap.warwick.ac.uk/2489 228 1 14 0
2 http://wrap.warwick.ac.uk 177 23 37 37
3 http://wrap.warwick.ac.uk/1539/1/WRAP_Horvath_twerp647.pdf 91 31 15 13
4 http://wrap.warwick.ac.uk/1335/1/WRAP_Oswald_twerp_882.pdf 82 4 11 9
5 http://wrap.warwick.ac.uk/1118 46 4 17 2

Table 1: Top 5 Pages, Sorted By Number of Links

These five items are as follows:

  1. A research paper on the impact of cotton in poor rural households in India.
  2. The WRAP homepage.
  3. A PDF of an economics working paper on currency area theory.
  4. A PDF of an economics working paper on happiness and productivity.
  5. The record for a PhD thesis on women poets.

Summary

The top ten backlinks into the WRAP system include a range of sources: this blog, two Wikipedia pages and two referrals from the PhilPapers repository, which monitors journals, personal pages and repositories for Philosophy content. We also see two pages that collect literature on health topics linking back to us, a Maths blog and the newsletter of the British Centre for Science Education.

Interestingly, in Figure 3 there is no mention of the University of Warwick or any of its related domains (wbs.ac.uk for the Business School, for instance). I assume this is because MajesticSEO is excluding ‘self’ links, so as WRAP is a Warwick subdomain they are excluding a lot of the links I am aware of. This may also account for the lack of any backlinks from the Warwick Blogs service. Many of the domains listed here are blog platforms of one form or another, which may be because of the database-driven architecture of these platforms and the way the MajesticSEO system is reading those links. For example, if a researcher puts a link to their most recent paper in WRAP on the frame of their blog and this propagates onto every post in the blog, does this count as a single link or as many? We are also seeing links from sources such as the BBC and Microsoft, where, again, it would be nice to be able to see who was linking to what and from where in these domains.

The top pages, as listed by number of backlinks in Table 1, show a trend for linking directly to the file of the full-text material we hold in WRAP. This ties in nicely with the fact that item three is the most downloaded paper in WRAP over the lifetime of the repository, with 9,162 downloads to the end of August 2012. So in this case we can draw a tentative line between the number of downloads and the number of backlinks. However, we can’t follow this theory through, especially as the top paper linked to externally, Paper 1 as listed in Table 1, has been downloaded only a fraction of the number of times compared to the currency working paper. When listed by the flow metrics, as in Figure 4, the pages largely follow the results seen for the Opus repository at Bath and link to pages about the repository, apart from the two anomalous results which, despite having no citation or trust flow scores, are ranked second and third when ranked on flow metrics.
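One way of testing this tentative relationship beyond the top five would be to compute a simple correlation between per-item backlink counts and per-item download counts across the whole repository. The sketch below is purely illustrative: the backlink figures are those from Table 1, but the download figures are hypothetical placeholders (only the 9,162 figure for the currency working paper is reported above), so the printed coefficient means nothing in itself.

    def pearson(xs, ys):
        """Pearson correlation coefficient for two equal-length lists."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    backlinks = [228, 177, 91, 82, 46]        # per-item backlink counts from Table 1
    downloads = [450, 2100, 9162, 4800, 300]  # hypothetical, apart from the 9,162 figure
    print(round(pearson(backlinks, downloads), 2))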

Discussion

I think when looking at metrics the most important thing for a repository manager to do is to be able to build stories around the metrics, as these help the researchers to engage with the figures. Was this spike in downloads because of featuring in a conference, or an author moving to a new institution, or for some other reason? What can I show my users that is going to help them make the decision to use us over other options and to expend scarce time and resources maintaining a blog or Twitter account? Here the issue I have with the data we have discovered is that the number of backlinks into a repository will never conclusively prove that a paper will get more downloads, as ably illustrated by the example above. Many researchers are not interested in the fuzzy conclusions we can draw at this point; they want to see clear, conclusive proof that links = downloads = citations.

I also think that search engine performance is an increasingly difficult area to be really conclusive about, especially now users can ‘train’ their Google results to prefer the links they click on most often. This was recently a cause of concern for us, as it was reported that our Department of Computer Science (DCS)’s EPrints repository was overtaking WRAP in the Google rankings and that WRAP didn’t feature until page two of the results. This wasn’t the case, but because the user reporting this to us was heavily involved in the area of computer science, his Google rankings had preferred the DCS repository to the WRAP one as the results were more relevant to his interests. In the same way, when I search for ‘RSP’ my top result is now the Repositories Support Project and not RSP the engineering company or the Peterborough health and safety firm, as it was initially.

We need always to be conscious of what researchers want from metrics and whether it is possible for us to give it to them. As with any metrics, we need to be explicit about what it is that we are saying and what can be inferred from it. If we, as users of metrics, don’t understand how the metrics are derived or how the search engines’ ranking algorithms work, we won’t be able to confidently predict what we can do to improve them. It may also come down to the way researchers are using these services and for what purpose, which may be why we are not seeing any evidence of the use of services like Academia.edu and LinkedIn. I would imagine that if researchers are using services to showcase their work to prospective employers and other researchers, they may prefer to link to the publisher’s version of their work rather than the repository version. I suspect the interest story from the SEO data may be more about ‘who’ is linking to their work rather than where they are linking from, which is detail we cannot and possibly should not be able to provide.


About the Author

Yvonne Budden (@wrap_ed), the University of Warwick’s E-Repositories Manager, is responsible for WRAP, the Warwick Research Archive Portal, and is the current Chair of the UK Council for Research Repositories (UKCoRR).

Email: Y.C.Budden@warwick.ac.uk

Posted in Evidence, Guest-post, Repositories | 3 Comments »

The Blog as a Narrative or the Post as a Self-Contained Item

Posted by Brian Kelly (UK Web Focus) on 10 September 2012

Do Blogs Provide Self-Contained Posts or a Narrative Thread?

Does a blog used to support professional activities act as a diary, in which readers need an understanding of the context provided by other posts in order to fully appreciate the content? Or, alternatively, are blog posts self-contained, so that they make sense if viewed in isolation?

Tony Hirst’s recent post on For My One Thousandth Blogpost: The Un-Academic caused me to reflect on this question. Tony had commented that whereas “Formal academic publications are a matter of record, and as such need to be self-standing, as well as embedded in a particular tradition blog posts are deliberately conversational: the grounding often coming from the current conversational context – recent previous posts, linked to sources, comments – as well as discussions ongoing in the community that the blog author inhabits and is known to contribute to“.

But although the writing style of blogs is often conversational in tone, I don’t agree that blog posts cannot also be self-standing. I realised this recently after a conversation with Amber Thomas who was preparing a talk on use of Wikipedia in a higher education context, which she gave last week at the Eduwiki conference. In the discussion I gave Amber some examples of use of Wikipedia in a research context, based on posts I’d written on How Can We Assess the Impact and ROI of Contributions to Wikipedia? (published on 27 September 2010) and How Well-Read Are Technical Wikipedia Articles? (published on 8 July 2010). Since I realised that I might have a need to be able to find such articles again in the future, I created a page on this blog on the Importance of Wikipedia which contained links to posts on this subject. I subsequently created a number of other pages providing links to posts which should be self-standing, in areas including Web Accessibility, standards and blog practices.

I then realised that my style is always to try to make posts self-standing, with relationships to related posts made explicit by links to those posts.

It strikes me that writing self-contained blog posts is more relevant now than it was when blogs first took off as a way of keeping one’s peers informed. Back then readers of blogs would typically keep up to date through their RSS reader. But now it seems (and the web analytics provided in a blog’s dashboard will confirm this) that visitors will arrive at a post by following a link from Twitter or a service such as Scoop.it.

Since there seems to be a decrease in the numbers of people who regularly follow individual blogs (and I know that although I still use an RSS reader I do not always read all posts, even from the blogs I am most interested in) it will be more important to provide the context for visitors who arrive at a particular post. This may cause regular readers to encounter repetition if they are following a stream of posts, but I think this needs to be accepted in light of the changing patterns of blog reading.

Tony Hirst’s follow-up post on How OUseful.Info Posts Link to Each Other… provides a graphic which depicts how posts published on Tony’s blog (which now number over 1,000) link to each other. This made me wonder whether Tony’s blog could also be described as self-contained, with the links providing the context.

Postscript

As a postscript I should add that over the past 3 or 4 years I have provided links to blog posts from slides when I give presentations in order to be able to provide easy access to supplementary materials related to the contents of a particular slide.

An example of this approach is illustrated. For a forthcoming talk on “Open Practices for the Connected Researcher”, to be given during Open Access Week, if I am asked during the talk for the context or the evidence I can click on the blue arrow to go to the relevant post. Initially I used this approach when I embedded an image so that I could easily find the original source. I later realised that this approach has become more useful following developments to Slideshare, which meant that the HTML5 replacement for the Flash interface enables the links to be followed from within Slideshare. I don’t think the blog post should be regarded primarily as an item in a narrative for a regular audience. Rather I feel that there will be a significant proportion of the audience who will view posts in isolation.



Posted in Blog | 7 Comments »

“Celebrating 10,000 Followers!”: Social Media is About Nodes and Connections

Posted by Brian Kelly (UK Web Focus) on 17 August 2012

 

JISC Celebrates 10,000 Followers

Yesterday a tweet from @jisc announced that their Twitter account had reached 10,000 followers:

NEWS: Celebrating 10,000 followers… and our resources to help engage students through social media: … http://bit.ly/RVLMMv 

This news provided a useful opportunity for JISC to “showcase some resources that can help you blog, tweet and interact your way to better student retention, marketing and teaching online“. The news item highlighted seven resources which were felt to help institutions in using social media to support their students:

  1. Listen to a podcast (MP3 format) on developing your social media strategy with Steph Gray of Helpful Technology.
  2. Read JISC CETIS’ ideas about using Twitter in the classroom.
  3. Learn how Cardiff, Northumbria and Bristol universities use Twitter and Facebook to support international students.
  4. Reflect on how your PhD students are using social media and other new technologies to collaborate and stay up to date using the biggest ever survey of PhD students.
  5. Read a case study on engaging students through blogging.
  6. Download the LSE’s guide to Tweeting for academics.
  7. Compare your university to other universities. Find out which social media networks others are using on the UK Web Focus blog post.

And whilst the @JISC Twitter account provides a valuable channel for JISC to disseminate JISC activities and innovative uses of IT across the higher and further education sector, this is complemented by the work of JISC Programme Managers and other JISC staff who use social media technologies for engaging with the sector in the support of development activities. Remember that the solution which may be described in a glossy PDF report or a polished podcast will be the result of rich interactions, discussions and even disagreements; social media provides an environment for supporting such engagement which, ten years ago, tended to be restricted to mailing lists, meetings and trips to workshops and conferences.

It probably goes without saying that the benefits of social media aren’t restricted to supporting students; LSE’s Impact of Social Sciences blog, for example, regularly provides examples of how social media can support research activities. A good example is Melissa Terras’s post which asked The verdict: is blogging or tweeting about research papers worth it? and described how “Melissa Terras took all of her academic research, including papers that have been available online for years, to the web and found that her audience responded with a huge leap in interest in her work“.

Nodes and Connections

In a recent post I described how Social Media? It’s About The Numbers! The post reflected on how the popularity of Twitter for talking about the Olympics indicated a mass take-up of the channel, which appears to be becoming an ‘embedded technology’ – a technology which large numbers of people are familiar with and comfortable using for a range of activities. The post went on to explain how, for many communication channels, achieving a critical mass is important in order to maximise awareness, engagement, discussion, feedback and marketing opportunities. JISC clearly appreciate the importance of such numbers, and it is very pleasing to see the significant growth in their followers since the account was established on 10 January 2009.

Yesterday Steve Wheeler in a post on Separation and connection reinforced this view when he described how “We are witnessing a time where a mobile world wide web of connections is proliferating, and in which social mores, human relationships and communication conventions have been irrevocably changed“, supporting this view with the evidence that “Facebook boasts over 845 million subscriptions and this statistics grows each month. What is even more remarkable is that these 845 million user accounts have so far generated over 100 billion connections“. Steve concluded with an optimistic view of the role of social media in education: “I believe we have not even started to scratch the surface of the massive potential of social media and mobile technology to disrupt and transform learning. That’s why it’s so exciting to be an educator in the digital age.“

But not everyone, I feel, appreciates the importance of ‘nodes’ and ‘connections’ which are at the heart of successful social web services. As I described in a post entitled It’s About Links; It’s About Connectedness! Cameron Neylon’s opening plenary talk at the Open Repositories OR 2012 conference addressed the importance of such connectivity. As reported in the live blog of Cameron’s talk:

Most of you can remember a time without mobile phones. 20 years ago if I’d shown up and wanted to meet for a drink it would have been difficult or impossible. Email wasn’t useful back then either as so few people had it. When you start with nodes and start joining up the network… for a long time little changes. You just let people communicate in the same way you did before… right up until everyone has access to a mobile phone. or everyone has email. You move from a network that is better connected network to a network that can be traversed in new ways. for chemists THIS IS A Cooperative phase transition. Where the network crystalises out from a solution.

Cameron has kindly shared his slides with me (prior to making a more generic version of the slides publicly available) which has helped me to refresh my memories of his talk and reuse some of the images he provided.

Cameron argued that “Networks qualitatively change our capacity” and depicted this ‘phase transition’ as shown: with only 20% of a community connected, only a limited amount of interaction can take place, but this increases drastically as the number of connected nodes grows – and imagine the possibilities as the numbers approach 100%!
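The arithmetic behind that claim is easy to sketch. Under the simplifying (and purely illustrative) assumption that any two connected members of a community can form a link, the number of possible pairwise connections grows roughly with the square of the number of connected people, which is why the jump from 20% towards 100% connectivity is so dramatic:

    # Possible pairwise connections at different levels of connectivity.
    # The community size is a made-up figure for illustration only.
    community = 1000

    for fraction in (0.2, 0.5, 0.8, 1.0):
        n = int(community * fraction)
        pairs = n * (n - 1) // 2  # number of possible pairs of connected people
        print(f"{fraction:>4.0%} connected: {n:>4} people, {pairs:>7} possible connections")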

Cameron provided some examples of such approaches in scientific research, including Galaxy Zoo and the Timothy Gowers experiment in which Professor Gowers asked “is massively collaborative mathematics possible?“. The answer was “yes”, with a new combinatorial proof of the density version of the Hales–Jewett theorem being found using “blogs and a wiki to organize an open mathematical collaboration attempting to find a new proof” after only 7 weeks.

The importance is the network effect, with a growth in the number of nodes (the bloggers, the contributors, the Twitter users) leading to a growth in the number of connections (the posts, the comments, the tweets, the retweets) which help in the development of new insights and new ideas.

Let’s Not Kill The Golden Goose!

A concern which needs to be recognised is that the evidence of the benefits of use of social media will lead to organisations seeking to use the social web in inappropriate ways, leading to a failure to realise the benefits based on the network effect. There is a danger that the benefits of the social web are felt to be its ease of use and its virality, but that the tools are then used in a purely corporate way. Seeking to take the individuality away from use of such tools could lead to a reduction in the number of nodes and in the connections which often take place between individuals rather than organisations. Such approaches could kill the golden goose and lead to social networks which people abandon due to the lack of openness, transparency and effectiveness.

One barrier which people sometimes mention is concern about information overload – and this may have been the reaction when I suggested that people should “imagine the possibilities as the numbers approach 100%!“.

Cameron Neylon addressed this as one of the three key issues in his plenary talk at OR 2012. “Filters block” argued Cameron, “Filters cause friction“. And as there’s not a single right filter for everyone (as we all have different needs, with your rubbish being my valuable resources) we should reject inappropriate supply-side filters and focus, instead, on developing and using client-side filters.

Let’s therefore keep on encouraging new nodes to spring up – new Twitter users (many of whom may have started tweeting during the Olympics) and new bloggers – and avoid developing barriers on the creation of new connections – the tweets, the comments and the posts.

But we need to appreciate that those who may be considering the development of top-down approaches to use of social media are probably doing so because they have legitimate concerns. As described in a paper on Moving From Personal to Organisational Use of the Social Web there is a need for “a policy framework which seeks to ensure that authors can exploit Cloud Services to engage with their audiences in a professional and authentic manner whilst addressing the concerns of their host institution“. And note that such policies need not be difficult to write.

Posted in Blog, Social Networking, Twitter | 8 Comments »

Guest Post: Further Evidence of Use of Social Networks in the UK Higher Education Sector

Posted by Brian Kelly (UK Web Focus) on 6 June 2012

 

Further Evidence of Use of Social Networks in the UK Higher Education Sector

A series of recent posts on the UK Web Focus blog have summarised use of social networking services such as Facebook and Twitter by the 20 Russell Group universities. In today’s guest post Craig Russell, a Web Systems Developer at the University of Leicester, provides a picture across the UK higher education sector. Craig’s work is particularly timely as it has been carried out shortly before UKOLN’s IWMW 2012 event. Craig will be attending the event and will welcome feedback and comments from fellow participants on the survey and, perhaps more importantly, on the implications of the findings and how they should inform policy decisions.


These are lean times for UK universities. The second half of this year is going to be a challenging one for all of us. Purse strings are being pulled tight in response to post-September uncertainty and we are all finding ourselves spread thinner than before, having to find new ways to do more-for-less. Universities have a strong history of academic collaboration, a practice that we in the corporate and support services should seek to emulate. By way of an example, I’d like to share my experiences of sharing a project of my own with the university community and the great benefit that this has returned.

In recent weeks I’ve set out to compile a dataset of all UK university social media (SM) accounts. Initially I was working alone in compiling the data set, and I got a fair way with it, but it wasn’t until I shared my work with the university web community that it grew into the comprehensive resource that it has become.

I began with a list of institutions taken from the Guardian League Tables, which turned out not to be the best source as it didn’t use the correct names for institutions nor did it list all HEIs in the UK. When I shared the dataset with members of the WEB-INFO-MGT mailing list I received a few responses from institutions who were disappointed to find they weren’t included in it. Wanting to make this resource as inclusive as possible, I later adapted it to use the institution list provided by HESA in their “2010/11 Students by Institution” dataset. In addition to being more complete and accurate, this allowed me to include the HESA Institution ID and UK Provider Reference Number, which will make it easier to join this dataset with others in the future.
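To illustrate why those identifiers matter, the short Python sketch below shows how the social media dataset might be joined to any other HESA-keyed dataset on the UK Provider Reference Number (UKPRN). The file names and column headings here are hypothetical; they would need to match the actual spreadsheets.

    import csv

    def index_by_ukprn(path):
        """Read a CSV file into a dictionary keyed on its UKPRN column."""
        with open(path, newline="", encoding="utf-8") as f:
            return {row["UKPRN"]: row for row in csv.DictReader(f)}

    social_media = index_by_ukprn("uk_university_social_media.csv")   # hypothetical file name
    other_dataset = index_by_ukprn("another_hesa_keyed_dataset.csv")  # hypothetical file name

    # Combine the two datasets wherever the UKPRN appears in both.
    combined = [
        {**social_media[ukprn], **other_dataset[ukprn]}
        for ukprn in social_media.keys() & other_dataset.keys()
    ]
    print(f"{len(combined)} institutions matched on UKPRN")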

Figure 1: Number of social media services used

Initially I only collected data for Twitter, Facebook, YouTube, Flickr and iTunesU accounts. My thinking at the time was that the first four were the most popular (and therefore the only interesting ones) and I had a general interest in iTunesU. While collecting the data I noticed that other networks were also fairly common among universities. This revelation was reinforced by the emails I received from web maintainers, which listed a variety of services. So in the revised version I included every service that universities identified themselves as using. The dataset now lists 16 different services that are currently being used by UK universities – a surprisingly broad spread.

Expanding the dataset in this dimension allows an important question to be asked: which social networks are UK universities currently using, and how popular are they? The chart below answers this question. The data shows that my initial hunch about the top four was correct (but it is all the better with evidence), though I expected Flickr to be more popular than it is. In contrast, LinkedIn is better represented than I had thought. Also of note is the low position of Google+, echoing the general attitude towards the much-hyped service.

Figure 2: Distribution of accounts across institutions

Another question worth asking is: how many social networks are universities using? The histogram in Figure 2 shows the distribution of accounts across institutions. Most universities have a presence on 3 or 4 networks, with a significant minority above and below this range. The peak at 0 suggests missing data, so it is likely that university presence in social media is in truth greater than this chart would suggest.
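For anyone wanting to reproduce this kind of histogram from the published dataset, the sketch below shows one way it could be done. It is only an illustration: the file name and the assumption that each service has its own column (non-empty when an institution has an account) are mine, not necessarily how the Google Docs spreadsheet is laid out.

    import csv
    from collections import Counter

    SERVICES = ["Twitter", "Facebook", "YouTube", "Flickr", "iTunesU", "LinkedIn", "GooglePlus"]

    # Count how many services each institution has an account on...
    with open("uk_university_social_media.csv", newline="", encoding="utf-8") as f:
        per_institution = [
            sum(1 for service in SERVICES if row.get(service, "").strip())
            for row in csv.DictReader(f)
        ]

    # ...then tally how many institutions fall into each count.
    distribution = Counter(per_institution)
    for n_services in sorted(distribution):
        print(f"{n_services} services: {distribution[n_services]} institutions")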

Though this is only a fairly superficial analysis of the data, these results raise many more questions than they answer. Why do most institutions have only 3 or 4 social media accounts? I suspect that the availability of resources in the university to manage an online social presence is the primary limiting factor, though the response to the popularity of these services in our target markets should also be considered. The combination of popular services is interesting too. Twitter, Facebook, YouTube and, to a lesser extent, Flickr seem to provide a complementary suite of tools – why?

I’m also interested to understand the strategy guiding the use of these services. Having glanced over a few accounts I see that some institutions use Twitter primarily as a broadcast medium to share information about themselves, whereas others use it as a two-way channel to communicate and converse with their audience. On a related point, while most universities linked to their SM accounts from their homepage, those that did not commonly linked to them from their news and events pages. This implies a ‘broadcast’ view of social media, though it may simply reflect where responsibility for managing these accounts lies within the organisation.

I originally compiled this dataset to answer a few questions of my own. But thanks to the involvement of the university web community it has grown and developed into a resource that has been useful for me and (I hope) you too. If you use this dataset as a basis for your own work, or if you have data of your own that others may find useful, I’d encourage you to share it. Post a few links to the WEB-INFO-MGT mailing list or, better yet, attend an event such as IWMW12 to meet and discuss your work. The chances of you being the only person who finds your work interesting or useful are vanishingly small; find those other people and help one another.

The UK University Social Media Accounts dataset is up on Google Docs. Please do email me with any updates, corrections, comments or criticisms. I will be attending IWMW next month, so do come and say hi if you’d like to chat about this – or anything else for that matter. Finally I’d like to thank everyone who has contributed to the dataset and sent messages of encouragement; I am very grateful.



We may well have found ourselves shoe-horned into the free market, but I strongly believe that it is through our cooperation, not our competition, that UK universities will continue to thrive in a challenging future.

Posted in Guest-post, Social Web | 8 Comments »

Syndicated Post: The Commons Touch

Posted by Brian Kelly (UK Web Focus) on 7 April 2012

As part of a series of guest posts on the broad theme of openness it seems appropriate to publish this blog post, on The Commons Touch, which has been published by Steve Wheeler, Associate Professor of learning technology in the Faculty of Health, Education and Society at Plymouth University, under a Creative Commons licence on his Learning with ‘E’s blog.

Steve’s post provides a useful introduction to Creative Commons and the benefits which Creative Commons can provide across the sector, and concludes by suggesting that Creative Commons is “going to be very big news indeed for all web users in the near future“.

I agree, but how should one reuse resources published under a Creative Commons licence, as I’m doing here, and what are the associated risks?

The licence allows me to reuse the content for non-commercial purposes provided I give acknowledgement to the rights owner (as I have done) and I make my post available under the same licence conditions (and I have included the rights statement and Creative Commons logo from the source post).

Although I am under no legal obligation to inform Steve of my reuse of his post I have chosen to do so so that he is not surprised if he sees the republished post.

I did point out that replicated web content may (slightly) undermine the Google ranking for the resource, as Google can treat replicated content as an attempt to spam Google’s index. However, as Steve is aware and has commented in his post, the value of providing an additional access path for such content will outweigh this slight concern.

Reusing content provided under a Creative Commons licence can also lead to the question of what the content actually is. In this case I have chosen to reuse the words, images and links, although the underlying HTML representation may have changed since we use different blog platforms. Since Steve has not applied a No-Derivatives clause in the licence I could, however, have chosen to edit the content, which might have meant omitting the image and links provided in the source material. It should also be noted that in a comment made on the blog post Joscelyn pointed out a minor error in the original post – the post stated that “Much of the content on Wikipedia for example is licensed under Wikimedia Commons – a version of CC” but in fact “Wikipedia text is licensed with Creative Commons Attribution Sharealike (CC BY SA) licence not a version of a CC licence“. I could have edited the original post but chose to include an editor’s note.

The final comment I would make is that the licence which applies by default to content published on this blog is CC-BY; a more liberal Creative Common licence which does not restrict reuse to non-commercial purposes or require reuse to apply the same licence. The blog now contains resources with a variety of licences which, ideally, would be described in a machine-understandable form through use of tools such as the WordPress Creative Commons License Manager or the Open Attribute plugins. The latter describes how:

OpenAttribute allows you to add licensing information to your WordPress site and individual blogs. It places information into posts and RSS feeds as well as other user friendly features. This plugin is an part of the OpenAttribute project which is part of Mozilla Drumbeat.

However these plugins are not available on the WordPress.com platform, so it does not currently seem to be possible to describe the rights for blog posts and embedded content in a machine-readable fashion. But since this is the case for many digital resources, this is not of great concern to me.

I am still in agreement with Steve that Creative Commons is “going to be very big news indeed for all web users in the near future” and we should all develop (and share) practices for consuming other people’s content which they have provided using such licences. I’d also welcome suggestions as to who should be described as the author of this post as, unlike other guest posts I’ve published this week, this contains significant intellectual content from me. I think this will have to be described as a post with joint authors.


The Commons Touch

Many people assume that because the web is open, any and all content is open for copying and reuse. It is not. Use some content and you could well be breaking copyright law. Many sites host copyrighted material, and many people are confused about what they can reuse or copy. My advice is this - assume that all content is copyrighted unless otherwise indicated. In the last few years, the introduction of Creative Commons licensing has ensured that a lot of web based content is now open for reuse, repurposing and even commercial use. The Stanford University law professor Lawrence Lessig is one of the prime movers behind this initiative. Essentially, Creative Commons has established a set of licences that enables content creators to waive their right to receive any royalties or other payment for their work. Many are sharing their content for free, in the hope that if others find it useful, they will feel free to take it and use it. Creative Commons is a significant part of the Copyleft movement, which seeks to use aspects of international copyright law to offer the right to distribute copies and modified versions of a work for free, as long as it is attributed to the creator. Any subsequent reiterations of the work must also be made available under identical conditions. In keeping with similar open access agreements, Copyleft promotes four freedoms:

Freedom 0 – the freedom to use the work,
Freedom 1 – the freedom to study the work,
Freedom 2 – the freedom to copy and share the work with others,
Freedom 3 – the freedom to modify the work, and the freedom to distribute modified and therefore derivative works.

Finding free for use images on the web is now fairly easy. Normal search will unearth lots of images. But these are not necessarily free images. Many will have copyright restrictions. To find the free stuff go to Google and click on the cog icon at the top right of the screen. Select the Advanced Search option. Next, scroll down the screen until you find the drop down box labelled ‘usage rights’. You will be presented with four options:

  1. Free to use or share
  2. Free to use or share, even commercially
  3. Free to use, share or modify
  4. Free to use, share or modify, even commercially

Whatever option you choose, you will be presented with a reduced collection of images that still meet the requirements of the search, but under the conditions of that specific licence. Now you have a collection of images you can use under the agreements of Creative Commons. Use them for free under these agreements and you are complying with international copyright law. Don’t forget to attribute the source!

So why would people wish to give away their content for nothing? I have previously written about my own personal and professional reasons for doing so in ‘Giving it all away‘, but just for the record, I will summarise:

Giving away your content for free under a CC licence ensures that anyone who is interested in your work does not have to pay for it or worry about whether they are licenced under copyright law to use your content. In today’s economic uncertain climate, it makes sense to be equitable and to give content away that others have a need to see and can make good use of. It also means that users will do some of your dissemination for you. Your ideas will be spread farther if you give them away for free, than they necessarily will if you ask people to pay a copyright fee or royalty. If you allow repurposing of your content, the rewards can be even greater. Some of my slideshows have been translated into other languages. Having your content translated into Spanish for example, opens up a huge new audience not only in Spain, but also most of the continent of South America. Many are now licensing their work under CC because they know it makes sense. Much of the content on Wikipedia for example is licensed under Wikimedia Commons – a version of CC [Note that in a comment on Steve Wheeler's post Joscelyn has pointed out that "Wikipedia text is licensed with Creative Commons Attribution Sharealike (CC BY SA) licence not a version of a CC licence"]. So look out for Creative Commons licensing – it’s going to be very big news indeed for all web users in the near future.

Image source

Creative Commons Licence
The Commons touch by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

Posted in Guest-post, openness | 6 Comments »

Guest Post: Openly Commercial

Posted by ukwebfocusguest on 6 April 2012

Creative Commons has an important role to play in providing a legal framework which permits reuse of resources. But as Joscelyn Upendran describes in this guest blog post, how the Creative Commons NC (non-commercial) licence is interpreted can cause confusion. Will CC+ provide an answer?


Openly Commercial

The Non-commercial component of the Creative Commons (CC) licences has occasionally given rise to some uncertainty and debate amongst those interested in copyright licensing. (See About the Licences for a reminder of the different CC licences.)

The CC licences which contain the NC component refer to commercial use, defined as use:

in any manner that is primarily intended for or directed toward commercial advantage or private monetary compensation.

So what does that cover exactly?

The CC guidance below and from @mollyali is very useful but, as with many things of a legal nature, it does not provide absolute certainty, as there are usually a number of factors at play. As described in the FAQ which asks ‘Does my use violate the NonCommercial clause of the licenses?‘ on the Creative Commons wiki:

In CC’s experience, whether a use is permitted is usually pretty clear, and known conflicts are relatively few considering the popularity of the NC licenses. However, there will always be uses that are challenging to categorize as commercial or noncommercial. CC cannot advise you on what is and is not commercial use. If you are unsure, you should either contact the creator or rightsholder for clarification, or search for works that permit commercial uses. Please note that CC’s definition does not turn on the type of user: if you are a non profit or charitable organization, your use of an NC-licensed work could run afoul of the NC restriction; and if you are a for-profit entity, your use of an NC-licensed work does not necessarily mean you have violated the term.

A CC-commissioned study on “how people understand ‘noncommercial use’” was published in 2009. @plagiarismtoday provides a good potted summary of the report. Notwithstanding the 2009 report, and the “known conflicts” relating to the NC licences being “relatively few”, the NC component of the CC licence still generates much deliberation and debate.

Some objections to the NC licences relate to a viewpoint that they are not truly ‘open’ as they block licence interoperability and frictionless remix and reuse of content. The NC licence remains popular, however, and some CC adopters may well experiment initially by using a NC licence before choosing more permissive licences in due course.

The CC BY NC SA licence is a popular choice of licence amongst Higher Educational Institutions (HEIs). The Open University’s OpenLearn, MIT Open Courseware (MITOCW) and Open Yale Courses (OYC) all use a Creative Commons (CC) BY NC SA licence for their open educational resources (OER).

The JORUM Final Report, published in 2011, indicates that the majority of the resources deposited within the JORUM repository are from the Academy/JISC OER Programme, and that a high percentage is from HEIs and licensed with a CC BY NC SA licence.

Although OpenLearn, MITOCW and OYC all use a CC BY NC SA licence, all three institutions provide additional “guidelines intended to help users determine whether or not their use of OCW materials would be permitted”.

There are differences between the guidelines provided by the three institutions in the degree of permissiveness. For example, OpenLearn permits “educational institutions, commercial companies or individuals to use the CC licensed content”, permits use of the “content as part of a course for which you charge an admission fee” and permits the charging of “a fee for any value added services you add in producing or teaching based around the content providing that the content itself is not licensed to generate a separate, profitable income”. This would therefore appear to permit a commercial training company to reuse OpenLearn CC BY NC SA licensed content as part of a fee-paying training course, as long as the licensed content itself is not monetised.

OYC, by contrast, does not permit a site that “provides and/or promotes services for which the user will be charged a fee (e.g., tutor services)” to use the CC licensed content.

MITOCW, whilst stating that “A corporation may use OCW materials for internal professional development and training purposes“, also states that “A commercial education or training business may not offer courses based on OCW materials if students pay a fee for those courses and the business intends to profit as a result“. So a commercial organisation can carry out staff development using MITOCW CC BY NC SA licensed content, but may not provide chargeable external training.

Does it matter that MIT, Yale and the Open University all use the CC BY NC SA licence yet intend and permit different uses of their licensed content?

Some of the benefits of CC licences include ease of use, the familiarity of the symbols and the speed with which the human-readable Commons Deed can be understood. This enables the user of the licensed content to glean quite easily and quickly what their rights and obligations are in respect of the content. The provision of additional guidelines in the above examples may undermine some of these benefits and place an unnecessary burden on the user. It also contributes to uncertainty and detracts from any possibility of consensus on the use and understanding of an NC licence.

The reason many institutions choose the NC licence may be to control the potential, or perceived potential, commercialisation of the licensed content. There is quite a compelling argument that content arising from state-funded programmes should be licensed with the most permissive terms. For example, the US Department of Labor is providing $2 billion over four years to create OER materials for career training programs in community colleges. Where new learning materials are created using the grant funds, those materials must be made available under a CC Attribution licence (CC BY).

I imagine it would not be easy in UK universities and colleges to demarcate “state-funded content” from the University’s “privately funded content”. Many HEIs and FEIs have a revenue-generating ‘business arm’. What is state-funded and what is the commercial arm of the institution may be quite blurred.

To achieve the widest possible access and participation in global education, the most appropriate CC licence for ‘open’ educational resources is the CC BY licence. But it does not always appear to be such an easy procedural or cultural step for organisations to take.

If an institution decides that a CC licence with an NC component is the most appropriate licence for its needs, the CC+ Protocol may be worth exploring, for example by universities which may be making moves towards becoming private.

Creative Commons developed its free licences to enable people to share their works as they choose. Using the CC+ protocol permits copyright owners to easily accommodate acceptable non-commercial uses while directing commercial traffic to their own fee-based agreement.

What is CC+?

CC+ is a Creative Commons license plus another agreement, for example:

A copyright owner may pair a CC Attribution-Non-Commercial license [that is the CC] with a non-exclusive commercial agreement [that is the +] enabling a copyright owner to license the work commercially for a fee.

The [+] is a means to provide a simple click-through to rights or opportunities beyond those offered in the CC licence. The creator is able to leverage the expanded exposure that results from otherwise freely distributed content.

CC+ is not another CC licence; rather it is a means to point users toward the copyright owner’s own “extension” of rights that may be additional to the existing CC license. The copyright owner is responsible for constructing the license that expresses those additional terms and conditions.

CC+ has many uses and advantages for both commercial and non-commercial users, for example:

  • A copyright owner of content may choose to use a CC Attribution Non-Commercial (CC BY NC) licence to make content available on the web so it can be shared easily and freely on a non-commercial basis, provided attribution is given.
  • The copyright owner in this example may pair this CC BY NC licence with a + click-through to non-exclusive rights beyond those permitted under the CC licence, such as allowing commercial use in return for a fee.

Other additional permissions beyond those provided in CC licences may include: permission to reuse without providing attribution (paired with any of the six CC licences); permission to use without having to share alike (paired with the CC BY SA or CC BY NC SA licences); or permission to create derivative works (paired with the CC BY ND or CC BY NC ND licences).

CC+ is another means by which copyright owners are able to exercise their copyright as they choose, on their own terms. Using the CC licence enables the free, easy and legal means of sharing on the web whilst the “extension” of permissions provided by the + has the benefit of clear “signposting” to commercial terms for additional uses of the copyrighted works.


This is a guest post by Joscelyn Upendran (@Joscelyn on Twitter). Any views expressed are personal views and not those of any organisation or employer; they are not intended to be legal advice nor should they be relied upon as such.

Posted in Guest-post, openness | 3 Comments »

Guest Post: Opening Up Events – The GEII Event Amplification Toolkit

Posted by ukwebfocusguest on 5 April 2012

In today’s guest blog post on openness Kirsty Pitkin introduces the JISC-funded Greening Events II project and describes her involvement in developing an event amplification toolkit which aims to document best practices for opening up access to conferences which, as touched on recently in a post on Adventures in Space, Place and Time by my colleague Marieke Guy, have traditionally been “trapped in space and time”. It is particularly appropriate that this post is published today, the day after the Amplified Conferences Wikipedia entry has been reinstated.


Opening Up Events

Workshops, seminars, conferences: just some of the learning opportunities that are often closed, with any knowledge or resources contained therein accessible only to those who are able to physically attend a fixed point in time and space where the event takes place. Yet these are some of the key ways we can disseminate and share knowledge in a really interactive, practical way.

UKOLN has a well-established role at the forefront of what have become termed “amplified” or open events. These are events where the event materials and discussions are amplified out via the local audience to their own professional networks using online social networking tools. Such activities overlap neatly with the emergence of hybrid events, which are specially designed to allow a remote audience to participate in an event simultaneously with the local audience. Amplified events can often be used as a stepping stone for organisers who are consciously looking to move into hybrid events, or organisers who are just looking to increase their audience without substantially increasing the carbon impact of their event.

The JISC GEII Event Amplification Toolkit

Event amplification at IWMW 2012

I have been working with UKOLN in this area to help develop an Event Amplification Toolkit, as part of the JISC Greening Events II project. The toolkit is designed to help event organisers decide what type of event is most appropriate for their needs (a traditional, hybrid or a fully virtual event) and provides tools to help organisers approach the task of amplifying their event.

The toolkit has been developed using lessons drawn from a series of case study events, including the Institutional Web Management Workshop (IWMW 2011), UKOLN’s Metrics and Social Web Services workshop and, most recently, the 7th International Digital Curation Conference (IDCC11). These lessons have been condensed into a number of simple templates and two-page best practice briefings, which can be mixed and matched according to the event organisers’ requirements. As new online services are emerging all the time, whilst others wane in popularity, these best practice briefings focus on general amplification activities rather than specific third-party tools. The toolkit covers approaches to live video streaming, live commentary, discussion and curation tools, providing examples of existing services, business models, resourcing requirements and risks which need to be considered. The templates provide models for assessing risk and structuring an amplified event to achieve specific outcomes.

Open Approaches vs Open Tools

Whilst an event may be considered open by virtue of being amplified, many of the individual tools and services used to achieve this are third party commercial services, which may vary in their degree of openness and accessibility (depending how you define open, of course!). This means that organising an open event can become a pragmatic exercise – using open platforms where available and offering alternative options where necessary to help make the event accessible to the widest range of users.

Copyright Shutterstock. Used under licence. http://www.shutterstock.com/pic.mhtml?id=81656434

A prime example of this is the most popular tool for use at amplified events: Twitter. Whilst Twitter is considered to be one of the more open social media platforms, participants must have an account with the service in order to take an active part in an event discussion. If you don’t have an account, you can only watch the discussion unfold; you cannot contribute. Opening up an event to the widest possible audience means you must consider those people who do not want to have a direct relationship with a service provider, like Twitter, by establishing an account with the service, no matter how little personal information is required in the process. Tools like CoverItLive and ScribbleLive can provide the option for remote participants to offer comments and questions publicly without a registered account and without having to part with any information about their identity. The role of an event amplifier would then involve integrating these comments into the wider discussion in a sensitive manner, particularly if that discussion is taking place prominently on Twitter.

As this example demonstrates, an amplified event may need to provide a mix of access points to open up all aspects of the event. This means that, in many ways, openness in an events context is less about the specific technologies employed and more about the attitude of the organisers and the way they blend a selection of tools to provide open access. An open attitude when running an event could be summarised as:

  • A commitment to the online audience as first class citizens, providing the same opportunities to access and interact within the live event as those physically in attendance.
  • A commitment to sharing resources in multiple contexts as an aid to future discovery and reuse.
  • A commitment to linking between resources so the audience has a clear path to guide them to other event resources or the same resources in alternative formats.
  • A commitment to the use of creative commons licences, with respect to the speaker or copyright holder.

Looking Forward

We intend to amplify the toolkit itself according to these same principles and using the same techniques detailed in the report. Our hope is that these resources will help others to approach the problem of opening up their events and to reduce the carbon impact of those events by enabling more people to engage from afar.


Kirsty Pitkin is a professional event amplifier. This is a newly emerging role, which involves working with conference organisers to help deliver an online dimension to traditional events by leveraging social media and other online tools to expand the audience for the event. She explores current research and best practice associated with amplified and hybrid events in her blog. Kirsty holds a Masters in Creative Writing and New Media from De Montfort University.

Email: kirsty.pitkin@tconsult-ltd.com
Blog: http://eventamplifier.wordpress.com/
Twitter: @eventamplifier

Posted in Guest-post, openness | 4 Comments »

Guest Post: Professional Development Using Open Content

Posted by ukwebfocusguest on 4 April 2012

As described recently, a series of guest blog posts on open practices are being published this week on the UK Web Focus blog which build on ideas published in the latest issue of JISC Inform. Having explored what openness may mean in the context of research, education and libraries, in today’s guest post my colleague Marieke Guy explores “Professional Development Using Open Content“.

As a home worker Marieke takes a pro-active approach to her professional development as can be seen from her posts on her Ramblings of a Remote worker blog. In this post Marieke describes her participation in a Massive Open Online Course (MOOC).


Professional Development

For me professional development has always been about being proactive. Patience is not one of my virtues. I’m not the sort of person who would sit and wait for my team leader to send me on a course, though I’m always open to suggestions.

Professional development according to Wikipedia refers to “skills and knowledge attained for both personal development and career advancement“. The way I see it, there are areas that I need to know more about to make me better at my job, and then there are areas that I want to know more about to give my job context and meaning. The goal is to balance the two and also to fit them alongside my day job.

I work from home (see my Ramblings of a Remote worker blog) and already travel a reasonable amount, so any activities I can do from the comfort of my own swivel chair suit me fine. Over the last few years online professional development has really taken off, in a similar way to online learning. Although many courses cost money, there is now a plethora of open content out there that can be used in any way you choose.

MOOCs


Massive Open Online Course crib sheet. This crib sheet was created by Jeannette Shaffer for a workshop presented at ISTE 2011 on using a MOOC model for professional development.

One recent addition is the Massive Open Online Course, or MOOC. These courses are free, open to all and comprise open content. They tend to be hosted by Higher Education institutions, and students from the particular institution are often encouraged to register. Often there is no credit for the course (though some use the Mozilla open badge system or similar approaches) and no feedback for participants from the course leaders. The approach taken is a fluid one in which participants are encouraged to blog about what they learn and to interact with other participants by commenting on their posts.

As described in “7 things you should know about MOOCs” (PDF format):

For the independent, lifelong learner, the MOOC presents a new opportunity to be part of a learning community, often led by key voices in education. It proves that learning happens beyond traditional school-age years and in a specific kind of room … Certainly as MOOCs develop, the scale on which these courses can be taught and the diversity of students they serve will offer institutions new territory to explore in opening their content to a wider audience and extending their reach into the community.

The Massive Open Online Course crib sheet which is illustrated was created by Jeannette Shaffer and is available from Flickr.

Openness in Education

My first MOOC learning endeavour has been the Introduction to Openness in Education course (see the #ioe12 tweets) co-ordinated by David Wiley, associate professor of instructional psychology and technology at Brigham Young University, US. This was an open course about openness in education – a little postmodern?! I came across the course via a colleague’s Twitter feed and after registering discovered that a couple of other colleagues were also giving MOOCs a go. We ended up meeting for coffee (See my post on #ioe12 Coffee Breaks with a Little Open Licensing Thrown In) to discuss how things had gone so far. Always good to have some support.

I’ve found the course a challenge, mainly due to time constraints, but also because the concept of ‘open’ is a complex one. What does being ‘open’ truly mean? Some of the more orthodox advocates of the open movement could offer up a checklist of criteria to help us decide if a license, piece of software, resource, data set, policy, … (add whatever takes your fancy) is strictly open. For them openness is an ideology and a goal. However, much of what is out there falls into the spaces in-between, and often for good reason.

I’d agree that the movement towards openness is a good thing, though I am still unsure how I feel about many aspects of it. Openness is not always possible or desirable and it brings with it responsibilities. My current work activities take me into the area of Research Data Management, where FOI has a big impact. Requests for data sets (such as the recent Philip Morris smoking research request) are becoming more frequent and are not always made for just reasons. A colleague of mine recently pointed me in the direction of a paper written back in 2000 by Marilyn Strathern entitled The Tyranny of Transparency. To summarise: transparency measures often have paradoxical outcomes, such as eroding trust and turning knowledge into information rather than information into knowledge. Openness, like free speech, is a double-edged sword and we’d do well to ensure that we use the tool appropriately.

Conclusions

All my posts relating to my experiences of MOOCs and learning from open content are available from my blog. There’s no doubt that the use of online courses and open content will significantly contribute to my professional development in the future. Learning in this way gives me the flexibility that my job and lifestyle require; however, I know that I need to be disciplined and keep motivated if I want to make the most of these opportunities. As Oscar Wilde, a man who held a fairly cynical view of formal education, once said: “Nothing that is worth knowing can be taught“. Maybe a pro-active approach using MOOCs would have been more up his street!


Twitter conversation from Topsy: [View]

Posted in Guest-post, openness | 2 Comments »

Guest Post: Librarians meet Wikipedians: collaboration not competition!

Posted by Brian Kelly (UK Web Focus) on 3 April 2012

As part of the series of guest blog posts which describe how the higher education sector is engaging with various aspects of openness, Simon Bains, Head of Research and Learning Support and Deputy Librarian at The John Rylands University Library, University of Manchester, describes how the university library is engaging with Wikipedia.


It isn’t really news to say that the world libraries inhabit has changed almost beyond recognition in less than 20 years. Perhaps with the benefit of hindsight it will be possible to make sense of the rapid technological change and resulting shift in behaviours which combine to challenge the collections, services and perhaps the very existence of libraries. Whilst we continue to live through this information revolution, we seek to make educated guesses at the next trend, respond as we can to the very different expectations of our user communities, and develop strategies to ensure we remain relevant and sustainable in challenging times.

Several trends in particular seem to me to have made a marked contribution to the seismic landscape disruption which has followed the invention of the Web:

  1. Transition to online from print – published content, particularly journals, being made available online and becoming, fairly quickly, the dominant delivery channel.
  2. Challenges to traditional models of publishing – the rise of the open access agenda, and a general trend towards widespread support for openness, not just for published material but for underlying data, with a view to fostering sharing, reuse and linking.
  3. The Social Web – interaction and conversation, sharing, tagging, developing personal networks for both social and business purposes. Publication is no longer primarily about dissemination, but about sharing, reuse and conversation.
  4. The development of large scale global public and commercial content hubs which have grown to dominate the ways in which information is published, discovered, and shared.

These, of course, aren’t entirely independent developments, and can instead be seen as components of an evolutionary (if not revolutionary) process which has brought us to today’s information landscape. Equally, it is clear that change continues, and recent challenges to traditional scholarly publishing models serve to underline that.

The creation of one of these ‘hubs’ is the focus of this blog post. In just a few years we have seen the very rapid ascendency of Wikipedia as the preferred starting point for the sort of reference enquiry that would once have been directed to a traditionally published encyclopaedia, or a library reference desk. Despite scepticism, it has become a hugely popular resource, with evidence to support the reliability of crowd-sourced factual information, as a result of strict editing policies and zealous, perhaps over-zealous, editors.

In 2007, whilst Digital Library Manager at the National Library of Scotland, I was interested to read of a project to use it to make library collections more widely known, and this encouraged me to initiate work to do likewise. Unfortunately, the timing was not good, as concern about the credentials of editors, and allegations about attempts to influence Wikipedia entries, had resulted in very careful vetting and an aversion to anything which even hinted at advertising, even from the cultural sector. Some forays into relevant Wikipedia entries in fact resulted in my web developer’s account being shut down almost immediately. Somewhat discouraged, we directed our effort at the more welcoming global networks, such as Flickr and YouTube.

Since then, Wikipedia seems to have adopted a more mature stance, still managing entries very carefully, but recognising that partnership with organisations with information which enriches its entries is to be welcomed rather than resisted (although a recent verbal exchange with a Wikipedia editor makes me think that this is still somewhat dependent on the outlook of individual editors). I was very interested to see the creation of the concept of the ‘Wikipedian in Residence’ at the British Museum, although my move from the National Library back into HE required a focus on other priorities.

Advertisements for the Wikipedia Lounge in the John Rylands University Library

An interior shot of the John Rylands Library in central Manchester

My move to The John Rylands University Library at the University of Manchester coincided with contact from Wikimedia UK, who were now actively seeking partnerships with education institutions, recognising the mutual benefit of working with students, academics and libraries to foster more effective use of Wikipedia as a resource, to encourage content creation and editing by experts, and to link entries to relevant resources. As a library at a major research-intensive institution, with the additional responsibility of acting as steward of an internationally important special collections library, we were identified as a particularly valuable pilot partner. For our part, influenced very much by the sort of strategic thinking coming from organisations like OCLC, which encouraged libraries to collaborate with large information hubs, we were very enthusiastic about a partnership which would help us connect to a global network-level hub and also address the digital literacy agenda.

We have begun the engagement process, which we hope will develop into a substantial project which includes a ‘Wikipedian in Residence’. To date, we have hosted a ‘Wikipedia Lounge’, which saw academics and students meet Wikipedians to learn more about getting involved and creating content. This event attracted academics, students and librarians, and we have plans to repeat it. We are now in discussions with Wikimedia UK about setting up a 12-month pilot project which would see a Wikipedian in Residence based at the John Rylands Library, working with our curators, students and academics to expose our collections, encourage further research and learning, develop a network of Wikipedians at Manchester (we already have some), and place Wikipedia within our digital literacy strategy as a powerful tool which, when used effectively, can play an important part in University teaching and research. There are already a number of references to our collections in Wikipedia entries, e.g. in biographical pages such as that of the author Alison Uttley, which serve to demonstrate the very great untapped potential. Perhaps the best entry which focuses on a specific item in our collections is that for the Rylands Library Papyrus P52, also known as the St John’s fragment (illustrated), which ranks as the earliest known fragment of the New Testament in any language.

Fragment of St John’s Gospel: recto

Of course there are concerns about Wikipedia: it may not be reliable; it can be used as an easy substitute for comprehensive research and study; it can be difficult to change erroneous content, etc. But to ignore it or dissuade students from its use reminds me of the approach that was sometimes taken in the face of the rapid rise of Google in the late 1990s. It is a battle we are unlikely to win, and so much more could be achieved by working with, not against, the new information providers, especially when so much of what we are about has synergy: open access, collaboration, no profit motive, etc.

It is early days for us in this engagement at the moment, but I have high hopes. And I’m sure that when we introduce our Wikimedia UK contacts to the wonders of the John Rylands Library, they will find it impossible not to see the obvious potential!


Simon Bains is Head of Research and Learning Support and Deputy Librarian, The John Rylands University Library, University of Manchester. You can see his Library Website staff page or follow him on Twitter: @simonjbains

Posted in Guest-post, openness, Wikipedia | 8 Comments »

Guest Post: Being Openly Selfish and Making “OER” Work for You

Posted by ukwebfocusguest on 2 April 2012

This is the second guest post on the theme of openness which, as described last week, explores various aspects of openness which have been addressed in the current issue of the JISC Inform newsletter.

In this guest post James Burke (@deburca) explores what the term OER currently means to him, although he admits: “I’m sure that it will mean something different to me 12 months from now…“.


What is/are OER?

Even though OER has a new global logo, it is one of those terms that appears to have no formally agreed definition, and people’s use of and reference to the term OER changes over time.

“The term OER is broad and still under discussion” and over the past few years OER has been used as a “supply-side term” which has remained “largely invisible in the academy”. Metaphors (“Open Education and OER is like…?”) have been used to take a light-hearted look at potential issues and tensions, such as those between “Big OER and Little OER” and all in-between. On the definition front, Stephen Downes has written a useful “Half an Hour” essay, “Open Educational Resources: A Definition”, and David Wiley (Open Content and the 4Rs) recently put forward “2017: RIP for OER?” (or not…).

The FAQ page for Open Education Week (held on 5-10 March 2012) provides a useful, current overview of OER and Open Education.

One of the “core attributes” of OER is that access to the “content is liberally licensed for re-use in educational activities, favourably free from restrictions to modify, combine and repurpose the content; consequently, that the content should ideally be designed for easy re-use in that open content standards and formats are being employed”. So, now that I have re-used the new and “liberally licensed” OER global logo in this post, I have a number of options and queries regarding adherence to the licence and the provision of any requested attribution, such as “how do I properly attribute a work offered under a Creative Commons license?”, which leads me to “what are the best practices for marking content with Creative Commons licenses?”.

I’ll settle for using: “OER Logo” © 2012 Jonathas Mello, used under a Creative Commons license: BY-ND

…but maybe I should have included this attribution directly beneath the image to be less ambiguous for the human reader? Or maybe I should have associated the licence and attribution more “semantically” and unambiguously with the image for the “machine reader”? Or maybe I should just have made my life simple and used “Kevin” to add attribution directly to the image to cater for both human and machine readers? And what is this “machine” anyway…?
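
As a concrete illustration of the options above, here is a minimal sketch (in Python, standard library only) of producing both a human-readable attribution line and a machine-readable HTML fragment based on the rel="license" link relation. The function names, output wording and the BY-ND 3.0 deed URL are my own illustrative assumptions rather than any official Creative Commons tooling.

    def attribution_text(title, year, author, licence):
        """Human-readable attribution, in the form used in this post."""
        return f'"{title}" (c) {year} {author}, used under a Creative Commons license: {licence}'

    def attribution_html(title, year, author, licence, licence_url):
        """Machine-readable fragment: the rel="license" link is the part a machine looks for."""
        return (
            f'<p>"{title}" &copy; {year} {author}, used under a '
            f'<a rel="license" href="{licence_url}">Creative Commons {licence}</a> license.</p>'
        )

    if __name__ == "__main__":
        print(attribution_text("OER Logo", 2012, "Jonathas Mello", "BY-ND"))
        print(attribution_html("OER Logo", 2012, "Jonathas Mello", "BY-ND",
                               "http://creativecommons.org/licenses/by-nd/3.0/"))

Embedding something like the second form in a page is what gives the “machine readers” discussed in the next section a licence statement to find.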

Machine readable, but what “machine”?

The Creative Commons license-choosing tool provides you with a snippet of RDFa that you can embed in your web-based content, with the idea that this “machine readable” metadata can be automatically identified and extracted by “machines” such as search engines and made available via their search tools, e.g. Google Advanced Search. This “machine readable” licence can also be used to facilitate accurate attribution via browser and CMS plugin “machines” such as Open Attribute, as well as being used for automated cataloguing, depositing, etc.
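
To make the idea of a “machine reader” concrete, the following minimal Python sketch (standard library only) fetches a page and collects any links marked with rel="license", the link relation emitted by the CC license-choosing tool’s snippet. The page URL is a placeholder assumption, and the class name is illustrative.

    from html.parser import HTMLParser
    from urllib.request import urlopen

    class LicenceFinder(HTMLParser):
        """Collect the href of any <a> or <link> element whose rel attribute includes 'license'."""

        def __init__(self):
            super().__init__()
            self.licences = []

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            rel_values = (attrs.get("rel") or "").split()
            if tag in ("a", "link") and "license" in rel_values and "href" in attrs:
                self.licences.append(attrs["href"])

    if __name__ == "__main__":
        # Placeholder URL -- substitute any page carrying a CC licence statement.
        html = urlopen("http://example.org/some-oer-page").read().decode("utf-8", "replace")
        finder = LicenceFinder()
        finder.feed(html)
        print(finder.licences)  # e.g. ['http://creativecommons.org/licenses/by-nc/3.0/']

A search engine, a browser plugin such as Open Attribute or a repository ingest script is, in essence, doing a more robust version of the same thing.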

Creative Commons is not the only “machine readable” licence; many countries have their own “interoperable” Public Sector Information/Open Government Licences, such as the UK Government Licensing Framework, and many “vanity licenses” for content in both the public and private sectors have also emerged, but Creative Commons remains the most widely used technically and legally interoperable licensing framework.

The Google Advanced Search help refers to its usage rights filter but states that this filter is used to show “pages that are either labeled with a Creative Commons license or labeled as being in the public domain”. Bing does not have an equivalent usage rights filter, but its “advanced operators” can be used to derive similar results; e.g. inbody:http://creativecommons.org/licenses/by “search term” loc:gb can be used to find UK content that is likely to have a Creative Commons licence deed link in the metadata or in the HTML body.

The implementation of Creative Commons licences in content can be quite variable, ranging from a Creative Commons icon in a PDF file that contains no link to the license deed through to a complete snippet of RDFa containing the work’s title together with attribution, source and more-permissions URLs.

Mainstream Web Applications such as Flickr, Soundcloud, Vimeo, Scribd and SlideShare all allow a Creative Commons licence to be associated with uploaded image, audio, video or “Office” document content, which is then publicly visible and searchable via Google, Bing et al. with the site: operator and a usage rights filter. Oddly, for most of these Web Applications, Google and Bing provide the best search results, and usage rights filters within the Web Applications themselves can be a rare find.

So, to me, the “machine” that is “reading” OER is really any Web application that can consume openly licensed content accessible via the Web, and for convenience the best way of finding this “stuff” is via the mainstream search engines, even if I do have to use a usage rights filter…

Openly licensed resources and “stuff” is readily available on the Web

Arguably, the Internet and the Web would not be where they are today without being “open” and built upon a “stack” of standards and simplifications that specifically lack patents and the associated licences that need to be paid for. The Web has significantly lowered the cost of software and content collaboration, creation and publishing, and has encouraged the embracing of serendipity.

Most of the Internet is run by volunteers who do not get paid, most of the Internet is run by amateurs”. – (video: Innovation in Open Networks) Joi Ito, Thinking Digital May 2010 (@joi)

Joi Ito speaking at #TDC10 from Codeworks Ltd on Vimeo.

One of “open’s” main advantages over proprietary digital content has been the lowering of cost and of the cost of failure. The main source of friction in the production of digital content used to lie primarily at the content layer in the stack (see the prezi and video above), but as this eased, the highest cost and the greatest friction in consuming and publishing content shifted towards the legal domain. With the introduction of open licensing frameworks such as Creative Commons, which offer worldwide legal interoperability, this legal friction is being eased.

More and more educational content is going through a “rights clearance” process and being published by institutions with more permissive open licences “openly” to the Web – and by “openly” I mean visible to search engines and not behind authentication “walls” such as learning platforms. Quite often this Web-published content is a copy, with attribution back to the institution and the institutionally held source, and is copied to more than one location – if you have a PowerPoint presentation, why not upload it to both Scribd and SlideShare?

This content can now be readily discovered and shared, promoted or “amplified” via Social Networks, and usage data – metrics, metadata and paradata from various sources – is readily and, in a lot of cases, openly available. Properly attributed derivative works should contain links back to the source and, if they do not, there are various methods of monitoring and finding duplicate content “openly” via Web Applications such as Blekko. The content being consumed can also surface the people consuming it, which can subsequently be used to discover how the re-used work is being used, whether that be in a different context to the original, in a different language, etc.

Derivative works are often created by “consumers” who are individuals, not institutions or organisations, and attribution is made to them personally – so why not include attribution to the “authors” within the original Creative Commons licence? For example, copyright may be held by the institution, but why not add acknowledgement to the people (with links to their preferred Social Graph “node”) who created the works, so that they get their “whuffie” and can be “openly selfish”?

I tend to follow people rather than organisations, and to me attribution to a person tends to be more important than attribution to the copyright owner, as it tends to be the person who provides the most context on how the content is being used, and from them I tend to “serendipitously” discover new content. This is nothing new and is fundamental to the emerging MOOCs.

What OER means to me at the moment

For me, at the moment, the most important aspect of OER is the availability of openly licensed content accessible via the Web that has a clear provenance for all assets used, with attribution to the people who created it as well as to the copyright owner – a kind of “OeR”.

This “OeR” includes all “non academic institution” content, such as that from Khan Academy, Peer 2 Peer University and Flat World Knowledge, and ideally this “OeR” has more permissive Creative Commons licences and avoids the NoDerivs and NonCommercial conditions that restrict my usage rights as per the “4Rs Framework”.

…but is this OER, and can this type of OER use that new global logo?


Twitter conversation from Topsy: [View]

Posted in Guest-post, openness | 10 Comments »

Guest Post: Open Access to Science for Everyone

Posted by ukwebfocusguest on 30 March 2012

Yesterday I announced a series of guest blog posts on the theme of openness. I’m pleased to launch this series with a post by Ross Mounce, a PhD Candidate at the University of Bath. In the post Ross outlines his views on the importance of open access for not just the research community but for everyone.


Before the internet, there were non-trivial costs associated with disseminating paper-based research publications – each and every page of every article of every journal cost the publisher money to produce. Every single paper copy of those journals needed to be physically sent by post to all institutions, libraries and individuals that wanted those journals. This was both a costly and complex process, so it was sensibly outsourced to full-time professional publishers to deal with, some of whom were commercial for-profit enterprises – at first this didn’t cause any problems.

But now the internet allows unlimited copies of research publications to be created for zero cost and these can be advertised and disseminated at relatively insignificant costs – just the cost of bandwidth, keeping servers up and running, maintaining a user-friendly website that search engines can crawl, and providing an RSS feed to notify interested parties of new journal articles. Indeed, when Tim Berners-Lee created the Web in 1991, it was with the aim of better facilitating scientific communication and the dissemination of scientific research.

Note that for the sake of clarity we’ll ignore the role of manuscript-submission, organising peer-review, and the peer review process itself here – I contend these are only of minor administrative cost. Peer-review is provided for free by other academics and manuscript-submission is a largely automated process often requiring little editorial input. Only organising peer review is an administrative task that might conceivably have a significant and real time cost. Furthermore these processes need not necessarily be performed by the same organisation that acts to distribute the publications (decoupled scholarly publication), a nice idea as popularised by Jason Priem.

Yet the models of payment for the publication and distribution of research works are still largely centred on paying-for-access rather than paying-to-publish. In the digital age this is inefficient and illogical. Why try to charge millions of separate customers (institutions, libraries, academics, and other interested persons) for a resource – a complex undertaking to organise in itself – when you can simply ask for a sustainably priced one-off charge to the funder/authors of the content to be published? The latter author-pays model is clearly the simpler, easier to implement option. Yet, I contend that the reader-pays model is currently dominant, especially with commercial for-profit publishers, because it can generate excessive profits through its opaqueness and inefficiency (relative to the ultimate goal of providing free, Open Access to scientific knowledge for everyone).

The interests of shareholders and board members of for-profit publishing companies now conflict hugely with those of research funders, institutions and academics. By definition, the primary goal of a for-profit publishing company is profit. In that respect, some academic publishers make Murdoch look like a socialist, with their unscrupulous profiteering as gatekeepers denying access to scientific knowledge. Whereas the goal of STM researchers and funders is surely for knowledge to be created and shared with the world. To myself and thousands of other academics it is clear without further explanation that these two goals cannot simultaneously be maximised. One strategy works to maximise profit by proactively denying access to vital materials, and punishing those caught sharing materials, whilst the other works to maximise dissemination potential, so that all (who have access to a computer – unfortunately not everyone has access to one of these, but this problem is out of scope) can if they wish read the material, whilst forfeiting maximum profit-potential.

Of course, if research is entirely privately funded, it need not be openly published – one cannot force private companies to disclose all the research and development they do (although efforts by certain private companies to share research to cure malaria and other humanitarian problems are certainly very welcome!). But as I understand it, the majority of scientific research is publicly funded and thus there is a clear moral duty to share results with everyone, e.g. taxpayers. To paraphrase James Harvey: if you want to keep your research private, fund it yourself. That’s the privilege of private funding.

The tension between librarians (who have to negotiate to buy subscription-access to journals) and academics united on one side, and for-profit publishing companies on the other, is particularly noticeable at the moment, hence The Economist’s labelling of this as a potential Academic Spring, analogous to the recent revolutions overthrowing malevolent incumbent powers – the Arab Spring. Note that a cartoon representation of this debate can be seen on YouTube and is embedded below.

Indeed it is not just academics who benefit from access to scientific literature – as is being documented by a new initiative called Who Needs Access? There are a huge number and variety of people who would benefit from legally unrestricted, free, Open Access to scientific publications, e.g. patients, translators, artists, journalists, teachers and retired academics. When one hits a paywall asking for 51 USD for just 24 hours’ access to a single article on palliative care, it’s no wonder people are often put off reading the scientific literature. Thus everyone with even the slightest bit of curiosity about scientific research would stand to benefit from Open Access to scholarly publications, as achieved by the author-pays model.

So where would all these publications go, if not on servers owned and controlled by for-profit publishers? The ideal, natural home, as Björn Brembs argues, is libraries and university presses acting as institutional repositories for research publications, code and data. Currently IRs are used as Green OA archives, which achieve only limited success in providing free full-text access. But as Networked Repositories for Digital Open Access Publications perhaps they might enable Open Access for all, as well as reducing the overall cost of publishing research.

In areas of science that have already shifted to this model, e.g. parts of physics and related subjects with arXiv (which is arguably analogous to a subject-specific Cornell University IR), science is distributed pre-review with remarkable ease and cost-effectiveness, at less than $7 per article submitted.

Some final thoughts:

We lose so many legal freedoms with closed access publishing, and its tendency to assign all copyright to publishers (not just mere access, but also text-mining rights, and the right to re-use information in even vaguely commercial contexts), that we cannot and should not allow this to continue any longer, as it is causing irreparable damage to the future usability of scientific literature.


Ross Mounce, a PhD Candidate at the University of Bath is an active member of the Open Science community, pushing for beneficial reforms in scholarly publishing. Having had trouble in the past getting research data from publications, he is very proactive in blogging and giving talks on how scientific publishing can improve utility and dissemination by making greater and better use of digital technologies.

Contact details

Email: ross.mounce@gmail.com
Blog: http://www.science3point0.com/palphy/
Twitter: @rmounce

Posted in Guest-post, openness | 10 Comments »

The Importance of Images in Blog Posts

Posted by Brian Kelly (UK Web Focus) on 12 March 2012

Over the past year or so I’ve become aware of the importance of images in blog posts. I noticed this after I started to move away from reading blogs on my RSS reader on my mobile device, which didn’t include images, to use of RSS and Twitter aggregator services, such as Smartr, Pulse, Flipboard or Zite.

An example of the interface which I use most mornings on the way to work can be seen. This image shows the Pulse App on my iPod Touch. As can be seen in the display of UKOLN RSS feeds, my blog and the blog of my colleague Marieke Guy both feature images taken from the blog posts, which can help differentiate posts; in contrast, items available in the UKOLN News RSS feed, for which we tend not to provide images, fail to stand out.

It was as the importance of such personalised newspaper apps started to become apparent that I decided to make greater use of images on this blog. In this respect I am well behind Martin Weller who, on his Ed Techie blog, frequently includes images in his posts.

The thing I didn’t expect was to see such interfaces being provided for desktop browsers. However last week when I followed a link to a post on Library 2.0 on Steve Wheeler’s Learning With ‘E’s blog I found a similar graphical interface, with an image for the most recent post displayed prominently and images for other recent posts displayed underneath.

I think it will be interesting to see the way in which user interface approaches developed for mobile devices start to migrate to a desktop environment.

In a post on Who let the blogs out? Steve discusses the new theme, with a tongue-in-cheek reference to a recent series of posts on the Content is King vs Context is King debate:

For all these years I have been focusing mainly on content. It was substance over style. Focusing solely on content at the expense of context is a mistake. 

Steve went on to describe the changes to the blog:

I gave my blog a makeover a few days ago. I invoked one of the new templates that Blogger has just started to offer its users. You can see the difference it has made.  …  It holds the content, and presents it in a manner that is more accessible, easy to explore and in a more dynamic way. 

The point about “accessible content” is important, I feel, particularly in the context of accessibility for people with disabilities, which often focusses on support for Assistive Technologies (AT). But since the content hosted on blogs is available as RSS feeds, end users have much greater flexibility to read blog content in ways which reflect their own personal preferences, some of which may be determined by particular disabilities. So for me the accessibility challenge, when presented with more graphical and flexible interfaces such as the one that can be seen on Steve’s blog, is the ease with which such content can be rendered by AT tools, possibly including tools which don’t support JavaScript. It is good to see that the blog is felt to conform with accessibility guidelines according to WAVE (based, of course, on only checking guidelines which can be tested with automated tools) although the blog does not conform with HTML standards.
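
As a small illustration of that flexibility, the sketch below (Python, standard library only) reads a blog’s RSS feed directly, bypassing the graphical theme altogether. The feed URL follows the standard WordPress.com /feed/ convention and is an assumption for illustration rather than a tested endpoint.

    import urllib.request
    import xml.etree.ElementTree as ET

    FEED_URL = "http://ukwebfocus.wordpress.com/feed/"  # assumed standard WordPress feed path

    with urllib.request.urlopen(FEED_URL) as response:
        tree = ET.parse(response)

    # RSS 2.0 wraps each post in an <item> element with <title> and <link> children.
    for item in tree.findall("./channel/item"):
        print(item.findtext("title", default="(untitled)"))
        print("  " + item.findtext("link", default=""))

A screen reader user, or anyone who finds a themed interface awkward, could consume the same content through whichever feed reader or script suits them.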

It will be interesting to see if developments such as this theme, which is provided on the Blogger.com platform, owned by Google, will challenge traditional views on the importance of HTML conformance and Web accessibility guidelines. I would be interested to find out if the content of the blog can be made available to AT tools whilst still providing the new interface for those who prefer this way of interacting with the continually updated content we often find on blogs.

I should add that Steve’s blog can be read on my iPod Touch and Android phone using apps such as Pulse. This makes me wonder if we can regard such devices as AT tools for users who may, for example, find it difficult to make use of desktop computers?

Posted in Accessibility, Blog | 17 Comments »

Risk Register for Blogs

Posted by Brian Kelly (UK Web Focus) on 17 February 2012

 

Bloggers’ Squabble Involves Lawyers

An article published in the Guardian the week before Christmas announced “Hacked climate emails: police seize computers at West Yorkshire home” and went on to describe how “Police officers investigating the theft of thousands of private emails between climate scientists from a University of East Anglia server in 2009 have seized computer equipment belonging to a web content editor based at the University of Leeds“. It seems that “detectives from Norfolk Constabulary entered the home of Roger Tattersall, who writes a climate sceptic blog under the pseudonym TallBloke, and took away two laptops and a broadband router“.

But rather than commenting on a climate denier’s blog, of more interest was Tattersall’s post “Greg Laden: Libellous article”, which describes how “Blogger Greg Laden has libelled me [Tattersall] in a scurrilous article on his blog“. In brief, Greg Laden appears to have accused Roger Tattersall of illegal activities. However, being a climate denier is not illegal, and Laden seems to have opened himself up to accusations of libel. He seems to have realised this and has updated his post so that it now begins:

I’ve decided to update this blog entry (20 Dec 2011) because it occurs to me that certain things could be misinterpreted, in no small part because of the common language that separates us across various national borders, and differences in the way debate and concepts of free speech operate in different lands.

I want to make it clear that I do not think that the blogger “TallBloke” a.k.a. Roger Tattersall has broken British law

I hope that will be the end of that matter, but it does highlight some additional legal risks related to publishing a blog, beyond the issue of the cookie legislation which was discussed in a recent post. This incident highlights possible reputational risks for an organisation which employs a blogger (even if, as in this case, the blog is published anonymously and is not related to work activities) and risks that impassioned debate may lead to libellous comments being posted.

A Risk Register For Blogs

There is a danger that risk-averse institutions may use such incidents as an opportunity to restrict or even ban blogs provided by their staff. In order to minimise such risks it may be advantageous to take a lead in providing a risk register which documents possible risks and the ways in which such risks may be minimised.

I am in the process of providing a risk register and the draft is given below. I welcome feedback on the risks listed below and the approaches described to minimising them. In addition I would welcome suggestions for additional risks which I may have failed to address, and suggestions for how such unforeseen risks can be minimised.

For each risk listed below, a description is given together with the approach taken to minimising it.

Legal Risks

  • Infringement of ‘cookie’ legislation
    Description: Since the WordPress.com service uses cookies to measure Web site usage, this may be regarded as infringing the ICO’s ‘cookie’ legislation.
    Risk minimisation: The ICO’s guidance suggests that, due to the technical difficulties in requiring users to opt in, further action is unlikely to be taken provided appropriate measures to address privacy concerns are being taken. In the case of this blog, a sidebar widget provides information on cookie usage.

  • Publication of copyrighted materials
    Description: Blog posts may contain copyrighted materials owned by others. Images, such as screen shots, may be included without formal permission being granted.
    Risk minimisation: Where possible, links will be provided to the source. If copyright owners feel that use of their materials is inappropriate, the content will normally be removed within a week.

  • Plagiarism
    Description: Blog posts may plagiarise content published by others.
    Risk minimisation: Where possible, links will be provided to content published by others and quoted content will be clearly identified.

  • Publication of inappropriate comments
    Description: Inappropriate blog comments may be published.
    Risk minimisation: The policy for this blog states that inappropriate comments will be deleted.

Sustainability Risks

  • Loss of content due to changes in WordPress.com policies
    Description: WordPress.com may change its policies on the content which can be hosted. Alternatively, since the service is based in the US, the US Government may force content published on this blog to be removed.
    Risk minimisation: Since this blog has a technical focus, it is felt unlikely that this will happen.

  • Loss of blog service due to the WordPress.com service being unsustainable
    Description: The WordPress.com service may go out of business or change its terms and conditions so that the blog cannot continue to be hosted on the service.
    Risk minimisation: It is felt unlikely that the WordPress.com service will go out of business in the short term. If the service does go out of business, or its terms and conditions change, it is felt that due notice will be given, allowing content to be exported and the blog hosted elsewhere.

Reputational Risks

  • Damage to the blog author’s reputation due to inappropriate posts being published
    Description: The author’s professional reputation will be undermined if inappropriate posts are published.
    Risk minimisation: The blog’s policy states that “the blog will provide an opportunity for me to ‘think out loud’: i.e. describe speculative ideas, thoughts which may occur to me“. If such thoughts are felt to be inappropriate, or if incorrect or inappropriate content is published, an apology will be given.

  • Damage to the blog author’s host institution or funder due to inappropriate posts being published
    Description: The reputation of the author’s host institution or funder will be undermined if inappropriate posts are published.
    Risk minimisation: The author will seek to ensure that the conversational style of the blog does not undermine the position of the author’s host institution or funder. Occasional surveys will be undertaken to ensure that the content provided on the blog is felt to be relevant for the blog’s target audience.

Twitter conversation from Topsy: [View]

Posted in Blog, Legal, Web2.0 | 3 Comments »

I Built It and They Didn’t Come! Reflections on the UK Web Focus Daily Blog

Posted by Brian Kelly (UK Web Focus) on 3 January 2012

On 1 January 2011 I set up the UK Web Focus Daily blog. As described in the initial post:

Inspired by WordPress.com’s suggestion that WordPress users may wish to publish a blog post a day (see the post on “Challenge for 2011: Want to blog more often?“) I have set up this blog. This will be used for informal notes, ideas, etc.

The blog was used actively during the first six months of the year, with 30 posts being published in January, 27 in February, 26 in March, 30 in April, 24 in May and 26 in June, with the final 6 posts of the year being published in July.

The blog made use of the P2 theme which is described as “A group blog theme for short update messages, inspired by Twitter“. As can be seen in the screenshot the post creation window is displayed at the top of the blog, thus making it simple to create brief posts.

The content posted is unlikely to be of significant interest to others; the blog was primarily intended to keep brief notes about topics of interest to me. However, shortly after launching the blog I realised that it could be used to see how much traffic a blog generates if no attempt is made to promote it. On 8 January a post, in which I described how I intended to unsubscribe from RSS feeds with only summary content, contained links to two blogs, which subsequently resulted in comments being posted on the blog. I therefore did not publish any links to blogs in subsequent posts, and I described this experiment in a post entitled Build It and They’ll Come? which was published on 23 January.

As can be seen from the accompanying image, as expected the numbers of visitors to the blog were low (apart from the home page there were only four posts which received over 10 visits).

It will be noticed that there was a big jump in the visitor numbers in June. As described in a post entitled Blog Views Up By 300%!, this occurred after the block preventing search engines, including Google, from indexing the blog was removed on 31 May.

Normally experiments look at ways of measuring strategies for maximising access to resources. This experiment looked at ways of publishing content openly whilst keeping the number of visitors to a minimum – along the lines of publishing the plans for the demolition of Arthur Dent’s house to make way for a bypass at the local planning office, “on display in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying ‘Beware of the Leopard.’“

The suggestions I have for those who wish to minimise the chances that people will find a blog are:

  • Block search engines from indexing the site (note you can also create a unique string so you can check if Google has indexed the site); a small robots.txt check is sketched below.
  • Don’t link to other people’s blog posts: they’ll see the referrer link and possibly choose to subscribe to your blog.
  • Don’t allow comments: people may find what you are writing about of interest, add their own thoughts and then look for further comments.
  • Don’t add the blog to any directories.
  • Don’t refer to your blog on other web sites or blogs.
  • Don’t tweet about the blog.

Of course, if you want others to read your posts you’ll do the opposite! More seriously, this experiment has helped to demonstrate that simply building an online resource isn’t sufficient if you want users to make use of your resource. The launch of the web site is just the start of the process.
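
For the first suggestion in the list above, here is a minimal sketch (Python, standard library only) of checking whether a blog’s robots.txt actually blocks crawlers; the blog address is a placeholder assumption and Googlebot is used as a representative crawler.

    from urllib.robotparser import RobotFileParser

    BLOG_URL = "http://example.wordpress.com/"  # hypothetical blog address

    parser = RobotFileParser()
    parser.set_url(BLOG_URL + "robots.txt")
    parser.read()

    # Googlebot is used here as a representative search engine crawler.
    if parser.can_fetch("Googlebot", BLOG_URL):
        print("Crawlers are allowed: the blog can be indexed.")
    else:
        print("Crawlers are blocked: the blog should stay out of search results.")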


Twitter conversation from Topsy: [View]

Posted in Blog | 4 Comments »

How can universities ensure that they dispose of their unwanted IT equipment in a green and socially responsible way?

Posted by ukwebfocusguest on 26 December 2011

Christmas is a time for sharing and thinking of others. In this guest blog post I’m pleased to provide a forum for Anja ffrench, Director of Marketing and Communications at Computer Aid International. I met Anja at the recent Computer Weekly Social Media Awards and we discussed ways in which universities could ensure that their unwanted IT equipment is disposed of in a green and socially responsible way. Whilst I’m sure most universities will have appropriate policies and procedures in place, I would like to use this opportunity to raise the visibility of Computer Aid International.


The Environmental Cost of using Computers

At every step of a PC’s product life-cycle a carbon footprint is left behind: during the initial extraction of minerals from the environment; the processing of raw materials; the production of sub-components; PC assembly and manufacture; global distribution; and power consumption in use.

The production of every PC requires 10 times its own weight in fossil fuels. According to empirical research published by Williams and Kerr from the UN University in Tokyo, the average PC requires 240kg of fossil fuels, 22kg of chemicals and 1,500kg of water. That’s over 1.7 metric tonnes of materials consumed to produce each and every PC. PCs require so much energy and materials because of the complex internal structure of microchips.

Why it is better to reuse rather than recycle

Given the substantial environmental cost of production, it is important that we recover the full productive value of every PC through reuse before eventually recycling it to recover parts and materials at its true end-of-life. A refurbished computer can provide at least another three years’ productive life.

How does the WEEE directive affect UK Universities?

Since July 2007 the Waste Electrical and Electronic Equipment (WEEE) Directive has been in force. The WEEE directive is an EU initiative which aims to minimise the impact of electrical and electronic goods on the environment, by increasing reuse and recycling and reducing the amount of WEEE going to landfill.

The WEEE directive affects every organisation and business that uses electrical equipment in the workplace. The regulations cover all types of electrical and electronic equipment, including the obvious computers, printers, fax machines and photocopiers, as well as fridges, kettles and electronic pencil sharpeners. The regulations state that business users are responsible, along with producers, for ensuring their WEEE is correctly treated and reprocessed, and they encourage the reuse of whole appliances over recycling. When disposing of your IT equipment you must ensure that it is sent to an organisation that has been approved by the Environment Agency to take in WEEE and which will provide you with Waste Transfer Notes for your equipment.

Do I need to worry about data security?

Under the Data Protection Act 1998 it is your responsibility to destroy any data that may be stored on the machines. Just hitting the delete button is not enough to wipe the data. To ensure you are protected make sure any organisation you use to dispose of your IT equipment uses a professional data wiping solution that has been approved by CESG or similar.
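
To illustrate why simply deleting files is not enough, the short Python sketch below overwrites a file’s contents with random bytes before removing it. This is purely an illustration of the principle (the filename is a placeholder) and is no substitute for a professionally approved whole-disk wiping product, which works below the file-system level and verifies its passes:

    import os

    def overwrite_and_delete(path, passes=3):
        """Illustration only: overwrite a file with random bytes, then delete it.

        Deleting a file normally removes only the directory entry and leaves
        the underlying data on disk; overwriting it first reduces the chance
        of recovery. A certified wiping product operates on the whole disk.
        """
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            for _ in range(passes):
                f.seek(0)
                f.write(os.urandom(size))
                f.flush()
                os.fsync(f.fileno())
        os.remove(path)

    # Example usage on a throwaway file (placeholder name):
    # overwrite_and_delete("old-staff-records.csv")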

An environmentally friendly and socially responsible solution to your unwanted IT equipment

Donating your unwanted IT equipment to a charity such as Computer Aid International is both environmentally friendly and socially responsible. You will be fully complying with the WEEE directive and benefiting from a professional low cost PC decommissioning service, which includes free UK Secret Services approved Ontrack Eraser data wiping.

Computer Aid is the world’s largest provider of professionally refurbished PCs to the not-for-profit sector in the developing world and has been in the business of IT refurbishing for over 14 years. The charity’s aim is to reduce poverty through practical ICT solutions.

To date Computer Aid has provided just under 200,000 fully refurbished PCs and laptops – donated by UK universities and businesses – to where they are most needed in schools, hospitals and not-for-profit organisations in over 100 countries, predominantly in Africa and Latin America. In order to continue its work, Computer Aid relies on universities and companies donating their unwanted computers.

Schools and universities in the developing world using a PC professionally refurbished by Computer Aid will enjoy at least three years more productive PC use. This effectively doubles the life of a PC, halving its environmental footprint, whilst enabling some of the poorest and most marginalised people in the world to have access to computers.

Anja ffrench

Director of Marketing and Communications
Computer Aid International
10 Brunswick Industrial Park
Brunswick Way, London, N11 1JL
Registered Charity no. 1069256

Tel: +44 (0) 208 361 5540
Fax: +44 (0) 208 361 7051

Email: anja@computeraid.org
Website: www.computeraid.org
Twitter: www.twitter.com/anjaffrench and www.twitter.com/computer_aid

_____________________________________________________________

Computer Aid International is the world’s largest and most experienced not-for-profit provider of professionally refurbished PCs to developing countries. We have provided over 185,000 computers to educational institutions and not-for-profit organisations in over 100 different countries since 1998. Our aim is to reduce poverty through practical ICT solutions.

Posted in Gadgets, Guest-post | Leave a Comment »

Beyond Blogging as an Open Practice, What About Associated Open Usage Data?

Posted by Brian Kelly (UK Web Focus) on 14 December 2011

 

Should Projects Be Required To Blog? They Should Now!

A recent post on Blogging Practices Session at the JISC MRD Launch Event (#jiscmrd) provides access to the slides, hosted on Slideshare, which I used at the JISC MRD Programme Launch Meeting. In the talk I reflected on the discussion on Should Projects Be Required To Have Blogs? which took place initially on Twitter and then on this blog in February 2009.

The context to the discussion was described by Amber Thomas: “I should clarify that my colleagues and I were thinking of mandating blogs for a specific set of projects not across all our programmes“. During the discussion the consensus seemed to be that we should encourage a culture of openness rather than mandate a particular technology such as blogs. One dissenting voice was Owen Stephens, who commented: “I note that Brian omitted one of my later tweets – not sure if this was by mistake or deliberately because he recognised it for a slightly more light-hearted comment “i say mandate – let them write blogs!” – but I wasn’t entirely joking“.

Owen’s view is now becoming more widely accepted across the JISC development environment, with a number of programmes, including the recently established JISC Managing Research Data programme and the open JISC OER Rapid Innovation call, requiring funded projects to provide blogs. This current call (available in MS Word and PDF formats) states that:

In keeping with the size of the grants and short duration of the projects, the bidding process is lightweight (see the Bid Form) and the reporting process will be blog-based

and goes on to state that:

We would also expect to see projects making use of various media for dissemination and engagement with subject and OER communities, including via project blogs and twitter (tag: ukoer)

I’m pleased that JISC have formalised this requirement as I feel that blogs can help to embed open working practices in development activities, as well as providing access to information which is more easily integrated into other systems and viewed on a variety of devices than the formats normally used for reporting purposes.

But how should projects go about measuring the effectiveness of their blogging processes, and should the findings be made openly available – as part of the open practices which projects may be being encouraged to adopt, and as data released under an appropriate open licence (as we might expect of data associated with these two programmes in particular), unencumbered by licensing restrictions which may be imposed by publishers or other content owners?

Openness for Blog Usage Data

In addition to providing project blogs there may be a need to demonstrate the value of those blogs. And as well as the individual blogs, programme managers may wish to be able to demonstrate the value of the aggregation of blogs. But how might this be done?

A simple approach would be to publish a public usage icon on the blog. As well as providing usage statistics, such tools should also be able to provide answers to questions such as “Has IE6 gone yet?” and “What proportion of visitors use a mobile device?“. But beyond the tools which we will be familiar with in the context of traditional web sites, there may be a need to measure aspects which are of particular relevance to blogs, such as comments posted on blogs and links to blogs posted from elsewhere.
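
By way of illustration, if raw server logs were available (not normally the case for hosted WordPress.com blogs, so this is purely a hypothetical sketch), a question such as the mobile or IE6 share could be answered with a few lines of Python over a combined-format access log; the filename and user-agent heuristics below are illustrative assumptions:

    import re
    from collections import Counter

    # Minimal sketch: estimate the proportion of mobile visits and the residual
    # IE6 share from a combined-format web server access log ("access.log").
    UA_PATTERN = re.compile(r'"[^"]*" "(?P<ua>[^"]*)"$')  # last quoted field is the user agent
    MOBILE_HINTS = ("Mobile", "Android", "iPhone", "iPad", "BlackBerry")

    counts = Counter()
    with open("access.log") as log:
        for line in log:
            match = UA_PATTERN.search(line.strip())
            if not match:
                continue
            ua = match.group("ua")
            counts["total"] += 1
            if any(hint in ua for hint in MOBILE_HINTS):
                counts["mobile"] += 1
            if "MSIE 6.0" in ua:
                counts["ie6"] += 1

    if counts["total"]:
        print("Mobile visits: {:.1f}%".format(100 * counts["mobile"] / counts["total"]))
        print("IE6 visits:    {:.1f}%".format(100 * counts["ie6"] / counts["total"]))

In practice hosted analytics tools answer such questions directly; the point of the sketch is simply that the underlying usage data is straightforward to process if it is made available.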

A post on Blog Analytic Services for JISC MRD Project Blogs explored this issue and described how tools such as Technorati and eBuzzing may provide lightweight solutions which can help to provide a better understanding of a blog’s engagement across the blogosphere. It should be acknowledged that such tools do have limitations and can be ‘gamed’. However in some circumstances they may help to identify examples of good practice. In addition, gaining an understanding of the strengths and weaknesses of such analytic tools may be helpful in the context of the altmetrics initiative which, in its manifesto, describes how “the growth of new, online scholarly tools allows us to make new filters; these altmetrics reflect the broad, rapid impact of scholarship in this burgeoning ecosystem” and goes on to “call for more tools and research based on altmetrics“.

In a post The OER Turn (which is, according to the author, “the most read post of 2011 on [the JISC Digital Infrastructure] team blog“) Amber Thomas reflects on developments in the Open Educational Resources environment, describes how she now “find[s] [her]self asking what the “Open” in Open Content means” and concludes by asking “What questions should we be asking about open content?“.

My contribution to the discussion is a proposal that, when adopting open practices, one should be willing to provide open access to the usage data associated with those practices.

This was an idea I explored in a post on Numbers Matter: Let’s Provide Open Access to Usage Data and Not Just Research Papers in which I highlighted a comment published in the JISC-funded report Splashes and Ripples: Synthesizing the Evidence on the Impacts of Digital Resources which said that:

Being able to demonstrate your impact numerically can be a means of convincing others to visit your resource, and thus increase the resource’s future impact. For instance, the amount of traffic and size of iTunesU featured prominently in early press reports.

which suggests how quantitative data can be used to support marketing activities. But beyond such marketing considerations, shouldn’t those who believe in the provision of open content and who, in addition, wish to minimise limitations on how the content can be reused (by removing non-commercial and share-alike restrictions from Creative Commons licences, for example) also be willing to make usage statistics similarly freely available? And is arguing that “my use case is unique and usage statistics won’t provide the nuanced understanding which is needed” really so different from the position of those who wish to keep strict control of their data?

In other words, what is the limit to the mantra “set your data free“? Does this include setting your usage data free?
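
As a concrete, if minimal, illustration of what setting usage data free might involve, the sketch below writes monthly view counts to a CSV file accompanied by an explicit open-licence statement. The figures are placeholders rather than real statistics for this blog, and the CC0 dedication is simply one possible choice of licence:

    import csv

    # Placeholder monthly view counts - illustrative values only, not real data.
    monthly_views = {
        "2011-07": 1234,
        "2011-08": 1567,
        "2011-09": 1890,
    }

    with open("blog-usage-2011.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["# Usage data released under the CC0 public domain dedication"])
        writer.writerow(["month", "views"])
        for month, views in sorted(monthly_views.items()):
            writer.writerow([month, views])

    print("Open usage data written to blog-usage-2011.csv")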



Posted in Blog, Evidence | 7 Comments »