UK Web Focus

Innovation and best practices for the Web

Archive for October, 2011

How Should the Library Sector Respond to Predictions of Technological Developments?

Posted by Brian Kelly (UK Web Focus) on 31 October 2011

On Thursday I presented a short paper entitled “What’s on the Technology Horizon?” at the ILI 2011 conference. The paper, which is available in MS Word format, described initial work of the JISC Observatory which led to the publication of the Technology Outlook: UK Tertiary Education report.

The paper summarised the findings of the report, including the technological developments which have now arrived; those expected to have a time-to-adoption horizon of two to three years; and those with an expected time-to-adoption horizon of four to five years.

The focus of the short paper and the accompanying presentation at the ILI 2011 conference was “how should the sector respond to such predictions?” Since I was expecting significant numbers of participants in the session to have mobile devices, I intended to encourage the participants to contribute their thoughts on how the library sector should be responding. When the response to my question “How many of you have smart phones or tablet computers?” showed positive responses from over 90% of those present, I was hopeful that we would be able to crowd-source suggestions for appropriate actions in preparing for the technological developments.

As shown below, I provided some examples of how I might expect libraries to be preparing for technological developments which should now have arrived, with each brief sentence being provided in a form suitable for tweeting.

  • Mobile and Tablet Computing: Personal use of mobile phones & tablets in order to gain experience of new working practices, of accessing library services, etc. Update Acceptable Use Policies to address use of mobile devices. Update Web development tools and standards to ensure mobile access is treated as a ‘first class citizen’.
  • Cloud Computing: Staff development to provide better understanding of Cloud Computing concepts and implications. Update Acceptable Use Policies to address use of cloud services. Ensure potential risks are understood as well as opportunities. Develop risk minimisation strategies.
  • Open Content: Staff development to provide better understanding of open data as well as open access, including licensing issues for open content. Understand personal and organisational barriers to providing open content as well as consuming it. Seek ways in which the Library can provide open content.

Table 2: Actions for today’s technological developments

If each of the hundred or so participants in the room could tweet one or two similar summaries, I suggested, we would have a significant resource based on suggestions from practising librarians and information professionals. This would be particularly valuable for those technological developments which may not yet be impacting on daily activities, which are listed below:

  • Game-Based Learning
  • Learning Analytics
  • New Scholarship
  • Semantic Applications

Table 3: Actions for developments expected to be adopted in two to three years (actions left blank, to be suggested by participants)

  • Augmented Reality
  • Collective Intelligence
  • Smart Objects
  • Telepresence

Table 4: Actions for developments expected to be adopted in four to five years (actions left blank, to be suggested by participants)

I had hoped that, following the talk by Åke Nygren who was giving an alternative view of the future, we would have time available to actively solicit feedback from the audience. Unfortunately due to technical difficulties Åke’s talk overran and we didn’t have time to discuss the ways in which libraries should be responding to these predictions.  In addition I was unable to record a video of my talk due to the video application on my camera stopping after my camera received an SMS alert :-(

I have captured the tweets about my talk using Storify, which has tweets from the following 15 Twitterers: @StarseekR, @karenblakeman, @librarygirlknit, @daveyp, @mstephens7, @SoullaStylianou, @joeyanne, @psychemedia, @ujlibscience, @cybrgrl, @abbybarker, @issip, @jennye, @jannicker and @katelomax. One tweet commented:

RT @abbybarker #ili2011 #a101 I have two mobile devices with me and neither if them are connecting to the wifi properly! Ditto.

In retrospect I think I was too ambitious in seeking to use small group exercises, which are more suited to a workshop session than a short presentation, with the limited time and technical delays conspiring against me. However perhaps a blog post can provide the opportunity for the feedback which wasn’t forthcoming during my talk. My question, then, is: what actions are you taking today in response to the technologies which seem now to be mainstream and those which are expected to arrive in the next two to five years?

Posted in Events | Leave a Comment »

What Twitter Told Us About ILI 2011

Posted by Brian Kelly (UK Web Focus) on 29 October 2011

Thoughts on #ILI2011

As I said to one of the two video bloggers who recorded participants’ thoughts and comments about the Internet Librarian International (ILI) conference, ILI is probably my favourite conference as it provides an opportunity to catch up on developments in the online Library world in the UK, Europe, North America and Australasia. This year at ILI 2011 I could only attend for the first day, but this did give me an opportunity to hear about, amongst other topics, JISC-funded developments in the areas of usage data, analysis techniques which can help to prove value, and three cutting-edge technology developments taking place in Norway, Belgium and the USA.

Unfortunately I don’t have the time to give detailed thoughts on the sessions I attended. However an analysis of Twitter usage at the conference might help to provide some insights into how Twitter was used there.

What Does Twitter Tell Us?

If you carry out a sentiment analysis of the archive of the tweets from last week’s #ili2011 (Internet Librarian International) conference I suspect you’ll find a lot of positive comments. Without going into a textual analysis of the content, what can we learn from the Summarizr statistics of the 2,683 tweets from 310 users? (Note that, as described in a post on Conventions For Metrics For Event-Related Tweets, I feel that such summaries should include a date range, so this total covers the period from slightly after the start of the opening plenary talk on Thursday 27 October at 08:38 (actually 09:38) to Saturday 29 October at 09:37.)

As perhaps might be expected for an event with over 300 librarians and information professionals, the Twitter users understood the benefits of providing distinct tags for the three parallel streams. This is a bit of a hobbyhorse for me and I was pleased that I was able to set a precedent in the first set of parallel sessions when I encouraged the 100 or so participants in the session on “A101 – What’s on the Technology Horizon?” to use the tag #A101 to differentiate the conversation from those taking part in the sessions “B101 – Not So Secret Weapons – Advocacy and Influence” and “C101 – The e-Book Revolution in Libraries“:

ili2011 (2676), a101 (98), c202 (67), lidp (61), a104 (54), a203 (53), b103 (45), a102 (44), a201 (41) and b202 (32).

The easily-identifiable tweets will help me and Åke Nygren, my fellow speaker in the session, to see what was being discussed during our talk, so such session tagging provides a useful way for speakers to gain feedback on their talks. Our opening track seems to have been the only one in which significant numbers of session-tagged tweets were posted. However it seems that the benefits of such tagging were quickly spotted, with the second, third and fourth parallel sessions (which end in 2, 3 and 4) being included in the above list of the top ten hashtags contained in the TwapperKeeper archive. I should also add that in revisiting my post on Thoughts on ILI 2010 it seems that use of session hashtags is new this year, with only session #C102 being included in the list of top ten hashtags for last year’s event. (Having just looked at last year’s programme it seems that Session C102 on Monitoring and Maximising Organisational Impact was given by myself and Joy Palmar, so it seems it has taken a year for this practice to become embedded!)

The list of the top Twitterers at the conference included several of the ‘usual suspects’ who have a proven track record of tweeting at conferences, headed, as was the case for ILI 2010, by @bethanar and @Mimomurr.

Comparing the overall numbers of tweets at this year’s event with ILI 2010, it seems that Twitter usage has now stabilised:

ILI 2011: 80% (2150) of the tweets in this TwapperKeeper archive were made by 14% (45) of the twitterers. The top 10 (3%) twitterers account for 46% (1241) of the tweets. 56% (175) of the twitterers only tweeted once.

ILI 2010: 80% (2032) of the tweets in this TwapperKeeper archive were made by 15% (57) of the twitterers. The top 10 (2%) twitterers account for 45% (1143) of the tweets. 61% (229) of the twitterers only tweeted once.

It should also be noted that once again there were very few geo-located tweets: 39 tweets this year compared with 18 last year, both of which represent no more than 1% of the total number of tweets.
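Summary figures such as these were produced with Summarizr, but similar statistics can be computed directly from an archive of event tweets. Below is a minimal sketch in Python, assuming a hypothetical CSV export with ‘screen_name’ and ‘text’ columns (the real TwapperKeeper export format may differ); it counts hashtags, finds the most prolific twitterers and reports how concentrated the tweeting was.

```python
import csv
from collections import Counter

def summarise(archive_path, top_n=10):
    """Summarise an event tweet archive.

    Assumes a hypothetical CSV export with 'screen_name' and 'text'
    columns; the actual TwapperKeeper/Summarizr export format may differ.
    """
    tweets_per_user = Counter()
    hashtags = Counter()
    total = 0
    with open(archive_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            total += 1
            tweets_per_user[row["screen_name"].lower()] += 1
            for word in row["text"].split():
                if word.startswith("#"):
                    tag = word.strip("#.,:;!?'\"").lower()
                    if tag:
                        hashtags[tag] += 1

    top = tweets_per_user.most_common(top_n)
    top_total = sum(count for _, count in top)
    singletons = sum(1 for count in tweets_per_user.values() if count == 1)

    print(f"{total} tweets from {len(tweets_per_user)} twitterers")
    print("Top hashtags:", hashtags.most_common(top_n))
    print(f"Top {top_n} twitterers account for "
          f"{100 * top_total / total:.0f}% ({top_total}) of the tweets")
    print(f"{singletons} twitterers only tweeted once")

# Example (hypothetical file name):
# summarise("ili2011_tweet_archive.csv")
```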

Feedback From Twitter

The event organisers have sent out a SurveyMonkey form to ILI 2011 participants which will help to inform planning for next year’s events. But in addition the event organisers will also be able to analyse the content of the tweets.

I have created a Storify page which summarises a number of tweets related to participants’ thoughts on the conference, rather than comments on the topics discussed at the conference. The most recent tweets are shown in the accompanying screen shot.

Beyond ILI 2011

We were told about changes in ILI conference organisation, with next year’s event being the responsibility of Information Today’s office based in Oxford.  Although I’ve enjoyed previous ILIs, I do feel it will be beneficial to have greater participation from the UK and mainland Europe. I felt that it was somewhat strange, for example, that although there was much interest in use of social media, there was little discussion about privacy issues and the implications of EU privacy legislation related to cookie use.

In light of the changes to the event organisation I would like to conclude by making some suggestions related to use of social media at the event, based on the ideas I’ve described in this post, which I hope will be useful for other event organisers.

  • Create a TwapperKeeper (or equivalent) archive of event tweets well in advance of the conference. Note that I discovered that a TwapperKeeper archive hadn’t been set up for the #ILI2011 tweets during the opening talk. I created an archive during the talk, but this meant that tweets made in the run-up to the event will not be included in the archive.
  • Be aware of the benefits of session-related (or room-related) hashtags for parallel sessions and ensure that you clearly publicise such hashtags if you wish to encourage their use.
  • Be aware of how tweets can be used in the evaluation of an event.

Finally I’d also suggest that event organisers should consider being pro-active in promoting use of the Lanyrd service. It was suggested that participants’ badges should include their Twitter ID. But in addition, the Lanyrd page for ILI2011 provides an electronic means for participants to develop their professional network. No fewer than 24 of the speakers at the conference are listed on Lanyrd, but as there is an overall total of only 42 participants on the page, this means that the majority of the 311 people who tweeted (or the 136 who tweeted more than once) aren’t included in this network. I think that’s a shame, as I’m a great fan of Lanyrd and have included details of my talk on the Lanyrd page. But that should be the topic of another post!

Posted in Events, Twitter | 9 Comments »

W3Conf: Practical Standards for Web Professionals – Free for Remote Participants!

Posted by Brian Kelly (UK Web Focus) on 28 October 2011

The W3C are hosting their first conference, “W3Conf: Practical Standards for Web Professionals”, which will take place on 15-16 November 2011 at the Redmond Marriot Town Center, Redmond, USA. Although the early bird registration fee of $199 for the two-day event seems very reasonable, and despite the event’s focus on HTML5 and the Open Web Platform probably being of interest to many readers of this blog, I suspect that not many will be able to travel to the US to attend this conference (if you do wish to attend, note that the deadline for early bird registration is 1 November, after which the fee will go up to $299).

However the event Web site states that “The recordings of the presentations will be freely available” and goes on to add that “During the event, there will be a live stream of the sessions, with English subtitling. After the event, each session will be archived for future reference“.

The following sessions will be held at the conference:

Day 1, 15 November:

  • Welcome: Contributing to Open Standards, Ian Jacobs (W3C)
  • Testing to Perfection, Philippe Le Hégaret (W3C)
  • Community Groups: a Case Study With Web Payments, Manu Sporny (Digital Bazaar)
  • Developer Documentation, Doug Schepers (W3C)
  • HTML5 Games
  • Web Graphics – a Large Creative Palette, Vincent Hardy (Adobe)
  • Modern Layout: How Do You Build Layout in 2011 (CSS3)?, Divya Manian (Opera)
  • Shortcuts: Getting Off (Line) With the HTML5 Appcache, John Allsopp (Web Designs)
  • The n-Screens Problem: Building Apps in a World Of TV and Mobiles, Rajesh Lal (Nokia)
  • The Great HTML5 Divide: How Polyfills and Shims Let You Light Up Your Sites in Non-Modern Browsers, Rey Bango (Microsoft)
  • HTML5: the Foundation of the Web Platform, Paul Irish (Google)

Day 2, 16 November:

  • HTML5 Demo Fest: The Best From The Web, Giorgio Sardo (Microsoft)
  • Shortcuts: Data Visualisation With Web Standards, Mike Bostock (Square)
  • Universal Access: A Practical Guide To Accessibility, ARIA, And Script, Becky Gibson (IBM)
  • Security and Privacy: Securing User Identities and Applications, Brad Hill (PayPal), Scott Stender (iSEC Partners)
  • Shortcuts: Touch Events, Grant Goodale (Massively Fun)
  • Mobile Web Development Topic: Building For Mobile Devices
  • Shortcuts: Modernizr, Faruk Ateş (Apture)
  • Browsers and Standards: Where the Rubber Hits the Road, Paul Cotton (Microsoft), Tantek Çelik (Mozilla), Chris Wilson (Google), Divya Manian (Opera)

It was very timely to read about this conference during Open Access 2011 Week, which the JISC, among many other organisations, are supporting. The free access to the talks and resources which will be used illustrates how openness can be used to enhance learning and creativity, in this context for developers who are looking to use Web standards to enhance their services.

The provision of remote access to the conference is also very timely in the context of the JISC-funded Greening Events II project which is being provided by ILRT and UKOLN. It would be valuable if the conference organisers were able to provide statistics on remote participation during the event: how many people viewed from the UK, for example, and for how long? It would be interesting to see if the environmental costs of delivering the streaming video and hosting videos and slides for subsequent viewing could be compared with the costs of flying to the US.

Posted in Events, standards | Leave a Comment »

Learning Analytics and New Scholarship: Now on the Technology Horizon

Posted by Brian Kelly (UK Web Focus) on 26 October 2011

What’s On The Technology Horizon?

Tomorrow I’m giving a talk on “What’s on the Technology Horizon?” at the Internet Librarian International (ILI) 2011 conference. This talk is based on a “Technology Outlook: UK Higher Education” report commissioned by UKOLN and CETIS which explores the impact of emerging technologies on teaching, learning, research or information management in UK tertiary education over the next five years.

In a post entitled “‘I Predict A Riot’: Thoughts on Collective Intelligence” I described how “the report highlights Collective Intelligence as one emerging technology which is predicted to have a time-to-adoption horizon of 4-5 years“. Two areas which are expected to have a time-to-adoption horizon of 2-3 years are Learning Analytics and New Scholarship. I would agree that these areas are likely to have an impact on mainstream university activities before collective intelligence, but are these areas really 2-3 years away? It does seem to me that early adopters in these areas are already having an impact on the mainstream.

Learning Analytics

Dave Pattern, systems librarian at the University of Huddersfield, for example, is also giving a talk at the ILI 2011 conference, about the JISC-funded Library Impact Data Project (LIDP). The slides Dave will be using, together with an accompanying handout, are available from the University of Huddersfield repository. In addition the slides are also available on Slideshare which, perhaps somewhat ironically, means that the slides are more interoperable, as they can be easily viewed on mobile devices such as an iPhone through Slideshare’s HTML5 interface and can be embedded on third party Web sites, such as this blog.

The talk will describe how “The project looked at the final degree classification of over 33,000 undergraduates, in particular the honours degree result they achieved and the library usage of each student” and explored the hypothesis “There is a statistically significant correlation across a number of universities between library activity data and student attainment”.

If you want to know the findings of the project you may wish to view the slides, read the project blog or the various papers which have been published about this work including an article on “Looking for the link between library usage and student attainment” published in Ariadne in July 2011.

This project is one of several which have been funded under the JISC’s Activity Data Programme. The other projects are providing engagement and dissemination activities on their project blogs.

It therefore does seem to me that we are seeing JISC-funded project activities which are helping to explore the relevance of, in this case, activity data relating students’ use of library resources to their achievements, and that the findings are being made available to a wider audience through this contribution to the ILI 2011 conference. But what of New Scholarship?

New Scholarship

The Technology Outlook report (PDF format) describes how:

Increasingly, scholars are beginning to employ methods unavailable to their counterparts of several years ago, including prepublication releases of their work, distribution through non-traditional channels, dynamic visualization of data and results, and new ways to conduct peer reviews using online collaboration. New forms of scholarship, including creative models of publication and non-traditional scholarly products, are evolving along with the changing process. 

Some of these forms are very common — blogs and video clips, for instance — but academia has been slow to recognize and accept them. Proponents of these new forms argue that they serve a different purpose than traditional writing and research — a purpose that improves, rather than runs counter to, other kinds of scholarly work. Blogging scholars report that the forum for airing ideas and receiving comments from their colleagues helps them to hone their thinking and explore avenues they might otherwise have overlooked. 

As we have seen from the above, the library sector seems to be willing to make use of blogs in supporting scholarly activities. We can also see an example of pre-publication of scholarly work. Readers of this blog are also likely to be aware of ways in which Twitter is being much more readily accepted as a means of supporting a variety of educational and research activities, with a recent post on Les Carr’s Repository Man blog describing ways of Using EPrints Repositories to Collect Twitter Data.

Beyond the library and repository sector, as described in a post on Recognising, Appreciating, Measuring and Evaluating the Impact of Open Science, the recent Science Online London 2011 conference provided an example of how scientific researchers are making use of open approaches which can be regarded as new scholarship. In addition the Beyond Impact project, “an Open Society Foundations funded project that aims to facilitate a conversation between researchers, their funders, and developers about what we mean by the ‘impact’ of research and how we can make its measurement more reliable, more useful, and more accepted by the research community”, is looking to ensure that appropriate ‘reward’ mechanisms can be provided for researchers who wish to engage in scientific research beyond the traditional publication of peer-reviewed papers.

Conclusions

In this post I am suggesting that both Learning Analytics and New Scholarship are moving beyond the early adopters and starting to be embraced by the mainstream. I also feel that Open Access Week 2011, which is taking place this week, provides a timely opportunity to welcome such developments, since New Scholarship, in particular, often encourages use of blogs, Twitter and similar tools to work in a more open fashion, and Learning Analytics can benefit from the provision of open, though perhaps anonymised, data. I am looking forward to seeing the level of interest in these areas among participants at the ILI 2011 conference. But is my optimism misplaced? Åke Nygren is also speaking in the session on “What’s on the Technology Horizon?” and, as can be seen from his slides, which are also available on Slideshare and embedded below, he has a very different view to mine! Both of our slide decks are embedded below to make it easier to compare the contrasting visions.

 

Posted in Events | 8 Comments »

My Activities for Open Access Week 2011

Posted by Brian Kelly (UK Web Focus) on 24 October 2011

Open Access Week 2011: #OAWeek

Today marks the launch of Open Access Week. This is a global event, now in its 5th year, which promotes Open Access as a new norm in scholarship and research.  As described in last year’s summary about the event:

“Open Access” to information – the free, immediate, online access to the results of scholarly research, and the right to use and re-use those results as you need – has the power to transform the way research and scientific inquiry are conducted. It has direct and widespread implications for academia, medicine, science, industry, and for society as a whole. 

This year’s summary of the campaign encourages people to become actively involved with the campaign:

Every year, research funders, academic institutions, libraries research organizations, non-profits, businesses, and others use Open Access Week as a valuable platform to convene community events as well as to announce significant action on Open Access.  The Week has served as a launching pad for new open-access publication funds, open-access policies, and papers reporting on the societal and economic benefits of OA.

I agree that it is important to become actively involved in open access activities – being a passive supporter can mean that one is consuming open resources provided by others, rather than actively engaging in the transformation of the research culture which the campaign is seeking to bring about. I’m looking forward to seeing the #OAWeek tweets (which are archived on TwapperKeeper) in which people will be describing what they are doing. In this post I’ll describe how I have engaged in open access in the past and how I am supporting the Open Access Week 2011 campaign, beyond registering on the Open Access Week web site.

Getting Involved

Back in 2005, in a paper entitled “Let’s Free IT Support Materials!”, I argued that support service departments, which should include libraries as well as IT Service departments, should be taking a lead in embracing openness by making training materials, slides and documentation available with a Creative Commons licence.

For several years I have been making my slides available under a Creative Commons licence. As an example, on Thursday I will be giving a talk entitled “What’s On the Technology Horizon?” at the ILI 2011 conference. The talk will describe work commissioned by the JISC Observatory (which is being provided by UKOLN and CETIS) which has identified technological developments which are expected to have an impact on the higher education sector over the next four years or so. It is pleasing that open content has been listed as a development which is expected to have a significant impact across the sector, with a time-to-adoption horizon of one year or less. It is clearly appropriate that my slides for the talk are provided with a Creative Commons licence.

It should also be noted that permission will be granted for live-blogging and live streaming of the talk, with permission being clarified on the second slide of the presentation, as illustrated.

The licence to share live presentations is one aspect of UKOLN’s long-standing involvement in organising and participating in amplified events and in advising others of best practices in the provision of such events.  We are currently developing guidelines for amplified events as part of our involvement  in the JISC-funded Greening Events II project.

In addition to describing possible environmental benefits which can be gained by enabling a remote audience to participate in events, we will also describe additional benefits which can be gained by adopting a more open approach to events, as described by my colleague Marieke Guy in a post on Openness and Event Amplification.

So far, however, I have summarised ways in which I and colleagues at UKOLN have supported differing aspects of open access in the past. I feel there is a need at the start of Open Access Week 2011 to outline new and additional ways in which the benefits of open access can be further enhanced.

A change to the licence conditions for posts on this blog was announced on 12 January 2011 in a post entitled Non-Commercial Use Restriction Removed From This Blog, which described how:

The BY-NC-SA licence was chosen [in 2005] as it seemed at the time to provide a safe option, allowing the resources to be reused by others in the sector whilst retaining the right to commercially exploit the resources. In reality, however, the resources haven’t been exploited commercially and increasingly the sector is becoming aware of the difficulties in licensing resources which excludes commercial use, as described by Peter Murray-Rust in a recent post on “Why I and you should avoid NC licence“.

I have therefore decided that from 1 January 2011 posts and comments published on this blog will be licenced with a Creative Commons Attribution-ShareAlike 2.0 licence (CC BY-SA).

However the share-alike clause can also create difficulties for others wishing to reuse the content. Although I would encourage others to adopt a similar Creative Commons licence, I realise that this may not always be achievable. So rather than requiring this as part of the licence, I will now simply encourage others who use posts published on this blog to make derived works available under a Creative Commons licence, and limit the licence conditions to a CC-BY licence which states that:

You are free:

  • to copy, distribute, display, and perform the work
  • to make derivative works
  • to make commercial use of the work

Under the following conditions:

  • Attribution — You must give the original author credit.

In addition to using this licence for blog posts from 24 October 2011, I also intend to use this licence for presentations I will give in the future – and, as can be seen from the above image, the licence has been applied to the resources I will use in my talk at the ILI 2011 conference later this week.

That’s how I’m involved with Open Access Week 2011. What are you doing?

Posted in openness | 2 Comments »

JISC Funding to Enhance Access to UK University Web Sites

Posted by Brian Kelly (UK Web Focus) on 21 October 2011

Yesterday’s blog post asked “Are University Web Sites In Decline?“. Although some anecdotal evidence suggests that in a number of cases this may not be so, it is also true that there is increasing competition for ‘eyeballs’, with many organisations being prepared to invest resources in search engine optimisation and related techniques in order to maximise traffic to their Web sites.

The JISC is looking to enhance the visibility of Web sites within the .ac.uk domain. JISC is inviting those working in the provision of institutional Web sites to consider bidding for participation in a community of projects that will enhance your .ac.uk web site to make Web site resources more easily found. These projects will innovate around the “JISCLINKU” Toolkit product, which is currently under development and will help to exploit the benefits provided by a variety of linking and related strategies. Funded projects will beta test and advance the JISCLINKU Toolkit and help make it ready for use across the wider sector for the start of the next academic year (2012-13). A key institutional driver for this work will be raising the visibility of institutional Web sites to attract the intake of new fee-paying students.

Bids are due by noon on 21 November 2011. The successful projects would be expected to start as part of the community work in February 2012.

The official call is available on the JISC web site, which includes the bidding template and marking criteria. In addition a shortened (un-official) version is available at the link below so you can easily get an overview of what will be expected of prospective projects: http://code.google.com/p/jisclinku/wiki/CallForProjects

If there are further questions, ideas or intentions for how you can be part of this innovative community of projects, please contact David F. Flanders (JISC Programme Manager), d.flanders@jisc.ac.uk / mob: 07891 50 1194 / skype: david.flanders

Posted in IWMC | Leave a Comment »

Are University Web Sites in Decline?

Posted by Brian Kelly (UK Web Focus) on 20 October 2011

Are Web Sites In Decline?

Are organisational Web sites in decline? Earlier this year an article suggested that this was the case for a number of well-known companies, such as Coca Cola (“Coca Cola’s website traffic is down more than 40% in just 12 months“). The article cited a study by Webtrends published in March 2011 which revealed that static or declining website traffic is affecting the majority of Fortune 100 web sites, with 68% experiencing negative growth over the past 12 months and a 24% average decrease in unique visitors.

Are we seeing similar trends across University Web sites?

Analysis of Usage Trends for Russell Group Universities

A recent tweet from Martin Hawksey suggested that Google’s DoubleClick Ad Planner service could be useful in providing usage statistics for University Web sites. This tool has been used to provide a graph of estimated usage of the twenty Russell Group Universities for the period from March 2010 to August 2011. The findings are displayed in the following table.

Institution | Unique visitors (est. cookies) | Unique visitors (users) | Reach | Page views | Total visits | Avg visits per cookie | Avg time on site
1. University of Birmingham | 390K | 260K | 0.0% | 9.1M | 920K | 2.4 | 13:00
2. University of Bristol | 160K | 110K | – | 1.1M | 270K | 1.6 | 6:00
3. University of Cambridge | 1.3M | 1M | 0.1% | 19M | 2.4M | 1.8 | 10:50
4. Cardiff University | 220K | 150K | – | 3.9M | 570K | 2.6 | 7:50
5. University of Edinburgh | 680K | 520K | – | 13M | 1.4M | 2.1 | 10:50
6. University of Glasgow | 420K | 300K | – | 6.8M | 900K | 2.1 | 10:40
7. Imperial College | 200K | 140K | – | 2.4M | 310K | 1.6 | 8:00
8. King’s College London | 1.3M | 1M | – | 19M | 2.4M | 1.8 | 10:50
9. University of Leeds | 580K | 390K | – | 15M | 1.5M | 2.5 | 11:40
10. University of Liverpool | 350K | 240K | 0.0% | 8.9M | 1.1M | 3 | 11:50
11. LSE | 470K | 350K | – | 5.6M | 860K | 1.8 | 9:40
12. University of Manchester | 610K | 430K | – | 14M | 1.6M | 2.6 | 11:40
13. Newcastle University | 390K | 260K | – | 6.2M | 1M | 2.6 | 9:50
14. University of Nottingham | 470K | 320K | – | 13M | 1.4M | 3.1 | 13:00
15. University of Oxford | 1.5M | 1.1M | – | 21M | 2.7M | 1.9 | 9:50
16. Queen’s University Belfast | 220K | 140K | – | 6.7M | 770K | 3.5 | 13:00
17. University of Sheffield | 470K | 320K | 0.0% | 7.4M | 1.1M | 2.3 | 8:00
18. University of Southampton | 430K | 290K | 0.0% | 6.6M | 910K | 2.1 | 8:50
19. University College London | 830K | 560K | – | 9.9M | 1.6M | 1.9 | 8:40
20. University of Warwick | 430K | 320K | – | 6.8M | 980K | 2.3 | 7:50

(Reach figures were only reported for some institutions; “–” indicates the figure was not given.)

It should be noted that, as described on an Ad Planner help page, the figures are based on “information from a variety of sources including anonymized, aggregated Google Toolbar data from users who have opted in to enhanced features, publisher opt-in anonymous Google Analytics data, opt-in external consumer panel data, and other third-party market research“.
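The ‘Avg visits per cookie’ column is simply total visits divided by the estimated number of unique cookies, and a pages-per-visit figure can be derived in the same way. The short sketch below recomputes these ratios for three illustrative rows from the table above; since the published figures are rounded estimates, the derived values are themselves only approximate.

```python
# Illustrative figures taken from the Ad Planner table above (rounded estimates).
estimates = {
    # institution: (unique visitors as estimated cookies, total visits, page views)
    "University of Birmingham": (390_000, 920_000, 9_100_000),
    "University of Cambridge": (1_300_000, 2_400_000, 19_000_000),
    "University of Oxford": (1_500_000, 2_700_000, 21_000_000),
}

for institution, (cookies, visits, page_views) in estimates.items():
    visits_per_cookie = visits / cookies   # approximately reproduces the 'Avg visits per cookie' column
    pages_per_visit = page_views / visits  # a derived figure not shown in the table
    print(f"{institution}: {visits_per_cookie:.1f} visits per cookie, "
          f"{pages_per_visit:.1f} pages per visit")
```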

Using Google Trends To Make Comparisons

In order to see if the findings were reproducible using other tools the Google Trends service was also used. The findings are depicted below, with trends since late 2008 being shown in groups of five institutions.

Trends across Oxford, Cambridge, UCL, Edinburgh and Southampton

Trends across Birmingham, Bristol, Cardiff, Glasgow and Imperial College

Trends across KCL, Leeds, Liverpool, LSE and Manchester

Trends across Newcastle, Nottingham, Queen’s University Belfast, Sheffield and Warwick

It can be seen from these comparisons that similar trends are taking place across all twenty Russell Group Universities, with the possible exception of the University of Warwick, which did not see a drop in usage in 2009, although after this its usage pattern followed that of the other institutions.

It should be noted that the Google Trends site does warn that “several approximations are used when computing these results” and that “All traffic statistics are estimates“. The site goes on to add that “The data Trends produces may contain inaccuracies for a number of reasons, including data-sampling issues and a variety of approximations that are used to compute results” and that “you probably wouldn’t want to write your Ph.D. dissertation based on the information provided by Trends“! So perhaps it would be inappropriate to make policy decisions based on this data. But if no additional data is available, how else can we make evidence-based policy decisions? And as described in a post on “University Web Sites Cost Money!” we know that the Daily Telegraph has a record of publishing articles such as “Universities spending millions on websites which students rate as inadequate“ based on flawed interpretation of statistics gathered using Freedom of Information requests. Unless and until universities are willing to openly publish Web site usage statistics we need to be prepared to accept that alternative metrics may well be used.

Summary

Whilst the evidence suggests that we are seeing a slight decrease in the amount of traffic to institutional Web sites for Russell Group universities, there is additional evidence which suggests that the same group of twenty UK universities are seeing increased activity across the institutions’ Facebook sites.

As summarised in a recent post entitled Is It Time To Ditch Facebook, When There’s Half a Million Fans Across Russell Group Universities?, “in a period of nine months we have seen an increase in the number of ‘likes’ for the twenty UK Russell Group Universities of over 274,000 users or almost 100% with the largest increase, of over 155,000 occurring at the University of Oxford“. The post goes on to describe how we are “seeing a huge increase in the number of Facebook ‘likes’ with all of the institutions seeing a growth of between 33% and 345%“.

The findings of declining usage of institutional Web sites could be used to question the importance of those working in institutional Web teams. However the evidence from Facebook suggests that certain services initially provided on institutional Web sites seem to have migrated to popular social web services – and clearly there will be a need to manage the content and interactions with potential students wherever such interactions take place. For example, a couple of days ago a post on Mashable described 7 Ways Universities Are Using Facebook as a Marketing Tool, which included providing virtual tours; demonstrating pride in the institution; marketing ‘shwag‘; supporting alumni activities; sharing departmental content; reaching out to potential students; and exploiting geo-location services – all activities which will require institutional support.

The importance of the social web across higher education has also been identified in an infographic which was launched in August 2011 in a post entitled “How colleges and universities have embraced social media” on the US-based Schools.com service (and embedded in this post).

This article suggests that the US higher education system seemed initially reluctant to embrace social media:

Universities are often at the forefront of intellectual thought, but they have been known to lag behind the rest of society when it comes to learning and adopting new technologies. Such has certainly been the case with social media technologies. In fact, so reluctant were universities to adopt social media on campus that in 2007, only about half of colleges reported social media usage.

but have recently recognised the benefits which can be gained:

According to a recent report from the University of Massachusetts, however, colleges have finally caught on; in 2011, 100% of universities are using at least one form of social media–and they are reporting that it’s now an important and successful piece of their outreach efforts. Check out the below infographic to learn more about how colleges have been slowly going social.

The Mashable blog is in agreement with these views of the current importance of social media to US Universities. A post entitled 6 Best Practices for Universities Embracing Social Media suggests that:

For universities, deciding to use social media is a no-brainer. The 18- to 24-year-old college student demographic is all over the social web, and its younger counterpart (the high school crowd) is equally immersed.

and goes on to describe how:

Already, many schools have leveraged social media in a big way. In fact, a recent study showed that an astounding 100% of universities have a social media presence. From luring in potential new students with admissions blogs and creative use of location-based services like SCVNGR, to keeping alumni engaged via dynamic, content-rich Facebook and Ning communities, to informing students about campus offerings through Twitter feeds and YouTube videos, it’s clear that universities recognize the importance of social media.

But in addition to the popularity of social web sites, another possible reason for the lack of growth in usage of institutional Web sites may be a consequence of the difficulties in navigating such sites on mobile devices. In the US a Read/Write Web article informs us that “7% of U.S. Web Traffic [is] From Handheld Devices“. How many institutional Web sites provide easy-to-use interfaces on mobile devices, I wonder?

Implications

There is a danger that the evidence of a decline in traffic to institutional Web sites could be used to justify cuts in levels of funding for institutional Web teams. However additional evidence suggests that users may simply be making use of alternative sources of information and interaction, or may be using mobile devices which provide cumbersome experiences when accessing sites which have not been configured to provide optimal interfaces for small screens, the lack of a mouse and other characteristics of mobile devices.

I think it would therefore be a mistake to argue that there is a decrease in interest in, or relevance of, online services which may initially have been provided on institutional Web sites. Rather I feel we are seeing a move towards a variety of cloud-based services. The high-profile services may include Facebook, together with social media sharing services such as YouTube and iTunes (for which usage across Russell Group universities has been documented in posts on How is the UK HE Sector Using YouTube? and What are UK Universities doing with iTunes U?). But in addition we are also seeing policy and funding decisions being made by funding bodies such as HEFCE which will see a move towards cloud-based services more closely aligned with the requirements of the UK’s higher education sector, with the migration of the Jorum service from a project to a service role providing a good example of how key online services traditionally hosted within the institution may be more cost-effective if hosted externally but developed with the needs of institutions in mind.

How should the evidence, such as the examples I’ve listed in this post, be used to inform institutional policies, I wonder? And might there be a need to make changes to existing Web team structures, if responsibilities for managing institutional Web sites are separate from managing content and interactions hosted outside the institution?

Posted in Evidence, IWMC | 14 Comments »

Metrics, This Time For Web Accessibility

Posted by Brian Kelly (UK Web Focus) on 17 October 2011

Metrics For Web Accessibility

A recent tweet from @LouWoodley alerted me to a post entitled “Here is how you can game Klout“. The post described how an automated bot seems to have been successful in gaining a high ranking on Klout: a clear example of the limitations of automated use of metrics in seeking to establish some kind of value – in this case related to the impact, outreach and influence of individuals using Twitter.

In this post, rather than revisiting a discussion of the pros and cons of metrics for analysing interactions on the social web, I’d like to draw attention to a call for short papers for an “Online Symposium on Website Accessibility Metrics“. The call for papers describes how:

The W3C/WAI Research and Development Working Group (RDWG) will hold an online symposium on measuring website accessibility and invites your contribution. The goal of this symposium is to bring researchers and practitioners together to scope the extent and magnitude of existing website accessibility metrics, and to develop a roadmap for future research and development in the field.

The background to this area of work describes how:

Measuring the level of web accessibility is essential for assessing accessibility implementation and improvement over time but finding appropriate measurements is non-trivial. For instance, conformance to the Web Content Accessibility Guidelines (WCAG) is based on 4 ordinal levels of conformance (none, A, AA, and AAA) but these levels are too far apart to allow granular comparison and progress monitoring; if a website satisfied many success criteria in addition to all Level A success criteria, the website would only conform to level A of WCAG 2.0 but the additional effort would not be visible.

and goes on to admit that:

Using numerical metrics potentially allows a more continuous scale for measuring accessibility and, to the extent that the metrics are reliable, could be used for comparisons. However, it is unclear how metrics can be developed that fulfill requirements such as validity, reliability, and suitability. For example, is a web page with two images with faulty text alternatives out of ten more accessible than another page with only one image with a faulty text alternative out of five? While such a count may be a fairly simple and reliable metric it is generally not a valid reflection of accessibility without additional information about the context in which the faults occur, but identifying this context may introduce complexity, reduce reliability, and raise other challenges.

The online symposium invites submissions of papers which will “constitute the basis from which to further explore a research and development roadmap for website accessibility metrics”. Papers are invited which discuss the relationship between two approaches: (1) measuring ‘accessibility in terms of conformance’ with guidelines such as WCAG or Section 508 and (2) ‘accessibility in use’ metrics that reflect the impact that accessibility issues have on real users, regardless of guidelines. Papers are invited to address the following types of questions:

  • What sort of techniques can we explore to combine metrics that are computed automatically, semi-automatically (with input from humans), and manually (where the judgement is made by humans, even if with input from software)?
  • How can we build an infrastructure (such as IBM Social Accessibility) that allows experts to store accessibility information (metadata) for use with metrics that are computed during subsequent audits?
  • What metrics, or combination of metrics, can be used as predictors of accessibility?
  • How shall we characterize the quality of such predictors in terms of properties such as reliability, validity, sensitivity, adequacy and adaptability?
  • Which approaches can be embraced for validating, benchmarking, and comparing web accessibility metrics?
  • How should we tackle metrics in web applications with dynamic content?

Discussion

Several Web accessibility researchers and practitioners and I have, over the years, written a number of papers in which we have described reservations regarding use of the WAI guidelines as a definitive benchmark for Web accessibility. Our work began in 2004 with a paper on “Developing A Holistic Approach For E-Learning Accessibility” and a year later a follow-up paper on “Implementing a Holistic Approach to E-Learning Accessibility” won a prize for the best research paper at the ALT-C 2007 conference. Since then we have published several further papers, with contributions from a number of Web accessibility researchers and practitioners, which have developed our ideas further. In brief we might describe our ideas with the Twitter-friendly summary: “Accessibility is about people & their content; it is not an inherent characteristic of a resource“.

I therefore welcome the invitation for ideas on “ ‘accessibility in use’ metrics that reflect the impact that accessibility issues have on real users, regardless of guidelines“.

I should say that I do feel that there is still a need to provide more sophisticated Web accessibility rankings that go beyond the WAI A/AA/AAA scores (which, in a university context, seem to have parallels with the first, upper second, lower second and third class rankings we have for undergraduate achievements). We should be able to usefully differentiate between Web pages (and Web sites) in which 95% of images have alt equivalents and those in which only 5% do – currently both such pages will be treated equally as a WCAG failure. Similarly it would be useful to rank the importance of a failure to conform with the WCAG guidelines (for example, missing alt text on images should be regarded as more significant than an hr element without the required terminating / in an XHTML document, which breaks HTML conformance requirements).
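As a concrete illustration of the kind of granular, resource-based measure discussed above – and only a minimal sketch, not a metric endorsed by W3C/WAI – the following Python snippet uses the standard library HTML parser to report the proportion of img elements on a page that carry a non-empty alt attribute, so a page with 95% coverage can be distinguished from one with 5%:

```python
from html.parser import HTMLParser

class AltCoverage(HTMLParser):
    """Count img elements and how many of them carry a non-empty alt attribute."""
    def __init__(self):
        super().__init__()
        self.images = 0
        self.with_alt = 0

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            self.images += 1
            alt = dict(attrs).get("alt")
            if alt and alt.strip():
                self.with_alt += 1

def alt_coverage(html):
    """Return the proportion of images with alt text (1.0 if the page has no images)."""
    parser = AltCoverage()
    parser.feed(html)
    return parser.with_alt / parser.images if parser.images else 1.0

# Example: one image with alt text out of two gives a coverage score of 0.5.
print(alt_coverage('<img src="a.png" alt="Chart of results"><img src="b.png">'))
```

A weighted variant could multiply each type of failure by a severity factor, along the lines of the alt text versus hr example above.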

But prioritising analysis and subsequent actions based on the resource still focuses on the content in isolation from the context, the target audience and the purpose of the Web resource or service. Such an approach fails to consider blended approaches (“blended accessibility for blended learning”, as we described in a presentation back in 2006) or the provision of multiple access routes to content (such as content published in the Guardian which, as described in a recent post, can be accessed via the Guardian Facebook, Kindle, iPhone or Android apps, on the Guardian Web site and via Guardian RSS feeds).

The “Online Symposium on Website Accessibility Metrics“, however, provides an opportunity to revisit previous work from a new perspective: the role of metrics. In general metrics may be needed for several purposes including:

  • To measure conformance to agreed benchmarks.
  • To identify good and bad practices which can lead to sharing experiences and changing working practices.
  • To penalise failures to reach acceptable levels of conformance.
  • To understand the limitations of benchmarks and to identify ways in which they may be refined.

Accessibility checking tools such as Bobby, WebXact and WAVE tended to focus on conformance with the WCAG guidelines and, in some cases, the US Section 508 guidelines, which had many similarities to WCAG.

However, an initial survey published in 2002 and a follow-up survey in 2004 of conformance with WCAG guidelines for UK University home pages, carried out using Bobby, were used to demonstrate the limitations of such tools and the underlying guidelines they were based on: the finding that only three universities in 2002 and nine universities in 2004 had a WCAG AA conformance rating (based on automated testing which will, in any case, tend to give a higher rating) demonstrated failures in the guidelines rather than failures across the UK higher education sector.

This example illustrates how the tools were used to demonstrate flaws in the underlying guidelines, and why it would, in some cases, be inappropriate to use the metrics to highlight good or bad practices or to penalise lack of conformance.

What Is To Be Done For Web Accessibility Metrics?

Whilst our accessibility research highlighted limitations in WAI’s approaches, we did not feel it would be appropriate to abandon efforts to develop guidelines to enhance access to digital resources and services for people with disabilities. We proposed a less mechanistic approach, which we described as a “holistic approach to Web accessibility”, first set out in a paper on “Implementing A Holistic Approach To E-Learning Accessibility“ and subsequently refined and developed in several further papers, including “Holistic Approaches to E-Learning Accessibility“.

Our approaches were validated with the launch of BS 8878, the Code of Practice for Web Accessibility. As described in a post on BS 8878: “Accessibility has been stuck in a rut of technical guidelines”, BS 8878 acknowledges the complexities of Web accessibility and provides a 16-step plan which identifies the necessary steps for enhancing accessibility, whilst acknowledging that it may not always be possible to implement best practices at all stages.

For me, therefore, future Web accessibility metrics work should focus on metrics associated with the 16 stages of BS 8878, rather than on the single stage which addresses technical guidelines such as WCAG.

In a post on Web Accessibility, Institutional Repositories and BS 8878 I described how the UK’s BS 8878 Code of Practice for Web Accessibility might be applied in the context of the large numbers of PDF resources hosted in institutional repositories. Some suggestions for metrics associated with the various stages described in the post are given below.

  1. Note any platform or technology preferences:
    Advice: PDFs may not include accessibility support.
    Metrics: Monitor numbers of resources provided in PDF format and measure changes over time (a minimal sketch of such monitoring is given after this list).
  2. Define the relationship the product will have with its target audience:
    Advice: The paper will be provided at a stable URI.
    Metrics:  Monitor changes in URIs for resources and report on any changes.
  3. Define the user goals and tasks:
    Advice: Users will use various search tools to find resources. Papers will then be read on screen or printed.
    Metrics: Monitor terms used in searches and use to identify possible usability problems.
  4. Consider the degree of user experience the web product will aim to provide:
    Advice: Usability of the PDF document will be constrained by publisher’s template. Technical accessibility will be constrained by workflow processes.
    Metrics: Feedback on enhancements to the template should be made to the publisher and records kept on implementation of recommendations.
  5. Consider inclusive design & user-personalised approaches to accessibility:
    Advice: Usability of the PDF document will be constrained by publisher’s template. Technical accessibility will be constrained by workflow processes.
    Metrics: Records should be kept on personalised preferences selected and feedback should be gathered on the personalised experiences.
  6. Choose the delivery platform to support:
    Advice: Aims to be available on devices with PDF support including mobile devices.
    Metrics: Records on usage of platforms should be kept and used to inform and possibly modify the policy decision.
  7. Choose the target browsers, operating systems & assistive technologies to support:
    Advice: All?
    Metrics: Selection of target browsers may be determined by popularity of such browsers.  There will therefore be a need to define how to measure browser usage.
  8. Choose whether to create or procure the Web product:
    Advice: The service is provided by repository team.
    Metrics: Not appropriate.
  9. Define the Web technologies to be used in the Web product:
    Advice: HTML interface to PDF resources.
    Metrics: Not appropriate.
  10. Use Web guidelines to direct accessibility web production:
    Advice: HTML pages will seek to conform with WCAG 2.0 AA. PDF resources may not conform with PDF accessibility guidelines.
    Metrics: Use of automated tools to measure conformance with best practices for HTML and PDF resources. Deviations from best practices should be documented and remedial action taken, if felt to be appropriate.
  11. Assure the Web product’s accessibility through production (i.e. at all stages):
    Advice: Periodic audits of PDF accessibility planned.
    Metrics: This is the key area: the findings of the periodic audits provide the metrics.
  12. Communicate the Web product’s accessibility decisions at launch:
    Advice: Accessibility statement to be published.
    Metrics: Provide a feedback tool for the accessibility statement and ensure that issues raised are addressed.
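To make the stage 1 metric concrete, a minimal sketch is given below of how such monitoring might be automated. This is a rough illustration only, assuming a hypothetical directory containing the repository’s files; it simply counts PDF and non-PDF resources and appends the figures to a log file so that changes over time can be charted.

# A minimal sketch: count PDF vs. other resources in a repository export
# and append the figures to a log so changes can be tracked over time.
# The directory path is a hypothetical example, not a real repository layout.
import os
import time

REPOSITORY_EXPORT = "/data/repository/files"   # hypothetical location
LOG_FILE = "pdf-metrics.csv"

def count_resources(root):
    pdfs, others = 0, 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.lower().endswith(".pdf"):
                pdfs += 1
            else:
                others += 1
    return pdfs, others

if __name__ == "__main__":
    pdfs, others = count_resources(REPOSITORY_EXPORT)
    with open(LOG_FILE, "a") as log:
        log.write("%s,%d,%d\n" % (time.strftime("%Y-%m-%d"), pdfs, others))
    print("PDF resources: %d, other resources: %d" % (pdfs, others))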

Does this seem to be an appropriate response to the question “What should be done with Web accessibility metrics?”

Posted in Accessibility | 1 Comment »

JISC Call For Proposals for Implementations of the Linking You Toolkit

Posted by Brian Kelly (UK Web Focus) on 12 October 2011

The JISC have recently announced the JISC Grant Funding Call 16/11 for the JISC Digital Infrastructure Programme. The call has six strands in the areas of 1) Resource Discovery; 2) Enhancing the Sustainability of Digital Collections; 3) Research Information Management; 4) Research Tools; 5) Applications of the Linking You Toolkit and 6) Access and Identity Management.

Strand 5 has direct relevance to recent posts on this blog related to best practices for institutional Web sites to enhance access to institutional resources. In a recent guest post on this blog Dave Flanders, JISC Programme Manager, asked readers to Lend Me Your Ears Dear University Web Managers!. The post highlighted the potential benefits which institutions should be able to gain from implementing the recommendations identified by the JISC-funded Linking You project.  The following benefits were identified:

Ten Benefits to Institutions

Why should you mint the suggested set of ‘linking you’ URLs for your institution?  We recognise this work of minting and maintaining the redirects would be ‘yet another thing to deal with’ across your complex and growing .ac.uk websites, however we think there is potential value (both in time savings and value add) we could all communally benefit from in considering these URL conventions. Below we list the benefits which we think will result if we can get multiple institutions to start adopting this syntax and vocabulary, along with some simple suggestions for ways of achieving these benefits:

  1. Better SEO: As a sector we can go to Google and say, “Hi we are the University sector and we think you should give priority to these URLs when people are searching for things like courses.”
  2. Management of robots.txt files: If a group of Universities started adopting these URL syntaxes, we could save time and money by generating a common robots.txt for all of us to use, so we don’t each have to write our own robots.txt file. This would also enhance analytics across the sector as we could understand patterns of clicking across all .ac.uk websites.
  3. A simple mapping tool: An Apache mod_rewrite (or IIS, nginx, etc. equivalent) tool that will do most of this work for you could be written once and support many!
  4. Improve discovery: Clear human-readable URLs are now integral to browser search and lookup technology and are becoming essential if you want to make your website easy for a student to use.
  5. Predictable, consistent, aggregations: It will be easier to build tools on behalf of the entire sector because people will know where to go for the data. See the reasons below (nos. 6, 7 and 8) for immediate experimentation JISC is already undertaking, and just think what else could be leveraged if we could bring our data together:
  6. Provision  of  a course catalogue: As many of you know JISC is actively encouraging universities to create XCRI feeds for their courses.  If everyone producing an XCRI feed put it at the following URL http://www.foo.ac.uk/courses/xcri/ we’d lay the groundwork for persistent, structured course data that developers (many of them students) could use to build new and engaging apps and websites that we could all benefit from.
  7. Provision of news feed aggregators: If we all knew where all the corporate news feeds were, e.g. http://foo.ac.uk/news/rss, we could create a UK University News Aggregation Service where the sector could have their news published on demand, let alone text mining goodness and other filters for highlighting key news developments across all higher and further education institutions (a minimal sketch of such an aggregator is given after this list).
  8. A sector wide directory: Common information such as institutional policies, contact information, news, about, events, etc. could be aggregated into a searchable directory; useful to both the public and HEI data geeks.
  9. Managing your assets: Your .ac.uk addresses can be understood as your ‘virtual real estate’. Adopting a well-formed, widely understood and persistent ‘portfolio’ of core web addresses will help University Web Managers manage these increasingly valuable assets.
  10. Use ‘Cool URLs’: Simple, stable, manageable URLs make sense. They are recommended by the W3C, make Web Managers’ lives easier and keep users happy, too.
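To illustrate benefit 7, here is a minimal sketch of how such an aggregator might work, assuming institutions publish their news feeds at the suggested conventional location. The domain names used below are placeholders for illustration rather than real endpoints.

# A minimal sketch: fetch the conventional /news/rss feed from a list of
# institutional domains and print the latest headlines from each.
from urllib.request import urlopen
from xml.etree import ElementTree

# Hypothetical institutional domains, used purely for illustration.
INSTITUTIONS = ["www.foo.ac.uk", "www.bar.ac.uk"]

def latest_headlines(domain, limit=3):
    # Fetch the RSS feed from the suggested 'Linking You' location.
    url = "http://%s/news/rss" % domain
    try:
        feed = ElementTree.parse(urlopen(url, timeout=10))
    except Exception as exc:
        return ["[no feed found at %s: %s]" % (url, exc)]
    return [item.findtext("title", default="(untitled)")
            for item in feed.iter("item")][:limit]

for domain in INSTITUTIONS:
    print(domain)
    for title in latest_headlines(domain):
        print("  -", title)
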
The Call for proposals has now been published:

Applications of the Linking You Toolkit – Up to 10 projects investigating the implementation and improvement of the ‘Linking You Toolkit’ for the purpose of demonstrating the benefits that management of institutional URLs can bring to students, researchers, lecturers and other University staff. Funding of up to £140,000 is available for this work.

Note that a commentable document about this particular call has also been published, which describes how “bidders are highly encouraged to contact the JISC Programme Manager responsible for the Call prior to bidding. Questions on the applicability of the project to the Call as well as the process by which JISC has the community mark proposals and select successful proposals is welcome and encouraged.”

The UK higher/further education sector has a well-established and well-connected community of practice for those involved in the provision of institutional Web management services. This funding call provides an opportunity for the community to submit proposals which can demonstrate the value of shared approaches across the sector.  I hope we will see plenty of high quality proposals submitted.

Posted in Web Server | Tagged: | Leave a Comment »

When Trends Can Mislead: The Rise, Fall and Rise of Apache

Posted by Brian Kelly (UK Web Focus) on 11 October 2011

Back in April 2008 in a post entitled “The Rise and Fall of Apache?” I contrasted the fall in the numbers of Web servers running on the Apache Web server software with the corresponding rise in use of the Microsoft Web server software, as illustrated below.

But how have things changed over the past three years? A recent email alert from Netcraft has provided an answer. As can be seen from the October 2011 Web Server Survey (illustrated below) since 2009 there has been a steady decline in usage of Microsoft server software and a corresponding increase in use of Apache.

 

One lesson from this is that trends won’t always be an accurate predictor of future developments. In addition, when I published the initial post Mike Nolan, Richard Cunningham and others suggested that the overall figures for Web server usage were not necessarily meaningful; rather, it would be more appropriate to show the trends for active Web server usage.

Those comments, all of which were made on the day the post was published, were valuable in informing me of flaws in my interpretation of the data. The timeliness of the responses was also helpful in minimising the danger that others might have read the post and been unaware of the flaws in the interpretation of the data. I think that illustrates the value of providing commentable articles and of minimising barriers to commenting (note there are no approval processes in place which could delay publication of comments).

So now we should be able to say with some confidence that the Apache server is well-established as the leading tool for providing Web sites around the world. I suspect that this will also be true across the UK higher education sector. And although we sometimes talk of the value of platform-independent solutions, there are times when it may be legitimate to develop solutions for particular platforms. I am particularly interested in ways in which institutions may be able to implement recommendations provided by the Linking You Toolkit developed at the University of Lincoln.

One of the recommendations was that:

attention needs to be given to the way institutions transition to a shared ontology for the sector. Research needs to be done that examines and recommends strategies for migrating from existing and legacy URI structures to a model of best practice. HTTP 3xx status codes are at the heart of this.

Might appropriate strategies for the development of shared approaches be based on developments for the Apache server which, it seems, is likely to be widely deployed across the sector?
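Whatever server platform is used, the redirects themselves can be checked independently. The sketch below is a rough illustration only, assuming a hand-maintained mapping of legacy URLs to their intended ‘Linking You’ targets (the URLs shown are hypothetical); it simply reports whether each legacy URL returns an HTTP 3xx response pointing at the expected new location.

# A minimal sketch: check that legacy URLs issue HTTP 3xx redirects
# to the expected 'Linking You' locations. The mappings are hypothetical.
import http.client
from urllib.parse import urlsplit

MAPPINGS = {
    "http://www.foo.ac.uk/prospectus/ugcourses.html": "http://www.foo.ac.uk/courses/",
    "http://www.foo.ac.uk/pressoffice/index.html": "http://www.foo.ac.uk/news/",
}

def check_redirect(legacy_url):
    # http.client does not follow redirects, so the 3xx response is visible.
    parts = urlsplit(legacy_url)
    conn = http.client.HTTPConnection(parts.netloc, timeout=10)
    conn.request("GET", parts.path or "/")
    response = conn.getresponse()
    location = response.getheader("Location", "")
    conn.close()
    return response.status, location

for legacy, target in MAPPINGS.items():
    status, location = check_redirect(legacy)
    if 300 <= status < 400 and location == target:
        print("OK      %s -> %s" % (legacy, location))
    else:
        print("CHECK   %s returned %s %s" % (legacy, status, location))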

Posted in Evidence | 1 Comment »

Thoughts on “The Future of the Past of the Web” Event (#fpw11)

Posted by Brian Kelly (UK Web Focus) on 10 October 2011

On Friday 7th October 2011 I attended a one-day event on “The future of the past of the web“. The event, which was organised by the British Library, the Digital Preservation Coalition (DPC) and the JISC, was the third joint Web archiving workshop, the previous two workshops having been held in 2006 and 2009.

I have had an interest in this area for some time, having given a talk back in 2002 on “Archiving The UK Domain and UK Web Sites: What Are The Issues?” at a DPC seminar on “Web-archiving: managing and archiving online documents and records“. The Web archiving world has changed significantly since I gave my talk and, indeed, since the first two workshops. As a number of people commented, many of those involved in Web archiving initiatives are no longer primarily focussed on archiving conventional Web ‘pages’ – rather the sector is facing the challenge of archiving a much more dynamic environment, with the Social Web now providing significant content which social historians of the future will wish to analyse in order to make sense of today’s online (and offline) environment.

The changes in emphasis can also be seen in the development of end user services which can help to make the importance of Web archiving more obvious to the wider community. In the opening plenary talk Herbert Van de Sompel described Memento, an initiative which is looking to “add time to the Web” through developments which build on existing web protocols including HTTP and content negotiation.

A Memento plugin for Firefox is available which enables end users to gain an understanding of the benefits which such developments can provide. I was also pleased to hear that a Memento Browser is available for Android mobile devices. For those who may not be able to install such applications, Memento’s capabilities can also be seen by using the Internet Archive’s Wayback Machine. As can be seen from the accompanying image you can view the BBC News Web site for October 2008, and perhaps reminisce about the early days of the financial crisis.
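At the protocol level the idea is straightforward: a client sends an Accept-Datetime header to a Memento ‘TimeGate’, which redirects to the archived copy (the ‘Memento’) closest to the requested date. The sketch below is a rough illustration only; the TimeGate address is a placeholder rather than a real endpoint.

# A minimal sketch of a Memento-style request: ask a TimeGate for the
# archived copy of a page closest to a given date. The TimeGate URL below
# is a hypothetical placeholder, not a real service.
from urllib.request import Request, urlopen

TIMEGATE = "http://timegate.example.org/timegate/"   # hypothetical endpoint
TARGET = "http://news.bbc.co.uk/"

# Ask for the archived copy closest to 1 October 2008.
request = Request(TIMEGATE + TARGET,
                  headers={"Accept-Datetime": "Wed, 01 Oct 2008 12:00:00 GMT"})
response = urlopen(request, timeout=10)   # the TimeGate redirects to a Memento

print("Archived copy:", response.geturl())
print("Memento-Datetime:", response.headers.get("Memento-Datetime"))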

Further examples of rich interactive interfaces to Web archives have been developed to enhance the  UK Web Archive service and, as described by Maureen Pennock and Lewis Crawford, this includes N-Gram visualisations of searches across the archive, tag clouds generated from the General Election 2005 Collection and a 3D wall visualisation across archived collections.

Services provided by the British Library have, of course, always been valued by researchers. But in a talk on “Web Archiving: the State of the Art and the Future” Eric Meyer, Research Fellow at the Oxford Internet Institute, asked us to consider how effective we have been in making social science researchers aware of the potential of Web archives in supporting their research. There is, I feel, a need for further advocacy to ensure that researchers are aware of the ways in which not only archived digital resources, but also the data associated with such archives, can support research interests.

The increasing importance of Web archiving has led to archiving tools and services being developed within the commercial sector in addition to activities led by national libraries and archives, higher education and EU-funded consortia. Mark Williamson was invited to give a presentation at the last minute and described various archiving activities of his company, Hanzo. It was interesting to hear how a well-known multi-national company such as Coca-Cola, which, as might be expected, has well-established processes for archiving physical objects, was slow in recognising the importance of digital archiving, initially of its public Web site and then of its public presence on social web sites including the Coca-Cola Facebook page. Mark also described how APIs are being developed for the Hanzo Web archiving system and how these APIs would be valuable in analysing the data associated with large collections of Web archives. As Mark put it: “The individual pages in a web archive are pretty boring – it’s the Big Data that’s exciting“. It will be interesting to see whether the Hanzo software could provide a solution for Universities which may be interested in archiving their digital presence, especially uses of social web services for which the content cannot be managed through the content management system used to manage the institutional Web presence.

As well as finding the talks at the workshop of interest, it was also interesting to observe the gaps. In the final session Neil Grindley, JISC Programme Manager for digital preservation, asked the panel for their thoughts on standards for web archiving – and found that no one on the panel wished to respond. However in response to my tweet that:

Interesting that nobody wanted to respond to the question about standards for web archiving at #fpw11

Helen Hockx commented that:

@briankelly I agree. Both ISO and BSI have initiated and are going to initiate work on standards related to web archiving.

If the next Web archiving event is held in another two years’ time, it will be very interesting to see what the focus of development work will be. Ten years ago the drive for Web archiving came from national and international bodies. However, as suggested in a tweet posted a few hours ago by Les Carr, who provided a link to a blog post on using EPrints repositories to collect data from Twitter, perhaps we shall see institutions appreciating the value of digital content created by members of the institution, including content hosted outside the institution. Or perhaps, as suggested by the EU-funded Arcomem project, it may be large EU-funded projects which help to preserve today’s cultural memories which are held on online services, including social web services. And although motivated individuals may wish to make use of tools such as Memolane, a “Social Web application that captures all of your memories from different Social Networks like Flickr, Facebook, Twitter, Youtube” highlighted on the Arcomem Website as a “Personal Timemachine for the Social Web“, in reality I don’t think we can leave it to individuals to take responsibility for preserving their own public content. Of course, this raises the issue of ‘walled gardens’ which apparently mean that content cannot be accessed by third parties, and issues such as privacy and copyright. I wonder whether the next Web archiving workshop will have got bogged down by the difficulties which such issues raise, or whether ways of circumventing such difficulties will have been found?

Posted in preservation | 4 Comments »

Things We Can Learn From Facebook

Posted by Brian Kelly (UK Web Focus) on 4 October 2011



Are you pleased, angry or indifferent to Facebook developments (photograph taken at Kelvingrove Museum, Glasgow)

Looking at the Evidence

What is your take on recent Facebook developments? Are you feeling angry, and have perhaps already deleted your Facebook account, as one or two of my Facebook friends have? Or perhaps you are indifferent to, or even unaware of, recent Facebook developments, in which case you are probably just using Facebook as a tool and aren’t taking part in the discussions about Facebook and privacy.

Shortly before a trip to Glasgow this weekend I asked for suggestions on places to visit and things to do. I decided to use my three main social networks in order to gain some anecdotal evidence on current usage of Twitter, Facebook and Google+.

In response to my query I received three responses from four people on Twitter (including one who suggested that I should visit Edinburgh!), 16 comments from fifteen people on Facebook and four comments on Google+.

Whilst that would suggest that Facebook is the most effective social networking environment for me, there is a need to relate the numbers of responses to the size of the social network. But since I have 2,583 followers on Twitter, 625 friends on Facebook and 417 followers on Google+ this seems to confirm the personal value of Facebook to me.
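As a rough back-of-the-envelope calculation, that works out at a response rate of approximately 0.2% on Twitter (4 people out of 2,583 followers), 2.4% on Facebook (15 people out of 625 friends) and about 1% on Google+ (4 comments from 417 followers) – so even after allowing for the differing sizes of the networks, Facebook comes out well ahead.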

But what about the bigger picture? In a post entitled “Why Facebook’s new Open Graph makes us all part of the web underclass”, published recently on the Guardian Web site, Adrian Short argued that “If you’re not paying for your presence on the web, then you’re just a product being used by an organisation bigger than you“. This was, I felt, a very elitist article, with the suggestion that:

When you own a domain you’re a first class citizen of the web. A householder and landowner. What you can do on your own website is only very broadly constrained by law and convention. You can post the content you like. You can run the software you want, including software you’ve written or customised yourself. And you can design it to look the way you want.

suggesting that you are a second class citizen if you primarily use your institutional Web site or, as I do, the WordPress.com site, which constrains the plugins used and the look-and-feel of this blog. Actually, you’re worse than a second class citizen:

When you use a free web service you’re the underclass. At best you’re a guest. At worst you’re a beggar, couchsurfing the web and scavenging for crumbs. It’s a cliché but worth repeating: if you’re not paying for it, you aren’t the customer, you’re the product.

In this elitist view, it seems that unless you control your own domain you’re a member of the underclass. The article goes on to take a sideswipe at Facebook, in particular. But it was amusing when I saw the tweet from the Guardian’s @currybet (Martin Belam) which pointed out that:

That “peril of Facebook” post by @adrianshort has 2,000 Likes and has been read nearly 3,000 times in our Facebook app http://bit.ly/pUKVXP

Yes, it seems that the “Web underclass” is willing to share their engagement with their peers using a Facebook Like or the walled garden provided by the Guardian Facebook app – and in quite large numbers.

Avoiding The Echo Chamber

I have described a polarised situation, illustrated by posts describing the various problems with Facebook, such as the recent series of articles which have described how Facebook tracks you online even after you log out, Facebook denies cookie tracking allegations, Facebook fixes cookie behavior after logging out and US congressmen ask FTC to investigate Facebook cookies.

But whilst Nik Cubrilovic, author of the post in which he accused Facebook of tracking its users even if they log out of the social network, has subsequently written a post on how Facebook made changes to the logout process, in which he describes how the cookies in question now behave as they should (they still exist, but they no longer send back personally-identifiable information after you log out), we are still seeing tweets in which the initial findings are being repeated. We also seem to fail to hear other perspectives, including the comment from Facebook engineer Gregg Stefancik:

I’m an engineer who works on these systems. I want to make it clear that there was no security or privacy breach. Facebook did not store or use any information it should not have. Like every site on the internet that personalizes content and tries to provide a secure experience for users, we place cookies on the computer of the user. Three of these cookies on some users’ computers included unique identifiers when the user had logged out of Facebook. However, we did not store these identifiers for logged out users. Therefore, we could not have used this information for tracking or any other purpose. In addition, we fixed the cookies so that they won’t include unique information in the future when people log out.

I feel there is a need to have a better understanding of the complexities of these issues and to be willing to listen to the views of others, not just respond to views expressed in ‘echo chambers‘ such as Twitter.

What Can We Learn From Facebook?

In order to move the discussion on from the Twitter echo chamber I’d like to summarise some aspects of Facebook which should be considered in more depth.

“Seamless sharing” could be an appealing concept: A recent post on the Bashki blog announced “Facebook Wants to Change the Way You Share” and described how “Facebook wants to remove as much friction from sharing as possible so that it’s seamlessly integrated with a user’s online activity“. When I heard the term ‘seamless sharing’ it reminded me of the JISC’s vision, over 10 years ago, for the Distributed National Electronic Resource (the DNER, as it was initially referred to). As I described in a poster entitled “Approaches To Indexing In The UK Higher Education Community” presented at the WWW 9 conference in May 2000: “The DNER aims to provide seamless access to electronic resources provided by JISC service providers“. The ideas in the paper were a reflection of the vision for the DNER described by Reg Carr, Director of the Oxford University Library Services, who, in a paper on “Creating the Distributed National Electronic Resource“, argued that “if the DNER is to deliver the goods in the way envisaged, it will have to do so in a carefully integrated, flexible and seamless way“.

Let’s be honest and admit that in higher education we too are looking to provide a seamless sharing environment. This is a positive term and we should avoid misinterpreting it.

We want to understand and respond to user interactions: I recently attended a meeting on learning analytics, which Wikipedia describes as “the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimising learning and the environments in which it occurs“.

Back in 2008 Dave Pattern, in a post on “Free Book Usage Data Available from the University of Huddersfield”, described how the Library Service had “released a major portion of our book circulation and recommendation data“. Eighteen months later, in a post on “Non/low library usage and final grades”, Dave described how analysis of the library usage data had shown that “it’s those students who graduate with a third-class honour who are the most likely to be non or low-users of e-resources“. In this case analysis of user interactions (and non-interactions) can lead to an institution taking actions, which could include promotion of appropriate Library resources, training, etc.

Facebook also analyses its users’ attention data. If it notices that I am following the England rugby team’s exploits in the Rugby World Cup it might also respond to failings but, rather than providing an advert for a Library training course, it might suggest I console myself with a pint of Carling!

Walled gardens can provide a nurturing environment: The term ‘walled garden’ is widely used to dismiss Facebook as a closed environment. Facebook clearly was a closed environment when it was launched, with access restricted to those working in approved academic institutions. However now anyone can have a Facebook account (including organisations) and content can be made public to all or access-restricted (in ways not easily achieved on conventional Web sites). Facebook can be used as a platform for walled garden applications, with users needing to install the app in order to access the content – but since standard Facebook content can be published openly it would probably be incorrect to describe Facebook as a walled garden, unless we wish to use the term to describe Intranets. However a mobile phone app which can only be deployed on a single platform could, possibly, be described as a walled garden – and as several institutions are developing such apps we need to avoid inconsistencies in the terminology we are using.

In addition to the need to be more rigorous in defining the term, there is also a need to reflect on the potential benefits of walled gardens. I have heard a walled garden described as providing a ‘managed’ or ‘nurturing’ environment. The institutional VLE may be regarded as a walled garden, but this point is very rarely heard when the term is being used to dismiss technologies one doesn’t approve of.

Users understand the need for sustainable business models: I have always been rather bemused by the statement: “if you’re not paying for a service you’re the product“. When I watch the Rugby World Cup matches on ITV I can also be described as ‘the product’. ITV isn’t broadcasting the matches as a favour to me and other sports fans: it’s doing so in order to make money from the associated advertising. And just as TV viewers understand the business model, so too will users of social networking services understand that the service providers need to make money, both to fund the service and to provide a profit for the owners.

Let’s be honest and admit that, faced with a choice of business models based on subscription services, advertising or even nationalised services, the evidence suggests that many users are willing to use services which carry adverts.

Isn’t there a lot which we can learn if we avoid the simple slogans and reflect on the Facebook experiences and successes which users seem to find beneficial?

Posted in Facebook | 17 Comments »