UK Web Focus

Innovation and best practices for the Web

Archive for March, 2011

ILI 2011 and the ‘New Normal’

Posted by Brian Kelly (UK Web Focus) on 31 March 2011

This year’s Internet Librarian International conference (ILI 2011) will be held in London on 27-28 October.  The call for speakers begins:

We are now in a time best characterised as the “New Normal”. The new normal isn’t just about austere budgets or the old chestnut of “doing more with less” – it’s also about new technologies. The new normal is having library patrons, users, customers and clients who know as much or more about technology than we do. It’s about partnerships and transparency, about new ways to develop and disseminate knowledge, about the increasing importance of communication skills, about opening up access to information, data, and knowledge.

What is meant by the term the ‘New Normal’ and how does it apply in a library context?  I found an article on “The Politics Of The ‘New Normal’” which was published in The Atlantic in July 2009.  This states that “About a third of Americans, 32 percent, say they are spending less now and expect to make their present habits a “new normal” of their future budgeting”. The writer, Chris Good, goes on to add “One can’t help but wonder if the “new normal” has political ramifications”.

In a library (and educational) context, in addition to the obvious economic and political changes there are also technological developments which are adding to the radical changes we are seeing across the sector.  But what might the implications of the ‘New Normal’ be in a library context?  Let us assume that an accompanying discussion about such implications is based on an agreement that there are significant changes which will have an impact on, and challenge orthodox thinking about, working practices.  I should add that although the political and economic changes are undoubtedly contentious, there will be other changes which many will welcome.

Focussing on the technological developments we have seen in recent years, it can be argued that:

  • Many users now have the skills and access to technologies to find and access resources which previously were mediated by librarians.
  • We are seeing a decrease in the importance of finding via metadata and an increase in the importance of social discovery.
  • We are seeing a decrease in the importance of libraries providing access to trusted resources. Instead users now wish to access resources they find in the wild – but will need to be able to evaluate such resources.
  • We are seeing a decrease in an unquestioning belief in the value of libraries and librarians, and a need for the sector to be able to demonstrate value and pro-actively market itself.

The Cabinet Office has recently published the Government ICT Strategy (PDF format).  The document provides many statements which, on the face of it, seem reasonable, especially for those who have been active in IT development work.  For example:

  • “projects tend to be too big, leading to greater risk and complexity, and limiting the range of suppliers who can compete”: Yes, there is value in agile development and rapid innovation projects which JISC, for example, has been funding.
  • “Departments, agencies and public bodies too rarely reuse and adapt systems which are available ‘off the shelf’ or have already been commissioned by another part of government, leading to wasteful duplication”: The ‘not-invented-here’ syndrome we are familiar with.
  • “systems are too rarely interoperable”: Again we are familiar with non-interoperable silos.

A number of solutions the government is proposing will be welcomed by many:

  • “create a level playing field for open source software”: The JISC OSS Watch service has provided advice in this area to the HE/FE sector.
  • “impose compulsory open standards, starting with interoperability and security”: Many will see benefits in mandating use of open standards, which can help public sector organisations move away from proprietary formats.

Whilst there are aspects of the Government ICT Strategy which make for uncomfortable reading it does seem to me that there may be benefits in embracing new approaches which may build on experiences gained over recent years in working in a changing environment with changing user expectations and requirements.

I will be interested to see how speakers at the ILI 2011 conference will address the implications of the “New Normal”.  Note that the deadline for submissions is 8 April – so if you have an interest in sharing your experiences I’d encourage you to submit a proposal.  If you are not able to submit a proposal, I’d welcome suggestions on what the New Normal might mean to you – I’d especially welcome positive examples.

Posted in Events, General, library2.0 | 2 Comments »

Thoughts on Facebook, Linked Data and Other Developments

Posted by Brian Kelly (UK Web Focus) on 29 March 2011

A Week of Facebook Developments

Last week saw a number of interesting Facebook developments which may have implications for the higher and further education sector.  A new Facebook feature, Facebook Questions, was rolled out to all users on March 24, and the following day an Operation Developer Love: Facebook Hack Day took place in Berlin which generated some interesting discussions on Twitter.  Whilst I am aware that many developers and others with interests in the use of networked technologies to support educational and research activities have concerns regarding various aspects of the Facebook environment, I feel that there is a need to monitor significant developments and to have an open discussion about the potential of such developments as well as possible concerns.

Facebook Questions

A post published on Mashable on 27 March 2011 outlines “Why Facebook’s New Questions Tool Is Good for Brands & Businesses”. The post began:

Brands and businesses are looking for ways to leverage Facebook’s recently unveiled Questions tool in ways that differ from what they’re already doing on Q&A sites such as Quora, Yahoo Answers and LocalMind.

This new feature, which functions as a recommendation engine, was rolled out to all users on March 24. According to Ben Grossman, communication strategist for marketing agency Oxford Communications, “It also presents a major opportunity for businesses to conduct market research and crowdsource in a far more elegant way than was previously possible”.

Looking at my Facebook contacts I’ve found that an early user of the new feature was Euan Semple, who responded to the question “Check out the new Facebook Questions what do you think? :)”. The answer, it seems, is that 8 people aren’t sure, 4 love it, 16 feel it could be a useful tool whilst one person doesn’t like it at all.

Whilst many of my other Facebook contacts have been answering fairly trivial questions (such as “FOOTBALL OR RUGBY ? WHICH IS BETTER ? ” and “WHO WOULD WIN IN A FIGHT“)  Aaron Tay, a librarian at the National University of Singapore (who was also recently named as a Library Journal Mover & Shaker 2011), has started to explore how the service can be used to support his professional interests, asking, for example, What is your favourite database/ search engine (excluding Google & Wikipedia)?

I have not yet come across any universities making use of Facebook Questions to gather feedback but as described in the Mashable blog post the feature can be used by organisations and groups and not just individuals. Once a Facebook page owner has set up the appropriate configuration options:

Brands, businesses, groups and organizations can then use Questions in several ways. For example, Grossman said:

  • Ice cream parlors can find out what the flavor of the week should be.

  • A gym can find out what time is best for its new hip-hop yoga class.
  • Radio stations can determine the hottest concerts for the summer.
  • Manufacturers can do a pulse check on fans’ holiday shopping plans.

In light of the increased importance of marketing an institution to new and existing customers (and since many new students will be paying £9,000 per year to attend University we should be regarding them – or their parents – as customers) I suspect we will start to see greater use of Facebook Questions. Is any University already using it, I wonder?

The Operation Developer Love: Facebook Hack Day

The #fbdevlove Twitter Discussion

On its own, Facebook Questions is simply a new feature which has been deployed by a large-scale social networking environment. Of greater interest to the developer community was the Operation Developer Love: Facebook Hack Day (see also the Facebook page) which took place in Berlin on Friday 25 March.

I became aware of this event through spotting tweets from three people in my Twitter stream: @gkob, @kidehen and @ldodds. I follow these three individuals as I am aware of their active involvement in Linked Data developments, as illustrated in the following biographical details provided on Twitter or, in @ldodds’ case, his personal Web site:

@gkob (Georgi Kobilarov):
CEO at Uberblic Labs. data geek. building data infrastructure for the Web. trying to change the world. linked & open data advocate. ex dbpedia developer.

@kidehen (Kingsley Uyi Idehen)
Founder & CEO, OpenLink Software, An Open Linked Data Enthusiast.

@ldodds (Leigh Dodds)
Until recently Leigh Dodds was the CTO of Ingenta where he was responsible for the ongoing development of their publishing platform based on Semantic Web technologies. Leigh has recently joined Talis as Programme Manager for the Talis Platform

Since I am aware of their involvement in Linked Data development activities I was fascinated by the Twitter discussion which took place around the tweets for the Facebook Developer Love Hack Day. The Twitter hashtag for the event was #fbdevlove. I created a Twapper Keeper archive for the hashtag and also used Storify to keep an archive of the discussions around structured data available through Facebook and Linked Data developments. In brief, Georgi Kobilarov (@gkob) initiated the discussion with a message to other Linked Data developers:

#linkeddata folks: forget all your RDF & Sparql, you’ll have to compete with Facebook’s Graph API, and that war is about developer love

Kingsley Idehen (@kidehen) responded:

@gkob Facebook (#FB) is just another Data Space plugged into the global #WWW Data Space. It’s all good re. #LinkedData. “AND” is good :-)

@gkob #Facebook has been creating a massive #LinkedData hub since inception. It doesn’t have to be hardcore #RDF to be useful Linked Data.

@gkob key thing is this: #Facebook is a massive #LinkedData Space plugged into the global #WWW data space. User Agents can query it.

@gkob I don’t have any problems querying #Facebook or meshing its data with data from other places en route to richer #LinkedData. All good.

@gkob #RDF != #LinkedData. What #Facebook #Microsoft #Google #Yahoo! etc.. r doing re. structured data (without #RDF) is quite valuable.

An interesting perspective, I thought. To put it another way, the global Social Web providers, such as Facebook, are well positioned to significantly enhance deployment of Linked Data by providing access to the large-scale structured information repositories they host.
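To illustrate the contrast at the heart of that exchange, the sketch below shows the two routes to the same kind of structured fact about an institution: a plain-JSON Graph API URL versus a SPARQL query against DBpedia. The resource names and the use of DBpedia's numberOfStudents property are illustrative assumptions on my part, not drawn from the discussion itself.

```python
# Two routes to structured data about an institution: Facebook's Graph API
# (plain JSON, no RDF) and DBpedia's SPARQL endpoint (RDF/Linked Data).
# The resource identifiers below are illustrative, not verified.

GRAPH_API = "https://graph.facebook.com"
DBPEDIA_SPARQL = "https://dbpedia.org/sparql"

def graph_api_url(page_id):
    """Graph API style: structured data addressed by a simple URL, returned as JSON."""
    return f"{GRAPH_API}/{page_id}"

def dbpedia_query(resource):
    """SPARQL style: a comparable fact fetched from a 'hardcore RDF' data space."""
    return (
        "SELECT ?students WHERE { "
        f"<http://dbpedia.org/resource/{resource}> "
        "<http://dbpedia.org/ontology/numberOfStudents> ?students . }"
    )

if __name__ == "__main__":
    # Print the two requests side by side for comparison.
    print(graph_api_url("universityofbath"))
    print(dbpedia_query("University_of_Bath"))
```

Either request yields machine-readable data a user agent can mesh with other sources, which is the substance of the "RDF != Linked Data" point.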

Facebook’s Open Graph API

An example of a tool which developers can use to explore Facebook’s Open Graph API  was mentioned by @sicross: his Facebook Graph API Explorer. I have used this tool to retrieve data for two institutional Facebook pages: the University of Bath and the University of Cambridge. You can view the output for the Universities of Bath and Cambridge.

I have previously surveyed institutional Facebook pages for Russell Group Universities in order to identify emerging patterns of usage.  That survey provided a manual comparison which would be resource-intensive to carry out across all UK Universities (and even more so if international comparisons were to be made).  However, use of the Facebook Graph API Explorer has helped to identify patterns of usage which could be gathered in an automated way, including the following numerical data:

University of Bath:

Nos. of likes: 3,357
Nos. of checkins: 1,081

University of Cambridge:

Nos. of likes: 69,824
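The figures above were read off manually, but the same data could be gathered programmatically. Below is a minimal sketch, assuming illustrative page identifiers and the 2011-era behaviour of the Graph API, which then returned public page JSON without an access token (current versions require one, and field names have since changed):

```python
import json
import urllib.request

GRAPH_ROOT = "https://graph.facebook.com"

def fetch_page(page_id, access_token=None):
    """Fetch the JSON document for a Facebook page.

    Note: an access token is now mandatory; in 2011 basic public page
    data was available without one.
    """
    url = f"{GRAPH_ROOT}/{page_id}"
    if access_token:
        url += f"?access_token={access_token}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def summarise_page(data):
    """Pull out the headline usage figures from a page's JSON."""
    return {
        "name": data.get("name"),
        "likes": data.get("likes", 0),
        "checkins": data.get("checkins", 0),
    }

if __name__ == "__main__":
    # A live run would call summarise_page(fetch_page("universityofbath"));
    # here we summarise a sample document matching the figures above.
    sample = {"name": "University of Bath", "likes": 3357, "checkins": 1081}
    print(summarise_page(sample))
```

Looping such a script over a list of institutional page identifiers would turn the manual Russell Group survey into an automated, sector-wide comparison.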

We can immediately see that the Facebook Places check-in feature has been used a significant number of times at the University of Bath but not at all at the University of Cambridge.  I must admit that I initially found this surprising: I would have expected an institutional geo-location service to have taken off in an institution which has many buildings scattered throughout the city as opposed to a primarily campus-based institution.  However, on reflection, it seems the opposite is true: checking in will have little value for an institution which is based in a large number of locations.  Of course it may be that geo-location services provide little value in the context of institutional use. Alternatively it may be that Facebook Places has failed to have an impact in this market – a suggestion which seems to be confirmed by an article published yesterday in the Daily Telegraph which informs us that “Foursquare has doubled its users since Facebook Places launched says chief”.

The potential to gather statistics on the number of Facebook ‘Likes’ in an automated way will, I feel, help to provide evidence which can be used to inform policy decisions on institutional use of Facebook and the resources which should be assigned to such work.  There could, of course, be dangers that such statistics would be used to publish league tables – but since the aims of higher educational institutions aren’t about maximising numbers of users on Social Web services, such concerns shouldn’t be taken too seriously.  However, the data gathered could be used to help identify the effectiveness of online marketing activities.  And if an aim is to ensure that UK Universities are best positioned to market their services to overseas students, the UK economy as a whole would benefit from a shared understanding of the benefits and the best practices.

“Is Facebook Killing Off The Company Website?”

A white paper entitled “The Effect of Social Networks and the Mobile Web on Website Traffic and the Inevitable Rise of Facebook Commerce” (PDF format) was published by Web Trends on 17 March 2011. In response, Jeff Bullas published a blog post in which he asked “Is Facebook Killing Off The Company Website?“. This discussion centred around evidence of traffic to Fortune 100 company Web sites. The study revealed that “68% of the top 100 companies were experiencing a negative growth in unique visits over the past year, with an average drop of 23%“.  In order to identify whether Facebook was responsible for the significant decrease in numbers (as opposed, for example, to the effect of the recession) the numbers of visits to a number of company Web sites were compared with unique visits to the equivalent Facebook pages.  In a sample of 44 companies it was found that “40% exhibited higher traffic to their Facebook page compared to their website“.

It might be argued that University Web sites are very different from those provided by commercial companies – Universities are concerned with the complexities of teaching and learning and research, whereas companies such as Coca Cola and Ford simply produce drinks or motor vehicles. Such views were expressed on the Twitter channel during Ranjit Sidu’s talk at the IWMW 2010 event, entitled “‘So what do you do exactly?’ In challenging times justifying the roles of the web teams“, in which he suggested that the higher education sector could learn from the way companies which sell cars identify the effectiveness of their online activities. It was interesting to note that several participants echoed such sentiments.  So let’s be honest and admit that commercial companies and higher educational institutions are not dissimilar in having many diverse objectives and sometimes little-understood complexities – and that both sectors may be in a position to exploit social Web services such as Facebook for a variety of purposes (marketing, sales, consumer engagement, etc.) but may also feel threatened by such services.

A Challenge For Developers

It was interesting to observe the tweets from the Facebook Operation Developer Love hack day, not only to see the enthusiasm for making use of Facebook APIs but also to hear about how Facebook content could be made available as Linked Data on the Web.  There are still unresolved issues such as privacy and ownership of data associated with Facebook – but, as we have seen, similar issues are also faced by Twitter, with some remaining uncertainties regarding the copyright and ethical issues associated with use of tweets published by others and the ways in which Twitter can enforce the conditions of use of its service. But just as Twitter subsequently toned down the conditions governing reuse of its data, we are also seeing Facebook moving away from its ‘walled garden’ approach and providing APIs to allow others to reuse its content.

As can be seen from the recent post on Use of Facebook by Russell Group Universities, Facebook clearly has a role to play across higher educational institutions. Managers and policy-makers within institutions will need to make decisions on how such services should be used and how much effort should be allocated to support such work.  Such decisions should be informed by evidence answering questions such as “How extensively is Facebook being used across the sector?” and “What patterns of usage are emerging?“.

Since APIs are available, such answers no longer need to be based on manual surveys. A challenge I would like to pose to developers is to provide automated answers to the questions listed above.

It should be possible to provide answers to these questions by simply using the Facebook API to query the Facebook data store. However, Linked Data developers may relish the challenge of combining this data store with DBpedia in order to answer the following additional question:

  • Is there a correlation between the numbers of Facebook ‘Likes’ and the size of the institution – or to put it another way, which institution has the largest number of ‘Likes’ per student?
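Once the Likes counts (from the Graph API) and student headcounts (e.g. via DBpedia's numberOfStudents property) have been retrieved, the join itself is straightforward. A minimal sketch, in which all the figures are illustrative rather than real survey data:

```python
def likes_per_student(likes, students):
    """Join the two datasets and rank institutions by Likes per student.

    `likes` maps institution name -> number of Facebook Likes (from the
    Graph API); `students` maps institution name -> student headcount
    (e.g. the dbo:numberOfStudents property queried from DBpedia).
    Institutions missing from either dataset are skipped.
    """
    ratios = {
        name: likes[name] / students[name]
        for name in likes
        if name in students and students[name]
    }
    # Highest ratio first, i.e. the institution with most Likes per student.
    return sorted(ratios.items(), key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    # Illustrative figures only -- real values would come from the two APIs.
    sample_likes = {"Bath": 3357, "Cambridge": 69824}
    sample_students = {"Bath": 15000, "Cambridge": 18000}
    for name, ratio in likes_per_student(sample_likes, sample_students):
        print(f"{name}: {ratio:.2f} Likes per student")
```

The ranked output is exactly the league-table-style evidence discussed above, which is why such scripts should be handled with the caveats already noted.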

In the longer term it will be useful to monitor trends in institutional use of Facebook – which may, of course, include a decline in such usage if alternative offerings, such as the Diaspora service (which will not claim any rights on content uploaded to it), gain traction.  But in order to be able to identify a decline in Facebook usage it will be helpful to have a benchmark of current usage – so even developers who do not approve of Facebook’s terms and conditions may wish to participate in this challenge.

Posted in Facebook | 6 Comments »

Fixing the Web – for People with Disabilities

Posted by Brian Kelly (UK Web Focus) on 28 March 2011

I have previously described the limitations of basing an institutional Web accessibility policy purely on conformance with WAI WCAG guidelines. Such an approach, whilst appearing very laudable, fails to address the more challenging areas of enhancing access to Web resources and services for people with disabilities, including the challenges of key institutional activities such as the provision of e-learning for students and of institutional repositories for researchers.

The BS 8878 Code of Practice provides a valuable framework for addressing such challenges and, as suggested previously, could be used to provide a policy framework for enhancing access to institutional repositories.

However, although BS 8878 seems to provide a policy framework which is appropriate for use in the UK, there is still a need for a mechanism for users with disabilities to report access problems and for such concerns to be addressed. The Fix the Web initiative has been set up to enable end users to report problems, for such problems to be evaluated by Web experts and, where appropriate, for them to be reported to service providers.

As described on the JISC CETIS Accessibility blog, provided by Sharon Parry, this can be described as “Crowdsourcing to Fix the Web“. Sharon summarises this initiative:

Fix The Web is a site which encourages people with disabilities to report any accessibility problems they have with a website. Volunteers then take these problems up with the website owners. …

Using a middleman (or woman) to act as an interface between people with disabilities, who experience problems with inaccessible websites, and the web developers themselves could help make the web a better place for everyone and act as an informal means of educating developers about the importance of accessibility.

A post on the JISC TechDis blog reports on how development work funded by the JISC is being used by the Fix the Web team:

Fix the Web and Southampton University have successfully incorporated a Fix the Web reporting button into the Accessibility Toolbar that Southampton evolved from the original JISC TechDis project. …

If you want to find the new plugin and use it for you or your learners to report any inaccessible sites please download it from http://www.fixtheweb.net/toolbar. You can find out more about making a difference by volunteering your web accessibility awareness and expertise at http://www.fixtheweb.net/being-volunteer.

What are the implications of recent developments such as BS 8878 and Fix the Web for those involved in the provision of Web services, whether institutional Web services or the use of the Web to support teaching and learning or research work? I think it is clear that BS 8878 provides a Web accessibility policy framework which is appropriate for use across the sector. In addition, those with particular interests and expertise in Web accessibility may find it beneficial to volunteer to support this initiative. This will involve:

  1. Ensuring the information from the disabled person, though very brief (some of this will come through tweets!), is reproduced in a polite and comprehensible form.
  2. Finding the web owner via their website and sending the information to them through email or a contact form.

I know many people involved in institutional Web activities have strong interests in accessibility issues. Here is an opportunity to make such interests and expertise available in a wider context and help to enhance online experiences for people with disabilities.

Posted in Accessibility | 1 Comment »

Twitter Export Functionality Returns to Twapper Keeper

Posted by Brian Kelly (UK Web Focus) on 24 March 2011

On 17 March Twitter updated the terms of service for use of their APIs:

You may export or extract non-programmatic, GUI-driven Twitter Content as a PDF or spreadsheet by using “save as” or similar functionality

In light of these changes, as described on the Twapper Keeper blog, John O’Brien, the Twapper Keeper developer, has “decided to bring the “Save as Excel” link back online when viewing an archive. This will allow you to get the currently viewed content into an Excel file for review.”

This will be good news for those who were not able to take action following last week’s post that there were only “A Few Days Left to Download a Structured Archive of Tweets“.

The changes in Twitter’s policies on use of its APIs will presumably have been a result of the backlash following Twitter’s announcement that it was more rigorously enforcing its terms and conditions, which appeared to be inhibiting development of third-party Twitter applications such as Twapper Keeper.

It should be noted, however, that the terms of service state that:

You will not attempt or encourage others to: sell, rent, lease, sublicense, redistribute, or syndicate access to the Twitter API or Twitter Content to any third party without prior written approval from Twitter.

and go on to add:

Exporting Twitter Content to a datastore as a service or other cloud based service, however, is not permitted.

The first condition is clearly intended to ensure that Twitter is in a position to commercially exploit its content and services (note that back in February stories were published which speculated that Twitter could be sold for up to $10 Bn). It should be noted that the second condition would appear to prohibit Twitter content from being hosted on cloud services for use by others, even if there is no financial gain. Twitter, it seems, is turning itself into a silo, with only limited capabilities for data export and reuse. Perhaps it is seeking to emulate Facebook’s approach in this respect.

Is this an unacceptable approach from a private company which, like Facebook, seems to be seeking to maximise financial gain from content provided by its users? Should we not be looking to move to an alternative microblogging environment, such as Status.net, of which Wikipedia states: “While offering functionality similar to Twitter, StatusNet seeks to provide the potential for open, inter-service and distributed communications between microblogging communities. Enterprises and individuals can install and control their own services and data.“

I think we ought to be very careful before making such moves. In part this is because of the importance of one’s social network to effective use of such social web services, and also in light of the variety of tools and services which have been developed around Twitter and its ease of use on a variety of devices and in a variety of environments – including watching TV programmes such as Question Time, for which use of Twitter as a back channel is now well established.

But in addition we need to consider whether, in light of the current political and economic climate, we should be overly critical of organisations which make money out of services we use for free. We should also recognise that services developed in UK Higher Education may also prohibit commercial exploitation of content.  For example, the policy for the University of Bath’s Opus institutional repository states that:

The metadata must not be re-used in any medium for commercial purposes without formal permission.

This policy was created using the OpenDOAR policy tool. My understanding is that the policy described above is intended to prevent others from commercially exploiting repository metadata. Is this fundamentally different from Twitter’s statement that:

You will not attempt or encourage others to: sell, rent, lease, sublicense, redistribute, or syndicate access to the Twitter API or Twitter Content to any third party without prior written approval from Twitter.

I think it is unfortunate that Twitter have chosen to make it more difficult for others to make use of Twitter content, whether for commercial gain or not. But if a broad aim of higher education is to help stimulate the economy, shouldn’t we be permitting (perhaps, indeed, encouraging) others to reuse our content – and if this generates income to fund such initiatives, should this be a problem?

Posted in Twitter | 2 Comments »

UKOLN International Seminar on 1 April: Dr Andrew Treloar on Data Management

Posted by Brian Kelly (UK Web Focus) on 23 March 2011

A post entitled UKOLN Seminar on Elluminate Open to All described how we were opening up access to UKOLN seminars in order to help to maximise their impact.

The next seminar will take place on 1 April. Dr Andrew Treloar, Director of Technology for the Australian National Data Service (ANDS), will give a seminar entitled Data Management: International Challenges, National Infrastructure and Institutional Responses – an Australian Perspective. The abstract for the seminar is given below:

This seminar will first consider the international challenges arising from the shift towards data-intensive research and the rapid uptake of instruments that produce very large data volumes. It will then consider the national approach taken in Australia in the form of the Australian National Data Service (ANDS). The seminar will present the rationale for ANDS, its different programmes, and the services that are being developed and taken up. ANDS has adopted an explicit (but not exclusive) emphasis on institutional engagement. The seminar will conclude by considering the ways in which different institutions in Australia are responding to, and engaging with, ANDS programmes. The seminar should be relevant to anyone with an interest in data management.

The seminar will start at 11 am and will be open not only to UKOLN staff and researchers and other interested parties at the University of Bath but also to the wider community.  In order to help us to gauge demand, if you would like to attend please use the Eventbrite system to book a free ticket.

We also intend to provide a live video stream of the talk.  If you would like to view the video stream you should also use the Eventbrite system to book a free ticket. The URL of the video stream will be sent out nearer the event.

Posted in Events | Leave a Comment »

Seminar on “Mobile Technologies: Why Library Staff Should be Interested”

Posted by Brian Kelly (UK Web Focus) on 21 March 2011

I was recently invited to give a staff development session on mobile devices to staff from the University of Bath Library. The title of the seminar was “Mobile Technologies: Why Library Staff Should be Interested” and the slides I used are available on Slideshare and embedded below. As well as describing how I use mobile devices (in particular the iPod Touch), the seminar also provided an ideal opportunity to demonstrate various uses of mobile technologies. This included:

  • Live streaming of the talk using Bambuser, with comments on the talk made in the accompanying chat.
  • Discussions which took place using the #bathlib Twitter hashtag.
  • Use of Storify afterwards to aggregate these tweets.

The point of using these technologies was to illustrate how mobile devices can be used to publish and view lectures (on this occasion I used a portable Apple Mac to stream the video, although I have previously used an iPod Touch and an HTC Desire Android phone to do this).  There was some discussion about the quality of the video and audio. I was able to ask the remote audience for their feedback and received the following comments on Twitter:

  • Audio good, video patchy at first but now pretty good – bit blurry but very much what you’d expect from a phone and v. acceptable #bathlib
  • #bathlib Video quality better now than at start of session. Beard concealing lip-synch quality

Comments made on the Bambuser channel included:

  • 11:26  anonymous: Hi Brian!  Bir jerky on the video, audio is fine. :)
  • 11:26  working pretty well brian: Yeah a bit jerky now
  • 11:27  itsme: video jerky audio good
  • 11:27  lescarr: Quality of video & audio very good. It does halt sometimes.
  • 11:27  mhawksey: audio is great, vid a bit jerky cam keeps refocusing
  • 11:29  Jo Alcock: Audio OK – video a bit jerky (but my connection isn’t very good here)
  • 11:30  Jo Alcock: Started watching it on iPad (through Twitter app), works well but moved to desktop now to enable chat
  • 11:30  Nicola: As tweeted: Audio good, video patchy at first but now pretty good – bit blurry but very much what you’d expect from a phone and v. acceptable #bathlib
  • 11:33  working pretty well brian: Video fairly patchy – Mahendra, Audio ok

You can judge for yourself how good the video and audio were by viewing a recording of the video.  It should be noted that the quality of the audio was the most important aspect, with the video helping to provide a context for the slides being displayed.

During the talk I mentioned how such lightweight video and audio streaming devices (and video recording devices such as a Flip camera) can help to enhance the benefits of such staff development courses.  I described how members of staff at the University of Bath Library who were unable to attend will be able to view the video. But in addition, making such resources publicly available can help to enhance the ROI associated with the preparation and delivery of such courses.  As can be seen from the accompanying image, there have so far been 62 views of the talk (of which 40 were of the live video stream).  As @annindk (Ann Priestly, an information professional currently working in Denmark) commented:

Watched yr seminar over lunch – thanks! Quality just fine, thinking ROI must be good for these quick sessions

The question of costs and ROI arose during the discussions after the presentation.  “What are the costs in making use of such technologies and can the investment be demonstrated to provide benefits?” was how I interpreted one question I received.  Following a show of hands it appeared that everyone in the room (apart from possibly one person) had a phone which contained a camera.  So we will probably find that the capital costs in the purchase of mobile devices have already been paid, and as phones are upgraded their functionality will continue to be enhanced.  So rather than having to justify the costs of centralised provision of, say, video recording systems in lecture theatres, I suggested that it would be more appropriate to explore a bottom-up approach, with individuals taking responsibility for recording themselves or their colleagues. A post on the DMU Mashed Library blog picked up on this idea:

One interesting point that came out was Brian’s description of people tweeting their comments on attending conferences to a wider (twitter reading) audience: Can this really be seen as engaging in support for the Big Society? I guess I was consciously doing this from Eduserv’s ‘Work Smarter, not Harder’ workshops #oa11.

My suggestion that taking responsibility for making resources available beyond their immediate target audience could be regarded as a form of the ‘Big Society’ was slightly tongue-in-cheek. But surely if one can provide materials which will be of benefit to others at little additional cost or effort, we should be looking to do this?  And as there were about 25 people in the seminar but 40 people watching the live video stream, can’t this be regarded as providing additional ROI?

Posted in Events, Web2.0 | Tagged: | 3 Comments »

A Few Days Left to Download a Structured Archive of Tweets

Posted by Brian Kelly (UK Web Focus) on 17 March 2011

On 21 February 2011 John O’Brien, developer of the Twapper Keeper twitter archiving service, announced the “Removal of Export and Download / API Capabilities“. In a subsequent video interview John explained the reasons for the removal of this service, which arose following Twitter’s announcement that it was enforcing its policy that third party services are not allowed to syndicate or redistribute tweets. Following Twitter’s ‘cease and desist’ email the removal of Twapper Keeper’s export capabilities and APIs will take place on 20 March – a few days’ time.

It is clear that the popularity of the Twapper Keeper service (which has a total of 2,410,061,623 tweets across 21,475 archives) has demonstrated a clear need for Twitter archiving – and it seems that Twitter wishes to be able to commercially exploit such popularity. I would guess that other services, such as Martin Hawksey’s iTitle Twitter captioning service, are further examples of innovative approaches which Twitter will be seeking to exploit commercially.

Last year’s JISC-funded developments to the Twapper Keeper service included making the software available under a Creative Commons licence. If you visit the Your.TwapperKeeper.com site you will be able to download the software which can be run on your own server. Clearly you would not be able to simply replicate a public Twapper Keeper service, but if Twitter’s terms and conditions are aimed at stopping public redistribution of tweets it would appear possible to install the software on an institutional Intranet – although I should admit that IANAL.

It should be pointed out that the Twapper Keeper service will continue to archive tweets which can be accessed via the HTML interface – what is being lost is API access and the ability to download a structured archive of tweets in, for example, MS Excel format, with columns for the tweets, Twitter userid, date and time information, geo-location information, etc. Such structured information is, as Twitter is very aware, valuable for developers who wish to carry out richer data analysis or provide additional value-added services on top of the conventional Web-based display of tweets.
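The value of such structured data is easy to illustrate. The following minimal sketch (in Python, with illustrative column names – a real Twapper Keeper export may use different headings) parses a CSV-style archive and counts tweets per user, the kind of richer analysis a plain HTML display does not support:

```python
import csv
import io
from collections import Counter

# Hypothetical sample of a structured tweet export; the column names
# here are illustrative, not the actual export schema.
sample = """text,from_user,created_at,geo
"Great talk #bathlib",alice,2011-03-22 11:26:00,
"Video a bit jerky #bathlib",bob,2011-03-22 11:27:00,
"Audio fine here #bathlib",alice,2011-03-22 11:29:00,
"""

def tweets_per_user(csv_text):
    """Count the number of tweets posted by each user in the archive."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return Counter(row["from_user"] for row in reader)

print(tweets_per_user(sample))  # Counter({'alice': 2, 'bob': 1})
```

The same structured columns would support analyses by time of day, geo-location or hashtag, none of which are practical against the Web-based display alone.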

It is still possible for a few days to download such structured archives from Twapper Keeper. I have recently looked at the details of my TwapperKeeper archives. I have decided to keep a local archive of tweets associated with a number of talks I have given. However I don’t intend to keep structured archives which are primarily of interest to event organisers (such as those for the ALT-C, JISC and CETIS conferences). I have also decided to keep a record in the list below of the decisions I have made. Note that an example of a local archive can be seen for the seminar I gave last year at the University of Girona.

Archive Type Name Description Policy # of Tweets Create Date
#Hashtag #a11y Accessibility (a11y) Archive not kept as this subject based archive is not directly related to my key areas of work. 42427 04-25-10
#Hashtag #accbc CETIS/BSI Accessibility SIG meeting. Local archive not kept as I was a speaker at this recent event. 154 02-28-11
#Hashtag #altc2009 The ALTC 2009 conference Archive not kept as this event-based archive will primarily be relevant to the event organisers. 4737 08-28-09
#Hashtag #altmetrics New approaches for developing metrics for scholarly research Archive not kept as this subject-based archive will primarily be relevant to others with an interest in the subject area. 158 01-15-11
#Hashtag #Ariadne The Ariadne hashtag – which may be used for UKOLN’s Ariadne ejournal. Archive not kept as this subject-based archive will primarily be about topics other than UKOLN’s Ariadne ejournal. 11897 09-21-10
Keyword Ariadne Archive of tweets containing the string ‘Ariadne’ Archive not kept as this subject-based archive will primarily be about topics other than UKOLN’s Ariadne ejournal. 25598 09-21-10
@Person ariadne_ukoln Tweets about the Ariadne web magazine. Local archive kept. 882 05-28-10
@Person briankelly Tweets about Brian Kelly Personal archive kept. 6471 03-19-10
#Hashtag #CETIS The CETIS service, based at the University of Bolton. Archive not kept as this organisational archive will primarily be of relevance to the host institution. 2836 09-24-10
#Hashtag #CILIP CILIP, the Chartered Institute of Library and Information Professionals. Archive not kept as this organisational archive will primarily be of relevance to the host institution. 4494 09-24-10
#Hashtag #CILIP1 Campaign on future of CILIP organisation based on CILIP’s 1-minute messages. Archive not kept as this campaign-based archive will primarily be of relevance to the host institution. 357 06-13-10
#Hashtag #CSR Comprehensive Spending Review Archive not kept as this subject archive will primarily be of relevance to others. 79799 10-15-10
#Hashtag #falt09 ALTC Fringe Archive not kept as this event-based archive will primarily be of relevance to others. 219 08-28-09
#Hashtag #heweb10 Tag for the HigherEdWeb 2010 conference Archive not kept as this event-based archive will primarily be of relevance to others. 8723 09-28-10
#Hashtag #ipres10 Tweets for the iPres10 conference, Vienna, 19-24 Sept 2010. Archive not kept as this event-based archive will primarily be of relevance to others. 2 08-27-10
#Hashtag #ipres2010 Archive for the IPres 2010 conference to be held in Vienna on 19-25 Sept 2010. Archive not kept as this event-based archive will primarily be of relevance to others. 1397 08-27-10
@Person iwmwlive IWMW live blogging account Local archive kept. 1373 04-30-10
#Hashtag #jisc10 JISC 2010 conference Archive not kept as this event-based archive will primarily be of relevance to others. 2059 04-02-10
#Hashtag #jiscpowr Archive of tweets related to the JISC PoWR project provided by UKOLN and ULCC Archive not kept due to low numbers of tweets. 6 07-09-10
#Hashtag #jiscpowrguide Archive of tweets about the Guide to Web Preservation published by the JISC-funded PoWR project and launched on 12 July 2010. Archive not kept due to low numbers of tweets. 2 07-09-10
#Hashtag #ldow2010 Linked Data on the Web 2010 conference Archive not kept as this event-based archive will primarily be of relevance to others. 524 04-25-10
#Hashtag #loveHE Times Higher Education campaign to support Higher Education in UK. Archive not kept as this campaign-based archive will primarily be of relevance to others. 12066 06-12-10
#Hashtag #mdforum UKOLN’s Metadata Forum Local archive planned. 119 12-10-10
#Hashtag #morris Tweets about Morris dancing Archive not kept as this social archive will primarily be of relevance to others. 17813 10-16-10
#Hashtag #oxsmc09 socialmediaconference Archive not kept as this event-based archive will primarily be of relevance to others. 1063 09-18-09
#Hashtag #PhD Tweets for researchers using the #PhD tag Archive not kept as this subject-based archive will primarily be of relevance to others. 28527 09-24-10
#Hashtag #s113 Workshop session at ALTC 2009. Local archive kept (will be edited to remove irrelevant tweets posted after event had taken place). 227 09-03-09
#Hashtag #scl2010 Scholarly Communication Landscape (SCL): Opportunities and challenges symposium, held at Manchester Conference Centre on 30 November 2010. Archive not kept as this event-based archive will primarily be of relevance to others. 39 12-02-10
#Hashtag #ucassm Social Media Marketing Conference organised by UCAS. Archive not kept as this event-based archive will primarily be of relevance to others. 223 10-18-10
#Hashtag #udgamp10 What Can We Learn From Amplified Events seminar, given by Brian Kelly, UKOLN at the University of Girona (local archive available). Local archive kept. 395 09-01-10
#Hashtag #ukmw09 UKMuseumsandtheWeb Archive not kept as this event-based archive will primarily be of relevance to others. 750 12-05-09
Keyword ukoln Tweets about UKOLN Local archive kept. 1948 03-19-10
#Hashtag #ukolneim UKOLN’s Evidence, Impact, Metric work Archive not kept due to low numbers of tweets. 45 11-05-10
#Hashtag #w3ctrack W3C Track at WWW 2010 conference Archive not kept as this event-based archive will primarily be of relevance to others. 179 04-30-10
#Hashtag #ww2010 Misspelling of WWW2010 hashtag Archive not kept as this event-based archive will primarily be of relevance to others. 833 04-29-10

It should be noted that this list is based on Twapper Keeper archives which I created. There will be a number of other archives of interest to myself and colleagues at UKOLN which may also be archived locally.

Also note that a number of event-based Twitter archives (such as the #s113 archive of a workshop session at the ALT-C 2009 conference) will contain irrelevant tweets due to the hashtag being used for other purposes. Such irrelevant tweets may be deleted from the archive.
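One way to carry out such pruning – a minimal sketch using made-up dates and tweets rather than the real #s113 data – is to discard any tweet whose timestamp falls outside the event’s dates:

```python
from datetime import datetime

# Illustrative event window (these dates are invented for the sketch).
EVENT_START = datetime(2009, 9, 8)
EVENT_END = datetime(2009, 9, 10, 23, 59)

def during_event(tweets):
    """Keep only tweets whose timestamp falls within the event window,
    discarding later reuse of the hashtag for other purposes."""
    return [(ts, text) for ts, text in tweets
            if EVENT_START <= ts <= EVENT_END]

archive = [
    (datetime(2009, 9, 8, 14, 0), "Workshop starting now #s113"),
    (datetime(2010, 1, 5, 9, 30), "Unrelated reuse of tag #s113"),
]
print(during_event(archive))  # only the first tweet survives
```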

Posted in preservation, Twitter | 2 Comments »

Reflections on the Amplified Events session at #JISC11

Posted by Brian Kelly (UK Web Focus) on 16 March 2011

The JISC 2011 Conference

As described in a recent post, yesterday Marieke Guy and I facilitated a workshop session on “Amplified Events: What? Why? How?” at the JISC 2011 Conference. This was a very appropriate session for the conference in light of the emphasis given by the conference organisers to the amplification of the opening plenary session and the ongoing amplification during the rest of the day.  The importance of Twitter to the event amplification can be gauged from the Summarizr statistics. At the time of writing there have been a total of 2,610 tweets with the #JISC11 event hashtag, from 544 Twitter users, who have included 188 hashtags and 349 URLs in their tweets.

Others will be writing reports on the conference – with the first conference summary being published by Chris Sexton. I sat next to Chris in the opening session as she wrote her summary on her iPad using the virtual keyboard.  Her post was published at 11.39am, 39 minutes after the first session had finished. To those who feel that using a computer during a talk is rude and means you are not concentrating on what the speaker is saying, I think Chris’s post demonstrates that this need not be the case.  Chris also published two additional posts: one on the Clouds and clouds and feeling strange session and one on the two sessions in the afternoon: Innovation and Amplification.

Chris’s final post gave her thoughts on our Amplified Events session.  In this post I will give some further thoughts on the issues raised during the session and some general points on the amplification of the JISC conference.  I won’t, however, go into details of the talks given at the session as the three sets of slides were published in advance and embedded in the previous post. In addition my colleague Marieke Guy  used by iPod Touch to record a video of the opening talk which is now available (in two parts) on YouTube and embedded below.

Curating Conference Tweets

The first thing to say is that the tweets related to the session have been curated in a Storify archive.  I used Storify to curate tweets which contained both the “#jisc11″ conference hashtag and the “#amp” hashtag I proposed to identify tweets related to the session.  This will, however, have missed tweets which did not contain this set of tags. It was interesting to see from the Summarizr statistics the list of the top 10 tweeted hashtags: #jisc11 (2,449 tweets), #amp (89), #jiscdigital (41), #jiscassess (40), #mediasite (38), #ocstories (31), #ukoer (29), #jiscmrd (27), #cetisbos (23) and #jisc11oer (22).   The #cetisbos hashtag was used for a session facilitated by Paul Hollins, CETIS, which I attended. I had suggested to Paul that he propose a distinct tag for the session, and so they chose #cetisbos, with ‘bos’ standing for benefits of open standards.  I suspect most of the other tags also related to the workshop topics but, as can possibly be seen from #ukoer and #jisc11oer, there may be fragmentation in the use of such tags. Indeed this happened in my session, with four uses of the “#jiscamp” tag (I should add that although I suggested the tag in advance and included the two tags on the title slide of the opening talk, since the slides had been submitted to JISC in advance I used an old version of the slides which didn’t include details of the hashtag).

My advice to JISC (which I mentioned to JISC events staff on a crowded train returning from Liverpool last night) would be for the conference organisers to allocate each session a short ID.  At UKOLN’s IWMW 2009 and IWMW 2010 events we identified the plenary sessions as #P0-#P9, the first set of parallel sessions as #A1-#A9 and the second set as #B1-#B9. Alternatively, as suggested by Chris Gutteridge and used at Dev8D, the identifier could be a code for the room. Whichever convention is used, I think it is clear that for a large event with multiple parallel sessions there is a clear need to be able to disambiguate the session tweets.
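Whatever the convention, the filtering itself is straightforward. A minimal sketch (the tweets here are invented) selects only those tweets carrying both the event hashtag and the session identifier:

```python
def session_tweets(tweets, event_tag="#jisc11", session_tag="#amp"):
    """Keep tweets which carry both the event and the session tag,
    matching case-insensitively."""
    return [t for t in tweets
            if event_tag in t.lower() and session_tag in t.lower()]

stream = [
    "Great discussion on event amplification #JISC11 #amp",
    "Off to lunch #jisc11",
    "Open standards session starting #jisc11 #cetisbos",
]
print(session_tweets(stream))  # only the first tweet matches
```

As noted above, any such filter will miss tweets where the author omitted one of the tags, which is why an agreed, published convention matters more than the code.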

Time Travel and “The Persistence of Memory”

In the opening talk in which I described what an amplified event is and why such approaches are important I used two metaphors: an amplified event can provide a form of time travel, so that you can go back in time and watch a talk which was given in the past. In addition, as one’s memories of an event start to fade event amplification can help to make memories more persistent, both in providing access to the discussion which happened at the time and in enabling discussions about the topics to help clarify understanding.

We had intended to provide live video streaming of the talks at the session. Unfortunately due to problems with the WiFi network (ironic, as the conference was held in the BT Conference Centre) this was not possible. As I had not brought along a Flip camera and tripod (and failed to spot a tweet offering to lend me these devices) we had to use my iPod Touch to record the opening talk. The video was split into parts one and two since halfway through the talk I went into the audience to respond to some of the questions.   Although the talks given by myself and Marieke were not amplified directly, we had provided access to the slides on Slideshare in advance and these provided a context for the remote audience who were reading the session tweets.

Paul Shabajee, our third facilitator,  could not attend the conference for personal reasons. However Paul had provided an audio version of his PowerPoint slides on “Amplification and Rethinking Events” – so if you view the slides (available in MS PowerPoint format) you will be able to access his talk in the same way in which the attendees did.

The Amplified Events Session Tweets

Unfortunately since Marieke was occupied videoing the opening talk she was not able to keep notes of the various reasons people gave for attending the session and the issues they hoped would be addressed – and now, the following day, I find that my recollections of the issues are somewhat hazy.   I can recall people asking for advice on best practices for amplifying events and for ways in which evidence of the impact and benefits of event amplification can be gathered. But despite my fading memories of the opening session I am able to view the tweets which were posted during the session and can respond to the various issues raised. This is particularly useful as, although we did not have an opportunity to discuss this much at the session, ILRT and UKOLN have been funded by the JISC for the Greening Events II project which will include development of “an Events Planning Toolkit to help event organisers think through what type of event they need to hold (physical, virtual or hybrid) and then to provide assistance in the form of guidelines and technology tools with each stage in the process to enable them to reduce the negative sustainability impacts of their event“.  The notes given below will help to inform the identification of the guidelines we’ll be developing.

We used the Storify Twitter curation service to aggregate the tweets containing the “#jisc11″ and “#amp” tags (a service I initially encountered via a post by Kirsty Pitkin on her Event Amplifier blog).

The first series of tweets tended to provide a summary of the opening two talks by myself and Marieke. However after these talks there was a more general discussion about issues relating to event amplification.

@adamhuffman commented:

Seems to be more reluctance to have live amplification in “traditional” subjects, whereas pretty much expected at tech. events #amp #jisc11

This is confirmed by my experiences – events attended by developers and Web 2.0 and Library 2.0 folk tend to expect to be able to participate in an event’s back channel, whereas at more traditional events this is not the case. However this view was challenged by @dosticen (Lorna Prescott, a remote participant who, in a follow-up tweet, informed us that she had “been training social reporters today to help amplify event next week, such fun“), who responded to a tweet from @joeyanne:

RT @joeyanne: … type of people who will follow remotely, likely to be “techy” people #amp #jisc11 << not true in my case, the topic is key

Perhaps there is a split in the development community between the early adopters and those who have failed to be convinced which is not necessarily reflected in other sectors? I think there is a need to develop a better understanding of perceptions across different sectors.

A participant in the session raised concerns about possible dangers:

Good Q from the room: how do you avoid amplifying too much? #amp #jisc11

A follow-up question on the problem of tweets which misrepresented the views of the speakers generated a fair amount of discussion in the session and on Twitter, with a consensus that it is better for errors to be published openly, as ‘many eyes’ can help to spot, and possibly correct, such errors:

If you’re misrepresented you do get chance to correct and engage. #jisc11 #amp

@joeyanne Quite – it’s useful for speakers to have access to the backchannel, either during or after their talk. #amp #jisc11

In her blog post Chris Sexton followed up on her contribution to the discussion:

Someone also commented that tweets sometimes misrepresented the speaker – said things they didn’t say, or interpreted things wrongly. Did that mean they were only half listening because they were tweeting?  I would say in general no. As someone who tweets a lot during talks, I find I concentrate much more – my mind doesn’t wander as much because I’m having to listen to be able to type the tweet. I also believe that speakers sometimes misremember what they’ve said. I’ve read tweets and thought “I didn’t say that”, and then gone back and checked the video, and I did!  Also, if as a speaker you are misrepresented, twitter gives you the chance to correct, explain again, and engage with the listener.

The final talk was given by Paul Shabajee, who discussed some of the economic and environmental factors related to the sustainability of events.  In response to a tweet from @timbuckteeth (Professor Steve Wheeler) which summarised Paul’s talk, @lesleywprice commented:

RT @timbuckteeth: Will conferences reduce due to economic problems? Survival of the fittest events? #jisc11 #amp < its happening already!

In response to a request for evidence to backup this remark Lesley made the following points:

RT @timbuckteeth: Examples? No of people attending #jisc11online lots of tweets over last couple days saying cost prevented attendance (see source)

RT @timbuckteeth: not just cost of event….cost of travel, cost of accommodation and cost of time out of office…events becoming a luxury (see source)

@timbuckteeth so good value or not…and to whom? Attendees get may value, but how does that transfer back to the organisation? (see source)

@timbuckteeth …Lots of confs full of presentations and keynotes..we know this is not effective learning so why does it still happen? (see source)

Chris Sexton concluded her post by touching on such environmental and economic issues and the importance of engaging with the amplification of events:

Very good session to end on, and I’m a great believer in amplified events – the concept can be extended to any event, including meetings – doesn’t just have to be conferences. With the need to reduce our carbon footprint and travel less I think it it will become more the norm.

The Resources Used

For me the important part of the session was the discussion summarised above. However I still feel there is a need for content around which a discussion can be held.  The following resources were used in the session.

Title Speaker Comments
Amplified Events: What and Why? Brian Kelly, UKOLN Slides available from UKOLN Web site in MS PowerPoint format and on Slideshare. Video available on YouTube (part 1 and Part 2)
How to Amplify an Event: Case Studies Marieke Guy, UKOLN Slides available from UKOLN Web site in MS PowerPoint format and on Slideshare
Amplification & Rethinking Events Paul Shabajee, ILRT Slides available from UKOLN Web site in MS PowerPoint format (with audio) and on Slideshare

The YouTube videos are also embedded below.

Posted in Events | Tagged: | 1 Comment »

Amplified Events, Seminars, Conferences, …: What? Why? How?

Posted by Brian Kelly (UK Web Focus) on 14 March 2011

Tomorrow (Tuesday 15 March 2011) my colleague Marieke Guy, Paul Shabajee from ILRT, University of Bristol, and I will be facilitating a workshop session on “Amplified Events, Seminars, Conferences, …: What? Why? How?” at the JISC 2011 conference. This session will review UKOLN’s experiences in the provision of amplified events together with the experiences of the Greening Events project, funded by the JISC and provided by staff at ILRT.

The workshop session is the first joint event by UKOLN and ILRT carried out as part of the Greening Events II project, a follow-up project in which UKOLN is supporting ILRT. The session will support one of the main deliverables of the project, which is:

An Events Planning Toolkit to help event organisers think through what type of event they need to hold (physical, virtual or hybrid) and then to provide assistance in the form of guidelines and technology tools with each stage in the process to enable them to reduce the negative sustainability impacts of their event.

We hope that the participants will provide feedback on the type of guidance and tools which will be needed when providing amplified (hybrid) or virtual events.

It would seem appropriate that a session on amplified events should itself be amplified. Although a WiFi network will be available at the conference we do not know how usable this will be or whether there will be any barriers (such as firewalls) which would inhibit the amplification of the session.  However if possible we will try to make the various resources available and also stream a video of the session.

Also note that, inspired by a suggestion from Cameron Neylon, the slides which provide an introduction to the session include a set of icons which make it clear that permission for amplification of the session has been granted.  This is an illustration of a guideline which we will be proposing for those who wish to organise an amplified event – and we will be looking for feedback (from the participants at the session and readers of this post) as to whether you feel that this is a useful approach to adopt.

The session will take place from 15.00-16.00 on Tuesday.  I will try to update this page with a link to information about the amplification of the session.  I will also tweet details from my @briankelly account. Note the hashtag for the JISC 2011 conference is #JISC11 and in the absence of any official tags for the workshop sessions I suggest that the #amp tag is used to refer to tweets associated with the session.

Also note that the “Introduction to workshop / Amplified Events: What and Why?“, “How to Amplify an Event” slides and “Amplification and Rethinking Events” slides are available on Slideshare and are embedded below.

Posted in Events | Tagged: | 4 Comments »

When Technology (Eventually) Enhances Accessibility

Posted by Brian Kelly (UK Web Focus) on 10 March 2011

“You’re Damned If You Do and Damned If You Don’t!”

Should you make use of a technology if you can’t guarantee that it will be accessible to people with disabilities?  Should you, for example, provide access to videos if you can’t provide captions for the videos?

If you have stated that your institution’s Web site will conform fully with WCAG (1.0 or 2.0) guidelines then you won’t be able to host such videos as the WCAG 2.0 guidelines state:

Guideline 1.2 Time-based Media: Provide alternatives for time-based media.

1.2.2 Captions (Prerecorded): Captions are provided for all prerecorded audio content in synchronized media, except when the media is a media alternative for text and is clearly labeled as such. (Level A)

Of course failing to provide videos may in itself act as a barrier to people with disabilities: as Lorenzo Dow put it, “You’re damned if you do and damned if you don’t!“

At the recent JISC CETIS Accessibility SIG meeting which I mentioned recently, Shadi Abou-Zahra commented that he felt that some of the criticisms I had made of the difficulties of implementing WCAG guidelines were inappropriate, as WAI do not address the policy issues regarding implementation of the guidelines – they simply point out that a failure to implement guidelines can result in problems for people with various disabilities.  I have to admit that I wish WAI had been much more vocal in making this point, since many public sector organisations (including the UK Government) have stated (or, indeed, mandated) conformance with WCAG guidelines without giving any caveats.

But let’s acknowledge that although there may have been communications problems in the past we are now in a position to exploit WCAG and other guidelines in a pragmatic and achievable way, with the BS 8878 Code of Practice now providing the policy framework to guide us.

The Challenge of Providing Access to Videos

What can be done if you wish to host videos and feel it is not feasible to provide captions?  This may be because ownership of the videos is devolved – perhaps large numbers of students have taken videos of their graduation ceremony and these are being hosted (or linked to) from the institution. Or perhaps, as has been the case at a number of events for developers, researchers and practitioners, video interviews were made with participants and speakers in order to provide potential attendees with an authentic perspective on what to expect at the event, and the costs of just-in-case captioning can’t be justified.

The BS 8878 Code of Practice recognises that accessibility is not always easy – or indeed possible – to implement. The important thing to do, therefore, is to document policies and processes.  But in addition there is a need to understand that technological developments may help to address accessibility issues, so that resources which are not accessible today could be made accessible tomorrow but only if those resources are available.

An example of this is the iTitle Twitter captioning service which enabled a Twitter stream to be synchronised with a video stream on popular video-streaming services such as YouTube or Vimeo.

YouTube provides another example of how technological developments may enhance the accessibility of video clips.  Back in November 2009 YouTube announced that they had added a feature that generates video captions:

We’ve combined Google’s automatic speech recognition (ASR) technology with the YouTube caption system to offer automatic captions, or auto-caps for short. Auto-caps use the same voice recognition algorithms in Google Voice to automatically generate captions for video.

Initially this feature only worked for English and was “enabled for a small number of channels that usually feature talks and interviews: UC Berkeley, Stanford, MIT, Yale, UCLA, Duke, UCTV, Columbia, PBS, National Geographic“. However in March 2010 a CNET News article announced “YouTube brings auto-captioning to everyone“:

Video providers are now able to apply for machine transcription on their own videos. And for videos that have not yet been transcribed, a user can request it themselves. YouTube then puts it in a transcription queue, which can take anywhere from an hour to a day–a time Google is trying to make as fast as possible.

An article in The Register does point out some limitations in the automated transcriptions: “Automatic captions for a 14-year-old’s video diary: nigh incomprehensible” but then goes on to add “US President Obama’s weekly address to the nation: works pretty nice“.

But what are my experiences?  Do I sound like a 14 year old or President Obama?  Generating the automated captions was trivial and, as can be seen in the image below, the system could understand that I was speaking English.  But what has been transcribed as “acceptable snow” was actually me saying “it’s a cancerous cell“!

We therefore can’t say that YouTube’s automatic captions have solved the problem.  But  it strikes me that the quality of the captioning is likely to improve as algorithms improve, additional processing power is provided and, perhaps most importantly, the system begins to recognise regional accents and also individual speaking patterns.

It should also be noted that, as described on the YouTube Web site, the automated captioning service creates a captions.sbv file containing the captions and the time stamps. As this is a text file it can be edited using a simple text editor so that if, for example, much of the captioning is correct but the odd word has been transcribed incorrectly, it would be possible to use the automated conversion for the bulk of the work.
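Such corrections could even be scripted. The sketch below (the caption text is invented, though the mistranscription is the one described above) assumes the common .sbv layout of a “start,end” timestamp line followed by the caption text, and replaces a known wrong phrase while leaving the timestamps untouched:

```python
# Hypothetical fragment of an auto-generated captions.sbv file.
sample_sbv = """0:00:01.000,0:00:04.000
and this shows acceptable snow

0:00:04.000,0:00:08.000
dividing under the microscope
"""

def fix_caption(sbv_text, wrong, right):
    """Replace a known mistranscription in caption lines only."""
    fixed_lines = []
    for line in sbv_text.splitlines():
        # A timestamp line looks like "0:00:01.000,0:00:04.000".
        if "," in line and line.count(":") >= 2:
            fixed_lines.append(line)  # keep timestamps as-is
        else:
            fixed_lines.append(line.replace(wrong, right))
    return "\n".join(fixed_lines)

print(fix_caption(sample_sbv, "acceptable snow", "a cancerous cell"))
```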

Should we not, therefore, be providing YouTube with a wider range of videos containing our various regional accents in order to enhance the automated analyses?  And will a failure to upload our videos result in a failure to enhance accessibility for tomorrow’s audiences?

And if we have lecturers who speak with a clear and distinct English accent (unlike my Scouse accent with traces of the years spent in Yorkshire, Newcastle and the East Midlands) and videos of their talks are successfully captioned, wouldn’t it be unreasonable to fail to provide this service? Let’s remember that UK legislation expects organisations to take reasonable measures – isn’t uploading videos in order to enhance access a reasonable thing for organisations to be doing now?

Posted in Accessibility | 5 Comments »

UK Government Survey on Open Standards: But What is an ‘Open Standard’?

Posted by Brian Kelly (UK Web Focus) on 7 March 2011

UK Government’s Open Standards Survey

I was alerted to the UK Government’s Open Standards Survey by Adam Cooper of JISC CETIS, who has already encouraged readers of his blog to participate in the survey. I’ve skimmed through the questions but haven’t yet completed the survey. What struck me, though, was the draft definition of the term “open standard” as proposed by the UK Government.

Respondents are invited to give comments to the following five conditions:

  1. Open standards are standards which result from and are maintained through an open, independent process
  2. Open standards are standards which are approved by a recognised specification or standardisation organisation, for example W3C or ISO or equivalent. (N.B. The specification/standardisation must be compliant with Regulation 9 of the Public Contracts Regulations 2006. This regulation makes it clear that technical specifications/standards cannot simply be national standards but must also include/recognise European standards)
  3. Open standards are standards which are thoroughly documented and publicly available at zero or low cost
  4. Open standards are standards which have intellectual property made irrevocably available on a royalty free basis
  5. Open standards are standards which as a whole can be implemented and shared under different development approaches and on a number of platforms

I think the survey was wise to begin by being honest about the difficulties in defining an ‘open standard’ and inviting feedback on its proposed set of conditions. The survey follows on from work which has been carried out by UKOLN, JISC CETIS and JISC OSS Watch with our shared interests in helping the sector to exploit the potential of open standards. I thought it would be useful to revisit our work before I completed the survey.

Previous Work in Describing an ‘Open Standard’

The term “open standard” is somewhat ambiguous and open to different interpretations. In a paper entitled “Openness in Higher Education: Open Source, Open Standards, Open Access” (available in PDF, MS Word and HTML formats) Scott Wilson (CETIS), Randy Metcalfe (at the time at JISC OSS Watch) and I pointed out that:

There are many complex issues involved when selecting and encouraging use of open standards. Firstly there are disagreements over the definition of open standards. For example Java, Flash and PDF are considered by some to be open standards, although they are, in fact, owned by Sun, Macromedia and Adobe, respectively, who, despite documenting the formats and perhaps having open processes for the evolution of the formats, still have the rights to change the licence conditions governing their use (perhaps due to changes in the business environment, company takeovers, etc.)

It should be added that this paper was written in 2007. Since then PDF has become an ISO standard so we could add the fact that proprietary formats can become standardised to the complexities.

In a UKOLN QA Focus briefing paper we tried to describe characteristics shared by open standards, which had similarities to the approaches proposed in the UK Government survey:

  • An open standards-making process
  • Documentation freely available on the Web
  • Use of the standard is uninhibited by licensing or patenting issues
  • Standard ratified by recognised standards body

It should be noted that we described these as ‘characteristics‘ of an open standard rather than mandatory requirements since we were aware that the second point, for example, would rule out standards produced by many standardisation bodies such as BSI and ISO.

Responding to the Survey

I’d like to share my thoughts prior to completing the survey.

  1. Open standards are standards which result from and are maintained through an open, independent process

  I would support this condition. It should be noted that this means that a standard which is owned by a vendor cannot be regarded as an open standard even if the standard is published. This means, for example, that Microsoft’s RTF format is not an open standard and PDF was not an open standard until ownership was transferred to ISO in 2008. It should be noted that I believe that the US definition of ‘open standards’ does not include such a clause (there were disagreements on this blog over the status of PDF before it became an ISO standard).

  2. Open standards are standards which are approved by a recognised specification or standardisation organisation, for example W3C or ISO or equivalent. (N.B. The specification/standardisation must be compliant with Regulation 9 of the Public Contracts Regulations 2006. This regulation makes it clear that technical specifications/standards cannot simply be national standards but must also include/recognise European standards.)

  I used to hold this view. However I can recall an email discussion with Paul Miller and Andy Powell when they worked at UKOLN who argued (and convinced me) that this was an over-simplistic binary division of the world of standards. It should be noted that RSS (in any of its flavours) would not satisfy this condition. The question, then, is whether this is a concern? If the definition of an ‘open standard’ will be used to determine whether a standard should be used by the UK Government then there will be a need to avoid being too rigorous in the definition. My view would be to rule out this condition.

  3. Open standards are standards which are thoroughly documented and publicly available at zero or low cost

  I would agree on the importance of rigorous documentation for open standards, so that ambiguities and inconsistencies are avoided. This clause is, however, ambiguous itself – what is ‘low cost’ documentation? Nevertheless I would be happy to see this condition included.

  4. Open standards are standards which have intellectual property made irrevocably available on a royalty free basis

  This is desirable, but what happens if it is not possible to negotiate royalty-free licences? This is particularly true for video formats. If the government uses this as a mandatory condition for open standards and subsequently requires services to make use of open standards might this result in a poorer quality environment for the end user? From an ideological position I would like to support this condition but in reality I feel that there needs to be more flexibility – there is a danger that if open standards are mandated Government departments could be barred from making use of popular services – such as YouTube and iTunes – which many people find helpful in gaining simple access to information of interest. I am therefore rather uncertain as to whether this should be a required condition for the definition of an open standard. It is worth noting, incidentally, that the W3C have similarly avoided grasping this particular nettle in the HTML5 standardisation work, with no specific video codec being mandated as part of the standard.

  5. Open standards are standards which as a whole can be implemented and shared under different development approaches and on a number of platforms

  This has always been a view I have held.

The contentious issue seems to be “Open standards are standards which have intellectual property made irrevocably available on a royalty free basis“. I suspect people will argue strongly that this condition is essential. For me, though, we are revisiting Martin Weller’s “Cato versus Cicero” argument. Should we be taking a hardline stance in order to achieve a desired goal or do we need to make compromises in order to accommodate complexities and the conflicting needs of various stakeholders?

Posted in standards | 4 Comments »

BS 8878: Applying a Level of Redirection to Web Accessibility

Posted by Brian Kelly (UK Web Focus) on 3 March 2011

As mentioned in a post entitled “A Grammatical View of Web Accessibility” on Monday I gave a talk on “BS 8878 and the Holistic Approaches to Web Accessibility” at a CETIS Accessibility SIG meeting held at the BSI HQ in London.

My talk described the background to the development of the holistic approach to Web accessibility and how this approach relates to the BS 8878 Code of Practice on Web Accessibility.  When I listened to Jonathon Hassell’s talk on “BS 8878 and the Feedback Process” which preceded mine it was clear that BS 8878 provides a very good implementation of the ideas which I and fellow accessibility researchers and practitioners have developed since 2005.

Our initial concerns (described in more detail in a paper on “Forcing Standardization or Accommodating Diversity? A Framework for Applying the WCAG in the Real World” which is available in PDF, MS Word or HTML formats) were based on a realisation of flaws in the WCAG 1.0 guidelines and a growing awareness of the limitations of the WAI model, which is dependent on full implementation of WCAG, ATAG and UAAG guidelines.

The WAI guidelines (and the WCAG guidelines in particular) should therefore be regarded as a target to aspire towards, provided they are appropriate to the intended use of the Web service and its target audience, and provided they can be implemented by taking reasonable measures. What counts as reasonable will depend on factors such as the scope of the service, the available resources and budget and the maturity of the technologies you intend to use (don’t, for example, expect that a W3C standard such as SMIL will necessarily provide an accessible solution, as support for the standard is low).

The WAI guidelines should therefore be regarded as a set of technological best practices. Such guidelines are useful in helping to make the, sometimes difficult, choices about which technologies to use, the levels of accessibility to be provided and the ways in which such accessibility support can be sustained.  This is where BS 8878 can provide a solution by outlining 16 stages in the process of developing accessible Web services, including the process of deciding which WCAG guidelines may be appropriate and how they should be deployed.

It struck me that the BS 8878 is an example of the saying I heard many years ago: “There isn’t a problem in computer science which can’t be solved by adding a level of redirection“. In this case the areas in which WCAG fail to provide an appropriate solution can be addressed by providing a standard which enables the scope of WCAG’s usage to be defined.

Note that if you still feel that all Web resources must be universally accessible to everyone, please tell me how the many thousands of PDFs contained in institutional repositories will be made accessible?  (Perhaps by getting rid of such resources?!)

Finally I should add that a video of my talk is available on YouTube and embedded below.

Note: If you wish to view the video you may find it useful to view the slides which are available on Slideshare and embedded below. This link was added shortly after the post was published.

Posted in Accessibility | Tagged: | 2 Comments »

Time to Move to GMail?

Posted by Brian Kelly (UK Web Focus) on 2 March 2011

The University of Bath email service is still down. The problems were first announced on Twitter at 06.02 on 24 February:

The University email is currently running at risk of failure we are working towards a fix – sorry for any disruption caused.

Later that day we heard:

University email will be unavailable for the rest of the day -for alternative use University Instant Messenger Jabber: http://bit.ly/fAshWi

The problems continued the following day and so BUCS (the Bath University Computing Service) announced an interim email service: I can now send and receive email but can’t access any email messages which I received prior to 25 February.  I must admit that this provides a strange feeling of bliss (my email folder is almost empty!), but I know that the actions which I’m now running behind on will come back to haunt me when the full email service is restored.

Of course communications have continued, particularly on Twitter. I’m pleased, incidentally, that BUCS have been using Twitter as a communications channel to keep their users informed of developments.  It has also occurred to me how I am still able to continue working using Twitter to support my professional activities: how, I wonder, are others at the University of Bath who don’t use Twitter coping?

During this outage, whilst away in London, I suggested that use of Google’s GMail service might be appropriate.  In response I received the ironic reply:

Gmail never breaks. Oh. Wait. http://www.pocket-lint.com/news/38815/gmail-reset-deletes-correspondence-history :)

It seems that on the day Bath University email users were suffering as a consequence of hardware problems on its email servers Gmail was also having problems. As the PocketLint article rather dramatically announced:

Oh dear – looks like Google has dropped the bomb on hundreds of thousands of Gmail accounts, wiping out years of email and chat history.

You can’t trust GMail to provide a reliable email service seemed to be the sub-text of other Twitter followers who responded to my initial tweet.  But is that really the case? I have described the continuing problems with the BUCS email service (which are summarised in a BUCS FAQ). But what is the current status of GMail?

Whilst Computer Weekly has highlighted the problems of use of Web-based email services, CBC News has pointed out that “Gmail messages [are] being restored after bug“.  The article described how emails “are being restored to Gmail accounts temporarily emptied out two days ago”. This problem was either small-scale (“About 0.02 per cent of Gmail users had their accounts completely emptied“) or significant (“media outlets estimate there are roughly 190 million Gmail users, so about 38,000 were affected”). The problem, caused by a bug which has now been fixed, did not affect me whereas the BUCS email outage clearly has.  Which, I wonder, is the more significant problem?

I have to admit that I have been affected by outages in externally-hosted communications services previously. In September 2009  I wrote a post entitled “Skype, Two Years After Its Nightmare Weekend” which described how “Skype’s popular internet telephone service went down on August 16 [2007] and was unavailable for between two and three days“. This was also due to a software bug (related to MS Windows automated updates) which has been fixed – and I have continued to be a happy Skype user and agree with last year’s Guardian article which described “Why Skype has conquered the world”.

So yes there will be problems with externally-hosted systems, just as there will be problems with in-house systems (and ironically the day before the BUCS email system went down and two days before GMail suffered its problems my desktop PC died and I had to spend half a day setting up a new PC!). It may therefore be desirable to develop plans for coping with such problems – and note that a number of resources which provide advice on backing up GMail have been provided recently, including a Techspot article on “How to Backup your Gmail Account” and a Techland article on “How to backup GMail“.
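For those wanting a do-it-yourself approach along the lines of those how-to articles, Gmail can be backed up over standard IMAP. The sketch below is not taken from either article: it assumes IMAP access has been enabled in the account settings, uses placeholder credentials and an illustrative output path, and simply copies one folder into a local mbox file:

```python
# Sketch of a local Gmail backup over IMAP. The account name,
# password and output path below are placeholders, and IMAP access
# must first be enabled in Gmail's settings.
import imaplib
import mailbox

def backup_mailbox(user, password, out_path, folder="INBOX"):
    """Copy every message in `folder` into a local mbox file."""
    imap = imaplib.IMAP4_SSL("imap.gmail.com")
    imap.login(user, password)
    imap.select(folder, readonly=True)       # read-only: leave the server untouched
    _, data = imap.search(None, "ALL")
    local = mailbox.mbox(out_path)
    for num in data[0].split():
        _, msg_data = imap.fetch(num, "(RFC822)")
        local.add(msg_data[0][1])            # raw RFC822 message bytes
    local.flush()
    imap.logout()
    return len(local)

# backup_mailbox("someone@gmail.com", "app-password", "gmail-backup.mbox")
```

Gmail typically exposes labels as IMAP folders, so the same function could be pointed at a different folder name to capture more than the inbox.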

But in addition to such technical problems there are also policy challenges which need to be considered. At the University of Bath email accounts are deleted when staff and students leave the institution (and for a colleague who retired recently the email account was deleted a day or so before she left). One’s GMail account, on the other hand, won’t be affected by changes in one’s place of study or employment.  In light of likely redundancies due to Government cutbacks isn’t it sensible to consider migration from an institutional email service?  And shouldn’t those who are working or studying for a short period avoid making use of an institutional email account which will have a limited life span?

Posted in General, preservation | 22 Comments »

Standards for Web Applications on Mobile Devices: the (Re)birth of SVG?

Posted by Brian Kelly (UK Web Focus) on 1 March 2011

The W3C have recently published a document entitled “Standards for Web Applications on Mobile: February 2011 current state and roadmap“. The document, which describes work carried out by the EU-funded Mobile Web Applications project, begins:

Web technologies have become powerful enough that they are used to build full-featured applications; this has been true for many years in the desktop and laptop computer realm, but is increasingly so on mobile devices as well.

This document summarizes the various technologies developed in W3C that increases the power of Web applications, and how they apply more specifically to the mobile context, as of February 2011.

The document continues with a warning:

This document is the first version of this overview of mobile Web applications technologies, and represents a best-effort of his author; the data in this report have not received wide-review and should be used with caution

The first area described in this document is Graphics and since the first standard mentioned is SVG the note of caution needs to be borne in mind.  As discussed in a post published in November 2008 on “Why Did SMIL and SVG Fail?” SVG (together with SMIL) failed to live up to their initial expectations.  The post outlined some reasons for this and in the comments there were suggestions that the standard hasn’t failed as it is now supported in most widely-used browsers, with the notable exception of Internet Explorer.  In January 2010 I asked “Will The SVG Standard Come Back to Life?” following the announcement that “Microsoft Joins W3C SVG Working Group“ and an expectation that IE9 will provide support for SVG. This was subsequently confirmed in a post with the unambiguous title “SVG in IE9 Roadmap” published on the IE9 blog.

The signs in the desktop browser environment are looking positive for SVG support.  But it may be the mobile environment in which SVG really takes off, since on the desktop Web we have over 15 years of experience in using HTML and CSS to provide user interfaces. As described in the W3C Roadmap:

SVG, Scalable Vector Graphics, provides an XML-based markup language to describe two-dimensions vectorial graphics. Since these graphics are described as a set of geometric shapes, they can be zoomed at the user request, which makes them well-suited to create graphics on mobile devices where screen space is limited. They can also be easily animated, enabling the creation of very advanced and slick user interfaces.

But will SVG’s strength in the mobile environment lead to a fragmented Web in which mobile users engage with an SVG environment whilst desktop users continue to access HTML resources?  I can recall suggestions that were being made about 10 years ago which pointed out that since SVG is the richer environment it could be used as a generic environment.  Might we see that happening?  After all, as can be seen (if you’re using a browser which supports SVG) from examples such as the Solitaire game (linked from the Startpagina Web site which provides access to various examples of SVG uses) it is possible to provide an SVG gaming environment. Might we see Web sites like this being developed?
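It is worth remembering just how little markup a scalable graphic needs. The short sketch below (with hypothetical shapes and values throughout) generates a minimal SVG document whose viewBox lets the same geometry be redrawn crisply at any size – the property the W3C Roadmap highlights for small mobile screens:

```python
# Generate a minimal SVG document. The circle and label are
# illustrative; the key point is the viewBox, which makes the
# graphic resolution-independent.
import xml.etree.ElementTree as ET

svg = ET.Element("svg", xmlns="http://www.w3.org/2000/svg",
                 viewBox="0 0 100 100")  # user units, scaled to any display size
ET.SubElement(svg, "circle", cx="50", cy="50", r="40", fill="steelblue")
label = ET.SubElement(svg, "text",
                      attrib={"x": "50", "y": "55", "text-anchor": "middle"})
label.text = "SVG"

markup = ET.tostring(svg, encoding="unicode")
print(markup)
```

Because the output is plain XML it can be generated server-side, embedded inline in HTML5 pages or served as a standalone .svg file.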

Perhaps rather than the question “Has SVG failed?” we may soon need to start asking “How should we use SVG?“

Posted in standards, W3C | Tagged: | 1 Comment »