UK Web Focus

Innovation and best practices for the Web

Archive for July, 2009

A World Where No-One Visits Our Web Sites

Posted by Brian Kelly (UK Web Focus) on 31 July 2009

In a blog post entitled Pushing MRD out from under the geek rock Mike Ellis provides access to the slides he used in a session on “Digging into data: text and data mining” at the recent JISC Digital Content Conference.

Mike’s blog post goes on to explain his views which he helpfully summarises “I think that MRD (That’s Machine Readable Data – I couldn’t seem to find a better term..) is probably about as important as it gets“. Mike goes on to ask us to:

… be prepared for a world in which no-one visits our websites any more, instead picking, choosing and mixing our content from externally syndicated channels.

This world, in which people don’t visit Web sites to read content because the content appears in their preferred environment, is one in which I already live. The content I have an interest in reading appears on my iPod Touch, ready for me to read on the bus travelling to work in the morning. I seldom visit Mike’s Electronic Museum blog site or the other blogs on my must-read list (such as eFoundations, OUseful, The Ed Techie and From a Distance) – these appear automatically in my RSS reader.

Of course I still visit Web sites – and increasingly I am finding that the new Web sites I visit are those I am alerted to by the people I follow on Twitter. But the more traditional marketing campaigns for new or redesigned Web sites tend to have little impact on my browsing habits. Unless the content can be accessed without having to visit the Web site I am unlikely to be a regular visitor, no matter how useful the content may be.

Now we still do need Web sites – the content needs to be held somewhere. And not everyone makes use of an RSS reader. But we are finding that Web sites are sucking in content held elsewhere, perhaps using RSS. And of course the growth in popularity of mobile devices is likely to see a renewed interest in ways in which content can be accessed without having to visit Web sites and navigate the Web sites on small screens.
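For the technically minded, the mechanics of ‘sucking in’ remote content are simple: an aggregator fetches a feed and walks its items. A minimal sketch in Python (the inline feed here is invented for illustration; a real aggregator would fetch the XML over HTTP and cope with Atom as well as RSS):

```python
import xml.etree.ElementTree as ET

# A minimal, made-up RSS 2.0 fragment, as a blog platform might syndicate it.
rss = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <item><title>Post one</title><link>http://example.org/1</link></item>
    <item><title>Post two</title><link>http://example.org/2</link></item>
  </channel>
</rss>"""

def items(feed_xml):
    """Return (title, link) pairs for each item in an RSS 2.0 feed."""
    root = ET.fromstring(feed_xml)
    return [(i.findtext("title"), i.findtext("link"))
            for i in root.iter("item")]

print(items(rss))
```

The point is that once content is exposed this way, any service – a reader, a mobile app, another Web site – can repurpose it without the user ever visiting the original site.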

Mike Ellis suggests we need to rethink our approach to Web site development: “Don’t Think Websites, think data” he argues. His slides are available on Slideshare and are embedded below. Well worth reading.

Posted in Web2.0 | 1 Comment »

The Recession Has Still To Hit the Public Sector!

Posted by Brian Kelly (UK Web Focus) on 27 July 2009

Last week began with the gloomy headline in the Sunday Times Whitehall sharpens the knife for university cuts. The article began:

WHITEHALL is drawing up plans for deep cuts in the higher education budget that in the worst case would slash a fifth from university finances, funding officials have disclosed.

and went on to point out that:

If implemented, they [the cuts] would lead to the widespread closure of university departments and could cause some institutions to shut altogether.

A few days later The Times described how “Arnold Schwarzenegger [is] in last-minute deal to save broke California“. But this isn’t Arnold playing a heroic role as:

The higher education system, including the University of California, will be hit by nearly $3 billion in cuts

It seems that public sector organisations are facing the brunt of such cuts. Indeed The Times praises Arnold Schwarzenegger: “His greatest victory was standing firm and warding off tax increases“.

I’ve heard financial commentators suggest that the recession hit the private sector first, whilst public sector organisations were initially protected by 3 year funding agreements. But as the private sector slims down and closes unprofitable areas of their activities they will be in a better position to respond to the economic recovery, whilst public sector organisations begin to experience their financial difficulties. Indeed in a blog post entitled “Universities and financial crisis” the elearningspace blog reports that:

The Bank of Canada has declared that the recession is over. While the numerical indicators (small growth predicted) may support this assertion, reality will tell a different story for many people and institutions. Universities, for example, are only now beginning to feel the impact. University of California is starting with deep cuts. Canadian universities are facing cuts as well. Few universities, however, face the difficulties of Harvard. Hard Times at Harvard provides a rather depressing glimpse into university systems that have lost focus and direction.

Whilst I appreciate that the Times may be accused of using a tabloid headline and language in its article, I do think we need to reflect on the implications of significant cutbacks in the education sector. Especially in light of the Conservatives’ recent success in the by-election and the headline on the front page of Saturday’s Guardian “I’ll be nation’s hate figure, says top Tory Philip Hammond” in which the shadow Treasury chief secretary “anticipat[ed] an era of deep short-term cuts in public spending“.

We can’t say that JISC has failed to provide support for such a gloomy future: they did, after all, commission work on Scenario Planning which was originated by the JISC’s Users and Innovation programme, and further developed by JISC infoNet in partnership with Netskills with the aim of “providing a sustainable online resource as well as a range of workshops for the sector“.

My scenario, based on these recent reports: “The higher education sector has to deal with severe cuts in its funding, at a time when the weaker Web 2.0 companies have gone to the wall, leaving the stronger companies well-placed to deliver services on a global scale”. How should we plan to respond to this increasingly likely-looking scenario?

Posted in Finances, General | 6 Comments »

The IWMW 2009 Blog

Posted by Brian Kelly (UK Web Focus) on 23 July 2009

This year’s Institutional Web Management Workshop (IWMW 2009) takes place at the University of Essex on 28-30th July.  In order to support the institutional Web management community we have made use of social networking environments over the past few years. Last year we made use of Ning but this year, inspired by the approaches taken at the Dev8D and the recent Mashed Library Oop North events, we have decided to make use of a blog to support the workshop.

The blog was created on 26th June but was officially launched on 10 July. Since then the blog has published introductions from UKOLN’s organisers (Marieke Guy, Natasha Bishop, Michelle Smith and myself), provided a multimedia record of last year’s event, explained the barcamps and barpicnics, summarised the plenary talks from Derek Law, Paul Boag and David Harrison and Joe Nicholls and, perhaps most importantly, provided an opportunity for the workshop participants to introduce themselves.

Additional posts will be published which are likely to be of interest to the participants who will be physically present at the event. But if you can’t attend, please note that IWMW 2009 will, once again, be an amplified event. You’ll be able to join in the discussions using the #iwmw2009 hashtag on Twitter and we also intend to provide a video stream of the plenary talks.

Posted in iwmw2009 | Leave a Comment »

This Year’s Technology That Has Blown Me Away

Posted by Brian Kelly (UK Web Focus) on 22 July 2009

About Bathcamp

The history of the Bathcamp is described by Mike Ellis on the Bathcamp Ning service:

Back on 13/14th September 2008, we ran a BarCamp in Bath called – obviously – BathCamp. It was a fun event and brought together a bunch of local (and some not-so-local) people who talked about a range of interesting stuff. Some of it was geeky, some of it wasn’t. You can read more about BathCamp over on the blog or see some Flickr pics.

After the event, I had a think about what we could do to keep the momentum of BathCamp going, without (necessarily!) having to organise another BarCamp any day soon. I did a survey, and a large bunch of people seemed interested in meeting up more regularly.

Last night’s Bathcamp, held in conjunction with the Bath-based Carsonified company, was entitled “BathCamponified: 3 minutes, one technology…“. The Bathcamp participants were invited to identify “the one technology that has blown you away more than any other in the last year, and [describe] why?“. The challenge was, in three minutes or less, to “tell us about your chosen technology: why it has changed your life, the way you work or ways in which it has improved the world“. As there was a promise of a free bar and a prize I decided to miss my normal Wednesday night rapper sword practice and summarise the one technology which has changed my life this year. For those of you who weren’t there, here is a summary of the script I’d prepared.

The Technology That Has Transformed My Life in 2009

As there’s a prize at stake I’ve decided to go for a crowd-pleaser for the geeky Bathcamp audience. It’s a technology that is close to my heart. It is [takes phone from shirt pocket] my HTC Magic Android mobile phone.

And as I’m sure you know it has an open source operating system. I decided to get the phone after reading a blog post about it written by Dave Flanders who works for the JISC. Dave described the features of the phone, and concluded by arguing that you should get the phone for ethical reasons.

Now I have to confess – I’m not as ideologically pure as Dave – or, I suspect, many of you. I got the phone for free, and simply had to upgrade my voice-only contract from £15 to £20, which includes data. OK, the device which has transformed my life this year may be free (as in open source software) but is also cheap (as in the costs of the device and the monthly contract).

And I can download applications from anywhere. I avoid the censorship of the single source for applications. Yes I can download music with rude words which certain other companies will block for fear of offending the sensitivities of the American mid-west. This is a feature which I’m sure Mike Ellis (@dmje to his followers) will warmly endorse (warning, adult content!).

A camera, video camera and sound recorder were supplied with the phone. I’ve also installed GPS software, Shazam, an Augmented Reality browser and the Qik live-video streaming application. OK, I’ll admit, the results from Qik weren’t great. Well, they were pretty poor. Some might even say unusable. But it’s open source, so let’s not quibble about minor details.

I’ve also installed a couple of Twitter clients – so if I have problems with one I can always use the other. I should apologise, by the way. If you follow me (briankelly) on Twitter and you sometimes see a half-composed or misspelled tweet I’m (probably) not drunk – it’s just the Magic’s virtual keyboard and annoying auto-correct feature. Oops, sorry, I’m getting a bit off-message. It’s probably my fault – I’ve got the wrong size fingers for the phone or I’ve got used to tweeting on my iPod Touch.

I ought to confess that I also own an iPod Touch. It’s easy to use. I can easily install new applications. It has WiFi, so I can connect to the Internet. I can – and indeed have – installed Skype, which I used when I was in Australia earlier this year.

Now it did occur to me that if you were to take the telephony aspect of the Android device and couple it with the usability of the iPod Touch, you could create a market leader. But that, I fear, would be dangerous. The ease of use would appeal to the naive and gullible. But us geeks know about the dangers of walled gardens, single providers of hardware and device lock-in to single network providers. We know we don’t want to unleash a twenty-first century Microsoft into the mobile world.

And although we may be geeks, we also care about non-geeks – so we know that ‘jail-breaking’ isn’t an ethical or scalable solution to vendor lock-in.

So join in with me and rejoice in the technology which has blown me away this year.

Embrace the system error messages which pop up from time to time. These remind you that your phone is a computer and not a fashion accessory! Smile, as I did, when I upgraded the NewsRob RSS reader at the message “Version 2.5.1 Fixed an issue where Mark All Read marked too many articles read“.

Exercise your brain: see if you can work out how to use the Augmented Reality app.

Remember the Android device is for clever people!

Become part of a thriving community: tell me how the application you find cool works and I’ll tell you about the application that I’ve eventually mastered.


Note: My slides from last night are available on Slideshare. In addition a video clip of part of my talk is available on YouTube (part 1 and part 2). I’m afraid I was over the time limit as I was so passionate about the technology I described!

But Seriously

I failed to win a prize last night (but congratulations to my colleague Julian Cheal who won a ticket to FOWD Tour Bristol) – I’d forgotten that most of the people at the event were proud owners of an iPhone!

But seriously, doesn’t the popularity of the iPhone amongst many software developers, including those who are supporters of open source software, tell us something about the limitations of open source software? And it’s not just me who feels the Android device is flawed – Tony Hirst recently commented: “A few weeks ago, I got my first ‘real’ mobile phone, an HTC Magic (don’t ask; suffice to say, I wish I’d got an iPhone:-(“

As someone said last night, open source software might be fine for server applications, but the user interfaces often appear clunky. Does the open source development community or open source development processes fail when it comes to developing applications to be used by non-techies?

Posted in Gadgets | 8 Comments »

Depositing My Paper Into the University of Bath Institutional Repository

Posted by Brian Kelly (UK Web Focus) on 21 July 2009

I recently mentioned that my paper on “From Web accessibility to Web adaptability” had been published in a special issue of the Disability and Rehabilitation: Assistive Technology journal. Shortly after receiving the notification that the paper had been published I deposited the author’s version of the paper in Opus, the University of Bath Institutional Repository. As I had attended a short training course on use of Opus (which uses the ePrints repository software) a few hours before uploading the paper to the repository I decided to time how long it took to complete the process.

I discovered it took me 16 minutes to do this. As someone who responded to my tweet pointed out, this seemed too long. I subsequently discovered that I had mistakenly chosen the New Item option – as a DOI for the paper was available I should have selected the Import Items option (not an intuitive name, I feel). In addition I copied the list of 46 references and tried to apply some simple formatting (line breaks between items) to the list and also to the abstract. This was a mistake, as any line breaks appear to be ignored.

In order to understand what I should have done, I went through the deposit process a second time and this time recorded my actions, with an accompanying commentary as a screencast which is available on YouTube and embedded below.

The video lasts for 10 minutes and the deposit process took 7 minutes (although this includes the time taken in giving the commentary and showing what I did the first time).

It does occur to me that it might be useful to make greater use of screencasting not only as a training aid for institutional repository staff to demonstrate the correct processes for depositing items but also to allow authors themselves to show and describe the approaches they take. I’m sure that some of the mistakes I made are due to limitations of the user interface and I won’t be alone in making such mistakes. Indeed having shown this video to the University of Bath’s institutional repository manager she commented:

I’ve also noticed, from your video a few issues that should be fixed, so it was helpful to see.

Why aren’t we making more screencasts available of user interactions with the services we develop, I wonder? And why aren’t we sharing them?


Note: Just to clarify, this post was intended to encourage users to describe (openly) their experiences of using services such as repositories, and to share these experiences. The video clip is not intended as a training resource on how to deposit an item in a repository! [24 July 2009]

Posted in Repositories | 13 Comments »

“From Web Accessibility To Web Adaptability”: A Summary

Posted by Brian Kelly (UK Web Focus) on 20 July 2009

I recently announced that a paper on “From Web accessibility to Web adaptability” by myself, Liddy Nevile, Sotiris Fanou, Ruth Ellison, Lisa Herrod and David Sloan has been published. I also said that, due to copyright restrictions, access to this article will not be publicly available until next year, when it will be released from the embargo on the University of Bath institutional repository.

David Sloan, who also edited the special issue of the Disability and Rehabilitation: Assistive Technology journal which published the paper, has written a brief summary of the paper:

A review of web accessibility from an organisational and policymaker’s perspective. This paper focuses on ways to strike a balance between a policy that limits the chances of unjustified accessibility barriers being introduced in web design while also providing enough flexibility to allow the web to be used in a way that provides the best possible user experience for disabled people by acknowledging and supporting the diversity of and the occasional conflicts between the needs of different groups.

In this post I will give an extended summary of the ideas and approaches outlined in our paper.

The paper begins by adopting the UN Convention’s view that “disability results from the interaction between persons with impairments and attitudinal and environmental barriers that hinders their full and effective participation in society on an equal basis with others“. Disability is therefore a social construct and not an attribute of an individual. In particular, resource accessibility is the matching of a resource to an individual’s needs and preferences – and is not an attribute of a resource.

From this perspective we see the limitations of the WAI‘s approach to accessibility, which regards accessibility as a characteristic of the resource (which should conform to WCAG guidelines) and the tools used to create the resource (which should conform to ATAG guidelines) and view the resource (which should conform to UAAG guidelines). In a previous paper we have described in more detail the limitations of the WAI approach to accessibility (see Forcing Standardization or Accommodating Diversity? A Framework for Applying the WCAG in the Real World) and here we describe the limitations of what we call ‘Web accessibility 1.0‘ in the context of the UN Convention.

The paper reviews the holistic approach to Web accessibility which we have described in several papers previously (see Implementing A Holistic Approach To E-Learning Accessibility, Holistic Approaches to E-Learning Accessibility, Accessibility 2.0: People, Policies and Processes and Reflections on the Development of a Holistic Approach to Web Accessibility). The approach, which we refer to as ‘Web accessibility 2.0‘, explores accessibility in a number of areas which are more challenging than the simple provision of information, such as access to e-learning and cultural resources.

We then describe an approach which we call ‘Web accessibility 3.0‘ in which access to resources can be personalised to match an individual’s needs and preferences. As described in our paper Accessibility 2.0: Next Steps For Web Accessibility, instead of seeking to ensure that all resources are accessible to all potential users (an approach which the evidence suggests is not a realistic goal), this approach aims to provide resources and information about them that enables users or automated services to construct resources from components that satisfy the individual user’s accessibility needs and preferences.
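To make the idea concrete, here is a toy sketch (all of the profile keys and component metadata are invented for illustration, and say nothing about how a real system would model preferences) of how a service might select, from several variants of a resource component, the one that best matches an individual’s declared preferences:

```python
# A toy sketch of the 'Web accessibility 3.0' idea: pick, for each content
# slot, the component variant whose declared features best match an
# individual's stated preferences. All metadata here is invented.

def pick_variant(variants, preferences):
    """Return the variant sharing the most declared features with the preferences."""
    return max(variants, key=lambda v: len(v["features"] & preferences))

video_variants = [
    {"id": "video-plain",     "features": set()},
    {"id": "video-captioned", "features": {"captions"}},
    {"id": "transcript",      "features": {"captions", "text-only"}},
]

print(pick_variant(video_variants, {"captions"})["id"])   # → video-captioned
print(pick_variant(video_variants, {"text-only"})["id"])  # → transcript
```

The shift is that accessibility becomes a property of the match between the user and the assembled resource, not a pass/fail attribute of a single fixed resource.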

The paper accepts that the labelling of these different approaches (which has parallels with the ‘Web 2.0′ and ‘Web 3.0′ terms) can be confusing: for many it would imply that Web accessibility 1.0 and 2.0 are now obsolete. This is not the case: there will still be a need for certain types of informational resources (a bus timetable, for example) to conform with WCAG guidelines, and the Web accessibility 2.0 and 3.0 approaches can complement each other.

We have therefore coined the term ‘Web adaptability‘ to describe an approach which attempts to support the “interaction between persons with impairments and attitudinal and environmental barriers that hinders their full and effective participation in society on an equal basis with others”.

The paper provides four case studies which illustrate how a Web adaptability approach is being used:

Support for users with learning disabilities: An example is provided of a project at the University of the West of England which developed an e-learning system for people with learning disabilities. The approach taken is to engage the end users in the design and development of the system, rather than the application of WCAG guidelines. A decision was taken “not to try to create a system and content that are universally accessible, but rather to try to maximise the usefulness and usability for a specific audience of learning users with particular permanent disabilities“.

Adaptability for the deaf:  This example illustrates the inappropriateness of the medical model of disabilities which underpins the ‘Web accessibility 1.0′ approach. The deaf community itself recognises both the medical and cultural model of Deafness (and note that the capital D is used to distinguish them as an ethnic community, just as we would use a capital E for English). The case study (which is described in an article on Deafness and the User Experience published on A List Apart) reinforces the merits of the ‘Web adaptability’ approach which can apply a cultural rather than a medical definition of deafness.

Adaptability in a government context: The challenges of applying best practices when faced with limited resources and timescales form the basis of the third case study. This example considers the decisions taken in an Australian government organisation and the challenges of addressing several constraints: government policies, budgetary measures, specific deadlines to meet legislative requirements, and the availability of staff with the expertise to develop accessible solutions. The ‘Web adaptability’ framework supported a holistic and pragmatic approach to these challenges by enabling both usability and accessibility issues to be addressed and appropriate solutions to be deployed on time and within budget.

Adaptability and institutional repositories: Increasing numbers of universities are providing institutional repositories in order to enhance access to research publications and to preserve such resources for future generations. However many of the publications will be deposited as a PDF resource, which will often fail to conform with accessibility guidelines (e.g. images not being tagged for use with screen readers; text not necessarily being ‘linearised’ correctly for use with such devices, etc.). Rather than rejecting research publications which fail to conform with accessibility guidelines the Web adaptability approach would support the continued use and growth of institutional repositories, alongside an approach based on advocacy and education on ways of enhancing the accessibility of research publications, together with research into innovative ways of enhancing the accessibility of the resources.

The paper addresses some of the criticisms which may be made of the Web adaptability approach, such as ‘doesn’t the Web adaptability approach allow organisations to disregard accessibility considerations?’ and ‘if WCAG conformance isn’t mandated in law, won’t organisations simply ignore accessibility issues?’

Other questions include: how does one specify accessibility requirements in a tender document? And how does an organisation audit its resources for accessibility?

We describe how we regard the WCAG 2.0 guidelines as a valuable resource for enhancing the accessibility of resources. The guidelines should be used if they can be applied in a cost-effective way and if they do not detract from the core purpose of the service.

We also point out that legislation isn’t the only driver for implementing best practices – and indeed focusing on legal requirements can be counter-productive: if case law were subsequently to reject WCAG conformance in a test case (after all, the RNIB home page doesn’t conform with the guidelines), this would undermine WCAG as a key component for enhancing the accessibility of Web resources.

Rather than relying on the threat of disability legislation to ensure organisations enhance the accessibility of their Web services, we describe a range of other drivers such as peer pressure, cultural pressure, user engagement, maximising business opportunities, corporate social responsibility and reputation management.

The paper concludes by describing the areas in which standardisation is beneficial. Since we have adopted the UN’s perspective on disability as a social construct and not an attribute of an individual or the resource, we feel that standardisation work should focus on the practices which facilitate the “interaction between persons with impairments and attitudinal and environmental barriers that hinders their full and effective participation in society on an equal basis with others“. The BSI PAS 78 on “Guide to good practice in commissioning accessible websites” provided a good example of a code of practice which documented best practices for the commissioning of accessible Web sites. The draft BSI PAS 8878 on “Web accessibility. Building accessible experiences for disabled people” has the potential to build on this, although, as I pointed out earlier this year, the initial draft placed too great an emphasis on the potential of the newly arrived WCAG 2.0 guidelines, rather than documenting proven best practices.

I will conclude this summary of the paper by repeating the final paragraph of the paper:

[This paper] argues for the adoption of a Web adaptability approach which incorporates previous approaches and, perhaps more importantly, embraces the future, including technical innovations, differing perceptions of what is meant by accessibility and real world deployment challenges.

Your views and feedback are welcomed.

Posted in Accessibility | 24 Comments »

“From Web Accessibility to Web Adaptability” Paper Published

Posted by Brian Kelly (UK Web Focus) on 17 July 2009

I’m pleased to report that a paper on From Web Accessibility to Web Adaptability has been published in the Disability and Rehabilitation: Assistive Technology journal. The full citation details are:

From Web Accessibility to Web Adaptability, Kelly, B., Nevile, L., Sloan, D., Fanou, S., Ellison, R. and Herrod, L.
Disability and Rehabilitation: Assistive Technology, Volume 4, Issue 4, July 2009, pages 212-226.
doi:10.1080/17483100902903408
http://www.informaworld.com/smpp/content~db=all~content=a912788469

I’ll summarise the contents of this paper in a subsequent post. For now I thought it would be worth describing how this paper came to be written.

I, along with the other authors of papers published at the W4A 2009 event, was invited to submit an updated version of my paper, entitled “One World, One Web … But Great Diversity“, although there was a requirement that the requested paper would be substantially different.

I received this invitation in early January 2009, with a deadline of early March. As I had been invited to give the opening plenary talk at the OzeWAI 2009 conference in January and was already thinking about further developments to the holistic approach to Web accessibility I had been involved in developing over the past 5 years or so, this invitation provided an ideal opportunity to put down in writing the approaches I intended to talk about at the OzeWAI conference.

As I have described previously, immediately following the talk I received tweets from two participants at the conference saying how valuable they found my talk and wished to have further discussions about the ideas I had described.

Following those further discussions I invited Ruth Ellison and Lisa Herrod to provide case studies based on their involvement in Web accessibility work in Australia as examples of the ‘Web adaptability’ approach which the paper describes.

Although I was a bit grumpy at having to submit the final edits to the paper over Easter, I’m pleased that our paper has been published. And the ideas described in the paper were strengthened by the concrete examples provided by Ruth and Lisa. A good example of how Twitter can help in bringing together people with shared interests who can then engage in publishing a paper in a peer-reviewed journal :-)

The other aspect of the process which I was pleased with was the two pages of comments we received from the anonymous reviewer of the first draft of our paper. The reviewer pointed out a number of weaknesses in our arguments, challenged us to justify a number of our assertions and queried whether our criticisms of the traditional approaches to Web accessibility could be interpreted as suggesting that institutions could ignore accessibility considerations. Our responses to these comments helped us to submit a much-improved final version to the publisher – and we were pleased when the reviewer warmly endorsed the final version.

The paper is available on the publisher’s Web site. In addition my version of the paper is available on the University of Bath Institutional Repository.  Unfortunately, due to copyright restriction, access to this version is embargoed until next year :-(

Posted in Accessibility | 5 Comments »

The Network Effect Is Missing From The Standards Debate

Posted by Brian Kelly (UK Web Focus) on 15 July 2009

In a recent post I asked “Do We Want A Standards-based Voice/Video Service?“. The post suggested that the failure of the JANET Talk service to gain significant support or interest provided evidence of the failure of a development approach based solely or primarily on support for open standards.

In a response to the post, Nick Skelton provided his explanation for why JANET Talk didn’t take off: the lack of positive network effects. Nick pointed out that as a network grows “its usefulness increases in proportion to the number of potential connections between people in the network – the square of the number of people“. Nick felt that JANET Talk’s failure was inevitable as it “was only for people in UK HE to talk to others in UK HE“.
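Nick’s arithmetic is easy to check: the number of potential pairwise connections among n people is n(n-1)/2, which grows roughly with the square of n. A quick sketch:

```python
# The arithmetic behind the network effect: potential pairwise connections
# grow with the square of the number of participants.

def potential_connections(n):
    """Distinct pairs among n people: n(n-1)/2."""
    return n * (n - 1) // 2

print(potential_connections(10))   # → 45
print(potential_connections(100))  # → 4950: 10x the people, ~100x the links
```

So a service confined to one sector starts with a structural handicap against a global one, before any questions of features or usability arise.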

Although Nick’s point specifically addressed telephone networks I feel his arguments are also applicable to social networks in general – an argument I made at the JISC Digitisation Conference back in July 2007 in a talk on “Globalisation Of Social Networks and Networked Services“.

We are now beginning to appreciate the importance of the network effect in a range of application environments – saving bookmarks used to be a function of the user’s browser, but now we are seeing the advantages of social sharing services such as del.icio.us.

But this seems to be missing from the approaches which have been taken to support IT development activities. In a post about the JISC e-Framework, for example, Andy Powell questions whether the e-Framework is of “any value to anyone“. In a response Wilbert Kraan felt that we can’t “forget about [the e-Framework] and pretend it never happened“ – rather there’s a need to “look at what went well and why and what went wrong and why“. And this is equally true when considering the failure of open standards to live up to their expectations.

We need a better model for the adoption of open standards in our development activities since the current approach, which tends to assume that an open standard from a trusted and mature standards body will inevitably be accepted by the marketplace, is clearly flawed. And the network effect would appear to be a significant aspect in solutions which do become widely deployed and used.

Posted in standards | 3 Comments »

Do We Want A Standards-based Voice/Video Service?

Posted by Brian Kelly (UK Web Focus) on 8 July 2009

Last year JANET(UK) launched a trial of a voice, video and collaboration application called JANET Talk. As described in JANET News No.8 June 2009 (PDF format):

The aims of the trial were to understand the precise requirements and service provisioning model for an ‘on net’, standards-based SIP service that could be used for communication between JANET users via a software PC client interface, mounted on the user’s PC or a SIP-based traditional phone handset“.

A survey of potential users also “showed a requirement for a feature rich collaboration tool for exclusive use by JANET connected users that didn’t use peer-to-peer technology“.

Sounds good, doesn’t it? A standards-based solution should avoid the problems caused by use of proprietary services, and access would be available both on a PC and on a phone handset supporting the SIP (Session Initiation Protocol) standard. Who, apart possibly from Macintosh and Linux users, who seem to have been excluded from the trial, would not wish this trial (which attracted over 100 institutions) well and look forward to deployment of the service across the JANET community?

However, as described in JANET News:

The results from both trial feedback and market research showed that the appetite for a service like JANET Talk had diminished. The reasons cited include a preference for alternative solutions that are now available from the commercial sector. These solutions were deemed easier to use, reliable and free.

So now we know. Users don’t care about standards. Users care about solutions that work, are easy to use and, ideally, are free!

I know this is true for me, as I was an early adopter of Skype. At one stage use of Skype was frowned upon here at the University of Bath due to the load it could place on the campus network, as well as concerns about its proprietary nature and licensing conditions. However over time the local network team deployed solutions to manage the network load and we now seem to have happy Skype users, such as myself.

The University has also deployed a SIP solution which is available on SIP-compliant phones in various halls of residence. I must admit that when I heard about this offering I was interested. Was there a service based on open SIP standards which would enable me to talk to others without being constrained by a particular client? Sadly it seems that with the Freewire service used at Bath, calls are free “when they’re made from one Freewire user to another” although you can “download the Freewire Telephone software for nothing“. But if you want to talk to someone on another service (Skype, for example) you’ll have to pay for the call :-(

So let’s remember, open standards don’t always succeed. And users may reject standards-based solutions in favour of other alternatives. There are risks in investing in open standards. And there should be lessons to be learnt from examples such as this. But I sometimes feel that we will ignore evidence which does not fit in with established dogma.

Posted in standards | Tagged: , , | 5 Comments »

Thoughts About Dopplr and the Environment

Posted by Brian Kelly (UK Web Focus) on 7 July 2009

I’ve been using Dopplr for a couple of years now, and have used it to keep a record of my substantial work trips over the last three years.

Wikipedia describes the service as “a free social networking service, launched in 2007 that allows users to create itineraries of their travel plans and spot correlations with their contacts’ travel plans in order to arrange meetings at any point on their journey“.

Although there is a social aspect to the service (I can share my trips with others), the aspect of particular interest to me is the way it can be used to track the carbon costs of one’s trips.

Dopplr display of my carbon usage

Could we envisage a future in which institutions are required to account for the carbon emissions associated with travel by members of staff, with targets for reducing the amounts? And possibly the contracts for JISC-funded projects could require projects to report on the carbon costs of the travel associated with project-funded activities.

If this did happen I hope that rather than developing an application for aggregating such data from scratch, the potential of existing services, such as Dopplr, was explored. And this is something we can be doing now. Now although I know I can share this information with others, I wonder if I can export the carbon data (which is created by the AMEE service) for use by other applications?  And what about the traveller’s individual sensitivities? We can appreciate why one might not wish information about futiure trips to be made publicly available (so opportunistic burglars can’t find out when your home might be empty) but what about the carbon costs? Is this something we should be more open about (as the general public expect MPs to be with their expenses claim)? And if so, who will be the first?
If this did happen I hope that, rather than developing an application for aggregating such data from scratch, the potential of existing services, such as Dopplr, would be explored. And this is something we can be doing now. Although I know I can share this information with others, I wonder if I can export the carbon data (which is created by the AMEE service) for use by other applications? And what about the traveller’s individual sensitivities? We can appreciate why one might not wish information about future trips to be made publicly available (so opportunistic burglars can’t find out when your home might be empty), but what about the carbon costs? Is this something we should be more open about (as the general public expect MPs to be with their expenses claims)? And if so, who will be the first?
Posted in Web2.0 | Tagged: | 5 Comments »

Enthusiastic Amateurs and Overcoming Institutional Inertia

Posted by Brian Kelly (UK Web Focus) on 6 July 2009

I was very pleased but also slightly embarrassed when Dave Pattern invited me to speak at the Mashed Library UK 2009 event (also known as ‘Mash Oop North‘). Pleased because this event, which is building on the success of the  first event which took place at Birkbeck College in November 2008, reflects the interests I have in this area and will provide an opportunity to learn from some of the people (such as Tony Hirst, Mike Ellis and Dave Pattern) who are actively engaged in significant development activities. But embarrassed because I’ve been asked to speak to an audience who would, I suspect, prefer to listen to and talk to the gurus of mashup developments!

Dave convinced me, however, that as a significant number of participants at the event don’t regard themselves as mainstream developers, but rather as ‘enthusiastic amateurs’, there is a role to play in exploring how the learning which will take place at the event can be exploited.

So I will be giving a talk and inviting discussion on the topic of “Enthusiastic Amateurs and Overcoming Institutional Inertia“. This session will take place on Tuesday 7 July 2009. My slides are embedded below (and are also available on Slideshare). If you have any thoughts on this subject, especially if you regard yourself as an ‘enthusiastic amateur’ yourself, I’d welcome your comments. Or you may wish to participate in the Twitter back channel, using the hashtag “#mashlib09″.

Posted in Events, mashups | Tagged: | 4 Comments »

Wolfram|Alpha’s Terms and Conditions

Posted by Brian Kelly (UK Web Focus) on 3 July 2009

Wolfram|Alpha

Wolfram|Alpha is described in Wikipedia as “an online service that answers factual queries directly by computing the answer from structured data“.

Comparing Web Sites

When I discovered that Wolfram|Alpha could be used to compare Web sites I thought it would be interesting to compare the Web sites for Oxford and Cambridge Universities. From this I found that the http://www.ox.ac.uk Web site has 960,000 daily page views and 230,000 daily visitors and is ranked 6,289th, whereas the figures for http://www.cam.ac.uk are 760,000, 260,000 and 6,269 respectively.

Table comparing three blog Web sites (from Wolfram|Alpha)

Closer to home, I thought I’d compare the figures for this blog with those for the eFoundations blog, provided by Andy Powell and Pete Johnston, and Martin Weller’s Ed Techie blog – of some interest in light of recent discussions about impact metrics for Social Web services. Here I find the amazing statistics that my blog has 150 million daily page views and 53 million daily visitors and is ranked 15th of all Web sites. The eFoundations blog has 16 million daily page views and 7.3 million daily visitors and is ranked 195th, with the Ed Techie trailing way behind with 61,000 daily page views and 47,000 daily visitors, ranked 53,872nd.

Unbelievable, isn’t it? And, of course, wrong! The figures provided by Wolfram|Alpha, which they got from the Alexa.com service, seem to be based on the figures for the wordpress.com and typepad.com domains, with Martin Weller’s blog trailing as it is hosted on the typepad.co.uk domain.

So further analysis has given us a better understanding of how Wolfram|Alpha uses the statistics provided by Alexa.com. And the comparisons for the Oxford and Cambridge University Web sites may be skewed by the number of Web services in their domains.

And maybe other services which make use of such figures can be similarly skewed. Does this, I wonder, have any relevance to the metrics to measure online digital reputation described recently by Martin Weller? Perhaps my unexpectedly high ranking in a list of influencers in ‘distance learning’ is due to the service which hosts my blog?

Wolfram|Alpha’s Terms and Conditions

Interesting questions which we need to ask if we are to build up a better understanding of the digital world we’re living in, the tools that can help us in our tasks and the strengths and weaknesses of such tools.

But of interest – and perhaps concern – are the terms of use for the Wolfram|Alpha service. In short it seems that, as my colleague Emma Tonkin recently pointed out to me, there are “no guarantees, no under 18s, no organised repeated access, no mashups (don’t think about accessing this service in your software). Use must be personal, ad hoc (no organised groups of users please, so don’t think about teaching or training with it) and not for a professional reason unless you buy a licence for an unspecified price (curious amateurs only please). They reserve the right to assert IP rights over anything given as input to their site if they can think of any reason for doing so. Whilst they got much of their data for free by spidering sites, they will be deeply upset if you do the same.”

In addition there is the requirement that “the results you get from Wolfram|Alpha are correctly attributed to Wolfram|Alpha itself“. The terms of use go on to say:

If you make results from Wolfram|Alpha available to anyone else, or incorporate those results into your own documents or presentations, you must include attribution indicating that the results and/or the presentation of the results came from Wolfram|Alpha. Some Wolfram|Alpha results include copyright statements or attributions linking the results to us or to third-party data providers, and you may not remove or obscure those attributions or copyright statements. Whenever possible, such attribution should take the form of a link to Wolfram|Alpha, either to the front page of the website or, better yet, to the specific query that generated the results you used.

So if I ask Wolfram|Alpha what 1+1 is, and publish the result ’2′, I must provide a link back to Wolfram|Alpha. And if I ask “What were the dates of the Second World War?” I need to provide a similar link before using the answer “1 September 1939 to 2 September 1945″.

What Should We Do?

What should we make of this? As students are encouraged to cite their sources, perhaps educational institutions should welcome the support they are getting from a commercial company? And maybe we should work with the manufacturers of calculators and require that any numerical calculations include details of the make of the calculator used. There might be sponsorship possibilities in doing this, as well as allowing teachers to spot flaws in the answers which might be due to errors on the chips in the calculators (after all, we don’t have open source calculators so, according to Peter Murray-Rust, we probably shouldn’t be using them to carry out open science).

I’m joking! But what should we do? Should we block access to Wolfram|Alpha from our firewalls? Should we simply ignore the terms, as we know that few people will bother reading them (although this story has been picked up on the Groklaw blog, Slashdot, CNet and The Register)? Or should we actively break them? After all Peter Murray-Rust recently argued that “We must reform the practice of copyright. We may be getting close to civil disobedience. Because unless we do we shall not control our future but be controlled by others.“.

Posted in Web2.0 | Tagged: | 3 Comments »

Facebook Usage by US Colleges and Universities

Posted by Brian Kelly (UK Web Focus) on 1 July 2009

I’m pleased to publish a guest blog post by Mike Richwalsky, assistant director of public affairs at Allegheny College, a small, private liberal arts college in the United States. Mike provides a US perspective on a topic which often generates heated debate in the UK – the role of Facebook in higher educational institutions.


Facebook Usage by US Colleges and Universities

First, thank you to Brian for allowing me to use this space to talk about how we at US colleges and universities are using Facebook. I’ll be presenting a session at IWMW 2009 (on cloud computing, not social media), and I’m interested to learn more about how schools in the UK and Europe are using tools like Facebook and Twitter to communicate with different audiences. Here we go…

Several years ago, in its infancy, Facebook was all the rage among students on campuses large and small across the United States. At that time, many schools were panicked about what services like Facebook and MySpace allowed students to do, often with an eye towards potential liabilities the school may face due to photos being posted, thoughts being shared, disagreements and much more.

Fast forward to today, and a large majority of schools have changed their tune about Facebook. Yes, we still worry when students post photos of themselves drinking and the like, but now we in college administrations have adopted the site as an effective way to reach students, both prospective and those students already attending our schools.

I’d like to examine how schools in the US are using Facebook and share some thoughts and experiences I’ve had from managing my school’s presence there.

First, why are schools using Facebook? First, it’s where the students are. College students today in the US live and breathe Facebook all day long. For us, using it to reach them makes sense – after all it’s a medium they are comfortable in. Second, it’s free for our institutions to use. Finally, the tools that Facebook offers have developed to the point where it’s become a compelling communication platform for us to use to reach a large number of people very easily.

Now that we’re in the golden age of social media, many colleges are developing strategic plans on how to use Facebook. At Allegheny, our adoption of this medium and the successes we’ve had have been very organic. We didn’t jump right in with a set plan, instead we started small, just creating an official page before someone else did. As we got more comfortable with the tools, we added more and more and have grown to the presence we have today.

When Facebook launched its Groups tool, many schools, mine included, created a group not only for our institution but for many offices across campus, such as career services, student life, libraries and more. The groups behaved much like they do today: we could post events, participate in discussions and more.

Eventually, Facebook created its Fan page platform, and many schools transitioned their main institutional presence from the Groups tool to the new Fan page format, which offered much of the same functionality but added new tools like video, wall posts and, most importantly, analytics.

At the time I write this, we have just north of 2,100 fans of our institution (http://facebook.com/alleghenycollege). Our largest number of fans are in the 25-34 age group, which includes graduates of the last several years, so it makes sense that number is high. The next largest group is the 18-24 group, with the 35-44 group a close third.

The smallest age group is 13-17, which is interesting since that’s an audience we actively market to – they are the college students of the near future. 2% of our college’s fans fall in that age group. While it’s great that 45 or so people have indicated they are a fan of our institution, I wonder why that number isn’t larger. Perhaps people of that age don’t want to commit to a college in this way, or they are still in the midst of their college search research and planning.

This past academic year, we actually had a student working in our office 10 hours a week who posted events and news to our Facebook fan page. The student worked under close supervision, but it worked out well for us and gave our presence some authenticity and a voice that even someone in their early 30s can’t provide.

As I mentioned, our college moved its institutional profile from a group to a fan page, but that doesn’t mean Facebook Groups are no longer used by offices on our campus.

Our most active group is a yearly “Class of” group – this year it’s the Class of 2013 group. For several years prior to this one, incoming students would create an unofficial group for their class and use it to start to get to know each other. The challenge for us as marketers and admissions folks was that we didn’t want our new students to think that group was sanctioned by the college or an official voice of the college, so in 2008, we created the official Class of 2013 group, with several people in different offices across campus serving as administrators. Now, it’s become a very useful tool for communicating quickly with that group of students. Our student orientation program leaders use it to answer questions, be a part of the conversation, post reminders, and prod the students to complete tasks like filling in necessary paperwork or registering for fall events.

We’ve also had great success with our career services group, who have used Facebook to promote employment fairs, recruiter visits and other employment-related activities on campus. They have seen program attendance increase over previous years, and Facebook has been a great way for them to reach an audience they otherwise may not have been able to contact.

Hopefully, as Facebook grows they will continue to develop new technologies and ways for us to communicate. I think they’ve done a good job of it thus far, but it highlights one of the perils of social media in general – things in this area change very quickly and without warning. It can require a bit of work to keep track of all the new features, rules and more.

Four years ago we had no idea of how to use Facebook and two years ago we didn’t know how to use Twitter. There may be a new tool that’s being developed right now that may come along and change everything we’re doing and we’ll look back and say “wow, we didn’t even think about how to use X two years ago.”


Mike Richwalsky is assistant director of public affairs at Allegheny College, a small, private liberal arts college in the United States. He is also a technology fellow at NITLE, the National Institute of Technology in Liberal Education. He has a blog at HighEdWebTech.com, is on Twitter at @mrichwalsky and Facebook at http://facebook.com/mrichwalsky.

Posted in Facebook | 9 Comments »