UK Web Focus (Brian Kelly)

Innovation and best practices for the Web

Archive for September, 2008

Institutional Repositories and the Costs Of Doing It Right

Posted by Brian Kelly on 29 September 2008

There’s an interesting discussion taking place on the JISC-Repositories JISCMail list, following a post from Jenny Delasalle who asked:

Do any of you know how long it takes you to process a single item, before it is available as a live record in your repository? Please can you share that information with the list? 

Jenny provided details of her experiences:

Here at Warwick it takes at least 2 hours to process a single item. We are adding to our repository at a rate of about 15 items per week. I’m desperate to try to speed this up as we are receiving items faster than we can process them.

My colleague Pete Cliff somewhat tentatively suggested “why not put the items in the repository with minimal metadata”.

Pete and others seemed to feel that such compromises may be needed “in the current climate where quantity seems to have more impact than quality“. But this is where I would disagree.  This argument seems to be simply a cry for more resources in an area of interest to those making such a plea. But people will always be asking for more resources for their areas of interest – and, as there will always be limited resources, others will argue that their areas are more worthy of being allocated more resources.  And it strikes me as being somewhat disingenuous to have developed an approach which is known to be resource-intensive and then to make a plea for additional resources in order for the particular approach to be effective. A more honest approach would have been to develop a solution which was better suited for the available resources.

This was an argument I made last week in my talk on “Web Accessibility 3.0: Learning From The Past, Planning For The Future”. As I described in my talk (a 30 minute video of which is available), the evidence suggests that Web accessibility policies based on conformance with WCAG AA have clearly failed, except in a small number of cases. And rather than calling for additional resources to be allocated to changing this, we need to acknowledge that this won’t happen, and to explore alternative approaches.

And it is interesting to note the apparent lack of interest on the JISC-Repositories list in discussing the accessibility of resources in the repositories, as opposed to the metadata requirements for aiding resource discovery. Indeed, when this topic was discussed a couple of years ago Les Carr, with an openness which I appreciated, argued that:

If accessibility is currently out of reach for journal articles, then it is another potential hindrance for OA. I think that if you go for OA first (get the literature online, change researchers’ working practices and expectations so that maximum dissemination is the normal state of affairs) THEN people will find they have a good reason to start to adapt their information dissemination behaviours towards better accessibility.

Here Les is arguing that the costs of providing accessible resources in Institutional Repositories are too great, and can act as a barrier to maximising open access to institutional research activities. I would very much agree with Les that we need to argue about priorities – as opposed to simply asking that someone (our institutions, the government – it’s never clear who) should give us more money to do the many good things we would like to do in our institutions.

In the case of Institutional Repositories we then have competing pressures for resources: for metadata creation and management, and for enhancing the accessibility of the resources. In this context it should be noted that the WCAG 2.0 guidelines have reached the status of Candidate Recommendation, and that the WAI Web site states quite clearly “We encourage you to start using WCAG 2.0 now”. And note that, unlike the WCAG 1.0 guidelines, WCAG 2.0 is format neutral. So you can provide resources on your Web site in a variety of formats, but such resources need to conform with the guidelines if it is your institutional policy to do so.

So shouldn’t institutions which have made a public commitment to comply with WCAG guidelines ensure that this applies to content in their institutional repositories, even if this will require a redeployment of effort from other activities, such as metadata creation?

Or, alternatively, you may feel that complying with a set of rules, such as WCAG, without doing the cost-benefit analysis or exploring other approaches to achieving the intended goals is misguided. In which case perhaps Pete’s suggestion that you might wish to consider “put[ting] the items in the repository with minimal metadata” might actually be a sensible approach rather than an unfortunate compromise? And in response to Philip Hunter’s comment that “achieving interoperability through dumbing-down the metadata has a strange attractiveness in a world not overly crazy for quality” perhaps we should be arguing that “achieving interoperability and accessibility through labour-intensive manual efforts is a perverse solution in a public sector environment in which we should be demonstrating that we can provide cost effective solutions”?

Posted in Accessibility, Repositories | 3 Comments »

Launch of UKOLN’s Resources for the Cultural Heritage Sector

Posted by Brian Kelly on 26 September 2008

Resources For The Cultural Heritage Sector

I’m pleased to report that an area of the UKOLN Web site dedicated to the cultural heritage sector has now been launched.

Historical Context

UKOLN has had close links with the cultural heritage sector for many years – when I joined UKOLN back in 1996 it was funded by BLRIC (the British Library Research and Innovation Centre) together with the JISC. Over time this funding body changed, initially to the LIC (Library and Information Commission) and then, as the library, museums and archives sectors became more closely linked, to Resource, which was subsequently renamed the MLA (Museums, Libraries and Archives Council).

Engagement With The Sector

UKOLN is perhaps uniquely placed to exploit its close links with the higher and further education communities, libraries (both academic and public) and museums and archives. Over the past couple of years I have become very actively involved in supporting the museums sector, having been a program committee member, speaker, workshop facilitator and chair at the Museums and the Web conferences in 2007 and 2008 and a speaker at UK Museums on the Web conferences in 2004, 2005, 2006 and 2007.

But perhaps more significant to the broader cultural heritage sector are the workshops we have been running which have attracted participants from across a range of museums, libraries and archives. This has included workshops held on behalf of MLA London and MLA Yorkshire and CyMAL (the Welsh equivalent of MLA). We have also run workshops for the Society of Archivists in 2007 and 2008, with a workshop for the Association of Scottish Visitor Attractions to be held in November.

Many of these workshops focus on ways in which Web 2.0 can provide benefits to the cultural heritage sector, although a rather wider perspective on the digital landscape is often provided, covering additional areas such as the preservation of digital resources.

Changing Political Context

The importance for UKOLN (which is a JISC Innovation Centre) of engaging in this way with the cultural heritage sector was highlighted in Elspeth Hyams’ editorial in CILIP Update magazine (June 2008, Vol. 7, No. 6), which has the byline “In This Climate, You Have To Innovate”. As Elspeth described (and I commented upon recently), “The age of the quiescent library or information manager or service is dead”.

The editorial goes on to describe the MLA’s action plan for public libraries and reports on calls from the MLA’s Chief Executive, Roy Clare, for “radical action on structure, far-sighted leadership vision and more Public Private Partnerships”. The editorial concludes with the warning that “It’s not just a challenge for the academic schools, but for all of us” but also suggests that “we should use tough times as a golden opportunity to focus on the strategy – and upgrade and refresh our skills”.

I think it is clear from these comments that significant changes will be needed within the cultural heritage sector. And indeed Roy Clare has commented on the failures of previous national initiatives to deliver compelling user-focussed services. As reported in a post on the MCG JISCmail list: “Roy Clare highlighted the NOF Digitise project as an example of where we went wrong in assuming that mass digitisation and online publishing of collections would be engaging“.

The political and funding changes (it seems public sector money is now being used to fund the 2012 Olympics) are taking place at a time in which Web 2.0 approaches are steadily gaining momentum, with smaller organisations (and indeed individuals) now being able to provide services which previously would have required significant amounts of funding.

The need to ensure that “engaging” digital services are provided by cultural heritage organisations underpins the workshops we have been providing. It also reflects the strategic thinking of various national bodies, including the National Library of Wales, which in its Shaping the future: The Library’s strategy 2008-2009 to 2010-2011 document (PDF format) states that:

We propose … Taking advantage of new online technology, including the construction of Web 2.0 services, to develop progressive ways of interacting with users. It is expected that the Library itself will provide only some specific services on its website. Instead, the intention is to promote and facilitate the use of the collections by external users, in accordance with specific guidelines.

A review of the uses of Web 2.0 services by the National Library of Wales was given in a talk by Paul Bevan at the first Sharing Made Simple: An Introduction to the Social Web workshop – and I’m pleased to say that Paul describes this work as a co-author of an invited paper on “Library 2.0: Balancing the Risks and Benefits to Maximise the Dividends” which I’ll be presenting at the Bridging Worlds 2008 conference in Singapore in a few weeks’ time.

UKOLN is well-positioned to identify such examples of best practices, make the examples available to wider audiences, encourage debate and use such case studies in the development of more general models for the sector. In this respect our links with the higher education sector are particularly valuable, as higher educational institutions seem to be better positioned to make early use of innovative new technologies, and have a healthy tradition of encouraging open debate on the merits of such innovation.

Resources For The Sector

The new area of the UKOLN Web site provides access to a variety of resources on a range of issues of particular relevance to the cultural heritage sector, and brings together information previously distributed across the UKOLN Web site.

As well as providing access to the events we’ll be running, another important area of the Web site is the IntroBytes area, which provides access to a range of briefing documents we have produced, sometimes in conjunction with practitioners from the cultural heritage sector. These documents are used at many of the events we run, which helps to ensure that we receive feedback on the content of the documents. It should also be noted that the documents are available under a Creative Commons licence, which permits their reuse for non-commercial purposes. This licence was chosen in order to ensure that the resources can be embedded for use within organisations in the cultural heritage sector (and beyond).


We have received positive feedback on our results, as can be seen from comments provided at the recent workshops for CyMAL (which was given a rating of 5.35 out of a maximum score of 6) and MLA Yorkshire.

In order to ensure the ongoing sustainability of our work for the cultural heritage sector we are now running workshops on a cost-recovery basis for the wider sector. This has included workshops for the voluntary sector and CyMAL with additional workshops already scheduled for CyMAL and ASVA.

If anyone would be interested in organising a workshop along the lines described, feel free to get in touch.

Posted in General | 1 Comment »

Google’s G1 Phone: “Innovation For Tech Heads”

Posted by Brian Kelly on 25 September 2008

Yesterday’s Guardian (24 September 2008) contains an article on the release of the Google G1 phone. An accompanying review, entitled “Innovation For Tech Heads” describes how the technology is “as good if not in some cases better” than the iPhone, and mentions G1’s strengths in its camera and download speed. Most importantly, though, the article describes how “The real difference between the two devices … is likely to come from the openness of Google’s operating system, Android, which allows tech-heads to design ‘widgets’ for the phone.” The article does concede that the phone lacks the “wow factor of the Apple device“.

Now I’m sure that most readers of this blog will understand the benefits provided by openness and the dangers of being locked into a proprietary system – whether this is Facebook, Microsoft or Apple’s iPhone. Some readers with a pragmatic view of the world may have bought an iPhone as at the time there wasn’t an equivalent open system. But now that the G1 device is available, which provides, unlike the iPhone, an open environment for accessing widgets, that argument is no longer valid. So we’ll soon be seeing those iPhone users who have strong beliefs in open systems and have criticised the closed nature of various Web 2.0 services seeking to move their contract, won’t we? And this should include many of the people I follow on Twitter who became very excited when they purchased their iPhone.

Is this a likely scenario? Isn’t it the case that IT professionals and policy makers can be impressed by the ‘wow’ factor – this isn’t restricted to young people, whom we sometimes accuse of being impressed by the latest ‘fad’. And don’t we all have to make judgements about openness, cost, functionality and, indeed, personal preferences? So if the iPhone, G1 or whatever other new device comes along provides a valuable personal learning environment, personal research environment, personal work environment and personal social environment for the owner of the device, then shouldn’t we accept that?

And if we accept that argument for the device that we have in our hand, then doesn’t it also apply to the equivalent services which we access via our fingertips – whether this is our preferred social networking environment or aggregation tool? Or, to put it another way, when should openness trump personal preferences?

(Disclaimer: I’m the owner of a Nokia N95 with a short battery life!)

Posted in Gadgets | Tagged: , | 13 Comments »

Web 2.0 In Troubled Economic Times

Posted by Brian Kelly on 24 September 2008

How should institutions respond in their uses of Web 2.0 services at a time of global recession? In response to a recent post, CodeGorilla pointed out that a number of participants at the Repository Fringe event had felt that use of services such as Flickr and Google should be avoided because such companies were not as well-established as many Universities.

I feel the views that were reported were rather disingenuous, not so much because not all Universities have been in existence for several centuries (BCU is a very new University) but because the services Universities provide will change and evolve over time (when I worked at the University of Technology, Loughborough – as it was known at the time – the Computer Centre provided a data preparation service which was shut down many years ago). And as I pointed out last year, “Universities, Not Facebook, May Be Facing Collapse” – indeed, when I attended a JISC CETIS conference a couple of years ago doubts were expressed by senior academics as to whether higher educational institutions in their current form will exist in 20 years’ time.

This is, of course, just speculation, as was my post in which I pointed out that standards-making organisations, such as the W3C, which are funded by membership fees, with significant contributions paid by commercial IT vendors and user organisations, may similarly be affected by the recession.

But what scenarios might we envisage happening? And what plans should our institutions be developing in case the worst case scenarios occur? Let me give my thoughts:

Externally-hosted Web 2.0 providers: What if the services provided by Google, Yahoo, etc. prove uneconomic and the services are shut down or the terms and conditions changed, with perhaps free-to-use services becoming subscription services?

Our institutions: What if the economic downturn affects the sustainability of the IT services provided within our institutions?

Our national services: What if the national services provided for our communities are similarly adversely affected, with users preferring the services provided by the global services?

Our information providers: What if the services provided by individuals within our institution, who use Slideshare, Flickr, etc., aren’t sustainable because the individuals may face redundancy, early retirement, etc.?

Our funding organisations: What if our funding bodies have less funds available, and are forced to stop or reduce the level of funding provided to national or institutional services?

Our user communities: What if our users’ expectations or interests change?

How should we respond to such dangers, given that we can’t predict which dangers, if any, will materialise? My suggestion is that we should be embracing diversity, rather than searching for a single solution which we hope will be resilient to an economic downturn. So we should avoid any exclusive deals (some time ago I heard that one institution had signed an exclusive deal with a VoIP provider which seems to mean that the institution had to ban use of Skype). And we should ensure that our data can be easily reused by other services. And we should ensure that we have data migration strategies – and that we test the data migration to ensure that it works in the way we might expect. And finally we should ensure that we have a new media literacy strategy in place so that members of our organisation, including senior managers and not just the users of our services, have an understanding of the risks associated with the services we may be using – with an understanding that the risks also apply to in-house and licensed services and applications, and not just services provided in the ‘cloud’.
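The advice on testing data migration before it is needed can be made concrete with a simple round-trip check: export your data from a service, re-import it, and verify nothing was lost. The sketch below is a minimal illustration in Python, with invented field names standing in for the kind of records (titles, URLs, tags) an institution might hold on a service such as Slideshare or Flickr:

```python
import csv
import io

def export_to_csv(records):
    """Dump a list of dicts (e.g. metadata pulled from a Web 2.0
    service's API) to CSV text, flattening the tag list."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["id", "title", "url", "tags"])
    writer.writeheader()
    for rec in records:
        row = dict(rec)
        row["tags"] = ";".join(rec["tags"])   # CSV cells hold one string
        writer.writerow(row)
    return buf.getvalue()

def import_from_csv(text):
    """Read the CSV back into the original structure."""
    records = []
    for row in csv.DictReader(io.StringIO(text)):
        row["tags"] = row["tags"].split(";") if row["tags"] else []
        records.append(row)
    return records

# The migration test itself: a round trip must preserve every record.
original = [{"id": "1", "title": "Bathcamp photos",
             "url": "http://example.org/x", "tags": ["bathcamp08", "flickr"]}]
assert import_from_csv(export_to_csv(original)) == original
```

The point is less the format chosen than the habit: run a check like this against each service you depend on while the service is still healthy, not after it has shut down.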

Posted in Web2.0 | 3 Comments »

What Can Web 2.0 Offer To The IAMIC Community?

Posted by Brian Kelly on 22 September 2008

Last week I gave an invited presentation on “What Can Web 2.0 Offer To The IAMIC Community?” at the annual IAMIC (International Association of Music Information Centres) conference. 

I gave my talk on Thursday 11th September, immediately after Nick Poole, CEO of the Collections Trust, gave the opening talk of the day on “Technology and the Future: the Crystal Ball”. In his talk Nick described how the Web of the future will be a world in which organisational Web sites are likely to be little used and will have a low profile – rather, organisations will make their information available in the places users visit. This may be a tool used by the individual (similar to the PLEs – Personal Learning Environments – or PREs – Personal Research Environments – which are of such interest in the educational sector) or the popular services users visit (perhaps Flickr for photographs, YouTube for videos or the popular social networks).

Following Nick’s presentation my talk described how national Music Information Centres could make use of Web 2.0 and the Social Web to support their organisational aims and to support the IAMIC member organisations, located in over 40 countries worldwide.

When I prepared my talk I had come across a number of examples of use of Web 2.0 by the national centres. The CMC (Contemporary Music Centre, Ireland) were making use of YouTube to provide easy access to video clips of interviews with contemporary composers (as illustrated) and were also making use of iTunes in a similar fashion.  It was interesting to note that CMC managed the resources on their own organisational Web site in addition to providing access via popular video and music sharing sites.  It was pleasing to see this approach to the management of resources complemented by use of a diversity of access mechanisms. It seems that the vision of the future which Nick described has already arrived, in some places at least.

There were, however, some instances of failures within the IAMIC community; I was told over coffee of the problems with the international IAMIC Web site (which had been unavailable for quite some time) and of attempts to provide cross-searching across the European sites, which seem to have been closed down after failing to live up to their promises.

But the conference participants did seem to be prepared to learn from such mistakes and there did appear to be a willingness to engage with new developments, including the social Web. I provided an example of the potential of Twitter by posting a tweet asking for “examples of Web 2.0 music services for talk I’m about to give to IAMIC members”. Responses I received a few minutes after my post included several from Pete Johnston: “For sharing own works, MySpace (obv) + prob loads more”, “Internet Archive also has lots of “2.0”-ness” and “Plus zillions of music weblogs, many sharing mp3s, aggregators like Hype Machine”, together with a note that we “Mustn’t forget MusicBrainz”.

I also received several other responses within a few minutes of my initial post from other Twitter followers, including suggestions from marydeeo, t1mmyb, MintyVanilla, MrJ1197, georgeroberts, ianibbo, gavinmitchel and egrommet, as illustrated.

Perhaps in response to my question “what can Web 2.0 offer to the IAMIC community?” one answer might be Twitter. Rather than the perhaps time-consuming process of evaluating social networking tools, maybe a simple approach would be for a group of professionals with a similar set of interests to simply write the occasional 140 character summary about what they’re doing and ask the occasional question. This works for me, I’m pleased to say.

Posted in Web2.0 | 1 Comment »

Web Accessibility 3.0

Posted by Brian Kelly on 19 September 2008

I previously mentioned a joint paper on “Redefining Accessibility for a Web 2.0 World” which has been accepted for the ADDW08 conference to be held at the University of York on 22-24th September 2008. David Sloan, the lead author, will present the paper.

In addition to this paper Liddy Nevile and myself have had a paper on “Web Accessibility 3.0: Learning From The Past, Planning For The Future” also accepted at the ADDW08 conference. This paper describes three scenarios: it explores the limitations of a vision for Web accessibility based on use of the WAI approach to provide “universal accessibility”, and then describes the limitations of the “holistic approach to Web accessibility” developed initially by myself, Lawrie Phipps and David Sloan. The paper describes how these approaches focus, in the first scenario, on the accessibility of individual resources and, in the second scenario, on institutional approaches to enhancing the accessibility of the purposes of the Web services. However neither of these approaches seems to have much relevance to the accessibility of the globally popular Web 2.0 services. And if we are serious about Web accessibility we should be looking at the accessibility of the global World Wide Web, and not just individual resources or the resources managed within our institutions.

But how should we go about addressing such large-scale challenges? In the paper we suggest that we should be exploring how the relationships between resources might help to provide users with access to related resources, and how personalisation approaches might provide users with access to resources which are accessible to the individual user, rather than being universally accessible. This vision, Liddy and I feel, can be regarded as an implementation of the W3C’s vision for the Semantic Web. But we also argue the need to apply the scepticism which failed to be applied to WAI’s model for Web accessibility.

The slides which will be presented at the conference are available on Slideshare and are embedded below.

And as we have argued the need for a critical approach to proposals for Web accessibility (an approach we have taken in the past regarding the limitations of the WAI model and the WCAG guidelines), we invite your comments on our paper and this presentation.

Posted in Accessibility | Tagged: | 5 Comments »

Killed By Complexity

Posted by Brian Kelly on 16 September 2008

“If this is the death of Wall Street as we know it, the tombstone will read: killed by complexity”, it was suggested on the front page of the Guardian today (Tuesday 16 September 2008). A similar question might be asked about the roadmap for a number of Web developments. Is Tim Berners-Lee’s vision for the Semantic Web over-complex? Are the metadata standards which are being developed too complex to be used by many software developers? The abstract for a panel session at WWW 2005 suggested that “It has been estimated that all of the Web Services specifications and proposals (“WS-*”) weigh in at several thousand pages by now”. And one of the many objections to ISO’s decision to standardise the OOXML file format was that, at 6,000 pages, it was too complex for developers in small organisations to implement.

So now’s the time for more lightweight approaches, it could be argued.

Not so, comes the counter-argument. We will need to have comprehensive, well-grounded and unambiguous standards and specifications in order to build robust services.

The current uncertainties in the financial markets of course provide more than just an analogy – they are also giving rise to uncertainties in the IT sector. This is often used as an argument to point out the dangers of dependencies on externally-hosted Web 2.0 services, as my colleague Paul Walk pointed out recently. But as I mentioned last year in a post entitled “Universities, Not Facebook, May Be Facing Collapse”, universities themselves are not immune to the financial difficulties which the banking and airline sectors are currently facing.

But into such discussions we should also add the financial stability of the standards-making organisations. Organisations which have government backing may be able to weather the storm, but what about those member consortiums whose sustainability is dependent on the financial backing of the commercial sector? And as the W3C is one such organisation, can we be confident that the development and maintenance of complex standards will be sustainable in the long run? In light of the suggestion in a recent interview with Ian Hickson, editor of the HTML 5 standard, that the standard is unlikely to become a Proposed Recommendation until 2022, should we not now be asking difficult questions regarding the sustainability of standards which seem to have such a long gestation period before they can be regarded as stable?

Or am I being unduly pessimistic? Might not any current financial uncertainties be a mere blip, which perhaps will not affect standardisation development processes along the lines I’ve hinted at? Or will a legacy of George W Bush’s economic mismanagement (or Tony Blair’s if you are of a different political hue) be the failure of the HTML 5 standard to achieve its Proposed Recommendation status by 2022?

Posted in standards | 7 Comments »

Serious Thinking at Bathcamp08

Posted by Brian Kelly on 14 September 2008

On Saturday (13 September 2008) I attended my first Barcamp – the Bathcamp08 event held at Invention Studios in Bath. I was present at the conception of this event, in a cafe in Montreal where Mike Ellis floated the idea and explored possible themes with myself, Mia Ridge and Frankie Roberto on the day after the end of the Museums and the Web 2008 conference. It was initially suggested that the Barcamp should have a focus on the role of IT and the Web in cultural heritage organisations. However, during the planning for the event it seems that this suggestion was dropped and the event didn’t have a particular single theme. What it did have, though, was a lot of enthusiasm and friendly vibes across a more diverse set of participants than I normally encounter, with freelance software developers, people working in small Web development companies and in Web design and marketing agencies, developers from large companies, as well as a handful from the academic and cultural heritage sectors.

As the attendees were mostly very active users of various Web 2.0 technologies and services, much of the discussion, comment and reflection on the event took place on Twitter using the ‘bathcamp08’ tag, with photos uploaded to Flickr and slides to Slideshare using this tag; other resources, including blog posts about the event, should also be available via the tag. There is also a Bathcamp08 Pageflakes page which aggregates the various RSS feeds associated with the event. And finally I should mention that there are a number of video recordings of the event available, including Mike Ellis’s introduction to Bathcamp08.
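For what it’s worth, the kind of tag-based aggregation a Pageflakes page performs can be sketched in a few lines of Python: parse each RSS feed, pool the items and sort them newest first. This is an illustrative sketch only – the feed content below is invented, standing in for what Flickr and Slideshare tag feeds for ‘bathcamp08’ might return – and it uses just the standard library:

```python
import xml.etree.ElementTree as ET
from email.utils import parsedate_to_datetime

def merge_feeds(rss_documents):
    """Pool the <item>s from several RSS 2.0 documents and
    return (date, title, link) tuples, newest first."""
    items = []
    for doc in rss_documents:
        root = ET.fromstring(doc)
        for item in root.iter("item"):
            pub = item.findtext("pubDate")
            items.append((
                parsedate_to_datetime(pub) if pub else None,
                item.findtext("title", default=""),
                item.findtext("link", default=""),
            ))
    dated = sorted((i for i in items if i[0] is not None),
                   key=lambda i: i[0], reverse=True)
    undated = [i for i in items if i[0] is None]
    return dated + undated   # undated items sink to the end

# Two invented single-item feeds for illustration:
flickr = """<rss version="2.0"><channel><item>
  <title>Bathcamp photo</title><link>http://example.org/photo</link>
  <pubDate>Sat, 13 Sep 2008 14:00:00 +0000</pubDate>
</item></channel></rss>"""
slideshare = """<rss version="2.0"><channel><item>
  <title>Bathcamp slides</title><link>http://example.org/slides</link>
  <pubDate>Sat, 13 Sep 2008 16:30:00 +0000</pubDate>
</item></channel></rss>"""

merged = merge_feeds([flickr, slideshare])
```

A real aggregator would fetch the feeds over HTTP and render them, but the core of the service is no more than this pooling and date sort – which is part of why such tag-based aggregation has proved so much easier to deploy than the heavyweight alternatives.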

With so many other comments about the event likely to be published soon I’ll not attempt to summarise the event, except to thank Mike Ellis (in particular) and the other organisers of the event (including Tim Beadle, Frankie Roberto, Matt Jukes and Mike’s Eduserv colleagues) for ensuring the event was such a success.

The Barcamp rules expect first-timers to participate actively at the event, and not just be passive lurkers. I had floated the idea of a double-act with Dave Briggs (whom I’ve not met but have had a couple of Twitter conversations with) on the use of Web 2.0 in public sector organisations, with a focus on the barriers rather than the potential benefits. However Dave couldn’t make the event, which meant some last minute updating of my slides for my 40 minute session, which I decided to call “Web 2.0: Time For Serious Thinking!” – a reference to a talk Mike Ellis and myself gave at the Museums and the Web 2007 conference on “Web 2.0: How to Stop Thinking and Start Doing”.

My slides are available on Slideshare and are also embedded below.

As the Bathcamp was an informal and friendly event I had the opportunity to be sceptical about our previous paper, using the example of the enthusiastic Web 2.0 developer (which I called an ‘Ellis’) who has a valuable role to play in the early stages of a new technology in getting the involvement of other developers and early adopters. However, once the initial period of excitement has died down, there’s a need for more serious thinking to take place. This will include the need to address the various barriers to the use of Web 2.0 which I have encountered in recent workshops, including, most recently, the Sharing Made Simple: An Introduction to the Social Web workshop I facilitated for organisations in the cultural heritage sector in Wales. As documented on the event wiki, the barriers for museums, libraries and archives include:

Corporate Depts (eg IT, Corporate Image etc) – need to get political partners on board to apply pressure via SMT

Need for Higher Level Education – fear of impacts of negative return from Web 2.0 – “it’s chaos”. Especially at Snr Manager level. Need for realistic risk management.

Computer Literacy (public) – would we be excluding a generation who don’t use this tech? But visual content can be more appealing to those with poor literacy.

Training/ Staff Knowledge – How do we get people’s knowledge and skills up to scratch?

Time – How do we resource this work? Who has the time?

Evaluation – how do you evaluate this work as being worthwhile? How do we get our paymasters to say that these are OK in terms of our KPIs?

Legislation & Procedures – DDA, DPA etc

Sustainability – of Software and activity. How do you work with services with which you have no SLA? How do you make sure this continues in the long term? Who might support us?

Choosing Software – how do we select the right product?

Duplication of Effort (eg. with Corporate Website) – is this a waste of time? Will it be contradictory?

Getting People to Use It – If we build it, will they come? What’s a ‘good’ level to judge ourselves against?

Abuse & Bad Publicity – How do we deal with this? What if it all goes wrong and gets in the papers? Could I lose my job?

Cost – Who pays? How?

Anyone have any suggestions as to how these barriers can be addressed? Or even comments as to whether these barriers are real?

Posted in Events | Tagged: | 1 Comment »

On the Videos from the Repository Fringe 2008

Posted by Brian Kelly on 10 September 2008

On The thoughts of a Code Gorilla blog, a post on Videos from Repository Fringe 2008 provides links to a number of videos of talks given at the Repository Fringe held recently in Edinburgh. The blog is written by a software developer who has been identified as "a free thinker" by JISC.

The post states that the videos "will be made available via a Streaming Server at some point, however this is a microsoft-specific platform, so non-windows/non-Internet Explorer users struggle to access the data". In order to maximise access to the videos, Code Gorilla has "uploaded them to google video".

As I mentioned recently in a post on Open Standards and the JISC IE, at one stage there was a fairly hard-line view that open standards must be adopted in order to provide device independence; in the case of multimedia, W3C's SMIL standard would seem particularly relevant for synchronising audio, video and other resources such as presentation files. However, as we see in this example, the vision we had several years ago has failed to have any significant impact. Instead it is popular services such as Google Video and YouTube which are being used to deliver such resources, as well as providing additional functionality, such as user comments and the ability to embed the resources in other pages, as illustrated below.
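To recall what that earlier vision looked like in practice, here is a minimal SMIL 2.0 sketch which plays a talk video in parallel with a separate audio commentary (the file names are hypothetical, purely for illustration):

```xml
<!-- Minimal SMIL 2.0 presentation: play a video and an audio track in parallel -->
<smil xmlns="http://www.w3.org/2001/SMIL20/Language">
  <head>
    <layout>
      <root-layout width="320" height="240"/>
    </layout>
  </head>
  <body>
    <par>
      <video src="talk.mpg"/>
      <audio src="commentary.mp3"/>
    </par>
  </body>
</smil>
```

In principle a SMIL-capable player (such as RealPlayer of that era) could render this on any platform; in practice few users had such players installed, which is one reason the hosted video services won out.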

It is also interesting to note that this provides a good example of a pragmatic approach to the accessibility of such resources. At one stage, when the SENDA legislation was being introduced, there was a feeling in some circles that institutions would need to remove videos from their services unless they could provide full captioning. We now, however, widely accept the view that we need to take 'reasonable measures' to provide accessible alternatives, and that removing resources does not improve their accessibility.

So my congratulations to the ‘free thinker’ who has so clearly demonstrated that the naive views that we used to have can, in circumstances such as this, be ignored in order to maximise benefits to the user and provide cost-effective solutions.

It is appropriate to embed this video of Dorothea Salo’s keynote talk at the Repository Fringe 2008, with her comments that “idealism isn’t enough” and “programmers are moving towards flexibility”.

And finally I should add that at the end of this video clip (45 minutes in) Dorothea mentions the impacts that both Paul Walk and Andy Powell are having in questioning some of the assumptions which have been made in the past regarding the technical approaches taken to institutional repositories.

We do need more ‘free thinkers’, I feel.

Posted in Accessibility, standards | Leave a Comment »

Are Institutional Portals and VLEs Really “Creepy Treehouses”?

Posted by Brian Kelly on 9 September 2008

I first came across the term "creepy treehouse" during Ewan McIntosh's plenary talk on "Unleashing the Tribe" at the IWMW 2008 event. Alan Cann mentioned it again in a recent comment on one of my blog posts, suggesting, I think, that the University of Bristol's MyBristol portal is an example of a 'creepy treehouse' which we should avoid building.

The term, according to a post on the Technagogy blog, was coined by Chris Lott. The Flexknowlogy blog has sought to provide a definition. It seems that ‘creepy treehouse’ can have the following meanings:

n. A place, physical or virtual (e.g. online), built by adults with the intention of luring in kids.

n. Any institutionally-created, operated, or controlled environment in which participants are lured in either by mimicking pre-existing open or naturally formed environments, or by force, through a system of punishments or rewards.

n. Any system or environment that repulses a target user due to its closeness to or representation of an oppressive or overbearing institution.

n. A situation in which an authority figure or an institutional power forces those below him/her into social or quasi-social situations.

Alan Cann commented that he felt that the University of Bristol's MyBristol portal "Feels more like a creepy treehouse to me. Why not just facilitate users using public tools so that they're not tied to UBris?" Following the doubts I expressed, Alan responded:
n. Any institutionally-created, operated, or controlled environment in which participants are lured in either by mimicking pre-existing open or naturally formed environments, or by force, through a system of punishments or rewards.

I rest my case.

It would seem, from this definition, that institutions which are developing services to support their students are building creepy treehouses. After all, whether it's a locally developed portal, an open source VLE or a licensed product, these institutional services are created and operated in a managed (controlled, if you will) environment, one in which participants (the students) are encouraged to use the services through the incentive of having a quality service, with the support of staff and their peers, in order to enrich their learning and maximise their potential (or, if you'd prefer the reward described more bluntly, to help them get a good degree).

And I don’t think there’s anything wrong in institutions doing this.

I do object to use of the term ‘lured’ in this metaphor, though.

And I do think that it is ironic that institutions are regarded as creating the creepy environment by "mimicking pre-existing open or naturally formed environments". And those pre-existing open or naturally formed environments would appear to be those social network and social sharing services owned by those bastions of open and democratic educational values – Google, Yahoo, Facebook, Microsoft, …

Now, as readers of this blog will know, I'm a regular user myself of many of these Social Web services. And I have found that such services can provide better services than those hosted by my institution. But if my institution does start to provide services which can compete with the externally-hosted ones then I would have no problem in using them – especially if this means I no longer have to concern myself with changes in conditions or the sustainability of the service provider, which is something I need to be aware of in my use of externally hosted services, as I recently described in relation to my experiences of the Sqirl service.

And I’m also aware of the complex issues relating to use of social services to support learning. But these complexities aren’t restricted to engagement with students  – they are also relevant in other business and professional contexts.

It seems to me that the creepy treehouse metaphor, as it relates to the ownership and provision of services, is flawed for a variety of reasons. And it's also a metaphor which doesn't really work in a UK context, I feel – I never had a treehouse when I was young, and nobody I knew did either. And thinking about it, the only treehouse which means anything to me is Bart's in The Simpsons. Let's chop down the creepy treehouse metaphor and address the real issues.

Posted in Web2.0 | Tagged: | 6 Comments »

On the Demise of the Free Twitter SMS Service

Posted by Brian Kelly on 8 September 2008

Imagine the following conversation:

“Where are you going?”
“Down to the High Street. I’ve just received a message saying that there’s a guy giving away free £20 notes. Are you coming?”
“No. And you shouldn’t.”
“Why ever not?”
“It’s clearly not sustainable in the long run”
“Look, he’s clearly not got a sustainable business model.”
“And don’t try and tell me that he might be bought out by Google or Microsoft. You know that’s unlikely to happen. You can’t base your decisions on such speculative thinking.”
“Oh no.” Shuffles back to office.
“Where are you going?”
“Back to work”
“I’m pleased that I managed to persuade you not to be tempted by someone with such a clearly flawed and ill-conceived idea.”
“**** ***! All the money’s gone – and I missed out, thanks to you. And my friends picked up about £1,000.”

This came to mind after I received an email from Biz Stone on 14 August 2008 saying that:

Beginning today, Twitter is no longer delivering outbound SMS over our UK number. If you enjoy receiving updates from Twitter via +44 762 480 1423, we are recommending that you explore some suggested alternatives.

The message went on to explain that the delivery of Twitter messages (tweets) via SMS would continue in the US, Canada and India, as Twitter had negotiated business deals with the mobile phone providers in those countries. They hadn't been able to negotiate a deal in the UK, unfortunately. As the email described: "Even with a limit of 250 messages received per week, it could cost Twitter about $1,000 per user, per year to send SMS outside of Canada, India, or the US".
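As a back-of-the-envelope check on Twitter's figures (assuming, which Twitter's email does not state, that a user actually hits the 250-message weekly cap), the implied cost per SMS works out at several US cents:

```python
# Rough per-message cost implied by Twitter's stated figures.
# Assumption (mine, not Twitter's): the 250-message weekly cap is fully used.
messages_per_year = 250 * 52          # 13,000 messages per user per year
annual_cost_usd = 1000                # Twitter's estimate per user, per year
cost_per_sms = annual_cost_usd / messages_per_year
print(f"{cost_per_sms:.3f}")          # about $0.077, i.e. roughly 8 cents per SMS
```

At UK SMS termination rates of the time, that is a plausible order of magnitude, which suggests the $1,000 figure is a worst case rather than a typical one.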

Now, when you read my post on Use of Twitter to Support IWMW Events, in which I described how we used Twitter at the IWMW 2008 event to deliver SMS messages to participants for free, and then, a few weeks later, heard that this service had been withdrawn, did you think that this clearly demonstrated that organisations shouldn't make use of free services with questionable sustainability models? Or did you think: "That's an opportunity not to be missed. Let's use it while it's still going"?

Posted in Twitter | 3 Comments »

100,000 Page Views
Posted by Brian Kelly on 6 September 2008

Summary of the blog statistics

I've found it useful in the past to write about significant landmarks on this blog in order to provide some data which other bloggers may find useful in drawing parallels. And such factual data may also be useful in the various blog workshops which myself and colleagues have been running, including a workshop on "Using Blogs Effectively Within Your Library" which my colleagues Marieke Guy and Ann Chapman will be running at the ILI 2008 conference next month.

So I thought I would document the date at which the blog had reached 100,000 page views. This happened on Saturday 6th September 2008, 1 year and 10 months after the blog was launched.

Months and Years

         Jan    Feb    Mar    Apr    May    Jun    Jul    Aug    Sep    Oct    Nov    Dec   Total
2006       -      -      -      -      -      -      -      -      -      4  1,238  2,067   3,309
2007   2,528  3,055  4,059  4,387  4,321  4,389  5,876  4,063  4,181  4,675  6,607  4,514  52,655
2008   4,713  5,350  4,522  5,414  5,025  4,856  6,388  6,314  1,458      -      -      -  44,040

As can be seen, the busiest month was November 2007, primarily due to the popularity of a blog post on UK Universities On Facebook. This has been the third most popular post, behind the post on The 'Cities Visited' Facebook Application and, in second place, one on TokBox – A Useful Video-Conferencing Tool Or Something Sinister?.

It’s also pleasing to note that after an extended period of stability the number of visits to the blog has started to increase again over the past two months, as is shown in the following graph.

Of course, we still need to remember that there are lies, damned lies and blog statistics.

Posted in Blog | 2 Comments »

The George Bush and Microsoft Parallels

Posted by Brian Kelly on 5 September 2008

Back in May 2008 I published a blog post entitled George Bush IS President And Microsoft’s Office Open XML Format IS An ISO Standard which described how Microsoft’s Office Open XML (OOXML) format had been approved as an ISO standard. However, in the period between first writing the post and publishing it, South Africa, Brazil, India and Venezuela lodged appeals against this decision, claiming that the voting process was marred by irregularities. So until ISO had addressed these appeals we could say that OOXML was not an ISO standard. However, as described in an article entitled OOXML Gets Final Nod After Standards Body Rejects Appeals, ISO has now formally rejected these appeals.

The analogy I drew with George W Bush was even more appropriate than I had anticipated: just as the doubts over the legitimacy of Bush’s first election victory were eventually rejected, so the appeals against the legitimacy of the standardisation of OOXML have been rejected, with OOXML now becoming an official ISO standard. I suspect many readers of this blog would have preferred it if neither of these decisions had happened, but they have.

Whether this is the end of the matter is not yet clear: an article on CONSEGI 2008 Declaration — Open Letter to ISO Reveals More OOXML Issues published on the Groklaw site describes how South Africa, Brazil, Venezuela, Ecuador, Paraguay and Cuba have signed and sent an open letter to ISO condemning this decision. Further information about the standardisation process is available in a Wikipedia article on Standardization of Office Open XML.

But although the standardisation process may have been flawed, with, no doubt, political skullduggery going on, the technical merits of the standard questioned and the likelihood that the standard will actually be implemented by vendors doubted by some, we now, I would say, have to accept that it is an ISO standard. But, as I’ve argued recently for other reasons, we should in any case be questioning the significance and merits of open standards much more critically than we have done in the past, when slogans such as ‘interoperability through open standards’ seem to have been used to stifle discussion and debate on the extent to which open standards actually deliver their stated goals.

It was also pleasing to read the comment on my recent post from Ross Gardler, manager of the JISC OSS Watch service, in which he suggests that “it is possible to diverge from [open] standards without enforcing locking. This is a huge advantage when it takes so long for standards to be specified and agreed by committees and standards bodies”. He could, of course, have added caveats regarding the political nature of standardisation processes.

I therefore welcome Ross’s statement that “OSS Watch would be happy to explore these ideas further. Just what are the advantages and disadvantages of formalised standards against open implementations of data formats?” And over the next few weeks I will publish a number of posts in which I’ll invite discussion of standards issues.

For this post, however, I’d welcome comments specifically on the OOXML standardisation process and the implications of ISO’s decision. My view is that it’s a good thing when proprietary formats become standardised (as also happened recently with Adobe’s PDF format, whose standardisation was announced on 2 July 2008) as this can be beneficial for, for example, long-term preservation. However this doesn’t necessarily mean that the format will be appropriate in many circumstances: we need, I feel, to decouple the availability of an open standard in a particular area from the assumption that it should necessarily be deployed.

Posted in standards | Tagged: | 5 Comments »

Open Standards and the JISC IE

Posted by Brian Kelly on 4 September 2008

The Ariadne article on Lost in the JISC Information Environment has generated some interesting discussions, including my colleague Paul Walk’s post in which he suggests that all models are wrong, but some are useful, and Andy Powell’s post entitled Lost in the JISC Information Environment?.

I’ll leave the discussions on the technical architecture to others, but thought I’d pick up on Andy’s comment that:

.. the technical standards certainly were intended to be prescriptive.  I can remember discussions in UKOLN about the relative merits of such an approach vs. a more ad hoc, open-ended and experimental one but I argued at the time that we wouldn’t build a coherent environment if we just let people do whatever the hell they wanted.  Maybe I was wrong?

Myself, Andy, Pete Johnston and Paul Miller were the ones who had those long discussions about the role of open standards in the JISC Information Environment (IE). I was the person who, having been introduced to standards through my involvement with the Web from its early days, was the most adamant about the need to use open standards, where ‘open’ meant that the standard had been ratified by a trusted neutral standards organisation such as the W3C. I was therefore never in favour of standards and protocols which weren’t open in this sense, including Adobe’s PDF or Sun’s Java. On the other hand, I was always fairly relaxed about the technologies used to implement services, not being too concerned if licensed software was felt to provide advantages over open source alternatives, for example.

It was Paul Miller who suggested that my stance on open standards was too inflexible, arguing that there was a spectrum of openness rather than a fixed binary divide. As a result of Paul’s comments and subsequent discussions in UKOLN I wrote a briefing document which suggested that, rather than seeking a formal definition of open standards, we needed a more flexible approach based on an understanding of the characteristics of open standards. And the need for such flexibility became even more apparent when the success of RSS had to be balanced against the lack of formal standardisation of RSS (both 1.0 and 2.0).

And in retrospect many of the W3C standards which I had felt should form the basis of the JISC IE have clearly failed to have any significant impact in the market place – compare, for example, the success of Macromedia’s Flash (SWF) format with the niche role that W3C’s SMIL format has.

Just as the open source debate seems to have matured (and I think that the JISC OSS Watch service has helped to move that debate from the polarised opinions we were seeing several years ago) we still need, I feel, to have a much more sophisticated understanding of the role open standards have to play in development activities. And, as with the decisions institutions (and individuals) have to make regarding their use of externally-hosted Web 2.0 services, so funders, developers and project managers will need to give more thought to the risks as well as the promised benefits of use of open standards.

I’ve written, in conjunction with staff from CETIS, OSS Watch and the AHDS, a number of peer-reviewed papers on this topic (Openness in Higher Education: Open Source, Open Standards, Open Access; Addressing The Limitations Of Open Standards; A Contextual Framework For Standards; A Standards Framework For Digital Library Programmes; and Ideology Or Pragmatism? Open Standards And Cultural Heritage Web Sites). I suspect it is time to revisit this topic.

Posted in standards | 5 Comments »

Over Ten Years Of Accessibility Work

Posted by Brian Kelly on 2 September 2008

David Sloan and myself have had a paper on “Reflections on the Development of a Holistic Approach to Web Accessibility” (initially entitled “Redefining Accessibility for a Web 2.0 World“) accepted for the ADDW08 (Accessible Design in the Digital World) conference, which will be held at the University of York on 23-24th September 2008. The paper reviews our work in Web accessibility from the early days of promoting the WAI model and use of the WCAG guidelines through to our realisation of the limitations of this approach, initially in the context of e-learning accessibility and then in wider contexts. This work led to the development of alternative approaches to enhancing the accessibility of Web resources, which were published in eight peer-reviewed papers (not including the two papers which have been accepted for the ADDW08 conference).

In order to collate the historical data for the paper I created a Dipity timeline of my involvement in accessibility work since attending the WAI launch meeting in July 1997. This is illustrated below.

I found the timeline very useful in giving me a bigger picture of my work in this area, providing fresh insights which I would not have gained from just looking at my lists of papers and presentations. In particular I can spot several distinct phases in my work, which are summarised in the table below.

1997-1999 (Naivety): The first few years were spent learning about the WAI approach to Web accessibility, including the WCAG, ATAG and UAAG guidelines. Advice was provided based on this approach. During this time I was also a member of the DISinHE Steering Group.

2000-2001 (Silence): The timeline indicates little activity in this period. Perhaps there was little new to say, as the view then was that WCAG conformance was all that Web developers needed to concern themselves with. In that case, best practices would primarily be a training issue to be carried out by bodies such as Netskills, rather than a development/innovation activity of the kind which is a key aspect of UKOLN’s work.

2002 (Evidence-gathering): During 2002 a number of automated accessibility surveys were carried out in order to gather evidence of institutional adoption of the WCAG guidelines. The findings showed low levels of conformance, and as further manual testing would be needed in order to provide proof of conformance with the WCAG guidelines, it was starting to become clear that the WCAG approach was failing to have an impact amongst practitioners, despite its clear political success.

2003 (Debating alternative approaches): Panel sessions on “Web Site Accessibility: Too Difficult To Implement?” at the ILI 2003 conference and “Web Accessibility: Will WCAG 2.0 Better Meet Today’s Challenges?” at the WWW 2003 conference, together with a debate on “Web accessibility is difficult to implement”, provided opportunities to raise doubts over the effectiveness of the WAI approach.

2004- (Alternative approaches for e-learning accessibility published): Lawrie Phipps (then at TechDis) and I discussed alternative approaches for e-learning accessibility and, together with Elaine Swift (then an e-learning developer at the University of Bath), had a paper on Developing A Holistic Approach For E-Learning Accessibility published in the Canadian Journal of Learning and Technology. These ideas were further developed in a prize-winning paper on “Implementing A Holistic Approach To E-Learning Accessibility” presented at the ALT-C 2005 conference and a paper on “Holistic Approaches to E-Learning Accessibility” published in the ALT-J journal in 2006.

2006- (Alternative approaches to Web accessibility framework published): A paper on “Forcing Standardization or Accommodating Diversity? A Framework for Applying the WCAG in the Real World” was presented at the W4A 2005 conference. This paper was co-authored by myself, Lawrie Phipps and David Sloan, who have been the main driving force behind this work. Further papers which developed our holistic framework for accessibility and applied the approach beyond e-learning accessibility were published at the W4A 2006 (“Contextual Web Accessibility – Maximizing the Benefit of Accessibility Guidelines“), W4A 2007 (“Accessibility 2.0: People, Policies and Processes“) and W4A 2008 (“One World, One Web … But Great Diversity“) conferences.

2006- (Alternative approaches to Web accessibility disseminated): From 2006 to date the alternative approaches to Web accessibility have been disseminated to UKOLN’s core communities, including the UK’s higher and further education communities, libraries, museums and public sector organisations. This work has included taking part in a panel session on “Web and Access” at the “e-Access’06 Conference“, chairing a Public Sector Conference on Accessibility, helping to organise the Accessibility Summit II, giving a talk on “The Accessible Web” at the “Web Adept: Museums and the Web 2007 conference”, facilitating a session on “What Does Accessibility Mean To The Blogging Community?” at that conference, facilitating a professional forum on “Accessibility 2.0: A Holistic And User-Centred Approach To Web Accessibility” at the Museums and the Web 2007 conference, giving an online interview on “Web Accessibility” in an Access to Experts interview organised by CHIN, contributing a chapter on “Accessibility in the Future” for the book “Web Accessibility: Practical Advice for the Library and Information Professional”, as well as writing a series of posts on accessibility on this blog.

The timeline has helped me to gain a better understanding of my work in Web accessibility over the past decade, and of how this work (led initially by myself and Lawrie Phipps, and later supported by David Sloan) has been further developed and refined by ever-growing numbers of accessibility practitioners and researchers in the UK and Australia. So I would like to take this opportunity to thank the co-authors of my peer-reviewed papers for their contribution to this work. In order of date of publication these are: Lawrie Phipps, Elaine Swift, David Sloan, Helen Petrie, Fraser Hamilton, Caro Howell, Liddy Nevile, Ann Chapman, Andy Heath, Stephen Brown, Jane Seale, Patrick Lauke, Simon Ball, EA Draffan and Sotiris Fanou, not forgetting Stuart Smith, although the publication of that paper has been delayed.

What lies ahead, I wonder? The release of the WCAG 2.0 guidelines should provide an opportunity for institutions to rethink their approaches to Web accessibility, as these guidelines remove some of the more flawed of the WCAG 1.0 checkpoints and are, I’m pleased to say, format-agnostic. But what of the implications of the popularity of many Social Web and Web 2.0 services? And can the Semantic Web finally start to provide useful benefits to the user community, including accessibility benefits? These are some of the questions which Liddy Nevile and myself will be raising in our paper on “Web Accessibility 3.0: Learning From The Past, Planning For The Future”, which will also be presented at the ADDW08 conference. More of that work in a later post.

Posted in Accessibility | 1 Comment »

Guest Post: You’ve Got A Friend

Posted by ukwebfocusguest on 1 September 2008

It has been a while since I have had a guest post published on the UK Web Focus blog. But as I am very keen to encourage debate on the role of Web 2.0 within our institutions, I would like to welcome Hannah Hiles as a guest blogger.

Hannah Hiles has been Media & PR Officer for Keele University in Staffordshire since August 2006. Previously she was Keele’s Alumni Officer and before joining the University she was a journalist at The Sentinel newspaper in Stoke-on-Trent. Her views are her own and not necessarily those of Keele University.

Keele University has been exploring the potential for communications and connections that can be found in Web 2.0 technologies.

In just 16 months of using Facebook as a corporate tool we have developed a thriving community with links spanning the globe; it has revolutionised the way we run some events, reconnected us with dozens of “lost” alumni and provided a platform where we can interact with prospective students in their own domain.

The Keele University alumni LinkedIn group in particular provides networking opportunities for our professional graduates while at the same time allowing us to learn more about their careers and tailor our services to their needs.

And all this for just the cost of my time – we have no fancy paid-for online community platforms here.

We first started using Facebook in January 2007. One of our graduates had created a group called Keele Alumni and we thought we should get in there with our own official group, so the Keele Society group was born. We didn’t go through any committees or get approval from anyone; we just recognised the potential and seized the opportunity, little knowing how quickly Facebook would grow within just a few months.

We soon added our official Keele University Page, as well as the Keele-network-only Love:Keele group to help me find student case studies.

One of the most exciting uses of Facebook for me has been the creation of groups aimed at prospective students. The Keele 2008 and Keele University 2009 groups have proved a lifeline for applicants wanting to get the lowdown on Keele from the people who know and love it best: the current students.

A team of volunteers from among our Student Academic Representatives (StARs) check the group regularly and answer any questions. Other keen students, including Students’ Union sabbatical officers, also participate. I monitor what is being said and give an official University response when necessary but usually allow the students to take the lead.

A major part of Keele University’s appeal is its friendly atmosphere, so I try to reflect that through my communication style. Our Twitter updates are a mixture of news stories with web links and general observations about what is happening on campus, written in the “voice” of the University. I’m still very new to Twitter and I don’t think I have fully grasped the possibilities of its use, but it’s another opportunity to be explored for communication with prospective students, current staff and students, and alumni.

The University recognises Web 2.0 as an important area for growth, so much so that developing Keele’s e-communications strategy has now been formally built into my job description.

Posted in Guest-post | Tagged: | 4 Comments »