UK Web Focus

Innovation and best practices for the Web

Archive for December, 2010

HTML5: Are Museum Web Sites Ahead of HE?

Posted by Brian Kelly (UK Web Focus) on 30 December 2010

Martin Hawksey, a prolific blogger on the RSC Scotland North and East blog, recently alerted me to an article published in the ReadWriteWeb blog which describes how Scotland Trailblazes the Use of HTML5 in Museums. The trailblazing Scottish institution wasn’t a University or a Web development or Web design company – rather it was the National Museums Scotland Web site.

The article describes how:

The National Museums of Scotland have become the first major museum organization in the world to fully implement HTML5.

and goes on to inform readers that

Museum digital media tech manager Simon Madine explained in a blog post that the implementation across the five allied sites was married to an overall redesign. That redesign saw the site gain color and shoulder-room and emphasize more visuals. But the implementation of HTML5 is more revolutionary. It allows a greater level of search engine accessibility, easier rendering across browsers and overall makes it easier to elegantly add and change site content.

According to Hugh Wallace, NMS head of digital media “The site should be eminently more findable too as it’s structured for the way Google reads pages“.  In fact, the only other museum that Wallace’s crew could find that has fully implemented the language is The American Sport Art Museum and Archives.

My question for those involved in providing institutional Web sites is “Are you making use of HTML5?”. If you are, I’d be interested in hearing how you are going about doing this and what benefits you have identified. And if not, why have you chosen not to do so? I’d also be interested to receive responses from those working in other sectors and other countries.

Posted in HTML | 3 Comments »

Will #Quora Be Big In 2011?

Posted by Brian Kelly (UK Web Focus) on 29 December 2010

Remembering Stack Overflow

Back in October I asked “Is Stack Overflow Useful for Web Developers?“. The context to this question was the decline in usage of mailing lists by those involved in Web management and Web development. In response to an earlier post Virginia Knight suggested that many email lists “have a natural life-cycle ending with dormancy”. But where, I asked in the post, should Web developers go if they have queries which need answering? Might Stack Overflow provide an alternative?

It was suggested that “most web developers will have come across Stack Overflow quite some time ago. If only through stumbling across it when Googling for a solution to a particular problem“. It was also pointed out that “Stack Overflow also runs Server Fault, and Super User; two very similar sites with a different focus“. But if there are a number of Social Web services which might provide advantages over mailing lists for developers, are there equivalent services with a more general scope, I wondered?

Introducing Quora

It was when I was looking at Stack Overflow and its StackExchange sister sites that I came across the Quora service. This is another question and answer site. But in answer to a question “How is Quora different from StackOverflow?” posed on Quora, the answers given include:

  • I’d say the connection between Quora’s founders and Facebook is the thing that has excited most people, but it remains to be seen whether this translates into something that makes Quora into a more compelling moneymaker than StackOverflow or the larger and broader sites like Answers.com.
  • StackOverflow is a “programming Q & A site”. Quora’s scope is more general than that.
  • StackOverflow (and its StackExchange sister sites) highlight quantitative achievements (scores, badges) much more, so the gamishness is “have I reached a new level?” Quora emphasizes instant feedback/live-presence, so the gamishness is “can I click back to where the action is happening right now.”
  • It also focuses on many topics and you choose what to follow, instead of visiting different stackoverflow-based sites.

Meanwhile, over on the Stack Overflow site, the question “What can we learn from Quora?” is answered with the response:

I personally don’t think Quora is even on the same field as us — we’re playing baseball, they’re playing football. Here’s why:

  • we don’t care about the social graph, we care about the information graph. Even if that leads off-site or comes from Google. Heck, we don’t even ask you to register — ever!
  • we believe that the best content comes from topic specific sites (cooking, programming, parenting), and the communities that form around those topics, not a “generic one size fits all Q&A engine”
  • we’re not tied to Facebook (and its social graph)
  • we allow 100% anonymous participation
  • we creative commons all our content (unsure what they do here)

So the services have different scopes and different approaches.  But is there anything to suggest that Quora might become popular, just as Stack Overflow has done within the developer community?

Is Quora Getting Popular?

Yesterday George Siemens (@gsiemens) asked on Twitter:

What are good strategies for dealing with information overload? http://b.qr.ae/eO1DoU (Quora)

The tweet linked to a question posed on Quora: “What are some good strategies for dealing with information overload” and included George’s response.

I spotted this tweet shortly after reading another tweet from Cameron Neylon (@cameronneylon) who also referenced an answer he had provided on Quora:

My answer on Quora to: What do people think about the recent Google report on social networks? http://qr.ae/GYDL

My Twitter followers are starting to answer questions posed on Quora, it seems.  And, in addition, I am now being followed on Quora, a service I only joined a few months ago.  As can be seen, eight people started to follow me on Boxing Day!

It seems that we are starting to see signals which indicate a growing popularity in the Quora service, which includes responding to queries as well as following answers.

A question “How many people use Quora?” has been posed on Quora, and the answers (378,639 on 14 December 2010 being the most recent estimate) and the accompanying graph indicate that Quora is rapidly growing in popularity.

I would speculate that the early adopters of Twitter might also be willing to make use of Quora in its early days.  I wonder if Quora might have a role to play in providing a forum for wider and more in-depth discussions of issues raised on Twitter?

Note: After writing this post I came across a post on TechCrunch entitled “Q: What Does Quora Mean For The Future Of Blogging? A: Business As Usual” which cited a post by Robert Scoble in which he wondered “if Quora was the biggest blogging innovation in 10 years?”. Robert Scoble summarised what he felt was innovative about the Quora service:

First, it learned from Twitter. Ask your users a question and they’ll answer it.

Second, they learned from Facebook. Build a news feed that brings new items to you.

Third, they learned from the best social networks. You follow people you like. But then they twisted it. You can follow topics. Or you can follow questions in addition to following people. This is great for new users who might not know anyone. They can follow topics.

Fourth, they learned from blogs about how to do great SEO. I’ve started seeing Quora show up on Google.

Fifth, they learned from FriendFeed, Digg, and other systems that let you vote up things. If you watch a question that has a lot of engagement you’ll even see votes roll in live. It’s very addictive.

Sixth, they brought the live “engagement display” that Google Wave had: it shows who is answering a question WHILE they are answering it.

Seventh, it has a great search engine for you to find things you are interested in

Interesting. In light of these observations I have chosen to follow topics on Linked Data and HTML5. I also noticed that, unlike the first two topics, the Digital Preservation topic does not have a description or FAQ. Hmm, if you have a scholarly interest in a topic should you be looking not only to create a page in Wikipedia about the topic but also to manage a discussion area in Quora? Perhaps the case for Quora is not yet proven, but if you wish to maximise your impact surely the case for engagement with such popular services is now accepted?

Posted in Social Networking | Tagged: | 5 Comments »

W3C Standards for Contacts and Calendars

Posted by Brian Kelly (UK Web Focus) on 27 December 2010

I have to admit that I thought that standards for contacts and calendar entries had been established ages ago. However the W3C’s Device APIs and Policy Working Group has been set up in order to “create client-side APIs that enable the development of Web Applications and Web Widgets that interact with device services such as Calendar, Contacts, Camera, etc.”

A working draft of the Contacts API was published on 9 December 2010. As described in the W3C Newsletter:

This specification defines the concept of a user’s unified address book – where address book data may be sourced from a plurality of sources – both online and locally. This specification then defines the interfaces on which third party applications can access a user’s unified address book, with explicit user permission and filtering. The focus of this data sharing is on making the user aware of the data that they will share and putting them at the center of the data sharing process; free to select both the extent to which they share their address book information and the ability to restrict which pieces of information related to which contact gets shared.

Other work in the area includes further draft specifications, such as the Calendar API and the Application Launcher API.

Note that the URIs for the latest versions of a number of these draft specifications seem misleading. For example the URI for the Calendar API is stated as being http://www.w3.org/TR/calendar-api/ though this link is currently broken, with the resource actually hosted on the W3C’s development server at http://dev.w3.org/2009/dap/calendar/. Similarly the URI for the Application Launcher API is stated as being http://www.w3.org/TR/app-launcher/ though this link is currently broken, with the resource actually hosted on the W3C’s development server at http://dev.w3.org/2009/dap/app-launcher/. This may be because these are editors’ drafts and the URIs for the published versions are place-holders – but for me this is an error, and one that is surprising for the W3C, which places great emphasis on the importance of functioning URIs.
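Checking whether such ‘latest version’ URIs resolve is easy to automate. Here is a minimal sketch (my own illustration, not a W3C tool) using only the Python standard library to issue a HEAD request against each of the URIs quoted above:

    # Minimal link checker for the 'latest version' URIs discussed above.
    # A HEAD request is usually enough to distinguish a published draft
    # from a placeholder that has not yet been created.
    import urllib.request
    import urllib.error

    URIS = [
        "http://www.w3.org/TR/calendar-api/",
        "http://www.w3.org/TR/app-launcher/",
    ]

    for uri in URIS:
        request = urllib.request.Request(uri, method="HEAD")
        try:
            with urllib.request.urlopen(request, timeout=10) as response:
                print(uri, "->", response.status)
        except urllib.error.HTTPError as error:
            print(uri, "-> broken:", error.code)      # e.g. 404 for a placeholder
        except urllib.error.URLError as error:
            print(uri, "-> unreachable:", error.reason)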

Posted in standards | 3 Comments »

Skype Just Works (Pain, I Know!)

Posted by Brian Kelly (UK Web Focus) on 26 December 2010

“Snowed In – Can’t Make It To London”

I recently ran a workshop in London on “Institutional and Social Web Services: Evidence for Their Value“. Although the event went well, the day before I was somewhat apprehensive as Ranjit Sidhu, one of the speakers, was snowed in in Scotland and thought it unlikely that he would be able to travel.

“Not a problem“, I said to Ranjit. “As long as you can create a video recording of your talk we’ll be able to play that locally. And if you have network access we’ll try some form of communication technology in order that you can participate remotely.“

Having sounded so confident in our email discussions I was slightly apprehensive on the morning of the workshop, especially when I discovered that the PC we would be using didn’t have Skype or AV capabilities. I was prepared to use a streaming video application on my mobile phone and even to explore whether a POTS solution could be used – if there was a telephone in the seminar room maybe we could use the Plain Old Telephone Service.

In the event I had no need to be concerned. Skype was installed on the local PC and a Webcam and microphone worked as well – in a room containing over 20 people the microphone could pick up questions provided people spoke clearly.

Not Just Telephony: Application Sharing and Ubiquity Too!

I had envisaged using Skype to allow Ranjit to respond to questions after his talk.  In fact we used the application-sharing feature of Skype to share the slides used by other speakers at the event.  So Ranjit, the remote participant snowed in in Scotland, benefited from being able to listen to the speakers and view their slides as they were being presented.   The only time this didn’t work was when one of the speakers used their iPad to give a presentation – if we do this again we’ll need to have contingency plans for when other devices are being used.

For me Skype’s ease-of-use, ubiquity and rich functionality (it’s more than just a phone system) make it part of the infrastructure which one might reasonably expect to be able to use. I have personally used Skype clients on desktop PCs, laptops and netbooks as well as on the Apple Mac, Android phones and the iPod Touch, so it seems to have escaped from the MS Windows-only barrier which has hindered take-up of other potentially useful collaborative tools.

But Skype’s Proprietary!

But Skype’s proprietary, the argument went back in 2007; we should be using an open standards solution. But those arguments seem to have gone quiet. There appear to be occasions when the simplicity of proprietary solutions wins over enough users to make the deployment of standards-based solutions difficult. Recognising when this will happen will be the difficult thing, though, as Nick Skelton pointed out in a post which asked “Why did JANET Talk fail?”

Posted in standards | Tagged: | 1 Comment »

W3C’s Online Course on “Introduction to SVG”

Posted by Brian Kelly (UK Web Focus) on 24 December 2010

How do you get training in new (and not so new) standards? A good choice would seem to be from the organisation responsible for developing the standard. The following online course on SVG (Scalable Vector Graphics) may therefore be of interest to developers and others with an interest in this standard.

The W3C is running an online course on Introduction to SVG. Professor David Dailey of Slippery Rock University, Pennsylvania, will lead the course. The course will last for six weeks and starts in January 2011. During the first four weeks participants learn how to create SVG documents, to use basic elements to create effective graphics quickly and easily, add border effects, linear and radial gradients, re-use components, and rescale, rotate and translate images.

During the (optional) final two weeks of the course participants learn how to: add animation, use scripting to transform and manipulate images, and create interactive graphics. The last two weeks will most benefit those with some background in scripting. The only pre-requisite for the course is to have some familiarity with HTML/XML and the ability to edit source code directly.

The rate for the course is €165. Full details of the course (audience, content, timing, weekly commitment) are available in the Introduction to SVG: Course Description.

I should add that back in November 2008 I asked the question Why Did SMIL and SVG Fail? but then in January 2010 asked Will The SVG Standard Come Back to Life? SVG initially became a W3C Recommendation in 2003 but failed to live up to initial expectations. I feel that we often try to promote open standards too soon and early adopters can get their fingers burnt. However there does seem to be renewed interest in SVG, especially in a mobile context, so perhaps now, rather than in 2003, is the time to invest in training. After all, as described in an article on “Microsoft joins IE SVG standards party” published in The Register: “Commentors responding to Dengler’s post overwhelmingly welcomed Microsoft’s move, with people hoping it’ll lead to SVG support in IE 9“.

Posted in standards | Tagged: | Leave a Comment »

What’s the Value of Using Slideshare?

Posted by Brian Kelly (UK Web Focus) on 23 December 2010

Back in August Steve Wheeler tweeted that “Ironically there were 15 people in my audience for this Web 3.0 slideshow but >12,000 people have since viewed it http://bit.ly/cPfjjP“.

I used that example in a talk on “What Can We Learn From Amplified Events?” I gave in Girona a week later – and in my talk I admitted that not only had I read the tweet while I was in bed but that I also viewed the slides in bed.

I made this point as I wanted to provide additional examples of the ways in which traditional academic events, such as seminars, are being amplified, and how such amplification is increasingly being used by growing numbers of users who now have easy access to resources, such as slides used in seminars, which previously were not easy to access.

In a post entitled “Web 3.0 and onwards” Steve has brought this story up-to-date:

one of the surprising highlights for me was the aftermath of a presentation I gave at a school in Exeter, South West England, in July. I was invited by Vitalmeet to present my latest views on the future of the web in education, so I chose to talk about ‘Web 3.0 – the way forward?’ When I arrived, the room wasn’t that ideal, and the projector was on its last legs. Only 15 people turned up, and that included the organisers. Not an auspicious start. I gave my presentation, and no-one wished to ask any questions afterwards. I made for the door… then someone asked me if they could have my slides. I promised I would post them up on my Slideshare site so they could gain access.

To say I was amazed at the response is an understatement. My Web 3.0 slideshow received 8,000 views during its first week. Within the month, the count had risen to over 15,000 views – my original audience had multiplied a thousand times. Even more valuable for me, many people commented and shared their ideas with me, which led me to write further blog posts, and publish a second, related post entitled Web x.0 and beyond.

The question I have is “Can we estimate the value which has been generated following the uploading of the slides to Slideshare and the subsequent promotion of the resource?“.

I have met Steve a couple of times and have found him to be a stimulating speaker, and his blog is on my ‘must-read’ list. So I would be happy to suggest that his talk is likely to have been well-received by the 15 people in the audience. I could suggest that he might have received a 100% rating on the content and style of the presentation – but there may have been someone in the audience who had already seen the talk, and perhaps someone else who might not have been feeling well or for whom it wasn’t an area of interest. So let’s suggest a 90% average rating from the 15 people, which gives us an overall 13.5 ‘satisfaction’ rating (no. of people * estimated rating).

But what of the 17,406 views of the slides on Slideshare? The presentation will be lacking Steve’s physical presence, his engagement with the audience and his responses to questions. Might we suggest, then, that this can, at best, provide only a 10% satisfaction rating? We also need to remember that the 17,406 views will not necessarily relate to 17,406 different users – I viewed the slides on my iPod Touch in August and have just visited the Slideshare page again, for example. It is also difficult to know whether the viewers looked at all the slides or perhaps just the first few slides and then left. In light of such considerations, let’s suggest that the number of distinct engaged viewers might be 10% of the total number of views. This then gives us a ‘satisfaction’ rating of 174.

So according to this formula the availability of the slides on Slideshare has provided a greater ‘impact’ than the live seminar.

Nonsense, I hear you say, and I agree. But if there had been only one person at the seminar and 1 million viewers, and we found that they all rated the slides highly, might we conclude that the availability of slides on Slideshare can provide a greater ‘impact’? I think we could, so the challenge would be to develop a more sophisticated algorithm than my back-of-an-envelope calculation.
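For anyone who wants to play with the arithmetic, the back-of-an-envelope calculation can be written down explicitly. A minimal sketch, using the guessed figures above (the discount rates are assumptions, not measurements):

    # Back-of-an-envelope 'satisfaction' rating: number of people * estimated rating.
    def satisfaction(audience, rating):
        return audience * rating

    # Live seminar: 15 attendees, 90% estimated average rating.
    live = satisfaction(15, 0.90)                  # 13.5

    # Slideshare: 17,406 views, discounted to 10% distinct engaged viewers,
    # each giving at best a 10% rating without the speaker being present.
    online = satisfaction(17406 * 0.10, 0.10)      # approximately 174

    print(f"live: {live:.1f}  online: {online:.1f}")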

But what are we trying to measure? Perhaps rather than Steve’s presentational style and personality, which are likely to influence an evaluation given immediately after a talk, we should be looking at the impact of the talk afterwards.

Would it be useful, I wonder, to contact people a few months after a talk (in this case the talk took place four months ago) and ask them to recollect what the talk was about and what they had done differently as a result? We could then compare the responses from the local and remote audiences to see if there are any significant differences. I should say that my recollection of the slides (which I’ve not looked at while I’ve been writing this post) was that Steve said that Web 2.0 was important in an elearning context and that Web 3.0 is now coming along which can build on Web 2.0 and should be treated seriously. Of course Steve may have been using his slides ironically, in which case I may have picked up the wrong message.

What do you think Steve is saying from just looking at his slides (which are hosted on Slideshare)? And what will you remember in four months’ time? And if the answer is ‘not a lot’ might that require us to ask questions about the benefits and value of traditional seminars? What, after all, is the ROI of a seminar? Might it, I wonder, be the networking? If plans were made and implemented as a result of the seminar, that could be a more tangible impact factor.

And in the online environment perhaps the 226 Facebook users who have ‘liked’ the presentation, the 132 Slideshare users who have favourited it, the 798 users who have downloaded the presentation and the 21 comments received might also provide some tangible indications of value – although, of course, they may be liking and commenting on the design of the slides and not on their content!

Posted in Web2.0 | Tagged: | 7 Comments »

Is It Too Late To Exploit RSS In Repositories?

Posted by Brian Kelly (UK Web Focus) on 22 December 2010

A few years ago we had discussions about ways in which information about UKOLN peer-reviewed papers could be more effectively presented. We asked “Could we provide a timeline view? Or how about a Wordle display which illustrates the variety of subject areas researchers at UKOLN are engaged in?” The answer was yes we could, but it wouldn’t be sensible to carry out development work ourselves. Rather we should ensure that our publications were made available in Opus, the University of Bath’s institutional repository.  And since repositories are based on open standards we would be able to reuse the metadata about our publications in various ways.

We now have a UKOLN entry in Opus and there’s also an RSS feed for the items. And similarly we can see entries for individuals, such as myself, and have an RSS feed for individual authors.

Unfortunately the RSS feed is limited to the last ten deposited items rather than returning the 223 items for UKOLN or the 45 items belonging to me. The RSS feed is failing to live up to expectations and isn’t much use :-(

The Leicester Research Archive (LRA), in contrast, does seem to provide a comprehensive set of data available as RSS. So, for example, if I go to the Department of Computer Science’s page in the repository there is, at the bottom right of the page (though, sadly, not available as an auto-discoverable link), an RSS feed – and this includes all 50 items.

Sadly when I tried to process this feed in Wordle, Dipity and Yahoo! Pipes I had no joy, with the feed being rejected by all three applications. I did wonder if the feed might be invalid, but the W3C RSS validator and the RSS Advisory Board’s RSS Validator only gave warnings. These warnings might indicate the problem, as the feed did contain XML elements which might not be expected in an RSS feed.
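The experiment is easy to repeat. Here is a minimal sketch of the kind of inspection involved, assuming Python with the third-party feedparser package; the feed URL is a hypothetical placeholder:

    # Inspect a repository RSS feed: how many items does it expose, and what
    # would a Wordle-style word count over the item titles look like?
    # Assumes the third-party 'feedparser' package; the feed URL is hypothetical.
    from collections import Counter
    import feedparser

    FEED_URL = "http://repository.example.ac.uk/feed.rss"  # hypothetical

    feed = feedparser.parse(FEED_URL)
    if feed.bozo:
        print("feed may be malformed:", feed.bozo_exception)
    print(len(feed.entries), "items in feed")  # exactly 10 suggests a truncated feed

    words = Counter()
    for entry in feed.entries:
        words.update(w.lower() for w in entry.title.split() if len(w) > 3)

    for word, count in words.most_common(20):
        print(f"{count:4d}  {word}")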

But whilst my experiment to demonstrate how widely-available applications which process RSS feeds might be used to enrich the outputs from an institutional repository has been unsuccessful to date, I still feel that we should be encouraging developers of institutional repository software to allow full RSS feeds to be processed by popular services which consume RSS.

I have heard arguments that providing full RSS feeds might cause performance problems – but is that necessarily the case? I’ve also heard it suggested that we should be using ‘proper’ repository standards, meaning OAI-PMH – but as Nick Sheppard has recently pointed out on the UKCORR blog:

I have for some time been a little nonplussed by our collective, continued obsession with the woefully under-used OAI-PMH. Other than OAIster (an international service), the only services I’m currently aware of in the UK are the former Intute demo now maintained by Mimas.

In his post Nick goes on to suggest that “Perhaps OAI-PMH has had it’s day“. It’s unfortunate, I feel, that RSS does not seem to have been given the opportunity to show how it can be used to provide value-added services to institutional repositories. Is it too late?

Posted in Repositories, rss | 8 Comments »

When Should You Consider Use of Cloud Services?

Posted by Brian Kelly (UK Web Focus) on 21 December 2010

The recent speculation about the death of Delicious and its subsequent rebirth, possibly under a new owner, has given rise to discussions about when use of Cloud Services in an institutional context is appropriate. For some, the answer may be never. I have heard people say “I’ll never use Google to index my Web site; I don’t have access to the source code, so I can’t fix it if it goes wrong“.

This is nowadays probably regarded as a fairly extreme position, but how should one go about answering the question: “When might use of a freely-available Cloud Service be appropriate for use in an institutional context?“.

For me the answer is simple: “Always!“.  Or, to refine this answer slightly: “Use of freely-available Cloud Services should always be considered for use in an institutional context“.

In part this is a response to the need to be seen to be making cost-effective use of taxpayers’ money. Although this has always been the case, as we have seen from the Daily Telegraph’s recent FOI request the media is looking at how the public sector spends money – paying for the development and support of a service within the institution when there are free alternatives available could lead to questions as to the appropriateness of such decisions.

In addition to the need to be seen to be aware of the financial considerations, freely-available Cloud Services also provide an opportunity for evaluating options and identifying popular features and patterns of use. And following on from the question of the features provided by services there is also the question of the community which can have an important role to play in social sharing services. This is an area in which a service hosted within an institution may fail to gain a sufficient user base in order to gain the benefits of network effects.

Of course there will be a need to consider the sustainability of possible options. As I described in a post on Lessons From Delicious’s (Non)-Demise this issue seems to have been the top concern when Niall Sclater made his point: “@mweller @psychemedia delicious. i rest my case.“.

But considerations regarding the sustainability of services are nothing new in the IT industry. In my time in the business I have worked on computers which are no longer made (IBM and ICL mainframes, VAX minicomputers, Apollo and Sun workstations, BBC and Commodore micros, etc.) and have seen software come and go. Such concerns can affect large-scale enterprise software and business developments (Blackboard buying WebCT, Oracle buying Sun, etc.). But now we are also seeing political and economic factors affecting the sustainability of institutional services. We have already heard recently the news that “Welsh universities to merge” – and we can’t realistically expect that there won’t continue to be significant changes across the sector. So although we might speculate on the risks of commercial providers of such services (such as Yahoo’s failure to develop the Delicious service and, in retrospect, the failure to accept Microsoft’s offer “to buy the search engine company Yahoo for $44.6bn (£22.4bn) in cash and shares“) we also need to be honest about institutional risks. So if we wish to speculate that, for example, Microsoft’s proposed purchase of Yahoo services could lead to cherry-picking and the closing down of services which compete with existing Microsoft services, we might also speculate that proposals to install services in-house are being made in order to justify the continued existence of the unit providing the service.

There are risks in using Cloud Services; there are risks in purchasing commercial software and there are risks in deploying services in-house. This is nothing new.   What is different today is that we are no longer in a period of growth – so we will need to be prepared to understand and address the risks of institutional provision of services.  I do feel that “use of freely-available Cloud Services should always be considered for use in an institutional context” – and whilst this does not necessarily mean that such services should be deployed we will need to have exit strategies in place for in-house alternatives.

Posted in Finances | 4 Comments »

Lessons From Delicious’s (Non)-Demise

Posted by Brian Kelly (UK Web Focus) on 20 December 2010

“delicious. i rest my case.”

Niall Sclater made his point succinctly:

@mweller @psychemedia delicious. i rest my case.

The case Niall was making was, I suspect, that one shouldn’t be promoting use of Cloud services within institutions. This is an argument (although that might be putting it a bit too strongly) which Niall has been having over the past few years with Tony Hirst and Martin Weller, his colleagues at the Open University.  As I described in a post on “When Two Tribes Go To War” back in 2007:

Niall Sclater, Director of the OU VLE Programme at the Open University recently pointed out that the Slideshare service was down, using this as an “attempt to inject some reality into the VLEs v Small Pieces debate“. His colleague at the Open University, Tony Hirst responded with a post entitled “An error has occurred whilst accessing this site” in which Tony, with “beautifully sweet irony“, alerted Niall to the fact that the OU’s Intranet was also down.

Back then the specifics related to the reliability of the Slideshare service, with Tony pointing out that the Slideshare service was actually more reliable than the Open University’s Intranet. But that was just a minor detail. The leaked news that Yahoo was, it appeared, intending to close a social bookmarking service which is highly regarded by many of its users was clearly of much more significance. So is Niall correct to rest his case on this news? Or, as Niall wrote his tweet before we found that the news of Delicious’s death was greatly exaggerated, might we feel that the issue is now simply whether an alternative social bookmarking service should be used?

My view is that we do need to recognise that such services may disappear and plan accordingly. But such plans need to be based on how such services were used, and what might be the most appropriate alternatives. Such alternatives could be based within the institution – but this need not necessarily be the case.

My Use of Delicious

I created my first delicious bookmarks back in December 2005. I used delicious to bookmark the main URLs for my peer-reviewed papers, with the intention of being able to identify others who had bookmarked my papers: if they are interested in the papers I’ve written, I’m likely to find that they have bookmarked similar resources which will be of interest to me. That was my initial use case for delicious.

I subsequently discovered that the category used for my bookmarks could also be of interest; for example a paper on “Implementing A Holistic Approach To E-Learning Accessibility” was bookmarked by “madeliner” using the tag H807_block_1 – hmm, might social bookmarking have a role to play in suggesting how resources are being used? Was this paper being used in block 1 of the Open University course H807? Further investigation revealed that this is a course on Innovations in elearning. So by using a social bookmarking service I am able to identify that a paper of mine is used by someone in the context of an Open University course. This might provide some evidence of impact which could prove useful. Further investigation revealed that Lars Nyberg has bookmarked several of my papers using an ‘accessibility‘ tag. This suggests that I should read his bookmarks if I plan on writing further papers in this area.

My second use case for delicious was in bookmarking the resources I used in my presentations. If, for example, you visit the page for the seminar on “Web 2.0: Opportunities and Challenges for HE” which I gave at Coventry University in March 2006 you find that the resources used in the slides have also been bookmarked using delicious with the “coventry-2006-03″ tag.

This illustrates my second main use of delicious: bookmarking resources I use in my presentations. The reason I do this is so that people in the audience won’t have to scribble down URLs as they know that all the links I refer to in my talk are available online.  Using this approach also means that I have a record of when resources were used in various presentations and also how popular such resources may be.

The reason I am describing the different uses I make of Delicious and the benefits it provides is to help appreciate what the requirements are, especially if alternatives are being considered.

Alternatives To Delicious?

The news that Delicious was one of a number of Yahoo services earmarked for ‘sunsetting’ has damaged Delicious’s brand, and over the past few days many Delicious users have been exploring alternatives. If I were to explore an alternative, what should I be looking for?

An important requirement is that the service should be widely used – after all my first use case was in helping to find others with similar interests.  In addition the service should be popular across a global research community and not restricted to the UK or, even worse, to within an institution.  This is a reason why I don’t feel that an open source solution such as Scuttle is appropriate for my requirements.

My solution is therefore to continue with the approach I’ve taken over previous years – to continue to use Delicious with periodic backups to Diigo.

Should We Have Predicted The Dangers?

Should the risks that Delicious may not be sustainable have been identified earlier? The answer is, of course, yes. And, indeed, such risks were flagged, in my case going back to 2006 when a Risk Assessment page was created which listed the various externally-hosted services, including Delicious, used to support UKOLN’s IWMW 2006 event. Back then we wrote:

A number of del.icio.us tags (e.g. iwmw2006) are recommended for bookmarking resources related to the workshop and to individual talks and sessions. There is a reliance on ongoing access to the relevant del.icio.us page

This Risk Assessment approach was subsequently described in a paper on “Library 2.0: Balancing the Risks and Benefits to Maximise the Dividends” which was published in the Program: electronic library and information systems journal.

More recently a paper on “Moving From Personal to Organisational Use of the Social Web” was presented at the Online Information 2010 conference. This paper built on our previous work and suggested that organisations should be carrying out audits of their use of third party services and documenting possible risks and strategies for addressing such risks.

Although that paper focussed primarily on use of blogging platforms the approaches are equally valid for use of social bookmarking services. As mentioned above, Social Web services used to support UKOLN’s recent IWMW events have been accompanied by a list of the services and a summary of possible risks. The risks that the Delicious service may not be sustainable have been addressed in two ways: back in 2008 a Diigo account was set up and a backup copy of Delicious bookmarks taken. In addition, since an important use of Delicious has been to provide short-term access to resources after an event, it is accepted that there would be no significant data loss if such resources were no longer embedded within the appropriate event page.
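Scripting such a backup was straightforward against the v1 API which Delicious offered at the time. The sketch below assumes that endpoint and its documented behaviour (both may since have changed), with placeholder credentials:

    # Back up all bookmarks via the Delicious v1 API (as offered at the time):
    # an authenticated GET of /v1/posts/all returns the bookmarks as XML.
    import urllib.request

    API_URL = "https://api.del.icio.us/v1/posts/all"
    USER, PASSWORD = "your-username", "your-password"  # placeholders

    password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    password_mgr.add_password(None, API_URL, USER, PASSWORD)
    opener = urllib.request.build_opener(
        urllib.request.HTTPBasicAuthHandler(password_mgr))

    with opener.open(API_URL) as response:
        xml = response.read()

    with open("delicious-backup.xml", "wb") as backup:
        backup.write(xml)
    print("saved", len(xml), "bytes")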

What Have We Learnt?

I feel that the important lesson is to have a plan B. For me the plan B is likely to be another Cloud Service, since an institutional service will not adequately address my requirements.

I also feel that this incident has helped to highlight the importance of planning and understanding risks. Such planning processes can be helped by an audit of use of such services, which can be applied at an individual, departmental or institutional level. I will revisit such audits in future posts, but I feel I should conclude with the reminder that institutional services may also not be sustainable, so there needs to be an audit of use of institutional services too.

Posted in Social Networking | 8 Comments »

Blog Widget For Creating EPub and PDF Files

Posted by Brian Kelly (UK Web Focus) on 17 December 2010

I’ve recently installed a widget on the sidebar of this blog which enables users to download an EPub or PDF version of recent blog posts (an idea, incidentally, which I got from the RSC MASHe blog).

As indicated by my post on EPub Format For Papers in Repositories I have an interest in the potential of the EPub format, so this blog provides an opportunity for testing various approaches to creating EPub resources. The widget uses the Feedbooks service for creating the EPub (updated link to ePub file) and PDF formats. The service processes a blog’s RSS feed, so the number of items it converts is determined by the number of RSS items which have been selected in the blog’s administration interface – for this blog there are 31 items in the RSS feed (this value was selected so that an RSS feed for the complete contents of the busiest month, July 2007, can be displayed).

For performance reasons the Feedbooks service only processes the text in the blog, so accompanying images embedded in a post, for example, will not be available. Via a recent comment on this blog I learnt about the Anthologize WordPress plugin which “is a free, open-source, plugin that transforms WordPress 3.0 into a platform for publishing electronic texts“. Using the plugin you can “grab posts from your WordPress blog, import feeds from external sites, or create new content directly within Anthologize. Then outline, order, and edit your work, crafting it into a single volume for export in several formats, including—in this release—PDF, ePUB, TEI“. However this plugin cannot be used for blogs, such as this one, which are hosted on WordPress.com.
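The underlying idea – each feed item becomes a chapter in an EPub file – is simple enough to sketch. The following Python illustration assumes the third-party feedparser and ebooklib packages; it is my own sketch of the approach, not how Feedbooks or Anthologize actually implement it:

    # Sketch of the feed-to-EPub idea: each RSS item becomes one chapter.
    # Assumes the third-party 'feedparser' and 'ebooklib' packages; this
    # illustrates the approach, not any particular service's implementation.
    import feedparser
    from ebooklib import epub

    feed = feedparser.parse("http://blog.example.org/feed")  # hypothetical URL

    book = epub.EpubBook()
    book.set_title(feed.feed.get("title", "Recent posts"))
    book.set_language("en")

    chapters = []
    for i, entry in enumerate(feed.entries):
        chapter = epub.EpubHtml(title=entry.title, file_name=f"post_{i:02d}.xhtml")
        # Text only: images embedded in a post are not fetched or packaged.
        chapter.content = f"<h1>{entry.title}</h1>{entry.get('summary', '')}"
        book.add_item(chapter)
        chapters.append(chapter)

    book.toc = chapters
    book.add_item(epub.EpubNcx())  # navigation files expected by EPub readers
    book.add_item(epub.EpubNav())
    book.spine = ["nav"] + chapters

    epub.write_epub("recent-posts.epub", book)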

But in addition to the tools which can be used to create ePub versions of blog posts, I have a concern about how users who may have an interest in reading blog posts (and other documents) on mobile devices will discover the availability of resources published in this format. I also wonder whether users who click on the link will be confused when asked to select an application. Although Wikipedia provides a list of EPub reading tools, none of them are particularly well-known. Will we see a repeat of the confusion which non-technical end users experienced when links to RSS became prevalent?

I should also add that I have an interest in processes for easily getting blog posts onto Kindle devices. I did wonder whether a PDF creation widget might be used in this process but 5 minutes of testing with a colleague’s Kindle was unfruitful. Hmm, in light of the interest in the new Kindle device I wonder whether we will see renewed interest in the PDF format, possibly at the expense of EPub?


Note: In February 2011 I became aware that this service had been discontinued. The widget has been removed from the sidebar and replaced by a link to the Newstoebook service which provides a similar format conversion service. However in 2012, due to limitations of the service, this link was also removed.

Posted in Blog, rss | 2 Comments »

Thoughts on Additional Costs of WordPress.com Blogs

Posted by Brian Kelly (UK Web Focus) on 16 December 2010

In a paper on “Moving From Personal to Organisational Use of the Social Web” (which I summarised in a blog post) I suggested that the early adopters of blogs hosted in the Cloud have established best practices which could be emulated by their peers; this might involve providing professional blogs in the Cloud rather than on an institutional platform. After all, I suggested, in light of cuts, is it desirable to use in-house effort to install and maintain services when equivalent alternatives are freely available in the Cloud?

But can we put a price on the cost of such services?  Looking at the prices charged by WordPress.com to implement additional facilities on an out-sourced blog might help to inform such discussions.

WordPress provides information on the additional charges for extending its free service:

Add a Domain: “The Domain Mapping Upgrade allows you to use a custom domain name, such as example.com, instead of a standard WordPress.com domain name—like example.wordpress.com—for your blog. Domain name registration plus domain mapping costs $17.00 ($12.00 for mapping, $5 for registration) per year, per domain.

VideoPress: “The VideoPress upgrade allows you to host and play beautiful HD video right from your WordPress.com blog. VideoPress supports many filetypes and codecs. Your blog comes with 3 gigabytes of space. To get even more room to upload videos and other media, purchase the Space Upgrade.” The cost is $59.97 per year.

Custom CSS: “The CSS Upgrade allows you to use your own CSS code to customize the appearance of your blog. CSS allows you to change fonts, colors, borders, backgrounds, and even the layout of the blog. With the CSS Upgrade, you’ll be able to take any of our 80+ themes and give it a little bit of style, or completely overhaul the design.” The cost is $14.97 per year.

Space Upgrades: “If you find yourself running out of space for your media files, it’s easy to add more storage to your blog. You can add 5, 15, 25, 50, or even 100 gigabytes to your blog, so you’ll have all the room you need to host tons of photos, docs, and music.” The cost ranges from $19.97 per year for 5 GB through to $289.97 per year for 100 GB.

No-ads: “We sometimes display discreet advertisements on your blog—this keeps free features free! The ad code tries very hard not to intrude on your design or show ads to logged-in readers, which means only a very small percentage of your page views will actually contain ads. To eliminate ads on your blog entirely this is the upgrade you want.” The cost is $29.97 per year.

Unlimited Private Users: “The Unlimited Private Users upgrade is available to all WordPress.com blogs that have been set to private by their owners or administrators. The maximum number of users that can be added to a private blog is 35. If you would like a larger private community, you can purchase the upgrade to add as many as you like!“. The cost is $29.97 per year.

Offsite redirect: “Do you want to move away from WordPress.com to your own self-hosted WordPress installation without losing SEO ranking and breaking links? This upgrade redirects your wordpress.com blog to your new blog by performing permanent (301) redirects for all of your content.” The cost is $12 per year.

These prices do seem very reasonable, especially when you consider what a WordPress.com user gets for free. For example no additional extras have had to be purchased for this blog. I have used 2.5 MB of filespace for the 470 objects in the media library, out of the 3.0 GB free allowance. Although I have published over 840 posts I still have 98.9% of the free space allocation unused! So if you wish to argue that the costs might be extortionate if thousands of users have to pay them, I would suggest that the free service is likely to be adequate for the majority of users.

A constraint of using WordPress.com is that you have no control over the plugins which are available.  There are a whole host of WordPress plugins which can be used to extend the functionality and appearance of WordPress blogs.  However since these would have to be installed by a WordPress administrator I can’t help but feel that the range of offerings might be constrained by institutional policies which will be influenced by resource implications, security issues, interoperability issues, need for testing, etc.

I can’t help but feel that whilst those who want the maximum flexibility will look to host and manage a blog on their own domain, for the majority of blog users a WordPress.com blog will provide a cost-effective and  satisfactory solution.  And will in-house blogs be sustainable  if we see reduced levels of technical resources available in IT Service and Web Service departments?  I’d be interested in hearing what people think.

Posted in Blog | 6 Comments »

Trends For University Web Site Search Engines

Posted by Brian Kelly (UK Web Focus) on 15 December 2010

Surveys of Search Engines Used on UK University Web Sites

What search engines are Universities using on their Web sites? This was a question we sought to answer about ten years ago, with the intention of identifying trends and providing evidence which could be used to inform the development of best practices.

Search engines used across UK Universities in 1999

An analysis of the first survey findings was published in Ariadne in September 1999. As can be seen from the accompanying pie chart a significant number (59 of 160 institutions, or 37%) of University Web sites did not provide a search function. Of those that did the three most widely used search engines were ht://Dig (25 sites, 15.6%), Excite (19 sites, 11.9%) and a Microsoft indexing tool (12 sites, 7.7%).

Perhaps the most interesting observation to be made is the diversity of tools which were being used back then. In addition to the tools I’ve mentioned, universities were also using Harvest, Ultraseek, SWISH, Webinator, Netscape, WWWWais and Freefind, together with an even larger number of tools which were in use at a single institution.

The survey was repeated every six months for a number of years. A year after the initial findings had been published there had been a growth in use of the open source ht://Dig application (from 25 to 44 institutions) and a decrease in the number of institutions which did not provide a search function (down from 59 to 37).

This survey, published in July 2001, was also interesting as it provided evidence of a new search engine tool which was starting to be used: Google, which was being used at the following six institutions: Glasgow School of Art, Lampeter, Leeds, Manchester Business School, Nottingham and St Mark and St John.

Two years later the survey showed that ht://Dig was still popular, showing a slight increase, to use across 54 institutions. However this time the second most popular tool was Google, which was being used in 21 institutions. Interestingly it was noted that a small number of institutions were providing access to multiple search engines, such as ht://Dig and Google. It was probably around this time that the discussion began as to whether one should use an externally-hosted solution (given concerns regarding the sustainability of the provider, the loss of administrative control and use of a proprietary solution when open source solutions – particularly ht://Dig – were being widely used across the sector).

These surveys stopped in 2003. However two years later Lucy Anscombe of Thames Valley University carried out a similar survey in order to inform decision-making at her host institution. Lucy was willing to share this information with others in the sector, and so the data has been hosted on the UKOLN Web site, thus providing our most recent survey of search engine usage across UK Universities.

This time we find that Google is now the leading provider across the sector, being used in 44 of the 109 institutions which were surveyed. That figure increases if the five institutions which were using the Google Search Appliance are included in the total.

What’s Being Used Today?

A survey of Web site search engines used on Russell Group University Web sites was carried out recently. The results are given below.

 # | Institution | Search Engine | Search
 1 | University of Birmingham | Google Search Appliance | Search University of Birmingham for “Search Engine”
 2 | University of Bristol | ht://Dig | Search University of Bristol for “Search Engine”
 3 | University of Cambridge | Ultraseek | Search University of Cambridge for “Search Engine”
 4 | Cardiff University | Google Custom Search | Search Cardiff University for “Search Engine”
 5 | University of Edinburgh | Google Custom Search | Search University of Edinburgh for “Search Engine”
 6 | University of Glasgow | Google Custom Search (?) | Search University of Glasgow for “Search Engine”
 7 | Imperial College | Google | Search Imperial College for “Search Engine”
 8 | King’s College London | Google | Search KCL for “Search Engine”
 9 | University of Leeds | Google Search Appliance | Search University of Leeds for “Search Engine”
10 | University of Liverpool | Google | Search University of Liverpool for “Search Engine”
11 | LSE | Funnelback | Search LSE for “Search Engine”
12 | University of Manchester | Google | Search University of Manchester for “Search Engine”
13 | Newcastle University | Google Search Appliance | Search Newcastle University for “Search Engine”
14 | University of Nottingham | Google Search Appliance | Search University of Nottingham for “Search Engine”
15 | University of Oxford | Google Search Appliance | Search University of Oxford for “Search Engine”
16 | Queen’s University Belfast | Google Search Appliance | Search Queen’s University Belfast for “Search Engine”
17 | University of Sheffield | Google Search Appliance | Search University of Sheffield for “Search Engine”
18 | University of Southampton | Sharepoint | Search University of Southampton for “Search Engine”
19 | University College London | Google Search Appliance | Search University College London for “Search Engine”
20 | University of Warwick | Sitebuilder | Search University of Warwick for “Search Engine”

In brief, 15 Russell Group institutions (75%) use Google to provide their main institutional Web site search facility, with no other search engine being used more than once.

Note that Google provide a number of solutions including the Google Search Appliance, the Google Mini and the public Google search. Mike Nolan pointed out to me that “you can customise with API or XSLT to make [Google search results] look different”, so I have only named a specific solution if this has been given on the Web site or I have been provided with additional information (note that I can update the table if I receive additional information).
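For anyone repeating the survey, some of the identification can be semi-automated by fetching a site’s search page and looking for tell-tale strings. The sketch below is a rough heuristic; the marker strings are illustrative guesses rather than definitive fingerprints, so manual checking is still needed:

    # Rough heuristic for guessing a site's search product: fetch the search
    # page and look for tell-tale strings. The markers below are illustrative
    # guesses, not definitive fingerprints; manual checking is still needed.
    import urllib.request

    MARKERS = {
        "Google Search Appliance": ["gsa", "search appliance", "proxystylesheet"],
        "Google Custom Search":    ["cse.google.com", "google.com/cse"],
        "Funnelback":              ["funnelback"],
        "ht://Dig":                ["htdig", "ht://dig"],
    }

    def guess_engine(search_url):
        with urllib.request.urlopen(search_url, timeout=10) as response:
            page = response.read().decode("utf-8", errors="replace").lower()
        return [name for name, markers in MARKERS.items()
                if any(marker in page for marker in markers)]

    print(guess_engine("http://www.example.ac.uk/search"))  # hypothetical URL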

Discussion

Over ten years ago there was a large diversity of search engine solutions being used across the sector. The discussions at the time tended to focus on use of open source solutions, with the argument occasionally being made that since ht://Dig was open source there was no need to look any further. There was also a suggestion that the open source Search Maestro solution, developed at Charles University and deployed at the University of Dundee could have an important role to play in the sector.

However in today’s environment it seems that a Google Search solution is now regarded as the safe option, and this seems to be corroborated by a survey carried out by Mike Nolan in December 2008. The potential of Google Custom Search will have been enhanced by the announcement, two days ago, of developments to its metadata search capabilities.

There has, however, been some discussion recently on the web-support JISCMail list on software alternatives to the Google Search Appliance. Another discussion on the website-info-mgt JISCMail list has shown some interest in the Funnelback software. But, interestingly, open source solutions have not been mentioned in the discussions.

We might conclude that, in the case of Web site search engines, after ten years of the ‘bazaar’ the sector has moved to Google’s cathedral. What, I wonder, might be the lessons to be learnt from the evidence of the solutions which are used across the sector? Might it be that the HE sector has moved towards the cost-effectiveness of Google’s free solutions or the richness of the licensed Google Search Appliance or Google Mini? And might this be used to demonstrate that the HE sector has been successful in identifying and deploying cost-effective search solutions?

Posted in Evidence, search, Web2.0 | 13 Comments »

DCMI and JISCMail: Profiling Trends of Use of Mailing Lists

Posted by Brian Kelly (UK Web Focus) on 14 December 2010

Earlier this year DCMI, the Dublin Core Metadata Initiative, celebrated 15 years of Dublin Core. The UK higher education community has had a significant role to play in the development of Dublin Core, with colleagues and former colleagues at UKOLN having been involved since 1995.

Much of the discussion related to the development of Dublin Core standards and related activities has taken place on a series of JISCMail lists. But how has use of these lists developed over time? This is a question which relates to work I am involved in, exploring ways of analysing and interpreting data related to use of networked services. I have previously described trends related to the growth in use of Facebook within UK universities and have captured evidence of early institutional use of iTunesU and YouTube Edu in order that future analyses will have benchmark figures to make comparisons with.

In order to understand the trends in use of JISCMail lists by those involved in standardisation, deployment and use of Dublin Core metadata I used the DCMI’s list of mailing lists as my starting point. I used the JISCMail search facility to obtain information on the numbers of posts on each list by carrying out a search for messages containing an ‘@’ in the sender’s email address sent between 1990 and 2010.  I have also included information on the date on which the mailing lists were established.

DCMI General Mailing List

  • General: The broadest of mailing lists related to the international Dublin Core effort. Unlike other lists, which relate to the tasks of specific working groups or special interest areas, this list is for discussion of all issues relevant to the development, deployment, and use of Dublin Core metadata. (Established March 1996; 5,659 posts)

DCMI Architecture Mailing List

  • Architecture: This list, which supersedes dc-datamodel, dc-schema, and dc-implementors, is intended for discussion of a technical architecture for the Dublin Core. (Established October 2000; 3,027 posts)

DCMI Communities Mailing Lists

  • Accessibility: The DCMI Accessibility Community is a forum for individuals and organizations involved in implementing Dublin Core in a context of accessibility, with the objective to enhance interoperability of accessible resources through the use of Dublin Core metadata. (Established February 2002; 589 posts)
  • Collection Description: This list is intended for discussion of issues related to the use of the Dublin Core (DC) for describing collections of resources. (Established February 2002; 602 posts)
  • Education: Electronic discussion list to support the efforts of the international Dublin Core effort’s Educational metadata working group in exploring issues directly related to deployment of Dublin Core for the description of educational materials. (Established August 1999; 689 posts)
  • Environment: This list supports discussion of deploying Dublin Core metadata in environmental applications. (Established February 2002; 151 posts)
  • Government: This list is intended for discussion of the uses to which the Dublin Core Element Set might be put in describing government and public sector resources. (Established December 1999; 501 posts)
  • Identifiers: A Dublin Core Metadata Initiative (DCMI) forum to discuss identifiers. (Established October 2007; 32 posts)
  • Libraries: A mailing list for the DCMI Libraries group focussing on issues from the library sector. (Established December 1999; 433 posts)
  • Kernel: A list to support the work of the DCMI Kernel Group. (Established September 2003; 60 posts)
  • Knowledge Management: A forum to support the work of the Dublin Core Knowledge Management Community. (Established December 2007; 48 posts)
  • Localization and Internationalization: This list supports the efforts of the DCMI Localization and Internationalization group exploring issues directly related to deployment of Dublin Core metadata in multiple languages. (Established January 1998; 275 posts)
  • Preservation: The DCMI Preservation Community is a forum for individuals and organisations involved in implementing Dublin Core metadata in a context of long-term digital preservation, with the objective to promote the application of Dublin Core in that context. (Established December 2003; 191 posts)
  • Registry: The DCMI Registry Community is a forum for service providers and developers of both metadata schema registries and controlled vocabulary registries to exchange information and experience. (Established December 1999; 661 posts)
  • Science and Metadata: A forum for individuals and organisations to exchange information and knowledge about metadata describing scientific data. (Established February 2009; 103 posts)
  • Social Tagging: Dublin Core social tagging discussion list. (Established October 2006; 157 posts)
  • Standards: List to support discussion on issues related to standardization of DCMI specifications. (Established February 1999; 107 posts)
  • Tools: This list supports discussion of building and using software tools related to the Dublin Core. (Established April 2002; 61 posts)

DCMI Task Groups Mailing Lists

  • Collection Description Application Profile Task Group: A list to support work on the Dublin Core Collection Description Application Profile. (Established January 2007; 74 posts)
  • Kernel Application Profile Task Group: A list for developing the Dublin Core Kernel Application Profile. (Established November 2007; 127 posts)
  • Metadata Provenance Task Group: The list will support the Dublin Core Task Group on Metadata Provenance. (Established June 2010; 26 posts)
  • DCMI/NKOS Task Group: Dublin Core Metadata Initiative Task Group developing a Dublin Core Application Profile for KOS (Knowledge Organization System) Resources. (Established August 2010; 3 posts)
  • DCMI/RDA Task Group: List to support discussion on Resource Description and Access (RDA). (Established December 2005; 532 posts)

TOTAL: 26,692 posts

Note that these statistics were collected on Friday 11 December 2010.

The Need For Trend Analysis

Such figures are pretty meaningless taken in isolation. We might expect the general discussion lists to be more popular than the more specialised lists, and well-established lists to have had more traffic than those which have been set up only recently. Of more interest are the trends in usage of the lists.

As can be seen, the number of posts to the DCMI JISCMail lists peaked in 2002 and has dropped sharply since. The number of lists has grown, with sharp rises in 1999, 2002 and 2007. However the average number of posts per list has also seen a sharp decline.

The details for the individual lists are shown in the following chart.

I should also add that the data I collated in order to produce these charts is available as a Google Spreadsheet.
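Since the data is available in a Google Spreadsheet, trends of this kind could also be charted directly in a Web page. The following is a minimal sketch using the Google Visualization API of that era; note that the yearly figures shown are invented placeholders for illustration, not the real data:

<html>
<head>
<script type="text/javascript" src="https://www.google.com/jsapi"></script>
<script type="text/javascript">
// Load the Visualization API and the core chart package
google.load('visualization', '1', {packages: ['corechart']});
google.setOnLoadCallback(drawChart);

function drawChart() {
  var data = new google.visualization.DataTable();
  data.addColumn('string', 'Year');
  data.addColumn('number', 'Posts');
  // Placeholder values only; the real figures live in the Google Spreadsheet
  data.addRows([['1998', 1200], ['2002', 3200], ['2006', 1400], ['2010', 500]]);

  var chart = new google.visualization.LineChart(document.getElementById('chart_div'));
  chart.draw(data, {title: 'Posts to DCMI JISCMail lists per year'});
}
</script>
</head>
<body>
<div id="chart_div" style="width: 600px; height: 400px;"></div>
</body>
</html>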

Discussion

Why the interest in metrics on the usage of mailing lists? In part such evidence can be used to identify whether technologies (in this case mailing lists) are still being actively used. As I described earlier this year in a post on The Decline in JISCMail Use Across the Web Management Community, University Web managers seem to no longer be using mailing lists to the extent they did previously.

Mailing lists used to develop standards, as opposed to those used by practitioners to address routine queries, may be valuable for historical analyses, such as observing the discussions behind decisions taken. The MarkMail service, for example, provides access to over 36,000 messages posted on the W3C's www-html list. But it seems that several lists, such as dc-identifiers and dc-knowledge-management, have failed to attract significant traffic, with only 32 and 48 messages respectively having been posted to them. Have the discussions taken place in other fora, I wonder?

It also seems to me that there is a need for popular services, such as JISCMail, to provide simple ways in which usage statistics along the lines illustrated in this post can be produced. I wonder whether this needs to be done through developments to the Listserv software which underpins JISCMail, or whether it could be layered on externally? Any thoughts?

Posted in General | Tagged: | 4 Comments »

Interoperability Through Web 2.0

Posted by Brian Kelly (UK Web Focus) on 13 December 2010

I recently commented on Martin Hamilton's blog post on "Crowdsourcing Experiment – Institutional Web 2.0 Guidelines". In addition to the open approach Martin has taken to the development of institutional guidelines on the use of Web 2.0 services, the other thing that struck me was how the interoperability of embedded interactive multimedia objects was achieved.

Interoperability is described in Wikipedia as "a property referring to the ability of diverse systems and organizations to work together". But how is Martin's blog post interoperable? The post embeds several slideshows created by others. In addition to the slides, which are hosted on Slideshare, the post also contains embedded video clips together with an embedded interactive timeline.

How is such interoperability achieved? We often talk about "interoperability through open standards" but that is not really what is happening here. The slides were probably created in Microsoft PowerPoint and are thus in either a proprietary format or the (open though contentious) OOXML format. The slides might also have been created using Open Office or made available as PDF. In any case it is not the format which has allowed the slides to be embedded elsewhere; rather it is other standards, those which enable embedding, which are important (e.g. HTML elements such as IFRAME, OBJECT and EMBED).
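For illustration, here is a hedged sketch of the two common embedding patterns; the video identifier is a made-up placeholder:

<!-- Embedding via an IFRAME, as used by YouTube's newer embed code -->
<iframe src="http://www.youtube.com/embed/VIDEO_ID" width="560" height="349" frameborder="0"></iframe>

<!-- The older pattern using OBJECT and EMBED for Flash-based players -->
<object width="560" height="349">
  <param name="movie" value="http://www.youtube.com/v/VIDEO_ID" />
  <embed src="http://www.youtube.com/v/VIDEO_ID" type="application/x-shockwave-flash" width="560" height="349" />
</object>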

It's also worth noting that applications are needed which implement such interoperability. In Martin's post he has embedded objects hosted by the Slideshare, YouTube and Dipity applications. The ability to be embedded ('embeddability'?) in other environments may also depend on the policies of the hosting services. You can normally embed such objects in Web pages, but not necessarily in environments such as WordPress.com (which restricts embeddable objects to a number of well-known services such as SlideShare and YouTube). I would be interested to know if popular CMS services have similar limitations on embedding content from Web 2.0 services.

If the original objects which Martin used in his blog post had simply been embedded in their host Web environment, perhaps as an HTML resource, they could not have been easily reused within Martin's blog. Interoperability is not a simple function of the use of open standards; there are other issues, such as market acceptance, which need to be considered. And, ironically, an open format embedded on a Web page could be non-interoperable, whereas a proprietary format hosted in a Web 2.0 environment could be widely reused elsewhere.

Or to put it another way, shouldn’t we nowadays regard the provision of an HTML page on its own as a way of providing access to multiple devices but restricting use of the resource in other environments? Web 1.0 = publishing but Web 2.0 = reuse.

I'd like to conclude this post by embedding a slideshow of a talk on "So that's it for IT Services, or is it?" which I found a few days ago linked from a timetable for the HEWIT event held earlier this year. The slideshow hosted on Slideshare is clearly much more useful than the PowerPoint file linked from the HEWIT timetable – and as the HEWIT timetable has the URL http://www.gregynog.ac.uk/HEWIT/ I can't help but think that the resource could well be overwritten by next year's timetable, with the Slideshare resource possibly providing access to the resource for a longer period than the Gregynog Web site.

Posted in standards, Web2.0 | Leave a Comment »

“HTML5: If You Bang Your Head Against The Keyboard You’ll Create a Valid Document!”

Posted by Brian Kelly (UK Web Focus) on 10 December 2010

“HTML5 / CSS3 / JS  – a world of new possibilities”

I recently attended the 18th Bathcamp event entitled “Faster, cheaper, better!“.  For me the highlight of the evening was a talk by Elliott Kember (@elliottkember)  on “HTML5 / CSS3 / JS  – a world of new possibilities“.

The Elliottkember.com Web site describes Elliott as:

freelance web developer based in Bath, England
who builds and maintains high-traffic, powerful web apps,
resorts to using 32pt Georgia – sometimes in italic and printer’s primaries,
has 4978 followers on Twitter, speaks at conferences,
and wants to develop your idea into an application.

Elliott gave a fascinating run through some of the new presentational aspects of HTML5 and CSS3, appropriately using an HTML5 document to give the presentation. His slides are available at http://riothtml5slides.heroku.com/ and are well worth viewing. Note that to progress through the slides you should use the forward and back arrows – and note that Elliott was experimenting with some of the innovative aspects of HTML5 and CSS3, so the presentation might not work in all browsers.

In this post I’ll not comment on the HTML5 features which Elliott described. Rather than looking at the additional features I’ll consider the implications of the ways in which the HTML5 specification is being simplified.

HTML5's Simplicity

Elliott introduced the changes in HTML5 by pointing out its simplicity. For example an HTML 4 document required the following Doctype definition:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
"http://www.w3.org/TR/html4/strict.dtd">

whereas HTML5 simply requires:

<!doctype html>

The following illustrates a valid HTML5 document:

<!DOCTYPE html>
<title>Small HTML 5</title>
<p>Hello world</p>

As can be seen there is no requirement to include the <head> and <body> elements which are needed for an HTML 4 document to be valid (although HTML 4 documents which omit these mandatory elements will still be rendered correctly by Web browsers).

What about IE?

Over the years developments to HTML standards have always given rise to the question "What about legacy browsers?". Often the answer has been "The benefits of the new standard will be self-evident and provide sufficient motivation for organisations to deploy more modern browsers". Whether the benefits of the developments from, say, HTML 3.2 to HTML 4 and from HTML 4 to XHTML 1 have provided sufficient motivation for organisations to invest time and effort in upgrading their browsers is, however, questionable – I know I have been to institutions which still provide very dated versions of browsers on their public PCs. Whether the technology previews which tend to be demonstrated when a new version of HTML is released will be typical of mainstream use may also be questioned. So there is still a question about deploying services based on HTML5 in an environment of flawed browsers, which includes Internet Explorer; it should also be noted that other browsers may have limited support for new HTML5 (and CSS3) features.

Elliott suggests that a solution to the “What about IE?” question may be provided by a HTML5 ‘shim’. A shim (which is also sometimes referred to as a ‘shiv’) is described in Wikipedia as “a small library which transparently intercepts an API, changes the parameters passed, handles the operation itself, or redirects the operation elsewhere“.

Remy Sharp has developed what he calls the HTML5 shiv, which consists of the following three lines:

<!--[if lt IE 9]>
<script src="http://html5shiv.googlecode.com/svn/trunk/html5.js"></script>
<![endif]-->

This code provides a mechanism for IE to recognise new elements, such as the <slide> element which Elliott uses in his presentation.
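A minimal sketch of how the shiv is typically deployed is given below. Note that, since older versions of IE treat unknown elements as inline, the new elements also need to be explicitly declared as block-level in the stylesheet:

<!DOCTYPE html>
<html>
<head>
<!--[if lt IE 9]>
<script src="http://html5shiv.googlecode.com/svn/trunk/html5.js"></script>
<![endif]-->
<style>
/* Unknown elements default to inline, so make the new ones block-level */
article, section, slide { display: block; }
</style>
</head>
<body>
<article>
<section>With the shiv loaded, IE 6-8 will now style these elements.</section>
</article>
</body>
</html>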

Use it Now?

Should you start using HTML5 now?  Back in July in his plenary talk on “HTML5 (and friends): The future of web technologies – today” given at the IWMW 2010 event Patrick Lauke suggested that for new Web development work it would be appropriate to consider using HTML5.

Elliott was in agreement, with his slides  making the point that:

All decent browsers support enough of this stuff to make it worth using.

What this means is that you can start making use of the simple HTML5 doctype declaration, but rather than using every HTML5 feature documented in the specification you should check the level of support for individual features using, for example, the Periodic Table of HTML5 Elements, the HTML5 Test Web site and Wikipedia's Comparison of layout engines (HTML5), as well as running standard usability checks on an appropriate range of browsers and platforms.
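Support for individual features can also be tested at runtime, which is the approach taken by detection libraries such as Modernizr. A minimal hand-rolled sketch (the features tested are just examples):

<script type="text/javascript">
// Canvas support: the element can always be created, but only capable
// browsers expose its drawing API
var hasCanvas = !!document.createElement('canvas').getContext;

// Web Storage support
var hasLocalStorage = ('localStorage' in window);

if (!hasCanvas) {
  // Degrade gracefully, e.g. show a static image rather than drawing
}
</script>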

What About Validity of HTML5?

Following Elliott's talk there was a question about the validity of HTML5 documents. Elliott responded with a very graphic depiction of the much more liberal (if one dares use that word!) approach to validity: "If you bang your head against the keyboard you'll probably create a valid HTML5 document!"

Such an approach is based on observing how few Web resources actually conform to existing HTML specifications. In many cases browser rendering is being used as an acceptable test of conformity: if a Web page is displayed and usable in popular Web browsers then it is good enough, seems to be the situation today. "After all", asked Elliott, "how many people validate their Web pages today?" The small number of hands raised (including myself and Cameron Neylon) perhaps supported this view, and when the follow-up question "Who bothers about using closing tags on <br> elements in XHTML documents these days?" was asked, I think mine was the only hand raised.

The evidence clearly demonstrates that strict HTML validity, which was formally required in XHTML, has been rejected in the Web environment. In future, it would seem, there won't be a need to bother about escaping &s or closing empty tags, although Web authors who wish to continue such practices can do so.
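As an illustration, the first fragment below is acceptable to an HTML5 parser, whereas XHTML 1.0 demanded the stricter second form:

<!-- HTML5: unquoted attributes, an unclosed void element, an omitted </p> and a bare & -->
<img src=logo.png alt=Logo>
<p>Fish & chips<br>
Open 9 to 5

<!-- XHTML 1.0: everything quoted, escaped and explicitly closed -->
<img src="logo.png" alt="Logo" />
<p>Fish &amp; chips<br />
Open 9 to 5</p>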

What About Complex Documents?

Such simplicity seemed to be welcomed by many who attended the Bathcamp meeting. But Cameron Neylon, an open science researcher based at the Science and Technology Facilities Council, and I still had some concerns. What will the implications be if an HTML resource is used not just for display and user interaction, but as a container for structured information? How will automated tools process embedded information provided as RDFa or microdata if the look-and-feel and usability of a resource is the main mechanism for validating its internal consistency?

And what if an HTML5 document is used as a container for other structured elements, such as mathematical formulae provided using MathML or chemical formulae provided using CML?
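To make the concern concrete, here is a sketch of an HTML5 fragment carrying machine-readable markup; the vocabulary and property names are illustrative, borrowed from Google's data-vocabulary.org scheme of the time. A tool consuming such markup needs it to be structurally sound, not merely to render acceptably:

<article itemscope itemtype="http://data-vocabulary.org/Person">
  <!-- The itemprop attributes are consumed by tools, not displayed to users -->
  <h1 itemprop="name">Jane Researcher</h1>
  <p>Affiliation: <span itemprop="affiliation">University of Somewhere</span></p>
  <!-- An inline mathematical formula carried as MathML within the HTML5 document -->
  <math xmlns="http://www.w3.org/1998/Math/MathML">
    <mi>E</mi><mo>=</mo><mi>m</mi><msup><mi>c</mi><mn>2</mn></msup>
  </math>
</article>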

There are dangers that endorsing the current lax approaches to HTML validity could hinder the development of more sophisticated uses of HTML, especially in the research community. We are currently seeing researchers argue that the main document format for scientific and research papers should move away from PDF to a more open and reusable format, and HTML5 has been suggested as a possible solution. But will this require more rigorous use of the HTML5 specification? And if the marketplace chooses to deploy tools which fail to implement such approaches, will this act as a barrier to the deployment of HTML5 as a rich and interoperable format for the community?

Posted in HTML, standards | 4 Comments »

BS 8878: “Accessibility has been stuck in a rut of technical guidelines”

Posted by Brian Kelly (UK Web Focus) on 9 December 2010

Launch of the BS 8878 Web accessibility Code of practice

Yesterday I listened to a Webinar entitled "BS 8878 Explained" which was given the day after the official launch of "BS 8878 Web accessibility. Code of practice". The Code of Practice can be purchased for £100 from the BSI shop :-( Once I realised this was the case I tried to keep a note of the main points made during the Webinar. Unfortunately the PowerPoint slides which were used do not seem to have been published, so there may be mistakes in the notes I have taken. It is unfortunate that the launch of this important new code of practice was not supported by the availability of accompanying support materials – uploading the PowerPoint slides to a slide-sharing service and providing a URL on the title slide would have been simple to do. Perhaps the reason for not doing this was to maximise consultancy opportunities although, since I have learnt that a recording of the Webinar has been made available, I'm inclined to think that this was just an oversight. Note that I learnt about the availability of the recording from the TwapperKeeper archive of all #bs8878 tweets – and note that an archive of tweets for 7-8 December 2010 is also available, which may be useful if you want to view the discussions which took place during the Webinar.

Back in June I wrote a post about a draft version of BS 8878 in which I concluded:

the Code of Practice correctly acknowledges the complexities in seeking to enhance accessibility of Web products for people with disabilities.  It was also good to see the references to ‘inclusive design’ rather than the ‘universal design’ which, I feel, leads people to believe that a single universal solution is possible or, indeed desirable.

Many thanks to the people who have produced this document which gets my support.

Although I haven't read the final published version, the Webinar seems to confirm that a pragmatic and user-focussed approach to Web accessibility has been taken in the production of the code of practice. A summary of my notes from the Webinar is given below and some general comments are given at the end of this post. I should also add that the Access8878 Web site provides a free summary of the Code of Practice and that Deborah Edwards-Onoro has also published a summary of the Webinar.

Notes from the Webinar

During the Webinar Robin Christopherson, Jon Gooday and Elliot Martin gave an introduction to this new BS Code of Practice and provided a case study of how Lloyds TSB have gone about addressing accessibility issues.

The key points I noted during the talk are given below:

  • BS 8878 is user-focussed.
  • BS 8878 covers ‘Web products’ and not just Web sites (including email used over the Web; Flash;  mobile; …).  However the code of practice doesn’t cover software.
  • BS 8878 is a code of practice  which gives guidance (could, should, …) rather than detailed technical specifications.
  • It is possible to comply with BS 8878 if you implement its recommendations. It should be noted that this includes documentation of various processes and decisions.
  • BS 8878 is applicable to all types of organisations.
  • "Accessibility has been stuck in a rut of technical guidelines and a low level focus", i.e. with those working in Web teams taking a checklist approach to accessibility. BS 8878 endorses a more strategic and high-level approach; it has been described as providing a more holistic approach.

Following a Lloyds TSB case study of how they have addressed accessibility issues, the structure of the BS 8878 document was described.

The document explains why an accessibility policy is needed, with examples of such policies and accessibility statements provided in annexes to the document.

Advice is given on making 'justifiable decisions', which aim to make you think about and understand the implications of actions, and on ensuring that decisions are documented.

Section 7 of the document covers the WCAG guidelines, inclusive design (which wasn't covered in PAS 78, the previous guidance on Web accessibility) and the provision of personalised Web sites (e.g. Web sites for BSL users; style switchers; etc.).
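As a hedged sketch of the 'style switcher' approach mentioned above (the stylesheet names are made up), a personalised presentation can be offered with a few lines of HTML and JavaScript:

<link id="theme" rel="stylesheet" type="text/css" href="default.css" />

<script type="text/javascript">
// Swap the active stylesheet; a real implementation would also persist
// the user's choice, e.g. in a cookie, so it survives across pages
function switchTheme(sheet) {
  document.getElementById('theme').href = sheet;
}
</script>

<button onclick="switchTheme('high-contrast.css')">High contrast</button>
<button onclick="switchTheme('large-text.css')">Large text</button>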

Section 8 covers testing, to ensure accessibility issues are addressed throughout the testing process. The Annexes provide more detailed examples.

A significant change in the document follows changes to the DDA legislation (which has been replaced by the Equality Act) covering liability. Since the legislation applies only to services hosted in the UK, care will be needed when making use of services provided by third-party providers. [It was unclear whether this meant that, since third-party services would be exempt from UK legislation, there would be no liability, or whether the UK organisation using the service would have to accept liability.]

The heart of document is a 16 step plan:

Step 1: Define the purpose.

Step 2: Define the target audience.

Step 3: Analyse the needs of the target audience (note this wasn’t covered in PAS 78)

Step 4: Note any platform or technology preferences

Step 5: Define the relationship the product will have with its target audience

Step 6: Define the user goals and tasks

Step 7: Consider the degree of user experience the web product will aim to provide

Step 8: Consider inclusive design & user-personalised approaches to accessibility

Step 9: Choose the delivery platform to support

Step 10: Choose the target browsers, operating systems & assistive technologies to support

Step 11: Choose whether to create or procure the Web product.

Step 12: Define the web technologies to be used in the Web product

Step 13: Use Web guidelines to direct accessible web production. This step covers use of the WCAG guidelines.

Step 14: Assure the web product's accessibility through production (i.e. at all stages)

Step 15: Communicate the web product’s accessibility decisions at launch

Step 16: Plan to assure accessibility in all post-launch updates to the product

Note that BS 8878 is a very new document. The editorial team welcome feedback on experiences of using the approaches described in the document, which can be fed into the next version, due to be published in two years' time.

Observations

"BS 8878 is user-focussed": this was the most pleasing aspect of the Webinar. I have argued in the past that Web accessibility has been regarded as a feature of a resource, with the user often being invisible. It is good to see that the balance has been redressed.

"Accessibility has been stuck in a rut of technical guidelines and a low level focus": another comment I agree with. I was pleased to see that Step 13, "Use Web guidelines to direct accessible web production", is correctly regarded as just one small part of a much more sophisticated approach to addressing Web accessibility challenges.

The more process-driven approach to Web accessibility reflects the ideas which have been described in a series of papers on Web accessibility which a group of accessibility researchers and practitioners have published over the past six years or so.  In particular the BS 8878 Code of Practice implements the suggestions that:

If current approaches in the specification of accessible Web sites are flawed, what alternative approaches should be taken? The authors’ experience suggests that there is not a single specification, or set of them, that can be prescribed for accessibility. The approach that appeals to the more experienced mind is one that operates on a repertoire of techniques, policies and specifications that are worked upon freshly in each new situation. The results of this expert approach cannot be mandated as the relevant expertise cannot be distilled but the practice of consideration, and exploration can be mandated. The authors are inclined to the view that it is more the processes undertaken by authors or not, that are responsible for many accessibility problems. This suggests a process-oriented approach to accessibility rather than one based on strict technical adherence to technical specifications.

which were described in a paper on “One world, one web … but great diversity” which was presented at the W4A 2008 conference in Beijing, China.

The 16 step approach also provides a pragmatic approach to addressing the challenging areas of Web accessibility, such as the accessibility of research publications hosted in institutional repositories or the accessibility of amplified events.  At this year’s W4A 2010 conference in a paper on “Developing Countries; Developing Experiences: Approaches to Accessibility for the Real World” we proposed the following approaches:

Reasonable Measures: Rather than regarding WCAG conformance as a mandatory requirement, WCAG should be regarded as guidelines, which may be ignored if their use conflicts with other requirements – so long as steps are taken to address the potential exclusion that may result. It should be noted that UK legislation that requires use of ‘reasonable measures’ to ensure that users with disabilities are not discriminated against unfairly, provides a legislative context for this approach. A policy based on ‘seeking to make use of WCAG’ will provide the flexibility needed. This would not be possible with a policy which states that all resources must conform to WCAG.

Justification of Costs: ‘Reasonable measures’ should include identification of costs of conforming with accessibility guidelines. There should be consideration of the trade-off between financial savings and usability issues. For example the attraction of promoting open source, free assistive technology in developing countries may be tempered by the challenges of moving users away from familiar, currently-used commercial alternatives – which may in reality have been illegally obtained at low cost.

Provision of Alternatives: If it is too costly or difficult to conform with accessibility guidelines, the provision of alternatives that are as equivalent as possible may be an appropriate solution. As described in [10] the alternative need not be Web-based.

Just-in-time Accessibility: A requirement that all resources conform to WCAG is a ‘just-in-case’ solution. This may be an appropriate resource for widely accessed informational resources, but may be inappropriate if resources are expected to be little used. There may be advantages in delaying provision of accessibility solutions to allow development of technologies which can enable more cost-effective solutions to be devised.

Advocacy, Education and Training: Those involved in supporting content providers and other stakeholders should ensure that education and training on best practices is provided, together with advocacy on the needs for such best practices.

Sharing and Learning: With an emphasis on a community-based approach to the development of appropriate solutions it is important that best practices are widely shared.

Engagement of Users with Disabilities: The need to ensure that disabled people are included in the design and development of Web solutions must be emphasised.

Focus on ‘Accessibility’ rather than ‘Web Accessibility’: The benefits of Web/IT solutions to real world accessibility difficulties needs to be considered. As described above, amplified events can address difficulties in travel and access, even though the technologies used may not conform with accessibility guidelines.

When time permits it would be interesting to see how the holistic approaches to Web accessibility which we have developed (and described in our papers) map to the approaches described in the BS 8878 Code of Practice.

To conclude, I’d like to give my thanks to the contributors to the BS 8878 Code of Practice who are helping to ensure that Accessibility is no longer “stuck in a rut of technical guidelines“.


Note (added on 2 April 2012). I have been informed that the official slides on BS 8878 from its launch, together with other free information including, case studies of organisations using BS 8878, detailed blogs on its use by SMEs, tools and training for applying the Standard and news on its progress towards an International Standard, can be found on the Hassell Inclusion web site.

Posted in Accessibility | Tagged: | 10 Comments »

IWMW: Looking Back and Looking Forward

Posted by Brian Kelly (UK Web Focus) on 8 December 2010

IWMW 2011: University of Reading from 26-27 July

UKOLN's annual Institutional Web Management Workshop (IWMW 2011) will be held next year at the University of Reading on 26-27 July. We have decided that the event will run over two rather than three days in order to reduce its cost, as we are aware that budgets may be being reduced. However, since the event will start at 10 am on the first day and close at 4 pm on the second, rather than running from after lunch on day 1 to before lunch on day 3 as we have done in previous years, we hope that there will be only a slight drop in the amount of content provided.

My colleague Marieke Guy, who is the IWMW 2011 chair, recently announced the date of the event, so please keep it free in your diary if you would like to attend. We will also shortly be announcing the theme for next year's event and inviting submissions for plenary talks and workshop sessions. But prior to this we are providing an opportunity for interested parties to provide suggestions on what they would like to see addressed at the event.

Marieke has announced that we are using Ideascale to solicit suggestions for topics which participants would like to see addressed at the event. Feel free to make your suggestions on the site. We would like to extend this invitation not only to those who may wish to present at next year's event (submitting an idea to Ideascale can be a useful way of identifying interest) and to those who have attended previous events and would like to attend next year (the Ideascale service provides an opportunity for you to list the topics you would like to see covered), but also to those who may not be planning to attend but have an interest, including potential remote participants.

The Ideascale service will continue to be open alongside the call for submissions as the service might also provide an opportunity to identify last-minute topics of interest which could be addressed at the event.

Note that the ideas submitted will be used to inform the selection of submissions for the event. In order to ensure that we provide a balanced range of topics covering the requirements of the various stakeholders, we cannot guarantee that the most popular topics will be addressed at the event.

The Impact and Value of Previous IWMW Events

In addition to planning for next year's event we are also looking to gather evidence of the value and impact provided by the previous 14 IWMW events. We know from analysing the feedback forms, going back to 1997, that participants have consistently found the event useful and informative, as well as a valuable opportunity to develop and cultivate their professional networks.

However, the feedback forms completed at an event only provide the organisers with comments on the content and organisation of the event itself. But what about its longer-term value and impact? These are the issues we will be looking to understand through an evaluation survey which has recently been launched. If you have attended previous IWMW events, whether as a speaker, facilitator, exhibitor, attendee or remote participant, we would like to hear from you.

We have published an online survey form aimed at anyone who has attended an IWMW event in the past, whether as a delegate, speaker, workshop facilitator, contributor to a barcamp, exhibitor, sponsor or even as a remote participant via the live video stream which we have provided for the past few years.

We are particularly interested in feedback in the following areas:

  • Examples of the impact and value of attendance at IWMW events. This might include introduction of new working practices as a result of attendance at IWMW events; replacement of services or approaches following awareness of alternatives; collaborative work with others based on contacts made at IWMW events; evidence of financial savings; etc.
  • Examples of ways in which IWMW events have helped in hearing about / implementing innovative approaches.
  • Examples of ways in which IWMW events have helped in learning about services and areas of work funded by the JISC (e.g. JISC development programmes; JISC Innovation Support Centres including UKOLN and CETIS and JISC Services)

We hope that the many people who have attended IWMW over the years will spend a little time reflecting on the benefits which the events have provided, and will be able to find 5-10 minutes to complete the form.

Many thanks

Posted in Events | Tagged: | Leave a Comment »

Gap Analysis: They Tweeted At #online10 But Not At #scl10

Posted by Brian Kelly (UK Web Focus) on 6 December 2010

Twitter Was Popular at #Online10

Last week I attended the Online Information 2010 conference, held at Olympia in London on 30 November – 2 December.  Unfortunately due to other commitments I could only attend on the first day.  But I was able to get a feel for the discussions on the next two days by watching the #online10 column in my Tweetdeck Twitter client – and I was able to do this during what would otherwise have been unproductive times such as standing on an overcrowded bus going to work.

At the time of writing Summarizr informs me that there have been 4,342 tweets from 1,022 Twitter users. This evidence suggests that Twitter had an important role to play at the conference, enabling those users to take part in discussions centred around the various talks as well as enabling conference delegates to cultivate and develop professional relationships. Without Twitter, for example, I wouldn't have met @Ankix and, over a meal and a few pints in the Warwick Arms with longstanding Twitter colleagues @karenblakeman, @hazelh and @akenyg and @stephanbuettner, another new contact, shared experiences of the implications of the cuts across the library sector in the UK, Sweden and Germany.

Little Use of Twitter at #SCL2010

On the same day that I gave a talk at Online Information I was also presenting, via a pre-recorded video, at the Scholarly Communication Landscape: Opportunities and Challenges symposium held at the Manchester Conference Centre. For this one-day conference Summarizr informs us that there were only 38 tweets from 6 Twitter users, and only my colleague Stephanie Taylor (who was supporting my video presentation) and Kevin Ashley (DCC Director and a speaker at the symposium) tweeted more than once. So whilst the far smaller number of tweets for this symposium will be due in part to it being a smaller event, running for a single day, the lack of any participation from the audience is, I feel, interesting.

The page about the event informs us that the symposium aims to "investigate the opportunities and challenges presented by the technological, financial and social developments that are transforming scholarly communication", with the programme going on to add that "Online social networks are playing an increasingly important role in scholarly communication. These virtual communities are bringing together geographically dispersed researchers to create an entirely new way of doing research and creating scholarly work."

Quite. But this one-day event, which was open to all staff and postgraduate research students at the University of Manchester, seems to have been unsuccessful in providing an opportunity for participants to try out Twitter for themselves: an example of a popular online social network which is playing an increasingly important role in scholarly communication, as we saw from the evidence of its use at the Online Information 2010 conference. But rather than point out what the non-users of Twitter may have been missing (such as the active learning and community engagement described above), it might be more interesting to reflect on the more general issues of how non-users of a service can be identified and how one might gain feedback from them.

Gap Analysis

Getting feedback from users of a service can be easy: you know who they are and you will often have communication channels through which you can invite feedback. Getting feedback from non-users can be much more difficult, although such feedback can be immensely valuable in understanding why a service isn't being used and in ensuring that enthusiastic users don't give a misleading impression of the benefits.

It might be useful to speculate on why services aren't being used. Possible reasons for the lack of Twitter use by the audience at the Scholarly Communication Landscape symposium could be:

  • Technology problems: lack of or problems with a WiFi network could be responsible for a lack of event-related tweets.
  • Technology limitations: Potential Twitter users may feel that use of a Twitter client at an event is too complex.
  • It’s trivial: Twitter might be regarded as a trivial activity.
  • It's rude: Use of Twitter at an event might be regarded as rude and inconsiderate to other participants and to the speakers.
  • Personal/professional balance: Twitter users may use it for personal rather than professional purposes.
  • Failure to see relevance: Participants may fail to see the benefits of use of Twitter at events.
  • Relevance not applicable: Participants may appreciate potential benefits of use of Twitter at events but feel such benefits are not applicable for them.
  • Style of working: Use of Twitter (or networked technologies) may not be relevant to personal styles of working.
  • Organisational culture: managers or others in the organisation may frown on such usage.

These are some of my thoughts on why Twitter might not have been used at the symposium; you may be able to provide additional suggestions. But how do we find out the real reasons, as opposed to our speculations? And how do we apply approaches to gap analysis to other areas besides use of Twitter? For example, in light of the subject areas covered at the event, how could we gauge views on areas such as openness and institutional repositories? How can we gather evidence in order to inform policies on, say, the deployment and use of new services or approaches?

Increasingly I'm beginning to think that these types of events should be much more than dissemination channels, and should provide feedback mechanisms enabling responses to be gathered and aggregated views to be analysed. For an event aimed at staff and postgraduate research students at an institution, such as the Scholarly Communication Landscape symposium, there would seem to have been an ideal opportunity to gain feedback on the opportunities and challenges in the area of scholarly communications. And those opportunities and challenges will be shared by many others in the higher education sector.

My concluding thoughts: events can provide a valuable opportunity for gathering feedback and comments on the areas they address. There is an opportunity to gather such feedback using simple technologies, where gathering it in other ways may be very costly. Open sharing of such feedback can benefit the wider community. So let's do it.

Or, to provide a more tangible example: one could ask an audience from one's host institution whether they would be interested in using a communications tool such as Twitter or Yammer to support work activities, or perhaps whether staff would be willing to make their professional outputs available under a Creative Commons licence. An example of how this might be approached is given below.
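(The original post embedded a poll at this point. As a hedged sketch, the question could be asked with a simple in-page form along the following lines; the wording and the form handler address are hypothetical.)

<form action="http://example.org/poll" method="post">
  <p>Would you be willing to use a tool such as Twitter or Yammer to support your work activities?</p>
  <p><input type="radio" name="answer" value="yes" /> Yes</p>
  <p><input type="radio" name="answer" value="no" /> No</p>
  <p><input type="radio" name="answer" value="unsure" /> Not sure</p>
  <p><input type="submit" value="Vote" /></p>
</form>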

Posted in General, Twitter, Web2.0 | Leave a Comment »

Universities in Wales Told to ‘Adapt or Die’ But How Should They Adapt?

Posted by Brian Kelly (UK Web Focus) on 4 December 2010

Yesterday’s headline on the BBC News Web site was blunt: “Universities in Wales told to ‘adapt or die’“.  The article went on:

Education Minister Leighton Andrews has told universities and further education colleges in Wales that there will be fewer of them by 2013.

Mr Andrews told the Institute of Welsh Affairs’ conference higher education institutions must “adapt or die”.

He warned their future funding would depend on a willingness to “progress swiftly to merger and reconfiguration”.

The minister went on  to inform an audience in Carmarthen that “the higher education sector’s failure to respond to reconfigure and collaborate as the government intended was costing it money“.

What are the implications for those working in IT and Library departments in Welsh HEIs? And what warning signals are being sent to HEIs in England and Wales? There seem to be two key themes:

  • Centralisation
  • Embracing change

I think those working in IT need to be seen to be responding to such drivers. And, despite the downsizing in numbers of institutions, there can also be opportunities:

There will be fewer HEI’s in Wales by 2013 and fewer vice-chancellors. That does not mean fewer students or fewer campuses.

But the opportunities provided by continuing to support similar numbers of students may not be reaped by those working in support services:

[The minister] repeated the findings of a PriceWaterhouseCoopers review which showed HE in Wales spends 48% of its budgets delivering teaching and research but 52% on support services.

So there's a need either to challenge such findings or to demonstrate tangible benefits provided by the 52% of the budget used by support services.

Hmm, should Welsh library services be the first to respond to the need to demonstrate their value? Haven't Welsh academic library services just been given the political and economic driver to respond to yesterday's post, in which I argued that the library data which is currently gathered by library services and submitted to SCONUL, but then available only to subscribers and only in PDF format, should be made freely available?

How will Welsh service departments, such as Library Services, IT Services and Web teams, react? Doing nothing is clearly not an option. Of course, whilst the small number of Welsh universities provides an opportunity for the government to explore ways of reducing levels of funding, the close-knit sector can also allow those working in service departments to respond more rapidly than would be the case in English universities.

Support services in Welsh universities have well-established communities and events, such as the annual HEWIT conference, where earlier this year David Harrison gave a talk on "2010… so that's it for IT Services … or is it?" (MS PowerPoint format). In addition there is the Wales Higher Education Libraries Forum (WHELF), a grouping of chief librarians and directors of information services drawn from all the higher education institutions in Wales. From the WHELF blog I learn that WHELF activities include:

  • Raising the profile of services and developments in Welsh HE library and information services in our own institutions, in Wales and beyond;
  • Influencing policy makers and funders on matters of shared interest;
  • Implementing collaborative services and developments for the mutual benefit of members institutions and their users;
  • Working with other organisations, sectors and domains in support of the development of a cooperative library network in Wales and the UK;
  • Providing mutual support and opportunities for the sharing of good practice through meetings, mailing lists etc;
  • Providing staff development and training opportunities for member institutions;
  • Collection, dissemination and evaluation of statistics from member institutions.

The first and final bullet points suggest that WHELF might be an appropriate organisation to push the opening up of statistics in order to raise the profile of the work being done in libraries. But on the other hand, might organisational inertia prevent such a grouping from responding quickly? Sarah Wickham yesterday suggested (in a personal capacity) an alternative approach:

The Freedom of Information Act 2000 both provides a right of access to information held by public authorities (including individual HEIs, but not including Sconul) as well as requiring authorities to publish information pro-actively through their publication scheme. A request for the raw data could be made under the Act to each institution. A requester could then analyse the data for dissemination as a service to the wider community …

I can't see any other information I would expect institutions may wish to withhold – other than the budgetary information for the current financial year, where occasionally an argument may be made for withholding (although exemptions are all subject to the public interest test).

The wider community will be looking to see how our Welsh colleagues respond!

Posted in Finances | 2 Comments »

Impact, Openness and Libraries

Posted by Brian Kelly (UK Web Focus) on 3 December 2010

"Measuring Impact" is the theme of the December 2010 issue of CILIP's Library and Information Update magazine. In an editorial piece entitled "Capturing Numeric Data to Make an Evidence Based Approach" Elspeth Hyams provides a shocking revelation: school libraries have very little impact. Or at least that's how a review commissioned by the Department of Culture, Media and Sport is being spun. The reality, as described in an article by Javier Stanziola published in CILIP Update, is that "studies of library impact are hard to find" – a quite different story. The article, "Numbers and Pictures: Capturing the Impact of Public Libraries", suggests that "the sector is not playing the Prove Your Impact game well". I agree, and this criticism can be applied to the higher education sector too.

Elspeth feels that future longitudinal research will depend on data collection by frontline services (and knowing what data to collect).  The editorial concludes “So whether we like it or not, we would be wise to learn the ways of social scientists and the language of policy making“.

The importance of gathering data in order to demonstrate impact and value underpinned a session I ran recently on "'Sixty Minutes To Save Libraries': Gathering Evidence to Demonstrate Library Services' Impact and Value". As described in a post on "Gathering and Using Evidence of the Value of Libraries" which reviewed the session, we did identify relevant sources of data, collated annually by SCONUL from information provided by academic libraries, which could be used to demonstrate value and impact and, if aggregated, could raise the profile of the academic library sector.

As described on the SCONUL Web site:

SCONUL has been collecting and publishing statistics from university libraries for over twelve years, with the aim of providing sound information on which policy decisions can be based.

Further information is provided which informs readers that “All UK HE libraries are invited to complete the SCONUL Statistical Questionnaire, which forms the foundation of all SCONUL’s statistical reports and services. The questionnaire details library resources, usage, income and expenditure for any academic year.

However, as was discussed at the session, the SCONUL data is not publicly available. It seems that the SCONUL Annual Library Statistics is published yearly – and copies cost £80.

And here we have a problem. As I write this post the SCONUL 2010 conference is taking place and, via the #sconul10 hashtag, I can see that Twitterers at the event are summarising the key aspects of the various talks:

SCONUL can help us promote the value of libraries to wider world/senior people (see tweet)

We need to be a more self-confident community – blow our own trumpet e.g. about our track record with shared services (see tweet)

Again I agree.  But the closed nature of the statistics is a barrier to blowing one’s own trumpet and promoting the value of libraries.

Perhaps more importantly in today's climes, the closed nature of the report and the underlying data (closed by its price, closed by being available only to member organisations and closed by being available only in PDF format) means that perceptions of secrecy go against expectations that public sector organisations should be open and transparent.

And whilst one might expect certain public sector organisations to have a tendency to be closed and protective (the Ministry of Defence, perhaps), one might expect libraries, with their characteristics of trust and openness, to see the advantages of openness as an underlying philosophy, as well as its appropriateness in today's political environment.

A few days ago I attended the Online Information 2010 conference. I particularly enjoyed the talk on “The Good (and Bad) News About Open Data”  by Chris Taggart of openlylocal.com, “a prototype/proof-of-concept for opening up local authority data … [where] everything is open data, free for reuse by all (including commercially)“.

In Chris's presentation he described the potential benefits which openness can provide, listed concerns which are frequently raised and gave responses to those concerns. Rather than trying to apply Chris's approaches in the context of the academic library data collated by SCONUL, I will simply link to Chris's presentation, which is available on Slideshare and embedded below.

So if the following arguments are being used to maintain the status quo, remember that increasing numbers of councils have already found their own ways of addressing such concerns:

  • People & organisations see it as a threat (and it is if you are wedded to the status quo, or an intermediary that doesn’t add anything)
  • The data is messy e.g. tied up in PDFs, Word documents, or arbitrary web pages
  • The data is bad
  • The data is complex
  • The data is proprietary
  • The data contains personal info
  • The data will expose incompetence
  • The tools are poor and data literacy in the community is low

I began this post by citing the sub-heading of an article published in CILIP Update: "the sector is not playing the Prove Your Impact game well". Are academic libraries playing the game well? Can they change? Or will SCONUL be regarded as an intermediary wedded to the status quo? Or might the change be driven by a bottom-up approach? After all, since the individual institutions collate the information before submitting it to SCONUL, could the raw data be published by the individual institutions?

Posted in Impact, openness | 5 Comments »

Analysis of the 2010 Survey of UK Web Focus Blog

Posted by Brian Kelly (UK Web Focus) on 2 December 2010

An online survey aimed at readers of the UK Web Focus blog was announced on 1 November, the fourth anniversary of the launch of the blog. The survey was open for three weeks and attracted 27 responses. In comparison a survey carried out in 2007, shortly before the first anniversary, received 39 responses during the four week period the survey was open.

Some of the particularly noteworthy comments received were:

  • Excellent content, among the very best in the field.
  • Don’t stop. This is a fabulous resource.
  • The attention this blog generates for UKOLN’s activities are worth a great deal – buying that attention through more traditional forms of marketing would be very expensive. Seen that way, it’s easy to justify the effort that goes into it.
  • This blog is consistently thought-provoking and helps me to formulate my own ideas about the use of technology in libraries and HE. Definitely a leader in its field.
  • I have followed this blog for about four years and it is consistently ahead of the game – without alienating me with too much “early adopter” zeal. I respect Brian’s judgement and if he mentions something I know I need to find out about it; so he acts as a filter for all the other tech info on the Web – and his impartiality is vital to this role (unlike, say, Wired who need to keep their sponsors on side). Who could we rely upon to do the horizon scanning for us without this blog?

The complete set of responses is available with a summary of the findings and accompanying discussions given below.

Overview

From those who were willing to have their name and affiliation published it was pleasing to see a diversity of institutions represented. There were three responses from overseas (Stephen Downes, National Research Council, Canada; Wendell Dryden, also from Canada; and Alistair Grant from Australia) and responses from well-known figures in the JISC development community (Kevin Ashley, Director of the DCC; Les Carr and Chris Gutteridge from the University of Southampton; and Tony Hirst from the Open University). Other respondents were based in research departments closely involved with JISC work (Jo Alcock, Evidence Base, Birmingham City University and Virginia Knight, ILRT, University of Bristol), together with those responsible for the provision of institutional Web services (Anthony Leonard, University of York; Drew McConnell, University of Glasgow; and David Williams, Sheffield Hallam University), library services (Mark Clowes, Faculty Team Librarian, University of Leeds), other researchers (Jethro Binks, University of Strathclyde) and those involved in dissemination work across the sector (Martin Hawksey, JISC RSC Scotland N&E). These individuals represent a good cross-section of the main target audiences for the blog; if they are representative of all those who completed the survey, this should provide an indicative view of the blog's main audiences.

The Content

I was pleased to see the positive comments made about the content of the blog. I was particularly pleased that Stephen Downes, an internationally renowned Canadian e-learning guru, had found the time to respond with the comment "Excellent content, among the very best in the field." and to add "Don't stop. This is a fabulous resource."

Other comments included:

  • very pertinent and insightful
  • Open notebook approach with an aim of demonstrating/illustrating the practice preached. Handy round-ups of what’s going on across the HE sector
  • Very relevant to me. Always thought provoking and interesting.
  • I’ve found your blog useful as it covers some of the same issues that we’re dealing with in our project (MeCAT, http://mecatproj.wordpress.com/) and have been able to use it to help clarify some of my thoughts.
  • A great range of content – some of which is outside of my main areas of interest/technical knowledge, but much of which is very interesting and has inspired me to blog more exploring my own experiences with various technologies
  • Often sparks thought / interest.
  • Interesting and varied
  • I have yet to have found anything not worth reading. I enjoy the fact that it has wide scope.
  • For me the content hits the spot covering topics I’m interested in.

But what didn't the respondents like?

  • Occasional political and rapper dancing mentions add colour, though they may jar with some readers.
  • A bit variable in the depth of thought/research included, but this is to be expected for such prolific output on a wide number of fronts. Perhaps a little too much about Twitter, much as I love that particular channel!
  • Lots of times the blog talks about things I don’t understand, using terminology I’m not familiar with, so there are probably many other aspects of the site that I’m missing.
  • It’s good reading; it’s often messy, but it’s consistently messy so I don’t find that a problem. I know what to expect. The fact that I sometimes don’t agree is what makes it worth reading.

I wouldn’t disagree with these comments. The open notebook approach I take does mean that the quality of content is likely to be variable.

Regarding the frequency of publication most of the respondents felt that this was about right:

  • The frequency of posts is about right for me
  • You’ve got it about right.
  • The frequency feels about right to me.
  • I dip in and out and read a bunch of articles at a time, rather than most every day as published, but publishing frequency is fine.
  • Again frequency is fine
  • The posts are frequent enough that I check daily – no pressure LOL!
  • It’s OK.
  • Just about right
  • about right
  • Probably wouldn’t be able to keep up if it were more frequent.
  • I like the high frequency. It's an impressive output, and gives the thoughts an up to the minute feel. It's to Brian's credit that he adds thinking and value to very recent news, rather than simply regurgitating it like so many blogs.

although one respondent commented that:

  • Can’t always keep up with the frequency of the posts so occasionally miss some (if I’m behind on my RSS/Twitter)

Other comments related to the content of the blog included:

  • Don’t stop. This is a fabulous resource.
  • I’m afraid I don’t read blogs as much as I used to – I could blame twitter or the number of other commitments I currently have…
  • There’s a LOT of text on the home page – plus 4 other tabs which I’m sure I’ll never read.
    It’s none of my business, of course. But there’s a LOT of text.
  • Thank you for providing both informative and challenging posts over the last 4 years.
  • How often do you comment on other blogs? Is it an important part of your practise?
  • Sustainability is I think the elephant in the room of many IT services, including web-based ones. It needs more discussion. Brian always promotes accessibility also, which is very important and too easily ignored.
    The key benefit of this blog for me is that I think it’s the primary channel for web managers to discuss with each other. I rarely use email lists these days, and see greater benefit from a led discussion on blogs such as yours, with more free form heads-up messages happening via Twitter

Accessing The Blog

There are some interesting comparisons between how people are now accessing the blog and the findings published in the last survey, carried out in 2007.

Back then 59% read the blog using an RSS reader and 20% visited the web site, with 10% reading the blog at an alternative location (e.g. the Emerge, OSS Watch or MyBlogLog blog aggregators). This year 56% of those responding used an RSS reader and 37% visited the blog site.

In 2007 64% used a MS Windows platform, 26% used an Apple Macintosh and 10% used Linux/Unix, with nobody reporting use of PDAs, mobile devices or digital TVs. This year 67% of those responding used MS Windows; 26% used an Apple Macintosh; 4% Linux and 4% Android.

Whilst these figures do not indicate any significant changes in primary platform, a shift was highlighted in a new question this year which asked about secondary platforms used to read blog posts. Here we found that 18% of those responding use MS Windows; 12% use an Apple Mac; 18% Linux; 41% iPod; 18% Android Smartphone; 6% Other Smartphone; 12% iPad and 6% Android phone. These figures sum to well over 100%, presumably because respondents could select more than one secondary platform.
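
As an aside for anyone puzzled by the totals: with a multi-select question each option’s percentage is taken over the number of respondents rather than the number of ticks, so the figures can legitimately sum to more than 100%. A minimal sketch of the calculation in Python, using hypothetical data rather than the actual survey responses:

    # Minimal sketch: percentages for a multi-select question are computed
    # per respondent, so they can legitimately sum to more than 100%.
    # The responses below are hypothetical, not the actual survey data.
    from collections import Counter

    responses = [
        {"MS Windows", "iPod"},
        {"Linux", "Android smartphone"},
        {"iPad", "MS Windows"},
    ]

    counts = Counter(platform for r in responses for platform in r)
    for platform, n in counts.most_common():
        print(f"{platform}: {100 * n / len(responses):.0f}%")

Here three respondents produce six ticks, so the printed percentages total around 200% – the same effect, in miniature, as in the survey figures above.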

What seems to have happened is that in 2007 people read the blog whilst at work using their office computer. This year that pattern of usage seems to be the same but, in addition, people use a mobile device to read posts at home, whilst travelling, whilst at conferences, etc. Such additional ways of accessing the blog may be, in part, responsible for the increased traffic to the blog.

Sustainability

A question about the sustainability of the blog sought to gain feedback on the value placed on the blog and its relevance at a time when blogs are supposedly no longer being read. Sustainability also covers the sustainability of the WordPress.com service and, of course, the prioritisation of work activities at a time of change across the higher education sector. The following comments were made:

  • I think your policy is very sensible
  • It’s more about the ‘sustainability’ of your job, isn’t it ? This is now a very personal blog, and it survives or not because you want to do it (and are able to do it.) The attention this blog generates for UKOLN’s activities are worth a great deal – buying that attention through more traditional forms of marketing would be very expensive. Seen that way, it’s easy to justify the effort that goes into it.
  • I don’t think it matters where the blog is hosted and often this becomes transparent as RSS is the delivery mechanism for me. Your biggest overhead is you: how do you make your posts sustainable, or would you continue UK Web Focus regardless?
  • I sustain my blog through use of free tools (Google Blogger, etc.) – but it’s not an official blog; i.e., it doesn’t represent an institution or organization. (I’m also interested in seeing how others use free tools.)
  • Good question. You should maintain it no matter what, if only because it will be a major calling card when you apply for your next job.
  • If you think communication is an important part of your role, and the keeping of a notebook something you need to do anyway, what’s the overhead?
  • I think it is very sustainable! Cloud services are the apotheosis of server consolidation, and the data should be portable meaning the effort involved in re-use / archiving is as small as it can be currently.
  • Wish I had a solution.
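
The comment above about data portability is worth illustrating. Since the blog is syndicated via RSS, recent posts can be pulled down and archived with a few lines of code. A minimal sketch using the Python feedparser library – note that the feed address is assumed from the standard WordPress.com convention rather than confirmed:

    # Minimal sketch: archive recent posts via the blog's RSS feed.
    # The feed URL is an assumption (standard WordPress.com convention).
    import feedparser

    FEED_URL = "https://ukwebfocus.wordpress.com/feed/"

    feed = feedparser.parse(FEED_URL)
    for entry in feed.entries:
        # Each entry exposes a title, link and publication date; storing
        # these (plus entry.summary) gives a crude but portable archive
        # of the most recent posts.
        print(entry.get("published", ""), entry.title, entry.link)

An RSS feed only carries the most recent posts, of course; a complete archive would use the WordPress export facility instead.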

Other Indicators

In addition to the comments which have been received, it should also be noted that the number of visits to the blog has increased every month for the past six months, with this month seeing 9,500+ visits – over 335 per day.
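(For context: 9,500 visits spread over roughly 28 days of the month to date averages around 340 per day, consistent with the figure quoted; the elapsed-day count is an estimate.)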

The blog is also listed in 41st place in Wikio’s list of top technology blogs. Although the relevance of such indicators may be questioned, I’m happy to be positioned between two other blogs I rate highly: Tony Hirst’s OUseful blog and Martin Weller’s The Ed Techie blog. For the sake of completeness, Technorati gives the blog an overall authority of 518 and a ranking of 3,261; in the Technology category, an authority of 543 and a ranking of 397; and in the Info Tech category, an authority of 578 and a ranking of 167. Again, whilst the relevance of such figures may be questioned, I feel it is worth keeping a record in case of, for example, requests for indicators of the value of this work.

Posted in Blog, Evidence | 4 Comments »

Understanding Disruptive Innovations: Your Input Needed

Posted by Brian Kelly (UK Web Focus) on 1 December 2010

In a recent blog post CETIS Director Adam Cooper asks “Whither innovation in educational institutions in these times of dramatic cuts in public spending and radical change in the student fees and funding arrangements for teaching in universities?“. The post goes on to suggest that innovation follows adversity and that “necessity is the mother of invention”, and introduces the term “disruptive innovation” to describe the way well-run businesses can be disrupted by newcomers with good-enough offerings that focus on core customer needs (low-end disruption).

In order to better understand the potentially disruptive innovations (or opportunities to weather the storm) posed by the combination of technological developments and the changing economic and political climate, UKOLN and CETIS (JISC Innovation Support Centres which help to support and further the work of the JISC Innovation group) are using a feedback tool to gather, and allow ranking of, such innovations. The tool asks the question:

“Which ICT-based innovations are potentially disruptive to current models of higher education (forms of teaching, assessment, course structure, estate, research and research management, student management, etc…)?”

This feedback tool will be available until 10 December. We invite your participation – and feel free to disseminate the URL (http://tinyurl.com/disruption2010) to others.

Posted in General | Leave a Comment »