UK Web Focus (Brian Kelly)

Innovation and best practices for the Web

Archive for Apr, 2011

Education: Addressing the gaps between the fun and the anxieties

Posted by Brian Kelly on 30 Apr 2011

Later today, if my 3 minute talk is selected, I’ll be giving my thoughts on education at the #purposedpsi event in Sheffield. Purpos/ed is “a non-partisan, location-independent organization aiming to kickstart a debate around the question: What’s the purpose of education?”. I made my contribution in a recent post entitled “Education Will Make Us Anxious“. My brief presentation builds on this idea (which was taken from a post by Dave White) and mashes it up with Tom Barrett’s comment that “Education should be about cradling happiness”. I feel that both ideas are true – and the challenge for those of us working in the education profession lies in understanding and addressing the gap.

A 3 minute  slidecast of a rehearsal of my talk is available on Slideshare and is embedded below.


I’d like to acknowledge the photographs used in the talk: “Hippie Carnival Arambol (Goa)”, used on slide two, was taken by ‘PeterQ‘ and is available on Flickr; “Anxious”, used on slide three, was taken by ‘Phoney Nickle‘ and is available on Flickr under a Creative Commons BY-NC-ND licence. I am grateful for permission to use these images.


Note also that the script for the talk is given below.

My name is Brian Kelly. I’m based at UKOLN at the University of Bath and this is my contribution to the Purpos/ed campaign on the future of education.

In his blog post on the purpose of education Tom Barrett suggested that “Education should be about cradling happiness”. Another, more familiar, way of putting this is that “learning should be fun”. And the fun that we can have in learning new things – which can include social learning, such as the dancing illustrated in this photograph, as well as scholarly learning – need not be restricted to the learner. It can also be fun teaching.

In contrast to Tom Barrett’s comment, Dave White felt that “Education should make us anxious”. We will all have experienced feelings of anxiety, whether due, as in my case, to the difficulties of memorising irregular French verbs, understanding molecular chemistry or, more recently, trying to learn a new rapper sword dance.

But just as the fun aspects of learning aren’t restricted to the learner, so the feelings of anxiety will be felt by others involved in learning: the teacher wondering whether the approaches they are taking are working and whether they’ve chosen the right resources for the learner.  Similarly those involved in use of technology to enhance learning may be worried whether the right technological approaches are being used.  Is the Social Web, for example, really an appropriate mechanism for supporting informal learning?  Was the open source VLE environment the right choice? And policy makers may secretly be anxious over changes in policies: “I’m suggesting that new approaches to learning will be more effective than those used 30 years ago – but I did OK from the old styles of learning – what if I’m wrong and the ‘back to basics’ campaigners are right?  After all, I’ve little evidence of the benefits of the new approaches.”

For me, then, education is about understanding and addressing the gaps between the feeling of fun and excitement in learning something new and the feelings of anxiety which we may sometimes forget about.  And let me point out that I’m not suggesting that the gap should be removed – I don’t think this is possible.  Let me quote in full Dave White’s comment on anxiety:

 “… education should make us anxious: anxious to discover new ways of understanding and influencing the world.  It should challenge our ways of seeing and force us to question our identities and our positions.”

Learning professionals – and learning organisations – will continually strive to discover new ways of influencing learners and the learning processes. We will always be anxious. There will always be tensions. This is the challenge of the profession we have chosen.

Posted in Events, General | Tagged: | 1 Comment »

What I Like and Don’t Like About iamResearcher

Posted by Brian Kelly on 27 Apr 2011

I was recently told about iamResearcher, a repository of information about researchers and their research activities. “Not another one!” was one reaction I heard. But is there anything that can be learnt from this service, which has been developed by Mr Yang Yang, an MSc student at the University of Southampton? Les Carr, over on his Repository Man blog, has been “Experimenting With Repository UI Design” and describes how he is “always on the lookout for engaging UI paradigms to inspire repository design“. Might this service provide any new UI design paradigms?

Things I Like

I have to admit that I was pleased with how easy it was to get started with the service. I signed up and asked the system to find papers associated with my email address. It found many of my papers, with much of the metadata being obtained from the University of Bath Opus repository. I then searched for other papers which weren’t included in the initial set and was able to claim them as belonging to me – including one short paper which had been published in the Russian Digital Libraries Journal in 2000 and which I had forgotten about.

I can now view my 49 entries and sort them in various ways: in addition to the default date order I can also sort by item type; lead author; co-authors and keywords. The view of my co-authors (illustrated) was of particular interest: I hadn’t realised that I had written papers with 55 others.

In comparison the interface provided on my institutional repository service does now seem quite dated. However this is perhaps not unexpected as according to the Wikipedia entry the ePrints software (which is widely used across the UK) was created way back in 2000.

Revisiting the question as to whether we need another service which provides access to research information, I would say ‘yes’. Such developments can help drive innovation. In this case the ePrints developers are in a position to see more modern approaches to the user interface. In addition the service describes itself as a “Web 3.0 ready application”, by which they seem to mean that the service “connects researcher and research students anywhere in the world using an intelligent network”.

I haven’t seen much evidence of Web 3.0 capabilities in the service, apart from being able to download details of my papers in FOAF format, but perhaps the word “ready” signals that such functionality is not yet available.
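FOAF data is normally serialised as RDF/XML, so even without a full RDF toolkit it is possible to inspect such a download. A minimal sketch using only the Python standard library (the sample data is invented for illustration; a real FOAF export would be richer and might use other RDF serialisations, which would need a proper RDF parser):

```python
import xml.etree.ElementTree as ET

FOAF = "http://xmlns.com/foaf/0.1/"

def foaf_names(xml_text):
    """Extract all foaf:name values from an RDF/XML FOAF document."""
    root = ET.fromstring(xml_text)
    return [el.text for el in root.iter("{%s}name" % FOAF)]

# Invented sample of the kind of data a FOAF export might contain
sample = """<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
                     xmlns:foaf="http://xmlns.com/foaf/0.1/">
  <foaf:Person><foaf:name>Brian Kelly</foaf:name></foaf:Person>
  <foaf:Person><foaf:name>A. N. Other</foaf:name></foaf:Person>
</rdf:RDF>"""

print(foaf_names(sample))
# → ['Brian Kelly', 'A. N. Other']
```

A script along these lines could, for example, count the distinct co-authors in an exported publication list.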

Things I Don’t Like

There are some typos on the data entry forms and some usability niggles, but nothing too significant – indeed after attending a recent Bathcamp Startup Night and hearing the suggestion that “If you’re not embarrassed about the launch version of your software then you released it too late” (a quote from the founder of LinkedIn) I welcome seeing this service before everything has been thoroughly checked.

The language used in the terms of service is somewhat worrying, however:

No Injunctive Relief.
In no event shall you seek or be entitled to rescission, injunctive or other equitable relief, or to enjoin or restrain the operation of the Service, exploitation of any advertising or other materials issued in connection therewith, or exploitation of the Services or any content or other material used or displayed through the Services.

It also seems that as a user of the service I undertake not to:

Duplicate, license, sublicense, publish, broadcast, transmit, distribute, perform, display, sell, rebrand, or otherwise transfer information found on iamResearcher (excluding content posted by you) except as permitted in this Agreement, iamResearcher’s developer terms and policies, or as expressly authorized by iamResearcher

Hmm. The service harvested its metadata from other repository services, such as the University of Bath’s Opus repository but does not allow others to reuse its content. This seems to undermine the benefits provided by permitting (indeed encouraging) others to make use of open data. In addition the service appears to be hypocritical, as the University of Bath’s repository policy (which was created using the OpenDOAR Policy tool) states that “The metadata must not be re-used in any medium for commercial purposes without formal permission“. Now the service does not appear to be a commercial service – but its privacy policy states that “To support the Services we provide at no cost to our Users, as well as provide a more relevant and useful experience for our Users, we serve our own ads and also allow third party advertisements on the site“. If advertising does appear on the service, won’t it then be breaching the terms and conditions of the service from which it harvested its data?

Personally I have no problem with advertising being used to fund services where, as in this case, there are multiple providers of services. Indeed those who argue for openness of data should be willing to accept that data may be used for commercial purposes. However services which take advantage of the opportunities provided by open data should in turn provide similar conditions of usage.

The final concern that I have about the service is that currently it can only be accessed if you sign in. I feel this is counter-productive – indeed one person I mentioned this service to asked why he should bother. That’s a fair comment, I think. And seeing that the terms and conditions also state that users of the service are not allowed to:

Deep-link to the Site for any purpose, (i.e. including a link to a iamResearcher web page other than iamResearcher’s home page) unless expressly authorized in writing by iamResearcher or for the purpose of promoting your profile or a Group on iamResearcher as set forth in the Brand Guidelines;

I now wonder what benefits this service can provide to the research community. Developers of other repository services, however, should be able to learn from the technological enhancements the service provides, even if the business model is questionable.

Twitter conversation from Topsy: [View]

Posted in openness, Repositories | Tagged: | 8 Comments »

Education Will Make Us Anxious

Posted by Brian Kelly on 26 Apr 2011

#Purposed, #purposedpsi and #500words

On Saturday I am attending the #purposedpsi event in Sheffield.  Purpos/ed is “a non-partisan, location-independent organization aiming to kickstart a debate around the question: What’s the purpose of education?” In the run-up to the event the campaign has encouraged people to contribute 500 words on their own blogs on the purpose of education.

This has been followed with the 3×5 Flickr mashups campaign which encourages people to read the blogs, identify an interesting quotation and post an annotated image illustrating the quotation to the Purpos/ed Flickr group.

I was a bit late finding out about the campaign but as I’ll be attending the afternoon meeting later this week I thought I would give my thoughts in 500 words on a post by Dave White, a member of the ALT 2010 Learning Technologist Team of the Year, entitled Education should make us anxious.

“Education should make us anxious”

The comment which Dave White published on his blog as part of the #500 words campaign is worth reading in context:

My view is that education should make us anxious: anxious to discover new ways of understanding and influencing the world.  It should challenge our ways of seeing and force us to question our identities and our positions. Education should disrupt as much as it builds, ultimately teaching how to learn not what to learn.

I’m sure we have all had that feeling of anxiety when our world view is disrupted by new ideas: as we start to understand them, they change how we behave and how we act.  These unsettling moments were also mentioned by Josie Fraser, who in her contribution to the debate suggested that:

Education should critically ensure children, young people and adults are equipped to be unsettled, to be confronted by difference, to be changed, and to effect change.

In my attempt to belatedly participate in the #purposed campaigns I looked for a Flickr image which could be used to illustrate the quotations from Dave White’s post.

Rather than using an image of an individual looking anxious I chose this poster showing different types of sport anxieties. It’s easy to see how some of these ideas could be applied to the anxieties felt by learners:

  • I’ll look stupid and be belittled by cleverer kids.
  • They’ll make fun of my mistakes.

The poster also suggests that teachers and learning technologists  as well as learners can be anxious:

  • I do my best as a teacher and all I get back is booed at.

If they were capable of feelings the learning technologies themselves might be anxious.  Perhaps Mr Blackboard might be saying “I get shot at and can’t escape“.  On the other hand it might be the institutions themselves which are being shot at.

Learning institutions are anxious

Rather than exploring the issue of the anxieties which education can cause for the learner I’d like to conclude my 500 words by reflecting on the anxieties which educational institutions will be facing. But rather than commenting on the easy target of the cuts the educational sector is currently facing I’d like to suggest that tensions between learning organisations and learners will always be present.  This was something I was unaware of when I was an Information Officer in an IT Services department at the University of Leeds.  Colleagues in the department had to identify the ‘best’ application in various areas, with my role being to provide documentation and training for the recommended applications.  We’d chosen the most appropriate office applications, data visualisation tools and statistical applications and, around the time I left IT support, I was hearing the ‘VLE’ term being used.  Which would be the best VLE, I wondered?  Blackboard and WebCT seemed popular, although the open source Moodle application had its supporters.  It turned out that, at the time, a home-grown solution – the Bodington VLE – was to provide the VLE environment across the institution.

But now the PLE is the new VLE – and this disrupts the view that the central IT services, working closely with users, can identify the most appropriate solution for the institution and ensure that cost-effective support services are provided. This can also be disruptive for those who felt that the solution must inevitably be an open source solution.  If learners are using their mobile phones to access learning on YouTube and a range of other Google services, where does this leave a vision for an open e-learning environment?  The learning environment can be an anxious environment for us all.

Posted in General | Leave a Comment »

The BO.LT Page Sharing Service and OERs

Posted by Brian Kelly on 22 Apr 2011

Earlier today, having just installed the Pulse app on my iPod Touch, I came across a link to an article published in TechCrunch on the launch of a new service called BO.LT. The article’s headline summarises what the service will provide: “Page Sharing Service Lets You Copy, Edit And Share Almost Any Webpage“.

The comments on the article were somewhat predictable; as seems to be the norm for announcements of new services published in TechCrunch, some were clearly fans (“OMG! This is going to change everything!“) whilst others pointed out that the new service provides nothing new: “Shared Copy ( is a great service that’s been around for 4 years that does ~the same thing“.

Of particular interest to me, however, were the comments related to the potential for copyright infringements using a service which, as the TechCrunch article announced, “lets you copy, edit and share any page“. As the first comment to the article put it: “I can just see it…this will make it easier for 1) people to create fake bank statements, 2) awesome mocking of news headlines, 3) derivative web designs“.

In order to explore the opportunities and risks posed by this service I registered for the service and created a copy of the home page for my blog, subsequently editing it to remove the left hand sidebar. As can be seen, an edited version of the page has been created and can be viewed on the service.

So it does seem that it will be easy for people to copy Web pages and edit them for a variety of purposes, including poking fun, creating parodies (has anyone edited a Government Web page yet?) as well as various illegal purposes.

But what about legitimate uses of a service which makes it easy to copy, edit, publish and share a Web resource?  The educational sector has strong interests in exploring the potential of open educational resources (OERs) which can be reused and remixed to support educational objectives.  We are seeing a growth in the number of OER repositories.  Might a service such as BO.LT have a role to play in enabling such resources to be reused, I wonder?  Will BO.LT turn out to be a threat to our institutions (allowing, for example, disgruntled students unhappy at having to pay £9,000 to go to University to create parodies of corporate Web pages) or a useful tool to allow learners to be creative without having to master complex authoring tools?

Posted in openness, Web2.0 | Tagged: | 2 Comments »

Archiving Blogs and Machine Readable Licence Conditions

Posted by Brian Kelly on 21 Apr 2011

Clarifying Licence Conditions When Archiving Blogs

UKOLN’s Cultural Heritage blog has recently been frozen following the cessation of funding from the MLA (a government body which is due to be shut down shortly).

As part of the closure process for our blog we have provided a Status of the Blog page which summarises the reasons for the closure, provides a history of the blog, outlines various statistics about the blog and provides some reflections on the effectiveness of the blog.

Another important aspect of the closure of a blog should be the clarification of the rights of the blog posts. This could be important if the blog contents were to be reused by others – which could, for example, include archiving by other agencies.

As shown, a human-readable summary was included in the sidebar of the blog which states that the content of the blog is provided under a Creative Commons Attribution-Noncommercial-Share Alike 2.0 UK: England & Wales License.

The sidebar also defined the scope of this licence which covered the textual content of blog posts and comments which were submitted to the blog.  It was pointed out that other embedded objects, such as images, video clips, slideshows, etc, may have other licence conditions.

However automated tools will not be able to understand such a human-readable licence statement.  What is needed is a definition of the licence in a format suitable for automated processing. This has been implemented using a simple piece of RDFa which is included in the sidebar description.  The HTML fragment used is shown below:

<img alt="Creative Commons License" src="…x31.png" /> This blog is licensed under a <a rel="license" href="…">Creative Commons Attribution-Noncommercial-Share Alike 2.0 UK: England & Wales License</a>.

How might software process such information? One example is the OpenAttribute plugin which is available for the Firefox, Chrome and Opera browsers. This is described as a “suite of tools that makes it ridiculously simple for anyone to copy and paste the correct attribution for any CC licensed work“. Use of the OpenAttribute plugin on the Cultural Heritage blog is illustrated below.
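The core of what such a tool does is to scan the page for the rel="license" convention used in the sidebar fragment. A minimal sketch of that idea, using only the Python standard library (the fragment and URL below are illustrative, not taken from the OpenAttribute source):

```python
from html.parser import HTMLParser

class LicenseFinder(HTMLParser):
    """Collect the href targets of <a rel="license"> links, the RDFa
    convention used to state a page's licence in machine-readable form."""
    def __init__(self):
        super().__init__()
        self.licenses = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # rel may contain several space-separated values
        if tag == "a" and "license" in attrs.get("rel", "").split():
            self.licenses.append(attrs.get("href"))

def find_licenses(html):
    parser = LicenseFinder()
    parser.feed(html)
    return parser.licenses

# An illustrative sidebar fragment of the kind shown above
fragment = ('This blog is licensed under a '
            '<a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/2.0/uk/">'
            'Creative Commons Licence</a>.')

print(find_licenses(fragment))
# → ['http://creativecommons.org/licenses/by-nc-sa/2.0/uk/']
```

A browser plugin does considerably more (attribution text, embedded licences), but the detection step is essentially this.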

Assigning Multiple Licences To Embedded Objects in Blogs

The image above shows the licence for the blog in its entirety.  However the blog is a complex container of a variety of objects (blog posts from multiple authors; comments from readers and embedded images and other objects from multiple sources) and each of these embedded objects may have its own set of licence conditions.

How might one specify the licence conditions of such embedded objects?  In the case of the Cultural Heritage blog there was a statement that any comments added to the blog would be published under a Creative Commons licence, so although anybody making a comment did not have to formally accept this licence condition, in practice we can demonstrate that we took reasonable measures to ensure that the licence conditions were made clear.

In order to specify the licence conditions for embedded images we initially looked at the Image Licenser WordPress plugin.   However this provides a mechanism for assigning licence conditions as images are embedded within a post, which are then made available as RDFa.  Since in our case we were looking at retrospectively assigning licence conditions to existing images (in total 151 items) it was not realistic to use this tool.

The Creative Commons Media Tagger provides the ability to “tag media in the media library as having a Creative Commons (CC) license“. But what licence should be assigned to images on the blog?  These include screen images and photographs which may have been included by guest bloggers but which have not been explicitly assigned a Creative Commons licence.  The question of Who owns the copyright to a screen grab of a website? was asked recently online, with a lack of consensus and a patent and trade mark attorney providing the less than helpful suggestion that “It is better to include a link to the original work if it is on the Web rather than to copy it.” The uncertainties regarding ownership of screen shots are echoed in a Wikipedia article which states:

Some companies believe the use of screenshots is an infringement of copyright on their program, as it is a derivative work of the widgets and other art created for the software. Regardless of copyright, screenshots may still be legally used under the principle of fair use in the U.S. or fair dealing and similar laws in other countries.

In light of such confusion there is a question as to what licence, if any, should be assigned to images in the blog. As described in the Creative Commons Media Tagger FAQ it is possible to run the plugin in batch mode to “tag media that was already in your media library prior to installing and activating CC-Tagger“. It occurred to me that it would be best to assign a non-CC licence by default to all images and then to manually assign an appropriate CC licence to images such as those taken from Flickr Commons in a post entitled “Around the World in 80 Gigabytes“. However using the batch mode of the tool appeared not to change the content – and it is unclear to me whether there is a way of providing a machine-readable statement in RDFa stating that a resource is not available under a Creative Commons licence.

Using the Image Licenser tool on an individual image resulted in the following HTML fragment which illustrates how a machine readable statement of the licence conditions can be applied to an individual object:

<img class="size-medium wp-image-2206" title="Flickr Commons" src="…x205.jpg" alt="image of flickr commons home page" width="300" height="205" />
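For retrospective labelling, markup of this kind could in principle be generated rather than hand-edited. A hypothetical sketch of such a helper (the function name, markup shape and URL are my own for illustration, not the Image Licenser plugin's actual output):

```python
def licensed_img(src, alt, licence_url=None):
    """Return an <img> fragment; when a licence URL is supplied, wrap
    the image with an RDFa rel="license" link so that tools such as
    OpenAttribute can detect the licence of this individual object."""
    img = '<img src="%s" alt="%s" />' % (src, alt)
    if licence_url is None:
        # No machine-readable licence claim is made for this image
        return img
    return ('<span>%s <a rel="license" href="%s">licence</a></span>'
            % (img, licence_url))

print(licensed_img("flickr-commons.jpg",
                   "image of flickr commons home page",
                   "http://creativecommons.org/licenses/by-nc-sa/2.0/uk/"))
```

The design choice here mirrors the risk-averse approach discussed later: images with no known licence simply carry no machine-readable claim, rather than a possibly wrong one.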


Whilst finalising this post I asked on Twitter: “Is it possible to use RDFa to provide a machine-readable statement that an image *doesn’t* have a CC licence? …” and followed this by describing the context: “.. i.e. have a blog post with CC licence for content but want to clarify licence for embedded objects. #creativecommons“.  Subsequent comments from @patlockley and @jottevanger helped to identify areas for further work which I hadn’t considered – I have kept an archive of the discussion to ensure that I don’t forget the points which were made. A summary of my thoughts is given below:

Purpose: Why should one be interested in the ways in which the licence conditions of objects embedded in blog posts can be expressed? My interest relates to archiving policies and processes for blogs.  For example if an archiving service chooses to archive only blogs for which an explicit licence is available there will be a need to ensure that such licences are provided in a machine-readable format to allow for automated harvesting.  There will also be a need to understand the scope of such licences. In addition to my interests, those involved in the provision or reuse of OER resources will have similar interests in reusing blog posts if these are treated as OER resources.  Finally, as @jottevanger pointed out, this discussion is also relevant more widely, with Jeremy’s interests focussing on complex Web resources containing digitised museum objects.

Granularity: What level of granularity should be applied – or perhaps this might be better phrased as what level of granularity is it feasible to apply machine readable licence conditions for complex objects? Should this be at the collection level (the blog), the item level (the blog post) or for each component of the object (each individual embedded image)?

Risks: Should one take a risk-averse approach, avoiding use of a Creative Commons licence at the collection level since it may be difficult to ensure that each individual item has an appropriate Creative Commons licence? Or should one state that by default items in the collection are normally available under a Creative Commons licence, but there may be exceptions?

Viewing tools: What tools are available for processing machine understandable licence conditions? What are the requirements for such tools?

Creation tools : What tools are available for assigning machine understandable licence conditions? What level of granularity should they provide? What default values can be applied?

I know that in the OER community there are interests in these issues.  I would be interested to hear how such issues are being addressed and details of tools which may already exist – especially tools which can be used with blogs.

Posted in openness, preservation | Leave a Comment »

“UK Government Will Impose Compulsory Open Standards”

Posted by Brian Kelly on 20 Apr 2011

“UK Government Promises To Go Open – Again”

In a post entitled UK Government Promises to Go Open – Yet Again, Glyn Moody provides a rather cynical view based on his experiences of Government promises regarding ICT and openness: “after years of empty promises, the UK government assures us that this time it will really open up, embracing open source and openness in all its forms”. However there is also some optimism in the column:

… there is a ray of hope. For as I reported a month ago, the Cabinet Office has settled on a rather good definition of open standards that includes the key phrase “have intellectual property made irrevocably available on a royalty free basis”, which does create a truly level playing-field that allows open source to compete fairly.”

The column concludes:

“Let’s hope it really marks the beginning of a new era of openness in UK government IT – and that I won’t have to write this article ever again.”

Publication by the Cabinet Office of the “Government ICT Strategy”

I have previously commented on the Government’s attempts at agreeing on a definition of open standards in a post entitled UK Government Survey on Open Standards: But What is an ‘Open Standard’? and pointed out some of the difficulties (is RSS an open standard, for example?). But although it may be difficult to reach agreement on such definitions, I welcome the fact that the Government is asking such questions.

This is particularly important in light of the Cabinet Office’s recent publication of the Government ICT Strategy (PDF format). In the introduction the Right Honourable Francis Maude, Minister for the Cabinet Office, lists the following challenges central government is facing:

  • Departments, agencies and public bodies too rarely reuse and adapt systems which are available ‘off the shelf’ or have already been commissioned by another part of government, leading to wasteful duplication;
  • systems are too rarely interoperable;
  • the infrastructure is insufficiently integrated, leading to inefficiency and separation;

The first bullet point could be interpreted as a signal that the government is looking to procure off-the-shelf proprietary systems.  However the other two points seem to challenge that perception, as it is precisely such monolithic proprietary systems which fail to provide the interoperability and the integrated infrastructure which is needed.   Instead, in order to address these challenges, the strategy announces that it intends to:

impose compulsory open standards, starting with interoperability and security;

We know that the government is prepared to take ‘bold’ decisions – but is this that unusual case of a bold decision which those involved in IT development activities within the higher education sector would welcome?

What are the Open Standards Which Will Be Made Compulsory?

It is also pleasing to see that the Government has invited feedback on the open standards which it feels are relevant.  A SurveyMonkey form on Open Standards in the Public Sector invites feedback on its proposed set of conditions for an open standard (discussed previously) as well as listing open standards in 23 technical areas for which respondents can specify whether they think the standards should be a PRIORITY STANDARD, MANDATORY (must be used), RECOMMENDED (should be used), OPTIONAL or SHOULD NOT USE.

The 23 areas are Accessibility and usability; Biometric data interchange; Business object documents; Computer workstations; Conferencing systems over Internet Protocol (IP); Content management, syndication and synchronization; Data integration between known parties; Data publishing; e-Commerce, purchasing and logistics; e-Health and social care; e-Learning; e-News; e-Voting; Finance; Geospatial data; Identifiers; Interconnectivity; Service registry/repository; Smart cards; Smart travel documents; Voice over Internet Protocol (VOIP); Web services and Workflow and web services.

Rather than attempting to comment on all of these areas I’ll explore some of the issues with the approaches which are being taken in the survey by addressing just two areas: “Accessibility and usability” and “Computer workstations”.

“Accessibility and Usability”

The first section covers “Accessibility and usability” and addresses Human Computer Interface standards (e.g. ISO/TS 16071:2003);  Web Content standards (WCAG 1.0) and Usabilty (sic) standards (e.g. ISO 13407:1999).

This is an area of particular interest to me, so how should I respond to the survey (which is illustrated)?

The first question, on WCAG 1.0, is easy – this has been superseded by WCAG 2.0 and should no longer be used.  So that clearly falls in the “Should Not Use” category.

Should, therefore, the answer to the use of WCAG 2.0 be to select it as a Priority Standard, a Mandatory Standard or a Recommended Standard, Optional or, perhaps, Should Not Use?  These terms have been defined in the survey system:

PRIORITY STANDARD – a standard that you think is important and a priority

MANDATORY – a standard that you judge MUST be used by the UK public sector

RECOMMEND – a standard that you judge should be used by the UK public sector but recognising that there may be exceptions/caveats that mean it is sometimes not appropriate

OPTIONAL – a standard that you judge may be used by the UK public sector

SHOULD NOT USE – a standard that you judge should not be used by the UK public sector

I have previously suggested that public sector organisations in the UK should be using the BS 8878 Code of Practice for Web Accessibility as this provides a policy framework for developing accessible Web sites and provides flexibility in the selection of accessibility guidelines, such as WCAG 2.0, which may not be applicable in some circumstances.  However BS 8878 isn’t included in the list of standards.  I think that WCAG 2.0 is important, but not applicable in all cases, so I guess I should select the Priority Standard option.  In addition, since it is possible to select multiple responses, I would also choose the Recommend option.

From these first two standards I have already found reasons why the Mandatory response may not be appropriate, and noticed some logical flaws in the design of the survey form – it seems it is possible to select multiple responses, including ones which may be contradictory.

The third ‘standard’ is also confusing as it covers the ‘Central Office of Information Standards and Guidelines‘.  However this isn’t a standard but a set of UK Government recommendations and policies. The guidance document contains a section on Delivering inclusive websites which appears to have been published in 2009 and which requires Government Web sites to conform with WCAG 1.0 at level AA. This ‘standard’ is not compatible with the first two areas, so the Should Not Use recommendation should be given – not because the recommendations are necessarily wrong but because it is not a standard. However it is not possible to annotate the responses submitted using the survey system.

“Computer Workstations”

The misleadingly named “Computer workstations” section is of particular interest to me since it covers various Web standards, document standards and standards for office applications. In the list of Web standards the choices are HTML 4.01, HTML 5 or XHTML. Here the choices are between HTML 4.01, a W3C standard which was ratified in December 1999; HTML5, a W3C working draft which has not yet been ratified and is still evolving; and XHTML, for which no version number is specified, which could lead to confusion between the ratified XHTML 1.0 standard and the moribund (but recently updated) XHTML 2 working draft.

The list of document types is also interesting.  RTF is listed as a standard – although this is a proprietary format which is owned by Microsoft. Similarly the inclusion of PDF from version 4 covers both the proprietary versions owned by Adobe as well as the ISO standard which is based on PDF 1.7. The ODF and OOXML open standards are listed, although the Microsoft Document format is also included, as is the Lotus Notes Web Access format.   There is similar confusion over the open standards for spreadsheets: HTML is suggested which, although an open standard, will not provide the interoperability which open standards are meant to deliver.  As with the document formats, ODF and OOXML are included but the proprietary MS Excel format is also listed. This pattern is repeated for presentation formats, although this time MS PowerPoint is listed.

Other Areas

The section on “Biometric data interchange” is interesting, although I know nothing of the standards used in this area. But what are the implications of responding to the question on, for example, “ISO/IEC 19794-5 Information Technology – Biometric data interchange formats – Part 5: Face image data”? If this is a Mandatory Standard, could this mean that it is used in situations which I feel infringe personal liberties? The initial response might be to suggest that the standard will only be used in appropriate areas – and yet we have seen that defining WCAG as a Mandatory standard has led to it being enforced when its use may be inappropriate. It does seem to me that there is a need to define a policy layer which helps to ensure that Mandatory clauses are not applied in inappropriate areas.

I’ll not comment further here on areas which I know will be of interest to the JISC development community:

  • Conferencing system (six standards listed)
  • Content management, syndication and synchronisation (which covers various standards such as XML Schemas, OAI-PMH, RSS, OpenURL and Z39.50)
  • Data integration between known parties (which includes XML, XML Schemas, XSL, UML, RDF and OWL)
  • Data publishing (which covers RDF, SKOS and OWL)
  • Identifiers (which covers DOIs, ISBN, ISSN, XRIs, GUID, URIs, URLs and PURLs)
  • Interconnectivity (which covers various Internet protocols)
  • Service management (which only includes ISO/IEC 20000)
  • Service registry/repository (which includes UDDI, ebXML, ebRS and edRS)
  • e-Learning (which covers IMS, IEEE LOM and SCORM)
  • Geo-spatial
  • Web Services
  • Workflow and web services

or areas which will be of less direct relevance to our development community:

Business object documents, Smart cards, Smart travel documents, e-Commerce, purchasing and logistics, e-Health and social care, e-News, e-Voting, Finance and VoIP.


Despite the rhetoric in the introduction to the Government ICT Strategy document it seems that the survey is simply revisiting work which has been published previously in the e-GIF guidelines. Looking at the Technical Standards Catalogue, for example, there is a section on Specifications for computer workstations which lists the PDF, MS Office and Lotus Notes formats which I mentioned previously.

Looking in more detail at the survey form I find that the form is full of typos. For example (with the typos given in bold):

  • There are many different defintions of the term ‘open standard’. We’d like your feedback on our proposed definition.
  • Usabilty  (there are multiple occurrences of this typo)
  • coding of continous-tone still images (there are multiple occurrences of this typo)
  • Data defintion – Government Data Standards Catalogue (there are multiple occurrences of this typo)
  • Ontology-based inforamtion exchange (e.g. OWL)
  • Persistient identifier (e.g. XRI) (there are multiple occurrences of this typo)
  • Digital Object Indentifier (DOI)    (there are multiple occurrences of this typo)
  • HyperText Tranfer Protocol (HTTP)  (there are multiple occurrences of this typo)
  • Authetication (there are multiple occurrences of this typo)
  • Elecrtical standards (e.g. ISO/IEC 7816-10)
  • Terminal infrastrucure standards (there are multiple occurrences of this typo)

Does this matter if the meaning is obvious?  For a conversational email message or blog post, perhaps not, but for a formal information-gathering process it is of some concern. This is particularly true where standards could be mis-identified because of typographical errors. So although I spotted the errors listed above (initially when reading the document and subsequently by putting the document through a spell-checker) I have no idea whether the following examples contain errors:

  • ISO/IEC 7816-15: 2004/Cor 1: 2004
  • Contact cards – Tactile identifiers BS EN 1332-2 Identification card systems – Man-machine interface Part 2: Dimensions and location of a tactile identifier for ID-1 cards
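Spotting such slips mechanically is straightforward; here is a minimal sketch that scans text for the misspellings listed above (the word list is transcribed from the survey form and is not claimed to be exhaustive; the function name is my own):

```python
# Misspellings observed in the survey form, paired with their corrections
KNOWN_TYPOS = {
    "defintions": "definitions",
    "Usabilty": "Usability",
    "continous": "continuous",
    "defintion": "definition",
    "inforamtion": "information",
    "Persistient": "Persistent",
    "Indentifier": "Identifier",
    "Tranfer": "Transfer",
    "Authetication": "Authentication",
    "Elecrtical": "Electrical",
    "infrastrucure": "infrastructure",
}

def flag_typos(text):
    """Return (typo, correction) pairs for known misspellings found in text."""
    return [(t, c) for t, c in KNOWN_TYPOS.items() if t in text]

print(flag_typos("HyperText Tranfer Protocol (HTTP)"))  # [('Tranfer', 'Transfer')]
```

A simple substring scan of this kind would not, of course, catch mis-typed standard identifiers such as those above, which is precisely the concern.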

It should also be noted that the survey form itself contains flaws. As illustrated below, although the form repeatedly invites respondents to “suggest other standards within this category that are not listed. Start a new line for each”, in reality it is not possible to enter more than a single line.

Glyn Moody felt that there was a “ray of hope” in the Government’s apparently enlightened approach to open standards. I fear he is mistaken – sadly I see nothing to indicate that the government has an understanding of the implications of any decisions that may be taken as a result of this flawed information-gathering exercise.

Posted in standards | 6 Comments »

Are Russell Group Universities Ready for the Mobile Web?

Posted by Brian Kelly on 19 Apr 2011

Yesterday I attended Nominet’s launch event for the W3C UK and Ireland Office (and note that tweets containing the #w3cuki hashtag are available on TwapperKeeper). A number of talks covered the Mobile Web, including “Mobile web: where diversity is opportunity” by Dr. Rotan Hanrahan, the Chief Innovations Architect of MobileAware.  Dr. Hanrahan informed the audience that many assumptions about Web sites are based on desktop browser experiences, and that many of these assumptions are wrong in a mobile context.

This made me wonder whether the assumptions we have regarding the design and structure of institutional Web sites will be valid for mobile access.  The W3C have developed mobileOK, which is “a free service by W3C that helps check the level of mobile-friendliness of Web documents, and in particular assert whether a Web document is mobileOK“.

Are the home pages of Russell Group Universities ‘mobileOK’, I wondered, or have they been designed and tested for desktop access only? Yesterday I used the mobileOK checker service to check the home pages of the 20 Russell Group Universities.  The results are given below.

Numbers of critical, severe, medium and low severity errors reported by the mobileOK checker:

    Institution                   Critical  Severe  Medium  Low
 1  University of Birmingham          2       3       0      4
 2  University of Bristol             1       0       1      3
 3  University of Cambridge           2       0       1      8
 4  Cardiff University                1       1       3      3
 5  University of Edinburgh           0       2       0      3
 6  University of Glasgow             1       1       2      5
 7  Imperial College                  4       5       0      7
 8  King’s College London             1       1       1      2
 9  University of Leeds               1       0       0      5
10  University of Liverpool           0       2       1      3
11  LSE                               2       2       2      4
12  University of Manchester          1       3       1      6
13  Newcastle University              1       1       2      5
14  University of Nottingham          3       2       0      4
15  University of Oxford              4       2       1      6
16  Queen’s University Belfast        0       3       4      4
17  University of Sheffield           1       0       0      5
18  University of Southampton         2       2       1      4
19  University College London         1       0       2      5
20  University of Warwick             2       2       0      7
    TOTAL                            30      32      22     93
    AVERAGE                         1.5     1.6     1.1   4.65
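As a cross-check on the TOTAL and AVERAGE rows, the figures can be recomputed from the per-institution counts; a minimal sketch, with the data transcribed from the table above:

```python
# Error counts transcribed from the table: (critical, severe, medium, low)
results = {
    "University of Birmingham":   (2, 3, 0, 4),
    "University of Bristol":      (1, 0, 1, 3),
    "University of Cambridge":    (2, 0, 1, 8),
    "Cardiff University":         (1, 1, 3, 3),
    "University of Edinburgh":    (0, 2, 0, 3),
    "University of Glasgow":      (1, 1, 2, 5),
    "Imperial College":           (4, 5, 0, 7),
    "King's College London":      (1, 1, 1, 2),
    "University of Leeds":        (1, 0, 0, 5),
    "University of Liverpool":    (0, 2, 1, 3),
    "LSE":                        (2, 2, 2, 4),
    "University of Manchester":   (1, 3, 1, 6),
    "Newcastle University":       (1, 1, 2, 5),
    "University of Nottingham":   (3, 2, 0, 4),
    "University of Oxford":       (4, 2, 1, 6),
    "Queen's University Belfast": (0, 3, 4, 4),
    "University of Sheffield":    (1, 0, 0, 5),
    "University of Southampton":  (2, 2, 1, 4),
    "University College London":  (1, 0, 2, 5),
    "University of Warwick":      (2, 2, 0, 7),
}

# Column totals and per-institution averages (the TOTAL and AVERAGE rows)
totals = [sum(row[i] for row in results.values()) for i in range(4)]
averages = [t / len(results) for t in totals]
print(totals)    # [30, 32, 22, 93]
print(averages)  # [1.5, 1.6, 1.1, 4.65]
```

The recomputed figures agree with the published rows.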


About The Findings

How do these findings compare with other Web sites?  A survey of the W3C home page gives a score of 0 critical, 0 severe, 1 medium and 2 low severity errors, which suggests that it is possible to avoid critical and severe errors. However the findings for the home page were 4, 2, 3 and 5, which suggests that a mobile phone company is not doing as well as a typical University home page.

But how relevant are the tests which are being applied?  Looking at the critical severity problem for the University of Sheffield home page we find:

The total size of the page (192KB) exceeds 20 kilobytes (Primary document: 8.9KB, Images: 180.2KB, Style sheets: 2.9KB)

It seems that pages should be less than 20KB in order to avoid this error.  Is this a realistic goal, I wonder?
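The arithmetic behind this particular failure is easy to reproduce; a hypothetical sketch using the component sizes reported for the Sheffield home page (the function and constant names are my own; the 20KB budget is taken from the checker's error message quoted above):

```python
MOBILEOK_PAGE_BUDGET_KB = 20  # threshold quoted in the checker's error message

def total_page_weight_kb(primary_kb, images_kb, stylesheets_kb):
    """Sum the sizes of the components a mobile browser must download."""
    return primary_kb + images_kb + stylesheets_kb

# Figures reported for the University of Sheffield home page
weight = total_page_weight_kb(primary_kb=8.9, images_kb=180.2, stylesheets_kb=2.9)
print(round(weight))                      # 192
print(weight > MOBILEOK_PAGE_BUDGET_KB)  # True
```

At 192KB the page exceeds the budget almost tenfold, with images accounting for nearly all of the weight.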

Other critical errors which were found for other institutional home pages include:

  • There are more than 20 embedded external resources
  • The image does not match its supposed format
  • An input element with type attribute set to “image” is present

Severe errors include:

  • The size of the document’s markup (78.1KB) exceeds 10 kilobytes
  • The CSS style sheet is not syntactically valid CSS
  • A pop-up was detected
  • There are nested tables

A document listing the Mobile Best Practices 1.0 guidelines is available which provides further information about the tests.

Next Steps

The summer vacation may provide an opportunity for institutions to revisit the design of the institutional home page. The mobileOK checker should be useful for those working in institutional Web teams in helping to identify whether the home page (and, indeed, templates used across the Web site) are mobile-friendly. However there will be a need to recognise that mobileOK is a tool and should not be regarded as an infallible means of identifying whether appropriate best practices are being deployed.  But at least we now have a benchmark which will allow comparisons to be made with other institutional home pages, and we will also be able to see how these findings change over time.

Posted in Evidence, Mobile | 2 Comments »

Zapd – Opportunity or Threat?

Posted by Brian Kelly on 15 Apr 2011

Introducing Zapd

I came across Zapd whilst browsing Apple’s App Store on Wednesday night. It was a featured app, available for free and highly rated – so I installed it on my iPod Touch.  A few minutes later I had created a Web site containing annotated photos of a wedding I went to over the weekend.  The application’s byline – “Websites in 60 seconds from your iPhone” – seems to be true.  Zapd seems to provide a useful tool for such social applications, but could it be used in a professional context, I wondered? Or might it be regarded as a threat by Web professionals, who might doubt whether it is possible to create a Web site so quickly, and question the underlying technical approaches (does it validate? does it conform with accessibility guidelines?), the legal implications, the dilution of an institution’s brand or the sustainability of the content.  Does Zapd provide an opportunity or a threat?

Using Zapd

Yesterday I attended the launch event of the Bath Connected Researcher series of events which has been summarised in a post by Jez Cope, one of the organisers. The #bathcr event (to use the event’s Twitter hashtag) began with a seminar given by Dr. Tristram Hooley who described how he has used social media in his research and to pursue his academic career. Tristram has written a blog post about the seminar which includes access to his slides which are embedded in the post. In addition a recording of the seminar is also available.

The seminar was aimed at researchers who may be new to social media.  I got the impression that many of the participants had not used Twitter to any significant extent.  I had been invited to participate in a workshop on the use of Twitter which was held after the seminar. As I could only attend the workshop briefly, it occurred to me that I could use Zapd to create a Web site showing how I use Twitter on my iPod Touch.

I captured screen shots of Twitter’s mobile client, Tweetdeck and Smartr (see recent post) and added text which showed the benefits of Tweetdeck’s columns for providing filtered views of tweet streams (e.g. for an event which has a hashtag such as #bathcr) and how Twitter lists can be used to provide additional filtering capabilities for the delivery of Web pages from selected Twitter accounts.  It took 10 minutes to create and publish the Web site on my iPod Touch while I was also listening to Tristram’s seminar.

It should be noted that the application had created a Web site with its own domain.  So this application does seem to provide something more than uploading photos to Flickr.


Is this a Web site? After all, it’s only a simple single page containing text and a few images. But as it has its own domain name surely it must be regarded as a Web site. But should such Web sites be allowed to be created – aren’t they likely to infringe institutional policies? Aren’t we moving away from a distributed environment and towards a centrally managed environment for Web resources? After all, as was suggested to me on Twitter, aren’t Web sites which can be created in less than 10 minutes likely to be forgotten about a week later?

Perhaps this is true, but for me an important aspect of the Web is in providing a communications environment, and not just an institutional tool for the publication of significant documents.  And sometimes the communication may be an informal discussion – and I think that Zapd could have a role to play in that space.

I also think that we should be willing to learn from new approaches. Being able to create a Web site on a mobile device is quite impressive. It was also interesting to observe how the service creates a new domain name for each resource created.  Should this be something for institutions to consider?

For me, Zapd is another tool in my Personal Learning Environment which I’m happy to use if it fulfils a useful purpose. And if it fails to do that, I’m happy to throw it away.  And with 100,000 downloads since its launch two weeks ago it seems I’m not alone in exploring its potential.  What’s your take?

Posted in Web2.0 | Tagged: | 9 Comments »

New HTML5 Drafts and Other W3C Developments

Posted by Brian Kelly on 13 Apr 2011


New HTML5 Drafts

The W3C’s HTML Working Group has recently announced the publication of eight documents:

Last Call Working Drafts for RDFa Core 1.1 and XHTML+RDFa 1.1

Back in August 2010, in a post entitled New W3C Document Standards for XHTML and RDFa, I described the latest release of the RDFa Core 1.1 and XHTML+RDFa 1.1 draft documents. The RDFa Working Group has now published Last Call Working Drafts of these documents: RDFa Core 1.1 and XHTML+RDFa 1.1.

New Provenance Working Group

The W3C has also recently launched a new Provenance Working Group whose mission is “to support the widespread publication and use of provenance information of Web documents, data, and resources“. The Working Group will publish W3C Recommendations that define a language for exchanging provenance information among applications. This is an area of work which is likely to be of interest to those involved in digital library development work – and it is interesting to see that a workshop on Understanding Provenance and Linked Open Data was held recently at the University of Edinburgh.

Emotion Markup Language

When I first read of the Multimodal Interaction (MMI) Working Group‘s announcement of the Last Call Working Draft of Emotion Markup Language (EmotionML) 1.0. I checked to see that it hadn’t been published on 1 April! It seems that “As the web is becoming ubiquitous, interactive, and multimodal, technology needs to deal increasingly with human factors, including emotions“.

The EmotionML Language allows various vocabularies to be used such as:

The six terms proposed by Paul Ekman (Ekman, 1972, p. 251-252) as basic emotions with universal facial expressions — emotions that are recognized and produced in all human cultures: anger; disgust; fear; happiness; sadness and surprise.

The 17 terms found in a study by Cowie et al (Cowie et al., 1999) who investigated emotions that frequently occur in everyday life: affectionate; afraid; amused; angry; bored; confident; content; disappointed; excited; happy; interested; loving; pleased; relaxed; sad; satisfied and

Mehrabian’s proposal of a three-dimensional description of emotion in terms of Pleasure, Arousal, and Dominance.

Posted in HTML, standards, W3C | 1 Comment »

UKOLN Seminar On OER Open to All

Posted by Brian Kelly on 11 Apr 2011

UKOLN’s seminar programme continues on Thursday 14 April 2011. Vic Jenkins and Alex Lydiate of the e-Learning team in LTEO (Learning & Teaching Enhancement Office) will describe the JISC-funded OSTRICH (OER Sustainability through Teaching & Research Innovation Cascading across HEIs) project. As described in the abstract for the seminar:

The progress of the OSTRICH project so far at the University of Bath will be described by Vic Jenkins (Learning Technologist in the Learning and Teaching Enhancement Office). This will include highlights and challenges encountered, discussions around IPR for learning and teaching resources, and the sustainability of processes for managing the release of OERs on an institutional basis.

Alex Lydiate (Educational Software and Systems Developer) will present an overview of the design of the Drupal-based OSTRICH distributed repository and the rationale behind it.  This will include an outline of the proposed strategy for representing the OSTRICH OER records on the Web.

As with previous seminars this year, the event is open to others in the sector with an interest in the development of open educational resources.  The seminar will also be streamed live.  If you would like to attend, either in person or remotely, please complete the online booking form.

Note that following the most recent UKOLN seminar there was a suggestion that we should make use of the Ustream streaming video service rather than Bambuser.

In order to familiarise myself with this service I created a brief video clip which provides an announcement about the seminar.  On replaying the clip (which, I should add, contains no additional information) I discovered that as well as the advertisement for flights to Australia (illustrated) there is another advert displayed as a caption on the screen, and a video advert is played before my video starts.

It seems that:

Ustream is free because it is ad-supported, but if you want to get rid of ads on your stream ― no problem!

Going Ad-Free on Ustream is simple. With a few easy steps, you can remove ads from your channel to fully control the viewing experience.

And whilst going ad-free may be simple, it costs from $99 per month. The use of advertisements to fund online services is something we have tended to avoid in higher education in the past.   But in light of reductions in funding, I wonder if we will start to see increased use of services which contain adverts, not only in sidebar widgets but also at the start of video clips.  Will this be regarded as an appropriate response to addressing reductions in funding?

Posted in Events, openness | 3 Comments »

Thoughts on the New WebGL Open API Standard

Posted by Brian Kelly on 7 Apr 2011

A Brief Introduction to WebGL

A post on the TechCrunch blog today asks “Who Needs Flash? New WebGL And HTML5 Browser Game Sets Tron’s Light Cycles In 3D“. It seems the Cycleblob browser game which has been released today was written exclusively in JavaScript, using elements of WebGL and HTML5. WebGL is “a graphics library that basically extends the functionality of JavaScript to allow it to create interactive 3D graphics within ye olde browser” which was released in March 2011.

The TechCrunch article provides a summary of WebGL:

As a cross-platform API within the context of HTML5, it brings 3D graphics to the Web without using plug-ins. WebGL is managed and developed by The Khronos Group, a non-profit consortium of companies like Google, Apple, Intel, Mozilla, and more, dedicated to creating open standard APIs through which to display digital interactive media — across all platforms and devices.

Over the past decade or so the W3C’s approach to the development of open standards has focussed on declarative markup languages based on XML, such as SMIL and SVG.  But here’s another approach, based on providing open APIs with buy-in from browser vendors and other IT companies. Might WebGL have an impact on the development of interactive e-learning and research applications, I wonder?

But Is WebGL Really Open?

Investigations into the potential of WebGL for development work in higher and further education should consider its openness and its likely sustainability.  Although it has been developed and maintained by a non-profit consortium, it is questionable whether an API maintained by an industry consortium should be regarded as an open standard according to the definition which the UK Government is currently attempting to establish.  As described in a recent post, the UK Government’s first condition for open standards is that they “result[s] from and are maintained through an open, independent process“.  An industry consortium, even if non-profit-making, surely cannot be considered independent; if it could, Microsoft could set up a similar consortium responsible for the maintenance of their formats and code base, which they could then claim to be an open standard.

But such considerations are really only relevant for those who feel there is a simple binary divide between open standards and proprietary approaches. In my view there is a complex spectrum of openness, and for now I feel that WebGL is worth considering for development work – the fact that WebGL is not supported by Microsoft should be regarded as an interesting challenge for developers, but not necessarily a reason for discounting it.

Observing WebGL’s Development

It should be noted that there is an entry for WebGL in Wikipedia and, as is often the case, the article provides a useful brief summary of the standard:

WebGL is a Web-based Graphics Library. It extends the capability of the JavaScript programming language to allow it to generate interactive 3D graphics within any compatible web browser.

The development of this entry is interesting.  A stub entry for the article was created on 14 September 2009 and there have been regular updates ever since.

I must admit I hadn’t realised that statistics for revisions of Wikipedia articles are available.  The statistics for the WebGL article reveal that there have been 192 revisions from 104 users. It is also possible to view details for those who have edited the article and to discover how many users are watching the article.

The statistics page for the article also informs us that the WebGL article was viewed 40,009 times in March 2011 and is ranked 7,576 in traffic.

What have I learnt from observing the information about the WebGL Wikipedia article, as well as the information provided in the WebGL Wikipedia article itself?

The chart of the number of edits over time shows that there is a steady growth in the number of edits, which suggests that the article is continually being revised.  The main contributors to the article include those involved in development in computer games which may suggest that the priority for future developments may be in this area. However the article itself lists Google Body as an early application of WebGL which perhaps suggests that WebGL could have a role to play in the development of teaching and learning applications.

Your Thoughts

Are there any  examples of early use of WebGL within the higher education sector, I wonder?  I would be interested in hearing about examples and, perhaps more importantly, hearing about experiences of those involved in WebGL development work.

In addition I’d be interested in comments on observation of use and changes in Wikipedia articles as a means of providing early indications of new standards which may be of interest to  developers.  Is this an approach which could be used more widely?



Posted in openness | 4 Comments »

RDFa and WordPress

Posted by Brian Kelly on 5 Apr 2011

RDFa: A Brief Recap

RDFa (Resource Description Framework – in – attributes) is a W3C Recommendation that adds a set of attribute level extensions to XHTML for embedding rich metadata within Web documents.

As described in the Wikipedia entry for RDFa, five “principles of interoperable metadata” are met by RDFa:

  1. Publisher Independence: each site can use its own standards
  2. Data Reuse: data is not duplicated. Separate XML and HTML sections are not required for the same content.
  3. Self Containment: the HTML and the RDF are separated
  4. Schema Modularity: the attributes are reusable
  5. Evolvability: additional fields can be added and XML transforms can extract the semantics of the data from an XHTML file

Additionally RDFa may benefit Web accessibility as more information is available to assistive technology.

But how does one go about evaluating the potential of RDFa? Last year I wrote a post on Experiments With RDFa which was based on manual inclusion of RDFa markup in a Web page. Although this highlighted a number of issues, including the validity of pages containing RDFa, it is not a scalable approach for significant deployment of RDFa. What is needed is a content management system which can deploy RDFa on existing content in order to evaluate its potential and understand the deployment issues.

The Potential for WordPress

WordPress as a Blog Platform and a CMS

WordPress provides a blog platform which can be used for large-scale management of hosted blogs. In addition the software is available under an open source licence and can be deployed within an institution. There is increasing interest in use of WordPress within the higher education sector, as can be seen from the recent launch of a WORDPRESS JISCMail list (which is aimed primarily at the UK HE sector), with some further examples of interest in use of WordPress being available on the University Web Developers group.

A recent discussion on the WORDPRESS JISCMail list addressed the potential of WordPress as a CMS rather than a blogging platform.  Such uses were also outlined recently in a post on the College Web Editor blog which suggested reasons why WordPress can be the right CMS for #highered websites.  In light of the growing interest in use of WordPress as a CMS it would seem that this platform could have a role to play in the deployment of new HTML developments such as RDFa.

The wp-RDFa WordPress Plugin

A strength of WordPress is its extensible architecture which allows plugins to be developed by third parties and deployed on local installations of the software.  One such development is the wp-RDFa plugin which supports FOAF and Dublin Core metadata. The plugin uses Dublin Core markup to tag posts with the title, creator and date elements. In addition wp-RDFa can be configured to make use of FOAF to “relate your personal information to your blog and to relate other users of your blog to you building up a semantic map of your relationships in the online world“.

Initial Experiments With wp-RDFa

Dublin Core Metadata

UKOLN’s Cultural Heritage blog has been closed recently, with no new posts planned for publication.  The blog will however continue to be hosted and can provide a test bed for experiments such as use of the wp-RDFa plugin.

In an initial experiment we found that although the titles of each blog post were described using Dublin Core metadata, the title was replicated in the blog display. Since this was not acceptable we disabled the use of Dublin Core metadata and repeated the experiment on a private backup copy of the UK Web Focus blog. This time there were no changes in how the blog posts were displayed.

The underlying HTML code made use of the Dublin Core namespace:

<rdf:RDF xmlns:rdf="" xmlns:dc="">

with each individual blog post containing the title and publication date provided as RDFa:

<h3 class="storytitle">
<span property="dc:date" content="2010-04-27 08:17:53" resource="…" />
<a href="…"><span rel="…" property="dc:title" resource="…">Workshop on Engagement, Impact, Value</span></a></h3>

It therefore does appear that the plugin can be deployed on local WordPress installations in order to provide richer semantic markup for existing content. I suspect that the problem with the display in the original experiment may be due to an incompatibility with the theme which is being used (Andreas09). I have reported this problem to the developer of the wp-RDFa plugin.
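To check what such markup actually exposes to software, the embedded Dublin Core properties can be pulled out with the standard library; a minimal sketch using Python's html.parser (the input string mirrors the markup above with the elided URLs omitted; the class name is my own, and real RDFa work would normally use a dedicated RDFa parser):

```python
from html.parser import HTMLParser

class RDFaPropertyExtractor(HTMLParser):
    """Collect values of RDFa 'property' attributes from @content or element text."""
    def __init__(self):
        super().__init__()
        self.properties = {}
        self._pending = None  # property whose value is the element's text

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if "property" in attrs:
            if "content" in attrs:          # value carried in the @content attribute
                self.properties[attrs["property"]] = attrs["content"]
            else:                           # value is the element's text content
                self._pending = attrs["property"]

    def handle_data(self, data):
        if self._pending:
            self.properties[self._pending] = data.strip()
            self._pending = None

markup = ('<h3 class="storytitle">'
          '<span property="dc:date" content="2010-04-27 08:17:53" />'
          '<span property="dc:title">Workshop on Engagement, Impact, Value</span></h3>')

extractor = RDFaPropertyExtractor()
extractor.feed(markup)
print(extractor.properties)
# {'dc:date': '2010-04-27 08:17:53', 'dc:title': 'Workshop on Engagement, Impact, Value'}
```

Even this crude extraction shows the appeal of RDFa: the title and date are available to software without duplicating the content in a separate metadata section.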

FOAF (Friends-of-a-Friend)

I had not expected an RDFa plugin to provide support for FOAF, the Friends-of-a-Friend vocabulary.  However since my work with FOAF dates back to at least 2004 I had an interest in seeing how it might be used in the context of a blog.

I had expected that information about the blog authors and commenters would be displayed in some way using an RDFa viewer such as the Firefox Operator plugin. However nothing seemed to be displayed using this plugin. In addition, use of the RDFa Viewer and the RDFa Developer plugin also failed to detect FOAF markup embedded as RDFa.  I subsequently found that the FOAF information was provided as an external file.  Use of the FOAF Explorer service provides a display of the FOAF information which has been created by the plugin.

What surprised me with the initial display of the FOAF content was the list of names which I did not recognise.  It seems that these are authors and contributors to a variety of other blogs hosted on UKOLN’s WordPress MU (multi-user) server. I wonder whether the plugin was written for a previous version of WordPress, for which there was one blog per installation? In any case a decision has been made to provide access to a FOAF resource which contains details of the blog authors only, as illustrated.

Emerging Issues

A post on Microformats and RDFa deployment across the Web recently surveyed take-up of RDFa based on an analysis of 12 billion web pages indexed by Yahoo! Search, and showed that we are seeing growth in the take-up of semantic markup in Web pages.  As CMS systems (such as Drupal 7, which supports RDFa ‘out of the box’ – link updated in light of comment) begin to provide RDFa support we might expect to see a sharp growth in Web pages which provide content that can be processed by software as well as read by humans.  For those institutions which host a local WordPress installation it appears that it is now possible to begin exploring use of RDFa. As described in a post by Mark Birkbeck on RDFa and SEO, an important role for RDFa will be to provide improvements to searching.  But in addition the ability to use wp-RDFa to create FOAF files makes me wonder whether this approach might be useful in describing relationships between contributors to blogs and perhaps provide the hooks to facilitate data-mining of the blogosphere.

It would be a mistake, however, to focus on one single tool for creating RDFa markup.  On the WORDPRESS JISCMail list Pat Lockley  mentioned that he is also developing an RDFa plugin for WordPress and invited feedback on further developments.  Here are some of my thoughts:

  • There is a need for a clear understanding of how the semantic markup will be applied and the use cases it aims to address.
  • There will also be a need to understand how such semantic markup would be used in non-blogging uses of WordPress, where the notions of a blog post, blog author and blog commenters may not apply.
  • There will be a need to ensure that different plugins which create RDFa markup are interoperable, i.e. if a plugin is replaced by an alternative, applications which process the RDFa should give consistent results.
  • Consideration should be given to privacy implications of exposing personal data (in particular) in semantic markup.

Is anyone making use of RDFa in WordPress who has experiences to share?  And are there any further suggestions which can be provided for those who are involved in related development work?

Posted in standards | Tagged: , | 9 Comments »

Are Mailing Lists Now Primarily A Broadcast Medium?

Posted by Brian Kelly on 4 Apr 2011

In a post entitled DCMI and JISCMail: Profiling Trends of Use of Mailing Lists I provided evidence of the decline in usage of mailing lists across a research community – those involved in development and use of Dublin Core metadata standard.

Nos. of messages posted to web-support and website-info-mgt JISCMail lists, 1999-2009. This analysis followed a previous survey which was described in a post on The Decline in JISCMail Use Across the Web Management Community and is illustrated in the accompanying histogram.

Since it appears that the various functions provided by mailing lists are being replaced by use of other channels (such as blogs, Twitter, etc.), over Christmas I decided to unsubscribe from quite a number of JISCMail lists.  Those that I remained on (primarily library-related lists) I receive via daily digests.

On Saturday I received four messages from JISCMail lists.  I noticed they contained the following messages:

JISC-INNOVATION Digest – 24 Mar 2011 to 1 Apr 2011 (#2011-7)
CFP: Digital Classicist Seminars 2011: Announcement of a call for papers.

JISC-REPOSITORIES Digest – 31 Mar 2011 to 1 Apr 2011 (#2011-56)
Brief survey about initiatives to encourage deposit: Request to complete survey.
ISKO UK Biennial Conference 4th-5th July 2011 – Early Bird registration during April: Conference announcement.

LIS-WEB2 Digest – 29 Mar 2011 to 1 Apr 2011 (#2011-35)
Event: Registration now open for Usability and User-Centred Design Day: Event announcement.

LIS-LINK Digest – 31 Mar 2011 to 1 Apr 2011 (#2011-75)
Lis-Link: LCF 2011 Conference: Conference announcement.
Brief survey on work of the Coalition for LIS Research: Request to complete survey.
UKeiG Course – Don’t miss out: Mobile access to information resources: Event announcement.
Copyright Query: User query.
UKSG – win the new Kindle 3g Wifi – Credo Reference on Stand 55: Company advertisement.
Customer Services post at St George’s: Job announcement.
Fully funded PhD studentship: Loughborough University/ Amateur Swimming Association: Research vacancy announcement.
ALPSP Seminar: Making Sense of Social Media, 24 June – London UK: Event announcement.

Of these twelve messages only one (the Copyright Query message) was looking to instigate a response on the mailing list: the other eleven were all looking for people to visit a Web resource.  It should also be noted that a number of the messages included “Apologies for cross-posting” comments indicating that the messages had been published to multiple lists.
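The tally above can be checked in a few lines of Python. Note that the category labels are my own annotations of the digest subjects, not metadata provided by JISCMail:

```python
from collections import Counter

# Subject-line categories assigned by hand to the twelve digest messages above
categories = [
    "Call for papers", "Survey request", "Conference announcement",
    "Event announcement", "Conference announcement", "Survey request",
    "Event announcement", "User query", "Company advertisement",
    "Job announcement", "Research vacancy announcement", "Event announcement",
]

tally = Counter(categories)
# Only the user query invited a reply on the list itself
seeking_discussion = tally["User query"]
print(tally.most_common())
print(f"{seeking_discussion} of {len(categories)} messages invited an on-list reply")
```

Trivial as it is, repeating this kind of annotation over a few months of digests would give a simple evidence base for the broadcast-versus-discussion question.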

I can’t help but feel that although email is convenient to use, with the information coming to the user, this isn’t necessarily the most efficient way of working in light of the many other tools which are now available. At a time when efficiency savings are being demanded across the public sector in general, with libraries in particular under close scrutiny, it does seem timely to revisit the question of whether continued usage of mailing lists as a default communications and alerting mechanism is the best way for the sector to proceed.  I also feel that the Library sector, with its expertise in information management, should be taking a leading role in exploring new working practices and ensuring that its user communities are made aware of the possibility of new approaches to working.

At CILIP’s School Libraries Group Skills for the Future event held over the weekend I noticed from the tweets (archived on Twapper Keeper) that speakers at the event addressed the need for school librarians to embrace such new technologies, with Phil Bradley arguing that “we are ‘cybernomadic’ and need to be able to move all the time to where the conversation is“. I’d not heard the term “cybernomad” before; according to the Urban Dictionary it describes “someone who uses internet cafe’s a lot because they think going outside and using someone elses computer is better than using their own“. But I like Phil’s reinterpretation of the word.   I agree with Phil; there will be a need to move from the comfort of an existing online home to where others are – and this will be particularly true for user-oriented service professions such as librarians, whether working in schools, public libraries or universities.

Revisiting the title of this post, “are mailing lists now primarily a broadcast medium?”, it seems that for the ones I’ve listed this may be the case.  But although this appears to be the case for my areas of interest, is it true more widely?  Indeed might Friday’s post have been an aberration, with the norm being discussions, debates and, possibly, arguments taking place on the lists?  To answer such questions – in order to inform personal decisions on use of mailing lists and policies on the establishment of new lists – it seems that there is a need to be able to easily monitor trends, including both personal usage patterns and wider developments. Unfortunately the Listserv software used on the JISCMail service does not seem to provide APIs to carry out such trend analysis. So perhaps the need is for list members to observe their own usage patterns and to be willing to question the effectiveness of continued use.  As for me, I would welcome the continuation of mailing lists as a discussion forum, and leave alerting to other tools.  Is that an unreasonable expectation?
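In the absence of an API, observing one’s own usage could be sketched in a few lines: given the dates of messages saved from a list (the sample dates below are invented, purely to illustrate the approach), count postings per year to see whether traffic is declining:

```python
from collections import Counter
from datetime import date

# Invented sample: dates of messages received from one hypothetical list
messages = [
    date(2009, 3, 2), date(2009, 7, 15), date(2009, 11, 8),
    date(2010, 1, 20), date(2010, 9, 5),
    date(2011, 4, 1),
]

# Tally messages by year to reveal the trend
per_year = Counter(d.year for d in messages)
for year in sorted(per_year):
    print(year, per_year[year])
```

In practice the dates could be scraped from a local mail folder or from the monthly archive pages which Listserv does publish, though that would need per-service screen-scraping code.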

Posted in General | 10 Comments »

Resources from Andrew Treloar’s Seminar on Data Management

Posted by Brian Kelly on 1 Apr 2011

Data Management: International Challenges, National Infrastructure and Institutional Responses – an Australian Perspective on Data Management

Earlier today UKOLN hosted a seminar entitled “Data Management: International Challenges, National Infrastructure and Institutional Responses – an Australian Perspective on Data Management” which was given by Dr Andrew Treloar, Director of Technology for the Australian National Data Service (ANDS).

As part of our policy on widening access to our seminars, as well as those physically present at the seminar we also provided a live video stream of Andrew’s talk (using the Bambuser video streaming service), together with an accompanying stream of the PowerPoint slides (provided using the Broadcast feature in MS PowerPoint 2010).

For those who could not attend this amplified seminar we are pleased to announce that recordings of the talk are now available. Due to some technical problems these are available in three parts:

In addition the slides used are also available on Slideshare and are embedded below.

Reflections on the Video Streaming

I’m a believer in the maxim that “all bugs are shallow to many eyes“. I also feel that one can learn from mistakes that others make.  In order to help others learn from my experiences I’ll describe the approaches taken and summarise the improvements I plan to make for future amplified events.

The intention to provide a live video stream of the seminar was announced well in advance and we used EventBrite in order to get an indication of possible numbers of remote participants. We contacted the people in advance in order to inform them of the technologies we would be using. We also asked where they were based and discovered one remote participant was based in Melbourne, Australia.

The video stream went live about 40 minutes before the start of the seminar in order to test sound levels, position of Webcam, etc.  I used a RocketFish Webcam on a Macbook Air laptop – and was informed that the autofocussing was slightly distracting if I moved around too much.

Information about the live streaming was announced on Twitter and the #ukolnseminar tag was used to help identify relevant tweets.  An additional chat channel was created on Chatzy in case of problems with the chat facility in Bambuser (and it turned out that this facility was used when the video stream connection went down).

My colleague Marieke Guy viewed the seminar remotely and kept me informed of how things were working from her perspective.  Marieke also captured a screen image of her computer which is available on Flickr and shown here.

The display shows the live video stream created using Bambuser, the streamed PowerPoint slides together with the Chatzy chat room on the bottom left and a tweet on the top right.

It should be noted that the Bambuser video stream appeared to lose connection on a couple of occasions and the video stream had to be restarted.

Afterwards Marieke provided the following summary based on her experiences:

  • The sound of typing from the computer used for the stream can be distracting.
  • Alerts from TweetDeck can also be distracting.
  • Sharing the URL for the live video stream can cause confusion if the stream is restarted – it might be better to give the URL of the channel rather than a specific video stream.
  • It can be confusing having displays of the PowerPoint slides, a video stream, a chat facility and a Twitter client open simultaneously.
  • There was a time lag on the video so the display of the PowerPoint slides was slightly out of synch with the audio and video (although this was not a significant problem).
  • The hyperlinks provided in MS PowerPoint were helpful and could be used from the streamed view of the slides.
  • The multiple chat facilities on Bambuser and on Chatzy were confusing. There is a need to be clear if there is a preferred channel and what its purpose is.
  • There is a need to be clear on how remote participants should ask questions.
  • Bambuser can be a bit flaky – the video stream disconnected several times.
  • It is sometime unclear if you are watching a live video stream or a recording.
  • With so much happening it can be hard to concentrate on actual content.
  • It would be useful to be able to show a live demonstration to the remote audience.
  • Questions raised during the talk should be repeated so that the remote audience can hear.

This feedback has been very useful and will help to inform the approaches we will take for future amplified events.  Do others have additional comments or suggestions?

Posted in Events | Tagged: | 2 Comments »