UK Web Focus

Innovation and best practices for the Web

Archive for the ‘standards’ Category

What Could ITS 2.0 Offer the Web Manager?

Posted by Brian Kelly (UK Web Focus) on 24 January 2014

Back in October 2013 the W3C announced that the Internationalization Tag Set (ITS) version 2.0 had become a W3C Recommendation. The announcement stated:

The MultilingualWeb-LT Working Group has published a W3C Recommendation of Internationalization Tag Set (ITS) Version 2.0. ITS 2.0 provides a foundation for integrating automated processing of human language into core Web technologies. ITS 2.0 bears many commonalities with its predecessor, ITS 1.0, but provides additional concepts that are designed to foster the automated creation and processing of multilingual Web content. Work on application scenarios for ITS 2.0 and gathering of usage and implementation experience will now take place in the ITS Interest Group. Learn more about the Internationalization Activity.

Following the delivery of this standard, on 17 January 2014 the MultilingualWeb-LT Working Group was officially closed.

But what exactly does ITS 2.0 do, and is it relevant to the interests of institutional web managers, or research, teaching or administrative departments within institutions?

The ITS 2.0 specification provides an overview which seeks to explain the purpose of the standard but, as might be expected in a standards document, this is rather dry. Several other resources also discuss ITS 2.0.

But the resource I found particularly interesting was the ITS 2.0 video channel. This contains a handful of videos about the ITS standard. One video in particular provides a brief introduction to ITS 2.0 and the advantages it can offer businesses involved in multilingual communication. This 8-minute video can be viewed on YouTube:

The video, an animated cartoon, is interesting because of the informal approach it takes to explaining the standard. This, in my experience, is unusual. The approach may not be appreciated by everyone, but standards are widely perceived to be dull and boring, even while being acknowledged as important. For me, providing a summary of the importance of standards in this way can help to reach out to new audiences who might otherwise fail to appreciate the role which standards can play.

If you are involved in providing web sites or content which may be of interest to an international audience, it may be worth spending 8 minutes viewing this video. If ITS 2.0 does appear to be of interest, the next question will be: what tools are available to create and process ITS 2.0 metadata? A page on ITS Implementations is available on the W3C web site but, again, this is rather dry and the tools seem rather specialist. More mainstream support for ITS 2.0 is likely to be provided only if there is demand for it. So if you have an interest in metadata standards which can support automated translation and you feel ITS 2.0 may be of use, make sure you ask your CMS vendor whether they intend to support it.

Might this be of interest to university web managers? If you are a marketing person at the University of Bath and wish to see your marketing resources publicised to the French-speaking world, but have a limited budget for translation, you probably wouldn’t want:

The University of Bath is based in a beautiful Georgian city: Bath.

to be translated as:

L’université de bain est basé dans une belle ville géorgienne: bain.

And whilst Google Translate does preserve the word “Bath” when it is capitalised, this seems not to be the case in all circumstances. For example, the opening sentence on the Holburne Museum web site:

Welcome to Bath’s art museum for everyone. 

is translated as:

Bienvenue au musée d’art de salle de bain pour tout le monde.

Perhaps marketing people in many organisations who would like to ensure that automated translation tools do not make such mistakes should be pestering their CMS vendors for ITS 2.0 support!
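For completeness, here is how ITS 2.0 addresses exactly this problem. Its ‘Translate’ data category surfaces in HTML5 as the global translate attribute, which tells translation tools to leave the marked text untouched. The following minimal sketch (the markup is illustrative, not taken from the university’s site) shows how the problematic sentence might be marked up:

    <!DOCTYPE html>
    <!-- Minimal sketch: ITS 2.0's "Translate" data category expressed
         via the HTML5 translate attribute. Content is illustrative. -->
    <html lang="en">
      <head>
        <meta charset="utf-8">
        <title>Protecting a place name from machine translation</title>
      </head>
      <body>
        <!-- translate="no" asks translation tools to keep "Bath" as-is -->
        <p>The University of <span translate="no">Bath</span> is based in a
           beautiful Georgian city: <span translate="no">Bath</span>.</p>
      </body>
    </html>

A translation tool which respects this markup would render the sentence in French while leaving both occurrences of “Bath” untranslated.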



Posted in standards, W3C | 3 Comments »

Reflections on 16 years at UKOLN (part 4)

Posted by Brian Kelly (UK Web Focus) on 25 July 2013

Working With Funders

During my time at UKOLN there have been several core funders, including BLRIC (the British Library Research and Innovation Centre), the LIC (Library and Information Commission), Re:source, the MLA (Museums, Libraries and Archives Council) and the JISC. Having joint funding meant that UKOLN was able to engage not only with the higher and further education sectors but also with the wider library community together with, following government reorganisations, the cultural heritage sector.

In recent posts I summarised my involvement in speaking at and organising events and in writing a large number of peer-reviewed papers. This work was carried out primarily through UKOLN’s core funding. The work typically sought to address the needs of our communities through involvement with people working directly within the sector. Such ‘customer’-focussed approaches helped, I feel, to ensure the work was relevant to the sector.

My work which was more directly driven by JISC’s needs began with developing documents on open standards of relevance to JISC’s digital library programmes, beginning with the eLib programme and followed by the DNER and the JISC Information Environment. This work led to related work for the cultural heritage sector, in particular providing advice on standards for the NOF (New Opportunities Fund) Digitise programme.

In addition to such core-funded work I was also involved in project-funded activities including the JISC-funded QA Focus and JISC PoWR projects, the BLRIC-funded WebWatch project and the EU-funded Exploit Interactive and Cultivate Interactive ejournals. I was also involved in a number of initiatives driven by JISC such as the eFramework but, as described in Andy Powell’s post “e-Framework – time to stop polishing guys!” the time and effort expended by this international partnership failed to have any significant impact and the eFramework Web site seems to be no longer available although a copy is available in the Internet Archive.

Working With Standards

One area which was of particular interest to both of UKOLN’s core funders was the selection of open standards for use in the development programmes which they funded. My initial work in this area involved contributing to a document on the open standards relevant to the eLib programme. This subsequently led to similar documents being developed for the JISC Information Environment and the NOF-digitise programme.

At that time the funders wanted a list of the open standards which should be mandated for use in their development programmes. However, JISC recognised that they did not have a compliance regime in place to address failures of projects to implement the mandated standards. In 2001 JISC announced a call for “the provision of a JISC/DNER national focus for digitisation and quality assurance in the UK”. The document described how the successful bidder would have responsibility for:

Ensuring adherence of projects to relevant parts of DNER standards and guidelines and reporting on problems in their implementation; incorporating feedback and recommending updates to the guidelines for the community as appropriate

I submitted a successful bid for this work in conjunction with ILRT, University of Bristol. After the first year ILRT withdrew and were replaced by the AHDS. My colleague Marieke Guy, our colleagues at the AHDS and I developed a quality assurance framework. As described in the final report:

The aim of the QA Focus project was to develop a quality assurance (QA) methodology which would help to ensure that projects funded by JISC digital library programmes were functional, widely accessible and interoperable; to provide support materials to accompany the QA framework and to help to embed the QA methodology in projects’ working practices.

The QA framework is a lightweight framework, based on the provision of technical policies together with systematic procedures for measuring compliance with those policies. The QA framework is described in a number of the QA Focus briefing documents, and the rationale for the framework has formed the basis of a number of peer-reviewed papers.

This lightweight framework was described in a briefing document. In brief, rather than mandating open standards which must be used across all of JISC’s activities, the framework recommended that projects should document their own policies on open standards (and related areas) and the procedures used to ensure that the policies were being implemented. JISC programme managers would have flexibility in prescribing specific open standards if this was felt to be appropriate (for example, a programme designed to investigate the value of the OAI-PMH protocol for harvesting repositories could legitimately mandate use of OAI-PMH, and perhaps even a specific version).

This approach meant that JISC could request that project reports be provided in MS Word or PDF formats – both of which were proprietary formats at the time (although they are now both open standards). It also provided flexibility by avoiding mandating open standards prematurely (e.g. insisting on use of SMIL rather than the proprietary Flash format) and by avoiding mandating open standards when design patterns may have been more appropriate (e.g. mandating Web Services standards such as SOAP when RESTful design practices have, in many cases, proved to be more relevant).

This work was carried out over a number of years. In 2003 an initial paper on “Ideology Or Pragmatism? Open Standards And Cultural Heritage Web Sites” by myself, my colleague Marieke Guy, Alastair Dunning (of the AHDS, the now defunct Arts and Humanities Data Service) and Lawrie Phipps (TechDis) described how:

… despite the widespread acceptance of the importance of open standards, in practice many organisations fail to implement open standards in their provision of access to digital resources. It clearly becomes difficult to mandate use of open standards if it is well-known that compliance is seldom enforced. Rather than abandoning open standards or imposing a stricter regime for ensuring compliance, this paper argues that there is a need to adopt a culture which is supportive of use of open standards but provides flexibility to cater for the difficulties in achieving this.

The next paper, published two years later, on “A Standards Framework For Digital Library Programmes”, by myself, my UKOLN colleagues Rosemary Russell and Pete Johnston, Paul Hollins (CETIS), Alastair Dunning and Lawrie Phipps:

describes a layered approach to selection and use of open standards which is being developed for digital library development work within the UK. This approach reflects the diversity of the technical environment, the service provider’s environment, the user requirements and maturity of standards by separating contextual aspects; technical and non-technical policies; the selection of appropriate solutions and the compliance layer. To place the layered approach in a working context, case studies are provided of the types of environments in which the standards framework could be implemented, from an established standards-based service, to a new service in the process of selecting and implementing metadata standards. These examples serve to illustrate the need for such frameworks.

Further papers on “A Contextual Framework For Standards” (by myself, Alastair Dunning, Paul Hollins, Lawrie Phipps and Sebastian Rahtz [OSS Watch]), “Addressing The Limitations Of Open Standards” (by myself, Marieke Guy and Alastair Dunning) and “Openness in Higher Education: Open Source, Open Standards, Open Access” (by myself, Scott Wilson [CETIS] and Randy Metcalfe [OSS Watch]) subsequently developed these ideas and explored how they could be applied in a variety of contexts.

Conclusions

Looking back at this work, I am struck by the value of the expertise provided by colleagues across the sector. The papers I have listed, which described the approaches and ensured that the ideas had been subject to peer review, were written by staff at UKOLN (4 individuals), CETIS (1 individual), OSS Watch (2 individuals), TechDis (1 individual) and the now-defunct AHDS (2 individuals). JISC programme managers provided valuable project management support for the initial QA Focus work and gave early feedback on the ideas, but did not have intellectual input into the ideas.

In light of the evidence given in this blog post, I am somewhat concerned by the new logo which appeared on the redesigned Jisc Web site: “We are the UK’s expert on digital technologies for education and research”. Really? What is the evidence for that assertion? Wouldn’t it be more appropriate to say “We are successful in designing development programmes and providing project management expertise to these programmes”? And, equally importantly, “We are successful in encouraging the experts in the higher education sector to work together for the benefit of the wider community”. I would be the first to thank the JISC for organising events which enabled me to meet the co-authors I’ve listed above and which encouraged such joint working. But “We are the experts”! Who coined that statement, I wonder?



 


 

Posted in General, standards | 1 Comment »

What Does the Demise of Google Reader Tell Us About Open Web Standards?

Posted by Brian Kelly (UK Web Focus) on 14 March 2013

Google Reader is Dead!

Earlier this morning I came across the news that Google have announced the demise of their Google Reader service:

We’re retiring Reader on July 1. We know many of you will be sad to see it go. Thanks for 8 great years! goo.gl/7joct

Despite the announcement having been made only a few hours ago, we are already seeing bloggers up in arms at the news. We might expect large-scale services such as TechCrunch (GoogleReaderpocalypse. For Real This Time.) to provide a speedy response, but closer to home bloggers such as James Clay have responded in blunt terms: Google Reader is Dead.

What Does the Announcement Tell us About Open Web Standards?

The risks posed by the demise of applications were always meant to be mitigated by use of open standards. But in this case the underlying format used by Google Reader (RSS) is widely accepted as an open standard in both its variants (RSS 1.0 and RSS 2.0). Blogs will continue to publish RSS feeds, as will a variety of other tools and services. Why, then, should the demise of Google Reader cause so much anger amongst users of the tool?

As RSS grew in popularity we saw the development of a range of RSS readers. Initially we saw dedicated RSS clients which users installed on their desktop. We then saw RSS add-ons to existing tools, including RSS extensions for popular email clients such as Outlook. But the development of the “Web as a platform” led to a growth in popularity of Web-based RSS tools, which meant that users did not have to install software on their desktop computer (which was particularly useful for those with locked-down desktops and IT Service departments who were reluctant to install new software).

One of the early Web-based RSS readers was Bloglines. I used this service many years ago but haven’t logged in for several years. As I learnt from Wikipedia, the service was scheduled to be shut down on 15 November 2010, but a last-minute reprieve meant that it continued under a new owner. However, when I logged on to the service a few minutes ago I discovered that the feeds I had subscribed to had been lost. This was not a problem for me, as I had migrated my feeds to Google Reader. But now it seems that I will once again shortly be losing the service I use to view my RSS feeds.

I should be able to export the list of my feeds held in Google Reader and return to Bloglines as my preferred RSS reader. In reality, however, it will not be so simple. I now use a variety of tools on my mobile devices (such as Flipboard, Currents, Pulse, etc.) to read my feeds, and use Google Reader as the intermediary for managing my large number of RSS subscriptions. I suspect I will be reluctant to manage those subscriptions across a range of separate clients. For me, as for many others who have been commenting on blogs today, Google Reader has been the ideal tool.
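The subscription list itself, at least, is portable: Google Reader can export subscriptions as an OPML file, which most other readers can import. A minimal sketch of such a file, with illustrative feed details, is shown below:

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Minimal sketch of an OPML subscription list of the kind Google
         Reader imports and exports. Feed titles and URLs are illustrative. -->
    <opml version="1.0">
      <head>
        <title>My subscriptions</title>
      </head>
      <body>
        <outline type="rss" text="UK Web Focus"
                 xmlUrl="http://ukwebfocus.wordpress.com/feed/"
                 htmlUrl="http://ukwebfocus.wordpress.com/"/>
      </body>
    </opml>

What is much harder to move is everything built around the list: read/unread state, starred items and the third-party apps which synchronise against Google Reader.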

What conclusions can we reach about the role of Web standards in light of Google’s announcement?

The view that open standards protected the user from the vagaries of the market place seems to be undermined – in reality it seems that users grow to love tools which are embedded in daily use.

It also appears that successful applications not only attract large numbers of users; they can also attract developers and companies who build an ecosphere of applications dependent on services such as Google Reader.

It also seems that social sharing services are undermining the use of RSS for bringing relevant content to users. Perhaps related to this will be the difficulties companies will have in monetising RSS feeds.

It is interesting to see the arguments made in the Hitler parody Hitler finds out Google Reader is shutting down, which is available on YouTube. I’d be interested in others’ thoughts on the reasons for the closure of Google Reader and the implications of this announcement.



Posted in rss, standards | 11 Comments »

Good News From the UK Government: Launch of the Open Standards Principles

Posted by Brian Kelly (UK Web Focus) on 11 December 2012

In April 2012 I wrote a post entitled Preparing a Response to the UK Government’s Open Standards: Open Opportunities Document which summarised my experiences of support for open standards in JISC development programmes since the 1990s and encouraged others to participate in the UK Government’s consultation exercise.

A post by Simon Wardley entitled The UK’s battle for open standards which began:

Many of you are probably not aware, but there is an ongoing battle within the U.K. that will shape the future of the U.K. tech industry. It’s all about open standards.

motivated me to write a follow-up post entitled Oh What A Lovely War! in which I described the language which was being used to describe this consultation exercise:

In brief we are seeing a “battle for open standards” that will “shape the future of the UK tech industry” in which we are seeing “UK Government betrayal” which has led to a “proprietary lobby triumph”. The ugly secrets of “how Microsoft fought true open standards” have been revealed and now every man must do his duty and “get involved”! Who said standards were boring?

Yesterday I received the following email from Linda Humphries of the Government Digital Service, Cabinet Office.

Thank you for your response to the UK Government’s Open Standards: Open Opportunities public consultation. The consultation ran from 9 February to 4 June 2012. At the close of the consultation, we had received evidence from over 480 responses and we would like to take this opportunity to thank you for sharing your views and helping us to formulate new policy on this topic.
As you may know, the consultation process concluded with the publication of a government response and a new policy – the Open Standards Principles – on 1 November 2012. The government response covers the process we followed, a review of the key themes that emerged in the consultation, how they have been taken on board and the next steps for open standards in government IT.

Online submissions were published during the consultation period to encourage debate and we have now also made available the written responses submitted through other channels. The only exception to this is any submissions which explicitly requested confidentiality.

Two independent reports commissioned by the Cabinet Office from Bournemouth University have also been published and are available on the Cabinet Office website – an analysis of the consultation responses and an evidence review of aspects of the proposed policy. The responses, reports and new policy are all available here.

In the new year, we shall be setting up the Open Standards Board, as described in the Open Standards Principles. We look forward to your continuing engagement through the Standards Hub during 2013.

Kind regards,

Linda

Linda Humphries
Government Digital Service
Cabinet Office


The Key Documents

The key documents which have been published are Open Standards Principles (PDF, MS Word and ODT formats), Open Standards Consultation – Government Response (PDF, MS Word and ODT formats), Statistical data (PDF, MS Word and ODT formats), An Analysis of the Public Consultation on Open Standards: Open Opportunities (PDF, MS Word and ODT formats), Open Standards in Government IT: A Review of the Evidence (PDF, MS Word and ODT formats) and B (PDF, MS Excel and CSV formats).

The first document summarised the key principles:

Open Standards Principles

These principles are the foundation for the specification of standards for software interoperability, data and document formats in government IT:

1. We place the needs of our users at the heart of our standards choices
2. Our selected open standards will enable suppliers to compete on a level playing field
3. Our standards choices support flexibility and change
4. We adopt open standards that support sustainable cost
5. Our decisions on standards selection are well informed
6. We select open standards using fair and transparent processes
7. We are fair and transparent in the specification and implementation of open standards

The introduction to the document states that:

This policy becomes active on 1 November 2012. From this date government bodies [1]
must adhere to the Open Standards Principles – for software interoperability, data and document formats in government IT specifications.

The other documents summarised the responses received to the consultation, which included feedback from Adam Cooper (JISC CETIS), Rowan Wilson (JISC OSS Watch), Rob Englebright (JISC) and Tony Hirst (Open University), in addition to myself and several others from the university sector.

The document Open Standards in Government IT: A Review of the Evidence, an independent report for the Cabinet Office by the Centre for Intellectual Property & Policy Management at Bournemouth University, concluded:

Although there is a lack of quantitative evidence on expected cost savings from adopting open standards, abundant examples exist where an open standards policy has been adopted with various consequent benefits, and the literature identifies few downside risks. The challenges appear to lie in the manner of implementation so that potential pitfalls, such as adopting the wrong standard, are avoided while potential gains from increased interoperability, including more competitive procurement and benefits to SMEs and citizens are maximised.

Perhaps some unexpected good news from the Government for Christmas? Might we be able to announce that the standards battle is now over and cry out “Peace in our time”? Time to read the documents in more detail, I feel. But I’d welcome comments from anyone who may already have read the documents and digested the implications.


[1] Central government departments, their agencies, non-departmental public bodies (NDPBs) and any other bodies for which they are responsible.



Posted in standards | 3 Comments »

“Standards are voluntarily adopted and success is determined by the market”

Posted by Brian Kelly (UK Web Focus) on 15 October 2012

Yesterday (Sunday 14 October) was World Standards Day. As described on Wikipedia, “The aim of World Standards Day is to raise awareness among regulators, industry and consumers as to the importance of standardization to the global economy”. It is therefore timely to highlight OpenStand. As described on the OpenStand Web site:

On August 29th five leading global organizations jointly signed an agreement to affirm and adhere to a set of Principles in support of The Modern Paradigm for Standards; an open and collectively empowering model that will help radically improve the way people around the world develop new technologies and innovate for humanity.

The “Modern Paradigm for Standards” is shaped by adherence to five principles:

  1. Due process: Decisions are made with equity and fairness among participants. No one party dominates or guides standards development. Standards processes are transparent and opportunities exist to appeal decisions. Processes for periodic standards review and updating are well defined.
  2. Broad consensus: Processes allow for all views to be considered and addressed, such that agreement can be found across a range of interests.
  3. Transparency: Standards organizations provide advance public notice of proposed standards development activities, the scope of work to be undertaken, and conditions for participation. Easily accessible records of decisions and the materials used in reaching those decisions are provided. Public comment periods are provided before final standards approval and adoption.
  4. Balance: Standards activities are not exclusively dominated by any particular person, company or interest group.
  5. Openness: Standards processes are open to all interested and informed parties.

The “Modern Paradigm for Standards” itself is based on five key approaches:

  1. Cooperation
  2. Adherence to the principles listed above
  3. Collective empowerment
  4. Availability
  5. Voluntary Adoption

The Topsy tool provides a useful means of observing Twitter discussions about web resources. Looking at recent English-language tweets about the Web site we can see a useful summary:

5 organizations - #IETF #IEEE #W3C #IAB @Internet Society – issue joint statement on open Internet standards - http://t.co/cO2rQvGH

together with a summary of the aims of this initiative:

check out the uber standards org @openstand that will drive innovation globally through interoperability http://t.co/cKrkYnvr

and an acknowledgement that more work is needed if the goal of “driving innovation globally through interoperability” is to be realised:

OpenStand (http://t.co/2g50zvMc) is good politics; that it doesn’t go far enough just shows there’s still work to be done.

However, it is the single-sentence summary of what is meant by “Voluntary Adoption” which struck me as being of greatest interest:

Standards are voluntarily adopted and success is determined by the market.

In the past I think there has been a view that open standards exist independently of the market place, with public sector organisations, in particular, being expected to distance themselves from the market economy in the development and procurement of IT systems. However this statement of a “modern paradigm for standards” makes it clear that standards bodies such as the W3C, IETF, IEEE, IAB and the Internet Society are explicit that the success of open standards is dependent on acceptance of the standards across the market place. Back in September 2008 I highlighted the importance of market place acceptance of open standards:

many W3C standards …  have clearly failed to have any significant impact in the market place – compare, for example, the success of Macromedia’s Flash (SWF) format with the niche role that W3C’s SMIL format has.

and two months later a post entitled Why Did SMIL and SVG Fail? generated a discussion about criteria for identifying failed standards. Perhaps, as was suggested in the comments on the post, SMIL and SVG have merely had a very slow growth to market acceptance. But I can’t help but feel that if SMIL and SVG are belatedly felt to be successful standards, this will have been a result of the decision by Apple not to support Flash on the iOS platform. This seems to provide a good example of the OpenStand principle that “Standards are voluntarily adopted and success is determined by the market”.

We can now see parallels between the selection of third-party services to support institutional activities and the selection of open standards to support development activities. Interestingly, such issues were discussed at the CETIS meeting on “Future of Interoperability Standards” held in Bolton in January 2010. I hope that the Opportunities and Risks Framework For Standards which I presented at the meeting can provide an approach for helping to identify the standards which can achieve success in the market place.


Posted in standards | Leave a Comment »

Oh What A Lovely War!

Posted by Brian Kelly (UK Web Focus) on 8 May 2012

“The UK’s Battle for Open Standards”

The UK Government’s current consultation document on policies for open standards has generated a fair amount of passion. In addition to articles published in Computer Weekly by Mark Ballard and Glyn Moody, I also recently came across the following tweet from @swardley:

I haven’t posted on the radar for a long time, really happy they took my article on the open standards battle - http://oreil.ly/Im5z0o

His post, entitled “The UK’s battle for open standards“, began:

Many of you are probably not aware, but there is an ongoing battle within the U.K. that will shape the future of the U.K. tech industry. It’s all about open standards.

and concluded:

The battle for open standards needs help, so get involved.

Earlier this year, the language used in the title of Glyn Moody’s post UK Government Betrayal of Open Standards Confirmed suggested that this was likely to be a vicious battle, and his more recent article made it clear who the enemy was: How Microsoft Fought True Open Standards. Mark Ballard’s article on how the Proprietary lobby triumphs in first open standards showdown reinforced the militaristic angle:

In conclusion, I feel that this meeting and others like it, should not become vicarious battlegrounds for tech giants to slug out battles that they can’t or won’t conduct elsewhere – at the end of the day, it should be about delivering the best technology-enabled services possible at the best price point. 

In brief we are seeing a “battle for open standards” that will “shape the future of the UK tech industry” in which we are seeing “UK Government betrayal” which has led to a “proprietary lobby triumph”. The ugly secrets of “how Microsoft fought true open standards” have been revealed and now every man must do his duty and “get involved”! Who said standards were boring?

“Losses 60,000 Men. Ground Gained 0 Yards”

I recently watched a DVD of the film “Oh! What a Lovely War”, a film I saw when I was young, which chronicles the various madnesses of the First World War. The scene depicting generals happy to send soldiers to their destruction, convinced of the rightness of their cause, came to mind when I read the blog posts suggesting that success in the open standards battle would help the minor players (the open source community, which would be depicted by Belgium in an updated version of the film) against the evil empire (no prizes for guessing, but ignore the humanist comments of its former general).

But what of the foot soldiers? In the standards battle these will be the users of IT services, who have little interest in the arcane decisions being made in Whitehall, in obscure European cities and by those plotting to overthrow the existing order. Will they (we) see peace in our time (to use a saying from a later war), or might winning the open standards battle fail to deliver enhanced services for users?

Addressing the Needs of the User

I’ve tried to make the point that the militaristic language being used by the blogging community is inappropriate in discussions about government policies on open standards. Rather than continuing with this metaphor, the issue I feel needs to be addressed is: what will a new policy mean for users of government IT services? The current discussions are centred on the benefits of providing a level playing field for developers, especially open source developers. But there is little discussion of what this will mean for end users, apart from an implied suggestion that open source solutions based on royalty-free open standards will inevitably provide a better environment for users of the services.

We have, for example, seen how a well-intentioned government policy, such as the one which stated that all government Web sites must be WCAG compliant, could lead to undesirable side-effects if implemented in a simplistic fashion. In this case, despite an Accessibility Summit meeting at which Web accessibility advocates and researchers agreed on the need to avoid simplistic checkbox approaches, the government announced a policy which, if it had been implemented, could have resulted in government web sites with trivial WCAG errors being withdrawn from service.

In the Web accessibility arena, alternative approaches led to the development of the BS 8878 Web Accessibility Code of Practice. This provides a much more realistic approach to achieving the laudable goal of enhancing access for people with disabilities: it takes contextual issues into account, focuses on best practices for the various processes in developing accessible Web sites, and avoids the risk that forcing Web sites to be WCAG compliant would lead to non-conformant Web sites being removed from service, or to potentially valuable Web sites not being deployed due to difficulties in achieving WCAG conformance.

The current debate on open standards faces similar risks. To take a few simple and tangible examples:

  • The MP3 format is based on patented compression algorithms. Would a government policy which mandated patent-free standards ban use of the MP3 format? If so, since popular audio players such as iPods support the MP3 format but not necessarily patent-free alternatives, how will podcasts be made available for popular consumer products such as the iPod and iPhone?
  • The RSS (Really Simple Syndication/RDF Site Summary) format is not an open standard in the strict sense, since it is not owned by a trusted neutral standards body (a minimal example of the format is sketched after this list). Will RSS no longer be usable on Government Web sites and, if so, what benefits does this provide?
  • The Microsoft Office format is now an ISO standard. Does this mean that MS Office will be an acceptable format? If so, what are the current ‘battles’ about? If not, what principles are the battles about?
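To make the RSS example concrete, here is a minimal sketch of an RSS 2.0 feed of the kind a government department might publish; the format is simple and very widely consumed, despite never having been ratified by a neutral standards body. The titles and URLs are illustrative only:

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Minimal sketch of an RSS 2.0 feed. Titles and URLs are
         illustrative, not real government addresses. -->
    <rss version="2.0">
      <channel>
        <title>Departmental news</title>
        <link>http://www.example.gov.uk/news/</link>
        <description>Latest announcements</description>
        <item>
          <title>Open standards consultation launched</title>
          <link>http://www.example.gov.uk/news/open-standards</link>
        </item>
      </channel>
    </rss>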

Although I’m not in favour of the discussions about policies on Government use of open standards being framed in military metaphors, I do agree with the call to get involved. Your country does need you, if you have an interest in the role open standards can play in the development of IT services in the public sector. In particular, if you have an interest in the implications for user communities of the deployment of policies on open standards, I’d encourage you to participate in the consultation.

Posted in standards | 1 Comment »

Preparing a Response to the UK Government’s Open Standards: Open Opportunities Document

Posted by Brian Kelly (UK Web Focus) on 26 April 2012

 

The UK Government’s Open Standards Consultation

The UK Government is currently seeking comments in its Open Standards Consultation on the Open Standards: Open Opportunities – Flexibility and Efficiency in Government IT document (a 30-page document available in PDF format). I am currently formulating my responses to the consultation. In light of the interest in open standards among many developers, managers and policy makers in the higher and further education sector, I would encourage participation from those with interests in this area – it should be noted, however, that the consultation closes on 1 May!

Update 27 June 2012: The deadline has now been extended to Monday, 4th June 2012.

The Open Standards Survey 2011

As described in two posts entitled UK Government Survey on Open Standards: But What is an ‘Open Standard’? and “UK Government Will Impose Compulsory Open Standards” published a year ago I responded to the initial survey and gave my thoughts on the definitions of an open standard. I also commented on the flaws in the survey process which made it difficult to provide meaningful feedback.

My response was one of 970 received – and it was interesting to read in the Summary of lessons learned from the UK Government Open Standards Survey, 2011 (pdf, 246kb) that the majority came from the private sector. Looking at the pie chart given in the report, I would estimate that about 200-300 responses came from the public sector (excluding central government). How many of these came from the UK higher and further education sector I do not know.

It should also be noted that although “the policy resulting from this consultation will apply to all central government departments, their agencies, non-departmental public bodies (NDPBs) and any other bodies for which they are responsible”, the document goes on to add that “Local government and wider public sector bodies will be encouraged to adopt the policy to deliver wider interoperability benefits”. There is therefore an opportunity to influence government policy in an area which may affect IT development policies in the future.

Reflections on 20 years Involvement in Open Standards in UK Higher Education

Although I had serious reservations about last year’s survey, in many respects I feel that the Open Standards: Open Opportunities – Flexibility and Efficiency in Government IT consultation document has its merits.

The feedback I gave in last year’s survey was based on work related to policies on use of open standards in higher education which I have been involved with since the launch of the eLib national digital library programme back in the mid 1990s. Back then those of us who contributed to the eLib Programme Technical Standards document had, in retrospect, a very naive view of open standards, with the document suggesting that standards such as VRML and whois++ could have a role to play for eLib projects. Some projects may have used these standards (I know that for a period whois++ was felt to be important for the eLib Subject Based Information Gateways) but in retrospect we were over-enthusiastic in encouraging take-up of what at the time seemed to be potentially significant standards.

The dangers of promoting (or, worse, mandating) use of emerging open standards which are being actively promoted by their supporters (and by the standards bodies themselves) became apparent when we realised that W3C standards such as SMIL and SVG were not significantly challenging proprietary solutions such as Flash. In addition, in 2005 a panel session entitled Web Services Considered Harmful argued that a series of overly complex open standards (several thousand pages when printed out!) was proving costly to implement and that ‘grassroots’ approaches, including RSS and REST, would provide more cost-effective approaches to development.

In the UK higher education sector we are aware of the dangers of mandating inappropriate open standards, with universities being mandated to support OSI networking protocols, with Coloured Book software providing a transition to this environment. Then the Internet came along and universities were initially permitted to access Internet services by a TCP/IP tunnel across JANET before the clear benefits provided by the Internet eventually became apparent to policy-makers and the sector made native use of TCP/IP.

Our understanding of the benefits which can be gained from use of open standards, together with the risks of a naive and uncritical acceptance of open standards, led to a series of papers seeking solutions to this minefield, written by myself, my colleagues Marieke Guy and Rosemary Russell, my former colleague Pete Johnston, Paul Hollins and Scott Wilson (JISC CETIS), Alastair Dunning (at the time of the AHDS), Sebastian Rahtz and Randy Metcalfe (then of JISC OSS Watch) and Lawrie Phipps (then of JISC TechDis).

In addition to these papers, a position paper on “An Opportunities and Risks Framework For Standards” was presented at the “Future of Interoperability Standards Meeting 2010” organised by CETIS in February 2010. The paper described how the experiences of the past led to the need for a risk management approach to use of open standards, especially emerging open standards which may not yet have achieved critical mass.

Open Standards: Open Opportunities - Flexibility and Efficiency in Government IT

In light of this background, what feedback am I planning to give on the report? I have highlighted below a number of passages in the report on which I intend to comment.

Report (p. 4): “Information technology across the government estate is expensive.”
Comment: The opening foreword highlights that the aims of the policy are cost-savings. There will be a need to ensure that the policy supports this key goal.

Report (p. 5): “The Government ICT Strategy … has already committed the Government to creating a common and secure IT infrastructure based on a suite of compulsory open standards, adopting appropriate open standards wherever possible.” [my emphasis]
Comment: The challenge will be in identifying what is compulsory and what the criteria are for defining “wherever possible”. The compulsory aspects could mandate specific technical standards or could mandate specific processes (e.g. an open summary of the decision-making processes).

Report (p. 8): “The mandation of specific open standards will
• make IT solutions fully interoperable to allow for reuse, sharing and scalability across organisational boundaries and delivery chains;
• help the Government to avoid lengthy vendor lock-in, allowing transfer of services or suppliers without excessive transition costs, loss of data or functionality.”
Comment: If the main goal of the open standards policies is to achieve cost savings, should this be mentioned here?

Report (p. 11): “The European Commission’s EIF version 2.0 does not provide a definition of open standard, but instead describes ‘openness’ …”
Comment: This approach, which seeks to characterise open approaches, provides the flexibility to allow use of cost effective standards such as RSS (which have not been ratified by an open standards body) as well as use of design approaches (such as RESTful design) rather than over-complex open standards (such as the WS- series).

Report (p. 12): “For the purpose of UK Government software interoperability, data and document formats, the definition of open standards is those standards which fulfil the following criteria: …”
Comment: It is unclear whether there should be an ‘and’ or an ‘or’ linking the five criteria.

Report (p. 13): “When specifying IT requirements for software interoperability, data and document formats, government departments should request that open standards adhering to the UK Government definition are adopted, unless there are clear business reasons why this is inappropriate, in order to …”
Comment: This process-driven approach relates closely to the approaches developed in the UK HE sector and described in a paper on “Openness in Higher Education: Open Source, Open Standards, Open Access”.

Report (p. 13): “Standards for software interoperability, data and document formats that do not comply with the UK Government definition of an open standard may be considered for use in government IT procurement specifications if …”
Comment: This flexibility is to be welcomed in light of the complexities related to open standards. However there will be a need to ensure that such flexibility does not allow inappropriate proprietary solutions to continue to be used.

Report (p. 13): “Any standard specified that is not an open standard must be selected as a result of a pragmatic and informed decision, taking the consequences into account. The reasons should be fully documented and published, in line with the Government’s transparency agenda.”
Comment: This clause is welcomed.

I welcome your comments on my views on the consultation document. More importantly, however, I’d encourage you to give your views on the consultation web site – as that is the place where your views can influence government policy decisions. Note that if you would like to see responses which have already been submitted, I suggest you visit Jeni Tennison’s post on the UK Open Standards Consultation.



Posted in standards | 5 Comments »

W3Conf: Practical Standards for Web Professionals – Free for Remote Participants!

Posted by Brian Kelly (UK Web Focus) on 28 October 2011

The W3C are hosting their first conference, “W3Conf: Practical Standards for Web Professionals”, which will take place on 15-16 November 2011 at the Redmond Marriott Town Center, Redmond, USA. The early bird registration fee of $199 for the two-day event seems very reasonable, and the event’s focus on HTML5 and the Open Web Platform is probably of interest to many readers of this blog. However, I suspect that not many will be able to travel to the US to attend this conference (if you do wish to attend, note that the deadline for early bird registration is 1 November, after which the fee goes up to $299).

However the event Web site states that “The recordings of the presentations will be freely available” and goes on to add that “During the event, there will be a live stream of the sessions, with English subtitling. After the event, each session will be archived for future reference“.

The following sessions will be held at the conference:

Day 1, 15 November:

  • Welcome: Contributing to Open Standards, Ian Jacobs (W3C)
  • Testing to Perfection, Philippe Le Hégaret (W3C)
  • Community Groups: a Case Study With Web Payments, Manu Sporny (Digital Bazaar)
  • Developer Documentation, Doug Schepers (W3C)
  • HTML5 Games
  • Web Graphics – a Large Creative Palette, Vincent Hardy (Adobe)
  • Modern Layout: How Do You Build Layout in 2011 (CSS3)?, Divya Manian (Opera)
  • Shortcuts: Getting Off (Line) With the HTML5 Appcache, John Allsopp (Web Directions)
  • The n-Screens Problem: Building Apps in a World Of TV and Mobiles, Rajesh Lal (Nokia)
  • The Great HTML5 Divide: How Polyfills and Shims Let You Light Up Your Sites in Non-Modern Browsers, Rey Bango (Microsoft)
  • HTML5: the Foundation of the Web Platform, Paul Irish (Google)

Day 2, 16 November:

  • HTML5 Demo Fest: The Best From The Web, Giorgio Sardo (Microsoft)
  • Shortcuts: Data Visualisation With Web Standards, Mike Bostock (Square)
  • Universal Access: A Practical Guide to Accessibility, ARIA, and Script, Becky Gibson (IBM)
  • Security and Privacy: Securing User Identities and Applications, Brad Hill (PayPal), Scott Stender (iSEC Partners)
  • Shortcuts: Touch Events, Grant Goodale (Massively Fun)
  • Mobile Web Development Topic: Building For Mobile Devices
  • Shortcuts: Modernizr, Faruk Ateş (Apture)
  • Browsers and Standards: Where the Rubber Hits the Road, Paul Cotton (Microsoft), Tantek Çelik (Mozilla), Chris Wilson (Google), Divya Manian (Opera)

It was very timely to read about this conference during Open Access 2011 Week, which the JISC, among many other organisations, are supporting. The free access to the talks and resources which will be used illustrates how openness can be used to enhance learning and creativity, in this context for developers who are looking to use Web standards to enhance their services.

The provision of remote access to the conference is also very timely in the context of the JISC-funded Greening Events II project which is being provided by ILRT and UKOLN. It would be valuable if the conference organisers were able to provide statistics on remote participation during the event: how many people viewed from the UK, for example, and for how long? It would be interesting to see if the environmental costs of delivering the streaming video and hosting videos and slides for subsequent viewing could be compared with the costs of flying to the US.

Posted in Events, standards | Leave a Comment »

Privacy Settings For UK Russell Group University Home Pages

Posted by Brian Kelly (UK Web Focus) on 24 May 2011

On the website-info-mgt JISCMail list, Claire Gibbons, Senior Web and Marketing Manager at the University of Bradford, today asked “Has anyone done anything in particular in response to the changes to the rules on using cookies and similar technologies for storing information from the ICO?” and went on to add that “We were going to update and add to our privacy policy in terms of what cookies we use and why”.

This email message was quite timely, as privacy issues will be featured in a plenary talk at UKOLN’s forthcoming IWMW 2011 workshop, which will be held at the University of Reading on 26-27 July, with Dave Raggett giving the following talk:

Online Privacy:
This plenary will begin with a report on work on privacy and identity in the EU FP7 PrimeLife project which looks at bringing sustainable privacy and identity management to future networks and services. There will be a demonstration of a Firefox extension that enables you to view website practices and to set personal preferences on a per site basis. This will be followed by an account of what happened to P3P, the current debate around do not track, and some thoughts about where we are headed.

The Firefox extension mentioned in the abstract is known as the ‘Privacy Dashboard’ and is described as “a Firefox add-on designed to help you understand what personal information is being collected by websites, and to provide you with a means to control this on a per website basis”.

The dashboard was developed by Dave Raggett with funding from the European Union’s 7th Framework Programme for the PrimeLife project, a pan-European research project focusing on bringing sustainable privacy and identity management to future networks and services.

In order to observe patterns in UK universities’ online privacy practices, I have used the W3C Privacy Dashboard to analyse the home pages of the twenty UK Russell Group university Web sites. The results are given in the following table.

| Ref. | Institution | Session cookies | Lasting cookies | External lasting cookies | Third-party sites | Third-party cookies | Third-party lasting cookies | Invisible images |
|------|-------------|-----------------|-----------------|--------------------------|-------------------|---------------------|------------------------------|------------------|
| 1 | University of Birmingham | 3 | 3 | 0 | 4 | 0 | 2 | 0 |
| 2 | University of Bristol | 0 | 0 | 0 | 4 | 0 | 6 | 8 |
| 3 | University of Cambridge | 1 | 3 | 0 | 3 | 1 | 2 | 0 |
| 4 | Cardiff University | 1 | 4 | 0 | 0 | 0 | 0 | 0 |
| 5 | University of Edinburgh | 1 | 4 | 0 | 0 | 0 | 0 | 0 |
| 6 | University of Glasgow | 2 | 3 | 0 | 2 | 1 | 6 | 2 |
| 7 | Imperial College | 3 | 3 | 0 | 3 | 0 | 2 | 0 |
| 8 | King’s College London | 3 | 3 | 0 | 3 | 1 | 6 | 0 |
| 9 | University of Leeds | 2 | 3 | 0 | 1 | 0 | 0 | 0 |
| 10 | University of Liverpool | 2 | 3 | 0 | 2 | 2 | 3 | 0 |
| 11 | LSE | 3 | 0 | 0 | 1 | 0 | 0 | 0 |
| 12 | University of Manchester | 3 | 0 | 0 | 1 | 0 | 0 | 0 |
| 13 | Newcastle University | 2 | 0 | 0 | 0 | 0 | 0 | 3 |
| 14 | University of Nottingham | 2 | 3 | 0 | 2 | 0 | 5 | 0 |
| 15 | University of Oxford | 1 | 5 | 0 | 1 | 0 | 0 | 1 |
| 16 | Queen’s University Belfast | 1 | 3 | 0 | 1 | 0 | 0 | 0 |
| 17 | University of Sheffield | 2 | 3 | 0 | 0 | 1 | 0 | 0 |
| 18 | University of Southampton | 1 | 3 | 0 | 3 | 0 | 0 | 0 |
| 19 | University College London | 1 | 2 | 7 | 0 | 0 | 0 | 0 |
| 20 | University of Warwick | 9 | 6 | 0 | 39 | 2 | 95 | 6 |
| | TOTAL | 43 | 54 | 7 | 70 | | 127 | 20 |

It should be noted that the findings appear to be volatile, with significant differences being found when the findings were checked a few days after the initial survey.

How do these findings compare with other Web sites, including those in other sectors? It is possible to query the Privacy Dashboard’s data on Web sites for which data is available, which includes Fortune 100 Web sites. In addition I have used the tool on the following Web sites:

| Ref. | Web site | Session cookies | Lasting cookies | External lasting cookies | Third-party sites | Third-party cookies | Third-party lasting cookies | Invisible images | Additional comments |
|------|----------|-----------------|-----------------|--------------------------|-------------------|---------------------|------------------------------|------------------|---------------------|
| 1 | W3C | 0 | 0 | 0 | 2 | 0 | 4 | 1 | P3P Policy |
| 2 | Facebook home page | 4 | 6 | 0 | 1 | 0 | 0 | 1 | |
| 3 | Google | 0 | 7 | 0 | 0 | 0 | 1 | 0 | |
| 4 | No. 10 Downing Street | 1 | 4 | 0 | 8 | 0 | 52 | 1 | (Nos. updated after publication) |
| 5 | BP | 1 | 1 | 0 | 0 | 0 | 0 | 2 | P3P Policy |
| 6 | Harvard | 3 | 4 | 1 | 0 | 0 | 0 | | |
| 7 | ICO.gov.uk | 2 | 3 | 0 | 1 | 0 | 0 | 1 | |

I suspect that many Web managers will be following Claire Gibbons’ lead in seeking to understand the implications of the changes to the rules on using cookies and similar technologies for storing information, and reading the ICO’s paper on Changes to the rules on using cookies and similar technologies for storing information (PDF format). I hope this survey provides a context for the discussions and that policy makers find the Privacy Dashboard tool useful. But in addition to ensuring that policy statements regarding the use of cookies are adequately documented, might this not also provide an opportunity to implement a machine-readable version of such policies? Is it time for P3P, the Platform for Privacy Preferences Project standard, to make a come-back?
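For those who have forgotten how P3P works, the mechanism is straightforward: a site publishes a machine-readable privacy policy and a policy reference file, conventionally at /w3c/p3p.xml, which tells user agents which policy applies to which parts of the site. A minimal sketch of a policy reference file is given below; the policy location and scope are illustrative only:

    <?xml version="1.0"?>
    <!-- Minimal sketch of a P3P 1.0 policy reference file, conventionally
         served from /w3c/p3p.xml. The policy URI is illustrative. -->
    <META xmlns="http://www.w3.org/2002/01/P3Pv1">
      <POLICY-REFERENCES>
        <POLICY-REF about="/w3c/privacy-policy.xml#policy">
          <!-- Apply the referenced policy to every URI on the site -->
          <INCLUDE>/*</INCLUDE>
        </POLICY-REF>
      </POLICY-REFERENCES>
    </META>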

Posted in Evidence, Legal, openness, standards, W3C | 15 Comments »

“UK Government Will Impose Compulsory Open Standards”

Posted by Brian Kelly (UK Web Focus) on 20 April 2011

“UK Government Promises To Go Open – Again”

In a post entitled UK Government Promises to Go Open – Yet Again, Glyn Moody provides a rather cynical view based on his experiences of Government promises regarding ICT and openness: “after years of empty promises, the UK government assures us that this time it will really open up, embracing open source and openness in all its forms”. However there is also some optimism in the column:

… there is a ray of hope. For as I reported a month ago, the Cabinet Office has settled on a rather good definition of open standards that includes the key phrase “have intellectual property made irrevocably available on a royalty free basis”, which does create a truly level playing-field that allows open source to compete fairly.”

The column concludes:

“Let’s hope it really marks the beginning of a new era of openness in UK government IT – and that I won’t have to write this article ever again.”

Publication by the Cabinet Office of the “Government ICT Strategy”

I have previously commented on the Government’s attempts at agreeing a definition of open standards in a post entitled UK Government Survey on Open Standards: But What is an ‘Open Standard’? and pointed out some of the difficulties (is RSS an open standard, for example?). But although it may be difficult to reach agreement on such definitions, I welcome the fact that the Government is asking such questions.

This is particularly important in light of the Cabinet Office’s recent publication of the Government ICT Strategy (PDF format). In the introduction, the Rt Hon Francis Maude, Minister for the Cabinet Office, lists the following challenges central government is facing:

  • Departments, agencies and public bodies too rarely reuse and adapt systems which are available ‘off the shelf’ or have already been commissioned by another part of government, leading to wasteful duplication;
  • systems are too rarely interoperable;
  • the infrastructure is insufficiently integrated, leading to inefficiency and separation;

The first bullet point could be interpreted as a signal that the government is looking to procure off-the-shelf proprietary systems. However, the other two points seem to challenge that perception, as it is precisely such monolithic proprietary systems which fail to provide the interoperability and integrated infrastructure that is needed. Instead, in order to address these challenges, the strategy announces that the government intends to:

impose compulsory open standards, starting with interoperability and security;

We know that the government is prepared to take ‘bold’ decisions – but is this perhaps an unusual decision, in being one that those involved in IT development activities within the higher education sector would welcome?

What are the Open Standards Which Will Be Made Compulsory?

It is also pleasing to see that the Government has invited feedback on the open standards which it feels are relevant. A SurveyMonkey form on Open Standards in the Public Sector invites feedback on its proposed set of conditions for an open standard (discussed previously), as well as listing open standards in 23 technical areas for which respondents can specify whether they think the standards should be a PRIORITY STANDARD, MANDATORY (must be used), RECOMMENDED (should be used), OPTIONAL or SHOULD NOT USE.

The 23 areas are Accessibility and usability; Biometric data interchange; Business object documents; Computer workstations; Conferencing systems over Internet Protocol (IP); Content management, syndication and synchronization; Data integration between known parties; Data publishing; e-Commerce, purchasing and logistics; e-Health and social care; e-Learning; e-News; e-Voting; Finance; Geospatial data; Identifiers; Interconnectivity; Service registry/repository; Smart cards; Smart travel documents; Voice over Internet Protocol (VOIP); Web services and Workflow and web services.

Rather than attempting to comment on all of these areas I’ll explore some of the issues with the approaches which are being taken in the survey by addressing just two areas: “Accessibility and usability” and “Computer workstations”.

“Accessibility and Usability”

The first section covers “Accessibility and usability” and addresses Human Computer Interface standards (e.g. ISO/TS 16071:2003);  Web Content standards (WCAG 1.0) and Usabilty (sic) standards (e.g. ISO 13407:1999).

This is an area of particular interest to me, so how should I respond to the survey (which is illustrated)?

The first question, on WCAG 1.0, is easy – this has been superseded by WCAG 2.0 and should no longer be used. So that clearly belongs in the “Should Not Use” category.

Should WCAG 2.0, therefore, be selected as a Priority Standard, a Mandatory Standard, a Recommended Standard, Optional or, perhaps, Should Not Use? These terms have been defined in the survey system:

PRIORITY STANDARD – a standard that you think is important and a priority

MANDATORY – a standard that you judge MUST be used by the UK public sector

RECOMMEND – a standard that you judge should be used by the UK public sector but recognising that there may be exceptions/caveats that mean it is sometimes not appropriate

OPTIONAL – a standard that you judge may be used by the UK public sector

SHOULD NOT USE – a standard that you judge should not be used by the UK public sector

I have previously suggested that public sector organisations in the UK should be using the BS 8878 Code of Practice for Web Accessibility, as this provides a policy framework for developing accessible Web sites and provides flexibility in the selection of accessibility guidelines, such as WCAG 2.0, which may not be applicable in all circumstances. However BS 8878 isn’t included in the list of standards. I think that WCAG 2.0 is important, but not applicable in all cases, so I guess I should select the Priority Standard option. In addition, since it is possible to select multiple responses, I would also choose the Recommend option.

From these first two standards I have already found reasons why the Mandatory response may not be appropriate, and noticed some logical flaws in the design of the survey form – it seems it is possible to select multiple responses, including ones which may be contradictory.

The third ‘standard’ is also confusing as it covers the ‘Central Office of Information Standards and Guidelines‘. However this isn’t a standard but a set of UK Government recommendations and policies. The guidance document contains a section on Delivering inclusive websites, which appears to have been published in 2009 and which requires Government web sites to conform with WCAG 1.0 to level AA. This ‘standard’ is not compatible with the first two areas, so the Should Not Use response should be given – not because the recommendations are necessarily wrong but because it is not a standard. However it is not possible to annotate the responses submitted using the survey system.

“Computer Workstations”

The misleadingly named “Computer workstations” section is of particular interest to me since it covers various Web standards, document standards and standards for office applications. In the list of Web standards the choices are HTML 4.01, HTML 5 or XHTML. Here the choice is between the W3C HTML 4.01 standard, which was ratified in December 1999; the W3C HTML5 working draft, which has not yet been ratified and is still evolving; and a W3C XHTML standard for which no version number is specified, which could lead to confusion between the ratified XHTML 1.0 standard and the moribund (but recently updated) XHTML 2 working draft.
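To see why the lack of a version number matters, compare the document type declarations involved (a minimal sketch of my own – the survey itself gives no examples):

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">

<!DOCTYPE html>

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">

The first declares HTML 4.01 Strict, the second HTML5 and the third XHTML 1.0 Strict; a respondent ticking an unversioned ‘XHTML’ box gives no indication of which member of the XHTML family is intended.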

The list of document types is also interesting. RTF is listed as a standard – although this is a proprietary format which is owned by Microsoft. Similarly the inclusion of PDF from version 4 covers both the proprietary versions owned by Adobe and the ISO standard which is based on PDF 1.7. The ODF and OOXML open standards are listed, although the Microsoft document format is also included, as is the Lotus Notes Web Access format. There is similar confusion over the open standards for spreadsheets: HTML is suggested which, although an open standard, will not provide the interoperability which open standards are meant to deliver. As with the document formats, ODF and OOXML are included but the proprietary MS Excel format is also listed. This pattern is repeated for presentation formats, although this time MS PowerPoint is listed.

Other Areas

The section on “Biometric data interchange” is interesting, although I know nothing of the standards used in this area. But what are the implications of responding to the question on, for example, “ISO/IEC 19794-5 Information Technology – Biometric data interchange formats – Part 5: Face image data”? If this is a Mandatory Standard could this mean that it is used in situations which I feel infringe personal liberties? The initial response might be to suggest that the standard will only be used in appropriate areas – and yet we have seen that defining WCAG as a Mandatory standard has led to it being enforced when its use may be inappropriate. It does seem to me that there is a need to define a policy layer which helps to ensure that Mandatory clauses are not applied in inappropriate areas.

I’ll not comment further here on areas which I know will be of interest to the JISC development community:

Conferencing systems (six standards listed), Content management, syndication and synchronisation (which covers various standards such as XML Schemas, OAI-PMH, RSS, OpenURL and Z39.50), Data integration between known parties (which includes XML, XML Schemas, XSL, UML, RDF and OWL), Data publishing (which covers RDF, SKOS and OWL), Identifiers (which covers DOIs, ISBN, ISSN, XRIs, GUID, URIs, URLs and PURLs), Interconnectivity (which covers various Internet protocols), Service management (which only includes ISO/IEC 20000) or Service registry/repository (which includes UDDI, ebXML, ebRS and edRS), e-Learning (which covers IMS, IEEE LOM and SCORM), Geo-spatial, Web Services and Workflow and web services.

or areas which will be of less direct relevance to our development community:

Business object documents, Smart cards, Smart travel documents, e-Commerce, purchasing and logistics, e-Health and social care, e-News, e-Voting, Finance and VoIP.

Discussion

Despite the rhetoric in the introduction to the Government ICT Strategy document it seems that the survey is simply revisiting work which has been published previously in the e-GIF guidelines. Looking at the Technical Standards Catalogue, for example, there is a section on Specifications for computer workstations which lists the PDF, MS Office and Lotus Notes formats which I mentioned previously.

Looking in more detail at the survey form I find that the form is full of typos. For example (with the typos given in bold):

  • There are many different defintions of the term ‘open standard’. We’d like your feedback on our proposed definition.
  • Usabilty  (there are multiple occurrences of this typo)
  • coding of continous-tone still images (there are multiple occurrences of this typo)
  • Data defintion – Government Data Standards Catalogue (there are multiple occurrences of this typo)
  • Ontology-based inforamtion exchange (e.g. OWL)
  • Persistient identifier (e.g. XRI) (there are multiple occurrences of this typo)
  • Digital Object Indentifier (DOI)    (there are multiple occurrences of this typo)
  • HyperText Tranfer Protocol (HTTP)  (there are multiple occurrences of this typo)
  • Authetication (there are multiple occurrences of this typo)
  • Elecrtical standards (e.g. ISO/IEC 7816-10)
  • Terminal infrastrucure standards (there are multiple occurrences of this typo)

Does this matter if the meaning is obvious? For a conversational email message or blog post perhaps not, but for a formal information-gathering process it is of some concern. This is particularly true when particular standards could be mis-identified due to typographical errors. So although I spotted the errors listed above (initially when reading the document and subsequently by putting the document through a spell-checker) I have no idea whether the following examples contain errors:

  • ISO/IEC 7816-15: 2004/Cor 1: 2004
  • Contact cards – Tactile identifiers BS EN 1332-2 Identification card systems – Man-machine interface Part 2: Dimensions and location of a tactile identifier for ID-1 cards

It should also be noted that the survey form itself contains flaws. As illustrated below, although the form repeatedly invites respondents to “suggest other standards within this category that are not listed. Start a new line for each”, in reality it is not possible to enter more than a single line.

Glyn Moody felt that there was a “ray of hope” in the Government’s apparently enlightened approach to open standards. I fear he is mistaken – sadly I see nothing to indicate that the government has an understanding of the implications of any decisions that may be taken as a result of this flawed information-gathering exercise.

Posted in standards | 6 Comments »

New HTML5 Drafts and Other W3C Developments

Posted by Brian Kelly (UK Web Focus) on 13 April 2011

 

New HTML5 Drafts

The W3C’s HTML Working Group has recently announced the publication of eight documents.

Last Call Working Drafts for RDFa Core 1.1 and XHTML+RDFa 1.1

Back in August 2010 in a post entitled New W3C Document Standards for XHTML and RDFa I described the latest release of the RDFa Core 1.1 and XHTML+RDFa 1.1 draft documents. The RDFa Working Group has now published Last Call Working Drafts of these documents: RDFa Core 1.1 and XHTML+RDFa 1.1.

New Provenance Working Group

The W3C has also recently launched a new Provenance Working Group whose mission is “to support the widespread publication and use of provenance information of Web documents, data, and resources“. The Working Group will publish W3C Recommendations that define a language for exchanging provenance information among applications. This is an area of work which is likely to be of interest to those involved in digital library development work – and it is interesting to see that a workshop on Understanding Provenance and Linked Open Data was held recently at the University of Edinburgh.

Emotion Markup Language

When I first read the Multimodal Interaction (MMI) Working Group‘s announcement of the Last Call Working Draft of Emotion Markup Language (EmotionML) 1.0, I checked to see that it hadn’t been published on 1 April! It seems that “As the web is becoming ubiquitous, interactive, and multimodal, technology needs to deal increasingly with human factors, including emotions“.

The EmotionML language allows various vocabularies to be used, such as:

The six terms proposed by Paul Ekman (Ekman, 1972, p. 251-252) as basic emotions with universal facial expressions — emotions that are recognized and produced in all human cultures: anger; disgust; fear; happiness; sadness and surprise.

The 17 terms found in a study by Cowie et al (Cowie et al., 1999) who investigated emotions that frequently occur in everyday life: affectionate; afraid; amused; angry; bored; confident; content; disappointed; excited; happy; interested; loving; pleased; relaxed; sad; satisfied and worried.

Mehrabian’s proposal of a three-dimensional description of emotion in terms of Pleasure, Arousal, and Dominance.
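To give a flavour of the markup, here is a minimal sketch based on my reading of the Last Call draft (element and attribute names could still change before the specification is finalised):

<emotionml xmlns="http://www.w3.org/2009/10/emotionml"
           category-set="http://www.w3.org/TR/emotion-voc/xml#big6">
  <emotion>
    <!-- annotate some content as expressing happiness, using Ekman's 'big six' vocabulary -->
    <category name="happiness" value="0.8"/>
  </emotion>
</emotionml>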

Posted in HTML, standards, W3C | 1 Comment »

RDFa and WordPress

Posted by Brian Kelly (UK Web Focus) on 5 April 2011

RDFa: A Brief Recap

RDFa (Resource Description Framework in attributes) is a W3C Recommendation that adds a set of attribute-level extensions to XHTML for embedding rich metadata within Web documents.

As described in the Wikipedia entry for RDFa, five “principles of interoperable metadata” are met by RDFa:

  1. Publisher Independence: each site can use its own standards
  2. Data Reuse: data is not duplicated. Separate XML and HTML sections are not required for the same content.
  3. Self Containment: the HTML and the RDF are separated
  4. Schema Modularity: the attributes are reusable
  5. Evolvability: additional fields can be added and XML transforms can extract the semantics of the data from an XHTML file

Additionally RDFa may benefit Web accessibility as more information is available to assistive technology.
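To give a flavour of the approach, the sketch below (using an invented example.org URI) shows how existing XHTML elements can be annotated with Dublin Core properties simply by adding RDFa attributes:

<div xmlns:dc="http://purl.org/dc/elements/1.1/" about="http://example.org/posts/42">
  <h2 property="dc:title">RDFa and WordPress</h2>
  <p>Published on <span property="dc:date" content="2011-04-05">5 April 2011</span>.</p>
</div>

The human-readable page is unchanged, but an RDFa-aware parser can extract the title and date as RDF triples about the post’s URI.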

But how does one go about evaluating the potential of RDFa? Last year I wrote a post on Experiments With RDFa which was based on manual inclusion of RDFa markup in a Web page. Although this highlighted a number of issues, including the validity of pages containing RDFa, it is not a scalable approach for significant deployment of RDFa. What is needed is a content management system which can be used to deploy RDFa on existing content in order to evaluate its potential and understand deployment issues.

The Potential for WordPress

WordPress as a Blog Platform and a CMS

WordPress provides a blog platform which can be used for large-scale management of blogs which are hosted at wordpress.com. In addition the software is available under an open source licence and can be deployed within an institution. There is increasing interest in use of WordPress within the higher education sector, as can be seen from the recent launch of a WORDPRESS JISCMail list (which is aimed primarily at the UK HE sector), with some further examples of interest in use of WordPress being available on the University Web Developers group.

A recent discussion on the WORDPRESS JISCMail list addressed the potential of WordPress as a CMS rather than a blogging platform. Such uses were also outlined recently in a post on the College Web Editor blog which suggested reasons why WordPress can be the right CMS for #highered websites. In light of the growing interest in use of WordPress as a CMS it would seem that this platform could have a role to play in the deployment of new HTML developments such as RDFa.

The wp-RDFa WordPress Plugin

A strength of WordPress is its extensible architecture which allows plugins to be developed by third parties and deployed on local installations of the software. One such development is the wp-RDFa plugin which supports FOAF and Dublin Core metadata. The plugin uses Dublin Core markup to tag posts with the title, creator and date elements. In addition wp-RDFa can be configured to make use of FOAF to “relate your personal information to your blog and to relate other users of your blog to you building up a semantic map of your relationships in the online world“.

Initial Experiments With wp-RDFa

Dublin Core Metadata

UKOLN’s Cultural Heritage blog has been closed recently, with no new posts planned for publication.  The blog will however continue to be hosted and can provide a test bed for experiments such as use of the wp-RDFa plugin.

In an initial experiment we found that although the titles of each blog post were described using Dublin Core metadata, the title was replicated in the blog display. Since this was not acceptable we disabled the use of Dublin Core metadata on that blog and repeated the experiment on a private backup copy of the UK Web Focus blog. This time there were no changes in how the blog posts were displayed.

The underlying HTML code made use of the Dublin Core namespace:

<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/">

with each individual blog post containing the title and publication date provided as RDFa:

<h3 class="storytitle">
<span property="dc:date" content="2010-04-27 08:17:53" resource="http://blogs.ukoln.ac.uk/xxxxx/2010/04/27/workshop-on-engagement-impact-value/" />
<span rel="http://blogs.ukoln.ac.uk/xxxxx/2010/04/27/workshop-on-engagement-impact-value/" property="dc:title" resource="http://blogs.ukoln.ac.uk/xxxxx/2010/04/27/workshop-on-engagement-impact-value/">Workshop on Engagement, Impact, Value</span></h3>

It therefore does appear that the plugin can be deployed on local WordPress installations in order to provide richer semantic markup for existing content. I suspect that the problem with the display in the original experiment may be due to an incompatibility with the theme which is being used (Andreas09). I have reported this problem to the developer of the wp-RDFa plugin.

FOAF (Friend-of-a-Friend)

I had not expected an RDFa plugin to provide support for FOAF, the Friend-of-a-Friend vocabulary. However since my work with FOAF dates back to at least 2004 I had an interest in seeing how it might be used in the context of a blog.

I had expected that information about the blog authors and commenters would be displayed in some way using an RDFa viewer such as the Firefox Operator plugin. However nothing seemed to be displayed using this plugin. In addition use of the RDFa Viewer and the RDFa Developer plugin also failed to detect FOAF markup embedded as RDFa. I subsequently found that the FOAF information was provided as an external file. Use of the FOAF Explorer service provides a display of the FOAF information which has been created by the plugin.
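For those unfamiliar with the vocabulary, a minimal FOAF file of the kind the plugin creates might look something like the following (the names and URLs are illustrative rather than the plugin’s actual output):

<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:foaf="http://xmlns.com/foaf/0.1/">
  <foaf:Person>
    <foaf:name>Brian Kelly</foaf:name>
    <foaf:weblog rdf:resource="http://ukwebfocus.wordpress.com/"/>
    <!-- relationships to other people are expressed with foaf:knows -->
    <foaf:knows>
      <foaf:Person><foaf:name>A. N. Other</foaf:name></foaf:Person>
    </foaf:knows>
  </foaf:Person>
</rdf:RDF>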

What surprised me in the initial display of the FOAF content was the list of names which I did not recognise. It seems that these are authors of and contributors to a variety of other blogs hosted on UKOLN’s WordPress MU (multi-user) server. I wonder whether the plugin was written for a previous version of WordPress, which supported one blog per installation? In any case a decision has been made to provide access to a FOAF resource which contains details of the blog authors only, as illustrated.

Emerging Issues

A post on Microformats and RDFa deployment across the Web recently surveyed take-up of RDFa based on an analysis of 12 billion web pages indexed by Yahoo! Search and shows that we are seeing a growth in the take-up of semantic markup in Web pages. As CMS systems (such as Drupal 7, which supports RDFa ‘out of the box’ – link updated in light of comment) begin to provide RDFa support we might expect to see a sharp growth in Web pages which provide content which can be processed by software as well as being read by humans. For those institutions which host a local WordPress installation it appears that it is now possible to begin exploring use of RDFa. As described in a post by Mark Birkbeck on RDFa and SEO an important role for RDFa will be to provide improvements to searching. But in addition the ability to use wp-RDFa to create FOAF files makes me wonder whether this approach might be useful in describing relationships between contributors to blogs and perhaps provide the hooks to facilitate data-mining of the blogosphere.

It would be a mistake, however, to focus on one single tool for creating RDFa markup. On the WORDPRESS JISCMail list Pat Lockley mentioned that he is also developing an RDFa plugin for WordPress and invited feedback on further developments. Here are some of my thoughts:

  • There is a need for a clear understanding of how the semantic markup will be applied and the use cases it aims to address.
  • There will also be a need to understand how such semantic markup would be used in non-blogging uses of WordPress, where the notions of a blog post, blog author and blog commenters may not apply.
  • There will be a need to ensure that different plugins which create RDFa markup are interoperable i.e. if a plugin is replaced by an alternative, applications which process the RDFa should give consistent results.
  • Consideration should be given to privacy implications of exposing personal data (in particular) in semantic markup.

Is anyone making use of RDFa in WordPress who has experiences to share?  And are there any further suggestions which can be provided for those who are involved in related development work?

Posted in standards | Tagged: , | 9 Comments »

UK Government Survey on Open Standards: But What is an ‘Open Standard’?

Posted by Brian Kelly (UK Web Focus) on 7 March 2011

UK Government’s Open Standards Survey

I was alerted to the UK Government’s Open Standards Survey by Adam Cooper of JISC CETIS, who has already encouraged readers of his blog to participate in the survey. I’ve skimmed through the questions but haven’t yet completed the survey. What struck me, though, was the draft definition of the term “open standard” proposed by the UK Government.

Respondents are invited to give comments on the following five conditions:

  1. Open standards are standards which result from and are maintained through an open, independent process
  2. Open standards are standards which are approved by a recognised specification or standardisation organisation, for example W3C or ISO or equivalent. (N.B. The specification/standardisation must be compliant with Regulation 9 of the Public Contracts Regulations 2006. This regulation makes it clear that technical specifications/standards cannot simply be national standards but must also include/recognise European standards)
  3. Open standards are standards which are thoroughly documented and publicly available at zero or low cost
  4. Open standards are standards which have intellectual property made irrevocably available on a royalty free basis
  5. Open standards are standards which as a whole can be implemented and shared under different development approaches and on a number of platforms

I think the survey was wise to begin by being honest about the difficulties in defining an ‘open standard’ and inviting feedback on its proposed set of conditions. The survey follows on from work which has been carried out by UKOLN, JISC CETIS and JISC OSS Watch with our shared interests in helping the sector to exploit the potential of open standards. I thought it would be useful to revisit our work before I completed the survey.

Previous Work in Describing an ‘Open Standard’

The term “open standard” is somewhat ambiguous and open to different interpretations. In a paper entitled “Openness in Higher Education: Open Source, Open Standards, Open Access” (available in PDF, MS Word and HTML formats) Scott Wilson (CETIS), Randy Metcalfe (at the time at JISC OSS Watch) and myself pointed out that:

There are many complex issues involved when selecting and encouraging use of open standards. Firstly there are disagreements over the definition of open standards. For example Java, Flash and PDF are considered by some to be open standards, although they are, in fact, owned by Sun, Macromedia and Adobe, respectively, who, despite documenting the formats and perhaps having open processes for the evolution of the formats, still have the rights to change the licence conditions governing their use (perhaps due to changes in the business environment, company takeovers, etc.)

It should be added that this paper was written in 2007. Since then PDF has become an ISO standard so we could add the fact that proprietary formats can become standardised to the complexities.

In a UKOLN QA Focus briefing paper we tried to describe characteristics shared by open standards, which had similarities to the approaches proposed in the UK Government survey:

  • An open standards-making process
  • Documentation freely available on the Web
  • Use of the standard is uninhibited by licensing or patenting issues
  • Standard ratified by recognised standards body

It should be noted that we described these as ‘characteristics‘ of an open standard rather than mandatory requirements since we were aware that the second point, for example, would rule out standards produced by many standardisation bodies such as BSI and ISO.

Responding to the Survey

I’d like to share my thoughts prior to completing the survey.

  1. Open standards are standards which result from and are maintained through an open, independent process

I would support this condition. It should be noted that this means that a standard which is owned by a vendor cannot be regarded as an open standard even if the standard is published. This means, for example, that Microsoft’s RTF format is not an open standard and PDF was not an open standard until ownership was transferred to ISO in 2008. It should be noted that I believe that the US definition of ‘open standards’ does not include such a clause (there were disagreements on this blog over the status of PDF before it became an ISO standard).

  2. Open standards are standards which are approved by a recognised specification or standardisation organisation, for example W3C or ISO or equivalent. (N.B. The specification/standardisation must be compliant with Regulation 9 of the Public Contracts Regulations 2006. This regulation makes it clear that technical specifications/standards cannot simply be national standards but must also include/recognise European standards.)

I used to hold this view. However I can recall an email discussion with Paul Miller and Andy Powell when they worked at UKOLN who argued (and convinced me) that this was an over-simplistic binary division of the world of standards. It should be noted that RSS (in any of its flavours) would not satisfy this condition. The question, then, is whether this is a concern? If the definition of an ‘open standard’ will be used to determine whether a standard should be used by the UK Government then there will be a need to avoid being too rigorous in the definition. My view would be to rule out this condition.

  3. Open standards are standards which are thoroughly documented and publicly available at zero or low cost

I would agree on the importance of rigorous documentation for open standards, so that ambiguities and inconsistencies are avoided. This clause is, however, ambiguous itself – what is ‘low cost’ documentation? Nevertheless I would be happy to see this condition included.

  4. Open standards are standards which have intellectual property made irrevocably available on a royalty free basis

This is desirable, but what happens if it is not possible to negotiate royalty-free licences? This is particularly true for video formats. If the government uses this as a mandatory condition for open standards and subsequently requires services to make use of open standards, might this result in a poorer quality environment for the end user? From an ideological position I would like to support this condition but in reality I feel that there needs to be more flexibility – there is a danger that if open standards are mandated this could mean that Government departments would be barred from making use of popular services – such as YouTube and iTunes – which many people find helpful in gaining simple access to information of interest. I am therefore rather uncertain as to whether this should be a required condition for the definition of an open standard. It is worth noting, incidentally, that the W3C have similarly avoided grasping this particular nettle in the HTML5 standardisation work, with no specific video codec being mandated as part of the standard.

  5. Open standards are standards which as a whole can be implemented and shared under different development approaches and on a number of platforms

This has always been a view I have held.

The contentious issue seems to be “Open standards are standards which have intellectual property made irrevocably available on a royalty free basis“. I suspect people will argue strongly that this condition is essential. For me, though, we are revisiting Martin Weller’s “Cato versus Cicero” argument. Should we be taking a hardline stance in order to achieve a desired goal or do we need to make compromises in order to accommodate complexities and the conflicting needs of various stakeholders?

Posted in standards | 4 Comments »

Standards for Web Applications on Mobile Devices: the (Re)birth of SVG?

Posted by Brian Kelly (UK Web Focus) on 1 March 2011

The W3C have recently published a document entitled “Standards for Web Applications on Mobile: February 2011 current state and roadmap“. The document, which describes work carried out by the EU-funded Mobile Web Applications project, begins:

Web technologies have become powerful enough that they are used to build full-featured applications; this has been true for many years in the desktop and laptop computer realm, but is increasingly so on mobile devices as well.

This document summarizes the various technologies developed in W3C that increases the power of Web applications, and how they apply more specifically to the mobile context, as of February 2011.

The document continues with a warning:

This document is the first version of this overview of mobile Web applications technologies, and represents a best-effort of his author; the data in this report have not received wide-review and should be used with caution

The first area described in this document is Graphics and, since the first standard mentioned is SVG, the note of caution needs to be borne in mind. As discussed in a post published in November 2008 on “Why Did SMIL and SVG Fail?”, SVG (together with SMIL) failed to live up to its initial expectations. The post outlined some reasons for this, and in the comments there were suggestions that the standard hadn’t failed as it is now supported in most widely-used browsers, with the notable exception of Internet Explorer. In January 2010 I asked “Will The SVG Standard Come Back to Life?” following the announcement that “Microsoft Joins W3C SVG Working Group“ and an expectation that IE9 would provide support for SVG. This was subsequently confirmed in a post with the unambiguous title “SVG in IE9 Roadmap” published on the IE9 blog.

The signs in the desktop browser environment are looking positive for support for SVG. But it may be the mobile environment in which SVG really takes off, since on the desktop Web we have over 15 years of experience of using HTML and CSS to provide user interfaces. As described in the W3C Roadmap:

SVG, Scalable Vector Graphics, provides an XML-based markup language to describe two-dimensions vectorial graphics. Since these graphics are described as a set of geometric shapes, they can be zoomed at the user request, which makes them well-suited to create graphics on mobile devices where screen space is limited. They can also be easily animated, enabling the creation of very advanced and slick user interfaces.
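As a simple illustration (my own sketch, not taken from the W3C document), the following few lines describe a circle and a text label which remain crisp at any zoom level:

<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 120 60">
  <!-- shapes are described declaratively as geometry, so they scale without pixelation -->
  <circle cx="30" cy="30" r="20" fill="#069"/>
  <text x="58" y="34" font-size="10">Zoom me</text>
</svg>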

But will SVG’s strength in the mobile environment lead to a fragmented Web in which mobile users engage with an SVG environment whilst desktop users continue to access HTML resources? I can recall suggestions that were being made about 10 years ago which pointed out that since SVG is the richer environment it could be used as a generic environment. Might we see that happening? After all, as can be seen (if you’re using a browser which supports SVG) from examples such as the Solitaire game (linked from the Startpagina Web site, which provides access to various examples of SVG uses) it is possible to provide an SVG gaming environment. Might we see Web sites like this being developed?

Perhaps rather than asking “Has SVG failed?” we may soon need to start asking “How should we use SVG?”

Posted in standards, W3C | Tagged: | 1 Comment »

HTML5 Standardisation Last Call – May 2011

Posted by Brian Kelly (UK Web Focus) on 15 February 2011

I recently described the confusion over the standardisation of HTML5, with the WhatWG announcing that they are renaming HTML5 as ‘HTML’ and that it will be a ‘Living Standard’ which will continually evolve as browser vendors agree on new features to implement in the language.

It now seems that the W3C are responding to accusations that they are a slow-moving standardisation body with an announcement that “W3C Confirms May 2011 for HTML5 Last Call, Targets 2014 for HTML5 Standard“. In the press release Jeff Jaffe, W3C CEO, states that:

Even as innovation continues, advancing HTML5 to Recommendation provides the entire Web ecosystem with a stable, tested, interoperable standard

I welcome this announcement as I feel that it helps to address recent uncertainties regarding the governance and roadmap for HTML developments. The onus is now on institutions: there is a clear roadmap for HTML5 development, with a stable standard currently being finalised. As providers of institutional Web services, what are your plans for deployment of HTML5?

Posted in standards, W3C | Tagged: | 1 Comment »

The W3C’s RDF and Other Working Groups

Posted by Brian Kelly (UK Web Focus) on 14 February 2011

The W3C have recently announced the launch of the RDF Working Group.  As described in the RDF Working Group Charter:

The mission of the RDF Working Group, part of the Semantic Web Activity, is to update the 2004 version of the Resource Description Framework (RDF) Recommendation. The scope of work is to extend RDF to include some of the features that the community has identified as both desirable and important for interoperability based on experience with the 2004 version of the standard, but without having a negative effect on existing deployment efforts.

Membership of W3C working groups comprises W3C staff as well as W3C member organisations, which include the JISC. In addition it is also possible to contact working group chairs and W3C team members in order to explore the possibility of participation as an invited expert.

Note that a list of W3C Working Groups, Interest Groups, Incubator Groups and Coordination Groups is provided on the W3C Web site. The Working Groups are typically responsible for the development of new W3C standards (known as ‘recommendations’) or the maintenance of existing recommendations. There are quite a number of working groups, including working groups for well-known W3C areas of work such as HTML, CSS and WAI as well as newer or more specialised groups covering areas including Geolocation, SPARQL, RDF and RDFa.

W3C Interest Groups which may be of interest include Semantic Web, eGovernment and WAI. Similarly Incubator Groups which may be of interest to readers of this blog include the Federated Social Web, Library Linked Data, the Open Web Education Alliance and the WebID groups.

The W3C Process Document provides details of the working practices for Working Groups, Interest Groups and Incubator Groups. If anyone feels they would like to contribute to such groups I suggest you read the Process Document in order to understand the level of commitment which may be expected and, if you feel you can contribute to the work of a group, feel free to contact me.

Posted in standards, W3C | Leave a Comment »

Open Source, Open Standards, Open Access – A Problem For Higher Education?

Posted by Brian Kelly (UK Web Focus) on 11 February 2011

Over on the JISC OSS Watch blog Ross Gardler has highlighted an area of concern from the recently published HEFCE Review of JISC. Ross states that:

… there is one paragraph that I am, quite frankly, appalled to see in this report:

“JISC’s promotion of the open agenda (open access, open resources, open source and open standards) is more controversial. This area alone is addressed by 24 programmes, 119 projects and five services. [7] A number of institutions are enthusiastic about this, but perceive an anti-publisher bias and note the importance of working in partnership with the successful UK publishing industry. Publishers find the JISC stance problematic.”

In his post, which is titled “Is UK education policy being dictated by publishers?“, Ross goes on to summarise the benefits which can be gained from the higher education community through use of and engagement in the development of open source software.

The wording in the JISC review – open agenda (open access, open resources, open source and open standards) – reminded me of a paper written by myself (based at UKOLN), Scott Wilson (of JISC CETIS) and Randy Metcalfe (Ross Gardler’s predecessor as manager of the JISC OSS Watch service) which was entitled “Openness in Higher Education: Open Source, Open Standards, Open Access” and built on previous papers in this area.

Now if the paper had provided a simplistic view of openness I think criticism that the paper was promoting an ideological position would have been justified. But whilst the paper highlighted potential benefits for the higher education community to be gained from use of open source software, open standards and open content, it was honest about the shortcomings. Rather than, to use the words of the review document, the “promotion of an open agenda”, the paper argued that institutions should be looking to gain the benefits for themselves, and not to adopt open source software, open standards or open content per se.

Perhaps such distinctions aren’t being appreciated by the wider community, and openness is being seen as an ideology and used as a stick to beat commercial providers such as publishers. This approach quite clearly isn’t being taken by the co-authors of our paper. Indeed, as can be seen from yesterday’s blog post on the failures of W3C’s PICS standard, the failures of open standards are being identified in order that we can learn from such failures and avoid repeating the mistakes in future.

A few days ago I published a post in which Feedback [was] Invited on Draft Copy of Briefing Paper on Selection and Use of Open Standards – if open standards can prove problematic, advice is needed on approaches for the selection of open standards which will minimise the risks of choosing an open standard which fails to deliver the expected benefits.

But I am sure that there is a need for continued promotion of the sophisticated approaches to the exploitation of openness which the JISC Review seems to be unaware of. A poster summarising the approaches is being prepared for the JISC 2011 conference, which will be displayed on a stand shared by UKOLN, CETIS and JISC OSS Watch. A draft version of the poster is embedded below (and hosted on Scribd). We feel this provides a pragmatic approach which will help to provide benefits across the HE sector and avoids accusations of taking an anti-publisher approach.

Your comments on these approaches are welcomed.

Posted in standards | Tagged: | 5 Comments »

Remember PICS? Learning From Standards Which Fail

Posted by Brian Kelly (UK Web Focus) on 10 February 2011

A Message to the PICS-interest Mailing List

Yesterday I received an email message on the W3C’s PICS-interest group’s mailing list from Eduardo Lima Martinez who asked:

I’m building a website for people over 16 years of age. This not is a porn site, but shows raw images (“curcus pretty girls doing ugly things”) not suitable for kids.

He went on to ask:

What are the correct PICS labels for this site?. I do not read/write correctly the english language. I do not understand the terms of HTTP headers “Protocol: {…}” and “PICS-Label: (…)” Can you guide me? Can you show me a sample site that has the correct PICS labels?

Leaving aside the rather unsavoury nature of the content, I was surprised to receive this email as I was unaware that I was still subscribed to the PICS-interest list. However, looking at the archives for the list, it can be seen that there have been only a handful of postings over the past five years or so, several of which are just conference announcements or spam. As seems to be the case for quite a number of mailing lists, this one has fallen into disuse. But the first legitimate posting to the list since April 2009 and the subsequent responses caused me to reflect on the rise and fall of the W3C PICS standard.

Revisiting PICS

PICS, the Platform for Internet Content Selection, was developed in 1996 in response to the proposed Communications Decency Act (CDA) US legislation. As described on encyclopedia.com:

The first version of this amendment, sponsored by Senator James Exon without hearings and with little discussion among committee members, would have made it illegal to make any indecent material available on computer networks“.

In parallel with arguments that such legislation was unconstitutional, the W3C responded by developing a standard which provided a decentralised way of labelling Web resources. It would then be possible to configure client software to block access to resources which are deemed to be offensive or inappropriate for the end user. This software could be managed by a parent for a home computer or by an appropriate body in a school context. There was also an infrastructure to manage the content labelling schemes which complemented the W3C’s technical developments with, as described in the Wikipedia entry, the RSAC being founded in 1994 to provide labelling of video games and, later, the RSACi providing a similar role for online resources. This organization was closed in 1999 and reformed into the Internet Content Rating Association (ICRA). In 2007 ICRA became part of FOSI (Family Online Safety Institute) – an organisation which, as described in an email message by Dan Brickley, no longer has any activities in this technology area or support for their older work. As Dan pointed out to Eduardo, “there is no direct modern successor to the RSACi/ICRA PICS work to recommend to you“.

What Are The Lessons?

In 1996 we had a standard (actually a number of W3C Recommendations) which provided a decentralised approach for labelling Internet content. As described above there were international organisations involved in the provision and management of labelling schemes, and there were various applications which provided support for the standards including Internet Explorer, with Microsoft providing a tutorial on how to use PICS.
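For those who never encountered one, a PICS label was a parenthesised expression delivered in an HTTP header or a META element. A label using the RSACi rating service might have looked roughly like this (the rating values are invented for illustration):

PICS-Label: (PICS-1.1 "http://www.rsac.org/ratingsv01.html"
  labels on "1996.06.01T08:15-0500"
  for "http://example.com/index.html"
  ratings (n 0 s 0 v 1 l 2))

The single letters are the RSACi categories – nudity, sex, violence and language – each with a numeric level which client software could compare against a locally configured threshold.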

But what went wrong? Why did this standard and accompanying infrastructure fail to be sustainable?  Is there no longer a need to be able to manage access to pornographic, violent and related resources? Do we have a better standards-based solution?

I think it is clear that there is still a need for a solution to the problems which PICS sought to address – and the various filtering solutions which are found in schools do not provide the flexibility of a standards-based approach such as that provided by PICS.

But perhaps the cost of managing PICS labels was too high – after all, metadata is expensive to create and manage. Or perhaps PICS was developed too soon in the W3C’s life, before XML provided a generalised language for developing metadata applications? But would replacing PICS’s use of “{” by XML’s “<” and “>” and the accompanying portfolio of XML standards really have made a significant difference?

Dan Brickley pointed out that PICS is largely obsolete technology and that its core functionality has been rebuilt around RDF:

1. Roughly PICS label schemes are now RDF Schemas (or more powerfully, OWL Ontologies)
2. PICS Label Bureaus are replaced by Web services that speak W3C’s SPARQL language for querying RDF – see http://www.w3.org/TR/rdf-sparql-query/
3. PICS’ ability to make labels for all pages sharing a common URL pattern is addressed by POWDER – see http://www.w3.org/2007/powder/

Hmm, should Eduardo be looking at POWDER – a W3C standard which “has superseded PICS as the recommended method for describing Web sites and building applications that act on such descriptions“?
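Based on my reading of the POWDER documents, a POWDER ‘description resource’ associates a set of IRIs with a set of descriptors; a rough sketch (the label vocabulary in the ex: namespace is invented) might be:

<powder xmlns="http://www.w3.org/2007/05/powder#"
        xmlns:ex="http://example.org/labels#">
  <attribution>
    <issuedby src="http://example.org/company.rdf#me"/>
  </attribution>
  <dr>
    <!-- the descriptors below apply to every resource on this host -->
    <iriset><includehosts>example.com</includehosts></iriset>
    <descriptorset><ex:suitableForChildren>no</ex:suitableForChildren></descriptorset>
  </dr>
</powder>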

But perhaps this is an area in which open standards are not appropriate. As Phil Archer pointed out in the discussion on the PICS-interest list:

there really isn’t any advantage in adding labels, whether in PICS or POWDER, for child protection purposes. All the filters that people actually use work well without using labels at all. It’s an idea that has long had its day. If interested, see [1, 2]” [Note reference 2 is a PDF file]

I guess the organisations involved in developing the PICS standard, the developers of tools which supported PICS and the organisations which labelled their resources will have failed to see a return on their investment in supporting this open standard. Will it be any different with POWDER, I wonder? What is different this time?

Posted in standards | Tagged: | 2 Comments »

Feedback Invited on Draft Copy of Briefing Paper on Selection and Use of Open Standards

Posted by Brian Kelly (UK Web Focus) on 8 February 2011

A draft UKOLN briefing paper on the “Selection and Use of Open Standards” is available for comments before publication. The document is based on previous work led by UKOLN in conjunction with the AHDS in the JISC-funded QA Focus project on the development of a quality assurance framework for JISC-funded development projects. Subsequent work with JISC CETIS, JISC OSS Watch and others was described in papers on “A Standards Framework For Digital Library Programmes“, “A Contextual Framework For Standards” and “Openness in Higher Education: Open Source, Open Standards, Open Access” which were presented at the ichim05, WWW 2006 and elPub 2007 conferences respectively. More recently a position paper which described “An Opportunities and Risks Framework For Standards” was presented to a CETIS event on the Future of Interoperability Standards.

The briefing paper omits much of the background and discussion which was included in these papers and instead seeks to provide a more focussed summary of the contextual approaches and the opportunities and risks framework which have been developed to support development activities, especially when new and emerging standards are being considered.

The draft briefing paper is currently available on Scribd and is embedded below.

I am grateful for the feedback on an earlier draft of this paper which I received from colleagues at JISC CETIS. Comments from the wider community are welcomed.

Posted in standards | 3 Comments »

The HTML5 Standardisation Journey Won’t Be Easy

Posted by Brian Kelly (UK Web Focus) on 3 February 2011

I recently published a post on Further HTML5 Developments in which I described how the W3C were being supportive of approaches to the promotion of HTML5 and the Open Web Platform. However in a post entitled HTML is the new HTML5, published on 19th January 2011 on the WhatWG blog, Ian Hickson, editor of the HTML5 specification (and a graduate of the University of Bath who now works for Google), announced that “The HTML specification will henceforth just be known as ‘HTML’”. As described in the FAQ it is intended that HTML5 will be a “living standard”:

… standards that are continuously updated as they receive feedback, either from Web designers, browser vendors, tool vendors, or indeed any other interested party. It also means that new features get added to them over time, at a rate intended to keep the specifications a little ahead of the implementations but not so far ahead that the implementations give up.

What this means for the HTML5 marketing activities is unclear. But perhaps more worrying is what this will mean for the formal standardisation process which the W3C has been involved in. Since it seems that new HTML(5) features can be implemented by browser and tool vendors, this seems to herald a return to the days of the browser wars, during which Netscape and Microsoft introduced ‘innovative’ features such as the BLINK and MARQUEE tags.

On the W3C’s public-html list Joshue O Connor (a member of the W3C WAI Protocol and Formats Working Group) feels that:

What this move effectively means is that HTML (5) will be implemented in a piecemeal manner, with vendors (browser manufacturers/AT makers etc) cherry picking the parts that they want. … This current move by the WHATWG, will mean that discussions that have been going on about how best to implement accessibility features in HTML 5 could well become redundant, or unfinished or maybe never even implemented at all.

In response Anne van Kesteren of Opera points out that:

Browsers have always implemented standards piecemeal because implementing them completely is simply not doable. I do not think that accepting reality will actually change reality though. That would be kind of weird. We still want to implement the features.

and goes on to add:

Specifications have been in flux forever. The WHATWG HTML standard since 2004. This has not stopped browsers implementing features from it. E.g. Opera shipped Web Forms 2.0 before it was ready and has since made major changes to it. Gecko experimented with storage APIs before they were ready, etc. Specifications do not influence such decisions.

Just over a year ago a CETIS meeting on The Future of Interoperability and Standards in Education explored “the role of informal specification communities in rapidly developing, implementing and testing specifications in an open process before submission to more formal, possibly closed, standards bodies“. But while rapid development, implementation and testing were felt to be valuable, there was a recognition of the continued need for the more formal standardisation process. Perhaps the importance of rapid development which was highlighted at the CETIS event has been demonstrated by the developments centred around HTML5, with the W3C providing snapshots once the implementation and testing of new HTML developments have taken place; but I feel uneasy at the developments. This unease has much to do with the apparent autonomy of browser vendors: I have mentioned comments from employees of Google and Opera who seem to be endorsing this move (how would we feel if it was Microsoft which was challenging the W3C’s standardisation process?). But perhaps we should accept that significant Web developments are no longer being driven by a standards organisation or by grass-roots developments but by the major global players in the market-place? Doesn’t sound good, does it – a twenty-first century return to browser vendors introducing updated versions of the BLINK and MARQUEE elements because they’ll know what users want :-(

Posted in HTML, standards, W3C | Tagged: | 3 Comments »

WAI-ARIA 1.0 Candidate Recommendation – Request for Implementation Experiences and Feedback

Posted by Brian Kelly (UK Web Focus) on 2 February 2011

W3C announced the publication of WAI-ARIA 1.0 as a W3C Candidate Recommendation on 18th January 2011. A Candidate Recommendation (CR) is a major step in the W3C standards development process which signals that there is broad consensus in the Working Group and among public reviewers on the technical content of the proposed recommendation. The primary purpose of the CR stage is to implement and test WAI-ARIA. If you are interested in helping or have additional comments you are invited to follow the content submission instructions.

WAI-ARIA is a technical specification that defines a way to make Web content and Web applications more accessible to people with disabilities. It especially helps with dynamic content and advanced user interface controls developed with AJAX, HTML, JavaScript and related technologies. For an introduction to the WAI-ARIA suite please see the WAI-ARIA Overview or the WAI-ARIA FAQ.
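To give a flavour of what the specification provides, the sketch below (my own illustration, not taken from the specification) shows two common uses: a live region whose script-driven updates are announced by screen readers, and a scripted custom control which exposes an explicit role and state:

<!-- updates injected into this element by AJAX are announced by assistive technology -->
<div role="status" aria-live="polite">3 new messages</div>

<!-- a DIV acting as a slider exposes its role, range and current value -->
<div role="slider" tabindex="0" aria-label="Volume"
     aria-valuemin="0" aria-valuemax="100" aria-valuenow="40"></div>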

It does occur to me that in light of the significant development work we are seeing in areas such as repositories, e-learning systems, e-research, etc. there may be examples of developments which have enhanced the user interface in ways which improve access for users with disabilities. If you have made use of WAI-ARIA 1.0 techniques in the development of your services then, as mentioned on the W3C blog, W3C WAI would welcome such feedback. Please note that the closing date for comments is 25th February 2011.

Posted in Accessibility, standards, W3C | Leave a Comment »

Further HTML5 Developments

Posted by Brian Kelly (UK Web Focus) on 25 January 2011

Updated HTML5 Documents

Back in November 2010 in a post entitled Eight Updated HTML5 Drafts and the ‘Open Web Platform’ I described how the W3C had published draft versions of eight documents related to HTML5. It seems that W3C staff and members of various HTML5 working groups have been busy over Christmas, as the HTML Working Group has published further revised versions of eight documents.

HTML5 Marketing Activities

The significance of the development work on the HTML5 specifications and the importance which the W3C is giving to HTML5 can be seen from the announcement that “W3C Introduces an HTML5 Logo” which describes this “striking visual identity for the open web platform“.

The page about the logo is full of marketing rhetoric:

Imagination, meet implementation. HTML5 is the cornerstone of the W3C’s open web platform; a framework designed to support innovation and foster the full potential the web has to offer. Heralding this revolutionary collection of tools and standards, the HTML5 identity system provides the visual vocabulary to clearly classify and communicate our collective efforts.

The W3C have also pointed out how the logo is being included on t-shirts, which you can buy for $22.50.   The marketing activity continues with encouragement for HTML5 developers to engage in viral marketing:

Tweet your HTML5 logo sightings with the hashtag #html5logo

In addition to Web site owners being able to use this logo on their Web sites and fans of HTML5 being able to wear a T-shirt (“wearware”?), as I learnt from Bruce Lawson’s post “On The HTML5 Logo” users of the Firefox and Opera browsers can install a Greasemonkey script or Opera extension which will display a small HTML5 logo in the top right hand corner of the window for HTML5 pages. I’ve tried this and it works.

Such marketing activities are unpopular in some circles, with much of the criticism centred around the FAQ’s original statement that the logo means “a broad set of open web technologies”, which some believe “muddies the waters” of the open web platform. In light of such concerns the W3C have updated the HTML5 Logo FAQ.

I have to say that personally I applaud this initiative. In the past the commercial sector has taken a lead in popularising Web developments, as we saw in the success of the Web 2.0 meme – it’s good, I feel, that the W3C are taking a high profile in the marketing of HTML5 developments. I also feel that this is indicative of the importance of HTML5 which, judging from the examples of HTML5’s potential which I have described in a number of recent posts, will be of more significance than the moves from HTML 3.2 to HTML 4 and from HTML 4 to XHTML 1.

Spotting HTML5 Pages – Including the Google Home Page

Use of the Opera extension which embeds a small version of the HTML5 icon in the top right hand corner of the browser display is shown (click to see full-size version).

Whilst searching for an HTML5 Web site to use for this example I discovered that the Google search page now uses HTML5, with the following HTML5 declaration included at the top of the page:

<!doctype html>

I had previously thought that Google was very conservative in its use of HTML as, in light of its popularity, the page had to work on a huge range of browsers. Note, though, that on using the W3C’s HTML validator, which includes experimental support for HTML5, I found that there were still HTML errors, many of which were due to unescaped ‘&’ characters. Some time ago it was suggested that the reason Google wasn’t implementing the simple changes needed to ensure that their home page validated was to minimise bandwidth usage – which will be very important for a globally popular site such as Google’s which, despite losing the top slot to Facebook in the US last year, is still pretty popular :-). Hmm, if there are around 90 million Google users per day I wonder how much bandwidth is saved by using & rather than &amp; in its home page and search results?
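To illustrate the point with a made-up URL, a link written as:

<a href="http://www.google.com/search?q=bath&hl=en">results</a>

fails validation because the bare ‘&’ should be escaped, whereas the slightly longer:

<a href="http://www.google.com/search?q=bath&amp;hl=en">results</a>

is valid. Multiplied across every link on every results page served, those few extra characters add up.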

Posted in HTML, standards | Tagged: | Leave a Comment »

Three CSS Publications Including Last call for CSS 2.1

Posted by Brian Kelly (UK Web Focus) on 5 January 2011

The W3C have recently published three CSS publications: a last call for comments on the CSS 2.1 specification and first drafts of Snapshot 2010 and Writing Modes Level 3.

These three documents will be of interest to different groups. The CSS 2.1 document will be of interest to those who wish to see the final documentation of approaches which have been deployed, in order to ensure that widely implemented features are thoroughly and unambiguously documented (“CSS 2.1 corrects a few errors in CSS2 and adds a few highly requested features which have already been widely implemented. But most of all CSS 2.1 represents a “snapshot” of CSS usage: it consists of all CSS features that are implemented interoperably at the date of publication“).

The CSS Snapshot 2010 document is a brief document which collects together into one definition the specs that together form the current state of Cascading Style Sheets (CSS). This will be of interest to those who like to be able to see the big picture and the relationships and dependencies.

In contrast the CSS Writing Modes Module Level 3 specification is likely to be of interest to those with specific interests in bidirectional and vertical text.

Last Call comments are welcome until 7 January 2011.

Posted in standards | Tagged: | 1 Comment »

W3C Standards for Contacts and Calendars

Posted by Brian Kelly (UK Web Focus) on 27 December 2010

I have to admit that I thought that standards for contacts and calendar entries had been established ages ago. However the W3C's Device APIs and Policy Working Group has been set up in order to "create client-side APIs that enable the development of Web Applications and Web Widgets that interact with device services such as Calendar, Contacts, Camera, etc."

A working draft of the Contacts API was published on 9 December 2010. As described in the W3C Newsletter:

This specification defines the concept of a user’s unified address book – where address book data may be sourced from a plurality of sources – both online and locally. This specification then defines the interfaces on which third party applications can access a user’s unified address book, with explicit user permission and filtering. The focus of this data sharing is on making the user aware of the data that they will share and putting them at the center of the data sharing process; free to select both the extent to which they share their address book information and the ability to restrict which pieces of information related to which contact gets shared.
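To give a flavour of the draft, the following sketch shows how an application might query the unified address book. Note that this is a hedged illustration: the Contacts API was a working draft whose interface subsequently changed, so the property names and the find() signature should be treated as indicative rather than definitive.

navigator.contacts.find(
    ['displayName', 'emails'],            // fields the application requests
    function (contacts) {                 // success callback: runs only once the
        for (var i = 0; i < contacts.length; i++) {  // user has granted permission
            console.log(contacts[i].displayName);
        }
    },
    function (error) {                    // error callback, e.g. permission refused
        console.log('Contacts lookup failed: ' + error.code);
    }
);

The key design point is visible even in this sketch: the application never gets the raw address book, only the fields the user has agreed to share.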

Other work in the area includes further draft specifications such as the Calendar API and the Application Launcher API.

Note that the URIs for the latest versions of a number of these draft specifications seem misleading. For example the URI for the Calendar API is stated as being http://www.w3.org/TR/calendar-api/ though this link is currently broken, with the resource actually hosted on the W3C's development server at http://dev.w3.org/2009/dap/calendar/. Similarly the URI for the Application Launcher API is stated as being http://www.w3.org/TR/app-launcher/ though this link is currently broken, with the resource actually hosted on the W3C's development server at http://dev.w3.org/2009/dap/app-launcher/. This may be because these are editor's drafts and the URIs for the published versions are place-holders – but for me this is an error, and one that is surprising for the W3C, which places great emphasis on the importance of functioning URIs.

Posted in standards | 3 Comments »

Skype Just Works (Pain, I Know!)

Posted by Brian Kelly (UK Web Focus) on 26 December 2010

“Snowed In – Can’t Make It To London”

I recently ran a workshop in London on "Institutional and Social Web Services: Evidence for Their Value". Although the event went well, the day before I was somewhat apprehensive as Ranjit Sidhu, one of the speakers, was snowed in in Scotland and thought it unlikely that he would be able to travel.

"Not a problem", I said to Ranjit. "As long as you can create a video recording of your talk we'll be able to play that locally. And if you have network access we'll try some form of communication technology in order that you can participate remotely."

Having sounded so confident in our email discussions I was slightly apprehensive on the morning of the workshop, especially when I discovered that the PC we would be using didn't have Skype or AV capabilities. I was prepared to use a streaming video application on my mobile phone and even to explore whether a POTS solution could be used – yes, if there was a telephone in the seminar room maybe we could use the Plain Old Telephone System.

In the event I had no need to be concerned. Skype was installed on the local PC and a Webcam and microphone worked well – in a room containing over 20 people the microphone could pick up questions provided people spoke clearly.

Not Just Telephony: Application Sharing and Ubiquity Too!

I had envisaged using Skype to allow Ranjit to respond to questions after his talk.  In fact we used the application-sharing feature of Skype to share the slides used by other speakers at the event.  So Ranjit, the remote participant snowed in in Scotland, benefited from being able to listen to the speakers and view their slides as they were being presented.   The only time this didn’t work was when one of the speakers used their iPad to give a presentation – if we do this again we’ll need to have contingency plans for when other devices are being used.

For me, Skype's ease-of-use, ubiquity and rich functionality (it's more than just a phone system) make it part of the infrastructure which one might reasonably expect to be able to use. I have personally used Skype clients on desktop PCs, laptops and netbooks as well as on the Apple Mac, Android phone and iPod Touch, so it seems to have escaped from the MS Windows-only barrier which has hindered take-up of other potentially useful collaborative tools.

But Skype’s Proprietary!

"But Skype's proprietary", the argument went back in 2007, "we should be using an open standards solution". But those arguments seem to have gone quiet. There appear to be occasions when the simplicity of a proprietary solution wins over enough users to make the deployment of standards-based solutions difficult. Recognising when this will happen will be the difficult thing, though, as Nick Skelton pointed out in a post which asked "Why did JANET Talk fail?"

Posted in standards | Tagged: | 1 Comment »

W3C’s Online Course on “Introduction to SVG”

Posted by Brian Kelly (UK Web Focus) on 24 December 2010

How do you get training in new (and not so new) standards? A good choice would seem to be from the organisation responsible for developing the standard. The following online course on SVG (Scalable Vector Graphics) may therefore be of interest to developers and others with an interest in this standard.

The W3C is running an online course on Introduction to SVG. Professor David Dailey of Slippery Rock University, Pennsylvania, will lead the course. The course will last for six weeks and starts in January 2011. During the first four weeks participants learn how to create SVG documents, to use basic elements to create effective graphics quickly and easily, add border effects, linear and radial gradients, re-use components, and rescale, rotate and translate images.

During the (optional) final two weeks of the course participants learn how to: add animation, use scripting to transform and manipulate images, and create interactive graphics. The last two weeks will most benefit those with some background in scripting. The only pre-requisite for the course is to have some familiarity with HTML/XML and the ability to edit source code directly.
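As a taster of the sort of material covered in the first four weeks, the following minimal SVG document draws a rounded rectangle filled with a linear gradient and some centred text. This is my own illustrative example, not taken from the course:

<svg xmlns="http://www.w3.org/2000/svg" width="200" height="100">
  <defs>
    <!-- a simple left-to-right linear gradient -->
    <linearGradient id="fade" x1="0" y1="0" x2="1" y2="0">
      <stop offset="0%" stop-color="#036"/>
      <stop offset="100%" stop-color="#6cf"/>
    </linearGradient>
  </defs>
  <rect x="10" y="10" width="180" height="80" rx="8"
        fill="url(#fade)" stroke="black"/>
  <text x="100" y="58" text-anchor="middle" fill="white">SVG</text>
</svg>

Because the graphic is described in markup rather than pixels, it can be rescaled, restyled or scripted – which is precisely what the later weeks of the course build on.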

The rate for the course is €165. Full details of the course (audience, content, timing, weekly commitment) are available in the Introduction to SVG: Course Description.

I should add that back in November 2008 I asked the question Why Did SMIL and SVG Fail? but then in January 2010 asked Will The SVG Standard Come Back to Life? SVG initially became a W3C recommendation in 2001, with SVG 1.1 following in 2003, but failed to live up to initial expectations. I feel that we often try to promote open standards too soon and early adopters can get their fingers burnt. However there does seem to be renewed interest in SVG, especially in a mobile context, so perhaps now, rather than then, is the time to invest in training. After all, as described in an article on "Microsoft joins IE SVG standards party" published in The Register: "Commentors responding to Dengler's post overwhelmingly welcomed Microsoft's move, with people hoping it'll lead to SVG support in IE 9".

Posted in standards | Tagged: | Leave a Comment »

Interoperability Through Web 2.0

Posted by Brian Kelly (UK Web Focus) on 13 December 2010

I recently commented on Martin Hamilton's blog post on "Crowdsourcing Experiment – Institutional Web 2.0 Guidelines". In addition to the open approach Martin has taken to the development of institutional guidelines on use of Web 2.0 services, the other thing that occurred to me was how the interoperability of embedding interactive multimedia objects was achieved.

Interoperability is described in Wikipedia as "a property referring to the ability of diverse systems and organizations to work together". But how is Martin's blog post interoperable? The post contains several examples of slideshows created by others. In addition to the slides, which are hosted on Slideshare, the post also contains embedded video clips together with an embedded interactive timeline.

How is such interoperability achieved? We often talk about "interoperability through open standards" but in this case that's not really what is going on. The slides were probably created in Microsoft PowerPoint and are thus in either a proprietary format or the (open though contentious) OOXML format. But the slides might also have been created using Open Office or made available using PDF. In any case it's not the document format which has allowed the slides to be embedded elsewhere; rather it is the other standards which enable embedding that are important (e.g. HTML elements such as IFRAME, OBJECT and EMBED).
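The embed snippets which such services hand out are typically just a few lines of HTML. For example, something along these lines (the URL, identifier and dimensions are invented for illustration, not a real Slideshare embed code):

<iframe src="http://www.slideshare.net/slideshow/embed_code/12345"
        width="425" height="355" frameborder="0"></iframe>

The hosting service does the work of rendering the slides; the embedding page only needs to be able to include an IFRAME.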

It's also worth noting that applications are needed which implement such interoperability. In Martin's post he has embedded objects which are hosted in the Slideshare, YouTube and Dipity applications. The ability to be embedded ('embeddability'?) in other environments may also be dependent on the policies of such services. You can normally embed such objects in Web pages, but not necessarily in environments such as WordPress.com (which restricts the objects which can be embedded to a number of well-known services such as SlideShare and YouTube). I would be interested to know if popular CMS services have similar limitations on embedding content from Web 2.0 services.

If the objects which Martin used in his blog post had simply been embedded in their host Web environment, perhaps as an HTML resource, they would not have been easily reused within Martin's blog. Interoperability is not a simple function of the use of open standards; there are other issues, such as market acceptance, which need to be considered. And an open format embedded on a Web page could, ironically, be non-interoperable, whereas a proprietary format hosted in a Web 2.0 environment could be widely reused elsewhere.

Or to put it another way, shouldn’t we nowadays regard the provision of an HTML page on its own as a way of providing access to multiple devices but restricting use of the resource in other environments? Web 1.0 = publishing but Web 2.0 = reuse.

I'd like to conclude this post by embedding a slideshow of a talk on "So that's it for IT services, or is it?" which I found a few days ago linked to from a timetable for a HEWIT event held earlier this year. The slideshow hosted on Slideshare is clearly much more useful than the PowerPoint file linked to from the HEWIT timetable – and as the HEWIT timetable has the URL http://www.gregynog.ac.uk/HEWIT/ I can't help but think that the resource could well be overwritten by next year's timetable, with the Slideshare resource possibly providing access to the resource for a longer period than the Gregynog Web site.

Posted in standards, Web2.0 | Leave a Comment »

“HTML5: If You Bang Your Head Against The Keyboard You’ll Create a Valid Document!”

Posted by Brian Kelly (UK Web Focus) on 10 December 2010

“HTML5 / CSS3 / JS  – a world of new possibilities”

I recently attended the 18th Bathcamp event entitled “Faster, cheaper, better!“.  For me the highlight of the evening was a talk by Elliott Kember (@elliottkember)  on “HTML5 / CSS3 / JS  – a world of new possibilities“.

The Elliottkember.com Web site describes Elliott as:

freelance web developer based in Bath, England
who builds and maintains high-traffic, powerful web apps,
resorts to using 32pt Georgia – sometimes in italic and printer’s primaries,
has 4978 followers on Twitter, speaks at conferences,
and wants to develop your idea into an application.

Elliott gave a fascinating run through some of the new presentational aspects of HTML5 and CSS3, appropriately using an HTML5 document to give the presentation. His slides are available at http://riothtml5slides.heroku.com/ and are well worth viewing. Note that to progress through the slides you should use the forward and back arrows – and note that Elliott was experimenting with some of the innovative aspects of HTML5 and CSS3, so the presentation might not work in all browsers.

In this post I’ll not comment on the HTML5 features which Elliott described. Rather than looking at the additional features I’ll consider the implications of the ways in which the HTML5 specification is being simplified.

HTML5's Simplicity

Elliott introduced the changes in HTML5 by pointing out its simplicity. For example an HTML 4 document required the following Doctype definition:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
"http://www.w3.org/TR/html4/strict.dtd">

whereas HTML5 simply requires:

<!doctype html>

The following illustrates a valid HTML5 document:

<!DOCTYPE html>
<title>Small HTML 5</title>

<p>Hello world</p>

As can be seen there is no requirement to include the <head> and <body> elements which are needed in order for an HTML 4 document to be valid (although HTML 4 documents which omit these mandatory elements will still be rendered correctly by Web browsers).

What about IE?

Over the years developments to HTML standards have always given rise to the question "What about legacy browsers?". Often the answer has been "The benefits of the new standard will be self-evident and provide sufficient motivation for organisations to deploy more modern browsers". Whether the benefits of the developments from, say, HTML 3.2 to HTML 4 and HTML 4 to XHTML 1 have provided sufficient motivation for organisations to invest time and effort in upgrading their browsers is, however, questionable – I know I have been to institutions which still provide very dated versions of browsers on their public PCs. And whether the HTML technology previews which tend to be demonstrated when a new version of HTML is released will be typical of mainstream uses may also be questioned. So there is still a question about the deployment of services based on HTML5 in an environment of flawed browsers, which includes Internet Explorer; it should also be noted that other browsers may have limited support for new HTML5 (and CSS 3) features.

Elliott suggests that a solution to the "What about IE?" question may be provided by an HTML5 'shim'. A shim (which is also sometimes referred to as a 'shiv') is described in Wikipedia as "a small library which transparently intercepts an API, changes the parameters passed, handles the operation itself, or redirects the operation elsewhere".

Remy Sharp has developed what he calls the HTML5 shiv, which consists of the following three lines:

<!--[if lt IE 9]>
<script src="http://html5shiv.googlecode.com/svn/trunk/html5.js"></script>
<![endif]-->

This code provides a mechanism for IE to recognise new elements such as <slide>, which Elliott uses in his presentation.
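The mechanism behind the shiv is surprisingly small: merely creating an unknown element once via the DOM is enough to make IE's parser treat subsequent occurrences of that element as real, stylable elements. In essence the script boils down to calls along these lines:

<script>
  // Creating the element once teaches IE's parser about it,
  // so <slide> elements can then be targeted with CSS.
  document.createElement('slide');
</script>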

Use it Now?

Should you start using HTML5 now?  Back in July in his plenary talk on “HTML5 (and friends): The future of web technologies – today” given at the IWMW 2010 event Patrick Lauke suggested that for new Web development work it would be appropriate to consider using HTML5.

Elliott was in agreement, with his slides  making the point that:

All decent browsers support enough of this stuff to make it worth using.

What this means is that you can start to make use of the simple HTML5 declaration. But rather than using every HTML5 feature documented in the specification, you should check the level of support for individual features using, for example, the Periodic Table of HTML5 Elements, the HTML5 Test Web site and Wikipedia's Comparison of layout engines (HTML5), as well as running standard usability checks on an appropriate range of browsers and platforms.
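Alongside such reference tables, a simple hand-rolled feature test is often sufficient. A minimal sketch, using <canvas> as the example feature:

<script>
  // Only use the canvas drawing API if the browser exposes it;
  // otherwise fall back to, say, a static image.
  if (document.createElement('canvas').getContext) {
      // safe to draw with canvas
  } else {
      // provide a non-canvas fallback
  }
</script>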

What About Validity of HTML5?

Following Elliott's talk there was a question about the validity of HTML5 documents. Elliott responded with a very graphic depiction of the much more liberal (if one dare use that word!) approach to validity: "If you bang your head against the keyboard you'll probably create a valid HTML5 document!"

Such an approach is based on observing how few Web resources actually conform to existing HTML specifications. In many cases browser rendering is being used as an acceptable test for conformity – if a Web page is displayed and is usable in popular Web browsers then it is good enough, seems to be the situation today. "After all", asked Elliott, "how many people validate their Web pages today?" The small number of hands which were raised (including mine and Cameron Neylon's) perhaps supported this view, and when the follow-up question "Who bothers about using closing tags on <br> elements in XHTML documents these days?" was asked I think mine was the only hand raised.

The evidence clearly demonstrates that strict HTML validity, which was formally required in XHTML, has been rejected in the Web environment. In future, it would seem, there won't be a need to bother about escaping &s or closing empty tags, although Web authors who wish to continue with such practices can do so.
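The difference is easy to illustrate. XHTML 1 required the first of the following forms; an HTML5 validator accepts both (the URL is invented for the example):

<!-- XHTML 1 required empty tags to be closed and ampersands escaped -->
<br />
<a href="page?a=1&amp;b=2">link</a>

<!-- HTML5 also accepts the unclosed and unescaped forms -->
<br>
<a href="page?a=1&b=2">link</a>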

What About Complex Documents?

Such simplicity seemed to be welcomed by many who attended the Bathcamp meeting. But Cameron Neylon, an open science researcher based at the Science and Technology Facilities Council, and I still had some concerns. What will the implications be if an HTML resource is being used not just for display and user interaction, but as a container for structured information? How will automated tools process embedded information provided as RDFa or microdata if the look-and-feel and usability of a resource is the main mechanism for validating its internal consistency?

And what if an HTML5 document is used as a container for other structured elements, such as mathematical formulae provided using MathML, chemical formulae provided using CML, etc.?
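HTML5 does allow such content to be embedded directly. A minimal sketch of a MathML formula inline in an HTML5 page (my own example, chosen only to show the principle):

<!DOCTYPE html>
<title>Inline MathML</title>
<p>The area of a circle is
  <math xmlns="http://www.w3.org/1998/Math/MathML">
    <mi>π</mi><msup><mi>r</mi><mn>2</mn></msup>
  </math>.
</p>

A tool extracting the formula has to rely on the markup being well-structured, not merely on the page looking right in a browser – which is exactly where lax validation habits could bite.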

There are dangers that endorsing the current lax approaches to HTML validity will hinder the development of more sophisticated uses of HTML, especially in the research community. We are currently seeing researchers arguing that the main document format for scientific and research papers should move away from PDF to a more open and reusable format, and HTML5 has been suggested as a possible solution. But will this require more rigorous use of the HTML5 specification? And if the marketplace chooses to deploy tools which fail to implement such approaches, will this act as a barrier to the deployment of HTML5 as a rich and interoperable format for the community?

Posted in HTML, standards | 4 Comments »

Moves Away From XML to JSON?

Posted by Brian Kelly (UK Web Focus) on 26 November 2010

Although in the past I have described standards developed by the W3C which have failed to set the marketplace alight, I have always regarded XML as a successful example of a W3C standard. Part of its initial success was its simplicity – I recall hearing the story of when XML 1.0 was first published, with a copy of the spec being thrown into the audience to much laughter. The reason for the audience's response? The 10 page (?) spec fluttered gently towards the audience, but the SGML specification, for which XML provided a lightweight and Web-friendly alternative, would have crushed people sitting in the first few rows! I don't know whether this story is actually true but it provided a vivid way of communicating the simplicity of the standard which, it was felt, would be important in ensuring that it gained momentum and widespread adoption.

But where are we now, 12 years after the XML 1.0 specification was published? Has XML been successful in providing a universal markup language for use in not only a variety of document formats but also in protocols?

The answer to this question is, I feel, no longer as clear as it used to be. In a post on the Digital Bazaar blog entitled Web Services: JSON vs XML, Manu Sporny, Digital Bazaar's Founder and CEO, makes the case for the inherent simplicity of JSON, arguing that:

XML is more complex than necessary for Web Services. By default, XML requires you to use complex features that many Web Services do not need to be successful.

The context to discussions in the blogosphere over XML vs JSON is the news that Twitter and Foursquare have recently removed XML support from their Web APIs and now support only JSON. James Clark, in a post on XML vs the Web, appears somewhat ambivalent about this debate ("my reaction to JSON is a combination of 'Yay' and 'Sigh'") but goes on to list many advantages of JSON over XML in a Web context:

… for important use cases JSON is dramatically better than XML. In particular, JSON shines as a programming language-independent representation of typical programming language data structures.  This is an incredibly important use case and it would be hard to overstate how appallingly bad XML is for this.

The post concludes:

So what’s the way forward? I think the Web community has spoken, and it’s clear that what it wants is HTML5, JavaScript and JSON. XML isn’t going away but I see it being less and less a Web technology; it won’t be something that you send over the wire on the public Web, but just one of many technologies that are used on the server to manage and generate what you do send over the wire.
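Clark's point about programming language data structures is easily illustrated with an invented record. A value which a programmer thinks of as a map containing a string and a list might be serialised in XML as:

<person>
  <name>Alice</name>
  <languages>
    <language>English</language>
    <language>Welsh</language>
  </languages>
</person>

whereas the JSON form maps directly onto the native data structures of most programming languages:

{ "name": "Alice", "languages": ["English", "Welsh"] }

There is no question, in the JSON version, of whether the data should live in attributes or child elements, or how repeated items should be wrapped – the decisions which make XML awkward for this particular use case.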

The debate continues on both of these blogs. But rather than engaging in the finer points of the merits of these two approaches I feel it is important to be aware of decisions which have already been taken. As Manu Sporny has pointed out:

Twitter and Foursquare had already spent the development effort to build out their XML Web Services, people weren’t using them, so they decided to remove them.

Meanwhile in a post on Deprecating XML Norman Walsh responds with the comment "Meh" – though he more helpfully expands on this reaction by concluding:

I'll continue to model the full and rich complexity of data that crosses my path with XML, and bring a broad arsenal of powerful tools to bear when I need to process it, easily and efficiently extracting value from all of its richness. I'll send JSON to the browser when it's convenient and I'll map the output of JSON web APIs into XML when it's convenient.

Is this a pragmatic approach which would be shared by developers in the JISC community, I wonder? Indeed on Twitter Tony Hirst has just asked "Could a move to json make Linked Data more palatable to developers?" and encouraged the #jiscri and #devcsi communities to read a draft document on "JSON-LD – Linked Data Expression in JSON".
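To give a flavour of the idea (a hedged sketch: the keyword set was still in flux in the draft Tony points to, so the exact @-terms shown here follow the form the specification later settled on and may differ from the 2010 text):

{
  "@context": { "name": "http://xmlns.com/foaf/0.1/name" },
  "@id": "http://example.org/people/alice",
  "name": "Alice"
}

An ordinary-looking JSON object gains Linked Data semantics simply by mapping its keys to URIs – which is precisely the sort of gentle on-ramp that might make Linked Data more palatable to developers.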

Posted in jiscobs, standards, W3C | 9 Comments »

HTML and RDFa Analysis of Welsh University Home Pages

Posted by Brian Kelly (UK Web Focus) on 17 November 2010

Surveying Communities

A year ago I published a survey of RSS Feeds For Welsh University Web Sites which reported on the auto-discoverable RSS feeds available on the home pages of 12 Welsh Universities. This survey was carried out over a small community in order to identify patterns and best practices for the provision of RSS feeds which could inform discussions across the wider community.
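For reference, a feed is auto-discoverable when the home page advertises it with a <link> element in the document head, along the following lines (the title and URL are illustrative):

<link rel="alternate" type="application/rss+xml"
      title="University news" href="http://www.example.ac.uk/news/feed.rss" />

Browsers and survey tools alike can then find the feed without any human having to hunt for it.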

Trends in Use of HTML and RDFa

As described in a previous analysis of the usage of RSS feeds on Scottish University home pages, such surveys can help us to understand the extent to which emerging standards and best practices are being deployed within the sector and, if usage is low, to understand the reasons and explore ways in which barriers can be addressed.

With the growing interest in HTML5 and RDFa it will be useful to explore whether such formats are being used on institutional home pages.

An initial small-scale survey across Welsh University home pages has been carried out in order to provide some initial findings which can be used to inform discussions and further work in this area.

The Findings

The findings, based on a survey carried out on 21 October 2010, are given in the following table. Note that the HTML analysis was carried out using the W3C HTML validator. The RDFa analysis was carried out using Google's Rich Snippets testing tool, since the search benefits which RDFa is felt to provide are likely to be exploited initially to enhance the visibility of structured information to Google.

Institution – HTML analysis – RDFa analysis

 1. Aberystwyth University – XHTML 1.0 Transitional – none found
 2. Bangor University – XHTML 1.0 Transitional (with errors) – none found
 3. Cardiff University – XHTML 1.0 Strict (with errors) – none found
 4. Glamorgan University – HTML5 (with errors) – none found
 5. Glyndŵr University – XHTML 1.0 Transitional (with errors) – none found
 6. Royal Welsh College of Music & Drama – XHTML 1.0 Strict (with errors) – none found
 7. Swansea University – XHTML 1.0 Transitional – none found
 8. Swansea Metropolitan University – XHTML 1.0 Transitional (with errors) – none found
 9. Trinity University College – XHTML 1.0 Strict (with errors) – none found
10. University of Wales Institute, Cardiff – XHTML 1.0 Strict (with errors) – none found
11. University of Wales, Newport – HTML 4.01 Transitional (with errors) – none found

Discussion

Only one of the eleven Welsh institutions is currently making use of HTML5 on the institutional home page and none of them are using RDFa which can be detected by Google’s Rich Snippets testing tool.

The lack of use of RDFa, together with previous analyses of use of auto-detectable RSS feeds, would appear to indicate that University home pages are currently failing to provide machine-processable data which could be used to raise the visibility of institutional Web sites on search engines such as Google.

It is unclear whether this is due to a lack of awareness of the potential benefits which RDFa could provide; an awareness that potential benefits may not be realised because search engines such as Google do not currently process RDFa from arbitrary Web sites; the difficulties in embedding RDFa due to limitations of existing CMSs; policy decisions relating to changes to such high-profile pages; the provision of structured information in other ways; or other reasons.

It would be useful to receive feedback from those involved in managing their institution's home page – and also from anyone who is using RDFa (or related approaches) and feels that they are gaining benefits.

Posted in Evidence, HTML, jiscobs, standards | 3 Comments »

Experiences Migrating From XHTML 1 to HTML5

Posted by Brian Kelly (UK Web Focus) on 10 November 2010

IWMW 2010 Web Site as a Testbed

In the past we have tried to use the IWMW Web site as a test bed for various emerging HTML technologies. On the IWMW 2010 Web site this year we evaluated the OpenLike service, which "provides a user interface to easily give your users a simple way to choose which services they provide their like/dislike data", as well as evaluating the use of RDFa.

We also have an interest in approaches to migration from one set of HTML technologies to another. The IWMW 2010 Web site has therefore provided an opportunity to evaluate deployment of HTML5 and to identify possible problem areas with backwards compatibility.

Migration of Main Set of Pages

We migrated the top-level pages of the Web site from the XHTML 1 Strict Doctype to HTML5, and validation of the home page, programme, list of speakers, plenary talks and workshop sessions shows that it was possible to maintain the HTML validity of these pages.

A small number of changes had to be made in order to ensure that pages which were valid using an XHTML Doctype were also valid using HTML5. In particular we had to change the <form> element for the site search and replace all occurrences of <acronym> with <abbr>. We also changed occurrences of <a name="foo"> to <a id="foo">, since the name attribute is now obsolete. The changes were of the following form (see the example after this paragraph).
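By way of illustration (the fragment identifier is invented for the example):

<!-- XHTML 1 version -->
<acronym title="HyperText Markup Language">HTML</acronym>
<a name="programme"></a>

<!-- HTML5 version -->
<abbr title="HyperText Markup Language">HTML</abbr>
<a id="programme"></a>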

The W3C's HTML validator also spotted some problems with links which hadn't been detected previously when we ran a link-checking tool. In particular we spotted a couple of occurrences of the form <a href="http://www.foo.bar "> with a trailing space being included rather than a trailing slash. This produced the error message:

Line 175, Column 51: Bad value http://www.foo.bar for attribute href on element a: DOUBLE_WHITESPACE in PATH.
Syntax of IRI reference:
Any URL. For example: /hello, #canvas, or http://example.org/. Characters should be represented in NFC and spaces should be escaped as %20.

This seems to be an example of an instance in which HTML5 is more restrictive than XHTML 1 or HTML 4.

Although many pages could be easily converted to HTML5, on a number of pages there were HTML validity problems which had been encountered with the XHTML 1 Transitional Doctype and which persisted using HTML5. These were pages which included embedded HTML fragments provided by third-party Web services such as Vimeo and Slideshare. The Key Resources page illustrates the problem, for which the following error is given:

An object element must have a data attribute or a type attribute.

related to the embedding of a Slideshare widget.

Pages With Embedded RDFa

The Web pages for each of the individual plenary talks and workshop sessions contained embedded RDFa metadata about the speakers/workshop facilitators together with abstracts of the sessions themselves. As described in a post on Experiments With RDFa, and shown in the output from Google's Rich Snippets Testing tool, RDFa can be used to provide structured information such as, in this case, people, organisational and event information for an IWMW 2010 plenary talk.
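A sketch of the kind of markup involved, using the data-vocabulary.org Person vocabulary which the Rich Snippets tool understood at the time (the name and properties here are invented for illustration, not copied from the IWMW pages):

<div xmlns:v="http://rdf.data-vocabulary.org/#" typeof="v:Person">
  <span property="v:name">Jane Doe</span>,
  <span property="v:title">Head of Web Services</span> at
  <span property="v:affiliation">Example University</span>.
</div>

The attributes turn ordinary visible text into machine-readable statements, which is what allows Google to display richer search results for the page.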

However since many of the pages about plenary talks and workshop sessions contain embedded third party widgets including, for the plenary talks, widgets for videos of the talks and for the accompanying slides, these pages mostly fail to validate since the widget code provided by the services often fails to validate.

A page on "Parallel Session A5: Usability and User Experience on a Shoestring" does, however, validate using the XHTML1+RDFa Doctype, since this page does not include any embedded objects from such third-party services. However attempting to validate this page using the HTML5 Doctype produces 38 error messages.

Discussion

Our experiences in looking to migrate a Web site from XHTML 1 to HTML5 show that in many cases such a move can be achieved relatively easily. However pages which contain RDFa metadata may cause validation problems which might require changes to the underlying data storage.

The W3C released a working draft of a document on "HTML+RDFa 1.1: Support for RDFa in HTML4 and HTML5" in June 2010. However it is not yet clear whether the W3C's HTML validator has been updated to support the proposals contained in the draft document. It is also unclear how embedding RDFa in HTML5 resources relates to the "HTML Microdata" working draft proposal which was also released in June 2010 (with an editor's draft version dated 20 October 2010 also available on the W3C Web site).

I'd welcome comments from those who are working in this area. In particular, will the user interface benefits provided by HTML5 mean that it should be regarded as a key deployment environment for new services, or is there a need to wait for consensus to emerge on the ways in which metadata can best be embedded in such resources, in order to avoid maintenance problems downstream?

Posted in HTML, standards | 1 Comment »