UK Web Focus (Brian Kelly)

Innovation and best practices for the Web


Archive for the ‘standards’ Category

Microsoft Adopts First International Cloud Privacy Standard

Posted by Brian Kelly on 18 Feb 2015

Announcement

On Monday 16 February 2015 Microsoft announced that they had adopted the first international Cloud privacy standard.

The standard in question is ISO/IEC 27018, the code of practice for protection of personally identifiable information (PII) in public clouds acting as PII processors.

Discussion

A ZDNet article entitled “Microsoft adopts international cloud privacy standard”, published yesterday, provided Microsoft’s summary of this development:

… under the standard, enterprise customers will have control of their data; will be informed of what’s happening with their data, including whether there are any returns, transfers, or deletion of their personal information; and will be protected with “strong security” by ensuring that any people processing personally identifiable information will be subject to a confidentiality obligation.

At the same time, Microsoft has ensured that it will not use any data for advertising purposes, and that it will inform its customers if their data is accessed by the government.

Other news announcements included:

The latter article highlights one limitation of the standard: “Microsoft added the new standard forces them to inform users about government access to data, unless the disclosure is prohibited by law“. This seems to suggest that if the UK Government requests data held by Microsoft in their Cloud service, conformance with the standard will require Microsoft to publicise such disclosure; however, this would not be the case in the US, where such disclosure is seemingly prohibited by law.

Andrew Cormack, in a post on Janet’s Regulatory Developments blog, pointed out that Microsoft’s adoption of the ISO/IEC 27018 standard covers “their Azure, Office365 and Intune cloud services“. This should be a pleasing development for institutions which are making use of Microsoft’s Cloud services. But where does this leave Google, Amazon and other major Cloud services?



Posted in Legal, standards | Tagged: | 1 Comment »

OpenSocial and the OpenSocial Foundation: Moves to W3C

Posted by Brian Kelly on 17 Dec 2014

OpenSocial logo

Yesterday, 16 December 2014, the OpenSocial Foundation and the W3C announced that the “OpenSocial Foundation [is] Moving Standards Work to [the] W3C Social Web Activity“.

In the press release John Mertic, President of the OpenSocial Foundation, described how:

The consensus of the OpenSocial Board is that the next phase of Social Web Standards, built in large part on the success of OpenSocial standards and projects like Apache Shindig and Rave, should occur under the auspices of the W3C Social Web Working Group, of which OpenSocial is a founding member. The OpenSocial community has taken the idea of industry standards to govern the Social Web from dream to reality. By shifting our work now to the W3C Social Web Working Group, we will make the Open Social Web inevitable and ubiquitous.

OpenSocial has already developed a number of mature specifications which are now being managed by the W3C Social Web Working Group, including Activity Streams 2.0 and the OpenSocial 2.5.1 Activity Streams and Embedded Experiences APIs.

The OpenSocial Foundation and W3C invite participation in the following groups:

  • The Social Web Working Group, which is defining technical standards and APIs to facilitate access to social functionality.
  • The Social Interest Group, which is coordinating development of social use cases, and formulating a broad strategy to enable social business and federation.

The Social Web Working Group (SocialWG) wiki provides a summary of the proposed areas of work. I found the (current) single use case of particular interest: SWAT0, the Social Web Acid Test – Level 0, provides an integration use case for the federated social web:

  1. With his phone, Dave takes a photo of Tantek and uploads it using a service
  2. Dave tags the photo with Tantek
  3. Tantek gets a notification on another service that he’s been tagged in a photo
  4. Evan, who is subscribed to Dave, sees the photo on yet another service
  5. Evan comments on the photo
  6. Dave and Tantek receive notifications that Evan has commented on the photo

Such functionality will be familiar to Facebook users, but in this case the users don’t need to have accounts on the same service.
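To make the SWAT0 flow more concrete, here is a minimal sketch of what step 5 (Evan’s comment) might look like as an Activity Streams 2.0 activity, one of the specifications mentioned above. The property names follow the Activity Streams 2.0 draft, but the service URLs and the overall shape of the example are my own illustrative assumptions rather than anything taken from the SWAT0 documentation:

```typescript
// A sketch of SWAT0 step 5 (Evan comments on Dave's photo) expressed
// as an Activity Streams 2.0 activity. The URLs are purely illustrative:
// each points at a different service, reflecting the federated scenario.
const commentActivity = {
  "@context": "https://www.w3.org/ns/activitystreams",
  "type": "Create",
  "actor": {
    "type": "Person",
    "name": "Evan",
    "id": "https://evans-service.example/users/evan"
  },
  "object": {
    "type": "Note",
    "content": "Great photo!",
    // The photo lives on Dave's service, not Evan's
    "inReplyTo": "https://daves-service.example/photos/tantek-123"
  },
  "to": ["https://daves-service.example/users/dave"]
};

console.log(JSON.stringify(commentActivity, null, 2));
```

The point of the acid test is that an activity like this must be deliverable across service boundaries, so that Dave and Tantek receive their notifications on services other than the one where the comment was made.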

It will be interesting to see how this standardisation work develops and, in particular, the extent to which we will see take-up of the standards by existing providers of social media services and the development of new services which may provide competition to existing providers.



Posted in Social Web, standards | 1 Comment »

Standards for Web Applications on Mobile: Update on W3C Developments

Posted by Brian Kelly on 15 Sep 2014

Standards for Web Applications on Mobile: Current State and Roadmap

Back in July 2014 W3C published an overview report on Standards for Web Applications on Mobile which summarised the various technologies developed in W3C which increase the capabilities of Web applications and how they apply to use on mobile devices.

The document describes a variety of features which enhance the use of mobile devices to access Web products, grouped into the following categories: graphics; multimedia; device adaptation; forms; user interactions; data storage; personal information management; sensors and hardware integration; network; communication and discovery; packaging; payment; performance and optimization; and privacy and security.

For each of these categories the report provides a table which, for each of the detailed features relevant to the category, summarises the relevant standard (specification) and the W3C working group responsible for it. An indication is given of the standard’s maturity and stability (draft standards may be liable to significant changes in light of experience gained during testing), along with information on its deployment in existing mainstream browsers and links to developer resources and test suites.

An example of the table for graphics, covering 2D vector graphics, is shown below.

Extract from chart on W3C mobile standards
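For developers, tables such as this complement the runtime feature detection which is typically used to decide whether a given capability can be relied on in a user’s browser. The following is a minimal sketch of such checks for a few of the categories listed above; the idioms shown are common practice rather than anything mandated by the W3C report:

```typescript
// A minimal sketch of runtime feature detection for a few of the
// categories covered by the W3C report. Intended to run in a browser.
function detectFeatures(): Record<string, boolean> {
  return {
    // Graphics: can the browser create SVG elements?
    svg: typeof document.createElementNS === "function" &&
         !!document.createElementNS("http://www.w3.org/2000/svg", "svg"),
    // Sensors and hardware integration: is the Geolocation API available?
    geolocation: "geolocation" in navigator,
    // Data storage: Web Storage and IndexedDB
    localStorage: "localStorage" in window,
    indexedDB: "indexedDB" in window,
    // Network, communication and discovery: WebSocket
    webSocket: "WebSocket" in window
  };
}

console.table(detectFeatures());
```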

Discussion

The report, which summarises the current status of, and roadmap for, standards that aim to ensure that mobile devices are an integral part of the “open web platform”, provides, I feel, a welcome and mature approach to the complexities and obstacles which have hindered the deployment of open standards in a Web environment in the past.

In the early days of the web there was a belief that open standards simply needed to be proven through at least two interoperable open source implementations – once that was achieved the benefits of open standards, such as platform independence, would inevitably lead to acceptance in the marketplace. That, at least, was the expectation for the W3C’s SMIL standard, which was felt to provide an open killer alternative to the proprietary Flash format. Of course, despite the availability of a number of SMIL readers, the format failed to take off. Flash wasn’t killed by an open standard, I would argue, but by Apple’s decision not to support it on the iOS platform. And the eventual alternative to Flash wasn’t SMIL but a variety of W3C standards which are covered by the term “open web platform“.

I made this point in a post published in November 2008 which asked Why Did SMIL and SVG Fail? The post generated much discussion, primarily about the level of support for SVG. In August 2013 Isaac Shapira made the point that “I guess in retrospect this article is very wrong. SVG is in prominent use, and has active development and support today -> in 2013“.

As can be seen from the above image this comment is correct: SVG 1.1 is now widely supported and SVG 2.0 is under development. SMIL, on the other hand, to paraphrase John Cleese, “is dead. It’s passed on! This standard is no more! It has ceased to be! It’s expired and gone to meet ‘is maker!“ SVG, in contrast, was merely resting!

Implications for the Sector

In retrospect institutional conservatism regarding the adoption of innovative open standards is understandable. Institutions may have legitimate reasons to be reluctant to upgrade the desktop environment due to the resource implications, the need for testing, etc. (There will probably also be less justifiable reasons for avoiding browser updates, such as reliance on systems which make use of proprietary features of specific browsers – typically Microsoft’s Internet Explorer; however let us hope that this concern is no longer relevant!)

However W3C now appear to appreciate the need to be transparent about the take-up of their standards by mainstream browsers. This is to be applauded. The risk now, it would seem, involves the development or procurement of systems for use in a mobile context which are based on platform-specific apps.

I hope that everyone involved in the development or procurement of mobile applications, in managing staff with such responsibilities or with strategic planning for the institution’s IT environment will read the W3C’s report on Standards for Web Applications on Mobile and use the report to inform their planning. My concern would be with the senior manager, perhaps in the marketing department, who comes across information such as the recent (April 2014) infographic on “The rise of mobile technology in higher education” and makes a decision to invest in an institutional mobile app based on this evidence. Another interesting challenge will be faced by institutions which have already purchased a mobile app service, before the mobile web environment had approached its current level of maturity. Will this be the twenty-first century equivalent of the institutional Gopher service or Campus Wide Information Service? And is now the time to move to an infrastructure based on the open web platform?

Infographic on student use of mobiles

Posted in Mobile, standards | Tagged: , | Leave a Comment »

UK Government Mandates Open Document Format! A Brave or Foolhardy Decision?

Posted by Brian Kelly on 28 Jul 2014

UK Government Policy Announcement on Office Standards

UK Government policy on ODF

Image from Computerworld UK (http://www.computerworlduk.com/)

Back in October 2012, in a post entitled Good News From the UK Government: Launch of the Open Standards Principles, I described how the UK government had published a series of documents which outlined the government’s plans for use of open standards across government departments.

Last week the government made its first significant policy decision about one standards area, as described in a Computer Weekly article: UK government adopts ODF as standard document format.

Further Details

On 22 July 2014, in a blog post entitled Making things open, making things better posted on the UK Government Digital Service blog, Mike Bracken announced the UK Government’s policy on open standards for document formats. As described in a document entitled Viewing government documents, the open standards mandated for viewing Government documents are:

  • HTML5 (either the HTML or XML formulation) must be used for all new services that produce documents for viewing online through a browser
  • PDF/A must be used for static versions of documents produced for download and archiving that are not intended for editing.

Where editable information is required the approach must be as set out in the sharing and collaborating on government documents standards profile which mandates ODF 1.2 as the standards which must be used.

A document on Open formats for documents: what publishers to GOV.UK need to know summarises the policies:

Documents for ‘viewing’ must be available in one or both of the following formats:

  • HTML5
  • PDF/A

For documents designed for sharing or collaborating:

A separate set of standards applies to documents that users will want to edit. This type of document must be published in Open Document Format (ODF). The most common examples of this are:

  • .odt (OpenDocument Text) for word-processing (text) documents
  • .ods (OpenDocument Spreadsheet) for spreadsheets
  • .odp (OpenDocument Presentation) for presentations

Once open publishing standards are adopted in full by your organisation, no documents should be published in proprietary formats.


Discussion

Initial Skirmishes

It seems the initial battles regarding the government’s approaches to open standards took place in 2012. I commented on the initial skirmishes in May 2012 in a post on Oh What A Lovely War! and followed this in October 2012 in a post “Standards are voluntarily adopted and success is determined by the market” which described the approaches being taken by Open Stand: “five leading global organizations jointly signed an agreement to affirm and adhere to a set of Principles in support of The Modern Paradigm for Standards; an open and collectively empowering model that will help radically improve the way people around the world develop new technologies and innovate for humanity“.

The signatories affirmed that:

We embrace a modern paradigm for standards where the economics of global markets, fueled by technological advancements, drive global deployment of standards regardless of their formal status.

In this paradigm, standards support interoperability, foster global competition, are developed through an open participatory process, and are voluntarily adopted globally. These voluntary standards serve as building blocks for products and services targeted at meeting the needs of the market and consumer, thereby driving innovation. Innovation in turn contributes to the creation of new markets and the growth and expansion of existing markets.

It should be noted that the five leading global organizations which were supporting “the economics of global markets” were not IT companies such as Microsoft, Apple and Google but the IEEE, IAB, IETF, Internet Society and W3C.

The Government Rejects Microsoft’s Lobbying!

The Computer Weekly article entitled UK government adopts ODF as standard document format had a sub-heading “Cabinet Office resists extensive lobbying by Microsoft to adopt open standards“.

I must admit that following the Open Stand announcement I had expected those formulating national policies in the western world to take a similar approach in supporting “the economics of global markets“. The UK Government’s decision to reject Microsoft’s call for inclusion of OOXML (Office Open XML), which is an ISO standard, therefore appears surprising. But perhaps this provides an unusual opportunity to praise the government!

Challenges to be Faced

In making a bold decision it should be expected that there will be challenges in implementing it. In this case these may include:

Implications for use of OOXML: The document on Open formats for documents: what publishers to GOV.UK need to know states that “Once open publishing standards are adopted in full by your organisation, no documents should be published in proprietary formats“. But a government department which makes extensive use of Microsoft tools could legitimately point out that it uses OOXML, an ISO standard. There will be a need to clarify that the policy decision is concerned with specific open standards rather than open versus proprietary standards. It may be more appropriate to say that the government is mandating particular open standards for which open source tools which provide rich support are readily available.

Financial implications: The policy decision will be seen as a move from Microsoft Office to Open Office software which will bring significant financial savings due to the licence costs of Microsoft software. But what of the costs in migrating to new office tools and new workflow processes? It might be argued that there is a need to make a change at some point and there is no point in continuing to defer such a change (indeed, it can be argued that the Labour government should have made this decision when there was more money available). But since the Government seems to be prioritising financial issues in policy decisions there will be a need for the costs of this change to be monitored.

Implications for users: The policy decision will also affect end users, many of whom will be more familiar with Microsoft Office than with open source alternatives. But what of the implications for users who will need to learn new tools and adapt existing working practices?

Exporting to ODF: It should be noted that the decision appears to relate to document formats when these are to be shared with others. Will we see existing Microsoft Office tools continue to be used, but with files exported in ODF format? (A possible workflow is sketched below, after these challenges.)

Scope of the policy: This policy would appear to apply to central government services. I would be interested to hear if its scope will go beyond this and apply to local government and, of particular interest to me, the higher and further education sectors and associated educational funding bodies and agencies. Will, for example, documents submitted by educational institutions to government departments, funding agencies, etc. be expected to be in ODF format?

Use of Cloud services: We are seeing moves to Cloud services for office applications including but not limited to Google Docs and Office 365. It seems that documents hosted on Google Drive can be exported to ODF format, although I am unclear as to whether similar functionality is available for Office 365 [Note as described in a comment, Office 365 does allow ODT documents to be created].  However if government bodies have chosen to migrate their office environment to the Cloud it will be interesting to see how this will be affected by the policy on file formats. It should be noted that there does not appear to be a mature Cloud environment which is tightly coupled with Open Office.
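As mentioned under “Exporting to ODF” above, one possible workflow would be for existing authoring tools to continue to be used, with documents converted to ODF before publication. The sketch below drives LibreOffice in headless mode from Node.js to do such a conversion; it assumes LibreOffice is installed with the soffice command on the PATH, and the file names are, of course, illustrative:

```typescript
// A possible "export to ODF" workflow: convert documents produced in
// proprietary formats to ODF before publication, by invoking LibreOffice
// in headless mode. Assumes LibreOffice is installed and `soffice` is
// on the PATH.
import { execFile } from "child_process";

function convertToOdt(inputPath: string, outDir: string): void {
  execFile(
    "soffice",
    ["--headless", "--convert-to", "odt", "--outdir", outDir, inputPath],
    (error, stdout) => {
      if (error) {
        console.error(`Conversion failed: ${error.message}`);
        return;
      }
      console.log(stdout.trim());
    }
  );
}

// Hypothetical usage: convert a Word document for publication on GOV.UK
convertToOdt("report.docx", "./published");
```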

Your Thoughts

I suspect that many readers of this blog would feel that the UK coalition government does not have a good track record of making evidence-based policy decisions which have been widely acknowledged to have provided benefits to those living in the UK.

But might this be a decision which should be welcomed? Are the challenges I’ve listed (and there will be others I haven’t described) simply issues which will be addressed, whilst the benefits of the decision will quickly become apparent?

I’d welcome your thoughts. Feel free to leave a comment or respond to the poll.



Posted in standards | Tagged: | 3 Comments »

The City and The City: Reflections on the Cetis 2014 Conference

Posted by Brian Kelly on 30 Jun 2014

The City and The City

The City and the City book cover

The City and the City is a novel by China Miéville. As described in Wikipedia the novel “takes place in the cities of Besźel and Ul Qoma. These two cities actually occupy much of the same geographical space, but via the volition of their citizens (and the threat of the secret power known as Breach), they are perceived as two different cities. A denizen of one city must dutifully ‘unsee’ (that is, consciously erase from their mind or fade into the background) the denizens, buildings, and events taking place in the other city – even if they are an inch away.”

I read the novel earlier this year. When I saw it in a bookshop over the weekend I thought of the parallels with the Cetis 2014 conference: two plenary talks which occupied the same space but which described the ‘unseeing’ of a shared history.

Cetis 2014: Building the Digital Institution

“lack of knowledge about the history of education and the history of education technology matter”

Phil Richards' keynote talk at Cetis 2014

The Cetis 2014 conference, which had the theme Building the Digital Institution: Technological Innovation in Universities and Colleges, took place at the University of Bolton on 17-18 June. As described by Mark Johnson in his blog post about the event the conference “attracted 100 delegates from the UK HE and FE sectors eager to talk about the impact of interoperability, cloud computing, e-books, systems integration and learning analytics“. Mark went on to add that “the conversation has been more eager, imaginative and focused than in previous years. This was helped by the two keynotes“.

Mark was right to draw attention to the two keynotes which opened and closed the conference. After the conference had been opened by Paul Hollins’ scene-setting presentation, Phil Richards, Chief Innovation Officer at Jisc, gave the opening plenary talk in which he described “Innovating for the Digital Institution“. The following day Audrey Watters closed the conference with her talk on Un-Fathom-able: The Hidden History of Ed-Tech.

These talks generated much discussion on the Twitter backchannel, during the conference and afterwards. I welcomed both talks for helping to stimulate such discussions, but for me, although the two speakers occupied the same physical space (the lecture theatre at the University of Bolton) and the same virtual space (the ed-tech development environment), they seemed to describe two very different cities.

Audrey Watters’ talk on The Hidden History of Ed-Tech provided examples of how the history of technological developments is written by the victors, which can depict a misleading picture of the past. As Audrey described in a blog post about her talk:

[this] lack of knowledge about the history of education and the history of education technology matters. 

It matters because it supports a prevailing narrative about innovation — where innovation comes from (according to this narrative, it comes from private industry, that is, not from public institutions; from Silicon Valley, that is, not from elsewhere in the world) and when it comes (there’s this fiercely myopic fixation on the future).

I agree that such things matter. Indeed a year ago I had responsibility for the preservation of UKOLN’s digital resources, work which aimed at ensuring that a record of our work in helping the development of the digital environment across the UK’s higher and further education sector was not lost. And since Audrey suggested that there was a need for multiple recollections of the history of ed-tech developments to be published, in order that historians in the future will be better placed to document the history, I will provide my thoughts, with links to supporting evidence, on Phil Richards’ plenary talk.

Innovating for the Digital Institution

Phil Richards Cetis talk: outline

Phil Richards’ talk on “Innovating for the Digital Institution” was very useful in summarising Jisc’s plans for innovation in their new environment. Phil explained how the changes were based on the recommendations of the Wilson review. The Wilson Review (PDF format) described how “There is a common view that it has played a pivotal role in the UK as an enabler of innovation and early and widespread adoption of ICT …. There is no comparable body within the UK, and internationally its reputation is outstanding as a strategic leader and partner” and went on to add that “JISC is unique in the UK, providing what many stakeholders have described as a “holistic approach” to the sectors’ needs, from research and innovation, to core services, resources, advice and training“. However the review went on to comment that there had been “some criticism of the breadth and complexity of JISC’s activity, and of its structure, processes and governance arrangements“.

Phil’s slides are available on Slideshare and, as shown in the accompanying images, provided the reasons why Jisc needs to innovate, reflected on the Wilson review and outlined approaches to innovation in the future.

As can be seen from the video recording of the plenary talk, it seems that Jisc needs to innovate in order to survive as an organisation, since the move to commodity IT means that Jisc will face competitors in the educational technology environment.

Jisc Moves Away from Open Standards

Phil Richards Cetis talk: standards

In the moves towards reducing the range of activities which Jisc works on, Phil highlighted a move away from working with standards, and cited the NHS as an example of a sector in which large sums of money had been invested in the development of interoperable systems based on open standards which had failed to deliver.

In the future Jisc will seek to focus on “innovative, successful learning technology without standards” and cited Sugata Mitra’s ‘hole in the wall’ work as an example of successful self-organised learning which we should seek to emulate.

This criticism of standards-based development work was very radical in a Jisc environment in which, for development programmes such as eLib and the DNER/IE, a strong emphasis had always been placed on the importance of open standards.

I should mention that back in 1996 I was a contributor to the eLib standards guidelines and in February 2001 contributed to the Working with the Distributed National Electronic Resource (DNER): Standards and Guidelines to Build a National Resource document (PDF format). In September 1997, in a talk on Standards in a Digital World: Z39.50, HTML, Java: Do They Really Work?, I gave an uncritical summary of the importance of open standards in development programmes. However in June 2005, in a talk on JISC Standards: A Presentation To The JISC, I highlighted the potential limitations of open standards.

But using a few slides presented to a small audience is, I feel, not an appropriate way to seek to change policies. At the time Jisc made use of posters which contained the slogan: “Interoperability through Open Standards“. Marketing people have a tendency to reduce complexities to such simple statements. There was a need to help develop a better understanding of the limitations of such views.

Along with colleagues working at UKOLN, CETIS, TechDis, AHDS and OSS Watch we published a number of peer-reviewed papers including “Ideology Or Pragmatism? Open Standards And Cultural Heritage Web Sites” (2003), “A Standards Framework For Digital Library Programmes” (2005), “A Contextual Framework For Standards” (2006), “Addressing The Limitations Of Open Standards” (2007) and “Openness in Higher Education: Open Source, Open Standards, Open Access” (2007). The first paper explained how:

The importance of open standards for providing access to digital resources is widely acknowledged. Bodies such as the W3C are developing the open standards needed to provide universal access to digital cultural heritage resources. However, despite the widespread acceptance of the importance of open standards, in practice many organisations fail to implement open standards in their provision of access to digital resources. It clearly becomes difficult to mandate use of open standards if it is well-known that compliance is seldom enforced. Rather than abandoning open standards or imposing a stricter regime for ensuring compliance, this paper argues that there is a need to adopt a culture which is supportive of use of open standards but provides flexibility to cater for the difficulties in achieving this.

This paper was based on the work of the Jisc-funded QA Focus project which ran from 2002-2004. As described in the final report, the project was funded by Jisc to advise on the conformance regime which should accompany standards documents for Jisc development programmes. The project recommended that rather than mandating conformance with open standards “JISC should mandate that funded projects address QA issues at the start of the project in order to consider potential problems and the most effective method of avoiding them. JISC should also remind projects of the need to implement QA within their workflow, allowing time at each stage to reconsider previous decisions and revise them if necessary“.

More recently in September 2010 Cetis organised a meeting on the Future of Interoperability Standards. An Ariadne report on the meeting provided the context for the meeting:

In his opening address, JISC CETIS Director Adam Cooper emphasised that the impetus behind this meeting was a sense of growing dissatisfaction amongst many involved in standards development and implementation within education. Where the original intentions of more-or-less formal bodies such as the Institute for Electrical and Electronics Engineers Learning Technology Standards Committee (IEEE LTSC), the IMS Global Learning Consortium (IMS GLC) and the International Organisation for Standardisation (ISO) were laudable, there has been an increasing feeling that the resource put into supporting these standards has not always borne the hoped-for fruit.

A report on the meeting highlighted the issues which had been raised in the position papers presented at the meeting, which included barriers to participation, development and adoption and the importance of supporting an open culture and community engagement in technology development and standardisation:

There is broad agreement that community engagement and openness are key factors in the development of LET standards (Hoel, 2010). Niche software developers, many coming with an open source attitude, have been especially strong advocates for open standards, arguing that their use will enable innovation to flourish. An increasing level of interest and engagement of people from open source communities will naturally drive the standards process to become more “open”. 

The importance of engaging with developers to help validate open standards and provide encouragement in the development of applications and services based on open standards has, in the past, been addressed by Cetis in its ‘code bashes’ (see Engaging Developers in Standards Development; the Cetis Code Bash Approach) and the DevCSI work which was led by UKOLN.

Phil Richards Cetis talk: Standards conclusions

To conclude, it would appear that Jisc have recognised the arguments which Cetis and UKOLN, along with several other organisations, have been making since 2003: we can’t have an uncritical belief in open standards.

Jisc may well still have to conform with the UK Government’s Open Standards Principles (which is available in PDF, MS Word and ODT formats) which states that:

The publication of the Open Standards Principles is a fundamental step towards achieving a level playing field for open source and proprietary software and breaking our IT into smaller, more manageable components

But the emphasis on the value of lightweight standards reflects the advice which the former Innovation Support Centres have provided to Jisc in the past.

What seems to be missing from the new Jisc vision, however, is the community involvement in the open development of further open standards. Perhaps there is an assumption that no new standards are expected to be developed? This would be a mistake, I feel. My Cetis colleagues Phil Barker and Lorna Campbell ran a workshop session at the Cetis 2014 conference in which they asked LRMI: What on Earth Could Justify Another Attempt at Educational Metadata? As Phil described in a report on the workshop session “We really love metadata, but [had] reached a point where making ever-more elegantly complex iterations on the same idea kind of lost its appeal. So what is it that makes LRMI so different, so appealing?” Phil went on to conclude that “the general feeling I had from the session was that most of the people involved thought that LRMI was a sane approach: useful, realistic and manageable“.

It would be unfortunate if Jisc and the wider community were to miss out on the benefits which emerging new standards such as LRMI can provide for the education sector. Fortunately Cetis will be continuing to work in this area.

The Jisc Forest

Phil Richards Cetis talk: Co-design work for 2013-14

In addition to describing the Jisc moves away from open standards Phil went on to explain Jisc’s core areas of work. As recommended in the Wilson Review Jisc are now focussing on a small number of areas in which they hope to make significant impact.

The areas of work are agreed with the Jisc co-design partners: RLUK, RUGIT, SCONUL and UCISA. In 2013/14 these areas were Access and identity management; National monograph strategy; Summer of student innovation; Digital student; Open mirror; Spotlight on the digital and Extending Knowledge Base +.

Following on from this work five additional new areas of work have been prioritised, with four areas being mentioned in Phil’s presentation: (1) research at risk; (2) effective learning analytics; (3) from prospect to alumnus and (4) building capability for new digital leadership, pedagogy and efficiency.

Phil used a forest metaphor to describe this new approach: in the eLib days in the mid to late 1990s it was explained how Jisc were encouraging a thousand flowers to bloom in order to help build capacity across the sector and help ensure that there was a broad understanding of the value of the networked environment across the sector. However in light of funding constraints there will be less experimentation and less risk-taking; rather, key areas of particular relevance to the co-design partners will be identified which will form the focus of development work in the future.

Tweet about Phil Richards' talk

As can be seen from the Storify archive of tweets posted during the talk, this metaphor caused a certain amount of confusion. During the questions I asked a question based on this metaphor. To paraphrase what I said then: “If Jisc are now building a forest containing five types of tree, who will develop the flowers, the shrubs and the hedges? And what would happen if, in three years’ time, when institutions can choose whether or not to buy in to Jisc’s offering, they feel that the flowers, the shrubs and the hedges provide better value for money?”

Towards Orciny – the Rumoured Third City

Audrey Watters' keynote talk at Cetis 2014

In The City and The City it is rumoured that a third city, Orciny, exists in the interstices between one city and another, unseen by occupants of both, with a hidden history. Is there an ed-tech city to be found beyond the forested Jiscdom?

I personally do not feel that the Jisc vision as described by Phil Richards will provide an environment in which those involved in ed-tech will feel at home. For me the future needs to be based on listening and engagement. As Mark Johnson put it, “we should hope that the critical debate about those technologies, their implementation and development serves to give us permission to ask the questions about education that urgently need to be asked“. Those who wish to be involved in the discussion and in facilitating the discussion must not hide behind statements such as “people above my pay grade make the key decisions“.

This vision of the future is not based on a proclamation that “We are the UK’s expert on digital technologies for education and research” but on facilitation and support: the experts, I feel, are embedded across the sector and don’t work for a single organisation.

But I think it is also inevitable that the ed-tech future will be more fragmented. In the past the broad Jisc family could provide a leadership role across a wide range of areas. But the refocussing of work means the resulting void is likely to be filled by a range of service providers, advisory bodies and consultants. I feel that Cetis will have an important role to play in that space. I hope that this will involve continuing to work with institutions, other bodies across the sector and with Jisc itself – but without buying in to the Jisc vision of the future!

As I said earlier I enjoyed the two keynote talks at the Cetis 2014 conference which did succeed in stimulating discussion and debate. If you didn’t attend the conference, video recordings of the plenary talks and the accompanying slides are embedded below and are also available from YouTube and Slideshare. I’d welcome your thoughts on these contrasting talks.

Phil Richards’ plenary talk on Innovating for the Digital Institution

Video recording (on YouTube):

Slides for Phil Richards’ plenary talk (on Slideshare)

Audrey Watters’ plenary talk on Un-Fathom-able: The Hidden History of Ed-Tech

Video recording (on YouTube)

Slides for Audrey Watters’ plenary talk (on Slideshare)



Posted in Events, standards | Tagged: , | Leave a Comment »

Ensuring Discoverability of OA Articles in Hybrid Journals

Posted by Brian Kelly on 4 May 2014

My talk at NASIG 2014

Consultancy Work

When I was offered the job as Innovation Advocate at Cetis I decided, with the agreement of the director, to work part-time so that I would have some flexibility for consultancy work.

I have just completed the first significant consultancy work, which was to give a presentation on “Hybrid journals: Ensuring systematic and standard discoverability of the latest Open Access articles” on behalf of the JEMO project at the NASIG 2014 conference.

The NASIG 2014 Conference

NASIG is an “[American] organization that promotes communication and sharing of ideas among all members of the serials information chain“. NASIG 2014, the 29th annual conference, which had the theme “Taking Stock and Taming New Frontiers“, took place in Fort Worth, Texas on 1-4 May 2014 and attracted about 360 delegates.

I gave my talk on Friday 2 May from 1.10 to 2.10pm. In this post I will give a brief summary of the talk and of the preceding talk, which also addressed the issue of the discoverability (and management) of open access articles.

The Challenges of Finding Open Access Articles in Hybrid Journals

Articulating the Problem

Chris Bulock and Nathan Hosburgh gave a talk on “OA in the library collection: The challenges of identifying and managing open access resources” in a session which preceded my talk. Their slides are available on Slideshare and I have embedded them in this blog post. Their talk was based on a survey which sought to investigate current practices in the management of open access resources, identify the challenges librarians face and highlight areas for improvement.

Hybrid OA is a nightmare

I was particularly interested to note the comment they received in response to their survey that “Hybrid OA is a nightmare“.

They went on to summarise the responses they received to the question “What would make the management of OA resources easier?” The suggestion:

Harry Potter, the Elder wand and the help of Dobby – the free elf

brought a smile to the faces of the audience. But this also provided me with an opportunity to use Harry Potter as a metaphor for describing the solution which has been developed by the JEMO project team to the nightmare problem of open access articles in hybrid journals.

Providing a Lightweight Solution

NASIG tweets

The slides I used in my presentation are available on Slideshare and embedded at the bottom of this post. I will not attempt to summarise the entire presentation. Rather I will summarise the proposed solution in a single sentence: “The JEMO team propose a solution based on providing Creative Commons licence information for Open Access articles which is made available in RSS feeds for hybrid journals”.
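To illustrate the idea (rather than the JEMO team’s actual implementation), the sketch below scans a hybrid journal’s RSS feed and keeps only those items which carry a Creative Commons licence statement. The element name used for the licence and the feed URL are illustrative assumptions; the details of the proposed approach are given in the slides:

```typescript
// A sketch of the idea behind the proposed solution: filter a hybrid
// journal's RSS feed down to the Open Access items, identified by a
// Creative Commons licence URL. The <license> element name and the
// feed URL are illustrative assumptions, not taken from the proposal.
async function findOpenAccessArticles(feedUrl: string): Promise<string[]> {
  const response = await fetch(feedUrl);
  const xml = new DOMParser().parseFromString(await response.text(), "text/xml");
  const openAccessTitles: string[] = [];
  for (const item of Array.from(xml.querySelectorAll("item"))) {
    // Treat any licence URL pointing at creativecommons.org as Open Access
    const licence = item.querySelector("license")?.textContent ?? "";
    if (licence.includes("creativecommons.org")) {
      openAccessTitles.push(item.querySelector("title")?.textContent ?? "");
    }
  }
  return openAccessTitles;
}

findOpenAccessArticles("https://journal.example/rss").then(console.log);
```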

I was able to give a live demonstration of the JournalTOCs service which has provided a proof-of-concept of the value of this approach.

It should be noted that the slides provide screenshots of the steps used in discovering an open access article included in a hybrid journal.

After the presentation I captured the tweets made during that talk in a Storify summary, as illustrated.

Conclusions

I was pleased to carry out this work on behalf of the JEMO team and to renew contact with Roddy MacLeod. My attendance at the conference also provided an opportunity to hear more about developments in the Web archiving world in a particularly interesting plenary talk on “From a System of Journals to a Web of Objects” given by Herbert Van de Sompel. I also found that Richard Wallis’s talk on The Power of Sharing Linked Data: Giving the Web What It Wants provided a useful update on Linked Data developments in the library world.



Posted in openness, standards | Tagged: | Leave a Comment »

The Biggest Barrier to WebRTC Adoption is Lack of Awareness!

Posted by Brian Kelly on 29 Apr 2014

How Appear.in Led Me To WebRTC

The appear.in Tool

Back in January 2014, in a guest post published on Sheila MacNeill’s How Sheila sees IT blog, I reported on our experiments with the appear.in tool. This service provides a lightweight video conferencing tool. As I described in the blog post, “unlike Skype, no software needs to be installed and unlike Google Hangouts you do not need to sign up to the service“.

Although the blog post and subsequent discussion on Twitter generated some interest in the appear.in service, of potentially much more significance is the emerging standard on which the service is based: WebRTC.

WebRTC: The ‘Most Exciting Technology for 2014′

On 30 December 2013 the ESNA Web site announced “WebRTC ‘Most Exciting Technology for 2014′“. The context for this was the view expressed by Davide Petramala that:

2013 was the year of democratizing video, driven by the momentum of WebRTC; Microsoft integrating Skype in its office portfolio; Google launching hangouts; and GVC and Cisco announcing Jabber C. We are finally seeing ubiquitous video across devices that is available to the masses and [we are] moving from consumer use cases (ex: calling family abroad) to business use cases (i.e. group meetings, virtual customer events and customer presentations).

This blog post aims to provide answers to the questions “What is WebRTC?“, “How well is it supported?” and, the big question, “Will it take off?“.

About the WebRTC Standard

The latest W3C WebRTC 1.0: Real-time Communication Between Browsers Working Draft was published in September 2013, although a more recent editor’s draft was published on 10 April 2014.

The WebRTC Web site states:

WebRTC is a free, open project that enables web browsers with Real-Time Communications (RTC) capabilities via simple JavaScript APIs. The WebRTC components have been optimized to best serve this purpose.

and goes on to explain that the mission of the WebRTC organisation is:

To enable rich, high quality, RTC applications to be developed in the browser via simple JavaScript APIs and HTML5.
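To give a feel for these “simple JavaScript APIs”, here is a minimal sketch of the calls involved in starting a call, written against today’s promise-based API shapes (in 2014 some of these calls were still vendor-prefixed). The STUN server address is illustrative, and the signalling channel, which WebRTC deliberately leaves to the application, is indicated only as a comment:

```typescript
// A minimal sketch of the WebRTC browser APIs: capture local media,
// create a peer connection and produce an SDP offer. Exchanging the
// offer/answer and ICE candidates between peers (signalling) is out of
// scope of WebRTC itself and is left to the application.
async function startCall(): Promise<void> {
  // Ask the user for camera and microphone access
  const stream = await navigator.mediaDevices.getUserMedia({
    video: true,
    audio: true,
  });

  const pc = new RTCPeerConnection({
    iceServers: [{ urls: "stun:stun.example.org" }], // illustrative STUN server
  });

  // Send our media to the remote peer
  for (const track of stream.getTracks()) {
    pc.addTrack(track, stream);
  }

  // Create an SDP offer describing the session
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);

  // The offer would now be sent to the remote peer via an
  // application-defined signalling channel (e.g. WebSockets).
}

startCall().catch(console.error);
```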

Support for the Emerging Standard

WebRTC architecture

Figure 1: WebRTC Architecture (from http://www.webrtc.org/reference/architecture)

From the history of development of Web standards we have learnt that such standards need to have the support of mainstream browser vendors in order to gain market acceptance. In this case the WebRTC initiative is a project supported by Google, Mozilla and Opera (clearly the lack of support from Microsoft is a significant omission).

Support for Developers

In order to enhance take-up of the standard, WebRTC is providing a number of resources which are targeted at the developer community, including:

As well as a number of discussion channels:

Note the architecture of WebRTC is shown in Figure 1 (taken from http://www.webrtc.org/reference/architecture).

WebRTC browser support

Browser support

WebRTC is supported in the following browsers on desktop PCs:

  • Google Chrome
  • Mozilla Firefox
  • Opera

In addition these browsers also support WebRTC on the Android platform.

Commercial Interest in WebRTC

The WebRTC Global Summit was held in London on 1-2 April 2014. The summary for the event stated:

Bringing together leading telcos, mobile operators, OTT/VoIP players, web developers, analysts, regulators and key enterprise players from across the world, WebRTC Global Summit will cover all the key issues in detail from a uniquely commercial and strategic perspective through a mix of incisive keynote presentations and debate. This standalone, single-stream, two-day event will also evaluate the technology’s impact on the industry, asking to what extent will WebRTC revolutionise the communications industry as we know it?

It should be noted that tickets for the event cost up to around £2,000 (although significant discounts were available).

Opportunities and Risks

WebRTC's barriers to adoption

Figure 2: Biggest Barriers to WebRTC Adoption

Since WebRTC is being developed and promoted by significant Web browser vendors (Google, Mozilla and Opera) and we are beginning to see an interest from the telecommunications sector, there is evidence to suggest that this may be an important standard to monitor.

However one of the most significant risks appears to be the lack of involvement in the standardisation process from Microsoft and Apple (the WebRTC Outlook 2014 suggests that lack of awareness of WebRTC is the most significant barrier to adoption, followed by lack of support by Microsoft and Apple).

There are mixed messages regarding potential support for WebRTC in Internet Explorer and on the Apple platform.

An article published on Gigaom in August 2012 announced that Microsoft commits to WebRTC – just not Google’s version. As described in the article, “Microsoft’s commitment to this kind of technology is a big deal for the future of Skype and other messaging applications“.

Meanwhile in November 2013 WebRTC World published an article with the reassuring title Don’t Worry; Apple Will Soon Support WebRTC which was based on the news that “Apple has started to attend W3C WebRTC Working Group meetings“.

More recently (February 2014) in “An Open WebRTC Letter to Satya Nadella and Microsoft” Phil Edholm, President & Founder of PKE Consulting, encouraged the new Microsoft CEO to support WebRTC, asking: if “WebRTC is going to be as big as is being forecast (6.2B WebRTC devices by 2016), why risk giving users another reason to get Chrome or Firefox?“.

Conclusions

In the list of biggest barriers to adoption of WebRTC it was interesting to note that lack of standards or developers, or limited features of the standard, were not regarded as significant barriers. This article aims to address the lack of awareness barrier by ensuring that the higher education community is made aware of the emerging new standard. However the uncertainties of support by Microsoft and Apple are likely to inhibit take-up of the standard across not only the higher education community but the wider marketplace. Developments to WebRTC will continue to be monitored and news of any significant changes in the current stances taken by Microsoft and Apple will be published on this blog.

In addition comments on WebRTC are welcomed. Is anybody currently using it?



Posted in standards | Tagged: , | 3 Comments »

What Could ITS 2.0 Offer the Web Manager?

Posted by Brian Kelly on 24 Jan 2014

ITS 2.0 video

Back in October 2013 the W3C announced that the Internationalization Tag Set (ITS) version 2.0 had become a W3C recommendation. The announcement stated:

The MultilingualWeb-LT Working Group has published a W3C Recommendation of Internationalization Tag Set (ITS) Version 2.0. ITS 2.0 provides a foundation for integrating automated processing of human language into core Web technologies. ITS 2.0 bears many commonalities with its predecessor, ITS 1.0, but provides additional concepts that are designed to foster the automated creation and processing of multilingual Web content. Work on application scenarios for ITS 2.0 and gathering of usage and implementation experience will now take place in the ITS Interest Group. Learn more about the Internationalization Activity.

Following the delivery of this standard, on 17 January 2014 the MultilingualWeb-LT Working Group was officially closed.

But what exactly does ITS 2.0 do, and is it relevant to the interests of institutional web managers, or research, teaching or administrative departments within institutions?

The ITS 2.0 specification provides an overview which seeks to explain the purpose of the standard but, as might be expected in a standards document, this is rather dry. There are several other resources which discuss ITS 2.0 including:

But the resource I thought was particularly interesting was the ITS 2.0 video channel. This contains a handful of videos about the ITS standard. One video in particular provides a brief introduction to ITS 2.0 and the advantages it can offer businesses involved in multilingual communication. This 8-minute long video can be viewed on YouTube but it is also embedded below:

The video, an animated cartoon, is interesting because of the informal approach it takes to explaining the standard. This, in my experience, is unusual. The approach may not be appreciated by everyone, but standards are widely perceived to be dull and boring, even though they are still acknowledged as important. For me, providing a summary of the importance of standards in this way can help to reach out to new audiences who might otherwise fail to appreciate the role which standards may have.

If you are involved in providing web sites or content which may be of interest to an international audience it may be worth spending 8 minutes to view this video. If ITS 2.0 does appear to be of interest the next question will be what tools are available to create and process ITS 2.0 metadata? A page on ITS Implementations is available on the W3C web site but again this is rather dry and the tools seem to be rather specialist. However more mainstream support for ITS 2.0 is likely to be provided only if there is demand for it. So if you do have an interest in metadata standards which can support automated translations and you feel ITS 2.0 may be of use, make sure you ask your CMS vendor if they intend to support it.

Might this be of interest to University web managers? If you are a marketing person at the University of Bath and wish to see your marketing resources publicised to the French-speaking world but have limited resources for translation, you probably wouldn’t want:

The University of Bath is based in a beautiful Georgian city: Bath.

to be translated as:

L’université de bain est basé dans une belle ville géorgienne: bain.

And whilst Google translate actually does preserve the word “Bath” if it is given in capitals, this seems not to be the case in all circumstances. For example, the opening sentence on the Holburne Museum web site:

Welcome to Bath’s art museum for everyone. 

is translated as:

Bienvenue au musée d’art de salle de bain pour tout le monde.

Perhaps marketing people in many organisations who would like to ensure that automated translation tools do not make such mistakes should be pestering their CMS vendors for ITS 2.0 support!
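To illustrate what this might look like in practice, the sketch below applies the ITS 2.0 Translate data category to the Bath example; in HTML5 this data category is expressed through the native translate attribute, which translation tools that honour it will respect. The markup and the helper function are illustrative assumptions rather than anything taken from the ITS 2.0 specification:

```typescript
// A sketch of the ITS 2.0 Translate data category applied to the Bath
// example. In HTML5 the data category maps to the native translate
// attribute; content marked translate="no" should be left untranslated.
const paragraph = `
  <p>
    The University of <span translate="no">Bath</span> is based in a
    beautiful Georgian city: <span translate="no">Bath</span>.
  </p>`;

// The same markup could be generated programmatically, e.g. when a CMS
// renders a glossary of protected terms (a hypothetical helper):
function protectTerm(term: string): string {
  return `<span translate="no">${term}</span>`;
}

console.log(paragraph);
console.log(protectTerm("Bath"));
```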



Posted in standards, W3C | Tagged: | 3 Comments »

Reflections on 16 years at UKOLN (part 4)

Posted by Brian Kelly on 25 Jul 2013

Working With Funders

During my time at UKOLN there have been several core funders including BLRIC (British Library Research and Innovation Centre), LIC (Library and Information Commission), Re:source, the MLA (Museums Libraries and Archives Council) and the JISC. Having joint funding meant that UKOLN was able to engage with not only the higher and further education sectors but also the wider library community together with, following government reorganisations, the cultural heritage sector.

In recent posts I summarised my involvement in speaking at and organising events and writing a large number of peer-reviewed papers. This work was carried out primarily through UKOLN’s core funding. The work typically sought to address the needs of our communities through involvement with people working directly within the sector. Such ‘customer’-focussed approaches helped, I feel, to ensure the work was relevant to the sector.

My work which was more directly involved with JISC’s needs began with developing documents on open standards of relevance to JISC’s digital library programmes, beginning with the eLib programme and followed by the DNER and the JISC Information Environment. This work led to related work for the cultural heritage sector, in particular providing advice on standards for the NOF (New Opportunities Fund) Digitise programme.

In addition to such core-funded work I was also involved in project-funded activities including the JISC-funded QA Focus and JISC PoWR projects, the BLRIC-funded WebWatch project and the EU-funded Exploit Interactive and Cultivate Interactive ejournals. I was also involved in a number of initiatives driven by JISC, such as the eFramework, but, as described in Andy Powell’s post “e-Framework – time to stop polishing guys!“, the time and effort expended by this international partnership failed to have any significant impact, and the eFramework Web site seems to be no longer available although a copy is available in the Internet Archive.

Working With Standards

One area which was of particular interest to both of UKOLN’s core funders was the selection of open standards for use in the development programmes which they funded. My initial work in this area involved contributing to a document on the open standards relevant to the eLib programme. This subsequently led to similar documents being developed for the JISC Information Environment and the NOF-digitise programme.

At that time the funders wanted a list of the open standards which should be mandated for use in their development programmes. However JISC recognised that they did not have a compliance regime in force to address failures of projects to implement the mandated standards. In 2001 JISC announced a call for “the provision of a JISC/DNER national focus for digitisation and quality assurance in the UK“. The document described how the successful bidder would have responsibilities for:

Ensuring adherence of projects to relevant parts of DNER standards and guidelines and reporting on problems in their implementation; incorporating feedback and recommending updates to the guidelines for the community as appropriate

I submitted a successful bid for this work in conjunction with ILRT, University of Bristol. After the first year ILRT withdrew and were replaced by AHDS. My colleague Marieke Guy, our colleagues at AHDS and I developed a quality assurance framework. As described in the final report:

The aim of the QA Focus project was to develop a quality assurance (QA) methodology which would help to ensure that projects funded by JISC digital library programmes were functional, widely accessible and interoperable; to provide support materials to accompany the QA framework and to help to embed the QA methodology in projects’ working practices.

The QA framework is a lightweight framework, based on the provision of technical policies together with systematic procedures for measuring compliance with the policies. The QA framework is described in a number of the QA Focus briefing documents, and the rationale for the framework has formed the basis of a number of peer-reviewed papers.

This lightweight framework was described in a briefing document. In brief, rather than mandating open standards which must be used across all of JISC’s activities, the framework recommended that projects should document their own policies on open standards (and related areas) and the procedures to ensure that the policies were being implemented. JISC programme managers would have flexibility in prescribing specific open standards if this was felt to be appropriate (for example, a programme designed to investigate the value of the OAI-PMH protocol for harvesting repositories could legitimately mandate use of OAI-PMH, and perhaps even a specific version).
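To make the OAI-PMH example concrete, the sketch below shows roughly what a project mandated to support the protocol would implement: a simple harvesting request using the verb and metadataPrefix parameters defined by the specification. The repository URL is hypothetical, and a production harvester would also need to handle resumption tokens for paged results.

```python
# A minimal OAI-PMH harvesting sketch; the base URL is hypothetical.
from urllib.parse import urlencode
from urllib.request import urlopen
import xml.etree.ElementTree as ET

BASE_URL = "https://repository.example.ac.uk/oai"  # hypothetical endpoint

# ListRecords and metadataPrefix are defined by the OAI-PMH specification;
# oai_dc (simple Dublin Core) is the format all repositories must support.
params = urlencode({"verb": "ListRecords", "metadataPrefix": "oai_dc"})
with urlopen(f"{BASE_URL}?{params}") as response:
    tree = ET.parse(response)

# Count the records returned in the first page of results.
ns = {"oai": "http://www.openarchives.org/OAI/2.0/"}
print(len(tree.findall(".//oai:record", ns)), "records harvested")
```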

This approach meant that JISC could request that project reports be provided in MS Word or PDF formats – both of which were proprietary formats at the time (although they are now both open standards). It also provided flexibility by avoiding mandating open standards prematurely (e.g. insisting on use of SMIL rather than the proprietary Flash format) or mandating open standards when design patterns may have been more appropriate (e.g. mandating Web Services standards such as SOAP when RESTful design practices have, in many cases, proved to be more relevant).
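The contrast between the two approaches is easy to see side by side. The sketch below constructs the same search against a purely hypothetical repository API in a RESTful style and in a SOAP style; the URLs and element names are illustrative only, not a real service.

```python
# Contrasting RESTful and SOAP (WS-*) styles; all endpoints are hypothetical.
from urllib.request import Request

# RESTful style: the resource is addressed directly by its URL, and
# standard HTTP verbs and content negotiation do the work.
rest_request = Request(
    "https://api.example.ac.uk/repositories/123/records?q=open+standards",
    headers={"Accept": "application/json"},
)

# WS-*/SOAP style: the same operation is wrapped in an XML envelope and
# POSTed to a single endpoint, typically generated from a WSDL description.
soap_envelope = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <SearchRecords xmlns="urn:example:repository">
      <query>open standards</query>
    </SearchRecords>
  </soap:Body>
</soap:Envelope>"""
soap_request = Request(
    "https://api.example.ac.uk/soap",
    data=soap_envelope.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": "urn:example:repository#SearchRecords"},
    method="POST",
)
```

The point is not that SOAP could not do the job, but that the envelope, the single endpoint and the supporting WS-* machinery add cost that a simple resource-oriented design avoids.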

This work was carried out over a period of time. In 2003 an initial paper on “Ideology Or Pragmatism? Open Standards And Cultural Heritage Web Sites” by myself, my colleague Marieke Guy, Alastair Dunning (AHDS – the now-defunct Arts and Humanities Data Service) and Lawrie Phipps (TechDis) described how:

… despite the widespread acceptance of the importance of open standards, in practice many organisations fail to implement open standards in their provision of access to digital resources. It clearly becomes difficult to mandate use of open standards if it is well-known that compliance is seldom enforced. Rather than abandoning open standards or imposing a stricter regime for ensuring compliance, this paper argues that there is a need to adopt a culture which is supportive of use of open standards but provides flexibility to cater for the difficulties in achieving this.

The next paper, published two years later, on “A Standards Framework For Digital Library Programmes” by myself, my UKOLN colleagues Rosemary Russell and Pete Johnston, Paul Hollins (CETIS), Alastair Dunning and Lawrie Phipps:

describes a layered approach to selection and use of open standards which is being developed for digital library development work within the UK. This approach reflects the diversity of the technical environment, the service provider’s environment, the user requirements and maturity of standards by separating contextual aspects; technical and non-technical policies; the selection of appropriate solutions and the compliance layer. To place the layered approach in a working context, case studies are provided of the types of environments in which the standards framework could be implemented, from an established standards-based service, to a new service in the process of selecting and implementing metadata standards. These examples serve to illustrate the need for such frameworks.

Further papers on “A Contextual Framework For Standards” (by myself, Alastair Dunning, Paul Hollins, Lawrie Phipps and Sebastian Rahtz [OSS Watch]), “Addressing The Limitations Of Open Standards” (by myself, Marieke Guy and Alastair Dunning) and “Openness in Higher Education: Open Source, Open Standards, Open Access” (by myself, Scott Wilson [CETIS] and Randy Metcalfe [OSS Watch]) subsequently developed these ideas and explored how they could be applied in a variety of contexts.

Conclusions

Looking back at this work, I am struck by the value of the expertise provided by colleagues across the sector. The papers I have listed, which described the approaches and ensured that the ideas had been subject to peer review, were written by staff at UKOLN (4 individuals), CETIS (1 individual), OSS Watch (2 individuals), TechDis (1 individual) and the now-defunct AHDS (2 individuals). JISC programme managers provided valuable project management support for the initial QA Focus work and gave early feedback on the ideas, but did not have intellectual input into the ideas.

In light of the evidence given in this blog post I am somewhat concerned by the new strapline which appeared on the redesigned Jisc Web site: “We are the UK’s expert on digital technologies for education and research“. Really? What is the evidence for that assertion? Wouldn’t it be more appropriate to say “We are successful in designing development programmes and providing project management expertise to these programmes“? And equally important “We are successful in encouraging the experts in the higher education sector to work together for the benefit of the wider community“. I would be the first to give thanks to the JISC for organising events which enabled me to meet the co-authors I’ve listed above and encouraged such joint working. But “We are the experts”! Who coined that statement, I wonder?

JISC logo



Posted in General, standards | Tagged: | 2 Comments »

What Does the Demise of Google Reader Tell Us About Open Web Standards?

Posted by Brian Kelly on 14 Mar 2013

Google Reader is Dead!

Earlier this morning I came across the news that Google have announced the demise of their Google Reader service:

We’re retiring Reader on July 1. We know many of you will be sad to see it go. Thanks for 8 great years! goo.gl/7joct

Despite the announcement only being made a few hours ago we are already seeing bloggers up in arms about the news. We might expect large-scale services such as TechCrunch (GoogleReaderpocalypse. For Real This Time.) to provide a speedy response to the news, but closer to home bloggers such as James Clay have responded in blunt terms: Google Reader is Dead.

What Does the Announcement Tell us About Open Web Standards?

Use of open standards was always intended to mitigate the implications of the demise of applications. And in this case the underlying format used by Google Reader (RSS) is widely accepted as an open standard in both its variants (RSS 1.0 and RSS 2.0). Blogs will continue to publish RSS feeds, as will a variety of other tools and services. Why, then, should the demise of Google Reader cause so much anger amongst users of the tool?
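The feeds themselves remain perfectly usable whatever happens to any particular reader. As a simple illustration, the sketch below fetches and lists the items in an RSS 2.0 feed using nothing but the standard library; the feed URL is hypothetical.

```python
# Fetch an RSS 2.0 feed and list its items; the URL is illustrative.
import xml.etree.ElementTree as ET
from urllib.request import urlopen

FEED_URL = "https://example.wordpress.com/feed/"  # hypothetical blog feed

with urlopen(FEED_URL) as response:
    tree = ET.parse(response)

# In RSS 2.0 the items sit under /rss/channel/item.
for item in tree.findall("./channel/item"):
    print(item.findtext("title"), "-", item.findtext("link"))
```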

As RSS grew in popularity we saw the development of a range of RSS readers. Initially we saw dedicated RSS clients which users installed on their desktop. We then saw RSS add-ons to existing tools, including RSS extensions for popular email clients such as Outlook. But the development of the “Web as a platform” led to a growth in popularity of Web-based RSS tools, which meant that users did not have to install software on their desktop computer (which was particularly useful for those with locked-down desktops and IT Service departments who were reluctant to install new software).

One of the early Web-based RSS readers was Bloglines. I used this service many years ago but haven’t logged in for several years. As I learnt from Wikipedia, the service was scheduled to be shut down on 15 November 2010, but a last-minute reprieve meant that it continued under a new owner. However, when I went on to the service a few minutes ago I discovered that the feeds I had subscribed to had been lost. This was not a problem for me, as I had migrated my feeds to Google Reader. But now it seems that I will once again shortly be losing the service I use to view my RSS feeds.

I should be able to export the list of my feeds held in Google Reader and return to Bloglines as my preferred RSS reader. However, in reality it will not be so simple. I now use a variety of tools on my mobile devices (such as Flipboard, Currents, Pulse, etc.) to read my feeds, and use Google Reader as the intermediary for managing my large number of RSS feeds. I suspect I will be reluctant to manage my subscriptions across a range of clients. For me, as for many others who have been commenting on blogs today, Google Reader has been the ideal tool.
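In principle the migration itself is straightforward, since Google Reader exports subscriptions as an OPML file. A minimal sketch of reading such an export is shown below; the filename is hypothetical, and of course the hard part is not extracting the URLs but recreating the folders, read/unread state and habits built around a particular tool.

```python
# Read feed URLs from an OPML subscription export (the format used by
# Google Reader and most other RSS readers); the filename is hypothetical.
import xml.etree.ElementTree as ET

tree = ET.parse("google-reader-subscriptions.xml")
for outline in tree.iter("outline"):
    feed_url = outline.get("xmlUrl")  # outlines without xmlUrl are folders
    if feed_url:
        print(feed_url)
```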

What conclusions can we reach about the role of Web standards in light of Google’s announcement?

The view that open standards protected the user from the vagaries of the market place seems to be undermined – in reality it seems that users grow to love tools which are embedded in daily use.

It also appears that successful applications not only attract large numbers of users; they can also attract developers and companies who build an ecosphere of applications which are dependent on services such as Google Reader.

It also seems that social sharing services are undermining the use of RSS for bringing relevant content to users. Perhaps related to this will be the difficulties companies will have in monetising RSS feeds.

It is interesting to see the arguments which have been made in the Hitler parody Hitler finds out Google Reader is shutting down, which is available on YouTube and embedded below. I’d be interested in others’ thoughts on the reasons for the closure of Google Reader and the implications of this announcement.



Posted in rss, standards | 11 Comments »

Good News From the UK Government: Launch of the Open Standards Principles

Posted by Brian Kelly on 11 Dec 2012

In April 2012 I wrote a post entitled Preparing a Response to the UK Government’s Open Standards: Open Opportunities Document which summarised my experiences of support for open standards in JISC development programmes since the 1990s and encouraged others to participate in the UK Government’s consultation exercise. A post by Simon Wardley entitled The UK’s battle for open standards, beginning:

Many of you are probably not aware, but there is an ongoing battle within the U.K. that will shape the future of the U.K. tech industry. It’s all about open standards.

motivated me to write a follow-up post entitled Oh What A Lovely War! in which I described the language which was being used to describe this consultation exercise:

In brief we are seeing a “battle for open standards” that will “shape the future of the UK tech industry” in which we are seeing “UK Government betrayal” which has led to a “proprietary lobby triumph” . The ugly secrets of “how Microsoft fought true open standards” have been revealed and now every man must do his duty and “get involved”! Who said standards were boring?

Yesterday I received the following email from Linda Humphries of the Government Digital Service, Cabinet Office.

Thank you for your response to the UK Government’s Open Standards: Open Opportunities public consultation. The consultation ran from 9 February to 4 June 2012. At the close of the consultation, we had received evidence from over 480 responses and we would like to take this opportunity to thank you for sharing your views and helping us to formulate new policy on this topic.
As you may know, the consultation process concluded with the publication of a government response and a new policy – the Open Standards Principles – on 1 November 2012. The government response covers the process we followed, a review of the key themes that emerged in the consultation, how they have been taken on board and the next steps for open standards in government IT.
Online submissions were published during the consultation period to encourage debate and we have now also made available the written responses submitted through other channels. The only exception to this is any submissions which explicitly requested confidentiality. Two independent reports commissioned by the Cabinet Office from Bournemouth University have also been published and are available on the Cabinet Office website – an analysis of the consultation responses and an evidence review of aspects of the proposed policy. The responses, reports and new policy are all available here.

In the new year, we shall be setting up the Open Standards Board, as described in the Open Standards Principles. We look forward to your continuing engagement through the Standards Hub during 2013.

Kind regards,
Linda

Linda Humphries
Government Digital Service
Cabinet Office

government standards consultation

The Key Documents

The key documents which have been published are Open Standards Principles (PDF, MS Word and ODT formats), Open Standards Consultation – Government Response (PDF, MS Word and ODT formats), Statistical data (PDF, MS Word and ODT formats), An Analysis of the Public Consultation on Open Standards: Open Opportunities (PDF, MS Word and ODT formats), Open Standards in Government IT: A Review of the Evidence (PDF, MS Word and ODT formats) and B (PDF, MS Excel and CSV formats). The first document summarised the key principles:

Open Standards Principles

These principles are the foundation for the specification of standards for software interoperability, data and document formats in government IT:

  1. We place the needs of our users at the heart of our standards choices
  2. Our selected open standards will enable suppliers to compete on a level playing field
  3. Our standards choices support flexibility and change
  4. We adopt open standards that support sustainable cost
  5. Our decisions on standards selection are well informed
  6. We select open standards using fair and transparent processes
  7. We are fair and transparent in the specification and implementation of open standards

The introduction to the document states that:

This policy becomes active on 1 November 2012. From this date government bodies [1] must adhere to the Open Standards Principles – for software interoperability, data and document formats in government IT specifications.

The other documents summarised the responses which had been received to the consultation (which included feedback from Adam Cooper, JISC CETIS, Rowan Wilson, JISC OSS Watch, Rob Englebright, JISC and Tony Hirst, Open University, in addition to myself and several others from the university sector). The document Open Standards in Government IT: A Review of the Evidence, which provided an independent report for the Cabinet Office by the Centre for Intellectual Property & Policy Management at Bournemouth University, concluded:

Although there is a lack of quantitative evidence on expected cost savings from adopting open standards, abundant examples exist where an open standards policy has been adopted with various consequent benefits, and the literature identifies few downside risks. The challenges appear to lie in the manner of implementation so that potential pitfalls, such as adopting the wrong standard, are avoided while potential gains from increased interoperability, including more competitive procurement and benefits to SMEs and citizens are maximised.

Perhaps some unexpected good news from the Government for Christmas? Might we be able to announce that the standards battle is now over and cry out “Peace in our time”? Time to read the documents in more detail, I feel. But I’d welcome comments from anyone who may have already read the documents and digested the implications.


[1] Central government departments, their agencies, non-departmental public bodies (NDPBs) and any other bodies for which they are responsible.



Posted in standards | 4 Comments »

“Standards are voluntarily adopted and success is determined by the market”

Posted by Brian Kelly on 15 Oct 2012

Yesterday (Sunday 14 October) was World Standards Day. As described on Wikipedia “The aim of World Standards Day is to raise awareness among regulators, industry and consumers as to the importance of standardization to the global economy“. It is therefore timely to highlight Open Stand. As described on the Open Stand Web site:

On August 29th five leading global organizations jointly signed an agreement to affirm and adhere to a set of Principles in support of The Modern Paradigm for Standards; an open and collectively empowering model that will help radically improve the way people around the world develop new technologies and innovate for humanity.

The “Modern Paradigm for Standards” is shaped by adherence to five principles:

  1. Due process: Decisions are made with equity and fairness among participants. No one party dominates or guides standards development. Standards processes are transparent and opportunities exist to appeal decisions. Processes for periodic standards review and updating are well defined.
  2. Broad consensus: Processes allow for all views to be considered and addressed, such that agreement can be found across a range of interests.
  3. Transparency: Standards organizations provide advance public notice of proposed standards development activities, the scope of work to be undertaken, and conditions for participation. Easily accessible records of decisions and the materials used in reaching those decisions are provided. Public comment periods are provided before final standards approval and adoption.
  4. Balance: Standards activities are not exclusively dominated by any particular person, company or interest group.
  5. Openness: Standards processes are open to all interested and informed parties.

The “Modern Paradigm for Standards” itself is based on five key approaches:

  1. Cooperation
  2. Adherence to the principles listed above
  3. Collective empowerment
  4. Availability
  5. Voluntary Adoption

The Topsy tool provides a useful means of observing Twitter discussions about web resources. Looking at recent English-language tweets about the Web site we can see a useful summary:

5 organizations – #IETF #IEEE #W3C #IAB @Internet Society – issue joint statement on open Internet standards – http://t.co/cO2rQvGH

together with a summary of the aims of this initiative:

check out the uber standards org @openstand that will drive innovation globally through interoperability http://t.co/cKrkYnvr

and an acknowledgement that more work is needed if the goal of “driving innovation globally through interoperability” is to be realised:

OpenStand (http://t.co/2g50zvMc) is good politics; that it doesn’t go far enough just shows there’s still work to be done.

However it is the single sentence summary of what is meant by “Voluntary Adoption” which struck me as being of greatest interest:

Standards are voluntarily adopted and success is determined by the market.

In the past I think there has been a view that open standards exist independently of the market place, with public sector organisations, in particular, being expected to distance themselves from the market economy in the development and procurement of IT systems. However this statement of a “modern paradigm for standards” makes it clear that standards bodies such as the W3C, IETF, IEEE, IAB and the Internet Society are explicit that the success of open standards is dependent on acceptance of the standards across the market place. Back in September 2008 I highlighted the importance of market place acceptance of open standards:

many W3C standards …  have clearly failed to have any significant impact in the market place – compare, for example, the success of Macromedia’s Flash (SWF) format with the niche role that W3C’s SMIL format has.

and two months later a post entitled Why Did SMIL and SVG Fail? generated a discussion about criteria for identifying failed standards. Perhaps, as was suggested in the comments on the post, SMIL and SVG have merely had a very slow growth to reach market acceptance. But I can’t help but feel that if SMIL and SVG are belatedly felt to be successful standards, this will have been as a result of the decision by Apple not to support Flash on the iOS platform for Apple’s mobile devices. This seems to provide a good example of Open Stand’s principle that “Standards are voluntarily adopted and success is determined by the market“. We can now see parallels between the selection of third-party services to support institutional activities and the selection of open standards to support development activities. Interestingly, such issues were discussed at the CETIS meeting on “Future of Interoperability Standards” held in Bolton in January 2010. I hope that the Opportunities and Risks Framework For Standards which I presented at the meeting can provide an approach for helping to identify the standards which can achieve success in the market place.


Posted in standards | 1 Comment »

Oh What A Lovely War!

Posted by Brian Kelly on 8 May 2012

“The UK’s Battle for Open Standards”

The UK Government’s current consultation document on policies for open standards has generated a fair amount of passion. In addition to articles published in the Computer Weekly by Mark Ballard and Glyn Moody, I also recently came across the following tweet from @swardley:

I haven’t posted on the radar for a long time, really happy they took my article on the open standards battle – http://oreil.ly/Im5z0o

His post, entitled “The UK’s battle for open standards“, began:

Many of you are probably not aware, but there is an ongoing battle within the U.K. that will shape the future of the U.K. tech industry. It’s all about open standards.

and concluded:

The battle for open standards needs help, so get involved.

Earlier this year, the language used in the title of Glyn Moody’s post on UK Government Betrayal of Open Standards Confirmed suggested that this was likely to be a vicious battle, and his more recent article made it clear who the enemy was: How Microsoft Fought True Open Standards. Mark Ballard’s article on how Proprietary lobby triumphs in first open standards showdown reinforced the militarist angle:

In conclusion, I feel that this meeting and others like it, should not become vicarious battlegrounds for tech giants to slug out battles that they can’t or won’t conduct elsewhere – at the end of the day, it should be about delivering the best technology-enabled services possible at the best price point. 

In brief we are seeing a “battle for open standards” that will “shape the future of the UK tech industry” in which we are seeing “UK Government betrayal” which has led to a “proprietary lobby triumph” . The ugly secrets of “how Microsoft fought true open standards” have been revealed and now every man must do his duty and “get involved“! Who said standards were boring?

“Losses 60,000 Men. Ground Gained 0 Yards”

I recently watched a DVD of the film “Oh! What a Lovely War“, a film I saw when I was young which chronicles the various madnesses of the First World War. The scene depicting how the generals were happy to send soldiers to their destruction, convinced of the rightness of their cause, came to mind when I read the blog posts which suggested that success in the open standards battle would help the minor players (the open source community, which would be depicted by Belgium in an updated version of the film) against the evil empire (no prizes for guessing, but ignore the humanist comments of its former general).

But what of the foot soldiers? In the standards battle these will be the users of IT services, who have little interest in the arcane decisions being made in Whitehall, in obscure European cities and by those plotting to overthrow the existing order. Will they (we) see peace in our time (to use a saying from a later war), or might winning the open standards battle fail to deliver enhanced services for users?

Addressing the Needs of the User

I’ve tried to make the point that the militaristic language which is being used by the blogging community is inappropriate in discussions about government policies on open standards. Rather than continuing with this metaphor, the issue I feel needs to be addressed is: “What will the consequences of a new policy be for users of government IT services?” The current discussions are centred on the benefits of providing a level playing field for developers, especially open source developers. But there is little discussion of what this will mean for end users, apart from an implied suggestion that open source solutions based on royalty-free open standards will inevitably provide a better environment for users of the services.

We have, for example, seen how a well-intentioned government policy, such as the one which stated that all government Web sites must be WCAG compliant, could lead to undesirable side-effects if it were to be implemented in a simplistic fashion. In this case, despite an Accessibility Summit meeting in which Web accessibility advocates and researchers agreed on the need to avoid simplistic checkbox approaches, the government announced a policy which, if it had been implemented, could have resulted in government web sites with trivial WCAG errors being withdrawn from service.

In the Web accessibility arena, alternative approaches led to the development of the BS 8878 Web Accessibility Code of Practice. This provides a much more realistic approach to achieving the laudable goal of enhancing access for people with disabilities: it takes contextual issues into account, focuses on best practices for the various processes in developing accessible Web sites, and avoids the risk that forcing Web sites to be WCAG compliant would lead to non-conformant Web sites being removed from service or potentially valuable Web sites not being deployed due to difficulties in achieving WCAG conformance.

The current debate on open standards faces similar risks. To take a few simple and tangible examples:

  • The MP3 format is based on patented compression algorithms. Would a government policy which mandated patent-free standards ban use of the MP3 format? If so, since popular audio players such as iPods support the MP3 format but not necessarily patent-free alternatives, how will podcasts be made available for popular consumer products such as the iPod and iPhone?
  • The RSS (Really Simple Syndication/RDF Site Summary) format is not an open standard, since it is not owned by a trusted neutral standards body. Will RSS no longer be usable on Government Web sites and, if so, what benefits does this provide?
  • The Microsoft Office format is now an ISO standard. Does this mean that MS Office will be an acceptable format? If so, what are the current ‘battles’ about? If not, what principles are the battles about?

Although I’m not in favour of the discussions about policies on Government use of open standards being based on military metaphors, I do agree with the call to get involved. Your country does need you, if you have an interest in the role open standards can play in the development of IT services in the public sector. In particular, if you have an interest in the implications for user communities of the deployment of policies on open standards, I’d encourage you to participate in the consultation.

Posted in standards | 2 Comments »

Preparing a Response to the UK Government’s Open Standards: Open Opportunities Document

Posted by Brian Kelly on 26 Apr 2012

 

The UK Government’s Open Standards Consultation

The UK Government is currently seeking comments for its Open Standards Consultation for the Open Standards: Open Opportunities – Flexibility and efficiency in Government IT document (a 30 page document available in PDF format). I am currently formulating my responses to the consultation process. In light of the interest in open standards amongst many developers, managers and policy makers in the higher and further education sector I would encourage participation from those with interests in this area – it should be noted, however, that the consultation closes on 1 May!

Update: The deadline has now been extended to Monday, 4 June 2012.

The Open Standards Survey 2011

As described in two posts entitled UK Government Survey on Open Standards: But What is an ‘Open Standard’? and “UK Government Will Impose Compulsory Open Standards” published a year ago I responded to the initial survey and gave my thoughts on the definitions of an open standard. I also commented on the flaws in the survey process which made it difficult to provide meaningful feedback.

My response was one of 970 received – and it was interesting to read in the Summary of lessons learned from the UK Government Open Standards Survey, 2011 (pdf, 246kb) that the majority came from the private sector. Looking at the pie chart given in the report I would estimate that about 200-300 responses came from the public sector (excluding central government). How many of these are from the UK higher and further education sector I do not know.

It should also be noted that although “the policy resulting from this consultation will apply to all central government departments, their agencies, non-departmental public bodies (NDPBs) and any other bodies for which they are responsible” the document goes on to add that “Local government and wider public sector bodies will be encouraged to adopt the policy to deliver wider interoperability benefits“. There is therefore an opportunity to influence government policy in an area which may affect IT development policies in the future.

Reflections on 20 years Involvement in Open Standards in UK Higher Education

Although I had serious reservations about last year’s survey, in many respects I feel that the Open Standards: Open Opportunities – Flexibility and Efficiency in Government IT consultation document has its merits.

The feedback I gave in last year’s survey was based on work related to policies on use of open standards in higher education which I have been involved with since the launch of the eLib national digital library programme back in the mid 1990s. Back then those of us who were involved in contributing to the eLib Programme Technical Standards document had, in retrospect, a very naive view of open standards, with the document suggesting that standards such as VRML and whois++ could have a role to play for eLib projects. Some projects may have used these standards (I know that for a period whois++ was felt to be important for the eLib Subject Based Information Gateways) but in retrospect we were over-enthusiastic in encouraging take-up of what at the time seemed to be potentially significant standards.

The dangers of promoting (or, worse, mandating) use of emerging open standards which are being actively promoted by their supporters (and by standards bodies themselves) became apparent when we realised that W3C standards such as SMIL and SVG were not significantly challenging proprietary solutions such as Flash. In addition in 2005 a panel session entitled Web Services Considered Harmful argued that a series of overly complex open standards (several thousand pages when printed out!) was proving costly to implement and that use of ‘grassroots’ approaches, including RSS and REST, would provide more cost-effective approaches to development.

In the UK higher education sector we are aware of the dangers of mandating inappropriate open standards: universities were once mandated to support OSI networking protocols, with Coloured Book software providing a transition to this environment. Then the Internet came along, and universities were initially permitted to access Internet services via a TCP/IP tunnel across JANET before the clear benefits provided by the Internet eventually became apparent to policy-makers and the sector made native use of TCP/IP.

Our understanding of the benefits which can be gained by use of open standards, together with the risks of a naive and uncritical acceptance of the realities of use of open standards, led to a series of papers, written by myself, my colleague Marieke Guy and Rosemary Russell, my former colleague Pete Johnston, Paul Hollins and Scott Wilson (JISC CETIS), Alastair Dunning (at the time at AHDS), Sebastian Rahtz and Randy Metcalfe (then of JISC OSS Watch) and Lawrie Phipps (then of JISC TechDis), which sought solutions to this minefield.

In addition to these papers, a position paper on “An Opportunities and Risks Framework For Standards” was presented at the “Future of Interoperability Standards Meeting 2010” organised by CETIS in February 2010. The paper described how the experiences of the past led to the need for a risk management approach to use of open standards, especially emerging open standards which may not yet have achieved critical mass.

Open Standards: Open Opportunities – Flexibility and Efficiency in Government IT

In light of this background, what feedback am I planning to give to the report? I have highlighted a number of comments in the report which I intend to comment on.

Report: Information technology across the government estate is expensive. (p. 4)
Comment: The opening foreword highlights that the aims of the policy are cost-savings. There will be a need to ensure that the policy supports this key goal.

Report: The Government ICT Strategy … has already committed the Government to creating a common and secure IT infrastructure based on a suite of compulsory open standards, adopting appropriate open standards wherever possible. [my emphasis] (p. 5)
Comment: The challenge will be in identifying what is compulsory and what the criteria are for defining “wherever possible”. The compulsory aspects could mandate specific technical standards or could mandate specific processes (e.g. an open summary of the decision-making processes).

Report: The mandation of specific open standards will:
  • make IT solutions fully interoperable to allow for reuse, sharing and scalability across organisational boundaries and delivery chains;
  • help the Government to avoid lengthy vendor lock-in, allowing transfer of services or suppliers without excessive transition costs, loss of data or functionality. (p. 8)
Comment: If the main goal of the open standards policies is to achieve cost savings, should this be mentioned here?

Report: The European Commission’s EIF version 2.0 does not provide a definition of open standard, but instead describes ‘openness’ … (p. 11)
Comment: This approach, which seeks to characterise open approaches, provides the flexibility to allow use of cost-effective standards such as RSS (which have not been ratified by an open standards body) as well as use of design approaches (such as RESTful design) rather than over-complex open standards (such as the WS-* series).

Report: For the purpose of UK Government software interoperability, data and document formats, the definition of open standards is those standards which fulfil the following criteria: … (p. 12)
Comment: It is unclear whether there should be an ‘and’ or an ‘or’ linking the five criteria.

Report: When specifying IT requirements for software interoperability, data and document formats, government departments should request that open standards adhering to the UK Government definition are adopted, unless there are clear business reasons why this is inappropriate, in order to … (p. 13)
Comment: This process-driven approach relates closely to the approaches developed in the UK HE sector and described in a paper on “Openness in Higher Education: Open Source, Open Standards, Open Access“.

Report: Standards for software interoperability, data and document formats that do not comply with the UK Government definition of an open standard may be considered for use in government IT procurement specifications if … (p. 13)
Comment: This flexibility is to be welcomed in light of the complexities related to open standards. However there will be a need to ensure that such flexibility does not allow inappropriate proprietary solutions to continue to be used.

Report: Any standard specified that is not an open standard must be selected as a result of a pragmatic and informed decision, taking the consequences into account. The reasons should be fully documented and published, in line with the Government’s transparency agenda. (p. 13)
Comment: This clause is welcomed.

I welcome your comments on my views on the consultation document. More importantly, however, I’d encourage you to give your views on the consultation web site – as that is the place where your views can influence government policy decisions. Note that if you would like to see responses which have already been submitted, I suggest you visit Jeni Tennison’s post on UK Open Standards Consultation.



Posted in standards | 5 Comments »

W3Conf: Practical Standards for Web Professionals – Free for Remote Participants!

Posted by Brian Kelly on 28 Oct 2011

The W3C are hosting their first conference, “W3Conf: Practical Standards for Web Professionals”, which will take place on 15-16 November 2011 at the Redmond Marriott Town Center, Redmond, USA. Although the early bird registration fee of $199 for the two-day event seems very reasonable, I suspect that, despite the event’s focus on HTML5 and the Open Web Platform probably being of interest to many readers of this blog, not many will be able to travel to the US to attend this conference (if you do wish to attend, note that the deadline for the early bird registration is 1 November, when the fee will go up to $299).

However the event Web site states that “The recordings of the presentations will be freely available” and goes on to add that “During the event, there will be a live stream of the sessions, with English subtitling. After the event, each session will be archived for future reference“.

The following sessions will be held at the conference:

Day 1, 15 November:

  • Welcome: Contributing to Open Standards, Ian Jacobs (W3C)
  • Testing to Perfection, Philippe Le Hégaret (W3C)
  • Community Groups: a Case Study With Web Payments, Manu Sporny (Digital Bazaar)
  • Developer Documentation, Doug Schepers (W3C)
  • HTML5 Games
  • Web Graphics – a Large Creative Palette, Vincent Hardy (Adobe)
  • Modern Layout: How Do You Build Layout in 2011 (CSS3)?, Divya Manian (Opera)
  • Shortcuts: Getting Off (Line) With the HTML5 Appcache, John Allsopp (Web Designs)
  • The n-Screens Problem: Building Apps in a World Of TV and Mobiles, Rajesh Lal (Nokia)
  • The Great HTML5 Divide: How Polyfills and Shims Let You Light Up Your Sites in Non-Modern Browsers, Rey Bango (Microsoft)
  • HTML5: the Foundation of the Web Platform, Paul Irish (Google)

Day 2, 16 November:

  • HTML5 Demo Fest: The Best From The Web, Giorgio Sardo (Microsoft)
  • Shortcuts: Data Visualisation With Web Standards, Mike Bostock (Square)
  • Universal Access: A Practical Guide To Accessibility, ARIA, And Script, Becky Gibson (IBM)
  • Security and Privacy: Securing User Identities and Applications, Brad Hill (PayPal), Scott Stender (iSEC Partners)
  • Shortcuts: Touch Events, Grant Goodale (Massively Fun)
  • Mobile Web Development Topic: Building For Mobile Devices
  • Shortcuts: Modernizr, Faruk Ateş (Apture)
  • Browsers and Standards: Where the Rubber Hits the Road, Paul Cotton (Microsoft), Tantek Çelik (Mozilla), Chris Wilson (Google), Divya Manian (Opera)

It was very timely to read about this conference during Open Access 2011 Week, which the JISC, among many other organisations, are supporting. The free access to the talks and resources which will be used illustrates how openness can be used to enhance learning and creativity, in this context for developers who are looking to use Web standards to enhance their services.

The provision of remote access to the conference is also very timely in the context of the JISC-funded Greening Events II project which is being provided by ILRT and UKOLN. It would be valuable if the conference organisers were able to provide statistics on remote participation during the event. How many people viewed from the UK, for example, and for how long? It would be interesting to see if the environmental costs of delivering the streaming video and hosting videos and slides for subsequent viewing could be compared with the costs of flying to the US.

Posted in Events, standards | Leave a Comment »

Privacy Settings For UK Russell Group University Home Pages

Posted by Brian Kelly on 24 May 2011

On the website-info-mgt JISCMail list, Claire Gibbons, Senior Web and Marketing Manager at the University of Bradford, today asked “Has anyone done anything in particular in response to the changes to the rules on using cookies and similar technologies for storing information from the ICO?” and went on to add that “We were going to update and add to our privacy policy in terms of what cookies we use and why“.

This email message was quite timely, as privacy issues will be featured in a plenary talk at UKOLN’s forthcoming IWMW 2011 workshop, which will be held at the University of Reading on 26-27 July, with Dave Raggett giving the following talk:

Online Privacy:
This plenary will begin with a report on work on privacy and identity in the EU FP7 PrimeLife project which looks at bringing sustainable privacy and identity management to future networks and services. There will be a demonstration of a Firefox extension that enables you to view website practices and to set personal preferences on a per site basis. This will be followed by an account of what happened to P3P, the current debate around do not track, and some thoughts about where we are headed.

The Firefox extension mentioned in the abstract is known as the ‘Privacy Dashboard’ and is described as “a Firefox add-on designed to help you understand what personal information is being collected by websites, and to provide you with a means to control this on a per website basis“. The output for a typical home page is illustrated.

The dashboard was developed by Dave Raggett with funding from the European Union’s 7th Framework Programme for the PrimeLife project, a pan-European research project focusing on bringing sustainable privacy and identity management to future networks and services.

In order to observe patterns in UK universities’ practices in online privacy I have used the W3C Privacy Dashboard to analyse the home pages of the twenty UK Russell Group university Web sites. The results are given in the following table.

Ref. No. | Institution | Session cookies | Lasting cookies | External lasting cookies | Third-party sites | Third-party cookies | Third-party lasting cookies | Invisible images
1 | University of Birmingham | 3 | 3 | 0 | 4 | 0 | 2 | 0
2 | University of Bristol | 0 | 0 | 0 | 4 | 0 | 6 | 8
3 | University of Cambridge | 1 | 3 | 0 | 3 | 1 | 2 | 0
4 | Cardiff University | 1 | 4 | 0 | 0 | 0 | 0 | 0
5 | University of Edinburgh | 1 | 4 | 0 | 0 | 0 | 0 | 0
6 | University of Glasgow | 2 | 3 | 0 | 2 | 1 | 6 | 2
7 | Imperial College | 3 | 3 | 0 | 3 | 0 | 2 | 0
8 | King’s College London | 3 | 3 | 0 | 3 | 1 | 6 | 0
9 | University of Leeds | 2 | 3 | 0 | 1 | 0 | 0 | 0
10 | University of Liverpool | 2 | 3 | 0 | 2 | 2 | 3 | 0
11 | LSE | 3 | 0 | 0 | 1 | 0 | 0 | 0
12 | University of Manchester | 3 | 0 | 0 | 1 | 0 | 0 | 0
13 | Newcastle University | 2 | 0 | 0 | 0 | 0 | 0 | 3
14 | University of Nottingham | 2 | 3 | 0 | 2 | 0 | 5 | 0
15 | University of Oxford | 1 | 5 | 0 | 1 | 0 | 0 | 1
16 | Queen’s University Belfast | 1 | 3 | 0 | 1 | 0 | 0 | 0
17 | University of Sheffield | 2 | 3 | 0 | 0 | 1 | 0 | 0
18 | University of Southampton | 1 | 3 | 0 | 3 | 0 | 0 | 0
19 | University College London | 1 | 2 | 7 | 0 | 0 | 0 | 0
20 | University of Warwick | 9 | 6 | 0 | 39 | 2 | 95 | 6
TOTAL | | 43 | 54 | 7 | 70 | 8 | 127 | 20

It should be noted that the findings appear to be volatile, with significant differences being found when the findings were checked a few days after the initial survey.
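For anyone wishing to sanity-check the first two columns of such a survey, a rough approximation is possible with a few lines of code: fetch the home page and classify the cookies set, treating those with an expiry date as lasting cookies. The sketch below (with a hypothetical URL) does just that; a tool such as the Privacy Dashboard goes much further, tracking third-party requests and invisible images.

```python
# Approximate the session/lasting cookie counts for a home page.
from http.cookiejar import CookieJar
from urllib.request import HTTPCookieProcessor, build_opener

def count_cookies(url):
    jar = CookieJar()
    opener = build_opener(HTTPCookieProcessor(jar))
    opener.open(url)
    lasting = sum(1 for cookie in jar if cookie.expires)  # has an expiry date
    return len(jar) - lasting, lasting

session, lasting = count_cookies("https://www.example.ac.uk/")  # hypothetical
print(f"Session cookies: {session}, lasting cookies: {lasting}")
```

Note that this only sees cookies set over HTTP on the initial page load; cookies set by JavaScript would be missed, which may partly explain the volatility noted above.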

How do these findings compare with other Web sites, including those in other sectors? It is possible to query the Privacy Dashboard’s data on Web sites for which data is available, which includes the Fortune 100 Web sites. In addition I have used the tool on the following Web sites:

Ref. No. | Institution | Session cookies | Lasting cookies | External lasting cookies | Third-party sites | Third-party cookies | Third-party lasting cookies | Invisible images | Additional comments
1 | W3C | 0 | 0 | 0 | 2 | 0 | 4 | 1 | P3P Policy
2 | Facebook home page | 4 | 6 | 0 | 1 | 0 | 0 | 1 |
3 | Google | 0 | 7 | 0 | 0 | 0 | 1 | 0 |
4 | No. 10 Downing Street | 1 | 4 | 0 | 8 | 0 | 52 | 1 | (Nos. updated after publication)
5 | BP | 1 | 1 | 0 | 0 | 0 | 0 | 2 | P3P Policy
6 | Harvard | 3 | 4 | 1 | 0 | 0 | 0 | |
7 | ICO.gov.uk | 2 | 3 | 0 | 1 | 0 | 0 | 1 |

I suspect that many Web managers will be following Claire Gibbons’ lead in seeking to understand the implications of the changes to the rules on using cookies and similar technologies for storing information, and reading the ICO’s paper on Changes to the rules on using cookies and similar technologies for storing information (PDF format). I hope this survey provides a context for the discussions and that policy makers find the Privacy Dashboard tool useful. But in addition to ensuring that policy statements regarding use of cookies are adequately documented, might this not also provide an opportunity to implement a machine-readable version of such a policy? Is it time for P3P, the Platform for Privacy Preferences Project standard, to make a come-back?
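For those who have forgotten what P3P looked like in practice, the sketch below shows one of its mechanisms: a ‘compact policy’ sent as an HTTP response header, which user agents could evaluate against the user’s preferences. The tokens shown are illustrative rather than a reviewed policy, and a full deployment would also publish a policy reference file (conventionally at /w3c/p3p.xml).

```python
# A toy WSGI application that sends a P3P compact policy header.
from wsgiref.simple_server import make_server

def app(environ, start_response):
    start_response("200 OK", [
        ("Content-Type", "text/html; charset=utf-8"),
        # Illustrative P3P compact policy tokens, e.g. NOI (no identified
        # data collected) and DSP (the policy has dispute procedures).
        ("P3P", 'CP="NOI DSP COR"'),
    ])
    return [b"<p>Home page with a machine-readable privacy policy.</p>"]

if __name__ == "__main__":
    make_server("", 8000, app).serve_forever()
```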

Posted in Evidence, Legal, openness, standards, W3C | Tagged: | 15 Comments »

“UK Government Will Impose Compulsory Open Standards”

Posted by Brian Kelly on 20 Apr 2011

“UK Government Promises To Go Open – Again”

In a post entitled UK Government Promises to Go Open – Yet Again, Glyn Moody provides a rather cynical view based on his experiences of Government promises regarding ICT and openness: “after years of empty promises, the UK government assures us that this time it will really open up, embracing open source and openness in all its forms”. However there is also some optimism in the column:

… there is a ray of hope. For as I reported a month ago, the Cabinet Office has settled on a rather good definition of open standards that includes the key phrase “have intellectual property made irrevocably available on a royalty free basis”, which does create a truly level playing-field that allows open source to compete fairly.

The column concludes:

“Let’s hope it really marks the beginning of a new era of openness in UK government IT – and that I won’t have to write this article ever again.”

Publication by the Cabinet Office of the “Government ICT Strategy”

I have previously commented on the Government’s attempts at agreeing on a definition of open standards in a post entitled UK Government Survey on Open Standards: But What is an ‘Open Standard’? and pointed out some of the difficulties (is RSS an open standard, for example). But although it may be difficult to provide agreement on such definitions, I welcome the fact that the Government is asking such questions.

This is particularly important in light of the Cabinet Office’s recent publication of the Government ICT Strategy (PDF format). In the introduction the Right Honourable Francis Maude, Minister for the Cabinet Office, lists the following challenges central government is facing:

  • Departments, agencies and public bodies too rarely reuse and adapt systems which are available ‘off the shelf’ or have already been commissioned by another part of government, leading to wasteful duplication;
  • systems are too rarely interoperable;
  • the infrastructure is insufficiently integrated, leading to inefficiency and separation;

The first bullet point could be interpreted as a signal that the government is looking to procure off-the-shelf proprietary systems. However the other two points seem to challenge that perception, as it is precisely such monolithic proprietary systems which fail to provide the interoperability and the integrated infrastructure which is needed. Instead, in order to address these challenges, the strategy announces that it intends to:

impose compulsory open standards, starting with interoperability and security;

We know that the government is prepared to take ‘bold’ decisions – but is this perhaps an unusual decision in being one that those involved in IT development activities within the higher education sector would welcome?

What are the Open Standards Which Will Be Made Compulsory?

It is also pleasing to see that the Government has invited feedback on the open standards which it feels are relevant. A SurveyMonkey form on Open Standards in the Public Sector invites feedback on its proposed set of conditions for an open standard (discussed previously) as well as listing open standards in 23 technical areas for which respondents can specify whether they think the standards should be a PRIORITY STANDARD, MANDATORY (must be used), RECOMMENDED (should be used), OPTIONAL or SHOULD NOT USE.

The 23 areas are Accessibility and usability; Biometric data interchange; Business object documents; Computer workstations; Conferencing systems over Internet Protocol (IP); Content management, syndication and synchronization; Data integration between known parties; Data publishing; e-Commerce, purchasing and logistics; e-Health and social care; e-Learning; e-News; e-Voting; Finance; Geospatial data; Identifiers; Interconnectivity; Service registry/repository; Smart cards; Smart travel documents; Voice over Internet Protocol (VOIP); Web services and Workflow and web services.

Rather than attempting to comment on all of these areas I’ll explore some of the issues with the approaches which are being taken in the survey by addressing just two areas: “Accessibility and usability” and “Computer workstations”.

“Accessibility and Usability”

The first section covers “Accessibility and usability” and addresses Human Computer Interface standards (e.g. ISO/TS 16071:2003);  Web Content standards (WCAG 1.0) and Usabilty (sic) standards (e.g. ISO 13407:1999).

This is an area of particular interest to me, so how should I respond to the survey (which is illustrated)?

The first question, on WCAG 1.0, is easy – this has been superseded by WCAG 2.0 and should no longer be used. So that is clearly in the “Should Not Use” category.

Should, therefore, the answer to the use of WCAG 2.0 be to select it as a Priority Standard, a Mandatory Standard or a Recommended Standard, Optional or, perhaps, Should Not Use?  These terms have been defined in the survey system:

PRIORITY STANDARD – a standard that you think is important and a priority

MANDATORY – a standard that you judge MUST be used by the UK public sector

RECOMMEND – a standard that you judge should be used by the UK public sector but recognising that there may be exceptions/caveats that mean it is sometimes not appropriate

OPTIONAL – a standard that you judge may be used by the UK public sector

SHOULD NOT USE – a standard that you judge should not be used by the UK public sector

I have previously suggested that public sector organisations in the UK should be using the BS 8878 Code of Practice for Web Accessibility as this provides a policy framework for developing accessible Web sites and provides the flexibility in the selection of accessibility guidelines, such as WCAG 2.0 which may not be applicable for use in some circumstances.  However BS 8878 isn’t included in the list of standards.  I think that WCAG 2.0 is important, but not applicable in all cases, so I guess I should select the Priority Standard option.  In addition, since it is possible to select multiple responses, I would also choose the Recommend option.

From the first two standards alone I have already found reasons why the Mandatory response may not be appropriate, and noticed some logical flaws in the design of the survey form – it seems it is possible to select multiple responses, including ones which may be contradictory.

The third ‘standard’ is also confusing, as it covers the ‘Central Office of Information Standards and Guidelines‘. However this isn’t a standard but a set of UK Government recommendations and policies. The guidance document contains a section on Delivering inclusive websites, which appears to have been published in 2009 and which requires Government Web sites to conform with WCAG 1.0 to AA level. This ‘standard’ is not compatible with the first two areas and so the Should Not Use recommendation should be given – not because the recommendations are necessarily wrong but because it is not a standard. However it is not possible to annotate the responses submitted using the survey system.

“Computer Workstations”

The misleadingly-named “Computer workstations” section is of particular interest to me since it covers various Web standards, document standards and standards for office applications. In the list of Web standards the choices are HTML 4.01, HTML 5 or XHTML. Here the choices are between the W3C HTML 4.01 standard, which was ratified in December 1999; the W3C HTML5 working draft, which has not yet been ratified and which is still evolving; and a W3C standard for which a version number isn’t specified, which could lead to confusion between the ratified XHTML 1.0 standard and the moribund (but recently updated) XHTML 2 working draft.

The list of document types is also interesting. RTF is listed as a standard – although this is a proprietary format which is owned by Microsoft. Similarly the inclusion of PDF from version 4 covers both the proprietary version owned by Adobe as well as the ISO standard which is based on PDF 1.7. The ODF and OOXML open standards are listed, although the Microsoft Document format is also included, as is the Lotus Notes Web Access format. There is similar confusion over the open standards for spreadsheets: HTML is suggested which, although an open standard, will not provide the interoperability which open standards are meant to deliver. As with the document formats, ODF and OOXML are included but the proprietary MS Excel format is also listed. This pattern is repeated for presentation formats, although this time MS PowerPoint is listed.
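Incidentally, the muddle between container and standard can be seen at the level of the bytes themselves: PDF and RTF files announce themselves with distinctive leading ‘magic’ bytes, while ODF and OOXML are both ZIP containers and can only be told apart by inspecting the packaging inside. The sketch below (with a hypothetical filename) identifies the outer container only.

```python
# Identify a document's outer container by its leading magic bytes.
MAGIC = {
    b"%PDF-": "PDF",
    b"{\\rtf": "RTF",
    b"PK\x03\x04": "ZIP container (could be OOXML or ODF)",
    b"\xd0\xcf\x11\xe0": "OLE2 container (legacy binary MS Office)",
}

def sniff(path):
    with open(path, "rb") as f:
        header = f.read(8)
    for magic, name in MAGIC.items():
        if header.startswith(magic):
            return name
    return "unknown"

print(sniff("report.docx"))  # hypothetical file
```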

Other Areas

The section on “Biometric data interchange” is interesting, although I know nothing of the standards used in this area. But what are the implications of responding to the question on, for example, “ISO/IEC 19794-5 Information Technology – Biometric data interchange formats – Part 5: Face image data”? If this is a Mandatory Standard, could this mean that it is used in situations which I feel infringe personal liberties? The initial response might be to suggest that the standard will only be used in appropriate areas – and yet we have seen that defining WCAG as a Mandatory standard has led to it being enforced when its use may be inappropriate. It does seem to me that there is a need to define a policy layer which helps to ensure that Mandatory clauses are not used in inappropriate areas.

I’ll not comment further here on areas which I know will be of interest to the JISC development community:

Conferencing system (six standards listed), Content management, syndication and synchronisation (which covers various standards such as XML Schemas, OAI-PMH, RSS, OpenURL and Z39.50), Data integration between known parties (which includes XML, XML Schemas, XSL, UML, RDF and OWL), Data publishing (which covers RDF, SKOS and OWL), Identifiers (which covers DOIs, ISBN, ISSN, XRIs, GUID, URIs, URLs and PURLs), Interconnectivity (which covers various Internet protocols), Service management (which only includes ISO/IEC 20000) or Service registry/repository (which includes UDDI, ebXML, ebRS and edRS), e-Learning (which covers IMS, IEEE LOM and SCORM), Geo-spatial, Web Services and Workflow and web services.

or areas which will be of less direct relevance to our development community:

Business object documents, Smart cards, Smart travel documents, e-Commerce, purchasing and logistics, e-Health and social care, e-News, e-Voting, Finance and VoIP.

Discussion

Despite the rhetoric in the introduction to the Government ICT Strategy document, it seems that the survey is simply revisiting work which has been published previously in the e-GIF guidelines. Looking at the Technical Standards Catalogue, for example, there is a section on Specifications for computer workstations which lists the PDF, MS Office and Lotus Notes formats which I mentioned previously.

Looking in more detail at the survey form I find that the form is full of typos. For example (with the typos given in bold):

  • There are many different defintions of the term ‘open standard’. We’d like your feedback on our proposed definition.
  • Usabilty  (there are multiple occurrences of this typo)
  • coding of continous-tone still images (there are multiple occurrences of this typo)
  • Data defintion – Government Data Standards Catalogue (there are multiple occurrences of this typo)
  • Ontology-based inforamtion exchange (e.g. OWL)
  • Persistient identifier (e.g. XRI) (there are multiple occurrences of this typo)
  • Digital Object Indentifier (DOI)    (there are multiple occurrences of this typo)
  • HyperText Tranfer Protocol (HTTP)  (there are multiple occurrences of this typo)
  • Authetication (there are multiple occurrences of this typo)
  • Elecrtical standards (e.g. ISO/IEC 7816-10)
  • Terminal infrastrucure standards (there are multiple occurrences of this typo)

Does this matter if the meaning is obvious? For a conversational email message or blog post perhaps not, but for a formal process for gathering information it is of some concern. This is particularly true when there may be particular standards which could be mis-identified owing to typographical errors. So although I spotted the errors listed above (initially when reading the document and subsequently by putting the document through a spell-checker) I have no idea whether the following examples contain errors:

  • ISO/IEC 7816-15: 2004/Cor 1: 2004
  • Contact cards – Tactile identifiers BS EN 1332-2 Identification card systems – Man-machine interface Part 2: Dimensions and location of a tactile identifier for ID-1 cards

It should also be noted that the survey form itself contains flaws. As illustrated below, although the form repeatedly invites respondents to “suggest other standards within this category that are not listed. Start a new line for each”, in reality it is not possible to enter more than a single line.

Glyn Moody felt that there was a “ray of hope” in the Government’s apparently enlightened approach to open standards. I fear he is mistaken – sadly I see nothing to indicate that the government has an understanding of the implications of any decisions that may be taken as a result of this flawed information-gathering exercise.

Posted in standards | 6 Comments »

New HTML5 Drafts and Other W3C Developments

Posted by Brian Kelly on 13 Apr 2011

 

New HTML5 Drafts

The W3C’s HTML Working Group has recently announced the publication of eight documents:

Last Call Working Drafts for RDFa Core 1.1 and XHTML+RDFa 1.1

Back in August 2010 in a post entitled New W3C Document Standards for XHTML and RDFa I described the latest release of the RDFa Core 1.1 and XHTML+RDFa 1.1 draft documents. The RDFa Working Group has now published Last Call Working Drafts of these documents: RDFa Core 1.1 and XHTML+RDFa 1.1.

New Provenance Working Group

The W3C has also recently launched a new Provenance Working Group whose mission is “to support the widespread publication and use of provenance information of Web documents, data, and resources“. The Working Group will publish W3C Recommendations that define a language for exchanging provenance information among applications. This is an area of work which is likely to be of interest to those involved in digital library development work – and it is interesting to see that a workshop on Understanding Provenance and Linked Open Data was held recently at the University of Edinburgh.

Emotion Markup Language

When I first read of the Multimodal Interaction (MMI) Working Group‘s announcement of the Last Call Working Draft of Emotion Markup Language (EmotionML) 1.0, I checked to see that it hadn’t been published on 1 April! It seems that “As the web is becoming ubiquitous, interactive, and multimodal, technology needs to deal increasingly with human factors, including emotions“.

The EmotionML Language allows various vocabularies to be used such as:

The six terms proposed by Paul Ekman (Ekman, 1972, p. 251-252) as basic emotions with universal facial expressions — emotions that are recognized and produced in all human cultures: anger; disgust; fear; happiness; sadness and surprise.

The 17 terms found in a study by Cowie et al (Cowie et al., 1999) who investigated emotions that frequently occur in everyday life: affectionate; afraid; amused; angry; bored; confident; content; disappointed; excited; happy; interested; loving; pleased; relaxed; sad; satisfied and worried.

Mehrabian’s proposal of a three-dimensional description of emotion in terms of Pleasure, Arousal, and Dominance.
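To give a flavour of the markup, here is a minimal hand-written EmotionML fragment using Ekman’s ‘big six’ vocabulary (the namespace and attribute names are drawn from the Last Call draft and may change before the specification is finalised):

<emotionml xmlns="http://www.w3.org/2009/10/emotionml"
           category-set="http://www.w3.org/TR/emotion-voc/xml#big6">
  <!-- annotate content with a recognised emotion and a confidence score -->
  <emotion>
    <category name="happiness" confidence="0.8"/>
  </emotion>
</emotionml>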

Posted in HTML, standards, W3C | 1 Comment »

RDFa and WordPress

Posted by Brian Kelly on 5 Apr 2011

RDFa: A Brief Recap

RDFa (Resource Description Framework in attributes) is a W3C Recommendation that adds a set of attribute-level extensions to XHTML for embedding rich metadata within Web documents.

As described in the Wikipedia entry for RDFa five “principles of interoperable metadata” are met by RDFa:

  1. Publisher Independence: each site can use its own standards
  2. Data Reuse: data is not duplicated. Separate XML and HTML sections are not required for the same content.
  3. Self Containment: the HTML and the RDF are separated
  4. Schema Modularity: the attributes are reusable
  5. Evolvability: additional fields can be added and XML transforms can extract the semantics of the data from an XHTML file

Additionally RDFa may benefit Web accessibility as more information is available to assistive technology.

But how does one go about evaluating the potential of RDFa? Last year I wrote a post on Experiments With RDFa which was based on manual inclusion of RDFa markup in a Web page. Although this highlighted a number of issues, including the validity of pages containing RDFa, this is not a scalable approach for significant deployment of RDFa. What is needed is a content management system which can be used to deploy RDFa on existing content in order to evaluate its potential and understand deployment issues.

The Potential for WordPress

WordPress as a Blog Platform and a CMS

WordPress provides a blog platform which can be used for large-scale management of blogs which are hosted at wordpress.com. In addition the software is available under an open source licence and can be deployed within an institution. There is increasing interest in use of WordPress within the higher education sector, as can be seen from the recent launch of a WORDPRESS JISCMail list (which is aimed primarily at the UK HE sector), with some further examples of interest in use of WordPress being available on the University Web Developers group.

A recent discussion on the WORDPRESS JISCMail list addressed the potential of WordPress as a CMS rather than a blogging platform. Such uses were also outlined recently in a post on the College Web Editor blog which suggested reasons why WordPress can be the right CMS for #highered websites. In light of the growing interest in use of WordPress as a CMS it would seem that this platform could have a role to play in the deployment of new HTML developments such as RDFa.

The wp-RDFa WordPress Plugin

A strength of WordPress is its extensible architecture which allows plugins to be developed by third parties and deployed on local installations of the software. One such development is the wp-RDFa plugin which supports FOAF and Dublin Core metadata. The plugin uses Dublin Core markup to tag posts with the title, creator and date elements. In addition wp-RDFa can be configured to make use of FOAF to “relate your personal information to your blog and to relate other users of your blog to you building up a semantic map of your relationships in the online world“.

Initial Experiments With wp-RDFa

Dublin Core Metadata

UKOLN’s Cultural Heritage blog has been closed recently, with no new posts planned for publication.  The blog will however continue to be hosted and can provide a test bed for experiments such as use of the wp-RDFa plugin.

In an initial experiment we found that although the titles of each blog post were described using Dublin Core metadata, the title was replicated in the blog display. Since this was not acceptable we disabled the plugin and repeated the experiment on a private backup copy of the UK Web Focus blog. This time there were no changes in how the blog posts were displayed.

The underlying HTML code made use of the Dublin Core namespace:

<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/">

with each individual blog post containing the title and publication date provided as RDFa:

<h3 class="storytitle">
<span property="dc:date" content="2010-04-27 08:17:53" resource="http://blogs.ukoln.ac.uk/xxxxx/2010/04/27/workshop-on-engagement-impact-value/" />
<a href="http://blogs.ukoln.ac.uk/xxxxx/2010/04/27/workshop-on-engagement-impact-value/"><span rel="http://blogs.ukoln.ac.uk/xxxxx/2010/04/27/workshop-on-engagement-impact-value/" property="dc:title" resource="http://blogs.ukoln.ac.uk/xxxxx/2010/04/27/workshop-on-engagement-impact-value/">Workshop on Engagement, Impact, Value</span></a></h3>

It therefore does appear that the plugin can be deployed on local WordPress installations in order to provide richer semantic markup for existing content. I suspect that the problem with the display in the original experiment may be due to an incompatibility with the theme which is being used (Andreas09). I have reported this problem to the developer of the wp-RDFa plugin.

FOAF (Friend-of-a-Friend)

I had not expected an RDFa plugin to provide support for FOAF, the Friend-of-a-Friend vocabulary. However since my work with FOAF dates back to at least 2004 I had an interest in seeing how it might be used in the context of a blog.

I had expected that information about the blog authors and commenters would be displayed in some way using an RDFa viewer such as the Firefox Operator plugin. However nothing seemed to be displayed using this plugin. In addition, use of the RDFa Viewer and the RDFa Developer plugin also failed to detect FOAF markup embedded as RDFa. I subsequently found that the FOAF information was provided as an external file. Use of the FOAF Explorer service provides a display of the FOAF information which has been created by the plugin.

What surprised me with the initial display of the FOAF content was the list of names which I did not recognise.  It seems that these are authors and contributors to a variety of other blogs hosted on UKOLN’s WordPress MU (multi-user) server. I wonder whether the plugin was written for a previous version of WordPress, for which there was one blog per installation? In any case a decision has been made to provide access to a FOAF resource which contains details of the blog authors only, as illustrated.
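For readers unfamiliar with the format, a FOAF file is a small RDF/XML document; a hand-written sketch of the general shape (the names and URLs here are illustrative rather than the plugin’s actual output) is:

<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:foaf="http://xmlns.com/foaf/0.1/">
  <!-- one foaf:Person entry per blog author -->
  <foaf:Person>
    <foaf:name>Brian Kelly</foaf:name>
    <foaf:weblog rdf:resource="http://example.org/blog/"/>
    <foaf:knows>
      <foaf:Person><foaf:name>A. N. Other</foaf:name></foaf:Person>
    </foaf:knows>
  </foaf:Person>
</rdf:RDF>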

Emerging Issues

A post on Microformats and RDFa deployment across the Web recently surveyed take-up of RDFa based on an analysis of 12 billion web pages indexed by Yahoo! Search and shows that we are seeing a growth in the take-up of semantic markup in Web pages. As CMS systems (such as Drupal 7, which supports RDFa ‘out of the box’ – link updated in light of comment) begin to provide RDFa support we might expect to see a sharp growth in Web pages which provide content which can be processed by software as well as being read by humans. For those institutions which host a local WordPress installation it appears that it is now possible to begin exploring use of RDFa. As described in a post by Mark Birkbeck on RDFa and SEO, an important role for RDFa will be to provide improvements to searching. But in addition the ability to use wp-RDFa to create FOAF files makes me wonder whether this approach might be useful in describing relationships between contributors to blogs and perhaps provide the hooks to facilitate data-mining of the blogosphere.

It would be a mistake, however, to focus on one single tool for creating RDFa markup. On the WORDPRESS JISCMail list Pat Lockley mentioned that he is also developing an RDFa plugin for WordPress and invited feedback on further developments. Here are some of my thoughts:

  • There is a need for a clear understanding of how the semantic markup will be applied and the use cases it aims to address.
  • There will also be a need to understand how such semantic markup would be used in non-blogging uses of WordPress, where the notions of a blog post, blog author and blog commenters may not apply.
  • There will be a need to ensure that different plugins which create RDFa markup are interoperable, i.e. if a plugin is replaced by an alternative, applications which process the RDFa should give consistent results.
  • Consideration should be given to privacy implications of exposing personal data (in particular) in semantic markup.

Is anyone making use of RDFa in WordPress who has experiences to share?  And are there any further suggestions which can be provided for those who are involved in related development work?

Posted in standards | Tagged: , | 9 Comments »

UK Government Survey on Open Standards: But What is an ‘Open Standard’?

Posted by Brian Kelly on 7 Mar 2011

UK Government’s Open Standards Survey

I was alerted to the UK Government’s Open Standards Survey by Adam Cooper of JISC CETIS, who has already encouraged readers of his blog to participate in the survey. I’ve skimmed through the questions but haven’t yet completed the survey. What struck me, though, was the draft definition of the term “open standard” as proposed by the UK Government.

Respondents are invited to give comments to the following five conditions:

  1. Open standards are standards which result from and are maintained through an open, independent process
  2. Open standards are standards which are approved by a recognised specification or standardisation organisation, for example W3C or ISO or equivalent. (N.B. The specification/standardisation must be compliant with Regulation 9 of the Public Contracts Regulations 2006. This regulation makes it clear that technical specifications/standards cannot simply be national standards but must also include/recognise European standards)
  3. Open standards are standards which are thoroughly documented and publicly available at zero or low cost
  4. Open standards are standards which have intellectual property made irrevocably available on a royalty free basis
  5. Open standards are standards which as a whole can be implemented and shared under different development approaches and on a number of platforms

I think the survey was wise to begin by being honest about the difficulties in defining an ‘open standard’ and inviting feedback on its proposed set of conditions. The survey follows on from work which has been carried out by UKOLN, JISC CETIS and JISC OSS Watch with our shared interests in helping the sector to exploit the potential of open standards. I thought it would be useful to revisit our work before I completed the survey.

Previous Work in Describing an ‘Open Standard’

The term “open standard” is somewhat ambiguous and open to different interpretations. In a paper entitled “Openness in Higher Education: Open Source, Open Standards, Open Access” (available in PDF, MS Word and HTML formats) Scott Wilson (CETIS), Randy Metcalfe (at the time at JISC OSS Watch) and myself pointed out that:

There are many complex issues involved when selecting and encouraging use of open standards. Firstly there are disagreements over the definition of open standards. For example Java, Flash and PDF are considered by some to be open standards, although they are, in fact, owned by Sun, Macromedia and Adobe, respectively, who, despite documenting the formats and perhaps having open processes for the evolution of the formats, still have the rights to change the licence conditions governing their use (perhaps due to changes in the business environment, company takeovers, etc.)

It should be added that this paper was written in 2007. Since then PDF has become an ISO standard, so we could add to the complexities the fact that proprietary formats can become standardised.

In a UKOLN QA Focus briefing paper we tried to describe characteristics shared by open standards, which had similarities to the approaches proposed in the UK Government survey:

  • An open standards-making process
  • Documentation freely available on the Web
  • Use of the standard is uninhibited by licensing or patenting issues
  • Standard ratified by recognised standards body

It should be noted that we described these as ‘characteristics‘ of an open standard rather than mandatory requirements since we were aware that the second point, for example, would rule out standards produced by many standardisation bodies such as BSI and ISO.

Responding to the Survey

I’d like to share my thoughts prior to completing the survey.

  1. Open standards are standards which result from and are maintained through an open, independent process.
     I would support this condition. It should be noted that this means that a standard which is owned by a vendor cannot be regarded as an open standard even if the standard is published. This means, for example, that Microsoft’s RTF format is not an open standard and PDF was not an open standard until ownership was transferred to ISO in 2008. It should be noted that I believe that the US definition of ‘open standards’ does not include such a clause (there were disagreements on this blog over the status of PDF before it became an ISO standard).

  2. Open standards are standards which are approved by a recognised specification or standardisation organisation, for example W3C or ISO or equivalent. (N.B. The specification/standardisation must be compliant with Regulation 9 of the Public Contracts Regulations 2006. This regulation makes it clear that technical specifications/standards cannot simply be national standards but must also include/recognise European standards.)
     I used to hold this view. However I can recall an email discussion with Paul Miller and Andy Powell, when they worked at UKOLN, who argued (and convinced me) that this was an over-simplistic binary division of the world of standards. It should be noted that RSS (in any of its flavours) would not satisfy this condition. The question, then, is whether this is a concern? If the definition of an ‘open standard’ will be used to determine whether a standard should be used by the UK Government then there will be a need to avoid being too rigorous in the definition. My view would be to rule out this condition.

  3. Open standards are standards which are thoroughly documented and publicly available at zero or low cost.
     I would agree on the importance of rigorous documentation for open standards, so that ambiguities and inconsistencies are avoided. This clause is, however, ambiguous itself – what is ‘low cost’ documentation? Nevertheless I would be happy to see this condition included.

  4. Open standards are standards which have intellectual property made irrevocably available on a royalty free basis.
     This is desirable, but what happens if it is not possible to negotiate royalty-free licences? This is particularly true for video formats. If the government uses this as a mandatory condition for open standards and subsequently requires services to make use of open standards, might this result in a poorer quality environment for the end user? From an ideological position I would like to support this condition but in reality I feel that there needs to be more flexibility – there is a danger that if open standards are mandated this could mean that Government departments would be barred from making use of popular services – such as YouTube and iTunes – which many people find helpful in gaining simple access to information of interest. I am therefore rather uncertain as to whether this should be a required condition for the definition of an open standard. It is worth noting, incidentally, that the W3C have similarly avoided grasping this particular nettle in the HTML5 standardisation work, with no specific video codec being mandated as part of the standard.

  5. Open standards are standards which as a whole can be implemented and shared under different development approaches and on a number of platforms.
     This has always been a view I have held.

The contentious issue seems to be “Open standards are standards which have intellectual property made irrevocably available on a royalty free basis“. I suspect people will argue strongly that this condition is essential. For me, though, we are revisiting Martin Weller’s “Cato versus Cicero” argument. Should we be taking a hardline stance in order to achieve a desired goal or do we need to make compromises in order to accommodate complexities and the conflicting needs of various stakeholders?

Posted in standards | 4 Comments »

Standards for Web Applications on Mobile Devices: the (Re)birth of SVG?

Posted by Brian Kelly on 1 Mar 2011

The W3C have recently published a document entitled “Standards for Web Applications on Mobile: February 2011 current state and roadmap“. The document, which describes work carried out by the EU-funded Mobile Web Applications project, begins:

Web technologies have become powerful enough that they are used to build full-featured applications; this has been true for many years in the desktop and laptop computer realm, but is increasingly so on mobile devices as well.

This document summarizes the various technologies developed in W3C that increases the power of Web applications, and how they apply more specifically to the mobile context, as of February 2011.

The document continues with a warning:

This document is the first version of this overview of mobile Web applications technologies, and represents a best-effort of his author; the data in this report have not received wide-review and should be used with caution

The first area described in this document is Graphics and, since the first standard mentioned is SVG, the note of caution needs to be borne in mind. As discussed in a post published in November 2008 on “Why Did SMIL and SVG Fail?”, SVG (together with SMIL) failed to live up to initial expectations. The post outlined some reasons for this and in the comments there were suggestions that the standard hadn’t failed as it is now supported in most widely-used browsers, with the notable exception of Internet Explorer. In January 2010 I asked “Will The SVG Standard Come Back to Life?” following the announcement that “Microsoft Joins W3C SVG Working Group” and an expectation that IE9 would provide support for SVG. This was subsequently confirmed in a post with the unambiguous title “SVG in IE9 Roadmap” published on the IE9 blog.

The signs in the desktop browser environment are looking positive for support for SVG. But it may be the mobile environment in which SVG really takes off since, in the desktop Web environment, we have over 15 years of experience of using HTML and CSS to provide user interfaces. As described in the W3C Roadmap:

SVG, Scalable Vector Graphics, provides an XML-based markup language to describe two-dimensions vectorial graphics. Since these graphics are described as a set of geometric shapes, they can be zoomed at the user request, which makes them well-suited to create graphics on mobile devices where screen space is limited. They can also be easily animated, enabling the creation of very advanced and slick user interfaces.
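By way of illustration, a small hand-written SVG fragment (my own example, not taken from the W3C document) describes its content as geometric shapes, which a renderer can redraw crisply at any zoom level:

<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 200 100">
  <!-- shapes are described geometrically, so they scale without loss of quality -->
  <rect x="10" y="10" width="180" height="80" rx="15" fill="#336699"/>
  <text x="100" y="62" text-anchor="middle" font-size="24" fill="#ffffff">Play</text>
</svg>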

But will SVG’s strength in the mobile environment lead to a fragmented Web in which mobile users engage with an SVG environment whilst desktop users continue to access HTML resources? I can recall suggestions being made about 10 years ago which pointed out that since SVG is the richer environment it could be used as a generic environment. Might we see that happening? After all, as can be seen (if you’re using a browser which supports SVG) from examples such as the Solitaire game (linked from the Startpagina Web site which provides access to various examples of SVG use) it is possible to provide an SVG gaming environment. Might we see Web sites like this being developed?

Perhaps rather than the question “Has SVG failed?” we may soon need to start asking “How should we use SVG?”

Posted in standards, W3C | Tagged: | 1 Comment »

HTML5 Standardisation Last Call – May 2011

Posted by Brian Kelly on 15 Feb 2011

I recently described the confusion over the standardisation of HTML5, with the WhatWG announcing that they are renaming HTML5 as ‘HTML’ and that it will be a ‘Living Standard’ which will continually evolve as browser vendors agree on new features to implement in the language.

It now seems that the W3C are responding to accusations that they are a slow-moving standardisation body with an announcement that “W3C Confirms May 2011 for HTML5 Last Call, Targets 2014 for HTML5 Standard“. In the press release Jeff Jaffe, W3C CEO, states that:

Even as innovation continues, advancing HTML5 to Recommendation provides the entire Web ecosystem with a stable, tested, interoperable standard

I welcome this announcement as I feel that it helps to address recent uncertainties regarding the governance and roadmap for HTML developments. The onus is now on institutions: there is a clear roadmap for HTML5 development with a stable standard currently being finalised. As providers of institutional Web services, what are your plans for deployment of HTML5?

Posted in standards, W3C | Tagged: | 1 Comment »

The W3C’s RDF and Other Working Groups

Posted by Brian Kelly on 14 Feb 2011

The W3C have recently announced the launch of the RDF Working Group.  As described in the RDF Working Group Charter:

The mission of the RDF Working Group, part of the Semantic Web Activity, is to update the 2004 version of the Resource Description Framework (RDF) Recommendation. The scope of work is to extend RDF to include some of the features that the community has identified as both desirable and important for interoperability based on experience with the 2004 version of the standard, but without having a negative effect on existing deployment efforts.

Membership of a W3C working group comprises W3C staff as well as representatives of W3C member organisations, which include the JISC. In addition it is also possible to contact working group chairs and W3C team members in order to explore the possibility of participating as an invited expert.

Note that a list of W3C Working Groups, Interest Groups, Incubator Groups and Coordination Groups is provided on the W3C Web site. The Working Groups are typically responsible for the development of new W3C standards (known as ‘Recommendations’) or the maintenance of existing Recommendations. There are quite a number of working groups, including working groups for well-known W3C areas of work such as HTML, CSS and WAI as well as newer or more specialised groups covering areas including Geolocation, SPARQL, RDF and RDFa.

W3C Interest Groups which may be of interest include Semantic Web, eGovernment and WAI. Similarly Incubator Groups which may be of interest to readers of this blog include the Federated Social Web, Library Linked Data, the Open Web Education Alliance and the WebID groups.

The W3C Process Document provides details of the working practices for Working Groups, Interest Groups and Incubator Groups. If anyone feels they would like to contribute to such groups I suggest you read the Process Document in order to understand the level of commitment which may be expected and, if you feel you can contribute to the work of a group, feel free to contact me.

Posted in standards, W3C | Leave a Comment »

Open Source, Open Standards, Open Access – A Problem For Higher Education?

Posted by Brian Kelly on 11 Feb 2011

Over on the JISC OSS Watch blog Ross Gardler has highlighted an area of concern from the recently published HEFCE Review of JISC. Ross states that:

… there is one paragraph that I am, quite frankly, appalled to see in this report:

“JISC’s promotion of the open agenda (open access, open resources, open source and open standards) is more controversial. This area alone is addressed by 24 programmes, 119 projects and five services. [7] A number of institutions are enthusiastic about this, but perceive an anti-publisher bias and note the importance of working in partnership with the successful UK publishing industry. Publishers find the JISC stance problematic.”

In his post, which is titled “Is UK education policy being dictated by publishers?“, Ross goes on to summarise the benefits which can be gained from the higher education community through use of and engagement in the development of open source software.

The wording in the JISC review – open agenda (open access, open resources, open source and open standards) – reminded me of a paper written by myself (based at UKOLN), Scott Wilson (of JISC CETIS) and Randy Metcalfe (Ross Gardler’s predecessor as manager of the JISC OSS Watch service) which was entitled “Openness in Higher Education: Open Source, Open Standards, Open Access” and built on previous papers in this area.

Now if the paper had provided a simplistic view of openness I think criticism that the paper was promoting an ideological position would have been justified. But whilst the paper highlighted potential benefits for the higher education community to be gained from use of open source software, open standards and open content, it was honest about the shortcomings. Rather than, to use the words of the review document, the “promotion of an open agenda”, the paper argued that institutions should be looking to gain the benefits for themselves, rather than adopting open source software, open standards or open content per se.

Perhaps such distinctions aren’t being appreciated by the wider community and openness is being seen as an ideology and used as a stick to beat commercial providers such as publishers. This approach quite clearly isn’t being taken by the co-authors of our paper. Indeed, as can be seen from yesterday’s blog post on the failures of W3C’s PICS standard, the failures of open standards are being identified in order that we can learn from such failures and avoid repeating the mistakes in future.

A few days ago I published a post in which Feedback [was] Invited on Draft Copy of Briefing Paper on Selection and Use of Open Standards – if open standards can prove problematic, advice is needed on approaches for the selection of open standards which will minimise the risks of choosing an open standard which fails to deliver the expected benefits.

But I am sure that there is a need for continued promotion of the sophisticated approaches to the exploitation of openness which the JISC Review seems to be unaware of. A poster summarising the approaches is being prepared for the JISC 2011 conference and will be displayed on a stand shared by UKOLN, CETIS and JISC OSS Watch. A draft version of the poster is embedded below (and hosted on Scribd). We feel this provides a pragmatic approach which will help to provide benefits across the HE sector and avoids accusations of taking an anti-publisher approach.

Your comments on these approaches are welcomed.

Posted in standards | Tagged: | 5 Comments »

Remember PICS? Learning From Standards Which Fail

Posted by Brian Kelly on 10 Feb 2011

A Message to the PICS-interest Mailing List

Yesterday I received an email message on the W3C’s PICS-interest group’s mailing list from Eduardo Lima Martinez who asked:

I’m building a website for people over 16 years of age. This not is a porn site, but shows raw images (“curcus pretty girls doing ugly things”) not suitable for kids.

He went on to ask:

What are the correct PICS labels for this site?. I do not read/write correctly the english language. I do not understand the terms of HTTP headers “Protocol: {…}” and “PICS-Label: (…)” Can you guide me? Can you show me a sample site that has the correct PICS labels?

Leaving aside the rather unsavoury nature of the content, I was surprised to receive this email as I was unaware that I was still subscribed to the PICS-interest list. However looking at the archives for the list it can be seen that there have been only a handful of postings over the past five years or so, several of which are just conference announcements or spam. As seems to be the case for quite a number of mailing lists, this one has fallen into disuse. But the first legitimate posting to the list since April 2009 and the subsequent responses caused me to reflect on the rise and fall of the W3C PICS standard.

Revisiting PICS

PICS, the Platform for Internet Content Selection, was developed in 1996 in response to the proposed US Communications Decency Act (CDA) legislation. As described in encyclopedia.com:

The first version of this amendment, sponsored by Senator James Exon without hearings and with little discussion among committee members, would have made it illegal to make any indecent material available on computer networks“.

In parallel with arguments that such legislation was unconstitutional, the W3C responded by developing a standard which provided a decentralised way of labelling Web resources. It would then be possible to configure client software to block access to resources which are deemed to be offensive or inappropriate for the end user. This software could be managed by a parent for a home computer or by an appropriate body in a school context. There was also an infrastructure to manage the content labelling schemes which complemented the W3C’s technical developments with, as described in the Wikipedia entry, the RSAC being founded in 1994 to provide labelling of video games and, later, the RSACi providing a similar role for online resources. This organisation was closed in 1999 and reformed into the Internet Content Rating Association (ICRA). In 2007 ICRA became part of FOSI (the Family Online Safety Institute) – an organisation which, as described in an email message by Dan Brickley, no longer has any activities in this technology area or support for their older work. As Dan pointed out to Eduardo, “there is no direct modern successor to the RSACi/ICRA PICS work to recommend to you“.

What Are The Lessons?

In 1996 we had a standard (actually a number of W3C Recommendations) which provided a decentralised approach for labelling Internet content. As described above there were international organisations involved in the provision and management of labelling schemes and there were various applications which provided support for the standards, including Internet Explorer, with Microsoft providing a tutorial on how to use PICS.
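For those who have forgotten what a PICS label looked like, labels were typically embedded in a page’s head along the following lines (reconstructed from memory of the RSACi scheme, so the details – and certainly the rating values – are illustrative only):

<meta http-equiv="PICS-Label" content='(PICS-1.1 "http://www.rsac.org/ratingsv01.html"
  l by "webmaster@example.org"
  r (n 0 s 0 v 0 l 0))'>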

But what went wrong? Why did this standard and accompanying infrastructure fail to be sustainable?  Is there no longer a need to be able to manage access to pornographic, violent and related resources? Do we have a better standards-based solution?

I think it is clear that there is still a need for a solution to the problems which PICS sought to address – and the various filtering solutions which are found in schools do not provide the flexibility of a standards-based approach such as that provided by PICS.

But perhaps the cost of managing PICS labels was too expensive – after all, metadata is expensive to create and manage. Or perhaps PICS was developed too soon in W3C’s life, before XML provided a generalised language for developing metadata applications? But would replacing PICS’s use of “{” by XML’s “<” and “>” and the accompanying portfolio of XML standards really have made a significant difference?

Dan Brickley pointed out that PICS is largely obsolete technology and that its core functionality has been rebuilt around RDF:

1. Roughly PICS label schemes are now RDF Schemas (or more powerfully, OWL Ontologies)
2. PICS Label Bureaus are replaced by Web services that speak W3C’s SPARQL language for querying RDF – see http://www.w3.org/TR/rdf-sparql-query/
3. PICS’ ability to make labels for all pages sharing a common URL pattern is addressed by POWDER – see http://www.w3.org/2007/powder/
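To make the second of these points concrete, a client in this newer world might ask a labelling service for a rating with a SPARQL query along the following lines (a hypothetical sketch – the label vocabulary here is invented for illustration):

PREFIX label: <http://labels.example.org/vocab#>
SELECT ?rating
WHERE {
  # what rating does the service hold for this page?
  <http://example.org/page.html> label:violenceRating ?rating .
}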

Hmm, should Eduardo be looking at POWDER – a W3C standard which “has superseded PICS as the recommended method for describing Web sites and building applications that act on such descriptions“?

But perhaps this is an area in which open standards are not appropriate. As Phil Archer pointed out in the discussion on the PICS-interest list:

there really isn’t any advantage in adding labels, whether in PICS or POWDER, for child protection purposes. All the filters that people actually use work well without using labels at all. It’s an idea that has long had its day. If interested, see [1, 2]” [Note reference 2 is a PDF file]

I guess the organisations involved in developing the PICS standard, the developers of tools which supported PICS and the organisations which labelled their resources will have failed to see a return on their investment in supporting this open standard. Will it be any different with POWDER, I wonder? What is different this time?

Posted in standards | Tagged: | 2 Comments »

Feedback Invited on Draft Copy of Briefing Paper on Selection and Use of Open Standards

Posted by Brian Kelly on 8 Feb 2011

A draft UKOLN briefing paper on the “Selection and Use of Open Standards” is available for comments before publication. The document is based on previous work led by UKOLN in conjunction with the AHDS in the JISC-funded QA Focus project on the development of quality assurance framework for JISC-funded development projects. Subsequent work with JISC CETIS and JISC OSS Watch and others was described in papers on “A Standards Framework For Digital Library Programmes“, “A Contextual Framework For Standards” and “Openness in Higher Education: Open Source, Open Standards, Open Access” which were presented at the ichim05, WWW 2006 and elPub 2007 conferences respectively. More recently a position paper which described “An Opportunities and Risks Framework For Standards” was presented to a CETIS event on the Future of Interoperability Standards.

The briefing paper omits much of the background and discussion which was included in these papers and instead seeks to provide a more focussed summary of the contextual approaches and the opportunities and risks framework which have been developed to support development activities, especially if new and emerging standards are being considered.

The draft briefing paper is currently available on Scribd and is embedded below.

I am grateful for the feedback on an earlier draft of this paper which I received from colleagues at JISC CETIS. Comments from the wider community are welcomed.

Posted in standards | 3 Comments »

The HTML5 Standardisation Journey Won’t Be Easy

Posted by Brian Kelly on 3 Feb 2011

I recently published a post on Further HTML5 Developments in which I described how the W3C were being supportive of approaches to the promotion of HTML5 and the Open Web Platform. However in a post entitled HTML is the new HTML5, published on 19th January 2011 on the WHATWG blog, Ian Hickson, editor of the HTML5 specification (and a graduate of the University of Bath who now works for Google), announced that “The HTML specification will henceforth just be known as ‘HTML’”. As described in the FAQ it is intended that HTML5 will be a “living standard”:

… standards that are continuously updated as they receive feedback, either from Web designers, browser vendors, tool vendors, or indeed any other interested party. It also means that new features get added to them over time, at a rate intended to keep the specifications a little ahead of the implementations but not so far ahead that the implementations give up.

What this means for the HTML5 marketing activities is unclear. But perhaps more worrying is what this will mean for the formal standardisation process which the W3C has been involved in. Since it seems that new HTML(5) features can be implemented by browser and tool vendors as they see fit, this seems to herald a return to the days of the browser wars, during which Netscape and Microsoft introduced ‘innovative’ features such as the BLINK and MARQUEE tags.

On the W3C’s public-html list Joshue O Connor (a member of the W3C WAI Protocol and Formats Working Group) feels that:

What this move effectively means is that HTML (5) will be implemented in a piecemeal manner, with vendors (browser manufacturers/AT makers etc) cherry picking the parts that they want. … This current move by the WHATWG, will mean that discussions that have been going on about how best to implement accessibility features in HTML 5 could well become redundant, or unfinished or maybe never even implemented at all.

In response Anne van Kesteren of Opera points out that:

Browsers have always implemented standards piecemeal because implementing them completely is simply not doable. I do not think that accepting reality will actually change reality though. That would be kind of weird. We still want to implement the features.

and goes on to add:

Specifications have been in flux forever. The WHATWG HTML standard since 2004. This has not stopped browsers implementing features from it. E.g. Opera shipped Web Forms 2.0 before it was ready and has since made major changes to it. Gecko experimented with storage APIs before they were ready, etc. Specifications do not influence such decisions.

Just over a year ago a CETIS meeting on The Future of Interoperability and Standards in Education explored “the role of informal specification communities in rapidly developing, implementing and testing specifications in an open process before submission to more formal, possibly closed, standards bodies“. But while rapid development, implementation and testing was felt to be valuable, there was a recognition of the continued need for the more formal standardisation process. Perhaps the importance of rapid development which was highlighted at the CETIS event has been demonstrated by the developments centred around HTML5, with the W3C providing snapshots once the implementation and testing of new HTML developments have taken place; but I feel uneasy at these developments. This unease has much to do with the apparent autonomy of browser vendors: I have mentioned comments from employees of Google and Opera who seem to be endorsing this move (how would we feel if it was Microsoft which was challenging the W3C’s standardisation process?). But perhaps we should accept that significant Web developments are no longer being driven by a standards organisation or by grass-roots developments but by the major global players in the market-place? Doesn’t sound good, does it – a twenty-first century return to browser vendors introducing updated versions of BLINK and MARQUEE elements because they’ll know what users want :-(

Posted in HTML, standards, W3C | Tagged: | 3 Comments »

WAI-ARIA 1.0 Candidate Recommendation – Request for Implementation Experiences and Feedback

Posted by Brian Kelly on 2 Feb 2011

W3C announced the publication of WAI-ARIA 1.0 as a W3C Candidate Recommendation on 18th January 2011. A Candidate Recommendation (CR) is a major step in the W3C standards development process which signals that there is broad consensus in the Working Group and among public reviewers on the technical content of the proposed recommendation. The primary purpose of the CR stage is to implement and test WAI-ARIA. If you are interested in helping or have additional comments you are invited to follow the content submission instructions.

WAI-ARIA is a technical specification that defines a way to make Web content and Web applications more accessible to people with disabilities. It especially helps with dynamic content and advanced user interface controls developed with AJAX, HTML, JavaScript and related technologies. For an introduction to the WAI-ARIA suite please see the WAI-ARIA Overview or the WAI-ARIA FAQ.
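As a simple illustration of the markup involved, the following hand-written fragment (my own example, not taken from the specification) uses a WAI-ARIA role and state attributes so that a scripted progress bar can be reported meaningfully by a screen reader:

<!-- without ARIA this div is just a styled box; the role and aria-* attributes
     let assistive technology report it as a progress bar at 42% -->
<div role="progressbar" aria-valuemin="0" aria-valuemax="100" aria-valuenow="42">
  Uploading: 42%
</div>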

It does occur to me that in light of the significant development work we are seeing in areas such as repositories, e-learning systems and e-research there may be examples of developments which have enhanced the user interface in ways which improve access for users with disabilities. If you have made use of WAI-ARIA 1.0 techniques in the development of your services then, as mentioned on the W3C blog, W3C WAI would welcome such feedback. Please note that the closing date for comments is 25th February 2011.

Posted in Accessibility, standards, W3C | Leave a Comment »

Further HTML5 Developments

Posted by Brian Kelly on 25 Jan 2011

Updated HTML5 Documents

Back in November 2010 in a post entitled Eight Updated HTML5 Drafts and the ‘Open Web Platform’ I described how the W3C had published draft versions of eight documents related to HTML5.  It seems that W3C staff and members of various HTML5 working groups have been busy over Christmas as the HTML Working Group has published further revised versions of eight documents:

HTML5 Marketing Activities

The significance of the development work on the HTML5 specifications and the importance which the W3C is giving to HTML5 can be seen from the announcement that “W3C Introduces an HTML5 Logo”, which describes this “striking visual identity for the open web platform“.

The page about the logo is full of marketing rhetoric:

Imagination, meet implementation. HTML5 is the cornerstone of the W3C’s open web platform; a framework designed to support innovation and foster the full potential the web has to offer. Heralding this revolutionary collection of tools and standards, the HTML5 identity system provides the visual vocabulary to clearly classify and communicate our collective efforts.

The W3C have also pointed out how the logo is being included on t-shirts, which you can buy for $22.50. The marketing activity continues with encouragement for HTML5 developers to engage in viral marketing:

Tweet your HTML5 logo sightings with the hashtag #html5logo

In addition to Web site owners being able to use this logo on their Web sites and fans of HTML5 being able to wear a T-shirt (“wearware”?), as I learnt from Bruce Lawson’s post “On The HTML5 Logo” users of Firefox and Opera browsers can install a Greasemonkey script or an Opera extension which will display a small HTML5 logo in the top right hand corner of the window for HTML5 pages. I’ve tried this and it works.
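I haven’t examined the source of these extensions, but detecting an HTML5 page from a user script is straightforward, since the HTML5 doctype is simply <!doctype html> with no public or system identifier. A minimal sketch of the approach (my own illustration, not the actual script mentioned above) might be:

// ==UserScript==
// @name  HTML5 doctype badge (illustrative sketch)
// ==/UserScript==
// The HTML5 doctype has the name "html" and no public or system identifier.
var dt = document.doctype;
if (dt && dt.name === 'html' && !dt.publicId && !dt.systemId) {
  var badge = document.createElement('div');
  badge.textContent = 'HTML5';
  badge.style.cssText = 'position:fixed;top:0;right:0;padding:2px 6px;' +
                        'background:#e44d26;color:#fff;z-index:9999;';
  document.body.appendChild(badge);
}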

Such marketing activities are unpopular in some circles, with much of the criticism centred around the FAQ’s original statement that the logo means “a broad set of open web technologies”, which some believe “muddies the waters” of the open web platform. In light of such concerns the W3C have updated the HTML5 Logo FAQ.

I have to say that personally I applaud this initiative.  In the past the commercial sector has taken a lead in popularising Web developments as we saw in the success of the Web 2.0 meme – it’s good, I feel, that the W3C are taking a high profile in the marketing of HTML5 developments. I also feel that this is indicative of the importance of HTML5, which, judging from examples of HTML5’s potential which I have described in a number of recent posts, will be of more significance than the moves from HTML 3.2 to HTML 4 and HTML 4 to XHTML 1.

Spotting HTML5 Pages – Including the Google Home Page

Use of the Opera extension which embeds a small version of the HTML5 icon in the top right hand corner of the browser display is shown (click to see full-size version).

Whilst searching for an HTML5 Web site to use for this example I discovered that the Google search page now uses HTML5, with the following HTML5 declaration included at the top of the page:

<!doctype html>

I had previously thought that Google was very conservative in its use of HTML as, in light of its popularity, the page had to work on a huge range of browsers. Note, though, that on using the W3C’s HTML validator, which includes experimental support for HTML5, I found that there were still HTML errors, many of which were due to unescaped ‘&’ characters. Some time ago it was suggested that the reason Google wasn’t implementing the simple changes needed to ensure that their home page validated was to minimise bandwidth usage – which will be very important for a globally popular site such as Google’s which, despite losing the top slot to Facebook in the US last year, is still pretty popular :-). Hmm, if there are around 90 million Google users per day I wonder how much bandwidth is saved by using a raw & rather than the escaped &amp; in its home page and search results?
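To illustrate the point about escaping: in HTML a raw ‘&’ in an attribute value should be written as the five-character entity &amp;, so every escaped ampersand in a URL costs four extra bytes. A hypothetical link of the kind the validator complains about:

<!-- invalid HTML: raw ampersand in the query string -->
<a href="/search?q=html5&hl=en">HTML5 results</a>
<!-- valid HTML: the ampersand escaped as an entity, four bytes larger -->
<a href="/search?q=html5&amp;hl=en">HTML5 results</a>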

Posted in HTML, standards | Tagged: | Leave a Comment »

Three CSS Publications Including Last call for CSS 2.1

Posted by Brian Kelly on 5 Jan 2011

The W3C have recently published three CSS publications: a last call for comments on the CSS 2.1 specification and first drafts of Snapshot 2010 and Writing Modes Level 3.

These three documents will be of interest to different groups. The CSS 2.1 document will be of interest to those who wish to see the final documentation of approaches which have been deployed, in order to ensure that widely implemented features are thoroughly and unambiguously documented (“CSS 2.1 corrects a few errors in CSS2 and adds a few highly requested features which have already been widely implemented. But most of all CSS 2.1 represents a “snapshot” of CSS usage: it consists of all CSS features that are implemented interoperably at the date of publication“).

The CSS Snapshot 2010 document is a brief document which collects into one definition the specifications that together form the current state of Cascading Style Sheets (CSS). This will be of interest to those who like to be able to see the big picture and the relationships and dependencies.

In contrast the CSS Writing Modes Module Level 3 specification is likely to be of interest to those with specific interests in bidirectional and vertical text.

Last Call comments are welcome until 7 January 2011.

Posted in standards | Tagged: | 1 Comment »

W3C Standards for Contacts and Calendars

Posted by Brian Kelly on 27 Dec 2010

I have to admit that I thought that standards for contacts and calendar entries had been established ages ago. However the W3C’s Device APIs and Policy Working Group has been set up in order to “create client-side APIs that enable the development of Web Applications and Web Widgets that interact with devices services such as Calendar, Contacts, Camera, etc.

A working draft of the Contacts API was published on 9 December 2010. As described in the W3C Newsletter:

This specification defines the concept of a user’s unified address book – where address book data may be sourced from a plurality of sources – both online and locally. This specification then defines the interfaces on which third party applications can access a user’s unified address book, with explicit user permission and filtering. The focus of this data sharing is on making the user aware of the data that they will share and putting them at the center of the data sharing process; free to select both the extent to which they share their address book information and the ability to restrict which pieces of information related to which contact gets shared.
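The drafts define a JavaScript interface for this. The entry point and method names varied between drafts (early drafts exposed navigator.service.contacts; later ones simply navigator.contacts), so the following is my own sketch of the general shape rather than a verbatim extract from the specification:

// A sketch of finding contacts under the draft Contacts API.
// Interface names varied between drafts; treat this as illustrative only.
navigator.service.contacts.find(
  ['displayName', 'emails'],           // the fields the application asks to read
  function (contacts) {                // success: runs only with explicit user permission
    for (var i = 0; i < contacts.length; i++) {
      console.log(contacts[i].displayName);
    }
  },
  function (error) {                   // error: e.g. the user refused to share data
    console.log('Contacts lookup failed: ' + error.code);
  },
  { filter: 'Brian', multiple: true }  // options: filter results, allow multiple matches
);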

Other work in the area includes the following draft specification:

Note that the URIs for the latest versions of a number of these draft specifications seem misleading. For example the URI for the Calendar API is stated as being http://www.w3.org/TR/calendar-api/ though this link is currently broken, with the resource actually hosted on the W3C’s development server at http://dev.w3.org/2009/dap/calendar/. Similarly the URL for The Application Launcher API is stated as being http://www.w3.org/TR/app-launcher/ though this link is currently broken, with the resource actually hosted on the W3C’s development server at http://dev.w3.org/2009/dap/app-launcher/. This may be because these are editor’s drafts and the URIs for the published versions are place-holders – but for me this is an error, and one that is surprising for the W3C, which places great emphasis on the importance of functioning URIs.

Posted in standards | 3 Comments »