UK Web Focus

Innovation and best practices for the Web

Archive for May, 2010

Bungee Jumping and Papers in the ACM Digital Library

Posted by Brian Kelly on 28 May 2010

Papers in the ACM Digital Library

My paper on “Developing countries; developing experiences: approaches to accessibility for the real world“, which I summarised in a recent blog post, is now available from the ACM Digital Library. If you wish you can purchase the PDF from the ACM – but you may prefer to download the author’s copy from the UKOLN Web site :-)  – from which you can also view the HTML version of the paper (a format not available in the ACM Digital Library).

When I visited the ACM Digital Library I noticed that I could browse other papers I had written which are available there, including “Accessibility 2.0: people, policies and processes“, “Contextual web accessibility – maximizing the benefit of accessibility guidelines“, “One world, one web … but great diversity“, “Forcing standardization or accommodating diversity?: a framework for applying the WCAG in the real world“, “A quality framework for web site quality: user satisfaction and quality assurance“, “Personalization and accessibility: integration of library and web approaches” and “Archiving web site resources: a records management view“.

Usage statistics for "Brian Kelly" in ACM Digital Library

There were also eight papers by Brian W Kelly, Bryan Kelly and another Brian Kelly. These false hits will have skewed the bibliometrics provided on the ACM Digital Library Web site (and also illustrated). However the most recent paper by another Brian Kelly was published back in 2002 and there only seem to be four downloads from other Brian Kellys – so most of the statistics shown do seem to relate to my work.

In addition to the papers listed above (which I’ve written during the past five years while based at UKOLN), there was also a paper on “Becoming an information provider on the World Wide Web” which was published by JENC in 1994 in the Selected papers of the annual conference on Internet Society/5th joint European networking conference.

Bungee Jumping

Seeing the reference to this paper brought back memories. It was the first peer-reviewed paper I ever wrote, and the INET’94/JENC5 conference, held in Prague in 1994, was the first international conference I ever attended. I can also remember when I decided to submit a paper to the conference. I was on holiday staying at Victoria Falls in January 1994, mulling over whether to submit a paper (at the time I was an information officer at Leeds University and had given training courses and a few low-key presentations – but was nervous at the thought of speaking at an international conference).

Bungee jumping off the Victoria Falls Bridge in Zambia/Zimbabwe (from Wikipedia)

In the end, slightly disappointed by a half-day of white-water rafting down the mighty Zambezi River (it was the rainy season so we sailed over many of the rapids), I decided to bungee jump off the Victoria Falls bridge (I remember being told it was the world’s highest commercial bungee jump). “If I can jump off the Victoria Falls bridge, I can do anything” I told myself, “including presenting a paper at an international conference“.

So I paid the money (£60?) and went to the bridge. There were two of us who had booked the jump and, while we were waiting for the equipment to be set up, we asked what the jump was like. “The jump is easy” we were told. “But once you are lowered to the bottom, the scary part starts. You have to climb up the gorge – which is very steep. And once you reach the bridge and have to climb up the bridge it gets even more scary – although the bridge is solid, your brain doesn’t believe the evidence. Many people are in tears when they reach the top“.

After we had been told this, the organisers came back with some news – no jumping for the day. It had been raining and it was too dangerous to climb up the gorge. I have to admit being secretly pleased – I’d made the commitment to make the jump and, for me, that was good enough. And when I got back to work I wrote the paper and was pleased when it was accepted. The paper was presented in the Concert Hall in the Palace of Culture in Prague. When I read details about the venue I discovered that the main hall held up to 6,000 people, but the Concert Hall only held about 1,000 people and it wasn’t full when I gave the paper. My aborted jump from the Victoria Falls bridge stood me in good stead then.

I have tracked down a copy of the paper (Adobe Postscript format). I have also discovered a brief report on my presentation by George Brett. Little did I know while standing on the Victoria Falls bridge waiting for the rain to ease off what my decision to write a paper about the Web (or WWW as it was then referred to) would lead to!

Posted in Papers | Leave a Comment »

Saving Money on Accommodation

Posted by Brian Kelly on 26 May 2010

When we heard the news that  “George Osborne unveils £6.25bn spending cuts” we also learnt that “Ministers will be expected to walk or take public transport where possible“. Such austerity measures  will apply to the rest of us too.  But what IT services can we use in order to help us to save money?

Something I’ve been doing for a couple of years is to use services such as Laterooms to get discounts on room rates when I am staying away from home. I started while on holiday in Malaysia and now fairly regularly use the service to book hotels when I’m away on business.

I recently looked at details of my hotel bookings over the past six months or so. A selection of the hotels I’ve used over that period (including some holiday bookings) is shown below.

Savings from use of Laterooms.com

It’s pleasing to see the significant amount of savings that have been made.  I do wonder whether we will be asked not only to make savings but also to provide evidence of the savings which have been made (just as last year I suggested that it might be useful to keep a record of the carbon cost of business trips in order to be able to demonstrate reductions over time).  But what other approaches can we take in order to save money whilst still maintaining the level of our services?

Posted in Finances | 8 Comments »

“Scrapping Flash and Betting the Company on HTML5″

Posted by Brian Kelly on 24 May 2010

Scrapping Flash

We are “Scrapping Flash and betting the company on HTML5” says the CTO of Scribd (the document sharing service) according to an article published recently in TechCrunch. But this doesn’t seem to be as much of a risk as the headline implies as, according to the article, “Adobe’s much-beleaguered Flash is about to take another hit and online documents are finally going to join the Web on a more equal footing“. As the article goes on to say, “Scribd is joining a chorus of companies from Apple to Microsoft in siding with HTML5 over Flash. Tomorrow only 200,000 of the most popular documents will be available in HTML5, but eventually all of them will be switched over“. The article also points out that “When it’s done, Scribd alone will convert billions of document pages into Web pages“.

Open Standards and the NOF-digi Programme

Good, you may think; it’s about time we made greater use of open standards. And this sentiment underpinned various standards documents I have contributed to since about 1996 for the JISC and the cultural heritage sector. As an example consider the NOF-digitise Technical Advisory Service which was provided by UKOLN and the AHDS from 2001-2004. These two services were commissioned to document the open standards to be used by this national digitisation programme. So we described open standards, such as SMIL and SVG, and, despite warnings of the dangers in mandating premature adoption of open standards, the first version of the standards document did not address the potential difficulties in developing services based on these immature W3C standards.

Unsurprisingly, once the projects had received their funding and began to commission development work we received questions such as “Does anyone have any thoughts on the use of file formats such as Flash or SVG in projects? There is no mention of their use in the technical specifications so I wondered whether their suitability or otherwise had been considered“. I can remember the meeting we had with the NOF-digitise programme managers after receiving such queries and the difficulty policy makers had in appreciating that simply mandating use of open standards might be inappropriate.

Our response was to explain the reasons why open standards were, in principle, to be preferred over use of proprietary formats:

The general advice is that where the job can be done effectively using non-proprietary solutions, and avoiding plug-ins, this should be done. If there is a compelling case for making use of proprietary formats or formats that require the user to have a plug-in then that case can be made in the business plan, provided this case does not contradict any of the MUST requirements of the nof technical guidelines document.

Flash is a proprietary solution, which is owned by Macromedia.  As with any proprietary solutions there are dangers in adopting it as a solution: there is no guarantee that readers will remain free in the long term, readers (and authoring tools) may only be available on popular platforms, the future of the format would be uncertain if the company went out of business, was taken over, etc.

However we did acknowledge the difficulties of forcing projects to use open standards and concluded:

So, to summarise, if you *require* the functionality provided by Flash, you will need to be aware of the longer term dangers of adopting it. You should ensure that you have a migration strategy so that you can move to more open standards, once they become more widely deployed.

We subsequently recommended updates to the projects’ reporting mechanism so that projects had to respond to the following questions before use of proprietary formats would be accepted:

(a) Area in which compliance will not be achieved

(b) Explain why compliance will not be achieved (including research on appropriate open standards)

(c) Describe the advantages and disadvantages of your proposed solution

(d) Describe your migration strategies in case of problems

Our FAQ provided an example of how these questions might be answered in the case of Flash. What we expected (and perhaps hoped for) back then was that there would be a steady growth in the development of tools which supported open standards and that the benefits of the standards would lead to a move away from Flash. This, however, hasn’t happened. Instead it seems to have been the lack of support for Flash on the iPhone and the iPad which has led to recent high-profile squabbles, in particular Steve Jobs’ open letter giving his Thoughts on Flash. His letter points out that

Flash was created during the PC era – for PCs and mice. Flash is a successful business for Adobe, and we can understand why they want to push it beyond PCs. But the mobile era is about low power devices, touch interfaces and open web standards – all areas where Flash falls short.

and concludes by saying:

New open standards created in the mobile era, such as HTML5, will win on mobile devices (and PCs too). Perhaps Adobe should focus more on creating great HTML5 tools for the future, and less on criticizing Apple for leaving the past behind.

It seems, according to Jobs, that it is the requirements of the mobile platform which are leading to the move towards open standards on both mobile and PC platforms.

Eight Years Later

About eight years later it now seems appropriate to move away from Flash and, instead, use HTML5. This long period between the initial announcement of a new open standard and its appropriateness for mainstream use will differ for different standards – in the case of RDF, for example, the initial family of standards was published in 2004 but it has only been in the past year or so that interest in the deployment of Linked Data services has gained wider popularity. But the dangers of forcing use of open standards are, I hope, becoming better understood.

And this is where I disagree with Glyn Moody who, in a recent tweet, suggested that “European Commission Betrays Open Standards – http://bit.ly/bl6HJt pusillanimity“. In an article published in ComputerWorld UK Glyn argued that the “European Commission Betrays Open Standards“. I have skimmed through the latest leak [PDF format] of an imminent Digital Agenda for Europe. What I noticed is that the document calls for “Promoting better use of standards”, arguing that “Public authorities should make better use of the full range of relevant standards when procuring hardware, software and IT systems”. It is the failure of the document in “promoting open standards and all the benefits that these bring” which upsets Glyn, who adds that “accept[ing] ‘pervasive technologies’ that *aren’t* based on standards” is “a clear reference to Microsoft“.

But maybe the European Commission have understood the complexities of the deployment of open standards and the risks that mandating their use across public sector organisations might entail. And let’s not forget that, in the UK, we have a history of mandating open standards which have failed to take off – remember OSI networking protocols?

Pointing out that open standards don’t always live up to their promise and that it may take several years before they are ready for mainstream use is applying an evidence-based approach to policy. Surely something we need more of, n’est-ce pas?

Posted in HTML, standards | 1 Comment »

Sig.ma, Linked Data and New Media Literacy

Posted by Brian Kelly on 21 May 2010

Consuming Linked Data Tutorial

At the Consuming Linked Data tutorial I attended on Monday 25 April 2010 I heard details of a number of Linked Data applications which could be used to process the Linked Web of Data. Of particular interest to me was the sig.ma search engine. Below I discuss the implications of discovering that personal content (perhaps provided using Social Web tools) is becoming surfaced in a Linked Data environment through semantic search tools such as sig.ma.

sig.ma

sig.ma is described as a “semantic information mashup” service. I used this Web-based service for vanity searching: a  search for “Brian Kelly UKOLN” provides me with an interesting view of how data about me is freely available in the Web of Linked Data. A screen shot is shown below.

Use of sig.ma service to view resources for "Brian Kelly UKOLN"

The service demonstrates how information from disparate sources can be brought together in a mashup and, as such, is worth trying out to see what information the Web of Linked Data has about you. I found, for example, many blog posts which I was unaware of which referenced my work in some way, such as a summary of an event in Southampton I spoke at last year; a reference to a post of mine in a post on FriendFeed: where the conversation happens; and a link to one of my briefing documents in a list (in the Czech language, I think) of Library 2.0 resources. In total it seems there were 152 sources of Linked Data information about me.

This service is of interest to me not only for the information it contains but also for understanding incorrect information which may be held, the reasons for such information, and the risk that personal information you may not wish to share has already been gathered and is available in Linked Data space.

Linked Data and New Media Literacy

As can be seen in the above image two data sources think that my surname is ‘UKOLN’. Further investigation reveals that this is due to the slides from a talk I gave at the Museums and the Web 2009 conference, which were uploaded by the conference organisers, having incorrect metadata.

As well as information gathered from Slideshare, sig.ma has also gathered information from Twitter, the Archimuse Web site (which organised the Museums and the Web conference), this blog and various resources I maintain on the UKOLN Web site. And on asking the service to retrieve data from additional services I discover that Linked Data about me is also available from data held on Wikipedia, YouTube, Ning, Scribd, Blip.tv, VOX, Blogspot and Tumblr as well as a number of individuals’ blogs e.g. posts on the Stephen Downes, Dulwichonview, Daveyp and Openwetware blogs and no doubt (many?) other blogs. It would appear that if you are a user of these popular Social Web services your information may be available as Linked Data.

I also noticed that sig.ma knew my date of birth. I had tried to conceal this information from public display, so I was puzzled as to how it came to be known. I discovered that I had included my date of birth in a FOAF file which I created in about 2003 – before I decided to conceal this information. I have removed the date of birth from my FOAF file – but how long will it take for sig.ma to refresh this data source, I wonder?
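For those unfamiliar with FOAF, the file in question is simply an RDF/XML document sitting on a Web server, which crawlers of the Web of Linked Data can harvest. A minimal sketch of such a file (the structure and the birthday value shown are illustrative, not taken from my actual file) might be:

  <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:foaf="http://xmlns.com/foaf/0.1/">
    <foaf:Person>
      <foaf:name>Brian Kelly</foaf:name>
      <!-- the property which sig.ma picked up; deleting it removes
           the data at source, but harvesters may have cached it -->
      <foaf:birthday>01-01</foaf:birthday>
    </foaf:Person>
  </rdf:RDF>

Once such a file has been harvested, of course, removing the statement from the source file does not remove it from third-party caches – hence my question about refreshing.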

The large amount of information about my professional activities which can be found using sig.ma is pleasing – and it is good to see how RSS feeds, RDFa and other structured data sources which are accessible from various Social Web services are being used. But what if the information is wrong, misleading, embarrassing or confidential? I have recently read that Syracuse University [is] to Provide Online Reputation Management to Graduates. We all need to have the skills to carry out such reputation management activities, I feel. And search engines which explore Linked Data sources should now be part of the portfolio of tools we need to understand. Have those involved in New Media Literacy appreciated this, I wonder?

Posted in Linked Data | 3 Comments »

The Disappearing HTTP:// Protocol

Posted by Brian Kelly on 19 May 2010

“Google kills ‘http’ URLs”!

An announcement was made last month on the ZDNet blog: “Google kills ‘http’ URLs; about time to follow suit?“. The post described how “Google’s Chrome browser will no longer include http:// as part of the URL field“. The post went on to add that “this has indeed ruffled some veteran’s feathers” as “FTP, HTTPS and other protocols which are non-HTTP are still used“. However Zack Whittaker, the author of the post, felt that “I don’t think it’s that much of a deal, frankly. When have you ever heard on the television, radio, or in print media the use of ‘http://’?“

He’s correct – if you listen to the TV or radio you don’t hear an announcer inviting the audience to visit “aitch-tee-tee-pee-colon-slash-slash“. The scheme name in URIs has become invisible – an example of a point I made in a recent IWR interview in which, invited to rate how much of a techno-geek I am on IWR’s ‘digitometer’, I commented: “My iPod Touch, mobile phone and PC are now my pen and paper – not technologies but essential tools I use every day“.

The ‘Disappearance’ of HTTP

But what does the disappearance of a technology tell us? In the case of the growing disappearance of the HTTP scheme from URIs, from the perspective of the general public I think it tells us that the standard is so ubiquitous that it no longer needs to be referred to. The flip side of this is that when something ubiquitous starts to be challenged by something new, we have to start referring to the old thing in new ways – remember, for example, when watches were just watches, and we didn’t need to differentiate between analogue and digital watches?

The ZDNet blog post, then, provides us with a reminder of the success of the HTTP protocol – it has become so successful that we don’t think about it any more.

But how did HTTP achieve such a dominant role? I have been around the Web environment long enough to have seen the evolution of HTTP from HTTP 0.9 through to HTTP 1.0 and then HTTP 1.1 – and I’ve even read all three specifications (although many years ago, so please don’t test me)!

If I recall correctly, HTTP 0.9 was the first published version of the HyperText Transfer Protocol, which I used when I first encountered the Web (or W3 as it was often referred to in the early 90s). It had the merit of being simple – the specification fits on a single page, as I have recently discovered.

HTTP/1.0 introduced MIME types so that documents retrieved over the Web could be processed by helper applications based on the MIME type rather than the file name suffix – much of the additional length of the specification is due to the formal documentation of features provided in HTTP 0.9, I think.
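To see quite how simple HTTP 0.9 was, it is worth sketching the two exchanges on the wire (these are reconstructions for illustration; the path is made up). An HTTP 0.9 request was a single line, and the response was the raw document with no status line and no headers:

  GET /index.html

An HTTP/1.0 request added a version number, and the response gained a status line and headers – including the MIME type which allows the client to choose an appropriate helper application:

  GET /index.html HTTP/1.0

  HTTP/1.0 200 OK
  Content-Type: text/html

  <html>...</html>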

Then HTTP/1.1 was released which, I remember, provided support for caching (the UK was the first country to support a national caching service across a large community – UK HE – and the protocol support for caching in browsers and servers introduced in HTTP 1.1 was needed in order to allow old versions of resources held in caches to be refreshed). A paper on “Key Differences between HTTP/1.0 and HTTP/1.1” provides a more detailed summary of the enhancements provided in HTTP/1.1.
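The caching support is easy to illustrate: a client (or an intermediate cache) which already holds a copy of a resource can issue a conditional request, and the server can reply with a short 304 response rather than resending the whole document. A sketch, with made-up values:

  GET /index.html HTTP/1.1
  Host: www.example.org
  If-Modified-Since: Mon, 10 May 2010 09:00:00 GMT

  HTTP/1.1 304 Not Modified

(The mandatory Host header was another HTTP/1.1 addition – the one which allows many Web sites to share a single IP address.)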

And after that – nothing.  A successful standard goes through a small number of refinements until the bugs, flaws and deficiencies are ironed out and is then stable for a significant period.

The Flaws in HTTP

But is this really the case?  HTTP may be ubiquitous, but it has flaws which were initially pointed out by Simon Spero way back in 1995 (I should mention that I met Simon last month at the WWW 2010 conference after discussing the history of HTTP in the coffee queue!).

Building on this work in November 1998 an IETF INTERNET-DRAFT on “HTTP-NG Overview: Problem Statement, Requirements, and Solution Outline” was written which pointed out that “HTTP/1.1 is becoming strained modularity wise as well as performance wise“. The document pointed out that:

Modularity is an important kind of simplicity, and HTTP/1.x isn’t very modular. If we look carefully at HTTP/1.x, we can see it addresses three layers of concerns, but in a way that does not cleanly separate those layers: message transport, general-purpose remote method invocation, and a particular set of methods historically focused on document processing (broadly construed to include things like forms processing and searching).

The solution to these problems was HTTP/NG, which would “produce a simpler but more capable and more flexible system than the one currently provided by HTTP“. And who could argue against the value of having a simpler yet more flexible standard that is used throughout the Web?

We then saw a HTTP-NG Working Group proposed within the W3C which produced a number of documents – but nothing after 1999.

We now know that, despite the flaws which were well-documented over 10 years ago, there has been insufficient momentum to deploy a better version of HTTP/1.1. And there has also been a failure to deploy alternative transfer protocols to HTTP – I can recall former colleagues at Newcastle University in the mid 1990s, who were involved in research on reliable distributed object-oriented systems, suggesting that IIOP (Internet Inter-ORB Protocol) could well replace HTTP.

Conclusions

What can we conclude from this history lesson? I would suggest that HTTP hasn’t succeeded because of its simplicity and elegance – rather it has succeeded despite its flaws and limitations. It is ‘good enough’ – despite the objections from researchers who can point out better ways of doing things. This relates to a point made by Erik Duval who, in a position paper presented at CETIS’s Future of Interoperability Standards meeting, argued that “Standards Are Not Research” and pointed out that “Once the standardization process is started, the focus shifts to consensus building“.

The consensus for HTTP is very much “it’s good enough – we don’t care about it any more“. So much so that it is becoming invisible. I wonder if there are other examples of Web standards which have been stable for over a decade and which we fail to notice?

Posted in Addressing, standards | 1 Comment »

What Formats For Peer-Reviewed Papers?

Posted by Brian Kelly on 17 May 2010

Formats for my Papers

The papers I’ve written which have been published in peer-reviewed journals, conference proceedings or have been included in other types of publications have been listed on my papers page on the UKOLN Web site since my first papers were published in 1999. More recently I have made use of the University of Bath’s institutional repository –  OPUS.

Wherever possible I have tried to provide access to the paper itself. But what formats should I provide?  The papers are initially written using MS Word and a PDF version is submitted to the publishers.  I normally try to provide access to both formats, and also create a HTML version of the paper.  The MS Word version is the master source, and so is the richest format; the PDF version provides the ‘electronic paper’, which preserves the page fidelity and the HTML format is the most open and reusable format.  So all three formats have their uses.

But none of these formats are particularly ‘embeddable’. And even the HTML format is normally trapped within the host Web site. The HTML file also contains navigational elements in addition to the contents of the paper.

Shouldn’t the full contents of papers be provided in an RSS format, allowing the content to be easily embedded elsewhere?  And wouldn’t use of RSS enable the content to be reused in interesting ways?

Creating an RSS Format for a Paper

As an experiment I have created an RSS file for my paper on “Deployment Of Quality Assurance Procedures For Digital Library Programmes” which I wrote with Alan Dawson and Andrew Williamson for the IADIS 2003 conference.

As well as the MS Word and PDF formats of the paper I had also created a HTML version. The process for creating the RSS file was to copy and paste the contents of the HTML file (omitting the navigation elements of the page) into a WordPress blog. I then viewed the RSS file using the WordPress RSS view of the page and copied this RSS file to the UKOLN Web site.
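In outline, the result is a standard RSS 2.0 file in which the full text of the paper forms the body of a single item. The sketch below shows the general shape (the channel details and URL are illustrative; WordPress feeds carry the full HTML in a content:encoded element from the RSS content module, alongside a short description):

  <rss version="2.0"
    xmlns:content="http://purl.org/rss/1.0/modules/content/">
    <channel>
      <title>Papers by Brian Kelly</title>
      <link>http://example.org/papers/</link>
      <description>Full text of papers provided as RSS</description>
      <item>
        <title>Deployment Of Quality Assurance Procedures For Digital Library Programmes</title>
        <description>Short summary of the paper</description>
        <content:encoded><![CDATA[ full HTML text of the paper ]]></content:encoded>
      </item>
    </channel>
  </rss>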

Using the RSS Format

Display of RSS view of paper in Netvibes

My first test was to add the RSS version of the paper to Netvibes. As you can see, the Netvibes RSS viewer successfully rendered the page.

It should be noted, however, that internal anchors (i.e. links to the references) did not link to the references within the RSS view, but back to the original paper.

I also tried FeedBucket, another Web-based RSS reader. In this case, as can be seen, the tool only displayed the first 500 characters or so of the paper. This seems to be a feature of a number of RSS tools which only provide a summary of the initial content of an RSS feed, with a link being provided for the full content.

Wordle View of Paper

Since the content of the paper is available without the navigational elements and other possibly distracting content which may be provided on a HTML page, it is possible to analyse the contents of the paper. For this I used Wordle – if you wish you can view the Wordle cloud for the paper.

Should We Be Doing This?

Should we be providing access to papers in a mature and widely used format which allows the content to be reused in other environments using a wide range of readily available technologies?  And which also allows the content to be processed and analysed using simple-to-use tools such as Yahoo Pipes?

I think we should. But perhaps publishers will think differently, as they are more likely to seek to maintain tight control over papers if the copyright has been assigned to them. But is this necessarily the case? My most recent paper, “Developing Countries; Developing Experiences: Approaches to Accessibility for the Real World”, was presented at W4A 2010 on 26-27 April 2010. We recently completed the copyright form and I’ve noticed the following information on the author rights:

The right to post author-prepared versions of the Work covered by the ACM copyright in a personal collection on their own home page, on a publicly accessible server of their employer and in a repository legally mandated by the agency funding the research on which the Work is based. Such posting is limited to noncommercial access and personal use by others, and must include the following notice both embedded within the full text file and in the accompanying citation display as well:
“© ACM, (YEAR). This is the author’s version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution …

Hmm. So can I make the paper available in an RSS format as long as I include the ACM copyright statement?

Posted in rss | 23 Comments »

Engagement, Impact and Value Under a Tory Government

Posted by Brian Kelly on 13 May 2010

Well we voted Lib Dems and got the Tories :-) And from what I read in my Twitter stream on Tuesday night there is a feeling of gloom and despondency shared by many who work in higher education. It will no longer be “Education, Education, Education” (even the Daily Telegraph acknowledged that “Education spending increased from £36 billion in 1996/97 to £56.9 billion in 2006”). Rather, after the budget, we will be seeing the promised cuts in the public sector, especially in those areas which do not provide front-line services.

So how should those of us involved in providing national IT services across higher education respond?

The question provides a suitable political and economic context for a one-day workshop on “Engagement, Impact, Value” which, as I described previously, will be held at the University of Manchester on 24 May. This event (for which there is no charge) is being organised jointly by UKOLN and Mimas. We will hear a number of case studies of the approaches services have taken to pro-actively engaging with their users in order to maximise the impact of the services and provide value for money.

This workshop, which is aimed primarily at those working for JISC services and involved in JISC-funded project activities, will be followed up by a number of other workshops aimed at the wider HE community.

At this event we will be exploring ways in which we can demonstrate the value our user communities place on our services and obtain evidence that can be used to support our case to decision makers.

We might also, of course, look at the ways in which innovative approaches are being used to support our user communities and the metrics we will need in order to make persuasive arguments to those holding the purse strings.

Posted in Events | Leave a Comment »

Social Location Sharing Services

Posted by Brian Kelly on 12 May 2010

Using MyTracks on my Mobile Phone

I recently took a holiday to visit Cyprus and used the opportunity of a walk in the Platres to try the My Tracks GPS application on my Android phone.

A map of a walk in the Platres, Cyprus

When I subsequently found a WiFi network I was able to upload the details to Google Maps. An image of the walk is shown – which, incidentally, makes me wonder if there is a GPS challenge for walks which look like a face :-)

As use of GPS tends to drain the battery I haven’t previously made much use of GPS applications on my phone. But as I wasn’t reliant on the phone while I was away I had the opportunity to test this application (having first ensured that I had switched off data traffic, so I wouldn’t have a large bill to pay for 3G usage whilst abroad!). The application worked well and what was particularly useful was the ease of uploading the map to My Maps in Google, so I can now share this map with others.

But although I can share this map with others it is not really a ‘social’ map. I can’t easily find out if my contacts have been on the same walk, or make new contacts with others who have been on it.

BrightKite

Back in June 2008 I joined the location-based BrightKite service. However since then I have only posted 19 updates and have a similar number of contacts.

Brightkite status update

It seems that many of my contacts are also failing to make use of BrightKite – for example the last update made by Richard Akerman (Scilib) told us that he was “On the 12 bus“. That post was sent 10 months ago – and we thought the bus services were bad in the UK :-)

So although I have a BrightKite client on my iPod Touch the service has clearly failed to gain significant take-up, within my community, at least.

Gowalla

A walk along the canal in Bath on Easter Monday reawakened my interest in location-based services which have a social aspect. So I downloaded the Gowalla application on my Android phone. Later in the evening, while listening to a blues band in The Bell, I downloaded the Gowalla app on my iPod Touch and created my first location.

Location of the Claverton Rooms shown in Gowalla

At work the next day I checked in from a number of locations on campus – my office in Wessex House, the Claverton Rooms, the Students Union building, the Fresh shop and the University Library.

You can see the view of the Claverton Rooms in the accompanying image. You should note that the location was not accurate initially, so I had to click on the edit button and drag the icon to the correct location.

When I checked in from the University Library I found that this location had already been created, so I used it. However, as can be seen from the image shown below, the Library is also located incorrectly – it is not to be found in the car park on the left of the image, but in the large building indicated on the far right.

University of Bath Library positioned incorrectly in Gowalla

A Use Case: Location-Based Services at Events

Social Networks Prove Useful at Events

Why would I wish to use a social location-based service such as Gowalla? I joined Twitter in March 2007 but my first significant use of the service took place a year later when I attended the Museums and the Web 2008 conference held in Montreal. I used Twitter to develop and grow social links e.g. “Off to Hilton to meet up with @dmje and any others at #mw2008 who fancy a drink & meal. Meet at reception ready to leave at 19.30.” and “@frankieroberto Heading over to conf hotel. Should be there in about 20 mins? Pub again? Any others at #mw2008 interested?“. As well as the obvious social dimension when you are away at a conference such contacts were also developed by the commentaries and discussions centred around the talks at the event.

I know others who were sceptical of the relevance of Twitter who signed up at this conference and have continued to use the service. So for me there are clear benefits to be gained from use of Social Web tools in the context of an event – a use case which, clearly, is normally based around a location.

But rather than having the connections centred around an event hashtag, as is normally the case with Twitter, can we make use of location-based services to base connections around a location?

Use of RFID at Events

Such ideas are not new. My colleague Julian Cheal explored this idea at the Dev8D event in which he made use of RFID technologies – 300 RFID tags were distributed to Dev8D attendees who could check in by scanning the tag at RFID readers located near the entrance to the various rooms used during the event. As Julian has described on his blog there were some problems with the technologies, but the idea is worth exploring some more, I feel.

Use of a Popular Location-Based Service

Although it might be possible to develop a location-based service for use at a specific event, such a service would be unlikely to be used afterwards. My interest is in use of a service which could be used independently of the technologies provided by the host institution. So although services such as Campus-M seem to be growing in popularity (Campus-M is used at the University of Sheffield, which is the location for this year’s IWMW 2010 event) this approach would appear to require usernames on the host institution’s Campus-M service, and so is not (currently) a generic solution.

Could we use a commercial service such as Gowalla in order to evaluate the potential of a social location-based service for use at an event? The benefits of using Gowalla would seem to be that it:

  • Provides a service which is free to use and which anyone can subscribe to.
  • Makes use of widely available mobile devices (clients are available for the iPhone, iPod Touch and Android devices in addition to the Web interface).
  • Allows registered users to check in to a location and share comments with others in the same location.

I am aware that Gowalla has limitations (I can’t see how to send a private message, for example). However use of a tool that already exists and can easily be installed and tested does appear to provide advantages – if an institution is considering developing or procuring such a service, shouldn’t testing of such existing service form part of an initial evaluation process?

The Risks

Of course it is important to ensure that people are aware of the risks in using such services.  Sending a post saying “Missed train and on my own at an empty train station” might not be wise. However I’ll not talk further on the question of risks and approaches to risk management in this particular post other than to point out that, as described in a recent post about the JISC10 conference Ning social network, people are already prepared to share with others that fact that they are at a conference.

Challenges

What limitations and challenges are there which need to be addressed?

I’ve already mentioned the problems with inaccurate locations. And if locations can be named and claimed by anyone, there will be the issue of not only incorrect but also misleading information. What will happen if somebody  locates the Vice-Chancellor’s office in a pub near the University?

There will also be the issue of the naming of locations used at events and the communication of such names to participants at such events.  Chris Gutteridge described the approaches he planned to take at the Dev8D events:

At Dev8D2010, at the end of February, I plan an experiment of assigning each location a hashtag, then publishing an electronic form of the schedule so the twitter can be merged into each session via location+program data.

And yes, at the event, there were posters on the walls with the hashtag identifying each of the rooms which provided a location-based identifier for use on Twitter, rather than the session-based hashtagging approach we took at IWMW 2009.

Perhaps on the day before the start of IWMW 2010 I should go around the various rooms to be used at the event, as well as the places to be used for accommodation and social events (including the Kelham Island Industrial Museum) and geo-locate these buildings with an appropriate name. But as the day before the start of IWMW 2010 coincides with another more important event I suspect I won’t have the time :-)

But perhaps the most important challenge is getting the community. In many respects BrightKite seems to have much potential but it has failed, I feel, to gain a significant community. Sadly, it never became fashionable, as happened with Twitter. Perhaps a successful social location-based service will need the endorsement of a celebrity? And looking at Wikipedia’s List of British university chancellors and vice-chancellors (thanks to J4 and keithbrooke for the link) perhaps someone should approach Bill Bryson (the location aspect would be appropriate for a well-known travel writer).

But Seriously …

It is by no means certain that a Web 2.0 service such as Gowalla would be relevant for use in an institutional context. Perhaps such services (and I should also mention FourSquare, which I have also recently subscribed to) will only be of interest in personal social contexts. But then again, didn’t we feel the same way about Facebook, YouTube and iTunes a few years ago?

Posted in Geo-location | 6 Comments »

RSS Feeds For Structured Information About Events

Posted by Brian Kelly on 11 May 2010

Understanding Trends Across a Community

Back at UKOLN’s IWMW 2006 event Andy Powell gave a plenary entitled “Reflections on 10 years of the Institutional Web” in which he summarised the trends he had observed during the ten years in which UKOLN had been hosting its annual event aimed at members of institutional Web management teams.

Andy used the JISCMail archives for the web-support list in order to search for early occurrences of technologies (such as RSS and CMS). This, however, was a time-consuming process due to the lack of APIs for the JISCMail Web archives, and anyone wishing to apply further analyses would have to start from scratch.

The Role of RSS

We first provided a news page about the event at IWMW 2005 and ensured that this was also available as an RSS feed. We later realised that RSS could be used not only for providing news but also as an open format which would allow content to be reused by other applications and since then have provided a wide range of RSS feeds for the events, including details of the speakers and abstracts of the plenary talks and workshop sessions.

But what might we learn if we make available RSS feeds for events even further back? Might this approach make it much easier for those who wish to gain a better understanding of how the topics addressed at the events have developed over the years? This was the question we were looking to answer.

The Community Now and Then

Wordle of abstracts for IWMW 2010 parallel sessions

A Wordle display based on the RSS feed of the abstracts of the parallel workshop sessions at this year’s IWMW 2010 event is shown. We can see that topics such as ‘mobile’ and ‘location’ will be addressed in a number of the sessions. It is also interesting to see that ‘HTML5’ and ‘RDFa’ also feature fairly prominently in the display, as does ‘social’ – which relates to social networks and not the social aspects of the event. I should also add that this data is based on 17 parallel sessions which last for 90 minutes, of which participants can attend two.

But what was being discussed back at the IWMW 2000 event? I have created an RSS feed of the abstracts of the workshop sessions for that event, which was held at the University of Bath on 6-8 September 2000. The format of the event was slightly different back then as, for IWMW 2000 only, participants could choose either one parallel session lasting three hours or two of four sessions lasting 90 minutes.

Wordle display of abstracts of IWMW 2000 parallel sessions

The Wordle display based on the RSS feed for the IWMW 2000 parallel workshop sessions is shown. This time we see that there is a strong interest in CMSs, which seems to have disappeared from this year’s event. There was also an interest in ‘VLEs’, ‘payment’, ‘ecommerce’ and ‘security’ which, again, do not seem to be being addressed this year (or, to my recollection, in recent years).

Discussion

A better understanding of changes since 2000 would be gained if redundant words (such as ‘institutional’, ‘web’, ‘management’, ‘workshop’, ‘sessions’ and ‘participants’) which probably occur in every abstract were removed. And in addition to the graphical capabilities provided by Wordle I wonder if more sophisticated text mining tools could be used to explore the changes in the topics which the community has been addressing over the 14 years for which the IWMW event has been held.

Locations of universities of plenary speakers at IWMW 1997-2010 events

We have RSS feeds providing information on the plenary talks and workshop sessions for IWMW 2000-2010 together with biographical details for the plenary speakers and workshop facilitators since the event was started. Note that the RSS feed for the plenary speakers contains geo-location information for the host institutions, which means that we can also display a map showing the locations of the host institutions of these contributors.
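For those wondering how geo-location information can be carried in an RSS feed, one common approach – and this is a sketch only, as the actual UKOLN feeds may differ in detail – is to use the W3C Basic Geo vocabulary within each item (the coordinates shown are illustrative):

  <item xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#">
    <title>Reflections on 10 years of the Institutional Web</title>
    <geo:lat>51.379</geo:lat>
    <geo:long>-2.327</geo:long>
  </item>

A mapping tool can then plot each item directly, without having to geo-code institution names itself.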

The UKOLN Web site has a page which contains links to the RSS files – and note that information about an OPML file of the RSS files is also available.

I have some ideas of how this structured data about the event (as opposed to HTML pages designed for reading by humans) could be used – and it is useful to have the data available for use with tools such as Yahoo Pipes. I also wonder if others have any suggestions on ways this data could be used?

I also feel that other events should be providing RSS feeds of their event information in a similar fashion – especially those events which are well-established within the community. If the abstracts of the talks given at events such as the national ALT, JISC and UCISA conferences held over the years were provided in RSS this would provide a valuable open and reusable resource to facilitate data-mining. And what about the Eduweb series of conferences which play a similar role to IWMW for university Web teams in US higher educational institutions? Shouldn’t such high profile conferences aimed at technically advanced user communities be taking a lead in providing structured information about their events? Isn’t there a danger that in only focussing on the future we fail to learn lessons by looking at our past?

Posted in Events, rss | 2 Comments »

Developing Countries; Developing Experiences: Approaches to Accessibility for the Real World

Posted by Brian Kelly on 10 May 2010

The Paper

I described previously how our paper on “Developing Countries; Developing Experiences: Approaches to Accessibility for the Real World” received the John M Slatin award for the Best Communications Paper at the W4A 2010 conference. Although the paper, which was written by myself, Sarah Lewthwaite and David Sloan, has not yet been published by the ACM my author’s copy of the paper is now available on the UKOLN Web site in MS Word and HTML formats.

Although the full paper is available on the UKOLN Web site (and should also be accessible via the W4A 2010 conference Web site shortly) it is not possible to provide comments or discuss the ideas outlined in the paper using these services. Last year I provided a summary on this blog of a paper entitled “From Web accessibility to Web adaptability”. My reason for the blog post was to provide a summary of the paper for interested readers who were understandably reluctant to pay the $50 to purchase the paper from the publishers (although published last July, access to the paper via the University of Bath institutional repository is still embargoed).

Although the publishers of papers presented at the W4A 2010 conference have a more lenient approach to access I still feel that it can be beneficial to provide a summary of newly published papers on this blog, in order to provide an open feedback mechanism and to encourage discussion.

Summary of the Paper

The paper begins by summarising the limitations of the WAI model for enhancing the accessibility of Web resources, which was first described in our paper on “Forcing standardization or accommodating diversity? A framework for applying the WCAG in the real world” (also available in HTML). We describe the lack of political will to mandate use of browsers which conform with UAAG, with recent advice for government organisations in France and Germany  to migrate from Internet Explorer 6 to modern versions of browsers being provided for security and not accessibility reasons.

The paper then provides two examples from Disability Studies which illustrate the value of applying critical theories to support more holistic approaches to Web accessibility: Aversive Disablism and Hierarchies of Impairment. Aversive disablism is illustrated using M. Deal’s comparison with race theory: aversive racists are not anti-black, but pro-white. There is a need to understand how approaches to accessibility might be based on pro-non-disabled assumptions. Such considerations should be understood in the context of Hierarchies of Impairment. We further cite M. Deal, who argued the need for “focusing attention on impairment groups that face the most discrimination in society (i.e. those ranked lowest in the hierarchy of impairments), rather than viewing disabled people as a homogenous group“. In the context of Web accessibility the focus of attention is often the needs of the visually-impaired, with the needs of users with learning difficulties having been seemingly marginalised in the development of accessibility guidelines. We conclude that “Critical research into accessibility for such groups is therefore recommended before standards can be invested“.

There is a danger that having an understanding of the technical flaws in the WAI model and the implicit assumptions which have been made in the development of the guidelines will leave those involved in the commissioning and development of Web services feeling confused and uncertain as to what they should be doing. When thinking about digital inclusion in developing countries, there is a danger that implementing a flawed accessibility policy derived from developed-world assumptions (for example a text-dominated communication system) may lead to a colonial imposition of accessibility that has the opposite effect on inclusion to what is intended. Our paper argues that rather than attempting to arrive at ‘standards’ we should now be observing patterns of effective approaches to the delivery of services. We provide two brief case studies: one on the use of multimedia resources and the second on the provision of ‘amplified events’.

The paper summarises the difficult challenges which need to be faced when planning the development of Web services and which tend not to be addressed by a guideline-driven definition of accessibility. The paper concludes by describing a framework which can be used by practitioners around the world in developing solutions when a simple application of WCAG guidelines is not feasible. We also “argue for a reappraisal of mainstream approaches to Web accessibility policy work to ensure a more effective and workable approach to promoting technology as a way of globally reducing social exclusion for disabled people“.

Next Steps

Our critique of the approaches which led to the development of WAI model are intended for those involved in WAI activities and policy-makers who may have a responsibility for deciding whether to use WCAG guidelines as valuable guidelines or standards whose use should be mandated in all contexts.  However the framework we have begun to develop is intended for use by Web practitioners. We will be further developing this approach, especially for use in the provision of amplified events, which is an area of  particular interest to UKOLN.

We’d welcome your comments on the ideas described in this paper.

Posted in Accessibility | Tagged: | 2 Comments »

IWMW 2010 Open For Bookings

Posted by Brian Kelly on 6 May 2010

UKOLN’s annual Institutional Web Management Workshop (IWMW) is now open for bookings. This 3-day event is aimed at members of institutional Web management teams and others involved in the provision of institutional Web services in higher and further educational institutions and related organisations in the public sector.

This year’s event, the fourteenth in the series, will be held at the University of Sheffield on 12-14 July. This year’s theme is “The Web In Turbulent Times” – and it seems appropriate to be making this announcement today, on the day of the general election. And just as the results of the election currently appear uncertain, so too does the future of institutional Web services over the next few years, once the next Government (of whatever political hue) announces its budget for the higher and further education sectors.

For the first twelve years of IWMW (which was launched in July 1997, a few months after the Labour Party came into power) there was a feeling of optimism shared by many within the community: the early adopters of the Web were seeing their views of the strategic importance of the Web being vindicated and over the years, as the Government’s investment in education grew, we saw a move away from the individual Webmaster (or Webmistress – remember those debates?)  to the establishment of Web teams and an increase in levels of funding for these services.

But now those times are over. Now the challenges which Web teams face will have to be addressed in the context of reductions in funding, staffing levels and skills (and I’ve already heard stories of experienced professionals taking early retirement and being replaced by junior members of staff).

This, then, provides the context for this year’s event.  Chris Sexton, IT Services Director at the University of Sheffield and chair of UCISA will open the event with a talk on “The Web in Turbulent Times“, giving her perspective on the implications of the cuts from her perspective as a senior manager.

Chris will be followed by Susan Farrell, former Head of Web and Portal Services at King’s College London, who, in her new consultancy role, will ask the provocative question:  “Are web managers still needed when everyone is a web ‘expert’?“. The abstract for Susan’s talk is equally provocative, suggesting that “While most senior managers would agree that the web is mission-critical, at a time when budgets are tight it becomes increasingly difficult to persuade them that employing skilled web professionals is vital“.

The opening afternoon’s session at the event seems guaranteed to generate discussion and debate – and a further plenary talk on “No money? No matter – Improve your website with next to no cash” looks at ways of continuing to provide institutional services in the context of cuts. Not all of the plenary talks will reflect the current economic and political context however: in a talk on “It’s all gone horribly wrong: disaster communication in a crisis” Jeremy Speller will seek to answer the question “How do you communicate with your staff and students and the wider world when it all goes horribly wrong?” :-)

Although these talks aim to stimulate debate informed by bad news (whether cutbacks or exploding volcanoes causing disruptions to those attending conferences)  we need to recognise that we are still seeing technical developments and innovation in the Web environment. The event will therefore provide an opportunity to hear more about “HTML5 (and friends)” and “Mobile Web and Campus Assistant“.

As well as the plenary talks we are also providing about 20 workshop sessions which provide opportunities for participants to develop new skills, covering new technical areas (with sessions on “RDFa: from theory to practice” and “Looking at Linked Data“), social media (with sessions on “‘Follow us on Twitter’…’Join our Facebook group’” and “Sheffield Made Us – using social media to engage students in the university brand“), project management skills (“A Little Project Management Can Save a Lot of Fan Cleaning … or (Agile) Project Management for the Web“), personal development plans (“Developing Your Personal Contingency Plan: Beat The Panic“) and a whole lot more.

The event will also provide a valuable opportunity for networking – which might prove to be immensely useful in what may turn out to be a shrinking sector. On the second night of the event the social will be held at the Kelham Island Industrial Museum – a place I’ll be looking forward to revisiting (although I might also sneak off to the Fat Cat for a pint of Kelham Island bitter!).

I hope to see you at IWMW 2010. I promise you it won’t all be gloomy!

Posted in Events | Leave a Comment »

Experiments With RDFa

Posted by Brian Kelly on 3 May 2010

The Context

In a recent post I outlined some thoughts on Microformats and RDFa: Adding Richer Structure To Your HTML Pages. I suggested that it might now be timely to evaluate the potential of RDFa, but added a note of caution, pointing out that microformats don’t appear to have lived up to their initial hype.

Such reservations were echoed by Owen Stephens who considered using RDFa (with the Bibo ontology) to enable sharing of ‘references’ between students (and staff) as part of his TELSTAR project and went on to describe the reasons behind his decision, which centred around deployment concerns. In contrast Chris Gutteridge had ideological reservations, as he “hate[s] the mix of visual & data markup. Better to just have blocks of RDF (in N3 for preference) in an element next to the item being talked about, or just in the page“. Like me, Stephen Downes seems to be willing to investigate and asked for “links that would point specifically to an RDFa syntax used to describe events?“. Michael Hausenblas provided links to two useful resources: W3C’s Linked Data Tutorial – on Publishing and consuming linked data with RDFa and a paper on “Building Linked Data For Both Humans and Machines” (PDF format). Pete Johnson also gave some useful comments and provided a link to recently published work on how to use RDFa in HTML 5 resources.

My Experiments

Like Stephen Downes I thought it would be useful to begin by providing richer structure about events. My experiments therefore began by adding RDFa markup to my forthcoming events page.

As the benefits of providing such richer structure for processing by browser extensions currently appear unconvincing, my focus was on providing markup which could be processed by a search engine. The motivation is therefore primarily to provide richer markup for events which will be processed by a widely used service, so that end users receive better search results.

My first port of call was a Google post which introduced Rich Snippets. Google launched their support for Rich Snippets less than a year ago, in May 2009. They are described as “a new presentation of snippets that applies Google’s algorithms to highlight structured data embedded in web pages“.

Documentation on the use of Rich Snippets is provided on Google’s Webmaster Tools Web site. This provides information on RDFa (together with microdata and microformats) markup for events. Additional pages provide similar information on markup for people and for businesses and organisations.

Although I am aware that Google have been criticised for developing their own vocabulary for their Rich Snippets, I was more interested in carrying out a simple experiment with the use of RDFa than in continuing the debate on the most appropriate vocabularies.

The forthcoming events page was updated to contain RDFa markup about myself (name, organisation and location of my organisation, including the geo-location of the University of Bath).
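To give a flavour of the approach, here is a simplified sketch of such markup (the property names are taken from Google’s data-vocabulary examples; the nesting and values shown are illustrative rather than a verbatim extract from my page):

<div xmlns:v="http://rdf.data-vocabulary.org/#" typeof="v:Person">
  <!-- a v:Person resource describing the page author -->
  <span property="v:name">Brian Kelly</span>,
  <span property="v:affiliation">UKOLN, University of Bath</span>
  <span rel="v:address">
    <span typeof="v:Address">
      <span property="v:locality">Bath</span>,
      <span property="v:country-name">UK</span>
    </span>
  </span>
</div>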

For my talks in 2010 I replaced the microformats I had used previously with RDFa markup providing information on the dates of the talks and their locations (again with geo-location information).
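Again as a sketch rather than the exact markup used on my page: dates are given in ISO 8601 format in a content attribute so that the visible text can remain human-readable, and the venue and coordinates below are illustrative:

<div xmlns:v="http://rdf.data-vocabulary.org/#" typeof="v:Event">
  <span property="v:summary">Talk at IWMW 2010</span>:
  <span property="v:startDate" content="2010-07-12">12 July 2010</span>,
  <span rel="v:location">
    <span typeof="v:Address">
      <span property="v:locality">Sheffield</span>,
      <span property="v:country-name">UK</span>
    </span>
  </span>
  <!-- geo-location attached using the v:Geo type, as in Google's
       examples; the coordinates shown are illustrative -->
  <span rel="v:geo">
    <span typeof="v:Geo">
      <span property="v:latitude" content="53.381"></span>
      <span property="v:longitude" content="-1.489"></span>
    </span>
  </span>
</div>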

No changes were noticeable when viewing the page normally. However, using Firefox plugins which display RDFa (and microformat) information I could see that software is able to identify the more richly structured elements in the HTML page. The screenshot shows how the markup was rendered by the Operator sidebar and the RDFa Highlight bookmarklet and, in the status bar at the bottom of the screen, links to an RDFa validator and the SIOC RDF Browser.

Rendering of RDFa markup using various Firefox tools.

If you compare this image with the display of how microformats are rendered by the Operator plugin you will notice that the display of microformats shows the title of the event whereas the display of RDFa lists the HTML elements which contain RDFa markup. The greater flexibility provided by RDFa appears to come at the price of a loss of the context which the more constrained usage of microformats provides.

Is It Valid?

Although the HTML RDFa Highlight bookmarklet demonstrated that RDFa markup was available and indicated the elements to which the markup had been applied, there was also a need to modify other aspects of the HTML page. The DTD was changed from XHTML 1.0 Strict to:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML+RDFa 1.0//EN" "http://www.w3.org/MarkUp/DTD/xhtml-rdfa-1.dtd">

In addition the namespaces for the RDFa vocabularies needed to be defined:

<html xmlns="http://www.w3.org/1999/xhtml"
  xmlns:cc="http://creativecommons.org/ns#"
  xmlns:v="http://rdf.data-vocabulary.org/#"
  xml:lang="en">

It was possible for me to do this as I have access to the full HTML page, including the DOCTYPE and the attributes defined on the html element. I am aware that some CMS applications may not allow such changes to be made and, in addition, organisations may have policies which prohibit such changes.

On subsequently validating the page, however, I discovered HTML validity errors. It seems that the name="foo" attribute is no longer permitted and has been replaced by id="foo".

The changes to the DTD and the html element and the inclusion of the RDFa markup weren’t the only changes I had to make, however. I discovered that the id="foo" attribute requires "foo" to start with an alphabetic character. I therefore had to change id="2010" to id="year-2010". This, for me, was somewhat more worrying: rather than just including new or slightly modified markup which was backwards-compatible, I was now having to change the URL of an internal anchor. If the anchors had started with an alphabetic character this wouldn’t have been an issue (and I would have been unaware of the problem). However it seems that a migration from a document-centred XHTML 1.0 Strict conforming world to the more data-centric XHTML 1.1+RDFa world may result in links becoming broken. I was prepared to make this change on my pages of forthcoming and recent events and to change links within the pages. However if others are linking to these internal anchors (which I think is unlikely) then the links will degrade slightly (they won’t result in the display of a 404 error message; instead the top of the page will be displayed, rather than the entries for the start of the particular year).
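For the record, the change was of the following form (the anchor shown is illustrative; on my page the anchors mark the start of each year’s entries):

<!-- before: valid in XHTML 1.0 Strict -->
<a name="2010">2010</a>

<!-- after: an id value must start with a letter, so the anchor,
     and hence the URL fragment identifier, changes -->
<a id="year-2010">2010</a>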

Google’s View of the RDFa Markup

Using Google’s Rich Snippets Testing Tool it is possible to “enter a web page URL to see how it may appear in search results“. The accompanying image shows the output of this tool for my events page.

Rendering of RDFa markup

This shows the structure of the page which Google knows about. As Google knows the latitude and longitude for the location of the talk, it can use this for location-based services, and it can also provide the summary and description of the event.

Is It Correct?

Following my initial experiment my former colleague Pete Johnston (now of Eduserv) kindly gave me some feedback. He alerted me to W3C’s RDFa Distiller and Parser service – and has himself recently published posts on Document metadata using DC-HTML and using RDFa and on the RDFa 1.1 drafts available from W3C.

Using the Distiller and Parser service to report on my events page (which has now been updated) I found that I had applied a single v:Event element where I should have used three elements for the three events. I had also made a number of other mistakes when making use of the example fragments provided in the Google Rich Snippets documentation without having a sound understanding of the underlying model and how it should be applied. I hope the page is now not only valid but also uses a correct data model for my data.
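In outline, the fix was structural rather than syntactic: instead of a single typeof="v:Event" wrapper around the whole list of talks, each talk needed its own (a sketch, with the event properties omitted):

<!-- wrong: one v:Event resource carrying the properties of all three talks -->
<div typeof="v:Event">
  <!-- properties for talks 1, 2 and 3 -->
</div>

<!-- right: a separate v:Event resource for each talk -->
<div typeof="v:Event"><!-- properties for talk 1 --></div>
<div typeof="v:Event"><!-- properties for talk 2 --></div>
<div typeof="v:Event"><!-- properties for talk 3 --></div>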

I should add that I am not alone in having created resources containing Linked Data errors. A paper on “Weaving the Pedantic Web” (PDF format) presented at the Linked Data on the Web 2010 workshop described an analysis of almost 150,000 URIs which revealed a variety of errors related to accessing and dereferencing resources and to processing and parsing the data found. The awareness of such problems has led to the establishment of the Pedantic Web Group which “understand[s] that the standards are complex and it’s hard to get things right” but nevertheless “want[s] you to fix your data“. There will be a similar need to avoid polluting RDFa space with incorrect data.

Is It Worthwhile?

The experience with microformats would seem to indicate that the benefits of RDFa will be gained if large-scale search engines support its use, rather than from an expectation that there will be significant usage by client-side extensions.

However the Google Rich Snippets Tips and Tricks Knol page states that “Google does not guarantee that Rich Snippets will show up for search results from a particular site even if structured data is marked up and can be extracted successfully according to the testing tool“.

So, is it worth providing RDFa in your HTML pages? Perhaps if you have a CMS which creates RDFa, or you can export existing event information in an automated fashion, it would be worth adding the additional semantic markup. But if you are doing this in order to enhance the findability of resources via Google you need to be aware that Google may not process your markup. And, of course, there is no guarantee that Google will continue to support Rich Snippets. On the other hand other vendors, such as Yahoo!, do seem to have an interest in supporting RDFa – so support for RDFa could potentially provide a competitive advantage for a search engine over its rivals.

But, as I discovered, it is easy to make mistakes when using RDFa. So it will be essential to have an automated process for the production of pages containing RDFa – and there will be a need to ensure that the data model is correct as well as the page being valid. This will require a new set of skills, as such issues are not really relevant in standard HTML markup.

I wonder if I have convinced Owen Stephens and Chris Gutteridge who expressed their reservations about use of RDFa? And are there any examples of successful use of RDFa which people know about?

“RDFa from Theory to Practice” Workshop Session

Note that if you have an interest in applying the potential of RDFa in practice my colleagues Adrian Stevenson, Mark Dewey and Thom Bunting will be running a 90 minute workshop session on “RDFa from theory to practice” at this year’s IWMW 2010 event to be held at the University of Sheffield on 12-14 July.

Posted in HTML, W3C | Tagged: | 5 Comments »