UK Web Focus

Innovation and best practices for the Web

Archive for November, 2010

IWR Information Professional of the Year: Dave Pattern

Posted by Brian Kelly (UK Web Focus) on 30 November 2010

I’m delighted to report that Dave Pattern has been announced as the Information Professional of the Year in the annual Information World Review awards which took place at the Online Information 2010 conference.

As a former winner of the award I was on the judging panel. As I know (and like and admire) Dave I felt that when I took part in the judging process I should document the reasons why I felt Dave would be a worthy winner of this award.

I felt that the award should be given to someone who not only demonstrated their value within their host institution (there are a great many librarians and ‘shambrarians’ for whom that would be true) but who could also be shown to have had an impact across the wider community.

Dave has demonstrated his impact within the wider community in two areas. Dave has been active in supporting the Mashed Libraries series of one-day events which have aimed to “bring together interested people and doing interesting stuff with libraries and technology“. The original idea was conceived by Owen Stephens in a blog post on “Mashed Libraries? Would you be interested?” on 1 July 2008. The second response was from Dave, who showed his enthusiasm together with an example of his normal self-deprecating humour: “I’d love to see a library unconference in the UK… I’m just too lazy to try and organise one myself! Count me in and, if nothing else, I can guarantee there’ll be two of us sitting in a room with our laptops!“.

Dave certainly wasn’t lazy in his support for the events as two of the six events have been held at Dave’s host institution, the University of Huddersfield: Mash Oop North on 7 July 2009 and Chips and Mash, on 30 July 2010.

Before Dave got involved with Mashed Libraries he was demonstrating the value which can be gained from mashing up library data. As you might expect from someone who is committed to sharing best practices across a wide community, Dave has a blog (which was launched way back in May 2005). On the blog you can read his posts on usage data, which include a post entitled “2008 — The Year of Making Your Data Work Harder” in which Dave described his “code primarily designed for our new Student Portal — course specific new book list RSS feeds“. Dave wasn’t just giving talks about ways of exploiting data; he was writing code and implementing services which demonstrated the value of the approaches he was encouraging the library community to adopt.

The benefits of openness of library data are now much more widely accepted than when Dave began his work at the University of Huddersfield Library – and it was good to see his institutional work gaining a higher profile through his work on JISC-funded projects.

It is clear that Dave has been a real asset to the University of Huddersfield. It is pleasing that his value to the wider library community is now being recognised through the award of Information Professional of the Year.

I’m sure that those who know Dave will join me in offering congratulations on a richly deserved recognition of both the value of the work Dave has done for the sector and the warmth and esteem in which many of us hold him.

Posted in General | 6 Comments »

Availability of Paper on “Moving From Personal to Organisational Use of the Social Web”

Posted by Brian Kelly (UK Web Focus) on 29 November 2010

I will present a paper on “Moving From Personal to Organisational Use of the Social Web” at the Online Information 2010 conference tomorrow as well as, as described previously, via a pre-recorded video at the Scholarly Communication Landscape: Opportunities and Challenges symposium.

The eight-page paper will be included in the conference proceedings, which can be purchased for £135! However my paper is available (for free!) from the University of Bath Opus Repository. In addition, in order both to enhance access routes to the paper (and the ideas it contains) and to explore the potential of a Web 2.0 repository service, the document has also been uploaded to the Scribd service.

From the University of Bath repository users can access various formats of the paper and a static and persistent URI is provided for the resource.   But what does Scribd provide?

Some answers to this question can be seen from the screenshot shown below. Two facilities which I’d like to mention are the ability to:

  • Let others know about papers being read in Scribd using the Readcast option which will send a notification to services such as Twitter and Facebook.
  • Embed the content in third party Web pages.

In addition the Scribd URI seems likely to be persistent: http://www.scribd.com/doc/43280157/Moving-From-Personal-to-Organisational-Use-of-the-Social-Web

I had not expected the WordPress.com service to allow Scribd documents to be embedded but, as can be seen below, this is possible.

There are problems with Scribd, however. Its list of categories for uploaded resources is somewhat idiosyncratic (e.g. Comics, Letters to our leaders, Brochures/Catalogs). There is also a lot of content from UKOLN, my host organisation, which has been uploaded without our approval. But in terms of the functionality and the ways in which the content can be reused in other environments it has some appeal. If only these benefits could be integrated with the more managed environment for content and metadata provided by institutional repositories. But should that be provided by institutional repositories embedding Web 2.0-style functionality or, alternatively, by Web 2.0 repository services adding on additional management capabilities?

Posted in Repositories, Web2.0 | Tagged: | 4 Comments »

“We are a country in crisis. A country at war.”

Posted by Brian Kelly (UK Web Focus) on 27 November 2010

Nick Poole didn’t mince his words in a blog post which summarised his keynote talk at yesterday’s UK Museums on the Web 2010 conference: “We are a country in crisis. A country at war.”

The opening paragraph went on to give the political context to his views: “We have a Coalition that does not fundamentally believe that culture should be funded by the taxpayer“. This is not the type of comment you’d normally expect from the CEO of a public sector body, the Collections Trust!

Having opened with this gloomy summary of the current environment Nick went on to outline how the museum sector should respond:

we have to use every tool in our armoury, and use them with the wisdom we have acquired in the past decade.

•    Fund imaginatively
•    Collaborate creatively
•    Aggregate smartly
•    Build openly

Imaginative, creative, smart, open. These are the themes of our conference today. These are the qualities we must bring to designing this new future of ours.

Nick feels that technology is now embedded across the sector. But this perceived maturity, rather than highlighting the importance of IT innovation, is being used to marginalise it and, it seems, to focus simply on mainstream service delivery:

The place of technology is no longer at the margins of the museum. Our role as technologists is no longer to explore, to investigate, to discover. Our role, from today, from now is to deliver.

Is this a desirable approach? And are such views relevant for the higher education sector?

In many respects Nick is correct. Following the initial use of the Web as a publishing medium, the Web 2.0 revolution has provided a platform for much richer, interactive and user-focussed services, and the use of Social Web services makes it easier to deliver such services in a cost-effective way. I should add that UKOLN has been involved in supporting museums, libraries and archives in exploiting the potential of the Social Web through the series of workshops and presentations we have delivered across the cultural heritage sector for a number of years.

Job done? All that’s left is to persuade risk-averse local authorities to liberalise their policies regarding access to Social Web services (and the reductions to local authority funding will help with that).

I think not.  Indeed as Nick said “We are a country in crisis. A country at war. We have a Coalition that does not fundamentally believe that culture should be funded by the taxpayer“. To which I might add “a Coalition that does not fundamentally believe that higher education should be funded by the taxpayer“.

Is this really a time for higher education (to position the discussion in our sector) in which there is no longer a need “to explore, to investigate, to discover“?

Back in 1989 Francis Fukuyama published an essay, “The End of History?”, which was interpreted by some as an argument that the time of radical change was over and we had reached a plateau of political stability. Following 9/11 such views were widely debunked.

Are we at a time when we can predict “The End of IT’s History?“ The technology wars are over: Microsoft vs a whole range of software vendors over the years, the PC vs the Mac, the cathedral vs the bazaar, open source vs closed source, open vs closed. Do we now simply need to make use of commodity IT services in order to deliver our core mission, with the economic crisis providing the opportunity to recognise the need to accept this new reality?

In some areas this is true. Running one’s own institutional email service is no longer regarded as something institutions need to do, as Chris Sexton, IT Services director at the University of Sheffield, has pointed out on her blog and in high-profile talks on several occasions.

But the commodification of IT in some areas does not mean that this is true in all areas. Similarly the mainstreaming of a set of technologies today does not necessarily mean that significant  changes  won’t happen again in the future.

Adam Cooper touches on such issues in a post on “Whither Innovation?“.  Adam asks a similar question to Nick’s:

Whither innovation in educational institutions in these times of dramatic cuts in public spending and radical change in the student fees and funding arrangements for teaching in universities?

but reaches a different conclusion:

It seems to me that innovation always follows adversity, that “necessity is the mother of invention”.

Adam describes how the innovation theorist Clayton M Christensen coined the term “disruptive innovation” to describe ways in which “apparently well-run businesses could be disrupted by newcomers with cheaper but good-enough offerings that focus on core customer needs (low end disruption) or with initial offerings into new markets that expanded into existing markets.” Adam goes on to argue that “Disruptive innovation threatens incumbents with strategy process that fails to identify and adopt viable low-end or new-market innovation. In our current context of disruption by government policy, this challenge to institutional (university) strategy is acute.“

We are at a stage in which a high-profile CEO of a public sector body will use the emotional language of “We are a country in crisis. A country at war.” to stimulate discussion and debate – and I very much welcome the way in which Nick has stimulated this debate (you just have to look at the way in which Nick’s blog post was discussed on Twitter earlier today to see that his post and the imagery he used struck a chord with many).

For me the higher education sector, too, needs to be “imaginative, creative, smart and open“. But, unlike Nick, I feel that there is a need for technologists (developers) within our institutions to explore, to investigate and to discover – approaches which were recognised in yesterday’s news that “Bristol University ChemLabS celebrated by JISC Times Higher Education Award“. Sarah Porter, JISC’s head of innovation and one of the judges for the awards, pointed out that “By focusing on innovative approaches to using technology to improve learning, the project has had measurable, demonstrable impact on the attainment of students of chemistry at the University of Bristol“.

If we lose the expertise possessed across the sector and the culture of experimentation and creativity which is fundamental to higher education we will surely condemn ourselves to a sausage-factory mentality, processing students and researchers through centralised learning and research environments.

But perhaps the differences between Nick’s comments and my views are more to do with the different sectors in which we work than with any significant divergence of opinion. Replace ‘museums’ with ‘higher education’ in Nick’s conclusion on the way forward and I’d be in agreement:

The reality is that if we are really going to deliver a Digital offer for museums that is globally competitive, we must pool our resources, collaborate creatively, aggregate smartly, build openly. Individually, we will not do what needs to be done. Together, we can achieve anything.

So let’s be “imaginative, creative, smart and open“ and identify the areas for commodification and recognise the battles which were fought and lost – and the areas in which diversity and innovation are needed.

Posted in Finances, General | 5 Comments »

Moves Away From XML to JSON?

Posted by Brian Kelly (UK Web Focus) on 26 November 2010

Although in the past I have described standards developed by the W3C which have failed to set the marketplace alight, I have always regarded XML as a successful example of a W3C standard. Part of its initial success was its simplicity – I recall hearing the story of when XML 1.0 was first published, with a copy of the spec being thrown into the audience to much laughter. The reason for the audience’s response? The 10 page (?) spec fluttered gently towards the audience, but the SGML specification, to which XML provided a lightweight and Web-friendly alternative, would have crushed people sitting in the first few rows! I don’t know whether this story is actually true but it provided a vivid way of communicating the simplicity of the standard which, it was felt, would be important in ensuring the standard gained momentum and widespread adoption.

But where are we now, 12 years after the XML 1.0 specification was published? Has XML been successful in providing a universal markup language for use in not only a variety of document formats but also in protocols?

The answer to this question is, I feel, no longer as clear as it used to be. In a post on the Digital Bazaar blog entitled Web Services: JSON vs XML, Manu Sporny, Digital Bazaar’s Founder and CEO, makes the case for the inherent simplicity of JSON, arguing that:

XML is more complex than necessary for Web Services. By default, XML requires you to use complex features that many Web Services do not need to be successful.

The context to the discussions in the blogosphere over XML vs JSON is the news that Twitter and Foursquare have recently removed XML support from their Web APIs and now support only JSON. James Clark, in a post on XML vs the Web, appears somewhat ambivalent about this debate (“my reaction to JSON is a combination of ‘Yay’ and ‘Sigh’“) but goes on to list many advantages of JSON over XML in a Web context:

… for important use cases JSON is dramatically better than XML. In particular, JSON shines as a programming language-independent representation of typical programming language data structures.  This is an incredibly important use case and it would be hard to overstate how appallingly bad XML is for this.

The post concludes:

So what’s the way forward? I think the Web community has spoken, and it’s clear that what it wants is HTML5, JavaScript and JSON. XML isn’t going away but I see it being less and less a Web technology; it won’t be something that you send over the wire on the public Web, but just one of many technologies that are used on the server to manage and generate what you do send over the wire.
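James Clark’s point about programming language data structures is easy to demonstrate. The fragment below is a minimal illustration of my own (it is not taken from any of the posts under discussion) which serialises the same record to JSON and to XML using only the Python standard library; the record and element names are invented for the example:

    import json
    import xml.etree.ElementTree as ET

    # A typical API response modelled as a native Python data structure.
    record = {
        "title": "Moving From Personal to Organisational Use of the Social Web",
        "authors": ["Brian Kelly"],
        "year": 2010,
    }

    # JSON: a one-line, lossless round trip to and from the data structure.
    payload = json.dumps(record)
    assert json.loads(payload) == record

    # XML: the same record requires explicit decisions about elements vs
    # attributes, list wrapping and type conversion - none of which the
    # markup itself records for us.
    root = ET.Element("record")
    ET.SubElement(root, "title").text = record["title"]
    authors = ET.SubElement(root, "authors")
    for name in record["authors"]:
        ET.SubElement(authors, "author").text = name
    ET.SubElement(root, "year").text = str(record["year"])
    print(ET.tostring(root, encoding="unicode"))

The JSON round trip is a single call because the format maps directly onto the language’s native dictionaries and lists; the XML version forces design decisions which the markup does not record – which is the nub of Clark’s complaint.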

The debate continues on both of these blogs. But rather than engaging in the finer points of the debate on the merits of these two approaches I feel it is important to be aware of decisions which have already been taken. And as Manu Sporny has pointed out:

Twitter and Foursquare had already spent the development effort to build out their XML Web Services, people weren’t using them, so they decided to remove them.

Meanwhile in a post on Deprecating XML Norman Walsh responds with the comment “Meh” – though he more helpfully expands on this reaction by concluding:

I’ll continue to model the full and rich complexity of data that crosses my path with XML, and bring a broad arsenal of powerful tools to bear when I need to process it, easily and efficiently extracting value from all of its richness. I’ll send JSON to the browser when it’s convenient and I’ll map the output of JSON web APIs into XML when it’s convenient.

Is this a pragmatic approach which would be shared by developers in the JISC community, I wonder? Indeed on Twitter Tony Hirst has just asked “Could a move to json make Linked Data more palatable to developers?” and encouraged the #jiscri and #devcsi communities to read a draft document on “JSON-LD – Linked Data Expression in JSON“.
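For those who haven’t seen the draft Tony points to, the sketch below shows the general shape of a JSON-LD document: ordinary JSON whose @context maps plain keys onto Linked Data vocabularies (FOAF in this case). Note that this is my own illustration using the JSON-LD keywords as later standardised – the 2010 draft differs in some details – and the identifier URI is invented:

    import json

    # A minimal JSON-LD document: ordinary JSON, but the @context maps the
    # plain keys onto Linked Data vocabularies (FOAF here), so a developer
    # can ignore most of the RDF machinery and still emit Linked Data.
    doc = {
        "@context": {
            "name": "http://xmlns.com/foaf/0.1/name",
            "homepage": {
                "@id": "http://xmlns.com/foaf/0.1/homepage",
                "@type": "@id",
            },
        },
        "@id": "http://example.org/people/brian",  # invented identifier
        "name": "Brian Kelly",
        "homepage": "http://ukwebfocus.wordpress.com/",
    }
    print(json.dumps(doc, indent=2))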

Posted in jiscobs, standards, W3C | 9 Comments »

Simultaneous Talks in London and Manchester

Posted by Brian Kelly (UK Web Focus) on 25 November 2010

Giving Two Talks on One Day

I have been invited to give a talk at the Scholarly Communication Landscape: Opportunities and challenges symposium which will be held at Manchester Conference Centre, Manchester on 30th November 2010. On the same day I’m speaking at the Online Information 2010 conference. However this isn’t a problem; rather, I regard it as an opportunity to try out new approaches to speaking at conferences – and I’m pleased to say that the event organisers have also welcomed the opportunity for such experimentation.

The title of the talk is “Personal or Institutional Use of Social Web Services For Scholarly Communication?” and this is a slight reworking of the paper on “Moving From Personal to Organisational Use of the Social Web” which I’ll be presenting at the Online Information conference.

I have used Panopto to create a screencast of the talk. The video of the slides (but not my talking head) has been uploaded to Vimeo, so that on the day the event organisers in Manchester will be able to display the recording of my talk while, at approximately the same time, I’ll be giving a live version of the talk in London.

Where’s the experimentation, you may ask? Aren’t videos of talking heads at conferences old hat?   That’s true. My interest is in providing event materials in advance in order to explore ways of breaking out of the traditional ways in which the higher education sector has gone about organising and delivering conferences.

The need for such experimentation was highlighted by Martin Weller who recently asked “Am I done with conferencing?“. Martin asked a series of questions about traditional approaches to conferences including:

Why don’t we use the net for the information dissemination function (eg make our presentations live beforehand, as video or slidecasts) and then use the face to face segment for discussion?

OK, I’m game. So here’s access to a video presentation of the slides (on Vimeo), a screen capture of the slides together with my talking head (on Panopto) and the slides themselves (on Slideshare). And the slides and the video are also embedded below.

But since I’ll be multitasking on the day of the symposium I won’t be able to “use the face to face segment for discussion“. Instead I suggested to the organisers of the symposium that I write a post here on this blog which can provide a forum for comments and discussions. And publishing the post in advance, rather than on the day of the  event, will enable others to provide their views and comments.

I should add that pre-recording a video can be nerve-wracking and I know I prefer giving a talk live. I also suspect that the audience may be more prepared to be critical of a pre-recorded video than of a live presentation (and if I were giving a live presentation I would probably update the slides, or the talk, to take into account things that had been said previously). However in order to ensure that there is some physical ‘sense of presence’ at the conference my colleague Stephanie Taylor will be attending and will be able to respond to any questions.

But having said that, we wouldn’t make changes if we weren’t prepared to move out of our comfort zone. It should also be pointed out that Martin’s post didn’t specifically address the need for cost savings for events. If I had attended the symposium it would have involved the cost of the train fare and, probably, a night’s accommodation, as well as my time. Instead I spent an hour or so making the screencast and gained benefits from spotting weaknesses in the presentation, which have been fixed for the talk at Online Information. In addition there is a video and audio recording of the talk which would probably not otherwise have been provided.

I should also add that I hope there will be a Twitter event hashtag for the two events, so it should be possible to engage in discussions that way.  And if the two talks do take place simultaneously I’m sure that will confuse people trying to work out what @briankelly is talking about :-)

What’s the Talk About?

But enough of the process; what about the content of the talk? As mentioned above the title of the talk is “Personal or Institutional Use of Social Web Services For Scholarly Communication?” and this is a slight reworking of the paper on “Moving From Personal to Organisational Use of the Social Web“. The abstract of the talk is given below:

Social Web services, such as blogs, have been used successfully by early adopters. But should we now see such services being migrated to the institutional environment in order to address institutional concerns? Or should the institution seek to exploit the benefits of such out-sourced approaches?

In this talk Brian will provide examples of successful blogs provided by various early adopters within the UK higher education community. He will describe how such bloggers have developed approaches which maintain the authority and integrity of the blogger whilst maintaining a professional approach which is appreciative of potential institutional concerns.

Brian suggests that rather than seeking to move such blogs into an institutional context, the cuts in funding in higher and further education may result in greater use of Cloud Services rather than in-house software. If this is the case then the approaches taken by such early adopters may become mainstream and provide the basis for the development of institutional guidelines on use of Social Web services to support institutional activities.

Note that this talk will be given as a pre-recorded video as the speaker is giving a talk on the same day at the Online Information conference. This double-booking provides an opportunity to evaluate the potential of online delivery of talks at conferences.

The slides are available on Slideshare and embedded below:

The video presentation is available on Vimeo and embedded below. Note that a screen capture is also available on Panopto.

In his post Martin Weller said that he was “done with the traditional conference format” and feels that “we should stop wasting [valuable time at conferences] giving presentations“. Is an alternative approach to pre-record a talk for conference delegates to watch in advance? But will they, I wonder? And how comfortable will speakers be with recording their talks?

Posted in Events | 8 Comments »

Thoughts On The “Crowdsourcing Experiment – Institutional Web 2.0 Guidelines” Post

Posted by Brian Kelly (UK Web Focus) on 24 November 2010

Martin Hamilton, head of Internet Services at Loughborough University, has written a great blog post on the subject of “Crowdsourcing Experiment – Institutional Web 2.0 Guidelines“. The post, which was used to support an Open Mic session at the CETIS 2010 conference, begins “I’d like to use this blog post to do a bit of crowdsourcing around perspectives on institutional Web 2.0 guidelines and policies“. Martin is looking to develop guidelines along the lines of those listed in the Policy Database provided on the Social Media Governance Web site.

But rather than comment on the specifics of the content of the post (which is well worth reading) I’d like to make some observations on the approaches Martin has taken in producing his comprehensive multimedia post covering a variety of aspects related to institutional use of Web 2.0.

Blog post supporting a talk: When talks are given at events the norm is use of PowerPoint (or Open Office in some cases).   Increasingly you’ll find that the slides are made available, often on a slidesharing service such as Slideshare. If the talk is about a peer-reviewed paper the paper itself is the significant resource but at events such as the CETIS 2010 conference there aren’t accompanying papers.  It’s therefore pleasing to see this example of a blog post which complements the talk given at an event.  Indeed, in some respects, the blog post can be more valuable than a peer-reviewed paper as the blog post can make it easier for others to give comments and feedback.

Multi-media document: A document on guidelines for institutional use of Web 2.0 services would often be written using MS Word (or Open Office), perhaps containing images. Martin has written a multi-media document which contains embedded video clips, timelines and slide presentations.  Although we may encourage students to create multimedia essays, how often do IT professionals themselves do this?

Crowd-sourcing feedback: Martin has created an accompanying Google Doc which anyone can contribute to.  I think this is an interesting experiment in providing a mechanism whereby an audience for a talk can be more active participants by contributing to a document  which is based on the contents of the talk.

Embracing openness: Martin’s approach to the development of institutional guidelines for Web 2.0 services is taking place in public with contributions being actively sought.  This is in contrast with in-house developments of institutional policies and guidelines which might be shared with others after they are finalised.

I’d like to see greater use of the approaches which have been taken by Martin. What do you think?

Posted in Web2.0 | Tagged: | 4 Comments »

A Single Web Site For Government Departments! Higher Education Next?

Posted by Brian Kelly (UK Web Focus) on 23 November 2010

A Single Web Site For Government Departments

Yesterday a press release entitled “Digital by default proposed for government services”, published on the Cabinet Office Web site, described how Martha Lane Fox, the UK Digital Champion, has published a report that calls for radical improvement to Government internet services [PDF 5.71MB, 11 pages].

The recommendations in the report call for the “simplification and strengthening of digital government to improve the quality, and consequently use, of online channels“. The report argues that as well as providing better services for citizens “shifting 30% of government service delivery contacts to digital channels has the potential to deliver gross annual savings of more than £1.3 billion, rising to £2.2 billion if 50% of contacts shifted to digital“.

The key recommendations in the report include:

  • Making Directgov the government’s front end for all transactional online services to citizens and businesses
  • Making Directgov a wholesaler as well as the retail shop front for government services and content by mandating the development and opening up of Application Programme Interfaces (APIs) to third parties.

Government departments first and other public sector organisations, such as Universities, next? But how should Universities react to such moves towards centralisation of networked services? Note that in this post I’ll not address the question of whether such moves are desirable or not (discussions which are already taking place on Twitter) – rather I’ll consider the implementation issues which policy makers, who are not in a position to respond politically to Government announcements, need to consider.

Implications For Higher Education

A move towards centralised services for the citizen? Hasn’t the UK higher education sector been championing national services for the last couple of decades? JISC Services, such as Mimas and EDINA, have been providing centralised access to services for teaching, learning and research for many years and such services are much appreciated by their large numbers of users.

Mandating the development and opening up of Application Programme Interfaces (APIs) to third parties? That sounds great and is also part of the JISC’s strategy for enhancing access to services – indeed last year the JISC funded the Good APIs work which provided advice on best practices for providing and consuming APIs.

But what of the bigger picture? Could there be a national user-facing service which provides information about, say, courses provided by UK Universities? Again, the higher education sector has ‘been there, done that’ when a number of higher education agencies, including HEFCE, SHEFC, HEFCW and DENI, set up the Hero (Higher Education and Research Opportunities) service. However, as I described in a post on “Which Will Last Longer: Hero.ac.uk or Facebook?” published in June 2009, Hero, “the official gateway to universities, colleges and research organisations in the UK“, was closed on 4 June 2009. And if there are suggestions that we should have a centralised online service for delivery of teaching resources we should also remember the lessons of the UK eUniversity, the UK’s £62m e-learning university which was scrapped in 2004 and described as a “shameful waste” of tens of millions of pounds of public money.

Will we see a move towards greater centralisation of networked services in the sector? I think this is inevitable. I also think that this can provide benefits as we build on the experience of providing national services – which, I should add, are envied by many of those working in higher education in other countries which have not had centralised funding and development to the extent which JISC provides in the UK. But the danger is that policy makers will fail to learn lessons from previous approaches to centralisation. Remember the saying “Those who forget history are doomed to repeat it“?

Posted in Web2.0 | 1 Comment »

Asynchronous Twitter Discussions Of Video Streams

Posted by Brian Kelly (UK Web Focus) on 22 November 2010

Twitter Captioned Videos Using iTitle

Martin Hawksey’s software for using Twitter to provide captions for video continues to improve. At UKOLN’s IWMW 2010 event we used the iTitle service to mash together videos of the plenary talks with the accompanying Twitter stream. As you can see from, for example, Chris Sexton’s opening talk at the event, you can go back in time to see not only what Chris said (nothing new in providing a video of a talk) but also what the audience was tweeting about at the time – and you can also search the tweets in order to go directly (once the video has been downloaded into the local buffer) to what may be regarded as crowd-sourced video bookmarks. For example a search for ‘finance’ shows that at 9 mins 35 seconds into the video there was a comment that “Does anyone seriously think HR, Finance, Payroll and Student Record Systems can be run as Shared Services??! #iwmw10“.
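For those curious about what such a mashup involves under the bonnet, here is a minimal sketch of my own of the underlying idea (it is not Martin Hawksey’s iTitle code): each archived tweet’s timestamp is offset against the start time of the talk and written out as a SubRip (SRT) subtitle block which a video player can overlay on the recording. The tweet, author and times are illustrative:

    from datetime import datetime

    def tweets_to_srt(tweets, talk_start, duration=8):
        """tweets: list of (timestamp, author, text) tuples;
        talk_start: datetime of the start of the recording."""
        def fmt(seconds):
            # SRT timestamps look like HH:MM:SS,mmm
            h, rem = divmod(int(seconds), 3600)
            m, s = divmod(rem, 60)
            return f"{h:02}:{m:02}:{s:02},000"
        blocks = []
        for i, (when, author, text) in enumerate(sorted(tweets), start=1):
            offset = (when - talk_start).total_seconds()
            blocks.append(f"{i}\n{fmt(offset)} --> {fmt(offset + duration)}\n"
                          f"@{author}: {text}\n")
        return "\n".join(blocks)

    start = datetime(2010, 7, 12, 9, 0)
    tweets = [(datetime(2010, 7, 12, 9, 9, 35), "example_user",
               "Can Finance really be run as Shared Services? #iwmw10")]
    print(tweets_to_srt(tweets, start))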

Asynchronous Twitter Captioning

That is an example of being able to replay the Twitter discussions which took place during a live event. But what if you wanted to engage in discussions of a recorded presentation? Back in June 2010 Martin published a blog post which described uTitle, a development of his Twitter captioning service: “Convergence @youtube meets @twitter: In timeline commenting of YouTube videos using Twitter [uTitle]“. In the post Martin said that “Having looked at synchronous communication I was interested to extend the question and look at asynchronous communication (i.e. what was said about what was said after it was said)“.

An example can be seen from the uTitled version of the When The Ax Man Cometh video, which was originally published on Seth Odell’s Higher Ed Live webinar and featured an interview with Mark Greenfield. I felt that this interview, which Mark has described on his blog, would be of particular interest to those of us working in the UK’s higher education sector as it raises challenging questions about the future of Web and IT services in higher education (and I should thank Martin for processing the video using uTitle and Seth and Mark for giving permission for the video to be used in this way). In particular it asks the audience to consider the implications of ideas published in a book on A University for the 21st Century written by James Duderstadt, President Emeritus at the University of Michigan:

  • Higher education is an industry ripe for the unbundling of activities. Universities will have to come to terms with what their true strengths are and how those strengths support their strategies – and then be willing to outsource needed capabilities in areas where they do not have a unique advantage.
  • Universities are under increasing pressure to spin off or sell or close down parts of their traditional operations in the face of new competition. They may well find it necessary to unbundle their many functions, ranging from admissions to counseling to instruction and certification.

Although this book was published way back in March 2000 the view that “Universities are under increasing pressure to spin off or sell or close down parts of their traditional operations” is particularly relevant to those of us working in higher education in the UK in 2010.

So if you do want to join in a debate (as opposed to simply passively watching the video) you can add comments to the post on the Higher Ed Live Web site or you can use uTitle to give your thoughts in real time using your Twitter account. An example of the interface can be seen below in which, in response to Mark Greenfield’s assertion that “For profit companies can adapt more quickly than Universities” I respond “If true, don’t we need to accept the need to change rather than accept it as inevitable“.

Discussion

Rather than discussing the content of Mark’s talk in this post I’d like to give some comments on the use of Twitter for making asynchronous comments about a video clip.

The first comment is that if you do this as you watch a video your Twitter stream is likely to appear confusing to your followers. Unlike use of Twitter at an amplified event you will be tweeting on your own, and you will not be taking part in a real-time conversation with others centred around an event hashtag.

Also, unlike a live presentation, it is possible to pause the video while you compose your tweet – and even to fast forward to see how the ideas in the talk develop, then rewind and give your tweets. On a pre-recorded video we can benefit from the 20/20 hindsight which is not possible in real life :-)

I am also uncertain as to how people will feel about adding comments to such a video, especially those doing this when no comments have been published – there might be a concern that you will look stupid making a comment which the speaker addresses later on.

I should also add that when I made my two comments I used a second Twitter account in order to avoid spamming my Twitter followers with strange tweets. (Note that as the account had not been validated by Twitter at the time, the tweets were not being displayed in the Twitter search interface – Martin retweeted the tweets in order to ensure that the uTitle display contained some comments.)

I’d like to conclude by asking two questions:

  • Is there a demand for a service which provides captioning of pre-recorded videos?
  • Should Twitter users claim second Twitter accounts which can be used in conjunction with automated agents (such as uTitle)?

Posted in Finances, Twitter, Web2.0 | Tagged: , | 2 Comments »

Dazed and Confused After #CETIS10

Posted by Brian Kelly (UK Web Focus) on 18 November 2010

“Never Waste A Good Crisis”

On Monday and Tuesday I attended the #CETIS10 conference on “Never Waste a Good Crisis – Innovation & Technology in Institutions“. I’ve always enjoyed the CETIS conferences I’ve attended and found that they have provided a valuable way of keeping up with developments in the elearning environment as well as the equally important task of cultivating professional relationships and making new contacts.

But how might I summarise my feelings after two days at the National College for School Leadership, Nottingham, this year’s venue for the conference? If I were to look for a film title to describe how I felt on my journey home it would be “Dazed and Confused”. But not, I should hasten to add, due to any problems with the conference organisation (the venue – which was new to me – was great; the evening meal, this year, had no quirky servings and the organisation was up to its normal high standard).

Rather it was the contrast between the enthusiasm for change which I can recall from many participants and speakers at the first CETIS conferences I attended and the reality of the changes the sector is now facing – changes which were highlighted in three occurrences which took place during the conference: the opening keynote talk; a webinar on “When The Ax Man Cometh” which I heard about shortly before the conference started; and the Daily Telegraph’s article on “Universities spending millions on websites which students rate as inadequate“ which was published on the second day of the conference.

“Will Universities Still Exist in 2030?”

I recall Oleg Liber, the recently retired CETIS Director, giving an opening talk at a CETIS conference in which he asked the audience to consider whether higher educational institutions as we know them will still exist by 2030. The audience, which consisted of those involved in innovative approaches to elearning, was encouraged to feel they were playing an important role in instigating significant changes within the sector, with an implicit assumption that such changes were for the good and that those at the leading edge were well-positioned to exploit the new opportunities provided in a changed educational landscape.

It now seems that large-scale changes to higher education will arrive well in advance of 2030, but they will not be driven primarily by technological development becoming embedded across institutions; rather they will come about through reductions in funding and increases in student fees. These are the significant changes (which will be implemented in a short period of time), with the changes which technological innovation can provide now having to be contextualised within a radically changed funding environment and corresponding changes in user expectations, with students, for example, looking for value for the fees they have mortgaged their futures to pay.

“DIY U: Edupunks, Edupreneurs and the Coming Transformation of Higher Education”

My dazed and confused feelings began during the opening plenary talk given by Anya Kamenetz, which was based on her recent book “DIY U: Edupunks, Edupreneurs and the Coming Transformation of Higher Education“. As summarised by Christina Smart on the JISC E-Learning Focus blog:

Recent years have seen a drive towards higher participation rates in both the UK and US … but above 40% participation rates problems occur. Issues around massification, cost shifting (where governments push the costs onto students), and student loans are all at play. There is also the influence of Baumol’s disease, where disciplines like the performing arts, are unable to make efficiency savings by reducing teacher to student ratios.

Anya argued that the combination of cost, access and quality made a compelling “case for radical innovation” in higher education. Shifting towards open content, socialisation and accreditation could result in that radical innovation, and Anya expanded on the benefits of Open Educational Resources, Personal Learning Networks and open accreditation approaches. Citing developments like Mozilla drumbeat’s P2PU – School of Webcraft, Anya described how “professional networks can bypass the need for diplomas”. She concluded with the prediction that new business models for HE would emerge, as mp3 players and digital music had transformed the business model of the music industry.

But what is a “case for radical innovation”? How about:

  • We have too many students studying in higher education.
  • Self-motivated students can learn without the need of a formal educational infrastructure.
  • The benefits of technology in enhancing learning are unproven – with Baumol’s cost disease being used “to describe the lack of growth in productivity in public services such as public hospitals and state colleges“.

I met Anya before the start of the conference and, over dinner, Anya mentioned how she has been described as a socialist in the US. But these views are often deployed from a right-wing perspective – and this caused my initial feelings of discomfort and unease. I should add that I’m not saying that I necessarily disagree with such views, which are worthy of further discussion and unpicking. I suspect that, in part, my unease may reflect personal experiences (I was the first in my family, from the working-class town of Bootle, to go to University, which provided me with new opportunities); political disagreements with the notion that what may be good for self-motivated students (such as those who have benefitted from attendance at fee-paying public schools) will be forced on those who would benefit from learning provided by traditional institutions (whether such learning is mediated by technology or not); and professional concerns regarding the questioning of the benefits of technology (again, I’m not saying that such questions shouldn’t be asked).

In the question time after Anya’s talk I tried to articulate my concerns, but found it difficult to do so. Perhaps I might summarise my feeling by saying “There may be some merit in the issues you have raised and there is a need to gain evidence, in particular to understand the particular circumstances in which such approaches may be beneficial and those in which they can be harmful. But let’s not take the political decision to radically change higher education across the entire sector based on these types of arguments“. Anya wasn’t, of course, suggesting this – but her talk came at a time in which higher education (and, indeed, the broader public sector) is being subjected to large-scale experimentation.

“When The Ax Man Cometh”

Coincidentally, after having dinner with Anya and other early arrivals at the CETIS conference on the Sunday night, I came across a tweet which informed me that Mark Greenfield, director of web services at the University at Buffalo, was about to give a live webinar on “When The Ax Man Cometh“. I came across Mark following my post on “When the Axe Man Cometh – The Future of Institutional Web Teams” which discussed the implications of outsourcing of institutional Web teams. Mark used the Ax(e) Man metaphor in his webinar and accompanying blog post – and I should give acknowledgements to Deborah Fearne who described how “The Axe Man Came” and took her job in Web development earlier this year.

The 40 minute video of the webinar is worth watching particularly by those working in institutional Web management teams and those who may have an interest in discussions around out-sourcing.

Some of the notes I wrote whilst listening to the video:

The topic being addressed: Where will higher ed be in a decade? Will our jobs and departments even exist? And if that axe is coming, how can we survive the cuts? [Note the interview itself starts 6 minutes in].

For-profits can adapt more quickly than HEIs [Is that true? Is that necessarily true? Isn't the implication that HEIs need to be more adaptable rather than that we need to out-source?]

The reality of HE today is that the axe man is coming, especially in the IT sector. There are systemic problems in higher education (e.g. costs of tuition fees for students and related issues which Anya raised in her talk). The view that ‘It will be OK when the economy recovers’ is wrong.

The axe-man has already started working with examples being given including academic programmes being cut by 30%; outsourcing (in Australia) entire IT departments to India; etc.

The cuts may also be manifested in a large-scale increase in services using existing numbers of staff e.g. online learning in one US University is planned to grow ten-fold, but without any new staff – the work will be outsourced to the commercial sector.

Many courses are the same as they were 100 years ago – but there are new models which can be used: e.g. courses which can be taken anywhere and are no longer constrained to an individual institution

The open learning environment provided by OER resources will help the development of the DIY-University [Hmm, so the JISC OER programme could be used by those with vested interests to undermine the mainstream approaches to the provision of higher education services.]

There’s a need to ask what the core mission of a University is. We can unbundle various University functions. HE is ripe for unbundling. [Note these ideas are taken from A University for the 21st Century by James Duderstadt, President Emeritus at the University of Michigan. In his blog post Mark summarised key points from the book:

    • Higher education is an industry ripe for the unbundling of activities. Universities will have to come to terms with what their true strengths are and how those strengths support their strategies – and then be willing to outsource needed capabilities in areas where they do not have a unique advantage.
    • Universities are under increasing pressure to spin off or sell or close down parts of their traditional operations in the face of new competition. They may well find it necessary to unbundle their many functions, ranging from admissions to counseling to instruction and certification.]

Universities aren't primarily in the IT / Web business – so these functions can be unbundled / out-sourced. You need to justify why it exists at all.

Mark suggested the need for Universities to get "back to basics" [Note this phrase has right-wing connotations in the UK].

Those involved in the provision of institutional Web services need to defend what you do: “You need to be able to justify your existence”.

You should quantify why what you do matters. Decisions may be made just on salary costs: $60,000 pa is the average salary cost in the Web team at Buffalo (but overheads add to this). Be proactive, not reactive – e.g. identify the costs of bad Web user experiences. Articulate success stories and efficiency gains – e.g. it has been many years since we printed class schedules. Think about the ROI of Web projects and identify the potential value of a Web project before the project starts.

The recession has fuelled rethinking – but this has been bubbling away for 10 years or so. The tipping point will arrive in 4-5 years’ time: from 2013 college parents will be Generation X and will start to question the ROI of sending children to University. Aren’t there better ways of learning, cf. use of open courseware?

There is a need to follow what’s going on and learn from changes in other sectors – e.g. newspapers, which failed to spot the implications of Craigslist on income from classified adverts.

But such changes can also provide new opportunities, if you accept and embrace change and look for those opportunities.

I feel the issues Mark raised in his webinar (and accompanying blog post) are important ones. I have made similar points over the years – back in 2006 I gave a talk on “IT Services: Help Or Hindrance?” at the UCISA Management Conference in which I suggested that one possible future for IT Services departments would be “to surrender to the changing environment and leave departments to make use of Web services such as GMail and Yahoo to provide institutional email and groupware facilities“. But back then I was using this as an argument for IT Services departments to be more agile and user-focussed rather than making a serious proposal for large-scale out-sourcing – and in any case subsequent arguments that institutions should be exploiting Social Web services have been based on the out-sourcing of the IT components, freeing staff to provide additional services to their users. Loss of an IT infrastructure would, I feel, leave institutions vulnerable, and unable to exploit opportunities which IT can provide to support local requirements. The danger is that today’s cool GMail service (which, admit it, many users prefer to institutional email offerings) will quickly become the slow-moving enterprise service which is frequently criticised today.

I would also add that Mark’s comment that “Those involved in the provision of institutional Web services need to defend what you do: ‘You need to be able to justify your existence’” relates directly to a workshop I ran in Glasgow last Friday on “Institutional Web Services: Evidence for Their Value“. So yes, this is a valid point, which the UK HE sector is addressing.

“Universities spending millions on websites which students rate as inadequate”

Whilst the video of Mark Greenfield’s webinar is worth watching and is useful in stimulating debate, in contrast the Daily Telegraph’s article on “Universities spending millions on websites which students rate as inadequate“ was a poorly argued polemic based on flawed use of statistics. I spent 15 minutes over lunch at the CETIS conference pointing out that, yes, it’s true: “University Web Sites Cost Money!“. I added that the average annual spend on the maintenance of a University Web site of £60,375 cited in the article seems very cheap when you consider the wide range of services provided across institutional Web sites, “ranging from the important promotional and marketing aspects which are designed to attract new students and research income, disseminate information on the value of the work carried out within institutions to the public as well as support collaborative and communications activities within the institution and with partners across the UK and beyond“.

Other people have made similar comments, with Piero Tintori giving the following response to the Telegraph article:

No one University is spending millions on web development. The average investment is actually very low in comparison with other industries / sectors.

As the web is the number one way of recruiting students and research you could say that the investment is too low. What this article really highlights is that Universities aren’t investing enough on their web presence.

In a blog post Ranjit Sidhu described why he considers the Telegraph article on University website costs and value to be unbalanced:

The article in the Telegraph takes one data set; expenditure on website development and places it as a cost on a single value proposition; student experience, without considering to monitise the other important purposes of the university website. We consider this to be unbalanced …

The post went on to provide an objective critique of the underlying methodology used in the Telegraph article.

Reflections

I don’t normally write such long posts. But I’ve realised that writing this post has helped me gain a better awareness of what I believe and of my concerns. I should also add that I am very aware of the political aspects of my comments. I also feel there are differing perspectives between North American and UK views on ownership of IT infrastructure and political considerations. I suspect there will also be generational differences in the UK between those who remember Thatcherite cuts and those who were too young. And as a number of us discussed in the pub when we were in Nottingham, unlike today’s Government, Margaret Thatcher was lower-middle class and initially had a cabinet of Tory wets.

I also tend not to write such political posts. But this is where I am concerned: that the opportunities for new approaches to learning identified by Anya Kamenetz won’t be regarded as ways of providing richer and more diverse learning experiences – rather the ideas of the DIY University will be used as a means of further reducing funding for the sector and disadvantaging those from working-class backgrounds. And the arguments surrounding out-sourcing made by Mark Greenfield will similarly be used as a blunt instrument, rather than in exploring the optimal ways in which higher educational institutions can adapt to a changing environment whilst retaining the expertise needed to exploit local opportunities and requirements.

And returning to the CETIS conference and the suggestion that you should “Never Waste a Good Crisis” – I also feel that you should never waste an opportunity to discuss whether Universities are wasting millions on their Web sites, whether we should be outsourcing our Web and IT infrastructure and whether HEIs can be replaced by the DIY-U. But remember that for some, the answer to these questions will be “Yes” – and not even a “Yes, but no, but yeah, but” :-(

Posted in Finances, General | Tagged: | 6 Comments »

HTML and RDFa Analysis of Welsh University Home Pages

Posted by Brian Kelly (UK Web Focus) on 17 November 2010

Surveying Communities

A year ago I published a survey of RSS Feeds For Welsh University Web Sites which reported on the auto-discoverable RSS feeds available on the home pages of 12 Welsh Universities. This survey was carried out over a small community in order to identify patterns and best practices for the provision of RSS feeds which could inform discussions across the wider community.

Trends in Use of HTML and RDFa

As described in a previous analysis of the usage of RSS feeds on Scottish University home pages, such surveys can help us understand the extent to which emerging standards and best practices are being deployed within the sector and, where usage is low, help us understand the reasons and explore ways in which barriers can be addressed.

With the growing interest in HTML5 and RDFa it will be useful to explore whether such formats are being used on institutional home pages.

An initial small-scale survey of Welsh University home pages has been carried out in order to provide some findings which can be used to inform discussions and further work in this area.

The Findings

The findings, based on a survey carried out on 21 October 2010, are given in the following table. Note that the HTML analysis was carried out using the W3C HTML validator. The RDFa analysis was carried out using Google’s Rich Snippets testing tool, since it is felt that the search benefits which use of RDFa can provide will initially be exploited to enhance the visibility of structured information to Google.

Institution | HTML Analysis | RDFa Analysis
1 Aberystwyth University | XHTML 1.0 Transitional | None found
2 Bangor University | XHTML 1.0 Transitional (with errors) | None found
3 Cardiff University | XHTML 1.0 Strict (with errors) | None found
4 Glamorgan University | HTML5 (with errors) | None found
5 Glyndŵr University | XHTML 1.0 Transitional (with errors) | None found
6 Royal Welsh College of Music & Drama | XHTML 1.0 Strict (with errors) | None found
7 Swansea University | XHTML 1.0 Transitional | None found
8 Swansea Metropolitan University | XHTML 1.0 Transitional (with errors) | None found
9 Trinity University College | XHTML 1.0 Strict (with errors) | None found
10 University of Wales Institute, Cardiff | XHTML 1.0 Strict (with errors) | None found
11 University of Wales, Newport | HTML 4.01 Transitional (with errors) | None found

Discussion

Only one of the eleven Welsh institutions is currently making use of HTML5 on the institutional home page and none of them are using RDFa which can be detected by Google’s Rich Snippets testing tool.
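As an aside, checks of this kind lend themselves to simple automation. The survey itself relied on the W3C validator and Google’s Rich Snippets testing tool, but a rough first pass could be scripted along the following lines; the URL is illustrative and the RDFa detection is a crude attribute sniff rather than a proper parse:

    import re
    import urllib.request

    def inspect_home_page(url):
        """Report the DOCTYPE of a page, whether it is HTML5 and whether
        any RDFa-style attributes appear in the markup."""
        with urllib.request.urlopen(url) as response:
            html = response.read(200_000).decode("utf-8", errors="replace")
        doctype = re.search(r"<!DOCTYPE[^>]*>", html, re.IGNORECASE)
        # HTML5 pages declare a bare <!DOCTYPE html>; RDFa markup carries
        # attributes such as property= or typeof= on its elements.
        is_html5 = bool(doctype) and doctype.group(0).lower() == "<!doctype html>"
        has_rdfa = bool(re.search(r'\b(property|typeof)\s*=', html))
        return (doctype.group(0) if doctype else "No DOCTYPE", is_html5, has_rdfa)

    print(inspect_home_page("http://www.aber.ac.uk/"))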

The lack of use of RDFa, together with previous analyses of use of auto-discoverable RSS feeds, would appear to indicate that University home pages are currently failing to provide machine-processable data which could be used to raise the visibility of institutional Web sites on search engines such as Google.

It is unclear whether this is due to: a lack of awareness of the potential benefits which RDFa could provide; an awareness that the potential benefits may not currently be realised, since search engines such as Google do not yet process RDFa from arbitrary Web sites; difficulties in embedding RDFa due to limitations of existing CMSs; policy decisions relating to changes to such high-profile pages; the provision of structured information in other ways; or other reasons.
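
For those unfamiliar with what the Rich Snippets tool looks for, the following is a minimal sketch – with an invented institution name and URL – of the kind of RDFa markup, using Google’s data-vocabulary.org vocabulary, which the tool can detect:

<div xmlns:v="http://rdf.data-vocabulary.org/#" typeof="v:Organization">
  <span property="v:name">Example University</span>
  <a rel="v:url" href="http://www.example.ac.uk/">www.example.ac.uk</a>
  <span rel="v:address">
    <span typeof="v:Address">
      <span property="v:locality">Cardiff</span>,
      <span property="v:country-name">Wales</span>
    </span>
  </span>
</div>

Adding a block such as this to a home page would make the institution’s name and location available to the tool as structured data.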

It would be useful to receive feedback from those involved in managing their institution’s home page – and also from anyone who is using RDFa (or related approaches) and feels that they are gaining benefits.

Posted in Evidence, HTML, jiscobs, standards | 3 Comments »

University Web Sites Cost Money!

Posted by Brian Kelly (UK Web Focus) on 16 November 2010

Did you know that the average annual spend on the maintenance of a University Web site is £60,375? This figure was reported in the press today. “Wow, that sounds cheap!” will be the response from those who know about the wide range of services provided across institutional Web sites. These range from the important promotional and marketing functions designed to attract new students and research income, and the dissemination of information to the public on the value of the work carried out within institutions, through to support for collaborative and communications activities within the institution and with partners across the UK and beyond.

If, however, you read the headline “Universities spending millions on websites which students rate as inadequate” and spotted that it was published in the Daily Telegraph, you will realise that a different spin has been given, with the article seeking to demonstrate inefficiencies in higher education in order to justify cuts.

This is clearly an article which has been written with a political agenda. But it is also true to say that it is not unexpected – indeed at the Mashed Library event in Liverpool a few months ago I mentioned to Tony Hirst that we should expect to see right-wing papers seeking to use the Freedom of Information Act (FOI) to gather information which can be used against the sector. And this has come to pass, with the article announcing:

Using Freedom of Information legislation the Telegraph discovered eight examples of universities spending between £100,000 and £280,000 on one-off website redesigns, as much as five times higher than the average spending.

Of course a one-off Web site redesign will increase the annual expenditure; as pointed out by a spokesperson for Cranfield University, “this was the University’s only major redesign of the website over the past 15 years and that the large one-off investment saved money in the longer run“. Indeed a failure to invest in a Web site redesign could leave a University open to criticism for failing to respond to user needs for enhanced functionality, richer content, simpler interfaces and the range of other requirements of which those involved in Web site management will be well aware.

The Daily Telegraph article was published a few days after I facilitated a one-day workshop session on “Institutional Web Services: Evidence for Their Value” which was hosted at the University of Strathclyde. In the introduction I described how the workshop was part of a JISC-funded activity led by UKOLN which was seeking to develop “ways of gathering evidence which can demonstrate the impact of services and devise appropriate metrics to support such work“. The launch workshop explored questions of “How can we demonstrate the effectiveness and impact of institutional Web services? What metrics are relevant? What concerns may there be?“

We are now seeing that a failure to gather evidence will leave Universities open to charges of inefficiency. As I am currently attending the CETIS 2010 conference I haven’t the time to write any more on this topic. But I would welcome suggestions on how those involved in providing institutional Web sites can demonstrate the value of the services they provide. Over to you.



Posted in Finances | 18 Comments »

Conventions For Metrics For Event-Related Tweets

Posted by Brian Kelly (UK Web Focus) on 15 November 2010

According to Summarizr there have been 6,927 tweets for the #altc2010 event hashtag, which compares with 4,735 tweets for the #altc2009 event. We can therefore conclude that there has been an increase of almost 50% in Twitter usage. Or can we? If we had carried out the analysis immediately after the event the numbers would probably have been different. And use of either of these hashtags now, when talking about a past event, will have a different context to using the hashtag during the event, when such tags provided some level of engagement with the Twitter community centred around the event’s Twitter stream.

In order to make meaningful comparisons there is a need to be able to filter the tweets in a consistent fashion. Fortunately the Twapper Keeper service allows tweets to be filtered by various parameters, including a date range. And since the Summarizr service uses Twapper Keeper to provide its statistics it is possible to use Summarizr’s metrics in a consistent fashion.

But what date range should be used? An initial suggestion might be the day(s) of the event. But this would fail to include discussions which take place immediately before and after an event. In addition it could mean that tweets from an international audience are not included, such as tweets from an Australian audience posted on the following day. Such confusion over dates might apply particularly to events held in other countries, since the times used in Twitter are based on GMT.

In order to avoid such confusions when I cite statistics from Summarizr I now include tweets posted during the week of an event, typically starting on the Sunday and finishing on the following Saturday. For an event lasting for a day I start on the day before the event and finish on the following day.

The syntax for obtaining statistics over a date range from Summarizr (which draws on the Twapper Keeper archive) is:

http://summarizr.labs.eduserv.org.uk/?hashtag=hashtag&sm=mm&sd=dd&sy=yyyy&em=mm&ed=dd&ey=yyyy

where:

sm is the start month (from 1 to 12)
sd is the start day (from 1 to 31)
sy is the start year (e.g. 2010)

em is the end month (from 1 to 12)
ed is the end day (from 1 to 31)
ey is the end year (e.g. 2010)

For example the following URL will give statistics for the #altc2009 hashtag between 6-11 September 2009:

http://summarizr.labs.eduserv.org.uk/?hashtag=altc2009&sm=9&sd=6&sy=2009&em=9&ed=11&ey=2009

and the following URL will give statistics for the #altc2010 hashtag between 5-12 September 2010:

http://summarizr.labs.eduserv.org.uk/?hashtag=altc2010&sm=9&sd=5&sy=2010&em=9&ed=12&ey=2010

This provides the following statistics:

                            ALT-C 2009   ALT-C 2010
Nos. of tweets                   4,010        6,238
Nos. of twitterers                 650          666
Nos. of hashtags tweeted           125          277
Nos. of URLs tweeted               554          683
Nos. of geo-located tweets           0           35

This indicates that there has been an increase of 56% in Twitter usage between comparable periods in 2009 and 2010.

Note that the statistics for the numbers of geo-located tweets show that in 2009 nobody was providing geo-located tweets for the event hashtag. This data could easily be distorted if Twitter users, having since started to make use of geo-location, were today to tweet using the 2009 event hashtag.

To sum up my proposal:

  • The start date for a one-day event is the previous day and the end date is the following day. This will address internationalisation issues arising from engagement by those in other time zones and cover discussions just before and just after the event.
  • The start date for an event lasting longer than a single day is the Sunday before the event and the end date is the following Saturday, for the same reasons.
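
So, for example, a hypothetical one-day event held on Wednesday 17 November 2010 with the hashtag #example would, under this convention, be covered by the range 16-18 November:

http://summarizr.labs.eduserv.org.uk/?hashtag=example&sm=11&sd=16&sy=2010&em=11&ed=18&ey=2010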

Is this a convention we can agree on, to ensure that meaningful comparisons can be made?



Posted in Twitter | Tagged: , | 5 Comments »

Librarians Experimenting With Facebook Groups

Posted by Brian Kelly (UK Web Focus) on 12 November 2010

Where can Librarians discuss topics of interest?  Clearly lots of places, including mailing lists such as the LIS-* lists hosted by JISCMail; many similar lists based in the US; Web 2.0 collaborative environments such as the Library 2.0 Ning site and, of course, on Twitter.

And now there’s the Library Related People Facebook group. It was set up by Aaron Tay on Saturday 6 November and, as he described in a blog post later that day:

We librarians are consummate users of social media. We are all over Friendfeed, masters of IM, Twitter & Skype. But Facebook is still the 500 pound gorilla in the room and most of us even the least techie librarian probably spends most of our time logged into Facebook.

The Facebook group chat option will allow us to chat with any of the librarians in the group. My hope is for this group to grow such that at anytime there are at least a dozen librarians online when you want to pick the brains of librarians who might be logged into facebook, you can just go to Facebook chat and send out a message.

I do feel that there is a need for such experimentation and so it is good to see that Aaron has set up this group to allow librarians from around the world to gain experience of the role, if any, which Facebook groups might provide for their users as well as possibly providing a forum for discussions by those working in the library sector.

And whilst several concerns related to use of Facebook Groups were discussed on the launch day (in the chat window, which now seems to have disappeared) it perhaps might be more interesting to discuss possible success criteria for an online community. After all, I suspect that if an online social network had been set up using open source software (such as Diaspora, “The privacy aware, personally controlled, do-it-all, open source social network“) the political correctness of the software environment would make some unwilling to ask questions such as: What is the environment for? Do we need it? How much will it cost? What are the risks? How will we know if it is a success and, conversely, how will we know if it’s a failure? But such questions will be asked about use of Facebook as an environment for hosting such a community.

When I joined on Saturday 6 November 2010 the group had 46 members, and by Saturday evening there were 192 members. On 8 November there were 299 members – and the current number can be seen by visiting the members’ page. But before anyone comments that the success of a social network environment shouldn’t be gauged simply by the number of members and growth rates (which, in this case, will have more to do with the extent of Aaron’s professional network and his esteem in the library community), remember that I am seeking to understand how one can identify the success or otherwise of a social network in ways which could be applied equally to a Diaspora environment and a Facebook group. And if you reject the notion of success or failure, then you will be in a weak position to make criticisms of Aaron’s experiment.

My question, then, is: does anyone have any suggestions for ways of identifying the success or failure of such social networks?

Posted in Facebook | 3 Comments »

Further Thoughts on Lanyrd

Posted by Brian Kelly (UK Web Focus) on 11 November 2010

Graham Attwell is a fan of Lanyrd. On the Wales Wide Web he recently informed his readers that “Last night I spent a hour or so playing with new social software startup, Lanyrd. And I love it.” Graham likes it because it is so easy to use and it makes his work easier. Graham also went on to add that “The site is very open. Anyone is free to add and edit on the wikipedia shared knowledge principle.“

Such freedom is an interesting aspect to the service, which I only started to appreciate after I noticed that Martin Hawksey had added a link to a video of one of the plenary talks at the IWMW 2010 event. Hmm, anyone can create an event, add themselves as a speaker and upload slides. Sounds like this could be open to misuse – but we have no evidence that this will happen.

In any case the main interface which a registered user sees lists the events which the people they follow on Twitter are attending or have an interest in. The accompanying image, for example, shows how information on Lanyrd about the forthcoming Online Information 2010 conference includes details of seven people I follow on Twitter who are speakers at the conference. And since there is some degree of trust when you choose to follow someone, I am not too concerned about misleading information being published – and the FAQ states that “We plan to offer pro accounts for conferences in the future, and one of the features will be the ability to lock a conference page so only specific people can edit it.”

The Lanyrd page for the IWMW 2010 event is illustrated. As can be seen, information about 29 speakers is provided, together with access to 9 videos and slideshows of the plenary speakers. But if adding content to Lanyrd is easy, what is the etiquette of doing this?

We can observe how early adopters are creating conference entries on Lanyrd and adding public information such as dates, venues and details of speakers.

Such early adopters may be speakers themselves, but as awareness grows of the service and of how it can provide viral marketing for events (as potential attendees notice that people they follow on Twitter are speaking at events and may choose to register for such events) we might expect event organisers to be pro-active in creating event entries on the service.

But what about including intellectual content, such as links to speakers’ slides, videos of talks, etc.? What are the associated rights issues if a page contains not only links to resources but also embedded slide shows and video clips, as is the case for the Lanyrd page for Paul Boag’s talk on “No money? No matter – Improve your website with next to no cash” which he gave at IWMW 2010?

Established practice means that no permission needs to be sought in order to link to a public Web page. And the embedding of rich content? Well, since these resources have been uploaded to slide and video sharing services such as Slideshare and Vimeo there is surely an implied consent that the embed capabilities of these services can be used?

Which means that a failure by event organisers to be pro-active in creating a Lanyrd page for an event could result in entries being created which fail to include desired branding and acknowledgements, or which are inconsistent in their coverage of specific sessions. But perhaps that is a feature of the bottom-up approach to content creation which easy-to-use services are now facilitating? Such issues need to be considered by speakers as well as event organisers – there are currently 14 speakers listed on the Lanyrd entry for the Online Information 2010 conference. Are the many other speakers listed on the conference programme missing out on exposure and possible networking and marketing opportunities? And will those who participate in elearning conferences have different approaches to those from the library sector? I’ll be interested to see how the Lanyrd page for the Online Educa conference develops.



Posted in Web2.0 | Tagged: | 2 Comments »

Experiences Migrating From XHTML 1 to HTML5

Posted by Brian Kelly (UK Web Focus) on 10 November 2010

IWMW 2010 Web Site as a Testbed

In the past we have tried to make use of the IWMW Web site as a test bed for various emerging new HTML technologies. On the IWMW 2010 Web site this year we evaluated the OpenLike service which “provides a user interface to easily give your users a simple way to choose which services they provide their like/dislike data” as well as evaluating use of RDFa.

We also have an interest in approaches to migration from use of one set of HTML technologies to another. The IWMW 2010 Web site has  therefore provided an opportunity to evaluate deployment of HTML5 and to identify possible problem areas with backwards compatibility.

Migration of Main Set of Pages

We migrated top-level pages of the Web site from the XHTML1 Strict Doctype to HTML5 and validation of the home page, programme, list of speakers, plenary talks and workshop sessions shows that it was possible to maintain the HTML validity of these pages.

A small number of changes had to be made in order to ensure that pages which were valid using an XHTML Doctype were also valid using HTML5. In particular we had to change the <form> element for the site search and replace all occurrences of <acronym> with <abbr>. We also changed occurrences of <a name="foo"> to <a id="foo"> since the name attribute is now obsolete.
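
By way of illustration, the changes were of the following form (a representative sketch rather than the actual page source; the title and fragment identifier are invented):

<!-- Before: XHTML 1.0 Strict -->
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<acronym title="Institutional Web Management Workshop">IWMW</acronym>
<a name="programme">Programme</a>

<!-- After: HTML5 -->
<!DOCTYPE html>
<abbr title="Institutional Web Management Workshop">IWMW</abbr>
<a id="programme">Programme</a>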

The W3C’s HTML validator also detected some problems with links which had not been picked up previously when we ran a link-checking tool. In particular we spotted a couple of occurrences of the form <a href="http://www.foo.bar "> with a trailing space included rather than a trailing slash. This produced the error message:

Line 175, Column 51: Bad value http://www.foo.bar for attribute href on element a: DOUBLE_WHITESPACE in PATH.
Syntax of IRI reference:
Any URL. For example: /hello, #canvas, or http://example.org/. Characters should be represented in NFC and spaces should be escaped as %20.

This seems to be an instance in which HTML5 is more restrictive than XHTML 1 or HTML 4.

Although many pages could be easily converted to HTML5, on a number of pages there were HTML validity problems which had been encountered with the XHTML 1 Transitional Doctype and which persisted with HTML5. These were pages which included embedded HTML fragments provided by third-party Web services such as Vimeo and Slideshare. The Key Resources page illustrates the problem, for which the following error is given:

An object element must have a data attribute or a type attribute.

related to the embedding of a Slideshare widget.

Pages With Embedded RDFa

The Web pages for each of the individual plenary talks and workshop sessions contained embedded RDFa metadata about the speakers/workshop facilitators and abstracts of the sessions themselves.  As described in a post on  Experiments With RDFa and shown in output from Google’s Rich Snippets Testing tool RDFa can be used to provide structured information such as, in this case, people, organisational and event information for an IWMW 2010 plenary talk.

However, since many of the pages about plenary talks and workshop sessions contain embedded third-party widgets – including, for the plenary talks, widgets for videos of the talks and for the accompanying slides – these pages mostly fail to validate because the widget code provided by the services often fails to validate.

A page on “Parallel Session A5: Usability and User Experience on a Shoestring” does, however, validate using the XHTML1+RDFa Doctype, since this page does not include any embedded objects from such third-party services. However attempting to validate this page using the HTML5 Doctype produces 38 error messages.

Discussion

The experiences in looking to migrate a Web site from use of XHTML 1 to HTML5 shows that in many cases such a move can be achieved relatively easily.  However pages which contain RDFa metadata may cause validation problems which might require changes in the underlying data storage.

The W3C released a working draft of a document on “HTML+RDFa 1.1: Support for RDFa in HTML4 and HTML5” in June 2010. However it is not yet clear whether the W3C’s HTML validator has been updated to support the proposals contained in the draft document. It is also unclear how embedding RDFa in HTML5 resources relates to the “HTML Microdata” working draft proposal which was also released in June 2010 (with an editor’s draft version dated 20 October 2010 also available on the W3C Web site).
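
By way of comparison, here is a minimal sketch of how the same fact might be marked up in the two approaches (the name is invented; both examples use the data-vocabulary.org Person type):

<!-- RDFa -->
<div xmlns:v="http://rdf.data-vocabulary.org/#" typeof="v:Person">
  <span property="v:name">A. Speaker</span>
</div>

<!-- HTML5 microdata -->
<div itemscope itemtype="http://data-vocabulary.org/Person">
  <span itemprop="name">A. Speaker</span>
</div>

The information conveyed is identical; the open question is which syntax validators and search engines will settle on.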

I’d welcome comments from those who are working in this area.  In particular, will the user interface benefits provided by HTML5 mean that HTML5 should be regarded as a key deployment environment for new services, or is there a need to wait for consensus to emerge on ways in which metadata can be best embedded in such resources in order to avoid maintenance problems downstream?

Posted in HTML, standards | 1 Comment »

W3C and ISO

Posted by Brian Kelly (UK Web Focus) on 9 November 2010

The World Wide Web Consortium (W3C) describes itself as “an international community where Member organizations, a full-time staff, and the public work together to develop Web standards“. But surprisingly the W3C doesn’t actually produce standards. Rather, “W3C develops technical specifications and guidelines through a process designed to maximize consensus about the content of a technical report, to ensure high technical and editorial quality, and to earn endorsement by W3C and the broader community.“

But this is now changing.  The W3C recently announced that “Global Adoption of W3C Standards [is] Boosted by ISO/IEC Official Recognition“.  The announcement describes how “the International Standards Organization (ISO), and the International Electrotechnical Commission (IEC) took steps that will encourage greater international adoption of W3C standards. W3C is now an ‘ISO/IEC JTC 1 PAS Submitter’ bringing ‘de jure’ standards communities closer to the Internet ecosystem.“

What this means is that the W3C can submit their specifications directly for country voting to become ISO/IEC standards. The aims are to help avoid global market fragmentation, to improve deployment of W3C specifications within government use, and to improve acceptance of a W3C specification when there is evidence of its stability and market acceptance.

In their submission the W3C provided an overview of how they standardise a Web technology:

  1. W3C participants and members usually generate interest in a particular topic.
    W3C usually runs open workshops (events with an open call for papers) to identify new areas of work.
  2. When there is enough interest in a topic (e.g., after a successful Workshop and/or discussion on an Advisory Committee mailing list), the Director announces the development of a proposal for a new Activity or Working Group charter, depending on the breadth of the topic of interest.
    An Activity Proposal describes the scope, duration, and other characteristics of the intended work, and includes the charters of one or more groups (with requirements, deliverables, liaisons, etc) to carry out the work.
  3. When there is support within W3C for investing resources in the topic of interest, the Director approves the new Activity and groups get down to work.
    There are three types of Working Group participants: Member representatives, Invited Experts, and Team representatives. Team representatives both contribute to the technical work and help ensure the group’s proper integration with the rest of W3C.
  4. Working Groups create specifications based on consensus that undergo cycles of revision and review as they advance to W3C Recommendation status.
    The W3C process for producing specification includes significant review by the Members and public (every 3 months all drafts have to be made public on our Web site w3.org), and requirements that the Working Group be able to show implementation and interoperability experience.
  5. At the end of the process, the Advisory Committee (all members) reviews the mature specification, and if there is support, W3C publishes it as a Final Recommendation.
  6. The document enters what is called Life-after-Recommendation where the group/committee does maintenance, collects and publishes errata, considers minor changes, and if the technology is still evolving, prepares the next major version.

The W3C have not yet defined the selection criteria for identifying which specifications are suitable for submission. I think it will be interesting to see how the market acceptance criteria will be used. It will also be interesting to see what the timescales for such standardisation processes will be and whether standardisation will be applied to recent W3C specifications or older ones. It seems, for example, that the ISO/IEC 15445:2000 standard for Information technology — Document description and processing languages — HyperText Markup Language (HTML), which was first published in 2000 and updated in 2003, is the ISO standardisation of the HTML 4.0 specification. We can safely say that HTML 4 does have market acceptance, but the market place has moved on, with developers now interested in the HTML5 specification. Will the ISO standardisation take place several years after a standard has become ubiquitous, I wonder?

Posted in standards, W3C | 2 Comments »

Facebook as an eLearning Platform?

Posted by Brian Kelly (UK Web Focus) on 8 November 2010

Facebook has been described as a walled garden but, following a recent announcement that users can download their own data, we found that Planet Facebook has become less of a Walled Garden, with Steve Repetti, chair of the Data Portability Project, feeling that this news was “A step in the right direction“. But could Facebook evolve to be something more than just a social networking service and be used as an e-learning delivery platform?

Back in 2007 Michael Webb, Director of IT Services at the University of Wales, Newport described “MyNewport – MyLearning Essentials for Facebook“, a Facebook application that allows students to access Newport’s MyLearning Essentials resources from Facebook.

Michael described how this “allows students to start creating their own personal learning environment in a platform other than the one provided by the University“, adding that “we’ve targeted Facebook at the moment as it’s the fastest growing community, but if our users like the idea but want to work in another environment then that is fine – we can create applications for them as well“.

How much development effort did this take, you may wonder? “It took about a day and half from conception of the idea and joining the Facebook developer community on 10th July to launching it as a viable application for our students to use (or comment on) on the 11th July. It was straight forward as our VLE is built from components that can easily be repurposed, and uses open standards such as RSS to allow information to be passed to the Facebook application.

Since then I’ve not been aware of much discussion about development of Facebook applications to support institutional requirements, apart from a document on COURSE PROFILES – A Facebook Application for Open University Students and Alumni written by Tony Hirst, Liam Green-Hughes, Stuart Brown and Martin Weller. Until Friday, that is, when I came across an article in Computer Weekly which described how “London School of Business and Finance offers MBA on Facebook“:

Facebook users can now study an MBA for free at the London School of Business and Finance (LSBF) after the college launched a course that will be available on the social networking website.

Students will be able to study for free and will only pay if they want to be formally assessed for an MBA. The LSBF GlobalMBA, which has received £7.5m investment, is awarded by the University of Wales.

Valery Kisilevsky, group managing director of the London School of Business and Finance, said Facebook was chosen to host the LSBF GlobalMBA application because it offered the chance to widen the availability of education.

“We looked at how our current students communicate with each other and the college, and Facebook is the platform of choice,” said Kisilevsky.

Now the London School of Business and Finance (LSBF) isn’t a University, rather it’s an “educational institution [which] offers industry-focussed programmes designed to reflect global market trends. LSBF attracts the most talented and ambitious candidates from more than 150 countries worldwide“. The Web site goes on to state that LSBF “offers an unrivalled portfolio of professional qualifications, as well as innovative degree programmes at postgraduate and undergraduate level, with the flexibility to tailor your studies to your own career aspirations“.

Is LSBF setting a trend in exploiting a popular global social networking environment which could provide a cost-effective solution appropriate for today’s economic environment? Or will it be seen to be irrelevant? I don’t think we can say. But I think we do need to keep an eye out for weak signals which may hint at possible trends, especially those that might go against our preferred visions of future developments.

So is anyone engaged in development work using the Facebook platform? And what lessons can be learnt from the early work at Newport and the Open University?

Posted in Facebook | 3 Comments »

Gathering and Using Evidence of the Value of Libraries

Posted by Brian Kelly (UK Web Focus) on 5 November 2010

“Sixty Minutes To Save Libraries”

Last week I attended the MashSpa event which was organised by my colleague Julian Cheal. My contribution to the event was to co-facilitate a session on “Sixty Minutes To Save Libraries: Gathering Evidence to Demonstrate Library Services’ Impact and Value”. The session attracted participants primarily from the academic and public library sectors. Unfortunately the session was held in a small and overcrowded room, so it wasn’t possible to break out into the four or so groups we had initially intended and only a single topic could be discussed. However the participants seemed to be in agreement with the approach which Nicola McNee, my fellow co-facilitator, and I took, which was to argue, perhaps rather dramatically, that in order to try to save libraries from the cuts we should be gathering and using data which can demonstrate the value (including the financial value) of library services.

I was pleased that participants appreciated the importance of gathering and using hard evidence in order to be able to justify services. Although the importance of anecdotes and stories was recognised (with the Voices for the Library service being acknowledged as particularly important for those working in Public Libraries) it was agreed that in today’s political and economic environment we need to be able to gather and use hard evidence and data.

But did we succeed in identifying ways in which evidence could be used to demonstrate the value of services provided by Libraries? Looking back at the notes taken by my colleague Marieke Guy it seems that there was an awareness that in some areas academic libraries have not been engaged in collecting evidence. However in other areas, such as gate counts, opening hours, etc., SCONUL has been collecting data from academic libraries. As described on the SCONUL Web site:

SCONUL has been collecting and publishing statistics from university libraries for over twelve years, with the aim of providing sound information on which policy decisions can be based.

Further information is provided which informs readers that “All UK HE libraries are invited to complete the SCONUL Statistical Questionnaire, which forms the foundation of all SCONUL’s statistical reports and services. The questionnaire details library resources, usage, income and expenditure for any academic year.

However, as was discussed at the session, the SCONUL data is not publicly available. It seems that the SCONUL Annual Library Statistics is published yearly – and copies cost £80.

It was felt that closed access to such data was not only counter to moves towards openness and transparency within the public sector but also meant that developers were not in a position to explore the data and provide analyses and interpretations which may not be included in the SCONUL reports. There was a recommendation that a case should be made to SCONUL for opening up access to these statistics. It was also suggested that, since institutions collate this information themselves, individual institutions may choose to publish their data openly.

However in subsequent discussions about data on access to ejournals there was a concern that opening up access to usage statistics could lead to publishers deciding to increase subscriptions for popular ejournals. It was also pointed out that providing evidence of ejournals with low levels of usage could be embarrassing (e.g. if academics had requested the library to subscribe to ejournals of particular interest to themselves).

The dangers of data being misinterpreted were also discussed. It was felt that there is a need for data to be analysed within its context of use – decreasing numbers of physical visitors to the library may be compensated for by increased use of online services.

As well as the discussions about evidence of use of various library services, it was felt that it would be useful to gather evidence of the working practices of those in the sector, which could be used to inform political debates. We found, for example, that a significant minority of attendees at the Mashed Library event were taking time off work to enhance their professional development – although it was recognised that such evidence could be used in various ways (“to demonstrate levels of commitment” vs “removing staff development budgets“).

“Are UK Public Libraries Expensive to Run?”

Coincidentally (or perhaps not!) last Sunday John Kirriemuir published a post in which he asked “Are UK public libraries expensive to run?“. John pointed out that “in the UK for 2008-09 the cost of public libraries came to a shade under £1.2 billion“. Is this expensive? John noted that this amounts to, “at approaching 62 million people, less than £20 each per year for every citizen of the UK“. That is equivalent to:

A starter, verdue and a dessert at Pizza Express. The basic Sky TV package. A fraction of the cost of seeing one Premiership football match. 16 litres of unleaded petrol. 6 or 7 pretentious drinks in Starbucks (it’s just coffee). A pair of cinema tickets with drinks and popcorn. Just half of the cost of an adult ticket, on the gate, for Alton Towers. Any of those.

John then went on to point out the additional costs which would be incurred by attempting to reduce the expenditure on public libraries.

To take an extreme position, closing all public libraries would not produce savings of £1.2 billion since:

  • That’s 25,000 fewer employed people paying tax
  • …and 25,000 more unemployed people claiming benefits.
  • The suppliers of goods and services which libraries need will take a knock-on hit
  • …as will the providers of goods and services bought by those 25,000 library staff
  • …and author and publisher payments will be down, so less tax to be gained there as well.
  • There’s the unquantifiable number of people who use library services to get back into employment, through re-skilling, self-education or finding work. Close libraries and that’s more tax gain lost, more people still claiming benefits.

Closing libraries also means people have to pay more for information, knowledge and communication services. That ranges from a person chatting to housebound relatives online, to a senior finding travel and bus information, to someone learning a foreign language to add to their CV, and thousands of little examples in-between. There’s no enquiry or reference desk, no staff or librarians to answer those information queries any more. Close public libraries and the costs of information pursuit and communications are shifted directly onto those least able to pay for these things.

It would be possible to pick holes in these figures, and there are dangers in making comparisons, as the post does, between the costs of Trident and public libraries – if sectors of the public are concerned about the costs of the former, couldn’t the association backfire?

However I feel that there are benefits to be gained by opening up the debate more widely.  And it was pleasing to hear earlier today that John has been invited to contribute to a discussion of the future of public libraries on a programme to be broadcast on the BBC’s World Service.

What Next?

What should the next steps be in gathering evidence which can be used to demonstrate the value of services provided within the sector? Should we be seeking to open up access to relevant data – or should we be concerned that such data might be misinterpreted or highlight short-comings and deficiencies in the services we provide? And how should we use the evidence and the data? Should we be looking to move the discussions out of the blogosphere and into the public arena, such as the programmes broadcast by the BBC? Or might this be counter-productive? Perhaps we should stay quiet until the recession is over?

I’d be interested in your views. And since John Kirriemuir’s interview on the BBC World Service will take place in two weeks’ time, we have an opportunity to help him make even more persuasive arguments.



Posted in Evidence, Finances | Tagged: | 15 Comments »

Eight Updated HTML5 Drafts and the ‘Open Web Platform’

Posted by Brian Kelly (UK Web Focus) on 4 November 2010

Eight Updated HTML5 Drafts

Last week the W3C announced “Eight HTML5 Drafts Updated”. The HTML Working Group has published eight documents, all of which were released on 19 October 2010.

Meanwhile on the W3C blog Philippe Le Hégaret has published a post on “HTML5: The jewel in the Open Web Platform” in which he describes how he has been “inspired by the enthusiasm for the suite of technical standards that make up what W3C calls the ‘Open Web Platform’“.

The ‘Open Web Platform’

The term ‘Open Web Platform’ seems strange, especially coming from a W3C employee. After all, hasn’t the Web always been based on an open platform since it was first launched, with open standards and open source client and server tools?

Philippe Le Hégaret goes on to say that Open Web Platform is “HTML5, a game-changing suite of tools that incorporates SVG, CSS and other standards that are in various stages of development and implementation by the community at W3C”.

Philippe described these ideas in a video on “The Next Open Web Platform” published in January 2010. From the transcript it seems that the W3C are endorsing the characterisations of “Web 1.0″, which provided a “very passive user experience“, followed by “Web 2.0″, which provided “a more interactive user experience“.

The W3C, it seems, have announced that they are now “pushing the web in two areas, which are orthogonals. One is the Web of Data, that we refer to, of course, the Semantic Web, cloud computings that we are also interested in and mash-ups, data integration in general. And the other one is the Web of Interaction“.

Discussion

Whilst the W3C have always been prolific in publishing technical standards they have, I feel, been relatively unsuccessful in marketing their vision. It was the commercial sector which coined the term ‘Web 2.0′ – a term which had many detractors in the developer community, who showed their distaste by describing it as “a mere marketing term“.

Web 2.0 is a marketing term – and a very successful marketing term, which also spun off other 2.0 memes. So I find it interesting to observe that the W3C are now pro-active in the marketing of their new technical vision, centred around HTML5 and other presentational standards, under the term ‘Open Web Platform’.

And alongside the ‘Open Web Platform’ the W3C are continuing to promote what they continue to describe as the ‘Semantic Web’. But will this turn out to be a positive brand? Over time we have seen the lower case semantic web, the pragmatic Semantic Web, the Web of Data and Linked Data being used as marketing terms (with various degrees of technical characterisation). But will the variety of terms which have been used result in confusion? Looking at a Google Trends comparison of the terms “Semantic Web” and “Open Web Platform” we see a decrease in searches for “Semantic Web” since 2004, whilst there is not yet sufficient data to show the trends for “Open Web Platform“.

Whilst I, like Philippe Le Hégaret, am also an enthusiast for the ‘Open Web Platform’ (who, after all, could fail to support a vision of an open Web?)  there is still a need to appreciate concerns and limitations and understand benefits before making decisions on significant uses of the standards which comprise the Open Web Platform. I will be exploring such issues in future posts – and welcome comments from others with an interest in this area.

Posted in jiscobs, standards, W3C | 2 Comments »

Developments to the Lanyrd Service

Posted by Brian Kelly (UK Web Focus) on 3 November 2010

The Lanyrd service was launched on 31 August and, as described on the Zeldman.com blog: “Lanyrd uses Twitter to tell you which conferences, workshops and such your friends are attending or speaking at. You can add and track events, and soon you’ll be able to export your events as iCal or into your Google calendar (the site is powered by microformats).“. The post went on to add that “Soon, too, you’ll be able to add sessions, slides, and videos“.
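
The reference to microformats is worth noting: it means that event details are embedded in the page markup in a machine-readable form which calendar tools can parse. As a flavour, here is a minimal hCalendar sketch (the markup pattern is the standard microformat; the event details shown are illustrative rather than taken from Lanyrd’s actual pages):

<div class="vevent">
  <a class="url summary" href="http://lanyrd.com/2010/online-information/">Online Information 2010</a>
  <abbr class="dtstart" title="2010-11-30">30 November 2010</abbr>,
  <span class="location">London</span>
</div>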

Yesterday there was the confirmation of  Slides, video, audio, sketchnotes… coverage on Lanyrd. This announcement was accompanied by a reference of the importance which the service places on metadata: “It’s the perfect past-time for metadata addicts like us! … Make sure to add topics and speakers to the sessions. Coverage is deeply integrated with Lanyrd, and shows up in all sorts of places when combined with the right metadata.

In order to explore how this metadata is used I created the following search queries:

  • Conferences in Sheffield containing the string “UKOLN”: see results
  • Conferences about “Web standards” containing the string “web”: see results
  • Conferences about HTML5 containing the string “standards” held in 2010: see results
  • Conferences in London containing the string “metadata”: see results

The final example has a link to a two-day event on “Maximising the Effectiveness of Your Online Resources” which I co-facilitated. At that event George Munroe and I described various approaches which can be used to maximise awareness and use of digital resources. Such approaches included various Search Engine Optimisation (SEO) techniques, use of metadata and exploitation of Social Web services.

Such approaches can apply to exploitation of services such as Lanyrd (and related popular Social Web services such as YouTube, Slideshare, etc.). These services are often very popular, with links to the services helping to enhance their Google ranking – and similarly links from such services can enhance traffic to institutional services. So adding your metadata and appropriate links can be a way of raising the visibility of your resources – and arguably could be more cost-effective than adding such metadata only to in-house services (it should be noted that such services are often very easy to use).

I’ve registered for an account, in part to monitor how the service develops and to claim my preferred username – and in addition because I feel that use of such services can be beneficial and worth the small amount of time needed to register and upload a small number of items. I will also be interested to see if Lanyrd develops so that it could be used as a mainstream event Web site. As I asked recently, Should Event Web Sites Be The First To Be Outsourced? And, if so, what role could Lanyrd play?

Posted in Web2.0 | Tagged: | 3 Comments »

Proposed Recommendation for Mobile Web Application Best Practices

Posted by Brian Kelly (UK Web Focus) on 2 November 2010

The W3C have recently published a Proposed Recommendation of Mobile Web Application Best Practices.  This document aims to “aid the development of rich and dynamic mobile Web applications [by] collecting the most relevant engineering practices, promoting those that enable a better user experience and warning against those that are considered harmful“.

The closing date for comments on this document is 19 November 2010.

There is much interest in mobile Web applications within the UK Higher education sector, as can be seen from recent events such as the Eduserv Symposium 2010: The Mobile University, the FOTE10 conference and the Mobile Technologies sessions at UKOLN’s IWMW 2010 event. Much of the technical discussion which is taking place will address such best practices. Since an effective way of ensuring that best practices become embedded is for a well-known and highly regarded standards body to publish them, I feel it would be useful if those who are involved in mobile Web development work were to review this document and provide feedback.
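
To give a flavour of the advice it contains, one of the document’s recommendations is to use the viewport meta element to identify the intended display size; in markup this takes a form such as the following one-line sketch (not a quotation from the document):

<meta name="viewport" content="width=device-width"/>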

The Mobile Web Best Practices Working Group provides details on how to give feedback, through use of the public-bpwg-comments@w3.org mailing list. Note that an archive of the list is available.

Posted in W3C | 1 Comment »

Fourth Anniversary of this Blog – Feedback Invited

Posted by Brian Kelly (UK Web Focus) on 1 November 2010

This blog was launched on 1 November 2006. It seems appropriate to use this anniversary to reflect on how this blog has developed over the years.

I originally envisaged that the blog would primarily have a dissemination function, describing and discussing significant developments in the information landscape. However over time I found that I was using the blog as my open notebook to keep a record of activities I had been involved in and my observations and thoughts on developments. The use of the blog as an open notebook was partly for my own benefit: the writing process has helped me to reflect on my thoughts as well as helping me to ensure that I will be able to revisit the ideas in the future – indeed many ideas initially described on the blog have subsequently been reused in my talks and my papers. The open approach using this blog has also provided an opportunity for others to comment on the thoughts and ideas, which again has helped me in developing these ideas. I also hope that this open approach has proved beneficial to the readers of this blog who may share similar interests.

This open approach to development and sharing is now central to much of my work. Posts on this blog and slides I host on Slideshare and Authorstream, for example, are provided with a Creative Commons licence and, as mentioned recently, I try to make use of event amplification technologies in order to ensure that remote audiences can benefit from events I speak at even if they aren’t physically present.

I have argued previously on this blog that development projects should be encouraged to be open about their development work. This might include not only publishing information on decisions they have made and details of their successes – and failures – but also encouraging discussions on such issues in a more open environment than a mailing list provides. Using a blog environment can provide an ease of access and engagement which is not available when reports are published on Web sites or sent via email. I have tried to use this blog as a way of demonstrating the benefits of openness, seeking to achieve cultural change among those who may be reluctant to adopt an open approach to development work.

Is this approach working? From the usage statistics for this blog it would seem that the approaches taken are helping to continue to attract readers: this month has been the busiest ever, with an average (at the time of writing) of 311 daily views in October. In addition the blog has been shortlisted for a national award organised by Computer Weekly (and there is still an opportunity to vote).

But although I know that there are significant numbers of readers who have posts delivered via email, I don’t have a clear idea of how users read the posts or the platforms they use. And more importantly I don’t have the bigger picture of readers’ thoughts on the contents of the blog and the approaches taken.

Back in August/September 2007 I carried out a survey of the blog. Three years later it is opportune to revisit that survey, so I invite readers to complete a brief survey, which has just four parts: 1) how you access the blog; 2) your engagement with the blog; 3) the contents of the blog; and 4) other comments.

Thanks in advance.

Posted in Blog | 8 Comments »