UK Web Focus

Innovation and best practices for the Web

Archive for the ‘Web2.0’ Category

Is Smartr Getting Smarter or Am I Getting Dumber?

Posted by Brian Kelly (UK Web Focus) on 13 June 2011

Reviewing Smartr

Back in February in a post entitled Who Needs Murdoch – I’ve Got Smartr, My Own Personalised Daily Newspaper I described Smartr, a personalised Twitter-based newspaper service for the iPhone, iPod Touch and iPad.

This is an application which I now use on a daily basis to view the contents of the links posted by my Twitter community. It has also provided the motivation for me to make greater use of Twitter lists – the lists I have created recently include JISC Services, UKOLN colleagues, IWMW 2011 speakers and attendees at a forthcoming UKOLN workshop on Impact, Metrics and Social Web.

The accompanying image shows the content of links to resources which have been tweeted by accounts on my JISC Twitter list. As might be expected this provides content which reflects the interests of the particular service and is often content published by the service. It does occur to me that JISC Programme Managers who wish to keep informed of project developments may find it particularly useful to use Smartr in conjunction with a Twitter list of their project Twitter accounts. However in addition to providing a simple means of getting relevant content to an iPhone/iPad environment, I have to admit that my initial use of this application, when I am on the bus in the morning, is to view the contents tweeted by all of the people I follow on Twitter, as this can provide serendipitous benefits which are not provided when following official accounts.

Smartr Developments

Recently I updated the app to Smartr 2.0 and noticed that various people had started to follow me on Smartr, perhaps having read the blog post or a followup post published last month which described how Ariadne Is Getting Smartr.

When someone starts to follow you on Smartr, as with many other social apps, you get an email which provides brief information about how the person is using the service.

As can be seen from the accompanying screenshot of a recent email I received, Dave has 182 followers, 134 sources and 5,208 stories. You can also see the stories which Dave has recently read, which seem to indicate that he has an interest in road racing – this isn’t of particular interest to me so I decided not to follow Dave.

But the links to stories (which I prefer to refer to as articles) which Dave has recently read, as opposed to links he has recently posted, show an aspect of Smartr which I hadn’t been aware of when I first started using the application – whether this is because I was using version 1 or because I wasn’t following anyone within the Smartr app (as opposed to on Twitter) I don’t know.

It seems that when someone follows you on Smartr they can see the articles you have recently read. What might be revealed in my case?

It seems that the articles I have recently read within Smartr include a post which described how World IPv6 Day went mostly smoothly, with a few surprises, another which asked What impact are your resources making and one on Posterous, From SaaS to PaaS Using an API.

So the 19 Smartr users who are following me can see not only the articles I have posted on Twitter but also the articles I have read (and the time I read them). Is this:

  • A great example of sharing resources across one’s community which exemplifies the benefits of adopting a culture of openness?
  • A privacy intrusion which should cause concerns?

What are your thoughts?

Discussion

If you visit the Smartr Web site you will see an image of Smartr running on an iPhone with a link to the iTunes store which enables you to download the app. There are links to articles about Smartr but no obvious FAQ. There is, however, a prominent Smartr byline: “See what your friends are reading on Twitter and Facebook” which perhaps suggests that you are making your reading habits publicly available.  But this aspect wasn’t mentioned in the Mashable article when Smartr was first released.  There is not just a lack of an FAQ on the Smartr Web site, there is also no information provided about release dates and the functionality of the two versions of the software which have been released to date.

Smartr does have a user forum which is hosted on the Uservoice Web site. I published a comment on the forum in which I suggested that there was a need for documentation on the functionality provided by the service and the associated privacy issues. Temo Chalasani, the founder of the company behind Smartr, responded and asked me what documentation I felt was required. Here are my suggestions for an FAQ:

  • When was Smartr first released?
  • What subsequent versions of Smartr have been published and what additional functionality has been provided?
  • What are the privacy implications of using Smartr?
  • Can I read the contents of articles posted by my Twitter followers without others being able to see what I have read and when?
  • Can I block others from following me on Smartr as I can do on Twitter?

Will I Still Use Smartr?

Smartr does raise some interesting privacy issues – and since this is a dedicated app rather than a Web service the use of cookies is not an issue, so the recent EU legislation requiring users to opt in to accepting cookies is irrelevant. Here are some scenarios which may concern some users:

  • The parent who follows their children on Smartr in order to see what links the child has been following.
  • The child who follows their parents on Smartr!
  • The manager who follows members of staff to see what inappropriate articles are being read during work time.
  • The journalist who follows politicians and celebrities in order to write articles about their reading habits.

It should be noted that although it is possible for parents, children or managers to view the links which others post, Smartr provides something different – the ability to see which of the links posted by others are actually being read.

Despite such concerns, I intend to continue to make use of Smartr as I find it such a useful service, even though I am aware that I could follow a link to a Web site which I would normally be embarrassed to be seen reading. For me the important thing is user education, so that users are made aware of the risks, and I would therefore encourage Smartr to highlight them. The question though is “Am I being smart or dumb in using this tool?”

Posted in Twitter, Web2.0 | 17 Comments »

An Opportunity To Investigate Color: a Location-Based Social Photo App

Posted by Brian Kelly (UK Web Focus) on 11 May 2011

The Event Amplifier blog has a post entitled “Color and Elastic Networks” which explores the idea of an “elastic network“. The post introduces this term in the context of a new iPhone app called Color. As is suggested in the blog post, despite some critical comments on the Apple App Store this app could have a role to play in helping to develop networks in the context of an event. I tried it on Monday night at a small music venue but as there were only two of us using the app it didn’t provide any added value. But if you are attending a larger event and there are significant numbers of people using the app, I do wonder whether it could help to engage people by sharing photos, in the way that a Twitter event hashtag enables people to share experiences in bursts of 140 characters.

Tomorrow (Thursday, 12 May) I am attending the Eduserv Symposium on “Virtualisation and the Cloud: Realising the benefits of shared infrastructure“. The event hashtag is #esym11 and, based on last year’s experiences (with, according to Summarizr, 1256 tweets from 190 twitterers) we can expect to see a large number of tweets about the event. Since many of the attendees are likely to have an iPhone (or iPod Touch) – the Android app has not yet been released – might this provide an opportunity to evaluate the potential of a location-based social photographic sharing app? Note that if the function of the app is unclear you may wish to view the accompanying video clip.

Posted in Web2.0 | 1 Comment »

Thoughts on the Purpose (and Future) of Education

Posted by Brian Kelly (UK Web Focus) on 7 May 2011

The #Purposedpsi Event

This time last week I was arriving at the first face-to-face meeting of the Purpos/ed campaign. Purpos/ed describes itself as:

a non-partisan, location-independent organization aiming to kickstart a debate around the question: What’s the purpose of education? With a 3-year plan, a series of campaigns, and a weekly newsletter we aim to empower people to get involved and make a difference in their neighbourhood, area and country.

The launch event, the Purpos/ed Summit for Instigators, was held at Sheffield Hallam University on Saturday 30 April – clearly those who attended the event were passionate about helping to engage in discussions about the future of education – or perhaps they wanted to travel to Sheffield on the previous day to escape the Royal Wedding (I have to admit that in my case both reasons are true!)

Image from Flickr

As can be seen from the list of blog posts, participants at the event seemed to find it stimulating.

The context to the day was summarised by Josie Fraser, chair of the event, who described how she “spent Saturday 30 April in Sheffield, at the Purpos/ed Summit for Instigators, along with 50 delegates from across the UK who had given up their Saturday to take part in a day of discussion and action planning around Purpos/ed”.

Julia Skinner invited readers to “Picture a group of like-minded folk, a state-of-the-art university and cupcakes and you have a recipe for a great afternoon of discussion and debate” whilst Doug Belshaw, one of the co-facilitators, described “yesterday [as] one of the best days of my life“.

I’ve spent a week reflecting on the event. Whilst I too enjoyed meeting like-minded people and was pleased that the event was so well organised, I couldn’t help but feel that the enthusiasm for engaging in a debate on how education can be reshaped was, perhaps, somewhat misplaced. In my 3 minute talk on Education: Addressing the gaps between the fun and the anxieties I suggested that there will always be a need to be anxious about education and that such anxieties will not only be felt by learners but also by those who are engaged in learning processes, including teachers, academics, learning support staff and learning organisations themselves.

“The Death of Universities”

Yesterday I came across a tweet which provided a link to a discussion on “Massively Open Online Courses – the Death of Universities?“.

Image from Wikipedia

The title of the discussion reminded me that the Purpos/ed meeting was held at the Conference 21 venue, which overlooks Park Hill Flats. I was told that this is the largest listed building in Europe – and the Wikipedia entry confirms this. Park Hill Flats are, however, currently empty.  Wikipedia describes how “Although initially popular and successful, over time the fabric of the building has decayed somewhat“.

I couldn’t help but wonder whether we may see a similar fate for the large buildings found on many University campuses. We have seen investment in higher education during the Labour Government which has some parallels with the investment in public housing in the 1950s and 60s. However the approaches taken to providing homes weren’t sustainable and, whether due to a lack of further investment to support maintenance or the occupants’ preference for an alternative living environment, such large council housing estates were either demolished (such as Quarry Hill Flats in Leeds) or mothballed, awaiting further investment, as is the case in Sheffield.

Dave Kernohan has contributed to the discussion on Massively Open Online Courses (MOOCs) by arguing that traditional Higher Education models are under attack from two sides: the government cutbacks which we will all be familiar with and, in addition, the view that independent learners are well-positioned to exploit the availability of open educational resources and the wide range of freely available online tools which can be effective in supporting one’s personal learning network.

Dave Kernohan suggested that “we are also seeing an attack based on stuff like Anya Kamenatz’ idea of a DIY U (http://www.diyu.com” – this echoes my “Dazed and Confused After #CETIS10” post in which I also suggested that the “case for radical innovation” associated with the DIY University could result in the dismantling of higher education institutions, rather than the reforms and improvements which many of those working in education may be seeking.

If the future of education does lie in Massively Open Online Courses it seems to me there will be many empty buildings on campuses. Perhaps every University town will be competing to boast that it has the largest listed building in the country? Of course this won’t happen (just as the developers of the high rise buildings knew that their work would also have a long-lasting impact :-) But I do think that the debate on the purpose of education does need to address the negative implications of ideas for the future.

Posted in Web2.0 | 1 Comment »

Markup.io: Another Simple Service For Annotating Content

Posted by Brian Kelly (UK Web Focus) on 2 May 2011

I was recently alerted to markup.io,  a new Web-based service for annotating public Web sites. In his tweet Pat Lockley observed that this provided “another bo.lt like tool for #ukoer #oer #ocw remixing“.

I installed the Chrome extension to use this service (a bookmarklet is available for other browsers) and annotated the home page for this blog. As can be seen the service creates a copy of the page on the markup.io service with annotations using simple drawings and text tools.

I recently mentioned the Bo.lt service and suggested that although there are obvious copyright concerns in allowing any public Web page to be copied and edited, such an easy-to-use service might be particularly useful in the context of open educational resources (OER) for which licences are available which permit such re-use. It should also be noted that additional annotations can be added – although it does not appear to be possible to delete annotations, so there will be dangers of graffiti appearing (such as, for example, the name of a famous footballer who took out a super-injunction appearing on a BBC news article).

It does strike me, though, that the direct editing of a page which Bo.lt provides does have risks, not least the ease with which content can be forged. Although markup.io also takes a copy of a page and hosts it on its own servers, the annotation approach which the service provides seems to minimise the risks of forgery. Perhaps this is a useful approach for annotating Web-based OER resources?

Posted in openness, Web2.0 | 1 Comment »

The BO.LT Page Sharing Service and OERs

Posted by Brian Kelly (UK Web Focus) on 22 April 2011

Earlier today, having just installed the Pulse app on my iPod Touch, I came across a link to an article published in TechCrunch on the launch of a new service called Bo.lt.  The article’s headline summarises what the service will provide: “Page Sharing Service Bo.lt Lets You Copy, Edit And Share Almost Any Webpage“.

The comments on the article were somewhat predictable; as seems to be the norm for announcements of new services published in TechCrunch, some were clearly fans (“OMG! This is going to change everything!“) whilst others pointed out that the new service provides nothing new: “Shared Copy (http://sharedcopy.com/) is a great service that’s been around for 4 years that does ~the same thing“.

Of particular interest to me, however, were the comments related to the potential for copyright infringements using a service which, as the TechCrunch article announced, “lets you copy, edit and share any page“. As the first comment to the article put it: “I can just see it…this will make it easier for 1) people to create fake bank statements, 2) awesome mocking of news headlines, 3) derivative web designs“.

In order to explore the opportunities and risks posed by this service I registered for the service and created a copy of the home page for my blog and subsequently edited it to remove the left hand sidebar. As can be seen an edited version of the page has been created, and you can view the page on Bo.lt.

So it does seem that it will be easy for people to copy Web pages and edit them for a variety of purposes, including poking fun and creating parodies (has anyone edited a Government Web page yet?) as well as various illegal purposes.

But what about legitimate uses of a service which makes it easy to copy, edit, publish and share a Web resource? The educational sector has strong interests in exploring the potential of open educational resources (OERs) which can be reused and remixed to support educational objectives. We are seeing a growth in the number of OER repositories. Might a service such as Bo.lt have a role to play in enabling such resources to be reused, I wonder? Will Bo.lt turn out to be a threat to our institutions (allowing, for example, disgruntled students unhappy at having to pay £9,000 to go to University to create parodies of corporate Web pages) or a useful tool to allow learners to be creative without having to master complex authoring tools?

Posted in openness, Web2.0 | Tagged: | 2 Comments »

Zapd – Opportunity or Threat?

Posted by Brian Kelly (UK Web Focus) on 15 April 2011

Introducing Zapd

I came across Zapd whilst browsing Apple’s App Store on Wednesday night. It was a featured app, available for free and was highly rated – so I installed it on my iPod Touch. A few minutes later I had created a Web site containing annotated photos of a wedding I went to over the weekend. The application’s byline – “Websites in 60 seconds from your iPhone” – seems to be true. Zapd seems to provide a useful tool for such social applications, but could it be used in a professional context, I wondered. Or might it be regarded as a threat by Web professionals, who might doubt whether it is possible to create a Web site so quickly, and question the underlying technical approaches (does it validate? does it conform with accessibility guidelines?), the legal implications, the dilution of an institution’s brand or the sustainability of the content. Does Zapd provide an opportunity or a threat?

Using Zapd

Yesterday I attended the launch event of the Bath Connected Researcher series of events which has been summarised in a post by Jez Cope, one of the organisers. The #bathcr event (to use the event’s Twitter hashtag) began with a seminar given by Dr. Tristram Hooley who described how he has used social media in his research and to pursue his academic career. Tristram has written a blog post about the seminar which includes access to his slides which are embedded in the post. In addition a recording of the seminar is also available.

The seminar was aimed at researchers who may be new to social media. I got the impression that many of the participants had not used Twitter to any significant extent. I had been invited to participate in a workshop on the use of Twitter which was held after the seminar. As I could only attend the workshop briefly it occurred to me that I could try Zapd to see if I could create a Web site which shows how I use Twitter on my iPod Touch.

I captured screen shots of Twitter’s mobile client, Tweetdeck and Smartr (see recent post) and added text which showed the benefits of Tweetdeck’s columns for providing filtered views of tweet streams (e.g. for an event which has a hashtag such as #bathcr) and how Twitter lists can be used to provide additional filtering capabilities for the delivery of Web pages from selected Twitter accounts. It took 10 minutes to create and publish the Web site on my iPod Touch while I was also listening to Tristram’s seminar.

It should be noted that the application had created a Web site with its own domain (http://1a5c.zapd.co/). So this application does seem to provide something more than uploading photos to Flickr.

Discussion

Is this a Web site? After all it’s only a simple single page containing text and a few images. But as it has its own domain name surely it must be regarded as a Web site. But should such Web sites be allowed to be created – aren’t they likely to infringe institutional policies? Aren’t we moving away from a distributed environment and towards a centrally managed environment for Web resources? After all, as was suggested to me on Twitter, aren’t Web sites which can be created in less than 10 minutes likely to be forgotten about a week later?

Perhaps this is true, but for me an important aspect of the Web is in providing a communications environment and not just an institutional tool for the publication of significant documents. And sometimes the communications may be an informal discussion – and I think that Zapd could have a role to play in that space.

I also think that we should be willing to learn from new approaches. Being able to create a Web site on a mobile device is quite impressive. It was also interesting to observe how the service creates a new domain name for each resource created.  Should this be something for institutions to consider?

I regard Zapd as another tool in my Personal Learning Environment which I’m happy to use if it fulfils a useful purpose. And if it fails to do that, I’m happy to throw it away. And with 100,000 downloads since its launch two weeks ago it seems I’m not alone in exploring its potential. What’s your take?

Posted in Web2.0 | Tagged: | 9 Comments »

Seminar on “Mobile Technologies: Why Library Staff Should be Interested”

Posted by Brian Kelly (UK Web Focus) on 21 March 2011

I was recently invited to give a staff development session on mobile devices to staff from the University of Bath Library. The title of the seminar was “Mobile Technologies: Why Library Staff Should be Interested” and the slides I used are available on Slideshare and embedded below. As well as describing how I use mobile devices (in particular the iPod Touch) the seminar also provided an ideal opportunity to demonstrate various uses of mobile technologies. This included:

Comments on the talk were made in Bambuser. In addition discussions also took place using the #bathlib Twitter hashtag. Afterwards Storify was used to aggregate these tweets.

The point of using these technologies was to illustrate how mobile devices can be used to both publish and view lectures (on this occasion I used a portable Apple Mac to stream the video although I have previously used an iPod Touch and an HTC Desire Android phone to do this). There was some discussion about the quality of the video and audio. I was able to ask the remote audience for their feedback and received the following comments on Twitter:

  • Audio good, video patchy at first but now pretty good – bit blurry but very much what you’d expect from a phone and v. acceptable #bathlib
  • #bathlib Video quality better now than at start of session. Beard concealing lip-synch quality

Comments made on the Bambuser channel included:

  • 11:26  anonymous: Hi Brian!  Bir jerky on the video, audio is fine. :)
  • 11:26  working pretty well brian: Yeah a bit jerky now
  • 11:27  itsme: video jerky audio good
  • 11:27  lescarr: Quality of video & audio very good. It does halt sometimes.
  • 11:27  mhawksey: audio is great, vid a bit jerky cam keeps refocusing
  • 11:29  Jo Alcock: Audio OK – video a bit jerky (but my connection isn’t very good here)
  • 11:30  Jo Alcock: Started watching it on iPad (through Twitter app), works well but moved to desktop now to enable chat
  • 11:30  Nicola: As tweeted: Audio good, video patchy at first but now pretty good – bit blurry but very much what you’d expect from a phone and v. acceptable #bathlib
  • 11:33  working pretty well brian: Video fairly patchy – Mahendra, Audio ok

You can judge for yourself how good the video and audio were by viewing a recording of the video. It should be noted that the quality of the audio was the most important aspect, with the video helping to provide a context for the slides being displayed.

During the talk I mentioned how such lightweight video and audio streaming devices (and video recording devices such as a Flip camera) can help to enhance the benefits of such staff development courses. I described how members of staff at the University of Bath Library who were unable to attend would be able to view the video. But in addition, making such resources publicly available can help to enhance the ROI associated with the preparation and delivery of such courses. As can be seen from the accompanying image there have so far been 62 views of the talk (of which 40 were of the live video stream). As @annindk (Ann Priestly, an information professional currently working in Denmark) commented:

Watched yr seminar over lunch – thanks! Quality just fine, thinking ROI must be good for these quick sessions

The question of costs and ROI arose during the discussions after the presentation. “What are the costs in making use of such technologies and can the investment be demonstrated to provide benefits?” was how I interpreted one question I received. Following a show of hands it appeared that everyone in the room (apart from possibly one person) had a phone which contained a camera. So we will probably find that the capital costs of purchasing mobile devices have already been paid and, as phones are upgraded, their functionality will continue to be enhanced. So rather than having to justify the costs of centralised provision of, say, video recording systems in lecture theatres, I suggested that it would be more appropriate to explore a bottom-up approach, with individuals taking responsibility for recording themselves or their colleagues. A post on the DMU Mashed Library blog picked up on this idea:

One interesting point that came out was Brian’s description of people tweeting their comments on attending conferences to a wider (twitter reading) audience: Can this really be seen as engaging in support for the Big Society? I guess I was consciously doing this from Eduserv’s ‘Work Smarter, not Harder’ workshops #oa11.

My suggestion that taking responsibility for making resources available beyond their immediate target audience could be regarded as a form of the ‘Big Society’ was slightly tongue-in-cheek. But surely if one can provide materials which will be of benefit to others at little additional cost or effort, we should be looking to do this?  And as there were about 25 people in the seminar but 40 people watching the live video stream, can’t this be regarded as providing additional ROI?

Posted in Events, Web2.0 | Tagged: | 3 Comments »

Scribd Seems to be Successful in Enhancing Access to Papers

Posted by Brian Kelly (UK Web Focus) on 10 January 2011

I first wrote about the Scribd document repository service back in March 2007 in a post entitled “Scribd – Doing For Documents What Slideshare Does For Presentations“. Since then I have uploaded a number of papers to the service.  But almost three years on, how has the service developed?

My original post summarised some of the benefits of the service but highlighted a number of concerns:

Has Scribd raised the bar in users’ expectations for digital repositories? In some respects, I feel it has. However there are concerns which need to be recognised:

  • Poor quality resources which are hosted: there is no guarantee of the quality of the resources which are hosted on Scribd. And there are copyrighted publications (including those from O’Reilly) which have already been uploaded.
  • Sustainability of the service: As with all of these types of services, there is the question as to whether such services are sustainable. Techcrunch reported on 6 March 2007 that the service “is coming out of private beta this morning with a fresh Angel investment of $300K on top of their original Y Combinator nest egg of $12,000.“ This may keep the service running for a short time, but will it be around in the medium to long term? And what will happen if copyright holders, such as O’Reilly, take the service to court for its misuse of their copyrighted resources (as Viacom have recently done to YouTube).
  • Lack of an interoperable resource discovery architecture: The approach taken by Scribd is not interoperable with the approach being taken by the JISC development community, which is looking to support the development of distributed interoperable digital repository services which make use of OAI-PMH.

Three years later the service is still available. And looking at the statistics for access to documents I uploaded to the service, it also seems very popular: during 2010 there were no fewer than 11,729 views of the 15 papers I uploaded to the service, an average of 32 per day. As you can see from the graph below there were two significant peaks in the year, when there were over 800 views in a day. If I remove these outliers by viewing the statistics for the last six months of the year I find 4,215 views in the six month period, giving an average of 24 per day.
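For anyone who wants to check the arithmetic, here is a minimal sketch of the calculation (the view counts are the figures quoted above; the day counts are my own approximations, so this is a rough check rather than an exact reproduction of the Scribd statistics):

```python
# Rough check of the Scribd viewing figures quoted above.
# View counts come from the post; day counts are approximations.

views_2010 = 11729        # views of the 15 uploaded papers during 2010
days_2010 = 365
print(round(views_2010 / days_2010))   # -> 32 views per day

views_h2 = 4215           # views during the last six months of 2010
days_h2 = 184             # 1 July - 31 December 2010
print(round(views_h2 / days_h2))       # -> 23, close to the quoted average of 24
```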

In comparison looking at the usage statistics for my 26 papers hosted in the University of Bath Opus repository I find that there have been 2,505 views during 2010.

Hmm, the repository has almost twice as many papers and resources in the repository are linked to from the UKOLN Web site and  from posts on this blog.  The repository also benefits from being part of a larger repository ecology, with access available from services such as OpenDOAR and MIMAS’s Institutional Repository Search.  And yet the Scribd service seems to get significantly more visits.

Looking at a specific instance, my most recent paper, “Moving From Personal to Organisational Use of the Social Web“, was presented at the Online Information 2010 conference at the end of November. This paper was uploaded to the University of Bath repository and was mentioned in a blog post on “Availability of Paper on “Moving From Personal to Organisational Use of the Social Web”” which linked to the copy in the repository. The paper was also uploaded to Scribd – and this was also mentioned in the blog post (and was, indeed, embedded in the post). The usage statistics to date (10 January 2011) are 53 views in the University of Bath repository and 447 views on Scribd.

Scribd also provides an easy-to-use interface for viewing usage statistics for individual papers. As can be seen from the image, there was a peak (of 181 views) on the day the blog post was published, with a smaller peak (102 views) three days previously. The total number of views from embedded reads (i.e. people who read the blog post and may – or may not – have actually read the embedded paper) is 349. This leaves 160 views of the paper within the Scribd environment – over three times as many views as received for the copy in the institutional repository.

Whilst I can’t help but think that the usage statistics are flawed, I don’t have any evidence of this. I would appreciate suggestions as to why the numbers of views seem so large. I also suspect that there will be views from people who were searching for information provided in the papers – and if only 10% of the views came from satisfied users that would be on a par with the views of the larger number of papers in the institutional repository (whose figures are also likely, of course, to be inflated by readers using Google to view papers which aren’t of interest to them).

Now Scribd does seem to host, how shall I put it, a wide variety of types of documents, not all of which are of relevance to researchers. But the service does have a variety of features which can help to enhance access to documents such as links to Social Web services such as Twitter and Facebook for promoting documents of interest to one’s professional network and the ability for documents to be embedded in other Web sites.

So if one wishes to maximise the impact of one’s ideas will the institutional repository or a commercial service such as Scribd provide the best solution? Or perhaps one should use both approaches? And if you feel that researchers will prefer to use a more research-friendly environment than is provided by Scribd, remember that researchers, like everyone else, use Google, which will also find resources of dubious scholarly relevance for searches.

Posted in Repositories, Web2.0 | Tagged: | 4 Comments »

What’s the Value of Using Slideshare?

Posted by Brian Kelly (UK Web Focus) on 23 December 2010

Back in August Steve Wheeler tweeted that “Ironically there were 15 people in my audience for this Web 3.0 slideshow but >12,000 people have since viewed it http://bit.ly/cPfjjP“.

I used that example in a talk on “What Can We Learn From Amplified Events?” I gave in Girona a week later – and in my talk I admitted that not only had I read the tweet while I was in bed but that I also viewed the slides in bed.

I made this point as I wanted to provide additional examples of the ways in which traditional academic events, such as seminars, are being amplified and how such amplification is increasingly being used by growing numbers of users who now have easy access to resources, such as slides used in seminars, which previously were not easy to access.

In a post entitled “Web 3.0 and onwards” Steve has brought this story up-to-date:

one of the surprising highlights for me was the aftermath of a presentation I gave at a school in Exeter, South West England, in July. I was invited by Vitalmeet to present my latest views on the future of the web in education, so I chose to talk about ‘Web 3.0 – the way forward?’ When I arrived, the room wasn’t that ideal, and the projector was on its last legs. Only 15 people turned up, and that included the organisers. Not an auspicous. I gave my presentation, and no-one wished to asked any questions afterwards. I made for the door… then someone asked me if they could have my slides. I promised I would post them up on my Slideshare site so they could gain access.

To say I was amazed at the response is an understatement. My Web 3.0 slideshow received 8,000 views during its first week. Within the month, the count had risen to over 15,000 views – my original audience had multiplied a thousand times. Even more valuable for me, many people commented and shared their ideas to me, which led to to write further blog posts, and publish a second, related post entitled Web x.0 and beyond.

The question I have is “Can we estimate the value which has been generated following the uploading of the slides to Slideshare and the subsequent promotion of the resource?“.

I have met Steve a couple of times and have found him to be a stimulating speaker and his blog is on my ‘must-read’ list. So I would be happy to suggest that his talk is likely to have been well-received by the 15 people in the audience. I could suggest that he might have received a 100% rating on the content and style of presentation – but there may have been someone in the audience who had already seen the talk and perhaps someone else who might not have been feeling well or for whom it wasn’t an area of interest. So let’s suggest a 90% average rating from the 15 people, which gives us an overall 13.5 ‘satisfaction’ rating (number of people × estimated rating).

But what of the 17,406 views of the slides on Slideshare? The presentation will be lacking Steve’s physical presence, his engagement with the audience and his responses to questions. Might we then suggest that this can, at best, provide only a 10% satisfaction rating? We also need to remember that the 17,406 views will not necessarily relate to 17,406 different users – I viewed the slides on my iPod Touch in August and have just visited the Slideshare page again, for example. It is also difficult to know whether the viewers looked at all the slides or perhaps just the first few slides and then left. In light of such considerations, let’s suggest that the audience who have properly viewed the slides might be 10% of the total number of views. This then gives us a ‘satisfaction’ rating of 174.
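To make the back-of-an-envelope arithmetic explicit, here is a minimal sketch of the calculation; all of the percentages are the guesses made above, not measured values, so the sketch only illustrates the formula rather than proving anything:

```python
# Crude 'satisfaction rating' = estimated audience size * estimated rating (0-1).
# Every number below is a guess from the post, not a measurement.

def satisfaction(audience, rating):
    return audience * rating

# Live seminar: 15 people, assumed 90% average rating.
live = satisfaction(15, 0.9)                      # -> 13.5

# Slideshare: 17,406 recorded views, assuming only 10% are distinct viewers
# who properly read the slides, and a 10% rating for slides on their own.
online = satisfaction(17406 * 0.10, 0.10)         # -> ~174

print(live, online)
```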

So according to this formula the availability of the slides on Slideshare has provided a greater ‘impact’ than the live seminar.

Nonsense, I hear you say, and I agree. But if there had been only one person at the seminar and 1 million viewers online, and we found that they all rated the slides highly, might we conclude that the availability of slides on Slideshare can provide a greater ‘impact’? I think we could, so the challenge would be to develop a more sophisticated algorithm than my back-of-an-envelope calculation.

But what are we trying to measure?  Perhaps rather than Steve’s presentational style and personality, which is likely to influence an evaluation given immediately after a talk, we should be looking at the impact of the talk afterwards.

Would it be useful, I wonder, to contact people a few months after a talk (in this case the talk took place four months ago) and ask them to recollect what the talk was about and what they had done differently as a result of it? We could then compare the responses from the local and remote audiences to see if there are any significant differences. I should say that my recollection of the slides (which I’ve not looked at while I’ve been writing this post) was that Steve said that Web 2.0 was important in an elearning context and that Web 3.0 is now coming along which can build on Web 2.0 and should be treated seriously. Of course Steve may have been using his slides ironically, in which case I may have picked up the wrong message.

What do you think Steve is saying from just looking at his slides (which are hosted on Slideshare)? And what will you remember in four months’ time? And if the answer is ‘not a lot’ might that require us to ask questions about the benefits and value of traditional seminars? What, after all, is the ROI of a seminar? Might it, I wonder, be the networking? If, as a result of the seminar, plans were made and implemented afterwards, this could be a more tangible impact factor.

And in the online environment perhaps the 226 Facebook users who have ‘liked’ the presentation, the 132 Slideshare users who have favourited it, the 798 users who have downloaded the presentation and the 21 comments received might also provide some tangible indications of value – although, of course, they may be liking and commenting on the design of the slides and not on their content!

Posted in Web2.0 | Tagged: | 7 Comments »

Trends For University Web Site Search Engines

Posted by Brian Kelly (UK Web Focus) on 15 December 2010

Surveys of Search Engines Used on UK University Web Sites

What search engines are Universities using on their Web sites? This was a question we sought to answer about ten years ago, with the intention of identifying trends and providing evidence which could be used to inform the development of best practices.

Search engines used across UK Universities in 1999

An analysis of the first survey findings was published in Ariadne in September 1999. As can be seen from the accompanying pie chart a significant number (59 of 160 institutions, or 37%) of University Web sites did not provide a search function. Of those that did, the three most widely used search engines were ht://Dig (25 sites, 15.6%), Excite (19 sites, 11.9%) and a Microsoft indexing tool (12 sites, 7.7%).

Perhaps the most interesting observation to be made is the diversity of tools which were being used back then. In addition to the tools I’ve mentioned, universities were also using Harvest, Ultraseek, SWISH, Webinator, Netscape, WWWWais and Freefind, together with an even larger number of tools which were each in use at a single institution.

The survey was repeated every six months for a number of years. A year after the initial findings had been published there had been a growth in use of the open source ht://Dig application (from 25 to 44 institutions) and a decrease in the number of institutions which did not provide a search function (down from 59 to 37).

This survey, published in July 2001, was also interesting as it provided evidence of a new search engine tool which was starting to be used: Google, which was being used at the following six institutions: Glasgow School of Art, Lampeter, Leeds, Manchester Business School, Nottingham and St Mark and St John.

Two years later the survey showed that ht://Dig was still popular, with a slight increase to use across 54 institutions. However this time the second most popular tool was Google, which was being used in 21 institutions. Interestingly it was noted that a small number of institutions were providing access to multiple search engines, such as ht://Dig and Google. It was probably around this time that the discussion began as to whether one should use an externally-hosted solution, given concerns regarding the sustainability of the provider, the loss of administrative control and the use of a proprietary solution when open source solutions – particularly ht://Dig – were being widely used across the sector.

These surveys stopped in 2003. However two years later Lucy Anscombe of Thames Valley University carried out a similar survey in order to inform decision-making at her host institution. Lucy was willing to share this information with others in the sector, and so the data has been hosted on the UKOLN Web site, thus providing our most recent survey of search engine usage across UK Universities.

This time we find that Google is now the leading provider across the sector, being used in 44 of the 109 institutions which were surveyed. That figure increases if the five institutions which were using the Google Search Appliance are included in the total.

What’s Being Used Today?

A survey of Web site search engines used on Russell Group University Web sites was carried out recently. The results are given below.

Institution | Search engine | Example search
1 University of Birmingham | Google Search Appliance | Search University of Birmingham for “Search Engine”
2 University of Bristol | ht://Dig | Search University of Bristol for “Search Engine”
3 University of Cambridge | Ultraseek | Search University of Cambridge for “Search Engine”
4 Cardiff University | Google Custom Search | Search Cardiff University for “Search Engine”
5 University of Edinburgh | Google Custom Search | Search University of Edinburgh for “Search Engine”
6 University of Glasgow | Google Custom Search (?) | Search University of Glasgow for “Search Engine”
7 Imperial College | Google | Search Imperial College for “Search Engine”
8 King’s College London | Google | Search KCL for “Search Engine”
9 University of Leeds | Google Search Appliance | Search University of Leeds for “Search Engine”
10 University of Liverpool | Google | Search University of Liverpool for “Search Engine”
11 LSE | Funnelback | Search LSE for “Search Engine”
12 University of Manchester | Google | Search University of Manchester for “Search Engine”
13 Newcastle University | Google Search Appliance | Search Newcastle University for “Search Engine”
14 University of Nottingham | Google Search Appliance | Search University of Nottingham for “Search Engine”
15 University of Oxford | Google Search Appliance | Search University of Oxford for “Search Engine”
16 Queen’s University Belfast | Google Search Appliance | Search Queen’s University Belfast for “Search Engine”
17 University of Sheffield | Google Search Appliance | Search University of Sheffield for “Search Engine”
18 University of Southampton | Sharepoint | Search University of Southampton for “Search Engine”
19 University College London | Google Search Appliance | Search University College London for “Search Engine”
20 University of Warwick | Sitebuilder | Search University of Warwick for “Search Engine”

In brief, 15 Russell Group institutions (75%) use Google to provide their main institutional Web site search facility, with no other search engine being used more than once.
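As a quick sanity check on that figure, the sketch below tallies the search engines listed in the table above; note that grouping the public Google search, Google Custom Search and the Google Search Appliance under a single ‘Google’ heading is my own simplification:

```python
# Tally the search engines from the Russell Group table above.
# Grouping all Google products under one heading is a simplification.
from collections import Counter

engines = [
    "Google Search Appliance", "ht://Dig", "Ultraseek", "Google Custom Search",
    "Google Custom Search", "Google Custom Search", "Google", "Google",
    "Google Search Appliance", "Google", "Funnelback", "Google",
    "Google Search Appliance", "Google Search Appliance", "Google Search Appliance",
    "Google Search Appliance", "Google Search Appliance", "Sharepoint",
    "Google Search Appliance", "Sitebuilder",
]

grouped = Counter("Google" if e.startswith("Google") else e for e in engines)
print(grouped)                                    # Google appears 15 times
print(100 * grouped["Google"] / len(engines))     # -> 75.0
```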

Note that Google provide a number of solutions including the Google Search Appliance, the Google Mini and the public Google search. Mike Nolan pointed out to me that “you can customise with API or XSLT to make [Google search results] look different” so I have only named a specific solution if this has been given on the Web site or I have been provided with additional information (note that I can update the table if I receive additional information).

Discussion

Over ten years ago there was a large diversity of search engine solutions being used across the sector. The discussions at the time tended to focus on use of open source solutions, with the argument occasionally being made that since ht://Dig was open source there was no need to look any further. There was also a suggestion that the open source Search Maestro solution, developed at Charles University and deployed at the University of Dundee, could have an important role to play in the sector.

However in today’s environment it seems that a Google search solution is now regarded as the safe option, and this seems to be corroborated by a survey carried out by Mike Nolan in December 2008. The potential of Google Custom Search will have been enhanced by the announcement, two days ago, of developments to metadata search capabilities.

There has, however, been some discussion recently on the web-support JISCMail list on software alternatives to the Google Search Appliance. Another discussion on the website-info-mgt JISCMail list has shown some interest in the Funnelback software. But, interestingly, open source solutions have not been mentioned in the discussions.

We might conclude that, in the case of Web site search engines, after ten years of the ‘bazaar’ the sector has moved to Google’s cathedral. What, I wonder, might be the lessons to be learnt from the evidence of the solutions which are used across the sector? Might it be that the HE sector has moved towards cost-effective solutions provided by Google’s free solutions or the richness of the licenced Google Search Appliance or Google Mini? And might this be used to demonstrate that the HE sector has been successful in identifying and deploying cost-effective search solutions?

Posted in Evidence, search, Web2.0 | 13 Comments »

Interoperability Through Web 2.0

Posted by Brian Kelly (UK Web Focus) on 13 December 2010

I recently commented on Martin Hamilton’s blog post on “Crowdsourcing Experiment – Institutional Web 2.0 Guidelines“. In addition to the open approach Martin has taken to the development of institutional guidelines on use of Web 2.0 services, the other thing that occurred to me was how the interoperability of embedding interactive multimedia objects was achieved.

Interoperability is described in Wikipedia as “a property referring to the ability of diverse systems and organizations to work together“. But how is Martin’s blog post interoperable? The post contains several examples of slideshows created by others which are embedded in the post.  In addition to the slides, which are hosted on Slideshare, the post also contains embedded video clips together with an embedded interactive timeline.

How is such interoperability achieved? We often talk about “interoperability through open standards” but in this case that’s not really what is happening. The slides were probably created in Microsoft PowerPoint and are thus either in a proprietary format or in the (open though contentious) OOXML format. But the slides might also have been created using Open Office or made available using PDF. In any case it’s not the format which has allowed the slides to be embedded elsewhere; rather it is other standards which allow embedding that are important (e.g. HTML elements such as IFRAME, OBJECT and EMBED).

It’s also worth noting that applications are needed which implement such interoperability. In Martin’s post he has embedded objects which are hosted in the Slideshare, YouTube and Dipity applications. The ability to be embedded (embeddability?) in other environments may also be dependent on the policies of such services. You can normally embed such objects in Web pages, but not necessarily in environments such as WordPress.com (which restricts the objects which can be embedded to a number of well-known services such as SlideShare and YouTube). I would be interested to know if popular CMS services have similar limitations on embedding content from Web 2.0 services.

If the original objects which Martin used in his blog post had simply been embedded in their host Web environment, perhaps as an HTML resource, they would not have been easily reused within Martin’s blog. Interoperability is not a simple function of the use of open standards; there are other issues, such as market acceptance, which need to be considered. And an open format embedded on a Web page could, ironically, be non-interoperable whereas a proprietary format hosted in a Web 2.0 environment could be widely used elsewhere.

Or to put it another way, shouldn’t we nowadays regard the provision of an HTML page on its own as a way of providing access to multiple devices but restricting use of the resource in other environments? Web 1.0 = publishing but Web 2.0 = reuse.

I’d like to conclude this post by embedding a slideshow from a talk on “So that’s it for IT services, or is it?” which I found a few days ago linked to from a timetable for a HEWIT event held earlier this year. The slideshow hosted on Slideshare is clearly much more useful than the PowerPoint file linked to from the HEWIT timetable – and as the HEWIT timetable has the URL http://www.gregynog.ac.uk/HEWIT/ I can’t help but think that the resource could well be overwritten by next year’s timetable, with the Slideshare resource possibly providing access to the slides for a longer period than the Gregynog Web site.

Posted in standards, Web2.0 | Leave a Comment »

Gap Analysis: They Tweeted At #online10 But Not At #scl10

Posted by Brian Kelly (UK Web Focus) on 6 December 2010

Twitter Was Popular at #Online10

Last week I attended the Online Information 2010 conference, held at Olympia in London on 30 November – 2 December.  Unfortunately due to other commitments I could only attend on the first day.  But I was able to get a feel for the discussions on the next two days by watching the #online10 column in my Tweetdeck Twitter client – and I was able to do this during what would otherwise have been unproductive times such as standing on an overcrowded bus going to work.

At the time of writing Summarizr informs me that there have been 4,342 tweets from 1,022 Twitter users. This evidence suggests that Twitter had an important role to play at the conference, enabling those users to take part in discussions centred around the various talks presented at the conference as well as enabling conference delegates to cultivate and develop professional relationships. Without Twitter, for example, I wouldn’t have met @Ankix and, over a meal and a few pints in the Warwick Arms with longstanding Twitter colleagues @karenblakeman, @hazelh and @akenyg, and @stephanbuettner, another new contact, shared experiences of the implications of the cuts across the library sector in the UK, Sweden and Germany.

Little Use of Twitter at #SCL2010

On the same day that I gave a talk at Online Information I was also presenting a pre-recorded video at the Scholarly Communication Landscape: Opportunities and challenges symposium which was held at Manchester Conference Centre, Manchester. For this one-day conference Summarizr informs us that there had been only 38 tweets from 6 Twitter users, and only my colleague Stephanie Taylor (who was supporting my video presentation) and Kevin Ashley (DCC Director and speaker at the symposium) tweeted more than once. So whilst the far smaller number of tweets for this symposium will be due in part to it being a smaller event, running for a single day, the lack of any participation from the audience is, I feel, interesting.

The page about the event informs us that the symposium aims to “investigate the opportunities and challenges presented by the technological, financial and social developments that are transforming scholarly communication” with the programme going on to add that “Online social networks are playing an increasingly important role in scholarly communication. These virtual communities are bringing together geographically dispersed researchers to create an entirely new way of doing research and creating scholarly work.“

Quite. But this one-day event, which was open to all staff and postgraduate research students at the University of Manchester, seems to have been unsuccessful in providing an opportunity for participants to try out Twitter for themselves – an example of a popular online social network which is playing an increasingly important role in scholarly communication, as we saw from the evidence of its use at the Online Information 2010 conference. But rather than point out what the non-users of Twitter may have been missing (such as the active learning and the community engagement which I described above) it might be more interesting to reflect on the more general issues of how non-users of a service can be identified and how one might gain feedback from them.

Gap Analysis

Getting feedback from users of a service can be easy – you know who they are and you will often have a communications channel with them through which you can invite feedback. But getting feedback from non-users can be much more difficult – although such feedback can be immensely valuable in understanding the reasons why a service isn’t being used and in ensuring that enthusiastic users don’t give a misleading impression of the benefits.

It might be useful to speculate why services aren’t being used. Possible reasons for the lack of Twitter use by the audience at the Scholarly Communication Landscape symposium could be:

  • Technology problems: lack of or problems with a WiFi network could be responsible for a lack of event-related tweets.
  • Technology limitations: Potential Twitter users may feel that use of a Twitter client at an event is too complex.
  • It’s trivial: Twitter might be regarded as a trivial activity.
  • It’s rude: Use of Twitter at an event might be regarded as being rude and inconsiderate to other participants and to the speakers.
  • Personal/professional balance: Twitter users may use it for personal rather than professional purposes.
  • Failure to see relevance: Participants may fail to see the benefits of use of Twitter at events.
  • Relevance not applicable: Participants may appreciate potential benefits of use of Twitter at events but feel such benefits are not applicable for them.
  • Style of working: Use of Twitter (or networked technologies) may not be relevant to personal styles of working.
  • Organisational culture: managers or others in the organisation may frown on such usage.

These are some of my thoughts on why Twitter might not have been used at the symposium, and you may be able to provide additional suggestions.  But how do we find out the real reasons as opposed to our speculations?  And how do we apply approaches for gap analysis to other areas besides use of Twitter? For example, in light of the subject areas which may have been covered at the event, how could we gauge views on the areas such as openness and institutional repositories? How can we gather evidence in order to inform policies on, say, deployment and use of new services or approaches?

Increasingly I’m beginning to think that these types of events should be much more than dissemination channels; they should also provide feedback mechanisms which capture responses, enable aggregated views to be analysed, and so on. For an event aimed at staff and postgraduate research students at an institution, such as the Scholarly Communication Landscape symposium which was open to all staff and postgraduate research students at the University of Manchester, it would seem that there was an ideal opportunity to gain feedback on the opportunities and challenges in the area of scholarly communications. And those opportunities and challenges will be shared by many others in the higher education sector.

My concluding thoughts: events can provide a valuable opportunity for gathering feedback and comments on the areas addressed at the event. There is an opportunity to gather such feedback using simple technologies, feedback which may be very costly to gather in other ways. Open sharing of such feedback can be beneficial to the wider community. So let’s do it.

Or, to provide a more tangible example, one could ask an audience from one’s host institution if they would be interested in using a communications tool such as Twitter or Yammer to support work activities. Or perhaps whether staff would be willing to make their professional outputs available under a Creative Commons licence. An example of how this might be approached is given below.

Posted in General, Twitter, Web2.0 | Leave a Comment »

Availability of Paper on “Moving From Personal to Organisational Use of the Social Web”

Posted by Brian Kelly (UK Web Focus) on 29 November 2010

I will present a paper on “Moving From Personal to Organisational Use of the Social Web” at the Online Information 2010 conference tomorrow as well as, as described previously, via a pre-recorded video at the Scholarly Communication Landscape: Opportunities and Challenges symposium.

The eight-page paper will be included in the conference proceedings and can also be purchased for a sum of £135! However my paper is available (for free!) from the University of Bath Opus Repository. In addition, in order both to enhance access routes to the paper (and the ideas it contains) and to explore the potential of a Web 2.0 repository service, the document has also been uploaded to the Scribd service.

From the University of Bath repository users can access various formats of the paper and a static and persistent URI is provided for the resource.   But what does Scribd provide?

Some answers to this question can be seen from the screen shot shown below.  Two facilities which I’d like to mention are the ability to:

  • Let others know about papers being read in Scribd using the Readcast option which will send a notification to services such as Twitter and Facebook.
  • Embed the content in third party Web pages.

In addition the Scribd URI seems likely to be persistent: http://www.scribd.com/doc/43280157/Moving-From-Personal-to-Organisational-Use-of-the-Social-Web

I had not expected the WordPress.com service to allow Scribd documents to be embedded but, as can be seen below, this is possible.

There are problems with Scribd, however.  Its list of categories for uploaded resources is somewhat idiosyncratic (e.g. Comics, Letters to our leaders, Brochures/Catalogs). There is also a lot of content from UKOLN, my host organisation, which has been uploaded without our approval.  But in terms of the functionality and the ways in which the content can be reused in other environments it has some appeal.  If only these benefits could be integrated with the more managed environment for content and metadata provided by institutional repositories.  But should that be provided by institutional repositories embedding Web 2.0-style functionality or, alternatively, by Web 2.0 repository services adding on additional management capabilities?

Posted in Repositories, Web2.0 | Tagged: | 4 Comments »

Thoughts On The “Crowdsourcing Experiment – Institutional Web 2.0 Guidelines” Post

Posted by Brian Kelly (UK Web Focus) on 24 November 2010

Martin Hamilton, head of Internet Services at Loughborough University, has written a great blog post on the subject of “Crowdsourcing Experiment – Institutional Web 2.0 Guidelines“.  The post, which was used to support an Open Mic session at the CETIS 2010 conference, begins “I’d like use this blog post to do a bit of crowdsourcing around perspectives on institutional Web 2.0 guidelines and policies“. Martin is looking to develop guidelines along the lines of those listed in the Policy Database provided on the Social Media Governance Web site.

But rather than comment on the specifics of the content of the post (which is well worth reading) I’d like to make some observations on the approaches Martin has taken in producing his comprehensive multimedia post covering a variety of aspects related to institutional use of Web 2.0.

Blog post supporting a talk: When talks are given at events the norm is use of PowerPoint (or Open Office in some cases).   Increasingly you’ll find that the slides are made available, often on a slidesharing service such as Slideshare. If the talk is about a peer-reviewed paper the paper itself is the significant resource but at events such as the CETIS 2010 conference there aren’t accompanying papers.  It’s therefore pleasing to see this example of a blog post which complements the talk given at an event.  Indeed, in some respects, the blog post can be more valuable than a peer-reviewed paper as the blog post can make it easier for others to give comments and feedback.

Multi-media document: A document on guidelines for institutional use of Web 2.0 services would often be written using MS Word (or Open Office), perhaps containing images. Martin has written a multi-media document which contains embedded video clips, timelines and slide presentations.  Although we may encourage students to create multimedia essays, how often do IT professionals themselves do this?

Crowd-sourcing feedback: Martin has created an accompanying Google Doc which anyone can contribute to.  I think this is an interesting experiment in providing a mechanism whereby an audience for a talk can be more active participants by contributing to a document  which is based on the contents of the talk.

Embracing openness: Martin’s approach to the development of institutional guidelines for Web 2.0 services is taking place in public with contributions being actively sought.  This is in contrast with in-house developments of institutional policies and guidelines which might be shared with others after they are finalised.

I’d like to see greater use of the approaches which have been taken by Martin. What do you think?

Posted in Web2.0 | Tagged: | 4 Comments »

A Single Web Site For Government Departments! Higher Education Next?

Posted by Brian Kelly (UK Web Focus) on 23 November 2010

A Single Web Site For Government Departments

Yesterday a press release entitled “Digital by default proposed for government services”, published on the Cabinet Office Web site, described how Martha Lane Fox, the UK Digital Champion, has published a report that calls for radical improvement to Government internet services [PDF 5.71MB, 11 pages].

The recommendations in the report call for the “simplification and strengthening of digital government to improve the quality, and consequently use, of online channels“.  The report argues that as well as providing better services for citizens “shifting 30% of government service delivery contracts to digital channels has the potential to deliver gross annual savings of more than £1.3 billion, rising to £2.2 billion if 50% of contacts shifted to digital“.

The key recommendations in the report include:

  • Making Directgov the government’s front end for all transactional online services to citizens and businesses
  • Making Directgov a wholesaler as well as the retail shop front for government services and content by mandating the development and opening up of Application Programme Interfaces (APIs) to third parties.

Government departments first and other public sector organisations, such as Universities, next?  But how should Universities react to such moves towards centralisation of networked services?  Note that in this post I’ll not address the question of whether such moves are desirable or not (discussions which are already taking place on Twitter) – rather I’ll consider the implementation issues which policy makers, who are not in a position to respond politically to Government announcements, need to consider.

Implications For Higher Education

A move towards centralised services for the citizen? Hasn’t the UK higher education sector been championing national services for the last couple of decades?  JISC Services, such as Mimas and EDINA, have been providing centralised access to services for teaching and learning and research for many years, and such services are much appreciated by their large numbers of users.

Mandating the development and opening up of Application Programme Interfaces (APIs) to third parties? That sounds great and is also part of the JISC’s strategy for enhancing access to services – indeed last year the JISC funded the Good APIs work which provided advice on best practices for providing and consuming APIs.

But what of the bigger picture?  Could there be a national user-facing service which provides information about, say, courses provided by UK Universities? Again, the higher education sector has ‘been there, done that’ when a number of higher education agencies (including HEFCE, SHEFC, HEFCW and DENI) set up the Hero (Higher Education and Research Opportunities) service. However, as I described in a post on “Which Will Last Longer: Hero.ac.uk or Facebook?” published in June 2009, Hero, “the official gateway to universities, colleges and research organisations in the UK“, was closed on 4 June 2009.  And if there are suggestions that we should have a centralised online service for delivery of teaching resources we should also remember the lessons of the UK eUniversity, the UK’s £62m e-learning university which was scrapped in 2004 and described as a “shameful waste” of tens of millions of pounds of public money.

Will we see a move towards greater centralisation of networked services in the sector? I think this is inevitable. I also think that this can provide benefits as we build on the experiences in providing national services – which, I should add, are envied by many of those working in higher education in other countries which have not had centralised funding and development to the extent which JISC provides in the UK. But the danger is that policy makers will fail to learn lessons from previous approaches towards centralisation. Remember the saying “Those who forget history are doomed to repeat it“?

Posted in Web2.0 | 1 Comment »

Asynchronous Twitter Discussions Of Video Streams

Posted by Brian Kelly (UK Web Focus) on 22 November 2010

Twitter Captioned Videos Using iTitle

Martin Hawksey’s software for using Twitter to provide captions for video continues to improve.  At UKOLN’s IWMW 2010 event we used the iTitle service to mash together videos of the plenary talks with the accompanying Twitter stream. As you can see from, for example, Chris Sexton’s opening talk at the event, you can go back in time to see not only what Chris said (nothing new in providing a video of a talk) but also what the audience was tweeting about at the time – and you can also search the tweets in order to go directly (once the video has been downloaded into the local buffer) to what may be regarded as crowd-sourced video bookmarks – for example a search for “finance” shows that at 9 mins 35 seconds into the video there was a comment that “Does anyone seriously think HR, Finance, Payroll and Student Record Systems can be run as Shared Services??! #iwmw10“.
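To illustrate the general technique (this is a sketch, not Martin’s actual implementation), the following Python fragment converts tweets with timestamps into SubRip (SRT) captions offset from the time a recording started. The tweet data, start time and fixed caption duration are invented for illustration.

```python
from datetime import datetime, timedelta

# Hypothetical data: tweets from an event and the time the video recording started.
video_start = datetime(2010, 7, 12, 9, 0, 0)
tweets = [
    (datetime(2010, 7, 12, 9, 9, 35),
     "Does anyone seriously think HR, Finance, Payroll and Student Record Systems "
     "can be run as Shared Services??! #iwmw10"),
    (datetime(2010, 7, 12, 9, 12, 2), "Great point about shared services #iwmw10"),
]

def srt_time(delta: timedelta) -> str:
    """Format an offset from the start of the video as an SRT timestamp (HH:MM:SS,mmm)."""
    total_ms = int(delta.total_seconds() * 1000)
    hours, rem = divmod(total_ms, 3_600_000)
    minutes, rem = divmod(rem, 60_000)
    seconds, ms = divmod(rem, 1000)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d},{ms:03d}"

def tweets_to_srt(tweets, video_start, caption_seconds=8):
    """Turn (timestamp, text) tweets into SRT caption blocks relative to the video start."""
    blocks = []
    for i, (sent_at, text) in enumerate(tweets, start=1):
        start = sent_at - video_start
        end = start + timedelta(seconds=caption_seconds)
        blocks.append(f"{i}\n{srt_time(start)} --> {srt_time(end)}\n{text}\n")
    return "\n".join(blocks)

print(tweets_to_srt(tweets, video_start))
```

A real service would also need to fetch the archived tweets for the event hashtag and align them with the published recording; the snippet only shows the timestamp arithmetic.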

Asynchronous Twitter Captioning

That is an example of being able to replay the Twitter discussions which took place during a live event. But what if you wanted to engage in discussions of a recorded presentation? Back in June 2010 Martin published a blog post which described uTitle, a development of his Twitter captioning service, in which “Convergence @youtube meets @twitter: In timeline commenting of YouTube videos using Twitter [uTitle]“. In the post Martin said that “Having looked at synchronous communication I was interested to extend the question and look at asynchronous communication (i.e. what was said about what was said after it was said)“.

An example can be seen in the uTitled version of the When The Ax Man Cometh video, which was originally published on Seth Odell’s Higher Ed Live webinar and featured an interview with Mark Greenfield. I felt that this interview, which Mark has described on his blog, would be of particular interest to those of us working in the UK’s higher education sector as it raises challenging questions about the future of Web and IT services in higher education (and note I should thank Martin for processing the video using uTitle and Seth and Mark for giving permission for the video to be used in this way). In particular it asks the audience to consider the implications of ideas published in a book on A University for the 21st Century written by James Duderstadt, President Emeritus at the University of Michigan:

  • Higher education is an industry ripe for the unbundling of activities. Universities will have to come to terms with what their true strengths are and how those strengths support their strategies – and then be willing to outsource needed capabilities in areas where they do not have a unique advantage.
  • Universities are under increasing pressure to spin off or sell or close down parts of their traditional operations in the face of new competition. They may well find it necessary to unbundle their many functions, ranging from admissions to counseling to instruction and certification.

Although this book was published way back in March 2000 the view that “Universities are under increasing pressure to spin off or sell or close down parts of their traditional operations” is particularly relevant to those of us working in higher education in the UK in 2010.

So if you do want to join in a debate (as opposed to simply passively watch the video) you can add comments to the post on the Higher Ed Live Web site or you can use uTitle to give your thoughts  in real time using your Twitter account. An example of the interface can be seen below in which, in response to Mark Greenfield’s assertion that “For profit companies can adapt more quickly then Universities” I respond “If true, don’t we need to accept need top change rather than accept as inevitable“.

Discussion

Rather than discussing the content of Mark’s talk in this post I’d like to give some comments on the use of Twitter for making asynchronous comments about a video clip.

The first comment is that if you do this as you watch a video your Twitter stream is likely to appear confusing to your followers.  Unlike use of Twitter at an amplified event you will be tweeting on your own, and you will not be taking part in a real-time conversation with others centred around an event hashtag.

Also, unlike a live presentation, it is possible to pause the video while you compose your tweet – and even fast forward to see how the ideas in the talk develop and then rewind and give your tweets. On a pre-recorded video we can benefit from the 20/20 hindsight which is not possible in real life :-)

I am also uncertain as to how people will feel about adding comments to such a video, especially those doing this when no comments have been published – there might be a concern that you will look stupid making a comment which the speaker addresses later on.

I should also add that when I made my two comments I used a second Twitter account in order to avoid spamming my Twitter followers with strange tweets.  (Note that as the account had not been validated by Twitter at the time, the tweets were not being displayed in the Twitter search interface – Martin retweeted the tweets in order to ensure that the uTitle display contained some comments).

I’d like to conclude by asking two questions:

  • Is there a demand for a service which provides captioning of pre-recorded videos?
  • Should Twitter users claim second Twitter accounts which can be used in conjunction with automated agents (such as uTitle)?

Posted in Finances, Twitter, Web2.0 | Tagged: , | 2 Comments »

Further Thoughts on Lanyrd

Posted by Brian Kelly (UK Web Focus) on 11 November 2010

Graham Attwell is a fan of Lanyrd. On the Wales Wide Web he recently informed his readers that “Last night I spent a hour or so playing with new social software startup, Lanyrd. And I love it.” Graham likes it because it is so easy to use and it makes his work easier. Graham also went on to add that “The site is very open. Anyone is free to add and edit on the wikipedia shared knowledge principle.“

Such freedom is an interesting aspect of the service, which I only started to appreciate after I noticed that Martin Hawksey had added a link to a video of one of the plenary talks at the IWMW 2010 event. Hmm, anyone can create an event, add themselves as a speaker and upload slides. Sounds like this could be open to misuse – but we have no evidence that this will happen.

In any case the main interface which a registered user sees is a list of the events which the people they follow on Twitter are attending or have an interest in. The accompanying image, for example, shows how information on Lanyrd about the forthcoming Online Information 2010 conference includes details of seven people I follow on Twitter who are speakers at the conference. And since there is some degree of trust when you choose to follow someone, I am not too concerned about misleading information being published – and the FAQ states that “We plan to offer pro accounts for conferences in the future, and one of the features will be the ability to lock a conference page so only specific people can edit it.”

The Lanyrd page for the IWMW 2010 event is illustrated. As can be seen information about 29 speakers is available and access is available to 9 videos and slideshows of the plenary speakers. But if adding content to Lanyrd is easy, what is the etiquette of doing this?

We can observe how early adopters are creating conference entries on Lanyrd and adding public information such as dates, venues and details of speakers.

Such early adopters may be speakers themselves, but as awareness of the service grows, and of how it can provide viral marketing for events (as potential attendees notice that people they follow on Twitter are speaking at events and may choose to register for such events), we might expect event organisers to be pro-active in creating event entries on the service.

But what about including intellectual content, such as links to speakers’ slides, videos of talks, etc.? What are the associated rights issues if a page contains not only links to resources but also embedded slide shows and video clips, as is the case for the Lanyrd page for Paul Boag’s talk on “No money? No matter – Improve your website with next to no cash” which he gave at IWMW 2010?

Established practice means that no permission needs to be sought in order to link to a public Web page. And the embedding of rich content? Well, since these resources have been uploaded to slide and video sharing services such as Slideshare and Vimeo there is surely an implied consent that the embed capabilities of these services can be used?

Which means that a failure of event organisers to be pro-active in creating a Lanyrd page for an event could result in entries being created which fail to include desired branding and acknowledgements, and in inconsistencies in the coverage of specific sessions. But perhaps that is a feature of the bottom-up approach to content creation which easy-to-use services are now facilitating? Such issues need to be considered by speakers as well as event organisers – there are currently 14 speakers listed on the Lanyrd entry for the Online Information 2010 conference. Are the many other speakers listed on the conference programme missing out on exposure and possible networking and marketing opportunities? And will those who participate in elearning conferences have different approaches to those from the library sector? I’ll be interested to see how the Lanyrd page for the Online Educa conference develops.



Posted in Web2.0 | Tagged: | 2 Comments »

Developments to the Lanyrd Service

Posted by Brian Kelly (UK Web Focus) on 3 November 2010

The Lanyrd service was launched on 31 August and, as described on the Zeldman.com blog: “Lanyrd uses Twitter to tell you which conferences, workshops and such your friends are attending or speaking at. You can add and track events, and soon you’ll be able to export your events as iCal or into your Google calendar (the site is powered by microformats).“. The post went on to add that “Soon, too, you’ll be able to add sessions, slides, and videos“.
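Since the post mentions that the site is powered by microformats, event details could in principle be scraped from its pages and turned into calendar entries. The Python sketch below shows the general idea using classic hCalendar class names (vevent, summary, dtstart); I have not verified Lanyrd’s actual markup, so the HTML and class names below are assumptions.

```python
from bs4 import BeautifulSoup  # third-party library: pip install beautifulsoup4

# Hypothetical hCalendar-style markup; Lanyrd's real markup may differ.
html = """
<div class="vevent">
  <a class="summary url" href="/online-information-2010">Online Information 2010</a>
  <abbr class="dtstart" title="2010-11-30">30 November 2010</abbr>
  <span class="location">London</span>
</div>
"""

soup = BeautifulSoup(html, "html.parser")
for event in soup.find_all(class_="vevent"):
    summary = event.find(class_="summary")
    dtstart = event.find(class_="dtstart")
    location = event.find(class_="location")
    print({
        "title": summary.get_text(strip=True) if summary else None,
        "url": summary.get("href") if summary else None,
        "start": dtstart.get("title") if dtstart else None,
        "location": location.get_text(strip=True) if location else None,
    })
```

An iCal exporter would map these fields into VEVENT entries rather than simply printing them.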

Yesterday came confirmation of “Slides, video, audio, sketchnotes… coverage on Lanyrd”. This announcement was accompanied by a reference to the importance which the service places on metadata: “It’s the perfect past-time for metadata addicts like us! … Make sure to add topics and speakers to the sessions. Coverage is deeply integrated with Lanyrd, and shows up in all sorts of places when combined with the right metadata.“

In order to explore how this metadata is used I created the following search queries:

  • Conferences in Sheffield containing the string “UKOLN”: see results
  • Conferences about “Web standards” containing the string “web”: see results
  • Conferences about HTML5 containing the string “standards” held in 2010: see results
  • Conferences in London containing the string “metadata”: see results

The final example has a link to a two-day event on “Maximising the Effectiveness of Your Online Resources” which I co-facilitated. At that event George Munroe and I described various approaches which can be used to maximise awareness and use of digital resources. Such approaches included various Search Engine Optimisation (SEO) techniques, use of metadata and exploitation of Social Web services.

Such approaches can apply to the exploitation of services such as Lanyrd (and related popular Social Web services such as YouTube, Slideshare, etc.).  These services are often very popular, with links to them helping to enhance their Google ranking – and similarly links from such services can enhance traffic to institutional services.  So adding your metadata and appropriate links can be a way of raising the visibility of your resources – and arguably could be more cost-effective than adding such metadata only to in-house services (it should be noted that such services are often very easy to use).

I’ve registered for an account on this service, in part to monitor how the service develops and to claim my preferred username, and in addition because I feel that use of such services can be beneficial and worth the small amount of time spent registering and uploading a small number of items. I will also be interested to see if Lanyrd develops so that it could be used as a mainstream event Web site. As I asked recently, Should Event Web Sites Be The First To Be Outsourced? And, if so, what role could Lanyrd play?

Posted in Web2.0 | Tagged: | 3 Comments »

iTunes U: an Institutional Perspective

Posted by Jeremy Speller on 25 October 2010

Recent posts which provided surveys of institutional use of third party services for content delivery generated a fair amount of interest and discussion. As a follow-up to the post on “What are UK Universities doing with iTunes U?” Jeremy Speller, Director of Web Services at UCL, has been invited to provide a guest post which provides an institutional perspective on use of this service.


Brian Kelly recently asked What are UK Universities doing with iTunes U? As an early adopter Brian invited me to try to answer that question and to pick up on some of the comments which his post generated.

Let’s be clear on one thing – no one is fooling themselves. Apple is a hardware vendor intent on sales and iTunes U is just one of many ways in which it drives custom to its devices. Some have a philosophical objection to engaging with “trade” in this way, but for me the post-CSR university world demands that we make use of the best that the commercial sector can make available to us. Have I sold my soul for the Yankee dollar? Maybe – but I’d kind of like a job next year. Strangely those who argue otherwise seem to accept Microsoft, Google and the rest.

Having dispensed with that argument let me examine why I believe that Apple has a positive contribution to make to higher education. I can think of no other major hardware vendor which has had such a clear policy over many years of engagement with education. And I’m not talking discount here – I mean services and assistance.

During 2004, Duke University bravely decided to issue iPods to its intake and to populate the devices with course material, timetables etc. Since there was no easy way to update the content en masse, Duke approached Apple to see what could be done. “Project Indigo” was born and iTunes U was the result. What’s important here is that Apple reacted to the requirement of a university and worked with Duke to deliver something that met its need.

It’s worthy of note too that many of the iTunes U team have backgrounds in education rather than software engineering or sales. Indeed Jason Ediger, who has a typical corporate title but for the purpose of this article heads up iTunes U, is a former teacher and educational technologist in the public sector.

Anyway, here are some of my views on “popular” opinions.

iTunes U is a closed ecosystem

Yes it is but the arguments for not using it are thin. In a comment on Brian’s post Andy Powell worried that:

… the overarching emphasis of sites who have bought into iTunesU is that they have bought into iTunesU – the other routes to content are presented as secondary to that. To me, that implies that users and lecturers who choose to use that route are somehow second class citizens of the institution.

I can only speak for UCL, but I would worry about any institution which bought into iTunes U as the only or primary means of distribution. Apple positively discourage use in this way – their take is “we provide the tool as one channel of communication“. UCL’s engagement with iTunes U came out of our desire to develop podcasting and other means of multimedia distribution as part of our mission to increase reach as London’s Global University. We were developing in that direction before iTunes U came to Europe. As far as primary teaching materials are concerned the Moodle course page remains the focus – the podcasts (whether taken from iTunes U or via feeds) are a value-added service to students. This is important for a metropolitan institution where students spend time offline on trains and buses getting about.

It is expensive to run

It depends. If you buy in to iTunes U without a background in multimedia distribution it could be, but I would argue that if you have not worked out a content or media distribution strategy taking into account a range of channels you shouldn’t be looking at iTunes U anyway. I have a department of around 30 souls of which a part (0.25 – 0.5 fte) of one post is a direct result of iTunes U, and that came a year after we joined. We have a multimedia unit who have been producing video since before U-matic was the format of the future. Over time the unit has moved with technology and now concentrates on streamed output and download formats – the staff complement hasn’t varied, they just do things differently. And we’d be doing all that to support a variety of distribution channels anyway.

It is PR fluff

For some reason this view is quite prevalent among those who don’t use the system and in my opinion it misses the point of iTunes U completely. Sure, there is publicity to be had and, in UCL’s case as a launch partner, it was valuable. Of course general PR shorts can be provided. But the real assets should be educational and examples of your institution’s scholarship. How you choose to do this and what material you provide is down to you. We increasingly provide course materials via the internal authenticated part of iTunes U to complement other teaching materials – others would argue that the provision of OER of high quality is the best PR there is for a university.

What wider and innovative uses could be made of the system in future?

Brian asks what the future holds in terms of innovative use of the system. Some of the most interesting uses we heard about at the iTunes U Conference in Munich involved the provision of primary sources for research. Duke University Libraries showed AdViews, a collection of 16mm movie film which had been digitized and which included thousands of TV commercials from the 1950s through to the 1980s. At Ludwig-Maximilians-Universität in Munich over 10,000 PDFs are available as LMU has chosen to provide all dissertations stored in its library back to 2002 as downloads. I’ll admit that at UCL we have yet to fulfill one of our original goals which was to open the system up to students as a collaborative environment and to submit work for assessment but that’s a matter of resource priority internally rather than a limitation of the system. Julie Usher has posted some other thoughts on innovations discussed at the conference.

Will institutional users regret lack of flexibility if Apple move in a different direction?

The lack of future-proofing is to my mind another non-argument because of the way iTunes U is architected. Apple maintain the framework and the serving of links via the iTunes Store mechanism while the feeds and media files themselves are hosted at the institution. This used not to be the case but all new sites since mid-2008, including all UK institutions, are split-hosted. This means that even if Apple pull the plug tomorrow all of your feeds and content remain yours and intact, and deliverable via whatever other channels you have in place.
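To make the split-hosting point concrete: what the institution hosts is, at heart, an ordinary RSS feed with media enclosures, which any podcast client can consume even if the iTunes Store front end were to disappear. Below is a minimal Python sketch of such a feed; the URLs, titles and dates are invented, and a production iTunes U feed would also carry iTunes-specific extensions and branding.

```python
import xml.etree.ElementTree as ET

# Build a minimal, hypothetical institution-hosted podcast feed.
rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "Example University Lectures"
ET.SubElement(channel, "link").text = "https://www.example.ac.uk/podcasts/"
ET.SubElement(channel, "description").text = "Recorded lectures hosted on institutional servers."

item = ET.SubElement(channel, "item")
ET.SubElement(item, "title").text = "Inaugural lecture"
ET.SubElement(item, "enclosure",
              url="https://media.example.ac.uk/lectures/inaugural.mp3",
              length="12345678", type="audio/mpeg")
ET.SubElement(item, "pubDate").text = "Mon, 25 Oct 2010 09:00:00 GMT"

print(ET.tostring(rss, encoding="unicode"))
```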

Those who don’t buy into the ecosystem are 2nd class citizens

Again, if you are only providing iTunes U content this could be seen as an issue but not if you’re adopting the multi-channel model. I accept that at UCL we do sometimes plug iTunes U over other channels and that it’s something we should address. The content is nonetheless available for pretty much any modern device.

The content has poor discoverability

Because the iTunes software is a proprietary browser it does not afford discoverability to search engines. Apple fully accept that this has been an issue and have recently been including iTunes U in their iTunes Preview service. This is a conventional Web-based service which lists and includes metadata for all content in the system. Although it is early days and usage has not yet pushed much of the content to the top of Google rankings, search for a specific item by title and Google will return a top result. Audio content can be played directly in the page though it is still necessary to link out to iTunes to play video at present. Try searching for “Why species are fuzzy” for an example. We also provide links to the preview service for the most popular items from our iTunes U launch page.

So…

… is there a cost-saving to adopting iTunes U as opposed to creating custom portals? Certainly the development grunt is removed, and the system offers students who come to us with their own devices – having already bought into the ecosystem (another saving, as I argued at the recent FOTE10 event) – access to our content. For those of us committed to the distribution of media content whatever the channel, the issue remains that the content has to be created and managed, and therein lies the cost. I believe therefore that our efforts should lie in keeping the creation process efficient and demonstrating the value of the content to our users and paymasters. Content is, after all, still king – but as noted at the Munich Conference:

@thStamm: RT @jeremyspeller … content is king or there’s no point … I agree but we all want king arthur not king richard II #itunesuconf2010


Jeremy Speller has been involved with the UCL Web presence since 1995. Having headed UCL Web Services for a number of years, Jeremy is now Director of Learning & Media Services which, along with the Web, covers AV, design, learning technology, multimedia and photography. Prior to full-time involvement with the Web, Jeremy’s background was in planning and statistics at UCL and previously at the University of Birmingham. Way back when, he ran the Overseas Research Students Awards Scheme at what was then CVCP.

Some of Jeremy’s presentations are on SlideShare. You can also follow Jeremy on Twitter: @jeremyspeller



Posted in Evidence, Guest-post, Web2.0 | Tagged: , | 6 Comments »

How is the UK HE Sector Using YouTube?

Posted by Brian Kelly (UK Web Focus) on 18 October 2010

Profiling UK HE Use of Popular Web 2.0 Services

Following on from recent posts on Planet Facebook Becomes Less of a Walled Garden and What Are UK Universities Doing With iTunesU? the next question should be How is the UK HE Sector Using YouTube? It can be useful for the higher education sector to be able to identify institutional adoption of new services at an early stage so that institutions across the sector are aware of trends and can develop plans to exploit new dissemination channels once the benefits have been demonstrated. I am aware, for example, of failures of institutions to spot the ‘weak signals’ of the importance of the Web in the early 1990s. I have recollections of institutions committing themselves to locally-developed Campus Wide Information Systems (as they were called) or moving to use of Gopher (an Internet technology which was felt to provide benefits of openness – benefits which did eventually materialise, though with an alternative Internet standard!) but failing to respond to the decisions of the small number of institutions who adopted Web technologies in around 1993. Adopting the wrong technologies will, in hindsight, be seen to have been a costly mistake, not just for the individual institutions but also, as we are now very aware, for the tax-payer who ultimately pays for the decisions institutions take.

This recent series of posts therefore aims to identify technologies which are starting to be adopted by institutions, so that we can have a snapshot of how such services are being used. Such an understanding of the trends within the sector can help to inform decision-making and the sharing of best practices, and can also help to demonstrate the return on investment which use of new approaches can provide.  Such information will be of importance in demonstrating the value of the decisions the sector makes to politicians, policy makers and the general public.

ALT’s YouTube Channel

The need to identify ways in which YouTube is being used within the sector occurred to me after I received a tweet about a video of a talk on “When worlds collide – revisiting experiential learning” given by Martin Hall, Vice-Chancellor of the University of Salford, presented at the ALT-C 2010 conference. From the page about this video I discovered ALT’s not-for-profit YouTube channel. This channel is “edited by Matt Lingard, web participation specialist on the ALT Publications Committee. Videos are uploaded and links made that serve to support ALT’s charitable objective, which is ‘to advance education through increasing, exploring and disseminating knowledge in the field of learning technology for the benefit of the general public’.“

At the time of writing (7 October 2010) there are 10 video clips from the ALT-C 2010 conference hosted on this channel, the most popular being the ALT-C 2010 Sugata Mitra video (457 views) followed by the ALT-C 2010 Donald Clark video (320 views).

In addition to the ALT-C 2010 playlist the channel also has playlists for ALT-C 2009, the ALT – EPIGEUM Video Awards and ALT-C 2008.

YouTube provide various metrics for channels, including information on the numbers of views of the video clips and numbers of subscribers.

In addition, as can be seen in the accompanying image, ranking information is provided, and we can see that the ALT channel is the fourth most viewed non-profit channel in the UK for the week.

You can also view details of the traffic rankings for the various YouTube categories, which indicate that the theRSAorg, practicalaction and royalbritishlegion channels had the highest viewing figures for the week in which I captured the statistics.

UK University Use of YouTube

The list of YouTube categories unfortunately doesn’t include Universities, instead offering the following rather eclectic set: Comedians, Directors, Gurus, Musicians, Non-Profit, Partners, Reporters and Sponsors. I therefore had to use YouTube’s search facility in order to identify how UK Universities are using YouTube.  Note, however, that I was subsequently informed that there is a directory of University accounts on YouTube Edu.  I have commented on this directory at the end of this post.

A search for “UK university” revealed that Bath University (my host institution) is in first place with a video in which “Jojo Mayer performs a Masterclass at the Rhythm Course at Bath University” – there have been 199,331 views of this video clip since it was uploaded 3 years ago.

There is a need, however, to be suspicious of searches which reveal that your particular interests are to be found near the top – I suspected that this result reflected my location or profile, although others based elsewhere had similar findings.

Another nearby university, Bristol University, is found in second place. This example, “Bristol University, UK – Study at Bristol – An introduction to one of the very best and most exclusive“, has been provided by the official unibristol YouTube account and has received 18,025 views.  This was the first official University page I found. I have looked through the search results for what appear to be official university accounts. I have excluded individuals’ clips about universities and also channels such as TOEFL Destinations: University of Northampton which aren’t about a specific university, although I have included what appear to be departmental accounts if they have an institutional user name. Note that the results given in the following table were found in the first five pages of results for a search for “UK University” – note that many of the results were for the University of Kentucky, which has the abbreviation ‘UK University’!

No.   Institution   Channel Views   Total Upload Views   Subscribers   Channel Comments   Date Created
1 University of Bristol     915     18,171     27  1 16 December 2008
2 Coventry University (CovStudent) 82,375 1,036,671 1,139 42 26 November 2007
3 RHULLibrary     347       3,847     10  0 08 January 2009
4 Aston University 19,552     89,080    132  2 17 October 2007
5 UoL International Programmes 32,162     74,017   499 17 14 February 2008
6 University of Greenwich     971       9,254     19  1 16 July 2010
7 Northumbriauni     521       6,226     23  1 7 January 2010
8 Huddersfield University International study 1,220      24,195     22  0 15 May 2007
9 The University of Leicester 16,382    246,986    320  1 22 May 2008
10 University of Kent 7,725     26,996    102  7 12 May 2009
11 Canterbury Christ Church University 2,050     25,439     36  0 18 December 2006

Note that ‘Channel views’ is the number of users who have visited a channel page (which contains information about the channel) and the ‘Upload views’ is the total number of views for uploaded videos.

Although I have tried to provide a list based on an objective criterion, I feel it would also be useful if I included details for the University of Bath, my host institution and the Open University, which I know is a significant institutional user of a variety of Web 2.0 services (note that the Open University has three additional official institutional YouTube channels: OU Learn, OU Life and OU Research).

No.   Institution   Channel Views   Total Upload Views   Subscribers   Channel Comments   Date Created
 1 University of Bath 5,011 252,850 93 3 9 August 2007
 2 Open University 257,497 391,625 2,936 56 5 July 2007

Official Directory of University Accounts on YouTube Edu

After writing the first draft of this post I realised that it would be useful to find ways of automatically obtaining statistics on institutional use of YouTube across UK Universities. I asked for suggestions on ways of doing this on the Quora question and answer service and received a response which provided information on the directory of accounts on the YouTube Edu service. As this directory provides different information from that listed above (the University of Bath account, for example, isn’t included) I have left the details I collected in the above table.
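As a sketch of how such data gathering might be automated, the fragment below queries the YouTube GData user profile feed (API version 2) for channel statistics. The endpoint and the attribute names on the yt:statistics element are my reading of that API and should be verified against the current documentation; the account names are examples taken from the tables above.

```python
import urllib.request
import xml.etree.ElementTree as ET

YT_NS = "http://gdata.youtube.com/schemas/2007"
ACCOUNTS = ["unibristol", "CovStudent"]  # example usernames mentioned above

def channel_statistics(username):
    """Fetch subscriber and upload-view counts for a YouTube account via the GData v2 feed."""
    url = f"https://gdata.youtube.com/feeds/api/users/{username}?v=2"
    with urllib.request.urlopen(url) as response:
        entry = ET.parse(response).getroot()
    stats = entry.find(f"{{{YT_NS}}}statistics")
    if stats is None:
        return {"username": username, "subscribers": None, "total_upload_views": None}
    return {
        "username": username,
        "subscribers": stats.get("subscriberCount"),
        "total_upload_views": stats.get("totalUploadViews"),
    }

for account in ACCOUNTS:
    print(channel_statistics(account))
```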

The following 18 accounts are listed in the YouTube Edu directory of UK Universities (and as three are from the Open University this represents 16 institutions, one of which, Said Business School, is part of the University of Oxford):

Adam Smith College, Cambridge University, Coventry University, Cranfield School of Management, Edinburgh University, Imperial College London, LSBF (London School of Business and Finance), Leeds Metropolitan University, Nottingham University, Open University (together with Open University – Learn, Open University – Life and Open University – Research), Oxford Saïd Business School, St. George’s, University of London, University College London, University of Derby and Warwick University.

Let’s now summarise the usage statistics for this official list of UK University accounts on YouTube Edu. Note, however, that only a single Open University account is included in the following table.

Ref. No.   Institution   Channel Views   Total Upload Views   Subscribers   Channel Comments   Date Created
1 Adam Smith College    4,076     25,606    39  ? April 25, 2009
2 Cambridge University 221,280 1,189,778 6,921  ? September 19, 2006
3 Coventry University   82,937 1,039,817 1,147 42 November 26, 2007
4 Cranfield School of Management    5,189     20,607    82  1 October 12, 2009
5 Edinburgh University   31,388   236,884 1,280  ? November 08, 2008
6 Imperial College   48,307   353,355    859  7 April 24, 2008
7 LSBF (London School of Business and Finance)    6,999     96,212    244  7 August 25, 2008
8 Leeds Metropolitan University   67,014   589,659    512 19 January 07, 2008
9 Nottingham University   35,643   284,820    596 10 February 11, 2009
10 The Open University 258,309   392,720 2,944 56 July 05, 2007
11 Said Business School, University of Oxford   56,066   660,541 1,808 60 December 17, 2007
12 St George’s, University of London   41,983   338,276    825 12 August 20, 2007
13 UCL   47,773   287,198    810 27 May 15, 2009
14 University of Derby    8,578   117,906    106  5 September 22, 2006
15 University of Warwick   17,362    90,608    276  6 March 30, 2009

Observations

The first institutional YouTube channels seem to have been created in September 2006 (Derby University) followed by Canterbury Christ Church (December 2006). The next set of institutional accounts were created in May 2007 (Huddersfield University), July 2007 (Open University), August 2007 (St Georges and Bath University), October 2007 (Aston University), November 2007 (Coventry) and December 2007 (Said Business School).

The institutions with the largest numbers of upload views are Cambridge University with 1,189,778 views followed by Coventry University with 1,039,817 views. Note that such statistics will be skewed depending on whether an institution makes use of a single institutional YouTube channel or several (as the Open University does).

It should be noted that the Coventry University account, which has the second largest number of upload views, is provided by students.

What Next?

Having more comprehensive data on the provision and usage of YouTube across the sector can be useful in informing decision-making on use of YouTube as a delivery channel and on how use of YouTube may relate to the institutional provision of in-house video streaming services (such as the LUTube service provided by the University of Leeds).

There might also be the need to clarify ownership of an official YouTube Edu account – in some cases the account listed in the YouTube Edu directory is used as an e-learning delivery channel (such as the St. Georges Clinical Skills Online channel), in others as a channel to provide a students’ perspective on University life (e.g. the CovStudent channel), whereas others, such as the University of Edinburgh, provide a more traditional official University view with, as in this case, an official welcome from the University Principal.

There may also be the need to share examples of best practices and policies. For example the University of Edinburgh channel states that “Please note, the University does not monitor YouTube comments. Please direct any queries via our website“.  Is this a well-established approach and what are the benefits and possible risks of adopting this approach?

Anyone have any comments or observations on the initial set of data listed above or thoughts on how the HE sector might make use of YouTube Edu?

Posted in Evidence, Web2.0 | Tagged: | 13 Comments »

What Are UK Universities Doing With iTunesU?

Posted by Brian Kelly (UK Web Focus) on 11 October 2010

Early Adopters of iTunesU

Back in 2008 Jeremy Speller and Nicolas Watson ran a workshop session on “Podcasting and iTunes U: Institutional Approaches to Scaleable Service” at UKOLN’s IWMW 2008 event. In the session Jeremy, Head of Media Services at University College London and Nicholas from the Open University described how “The Open University and UCL have been pursuing projects to deliver on-demand audio and video podcasting recording and distribution services primarily via Apple’s iTunes U service. In this talk, Nicholas and Jeremy will discuss how the different approaches of two very different institutions impacted on the nature of the two projects, how challenges were addressed and how solutions were developed.

Two years later how has iTunesU developed across UK higher educational institutions?  Are the Open University and UCL feeling slightly embarrassed, like the institutions which decided in 1993 that the future lay with Gopher, or feeling pleased that their institutional commitment had identified an important technology, as was the case when Leeds University set up its institutional Web service in January 1993? There is much that can be learnt from the experiences of early adopters.

Who’s Using iTunesU Now?

Using the iTunes software you can see a display of Universities which have an iTunes U presence. These can be selected by country as shown below. From this we can see that there are currently 16 UK universities and colleges which provide multimedia resources which can be accessed via the iTunes software.

Using a Google search for “itunesU university uk” I looked in some more detail at the information provided by a number of these institutions. A summary is given in the table below.

1 The Open University on iTunes U “In 2008 The Open University joined iTunes U, making available a range of high quality audio-visual assets used in the courses. Featuring over 280 albums with content from over 136 courses, the OU on iTunes U reflects the diversity of the university’s curriculum and the strength of the academic brand. A fantastic learning experience on offer”
2 The University of Oxford on iTunes U “Oxford has had over 3 million downloads from its iTunes U site”
3 Warwick on iTunesU “This free service allows you to access interviews with academics, programmes about research at the University, lectures, teaching materials and content from our student community.”
4 UCL on iTunes U: FAQs “Is iTunes U free to use? Yes. All UCL audio, video and PDF content is free to download. The iTunes software is also free to download.”
5 Welcome to Cambridge University on iTunesU “In October 2008 the University launched its iTunesU site, from which you can download educational multimedia resources free of charge. There is a wide choice of both video and audio, which will grow on a month-by-month basis.”
6 iTunes U – The University of Nottingham “With The University of Nottingham on iTunes U, you have access to hundreds of free educational video and audio podcasts. Anytime. Anywhere!”
7 Coventry University on iTunes U “Coventry University was among the first universities in Europe to distribute multimedia content in conjunction with Apple’s iTunes U service for education resources. The Coventry University iTunes U site launched in June 2009. It now has more than 400 audio and video podcasts from around our campus for you to download for free.”
8 Experience the University of Hertfordshire on iTunes U “Download videos and podcasts of University lectures, public talks, conferences and tutorials for free. You can also download pdf documents and find out more about studying at the University of Hertfordshire. Content can be accessed on a PC or Mac and synced with your iPod, iPhone or iPad to be connected anytime, anywhere.”
9 Birmingham City University on iTunes U Where does Birmingham City University fit in?
Birmingham City University has collected a wealth of audio and video material from across the University that can now be accessed via iTunes U and through http://www.bcu.ac.uk/podcasts. 
Does it replace internal sharing systems such as Moodle?

No, at present Birmingham City University’s iTunes U area is public-facing and accessible to people both inside and outside the University.”
10 Introduction: iTunes U, University of Edinburgh “We have our own iTunes U channel where we host video and audio files about the University and the city of Edinburgh. You can watch and listen to lectures with world-leading thinkers and subscribe to our podcasts to receive previous and future lectures, seminars and events.”

It is interesting to read how these institutions describe their use of iTunesU: a number of institutions highlight the amount of content which is being provided (“over 280 albums with content from over 136 courses“, “more than 400 audio and video podcasts“) or accessed (“over 3 million downloads“) whilst others point out that the content is available for free (“This free service allows you to access interviews with academics, programmes about research at the University, lectures, teaching materials“, “you can download educational multimedia resources free of charge“, “you have access to hundreds of free educational video and audio podcasts“, “Download videos and podcasts of University lectures, public talks, conferences and tutorials for free“) or combine the quantity with the free availability (“more than 400 audio and video podcasts from around our campus for you to download for free“).  Interestingly one institution points out that “You can also download pdf documents“. And whilst another answers “no” to the question “Does it replace internal sharing systems such as Moodle?” this is qualified with the words “at present“. Might iTunesU in the future have a broader remit than simply providing public access to podcasts and vodcasts, I wonder?

What Next for iTunesU?

A more important question will be the impact which iTunesU could have across the UK higher education sector in the future. The institutions which have been early adopters cover a range of institutions: we shouldn’t be surprised that a distance learning organisation such as the Open University was one of the first two institutions (it launched its presence on the same day as UCL) to make use of iTunesU. But we also see high profile and well-established institutions such as UCL, Nottingham and Edinburgh Universities in the list of early adopters alongside a number of newer universities and former polytechnics.

At the recent FOTE10 conference I was involved in some discussions with institutions considering institutional use of the service. On the iTunes Web site I read about the financial benefits which the service can provide:

Apple provides your institution with a free iTunes U site, complete with templates you can customize with your own branding.

How the interface can be tailored:

Your institution creates its own iTunes U site that leverages the familiar interface of the iTunes Store, so it’s easy to build and even easier to use. Once your site is live, faculty members need little additional help from IT. They can start posting content right away — lectures, lab demonstrations, historical footage, and whatever else they choose to help bring their subjects to life.

and the levels of access control that can be applied

Your institution can decide whether to make its iTunes U content available only to members of your educational community (internal access) or to the world at large via the iTunes Store (public access). With an internal iTunes U site, user access is controlled through password protection. A public iTunes U site — such as those created by Yale, Stanford, UC Berkeley, Oxford, Cambridge, MIT, and broadcasters like PBS — distributes material for free on iTunes U. And there’s always the option of creating both an internal site and a public site for the best of both worlds.

Now I suspect that the reality of providing institutional use of iTunesU isn’t quite this simple. But neither will be use of an in-house service for providing access to audio and video recordings – especially on mobile devices. After all, which application are students (and staff) more likely to be familiar with: iTunes, a home-grown synching application or a synching application provided by a CMS or VLE vendor?

Are the institutions listed above to be applauded for providing a user-friendly and cost-effective solution at a time when cost-efficiencies, in particular, are the order of the day? That seems to have been the feeling at Oxford University judging by the notes taken of an “iTunes U briefing on podcasting and mobile learning – a day at Apple” held last year: “Downloads have been enormous, iTunes U is global. Good ‘metrics of success’ 150 feeds of mostly 1h lectures“. Or will institutions which choose to make their content available by a commercial company eventually regret this decision, with perhaps a lack of flexibility in integrating multimedia content with other institutional services?



Posted in Evidence, Web2.0 | Tagged: , | 19 Comments »

How Can We Assess the Impact and ROI of Contributions to Wikipedia?

Posted by Brian Kelly (UK Web Focus) on 27 September 2010

On Friday Andy Powell tweeted about a sentence he had written. The sentence read:

303 See Other is one way of responding to a request for a URI that identifies a real-world object according to Semantic Web practice (the other being the use of hash URIs)[1].

I responded to Andy suggesting that “this might have been the biggest impact you’ve made!

The contribution Andy had made was to the Wikipedia entry for the HTTP 303 status code. Andy’s contribution to this brief entry was to add a “note about Semantic Web usage of 303 response to indicate real-world object being identified”.

My comment to Andy was based on the usage statistics for this entry – in August 2010 there had been 3,515 views of the page and over the past year there have been a total of 35,489 views as illustrated.

Now although the contribution appears modest it does amount to about a quarter of the full article:

The HTTP response status code 303 See Other is the correct manner in which to redirect web applications to a new URI, particularly after an HTTP POST has been performed.

This response indicates that the correct response can be found under a different URI and should be retrieved using a GET method. The specified URI is not a substitute reference for the original resource.

This status code should be used with the location header.

303 See Other is one way of responding to a request for a URI that identifies a real-world object according to Semantic Web practice (the other being the use of hash URIs)[1].

The addition makes it clear that the HTTP status code has an important role to play in Semantic Web usage – something that wasn’t mentioned in the original version. So if there are a further 35,000+ views in the next 12 months those readers may benefit from this additional information. And although there are much more detailed articles about use of the HTTP 303 status code in this context, such as “How to Publish Linked Data on the Web“, the addition to the Wikipedia article has the advantages of brevity and of the little effort needed to add the sentence.
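For readers unfamiliar with the practice Andy documented, here is a minimal sketch (not a production Linked Data server) of how a 303 See Other response can redirect a request for a URI identifying a real-world object to a document describing it. The URI paths are invented for illustration.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class LinkedDataHandler(BaseHTTPRequestHandler):
    """Redirect real-world-object URIs (/id/...) to documents about them (/doc/...) with 303."""

    def do_GET(self):
        if self.path.startswith("/id/"):
            # /id/foo identifies a thing in the world; the description lives at /doc/foo.
            self.send_response(303)  # 303 See Other
            self.send_header("Location", "/doc/" + self.path[len("/id/"):])
            self.end_headers()
        elif self.path.startswith("/doc/"):
            body = b"A document describing the requested resource.\n"
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), LinkedDataHandler).serve_forever()
```

Requesting /id/anything with a client that follows redirects should land on the corresponding /doc/ page, with the 303 response recorded along the way.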

In a recent post on Having An Impact Through Wikipedia I suggested that it would be useful if JISC-funded project work used Wikipedia as a means of disseminating their knowledge and went on to provide examples of how well-read technical articles in Wikipedia can be. But how would we assess the impact of such work and identify the return on investment?

In the case of the HTTP 303 article it appears that Andy created the first version of his update at 09.52 on Friday 26 September with the final version being published at 10.13. This suggests that the update took about 20 minutes to produce – although it should be noted that Andy pointed out that he “contribute[s] to wikipedia so rarely, it always takes me ages when i do“.

So can we speculate that 20 minutes of work may provide a significant part of an article which will be read by over 35,000 people, based on current trends? And how does this compare with other ways in which those 20 minutes of work could be spent? Is a blog post likely to have a similar number of readers?
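A back-of-envelope comparison makes the question concrete. The Wikipedia figures come from the statistics quoted above; the blog-post figures are purely hypothetical, included only to show the shape of the calculation.

```python
# Prospective readers reached per minute of effort (rough, illustrative arithmetic).
wikipedia_views_per_year = 35_489   # from the page statistics quoted above
wikipedia_minutes_spent = 20        # estimated time for Andy's edit

blog_post_readers = 1_000           # hypothetical readership for a typical post
blog_post_minutes_spent = 120       # hypothetical writing time

print(wikipedia_views_per_year / wikipedia_minutes_spent)  # ~1,774 views per minute of effort
print(blog_post_readers / blog_post_minutes_spent)         # ~8 readers per minute of effort
```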

I can’t help but feel that contributions to Wikipedia (by which I mean ‘sticky’ contributions which are not removed) may make a more significant contribution in certain areas than many other dissemination channels. Unfortunately there don’t seem to be any incentives for such contributions to be made apart from the ‘Big Society’ approach of doing good on a voluntary basis in order to provide benefits to others. This approach, I feel, won’t scale. So shouldn’t we encourage contributions to Wikipedia as a dissemination activity which should be formally recognised?

My question, therefore, is: should JISC programme managers encourage projects to contribute to Wikipedia and encourage the projects to report on the successes they have had in doing this? And if you want an example of the outreach which can be gained through use of Wikipedia have a look at the August 2010 usage statistics for the Scientology article (158,845 visits in the month) – an article which Martin Poulter (a well-established contributor to Wikipedia who is ICT Manager at the ILRT, University of Bristol) has contributed to.



Posted in Web2.0, Wikipedia | 10 Comments »

Approaches To Archiving Professional Blogs Hosted In The Cloud

Posted by Brian Kelly (UK Web Focus) on 17 September 2010

I was recently thinking about the “must read” blogs which are always the first I read in my blog reader.  These include:

OUseful: Tony Hirst’s blog “in part about… things that I think may be useful in an higher education context, one day…“.

eFoundations: A blog about “Web 2.0, the Semantic Web, open access, digital libraries, metadata, learning, research, government, online identity, access management, virtual worlds and anything else that takes our fancy by Pete Johnston and Andy Powell“.

The Ed Techie:  Martin Weller’s blog on “Educational Technology, web 2.0, VLEs, open content, e-learning, plus some personal stuff thrown in“.

Learning with ‘e’s: Steve Wheeler’s “thoughts about learning technology and all things digital“.

Ramblings of a Remote Worker: My colleague Marieke Guy’s reflections on working from  home and the broader issues of remote working.

What do these blogs have in common? From a personal perspective they are all written by people I like and respect – and have been out for a drink with. But in addition the blogs are all hosted outside the blog authors’ institution, at http://ouseful.wordpress.com/ (via the http://blog.ouseful.info/ domain), http://efoundations.typepad.com/, http://nogoodreason.typepad.co.uk/, http://steve-wheeler.blogspot.com/ and http://remoteworker.wordpress.com/.

Isn’t it risky that such valuable professional blogs are hosted outside the institution? Shouldn’t we be learning the lessons of the imminent demise of the Vox blogging platform and look to migrate such blogs to a trusted institutional environment? After all, although the early adopters of blogs may have had to use an externally-provided platform, we are now finding that institutions are hosting blog platforms, in many cases the open source WordPress application.

I don’t think such blogs should move to the host institution. I feel that use of platforms such as WordPress.com, Typepad.com and Blogger.com can provide flexibility and autonomy which may be lost if an institutional platform were used. And, as described in a post on “Auricle: The Case Of The Disappearing E-learning Blog” there is no guarantee that  a blog hosted within the institution will necessarily be sustainable.

But if third party blogging platforms are used to support professional activities there will be a need to assess and manage possible risks of loss of the service. In the case of well-established services such as WordPress, Typepad and Blogger it is unlikely that such services will disappear overnight. If, as is the case with Vox, the service is not sustainable we could reasonably expect to be provided with notification on withdrawal of the service.

But perhaps a bigger risk relates to the responsibilities associated with ownership of the blog by individual authors as opposed to the departmental responsibility which would be the case of the institutional blog environment. What, for example, could happen to the contents of a blog if the author left his or her host institution?

In some cases it might be argued that the blog contents are owned by the individual and the host institution would have no claim on the content. But this won’t be true in many cases including, for example, blogs used to support JISC-funded projects. And at a time when public sector spending is becoming subject to public scrutiny, how would we explain to tax-payers that a University employee can own valuable content and is free to delete it if, for example, they were made redundant?

My colleague Marieke Guy and I have written a paper on “Approaches To Archiving Professional Blogs Hosted In The Cloud” which has been accepted by the iPres 2010 conference, which takes place in Vienna next week.

The paper is based on UKOLN’s digital preservation work including the JISC PoWR project. Recently we have explored ways for preserving blog content, ranging from migration of rich XML content, processing a blog’s RSS feed, mirroring a blog’s Web site, creating a PDF version of a blog through to creating a paper copy of a blog! In addition to the technical approaches the paper also addresses the associated policy issues. On this blog and Marieke’s blog we have provided a policy statement which states that:

  • A copy of the contents of the blog will be made available to UKOLN (my host organisation) if I leave UKOLN. Note that this may not include the full content if there are complications concerning third party content (e.g. guest blog posts, embedded objects, etc.), technical difficulties in exporting data, etc.
  • Since the blog reflects personal views I reserve the rights to continue providing the blog if I leave UKOLN. If this happens I will remove any UKOLN branding from the blog.

We have applied the guidelines we have developed to a number of other UKOLN blogs which are hosted externally including the IWMW 2009 event blog and the JISC SUETr project blog.
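
To give a flavour of one of the lighter-weight technical approaches mentioned above – processing a blog’s RSS feed – here is a minimal sketch. It assumes the third-party feedparser package, and the feed URL is simply illustrative; a real preservation workflow would need to handle paging and embedded objects as discussed in the paper.

```python
# A rough sketch of the "process a blog's RSS feed" preservation approach:
# fetch the feed and save each entry as a local HTML file.
# Note: a standard WordPress feed only exposes recent posts, so a complete
# archive would need to page through the feed or use export tools.
import os
import feedparser

def archive_feed(feed_url, out_dir="blog-archive"):
    os.makedirs(out_dir, exist_ok=True)
    feed = feedparser.parse(feed_url)
    for i, entry in enumerate(feed.entries):
        # Prefer the full content if the feed provides it, else fall back to the summary
        body = entry.get("content", [{}])[0].get("value", entry.get("summary", ""))
        with open(os.path.join(out_dir, f"{i:04d}.html"), "w", encoding="utf-8") as f:
            f.write(f"<h1>{entry.title}</h1>\n")
            f.write(f"<p>{entry.get('published', '')} - {entry.link}</p>\n")
            f.write(body)

archive_feed("http://ukwebfocus.wordpress.com/feed/")  # illustrative feed URL
```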

Marieke will be presenting this paper at the iPres 2010 conference next week. Her slides are available on Slideshare and are embedded below.

We’d welcome comments on the approaches we have developed. Do they satisfy the concerns of the institution related to possible loss of valuable content whilst providing professional bloggers with the flexibility they may feel they need?

Posted in Web2.0 | 7 Comments »

On Friday: Amplified Seminar on “What Can We Learn From Amplified Events?”

Posted by Brian Kelly (UK Web Focus) on 2 September 2010

Tomorrow (Friday 3 September) I’ll be giving a seminar on “What Can We Learn From Amplified Events?” at the University of Girona (UdG) in Catalonia. A summary of the seminar is available on the UdG Web site – for those of you who understand the Catalan language – though the automated Google Translate option is also available if you don’t :-).

As is appropriate for a seminar on this subject, the seminar itself will be amplified, with a live video stream to be provided on http://www.livestream.com/1c4d and there will be a Twitter back channel using the #udgamp10 hashtag. There will also be an official live Twitterer, with Kirsty Pitkin (who has provided live blogging for the IWMW10 event) using her recently established @eventamplifier Twitter account to act as a live blogger – while blogging from her home in Bath! This will be the first time Kirsty and I have worked together to provide distributed live blogging support. It will be interesting to see how this works. As well as Kirsty, there should also be two additional live bloggers who will be reporting from the seminar room, one Twittering in Catalan and one in Spanish. Again this will be another first.

This is also the first time I have given a talk about Amplified Events, as opposed to having organised or participated in such events. I’ve found it useful to reflect on the approaches we’ve taken in exploiting various networked technologies at events over the past few years. I’ll be describing how amplified events can help to “avoid the constraints of space and time“, can provide “real time peer reviewing” and reflect the views that “an open exchange of ideas will promote innovation” expressed recently by the JISC’s Executive Secretary Malcolm Read and published a few days ago in the Times Higher Education.

The seminar will take place from 11.30 (BST). Further information is available on the UKOLN Web site. In addition the slides are available on Slideshare. Note that the slides are also available on Authorstream and use of this version is recommended as it supports slide builds and animation.

But rather than being a passive consumer of the slides I’d like to invite you to view the streaming video and join in the discussions on Twitter.

Posted in Events, Web2.0 | Tagged: | 3 Comments »

Draft Amplified Event Report Available For Comment on JISCPress

Posted by Brian Kelly (UK Web Focus) on 1 September 2010

I’ve previously mentioned my interests in (1) amplified events, such as recent IWMW events; (2) being open and willing to share one’s experiences with others in the sector and (3) the potential for the JISCPress service (and the digress.it WordPress plugin and community).

Bringing together these three areas of interest I am pleased to announce that a draft version of a document entitled “Organising Amplified Events: A Report Based on UKOLN Experiences” is now available for comments on the JISCPress site.

As indicated by the title this report summarises UKOLN work in both organising and participating in a range of amplified events over the past few years.

The report is at a draft stage and so there is an opportunity for comments to help shape the final structure of the document.

I was also interested in gaining experience of uploading a report to the JISCPress service. Although, as I described recently, I had been told that it could take a couple of hours to make a document available in JISCPress I found that it took less than 10 minutes :-)  I simply copied the individual sections of the document from the MS Word file and pasted the contents into the WordPress blog interface. I used the Chrome browser and was pleased that there were no Microsoft HTML extensions included in the post – an irritation I have encountered previously when copying MS Word documents into Web interfaces. The JISCPress site also maintained much of the formatting provided in the original MS Word file, including headings, bulleted lists and bold and italicised text – all that seemed to be missing were various instances of indented text, the occasional hypertext link and a couple of instances of text in colour. These were added to the JISCPress post afterwards. I should mention, though, that the original document did not include any tables or images – I have been told that it’s the processing of these objects which can be time-consuming.

The final report will be available under a Creative Commons licence – so feedback provided using JISCPress can help to improve the quality of the final report not only for those who read the document but also for those who wish to reuse the content.  I therefore hope there will be lots of useful comments and suggestions.

Posted in Events, Web2.0 | 2 Comments »

University 2.0: the Extended University Conference

Posted by Brian Kelly (UK Web Focus) on 31 August 2010

The University 2.0: the Extended University Conference

I mentioned recently that I’ll be giving a seminar on “What can We Learn From Amplified Events?” at the University of Girona next month. My main purpose for my trip to Spain is, however, to give an invited keynote plenary talk at the University 2.0: the Extended University conference which will be held at the UIMP (Universidad Internacional Menéndez Pelayo) in Santander on 6-8th September 2010.

The title of my talk is “Embedding and Sustaining University 2.0”. The talk will, in part, be based on the risks and opportunities framework which has been described in papers on “Library 2.0: Balancing the Risks and Benefits to Maximise the Dividends” and “Empowering Users and Institutions: A Risks and Opportunities Framework for Exploiting the Social Web”. The talk will also discuss the implications of the economic crisis for the use of networked technologies in higher education, in particular the challenges and opportunities provided by use of “the Web as the platform”.

University 2.0

But what is meant by the term “University 2.0”? In a post on “Citizen 2.0, Strike 2.0, David Cameron 2.0 and Coldplay 2.0” which I wrote in 2008 I described how the “2.0 meme” had become established and we hear terms such as ‘library 2.0’, ‘e-learning 2.0’, ‘research 2.0’, ‘enterprise 2.0’ and ‘government 2.0’ being used in the media. But what, I wonder, might we mean by “University 2.0”?

The 2.0 is meant to signify change and a new way of doing things, and plays a role in the rebranding of such changes. The Web 2.0 technologies themselves (blogs, wikis, RSS, etc.) aren’t the most important aspect of such change (they have no relevance in ‘Coldplay 2.0’ or ‘Strike 2.0’, for example) although clearly use of blogs, wikis and social networks will have a role to play in a University 2.0 environment.

More important for me are the softer aspects which are a part of Web 2.0, including the emphasis on participation, trusting the user, user-generated content, the right to remix and the ‘perpetual beta’ concept.

How might such ideas, depicted in the Web 2.0 meme map, apply to University 2.0?

Some of the softer aspects associated with Web 2.0 seem to be very relevant to the core activities carried out in higher educational institutions:

Participation, not publishing: We expect students to take an active role in learning, and not to be passive consumers of learning materials which institutions may publish.

Right to remix: Learning and research might be regarded as processes whereby learners and researchers are exposed to new ideas and ‘remix’ them to provide something new, such as new insights.

Perpetual beta: Learning and research is a journey, not a destination. There is never a time in which learning may be felt to be ‘complete’ – learning is always beta, always developing.

Trust your users: In educational institutions we have trusted academics who have, here at the University of Bath, freedom “within the law to question and test received wisdom and to put forward new ideas and controversial or unpopular opinions“.

An attitude, not a technology: This aspect provides the extensibility of the 2.0 concept, which enables it to be applied in a range of areas.

The final aspect I’d like to mention appears to be particularly appropriate to today’s environment:

Web as platform: In an institutional context this highlights the regional, national and global nature of education and research, in which benefits can be gained by working beyond the constraints and limitations of the host institution, whilst gaining benefits for members of the local institution.

University 2.0 for me reflects the fundamental principles of what the University experience should be about. But then again many aspects of Web 2.0 describe Tim Berners-Lee’s original vision of the Web. In both cases there are benefits to be gained from the rebranding.

Posted in Events, Web2.0 | 1 Comment »

Best UK University Web Sites – According to Sixth Formers

Posted by Brian Kelly (UK Web Focus) on 25 August 2010

This week’s issue of the Times Higher Education contains a six page article on “Deciphering the code” which asks “do universities’ websites tell prospective students what they need to know” and invites a panel of sixth-formers to identify the top University Web sites – and those which can be improved.

What were the best performing institutional Web sites? The top ten sites are listed in the following table – and although I am aware that the methodology is open to criticism, the table does provide an opportunity to begin a debate on what potential students may wish to find on University Web sites.

Note initially the top ten sites were listed. However as the table is an alphabetic list of the institutions with 20 points or more such an incomplete listing is misleading. The list has been updated to include all institutions scoring more than 20 points. Apologies for the confusion. [Brian Kelly, 26 August 2010].

| Best-performing institutions (scoring 20 points or more) | Accessibility | Contact information | Peer review | Unique selling point | Insight |
|---|---|---|---|---|---|
| University of Abertay Dundee | 5 | 5 | 4 | 4 | 3 |
| Aston University | 5 | 5 | 3 | 2 | 5 |
| Bangor University | 5 | 5 | 5 | 1 | 4 |
| University of Buckingham | 4 | 4 | 4 | 4 | 4 |
| University of Cambridge | 4 | 4 | 5 | 3 | 5 |
| Edinburgh College of Art | 5 | 4 | 5 | 5 | 5 |
| University of Exeter | 3 | 5 | 5 | 3 | 5 |
| University College Falmouth | 4 | 4 | 5 | 5 | 4 |
| University of Glasgow | 4 | 5 | 4 | 3 | 5 |
| University of Greenwich | 5 | 5 | 3 | 4 | 5 |
| Harper Adams University College | 5 | 5 | 3 | 5 | 4 |
| Imperial College London | 5 | 5 | 5 | 4 | 5 |
| King’s College London | 4 | 4 | 4 | 4 | 4 |
| Kingston University | 4 | 5 | 3 | 3 | 5 |
| University of Kent | 5 | 3 | 3 | 5 | 4 |
| Leeds Metropolitan University | 5 | 5 | 1 | 4 | 5 |
| London School of Economics | 4 | 4 | 3 | 5 | 4 |
| Northumbria University | 4 | 4 | 3 | 4 | 5 |
| University of Nottingham | 5 | 5 | 3 | 5 | 5 |
| University of Oxford | 5 | 5 | 5 | 5 | 5 |
| Royal Agricultural College | 4 | 4 | 3 | 5 | 4 |
| University of Southampton | 4 | 5 | 3 | 5 | 5 |
| Swansea University | 5 | 4 | 3 | 4 | 4 |
| Teesside University | 5 | 5 | 5 | 4 | 5 |
| University of Wales, Lampeter | 5 | 4 | 5 | 3 | 3 |
| University of Wales, Newport | 5 | 5 | 3 | 3 | 5 |

What did the representatives of the three schools particularly like? I was interested to read the comment that “I struggled to find student comments, and if I did they were always good and never bad ones” – so authentic student voices, including criticisms, seem to be welcomed.

I also noticed that Imperial College are “encourag[ing] both students and staff to tag their photos of campus life and engage with prospective students through Flickr and YouTube”. The Imperial site also features student blogs and “a week in the life” student profiles.

But do, I wonder, the approaches which have been adopted by those top-ranking universities reflect the discussions and consensus on best practices which we hear about at IWMW events?

The article also mentions that “student discussion is unlikely to take place on the university website itself. Instead, students will meet and talk at the places where they naturally congregate online, on social networking sites such as Facebook and Bebo and discussion boards such as The Student Room” and illustrates this point by describing how a student describes the Bangor University Web site as “modern” and “welcomes the clever links to the institution on social networking site Facebook“.

If the image shown below, taken from one of the top-ranked institutions, summarises where the students actually prefer to have the discussions over which institution to select what might this say about the future directions of the marketing aspects of an institution’s Web site?

Link to YouTube, iTunesU, Facebook and Twitter from a University home page

And is institutional involvement with iTunesU, YouTube, Facebook, Twitter and Flickr now an accepted part of the portfolio of services which institutional Web teams (or comms and marketing teams) will be expected to provide, support and promote? Has the “creepy tree-house” phrase which was used some time ago to criticise institutional use of Social Web services died as these services become mainstream?



Posted in Web2.0 | 15 Comments »

Delivering Blog Posts By Email … But Not By Mailing Lists

Posted by Brian Kelly (UK Web Focus) on 24 August 2010

Blog Posts By Email …

Did you know that you can choose to receive blog posts by email?  I’ve written about this previously and described how this may have advantages for end users who either do not have access to RSS readers (e.g. they are not provided on an institutional desktop) or prefer the familiarity of their email client.

An example of how blog posts are displayed in an email client is shown below. This illustrates that the service (in this case Feedblitz) included embedded images together with a table of contents, providing internal links to both multiple articles and section headings in a blog post.

Blog posts received by email

… But Not By Mailing Lists

The reason I’m revisiting this subject is in response to recent discussions on Twitter and several library-related mailing lists regarding the excessive posts to such lists made by Gerry McKieran. Gerry is a prolific blogger and clearly has a real passion for his interests. Unfortunately Gerry fails to appreciate that many mailing lists have been established to support community interests and that multiple posts from an individual can hinder the effective workings of such lists. Such problems are compounded when the person posting (a) fails to engage in discussions on the lists and (b) duplicates the posts on other lists.

Despite a heartfelt plea from Davey Patten on the LIS-Web2 list which has been echoed by others on various lists, in a post entitled “The Universe Is Not Flat >>> Let The Conversations Continue >>>”   Gerry points out (apparently without irony) that “in most online communities, 90% of users are lurkers who never contribute, 9% of users contribute a little, and 1% of users account for almost all the action“. Therefore  he asks, in his own inimitable style, “Please Don’t Diss Me For Being Actively Engaged >>> Or Because I Have Broad Interests“.

Yes, let the conversations continue on mailing lists. But let users choose to subscribe to blog posts or receive  Twitter alerts for new posts. But let’s not replicate blog posts across multiple mailing list, please. >>>Email Delivery of Blog Posts :  An Idea Whose Time Has Come !!! >>>

Which Email Service To Use?

Although end users may wish to make their own decision as to which email subscription service to use, in reality, I think, blog owners will need to make it easy for their readers to subscribe to posts by email.

But which service to use? I have provided two options in the sidebar of this blog: WordPress’s subscription service and the service provided by Feedburner.

I added the link to the WordPress subscription service recently as I wondered if this would provide any additional benefits. In my administrator’s interface I can see that there seem to be 26 active subscribers who receive blog posts and 146 subscribers to comments on specific posts.

Feedburner statistics

However the Feedburner service (now owned by Google) does seem to provide more information. There are currently 91 subscribers to the service and the numbers have grown, particularly since the blog post describing the service was published.

In light of this I intend to remove the link to the WordPress subscription service. I’ll also try to ensure that other blogs I contribute to provide a link to the Feedburner service – so that I won’t need to send posts by email!

Posted in Web2.0 | 4 Comments »

“When The Axe Man Cometh” – the Future of Institutional Web Teams

Posted by Brian Kelly (UK Web Focus) on 9 August 2010

Doom and Gloom

The doom and gloom of the impending cuts rang out loud and clear” described Deborah F. in her report on the IWMW 2010 event. I introduced this concern in the opening talk and then, in the second talk at the event, Susan Farrell asked “Are web managers still needed when everyone is a web ‘expert’?” As described in a report written by Amy Chamier and published on the IWMW 2010 blog Susan, former head of Web Services at Kings College, London, explained how those with front-end skills are most at risk. Susan’s advice was to “demonstrate the competitive advantage we deliver in turbulent times. We must show how websites run by web managers cut the cost of: (a) generating new customers (b) back office administration and (c) service delivery. And also, how websites run by amateurs can put an organisation’s reputation at risk.” In her conclusions Susan left the audience with a final question: “Without recognised qualifications and a professional body, do web managers and their specialist skills run the risk of extinction, as our duties are absorbed into other roles?

But isn’t this all a bit too late? Eleven days after Deborah published her post in which she described that, despite the doom and gloom, “being a hopeless optimist with a healthy realist streak I’m heading into this gloom looking for as many opportunities as possible to innovate and achieve despite the cuts” she wrote a follow-up post entitled “The Axe Man Came“. In the post Deborah described how “After the doom and gloom start to the IWMW event and the later encouragement that this could be a great time to innovate and to do things differently [she] returned to work engaged and enthused“. However shortly after she returned to work Deborah was informed that her “web team was being given to marketing, where there is already a manager“. Sadly it seems that the only option available for Deborah is redundancy :-(

Death of the Web Team

Concerns over the future of Web teams aren’t restricted to the HE sector. On the Mission Creep blog Neil Williams, a “government web geek”, speculates on Death of the web team?. Neil describes the evolution of the Web within large organisations from its initial roots in IT. As the importance of content became appreciated responsibilities may have changed. As the need to engage with the user community became apparent we saw further evolution which was subsequently followed by the need to develop responsibilities for publishing. Neil feels that everyone now has the potential to be involved: “The explosion in social interaction online created direct communications between customers and employees, and before long it will be happening all over the place. The organisation is no longer in control of where customer-employee or customer-customer interaction happens; let alone what’s being said. Digital communications is now, or will soon be, everyone’s job – listening, collaborating and responding online must become core competences for all if the organisation wants to continue to manage its reputation and meet the expectations of its customers.

I agree. “Here comes everybody” – and the view that Web managers need simply to market themselves more effectively fails to recognise this changed environment. What then, is to be done? Neil Williams concludes by suggesting that “the future of the web team involves a simultaneous strengthening of control by the centre and a transfer of trust and skills to the wider organisation. It’s about choosing the right bits of digital, and the right bits of responsibility to hold onto or to devolve.

For me this transfer of trust and skills is particularly appropriate in the higher education sector. So rather than worrying about “websites run by amateurs [which] can put an organisation’s reputation at risk” there’s a need to recognise the value of the effort being provided across the institution. And such effort can ensure that an institution’s use of the Web is greater than the effort provided within central Web teams. We saw an example of this in the workshop session on “Sheffield Made Us – using social media to engage students in the university brand” which described a case study in which “the University of Sheffield ran a competition encouraging students to upload videos to Youtube with the incentive of a £3000 prize. The aim was to get the students to express in their own words what they thought of the University, and how Sheffield had made them.” This sounds like a great example of a “transfer of trust and skills to the wider organisation”.

What is to be Done?

But what of the idea of a “simultaneous strengthening of control by the centre“? If you take an institutional perspective this would appear to suggest the need to strengthen centralised provision and control. But if we step outside our own institution and consider the wider picture we may get a different perspective on what is meant by centralised provision and control.

The UK HE sector has taken a leading role in its provision of centralised services through its support for JISC services. We have also, over the past few years, seen institutions exploiting the benefits of Cloud Services. And if we focus on strengthening advice and support, rather than control, by the centre, we have a tradition, dating back to 1997, of the institutional Web management sector sharing advice on best practices and ways of exploiting new developments.

But how can Web teams continue to strengthen the support provided to higher educational institutions? Since members of institutional Web teams may regard departmental provision of Web services as failing to provide ‘competitive advantages’, why not apply that argument to the duplication which takes place across over 160 universities? How many members of institutional Web teams will currently be developing institutional strategies for exploiting the Social Web, I wonder? How much tax-payers’ money is being wasted in unnecessary duplication of effort? And how much tax-payers’ money is being wasted in a failure to share? These arguments are well-understood in the context of open access to research publications and research data but could equally be applied to support services such as institutional Web teams.

Specific Examples

In a way these suggestions are nothing new. The IWMW event was launched in 1997 and since then we have heard hundreds of talks given by members of institutional Web management teams who have been willing to share their experiences and invite discussion and debate. We have also seen a similar willingness to share experiences and provide support on the web-support and website-info-mgt JISCMail lists. But the IWMW event only takes place annually and, as described previously, discussions on the JISCMail lists have declined significantly over the past 5 years.

Centralised Services for the Web Management Community

An alternative approach (although it would probably be better to describe it as a complementary approach) would be to ensure that the work of institutional Web teams is published openly and in a format suitable for reuse in a variety of ways. This, quite simply, means use of blogs. In a recent post on Revisiting Web Team Blogs I described a number of benefits which can be provided by blogs. I also pointed out that the Google Custom Search Engine can be used to provide a search interface across such information, thus providing a cost-effective mechanism for knowledge sharing across the sector.

In order to encourage the “strengthening of control by the centre” I have created an institutional Web management Community page. This provides access to the search of University Web team blogs. In addition it provides links to two tools developed a couple of years ago by Tony Hirst after his participation at the IWMW 2008 event.

The Autodiscoverable RSS feeds on UK HEI home pages tool was developed following a suggestion that there was no reason for institutions not to publish auto-discoverable RSS feeds for press/media releases, jobs and upcoming events. Tony’s tool visits UK HEI home pages and dynamically reports on the numbers which are implementing autodiscoverable RSS feeds – today I find that the adoption rate is 38.3% (51 out of 133 institutions). This is an example of a centralised auditing approach which members of institutional Web teams will be familiar with, the intention being to encourage Web providers to implement recommended best practices.
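
For those unfamiliar with how such an audit works, here is a minimal sketch – not Tony’s actual code, simply an illustration assuming the third-party requests and beautifulsoup4 packages and a couple of example home page URLs. It looks in each page’s HTML head for link elements advertising RSS or Atom feeds.

```python
# A minimal sketch of an RSS autodiscovery audit: check each home page for
# <link rel="alternate"> elements whose type declares an RSS or Atom feed.
import requests
from bs4 import BeautifulSoup

FEED_TYPES = {"application/rss+xml", "application/atom+xml"}

def autodiscoverable_feeds(home_page_url):
    """Return the feed URLs advertised in the <head> of a home page."""
    html = requests.get(home_page_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    feeds = []
    for link in soup.find_all("link"):
        rel = [value.lower() for value in (link.get("rel") or [])]
        if "alternate" in rel and link.get("type") in FEED_TYPES:
            feeds.append(link.get("href"))
    return feeds

sites = ["https://www.bath.ac.uk/", "https://www.sheffield.ac.uk/"]  # illustrative list
with_feeds = [site for site in sites if autodiscoverable_feeds(site)]
print(f"{len(with_feeds)} out of {len(sites)} home pages expose autodiscoverable feeds")
```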

Another tool Tony developed is the UK HEI “Page Not Found” page. In this case no statistics are provided: rather, thumbnails of institutional 404 pages are displayed, which provides a simple means of observing the approaches taken across the community – and best practices can then be implemented locally.

The ‘Nudge’ Principle

Tony’s work was inspired, I think, by a post I wrote in 2008 on Nudge: Improving Decisions About RSS Usage which described an idea developed by US economist Richard Thaler and other behavioural economists who “want to highlight the best option, while still leaving all the bad ones open. … Rather than the state mandating solutions which aim to bring about positive benefits to society or to individuals, people are made aware of the benefits of the preferred option, but are left free to make their own decisions.” In this case, rather than best practices for the provision and support of institutional Web services being mandated (which is not, in any case, possible), people in Web teams are made aware of the benefits of the preferred option, but are left free to make their own decisions.

Are you convinced? Or do you think that the view that the Axe Man is visiting institutional Web management teams is an exaggeration and there is no need for change? If you are worried that the Axe Man will be paying you a visit, perhaps in the autumn, after the Comprehensive Spending Review is announced, then perhaps you may want to play a more pro-active role in a centralised but informal national network of institutional Web managers. A good start would be to create your Web team blog and leave a comment so that it can be included in the list of the early adopters amongst Web teams which have already appreciated the benefits which can be gained from greater openness and transparency. As for what the early adopters are doing, well look at the Web team and related blogs for the University of Bath, Birmingham City University, Canterbury Christ Church University, City University, University of Essex, Edge Hill University, Glamorgan University, University of Lincoln, St Andrews University, UCL or the University of York, the aggregated blog provided by Scottish Web Folk, the departmental ECS blog at the University of Southampton or the individual blogs provided by Anthony Leonard, Claire Gibbons and Martin Hamilton.

I’m sure there will be other relevant blogs, provided either by teams or individuals, but their value to the community is diminished if the content is not easily accessible to the community. So if you want to strengthen the community, please make sure that it is included in the list.



Posted in Blog, Web2.0 | 14 Comments »

Guide to Web Preservation Launched

Posted by Brian Kelly (UK Web Focus) on 12 July 2010

Here’s a press release about the launch of a JISC-funded Guide to Web Preservation which will be announced during the opening talk at the IWMW 2010 event later today.

The press release isn’t written in my normal conversational style but in discussions with my colleagues who worked on the JISC PoWR project it did occur to me that when a project makes a deliverable available this should be accompanied by a brief press release. Whether the press release will be picked up by the media may be uncertain, but surely the process of reflecting on the importance of the project’s deliverables and summarising the benefits to the end users is a valuable exercise in itself? What’s your view on this suggestion?


Guide to Web Preservation to be launched at the opening session of a national event for University Web managers

At a time of cuts across the educational sector there is an urgent need to ensure that valuable teaching and learning and research Web-based resources are not lost.

Advice on how Web managers can minimise the risks of loss of such digital resources is provided in a Guide to Web Preservation which will be launched at a national event for University Web managers to be held at the University of Sheffield on 12 July.

The Guide to Web Preservation has been published by the JISC-funded Preservation of Web Resources (JISC PoWR) project which was provided by UKOLN (a national centre of expertise in digital information management) and ULCC (the University of London Computer Centre).

The Guide provides advice not only on the management of resources held on University Web sites but also in best practices when externally-hosted services are used to provide access to resources.

The Guide can be purchased from the Lulu print-on-demand service for £2.82 (plus p&p). An online version of the guide  is available from the JISC PoWR blog. In addition a commentable version is hosted on the JISCPress service.

Members of the JISC-PoWR team from UKOLN and ULCC, together with Susan Farrell, a consultant who edited the Guide, have published a series of video clips which describe the resource and further digital preservation work which the services are involved in.

Brian Kelly, the JISC PoWR project director described the Guide as “Focussed and pragmatic, explaining the importance of Web preservation and providing members of University Web teams with advice on what to do“.

In a video summary of the work Brian has described how the project team, based in Bath and London, made use of a number of Web 2.0 technologies, including blogs and resource-sharing services to support their work. “Since many Universities are using Cloud Services to deliver resources it was important that we made use of such services in order to gain experiences of preservation strategies for content on externally-hosted services.”

The Guide will be launched at the opening of UKOLN’s annual Institutional Web Management Workshop (IWMW 2010) which will be held at the University of Sheffield on 12-14 July.

Posted in Web2.0 | Leave a Comment »

Having An Impact Through Wikipedia

Posted by Brian Kelly (UK Web Focus) on 2 July 2010

Re-Discovering Amplified Events

A recent tweet from Miquel Duran (a University professor, researcher in quantum chemistry, fostering communication of science 2.0 and university 2.0) alerted me to a blog post on The ‘Amplified Conference’.

As this is a particular area of interest to me I read the post and thought “yes, I agree” with its summary of the benefits of an amplified event:

  • Amplification of the audiences’ voice: Audience members through the use of such social media technologies (such as Twitter) can create online discourse during the sessions in real-time
  • Amplification of the speaker’s talk: Widespread and inexpensive video and audio-conferencing technologies
  • Amplification across time: With low-cost technologies, presentations are often made available after the event, with use of podcasting or videocasting technologies
  • Amplification of the speaker’s slides: With social media lightweight technologies, (such as Slideshare) entire presentations can simply be uploaded, shared, and embedded on other Web sites and commented upon
  • Amplification of feedback to the speaker: Micro-blogging technologies (such as Twitter) are being used not only as for discourse and knowledge exchange among conference participants
  • Amplification of collective memory: With the widespread availability of inexpensive digital cameras, photographs are often uploaded to popular photographic sharing services
  • Amplification of the learning: With the Web resources and social media technologies, following links to resources and discourse about the points made by a speaker during a talk propagates the learning which takes place at an event.
  • Amplification of the historical conference record: The ‘official’ digital resources such as slides, video and audio recordings which have been made by the conference organizers

I then thought that the words sounded familiar and, on rereading the Amplified Conference page on Wikipedia, I realised that I was reading words I had coined when I created the Wikipedia page on 30 August 2008!

The blog post mentioned above linked to a previous post on Amplified Conferences in the Social Media World written by the author for Suite101.com. I found it interesting to compare the examples provided in the post with my Wikipedia article. I had written, for example,

Amplification of feedback to the speaker: Micro-blogging technologies, such as Twitter, are being used not only as a discussion channel for conference participants but also as a way of providing real-time feedback to a speaker during a talk. We are also now seeing dedicated microblogging technologies, such as Coveritlive and Scribblelive, being developed which aim to provide more sophisticated ‘back channels’ for use at conferences.

Amplification of a conference’s collective memory: The popularity of digital cameras and the photographic capabilities of many mobile phones is leading to many photographs being taken at conferences. With such photographs often being uploaded to popular photographic sharing services, such as Flickr, and such collections being made more easy to discovered through agreed use of tags, we are seeing amplification of the memories of an event though the sharing of such resources. The ability of such photographic resources to be ‘mashed up’ with, say, accompanying music, can similarly help to enrich such collective experiences.

The Suite101.com article had nicely summarised these points. It was perhaps surprising that the article hadn’t provided a link to the Wikipedia article which, I would assume, was a source resource – but this isn’t something which particularly concerns me. Indeed I did wonder whether, if Suite101.com has a policy that one shouldn’t cite Wikipedia entries (as may be the case in higher education), the author would be in a position to cite the resource. I have to admit that when I wrote the article I only cited Lorcan Dempsey’s original (brief) blog post and an article published by Paul Shabajee in the Times Higher Educational Supplement – the main body of the text was content I created in Wikipedia and had not published elsewhere (which perhaps I shouldn’t have done?).

Maximising Impact Using Wikipedia

Despite my uncertainty as to whether I should have first published an article describing amplified conferences which I could then cite (although I would then not have a neutral point of view!), discovering the reference to Amplified Conferences has made me appreciate the impact which an article in Wikipedia can have. Although I can’t find usage statistics for the page I suspect that the article will have been read by more people than have read my various peer-reviewed papers, blog posts, etc. (Can anyone suggest ways in which this claim could be validated?)

I have previously suggested that Wikipedia should be used more widely across the higher education sector. Shouldn’t, where appropriate, the outputs of JISC-funded reports be included in Wikipedia articles? As an example consider the JISC-funded report on MODS: Metadata Object Description Schema [PDF]. This report, written in 2003, was commissioned by the JISC and is now hidden on the JISC Web site. Meanwhile there is a brief entry on MODS in Wikipedia which, I would have thought, would have benefitted if the information provided in the JISC report had been included.

The JISC report does state that the copyright is held by JISC. This is a barrier to providing content in Wikipedia, which must be made available under a Creative Commons licence. But as JISC seek to be proactive in encouraging take-up of their deliverables under open access licences, I suspect this is not a fundamental barrier to allowing such content to be made available in a popular environment such as Wikipedia.

And with the growing interest in DBpedia (the Linked Data representation of infoboxes in Wikipedia entries) providing content in Wikipedia may also allow such content to be integrated into Linked Data applications.
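
To give a flavour of what that integration can look like, here is a minimal sketch using only Python’s standard library: it queries the public DBpedia SPARQL endpoint for the English abstract of an article. The choice of resource URI is purely illustrative and assumes a corresponding DBpedia entry exists.

```python
# Query the public DBpedia SPARQL endpoint for an article's English abstract.
# The resource URI is illustrative - any Wikipedia article title could be used.
import json
import urllib.parse
import urllib.request

query = """
PREFIX dbo: <http://dbpedia.org/ontology/>
SELECT ?abstract WHERE {
  <http://dbpedia.org/resource/HTTP_303> dbo:abstract ?abstract .
  FILTER (lang(?abstract) = "en")
}
"""

url = "https://dbpedia.org/sparql?" + urllib.parse.urlencode(
    {"query": query, "format": "application/sparql-results+json"})

with urllib.request.urlopen(url) as response:
    results = json.load(response)

for binding in results["results"]["bindings"]:
    print(binding["abstract"]["value"])
```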

Whilst I feel it would be inappropriate to mandate that the content of reports commissioned through public funding should be made available on Wikipedia, I do feel that this should be encouraged. What’s your view?



Posted in Web2.0, Wikipedia, Wikis | 11 Comments »