UK Web Focus

Innovation and best practices for the Web

Archive for February, 2011

A Grammatical View of Web Accessibility

Posted by Brian Kelly on 28 February 2011

Later today (Monday 28 February) I’ll be giving a talk on “BS 8878 and the Holistic Approaches to Web Accessibility” at a CETIS Accessibility SIG meeting which is being held at the BSI Headquarters in London.

My talk will review the development of the holistic approach to Web accessibility and describe how this approach seems to be in harmony with the BS 8878 Code of Practice on Web accessibility, as I have previously discussed.

As I was finalising the slides it occurred to me that the WAI approach focusses on the implementation of best practices for the creation of Web resources and of the tools used to create and view the resources. The WAI model (and the WCAG, ATAG and UAAG guidelines) regard accessibility as an intrinsic property of the resource. In contrast the holistic approach regards accessibility as a property of the use of a resource and accessibility can be addressed by having a better understanding of such uses.

This approach was described in our first paper on “Developing A Holistic Approach For E-Learning Accessibility” (available in PDF, MS Word and HTML formats) in which we described how the concept of blended learning could be applied to the provision of accessible e-learning. A paper on “Implementing a Holistic Approach to E-Learning Accessibility” (available in PDF, MS Word and HTML formats) subsequently provided a case study which illustrated how these approaches are being applied to cultural heritage resources. This was followed by a paper on “Accessibility 2.0: People, Policies and Processes” (available in PDF, MS Word and HTML formats) which further developed this approach and described how it could be used in other scenarios.

Using a grammatical model (subject-verb-object) we might say that the WAI approach focusses on the object, with the subject being regarded as everyone and the verb being understand or perceive. The WAI approach can be summarised as “everyone can understand all resources”.

In contrast the holistic approach regards accessibility as a function of what a user does with a resource. Accessibility is not directly a function of a resource, and alternative resources (including real-world resources) provide a legitimate way of enhancing accessibility. In addition the use relates to the target audience, and not necessarily everybody. We might therefore apply the grammatical model (subject-verb-object) but this time give greater emphasis to the verb, while appreciating that there may be a variety of subjects.

Put simply we might say that the provision of e-learning resources and real-world alternatives can provide a diversity of learning approaches:

  • John learns from the Web resource.
  • Jill learns from the real world resource.
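
This distinction can be sketched in code. The following is a toy illustration of my own (the naming and structure are not from the papers): accessibility is evaluated over a set of uses, so the question becomes whether some equivalent use achieves the learning goal, rather than whether a single resource is perceivable by everyone.

```python
from dataclasses import dataclass

# Toy illustration (my own naming, not from the papers): model accessibility
# as a property of a *use* of a resource rather than of the resource itself.
@dataclass(frozen=True)
class Use:
    subject: str   # a particular learner, not "everyone"
    verb: str      # the activity: learn, perceive, ...
    resource: str  # a Web resource or a real-world alternative

def goal_met(uses, verb):
    # The holistic question: does *some* use achieve the goal?
    # (Not: can *everyone* perceive *this* resource?)
    return any(u.verb == verb for u in uses)

uses = [
    Use("John", "learns", "Web resource"),
    Use("Jill", "learns", "real-world resource"),
]
print(goal_met(uses, "learns"))  # True: the goal is met via different resources
```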

Looking back at the paper on “Developing A Holistic Approach For E-Learning Accessibility”, we described a field trip for a geography student which required climbing a mountain or other terrain unsuited to a student in a wheelchair or with similar physical disabilities. The paper pointed out that solutions need not necessarily be restricted to those with obvious disabilities, as such concerns could be shared by an overweight student or a heavy smoker who finds physical exertion difficult. The paper described how:

… using our model the teacher would identify the learning experiences (perhaps selection of minerals in their natural environment and working in a team) and seek equivalent learning experiences (perhaps providing the student with 3G phone technologies, videos, for use in selecting the mineral, followed by team-building activities back at the base camp).

We can see how we were focussing on the activities (the verbs) in our initial paper rather than characteristics of the relevant resources.

Does this model help to provide a better understanding of our approaches? Is this model helpful in understanding how diverse approaches to Web accessibility can be implemented?

I hope to get answers to these questions at the CETIS Accessibility SIG meeting. I’d also welcome feedback on the blog.

Note that the slides are available on Slideshare and are embedded below.

Posted in Accessibility | Tagged: | 1 Comment »

How Do We Measure the Effectiveness of Institutional Repositories?

Posted by Brian Kelly on 24 February 2011

 

The Need for Metrics

How might one measure the effectiveness of an institutional repository? An approach arising from various activities I am involved in relating to evidence, value and impact is based on the need to identify the underlying purpose(s) of services and to gather evidence on how well those purposes are being addressed.

Therefore there is a need to initially identify the purposes of an institutional repository. Institutions may have a variety of different purposes (which is why, although gathering evidence can be important, drawing up league tables is often inappropriate). But let’s suggest that two key purposes may be: (1) maximising access to research publications and (2) ensuring long-term preservation of research publications. What measures may be appropriate for ensuring such purposes are being achieved?

For maximising access to research publications two important measures will be the number of items in the repository and the number of accesses to those items. Since the numbers themselves have little meaning in isolation there will be a need to measure trends over time, with an expectation of growth in the number of items deposited (which should slow once legacy items have been uploaded and only new items are being deposited) and a continual increase in overall traffic to the repository as the number of items grows and resource discovery services make such resources easier to find.
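
As a sketch of the kind of trend measurement described above (a repository platform’s own statistics module would normally provide this; the deposit dates here are invented for illustration), monthly deposit counts can be derived from item deposit dates:

```python
from collections import Counter
from datetime import date

# Invented sample data: deposit dates of repository items.
deposits = [date(2010, m, d) for m, d in
            [(1, 5), (1, 20), (2, 3), (2, 14), (2, 25), (3, 8), (3, 9)]]

# Count deposits per month to observe the trend over time.
per_month = Counter(d.strftime("%Y-%m") for d in deposits)
for month in sorted(per_month):
    print(month, per_month[month])
# 2010-01 2
# 2010-02 3
# 2010-03 2
```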

Access Statistics for Institutional Repositories

The relevance of such statistics is well-understood with, here at the University of Bath, the IRStats module for the ePrints repository service providing access to information such as details of all downloads, the overall number of downloaded items (100,003 at the time of writing), the trends over time and various other summaries, as illustrated.

However it is important to recognise that such measures only indirectly provide an indication of how well a repository may be doing in maximising access to research publications. In part traffic may be generated by users following links to content of no interest to them through use of search engines such as Google (which is responsible for providing 38% of traffic to the University of Bath repository, with another 10.2% arriving via Google Scholar). In addition even if a relevant paper is found and read, the ideas it contains may not be felt to be of direct interest and may not be used to inform subsequent research activities.

A citation to a resource will provide more tangible evidence of direct benefits of a repository to supporting research activities and work such as the MESUR metrics activity is looking to “investigate an array of possible impact metrics that includes not only frequency-based metrics (citation and hit counts), but also network-based metrics such as those employed in social network analysis and web search engines“. However in this post I will focus on evidence which can be easily gleaned from repositories themselves.

Whilst it is possible to point out various limitations in using such metrics the danger is that we lose sight of the fact that they can still have a role to play in providing a proxy indicator of value. So although repository items which are found and downloaded may not be of interest or may not be used, other items will be relevant and inform, either directly or indirectly, other research work. We might therefore assert that an increase in traffic may also have a positive correlation with an increase in use.

The Numbers of Items in Repositories

Measuring the number, and growth in the number, of items in a repository would seem to be less problematic than access statistics. This measurement can reflect the effectiveness of a repository’s aims in supporting the preservation of research publications, as publications migrate from departmental Web sites or individuals’ personal home pages to a centrally managed environment. The growth in the number of items should also, of course, help enhance access to the papers.

Repositories may, however, only provide access to the metadata about a paper and not to the paper itself. This may be due to a number of factors including copyright restrictions, (perceived) difficulties in uploading documents or the unavailability of the documents.

There may also be a need to differentiate between the total number of distinct items in a repository and the number of formats which may be made available. Storage of the original master format is often recommended for preservation purposes and where ease of reuse of the content may be required (e.g. merging together various papers and producing a table of contents can be much easier if the original files are available, rather than a series of PDFs, which can be more difficult to manipulate).

Alternative formats for items may also help to enhance access for users of mobile devices or users with disabilities who may require assistive technologies to process repository items. This then leads to the question of not only the formats provided but how those formats are being used: is a PDF easily processed by assistive technology or is it simply a scanned image which cannot be read by voice browsers? In addition, as suggested by preliminary research carried out by my colleagues Emma Tonkin and Andy Hewson described in a post on “Automated Accessibility Analysis of PDFs in Repositories“, might the cover pages automatically generated by repository systems create additional barriers to access to such resources?
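
The scanned-image question can be checked mechanically, at least as a first pass. The sketch below is my own illustration, not the approach used in the research cited above: it treats a page yielding little or no extractable text as probably a scanned image. The third-party `pypdf` library and its `extract_text()` call are assumed to be available; the heuristic itself is plain Python.

```python
def looks_scanned(extracted_text: str, min_chars: int = 50) -> bool:
    # A page with (almost) no extractable text is probably a scanned image,
    # which a voice browser cannot read aloud.
    return len(extracted_text.strip()) < min_chars

def pdf_is_scanned(path: str) -> bool:
    # Assumes the third-party pypdf library is installed.
    from pypdf import PdfReader
    reader = PdfReader(path)
    return all(looks_scanned(page.extract_text() or "") for page in reader.pages)

print(looks_scanned("   "))  # True: no real text on the page
print(looks_scanned("Abstract. This paper describes a holistic approach to e-learning accessibility."))  # False
```

A real audit would need to be more careful (e.g. mixed text-and-image pages, cover pages skewing the per-document result), but even this crude check separates machine-readable PDFs from pure scans.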

Trends Across the Community

This post has outlined areas in which evidence should be gathered and used in order to be able to help demonstrate the value of an institutional repository service and help to ensure that a number of best practices are being addressed (and, if not, to be able to develop plans for implementing such best practices).

Although such work should be done within the context of an individual repository service there are also benefits to be gained from observing trends across the community. My colleague Paul Walk recently mentioned on the JISC-Repositories JISCMail list UKOLN’s development of a prototype harvesting and aggregation system for metadata from UK institutional repositories called ‘RepUK’. One aspect of this work is the aggregation of metadata records from institutional repositories and the visualisation of various aspects of the data quality. Mark Dewey, lead developer for this work, has released an initial prototype tool. As can be seen, this can provide a visualisation of the growth in the number of records across the 133 repositories which have been harvested.

Discussion

This post has suggested that metrics are needed in order to help to provide answers, perhaps indirectly, to questions regarding the effectiveness of institutional repositories as well as to support and inform the development of the repositories and the adoption of best practices. Of course measuring the effectiveness of institutional repositories will also require user surveys, but this post only considers quantitative approaches which are summarised in the table below.

Metric | Purpose | Comments
Total usage | Provides an indication of the repository’s effectiveness in enhancing access to research papers. | Data may need to be carefully interpreted.
Number of items | Provides an indication of the repository’s effectiveness in both enhancing access to research papers and in ensuring their preservation. | It might be expected that growth will slow after a backlog of papers has been uploaded.
Profiling alternative formats | May provide an indication that papers can be accessed by users with disabilities or by users of mobile devices. | Provision of multiple formats may enhance access and reuse.
Profiling format quality | Provides an indication that the formats provided are fit for purpose (e.g. PDFs are not just scanned images). | This may indicate problems with repository workflow, a need for education, etc.

But what additional tools may be needed (I would welcome a mobile app for my iPod Touch along the lines of the stats app for WordPress blogs)? What advice is needed in interpreting the findings (and avoiding misinterpretations)? Your thoughts are welcomed.



Posted in Evidence, Repositories | 9 Comments »

Institutional Use of Twitter by the 1994 Group of UK Universities

Posted by Brian Kelly on 22 February 2011

A survey of institutional use of Twitter by Russell Group University Web sites was published on 14th January 2011. But are the approaches taken across that sector typical of the UK HE community? In order to observe approaches across a wider group of institutions the survey was repeated across the 1994 Group. This group, which was established in 1994, “brings together nineteen internationally renowned, research-intensive universities. The Group provides a central vehicle to help members promote their common interests in higher education, respond efficiently to key policy issues, and share best methods and practice”.

The survey was carried out on 18-19th January 2011 and, as with the initial survey, recorded the number of followers, users followed and tweets published, together with details of the location and biographical details of the institutional accounts and the provision of links to the Tweetstats service, which provides statistical information on the average number of tweets posted per month.

Note that following comments made on the initial survey it was felt that it would be useful to include information on the number of Twitter lists in which the accounts are included (as described in a post on Who Needs Murdoch – I’ve Got Smartr, My Own Personalised Daily Newspaper!, we may start to see Twitter lists being used in a number of interesting ways).

In addition, information on the background provided on the Twitter Web site is included, as this may have implications for accessibility, and details of the date of the first tweet have been included. The statistical information provided by the Tweetstats service was extended to profile the Twitter clients used to post tweets. Also note that the information was gathered from the Web interface while not logged in to Twitter, and that the full URL of the link to the institutional Web site is provided (rather than the partial URL which is displayed, as was published in the previous survey).

Ref. No. | Institution | Nos. of Followers | Following | Tweets | Nos. of Lists | First Tweet | Tweetstats | Background Image
1 University of Bath: @uniofbath 

Name: University of Bath
Location: Bath, England
Web: http://www.bath.ac.uk/
Bio: News from the University of Bath

5,339 73 1,642 290 19 Jan 2009 Tweetstats for University of Bath

Average: 65 tweets per month.

Twitter clients:
Tweetdeck (50%), Web (42%)

Logo and brief textual information
2 Birkbeck, University of London: 

No single central account but multiple accounts listed.

3 Durham University: @durhamuni 

Name: Durham University
Location: Durham, UK
Web: http://www.dur.ac.uk/
Bio: Shaped by the past, creating the future

4,302 2 208 233 2 Aug 2008 Tweetstats for Durham University

Average: 6 tweets per month.

Twitter clients: Twitterfeed (100%)

Purple background
4 University of East Anglia: @UEA_news 

Name: Uni of East Anglia
Location: Norwich, Norfolk, UK
Web: http://www.uea.ac.uk/
Bio: The University of East Anglia is an internationally-renowned university based in the cathedral city of Norwich in the UK.

3,256 129 307 158 26 Mar 2009 Tweetstats for University of East Anglia

Average: 13 tweets per month.

Twitter clients:
Web (53%), TwitThis (46%)

Plain blue background
5 University of Essex: @Uni_of_Essex Name: University of Essex
Location: Colchester, Loughton, Southend
Web: http://www.essex.ac.uk/
Bio: The University of Essex is one of the UK’s leading academic institutions. We are one of the UK’s top ten universities for both teaching and research.
2,259 237 876 112 27 Feb 2009 Tweetstats for Uni_of_Essex

Average: 38 tweets per month.

Twitter clients:
Tweetdeck (39%), Facebook (36%), Web (21%),  Google (3%)

Photo with text of URLs for other social Web accounts
6 University of Exeter: @uniofexeter Name: University of Exeter
Location: Devon, UK
Web: http://www.exeter.ac.uk/
Bio: Exeter is a top UK university which combines world leading research with very high levels of student satisfaction.
1,829 1,720 608 71 28 Jul 2009 Tweetstats for University of Exeter

Average: 33 tweets per month.

Twitter clients:
Tweetdeck (50%), Twitterfeed (24%), Web (9%)

Photo montage
7 Goldsmiths, University of London: @goldsmithsuol 

Name: Goldsmiths
Location: New Cross, London, SE14
Web: http://www.gold.ac.uk/
Bio: The latest news and events from Goldsmiths, University of London. Regularly updated by real people in the Goldsmiths Press Office!

2,883 458 428 174 13 Feb 2009 Tweetstats for Goldsmiths, University of London 

Average: 15 tweets per month.

Twitter clients:
Tweetdeck (59%), Web (24%)

Photo
8 Institute of Education, University of London: @IOE_London 

Name: IOE
Location: London, UK
Web: http://www.ioe.ac.uk/
Bio: News and events from the Institute of Education, University of London

699 279 226 29 22 Jan 2010 Tweetstats for Institute of Education, University of London

Average: 13 tweets per month.

Twitter clients:
Web (73%) NOTE 1

Photo, logo and textual information
9 Lancaster University: @lancasteruni 

Name: Lancaster University
Location: Lancaster, UK
Web: http://www.lancs.ac.uk/
Bio: News from Lancaster University

2,886 136 290 198 20 Mar 2009 Tweetstats for Lancaster University

Average: 10 tweets per month.

Twitter clients:
Web (57%) NOTE 1

Logo
10 University of Leicester: @UniofLeics 

Name: University Leicester
Location: University of Leicester, UK
Web: http://www.le.ac.uk/
Bio: Twitter channel for the University of Leicester

758 49 141 49 9 Oct 2009 Tweetstats for Leicester University:
Average: 14 tweets per month.
Twitter clients:
Web (95%)
Logo
11 Loughborough University: @lborouniversity 

Name: Loughborough Uni
Location: Loughborough
Web: http://www.lboro.ac.uk/
Bio: None

423 14 164 43 5 Aug 2009 Tweetstats for Loughborough University:
Average:  9 tweets per month.
Twitter clients: Tweetdeck (80%), Web (7%)
Photo
12 Queen Mary, University of London: @qmul 

Name: Queen Mary Uni Londn
Location: London, UK
Web: http://www.qmul.ac.uk/ – with tag info
Bio: News and events and some other musings from Queen Mary, University of London.

2,644 1,250 799 150 28 Jan 2009 Tweetstats for Queen Mary, University of London

Average: 30 tweets per month.

Twitter clients:
Tweetdeck (59%), Web (25%), bit.ly (6%), Facebook (4%)

Photo and logo
13 University of Reading: @UniRdg_News 

Name: Uni of Reading
Location: Reading, England
Web: http://www.reading.ac.uk/
Bio: Keep up to date with all the latest news from the University of Reading!

625 143 176 42 19 Jan 2010 Tweetstats for the University of Reading:
Average: 8 tweets per month. 

Twitter clients:
Web (60%)

None
14 University of St Andrews:  @univofstandrews 

Name: Univ of St Andrews
Location: St Andrews
Web: http://www.st-andrews.ac.uk/
Bio: University of St Andrews – Scotland’s first university

2,352 118 299 158 2 Feb 2009 Tweetstats for University of St Andrews:
Average: 12 tweets per month. 

Twitter clients:

Tweetfeed (78%), Twhirl (8%), Seesmic (5%), Web (2%)

Blue background and logo
15 School of Oriental and African Studies: @SOASNews  

Name: SOAS News
Location:
Web: http://www.soas.ac.uk/
Bio: None

(Note I was informed on 12 March 2011 that the @SOASnewsroom and @SOASfeed are the official SOAS Twitter feeds)

(122) (2) (0) (3) Default
16 University of  Surrey:  @uniofsurrey 

Name: University of  Surrey
Location: Guildford, UK
Web: http://www.surrey.ac.uk/
Bio: Tweets from the University of Surrey

4,058 473 710 216 ??? Tweetstats for University of  Surrey 

Average: 24 tweets per month.

Twitter clients:
Cotweet (72%), Tweetie (5%), Web (3%), Tweetdeck (2%)

Photo and logo
17 University of Sussex: @sussexuni 

Name: University of Sussex
Location: Brighton, UK
Web: http://www.sussex.ac.uk/
Bio: University of Sussex is a top 10 UK research intensive university set in beautiful downland on the edge of Brighton, with over 11,000 students and 2,500 staff.

5,866 1,171 1,824 321 16 Feb 2009 Tweetstats for University of Sussex

Average: 74 tweets per month.

Twitter clients:
Web (50%), Hootsuite (43%), MobileWeb (3%)

Photo
18 University of York: @uniofyork 

Name: University of York
Location: York, UK
Web: http://www.york.ac.uk/news-and-events/
Bio: The latest news and events at the University of York, UK

2,822 113 394 222 30 Mar 2009 Tweetstats for University of York:
Average: 17 tweets per month.
Twitter clients:
bit.ly (58%), Web (40%)
Photo
TOTAL 41,320 6,367 9,092

Note that the results from use of the MyFirstTweet service were inconsistent due to problems with the service itself. It is also unclear whether the correct page will be displayed by following the link provided.

Also note that the results for SOAS were not included in the subsequent discussions and analyses.

Discussion

The previous survey documented examples of emerging best practices including suggestions on:

  • Content provided in profile information (the bio: field).
  • Location information.
  • Links to the host institution.

This information is not repeated here.

Metrics

A summary showing the range of various Twitter metrics for the 1994 Group is given below:

  • Numbers of Twitter followers: The numbers ranged from 423-5,866 (in comparison with a range of 865-12,265 for Russell Group Universities).
  • Numbers of Twitter users followed: The numbers ranged from 2-1,720 (in comparison with a range of 33-5,089 for Russell Group Universities).
  • Numbers of tweets: The numbers ranged from 141-1,824  (in comparison with a range of 192-1,167 for Russell Group Universities).
  • Average numbers of tweet per month: The numbers ranged from 6-74 (in comparison with a range of 23-91 for Russell Group Universities).
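
The ranges above can be reproduced from the survey table with a few lines of code (the follower and tweet counts are copied from the table above; the two accounts without usable data, Birkbeck and SOAS, are omitted):

```python
# Follower and tweet counts for the sixteen surveyed 1994 Group accounts,
# copied from the table above (Birkbeck and SOAS omitted).
followers = [5339, 4302, 3256, 2259, 1829, 2883, 699, 2886,
             758, 423, 2644, 625, 2352, 4058, 5866, 2822]
tweets = [1642, 208, 307, 876, 608, 428, 226, 290,
          141, 164, 799, 176, 299, 710, 1824, 394]

print(f"Followers: {min(followers):,}-{max(followers):,}")  # Followers: 423-5,866
print(f"Tweets: {min(tweets)}-{max(tweets):,}")             # Tweets: 141-1,824
```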

Further Thoughts on Emerging Best Practices

The previous survey highlighted some suggestions for emerging best practices based on observations of how Twitter is being used across Russell Group Universities. These suggestions will not be repeated here. Instead comments will be restricted to some of the additional features which were surveyed:

  • Background image and content: In the previous survey it was pointed out that “many of the institutional Twitter accounts had branded the Twitter home page, some with just a background image but others … with additional textual information and link information“. However such approaches may, arguably, act as barriers to people with disabilities. There will be a need for institutions to understand and address such concerns.
  • Twitter clients used for posting: The Tweetstats tool provides information on clients used to post tweets. It may be useful for those involved in managing institutional Twitter accounts to monitor the various clients used in order to be able to identify tools which may prove particularly useful for institutional tweeting.
  • Dates of first tweets: The date of an initial tweet may give an indication of when an institution began tweeting (although this may not be when an institutional Twitter feed was officially launched). Such information may indicate when Twitter became prevalent as an institutional tool. Many of the institutions seem to have launched their service in early 2009; it would be interesting to see whether that related to an event shortly before that date.

I hope these comments will prove useful for those involved in managing institutional (or department) Twitter accounts.

Posted in Evidence, Twitter | 10 Comments »

Feedback Invited on Briefing Paper on Holistic Approaches to Web Accessibility

Posted by Brian Kelly on 16 February 2011

Back in 2004 Lawrie Phipps, Elaine Swift and myself published our first paper on Web Accessibility: “Developing A Holistic Approach For E-Learning Accessibility” (which is available from the University of Bath institutional repository in HTML, PDF and MS Word formats).

Since then, together with a growing number of accessibility researchers and practitioners in the UK and Australia, I have built on this work through the publication of the following peer-reviewed papers.

No. Paper Details Access
1 Developing A Holistic Approach For E-Learning Accessibility Canadian Journal of Learning and Technology 2004 Repository item:
[MS Word] – [PDF] – [HTML] formats
2 Forcing Standardization or Accommodating Diversity? A Framework for Applying the WCAG in the Real World W4A 2005 Repository item:
[MS Word] – [PDF] – [HTML] formats
3 Implementing A Holistic Approach To E-Learning Accessibility ALT-C 2005 Repository item:
[MS Word] – [PDF] – [HTML] formats
4 Holistic Approaches to E-Learning Accessibility ALT-J 2006 Repository item:
[MS Word] – [PDF] – [HTML] formats
5 Contextual Web Accessibility – Maximizing the Benefit of Accessibility Guidelines W4A 2006 Repository item:
[MS Word] – [PDF] formats
6 Using Context To Support Effective Application Of Web Content Accessibility Guidelines Journal of Web Engineering 2006 Repository item:
Not currently available from repository
7 Accessibility 2.0: People, Policies and Processes W4A 2007 Repository item:
[MS Word] – [PDF] – [HTML] formats
8 One World, One Web … But Great Diversity W4A 2008 Repository item:
[MS Word] – [PDF] – [HTML] formats
9 Reflections on the Development of a Holistic Approach to Web Accessibility ADDW08 Repository item:
[MS Word] – [PDF] – [HTML] formats
10 Web Accessibility 3.0: Learning From The Past, Planning For The Future ADDW08 Repository item:
[MS Word] – [PDF] – [HTML] formats
11 Accessibility 2.0: Next Steps For Web Accessibility Journal of Access Services 2009 Repository item:
[PDF] format
12 From Web Accessibility To Web Adaptability Disability and Rehabilitation: Assistive Technology 2009 Repository item:
[PDF] – [HTML] formats
13 Developing Countries; Developing Experiences: Approaches to Accessibility for the Real World W4A 2010 Repository item:
[MS Word] – [PDF] – [HTML] formats

This work is based on the development of a set of ideas which have been validated through peer-reviewing processes. However there is a need to synthesise these ideas and make them available in a more understandable format in order that the approaches can be implemented by policy makers and practitioners responsible for implementing or commissioning accessible Web services.

I am currently finalising a briefing paper on “Holistic Approaches to Web Accessibility” which aims to provide this summary to these audiences. The draft version has been temporarily uploaded to the Scribd repository in order to facilitate sharing and provide an additional area for receiving feedback.  In addition the briefing paper is also embedded below.

The abstract for the briefing paper states:

Providing Web services which are widely accessible to users with disabilities can be challenging. Web accessibility guidelines provide a useful starting point but, given the increasing diversity of ways in which the Web is used, differing user requirements and the variety of ways of accessing Web resources, there is a need to avoid a simple ‘checklist’ mentality. This briefing paper describes a holistic approach to Web accessibility developed by researchers and practitioners in the UK and describes how these approaches relate to the BS 8878 Web Accessibility Code of Practice.

The briefing paper summarises the holistic approaches to Web accessibility which have been developed at UKOLN in conjunction with accessibility researchers and practitioners in the UK and Australia, and describes how such approaches can relate to the BS 8878 Web Accessibility Code of Practice, in particular in the context of institutional repositories and amplified events.

Your comments and feedback are welcome.

Posted in Accessibility | Tagged: | Leave a Comment »

HTML5 Standardisation Last Call – May 2011

Posted by Brian Kelly on 15 February 2011

I recently described the confusion over the standardisation of HTML5, with the WhatWG announcing that they are renaming HTML5 as ‘HTML’ and that it will be a ‘Living Standard’ which will continually evolve as browser vendors agree on new features to implement in the language.

It now seems that the W3C is responding to accusations that it is a slow-moving standardisation body with the announcement that “W3C Confirms May 2011 for HTML5 Last Call, Targets 2014 for HTML5 Standard”. In the press release Jeff Jaffe, W3C CEO, states that:

Even as innovation continues, advancing HTML5 to Recommendation provides the entire Web ecosystem with a stable, tested, interoperable standard

I welcome this announcement as I feel that it helps to address recent uncertainties regarding the governance and roadmap for HTML developments. The onus is now on institutions: there is a clear roadmap for HTML5 development, with a stable standard currently being finalised. As providers of institutional Web services, what are your plans for deployment of HTML5?

Posted in standards, W3C | Tagged: | 1 Comment »

The W3C’s RDF and Other Working Groups

Posted by Brian Kelly on 14 February 2011

The W3C have recently announced the launch of the RDF Working Group.  As described in the RDF Working Group Charter:

The mission of the RDF Working Group, part of the Semantic Web Activity, is to update the 2004 version of the Resource Description Framework (RDF) Recommendation. The scope of work is to extend RDF to include some of the features that the community has identified as both desirable and important for interoperability based on experience with the 2004 version of the standard, but without having a negative effect on existing deployment efforts.

Membership of a W3C working group comprises W3C staff as well as representatives of W3C member organisations, which include the JISC. In addition it is possible to contact working group chairs and W3C team members in order to explore the possibility of participation as an invited expert.

Note that a list of W3C Working Groups, Interest Groups, Incubator Groups and Coordination Groups is provided on the W3C Web site. The Working Groups are typically responsible for the development of new W3C standards (known as ‘recommendations’) or the maintenance of existing recommendations. There are quite a number of working groups, including working groups for well-known W3C areas of work such as HTML, CSS and WAI, as well as newer or more specialised groups covering areas including Geolocation, SPARQL, RDF and RDFa.

W3C Interest Groups which may be of interest include Semantic Web, eGovernment and WAI. Similarly Incubator Groups which may be of interest to readers of this blog include the Federated Social Web, Library Linked Data, the Open Web Education Alliance and the WebID groups.

The W3C Process Document provides details of the working practices for Working Groups, Interest Groups and Incubator Groups. If anyone feels they would like to contribute to such groups I suggest you read the Process Document in order to understand the level of commitment which may be expected and, if you feel you can contribute to the work of a group, feel free to contact me.

Posted in standards, W3C | Leave a Comment »

Open Source, Open Standards, Open Access – A Problem For Higher Education?

Posted by Brian Kelly on 11 February 2011

Over on the JISC OSS Watch blog Ross Gardler has highlighted an area of concern from the recently published HEFCE Review of JISC. Ross states that:

… there is one paragraph that I am, quite frankly, appalled to see in this report:

“JISC’s promotion of the open agenda (open access, open resources, open source and open standards) is more controversial. This area alone is addressed by 24 programmes, 119 projects and five services. [7] A number of institutions are enthusiastic about this, but perceive an anti-publisher bias and note the importance of working in partnership with the successful UK publishing industry. Publishers find the JISC stance problematic.”

In his post, which is titled “Is UK education policy being dictated by publishers?“, Ross goes on to summarise the benefits which can be gained by the higher education community through use of, and engagement in the development of, open source software.

The wording in the JISC review – open agenda (open access, open resources, open source and open standards) – reminded me of a paper written by myself (based at UKOLN), Scott Wilson (of JISC CETIS) and Randy Metcalfe (Ross Gardler’s predecessor as manager of the JISC OSS Watch service) which was entitled “Openness in Higher Education: Open Source, Open Standards, Open Access” and built on previous papers in this area.

Now if the paper had provided a simplistic view of openness I think criticism that it was promoting an ideological position would have been justified. But whilst the paper highlighted potential benefits for the higher education community to be gained from use of open source software, open standards and open content, it was also honest about the shortcomings. Rather than, to use the words of the review document, the “promotion of an open agenda”, the paper argued that institutions should be looking to gain the benefits for themselves, rather than promoting open source software, open standards or open content per se.

Perhaps such distinctions aren’t being appreciated by the wider community, and openness is being seen as an ideology and used as a stick to beat commercial providers such as publishers. This approach quite clearly isn’t being taken by the co-authors of our paper. Indeed, as can be seen from yesterday’s blog post on the failures of W3C’s PICS standard, the failures of open standards are being identified so that we can learn from such failures and avoid repeating the mistakes in future.

A few days ago I published a post in which Feedback [was] Invited on Draft Copy of Briefing Paper on Selection and Use of Open Standards – if open standards can prove problematic, advice is needed on approaches to the selection of open standards which will minimise the risks of choosing an open standard which fails to deliver the expected benefits.

But I am sure that there is a need for continued promotion of the sophisticated approaches to the exploitation of openness which the JISC Review seems to be unaware of. A poster summarising the approaches is being prepared for the JISC 2011 conference and will be displayed on a stand shared by UKOLN, CETIS and JISC OSS Watch. A draft version of the poster is embedded below (and hosted on Scribd). We feel this provides a pragmatic approach which will help to provide benefits across the HE sector and avoids accusations of taking an anti-publisher approach.

Your comments on these approaches are welcomed.

Posted in standards | Tagged: | 5 Comments »

Remember PICS? Learning From Standards Which Fail

Posted by Brian Kelly on 10 February 2011

A Message to the PICS-interest Mailing List

Yesterday I received an email message on the W3C’s PICS-interest group’s mailing list from Eduardo Lima Martinez who asked:

I’m building a website for people over 16 years of age. This not is a porn site, but shows raw images (“curcus pretty girls doing ugly things”) not suitable for kids.

He went on to ask:

What are the correct PICS labels for this site?. I do not read/write correctly the english language. I do not understand the terms of HTTP headers “Protocol: {…}” and “PICS-Label: (…)” Can you guide me? Can you show me a sample site that has the correct PICS labels?

Leaving aside the rather unsavoury nature of the content, I was surprised to receive this email as I was unaware that I was still subscribed to the PICS-interest list. However, looking at the archives for the list, it can be seen that there have been only a handful of postings over the past five years or so, several of which are just conference announcements or spam. As seems to be the case for quite a number of mailing lists, this one has fallen into disuse. But the first legitimate posting to the list since April 2009, and the subsequent responses, caused me to reflect on the rise and fall of the W3C PICS standard.

Revisiting PICS

PICS, the Platform for Internet Content Selection, was developed in 1996 in response to the proposed Communications Decency Act (CDA) US legislation. As described in encyclopedia.com:

“The first version of this amendment, sponsored by Senator James Exon without hearings and with little discussion among committee members, would have made it illegal to make any indecent material available on computer networks”.

In parallel with arguments that such legislation was unconstitutional, the W3C responded by developing a standard which provided a decentralised way of labelling Web resources. It would then be possible to configure client software to block access to resources deemed offensive or inappropriate for the end user. This software could be managed by a parent for a home computer or by an appropriate body in a school context. There was also an infrastructure to manage the content labelling schemes which complemented the W3C’s technical developments with, as described in the Wikipedia entry, the RSAC being founded in 1994 to provide labelling of video games and, later, the RSACi providing a similar role for online resources. This organisation was closed in 1999 and reformed into the Internet Content Rating Association (ICRA). In 2007 ICRA became part of FOSI (Family Online Safety Institute) – an organisation which, as described in an email message by Dan Brickley, no longer has any activities in this technology area or support for their older work. As Dan pointed out to Eduardo “there is no direct modern successor to the RSACi/ICRA PICS work to recommend to you”.

What Are The Lessons?

In 1996 we had a standard (actually a number of W3C Recommendations) which provided a decentralised approach for labelling Internet content. As described above there were international organisations involved in the provision and management of labelling schemes and there were various applications which provided support for the standards, including Internet Explorer, with Microsoft providing a tutorial on how to use PICS.
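To give a flavour of the technology, a PICS label could be delivered in the document itself or as HTTP headers. A label using the RSACi rating vocabulary looked something like the following sketch, based on the published PICS-1.1 examples (the URL, date and rating values here are illustrative):

```
Protocol: {PICS-1.1 {headers PICS-Label}}
PICS-Label: (PICS-1.1 "http://www.rsac.org/ratingsv01.html"
  l gen true
  for "http://www.example.org/" on "1996.04.16T08:15-0500"
  r (n 0 s 0 v 0 l 0))
```

The single-letter codes in the rating – n, s, v and l – denoted the RSACi categories (nudity, sex, violence and language), each with a numeric level which client software could compare against locally configured thresholds before deciding whether to display the resource.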

But what went wrong? Why did this standard and accompanying infrastructure fail to be sustainable?  Is there no longer a need to be able to manage access to pornographic, violent and related resources? Do we have a better standards-based solution?

I think it is clear that there is still a need for a solution to the problems which PICS sought to address – and the various filtering solutions which are found in schools do not provide the flexibility of a standards-based approach such as that provided by PICS.

But perhaps the cost of managing PICS labels was too expensive – after all, metadata is expensive to create and manage. Or perhaps PICS was developed too soon in the W3C’s life, before XML provided a generalised language for developing metadata applications? But would replacing PICS’s use of “{” by XML’s “<” and “>” and the accompanying portfolio of XML standards really have made a significant difference?

Dan Brickley pointed out that PICS is largely obsolete technology and its core functionality has been rebuilt around RDF:

1. Roughly PICS label schemes are now RDF Schemas (or more powerfully, OWL Ontologies)
2. PICS Label Bureaus are replaced by Web services that speak W3C’s SPARQL language for querying RDF – see http://www.w3.org/TR/rdf-sparql-query/
3. PICS’ ability to make labels for all pages sharing a common URL pattern is addressed by POWDER – see http://www.w3.org/2007/powder/
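Dan’s second point can be illustrated with a sketch. Where a PICS label bureau answered queries about a URL’s ratings, an RDF-based service exposes descriptions which clients query using SPARQL. The vocabulary below (the ex: namespace and its property names) is invented purely for illustration, not taken from any published labelling scheme:

```sparql
PREFIX ex: <http://example.org/content-labels#>

SELECT ?category ?level
WHERE {
  ?label ex:describes <http://www.example.org/page.html> ;
         ex:category ?category ;
         ex:level    ?level .
}
```

A filtering client could then apply local policy to the returned category/level pairs, much as PICS clients compared label values against configured thresholds.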

Hmm, should Eduardo be looking at POWDER – a W3C standard which “has superseded PICS as the recommended method for describing Web sites and building applications that act on such descriptions“.

But perhaps this is an area in which open standards are not appropriate. As Phil Archer pointed out in the discussion on the PICS-interest list:

there really isn’t any advantage in adding labels, whether in PICS or POWDER, for child protection purposes. All the filters that people actually use work well without using labels at all. It’s an idea that has long had its day. If interested, see [1, 2]” [Note reference 2 is a PDF file]

I guess the organisations involved in developing the PICS standard, the tools which supported PICS and the organisations which labelled their resources will have failed to see a return on their investment in supporting this open standard. Will it be any different with POWDER, I wonder? What is different this time?

Posted in standards | Tagged: | 2 Comments »

Twitter Posts Are Not Private: What are the Implications?

Posted by Brian Kelly on 9 February 2011

The article published on the BBC News Web site yesterday seemed unambiguous: “‘Twitter messages not private’ rules PCC“. This news item summarised news published by the PCC, the Press Complaints Commission, which ruled that “Material that is published on Twitter should be considered public and can be published“. The context was a complaint by a Department of Transport official that the use of her tweets by newspapers constituted an invasion of privacy – apparently the official, who was named in the article, had tweeted about “being hungover at work“. But even though she had a clear disclaimer that the views expressed by her on Twitter were personal, her tweets were published in the press. An article in The Guardian provides further information – it seems that the Daily Mail and the Independent on Sunday published this information. I must admit that I find it unsurprising that the Daily Mail has used this as an opportunity to have a dig at the public sector. But what are the implications of this ruling for the rest of us? Some thoughts:

  • It’s pointless saying one’s (public) tweets are personal if you tweet in a professional capacity. The press can publish such information and use it as an opportunity to have a go at you and your host institution. This is the standard type of advice which is given to students using social media, but perhaps we forget to think about the implications for ourselves. Caveat Twitterer! – as perhaps the various footballers and cricketers who have been fined for tweeting inappropriate remarks would echo.
  • This news does seem to validate reuse of tweets. Martin Hawksey, who developed the iTitle Twitter captioning service will no doubt be relieved that it seems he does not need to obtain permission before reusing public tweets as will developers of Twitter archiving services (and note that in the JISC-funded developments to the Twapper Keeper Twitter archiving service for which UKOLN provided the project management we did identify that privacy concerns did need to be considered).
  • However it should be pointed out that this ruling came from the PCC – it is not a legal ruling.

Good news which seems to validate reuse of tweets, or a dangerous intrusion into personal space? What do you think? Should all organisations be providing guidelines not only on institutional use of social media but also on personal use, such as EDINA’s guidelines which were published recently (with a Creative Commons licence) and which state:

EDINA, as part of the University of Edinburgh, is your employer and as such you have a legal and moral responsibility not to bring either organisation into disrepute. Maintaining the reputation of EDINA, EDINA projects, services and staff members plays a crucial part in ensuring the continuing success of the organisation. Comments, particularly those with a strongly negative or unprofessional tone, can have serious unintended consequences. It is therefore important to remember that what you say about your work, even in personal social media presences, can reflect upon EDINA.

Please exercise common sense over whether or not the space you are posting to (whether your own or as a guest post on another person or organisation’s blog or social media presence) is an appropriate space for discussion of work or work related matters. If in doubt, you can always ask your line manager for advice.

The Hounding of the Baskerville article in the Independent on Sunday is worth reading to provide a context to such discussions.

Posted in openness, Twitter | 17 Comments »

Feedback Invited on Draft Copy of Briefing Paper on Selection and Use of Open Standards

Posted by Brian Kelly on 8 February 2011

A draft UKOLN briefing paper on the “Selection and Use of Open Standards” is available for comments before publication. The document is based on previous work led by UKOLN in conjunction with the AHDS in the JISC-funded QA Focus project on the development of a quality assurance framework for JISC-funded development projects. Subsequent work with JISC CETIS, JISC OSS Watch and others was described in papers on “A Standards Framework For Digital Library Programmes“, “A Contextual Framework For Standards” and “Openness in Higher Education: Open Source, Open Standards, Open Access” which were presented at the ichim05, WWW 2006 and elPub 2007 conferences respectively. More recently a position paper which described “An Opportunities and Risks Framework For Standards” was presented to a CETIS event on the Future of Interoperability Standards.

The briefing paper omits much of the background and discussion which was included in these papers and instead seeks to provide a more focussed summary of the contextual approaches and the opportunities and risks framework which have been developed to support development activities, especially where new and emerging standards are being considered.

The draft briefing paper is currently available on Scribd and is embedded below.

I am grateful for feedback on an earlier draft of this paper which I have received from colleagues at JISC CETIS. Comments from the wider community are welcomed.

Posted in standards | 3 Comments »

UKOLN Seminar: Website Design – Down with Technicalities, Up with the User and Crawler

Posted by Brian Kelly on 7 February 2011

Forthcoming UKOLN Seminar

As described in a recent post this year UKOLN is opening up access to its seminar programme to other staff and researchers at the University of Bath and the wider community. We are pleased to announce the first international speaker of the year: Professor Melius Weideman, Head of Department in the Faculty of Informatics and Design, Cape Peninsula University of Technology in Cape Town, South Africa. Professor Weideman will deliver a seminar on “Website Design – Down with Technicalities, Up with the User and Crawler” which will be held in the University Library at the University of Bath from 2-4 pm on Monday 21st February.

A small number of places are available for this seminar to those involved in the provision, management and promotion of  Web services.  An Eventbrite booking form is available for those who wish to reserve a free place. Details of the seminar are given below.

About The Seminar

Title: Website Design – Down with Technicalities, Up with the User and Crawler
Speaker: Professor Weideman, Cape Peninsula University of Technology, South Africa
Location: Level 4 meeting room, Library, University of Bath, BA2 7AY
Date and Time: Monday 21st February 2011 from 2-4 pm.

Further Information

The aim of this seminar is to provide the attendee with a holistic and practical view of website design, as seen from both the user and the search engine crawler side.  Some of the outcomes include:

  • The attendees should be able to evaluate the website usability of a given website, while identifying and criticizing the important aspects of user-centred design.
  • The attendees should be able to evaluate the website visibility of a given website.

Target Audience

The target audience includes anyone with an interest in improving the usability, general functioning and visibility to crawlers of a website. This could include members of institutional Web management teams, Web developers, researchers and academics, CIOs and those involved in Web-based promotional and outreach activities.

Who Can Attend?

The seminar is open to UKOLN members of staff. In addition a limited number of places are available to members of Web management teams, marketing and outreach staff and researchers at the University of Bath and other HEIs.

Abstract

The speaker will attempt to merge the (sometimes clashing) demands of website visibility with human usability and the logic flow of websites. Through both theory sessions and short demos, the elements of websites contributing to the positive and negative sides of these three issues will be explored. Practical application rather than Internet programming technologies will be covered. The emphasis will be on understanding and identifying the contributing factors in a given website and on the evaluation of a website from these two perspectives.

Biographical Details

Melius Weideman is currently a Head of Department in the Faculty of Informatics and Design, Cape Peninsula University of Technology in Cape Town. After working in the electronics and computer industry, he joined academia in 1984.

His research interests initially focussed on computer viruses, but after 1994 the Internet, and specifically search engines, began to fascinate him.

He graduated with a Doctorate in Information Science from the University of Cape Town in 2001. He has since published numerous papers on topics including website visibility and usability, search engines and information retrieval. He published an academic book in 2009, titled “Website Visibility: The theory and practice of improving rankings”.

Posted in Events | 2 Comments »

Who Needs Murdoch – I’ve Got Smartr, My Own Personalised Daily Newspaper!

Posted by Brian Kelly on 4 February 2011

At about 7am this morning I noticed an interesting Facebook status update from Kerim Friedman, an anthropologist I’d met in Taiwan a few years ago. The status update came from a tweet from @Kerim:

If you use Twitter as your news reader, you really should try the “Smartr” iPhone app: http://smartr.mobi/ Nicely done!

This sounded interesting so I installed the app on my iPod Touch – and was impressed. As described in a pithy summary in a post on Mashable a few days ago “Smartr is a news reader for Twitter on the iPhone“. The post went on to add:

Instead of seeing tweets, the Smartr user views a Twitter feed filled with news snippets. “It’s a lens on top of your Twitter Feed,” says Factyle founder Temo Chalasani.

Users can click on updates in the filtered Twitter stream to read a Smartr reformatted, ad-free version of the article, share it with Facebook, Tumblr or Posterous, and choose to save it in-app or via Instapaper or Read it Later.

I tried it and was impressed. Later at work I created a Twitter list of official Twitter channels from a number of JISC services of particular interest to me. This provides a stream of official summaries of work from the various services, including links to further information, as illustrated. As can be seen this provides a summary of various reports, blog posts, news items, etc. In effect this provides the metadata for the resources and a link to the resources. But what of the resources themselves? The links need to be followed and if, like me, you use a device such as an iPod Touch you may download your tweets (and email messages and blog posts) before you head off to work to read on the bus, but you aren’t able to follow any links whilst offline.

Smartr, however, follows the links to resources in your main Twitter feed or feeds in any Twitter lists you have created – i.e. it provides access to the data rather than the metadata. As illustrated the app provides a summary of the first few lines of the resource, which can then be viewed in full and also saved for reading later.
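The essence of that data-versus-metadata distinction – turning a stream of tweets into a de-duplicated list of readable resources – can be sketched in a few lines of Python. This is purely an illustration of the idea, not Smartr’s actual implementation; the tweet texts and URLs are invented:

```python
import re

# Match http/https links embedded in tweet text
URL_PATTERN = re.compile(r"https?://\S+")

def extract_reading_list(tweets):
    """Turn a list of tweet texts into an ordered, de-duplicated list of links."""
    seen = set()
    reading_list = []
    for tweet in tweets:
        for url in URL_PATTERN.findall(tweet):
            url = url.rstrip(".,)")  # strip trailing punctuation from the match
            if url not in seen:
                seen.add(url)
                reading_list.append(url)
    return reading_list

tweets = [
    "New briefing paper on open standards: http://example.org/briefing.pdf",
    "Worth a read - http://example.org/briefing.pdf (via @ukoln)",
    "Project update now live at http://example.org/update",
]
print(extract_reading_list(tweets))
```

A reader application would then fetch each link in the list and render a reformatted, offline-readable version – which is the step that distinguishes a tool like Smartr from a plain Twitter client.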

I’m impressed. In particular I think it will be useful for use with official Twitter feeds, for which there is likely to be some consistency in the links which are shared, unlike the Twitter feeds from one’s followers, which are likely to be a mixture of work and social links being shared (and if you follow people from around the globe they may be sharing their social interests during our working day).

This use of official Twitter accounts for resource sharing and ease of access on mobile devices is very interesting – and goes against the suggestions from Ferdinand von Prondzynski, former President of Dublin City University and forthcoming Principal and Vice-Chancellor of Robert Gordon University who, in a post on Institutional Tweets, criticised typical institutional use of Twitter since “all tweets are …. announcements, either of some research project or other or of something the university wants to sell“; Twitter, he seems to feel, is a social medium and should only be used for conversations and not broadcasting. I disagree – Twitter, like all IT applications, is a tool and if it can be used successfully in novel ways I would applaud such innovation.

But, like Rupert Murdoch’s The Daily newspaper for the iPad, is such innovative use proprietary? Not necessarily, as it’s based on open data (tweets and links), applications to read such information can be developed on any platform, and there are other applications, such as paper.li, which provide similar functionality. For me Smartr’s strength lies in being designed for a mobile device, and I can see myself using it until competition catches up and provides similar functionality for my Android phone. But to not make use of it because it is not cross-platform would deprive me of a potentially useful service.

Still unsure? Why not watch the video which is available on YouTube and embedded below. And if you’d like to install it visit the Apple iTunes store.


NOTE: On 5 March 2012 I received the following email:

Dear user,
Unfortunately, the Smartr team is moving on to new things and is unable to support its continued development. With a heavy heart, we will be pulling the plug on the service on the 15th of March @ 1pm EST.

Although Smartr no longer exists, I think it did provide an indication of a new generation of personalised newspaper, which could provide content based on Twitter feeds.

Note added on 22 August 2012.

Posted in Twitter | 9 Comments »

The HTML5 Standardisation Journey Won’t Be Easy

Posted by Brian Kelly on 3 February 2011

I recently published a post on Further HTML5 Developments in which I described how the W3C were being supportive of approaches to the promotion of HTML5 and the Open Web Platform. However in a post entitled HTML is the new HTML5 published on 19th January 2011 on the WhatWG blog Ian Hickson, editor of the HTML5 specification (and graduate of the University of Bath who now works for Google) announced that “The HTML specification will henceforth just be known as ‘HTML’”. As described in the FAQ it is intended that HTML5 will be a “living standard”:

… standards that are continuously updated as they receive feedback, either from Web designers, browser vendors, tool vendors, or indeed any other interested party. It also means that new features get added to them over time, at a rate intended to keep the specifications a little ahead of the implementations but not so far ahead that the implementations give up.

What this means for the HTML5 marketing activities is unclear. But perhaps more worrying is what this will mean for the formal standardisation process in which the W3C has been involved. Since it seems that new HTML(5) features can be implemented by browser and tool vendors, this seems to herald a return to the days of the browser wars, during which Netscape and Microsoft introduced ‘innovative’ features such as the BLINK and MARQUEE tags.

On the W3C’s public-html list Joshue O Connor (a member of the W3C WAI Protocol and Formats Working Group) feels that:

What this move effectively means is that HTML (5) will be implemented in a piecemeal manner, with vendors (browser manufacturers/AT makers etc) cherry picking the parts that they want. … This current move by the WHATWG, will mean that discussions that have been going on about how best to implement accessibility features in HTML 5 could well become redundant, or unfinished or maybe never even implemented at all.

In response Anne van Kesteren of Opera points out that:

Browsers have always implemented standards piecemeal because implementing them completely is simply not doable. I do not think that accepting reality will actually change reality though. That would be kind of weird. We still want to implement the features.

and goes on to add:

Specifications have been in flux forever. The WHATWG HTML standard since 2004. This has not stopped browsers implementing features from it. E.g. Opera shipped Web Forms 2.0 before it was ready and has since made major changes to it. Gecko experimented with storage APIs before they were ready, etc. Specifications do not influence such decisions.

Just over a year ago a CETIS meeting on The Future of Interoperability and Standards in Education explored “the role of informal specification communities in rapidly developing, implementing and testing specifications in an open process before submission to more formal, possibly closed, standards bodies“. But while rapid development, implementation and testing were felt to be valuable there was a recognition of the continued need for the more formal standardisation process. Perhaps the importance of rapid development which was highlighted at the CETIS event has been demonstrated by the developments centred around HTML5, with the W3C providing snapshots once the implementation and testing of new HTML developments have taken place, but I feel uneasy at the developments. This unease has much to do with the apparent autonomy of browser vendors: I have mentioned comments from employees of Google and Opera who seem to be endorsing this move (how would we feel if it was Microsoft which was challenging the W3C’s standardisation process?). But perhaps we should accept that significant Web developments are no longer being driven by a standards organisation or by grass-roots developments but by the major global players in the market-place? Doesn’t sound good, does it – a twenty-first century return to browser vendors introducing updated versions of BLINK and MARQUEE elements as they’ll know what users want :-(

Posted in HTML, standards, W3C | Tagged: | 3 Comments »

WAI-ARIA 1.0 Candidate Recommendation – Request for Implementation Experiences and Feedback

Posted by Brian Kelly on 2 February 2011

W3C announced the publication of WAI-ARIA 1.0 as a W3C Candidate Recommendation on 18th January 2011. A Candidate Recommendation (CR) is a major step in the W3C standards development process which signals that there is broad consensus in the Working Group and among public reviewers on the technical content of the proposed recommendation. The primary purpose of the CR stage is to implement and test WAI-ARIA. If you are interested in helping or have additional comments you are invited to follow the content submission instructions.

WAI-ARIA is a technical specification that defines a way to make Web content and Web applications more accessible to people with disabilities. It especially helps with dynamic content and advanced user interface controls developed with AJAX, HTML, JavaScript and related technologies. For an introduction to the WAI-ARIA suite please see the WAI-ARIA Overview or the WAI-ARIA FAQ.
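As a concrete flavour of the specification, the sketch below shows two common WAI-ARIA idioms: a live region, whose dynamically updated content is announced by screen readers without the user having to move focus, and a scripted custom widget given a role and state values which assistive technologies can interpret. The element content and labels are illustrative:

```html
<!-- Status messages injected by JavaScript are announced automatically -->
<div role="status" aria-live="polite">
  Search returned 12 results
</div>

<!-- A scripted slider exposed to assistive technologies as a slider -->
<div role="slider" tabindex="0"
     aria-label="Volume"
     aria-valuemin="0" aria-valuemax="100" aria-valuenow="50">
</div>
```

Without the role and aria-* attributes, a screen reader would perceive both of these as anonymous div elements; with them, the dynamic behaviour becomes perceivable and operable.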

It does occur to me that in light of the significant development work we are seeing in areas such as repositories, e-learning systems, e-research, etc. there may be examples of developments which have enhanced the user interface in ways which enhance access for users with disabilities. If you have made use of WAI-ARIA 1.0 techniques in the development of your services, as mentioned on the W3C blog, W3C WAI would welcome such feedback. Please note that the closing date for comments is 25th February 2011.

Posted in Accessibility, standards, W3C | Leave a Comment »

Assessing the Value of a Tweet

Posted by Brian Kelly on 1 February 2011

Earlier today Phil Bradley published a post on “The value of a tweet“. The post was about the way in which a tweet can be retweeted, especially by someone famous with lots of followers – in this case Neil Gaiman, @neilhimself, with his 1.5 million followers (note I’d never heard of him!) – in order to generate traffic to a resource (in this case a series of photos on the value of libraries). The tweet which had the value was:

which was retweeted following a request from @arktemplar. The tweet from @neilhimself helped to raise awareness of Phil’s series of retro posters on the value of Libraries  across the Twitter community, as can be seen from Twitoaster.  As Phil described in his blog post he saw a huge increase in traffic to his Flickr set, as can be seen from the graph.

But how do we assess the value of Phil’s original tweets which referred to the Flickr photos and the subsequent retweets?

Is the value in the content of the 140 characters? In part, but the value of the content is enhanced by the esteem in which Phil is held within the Library sector, the knowledge that many people will have of Phil’s passion for libraries and the online community of which Phil is an active member, which is based around his Twitter account, his blog and his other online accounts such as his Flickr and Podcast accounts. Phil also knows how to make effective use of such services, so his use of the #savelibraries Twitter hashtag will have helped in the dissemination of the tweet to people who don’t follow Phil directly. In addition his use of a bit.ly short URL enables statistics on clicks on the URL to be accessed (by appending a + to the bit.ly URL – i.e. http://bit.ly/eI5m2e+).

But do the original tweet and the subsequent retweets have value in themselves, or is the value in the impact they have? The tweets could have some financial value if, for example, they linked to a page which contained ads. But this isn’t the case here. Surely, then, the value is in raising awareness of the value of libraries across large numbers of users, with the aim, clearly, of trying to address the cuts in UK public libraries. Now how much would such a campaign cost if it was carried out using old media? I’m not in a position to make such comparisons but I can’t help but feel that Phil’s tweets, his use of the new media and his engagement with his online community have provided a valuable return on the investment for his series of Twitter posts.

Posted in Evidence, Twitter | 1 Comment »