UK Web Focus

Innovation and best practices for the Web

Archive for January, 2012

Has Machine Translation Come of Age?

Posted by Brian Kelly (UK Web Focus) on 27 January 2012

Over two years ago in a post entitled Extending Your Community – Through Machine Translation I suggested that although in the past machine translation was felt to be of little use, developments with services such as Google Translate may mean that “machine translation now does have a role to play“.

A few weeks ago I came across a referrer link from a blog post entitled “Google Scholar Citations y la emergencia de nuevos actores en la evaluación de la investigación“. My Chrome browser helpfully informed me that the page was in Spanish and provided a link to an automated translation of the page. The post began:

The launch of a few months ago Google Scholar Citations [1], the tool for measuring the impact of research publications in indexed by popular search engine, leads us to revise this and other bibliometric applications such efforts to measure the visibility of academic and researchers on the web. This impact is not limited to traditional media (citations received from other scientific works) but embraces new ways of scientific communication and their associated indicators, as the number of downloads of a job, people store it in your manager references or the time that a presentation is envisioned online. It discusses briefly the extent to affect the emergence of these new tools to the traditional databases for evaluation of science, Thomson Reuters, Web of Science [2], and Scopus, Elsevier [3].

I think this provides a comprehensible summary of what the post will cover. The post concluded:

Since the level of university policy and research evaluation, the question to be made ​​is whether any of the products mentioned both Microsoft and Google mainly but also alt-metrics initiatives can be serious competitors in the near future to the two large databases that provide information bibliometric, important cost, especially in an era marked by budget cuts. Traditional products are more creditworthy and stable than new ones by offering a wide range of possibilities and associated metrics, not just jobs but also to journals in which they are published. Besides its use is widespread and there are some metrics validated by professionals and bibliometrics by agencies with responsibility for research. However, it is legitimate debate about whether these databases are essential in research assessment processes. In our opinion, at present these databases (ISI Web of Science or Scopus, no need for two) are essential for the evaluation, however the new generation Science Information Systems (CRIS) [28] together seekers free scientists such as Google Scholar, and metrics based on the use of information may provide new solutions to the evaluation of science, perhaps the medium term by decreasing the need for costly citation indexes. Making prospective fiction might think how it would change the market for scientific information and assessment if Google decided to launch its own “impact index” from the indexed information, which does not seem unreasonable since its popular search management PageRank is based on a principle that already apply other bibliometric indicators. In any case, what is certain is that new products and tools available to researchers and evaluators to facilitate the dissemination and the retrieval of scientific information and open new possibilities for the exchange of scientific information and assessment.

The meaning is less clear, but it does seem that the authors, Alvaro Cabezas Clavijo and Daniel Torres-Salinas of the EC3 research group (Evaluación de la Ciencia y de la Comunicación Científica), based at the Hospital Universitario Virgen de las Nieves in Granada, have been asking whether new tools and approaches for identifying the value of scientific research are challenging the well-established tools provided by ISI Web of Science and Scopus. They seem to feel that researchers will need to continue to make use of ISI Web of Science or Scopus, but that new approaches may become increasingly relevant, especially if Google makes a business decision to further enhance its Google Scholar Citations service.

Although not mentioned in the conclusions, the article also reviews Microsoft Academic Search and suggests that “compared to Google Scholar Citations, the process of updating the cv is heavier“; a conclusion which reflects my experience of long delays in having updates accepted. The article also mentions the altmetrics initiative and provides links to a number of examples of such approaches including “Total Impact [19] where, in the same line, we can find metrics posted on Slideshare presentations [20], the times they shared a scientific article on Facebook [21], or the number of groups Mendeley which has collected a certain job”.

I found the article of interest and I’m pleased to have found it via the referrer link. Should searches of online foreign language resources now become a significant part of a research strategy, I wonder? I also wonder what the term “prospective fiction“, mentioned in the conclusions, means. Can any Spanish speakers suggest a better translation for the following sentence:

Haciendo prospectiva-ficción cabría pensar cómo cambiaría el mercado de la información y evaluación científica si Google decidiera lanzar su propio “índice de impacto” a partir de la información que indiza, lo cual no parece descabellado ya que su popular sistema de ordenación de búsqueda PageRank se basa en un principio que ya aplican otros índices bibliométricos.

Note that “prospectiva-ficción” was italicised in the original article.

Posted in General | 2 Comments »

SEO Analysis of UK Russell Group University Home Pages Using Blekko

Posted by Brian Kelly (UK Web Focus) on 25 January 2012

The JISC-funded Linking You Project

The Linking You project was led by the University of Lincoln and funded by the JISC under the Infrastructure for Education and Research Programme. Its aim was to examine, and make recommendations for improving, the way that identifiers for .ac.uk domains are planned and managed in higher education institutions. The background to this work was described on the project web site:

The web is now fundamental to the activity and idea of the university. This Toolkit provides a standard way of thinking about your institutional URI structure, making it easier for people (and their browsers) to both remember your web addresses and locate where they are in your web site. It also helps prepare your institution for the world of linked data by proposing a clear and concise model for your data, making smooth integration with other systems easier and faster. A good URI structure can be easily understood by both humans and machines.

Although one of the benefits of implementing the report’s recommendations was to “Improve discoverability of resources (and SEO)”, the Linking You project focussed primarily on identifiers for resources hosted within an institutional domain. This post aims to complement the Linking You work by gathering evidence on additional aspects: gaining an understanding of the size of institutional web sites, measuring the numbers of links to the institutional home page and to other resources hosted within the domains, and identifying variants of the URI for the most important page on a web site – the institutional home page.

About Blekko

A few weeks ago James Burke (@deburca) introduced me to Blekko: a “search engine that slashes out spam, content farms, and malware. We do this by having a smaller crawl of 3 billion pages that focuses on quality websites. We also have a tool called a slashtag that organizes websites around specific topics and improves search results for those topics.” In response to the question “What information is available on the SEO pages?” the site describes how:

blekko doesn’t believe in keeping secrets. As part of our effort to make search more transparent, we provide a view of the data that our crawler gathers as it crawls the web.

Every blekko search result has data associated with it that you can see. You can access it by either clicking on the SEO link tool in the second line of each result or else searching with the /seo slashtag. For example, apple.com /seo

Further information about Blekko (although it spells its name as ‘blekko’ on its web site I’ll use ‘Blekko’ in this post) is available on Wikipedia.

Using Blekko to Analyse Russell Group University Web Sites

What might Blekko tell us about UK university web sites? Blekko’s SEO pages provide details of the following information: geographic link distribution by state and country; inbound links; duplicated content; page source; sections and site pages. Blekko was used to survey the 20 Russell Group university home pages. The survey was carried out on 2 January 2012. However on 24 January it was noticed that, for the University of Birmingham, the host rank, number of site pages and number of inbound links had changed significantly: from 702.3 to 205.4, from 945 to 8,406 and from 24,442 to 627 respectively. The findings were rechecked but no other significant changes were noted.
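The per-institution checks can be scripted. As a rough sketch (the domain list is abbreviated here; the only query syntax assumed is Blekko's own documented `domain /seo` form, as in its "apple.com /seo" example):

```python
# Sketch: build the Blekko "/seo" slashtag query for each surveyed home page.
# The "domain /seo" query form is taken from Blekko's own example
# ("apple.com /seo"); the domain list below is abbreviated.
domains = [
    "www.birmingham.ac.uk",
    "www.bristol.ac.uk",
    "www.cam.ac.uk",
    # ... remaining Russell Group domains
]

def seo_query(domain: str) -> str:
    """Return the slashtag query that surfaces Blekko's SEO data for a domain."""
    return f"{domain} /seo"

queries = [seo_query(d) for d in domains]
print(queries[0])  # www.birmingham.ac.uk /seo
```

Each query string would then be entered into the Blekko search box (while signed in) to retrieve the SEO view for that domain.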

The results are given in the following table. Note that you need to be logged in to the service in order to view the results.

| Ref. No. | Institution | Blekko Analysis | Host Rank | Site Pages | Inbound Links (domain) | Inbound Links (URL) | Outbound Links | Notes |
|---|---|---|---|---|---|---|---|---|
| 1 | University of Birmingham | [Analysis] | 205.4 | 8,406 | 36,082 from 3,608 domains | 627 | {0 links} | There are 13 outbound links (11 unique) from http://www.birmingham.ac.uk/index.aspx |
| 2 | University of Bristol | [Analysis] | 812.1 | 21,018 | 73,016 from 5,705 domains | 40,101 | 5 links | |
| 3 | University of Cambridge | [Analysis] | 1,042.7 | 16,041 | 309,734 from 10,145 domains | 337,091 | 8 links (7 unique) | |
| 4 | Cardiff University | [Analysis] | 816.5 | 17,213 | 75,635 from 5,638 domains | 59,590 | 5 links | There are 29 links (26 unique) from http://www.cf.ac.uk/ |
| 5 | University of Edinburgh | [Analysis] | 991.5 | 11,544 | 160,422 from 6,885 domains | 168,545 | {0 links} | There is 1 outbound link from http://www.ed.ac.uk/home |
| 6 | University of Glasgow | [Analysis] | 1,090.5 | 12,243 | 100,271 from 9,454 domains | 40,101 | 5 links | |
| 7 | Imperial College | [Analysis] | 476.8 | 12,984 | 87,086 from 2,920 domains | 34,566 | {0 links} | There are 3 outbound links from http://www3.imperial.ac.uk/ |
| 8 | King’s College London | [Analysis] | 1,105.4 | 14,263 | 97,943 from 9,566 domains | 26,986 | {0 links} | There are 11 outbound links (9 unique) from http://www.kcl.ac.uk/index.aspx |
| 9 | University of Leeds | [Analysis] | 1,141.8 | 16,617 | 134,501 from 10,886 domains | 88,520 | 7 links (5 unique) | |
| 10 | University of Liverpool | [Analysis] | 1,260.3 | 4,727 | 59,797 from 9,794 domains | 19,082 | 0 links | |
| 11 | London School of Economics & Political Science | [Analysis] | 1,201.1 | 12,243 | 122,437 from 10,886 domains | 29,795 | {0 links} | There are 3 outbound links from http://www2.lse.ac.uk/home.aspx |
| 12 | University of Manchester | [Analysis] | 694 | 13,292 | 186,893 from 5,193 domains | 215,887 | 8 links (7 unique) | |
| 13 | Newcastle University | [Analysis] | 1,125 | 16,041 | 75,635 from 9,127 domains | 40,101 | 4 links (3 unique) | |
| 14 | University of Nottingham | [Analysis] | 1,380.8 | 16,041 | 94,551 from 10,759 domains | 34,566 | 16 links (12 unique) | |
| 15 | University of Oxford | [Analysis] | 1,092.4 | 11,959 | 309,734 from 12,388 domains | 290,563 | 1 link | |
| 16 | Queen’s University Belfast | [Analysis] | 928.4 | 12,534 | 59,099 from 6,492 domains | 21,068 | 4 links | |
| 17 | University of Sheffield | [Analysis] | 529.7 | 13,449 | 44,578 from 3,524 domains | 20,050 | 13 links | 8 outbound links from http://www.sheffield.ac.uk are to http://www.shef.ac.uk/ |
| 18 | University of Southampton | [Analysis] | 1,018.1 | 12,338 | 129,845 from 9,127 domains | 69,132 | 9 links | 5 outbound links from http://www.soton.ac.uk are to http://www.southampton.ac.uk/ |
| 19 | UCL | [Analysis] | 1,607.6 | 507,319 | 783,542 from 23,638 domains | 476,718 | 9 links | |
| 20 | University of Warwick | [Analysis] | 820 | 9,679 | 45,638 from 4,106 domains | 16,448 | {0 links} | There are 14 links (12 unique) for http://www2.warwick.ac.uk/ |

Note that in the above table explanatory notes are given for figures displayed in braces, e.g. {0}. Also note that the Universities of Cambridge, Newcastle and Nottingham all seem to have exactly 16,041 pages.
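The inbound-links figures in the table follow a consistent "N from M domains" pattern, so they can be turned back into numbers for further analysis. The helper below is my own illustration, not part of any Blekko tooling:

```python
import re

def parse_inbound(cell: str) -> tuple[int, int]:
    """Parse a table cell such as '36,082 from 3,608 domains' into
    (total_links, linking_domains). Tolerates the singular 'domain'."""
    m = re.match(r"([\d,]+) from ([\d,]+) domains?", cell)
    if not m:
        raise ValueError(f"unexpected cell format: {cell!r}")
    links, domains = (int(g.replace(",", "")) for g in m.groups())
    return links, domains

print(parse_inbound("36,082 from 3,608 domains"))  # (36082, 3608)
```

Once parsed, the figures can be compared across institutions, for example to compute links per linking domain.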

A Tale of Two Domains

Whilst carrying out this survey it was noticed, when checking inconsistencies, that different results were obtained when using variants of the domain name and of the institutional entry point. The following table lists known domain name variants. Note that the main domain was taken from the address given on the Russell Group web site.

| Institution | Main Domain | Variant |
|---|---|---|
| University of Birmingham | www.birmingham.ac.uk | www.bham.ac.uk (Automatic redirect) |
| University of Bristol | www.bristol.ac.uk | www.bris.ac.uk |
| University of Cambridge | www.cam.ac.uk | Page at www.cambridge.ac.uk provides notice giving official domain name |
| University of Edinburgh | www.ed.ac.uk | www.edinburgh.ac.uk |
| University of Glasgow | www.gla.ac.uk | www.glasgow.ac.uk (Automatic redirect) |
| Imperial College | www.imperial.ac.uk | www.ic.ac.uk |
| University of Liverpool | www.liv.ac.uk | www.liverpool.ac.uk |
| University of Manchester | www.manchester.ac.uk | www.man.ac.uk (Automatic redirect) |
| Newcastle University | www.ncl.ac.uk | www.newcastle.ac.uk |
| University of Oxford | www.ox.ac.uk | www.oxford.ac.uk (Automatic redirect) |
| University of Southampton | www.soton.ac.uk | www.southampton.ac.uk |

It should be noted that although the table gives the institutional domain as listed on the Russell Group web site, the analyses were carried out using the www-prefixed form of the official domain. In two cases the institutional entry point does not use the well-established www. prefix: www3.imperial.ac.uk and www2.warwick.ac.uk. However, for the analyses the www. prefix was used, as it was felt that this would be the form used by the majority of users.
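The www/www2/www3 variation can be normalised mechanically when comparing entry points. The helper below is a minimal sketch of my own; a fuller check would also follow HTTP redirects, which is omitted here:

```python
import re

def canonical_host(host: str) -> str:
    """Strip a leading 'www', 'www2', 'www3', ... label so that variant
    entry points can be compared on the registered domain."""
    return re.sub(r"^www\d*\.", "", host)

# The two non-standard entry points noted above normalise as expected:
print(canonical_host("www3.imperial.ac.uk"))  # imperial.ac.uk
print(canonical_host("www2.warwick.ac.uk"))   # warwick.ac.uk
```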

Discussion

The Blekko web site contains a page which summarises its core principles, which include:

Quality vs Quantity: blekko biases towards quality sites. We do not attempt to gather all of the world’s information. We purposefully bias our index away from sites with low quality content.

Source based, not link based: blekko does NOT rely solely on link based authority. Too many people engage in efforts to game search by linking for purposes other than navigation. blekko relies on human beings and their judgement of the authority of sources to dictate search results.

Open and Transparent: blekko makes freely available to its users all of the data that provide the underpinning of our search results. This includes web data, ranking information and the curation efforts of our users.

Blekko would appear to have a role to play in providing universities (which are unlikely to use ‘black hat’ SEO techniques such as use of link farms) with a better understanding of their visibility to search engines. However, despite the commitment to openness and transparency, the Blekko web site does not appear to provide details of their ranking algorithms.

Despite the current difficulties in interpreting the host rank figures in the above table, the information is provided as a snapshot, which may prove useful if Blekko subsequently publishes details of its algorithm. Of perhaps greater interest is the site pages column which, it would seem, contains the number of pages which have been indexed.

There does appear to be significant diversity in the size of the web sites, ranging from 4,727 pages (for the University of Liverpool) to 507,319 (for UCL); apart from these two outliers, the size of the other Russell Group university web sites ranges from 8,406 to 21,018 pages. These figures seem to suggest differing patterns of use for institutional web sites, ranging from a small, managed provision of focussed resources through to a more devolved approach. Although the managed approach would appear to have benefits, it raises the question of where resources and services which are felt to be useful to an individual researcher, academic or department should be hosted, and whether policies which act as barriers to the creation of resources on an institutional service will result in content being hosted on cloud services.
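The range can be recomputed directly from the site pages column of the table above, dropping the two extreme values (Liverpool and UCL):

```python
# Site-page counts taken from the Blekko survey table, in table order
# (Birmingham through Warwick).
site_pages = [8406, 21018, 16041, 17213, 11544, 12243, 12984, 14263,
              16617, 4727, 12243, 13292, 16041, 16041, 11959, 12534,
              13449, 12338, 507319, 9679]

# Drop the single smallest (Liverpool, 4,727) and largest (UCL, 507,319)
# values, then look at the range of the remaining 18 sites.
trimmed = sorted(site_pages)[1:-1]
print(min(site_pages), max(site_pages))  # 4727 507319
print(min(trimmed), max(trimmed))        # 8406 21018
```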

Further interpretation of these findings will probably require an understanding of the institutional web environment.  However one aspect of the survey which does not require an understanding of the local context is the numbers of links from external services to the institutional web site. Links from authoritative web sites to a web site can influence the discoverability of the resources. A more detailed survey of such links will be published shortly.


Paradata: As described in a post on Paradata for Online Surveys, it is important to document the tools and methodologies used when gathering evidence based on use of online tools, in order that findings are reproducible. In addition, possible limitations in the tools, or in the way the tools are used, should be documented.

This survey was carried out using the Blekko web-based service over the first two weeks of January 2012; the findings were rechecked on 25 January 2012 and changes recorded. Links are provided to the results provided by the service. However, in order to view the findings you will need to be signed in to the service (which is free to join).

The findings for the University of Birmingham had changed significantly over a period of three weeks. It is not clear whether the variation was due to changes in the University of Birmingham web site, an artefact of the multiple domain names and entry point URLs for the University of Birmingham home page (http://www.birmingham.ac.uk/, http://www.birmingham.ac.uk/index.aspx, http://www.bham.ac.uk/ and http://www.bham.ac.uk/index.aspx all resolve to the same page) or changes at Blekko (e.g. reindexing of the web site).

Note that the form of the domain name given on the Russell Group University Web site has been used. This is normally based on the full name, with the exceptions of Cambridge (which uses “cam.ac.uk”), Edinburgh (“ed.ac.uk”) and Glasgow (“gla.ac.uk”).

The results for the host rank are based on an undocumented algorithm. The information on the size of the site is dependent on the number of pages which are harvested.

Posted in Evidence | 3 Comments »

Further Reflections on My Predictions for 2012

Posted by Brian Kelly (UK Web Focus) on 23 January 2012

“Massively Scalable Sensemaking Analytics”

A recent post outlined My Predictions for 2012. However, rather than just posting idle speculation on technological developments which I feel will have an impact across the higher education sector this year, I also pointed out the need, at a later date, to be able to identify ways of gauging whether the predictions were accurate.

This suggestion followed on from a recent post in which I described “The Need for an Evidence-based Approach to Demonstrating Value“. This post was highlighted by Stephen Downes, who introduced me to “people like Rudolf Carnap [who] used to talk about ‘the requirement of total evidence’ and the ‘principle of indifference’” and went on to add that “These are as valid today as when they wrote it“. These two posts inspired further discussion by Keith Lyons in a post on Probability and Sensemaking on the Clyde Street blog, which cited a post on massively scalable sensemaking analytics with links to other posts in this area, including:

Sensemaking Systems Must be Expert Counting Systems; Data Finds Data; Context Accumulation; Sequence Neutrality; and Information Colocation, through to new techniques to harness the Big Data/New Physics phenomenon.

This provides another take on my suggestion of the importance of Collective Intelligence. I’m therefore pleased to have been alerted to further relevant posts in this area. Indeed I can repeat the final two paragraphs of Keith’s post, as they are equally applicable to me:

It is fascinating that two early morning links can open up such a rich vein of discovery. At the moment I am particularly interested in how records can be used to inform decision making and what constitutes necessary and sufficient evidence to transform performance.

I have a lot of New Year reading to do!

But in addition to the analysis of big data in order to help make sense of future trends, it can also be useful to explore what other experts are predicting.

16 Predictions for Mobile in 2012

In my list of predictions I made uncontroversial comments regarding the growth in ownership of tablet computers. My interest was  not in tablet computers per se but in the implications of increased opportunities for content creation and curation, as well as content consumption which such devices would seem to provide.

On the GigaOm blog Kevin C. Tofel provides his more detailed predictions on development in mobile computing. Here are my thoughts on the implications of some of Kevin’s predictions:

Wearable computing becomes the next mobile frontier: Even more opportunities for content consumption, creation and curation. And, as explained in a post which described how “It Ain’t What You Do, It’s The Fact That You Did It” favouriting a tweet or +1ing a post can be useful and valuable activities.

A jump in wireless home broadband adoption: More opportunities for online access in the home environment.

Windows Phone usage grows, but slower than expected: There will continue to be a diversity of devices, operating systems and applications, so it will be important to provide device- and application-independent services.

Windows tablets in 2012 will sell like Android tablets did in 2011: See above.

Research In Motion will no longer exist as we know it today: Some platforms will fail, so it can help to minimise the risks by minimising developments of platform-specific services.

Nokia uses Symbian as a backup plan (but doesn’t call it Symbian): See above.

The patent wars worsen: Sigh :-( The W3C will seek to avoid standards which are encumbered by patents, but the devices themselves, their networking connectivity, etc. may be covered by patents which, as we have seen recently when a Dutch court blocked Galaxy phones in parts of Europe (reported by ZDNet UK), can lead to devices not being allowed to be sold. Best avoid developing device-specific services, then!

Apple’s next iPhone will be the iPhone 4GS: When will 4G arrive in the UK, I wonder?

There will be an iPad Pro available in 2012: Ooh, so we should develop apps for the iPad, should we?

Android’s momentum will continue thanks to Android 4.0: Oh, and the Android?

Hybrid apps with HTML5 will be the norm: Maybe not!

Predictions from the BBC

The BBC News blog has a post entitled Mind-reading, tablets and TV are tech picks for 2012 in which a panel of experts “look ahead to the technologies that will change the way we live and work in 2012 and beyond“.

My predictions of the continuing growth in importance of tablet computers and social networks, including Facebook, are echoed by Robert Scoble, who points out “in terms of the businesses I follow – start-ups – they’re all building into Facebook’s Open Graph technology” and adds “I think business is going to have to have a Facebook Open Graph strategy next year. Even if we’re ignoring it because it’s too freaky on the privacy side, they’re going to have to at least consider it.“

I suspect that universities will be amongst those businesses exploring how to make greater use of Facebook. As Scoble pointed out, “I visited Yahoo recently and they said they’re seeing 600% more visits from Facebook because of it“. With an increasingly competitive market place across higher education, I suspect we will see even greater use being made of Facebook during 2012 and, as mentioned above, there will be a need to consider “the requirement of total evidence” and the “principle of indifference“.

But in addition to Facebook as an application environment, Scoble’s comment reminded me of the importance of Facebook’s Open Graph Protocol.  I wonder whether it will be possible to gather evidence of Facebook’s success by monitoring the growth of the social graph rather than simply the numbers of Facebook users.

The continuing importance of social networks was also the key message given by Tim Barker of Salesforce.com. Barker felt that:

The big one is the social enterprise revolution.

It’s the idea that you can see the power shifting from companies to consumers. There are more than 1.7 billion people on social networks now; Facebook is the size the entire internet was in 2004.

It’s really defining the way that consumers and customers interact with companies and what they expect from them.

Such issues are equally relevant for the university sector, in part because the increasing costs of going to university will mean that future intakes of students will regard themselves as customers who are paying a lot of money for the ‘product’ they are buying. In addition, something that both staff and students have in common is that we are all consumers when we leave our ivory towers and go into town for the January sales!

We may not like such terminology and be concerned about how the future seems to be arriving, but remember “the requirement of total evidence” and the “principle of indifference“.  On the other hand, perhaps we shouldn’t be so fatalistic about the future.  But if we do wish to build an alternative reality we will still need to gather the evidence.

Posted in Facebook, jiscobs, Social Web | Leave a Comment »

Links to Social Media Sites on Russell Group University Home Pages

Posted by Brian Kelly (UK Web Focus) on 18 January 2012

Providing a Benchmark of University Use Of Social Web Services

In a recent post in which I gave My Predictions for 2012 I predicted that “Social networking services will continue to grow in importance across the higher education sector“. But how will we be able to assess the accuracy of that prediction? One approach is to see if there are significant changes in the number of links to social media services from institutional home pages.

The following survey provides a summary of links to social media services which are hosted on the institutional entry point for the 20 Russell Group universities.

Update: The information published about Imperial College was incorrect. This has been updated.

| Ref No. | Institution | Services | Type of Link |
|---|---|---|---|
| 1 | Birmingham | None | |
| 2 | Bristol | None | |
| 3 | Cambridge | [iPhone] – [iTunesU] – [YouTube] – [Facebook] – [Twitter] – [Flickr] | Direct link to institutional presence on social media service. |
| 4 | Cardiff | None | |
| 5 | Edinburgh | None | |
| 6 | Glasgow | [Generic bookmarks] – [WordPress] – [Facebook] – [Twitter] – [email] | Link to visitor’s own presence on social media service. |
| 7 | Imperial College | [Delicious] – [Twitter] – [Digg] – [Stumble] – [Facebook] | Link to visitor’s own presence on social media service. |
| 8 | King’s College London | [Facebook] – [Twitter] – [YouTube] – [Favourites] – [Digg] – [Delicious] – [RSS] | See sidebar. |
| 9 | Leeds | [Facebook] – [Twitter] | |
| 10 | Liverpool | None | |
| 11 | LSE | [iTunesU] – [YouTube] – [Twitter] – [Facebook] – [Delicious] – [RSS] – [Flickr] | Link to page on institutional web site providing information about institutional use of social media services. |
| 12 | Manchester | [Facebook] – [Twitter] – [Google Maps] | Direct link to institutional presence on social media service. |
| 13 | Newcastle | [Facebook] – [Twitter] – [YouTube] – [iTunesU] | Link to page on institutional web site providing information about institutional use of social media services. |
| 14 | Nottingham | [Facebook] – [Twitter] – [YouTube] – [Flickr] – [LinkedIn] – [FourSquare] | Direct link to institutional presence on social media service. |
| 15 | Oxford | None | |
| 16 | Queen’s University Belfast | [Facebook] – [Twitter] | Direct link to institutional presence on social media service. |
| 17 | Sheffield | [Facebook] – [Twitter] – [YouTube] | Direct link to institutional presence on social media service. |
| 18 | Southampton | [Facebook] – [Twitter] – [YouTube] – [iTunesU] | Direct link to institutional presence on social media service. |
| 19 | UCL | [Twitter] – [YouTube] – [Facebook] – [Soundcloud] – [Flickr] – [iTunesU] | Direct link to institutional presence on social media service. |
| 20 | Warwick | [Facebook] – [YouTube] – [Twitter] – [iTunesU] | Direct link to institutional presence on social media service. |
| | Total | 64 links | |

A summary of the number of occurrences of the services is given below.

| Service | Occurrences | Note |
|---|---|---|
| Facebook | 15 | Links to institutional Facebook page. |
| Twitter | 15 | Links to institutional Twitter page. |
| YouTube | 9 | Links to institutional YouTube page. |
| iTunesU | 6 | Links to institutional iTunes page. |
| Flickr | 4 | Links to institutional Flickr page. |
| Delicious | 3 | (1) Provides access to links provided by the Careers Service and (2) allows page to be bookmarked. |
| Soundcloud | 1 | Links to institutional SoundCloud page. |
| LinkedIn | 1 | Links to institutional LinkedIn page. |
| FourSquare | 1 | Links to institutional FourSquare geo-location service. |
| Digg | 2 | Allows site to be bookmarked. |
| WordPress | 1 | Enables WordPress users to create a post with a link to the University home page. |
| RSS | 1 | Purpose of this icon is not defined. |
| Stumble | 1 | Allows site to be bookmarked. |
| iPhone | 1 | Link to iPhone app about the University. |
| Google Maps | 1 | Link to map of the University. |
| Generic Bookmarks | 1 | Link to bookmarks providing access to several social media services. |
| Email | 1 | Provides an email facility. |
| Total | 64 | |

The figures shown reflect the correction to the Imperial College entry noted above.
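As a quick sanity check, the occurrence counts in the summary (using the figures as corrected after the Imperial College update) do sum to the stated total of 64:

```python
# Occurrence counts from the summary table, post-correction figures.
occurrences = {
    "Facebook": 15, "Twitter": 15, "YouTube": 9, "iTunesU": 6,
    "Flickr": 4, "Delicious": 3, "Soundcloud": 1, "LinkedIn": 1,
    "FourSquare": 1, "Digg": 2, "WordPress": 1, "RSS": 1,
    "Stumble": 1, "iPhone": 1, "Google Maps": 1,
    "Generic Bookmarks": 1, "Email": 1,
}
print(sum(occurrences.values()))  # 64
```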

Discussion

If either all of the Russell Group university home pages had links to the same social media services, or none did, this survey would be uninteresting. However, since about 30% of the institutions provide no such links, it seems that the value of having such links on a high-profile page is not universally agreed.

For those institutions which do provide such links we can see that Facebook and Twitter are the most popular services, followed by social media sharing services. A number of services, including LinkedIn and FourSquare, have links from only a single institution.

It was also interesting to observe that although most institutions provided links to their institutional presence on social media services, a number of institutions used such links to allow visitors to provide links to the institution from the visitor’s own account, so that the institutional home page could be bookmarked or commented on.

Finally we can also observe how institutions label access to these services, using terms such as “Join us“, “Follow us“, “Find us on …“ and “xxx in the Social Media“.

From a user perspective we should also note that the different purposes provided by these links may be confusing. The norm is for links to provide read access to an institutional presence on a social media service. However in a number of cases the links are intended to allow users with accounts on particular services to bookmark or cite the institutional page on the service. Although this usage may be appropriate across a group of pages with the same purposes (for example, blog posts) this approach may cause confusion for a visitor who is either unfamiliar with the service or who expects the links to provide read access to the service.

Looking to the Future

This post has sought to identify patterns of usage of links to social media services on Russell Group university home pages and highlighted areas in which it may be beneficial for institutions to reappraise their uses of such services. However the main purpose of this survey was to provide a benchmark to help identify future trends in institutional use of social media.

Use of institutional home pages for such benchmarking can be beneficial since changes to institutional home pages will probably require approval at a senior level, and will therefore be less likely to reflect short term technological trends.

It will therefore be interesting at the end of the year to observe whether:

  • The current popular social networking services continue to remain popular.
  • Links to new social media services are provided on institutional home pages.
  • The ways in which the links to social media services are labelled and the functionality they provide changes.

I’d welcome comments on patterns across the wider University sector.

Posted in Evidence, Social Web | 8 Comments »

Should Higher Education Welcome Frictionless Sharing?

Posted by Brian Kelly (UK Web Focus) on 16 January 2012

Frictionless Sharing and The Guardian Facebook App

I recently described developments which suggest the potential for Facebook and Twitter as Infrastructure for Dissemination of Research Papers (and More). The post pointed out that links to Facebook and Twitter seem to be becoming more embedded within services, such as bibliographic services, in order to make it easier for researchers to share papers of interest across their professional network. Recently Martin Belam (@currybet) tweeted ‘“Frictionless sharing – exploring the changes to Facebook” – a piece I’ve written for FUMSI magazine http://bit.ly/z930Wc‘. His article explores other developments which can make sharing of resources even easier than clicking on a Like or Tweet button. Martin is the Lead User Experience & Information Architect for the Guardian Web site and blogs about UX/IA, digital media and journalism on currybet.net. He is also a contributing editor for the FUMSI online magazine. The opening paragraph of his article, which is aimed at information professionals, suggests that he feels that Facebook can bring benefits to this sector:

As 2012 begins, Facebook remains one of the amazing growth stories of the internet. Some argue that an eventual flotation will mark the high tide of a second internet bubble, whilst others are in awe of the fact that a website that started in a college dorm has grown to have nearly one billion members.

The main focus of his article is the recent technical developments which make sharing of resources transparent:

One of the biggest changes for content providers is “frictionless sharing”. In the past, users had to actively share content by pressing a “Like” button on a website, or “Like”-ing a Facebook page, or including a URL in their status update. Facebook is changing this. They have opened up what they call their “Open Graph”, which allows apps and publishers to automatically insert “actions” into a user’s Facebook timeline. And, in plain English, that means that for some sites or apps, simply listening to a song or reading an article is enough to see it posted to your Facebook activity stream without you lifting so much as a mouse-finger.

At the time of writing only a handful of applications have been launched which take advantage of the feature, including those by Yahoo!, Spotify, the Guardian, Independent and the Washington Post’s “Social Reader” app. That is sure to change in 2012, but the roll-out of further apps seems tied into Facebook launching “Timeline” – a new way for users to view their profile pages.

As an example of what is meant by frictionless sharing, a screenshot of my Facebook news updates showing the Guardian articles I read using the Guardian’s Facebook app is shown. As can be seen, the articles I read included ones on “Sherlock: BBC will not remove nude scenes” and “A Thatcher state funeral would be bound to lead to protests“. Note that the links I have provided go directly to the Guardian Web site, so you can follow them in the knowledge that your interest in nudity and right wing politicians will not be disclosed to your liberal colleagues :-)

This provides an interesting example of the risks of sharing the articles you read, without having to manually select an article of interest and consciously share it, whether on Twitter, Facebook, Delicious or whatever, across your network. And this is a reason why some people, including people in my network whose opinions I respect, have concerns over this development. On the other hand, the Guardian Facebook app does seem to be popular. It seems I was not alone in reading the article on how “Footage of nude dominatrix shown before 9pm watershed have prompted more than 100 complaints” and the hypocrisy of the Daily Mail in expressing their outrage whilst including the ‘shocking’ images in their web site.

But the 8,995 people who viewed that article shortly after it had been published were outnumbered by the 11,686 people who read the article on how Pale octopus, hairy-chested yeti crab and other new species found (warning: the first link is to the Guardian Facebook app).

So how popular is the Guardian Facebook app? A post which suggested that We Can’t Ignore Facebook described how the Guardian Facebook app was launched on 22 September 2011. Statistics for a number of the Guardian sections collated on 14 January 2012, just over three months after the app’s launch, are given below.

Section      Like this   Talking about this
Main           242,326               13,593
Society         13,451                  862
Technology      16,662                1,053
Data             3,486                  100
Football        14,820                  888
Sport              905                   68
Culture         38,261                3,699

These figures seem to suggest the popularity of the Guardian Facebook app although, as ever, care must be taken in interpreting figures. In particular I do not know whether these figures may include use of a pre-frictionless sharing app. In addition this single set of figures doesn’t provide any comparisons with views of the Guardian Web site, nor does it show trends.

But returning to the recent FUMSI article, Martin Belam provided some suggestions aimed at information professionals:

Think again about Facebook metadata
Facebook’s Open Graph is a metadata standard for marking up your web content. It sits quietly in the HEAD of your HTML, and replicates many fields that you might be familiar with from metadata standards like Dublin Core. The fact that anyone can access it via a web request allows Facebook to say the standard is “open”, although they tightly control the spec themselves. To take advantage of the new frictionless sharing, even if you don’t build an app yourself, making that metadata available is going to be a requirement to have your content display properly within the many social reading experiences that are sure to be developed.
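Belam’s point about Open Graph markup sitting “quietly in the HEAD of your HTML” can be made concrete. The sketch below reads `og:` properties out of a page using only the Python standard library; the page markup and property values are invented for illustration:

```python
from html.parser import HTMLParser

class OpenGraphParser(HTMLParser):
    """Collect Open Graph properties from <meta> tags in a page's HEAD."""
    def __init__(self):
        super().__init__()
        self.properties = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        prop = attrs.get("property", "")
        if prop.startswith("og:"):
            self.properties[prop] = attrs.get("content", "")

# Hypothetical page markup of the sort a social reading app would consume
html = """<html><head>
<meta property="og:title" content="Example article" />
<meta property="og:type" content="article" />
<meta property="og:url" content="http://example.org/article" />
</head><body></body></html>"""

parser = OpenGraphParser()
parser.feed(html)
print(parser.properties["og:title"])  # → Example article
```

Anyone can fetch a page and extract these properties in this way, which is the sense in which the standard is “open” even though Facebook controls the spec.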

Think again about audit trails
“Frictionless sharing” changes the nature of our digital audit trails on Facebook. From a competitive intelligence point of view, it is great news, because potentially seeing what someone from a particular company is reading about and watching can give you clues as to where their work may be heading. It also means being careful not to leave audit trails yourself if you want the research you are doing to be kept “under the radar”.

Discussion

The ‘Frictionless Sharing’ Term

Martin Belam’s article generated some interesting Twitter debate on the day it was published. I spotted the initial tweet from @currybet and shortly afterwards read @ppetej’s comment that:

Much as I loathe the whole ghastly “frictionless sharing” thing, some useful thoughts/pointers by @currybet tinyurl.com/6rvnqx7

and @mweller’s response:

@ppetej frictionless sharing is interesting I think for academics – it certainly shaped the way I wrote my last book

I curated the discussion on Storify since I felt it raised several interesting issues, in particular in taking the discussion about frictionless sharing beyond one particular instance (Facebook, which tends to focus concerns on other aspects of Facebook’s activities) into the more general issues of frictionless sharing in an educational context. Indeed, as Pete Johnston pointed out, a post entitled The cost of sharing, published on Martin Weller’s The Ed Techie blog back in 2008, made the point that “The ‘cost’ of sharing has collapsed, but institutions don’t know this“. Martin went on to point out that:

Clay Shirky argues that the cost of organisation has disappeared, and I believe this is because sharing is easy, frictionless. If I come across something I share it via Google shared items, Twitter, my blog, etc. If I want to share I stick it up on Slideshare, my blog, YouTube. There is a small cost in terms of effort to me to do the sharing, and zero cost in anyone wanting to know what I share. Sharing is just an RSS feed away.

Hmm, so back in November 2008 Martin Weller stated that “sharing is easy, frictionless“. Can anyone find an earlier reference to use of this term in this context? In a post on Sharing Learning Resources: shifting perspectives on process and product Amber Thomas used the term to describe activities taking place in the 1990s: “For example, the late 90s to early 2000s emphasised the benefits of collaborative resource development. Later on, some advocates of Open Educational Resources (OER) brought to the fore the concept of content as by-product, exhaust, frictionless sharing” but was not using the term at the time. I wonder whether the Sharing article in Wikipedia should include a reference to ‘frictionless sharing’ and whether Martin’s blog post would be an appropriate reference for an early citing of the term in the context of sharing resources on social networking services.

Whenever the term originated (on Twitter Martin Weller suggested that “around the time of the dot com bubble ppl talked about the frictionless economy“), by December 2011 ReadWriteWeb had named it one of its Top Trends of 2011: Frictionless Sharing. That article illustrated frictionless sharing primarily through what Facebook is doing, but also through the sharing of music and news items.

But what of the potential for frictionless sharing in higher education?

Martin Weller feels that such approaches are already becoming embedded in some of his working practices, in particular: “frictionless sharing is interesting I think for academics – it certainly shaped the way I wrote my last book“. In My Predictions for 2012 I suggested that we will see an increase in the amount and types of ‘open practices’ including not only the well-established areas of open access and open educational resources, but also open approaches to being recorded and videoed. But such areas are still related to the creation of content. Frictionless sharing is interesting as it relates to openness in a more passive context: openness about what you may be reading (and, as well as Facebook, apps such as GoodReads allow one to share information on what you are reading).

Tony Hirst explored these ideas in a post published in October 2010 entitled Could Librarians Be Influential Friends? And Who Owns Your Search Persona? in which he asked: “if librarians become Facebook friends of their patrons, and start “Liking” high quality resources they find on the web, might they start influencing the results that are presented to their patrons on particular searches?“. Tony referred to this post last week when he revisited the potential role of librarians in supporting sharing of resources in a post in which he asked Invisible Library Support – Now You Can’t Afford Not to be Social? His comment that:

The idea here was that you could start to make invisible frictionless recommendations by influencing the search engine results returned to your patrons (the results aren’t invisible because your profile picture may appear by the result showing that you recommend it. They’re frictionless in the sense that having made the original recommendation, you no longer have to do any work in trying to bring it to the attention of your patron – the search engines take care of that for you (okay, I know that’s a simplistic view;-). [Hmm.. how about referring to it as recommendation mode support?]

was particularly interesting in that Tony seems to have changed from using ‘invisible’ to ‘frictionless’ during the course of writing the post.

The Challenges

In some respects pragmatic advice regarding privacy issues and uncertainties as to how such data could subsequently be used would suggest that you should avoid the risks associated with frictionless sharing. Indeed, I made this point in a post in which I asked Is Smartr Getting Smarter or Am I Getting Dumber? following the Smartr app’s unannounced release of frictionless sharing for reading Twitter links read by members of one’s Smartr network.

But as the evidence of the Guardian app seems to suggest, people may be willing to share their interests in a passive fashion, and benefit from ways in which members of their networks reciprocate.

I guess the questions to be answered are:

  • What other types of frictionless sharing are there?
  • What benefits can frictionless sharing provide?
  • What are the risks in frictionless sharing?
  • Will the benefits outweigh the risks?

But before we can start to discuss these questions we perhaps need to define the terms. So what is ‘frictionless sharing‘? At present Google seems to suggest that the term relates primarily to a recent Facebook development, but I’m interested in the generic meaning of the term. And perhaps we can use the Wikipedia entry for Frictionless sharing to agree on a definition.

Posted in Facebook, jiscobs, Social Networking | 12 Comments »

The Mobile-Only App Anti-Pattern: “You Can’t Serve Two Masters”

Posted by Brian Kelly (UK Web Focus) on 12 January 2012

We don’t even have a website

Will your app be available only from a mobile device?

In the anti-pattern Wikipedia article we learn that “In software engineering, an anti-pattern (or antipattern) is a pattern that may be commonly used but is ineffective and/or counterproductive in practice”. Reading the GigaOM article on “Whip myself–and Path–into fighting shape“, which is the ninth in a series of 12 tech leaders’ resolutions for 2012, I fear that we may be seeing the development of a mobile-only app anti-pattern.

In the article David Morin, co-founder of the Path social media sharing service, describes how:

I think 2012 will truly be the year of mobile Internet” and goes on to add that “I mean, it’s so big. I get the GigaOM Pro reports on mobile, and I see these numbers: The amount of mobile display inventory, the fact that Apple’s paid out $2 billion to app developers, there are something like one million Android phones being activated daily. It goes on and on. The industry as a whole hasn’t come around to realizing how big mobile is just yet. But I think this will be the year where we focus on building companies that solely address the post-PC era.

I’d agree with that analysis.  My concern, though, is the author’s vision for Path (and Flipboard): “I think Path and Flipboard and a few others are leading the way. We don’t even have a website.”  He goes on to expand on this:

Products you build for the Web, which people access with a big screen and a keyboard and mouse while sitting at a desk, need to be completely different than what you build for a mobile device. You can’t just hire one mobile developer and take the interface you’ve built on the web and cram it onto a mobile device.

And then concludes:

It makes me think of something that Steve Jobs said: You can’t serve two masters. Well, the Bible said it first, but I think it applies to product design as well. You can’t serve both the Web and mobile with the same product. You have to choose.

It’s actually not quite true that “We don’t even have a website“. There is a Web site about the Path app, as illustrated, which has a handful of pages.  However there isn’t a Web interface for users of the app – so if you want to use the “smart journal … to share life with the one you love” you’ll have to install the app on your iPhone  or Android device (although you can, as I have done, also use an iPod Touch).

Beyond the Mobile Web vs Mobile App Debate

Much of the recent debate has focussed on whether one should develop for the Mobile Web which, through use of appropriate style sheets and other techniques, aims to ensure that the same content can be provided to both desktop computers and mobile devices, or develop a Mobile App, which may exploit specific features of particular mobile devices and be more easily marketed and made available through mobile vendors’ app stores and market places.

Source: Worklight

A Google search for “Mobile Web vs Mobile App Debate” highlights several articles, including one which explains how Mobile Web App vs. Native App? It’s Complicated. This article recommends a “must-read article” on The fight gets technical: mobile apps vs. mobile sites which includes the accompanying image, graphically depicting some of the pros and cons of the different approaches to mobile development.

In the JISC CETIS briefing paper on Mobile Web Apps: A Briefing Mark Power makes the case for a universal approach to development which will ensure that access can be provided to both desktop and mobile users: “A viable, alternative approach is developing Mobile Apps using open web technologies and standards; technologies that continue to improve performance and offer more powerful functionality – as is now being talked about quite a bit on the topic of HTML5“.

There is, however, a recognition that mobile app development may provide benefits for users of the supported mobile devices. However the service provider is likely to find such development and subsequent maintenance costly and time-consuming and, at a time when funding is being cut, it would appear sensible to develop a platform- and application-independent approach by making use of W3C standards, such as HTML5, CSS and the related Open Web Platform standards.

However the anti-pattern described above takes another approach to the issue of minimising development and maintenance costs: develop for the mobile device only and ignore the Web browser and the desktop computer!

I find this a worrying approach. However, as I described above, I have installed the Path app on two of my mobile devices. So rather than writing a post which simply reiterates the benefits of “open standards”, “device independence” and “universal access” I think there’s a need to understand the pros and cons of the approach taken by David Morin and welcome the clear and unambiguous statement he has made on why he feels this approach is best for his company:

The one big lesson I’ve learned from the past year is that every entrepreneur goes through really hard times — periods of time where people don’t believe in what you’re doing, or the numbers don’t look good. Entrepreneurs always have a vision: You wouldn’t have started a company if you didn’t. But the first implementation may not be getting you all the way there.

Find the users who see your vision and talk to them. Find out why they love the product and what they’re trying to do with it. Often, they’re trying to do something that you haven’t designed it for. You need to unlock that potential. Take away the things that don’t matter, and unlock the stuff that does — remove the complexity. That’s what will make it catch on with everyone.

I do wonder whether we will see institutions developing their own apps across a range of areas and whether we will find that the apps will not provide functionality for those without the appropriate mobile device. It would be useful to monitor such developments, particularly if the anti-pattern I have described turns out to be a successful pattern for mobile development.

As a footnote to this post I should mention that The State of the Mobile Web in Higher Education (2012) survey is currently open. The results of last year’s survey are available on the collegewebeditor.com blog. It will be interesting to see how institutional approaches to the mobile web have developed over the past year – and if institutions are considering developing mobile-only applications.

Acknowledgements: Thanks to James Burke (@deburca) for his tweet which alerted me to this article.

Posted in jiscobs, Mobile | 1 Comment »

Learning Is Performance; Performance Can, And Will, Be Analysed

Posted by Brian Kelly (UK Web Focus) on 11 January 2012

Learning is Performance

“Learning is performance“, Steve Wheeler tells us in the opening sentence of his first blog post of the year. Steve goes on to describe how:

Some of our earliest performances, particularly in formal learning contexts (school, college, university), are under the scrutiny of subject experts who award grades, and ultimately, some form of accreditation. This kind of performance is commonly referred to as formal assessment. Sadly, it is often the case that the measure of performance is not fit for purpose, as we have all witnessed recently in the universal failure of standardised testing, or the exam paper fiascos that continually assail our senses via the media.

The implication may be that since sometimes (is there evidence that the term ‘often’ should be used in this context?) a “measure of performance is not fit for purpose” we should avoid assessment. However, as Steve goes on to point out:

[Assessment] is important for the community, because the community needs skilled and knowledgeable members, and some form of check is required to ensure that the skill or knowledge is up to date, safe to use, and is relevant for the needs of society. If we get assessment wrong, we fail the student, and ultimately we fail society.

The JISC CETIS service has had a long-standing involvement in exploring issues related to assessment. But Steve Wheeler’s comment that “Learning is performance” has reminded me that it may be beneficial to explore approaches to assessment beyond the tools, projects and resources which CETIS have documented on their web site.

Sporting Performance

One lunch time a few months ago I met Doctor Ken Bray, a Senior Visiting Fellow in the Mechanical Engineering Department at the University of Bath.

Ken’s work has been featured in a couple of press releases published by the University of Bath. In January 2009 the focus was on work related to the physics of darts:

As the British Darts Organisation’s (BDO) Lakeside World Professional Darts Championships gets into full swing this week, new research from the University of Bath shows that the secret of true darts skills is all in the maths.

Visiting fellow Dr Ken Bray’s calculations for the Get On campaign shows how darts stars taking to the oche this week will have to master geometry, physics and algebra to win their place in the sport’s hall of fame.

However Ken’s main interest is in the science of football. Ken is author of How to Score: Science and the Beautiful Game which was published in 2006. His interests in this area have continued and were featured on the This Is Bath Web site in March 2011:

​They may not realise it, but the best footballers are actually skilled mathematicians, according to an expert from Bath.

University of Bath sports scientist Dr Ken Bray has analysed hours of football footage to conclude that as much as 30 per cent of a player’s technique is down to an intuitive understanding of maths and science.

A criticism which could be made of a scientific study of sports is that “We all use mathematical principles – we’d fall over when walking if we didn’t!” And it would clearly be wrong to suggest that David Beckham’s success in taking free kicks is due to a conscious analysis of the variables (the distance, the weather conditions, the angles, …) and the implementation of the appropriate formula which will ensure that the ball bends around the defensive wall, out of the reach of the goalkeeper, to ensure that England reach the final stages of the World Cup, as Beckham famously did with his 30-yard free kick, three minutes into injury time of the game against Greece in 2001.

However, although footballers and other sports stars may have an “intuitive understanding of maths and science“, those involved in coaching nowadays do have an understanding of the maths and physics associated with sporting success and are developing measurement techniques which can provide ways of helping to ensure success.

Some approaches will be related to the individual sportsman, for example their diet and general fitness. However others will relate directly to their sporting performance and the performance of the opposition. This is now a major industry with companies such as Prozone analysing sporting performance and selling their methodologies, tools and data to interested parties, including sporting clubs, sportsmen and women, coaches, agents, newspapers and TV companies and sports fans.

As described on the Prozone web site the company provides:

Post match analysis: Analyse every aspect of team and player performance via a range of interactive platforms.

Opposition Analysis: Prozone can provide pre-match performance information on your forthcoming opponents. Commonly known as ‘technical scouting’ this allows you to identify the strengths and weaknesses of upcoming teams and individual players.

Through interactive coaching tools, users are able to gain a unique insight into the performance of upcoming opposition teams. These can help to supplement the knowledge of your scouts and enable you to better prepare for upcoming matches. Scouting analysis can be delivered using a range of video clips, in-depth data and multi-layered graphics and can be accessed online or sent direct to the training ground.

Live Performance Analysis: By offering ‘real time’ information about the game, our Live Analysis service gives management and coaching staff an immediate insight into the performance of players on the pitch.

Enhanced Player Trading: An advanced online solution allowing clubs to make objective and better informed decisions on player trading through the use of accurate performance data.

I wonder to what extent these approaches may have some relevance to the higher education sector? Back in October, in a post on Learning Analytics and New Scholarship: Now on the Technology Horizon, I summarised Dave Pattern’s talk at the ILI 2011 conference, which described how “The project looked at the final degree classification of over 33,000 undergraduates, in particular the honours degree result they achieved and the library usage of each student” and explored the hypothesis that “There is a statistically significant correlation across a number of universities between library activity data and student attainment“. Hmm, does this have parallels with analyses of Arsenal’s defensive frailties and strategies for playing against them? And should we be looking to provide services similar to Prozone’s:

Live Performance Analysis: By offering ‘real time’ information about students’ learning experiences, our Live Analysis service gives management and academic staff an immediate insight into the performance of students in their learning.
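The kind of correlation underpinning the library data hypothesis is easy to illustrate. The sketch below computes a Pearson correlation coefficient in pure Python; the figures are entirely invented for illustration and bear no relation to the actual Huddersfield project data:

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical figures: library e-resource logins per student against
# final-year marks (invented numbers, purely illustrative)
logins = [5, 12, 3, 40, 25, 18, 8, 33]
marks  = [52, 58, 48, 71, 66, 62, 55, 69]

print(round(pearson(logins, marks), 2))  # → 0.97
```

A coefficient near 1 would indicate the sort of “statistically significant correlation” the hypothesis describes, though correlation alone of course says nothing about causation.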

Steve Wheeler concluded his blog post by suggesting that:

Knowledge performance is at the centre of community as curriculum. From the sharing of knowledge comes the discourse that adds to everyone’s collective knowledge within the community of practice, and extends its boundaries. It is this sharing of experience, new ideas, contention and support that advances the community of practice exponentially. The tools are here to achieve it. Performance of knowledge through social media will be one of the vital components of education and training in the coming years.

I agree with that final sentence: “Performance of knowledge through social media will be one of the vital components of education and training in the coming years“. But this will not be restricted to learning and teaching. I would slightly modify this conclusion by saying: “Performance of knowledge through social media will be one of the vital components of research, education and training in the coming years“. And being able to analyse the performance will be a major growth area. Or at least that is what the NMC Horizon Report > 2012 Higher Education Edition seems to be suggesting, with the NMC Horizon’s 2012 Preview Report (PDF format) giving Learning Analytics a time-to-adoption horizon of 2-3 years.

The report defines Learning analytics as

the interpretation of a wide range of data produced by and gathered on behalf of students in order to assess academic progress, predict future performance, and spot potential issues. Data are collected from explicit student actions, such as completing assignments and taking exams, and from tacit actions, including online social interactions, extracurricular activities, posts on discussion forums, and other activities that are not directly assessed as part of the student’s educational progress.

Or if we, this time, apply this to a sporting context with the changes highlighted:

the interpretation of a wide range of data produced by and gathered on behalf of footballers in order to assess football progress, predict future performance, and spot potential issues. Data are collected from explicit sporting actions, such as completing passes and taking penalties, and from non-sporting actions, including online social interactions, extracurricular activities such as not being caught for drunken driving, posts on the footballer’s Twitter account, and other activities that are not directly assessed as part of the footballer’s sporting and non-sporting progress.

The major difference is that football is a game of two halves but an undergraduate course is a game of three years :-)

Posted in General | 4 Comments »

Learning From Shared Twitter Links (Before Trunk.ly’s Demise)

Posted by Brian Kelly (UK Web Focus) on 9 January 2012

The Forthcoming Demise of Trunk.ly

On 19th February 2011 I signed up for the trunk.ly service.  The email I received which confirmed my registration summarised the features of this service:

  • Trunk.ly indexes the full web page that all your links point to. Just search and find, no need to worry about tagging or summarizing content.
  • If you #tag content in Twitter, or tag it in Delicious, Trunk.ly will create tags for you.
  • Trunk.ly also checks your Twitter favorites so you can just favorite content with links without retweeting it if you prefer.

I have to admit that I’d forgotten about trunk.ly until I received an email recently telling me that the service has been acquired by AVOS (who also recently acquired Delicious.com) and that the trunk.ly service will terminate at the end of the week: Friday 13th January.

The email did inform me that I can export the content created by trunk.ly:

This tool creates a list of all your bookmarks in a format understandable by most browsers. You can save the generated page (as HTML) and import it into your browser — or anything else that accepts bookmarks in a standard format.

Your tags will be included in the export file even if you don’t see them on the page. This is the limitation of the export file format.

I have exported the content and hosted it on the UKOLN Web site.  However before the service is withdrawn I thought it would be useful to explore what it can tell me about the links I have shared on Twitter.
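The export is described only as being “in a format understandable by most browsers”, i.e. presumably the standard browser bookmarks format. As a sketch of the kind of processing that remains possible once the service closes, the snippet below pulls link URLs and tags out of such a file using only the Python standard library (the entries shown are invented, not real trunk.ly data):

```python
from html.parser import HTMLParser

class BookmarkExportParser(HTMLParser):
    """Collect link URLs and tags from a browser-style bookmark export."""
    def __init__(self):
        super().__init__()
        self.bookmarks = []

    def handle_starttag(self, tag, attrs):
        # HTMLParser lowercases tag and attribute names, so HREF -> href
        if tag == "a":
            attrs = dict(attrs)
            tags = attrs.get("tags")
            self.bookmarks.append({
                "url": attrs.get("href"),
                "tags": tags.split(",") if tags else [],
            })

# A two-entry sample in the style of a standard bookmarks export
export = """<!DOCTYPE NETSCAPE-Bookmark-file-1>
<DL><p>
<DT><A HREF="http://example.org/post" TAGS="altc2011">A post</A>
<DT><A HREF="http://example.org/other">Another</A>
</DL><p>"""

parser = BookmarkExportParser()
parser.feed(export)
tagged = [b for b in parser.bookmarks if "altc2011" in b["tags"]]
print(len(parser.bookmarks), len(tagged))  # → 2 1
```

This mirrors the tag-based browsing described below: once the file is parsed, filtering by a hashtag-derived tag is a one-line list comprehension.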

The service is associated with my main Twitter account (@briankelly) and with the UK Web Focus blog.  Since registering with the service ten months ago it has harvested 4,997 links. I am followed by six other trunk.ly users and follow 13 users.

The service allows me to browse through the links I have created in chronological order as well as the links created by people I follow. As illustrated Trunk.ly can summarise the content of the link and, if available, include an embedded image.

Trunk.ly also allows me to explore the content by any associated tags.

As shown in the accompanying screen image I can see that I used the #altc2011 Twitter hashtag for a number of tweets. Clicking on the tag enabled me to view the three tweets I posted: one which linked to a FriendFeed post in which Seb Schmoller described how “Recording can improve a bad lecture! 7… – Seb Schmoller – FriendFeed”; one on ““Battling legal, logistical and technical obstacles to archiving the Web” « UK Web Focus” which summarised one of my blog posts on Twitter archiving; and one on “Martin Hamilton’s blog: ALT-C 2011: Cloud Learning with Google Apps” in which I retweeted Martin Hamilton’s link about a presentation he gave at the ALT-C 2011 conference.

Of more interest, however, is Trunk.ly’s search interface. This enables me to search not only resources which I have shared but also resources shared by the people I follow, as well as by all Trunk.ly users. Examples of the numbers of links posted by myself and Tony Hirst (@psychemedia) which contain various search terms and domains are given below:

User         | No. of links | mashup | “RDFa | “jisc | “ukoln | “OU | .ukoln | .jisc | “.open
@briankelly  |  4,930       |   40   |  157  |  907  |  832   | 119 |  151   |  35   |   16
@psychemedia | 10,339       |  558   |   78  |  372  |   68   | 568 |   14   |  31   |  372

(The first five columns are search terms; the final three are domain searches.)

Unsurprisingly we both tweet significant numbers of links back to our host institutional Web site.
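The domain searches in the table above can be reproduced offline from an exported link list by tallying hostnames. A small sketch, with illustrative URLs:

```python
from collections import Counter
from urllib.parse import urlparse

def domain_counts(urls):
    """Tally shared links by the host they point at (e.g. to spot ukoln or jisc links)."""
    return Counter(urlparse(u).netloc for u in urls)

# Illustrative links, not my actual trunk.ly export.
links = [
    "http://www.ukoln.ac.uk/web-focus/",
    "http://www.ukoln.ac.uk/events/iwmw2011/",
    "http://www.jisc.ac.uk/",
]
print(domain_counts(links))
# Counter({'www.ukoln.ac.uk': 2, 'www.jisc.ac.uk': 1})
```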

It is also possible to search by the type of resource which has been shared:

User         | No. of videos | No. of images | No. of places | No. of PDFs | Everything
@briankelly  |       74      |       88      |       34      |      30     |   4,098
@psychemedia |      266      |      271      |        0      |     137     |   8,533

Discussion

In February 2009 Mike Ellis argued that, for services such as Twitter and blogs, “The person is the point“:

Twitter, like blogging, needs an edge, a voice, a riskiness. As long as institutions can retain this – i.e., do it for a reason – then, IMO, things will get more interesting. If they don’t, we’ll probably all be unfollowing museums as quickly as we can slide down the steep, slippery trough of disillusionment

That may have been the case in Twitter’s early days but now Twitter does not need to have an edge. Twitter can be used for sharing ideas and resources and for discussing the implications of the ideas and commenting on the resources.

The Trunk.ly blog has announced that:

Trunk.ly will be discontinued, and we will immediately start working to integrate our technology and insights to accelerate the link-saving and searching capabilities in Delicious. 

I’m pleased that I still have my Delicious account and will be interested  to see how the service becomes embedded within Delicious. It will also be interesting to see if the resource sharing capabilities provided by Twitter, and the ways in which such sharing can now be analysed will have a role to play in the development of altmetrics. As described in the altmetrics manifesto:

 Articles are increasingly joined by:

  • The sharing of “raw science” like datasets, code, and experimental designs
  • Semantic publishing or “nanopublication,” where the citeable unit is an argument or passage rather than entire article.
  • Widespread self-publishing via blogging, microblogging, and comments or annotations on existing work.

A Google search for “altmetrics twitter” provides a link to a tweet from @jasonpriem:

BIG #altmetrics news: Highly tweeted articles 11x more likely to be highly cited http://doi.org/hb6#scholcomm #twitter

The tweet provides a link to a paper on “Can Tweets Predict Citations? Metrics of Social Impact Based on Twitter and Correlation with Traditional Metrics of Scientific Impact” which concludes:

Tweets can predict highly cited articles within the first 3 days of article publication. Social media activity either increases citations or reflects the underlying qualities of the article that also predict citations, but the true use of these metrics is to measure the distinct concept of social impact. Social impact measures based on tweets are proposed to complement traditional citation metrics. The proposed twimpact factor may be a useful and timely metric to measure uptake of research findings and to filter research findings resonating with the public in real time.

These conclusions were based on an analysis of all tweets containing links to articles in the Journal of Medical Internet Research (JMIR). For a subset of 1,573 tweets about 55 articles published between March 2009 and February 2010, different metrics of social media impact were calculated and compared against subsequent citation data from Scopus and Google Scholar 17 to 29 months later. A heuristic to predict the top-cited articles in each issue through tweet metrics was validated.
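As a toy illustration of the kind of comparison the paper describes — tweet counts shortly after publication set against later citation counts — here is a plain Pearson correlation on invented numbers (the paper’s own data and its rank-based metrics are not reproduced here):

```python
from math import sqrt

def pearson(xs, ys):
    """Plain Pearson correlation coefficient, no third-party libraries needed."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-article counts: tweets in the first 3 days vs citations later.
tweetations = [2, 15, 4, 40, 7]
citations   = [1, 10, 3, 25, 6]
print(round(pearson(tweetations, citations), 3))  # ≈ 0.997 for these invented figures
```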

For those working in the area of medical internet research it would seem that Twitter has an important role to play in increasing citations or helping to identify important papers. Perhaps, after all, Mike Ellis is right: the person is the point. But the person may be the researcher and the point may be the research, rather than the researcher’s edgy voice.

Survey Paradata:  As described in  a post on Paradata for Online Surveys blog posts which contain live links to data will include a summary of the survey environment in order to help ensure that survey findings are reproducible, with information on potentially misleading information being highlighted.  The survey findings described in this post were collected on 30 December 2011 using the Google Chrome browser on a PC running Windows 7.  It was noticed that there were differences between the  two ways of finding the numbers of links which have been harvested: the information provided in the user’s profile (e.g. see my profile page which states that there are 4,997 links)  and the numbers given for a search for the user (see my search results).

Posted in Twitter | Leave a Comment »

Isn’t #Sherlock Great! (TV & a ‘Second Screen’ For the Twitter Generation)

Posted by Brian Kelly (UK Web Focus) on 8 January 2012

A Scandal in Belgravia

Wasn’t last week’s episode of Sherlock (“A Scandal in Belgravia“) great! I thought so, and when I looked at my Twitter stream last Sunday night it seemed that many of the people I follow on Twitter were impressed, too. I then searched Twitter for #sherlock and found that approval of the first episode in the new series was pretty overwhelming.

As a friend of mine later said, it’s not surprising that Twitter users liked the programme so much, as it was written with users who are au fait with Web technologies in mind. Not only did the programme feature @TheWhipHand, it also mentioned John Watson’s blog. Both had been created to accompany the programme, and yes, people did view the Twitter stream and the blog while they were watching the programme, as can be seen from the accompanying screenshot of the tweets which were posted during the show.

TV’s ‘Second Screen’

The link between a TV programme and a Twitter stream reminded me of the pioneering work which Tony Hirst and Martin Hawksey were involved in back in 2009.

As described in the Wikipedia entry for “Twitter subtitling”:

The concept of combining video and twitter feeds for recorded events was first proposed by Tom Smith in February 2009[1] after experiencing Graham Linehan’s BadMovieClub[2] in which at 9pm exactly on the 13th February 2009, over 2,000 Twitter users simultaneously pressed ‘Play’ on the film ‘The Happening‘ and continued to ‘tweet’ whilst watching, creating a collective viewing experience.

Smith, in response, proposed that media such as DVDs and YouTube videos could be enhanced by overlaying asynchronous status updates from other Twitter users who had watched the same media [1].

Separately, in March 2009 Tony Hirst (Open University), in consultation with Liam Green-Hughes (Open University), presented a practical solution for creating SubRip (*.srt) subtitle files from the Twitter Search API using Yahoo Pipes. The resulting file was then uploaded to a YouTube video[3] allowing users to replay in realtime audio/video with an overlay of status updates from Twitter. Hirst subsequently revisited his original solution creating the simplified Twitter Subtitle web interface for the original Yahoo Pipe[4]

The concept was revisited on the 16th February 2010 by Martin Hawksey (JISC RSC Scotland North & East) in response to a notification by Hirst made via Twitter during a broadcast of the BBC/OU’s The Virtual Revolution series in which Hirst requested information on replaying the #bbcrevolution hashtag in real-time[5].

Although Tony and Martin’s work initially focussed on providing a mashup of tweets and recordings of a number of BBC TV programmes Martin subsequently developed the iTitle tool which was used to merge event tweets with video recordings taken at a number of events held with the UK higher education sector.
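The core trick behind these tools — converting timestamped tweets into a SubRip (.srt) subtitle file, offset from the start of the recording — can be sketched as follows. This is an illustrative reimplementation rather than the Yahoo Pipes/iTitle code itself, and the sample tweet is invented:

```python
from datetime import datetime, timedelta

def srt_timestamp(delta):
    """Format a timedelta as the SubRip HH:MM:SS,mmm timestamp."""
    total_ms = int(delta.total_seconds() * 1000)
    h, rem = divmod(total_ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def tweets_to_srt(tweets, start, duration=timedelta(seconds=5)):
    """Turn (timestamp, user, text) tuples into the body of a .srt subtitle file.

    Each tweet becomes one numbered caption, shown for `duration` from the
    moment it was posted, relative to the start time of the recording.
    """
    blocks = []
    for i, (when, user, text) in enumerate(sorted(tweets), start=1):
        offset = when - start
        blocks.append(
            f"{i}\n{srt_timestamp(offset)} --> {srt_timestamp(offset + duration)}\n"
            f"@{user}: {text}\n"
        )
    return "\n".join(blocks)

start = datetime(2010, 2, 16, 21, 0, 0)  # when the broadcast/recording began
tweets = [
    (datetime(2010, 2, 16, 21, 0, 12), "psychemedia",
     "Replaying #bbcrevolution in real time"),
]
print(tweets_to_srt(tweets, start))
```

The resulting file can then be uploaded alongside a video (YouTube accepts .srt captions), which is essentially what the original Twitter subtitling experiments did.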

As described in a post on Captioned Videos of IWMW 2010 Talks, iTitle was used after UKOLN’s IWMW 2010 event to provide Twitter captions of the discussions which took place during the plenary talks. One of the developments Martin made to iTitle was a search facility which enables you to jump directly to the point in the video associated with the content of a tweet. I described how this can be used to provide crowd-sourced bookmarking of live video feeds. As illustrated using an example of the IWMW 2010 conclusions, I could search for “good stuff” and find three examples of tweets containing these words. In the screenshot I seem to be looking at the Twitter wall at 10:51 as @PlanetClaire tweets “Professional network grown after this IWMW. Good stuff. #iwmw10″. It’s not only the BBC which can take a post-modernist approach to the blended real world and online environment!

After speaking at the University 2.0: the Extended University conference held at the UIMP in Santander, Spain in 2010, at which a number of the plenary talks were live-streamed, it occurred to me that there could be other ways in which iTitle could be used. Professor Alejandro Piscitelli, University of Buenos Aires, gave a fascinating talk on Explorando los bordes y contornos de la Universidad 2.0 (exploring the edges and contours of University 2.0). The talk was given in Spanish and I listened to the English translation. Since the audience were mostly Spanish, the tweets were also in Spanish. The talk seemed to be one which Professor Piscitelli had given on a number of occasions. But which aspects of the talk would be of particular interest to the Spanish audience, to an audience in Argentina, or to one in the UK or USA (Professor Piscitelli is a fluent English speaker)? I should also add that Martin Hawksey was a remote observer of the conference. Martin processed the tweets posted during Professor Piscitelli’s talk by using Google Translate to translate them into English, Spanish and Catalan. The user could select their preferred language and view a recording of the talk with the translated tweets displayed alongside the recording. Note that although this interface is still available, it seems that the original video recording is no longer available at the UIMP.

These thoughts came back to me when I saw Sherlock and the accompanying Twitter backchannel.

I am sure the BBC will have been analysing the tweets and interpreting how the audience was responding to the complexities of the plot. But will they be using analyses of live Twitter posts in order to make comparisons between the posts from the UK audience and a US audience when the programme is broadcast over there?

Back in February 2010 Tony Hirst gave his thoughts on Broadcast Support – Thinking About Virtual Revolution:

I watched the broadcast on Saturday, I started wondering about ‘live annotation’ or enrichment of the material as it was broadcast via the backchannel. Although I hadn’t seen a preview of the programme, I have mulled over quite a few of the topics covered by the programme in previous times, so it was easy enough to drop resources in to the twitter feed. So for example, I tweeted a video link to Hal Varian, Google’s Chief Economist, explaining how Google ad auctions work, a tweet that was picked up by one of the production team who was annotating the programme with tweets in real time

Tony concluded by referencing Martin Hawksey:

PS here’s another interesting possibility – caption based annotations to iPlayer replays of the programme via Twitter Powered Subtitles for BBC iPlayer Content c/o the MASHe Blog (also check out the comments…)

The Ideas and Experimentation Become Apps

We are now seeing these ideas being deployed in a commercial context. Just before Christmas I came across the Zeebox app, described as a “new way to watch television. It’s social, connecting you to your TV-watching friends, so you can chat, share and tweet about whatever’s on“, and I have now installed it on my iPod Touch. Previously I typically used my iPod Touch to view tweets, and had a large enough Twitter community to spot hashtags which may have emerged or been minted about a TV programme. However apps such as Zeebox are now managing this process and provide a ‘frictionless’ way of sharing thoughts and opinions.

This is an example of a “second screen”, a term which Wikipedia defines as referring to an electronic device (tablet, smartphone) which a television viewer uses to interact with the content they are consuming.

It’s good to see ideas which were explored in the higher education sector a few years ago starting to be used by the early adopters in the mainstream community. There’s a danger, though, that such mainstream uses of Twitter will lead to a backlash by those who are uneasy when a technology becomes used in entertainment. But rather than looking at the trivia which we’re likely to see on the backchannel for Saturday night entertainment programmes, let’s explore how the easy-to-use applications which are now becoming available can be used to support our educational and research interests.

Looking back at the blog posts written by Tony and Martin in 2009 and 2010 might be a useful starting point for seeing what the future may hold :-)

Posted in Twitter | 8 Comments »

Call For Proposals for IWMW 2012

Posted by Brian Kelly (UK Web Focus) on 6 January 2012

UKOLN’s annual Institutional Web Management Workshop will be held at the University of Edinburgh on 18-20th June 2012.  IWMW 2012 is the sixteenth in the series of events which is aimed at those involved in the provision of institutional Web management services.

This year’s theme is “Embedding Innovation“. At IWMW 2010 we explored the theme of The Web in Turbulent Times and last year we described institutional approaches for Responding to Change. Now, having absorbed the implications of reductions in funding and begun the process of developing new approaches to delivering services, we wish to explore ways of embedding changes related to new working practices, the rapidly changing technical environment and user expectations, especially from students who will be paying significant amounts of money to attend university.

The call for proposals is now open. Since the event is aimed at a broad section of those involved in the provision of institutional Web services we welcome proposals which cover the spectrum of interests, including the technical challenges of managing institutional Web services, the ways in which a diversity of user needs can be addressed, the ways in which content and services can be managed, the increasingly challenging legal implications of providing online services, the ways in which the Web can be used to support a broad range of business requirements, the growing importance of social media, the opportunities and challenges posed by cloud services, strategies for dealing with a mobile environment, staff development issues, etc.

We welcome submissions for plenary talks. There will be a small number of plenary talks, which typically last for 45 minutes and should be of relevance to a broad section of the audience. Since the event has always sought to provide opportunities for active participation we will be providing a larger number of workshop sessions, which normally last for 90 minutes and aim to ensure that everyone has an opportunity to participate actively. In addition we welcome other ideas, perhaps for panel sessions, debates and other ways in which the challenges of managing large-scale Web services can be addressed in informative and, perhaps, fun ways.

If you have never attended an IWMW event before you may wish to view the programme for the IWMW 2011, IWMW 2010 and IWMW 2009 events to get a feel for the range of topics which have been covered.

If you have any queries or would simply like to have a chat about possible contributions, feel free to get in touch with me.

Posted in Events | Tagged: | 4 Comments »

Alternatives To Twapper Keeper

Posted by Brian Kelly (UK Web Focus) on 5 January 2012

 

On 23 December I received an email which confirmed the news about the forthcoming demise of the Twapper Keeper Twitter archiving service:

First off, on January 6th 2012, the TwapperKeeper.com site, and all related archives, will be shutdown with no access to any existing archives. Please ensure you have compiled all of your data by this date.

What should you do if you wish to continue keeping an archive of tweets, especially for event-related tweets which seems to be one particularly valuable use case?

One solution is to use Twapper Keeper! Or perhaps I should say Your Twapper Keeper, the open source version of Twapper Keeper. As part of the development of the Twapper Keeper service the software was made available under an open source licence in order to decouple the provision of the service from the software used to provide it. Anyone, therefore, is free to download the software from the GitHub repository and set up their own Twitter archive.

For those who have warned about the risks of dependencies on third party services for which there are no formal contractual agreements, this example perhaps demonstrates the value of funding the development of an open source alternative. But is this really the case? Will institutions be downloading the software in order to be able to manage their own archives? I see no evidence that this is happening, but I’d like to be proved wrong.

Perhaps this is a case for which an easy-to-use proprietary solution is all that is needed, especially since the content is typically not created primarily by members of a specific institution but, in the case of event-related Twitter archives, by attendees at an event who are likely to be based across the sector rather than at a single institution.

On the Event Amplifier blog, in a post entitled Goodbye Twapper Keeper, Kirsty Pitkin explores the possibility of using HootSuite, the company which purchased Twapper Keeper, for managing Twitter archives. Kirsty has also described the financial implications of such a decision:

A Pro customer (paying $5.99 per month) can archive only a measly 100 tweets, or purchase a bolt on to archive up to “100,000 tweets and download all keyword related Twitter messages”. When I attempted to upgrade my plan, I found that 10,000 additional tweets would cost me $10 per month, and 100,000 additional tweets would cost me $50 per month.

But in addition to the options of installing the Your Twapper Keeper software or purchasing an appropriate account from HootSuite, Kirsty has highlighted an alternative approach: “Martin Hawksey is a master of Google Spreadsheet tools and has created this alternative method of collecting tweets and has provided detailed instructions to archive and visualise Twitter conversations around an event hashtag“.

Martin has helpfully provided a video which is available on YouTube and embedded below which describes how to use his solution.
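The essence of the spreadsheet approach — periodically appending hashtag search results to a persistent archive — can be sketched in a few lines. The record keys below match the shape of the Twitter search API of the time (an assumption on my part), and fetching is deliberately left out so the sketch stays API-agnostic:

```python
import csv
import os

def archive_tweets(rows, path="hashtag_archive.csv"):
    """Append tweet records to a CSV archive, writing a header on first use.

    `rows` is a list of dicts with 'created_at', 'from_user' and 'text' keys.
    """
    fields = ["created_at", "from_user", "text"]
    new_file = not os.path.exists(path)
    with open(path, "a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=fields)
        if new_file:
            writer.writeheader()
        for row in rows:
            writer.writerow({k: row.get(k, "") for k in fields})

# An invented record, of the kind a hashtag search might return.
archive_tweets([
    {"created_at": "Thu, 05 Jan 2012 10:00:00 +0000",
     "from_user": "briankelly",
     "text": "Alternatives to Twapper Keeper #twitterarchiving"},
])
```

Run on a schedule (which is what the Google Spreadsheet version uses a timed trigger for), this accumulates an archive that outlives any one third-party service.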

It will be interesting to see which, if any, of these options proves the most popular solution across the sector: the open source solution, the subscription service, the Google solution or possibly an approach I haven’t described. Which will you be choosing?



Posted in Twitter | 9 Comments »

Facebook and Twitter as Infrastructure for Dissemination of Research Papers (and More)

Posted by Brian Kelly (UK Web Focus) on 4 January 2012

 

A tweet from @Wowter (blogger, information specialist and bibliometrician at the Wageningen UR Library) alerted me to the news of the “Free new #SpringerLink mobile app: Access 2,000+ peer-rev. journals, 49,000 books,127,000 #OA articles.http://ow.ly/8gv9W“.

I installed the app on my iPod Touch and was interested to note that there were just three ways of sending information about the 2,000+ peer-reviewed journals, 49,000 books and 127,000 open access articles: as illustrated the three dissemination tools are email, Facebook and Twitter.

Via @Wowter’s Twitter timeline I also found the news, initially announced by @MFenner, of the “New blog post: CrowdoMeter goes Mobile http://blogs.plos.org/mfenner/2012/01/04/crowdometer-goes-mobile/“.

The blog post describes how “Two weeks ago Euan Adie from altmetric.com and myself launched the website CrowdoMeter, a crowdsourcing project that tries to classify tweets about scholarly articles using the Citation Typing Ontology (CiTO) … This project is far from over, ideally we want 3-5 classifications per tweet or an additional 1,000 classifications“. In order to “make the classifications as simple as possible, and to help further with this we today [4 January 2012] launched a mobile version of CrowdoMeter. Simply browse to http://crowdometer.org with your iPhone or Android phone [and] sign in via your Twitter account“.

I did this and captured the following screenshots:


Initially in this post I intended to highlight how the SpringerLink app suggests that Facebook and Twitter may be becoming part of the dissemination infrastructure for research papers, especially on mobile devices. However when I read Martin Fenner’s blog post I realised that Twitter, in particular, may also have a role to play in the curation of information about research papers and scientific data.

Hmm, I wonder if Twitter will catch on outside this niche area?



Posted in Facebook, Mobile, Twitter | 15 Comments »

I Built It and They Didn’t Come! Reflections on the UK Web Focus Daily Blog

Posted by Brian Kelly (UK Web Focus) on 3 January 2012

On 1 January 2011 I set up the UK Web Focus Daily blog. As described in the initial post:

Inspired by WordPress.com’s suggestion that WordPress users may wish to publish a blog post a day (see the post on “Challenge for 2011: Want to blog more often?“) I have set up this blog. This will be used for informal notes, ideas, etc.

The blog was used actively during the first six months of the year with 30 posts being published in January, 27 in February, 26 in March, 30 in April, 24 in May and 26 in June with the final 6 posts published during the year being published in July.

The blog made use of the P2 theme which is described as “A group blog theme for short update messages, inspired by Twitter“. As can be seen in the screenshot the post creation window is displayed at the top of the blog, thus making it simple to create brief posts.

The content posted is unlikely to be of significant interest to others; the blog was primarily intended to keep brief notes about topics of interest to me. Shortly after launching the blog, however, I realised that it could be used to see how much traffic a blog generates if no attempt is made to promote it. On 8 January a post entitled Unsubscribing from RSS feeds with only summary content contained links to two blogs, which subsequently resulted in comments being posted on the blog. I therefore did not publish any links to blogs in subsequent posts, and I described this experiment in a post entitled Build It and They’ll Come? which was published on 23 January.

As can be seen from the accompanying image, as expected the numbers of visitors to the blog were low (apart from the home page there were only four posts which received over 10 visits).

It will be noticed that there was a big jump in the numbers of visits in June. As described in a post entitled Blog Views Up By 300%! this occurred after the block which prevented search engines, including Google, from indexing the blog was removed on 31 May.

Normally experiments look at ways of measuring strategies for maximising access to resources. This experiment looked at ways of publishing content openly whilst keeping the numbers of visitors to a minimum – along the lines of publishing the plans for the destruction of Arthur Dent’s home planet to make room for an expressway at the city planning office, “on display in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying ‘Beware of the Leopard.’“

The suggestions I have for those who wish to minimise the chances that people will find a blog are:

  • Block search engines from indexing the site (note you can also create a unique string so you can check if Google has indexed the site).
  • Don’t link to other people’s blog posts: they’ll see the referrer link and possibly choose to subscribe to your blog.
  • Don’t allow comments: people may find what you are writing about of interest, add their own thoughts and then look for further comments.
  • Don’t add the blog to any directories.
  • Don’t refer to your blog on other web sites or blogs.
  • Don’t tweet about the blog.
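The first suggestion amounts to serving a Disallow-all robots.txt (WordPress.com exposes this as a privacy setting). Python’s standard urllib.robotparser can confirm the effect of such a file; the blog URL below is illustrative:

```python
from urllib.robotparser import RobotFileParser

# A robots.txt that blocks all crawlers, as would be served at the blog's root.
robots_txt = """User-agent: *
Disallow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())
print(rp.can_fetch("Googlebot", "http://example.wordpress.com/2011/01/08/some-post/"))
# → False: a well-behaved crawler will not index any page on the site
```

A per-page alternative is a `<meta name="robots" content="noindex">` element in the page head, which tells crawlers to fetch but not index the page.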

Of course, if you want others to read your posts you’ll do the opposite! More seriously, this experiment has helped to demonstrate that simply building an online resource isn’t sufficient if you want users to make use of it. The launch of a web site is just the start of the process.



Posted in Blog | 4 Comments »

“It Ain’t What You Do, It’s The Fact That You Did It”

Posted by Brian Kelly (UK Web Focus) on 2 January 2012

There’s a tendency to emphasise the benefits of tangible activities which involve significant investment of time and energy: carrying out the scientific experiments; interviewing the stakeholder communities; writing the research paper; developing the software; organising the events; etc. Outside the higher education sector we see this, for example, from the New Year’s Honours list which describes how “In total 984 people have been recommended to The Queen for an award. 70 per cent of the recipients are local heroes, who’ve undertaken outstanding work in their communities“. You don’t win an award or get promotion for a trivial piece of work, do you?

I wouldn’t like to be critical of people “who’ve undertaken outstanding work in their communities” – although, as described in the Observer “It is far more difficult to see the reasoning behind the award of an unprecedented third of knighthoods to bankers and businessmen, including Paul Ruddock, a hedge fund manager and Tory donor who profited from the collapse of Northern Rock“. But rather than make this obvious political point, I feel there is also a need to reflect on the implications of the minor decisions and actions we can all make which can have an impact across the society we live in.

This is clearly true in the parliamentary democracy we live in. Last year I took part in our democratic processes by voting in the General Election. And whilst it’s true that I am unhappy with the result and the subsequent consequences, I know that that’s how western democracy works and I’ll have to accept the implications of my vote for the Lib Dems, in order to keep out the Conservatives in Bath.

Voting in general elections every four to five years is accepted as how parliamentary democracy works in the UK. But it has recently occurred to me that we are also seeing similar effects happening in the online world, in which the small actions of individuals can have a significant influence in both the online and offline (real) worlds.

We see this with Google searches, in which the first sets of results will be affected by the numbers of links to the pages. People who create Web pages containing links to other pages are therefore helping to vote for pages which will be displayed at the top of a Google search.
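That “links as votes” principle is PageRank itself, and a toy power-iteration version fits in a dozen lines (the graph and damping factor below are illustrative):

```python
def pagerank(links, damping=0.85, iterations=50):
    """Tiny power-iteration PageRank: each outgoing link acts as a vote for its target."""
    pages = sorted(set(links) | {t for ts in links.values() for t in ts})
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for src, targets in links.items():
            for t in targets:
                new[t] += damping * rank[src] / len(targets)
        rank = new
    return rank

# Three pages: both A and B link to C, so C collects the most 'votes'.
graph = {"A": ["C"], "B": ["C"], "C": ["A"]}
ranks = pagerank(graph)
print(max(ranks, key=ranks.get))  # → 'C'
```

B receives no links, so its rank decays towards the baseline (1 − d)/N, while C, with two inbound votes, comes out on top — which is exactly the sense in which linking is voting.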

The influence of individual Web page authors is now likely to be fairly minimal, as Search Engine Optimisers will be using a variety of other techniques in order to manipulate Google’s search algorithms. However the social media provides an alternative means by which simple actions can have an influence.

The University of Oxford’s Facebook page informs us that there have been “349,820 likes” and “5,549 are talking about the page”.

Looking at the most recent Facebook status update for the page, the season’s greetings from the institution, we can see that 707 people have liked this and 192 comments have been made.

The implications of lightweight activities such as liking a resource, favouriting a resource or following a user struck me after the recent update to the Twitter Web site (and the Twitter client on my iPod Touch).

The activities of people I follow on Twitter are now highlighted so, as illustrated, I can see how the Twitter account for the J Paul Getty Museum has favourited a tweet from Carl Silva, how Garret McMahon has started to follow Elaine Byrne, and how Clay Shirky, James Burke and Mike Gulliver are now following Rupert Murdoch.

Back in June 2010 Christina Rogge suggested ways in which we could go about Building a Collective Intelligence with Twitter and, in November, a post on the Mashable blog described How Hashtagging the Web Could Improve Our Collective Intelligence. Also last year Anthony Deacon suggested ways of Using Facebook Groups to Harness Collective Intelligence.

In the 2010 General Election there were 10,706,647 votes for the Conservatives, 8,604,358 for Labour and 6,827,938 for the Liberal Democratic Party (including one from me). There have also been 349,831 Likes of the University of Oxford Facebook page, also including one from me. I wonder if my trivial activities on social media sites will have a more productive outcome than my vote in the last election? And although we will still need people to “undertake outstanding work in their communities” we should also remember that, to a certain extent:

It ain’t what you do, it’s the fact that you did it. That’s what gets results.

The “it” can involve a mark on a voting slip or a click on a Like or +1 button. Activists understand the importance of the need to persuade people to exercise their vote at elections. We will need to understand the potential significance of similar small-scale actions in the online environment.



Posted in Social Web | 5 Comments »