Favourite blogs for Kieran's blog


April 25, 2011

BMW R100GS Paris–Dakar refurbishment and redesign – latest progress

Some recent updates by top expert Andrew Sexton, including:

  • Oil sump extension;
  • New oil cooler;
  • Oil cooler relocation;
  • Oil cooler thermostat.

The parts were bought from http://www.boxxerparts.de

Andrew has also professionally rewired the electrics, making a neat job of the Acewell speedo and a replacement rear LED light. It all now seems to work perfectly. Finally, he found that it had been suffering from low oil pressure, due to a missing O-ring in the oil filter assembly (a common mistake made by a non-specialist technician). The big-end bearings had signs of damage, so were replaced. Andrew also re-seated the exhaust valves. Less smoke and more MPG have resulted.

I've added an MRA Vario screen from Motorworks, adjustable to give perfectly non-turbulent air flow. There's also a Garmin Zumo sat nav to go with the Midland BT 02 bluetooth intercom.

I've ditched the metal panniers (on Ted Simon's advice). They've been replaced by a pair of Ortlieb waterproof panniers (a single pannier can carry all of my camping gear), a Hein Gericke tail bag, and a small cool bag.

Some photos:

Bike 1

Bike 2

Bike 3


April 12, 2011

Alexander playing cosmic basketball

Lawrence thought it would be amusing, so he made this image....

Basketball


March 27, 2011

Alexander Prospero O'Toole

Alexander

Magical baby.


March 10, 2011

QCon day 1

A lot of good stuff today, but I’m just going to jot down a couple of my favourite points:

Craig Larman talked about massive-scale Scrum-based projects. I don’t suppose I’m going
to be running (or even part of) a 1500-person dev team very much, but some of his points
are applicable at any scale:
  • The only job title on the team is “Team Member”. There might be people with specialist skills,
    but no-one can say “it’s not my job to fix that”
  • If you don’t align your dev teams’ organisation with your customers, then Conway’s law means your
    architecture will not align with your customers either, and you won’t be able to react when their needs
    change
  • Don’t have management-led transformation projects. How will you know when they’re done? Instead,
    management’s role is just to remove impediments that the dev team runs up against – the “servant-leader”
    model

Dan North spoke about how he moved from what he thought was a pretty cutting-edge, agile environment
(Thoughtworks, consulting to large organisations starting to become leaner/more agile) to a really agile
environment (DRW, releasing trading software tens of times per day), and how if you have a team that
is technically strong, empowered, and embedded in the domain (i.e. really close to the users), you can do
away with many of the traditional rules of Agile. A couple that really struck me were:
  • Assume your code has a half-life. Don’t be afraid to just rewrite it, or bin it. The stuff that stays in
    can get better over time, but it doesn’t have to be perfect from day 1
  • Don’t get emotionally attached to the software you create. Get attached to the capabilities you enable
    for your users
  • Remember, anything is better than nothing.

Juergen Hoeller talked about what’s new in Spring 3.1. No amazing surprises here, but some nice stuff
(there’s a rough sketch of the Java-config side just after this list):
  • Environment-specific beans – avoid having to munge together different config files for system-test vs.
    pre-production vs. live; have a single context with everything defined in it (even nicer, arguably, when you
    do it via Java config and the @Profile annotation)
  • c: namespace for constructor args. Tasty syntactic sugar for your XML, and the hackery they had to go through
    to get it to work is impressive (and explains why it wasn’t there from the start)
  • @Cacheable annotation, with bindings for EHCache and GemFire (not for memcached yet, which is a bit of a surprise)
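
For my own reference, here’s a minimal sketch of what the profile-specific beans and @Cacheable might look like in Java config. This is my own illustration rather than anything Juergen showed; the class, bean and cache names are invented, and the in-memory cache manager is just a stand-in for an EHCache- or GemFire-backed one.

    import org.springframework.cache.CacheManager;
    import org.springframework.cache.annotation.Cacheable;
    import org.springframework.cache.annotation.EnableCaching;
    import org.springframework.cache.concurrent.ConcurrentMapCacheManager;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.context.annotation.Profile;

    @Configuration
    @EnableCaching
    public class AppConfig {

        // Environment-specific beans: only the bean whose profile is active is created,
        // e.g. start the JVM with -Dspring.profiles.active=live
        @Bean
        @Profile("system-test")
        public QuoteSource testQuoteSource() {
            return new StubQuoteSource();
        }

        @Bean
        @Profile("live")
        public QuoteSource liveQuoteSource() {
            return new StubQuoteSource(); // stand-in; the live one would talk to a real feed
        }

        // Simple in-memory cache manager for the sketch; swap in an EHCache/GemFire-backed
        // CacheManager for real use.
        @Bean
        public CacheManager cacheManager() {
            return new ConcurrentMapCacheManager("quotes");
        }
    }

    interface QuoteSource {
        double quoteFor(String symbol);
    }

    class StubQuoteSource implements QuoteSource {
        // Results are cached per symbol in the "quotes" cache; repeated calls
        // with the same argument skip the method body.
        @Cacheable("quotes")
        public double quoteFor(String symbol) {
            return symbol.length() * 1.5; // pretend this was an expensive lookup
        }
    }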

Liz Keogh talked about perverse incentives. Any time you have a gap between the perceived value that a metric
measures and the actual value that you want to create, you make an environment where arbitrage can occur. People can’t
help but take advantage of the gap, even when they know at some level that they’re doing the “wrong thing”.
  • Focus on the best-performing parts of the organisation as well as the worst-performing. Don’t just say “This
    project failed; what went wrong?”; make sure you also say “This project succeeded better than all the others; what went right?”
  • Don’t try and create solutions to organisational problems, or you’ll inevitably make perverse incentives. Instead,
    make systems (systems-thinking, not computer programs) that allow those solutions to arise.

Chris Read and Dan North talked about Agile operations. Surprisingly for me, there wasn’t a great deal of novel stuff
here, but there were a couple of interesting points:
  • Apply an XP-ish approach to your organisational/process issues: pick the single biggest drag on you delivering value and
    do the simplest thing to fix it. Then iterate.
  • Fast, reliable deploys are a big force multiplier for development. If you can deploy really fast, with low risk,
    then you’ll do it more often, get feedback faster, allow more experimentation, and generally waste less. The stuff that Dan
    and Chris work on (trading data) gets deployed straight from dev workstations to production in seconds; automated testing
    happens post-deploy.

Yoav Landman talked about module repositories. Alas, this was not the session I had hoped for; I was hoping for some
takeaways that we could apply to make our own build processes better, but this was really just a big plug for Artifactory,
which looks quite nice but really seems to solve a bunch of problems that I don’t run into on a daily basis. I’ve never needed
to care about providing fine-grained LDAP authorisation to our binary repo, nor to track exactly which version of Hibernate was
used to build my app 2 years ago. The one problem I do have in this space (find every app which uses HttpClient v3.0,
upgrade it, and test it) is made somewhat easier by a tool like Artifactory, but that problem very rarely crops up, so it
doesn’t seem worth the effort of installing a repo manager to solve it. Also, it doesn’t integrate with any SCM except Subversion,
which makes it pretty useless for us.


March 08, 2011

"Designing Software, Drawing Pictures

Not a huge amount of new stuff in this session, but a couple of useful things:

The goal of architecture, in particular up-front architecture, is first and foremost to communicate the vision for the system, and secondly to reduce the risk of poor design decisions having expensive consequences.

The Context->Container->Component diagram hierarchy:

The Context diagram shows a system, and the other systems with which it interacts (i.e. the context in which the system operates). It makes no attempt to detail the internal structure of any of the systems, and does not specify any particular technologies. It may contain high-level information about the interfaces or contracts between systems, if
appropriate.

The Container diagram introduces a new abstraction, the Container, which is a logical unit that might correspond to an application server, (J)VM, database, or other well-isolated element of a system. The container diagram shows the containers within the system, as well as those immediately outside it (from the context diagram), and details the
communication paths, data flows, and dependencies between them.

The Component diagram looks within each container at the individual components, and outlines the responsibilities of each. At the component
level, techniques such as state/activity diagrams start to become useful
in exploring the dynamic behaviour of the system.

(There’s a fourth level of decomposition, the class diagram, at which we start to look at the high-level classes that make up a component, but I’m not sure I really regard this as an architectural concern)

The rule-of-thumb for what is and what isn’t architecture:

All architecture is design, but design is only architecture if it’s costly to change, poorly understood, or high risk. Of course, this means that “the architecture” is a moving target; if we can reduce the cost of change, develop a better understanding, and reduce the risk of an element then it can cease to be architecture any more and simply become part of the design.


March 07, 2011

Five things to take away from Nat Pryce and Steve Freeman's "TDD at the system scale" talk

  • When you run your system tests, build as much as possible of the environment from scratch.
    At the very least, build and deploy the app, and clear out the database before each run
  • For testing assemblies that include an asynchronous component, you want to wrap
    your assertions in a function that will repeatedly “probe” for the state you want
    until either it finds it, or it times out. Something like this (there’s a fuller sketch after this list):
       doSomethingAsync();
       probe(interval, timeout, aMatcher, anotherMatcher...);

    To simplify things, wrap the probe() function into a separate class that has access to the objects
    you want to probe.

  • Don’t use the logging APIs directly for anything except low-level debug() messages, and maybe
    not even then. Instead, have a “Monitoring” topic, and push structured messages/objects onto
    that queue. Then you can separate out production of the messages from routing, handling, and
    persisting them. You can also have your system tests hook into these messages to detect hard-to-observe state changes
  • For system tests, build a “System Driver” that can act as a facade to the real system, giving
    test classes easy access to a properly-initialised test environment – managing the creation and
    cleanup of test data, access to monitoring queues, wrappers for probes, etc.
  • We really need to start using a proper queueing provider
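
For my own notes, here’s a rough sketch of what that probe helper might look like. It’s my code, not Nat and Steve’s, and it assumes Hamcrest matchers plus a made-up Sampler interface for reading the state under test:

    import org.hamcrest.Matcher;
    import static org.hamcrest.MatcherAssert.assertThat;

    // Poll some observable state until the matcher is satisfied or we run out of time.
    public final class Probe {

        // Hypothetical hook for sampling the state we want to assert on.
        public interface Sampler<T> {
            T sample();
        }

        public static <T> void probe(long intervalMillis, long timeoutMillis,
                                     Sampler<T> sampler, Matcher<? super T> matcher)
                throws InterruptedException {
            long deadline = System.currentTimeMillis() + timeoutMillis;
            T latest = sampler.sample();
            while (!matcher.matches(latest)) {
                if (System.currentTimeMillis() > deadline) {
                    // One final assert, so a failure reports the usual Hamcrest mismatch description.
                    assertThat(latest, matcher);
                    return;
                }
                Thread.sleep(intervalMillis);
                latest = sampler.sample();
            }
        }

        private Probe() {
        }
    }

A real System Driver (as per the fourth bullet) would presumably own a few of these samplers, so individual tests don’t have to construct them by hand.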

February 24, 2011

Solaris IGB driver LSO latency.

Yes, it’s another google-bait entry, because I found it really difficult to find any useful information about this online. Hopefully it’ll help someone else find a solution faster than I did.

We migrated one of our applications from an old Sun V40z server, to a newer X4270. The migration went very smoothly, and the app (which is CPU-bound) was noticeably faster on the shiny new server. All good.

Except that when I checked nagios, to see what the performance improvement looked like, I saw that every request to the server was taking exactly 3.4 seconds. Pingdom said the same thing, but a simple “time curl …” for the same URL came back in about 20 milliseconds. What gives?
More curiously still, if I changed the URL to one that didn’t return very much content, then the delay went away. Only a page that had more than a few KBs’ worth of HTML would reveal the problem.

Running “strace” on the nagios check_http command line showed the client receiving all of the data, but then just hanging for a while on the last read(). The apache log showed the request completing in 0 seconds (and the log line was printed as soon as the command was executed).
A wireshark trace, though, showed a 3-second gap between packets at the end of the conversation:

23    0.028612    137.205.194.43    137.205.243.76    TCP    46690 > http [ACK] Seq=121 Ack=13033 Win=31936 Len=0 TSV=176390626 TSER=899836429
24    3.412081    137.205.243.76    137.205.194.43    TCP    [TCP segment of a reassembled PDU]
25    3.412177    137.205.194.43    137.205.243.76    TCP    46690 > http [ACK] Seq=121 Ack=14481 Win=34816 Len=0 TSV=176391472 TSER=899836768
26    3.412746    137.205.243.76    137.205.194.43    HTTP    HTTP/1.1 200 OK  (text/html)
27    3.412891    137.205.194.43    137.205.243.76    TCP    46690 > http [FIN, ACK] Seq=121 Ack=15517 Win=37696 Len=0 TSV=176391472 TSER=899836768

For comparison, here are the equivalent packets from a “curl” request for the same URL (which didn’t suffer from any lag):

46    2.056284    137.205.194.43    137.205.243.76    TCP    49927 > http [ACK] Seq=159 Ack=15497 Win=37696 Len=0 TSV=172412227 TSER=898245102
47    2.073105    137.205.194.43    137.205.243.76    TCP    49927 > http [FIN, ACK] Seq=159 Ack=15497 Win=37696 Len=0 TSV=172412231 TSER=898245102
48    2.073361    137.205.243.76    137.205.194.43    TCP    http > 49927 [ACK] Seq=15497 Ack=160 Win=49232 Len=0 TSV=898245104 TSER=172412231
49    2.073414    137.205.243.76    137.205.194.43    TCP    http > 49927 [FIN, ACK] Seq=15497 Ack=160 Win=49232 Len=0 TSV=898245104 TSER=172412231

And now, it’s much more obvious what the problem is. Curl is counting the bytes received from the server, and when it’s got as many as the Content-Length header said to expect, it closes the connection (packet 47, sending a FIN). Nagios, meanwhile, isn’t smart enough to count bytes, so it waits for the server to send a FIN (packet 27), which is delayed by 3-and-a-bit seconds. Apache sends that FIN immediately, but for some reason it doesn’t make it to the client.

Armed with this information, a bit more googling picked up this mailing list entry from a year ago. This describes exactly the same set of symptoms. Apache sends the FIN packet, but it’s caught and buffered by the LSO driver. After a few seconds, the LSO buffer is flushed, the client gets the FIN packet, and everything closes down.
Because LSO is only used for large segments, requesting a page with only a small amount of content doesn’t trigger this behaviour, and we get the FIN immediately.

How to fix? The simplest workaround is to disable LSO:

#  ndd -set /dev/ip ip_lso_outbound 0

(n.b. I’m not sure whether that persists over reboots – it probably needs adding to a file in /kernel/drv somewhere). LSO is beneficial on network-bound servers, but ours isn’t so we’re OK there.

An alternative is to modify the application code to set the TCP PSH flag when closing the connection, but (a) I’m not about to start hacking with apache’s TCP code, and (b) it’s not clear to me that this is the right solution anyway.

A third option, specific to HTTP, is just to use an HTTP client that does it right. Neither nagios nor (it seems) pingdom appears to know how to count bytes and close the connection itself, but curl does, and so does every browser I’ve tested. So you might just conclude that there’s no need to fix the server itself.
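
To make the “count the bytes” point concrete, here’s a toy sketch of a client that reads the response headers, honours Content-Length, and then closes the socket itself rather than waiting for the server’s (possibly LSO-delayed) FIN. It’s my own illustration, not curl or check_http source; the host is a placeholder and there’s no error handling or chunked-encoding support.

    import java.io.DataInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.Socket;

    public class CountingHttpGet {
        public static void main(String[] args) throws IOException {
            String host = "www.example.com";   // placeholder target
            String path = "/";
            Socket socket = new Socket(host, 80);
            try {
                OutputStream out = socket.getOutputStream();
                out.write(("GET " + path + " HTTP/1.1\r\nHost: " + host + "\r\n\r\n")
                        .getBytes("US-ASCII"));
                out.flush();

                InputStream in = socket.getInputStream();
                int contentLength = -1;
                String line;
                // Status line and headers end at the first blank line.
                while (!(line = readLine(in)).isEmpty()) {
                    if (line.toLowerCase().startsWith("content-length:")) {
                        contentLength = Integer.parseInt(line.substring(15).trim());
                    }
                }
                if (contentLength >= 0) {
                    byte[] body = new byte[contentLength];
                    new DataInputStream(in).readFully(body);  // read exactly this many bytes
                    System.out.println("Read " + body.length + " body bytes; closing.");
                }
            } finally {
                socket.close();  // close ourselves; don't wait for the server's FIN
            }
        }

        // Minimal CRLF line reader for the header section.
        private static String readLine(InputStream in) throws IOException {
            StringBuilder sb = new StringBuilder();
            int c;
            while ((c = in.read()) != -1 && c != '\n') {
                if (c != '\r') sb.append((char) c);
            }
            return sb.toString();
        }
    }

Against the affected server, this should return as soon as the body bytes have arrived, just as curl did.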


February 16, 2011

Deploying LAMP websites with git and gitosis

Requirement of the day: I’m providing a LAMP environment to some other developers elsewhere in the organisation. They’ll do all the PHP programming, but I’ll look after the server, keep it patched, upgrade it when necessary, and so on.

At some point in the future, we’ll doubtless need to rebuild the server (new OS version, hardware disaster, need to clone it for 10 other developers to each have their own, etc…), so we want as little configuration as possible on the server. Everything should be built from configuration that lives somewhere else.

So, the developers basically need to be able to do 3 things:
– update apache content – PHP code, CSS/JS/HTML, etc.
– update apache VHost config – a rewrite here, a location there. They don’t need to touch the “main” apache config (modules, MPM settings, etc)
– do stuff (load data, run queries, monkey about) with mysql.

Everything else (installation of packages, config files, cron jobs, yada yada) is mine, and is managed by puppet.

So, we decided to use git and gitosis to accomplish this. We’re running on Ubuntu Lucid, but this approach should translate pretty easily to any unix-ish server.

1: Install git and gitosis. Push two repositories – apache-config and apache-htdocs

2: As the gitosis user, clone the apache-config repository into /etc/apache2/git-apache-conf, and the apache-htdocs repository into /usr/share/apache/htdocs/git-apache-htdocs

3: Define a pair of post-receive hooks to update the checkouts when updates are pushed to gitosis.
The htdocs one is simple:

cd /usr/share/apache2/git-apache-htdocs && env -i git pull --rebase

The only gotcha here is that because the GIT_DIR environment variable is set in a post-receive hook, you must clear it with “env -i” before trying to pull, or else you’ll get a “fatal: Not a git repository: '.'” error.

The apache config one is a bit longer but hopefully self-explanatory:

cd /etc/apache2/git-apache-conf && env -i git pull --rebase && sudo /usr/sbin/apache2ctl configtest && sudo /usr/sbin/apache2ctl graceful

Add a file into /etc/apache2/conf.d with the text “Include /etc/apache2/git-apache-conf/*” so that apache picks up the new config.

We run a configtest before restarting apache to get more verbose errors in the event of an invalid config. Unfortunately if the config is broken, then the broken config will stay checked-out – it would be nice (but much more complex) to copy the rest of the config somewhere else, check out the changes in a commit hook, and reject the commit if the syntax is invalid.

And that’s it! Make sure that the gitosis user has rights to sudo apachectl, and everything’s taken care of. Except, of course, mysql – developers will still need to furtle in the database, and there’s not much we can do about that except for making sure we have good backups.

You might be wondering why we chose to involve gitosis at all, and why we didn’t just let the developers push directly to the clone repositories in /etc/apache2 and /usr/share/apache2/htdocs. That would have been a perfectly workable approach, but my experience is that in team-based (as opposed to truly decentralised) development, it’s helpful to have a canonical version of each repository somewhere, and it’s helpful if that isn’t on someone’s desktop PC. Gitosis provides that canonical source.
Otherwise, one person pushes some changes to test, some more to live, and then goes on holiday for a week. Their team-mate is left trying to understand and merge together test and live with their own, divergent, repo before they can push a two-line fix.
More experienced git users than I might be quite comfortable with this kind of workflow, but to me it still seems a bit scary. My years of CVS abuse mean I like to have a repo I can point to and say “this is the truth, and the whole truth, and nothing but the truth” :-)


November 11, 2010

Final blog

Writing about web page http://blogs.warwick.ac.uk/libresearch

Dear all (or any) followers,
You have probably noticed by now the significant lack of blogging taking place on this site. Thank you to anyone who has been diligent (and optimistic) enough to still drop by to check for content from time to time, and apologies to you and those following our content (or lack of it) via RSS. The Research and Innovation Unit of the University of Warwick Library, which was responsible for this blog, closed last year; however, I would like to reassure you all that innovative research and development activities are still underway at our Library. We can now be found as part of the Library’s Academic Services Development wing, and if you’d like to contact any of the team, please contact donna.carroll@warwick.ac.uk or asofficers@warwick.ac.uk and we’d be happy to help.

If you are interested in following previous bloggers from this site, Jenny Delasalle’s blog can be located at: http://blogs.warwick.ac.uk/libresearch

Many thanks and goodbye.
Dr. Donna Carroll, Academic Services Development Manager


October 12, 2010

For articles about learning, research and technology…

Please see my research blog Inspires Learning.


July 11, 2010

Road testing my rebuilt R100GS PD

Follow-up to BMW R100GS definitely almost finished soon soon from Transversality - Robert O'Toole

On Friday evening, I got my GS Paris-Dakar back from Nu-Age Kenilworth Motorcycles (thanks to Nick, Bill and all their helpers for lots of hard work). The police-specification electrics are all working well. Only two glitches: the speedo connection from the gearbox to the Acewell digital speedo has stopped working, and on my first run, after half an hour, the clutch started to scream. I took it back, and Bill adjusted the setting. It's now fine. No, in fact, it's absolutely magnificent - just as an Airhead Gelände Strasse should be. A bit quicker and more responsive to the throttle than before the rebuild. And without the fairing and screen, it's much smoother, with less air turbulence. And much more fun. Naked bikes feel faster, and more "involved". I did an hour's worth of riding today, getting it up to 70mph on the A46, and testing it out thoroughly on the B-roads. I'll try to use it every day this week, and at some point take it onto a green lane to see if being 20 kg lighter improves its handling on dirt.

The rebuild is complete. For a while. I'll have another look at the electrics, to tidy them up and get the speedo working. And then perhaps a bigger front disk will be the next development.

Here's a full tally of the work that I've had done:

Frame, sub-frame and various components powder coated;
Nuts and bolts replaced with a stainless kit;
Downpipes and silencer replaced with a Keihan stainless set;
Fork seals replaced;
Push-rod seals replaced, and stainless steel tubes added;
Tank, mudguards and side panels repainted (fairing removed);
Headlight replaced with twin lights;
Instruments replaced with an Acewell digital system;
Timing chain replaced;
Carbs refurbished;
Pistons and heads de-coked;
1 exhaust valve replaced;
Alternator, diode board, regulator, hall sensor all replaced with improved versions;
Serviced;
Cleaned and polished.

The starter motor was replaced recently with one of the "improved" Valeo starters.

So now, I hope, it will do another 85,000 miles until the next major rebuild.

Complete


July 08, 2010

BMW R100GS definitely almost finished soon soon

Follow-up to BMW R100GS refurbishment almost finished from Transversality - Robert O'Toole

It has an MOT, and some nice new Acerbis handguards (don't pay rip-off Touratech prices for them, go to an off-road shop and they are 1/3 the price). Nu-Age Kenilworth Motorcycles couldn't get the timing exactly right, so I guessed that the mechanical retard/advance mechanism in the bean can is jammed, a common fault. They have ordered a fully electronic replacement from Motorworks. The alternator is looking worn and not charging properly, so I'll be getting a new 450W police-spec generator as well, along with a police-spec regulator to match. It will be ready soon soon. Unless I decide that I might as well replace the remaining original parts too. Anyone know where I can get a new set of forks? Öhlins, WP, Marzocchi USD? Even the Marzocchi insert kit would be an improvement. No one seems to sell them anymore.

GS with acerbis hand guards

GS from the front



May 18, 2010

Blossoming

Garden right

Garden right 2

Garden centre

Garden left


May 16, 2010

Nuthatch

Nuthatches are regular visitors to our garden. They have become unusually tame.

Nuthatch 1

Nuthatch 2


BMW R100GS refurbishment almost finished

Follow-up to BMW R100GS pistons and heads cleaned from Transversality - Robert O'Toole

My bike came back from Nu-Age Racing in superb condition. I've started to add the final parts. The mudguards have been painted blue (Glossy Car Coats of Kenilworth) and the tank has been painted in BMW arctic white. I'm going to leave the side panels off. All bolts are now stainless steel. I've also fitted stainless down pipes and a silencer. It's in better condition than it was when I bought it nine years ago. I think it's the bike that BMW should have built.

R100GS from the side

I've removed the headlight fairing and replaced it with just a simple and lightweight twin headlight set and an Acewell digital speedo bolted to the handlebars.

R100GS from the front

The last job is to fit the electrics. Getting the loom and a new set of coils in place was easy. However, the front section of the loom is far too long without the fairing, and so I must wrap it back on itself. It's now almost complete.



April 30, 2010

BMW R100GS pistons and heads cleaned

Follow-up to BMW R100GS frame and engine refurb from Transversality - Robert O'Toole

The guys at Nu-Age Racing have now carefully removed the carbon deposits from the engine. It's looking really good. Hopefully, it will all be back together by the weekend and I can start to reassemble the electrics.

Before

After


Clean

Shiny happy engine.


April 23, 2010

BMW R100GS frame and engine refurb

My R100GS Paris Dakar is currently at the very good (and friendly) Nu-Age Racing in Kenilworth having some major refurbishment work done.

The frame has been blasted and powder coated. The result is excellent, like it has just rolled off the production line:

Powder coated GS frame

After 85,000 miles, a new timing chain has been fitted as a precautionary measure:

GS crankcase

As with most old airheads, the pushrod seals are leaking. They are being replaced, and stainless steel pushrod tubes added:

Heads and barrels

Taking the engine apart has revealed quite a lot of carbon deposits on the pistons and around the valves, one of which will be replaced (an exhaust valve went a few years ago):

GS piston

The gearbox and drive shaft seem fine:

GS gearbox

I'm also having the carbs refurbished.

When that is all complete, I'll be fitting the wiring loom (re-bound) and adding new twin headlights and a small digital speedo (with the old plastic fairing removed).




March 13, 2010

Making web sites

Writing about web page http://theheels.co.uk

So I decided I wanted to make a simple web site for my band, The Heels (theheels.co.uk, if you’re interested). Along the way I discovered two or maybe three things:-

  1. If you’re used to working with a content management system, it’s an unpleasant slap in the face to have to go back to using CSS & HTML for layout. It’s easy to kid yourself that if you know enough HTML to do simple formatted text, images and tables, you know enough to do layout, but that’s tragically untrue.
  2. Equally, though, for a web site as small as the one I made, it’s not worth the effort of trying to select, install, and learn how to drive a content management system of any sort. It would have taken significantly longer to get any CMS working than it did to write a dozen or so pages by hand.
  3. One thing that’s annoying to have to do by hand is a change to every page; often a CMS saves you from this. But if you pack as much as you can into common CSS files, that fixes the problem from one side, and if you have a text editor which can search and replace across many files and folders, that fixes it from the other.

December 21, 2009

Repainted R100GS Paris Dakar fuel tank

Follow-up to R100GS Paris Dakar refurbishment after 85,000 miles from Transversality - Robert O'Toole

Excellent work by Glossy Car Coats of Kenilworth. The fuel tank looks like new. The mudguards and side panels were painted in blue.

Tank


R100GS Paris Dakar refurbishment after 85,000 miles

I'm currently working on a more serious refurbishment of my BMW R100GS Paris Dakar. I started to get minor electrical faults in the headlights and instruments. On the PD, the front end is wrapped in an unnecessarily big and complicated plastic fairing. It even has large metal crash bars wrapped around it. I've never liked the fairing, and when I realised that it is quite a barrier to doing repairs on the electrics, I decided to remove it. It took much effort to remove! I bought the bike because it is supposed to be easy to work on, simple and reliable. Now that the fairing is gone, it's closer to that ideal. Once it was off, I put the whole assemblage on the scales (including instruments and crash bar). It weighs 10 kilos! A substantial weight for an off-road bike.

The instruments will be replaced by an all-in-one Acewell digital system. They are available, along with a speedo cable for BMW, from Boxxerparts in Germany. The headlight will be replaced with a pair of small round "streetfighter" style headlights mounted to the fork stanchions with mini-indicators.

With the fairing, fuel tank, seats and side panels off, I could see just how bad the rest of the bike is. It's covered in 85,000 miles of road grime. My earlier attempts at anti-rust coating and painting the frame are now being overtaken by rust. The worst aspect is the wiring harness. The fabric cover is soaked with oil, wearing through and unwrapping:

Wiring harness and rust

The only real solution is to strip the whole bike down, clean it thoroughly, restore the wiring harness, and get the frame bead-blasted and powder coated. I'm half way through that. The next step is to remove the forks, engine and transmission. I'll need some help with the engine, and will probably struggle to get the steering bearings out of the stem.

GS stripped down from front

I think I'll get the engine and forks removed by a professional, considering this article on removing steering races and bearings.


November 05, 2009

Educause '09: Portals

Writing about web page http://www.ithaca.edu/myhome

At a session about building a portal, I was struck by the similarities between the presenters’ institution – Ithaca College – and our own setup. They have three groups governing their web presence:

  • Their web strategy group has oversight. This is a high level group with VPs, Marketing, Admissions, Provost’s office, etc.
  • IT Services has technical leadership and hosts the institutional web site(s)
  • Marketing & Comms shares responsibility with ITS for brand, high level content, UX, etc.

They have a richly functional and well populated CMS which they built themselves, and a year or so ago, decided that they would build a portal to accomplish the following:-

  • Provide a home for a person’s (not the institution’s) activities. User has complete control over portlets, tabs, etc. – except for the “message center” portlet which is mandatory. The Comms Office control what appears in the Message Center.
  • Provide a single entry point leading to other resources
  • Improve communications between institution and students
  • Make transactions easier and information easier to find
  • Make a lightweight system that reuses as much as possible of existing web services & content.

A fairly similar set of circumstances to our own. What they built was a PHP / mySQL based application which uses the iGoogle portlet standard to deliver the following:-

  • Drag & drop UI for selecting & arranging content. (Choosing a background colour for each portlet turns out to be surprisingly popular and well used.)
  • The portal is a single sign-on participant, so starting in the portal means you won’t need to sign in to move on to other apps, and data can be pulled from other apps without needing to reauthenticate.
  • Webmail & calendar views in the portal (in fact, the only access to webmail is via the portal, to drive traffic)
  • Access to third party email accounts (Yahoo, Gmail, IMAP)
  • Lots of portlets for non-institutional data – Facebook, Digg, Reddit, Twitter, RSS Feeds, etc.
  • Search portlet shows results inline for people, web pages, blogs, etc.
  • “Service tabs” are whole-page applications (eg. change your password, see your calendar).
  • Users can publish and share their tabs with others if they’ve made a useful combination of things.
  • There’s a very Facebook-like gadget which shows you who else is online, their status updates, comments on other peoples’ statuses, their photos, etc. You can define who your friends are just like Facebook.
  • Mobile-optimised rendition (webkit optimised) – mobile home page is a list of portlets, then each portlet gets its own mobile-optimised screen. Similarly, an accessibility-optimised rendition of the portal.

What’s striking about this to me is that they reached a different conclusion to the thinking we’ve so far been doing. Their portal at present doesn’t have access to much institutional information about the individual. So there’s no gadgets for “My modules” or “My timetable” or “My coursework”. The gadgets are fundamentally just news, email and external. They hope to add gadgets which can display institutional data, but there’s back-end plumbing which needs to happen first (again, sounds kind of familiar). Until I saw this presentation, my take was that you absolutely had to have those institutional data gadgets to succeed. But the Ithaca portal has achieved the astonishingly high take up rate of 80% of the members of the university visiting it at least once per day. Without institutional data. It’s given me pause for thought.

Ithaca have an excellent micro-site intended for people who are interested in their portal but who aren’t members of the university. See, for instance, the home page, some video tutorials, the presentation from today, and some usage stats.


Educause '09: The future of the CIO

I went to a presentation about the future of the role of CIO. Most of what was said was fairly predictable (and that’s not meant as a criticism; anyone who reflects even briefly about how IT is used in universities, how the technology itself has changed and evolved, and the changing economic and political climate in universities, could hazard a perfectly reasonable stab at what’s occupying CIOs’ time nowadays). It would have been surprising and in some ways delightful if there had been a left-field, unexpected prediction such as:-

Within five years all CIOs will need to become accomplished mandolin players

but alas, such whimsy was not to be had (though as it was the presenter’s birthday, the audience sang Happy Birthday to him, which was almost as good).

Instead, the observations revolved around the fact that IT is now deeply engrained in, and vital to, every aspect of the institution’s work, and therefore the CIO of today can expect to be spending more time and effort on quality of service issues such as availability, planned downtime, risk assessment & management, financial management, disaster recovery, and so on. There was an interesting assertion that service delivery is now as important a part of the CIO’s agenda as strategy and planning, whereas historically it wasn’t, because there was less reliance on IT and therefore a more relaxed attitude to service availability.

But the very best observation in the session came right at the end, in response to an audience question which was along the lines of “You’ve said that the CIO’s remit is broader and deeper than ever, and that there are more things than ever before which need your time and attention. How do you decide what not to do; what you can stop doing?” (referring back to this morning’s keynote by Jim Collins). The speaker observed that finding ways to stop doing things or not to do things was indeed important, and threw in a couple of great observations. Firstly:-

I try not to say no to things directly. I see it as part of my role to guide the conversation around until I’m asked something which I’m confident I can say yes to.

And then, expanding on why this is a better tactic than just saying no:-

Saying ‘no’ is exercising power, and in a university, when you use power, you use it up.


November 04, 2009

Educause '09: live@edu

I went along to a Microsoft presentation on live@edu, which is the off-premise, hosted email service which we’re going to be delivering to our students early in 2010. Since we’re already some way into the project to manage this transition, there wasn’t a lot in the presentation which I didn’t already have some dim awareness of, but there were a few interesting points:-

  • Online, hosted Sharepoint is going to be added to the live@edu offering in 2010. It’ll be free for students, chargeable for staff, with the possibility of additional paid-for support if you want it. It won’t be as feature-rich as on-premise Sharepoint, but (depending on what’s in and what’s out) that might not matter for some student purposes. More information here (and I love the almost-too-frank FAQs; “Q: Aren’t you just copying Google? A: No! No way! We were here first!”)
  • Moving student email accounts over via IMAP looks pretty do-able.
  • The tech guys I spoke to seemed very confident that it is possible to do single sign-on integration with our Shibboleth-based in-house system. We might need our email or directories team to add some extra stuff to an AD and/or stick a special certificate on one of their AD servers, but given that, the rest of it, they say, is possible. Could just be sales talk, but the people I spoke to seemed too much in love with the geeky details of how you’d do it to be sales guys. ;-)
  • There’s a Windows Explorer add-on which lets you see your Skydrive file store as if it were locally attached storage (well, almost; you can drag files into and out of it and do file operations on the remote system in Windows Explorer, but it’s a network place, not a drive letter mapping, so it doesn’t show up directly in Open and Save dialogs, which is a shame). Still, makes it much more workable to have larger file sets in Skydrive, and much easier to move stuff in and out. It’s slightly surprising that it’s a third party offering rather than a Microsoft one.

Educause '09: Engaging the community

I went to a workshop about engaging your community when doing projects. Much of the advice that came from it is, on reflection, fairly common-sense based – communicate effectively, find users who are keen to be involved, make sure that senior people who could block your project are engaged, work on framing the problem rather than jumping to a particular solution, and so on. And the session wasn’t about how to actually succeed with your project deliverables, nor was it intended to be.

But I enjoyed the session nonetheless, partly because it was a workshop with exercises, rather than a presentation, and partly because it was led by enthusiastic, engaged presenters. And it served as a useful reminder that it’s eminently possible to have a project which succeeds brilliantly in terms of delivering what it was supposed to, on time and on budget, but which on some other level is a failure because what it delivers doesn’t do what people want, doesn’t make them happy. If you work in IT, it’s easy to get caught up in the nuts and bolts of getting things working and keeping things working – and of course that’s important – but it’s perfectly possible to deliver a service where everything’s working yet nobody’s happy. This session was a great reminder of how cultivating and maintaining good, productive, collaborative relationships with your users / colleagues / customers (delete according to taste and the prevailing methodology at your institution) is so very important if you want to deliver real services, rather than just be in the hardware and software business.


Educause '09: Cloud computing

There have been a couple of presentations on cloud computing so far; one on the in-principle pros and cons, and one on the nuts and bolts of an actual on-premise private cloud implementation. My thoughts:-

  • It seems fairly clear that 99% of people talking about cloud computing are actually talking about software as a service – Google hosting your email, or an out-sourced helpdesk or whatever. I’ve spoken to only a couple of people who are doing anything with genuine cloud services such as Amazon’s EC2 or S3.
  • People pointing up the risks of apps and data held off-premise seem to have a rose-tinted going on fictional view of life with on-premise services. Of course it’s true that your SaaS arrangement could have privacy issues, availability and SLA challenges, vendor lock-in, contract risks, and lack of control over the evolution of the service. But the unspoken argument against off-premise SaaS seems to be that these issues don’t exist, or exist only trivially, if you stay on-premise. But most universities who run Microsoft Exchange on site, for example, freely admit that they have outages, data losses and meaningless SLAs. And they are just as locked in to a vendor as if they asked Microsoft to hold the data in Dublin. And if you’re in a UK university, then if I say “Contract challenges”, I’d bet reasonable money that the word that comes into your mind first is “Oracle” – for an on-premise, supposedly bought and paid for piece of software.
  • Almost everyone has anecdotal evidence of people within their institution going off-site independently of what the central IT service may or may not be doing, be it forwarding email on to a third party provider, using Google Docs to collaborate or whatever. So unless you’re an institution with unusually strong central control (either technically or at a policy level), many of your members have voted with their feet and accepted the risks (possibly unknowingly, for sure).
  • An unspoken, but I think real concern, seems to be about the loss of accountability. If you run Exchange on site and it explodes, the thinking seems to be, you could fire someone. Whether you actually would or not is a different question, of course, but the principle that you can point to someone and say “this is your fault” seems to give some people a kind of warm fuzzy feeling. So if the university’s senior management agrees to go off-premise, the argument seems to run, who could they then blame if things went wrong later? Kind of a sad world-view to be planning your blame strategy in advance, I think, but there seems to be some of that floating around.