Where in the World…?
Over the weekend I had a brief discussion with the other Dr Johnson in my house, who was asking me various questions about Exchanges, its metrics and readership. Okay, truth be told I started the conversation by wondering aloud about various aspects of our multiple audiences [1], as it’s a topic never too far from my mind, even on a road trip to the far south of the UK. I should mention, Mrs Dr Johnson is a remote-sensing satellite and environmental monitoring specialist at another Midlands university, and I suspect she tends to perceive the world through a geographic lens. Hence the framing of her question and her interest. I’ll confess it wasn’t something I could immediately answer while driving down the M40, beyond making an assumption that our core audience to date was located in and around Warwick, and perhaps Monash, given our concentration of editors and authors from those locations.
As I’ve discussed before, one of my (many) ambitions for Exchanges is to broaden the range of its audiences [2], and thinking about what we can find out about the current audiences isn’t a bad place to start. I’ve two principal tools at my disposal for gathering this sort of data: Google Analytics (GA) and Open Journal Systems’ (OJS) inbuilt statistics generator. The former looks a lot slicker and can churn out some pretty illuminating graphics at the click of a mouse; the latter’s UI and outputs are a lot more ‘web 1.0’, in that creating a custom report is not a facile exercise and the platform spits out reams of largely unformatted, hard, numerical data. Both tools have their place in my working practices. For example, at times it’s handy to be able to access and manipulate the raw data, and GA doesn’t make scraping that in its entirety quite so easy. Conversely, when I need an illustrative graphic in short order for a presentation or report, GA is the tool I turn to.
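For the curious, the sort of raw-data wrangling I have in mind looks something like the sketch below. It’s purely illustrative: the file names and column headers are hypothetical stand-ins for whatever the GA and OJS exports actually contain, not a description of either platform’s real output.

```python
# A minimal sketch of putting two tools' country exports side by side.
# File names and column names here are hypothetical assumptions.
import pandas as pd

ga = pd.read_csv("ga_countries.csv")    # assumed columns: country, sessions
ojs = pd.read_csv("ojs_countries.csv")  # assumed columns: country, views

# Combine both tools' figures into one table for eyeballing.
merged = ga.merge(ojs, on="country", how="outer").fillna(0)

# Since relative magnitude matters more than exact value, add share columns.
merged["ga_share"] = merged["sessions"] / merged["sessions"].sum()
merged["ojs_share"] = merged["views"] / merged["views"].sum()

print(merged.sort_values("ga_share", ascending=False).head(10))
```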
The $64,000 question: does their data correlate? The answer is yes…and no. Broadly there’s some alignment, but the figures each has presented me with are reasonably different in exact value, if similar in relative magnitude. Given the issues with generating comparable data over the same period [3], it comes as no surprise to me that variance in ranking beyond the ‘big three’ of the UK, USA and Australia [4] exists. Perhaps more interesting are those countries which appear in one but not the other analytical tool’s top 10.
| Rank | Google Analytics | Open Journal Systems |
|------|------------------|----------------------|
| 1 | United Kingdom | United States |
| 2 | United States | United Kingdom |
| … | … | … |
| 8 | South Korea | South Korea |
(countries appearing in both lists highlighted)
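Mechanically, spotting which countries appear in only one tool’s top 10 is a simple set operation. Here’s a hedged sketch using deliberately truncated, partly hypothetical lists (the ordering and full rankings would of course come from the real exports):

```python
# Illustrative only: truncated top-10 lists with placeholder ordering.
ga_top10 = ["United Kingdom", "United States", "Canada", "Philippines",
            "South Korea"]          # ...remaining entries omitted
ojs_top10 = ["United States", "United Kingdom", "France", "Russia",
             "South Korea"]         # ...remaining entries omitted

shared = set(ga_top10) & set(ojs_top10)    # countries in both lists
ga_only = set(ga_top10) - set(ojs_top10)   # possibly newer audiences?
ojs_only = set(ojs_top10) - set(ga_top10)  # possibly diminishing ones?

print(f"both: {shared}\nGA only: {ga_only}\nOJS only: {ojs_only}")
```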
This might suggest, given GA has been running for less time than OJS’ current platform, that Canada and the Philippines are new and expanding audiences for Exchanges, with France and Russia diminishing. However, both the precision and the time spans over which this data was gathered are too limited to support such sweeping conclusions [5]. It is pleasing, though, to see some non-Anglophone usage in both charts, especially considering our sole language of publication is English.
Anyway, no matter the deeper implications of this very light-touch look at Exchanges’ user statistics, I think I’ve at least answered part of Mrs Dr Johnson’s question about where in the world our usage has originated. Naturally, this begs another question which I can’t immediately answer: where SHOULD our audiences for the journal be coming from? As always, answers in the comments below please…
[1] Readers, authors, potential authors, stakeholders and more… I’m still working on defining these.
[2] Does the ISS have an ISP I can track? If it does, another mission is to get this journal read in orbit!
[3] These issues are multiple. For example, with OJS, when we moved to the newer version last year this, regrettably, seemed to ‘reset’ the statistics for the platform. We’ve a back record of these, but it’s no longer possible to run off a complete set since the journal began. Likewise with Google Analytics, we’ve not had this running the whole time the platform has been up, so there’s going to be a temporal discrepancy there too. Added to that, neither platform counts or creates its statistics in the same way; without a LOT of lengthy post-processing and normalisation, for normal usage there are always going to be disagreements on the ‘exact’ magnitude of visitations. Just one of the reasons why, as a qualitative researcher, I tend to maintain a certain analytical cynicism wherever ‘statistics’ are used to justify something: there are always likely to be flaws, assumptions and simplifications in the underlying data acquisition methods!
[4] These make up 58.4% (GA) or 77.7% (OJS) of all usage.
[5] It is possible I could make the data collection time frames marry better, but I’m still developing an understanding of how OJS works ‘under the hood’ in this respect. Something to return to at a later date, perhaps.
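If I do return to it, a first step might look something like this: a sketch that assumes both tools can export dated, per-country counts (which I haven’t yet verified for OJS), clipping both datasets to their overlapping window before any comparison.

```python
# Sketch: compare only over the window both tools were actually recording.
# File names and column names are hypothetical assumptions.
import pandas as pd

ga = pd.read_csv("ga_daily.csv", parse_dates=["date"])    # date, country, sessions
ojs = pd.read_csv("ojs_daily.csv", parse_dates=["date"])  # date, country, views

# Find the shared date range covered by both datasets.
start = max(ga["date"].min(), ojs["date"].min())
end = min(ga["date"].max(), ojs["date"].max())

# Restrict each dataset to that common window.
ga_window = ga[ga["date"].between(start, end)]
ojs_window = ojs[ojs["date"].between(start, end)]

print(f"comparing {start:%Y-%m-%d} to {end:%Y-%m-%d}")
```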