December 17, 2018

Perches, Post-holes and Grids

Writing about web page https://arxiv.org/abs/1704.07342

Clair Barnes, my MSc student from two or three years back, and I produced an appendix, Perches, Post-holes and Grids, for a book being prepared by John Blair et al., arising from the project Planning in the Early Medieval Landscape. You can find it on arXiv <https://arxiv.org/abs/1704.07342> of course! The appendix aims to demonstrate the application of statistical methods to the analysis of archaeological data, typically presented in graphical form, with the objective of assessing the extent to which a spatial configuration exhibits planning by the original architects. Typical questions: did the builders use a common unit of measurement over a wide geographical region? To what extent is there evidence that they used a grid pattern when designing groups of buildings?
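
One classical way of attacking the "common unit of measurement" question is D. G. Kendall's cosine quantogram, which scores each candidate quantum q by φ(q) = sqrt(2/N) Σ cos(2π X_i/q) and looks for a pronounced peak over a range of q. The following is only a minimal sketch of that idea using made-up data (it is not claimed to be the method of the appendix; the 4.6 m unit, the noise level and the range of candidate quanta are all arbitrary choices).

import numpy as np

rng = np.random.default_rng(1)
quantum = 4.6                                   # hypothetical "perch-like" unit, in metres
data = quantum * rng.integers(1, 12, size=60) + rng.normal(0.0, 0.15, size=60)

def cosine_quantogram(x, quanta):
    # Kendall's cosine quantogram: phi(q) = sqrt(2/N) * sum_i cos(2*pi*x_i/q)
    x = np.asarray(x, dtype=float)
    return np.array([np.sqrt(2.0 / len(x)) * np.cos(2.0 * np.pi * x / q).sum()
                     for q in quanta])

quanta = np.linspace(3.0, 8.0, 1001)            # candidate quanta to scan
phi = cosine_quantogram(data, quanta)
print(f"peak at q ≈ {quanta[np.argmax(phi)]:.2f} m with phi ≈ {phi.max():.2f}")
# In practice the peak height would be calibrated by simulation against quantum-free data.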

The name of the game is to contribute a statistical assessment to be mixed in with all sorts of other historical evidence. It's fun doing statistics in new areas like this: one learns a lot of stuff one didn't know before, and it provides a brilliant excuse to visit Anglo-Saxon Kingdoms <https://www.bl.uk/events/anglo-saxon-kingdoms> during the working week.


Crowd-sourcing data

Some really good ideas have been implemented recently around the notion of crowd-sourced data.

Of course the cartoonists got there first:

Crowd-sourced data

Noise to Signal: Rob Cottingham
<https://www.robcottingham.ca/cartoon/archive/2007-08-07-crowdsourced/>


Gold Access and so on

Writing about web page https://www.xkcd.com/2085/

Lots gets said about the importance of open-access publishing. Researchers are under pressure to publish papers that are "Gold Access" (translation: they pay the publisher quite a lot of money so that the paper can be accessed freely by all and sundry). Many people discussing this, and/or making policy decisions, appear not to have noticed that in many research fields new work is invariably released as a freely available preprint on the wonderful arXiv <https://arxiv.org/>, for which the publication cost is extremely low (and mostly met by academic institutions). For example, virtually all of my work of the last 14 years can be found there via <https://arxiv.org/a/kendall_w_1.html>.

The web-comic xkcd makes the point well <https://www.xkcd.com/2085/>.


October 16, 2017

Password strength

Writing about web page https://xkcd.com/936/

Today in the ST116 group we discussed how to build strong passwords. A good exposition can be found in the xkcd cartoon referenced as the weblink for this article.
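
For what it is worth, the xkcd recipe is easy to script. The sketch below is only an illustration: the tiny built-in word list is a placeholder, whereas a real list of about 2000 common words gives roughly 11 bits of entropy per word, so four words give about 44 bits, as in the cartoon.

import math
import secrets

# Placeholder word list; a real passphrase generator would load a few thousand common words.
words = ["correct", "horse", "battery", "staple", "cloud", "ribbon",
         "garden", "pencil", "silver", "window", "rocket", "candle"]

def passphrase(n_words=4, wordlist=words):
    # secrets.choice draws from a cryptographically strong source of randomness
    return " ".join(secrets.choice(wordlist) for _ in range(n_words))

print(passphrase())
print(f"≈ {4 * math.log2(len(words)):.1f} bits with this toy list, "
      f"≈ {4 * math.log2(2048):.0f} bits with a 2048-word list")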


October 08, 2017

The media has a problem with uncertainty

Writing about web page https://fivethirtyeight.com/features/the-media-has-a-probability-problem/

Not just the media, but it's a fair point. Have a look at what Nate Silver has to say.


September 12, 2017

The secret of success?

This TED talk makes a lot of sense to me, and chimes in with more than 35 years of lecturing experience. Have a look and see what it suggests to you.


Secret-sharing and independence

The following remarkable procedure is entirely feasible: if I have a class of n students then I can distribute n different binary images, one to each student, such that each student's image looks like white noise,

typical image distributed to a student: just white noise!

and yet if all images are combined together in the right way then a meaningful picture emerges.

result of combining all images using multiplication (if black pixels are coded as -1 and white pixels are coded as +1)

What's more, I can arrange matters such that if any strict subset of the n students tried to collaborate, then all they would get would be more white noise, no matter how they manipulated their images!

So any strict subset of the students would possess no information at all about the butterfly picture, but combined all together they would be in a position to produce a perfect reproduction of the image.

How can this be done?

  1. Code each black pixel of each image as -1, each white pixel as +1, and view each distributed image as a long vector sequence of +/-1 values.
  2. Let X0 be the vector encoding the target image (the butterfly above). Generate n-1 independent random vectors X1, X2, ..., Xn-1, each coordinate independently equal to +1 or -1 with probability 1/2, and distribute the corresponding white noise images to the first n-1 students.
  3. Student n is given an image corresponding to the vector Xn obtained by multiplying (coordinate-wise) all the other vectors:
    Xn = X0 * X1 * X2 * ... * Xn-1
    where "*" denotes coordinate-wise multiplication.
  4. It is simple arithmetic (each coordinate of each vector squares to 1) that X0 = Xn * X1 * X2 * ... * Xn-1. So all students working together possess the information to recover the butterfly image; see the code sketch after this list.
  5. On the other hand one can use elementary probability to show that, if one selects any subset of size n-1 of the vectors X1, X2, ..., Xn, then this subset behaves as if it is a statistically independent collection of vectors corresponding to white-noise images. (It suffices to consider just one pixel at a time, and to show that the corresponding n-1 random +/-1 values obey all possible multiplication laws.) So no strict subset of the students has any information at all about the butterfly image.
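
In code the whole construction takes only a few lines. Here is a minimal sketch using small +/-1 vectors in place of real images (the vector length, the number of students and the random seed are all arbitrary choices).

import numpy as np

rng = np.random.default_rng(0)

def share_secret(x0, n):
    # Split a +/-1 vector x0 into n shares whose coordinate-wise product is x0.
    shares = [rng.choice([-1, 1], size=x0.shape) for _ in range(n - 1)]  # pure white noise
    last = x0 * np.prod(shares, axis=0)       # Xn = X0 * X1 * ... * Xn-1 (coordinate-wise)
    return shares + [last]

x0 = rng.choice([-1, 1], size=20)             # stand-in for the butterfly image
shares = share_secret(x0, n=5)

recovered = np.prod(shares, axis=0)           # all n students combine their images
print(np.array_equal(recovered, x0))          # True: the secret is recovered exactly
partial = np.prod(shares[:-1], axis=0)        # any n-1 shares still look like white noise
print(np.array_equal(partial, x0))            # almost certainly False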

There are many other ways to implement secret-sharing (Google/Bing/DuckDuckGo the phrase "secret sharing"). But this one is nice for probabilists, because it provides a graphic example of why pairwise independence (independence of any two events taken from a larger collection of events) need not imply complete independence.


July 04, 2017

An old advertisement for the ST318 "Probability Theory" module

In your third year at Warwick you often have a drink with a friend who enjoys probability (yes, you make some strange friends here). On the Sunday evening before the start of Term 2 she challenges you to a curious game of chance. Hiding a fair six-sided die (numbered 0,1,2,3,4,5) from your observation, she throws it ten times and secretly notes each outcome on a convenient beer mat. Then she turns to you and announces:
"I shall now read you the running averages of the dice throws, but in reverse order.''

(You must have looked puzzled, because she breaks off to explain)
"Let Xn be the result of the nth throw, so the nth running average is Yn=(X1+...+Xn)/n. I will read out the numbers Y10, Y9, ..., Y1 in that order.''

Clearly a flicker of comprehension must have somehow crossed your face, because she now continues:
"At any stage you can stop me reciting the sequence. If the last average I read out was Yn then with probability Yn/5 (arranged using that book of random numbers which I carry everywhere) I will buy you a pint and the game ends. If you don't stop me at all then clearly you get a pint with probability Y1/5. Otherwise you buy me a pint.''

You calculate rapidly: E[Y1/5] = E[X1/5] = (1/6)(0 + 1 + ... + 5)/5 = 1/2 even if you wait till the last number, and you get to choose when to stop. Must be to your advantage!
"Seems a reasonably good deal to me," you say.

"I'm glad you think so;'' she replies, "before we begin you can buy me a pint to compensate. Then we can play the game K times, where K is as large as you think it should be so that on average you come out ahead.''

How big should K be?

The very next day you hear about this course, ST318 Probability Theory; by the end of the course you'll have learned enough to be able to answer the above question instantly without any calculation at all. All this and 15 CATS credit too. Now that is what you call fair! How can you resist?
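
If you would rather experiment than wait for the module, here is a minimal Monte Carlo sketch of the game; the threshold stopping rule is just one arbitrary illustrative strategy (try others), and the number of trials is likewise an arbitrary choice.

import numpy as np

rng = np.random.default_rng(3)

def play_once(threshold):
    x = rng.integers(0, 6, size=10)                 # ten throws of a die labelled 0,...,5
    running = np.cumsum(x) / np.arange(1, 11)       # Y1, Y2, ..., Y10
    announced = running[::-1]                       # she reads out Y10, Y9, ..., Y1
    for y in announced:
        if y > threshold:                           # stopping rule: stop on a "good enough" average
            return rng.random() < y / 5             # win a pint with probability y/5
    return rng.random() < announced[-1] / 5         # never stopped: probability Y1/5

for thr in [0.0, 2.5, 4.0]:
    wins = sum(play_once(thr) for _ in range(200_000))
    print(f"threshold {thr}: estimated P(win a pint) ≈ {wins / 200_000:.3f}")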


June 03, 2017

An example of a random variable with finite Fisher information but infinite entropy

The following notes concern some calculations I have made relating to Zanella et al (2017). The considerations laid out below are entirely trivial, but are helpful in making it clear whether or not certain conditions might be logically independent of each other.

Consider a random variable X with probability density f(x) = exp(φ(x)).

  1. The entropy of X is given by H = - ∫ log(f(x)) f(x) dx = - ∫ φ(x) f(x) dx = - E[ φ(X) ];
  2. One is also interested in something loosely related to Fisher information, namely Ι = ∫ φ'(x)² f(x) dx = E[ φ'(X)² ].

Question:

Is it possible for Ι to be finite while H is infinite?

Answer:

Yes.

Consider a density f(x) which is proportional (for large positive x) to 1/(x (log x)²). Consequently φ(x) = constant - (log x + 2 log log x) for large x.

1: Using a change of variable (e^u = x) it can be shown that H = - ∫ log(f(x)) f(x) dx is infinite. The contribution to the entropy H from large x is controlled by
∫ (log x + 2 log log x) dx / (x (log x)²) = ∫ (u + 2 log u) e^u du / (u² e^u) ≥ ∫ du / u = ∞.

2: On the other hand elementary bounds show that Ι = ∫ φ'(x)² f(x) dx can be finite. The contribution to the "Fisher information" Ι from large x is given by ∫ φ'(x)² f(x) dx restricted to large x, related to
∫ (1/x + 2/(x log x))² dx / (x (log x)²) = ∫ (1 + 2/log x)² dx / (x³ (log x)²) < constant × ∫ dx / x³ < ∞.
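
A quick numerical check of these two tail estimates, using the same substitution e^u = x, may be reassuring. This is only a sketch: the lower limit u = 1 (that is, x = e) and the omitted normalising constant are arbitrary choices.

import numpy as np
from scipy.integrate import quad

def entropy_integrand(u):
    # - log(f(x)) f(x) dx becomes (u + 2 log u) / u^2 du (normalising constant omitted)
    return (u + 2 * np.log(u)) / u**2

def fisher_integrand(u):
    # phi'(x)^2 f(x) dx becomes e^(-2u) (1 + 2/u)^2 / u^2 du (normalising constant omitted)
    return np.exp(-2 * u) * (1 + 2 / u)**2 / u**2

for U in [10, 100, 1000, 10000]:                    # upper limits in u, i.e. x up to e^U
    H_U, _ = quad(entropy_integrand, 1, U, limit=200)
    I_U, _ = quad(fisher_integrand, 1, U, limit=200)
    print(f"u up to {U:5d}: entropy piece ≈ {H_U:7.3f}, Fisher piece ≈ {I_U:.6f}")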

An example of such a density (behaving in the required manner at ±∞) is

f(x) = log(2) |x| / ((2 + x²) (log(2 + x²))²).

Reference

Zanella, G., Bédard, M., & Kendall, W. S. (2017). A Dirichlet Form approach to MCMC Optimal Scaling. Stochastic Processes and Their Applications, to appear, 22pp. http://doi.org/10.1016/j.spa.2017.03.021


A puzzle about inference

Here is a puzzle which I often use as a starter for a course of lectures on probabilistic coupling. The puzzle arose during some research -- I devised it as a simple example to show myself why a particular idea would not work -- and I developed the solution using calculations. In a lunch queue, I asked James Norris of Cambridge whether he could think of a calculation-free answer, and he found one almost immediately.

  • You go to Starbucks with a friend.
  • You are the quiet type, and stay in the front room near the window, sipping a coffee slowly to make it last.
  • She is extrovert, and has gone through to a further room where the louder people gather.
    There she plays a game: toss n coins, win if r heads or more.
  • After the game should have finished, a third friend comes through and says,
    either: "She won the first toss'';
    or: "She won at least one toss''.
    (Why don't people speak more clearly?!)
  • Which is better news for your friend? Prove your answer without calculation!
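
(For anyone who wants to check their guess numerically before hunting for the calculation-free argument, here is a minimal Monte Carlo sketch; the choices n = 10, r = 6 and the number of trials are arbitrary, and of course a simulation is not the proof being asked for.)

import numpy as np

rng = np.random.default_rng(2)
n, r, trials = 10, 6, 500_000

tosses = rng.integers(0, 2, size=(trials, n))     # 1 = head, 0 = tail, fair coins
won_game = tosses.sum(axis=1) >= r                # "win if r heads or more"
first_head = tosses[:, 0] == 1                    # the news "she won the first toss"
any_head = tosses.sum(axis=1) >= 1                # the news "she won at least one toss"

print("P(win | first toss a head) ≈", round(won_game[first_head].mean(), 4))
print("P(win | at least one head) ≈", round(won_game[any_head].mean(), 4))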
