<p><em>Secret-sharing and independence</em>, by Wilfrid Kendall (Warwick Blogs, University of Warwick; posted 2017-09-12, last updated 2017-10-16). Tagged <em>Multiplication Laws</em>.</p>
<p>The following remarkable procedure is entirely feasible: if I have a class of <em>n</em> students then I can distribute <em>n</em> different binary images, one to each student, such that each student's image looks like white noise,</p>
<p><img src="http://blogs.warwick.ac.uk/images/randomcuriosities/2017/09/12/x1.png?maxWidth=500" alt="typical image distributed to a student: just white noise!" border="0" /></p>
<p>and yet if <em><strong>all</strong></em> images are combined together in the right way then a meaningful picture emerges.</p>
<p><img src="http://blogs.warwick.ac.uk/images/randomcuriosities/2017/09/12/x4321.png?maxWidth=500" alt="result of combining all images using multiplication (if black pixels are coded as -1 and white pixels are coded as +1)" border="0" /></p>
<p>What's more, I can arrange matters such that if any strict subset of the <em>n</em> students tried to collaborate, then all they would get would be more white noise, no matter how they manipulated their (at most <em>n</em>-1) images!</p>
<p>So any strict subset of the students would possess no information at all about the butterfly picture, but combined all together they would be in a position to produce a perfect reproduction of the image.<br />
</p>
<p>How can this be done?</p>
<ol>
<li>Code each black pixel of each image as -1 and each white pixel as +1, and view each distributed image as a long vector of +/-1 values.</li>
<li>Let <em>X</em><sub>0</sub> be the vector encoding the target image (the butterfly above). Generate independent uniformly random +/-1 vectors <em>X</em><sub>1</sub>, <em>X</em><sub>2</sub>, ..., <em>X</em><sub><em>n</em>-1</sub> and distribute the corresponding white-noise images to the first <em>n</em>-1 students.</li>
<li>Student <em>n</em> is given the image corresponding to the vector <em>X</em><sub><em>n</em></sub> obtained by multiplying together (coordinate-wise) all the other vectors:<br />
<em>X</em><sub><em>n</em></sub> = <em>X</em><sub>0</sub> * <em>X</em><sub>1</sub> * <em>X</em><sub>2</sub> * ... * <em>X</em><sub><em>n</em>-1</sub>,<br />
where "*" denotes coordinate-wise multiplication.</li>
<li>It is simple arithmetic that <em>X</em><sub>0</sub> = <em>X</em><sub><em>n</em></sub> * <em>X</em><sub>1</sub> * <em>X</em><sub>2</sub> * ... * <em>X</em><sub><em>n</em>-1</sub>, since each coordinate of each vector squares to 1. So all students working together possess the information needed to recover the butterfly image.</li>
<li>On the other hand, one can use elementary probability to show that any subset of size <em>n</em>-1 of the vectors <em>X</em><sub>1</sub>, <em>X</em><sub>2</sub>, ..., <em>X</em><sub><em>n</em></sub> behaves as a statistically independent collection of vectors corresponding to white-noise images. (It suffices to consider just one pixel at a time, and to show that the corresponding <em>n</em>-1 random +/-1 values are uniform and obey all possible multiplication laws.) So no strict subset of the students has any information at all about the butterfly image.</li>
</ol>
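<p>The steps above can be sketched in a few lines of code. This is a minimal illustration, not the author's implementation: the number of students and the vector length are arbitrary choices, and the "image" is just a random +/-1 vector standing in for the butterfly.</p>

```python
import numpy as np

rng = np.random.default_rng(42)

n = 5          # number of students (arbitrary, for illustration)
m = 10_000     # number of pixels, with each image flattened to a +/-1 vector

# X0 encodes the target image: -1 for black pixels, +1 for white.
x0 = rng.choice([-1, 1], size=m)

# Shares X1, ..., X_{n-1} are independent uniform +/-1 noise vectors.
shares = [rng.choice([-1, 1], size=m) for _ in range(n - 1)]

# Share X_n is the coordinate-wise product of X0 with all the noise shares.
xn = x0 * np.prod(shares, axis=0)
shares.append(xn)

# Each share alone is uniform noise, but the coordinate-wise product of
# all n shares recovers X0 exactly, since each +/-1 entry squares to 1.
recovered = np.prod(shares, axis=0)
assert np.array_equal(recovered, x0)
```

<p>To turn this into the classroom demonstration, one would reshape each share back into a rectangle and map -1/+1 to black/white before printing.</p>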
<p>There are many other ways to implement secret-sharing (Google/Bing/DuckDuckGo the phrase "secret sharing"). But this one is nice for probabilists, because it provides a graphic example of why <em>pairwise independence</em> (independence of any two events taken from a larger collection of events) need not imply complete independence.</p>
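<p>The smallest instance of this (<em>n</em> = 3, a single pixel) can be checked by direct enumeration. The following sketch verifies that <em>X</em><sub>1</sub>, <em>X</em><sub>2</sub> and <em>X</em><sub>3</sub> = <em>X</em><sub>1</sub> * <em>X</em><sub>2</sub> are pairwise independent but not jointly independent:</p>

```python
from itertools import product

# Joint law of (X1, X2, X3), where X1, X2 are fair +/-1 coins and
# X3 = X1 * X2: four equally likely outcomes.
outcomes = [(a, b, a * b) for a, b in product([-1, 1], repeat=2)]

def prob(event):
    """Probability of an event under the uniform law on the four outcomes."""
    return sum(1 for o in outcomes if event(o)) / len(outcomes)

# Pairwise independence: P(Xi = 1, Xj = 1) = 1/4 = P(Xi = 1) * P(Xj = 1).
for i, j in [(0, 1), (0, 2), (1, 2)]:
    assert prob(lambda o: o[i] == 1 and o[j] == 1) == 0.25

# But no joint independence: P(X1 = X2 = X3 = 1) = 1/4, not (1/2)**3 = 1/8.
assert prob(lambda o: o == (1, 1, 1)) == 0.25
```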