All 14 entries tagged <em>Maths</em>Christopher MidgleyIn which I post about whatever catches my fancy, and then give up and do nothing for several years, my last post a statement on how I have much to write about and will inevitably find the time...later.https://blogs.warwick.ac.uk/midgleyc/tag/maths/?atom=atomWarwick Blogs, University of Warwick(C) 2021 Christopher Midgley2021-02-25T02:43:32ZDeterminant of the Wronskian by Christopher MidgleyChristopher Midgleyhttps://blogs.warwick.ac.uk/midgleyc/entry/determinant_of_the/2012-07-22T11:14:12Z2012-07-20T10:39:05Z<p>The Wronskian of <img class="latex" src="http://blogs.warwick.ac.uk/cgi-bin/mimetex.cgi?n" alt="n" border="0" /> functions <img class="latex" src="http://blogs.warwick.ac.uk/cgi-bin/mimetex.cgi?f_1%2C%7E%5Cdots%2C%7Ef_n" alt="f_1, \dots, f_n" border="0" /> is the matrix determinant <img class="latex" src="http://blogs.warwick.ac.uk/cgi-bin/mimetex.cgi?%5Cbegin%7Bvmatrix%7Df_1%26%5Cdots%26f_n%5C%5C%5Cdots%26%5Cdots%26%5Cdots%5C%5Cf_1%5E%7B%28n-1%29%7D%26%5Cdots%26f_n%5E%7B%28n-1%29%7D%5Cend%7Bvmatrix%7D" alt="\begin{vmatrix}f_1&\dots&f_n\\\dots&\dots&\dots\\f_1^{(n-1)}&\dots&f_n^{(n-1)}\end{vmatrix}" border="0" />. Its derivative is the matrix determinant <img class="latex" src="http://blogs.warwick.ac.uk/cgi-bin/mimetex.cgi?%5Cbegin%7Bvmatrix%7Df_1%26%5Cdots%26f_n%5C%5C%5Cdots%26%5Cdots%26%5Cdots%5C%5Cf_1%5E%7B%28n%29%7D%26%5Cdots%26f_n%5E%7B%28n%29%7D%5Cend%7Bvmatrix%7D" alt="\begin{vmatrix}f_1&\dots&f_n\\\dots&\dots&\dots\\f_1^{(n)}&\dots&f_n^{(n)}\end{vmatrix}" border="0" /> (that is, the previous matrix with a different bottom row). It’s an interesting exercise to prove this, so let’s do that.</p>
We proceed by our old friend, induction. For <img class="latex" src="http://blogs.warwick.ac.uk/cgi-bin/mimetex.cgi?n%3D1" alt="n=1" border="0" /> (or 0), the case is obvious. Let it be true through <img class="latex" src="http://blogs.warwick.ac.uk/cgi-bin/mimetex.cgi?n-1" alt="n-1" border="0" />. Expand along the bottom row (for readability we suppress the alternating cofactor signs, which carry through the argument unchanged):<br />
<img class="latex" src="http://blogs.warwick.ac.uk/cgi-bin/mimetex.cgi?%5Cbegin%7Bvmatrix%7Df_1%26%5Cdots%26f_n%5C%5C%5Cdots%26%5Cdots%26%5Cdots%5C%5Cf_1%5E%7B%28n-1%29%7D%26%5Cdots%26f_n%5E%7B%28n-1%29%7D%5Cend%7Bvmatrix%7D%7E%3D%7Ef_1%5E%7B%28n-1%29%7D%5Cbegin%7Bvmatrix%7Df_2%26%5Cdots%26f_n%5C%5C%5Cdots%26%5Cdots%26%5Cdots%5C%5Cf_2%5E%7B%28n-2%29%7D%26%5Cdots%26f_n%5E%7B%28n-2%29%7D%5Cend%7Bvmatrix%7D%7E%2B%7E%5Cdots%7E%2B%7Ef_n%5E%7B%28n-1%29%7D%5Cbegin%7Bvmatrix%7Df_1%26%5Cdots%26f_%7Bn-1%7D%5C%5C%5Cdots%26%5Cdots%26%5Cdots%5C%5Cf_1%5E%7B%28n-2%29%7D%26%5Cdots%26f_%7Bn-1%7D%5E%7B%28n-2%29%7D%5Cend%7Bvmatrix%7D" alt="\begin{vmatrix}f_1&\dots&f_n\\\dots&\dots&\dots\\f_1^{(n-1)}&\dots&f_n^{(n-1)}\end{vmatrix} = f_1^{(n-1)}\begin{vmatrix}f_2&\dots&f_n\\\dots&\dots&\dots\\f_2^{(n-2)}&\dots&f_n^{(n-2)}\end{vmatrix} + \dots + f_n^{(n-1)}\begin{vmatrix}f_1&\dots&f_{n-1}\\\dots&\dots&\dots\\f_1^{(n-2)}&\dots&f_{n-1}^{(n-2)}\end{vmatrix}" border="0" /><br />
We take the derivative, applying our induction assumption, obtaining <img class="latex" src="http://blogs.warwick.ac.uk/cgi-bin/mimetex.cgi?%5Cbegin%7Bvmatrix%7Df_1%26%5Cdots%26f_n%5C%5C%5Cdots%26%5Cdots%26%5Cdots%5C%5Cf_1%5E%7B%28n-2%29%7D%26%5Cdots%26f_n%5E%7B%28n-2%29%7D%5C%5Cf_1%5E%7B%28n%29%7D%26%5Cdots%26f_n%5E%7B%28n%29%7D%5Cend%7Bvmatrix%7D%2B%5Cleft%28%7Ef_1%5E%7B%28n-1%29%7D%5Cbegin%7Bvmatrix%7Df_2%26%5Cdots%26f_n%5C%5C%5Cdots%26%5Cdots%26%5Cdots%5C%5Cf_2%5E%7B%28n-1%29%7D%26%5Cdots%26f_n%5E%7B%28n-1%29%7D%5Cend%7Bvmatrix%7D%7E%2B%7E%5Cdots%7E%2B%7Ef_n%5E%7B%28n-1%29%7D%5Cbegin%7Bvmatrix%7Df_1%26%5Cdots%26f_%7Bn-1%7D%5C%5C%5Cdots%26%5Cdots%26%5Cdots%5C%5Cf_1%5E%7B%28n-1%29%7D%26%5Cdots%26f_%7Bn-1%7D%5E%7B%28n-1%29%7D%5Cend%7Bvmatrix%7D%5Cright%29" alt="\begin{vmatrix}f_1&\dots&f_n\\\dots&\dots&\dots\\f_1^{(n-2)}&\dots&f_n^{(n-2)}\\f_1^{(n)}&\dots&f_n^{(n)}\end{vmatrix}+\left( f_1^{(n-1)}\begin{vmatrix}f_2&\dots&f_n\\\dots&\dots&\dots\\f_2^{(n-1)}&\dots&f_n^{(n-1)}\end{vmatrix} + \dots + f_n^{(n-1)}\begin{vmatrix}f_1&\dots&f_{n-1}\\\dots&\dots&\dots\\f_1^{(n-1)}&\dots&f_{n-1}^{(n-1)}\end{vmatrix}\right)" border="0" /><br />
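<p>(As a quick sanity check of the identity we are proving, here it is verified numerically in plain Python: an illustrative sketch in which the f_i are polynomials with integer coefficients, and all helper names are ad hoc.)</p>

```python
from itertools import permutations

# Polynomials are coefficient lists: p[k] is the coefficient of x^k.
def deriv(p):
    return [k * p[k] for k in range(1, len(p))] or [0]

def pmul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def padd(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
            for i in range(n)]

def perm_sign(perm):
    sign = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                sign = -sign
    return sign

def det(M):
    # Leibniz formula, with polynomial entries
    total = [0]
    for perm in permutations(range(len(M))):
        term = [perm_sign(perm)]
        for i, row in enumerate(M):
            term = pmul(term, row[perm[i]])
        total = padd(total, term)
    return total

def trim(p):
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

# Three test functions: x^2, x^3 + 1, x^5.
fs = [[0, 0, 1], [1, 0, 0, 1], [0, 0, 0, 0, 0, 1]]
n = len(fs)
rows = [fs]
for _ in range(n):
    rows.append([deriv(p) for p in rows[-1]])

W = det(rows[:n])                          # the Wronskian: rows are orders 0 .. n-1
lhs = trim(deriv(W))                       # d/dx of the Wronskian
rhs = trim(det(rows[:n - 1] + [rows[n]]))  # bottom row replaced by nth derivatives
assert lhs == rhs                          # the identity holds for this example
```

<p>For these three functions both sides come out as 42x^6 - 120x^3.</p>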
But the bracketed part is just <img class="latex" src="http://blogs.warwick.ac.uk/cgi-bin/mimetex.cgi?%5Cbegin%7Bvmatrix%7Df_1%26%5Cdots%26f_n%5C%5C%5Cdots%26%5Cdots%26%5Cdots%5C%5Cf_1%5E%7B%28n-1%29%7D%26%5Cdots%26f_%7Bn%7D%5E%7B%28n-1%29%7D%5C%5Cf_1%5E%7B%28n-1%29%7D%26%5Cdots%26f_n%5E%7B%28n-1%29%7D%5Cend%7Bvmatrix%7D" alt="\begin{vmatrix}f_1&\dots&f_n\\\dots&\dots&\dots\\f_1^{(n-1)}&\dots&f_{n}^{(n-1)}\\f_1^{(n-1)}&\dots&f_n^{(n-1)}\end{vmatrix}" border="0" />, which is zero, as a matrix with repeated rows is singular. We are done.Experiments: Game Theory by Christopher MidgleyChristopher Midgleyhttps://blogs.warwick.ac.uk/midgleyc/entry/experiments_game_theory/2012-06-16T12:18:05Z2012-06-15T08:49:20Z<p class="answer">Follow-up to <a href="https://blogs.warwick.ac.uk/midgleyc/entry/to_ensure_that/" title="Related blog entry">To ensure that you are honest…</a> from <a href="https://blogs.warwick.ac.uk/midgleyc">Midgley, Christopher – Pointless twaddle and meaningless diatribes</a></p>
<p>Realised I could model the previous two cases of product purchase as games. So let’s do that.</p>
Game 1: <ol>
<li>You make a bid.</li>
<li>A price is randomly generated.</li>
<li>If the price is below your bid, you pay the price and get the item; otherwise you pay nothing (and don’t get the item).</li>
</ol>
Model as a game: <ul>
<li>Price to play the game is X</li>
<li>You consider the product worth Y</li>
<li>We assume the <span class="caps">RNG</span> creates numbers between 1 and 100 inclusive.</li>
<li>X% of the time, you make <img class="latex" src="http://blogs.warwick.ac.uk/cgi-bin/mimetex.cgi?Y%2Bi" alt="Y+i" border="0" />, where i is the number generated minus 1.</li>
<li>(100-X)% of the time, you make X.</li>
<li>Your expected reward is one hundredth of <img class="latex" src="http://blogs.warwick.ac.uk/cgi-bin/mimetex.cgi?%28100-X%29X%2B%5Csum_%7Bi%3D0%7D%5E%7BX-1%7D%7BY%2Bi%7D%3D-%5Cfrac%7BX%5E2%7D%7B2%7D%2BXY%2B%5Cfrac%7B199X%7D%7B2%7D" alt="(100-X)X+\sum_{i=0}^{X-1}{Y+i}=-\frac{X^2}{2}+XY+\frac{199X}{2}" border="0" />.</li>
<li>For payoff we subtract the bid.</li>
</ul>
<p>At X=0, we make nothing. At X=100, we make 49.5+Y-X. The unconstrained maximum occurs at X=Y+1/2; as we’re limited to integer bids, X=Y and X=Y+1 both attain it.</p>
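<p>Game 1 can also be brute-forced straight from the rules (a sketch: the price is uniform on 1..100, “below” is read strictly, and Y = 40 is an illustrative valuation):</p>

```python
def game1_payoff(bid, value):
    """Expected profit: you pay the price (and gain the item, worth `value`)
    exactly when the generated price is strictly below your bid."""
    return sum(value - price for price in range(1, 101) if price < bid) / 100

Y = 40  # illustrative valuation
payoffs = [game1_payoff(b, Y) for b in range(102)]
best = max(payoffs)
best_bids = [b for b, p in enumerate(payoffs) if p == best]
print(best_bids)  # -> [40, 41]: the optimum straddles Y + 1/2
```

<p>The two optimal integer bids bracket Y+1/2, as claimed above.</p>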
Game 2: <ol>
<li>You make a bid.</li>
<li>A price is randomly generated.</li>
<li>If the price is below your bid, you pay your bid and get the item; otherwise you pay nothing (and don’t get the item).</li>
</ol>
Model as a game: <ul>
<li>Price to play the game is X</li>
<li>You consider the product worth Y</li>
<li>We assume the <span class="caps">RNG</span> creates numbers between 1 and 100 inclusive.</li>
<li>X% of the time, you make <img class="latex" src="http://blogs.warwick.ac.uk/cgi-bin/mimetex.cgi?Y" alt="Y" border="0" />.</li>
<li>(100-X)% of the time, you make X.</li>
<li>Your expected reward is one hundredth of <img class="latex" src="http://blogs.warwick.ac.uk/cgi-bin/mimetex.cgi?XY%2B%28100-X%29X%3D100X%2BXY-X%5E2" alt="XY+(100-X)X=100X+XY-X^2" border="0" />.</li>
<li>For payoff we subtract the bid.</li>
</ul>
<p>At X=0, we make nothing. At X=100, we make Y-X. The payoff works out to X(Y-X)/100, which is maximised at X=Y/2, where we make Y^2/400.</p>
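<p>Game 2 can be brute-forced the same way, following the model in the bullets above (a sketch in exact integer arithmetic; the helper name is ad hoc):</p>

```python
def game2_payoff_x100(bid, value):
    """100x the expected payoff under the model above: you win bid% of the
    time, gaining the item (worth `value`) but paying your bid; the rest of
    the time you keep your money.  Subtracting the bid gives bid*value - bid^2."""
    return bid * value + (100 - bid) * bid - 100 * bid

Y = 40  # illustrative valuation
best_bid = max(range(101), key=lambda b: game2_payoff_x100(b, Y))
print(best_bid)  # -> 20, i.e. Y/2
```

<p>Under this model the best bid comes out at half the valuation, the usual shading when you pay what you bid.</p>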
<p>So in conclusion we have that the former has the higher payoff, which I suppose you’d expect.</p>Cauchy Condensation Test by Christopher MidgleyChristopher Midgleyhttps://blogs.warwick.ac.uk/midgleyc/entry/cauchy_condensation_test/2012-05-06T23:48:33Z2012-05-04T21:19:01Z<p>The Cauchy condensation test is a convergence test. It’s also, from what I can see, one of the few we <em>didn’t</em> cover in Analysis. Which is a shame, because it’s rather nice.</p>
<p>For a positive non-increasing sequence <img class="latex" src="http://blogs.warwick.ac.uk/cgi-bin/mimetex.cgi?f%28n%29" alt="f(n)" border="0" />, the sum <img class="latex" src="http://blogs.warwick.ac.uk/cgi-bin/mimetex.cgi?%5Csum_%7Bn%3D1%7D%5E%7B%5Cinfty%7Df%28n%29" alt="\sum_{n=1}^{\infty}f(n)" border="0" /> converges if and only if the sum <img class="latex" src="http://blogs.warwick.ac.uk/cgi-bin/mimetex.cgi?%5Csum_%7Bn%3D1%7D%5E%7B%5Cinfty%7D2%5Enf%282%5En%29" alt="\sum_{n=1}^{\infty}2^nf(2^n)" border="0" /> converges.</p>
<p>A sketch proof in one direction should be rather evident: as the sequence is non-increasing, we can bound every group of length <img class="latex" src="http://blogs.warwick.ac.uk/cgi-bin/mimetex.cgi?2%5En" alt="2^n" border="0" /> above by its initial value. For the reverse, bound each group below by its final value, which costs only a factor of two.</p>
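<p>The dyadic-block sandwich behind this sketch can be checked numerically (an illustrative sketch with f(n) = 1/n^2, a convergent p-series):</p>

```python
def f(n):
    return 1.0 / (n * n)  # f(n) = 1/n^2, positive and non-increasing

K = 10
direct = sum(f(n) for n in range(1, 2 ** K))           # n = 1 .. 2^K - 1
condensed = sum(2 ** j * f(2 ** j) for j in range(K))  # j = 0 .. K - 1
# Grouping n into dyadic blocks [2^j, 2^{j+1}) gives the sandwich:
assert direct <= condensed <= 2 * direct
```

<p>Here the condensed sum is the geometric series 1 + 1/2 + 1/4 + ..., and the two sums converge or diverge together.</p>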
<p>Let us consider our old friend <img class="latex" src="http://blogs.warwick.ac.uk/cgi-bin/mimetex.cgi?%5Csum_%7Bn%3D1%7D%5E%7B%5Cinfty%7D%5Cfrac%7B1%7D%7Bn%5Ep%7D" alt="\sum_{n=1}^{\infty}\frac{1}{n^p}" border="0" />. Consider <img class="latex" src="http://blogs.warwick.ac.uk/cgi-bin/mimetex.cgi?%5Csum_%7Bn%3D1%7D%5E%7B%5Cinfty%7D2%5En%28%5Cfrac%7B1%7D%7B2%5En%7D%29%5Ep%3D%5Csum_%7Bn%3D1%7D%5E%7B%5Cinfty%7D2%5E%7Bn-np%7D%3D%5Csum_%7Bn%3D1%7D%5E%7B%5Cinfty%7D2%5E%7Bn%281-p%29%7D" alt="\sum_{n=1}^{\infty}2^n(\frac{1}{2^n})^p=\sum_{n=1}^{\infty}2^{n-np}=\sum_{n=1}^{\infty}2^{n(1-p)}" border="0" /> which is a geometric series, convergent if and only if <img class="latex" src="http://blogs.warwick.ac.uk/cgi-bin/mimetex.cgi?p%3E1" alt="p>1" border="0" />.</p>Obviousness - norm implies metric by Christopher MidgleyChristopher Midgleyhttps://blogs.warwick.ac.uk/midgleyc/entry/obviousness_norm_implies/2012-01-15T11:04:41Z2012-01-12T10:39:02Z<p>We have a norm, which satisfies positivity, linearity and the triangle inequality. We have a metric, which satisfies positivity, symmetry and the triangle inequality. We wish to prove that every norm induces a metric (via d(x,y) = ||x-y||). "Obviously", the only property we need to prove is... the triangle inequality.<br />
<br />
At the time, I couldn't see why it was obvious that linearity implied symmetry. Now, I think it's about as obvious as the triangle inequality was - there's a step to take, even though it's simple.<br />
<br />
Linearity requires that <img class="latex" src="http://blogs.warwick.ac.uk/cgi-bin/mimetex.cgi?%7C%7Ccx%7C%7C%3D%7Cc%7C%7C%7Cx%7C%7C%5Ctext%7B%7Efor%7E%7D%7Ec%7E%5Cin%7E%5Cmathbb%7BC%7D" alt="||cx||=|c|||x||\text{ for } c \in \mathbb{C}" border="0" />. Symmetry requires that d(x,y)=d(y,x).<br />
<br />
To prove it, we write d(x,y) = ||x-y|| = |-1| ||x-y|| = ||-(x-y)|| = ||y-x|| = d(y,x).</p>[citation needed]; the difficulty of finding things by Christopher MidgleyChristopher Midgleyhttps://blogs.warwick.ac.uk/midgleyc/entry/citation_needed_the/2012-01-12T10:40:47Z2012-01-05T16:34:48Z<p>Typing in “tower property” in Google, I find that the first result is the ever ubiquitous Wikipedia (whose mastery of <span class="caps">SEO</span> means it turns up, with an occasional irrelevant article, on whatever subject you could care to name) article on Expected Value, in this case. Actually typing in “tower property” returns the article on “law of total expectation” which is apparently the one of its myriad names that Wikipedia has decided is most common. Looking at the other results on Google, even adding a helpful “statistics”, I find that “tower property” doesn’t appear to return anything else relevant. In fact, the only other place I can find it called “tower property” is in my notes :)</p>
<p>For nameless results, I find my best bet is simply to type in the result itself. For example, that E[XY]=E[YE[X|Y]] is proven at the end of <a href="http://www.maths.qmul.ac.uk/~ig/MTH5118/Notes5-09.pdf">this pdf document, which is likely lecture notes.</a> If something has a lot of roots or powers, this is somewhat less applicable.</p>
<p>As yet, I’ve not been able to find anything on what my notes refer to as “Fisher’s theorem”. It’s a theorem named after a famous mathematician who had many theorems named after him (some with others), so we’re already off to a bad start trying to find it. The theorem reads:</p>
<p>Let <img class="latex" src="http://blogs.warwick.ac.uk/cgi-bin/mimetex.cgi?X_i%7E%5Csim%7EN%28%5Cmu%2C%7E%5Csigma%5E2%29" alt="X_i \sim N(\mu, \sigma^2)" border="0" /> be independent random variables. Define <img class="latex" src="http://blogs.warwick.ac.uk/cgi-bin/mimetex.cgi?%5Coverline%7BX%7D%3D%5Cfrac%7B1%7D%7Bn%7D%5Csum_%7Bi%3D1%7D%5En%7EX_i" alt="\overline{X}=\frac{1}{n}\sum_{i=1}^n X_i" border="0" /> and <img class="latex" src="http://blogs.warwick.ac.uk/cgi-bin/mimetex.cgi?s%5E2%3D%5Cfrac%7B1%7D%7Bn-1%7D%5Csum_%7Bi%3D1%7D%5En%7E%28%5Coverline%7BX%7D-X_i%29%5E2" alt="s^2=\frac{1}{n-1}\sum_{i=1}^n (\overline{X}-X_i)^2" border="0" />. Then:<br />
*<img class="latex" src="http://blogs.warwick.ac.uk/cgi-bin/mimetex.cgi?%5Coverline%7BX%7D%7E%5Csim%7EN%28%5Cmu%2C%5Cfrac%7B%5Csigma%5E2%7D%7Bn%7D%29" alt="\overline{X} \sim N(\mu,\frac{\sigma^2}{n})" border="0" /><br />
*<img class="latex" src="http://blogs.warwick.ac.uk/cgi-bin/mimetex.cgi?%5Coverline%7BX%7D" alt="\overline{X}" border="0" /> and <img class="latex" src="http://blogs.warwick.ac.uk/cgi-bin/mimetex.cgi?s%5E2" alt="s^2" border="0" /> are independent.<br />
*<img class="latex" src="http://blogs.warwick.ac.uk/cgi-bin/mimetex.cgi?%5Cfrac%7B%28n-1%29s%5E2%7D%7B%5Csigma%5E2%7D%7E%5Csim%7E%5Cchi%5E2_%7Bn-1%7D" alt="\frac{(n-1)s^2}{\sigma^2} \sim \chi^2_{n-1}" border="0" /><br />
*<img class="latex" src="http://blogs.warwick.ac.uk/cgi-bin/mimetex.cgi?%5Cfrac%7B%5Coverline%7BX%7D-%5Cmu%7D%7B%5Csqrt%7Bs%5E2%2Fn%7D%7D%7E%5Csim%7Et_%7Bn-1%7D" alt="\frac{\overline{X}-\mu}{\sqrt{s^2/n}} \sim t_{n-1}" border="0" /></p>
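<p>The first statement, and the unbiasedness of s^2 that sits behind the third, can at least be checked empirically (a Monte Carlo sketch using only the standard library; the parameters and tolerances here are arbitrary):</p>

```python
import random
import statistics

random.seed(0)
mu, sigma, n, trials = 2.0, 3.0, 5, 20000
means, variances = [], []
for _ in range(trials):
    xs = [random.gauss(mu, sigma) for _ in range(n)]
    means.append(statistics.mean(xs))
    variances.append(statistics.variance(xs))  # the unbiased s^2 (divides by n-1)

# Xbar ~ N(mu, sigma^2/n): check the mean and variance of the sample means
assert abs(statistics.mean(means) - mu) < 0.1
assert abs(statistics.variance(means) - sigma ** 2 / n) < 0.2
# E[s^2] = sigma^2, consistent with (n-1)s^2/sigma^2 ~ chi^2_{n-1}
assert abs(statistics.mean(variances) - sigma ** 2) < 0.4
```

<p>Checking independence of the mean and s^2, or the t-distribution of the studentised mean, would take a little more machinery.</p>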
<p>It looks like it has something to do with sample mean and variance, but I’m only taking the first module on this topic, so what its use is I can’t say.</p>Combinatorics - a misread question (graph theory) by Christopher MidgleyChristopher Midgleyhttps://blogs.warwick.ac.uk/midgleyc/entry/combinatorics_a_misread/2012-05-04T21:00:39Z2011-11-16T20:34:55Z<p>While doing combinatorics questions, we come across interesting results by doing the wrong question, whether as a stepping stone (we hope!) to the correct answer, or merely completely misreading the question. Below is one that came up last week:</p>
<p><strong>Find the number of <em>labelled</em> (n-2)-regular graphs with n vertices.</strong><br />
<em>(a k-regular graph is one in which all vertices have degree k)</em></p>
<p>Clearly for n odd or less than 2 there are no such graphs. The case for even n greater than or equal to 2 runs as follows:</p>
<p>Note that if a vertex has degree (n-2), it is connected to every vertex apart from one other, which is necessarily connected to every vertex bar the first. Thus, we can ‘pair off’ vertices in this manner, obtaining <img class="latex" src="http://blogs.warwick.ac.uk/cgi-bin/mimetex.cgi?%7Bn%7E%5Cchoose%7E2%7D%7E%7Bn-2%7E%5Cchoose%7E2%7D%7E%7Bn-4%7E%5Cchoose%7E2%7D%7E%7E%5Ccdots%7E%3D%7E%5Cfrac%7Bn%21%7D%7B%28n-2%29%212%21%7D%7E%5Cfrac%7B%28n-2%29%21%7D%7B%28n-4%29%212%21%7D%7E%5Cfrac%7B%28n-4%29%21%7D%7B%28n-6%29%212%21%7D%7E%5Ccdots%7E%3D%7E%5Cfrac%7Bn%21%7D%7B2%5E%7B%5Cfrac%7Bn%7D%7B2%7D%7D%7D" alt="{n \choose 2} {n-2 \choose 2} {n-4 \choose 2} \cdots = \frac{n!}{(n-2)!2!} \frac{(n-2)!}{(n-4)!2!} \frac{(n-4)!}{(n-6)!2!} \cdots = \frac{n!}{2^{\frac{n}{2}}}" border="0" />.</p>
<p><strong>Edit</strong><br />
We then (thanks Nick!) note that the order of the pairs doesn’t matter, so we need to divide by the number of ways of ordering the <img class="latex" src="http://blogs.warwick.ac.uk/cgi-bin/mimetex.cgi?%5Cfrac%7Bn%7D%7B2%7D" alt="\frac{n}{2}" border="0" /> pairs, namely <img class="latex" src="http://blogs.warwick.ac.uk/cgi-bin/mimetex.cgi?%5Cfrac%7Bn%7D%7B2%7D%21" alt="\frac{n}{2}!" border="0" />, giving us <img class="latex" src="http://blogs.warwick.ac.uk/cgi-bin/mimetex.cgi?%5Cfrac%7Bn%21%7D%7B2%5E%7B%5Cfrac%7Bn%7D%7B2%7D%7D%5Cfrac%7Bn%7D%7B2%7D%21%7D" alt="\frac{n!}{2^{\frac{n}{2}}\frac{n}{2}!}" border="0" /></p>
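<p>The corrected count can be checked against brute-force enumeration for small n (a sketch; <code>count_regular</code> is an ad hoc helper that simply tries every subset of edges):</p>

```python
from itertools import combinations
import math

def count_regular(n, k):
    """Count labelled k-regular graphs on n vertices by brute force."""
    edges = list(combinations(range(n), 2))
    count = 0
    for mask in range(1 << len(edges)):
        deg = [0] * n
        for i, (u, v) in enumerate(edges):
            if mask >> i & 1:
                deg[u] += 1
                deg[v] += 1
        if all(d == k for d in deg):
            count += 1
    return count

def formula(n):
    # n even: n! / (2^(n/2) * (n/2)!)
    return math.factorial(n) // (2 ** (n // 2) * math.factorial(n // 2))

assert count_regular(4, 2) == formula(4) == 3
assert count_regular(6, 4) == formula(6) == 15
```

<p>For n=4 the three graphs are the labelled 4-cycles; for n=6 they are the complements of the 15 perfect matchings.</p>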
<p>This gives the same result as a different way of thinking about it—<img class="latex" src="http://blogs.warwick.ac.uk/cgi-bin/mimetex.cgi?%28n-1%29%28n-3%29%28n-5%29%7E%5Ccdots%7E%3D%7E%5Cprod_%7Bk%3D1%7D%5E%7B%5Cfrac%7Bn%7D%7B2%7D%7D%7E%28n-2k%2B1%29" alt="(n-1)(n-3)(n-5) \cdots = \prod_{k=1}^{\frac{n}{2}} (n-2k+1)" border="0" /></p>Emptiness by Christopher MidgleyChristopher Midgleyhttps://blogs.warwick.ac.uk/midgleyc/entry/emptiness/2012-01-12T10:40:15Z2011-11-09T22:42:17Z<p>Consider the definition of a graph – does this preclude the idea of a graph with 0 nodes? No, as both the set of vertices and the set of edges may be empty. This graph exists mostly to make you consider an extra case for every problem you do :)</p>
<p>How many automorphisms are there on the empty graph (how many isomorphisms from the empty graph to the empty graph)? This is equivalent to asking how many bijections there are from the empty set to itself. For a function to be well-defined, we only require that every element in the domain is mapped somewhere in the codomain – if the domain is the empty set, this condition is trivially true (if vacuous). It turns out the answer is one – the <a href="http://en.wikipedia.org/wiki/Empty_function">empty function</a>.</p>
<p>I wondered about this a while in first year – how would you go about drawing a function if the first set in question was empty? – and I’m glad to see it’s actually something that’s been considered and given an answer.</p>
<p><strong>Edit on 10/11 in reply to Nick:</strong><br />
Well, the definition of a matrix doesn’t preclude the existence of a 0×0 matrix, and I’d suppose that different matrices differ in at least one element, so I agree that that’s the only one.</p>
<p>The question on determinant is much more interesting due to how many different ways there are to approach it. Primarily, I want to check it doesn’t run opposite to my intuition in other areas.</p>
<p>Going back to the definition, we get that <img class="latex" src="http://blogs.warwick.ac.uk/cgi-bin/mimetex.cgi?%5Cdet%28A%29%3D%5Csum_%7B%5Cphi%7E%5Cin%7ES_n%7D%7Esign%28%5Cphi%29%7E%5Calpha_%7B1%7E%5Cphi%7E%281%29%7D%5Calpha_%7B2%7E%5Cphi%7E%282%29%7D%7E%5Ccdots%7E%5Calpha_%7Bn%7E%5Cphi%7E%28n%29%7D" alt="\det(A)=\sum_{\phi \in S_n} sign(\phi) \alpha_{1 \phi (1)}\alpha_{2 \phi (2)} \cdots \alpha_{n \phi (n)}" border="0" />. <img class="latex" src="http://blogs.warwick.ac.uk/cgi-bin/mimetex.cgi?S_0" alt="S_0" border="0" /> would be the symmetric group on the empty set, having 0! = 1 element (the empty function). So the determinant would be the sum of one number: the product of no numbers. The empty product is 1 (the multiplicative identity), so that would make the determinant one in this case.</p>
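A small sketch (the helper name is mine) makes the <em>S</em><sub>0</sub> argument concrete: iterating over permutations of the empty set yields exactly one term, the empty product, so the 0×0 determinant comes out as 1:

```python
from itertools import permutations

def leibniz_det(a):
    # determinant via the sum over permutations; a is a list of rows
    n = len(a)
    total = 0
    for perm in permutations(range(n)):
        # sign of the permutation from its inversion count
        sign = (-1) ** sum(perm[i] > perm[j]
                           for i in range(n) for j in range(i + 1, n))
        prod = 1  # the empty product: stays 1 when n = 0
        for i, j in enumerate(perm):
            prod *= a[i][j]
        total += sign * prod
    return total

assert leibniz_det([]) == 1              # the 0x0 case: one (empty) permutation
assert leibniz_det([[7]]) == 7
assert leibniz_det([[1, 2], [3, 4]]) == -2
```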
<p>Considered geometrically, the determinant represents area or volume in two or three dimensions. Working backwards, the determinant of a 1×1 matrix acting on a line gives the length of the line under the transformation, and hence a 0×0 matrix should act on a point. However, as the point is the entire space, this tells us nothing of what the determinant should be.</p>
<p>Considering that every matrix represents a linear transformation, and also that the empty function is bijective, we obtain that ( ) is nonsingular and hence the determinant isn’t 0, which is fine.</p>
<p>Going back to the definition I learnt in high school, where <img class="latex" src="http://blogs.warwick.ac.uk/cgi-bin/mimetex.cgi?%5Cbegin%7Bvmatrix%7D%7Ea%26b%5C%5Cc%26d%7E%5Cend%7Bvmatrix%7D%3Dad-bc" alt="\begin{vmatrix} a&b\\c&d \end{vmatrix}=ad-bc" border="0" /> and determinants of matrices of dimension greater than two are defined using minors, cofactors and the above definition, let us try going backwards – can we see what the determinants of 1×1 and 0×0 matrices <em>should</em> be based only on this? Expanding the 2×2 matrix above by the first row, we find that the determinant of (d) should be d and that the determinant of (c) should be c, which fits nicely with the actual definition in addition to the geometric one above. Expanding the 1×1 matrix by the first row, we require the determinant of any 0×0 matrix to be 1 (so that the determinant of the whole thing can be the sole entry), which also fits with our definition above.</p>
<p>But now the geometric line of thinking has led me into vector spaces! To start, there is no vector space consisting of no vectors (it fails to contain an additive identity). Consider the vector space consisting only of the zero vector. It is a vector space, but no set containing the zero vector is linearly independent, so its only candidate basis of length one is linearly dependent; removing the linearly dependent vector, we are left with the empty set as a basis. So this way (by a rather weak argument), the vector space consisting only of the zero vector has dimension zero, as we’d expect.</p>
Balls in boxes and other miscellaneous thoughts by Christopher MidgleyChristopher Midgleyhttps://blogs.warwick.ac.uk/midgleyc/entry/balls_in_boxes/2012-01-12T10:40:13Z2011-10-30T18:41:26Z<p>It is a reasonably common statement in probability that problems can be simplified down to spinners or balls in boxes. So:<br />
Let there be k balls that we wish to place in n boxes. If we can distinguish between the balls, then each ball can go in n different places, so the total is <img class="latex" src="http://blogs.warwick.ac.uk/cgi-bin/mimetex.cgi?%5Ctextstyle%7En%5Ek" alt="\textstyle n^k" border="0" />. If we cannot distinguish between the balls, this is answered by the occupancy theorem: the multiset formula for n and k.</p>
<p>What, then, if it is the boxes we cannot distinguish between? If the balls also cannot be distinguished between, this is equivalent to partitioning k into at most n parts. <a href="http://oeis.org/A008639">The <span class="caps">OEIS</span> implies</a> that this has a closed-form solution: <img class="latex" src="http://blogs.warwick.ac.uk/cgi-bin/mimetex.cgi?%5Ctextstyle%7E%5Cfrac%7B1%7D%7Bn%21%7D%7E%5Cfrac%7Bd%5En%5Cleft%28%7E%5Cfrac%7B1%7D%7B%5Cprod_%7Bi%3D1%7D%5Ek%281-x%5Ei%29%7D%7E%5Cright%29%7D%7Bdx%5En%7D%280%29" alt="\textstyle \frac{1}{n!} \frac{d^n\left( \frac{1}{\prod_{i=1}^k(1-x^i)} \right)}{dx^n}(0)" border="0" />, which is nice to know, even if it suggests there’s no ‘nice’ combinatorial solution.<br />
If we can distinguish between the balls but not the boxes, I believe this would come to a sum of Stirling numbers of the second kind: <img class="latex" src="http://blogs.warwick.ac.uk/cgi-bin/mimetex.cgi?%5Csum_%7Bi%3D0%7D%5En%7E%5Cleft%5C%7B%5Cbegin%7Bmatrix%7D%7Ek%7E%5C%5C%7Ei%7E%5Cend%7Bmatrix%7D%5Cright%5C%7D" alt="\sum_{i=0}^n \left\{\begin{matrix} k \\ i \end{matrix}\right\}" border="0" />.</p>
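The four cases can be sketched together (the recursions and the example values n = 4 boxes, k = 6 balls are my own illustration):

```python
from math import comb
from functools import lru_cache

# distinguishable balls and boxes: each of k balls picks one of n boxes
def both_distinct(n, k):
    return n ** k

# indistinguishable balls, distinguishable boxes: multiset coefficient
def balls_same(n, k):
    return comb(n + k - 1, k)

# both indistinguishable: partitions of k into at most n parts
# (counted, by conjugation, as partitions of k with every part <= max_part)
@lru_cache(maxsize=None)
def partitions(k, max_part):
    if k == 0:
        return 1
    return sum(partitions(k - p, p) for p in range(1, min(k, max_part) + 1))

# distinguishable balls, indistinguishable boxes: Stirling numbers of the
# second kind, summed over the number of non-empty boxes
@lru_cache(maxsize=None)
def stirling2(k, i):
    if k == i:
        return 1
    if i == 0 or i > k:
        return 0
    return i * stirling2(k - 1, i) + stirling2(k - 1, i - 1)

def boxes_same(n, k):
    return sum(stirling2(k, i) for i in range(n + 1))
```

For n = 4, k = 6 these give 4096, 84, 9 and 187 respectively.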
<p>Other miscellany: Mario Micallef told us a few integration tricks this week: being able to cancel entire functions while integrating from 0 to 2pi (because odd powers of sine integrate to zero over a full period), and <img class="latex" src="http://blogs.warwick.ac.uk/cgi-bin/mimetex.cgi?%5Ctextstyle%5Cint_0%5E%7B2%5Cpi%7Dsin%5E2%28x%29dx%3D%5Cpi" alt="\textstyle\int_0^{2\pi}sin^2(x)dx=\pi" border="0" /> by comparison with <img class="latex" src="http://blogs.warwick.ac.uk/cgi-bin/mimetex.cgi?%5Ccos%5E2%28x%29" alt="\cos^2(x)" border="0" /> across the same interval. Massively time-saving tricks I’d never considered before. Great!</p>
<p>From combinatorics: wondered if there was a way to check transitivity of a relation expressed in matrix form quickly. Found nothing more than checking each row one step at a time, but at least I can do that fairly quickly now.</p>
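One mechanical route (my own aside, not something from the notes): a relation is transitive exactly when R∘R ⊆ R, i.e. wherever the boolean square of the matrix has a 1, the matrix itself must have one too. Still O(n³), but easy to state:

```python
def is_transitive(m):
    # m is a 0/1 adjacency matrix; R is transitive iff the boolean square
    # of m has no 1 where m itself has a 0 (R composed with R lies inside R)
    n = len(m)
    for i in range(n):
        for j in range(n):
            reachable_in_two = any(m[i][k] and m[k][j] for k in range(n))
            if reachable_in_two and not m[i][j]:
                return False
    return True

less = [[0, 1, 1], [0, 0, 1], [0, 0, 0]]  # x < y on {0, 1, 2}: transitive
neq = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]   # x != y: not transitive
assert is_transitive(less)
assert not is_transitive(neq)
```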
Mathematical Statistics A - Example Sheet 1 by Christopher MidgleyChristopher Midgleyhttps://blogs.warwick.ac.uk/midgleyc/entry/mathematical_statistics_a/2012-01-12T10:40:54Z2011-10-09T17:52:04Z<p>Thought I may as well put answers to the A & C sections here, for simple-ish reflection. And to notice when I can’t recall how to do things, at all.</p>
<span class="caps">QA1</span>: We compute the cumulative distribution function by integrating the density function:<br />
<img class="latex" src="http://blogs.warwick.ac.uk/cgi-bin/mimetex.cgi?%0A%5Cbegin%7Balign*%7D%0AF_x%7E%26%3D%7E%5Cint_%7B-%5Cinfty%7D%5Ex%7Ef_x%28s%29ds%5C%5C%0A%26%3D%5Cint_0%5Ex%7Ee%5E%7B-s%7Dds%5C%5C%0A%26%3D%5Cleft.%7E-e%5E%7B-s%7D%5Cright%7C_0%5Ex%5C%5C%0A%26%3D%7E1%7E-%7Ee%5E%7B-x%7D%0A%5Cend%7Balign*%7D%0A" alt=" \begin{align*} F_x &= \int_{-\infty}^x f_x(s)ds\\ &=\int_0^x e^{-s}ds\\ &=\left. -e^{-s}\right|_0^x\\ &= 1 - e^{-x} \end{align*} " border="0" /><br />
We argue Y is discrete because it takes only countably many values – here just two. Its support is {0, 2}; pmf and cdf are:<br />
<img class="latex" src="http://blogs.warwick.ac.uk/cgi-bin/mimetex.cgi?%0Af_Y%28y%29%7E%3D%5Cleft%7B%7E%7B1-%5Cfrac%7B1%7D%7Be%7D%7D%7E%5Cquad%7E%28y%3D0%29%7E%5C%5C%5Cfrac%7B1%7D%7Be%7D%7E%5Cquad%5Cquad%7E%28y%3D2%29%5C%5C0%7E%5Cquad%5Cquad%7E%28y%7E%5Cnotin%7E%5C%7B0%2C%7E2%5C%7D%29%7E%5Cright.%0A" alt=" f_Y(y) =\left{ {1-\frac{1}{e}} \quad (y=0) \\\frac{1}{e} \quad\quad (y=2)\\0 \quad\quad (y \notin \{0, 2\}) \right. " border="0" /><br />
<img class="latex" src="http://blogs.warwick.ac.uk/cgi-bin/mimetex.cgi?%0AF_Y%28y%29%7E%3D%5Cleft%7B%7E0%7E%5Cquad%5Cquad%7E%28y%3C0%29%7E%5C%5C%5C1-%5Cfrac%7B1%7D%7Be%7D%7E%5Cquad%7E%280%5Cleq%7Ey%3C2%29%5C%5C1%7E%5Cquad%5Cquad%7E%28y%7E%5Cgeq%7E2%29%7E%5Cright.%0A" alt=" F_Y(y) =\left{ 0 \quad\quad (y<0) \\\1-\frac{1}{e} \quad (0\leq y<2)\\1 \quad\quad (y \geq 2) \right. " border="0" /><br />
<span class="caps">QA2</span> is simple infinite sum fiddling combined with the identity for <img class="latex" src="http://blogs.warwick.ac.uk/cgi-bin/mimetex.cgi?e%5Ek" alt="e^k" border="0" />.
<p><span class="caps">QC5</span>:<br />
<img class="latex" src="http://blogs.warwick.ac.uk/cgi-bin/mimetex.cgi?%0AY%7E%3D%7Eg%28X%29%5C%5C%0AF_Y%28y%29%7E%3D%7EP%28g%28X%29%7E%5Cleq%7Ey%29%5C%5C%0A%5Cbegin%7Balign*%7D%0AF_Y%28y%29%7E%26%3D%7EP%28X%7E%5Cleq%7Eg%5E%7B-1%7D%28y%29%29%5C%5C%0A%26%3DF_X%28g%5E%7B-1%7D%28y%29%29%0A%5Cend%7Balign*%7D%0A%5C%5C%0A%5Cbegin%7Balign*%7D%0Af_y%28y%29%26%3Df_x%28g%5E%7B-1%7D%28y%29%29%5Ctimes%28g%5E%7B-1%7D%28y%29%29%27%5C%5C%0A%26%3D%7E%5Cfrac%7Bf_x%28g%5E%7B-1%7D%28y%29%29%7D%7Bg%27%28g%5E%7B-1%7D%28y%29%29%7D%0A%5Cend%7Balign*%7D%0A" alt=" Y = g(X)\\ F_Y(y) = P(g(X) \leq y)\\ \begin{align*} F_Y(y) &= P(X \leq g^{-1}(y))\\ &=F_X(g^{-1}(y)) \end{align*} \\ \begin{align*} f_y(y)&=f_x(g^{-1}(y))\times(g^{-1}(y))'\\ &= \frac{f_x(g^{-1}(y))}{g'(g^{-1}(y))} \end{align*} " border="0" /><br />
By the inverse function theorem (differentiation). Y is continuous as the functions that make up Y are continuous – additionally, as g is an increasing bijection it is continuous and differentiable, and its derivative is always positive, so the pdf of Y is not undefined (g’ always nonzero).</p>
<p>For the last part, simply plug in g(X) = aX + b to what you’ve already worked out, and find Y is the Gaussian (given X is the standard Gaussian).</p>
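That last step can be checked numerically; a sketch (assuming, as in the question, g(x) = ax + b with a > 0 and X a standard Gaussian, so Y should be N(b, a²)):

```python
from math import exp, pi, sqrt

def phi(x):
    # standard Gaussian density
    return exp(-x * x / 2) / sqrt(2 * pi)

def f_Y(y, a, b):
    # change-of-variables formula: f_X(g^{-1}(y)) / g'(g^{-1}(y))
    # with g(x) = a*x + b, so g^{-1}(y) = (y - b) / a and g' = a
    return phi((y - b) / a) / a

def normal_density(y, mu, sigma):
    return exp(-((y - mu) ** 2) / (2 * sigma ** 2)) / (sigma * sqrt(2 * pi))

a, b = 2.0, 1.0
assert all(abs(f_Y(y, a, b) - normal_density(y, b, a)) < 1e-12
           for y in (-3.0, -1.0, 0.0, 0.5, 2.0, 4.0))
```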
<p><span class="caps">QC6</span>:<br />
Find the mgf of the exponential in the normal way; use the same trick as Q4 to find the mgf of <img class="latex" src="http://blogs.warwick.ac.uk/cgi-bin/mimetex.cgi?S_n" alt="S_n" border="0" />. Find the mgf of <img class="latex" src="http://blogs.warwick.ac.uk/cgi-bin/mimetex.cgi?S_n" alt="S_n" border="0" /> using the pdf; as they are equal use uniqueness to conclude the distribution is correct.</p>
<p>Integrate the density <img class="latex" src="http://blogs.warwick.ac.uk/cgi-bin/mimetex.cgi?f_%7BS_n%7D" alt="f_{S_n}" border="0" /> from t to infinity; receive a sum from 0 to n. Observe that as the number of photons detected must be an integer, <img class="latex" src="http://blogs.warwick.ac.uk/cgi-bin/mimetex.cgi?P%28N%3Dk%29%3DP%28N%3Ck%2B1%29-P%28N%3Ck%29" alt="P(N=k)=P(N<k+1)-P(N<k)" border="0" />. Plug in your previously calculated values for the answer.</p>
Taylor Series by Christopher MidgleyChristopher Midgleyhttps://blogs.warwick.ac.uk/midgleyc/entry/taylor_series/2012-01-12T10:43:07Z2011-08-22T20:11:04ZThe Taylor series of an (infinitely differentiable) function is a power series whose partial sums approximate it. It can be represented in a few ways:<br />
<img class="latex" src="http://blogs.warwick.ac.uk/cgi-bin/mimetex.cgi?f%28x%29%3Df%28a%29%2Bf%27%28a%29%28x-a%29%2B%5Cfrac%7Bf%27%27%28a%29%7D%7B2%21%7D%28x-a%29%5E2%2Bh.o.t." alt="f(x)=f(a)+f'(a)(x-a)+\frac{f''(a)}{2!}(x-a)^2+h.o.t." border="0" />, which is equivalent to <img class="latex" src="http://blogs.warwick.ac.uk/cgi-bin/mimetex.cgi?%5Csum_%7Bn%3D0%7D%5E%5Cinfty%5Cfrac%7Bf%5E%7B%28n%29%7D%28a%29%7D%7Bn%21%7D%28x-a%29%5En" alt="\sum_{n=0}^\infty\frac{f^{(n)}(a)}{n!}(x-a)^n" border="0" /> and can be rewritten as:<br />
<img class="latex" src="http://blogs.warwick.ac.uk/cgi-bin/mimetex.cgi?f%28x%2Bh%29%3Df%28x%29%2Bhf%27%28x%29%2B%5Cfrac%7Bh%5E2%7D%7B2%21%7Df%27%27%28x%29%2Bh.o.t." alt="f(x+h)=f(x)+hf'(x)+\frac{h^2}{2!}f''(x)+h.o.t." border="0" />
<p>Each can be used to form Maclaurin series, where the derivatives are evaluated at zero. Some of the more common ones (such as <img class="latex" src="http://blogs.warwick.ac.uk/cgi-bin/mimetex.cgi?e%5Ex" alt="e^x" border="0" />) are worth memorising so you can spot them when they come up, but most can be evaluated easily enough, as long as you remember how.</p>
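As a quick illustration (my own, with e^x as the example), truncating the Maclaurin series at more and more terms gives successively better approximations:

```python
from math import exp, factorial

def maclaurin_exp(x, terms):
    # partial sum of the Maclaurin series for e^x: sum of x^n / n! for n < terms
    return sum(x ** n / factorial(n) for n in range(terms))

# the error at x = 1 shrinks rapidly as terms are added
errors = [abs(maclaurin_exp(1.0, t) - exp(1.0)) for t in (2, 5, 10)]
assert errors[0] > errors[1] > errors[2]
assert errors[2] < 1e-6
```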
You can also expand for more than one variable by expanding once for each in turn until the desired level of accuracy is reached. A two-dimensional quadratic approximation is<br />
<img class="latex" src="http://blogs.warwick.ac.uk/cgi-bin/mimetex.cgi?f%28x%2Bh%2Cy%2Bk%29%3Df%28x%2Cy%29%2Bh%5Cfrac%7B%5Cpartial%7Ef%7D%7B%5Cpartial%7Ex%7D%28x%2Cy%29%2Bk%5Cfrac%7B%5Cpartial%7Ef%7D%7B%5Cpartial%7Ey%7D%28x%2Cy%29%2B%5Cfrac%7B1%7D%7B2%7D%5Cleft%28h%5E2%5Cfrac%7B%5Cpartial%5E2f%7D%7B%5Cpartial%7Ex%5E2%7D%28x%2Cy%29%2B2hk%5Cfrac%7B%5Cpartial%5E2f%7D%7B%5Cpartial%7Ex%7E%5Cpartial%7Ey%7D%28x%2Cy%29%2Bk%5E2%5Cfrac%7B%5Cpartial%5E2f%7D%7B%5Cpartial%7Ey%5E2%7D%28x%2Cy%29%5Cright%29%2Bh.o.t." alt="f(x+h,y+k)=f(x,y)+h\frac{\partial f}{\partial x}(x,y)+k\frac{\partial f}{\partial y}(x,y)+\frac{1}{2}\left(h^2\frac{\partial^2f}{\partial x^2}(x,y)+2hk\frac{\partial^2f}{\partial x \partial y}(x,y)+k^2\frac{\partial^2f}{\partial y^2}(x,y)\right)+h.o.t." border="0" /><br />
which can be expanded to a general vector expansion as<br />
<img class="latex" src="http://blogs.warwick.ac.uk/cgi-bin/mimetex.cgi?f%28%5Cvec%7Bx%7D%2B%5Cvec%7Bh%7D%29%3Df%28%5Cvec%7Bx%7D%29%2B%5Cvec%7Bh%7D%5Ccdot%5Cnabla%7Ef%28%5Cvec%7Bx%7D%29%2B%5Cfrac%7B1%7D%7B2%7D%5Csum_%7B1%5Cleq%7Ei%2Cj%7E%5Cleq%7En%7Dh_ih_j%5Cfrac%7B%5Cpartial%5E2f%7D%7B%5Cpartial%7Ex_i%7E%5Cpartial%7Ex_j%7D%2Bh.o.t." alt="f(\vec{x}+\vec{h})=f(\vec{x})+\vec{h}\cdot\nabla f(\vec{x})+\frac{1}{2}\sum_{1\leq i,j \leq n}h_ih_j\frac{\partial^2f}{\partial x_i \partial x_j}+h.o.t." border="0" /><br />
[Geometry and Motion notes]
<p>Analysis II introduces Taylor’s Theorem, which explains just why Taylor Series work, and how good an approximation they are depending on how far you get into the sum.</p>