All 14 entries tagged Maths


July 20, 2012

Determinant of the Wronskian

The Wronskian of n functions f_1, \dots, f_n is the determinant \begin{vmatrix}f_1&\dots&f_n\\\dots&\dots&\dots\\f_1^{(n-1)}&\dots&f_n^{(n-1)}\end{vmatrix}. Its derivative is the determinant \begin{vmatrix}f_1&\dots&f_n\\\dots&\dots&\dots\\f_1^{(n)}&\dots&f_n^{(n)}\end{vmatrix} (that is, the previous matrix with its bottom row f_1^{(n-1)}, \dots, f_n^{(n-1)} replaced by f_1^{(n)}, \dots, f_n^{(n)}). It’s an interesting exercise to prove this, so let’s do that.

We proceed by our old friend, induction. The case n=1 (or n=0) is obvious. Suppose the result holds up to n-1. Expand by the bottom row (we suppress the alternating cofactor signs (-1)^{n+j} throughout; they are carried along unchanged and do not affect the argument):
\begin{vmatrix}f_1&\dots&f_n\\\dots&\dots&\dots\\f_1^{(n-1)}&\dots&f_n^{(n-1)}\end{vmatrix} = f_1^{(n-1)}\begin{vmatrix}f_2&\dots&f_n\\\dots&\dots&\dots\\f_2^{(n-2)}&\dots&f_n^{(n-2)}\end{vmatrix} + \dots + f_n^{(n-1)}\begin{vmatrix}f_1&\dots&f_{n-1}\\\dots&\dots&\dots\\f_1^{(n-2)}&\dots&f_{n-1}^{(n-2)}\end{vmatrix}
We take the derivative, applying the product rule to each term: the parts where the coefficients f_j^{(n-1)} are differentiated recombine into the first determinant below, while the induction assumption supplies the derivative of each minor. We obtain \begin{vmatrix}f_1&\dots&f_n\\\dots&\dots&\dots\\f_1^{(n-2)}&\dots&f_n^{(n-2)}\\f_1^{(n)}&\dots&f_n^{(n)}\end{vmatrix}+\left( f_1^{(n-1)}\begin{vmatrix}f_2&\dots&f_n\\\dots&\dots&\dots\\f_2^{(n-1)}&\dots&f_n^{(n-1)}\end{vmatrix} + \dots + f_n^{(n-1)}\begin{vmatrix}f_1&\dots&f_{n-1}\\\dots&\dots&\dots\\f_1^{(n-1)}&\dots&f_{n-1}^{(n-1)}\end{vmatrix}\right)
But the bracketed part is just \begin{vmatrix}f_1&\dots&f_{n}\\\dots&\dots&\dots\\f_1^{(n-1)}&\dots&f_{n}^{(n-1)}\\f_1^{(n-1)}&\dots&f_n^{(n-1)}\end{vmatrix}, whose last two rows are equal, and a matrix with a repeated row is singular, so this is zero. We are done.
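
A quick way to convince yourself of the identity is to check it symbolically. Below is a sketch assuming SymPy is available; the three functions are arbitrary choices:

```python
# Sketch check of d/dx det(Wronskian matrix) = same determinant with bottom row
# differentiated once more, for three arbitrary smooth functions.
import sympy as sp

x = sp.symbols('x')
fs = [sp.sin(x), sp.exp(x), x**3]
n = len(fs)

# Rows f, f', ..., f^(n-1): the Wronskian matrix.
W = sp.Matrix([[sp.diff(f, x, k) for f in fs] for k in range(n)])
# Same matrix, but with f^(n) in place of the bottom row f^(n-1).
claimed = sp.Matrix([[sp.diff(f, x, k) for f in fs] for k in [*range(n - 1), n]])

assert sp.simplify(sp.diff(W.det(), x) - claimed.det()) == 0
```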

June 15, 2012

Experiments: Game Theory

Follow-up to To ensure that you are honest… from Midgley, Christopher - Pointless twaddle and meaningless diatribes

Realised I could model the previous two cases of product purchase as games. So let’s do that.

Game 1:
  1. You make a bid.
  2. A price is randomly generated.
  3. If the price is at most your bid, you pay the price and get the item; otherwise you pay nothing (and don’t get the item).
Model as a game:
  • Your bid is X, paid up front to play.
  • You consider the product worth Y.
  • We assume the RNG generates a price P between 1 and 100 inclusive.
  • X% of the time (P \leq X) you win: you get the item plus the unspent part of your bid, making Y+X-P; summed over P=1,\dots,X this is \sum_{i=0}^{X-1}(Y+i).
  • The other (100-X)% of the time, you simply get your bid back, making X.
  • Your expected reward is one hundredth of (100-X)X+\sum_{i=0}^{X-1}(Y+i)=-\frac{X^2}{2}+XY+\frac{199X}{2}.
  • For the payoff we subtract the bid X.

At X=0, we make nothing. At X=100, the payoff is Y-50.5. The payoff overall is \frac{X(2Y-X-1)}{200}, maximised at X=Y-\frac{1}{2}; as we’re limited to integer bids, X=Y-1 and X=Y both attain the same maximal payoff, \frac{Y(Y-1)}{200}.

Game 2:
  1. You make a bid.
  2. A price is randomly generated.
  3. If the price is at most your bid, you pay your bid and get the item; otherwise you pay nothing (and don’t get the item).
Model as a game:
  • Your bid is X, paid up front to play.
  • You consider the product worth Y.
  • We assume the RNG generates a price P between 1 and 100 inclusive.
  • X% of the time (P \leq X) you win the item, making Y.
  • The other (100-X)% of the time, you get your bid back, making X.
  • Your expected reward is one hundredth of XY+(100-X)X=100X+XY-X^2.
  • For the payoff we subtract the bid X.

At X=0, we make nothing. At X=100, the payoff is Y-100. Here the payoff is \frac{X(Y-X)}{100}, which is zero at X=Y, not maximal: the maximum is at X=\frac{Y}{2}, with value \frac{Y^2}{400}.

So in conclusion: the first game makes bidding your true value (or a penny under) optimal, while the second rewards bidding only half of what the item is worth to you; and the first game’s maximum payoff is the larger of the two (for Y>2), which I suppose you’d expect, as there you only ever pay the generated price rather than your full bid.
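
To double-check the algebra, here is a small sketch (the function names and the example valuation Y = 60 are mine) computing the exact expected payoff of every integer bid in both games, under the model above (price uniform on 1..100, a win meaning price at most the bid):

```python
Y = 60  # hypothetical worth of the item to us

def payoff_game1(X):
    # Win: pay the generated price P, gaining Y - P.
    return sum(Y - P for P in range(1, X + 1)) / 100

def payoff_game2(X):
    # Win: pay our own bid X, gaining Y - X.
    return sum(Y - X for P in range(1, X + 1)) / 100

best1 = max(range(101), key=payoff_game1)
best2 = max(range(101), key=payoff_game2)
print(best1, payoff_game1(best1))   # 59 (X = 60 ties): payoff Y(Y-1)/200 = 17.7
print(best2, payoff_game2(best2))   # 30 = Y/2: payoff Y^2/400 = 9.0
```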


May 04, 2012

Cauchy Condensation Test

The Cauchy condensation test is a convergence test. It’s also, from what I can see, one of the few we didn’t cover in Analysis. Which is a shame, because it’s rather nice.

For a positive non-increasing sequence f(n), the sum \sum_{n=1}^{\infty}f(n) converges if and only if the sum \sum_{n=1}^{\infty}2^nf(2^n) converges.

A sketch proof in one direction should be rather evident: as the sequence is non-increasing, each block f(2^n)+f(2^n+1)+\dots+f(2^{n+1}-1) of 2^n terms is at most 2^n f(2^n), so the original sum is bounded by the condensed one. For the reverse, the idea is similar: the same block is at least 2^n f(2^{n+1}), which is half of the condensed term 2^{n+1}f(2^{n+1}).

Let us consider our old friend \sum_{n=1}^{\infty}\frac{1}{n^p}. Condensing gives \sum_{n=1}^{\infty}2^n\left(\frac{1}{2^n}\right)^p=\sum_{n=1}^{\infty}2^{n(1-p)}, a geometric series with ratio 2^{1-p}, which is convergent if and only if 2^{1-p}<1, i.e. if and only if p>1.
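
A numerical sketch (standard library only): for p > 1 both partial sums settle down, while for p ≤ 1 both keep growing.

```python
# Partial sums of sum 1/n^p versus the condensed series sum 2^(n(1-p)).
def original(p, N):
    return sum(n ** -p for n in range(1, N + 1))

def condensed(p, N):
    return sum(2 ** (n * (1 - p)) for n in range(N + 1))

for p in (0.5, 1.0, 2.0):
    print(p, original(p, 10**5), condensed(p, 40))
# p = 2 gives ~1.645 (pi^2/6) and ~2 respectively; p <= 1 blows up in both.
```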


January 12, 2012

Obviousness – norm implies metric

We have a norm, which satisfies positive definiteness, absolute homogeneity (“linearity”) and the triangle inequality. We have a metric, which satisfies positivity, symmetry and the triangle inequality. We wish to prove that every norm induces a metric via d(x,y)=\|x-y\|. "Obviously", the only property we need to prove is... the triangle inequality.

At the time, I couldn't see how it was obvious that homogeneity implies symmetry. Now, I think it's about as obvious as the triangle inequality was - there's a step to take, even though it's simple.

Homogeneity requires that \|cx\|=|c|\,\|x\|\text{ for scalars } c (real or complex, as appropriate). Symmetry requires that d(x,y)=d(y,x).

To prove it: d(x,y) = \|x-y\| = |-1|\,\|x-y\| = \|-(x-y)\| = \|y-x\| = d(y,x).


January 05, 2012

[citation needed]; the difficulty of finding things

Typing “tower property” into Google, I find that the first result is the ever-ubiquitous Wikipedia (whose mastery of SEO means it turns up, with the occasional irrelevant article, on whatever subject you care to name) – in this case, its article on Expected Value. Typing “tower property” into Wikipedia itself returns the article on “law of total expectation”, which is apparently the one of its myriad names that Wikipedia has decided is most common. Looking at the other results on Google, even adding a helpful “statistics”, I find that “tower property” doesn’t appear to return anything else relevant. In fact, the only other place I can find it called the “tower property” is in my notes :)

For nameless results, I find my best bet is simply to type in the result itself. For example, that E[XY]=E[YE[X|Y]] is proven at the end of this pdf document, which is likely lecture notes. If something has a lot of roots or powers, this is somewhat less applicable.
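
That identity is also easy to sanity-check numerically. A sketch assuming NumPy, with a made-up joint distribution in which E[X|Y] = Y:

```python
# Numerical sketch of E[XY] = E[Y E[X|Y]] for a simple example pair.
import numpy as np

rng = np.random.default_rng(2)
y = rng.integers(1, 4, size=1_000_000)   # Y uniform on {1, 2, 3}
x = rng.normal(loc=y, scale=1.0)         # X | Y=y ~ N(y, 1), so E[X|Y] = Y
print((x * y).mean(), (y * y).mean())    # both ~ E[Y^2] = 14/3 = 4.667
```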

As yet, I’ve not been able to find anything on what my notes refer to as “Fisher’s theorem”. It’s a theorem named after a famous mathematician who had many theorems named after him (some shared with others), so we’re already off to a bad start trying to find it. The theorem reads:

Let X_i \sim N(\mu, \sigma^2) be independent random variables. Define \overline{X}=\frac{1}{n}\sum_{i=1}^n X_i and s^2=\frac{1}{n-1}\sum_{i=1}^n (X_i-\overline{X})^2. Then:
  • \overline{X} \sim N(\mu,\frac{\sigma^2}{n})
  • \overline{X} and s^2 are independent.
  • \frac{(n-1)s^2}{\sigma^2} \sim \chi^2_{n-1}
  • \frac{\overline{X}-\mu}{\sqrt{s^2/n}} \sim t_{n-1}

It looks like it has something to do with sample mean and variance, but I’m only taking the first module on this topic, so what its use is I can’t say.
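
Absent a citation, a simulation at least makes the statement plausible. Below is a sketch assuming NumPy, with arbitrary example parameters \mu=5, \sigma=2, n=10:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, trials = 5.0, 2.0, 10, 200_000
X = rng.normal(mu, sigma, size=(trials, n))

xbar = X.mean(axis=1)                      # sample mean of each trial
s2 = X.var(axis=1, ddof=1)                 # sample variance, 1/(n-1) convention

print(xbar.mean(), xbar.var())             # ~ mu and sigma^2/n = 0.4
print(np.corrcoef(xbar, s2)[0, 1])         # ~ 0, as independence predicts
print(((n - 1) * s2 / sigma**2).mean())    # ~ n - 1, the chi^2_{n-1} mean
```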


November 16, 2011

Combinatorics – a misread question (graph theory)

While doing combinatorics questions, we come across interesting results by doing the wrong question – whether as a stepping stone (we hope!) to the correct answer, or by completely misreading the question. Below is one that came up last week:

Find the number of labelled (n-2)-regular graphs with n vertices.
(a k-regular graph is one in which all vertices have degree k)

Clearly there are no such graphs for n odd (the degree sum n(n-2) would be odd) or for n less than 2. The case for even n greater than or equal to 2 runs as follows:

Note that if a vertex has degree (n-2), it is connected to every vertex apart from one other, which is necessarily connected to every vertex bar the first. Thus, we can ‘pair off’ vertices in this manner, obtaining {n \choose 2} {n-2 \choose 2} {n-4 \choose 2}  \cdots = \frac{n!}{(n-2)!2!} \frac{(n-2)!}{(n-4)!2!} \frac{(n-4)!}{(n-6)!2!} \cdots = \frac{n!}{2^{\frac{n}{2}}}.

Edit
We then (thanks Nick!) note that the order of the pairs doesn’t matter, so we need to divide by the number of ways of ordering the \frac{n}{2} pairs – \frac{n}{2}! – giving us \frac{n!}{2^{\frac{n}{2}}\frac{n}{2}!}

This gives the same result as a different way of thinking about it: pair the first vertex with any of the other n-1, the next unpaired vertex with any of the remaining n-3, and so on – (n-1)(n-3)(n-5) \cdots = \prod_{k=1}^{\frac{n}{2}} (n-2k+1)
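
The two expressions are easy to check against each other for small even n (standard library only):

```python
# n!/(2^(n/2) (n/2)!) versus the double factorial (n-1)(n-3)...1, for even n.
from math import factorial, prod

def paired_count(n):
    return factorial(n) // (2 ** (n // 2) * factorial(n // 2))

def double_factorial(n):
    return prod(range(n - 1, 0, -2))

for n in range(2, 13, 2):
    assert paired_count(n) == double_factorial(n)
    print(n, paired_count(n))   # 1, 3, 15, 105, 945, 10395
```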


November 09, 2011

Emptiness

Consider the definition of a graph – does it preclude a graph with 0 vertices? No, as both the set of vertices and the set of edges may be empty. This graph exists mostly to make you consider an extra case in every problem you do :)

How many automorphisms are there on the empty graph (how many isomorphisms from the empty graph to itself)? This is equivalent to asking how many bijections there are from the empty set to itself. For a function to be well-defined, we only require that every element of the domain is mapped to exactly one element of the codomain – if the domain is the empty set, this condition is vacuously true. It turns out the answer is one – the empty function.
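
Python’s itertools agrees, for what that’s worth: it enumerates exactly one permutation (bijection) of the empty set.

```python
from itertools import permutations
print(len(list(permutations([]))))   # 1: the empty function
```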

I wondered about this a while in first year – how would you go about drawing a function if the first set in question was empty? – and I’m glad to see it’s actually something that’s been considered and given an answer.

Edit on 10/11 in reply to Nick:
Well, the definition of a matrix doesn’t preclude the existence of a 0×0 matrix, and I’d suppose that different matrices differ in at least one element, so I agree that that’s the only one.

The question on determinant is much more interesting due to how many different ways there are to approach it. Primarily, I want to check it doesn’t run opposite to my intuition in other areas.

Going back to the definition, we get that \det(A)=\sum_{\phi \in S_n} \operatorname{sign}(\phi) \alpha_{1 \phi (1)}\alpha_{2 \phi (2)} \cdots \alpha_{n \phi (n)}. S_0 would be the symmetric group on 0 elements, having 0! = 1 element (the empty function). So the determinant would be the sum of a single term, that term being the product of no numbers. I’m of the opinion that the product of no numbers is 1 (the multiplicative identity), so that would make the determinant one in this case.
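
For what it’s worth, NumPy appears to agree with this convention (assuming its det accepts an empty matrix, which it does in the versions I’ve used):

```python
import numpy as np
print(np.linalg.det(np.empty((0, 0))))   # 1.0: the empty-product convention
```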

Considered geometrically, the determinant represents (signed) area or volume in two or three dimensions. Extending backwards, the determinant of a 1×1 matrix acting on a line gives the scaling of length under the transformation, and hence a 0×0 matrix should act on a point. However, as the point is the entire space, this tells us nothing about what the determinant should be.

Considering that every matrix represents a linear transformation, and also that the empty function is bijective, we obtain that ( ) is nonsingular and hence its determinant isn’t 0, which is consistent.

Going back to the definition I learnt in high school, where \begin{vmatrix} a&b\\c&d \end{vmatrix}=ad-bc and determinants of matrices of dimension greater than two are defined using minors, cofactors and the above definition, let us try going backwards – can we see what the determinants of 1×1 and 0×0 matrices /should/ be based only on this? Expanding the 2×2 matrix above by the first row, we find that the determinant of (d) should be d and that the determinant of (c) should be c, which fits nicely with the actual definition in addition to the geometric one above. Expanding a 1×1 matrix by its first row, we require the determinant of any 0×0 matrix to be 1 (so that the determinant of the whole thing can be the sole entry), which also fits with our definition above.

But now the geometric line of thinking has led me into vector spaces! To start, there is no vector space consisting of no vectors (it fails the presence of an identity). Consider the vector space consisting only of the zero vector. It is a vector space, but \{0\} cannot serve as a basis – the zero vector is not linearly independent. The space is at most of dimension 1 (it has one vector in it), but its candidate basis of length one is linearly dependent, so removing the linearly dependent vector we are left with the empty basis. So this way (by a rather weak argument), the vector space consisting only of the zero vector has dimension zero, as we’d expect.


October 30, 2011

Balls in boxes and other miscellaneous thoughts

It is a reasonably common statement in probability that problems can be simplified down to spinners or balls in boxes. So:
Let there be k balls that we wish to place in n boxes. If we can distinguish between the balls, then each ball can go in n different places, so the total is \textstyle n^k. If we cannot distinguish between the balls, this is answered by the occupancy theorem: the multiset coefficient \textstyle\binom{n+k-1}{k}.

What, then, if it is the boxes we cannot distinguish between? If the balls also cannot be distinguished between, this is equivalent to partitioning k into at most n parts. The OEIS implies that this has a closed-form solution – the coefficient of x^k in \textstyle\prod_{i=1}^n\frac{1}{1-x^i}, i.e. \textstyle \frac{1}{k!} \frac{d^k\left( \frac{1}{\prod_{i=1}^n(1-x^i)} \right)}{dx^k}(0) – which is nice to know, even if it suggests there’s no ‘nice’ combinatorial solution.
If we can distinguish between the balls but not the boxes, I believe this would come to a sum of Stirling numbers of the second kind: \sum_{i=0}^n \left\{\begin{matrix} k \\ i \end{matrix}\right\}.
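
Both of the ‘indistinguishable boxes’ counts are small enough to brute-force. A sketch (standard library only) that canonicalises each assignment and compares with the claims above:

```python
# Brute force over all n^k assignments, canonicalising away the box labels.
from itertools import product
from functools import lru_cache

def count(k, n, balls_distinct):
    seen = set()
    for a in product(range(n), repeat=k):          # a[i] = box of ball i
        if balls_distinct:
            key = frozenset(frozenset(i for i in range(k) if a[i] == b)
                            for b in range(n)) - {frozenset()}
        else:
            key = tuple(c for c in sorted((a.count(b) for b in range(n)),
                                          reverse=True) if c)
        seen.add(key)
    return len(seen)

@lru_cache(None)
def stirling2(k, i):                               # second kind, by recurrence
    if k == i: return 1
    if i == 0 or i > k: return 0
    return i * stirling2(k - 1, i) + stirling2(k - 1, i - 1)

k, n = 5, 3
print(count(k, n, True), sum(stirling2(k, i) for i in range(n + 1)))  # 41 41
print(count(k, n, False))   # partitions of 5 into at most 3 parts: 5
```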

Other miscellany: Mario Micallef told us a few integration tricks this week: being able to cancel entire terms when integrating from 0 to 2\pi (because they contain sine, which integrates to zero over a full period), and \textstyle\int_0^{2\pi}\sin^2(x)\,dx=\pi by comparison with \cos^2(x) across the same interval (the two integrals are equal and sum to 2\pi). Massively time-saving tricks I’d never considered before. Great!

From combinatorics: I wondered if there was a way to quickly check the transitivity of a relation expressed in matrix form. I found nothing better than checking each row one step at a time, but at least I can do that fairly quickly now.
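
One standard observation worth noting (an aside of mine, not something from the course): R is transitive exactly when the boolean square of its matrix stays inside the matrix, since (M^2)_{ik} is nonzero precisely when some j has iRj and jRk. A sketch assuming NumPy:

```python
import numpy as np

M = np.array([[1, 1, 1],
              [0, 1, 1],
              [0, 0, 1]], dtype=bool)   # the relation <= on a 3-element chain

def is_transitive(M):
    reach2 = (M.astype(int) @ M.astype(int)) > 0   # (i,k): some j with iRj, jRk
    return bool(not np.any(reach2 & ~M))           # R∘R contained in R?

print(is_transitive(M))   # True
```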


October 09, 2011

Mathematical Statistics A – Example Sheet 1

Thought I may as well put answers to the A & C sections here, for simple-ish reflection. And to notice when I can’t recall how to do things, at all.

QA1: We compute the cumulative distribution function by integrating the density function (f_X(s)=e^{-s} on s\geq 0):
  \begin{align*} F_X(x) &= \int_{-\infty}^x f_X(s)\,ds\\ &=\int_0^x e^{-s}\,ds\\ &=\left. -e^{-s}\right|_0^x\\ &= 1 - e^{-x} \end{align*}
(for x\geq 0, and F_X(x)=0 below that).
We argue Y is discrete because… it takes a finite number of possible values? It’s a step function? The usual criterion is that a discrete random variable takes values in a countable set, so a finite set of values certainly suffices. Its support is \{0,2\} (I hope); pmf and cdf are:
  f_Y(y) =\begin{cases} 1-\frac{1}{e} & (y=0) \\ \frac{1}{e} & (y=2) \\ 0 & (y \notin \{0, 2\}) \end{cases}
  F_Y(y) =\begin{cases} 0 & (y<0) \\ 1-\frac{1}{e} & (0\leq y<2) \\ 1 & (y \geq 2) \end{cases}
QA2 is simple infinite sum fiddling combined with the identity for e^k.

QC5:
  Y = g(X)\\ \begin{align*} F_Y(y) &= P(g(X) \leq y)\\ &= P(X \leq g^{-1}(y))\\ &=F_X(g^{-1}(y)) \end{align*} \\ \begin{align*} f_Y(y)&=f_X(g^{-1}(y))\times(g^{-1})'(y)\\ &= \frac{f_X(g^{-1}(y))}{g'(g^{-1}(y))} \end{align*}
By the inverse function theorem, (g^{-1})'(y)=\frac{1}{g'(g^{-1}(y))}, which gives the last step. Y is continuous as the functions that make up Y are continuous – additionally, as g is an increasing bijection it is continuous and differentiable, the step from g(X)\leq y to X\leq g^{-1}(y) is valid, and its derivative is always positive, so the pdf of Y is never undefined (g' always nonzero).

For the last part, simply plug g(X) = aX + b into what you’ve already worked out, and find that Y is Gaussian (given X is the standard Gaussian).
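
A quick simulation of that last part (a sketch assuming NumPy; a = 2 and b = -1 are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
a, b = 2.0, -1.0
y = a * rng.standard_normal(1_000_000) + b   # g(X) = aX + b on standard normal X
print(y.mean(), y.std())                     # ~ b and a, so Y ~ N(b, a^2)
```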

QC6:
Find the mgf of the exponential in the normal way; use the same trick as Q4 to find the mgf of S_n. Find the mgf of S_n using the pdf; as they are equal use uniqueness to conclude the distribution is correct.

Integrate f_{S_n} from t to infinity; receive a sum from 0 to n. Observe that as the number of photons detected must be an integer, P(N=k)=P(N<k+1)-P(N<k). Plug in your previously calculated values for the answer.


August 22, 2011

Taylor Series

The Taylor series for an (infinitely differentiable) function is a power series built from its derivatives at a point; its truncations are polynomials that approximate the function. It can be represented in a few ways:
f(x)=f(a)+f'(a)(x-a)+\frac{f''(a)}{2!}(x-a)^2+h.o.t., which is equivalent to \sum_{n=0}^\infty\frac{f^{(n)}(a)}{n!}(x-a)^n and can be rewritten as:
f(x+h)=f(x)+hf'(x)+\frac{h^2}{2!}f''(x)+h.o.t.

Each can be used to form Maclaurin series, where the derivatives are evaluated at zero. Some of the more common ones (such as e^x) are worth memorising so you can spot them when they come up, but most can be evaluated easily enough, as long as you remember how.

You can also expand for more than one variable by expanding once for each in turn until the desired level of accuracy is reached. A two-dimensional quadratic approximation is
f(x+h,y+k)=f(x,y)+h\frac{\partial f}{\partial x}(x,y)+k\frac{\partial f}{\partial y}(x,y)+\frac{1}{2}\left(h^2\frac{\partial^2f}{\partial x^2}(x,y)+2hk\frac{\partial^2f}{\partial x \partial y}(x,y)+k^2\frac{\partial^2f}{\partial y^2}(x,y)\right)+h.o.t.
which generalises to the vector form
f(\vec{x}+\vec{h})=f(\vec{x})+\vec{h}\cdot\nabla f(\vec{x})+\frac{1}{2}\sum_{1\leq i,j \leq n}h_ih_j\frac{\partial^2f}{\partial x_i \partial x_j}+h.o.t.
[Geometry and Motion notes]
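
As a numerical sanity check of the two-variable quadratic approximation, here is a sketch using the arbitrary example f(x,y)=e^x\sin y, whose partial derivatives are computed by hand in the comments:

```python
import math

def f(x, y):
    return math.exp(x) * math.sin(y)

x, y, h, k = 0.3, 0.7, 1e-2, -2e-2
s = math.exp(x) * math.sin(y)   # = f = f_x = f_xx at (x, y) for this f
c = math.exp(x) * math.cos(y)   # = f_y = f_xy; and f_yy = -s
quad = f(x, y) + h * s + k * c + 0.5 * (h*h*s + 2*h*k*c - k*k*s)
print(abs(f(x + h, y + k) - quad))   # error is cubic in (h, k): ~1e-6 here
```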

Analysis II introduces Taylor’s Theorem, which explains just why Taylor series work, and how good an approximation they are depending on how far into the sum you go.

