In My Head
https://blogs.warwick.ac.uk/shemvanich
Warwick Blogs, University of Warwick, https://blogs.warwick.ac.uk

What to do tomorrow
https://blogs.warwick.ac.uk/shemvanich/entry/what_to_do_1/
<p>1) Check why using “cond()” rather than “replace” leads to a better rate of convergence.<br />
2) Check whether we made any mistake translating the code from intreg to our <span class="caps">GMM</span> code (especially the “cond()” part).<br />
3) If “cond()” is better, then use it instead of “replace”.<br />
4) Is the code for b/sigma correct? That is, the “cond()” and “replace” issue.</p>

Thu, 31 Aug 2006 22:51:21 GMT

Simulation 2
https://blogs.warwick.ac.uk/shemvanich/entry/simulation_2/
Today I tried to change the moment conditions for the intercept, the coefficient, and sigma. I hope that, with these modifications, the code will work better. Instead of writing two lines of code for r = 1 and r = 0, I combined them using (1–r) and (r). However, this did not change the results.

Wed, 09 Aug 2006 13:55:48 GMT

Simulation 1
https://blogs.warwick.ac.uk/shemvanich/entry/simulation_1/
Today I compared two simulation studies: 40 replications, discrete missing mechanism, but one study uses bb parameters whereas the other uses bs parameters. It seems that the study with bb parameters is better. This result contradicts the original result from <span class="caps">LFS</span> data, where I concluded that the bs parameters were better.

Tue, 08 Aug 2006 22:37:37 GMT

The Beginning of The End
https://blogs.warwick.ac.uk/shemvanich/entry/the_beginning_of/
<p>(1) Timeline<br />
August: Simulations, learn Matlab, Chapter 1?<br />
September: Moving out of the office<br />
October<br />
November<br />
December<br />
January<br />
February<br />
March</p>
<p>There are about 8 months left before April!!</p>
<p>What has to be done:<br />
– Simulation studies (a lot)<br />
– Finish the first chapter with Richard (difficult)<br />
– Tang, Little and Rubin's estimator<br />
– Bounds<br />
– Maximum Score for <span class="caps">IPW</span></p>
<p>(<del>_</del>'')</p>
<p>Today, I tried to fix the simulation programme by changing the "genr"; that is, by using the discrete missing mechanism. However, this does not solve the problem. The convergence rate is still poor. Why?</p>

Mon, 07 Aug 2006 10:18:53 GMT

Identification on the theory part
https://blogs.warwick.ac.uk/shemvanich/entry/identification_on_the/
<p>On the identification issue: when Y is continuous, Tang, Little and Raghunathan (2003) mention that only a certain type of parametric family can be allowed. We have to argue against this.</p>
<p>We might be able to use Manski's results. But we have to combine identification in choice-based sampling with missing data: show that when P(Y|X) is specified, any model is OK.</p>
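As a toy numerical check of this idea, here is a minimal Python sketch. It is only an illustration: the outcome distribution and the response probabilities 0.9 and 0.5 are invented, not from the thesis. When the response probability depends on Y but is known, weighting responders by its inverse recovers the true mean, while the raw complete-case mean does not.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Binary outcome of interest; true mean P(Y = 1) = 0.5 (invented).
y = rng.binomial(1, 0.5, size=n)

# Missingness depends on Y itself, as in choice-based sampling:
# units with Y = 1 respond with prob 0.9, units with Y = 0 with prob 0.5.
p_resp = np.where(y == 1, 0.9, 0.5)
r = rng.binomial(1, p_resp)

# Complete-case mean over-represents Y = 1 responders (approx 0.64 here).
cc_mean = y[r == 1].mean()

# With the response probabilities known, inverse-probability weighting
# of the responders recovers the true mean (approx 0.5).
ipw_mean = np.sum(r * y / p_resp) / np.sum(r / p_resp)

print(cc_mean, ipw_mean)
```

The same weighting logic is what an <span class="caps">IPW M</span>-estimator applies to a full objective function rather than to a sample mean.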
<p>Show that when H(Y,X) and f(X) are known, the missing-data mechanism is known, so the missing-data problem becomes a choice-based sampling problem. Then, when P(Y|X) is specified, all we have to do is find an objective function whose unique solution is the true theta. In this second step, we may be able to use some material from Manski's chapter on choice-based sampling.</p>

Thu, 05 Jan 2006 16:32:57 GMT

Meeting with Mark, Tuesday 22
https://blogs.warwick.ac.uk/shemvanich/entry/meeting_with_mark/
<p>(1) Tidy up the report:</p>
<p> – find a criterion for whether or not to cut the outliers</p>
<p> – use the pwt03 weight, as pwit03 is not an exogenous weight</p>
<p> – use pweight instead of fweight</p>
<p> – look in the statistics literature for how to adjust extreme weights</p>
<p> – use the hetprob model instead of probit, and look for other binary choice models as well</p>
<p> – read the paper that Mark gave me about weighted <span class="caps">OLS</span></p>
<p> – add the details that Mark asked for in the report<br />
(2) Prepare for the selection model.</p>

Thu, 24 Nov 2005 16:15:43 GMT

Works to be done in the next week
https://blogs.warwick.ac.uk/shemvanich/entry/works_to_be/
<p>1 Continue writing Richard's chapter<br />
2 Read Mark's reply and prepare to discuss it with him<br />
3 Prepare for teaching next Wednesday<br />
4 Prepare to discuss Skinner's paper with Richard</p>
<p>There is a possible overlap between (2) and (4).</p>

Fri, 11 Nov 2005 11:00:52 GMT

Plewis's example and Weighting in Regression Model
https://blogs.warwick.ac.uk/shemvanich/entry/plewiss_example_and/
<p>Plewis's example of whether or not to use non-response weights is very interesting. How is it related to weighting in a regression context? Can we just adopt it? Maybe not. In the conditional-model context, do we know that if the response probability varies with X, then the adjusted mean is better than the unadjusted one?</p>
<p>We should try to apply Richard's approach to Plewis's example.<br />
In this example, the researcher ignored the information about the missing mechanism. That is, he still clings to the fact that the response rate in stratum 1 is 0.9 and in stratum 2 is 0.7. What if we stratify the population according to the values of the binary variable of interest?<br />
Then the response rate is not per stratum, but per possible value of the variable of interest, namely 0 and 1. </p>
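The arithmetic can be checked deterministically in a small Python sketch. The response rates 0.9 and 0.7 are the ones quoted above; the stratum sizes and outcome proportions are invented for illustration.

```python
# Hypothetical population of two strata. The response rates 0.9 and 0.7
# are from Plewis's example; the sizes and P(Y = 1) values are invented.
strata = [
    # (population size, P(Y = 1) in stratum, response rate)
    (1000, 0.8, 0.9),
    (1000, 0.2, 0.7),
]

true_mean = sum(n * p for n, p, _ in strata) / sum(n for n, _, _ in strata)

# Expected responder counts (treated as exact for the illustration).
resp_y1 = [n * p * rr for n, p, rr in strata]   # responders with Y = 1
resp_all = [n * rr for n, _, rr in strata]      # all responders

# Unweighted (complete-case) mean over responders: biased when the
# strata differ both in response rate and in the distribution of Y.
unweighted = sum(resp_y1) / sum(resp_all)

# Weighting each responder by the inverse response rate of its stratum
# recovers the population mean exactly.
weighted = sum(y1 / rr for (_, _, rr), y1 in zip(strata, resp_y1)) \
         / sum(a / rr for (_, _, rr), a in zip(strata, resp_all))

print(true_mean, unweighted, weighted)
```

Stratifying by the value of Y itself, as proposed above, amounts to replacing the stratum response rates with response rates for Y = 0 and Y = 1; the same computation then goes through with those rates.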
<p>This means that we have to split each stratum into two substrata and then apply Richard's method. It will be very interesting to see what happens if we use this weight instead.</p>

Tue, 25 Oct 2005 15:00:53 GMT

An Idea about the future work on Weighting
https://blogs.warwick.ac.uk/shemvanich/entry/an_idea_about/
<p>From the <span class="caps">ESDS</span> sheet about weighting, we know that there are three types of weights:<br />
(1) sample design or probability weights;<br />
(2) non-response weights;<br />
(3) post-stratification weights.<br />
Based on observed variables, one calculates the probability of an observation being included and weights the observation by the inverse of this probability.</p>
<p>Weight (2) is also a type of <span class="caps">IPW</span>. However, we use an incomplete set of variables to put observations into classes, and observations in the same class are given the same weight. Thus, we implicitly assume that observations in the same class share the same characteristics. Of course, this could be wrong.</p>
<p>Weight (3) is just a frequency weight that adjusts our sample to represent the real population.</p>
<p>The thing is, these weights are normally combined. As we can see, (1) is like the weight in the <span class="caps">IPW M</span>-estimator and (2) is the weight of Richard and Esmeralda. Can we find an optimal way to combine these two weights? Note that (1) can vary continuously across observations according to its definition, but (2) has to be constant for observations in the same class.</p>
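One standard (though not necessarily optimal) way to combine them is multiplicative: under independence, the probability of observing a unit is the design inclusion probability times the response probability, so the combined weight is the product of the two inverse weights. A minimal sketch with invented probabilities:

```python
# Hypothetical probabilities for a single observation.
p_design = 0.25   # sampling (design) inclusion probability
p_resp = 0.8      # class-level response probability

w_design = 1 / p_design           # design weight, 4.0
w_nonresp = 1 / p_resp            # non-response weight, 1.25
w_combined = w_design * w_nonresp

# Equivalent to inverse-weighting by the overall observation probability.
assert w_combined == 1 / (p_design * p_resp)
print(w_combined)   # 5.0
```

Whether this product is optimal, given that (1) varies continuously while (2) is constant within classes, is exactly the open question above.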
<p>Another point is whether there is a difference between the weighting of survey data in general and weighting in a particular study. For example, in the <span class="caps">LFS</span> dataset that we are working on, two weights are provided. However, "hrrate" is not fully observed and we would like to do <span class="caps">IPW M</span>-estimation to take account of this missingness. So even though the provided weights (pwt03, piwt03) are calculated using non-response weights, should we calculate our own weights and combine them?</p>

Tue, 25 Oct 2005 10:13:28 GMT

Meeting with Mark today
https://blogs.warwick.ac.uk/shemvanich/entry/meeting_wit_mark/
Things to do:<br />
(1) Plot "lev" against "iprob" as a way of detecting troublesome observations.<br />
(2) Test whether the normality assumption is suitable for our binary choice model or not.<br />
(3) If not, find an alternative (note that the "unusual" weights assigned to some observations could arise because we use the wrong model for e in the latent regression).<br />
(4) We do not include "lhourpay" in the structural model because there is no suitable interpretation of the coefficient on this variable.<br />
(5) We do not include "lhourpay" in the probit model because we would like to use the same set of X's as in the structural model.<br />
(6) Write a report to Mark showing the results of the various regressions.<br />
(7) Trim "iprob" using the sample size, because this will not affect the asymptotic properties of the estimator (as the threshold "N" becomes infinite, we effectively do not trim the weights).<br />
(8) Try logit instead of probit to see whether it affects the two observations with high "iprob".<br />
(9) Try coding people who study for over 30 years as leaving education at 24 to work and then coming back to study later, so start calculating work experience from 24 years of age.

Wed, 19 Oct 2005 12:38:13 GMT
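Item (7) can be sketched in Python. The particular rule, capping the weights at c * n**alpha, is a hypothetical illustration with invented constants; the point is only that the threshold grows with the sample size, so asymptotically nothing is trimmed.

```python
import numpy as np

def trim_weights(iprob, n, c=1.0, alpha=0.25):
    """Cap inverse-probability weights at the threshold c * n**alpha.

    Hypothetical rule for item (7): the cap grows with the sample
    size n, so in the limit no observation is trimmed and the
    asymptotic properties of the estimator should be unaffected.
    """
    threshold = c * n ** alpha
    return np.minimum(iprob, threshold)

w = np.array([1.2, 3.0, 40.0, 2.5])
print(trim_weights(w, n=10_000))   # threshold 10000**0.25 = 10, so 40.0 is capped
```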