All 3 entries tagged Methodology Chapter
July 02, 2018
Like a toddler running back and forth into the arms of those who love them, ideas and visions that were previously considered irrelevant for this project (though perhaps suitable for another) have been running back to me like that happy little toddler. Everyone say aww…
(Oh, by the way, I’m not at all suggesting that toddlers are irrelevant! Even if they turn into delightful, screaming, door-slamming teenagers…)
The day has been a productive coding session. As I have been coding the data and observing patterns and meanings within it, I have come to realise that certain patterns and meanings once considered irrelevant are now becoming more relevant. I have also observed new patterns and meanings that I had not noticed when previous sets of data were coded; or, at least, patterns and meanings that had not made themselves obvious until now, even though I might have observed them before without consciously acknowledging them, for whatever reason. I think this is a psychological thing: the more sensitised you become to a particular pattern or meaning, the more you start to think, later in the coding process, that you have observed something similar before in different contexts, and then you begin to identify the bigger picture or wider pattern of behaviour. It’s a very interesting and very involving process. What I have found during the day is making me rethink what I have coded previously, and the way in which I have interpreted and perceived what is occurring in the data; this might lead to recoding the data as I go through a deeper coding phase and build further understanding of the phenomenon of interest. I’ll be talking more about this in another post later this week.
In the meantime, it is clearer to me now than ever, and it might be good practice for other Ph.D. candidates to adopt, that you should not throw away old ideas and visions that were previously considered irrelevant. I have taken this approach from the beginning of the Ph.D.: I have folders upon folders of books, research papers, and thesis-related documents and notes, and a fair percentage has moved back and forth between the archive folders and the working folders as items were re-examined for relevance at particular points in the project so far.
Now some of the oldest ideas and visions I had at the earliest stages of the Ph.D. are becoming more relevant for answering my research questions and addressing the research problems. But more than that: what I wrote earlier in a theoretical memo, documenting my thinking about what I was observing, was an attempt at building upon those earlier visions. It’s really interesting when you have built your earliest visions upon a section of existing literature and then observe within the data what you once thought was irrelevant; it brings home the thought that nothing is really impossible. There is a slight problem, however.
These older ideas and visions have resurfaced a fair way into the reanalysis and coding phase, which leaves me with a couple of questions. Do I carry on with the coding and analysis and simply note the point at which I observed a new aspect of a phenomenon to be relevant? Or do I reanalyse the data and code for these additional observations that I made later in the coding?
The methodological literature I have come across so far is not clear on this subject, although it is one I shall read more about. I have come across a paper suggesting that you do not have to reanalyse the data to code new observations, but, from what I remember, this was associated with grounded-theory-based open coding, where you are coding to build a theory rather than to identify and relate themes. I am leaning towards yes: I would have to recode the data to capture more instances and examples of what I have observed, in order to validate and authenticate the existence of what I have been observing.
Of course, this leads on to other philosophical questions, such as: does repeatability really represent truth? If you observe something often enough, does it really exist in an external reality, or does it exist within our own interpretations? What if others are not able to perceive or observe what a researcher finds observable? In what way can I tell that something might exist in an external reality? In what way can I possibly know what I know to be true? These, and more, are challenging questions, but the key, I think, is to keep everything grounded in the data and make sure that arguments and observations are built from the data. You cannot build from existing theory alone; you can, however, build from a relationship between data observations and existing theory, but I shall cover that point at a later time.
With all that in mind, what I am thinking of doing is analysing the data while keeping the original copy intact, and embedding evidence of a change in perspective or the observation of a potentially key new theme. This would take the form of a theoretical note embedded within the data, marking precisely the point at which I began to observe the importance and relevance of an event or meaning that could form part of a theme. This would evidence the progression of my thinking up to that point. I am not really sure what the literature says on this subject, but I am becoming convinced that this might be the best approach.
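To make the idea concrete, here is a minimal sketch of that record-keeping habit, assuming a simple Python structure (the function and field names are my own hypothetical illustration, not any particular analysis tool): the original data is never modified, and each change of perspective is logged separately as a dated note pointing at the exact place the new theme was first observed.

```python
from datetime import date

# The original transcript record stays untouched; notes live in a separate log.
original = {"transcript": "group-session-03", "text": "..."}  # never modified

perspective_notes = []

def mark_new_theme(transcript_id, position, theme, observed_on, note):
    """Record the exact point at which a new theme became relevant."""
    perspective_notes.append({
        "transcript": transcript_id,
        "position": position,      # e.g. a line or segment index in the transcript
        "theme": theme,
        "observed_on": observed_on,
        "note": note,
    })

mark_new_theme("group-session-03", 42, "negotiating shared meaning",
               date(2018, 7, 2),
               "First point at which this pattern struck me as relevant.")

print(len(perspective_notes), "note(s) recorded")
```

Because the notes are dated and positioned, they evidence the progression of thinking without disturbing the data itself.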
The key lesson here, really, is: don’t throw out your old ideas. Whether an idea is a few lines on a scrappy piece of paper or a rushed series of paragraphs in the word processor, keep it! Archive it, or put it in some relevant folder or whatever storage system you have, so that you can refer back to those ideas in the future if they prove relevant. Another lesson is: don’t focus your mind exclusively on what you found previously.
In other words, don’t code one set of data and then focus the next set on what you discovered before; keep an open mind. (I know this is rather a contentious point in academic debates about coding approaches; another contentious point is whether anything is actually discovered at all, rather than interpreted.) Of course, what you find whilst you are coding and thinking about the data is exciting, overwhelmingly exciting, but keep a level head, keep an open mind, and don’t be distracted by what you have observed previously. If you become too focussed on previous observations you begin to lose innovation and originality, and become enslaved by them. Keep an open mind, keep coding for original insights and meanings, and think and plan carefully to determine whether there is a real need to reanalyse the data when you find something new a fair way into your analysis. This really depends on your research questions, your research problem, and the way in which what you have observed relates to explaining the phenomenon of interest.
‘till next time!
June 29, 2018
Since the previous blog post I have returned to data analysis: I have reanalysed previously analysed data, organised my data corpus (and noted where I can find more data to analyse if need be), and begun to identify potential themes and their potential relationships with each other, based on the observations and coding completed so far. These themes, once further analysis confirms they actually exist, shall become the core themes of the phenomenon of interest and, therefore, the objects of further analysis in the phase following thematic analysis. Because more coding needs to be completed, I cannot say with any solid certainty that these will become the core themes of the rest of the analysis; however, I have made enough observations to suggest that the identified themes are likely to be the main themes, with any others likely to be sub-themes. An open mind is still required, though: as I code through the data and enter the next stage of thematic analysis, I could identify more core themes.
What have I done to reach the current point of coding? The very first step, before even coding the data, is to become familiar with it. This has been a journey in itself, as I battled with different philosophical perspectives and with finding the most efficient and effective lens through which this particular kind of text should be analysed. I am more or less settled now; in the thesis it is a case of detailing what my philosophical beliefs are, how they shape the way I perceive, engage with, and interpret the data, and how they relate to the research problem and research questions and fit with the rest of the research design.
Away from philosophy and onto the data level: becoming familiar with the data makes sense, as it gives you the widest scope and the widest sense of the nature of the data. It is through familiarising yourself with the data that you can begin to view high-level, abstract structures, potential hierarchies, and forms of organisation within the data. The participants might not have intended their interactions, with you as a researcher or with each other, to produce such structures, but those structures do exist in an external reality and can be reflected unconsciously within certain parts of the data at certain times. The nature and composition of these structures, hierarchies, and organisations depend on the type of text being analysed: interview transcripts, for example, differ completely from group learning transcripts. What I am finding, however, is that familiarisation can continue past the familiarity phase and into the coding phase. From my own experience, as I code through the data I find myself exploring it closely and beginning to view these hierarchies and structures at a closer level. These realisations and characteristics of the data were not revealed immediately; it has taken several rereads and several rounds of coding to begin to understand the nature of the data and, therefore, the constructs and structures of that particular nature. This is something I shall discuss in more depth in the thesis. It’s important to state that I am not necessarily observing both “macro” and “micro” structures, as what I am following is a micro-level analysis set within a particular context. It really depends on what you can observe in the data, the type of text you are analysing, and the purpose of your research.
Sometimes interactions can be theologically and politically influenced, for example, and this can be reflected in the data. It’s arguably simply a matter of working through the data and carefully and comprehensively thinking about what it is you are observing.
As for the coding process: I have identified the data corpus and am about halfway through the coding phase. The approach I have adopted is what I call segment-by-segment analysis. Some argue for line-by-line or sentence-by-sentence analysis, but I am going to argue that these approaches are ineffective within the context of my research. Sometimes a single line or a single sentence is not enough to capture the event or action you are observing in the data: sometimes events and actions can be observed within half a sentence or half a line, and sometimes at a level greater than a sentence or a line. Segment-by-segment analysis, based on the interpretation or observation of meaningful events or actions, is a more flexible and pragmatic approach for my research: it enables me to break each block of data into meaningful segments that can sit below or above sentence level. I define a segment as meaningful because it contains an event or action, expressed, described, or in some way engaged with, that holds a particular meaning for my research purposes. A single sentence, therefore, could contain multiple meaningful events and activities that sentence-by-sentence and line-by-line approaches would miss.
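The segment idea can be sketched with a toy structure (a hypothetical illustration only, not any CAQDAS tool; the class and field names are my own): each segment records its span within the transcript plus its assigned code, so a segment is free to sit below or above sentence level, and one sentence can yield more than one coded segment.

```python
from dataclasses import dataclass

@dataclass
class CodedSegment:
    """A meaningful segment: a span of transcript text plus its assigned code."""
    transcript_id: str
    start: int   # character offset where the segment begins
    end: int     # character offset where the segment ends
    code: str    # short label capturing the event or action observed

transcript = "I wasn't sure at first but then the group explained it and it clicked."

# Two meaningful events share one sentence here, something that line-by-line
# or sentence-by-sentence coding would flatten into a single unit.
segments = [
    CodedSegment("group-session-01", 0, 22, "expressing uncertainty"),
    CodedSegment("group-session-01", 23, 70, "peer explanation resolving confusion"),
]

for seg in segments:
    print(seg.code, "->", transcript[seg.start:seg.end])
```

The offsets are arbitrary here; in practice the segment boundaries come from the researcher’s interpretation of where a meaningful event starts and ends.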
I have assigned each of these meaningful segments a code, which represents or encapsulates the general meaning or description of the event or activity contained within that segment. Again, what this event or activity is depends on what you perceive, on what’s important to you and your research, on what relates to your research question and research problem, and on the nature of the transcript. Previously, when I used grounded theory, I generated many codes, and as I went through the previously coded transcript I altered some codes, dropped a few, and added new ones. This time, more than ever, I feel that I have been able to capture the essence of each segment in a way I did not before; I can perceive events and activities in the data and view relationships between segments that I had not previously been able to recognise or identify. This has helped during the coding of further transcripts, and even then I have been observing new occurrences, happenings, events, and actions that I had not observed in previous transcripts. Unsurprisingly, I have generated many codes.
The more you read through your data and become familiar with it, the more you learn about it; with each reading session, new properties, events, dimensions, and even higher-level relationships and structures reveal themselves. There is much debate, however, as to whether these observations really exist in the data, or whether they are only what we perceive in it. This is a complex yet fascinating area of debate, and one I shall engage with in the thesis.
As I have been coding I have been writing short theoretical memos. The memos written at this stage document the continuous, evolving process of thinking and theorising about the codes and the data. They describe and explain the motivations, intentions, meanings, production, and context of the meaningful segments, as well as the meaning of the code itself, along with any other thoughts, hunches, ideas, observations, potential hypotheses, questions, and predictions relevant to the research. These memos are very important: they ultimately form a substantial part of the findings and discussion chapters, and (along with any journals you might keep) they help you document your analytical and theoretical journey.
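Continuing the toy sketch from a record-keeping angle (again hypothetical names, not a prescribed tool): if each memo is attached to a code and dated, then pulling the memos for one code back out, in order, reconstructs how thinking about that code evolved.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Memo:
    """A theoretical memo attached to a code, dated so the journey is traceable."""
    code: str
    written: date
    text: str

memos = [
    Memo("peer explanation resolving confusion", date(2018, 6, 29),
         "Hunch: confusion is resolved more often by peers than by materials."),
]

def memos_for(code, memo_list):
    """Return all memos for a code, oldest first."""
    return sorted((m for m in memo_list if m.code == code),
                  key=lambda m: m.written)

history = memos_for("peer explanation resolving confusion", memos)
print(len(history), "memo(s) found")
```

A plain notebook or journal serves the same purpose; the point is only that memos tied to codes and dates can later be replayed as evidence of the analytical journey.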
Your thinking, theorising, comprehension, and understanding develop as you code through the data and identify the similarities and differences between similarly coded data segments; these can form the starting point for identifying and developing your themes. But that’s another aspect of the analysis, to cover in another blog post!
June 10, 2018
I have now switched, for the time being, from the literature review to the methodology chapter(s). Unsurprisingly, there shall be a substantial amount of editing and rewriting of existing sections, as they were written at a time when I was using a pure grounded theory approach. I think it would be a mistake, however, to devote any allocated time frame to just a single thesis chapter because, in my opinion, the construction of a thesis is not a linear process, particularly in qualitative research. There is fluidity in the intellectual movement across chapters as they are constructed and/or edited. As you read and write for a particular chapter, ideas and thoughts relevant to earlier or later chapters might be revealed. Do not fight these occurrences: record them in whatever way is convenient at the time, even if it’s just a few words written down quickly on a piece of paper, so that you can follow up on your ideas later. We all develop a strategy for this: for example, I write more extensive ideas down on paper before transferring them to the computer and extending and amending them accordingly; for any terms I want to explore further, I simply type some key words into a search engine and save the results for future exploration. Whatever you do, do not dismiss or undermine any ideas that come to you: during the Ph.D. so far I have found a lot of value in keeping ideas, documents, papers, and word-processed pages of previous ideas, as lots of previous work has recently proved quite relevant. Don’t dismiss or discount anything that comes to you!
The current methodological writing is on paper rather than on the computer. I find this beneficial because, when writing on paper, I sometimes feel I can explore and play with my ideas better than I can on the computer. You could call this experimental writing of ideas, where I try to elaborate on my ideas carefully, test them against what is suggested in the literature, and think carefully about the way the literature supports them. I obviously cannot write a thesis chapter on paper, but what I can do more effectively there is experiment with my writing and my thoughts. I can do this on the computer too, but I feel it’s best to start on paper; that’s just my preference! Opposition is welcome as well, because if you engage with opposing views you can carefully construct a reasonable response that continues to support your own, so long as what you construct is logical and counters the opposing claims in a reasonable way, with well-grounded elaborations and explanations supported, where necessary and appropriate, by relevant literature.
The topic of my current methodological writing is philosophy; more specifically, my ontological beliefs and the way they are shaping and guiding the utilisation and perspective of the newly assigned methods, as well as my views of the type and source of data. Briefly, I consider myself an ontological realist (more moderate than staunch), which impacts, as mentioned, the way I perceive the value of different types and sources of data, and explains the way in which the research methods shall be utilised. Being a realist impacts what I perceive to be real, what I consider a more truthful or accurate representation of reality, and therefore the way in which different types and sources of data are to be engaged with in order to best understand that reality. These are the topics I have been writing about; obviously there is much more to think about, so this is an ongoing process, and as time goes on these notes shall be extended and amended in various ways.
What I intend to do is write the methodology chapter as I go through the analysis process; at least, the sections that most closely relate to the utilisation of these research methods, as the methodology chapter(s) contain sections where you have to explain and critique your own understanding and utilisation of whatever research methodology and methods you use. In the meantime, I shall be working on elaborating my philosophical beliefs and their relationship with the research method and the source and type of data, before progressing to the first stage of analysis, which shall be reanalysing the data.
More on this in the next blog post!