June 29, 2018
Since the previous blog post I have returned to data analysis: I have reanalysed previously analysed data, organised my data corpus (and identified where I can find more data to analyse if need be), and begun to identify potential themes and their potential relationships with each other, based on the observations and coding completed so far. These themes, if confirmed to exist through further analysis, shall become the core themes of the phenomenon of interest and, therefore, the objects of further analysis in the phase following thematic analysis. Because more coding needs to be completed I cannot say with any certainty that these themes will become the core themes that focus the rest of the analysis; however, I have made enough observations to suggest that the identified themes will be the main themes and that any other themes are likely to be sub-themes. An open mind is still required, though, and as I code through the data and enter the next stage of thematic analysis I could well identify more core themes.
What have I done in order to reach the current point of coding? The very first step, before even coding the data, is to become familiar with your data. This has been a journey in itself as I battled with different philosophical perspectives and searched for the most efficient and effective lens through which this particular kind of text should be analysed. I am more or less settled on this now, and in the thesis it is a case of detailing what my philosophical beliefs are, the ways in which they shape how I perceive, engage with, and interpret the data, and the ways in which they relate to the research problem and research questions and fit in with the rest of the research design.
Away from philosophy, however, and onto the data level: becoming familiar with the data makes sense because it gives you the widest scope and the widest sense of the nature of the data. It is through familiarising yourself with the data that you can begin to view high-level, abstract structures, potential hierarchies, and forms of organisation within the data. The participants might not have intended their interactions with you as a researcher, or with each other, to produce such structures, but those structures do exist in an external reality and can be reflected unconsciously within certain parts of the data at certain times. The nature and composition of these structures, hierarchies and organisations depend, however, on the type of text being analysed: interview transcripts, for example, shall differ completely from group learning transcripts. What I am finding, though, is that data familiarity continues past the familiarity phase and into the coding phase. From my own experience, as I code through the data I explore it more closely and begin to view these hierarchies, structures and so on at a finer level. These realisations and characteristics of the data were not revealed immediately; it has taken several rereads and several rounds of coding to begin to fully understand the nature of the data and, therefore, the constructs and structures particular to it. This is something I shall be discussing in more depth in the thesis. It is important to state that I am not necessarily observing both "macro" and "micro" structures, as what I am following is a micro-level analysis set within a particular context. It really depends on what you can observe in the data, on the type of text you are analysing, and on the purpose of your research.
Sometimes interactions can be theologically and politically influenced, for example, and this can be reflected in the data. It’s arguably simply a matter of working through the data and carefully and comprehensively thinking about what it is you are observing.
As for the coding process: I have identified the data corpus and am about halfway through the coding phase. The approach to coding I have adopted is what I call a segment-by-segment analysis. Some argue for a line-by-line analysis or a sentence-by-sentence analysis, but I am going to be arguing that these analytical approaches are ineffective within the context of my research. Sometimes a single line or a single sentence is not enough to capture the event or action that you are observing in the data: sometimes events and actions can be observed within half a sentence or half a line, and sometimes at a level greater than a sentence or a line. Segment-by-segment analysis, based on the interpretation or observation of meaningful events or actions, is a more flexible and pragmatic approach for my research: it enables me to break each block of data into meaningful segments that can sit below or above sentence level. I define a segment as meaningful when it contains an event or action that is expressed, described, or in some way engaged with in a way that holds particular meaning for my research purposes. A single sentence, therefore, could contain multiple meaningful events and activities that would be missed by sentence-by-sentence and line-by-line analytical approaches.
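To make the contrast concrete, here is a purely illustrative sketch, written in Python simply because it is a convenient notation (it is not part of my actual workflow, and the transcript sentence and code labels are invented for the example). It shows how a single sentence can yield two coded segments that a sentence-by-sentence or line-by-line pass would treat as one unit:

```python
from dataclasses import dataclass

@dataclass
class CodedSegment:
    text: str   # a meaningful segment -- may be smaller or larger than a sentence
    code: str   # a label encapsulating the event or action observed in the segment

# One invented transcript sentence containing two distinct actions:
sentence = "I asked the group for help and then tried the task again myself."

# Segment-by-segment coding splits it at the boundary between the two actions,
# assigning each its own code:
segments = [
    CodedSegment("I asked the group for help", "seeking peer support"),
    CodedSegment("then tried the task again myself", "independent retry"),
]

for s in segments:
    print(f"[{s.code}] {s.text}")
```

A sentence-level unit of analysis would force a single code onto this sentence, losing the distinction between the two actions; the segment-level representation keeps both.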
I have assigned each of these meaningful segments a code, which represents or encapsulates the general meaning or description of the event or activity contained within that segment. Again, what counts as an event, activity or action depends on what you perceive, on what is important to you and your research, on what relates to your research question and research problem, and on the nature of the transcript. Previously, when I used Grounded Theory, I generated many codes, and as I went back through the previously coded transcript I altered some of the codes, dropped a few, and added new ones. This round of coding, more than ever, I feel that I have been able to capture the essence of each segment in a way I did not before; I can perceive and observe events and activities in the data, and view relationships between segments, that I had not previously been able to recognise or identify. This has helped during the coding of further transcripts, and even then I have been observing new occurrences, happenings, events and actions within the data that I had not observed in the earlier transcripts. Unsurprisingly, I have generated many codes.
The more you read through your data and become familiar with it, the more you learn about it, and with each reading session new properties, events, dimensions and even higher-level relationships and structures reveal themselves. There is much debate, however, as to whether these observations really do exist in the data or are simply what we perceive in it. This is a complex yet fascinating area of debate, and one I shall engage with in the thesis.
As I have been coding I have been writing short theoretical memos. The memos written at this stage serve to document the continuous and evolving process of thinking and theorising about the codes and the data. They describe and explain the motivations, intentions, meanings, production, and context of the meaningful segments, as well as the meaning of the code itself, along with any other thoughts, hunches, ideas, observations, potential hypotheses, questions and predictions relevant to the research. These memos are very important: they ultimately form a substantial part of the findings and discussion chapters, and they assist you (along with any journals you might keep) in documenting your analytical and theoretical journey.
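As a loose illustration only (again in Python as a notation of convenience; the structure, field names and example memo are my own inventions for this sketch, not a tool I actually use), a memo can be thought of as a dated note attached to a code, so that all memos for a given code can later be gathered together when developing themes:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Memo:
    code: str     # the code this memo theorises about
    note: str     # thoughts, hunches, questions, predictions
    written: date = field(default_factory=date.today)

memos = [
    Memo("seeking peer support",
         "Appears mainly after a failed solo attempt; is help-seeking "
         "a response to task difficulty or to group norms?"),
]

# Gather every memo attached to a given code when moving on to theme development:
related = [m for m in memos if m.code == "seeking peer support"]
print(len(related))  # 1
```

The point of the sketch is simply that memos accumulate against codes over time, and retrieving them per code is what lets the memos feed directly into the findings and discussion chapters.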
Your thinking, theorising, comprehension and understanding develop and progress as you code through the data, and as you identify the similarities and differences between similarly coded data segments. These can form the starting point for identifying and developing your themes, but that is another aspect of the analysis to cover in another blog post!