All 7 entries tagged Coding
February 21, 2019
Update On The Ph.D Work, Part B: Literature Positionality and Theoretical Framework
Literature Positionality
Because of the nature of inductively based qualitative research, different types of literature are positioned in different areas of the thesis. It took me a long time to understand where to position each type of literature in order to achieve different purposes, but things are getting there!
As has been mentioned, the literature review chapter at the beginning of the thesis uses literature to develop a context within which I can justifiably place my research. This justifiable position comes as a result of critically analysing the way in which the social learning process and the technology in use have been defined, explored, and used before in various learning scenarios. This builds up a picture of the need to explore the specific social learning process within a particular learning scenario that is arguably unexplored, or has not yet been fully explained, facilitated by particular technologies. This involves plenty of comparisons between different learning contexts and scenarios, and explorations and comparisons of the definitions, functionality and use of social learning processes and technologies within different learning contexts. That’s the aim of the earlier literature review in a nutshell. The literature here therefore takes a broad view of the research context, e.g. exploring the social learning process of interest within different technological and learning contexts, and exploring the use of the technology of interest and its facilitation of social learning processes within different learning contexts. This gives weight to the justification of the research context of interest, because it indicates how the process and technology have been used and explored in different contexts, and can be used to explain how a different context can further illuminate aspects of the phenomenon of interest that arguably remain unexplored and/or unexplained.
Other types of literature shall be included in later thesis chapters specifically relating to the discussion of the themes. In a nutshell, the literature involved here shall consist of work containing themes similar to those I have found (if I did not do this, I would be falsifying findings, giving misleading accounts, and reducing the validity and verifiability of the themes), but I would use the discussions to show how I have explored the themes in a different way. This would include showing the differences in how I have explored the themes, the differences in the context of theme construction, and the way in which my themes build upon what has already been discovered. The literature here is very specific and has a very specific purpose: to validate and verify the themes, and to provide a platform from which I can build upon what already exists.
Thematic Framework
This is the core of the research and its development is a continuous, ongoing task, and shall be right up to Easter and perhaps a bit beyond. However, feedback has suggested that I am nearly there! The themes appear to be fine and the codes themselves still need some work, but what I am finding is that changes to the codes do not necessarily mean changes to the theme, and indeed changes to the names of codes do not always necessitate changes to their meanings.
Meaning is a key word here and to write about the meaning of meaning (meta-meaning?) would take a thesis in its own right, but essentially because of the inductive nature I am applying meaning to what I interpret and perceive from the data (note that this does not reduce itself to relativist research as I am not adopting a relativist ontology). Themes and codes therefore capture the meaning that I am interpreting from the data, and together they describe and explain the phenomenon of interest: its behaviour, structure, impact, and existence.
In general I am getting happier with the way in which the thematic framework is going. There is still work to be done to it up to Easter and perhaps beyond, but I am pleased with where it’s going so far!
December 30, 2018
Ph.D Update: Up To Christmas 2018 Part A, Coding Framework and Thematic Analysis
Wishing my blog readers a Merry Christmas and a Happy New Year! I apologise for not writing any blog updates since the middle of November. There were a few tasks I wanted to complete before Christmas, so I had no spare time for blog posts. With the New Year approaching, I am now planning what to do between January and Easter, and there is a lot to complete, but I shall get to that in a while. In the meantime, this blog post is one of two that shall provide an update on my most recent work: this post covers the development of the coding framework, and the next shall cover the progress of the literature review.
Themes, Sub Themes and Coding Framework
When I wrote about the continuous framework development back in mid November, the coding framework was, at least tentatively, complete. I was also in the middle of rechecking all previously coded data to ensure that I had been interpreting consistently and coding accurately. Since then, the idea of interpreting consistently and coding accurately has become clearer, along with my understanding of how interpretation consistency increases coding accuracy. This is an especially interesting point given that coding is subsequent to, and a reflection of, the act of interpreting.
Whether or not coding accuracy and interpretation consistency increase truth, or progress towards it, is highly debatable given the nature of qualitative research and the characteristics of the inductive thematic analysis approach. I could argue for, and apply means to, increasing the validity, accuracy, consistency and credibility of my approach and the findings, but can I really argue that the findings represent truth and that my approach could lead people closer to the truth?
What I can argue in the thesis is the importance of accurate coding and consistent interpretation leading to more valid and reliable findings, whilst at the same time accepting that different researchers shall interpret the data in different ways and, therefore, could view any data segment differently depending on various personal factors. Essentially, coding is an interpretation: a code represents an interpretation of whatever action, event, etc. is appropriate and relevant to the research question. If you code a series of segments using the same code but the segments are not consistent with each other, then that code represents an inaccurate or incorrect interpretation. I have some possible examples that I could think about in the thesis, but I have to give this some thought when I put the research design chapter together. I shall be going into a lot more detail in the thesis.
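Checking that a code has been applied consistently is easier when every segment carrying that code can be read side by side. As a minimal illustration (the codes and segment texts below are invented placeholders, not taken from my data), coded segments can be grouped by code for review:

```python
from collections import defaultdict

# Hypothetical coded segments: (code, segment text) pairs.
coded_segments = [
    ("peer-feedback", "Student A comments on Student B's draft."),
    ("peer-feedback", "Student C suggests a revision to the shared notes."),
    ("question-asking", "Student B asks the group to clarify a term."),
]

def group_by_code(segments):
    """Collect every segment under its code so that all segments
    carrying the same code can be reviewed side by side."""
    grouped = defaultdict(list)
    for code, text in segments:
        grouped[code].append(text)
    return dict(grouped)
```

The consistency judgement itself remains an interpretive act; the grouping simply makes inconsistencies easier to spot.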
Just before Christmas I completed the rechecking of the previously coded data, and can state that I am satisfied that my coding is accurate and my interpretations are consistent, at least in accordance with my own interests and research questions (again, I shall talk about this more substantially in the thesis). What I had not expected to complete by Christmas was the categorisation and classification of codes into different sub themes and themes. Contrary to what appears to be the norm, I have been able to develop themes not from the most commonly occurring codes, but from codes that represent what I consider to be important observations within the data: important in reference to the research questions and to the characteristics and aspects of the phenomenon of research interest that interest me the most. It has to be emphasised that the coding framework and the thematic development as they currently stand do not represent the final product. The themes shall be developed and reformulated as time progresses, as a result of thematic validation and verification using a variety of different processes. These include a further examination of themes to identify similarities and possible opportunities to combine themes, the possibility of identifying “super themes,” and conversations with other academics regarding the codes, sub themes and themes that I am using.
In all, I am pleased with the progress that has been made with the thematic analysis and development. The next stage of the analysis shall begin early next year and this shall involve not just the validation and verification of the themes, but also validation and verification of relationships between themes through both qualitative and quantitative means. The quantitative representation does not necessitate a mixed methods approach but does necessitate a multimodal design where the quantitative data simply supports and adds weight to what was identified and explained qualitatively. Working this out shall naturally take time!
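As a sketch of what a simple quantitative supplement to the qualitative analysis might look like (the themes and excerpts here are hypothetical placeholders), co-occurrence counts can add weight to qualitatively identified relationships between themes:

```python
from collections import Counter
from itertools import combinations

# Hypothetical excerpts, each tagged with the themes observed in it.
excerpts = [
    {"collaboration", "feedback"},
    {"collaboration", "feedback", "motivation"},
    {"motivation"},
]

def theme_cooccurrence(excerpts):
    """Count how often each pair of themes appears in the same excerpt,
    as simple numeric support for qualitatively identified links."""
    counts = Counter()
    for themes in excerpts:
        for pair in combinations(sorted(themes), 2):
            counts[pair] += 1
    return counts
```

The counts do not replace the qualitative explanation of why two themes are related; they only indicate how often the relationship is visible in the data.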
November 19, 2018
Ph.D Update: Up To Mid November 2018, Part B: Continuing To Write Extensively
Writing is a continuous, ongoing task in qualitative research, but the question is: what do you write? Many qualitative methodological textbooks, and my own experiences, suggest that it is very important to document what you observe and begin to interpret very early in the qualitative process. Typically, quantitative research is fairly set in nature and the writing of the research findings usually takes place following the analysis phase. With qualitative research, you begin to write about your findings and interpretations at the very beginning of the analytical process. Your writings, interpretations, coding schemes, etc. all change and evolve over time, and it is always wise to write about these changes as they occur.
Reflect on these changes and alternatives, explain the way in which these changes have impacted your research, compare the changed approach to the previous approach, and evaluate these changes. All these reflections shall form a part of your analysis and overall production of the research design chapter and later thesis chapters.
Typically in qualitative research, data analysis and the writing up of interpretations and findings occur simultaneously. What I am finding, in addition to the norm, is that I am also writing about the research design as I go through each data analysis stage and phase. I have found that my analytical lens and general analytical approach have changed as I have progressed through the data analysis and reread the data several times. With this, I am not just contributing towards the findings and discussion related chapters simultaneously with data analysis, but also towards various aspects of the research design chapter.
Trust me, this can be quite mind boggling. But for me it’s an approach that works, as I have always seen little sense in writing the research design chapter before the data analysis began. I did attempt this before, but as I progressed through the data analysis I found that what I was finding challenged what I thought, and it continues to do so. From that point it made sense to write about the design as I progressed through the data analysis.
It was a couple of years or so ago that I started the qualitative journey, after moving away from mixed methods approaches to investigating the phenomenon of interest. I suppose back then I was aware of the need to write about the data itself and what I was to observe, but I had no idea at the time that I would effectively be writing about the research design AND the data observations and thematic development simultaneously. Yet this is the way that my research appears to have worked out.
Qualitative research is nuanced and there really is no set path towards the way you are to write your qualitative thesis! Plus do remember that it is an ongoing process: you cannot write about an observation once and then leave it. It’s a long running, complex, detailed, deep process of understanding and comprehending what it is you are observing.
'till next time, keep applying that pen to paper! Or hands to keyboard! Or both!
Ph.D Update: Up To Mid November 2018, Part A: Refining Coding Scheme
As mentioned in the previous blog post, I am pretty much there with the coding scheme. That’s not to suggest that revisions and adjustments are not going to occur, but it is to suggest that I am in a happier place with the coding; I feel that the coding scheme now better represents the aims and objectives of the research. New codes and adjustments to the existing codes are likely to occur as I continue with the development of categories and themes and their verification and validation. Never hold anything as absolute and complete, especially when you are engaged with qualitative research.
Along with refining the codes, another task I am involved with is rechecking the coding of data characteristics. By this I mean ensuring that the data segments have been interpreted consistently according to their characteristics, and coded accurately. There is a relationship between interpreting consistently and coding accurately, because accurate coding can arguably only occur with consistent interpreting. A deeper question here, however, is about the accuracy of interpretation, or the way in which data segments could be interpreted accurately; this is a challenging question, which I suspect is related to validation and verification. Part of this involves ensuring that the segments have been coded using the most appropriate code, the one that best describes the activity expressed in the data segment.
I am also double checking what I call the “code memos.” These are theoretical memos, a concept from Grounded Theory, which document my approach to developing each code, explain the meaning of the code, and record why the code is most appropriate for each recorded data segment. All coded segments are placed in the code’s memo, which assists with observing and documenting the variation captured within the code and, therefore, with understanding the variation of themes. These memos shall therefore become part of the identification and development of themes.
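As an illustrative sketch of what such a code memo might hold (the code name, rationale and segments below are invented examples, not my actual framework), each memo pairs a code's rationale with every segment coded under it, plus a note on how each segment varies:

```python
from dataclasses import dataclass, field

@dataclass
class CodeMemo:
    """A theoretical memo attached to a single code (hypothetical structure)."""
    code: str
    rationale: str                                # why the code exists and what it means
    segments: list = field(default_factory=list)  # every segment coded with it

    def add_segment(self, text, note=""):
        # Each entry keeps the segment plus a note on how it varies
        # from other segments carrying the same code.
        self.segments.append({"text": text, "variation_note": note})

memo = CodeMemo("peer-feedback", "Captures one learner commenting on another's work.")
memo.add_segment("A reviews B's draft", note="written feedback")
memo.add_segment("C corrects D verbally", note="spoken feedback")
```

Keeping every coded segment inside its memo is what makes the within-code variation visible at a glance.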
I have identified initial sets of themes, and these themes have been, and continue to be, refined; this is a continuous process and will be for the foreseeable future.
The key point is that I believe the core ideas of the coding scheme are in place: I just need to validate and refine the codes as necessary. The refinement and checking of the coding scheme, as explained in the previous blog post, is ongoing.
October 14, 2018
Ph.D. Weekly Update: 8th October to the 12th October
I am in no doubt, after the previous reading session, that it is difficult to pigeonhole my approach. Even in the development of the coding scheme, I am finding that my approach consists of ideas from, though not limited to, inductive reasoning, thematic analysis and grounded theory. There is another idea that I have been working on for quite a while, and I am now in a position where I can make significant progress on it. There is still plenty to do, but I am thinking that this research will become a multimodal project. It’s really a case of thinking about the best way to present this idea in the thesis, and about the way in which the coding scheme relates to it. Now that the coding scheme is improving and beginning to take the form I want it to take, I expect some significant movement in developing this other idea in the coming weeks.
Along with that, another key current task is to continue to rework the coding scheme: reread the data segments, recheck the coding, and drop codes or amend them as necessary as well as combining data segments to present a complete meaning if I feel that I have divided them a bit too much. The idea also is to continue to go through the rest of the data and recode the data as necessary to reflect new meanings and new insights I am making whilst editing on paper, which is also a continuous process.
The process of rechecking everything, as just mentioned, is being carried out on paper. I have used computer software various times to amend the coding and to think about the data in various ways but sometimes for some objectives, it is best to simply use pen and paper. Print out all of your coding, relevant theoretical memos and other relevant documents and go through everything by hand. This approach I feel is especially relevant to mine because I am comparing a lot of data, within and across data sets, in order to develop the categories and themes, and also to develop an understanding of the behaviour of the phenomenon. This is not to suggest that there is no value in computer based analysis and I plan on using that further in the future.
I am pleased that I made the choice to do this, because I have found new insights and ideas that were not at all obvious when staring at the computer screen. Being at the screen, concerned with navigating the software to find relevant pieces of data, sometimes distracts you from your objectives and can cause you to miss important insights. Simply doing things by hand can really help you find insights that were not so obvious before. That being said, when I was editing everything on the computer following the edits by hand, there were some insights I had made on paper that did not make sense once I really thought about what the particular pieces of data meant. It’s like a constant battle between colliding thoughts about the meaning of the data and what any particular piece of data represents, but my experience tells me that sometimes pen and paper is best.
When you have these colliding thoughts, and when you are able to perceive and interpret any data segment in various different ways, remember to think about the context within which the segment is situated, and remember that whatever you observe and interpret must be relevant to your research questions.
Regardless of what methods are used to explore the data, as always the key idea is to keep asking questions about the data. Remember that the data is a representation of the phenomenon of interest, and what you observe and place meaning upon might not be the same between multiple researchers. Here, you have to try to make sure that what you observe and what you interpret is as close to the data and as close to some sense of objectivity as is possible, if that is a desire of your research. Therefore, I am continuously asking questions about the data and also about my own observations and interpretations, and the quality of those observations and interpretations.
The only way you can progress is to ask questions. Making an observation or an interpretation of something within the data is fine, but you have to be sure of what it is you are really observing or interpreting. I am only just scratching the surface of describing this process here (seriously, it’s taking up pages of my research design chapter!), but it is a process worth investing in and engaging with. You need to make the time and effort to ensure that your observations and interpretations are as sound as possible, regardless of their ultimately subjective nature.
Just keep asking questions about the data and your own interpretations. You learn and develop only through asking, rechecking, reconfirming, and asking again! And when you are sure you are done with everything and have all the answers (quite frankly, I doubt claims of this sort), then ask more questions again!
Keep asking questions and keep going!
‘till next time!
August 03, 2018
Ph.D. Update: Research Design And Approach Now Certain!
The main output of my research shall now be a new coding scheme designed and developed to assist with the analysis of social learning processes, with the potential to move towards contributing thematic, conceptual and possibly theoretical understanding of the phenomenon of interest. The development process of this coding scheme (the data analysis process) has been inspired by writers on thematic analysis and grounded theory. The coding scheme’s development process (the actual development of the coding scheme) questions some aspects of existing ways of developing coding schemes. Sub stages of development are being proposed, and shall possibly continue to be proposed, as I go through the phases of analysis.
That, folks, is basically the nutshell take away conclusion of the past couple of weeks where I have completed another full round of coding the data and taking a break from coding in order to deeply reflect on my research purpose, objectives, direction, and research design. Phew! There is clarity in the world of organised chaos!
Reflecting on my journey of the Ph.D. so far, I have experimented with and thought about various types of analytical approaches related to exploring the phenomenon of interest, and have thought deeply about the type of data source from a philosophical perspective. E.g., what can I know about the phenomenon from this type of data source? In what way is this data source different to other data sources regarding what can be known? What knowledge can potentially be revealed about the phenomenon from this data source? What can I use to extract this knowledge from this data source? What are the differences between different methods of extracting knowledge both in general and related to the data source? What would different methodologies and methods tell me? What best fits the research questions, research problem, research objectives, and research context in general? In what way can my philosophical beliefs determine what I can know? What are the limits of my knowing? What limits are placed upon my knowing? Do I need to overcome these limits to know more? If so, in what way could this be achieved? And so on and so on.
All these questions have led to various different answers e.g., through comparing different methods and methodologies regarding the questions of what I can know, what can be known, and what can be known and revealed from the data source about the phenomenon of interest. And this I shall be explaining and exploring in great detail in the thesis!
When you are developing a coding scheme, establishing a time frame can be difficult. You might have identified the stages and sub stages of coding scheme development, but it’s all but impossible to determine a time frame. This is because developing codes from the actual data, developing categories from the codes, developing themes from the categories (a broad, typical process of coding scheme development), and writing the methodology chapter are all performed pretty much concurrently.
As you are thinking about the codes that reflect different events and activities in your data, you are thinking about the ways in which similarly coded data could be categorised. In turn, you begin to think more abstractly and more theoretically about the way in which categories can be related and grouped into themes. Themes are the broadest, most abstract, and most theoretical constructions of the coding process, and they explain the data as a whole in relation to the phenomenon of interest and the way in which you want to explore it.
As you can therefore imagine, coding data with the intentions of developing categories and / or themes is not a linear process. Not to mention, every single stage involves writing lots of theoretical memos, which capture your thoughts, theories, assumptions, hypotheses, questions, queries and ponderings of the data, code, category, or theme (and relations within and between codes, categories, and themes).
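The code, category and theme layers described above can be sketched as a simple nested structure. All the names below are invented placeholders, not my actual framework:

```python
# A minimal sketch of the code -> category -> theme hierarchy.
framework = {
    "theme: collaborative meaning-making": {
        "category: giving help": [
            "code: peer-feedback",
            "code: error-correction",
        ],
        "category: seeking help": [
            "code: question-asking",
            "code: clarification-request",
        ],
    },
}

def codes_in_theme(framework, theme):
    """Flatten a theme's categories into the full list of codes beneath it."""
    return [code for codes in framework[theme].values() for code in codes]
```

In practice this structure is anything but static: codes move between categories and categories between themes as the analysis progresses, which is part of why the process is non-linear.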
As a result of all this, the focus of the thesis in the latter chapters (the methodology chapter and the subsequent chapters dealing with discussions of what has been found) is on the qualitative process of coding, category development, and thematic development. At a rough guess this might come to anywhere between thirty and forty thousand words of the thesis, though perhaps more. I shall talk about the process of writing a qualitative thesis within the context of developing coding schemes in future blog posts.
The research, therefore, has moved away from generating a new theory (as was proposed originally via the use of Grounded Theory) towards developing a new coding scheme, with the intention of developing and extending existing themes of understanding and creating, where necessary, new themes regarding the phenomenon of interest.
The qualitative research field is additionally awash with limitless debates at the ontological, epistemological and methodological levels of interacting with qualitative methods and approaches. I am not kidding here: recently I have come across many different perspectives and arguments regarding a single approach to sampling for qualitative research, and many, many arguments for, against, and perspectives on qualitative control criteria, particularly around the terms “validity,” “reliability,” and “generalisability.”
I intend on engaging with these debates and discussions at every level and every stage of qualitative research.
And that, folks, is what happened in a nutshell during the past couple of weeks since the previous update!
‘till next time!
July 02, 2018
Thoughts On The Coding Process: Implications Of New Insights
Like a toddler running back and forth into the arms of those that love that child, ideas and visions that were previously considered irrelevant, or perhaps not suitable for this project but possibly for another, have been running back to me like that happy little toddler. Everyone say aww…
(Oh, by the way, I’m not at all suggesting that toddlers are irrelevant! Even if they turn into screaming delightful door slamming teenagers…)
Today has been a productive coding session. As I have been coding the data and observing patterns and meanings within it, I have come to realise that certain patterns and meanings once considered irrelevant are now becoming more relevant. I have also observed new patterns and meanings that I had not previously noticed when earlier sets of data were coded, or at least that had not made themselves obvious until now, even though I might have observed them before without consciously acknowledging them, for whatever reason. I think this is a psychological thing: the more you become sensitised to a particular pattern or meaning, the more you start to think, later in the coding process, that you have observed something similar before in different contexts, and then you start to identify the bigger picture or the wider pattern of behaviour. It’s a very interesting and very involving process. What I have found today is making me rethink what I have coded previously, and the way in which I have interpreted and perceived what is occurring in the data, which might lead to recoding the data again as I go through a deeper coding phase and build further understanding of the phenomenon of interest. I’ll be talking more about this in another post later this week.
In the meantime, it is clearer to me now more than ever (and might be good practice for other Ph.D. candidates to adopt) not to throw away any old ideas and visions that were previously considered irrelevant. This is an approach I have adopted from the beginning of the Ph.D.: I have folders upon folders of books, research papers, and thesis related documents and notes, and a fair percentage has been sent back and forth between the archive folders and the working folders as they were continuously examined for relevance at particular points of the project so far.
Now some of the oldest ideas and visions I had at the earlier stages of the Ph.D. are becoming more relevant for answering my research questions and addressing the research problems. But more than that: what I was writing earlier, in a theoretical memo documenting my thinking about what I was observing, was an attempt at building upon those earlier visions. When you have built your earliest visions upon a section of existing literature, and then observe within the data what you thought was irrelevant, it really brings home the thinking that nothing is impossible. There is a slight problem, however.
These older ideas and visions have resurfaced a fair way into the reanalysis and coding phase, which leaves me with a couple of questions. Do I carry on with the coding and analysis and simply note the point at which I observed a new aspect of the phenomenon to be relevant? Or do I reanalyse the data again and code for these additional observations that I made later in the coding?
The methodological literature I have come across so far has not been clear on this subject, although it is a subject I shall read more about. I have come across a paper suggesting that you don’t have to reanalyse the data to code any new observations, but from what I remember this was associated with grounded theory based Open Coding, where you are coding to build a theory rather than coding to identify and relate themes. I am leaning towards yes: I would have to recode the data to capture more instances and examples of what I have observed, in order to validate and authenticate the existence of what it is I have been observing.
Of course this then leads onto other philosophical questions such as does repeatability really represent truth? If you observe something often enough does it really exist in an external reality or does it exist within our own interpretations? What about if others are not able to perceive or observe what a researcher finds observable? In what way can I tell that something might exist in an external reality? In what way can I possibly know what I know to be true? These, and more, are challenging questions, but the key I think is to keep everything grounded in the data and make sure that arguments and observations are built from the data. You cannot build from existing theory; you can, however, build from a relationship between data observations and existing theory, but I shall cover that point at a later time.
With all that in mind, what I am thinking of doing is analysing the data but keeping the original copy, and embedding evidence of a change in perspective or the observation of a potentially key new theme. This would take the form of a theoretical note embedded within the data, marking precisely the point at which I began to observe the importance and relevance of an event or meaning that could form part of a theme. This would evidence the progression of my thinking up to that point. I am not really sure what the literature says on this subject, but I am becoming convinced that this might be the best approach.
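As a rough sketch of this idea (the segment identifiers and note structure below are hypothetical), the original data can be left untouched while analytic notes are anchored to the segment where the shift in thinking occurred:

```python
from datetime import date

# The original transcript is never edited; notes live in a separate store,
# each anchored to the segment where a new observation first became visible.
transcript = ["seg-001 ...", "seg-002 ...", "seg-003 ..."]

analytic_notes = []

def embed_note(segment_id, note):
    """Record when and where a shift in interpretation occurred."""
    analytic_notes.append({
        "segment": segment_id,
        "date": date.today().isoformat(),
        "note": note,
    })

embed_note("seg-002", "First point at which the relevance of the old idea became visible.")
```

Separating the notes from the data preserves an audit trail of the thinking without contaminating the original record.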
The key lesson here is: don’t throw out your old ideas. Whether an idea is a few lines on a scrappy piece of paper or a rushed series of paragraphs on the word processor, keep it! Archive it, or put it in some relevant folder or whatever storage system you have, so that you can refer back to it in the future if it proves relevant. Another lesson is: don’t focus your mind exclusively on what you found previously.
In other words, don’t code one set of data and then focus the next set on what you discovered before (I know this is rather a contentious point in academic discussion of coding approaches, as is whether anything is actually discovered at all rather than interpreted); keep an open mind. Of course what you find whilst coding and thinking about the data is exciting, overwhelmingly exciting, but keep a level head, keep an open mind, and don’t be distracted by what you have observed previously. If you become too focussed on previous observations you’ll begin to lose the meaning of innovation and originality, and potentially become enslaved by them. Keep an open mind, keep coding for original insights and meanings, and think and plan carefully to determine whether there is a real need to reanalyse the data when you find something new a fair way into your analysis. This really depends on your research questions, the research problem, and the way in which what you have observed relates to explaining the phenomenon of interest.
‘till next time!