All 4 entries tagged Thematic Analysis


July 07, 2018

Ph.D Update: Thoughts on Themes and Categories

I have managed to code through the entire data corpus. This has involved developing and assigning codes to relevant data segments: codes that capture the meaning of the segments to which they are assigned, along with theoretical memos embedded within the data. These memos explain the nature, function, context and meaning of each code and its segment's content, together with any other relevant thoughts, hypotheses and theories related to the content. However, as I was thinking about the next stage I started to doubt myself and asked myself some fundamental questions:


What is the real meaning of a theme?


How is a theme really constructed?


What type of theme should I be constructing?


These questions reflected the doubts I had at the time about my understanding of what a theme really is, and about the depth and breadth to which I should involve myself in theme analysis and development for the purposes of my research. I still ask these questions continuously, but my rereadings and explorations of the literature appear to be bringing some clarity. I knew at the time the process of making themes, but there was something that brought doubt into my mind: is there really no step between developing codes and developing themes? I wasn't convinced, hence the formation of the questions and the subsequent reading of the literature. Doubt, in this case, has been used as a means, a process, of developing questions and of endeavouring to explore topics further.


From what I can understand of the literature, there is varying terminology referring to the same types of theme, but for the sake of brevity I shall focus on a couple of authors who are becoming key writers for my understanding and application of thematic analysis.


Braun and Clarke (2006) define themes as semantic or latent. Semantic refers to theme development based on just the surface-level meaning of the data; essentially, the researcher is not interested in anything beyond what is said literally within the text. There is, therefore, from what I can understand, no attempt at understanding context, nuances, variety, diversity and deeper meaning at the semantic level. The semantic level is essentially considered a descriptive level of meaning.


At the latent level of theme development, however, there are attempts at going beyond the semantic level and into the realm of interpretation, assumptions, concepts, conceptualisations, meaning making, hypothesis making and theorisation. From what I can understand, Braun and Clarke (2006) describe theme analysis and development as a progression from the descriptive level to the level of interpretation and theorisation. What is identified at the semantic level is taken beyond the obvious and observable to what can be known and understood through theories and interpretations. The latent level, however, is not built on airy-fairy assumptions, because latent-level assumptions and theorisation processes are grounded in the semantic level. Therefore, what I find or observe at the semantic level I can theorise and hypothesise about, making meaning of its existence, functionality, purpose and context.


This actually makes sense, because how can I possibly stop at just a simple observation? How can I simply consider the existence and meaning of something at only the semantic level and not at the latent level? It doesn’t make any sense to me just to observe and know something at the semantic level: I am immediately drawn to theories, well grounded assumptions, hypotheses, and meaning behind existence and function. Is that because I have an academic mind? Can I perceive beyond the observable? Can I understand meaning and function beyond what is right in front of me and clearly observable? Surely I can if I am drawn to this level of understanding?


Moving forwards, I have this understanding now of semantic and latent themes, so surely it is common sense that thematic analysis consists of both types of theme? That my research would involve the construction of both? According to the approach to thematic analysis by Braun and Clarke (2006), I would be correct.


But wait, there’s more!


Category or Theme? Should we consider both?


After spending a long time poring over methodological papers about thematic analysis and the idea of theme development, I had more questions than answers. I came across literature that was not only encouraging me to doubt and question what to do in the next stage (I shall discuss this more in a future blog post), but also encouraged me to question my own understanding of what a theme is, and also what a category is. Is it not true that categories are an integral part of grounded theory and therefore I should not worry about them? If only our attempts at understanding the world, of social reality and all the components of social reality, were that easy!


Methodological authors differ in their description and discussion of the theme development level and of the definition of categories and themes. After a long while of reading, however, I am beginning to lean towards discussions such as that of Vaismoradi et al. (2016), who suggest that the thematic analyst considers both the category and the theme, where the category represents the semantic content whilst the theme overarches the category and therefore represents the latent content.


What is interesting here, therefore, is that a theme can consist of multiple categories, although some authors name categories as sub-themes. Categories or sub-themes are themselves constructed through the grouping of codes; categories therefore describe and functionalise a group of codes and capture their general meaning. What I understand from the literature, and particularly Braun and Clarke (2006), is that categories (or sub-themes) are constructed first before they are then grouped into themes. But it's not as clear cut as that, because I've just recently read another paper in which the authors suggest that there is no need for a separate theme development step and simply consider their codes to be themes…


It is a minefield, but the way my mind works, I like the idea of progressing from codes to categories to themes: that is, from semantic or manifest data (different authors label it differently) to latent data; from observation to interpretation and theorisation.
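
Purely as an illustration of how I picture this progression (not drawn from any qualitative analysis package, and the class and field names are hypothetical, my own), a minimal sketch might look like this:

```python
# Illustrative only: codes are grouped into categories (sub-themes),
# and categories sit beneath overarching, latent-level themes.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Code:
    label: str               # short label capturing the meaning of a data segment
    level: str = "semantic"  # grounded in the surface content of the data


@dataclass
class Category:
    """A category (or sub-theme): a grouping of codes describing their shared, semantic-level meaning."""
    name: str
    codes: List[Code] = field(default_factory=list)


@dataclass
class Theme:
    """An overarching theme: sits above categories and carries the latent, interpretative meaning."""
    name: str
    interpretation: str                              # the latent-level theorisation
    categories: List[Category] = field(default_factory=list)


# Example of the progression from observation to interpretation (content invented for illustration):
peer_help = Category("Asking peers for help",
                     [Code("requests clarification"), Code("checks understanding")])
theme = Theme("Learning as a negotiated, social activity",
              interpretation="Participants treat understanding as something jointly constructed",
              categories=[peer_help])
```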


What did I learn from that process? That the whole idea of building themes is to move from the semantic or manifest level to the level of interpretation and theorising. This makes a lot more sense to me now and, in hindsight, seems quite obvious. Reflecting back on the process I have used so far, this is something that I have always done; I just wasn't familiar with the terminology! Categories themselves are also complex and used in different contexts. Previously I thought categories were terms and features exclusive to grounded theory, but it appears that the category is a general term used differently depending on the research method: within grounded theory, categories are used to build towards a theory, whilst in thematic analysis they are used as part of building an understanding rather than a theory.

I was right to doubt, because I was able to realise and recognise where I have to build my own understanding. This is an ongoing process, but the more I use thematic analysis and read the relevant literature the more I can understand the way in which it relates to the coding process I am carrying out, and the way in which themes can be used for the next stage of the research.


‘Till next time!

References:


Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101.


Vaismoradi, M., Jones, J., Turunen, H., & Snelgrove, S. (2016). Theme development in qualitative content analysis and thematic analysis. Journal of Nursing Education and Practice, 6(5), 100–110.


July 02, 2018

Thoughts On The Coding Process: Implications Of New Insights

Like a toddler running back and forth into the arms of those that love that child, ideas and visions that were previously considered irrelevant or perhaps not suitable for this project but might be for another project have been running back to me like that happy little toddler. Everyone say aww……..


(Oh by the way, I’m not at all suggesting that toddlers are irrelevant! Even if they turn into screaming delightful door slamming teenagers…………..)


Today has been a productive coding session. As I have been coding the data and observing patterns and meanings within it, I have come to realise that certain patterns and meanings that were once considered irrelevant are now becoming more relevant. I have also observed new patterns and meanings that I had not previously observed when earlier sets of data were coded; or at least, new patterns and meanings that had not made themselves obvious until now, even though I might have observed them before without consciously acknowledging them, for whatever reason. I think this is a psychological thing: the more you become sensitised to a particular pattern or meaning, the more you start to think, later in the coding process, that you have observed something similar before in different contexts, and then you start to identify the bigger picture or wider pattern of behaviour. It's a very interesting and very involving process. What I have found during the day is making me rethink what I have coded previously, and the way in which I have interpreted and perceived what is occurring in the data, which might lead to recoding the data as I go through a deeper coding phase and move further into building an understanding of the phenomenon of interest. I'll be talking more about this in another post later this week.


In the meantime, however, it is clearer to me now more than ever (and it might be good practice for other Ph.D. candidates to adopt) not to throw away any old ideas and visions that were previously considered irrelevant. This is an approach that I have adopted from the beginning of the Ph.D., as I have folders upon folders of books, research papers, and thesis-related documents and notes, and a fair percentage has been moved back and forth between the archive folders and the working folders as items were continuously examined for relevance at particular points of the project so far.


Now some of the oldest ideas and visions I had at the earlier stages of the Ph.D. are becoming more relevant for answering my research questions and addressing the research problems. But more than that: what I was writing about earlier, in a theoretical memo documenting my thinking about what I was observing, was an attempt at building upon those earlier visions. It is really interesting when you have built your earliest visions upon a section of existing literature and then observe within the data what you once thought was irrelevant; it brings home the thought that nothing is really impossible. There is a slight problem, however.


It is a fair way into the reanalysis and coding phase that these older ideas and visions have resurfaced, so this leaves me with a couple of questions. Do I carry on with the coding and analysis and simply state at what point I observed a new aspect of a phenomenon to be relevant? Or do I reanalyse the data and code for these additional observations that I made later in the coding?


The methodological literature that I have come across so far has not been clear on this subject, although it is a subject I shall read more about. I have come across a paper that did suggest that you don't have to reanalyse the data to code any new observations, but this, from what I remember, was associated with grounded theory based open coding, where you are basically coding to build a theory rather than coding to identify and relate themes. I am leaning towards yes: I would have to recode the data to code for more instances and examples of what I have observed, in order to validate and authenticate the existence of what it is I have been observing.


Of course this then leads onto other philosophical questions such as does repeatability really represent truth? If you observe something often enough does it really exist in an external reality or does it exist within our own interpretations? What about if others are not able to perceive or observe what a researcher finds observable? In what way can I tell that something might exist in an external reality? In what way can I possibly know what I know to be true? These, and more, are challenging questions, but the key I think is to keep everything grounded in the data and make sure that arguments and observations are built from the data. You cannot build from existing theory; you can, however, build from a relationship between data observations and existing theory, but I shall cover that point at a later time.


With all that in mind, what I am thinking of doing is reanalysing the data whilst keeping the original copy, and embedding evidence of a change in perspective or of the observation of a potentially key new theme. This would take the form of a theoretical note embedded within the data that marks precisely the point at which I began to observe the importance and relevance of an event or meaning that could form part of a theme, evidencing the progression of my thinking up to that point. I am not really sure what the literature says on this subject, but I am becoming convinced that this might be the best approach.
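
Purely to illustrate that idea (the structure and field names are hypothetical, not taken from any tool I am actually using), a minimal sketch of such an embedded note might look like this:

```python
# Illustrative only: the original data stays untouched; a dated analytic note
# marks the exact point where a new theme first appeared relevant.
from dataclasses import dataclass
from datetime import date


@dataclass
class AnalyticNote:
    transcript_id: str   # which transcript the note belongs to
    position: str        # e.g. a line or segment reference within the transcript
    noted_on: date       # when the shift in perspective occurred
    note: str            # what was observed and why it now seems relevant


note = AnalyticNote(
    transcript_id="group-session-03",
    position="segment 41",
    noted_on=date(2018, 7, 2),
    note="Pattern previously dismissed as irrelevant now recurs across transcripts; "
         "treat as a candidate theme and consider recoding earlier data.",
)
```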


The key lesson here really is: don't throw out your old ideas. Whether an idea is represented as a few lines of writing on a scrappy piece of paper or a rushed series of paragraphs on the word processor, keep it! Archive it or put it in some relevant folder or whatever storage system you have, so that you can refer back to those ideas in the future if they prove to be relevant. Another lesson is: don't focus your mind exclusively on what you found previously.


In other words, don't code one set of data and then focus the next set of data on what you discovered before (I know this is rather a contentious point in academic debates about coding approaches, as is the question of whether anything is actually 'discovered' at all rather than interpreted), but keep an open mind. Of course what you find whilst you are coding and thinking about the data is exciting, overwhelmingly exciting, but keep a level head, keep an open mind, and don't be distracted by what you have observed previously. If you become too focussed on previous observations you'll begin to lose sight of innovation and originality, and potentially become enslaved by those observations. Keep an open mind, keep coding for original insights and meanings, and think and plan carefully to determine whether there is a real need to reanalyse the data when you find something new a fair way into your data analysis process. This really depends on your research questions, your research problem, and the way in which what you have observed relates to explaining the phenomenon of interest.


‘till next time!


June 29, 2018

Ph.D Update: Venturing Back To Data Analysis

Since the previous blog post I have returned to data analysis: I have reanalysed previously analysed data, organised my data corpus (and identified where I can find more data to analyse if need be), and begun to identify potential themes and their potential relationships with each other, based on the observations made of the data and the coding completed so far. These themes, once determined through further analysis to actually exist, shall then become the core themes of the phenomenon of interest and, therefore, objects of further data analysis in the phase following thematic analysis. Because more coding needs to be completed I cannot say with any solid certainty that these themes will manifest into core themes that become the focus of the rest of the analysis process; however, I have made enough observations to suggest that the identified themes will be the main themes and that any other themes are likely to be sub-themes. An open mind, however, is still required, and as I code through the data and enter the next stage of thematic analysis, I could potentially identify more core themes.


What have I done in order to reach the current point of coding? The very first step, before even coding the data, is to become familiar with your data. This has been a journey in itself as I battled with different philosophical perspectives and with finding the most efficient and effective lens through which this particular kind of text should be analysed. I am more or less settled with this now, and in the thesis it is a case of detailing what my philosophical beliefs are, the way in which they impact how I perceive, engage with and interpret the data, and the way in which they relate to the research problem and research questions and fit in with the rest of the research design.

Away from philosophy, however, and onto the data level: becoming familiar with the data makes sense as this gives you the widest scope and the widest sense of the nature of the data. It is through familiarising yourself with the data that you can begin to view high-level, abstract structures, potential hierarchies and forms of organisation within the data. The participants might not have intended their interactions with you as a researcher, or with each other, to produce such structures, but those structures do exist in an external reality and can be reflected unconsciously within certain parts of the data at certain times. The nature and composition of these structures, hierarchies and organisations, however, depend on the type of text being analysed: interview transcripts, for example, shall differ completely compared with group learning transcripts.

What I am finding, and have found, is that data familiarisation can continue past this familiarity phase and into the coding phase. From my own experience, as I code through the data I find myself exploring the data more closely and begin to be able to view these hierarchies and structures at a closer level. These realisations and characteristics of the data were not revealed immediately, however; it has taken several rereads and several rounds of coding to fully understand the nature of the data (or at least begin to understand it) and therefore to begin to understand the constructs and structures of the data's particular nature. This is something I shall be talking about in more depth in the thesis. It's important to state that I am not necessarily observing both "macro" and "micro" structures, as what I am following is a micro-level analysis set within a particular context. It really depends on what you can observe in the data, on the type of text you are analysing, and on the purpose of your research. Sometimes interactions can be theologically and politically influenced, for example, and this can be reflected in the data. It's arguably simply a matter of working through the data and carefully and comprehensively thinking about what it is you are observing.


As for the coding process, I am a certain way through: I have identified the data corpus and am about halfway through the coding phase. The approach to coding I have adopted is what I call a segment by segment analysis. Some argue for a line by line analysis or a sentence by sentence analysis, but I am going to be arguing the ineffectiveness of these analytical approaches within the context of my research. Sometimes a single line or a single sentence is not enough to capture the event or action that you are observing in the data: sometimes you can observe events and actions within half a sentence or half a line, and sometimes they can only be observed at a level greater than a sentence or a line. Segment by segment analysis, based on the interpretation or observation of meaningful events or actions, is a more flexible and pragmatic approach for my research: it enables me to break up each block of data into meaningful segments that can be below or above sentence level. I define a segment as meaningful because it contains an event or action that is expressed, described, or in some way engaged with, and that holds a particular meaning for my research purposes. A single sentence, therefore, could contain multiple meaningful events and activities that would be missed by sentence by sentence and line by line analytical approaches.


I have assigned each of these meaningful segments a code, which represents or encapsulates the general meaning or description of the event or activity contained within that segment. Again, what this event, activity or action is depends on what you perceive, on what is important to you and your research, on what relates to your research question and research problem, and on the nature of the transcript. Previously, when I used grounded theory, I generated many codes, and as I went through the previously coded transcript I altered some of the codes, dropped a few, and added new codes in. This time around, more than ever, I feel that I have been able to capture the essence of each segment in a way I did not before; I can perceive and observe events and activities in the data and view relationships between segments that I had not previously been able to recognise or identify. This has helped during the coding of further transcripts, and even then I have been observing new occurrences, happenings, events and actions within the data that I had not observed in the previous transcripts. Unsurprisingly, I have generated many codes.
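
To illustrate what a coded, meaningful segment looks like in my head (again, the structure and names below are hypothetical rather than drawn from any particular software), a minimal sketch might be:

```python
# Illustrative only: a 'meaningful segment' can be smaller or larger than a
# sentence, is assigned a code, and carries a short theoretical memo.
from dataclasses import dataclass


@dataclass
class CodedSegment:
    transcript_id: str
    text: str   # the meaningful segment itself (may be part of, or more than, one sentence)
    code: str   # label encapsulating the meaning of the event or action in the segment
    memo: str   # theoretical memo: context, motivation, hunches, questions


segment = CodedSegment(
    transcript_id="group-session-03",
    text="can you explain that bit again before we move on",
    code="requests clarification before progressing",
    memo="Speaker manages the group's pace; possible link to earlier segments "
         "coded around checking shared understanding.",
)
```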


The more you read through your data and become familiar with it, the more you learn about it and, therefore, with each reading session new properties, events, dimensions and even higher-level relationships and structures reveal themselves. There is much debate, however, as to whether these observations really do exist in the data, or whether they are just what we perceive or observe in the data. This is a complex yet fascinating area of debate and something I shall engage with in the thesis.


As I have been coding I have been writing short theoretical memos. The memos written at this stage serve the purpose of documenting the continuous and evolving process of thinking and theorising about the codes and the data. They describe and explain the motivations, intentions, meanings, production and context of the meaningful segments, as well as the meaning of the code itself, and record any other thoughts, hunches, ideas, observations, potential hypotheses, questions and predictions relevant to the research. These memos are very important as they ultimately form a substantial part of the chapters related to research findings and discussions, and they assist you (along with any journals that you might keep) in documenting your analytical and theoretical journey.


Your thinking, theorising, comprehension and understanding develop and progress as you code through the data and as you identify the similarities and differences between similarly coded data segments, which can form the starting point of identifying and developing your themes. But that's another aspect of the analysis to cover in another blog post!


June 10, 2018

Previous Week's Update Part B: New Research Design

I have now settled on a new research design. The philosophical and epistemological perspectives remain the same (ontologically realist; epistemologically, presently, a mix of interpretivism and constructionism, but this needs further elaboration) and the methodological approach is the same (qualitative, possibly moving onto a mixed methods methodology should the need arise). But I have changed methods, from a qualitative grounded theory set of methods to a qualitative multi-modal approach that incorporates both thematic analysis and discourse analysis. As a side note, multi-modal is different to mixed methods: multi-modal is the utilisation of different analytical methods set within the same methodological approach, which in the case of my research is qualitative, whereas a mixed methods methodology would include both qualitative and quantitative analytical methods. The reason for this change, as has been mentioned in previous blog posts, is that the data characteristics I became interested in are, I argue, difficult if not impossible for grounded theory to capture and integrate into a theory of the phenomenon of interest.


During the previous week I have been reading more papers about thematic analysis and discourse analysis, consisting mostly of the philosophical and methodological approaches to these methods. This has helped me to understand the way in which they align with my philosophical position, which is important in various ways. Firstly, at the philosophical level, it goes without saying that the use, value, understanding and application of research methods are situated within our understanding of the world, whether we are conscious or unconscious of our philosophical perspectives, and whether or not we make them explicit. More fundamental than the methods level, however, is the data level: our philosophical perspectives of the world highly influence the way we value and perceive the different types and sources of data to which we apply the research methods. Secondly, from a methodological perspective, the multi-modal approach has to consist of analytical methods that are compatible with and complement each other; where, for example, findings from each method either support each other, or extend or build upon each other in some way.


I shall be using thematic analysis and discourse analysis together in such a way that the findings build upon each other. I am still working this out, though, and continuing to fine-tune their utilisation and compatibility the more I read the literature and understand their application within the context of my philosophical beliefs, the methodological orientation, the wider purpose and objectives of the research, and the type and source of data. There is a substantial need, therefore, to ensure that thematic analysis and discourse analysis are combined in a way that not only advocates a sense of unity and extended construction of findings, but is also methodologically rigorous, valid, authentic and sound. This is a huge topic that I shall engage with to a significant and detailed level in the methodology chapter of the thesis (we're talking thousands of words and page after page after page after page after page…….you get the idea!), with discussions posted on this blog. In the meantime, however, it suffices to say that I shall be carrying out a thematic analysis first, then a discourse analysis. It might be an idea, actually, and as recommended by some authors, to verify the products and results of the thematic analysis against existing published literature before engaging with discourse analysis. Either way, what is intended with thematic analysis is the generation of different themes of the phenomenon of interest through coding the data. Following this (and possible verification against published relevant literature), discourse analysis shall be utilised to analyse the discourse within and around these identified themes, leading possibly to a deeper and more substantial understanding of the way in which different social objects are used in certain learning contexts, and also of the way in which objects can relate to each other.


A reason why this topic is complex and vast is, in part, because there are various types of thematic analysis and various types of discourse analysis, aligning with differing philosophical and theoretical perspectives (a bit like grounded theory and near enough any other method) and therefore differing in the process of analysis with each version. This is why methodological compatibility is important: the chosen variants of thematic analysis and discourse analysis need to be methodologically compatible, sound and valid, determined in part by whether or not they can capture and analyse the data characteristics of most interest regarding the phenomenon of interest.


Before I even get to this stage, however, the very first task that I shall be engaging with during the coming week, along with continuing to elaborate on my philosophical and theoretical thoughts and approaches to the research design, is to check the work that I have done so far. Because various authors have suggested that thematic analysis is similar in approach to the open coding aspect of grounded theory (both approaches use an initial coding phase), I have to check that the codes that I developed whilst using grounded theory are compatible with, or in some way suitable for, thematic analysis. From what I can currently understand, the only real difference is that thematic analysis does not intend to develop a full theory, but can contribute towards theorisation as the beginning phase of a multi-modal qualitative project. I also have to check that the codes I have created can be formed into themes, which are, from what I can currently understand, conceptually different to grounded theory categories. At the moment I cannot imagine there being any substantial differences between the coding engines of thematic coding and the initial stage of open coding (or initial coding, as other grounded theory writers call it), but obviously this needs further checking.


I am just scratching the surface here with this blog post! It’s going to be a very busy summer with data analysis and the rewriting and further construction of the methodological chapter(s). It’s going to be challenging but exciting, and it helps that I am feeling more confident and happier with my approach compared to grounded theory.


It’s a challenging task alone to work out your research design and the methods to use especially in qualitative, emergent based research. But the best thing you can do is continue to be guided by your data. My research design is data driven: I have come away from grounded theory and onto a combined approach of thematic analysis and discourse analysis exactly because of what I have been observing in the data and coming to know that grounded theory is not able to capture what I really want to explore in the data.


Keep going!

