Exciting futures – whither qualitative research?

This week my colleague Dr Jenine Beekhuyzen will present a new workshop on using NVivo with non-conventional forms of qualitative data. Earlier this week I consulted with a series of PhD students and was disappointed to find that most of them were using face-to-face interviews as their primary method of data collection. It’s time to move on, people! Although Jenine has entitled her workshop ‘non-conventional’ data, this is merely a response to this fixation with face-to-face interviews! When recording, transcribing, videoing and note-taking were the only means we had to collect and manage our qualitative data, there was some excuse for resorting to the most convenient methods. But now we can import many different types of data – social media, sound recordings, photos and other image media – straight into our data management software, and view or listen to them in the same state as when we collected them.

An additional benefit of using software such as NVivo is that we do not need to waste valuable time transcribing these data sources into some other, more abstract and diluted form. We can listen to the voices, watch the images, and accurately tag and code all the associated data – non-verbal, environmental and so on. And we can easily go back to the context of the data – without this context, the data is meaningless or, even worse, misleading.

Qualitative research is about understanding. Using these ‘unconventional’ (but not so new) forms of data collection means that we are more likely to get access to the way understanding is being constructed and developed, especially amongst the younger generations. Ignoring them will leave us floundering in the past.

by Dr Sue Nielsen

Qualitative research and the new world order

Post by Dr Sue Nielsen

Watching the news every morning can leave you feeling confused and dispirited - so many challenges, so much conflict. The information revolution and globalisation have changed our landscape in social relations, work, and the environment, and the rate of change seems to be increasing. To cope with all these changes we need to take fresh perspectives on things we have taken for granted, to look for new ways to live, manage and prosper.

Most importantly we need to understand what these changes mean. Human beings are driven by the need to make things meaningful and this is so evident in the growing number of programmes which encourage people to express their ideas and their feelings – Big Brother, My Kitchen Rules, Facebook, Twitter and so on. Looking at the news, we can see that most of the changes now transforming our world are driven by changes in meaning. What does it mean to be young, to be a mother or father, to be Russian, Australian or Chinese? What does work mean? How is that different from the way our parents and grandparents thought about work?

Qualitative research is about understanding meaning – it is about the investigation of the meaning of social action. It does not seek to predict or explain the causes of meaning, but to understand it. It seeks to uncover what we take for granted, or what currently seems too hard to understand, so that we can think in fresh ways about new opportunities. It aims to discover what counts, what will be worth counting, rather than counting things which we already know about. It does not ignore the past, but looks back only to look forward.

Students and researchers have many pressures on them - to get unequivocal results, to tie up all the loose ends, to make strong recommendations, to ‘publish or perish’. But these are not incompatible with attaining a deep understanding of the problem under study.

Qualitative research is not an easy option. It requires persistence, acute observation and the acceptance of ambiguity and uncertainty. But the research methods are well established and the rewards are great – new insights into our current situation and new ideas to move forward into an uncertain and rapidly changing world.

 Join our online qualitative research workshop series beginning on the 10th of March - register here

Focus groups for data collection – what’s the point?

Qualitative researchers spend a lot of time wondering ‘what does it mean?’ We observe subjects and think about what is going on, and we analyse verbal data to interpret its meaning. Most people, when they are not sure of the meaning of a word, consult a dictionary. In preparation for the Focus Group workshop on October 2nd I decided to go back to my big old Webster’s dictionary to remind myself of what focus means. What is the point of all my rumination? Good point – since ‘point’ and ‘focus’ are strongly related. My dictionary suggests that ‘focus’ means (among other things) the point at which rays (or other phenomena) converge, or the point from which they (appear to) diverge. Converge and diverge, eh? That suggests that focus groups help us understand the convergence and divergence of ideas amongst the members of the group and, more importantly, how these occur.

Furthermore, focal length is the position in which something must be placed for clearness of image – the point of concentration. The latter meaning is the one I think springs to mind when we think about focus group interviews – that we ask the group to discuss a limited range of ideas on which we wish to concentrate. But it also means that the point of concentration is the group. If not, why not carry out individual interviews?

Most writers on focus groups indicate that they can be rewarding sources of qualitative data but are difficult to organise and moderate. Additional challenges include deciding whom to focus on and how to record the interactions between focus group members. Perhaps the most difficult challenge is how to analyse the data when the unit of analysis is the group.

But the rewards are great. The researcher can observe how consensus is reached or abandoned; how deviant ideas are suppressed or contested; how participants react to new ideas and even change their opinions and views during the group interview.

The researcher is reminded that qualitative interviews are not about eliciting facts, but about posing, confirming and refuting ideas about ‘facts’. In that way, focus group interviews seem more ‘natural’ and real than individual interviews.

How to overcome the challenges? I look forward to talking about that on October 2nd. Register today for the workshop.

Post by Dr Sue Nielsen

Manage evolving coding schemes in a codebook: 3 simple strategies

This is a repost of the blog I wrote for QSR International on 28 May 2013 – see more at http://blog.qsrinternational.com/. By Jenine Beekhuyzen.

I originally began writing this blog post about teamwork and my recent experiences in seeing how important it is to clarify the definitions of codes when working in teams. But I now realize that such advice applies to all researchers, in all disciplines, studying all manner of topics. Every single researcher I have discussed this with (and they are now in the hundreds) has found some benefit in this, so I had to share it with you. (Thanks QSR for inviting me to write this post!)

The topic of a codebook came to my immediate attention when I read the article “Developing and Using a Codebook for the Analysis of Interview Data: An Example from a Professional Development Research Project” by DeCuir-Gunby, Marshall and McCulloch, published in the journal Field Methods in 2010. This article has become my research bible. I have seen how it helps to fix some of the challenges that researchers face when coding qualitative data. How?

What’s in a name?

We all approach our data with the best of intentions, equipping ourselves with the tools (e.g. NVivo) and techniques (e.g. thematic analysis) we believe we need to do an adequate, and hopefully even good, analysis of the data we often struggled for many months or years to collect. We often feel we have clear conceptualizations of what we mean by the different codes related to our data. But often we don’t document these in detail. It just seems too hard, doesn’t it? Believe me, the tedious work is well worth it!

I find that most of us are not very articulate about what we mean by each of the codes we are using to investigate the data. How do you decide what is included in a node and what is not? How would you describe your process to someone else (e.g. your examiners!) and create a process that is repeatable?

The codebook is the answer. It helps to clarify your codes and what you mean when you apply them to your data – not only to yourself, but also to your team members and supervisory staff. I’ve seen experienced teams convinced that they are all on the same page about their codes, but when given the task of developing a codebook in a systematic way, they find they often have different understandings of what they mean by common terms. So my regular first response in consulting and training on qualitative data analysis is now – where is your codebook?

Strategy 1. Create the codebook

This is basically a three-column table you can create in a memo in NVivo; you will need to update and refer to it regularly. Populate it with your code name, a definition (from the literature or your own) and an example from your data of the code being applied.



Code name: Community

Definition: Text coded to topics around the concept of community (not around specifically named communities)

Example from text: “I think it is just how distinct the different communities are because of the geographical isolation”

You can take this a step further and include an inclusion and exclusion strategy:

Inclusion criteria: Include if it discusses …

Exclusion criteria: Do not include if it discusses …

The benefits become obvious pretty quickly; you know exactly what you mean by each code, as do your supervisor/s and ultimately your examiner/s. This is, in my opinion, really important for all research projects, postgrad and postdoc alike.

The benefits for teams are also immediately apparent: each person coding knows exactly what should be coded to each node and what should be moved to another or a new code – much of the ambiguity disappears, as does much of the coder’s angst.
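For teams who also want the codebook in a shareable form outside NVivo, the same three columns (plus the inclusion and exclusion criteria) can be kept as plain structured data and exported to CSV for circulation or pasting into a memo. This is only a sketch assuming a Python workflow; the field names and the example entry are illustrative, not taken from any real project.

```python
import csv
import io

# One codebook entry per code: name, definition, an example from the data,
# and the inclusion/exclusion criteria. All values here are placeholders.
codebook = [
    {
        "code": "Community",
        "definition": ("Text about the concept of community, "
                       "not specifically named communities."),
        "example": ("I think it is just how distinct the different "
                    "communities are because of the geographical isolation"),
        "include_if": "the passage discusses community as an idea",
        "exclude_if": "the passage only names a particular community",
    },
]

# Export the codebook as CSV so every coder works from the same definitions.
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=codebook[0].keys())
writer.writeheader()
writer.writerows(codebook)
csv_text = buffer.getvalue()
```

Keeping the codebook in one shared file (rather than in each coder’s head) is the point: the structure itself is trivial, but agreeing on what goes in each row is where the team discussion happens.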

Strategy 2. Document the changes to your codebook


If you are using a priori (theoretically or domain-based) codes then you might find this coding process fairly straightforward. However, if you are doing thematic analysis with codes that are not well defined from the start, then your codebook WILL change. Be prepared for that. It might get messy before it gets clear, and that’s OK.

A recent psychology study found it took quite a few interviews before the team codebook was agreed upon, and my own experience is seeing teams nut it out, sometimes for hours on end, to finally get definitions and examples that everyone agrees upon. This is good progress and a really important stage in the data analysis/coding process!

Strategy 3. Run an intercoder reliability check (specifically for teams)

After the team creates a codebook of their codes, each coder then takes a copy of the NVivo project (appended with their initials) and goes away and codes the same interview to a set of identified codes that are believed to be well defined from strategy #1.

Once the coding is complete, use NVivo to run a query to compare the coding from each coder. This is a great process to help to create really strong definitions of your codes (and test them out), as it becomes really obvious from the result of the query where the ideas and conceptualisations of the coders differ. NVivo allows you to look at exactly where the differences are, and these are then discussed with the team.


Once some decisions are made – these often include whether to code the question in addition to the response, and whether to code by line or by paragraph – the team goes away and repeats the process with a second data source (often another interview transcript). Then run the query again and discuss the results, with the aim of coding as consistently as possible (NVivo also provides a Kappa coefficient as a measure of this agreement). The Kappa and a description of your coding process can be used to report your teamwork.
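For readers who want to see what the Kappa number actually measures, here is a minimal sketch of Cohen’s kappa for one node coded by two coders: observed agreement corrected for the agreement you would expect by chance. The function name and the per-unit true/false representation are my own illustration, not NVivo’s internals – in practice you would simply read the value from the coding comparison query.

```python
def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' yes/no decisions on one code.

    coder_a, coder_b: equal-length sequences of booleans, one entry per
    text unit (e.g. paragraph), True if the unit was coded to the node.
    """
    if len(coder_a) != len(coder_b):
        raise ValueError("coders must judge the same units")
    n = len(coder_a)
    # Observed agreement: proportion of units where the coders match.
    po = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement from each coder's marginal yes/no rates.
    a_yes = sum(coder_a) / n
    b_yes = sum(coder_b) / n
    pe = a_yes * b_yes + (1 - a_yes) * (1 - b_yes)
    if pe == 1:  # both coders were completely uniform; agreement is perfect
        return 1.0
    return (po - pe) / (1 - pe)
```

A kappa of 1 means perfect agreement; 0 means the coders agree no more often than chance would predict – which is exactly why two coders who flip opposite decisions on half the units can score 0 even though they “agree” half the time.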


Happy coding 


Data collection – gathering, generating or making it up?

Post by Dr Sue Nielsen

Most books on research methods include long sections on Data Collection. It’s interesting, then, that in Schwandt’s (2007) Dictionary of Qualitative Inquiry there is no entry for ‘Data Collection’ but a ‘See’ reference to ‘Description’ and ‘Generating Data’. In the former entry he discusses how so-called factual descriptions of the world are theory-laden and how data ‘collection’ is more appropriately described as generation or construction. In the latter, he mentions that data is not ‘out there’ to be discovered like gold or collected as we would gather fruit from a tree.

But what does this mean? The word ‘generate’ gives rise to images of nature, birth, growth and so on. If constructed, how is this done? Certainly we would hope that we are not just making it up as we go along!

At this point, a researcher engaged in a large qualitative project with deadlines to meet, papers to write and colleagues to argue with might mutter, ‘collect, gather, generate – a lot of fuss about nothing.’ But given the increasing trend of using interviews as the major means of ‘collecting’ data, and even of treating these interviews as representations of underlying phenomena, it’s worthwhile to reconsider what we are doing when we ‘collect’ data, and why.

One definition of data is ‘factual information’ but given the discussion above, this makes for even more complications. The meaning in Latin is more informative – something given – which would signify that the research subjects are active, or as Creswell (2007, p.118) puts it “will provide good data”. When Creswell discusses data collection he often refers to ‘studying’ a site or an individual. Put these two ideas together and data collection means studying what research subjects can give you.

Qualitative research is primarily about understanding, which most agree is achieved through interaction. This involves acknowledging the potential for misunderstanding as well as being open to challenging one’s own understanding. Recognising and exploring the role of both the researcher and the data giver may ‘generate’  or ‘construct’ greater insights. ‘Collecting’ data, removing it from its source and then subjecting it to analysis seems unlikely to achieve this.

It’s OK to use a catch-phrase such as ‘data collection’ which, when scrutinised, could be considered misleading. Language is like that – after all, very few people are breaking a fast when they eat breakfast. But it’s good to remember the various meanings of data and collection and not misapply them to that phase of qualitative research.


Creswell, J.W. (2007) Qualitative inquiry and research design: choosing among five approaches. 2nd ed. London, Sage.

Schwandt, T.A. (2007) The Sage dictionary of qualitative inquiry. 3rd ed. Los Angeles, Sage.

To transcribe or not to transcribe – that is the question!

Our first blog post by Dr Sue Nielsen :) This is a regular topic of conversation for us and the researchers we work with. Let us know what you think...

Qualitative researchers are often expected to make exhaustive written transcriptions of audio and video recordings of interviews, observations and other qualitative data. However, exhaustive transcribing is very costly, often poorly done and may delay the progress of the research project.

Researchers need to consider the following issues.

Cost and skill – Most researchers are not fast, accurate transcribers. On the other hand, qualified transcribers may not understand the domain, leading to problems with professional and discipline terminology, jargon, etc.

What and how - Transcribing is a research activity not a technical process. Most qualitative researchers analyse as they listen to or look at their data. What and how to transcribe depends on the purpose of the researcher. Most researchers need to ‘reduce’ and synthesise their data so full transcriptions are unnecessary. Different research methods such as conversation analysis, grounded theory and thematic analysis require very different skills and approaches.

Accuracy - Transcribing audio and visual data may lose authenticity and relevant information. Capturing emotional, vocal, spatial and other data is very difficult and time consuming. Much of this data gets lost or forgotten and will not be taken into account in the data analysis phase.

Trustworthiness - The purpose of the qualitative research project is to write up the findings. To ensure trustworthiness, many authors suggest establishing an audit trail to enable readers or examiners to follow the line of inquiry and analysis from the interpretation back to the original data. Researchers should keep descriptions and registers of data collected, as well as how the data was handled in the analysis process. It is misleading to assume that transcripts alone will accurately represent the original data.

Help – Computer-based programmes can assist in addressing these issues. Small projects may simply use word processing, spreadsheets or databases. Longer and more complex projects will benefit from the use of a CAQDAS programme. Programmes such as NVivo enable researchers to make notes on and code audio and visual data alongside textual data.

Whatever the size of the project, think carefully before embarking on costly and possibly misleading transcriptions of all your data.

The following papers provide a good argument for working with data in its original state and an overview of emerging technologies which support data analysis and coding.

Markle, D.T., West, R.E., & Rich, P.J. (2011). Beyond transcription: Technology, change, and refinement of method. Forum: Qualitative Social Research, 12(3), Article 21. http://www.qualitative-research.net/index.php/fqs/article/view/1564/3249

McLellan, E., McQueen, K., & Neidig, J. (2003). Beyond the qualitative interview: Data preparation and transcription. Field Methods, 15(1), 63-84.

Love to hear your comments!

Welcome to the Adroit Research blog

Welcome to the Adroit Research blog! I'm very excited to be launching my new website and my new blog, and to start sharing all of my hints and tips about research. I love helping people with their research, because I learn so much that I can then take back to my own projects. Also, researchers are so interesting! So I plan to use this blog to share some of the resources and ideas I collect on my research journey, in the hope of giving you some fresh ideas and motivation to make the most of your projects and your data. I also regularly share resources via social media, so check out our Twitter, Facebook and LinkedIn groups. I look forward to hearing your thoughts and discussing YOUR research journey with you.

Happy researching! Jenine