Researcher of the Month - A/Prof. Jo Coldwell-Neilson


Associate Professor Jo Coldwell-Neilson's expertise is in E-learning and Digital Literacy. Her work in this area focuses on digital technology uptake in schools and higher education, digital literacy preparedness for our digital future, and preparing students for careers in a digital environment. Starting from a view that graduates must have good digital literacy skills to meet industry demands, her work demonstrates that there is no shared understanding of what digital literacy entails, which poses challenges for students who are expected to have an ill-defined and, in some instances, unknown set of digital skills. It also challenges relationships between staff and students, as expectations and understanding of digital skills are not aligned.

Read more about Associate Professor Coldwell-Neilson's work here:

Book Recommendation: Johnny Saldana - The Coding Manual for Qualitative Researchers

For those new to qualitative research, this is the #1 book we recommend to familiarise yourself with the concept of coding, with clear tips and hints to get started, and to continue exploring different aspects of your data. This book is not discipline specific and is suitable for anyone attempting qualitative research either for the first time, or as a refresher.

In Saldana's words (p. 1), the purpose of the book is:

  1. to discuss the functions of codes, coding, and analytic memo writing during the qualitative data collection and analytic processes;

  2. to profile a selected yet diverse repertoire of coding methods generally applied in qualitative data analysis; and

  3. to provide readers with sources, descriptions, recommended applications, examples, and exercises for coding and further analyzing qualitative data.

"Coding is primarily an interpretive act; it is not an exact science" (p. 4). Coding involves identifying patterns in the data, with a pattern being "repetitive, regular, or consistent occurrences of action/data".

Saldana argues that a pattern needs to appear more than twice; on this point we don't entirely agree, as sometimes the pattern lies in what is not being said, or in something coded only once that we consider significant. We agree it is not an exact science and that the interpretation depends on the researcher's philosophical beliefs, research training, methodological choices and so on.

Coding of qualitative data is generally done in two ways: using inductive and deductive techniques. Induction starts with recording a specific instance (e.g. a comment in an interview) and coding it to a relevant category or "code" that you create. This technique is often associated with grounded theory and interpretive research; however, we find it has relevance in most projects. Deduction starts with 'a priori' codes: categories you have set up before you start coding, often developed from the literature or from a theoretical framework. Both techniques can be used in combination.
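The distinction can be sketched in a toy example. The code names, keywords and data segments below are invented for illustration only; real qualitative coding is an interpretive act by the researcher, not a keyword match.

```python
# Toy sketch of deductive vs inductive coding. The codes, keywords and
# segments are invented for illustration; real qualitative coding is an
# interpretive act by the researcher, not a keyword match.

a_priori_codes = {  # deductive: categories fixed before coding begins
    "barriers": ["cost", "time", "access"],
    "motivation": ["enjoy", "interested", "curious"],
}

def code_segment(segment, codebook):
    """Deductive step: return the existing codes whose keywords appear."""
    text = segment.lower()
    return [code for code, keywords in codebook.items()
            if any(word in text for word in keywords)]

def code_inductively(segment, codebook, new_code):
    """Inductive step: if no existing code fits, create one from the data."""
    matches = code_segment(segment, codebook)
    if not matches:
        codebook[new_code] = []  # a new, initially empty category
        matches = [new_code]
    return matches

print(code_segment("The cost was too high and I had no time", a_priori_codes))
# → ['barriers']
print(code_inductively("My supervisor suggested it", a_priori_codes, "external influence"))
# → ['external influence'] (the codebook now contains the new code)
```

In practice the two are combined, just as the paragraph above suggests: start from a priori codes drawn from the literature, and let new codes emerge where the data does not fit.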

Saldana argues that "coding is neither a philosophy nor a way of viewing the world; it is simply a heuristic for achieving some sense of clarity about the world from your data and your deep reflections on them". As such, this book helps with 30+ techniques for what to look for in your data. It's a great place to start, and a great go-to guide when we get stuck.

It also provides reassurance for the new researcher, reminding them that everyone experiences the ‘overwhelming fear’ when first confronted with the vast range of coding methods.

Buy now

Book Recommendation: Qualitative Data Analysis with NVivo (3rd edition)

The 3rd edition of the Jackson and Bazeley book is a useful upgrade for all NVivo beginners. It is a project-style book with really useful "takeaways" at the end of each chapter. Those takeaways, combined, make a great tips-and-hints list for working your way through NVivo.

Most people don't realise you can use NVivo to analyse social media data such as Twitter, Facebook and LinkedIn, and there is a useful chapter on this in the book. The NCapture tool, also highlighted, is one of the most underutilised tools in NVivo. It enables the user to easily capture web content.

Understanding cases and classifications is one of the most difficult things about using NVivo, and there is a really great chapter explaining how you might set these up.

These feed into matrices, which is the real reason to use software like NVivo: to get beyond the question of "I've done my coding, where do I go from here?"

Overall, grab this book from your university library or get a copy you can highlight and refer to in the long term.

The book is available from Amazon AU.

Researcher of the Month - Isaac Koomson


I am a PhD candidate (Applied Econometrics) in the UNE Business School, University of New England, Australia. My teaching experience spans close to 10 years, with graduate-level teaching from 2011 to date. I lectured at the University of Cape Coast, Ghana, and then at the University of Professional Studies, Accra (UPSA), Ghana, before my PhD candidature.

Apart from teaching, my interest in Applied Econometrics, coupled with research interests in Microfinance, Small Businesses, Agricultural Economics, Managerial Economics, Finance and Development Economics, has resulted in a number of research papers published in refereed journals. I am currently the Lead Economist/Consultant for the Network for Socioeconomic Research and Advancement (NESRA), Accra, Ghana, and have consulted for the World Bank; the Ministry of Food and Agriculture, Ghana; and the United Nations University Institute for Natural Resources in Africa (UNU-INRA).

Intrigued by the evidence of low financial literacy levels in both developed and developing countries, and its implications for household welfare indicators, my PhD research focuses on the impact of financial literacy on financial inclusion and household consumption; the role of financial literacy in households' asset accumulation; and the effect of financial inclusion on poverty and vulnerability to poverty.

Based on my previous research engagements, I began my PhD with some exposure to the main software and analytical techniques required for my research. This also motivated me to opt for a PhD by publication, which, to me, should be the choice for any PhD student who has prior experience and has data available or can collect data early. With this, the student can have some publications in reputable journals before completion, and can also learn from journal reviewers in addition to what the supervisory team provides.

From personal experience, the PhD journey becomes rewarding and less daunting if the student has some mastery of the proposed PhD topic and of the analytical procedures and tools (including software) required in the process. A deficiency in any of these breeds complex problems and a greater devotion of already-limited time to acquiring these skills. The take-home message is to plan ahead and acquire the skills needed before jumping on board. The other advice is to make friends and to seek help, because regardless of your level of experience, you will always need a shoulder to lean on or someone to discuss an idea with and to get another perspective on a concept.

My Encounter with Adroit Research

In June 2019, I went into the NVivo training with a research orientation in quantitative analysis, using statistical and econometric techniques; approaches to qualitative analysis and systematic reviews were new to me. The training I received from Adroit Research has added a layer of qualitative research technique to my analytical skills.

The course content and materials are very handy and provide trainees with the ability to practice during and after the course. The friendly learning atmosphere urges students to participate and ask any question at any time. Jenine combines technical and theoretical understanding of the course content and helps students based on her practical applications of the software in the production of research papers and reports. Jenine’s constant level of preparedness to address each trainee’s problems and questions makes her course unique and worth attending.

Through my training, I now have knowledge of the use of NVivo for analysis and of other important resources available to a qualitative researcher. I therefore recommend Adroit Research to anyone who is a beginner in qualitative research or who intends to enhance their qualitative analysis skills.

You can learn more about Isaac’s works and publications from the links below:

Google Scholar:
UNE research profile:

Researcher of the Month - Sarah-Jane Gregory


Our Researcher of the Month, Sarah-Jane Gregory from Griffith University shares her research story. It is so rewarding to know we were able to help her in her research journey.

“I have undertaken NVivo software training with Jenine a few times over the past four years having completed introductory training (2014) in NVivo10, advanced in NVivo11 and a refresher with the new NVivo12 (2019).

Every time Jenine has been extremely professional, encouraging and supportive of my use of the program. She was very inclusive of everyone on the courses and able to adapt to their needs no matter their level of experience, background or type of projects they were working on. She extends herself to offer support outside of the training room too; that has been very helpful.

In 2014, Jenine seeded an idea with me to utilise NVivo for coding my thesis introduction. Though at the time I could see the potential, I wasn't confident enough with the software to take this on, and felt I had insufficient time to master both the software and literature review writing concurrently. However, last year I hit a huge writing block and couldn't get my introduction to flow or consolidate the key themes. I lost confidence in my capacity to undertake this crucial component of my thesis.

I went back to first principles and decided that perhaps I would try Jenine's suggestion and code my literature to see if that helped with writing. Whilst there were many weeks of data entry, in the end I had an amazing repository that has been instrumental to my moving forward. I was able to quickly see where the key themes were and where the gaps lay (thus the crux of a literature review). It also reinforced that I was actually on the right track and helped to dampen the impostor-syndrome feelings.

My supervisors both commented that they were so impressed with how my writing structure, referencing, cohesion and flow had improved dramatically overnight! In addition, now that the core of my literature is coded it is very easy to add extra papers to the file as they are published.

The benefits don't just lie in writing reviews though. I have recently been writing the discussion for results chapters of my thesis. What I have found is that the code book I developed can now easily be used as a set of a priori codes for my data, and in addition it helps to quickly pull references to particular aspects of discussion, which has sped up the writing of these sections.

Even with the software update to NVivo12, I was quite hesitant to change in the middle of data analysis. However, Jenine really encouraged me to overcome this fear. The shift has been painless and very beneficial, as NVivo12 is even more intuitive than previous versions and the auto-coding has vastly improved. You can now easily cross-match both quantitative and qualitative data in the one program too. You need to be a little mindful of how you arrange some of your data files before importing; they may need a little tweaking to optimise the auto-coding, but it is well worth the effort. What took me hours in NVivo11 took seconds to import in NVivo12.

Based on my experiences, I believe that all HDR candidates should be encouraged to use NVivo for the literature review process, and I would highly recommend Jenine to assist.”

The Adding-In Method


When you approach your analysis, whether it is your literature review or your interviews, it often feels overwhelming. We suggest you start with 1-2 papers per day, in what we call the "adding in" method. This is designed to break the big task of importing everything you have into something you can cope with. Bite-sized pieces, with the good daily habit of "adding in" at least something, will get you there.

Day 1 - find the 2 most interesting articles you have. Import them into NVivo. Do a word frequency on them. Manually code them according to our systematic literature review process.

Day 2 - bring in the 2 next most interesting articles and repeat as above. Start to build up your coding on the topics within the papers by coding them to relevant nodes.

Day 3 - while you are keeping a research journal each day, bring in 2 more articles to add to your repository. Keep going!

Write every time you work on your research project. Then you end up with more writing than you need. Writing then becomes rewriting. Rewriting is good! :)

Photo credit: @crissyjarvis

Book Recommendation: John Creswell - Qualitative Inquiry and Research Design: Choosing Among Five Approaches

Creswell's book on 5 approaches to qualitative research is one of our all-time go-to books. It is easy to read for any level of researcher, and it provides new insights the more times you read it. The tables presenting comparisons between approaches are particularly useful. These allow you not only to argue and justify your approach/es, but also to justify why you did not make certain other decisions. For instance, using the table to argue the number of interviews you did for an ethnography can help you to explain why you did not do more or fewer interviews - an approach more suited to another method such as a case study or a narrative study.

We often fail as researchers to consider the pros and cons of each of the available approaches, and we choose the ones we think will yield the "best" results, or will be the most fun to do. Not considering the technical advantages and disadvantages of each approach means we are not doing justice to our research and participants, because we have not fully considered the scope of our work. We encourage you to map out your own approaches with your justifications for your choices, and use these to present your decisions to your supervisors and colleagues.

Buy John Creswell’s Qualitative Inquiry and Research Design: Choosing Among Five Approaches.

Book Recommendation: Silverman's 'little pink book' on Qualitative Research

One of our favourite books about research is Silverman's "little pink book" in which he engages in a discussion about "what is data?", "where do we find it?" and "what then do we do with it?"

In particular, he distinguishes between what he calls "naturally occurring data" and "manufactured data". "Manufactured data" is the most commonly collected in qualitative research: it includes interviews, focus groups, and observations. It is said to be "manufactured" because we, the researcher and the subject, together are creating, or manufacturing, data in a particular setting. "Naturally occurring data", on the other hand, is data that is collected after it has naturally occurred: it includes social media data, policy documents, websites, newspaper articles, etc.

We believe there is nothing "natural" about putting someone in a nondescript office to interview them with the red light of a little recording device flashing at them!


We see that most researchers quickly choose interviews in their research design, often without considering other methods of collecting data. Interviews, and even more so focus groups, are the most difficult data collection methods for a variety of reasons (access, timing, resources required). Thus we find it puzzling that they are the first port of call in most qualitative studies.

We second Silverman's advice to first collect "naturally occurring data" where possible, as it is often more easily accessible (often publicly available), and then to support this with "manufactured data" generated through interactions with the subject.

Our advice: consider the pros and cons of each of the possible methods in your planning, which helps you decide which methods will yield the most useful data for your study. Doing this will help you to map out your research questions with the most appropriate methods to answer them, which will in turn help you to justify your choices (and non-choices).

Help your research with a journal


The research journal is (or will be) your most useful tool and companion in your research journey.  All experienced researchers know the value of documenting everything - from how you decided on your research topic, to research goal setting, to your search strategy, coding approach and finally writing up.  The more you write, the better.


Why you need to keep a good journal:

1.     Your Research Journal provides you with an audit trail for your project – use it to document your thoughts, your reasoning, what conclusions you drew from your data analysis, and why you made the decisions that you did.

2.     It also serves as a discussion tool when you meet with your professor or supervisor.

3.     Your research journal will keep you on track should you get lost in your research journey.  Reading back through your notes will help you get back on track.

4.     The more you document in your research journal, the easier it will be for you to justify your decisions and approach as it provides transparency for your project - your professors will want to know how you analysed the data and how you came to your conclusions. 

5.     It doesn’t need to be fancy – record a few notes a day if you’re pressed for time.  Use it to note down ideas you want to explore later.  The sentence structure doesn’t need to be perfect if it’s just your own notes for yourself – no one else will see it.

6.     Keeping a research journal will get you into the habit of writing every day.  This will help you immensely when it comes to writing up your paper.  You’ll likely find that your words will flow more easily as you’ve gotten used to writing.

7.     There’s nothing like writing things down to bring clarity to your thoughts – try it and see!


The best tool we recommend for keeping your research journal is the Memo section of NVivo.  We like it because:

·       You can copy and paste your data analysis queries into your NVivo journal.  Created an enlightening Word Cloud?  Put it in your journal.  Found some interesting links in your Word Tree?  Copy and paste it into your journal.

·       You can import and link your research papers to your journal.  This is really useful for keeping track of papers that you’re referencing.

·       You can keep your entire research project in one place.


Pro tips

For each journal entry, make it a habit to capture 3 key things:

  1. Make a note of the tasks for that day (e.g. import literature, run queries on interviews).

  2. What you actually did (e.g. what key words you used in your search) and what the results were, why you scoped it the way you did, what assumptions you made, what worked and what didn’t. 

  3. Plan your tasks for next time.

NVivo: Compare features on Windows and Mac Platforms #NVivo #research #mac #windows #ecrchat #phdchat #academiclife


In the workshops we regularly run on NVivo, we often get a mix of people wanting to learn NVivo on Windows and Mac computers. One thing that becomes very clear, particularly when we run our advanced workshop where we do a lot of visualisations and queries, is that the Mac version has some limitations. We had hoped that the Mac version would be equal in features to the Windows version by now, but sadly this isn't the case. So we wanted to show you the comparison list of features to help you make the best possible decision when deciding which platform to use.

Now that may sound like a strange thing to suggest, because wouldn't a Windows user use the Windows version and a Mac user use the Mac version? Well, not necessarily. I personally use a Mac, but I run Windows in a virtual machine, and therefore the Windows version of NVivo on my Mac. I do this for historical reasons, but also because I am mostly trained in Windows. I do, however, have the Mac version on my machine, which I can use for comparison. To be honest, when I want to use some of the more advanced features that the Mac version does not have, I find this a little frustrating, and so I have chosen to continue with the Windows version.

Therefore I wanted to present this list to give you some guidance on which platform to invest your time in, both for your research and for learning NVivo as a new tool.

Here is the product comparison list. If you need help with setting up a virtual machine, let us know.

Top 10 Tips to make the most of your audio transcriptions #ECRchat #PHDchat #research

Catch these great tips from our experienced transcriber to help you get the best result at the lowest cost.

  1. Before you conduct your interview(s), check that the recording device works properly and the audio is clear.  Oftentimes the audio has a fuzzy background, or the voices are distorted, which makes it more difficult for the transcriber to accurately record everything that is being said.
  2. When you conduct the interview, ensure that the facilitator and participant(s) are sitting close enough to the recording device to be heard clearly. When facilitating, avoid making additional noise – two common noises that really interfere with being able to hear the interview clearly include rustling of papers and clicking/tapping pens.
  3. When conducting the interview, consider background noise.  A quiet room is the best place to conduct an interview.  Examples to avoid are busy cafés, sitting near busy roads, sitting outside underneath a flight path, near a train station, near a playground (with children playing/screaming). 
  4. Try to encourage the participants to take turns when speaking.  It’s often difficult to hear what one participant is saying if another interrupts them.  If this does happen, asking the participant(s) to repeat themselves or to clarify what they said will ensure that potentially important data is not wasted.
  5. Getting the participants to introduce themselves at the beginning of the interview can be helpful.  It means that the transcriber will be able to identify the participants when transcribing.  You, as the facilitator, can always provide a list of pseudonyms so the transcriber can change the name(s) of the participants (for deidentification purposes) while still allowing you to identify participant responses.
  6. Sometimes when interviews are conducted, the participants may be eating or drinking.  Again, the rustling of packets can be very intrusive when trying to transcribe an interview, but it’s also difficult to hear what someone is saying if their mouth is full of food or if another participant is chewing loudly nearby.
  7. Make a note to keep an eye on your recording device to ensure that it is still recording throughout the interview.  Some interviewers prefer more than one recording device.  Make sure that your device(s) are charged up/have batteries.  If you are using your phone to record the interview, turning it on silent (without vibrate) is the best option as the phone vibrating is loud (when transcribing) and can also interrupt the train of thought for participants. 
  8. To ensure the accuracy in transcribing terminology, provide the transcriber with a web page address or a glossary which lists the most common acronyms and terminology that may be used in an interview.
  9. Terms such as ‘inaudible’ will be used when the transcriber cannot make out what is being said in the recording.  Please indicate if you wish the transcriber to use another word or any other protocols for dealing with pauses, etc., so it does not interfere with the coding of data.  For example, silence or laughter may be important to you, and you may wish these to be indicated in a particular way.
  10. When possible, provide a list of the questions (or the topics that will be covered) that the facilitator will use for the interviews.


Illuminating the underground: the reality of unauthorised file sharing - journal article now available

Five years after I submitted my doctorate, my research has now been published in the Information Systems Journal. I'm grateful for the help of my supervisors, Dr Liisa von Hellens and Dr Sue Nielsen, in working on this paper. It is published in a special issue on the Dark Side of Information Systems, and had a turnaround of less than 18 months from submission to publication.

In my critical ethnography, I spent months studying a clandestine online community who engaged in unauthorised file sharing (illegal file sharing for personal use). Below is the abstract, and you can view the paper here.

This paper presents a new conceptualisation of online communities by exploring how an online community forms and is maintained. Many stakeholders in the music industry rightly point out that unauthorised file sharing is illegal, so why do so many people feel it is acceptable to download music without paying? Our study found highly cohesive, well-organised groups that were motivated by scarcity and the lack of high quality music files. Our ethnographic research provides insight into the values and beliefs of music file sharers: their demands are not currently being met. Using Actor-network theory, we are able to propose that the file sharers represent a growing potential market in the music industry and that music distribution systems should be developed accordingly to meet the demands of this user group. Therefore, this study can serve as a springboard for understanding unauthorised file sharing and perhaps other deviant behaviours using technology.


Exciting futures – whither qualitative research?

This week my colleague, Dr Jenine Beekhuyzen, will present a new workshop on using NVivo with non-conventional forms of qualitative data. Earlier this week I consulted with a number of PhD students and was disappointed to find that most of them were using face-to-face interviews as their primary method to collect data. It's time to move on, people! Although Jenine has entitled her workshop 'non-conventional' data, this is merely in response to this fixation with face-to-face interviews! When recording, transcribing, videoing and note taking were the only resources we could use to collect and manage our qualitative data, there was some excuse for resorting to the most convenient methods. But now we can import so many different types of data - social media, sound recordings, photos and other image media - straight into our data management software, and view or listen to them in the same state as when we collected them.

An additional benefit of using software such as NVivo is that we do not need to waste valuable time transcribing these data sources into some other, more abstract and diluted form. We can listen to the voices, watch the images, and accurately tag and code all the associated data - non-verbal, environmental, etc. And we can easily go back to the context of the data; without this, the data is meaningless or, even worse, misleading.

Qualitative research is about understanding. Using these 'unconventional' (but not so new) forms of data collection means that we are more likely to get access to the way understanding is being constructed and developed, especially amongst the younger generations. Ignoring them will leave us floundering in the past.

by Dr Sue Nielsen

Qualitative research and the new world order

Post by Dr Sue Nielsen

Watching the news every morning can leave you feeling confused and dispirited - so many challenges, so much conflict. The information revolution and globalisation have changed our landscape in social relations, work, and the environment, and the rate of change seems to be increasing.  To cope with all these changes we need to take fresh perspectives on things we have taken for granted, to look for new ways to live, manage and prosper.

Most importantly we need to understand what these changes mean. Human beings are driven by the need to make things meaningful and this is so evident in the growing number of programmes which encourage people to express their ideas and their feelings – Big Brother, My Kitchen Rules, Facebook, Twitter and so on. Looking at the news, we can see that most of the changes now transforming our world are driven by changes in meaning. What does it mean to be young, to be a mother or father, to be Russian, Australian or Chinese? What does work mean? How is that different from the way our parents and grandparents thought about work?

Qualitative research is about understanding meaning – it is about the investigation of the meaning of social action. It does not seek to predict or explain the causes of meaning, but to understand them. It seeks to uncover what we take for granted, or what currently seems too hard to understand, so that we can think in fresh ways about new opportunities. It aims to discover what counts, what will be worth counting, rather than counting things which we already know about. It does not ignore the past, but looks back only to look forward.

Students and researchers have many pressures on them - to get unequivocal results, to tie up all the loose ends, to make strong recommendations, to ‘publish or perish’. But these are not incompatible with attaining a deep understanding of the problem under study.

Qualitative research is not an easy option. It requires persistence, acute observation and the acceptance of ambiguity and uncertainty. But the research methods are well established and the rewards are great – new insights into our current situation and new ideas to move forward into an uncertain and rapidly changing world.

Join our online qualitative research workshop series beginning on the 10th of March - register here.

Focus groups for data collection – what’s the point?

Qualitative researchers spend a lot of time wondering 'what does it mean?' We observe subjects and think about what is going on, and we analyse verbal data to interpret its meaning. Most people, when they are not sure of the meaning of a word, consult a dictionary. In preparation for the Focus Group workshop on October 2nd, I decided to go back to my big old Webster's dictionary to remind myself of what focus means. What is the point of all my rumination? Good point - since 'point' and 'focus' are strongly related. My dictionary suggests that 'focus' means (among other things) the point at which rays (or other phenomena) converge, or the point at which they (appear to) diverge. Converge and diverge, eh? That suggests that focus groups help us understand the convergence and divergence of ideas amongst the members of the group, and more importantly how these occur.

Furthermore, focal length is the position in which something must be placed for clearness of image: the point of concentration. The latter meaning is the one I think springs to mind when we think about focus group interviews: we ask the group to discuss a limited range of ideas which we wish to concentrate on. But it also means that the point of concentration is the group. If not, why not carry out individual interviews?

Most writers on focus groups indicate that they can be rewarding sources of qualitative data, but are difficult to organise and moderate. Additional challenges include whom to focus on, and how to record the interactions between the focus group members. Perhaps the most difficult challenge is how to analyse the data when the unit of analysis is the group.

But the rewards are great. The researcher can observe how consensus is reached or abandoned; how deviant ideas are suppressed or contested; how participants react to new ideas and even change their opinions and views during the group interview.

The researcher is reminded that qualitative interviews are not about eliciting facts, but about posing, confirming and refuting ideas about ‘facts’. In that way, focus group interviews seem more ‘natural’ and real than individual interviews.

How to overcome the challenges? I look forward to talking about that on October 2nd. Register today for the workshop.

Post by Dr Sue Nielsen

Manage evolving coding schemes in a codebook: 3 simple strategies

This is a repost of the blog I wrote for QSR International in May 2013. (Thanks QSR for inviting me to write it!)

I originally began writing this blog post about teamwork and my recent experiences of seeing how important it is to clarify the definitions of codes when working in teams. But I now realize that such advice applies to all researchers, in all disciplines, studying all manner of topics. Every single researcher I have discussed this with (and they now number in the hundreds) has found some benefit in it, so I had to share it with you.

The topic of a codebook came to my immediate attention when I read the article “Developing and Using a Codebook for the Analysis of Interview Data: An Example from a Professional Development Research Project” by DeCuir-Gunby, Marshall and McCulloch, which was published in the Field Methods journal in 2010. This article has become my research bible. I have seen how it helps to fix some of the challenges that researchers face when coding qualitative data. How?

What’s in a name?

We all approach our data with the best of intentions, equipping ourselves with the tools (e.g. NVivo) and techniques (e.g. thematic analysis) we believe we need to do an adequate, and hopefully even good, analysis of the data we often struggled for many months or years to collect. We often feel we have clear conceptualizations of what we mean by different codes related to our data. But often we don’t document these in detail. It just seems too hard, doesn’t it? Believe me, the tedious work is well worth it!

I find that most of us are not very articulate about what we mean by each of the codes we are using to investigate the data. How do you decide what is included in a node and what is not? How would you describe your process to someone else (i.e. your examiners!) and create a process that is repeatable?

The codebook is the answer. It helps to clarify codes, and what you mean when you apply them to your data, not only to yourself but also to your team members and supervisory staff. I’ve seen experienced teams convinced that they are all on the same page about their codes, but when given the task of developing a codebook in a systematic way, they find they often have different understandings of what they mean by common terms. So my regular first response in consulting and training on qualitative data analysis is now: where is your codebook?

Strategy 1. Create the codebook

This is basically a three-column table you can create in a memo in NVivo; you will need to update and refer to it regularly. Populate it with your code name, a definition (from the literature or your own) and an example from your data of the code being applied.



Code name: Community

Definition: Text coded to topics around the concept of community (not around specifically named communities)

Example from text: “I think it is just how distinct the different communities are because of the geographical isolation”

You can take this a step further and include an inclusion and exclusion strategy:

Inclusion criteria: include if it discusses …

Exclusion criteria: exclude if it does not discuss …

The benefits become obvious pretty quickly; you know exactly what you mean by each code, as does your supervisor/s and ultimately your examiner/s. This is really important, in my opinion, for all research projects, postgrad and postdoc alike.

The benefits for teams are also immediately apparent: each person coding knows exactly what should be coded in each node and what should be added to another or a new code – much of the ambiguity disappears as does much of the angst of the coder.
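One way to make the codebook easy for every team member to refer to is to keep it as a plain structured file alongside the NVivo project. A minimal sketch in Python of the three-column (plus inclusion/exclusion) structure described above – the file name, code name and field values here are hypothetical examples, not part of any particular project:

```python
# Write a team codebook to a CSV file so everyone codes against the
# same definitions. Code names and field values are hypothetical.
import csv

codebook = [
    {
        "code": "Community",
        "definition": ("Text coded to topics around the concept of community "
                       "(not around specifically named communities)"),
        "example": ("I think it is just how distinct the different "
                    "communities are because of the geographical isolation"),
        "include_if": "it discusses belonging to a group or place",
        "exclude_if": "it only names a town or suburb in passing",
    },
]

with open("codebook.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=codebook[0].keys())
    writer.writeheader()
    writer.writerows(codebook)
```

A CSV (or a shared spreadsheet) versions easily and can be pasted back into an NVivo memo whenever the definitions change.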

Strategy 2. Document the changes to your codebook


If you are using a priori (theoretically or domain-based) codes then you might find this coding process fairly straightforward. However, if you are doing thematic analysis with codes that are not well defined from the start, then your codebook WILL change. Be prepared for that. It might get messy before it gets clear, and that’s OK!

A recent psychology study found it took quite a few interviews before the team codebook was agreed upon, and my own experience is seeing teams nut it out, sometimes for hours on end, to finally arrive at definitions and examples that everyone agrees upon. This is good progress and a really important stage in the data analysis/coding process!

Strategy 3. Run an intercoder reliability check (specifically for teams)

After the team creates a codebook of their codes, each coder then takes a copy of the NVivo project (appended with their initials) and goes away and codes the same interview to a set of identified codes that are believed to be well defined from strategy #1.

Once the coding is complete, use NVivo to run a query to compare the coding from each coder. This is a great process for creating really strong definitions of your codes (and testing them out), as it becomes really obvious from the result of the query where the ideas and conceptualisations of the coders differ. NVivo allows you to look at exactly where the differences are, and these can then be discussed with the team.


Once some decisions are made (these often include whether to code the question in addition to the response, and whether to code by line or by paragraph), the team goes away and repeats this process with a second data source (often another interview transcript). Then run the query again and discuss the results, with the aim of making the coding as closely aligned as possible (NVivo also provides a Kappa coefficient as a measure of this agreement). The Kappa and a description of your coding process can be used when reporting your teamwork.
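For the curious: the statistic NVivo reports here is Cohen’s kappa, which compares the observed agreement between two coders with the agreement you’d expect by chance. A minimal sketch of the calculation for two coders’ yes/no decisions on a set of segments – the codings below are hypothetical examples, not NVivo output:

```python
# Cohen's kappa for two coders, computed from scratch.
# 1 = the coder applied the code to a segment, 0 = did not.

def cohens_kappa(coder_a, coder_b):
    """Agreement between two coders, corrected for chance agreement."""
    n = len(coder_a)
    # Observed agreement: proportion of segments the coders agree on
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Agreement expected by chance, from each coder's marginal rates
    p_a = sum(coder_a) / n
    p_b = sum(coder_b) / n
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)
    return (observed - expected) / (1 - expected)

coder_a = [1, 1, 0, 1, 0, 0, 1, 0]
coder_b = [1, 0, 0, 1, 0, 1, 1, 0]
print(cohens_kappa(coder_a, coder_b))  # 0.5
```

A kappa of 1 means perfect agreement and 0 means no better than chance; rules of thumb for an “acceptable” value vary between disciplines, so check the conventions in yours.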


Happy coding!


Data collection – gathering, generating or making it up?

Post by Dr Sue Nielsen

Most books on research methods include long sections on Data Collection. It’s interesting then, that in Schwandt’s (2007) Dictionary of Qualitative Inquiry, there is no entry for ‘Data Collection’ but a ‘See’ reference to ‘Description’ and ‘Generating Data’. In the former entry he discusses how so-called factual descriptions of the world are theory laden and that data ‘collection’ is more appropriately described as generation or construction. In the latter, he mentions that data is not ‘out there’ to be discovered like gold or collected as we would gather fruit from a tree.

But what does this mean? The word ‘generate’ gives rise to images of nature, birth, growth and so on. If constructed, how is this done? Certainly we would hope that we are not just making it up as we go along!

At this point, a researcher engaged in a large qualitative project with deadlines to meet, papers to write and colleagues to argue with, might mutter - ‘collect, gather, generate – a lot of fuss about nothing.’ But given the increasing trend to using interviews as the major means for ‘collecting’ data, and even to treat these interviews as representations of underlying phenomena,  it’s worthwhile to reconsider what we are doing when we 'collect' data and why.

One definition of data is ‘factual information’ but given the discussion above, this makes for even more complications. The meaning in Latin is more informative – something given – which would signify that the research subjects are active, or as Creswell (2007, p.118) puts it “will provide good data”. When Creswell discusses data collection he often refers to ‘studying’ a site or an individual. Put these two ideas together and data collection means studying what research subjects can give you.

Qualitative research is primarily about understanding, which most agree is achieved through interaction. This involves acknowledging the potential for misunderstanding as well as being open to challenging one’s own understanding. Recognising and exploring the role of both the researcher and the data giver may ‘generate’  or ‘construct’ greater insights. ‘Collecting’ data, removing it from its source and then subjecting it to analysis seems unlikely to achieve this.

It’s ok to use a catch-phrase such as ‘data collection’ which, when scrutinised, could be considered misleading. Language is like that – after all, very few people are breaking a fast when they eat breakfast. But it’s good to remember the various meanings of ‘data’ and ‘collection’ and not misapply them to that phase of qualitative research.


Creswell, J.W. (2007) Qualitative inquiry and research design: choosing among five approaches. 2nd ed. London, Sage.

Schwandt, T.A. (2007) The Sage dictionary of qualitative inquiry. 3rd ed. Los Angeles, Sage.

To transcribe or not to transcribe – that is the question!

Our first blog post by Dr Sue Nielsen :) This is a regular topic of conversation for us and the researchers we work with. Let us know what you think...

Qualitative researchers are often expected to make exhaustive written transcriptions of audio and video recordings of interviews, observations and other qualitative data. However, exhaustive transcribing is very costly, often poorly done and may delay the progress of the research project.

Researchers need to consider the following issues.

Cost and skill – Most researchers are not fast, accurate transcribers. On the other hand, qualified transcribers may not understand the domain, leading to problems with professional and discipline terminology, jargon etc.

What and how - Transcribing is a research activity not a technical process. Most qualitative researchers analyse as they listen to or look at their data. What and how to transcribe depends on the purpose of the researcher. Most researchers need to ‘reduce’ and synthesise their data so full transcriptions are unnecessary. Different research methods such as conversation analysis, grounded theory and thematic analysis require very different skills and approaches.

Accuracy - Transcribing audio and visual data may lose authenticity and relevant information. Capturing emotional, vocal, spatial and other data is very difficult and time consuming. Much of this data gets lost or forgotten and will not be taken into account in the data analysis phase.

Trustworthiness - The purpose of the qualitative research project is to write up the findings. To ensure trustworthiness, many authors suggest establishing an audit trail to enable readers or examiners to follow the line of inquiry and analysis from the interpretation back to the original data. Researchers should keep descriptions and registers of data collected, as well as how the data was handled in the analysis process. It is misleading to assume that transcripts alone will accurately represent the original data.

Help - Computer-based programmes can assist in addressing these issues. Small projects may simply use word processing, spreadsheets or databases. Longer and more complex projects will benefit from the use of a CAQDAS programme. Programmes such as NVivo enable researchers to make notes on and code audio and visual data alongside textual data.

Whatever the size of the project, think carefully before embarking on costly and possibly misleading transcriptions of all your data.

The following papers provide a good argument for working with data in its original state and an overview of emerging technologies which support data analysis and coding.

Markle, D.T., West, R.E., & Rich, P.J. (2011). Beyond transcription: Technology, change, and refinement of method. Forum Qualitative Sozialforschung / Forum: Qualitative Social Research, 12(3), Article 21.

McLellan, E., McQueen, K., & Neidig, J. (2003). Beyond the qualitative interview: Data preparation and transcription. Field Methods, 15(1), 63-84.

Love to hear your comments!

Welcome to the Adroit Research blog

Welcome to the Adroit Research blog! I'm very excited to be launching my new website, and my new blog and to start sharing all of my hints and tips about research. I love helping people with their research, because I learn so much that I can then take back to my own projects. Also researchers are so interesting! So I plan to use this blog to share some of the resources and ideas I collect on my research journey, in the hope of giving you some fresh ideas and motivation to make the most of your projects and your data. I also regularly share resources via social media, so check out our twitter, facebook and linkedin groups. I look forward to hearing your thoughts and discussing YOUR research journey with you.

Happy researching! Jenine