Book Recommendation: Silverman's 'little pink book' on Qualitative Research

One of our favourite books about research is Silverman's "little pink book", in which he discusses "what is data?", "where do we find it?" and "what do we then do with it?"

In particular, he distinguishes between what he calls "naturally occurring data" and "manufactured data". "Manufactured data" is the most commonly collected type in qualitative research: it includes interviews, focus groups and observations. It is said to be "manufactured" because we, the researcher and the subject, together create (or manufacture) the data in a particular setting. "Naturally occurring data", on the other hand, is data that is collected after it has naturally occurred: it includes social media data, policy documents, websites, newspaper articles and so on.

We believe there is nothing "natural" about putting someone in a nondescript office to interview them while the red light of a little recording device flashes at them!


We see that most researchers quickly choose interviews as the first port of call in their research design, often without considering other methods of collecting data. Interviews, and even more so focus groups, are among the most difficult data collection methods for a variety of reasons (access, timing, resources required). Thus we find it puzzling that they are the default in most qualitative studies.

We second Silverman's advice: first collect "naturally occurring data" where possible, as it is often more easily accessible (often publicly available), and then support it with "manufactured data" generated through interactions with subjects.

Our advice: consider the pros and cons of each possible method during planning; this helps you decide which methods will yield the most useful data for your study. Doing so will help you map your research questions to the most appropriate methods to answer them, which in turn helps you justify your choices (and non-choices).

Help your research with a journal


The research journal is (or will be) your most useful tool and companion in your research journey.  All experienced researchers know the value of documenting everything - from how you decided on your research topic, to research goal setting, to your search strategy, coding approach and finally writing up.  The more you write, the better.


Why you need to keep a good journal:

1.     Your Research Journal provides you with an audit trail for your project – use it to document your thoughts, your reasoning, what conclusions you drew from your data analysis, and why you made the decisions that you did.

2.     It also serves as a discussion tool when you meet with your professor or supervisor.

3.     Your research journal will be your guide should you get lost in your research journey.  Reading back through your notes will help you get back on track.

4.     The more you document in your research journal, the easier it will be for you to justify your decisions and approach as it provides transparency for your project - your professors will want to know how you analysed the data and how you came to your conclusions. 

5.     It doesn’t need to be fancy – record a few notes a day if you’re pressed for time.  Use it to note down ideas you want to explore later.  The sentence structure doesn’t need to be perfect as it’s just your own notes for yourself – no one else will see it.

6.     Keeping a research journal will get you into the habit of writing every day.  This will help you immensely when it comes to writing up your paper.  You’ll likely find that your words will flow more easily as you’ve gotten used to writing.

7.     There’s nothing like writing things down to bring clarity to your thoughts – try it and see!


The tool we recommend for keeping your research journal is the Memo section of NVivo.  We like it because:

·       You can copy and paste your data analysis queries into your NVivo journal.  Created an enlightening Word Cloud?  Put it in your journal.  Found some interesting links in your Word Tree?  Copy and paste it into your journal.

·       You can import and link your research papers to your journal.  This is really useful for keeping track of papers that you’re referencing.

·       You can keep your entire research project in one place.


Pro tips

For each journal entry, make it a habit to capture 3 key things:

  1. Make a note of the tasks for that day (e.g. import literature, run queries on interviews).

  2. What you actually did (e.g. what key words you used in your search) and what the results were, why you scoped it the way you did, what assumptions you made, what worked and what didn’t. 

  3. Plan your tasks for next time.

NVivo: Compare features on Windows and Mac Platforms #NVivo #research #mac #windows #ecrchat #phdchat #academiclife


In the workshops we regularly run on NVivo, we often get a mix of people wanting to learn NVivo on Windows and on Mac. One thing that becomes very clear, particularly in our advanced workshop where we do a lot of visualisations and queries, is that the Mac version has some limitations. We had hoped that the Mac version would have feature parity with the Windows version by now, but sadly this isn't the case. So we wanted to share a comparison list of features to help you decide which platform to use.

Now that may sound like a strange thing to suggest: wouldn't a Windows user use the Windows version and a Mac user use the Mac version? Well, not necessarily. I personally use a Mac, but I run Windows in a virtual machine, and therefore the Windows version of NVivo on my Mac. I do this for historical reasons, but also because I am mostly trained in Windows. I do have the Mac version on my machine, which I can use for comparison. But to be honest, when I want to use some of the more advanced features the Mac version lacks, I find it a little frustrating, so I have chosen to continue with the Windows version.

Therefore I wanted to present this list to give you some guidance on which platform deserves your investment of time, both in doing your research and in learning a new tool like NVivo.

Here is the product comparison list. If you need help with setting up a virtual machine, let us know.

Top 10 Tips to make the most of your audio transcriptions #ECRchat #PHDchat #research

Catch these great tips from our experienced transcriber to help you get the best result at the lowest cost.

  1. Before you conduct your interview(s), check to see that the recording device works properly and the audio is clear.  Oftentimes the audio has a fuzzy background, or the voices are distorted which makes it more difficult for the transcriber to accurately record everything that is being said. 
  2. When you conduct the interview, ensure that the facilitator and participant(s) are sitting close enough to the recording device to be heard clearly. When facilitating, avoid making additional noise – two common noises that really interfere with being able to hear the interview clearly include rustling of papers and clicking/tapping pens.
  3. When conducting the interview, consider background noise.  A quiet room is the best place to conduct an interview.  Examples to avoid are busy cafés, sitting near busy roads, sitting outside underneath a flight path, near a train station, near a playground (with children playing/screaming). 
  4. Try to encourage the participants to take turns when speaking.  It’s often difficult to hear what one participant is saying if another interrupts them.  If this does happen, asking the participant(s) to repeat themselves or to clarify what they said will ensure that potentially important data is not wasted.
  5. Getting the participants to introduce themselves at the beginning of the interview can be helpful.  It means that the transcriber will be able to identify the participants when transcribing.  You, as the facilitator, can always provide a list of pseudonyms so the transcriber can change the name(s) of the participants (for de-identification purposes) while still allowing you to identify participant responses.
  6. Sometimes when interviews are conducted, the participants may be eating or drinking.  Again, the rustling of packets can be very intrusive when trying to transcribe an interview, but it’s also difficult to hear what someone is saying if their mouth is full of food or if another participant is chewing loudly nearby.
  7. Make a note to keep an eye on your recording device to ensure that it is still recording throughout the interview.  Some interviewers prefer more than one recording device.  Make sure that your device(s) are charged up/have batteries.  If you are using your phone to record the interview, turning it on silent (without vibrate) is the best option as the phone vibrating is loud (when transcribing) and can also interrupt the train of thought for participants. 
  8. To ensure the accuracy in transcribing terminology, provide the transcriber with a web page address or a glossary which lists the most common acronyms and terminology that may be used in an interview.
  9. Terms such as ‘inaudible’ will be used when the transcriber cannot make out what is being said in the recording.  Please indicate if you wish the transcriber to use another word or any other protocols for dealing with pauses, etc., so it does not interfere with the coding of data.  For example, silence or laughter may be important to you, and you may wish these to be indicated in a particular way.
  10. When possible, provide a list of the questions (or the topics to be covered) that the facilitator will use for the interviews.


Illuminating the underground: the reality of unauthorised file sharing - journal article now available

Five years after I submitted my doctorate, my research has now been published in the Information Systems Journal. I'm grateful to my supervisors, Dr Liisa von Hellens and Dr Sue Nielsen, for their help working on this paper. It is published in a special issue on the Dark Side of Information Systems, and had a turnaround of less than 18 months from submission to publication.

In my critical ethnography, I spent months studying a clandestine online community that engaged in unauthorised file sharing (illegal file sharing for personal use). Below is the abstract, and you can view the paper here.

This paper presents a new conceptualisation of online communities by exploring how an online community forms and is maintained. Many stakeholders in the music industry rightly point out that unauthorised file sharing is illegal, so why do so many people feel it is acceptable to download music without paying? Our study found highly cohesive, well-organised groups that were motivated by scarcity and the lack of high quality music files. Our ethnographic research provides insight into the values and beliefs of music file sharers: their demands are not currently being met. Using Actor-network theory, we are able to propose that the file sharers represent a growing potential market in the music industry and that music distribution systems should be developed accordingly to meet the demands of this user group. Therefore, this study can serve as a springboard for understanding unauthorised file sharing and perhaps other deviant behaviours using technology.


Exciting futures – whither qualitative research?

This week my colleague, Dr Jenine Beekhuyzen, will present a new workshop on using NVivo with non-conventional forms of qualitative data. Earlier this week I consulted with a series of PhD students and was disappointed to find that most of them were using face to face interviews as their primary method to collect data. It’s time to move on, people! Although Jenine has entitled her workshop ‘non-conventional’ data, this is merely in response to this fixation with face to face interviews!  When recording, transcribing, videoing and note taking were the only resources we could use to collect and manage our qualitative data, there was some excuse for resorting to the most convenient methods. But now we can import so many different types of data – social media, sound recordings, photos and other image media – straight into our data management software, and view or listen to them in the same state as when we collected them.

An additional benefit of using NVivo and similar tools is that we do not need to waste valuable time transcribing these data sources into some other, more abstract and diluted form. We can listen to the voices, watch the images, and accurately tag and code all the associated data – non-verbal, environmental, etc. And we can easily go back to the context of the data – without this, the data is meaningless or, even worse, misleading.

Qualitative research is about understanding. Using these ‘unconventional’ (but not so new) forms of data collection means that we are more likely to get access to the way understanding is being constructed and developed, especially amongst the younger generations.  Ignoring them will leave us floundering in the past.

by Dr Sue Nielsen

Qualitative research and the new world order

Post by Dr Sue Nielsen

Watching the news every morning can leave you feeling confused and dispirited - so many challenges, so much conflict. The information revolution and globalisation have changed our landscape in social relations, work, and the environment, and the rate of change seems to be increasing.  To cope with all these changes we need to take fresh perspectives on things we have taken for granted, to look for new ways to live, manage and prosper.

Most importantly we need to understand what these changes mean. Human beings are driven by the need to make things meaningful and this is so evident in the growing number of programmes which encourage people to express their ideas and their feelings – Big Brother, My Kitchen Rules, Facebook, Twitter and so on. Looking at the news, we can see that most of the changes now transforming our world are driven by changes in meaning. What does it mean to be young, to be a mother or father, to be Russian, Australian or Chinese? What does work mean? How is that different from the way our parents and grandparents thought about work?

Qualitative research is about understanding meaning – it is about the investigation of the meaning of social action. It does not seek to predict or explain the causes of meaning, but to understand them. It seeks to uncover what we take for granted, or what currently seems too hard to understand, so that we can think in fresh ways about new opportunities. It aims to discover what counts, what will be worth counting, rather than counting things which we already know about. It does not ignore the past, but looks back only to look forward.

Students and researchers have many pressures on them - to get unequivocal results, to tie up all the loose ends, to make strong recommendations, to ‘publish or perish’. But these are not incompatible with attaining a deep understanding of the problem under study.

Qualitative research is not an easy option. It requires persistence, acute observation and the acceptance of ambiguity and uncertainty. But the research methods are well established and the rewards are great – new insights into our current situation and new ideas to move forward into an uncertain and rapidly changing world.

Join our online qualitative research workshop series beginning on the 10th of March – register here.

Focus groups for data collection – what’s the point?

Qualitative researchers spend a lot of time wondering ‘what does it mean’? We observe subjects and think about what is going on, and we analyse verbal data to interpret its meaning. Most people, when they are not sure of the meaning of a word, consult a dictionary. In preparation for the Focus Group workshop on October 2nd I decided to go back to my big old Webster's dictionary to remind myself of what focus means. What is the point of all my rumination? Good point – since ‘point’ and ‘focus’ are strongly related. My dictionary suggests that ‘focus’ means (among other things) the point at which rays (or other phenomena) converge, or the point at which they (appear to) diverge. Converge and diverge, eh? That suggests that focus groups help us understand the convergence and divergence of ideas amongst the members of the group, and more importantly how these occur.

Furthermore, focal length is the position in which something must be placed for clearness of image – the point of concentration. The latter meaning is the one I think springs to mind when we think about focus group interviews: we ask the group to discuss a limited range of ideas which we wish to concentrate on. But it also means that the point of concentration is the group. If not, why not carry out individual interviews?

Most writers on focus groups indicate that they can be rewarding sources of qualitative data but are difficult to organise and moderate. Additional challenges include whom to focus on, and how to record the interactions between the focus group members.  Perhaps the most difficult challenge is how to analyse the data when the unit of analysis is the group.

But the rewards are great. The researcher can observe how consensus is reached or abandoned; how deviant ideas are suppressed or contested; how participants react to new ideas and even change their opinions and views during the group interview.

The researcher is reminded that qualitative interviews are not about eliciting facts, but about posing, confirming and refuting ideas about ‘facts’. In that way, focus group interviews seem more ‘natural’ and real than individual interviews.

How to overcome the challenges? I look forward to talking about that on October 2nd. Register today for the workshop.

Post by Dr Sue Nielsen

Manage evolving coding schemes in a codebook: 3 simple strategies

This is a repost of the blog I wrote for QSR International in May 2013.

I originally began writing this blog post about teamwork and my recent experiences in seeing how important it is to clarify the definitions of codes when working in teams. But I now realize that such advice applies to all researchers, in all disciplines, studying all manner of topics. Every single researcher I have discussed this with (and they are now in the hundreds) has found some benefit in this, so I had to share it with you. (Thanks QSR for inviting me to write this post!)

The topic of a codebook came to my immediate attention when I read the article “Developing and Using a Codebook for the Analysis of Interview Data: An Example from a Professional Development Research Project” by DeCuir-Gunby, Marshall and McCulloch, which was published in the Field Methods journal in 2010. This article has become my research bible. I have seen how it helps to fix some of the challenges that researchers face when coding qualitative data. How?

What’s in a name?

We all approach our data with the best of intentions, equipping ourselves with the tools (e.g. NVivo) and techniques (e.g. thematic analysis) we believe we need to do an adequate, and hopefully even good, analysis of the data we often struggled for many months or years to collect. We often feel we have clear conceptualizations of what we mean by different codes related to our data. But often we don’t document these in detail. It just seems too hard, doesn’t it? Believe me, the tedious work is well worth it!

I find that most of us are not very articulate about what we mean by each of the codes we are using to investigate the data. How do you decide what is included in a node and what is not? How would you describe your process to someone else (i.e. your examiners!) and create a process that is repeatable?

The codebook is the answer. It helps to clarify your codes and what you mean when you apply them to your data, not only to yourself but also to your team members and supervisory staff. I’ve seen experienced teams convinced that they are all on the same page about their codes, but when given the task of developing a codebook in a systematic way, they find they often have different understandings of what they mean by common terms. So my first response in qualitative data analysis consulting and training is now: where is your codebook?

Strategy 1. Create the codebook

This is basically a three-column table you can create in a memo in NVivo; you will need to update and refer to it regularly. Populate it with your code name, a definition (from the literature or your own) and an example from your data of the code being applied.



Code name: Community

Definition: Text coded to topics around the concept of community (not around specifically named communities)

Example from text: "I think it is just how distinct the different communities are because of the geographical isolation"

You can take this a step further and include an inclusion and exclusion strategy:

Inclusion criteria: Include if it discusses …

Exclusion criteria: Include if it does not discuss …

The benefits become obvious pretty quickly: you know exactly what you mean by each code, as do your supervisor/s and ultimately your examiner/s. This is really important, in my opinion, for all research projects, postgrad and postdoc alike.

The benefits for teams are also immediately apparent: each person coding knows exactly what should be coded to each node and what should be added to another or a new code – much of the ambiguity disappears, as does much of the coder's angst.
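If you also want a copy of your codebook outside NVivo (to email to the team, or to include in an appendix), even a tiny script can keep it in a shareable format. The sketch below is just an illustration: it uses plain CSV and a hypothetical 'Community' code like the example above; nothing here is NVivo-specific.

```python
import csv

# Each codebook entry mirrors the three-column table: code name,
# definition, and an example from the data. The 'Community' entry
# is illustrative, not from any real project.
codebook = [
    {
        "code": "Community",
        "definition": ("Text coded to topics around the concept of community "
                       "(not around specifically named communities)"),
        "example": ("I think it is just how distinct the different communities "
                    "are because of the geographical isolation"),
    },
]

# Write the codebook to CSV so it can be shared or appended to a thesis.
with open("codebook.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["code", "definition", "example"])
    writer.writeheader()
    writer.writerows(codebook)
```

Each time the team agrees on a new or revised code, you would add or update a row and re-export; the CSV then doubles as a snapshot of the codebook at that point in time.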

Strategy 2. Document the changes to your codebook


If you are using a priori (theoretically or domain-based) codes then you might find this coding process fairly straightforward. However, if you are doing thematic analysis with codes that are not well defined from the start, then your codebook WILL change. Be prepared for that. It might get messy before it gets clear, and that's OK.

A recent psychology study found it took quite a few interviews before the team's codebook was agreed upon, and my own experience is seeing teams nut it out, sometimes for hours on end, to finally settle on definitions and examples that everyone agrees upon. This is good progress and a really important stage in the data analysis/coding process!

Strategy 3. Run an intercoder reliability check (specifically for teams)

After the team creates a codebook of their codes, each coder then takes a copy of the NVivo project (appended with their initials) and goes away and codes the same interview to a set of identified codes that are believed to be well defined from strategy #1.

Once the coding is complete, use NVivo to run a query to compare the coding from each coder. This is a great process to help to create really strong definitions of your codes (and test them out), as it becomes really obvious from the result of the query where the ideas and conceptualisations of the coders differ. NVivo allows you to look at exactly where the differences are, which can then be discussed with the team.


Once some decisions are made (these often include whether to code the question in addition to the response, and whether to code by line or by paragraph), the team goes away and repeats this process with a second data source (often another interview transcript). Then run the query again and discuss the results, with the aim of being as closely aligned in coding as possible (NVivo also provides a Kappa coefficient as a measure of this agreement). The Kappa and a description of your coding process can be used to report your teamwork.
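NVivo computes the Kappa for you, but the statistic itself (Cohen's kappa) is simple enough to sketch by hand if you want to see what the number means. Here is a minimal Python version for two coders, using made-up coding decisions (1 = segment coded to the node, 0 = not); it is a teaching sketch, not a replacement for NVivo's comparison query.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders labelling the same segments."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed agreement: fraction of segments where the two coders agree.
    po = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement: derived from each coder's overall label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    pe = sum(freq_a[label] * freq_b[label] for label in freq_a) / n ** 2
    # Kappa rescales observed agreement to discount agreement by chance.
    return (po - pe) / (1 - pe)

# Two coders' decisions for ten paragraphs (1 = coded to the node, 0 = not).
coder_1 = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
coder_2 = [1, 0, 0, 1, 0, 1, 1, 1, 0, 1]
print(round(cohens_kappa(coder_1, coder_2), 2))  # 0.58
```

A kappa of 1 means perfect agreement and 0 means no better than chance; where exactly "good enough" sits is a judgement call the team should still discuss together.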


Happy coding 


Data collection – gathering, generating or making it up?

Post by Dr Sue Nielsen

Most books on research methods include long sections on Data Collection. It’s interesting then, that in Schwandt’s (2007) Dictionary of Qualitative Inquiry, there is no entry for ‘Data Collection’ but a ‘See’ reference to ‘Description’ and ‘Generating Data’. In the former entry he discusses how so-called factual descriptions of the world are theory laden and that data ‘collection’ is more appropriately described as generation or construction. In the latter, he mentions that data is not ‘out there’ to be discovered like gold or collected as we would gather fruit from a tree.

But what does this mean? The word ‘generate’ gives rise to images of nature, birth, growth and so on. If constructed, how is this done? Certainly we would hope that we are not just making it up as we go along!

At this point, a researcher engaged in a large qualitative project with deadlines to meet, papers to write and colleagues to argue with, might mutter - ‘collect, gather, generate – a lot of fuss about nothing.’ But given the increasing trend to using interviews as the major means for ‘collecting’ data, and even to treat these interviews as representations of underlying phenomena, it’s worthwhile to reconsider what we are doing when we 'collect' data and why.

One definition of data is ‘factual information’ but given the discussion above, this makes for even more complications. The meaning in Latin is more informative – something given – which would signify that the research subjects are active, or as Creswell (2007, p.118) puts it “will provide good data”. When Creswell discusses data collection he often refers to ‘studying’ a site or an individual. Put these two ideas together and data collection means studying what research subjects can give you.

Qualitative research is primarily about understanding, which most agree is achieved through interaction. This involves acknowledging the potential for misunderstanding as well as being open to challenging one’s own understanding. Recognising and exploring the role of both the researcher and the data giver may ‘generate’ or ‘construct’ greater insights. ‘Collecting’ data, removing it from its source and then subjecting it to analysis seems unlikely to achieve this.

It’s OK to use a catch-phrase such as ‘data collection’ which, when scrutinised, could be considered misleading. Language is like that – after all, very few people are breaking a fast when they eat breakfast. But it’s good to remember the various meanings of ‘data’ and ‘collection’ and not misapply them to that phase of qualitative research.


Creswell, J.W. (2007) Qualitative inquiry and research design: choosing among five approaches. 2nd ed. London, Sage.

Schwandt, T.A. (2007) The Sage dictionary of qualitative inquiry. 3rd ed. Los Angeles, Sage.

To transcribe or not to transcribe – that is the question!

Our first blog post by Dr Sue Nielsen :) This is a regular topic of conversation for us and the researchers we work with. Let us know what you think...

Qualitative researchers are often expected to make exhaustive written transcriptions of audio and video recordings of interviews, observations and other qualitative data. However, exhaustive transcribing is very costly, often poorly done and may delay the progress of the research project.

Researchers need to consider the following issues.

Cost and skill – Most researchers are not fast, accurate transcribers. On the other hand, qualified transcribers may not understand the domain, leading to problems with professional and discipline terminology, jargon etc.

What and how – Transcribing is a research activity, not a technical process. Most qualitative researchers analyse as they listen to or look at their data. What and how to transcribe depends on the purpose of the researcher. Most researchers need to ‘reduce’ and synthesise their data, so full transcriptions are unnecessary. Different research methods such as conversation analysis, grounded theory and thematic analysis require very different skills and approaches.

Accuracy – Transcribing audio and visual data may lose authenticity and relevant information. Capturing emotional, vocal, spatial and other data is very difficult and time consuming. Much of this data gets lost or forgotten and will not be taken into account in the data analysis phase.

Trustworthiness – The purpose of the qualitative research project is to write up the findings. To ensure trustworthiness, many authors suggest establishing an audit trail to enable readers or examiners to follow the line of inquiry and analysis from the interpretation back to the original data. Researchers should keep descriptions and registers of data collected, as well as how the data was handled in the analysis process. It is misleading to assume that transcripts alone will accurately represent the original data.

Help – Computer-based programmes can assist in addressing these issues. Small projects may simply use word processing, spreadsheets or databases. Longer and more complex projects will benefit from the use of a CAQDAS programme. Programmes such as NVivo enable researchers to make notes on and code audio and visual data alongside textual data.

Whatever the size of the project, think carefully before embarking on costly and possibly misleading transcriptions of all your data.

The following papers provide a good argument for working with data in its original state and an overview of emerging technologies which support data analysis and coding.

Markle, D.T., West, R.E., & Rich, P.J. (2011). Beyond transcription: Technology, change, and refinement of method. Forum Qualitative Sozialforschung / Forum: Qualitative Social Research, 12(3), Art. 21.

McLellan, E., McQueen, K., & Neidig, J. (2003). Beyond the qualitative interview: Data preparation and transcription. Field Methods, 15(1), 63-84.

Love to hear your comments!

Welcome to the Adroit Research blog

Welcome to the Adroit Research blog! I'm very excited to be launching my new website and blog, and to start sharing all of my hints and tips about research. I love helping people with their research, because I learn so much that I can then take back to my own projects. Also, researchers are so interesting! So I plan to use this blog to share some of the resources and ideas I collect on my research journey, in the hope of giving you some fresh ideas and motivation to make the most of your projects and your data. I also regularly share resources via social media, so check out our Twitter, Facebook and LinkedIn groups. I look forward to hearing your thoughts and discussing YOUR research journey with you.

Happy researching! Jenine

Using Social Media to share your research

I'm fascinated with the power that researchers now have to share their research via social media. With a lot of focus now on how many citations we have as individual researchers rather than on journal rankings and impact factors (although these are still important), we need to take control of social media and really use it to our advantage to get more people reading our research, and to make more of an impact. Here is a GREAT tool I am using to manage my Twitter, Facebook and LinkedIn accounts, allowing me to post to all three with a single click! Yay for awesome tech developers like this! It's called HootSuite.

Why not tweet when your hard-won journal article finally gets published after three years of writing and rewriting? If nothing else, it is therapy. What you may also find is that people you never dreamed of are reading your article, citing it, and using it to stimulate their own ideas. It can't hurt, right?

I admit I now love twitter: not just posting but reading the posts of others. I use it for research mostly, and I find it easy to read a few posts over breakfast or when waiting for the bus each day. Out of this I find that I learn at least one new thing about research every single day, or I find a resource I can recommend. This makes me love research even more, when you see people so passionate about it. Give it a try. I dare you! :)