I have been using automated transcription software Otter.ai throughout my 3-year PhD to facilitate data collection and analysis. This tool has been indispensable for transcribing events (e.g. workshops and conferences), in-depth interviews and focus groups with research participants, meetings with colleagues, and much more.
If you're new to automated transcription, my previous blog posts offer an introduction and a tutorial. Importantly, automated transcription comes with a specific set of ethical and privacy considerations, which you can read more about in this post. Since writing these, I've run various talks and workshops on automated transcription - you can read a summary of the key messages from these here, including links to presentation slides and recordings.
In this post, I share some insights and tips from my experience using Otter.ai to generate, edit, and prepare transcripts for qualitative analysis. In this example I use the qualitative analysis software NVivo by QSR, which helps qualitative researchers to organise, analyse, and find insights in unstructured or qualitative data like interviews, open-ended survey responses, social media content, etc. However, there are lots of other proprietary tools you can use for analysing text, as well as free and open-source options such as Voyant Tools. You can also use programming languages like R and Python to conduct text mining and analytics (e.g. see this guide for text mining in R). Of course, computer-aided qualitative analysis isn't the only way to go, and manual coding remains just as important.
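To give a flavour of what text mining in a programming language looks like, here's a minimal Python sketch of a word-frequency count - about the simplest form of text mining - run on an invented transcript excerpt. The excerpt and the tiny stopword list are made up for illustration; a real analysis would use fuller stopword lists and tools from libraries such as NLTK or tidytext.

```python
from collections import Counter
import re

# A tiny invented transcript excerpt, for illustration only
transcript = (
    "Community engagement matters because engagement builds trust, "
    "and trust makes planning decisions more legitimate."
)

# A minimal stopword list (real analyses use much fuller lists)
stopwords = {"and", "the", "because", "more", "makes"}

# Tokenise: lowercase the text and keep alphabetic words only
words = re.findall(r"[a-z]+", transcript.lower())

# Count the remaining words, ignoring stopwords
counts = Counter(w for w in words if w not in stopwords)

print(counts.most_common(3))
```

Even a toy example like this surfaces the most frequent terms ("engagement", "trust"), which is essentially what key-word summaries and word clouds are built from.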
The core messages in this post should hopefully be relevant for a broad audience of researchers, regardless of what specific tools and approaches they are using. Equally, while I use Otter.ai in this example, there are plenty of other free and paid tools available in 2022, many of which have pretty similar core features.
1. Edit the transcript
Once you've uploaded a recording to Otter.ai (or used the live transcription function) and it has finished transcribing, you'll need to edit the transcript manually. Although Otter does a pretty accurate job of converting audio to text, it will always need human input to check that there are no mistakes. This is particularly important for researchers, who need to make sure that their participants' contributions are being accurately represented. It's also beneficial to spend time going through each transcript to get a 'feel' for the data.
So, the first step is to read back through the transcript and correct any mistakes. Different methods work for different people, but I tend to read through and edit the transcript while listening back to the audio recording at around 1.5x to 2x speed, slowing down and speeding up as necessary. Now that I've been using this method for a while, it's become increasingly straightforward and efficient (it takes a few goes to really get used to it!).
The features offered in Otter.ai are particularly useful for editing because you can listen while editing in your internet browser. As shown in the screenshot below, individual words are highlighted as the audio recording plays. However, do make sure that you have a reliable internet connection so that everything saves properly (I've learnt this the hard way by losing lots of edited data and having to start again!). If I'm working somewhere with a poor WiFi connection, I usually export the edited transcript as a text file at regular intervals, so if the edited transcript doesn't save properly at least I don't lose all of my edits.
|A screenshot of the Otter.ai browser interface showing how words in the transcript are highlighted as the audio plays, speaker labelling, and the speed settings. (Transcript source: public webinar "Engaging for the Future", Commonplace).
The key things that I check for when editing include:
- Punctuation errors - e.g. full stops, commas, and question marks where they shouldn't be (or a lack of punctuation in the right places).
- Random paragraph breaks - sometimes, for example when a speaker pauses mid-sentence, Otter.ai automatically starts a new paragraph, so it's worth checking whether this has happened and merging paragraphs where necessary.
- Lack of paragraph breaks - Otter.ai has a tendency to generate long monologues of speech, which might need to be broken up into smaller paragraphs to make it easier to read.
- Spelling errors and incorrect words - I find this happens quite a lot when transcribing different accents, when specific names and locations are mentioned, or when abbreviations are used.
- Linked to the above, please do carefully check for any words which could be interpreted as rude or inappropriate - I won't repeat any here, but I have removed some rather interesting misinterpretations of words from some of my transcripts (!).
- Mislabelled speakers - it's really important to check that Otter.ai has labelled your speakers correctly and not mislabelled anyone (this can happen, for example, when someone interrupts someone else mid-sentence, or if two people have very similar sounding voices).
- Repetition and utterances - in natural spoken language, people tend to repeat words, use filler words (like "uhm", "ah", and "like"), and can stop talking or change the course of conversation mid-sentence. While utterances and repetition can be useful to retain in the transcript for some purposes, there are other times when you might want to edit these out.
- Identifying information - for research in particular, it's important to make sure that you protect the anonymity of participants at all times. Because Otter.ai transcribes verbatim, the text will include everything in the conversation (e.g. people's names, names of businesses, areas, etc.). This is a particularly important consideration when conducting online interviews, for example, when the boundaries between private and professional lives can become blurred (particularly when participants join the interview from home) and you risk capturing personal information.
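For the last two checks in particular, a simple script can give you a head start once the transcript has been exported as plain text. The sketch below is not an Otter.ai feature - the filler list and the name-to-pseudonym mapping are invented for illustration, and any automated pass still needs a careful manual read-through afterwards.

```python
import re

# Invented filler words and identifier mapping, for illustration only
FILLERS = ["uhm", "um", "ah", "like"]
PSEUDONYMS = {"Alice": "Participant 1", "Smithtown Bakery": "[business]"}

def clean_transcript(text: str) -> str:
    # Replace known identifiers with pseudonyms first
    for name, pseudonym in PSEUDONYMS.items():
        text = text.replace(name, pseudonym)
    # Strip filler words, together with a surrounding comma if present
    pattern = r"(,\s*)?\b(" + "|".join(FILLERS) + r")\b,?"
    text = re.sub(pattern, "", text, flags=re.IGNORECASE)
    # Collapse any double spaces left behind
    return re.sub(r"\s{2,}", " ", text).strip()

example = "Uhm, Alice said the Smithtown Bakery is, like, really great."
print(clean_transcript(example))
# -> Participant 1 said the [business] is really great.
```

A find-and-replace pass like this will miss contextual identifiers (nicknames, descriptions of places, job titles), so treat it as a first pass only, never a substitute for reading the transcript yourself.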
2. Annotate the transcript
|A screenshot of the Otter.ai browser interface showing how you can highlight and comment on text. (Transcript source: public webinar "Engaging for the Future", Commonplace).
I won't go into too much detail here as I have covered this in my previous blog posts (e.g. this tutorial and this webinar), but automated transcription software can generate some really useful summaries of your transcript. This is particularly useful if you want to quickly see some of the (potential) themes in the transcript before conducting more in-depth analysis, e.g. if you're working on a collaborative project and want to send your colleagues a brief summary. The image below shows the key words automatically generated by Otter.ai (which can also be viewed as a word cloud) - that is, the words which appear most frequently in the transcript. In this example, you can see from the key words that this webinar was about community engagement in a planning setting. Otter.ai will also tell you the amount of time (%) that each person speaks for in the transcript, amongst other quick insights.
|A screenshot of the Otter.ai browser interface showing automatically generated key words. (Transcript source: public webinar "Engaging for the Future", Commonplace).
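As a rough illustration of how a speaking-time percentage like Otter.ai's can be computed, here's a short Python sketch using invented (speaker, seconds spoken) segments - the speaker names and durations are made up for the example.

```python
# Invented (speaker, seconds spoken) segments, for illustration only
segments = [
    ("Host", 120), ("Guest", 300), ("Host", 60), ("Guest", 120),
]

# Sum up each speaker's total speaking time
totals = {}
for speaker, seconds in segments:
    totals[speaker] = totals.get(speaker, 0) + seconds

# Convert totals into percentages of the overall speaking time
overall = sum(totals.values())  # 600 seconds in total
shares = {s: round(100 * t / overall) for s, t in totals.items()}
print(shares)  # {'Host': 30, 'Guest': 70}
```

In a real transcript the segment durations would come from the timestamps attached to each speaker turn, but the calculation is the same.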
3. Prepare for analysis
|A screenshot of NVivo by QSR showing one way that comments from Otter.ai can be used to create annotations and themes for analysis. (Transcript source: public webinar "Engaging for the Future", Commonplace).
- Official NVivo by QSR tutorials on YouTube: https://www.youtube.com/user/QSRInternational/playlists
- Introduction and what is NVivo?
- NVivo - coding and uncoding
- Getting started with NVivo tutorial - QSR International