AI in Qualitative Research
Artificial intelligence (AI) “is the science and engineering of making intelligent machines, especially computer programs” (McCarthy, 2007, p. 2). The goal is to build computer systems that can think and act like humans with the ability to reason, infer, and generalize. Today, AI applications are ubiquitous. AI is used in most search engines, recommendation systems, and virtual assistants, as well as in facial recognition, image labeling, and spam filtering.
In 2020, the company OpenAI unveiled the large language model (LLM) Generative Pre-trained Transformer 3 (GPT-3). GPT-3 is based on a deep learning architecture (i.e., artificial neural networks) with an attention mechanism (i.e., network components designed to mimic cognitive attention) (OpenAI, 2022). GPT-3 was trained on hundreds of billions of words from the Internet. Because transformer-based AI is good at natural language processing, GPT-3 excels at text tasks such as document translation, summarization, question answering, and generating human-like prose (Devlin and Chang, 2018). In simple terms, older AI models are good at content analysis, that is, the systematic summarization of written data. In contrast, LLMs like GPT-3 are designed to conduct discourse analysis; that is, they generate knowledge based on the idea that words and sentences are linked to one another and that the terms and phrases around them influence their meaning. GPT-3 was improved and released as GPT-3.5 in 2022, and OpenAI released GPT-4 in March 2023. The free version of ChatGPT, which you can sign up to use at https://openai.com/blog/gpt-3-apps, is based on GPT-3.5, and the paid version, ChatGPT Plus, is based on GPT-4.
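To make this concrete, here is a minimal sketch of how a researcher might query a GPT model programmatically. It assumes the official `openai` Python package (v1.x) and an `OPENAI_API_KEY` environment variable; the model name, prompt, and interview excerpt are illustrative, not a prescribed workflow.

```python
# Minimal sketch: asking a GPT model to summarize an interview excerpt.
# Assumes the `openai` Python package (v1.x) is installed and an
# OPENAI_API_KEY environment variable is set; the excerpt is hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

excerpt = (
    "Interviewer: How did the move affect your daily routine?\n"
    "Participant: Honestly, everything changed. I stopped seeing my old "
    "neighbors, and the commute ate up the time I used to spend with family."
)

response = client.chat.completions.create(
    model="gpt-4",  # "gpt-3.5-turbo" corresponds to the free ChatGPT tier
    messages=[
        {"role": "system", "content": "You are a qualitative research assistant."},
        {
            "role": "user",
            "content": (
                "Summarize this interview excerpt and note how the surrounding "
                "words shape the participant's meaning:\n\n" + excerpt
            ),
        },
    ],
)

print(response.choices[0].message.content)
```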
Researchers are exploring the use of LLMs for qualitative data analysis (Lennon et al., 2021). Many popular qualitative software programs already incorporate computer-assisted analysis tools. For example, NVivo includes AI-assisted transcription and natural language processing, and ATLAS.ti offers a beta OpenAI coding module. There are also open-source projects, such as the Qualitative Discourse Analysis Package and RQDA, that can be used with the R statistical software. A simple sketch of this kind of rule-based, computer-assisted coding follows.
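The sketch below illustrates deterministic, keyword-based auto-coding, the kind of rule-driven assistance classic computer-assisted tools automate. The codebook and excerpts are invented for illustration; real packages like NVivo or ATLAS.ti are far more sophisticated.

```python
# Minimal, hypothetical sketch of keyword-based auto-coding.
# The codebook and excerpts are invented for illustration.
codebook = {
    "social_ties": ["neighbor", "friend", "family"],
    "time_pressure": ["commute", "schedule", "no time"],
}

excerpts = [
    "I stopped seeing my old neighbors after the move.",
    "The commute ate into every part of my schedule.",
]

for excerpt in excerpts:
    text = excerpt.lower()
    # Assign every code whose keywords appear anywhere in the excerpt.
    codes = [code for code, keywords in codebook.items()
             if any(kw in text for kw in keywords)]
    print(f"{codes or ['<uncoded>']}: {excerpt}")
```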
How researchers use AI in qualitative research falls along a continuum of complexity and original contribution. At the lowest level, AI is a tool that does what the researcher tells it to do; for example, it might be used to find a specific word or exact phrase in a document. At the next level, AI can assist the researcher by performing more complex tasks, such as transcribing an audio interview or completing "if this, then that" (IFTTT) tasks. The potential of LLMs lies in their ability to complete tasks associated with higher-order and nonlinear thinking. These could be collaborative tasks in which an LLM acts as a co-researcher by generating themes or identifying relationships in qualitative data, as sketched below. For example, one study showed that AI systems were about 11 percent better than human experts at interpreting mammograms to predict breast cancer (McKinney et al., 2020). At the furthest point on the continuum, LLMs may be able to assume the role of scholars by generating theories grounded in data that others can apply and evaluate.
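The collaborative level of the continuum can be sketched the same way as the earlier example: prompting an LLM to propose candidate themes across several excerpts. This is a hypothetical illustration under the same assumptions (the `openai` v1.x package and an `OPENAI_API_KEY`); any themes the model returns are suggestions the researcher must verify against the data.

```python
# Minimal sketch of the collaborative level: asking an LLM to propose
# candidate themes across excerpts. Excerpts are hypothetical, and the
# model's output is a starting point for human analysis, not a result.
from openai import OpenAI

client = OpenAI()

excerpts = [
    "I stopped seeing my old neighbors after the move.",
    "The commute ate into every part of my schedule.",
    "My kids barely see their grandparents now.",
]

prompt = (
    "Below are excerpts from interviews about relocating for work. "
    "Propose two or three candidate themes, and for each theme quote the "
    "excerpt(s) that support it.\n\n"
    + "\n".join(f"- {e}" for e in excerpts)
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```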
Researchers should keep several things in mind when using AI systems to assist with qualitative data analysis. First, AI systems can be wrong. In 2023, two New York lawyers were sanctioned by a U.S. district judge for submitting a legal brief containing six fictitious cases generated when they used ChatGPT for legal research (Merken, 2023). Second, there are privacy concerns. Most AI systems require researchers to upload documents to off-site servers, and many terms-of-service agreements require that some level of ownership or use of the data be transferred to the AI host company. Third, AI results are based on training data that may be biased or reflect historical and/or social inequalities. For example, in 2018, the American Civil Liberties Union purchased access to Amazon's facial surveillance software "Rekognition," trained it with 25,000 publicly available arrest photos, and then ran it against images of members of Congress (Snow, 2018). The software incorrectly identified 28 members as having been arrested for a crime, and people of color were disproportionately falsely matched: they accounted for 39 percent of the false matches but only about 20 percent of Congress.
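Regarding the second concern, one common precaution is to strip obvious identifiers before any transcript leaves the researcher's machine. The following is a minimal, illustrative Python sketch; the patterns are hypothetical examples and are not sufficient for real de-identification, which should follow an IRB-approved protocol.

```python
# Minimal sketch: redacting obvious identifiers before sending transcripts
# to an off-site AI service. These patterns are illustrative only; real
# de-identification requires a vetted, IRB-approved procedure.
import re

REDACTIONS = [
    (re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), "[PHONE]"),  # US-style phone numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),        # email addresses
    (re.compile(r"\bDr\.\s+[A-Z][a-z]+\b"), "[NAME]"),              # e.g., "Dr. Smith"
]

def redact(text: str) -> str:
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

print(redact("Call Dr. Smith at 555-123-4567 or smith@example.org."))
# -> "Call [NAME] at [PHONE] or [EMAIL]."
```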
Before using AI or LLMs to analyze your qualitative interviews, you should check with your professor and institutional review board for guidance.
Critical Thinking
- What do you think about using AI to analyze qualitative interview data?
- What guidelines or protocols should researchers put into place when using AI to analyze qualitative data?