Superficially Plausible Outputs from a Black Box: Problematising GenAI Tools for Analysing Qualitative SoTL Data
Blog Article
Generative AI (GenAI) tools are increasingly used for academic tasks, including qualitative data analysis for the Scholarship of Teaching and Learning (SoTL). In our practice as academic developers, we are frequently asked whether this use of GenAI is reliable, valid, and ethical. Because this is a new field, we have not been able to answer this confidently based on the published literature, which offers both very positive and highly cautionary accounts. To fill this gap, we experimented with chatbot-style GenAI tools (namely ChatGPT 4, ChatGPT 4o, and Microsoft Copilot) to support or conduct qualitative analysis of survey and interview data from a SoTL project that had previously been analysed by experienced researchers using thematic analysis.
At first sight, the output looked plausible, but the results were incomplete and not reproducible. In some instances, the tools interpreted and extrapolated from the data even when the prompt clearly stated that they should only analyse a specified dataset according to explicit instructions. Since both the algorithms and the training data of these GenAI tools are undisclosed, it is impossible to know how the outputs were arrived at. We conclude that while results may look plausible initially, digging deeper soon reveals serious problems; the lack of transparency about how analyses are conducted and results are generated means that no reproducible method can be described.
We therefore warn against an uncritical use of GenAI in qualitative analysis of SoTL data.