Retraction of a plagiarised edtech book. The final chapter.

This is hopefully the last chapter in a series of activities I started nearly one year ago. I had reported a case of AI-supported plagiarism in a book on learning analytics published in German last year (see original post here). One of our papers was plagiarized by a team of authors who published a German-language book on “Educational Data Mining and Learning Analytics”. After months without any reaction from Springer or the authors from Dublin City University, I reported on the status in this post.

Today, I am happy to report that the book has finally been retracted, and each chapter now even carries the following retraction notice:

“The publisher has withdrawn this volume in agreement with the editors. The volume was produced as part of a series of volumes by the publisher providing an AI-based summary and translation of the current state of the art in the field, but this process was not properly stated in the volume itself. Furthermore, the authors of the papers presented in the volume were not properly acknowledged. The editors were not made aware of these problems and have followed the publisher’s policies and guidelines in good faith. The publisher and editors apologise for any inconvenience caused.”

How did this happen? I finally got in contact with someone at DCU with a sense for research integrity. Their research integrity officer took action, and the authors eventually initiated the retraction of the book, despite Springer’s ignorant statement that this was not a case of plagiarism. For weeks the book had completely disappeared, but since yesterday the retraction notice has been visible. I am quite sure that Springer would not have taken any action on its own, given that several books following the same production logic are still available (1, 2, 3, 4, 5, 6, 7). If you look at the editors, you might be able to see a pattern.

What can we learn from this? My only take-home message is that we need to develop more awareness of research integrity and of the responsibility of each individual author in this process. In times when “content” can be generated with a click, we will see more of these kinds of “publications”, which in the end diminish the value of research and researchers. In the same way that OpenAI has ignored the intellectual property of many individual authors, large language models have the potential to automate the results of a process of individual discovery, learning and knowledge production. By employing these approaches in our professional context as researchers, we contribute to the further destabilisation of higher education and research.

Marco Kalz
Professor of Educational Technology

My research interests are open education, pervasive technologies and formative assessment to support (lifelong) learning and knowledge construction.
