Marco Kalz is full professor of educational technology and Chief Information/Chief Digital Officer (CIO/CDO) at the Heidelberg University of Education. His research interests lie in the use of open education, pervasive technologies, and formative assessment to support (lifelong) learning and knowledge construction. He has published more than 150 peer-reviewed publications. Marco is a research leader at the intersection of educational science, computer science, and psychology, with an interest in addressing global societal challenges such as energy conservation, marine litter, resuscitation support, and cancer education with educational technology.
Marco is associate editor of the IEEE Transactions on Learning Technologies and editorial board member of Educational Technology Research and Development and the Journal of Computing in Higher Education. He is a fellow of the Interuniversity Center for Educational Sciences (ICO) and the Dutch research school on information and knowledge systems (SIKS). He is director of the study program E-Learning and Media Education and co-director of the Heidelberg Education for Sustainable Development Center. Over the years he has secured approximately EUR 3.7 million of research funding for his institutions from competitive projects with a total budget of EUR 34 million. Besides European projects, he has been and continues to be regularly involved in educational innovation and consulting projects with partners inside and outside of his institutions, including clients such as the International Labour Organisation, the United Nations Environment Programme, the European Commission, UNESCO, and other international and national organizations.
PhD in Educational Technology, 2009
Open University of the Netherlands
MA Multimedia Didactics, 2003
1st State Examination for Teachers, 2000
University of Cologne
Assistierte und einfach generierte intelligente Musiklehre im interaktiven Lernraum mittels Smartphone
Educational Innovation towards Organizational Development. The Art of Governing Open and Online Education in Dutch Higher Education Institutions
Through the Lens of the Learner: Using Learning Analytics to Predict Learner-Centered Outcomes in Massive Open Online Courses
Skill-based scouting of open user-generated and community-improved content for management education and training
Mind the Gap. Unravelling learner success and behaviour in Massive Open Online Courses
While artificial intelligence has been studied experimentally in teaching and learning contexts for 20 years, the release of so-called large language models has triggered a veritable public wave of experimentation, in which teachers have explored tools such as ChatGPT for their integrability into various aspects of teaching and learning processes. In the process, the interplay between learners' prior knowledge, self-regulation skills, and the use of AI-based systems often recedes into the background. This talk examines various reciprocal options for interacting with AI-based systems through the lens of "shared regulation" between teachers, learners, and an AI, and discusses a range of didactic courses of action for teachers.
The integration of digital skills and media education is considered a challenge for curriculum development and teaching practice in teacher education. In Germany, approximately 30% of teacher education programs have not yet introduced a mandatory module on digital skills. Furthermore, these modules can hardly cope with the complexity of technology-enhanced learning and the diverse decision-making moments involved in introducing digital tools and teaching practices into classroom instruction. Recent developments in artificial intelligence and natural language processing are considered another wave of disruption for the skill development of teachers with regard to digital literacy. The talk will address different dimensions of widespread access to AI tools for learning and teaching purposes and their implications for the digital literacy of teachers, and will highlight some research challenges.
In 2022, the German Council of Science and Humanities (Wissenschaftsrat) called for the establishment of a new examination culture at higher education institutions. However, rising student numbers make it a major challenge for teachers to support individual learning processes. Peer feedback offers a flexible format for involving students in the development of assessment criteria and in giving and receiving feedback. The greatest challenge here is to break down the complexity of designing and implementing peer feedback so that it can be integrated into everyday teaching without excessive effort. This contribution reflects on the current state of research on peer feedback in higher education and presents research findings that contribute to reducing the implementation complexity of peer feedback.
The concept of future skills is currently seen as an option for higher education institutions to better prepare students for an uncertain future and to educate them as the problem solvers of tomorrow. It is surprising that the concept has found its way into political funding activities without an evidence-based analysis and a critical discussion of the concept having taken place. This contribution situates the debate in its historical context and draws connections to comparable concepts and activities. Based on systematic literature reviews and evidence syntheses, the current state of research is summarized and nine problem areas in the discussion and promotion of future skills are identified. Besides the missing contextualization of future skills within earlier approaches, the lack of empirical foundations and the absence of measurement methods for analyzing these competencies were identified as particularly critical for the funding of learning offerings for future skills. As an alternative research and development direction, the challenge of transfer within and beyond fields of expertise is discussed.
Given the crucial role of feedback in supporting learning in higher education, understanding the factors influencing feedback effectiveness is imperative. Student feedback literacy, that is, the set of attitudes and abilities needed to make sense of and utilize feedback, is therefore considered a key concept. Rigorous investigations of feedback literacy require psychometrically sound measurement. To this end, the present paper reports on the development and initial validation (N = 221) of a self-report instrument. Grounded in the conceptual literature and building on previous scale validation efforts, an initial over-inclusive item pool is generated. Exploratory factor analysis and the Rasch measurement model yield adequate psychometric properties for an initial scale measuring two dimensions, feedback attitudes and feedback practices, with a total of 21 items. We further provide evidence for criterion-related validity. Findings are discussed in light of the emerging feedback literacy literature, and avenues for further improvement of the scale are reported.
For predicting and improving the quality of essays, text analytic metrics (surface, syntactic, morphological and semantic features) can be used to provide formative feedback to students in higher education. In this study, the goal was to identify, via a data-driven approach, a sufficient number of features that serve as a fair proxy of the scores given by human raters. Using an existing corpus and a text analysis tool for the Dutch language, a large number of features were extracted. Artificial neural networks trained with the Levenberg–Marquardt algorithm, combined with backward elimination, were used to reduce the number of features automatically. Irrelevant features were eliminated based on the inter-rater agreement between predicted and human scores, calculated using Cohen's kappa (κ). The number of features in this study was reduced from 457 to 28 and grouped into different categories. The results reported in this paper are an improvement over a similar previous study. Firstly, the inter-rater reliability between the predicted scores and human raters was increased by adjusting the corpus to counter overfitting on average scores. The resulting maximum value of κ showed substantial agreement, compared to moderate inter-rater reliability in the prior study. Secondly, instead of using a dedicated training and test set, the training and testing phases in the new experiments were performed using k-fold cross-validation on the corpus of texts. The approach presented in this research paper is a first step towards our ultimate goal of providing meaningful formative feedback to students for enhancing their writing skills and capabilities.
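The agreement criterion that drives the backward elimination in this study, Cohen's κ, compares observed agreement between two raters (here, the model and the human raters) against the agreement expected by chance. A minimal pure-Python sketch of the unweighted statistic is shown below; this is an illustration of the metric only, not the study's actual implementation, which used a dedicated text analysis tool and neural network pipeline.

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa for two equal-length lists of categorical scores."""
    assert len(rater_a) == len(rater_b) and rater_a, "need two equal-length, non-empty ratings"
    n = len(rater_a)
    # Observed agreement: fraction of items on which the raters give the same score.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: probability both raters pick the same label independently,
    # estimated from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[label] * freq_b[label]
                   for label in set(rater_a) | set(rater_b)) / (n * n)
    if expected == 1.0:  # degenerate case: chance agreement is already perfect
        return 1.0
    return (observed - expected) / (1 - expected)

# Perfect agreement yields kappa = 1.0; agreement at chance level yields 0.0.
print(cohen_kappa([1, 1, 2, 2], [1, 1, 2, 2]))  # -> 1.0
print(cohen_kappa([1, 2, 1, 2], [1, 1, 2, 2]))  # -> 0.0
```

In the elimination loop described above, a feature would be dropped whenever removing it does not lower κ between predicted and human scores; the conventional interpretation bands (0.41–0.60 moderate, 0.61–0.80 substantial) are what the abstract's "moderate" and "substantial" refer to.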
Past research has identified deficits in student teachers' knowledge regarding the integration of technology in teaching, leading to a need to investigate the efficacy of teacher training initiatives. There is a gap in the understanding of the developmental trajectories of these skills, as well as of whether other factors moderate them. Using the TPACK model, the current study presents an analysis (N = 526) of a teacher training program at a University of Education in Germany. Overall, results suggest trajectories in which some knowledge domains are positively associated with study progress while others are not. Specifically, technology-related knowledge mostly shows no association with study progress. However, this phenomenon is moderated by gender: women report lower skills in the technology-related dimensions and no associations with study progress. Our results illustrate the necessity of improving teacher training so that preservice teachers in general, and women in particular, feel better qualified to integrate technology into the classroom.
As cancer continues to be a significant global health challenge, the education of oncology professionals plays a crucial role in providing quality cancer care and achieving optimal patient outcomes. In order to meet the growing need for flexible, accessible, and effective training, this study examines the role of technology-enhanced learning (TEL) in the education of oncology medical professionals. Following the PRISMA guidelines, this systematic review included 34 articles published between 2012 and 2022 in the EBSCO and PubMed databases. Findings reveal a diverse range of digital tools being used in oncology training, despite a shortage of advanced educational technologies and limited functional improvement compared to traditional instruction. Since the training was primarily targeted at multiple professions in the medical expert role, with radiation oncologists being overrepresented, other oncology domains should be examined more thoroughly in the future, taking into account distinct professional abilities, e.g. communication, collaboration, and leadership skills with reference to the CanMEDS framework. Although the training programmes generally resulted in positive outcomes according to the Kirkpatrick evaluation model, experimental research designs were rather limited. Therefore, the substantial contribution and limitations of TEL in oncology education need to be clarified. Precise reporting of digital tools and instructional processes, as well as challenges encountered, is highly recommended to increase transparency and replicability. Research methodology in digital oncology education remains a major concern and should be addressed accordingly in future research.
Providing and receiving feedback requires a certain openness in individuals, which is referred to as feedback orientation. Although this openness is also required in peer-feedback processes, the personal factors that influence students' openness (i.e. peer-feedback orientation) are less researched. Inspired by feedback orientation studies in workplace settings, we investigated personal factors that influence students' peer-feedback orientation. As part of an exploratory sequential mixed methods research design, qualitative data on personal factors influencing students' peer-feedback orientation were collected. Semi-structured interviews with students, teachers and researchers (N = 13) revealed a broad range of personal factors influencing their peer-feedback orientation. Thematic analyses of the data showed that the most prominent factors were related to the perceived usefulness of receiving and providing peer feedback, the social bond between students, fairness, and skills. The importance of the existing feedback orientation dimensions (utility, accountability, social awareness and self-efficacy) of Linderbaum and Levy (Journal of Management 36:1372–1405, 2010) was confirmed in a higher education setting. Interestingly, different interpretations of the dimensions were found, which should inform the development of a peer-feedback orientation scale for higher education.