AN ANALYSIS OF LLM USE IN INTRODUCTORY PROGRAMMING EDUCATION AND DEVELOPMENT OF AI-RESISTANT ASSESSMENTS VIA CODE REVIEWS
As Artificial Intelligence (AI) and Large Language Models (LLMs) are increasingly adopted by students in higher education, it is becoming ever more important to understand how students use these tools in their studies. This is especially true in introductory programming courses, where generative artificial intelligence (GenAI) can typically produce complete solutions from problem statements, often outperforming the average introductory programming student. This poses a serious threat to both academic integrity and the cognitive well-being of students, as excessive dependence on LLMs may diminish skills such as problem-solving, decision-making, analytical thinking, critical thinking, and creativity. Developing the tools necessary to design AI-resistant examinations is a clear step toward preventing this over-reliance. This research investigates which traits and considerations are necessary for developing AI-resistant assessment materials for introductory programming courses. Additionally, professional code reviews are tasks that require a fundamental understanding of code to carry out; research was therefore conducted into whether code reviews could serve as AI-resistant material in introductory programming courses. Building on these research questions, a system that uses code reviews as the basis for AI-resistant assessments is proposed.
Degree Type
- Master of Science
Department
- Computer Science
Campus location
- Fort Wayne