Exploring the use of AI code generation tools among computer programming students: design and validation of a CIElf-check assessment tool /
Janeiah Rhadel T. Cidro, Michaella P. Dado, Vanessa D. Jacinto, Viviene Mae U. Malicdem, and Naobi Czyrus I. Quines.--
- Manila: Technological University of the Philippines, 2025.
- xv, 299 pages : 29 cm.
Bachelor's thesis
College of Industrial Education.--
Includes bibliographical references and index.
The purpose of this study was to investigate how computer programming students use AI code generation tools. To measure students' reliance on AI code generation tools and their level of coding mastery, a CIElf-check evaluation tool was created. The study addressed concerns that reliance on AI code generation tools may impair critical thinking and problem-solving abilities, with an emphasis on understanding students' attitudes, behaviors, and the educational value of these tools. The study employed a mixed-methods Design-Based Research (DBR) approach. Guided by the Theory of Planned Behavior, it determined the influences (attitudes, subjective norms, and perceived behavioral control) that affect students' reliance on AI code generation tools. The design of the assessment tool was informed by Bloom's Taxonomy, and it was piloted with 68 students, gathering both quantitative and qualitative data for validation. The findings revealed that students valued AI code generation tools for productivity, influenced by peers and industry trends, but expressed concerns about overreliance, ethics, and limited understanding. The CIElf-check tool showed a positive impact on coding skills, debugging, and independent learning, with high ratings for engagement (4.19), learning (4.21), and self-directed learning (4.19), all with low variability (SD = 0.7). Key design principles highlighted self-reflection, active engagement, clear feedback, progress tracking, and balanced AI use to support independent coding.