000 02932nam a22003257a 4500
003 OSt
005 20250729160334.0
008 250729b |||||||| |||| 00| 0 eng d
040 _aTUPM
_beng
_cTUPM
_dTUPM
_erda
050 _aBTH QA 76.73
_bC53 2025
100 _aCidro, Janeiah Rhadel T.
_eauthor
245 _aExploring the use of AI code generation tools among computer programming students:
_bdesign and validation of a CIElf-check assessment tool/
_cJaneiah Rhadel T. Cidro, Michaella P. Dado, Vanessa D. Jacinto, Viviene Mae U. Malicdem, and Naobi Czyrus I. Quines.--
260 _aManila:
_bTechnological University of the Philippines,
_c2025.
300 _axv, 299 pages ;
_c29 cm.
336 _atext
_btxt
_2rdacontent
337 _aunmediated
_bn
_2rdamedia
338 _avolume
_bnc
_2rdacarrier
500 _aBachelor's thesis
502 _aCollege of Industrial Education.--
_bBachelor of Technical Vocational Teacher Education major in Computer Programming:
_cTechnological University of the Philippines,
_d2025.
504 _aIncludes bibliographical references and index.
520 _aThe purpose of this study was to investigate how computer programming students used AI code generation tools. To measure students' reliance on AI code generation tools and their level of coding mastery, a CIElf-check evaluation tool was created. With an emphasis on understanding students' attitudes, behaviors, and the educational value of AI code generation tools, the study addressed concerns about how reliance on such tools may impair critical thinking and problem-solving abilities. This study employed a mixed-methods Design-Based Research (DBR) approach. Guided by the Theory of Planned Behavior, it identified the influences (attitudes, subjective norms, and perceived behavioral control) that shape students' reliance on AI code generation tools. The assessment tool's design was informed by Bloom's Taxonomy, and it was piloted with 68 students, gathering both quantitative and qualitative data for validation. The findings revealed that students valued AI code generation tools for productivity and were influenced by peers and industry trends, but expressed concerns about overreliance, ethics, and limited understanding. The CIElf-check tool showed a positive impact on coding skills, debugging, and independent learning, with high ratings for engagement (4.19), learning (4.21), and self-directed learning (4.19), all with low variability (SD = 0.7). Key design principles highlighted self-reflection, active engagement, clear feedback, progress tracking, and balanced AI use to support independent coding.
650 _aReliance
650 _aAI code generation tools
650 _aAssessment tool
700 _aDado, Michaella P.
_eauthor
700 _aJacinto, Vanessa D.
_eauthor
700 _aMalicdem, Viviene Mae U.
_eauthor
700 _aQuines, Naobi Czyrus I.
_eauthor
942 _2lcc
_cBTH CIE
_n0
999 _c30552
_d30552