Too Easy to Resist? How Perceived Ease, Usefulness, and Ethics Drive ChatGPT Adoption in Higher Education
Lana Jurcec, Maja Kolega, Irena Miljkovic Krecar
Faculty of Teacher Education, University of Zagreb
VERN' University, Zagreb, Croatia
DOI: https://doi.org/10.35609/gcbssproceeding.2025.1(28)
Academic integrity is a cornerstone of higher education, built on values such as honesty, fairness, respect, responsibility, and courage (Fishman, 2014). In today's academic setting, questions of authorship and originality have become more complicated as AI plays a growing role in content creation. Students may be tempted to use AI tools to complete essays or assignments, raising concerns about the authenticity of their work.

The Technology Acceptance Model (TAM), introduced by Davis (1989), is a widely used framework for examining how users come to accept new technologies. TAM is rooted in the Theory of Reasoned Action (TRA; Fishbein & Ajzen, 1975). Its primary goal is to identify the factors that influence technology acceptance and to provide a theoretical basis for successful technology implementation. In practical terms, TAM aims to predict user behavior and propose measures that support adoption before a technology is introduced (Marikyan & Papagiannidis, 2023). According to TAM, two key beliefs determine whether a new technology will be accepted: perceived usefulness, the belief that using a particular system will enhance job performance, and perceived ease of use, the belief that using the system will require minimal effort (Davis, 1989). TAM has been validated in numerous studies in the educational context (Abdullah & Ward, 2016; Dahri et al., 2024; Granić & Marangunić, 2019; Obenza et al., 2024; Rahman et al., 2023; Shaengchart, 2023; Scherer et al., 2019).

While previous research has shown that perceived usefulness and perceived ease of use are critical in shaping students' attitudes toward using AI tools such as ChatGPT for learning, there is still a gap in research exploring how students perceive ChatGPT as a potential tool for academic dishonesty.
JEL Codes: I23, O33, D83
Keywords: Academic dishonesty, AI in education, ChatGPT, Perceived Risk and Benefit Theory, TAM.