© 2025 The authors. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).
OPEN ACCESS
The integration of intelligent systems in higher education institutions has shown considerable potential to improve operational efficiency and user satisfaction among students and administrative staff. Through automation, artificial intelligence, and educational analytics, these systems streamline administrative processes, reduce response times, and enhance the overall user experience. Despite growing interest in the field, comprehensive evidence regarding their actual impact remains scattered across diverse case studies. This systematic review analyzed 37 empirical studies selected from the Scopus and Web of Science databases, following PRISMA 2020 guidelines. The review focused on evaluating the effects of intelligent systems, including chatbots, AI-powered platforms, and automation tools, on administrative efficiency and user satisfaction. Studies were assessed using the CASP checklist to evaluate risk of bias. Data were extracted and analyzed using RStudio, combining narrative synthesis with descriptive and inferential techniques. Findings revealed that the use of intelligent systems consistently contributed to improved processing times, with reductions of up to 50% in some cases, and to high satisfaction levels among users, often exceeding 4.3 on 5-point Likert scales. Improvements were also observed in cost reduction, error minimization, service accessibility, and personalization of learning experiences. However, variability in satisfaction outcomes was influenced by contextual factors such as user expectations, previous exposure to technology, and system alignment with institutional goals. Most studies exhibited high methodological quality, although some lacked explicit discussion of researcher reflexivity or long-term implications. This review highlights the transformative role of intelligent systems in enhancing administrative and educational processes in higher education.
Institutions adopting these technologies should prioritize user-centered design, ethical data governance, and strategic alignment to ensure sustainable, effective implementation.
intelligent systems, operational efficiency, higher education, user satisfaction, educational technology, AI in administration, automation in universities
The integration of intelligent systems, particularly those powered by artificial intelligence (AI), has brought about transformative changes in higher education institutions [1]. These systems have significantly enhanced operational efficiency and improved satisfaction among both students and administrative staff. This article explores these impacts in detail, drawing on insights from prior research.
AI-powered systems have transformed resource management and operational efficiency in higher education. Through predictive analytics, these technologies have enabled institutions to forecast enrollment trends and allocate resources more effectively. The optimization of facilities and staff usage has led to reduced operational costs and increased institutional efficiency. Furthermore, AI-driven analytics have provided valuable insights into teaching effectiveness and student satisfaction, which in turn support continuous improvement efforts [2].
The automation of routine administrative tasks has also marked a significant advancement. Activities such as attendance tracking, grading, and report generation have been streamlined through AI tools, easing the administrative load on educators and allowing them to dedicate more time to strategic responsibilities. For instance, the implementation of chatbot prototypes in private higher education institutions has proven effective in engaging both students and faculty while automating key functions like course registration and responding to frequently asked questions [3].
AI has further strengthened decision-making processes within academic institutions. By identifying at-risk students early and enabling targeted interventions, AI technologies have contributed to improved retention rates. Real-time analytics also support managerial decision-making by enhancing the efficiency of resource allocation and increasing institutional responsiveness [4].
One of the most significant contributions of AI integration lies in the personalization of learning experiences. Adaptive platforms powered by AI can tailor content and support to individual student needs, identifying learning gaps and recommending targeted interventions to improve academic outcomes. These systems have been shown to enhance both student engagement and performance [2].
Student engagement and academic support have similarly benefited from AI. Tools such as chatbots deliver real-time academic updates and personalized advice, while virtual teaching assistants provide tailored feedback and contribute to improved academic outcomes. These applications have played a key role in enriching the educational experience [3, 5].
Accessibility and inclusivity have also been enhanced through AI. The development of multilingual chatbots has facilitated access for diverse student populations, while AI-powered learning tools have helped bridge the digital divide by ensuring that students from varied socio-economic backgrounds can benefit from personalized educational technologies [5, 6].
In terms of administrative satisfaction, AI has contributed to a marked improvement in institutional efficiency. The automation of tasks such as attendance and grading has freed educators to engage in higher-level planning, while tools like chatbots have simplified and accelerated administrative workflows, easing the burden on institutional staff [3].
Consequently, this study seeks to answer the following research question: What is the impact of intelligent systems on the operational efficiency of administrative processes and the perceived satisfaction of students and administrative staff in higher education institutions? To this end, the objective is to critically evaluate the available empirical evidence regarding the impact of intelligent systems, including automation tools and learning analytics, on both operational efficiency and perceived satisfaction within higher education contexts.
2.1 Type of research and reporting standards
A systematic review was conducted, analyzing empirical studies with qualitative, quantitative, or mixed-method approaches. The methodological process adhered strictly to the guidelines established by the PRISMA 2020 statement (Preferred Reporting Items for Systematic Reviews and Meta-Analyses).
2.2 PICO framework
The systematic review was structured using the PICO framework, identifying the population (P) as students and administrative staff in higher education institutions. The intervention (I) analyzed was the implementation of intelligent systems, specifically administrative automation and educational analytics. A comparator (C) was not explicitly defined, implicitly considering situations prior to the implementation or without the use of these systems. Finally, the outcomes (O) assessed included changes in operational efficiency of administrative processes, and perceptions or satisfaction levels of students and administrative personnel regarding these systems.
2.3 Inclusion and exclusion criteria
The systematic review included only empirical studies published in scientific journals or conference proceedings that specifically addressed the use of intelligent systems, such as administrative automation and educational analytics, in higher education institutions, assessing their impact on operational efficiency and/or perceived satisfaction of students and administrative staff. No restrictions were applied regarding publication year or language. Conversely, theoretical articles, narrative reviews, and studies lacking original empirical data were excluded, along with research conducted outside higher education contexts or studies exclusively focused on unrelated technologies. Additionally, incomplete documents, those for which full-text access was not available, or studies with insufficient methodological information were also excluded.
2.4 Search strategy
The literature search was conducted using the Scopus and Web of Science (WoS) databases. These databases were chosen due to their international recognition, multidisciplinary scope, and specific relevance to research areas such as higher education, educational technologies, artificial intelligence, and administrative sciences. Both databases provided consistent access to methodologically rigorous scientific studies.
A generic search formulation adapted and applied for both databases was as follows:
("intelligen*" OR "automat*" OR "artificial intelligence" OR "educational analytic*" OR "learning analytic*") AND ("efficienc*" OR "performan*" OR "administrative process*" OR "operational improv*") AND ("user satisfact*" OR "student percept*" OR "administrative staff satisfact*" OR "user experienc*") AND ("higher education" OR "universit*" OR "college*" OR "tertiary education").
This strategy was slightly modified according to the specific technical requirements of each database, employing truncation symbols (*) to maximize the retrieval of relevant studies.
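For transparency, the block logic of this strategy can be reproduced programmatically. The following Python sketch is illustrative only: the syntax actually submitted to each database differed slightly, and the `build_query` helper is ours. It assembles the generic string by joining synonyms with OR within each concept block and joining the blocks with AND:

```python
# Illustrative reconstruction of the generic Boolean search string.
# The exact database-specific syntax (e.g., field codes) differed slightly.
concept_blocks = [
    ["intelligen*", "automat*", "artificial intelligence",
     "educational analytic*", "learning analytic*"],
    ["efficienc*", "performan*", "administrative process*",
     "operational improv*"],
    ["user satisfact*", "student percept*",
     "administrative staff satisfact*", "user experienc*"],
    ["higher education", "universit*", "college*", "tertiary education"],
]

def build_query(blocks):
    """Join quoted synonyms with OR inside each block, then blocks with AND."""
    ors = ["(" + " OR ".join(f'"{term}"' for term in block) + ")"
           for block in blocks]
    return " AND ".join(ors)

query = build_query(concept_blocks)
print(query)
```

Generating the string this way makes the truncation symbols (*) and the four-block structure auditable, which eases adaptation to each database's technical requirements.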
2.5 Article selection process
The article selection was performed following the four stages recommended by the PRISMA 2020 guidelines. Initially, records were identified through database searches and managed using specialized reference management software. Next, screening was performed based on titles and abstracts, eliminating duplicates and clearly irrelevant studies. Subsequently, the eligibility of remaining studies was assessed through full-text reading to ensure strict compliance with the inclusion and exclusion criteria. Lastly, a definitive list of studies was compiled for detailed systematic analysis.
2.6 Risk of bias assessment of studies
The risk of bias of the selected studies was assessed using the Critical Appraisal Skills Programme (CASP) checklist. According to CASP criteria, methodological aspects evaluated included clarity of research objectives, appropriateness and coherence of study design, recruitment and participant selection procedures, rigor in data collection and analysis, ethical considerations, clarity in presentation and discussion of results, and the relevance and applicability of findings to specific contexts. The results of this evaluation were systematically recorded in tables organized in Excel spreadsheets to facilitate further analysis.
2.7 Data extraction and studied variable
Data extraction was conducted using a structured matrix in Excel, systematically capturing relevant information from included studies, such as authors, publication year, journal or conference, country, and language. Methodological details were also extracted, including study design, description of intelligent systems evaluated, indicators and methods employed to measure operational efficiency and user satisfaction, clearly reported quantitative and qualitative results, and identified study limitations. In cases where specific information was unavailable, the expression "Not declared" was explicitly recorded.
2.8 Data processing
After extraction and organization, data were exported from Excel and processed in RStudio, utilizing available statistical packages appropriate for qualitative and quantitative data analyses. Descriptive and inferential analyses were conducted, complemented by thematic or content analyses according to the specific nature of data obtained. The entire procedure was documented thoroughly through scripts to ensure complete transparency and reproducibility of the systematic review.
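As an illustration of the descriptive step, the sketch below shows in Python what the RStudio scripts computed for quantitative outcomes; the per-study satisfaction scores used here are hypothetical placeholders, not values extracted in the review:

```python
# Python analogue of the descriptive synthesis performed in RStudio.
# The Likert scores below are hypothetical placeholders, NOT extracted data.
from statistics import mean, stdev

# Hypothetical per-study mean satisfaction scores (5-point Likert scale)
satisfaction = [4.54, 4.3, 3.8, 4.1, 4.6]

summary = {
    "n_studies": len(satisfaction),
    "mean": round(mean(satisfaction), 2),
    "sd": round(stdev(satisfaction), 2),
    "min": min(satisfaction),
    "max": max(satisfaction),
}
print(summary)
```

Keeping such computations in versioned scripts, as the review did, is what makes the synthesis reproducible end to end.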
The study selection process followed the PRISMA flowchart guidelines. A total of 244 records were initially identified through database searches: Scopus (n = 164) and Web of Science (n = 80). Subsequently, 56 duplicate records were removed, resulting in 188 records that were screened.
During the title and abstract screening phase, 47 records were excluded for not being original research, 24 for having an irrelevant study design, and 59 for not meeting the inclusion criteria. Consequently, 58 reports were sought for full-text retrieval.
Of these, 9 reports could not be retrieved, leaving 49 reports assessed for full-text eligibility. Following this assessment, 7 studies were excluded as unrelated to the research topic and 5 for presenting incomplete data. Ultimately, 37 studies met all the inclusion criteria and were included in the final review (Figure 1).
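The flow counts above are internally consistent; the short sketch below simply re-derives each stage from the reported figures:

```python
# Arithmetic check of the PRISMA 2020 flow counts reported in the text.
identified = 164 + 80            # Scopus + Web of Science records
screened = identified - 56       # after duplicate removal
excluded_screening = 47 + 24 + 59  # not original, irrelevant design, criteria
sought = screened - excluded_screening   # reports sought for retrieval
assessed = sought - 9            # 9 reports could not be retrieved
included = assessed - 7 - 5      # full-text exclusions

assert identified == 244 and screened == 188
assert sought == 58 and assessed == 49
print(included)  # number of studies in the final review
```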
Figure 1. PRISMA flowchart
The results indicated a generally high methodological quality among the studies reviewed. Specifically, more than 85% of the articles fully met at least 8 out of the 10 questions of the CASP checklist, suggesting a low overall risk of bias.
The first three questions concerning the clarity of the research aim, the appropriateness of the qualitative design, and methodological congruence were answered affirmatively in all the included studies, reflecting a well-structured and clearly formulated foundation across the papers. Questions 4 and 5, which focused on the appropriateness of participant recruitment and the systematic collection of data, also yielded positive results. However, partial responses were recorded in approximately 12% of the studies, primarily due to insufficient justification for participant selection or limitations in the contextual description of data collection. Question 6, assessing whether the relationship between researchers and participants (reflexivity) was adequately considered, received the highest number of partial responses. This was identified as a common limitation in qualitative research of this nature. Only 60% of the studies addressed this aspect explicitly, while the remaining 40% provided little to no information. Regarding data analysis (question 7) and the clarity of findings (question 8), most studies achieved a full score, reinforcing the reliability and transparency of the reported outcomes.
The final two questions, which addressed the value of the research and its practical applicability (questions 9 and 10), were widely satisfied. Over 90% of the articles explicitly acknowledged the practical implications of their findings and their relevance to institutional or pedagogical improvement.
The studies included in the review were conducted across 22 countries, reflecting a wide international distribution and interest in the implementation of automated systems within educational and administrative settings. The highest representation came from China and Spain. The Americas were represented by studies carried out in the United States, while Australia was the sole representative of Oceania. Additional contributions came from Hong Kong, Lebanon, and Kazakhstan, underscoring the cross-continental relevance of the topic (Figure 2).
Figure 2. Geographic distribution of included studies
This geographic diversity was visualized using a single-color gradient map, where countries were shaded based on the frequency of studies conducted. Darker shades indicated a higher number of included studies, allowing for the identification of regional concentrations. The dominance of Asian countries in the dataset suggested a growing emphasis on technological innovation in education and public services within that region (Figure 2).
This review of recent studies that implemented automated systems in educational and administrative contexts revealed relevant findings regarding their effects on efficiency, user satisfaction, and the reduction of errors. A total of 8,264 participants were included in the studies analyzed, primarily involving students, but also including academics and administrative staff.
The automated systems applied were diverse, including intelligent chatbots based on ITIL, Generative AI tools such as ChatGPT, integrated BIM+GIS platforms with technologies like IoT and Big Data, AI-powered learning applications, and two-factor authentication systems with facial recognition.
Regarding processing time, improvements were consistently reported. In one study, the system's response time was reduced to 4.17 seconds, while another reported a decrease in model loading time by approximately 50%, indicating enhanced operational efficiency through automation.
In terms of user satisfaction, all studies that reported on this metric described positive outcomes. Approximate satisfaction levels ranged between 3.5 and 5.0 on a 5-point Likert scale, with some studies reporting mean satisfaction scores above 4.3. High satisfaction was particularly notable when systems were aligned with users' expectations and perceived usefulness.
Improvements were also observed in the reduction of operational errors, optimization of costs, and resource utilization. Some systems contributed to lowering operational costs, minimizing identity verification errors, and enhancing data security. Others supported greater educational inclusivity and efficient management of institutional services, demonstrating a favorable cost–benefit balance.
Finally, the key outcomes included substantial accuracy improvements (up to 96% in some systems), as well as increased process efficiency and positive effects on learning performance, particularly when artificial intelligence tools were integrated with user-centered design principles (Table 1).
Table 1. Summary of studies on the implementation and impact of automated systems in educational and administrative contexts
Author | Year | N | Population | Automated System | Processing Time Comparison | User Satisfaction Level | Impact on Errors, Costs, Resources | Key Outcomes
Ahriz et al. [7] | 2024 | 120 | Students | Smart chatbot system based on ITIL | Response time decreased to 4.17 seconds | Improved satisfaction | Reduced waiting times and operational costs; lowered workload on IT support | 96% response accuracy; increased efficiency and decision-making quality
Al-Emran et al. [8] | 2024 | 773 | Students | Generative AI tools (mainly ChatGPT) | NA | Positively correlated with service, system, and information quality | Enhanced educational inclusivity and learning outcomes | Significant positive effect on social sustainability and user satisfaction
Chang et al. [9] | 2022 | 478 | Students | AI-powered English learning application (Liulishuo) | NA | Significantly influenced by gratification-based factors; improved learning performance | NA | Positive influence of attitude on learning performance and continuous use intention
Chompookham et al. [10] | 2024 | 40 | Students, Academics | Two-Factor Authentication system with AI-based facial recognition | NA | High satisfaction (mean 4.54/5) | Improved identity verification; enhanced information security in academic systems | 83.54% recognition accuracy; system highly accepted by users
Chrysafiadi et al. [11] | 2023 | 140 | Students | Fuzzy-based Intelligent Tutoring System for programming | Fewer interactions needed to achieve learning goals | High usability and satisfaction; motivated learners | Adaptive support reduced learner dropout and improved engagement | Improved learning performance, recommendation accuracy, and user engagement
Derbas and Voss [12] | 2023 | 28 | Students, Administrative Staff | Automated Shading System with AI-based control and override | NA | Higher satisfaction with multi-objective strategy and override option | Reduced shade override actions; improved comfort and energy efficiency | Robust design recommendations for balancing user comfort and energy use
Djokic et al. [13] | 2024 | 285 | Students | Various AI services in education (SCAIES model) | NA | Positive perceptions; highest for personalized learning and sentiment analysis | Improved prediction of performance and learning customization | Validated reflective-formative model; student-centric AI adoption insights
El Khodr et al. [14] | 2023 | 52 | Students (Undergraduate and Postgraduate ICT) | ChatGPT | Improved speed in generating answers and information | Generally positive; UG students found it more enjoyable | Improved user flow and information hierarchy in tasks; time-saving | Enhanced learning outcomes and performance with ChatGPT vs. search engines
Fawaz et al. [15] | 2025 | 23 | Students (Health Sciences) | Generative AI (e.g., ChatGPT) | Enhanced efficiency in learning and assignment completion | High; supported autonomy, creativity, and clarity | Improved clarity, personalized learning, and time efficiency | Positive perceptions: improved writing, autonomous learning, innovative thinking
Fošner [16] | 2024 | 422 | Students (Various disciplines) | AI tools including ChatGPT and GPT-4 | NA | Mixed; 89% positive about AI in education | Concerns about overreliance, fairness, and academic integrity | Frequent use in assignments; students report increased efficiency and accessibility
Gonzalez-Garcia et al. [17] | 2025 | 86 | Students (Nursing) | ChatGPT | NA | 89.5% reported significant improvement in academic performance | Perceived usefulness linked with GPA and improved academic outcomes | Higher GPA associated with ChatGPT use; more impact observed in women
Gordillo [18] | 2019 | 94 | Students (Programming/Engineering) | Instructor-centered Automated Programming Assignments Grading System (IAPAGS) | Faster assessment vs. manual grading | Mixed; positive motivation, but 39% found feedback not useful | Reduced grading load; improved student engagement and submission quality | Performance improved after feedback; valuable for managing large classes
Herrera-Viedma et al. [19] | 2009 | 18 | Students | Computer-supported learning system for Fuzzy Information Retrieval Systems (FIRSs) | Improved comprehension and faster learning of complex query models | Positive, improved motivation and exam performance | Reduced misunderstandings and improved visualization of complex processes | Significant improvement in exam results and conceptual understanding of FIRSs
Hu et al. [20] | 2025 | 563 | Pre-service Teachers | Generative AI platforms (ChatGPT, ERNIE-3.5) | Not explicitly measured, but indicated increased efficiency in task completion | High behavioral intention to use GAI; influenced by effort expectancy, hedonic motivation, and habit | Perceived risk identified as a barrier; recommendations to improve data security and reduce risk | Behavioral intention to use GAI influenced by ease of use, social influence, and enjoyment
Kang and Hong [21] | 2025 | 20 | Students | HoMemeTown Dr. CareSam chatbot (based on ChatGPT 4.0) | Not explicitly reported, but efficiency improved over other chatbots | High (9.0/10 for positivity and support, 8.7 for empathy) | Improved mental health support access; cross-lingual capability reduced communication barriers | Empathetic, user-friendly tool with risk detection; outperformed Woebot and Happify in satisfaction
Kazanidis and Pellas [22] | 2024 | 66 | Students (Early Childhood Education and Computer Science) | Generative AI platforms for educational content (ChatGPT, Jasper, Animaker) | Improved speed and creativity in instructional content creation | Higher satisfaction in ECE group; high comfort level in CS group | Enhanced educational design quality, facilitated interdisciplinary collaboration | ECE students showed greater satisfaction; CS students’ higher technical proficiency
Khumalo et al. [23] | 2023 | 200 | Students (Education) | AutoScholar Advisor System (Auto-Ad) | Improved tracking and advising efficiency compared to traditional methods | High satisfaction through self-directed learning and goal setting | Optimized support, reduced staff workload, increased student agency | Positive impact on academic performance, especially cum laude trajectories
Kim et al. [24] | 2025 | 20 | Students | ChatGPT4-embedded writing system (Writing With GPT) | NA | High satisfaction; GenAI viewed as tutor, peer, and assistant | Improved writing quality, ideation, and motivation; reduced workload for instructors | Enhanced writing process, performance, and affective factors; benefits outweighed perceived risks
Li et al. [25] | 2022 | 135 | Students | Genetic Algorithm-based grouping method (IVMGA) | Faster collaboration efficiency in experimental groups | Higher satisfaction and collaboration quality | Optimized group formation reduced imbalance, improved academic outcomes | Experimental groups outperformed traditional and random groups in performance and perception
Li et al. [26] | 2025 | 167 | Students | General use of AI tools including ChatGPT | Perceived as improving task efficiency and information retrieval | Generally positive toward AI in learning; cautious about grading by AI | Improved learning support; concerns about dependency and privacy | Perceived benefits: personalized learning, efficiency, skill development; risks: reduced thinking, ethical and data concerns
Liu [27] | 2024 | 15 | Academics, Library Professionals | Research Intelligence Service System with AI and ANP-gray fuzzy algorithm | Improved information retrieval speed by 30% | Increased by 25% | Improved efficiency by 40%; reduced redundancy in processes | Enhanced service capacity of libraries, better research support
Mamun et al. [28] | 2024 | 1664 | Students, Academics, Guardians | Smart Reception AI-based receptionist system (Bangla language, ASR, TTS, QA, facial/speaker recognition) | Reduced wait times in reception processes | Over 75% satisfaction; 88% interested in real-life implementation | Optimized administrative tasks, minimized human errors | High usability, cultural adaptability, enhanced productivity
Marquès et al. [29] | 2022 | 99 | Students | Notification, Recommendation, and Monitoring System integrated into DSLab (Automated Assessment Tool) | Encouraged earlier and more frequent submissions | Positive effect on student organization and perception of helpfulness | Provided timely alerts and insights; supported assignment performance tracking | Improved engagement, time management, and task completion rates
Ozdere [30] | 2025 | 16 | Students (English Language and Literature or ELT) | AI feedback tools (ChatGPT and You.com) for academic writing | Faster feedback cycle, iterative improvement process | Perceived as useful, reliable, and motivating | Enhanced writing accuracy, self-correction, and skill acquisition | Significant improvement in writing scores; increased confidence in revision
Orok et al. [31] | 2024 | 252 | Students (Pharmacy) | Chat-based AI tools (ChatGPT®, Grammarly®, etc.) for educational support | Improved study and assignment efficiency | 88.5% positive perception; 85.3% believed it enhanced academic performance | Enhanced personalization, reduced teacher workload | Widespread AI adoption; recommendation to integrate AI education into curriculum
Owusu [32] | 2024 | 687 | Students | Knowledge Management Systems (KMS) including LMS (Sakai), Library System, Institutional Repository, FAQs, Email, Google Forms | Quicker response time, improved access to academic materials | Increased satisfaction: system use influences satisfaction, which impacts academic performance | Improved efficacy and production, facilitates decision-making, provided needed performance level | Technical and social KMS factors influence system use, which affects satisfaction and academic performance
Rafida et al. [33] | 2024 | 20 | Students (EFL) | Various AI tools (Grammarly, QuillBot, ChatGPT) | Improved efficiency in grammar correction, topic generation, and paraphrasing | 80–90% positive perceptions in Indonesia and Taiwan | Improved grammar, reduced plagiarism risk when used carefully, helps with paraphrasing | AI aids in grammar, rephrasing, topic generation; some concerns about dependency and authenticity
Sáiz-Manzanares et al. [34] | 2020 | 109 | Students (Nursing and Occupational Therapy) | Alexa-based Intelligent Personal Assistant (UBUVoiceAssistant) | Improved access to LMS resources, increased access to practical information | High, especially for teaching and COVID-19 support | Greater LMS functionality, increased efficiency, improved coordination | Increased platform engagement, higher satisfaction, effective support during COVID-19
Sánchez-Vera [35] | 2025 | 42 | Students (Early Childhood Education) | Subject-specialized chatbot | Improved concept clarification, not as effective for exam simulation | 91.4% clarity of doubts, 95.7% concept comprehension | Promoted study autonomy, low effect in organization/motivation | Moderate use correlated with best academic results; excessive use led to lower outcomes
Shahzad et al. [36] | 2024 | 401 | Students | AI tools (e.g., ChatGPT) and Social Media integrated in Smart Learning environments | Enhanced learning efficiency and peer feedback | High; positive perception of AI/social media's impact on academic performance and mental well-being | Improved self-directed learning, support tools, and mental health assistance | AI and social media positively impact academic and emotional outcomes; smart learning mediates both effects
Shorey et al. [37] | 2020 | 210 | Students (Nursing) and Academics (Clinical Facilitators) | Virtual Counseling Application Using Artificial Intelligence (VCAAI) | Improved preparation for tutorials and clinicals; faster review | Generally satisfied; praised convenience and accessibility | Reduced dependence on standardized patients; improved skill retention | Boosted confidence, improved technical communication skills, need for more realism in emotional expression
Song et al. [38] | 2025 | 80 | Students (Postgraduate) | Generative AI Chatbot (Dou Bao by ByteDance) | More efficient than peer discussion for creative problem solving | Higher perceived usefulness and ease of use than peer support | Enhanced performance, reduced workload, individualized support | Improved creative problem-solving, dialogue dynamics, and student satisfaction
Sun et al. [39] | 2024 | 82 | Students | ChatGPT-facilitated programming (CFP) | CFP increased debugging and feedback review behaviors | Significant improvement in perceived usefulness, ease of use, and intention to use | Improved programming performance, more frequent debugging, reduced frustration through personalized feedback | No statistically significant performance difference vs. control, but enhanced perceptions and strategic behavior changes
Uluskan [40] | 2023 | 373 | Students | SEM-ANN hybrid system for cafeteria service quality assessment | NA | Ranked satisfaction drivers: reliability, sufficiency, physical properties | Identified predictors of satisfaction, optimized service variables | Hybrid AI model improved assessment accuracy of university service quality; could guide improvements
Yu et al. [41] | 2024 | 328 | Students | ChatGPT | Improved efficiency through automation, ease of use, and task execution | Positively correlated with perceived usefulness and ease of use | Increased perceived ease, usefulness, and continued use intention | Satisfaction drives continued use; compatibility and efficiency are key predictors
Zhang et al. [42] | 2025 | 63 | Students (Translation and Interpreting Programs) | GenAI tools (ChatGPT, Bing Chat, Bard) | Faster translation, revision, and information retrieval | High satisfaction; increased confidence, motivation, and learning autonomy | Improved grammar, terminology use, and reduced repetition | Enhanced translation quality and efficiency; students favor integration in curriculum
Zheldibayeva [43] | 2025 | 93 | Students (Non-English Majors) | ChatGPT (Telegram bot) and Gemini (Google) for writing and listening | Improved performance over control in short term | Positive feedback on personalization and feedback quality | Helped with feedback, listening comprehension, writing clarity | Short-term gains in listening and writing; sustained use needed for lasting impact
The reviewed studies consistently demonstrated that the implementation of intelligent systems, particularly those powered by artificial intelligence (AI), significantly improves operational efficiency and satisfaction among students and administrative staff within higher education institutions. Notably, response and processing times improved markedly, with reductions of up to 50% in specific cases, accompanied by high user satisfaction levels generally ranging from 3.5 to 5.0 on 5-point Likert scales.
These findings underscore the importance of intelligent systems as essential instruments for optimizing administrative and educational workflows. They emphasize the capacity of these technologies to decrease operational times, enhance accuracy, and personalize the learning experience, ultimately fostering improved academic outcomes and increased satisfaction across diverse user groups.
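As a rough illustration of the descriptive synthesis behind figures such as these, the sketch below aggregates study-level satisfaction means and reported processing-time reductions. It is written in Python for illustration (the review itself used RStudio), and the study labels and values are invented placeholders, not the review's extracted data.

```python
# Illustrative sketch of a descriptive synthesis step: aggregating
# study-level satisfaction means (5-point Likert) and reported
# processing-time reductions. All figures below are hypothetical
# placeholders, not the review's extracted dataset.
from statistics import mean

studies = [
    # (study label, mean satisfaction on a 5-point Likert scale,
    #  reported processing-time reduction as a fraction)
    ("A", 4.3, 0.50),
    ("B", 4.5, 0.35),
    ("C", 3.8, 0.20),
    ("D", 4.6, 0.42),
]

satisfaction = [s for _, s, _ in studies]
reductions = [r for _, _, r in studies]

summary = {
    "n_studies": len(studies),
    "satisfaction_mean": round(mean(satisfaction), 2),
    "satisfaction_range": (min(satisfaction), max(satisfaction)),
    "max_time_reduction_pct": round(max(reductions) * 100),
}
print(summary)
```

A summary of this shape supports the narrative claims in the text (satisfaction ranges, maximum observed time reduction) without implying a formal meta-analysis.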
The observed results align with previous research highlighting perceived usefulness and ease of use as critical determinants of acceptance and satisfaction in the adoption of educational technologies. They also corroborate earlier studies recognizing automation and artificial intelligence as pivotal factors for increasing institutional efficiency and promoting personalized educational approaches. Administrative processes, ranging from enrollment procedures to academic performance evaluations, were optimized by these systems, resulting in cost reduction and improved service quality. According to Tarí and Dick [44], quality management in higher education was facilitated by technological tools that supported data collection and analysis, allowing for deeper insight into student needs. Likewise, the adoption of e-business approaches, as highlighted by Boys and Ford [45], allowed institutions to adapt to increasingly competitive markets, improving not only operational efficiency but also student and administrative staff satisfaction. This indicated that integrating intelligent systems was essential to achieving higher educational quality standards. These antecedents corroborate the findings of this research.
From the findings, it can be inferred that broader and more effective implementation of intelligent systems requires user-centered strategies coupled with robust data governance frameworks that comprehensively address the identified ethical issues and privacy risks. Automation of administrative processes within higher education institutions represented a significant shift toward enhanced operational efficiency. The implementation of intelligent systems enabled not only the optimization of routine tasks but also the effective collection and analysis of data influencing strategic decision-making. According to Duan et al. [46], a study on the application of business intelligence in a student participation tracking system illustrated how automation transformed data collection into valuable information, thereby improving student engagement management and, consequently, academic satisfaction. Furthermore, the capability of these systems to integrate data from multiple sources, including tracking devices and online activities, demonstrated the potential of automation not only to increase operational efficiency but also to enrich the student experience through more informed management practices [46].
Variability observed in user satisfaction outcomes might be attributable to differences in users' initial expectations, prior experiences, and adaptability to newly introduced technological systems. Likewise, incomplete or partially available data in some reviewed studies likely contributed to less conclusive or unexpected outcomes. Student satisfaction in higher education institutions was significantly influenced by the implementation of intelligent systems designed to optimize operational processes. These systems not only facilitated the integration of various information platforms but also enhanced access to relevant data, enabling students to acquire crucial information supporting their learning process. According to one study, the effective integration of enterprise application integration systems reduced data redundancy and inconsistency, subsequently enhancing the student experience by providing a unified view of their academic and administrative journey [47]. Furthermore, quality management within these environments emphasized identifying and meeting students' needs as primary stakeholders, underscoring the importance of their feedback for the continuous improvement of educational services [44].
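The "unified view" idea behind enterprise application integration [47] can be sketched as a simple record merge. The system names, fields, and merge rule below are invented for illustration; real EAI platforms are far more elaborate.

```python
# Sketch of a "unified view" merge across two hypothetical
# institutional systems (a registry and an LMS), keeping the first
# non-empty value per field. This is an illustration of the EAI goal
# of reducing duplicated and inconsistent student data [47], not a
# description of any specific product.
registry = {"u123": {"name": "Ada Lovelace", "email": ""}}
lms = {"u123": {"email": "ada@example.edu", "last_login": "2025-01-10"}}

def unified_view(student_id, *sources):
    merged = {}
    for source in sources:
        for field, value in source.get(student_id, {}).items():
            # Keep the first non-empty value seen for each field,
            # avoiding the redundant copies EAI aims to eliminate.
            if value and not merged.get(field):
                merged[field] = value
    return merged

print(unified_view("u123", registry, lms))
```

The merged record gives students and staff one consistent view of data that would otherwise be scattered across systems.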
Theoretically, these findings reinforce technology adoption models, particularly those emphasizing perceived usefulness and adaptability in educational settings. Practically, higher education institutions can greatly benefit from integrating these insights into their strategic planning processes, optimizing administrative functions, and developing increasingly personalized educational methodologies. Studies [48, 49] described how personalized learning experiences emerged as a vital element in the transformation of higher education, particularly with the implementation of intelligent systems. These systems facilitated a more adaptive, student-centered approach, enabling institutions to better identify individual students’ needs and learning styles. Personalization was observed not only to promote greater student engagement but also to optimize the operational efficiency of educational institutions by reducing dropout rates and enhancing academic outcomes. Additionally, curriculum adaptation and content delivery, driven by data analysis, aligned with strategies outlined in the AI8-Point model, which emphasizes the importance of data collection and outcome monitoring to enhance student experiences [49]. Moreover, the application of artificial intelligence in educational analytics enabled the forecasting of trends and behaviors, fundamentally transforming resource management and administrative processes [48].
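A minimal sketch of the kind of educational-analytics rule described in studies [48, 49] is a weighted risk score over engagement features used to flag students who may need support. The feature names, weights, and the 0.3 threshold below are all hypothetical, chosen only to make the idea concrete.

```python
# Toy educational-analytics example: flagging hypothetical students
# for follow-up based on engagement features. Weights and the 0.3
# threshold are invented for illustration; deployed systems would use
# trained models and institution-specific data.
students = [
    {"id": "s01", "attendance": 0.95, "lms_logins_per_week": 6, "avg_grade": 0.82},
    {"id": "s02", "attendance": 0.55, "lms_logins_per_week": 1, "avg_grade": 0.48},
]

def risk_score(s):
    # Higher score = higher estimated dropout risk (all weights hypothetical).
    return (
        0.4 * (1 - s["attendance"])
        + 0.3 * max(0, 1 - s["lms_logins_per_week"] / 5)
        + 0.3 * (1 - s["avg_grade"])
    )

flagged = [s["id"] for s in students if risk_score(s) > 0.3]
print(flagged)
```

Even this crude rule illustrates how data-driven monitoring can direct institutional resources toward at-risk students before they disengage.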
Despite these promising results, the present study faces several methodological limitations. Primarily, there was insufficient consideration given to researcher-participant reflexivity within qualitative research components, alongside limited explicit discussion of potential biases within some analyzed studies. Furthermore, potential dependence on AI technologies, as well as ethical considerations and data privacy concerns, emerged as crucial areas requiring additional attention and investigation.
Finally, several questions remain unresolved and require further exploration, including the long-term sustainability of intelligent systems, the effectiveness and applicability of diverse implementation models across varying cultural contexts, and strategies for effectively managing ethical dilemmas and privacy concerns inherent in the extensive deployment of AI-based technologies.
The findings demonstrated significant benefits associated with the integration of intelligent systems in higher education, particularly in enhancing operational efficiency, improving user satisfaction, and supporting educational quality. However, to ensure the long-term sustainability and responsible deployment of these technologies, it remains essential to address the identified methodological limitations and ethical considerations.
AI |
Artificial Intelligence |
BIM |
Building Information Modeling |
GIS |
Geographic Information System |
IoT |
Internet of Things |
ITIL |
Information Technology Infrastructure Library |
N |
Sample Size (number of participants) |
ChatGPT |
Chat Generative Pre-Trained Transformer |
2FA |
Two-Factor Authentication |
Liulishuo |
AI-Powered English Learning Application |
NA |
Not Available (data not reported or not applicable) |
[1] Galdames, I.S. (2024). Integration of artificial intelligence in higher education: Relevance for inclusion and learning. SciComm Report, 4(1): 1-12. https://doi.org/10.32457/scr.v4i1.2487
[2] Abiola, O.A., Ajuwon, O., Shukurat, E., Chiekezie, N. (2024). Integrating AI and technology in educational administration: Improving efficiency and educational quality. Open Access Research Journal of Science and Technology, 11(2): 116-127. https://doi.org/10.53022/oarjst.2024.11.2.0102
[3] Bobro, N. (2024). Use of chatbots based on artificial intelligence to optimize the educational process in a higher education institution. Grail of Science, (43): 308-312. https://doi.org/10.36074/grail-of-science.06.09.2024.040
[4] Indriastuti, F., Sahib, A., Nuraini, R. (2023). Development of an adaptive higher education management model with artificial intelligence. al-fikrah: Jurnal Manajemen Pendidikan, 11(2): 366-379. https://doi.org/10.31958/jaf.v11i2.12121
[5] Mehnen, L., Pohn, B. (2024). Supporting Academic Teaching with Integrating AI in Learning Management Systems: Introducing a Toolchain for Students and Lecturers. In 2024 International Conference on Software, Telecommunications and Computer Networks (SoftCOM), Split, Croatia, pp. 1-6. https://doi.org/10.23919/SoftCOM62040.2024.10722016
[6] Hossain, R., Sohag, H.J., Hasan, F., Ahmed, S., Amin, A., Islam, M.M. (2024). Prospective artificial intelligence (AI) applications in the university education level: enhancing learning, teaching and administration through a PRISMA base review systematic review. Pakistan Journal of Life and Social Sciences, 22(2): 9173-9191. https://doi.org/10.57239/pjlss-2024-22.2.00694
[7] Ahriz, S., Gharbaoui, H., Benmoussa, N., Chahid, A., Mansouri, K. (2024). Enhancing information technology governance in universities: A smart chatbot system based on information technology infrastructure library. Engineering, Technology & Applied Science Research, 14(6): 17876-17882.
[8] Al-Emran, M., Abu-Hijleh, B., Alsewari, A.A. (2025). Examining the impact of Generative AI on social sustainability by integrating the information system success model and technology-environmental, economic, and social sustainability theory. Education and Information Technologies, 30(7): 9405-9426. https://doi.org/10.1007/s10639-024-13201-0
[9] Chang, Y., Lee, S., Wong, S.F., Jeong, S.P. (2022). AI-powered learning application use and gratification: An integrative model. Information Technology & People, 35(7): 2115-2139. https://doi.org/10.1108/ITP-09-2020-0632
[10] Chompookham, T., Nuankaew, W.S., Nuankaew, P. (2024). Two-factor authentication application using artificial intelligence to support academic information systems. International Journal of Engineering Trends and Technology, 72(12): 14-29. https://doi.org/10.14445/22315381/IJETT-V72I12P102
[11] Chrysafiadi, K., Virvou, M., Tsihrintzis, G.A., Hatzilygeroudis, I. (2023). Evaluating the user’s experience, adaptivity and learning outcomes of a fuzzy-based intelligent tutoring system for computer programming for academic students in Greece. Education and Information Technologies, 28(6): 6453-6483. https://doi.org/10.1007/s10639-022-11444-3
[12] Derbas, G., Voss, K. (2023). Assessment of automated shading systems’ utilization and environmental performance: An experimental study. Building and Environment, 244, 110805. https://doi.org/10.1016/j.buildenv.2023.110805
[13] Djokic, I., Milicevic, N., Djokic, N., Maleic, B., Kalas, B. (2024). Students' perceptions of the use of artificial intelligence in educational services. Amfiteatru Economic, 26(65): 294-310. https://doi.org/10.24818/EA/2024/65/294
[14] El Khodr, M., Gide, E., Wu, M., Darwish, O. (2023). ICT students’ perceptions towards ChatGPT: An experimental reflective lab analysis. https://doi.org/10.3934/steme.2023006
[15] Fawaz, M., El‐Malti, W., Alreshidi, S.M., Kavuran, E. (2025). Exploring health sciences students' perspectives on using generative artificial intelligence in higher education: A qualitative study. Nursing & Health Sciences, 27(1): e70030. https://doi.org/10.1111/nhs.70030
[16] Fošner, A. (2024). University students’ attitudes and perceptions towards AI tools: Implications for sustainable educational practices. Sustainability, 16(19): 8668. https://doi.org/10.3390/su16198668
[17] Gonzalez-Garcia, A., Bermejo-Martinez, D., Lopez-Alonso, A.I., Trevisson-Redondo, B., Martín-Vázquez, C., Perez-Gonzalez, S. (2025). Impact of ChatGPT usage on nursing students education: A cross-sectional study. Heliyon, 11(1): e41559. https://doi.org/10.1016/j.heliyon.2024.e41559
[18] Gordillo, A. (2019). Effect of an instructor-centered tool for automatic assessment of programming assignments on students’ perceptions and performance. Sustainability, 11(20): 5568. https://doi.org/10.3390/su11205568
[19] Herrera-Viedma, E., López-Herrera, A.G., Alonso, S., Moreno, J.M., Cabrerizo, F.J., Porcel, C. (2009). A computer-supported learning system to help teachers to teach fuzzy information retrieval systems. Information Retrieval, 12: 179-200. https://doi.org/10.1007/s10791-008-9087-3
[20] Hu, L., Wang, H., Xin, Y. (2025). Factors influencing Chinese pre-service teachers’ adoption of generative AI in teaching: An empirical study based on UTAUT2 and PLS-SEM. Education and Information Technologies, pp. 1-23. https://doi.org/10.1007/s10639-025-13353-7
[21] Kang, B., Hong, M. (2025). Development and evaluation of a mental health chatbot using ChatGPT 4.0: Mixed methods user experience study with Korean users. JMIR Medical Informatics, 13: e63538. https://doi.org/10.2196/63538
[22] Kazanidis, I., Pellas, N. (2024). Harnessing generative artificial intelligence for digital literacy innovation: A comparative study between early childhood education and computer science undergraduates. AI, 5(3): 1427-1445. https://doi.org/10.3390/ai5030068
[23] Khumalo, S., Rawatlal, R., Nnadozie, V., Mahadew, A., Mpungose, C.B., Mazibuko, P. (2023). Technology-mediated advising for student success: Exploring self-mediated academic support for undergraduate students using AutoScholar Advisor System. Perspectives in Education, 41(2): 211-232. https://doi.org/10.38140/pie.v41i2.7088
[24] Kim, J., Yu, S., Detrick, R., Li, N. (2025). Exploring students’ perspectives on generative AI-assisted academic writing. Education and Information Technologies, 30(1): 1265-1300. https://doi.org/10.1007/s10639-024-12878-7
[25] Li, X., Ouyang, F., Chen, W. (2022). Examining the effect of a genetic algorithm-enabled grouping method on collaborative performances, processes, and perceptions. Journal of Computing in Higher Education, 34(3): 790-819. https://doi.org/10.1007/s12528-022-09321-6
[26] Li, Y., Castulo, N.J., Xu, X. (2025). Embracing or rejecting AI? A mixed-method study on undergraduate students’ perceptions of artificial intelligence at a private university in China. Frontiers in Education, 10: 1505856. https://doi.org/10.3389/feduc.2025.1505856
[27] Liu, M. (2024). Construction of research intelligence service system for college libraries in the era of artificial intelligence. Applied Mathematics and Nonlinear Sciences, 9(1): 1-15. https://doi.org/10.2478/amns-2024-0758
[28] Mamun, K.A., Nabid, R.A., Pranto, S.I., Lamim, S.M., Rahman, M.M., Mahammed, N., Khan, R.R. (2024). Smart reception: An artificial intelligence driven bangla language based receptionist system employing speech, speaker, and face recognition for automating reception services. Engineering Applications of Artificial Intelligence, 136: 108923. https://doi.org/10.1016/j.engappai.2024.108923
[29] Marquès, J.M., Calvet, L., Arguedas, M., Daradoumis, T., Mor, E. (2022). Using a notification, recommendation and monitoring system to improve interaction in an automated assessment tool: An analysis of students’ perceptions. International Journal of Human–Computer Interaction, 38(4): 351-370. https://doi.org/10.1080/10447318.2021.1938400
[30] Ozdere, M. (2025). AI in academic writing: Assessing the effectiveness, grading consistency, and student perspectives of ChatGPT and You.com for EFL students. International Journal of Technology in Education, 8(1): 123-154. https://doi.org/10.46328/ijte.1001
[31] Orok, E., Okaramee, C., Egboro, B., Egbochukwu, E., Bello, K., Etukudo, S., Akawa, O. (2024). Pharmacy students’ perception and knowledge of chat-based artificial intelligence tools at a Nigerian University. BMC Medical Education, 24(1): 1237. https://doi.org/10.1186/s12909-024-06255-8
[32] Owusu, A. (2024). Knowledge management systems implementation effects on university students’ academic performance: The socio-technical theory perspective. Education and Information Technologies, 29(4): 4417-4442. https://doi.org/10.1007/s10639-023-11999-9
[33] Rafida, T., Suwandi, S., Ananda, R. (2024). EFL students’ perception in Indonesia and Taiwan on using artificial intelligence to enhance writing skills. Jurnal Ilmiah Peuradeun, 12(3): 987-1016. https://doi.org/10.26811/peuradeun.v12i3.1520
[34] Sáiz-Manzanares, M.C., Marticorena-Sánchez, R., Ochoa-Orihuel, J. (2020). Effectiveness of using voice assistants in learning: A study at the time of COVID-19. International Journal of Environmental Research and Public Health, 17(15): 5618. https://doi.org/10.3390/ijerph17155618
[35] Sánchez-Vera, F. (2024). Subject-specialized chatbot in higher education as a tutor for autonomous exam preparation: Analysis of the impact on academic performance and students’ perception of its usefulness. Education Sciences, 15(1): 26. https://doi.org/10.3390/educsci15010026
[36] Shahzad, M.F., Xu, S., Lim, W.M., Yang, X., Khan, Q.R. (2024). Artificial intelligence and social media on academic performance and mental well-being: Student perceptions of positive impact in the age of smart learning. Heliyon, 10(8): e29523. https://doi.org/10.1016/j.heliyon.2024.e29523
[37] Shorey, S., Ang, E., Ng, E.D., Yap, J., Lau, L.S.T., Chui, C.K. (2020). Communication skills training using virtual reality: A descriptive qualitative study. Nurse Education Today, 94: 104592. https://doi.org/10.1016/j.nedt.2020.104592
[38] Song, Y., Huang, L., Zheng, L., Fan, M., Liu, Z. (2025). Interactions with generative AI chatbots: Unveiling dialogic dynamics, students’ perceptions, and practical competencies in creative problem-solving. International Journal of Educational Technology in Higher Education, 22(1): 12. https://doi.org/10.1186/s41239-025-00508-2
[39] Sun, D., Boudouaia, A., Zhu, C., Li, Y. (2024). Would ChatGPT-facilitated programming mode impact college students’ programming behaviors, performances, and perceptions? An empirical study. International Journal of Educational Technology in Higher Education, 21(1): 14. https://doi.org/10.1186/s41239-024-00446-5
[40] Uluskan, M. (2023). Structural equation modelling—Artificial neural network based hybrid approach for assessing quality of university cafeteria services. The TQM Journal, 35(4): 1048-1071. https://doi.org/10.1108/TQM-01-2022-0001
[41] Yu, C., Yan, J., Cai, N. (2024). ChatGPT in higher education: Factors influencing ChatGPT user satisfaction and continued use intention. Frontiers in Education, 9: 1354929. https://doi.org/10.3389/feduc.2024.1354929
[42] Zhang, W., Li, A.W., Wu, C. (2025). University students’ perceptions of using generative AI in translation practices. Instructional Science, pp. 1-23. https://doi.org/10.1007/s11251-025-09705-y
[43] Zheldibayeva, R. (2025). GenAI as a learning buddy for non-English majors: Effects on listening and writing performance. Educational Process: International Journal. https://doi.org/10.22521/edupij.2025.14.51
[44] Tarí, J.J., Dick, G. (2016). Trends in quality management research in higher education institutions. Journal of Service Theory and Practice, 26(3). https://doi.org/10.1108/JSTP-10-2014-0230
[45] Boys, J., Ford, P. (2007). The e-Revolution and Post-Compulsory Education: Using e-Business Models to Deliver Quality Education. Routledge.
[46] Duan, Y., Cao, G., Ong, V.K., Woolley, M. (2013). Big data in higher education: An action research on managing student engagement with business intelligence. http://hdl.handle.net/10547/308798.
[47] Aserey, N., Alshawi, S.N. (2013). A conceptual model of enterprise application integration in higher education institutions. In Proceedings of the European, Mediterranean & Middle Eastern Conference on Information Systems (EMCIS), Windsor, United Kingdom.
[48] Bharadiya, J.P., Bharadiya, J.P. (2023). Machine learning and AI in business intelligence: Trends and opportunities. International Journal of Computer (IJC), 48(1): 123-134.
[49] Barnes, E., Hutson, J. (2024). Strategic integration of AI in higher education and industry: The AI8-point model. Advances in Social Sciences and Management, 2(6): 39-52. https://doi.org/10.63002/assm.26.520