      Enhancing Adaptive Learning with Generative AI for Tailored Educational Support for Students with Disabilities


            Abstract

This paper explores the integration of generative artificial intelligence (AI) into adaptive learning systems to create customized learning support aids for students with disabilities. Traditional educational aids commonly fail to meet the diverse needs of these learners, whereas generative AI offers innovative solutions that adapt content in real time and personalize the learning experience. The paper introduces ALGA-Ed, a novel adaptive learning system utilizing generative AI that includes (i) a user profile module that captures cognitive, sensory, and behavioral profiles; (ii) generative AI models that create personalized multimodal content in real time; (iii) an adaptive feedback mechanism using reinforcement learning to dynamically adjust content delivery based on real-time engagement metrics; and (iv) a real-time monitoring system that tracks progress and adapts learning pathways accordingly. The framework leverages heterogeneous datasets, including real and synthetic data, to address diverse disability profiles effectively. Pilot studies demonstrate the effectiveness of the framework in improving participation, retention, and learning outcomes for students with disabilities. This study enhances adaptive learning by promoting inclusion through AI-driven tailoring and provides a basis for further advances in AI-powered education for children with impairments. The source code for this research will be publicly available at https://github.com/aasimwadood/ALGA-Ed.


            INTRODUCTION

In a progressively diversified educational environment, the demand for customized learning experiences has become essential, especially for students with disabilities. Conventional educational resources frequently fail to address the distinct requirements of these learners, resulting in inequalities in access and participation (Moore, 2007). As technology advances, generative artificial intelligence (AI) is emerging as a means of meeting these demands (Luckin et al., 2016). With generative AI techniques, instructors can develop adaptive learning systems that provide individualized educational materials, thereby facilitating greater access and inclusion (Baker and Siemens, 2014; Halkiopoulos and Gkintoni, 2024).

The goal of AI in education is a more dynamic and immersive learning environment rather than the creation of conventional teaching aids. Because it can simulate human-like creativity and adaptability, generative AI can produce educational materials that match diverse learning abilities and preferences. Students with impairments benefit greatly from this technology, which offers solutions such as contextual visual aids, real-time text-to-speech translation, and interactive multimedia (Heffernan and Heffernan, 2014; Auvinen, Hakulinen and Malmi, 2015). Implemented well, it creates an environment in which all students have the tools they need to succeed and helps close the achievement gap for those who have struggled due to inequalities in educational opportunities.

Generative AI encompasses a wide range of techniques, from natural language processing to machine learning algorithms, that can analyze each learner's style and preferences in real time (Heffernan and Heffernan, 2014; Auvinen, Hakulinen and Malmi, 2015). This enables the production of content that speaks to pupils, improving understanding and retention of information (Brusilovsky and Millán, 2007; Willis, 2024). Embedding generative AI within adaptive learning systems enables a teacher to create learning objects that respond to specific learning needs while contributing substantially to a fair educational setting (Carvalho et al., 2022; Ozyurt, Ozyurt and Mishra, 2023).

The transformative power of AI in education lies in its ability to interact with students progressively, which strengthens independent study and thinking. This study examines the use of generative AI in adaptive learning, applied primarily to producing learner-adapted educational content for pupils with disabilities. The paper evaluates the effectiveness of these approaches through case studies and user feedback on learning outcomes. Our results show that integrating generative AI into educational practice not only improves the learning process for students with disabilities but also opens new pathways toward a more inclusive and supportive education framework.

Generative AI paired with adaptive learning environments (ALEs) offers transformational potential for developing tailored educational aids for students with impairments. The algorithms used in an ALE measure student learning dynamically so that the learning material remains up to date and accessible to all learners (Baker and Siemens, 2014; Luckin et al., 2016). Such customization is essential for students with impairments, who often do not respond to standard means of learning. Generative AI supports the generation of materials adjusted to individual learning styles and preferences, which is crucial for pupils who cannot work with standardized educational resources (Heffernan and Heffernan, 2014).

Multimodal AI applications further optimize this process by customizing learning experiences based on real-time data about student interactions and performance (Carvalho et al., 2022; Mittal et al., 2024). This is particularly helpful for students with disabilities, as customized educational aids can be developed to address their specific needs. Accessibility tags within the ALE framework modify learning objects to accommodate specific accessibility needs, thereby fostering inclusivity (Brusilovsky and Millán, 2007).

Adaptive strategies are designed to complement assistive technologies, which provide the devices and software that enable students with disabilities to perform complex tasks (Moore, 2007). Together, they form a strong framework for individualized education that exploits the benefits of both adaptive learning and generative AI.

            Students with disabilities are confronted with ongoing educational disparities because of the shortcomings of traditional adaptive learning systems, which are usually not deep enough to effectively respond to varying disability profiles. Current frameworks fail to support the complex needs of students with sensory, cognitive, and attention-based impairments. This study explores how generative AI can bridge these gaps through personalized, accessible, and responsive learning supports. In particular, this research investigates the extent to which reinforcement learning (RL), combined with real-time engagement measures, can dynamically adapt content to improve educational assistance to students with disabilities.

The research targets students with attention-deficit/hyperactivity disorder (ADHD), dyslexia, and sensory impairments (visual and auditory disabilities). ADHD affects working memory, attention span, and impulse control, which impairs retention and task completion; adaptive learning systems therefore have to break content into bite-sized, concise segments to engage these learners. Dyslexia is a learning disorder affecting reading comprehension, spelling, and writing; AI-based text simplification and multimodal content (audio and visuals) help overcome such issues. Sensory disabilities (vision and hearing impairment) require support features such as text-to-speech, high-contrast graphics, and captioned video material for increased accessibility.

            The key contributions of this paper are as follows:

            1. The paper proposes a multicomponent framework that integrates user profiles, generative AI models, adaptive feedback mechanisms, and real-time progress monitoring to provide customized learning experiences for students with varying disabilities.

2. The framework uses real-time engagement metrics in combination with RL algorithms to adapt difficulty and material delivery dynamically, sustaining high levels of learner engagement and understanding.

            3. Data collection and preparation methodologies for heterogeneous datasets, such as behavioral and interaction data of students with disabilities, are presented. This is aimed at ensuring the system is applicable across various disability profiles.

4. The proposed framework is rigorously tested in pilot studies, which demonstrate its efficacy in enhancing engagement, retention, and overall learning outcomes compared to conventional techniques.

            5. This research advances the state-of-the-art in educational technology for impaired students by combining generative AI with adaptive learning, thus providing a framework for future breakthroughs that can promote inclusivity and fairness in education.

            LITERATURE REVIEW

The use of technology in education has revolutionized traditional ways of learning, particularly for the inclusive education of children with disabilities. Generative AI is among the increasingly applied methods for managing diversity in learning. This section examines the existing literature on adaptive learning systems, generative AI, and their applications in creating an inclusive learning environment.

Adaptive learning systems modify teaching material to suit a learner's needs so that students with impairments have appropriate learning experiences and can overcome hindrances. Such technology uses student information dynamically to tailor learning resources and improve accessibility and interaction (Brusilovsky and Millán, 2007). Studies have shown that adaptive systems can greatly help children with disabilities achieve educational success because they provide adaptable learning pathways responsive to learners' demands (Luckin et al., 2016; Abbes, Bennani and Maalel, 2024). Adaptive systems can also provide instantaneous feedback to students, making intangible ideas easier to grasp and learning objectives more attainable (Moore, 2007).

Although promising, traditional adaptive systems lack the depth to support the entire range of disabilities, especially sensory or cognitive impairments. For example, students with severe visual or hearing impairments need systems that incorporate assistive technologies such as screen readers, speech-to-text, or tactile feedback (Rodríguez Torres, Comas Rodríguez and Tovar Briñez, 2023; Kenneth et al., 2024). Although the incorporation of multimodal feedback has enhanced the usability of such systems, challenges remain in ensuring that these solutions are universal, scalable, and culturally sensitive (Rajagopal et al., 2023).

Recent advances in machine learning algorithms have further empowered adaptive learning systems to analyze large amounts of data and identify patterns that add value to learning experiences. These systems can detect learning barriers and proactively provide interventions to prevent academic failure (Evmenova, Borup and Shin, 2024). However, creating universally adaptable systems capable of serving the varied requirements of all children remains an open problem that requires collaboration between educators, technologists, and policymakers.

            Generative AI is now one of the most powerful tools for creating dynamic, personalized learning experiences. Technologies such as large language models enable adaptive content generation, real-time feedback, and personalized learning pathways, which are essential for students with disabilities (Evmenova, Borup and Shin, 2024; Ferreira, 2024). Such capabilities make generative AI especially well suited for developing intelligent tutoring systems that adapt to individual learning styles and preferences (Heffernan and Heffernan, 2014).

For example, generative AI can make texts simpler to grasp, present material in various media such as audio or Braille, and build interactive conversation systems for improved engagement and comprehension (Ruiz-Rojas et al., 2023). AI-powered conversational agents can also support one-to-one conversations in which replies are supplied in real time, making the learning process more customized (Hopcan et al., 2022; Gligorea et al., 2023). Studies have demonstrated that AI-driven tools boost learning results for children with impairments through tailored assistance and increased independence.

Nevertheless, the deployment of generative AI systems faces hurdles, including concerns about data privacy, ethical implications, and the requirement for large computing resources. Ensuring that these technologies do not accidentally perpetuate biases or marginalize specific learner groups is a significant issue for developers and researchers (Luckin, 2017; Rodríguez Torres, Comas Rodríguez and Tovar Briñez, 2023). Moreover, the quality of training data, which must be broad, balanced, and representative of the target population, further determines the success of generative AI in education.

Several case studies highlight the application of generative AI in inclusive education. Hopcan et al. (2022) developed an AI-based tutoring system for youngsters with learning difficulties that resulted in significant gains in engagement and achievement. Similarly, Ruiz-Rojas et al. (2023) examined generative AI for developing accessible educational material for blind learners and argued that such technology may play a crucial role in improving independence and enjoyment of learning.

Furthermore, multimodal AI technologies have been used effectively in practical educational settings to help students with complex disabilities. In one practical application, a generative AI system was used to develop interactive learning environments that allow learners to interact with content in novel ways through audio-visual and haptic feedback (Rajagopal et al., 2023). These implementations highlight the transformation that generative AI can bring about while emphasizing the need to address challenges such as algorithmic bias, limited infrastructure, and educator training needs (Abbes, Bennani and Maalel, 2024; Omughelli, Gordon and Al Jaber, 2024).

Despite this potential, adaptive learning systems and generative AI face considerable constraints that hinder their broad application in inclusive education. Most prominently, the reliance on high-quality training data can leave algorithms unable to support a diverse set of learner profiles, thereby excluding the very student groups they were intended to benefit (Omughelli, Gordon and Al Jaber, 2024). Privacy and security also remain significant barriers: many systems require the collection of sensitive data about students, raising questions about ownership and misuse of that data (Rodríguez Torres, Comas Rodríguez and Tovar Briñez, 2023). Another challenge is educators' lack of readiness to use AI tools in the classroom; many teachers need training in AI-driven systems and in how such systems fit with traditional pedagogies (Hopcan et al., 2022). In addition, the high costs of creating and maintaining adaptive, AI-driven learning systems hinder their adoption, especially in underfunded educational systems. Addressing these constraints requires a multidisciplinary strategy that brings education, technology, and ethics to the table and guarantees coordination among stakeholders for fair access and effective delivery.

The literature highlights the revolutionary potential of generative AI and adaptive learning systems for developing inclusive educational settings. Such technologies offer imaginative solutions to long-standing problems of accessibility and customization for students with disabilities. Effective implementation, however, requires the resolution of ethical challenges, improvement in data quality, and better teacher preparation. Future research should therefore focus on developing scalable, cost-effective systems that integrate technological innovation with inclusiveness so that no learner is left behind.

            METHODOLOGY

This paper presents an innovative, AI-driven adaptive learning architecture geared to the needs of students with disabilities. By merging generative AI capabilities with customized learning concepts, the framework enables accessible, engaging, and effective presentation of instructional content. The system is designed to adapt dynamically to the varied and changing demands of different learners. The source code will be available at https://github.com/aasimwadood/ALGA-Ed.

            Research question

How can generative AI and adaptive learning frameworks be leveraged to create tailored, accessible learning aids that improve learning outcomes, student engagement, and accessibility for people with disabilities?

            Datasets and ethical considerations

The framework promotes inclusivity by collecting large amounts of data from students of different demographics to support the development of individualized learning experiences. Information is drawn from many sources to ensure that the system caters to the specific needs of each student. The proposed adaptive learning framework was tested with a mix of public datasets and synthetically generated data that emulate scenarios in which real students with disabilities would participate. This ensured that the system was thoroughly tested under a diverse set of conditions to demonstrate its effectiveness and adaptability.

            EdNet dataset

            The EdNet dataset consists of extensive interaction logs of students engaging with online learning platforms. The dataset includes over 131 million interaction records of 784,309 students, comprising behavioral information such as problem-solving attempts, hints sought, and feedback responses. It was employed to test the capability of the RL module to modify content presentation in response to real-time student activity (Choi et al., 2020). The dataset is available at https://github.com/riiid/ednet.
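For readers who wish to work with such logs, the following minimal sketch shows one way the per-student interaction files could be loaded for downstream analysis with Pandas. The directory layout and column names follow the public EdNet-KT1 release and should be treated as assumptions rather than as part of the ALGA-Ed codebase.

```python
# Minimal sketch: loading EdNet-style per-student interaction logs.
# File layout and columns (e.g. question_id, elapsed_time) are assumed
# from the EdNet-KT1 release; adjust to the files actually downloaded.
from pathlib import Path
from typing import Optional

import pandas as pd


def load_interaction_logs(data_dir: str, limit: Optional[int] = None) -> pd.DataFrame:
    """Concatenate per-student CSV logs into a single DataFrame."""
    frames = []
    for i, csv_path in enumerate(sorted(Path(data_dir).glob("u*.csv"))):
        if limit is not None and i >= limit:
            break
        df = pd.read_csv(csv_path)          # columns assumed from EdNet-KT1
        df["student_id"] = csv_path.stem    # file name encodes the student ID
        frames.append(df)
    return pd.concat(frames, ignore_index=True)


logs = load_interaction_logs("ednet/KT1", limit=100)   # hypothetical local path
print(logs.head())
```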

            ASSISTments dataset

The ASSISTments dataset provided interaction logs of students solving math problems with hints and immediate feedback. It was crucial for testing the system's feedback loop and its capability to improve learning results by adjusting instruction in real time (Heffernan and Heffernan, 2014). The dataset is available at https://www.assistments.org.

            Synthetic data generation

In addition to the public datasets, synthetic data were created to simulate interactions of students with disabilities, allowing the framework to be tested on scenarios not fully represented in the existing datasets. The dataset will be available at https://github.com/aasimwadood/ALGA-Ed. Synthetic data generation was conducted under the same ethical protocol, ensuring that all participant identities were anonymized in compliance with the General Data Protection Regulation (GDPR) and the Health Insurance Portability and Accountability Act (HIPAA). The generation of synthetic data involved the following:

• Behavioral patterns: Behavioral information is gathered from records of student interactions with the learning system, including time spent on interactions, response accuracy, error rates, and task completion patterns. These behavioral patterns provide information about students' cognitive load, engagement levels, and areas needing support. For instance, spending long periods on specific topics might mean that the learner is struggling with the material, and the system would respond by offering more scaffolding or easier content (Hopcan et al., 2022).

            • Inputs from assistive devices: For accessibility, information is collected from the variety of assistive devices that learners use to help them in learning. Examples include screen readers, Braille displays, eye-tracking systems, and haptic feedback mechanisms. The information collected by these technologies caters to the learners with sensory or physical difficulties in line with their individual educational needs. For instance, eye-tracking data might indicate specific components of content require greater emphasis or are particularly challenging for the visually impaired student to understand.

            • Feedback from stakeholders: Continuous feedback is taken from the educators, caregivers, and several other stakeholders who are associated with the system. Such feedback helps in evaluating how the system is efficient in providing learner-centered learning materials, whether the system is user-friendly or not, and which areas should be improved. Educators and caregivers provide valuable qualitative feedback about student development that helps in fine-tuning the algorithms of the system and improving educational outcomes (Abbes, Bennani and Maalel, 2024).

            This hybrid approach with real and synthetic datasets enabled thorough testing in a wide range of learning contexts. Public datasets were employed to test the system in real-world settings, while synthetic data were utilized to test the system under controlled environments, including edge cases. This dual approach made the framework scalable and robust, thus usable in various learning environments.
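To make the generation process concrete, the sketch below illustrates one possible way of producing synthetic interaction records for hypothetical disability profiles with NumPy and Pandas. The profile names, behavioral parameters, and column names are illustrative assumptions, not values used in the study.

```python
# Minimal sketch: generating synthetic interaction records per disability
# profile. Mean time-on-task, error rates, and completion rates below are
# illustrative assumptions, not study parameters.
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=42)

PROFILES = {
    "adhd":       {"mean_time_s": 45, "error_rate": 0.35, "completion_rate": 0.60},
    "dyslexia":   {"mean_time_s": 90, "error_rate": 0.30, "completion_rate": 0.75},
    "low_vision": {"mean_time_s": 75, "error_rate": 0.20, "completion_rate": 0.80},
}


def generate_synthetic_interactions(n_students: int, n_tasks: int) -> pd.DataFrame:
    records = []
    for student in range(n_students):
        profile = rng.choice(list(PROFILES))
        params = PROFILES[profile]
        for task in range(n_tasks):
            records.append({
                "student_id": f"synthetic_{student:05d}",
                "profile": profile,
                "task_id": task,
                # Time-on-task drawn from a log-normal around the profile mean.
                "time_on_task_s": float(rng.lognormal(np.log(params["mean_time_s"]), 0.4)),
                "is_error": bool(rng.random() < params["error_rate"]),
                "completed": bool(rng.random() < params["completion_rate"]),
            })
    return pd.DataFrame(records)


synthetic_df = generate_synthetic_interactions(n_students=100, n_tasks=20)
print(synthetic_df.groupby("profile")[["time_on_task_s", "is_error"]].mean())
```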

            Ethics and compliance

            This study adheres to international guidelines, including the Declaration of Helsinki, Belmont Report, GDPR, and HIPAA. Data security measures such as encryption and secure storage protocols were implemented to ensure participant privacy.

            Institutional Review Board compliance

            This research maintains strict ethical compliance as required by the Institutional Review Board (IRB). Ethical clearance was granted to assure that research involving human data was carried out to the highest ethical standards of respect for the rights and privacy of participants. All information employed in this research was anonymized to safeguard the identities of participants, and direct IRB approval was obtained for the creation of synthetic datasets to complement real-world data. This was done to comply with national and international standards of research ethics.

            Consent statement

            Informed consent was obtained from all participants prior to data collection. Participants were provided with comprehensive information about the study’s purpose, procedures, potential risks, and benefits, ensuring their voluntary participation. Additionally, participants retained the right to withdraw from the study at any point without penalty.

            System architecture

The architecture has four modules: the user profiling module, the generative AI module, the adaptive feedback mechanism, and the assessment and progress tracker. These work together to deliver a smooth, inclusive, and fluid learning experience. The system architecture (Fig. 1) is multilayered, combining the user interface (UI), data preparation, and AI-driven adaptive feedback to meet varied student demands.

Figure 1: System architecture, illustrating the key components. The closed loop runs from the user interface through data collection and preprocessing to the AI model; the model supplies optimized content to a reinforcement learning module and performance data to an assessments module, and refined feedback and progress insights are returned to the user interface, repeating the cycle to improve the learning experience.
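The following minimal sketch expresses the closed loop of Figure 1 in Python. The module interfaces (UserProfile, generate_content, measure_engagement, update_policy) are hypothetical placeholders rather than the ALGA-Ed implementation, and the simple difficulty rule merely stands in for the reinforcement learning component described later.

```python
# Minimal sketch of the closed loop in Figure 1 with placeholder modules.
from dataclasses import dataclass, field


@dataclass
class UserProfile:
    student_id: str
    cognitive: dict = field(default_factory=dict)
    sensory: dict = field(default_factory=dict)
    engagement_history: list = field(default_factory=list)


def generate_content(profile: UserProfile, difficulty: float) -> str:
    # Placeholder for the generative AI module (GPT/DALL-E/Whisper calls).
    return f"lesson(difficulty={difficulty:.2f}) for {profile.student_id}"


def measure_engagement(content: str) -> float:
    # Placeholder for real-time engagement metrics (time-on-task, errors, ...).
    return 0.7


def update_policy(difficulty: float, engagement: float, target: float = 0.75) -> float:
    # Placeholder adaptive feedback rule: nudge difficulty toward an
    # engagement target (the real system uses reinforcement learning).
    step = 0.1 * (engagement - target)
    return min(max(difficulty + step, 0.0), 1.0)


profile = UserProfile(student_id="s001")
difficulty = 0.5
for _ in range(3):  # one pass per learning session
    content = generate_content(profile, difficulty)
    engagement = measure_engagement(content)
    profile.engagement_history.append(engagement)
    difficulty = update_policy(difficulty, engagement)
    print(content, "-> engagement", engagement, "-> next difficulty", round(difficulty, 2))
```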

            User profiling module

The user profiling module adapts the learning system to the specific needs of different students (Fig. 2). It uses sophisticated techniques of data gathering, analysis, and maintenance to build evolving, dynamic learner profiles.

Figure 2: Adaptive learning pathways. Raw data are routed to a GPT model for text simplification, a DALL·E model for visual generation, and a Whisper model for audio synthesis; the outputs are integrated into customized content combining text, visuals, and audio.

            Initial assessments

            The module begins with the construction of a baseline profile through detailed initial assessments. These assessments are intended to measure:

• Cognitive abilities: Assessments of a student's working memory capacity, attention span, and problem-solving ability.

• Sensory capabilities: Visual acuity is measured with standardized tests such as the Snellen chart, auditory comprehension with pure-tone audiometry, and tactile sensitivity with dedicated tactile sensitivity tests.

• Motor skills: Psychomotor assessments, for example adaptive touchscreen interaction and gross motor coordination exercises, are used to gauge competencies in physical interaction (Luckin, 2017).

            The results provide a basis for the learner’s profile, indicating the domains of proficiency, difficulty, and preference.

            Ongoing feedback

            The module continues to fine-tune the profiles by aggregating real-time feedback from multiple sources:

• Student interactions: Activity completion times, response accuracy, and navigation patterns indicate students' levels of participation and understanding.

• Educator and caregiver insights: Input from teachers and family members, gathered through systematically administered surveys or informal feedback, provides contextual understanding of the learner's difficulties.

• Behavioral analytics: Metrics such as time spent on various activities, frequency of errors, and preference trends are tracked. For instance, if a student spends too much time on a question, the underlying concept may not be well understood and the content may need adjustment (Brusilovsky and Millán, 2007; Hopcan et al., 2022).

            Adaptive algorithms assess such feedback to continuously update user profiles, thereby ensuring that the learning system remains aligned with the student’s evolving needs.

            Assistive technology integration

            The module integrates a variety of assistive technologies designed to collect detailed data on user interaction, particularly for students with disabilities:

            • Screen readers and Braille displays: For visually impaired users, interaction logs from screen readers or Braille keyboard usage can give an idea about text understanding and ease of navigation.

            • Eye-tracking systems: The fixation duration, gaze patterns, and saccades are measured to determine which areas of the content are ambiguous or inaccessible.

• Haptic feedback devices: These devices support interaction for students with sensory processing difficulties by offering additional tactile cues, while the module records their responsiveness to these stimuli.

            • Adaptive touchscreen interfaces: The use patterns from the dedicated touch interfaces help assess motor skill challenges and inform adjustments in the content layout (Moore, 2007).

            Incorporation of assistive technologies guarantees that the system accommodates a variety of interaction modalities, thus enhancing inclusivity and accessibility. This is a cyclical and thorough profiling procedure that ensures that the adaptive learning system remains customized, efficient, and inclusive across the entire period of the student’s education.
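As an illustration of how these heterogeneous signals might be fused, the sketch below updates a dictionary-based learner profile from interaction metrics, assistive-device events, and educator notes. The field names, thresholds, and flagging rule are assumptions for demonstration only, not the module's actual logic.

```python
# Minimal sketch: fusing behavioral, assistive-device, and stakeholder
# evidence into a learner profile. Field names and thresholds are assumed.
from statistics import mean


def update_learner_profile(profile: dict,
                           interactions: list,
                           device_events: list,
                           educator_notes: list) -> dict:
    """Return an updated copy of the profile based on new evidence."""
    updated = dict(profile)

    # Behavioral evidence: accuracy and pace from recent interactions.
    if interactions:
        updated["avg_accuracy"] = mean(i["correct"] for i in interactions)
        updated["avg_response_time_s"] = mean(i["response_time_s"] for i in interactions)

    # Assistive-technology evidence: long screen-reader dwell times suggest
    # passages that need simplification or audio alternatives.
    dwell = [e["dwell_time_s"] for e in device_events if e.get("source") == "screen_reader"]
    if dwell and mean(dwell) > 30:
        updated.setdefault("flags", []).append("simplify_text")

    # Qualitative evidence from educators and caregivers is stored for review.
    updated.setdefault("educator_notes", []).extend(educator_notes)
    return updated


profile = {"student_id": "s001", "sensory": "low_vision"}
profile = update_learner_profile(
    profile,
    interactions=[{"correct": 1, "response_time_s": 12}, {"correct": 0, "response_time_s": 41}],
    device_events=[{"source": "screen_reader", "dwell_time_s": 48}],
    educator_notes=["Prefers audio explanations."],
)
print(profile)
```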

            Generative AI module

The generative AI module is the core component for producing learning content tailored to individual learners. It uses advanced AI models to generate multimodal content comprising text, images, and audio resources that adapt in real time to user characteristics. The models are optimized to provide greater accessibility, engagement, and deeper learning for students with diverse abilities.

            Text generation

            The module leverages advanced natural language generation models such as GPT-based models to tailor written content. These models adjust the complexity of text according to differing cognitive abilities and learning needs. Examples include:

• Mathematical concepts: Abstract mathematical problems become easy-to-understand, step-by-step visual explanations for children with dyscalculia, allowing students to grasp complex topics at their own pace with less frustration and deeper understanding.

• Scientific concepts: Complex scientific theories are translated into simple words, with examples used to explain the topic, so that students with reading disabilities or low comprehension can access the content without compromising educational value (Baker and Siemens, 2014; Kenneth et al., 2024).

            Text generation models ensure that all written materials are customized based on the learner’s language proficiency, comprehension skills, and personal preferences.
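The sketch below indicates how such profile-conditioned text simplification could be wired up. The llm argument stands in for any GPT-style text-generation backend, and the prompt template and profile fields are illustrative assumptions rather than the prompts used in ALGA-Ed.

```python
# Minimal sketch: profile-conditioned text simplification. `llm` is any
# callable that maps a prompt string to generated text; the prompt template
# and profile fields below are assumptions for illustration.
from typing import Callable


def simplify_for_learner(passage: str, profile: dict, llm: Callable[[str], str]) -> str:
    """Build a profile-aware prompt and return the simplified passage."""
    reading_level = profile.get("reading_level", "grade 5")
    needs = ", ".join(profile.get("needs", [])) or "none"
    prompt = (
        "Rewrite the passage below so it is readable at a "
        f"{reading_level} level. Accessibility needs: {needs}. "
        "Use short sentences and explain each step.\n\n"
        f"Passage:\n{passage}"
    )
    return llm(prompt)


# Usage with a stub backend; swap in a real model call in practice.
fake_llm = lambda prompt: "Simplified: " + prompt.splitlines()[-1]
profile = {"reading_level": "grade 4", "needs": ["dyslexia-friendly wording"]}
print(simplify_for_learner("Photosynthesis converts light energy into chemical energy.",
                           profile, fake_llm))
```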

            Visual content creation

            The DALL·E model, a powerful image generation tool, is utilized to create highly relevant visual content that complements the learning process. Visual aids are customized for specific learning needs and include the following:

• Interactive diagrams: For students learning scientific concepts, the system generates interactive, explorable diagrams that explain complex topics such as chemical reactions, biological processes, or physical laws. These visuals make abstractions more vivid and can be manipulated to increase interest in the material.

• High-contrast visuals: Students with low vision or partial sight receive high-contrast, easily readable visuals. Examples include tactile diagrams for Braille readers and high-contrast charts for children with partial vision loss, so that each child can read content visually within their capacity (Rodríguez Torres, Comas Rodríguez and Tovar Briñez, 2023).

            The visual content is designed to be flexible enough to adapt to various sensory impairments, ensuring inclusivity and engagement across diverse learners.

            Audio content generation

The Whisper model generates high-quality audio content suited to students' requirements. This module provides support for multiple languages and adapts the pace and audibility of instructions.

• Slow-paced narration: Audio content includes clear, slowly paced narration, targeted especially toward visually impaired students, so that they can absorb the concepts without visual barriers.

• Multilingual audio descriptions: Whisper enables the production of multilingual audio content so that students from diverse linguistic backgrounds can access the information. The system can describe text or visual content in different languages, breaking language barriers and making learning inclusive (Ruiz-Rojas et al., 2023).

            Audio content generation is very important in supporting students who rely on auditory learning or have visual impairments, making learning materials more engaging and comprehensible.

The generative AI module (Fig. 3) ensures that learning content is produced in diverse formats, from text to audio, visual, and tactile materials. This approach improves accessibility and learning outcomes, since research shows that multichannel learning works well in educational contexts (Gligorea et al., 2023).

Figure 3: Generative AI model flow. Personalized learning paths start from initial content and branch into a path for ADHD (short lessons, interactive multimedia), a path for dyslexia (high-contrast text, tactile diagrams), and a general path (intermediate complexity, advanced topics), illustrating how content is tailored to different learner needs.

            The AI-driven system continuously learns and adapts, refining the educational content in such a way that every learner is given the best possible instruction. This iterative process allows the system to stay at its highest engagement level while it promotes an inclusive, supportive learning environment.

            Adaptive feedback mechanism

One important part of the system uses RL to adapt content delivery in real time, optimizing learning based on students' interactions and engagement data. The module continuously adapts to each student's progression so that the content remains neither too easy nor too difficult. RL enables the system to learn from the student's performance and behavior, creating a dynamic learning environment that evolves as the student learns.

            Monitoring student engagement

The system continuously monitors a series of engagement metrics that indicate students' performance levels. These include the following:

• Task completion rates: Reflect how often learning tasks are completed within a given timeframe. High rates indicate that a student completes many tasks on schedule; low completion rates suggest that the material is too difficult or does not hold the student's interest.

• Response times: The time a student takes to respond to questions or prompts is tracked. Protracted response times may indicate confusion, while quicker times may indicate familiarity or confidence with the topic.

• Error patterns: Repeated mistakes are recognized. For example, if a student repeatedly errs in certain mathematical operations or cannot grasp a scientific principle, targeted interventions are triggered (Evmenova, Borup and Shin, 2024; Ferreira, 2024). A minimal sketch of how such metrics can be computed from interaction logs follows this list.
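The sketch below computes the task completion rate, response times, and error rate from a small interaction log with Pandas; the column names and example values are assumptions chosen for illustration.

```python
# Minimal sketch: deriving the engagement metrics above from a task-event log.
import pandas as pd

events = pd.DataFrame({
    "student_id": ["s001"] * 4,
    "task_id": [1, 2, 3, 4],
    "completed": [True, True, False, True],
    "response_time_s": [18.0, 25.0, 95.0, 22.0],
    "errors": [0, 1, 3, 0],
})

metrics = events.groupby("student_id").agg(
    task_completion_rate=("completed", "mean"),
    median_response_time_s=("response_time_s", "median"),
    error_rate=("errors", lambda e: (e > 0).mean()),   # share of tasks with errors
)
print(metrics)
```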

            Dynamic content adjustment

The system adapts the difficulty level of the content based on real-time student data. Examples of dynamic adaptation include the following:

            • Supporting struggling students: If the system detects that a student is struggling with a particular topic, such as algebraic equations, it can introduce additional aids such as visual representations (graphs or step-by-step animations) or simplify the content. For example, if a student cannot solve a simple equation, the system could break it down into smaller, more manageable steps or give additional practice problems with increasing simplicity. Scaffolding in this manner helps prevent frustration while continuing to make progress.

            • Advancing proficient students: Once the system notices that a student has consolidated knowledge in a particular subject matter, such as basic algebra, it adjusts by making available more challenging material. For instance, if the student is consistently answering problems correctly and efficiently in the algebra area, the system might introduce quadratic equations or even more challenging word problems that keep the content stimulating and exciting (Evmenova, Borup and Shin, 2024; Ferreira, 2024).

            Feedback loops and RL

The system uses RL algorithms to develop feedback loops in which the student's actions affect the future behavior of the system, as sketched after the list below. The feedback mechanism is structured to ensure that the system keeps improving and personalizes its responses. Key aspects include the following:

            • Positive reinforcement: When a student answers correctly or improves their engagement metrics, the system reinforces this behavior by increasing content difficulty slightly or providing additional challenges. This approach establishes confidence and encourages the learner to continue moving forward.

            • Negative reinforcement and adjustments: When a student is either not able to succeed with consistency or makes repeated mistakes in learning, the system would adaptively reduce the complexity level of the content or offer added guidance. For example, if a student keeps failing on the same assignment or responds incorrectly a number of times, the system may reintroduce basic principles or add more in-depth explanations.
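To make the reinforcement learning idea concrete, the sketch below implements a tabular Q-learning update over coarse engagement states and difficulty adjustments. The state discretization, actions, and reward signal are illustrative assumptions and not the exact formulation used in the framework.

```python
# Minimal sketch: tabular Q-learning over engagement states and difficulty
# adjustments. States, actions, and the stub reward are assumptions.
import random
from collections import defaultdict

ACTIONS = ["decrease", "keep", "increase"]           # difficulty adjustments
q_table = defaultdict(lambda: {a: 0.0 for a in ACTIONS})
alpha, gamma, epsilon = 0.1, 0.9, 0.2                # learning hyperparameters


def choose_action(state: str) -> str:
    """Epsilon-greedy choice over difficulty adjustments."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(q_table[state], key=q_table[state].get)


def update_q(state: str, action: str, reward: float, next_state: str) -> None:
    """Standard Q-learning update."""
    best_next = max(q_table[next_state].values())
    q_table[state][action] += alpha * (reward + gamma * best_next - q_table[state][action])


def engagement_to_state(completion_rate: float, error_rate: float) -> str:
    """Discretize engagement metrics into a coarse state label."""
    if completion_rate < 0.5 or error_rate > 0.5:
        return "struggling"
    if completion_rate > 0.9 and error_rate < 0.1:
        return "proficient"
    return "on_track"


# One interaction step: reward choices that improve engagement.
state = engagement_to_state(completion_rate=0.4, error_rate=0.6)    # "struggling"
action = choose_action(state)
reward = 1.0 if action == "decrease" else -0.5                      # stub reward signal
next_state = engagement_to_state(completion_rate=0.7, error_rate=0.3)
update_q(state, action, reward, next_state)
print(state, action, q_table[state])
```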

            Real-time content personalization

            The feedback mechanism also personalizes content delivery according to the interaction patterns of the student. For instance, if a student likes visual or hands-on learning, the system can give priority to image-based explanations or interactive content. Alternatively, if a student likes auditory content, the system can provide spoken instructions or explanations rather than text-based ones.

The system adapts to variations in tempo, learning preferences, and areas of difficulty, so that every learner always receives an appropriate challenge that keeps them motivated and goal oriented.

Such an adaptive system ensures that the learning experience is both challenging and achievable, which helps sustain engagement and build confidence among students. Through this adaptation, the whole process is improved continually and tailored to individual requirements, making it far more personalized and helpful. Dynamic adaptation keeps students motivated and meaningfully engaged while helping them achieve mastery of the content at a pace suited to their abilities.

            Assessment and progress tracker

The assessment and progress tracker is a vital part of the adaptive learning platform, enabling ongoing monitoring of students' progress through a blend of quantitative and qualitative approaches. By gathering comprehensive data from various sources, the tracker guarantees real-time feedback and customized interventions by the system. The tracker is dynamic, always adjusting to the learner's changing needs and progress. Both formative and summative assessment methods ensure that learning outcomes are fully monitored.

            Quizzes and tests

Quizzes and tests assess knowledge retention, understanding, and the application of acquired concepts. Dynamic quizzes are produced based on the student's profile, learning abilities, and subject areas that require improvement. Quiz difficulty adapts as students work through the curriculum, based on their previous answers. Dynamic quiz generation therefore guarantees that the assessment is challenging but fair, reflecting the student's current knowledge. For example:

            • Adaptive questioning: If a student consistently performs well on algebra questions, the system might increase the complexity of the questions, incorporating multistep problems. On the other hand, if a student is performing poorly, then the system may introduce foundational review questions to build up that student’s understanding.

            • Question types: Quizzes can include multiple-choice questions, short answer questions, or application-based problems for both recall and the application of learned concepts in different scenarios. At more advanced stages, questions may be placed in scenarios or real-world applications to assess deeper comprehension and critical thinking (Omughelli, Gordon and Al Jaber, 2024).

            Behavioral analytics

            Behavioral analytics are critical in understanding how students interact with the system and their learning habits. These metrics go beyond quiz performance to track engagement patterns, allowing the system to detect subtle signs of progress or challenges that might not be evident from test scores alone. Key behavioral metrics include the following:

            • Engagement duration: The amount of time spent on each task or content module. Long engagement times may suggest a student’s interest or a problem in understanding. On the other hand, very short engagement times could mean that the student lacks interest or is having difficulty in processing the material.

• Clickstream analysis: Records the sequence of operations a student performs during a learning session, such as navigating between content areas, revisiting previous modules, or skipping others. These observations indicate where a student is interested, confused, or curious.

            • Task completion patterns: The system keeps track of whether the students complete the assigned tasks on time or abandon them halfway. If a student continuously fails to complete certain kinds of tasks, then this behavior can be flagged by the system for further intervention like giving more help or breaking tasks into smaller steps (Heffernan and Heffernan, 2014).

            Stakeholder feedback

Feedback from educators, caregivers, and other stakeholders helps in assessing the overall effectiveness of the system and identifying areas for improvement. These qualitative data add depth to the quantitative data collected through quizzes and behavioral analytics. Feedback is gathered through surveys, direct comments, and observation logs, allowing for the following:

            • Usability insights: Educators and caregivers can provide valuable feedback on the system’s usability, ease of navigation, and whether the content and interface are accessible for all users, especially those with disabilities.

• Effectiveness evaluation: Teachers can ascertain whether the system is helping students meet their learning objectives, including how effectively the system adapts to students' needs and whether the material is presented at an appropriate difficulty level.

            • Suggestions for enhancement: The stakeholders may suggest improvements such as adding certain types of content, changing the difficulty adjustment algorithms, or adding new types of feedback. This feedback loop guarantees that the system keeps improving to suit the needs of various learners (Abbes, Bennani and Maalel, 2024).

            Real-time user profile updates

The user profile system is supplied with assessment information collected through quizzes, behavioral analytics, and stakeholder feedback. This keeps the system attuned to the learner's current situation and refreshes the profile in real time. Student profiles are updated with new information, such as improved skills, new challenges, or shifts in learning style, as students grow or struggle. This dynamic refreshing keeps the system in tune with the evolving needs of each learner, providing material and interventions matched to those needs.

            Through their joint integration, these testing methods provide a complete understanding of student development, thus allowing the system to fulfill the needs of each student. With the integration of quantitative information, including test scores and participation rates, with qualitative observations, including stakeholder feedback, the system is able to make more informed decisions, providing personalized, efficient learning experiences.

            Preprocessing

            Several preprocessing steps are taken before the collected data are fed into the model to ensure that the data are structured, relevant, and appropriate for training the model. These include the following:

            • Anonymization: All personally identifiable information is removed or obfuscated to ensure privacy and data protection regulations are respected. It ensures the students’ data are treated responsibly and securely.

• Annotation: Data are annotated so that they align with the specific needs of the relevant domain. For example, data from students with sensory disabilities, such as visual or auditory impairments, are annotated with more detailed information for customizing the content. This can entail labeling specific content to be shown in high contrast or adjusting wording to improve readability for cognitively impaired students.

• Quality control: Multiple quality controls are applied so that the data are clean, consistent, and suitable for training AI models. This involved cleaning incomplete or incorrect data, verifying inputs received from assistive devices, and standardizing data formats so that they could be used across multiple AI models (Rajagopal et al., 2023). A minimal sketch of these preprocessing steps follows this list.
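The sketch below illustrates the steps on a toy table: hashing identifiers for anonymization, dropping incomplete or implausible records, and adding a simple accessibility annotation. The column names, salt handling, and annotation rule are assumptions for demonstration, not the pipeline's actual rules.

```python
# Minimal sketch: anonymization, quality control, and one annotation rule.
import hashlib

import pandas as pd


def preprocess(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()

    # Anonymization: replace raw identifiers with salted hashes.
    salt = "study-specific-salt"  # hypothetical; manage securely in practice
    out["student_id"] = out["student_id"].map(
        lambda s: hashlib.sha256((salt + str(s)).encode()).hexdigest()[:12]
    )

    # Quality control: drop incomplete records and implausible response times.
    out = out.dropna(subset=["response_time_s", "correct"])
    out = out[out["response_time_s"].between(0.5, 3600)]

    # Annotation: flag content for high-contrast rendering for low-vision users.
    out["needs_high_contrast"] = out["disability_profile"].eq("low_vision")
    return out


raw = pd.DataFrame({
    "student_id": ["alice", "bob"],
    "response_time_s": [14.2, None],
    "correct": [1, 0],
    "disability_profile": ["low_vision", "adhd"],
})
print(preprocess(raw))
```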

            Model training and fine-tuning

Generative AI models are trained and fine-tuned on the above-mentioned datasets, making them more accessible and optimizing the content delivered to students for their specific learning needs. Model training focused on the following areas:

            GPT models for cognitive accessibility

Fine-tuning large language models such as GPT on educational datasets enables language simplification and tailoring of text to differing cognitive levels. The models are developed to identify situations in which the language used is more advanced than the learner's level of comprehension and to correct it accordingly. Mathematics, for example, can be presented as a set of straightforward, step-by-step instructions, which is particularly suitable for students with dyscalculia. The models can also generate summaries of difficult material that translate complex meanings into more comprehensible concepts (Baker and Siemens, 2014).

            DALL·E for visual content generation

            DALL·E is used to generate customized visual content aligned with the individual profiles of students. For example, this model can produce high-contrast images tailored specifically for students with visual impairments or generate tactile diagrams suitable for printing in Braille or as raised images meant for tactile interaction. In addition, DALL·E can generate interactive diagrams that represent scientific concepts; hence, it aids the understanding of abstract or complex ideas among students through visual representations. The dynamic generation of visuals is essential in creating an inclusive learning environment (Rodríguez Torres, Comas Rodríguez and Tovar Briñez, 2023).

            Whisper for multilingual, well-paced audio instructions

The framework uses the Whisper model to generate audio materials that are clear, appropriately paced, and multilingual. Fine-tuned on multiple speech datasets, the model ensures that narration is paced appropriately for visually impaired learners or for learners with learning disabilities who require slower speech. In addition, Whisper supports various languages, allowing students from different linguistic backgrounds to interact with content in their native language. The auditory component can also include descriptions of visual materials for blind or low-vision learners, providing auditory explanations of visual content (Ruiz-Rojas et al., 2023).

            RL for content optimization

Content delivery is optimized by analyzing student engagement data with RL approaches. The system sustains learning by measuring students' interaction with the content while regulating difficulty and tempo at levels suited to both engagement and instructional efficacy. For example, when a student continues to struggle in certain domains, the system may fragment information into pieces that are easier to digest or provide extra multimedia elements to supplement the learning process. This feedback mechanism ensures that the system continues to learn and adjust to the learner's evolving needs, producing an experience that is fresh and constantly changing (Evmenova, Borup and Shin, 2024).

By fine-tuning AI models on the above-mentioned datasets, the system can provide individualized learning material that reaches different learners, making education accessible and inclusive for all students.

            Experimental design

The experimental design for this study was structured to rigorously evaluate the proposed adaptive learning framework by combining publicly available datasets with synthetically generated data. The methodology covered sample size, inclusion criteria, randomization, blinding, replicates, and synthetic data generation to simulate scenarios that could not otherwise be represented.

            Sample size, inclusion and exclusion criteria

The study used all available records within the EdNet and ASSISTments datasets, aggregating to over 800,000 students. These datasets provide full coverage of student interaction, including engagement metrics, problem-solving behaviors, and feedback patterns. Given the anonymization and completeness of the datasets, no participant was excluded at the dataset level; all records were incorporated to capture a wide range of learning behavior, which is important for appraising how adaptable the framework is across diverse educational settings and scales. Participants were included if they had documented sensory impairments and were actively enrolled in educational institutions. Participants were excluded if they did not meet the minimum engagement criterion of interacting with the system for at least 5 h during the study period. The participant pool included both male and female students, and results were analyzed separately for both groups to ensure that gender-based differences in learning outcomes were observed and addressed.

            Attrition

            As the datasets used (EdNet and ASSISTments) were precollected and fully anonymized, no participant dropout occurred during data analysis. However, engagement metrics such as incomplete tasks and unattempted assessments were monitored to approximate virtual attrition. Less than 5% of the sessions involved incomplete tasks, which were excluded from the analysis to ensure consistent data quality across all participants.

            Subject demographics

The datasets used in this study represent a diverse population of learners across various educational contexts, ensuring broad representation of student behaviors and accessibility needs. The EdNet dataset comprises interaction records from over 784,309 students engaged in online learning platforms, capturing a wide range of educational experiences and cognitive skill levels. Although the dataset is anonymized and lacks detailed demographic attributes, it provides a diverse set of engagement patterns suitable for training the adaptive learning framework. Similarly, the ASSISTments dataset, with over 100,000 student records, focuses on K-12 (Kindergarten through 12th grade) students solving math problems with real-time feedback and hints. This dataset was instrumental in validating the feedback mechanism of the framework. To address the absence of detailed disability-specific data, synthetic data were generated to simulate interaction patterns of students with disabilities, including ADHD, dyslexia, and visual impairments. This synthetic dataset was modeled on real student behaviors observed in the public datasets, ensuring alignment with authentic learning patterns while representing underrepresented groups. Together, these datasets provided a comprehensive foundation for evaluating the system's performance across diverse learner profiles.

            Randomization

            Since the interaction logs were precollected in a sequential manner, randomization was not possible for this study. These logs are authentic and chronologically representative of learning behaviors. This allows the framework to analyze and adapt to real-world engagement patterns. Thus, the absence of randomization does not impact the validity of the results, as the focus of the study was on evaluating natural interactions and adaptive system responses.

            Blinding

Blinding was not necessary since the datasets were fully anonymized, ensuring the removal of all identifiable information. For objective assessment of the synthetic data, the researchers independently labeled and verified all scenarios. This reduced potential biases and ensured consistency in the interpretation and use of the synthetic data for training and testing.

            Replicates

To ensure robustness, the experiments were repeated three times on different data splits. All analyses divided the datasets into training, validation, and testing subsets to ensure that the results were reproducible and independent of data partitioning. The replicates provided consistent evidence of the system's adaptability to diverse scenarios, validating its reliability.
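A minimal sketch of such replicated splits with Scikit-learn is shown below; the 70/15/15 proportions, the three seeds, and the stand-in feature and label arrays are assumptions, since the exact partitioning scheme is not specified.

```python
# Minimal sketch: three replicated train/validation/test splits with
# different random seeds. Proportions and data are placeholders.
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(1000).reshape(-1, 1)                        # stand-in features
y = np.random.default_rng(0).integers(0, 2, size=1000)    # stand-in labels

for seed in (0, 1, 2):                                    # one replicate per seed
    X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.3, random_state=seed)
    X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=seed)
    print(f"replicate {seed}: train={len(X_train)}, val={len(X_val)}, test={len(X_test)}")
```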

            Synthetic data generation

In addition to the available public datasets, synthetic data were created to mimic a variety of learning scenarios not present in the existing datasets. The synthetic data were checked for realism by cross-referencing their behavioral and interaction patterns with the characteristics of the real datasets. This allowed the framework to be tested under diverse, controlled conditions, including edge cases and underrepresented user groups, that were as realistic and reliable as possible.

Synthetic data were incorporated to expand the scope of the experiments beyond the restrictions imposed by existing datasets, making the evaluation more holistic. Combining real and synthetic data provided a robust test bench for verifying the adaptability, accessibility, and effectiveness of the proposed system.

            Power analysis

            A statistical power analysis was performed to determine the adequacy of the sample size for reliable results. Given the extensive size of the EdNet dataset (784,309 students) and the ASSISTments dataset (100,000+ students), the study met the requirements for statistical significance without the need for additional data collection. The power analysis aimed for a minimum effect size of 0.5 with a significance level of P < 0.05 and a power of 0.90. This ensured that the study had a sufficiently large sample size to detect meaningful differences in learning outcomes and engagement metrics across participant groups.
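For reference, the reported parameters can be reproduced with statsmodels as sketched below; the independent two-sample t-test setting is an assumption, since the exact test family is not stated in the text.

```python
# Minimal sketch: required sample size for effect size 0.5, alpha 0.05,
# power 0.90, assuming a two-sample t-test design.
from statsmodels.stats.power import TTestIndPower

required_n = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.90)
print(f"Required sample size per group: {required_n:.0f}")  # roughly 85 per group
```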

            Implementation details

The proposed adaptive learning framework powered by generative AI was developed using Python [Research Resource Identifiers (RRID): SCR_008394] as the core programming language, ensuring a scalable and reproducible implementation. Data preprocessing was performed using Pandas (RRID: SCR_018214) for data cleaning and structuring, NumPy (RRID: SCR_008633) for numerical computations, and Scikit-learn (RRID: SCR_002577) for data splitting, feature scaling, and synthetic data generation. The generative AI components were implemented using TensorFlow (RRID: SCR_016345), leveraging GPT for text simplification, DALL·E 2 for high-contrast visuals, and Whisper for multilingual audio synthesis, all fine-tuned on the EdNet and ASSISTments datasets. Reinforcement learning (RL) was employed for dynamic content adaptation, with a reward mechanism based on engagement metrics such as time-on-task and error rates, enabling real-time content adjustments for diverse learners. Visualization of experimental results, including learning outcome improvements and engagement metrics, was conducted using Matplotlib (RRID: SCR_008624) and Seaborn (RRID: SCR_018132) for statistical graphics and trend analysis. The entire framework was tested on Google Colab Pro using a Tesla T4 GPU, ensuring adequate computational power for model training. All datasets used, including EdNet and ASSISTments, as well as the codebase, are publicly available to ensure reproducibility and transparency, with all tools cited using their respective RRIDs.
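The reward mechanism described above can be sketched roughly as follows: reward rises when time-on-task stays near a target band and falls with error rate, and a simple policy chooses among content variants. The weights, target values, and the epsilon-greedy bandit-style update are illustrative assumptions, not the authors’ actual RL implementation.

```python
# Hedged sketch of an engagement-based reward and a simple content-selection
# policy. Weights, target band, and variant names are illustrative assumptions.
import numpy as np

def engagement_reward(time_on_task: float, error_rate: float,
                      target_minutes: float = 15.0) -> float:
    # Closer to the target time-on-task and fewer errors -> higher reward.
    time_term = 1.0 - min(abs(time_on_task - target_minutes) / target_minutes, 1.0)
    return 0.6 * time_term + 0.4 * (1.0 - error_rate)

class ContentPolicy:
    """Epsilon-greedy choice over hypothetical content variants."""
    def __init__(self, variants=("simplified_text", "visual_aid", "audio")):
        self.variants = list(variants)
        self.values = {v: 0.0 for v in self.variants}
        self.counts = {v: 0 for v in self.variants}

    def select(self, epsilon: float = 0.1) -> str:
        if np.random.rand() < epsilon:
            return np.random.choice(self.variants)
        return max(self.values, key=self.values.get)

    def update(self, variant: str, reward: float) -> None:
        self.counts[variant] += 1
        # Incremental mean of observed rewards for this variant.
        self.values[variant] += (reward - self.values[variant]) / self.counts[variant]
```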

            Evaluation criteria

To demonstrate the effectiveness of the proposed adaptive learning framework, a range of measures was used to analyze its impact on learning outcomes, engagement, adaptability, and accessibility. Together, these measures captured how well the system addressed the varied needs of students with disabilities.

            Learning outcomes

The effect of the system on learning outcomes was assessed by measuring gains between pre- and posttest scores. These assessments tested students’ understanding of the subject matter before and after use of the system, enabling a precise comparison of comprehension and retention. A significant increase in posttest scores indicated that the system provided effective, personalized teaching material. This was particularly critical for establishing the framework’s potential to meet the cognitive requirements of students with conditions such as ADHD and dyslexia.
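A minimal sketch of how such pre/post gains could be compared is shown below, using a paired t-test from SciPy; the example scores simply mirror the group averages reported later in Table 1 and are illustrative rather than individual student data.

```python
# Illustrative pre/post gain comparison with a paired t-test. The scores below
# echo the group averages in Table 1 and are not real per-student records.
import pandas as pd
from scipy import stats

scores = pd.DataFrame({
    "pre":  [40, 45, 50, 55],   # illustrative pre-assessment scores (%)
    "post": [75, 70, 85, 80],   # illustrative post-assessment scores (%)
})
gain = scores["post"] - scores["pre"]
t_stat, p_value = stats.ttest_rel(scores["post"], scores["pre"])
print(f"Mean gain: {gain.mean():.1f} points, t = {t_stat:.2f}, p = {p_value:.4f}")
```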

            Engagement metrics

Engagement was measured using time-on-task and task completion rate. Time-on-task was the duration for which students worked on an activity involving the learning material, while task completion rate was the percentage of assigned tasks a student completed. These metrics quantify how well the system sustains student focus and motivation. Marked increases in engagement indicated the framework’s efficacy in delivering content that is relevant and less distracting for students with attention deficits.
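As a hypothetical sketch, both metrics can be computed directly from an interaction log; the column names (student_id, duration_sec, completed) are assumptions about the log schema rather than the datasets’ actual fields.

```python
# Sketch of deriving per-student engagement metrics from an interaction log.
# Column names are assumed for illustration.
import pandas as pd

def engagement_summary(log: pd.DataFrame) -> pd.DataFrame:
    """Per-student time-on-task (minutes) and task completion rate."""
    return (log.groupby("student_id")
               .agg(time_on_task_min=("duration_sec", lambda s: s.sum() / 60.0),
                    completion_rate=("completed", "mean")))
```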

            Adaptability

Adaptability was measured as the percentage of users receiving dynamically adapted content. The degree to which the framework dynamically adjusts the content provided to an individual student reflects how well it responds to that student’s needs in real time. A student who is struggling may require scaffolded text with extra graphics, whereas a high-achieving student may require more difficult material to stay engaged. The adaptability metric highlights whether the system accommodates varied cognitive and emotional states so that the learning material is neither too simple nor too complex for a given student.
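A brief, hypothetical sketch of this metric is given below: it reports the share of students who received at least one dynamically adapted content item, with the event-log column names assumed for illustration.

```python
# Sketch of the adaptability metric: percentage of students who received at
# least one dynamically adapted item. Column names are illustrative.
import pandas as pd

def adaptability_rate(events: pd.DataFrame) -> float:
    adapted = events.groupby("student_id")["was_adapted"].any()
    return 100.0 * adapted.mean()
```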

            Accessibility

            The accessibility of the framework was assessed through usability scores for features such as audio synthesis, font customization, and high-contrast visuals. These features were crucial for students with sensory impairments, especially visual disabilities. Usability surveys solicited feedback from students and educators, rating each feature on a scale from 1 to 5, where 5 indicated “very easy to use.” High usability scores demonstrated the system’s success in creating an inclusive learning environment that accommodated a wide range of disabilities. Accessibility metrics also emphasized the importance of multimodal content delivery, ensuring that students could interact with the material in their preferred format.

            EXPERIMENTAL RESULTS

This section presents an assessment of the performance, usability, and impact of the proposed generative AI-powered adaptive learning system. The results are based on pilot deployments, quantitative metrics, and qualitative input, providing a full evaluation of the system’s success in meeting the educational needs of students with disabilities. The analysis is organized into three areas: quantitative metrics, usability and accessibility, and feedback effectiveness.

Quantitative metrics

            The preliminary implementation of the architecture showed significant improvements in learners’ educational outcomes across multiple disabilities, as shown in Figure 4.

Figure showing percentages for three metrics—Learning Outcome Improvement (blue), Engagement Metrics (red), and Adaptability (green)—across four user groups: Cognitive Disabilities, Sensory Disabilities, Other Disabilities, and General Population. Adaptability is highest in all categories (ranging from 75% to 87%), followed by Learning Outcome Improvement (20% to 35%), with Engagement Metrics the lowest (15% to 25%). The chart highlights that users with disabilities benefit from higher adaptability and slightly better learning and engagement outcomes than the general population.
Figure 4: Performance metrics across categories.

            Learning outcome improvement

Posttest scores increased by 35% compared to traditional teaching methods, as shown in Table 1. The improvement was even more pronounced among students with learning disabilities, ADHD, and dyslexia, for whom visual and interactive materials, such as simplified diagrams and step-by-step visual explanations, worked particularly well. With AI-based content adaptation, these learners were able to explore and engage with the most difficult topics, which enhanced their knowledge retention and application capabilities.

Table 1: Learning outcome improvement by disability type.

Disability type | Average preassessment score | Average postassessment score | Improvement (%)
Visual disabilities | 40% | 75% | 35%
ADHD | 45% | 70% | 25%
Dyslexia | 50% | 85% | 35%
Hearing impairment | 55% | 80% | 25%
            Engagement metrics

Mean time-on-task, one of the key engagement measures, increased by 25% across the pilot, indicating that students were better focused and less easily distracted. Figure 5 depicts the engagement metrics, showing the average time-on-task for students with visual disabilities before and after the implementation of multimodal learning tools. Students demonstrated a strong preference for multimedia content, such as audio-enriched material with additional visual support. This is consistent with emerging research showing that students with disabilities, particularly those with attention-deficit disorders, thrive on content delivered through multiple sensory channels (Gligorea et al., 2023). Finally, the system’s ability to dynamically adjust content complexity based on engagement data led to sustained focus and reduced cognitive overload.

            Figure showing the time-on-task (in minutes) for five students. Each bar is divided into two segments: green (representing time after implementation) and red (representing time before implementation). All five students show increased time-on-task after implementation, with total engagement times ranging from approximately 15 to 20 minutes. The chart visually demonstrates a positive impact of the implementation on student engagement across the board.
Figure 5: Time-on-task improvement metrics comparing pre- and postintervention learning phases for students.

            Adaptability

The system showed high adaptability in adjusting content delivery based on real-time student engagement data. Approximately 87% of users received dynamically tailored materials matched to their cognitive and emotional states, as shown in Figure 6. For instance, students who struggled to grasp complex ideas benefited from scaffolded text and extra graphics that made the material more accessible, while advanced students received richer material that remained challenging and engaging. This is valuable in classrooms with varied needs, since the material is neither too shallow nor too deep for any student (Evmenova, Borup and Shin, 2024).

            Figure comparing the average time (in minutes) required to complete two types of tasks—Text-Based Tasks and Visual Tasks (with aids)—before and after implementation. Red bars represent times before implementation, and green bars represent times after implementation. For Text-Based Tasks, completion time decreased from 15 to 10 minutes. For Visual Tasks, it decreased from 12 to 8 minutes. The chart demonstrates a notable reduction in task completion time after implementing accessibility aids or interventions.
Figure 6: Task completion time for visual disabilities.

            Usability and accessibility

            The usability and accessibility of the framework were assessed through surveys conducted among both students and educators. Findings showed a high level of satisfaction regarding the system’s user-friendliness and its accessibility attributes. Significant findings include:

            Ease of use (usability)

In the survey of students and instructors, 90% of participants rated the system as either “easy to use” or “very easy to use.” Table 2 shows the usability scores for various accessibility features for students with visual impairments, on a 1-5 scale where 5 is “Very Easy to Use” and 1 is “Very Difficult to Use.” These positive ratings are likely attributable to the system’s intuitive user interface (UI) design. Smooth transitions between modules, supported by adaptive content, allow learners to navigate the system regardless of their familiarity with technology.

Table 2: Usability scores for visual impairment features.

Feature | Usability rating (1-5)
High-contrast visuals | 4.8
Real-time audio synthesis | 4.7
Tactile diagrams | 4.5
Customizable font sizes | 4.9
            Appreciation for accessibility features

Students with visual and cognitive impairments reported high satisfaction with the system’s accessibility features. High-contrast images, font size options, and real-time audio synthesis for converting graphics to audio were particularly important, greatly increasing the readability and comprehension of educational content. In addition, students diagnosed with dyslexia benefited from the text-to-speech functionality, while those with weaker reading comprehension found the simplified text generation models particularly useful.

These outcomes reflect the system’s user-centric design and its adherence to accessibility standards.

Feedback effectiveness

The RL module streamlined content delivery strategies. Real-time adjustments based on feedback from student interactions allowed the system to tune the learning experience and raise engagement levels. Significant findings concerning effectiveness are summarized in Figure 7.

            Figure showing the distribution of effectiveness across three disability categories. The chart is divided into three segments: Sensory Disabilities (yellow) at 40%, Cognitive Disabilities (red) at 35%, and Visual Disabilities (blue) at 25%. The chart illustrates that feedback mechanisms were perceived as most effective for individuals with sensory disabilities, followed by cognitive and visual disabilities.
Figure 7: Feedback effectiveness for disabilities.

            Students with sensory disabilities

Task completion time for students with hearing disabilities was reduced by 40% when they learned through animated video instruction with subtitles and captions. This modification suited their needs, as the subject matter was delivered both visually and textually, supporting better understanding. Students appreciated the integration of visual and written content, which kept them interested and reduced cognitive fatigue (Hopcan et al., 2022).

            Cognitive fatigue in students with ADHD

Students diagnosed with ADHD showed a 20% reduction in cognitive fatigue when lessons were presented as shorter, segmented lectures matched to their attention spans. Breaking the subject matter into smaller units prevented them from feeling overwhelmed and let each student progress at their own pace. The system’s ability to adaptively change pacing in real time according to students’ level of focus was judged essential in lowering cognitive load, as reflected in the observed gains in task completion rates and enhanced retention.

The overall effect of the framework was assessed in terms of its ability to enhance learning outcomes, create inclusion, and meet the diverse needs of students with disabilities. Results from the pilot implementation indicate that the system significantly enhances student learning through personalized content delivery and real-time optimization of learning strategies. Its accessibility features and suitability for diverse learning styles contributed to higher student satisfaction and involvement, fostering a more inclusive classroom.

            DISCUSSION

The results underline the strong performance of generative AI in creating personalized learning experiences. The framework departs from standardized instruction by continuously modifying content to match each learner’s needs. Generative AI can produce multiple modes of material, for example interactive text, personalized visuals, and audio descriptions, which can be incorporated according to the learner’s interests and cognitive style. This strategy was particularly helpful for children with dyslexia, as it combined simplified text with visual aids such as color-coded terminology and sentence patterns. Moreover, AI-based modifications are made in real time based on engagement and performance, helping to keep each student’s learning process close to optimal. The capacity to observe and respond to student learning across different representations means that learners are consistently challenged but not overwhelmed, establishing an atmosphere that encourages self-efficacy and confidence (Heffernan and Heffernan, 2014).

One of the greatest barriers in educating students with impairments is retaining long-term interest. Traditional teaching methods are often ineffective at keeping children with attention deficits, sensory impairments, or cognitive barriers focused. The adaptive learning framework addresses this problem by using real-time interaction data, such as time-on-task, clickstream analysis, and task completion patterns, to dynamically modify the learning experience. Adjustments were based on the student’s current level of focus and historical performance, so that the material remained neither too easy nor too difficult. Students identified as having ADHD benefited from shorter sessions broken into smaller segments aligned with their attention spans, which minimized cognitive build-up. The system could recognize moments of disengagement and respond with motivational prompts, supplementary scaffolding, or alternative presentations of the information to re-establish engagement. Adaptive feedback ensured that moments when learning was not progressing normally received timely correction, leading to more successful task completion and mastery. These findings underscore the role of adaptive feedback in responding to engagement issues and maintaining student motivation (Gligorea et al., 2023).

The detailed accessibility features within the architecture helped alleviate challenges faced by students with sensory disabilities. For visually impaired learners, real-time audio synthesis with voice navigation ensured that educational materials were accessible in a nonvisual way. Combined with the system’s ability to adjust voice output speed and provide haptic feedback, this allowed students to work with the content independently. Similarly, high-quality animations and subtitled videos significantly benefited students with hearing impairments, ensuring full access to all educational resources. These features provided a holistic and interactive learning process for all students, regardless of disability. Multiple sensory modalities, such as visual, auditory, and tactile, must be included to accommodate diverse learners. The system offered fair opportunities to acquire knowledge by presenting information in ways tailored to students with visual or auditory impairments, demonstrating that accommodating diversity is essential to addressing accessibility and fairness in education (Hopcan et al., 2022; Abbes, Bennani and Maalel, 2024).

While the framework showed promising results, it also had some critical drawbacks that need further study. The generative AI models sometimes produced content that was either too challenging for specific students or contextually irrelevant to the lesson at hand. In some instances, the AI-generated content required intervention from instructors to ensure that the resources were appropriate and aligned with the learning objectives. This points to the need to refine and strengthen the AI models so that their contextual awareness improves and the generated material is consistently appropriate, relevant, and sufficient. Beyond this, the fact that the system performed well in controlled situations raises questions about scalability to diverse, everyday educational settings. Infrastructure issues, such as restricted access to technology and internet connectivity in low-resource countries, could prevent wider deployment. These constraints must be addressed to ensure that the system can be deployed at scale and serve a broader range of pupils. Future work should focus on improving the resilience of the system and its adaptability to diverse educational environments and resource constraints (Rajagopal et al., 2023).

A comparative assessment against standard teaching techniques further revealed the framework’s capability to meet individual learning demands more efficiently. Conventional approaches rely heavily on uniform instructional procedures that do not suit the individual needs of students with disabilities, whereas the adaptive learning system uses real-time data to adjust content delivery to each student’s learning profile. However, the comparative analysis also underlined the importance of teacher engagement in securing the best learning outcomes. Although the strategy dramatically reduced the need for direct teacher support, active teacher engagement remained crucial for monitoring students’ development, offering tailored instruction, and providing emotional support, particularly for children with significant cognitive impairments. This conclusion argues for a hybrid strategy that combines the strengths of AI-based individualized learning with human teachers’ knowledge and empathy. Educators play an essential role in helping students make sense of AI-delivered content, giving additional coaching when required, and establishing a supportive classroom atmosphere that boosts the success of the system (Omughelli, Gordon and Al Jaber, 2024).

For visually impaired students, AI-powered text-to-speech conversion significantly enhanced engagement, mirroring the findings of Ruiz-Rojas et al. (2023). Students showed higher levels of engagement and better recall when using AI-created audio content blended with tactile learning aids. Likewise, the GPT-powered text simplification model used in ALGA-Ed enhanced comprehension for dyslexic students, mirroring the findings of Evmenova, Borup and Shin (2024). Through adaptive adjustment of reading difficulty and multimodal explanations, the technology reduced cognitive overload and improved students’ capacity to absorb challenging topics. Results from the trial showed that students struggling with challenging material improved when AI-crafted content was matched to their comprehension level. The RL mechanism within ALGA-Ed particularly helped students with ADHD through adaptive adjustment of session length and interactive elements based on engagement levels. As suggested by Hopcan et al. (2022), real-time adjustments helped maintain focus and kept learners from disengaging during lessons. The model effectively detected symptoms of mental fatigue and responded with brief, engaging lectures that retained learner attention. For deaf students, ALGA-Ed’s real-time captioning maximized accessibility, similar to the results of Rajagopal et al. (2023). Students who used AI-created subtitles and sign language interpretations reported increased subject comprehension and engagement.

The results show that the proposed adaptive learning framework significantly improves learning performance, student participation, and accessibility for students with disabilities. The approach responds to the unique needs of these students using generative AI, adaptive feedback, and multimodal content delivery. Further fine-tuning and generalization of the system will be needed for it to realize its full potential. Future research should focus on strengthening the contextual understanding of the AI, lowering the need for manual intervention, and exploring the incorporation of additional assistive technologies, such as haptic feedback systems. Longitudinal studies will also be necessary to evaluate the long-term performance of the system and validate its applicability across a wide range of educational contexts. Overall, the results support the transformative potential of AI-based solutions to reshape the learning experiences of students with disabilities, pointing the way toward greater equity and inclusion in learning for all (Evmenova, Borup and Shin, 2024; Heffernan and Heffernan, 2014).

            CONCLUSION AND FUTURE WORKS

This generative AI-powered adaptive learning approach breaks down barriers to inclusive, individualized education for students with disabilities. The system integrates sophisticated technologies, including GPT for text simplification, DALL·E for graphic generation, and Whisper for audio synthesis, to deliver customized instructional support. Pilot deployments demonstrated significantly improved learning outcomes, engagement, and accessibility, validating the approach. The framework’s flexibility allows real-time adaptation, keeping content relevant and accessible through multimodal presentation that caters to the wide range of preferences and challenges among students with disabilities.

Still, the model requires sustained calibration to avoid generating distracting or unnecessarily complex content, and further refinement of the generative AI models would bring the generated material into closer alignment with learner needs. Moreover, although the system reduces dependence on direct teacher support, it works best when complemented by active teacher involvement in monitoring and guiding learners. Such efforts align with the broader goal of equalizing educational services and strengthening learner agency.

There is considerable room for improvement and growth. First, contextual training alongside improved natural language processing can refine the generative AI models so that the generated content is even more relevant and accessible for subtle learning difficulties. Second, scalability remains a challenge for future research, especially when deploying the system in low-resource settings. Optimization for cost-effective devices, offline functionality, and lightweight AI models will be critical to facilitate wider adoption. Additionally, a cloud-based approach may further support the scalability of the system through better maintenance practices.

Future versions of this framework can extend it further by engaging other sensory modalities, for example, using haptic feedback for children with sensory processing impairments, or by using virtual reality and augmented reality for richer, more immersive learning experiences. These extensions would likely broaden accessibility to additional types of disabilities and forms of engagement with learning. Longitudinal field trials with larger and more diverse student populations will be needed to establish the framework’s long-term effects on learning outcomes. Engagement with teachers, carers, and policymakers will align the system with local educational standards and cultural contexts, making it relevant and useful for all.

Integration of emotion detection and adaptive behavioral analysis as advanced assistive technologies can further personalize the learning experience, providing deeper insight into what students need. Through ongoing research and collaboration, this adaptive learning framework can become a significant tool for ensuring no child is left behind, ushering in a more inclusive and equitable education.

            CONFLICTS OF INTEREST

            The authors declare that they have no conflicts of interest.

            DATA AVAILABILITY STATEMENT

            The data used to support the findings of this study are included within the article.

            DATASET LINK

            EdNet Dataset: https://github.com/riiid/ednet.

            ASSISTments Dataset: https://www.assistments.org/.

CODE AVAILABILITY

            The source code for this research will be publicly accessible at https://github.com/aasimwadood/ALGA-Ed. It includes detailed documentation and scripts for reproducing all the experiments discussed in this paper.

            REFERENCES

1. Abbes F, Bennani S, Maalel A. Generative AI and gamification for personalized learning: literature review and future challenges. SN Comput Sci. 2024. Vol. 5:1154. [Cross Ref]

            2. Auvinen T, Hakulinen L, Malmi L. Increasing students’ awareness of their behavior in online learning environments with visualizations and achievement badges. IEEE Trans Learn Technol. 2015. Vol. 8(3):261–273. [Cross Ref]

3. Baker R, Siemens G. Educational data mining and learning analytics. In: Sawyer RK (ed.). The Cambridge handbook of the learning sciences. Cambridge handbooks in psychology. Cambridge, England: Cambridge University Press. 2014. p. 253–272. [Cross Ref]

4. Brusilovsky P, Millán E. User models for adaptive hypermedia and adaptive educational systems. In: Brusilovsky P, Kobsa A, Nejdl W (eds). The adaptive web. Lecture notes in computer science. Vol. 4321. Berlin, Heidelberg: Springer. 2007. p. 3–53. [Cross Ref]

            5. Carvalho L, Martinez-Maldonado R, Tsai Y-S, Markauskaite L, De Laat M. How can we design for learning in an AI world? Comput Educ Artif Intell. 2022. Vol. 3:100053. [Cross Ref]

6. Choi Y, Lee Y, Shin D, et al. EdNet: a large-scale hierarchical dataset in education. In: Bittencourt I, Cukurova M, Muldner K, Luckin R, Millán E (eds). Artificial intelligence in education. AIED 2020. Lecture notes in computer science. Vol. 12164. Cham: Springer. 2020. [Cross Ref]

            7. Evmenova AS, Borup J, Shin JK. Harnessing the power of generative AI to support ALL learners. TechTrends. 2024. Vol. 68:820–831. [Cross Ref]

            8. Ferreira TM. A new educational reality: active methodologies empowered by generative AI. Preprints. 2024. 2024081933. [Cross Ref]

            9. Gligorea I, Cioca M, Oancea R, Gorski A-T, Gorski H, Tudorache P. Adaptive learning using artificial intelligence in e-learning: a literature review. Educ Sci. 2023. Vol. 13(12):1216. [Cross Ref]

            10. Halkiopoulos C, Gkintoni E. Leveraging AI in E-learning: personalized learning and adaptive assessment through cognitive neuropsychology – a systematic analysis. Electronics. 2024. Vol. 13(18):3762. [Cross Ref]

            11. Heffernan NT, Heffernan C. The ASSISTments ecosystem: building a platform that brings scientists and teachers together for minimally invasive research on human learning and teaching. Int J Artif Intell Educ. 2014. Vol. 24(4):322–444. [Cross Ref]

            12. Hopcan S, Polat E, Ozturk ME, Ozturk L. Artificial intelligence in special education: a systematic review. Interact Learn Environ. 2022. Vol. 31(10):7335–7353. [Cross Ref]

            13. Kenneth H, Marino M, Vasquez E, Taub M, Hunt J, Tazi Y. Navigating AI-powered personalized learning in special education: a guide for preservice teacher faculty. J Spec Educ Prep. 2024. Vol. 4(2):90–95. [Cross Ref]

            14. Luckin R. Towards artificial intelligence-based assessment systems. Nat Hum Behav. 2017. Vol. 1(3):0028. [Cross Ref]

            15. Luckin R, Holmes W, Griffiths M, Forcier LB. Intelligence unleashed: an argument for AI in education. London: Pearson Education. 2016

            16. Mittal U, Sai S, Chamola V, Sangwan D. A comprehensive review on generative AI for education. IEEE Access. 2024. Vol. 12:142733–142759. [Cross Ref]

            17. Moore SL. David H. Rose, Anne Meyer, Teaching every student in the digital age: universal design for learning. Educ Technol Res Dev. 2007. Vol. 55:521–525. [Cross Ref]

            18. Omughelli D, Gordon N, Al Jaber T. Fairness, bias, and ethics in AI: exploring the factors affecting student performance. J Intell Commun. 2024. Vol. 3(2):100–110. [Cross Ref]

            19. Ozyurt O, Ozyurt H, Mishra D. Uncovering the educational data mining landscape and future perspective: a comprehensive analysis. IEEE Access. 2023. Vol. 11:120192–120208. [Cross Ref]

20. Rajagopal A, Nirmala V, Jebadurai IJ, Vedamanickam AM, Prajakta UK. Design of generative multimodal AI agents to enable persons with learning disability. In: Companion Publication of the 25th International Conference on Multimodal Interaction (ICMI ’23 Companion); New York, NY, USA: Association for Computing Machinery. 2023. p. 259–271. [Cross Ref]

21. Rodríguez Torres E, Comas Rodríguez R, Tovar Briñez E. Use of AI to improve the teaching-learning process in children with special abilities. LatIA. 2023. Vol. 1:21. [Cross Ref]

            22. Ruiz-Rojas LI, Acosta-Vargas P, De-Moreta-Llovet J, Gonzalez-Rodriguez M. Empowering education with generative artificial intelligence tools: approach with an instructional design matrix. Sustainability. 2023. Vol. 15(15):11524. [Cross Ref]

            23. Willis V. The role of artificial intelligence (AI) in personalizing online learning. J Online Distance Learn. 2024. Vol. 3:1–13. [Cross Ref]

            Author and article information

            Journal
            jdr
            Journal of Disability Research
            King Salman Centre for Disability Research (Riyadh, Saudi Arabia )
            1658-9912
            03 June 2025
            : 4
            : 3
            : e20250012
            Affiliations
            [1 ] Department of Health Informatics, College of Health Science, Saudi Electronic University, Riyadh 11673, Saudi Arabia ( https://ror.org/05ndh7v49)
            [2 ] King Salman Center for Disability Research, Riyadh 11614, Saudi Arabia ( https://ror.org/01ht2b307)
            [3 ] Institute of Computing, Kohat University of Science and Technology, Kohat 26000, Pakistan ( https://ror.org/057d2v504)
            [4 ] Department of Computer Sciences and Information Technology, Albaha University, Al Aqiq 65779-7738, Saudi Arabia;
            [5 ] Applied College in Abqaiq, King Faisal University, Al-Ahsa 31982, Saudi Arabia ( https://ror.org/00dn43547)
            Author notes
Correspondence to: Theyazn H. H. Aldhyani*, e-mail: taldhyani@kfu.edu.sa; Nesren S. Farhah, e-mail: n.farhah@seu.edu.sa; Asim Wadood, e-mail: aasimwadood@gmail.com; Ahmed Abdullah Alqarni, e-mail: aaalqarni@bu.edu.sa; M. Irfan Uddin, e-mail: irfanuddin@kust.edu.pk
            Author information
            https://orcid.org/0000-0002-2103-1282
            https://orcid.org/0000-0003-1115-3988
            https://orcid.org/0000-0002-6628-9797
            https://orcid.org/0000-0002-1355-3881
            https://orcid.org/0000-0003-1822-1357
            Article
            10.57197/JDR-2025-0012
            8461193c-e47c-4017-a17b-955f163f5ae1
            2025 The Author(s).

            This is an open access article distributed under the terms of the Creative Commons Attribution License (CC BY) 4.0, which permits unrestricted use, distribution and reproduction in any medium, provided the original author and source are credited.

            History
            : 12 January 2025
            : 13 March 2025
            : 19 March 2025
            Page count
            Figures: 7, Tables: 2, References: 23, Pages: 17
            Funding
            Funded by: King Salman Center for Disability Research
            Award ID: KSRG-2024-288
            The authors extend their appreciation to the King Salman Center for Disability Research for funding this work through Research Group no KSRG-2024-288.

Disability, Students, Adaptive learning, Artificial intelligence
