Can Educational Software Support Learning in the Global North and South Equally? A Comparison Case Study

Nowadays, many educational software applications are designed with input only from specific user groups in developed countries in the Global North, without considering the needs of users living in the Global South, potentially compromising their effectiveness. To contribute to the understanding of how educational software influences learning depending on the end-users' country of origin, this paper reports results on learning performance and user experience (UX) from 176 students in Ecuador, in the Global South, and compares them with results from 196 students in the United Kingdom, in the Global North. In the Ecuadorian sample, there were some significant differences in the students' perception of software usability and their self-reported user engagement but, unlike in the UK sample, no significant differences were found in their learning gains. Comparisons between the schools in the two countries showed some differences in the students' attitudes and motivations towards learning science prior to the study. We discuss how these could have influenced students' learning performance and UX.


INTRODUCTION
The exponential growth of computing devices and widespread access to the Internet have changed, over the past few decades, the ways in which people communicate, work, and learn. Indeed, over the last couple of years the Covid-19 pandemic drastically changed typical habits of software use, and we have witnessed a visibly larger-scale acceptance and adoption of online educational software, ascertaining its utility and necessity in the modern world. Concomitantly, the Human-Computer Interaction (HCI) community has been exploring how modern technologies shape the development of future education, in a quest to understand how software applications can influence people's routines and behaviours, and vice versa [1]. This paper focuses on how educational methods have adapted to the generalised use of the Internet, and how educational software influences the learning of students in the Global North (high-income countries) and the Global South (low-income countries).
Technologically speaking, ongoing debates in the HCI community have raised the issue that, far from being universal, software design practices are still inherently based on westernised models that are applied in the Global South without accounting for local knowledge systems that could impact users' interactions with the software [2]. Learning environments that are aware of social and political dimensions are therefore critical to developing better technology that supports universal learning and knowledge sharing [3]. In an attempt to contribute to the understanding of how educational software influences learning in the context of the digital divide, this paper mainly collects responses from students in two secondary schools in a Global South country (Ecuador), but it also contrasts these results with those collected (see [4]) from two secondary schools participating in a similar study in a Global North country (the United Kingdom).
In addition, this study investigates how the inclusion of a popular educational theory, namely Learning by Questioning, and an engagement strategy, namely Gamification, could improve the performance of young students and enhance their user experience. Hence, the contribution of this research is twofold: (i) to compare individual user interface (UI) design elements in an educational application, with the objective of analysing which specific gamification elements are more beneficial to students; and (ii) to contrast the responses of students in two different countries (one in the Global North and one in the Global South), to analyse whether our gamified questioning-based approach positively influences students regardless of their country of origin. This work is framed within the Go-Lab platform, which provides online tools (such as lesson content and learning apps) to teach STEM subjects to students in different age groups and in several languages. The Go-Lab platform provided the context of the activity, where all students learnt about the same lesson topic, created questions about it, and earned a reward in the form of a digital gamification element for their participation (see Section 3.6). Learning apps were designed and developed for our studies with input from teachers and students in Ecuador and the United Kingdom. For the purpose of rewarding students (see Section 3.5.3), gamification elements were embedded in Graasp to analyse, with qualitative and quantitative data collection methods (Section 3.4), how these influence UX in terms of perceptions of software usability, user engagement, and motivation, as well as student performance.

CONTEXTUAL BACKGROUND
This section starts by addressing the topic of educational technologies, followed by a discussion of how gamified systems and the most salient gamification elements in the related literature have been implemented and studied. It also explores how students could benefit from a popular educational theory, which is the pedagogical backbone of this research. Finally, it discusses the Global North-South divide.

Internet Access and Digital Educational Technology
Nowadays, most people connect and interact with each other via the Internet, making it an indispensable collaborative tool [5]. In 2006, 57% of households in the United Kingdom had access to the Internet; this number rose to an astonishing 96% in 2020 [6]. Although this broader availability of online solutions is a global trend, countries in the Global South have shown slower growth in the proportion of households with Internet access. In the particular case of Ecuador, 22.5% of households had Internet access in 2012, increasing to only 45.5% in 2019 [7]. This could be problematic for low-income countries and for students who were forced to stay at home during the pandemic. Nonetheless, as access to reliable educational materials is no longer restricted to traditional lectures, students and teachers have been adapting their routines to take advantage of modern technology.
Likewise, with the generalised use of the Internet, schools and universities have seen an increase in the number of technological tools available to their students, in and outside the classroom. However, the availability of educational resources online does not guarantee their favoured use, especially when they are employed as extracurricular activities for young students to do on their own, as many students would still prefer to spend their time on the Internet on more recreational and ludic ventures, such as playing games [8]. Consequently, a number of approaches and techniques have increasingly been utilised in educational settings with the goal of enhancing the learning process and increasing student engagement, supported by the interactivity of new educational technologies [9]. Gamification is one of those methods.

Gamifying educational software
Gamification is generally defined as "the use of game design elements in non-game contexts" [10]. The goal of gamification in education is to enhance UX and boost student involvement through appealing interface design elements and rewarding interactions with the software application. Points, levels, progress bars, quests, badges, virtual goods, avatars, and leaderboards are some of the most popular gamification mechanics or elements [11]. In this study, points, badges, and leaderboards (also known as the PBL triad [12]) were selected, due to their popularity in a range of domains, to be embedded in an educational application so as to evaluate their individual impact on UX and learning performance. Moreover, these three gamification elements were the ones most frequently suggested by (Ecuadorian and British) teachers and students in a pre-study interview, where participants were asked about their preferences regarding ludic and educational software.
Although the actual impact of gamification in both the short and the long term is still debatable, research on the PBL elements has mostly found encouraging results. For example, research on leaderboards suggests that they can be a motivator of software use [13], whereas other studies have shown similar positive results for the use of badges [14] and points [15]. Therefore, in our research we study these three gamification elements separately, to assess their individual impact on the UX and performance of students with different backgrounds. Gamification was chosen as an engagement technique in this study because of the challenging nature of the learning activity: students were asked to synthesise their knowledge about a topic and create relevant questions (cf. Section 2.3), which were assessed so that their quality could be rewarded.

Learning by questioning
Questioning can help develop critical thinking in many ways [16]: for example, by allowing students to investigate a topic based on their own cross-examinations, enabling them to exchange and contrast ideas with classmates, or allowing them to express their conceptual thoughts through verbal or written questions. In addition, the process empowers students to be more consciously involved in learning [17]. However, creating quality questions is a cognitively demanding task, as it requires in-depth information processing and a higher level of thinking [16,18]. Hence, gamification could support students through the task of questioning, using enjoyable interactions to sustain the mental effort the task requires.
Interest in developing and studying questioning-based activities online appears to be growing, as technological tools become more widely available for teachers and students. Robust commercial software applications (e.g. Blackboard and Moodle) are similarly adapting their software to integrate questioning activities and even digital rewards. Based on this growing interest, we investigate in this study the use of a gamified questioning-based online activity to support online learning in young students.

The digital divide
Worldwide, many teachers who have access to technology tend to incorporate it into their teaching practices, thereby enabling their students to have better access to learning resources. Among others, the benefits of educational technologies include promoting creativity, personal development, and satisfaction. However, large disparities exist in the access to and usage of educational technologies between countries in the Global North and the Global South [20] (cf. Section 2.1). Bridging this gap is therefore imperative to create fair information societies whose members can benefit from software applications regardless of socio-economic disparities [21].
Moreover, understanding how socio-economic variables contribute to usage patterns could greatly improve the design of effective software systems that serve their original purpose, taking into consideration the particular needs of their users [22]. For example, young people are now more connected than ever, which enables them to navigate most devices and platforms with ease, whereas this is not the case for older user groups [23]. Digital skills and Internet use habits could be, in this sense, closely related to the economic, social, and cultural status of users. Disadvantaged students (such as many from countries in the Global South) could potentially reach higher academic levels if they had better access to Internet infrastructure and services. Consequently, governments, non-profit organisations, industry, and academia have made tremendous efforts in recent years to bridge the digital divide. Likewise, to provide fairer and more useful educational technologies, software designers have been turning to various inclusive practices, such as conducting Participatory Design activities with key stakeholders from different socio-economic backgrounds (cf. Section 2.2), or designing software that can easily be adapted to different languages (cf. Section 4.5), benefiting students who otherwise would not have had access to such technologies.

METHODOLOGY
This study focuses on whether the use of specific gamification elements can have a significant impact on the user engagement, motivation, and performance of lower secondary school students when learning about a physics topic using a questioning-based technique, and whether this impact is mediated by the participants' perceived usability of the software or their predisposition towards learning sciences. Our selection of this specific age group (12-16 years old) is motivated by the scarcity of studies conducted with target groups other than those in Higher or Distance Education [11,19]. The four versions of the educational software under evaluation are designated with acronyms for ease of reference: Non-Gamified -NG; Gamified with Points -GP; Gamified with Badges -GB; and Gamified with Leaderboards -GL.

Participants
This study was conducted with two secondary schools located in the same area of the highlands of Ecuador. School Y was a mixed-gender secular school (with a neutral approach towards promoting religious content in its curricula). School Z was a single-sex, religiously affiliated school (under the influence of the church, with religion as part of the school's curricula). Data from these two schools were analysed separately throughout this paper due to the significant differences found between the scores of School Y and School Z on the ATS, the SMQ, and the initial knowledge test (pre-test), as shown in the sections below.
Schools Y and Z in Ecuador had access to fair scientific and technological facilities, compared to their counterparts Schools A and B in the United Kingdom which had very good technological facilities. School A was a public and secular secondary school, whereas School B was a private boarding school in the same geographical area [4].

Goals of the study
The core objectives of this study are: (i) to contrast student perceptions of the gamification elements points, badges, and leaderboards implemented in Go-Lab; (ii) to analyse if and how learning performance is affected by the integration of the gamification elements in the questioning-based activity prepared for this study; and (iii) to compare the user experience and the learning performance of secondary school students in two different countries.

Hypotheses
The null hypotheses of this study are closely related to the main goals presented in the section above.
H1a: There are no significant differences in the perceptions of software usability among students interacting with the NG, GP, GB, and GL versions of the software.
H1b: There are no significant differences in the self-reported perceptions of user engagement among the NG, GP, GB, and GL groups.
H1c: There are no significant differences in motivation among the NG, GP, GB, and GL groups.
H2a: There are no significant differences in the learning gains, operationalised as the difference between the pre- and post-knowledge tests, among groups.
H2b: There are no significant relationships between the students' pre-existing attitudes and motivations towards learning sciences and their perception of software usability, their engagement with the activity, their motivation, or their learning gains.
H3a: There are no significant differences in the students' pre-existing attitudes and motivations towards learning sciences depending on the students' nationality.
H3b: There are no significant differences in the students' learning performance according to their country of origin.

Methods and Instruments
Several methods and instruments were used in this study to (i) test our null hypotheses and (ii) collect data from the interactions of secondary school students with the educational software. These are summarised in the sections below.

Attitudes Towards Science Measures (ATS)
The ATS [24] measures the attitudinal predisposition of young students towards learning science. It comprises 34 items across six factors (Learning Science in School, Practical Work in Science, Science Outside of School, Importance of Science, Self-Concept in Science, and Future Participation in Science). Results are presented in this study on a 0-1 range. Cronbach's alpha in this sample is 0.872, which indicates a high level of internal consistency (the original reported α > 0.700 [24]).
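The internal-consistency values reported for the ATS and the other scales below follow the standard Cronbach's alpha formula: α = k/(k-1) · (1 - Σ item variances / variance of totals). As a minimal illustrative sketch (not the authors' analysis code, which is not published), alpha can be computed from raw item scores as:

```python
def cronbach_alpha(items):
    # items: one inner list per questionnaire item, each holding that
    # item's scores across all respondents (equal lengths assumed).
    k = len(items)

    def sample_var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_variances = sum(sample_var(col) for col in items)
    totals = [sum(resp) for resp in zip(*items)]  # each respondent's total score
    return (k / (k - 1)) * (1 - item_variances / sample_var(totals))
```

By the conventional rule of thumb also used in the original scale validations, values above 0.700 indicate acceptable internal consistency.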

Science Motivation Questionnaire II (SMQ)
The SMQ [25] measures students' motivation to learn science in college and secondary school courses. It contains 25 items across four main factors (Intrinsic Motivation, Self-Determination, Self-Efficacy, and Career Motivation). Results are presented in this study on a 0-1 range. Cronbach's alpha in this sample is 0.776, which indicates a good level of internal consistency (the original ranged from 0.710 to 0.900 [25]).

Demographic Questionnaire
This is a homegrown questionnaire with questions relevant to the participants' background (gender, age), their self-reported measures of IT competence, their perceptions about the educational software they had used in the past, and their use habits and preferences.

Knowledge Tests (Pre- and Post-tests)
These are homegrown questionnaires comprising 12 questions each. The pre- and post-tests had identical questions, all multiple choice with a single correct answer.

Situational Motivation Scale (SIMS)
The SIMS [26] measures factors of situational intrinsic and extrinsic motivations, rated on a seven-point Likert scale. It comprises 16 items and four factors: Intrinsic Motivation, Identified Regulation, External Regulation, and Amotivation. The Cronbach's alpha in this sample is good, at 0.730 (the reported Cronbach's alpha was between 0.65 and 0.92 [26]).

System Usability Scale (SUS)
The SUS [27] measures subjective assessments of software usability. It is a ten-item questionnaire using a five-point Likert scale. Results are presented on a 0-100 range. Cronbach's alpha in this sample is good, at 0.760 (the Cronbach's alpha reported in past research was between 0.71 and 0.90).
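The 0-100 SUS scores reported below follow Brooke's standard scoring rule: odd-numbered (positively worded) items contribute (response - 1), even-numbered (negatively worded) items contribute (5 - response), and the 0-40 raw sum is multiplied by 2.5. A minimal sketch of that conversion:

```python
def sus_score(responses):
    # responses: ten 1-5 Likert answers, in item order 1..10.
    # Even indices (0, 2, ...) hold the odd-numbered, positively worded items.
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5  # rescale the 0-40 raw sum to 0-100
```

For example, a respondent answering 3 ("neutral") to every item receives the midpoint score of 50.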

User Engagement Scale (UES-SF)
The UES-SF questionnaire [28] measures subjective assessments of user engagement. It is a twelve-item questionnaire with four factors (Focused Attention, Perceived Usability, Aesthetic Appeal, and Reward). Results are presented on a 1-5 scale. Cronbach's alpha in this sample is 0.710 (the original authors reported values between 0.700 and 0.890 [28]).

Post-Intervention Survey
This survey was designed as an online poll (created with Kahoot, Quizizz, or similar tools) in which students selected, from multiple options, the answers that best matched their experience. Participants were asked for feedback on their overall experience using the software application.

Group Discussion
Following the post-intervention survey, students were encouraged to exchange opinions through a group discussion moderated by a researcher. Group discussions were audio-recorded with the participants' consent during the experimental session of this study.

Other materials
Three other sources were used in this study: the online lesson that students received before completing any standardized questionnaires, the set of three questions students created about the topic of the online lesson, and the rewards students received for their effort in creating said questions.

The online lesson
Due to the schools' preferences and the students' age range, the subject of electric circuits was chosen by teachers from a predefined list of scientific topics. An online lesson aligned with the relevant curricula of Ecuador and England was then selected from Go-Lab for the purpose of our studies. The online lesson (Figure 1) covered related concepts such as current, voltage, and Ohm's law. In addition, the online lesson was adapted to enable learning by questioning. Phases of the activity were progressively made available to students according to the task they were asked to complete (e.g. read the online lesson, create questions, check rewards).

Student questions
After finishing the online lesson, students were asked to formulate three questions about the topic using the software. For this purpose, a new tab was made available in the environment of the activity (see "Preguntas" or "Questions" in the left-hand side menu in Figure 1). Student questions were assessed manually by a researcher in terms of novelty, clarity, and relevance on a 0-5 scale. Novelty referred to how interesting and original the proposed question was. Clarity referred to the understandability and the clear and adequate structuring of the question. Relevance related to the closeness of the question to the topic of the learning activity.
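The paper does not state how the three rubric criteria were combined into a single question score; a simple sketch assuming an unweighted mean of the three 0-5 ratings (our assumption, not a stated procedure) would be:

```python
def question_score(novelty, clarity, relevance):
    # Each criterion is rated 0-5 by a researcher; combining them with an
    # unweighted mean is our illustrative assumption, not the paper's method.
    for score in (novelty, clarity, relevance):
        assert 0 <= score <= 5
    return (novelty + clarity + relevance) / 3
```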

The rewards
Three gamification elements, points (Figure 2), badges (Figure 3), and leaderboards (Figure 4), were used to reward, in their respective groups, the effort that individual students put into creating a set of questions about the topic of electric circuits. A non-gamified interface was shown to students in the control groups. Afterwards, students were asked to create three more questions for the chance to improve on their work ("Preguntas 2" in Figure 1).

Procedure
This study was designed as a questioning-based learning activity, part of a physics lesson, in which the gamification elements were introduced individually within different groups to evaluate their impact on the students' perception of software usability, their user experience (in terms of motivation and user engagement), and their learning performance. The study consisted of two experimental sessions with each participating class, on school premises where students had access to their own devices. Prior to the first session, participants gave appropriate consent, and students filled in the ATS and SMQ under the supervision of their schools. Classes were randomly assigned to one of the four versions of the educational software (NG, GP, GB, GL).
During the first experimental session, students filled in a demographic questionnaire, revised the same online physics lesson on the topic of Electric Circuits, and completed a pre-test. Students were then asked to create a set of three questions about the topic, and they completed a motivational test. All students then had a break of between 1 and 72 hours.
At the start of the second session, students were presented with the results for their three questions in the form of one of the three gamification elements (points, badges, leaderboards). The non-gamified groups saw a neutral interface of the learning activity, with no visible gamification element. Students were then given the opportunity to create a different set of three questions, and they filled in a post-test. Afterwards, participants gave feedback about their perception of software usability (SUS), their user engagement (UES-SF), and, for the second time, their motivation (SIMS). To wrap up the session, students answered an online survey and participated in a moderated group discussion.

RESULTS
Nine methods and instruments were applied to collect data from student participants aged 12-16 from two schools in Ecuador. The main results from their responses and interactions are summarised in the sections below.

Demographic data
A total of 176 students participated in this study: 34 were male, 131 were female, and 11 preferred not to say. The average student age was 15.31 (SD=1.35) in School Y and 12.78 (SD=0.63) in School Z. In School Y, 61.8% of students (N=42) reported spending 3-7 hours per day on an electronic device (0-3 hours: 17.6%; more than 7 hours: 20.6%). Similarly, the largest share of participants in School Z (N=42, 38.9%) reported spending 3-7 hours per day on an electronic device (0-3 hours: 33.3%; more than 7 hours: 26.6%). In addition, results of the Mann-Whitney test showed significant differences between the Ecuadorian schools studied in this paper and the British data previously collected [4], both for the ATS (U=14579, Z=-2.884, p=0.004) and the SMQ (U=15310, Z=-2.044, p=0.041), rejecting the null hypothesis H3a. Significant differences were found between the self-reported motivations (SMQ) and attitudes (ATS) of students prior to the experimental sessions, depending on their country of origin (Figures 5 & 6).
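The Mann-Whitney comparison above is a rank-based test for two independent samples. As a rough illustrative sketch (the actual analyses would use a statistics package; this is not the authors' code), the U statistic can be computed directly from its pairwise definition:

```python
def mann_whitney_u(x, y):
    # U_x counts cross-sample pairs where an x value exceeds a y value
    # (ties count 0.5); the conventionally reported statistic is the
    # smaller of U_x and U_y. Significance would then be looked up from
    # the U distribution or its normal approximation.
    u_x = sum(1.0 if a > b else 0.5 if a == b else 0.0
              for a in x for b in y)
    u_y = len(x) * len(y) - u_x
    return min(u_x, u_y)
```

For instance, two fully separated samples yield U = 0, the strongest possible evidence of a group difference for the given sample sizes.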

Student situational motivation
Students were asked to complete the SIMS (Section 3.4.5) twice in this study, to measure any changes in their motivation during the learning activity. A total of 151 students filled in both questionnaires: 59 in School Y, and 92 in School Z (see Table 1 for details). Results of the Shapiro-Wilk test showed that data were normally distributed in all cases.
Overall, students scored slightly higher on the second SIMS, but the difference was not statistically significant. These results support the null hypothesis H1c: no important changes were found in the motivation of the participants during the experiment. Likewise, there were no statistically significant differences when analysing each of the four factors measured by the SIMS questionnaire (Intrinsic Motivation, Identified Regulation, External Regulation, Amotivation). In addition, no correlations were found between the students' pre-existing attitudes and motivations towards learning science and their situational motivation during the activity.

Software usability
Out of the 176 students participating in this study (see Table 2), 157 completed the SUS (Section 3.4.6). Overall, School Y (N=58) had a mean score of 61.29 (SD=15.05) and School Z (N=99) a mean score of 56.11 (SD=14.50). Data were normally distributed in School Y, but not in School Z.

User engagement
In total, 145 students (see Table 3) completed the UES-SF (Section 3.4.7). Overall, School Y (N=57) had a mean score of 3.43 (SD=0.58) and School Z (N=88) a mean score of 3.51 (SD=0.68). Results of Shapiro-Wilk tests showed that data were normally distributed in School Z, but not in School Y. In School Y, the badges group reported significantly higher engagement than the control group, with a large effect size (Cohen's d=1.202, r=0.515). In School Z, results of the one-way ANOVA test showed no significant differences among groups (F(3,84)=0.917, p=0.436). Based on these results, the null hypothesis H1b is partially rejected: some differences were found in the self-reported user engagement of the badges group (as compared to the control group) in School Y.

Learning performance
As seen in Table 4, 163 students answered both knowledge tests of this study (Section 3.4.4). Out of 12 possible points on the pre-test, School Y had a mean score of 5.74 (SD=1.59) and School Z a mean score of 4.86 (SD=1.61). On the post-test, School Y had a mean score of 6.26 (SD=2.40) and School Z a mean score of 5.69 (SD=1.82). Results of Shapiro-Wilk tests showed that data were normally distributed in all cases in School Z, but only in six out of eight cases in School Y. In School Y, a paired-samples t test showed a significant improvement in the learning performance of students (t(64)=-2.030, p=0.047). However, when comparing the two knowledge tests in School Y using a Kruskal-Wallis test, no significant differences were found among groups (H(3)=1.043, p=0.791). Likewise, in School Z a paired-samples t test showed a significant improvement in the learning gains (t(97)=-5.273, p<0.001), but results of the one-way ANOVA test showed no significant differences among groups (F(3,95)=0.152, p=0.928). These results support the null hypothesis H2a: no significant differences were found in the learning gains among groups in this study.
Additionally, linear regressions showed no influence of the students' previous attitudes (ATS) and motivations towards science (SMQ), their user engagement (UES-SF), or their perception of software usability (SUS) on the learning gains, supporting the null hypothesis H2b. Nonetheless, results of the one-way ANOVA (F(1,340)=4.839, p=0.028) showed a significant difference, with a low effect size (Cohen's d=0.241, r=0.119), between the learning gains of the Ecuadorian students studied in this report and those of the British students in a similar published study (see [4]), rejecting the null hypothesis H3b.
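The within-group gains above rely on paired-samples t tests over the post-minus-pre score differences. As a minimal sketch with hypothetical scores (not the study's data; the sign of t simply depends on the direction of the difference), the t statistic and a paired-design Cohen's d can be computed as:

```python
import math

def paired_t_and_d(pre, post):
    # Learning gains are operationalised as post - pre, as with the
    # knowledge tests in this study.
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in diffs) / (n - 1))
    t = mean / (sd / math.sqrt(n))  # paired-samples t statistic, df = n - 1
    d = mean / sd                   # one common Cohen's d convention for paired designs
    return t, d
```

The p value would then be obtained from the t distribution with n - 1 degrees of freedom.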

DISCUSSION
When comparing the outcomes of the Ecuadorian students reported in this study to the previously collected results of the British students (see [4]), significant differences were found between the self-reported motivation (SMQ, Figure 5) and attitudes (ATS, Figure 6) of students prior to the start of the experimental sessions (Section 4.2). For instance, the schools with higher household incomes (School B in the UK and School Z in Ecuador) scored significantly higher on both scales than their counterparts in each respective country. This suggests that, depending on socio-economic factors, students could hold different views about science, which could consequently influence their predisposition to use educational software. However, as we analysed merely two schools in each country, generalisations cannot be drawn from the results presented in this paper.
Concerning UX, we see, for example, that, unlike the British secondary school students studied previously [4], the Ecuadorian students in this study did not show a pattern in their preferences towards the four versions of the software (see Table 2). The trend in the British schools was to score the usability of the non-gamified version of the educational software the lowest, followed by the leaderboards, badges, and points versions, in that order (see Table 5). Nonetheless, in both countries at least one school scored the usability of the software application gamified with points significantly higher than the non-gamified version, suggesting that the design of the gamification element, as well as how it was awarded and displayed, made it more usable to students. As points were awarded to all participants in the group (which was not the case with the badges, for example), another reason could be that students felt more attracted to the GP interface due to the novelty effect [29]. Overall, results from our studies suggest that students with broader experience with educational software could have interacted with any version of the software application at ease, without perceiving a significant difference in usability.
Additionally, we see that user engagement was not affected similarly in both countries. In the United Kingdom (see [4]), students from both schools in the gamified groups were significantly more engaged than students in the non-gamified groups (Table 5). In this study, the Ecuadorian students interacting with the badges in School Y were the only group that showed significantly higher user engagement than the control groups (Table 3). We believe this could be due to the previous exposure that participants from the British schools had to similar reward schemes as part of their learning activities at school, which helped them recognise the attractiveness of receiving a digital reward. Also, we see that overall the Ecuadorian students rated their user engagement higher than the British students, which could be due to, among other factors, the novelty effect [29], as students in the Global South commonly have less access to educational software for learning at school.
Regarding students' situational motivation, both studies showed similar outcomes. A non-significant improvement was found in the motivation of Ecuadorian (Table 1) and British (Table 6) students which, we assume, could be due to the short interaction time students had with the software application, and also to the topic of the online lesson, which was mentioned several times by students during the group discussions in both countries and all schools. However, the Ecuadorian scores on self-reported motivation are slightly higher than those of students in the UK, which could again be due to the novelty effect [29].
Concerning learning performance, the Ecuadorian students in this report showed no significant differences in their learning gains among the (NG, GL, GB, GP) groups (Table 4). On the other hand, results from the study with British students (see [4]) showed significant differences between the gamified groups and the non-gamified groups in both schools (Table 7).
We suspect that one variable that may have contributed to the non-significant results of this study is the young age of the students in School Z (see Section 4.1), who mentioned during the group discussions that they had very limited knowledge of the topic of electric circuits. Moreover, as the Ecuadorian students scored lower in the knowledge tests than the British students (suggesting a gap in their prior knowledge), they also had more room for improvement from the pre-test to the post-test, which could further explain the non-significant differences in this study. Based on these empirical results, we infer that the design of the gamified questioning-based software could be improved for future studies, to benefit students in the Global North and the Global South more uniformly. First, lesson topics should be carefully selected to align closely with global curricula, so that students in the same age range in any country could benefit equally from the educational software. Second, the assessment criteria for determining the rewards to be granted should be kept consistent, to avoid confusing participants. Third, modern interactive technologies should be used to aid the design of the gamification elements, which should appeal visually to students through appropriate colours, positioning, animations, etc. Lastly, it could be beneficial to allow students to share questions and rewards with their peers to boost participation and engagement.
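The "room for improvement" point above is often quantified in education research with Hake's normalized gain, which expresses a student's improvement as a fraction of the headroom available above their pre-test score. The studies discussed here do not report this metric; the helper below is a minimal illustrative sketch of how such differences in prior knowledge could be accounted for, with hypothetical scores.

```python
def normalized_gain(pre, post, max_score):
    """Hake's normalized gain: the achieved improvement as a
    fraction of the available headroom above the pre-test score."""
    return (post - pre) / (max_score - pre)

# Hypothetical example: a student scoring 4/10 before and 7/10
# after the lesson closes half of the available headroom.
g = normalized_gain(4, 7, 10)  # 0.5
```

Comparing normalized gains rather than raw score differences would place students who start from different baselines, such as those in Schools Y and Z, on a more comparable footing.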
Nonetheless, in a post-intervention survey in this study, the vast majority of Ecuadorian students said they believed that technology could enhance their learning experience. More importantly, 68.12% of students said that they would be more motivated to learn physics online if the software offered a gamified style, compared to 13.79% of students who said it would not make any difference to them (the rest were unsure or preferred not to answer).
In addition, students in both countries showed an increase in the quality of their questions, with a significant difference in the gamified groups compared to the control groups (see Table 8). The mean score of all questions created by Ecuadorian students increased from 3.11 (N=169, SD=0.50) in the first round to 3.54 points (N=169, SD=0.59), whereas in the British schools it increased from 3.26 (N=191, SD=0.49) to 3.69 points (N=191, SD=0.54). However, due to the limited duration of the experiment (around 90 minutes in total), it was not possible to measure long-term changes in student behaviours and perceptions of the gamified questioning-based software. Future research could analyse the particular goals and needs of end-users in the Global North and the Global South to influence user enjoyment and long-term behavioural change, using a wider range of gamification elements in a longitudinal study.
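The pre/post comparison of question-quality means reported above is the kind of analysis a paired test captures, since each student contributes both a first-round and a second-round score. As a hedged illustration (the exact statistical procedure used in the studies is not restated here), the snippet below computes a paired t statistic from matched pre- and post-scores using only the standard library; the score lists are hypothetical.

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired t statistic for matched pre/post scores.

    Each student contributes one difference (post - pre); the
    statistic is the mean difference divided by its standard error.
    """
    diffs = [b - a for a, b in zip(pre, post)]
    return mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))

# Hypothetical question-quality scores for four students.
pre = [3.0, 3.2, 2.8, 3.5]
post = [3.6, 3.5, 3.1, 4.0]
t = paired_t(pre, post)
```

A large positive t value here would indicate a consistent improvement across students, which would then be compared against the appropriate t distribution for significance.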
Likewise, focusing on one particular, relevant aspect of implementing gamification in educational software could sharpen the interpretation of the results and strengthen the conclusions of future work. As this research involved several dimensions that were analysed individually and comparatively (software usability, user engagement and motivation, learning performance, quality of questioning, etc.), it was challenging to collect qualitative data on all aspects from the student participants in such limited time. Therefore, future research should extend the time students are exposed to the gamification elements, while collecting enough data without harming students' willingness to participate, so that they can give proper feedback about their experience.

CONCLUSION
In this paper we contrasted samples from the Global North and South, to contribute to the understanding of how socio-economic factors could influence the effectiveness of online educational software applications. However, sociocultural differences were not systematically analysed in this study, to avoid over-complicating the experiment and overburdening participants by collecting more data from them. The small sample size is a limitation: the results of this work cannot be generalised, as it compared only two Ecuadorian schools with two British schools, representing the Global South and Global North, respectively.
Nonetheless, based on the findings of our studies, we agree with researchers [21,22,23] who have suggested that digital skills are related to the economic, social, and cultural status of students, as they depend on the access students have to the Internet and to online educational tools. Hence, governments should provide appropriate infrastructure, developers should adopt inclusive design practices, and learning spaces should adequately fulfil their purposes, regardless of the environment in which they are implemented.