Measuring the Quality of Student Support Services in Open Distance E-Learning

Student support services are essential to the success of open distance e-learning. These services help students become competent in achieving their goals. However, the quality of these services must be measured so that continuous improvements can be made, which in turn sustain positive results. This article takes content validity as its focus in the development of an instrument for measuring student support services in open distance e-learning. By exposing the instrument to 10 experts, items were refined through deletions, additions, and the splitting of double-barrelled items. Experts' ratings were analysed through the content validity index for each item, each dimension, and the overall instrument. The content validity processes undertaken in this article resulted in a 50-item, 5-dimensional instrument. This article shows that the use of content validity procedures is one of the most recommended methods for securing the validity of an instrument in terms of its containing appropriate items and dimensions.


Introduction
The efficiency and effectiveness of open distance e-learning (ODeL) programmes depend on the individualised student support services on offer. Student support services in ODeL help to decrease dropout rates and increase the success of the programmes. Moreover, students can be retained in the system and higher throughput rates can be achieved. It is through student support services that learners are linked to the educational institution (Southard and Mooney 2015, 56). Student support services boost ODeL students' confidence and make them more competent in the world of work. These benefits call for the student support services on offer to be of the right quality (Dzakiria 2005, 106).
The openness of ODeL allows flexibility by letting learners choose what to learn and when to learn it. ODeL uses the technology of the time to maximise the accessibility of education by bringing knowledge to learners' respective locations (Tait 2014, 15). These characteristics call for the right means of assessing quality in the ODeL system (Stella and Gnanam, as quoted in Jung et al. 2011, 64).
An organisation can be competitive when its products or services are of the right quality. This leads to the satisfaction of its customers, who are doctoral students in the case of this study. Quality is an elusive construct, however, and the first step in understanding and improving it is measuring it (Jain, Sinha, and De 2010, 144). Measuring service quality in turn depends on the context in which the goods and services are offered and on the type of industry, such as education or tourism (Teeroovengadum, Kamalanabhan, and Seebaluck 2016, 246).
The concept of quality demands continued improvement of the goods and services on offer, which is best done through customers' evaluation. In higher education, students, as primary customers, should evaluate the services to identify the points that need intervention by the educational institution. Apart from obtaining customers' views, it is important to compare the offerings with those of institutions in a similar business in order to determine the competitiveness of the firm (Maguad and Krone 2012, 27).
The fact that quality is a multi-dimensional construct calls for the development of an instrument that includes the essential dimensions which truly reflect the subject of investigation; in the case of this study, the quality of student support services in ODeL (Sultan and Wong 2010, 262). Quality is also context-bound. This calls for developing an instrument that fits the nature of the business under investigation; in the case of this article, higher education. Moreover, the fact that it is ODeL and not the conventional mode of education is another factor that shapes the context, as does the socio-economic condition of the respondents (Jain, Sinha, and De 2010, 145; Malhotra et al. 2005, 259).
The intention of this study is therefore to develop a suitable instrument that can measure the quality of doctoral student support services with particular reference to the ODeL environment. One of the factors that contributes most to a valid and reliable instrument is ascertaining its content validity. Essentially, content validity is one of the many methods that increase the trustworthiness of an instrument (Newman, Lim, and Pineda 2013, 244). It is defined as "the ability of an instrument to measure the properties of the construct under study" (Zamanzadeh et al. 2015, 165). It is the first step in the validation process of an instrument and helps to secure the appropriate content when developing a measure for a certain construct. The process of content validity assures the operational definitions of the instrument with respect to its items, dimensions and overall content (Shrotryia and Dhanda 2019, 2-3). Generally, content validity presupposes having an adequate number of items in an instrument that are able to fully measure the relevant aspects of the construct (Field 2009, 12).
In this article, content validity is conceptualised from the point of view of nursing science, in which case the items in an instrument are subjected to a small number of persons in the field. This is done to evaluate each item's clarity and relevance, to discard less relevant and less clear items, to identify and correct double-barrelled items, and to include useful items that best measure the construct under study (Magasi et al. 2012, 743). The process of content validity further guarantees the dimensionality (the factor into which each item falls) and the comprehensiveness of an instrument (Rubio et al. 2003, 94).
In the process of securing the content validity of an instrument, authors in the field recommend that items be exposed to experts (Kassam-Adams et al. 2015, 3). The pool of experts can be formed from persons who are researchers and have good knowledge of the field under study. The contribution of this group is evaluating the relevance, clarity and dimensionality of items, and the comprehensiveness of the instrument. Potential respondents of the instrument can be part of the expert pool to identify whether there are omitted items in the instrument and to refine the dimensions (Vogt, King, and King 2004, 233). Persons in the field of testing and measurement can form another pool of experts. This group can make a good contribution in terms of identifying the appropriateness of the structure of the instrument, flagging double-barrelled items, and recommending the statistical tools that should be used after data collection (Grant and Davis 1997, 270; Lampley 2001, 10). The number of persons involved in evaluating an instrument may vary from one study to another, with a recommendation of 5 to 10 persons to secure control over chance agreement among them (Zamanzadeh et al. 2015, 168).
Against this background, this article focuses on the use and importance of content validity in the development and refinement of a context-sensitive instrument that is sound enough to measure doctoral student support service quality in the ODeL environment.

Relevant Empirical Studies on Content Validity
Nguyen, Douglas and Bonner (2019) used a content validity index (CVI) in the process of checking the validity of an English-to-Vietnamese translated instrument that measures the self-management behaviour of "chronic kidney disease" patients in Vietnam. They used 10 experts in the field and a four-point scale on which the experts checked the relevance, clarity, and appropriateness of each item. The comprehensiveness of the instrument was also assessed on a two-point scale: "should be deleted" and "should be retained". A space was provided next to each item for comments. After calculating the item-content validity index (I-CVI) for each item and the scale-content validity index (S-CVI) for the whole instrument, they developed a 32-item, four-dimensional instrument for further evaluation by other validation techniques such as factor analysis.

Zamanzadeh et al. (2015) used 15 experts in the development process of a patient-centred communication instrument that measures the communication between cancer patients and nurses. In the effort of developing a valid instrument, experts were requested to rate the relevance, clarity, and comprehensiveness of an instrument that had 57 items. After using the I-CVI for each item and the S-CVI for the comprehensiveness of the whole instrument, the authors developed a 53-item, seven-dimensional instrument. In the process, experts' comments were used to modify items that needed improvement.
In the process of validating a 22-item, four-dimensional instrument that measures the stress levels of nursing students in China, Guo et al. (2019) used various methods, of which content validity was one. The authors exposed the items to a panel of six experts in the field of nursing (head nurses, nursing professors and clinical practitioners), asking them to rate the relevance of each item on a scale of 1 to 4. Inter-rater agreement (IRA) was employed to calculate the CVI from the six experts' responses, and the resulting I-CVI values were found to lie in the range of 0.83-1.0.
Witte, Labeau, and Keyzer (2011) employed the content validity procedure on the Clinical Learning Environment and Supervision Scale after translating it into Dutch. This instrument is designed to measure nursing students' experiences when they are placed in different clinics. To contextualise the original instrument to the Flemish culture, five items were added by the authors. The translated and modified instrument was then subjected to 12 experts for clarity, relevance and readability. The authors also used other statistical methods (for example, factor analysis) in refining the instrument. The results showed that all the experts agreed that the 32-item, five-dimensional instrument had relevant, clear and readable items.

Shrotryia and Dhanda (2019) employed content validity to refine an instrument which they developed to assess the construct of employee engagement. The authors developed the instrument by doing an exploratory study on the Human Resource heads of 15 companies, which helped them to come up with a 30-item, three-dimensional instrument. For the content validity study, they exposed the instrument to six experts, drawn from academics and practitioners, to evaluate the items and the right alignment of the three dimensions of the instrument. Two items were rated as "not necessary", and hence the instrument proceeded to further statistical procedures with 28 items and three dimensions.

Methods and Procedures
This article focuses on the content validity procedures that were followed in developing an instrument that measures the quality of student support services. Before this procedure, the instrument development process passed through two steps.
Step one was the development of items with possible dimensions from the literature and from students' repeated complaints, which were recorded in the form of reports. During step two, the items were assessed by two raters or judges, who evaluated each item along the dimension into which it should fall. The kappa statistic for inter-rater reliability was run and a kappa value of 0.66 was secured (Aberra 2016, 113). In the content validity process, the items and their dimensions were exposed to 10 experts. The experts were drawn from doctoral students (who were potential respondents), professors at Addis Ababa University in the fields of Marketing and Educational Measurement, and an alumnus, as shown in Table 1. After obtaining the experts' consent to evaluate the items and the full instrument, the abstract of this article was sent to them by email. On the meeting date, a presentation was made on the purpose of this article and on the procedures of the evaluation process. The experts were asked to assess each item for relevance and clarity. They were also asked to evaluate whether each item belongs to the dimension in which it was originally placed and whether the overall instrument is comprehensive enough in terms of measuring the quality of student support services in the ODeL context. The contextual meaning of each dimension was stated for ease of reference by the experts (Aberra 2016, 107).
The instrument, which consisted of four columns with headings for evaluating the relevance, clarity, and dimension of each item, was distributed to the experts. On the right end of the table, a space was provided against each item so that the experts could give comments wherever they found it applicable (Table 2). The measuring scales were adopted from Rubio et al. (2003, 96) and Polit, Beck, and Owen (2007, 460). After the evaluation session, discussions took place about each of the comments that were made by the experts. The responses that were collected from the experts were analysed both quantitatively and qualitatively. The IRA is one of the most common methods used to calculate the CVI. For each item, it counts the number of ratings of 3 and 4 and divides the result by the number of experts (Polit and Beck, as quoted in Guo et al. 2019, 33; Shrotryia and Dhanda 2019, 4). It is employed in this article as it is the preferred method for ease of both calculation and interpretation of the results, showing the consensus or agreement among the experts (Polit, Beck, and Owen 2007, 462). It also gives information at the item level and over the whole instrument (Zamanzadeh et al. 2015, 169). The experts' responses in the last column of the instrument were considered qualitative findings and used in accordance with the experts' suggestions.
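The IRA-based CVI calculation described above is a simple proportion. A minimal sketch in Python (using hypothetical ratings, since the experts' raw scores are not reproduced in the text) illustrates the arithmetic for the I-CVI of each item and the averaging form of the S-CVI:

```python
def i_cvi(ratings):
    # Item-level content validity index: the proportion of experts
    # who rated the item 3 or 4 on the 4-point relevance scale.
    return sum(1 for r in ratings if r >= 3) / len(ratings)

def s_cvi_ave(all_ratings):
    # Scale-level CVI (averaging approach): the mean of the I-CVIs
    # over all items in the instrument.
    return sum(i_cvi(r) for r in all_ratings) / len(all_ratings)

# Hypothetical ratings from 10 experts for three items (1-4 scale)
items = [
    [4, 4, 3, 4, 4, 3, 4, 4, 4, 4],  # all ratings >= 3 -> I-CVI = 1.0
    [4, 3, 4, 2, 4, 4, 3, 4, 4, 4],  # one rating below 3 -> I-CVI = 0.9
    [3, 4, 4, 4, 3, 4, 4, 4, 4, 4],  # all ratings >= 3 -> I-CVI = 1.0
]

print([i_cvi(r) for r in items])   # [1.0, 0.9, 1.0]
print(round(s_cvi_ave(items), 3))  # 0.967
```

The second item shows how a single rating below 3 lowers the I-CVI; with 10 experts, each dissenting rating reduces the index by 0.1.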

Data Analysis, Findings and Discussions
A content validity study generates both quantitative and qualitative data. Accordingly, in this article, the experts rated each item for its relevance, clarity, and representativeness, and evaluated the overall instrument quantitatively. They also gave useful comments on individual items and dimensions. Regarding the qualitative data, the experts' comments resulted in changing the dimension originally called "Corporate Quality" to "Corporate Image", as they pointed out the small number of items in this dimension. They also commented that the dimension originally called "Communication" should not stand on its own, as there is communication between students and academic supervisors, administrative support providers or academic facilitators in one form or another. It was therefore agreed that its items be placed in the different dimensions into which they were deemed to fall. The experts further suggested that the dimension originally called "Admin" be changed to "Administrative Support", as the former sounded informal (Aberra 2016, 110, 118).
Table 3 shows the details of the experts' comments on the different items in the instrument.

Table 3: Formulation of items based on experts' recommendations

Before: Supervisors should reflect an approachable attitude when communicating with their students
After: Supervisors should be friendly/warm when communicating with their students

Before: Supervisors should communicate with their students via different technological media
After: Supervisors should communicate with their students via different technological media such as email, Skype, and chatting

Before: Supervisors should encourage their students to complete and submit draft chapters regularly
After: Supervisors should periodically encourage their students to make the required submissions such as chapters

Before: Unisa should provide training to students on the way in which to write a doctoral proposal
After: Unisa should provide training to students on the way in which to develop a doctoral proposal

Before: The Ethiopia Centre should ensure that its library possesses a wide range of subject-related and research books (item found to be double-barrelled and split into two)
After: The Ethiopia Centre should ensure that its library possesses a wide range of subject-related materials
After: The Ethiopia Centre should ensure that the library is equipped with recent research books

Before: (newly added item)
After: Unisa should ensure that payment made by self-sponsored students is reflected on their accounts as quickly as possible

Before: Unisa should ensure that departmental higher degrees committees communicate with doctoral students on their decisions regarding students' proposals in a reasonable time
After: Item retained; placed under the dimension of Administrative Support, as the dimension originally named Communication was deleted

Before: Unisa should provide information about administrative procedures involving doctoral students (eg intention to submit, and library block)
After: Item retained; likewise placed under the dimension of Administrative Support

Before: Supervisors and staff members of the Ethiopia Centre should give information concerning bursary applications and research fund possibilities (item found to be double-barrelled; originally placed under the "Communication" dimension)
After: Supervisors should provide information about research fund possibilities (placed under the dimension of Supervision Support)
After: Unisa should ensure that the bursary section provides timely responses concerning bursary applications (placed under the dimension of Administrative Support)

Before: The Ethiopia Centre should have staff members who are freely available to respond to students' enquiries (originally placed under the "Communication" dimension)
After: The Ethiopia Centre should have staff members who actively engage in supporting doctoral students (item rephrased and placed under the dimension of Academic Facilitation)

The process of content validity is iterative in reviewing the literature, consulting with relevant persons, and checking and rechecking items and dimensions with a sample of experts (Vogt, King, and King 2004, 236). Accordingly, in this study, six of the experts from the first pool (Kassam-Adams et al. 2015, 3) rechecked the items for reconfirmation and found the redistribution of the items originally placed under the Communication dimension acceptable. They also rerated the split and reformulated items, all of which resulted in an I-CVI of 1.0. Table 4 shows the results of the experts' ratings.

Conclusion and Recommendations
This study clearly shows the importance and usefulness of content validity in the process of developing a valid instrument. The procedures followed helped to gather comments for refining the items and their respective dimensions in terms of reformulating items, splitting double-barrelled items, and adding omitted items. In terms of the quantitative findings, the items' clarity in being understandable to the target group of doctoral students, their relevance in measuring the quality of student support services, and their dimensionality in assigning each item to its respective dimension were all secured. The overall comprehensiveness of the instrument was also achieved. It is recommended that content validity be undertaken for the sake of ensuring that an instrument contains the appropriate items and content domains. Procedures such as running Cronbach's alpha and factor analysis further help to secure reliability and construct validity in the continued refinement of an instrument.

Table 1 :
Distribution of pool of experts

Table 2 :
Rating scale as used by content experts

Table 4 :
Results of the item-content validity index (I-CVI) by experts