This paper examines the impact of E-learning quality on students' satisfaction and performance before and during the COVID-19 pandemic, with gender as a moderating variable. A descriptive and inferential approach was used in which a specially designed questionnaire was distributed to a simple random sample of students enrolled at Public Security Training City in Riyadh, Saudi Arabia. Data were gathered from 352 respondents, and multiple regression analysis was used to test the proposed hypotheses. The study found that E-learning system quality and E-learning service quality positively affected student satisfaction, which in turn had a significant and positive impact on student performance. Moreover, gender played a moderating role in the relationships between E-learning system quality, E-learning service quality, and student satisfaction. Finally, the relationships between all variables were stronger during COVID-19 than before the pandemic.
Keywords: E-learning, Information System (IS), Information Technology and Communication (ITC), Service Quality (SERQ), Student Performance (SP), Student Satisfaction (SS), System Quality (SYSQ).
Received: 11 August 2021 / Revised: 15 September 2021 / Accepted: 13 October 2021/ Published: 2 November 2021
This study is one of very few that have investigated the relationships between E-learning system quality, E-learning service quality, and student satisfaction in the public security sector, where new policies and regulations have been applied as part of Saudi Vision 2030, including women's participation in this sector.
1.1. Background of the Study
Since the late 1990s, Information Technology and Communication (ITC) has been used in education, and its development has continued rapidly to this day (Hermawan, Deswila, & Yunita, 2018). The use of ITC for educational purposes has been widely developed and researched (Aldholay, Abdullah, Ramayah, Isaac, & Mutahar, 2018). Many terms represent this approach, such as distance learning, e-learning, and blended learning. Nowadays, it is delivered over the internet and accessed through devices such as laptops, PCs, smartphones, and tablets (Clark & Mayer, 2016). Before the COVID-19 pandemic, Saudi Arabia showed a high level of technology adoption, although development and access quality were relatively low, especially in public schools (Schwab, 2020). However, when the pandemic began in early March 2020, all educational activities were required to move online to prevent the spread of the virus. This situation obliged Saudi teachers to familiarize themselves with e-learning tools as a substitute for face-to-face classroom sessions. In essence, technology became a fundamental need for education at all levels in Saudi Arabia, from elementary school to higher education, during the COVID-19 pandemic (Aliyyah et al., 2020; Dong, Cao, & Li, 2020). Many Saudi educational institutions, especially public education institutions, began using Zoom, Microsoft Teams, and Google Meet to conduct their learning sessions from the first month of the pandemic (Hidayatullah, Khouroh, Windhyastiti, Patalo, & Waris, 2020). In the 2020-2021 school year, the Saudi Ministry of Education (MOE) launched the Madrasati platform to implement the E-learning process, allowing students and teachers to access the platform and carry out the educational process as if they were in school. Meanwhile, some higher education institutions optimized their previously developed learning management systems, such as Blackboard, which is more sophisticated than regular systems. In this new scenario, students had an easier and simpler way to learn than before, since they no longer needed to visit the university to find out details of their teaching. Blackboard served as a complete educational administration application: students only had to log in to receive all notifications about their university education instantly.
These learning systems have developed and evolved over many years, and their workload increased during the pandemic as more teachers began to use them. Since these applications were novel and demanded technological fluency, the performance of teachers had to be evaluated to identify shortcomings and possible future improvements. Furthermore, as students are the most important stakeholders in e-learning systems, their satisfaction and performance are the benchmarks for the success of the e-learning information system (Almala, 2006; Bhardwaj & Goundar, 2018). From an Information System (IS) point of view, many models have been proposed to estimate and explain technology usage, such as the DeLone and McLean Information Systems Success Model (DMISSM) (DeLone & McLean, 1992; DeLone & McLean, 2003), Diffusion of Innovation (DOI) Theory (Rogers, 2010), the Model of PC Utilization (Chang & Cheung, 2001), and the Unified Theory of Acceptance and Use of Technology (Venkatesh, Morris, Davis, & Davis, 2003). In a sense, these theories examine technology empirically without evaluating the application itself (Islam, 2013). However, in line with the development of technology, research on system evaluation has changed in recent years. Current research trends emphasize system usability associated with individual performance as a measure of system effectiveness (Hidayatullah et al., 2020; Wahyudi, Respati, & Ardianto, 2017). In practice, many aspects influence individual performance; to measure it, these aspects are incorporated into the system, and the service quality of the system is ensured to achieve user satisfaction.
1.2. Research Objective
This research aimed to examine and assess the impact of E-learning quality on students' satisfaction and performance before and during the COVID-19 pandemic, with gender as a moderating variable in both situations. To achieve this objective, a descriptive and analytical approach was used. The sample comprised students at Public Security Training City in Riyadh and was selected using simple random sampling; a specially designed questionnaire was distributed to them for this purpose.
1.3. Problem Statement
Based on the objective, the research questions were formulated around a central question:
RQ1: What is the impact of E-learning quality on students' satisfaction and performance before and during the COVID-19 pandemic? The following sub-questions stemmed from the central question:
RQ2: What is the impact of E-learning system quality on students' satisfaction before the COVID-19 pandemic?
RQ3: What is the impact of E-learning system quality on students' satisfaction during the COVID-19 pandemic?
RQ4: What is the impact of E-learning service quality on students' satisfaction before the COVID-19 pandemic?
RQ5: What is the impact of E-learning service quality on students' satisfaction during the COVID-19 pandemic?
RQ6: What is the role of gender in the relationship between E-learning and students' satisfaction before the COVID-19 pandemic?
RQ7: What is the role of gender in the relationship between E-learning and students' satisfaction during the COVID-19 pandemic?
RQ8: What is the impact of students' satisfaction on students' performance before the COVID-19 pandemic?
RQ9: What is the impact of students' satisfaction on students' performance during the COVID-19 pandemic?
1.4. Importance of Study
From the literature review, it is evident that the impact of E-learning quality on students' satisfaction and performance has recently received much attention from researchers. However, no particular study has examined these relationships in the public security sector, which has recently witnessed the entry of women into its workforce. New policies and regulations are being applied in Saudi Arabia as part of Saudi Vision 2030, and development projects are currently being initiated by the Ministry of Interior (MOI), including the entry of women and their participation at different managerial levels within the MOI itself. This encouraged us to study the role of gender as a variable in the context of the MOI. In the past, military organizations did not allow research to be conducted on them for security reasons. However, owing to recent changes and greater flexibility, several military organizations have permitted research studies on matters related to them. Moreover, no earlier study has examined the role of gender as a moderator in the relationship between E-learning quality and students' satisfaction in this sector. These reasons give the study its uniqueness, and addressing this gap offers an understanding of experiences in the military sector and contributes to the body of knowledge.
2.1. Theoretical Background
DeLone and McLean (1992) suggested a framework for measuring the success of information systems (DMISSM) by assessing system quality and information quality in association with actual use and user satisfaction, which in turn affect individuals and organizations. The model was revised by its creators and updated ten years later (DeLone & McLean, 2003): service quality was added, and individual impact and organizational impact were condensed into one variable called net benefit (DeLone & McLean, 2003). In many later studies, this model has been used as a base model for measuring information system success and has been further modified to better reflect the system being assessed. System quality, information quality, service quality, user satisfaction, and stakeholder or user performance are the variables used in measuring systems, which at times translate into a net benefit variable (Chopra, Madan, Jaisingh, & Bhaskar, 2019; Wahyudi et al., 2017).
Over the last five years, research has highlighted system usability in relation to individual performance as a way to measure system efficiency (Hidayatullah et al., 2020; Wahyudi et al., 2017). Many factors influence individual performance in this respect, namely system quality, service quality, and user satisfaction. Althonayan and Althonayan (2017) derived their model from the DMISSM, Goodhue's (1995) Task-Technology Fit (TTF) model, and Doll and Torkzadeh's End User Computing Satisfaction (EUCS) model; it examines the influence of management quality, service quality, and system quality on the performance of stakeholders in the MADAR ERP system at King Saud University (KSU). In the same year, the DMISSM was used by Wahyudi et al. (2017) to assess the effectiveness of the DAPODIK Information System for public senior high schools, with user satisfaction as a mediating variable in the relationship between system quality and information quality and the system's net benefit. Research on online learning in Yemen was carried out by Aldholay, Abdullah, Isaac, and Mutahar (2019) using three variables: overall quality (system quality, information quality, and service quality), transformational leadership, and compatibility, which influence user satisfaction and actual use and, in turn, performance. Also in that year, Chopra et al. (2019) investigated Coursera's effectiveness as an e-learning platform, using a DMISSM derivative to examine the effect of e-learning system attributes (system quality, information quality, and service quality) on e-learning effectiveness (net benefits and user satisfaction). Studies measuring information system usage in higher education institutions (Althonayan & Althonayan, 2017; Aldholay et al., 2019; Chopra et al., 2019) have also observed higher education information systems in e-learning. In all of these studies except Althonayan and Althonayan (2017), user satisfaction was measured as directly related to system quality, service quality, and management quality, and to the performance of stakeholders. Chopra et al. (2019) assessed the effectiveness of the e-learning system by analyzing students' responses and outcomes in the educational environment; however, that study did not evaluate the correlation between user satisfaction and net benefit, instead combining them into a single variable called e-learning effectiveness (Chopra et al., 2019).
Based on the literature review, the conceptual model proposed in this study includes four variables: System Quality, Service Quality, Students' Satisfaction, and Students' Performance.
2.2. System Quality (SYSQ)
System quality is defined in terms of ease of learning, ease of use, suitability of access, utility of system features, system complexity, system characteristics, and information system response time (Beheshti & Beheshti, 2010). Similarly, Sedera and Dey (2013) describe system quality in terms of ease of learning, effectiveness, consistency, configurability, convenience, customizability, fulfilment of requirements, ease of use, and reliability. System quality is also identified through measures such as ease of use, availability, flexibility, reliability, utility, and response time (DeLone & McLean, 2003).
System quality is the main component considered in measuring the effectiveness of an information system (IS) (Gürkut & Nat, 2017; Muda & Erlina, 2019). Ifinedo and Nahar (2007) emphasize the significance of system quality since it strongly influences the e-learning platform. High e-learning system quality produces query results more quickly and significantly raises end-users' attention, while user-friendly, modern graphical interfaces also boost user satisfaction levels. Petter, DeLone, and McLean (2013) stressed that new modifications must be implemented by the service provider and that the system must be updated from time to time. In the context of E-learning, the system quality construct has also been used to show whether users are satisfied with the quality of the E-learning portal. The most widely used measure of system quality is perceived ease of use (Davis, 1989). Unless the requirements and expectations of users are met, an IS cannot be considered a high-quality system; a good IS requires a well-designed system that offers and improves diverse assistance and information. The provider that delivers and maintains the IS plays a central role in satisfying users. In the case of e-learning, educational institutions should evaluate the quality of the system; such evaluation enhances e-learning quality based on user feedback and offers long-term value to educational institutions. Over time, some system quality norms have changed, but certain indicators remain the same and have been applied and validated consistently, such as ease of use, learning comfort, reliability, personalization, response time, availability, system interactivity, and system security. In this research, system quality is associated with students' experiences while using e-learning, such as convenience of use, simplicity of understanding, ease of learning, and interest. In addition, the quality of the system influences users' satisfaction (Hossain, 2016). Several components can be used to measure the quality of e-learning, such as simplicity of use, flexibility, and timeliness. The performance characteristics of the system are known as simplicity of use (Ifinedo & Nahar, 2007). Flexibility is the system's capability to respond effectively to a changed situation (Gong & Janssen, 2010). Saving time, decreasing duplication, and increasing productivity are important requirements in system use; timeliness, therefore, becomes an indicator for measuring system quality.
2.3. Service Quality (SERQ)
Quality of service is described as the difference between students' expectations and perceptions (Stodnick & Rogers, 2008). It often refers to "responsiveness," a measure of how effectively technical support responds to students' requests and assists them with empathy (Haryaka, Agus, & Kridalaksana, 2017; Petter et al., 2013). Service quality plays a vital role in enhancing benefits by offering something distinctive or additional to increase students' satisfaction (Pham, Williamson, & Berry, 2018). For educational purposes, e-learning service quality can enhance a learning service through an online platform (Mayer, 2017). Through constant interaction with students and their feedback, the system administrator can also improve the service. To meet students' expectations and satisfaction, service quality can be measured by interactivity, functionality, and responsiveness (Almazán, Tovar, & Quintero, 2017).
Five main factors in the e-learning system are related to service quality: administration and support, instructor quality, precision, course items and tools, and safety (Pham et al., 2018). Good e-learning service quality will have a positive impact on students' satisfaction and performance. Service quality is indeed one of the main elements required for the success of educational institutions. To offer a good service in education, educational institutions need to enhance the quality of e-learning, and this enhancement can be carried out through assessment based on students' experiences and perceptions.
Several items have been proposed to measure the service quality of e-learning systems, mostly citing instruments developed by earlier researchers (Parasuraman, Zeithaml, & Berry, 1988; Parasuraman, Berry, & Zeithaml, 2002). Originally, ten factors were used to measure service quality, which were later condensed into five well-known dimensions: tangibility, reliability, responsiveness, assurance, and empathy. The updated D&M model has proved very helpful in assessing the effectiveness of various types of technology-related applications. Much research has used this information system model to examine, for example, ERP system performance, e-procurement implementation, users' perspectives on e-government usage, the performance of e-banking applications, and several other business achievements through online services (Almarashdeh, 2016; Aparicio, Bacao, & Oliveira, 2017; Cidral, Oliveira, Di Felice, & Aparicio, 2018; Hsu, Yen, & Chung, 2015).
2.4. Students’ Satisfaction (SS)
Students' satisfaction is generally considered one of the primary concerns in the field of education (Aldholay et al., 2019; Bhardwaj & Goundar, 2018). In terms of experience, function, and usability, educational effectiveness can be measured through students' satisfaction (Xinli, 2015). In other words, students' satisfaction is correlated with the effectiveness of the e-learning system in meeting their requirements (Chopra et al., 2019; Eom, Wen, & Ashill, 2006). As an assistive tool, an e-learning system is expected to support and fulfill students' goals (Arkorful & Abaidoo, 2015; Kintu, Zhu, & Kagambe, 2017). To achieve student satisfaction, e-learning must provide all the content, features, and facilities needed to support students.
All of these requirements reflect student satisfaction in terms of content quality, system usability, and technical factors. In the e-learning context, student satisfaction is reflected in the improvement of students' skills and knowledge (Reynolds, 2011).
A high level of student satisfaction indicates that the system is beneficial both for students as end-users and for educational institutions as service providers. In addition, recent studies have used satisfaction components such as system and service quality to analyze e-learning performance (Chiu, Chiu, & Chang, 2007; Park & Gretzel, 2007).
2.5. Students’ Performance (SP)
Appropriate quality of work while using a system is associated with personal performance (Li, Yu, Liu, Shieh, & Yang, 2014), which includes helping users accomplish assignments rapidly, allowing job control, enhancing job performance, eliminating mistakes, and optimizing workplace efficiency (Issac, Masoud, Samad, & Abdullah, 2016; Norzaidi, Chong, Murali, & Salwani, 2007). Good human-computer interaction is taken into account in order to improve students' performance; it is connected to the system and service quality that influence students' satisfaction and, consequently, their performance. As end-users of the e-learning system, students' performance has become the critical factor for evaluating the system's success. Therefore, in this study, performance refers to the improvement in students' performance while using an e-learning method, measured through resource savings, efficiency, student capability, and knowledge achievement (Issac et al., 2016).
This study aimed to examine the e-learning system through students' satisfaction and performance. Based on the literature review, a conceptual model was prepared to measure students' performance through e-learning. The model includes two independent variables (system quality and service quality), one moderating variable (gender), and students' satisfaction as a dependent variable, which in turn acts as the independent variable for students' performance. Figure 1 represents the conceptual model of this study.
The DMISSM is the basis of the proposed research framework. This model was chosen because it is well suited to assessing e-learning through student satisfaction and has been commonly applied in previous related research to evaluate IS success (Isaac, Abdullah, Ramayah, & Mutahar, 2017). The use of user satisfaction as a mediating variable is one of the DMISSM's features; this variable bridges the relationship between the independent and dependent variables and helps explain them. It thus allows an evaluation of how e-learning system quality and service quality influence students' performance through their satisfaction. The model was adopted and adjusted with reference to several related studies, such as students' perspectives on e-learning effectiveness (Aldholay et al., 2019; Chopra et al., 2019) and user performance with ERP in higher education (Althonayan & Althonayan, 2017).
Each variable in this model is represented by several indicators. Three indicators were identified for the system quality variable. The first is ease of use, which describes the level of system usability; an easy-to-use system shows that the system has been developed well (Helia, Asri, Kusrini, & Miranda, 2018; Yuniarto, Suryadi, Firmansyah, Herdiana, & Rahman, 2018) and will positively affect students' satisfaction. The second indicator is flexibility, which describes the system's capability to fulfill students' requirements (Chen, Yan, & Ke, 2019). In this case, an e-learning system must meet students' requirements related to both the learning process and administration, and it should also respond to students' feedback in order to enhance the current system. The third indicator of system quality is timeliness, which refers to the expected time for the information that students need to become available; a good-quality system provides the required information on time.
The service quality variable also has three indicators in this study. The first is responsiveness, meaning that the system can respond to students' requests quickly and accurately. The next indicator is functionality, which refers to the features of the e-learning system such as online courses, self-registration, exams, etc. (Chou, Wu, & Tsai, 2019); this indicator ensures that the system's features are sufficient and developed effectively according to students' needs. Interactivity is the third indicator, which requires that an e-learning system provide two-way interaction (Zhang, 2016), for instance between student and lecturer in the learning process, or between user and administrator for quality improvement purposes.
Another variable in this study is students' satisfaction, which can also be measured by three indicators. The first is system satisfaction, which refers to students' general satisfaction after using the system and the feeling that they are satisfied with the e-learning system. The second is fulfillment of expectations, which assesses whether the system has met students' needs thoroughly in terms of content, characteristics, function, and design. The final indicator is students' interest in continuing to use the system, which reflects sustainability; it also shows whether students feel the system is suitable for use at specific times (e.g., in the pandemic context) as well as continuously in the future.
Finally, students' performance is the final outcome of the conceptual model and is represented by four indicators. The first is improvement of performance related to the learning process, which requires that the system support and facilitate student work, such as online classes, availability of learning materials, submission of tasks, and so on (Kauffman, 2015). The second is improvement of efficiency, seen in providing the service whenever and wherever students want it; the system is expected to make learning more effective. The third is time saving until a certain task is complete, meaning that the system should save students' time when searching for learning material, acquiring new knowledge, or doing a task. Knowledge improvement is the last indicator of students' performance, which checks whether the system helps students improve their knowledge by providing the newest and most accurate information and learning material.
Based on the above framework and variables identified for this study, the following hypotheses were established for this study:
H1: There is a relationship between system quality and students’ satisfaction.
H2: There is a relationship between service quality and students’ satisfaction.
H3: There is a relationship between students’ satisfaction and students’ performance.
H4: Gender moderates the relationship between system quality and students’ satisfaction.
H5: Gender moderates the relationship between service quality and students’ satisfaction.
Figure-1. Conceptual framework.
To examine the relationships between E-learning quality, student satisfaction, and student performance before and during the COVID-19 pandemic, the following hypotheses were formulated:
H6: There are differences in the relationship between system quality and students' satisfaction before and during the COVID-19 pandemic.
H7: There are differences in the relationship between service quality and students' satisfaction before and during the COVID-19 pandemic.
H8: There are differences in the relationship between students' satisfaction and students' performance before and during the COVID-19 pandemic.
This research adopted a quantitative research design and used a questionnaire for primary data collection. Descriptive and inferential analyses were carried out using SPSS version 21.0. The target population comprised students of both genders enrolled in the different institutes at Public Security Training City (PSTC), and the unit of analysis was the individual student. The questionnaires were distributed electronically to students identified through simple random sampling, which gave each student an equal opportunity to participate.
There are eight institutes at PSTC, and the target population of this study comprised the 2,243 students enrolled in them. The sample size was determined by the responses received, which amounted to 352 usable completed questionnaires. Following a deductive approach, most of the questionnaire items were adapted from published studies (Table 1). Since these studies were performed in different settings, a few modifications were made by inserting local terms and restructuring sentences.
Table-1. Published studies from which the questionnaire items were adapted.
Construct | Items | Reference |
System Quality | 4 | (Abbad & Jaber, 2014; Ifinedo & Nahar, 2007; McGill, Hobbs, & Klobas, 2003) |
Service Quality | 6 | (Abbad & Jaber, 2014; Ifinedo & Nahar, 2007) |
Students’ Satisfaction | 3 | (Abbad & Jaber, 2014; Ifinedo & Nahar, 2007; McGill et al., 2003) |
Students’ Performance | 5 | (González, Jover, Cobo, & Muñoz, 2010) |
The questionnaire included an introduction notifying respondents of the objective of the research. It also contained separate sections on sociodemographic data, E-learning quality before COVID-19, and E-learning quality during COVID-19. Participation was voluntary, and participants were assured that their responses would be kept anonymous and confidential. The questionnaire was prepared in both English and Arabic. Pilot study procedures are usually adopted to develop a survey questionnaire (Forza, 2002; Hinkin, 1998; Sekaran, 2003). Nevertheless, Rowley (2014) recommended that theory-testing or deductive research adapt questionnaire items, partially or entirely, from published research. The benefit of such adaptation is that potential problems in a specific context become apparent beforehand, rather than only emerging in contrast with the study findings.
Although the measures were mostly adapted from published articles, a risk to reliability and validity still existed because the current study was conducted in a different context. Therefore, the research instrument needed to be tested appropriately to assure the research community of the validity of the scientific results. For this purpose, a pilot survey was administered to 30 respondents to verify the reliability and validity of the questionnaire. All four constructs of the study (SYSQ, SERQ, SS, and SP) were operationalized to reduce their abstract concepts into observable and measurable elements; the researchers aimed specifically to identify the behavioral factors, elements, or aspects that measured each construct's concepts. During the pilot test, respondents' feedback encouraged the researchers to make appropriate adjustments to the questionnaire. An instrument validation test was also carried out before the actual data collection to verify that the constructs were likely to be real and consistent and, more importantly, that the instrument measured the correct material.
According to Straub, Boudreau, and Gefen (2004), a reliability test evaluates the degree to which respondents answer the same questions in the same way each time they are asked. One method of measuring reliability is the Cronbach alpha coefficient. When calculating Cronbach's alpha, two SERQ items lowered the coefficient from 0.732 to 0.5 and were therefore removed; likewise, one SP item lowered the coefficient from 0.689 to 0.448 and was consequently removed. As a result, the reliability coefficients of the constructs ranged from 0.689 to 0.732, exceeding the minimum cut-off value (Sekaran, 2003); Table 2 reflects the instrument's consistency. Although validity can also be assessed during actual data collection, the researchers were prepared to address any potential threats to validity caused by the instrument's design from a very early stage through to the data collection stage (Green, Tonidandel, & Cortina, 2016).
Construct validity is an evaluation across constructs that indicates a fair operationalization of a given construct (Cronbach & Meehl, 1955). It investigates whether, in the presence of measures of other constructs, each measure of a specific construct fits together and links closely within that particular construct. In other words, construct validity ensures that the instrument tests what it is intended to predict (Wood, 2011). To determine the questionnaire's validity, the researchers used a correlation method comparing each item's correlation with the total of the items (Guilford, 1954). Table 2 shows that the correlations for every item of all constructs were statistically significant.
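Reliability and item-total correlation checks of this kind can be reproduced with standard statistical tooling. The following is a minimal illustrative sketch in Python rather than the authors' SPSS procedure; the data file and column names are hypothetical.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of Likert items (one column per item)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

def item_total_correlations(items: pd.DataFrame) -> pd.Series:
    """Correlation of each item with the construct's total score."""
    total = items.sum(axis=1)
    return items.apply(lambda col: col.corr(total))

# Example usage with hypothetical column names for the four SYSQ_B items:
# df = pd.read_csv("pilot_responses.csv")
# sysq_b = df[["sysq_b1", "sysq_b2", "sysq_b3", "sysq_b4"]]
# print(cronbach_alpha(sysq_b))            # compare against the common 0.70 cut-off
# print(item_total_correlations(sysq_b))   # weakly correlated items may be dropped
```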
Table-2. Measuring reliability and validity of the questionnaire.
Construct | Items | Cronbach alpha | Total Correlation |
SYSQ_B | Before COVID-19, I believe the E-learning system was easy to use. | 0.724 | 0.446** |
 | Before COVID-19, I believe the E-learning system was easy to learn by new users. | | 0.546** |
 | Before COVID-19, I believe E-learning provided me a flexible way to study. | | 0.625** |
 | Before COVID-19, I believe the E-learning system offered timely and updated information. | | 0.451** |
SERQ_B | Before COVID-19, the technical support response through the system was fast and efficient. | 0.732 | 0.595** |
 | Before COVID-19, the E-learning offered high-quality knowledge for students. | | 0.500** |
 | Before COVID-19, the E-learning offered educational-wide communication. | | 0.437** |
 | Before COVID-19, the E-learning offered work-group possibility. | | 0.569** |
SS_B | Before COVID-19, the E-learning met my requirements as a student. | 0.703 | 0.560** |
 | Before COVID-19, the E-learning fulfilled my expectation as a student. | | 0.491** |
 | Before COVID-19, I was interested in continuing using the E-learning system. | | 0.512** |
SP_B | Before COVID-19, the E-learning improved my academic performance as a student. | 0.689 | 0.438** |
 | Before COVID-19, the E-learning improved the students' participation in the class. | | 0.539** |
 | Before COVID-19, the E-learning saved time for individual tasks and assignments. | | 0.490** |
 | Before COVID-19, the E-learning allowed for better use of educational data resources. | | 0.422** |
SYSQ_D | During COVID-19, I believe the E-learning system was easy to use. | 0.834 | 0.626** |
 | During COVID-19, I believe the E-learning system was easy to learn by new users. | | 0.736** |
 | During COVID-19, I believe E-learning provided me a flexible way to study. | | 0.805** |
 | During COVID-19, I believe the E-learning system offered timely and updated information. | | 0.671** |
SERQ_D | During COVID-19, the technical support response through the system was fast and efficient. | 0.912 | 0.615** |
 | During COVID-19, the E-learning offered high-quality knowledge for students. | | 0.620** |
 | During COVID-19, the E-learning offered educational-wide communication. | | 0.637** |
 | During COVID-19, the E-learning offered work-group possibility. | | 0.679** |
SS_D | During COVID-19, the E-learning met my requirements as a student. | 0.803 | 0.670** |
 | During COVID-19, the E-learning fulfilled my expectation as a student. | | 0.501** |
 | During COVID-19, I was interested in continuing using the E-learning system. | | 0.622** |
SP_D | During COVID-19, the E-learning improved my academic performance as a student. | 0.729 | 0.548** |
 | During COVID-19, the E-learning improved the students' participation in the class. | | 0.659** |
 | During COVID-19, the E-learning saved time for individual tasks and assignments. | | 0.510** |
 | During COVID-19, the E-learning allowed for better use of educational data resources. | | 0.532** |
Descriptive statistical analysis was applied to present the socio-demographic characteristics of the sample (Table 3).
Table-3. Descriptive statistics of demographic data.
Characteristic | Category | Number | Percentage |
Gender | Male | 211 | 59.9% |
 | Female | 141 | 40.1% |
Age | 18-21 | 141 | 40.1% |
 | 22-25 | 176 | 50% |
 | 26-29 | 35 | 9.9% |
Educational Level | High School | 141 | 40.1% |
 | Diploma | 106 | 30.1% |
 | Bachelor | 105 | 29.8% |
5.1. Testing the Hypotheses
To test the research hypotheses, the simple linear regression method was used to explore the relationships between variables. The researchers were interested in testing the individual relationships between SYSQ and SERQ as independent variables and SS as a dependent variable, as well as the individual relationship between SS as an independent variable and SP as a dependent variable. Before testing the model, the underlying assumptions of regression analysis were examined to verify that the data met the statistical assumptions of simple linear regression. These assumptions were: linearity (a linear relationship between the dependent variables (DVs) and independent variables (IVs)), independence of errors, normally distributed errors, errors with a mean of zero, and homoscedasticity.
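The assumption checks are reported in Figures 2-4 and Table 4 below and were produced in SPSS. As an illustration only, equivalent checks could be run in Python with statsmodels as in the following sketch; the file name and column names are hypothetical, and the Breusch-Pagan test is added here merely as one optional formal check of homoscedasticity.

```python
# Illustrative sketch (hypothetical file and column names), shown for the SYSQ_D -> SS_D pair.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson
from statsmodels.stats.diagnostic import het_breuschpagan
from scipy import stats

df = pd.read_csv("responses.csv")            # assumed construct mean scores per respondent
X = sm.add_constant(df["SYSQ_D"])            # independent variable plus intercept
model = sm.OLS(df["SS_D"], X).fit()
residuals = model.resid

print("Durbin-Watson:", durbin_watson(residuals))          # ~2 indicates independent errors
print("Shapiro-Wilk p:", stats.shapiro(residuals).pvalue)  # normality of errors
print("Residual mean:", residuals.mean())                  # should be approximately zero

# Homoscedasticity can be inspected with a residuals-vs-fitted plot (cf. Figure 4)
# or tested formally, e.g. with the Breusch-Pagan test:
lm_stat, lm_p, f_stat, f_p = het_breuschpagan(residuals, X)
print("Breusch-Pagan p:", lm_p)
```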
Figure-2. Linearity assumption.
Table-4. Durbin-Watson statistics of the independent variables with the dependent variables.
Construct | SS | SP |
SYSQ | 2.007 | ------ |
SERQ | 1.949 | ------ |
SS | ------ | 1.978 |
Figure-3. Assumption of error normally distribution.
Figure-4. Assumption of error Homoscedasticity.
Since all assumptions of simple linear regression were met, the hypotheses were examined as follows.
To examine the relationship between SYSQ and SS, a simple linear regression was conducted. Table 5 reveals a significant relationship between these variables in both situations, before and during COVID-19. Before the COVID-19 pandemic, β = 0.468, t = 8.532, P = 0.000 < 0.01, R = 0.256, and 18.3% of the variability in SS was explained by SYSQ. During the COVID-19 pandemic, β = 0.501, t = 11.195, P = 0.000 < 0.01, R = 0.396, and 23.3% of the variability in SS was explained by SYSQ. Therefore, the relationship between SYSQ and SS was statistically significant and Hypothesis 1 was supported. Table 5 also reveals that the β values, R, and R² were all higher during COVID-19, reflecting a stronger impact of SYSQ, a stronger correlation between SYSQ and SS, and a stronger explanation of SS variance by SYSQ during COVID-19 than before the pandemic. Therefore, Hypothesis 6 was also supported.
Table-5. SYSQ and SS.
Hypothesis | Construct | Period | β | t | P | R | R² |
H1 | SYSQ | Before COVID-19 | 0.468 | 8.532 | 0.000 | 0.256 | 0.183 |
 | | During COVID-19 | 0.501 | 11.195 | 0.000 | 0.396 | 0.233 |
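For illustration, a regression of this kind could be reproduced as in the following minimal sketch; the file and column names are hypothetical, and the SPSS output in Table 5 remains the authoritative result.

```python
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("responses.csv")   # assumed construct mean scores per respondent

# Regress students' satisfaction on system quality for each period separately.
for period, (x_col, y_col) in {"Before COVID-19": ("SYSQ_B", "SS_B"),
                               "During COVID-19": ("SYSQ_D", "SS_D")}.items():
    X = sm.add_constant(df[x_col])
    result = sm.OLS(df[y_col], X).fit()
    print(period,
          "beta =", round(result.params[x_col], 3),
          "t =", round(result.tvalues[x_col], 3),
          "p =", round(result.pvalues[x_col], 3),
          "R2 =", round(result.rsquared, 3))
```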
To examine the relationship between SERQ and SS, a simple linear regression was conducted. Table 6 reveals a significant relationship between these variables in both situations, before and during COVID-19. Before the COVID-19 pandemic, β = 0.262, t = 4.243, P = 0.000 < 0.01, R = 0.221, and 4.9% of the variability in SS was explained by SERQ. During the COVID-19 pandemic, β = 0.357, t = 6.515, P = 0.000 < 0.01, R = 0.329, and 10.8% of the variability in SS was explained by SERQ. Therefore, the relationship between SERQ and SS was statistically significant and Hypothesis 2 was supported. Table 6 also reveals that the β values, R, and R² were all higher during COVID-19, reflecting a stronger impact of SERQ, a stronger correlation between SERQ and SS, and a stronger explanation of SS variance by SERQ during COVID-19 than before the pandemic. Therefore, Hypothesis 7 was also supported.
Table-6. SERQ and SS.
Hypothesis | Construct | Period | β | t | P | R | R² |
H2 | SERQ | Before COVID-19 | 0.262 | 4.243 | 0.000 | 0.221 | 0.049 |
 | | During COVID-19 | 0.357 | 6.515 | 0.000 | 0.329 | 0.108 |
To examine the relationship between SS and SP, a simple linear regression was conducted. Table 7 reveals a significant relationship between these variables in both situations, before and during COVID-19. Before the COVID-19 pandemic, β = 0.415, t = 8.532, P = 0.000 < 0.01, R = 0.415, and 17.2% of the variability in SP was explained by SS. During the COVID-19 pandemic, β = 0.513, t = 11.172, P = 0.000 < 0.01, R = 0.513, and 26.4% of the variability in SP was explained by SS. Therefore, the relationship between SS and SP was statistically significant and Hypothesis 3 was supported. The β values, R, and R² were all higher during COVID-19, reflecting a stronger impact of SS, a stronger correlation between SS and SP, and a stronger explanation of SP variance by SS during COVID-19 than before the pandemic. Therefore, Hypothesis 8 was also supported.
Table-7. SS and SP.
Hypothesis | Construct | Period | β | t | P | R | R² |
H3 | SS | Before COVID-19 | 0.415 | 8.532 | 0.000 | 0.415 | 0.172 |
 | | During COVID-19 | 0.513 | 11.172 | 0.000 | 0.513 | 0.264 |
To examine the moderating role of gender in the relationships between SYSQ, SERQ, and SS, the PROCESS procedure for moderation analysis was conducted. Table 8 reveals that gender moderates and strengthens these relationships in both situations, before and during COVID-19. With gender moderating the relationship between SYSQ and SS, before COVID-19, β = 0.526, t = 2.627, P = 0.000 < 0.01, R = 0.305, and 9.3% of the variability in SS was explained by SYSQ; during the COVID-19 pandemic, β = 0.610, t = 10.189, P = 0.000 < 0.01, R = 0.453, and 16.4% of the variability in SS was explained by SYSQ. All of these values are higher than the values without the moderating effect of gender. Therefore, the relationship between SYSQ and SS is moderated by gender and Hypothesis 4 was supported. Moreover, Table 8 also reveals that gender moderates and strengthens the relationship between SERQ and SS in both situations, before and during COVID-19. Before COVID-19, β = 0.598, t = 3.657, P = 0.000 < 0.01, R = 0.565, and 11.3% of the variability in SS was explained by SERQ; during the COVID-19 pandemic, β = 0.678, t = 12.219, P = 0.000 < 0.01, R = 0.673, and 21.4% of the variability in SS was explained by SERQ. All of these values are higher than the values without the moderating effect of gender (Table 8). Therefore, the relationship between SERQ and SS is positively strengthened by the moderating role of gender and Hypothesis 5 was supported. In addition, the β values, R, and R² in both relationships are all higher during COVID-19, reflecting a stronger moderating effect of gender during COVID-19 than before the pandemic.
Table-8. The moderate role of gender.
Hypothesis | Period | β | t | P | R | R² | LLCI | ULCI |
H4 | Before COVID-19 | 0.526 | 2.627 | 0.000 | 0.305 | 0.093 | 0.188 | 0.243 |
 | During COVID-19 | 0.610 | 10.189 | 0.000 | 0.453 | 0.164 | 0.165 | 0.220 |
H5 | Before COVID-19 | 0.598 | 3.657 | 0.000 | 0.565 | 0.113 | 0.218 | 0.323 |
 | During COVID-19 | 0.678 | 12.219 | 0.000 | 0.673 | 0.214 | 0.235 | 0.440 |
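The moderation results in Table 8 come from the SPSS PROCESS procedure. An equivalent interaction-term regression can be sketched as follows; the column names are hypothetical, gender is assumed to be coded 0 = male and 1 = female, and this sketch will not reproduce the PROCESS output exactly.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("responses.csv")                   # assumed data file
df["SYSQ_c"] = df["SYSQ_D"] - df["SYSQ_D"].mean()   # mean-center the predictor

# A significant SYSQ_c:gender term (with its lower/upper confidence limits,
# i.e. LLCI/ULCI) indicates that gender moderates the SYSQ -> SS relationship.
model = smf.ols("SS_D ~ SYSQ_c * gender", data=df).fit()
print(model.summary())
print(model.conf_int())   # confidence bounds for each coefficient
```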
An independent-samples t-test was conducted to examine gender differences in perceptions of SYSQ and SERQ. Table 9 shows that before COVID-19, there was a significant difference in SYSQ perceptions between males (M = 2.000, SD = 0.817) and females (M = 1.497, SD = 0.502), t(350) = 6.542, p = 0.000. There was also a significant difference in SERQ perceptions between males (M = 1.858, SD = 0.779) and females (M = 1.609, SD = 0.570), t(350) = 6.342, p = 0.000.
Table 9 also shows that during COVID-19, there was a significant difference in SYSQ perceptions between males (M = 4.000, SD = 0.826) and females (M = 4.503, SD = 0.612), t(350) = 5.495, p = 0.000, as well as a significant difference in SERQ perceptions between males (M = 4.151, SD = 0.725) and females (M = 4.419, SD = 0.490), t(350) = 5.678, p = 0.000.
Table-9. Gender differences in perspectives of SYSQ and SERQ.
Construct | Gender | N | Mean (Before COVID-19) | Std. Deviation (Before COVID-19) | Mean (During COVID-19) | Std. Deviation (During COVID-19) |
SYSQ | Male | 211 | 2.000 | 0.817 | 4.000 | 0.826 |
 | Female | 141 | 1.497 | 0.502 | 4.503 | 0.612 |
SERQ | Male | 211 | 1.858 | 0.779 | 4.151 | 0.725 |
 | Female | 141 | 1.609 | 0.570 | 4.419 | 0.490 |
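A comparison of this kind corresponds to an independent-samples t-test; a minimal sketch with hypothetical column names is shown below for the during-COVID-19 SYSQ scores.

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("responses.csv")                     # assumed data file
male = df.loc[df["gender"] == "Male", "SYSQ_D"]
female = df.loc[df["gender"] == "Female", "SYSQ_D"]

t_stat, p_value = stats.ttest_ind(male, female)       # equal variances assumed
print(f"t({len(male) + len(female) - 2}) = {t_stat:.3f}, p = {p_value:.3f}")
```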
A paired-samples t-test was also used to compare the means of the same variables before and during the COVID-19 pandemic and to examine the differences in respondents' beliefs. Table 10 reveals a significant difference in SYSQ scores before the COVID-19 pandemic (M = 1.7983, SD = 0.748) and during the pandemic (M = 4.2, SD = 0.749), t(350) = 30.119, P = 0.000. There was also a significant difference in SERQ scores before the pandemic (M = 1.8983, SD = 0.732) and during the pandemic (M = 4.36, SD = 0.750), t(350) = 27.867, P = 0.000. A significant difference was also noted in SS scores before the pandemic (M = 1.90, SD = 0.845) and during the pandemic (M = 4.18, SD = 0.831), t(350) = 30.325, P = 0.000. Finally, there was a significant difference in SP scores before the pandemic (M = 1.7926, SD = 0.747) and during the pandemic (M = 4.2, SD = 0.726), t(350) = 27.529, P = 0.000. It can be concluded that E-learning quality had a significant impact on SS and SP both before and during the COVID-19 pandemic.
Table-10. Paired t-test for all variables.
Pair | Variable | Mean | N | Std. Deviation |
Pair 1 | SYSQ_B | 1.7983 | 352 | 0.74856 |
 | SYSQ_D | 4.2017 | 352 | 0.74421 |
Pair 2 | SERQ_B | 1.8983 | 352 | 0.73241 |
 | SERQ_D | 4.3617 | 352 | 0.75016 |
Pair 3 | SS_B | 1.9034 | 352 | 0.84516 |
 | SS_D | 4.1761 | 352 | 0.83107 |
Pair 4 | SP_B | 1.7926 | 352 | 0.74700 |
 | SP_D | 4.2074 | 352 | 0.72630 |
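The paired comparison summarized in Table 10 can be illustrated with a paired-samples t-test on the same respondents' before and during scores; the sketch below uses hypothetical column names for the SYSQ construct.

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("responses.csv")                     # assumed data file
t_stat, p_value = stats.ttest_rel(df["SYSQ_B"], df["SYSQ_D"])
print(f"t({len(df) - 1}) = {t_stat:.3f}, p = {p_value:.3f}")
```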
This study aimed to examine the impact of E-learning quality on students' satisfaction and performance before and during the COVID-19 pandemic. As the results show, there is a strong and positive relationship between SYSQ and SS, which is consistent with the findings of Petter et al. (2013) and Davis (1989). There is also a strong and positive relationship between SERQ and SS, in line with Pham et al. (2018); Mayer (2017) and Almazán et al. (2017), and a strong and positive relationship between SS and SP, which agrees with Bhardwaj and Goundar (2018); Aldholay et al. (2019); Xinli (2015); Chopra et al. (2019) and Eom et al. (2006). All of these relationships are stronger during COVID-19 than before. It can therefore be argued that social distancing and government regulations on the educational process during COVID-19 played a prominent role in this regard. Before COVID-19, many students of both genders did not rely mainly on the E-learning system for educational purposes, since face-to-face, traditional classes were available and more convincing for students, instructors, and the management of educational institutions. However, some students faced a challenge in using computers; despite technological advancements, many people can use smartphones but not computers (PCs or laptops). Because E-learning systems are mainly accessed through computers, many students found themselves forced to learn how to use a computer in order to progress in the educational process. Eventually, these students, as users, built their perceptions of and experience with the E-learning system. As shown in Table 11, there was a dramatic shift in students' impressions of E-learning: the mean of every questionnaire item shifted from disagreement before COVID-19 toward agreement during COVID-19.
Table-11. Mean and Standard Deviation of questionnaire items before and during COVID-19.
Items | Mean (Before) | S.D. (Before) | Mean (During) | S.D. (During) |
I believe using the E-learning system is easy. | 1.699 | 0.748 | 4.201 | 0.825 |
I believe the E-learning system is easy to learn by new users. | 1.798 | 0.859 | 4.015 | 0.636 |
I believe E-learning provides me a flexible way to study. | 1.546 | 0.765 | 4.141 | 0.928 |
I believe the E-learning system offers timely and updated information. | 1.668 | 0.649 | 4.109 | 0.750 |
I believe technical support response through the system is fast and efficient. | 1.523 | 0.713 | 4.264 | 0.721 |
I believe E-learning offers high-quality knowledge to students. | 1.824 | 0.522 | 4.245 | 0.932 |
I believe E-learning offers wide educational communication. | 1.934 | 0.823 | 4.271 | 0.817 |
I believe E-learning facilitates work-group possibility. | 1.690 | 0.725 | 4.255 | 0.629 |
I believe E-learning meets my requirements as a student. | 1.798 | 0.845 | 4.036 | 0.825 |
I believe E-learning fulfils my expectation as a student. | 1.769 | 0.954 | 4.176 | 0.930 |
I feel I am interested in continuing using the e-learning system. | 1.975 | 0.842 | 4.127 | 0.623 |
I believe E-learning improves my academic performance as a student. | 1.945 | 0.798 | 4.268 | 0.747 |
I believe E-learning improves the students' participation in the class. | 1.885 | 0.767 | 4.189 | 0.763 |
I believe E-learning saves time for individual tasks and assignments. | 1.792 | 0.847 | 4.211 | 0.844 |
I believe E-learning allows for better use of educational data resources. | 1.799 | 0.7727 | 4.254 | 0.627 |
Gender affected the relationships between SYSQ and SERQ on one side and SS on the other in both situations. Moreover, there were differences between males and females in their perspectives on E-learning. This leads to the conclusion that E-learning systems should be designed with these differences in mind in order to achieve satisfaction for both groups. The results also show that the impact of SYSQ and SERQ on SS was greater during COVID-19 than before, owing to the dependence on the E-learning system as the only way to learn during COVID-19.
In addition, the relationships between SS and SP before and during the pandemic were significant, positive, and strong, and much stronger during COVID-19 than before. This underlines the importance of SS for achieving the desired SP.
Educational institutions constantly seek to improve their students' performance. When students graduate and move to the labor market, their qualifications, skills, and the effectiveness of their education affect the reputation of these institutions. The advantages of this objective are considerable: if graduate characteristics meet or exceed labor market expectations, the institutions are placed in a strong position and gain many benefits, such as funding and sponsorship contracts from organizations in the labor market. To achieve the desired student performance, educational institutions should first explore the features that drive students' satisfaction and design the educational system to meet that satisfaction by increasing the quality of the internal learning system. With increased global dependence on technology, especially in education, one way to achieve students' satisfaction is to increase the quality of the E-learning system and E-learning services by making them easy to learn and use, flexible, timely, and up to date; offering high-quality knowledge; meeting students' requirements and expectations; and providing reliable technical support for students.
Future research may examine the impact of other quality features of E-learning and add more indicators for SYSQ, SERQ, SS, and SP. Future research is also suggested in other local, regional, or global educational institutions to compare the phenomenon under study with the results of this study. Future research may also consider other moderating variables, such as academic level (Bachelor, Master, and Ph.D.), to assess whether they play a moderating role between the variables.
Funding: This study received no specific financial support.
Competing Interests: The authors declare that they have no competing interests.
Acknowledgement: Both authors contributed equally to the conception and design of the study.
Abbad, M., & Jaber, F. (2014). Evaluating E-learning systems: An empirical investigation on students' perception in higher education area. International Journal of Emerging Technologies in Learning (iJET), 9(4), 27-34. Available at: https://doi.org/10.3991/ijet.v9i4.3480.
Aldholay, A. H., Abdullah, Z., Ramayah, T., Isaac, O., & Mutahar, A. M. (2018). Online learning usage and performance among students within public universities in Yemen. International Journal of Services and Standards, 12(2), 163-179. Available at: https://doi.org/10.1504/ijss.2018.10012964.
Aldholay, A., Abdullah, Z., Isaac, O., & Mutahar, A. M. (2019). Perspective of Yemeni students on use of online learning: Extending the information systems success model with transformational leadership and compatibility. Information Technology & People, 33(1), 106-128. Available at: https://doi.org/10.1108/itp-02-2018-0095.
Aliyyah, R. R., Rachmadtullah, R., Samsudin, A., Syaodih, E., Nurtanto, M., & Tambunan, A. R. S. (2020). The perceptions of primary school teachers of online learning during the COVID-19 pandemic period: A case study in Indonesia. Journal of Ethnic and Cultural Studies, 7(2), 90-109. Available at: https://doi.org/10.29333/ejecs/388.
Almala, A. H. (2006). Who are the key stakeholders in a quality e-learning environment? Distance Learning, 3(4), 1-6.
Almarashdeh, I. (2016). Sharing instructors experience of learning management system: A technology perspective of user satisfaction in distance learning course. Computers in Human Behavior, 63, 249-255. Available at: https://doi.org/10.1016/j.chb.2016.05.013.
Almazán, D. A., Tovar, Y. S., & Quintero, J. M. M. (2017). Influence of information systems on organizational results. Accounting and Administration, 62(2), 321-338. Available at: https://doi.org/10.1016/j.cya.2017.03.001.
Althonayan, M., & Althonayan, A. (2017). E-government system evaluation: The case of users’ performance using ERP systems in higher education. Transforming Government: People, Process and Policy, 11 (3), 306–342. Available at: https://doi.org/10.1108/tg-11-2015-0045.
Aparicio, M., Bacao, F., & Oliveira, T. (2017). Grit in the path to e-learning success. Computers in Human Behavior, 66, 388-399. Available at: https://doi.org/10.1016/j.chb.2016.10.009.
Arkorful, V., & Abaidoo, N. (2015). The role of e-learning, advantages and disadvantages of its adoption in higher education. International Journal of Instructional Technology and Distance Learning, 12(1), 29-42.
Beheshti, H. M., & Beheshti, C. M. (2010). Improving productivity and firm performance with enterprise resource planning. Enterprise Information Systems, 4(4), 445-472. Available at: https://doi.org/10.1080/17517575.2010.511276.
Bhardwaj, A., & Goundar, S. (2018). Student's perspective of E learning and the future of education with MOOCs. International Journal of Computer Science Engineering, 7(5), 248-260.
Chang, M. K., & Cheung, W. (2001). Determinants of the intention to use Internet/WWW at work: A confirmatory study. Information & Management, 39(1), 1-14. Available at: https://doi.org/10.1016/s0378-7206(01)00075-1.
Chen, S., Yan, J., & Ke, Q. (2019). An investigation to the impacts of information systems flexibility on information systems strategy implementation. Paper presented at the International Conference on Human-Computer Interaction.
Chiu, C.-M., Chiu, C.-S., & Chang, H.-C. (2007). Examining the integrated influence of fairness and quality on learners’ satisfaction and Web-based learning continuance intention. Information Systems Journal, 17(3), 271-287. Available at: https://doi.org/10.1111/j.1365-2575.2007.00238.x.
Chopra, G., Madan, P., Jaisingh, P., & Bhaskar, P. (2019). Effectiveness of e-learning portal from students' perspective: A structural equation model (SEM) approach. Interactive Technology and Smart Education, 16(2), 94-116. Available at: https://doi.org/10.1108/itse-05-2018-0027.
Chou, T.-L., Wu, J.-J., & Tsai, C.-C. (2019). Research trends and features of critical thinking studies in e-learning environments: A review. Journal of Educational Computing Research, 57(4), 1038-1077. Available at: https://doi.org/10.1177/0735633118774350.
Cidral, W. A., Oliveira, T., Di Felice, M., & Aparicio, M. (2018). E-learning success determinants: Brazilian empirical study. Computers & Education, 122, 273-290. Available at: https://doi.org/10.1016/j.compedu.2017.12.001.
Clark, R. C., & Mayer, R. E. (2016). E-learning and the science of instruction: Proven guidelines for consumers and designers of multimedia learning. New York: John Wiley & Sons.
Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2003). Applied multiple regression/correlation analysis for the behavioral sciences (3rd ed.). Mahwah, NJ: Erlbaum.
Cronbach, L. J., & Meehl, P. E. (1955). Construct validity in psychological tests. Psychological Bulletin, 52(4), 281-302. Available at: https://doi.org/10.1037/h0040957.
Davis, F. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319-340. Available at: https://doi.org/10.2307/249008.
DeLone, W. H., & McLean, E. R. (1992). Information systems success: The quest for the dependent variable. Information Systems Research, 3(1), 60-95. Available at: https://doi.org/10.1287/isre.3.1.60.
DeLone, W. H., & McLean, E. R. (2003). The DeLone and McLean model of information systems success: A ten-year update. Journal of Management Information Systems, 19(4), 9-30. Available at: https://doi.org/10.1080/07421222.2003.11045748.
Dong, C., Cao, S., & Li, H. (2020). Young children’s online learning during COVID-19 pandemic: Chinese parents’ beliefs and attitudes. Children and Youth Services Review, 118, 105440. Available at: https://doi.org/10.1016/j.childyouth.2020.105440.
Eom, S. B., Wen, H. J., & Ashill, N. (2006). The determinants of students' perceived learning outcomes and satisfaction in university online education: An empirical investigation. Decision Sciences Journal of Innovative Education, 4(2), 215-235. Available at: https://doi.org/10.1111/j.1540-4609.2006.00114.x.
Forza, C. (2002). Survey research in operations management: A process-based perspective. International Journal of Operations & Production Management, 22(2), 152-194. Available at: https://doi.org/10.1108/01443570210414310.
Gong, Y., & Janssen, M. (2010). Measuring process flexibility and agility. Paper presented at the 4th International Conference on Theory and Practice of Electronic Governance.
González, J. A., Jover, L., Cobo, E., & Muñoz, P. (2010). A web-based learning tool improves student performance in statistics: A randomized masked trial. Computers & Education, 55(2), 704-713. Available at: https://doi.org/10.1016/j.compedu.2010.03.003.
Goodhue, D. L. (1995). Understanding user evaluations of information systems. Management Science, 41(12), 1827-1844. Available at: https://doi.org/10.1287/mnsc.41.12.1827.
Green, J. P., Tonidandel, S., & Cortina, J. M. (2016). Getting through the gate: Statistical and methodological issues raised in the reviewing process. Organizational Research Methods, 19(3), 402-432. Available at: https://doi.org/10.1177/1094428116631417.
Gürkut, C., & Nat, M. (2017). Important factors affecting student information system quality and satisfaction. EURASIA Journal of Mathematics, Science and Technology Education, 14(3), 923-932. Available at: https://doi.org/10.12973/ejmste/81147.
Guilford, J. P. (1954). Psychometric methods (2nd ed.). New York: McGraw-Hill.
Haryaka, U., Agus, F., & Kridalaksana, A. H. (2017). User satisfaction model for e-learning using smartphone. Procedia Computer Science, 116, 373-380. Available at: https://doi.org/10.1016/j.procs.2017.10.070.
Helia, V. N., Asri, V. I., Kusrini, E., & Miranda, S. (2018). Modified technology acceptance model for hospital information system evaluation—a case study. Paper presented at the MATEC Web of Conferences.
Hermawan, H. D., Deswila, N., & Yunita, D. N. (2018). Implementation of ITC in education during 2004-2017. Paper presented at the International Symposium on Educational Technology.
Hidayatullah, S., Khouroh, U., Windhyastiti, I., Patalo, R. G., & Waris, A. (2020). Implementation of the DeLone and McLean information system success model for learning based on the zoom application during the covid-19 pandemic. Journal of Information Technology and Management, 6(1), 44-52.
Hinkin, T. R. (1998). A brief tutorial on the development of measures for use in survey questionnaires. Organizational Research Methods, 1(1), 104-121. Available at: https://doi.org/10.1177/109442819800100106.
Hossain, M. A. (2016). Assessing m-health success in Bangladesh: An empirical investigation using IS success models. Journal of Enterprise Information Management, 29(5), 774-796. Available at: https://doi.org/10.1108/jeim-02-2014-0013.
Hsu, P.-F., Yen, H. R., & Chung, J.-C. (2015). Assessing ERP post-implementation success at the individual level: Revisiting the role of service quality. Information & Management, 52(8), 925-942. Available at: https://doi.org/10.1016/j.im.2015.06.009.
Ifinedo, P., & Nahar, N. (2007). ERP systems success: An empirical analysis of how two organizational stakeholder groups prioritize and evaluate relevant measures. Enterprise Information Systems, 1(1), 25-48. Available at: https://doi.org/10.1080/17517570601088539.
Isaac, O., Abdullah, Z., Ramayah, T., & Mutahar, A. M. (2017). Internet usage within government institutions in Yemen: An extended technology acceptance model (TAM) with internet self-efficacy and performance impact. Science International, 29(4), 737-747.
Islam, A. N. (2013). Investigating e-learning system usage outcomes in the university context. Computers & Education, 69, 387-399. Available at: https://doi.org/10.1016/j.compedu.2013.07.037.
Isaac, O., Masoud, Y., Samad, S., & Abdullah, Z. (2016). The mediating effect of strategic implementation between strategy formulation and organizational performance within government institutions in Yemen. Research Journal of Applied Sciences, 11(10), 1002-1013.
Kauffman, H. (2015). A review of predictive factors of student success in and satisfaction with online learning. Research in Learning Technology, 23(1), 1-13. Available at: https://doi.org/10.3402/rlt.v23.26507.
Kintu, M. J., Zhu, C., & Kagambe, E. (2017). Blended learning effectiveness: The relationship between student characteristics, design features and outcomes. International Journal of Educational Technology in Higher Education, 14(1), 1-20. Available at: https://doi.org/10.1186/s41239-017-0043-4.
Li, Y.-S., Yu, W.-P., Liu, C.-F., Shieh, S.-H., & Yang, B.-H. (2014). An exploratory study of the relationship between learning styles and academic performance among students in different nursing programs. Contemporary Nurse, 48(2), 229-239. Available at: https://doi.org/10.1080/10376178.2014.11081945.
Mayer, R. E. (2017). Using multimedia for e-learning. Journal of Computer Assisted Learning, 33(5), 403-423.
McGill, T., Hobbs, V., & Klobas, J. (2003). User developed applications and information systems success: A test of DeLone and McLean's model. Information Resources Management Journal, 16(1), 24-45. Available at: https://doi.org/10.4018/irmj.2003010103.
Muda, I., & Erlina, A. A. (2019). Influence of human resources to the effect of system quality and information quality on the user satisfaction of accrual-based accounting system. Accounting and Administration, 64(2), 100. Available at: https://doi.org/10.22201/fca.24488410e.2019.1667.
Norzaidi, M. D., Chong, S. C., Murali, R., & Salwani, M. I. (2007). Intranet usage and managers' performance in the port industry. Industrial Management & Data Systems, 107(8), 1227-1250. Available at: https://doi.org/10.1108/02635570710822831.
Parasuraman, A., Zeithaml, V. A., & Berry, L. (1988). SERVQUAL: A multiple-item scale for measuring consumer perceptions of service quality. Journal of Retailing, 64(1), 12-40.
Parasuraman, A., Berry, L., & Zeithaml, V. (2002). Refinement and reassessment of the SERVQUAL scale. Journal of Retailing, 67(4), 114-144.
Park, Y. A., & Gretzel, U. (2007). Success factors for destination marketing web sites: A qualitative meta-analysis. Journal of Travel Research, 46(1), 46-63. Available at: https://doi.org/10.1177/0047287507302381.
Petter, S., DeLone, W., & McLean, E. R. (2013). Information systems success: The quest for the independent variables. Journal of Management Information Systems, 29(4), 7-62. Available at: https://doi.org/10.2753/mis0742-1222290401.
Pham, L., Williamson, S., & Berry, R. (2018). Student perceptions of e-learning service quality, e-satisfaction, and e-loyalty. International Journal of Enterprise Information Systems, 14(3), 19-40. Available at: https://doi.org/10.4018/ijeis.2018070102.
Reynolds, P. (2011). UDENTE (universal dental E-learning) a golden opportunity for dental education. Bulletin of the International Group for Scientific Research in Stomatology and Odontology, 50(3), 11-19.
Rogers, E. M. (2010). Diffusion of innovations. New York: Simon and Schuster.
Rowley, J. (2014). Designing and using research questionnaires. Management Research Review, 37(3), 308-330. Available at: https://doi.org/10.1108/mrr-02-2013-0027.
Schwab, K. (2020). The global competitiveness report 2019. Geneva: World Economic Forum. Retrieved from http://weforum.org/TheGlobalCompetitivenessReport2019.pdf. [Accessed Feb 15, 2021].
Sedera, D., & Dey, S. (2013). User expertise in contemporary information systems: Conceptualization, measurement and application. Information & Management, 50(8), 621-637. Available at: https://doi.org/10.1016/j.im.2013.07.004.
Sekaran, U. (2003). Research method for business: A skill-building approach (4th ed.). New York: John Wiley & Sons.
Stodnick, M., & Rogers, P. (2008). Using SERVQUAL to measure the quality of the classroom experience. Decision Sciences Journal of Innovative Education, 6(1), 115-133. Available at: https://doi.org/10.1111/j.1540-4609.2007.00162.x.
Straub, D., Boudreau, M.-C., & Gefen, D. (2004). Validation guidelines for IS positivist research. Communications of the Association for Information systems, 13(1), 380-427. Available at: https://doi.org/10.17705/1cais.01324.
Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425-478. Available at: https://doi.org/10.2307/30036540.
Wahyudi, F., Respati, H., & Ardianto, Y. T. (2017). Study on DAPODIK information system: User satisfaction as mediation of system quality and information quality on net benefit. Information and Knowledge Management, 7(7), 53-62.
Wood, M. S. (2011). A process model of academic entrepreneurship. Business Horizons, 54(2), 153-161. Available at: https://doi.org/10.1016/j.bushor.2010.11.004.
Xinli, H. (2015). Effectiveness of information technology in reducing corruption in China. The Electronic Library, 33(1), 52-64. Available at: https://doi.org/10.1108/el-11-2012-0148.
Yuniarto, D., Suryadi, M., Firmansyah, E., Herdiana, D., & Rahman, A. B. A. (2018). Integrating the readiness and usability models for assessing the information system use. Paper presented at the 6th International Conference on Cyber and IT Service Management.
Zhang, B. (2016). A personalised digital classroom with improved interactive responses. World Transactions on Engineering and Technology Education, 14(1), 95-100.
Views and opinions expressed in this article are those of the author(s); the International Journal of Education and Practice shall not be responsible or answerable for any loss, damage, or liability caused in relation to or arising out of the use of the content.