This study used a self- and peer-assessment activity in Moodle, a learning management system, to investigate the self- and peer-assessment abilities of student-teachers. To enhance self- and peer-assessment in some author-taught courses, students submitted online texts that their peers graded using a grading scale determined by the course facilitators as a rubric, or against aspects approved by the course facilitators. The study aimed to design and investigate self- and peer-assessment activities among 18 students (10 male and 8 female) enrolled in the MPhil/Ph.D. program in STEAM (Science, Technology, Engineering, Arts and Mathematics) Education at Kathmandu University School of Education, Nepal, in 2021. The objective was to identify the most effective strategies for involving student-teachers in the learning and assessment processes, using self- and peer-assessment as a learning and assessment tool and action research as the research methodology. The study followed the action research procedures of planning, intervening, evaluating effectiveness, and sharing the learning processes. To achieve the research objectives, this article discusses some of the Workshop activity's key strengths and applications within cognitive and constructivist theoretical frameworks, among others. The article concludes by outlining the key advantages of self- and peer-assessment in increasing student engagement and improving higher-order skills, namely the analytical and evaluation skills developed through grading self- and peer submissions against a variety of criteria (aspects). The findings show that students take responsibility for their learning, develop metacognitive skills, and adopt a dialogical, shared model of teaching and learning. The Workshop activity is thus a useful self- and peer-assessment tool for higher education.
Keywords: Action research, Aspects, Assessment, Engagement, Evaluating, Grading scale, Self- and peer-assessment.
Received: 2 August 2022 / Revised: 16 September 2022 / Accepted: 4 October 2022 / Published: 19 October 2022
The study contributes a new perspective on self- and peer-assessment mechanisms in higher education. Using self- and peer-assessment as a learning and assessment tool, it identifies the most effective strategies for involving student-teachers in the learning and evaluation processes and highlights significant benefits and applications within cognitive and constructivist theoretical frameworks, chief among them increased student engagement and improved higher-order skills.
During and around the pandemic, digital learning platforms and a variety of tools were integrated into higher education and school education. In developing countries such as Nepal, where web-based course delivery has been used for nearly a decade, facilitators accepted a wide range of student assignment submissions. Examples include uploading files (e.g., .doc, .docx, .ppt, .pptx) to Moodle or other LMSs (Learning Management Systems), administering quizzes, using the collaborative wiki, offering options, and involving students in forum discussions.
Despite the development of innovative pedagogies, current technologies do not adequately support interactive teaching and learning. Growing discussion about alternative approaches to education has raised the valid point that today's tools do not adequately facilitate student-centered instruction. These discussions centered on one question: which tools, especially those included in learning management systems or otherwise easy to use, are most effective for making teaching and learning interactive?
To address the gaps and questions above about enhancing the self- and peer-assessment of student-teachers at our institution, we chose the Workshop activity in Moodle as the self- and peer-assessment tool. In Moodle, it can be used for self- and peer-assessment of reading discussions and reflections on term papers, among other tasks. It provides numerous opportunities for student-teacher interaction, reflection, and the development of higher-order thinking skills. Likewise, "self and peer assessments are considered beneficial for preparing the real teaching practicum and future career development" (Ratminingsih, Artini, & Padmadewi, 2017). We therefore examined how the chosen tool could be used for teaching and learning, and designed this study to investigate and improve self- and peer-assessment skills. Before releasing self- and peer-assessed final grades, the study sought to improve the quality of feedback by grading both the quality of the work and how carefully the feedback was reasoned.
We expected that the peer assessment we conducted would not only assist us in establishing performance expectations and evaluating how well we perform against them (Boud, 2013), but would also reduce the course facilitator's evaluation effort significantly, as the students now perform a substantial portion of this task. However, student-teachers reported that conducting self- and peer-assessments was difficult at first, causing stress, confusion, and uncertainty about whether they were doing it correctly; they should therefore be given regular and continuous practice to become accustomed to it (Ratminingsih et al., 2017). As Tousignant and DesMarchais (2002) had predicted, students' self-perceptions were less accurate than their actual performance. Such reasons necessitate using this strategy alongside other methods to achieve the desired results. Even so, the students better understood the flaws and strengths of their own performance (Handayani & Genisa, 2019; Sanjayanti, Suastra, Suma, & Adnyana, 2022).
Peer- and self-evaluations can also promote intrinsic motivation, self-imposed effort, a mastery goal orientation, and deeper learning through higher-order thinking, as outlined by McMillan and Hearn (2008). Students who are aware of their accomplishments are more likely to work harder to improve their grades. Assessment can be made more effective by encouraging students to participate in the learning process (Khonbi & Sadeghi, 2012). Self- and peer-evaluations help students take more responsibility for their learning and performance (Pantiwati & Husamah, 2017), and the meaning of self- and peer-assessment has been depicted diagrammatically (Zimmerman, 2002). All of these factors are critical to enhancing students' ability to think critically at higher levels (Anderson & Krathwohl, 2001). With the advent of web-enhanced teaching/learning tools such as the Workshop tool, Alcarria, Bordel, and de Andrés (2018) proposed a framework for evaluating peer assessment in MOOCs (Massive Open Online Courses) that takes assignment and review analysis into account. The framework lets students check the originality of their work and reviews, and helps course facilitators detect low-quality feedback and biased reviews.
Further, self- and peer-based formative assessment helps students learn to regulate themselves, and an e-portfolio system makes this process even better (Welsh, 2012). Hence, this article illustrates how beneficial these tools are for classroom instruction and student learning in the Industrial Revolution 4.0 and/or digital age, and looks into ways to improve the design and use of future web-based peer assessment tasks in different courses (Dominguez, Nascimento, Maia, Pedrosa, & Cruz, 2014).
To conduct a systematic investigation into the potential and application of this interactive learning/teaching tool, the study was guided by two research questions: (1) What are the advantages and disadvantages of using Workshop as a self- and peer-assessment tool? (2) How could student-teachers use the tool to improve interactive learning? This article attempts to answer these questions.
Cognitive and constructivist theories guide this study. First, cognitive theory aligns with the transformation of an individual's thoughts during self- and peer-review processes: the process by which something is recognized and acted upon (Belbase & Sanzenbacher, 2016; Freire, 1970). Learners learn something new when they can apply what they have learned in the real world. Adapting to a changing environment and evaluating the work of one's peers are just two of the ways people learn; they can also learn from experience or by revising what they already know. Cognitive theory also holds that information is received, processed, stored, retrieved, and acted upon according to the context in which it was received. For cognitive scientists, learning is a process of processing and remembering information gained from examining one's own and others' work. A tool such as Workshop, one of many available educational software solutions, lets individuals examine their own and their peers' work.
Constructivism, in addition to encouraging students to think critically about new ideas and experiences, encourages them to consider revising some of their own ideas and to build meaningful learning environments while reviewing their peers' work (Dahal et al., 2022; Huang, 2002). The idea behind a constructivist approach to education is to have students construct and deconstruct their own knowledge through peer analysis, experimentation, and self-analysis. There are multiple ways to write when critiquing other people's work, and students try to write from a variety of angles in their own work (Conway, 2022). To do this, students must conduct their own research, investigate the work of others, and evaluate what they have learned (Wasson, 2022). According to radical constructivism, knowledge cannot be passed from one person to the next; rather, learning occurs through "learning by doing" while peer reviews are conducted according to predetermined criteria (as shown in Table 2).
In classrooms aligned with radical constructivism, the teacher's role changes from that of evaluator to facilitator of the evaluation process. In addition, students were free to express their opinions on the work of their peers provided they met certain criteria in their evaluations. There was a high level of enthusiasm among the students throughout the entire evaluation process; they were more eager than ever to share their evaluations of specific aspects of the procedure with others. Even when they made mistakes grading their peers' work, they did not hold back when discussing their issues, and they were able to trace their mistakes to specific aspects of peer work, which reassured us as a group. Our goal was to put students at the center of the evaluation process: we wanted to see whether they could become more than passive recipients of information and instead become creators and assessors of it.
The research design comprised a self- and peer-assessment activity conducted by the course facilitators in one of the MPhil/Ph.D. programs in STEAM Education at Kathmandu University School of Education, Nepal. The goal of the tool's design and implementation was to make quality self- and peer-assessment easier for students and teachers. Students' ability to provide feedback and interact with facilitators or peers improved because of these interactions and engagements. Likewise, the researchers conducted this action research as part of their own teaching practices (Dahal & Pangeni, 2019; Dahal, Luitel, Pant, Shrestha, & Manandhar, 2020; Mertler, 2009), focusing on the course Advanced Qualitative Research in the MPhil/PhD in STEAM Education. The action interventions and their self- and peer-assessment tasks were studied in one of the journal writing assignments in the second semester of the MPhil/PhD in STEAM Education in 2021.
The study was completed after the action research cycles were implemented in several phases. The first phase focused on identifying the problem, which helped determine what type of intervention was needed. During the middle phase, we oriented the students on how to assess their peers' work in the Workshop activity. In the final phase, we analyzed the results by talking to the participants and reading their reflections in the Moodle online protocol. Students in the researchers' own classes were also important participants in the research. Table 1 shows the total number of students who participated, and Table 2 shows the rubric used in the study.
Table 1. Participants in the study.

Course | Level | Course Credit | Male | Female | Total
Advanced Qualitative Research | MPhil in STEAM Education | 3 | 10 | 8 | 18
Table 2. Rubric (aspects) used for self- and peer-assessment.

S.N. | Aspects | Points for submission | Points on assessment
1 | Correctness of language | 10 | 10*
2 | Correctness of list of references | 10 |
3 | Correctness of in-text citation | 10 |
4 | Reflective and autobiographical voice | 10 |
5 | Examples of ideas | 10 |
6 | Argument/critical voice | 10 |
7 | Coherence | 10 |
8 | Format and word length | 10 |
9 | Flow of writing | 10 |
10 | Engagement in scholarly ideas | 10 |
Total points = 110
Note: *Points on assessment are awarded only if the learners submit the work on the allocated date and time.
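The point scheme in Table 2 (ten aspects worth 10 points each, plus 10 assessment points awarded only for on-time submission) can be summarized in a short sketch. This is purely illustrative Python, not Moodle's grade engine; the function name and validation logic are our own.

```python
# Illustrative sketch of Table 2's point scheme: 10 aspects x 10 points,
# plus 10 "points on assessment" awarded only for on-time submission.

ASPECTS = [
    "Correctness of language",
    "Correctness of list of references",
    "Correctness of in-text citation",
    "Reflective and autobiographical voice",
    "Examples of ideas",
    "Argument/critical voice",
    "Coherence",
    "Format and word length",
    "Flow of writing",
    "Engagement in scholarly ideas",
]
MAX_PER_ASPECT = 10
ON_TIME_POINTS = 10  # see the note under Table 2


def rubric_total(scores: dict, submitted_on_time: bool) -> int:
    """Sum the aspect scores and add the on-time assessment points."""
    for aspect, score in scores.items():
        if aspect not in ASPECTS:
            raise ValueError(f"Unknown aspect: {aspect}")
        if not 0 <= score <= MAX_PER_ASPECT:
            raise ValueError(f"Score for {aspect!r} must be 0-{MAX_PER_ASPECT}")
    total = sum(scores.values())
    if submitted_on_time:
        total += ON_TIME_POINTS
    return total  # maximum possible: 10 * 10 + 10 = 110


perfect = {aspect: 10 for aspect in ASPECTS}
print(rubric_total(perfect, submitted_on_time=True))   # 110
print(rubric_total(perfect, submitted_on_time=False))  # 100
```

The maximum of 110 matches the "Total points = 110" row: 100 points come from the ten aspects and 10 from the on-time assessment component.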
To answer the research questions outlined above, a variety of techniques were used to collect data: a survey, an interview, phone calls, and informal discussions. Course facilitators received training on setting up and using the tool for self- and peer-assessment. All the processes were documented and reported. However, there was no fixed plan for collecting data in this study; we used an iterative action process during the phases of identifying the problem, exploring, evaluating, and meaning-making.
The findings from each stage of the research cycle are analyzed and discussed in this section, covering the use of self- and peer-assessment tools on online platforms and what that means for higher education in terms of interaction, self- and peer-assessment, and reflective learning in both on-campus and blended modes.
4.1. Opportunities and Challenges of the Tool
Listening to what students had to say about their experiences working on peer review assignments was incredibly rewarding. Because the assignment appealed to them, students drew heavily on their own conceptualizations from various sources, and we did not expect them to share the researchers' views. As part of our preparations for the Workshop, we conducted a survey and an interview. All students agreed that peer review was an excellent way to learn as well as to evaluate one's peers, and the overwhelming majority agreed that the assignment requiring peer reviews gave them the best opportunity both to correct themselves and to be corrected by their peers.
Additionally, when the peer assessment feature is activated, teachers can assign individual students a set number of peer submissions to grade and comment on. For each aspect they assess, students receive a score that is added to the grade for submitting their own work on time, and together these determine the final grade for the assignment. The workshop's primary goal was therefore to foster an environment that encourages students to provide feedback on their classmates' work and to learn from one another in a collaborative learning environment. For students to gain a better understanding of what is being taught in the text, they must be able to evaluate their own and their peers' work as part of the learning process (Dahal & Pangeni, 2019).
The comments and advice students receive from their peers give them additional, often all-encompassing, perspectives on their individual efforts. This feedback can help them identify areas for improvement in their work that they may not see on their own. When the self-assessment feature in Workshop is activated, a student may be given their own work to evaluate and comment on; they then receive a grade based on the evaluation of their own work, which is included in their overall grade. The overall grade combines this grade with the one received for submitting the work and with the self- and peer-assessment grades. This activity lets participants determine whether students can identify their strengths and weaknesses and revisit their work objectively.
Complicating matters, students' self-assessment strategies focus solely on their own activities, whereas their peer-assessment strategies consider both their own and their classmates' actions. Students can improve their higher-order thinking skills by practicing the multiple metacognitive skills developed through self- and peer-assessment. As a result, peer learning has the potential to transform educational platforms into environments that foster critical thinkers capable of weighing the advantages and disadvantages of different ideas or points of view (Spiller, 2009). Addressing the problem of disagreement among reviewers, Wang, Liang, Liu, and Liu (2014) proposed an approach to detect what they called non-consensus: an activity is considered non-consensual if two or more students disagree on its reasonableness. In this regard, Shiba and Sugawara (2014) developed a trusted-network evaluation model to gauge how well students give and receive feedback in groups; in higher education, these groups can be formed and rearranged at any time during the semester.
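A minimal way to operationalize that notion of non-consensus is sketched below. This is our illustration, not the algorithm of Wang et al. (2014); the threshold value is an arbitrary assumption.

```python
# Illustrative sketch only: flag a submission as "non-consensus" when two or
# more peer grades diverge by more than a chosen threshold, so the course
# facilitator can review the disagreement manually.

def is_non_consensus(peer_grades: list, threshold: float = 20.0) -> bool:
    """Return True when reviewers disagree beyond `threshold` points."""
    if len(peer_grades) < 2:
        return False  # a single reviewer cannot disagree with anyone
    return max(peer_grades) - min(peer_grades) > threshold


print(is_non_consensus([85, 88, 90]))  # False: reviewers broadly agree
print(is_non_consensus([40, 95]))      # True: flag for facilitator review
```

Flagged submissions could then be regraded by the facilitator, mirroring how biased or low-quality reviews are caught in the frameworks cited above.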
Students who participated in our study shared that the activities helped them better understand their own writing style and improve their writing skills by critiquing the work of their peers. Many of the students agreed that the use of peer review helped them cultivate a culture of idea-sharing for the purpose of learning in a cooperative setting. Peer review helped some students come up with new ideas and concepts, learn new vocabulary and sentence structures, and better understand how to compare or contrast different aspects of the same thing. The majority of participants agreed that the workshop provided opportunities for learning through self- and peer-review of one’s own and others’ writing and expression of ideas.
Those who participated in the study found the peer review process both rewarding and challenging. Overwhelmingly, participants agreed that the Workshop process improved communication, though they found the quality of their own writing difficult to defend. Students face several types of challenges: understanding grading and marking, “content-based analysis, time restraints in completing the process, liberalism” (Dahal & Pangeni, 2019), and a lack of knowledge about certain aspects (such as the rubric items listed in Table 2). Some had difficulty grading their peers because they feared their classmates would complain about their grades; to counter this, they had to put in twice the effort to complete a single task. Students are required to review at least one classmate's work in the course of submitting their own work, so they must be able to make sound decisions and thoroughly understand the assignment. This tool could play a role in changing the culture of learning in higher education because it helps students learn while they are being graded (Dahal, 2019; Dahal & Pangeni, 2019).
4.2. Students Performing as Reviewers: A Chance
Once the course facilitators had set up the activity in their LMS course block and reached the submission stage, the LMS could start accepting student work. Before submitting their own work, students could review a sample submission set up by the course facilitators, making it easier to understand how the feature works. The large majority of students submitted an online text or a file attachment. Each student's work underwent peer review in the assessment stage, which commenced once the submission phase had concluded.
Table 2 shows examples of rubrics that can be used for evaluation. The first step in this process was to determine how students would give and receive feedback and how they would justify the grades they assigned to one another. In the course of our investigation, we discovered that students applied the provided criteria rigidly. Comments and other forms of feedback can be extremely valuable, yet some students did not address all the evaluation components they were expected to. Their comments and feedback were so general that the researchers suspected favoritism may have been a factor: instead of explaining their evaluations, some students gave their peers 100% scores without justification. Furthermore, when students viewed their grades, comments, and feedback, they did not stop to consider what their peers thought of the individual components. The overall feedback, as well as each individual comment, was excessively generic.
A major part of the evaluation process should focus on ensuring fairness in how one student marks the students they evaluate and in the feedback they give. Preventing favoritism and bias is another issue the course facilitator has to deal with. For this reason, instructors can give students accurate information about their assessment scores (for example, students receive a low assessment score if they do not give detailed comments or feedback).
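A simple heuristic along these lines could look like the sketch below. The word-count rule and penalty value are assumptions for illustration, not a Moodle feature.

```python
# Hypothetical heuristic, not a Moodle feature: scale down an assessment
# grade when the written feedback is too short to be useful, as the text
# above suggests facilitators might do.

def feedback_penalty(comment: str, min_words: int = 15, penalty: float = 0.5) -> float:
    """Return a multiplier for the assessment grade: 1.0 for detailed
    feedback, `penalty` when the comment has fewer than `min_words` words."""
    return 1.0 if len(comment.split()) >= min_words else penalty


detailed = ("The argument in section two is clear, but the in-text citations "
            "do not match the reference list; consider revising the APA format.")
print(feedback_penalty(detailed))    # 1.0
print(feedback_penalty("Good job"))  # 0.5
```

A real deployment would need a richer measure of feedback quality than word count, but even a crude rule discourages the generic "100% without reason" reviews described above.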
4.3. Workshop Tool for Facilitators
The course facilitators significantly influenced the activity's planning and implementation. They must think about how the activity will be evaluated and implemented, in terms of both ideas and technology. While it is still important for facilitators to be available to help students with any issues, once the activity is set up with all the essential elements, such as instructions for submission and grading, their role diminishes. The activity runs smoothly only if submissions are assigned to reviewers manually or automatically and if the transitions between phases are handled carefully.
This kind of activity also makes it easier for course leaders to keep track of grades and to assign work for peer review, grading, and comments. Likewise, the average of the submission and assessment grades is calculated automatically, so course leaders need not do as much manual work when teaching and grading small or large groups of students. Some instructors nevertheless did not set up the activity because they considered email just as easy; instead, they emailed student submissions to others and asked for their opinions. The study found the following reasons why some course leaders did not use the Moodle-based Workshop activity: (i) "I don't have enough time to learn how to use the new tool"; (ii) "I'm not sure how the system works"; and (iii) "I'm used to communicating with students through email."
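The automatic grade aggregation described above can be sketched as follows. The 80/20 weighting and the function name are illustrative assumptions, not Moodle's fixed defaults.

```python
# Minimal sketch of combining a submission grade (from peer reviews received)
# with an assessment grade (earned for the reviews a student wrote).
# The weights are illustrative assumptions, not Moodle's exact behavior.
from statistics import mean


def final_grade(peer_grades_received: list,
                assessment_grades_given: list,
                submission_weight: float = 0.8) -> float:
    """Average each component, then combine them with the given weight."""
    submission = mean(peer_grades_received)
    assessment = mean(assessment_grades_given)
    return submission_weight * submission + (1 - submission_weight) * assessment


# A student whose submission averaged 90 and whose reviews earned 70:
print(final_grade([88, 92], [70]))  # 0.8*90 + 0.2*70, i.e. about 86.0
```

Because both components are averaged automatically, the facilitator only intervenes when a grade looks unfair, which is exactly the reduction in manual effort the paragraph describes.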
Adding grades to submissions and assessments is another interesting feature of this system. Teachers can add their own grades to each student's record or accept the ratings given by peers. Lastly, teachers can discourage students from giving perfect scores to their peers by hiding the author and reviewer names when assigning review work: when a student cannot see the name of the author or reviewer, they are more likely to give an honest rating. On the other hand, teachers need to know exactly what they are doing when setting up the activity.
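One way such anonymous allocation could work is sketched below. This is a hypothetical round-robin scheme of our own, not Moodle's allocation code; the names and labels are made up for illustration.

```python
# Hypothetical sketch of blind peer-review allocation: each reviewer receives
# opaque submission IDs instead of author names, echoing the anonymity
# option discussed above.
import random


def allocate_blind_reviews(students: list, reviews_each: int = 1,
                           seed: int = 0) -> dict:
    """Map each reviewer to anonymized submission IDs, never their own."""
    rng = random.Random(seed)
    order = students[:]
    rng.shuffle(order)
    # Assign opaque labels, then round-robin: student i reviews the
    # submissions of the next `reviews_each` students in the shuffled order.
    anon_id = {name: f"submission-{i:03d}" for i, name in enumerate(order)}
    n = len(order)
    return {
        order[i]: [anon_id[order[(i + j) % n]] for j in range(1, reviews_each + 1)]
        for i in range(n)
    }


alloc = allocate_blind_reviews(["Asha", "Bibek", "Chandra", "Devi"], reviews_each=2)
# Every reviewer gets two anonymized submissions, and never their own.
```

The round-robin step guarantees no self-review as long as `reviews_each` is smaller than the number of students, and the opaque labels keep both author and reviewer identities hidden.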
4.4. Experiences of Participants with the Tool
The process of self- and peer-assessment was completely unfamiliar to the students who took part, yet they found the peer review process and the subsequent assessment enjoyable. Self-awareness could be seen in how they used standard language, elaborated on their arguments, judged their friends' work fairly, and completed their assignments back-to-back. They also mentioned how difficult it was to be honest about their own work and compare it to the work of others. Some participants had never done peer review before and had never used the LMS Workshop feature for it; as a result, some learners felt at ease, while others had no idea how to proceed. In spite of this, as stated in the section on opportunities, all participants agreed that the Workshop provides learners with opportunities to gain knowledge. Learning can also be improved by comparing one's work to peers' work on the same task, learning from how others approach their work, and examining how friends grade.
Students shared their general experiences with submitting work and getting feedback. They said it was not hard to upload their work on time, and all participants completed and submitted their tasks on schedule. One participant, however, uploaded the wrong file and could not change his submission because no editing options were available; he later asked the course facilitator to make the necessary changes so that his work could be reviewed by his peers. Some participants struggled with peer review because they had to judge three other classmates, which they felt was too much work at once. It was also hard to judge classmates' work by comparing it to what they had already learned, making connections between ideas and the learning process, and evaluating the depth of the content. But some participants insisted it was straightforward because of the clarity of the review criteria and the items to be marked.
During the research process, the researchers reflected on what they had learned. First, configuring the icon and the various settings options for the Workshop in the LMS was simple. When we placed the cursor over the light bulb icon, we noticed the “Switch phase” option; this was, in fact, unintentional learning, as a quick search had not surfaced the idea. The system did not work with “automatic switching” as indicated in the settings. After learning the icon's purpose, we were eager to continue. Several other course instructors who were part of the study were given individual orientations; even though some of them agreed and explored the process, they never used it. Because course facilitators had never implemented this type of activity in other courses, we were unable to conduct a larger study on the topic.
Graduate students who took part in the activity were able to learn how to assess students in blended and/or on-campus modes by implementing it in a graduate course. The Workshop gives us a way to evaluate students' submissions through self- and peer-assessment, and a new method for enticing students to learn. As a starting point for both learning and assessment, it is essential: the tool lets individuals keep track of peer reviews and assignments, and anonymizing authors and reviewers is a distinct advantage in this regard. It is one of the most important resources for online and distance learners' self- and peer-assessments. Students can help their peers by making insightful comments and providing constructive feedback, and peer-review activities can keep students engaged when the course facilitators are occupied with their regular academic responsibilities. Facilitators must devote their full attention to learning the software before setting up the Workshop and moving between phases. The research process was a lot of fun for us. Whether the course is online, distance, or face-to-face, this activity can be an effective tool for students to assess their own and their peers' work.
Likewise, the primary objective of the action research was to examine how self- and peer-assessment could be used in e-learning courses. The discussions of the tool's opportunities and challenges, students performing as reviewers, the Workshop tool for facilitators, and participants' experiences with the tool suggest that engaging students in self- and peer-assessment activities supports learning in higher education regardless of the course. This study, though brief and limited to one journal evaluation task, helped us identify issues and begin adjusting the Moodle-based assessment system to address them. The first step was to illustrate the Workshop tool and evaluate how well it functions. The Workshop activity accepts submissions in a variety of formats (Word, PowerPoint, and online text, to name a few). It is simple to set up, works without issues, makes teachers' jobs easier, and gives students more control over their evaluation processes by allowing them to read and evaluate their peers' work. Activities of this nature benefit both students and teachers in various ways. Therefore, the main goal should be to ensure that this type of self- and peer-assessment tool is used in every class at least once per semester.
It would also be beneficial to conduct this type of research over a longer time frame, covering how such an assessment tool is used in the courses departments and colleges offer each semester. A variety of lessons and exercises could illustrate how this operates. We also assisted teachers and students in using self- and peer-assessment to learn. Finally, these findings suggest that the tool is effective in university courses across schools and departments. The technological tools used in the Workshop are also novel and have the potential to change the way teachers teach, but limited technical skills can make it hard for course facilitators to create, assemble, and manage the activity. Facilitators should therefore attend internal faculty training sessions to get the most out of the Workshop activity tool. These activities are simple to plan and execute for high-quality learning. Regardless of the learning subject or context, assessment that encourages higher-order thinking in higher education is creative and innovative. In the end, the Moodle Workshop presents opportunities and challenges for teachers and students in higher education to refine their learning and their ways of evaluating.
Funding: This work is supported by the University Grants Commission, Nepal (Award No.: PhD-77/78-Edu-05) and the Rupantaran Project at Kathmandu University School of Education, Hattiban, Lalitpur, Nepal (Grant number: NORHAD 2017-2023).
Competing Interests: The authors declare that they have no competing interests.
Authors' Contributions: All authors contributed equally to the conception and design of the study.
Views and opinions expressed in this article are those of the author(s); the International Journal of Education and Practice shall not be responsible or answerable for any loss, damage, or liability caused in relation to or arising out of the use of the content.