Subconsciously, experienced teachers make use of situated knowledge: knowledge that is connected to, and arises from, the interactions between the physical environment where the action takes place and a material body acting in that environment. Since situated knowledge is important for resolving incidents in day-to-day teaching, it should be taught in teacher training courses, and the corresponding learning objectives should be assessed in the test at the end of such a course. Situated knowledge can be assessed by using cases and by setting higher learning objectives that explicitly address the situated nature of this knowledge. The main question of this study is: Do teacher trainers use cases that are aimed at acquiring situated knowledge? An empirical study, carried out in eleven secondary teacher-training programs in the Netherlands, revealed that just a single program did indeed assess situated knowledge. Considering its importance, however, it is essential that situated knowledge is included in all secondary teacher-training programs. To assist teacher training programs in achieving this goal, this article offers recommendations for implementing tests with cases to assess situated knowledge.
Keywords: Video cases, Teacher training, Knowledge development, Learning objectives, Tests, Pre-service teachers, Situated knowledge
Received: 5 February 2018 / Revised: 10 May 2018 / Accepted: 15 May 2018 / Published: 17 May 2018
Subconsciously, experienced teachers use situated knowledge. This knowledge is the result of thought processes that are established through, and related to, interactions between an actor's body and the physical environment in which the actions take place (Roth and Jornet, 2013). Situated knowledge thus has an embedded as well as an embodied character: situational experiences result in situated knowledge. Situated knowledge is holistic, readily available and linked to specific situations, and it is used to resolve incidents in the teacher's teaching practice. It is produced through direct interaction between earlier experiences and the present day-to-day teaching setting (Putnam and Borko, 2000; Borko, 2004). Situated knowledge differs from cognitive knowledge in the view of the teaching process it implies. According to the wider view that situated knowledge takes, teacher training should not just focus on acquiring knowledge, but should also teach the teacher in training how to function in an environment that is complex, constantly changing, and involves other people as well as materials. Situated knowledge is required to do so, and its value has been understood for decades (Brown et al., 1989; Greeno, 1997; Putnam and Borko, 2000; Opfer and Pedder, 2011). Teachers in training have limited workplace experience, and their situated knowledge must be further developed for them to teach successfully in various contexts. Teacher educators should therefore explicitly include situated knowledge in the learning objectives of the courses they teach. This study aims to provide them with practical advice for this process, so that they can better assist teachers in training in developing their situated knowledge.
This article elucidates to what extent situated knowledge is pursued in the teacher training curriculum. First, it delineates how watching and analyzing video cases can contribute to the acquisition of situated knowledge. Second, it determines whether this method is included in the current curriculum by examining the summative tests used in vocational teacher training programs at Dutch universities of applied sciences. By checking these tests for learning objectives that focus on situated knowledge, the suitability of the curriculum for the acquisition of situated knowledge by students is determined.
The acquisition of situated knowledge can be facilitated by setting higher learning objectives, working on them in courses, and testing them in the summative assessment at the end of those courses. Bloom's taxonomy is most commonly used for categorizing learning objectives in teacher training programs (Furst, 1981; Krathwohl, 2002; Athanassiou et al., 2003). This taxonomy describes six cognitive levels, ranked in a hierarchy: Remember, Understand, Apply, Analyze, Evaluate, and Create. Remember, Understand and Apply are known as lower learning objectives; Analyze, Evaluate and Create are considered higher learning objectives. All six levels can be linked to the kinds of knowledge that have to be learned during a course. For example, the lower levels can be linked to knowledge of facts and procedures (Momsen et al., 2010). The higher learning objectives, on the other hand, indicate, for example, analyzing and evaluating complex teaching situations. The aim of these higher learning objectives is for teachers in training to reflect on their teaching practice and the way it relates to theoretical concepts and strategies; doing so can assist them in preparing their own forthcoming teaching practice. As these teachers in training must ultimately become effective, both embodied and embedded, actors in complex situations, it is important that higher learning objectives address knowledge in its situated form. The use of higher learning objectives can be facilitated through cases that allow the development of situated knowledge, meaning cases that contain both holistic and contextual knowledge.
Teachers in training can link theoretical knowledge to unfamiliar and complex situations by studying written or filmed cases. Written cases, however, have the drawback that the holistic character of a real situation is lost in its written description (Geerts et al., 2015). Video cases more closely correspond to the way teachers encounter pedagogical and didactical problems, as their information is presented in a contextual and holistic manner (Blijleven, 2005) and can therefore contribute to the acquisition of situated knowledge.
Teachers in training have to be able to discriminate between important and unimportant aspects of the situation depicted in a case in order to make sense of it. Blijleven (2005) showed that this is impossible with written cases, because such cases have often been structured by the author. When teachers in training watch video cases and subsequently apply their theoretical knowledge, they are not only able to analyze real-world teaching situations, but can also identify how experienced teachers deal with such situations (Kurz et al., 2004; Blijleven, 2005). Analyzing real-life situations allows them to step out of their teaching role and objectively observe the educational situation 'from a distance' (van Es and Sherin, 2002; Rosaen et al., 2008). Video analysis reinforces the teacher in training's belief that they can acquire the skills, knowledge and attitudes necessary to function effectively as a teacher (Shulman, 1992). Hence, the use of video cases is a prime method for assisting teachers in training to pursue the higher learning objectives that are needed for the development of situated knowledge, although practical experience in the classroom remains important as well.
The higher learning objectives to be achieved by teachers in training in a situated context in any teacher training course should be reflected in the corresponding assessment. The term constructive alignment is used by Biggs and So-Kum (2011) to indicate that the contents of a course, its test and its learning objectives should all be aligned. The construction of a test should therefore be guided by the learning objectives of the course. Further, a summative test with content validity echoes the contents of the course; so, if higher learning objectives form the foundation of the course, they must be addressed in the summative test (Hamp-Lyons, 1997; Spratt, 2005). If the objectives of the test do not correspond to the objectives of the course, students will merely study for the test (Hamp-Lyons, 1997). There are several requirements for a test that aims to assess situated knowledge, and representative summative tests should follow these requirements in order to be valid. Multiple sets of requirements will now be examined to determine a balanced set.
The best way to assess situated knowledge is an authentic assessment. This would require teachers in training to solve a realistic, life-like problem (Gulikers et al., 2008; Brush and Saye, 2014). However, it is virtually impossible to place teachers in front of a classroom and then wait for a situation that is suitable for testing their knowledge. Because of this, and for reasons of efficiency, authentic assessment is regularly simulated using an authentic (video) case. Video cases are readily available, which allows teacher trainers to plan their assessments efficiently. When confronted with summative tests comprised of authentic cases that describe realistic situations, teachers in training are able to practically apply what they have learned during the course (Wiggins, 1998; Brush and Saye, 2014).
A case serves to simulate a teaching reality. It is authentic when (Wiggins, 1998; Darling-Hammond and Snyder, 2000; Gulikers et al., 2008; Ploegman and De Bie, 2008):
• The problem is realistic;
• Teachers in training are required to evaluate the situation and provide a solution of their own (innovation);
• It entails actively dealing with a given situation;
• It contains a realistic context that a professional would regularly deal with;
• It assesses how effectively and efficiently teachers in training are able to complete a complex task by using a sizable repertoire of skills and knowledge;
• It provides teachers in training with a possibility to practice, allowing them to use their resources and receive feedback to improve performance and achieve better outcomes.
For cases, length is not a prerequisite for being authentic. Compared to longer cases, short cases have the advantage that their reliability and validity are greater (van Berkel and Bax, 2006). The teacher in training will need to use their situated knowledge to answer the questions related to the case. This knowledge consists, among other things, of a wide array of context-specific knowledge and the capacity to deal with incidents in the day-to-day teaching practice. Several short cases can be used in an hour of testing time, which increases the summative test's validity. In addition, cases demonstrate that experts can differ greatly in the way they handle complex situations while achieving equal outcomes. This mechanism is called idiosyncrasy: experts develop individual ways of dealing with similar problems (Regehr and Norman, 1996; Adams and Wieman, 2011) based on their personal situated knowledge. Moreover, experts can allow themselves to skip steps in the problem-solving process, as they are more efficient than non-experts (Regehr and Norman, 1996). The first requirement, therefore, is that the case needs to contain an authentic problem.
Experts use a wide variety of effective problem-solving processes. To account for this, students should be exposed to a variety of representative cases; in this way, it can be tested whether teachers in training can flexibly apply their problem-solving skills. This prompts the second requirement for testing with cases: a test should contain multiple short cases, as this increases the test's validity.
When testing is involved, a written test is usually the first thing that springs to mind, and it is also the most frequently used test type in Dutch teacher-training education. When it comes to testing situated knowledge, however, oral tests are far more realistic and authentic. Furthermore, a more complete and accurate picture of the teacher in training can be obtained from the results of an oral test (Huxham et al., 2012). The answers to an oral test are part of a conversation, which makes it possible to react immediately to an answer and to clarify it or add to it. The third requirement, therefore, is that a test assessing situated knowledge should contain exclusively oral questions or a combination of written and oral questions; this increases the reliability of the test.
Situated knowledge is required to solve a problem related to a case. We will use the following example of such a problem to illustrate this: a classroom situation that is characterized by a certain amount of disorder. Test questions related to this situation, that are designed to test situated knowledge, need to focus on key feature decisions: essential decisions that are required to solve the problem (Farmer and Page, 2005; van Berkel and Bax, 2006; Opfer and Pedder, 2011). In the example of the disorderly classroom, the teacher in training needs to answer these questions: 1) Are the pupils causing the disorder or is it caused by the teacher? 2) Is the disorder caused by the layout of the lesson or by classroom management? 3) Does the disorder result from the strategy the teacher chose for this lesson or by the way that strategy is implemented by the teacher? 4) Should the teacher immediately interfere or wait? In conclusion, the fourth requirement for testing with cases is that the questions should be constructed using key features.
Suitable test questions for testing key features should contain verbs that reflect higher learning objectives. Whether a test item focuses on analysis, evaluation or creation is indicated by the verb used in that question (van Berkel and Bax, 2006). Analysis can be assessed using verbs such as 'distinguish', 'relate' and 'clarify'. For evaluation, verbs like 'interpret', 'justify' and 'appreciate' are used. For creation, finally, questions are written with verbs such as 'revise' or 'design'. Test questions reflecting the situated character of higher learning objectives should use verbs like the ones mentioned here; a test with such questions is better suited to assess situated knowledge. Questions that focus on higher learning objectives should comprise the majority of the test, which is the fifth requirement for testing with cases. This requirement can be expanded on using six additions stated by Wiggins (1998). He proposes that test questions on higher learning objectives need to be formulated so that they:
• can assess whether students have understood the complete situation;
• can assess if students have understood the actual goals of the skills, actions and knowledge, rather than merely implementing a plan of action;
• require students to change their perspective;
• are able to assess the completeness and precision of the knowledge independently from understanding it;
• test the student’s self-knowledge;
• focus on creating, analyzing and evaluating.
In addition, it is important not to focus all of the test questions on higher learning objectives when assessing situated knowledge, as lower learning objectives (such as factual knowledge) require test questions of their own (Wiggins, 1998). In sum, higher learning objectives pertain to the test as a whole as well as to specific test questions. Therefore, the fifth requirement for testing with cases is that the majority of the questions in the test should focus on higher learning objectives.
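The verb-based categorization described above can be sketched in code. The snippet below is an illustrative sketch, not part of the study: the `BLOOM_VERBS` mapping and the `bloom_level` helper are hypothetical names, and the verb lists are those mentioned in the text.

```python
# Hypothetical sketch: flag which higher learning objective a test question
# appears to target, based on the verb it uses (verb lists from the text).
BLOOM_VERBS = {
    "analyze": {"distinguish", "relate", "clarify"},
    "evaluate": {"interpret", "justify", "appreciate"},
    "create": {"revise", "design"},
}

def bloom_level(question):
    """Return the higher learning objective a question appears to target,
    or None if none of the listed verbs occurs in it."""
    words = question.lower().split()
    for level, verbs in BLOOM_VERBS.items():
        if any(verb in words for verb in verbs):
            return level
    return None

print(bloom_level("Justify your choice of intervention"))  # evaluate
```

A helper like this could only support, not replace, the judgement of a test constructor: the verb indicates the intended level, but whether the question genuinely requires analysis, evaluation or creation still has to be assessed by reading it.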
Finally, 'overarching questions' and their place in the test will be considered. These questions offer students insight into the practical application of the course in their future teaching practice (Wiggins, 1998) and are central to the course. Because these broad questions require students to draw on various elements of their knowledge to fully grasp the subject matter, they enable students to enlarge their situated knowledge. "How can you define good teaching?" and "Are there any recent developments in education?" are examples of overarching questions. Thus, the final requirement for properly including cases in a test is that overarching questions are included.
In this introduction, the importance of assessing higher learning objectives aimed at situated knowledge is shown. This kind of assessment is best done with (video) cases. Six requirements for the construction of such tests, using suitable test questions, were derived from the literature:
1 An authentic problem should be present in the case.
2 To increase validity, the test comprises multiple short cases.
3 To increase its reliability, only oral questions or a combination of oral and written questions are contained in the test.
4 Key features are used in the test construction.
5 Most of the questions in the test focus on higher learning objectives.
6 Overarching questions are included in the case tests.
The previous section argued that the development of situated knowledge is a vital feature of appropriate teacher training. If situated knowledge acquisition is an aim of teacher training curricula, this should be echoed in the contents of the courses, and the contents of a course should in turn be reflected in its tests. When teachers in training know the content and format of a test during the course itself, their learning processes are shaped by that test. A relatively straightforward way to determine the extent to which the course content facilitates the acquisition of situated knowledge is therefore to examine the test. Consequently, the main research question of this study is: Do teacher trainers use summative case tests that are aimed at situated knowledge acquisition?
To answer this main question, we explore whether the summative case tests used at the completion of courses meet the six requirements for situated knowledge testing. Each of the courses studied is part of an accredited teacher-training program. The fifth requirement is particularly important, as a case test can only focus on situated knowledge if its questions test higher learning objectives. The first step, therefore, is to determine whether the test questions target higher learning objectives. The first hypothesis is that most of the test questions focus on the higher learning objectives that define situated knowledge. If the fifth requirement is not met, the case cannot be considered to test situated knowledge and the first hypothesis is rejected; only if the questions are aimed at higher learning objectives do the remaining five requirements have meaning. If the findings support the first hypothesis, however, this does not in itself establish that the test is aimed at testing situated knowledge: that can only be established by also meeting the remaining five requirements. The second hypothesis, therefore, is that tests are generally constructed in accordance with the six requirements for tests with cases.
3.1. Sample Survey and Response
All eleven accredited secondary-teacher-education institutes in the Netherlands were asked to submit a summative test from their vocational training used at a course's conclusion. These institutes were accredited by the independent Dutch-Flemish accreditation organization (NVAO), which determined that their courses meet the quality requirements. Ten institutes submitted a test that they expected to address the acquisition of situated knowledge; these tests were included in this study. The request for a test explicitly mentioned 'higher learning objectives', and the submitted tests aim at the higher learning objectives that teachers in training need to achieve. All tests were in Dutch, were used in the academic years 2011-2012 or 2012-2013, and served as formal evaluations of course results in vocational training (summative assessment). Only a single institute sent in a test that included a video case; all others used written cases. The one test containing a video case also comprised five written cases. Considerable variation was found between the cases, both in subject and in length. As an example, a written case and its accompanying test questions are given below. This case is representative of the cases used in the study in terms of level, amount of detail and length:
It is the end of October. Several teachers have indicated that Lenny from class 1A is a difficult pupil. The team leader is getting complaints about him, and Lenny has been sent out of the classroom on more than one occasion, which is more than usual for a pupil his age.
After consulting the mentor, the team leader decides to set up a protocol. Lenny is given a separate desk and is first given a warning if he shows disruptive behavior. If that doesn’t help, he is moved to the front of the class and put to work copying lines. If he continues to be difficult, he must leave the class and report to the office. If he behaves well, he is complimented, and it is noted in the class ledger.
The team leader and mentor have created the protocol together and sent it to the teachers of 1A. The email opens with 'Due to Lenny's behavior, we have come to the following agreements', followed by the description given above.
Unfortunately, these measures have not worked.
A. At what point in the process concerning Lenny do you think it went wrong? Give three explanations for the failure of the team leader and mentor’s measures (3 points).
B. As Lenny’s mentor, how would you tackle the problem? Explain your choice (2 points).
(From ‘Test 7’ in this study)
3.2. Materials
To learn how higher learning objectives are measured by the summative tests, an assessment form was custom-made using the six requirements for testing with cases detailed in the previous section. One aspect, i.e. that an authentic case should "[give] teachers in training the opportunity to repeat and practice, using resources and gaining feedback to enhance their performance and get better learning outcomes" (Wiggins, 1998), has not been included on the form, because it predominantly applies to completing a course rather than to test construction. The assessment form itself was originally written in Dutch. From the previous section it can be concluded that the six requirements vary in nature: for some requirements a single question can suffice to determine whether they are met, while others need multiple questions to assess to what extent their aspects are met. To determine whether a requirement was met, the list of criteria in Table 1 was used. The order of these criteria has been adjusted to optimize the scoring procedure.
Table-1. Overview of the requirements in the assessment form (Adapted from Educative assessment: Designing assessment to inform and improve student performance, p. 141, by Wiggins (1998) San Francisco: Jossey-Bass Inc. Publishers. Adapted with permission.)
| Requirement | Number of aspects on the assessment form | Requirement is met if: |
| --- | --- | --- |
| 1 The case has to contain an authentic problem | 5 | ≥ 3 aspects |
| 2 The test contains multiple short cases in order to increase validity | – | ≥ 3 cases |
| 3 The test consists of just oral questions, or both oral and written questions, in order to increase its reliability | 2 | ≥ 1 oral question and ≥ 1 written question |
| 4 The test questions have been formulated using key terms | 1 | ≥ 3 key terms for at least half the number of cases |
| 5 The majority of the questions test higher learning objectives | 6 | A positive score on ≥ 5 of the 6 aspects |
| 6 The test with cases includes overarching questions | 1 | ≥ 1 overarching question |
3.3. Procedures
The summative tests currently in use were selected by senior learning plan experts at universities of applied sciences in the Netherlands. These tests were used to examine to what extent a test meets the requirements for a test with cases. The learning plan experts received the following instructions for selecting a test: 1) Choose a summative test on vocational training that contains a video case. If there are two or more summative tests containing a video case, select the one that contains the most case-related questions. 2) If there are no summative tests on vocational training that contain a video case, select a summative test that contains a written case, following the instructions under 1). 3) If no summative test on vocational training includes either a video or a written case, no test is to be submitted.
Experienced teacher educators assessed the tests with cases that were obtained in this way, using the newly designed assessment form. From each of the three teacher trainer departments at the NHL University of Applied Sciences (Language, Social Science and Science), two teacher educators were selected. These six teacher educators were informed that they would be using an assessment form to evaluate the tests. Only yes or no answers could be entered on the form, and the tests were assigned to the assessors at random. Forty minutes were allotted per test evaluation; a random sample measurement beforehand showed this to be sufficient. Each teacher educator evaluated five tests individually, so that every test was evaluated three times, each time by a teacher educator from a different department. This was done to increase the reliability of the data and to rule out any influence of the teacher educators' backgrounds on their evaluations. The consistency between the assessors was determined through an interrater reliability analysis, by calculating Fleiss's Kappa (Fleiss, 1971); this measure was used because the number of assessors was fixed (six). The Kappa was calculated for each of the two groups of three assessors, based on the items that used answer categories (25 in each group). The first group consisted of the three assessors who assessed the first five tests. Their agreement expressed as a percentage was 51.6%, and Fleiss's Kappa was κ = .395 (95% CI, .352 to .438), p < .005. This indicates fair agreement between the three assessors in this group, so we can assume that sufficiently similar assessments were made.
The remaining five tests were assessed by the second group, also consisting of three assessors. Their agreement expressed as a percentage was 48.1%. As in the first group, the agreement among the assessors was fair, Fleiss's κ = .352 (95% CI, .310 to .395), p < .005. Based on these results, it can be concluded that the form is suitable for uniform assessments. The results of the assessments made by the teacher trainers are the input for accepting or rejecting the hypotheses detailed in the previous section.
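The reliability statistic used above can be made concrete. The following is a minimal sketch of the Fleiss's Kappa computation for a fixed number of raters per item; the `fleiss_kappa` function and the toy ratings matrix are hypothetical illustrations, not the study's data.

```python
# Hedged sketch of Fleiss's Kappa for interrater reliability.
# counts: one row per rated item; each row gives, per answer category
# (here: yes, no), how many raters chose that category.

def fleiss_kappa(counts):
    """Compute Fleiss's Kappa; every row of counts must sum to the
    same number of raters n."""
    N = len(counts)          # number of items
    n = sum(counts[0])       # raters per item
    k = len(counts[0])       # answer categories
    # Observed agreement: per-item agreement averaged over all items.
    P_bar = sum((sum(c * c for c in row) - n) / (n * (n - 1))
                for row in counts) / N
    # Chance agreement from the marginal category proportions.
    p = [sum(row[j] for row in counts) / (N * n) for j in range(k)]
    P_e = sum(pj * pj for pj in p)
    return (P_bar - P_e) / (1 - P_e)

# Toy example: 4 items, 3 raters, binary (yes, no) answers.
ratings = [(3, 0), (2, 1), (0, 3), (3, 0)]
print(round(fleiss_kappa(ratings), 3))  # 0.625
```

In the study's setting, each row would correspond to one of the 25 answer-category items in a group, with three assessors per item.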
To determine whether teacher educators set higher learning objectives for their tests (hypothesis 1), the fifth requirement for constructing tests with higher learning objectives was used. Higher learning objectives such as analyzing, evaluating and creating cannot be achieved without self-knowledge, perspective changes, awareness of the relevance of the subject matter and a general overview. Strictly speaking, a test can only be called adequate once it contains all six aspects; in practice, however, this condition is considered met if at least five of the aspects are found in the test. A test met the fifth requirement if it conformed to two conditions: a) at least five of the six aspects of higher learning objectives can be found in the test questions, and b) test questions on higher learning objectives make up the majority of the test, which means that a score of 51% is achievable by correctly answering these questions. The latter condition was determined by tallying the number of points available for the questions that test higher learning objectives. The teacher educators assessed the first condition to obtain the best possible judgement; the second was determined by adding up question points. The first hypothesis is accepted if a majority of the examined tests complies with both conditions.
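The second condition of requirement 5, that questions on higher learning objectives must yield the majority of the obtainable points, amounts to a simple tally. The sketch below illustrates it with hypothetical data; the `higher_objective_share` function and the example test are assumptions for illustration, not material from the study.

```python
# Hypothetical sketch: does a test award the majority of its points to
# questions on higher learning objectives?

def higher_objective_share(questions):
    """questions: list of (points, targets_higher_objective) pairs.
    Returns the fraction of total points tied to higher objectives."""
    total = sum(p for p, _ in questions)
    higher = sum(p for p, is_higher in questions if is_higher)
    return higher / total

# A hypothetical test worth 60 points, 35 of them on higher objectives.
test = [(10, False), (15, True), (20, True), (15, False)]
share = higher_objective_share(test)
print(f"{share:.0%} of points; condition met: {share > 0.5}")
```

Because the condition is a strict majority (at least 51% of the points), a test scoring exactly 50% on this tally would fail it.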
The second hypothesis is that the tests were constructed in conformance with the six requirements for assessment with cases. Only tests that met the requirements for the first hypothesis were examined for the second hypothesis. A test with cases is considered 'sufficient' if it meets five of the six requirements for testing with cases; of those requirements, at least the following two must be met: the problem contained in the case is authentic (requirement 1) and the questions of the test focus on higher learning objectives (requirement 5). Requirement 5 was already tested for hypothesis 1.
Once the completed assessment forms had been analyzed, short discussions were held with the assessors about their experiences. These covered, for example, insights they had gained, intentions for developing their own tests with cases, and their experiences with the form. Based on these discussions and analyses, recommendations for teacher educators on constructing tests with cases were developed.
4.1. Hypothesis 1
Most of the Test Questions Focus on Higher Learning Objectives That Define Situated Knowledge.
The fifth requirement for constructing tests with cases comprises six aspects that together determine whether a test used at courses for training secondary school teachers contains higher learning objectives. From the assessment of the tests it can be gathered that four out of ten tests comply with the fifth requirement, i.e. contain higher learning objectives (see Table 2).
Table-2. First condition of requirement 5: test questions that test higher learning objectives; number of achieved aspects per test. (Adapted from Educative assessment: Designing assessment to inform and improve student performance, p. 98, by Wiggins (1998) San Francisco: Jossey-Bass Inc. Publishers. Adapted with permission.)
| Aspects of the first condition of requirement 5: Questions on… | Test 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 Overview of complete situation | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 2 Perspective changes | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 |
| 3 Awareness of the relevance of the subject material | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 4 Knowledge independent from comprehension | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 1 |
| 5 Self-knowledge | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 |
| 6 Analyze, evaluate, create | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| Total number of aspects found | 5 | 3 | 5 | 5 | 6 | 4 | 4 | 4 | 4 | 4 |
| Assessment: "sufficient" if ≥ 5 aspects found | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
Table 2 reveals that two aspects are found in all of the tests: questions that require the answer to include an overview of the entire situation, and questions that focus on analysis, evaluation and creation. Questions on perspective changes, by contrast, appeared in only three of the ten tests, as did questions on self-knowledge.
Test questions aimed at higher learning objectives also yield points on the test. These points were added up to assess the second condition of requirement 5 (a focus on higher learning objectives). A summary of the total number of points and the corresponding percentages can be found in Table 3.
Table-3. Requirement 5, second condition: Percentage of points for questions on higher learning objectives
| Points in the test | Test 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| Total points | 20 | 50 | 80 | 50 | 60 | 25 | 35 | 100 | 76 | 38 |
| Number of points for higher learning objectives | 8.85 | 35 | 28 | 25 | 60 | 23 | 27 | 79 | 54 | 18 |
| Percentage of points for higher learning objectives | 44 | 70 | 35 | 50 | 100 | 92 | 77 | 79 | 71 | 47 |
What stands out in Table 3 is that tests scored either quite low (50% or less) or quite high (70% or higher). As shown in Table 2, tests 1, 3, 4 and 5 meet five or more of the criteria for test questions centered on higher learning objectives. Of these, three (tests 1, 3 and 4) scored 50% or lower on the points available for questions related to higher learning objectives, and therefore failed to meet the second condition of requirement 5. Such a test may include several questions that focus on higher learning objectives, but those questions fulfill only a minor role in the test as a whole. Only test 5 scored over 50%. Consequently, this single test meets both conditions of requirement 5 and thereby confirms hypothesis 1: its test questions focus on achieving higher learning objectives, and those questions yield the majority of the obtainable points.
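The two-condition check described above can be sketched in a few lines of code. The figures are taken directly from Tables 2 and 3; the variable names are ours, not the paper's.

```python
# Table 2: number of the six aspects found per test (tests 1-10).
aspects_found = [5, 3, 5, 5, 6, 4, 4, 4, 4, 4]

# Table 3: total points per test, and points awarded by questions
# aimed at higher learning objectives.
total_points = [20, 50, 80, 50, 60, 25, 35, 100, 76, 38]
higher_points = [8.85, 35, 28, 25, 60, 23, 27, 79, 54, 18]

passing = []
for test, (aspects, total, higher) in enumerate(
        zip(aspects_found, total_points, higher_points), start=1):
    condition1 = aspects >= 5          # five or more of the six aspects present
    condition2 = higher / total > 0.5  # majority of points go to higher objectives
    if condition1 and condition2:
        passing.append(test)

print(passing)  # only test 5 meets both conditions
```

Running this reproduces the paper's result: test 4 scores exactly 50% and so fails the strict majority condition, leaving test 5 as the only test that satisfies requirement 5 in full.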
4.2. Hypothesis 2
Tests Are Constructed in Accordance with the Six Requirements for Tests with Cases.
Because test 5 was the only test that met the requirements of hypothesis 1, only test 5 was assessed for hypothesis 2. Table 4 provides an overview of the requirements that the test met.
Table-4. Overview of the requirements met by test 5
| Requirement | Test 5 |
|---|---|
| 1 Authentic problem | 1 |
| 2 Several short cases | 0 |
| 3 Oral and written questions | 0 |
| 4 Three or more key features | 1 |
| 5 Questions testing higher learning objectives | 1 |
| 6 Overarching questions | 1 |
| Total number of requirements met | 4 |
A test is considered sufficient if it meets at least four requirements, among them requirements 1 and 5. As Table 4 shows, test 5 fails to meet two of the requirements for a case test. Contrary to requirement 2 (several short cases), the test includes only one case. Requirement 3 is also not met, as the test consists exclusively of written questions. The remaining four requirements, including requirements 1 and 5, are met. It can therefore be concluded that test 5 meets the presupposed requirements for constructing tests with cases, and confirms hypothesis 2.
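The sufficiency rule applied above can be expressed as a short sketch (the function and variable names are ours): a test passes if it meets at least four of the six requirements, and requirements 1 and 5 must be among them.

```python
def sufficient(met: set[int]) -> bool:
    """met: the set of requirement numbers (1-6) that the test meets."""
    return len(met) >= 4 and {1, 5} <= met  # <= tests subset membership

# Test 5 (Table 4) meets requirements 1, 4, 5 and 6.
print(sufficient({1, 4, 5, 6}))  # sufficient: four requirements, incl. 1 and 5
print(sufficient({2, 3, 4, 6}))  # not sufficient: four requirements, but 1 and 5 missing
```

The subset check makes explicit that a high count alone is not enough: authenticity (requirement 1) and higher learning objectives (requirement 5) are mandatory.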
4.3. Support for the Use of Tests with Cases
In this study, the submitted tests were assessed by six experienced teacher educators. This process prompted discussion about the use of cases in testing, and it was concluded that evaluating the tests with the assessment form produced new insights into testing with cases. One teacher educator, for example, noted that the instrument made him realize at which level of mastery the questions would have to be formulated. The assessment form and its use encouraged the teacher educators to reconsider how they construct tests aimed at higher learning objectives. All of them mentioned already doing so, but added that the assessment form would help them keep higher learning objectives in mind. Because the assessment criteria were reformulated as instructions, the form has become not only an assessment instrument but also a practical tool for constructing tests with cases aimed at higher learning objectives that support the development of situated knowledge. For ease of use, the assessment form has been transformed into a list of instructions for teacher educators in secondary education; these instructions are enclosed in Appendix A.
Of the eleven secondary teacher education programs operating in the Netherlands, ten currently work with summative tests with cases. From the fact that ten institutes sent in summative tests with cases, we can conclude that the intention to test situated knowledge does exist; this intention was also articulated in the replies to the requests for the tests. The ten submitted tests were evaluated with a newly developed assessment form that delineates six requirements for such tests. The first hypothesis was that the majority of the test questions focus on achieving higher learning objectives and thereby suitably facilitate the development of the situated knowledge of teachers in training. This hypothesis was not confirmed: only a single test covered five of the six aspects for testing higher learning objectives while also allocating over 50% of the highest possible score to questions aimed at higher learning objectives. The second hypothesis, that the summative tests are constructed in accordance with the requirements for tests with cases, could not be confirmed either. Only one test confirmed hypothesis 1, so only that test could be used to test hypothesis 2. Although it did confirm hypothesis 2, it was the only test to do so, and hypothesis 2 was therefore also rejected. The main conclusion is that teacher educators rarely set higher learning objectives for their tests with cases that explicitly focus on situated knowledge. This also means that summative tests with cases are not being used to their full capacity. This outcome is supported by previous research (Geerts et al., 2015) showing that higher learning objectives are not optimally achieved through the use of summative tests with cases in teacher education.
Despite the rejection of the hypotheses, ten out of eleven teacher education institutions submitted a test with cases. The analysis of these tests revealed that, although they were not suitable for evaluating situated knowledge, they did aim at it. Teacher trainers are, in fact, looking for ways to map out their students’ situated knowledge. The testing of situated knowledge is only just being addressed in teacher education didactics, which has a long tradition of testing factual knowledge.
The requirements are meant to facilitate testing with cases that focuses on higher learning objectives, and to support the teacher trainers tasked with designing such tests. A summative test is used as the final assessment of a course with learning objectives; by improving the quality of the test, the quality of the course is improved as well (Biggs and So-Kum, 2011). The improvements suggested here focus on the acquisition of situated knowledge, which contributes to becoming a better teacher and, in turn, improves the quality of education at secondary schools.
The results of this paper show that it is possible to improve the quality of summative tests. Moreover, the study has yielded instructions for developing tests with cases that focus on testing situated knowledge. It is recommended that the adapted assessment form containing these instructions be made available to all secondary teacher training institutes, to encourage a focus on situated knowledge. Future research should address how teacher trainers’ awareness of the importance of these requirements for summative tests with cases can be raised, so that they can help the future teachers they train to develop optimally. This will hopefully lead to situated knowledge obtaining a permanent place in teacher training curricula and assessment.
Funding: This study received no specific financial support.
Competing Interests: The authors declare that they have no competing interests.
Contributors/Acknowledgement: All authors contributed equally to the conception and design of the study.
Adams, W.K. and C.E. Wieman, 2011. Development and validation of instruments to measure learning of expert-like thinking. International Journal of Science Education, 33(9): 1289-1312.
Athanassiou, N., J.M. McNett and C. Harvey, 2003. Critical thinking in the management classroom: Bloom’s taxonomy as a learning tool. Journal of Management Education, 27(5): 533-555.
Biggs, J.B. and T.C. So-Kum, 2011. Teaching for quality learning at university: What the student does. 4th Edn., Maidenhead: McGraw-Hill/Society for Research into Higher Education/Open University Press.
Blijleven, P.J., 2005. Multimedia-cases: Naar een brug tussen theorie en praktijk [Multimedia cases: Toward a bridge between theory and practice]. Enschede: Universiteit Twente.
Borko, H., 2004. Professional development and teacher learning: Mapping the terrain. Educational Researcher, 33(8): 3-15.
Brown, J.S., A. Collins and P. Duguid, 1989. Situated cognition and the culture of learning. Educational Researcher, 18(1): 32-42.
Brush, T. and J. Saye, 2014. An instructional model to support problem-based historical inquiry: The persistent issues in history network. Interdisciplinary Journal of Problem-Based Learning, 8(1): 38-50.
Darling-Hammond, L. and J. Snyder, 2000. Authentic assessment of teaching in context. Teaching and Teacher Education, 16(5–6): 523-545.
Farmer, E.A. and G. Page, 2005. A practical guide to assessing clinical decision-making skills using the key features approach. Medical Education, 39(12): 1188-1194.
Fleiss, J.L., 1971. Measuring nominal scale agreement among many raters. Psychological Bulletin, 76(5): 378-382.
Furst, E.J., 1981. Bloom’s taxonomy of educational objectives for the cognitive domain: Philosophical and educational issues. Review of Educational Research, 51(4): 441-453.
Geerts, W., A. Van der Werff, H.G. Hummel, H.W. Steenbeek and P.L. Van Geert, 2015. Assessing situated knowledge in secondary teacher training by using video cases. EAPRIL 2015 Proceedings. Luxembourg City: University of Luxembourg. pp: 46-52.
Greeno, J.G., 1997. On claims that answer the wrong questions. Educational Researcher, 26(1): 5-17.
Gulikers, J.T.M., L. Kester, P.A. Kirschner and T.J. Bastiaens, 2008. The effect of practical experience on perceptions of assessment authenticity, study approach, and learning outcomes. Learning and Instruction, 18(2): 172-186.
Hamp-Lyons, L., 1997. Washback, impact and validity: Ethical concerns. Language Testing, 14(3): 295-303.
Huxham, M., F. Campbell and J. Westwood, 2012. Oral versus written assessments: A test of student performance and attitudes. Assessment & Evaluation in Higher Education, 37(1): 125-136.
Krathwohl, D.R., 2002. A revision of Bloom’s taxonomy: An overview. Theory into Practice, 41(4): 212-218.
Kurz, T.L., G. Llama and W. Savenye, 2004. Issues and challenges of creating video cases to be used with preservice teachers. TechTrends, 49(4): 67-73.
Momsen, J.L., T.M. Long, S.A. Wyse and D. Ebert-May, 2010. Just the facts? Introductory undergraduate biology courses focus on low-level cognitive skills. CBE-Life Sciences Education, 9(4): 435-440.
Opfer, V.D. and D. Pedder, 2011. Conceptualizing teacher professional learning. Review of Educational Research, 81(3): 376-407.
Ploegman, M. and D. De Bie, 2008. Aan de slag! Inspirerende opdrachten maken voor beroepsopleidingen [Get to work! Creating inspiring assignments for vocational education]. Houten: Bohn Stafleu van Loghum.
Putnam, R.T. and H. Borko, 2000. What do new views of knowledge and thinking have to say about research on teacher learning? Educational Researcher, 29(1): 4-15.
Regehr, G. and G.R. Norman, 1996. Issues in cognitive psychology: Implications for professional education. Academic Medicine, 71(9): 988-1001.
Rosaen, C.L., M. Lundeberg, M. Cooper, A. Fritzen and M. Terpstra, 2008. Noticing noticing. Journal of Teacher Education, 53(4): 347-360.
Roth, W. and A. Jornet, 2013. Situated cognition. Wiley Interdisciplinary Reviews: Cognitive Science, 4(5): 463-478.
Shulman, L.S., 1992. Toward a pedagogy of cases. In J. Shulman (Ed.), Case methods in teacher education. New York: Teachers College Press. pp: 1-33.
Spratt, M., 2005. Washback and the classroom: The implications for teaching and learning of studies of washback from exams. Language Teaching Research, 9(1): 5-29.
van Berkel, H. and A. Bax, 2006. Toetsen in het Hoger Onderwijs [Testing in higher education]. Houten: Bohn Stafleu van Loghum.
van Es, E. and M.G. Sherin, 2002. Learning to notice: Scaffolding new teachers’ interpretations of classroom interactions. Journal of Technology and Teacher Education, 10(4): 571-596.
Wiggins, G., 1998. Educative assessment: Designing assessment to inform and improve student performance. San Francisco: Jossey-Bass Inc. Publishers.
Appendix A
Instructions for the Creation of a Summative Test with Cases

The questions in the first part of this list concern only the questions that accompany cases.

Instruction: Multiple cases in a summative test
- Does your test contain at least three cases? Three short cases are better than one long one.

Instruction: The case should concern an authentic problem
- Are the situation descriptions in the cases realistic? (In other words, is each a situation that a teacher in training is likely to face in practice?)

If a test contains more than one case, answer the following questions for each case in turn, recording a score for each case separately.

- Does the case contain realistic tasks that the student will encounter in this form in practice? Answer ‘yes’ if: (1) the case is meant for all subjects in your teacher education program, or (2) the case is meant for your own subject.
- Does the student have to devise their own solution for each case, based on their assessment of the situation?
- Does each case have a question that asks how the student would act?
- Does each case have a question about a complex task in the case, such that the student needs acquired knowledge and skills in order to answer it?

Instruction: The questions about the case are made using key features
- Do most of the cases have at least three corresponding questions about the most important decisions needed to solve the case? (Key features: the essential decisions that the student must make in order to solve the case.)

Instruction: The test questions test higher learning objectives
- Does each case have at least one question that tests whether the student grasps the situation as a whole?
- Does each case have at least one question that requires the student to shift perspective?

The following questions concern all questions in the test.

- Does your test contain at least two questions that lead the student to understand why they must master the subject matter? (In other words, they must apply theory in practice.)
- Does your test contain at least two questions that test whether the student’s knowledge is complete and correct, independently of the student’s level of understanding? (For example, first test knowledge and then comprehension, even within the same question.)
- Does your test contain at least one question that tests the student’s self-knowledge (the knowledge they have about themselves)?
- Does your test contain at least two questions that test comprehension? (Such questions contain verbs like criticize, conclude, contrast, deduce, illustrate, interpret, distinguish, support, analyze, justify, relate, sketch, explain, validate, defend, compare, or judge.)

Instruction: The test contains overarching questions
- Does your test contain at least one overarching question? Overarching questions have the following characteristics: they concern the core of being a teacher, do not have a single correct answer, test the higher learning objectives in Bloom’s taxonomy, recur throughout the program (with a constantly developing answer), are formulated so that they challenge and interest the student, and are connected to other essential questions.

Instruction: The test contains oral and written questions
- Does your test contain at least one oral question?
- Does your test contain at least one written question?