Pk-12 Master’s Program Assessment

Departmental assessment of student learning is an important part of our work. Equally important is sharing the results with students. In the Pk-12 administration master's program, one way we evaluate the program's success is through students' performance on the final comprehensive exam.

The exam is the culmination of the final internship. Each master's student defines a problem, then collects and analyzes data to understand the problem and its implications for elementary or secondary students if it is not rectified. Students also provide administrators with recommendations for changes that may address the problem, and they present their study to a comprehensive exam committee.

The committee uses a rubric of 12 competencies to assess knowledge attainment. Each competency is scored from 0 to 3: a 0 indicates the competency was not addressed; a 1 indicates it was minimally or weakly addressed; a 2 indicates it was addressed; and a 3 indicates that understanding of the competency exceeded expectations. A passing exam requires a total of at least 24 of a possible 36 points, an average of 2 per competency.
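For illustration, here is a minimal sketch of the scoring arithmetic described above, assuming a committee's ratings for one exam are recorded as a list of twelve integers; the function and constant names are hypothetical, not part of the program's actual records.

```python
# Hypothetical sketch of the rubric arithmetic: 12 competencies,
# each scored 0-3, passing at a total of 24 of 36 possible points.

NUM_COMPETENCIES = 12
MAX_SCORE = 3
PASSING_TOTAL = 24  # equivalent to an average of 2 per competency

def score_exam(ratings: list[int]) -> tuple[int, bool]:
    """Return the total rubric score and whether it passes."""
    if len(ratings) != NUM_COMPETENCIES:
        raise ValueError(f"expected {NUM_COMPETENCIES} ratings, got {len(ratings)}")
    if any(r < 0 or r > MAX_SCORE for r in ratings):
        raise ValueError("each rating must be between 0 and 3")
    total = sum(ratings)
    return total, total >= PASSING_TOTAL

# Example: a student rated 2 on ten competencies and 3 on two
total, passed = score_exam([2] * 10 + [3] * 2)
print(total, passed)  # 26 True
```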

The exam has been administered and graded in each of the past five years. Faculty review the average score for each criterion and the overall average exam score. These reviews show which competencies tend to receive the lowest scores (Pk-12 Table 1) as well as those on which students perform well (Pk-12 Table 2), and they allow faculty to compare overall student performance from year to year (Pk-12 Table 3). Teaching and course content are then altered to address low-scoring competencies. Students most often struggle to analyze data and to identify the implications of the results if the problem is not addressed; they also struggle to identify steps that could be taken to address the problem.
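A rough sketch of this kind of review follows, assuming each cohort's exams are stored as lists of twelve ratings; the data layout and names are assumptions for illustration only.

```python
# Hypothetical sketch of the faculty review: average each competency
# across a cohort, flag the lowest-scoring one, and compute the overall
# average exam score (out of 36) for year-to-year comparison.

from statistics import mean

def competency_averages(cohort: list[list[int]]) -> list[float]:
    """Average each of the 12 competencies across a cohort's exams."""
    return [mean(scores) for scores in zip(*cohort)]

def overall_average(cohort: list[list[int]]) -> float:
    """Average total exam score for the cohort (out of 36)."""
    return mean(sum(ratings) for ratings in cohort)

# Example with two students' exams
cohort_2015 = [
    [2, 3, 2, 2, 1, 2, 3, 2, 2, 2, 1, 2],
    [3, 2, 2, 1, 2, 2, 2, 3, 2, 2, 2, 2],
]
per_competency = competency_averages(cohort_2015)
lowest = min(range(12), key=lambda i: per_competency[i])
print(f"Lowest-scoring competency: #{lowest + 1} (avg {per_competency[lowest]:.1f})")
print(f"Overall cohort average: {overall_average(cohort_2015):.1f} / 36")
```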

Faculty members use the results of that year's exam to assess how well the alterations have helped students master the competencies. Judging from the 2015 assessment results, the faculty interventions appear to have been successful, as evidenced by the increase in competency scores. Future years will tell us whether this performance was an anomaly or truly reflects an increase in student understanding.

A new competency, students' use of the literature to support claims, was added as a criterion on the 2015 exam. Using research in the field to support one's ideas is important for school administrators. The 2015 results provide a baseline for interpreting performance in future years, and future exams will tell us more about students' mastery of this competency.