Improving Assessment across a University: Four Steps
by Keston H. Fulcher & Chris D. Orem
Several years ago the first author was charged with coordinating assessment for our institution’s approximately 100 undergraduate and graduate academic degree programs. Implicit in this charge was making assessment stronger across programs. As with many big projects, we began by posing the question, "What does success look like?" In essence, what were we trying to achieve, and how would we know when we had achieved it? In this feature we outline a model for improving assessment at an institution, which includes the following steps: a) articulating expectations for assessment, b) sharing these expectations with faculty, c) evaluating programs’ assessment based on these expectations, and d) providing resources to help faculty.
Articulating expectations for assessment
In order for programs to improve assessment, a university must first define assessment. Fortunately, many scholars have weighed in on this definition. Erwin (1991), Palomba and Banta (1999), and Suskie (2009), among others, have offered popular models. Further, Fulcher, Swain, and Orem (2012) investigated how over 50 institutions defined assessment. Common to all these definitions is that assessment, at a minimum, should include clear student learning outcomes, a robust methodology for collecting data on those outcomes, results, and use of the results to improve programs. At JMU, we operationally define assessment as having six components, and we further break those down into 14 elements (see Table 1). We believe that the broader components form the foundation for quality assessment, which can be strengthened through effective facilitation of the more specific elements.
After defining these aspects of assessment, we articulated levels of performance for each element within this definitional framework. We did so via a rubric in which behavioral descriptors are associated with beginning, developing, good, and exemplary levels of assessment quality for each of the 14 elements. This rubric, along with other supporting documentation, can be found in the Assessment Progress Template.
Sharing assessment expectations with faculty
Once assessment has been operationally defined, the next step is sharing these expectations with stakeholders across campus. Ideally, one should hold meetings with various groups including the provost, deans, department heads, and program-level assessment coordinators. Meetings provide the various constituents with an open forum to voice concerns and questions about what is being asked of them. Depending on the size of your institution, reaching all audience members using this format may take months if not years. That said, you will certainly want to weigh the benefits of sharing expectations for assessment with stakeholders in person versus faster but more passive approaches such as letters or email.
Evaluating programs’ assessment
Closely linked to the process of communicating assessment expectations is the third step in our model: evaluating a program’s assessment. In our experience over several years, there is no better mechanism for reinforcing expectations than evaluating programs against them. At JMU we evaluate each academic degree program’s assessment yearly.
Two trained raters read an assessment report, independently assign scores to each of the 14 elements described in the rubric, and provide qualitative feedback customized to that particular assessment. Raters must adjudicate when their ratings for any individual element differ by more than a point. The consistency of these ratings across programs and across time is paramount; if raters cannot rate consistently, it means that JMU has not successfully communicated assessment standards to them. We have conducted several studies on the reliability of such ratings and, through much training, have achieved professional-level reliability using two raters, with phi coefficients ranging from .88 to .91 (Orem, 2012).
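The two-rater adjudication rule described above can be sketched in a few lines of code. This is a hypothetical illustration only: the element names and the 1-to-4 score scale (beginning through exemplary) are assumptions based on the rubric description, not JMU's actual scoring system.

```python
# Sketch of the two-rater scoring check described above.
# Scores assume the rubric's four levels: 1 = beginning, 2 = developing,
# 3 = good, 4 = exemplary. Element names are illustrative only.

def elements_needing_adjudication(rater_a, rater_b, threshold=1):
    """Return the rubric elements whose two independent ratings
    differ by more than `threshold` points, triggering adjudication."""
    return [
        element
        for element in rater_a
        if abs(rater_a[element] - rater_b[element]) > threshold
    ]

rater_a = {"outcomes": 4, "methodology": 2, "results": 3}
rater_b = {"outcomes": 3, "methodology": 4, "results": 3}

# "outcomes" differs by only 1 point, so only "methodology" (a
# 2-point gap) requires the raters to meet and adjudicate.
print(elements_needing_adjudication(rater_a, rater_b))  # ['methodology']
```

In practice the raters would then discuss each flagged element and agree on a final score before the feedback is shared with the program.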
The feedback helps programs identify where their assessments are strong and where they could use improvement. It is shared with the program coordinators and with the relevant department heads and deans.
Providing resources to help faculty
The feedback itself is a resource. It helps faculty determine where their assessments are strong and where they might need help. For example, problems in data collection could yield unrepresentative samples. We provide a range of other resources, including an example of an exemplary report, one-on-one consultations, workshops, six-week assessment fellowships, and opportunities for faculty to evaluate assessment reports. Because we have conducted this process for several years, we can determine how much our assessment, across academic degree programs, has improved, as evidenced by Figure 1. Further, because each point on the graph represents a different element of assessment, we can identify areas of assessment that are consistently strong or weak across programs and years.
Overall, across programs, JMU’s assessment is improving. Part of this growth is attributable to a system whereby expectations for assessment have been made explicit. Further, JMU has provided much support to faculty to engage in quality assessment. And, most responsible for this improvement, faculty have spent considerable time and energy reflecting on their programs.
By defining expectations for quality assessment, communicating them to faculty, and evaluating faculty responses to those expectations, we offer institutions a process for intentionally scrutinizing their own assessment practices. This process of evaluating the quality of assessment is conceptualized as meta-assessment (Ory, 1992). Recently, several authors have addressed meta-assessment at the academic degree program and institutional levels (Bresciani, Gardner, & Hickmott, 2009; Fong Bloom, 2010; Fulcher & Orem, 2010; Fulcher, Swain, & Orem, 2012). We encourage readers to consult these articles to learn more about meta-assessment and its potential for improving assessment.
In closing, it should be noted that although this article is about assessment, assessment alone is not the end goal for college programs; student learning is. That said, one can only discern students' level of learning with robust assessment. In this sense, assessment is necessary but insufficient for evidencing student achievement and improvement. At its best, program assessment enables faculty to discern the impact of their pedagogy and curricula on student skills, knowledge, and attitudes. To actually improve student learning, assessment practitioners, faculty, and administrators must communicate effectively so that assessment results are appropriately used to resource and implement targeted interventions.
Keston H. Fulcher is associate director at the
Center for Assessment and Research Studies at James Madison University in Harrisonburg, VA.
Chris D. Orem is the director of institutional effectiveness at
Dabney S. Lancaster Community College, Clifton Forge, VA.
Bresciani, M. J., Gardner, M. M., & Hickmott, J. (2009). Demonstrating student success: A practical guide to outcomes-based assessment of learning and development in student affairs. Sterling, VA: Stylus.
Erwin, T. D. (1991). Assessing student learning and development: A practical guide for college faculty and administrators. San Francisco: Jossey-Bass Publishers.
Fong Bloom, M. (2010, September). Peer review of program assessment efforts: One strategy, multiple gains. Assessment Update, 22(5), 5-7, 16.
Fulcher, K. H., & Orem, C. D. (2010). Evolving from quantity to quality: A new yardstick for assessment. Research and Practice in Assessment, 4(1), 1-10.
Fulcher, K. H., Swain, M. S., & Orem, C. D. (2012, January/February). Expectations for assessment reports: A descriptive analysis. Assessment Update, 24(1), 1-2, 14-16.
Orem, C. D. (2012). Demonstrating validity evidence of meta-assessment scores using generalizability theory. (Unpublished doctoral dissertation). James Madison University, Harrisonburg, VA.
Ory, J. C. (1992). Meta-assessment: Evaluating assessment activities. Research in Higher Education, 33(4), 467-481.
Palomba, C. A., & Banta, T. W. (1999). Assessment essentials: Planning, implementing and improving assessment in higher education. San Francisco, CA: Jossey-Bass Publishers.
Suskie, L. A. (2009). Assessing student learning: A common sense guide (2nd ed.). San Francisco: Jossey-Bass.