From Learning Outcomes to Evaluation:
An Examination of Programmatic Design and Assessment for Residential Education
By Sherry L. Early
The Department of Residence Life at Bowling Green State University provides programs, services, and a home away from home for 6,600 students living in 17 residence halls within nine complexes and 31 houses. Within the department, a Residence Education staff is responsible for “creating a living environment that is supportive, educational, and enjoyable.” Residence Education also sponsors programs, workshops, social events, community service projects, and leadership development opportunities.
One of the premier Residence Education programs is Leaders in Residence (LIR), a six-week workshop series offered each spring semester and centered on Kouzes and Posner’s The Leadership Challenge. The series is facilitated by graduate students who either volunteer their time or receive practicum credit. The graduate students are paired to facilitate the program, which promotes campus involvement while developing students’ leadership potential. Additionally, LIR is grounded in five learning outcome areas related to the development of leadership styles, individual social competencies, citizenship and social responsibility, paraprofessional work, and future career opportunities. To gauge the effectiveness of LIR and initiate improvement, several assessment activities are associated with the program.
As the person responsible for assessment and evaluation for the department, I examined the program through three lenses: the student participants, the facilitators, and the program itself. To measure how effectively these outcomes were achieved, all LIR participants completed an online pre-test survey. After attending weekly LIR sessions, the participants completed a post-test at the conclusion of the program. Upon collecting pre- and post-test data, I compared means to determine the achievement of the learning outcomes and examined open-ended questions directly related to the principles in The Leadership Challenge. Results of the post-test data indicated the program was effective. Students were able to identify, describe, and articulate how they have implemented, or plan to implement, the five principles in their lives as leaders. Moreover, the experiential and dialogue-based components of LIR were highly rated. Facilitators lecturing for extended periods or being underprepared had a negative impact on the LIR student and co-facilitator experiences. Since LIR is not a credit-bearing course, completion of the program is voluntary; therefore, a survey was administered to students who did not complete LIR asking what factors contributed to their leaving the program.
Another important aspect of the LIR assessment was the facilitator experience. Upon being selected for LIR, each facilitator was paired with another facilitator to co-teach LIR sessions. Facilitators also attended a training session prior to beginning the LIR series. LIR facilitators completed a survey addressing the effectiveness of training, their comfort with teaching workshop content, and the availability of resources. Additionally, the facilitators distributed a brief evaluation form toward the end of each LIR session asking participants to synthesize the leadership principle covered in the workshop and describe how they have used, or plan to use, the principle as leaders. The purpose was to help the facilitators know whether the workshop content had been conveyed to and understood by participants. Lastly, the co-facilitators completed an evaluation of one another’s facilitation style, preparedness, and mutual respect.
The findings informed our practice related to the program in several ways. We synthesized information from students and facilitators to inform our evaluation of the overall LIR program. The findings indicated that facilitators felt underprepared after training and that there were not enough “check-ins” with program staff over the six weeks. Additionally, some facilitators expressed concerns about their co-facilitators. Accordingly, we made adjustments to facilitator selection and training to better meet the needs of the facilitators and, in turn, improve the overall student experience. Some facilitators had unanticipated responsibilities added to their assistantships for the spring semester, which prevented them from devoting the time to LIR they had projected at the time of application. The dates and times of LIR sessions were therefore modified to accommodate facilitators’ commitments and afford them time to prepare, so they would not be rushing from their assistantships or classes to teach their LIR sessions. Checking in with facilitators face to face during week three, rather than electronically, has also been implemented as both a measure of accountability and an evaluative component for practicum students. Finally, it was concluded that newer facilitators would be paired with someone who has either been an LIR facilitator before or has extensive facilitation and/or teaching experience.
The findings also led us to reexamine the learning outcomes of the program. The existing learning outcomes were broad in nature, so they were reconfigured to be more specific, measurable, and congruent with the evaluative tools. Rather than stating that LIR participants will be more proficient in their knowledge of the five outcome areas, we now have more detailed outcomes by which we assess the effectiveness of each LIR session and the program as a whole, based on participant responses in the LIR post-test evaluation. Learning outcomes were also established for the facilitator training program based upon feedback identified in the facilitator evaluation; these outcomes are likewise measurable and congruent with our evaluation tools. In essence, we examined the results and made training and programmatic adjustments to address concerns and make the LIR program even stronger. The LIR program will continue to evolve as modifications are made over time, helping it meet its objectives: providing residential students with a campus involvement opportunity, introducing them to established student leaders, encouraging them to recognize their leadership potential, and having fun while enhancing personal leadership abilities.
The process of assessing a programmatic initiative for the purpose of continuous improvement proved to be a learning experience. Through this experience, I learned how to consult with colleagues unfamiliar with drawing connections between learning outcomes and assessment and how to use constructive feedback to strengthen a program. Though there were many evaluative components to this program, each served a purpose and provided a more complete understanding of the LIR experience. When undertaking such an assessment initiative with a similar type of program, it is important to begin with the end in mind. Upon creating learning outcomes, it is also important to consider how you will gauge the effectiveness of your program and what types of evaluation tools will best provide usable feedback, and to have an implementation plan for acting on your findings. Furthermore, working collaboratively with others and taking a few moments to explain the assessment process to them is beneficial. Conducting a comprehensive assessment of a program may seem time consuming at first, but taking the time to carefully plan and incorporate others into the assessment process will further enhance both the assessment itself and the overall experience for students.
Sherry L. Early is a doctoral student and assistant to the chair in the higher education administration program at Bowling Green State University.