We are working to revisit, review, and, as needed, revise our program outcomes assessment plans. To a large extent, assessment has been done very well at the program level; as our Office of Learning and Assessment (OLA) has evolved over the past 4-6 years, however, we have continuously worked to improve the services and support we provide to our faculty and instructional leaders (department chairs and deans) regarding program outcomes assessment, and to better institutionalize those processes.
My thoughts regarding “Federal regulators debate how to handle direct assessment programs” @insidehighered
Higher education inevitably will have to figure out competency-based education; it is the logical conclusion of the national accountability discourse. Several examples from this article illustrate that we may be further away than we might hope:
When discussing the LEAP VALUE rubrics, the AAC&U allows and even encourages institutions to customize or modify the rubrics for local use. Certainly, when presented with an instrument like the LEAP VALUE rubric for Critical Thinking, faculty and educational institutions have a natural, inherent tendency to add their own perspective, or to research the construct in order to create a “better” rubric or to establish “our institution’s definition” of critical thinking (because critical thinking may be defined in *many* different ways). At the moment, however, I believe it is important to resist that tendency. Quite simply, revising the LEAP VALUE rubrics is not likely to add value to the institutional assessment effort, may nullify the benefits of using them, and may expend valuable time and effort better spent elsewhere.
I’ll forgo the typical apology for having not written in this space in a long while because… well… that assumes that anyone… someone… actually reads what I write here, notices when I do not write here, and actually misses it.
Related to our general education outcomes assessment project, I’ve been extremely busy with “rubric norming sessions” over the last several weeks. Working with volunteer faculty evaluators, we have been going through a process to help foster consistency in how we, as an institution and as a faculty, apply different rubrics as we assess general education outcomes. I’ll describe that process, and I’m interested in hearing from others who may be engaged in a similar process and/or doing it differently.
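The post doesn’t say how we check whether norming actually worked, but one common follow-up is to have two raters score the same set of artifacts and compute an agreement statistic such as Cohen’s kappa, which discounts agreement expected by chance. A minimal sketch (the function name and the sample scores are hypothetical, assuming two raters on a 4-point rubric):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same artifacts."""
    assert len(rater_a) == len(rater_b), "raters must score the same artifacts"
    n = len(rater_a)
    # Observed agreement: share of artifacts where the raters matched.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: based on each rater's marginal score distribution.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical post-norming scores from two raters on ten artifacts
a = [3, 2, 4, 3, 1, 2, 3, 4, 2, 3]
b = [3, 2, 3, 3, 1, 2, 4, 4, 2, 3]
print(round(cohens_kappa(a, b), 2))  # → 0.71
```

A kappa in the 0.6-0.8 range is conventionally read as substantial agreement, which is one way to judge whether another norming round is needed.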
Current, common methods used by #highered #assessment professionals when sampling and evaluating student work for general education or program-level outcomes assessment may not produce results that can be reliably generalized. Even when sampling work from 1,500 students at an institution with only 4,500 students in total, the results may not be indicative of the institution’s true level of performance in teaching general education or program outcomes.
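One way to see why sample size alone isn’t the reassurance it appears to be: under a true simple random sample, 1,500 of 4,500 students yields a very small sampling error, so any failure to generalize must come from how the sample is drawn (e.g., convenience sampling of willing courses), not from how big it is. A sketch of the arithmetic (assumptions: a proportion-based outcome such as “percent rated proficient,” simple random sampling, 95% confidence):

```python
import math

def moe_with_fpc(p, n, N, z=1.96):
    """Margin of error for a proportion, with finite population correction.

    p -- observed proportion (e.g., share of work rated proficient)
    n -- sample size; N -- population size; z -- critical value (1.96 for 95%)
    """
    se = math.sqrt(p * (1 - p) / n)          # standard error of the proportion
    fpc = math.sqrt((N - n) / (N - 1))       # correction for sampling a large fraction
    return z * se * fpc

# Hypothetical: 60% of sampled work rated "proficient"
print(round(moe_with_fpc(0.60, 1500, 4500), 3))  # ≈ 0.02, i.e. about ±2 points
```

With random sampling the margin of error is roughly two percentage points, so when results from a sample that large still aren’t indicative, the culprit is selection bias, not sample size.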