When discussing the LEAP Value Rubrics, the AAC&U allows and even encourages institutions to customize or modify the rubrics for local use. Certainly, when presented with an instrument like the LEAP Value Rubric for Critical Thinking, I believe it is the natural, inherent tendency of faculty and educational institutions to add their own perspective, or to research the construct, in order to create a “better” rubric or to establish “our institution’s definition” of critical thinking (because critical thinking may be defined in *many* different ways). At the moment, however, I believe it is important to resist that tendency. Quite simply, revising the LEAP Value Rubrics is not likely to add value to the institutional assessment effort, may nullify the benefits of using the rubrics in the first place, and may expend valuable time and effort better spent elsewhere.
As my institution has engaged in more systematic assessment of general education outcomes, the cornerstone of that effort, thus far, has been the AAC&U’s LEAP Value Rubrics. Our faculty assessment committee has worked with faculty in different disciplines to identify, for each of our institutional general education outcomes, a Value Rubric they determined was aligned with that outcome and an appropriate method for assessing it. As faculty have encountered and begun using those rubrics, a number ask, “Why are we using the LEAP Value Rubrics?” or “Is there an opportunity to explore, create, and potentially use other rubrics?” Generally, we are always open to discussion and continuous improvement, and that could include a change in rubrics. With that said, there are a number of reasons why we have relied initially on the LEAP Value Rubrics as the foundation of our work and why I believe we should continue doing so.
I’ll forgo the typical apology for having not written in this space in a long while because… well… that assumes that anyone… someone… actually reads what I write here, notices when I do not write here, and actually misses it.
Related to our general education outcomes assessment project, I’ve been extremely busy with “rubric norming sessions” over the last several weeks. Working with volunteer faculty evaluators, we have been going through a process to help foster consistency in how we, as an institution and as a faculty, apply different rubrics as we assess general education outcomes. I’ll describe that process, and I’m interested in hearing from others who may be engaged in a similar process and/or doing it differently.
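One way to make the goal of norming concrete is to quantify evaluator agreement. Below is a minimal sketch (my own illustration, not part of our actual process; the scores and the `exact_agreement` helper are hypothetical) that computes how often pairs of raters assign identical rubric levels to the same artifacts:

```python
from itertools import combinations

# Hypothetical norming data: three evaluators each score the same
# three artifacts of student work on a 1-4 rubric scale.
scores = {
    "rater_a": [3, 2, 4],
    "rater_b": [3, 3, 4],
    "rater_c": [2, 2, 4],
}

def exact_agreement(scores: dict[str, list[int]]) -> float:
    """Share of (rater pair, artifact) comparisons with identical levels."""
    pairs = list(combinations(scores.values(), 2))
    n_artifacts = len(next(iter(scores.values())))
    matches = sum(a[i] == b[i] for a, b in pairs for i in range(n_artifacts))
    return matches / (len(pairs) * n_artifacts)

print(f"Exact agreement: {exact_agreement(scores):.0%}")  # 56% for these scores
```

A rising agreement rate across successive norming rounds would be one simple signal that the sessions are fostering the consistency we’re after.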
Current, common methods used by #highered #assessment professionals when sampling and evaluating student work for general education or program-level outcomes assessment projects may not provide results that can be reliably generalized. Even when sampling work from 1,500 students at an institution with only 4,500 students total, the results may not be indicative of the institution’s true level of performance in teaching general education or program outcomes, because sample size alone does not guarantee that a sample is representative.
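To make that concrete, here is a minimal sketch (hypothetical numbers and simulated scores, not data from any institution) showing how a convenience sample of 1,500 out of 4,500 students can still misstate institutional performance when the sampled work happens to cluster among stronger students:

```python
import random

random.seed(1)

# Simulated population of 4,500 students: one third score at level 3
# on a rubric, the rest at level 2.
population = [3.0] * 1500 + [2.0] * 3000
true_mean = sum(population) / len(population)  # 2.33

# Convenience sample: work drawn disproportionately from sections where
# stronger students cluster (e.g., upper-level or honors courses).
strong, weak = population[:1500], population[1500:]
convenience = random.sample(strong, 1000) + random.sample(weak, 500)

# Simple random sample of the same size, for comparison.
srs = random.sample(population, 1500)

print(f"True mean:               {true_mean:.2f}")                            # 2.33
print(f"Convenience estimate:    {sum(convenience) / len(convenience):.2f}")  # 2.67
print(f"Random-sample estimate:  {sum(srs) / len(srs):.2f}")                  # ~2.33
```

The convenience sample covers a third of the entire student body, yet it overstates the true mean; the problem is how the sample was drawn, not how large it is.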
A recent article published by Inside Higher Ed focuses on a study suggesting that student motivation on the low-stakes, standardized exams used for institutional assessment may impact the reliability of results. That has serious implications for institutions using the results of those exams to report institutional effectiveness regarding student achievement of institutional outcomes. That article and the underlying study are important reading for institutional assessment professionals.
“But my institution relies more heavily on course-embedded assessment, so it’s not as relevant to my institution.” Not so fast… I believe the results of that study also have implications for institutional administration of outcomes assessment projects relying on course-embedded assessments.
I have a speaking engagement coming up next Friday at Howard College in San Angelo. I’ve been developing a custom presentation to address the specific needs described in my conversations with the colleague who invited me to present. As usual, the development process takes on a life of its own, and the presentation slowly emerges during the weeks I spend preparing it.
Most of the topics I’ve planned to include have been in place for some time, but the organization has been evolving quite a bit. At the moment, the presentation will be a series of issues and challenges I’ll pose to faculty to improve assessment in courses. Of course, interaction is expected; there’s a question and a pause every fourth slide or so… It’s more about them than it is about me.
The current assessment issues to be included are listed below, along with questions I may ask and a short summary or a link to a previous blog post where I’ve discussed the issue. I’m always open to discussion and comments.
Being primarily responsible at my institution for general education, program, and course-level outcomes assessment, one project I helped initiate this academic year has been the implementation of Blackboard Outcomes, which integrates deeply with the Blackboard Learn LMS. In short, Blackboard Outcomes makes it possible to electronically collect samples of student work so that they may be evaluated – also electronically – against a rubric (e.g., the AAC&U LEAP Value Rubrics) as part of an institutional or programmatic outcomes assessment project. With the evidence collection and evaluation process occurring electronically, the reporting process is also greatly streamlined. Naturally, as we’ve implemented the tool, I’ve encountered a few features I’d like to have that are not currently available. One needed feature is for Blackboard Outcomes to be able to collect samples of student work submitted to Turnitin assignments. The full product enhancement suggestion I submitted is below. If you work with Blackboard Outcomes, I’m interested in your feedback, and in your making the same suggestion to Blackboard.
A second enhancement request concerns the same Blackboard Outcomes implementation described above. The most significant feature currently missing, in my view, is the ability to specify the students from whom samples of work will be collected. The full product enhancement suggestion I submitted is below. Again, if you work with Blackboard Outcomes, I’m interested in your feedback, and in your making the same suggestion to Blackboard.
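For readers unfamiliar with the workflow these requests sit inside, here is a conceptual sketch of the collect-evaluate-report cycle the tool automates. To be clear, this is my own illustration, not Blackboard’s API; the `WorkSample` and `Evaluation` types and the criterion names are hypothetical stand-ins:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class WorkSample:
    student_id: str
    artifact: str  # e.g., a paper collected electronically from the LMS

@dataclass
class Evaluation:
    sample: WorkSample
    scores: dict[str, int]  # criterion name -> performance level (1-4)

# Criteria loosely modeled on a Value Rubric-style instrument (illustrative only).
CRITERIA = ["Explanation of issues", "Evidence", "Conclusions"]

def report(evaluations: list[Evaluation]) -> dict[str, float]:
    """Average performance level per criterion across all evaluated samples."""
    return {c: mean(e.scores[c] for e in evaluations) for c in CRITERIA}

evals = [
    Evaluation(WorkSample("s1", "essay1.docx"),
               {"Explanation of issues": 3, "Evidence": 2, "Conclusions": 3}),
    Evaluation(WorkSample("s2", "essay2.docx"),
               {"Explanation of issues": 4, "Evidence": 3, "Conclusions": 2}),
]
print(report(evals))  # {'Explanation of issues': 3.5, 'Evidence': 2.5, 'Conclusions': 2.5}
```

Both enhancement requests above amount to giving the institution finer control over which samples of work enter this cycle in the first place.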
I have suggested previously that NO college-level course should have learning outcomes written at the lower cognitive levels. Working from Bloom’s Taxonomy, a college-level course learning outcome should NOT be to define, explain, describe, discuss, list or identify. College-level courses should require students to analyze, apply, synthesize and evaluate new knowledge and concepts. Students should be expected to use new knowledge, concepts and skills, not simply remember or comprehend them.
The question or issue typically raised is almost always similar to, “Students must acquire basic skills and knowledge before they can operate at higher cognitive levels regarding that content.” That is absolutely true. However, my argument is that basic skills and knowledge are prerequisite to achieving higher-order outcomes; students who achieve the higher-order outcomes of a course will have implicitly demonstrated mastery of the prerequisite skills. For example, a student who can competently evaluate competing arguments about a concept has necessarily demonstrated the ability to define and describe that concept. So, listing the lower-order skills as outcomes is both unnecessary and, from an assessment perspective, undesirable.
The difference between an outcome and an objective is critical. I have argued before that indifference to the distinction could present significant issues. Without clarity between the two concepts, the development process could yield a long mishmash of “outcomes” for a course that both complicates institutional efforts to report assessment outcomes at the course level and potentially erodes the academic freedom, creativity and responsibility of faculty. Consider an example…