I’ll forgo the typical apology for having not written in this space in a long while because… well… that assumes that anyone… someone… actually reads what I write here, notices when I do not write here, and actually misses it.
Related to our general education outcomes assessment project, I’ve been extremely busy with “rubric norming sessions” the last several weeks. Working with volunteer faculty evaluators, we have been going through a process to help foster consistency in how we, as an institution and as a faculty, apply different rubrics as we assess general education outcomes. I’ll describe that process, and I’m interested in hearing from others who are perhaps engaged in a similar process and/or doing it differently.
A recent article published by Inside Higher Ed focuses on a study suggesting that student motivation on low-stakes, standardized exams used for institutional assessment may impact the reliability of results. That has serious implications for institutions using the results of those exams to report institutional effectiveness regarding student achievement of institutional outcomes. That article and study are important reading for institutional assessment professionals.
“But my institution relies more heavily on course-embedded assessment, so it’s not as relevant to my institution.” Not so fast… I believe the results of that study also have implications for institutional administration of outcomes assessment projects relying on course-embedded assessments.
I have a speaking engagement coming up next Friday at Howard College in San Angelo. I’ve been developing a custom presentation to address the specific needs described in my conversations with the colleague who invited me to present. As usual, the development process takes on a life of its own, and the presentation slowly emerges during the weeks that I spend preparing it.
Most of the topics I’ve planned to include have been in place for some time, but the organization has been evolving quite a bit. At the moment, the presentation is a series of issues and challenges I’ll pose to faculty to improve assessment in courses. Of course, interaction is expected; there’s a question and a pause every fourth slide or so. It’s more about them than it is about me.
Current assessment issues to be included are listed below, each with questions I may ask and a short summary or a link to a previous blog post where I’ve discussed the issue. I’m always open to discussion and comments.
As the person primarily responsible at my institution for general education, program, and course-level outcomes assessment, I helped initiate a project this academic year to implement Blackboard Outcomes, which integrates deeply with the Blackboard Learn LMS. In short, Blackboard Outcomes makes it possible to electronically collect samples of student work so that they may be evaluated – also electronically – against a rubric (e.g. the AAC&U LEAP VALUE rubrics) as part of an institutional or programmatic outcomes assessment project. With the evidence collection and evaluation process occurring electronically, the reporting process is also greatly streamlined. Naturally, as we’ve implemented the tool, I’ve encountered a few features I’d like to have that are not currently available. One needed feature is for Blackboard Outcomes to be able to collect samples of student work submitted to Turnitin assignments. The full product enhancement suggestion I submitted is below. If you work with Blackboard Outcomes, I’m interested in your feedback, and in your making the same suggestion to Blackboard.