The previous post highlighted differences between outcomes and objectives for institutional assessment. As promised, I hope to further clarify those differences by taking the next step in the institutional assessment planning process: defining and planning assessment methods for a previously developed outcome. Describing the assessment methods and measures that may be used will further distinguish among outcomes, assessment, and objectives/tasks within institutional assessment.
I’ve written previously about the difference between outcomes and objectives. During a recent institutional effectiveness workshop, I came to the conclusion that the same issue exists when assessing administrative units. From what I’ve experienced and seen, current institutional effectiveness and continuous improvement practices for administrative assessment lead to the same blending of outcomes and objectives that I’ve observed and noted on the instructional side of the house. A few examples and an explanation of how to approach it differently . . .
CHANGE! My online, professional digital footprint is changing, evolving, updating. EdTechatouille, the focus of the site for the past 6+ years, is disappearing into the background. I have given the site a slight makeover to highlight my current professional focus and interest: learning and assessment in higher education. It’s a subtle change to the presentation of the site, but the substantive change is significant.
Bloom’s Taxonomy is exactly that – a taxonomy, not a hierarchy. And students of many ages are capable of higher order thinking skills (e.g. my then-9-year-old daughter) like application, analysis, synthesis, and evaluation; of course, students will exhibit different levels of proficiency and different levels of complexity in their thinking.
Here’s my question… Should there be any college-level course whose learning outcomes are predominantly, or worse yet, entirely at the lower levels of thinking per Bloom’s Taxonomy? Is there any instance in which the outcome of a course should not be an ability to analyze, apply, synthesize, or evaluate content related to the discipline? More >
The Chronicle of Higher Education asked,
Is it time for more widespread reform of college teaching?
This series explores the state of the college lecture, and how technologies point to new models of undergraduate education.
Last month, we began inviting students across the country to fire up their Web cameras or camera-phones to send us video commentaries about whether lectures work for them.
Chronicle.com/LectureFail displays a number of student comments, including a compilation, along with several faculty responses.
As a faculty member, watching several of the videos, I found my beliefs and attitudes to be more in line with the students than with my faculty colleagues. Personally, lectures are boring… for me… as a faculty member. I don’t like them, and pedagogically and historically, I find them to be an outmoded approach to teaching and learning. Why?
I work closely with end-of-course evaluation surveys. At one institution, I administer the online survey system through which we survey students, and at the other, I rely heavily on, and place high value in, feedback from students to help me continuously improve the course. My question is, “How much is that feedback worth?” More >
With a significant focus on the evaluation of our institutional general education curriculum/program, one concept I’ve encountered frequently of late is “course embedded assessment.” However, I’ve discovered at least two different interpretations of the concept. More >
After three iterations of the course I’m teaching, I’m revisiting and potentially revising the grading rubric I use to assess learner participation in discussion forums. Back in August, I described the types of discussions in which my students in COSC 1401 Introduction to Computers are asked to participate and posted the grading rubric for assessing their participation. I have been using that rubric for the last three terms (I teach primarily 8-week terms; two last fall and one so far this spring). But it’s not quite a perfect fit for how the discussions have progressed and how I want to grade them. So, I’m revising. I’m interested in your thoughts on this rubric. More >
Working with a broad range of faculty and instructional design types, I believe there’s some confusion within education regarding Bloom’s Taxonomy. Specifically, it’s often perceived and applied as a hierarchy rather than a taxonomy. Quite bluntly, that is incorrect and counterproductive to effective teaching and learning. More >
At the 2011 Texas Community College Instructional Leaders annual conference in Fort Worth, October 5-6, I had the opportunity to present and discuss three issues I think are important to the effective development of curriculum and assessment. These are issues I’ve identified over the past year as I’ve worked more in depth with my local institution’s curriculum and assessment initiatives. The highlights of the discussion and presentation: More >