Assessment of Student Learning
Developing Scoring Criteria (Rubrics)
DISCLAIMER: The data in this section is fictitious and does not, in any way, represent any of the programs at Gallaudet University. This information is intended only as an example.
A rubric is a scoring guide used to assess performance against a set of criteria. At a minimum, it is a list of the components you are looking for when you evaluate an assignment. At its most advanced, it is a tool that divides an assignment into its parts and provides explicit expectations of acceptable and unacceptable levels of performance for each component.
1 – Checklists, the least complex form of scoring system, are simple lists indicating the presence, NOT the quality, of the elements. Because they say nothing about quality, checklists are not frequently used in higher education for program-level assessment, but faculty may find them useful for scoring and giving feedback on minor student assignments or on practice runs and drafts of assignments.
Example 1: Critical Thinking Checklist
The student…
__ Accurately interprets evidence, statements, graphics, questions, etc.
__ Identifies the salient arguments (reasons and claims)
__ Analyzes and evaluates major alternative points of view
__ Draws warranted, judicious, non-fallacious conclusions
__ Justifies key results and procedures, explains assumptions and reasons
__ Fair-mindedly follows where evidence and reasons lead
Example 2: Presentation Checklist
__ engaged audience
__ used an academic or consultative American Sign Language (ASL) register
__ used adequate ASL syntactic and semantic features
__ cited references adequately in ASL
__ stayed within allotted time
__ managed PowerPoint presentation technology smoothly
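To make the presence-versus-quality distinction concrete, here is a minimal sketch in Python of how a checklist records only whether each element appeared. The item names loosely echo Example 2 but are illustrative, not an official instrument:

```python
# Minimal sketch: a checklist records presence, NOT quality.
# Item names are illustrative, loosely echoing Example 2 above.

PRESENTATION_CHECKLIST = [
    "engaged audience",
    "used an academic or consultative ASL register",
    "used adequate ASL syntactic and semantic features",
    "cited references adequately in ASL",
    "stayed within allotted time",
    "managed presentation technology smoothly",
]

def score_checklist(observed: set[str]) -> dict[str, bool]:
    """Mark each element present or absent; no quality judgment is made."""
    return {item: item in observed for item in PRESENTATION_CHECKLIST}

# A rater simply checks off what they observed:
results = score_checklist({"engaged audience", "stayed within allotted time"})
for item, present in results.items():
    print("x" if present else "_", item)
```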
2 – Basic Rating Scales are checklists of criteria that evaluate the quality of elements and include a scoring system. Their main drawback is that the meaning of the numeric ratings can be vague: without descriptors for the ratings, raters must judge based on their own perception of what the terms mean. For the same presentation, one rater might consider a student "good," while another might feel the same student was "marginal."
3 – Holistic Rating Scales use a short narrative of characteristics to award a single score based on an overall impression of a student's performance on a task. A drawback is that they do not identify specific areas of strength and weakness and are therefore less useful for focusing your improvement efforts. Use a holistic rating scale when the projects to be assessed vary greatly (e.g., independent study projects submitted in a capstone course) or when the number of assignments to be evaluated is large (e.g., reviewing all the essays from applicants to determine who will need developmental courses).
Example: The Holistic Critical Thinking Scoring Rubric: A Tool for Developing and Evaluating Critical Thinking. Retrieved April 12, 2010, from Insight Assessment.
4 – Analytic Rating Scales are rubrics that include explicit performance expectations for each possible rating of each criterion. They are especially appropriate for complex learning tasks with multiple criteria, though you should evaluate carefully whether this is the most appropriate tool for your assessment needs. Analytic rating scales provide more detailed feedback on student performance and more consistent scoring among raters; the disadvantage is that they can be time-consuming to develop and apply. Results can be aggregated to provide detailed information on the strengths and weaknesses of a program.
Example: Critical Thinking Portion of the Gallaudet University Rubric for Assessing Written English
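To picture that structure, the sketch below models an analytic rating scale as a grid of criteria by rating levels, each cell holding an explicit performance descriptor. The criteria, levels, and descriptors are invented placeholders, not the actual Gallaudet rubric:

```python
# Sketch of an analytic rating scale: every criterion has an explicit
# descriptor for every possible rating. All content here is a made-up
# placeholder, not the actual Gallaudet rubric.

ANALYTIC_RUBRIC = {
    "interprets evidence": {
        4: "accurately interprets all evidence, statements, and graphics",
        3: "accurately interprets most evidence",
        2: "misinterprets some evidence",
        1: "misinterprets most evidence",
    },
    "draws conclusions": {
        4: "draws warranted, judicious, non-fallacious conclusions",
        3: "conclusions mostly follow from the evidence",
        2: "some conclusions are unsupported",
        1: "conclusions are fallacious or unsupported",
    },
}

def total_score(ratings: dict[str, int]) -> int:
    """Each criterion is scored separately, then summed."""
    for criterion, level in ratings.items():
        if level not in ANALYTIC_RUBRIC[criterion]:
            raise ValueError(f"no descriptor for {criterion} at level {level}")
    return sum(ratings.values())

print(total_score({"interprets evidence": 3, "draws conclusions": 4}))  # 7
```

Because each criterion is scored separately, per-criterion results can later be aggregated across students, which is what makes this format useful at the program level.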
There are two ways to approach building an analytic rating scale: the logical model or the organic model. For both models, steps 1-3 are the same.
Tip: Adding numbers to the ratings can make scoring easier. However, if you plan to use the rating scale for course-level grading as well, a meaning must be attached to each score. For example, what is the minimum score that would be considered acceptable for a "C"?
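As a rough illustration of attaching meaning to scores, this sketch maps a total rubric score to a letter grade; the thresholds are hypothetical and each program would set its own:

```python
# Hypothetical score-to-grade mapping. The thresholds are invented for
# illustration; a program must decide, for instance, the minimum total
# score that counts as a "C".

GRADE_THRESHOLDS = [  # (minimum total score, grade), highest first
    (18, "A"),
    (14, "B"),
    (10, "C"),  # assumed minimum acceptable score for a "C"
    (6, "D"),
]

def grade_from_score(total: int) -> str:
    for minimum, grade in GRADE_THRESHOLDS:
        if total >= minimum:
            return grade
    return "F"

print(grade_from_score(11))  # C
```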
Components of Analytic Rating Scales
Other possible descriptors include examples of inconsistent performance characteristics and suggested corrections.
Tips: Keep the list of characteristics manageable by including only critical evaluative components. Extremely long, overly detailed lists make a rating scale hard to use.
In addition to keeping descriptions brief, keep the language consistent. Below are several ideas for keeping descriptors consistent:
Keep the aspects of performance the same across the levels, adding adjectives or adverbial phrases to show the qualitative difference.
A word of warning: numeric references on their own can be misleading. They are best paired with a qualitative reference (e.g., three appropriate and relevant examples) to avoid rewarding quantity at the expense of quality.
Use rating scales for program-level assessment to see trends in strengths and weaknesses of groups of students.
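A minimal sketch of that program-level aggregation, using made-up student ratings for the placeholder criteria above:

```python
# Sketch: aggregate per-criterion ratings across students to spot
# program-level trends. The student data is fabricated.
from statistics import mean

student_ratings = [  # one dict of criterion -> rating per student
    {"interprets evidence": 3, "draws conclusions": 2},
    {"interprets evidence": 4, "draws conclusions": 2},
    {"interprets evidence": 3, "draws conclusions": 1},
]

for criterion in student_ratings[0]:
    avg = mean(s[criterion] for s in student_ratings)
    print(f"{criterion}: mean rating {avg:.2f}")
# A consistently low mean (here, "draws conclusions") flags a
# program-level weakness to target for improvement.
```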
For more information on using course-level assessment to provide feedback to students and to determine grades, see University of Hawaii’s “Part 7. Suggestions for Using Rubrics in Courses” and the section on Converting Rubric Scores to Grades in Craig A. Mertler’s “Designing Scoring Rubrics for Your Classroom”.
Adapted from sources below:
Allen, Mary. (January 2006). Assessment Workshop Material. California State University, Bakersfield. Retrieved DATE from http://www.csub.edu/TLC/options/resources/handouts/AllenWorkshopHandoutJan06.pdf
http://www.uhm.hawaii.edu/assessment/howto/rubrics.htm
http://www.teachervision.fen.com/teaching-methods-and-management/rubrics/4523.html?detoured=1
Mueller, Jon. (2001). Rubrics. Authentic Assessment Toolbox. Retrieved April 12, 2010 from http://jonathan.mueller.faculty.noctrl.edu/toolbox/rubrics.htm
http://en.wikipedia.org/wiki/Rubric_(academic)
Tierney, Robin & Marielle Simon. (2004). What’s Still Wrong With Rubrics: Focusing on the Consistency of Performance Criteria Across Scale Levels. Practical Assessment, Research & Evaluation, 9(2).