Part 1. What is a rubric?
Part 2. Why use a rubric?
Part 3. What are the parts of a rubric?
Part 4. Developing a rubric
Part 5. Sample rubrics
Part 6. Scoring rubric group orientation and calibration
Part 7. Suggestions for Using Rubrics in Courses
Part 8. Tips for developing a rubric
See also: workshop presentation slides and handouts
- Using Rubrics in Program Assessment (2013)
- Workshop handout (Word document)
- How to Use a Rubric for Program Assessment (2010)
- Techniques for Using Rubrics in Program Assessment by guest speaker Dannelle Stevens (2010)
- Rubrics: Save Grading Time & Engage Students in Learning by guest speaker Dannelle Stevens (2009)
1. What is a rubric?
A rubric is an assessment tool, often laid out as a matrix, that describes levels of achievement in a specific area of performance, understanding, or behavior.
There are two main types of rubrics:
Analytic Rubric: An analytic rubric specifies at least two characteristics to be assessed at each performance level and provides a separate score for each characteristic (e.g., a score on “formatting” and a score on “content development”).
- Advantages: provides more detailed feedback on student performance; promotes consistent scoring across students and between raters
- Disadvantages: more time consuming than applying a holistic rubric
- Use when:
- You want to see strengths and weaknesses.
- You want detailed feedback about student performance.
Holistic Rubric: A holistic rubric provides a single score based on an overall impression of a student’s performance on a task.
- Advantages: quick scoring; provides an overview of student achievement; efficient for large group scoring
- Disadvantages: does not provide detailed information; not diagnostic; may be difficult for scorers to decide on one overall score
- Use when:
- You want a quick snapshot of achievement.
- A single dimension is adequate to define quality.
2. Why use a rubric?
- A rubric creates a common framework and language for assessment.
- Complex products or behaviors can be examined efficiently.
- Well-trained reviewers apply the same criteria and standards.
- Rubrics are criterion-referenced, rather than norm-referenced. Raters ask, “Did the student meet the criteria for level 5 of the rubric?” rather than “How well did this student do compared to other students?”
- Using rubrics can lead to substantive conversations among faculty.
- When faculty members collaborate to develop a rubric, it promotes shared expectations and grading practices.
Faculty members can use rubrics for program assessment. Examples:
The English Department collected essays from students in all sections of English 100. A random sample of essays was selected. A team of faculty members evaluated the essays by applying an analytic scoring rubric. Before applying the rubric, they “normed” it; that is, they agreed on how to apply the rubric by scoring the same set of essays and discussing them until they reached consensus (see below: “6. Scoring rubric group orientation and calibration”).
Biology laboratory instructors agreed to use a “Biology Lab Report Rubric” to grade students’ lab reports in all Biology lab sections, from 100- to 400-level. At the beginning of each semester, instructors met and discussed sample lab reports. They agreed on how to apply the rubric and their expectations for an “A,” “B,” “C,” etc., report in 100-level, 200-level, and 300- and 400-level lab sections. Every other year, a random sample of students’ lab reports is selected from 300- and 400-level sections. Each of those reports is then scored by a Biology professor, and that score is compared to the score given by the course instructor. The scores are also reported as part of the program’s assessment report. In this way, the program determines how well it is meeting its outcome, “Students will be able to write biology laboratory reports.”
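For programs that record paired scores like those in the Biology example, the comparison can be tallied with a short script. This is an illustrative sketch only: the scores and the “within one level” check are assumptions, not part of the original example.

```python
# Hypothetical paired scores on a 4-point rubric for the same lab reports:
# one set from the course instructor, one from an independent reviewer.
instructor = [4, 3, 2, 4, 3, 1, 3, 2]
reviewer = [4, 3, 3, 4, 2, 1, 3, 2]

# Exact agreement: both raters gave the identical score.
exact = sum(a == b for a, b in zip(instructor, reviewer))
# Adjacent agreement: scores differ by at most one level.
adjacent = sum(abs(a - b) <= 1 for a, b in zip(instructor, reviewer))

n = len(instructor)
print(f"Exact agreement:  {exact}/{n} = {exact / n:.0%}")
print(f"Within one level: {adjacent}/{n} = {adjacent / n:.0%}")
```

Programs often report adjacent agreement alongside exact agreement, since a one-level difference on a 4-point scale usually reflects a judgment call rather than a misreading of the rubric.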
3. What are the parts of a rubric?
Rubrics are composed of four basic parts. In its simplest form, the rubric includes:
- A task description. The outcome being assessed or instructions students received for an assignment.
- The characteristics to be rated (rows). The skills, knowledge, and/or behavior to be demonstrated.
- Levels of mastery/scale (columns). Labels used to describe the levels of mastery should be tactful and clear. Commonly used labels include:
- Not meeting, approaching, meeting, exceeding
- Exemplary, proficient, marginal, unacceptable
- Advanced, intermediate high, intermediate, novice
- 1, 2, 3, 4
- A description of each characteristic at each level of mastery/scale (cells).
4. Developing a rubric
Step 1: Identify what you want to assess
Step 2: Identify the characteristics to be rated (rows). These are also called “dimensions.”
- Specify the skills, knowledge, and/or behaviors that you will be looking for.
- Limit the characteristics to those that are most important to the assessment.
Step 3: Identify the levels of mastery/scale (columns).
Tip: Aim for an even number (4 or 6) because when an odd number is used, the middle tends to become the “catch-all” category.
Step 4: Describe each level of mastery for each characteristic/dimension (cells).
- Describe the best work you could expect using these characteristics. This describes the top category.
- Describe an unacceptable product. This describes the lowest category.
- Develop descriptions of intermediate-level products for intermediate categories.
Important: Descriptions and characteristics should be mutually exclusive, so that a piece of work fits only one level per characteristic.
Step 5: Test rubric.
- Apply the rubric to an assignment.
- Share with colleagues.
Tip: Faculty members often find it useful to establish the minimum score needed for student work to be deemed passable. For example, faculty members may decide that a “1” or “2” on a 4-point scale (4=exemplary, 3=proficient, 2=marginal, 1=unacceptable) does not meet minimum quality expectations. We encourage a standard-setting session to set the score needed to meet expectations (also called a “cutscore”). Monica has posted materials from standard setting workshops, one offered on campus and the other at a national conference (includes speaker notes with the presentation slides).
Faculty members may then set a criterion for success, for example: 90% of students must score 3 or higher. If assessment results fall short of the criterion, action will need to be taken.
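A criterion like this reduces to simple arithmetic. The sketch below assumes a 4-point rubric, a cutscore of 3, and a 90% target; the scores themselves are hypothetical, not taken from the text.

```python
# Hypothetical scores from an assessment study on a 4-point rubric.
scores = [4, 3, 3, 2, 4, 3, 4, 3, 2, 3]
cutscore = 3    # minimum score deemed passable (assumed)
target = 0.90   # criterion for success: 90% at or above the cutscore

# Proportion of students meeting the cutscore.
meeting = sum(s >= cutscore for s in scores) / len(scores)
print(f"{meeting:.0%} of students met the cutscore (target: {target:.0%})")
if meeting < target:
    print("Criterion not met: the program will need to take action.")
```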
Step 6: Discuss with colleagues. Review feedback and revise.
Important: When developing a rubric for program assessment, enlist the help of colleagues. Rubrics promote shared expectations and consistent grading practices which benefit faculty members and students in the program.
5. Sample rubrics
Rubrics are in our Rubric Bank and more are available at the Assessment Office (hard copy).
These open as Word documents and are examples from outside UH.
- Group Participation (analytic rubric)
- Participation (holistic rubric)
- Design Project (analytic rubric)
- Critical Thinking (analytic rubric)
- Media and Design Elements (analytic rubric; portfolio)
- Writing (holistic rubric; portfolio)
6. Scoring rubric group orientation and calibration
When using a rubric for program assessment purposes, faculty members apply the rubric to pieces of student work (e.g., reports, oral presentations, design projects). To produce dependable scores, each faculty member needs to interpret the rubric in the same way. The process of training faculty members to apply the rubric is called “norming.” It’s a way to calibrate the faculty members so that scores are accurate and consistent across the faculty. Below are directions for an assessment coordinator carrying out this process.
Suggested materials for a scoring session:
- Copies of the rubric
- Copies of the “anchors”: pieces of student work that illustrate each level of mastery. Suggestion: have 6 anchor pieces (2 low, 2 middle, 2 high)
- Score sheets
- Extra pens, tape, post-its, paper clips, stapler, rubber bands, etc.
Hold the scoring session in a room that:
- Allows the scorers to spread out as they rate the student pieces
- Has a chalk or white board, smart board, or flip chart
- Describe the purpose of the activity, stressing how it fits into program assessment plans. Explain that the purpose is to assess the program, not individual students or faculty, and describe ethical guidelines, including respect for confidentiality and privacy.
- Describe the nature of the products that will be reviewed, briefly summarizing how they were obtained.
- Describe the scoring rubric and its categories. Explain how it was developed.
- Analytic rubric: Explain that readers should rate each dimension separately, and that they should apply the criteria without concern for how often each score (level of mastery) is used.
- Holistic rubric: Explain that readers should assign the score or level of mastery that best describes the whole piece; some aspects of the piece may not appear in that score, and that is okay. Again, they should apply the criteria without concern for how often each score is used.
- Give each scorer a copy of several student products that are exemplars of different levels of performance. Ask each scorer to independently apply the rubric to each of these products, writing their ratings on a scrap sheet of paper.
- Once everyone is done, collect everyone’s ratings and display them so everyone can see the degree of agreement. This is often done on a blackboard, with each person in turn announcing his/her ratings as they are entered on the board. Alternatively, the facilitator could ask raters to raise their hands when their rating category is announced, making the extent of agreement very clear to everyone and making it very easy to identify raters who routinely give unusually high or low ratings.
- Guide the group in a discussion of their ratings. There will be differences. This discussion is important to establish standards. Attempt to reach consensus on the most appropriate rating for each of the products being examined by inviting people who gave different ratings to explain their judgments. Raters should be encouraged to explain by making explicit references to the rubric. Usually consensus is possible, but sometimes a split decision is developed, e.g., the group may agree that a product is a “3-4” split because it has elements of both categories. This is usually not a problem. You might allow the group to revise the rubric to clarify its use but avoid allowing the group to drift away from the rubric and learning outcome(s) being assessed.
- Once the group is comfortable with how the rubric is applied, the rating begins. Explain how to record ratings using the score sheet and explain the procedures. Reviewers begin scoring.
- If you can quickly summarize the scores, present a summary to the group at the end of the reading. You might end the meeting with a discussion of five questions:
- Are results sufficiently reliable?
- What do the results mean? Are we satisfied with the extent of students’ learning?
- Who needs to know the results?
- What are the implications of the results for curriculum, pedagogy, or student support services?
- How might the assessment process, itself, be improved?
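The agreement check at the heart of the calibration steps above can be sketched in a few lines. This is an illustrative sketch only: the rater names and scores are hypothetical, and the spread threshold and per-rater means are assumptions about how a coordinator might summarize the board.

```python
# Hypothetical ratings: each rater scores the same six anchor pieces.
ratings = {
    "Rater A": [3, 2, 4, 1, 3, 2],
    "Rater B": [3, 3, 4, 1, 3, 2],
    "Rater C": [4, 4, 4, 2, 4, 3],
}

# One tuple of scores per anchor piece, across all raters.
pieces = list(zip(*ratings.values()))
for i, piece in enumerate(pieces, start=1):
    spread = max(piece) - min(piece)
    flag = "  <- discuss before scoring begins" if spread > 1 else ""
    print(f"Anchor {i}: scores {piece}, spread {spread}{flag}")

# Per-rater means make it easy to spot a consistently high or low rater.
group_mean = sum(sum(p) for p in pieces) / (len(pieces) * len(ratings))
for name, scores in ratings.items():
    mean = sum(scores) / len(scores)
    print(f"{name}: mean {mean:.2f} (group mean {group_mean:.2f})")
```

A spread of more than one level on an anchor piece marks exactly the kind of disagreement the consensus discussion is meant to resolve, and a rater whose mean sits well above or below the group mean is the “routinely high or low” rater the show-of-hands step is designed to surface.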
7. Suggestions for Using Rubrics in Courses
- Use the rubric to grade student work. Hand out the rubric with the assignment so students will know your expectations and how they’ll be graded. This should help students master your learning outcomes by guiding their work in appropriate directions.
- Use a rubric for grading student work and return the rubric with the grading on it. Faculty save time writing extensive comments; they just circle or highlight relevant segments of the rubric. Some faculty members include room for additional comments on the rubric page, either within each section or at the end.
- Develop a rubric with your students for an assignment or group project. Students can then monitor themselves and their peers using agreed-upon criteria that they helped develop. Many faculty members find that students will create higher standards for themselves than faculty members would impose on them.
- Have students apply your rubric to sample products before they create their own. Faculty members report that students are quite accurate when doing this, and this process should help them evaluate their own projects as they are being developed. The ability to evaluate, edit, and improve draft documents is an important skill.
- Have students exchange paper drafts and give peer feedback using the rubric. Then, give students a few days to revise before submitting the final draft to you. You might also require that they turn in the draft and peer-scored rubric with their final paper.
- Have students self-assess their products using the rubric and hand in their self-assessment with the product; then, faculty members and students can compare self- and faculty-generated evaluations.
8. Tips for developing a rubric
- Find and adapt an existing rubric! It is rare to find a rubric that is exactly right for your situation, but you can adapt an already existing rubric that has worked well for others and save a great deal of time. A faculty member in your program may already have a good one.
- Evaluate the rubric. Ask yourself: A) Does the rubric relate to the outcome(s) being assessed? (If yes, success!) B) Does it address anything extraneous? (If yes, delete.) C) Is the rubric useful, feasible, manageable, and practical? (If yes, find multiple ways to use the rubric: program assessment, assignment grading, peer review, student self assessment.)
- Collect samples of student work that exemplify each point on the scale or level. A rubric will not be meaningful to students or colleagues until the anchors/benchmarks/exemplars are available.
- Expect to revise.
- When you have a good rubric, SHARE IT!
Sources consulted (last accessed 2017):
- Rubric Library, Institutional Research, Assessment & Planning, California State University-Fresno
- The Basics of Rubrics [PDF], Schreyer Institute, Penn State
- Creating Rubrics, Teaching Methods and Management, TeacherVision
- Allen, Mary – University of Hawai’i at Manoa Spring 2008 Assessment Workshops, May 13-14, 2008 [available at the Assessment and Curriculum Support Center]
- Mertler, Craig A. (2001). Designing scoring rubrics for your classroom. Practical Assessment, Research & Evaluation, 7(25).
- NPEC Sourcebook on Assessment: Definitions and Assessment Methods for Communication, Leadership, Information Literacy, Quantitative Reasoning, and Quantitative Skills. [PDF] (June 2005)