Creating and Using Rubrics

Last Updated: 4 March 2024.

On this page:

  1. What is a rubric?
  2. Why use a rubric?
  3. What are the parts of a rubric?
  4. Developing a rubric
  5. Sample rubrics
  6. Scoring rubric group orientation and calibration
  7. Suggestions for using rubrics in courses
  8. Equity-minded considerations for rubric development
  9. Tips for developing a rubric
  10. Additional resources & sources consulted

Note: The information and resources contained here serve only as a primer to the exciting and diverse perspectives in the field today. This page will be continually updated to reflect shared understandings of equity-minded theory and practice in learning assessment.

1. What is a rubric?

A rubric is an assessment tool, often shaped like a matrix, that describes levels of achievement in a specific area of performance, understanding, or behavior.

There are two main types of rubrics:

Analytic Rubric: An analytic rubric specifies at least two characteristics to be assessed at each performance level and provides a separate score for each characteristic (e.g., a score on “formatting” and a score on “content development”).

  • Advantages: provides more detailed feedback on student performance; promotes consistent scoring across students and between raters
  • Disadvantages: more time consuming than applying a holistic rubric
  • Use when:
    • You want to see strengths and weaknesses.
    • You want detailed feedback about student performance.

Holistic Rubric: A holistic rubric provides a single score based on an overall impression of a student’s performance on a task.

  • Advantages: quick scoring; provides an overview of student achievement; efficient for large group scoring
  • Disadvantages: does not provide detailed information; not diagnostic; may be difficult for scorers to decide on one overall score
  • Use when:
    • You want a quick snapshot of achievement.
    • A single dimension is adequate to define quality.

2. Why use a rubric?

  • A rubric creates a common framework and language for assessment.
  • Complex products or behaviors can be examined efficiently.
  • Well-trained reviewers apply the same criteria and standards.
  • Rubrics are criterion-referenced, rather than norm-referenced. Raters ask, “Did the student meet the criteria for level 5 of the rubric?” rather than “How well did this student do compared to other students?”
  • Using rubrics can lead to substantive conversations among faculty.
  • When faculty members collaborate to develop a rubric, it promotes shared expectations and grading practices.

Faculty members can use rubrics for program assessment. Examples:

The English Department collected essays from students in all sections of English 100. A random sample of essays was selected. A team of faculty members evaluated the essays by applying an analytic scoring rubric. Before applying the rubric, they “normed”; that is, they agreed on how to apply the rubric by scoring the same set of essays and discussing them until consensus was reached (see below: “6. Scoring rubric group orientation and calibration”).

Biology laboratory instructors agreed to use a “Biology Lab Report Rubric” to grade students’ lab reports in all Biology lab sections, from 100- to 400-level. At the beginning of each semester, instructors met and discussed sample lab reports. They agreed on how to apply the rubric and their expectations for an “A,” “B,” “C,” etc., report in 100-level, 200-level, and 300- and 400-level lab sections. Every other year, a random sample of students’ lab reports is selected from 300- and 400-level sections. Each of those reports is then scored by a Biology professor. The score given by the course instructor is compared to the score given by the Biology professor. In addition, the scores are reported as part of the program’s assessment report. In this way, the program determines how well it is meeting its outcome, “Students will be able to write biology laboratory reports.”

3. What are the parts of a rubric?

Rubrics are composed of four basic parts. In its simplest form, the rubric includes:

  1. A task description. The outcome being assessed or instructions students received for an assignment.
  2. The characteristics to be rated (rows). The skills, knowledge, and/or behavior to be demonstrated.
  3. Levels of mastery/scale (columns). Labels used to describe the levels of mastery should be tactful and clear. Commonly used labels include:
    • Beginning, approaching, meeting, exceeding
    • Emerging, developing, proficient, exemplary 
    • Novice, intermediate, intermediate high, advanced 
    • Beginning, striving, succeeding, soaring
    • 1, 2, 3, 4
  4. A description of each characteristic at each level of mastery/scale (cells).
    • Also called a “performance description.” Explains what a student will have done to demonstrate they are at a given level of mastery for a given characteristic.
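The four parts above can be sketched as a simple data structure. This is only an illustration of how the pieces relate (task description, rows, columns, cells); the class and field names, and the example rubric, are invented for this sketch.

```python
from dataclasses import dataclass

@dataclass
class Rubric:
    task_description: str                     # 1. the outcome or assignment being assessed
    characteristics: list[str]                # 2. rows: skills, knowledge, and/or behaviors
    levels: list[str]                         # 3. columns: labels for levels of mastery
    descriptions: dict[tuple[str, str], str]  # 4. cells: (characteristic, level) -> performance description

# Hypothetical example for an essay assignment.
essay_rubric = Rubric(
    task_description="Write a persuasive essay on an assigned topic.",
    characteristics=["Content development", "Formatting"],
    levels=["Beginning", "Developing", "Proficient", "Exemplary"],
    descriptions={
        ("Content development", "Exemplary"): "Argument is coherent, well-supported, and original.",
        # ... one entry for every (characteristic, level) pair
    },
)
```

An analytic rubric yields one score per characteristic (row); a holistic rubric would collapse the rows into a single overall set of level descriptions.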

4. Developing a rubric

Step 1: Identify what you want to assess

Step 2: Identify the characteristics to be rated (rows). These are also called “dimensions.”

  • Specify the skills, knowledge, and/or behaviors that you will be looking for.
  • Limit the characteristics to those that are most important to the assessment.

Step 3: Identify the levels of mastery/scale (columns).

Tip: Aim for an even number (4 or 6) because when an odd number is used, the middle tends to become the “catch-all” category.

Step 4: Describe each level of mastery for each characteristic/dimension (cells).

  • Describe the best work you could expect using these characteristics. This describes the top category.
  • Describe an unacceptable product. This describes the lowest category.
  • Develop descriptions of intermediate-level products for intermediate categories.

Important: Each description and each characteristic should be mutually exclusive; levels should not overlap, and characteristics should not duplicate one another.

Step 5: Test rubric.

  • Apply the rubric to an assignment.
  • Share with colleagues.

Tip: Faculty members often find it useful to establish the minimum score needed for the student work to be deemed passable. For example, faculty members may decide that a “1” or “2” on a 4-point scale (4=exemplary, 3=proficient, 2=marginal, 1=unacceptable) does not meet the minimum quality expectations. We encourage a standard-setting session to set the score needed to meet expectations (also called a “cutscore”). Monica has posted materials from standard-setting workshops, one offered on campus and the other at a national conference (includes speaker notes with the presentation slides).

For example, faculty members may set their criterion for success as: 90% of students must score 3 or higher. If assessment study results fall short, action will need to be taken.
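A criterion for success like this is straightforward to check once scores are collected. A minimal sketch, assuming a 4-point scale and a made-up set of scores:

```python
# Hypothetical scores for a sample of student work on a 4-point scale
# (4=exemplary, 3=proficient, 2=marginal, 1=unacceptable).
scores = [4, 3, 3, 2, 4, 3, 1, 3, 4, 3]

cutscore = 3   # minimum score that "meets expectations"
target = 0.90  # criterion for success: 90% of students at or above the cutscore

proportion_meeting = sum(s >= cutscore for s in scores) / len(scores)
criterion_met = proportion_meeting >= target

print(f"{proportion_meeting:.0%} met expectations; criterion met: {criterion_met}")
# -> 80% met expectations; criterion met: False
```

Here 8 of 10 scores are at or above the cutscore, so the 90% criterion is not met and the program would plan follow-up action.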

Step 6: Discuss with colleagues. Review feedback and revise.

Important: When developing a rubric for program assessment, enlist the help of colleagues. Rubrics promote shared expectations and consistent grading practices which benefit faculty members and students in the program.

5. Sample rubrics

Rubrics are on our Rubric Bank page and in our Rubric Repository (Graduate Degree Programs). More are available at the Assessment and Curriculum Support Center in Crawford Hall (hard copy).

These open as Word documents and are examples from outside UH.

6. Scoring rubric group orientation and calibration

When using a rubric for program assessment purposes, faculty members apply the rubric to pieces of student work (e.g., reports, oral presentations, design projects). To produce dependable scores, each faculty member needs to interpret the rubric in the same way. The process of training faculty members to apply the rubric is called “norming.” It’s a way to calibrate the faculty members so that scores are accurate and consistent across the faculty. Below are directions for an assessment coordinator carrying out this process.

Suggested materials for a scoring session:

  • Copies of the rubric
  • Copies of the “anchors”: pieces of student work that illustrate each level of mastery. Suggestion: have 6 anchor pieces (2 low, 2 middle, 2 high)
  • Score sheets
  • Extra pens, tape, post-its, paper clips, stapler, rubber bands, etc.

Hold the scoring session in a room that:

  • Allows the scorers to spread out as they rate the student pieces
  • Has a chalk or white board, smart board, or flip chart

Suggested steps for the scoring session:

  1. Describe the purpose of the activity, stressing how it fits into program assessment plans. Explain that the purpose is to assess the program, not individual students or faculty, and describe ethical guidelines, including respect for confidentiality and privacy.
  2. Describe the nature of the products that will be reviewed, briefly summarizing how they were obtained.
  3. Describe the scoring rubric and its categories. Explain how it was developed.
  4. Analytic: Explain that readers should rate each dimension of an analytic rubric separately, and they should apply the criteria without concern for how often each score (level of mastery) is used. Holistic: Explain that readers should assign the score or level of mastery that best describes the whole piece; some aspects of the piece may not appear in that score and that is okay. They should apply the criteria without concern for how often each score is used.
  5. Give each scorer a copy of several student products that are exemplars of different levels of performance. Ask each scorer to independently apply the rubric to each of these products, writing their ratings on a scrap sheet of paper.
  6. Once everyone is done, collect everyone’s ratings and display them so everyone can see the degree of agreement. This is often done on a blackboard, with each person in turn announcing their ratings as they are entered on the board. Alternatively, the facilitator could ask raters to raise their hands when their rating category is announced, making the extent of agreement clear to everyone and making it easy to identify raters who routinely give unusually high or low ratings.
  7. Guide the group in a discussion of their ratings. There will be differences. This discussion is important to establish standards. Attempt to reach consensus on the most appropriate rating for each of the products being examined by inviting people who gave different ratings to explain their judgments. Raters should be encouraged to explain by making explicit references to the rubric. Usually consensus is possible, but sometimes a split decision is developed, e.g., the group may agree that a product is a “3-4” split because it has elements of both categories. This is usually not a problem. You might allow the group to revise the rubric to clarify its use but avoid allowing the group to drift away from the rubric and learning outcome(s) being assessed.
  8. Once the group is comfortable with how the rubric is applied, the rating begins. Explain how to record ratings using the score sheet and explain the procedures. Reviewers begin scoring.
  9. If you can quickly summarize the scores, present a summary to the group at the end of the reading. You might end the meeting with a discussion of five questions:
    • Are results sufficiently reliable?
    • What do the results mean? Are we satisfied with the extent of students’ learning?
    • Who needs to know the results?
    • What are the implications of the results for curriculum, pedagogy, or student support services?
    • How might the assessment process, itself, be improved?
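For step 9 above, a facilitator might summarize ratings per product and flag low-agreement pieces for discussion. The ratings and the 75% agreement threshold below are invented for illustration; this is one simple way to surface disagreement, not a prescribed method.

```python
from collections import Counter

# Hypothetical ratings: each inner list holds all raters' scores for one student product.
ratings_by_product = [
    [3, 3, 4, 3],
    [2, 2, 2, 2],
    [4, 3, 4, 4],
]

for i, ratings in enumerate(ratings_by_product, start=1):
    counts = Counter(ratings)
    modal_score, modal_count = counts.most_common(1)[0]
    agreement = modal_count / len(ratings)  # share of raters giving the most common score
    flag = "" if agreement >= 0.75 else "  <- discuss"
    print(f"Product {i}: mode={modal_score}, agreement={agreement:.0%}{flag}")
```

More formal reliability statistics (e.g., percent exact agreement or an inter-rater reliability coefficient) can replace this quick tally when reporting results.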

7. Suggestions for using rubrics in courses

  • Use the rubric to grade student work. Hand out the rubric with the assignment so students will know your expectations and how they’ll be graded. This should help students master your learning outcomes by guiding their work in appropriate directions.
  • Use a rubric for grading student work and return the rubric with the grading on it. Faculty save time writing extensive comments; they just circle or highlight relevant segments of the rubric. Some faculty members include room for additional comments on the rubric page, either within each section or at the end.
  • Develop a rubric with your students for an assignment or group project. Students can then monitor themselves and their peers using agreed-upon criteria that they helped develop. Many faculty members find that students will create higher standards for themselves than faculty members would impose on them.
  • Have students apply your rubric to sample products before they create their own. Faculty members report that students are quite accurate when doing this, and this process should help them evaluate their own projects as they are being developed. The ability to evaluate, edit, and improve draft documents is an important skill.
  • Have students exchange paper drafts and give peer feedback using the rubric. Then, give students a few days to revise before submitting the final draft to you. You might also require that they turn in the draft and peer-scored rubric with their final paper.
  • Have students self-assess their products using the rubric and hand in their self-assessment with the product; then, faculty members and students can compare self- and faculty-generated evaluations.

8. Equity-minded considerations for rubric development

Ensure transparency by making rubric criteria public, explicit, and accessible

Transparency is a core tenet of equity-minded assessment practice. Students should know and understand how they are being evaluated as early as possible.

  • Ensure the rubric is publicly available & easily accessible. We recommend publishing on your program or department website.
  • Have course instructors introduce and use the program rubric in their own courses. Instructors should explain to students connections between the rubric criteria and the course and program SLOs.
  • Write rubric criteria using student-focused and culturally-relevant language to ensure students understand the rubric’s purpose, the expectations it sets, and how criteria will be applied in assessing their work.
  • Provide clear explanations of the criteria.
    • For example, instructors can provide annotated examples of student work using the rubric language as a resource for students.

Meaningfully involve students and engage multiple perspectives

Rubrics created by faculty alone risk perpetuating unseen biases as the evaluation criteria used will inherently reflect faculty perspectives, values, and assumptions. Including students and other stakeholders in developing criteria helps to ensure performance expectations are aligned between faculty, students, and community members. Additional perspectives to be engaged might include community members, alumni, co-curricular faculty/staff, field supervisors, potential employers, or current professionals. Consider the following strategies to meaningfully involve students and engage multiple perspectives:

  • Ask students what rubric language confuses them.
    • Have students read each evaluation criterion and talk out loud about what they think it means. This will allow you to identify which language is clear and where there is still confusion.
  • Ask students to use their language to interpret the rubric and provide a student version of the rubric.
  • Train students in rubric construction and then have students collaborate with faculty to co-construct the program rubric.
    • If you use this strategy, it is essential to create an inclusive environment where students and faculty have equal opportunity to provide input.
  • Be sure to incorporate feedback from faculty and instructors who teach diverse courses, levels, and in different sub-disciplinary topics. Faculty and instructors who teach introductory courses have valuable experiences and perspectives that may differ from those who teach higher-level courses.
  • Engage multiple perspectives including co-curricular faculty/staff, alumni, potential employers, and community members for feedback on evaluation criteria and rubric language. This will ensure evaluation criteria reflect what is important for all stakeholders.
  • Elevate historically silenced voices in discussions on rubric development. Ensure stakeholders from historically underrepresented communities have their voices heard and valued.

Honor students’ strengths in performance descriptions

When describing students’ performance at different levels of mastery, use language that describes what students can do rather than what they cannot do. For example:

  • Instead of: Students cannot make coherent arguments consistently.
  • Use: Students can make coherent arguments occasionally.

9. Tips for developing a rubric

  • Find and adapt an existing rubric! It is rare to find a rubric that is exactly right for your situation, but you can adapt an already existing rubric that has worked well for others and save a great deal of time. A faculty member in your program may already have a good one.
  • Evaluate the rubric. Ask yourself: A) Does the rubric relate to the outcome(s) being assessed? (If yes, success!) B) Does it address anything extraneous? (If yes, delete.) C) Is the rubric useful, feasible, manageable, and practical? (If yes, find multiple ways to use the rubric: program assessment, assignment grading, peer review, student self assessment.)
  • Collect samples of student work that exemplify each point on the scale or level. A rubric will not be meaningful to students or colleagues until the anchors/benchmarks/exemplars are available.
  • Expect to revise.
  • When you have a good rubric, SHARE IT!

10. Additional resources & sources consulted:

Contributors: Monica Stitt-Bergh, Ph.D., TJ Buckley, Yao Z. Hill Ph.D.