A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q | R | S | T | U | V | W | X | Y | Z


(Back to top)

Achievement Results
Findings based on direct evidence (organized by student learning outcome) or collected via direct assessment methods. “Direct” refers to actual student work products, recordings, and observations of student performance (versus “indirect,” which refers to students’ self-reports of their learning, typically collected via surveys).

Alignment
A logical connection between the curriculum and the intended outcomes.
Example: curriculum mapping is an alignment activity: the curriculum is analyzed to determine when, where, and how students are introduced to each learning outcome and then given opportunities to develop and demonstrate achievement. The “map” visually displays where outcomes are emphasized in the curriculum.

Analytic Scoring
Scoring that divides the student work into elemental, logical parts or basic principles. Scorers evaluate student work across multiple dimensions of performance rather than from an overall impression (holistic scoring). In analytic scoring, individual scores for each dimension are determined and reported; however, an overall impression of quality may be included. (P.A. Gantt; CRESST Glossary) See also: Holistic Scoring.
Example: analytic scoring of a history essay might include scores of the following dimensions: use of prior knowledge, application of principles, use of original source material to support a point of view, and composition.
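
As an illustration, the mechanics of analytic scoring can be sketched in a few lines of Python; the dimension names and the 1–4 scale below are hypothetical, not taken from the glossary or any particular rubric:

```python
# Illustrative analytic scoring of a history essay. The four dimensions and
# the 1-4 scale are hypothetical, not prescribed by any particular rubric.
scores = {
    "prior_knowledge": 3,
    "application_of_principles": 4,
    "use_of_sources": 2,
    "composition": 3,
}

# Analytic scoring: each dimension is scored and reported individually...
for dimension, score in scores.items():
    print(f"{dimension}: {score}")

# ...and an overall impression of quality may be added, e.g., a simple mean.
overall = sum(scores.values()) / len(scores)
print(f"overall: {overall:.2f}")
```

A holistic scorer would instead record a single overall judgment rather than the four dimension scores.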

Anchor
A sample of student work that exemplifies a specific level of performance. Anchors are often used in rubric scoring: raters score student work by comparing each student performance to the anchor.
Example: if student work was being scored on a scale of 1-5, there would typically be anchors (previously scored student work) exemplifying each point on the scale. (CRESST Glossary)

Assessment
“Assessment is the systematic collection, review, and use of information about educational programs undertaken for the purpose of improving student learning and development.” (Palomba & Banta, 1999)

Assessment for Accountability
The assessment of some unit (a program, department, or entire institution) conducted to satisfy external stakeholders. Results are summative and often compared across units. (Leskes, A., 2002)
Example: to retain state approval, graduates of the school of education must achieve a 90 percent pass rate or better on teacher certification tests.

Assessment for Improvement
Assessment that feeds directly back into revising the course, program, or institution to improve student learning results. (Leskes, A., 2002)

Assessment Plan
A document that outlines the following:

  • program mission/goals
  • desired student learning outcomes (or objectives)
  • learning processes (e.g., courses, activities, assignments) that contribute to students’ abilities to reach the program’s outcomes (this may be shown in the form of a curriculum map)
  • a description of how students demonstrate the learning outcomes (e.g., oral presentation, written lab report, dance performance)
  • the method used to evaluate that demonstration of learning (e.g., a rubric, observation checklist, or exam answer key)
  • timeline
  • location of the mission/goals and student learning outcomes (e.g., web site, brochure, advising session)

A plan for a specific assessment activity/project will include the following:

  • the purpose or goal of particular assessment activities
  • how the results will be used and who will use them
  • brief explanation of data-collection methods and the analysis methods
  • an indication of which outcome(s)/objective(s) is/are addressed by each method
  • the intervals at which evidence is collected and reviewed
  • the individual(s) responsible for the collection/review of evidence and dissemination of assessment results

(adapted from the Northern Illinois University Assessment Glossary)

Authentic Assessment
Determining the level of students’ knowledge/skill in a particular area by evaluating their ability to perform a “real world” task the way professionals in the field would perform it. Authentic assessment asks for a demonstration of the behavior the learning is intended to produce.
Example: asking students to create a marketing campaign and evaluating that campaign instead of asking students to answer test questions about characteristics of a good marketing campaign.

G. Wiggins is credited with coining the term “authentic assessment”; see his article, “A True Test: Toward More Authentic and Equitable Assessment,” in The Phi Delta Kappan, Vol. 70, No. 9 (May, 1989), pp. 703-713


(Back to top)

Benchmark
A point of reference for measurement; a standard of achievement against which to evaluate or judge performance.


(Back to top)

Capstone Course/Experience

A class or experience designed to help students demonstrate comprehensive learning. In addition to emphasizing learning related to the program requirements, capstones can require students to demonstrate how well they have mastered important learning objectives from the institution’s general studies programs. (Palomba & Banta, 1999)
Closing the loop
Using assessment results for improvement and/or evolution. The first “loop” involves using assessment results to make changes in the program aimed at improving student learning. The second “loop” occurs after the change has been made: the program (re)assesses student learning once students have experienced the changed program and determines whether the change contributed to improved student performance (e.g., higher rubric scores). For more information on learning improvement, visit the Learning Improvement Community’s website.
Competency
The demonstration of the ability to perform a specific task or achieve specified criteria. (James Madison University Dictionary of Student Outcomes Assessment)
Course Assessment
Assessment to determine the extent to which a specific course is achieving its learning outcomes.
Criteria for Success
The minimum requirements for a program to declare itself successful.
Example: 70% of students score 3 or higher on a lab skills assessment.
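
As a sketch (with made-up scores), checking such a criterion against assessment results is a short calculation:

```python
# Hypothetical rubric scores (1-5) for ten students on a lab skills assessment.
scores = [4, 3, 2, 5, 3, 3, 1, 4, 3, 2]

# Criterion for success: at least 70% of students score 3 or higher.
proportion = sum(s >= 3 for s in scores) / len(scores)
met = proportion >= 0.70
print(f"{proportion:.0%} scored 3 or higher; criterion {'met' if met else 'not met'}")
```
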
Criterion-Referenced Assessment
Assessment where student performance is compared to a pre-established performance standard (and not to the performance of other students). (CRESST Glossary) See also: Norm-Referenced Assessment.
Curriculum Map
A matrix (or narrative text) that shows where each program student learning outcome is addressed in the curriculum (in courses or non-course requirements such as the PhD oral defense, an exit interview, internship).
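
A curriculum map can be represented as a simple matrix. The courses, outcomes, and “I/D/M” (introduced/developed/mastered) labels below are hypothetical, invented for illustration:

```python
# Hypothetical curriculum map: rows are courses, columns are program SLOs.
# "I" = introduced, "D" = developed, "M" = mastered, "" = not addressed.
curriculum_map = {
    "HIST 151": {"SLO1": "I", "SLO2": "I", "SLO3": ""},
    "HIST 301": {"SLO1": "D", "SLO2": "",  "SLO3": "I"},
    "HIST 490": {"SLO1": "M", "SLO2": "M", "SLO3": "D"},
}

# Print the map as a small table.
slos = ["SLO1", "SLO2", "SLO3"]
print("Course    " + "  ".join(slos))
for course, row in curriculum_map.items():
    print(course + "  " + "     ".join(row[s] or "-" for s in slos))

# A gap check: flag outcomes that no course develops to mastery.
unmastered = [s for s in slos if not any(r[s] == "M" for r in curriculum_map.values())]
print("outcomes never mastered:", unmastered)
```

The gap check illustrates why programs build such maps: it makes visible any outcome students are never given a chance to fully demonstrate.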


(Back to top)

Direct Assessment
Collecting data/evidence on students’ actual behaviors or products. Direct data-collection methods provide evidence in the form of student products or performances. Such evidence demonstrates the actual learning that has occurred relating to a specific content or skill. (Middle States Commission on Higher Education, 2007). See also: Indirect Assessment.
Examples: exams, course work, essays, oral performance.


(Back to top)

Embedded Assessment
Collecting data/evidence on program learning outcomes by extracting course assignments. It is a means of gathering information about student learning that is built into and a natural part of the teaching-learning process. The instructor evaluates the assignment for individual student grading purposes; the program evaluates the assignment for program assessment. When used for program assessment, typically someone other than the course instructor uses a rubric to evaluate the assignment. (Leskes, A., 2002) See also: Embedded Exams and Quizzes.
Embedded Exams and Quizzes
Collecting data/evidence on program learning outcomes by extracting a course exam or quiz. Typically, the instructor evaluates the exam/quiz for individual student grading purposes; the program evaluates the exam/quiz for program assessment. Often only a section of the exam or quiz is analyzed and used for program assessment purposes. See also: Embedded Assessment.
Evaluation
A value judgment; a statement about quality, merit, and worth.
Evidence (of learning)
Students’ written work, recorded performances, observed performances; students’ self-reports (via surveys, interviews, focus groups); documents on student learning. Evidence is typically divided into direct (i.e., student demonstrations of their learning) and indirect (e.g., students’ self-reports of their learning). See also: Direct Assessment and Indirect Assessment.


(Back to top)

Focus Group
A qualitative data-collection method that relies on facilitated discussions with 3–10 participants who are asked a series of carefully constructed, open-ended questions about their attitudes, beliefs, and experiences. Focus groups are typically considered an indirect data-collection method.
Formative Assessment
Ongoing assessment that takes place during the learning process. It is intended to improve an individual student’s performance, program performance, or overall institutional effectiveness. Formative assessment is used internally, primarily by those responsible for teaching a course or developing and running a program. (Middle States Commission on Higher Education, 2007) See also: Summative Assessment.


(Back to top)

Goals
General expectations for students. Effective goals are broadly stated, meaningful, achievable, and assessable.
Grading
The process of evaluating and ranking students and assigning each student’s work a value on a scale. Typically, grading is done at the course level.


(Back to top)

High Stakes Assessment
Any assessment whose results have important consequences for students, teachers, programs, etc. For example, using results of assessment to determine whether a student should receive certification, graduate, or move on to the next level. Most often the instrument is externally developed, based on set standards, carried out in a secure testing situation, and administered at a single point in time. (Leskes, A., 2002)
Examples: exit exams required for graduation, the bar exam, nursing licensure.
Holistic Scoring
Scoring that emphasizes the importance of the whole and the interdependence of parts. Scorers give a single score based on an overall appraisal of a student’s entire product or performance. Used in situations where the demonstration of learning is considered to be more than the sum of its parts and so the complete final product or performance is evaluated as a whole. (P. A. Gantt) See also: Analytic Scoring.


(Back to top)

ILO
An institution-level learning outcome/objective statement. See also: PLO and SLO.
Indirect Assessment
Collecting evidence/data through reported perceptions about student mastery of learning outcomes. Indirect methods reveal characteristics associated with learning, but they only imply that learning has occurred. (Middle States Commission on Higher Education) See also: Direct Assessment.
Examples: surveys, interviews, focus groups.


(Back to top)

Learning outcomes
Statements that identify the knowledge, skills, or attitudes that students will be able to demonstrate, represent, or produce as a result of a given educational experience. There are three levels of learning outcomes: course, program, and institution.


(Back to top)

Norm-Referenced Assessment
Assessment where student performances are compared to a larger group. In large-scale testing, the larger group, or “norm group,” is usually a national sample representing a wide and diverse cross-section of students. The purpose of a norm-referenced assessment is usually to sort or rank students, not to measure achievement against a pre-established standard. (CRESST Glossary) See also: Criterion-Referenced Assessment.
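
The contrast between norm-referenced and criterion-referenced interpretations can be sketched with invented numbers; the 70-point standard and the ten-score norm group below are hypothetical:

```python
# One hypothetical student score and an invented ten-score norm group.
student = 78
norm_group = [55, 60, 64, 70, 72, 75, 80, 85, 88, 93]

# Criterion-referenced interpretation: compare to a pre-established standard.
passed = student >= 70  # the 70-point cutoff is invented for illustration
print("meets the 70-point standard:", passed)

# Norm-referenced interpretation: rank the student against the norm group.
percentile = 100 * sum(s < student for s in norm_group) / len(norm_group)
print(f"outscored {percentile:.0f}% of the norm group")
```

The same raw score yields two different statements: one about meeting a fixed standard, one about standing relative to other students.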
Norming
Also called “rater training”: the process of educating raters to evaluate student performance and produce dependable scores. Typically, this process uses criterion-referenced standards and analytic or holistic rubrics. Raters need to participate in norming sessions before scoring student performance. (Mount San Antonio College Assessment Glossary)


(Back to top)

Objective
Clear, concise statements that describe how students can demonstrate their mastery of program goals. (Allen, M., 2008) Note: on the Mānoa Assessment web site, “objective” and “outcome” are used interchangeably.

Outcome
Clear, concise statements that describe how students can demonstrate their mastery of program goals. (Allen, M., 2008) Note: on the Mānoa Assessment web site, “objective” and “outcome” are used interchangeably.


(Back to top)

Performance Assessment
The process of using student activities or products, as opposed to tests or surveys, to evaluate students’ knowledge, skills, and development. As part of this process, the performances generated by students are usually rated or scored by faculty or other qualified observers who also provide feedback to students. Performance assessment is described as “authentic” if it is based on examining genuine or real examples of students’ work that closely reflects how professionals in the field go about the task. (Palomba & Banta, 1999)
PLO
A program student learning outcome. “Program” refers to degree programs, program sequences, and the general education program.
Portfolio
A type of performance assessment in which students’ work is systematically collected and carefully reviewed for evidence of learning. In addition to examples of their work, most portfolios include reflective statements prepared by students. Portfolios are assessed for evidence of student achievement with respect to established student learning outcomes and standards. (Palomba & Banta, 1999)
Program Assessment
An on-going process designed to monitor and improve student learning. Faculty: a) develop explicit statements of what students should learn (i.e., student learning outcomes); b) verify that the program is designed to foster this learning (alignment); c) collect data/evidence that indicate student attainment (assessment results); d) use these data to improve student learning (close the loop). (Allen, M., 2008)


(Back to top)

Reliability
In the broadest sense, reliability speaks to the quality of data collection and analysis. It may refer to the level of consistency with which observers/judges assign scores or categorize observations. In psychometrics and testing, it is a mathematical calculation of the consistency, stability, and dependability of a set of measurements.
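
One simple consistency check for scorer reliability is the exact-agreement rate between two raters. This sketch uses invented scores:

```python
# Hypothetical rubric scores assigned by two raters to the same eight papers.
rater_a = [3, 4, 2, 5, 3, 4, 1, 3]
rater_b = [3, 4, 3, 5, 3, 4, 2, 3]

# Exact-agreement rate: the share of papers both raters scored identically.
agreements = sum(a == b for a, b in zip(rater_a, rater_b))
rate = agreements / len(rater_a)
print(f"exact agreement: {rate:.0%}")
```

A low agreement rate would suggest the raters need (re)norming before scoring continues; more formal indices (e.g., correlation-based statistics) are also used in practice.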
Rubric
A tool, often shaped like a matrix with criteria on one side and levels of achievement across the top, used to score products or performances. Rubrics describe the characteristics of different levels of performance, often from exemplary to unacceptable. The criteria are ideally explicit, objective, and consistent with expectations for student performance.
Rubrics may be used by an individual or multiple raters to judge student work. When used by multiple raters, norming takes place before scoring begins.
Rubrics are meaningful and useful when shared with students before their work is judged so they better understand the expectations for their performance. Rubrics are most effective when coupled with benchmark student work or anchors to illustrate how the rubric is applied.


(Back to top)

Scoring Guide or Scoring Criteria
See Rubric
SLO
A student learning outcome statement. We (at UHM) typically use “SLO” to refer to program-level learning outcome statements. See also: PLO and ILO.
Standard
In K–12 education and some other fields, “standard” is synonymous with “outcome.”
Student Learning Outcome
A statement of what students will be able to think, know, do, or feel because of a given educational experience.
Summative Assessment
The gathering of information at the conclusion of a course, program, or undergraduate/graduate career to improve learning or to meet accountability demands. The purposes are to determine whether or not overall goals have been achieved and to provide information on performance for an individual student or statistics about a course or program for internal or external accountability purposes. Grades are the most common form of summative assessment. (Middle States Commission on Higher Education, 2007) See also: Formative Assessment


(Back to top)

Triangulation
The use of a combination of methods in a study: the collection of data from multiple sources to support a central finding or theme, or to overcome the weaknesses associated with a single method.


Used Results
The program/unit made a change aimed at improvement (e.g., a curriculum change) and/or celebrated student success. Programs use achievement results (i.e., results from direct assessment), along with results from surveys, interviews, and focus groups, to guide curriculum development and evolution.


(Back to top)

Validity
Refers to whether the interpretation and intended use of assessment results are logical and supported by theory and evidence. It also refers to whether the anticipated and unanticipated consequences of the interpretation and intended use of assessment results have been taken into consideration. (Standards for Educational and Psychological Testing, 1999)
Value-Added Assessment
Determining the impact or increase in learning that participating in higher education had on students during their programs of study. The focus can be on the individual student or a cohort of students. (Leskes, A., 2002).
A value-added assessment plan is designed so it can reveal “value”: at a minimum, students need to be assessed at the beginning and the ending of the course/program/degree.
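
A minimal sketch of that pre/post comparison, using invented cohort scores:

```python
# Hypothetical pre- and post-program scores for a cohort of five students,
# assessed at program entry and again at program exit.
pre = [52, 60, 47, 70, 65]
post = [68, 72, 60, 81, 79]

# A simple value-added estimate: each student's gain, and the cohort mean gain.
gains = [after - before for before, after in zip(pre, post)]
mean_gain = sum(gains) / len(gains)
print("gains:", gains)
print(f"mean gain: {mean_gain:.1f} points")
```

The per-student gains support a focus on individual students, while the mean gain summarizes the cohort as a whole.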

Helpful link: Internet Resources for Higher Education Outcomes Assessment

Sources consulted:

  • Allen, M. (2008). Assessment Workshop at UH Manoa on May 13-14, 2008
  • American Psychological Association, National Council on Measurement in Education, & the American Educational Research Association. (1999). Standards for educational and psychological testing. Washington DC: American Educational Research Association.
  • CRESST Glossary.
  • Gantt, P.A. Portfolio Assessment: Implications for Human Resource Development. University of Tennessee.
  • James Madison University Dictionary of Student Outcomes Assessment.
  • Leskes, A. (Winter/Spring 2002). Beyond confusion: An assessment glossary. Peer Review, AAC&U.
  • Middle States Commission on Higher Education. (2007). Student learning assessment: Options and resources (2nd Ed.). Philadelphia: Middle States Commission on Higher Education.
  • Mount San Antonio College Assessment Glossary.
  • Northern Illinois University Assessment Terms Glossary.
  • Palomba, C.A. & Banta, T.W. (1999). Assessment essentials: Planning, implementing, and improving assessment in higher education. San Francisco: Jossey-Bass.

updated March 2013