Last Updated: 4 March 2024.
The data or evidence in a learning assessment project is the information on student learning. A common type of learning data/evidence is students’ completed course assignments.
On this page:
- Data-collection methodology: Direct and indirect
- Benefits and drawbacks of data-collection methods
- Equity-minded considerations for data/evidence collection
- Evaluate your choice of method
- Additional resources & sources consulted
Note: The information and resources contained here serve only as a primer to the exciting and diverse perspectives in the field today. This page will be continually updated to reflect shared understandings of equity-minded theory and practice in learning assessment.
1. Data-collection methodology: Direct and indirect
Appropriate data collection methodology helps faculty members to answer meaningful assessment questions. There are typically two types of evidence a program can collect: direct and indirect.
- Direct evidence of student learning comes in the form of a student product or performance that can be evaluated. Direct evidence is required.
- Indirect evidence is student perception, opinion, or attitude. Indirect evidence by itself is insufficient.
Ideally, a program collects both types. Direct evidence, by itself, can reveal what students have learned and to what degree, but it does not reveal why students learned or did not learn. The why is valuable because it can guide faculty members in how to interpret results and make improvements; indirect evidence can be used to answer why questions. Collecting both direct and indirect evidence of student learning gives a program a fuller picture of its students.
Tips
- Choose methods to collect data/evidence that will
- answer specific assessment questions
- be seen as credible to the faculty and the intended users of the results
- provide useful information
- benefit students and faculty
- Address equity considerations (see below)
- Use or modify existing evidence whenever possible. Inventory what evidence of student learning and perceptions about the program already exist.
- The curriculum map is a useful tool when conducting the inventory.
- Use more than one method whenever possible, especially when answering questions about highly-valued learning outcomes.
- Choose methods that are feasible given your program’s resources: money, personnel, and the amount of time faculty can devote to assessment activities.
- Feasibility tips
- Sampling. For programs with 40 or more graduates each year, we suggest a random sample of at least 40 students. For programs with fewer than 40 graduates each year, plan on collecting evidence from 100% of the graduating students.
- Consider Workloads: In our experience, it takes a faculty member an average of 15 minutes to apply a rubric to score an undergraduate research paper or other significant written project.
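The sampling and workload guidelines above can be sketched as a quick feasibility estimate. This is an illustrative sketch only; the function names are hypothetical, and the thresholds (sample of 40, 15 minutes per paper) come from the tips above.

```python
import random

def select_sample(graduates, sample_size=40, seed=None):
    """Pick the students whose work will be scored.

    Programs with 40 or more graduates draw a random sample of at
    least 40; smaller programs assess every graduating student.
    """
    if len(graduates) < sample_size:
        return list(graduates)  # fewer than 40 graduates: assess everyone
    rng = random.Random(seed)   # seed allows a reproducible sample
    return rng.sample(list(graduates), sample_size)

def scoring_hours(num_papers, minutes_per_paper=15):
    """Estimate total faculty scoring time, at ~15 minutes per paper."""
    return num_papers * minutes_per_paper / 60

large_cohort = [f"student_{i}" for i in range(120)]
small_cohort = [f"student_{i}" for i in range(25)]

print(len(select_sample(large_cohort, seed=1)))  # 40
print(len(select_sample(small_cohort)))          # 25
print(scoring_hours(40))                         # 10.0 hours of scoring
```

A 40-student sample of significant written projects thus implies roughly 10 faculty-hours of rubric scoring, which can be divided among a scoring team when planning the assessment calendar.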
Types of direct data/evidence of student learning
Direct data/evidence | Examples |
---|---|
Licensure or certification exam | Nursing program students’ pass rates on the NCLEX (Nursing) examination. |
National exams or standardized tests | a) Freshmen and seniors’ scores on the Collegiate Learning Assessment (CLA) or Collegiate Assessment of Academic Proficiency (CAAP) b) Senior-level biology students’ scores on the GRE Subject Test on Biology. |
Local exams (external to courses) | Entering students’ scores on the Mānoa Writing Placement Exam. Note: these local exams are not course exams (see “embedded testing or quizzes” for course exams/quizzes) |
Embedded testing or quizzes | a) Students’ pass rates on the German 202 final exam (students in all sections of German 202 take the same final exam). b) Two questions from the History 151 final exams are scored by a team of faculty members, and results are used for program-level assessment. |
Embedded assignments | The program selects course assignments that can provide information on a student learning outcome. Students complete these assignments as a regular part of the course and instructors grade the assignments for the course grade. In addition, the assignments are scored using criteria or a scoring rubric and these scores are used for program-level assessment. Examples: a) Course instructors apply the program/shared rubric to evaluate case studies written by students in targeted courses. b) A team of faculty members apply a rubric to evaluate videos of students’ oral presentations given in Oral Communication Focus courses. |
Grades calibrated to clear student learning outcome(s) | Professors give grades based on explicit criteria that are directly related to particular learning outcomes. (See also “embedded testing or quizzes” and “embedded assignments.”) |
Portfolios | A collection of student work such as written assignments, personal reflection, and self assessments. Developmental portfolios typically include work completed early, middle, and late in the students’ academic career so growth can be noted. Showcase portfolios include students’ best work and aim to show the students’ highest achievement level. |
Pre- post-tests | When used for program assessment, students take the pre-test as part of a required, introductory course. They take the post-test during their senior year, often in a required course or capstone course. Example: Students in Speech 151 and Speech 251 take a multiple-choice test. The semester that Speech majors and minors graduate, they make an appointment to take the same test. |
Employer’s or internship supervisor’s direct evaluations of students’ performances | Evaluation or rating of student performance in a work, internship, or service-learning experience by a qualified professional. |
Observation of student performing a task | A professor or an external observer rates each student’s classroom discussion participation using an observation checklist. |
Culminating project: capstone projects, senior theses, senior exhibits, senior dance performance | Students produce one or more pieces of work that showcase their cumulative experiences in a program. The work is evaluated by a pair of faculty members, a faculty team, or a team composed of faculty and community members. |
Student publications or conference presentations | Students present their research to an audience outside their program. Faculty and/or external reviewers evaluate student performance. |
Student description or list of what was learned | Students are asked to describe or list what they have learned. The descriptions are evaluated by faculty in the program and compared to the intended student learning outcomes. Example: After completing a service-learning project, students are asked to describe the three most important things they learned through their participation in the project. Faculty members evaluate the descriptions in terms of how well the service-learning project contributed to the program outcomes. |
Types of indirect data/evidence of student learning
Indirect data/evidence | Examples |
---|---|
Student surveys | Students self-report via a questionnaire (online, telephone, or paper) about their ability, attitudes, and/or satisfaction. E.g., Students answer questions about their information literacy competence via an online questionnaire. |
End-of-course evaluations (e.g., eCAFE) or mid-semester course evaluations | Students report their perceptions about the quality of a course, its instructor, and the classroom environment. |
Alumni surveys | Alumni report their perceptions via a questionnaire (online, telephone, or paper). E.g., alumni answer questions during a telephone survey about the importance of particular program learning outcomes and whether they are pertinent to their current career or personal life. |
Employer surveys | Potential employers complete a survey in which they indicate the job skills they perceive are important for college graduates. Note: if the survey asks employers to directly evaluate the skills, knowledge, and values of new employees who graduated from Mānoa, the survey can be considered a direct method of evaluating students. |
Interviews | Face-to-face, one-to-one discussions or question/answer session. E.g., A trained peer interviews seniors in a program to find out what courses and assignments they valued the most (and why). |
Focus group interviews | Face-to-face, one-to-many discussions or question/answer session. E.g., A graduate student led a focus group of 4-5 undergraduate students who were enrolled in Foundations Symbolic Reasoning courses (e.g., Math 100). The graduate student asked the undergraduates to discuss their experiences in the course, including difficulties and successes. |
Percent of time or number of hours/minutes spent on various educational experiences in and out of class | Students’ self reports or observations made by trained observers on time spent on, for example: a) co-curricular activities b) homework c) classroom active learning activities versus classroom lecture d) intellectual activities related to a student learning outcome e) cultural activities related to a student learning outcome |
Grades given by professors that are not based on explicit criteria directly related to a learning outcome | Grade point averages or grades of students in a program. E.g., 52% of the students in Foundations Written Communication courses received an “A,” “A+” or “A-” grade. |
Job placement data | The percent of students who found employment in a field related to the major/program within one year. |
Enrollment in higher degree programs | The number or percent of students who pursued a higher degree in the field. |
Maps or inventories of practice | A map or matrix of the required curriculum and instructional practices/signature assignments. |
Transcript analysis or course-taking patterns | The actual sequence of courses (instead of the program’s desired course sequence for students). |
Institutional/ Program Research data | Information such as the following: a) Registration or course enrollment data b) Class size data c) Graduation rates d) Retention rates e) Grade point averages Specific examples: a) Number of sections with wait-listed students during registration. b) Percent of seats filled by majors. c) Number of students who dropped a course after first day of classes. d) Average enrollment in Writing Intensive sections by course level. |
2. Benefits and drawbacks of data-collection methods
When selecting the best method(s) to answer your assessment question, take the benefits and drawbacks into consideration. Think about the following:
- the method’s consequences (intended and unintended)
- whether the method will be seen as credible by the faculty and intended users of the results
- whether faculty and users will be willing to make program changes based on the evidence the method provides
- whether the method is in line with equity considerations (next section)
- whether the method will benefit students (e.g., the method mirrors a task done by professionals or community members)
Some methods have beneficial consequences unrelated to the results of the evaluation. For example:
- Portfolios: Keeping a portfolio can lead students to become more reflective and increase their motivation to learn.
- Embedded assignments: When faculty members collaborate to create scoring rubrics and reach consensus on what is acceptable and exemplary student work, students receive more consistent grading and feedback from professors in the program.
- Authentic assessment: A method that asks students to perform tasks mirroring what professionals, community members, and others do. When students complete tasks similar to those done outside the classroom setting, they benefit by gaining that experience and having a product they can show to potential employers, their community, and their family. [“Authentic test” is a phrase coined by Grant Wiggins (1989) in “A True Test: Toward More Authentic and Equitable Assessment,” The Phi Delta Kappan, 70(9), pp. 703-713.]
Direct data/evidence: Benefits and drawbacks
Direct data/evidence | Benefits | Drawbacks |
---|---|---|
Licensure or certification exam | a) National comparisons can be made. b) Reliability and validity are monitored by the test developers. c) An external organization handles test administration and evaluation. | a) Faculty may be unwilling to make changes to their curriculum if students score low (reluctant to “teach to the test”). b) Test may not be aligned with the program’s intended curriculum and outcomes. c) Information from test results is too broad to be used for decision making. |
National exam or standardized test | National comparisons can be made. Reliability and validity are monitored by the test developers. An external organization may handle test administration and evaluation. | Students may not take exam seriously. Faculty may be unwilling to make changes to their curriculum if students score low (reluctant to “teach to the test”). Test may not be aligned with the program’s intended curriculum and outcomes. Information from test results is too broad to be used for decision making. Can be expensive. The external organization may not handle administration and evaluation. |
Local exam (external to courses) | Faculty typically more willing to make changes to curriculum because local exam is tailored to the curriculum and intended outcomes. | Students may not take exam seriously. They are not motivated to do their best. Campus or program is responsible for test reliability, validity, and evaluation. |
Embedded testing or quiz | Students motivated to do well because test/quiz is part of their course grade. Evidence of learning is generated as part of normal workload. | Faculty members may feel that they are being overseen by others, even if they are not. |
Embedded assignment | Students motivated to do well because assignment is part of their course grade. Faculty members more likely to use results because they are active participants in the assessment process. Online submission and review of materials possible. Data collection is unobtrusive to students. | Faculty members may feel that they are being overseen by others, even if they are not. Faculty time required to develop and coordinate, to create a rubric to evaluate the assignment, and to actually score the assignment. |
Grades calibrated to explicit student learning outcome(s) | Students motivated to do well because test/quiz/assignment is part of their course grade. Faculty members more likely to use results because they are active participants in the assessment process. Online submission and review of materials possible. | Faculty time required to develop and coordinate and to agree on grading standards. |
Portfolio | Provides a comprehensive, holistic view of student achievement and/or development over time. Students can see growth as they collect and reflect on the products in the portfolio. Students can draw from the portfolio when applying for graduate school or employment. Online submission and review of materials possible. | Amount of resources needed: costly and time consuming for both students and faculty. Students may not take the process seriously (collection, reflection, etc.) Accommodations need to be made for transfer students (when longitudinal or developmental portfolios are used). |
Pre- post-test | Provides “value-added” or growth information. | Increased workload to evaluate students more than once. Designing pre- and post-tests that are truly comparable at different points in time is difficult. Statistician may be needed to properly analyze results. |
Employer’s or internship supervisor’s direct evaluations of students’ performances | Evaluation by a career professional is often highly valued by students. Faculty members learn what is expected by community members outside Mānoa. | Lack of standardization across evaluations may make summarization of the results difficult. |
Observation of student performing a task | Captures data that is difficult to obtain through written texts or other methods. | If someone other than the course instructor performs the evaluation, a trained external observer is recommended, which may cost money and/or require faculty members’ willingness to observe colleagues’ courses and have their own classes observed. Some may believe observation is subjective and therefore the conclusions are only suggestive. |
Culminating project: capstone projects, senior theses, senior exhibits, senior dance performance | Provides a sophisticated, multi-level view of student achievement. Students have the opportunity to integrate their learning. | Creating an effective, comprehensive culminating experience can be challenging. Faculty time required to develop evaluation methods (multiple rubrics may be needed). |
Student publications or conference presentations | Gives students an opportunity to practice being a professional and receive feedback from career professionals or community members. | Time needed to coordinate the event or publication if the unit is the organizer. |
Student description or list of what was learned | Careful attention to developing the prompts that students respond to. | Some may view the student description as not trustworthy. Students with better writing and rhetorical skills may unfairly receive higher scores. |
Benefits and drawbacks: Indirect data/evidence
Indirect data/evidence | Benefits | Drawbacks |
---|---|---|
Student Surveys | Can administer to large groups for a relatively low cost. Analysis of responses typically quick and straightforward. Reliable commercial surveys are available for purchase. | Low response rates are typical. With self-efficacy reports, students’ perception may be different from their actual abilities. Designing reliable, valid questions can be difficult. Caution is needed when trying to link survey results and achievement of learning outcomes. |
End-of-course evaluations (e.g., eCAFE) or mid-semester course evaluations | Analysis of responses typically quick and straightforward. eCAFE allows both common questions across all courses as well as choice of questions. | Difficult to summarize the eCAFE results across courses. Results are the property of individual faculty members. |
Alumni surveys | Can administer to large groups for a relatively low cost. Analysis of responses typically quick and straightforward. | Low response rates are typical. If no up-to-date mailing list, alumni can be difficult to locate. Designing reliable, valid questions can be difficult. |
Employer surveys | Can administer to large groups for a relatively low cost. Analysis of responses typically quick and straightforward. Provides a real-world perspective. | Low response rates are typical. May have a very limited number of employers to seek information from. |
Interviews | Provides rich, in-depth information and allows for tailored follow-up questions. “Stories” and voices can be powerful evidence for some groups of intended users. | Trained interviewers needed. Transcribing, analyzing, and reporting are time consuming. |
Focus group interviews | Provides rich, in-depth information and allows for tailored follow-up questions. The group dynamic may spark more information–groups can become more than the sum of their parts. “Stories” and voices can be powerful evidence for some groups of intended users. | Trained facilitators needed. Transcribing, analyzing, and reporting are time consuming. |
Percent of time or number of hours/minutes spent on various activities related to a student learning outcome | Information about co-curricular activities and student habits can help programs make sense of results and/or guide them in making decisions about program improvement. | Retrospective self reports may not be accurate. |
Grades given by professors that are not based on explicit criteria directly related to a learning outcome | Data relatively easy to collect. | Impossible or nearly impossible to reach conclusions about the levels of student learning on an outcome. |
Job placement data | Satisfies some accreditation agencies’ reporting requirements. Can help faculty revise the curriculum so it better prepares students for particular jobs. | Tracking alumni may be difficult. |
Enrollment in higher degree programs | Satisfies some accreditation agencies’ reporting requirements. Can help faculty revise curriculum requirements so students are better prepared for post-degree activities. | Tracking alumni may be difficult. |
Transcript analysis or course-taking patterns | Unobtrusive method. Student demographics and other information can be linked to their course-taking patterns. | Conclusions need to be tempered because other variables do not appear on transcripts (e.g., personal situations, course availability). |
Institutional research data | Can be effective when linked to other performance measures and the results of the assessment of student learning (using a direct method). | By itself, does not reveal what or how well students learned. |
3. Equity-minded considerations for data/evidence collection
There are a wide range of equity considerations in regard to the learning data/evidence collected for program-level learning assessment — from the design of the method (e.g., assignment guidelines; test questions) to the impact on students and faculty. The literature on equity-minded assessment is vast. Below we briefly discuss some considerations. Please contact us for additional guidance.
Remove bias in the design/task
Bias is introduced when a data collection method inadvertently advantages or disadvantages a group of students because of characteristics irrelevant to the learning outcome, thus threatening the fairness of the method. For example, characteristics such as age, disability status, or economic status are irrelevant to learning outcomes such as “apply the scientific method” or “conduct a literary analysis” and thus the method should be designed so that students of all ages, disability status, and economic status have equal opportunity to succeed.
A fair method allows students to produce work or answers that reflect their knowledge and skill as accurately as possible.
Faculty need to be aware of (unintentional) bias. We recommend answering questions such as these:
- What is being done to ensure all students have unobstructed opportunity to demonstrate their proficiency? For example, do the following exist or occur?
- accommodations for students with visual impairments or for students in other disability communities
- removal of references that are idiomatic, cultural, or specific to a particular group of students, provided that these references are irrelevant to the learning outcome:
- e.g., “across the board”; “Achilles’ heel”; “turn the other cheek”
- age-, gender-, race-, and economic-based references that students from a particular group would not know or would consider offensive (e.g., out-of-date terminology)
- examination of the language for inclusion and for supportive identity orientation
- For example, see the American Psychological Association’s Inclusive Language Guide
- application of elements from the Universal Design for Learning (UDL) Framework: in particular, the Representation and the Action & Expression guidelines to both pedagogy and the method to collect learning evidence/data
- Are the assignments/tests/methods reflective of the students in the program? To what extent can students see themselves?
- Have the program faculty members checked for dominant epistemologies and beliefs–and acted to bring in other appropriate ones?
- Have the program faculty members included experiences and values of all students and their communities (especially marginalized populations)?
- What steps are used to solicit stakeholder input to identify potential issues? E.g., ask students for feedback on the assignment/test before and after they complete it; conduct an item analysis on test items.
Flexibility in how students demonstrate their learning: student choice and options
In higher education, the tendency is to ask students to demonstrate their learning via a written, academic-styled product. Giving students a choice of format, or using other (non-traditional) ways for students to demonstrate their learning, is an equity move. When the program SLO does not center on a particular way to demonstrate learning (e.g., write a comprehensive financial report), the program faculty can
- consider other ways for students to demonstrate their learning: artistic such as poem, song; storytelling circle; video blog; narrative; photos; a game; spoken word
- allow students to choose from several ways (e.g., produce a video or write a report)
- ask students to demonstrate their learning on each program SLO using 2-3 different ways (e.g., create a storyboard, give an oral presentation, write a report). Multiple ways will help faculty form a more comprehensive understanding of student performance.
Recommended resource: Universal Design for Learning Guidelines section on Action and Expression.
Use learning evidence that emerges from culturally-relevant, culture-based, or place-based education
Research (see below) indicates that students thrive when instructors use culturally-responsive, culture-based and/or place-based education practices, especially Indigenous students and students of color. Degree programs that use an embedded assessment approach (i.e., they use course assignments as the program’s learning evidence) are encouraged to implement culturally-relevant/culture-based/place-based education in the curriculum.
Culturally-relevant/culture-based/place-based education characteristics (adapted from Ladson-Billings, 2021; Kana’iaupuni, Ledward, & Malone, 2017; Yemini, Engel, & Ben Simon, 2023):
Program-level:
Equity-minded Practice | Example Program | Example Description |
---|---|---|
Instructors embed multicultural information, resources, and materials | Social Work | Social Work program faculty reviewed the reading lists in required courses to determine the prevalence of authors representing different groups (disability, ethnicity, gender, geographic) and took steps to diversify the groups represented. |
Instructors and students acknowledge the legitimacy of different cultures, lived experiences, and ways of knowing | Marine Biology | Graduate faculty use the Kūlana Noi‘i (research standards) to introduce students to a framework for conducting research in communities that is grounded in Native Hawaiian values. |
Assignment-level:
Equity-minded Practice | Example Assignment | Assignment Description |
---|---|---|
Place-based Teaching: Course design and/or assignment design prioritize experiential, community-based, and contextual/ecological learning with the goal of fostering students’ connection to local contexts, cultures, and environments | Program: Mathematics Course: Mathematics 100 Assignment: Restoration of Kahoʻolawe By Dr. M. Chyba, Mathematics, University of Hawai‘i at Mānoa | The goal of the assignment is to determine how to cover over 3500 acres of Kahoʻolawe island with vegetation. Students are asked to answer discussion questions such as: 1. How much water is needed to sustain certain native plants requested to plant on Kahoʻolawe based on their youngest mature size and recommended watering? 2. How can we utilize the tanks to overcome months of drought? |
Assignments utilize students’ culture as a vehicle for learning | Program: Sociology Course: Sociology of Food (100-level) Assignment: Tell me what you eat, and I will tell you who you are: Past, present, and future by Professor E. L. Brekke, Leeward Community College | Students conduct interviews to locate the meaning of food, family, and culture and how the food system has changed. As a collective, the students identify shared values and their unique connection to their culture and as islanders. These values and traditions—connected to people and place—become the guiding values as they analyze the current food system and create solutions for the future. |
Assignments utilize students’ culture as a vehicle for learning | Program: Teacher Education Course: Secondary Mathematics. Assignment: Community Asset Mapping and Inventory By Dr. C. Mangram, University of Hawai‘i at Mānoa | Students: (A) draw or download an asset map for their (1) classroom, (2) school, (3) community (neighborhood) the school serves, and (4) ahupua‘a [land division]; (B) select three of the assets and reflect on how they might leverage each asset in their design of secondary mathematics learning experiences. |
Assignments guide students in engaging the world and others in a critical manner (e.g., “prepare students to effect change in society, not merely fit into it” (Ladson-Billings, 2021, p. 64) | Program: Mathematics Course: Mathematics 100 Assignment: Mercury Levels of Fish in Hawai‘i by Dr. S. Post, Mathematics, University of Hawai‘i at Mānoa | Compare mercury levels found in different fish common in Hawai‘i with those of fish from the Minamata disaster (a known case of mercury poisoning). Using FDA guidelines as a reference, give a recommendation of the kind and amount of fish that should be consumed per week, supported by quantitative data. Additional details are in the 2018 quantitative reasoning workshop materials. |
Clear, explicit alignment among the targeted learning outcome(s), the task, and the evaluation criteria
Help students know which parts of the course (readings, lectures, assignments) will help them prepare to be successful on the assignment/test/evaluation method. While the alignment may be obvious to the instructor, some students might not make the connection between course elements. Clear, explicit alignment is an equity move: it helps all students connect the dots and can lead to students becoming effective self-directed learners.
Tip: Use the same words and phrases (not synonyms) in the learning outcome, the task, and the evaluation criteria.
Example:
- Program SLO: Work individually and in teams and demonstrate respect for diversity of viewpoints
- Course SLO
- Not well aligned: Participate on a team
- Better aligned: Apply teamwork skills (collaborative goal setting, accountability, listening, respectful communication) to complete a team project
- Task (excerpt)
- Not well aligned: In your group, conduct a case analysis.
- Better aligned: As a team, collaborate to conduct a case analysis. Create and periodically update the team’s time management and accountability plan. Pay special attention to demonstrating the list of effective teamwork skills while working on the case analysis project.
- Evaluation criteria (excerpt from high score criteria)–aligned: Collaborates on the time management and accountability plan. Completes all assigned tasks by deadline; work accomplished is thorough, comprehensive, and advances the project. Proactively helps other team members complete their assigned tasks to a similar level of excellence. Listens to and engages with team members in ways that facilitate their contributions to meetings by constructively building upon or synthesizing the contributions of others. Invites other team members to engage. Treats team members respectfully by being polite and constructive in communication. Uses positive vocal or written tone, facial expressions, and/or body language to convey a positive attitude about the team and its work. Communicates constructively to address conflict, helps to manage/resolve it in a way that strengthens overall team cohesiveness and future effectiveness. [Adapted from the AAC&U VALUE Rubric on Teamwork]
Understandable evaluation criteria and clear documentation of what constitutes a successful demonstration of learning (provided to students before they start)
Faculty might assume students enter their classrooms already knowing the common genres of the discipline (literary analysis in literature; experimental lab report in science; literature review in social sciences) or that students know what critical thinking is in their discipline (it varies by field; for example, evaluating the quality of evidence in a psychology course differs from evaluating it in a literature course).
Many students, particularly undergraduates, do not yet know the genres or ways of thinking in the field. Thus, faculty practice equity-minded teaching and assessment by providing students with
- Evaluation criteria that students understand–discuss the criteria with students to confirm their understanding
- Documentation and explanation of what a successful demonstration of learning looks like–e.g., provide annotated samples
Involvement of students
Meaningful student involvement is an effective way to accomplish the recommendations in this section. Regularly seeking student feedback and suggestions is crucial to equity-minded assessment. As Montenegro and Jankowski (2020) state, “Listening to the voices of those historically silenced is an essential element of equity-minded assessment” (p. 10, A new decade for assessment: Embedding equity into assessment praxis, Occasional Paper #42, National Institute for Learning Outcomes Assessment).
Options to consider:
- Students participate in the design of the method(s)
- Students critique the method and make suggestions (in class, in focus groups, or via survey)
- Students evaluate the method from different perspectives (in class or in focus groups)
- Students give feedback on the clarity of the assignment/test, the evaluation criteria, and the samples provided (in class, in focus groups, or via survey)
- Students give feedback on the extent to which they see themselves in the course and the assignments
Additional resources on equity, teaching, learning, and assessing
- Linking assessment practices to indigenous ways of knowing [PDF] by S. Williams & F. Perrone (2018, April). Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA).
- Stop Talking: Indigenous Ways of Teaching and Learning and Difficult Dialogues in Higher Education [PDF] (2013) by Ilarion (Larry) Merculieff and Libby Roderick. University of Alaska Anchorage.
- Transparency in Learning and Teaching: TILT Higher Education Resources and Examples for Teachers
- Universal Design for Learning Guidelines
4. Evaluate your choice of data-collection method
After selecting the type(s) of evidence/data to collect, use this checklist to help confirm your decision. A well-chosen method:
- is viewed as beneficial by students and faculty.
- has addressed equity considerations.
- provides specific answers to the assessment question being investigated.
- is feasible to carry out given program resources and amount of time faculty members are willing to invest in assessment activities; takes advantage of existing products (e.g., exams or surveys the faculty/program already use) whenever possible.
- maximizes positive effects and minimizes negative ones. The method should give faculty members, intended users, and students the right messages about what is important to learn and teach.
- provides information that is credible and actionable. Faculty members or other intended users will be willing to discuss and make changes to the program (as needed) based on the results.
5. Additional resources & sources consulted
Additional resources on choosing a method & collecting data/information:
- Evidence of Student Learning: Direct and Indirect (7-minute video)
- Collect Evidence of Student Learning Using a Signature Assignment (2015 workshop)
- Take the Next Step in Program Learning Assessment: Collect & Review Evidence of Learning (2014 workshop)
- Collecting Evidence of Student Learning (2009 and 2010 workshops)
- Efficient Program Assessment (2010 workshop)
- Examples of Program-level Assessment of Student Learning (2009 workshop)
Outcome-specific resources:
- Critical thinking
- Critical Thinking: Teaching, Learning, and Assessing (2018 workshop)
- Civic Engagement
- Civic Learning Assessment Guide for Course Instructors (n.d., handout)
- Civic Engagement Assignment Design (2022 workshop)
- Oral Communication
- Oral Communication & Program-level Assessment (2012 workshop)
- Assignment Design for Powerful Learning in Oral Communication (2016 workshop)
- Oral Communication Resources for Instruction and Assessment (n.d., handout)
- Quantitative Reasoning
Equity-minded Considerations
- Henning, G. W., Baker, G. R., Jankowski, N. A., Lundquist, A. E., & Montenegro, E. (2022). Reframing Assessment to Center Equity: Theories, Models, and Practices. Stylus. E-book available at Hamilton Library (login required).
- See also chapter 10, “Centering ‘Āina in Assessment: Striving for Equity and Social Justice”
- Kana’iaupuni, S. M., Ledward, B., & Malone, N. (2017). Mohala i ka wai: Cultural Advantage as a Framework for Indigenous Culture-Based Education and Student Outcomes. American Educational Research Journal, 54(1_suppl), 311S–339S. https://doi.org/10.3102/0002831216664779. The article is also available at Hamilton Library (login required).
- Ladson-Billings, G. (2021). Culturally Relevant Pedagogy: Asking a Different Question. Teachers College Press. E-book available at Hamilton Library (login required).
- Ladson-Billings, G. (2014). Culturally Relevant Pedagogy 2.0: a.k.a. the Remix. Harvard Educational Review, 84(1), 74–84. https://doi.org/10.17763/haer.84.1.p2rj131485484751. This article is reprinted in her 2021 book (above).
- Ladson-Billings, G. (1995). Toward a theory of culturally relevant pedagogy. American Educational Research Journal, 32(3), 465–491. https://doi.org/10.3102/00028312032003465. The article is also available at Hamilton Library (login required).
- Yemini, M., Engel, L., & Ben Simon, A. (2023). Place-based education: A systematic review of literature. Educational Review, advance online publication, 1–21. https://doi.org/10.1080/00131911.2023.2177260. (open access)
Contributors: Monica Stitt-Bergh, Ph.D., TJ Buckley.