ARCHIVED – Choose a Method to Collect Data/Evidence

Archived Date: 26 February 2024

Part 1. Data-collection Methodology: Direct and Indirect
Part 2. Benefits and Drawbacks of Data-collection Methods
Part 3. Evaluate Your Choice of Method

See also: Workshops and Events

Part 1. Data-collection Methodology: Direct and Indirect

After faculty members agree on the program mission/goals, student learning objectives/outcomes, and a meaningful assessment question(s), they choose a data-collection methodology that will help them answer the question and provide information that can aid in decision making.

Data-collection methods for assessment purposes typically fall into two categories: direct and indirect. Direct evidence of student learning comes in the form of a student product or performance that can be evaluated. Indirect evidence is the perception, opinion, or attitude of students (or others). Both are important, but indirect evidence by itself is insufficient; direct evidence is required. Ideally, a program collects both types.

Why is direct evidence of student learning required? Here’s an example. If students self-report on a survey (indirect evidence of learning) that their knowledge of world geography is excellent but later fail a multiple-choice world geography test (direct evidence), that’s useful information: the indirect evidence by itself is not as meaningful without the direct evidence of students’ knowledge. Direct evidence, by itself, can reveal what students have learned and to what degree, but it does not explain why students learned or did not learn. The why is valuable because it can guide faculty members in interpreting results and making improvements, and indirect evidence can be used to answer why questions. Programs can collect both direct and indirect evidence of student learning to gain a better picture of their students.

Tips

  • Choose methods that will
    • answer specific assessment questions,
    • be seen as credible to the faculty and the intended users of the results, and
    • provide useful information. Quantity is not the goal.
  • Use more than one method whenever possible, especially when answering questions about highly valued learning outcomes.
  • Use or modify existing evidence whenever possible. Inventory what evidence of student learning and perceptions about the program already exist. The curriculum map is a useful tool when conducting the inventory.
  • Choose methods that are feasible given your program’s resources, money, and the amount of time faculty are willing to devote to assessment activities.
    • Feasibility tips:
      • For programs with 40 or more graduates each year, we suggest a random sample of at least 40 students. For programs with fewer than 40 graduates each year, plan on collecting evidence from 100% of the graduating students.
      • If the evidence is an undergraduate student project such as a research paper and faculty other than the course instructor will evaluate the student work, keep this in mind: in our experience, it takes a faculty member an average of 15 minutes to apply a rubric to score each research paper or other significant written project. So if the program can recruit 6 faculty members to spend 90 minutes each evaluating student work, that’s (a sample of) 36 students if each paper is evaluated by only one faculty member (see the sketch after this list).
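As an illustration of the feasibility arithmetic above, here is a minimal Python sketch that applies the sampling guideline (a random sample of at least 40 students, or all graduates when there are fewer than 40) and estimates whether the recruited scorers have enough time. The roster, the scorer counts, and the function names are hypothetical placeholders, not part of any campus system.

import random

MINUTES_PER_PAPER = 15  # average time to apply a rubric to one paper (assumption from the tip above)

def choose_sample(graduates, minimum_sample=40):
    """Pick the students whose work will be scored: a random sample of at least 40,
    or every graduating student when the program has fewer than 40 graduates."""
    if len(graduates) < minimum_sample:
        return list(graduates)
    return random.sample(graduates, minimum_sample)

def scoring_plan(sample_size, faculty_scorers, minutes_per_scorer=90):
    """Estimate how many papers the recruited scorers can cover in the time they offered."""
    papers_covered = (faculty_scorers * minutes_per_scorer) // MINUTES_PER_PAPER
    return {"papers_to_score": sample_size,
            "papers_scorers_can_cover": papers_covered,
            "enough_time": papers_covered >= sample_size}

# Hypothetical example: 120 graduating seniors, 6 faculty volunteers, 90 minutes each.
roster = ["student_%d" % i for i in range(120)]
sample = choose_sample(roster)
print(scoring_plan(len(sample), faculty_scorers=6))
# 6 scorers x 90 minutes = 540 minutes, or 36 papers at 15 minutes apiece,
# so a 40-student sample needs more scorers, more time per scorer, or a smaller sample.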

Types of Direct Data-collection Methods

DIRECT METHODS and Examples

Licensure or certification: Nursing program students’ pass rates on the NCLEX (Nursing) examination.
National exams or standardized tests:
a) Freshmen and seniors’ scores on the Collegiate Learning Assessment (CLA) or Collegiate Assessment of Academic Proficiency (CAAP).
b) Senior-level biology students’ scores on the GRE Subject Test in Biology.
Local exams (external to courses): Entering students’ scores on the Mānoa Writing Placement Exam. Note: these local exams are not course exams (see “embedded testing or quizzes” for course exams/quizzes).
Embedded testing or quizzes:
a) Students’ pass rates on the German 202 final exam (students in all sections of German 202 take the same final exam).
b) Two questions from the History 151 final exam are scored by a team of faculty members, and the results are used for program-level assessment.
Embedded assignments: The program selects course assignments (“signature assignments”) that can provide information on a student learning outcome. Students complete these assignments as a regular part of the course, and instructors grade the assignments for the course grade. In addition, the assignments are scored using criteria or a scoring rubric, and these scores are used for program-level assessment (a simple rater-agreement check is sketched at the end of this list of direct methods).

Examples:
a) Course instructor and an outside faculty member apply a rubric to evaluate case studies written by students in BUS 301.
b) A team of faculty members apply a rubric to evaluate videos of students’ oral presentations given in Oral Communication Focus courses.

Grades calibrated to clear student learning outcome(s): Professors give grades based on explicit criteria that are directly related to particular learning outcomes. (See also “embedded testing or quizzes” and “embedded assignments.”)
Portfolios: A collection of student work such as written assignments, personal reflections, and self-assessments. Developmental portfolios typically include work completed early, in the middle of, and late in the student’s academic career so growth can be noted. Showcase portfolios include students’ best work and aim to show the students’ highest achievement level.
Pre-/post-tests: When used for program assessment, students take the pre-test as part of a required introductory course. They take the post-test during their senior year, often in a required course or capstone course.

Example: Students in Speech 151 and Speech 251 take a multiple-choice test. The semester that Speech majors and minors graduate, they make an appointment to take the same test.

Employer’s or internship supervisor’s direct evaluations of students’ performances: Evaluation or rating of student performance in a work, internship, or service-learning experience by a qualified professional.
Observation of student performing a task: The professor or an external observer rates each student’s classroom discussion participation using an observation checklist.
Culminating project (capstone projects, senior theses, senior exhibits, senior dance performances): Students produce a piece of work, or several pieces, that showcases their cumulative experiences in a program. The work is evaluated by a pair of faculty members, a faculty team, or a team made up of faculty and community members.
Student publications or conference presentations: Students present their research to an audience outside their program. Faculty and/or external reviewers evaluate student performance.
Description or list of what students learned: Students are asked to describe or list what they have learned. The descriptions are evaluated by faculty in the program and compared to the intended student learning outcomes.

Example: After completing a service-learning project, students are asked to describe the three most important things they learned through their participation in the project. Faculty members evaluate the descriptions in terms of how well the service-learning project contributed to the program outcomes.
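Several of the direct methods above (embedded assignments, culminating projects, videos of oral presentations) involve two or more faculty members applying the same rubric to the same student work. As a hedged illustration, the short Python sketch below shows one simple way a program might check how consistently two raters score; the 4-point scale, the score lists, and the function names are hypothetical examples, not a prescribed campus procedure.

def percent_exact_agreement(rater_a, rater_b):
    """Share of papers on which two raters gave the identical rubric score."""
    if len(rater_a) != len(rater_b):
        raise ValueError("Both raters must score the same set of papers.")
    matches = sum(1 for a, b in zip(rater_a, rater_b) if a == b)
    return matches / len(rater_a)

def mean_score(scores):
    """Average rubric score across papers (here, a hypothetical 1-4 scale)."""
    return sum(scores) / len(scores)

# Hypothetical scores from a course instructor and an outside faculty member
# applying the same 4-point rubric to ten case studies.
instructor_scores = [3, 4, 2, 3, 3, 4, 1, 2, 3, 4]
outside_scores    = [3, 3, 2, 3, 4, 4, 1, 2, 3, 4]

print("Exact agreement: {:.0%}".format(percent_exact_agreement(instructor_scores, outside_scores)))
print("Instructor mean: {:.2f}".format(mean_score(instructor_scores)))
print("Outside reviewer mean: {:.2f}".format(mean_score(outside_scores)))

A low agreement rate would suggest the raters need to norm (calibrate) on sample papers before scoring the full set, which is the same consensus-building benefit noted for embedded assignments in Part 2.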

Types of Indirect Data-collection Methods

INDIRECT METHODS and Examples

Student surveys: Students self-report via a questionnaire (online, telephone, or paper) about their ability, attitudes, and/or satisfaction. E.g., students answer questions about their information literacy competence via an online questionnaire.
End-of-course evaluations (e.g., CAFE) or mid-semester course evaluations: Students report their perceptions about the quality of a course, its instructor, and the classroom environment.
Alumni surveys: Alumni report their perceptions via a questionnaire (online, telephone, or paper). E.g., alumni answer questions during a telephone survey about the importance of particular program learning outcomes and whether they are pertinent to their current career or personal life.
Employer surveys: Potential employers complete a survey in which they indicate the job skills they perceive are important for college graduates.

Note: if the survey asks employers to directly evaluate the skills, knowledge, and values of new employees who graduated from Mānoa, the survey can be considered a direct method of evaluating students.

Interviews: Face-to-face, one-to-one discussions or question/answer sessions. E.g., a trained peer interviews seniors in a program to find out what courses and assignments they valued the most (and why).
Focus group interviews: Face-to-face, one-to-many discussions or question/answer sessions. E.g., a graduate student led a focus group of 4-5 undergraduate students who were enrolled in Foundations Symbolic Reasoning courses (e.g., Math 100). The graduate student asked the undergraduates to discuss their experiences in the course, including difficulties and successes.
Percent of time or number of hours/minutes spent on various educational experiences in and out of class: Students’ self-reports or observations made by trained observers on time spent on, for example:
  • co-curricular activities
  • homework
  • classroom active learning activities versus classroom lectures
  • intellectual activities related to a student learning outcome
  • cultural activities related to a student learning outcome
Grades given by professors that are not based on explicit criteria directly related to a learning outcome: Grade point averages or grades of students in a program.

E.g., 52% of the students in Foundations Written Communication courses received an “A,” “A+” or “A-” grade.

Job placement data: The percent of students who found employment in a field related to the major/program within one year.
Enrollment in higher degree programs: The number or percent of students who pursued a higher degree in the field.
Maps or inventories of practice: A map or matrix of the required curriculum and instructional practices/signature assignments.
Transcript analysis or course-taking patterns: The actual sequence of courses (instead of the program’s desired course sequence for students).
Institutional/Program Research data: Information such as the following:
  • Registration or course enrollment data
  • Class size data
  • Graduation rates
  • Retention rates
  • Grade point averages

Specific examples:
a) Number of sections with wait-listed students during registration.
b) Percent of seats filled by majors.
c) Number of students who dropped a course after first day of classes.
d) Average enrollment in Writing Intensive sections by course level.

Part 2. Benefits and Drawbacks of Data-collection Methods

When selecting the best method(s) to answer your assessment question, take the benefits and drawbacks into consideration. More importantly, think about the following:
  1. the method’s consequences (intended and unintended),
  2. whether the method will be seen as credible by the faculty and intended users of the results, and
  3. whether faculty and users will be willing to make program changes based on the evidence the method provides.

Some methods have beneficial consequences unrelated to the results of the evaluation. For example:

  • Portfolios: Keeping a portfolio can lead students to become more reflective and increase their motivation to learn.
  • Embedded assignments: When faculty members collaborate to create scoring rubrics and reach consensus on what is acceptable and exemplary student work, students receive more consistent grading and feedback from professors in the program.
DIRECT METHODS: Benefits and Drawbacks

Licensure or certification
  Benefits:
  • National comparisons can be made.
  • Reliability and validity are monitored by the test developers.
  • An external organization handles test administration and evaluation.
  Drawbacks:
  • Faculty may be unwilling to make changes to their curriculum if students score low (reluctant to “teach to the test”).
  • Test may not be aligned with the program’s intended curriculum and outcomes.
  • Information from test results is too broad to be used for decision making.
National exam or standardized test
  Benefits:
  • National comparisons can be made.
  • Reliability and validity are monitored by the test developers.
  • An external organization may handle test administration and evaluation.
  Drawbacks:
  • Students may not take exam seriously.
  • Faculty may be unwilling to make changes to their curriculum if students score low (reluctant to “teach to the test”).
  • Test may not be aligned with the program’s intended curriculum and outcomes.
  • Information from test results is too broad to be used for decision making.
  • Can be expensive.
  • The external organization may not handle administration and evaluation.
Local exam (external to courses)
  Benefits:
  • Faculty typically more willing to make changes to curriculum because local exam is tailored to the curriculum and intended outcomes.
  Drawbacks:
  • Students may not take exam seriously. They are not motivated to do their best.
  • Campus or program is responsible for test reliability, validity, and evaluation.
Embedded testing or quiz
  Benefits:
  • Students motivated to do well because test/quiz is part of their course grade.
  • Evidence of learning is generated as part of normal workload.
  Drawbacks:
  • Faculty members may feel that they are being overseen by others, even if they are not.
Embedded assignment
  Benefits:
  • Students motivated to do well because assignment is part of their course grade.
  • Faculty members more likely to use results because they are active participants in the assessment process.
  • Online submission and review of materials possible.
  • Data collection is unobtrusive to students.
  Drawbacks:
  • Faculty members may feel that they are being overseen by others, even if they are not.
  • Faculty time required to develop and coordinate, to create a rubric to evaluate the assignment, and to actually score the assignment.
Grades calibrated to explicit student learning outcome(s)
  Benefits:
  • Students motivated to do well because test/quiz/assignment is part of their course grade.
  • Faculty members more likely to use results because they are active participants in the assessment process.
  • Online submission and review of materials possible.
  Drawbacks:
  • Faculty time required to develop and coordinate and to agree on grading standards.
Portfolio
  Benefits:
  • Provides a comprehensive, holistic view of student achievement and/or development over time.
  • Students can see growth as they collect and reflect on the products in the portfolio.
  • Students can draw from the portfolio when applying for graduate school or employment.
  • Online submission and review of materials possible.
  Drawbacks:
  • Amount of resources needed: costly and time consuming for both students and faculty.
  • Students may not take the process seriously (collection, reflection, etc.).
  • Accommodations need to be made for transfer students (when longitudinal or developmental portfolios are used).
Pre-/post-test
  Benefits:
  • Provides “value-added” or growth information.
  Drawbacks:
  • Increased workload to evaluate students more than once.
  • Designing pre- and post-tests that are truly comparable at different times is difficult.
  • Statistician may be needed to properly analyze results.
Employer’s or internship supervisor’s direct evaluations of students’ performances
  Benefits:
  • Evaluation by a career professional is often highly valued by students.
  • Faculty members learn what is expected by community members outside Mānoa.
  Drawbacks:
  • Lack of standardization across evaluations may make summarization of the results difficult.
Observation of student performing a task
  Benefits:
  • Captures data that is difficult to obtain through written texts or other methods.
  Drawbacks:
  • A trained, external observer (not the course instructor) is recommended to collect the data, which may cost money and/or require faculty members to be willing to observe colleagues’ courses and to allow observations of their own classes.
  • Some may believe observation is subjective and therefore that the conclusions are only suggestive.
Culminating project (capstone projects, senior theses, senior exhibits, senior dance performances)
  Benefits:
  • Provides a sophisticated, multi-level view of student achievement.
  • Students have the opportunity to integrate their learning.
  Drawbacks:
  • Creating an effective, comprehensive culminating experience can be challenging.
  • Faculty time required to develop evaluation methods (multiple rubrics may be needed).
Student publications or conference presentations
  Benefits:
  • Gives students an opportunity to practice being a professional and receive feedback from career professionals or community members.

Description or list of what students learned
INDIRECT METHODS: Benefits and Drawbacks

Student surveys
  Benefits:
  • Can administer to large groups for a relatively low cost.
  • Analysis of responses typically quick and straightforward.
  • Reliable commercial surveys are available for purchase.
  Drawbacks:
  • Low response rates are typical.
  • With self-efficacy reports, students’ perceptions may differ from their actual abilities.
  • Designing reliable, valid questions can be difficult.
  • Caution is needed when trying to link survey results and achievement of learning outcomes.
End-of-course evaluations (CAFE) or mid-semester course evaluations
  Benefits:
  • Analysis of responses typically quick and straightforward.
  • CAFE allows common questions across all courses as well as a choice of questions.
  Drawbacks:
  • Difficult to summarize the CAFE results across courses.
  • Results are the property of individual faculty members.
Alumni surveys
  Benefits:
  • Can administer to large groups for a relatively low cost.
  • Analysis of responses typically quick and straightforward.
  Drawbacks:
  • Low response rates are typical.
  • If there is no up-to-date mailing list, alumni can be difficult to locate.
  • Designing reliable, valid questions can be difficult.
Employer surveys
  Benefits:
  • Can administer to large groups for a relatively low cost.
  • Analysis of responses typically quick and straightforward.
  • Provides a real-world perspective.
  Drawbacks:
  • Low response rates are typical.
  • May have a very limited number of employers to seek information from.
Interviews
  Benefits:
  • Provides rich, in-depth information and allows for tailored follow-up questions.
  • “Stories” and voices can be powerful evidence for some groups of intended users.
  Drawbacks:
  • Trained interviewers needed.
  • Transcribing, analyzing, and reporting are time consuming.
Focus group interviews
  Benefits:
  • Provides rich, in-depth information and allows for tailored follow-up questions.
  • The group dynamic may spark more information; groups can become more than the sum of their parts.
  • “Stories” and voices can be powerful evidence for some groups of intended users.
  Drawbacks:
  • Trained facilitators needed.
  • Transcribing, analyzing, and reporting are time consuming.
Percent of time or number of hours/minutes spent on various activities related to a student learning outcome
  Benefits:
  • Information about co-curricular activities and student habits can help programs make sense of results and/or guide them in making decisions about program improvement.
  Drawbacks:
  • Retrospective self-reports may not be accurate.
Grades given by professors that are not based on explicit criteria directly related to a learning outcome
  Benefits:
  • Data relatively easy to collect.
  Drawbacks:
  • Impossible or nearly impossible to reach conclusions about the levels of student learning.
Job placement data
  Benefits:
  • Satisfies some accreditation agencies’ reporting requirements.
  Drawbacks:
  • Tracking alumni may be difficult.
Enrollment in higher degree programs
  Benefits:
  • Satisfies some accreditation agencies’ reporting requirements.
  Drawbacks:
  • Tracking alumni may be difficult.
Transcript analysis or course-taking patterns
  Benefits:
  • Unobtrusive method.
  • Student demographics and other information can be linked to their course-taking patterns.
  Drawbacks:
  • Conclusions need to be tempered because other variables do not appear on transcripts (e.g., personal situations, course availability).
Institutional research data
  Benefits:
  • Can be effective when linked to other performance measures and the results of the assessment of student learning (using a direct method).

Part 3. Evaluate Your Choice of Data-collection Method

After selecting a data-collection method, use this checklist to help confirm your decision. A well-chosen method:
  1. provides specific answers to the assessment question being investigated.
  2. is feasible to carry out given program resources and the amount of time faculty members are willing to invest in assessment activities.
  3. has a maximum of positive effects and minimum of negative ones. The method should give faculty members, intended users, and students the right messages about what is important to learn and teach.
  4. provides useful, meaningful information that can be used as a basis for decision-making.
  5. provides results that faculty members and intended users will believe are credible.
  6. provides results that are actionable. Faculty members will be willing to discuss and make changes to the program (as needed) based on the results.
  7. takes advantage of existing products (e.g., exams or surveys the faculty/program already use) whenever possible.