Unit: Human Nutrition, Food & Animal Sciences
Program: Food Science & Human Nutrition (BS)
Degree: Bachelor's
Date: Wed Oct 21, 2009 - 12:51:54 pm

1) List your program's student learning outcomes (SLOs).

FSHN Program Learning Outcomes
1. Know, apply, and critically analyze and evaluate concepts related to the science of food and nutrition, with a focus on humans.
2. Develop written and oral skills commensurate with the ability to summarize, evaluate, synthesize, and appropriately communicate scientific concepts to a variety of audiences.
3. Acquire personal characteristics and leadership, management, and human relations skills appropriate to professional practice in careers related to food science and human nutrition.
4. Recognize and use appropriate technologies, such as computer applications and food and nutrition laboratory methodologies.
5. Identify and develop skills to gain successful admission into entry-level careers or post-graduate education.
6. Develop problem-solving and critical thinking skills.
7. Develop and demonstrate the ability to be an effective participant in community service.
8. Value being an integral and functioning member of communities from local to global levels.

2) Where are your program's SLOs published?

Department Website URL: http://www.ctahr.hawaii.edu/hnfas/
Student Handbook. URL, if available online: http://www.ctahr.hawaii.edu/hnfas/
Information Sheet, Flyer, or Brochure URL, if available online: http://www.ctahr.hawaii.edu/hnfas/degreePrograms.html
UHM Catalog. Page Number: 335
Course Syllabi. URL, if available online:

3) Upload your program's current curriculum map(s) as a PDF.

Curriculum Map File(s) from 2009:

4) What percentage of courses have the course SLOs explicitly stated on the course syllabus, department website, or other publicly available document? (Check one)


5) State the SLO(s) that was Assessed, Targeted, or Studied

6. Develop problem-solving and critical thinking skills.

6) State the Assessment Question(s) and/or Goal(s) of Assessment Activity

Can the conventional Cornell Critical Thinking Test (CCTT), Level Z (Ennis and others, 1985), administered as a pre-test and post-test, measure any gains in critical thinking (CT) among students in a Food Science Experimental Foods course offered every Fall semester during the period 2001–2008?

Ennis, RH, Millman, J, Tomko, TN. 1985. Cornell Critical Thinking Tests Level X and Level Z – Manual. Third Edition. Pacific Grove, CA: Midwest Publications.

7) State the Type(s) of Evidence Gathered

   The Cornell Critical Thinking Test Level Z was administered within the first two weeks of school each Fall semester from 2001 to 2008.  Students were asked to read the directions on the exam and fill out an answer sheet (52 items, each with choices A, B, and C) to the best of their ability.  They were given 50 minutes to complete the test, and all tests and answer sheets were collected at the end of the 50 minutes.  The same test was given during the last week of class, following the same administration protocol.  Students were asked to put their first and last names on a line at the top of the sheet, along with the date, so that their pre- and post-test results could be paired.

8) State How the Evidence was Interpreted, Evaluated, or Analyzed

Total scores on the CCTT were obtained for “rights-only” answers and also corrected using the formula “rights minus one-half the number wrong”.  Making this correction for wrong answers was consistent with the test instructions cautioning examinees not to make wild guesses.  The score change over the semester was determined for each student, and descriptive statistics, including means and standard deviations, were calculated.
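The guessing correction described above amounts to a one-line calculation; a minimal sketch in Python (the function and variable names are ours, not from the CCTT manual):

```python
def corrected_score(num_right: int, num_wrong: int) -> float:
    """CCTT correction: rights minus one-half the number wrong.

    Omitted items count as neither right nor wrong, so leaving an
    item blank costs nothing, while a wrong guess costs half a point.
    """
    return num_right - 0.5 * num_wrong

# Example: a student answers 42 of 52 items, getting 30 right and 12 wrong
print(corrected_score(30, 12))  # 24.0
```

Under this rule, random guessing among three choices has a negative expected value, which is why examinees were cautioned against wild guesses.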

The data for each year, as well as the pooled data, were subjected to a paired t-test using SAS® 9.1 (SAS Institute Inc.) to determine whether students had achieved significant gains in critical thinking test scores during the class.

Further, subset scores for five aspects of critical thinking (deduction, meaning, observation/credibility, assumption, and induction) were computed and corrected for guessing. These subset scores were subjected to paired t-tests to determine the aspects of critical thinking in which students had made significant gains.
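The per-student change scores lend themselves to a paired t-test; the following standard-library sketch shows the statistic the SAS analysis would have computed (the scores below are invented for illustration and are not the study's data):

```python
import math

def paired_t(pre, post):
    """Paired t-statistic on per-student change scores (post minus pre)."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean = sum(diffs) / n
    # Sample variance of the differences (n - 1 denominator)
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    return mean / math.sqrt(var / n)

# Hypothetical corrected pre- and post-test scores for six students
pre  = [22.5, 18.0, 25.5, 20.0, 24.0, 19.5]
post = [24.0, 19.0, 28.0, 21.5, 23.5, 22.0]
print(round(paired_t(pre, post), 3))  # 3.114
```

The resulting t-statistic would then be compared against the t distribution with n − 1 degrees of freedom to obtain the P values reported below.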

9) State How Many Pieces of Evidence Were Collected

Pre- and post-test answer sheets were collected from 154 students from 2001 to 2008 and statistically analyzed.  This represents a total of 308 pieces of evidence.

10) Summarize the Actual Results

 On casual observation, there do not appear to be any significant gains in Cornell Critical Thinking Test scores between pre- and post-testing after one semester of problem-based learning (PBL) exercises, nor when looking at the mean scores for the eight years.  However, when statistical analyses were performed on the change in individual scores over the eight-year period, significant differences were found between pre- and post-test mean scores for students in 2002 and 2004.  Analyses showed that the mean score for students in 2002 increased 1.73 points, while students in 2004 showed an increase of 2.82 points.  On the other hand, in other years (2003, 2006, 2007) mean test scores decreased from pre- to post-testing.  Individual gains ranged from 1 point up to a high of 20 (in 2004), while decreases in scores varied from 1 to 7 points.  When the CCTT questions were separated into the different aspects of critical thinking, students in the 2002 and 2004 classes showed significant gains only in the “deduction” and “assumption” aspects, with no changes in “meaning,” “observation/credibility,” or “induction.”

11) Briefly Describe the Distribution and Discussion of Results

At the end of each semester, students who took both the pre- and post-test received a handout showing their scores on each:

"Name __________________________ Score Before ________ Score After _______

Please see me if you desire further information about the Cornell Critical Thinking Test."

In addition, they received the following information on their score sheet:

Excerpts from the Cornell Critical Thinking Test Manual

by Robert H. Ennis, Jason Millman, Thomas N. Tomko

The working definition of "critical thinking" the authors are using is the following: "Critical thinking is the process of reasonably deciding what to believe and do."
There are many ways to dissect and subcategorize critical thinking. The aspects of critical thinking deliberately included in these tests are: induction, deduction, value judgment, observation, credibility, assumptions, and meaning.

For the test you took, the means, standard deviations, and percentile rank equivalents (where available), secured in the 1960s and the late 1970s to early 1980s, are shown below in Table 1. You can compare your score with those for which there is a percentile ranking at different universities, as well as with several UHM FSHN classes over the past several years.

Table 1.  Mean scores and standard deviations of CCTT (Level Z) test scores for different university classes, secured in the 1960s and late 1970s to early 1980s, using “rights-only” answers (Ennis and others, 1985).  The classes compared were:

- Undergraduates at a Midwestern state university
- Undergraduate elementary education majors in a philosophy of education course at a New England college
- Freshman engineering students at a college in NJ
- Undergraduates at a small state university in upstate NY
- The same group immediately above, after a one-semester introduction to deductive logic
- Freshmen in a rhetoric class at the University of Iowa
- Seniors in math and behavioral sciences at the University of Iowa
- Graduate students in math and behavioral sciences at the University of Iowa

(The numeric columns of Table 1, including N, means, and standard deviations, are not reproduced here.)

Discussion of the results of the Cornell Critical Thinking Test took place on the last day of class. 

12) Describe Conclusions and Discoveries

The Cornell Critical Thinking Test (CCTT) scores of University of Hawaii (UH) students in an experimental foods class reported over an eight-year period appeared to be similar to those reported for students from a wide variety of classes at different universities.

Mean class CCTT scores appeared to increase slightly in seven of the eight years tested, and this slight increase appeared similar to that of a class tested before and after a one-semester deductive logic course. However, in most years there were no significant differences between the pre- and post-test scores.

Statistical analysis disclosed that in two of the years (2002 and 2004) there were significant gains (P values 0.036 and 0.045, respectively) in CT test scores.

Furthermore, in both years significant gains were made in the same two aspects of CT (deduction and assumption) and not in the other aspects.

The results also indicate that gains in CT scores occurred both with students who had initial lower as well as higher CT scores.

Because there are so many definitions of CT, as well as a multitude of classroom methods intended to teach and develop CT skills, a more traditional CT test like the CCTT may not measure gains in CT skills (as defined by Ennis) even when there are indeed gains in other areas.

13) Use of Results/Program Modifications: State How the Program Used the Results --or-- Explain Planned Use of Results

The CCTT was one means of attempting to document gains in general or overall “critical thinking” skills in students as developed by the PBL method of learning.

If the CCTT cannot unequivocally show gains in CT skills, can a student’s written statement of his/her perceived gains in CT skills in journals still be considered a gain, although qualitative, or must a quantitative measurement be a requirement for proving definitive gains?  If students were not asked to but wrote in journal entries about their perceived gains in CT, can their reflective discussions of any CT gains be useful to program assessment from a qualitative standpoint?

Excerpts from student journals indicate that students who wrote about thinking perceived that they had learned and were able to use problem-solving methods, and that they had learned and refined their CT skills.  At the end of the Fall semester's Experimental Foods course, students will be asked a specific question prompting them to reflect on, and respond to, any perceived increases in CT skills, supported with reasons and examples.  Depending on what students write, this might be used or considered as a valid qualitative assessment of a student's gain in CT skills.

14) Reflect on the Assessment Process

15) Other Important Information

16) FOR DISTANCE PROGRAMS ONLY: Explain how your program/department has adapted its assessment of student learning in the on-campus program to assess student learning in the distance education program.


17) FOR DISTANCE PROGRAMS ONLY: Summarize the actual student learning assessment results that compare the achievement of students in the on-campus program to students in the distance education program.