Unit: Religion
Program: Religion (MA)
Degree: Master's
Date: Fri Oct 28, 2011 - 9:17:50 am

1) Below are your program student learning outcomes (SLOs). Please update as needed.

SLOs for the MA Plan A (Thesis Track) and Plan B (Non-Thesis Track):

1.) Students demonstrate familiarity with, and developing mastery of, the methodologies and theoretical frameworks employed in the field of Religion.

2.) Students demonstrate the ability to write and prepare presentations at a high level of proficiency.

3.) Students are able to conduct research that leads to either a thesis or a significant portfolio of shorter works.

Work with graduate students is a highly individual endeavor, since students come to the program with diverse professional goals. In addition to the general outcomes listed above, the following outcomes apply to different students, based on aptitude and goals:

  • Student is prepared to enter a Ph.D. program in the field.
  • Student is prepared to teach courses in Religion at the junior college level.

2) Your program's SLOs are published as follows. Please update as needed.

Department Website URL: www.hawaii.edu/religion/grad-ma.html
Student Handbook. URL, if available online:
Information Sheet, Flyer, or Brochure URL, if available online:
UHM Catalog. Page Number:
Course Syllabi. URL, if available online:

3) Below is the link(s) to your program's curriculum map(s). If we do not have your curriculum map, please upload it as a PDF.

Curriculum Map File(s) from 2011:

4) For your program, the percentage of courses that have course SLOs explicitly stated on the syllabus, a website, or other publicly available document is as follows. Please update as needed.


5) For the period June 1, 2010 to September 30, 2011: State the assessment question(s) and/or assessment goals. Include the SLOs that were targeted, if applicable.

Assessment of student evaluation forms and practices

6) State the type(s) of evidence gathered to answer the assessment question and/or meet the assessment goals that were given in Question #5.

A set of student evaluations from one course was collected from each faculty member.

7) State how many persons submitted evidence that was evaluated. If applicable, please include the sampling technique used.

Six (out of eight) faculty members

8) Who interpreted or analyzed the evidence that was collected? (Check all that apply.)

Course instructor(s)
Faculty committee
Ad hoc faculty group
Department chairperson
Persons or organization outside the university
Faculty advisor
Advisors (in student support services)
Students (graduate or undergraduate)

9) How did they evaluate, analyze, or interpret the evidence? (Check all that apply.)

Used a rubric or scoring guide
Scored exams/tests/quizzes
Used professional judgment (no rubric or scoring guide used)
Compiled survey results
Used qualitative methods on interview, focus group, open-ended response data
External organization/person analyzed data (e.g., external organization administered and scored the nursing licensing exam)

10) For the assessment question(s) and/or assessment goal(s) stated in Question #5:
Summarize the actual results.

Why do we have student evaluations?

Possible answers:

  1. Because they help determine whether a faculty member should be promoted or let go.
  2. Because they help faculty gauge the reception of their teaching and thus make it possible for them to adjust pedagogical strategies in order to improve teaching.

Survey of practices at the Department

  1. The two full professors do not use formal student evaluations.
  2. Of the remaining six associate and assistant professors, four conduct evaluations using the standard department form, a multiple-choice checklist with mostly three possible answers to thirteen questions about the course. One of the four adds a few questions concerning technical aspects of the reading and testing. Of the last two professors, one uses a wholly different form of his own design, which poses both quantitative questions about the quality of the teaching and open-ended questions about the teaching materials and the class experience and structure. The other uses the questionnaires designed for students in E-focus and O-focus classes respectively, which are similarly hybrid but more condensed, and which address the specific focus of the class in question.

Preliminary conclusions

  1. From the point of view of the full professors, only the first reason for having student evaluations is relevant.
  2. The same might apply to the four faculty members who use the department form, while the use of an individually designed, open-ended form, as well as of the focus questionnaires, would seem to indicate further interest in gauging student reception of the teaching.

Advantages of the two types of forms

The multiple-choice form with only three possible answers would appear to work as a measure of teaching excellence and/or “student satisfaction,” but since in most cases it yields exclusively positive ratings of everything to do with the course, it clearly provides very little information about possible room for improvement.

The more open-ended forms are more helpful in differentiating student responses and thereby allowing for adjustment of specific teaching practices. They also make it possible to understand why students judge the courses the way they do, for instance, why they like or dislike the professor. The open-ended questions, moreover, help to weed out responses from disgruntled students who may simply be venting anger caused by frustration over their own shortcomings.

Use of CAFE forms

Nobody in the Department currently uses CAFE, both because of inherent design flaws in the system that allegedly cannot be corrected, and because participation is voluntary, which encourages use of the system by students with particular biases, whether positive or negative. In this way, it is somewhat similar to the infamous “Rate Your Professor” website.


While it remains questionable whether student evaluations constitute a fair measure of the quality of teaching, they nonetheless may have a useful function for faculty who wish to gauge the reception of their teaching. It is difficult to see, however, why this use of evaluations should not be voluntary, especially since student participation is voluntary in the online system sponsored by the University.

11) State how the program used the results or plans to use the results. Please be specific.

The report is being distributed among the faculty, who may, on a voluntary basis, adopt new evaluation forms and practices.

12) Beyond the results, were there additional conclusions or discoveries?
This can include insights about assessment procedures, teaching and learning, program aspects and so on.

13) Other important information.
Please note: If the program did not engage in assessment, please explain. If the program created an assessment plan for next year, please give an overview.