Course Evaluation System (CES)
In the spring of 2016, the Mānoa Faculty Senate passed a resolution seeking clarification on the new CES and how it would be used. The resolution focused on the purpose of the CES, possible biases in evaluation instruments, and adequate consultation with the Senate. The MFS Executive Committee and CAPP have since raised additional questions, which are included in their new resolution. This website has been developed to help answer some of these questions and to give faculty access to the materials developed by the Course Evaluation Committee over the last nine months. We have prepared an FAQ that can be found in the CES documents below.
I. Use and Purpose of the new CES
As the President’s memo indicated, there are already multiple uses for course evaluations, and those uses do not change with the new system. Rather, the CES is designed to offer a better system for collecting course evaluation information and distributing it to the faculty. It is also intentionally designed to be easier for students to use and can simulate a paper form by offering the option to fill it out in class.
At the individual faculty member level, the CES can be used to supply feedback on the course. Faculty can do this by using the common questions (still in development) along with individually tailored questions suited to their specific course. The use of the CES to provide such feedback is clearly valuable to faculty. When asked, “Are you using course evaluation results to inform your teaching?” 93% of Mānoa faculty who responded (381/409) said yes.
Table 1: Faculty use to inform teaching
Q8 - Are you using course evaluation results (eCAFE or a different system) to inform your teaching? (e.g., adjusting teaching strategies in response to student feedback).

Departments also have the option of adding their own questions. In fact, many departments already have a set of departmental questions, and some include their program learning outcomes here (as Nursing will do now that the CES is available to them). At the department level, department chairs will have access to faculty results (in many departments this is already the case) so that they can better understand and address curricular and programmatic needs.
Table 2: Reporting that results already go to the Department Chair
Does your department require that evaluation results go to the department chair?

Programs and departments can use questions to inquire about program needs, including program review and student learning outcomes where applicable. Some programs collect data reported on course evaluation forms to submit to their accrediting agencies. Chairs will not use the CES to make personnel decisions – faculty members remain in control of what is included in their tenure/promotion file. However, when asked, 78.47% of survey respondents said that results should go to the department chair.
Table 3: Should department chairs see evaluations
Should evaluation results go to the department chairs?

At the college level, deans will only see the aggregate data for their college-level questions (if any college-level questions are created). Similarly, at the institutional level, only aggregate data on the common questions will be available. The current eCAFE also provides aggregate institutional data for all questions asked, including the current Mānoa-wide common questions. The new system is no different: the aggregate data gives faculty a point of comparison with others at the institutional level. The common questions will also be of use to individual faculty members, who will now have a uniform set of data to use in their own feedback and evaluation process. There is no way the institution can disaggregate the data to review an individual faculty member’s score. To the degree this data would be “used” in another manner, it would be to communicate our strengths and weaknesses as an institution.
The Course Evaluation Committee asked the Mānoa faculty about the possible uses of course evaluations. The responses indicated that these possible purposes align with what the faculty already see as valuable aspects of an evaluation tool. While the MFS asserts that there can be only one possible use for the form, both existing practice at UHM and the practices of our peer and benchmark institutions suggest that the uses identified in the President’s memo are possible with the same instrument. While programs may use the instrument for other purposes, the Course Evaluation Committee, drawing on feedback from the faculty survey, has focused primarily on the first two uses: faculty improvement of courses and evaluation of faculty.
Table 4: Possible uses for CES
Q7 - How important are the following possible uses of course evaluations?

II. Issues of Bias in Course Evaluation Forms
Second, the May 2016 MFS resolution identified concerns about bias associated with course evaluations. While the senate resolution questioned whether such concerns were considered “when crafting this tool and accompanying policy,” the “new tool” had not yet been crafted when the resolution was written. The campus committee created to identify common questions was tasked with developing the content of the new tool. Thus, prior to “crafting this tool,” the committee conducted a literature review covering the recent literature on the question of bias. The literature review can be found in the CES document section below.
The literature review focuses on the issue of race and gender differences in evaluations at US universities. Other issues surrounding the use of student evaluations were also reviewed, including validity and low response rates. Literature published after 2010 was prioritized so that the most recent data were considered, including the several meta-analyses on this topic. To quote from the conclusion of the review:
There is no unanimity in the literature on the significance of a gender bias, but there is a general understanding, especially when the meta-analyses are considered, that there is validity to SET measures and that they can provide useful information regarding teaching effectiveness. The literature dating back into the 1990s “demonstrates that student course evaluations are valid measures of instructional effectiveness” (Filak and Sheldon 2003, 238). However, no one, whether supporters or detractors of SET, suggests that these instruments should be the sole measure upon which faculty tenure and promotion decisions are made, and many suggest other qualitative processes that should be included (these are beyond the scope of this committee to determine). Such qualitative and meaningful measures would include teaching portfolios and peer reviews of classes (Stark and Freishtat 2014, 15–16).
In addressing the limitations in measuring teaching effectiveness, Stark and Freishtat argue that student evaluations cannot accurately measure teaching effectiveness (Stark and Freishtat 2014, 14), though other studies that survey the entire scope of the research say that they can (Wright and Jenkins-Guarnieri 2012). Zhao and Gallant studied 73,500 completed SEIs over an 11-year period and found that the SEI questionnaire used had construct validity and reliability and that its items were strongly correlated with instructional effectiveness (Zhao and Gallant 2012, 232–33). Stark and Freishtat conclude that “global” questions such as “overall teaching effectiveness” and “value of the course” are least helpful as a measure because they are “misleading” (Stark and Freishtat 2014, 20). This recommendation against global measures was also made by Laube, Massoni, Sprague, and Ferber (Laube et al. 2007, 97). Instead, Stark and Freishtat suggest asking questions like “Is the instructor available to students?” and “Is she responsive to student questions?” Others take on the issue of selection bias and argue that even with some selection bias, SET are reliable instruments for assessing the quality of teaching but should not be used to make comparisons between courses (Wolbring and Treischl 2016). Additional recommendations included adding a disclaimer regarding gender (when discussing evaluations) and ensuring that any reports on evaluations include variance and not just means (Laube et al. 2007).
In other words, while the literature is mixed on the depth of bias, the overall validity of the forms is supported. The questions faculty choose to include remain up to them. At the institutional level, the literature can be used to better frame the questions that are asked. This literature was made available to all committee members. To craft questions appropriately, faculty with instrument design experience were recruited to the committee. Additional refinement of the questions continues using the comments provided by faculty in the most recent survey.
In addition to the literature review, a survey of all our peer and benchmark universities was conducted to see how they use evaluations. It should be noted that our current practice is wildly out of step with the practices of our peer and benchmark institutions and beyond. The norm among our peer and benchmark institutions is a common set of questions across the vast majority of course evaluation systems. In virtually all these cases, the results go to departments and/or are available for the campus community, including students, to review. In some cases, depending upon state sunshine laws, these are public documents available to anyone who wishes to view them. Finally, the evaluations are completed for all courses at the institution. The document with the review of our peer and benchmark institutions can be found in the CES document section below.
III. Faculty consultation
BOR Policy 1.210 states that the role of faculty governance is to “advise the administration (primarily at the campus and unit level) on matters impacting and/or relating to the development and maintenance of academic policy and standards to the end that quality education is provided, preserved, and improved” (III.1).
BOR Policy 1.210 IIIB3b states that the “duly authorized organization specified by each charter shall have the responsibility to speak for the faculty on academic policy matters such as evaluation of faculty….”
While the President’s memo indicated that a new course evaluation system was being implemented, the details of all the questions, from the individual faculty questions to the common questions, were left to each campus to decide. Consultation began with the campus notification of the new CES via the President’s memo.
After the President’s memo, efforts to engage the faculty senate via CAPP and the SEC were initiated, including a meeting with CAPP and a formal request that faculty senators be nominated to serve on the course evaluation committee. In addition, a course evaluation committee was established using faculty with instrument design experience, two different surveys were sent to the entire faculty (generating about 500 responses each), department chairs were surveyed, two focus groups were held, students from the ASUH board and the GSO leadership were consulted, and two departments agreed to pilot the new system.
IV. Possible Common Questions
At this point, the course evaluation committee has identified possible common questions. While the faculty were sent 14 possible questions, the goal is to reduce this number to roughly five. The committee fully recognizes concerns about survey fatigue and wishes to work with departments to ensure the best possible instrument. A survey of the possible questions was sent to all faculty members and drew over 500 responses; we would like to thank those who provided insightful suggestions and wording for the questions. We were most interested in which of these questions would be useful as common questions. While the committee has not fully analyzed the results, it is worth noting that the usefulness of these questions for an evaluation system was overwhelmingly affirmed by those who answered the survey.
The instructor demonstrated knowledge of course content

The instructor was well prepared for class

The instructor provided helpful feedback on work in progress/tests/assignments if requested

The instructor was accessible to students

The instructor created a learning environment that was respectful of diverse backgrounds and perspectives

On average, how many hours a week outside of class did you spend on this course (for example, doing readings, reviewing notes, writing papers, and any other course-related work)?

We have not determined what the final Mānoa questions will be. Over the last nine months there has been a concerted effort to work through the process. Given that the system we are implementing differs from eCAFE only in that it will go to all courses and that results will be available to department chairs, a year to implement has been ample. This page will be updated as new information becomes available to share with the Mānoa faculty.
Course Evaluation System Documents
Date | Documents |
---|---|
08/21/2018 | CES Implementation Update |
08/21/2018 | FAQ New Course Evaluation System |
08/21/2018 | New Course Evaluation System |
Frequently Asked Questions
- 1. Why are we shifting from eCAFE?
The new course evaluation system (CES) is intended to offer several substantial improvements over eCAFE. First, there will be updated and revised common questions. Second, the results of the survey will go to the Department Chair. Third, the new CES will be readable on phones and tablets, making it easier to administer in class like a paper form. Finally, faculty and departments will be given the option of writing their own questions rather than selecting from a predetermined question bank. We are implementing these changes as a result of the March 8, 2016 memo issued by President Lassner outlining the decommissioning of eCAFE (tentatively scheduled for Fall 2017) and the implementation of a new course evaluation system (CES).
- 2. For what purposes will CES be used?
President Lassner noted in his memo of March 8, 2016, that the new CES “will provide a transparent, consistent process to contribute to the assessment of program effectiveness and provide commonality of approaches across evaluations at the course/division/college/campus level. It allows students to provide feedback on their learning experiences which faculty can use to inform their teaching practices, to evaluate new teaching methods and techniques, and to demonstrate teaching effectiveness. Program directors/department chairs can also use these data in aggregated form to evaluate curriculum and program effectiveness.” At the individual faculty member level, the purpose of the CES is to supply feedback on the course. The common questions will be of use to individual faculty members who would like a uniform set of data to use in their own feedback and evaluation process. At the institutional level, the intent of the common questions is to provide aggregate data on how Mānoa is doing in order to better communicate and understand our strengths and weaknesses. The question of purpose was taken up by the course evaluation committee after the passage of the Mānoa Faculty Senate resolution of May 14, 2016. The uses of the CES, the common questions, and the implementation of the new CES remain part of the ongoing conversation between the committee, the Mānoa administration, and the Mānoa faculty.
- 3. Will these evaluations be used for personnel decisions?
While the CES will be distributed to all students in all courses, faculty retain control of whether to use the CES results to demonstrate effective teaching in personnel decisions. Faculty members are required to provide evidence of teaching effectiveness and most faculty currently use eCAFE or some other form of student evaluation to do so. Chairs will not use the CES to make personnel decisions – faculty members remain in control of what is included in their tenure/promotion file.
- 4. Is participation in the new course evaluation system mandatory?
Yes. All courses with more than three students will be evaluated using the new CES, as instructed by the March 8, 2016 memo from UH System President Lassner. For courses with three or fewer students, a method of evaluation that will not compromise anonymity is being developed.
- 5. My department has been using the same questions for years. Can we still include these questions on the new evaluation?
Yes. The CES will have multiple tiers. Departments, colleges, or programs can insert their own questions. You will need to provide these to ITS so they can be integrated into the CES. However, departments may need to determine whether their departmental questions overlap with the common questions.
- 6. What will be the common questions that appear on all evaluations?
These are still under development. We invite the faculty to be involved in the creation of the draft common questions. The course evaluation committee was formed to develop the common questions. As we have worked to develop them, the committee sent a survey to department chairs and all faculty about how eCAFE is currently used. We also surveyed the faculty about which possible common questions should appear. We held focus groups and will hold an open meeting to elicit further faculty feedback about the questions. Based upon this process, we have developed draft common questions that we will send back out to the faculty to assess.
- 7. Will the Department Chair be able to see the results of the course evaluation?
Yes. President Lassner’s memo says that faculty and unit chairs will receive results. In a survey of UHM faculty, when asked if Department Chairs should see the results, 78% of those responding (317 total) said that results should go to the Department Chairs.
- 8. Will the results of the course evaluation be made available to upper administration?
Yes, but only the common questions and only in aggregate form without individually identifiable information. Individual results will not be available outside the department level unless instructors choose to make their results publicly available.
- 9. One reason President Lassner gave for switching from eCAFE to a new course evaluation system was that eCAFE has poor student response rates. How will the new CES increase response rates?
The CES can be filled out in class on a tablet, phone, or computer, while eCAFE could not easily be completed on a tablet or phone. Research has shown that response rates tend to be higher for paper evaluations than for online evaluations. With the new CES, class time can be set aside for the evaluation to take place, just as with a paper form. Students needing accommodations should contact KOKUA.
- 10. Where will student evaluations be stored and for how long?
As with eCAFE, evaluations will be available online and retained behind a password-protected wall. They will remain archived perpetually. Department chairs will be able to view the results that go to them through the password-protected CES site. Only the current department chair will be able to access departmental results.
- 11. I have heard that faculty members who are demanding may get lower evaluation scores than faculty members who are not as demanding. How can this problem be offset?
We would like to include questions on the new CES that we hope can be used to assess the validity of such claims.
- 12. What if my course ends at a non-traditional time or is taught on-line?
As with eCAFE, a course that finishes before or after the end of the semester is included in the “non-traditional” category. For non-traditional courses, eCAFE/CES will let the instructor choose a different end date.