Research on Evaluation

CRDG faculty and staff do more than conduct evaluation studies; they also engage in research on evaluation. This focus answers the repeated call in the professional literature to establish a stronger theoretical and empirical foundation for evaluators’ conceptions of the profession and of evaluation practice, methods, and theory.

Research on the practice of evaluation has included studies of the degree to which the participation of program personnel in evaluations affects evaluation methods and results, and of the degree to which programs are implemented as intended. Research on the methods of evaluation has examined the development, validation, and use of classroom observations, teacher logs, and other data collection instruments; as a result of this work, CRDG faculty and staff have published several evaluation instruments in the professional evaluation literature. Research on the profession of evaluation has included reviews of the extent to which research-on-evaluation studies have been reported in the professional evaluation literature. Research on the theory of evaluation has addressed fundamental issues, revisited regularly in new contexts over the years, that undergird how and why evaluations are conducted; topics have included the extent to which evaluations should respect indigenous populations, the use of evaluation findings by program personnel, and others.

The results of CRDG faculty and staff’s work on these four broad evaluation topics have been presented at national evaluation conferences, published in several national refereed journals, and discussed in books. This work has provided part of the background necessary to be awarded grants from the National Science Foundation and the U.S. Department of Education, and it helped serve as the foundation for the selection of a CRDG faculty member as the 2013–2015 editor-in-chief of the American Evaluation Association journal New Directions for Evaluation. Resources permitting, CRDG faculty and staff intend to broaden their body of research on evaluation practice, methods, the profession, and theory and thereby continue to enhance CRDG’s stature and contributions nationally and internationally.

Evaluation Resources


Peer-Reviewed Publications

Brandon, P. R., Lawton, B. E., & Harrison, G. M. (2013). Issues of rigor and feasibility when observing the quality of educational program implementation: A case study. Manuscript submitted for publication (under revision).

Brandon, P. R. (in press). Book review: J. Bradley Cousins and Jill C. Chouinard, Participatory evaluation up close: An integration of research-based knowledge. American Journal of Evaluation. doi:10.1177/1098214013503202

Brandon, P. R., & Fukunaga, L. (2014). A review of the findings of empirical studies of stakeholder involvement in program evaluations. American Journal of Evaluation, 35, 27–45. doi:10.1177/1098214013503699

Brandon, P. R., Smith, N. L., Ofir, Z., & Noordeloos, M. (2014). African Women in Agricultural Research and Development: An exemplar of managing for impact in development evaluation. American Journal of Evaluation, 35, 126–141. doi:10.1177/1098214013509876

Brandon, P. R., & Lawton, B. E. (2013). The development, validation, and potential uses of the Student Interest-in-the-Arts Questionnaire. Studies in Educational Evaluation, 39, 90–96. doi:10.1016/j.stueduc.2013.01.001

Brandon, P. R., Harrison, G. M., & Lawton, B. E. (2013). SAS code for calculating intraclass correlation coefficients and effect size benchmarks for site-randomized education experiments. American Journal of Evaluation, 34, 78–83. doi:10.1177/1098214012466453

Brandon, P. R., Smith, N. L., & Grob, G. F. (2012). Five years of HHS home health care evaluations: Using evaluation to change national policy. American Journal of Evaluation, 33, 251–263.

Smith, N. L., & Brandon, P. R. (2011). If not to predict, at least to envision, evaluation’s future. American Journal of Evaluation, 32, 565–566.

Brandon, P. R. (2011). Reflection on four multisite evaluation case studies. In J. A. King & F. Lawrenz (Eds.), Multisite evaluation practice: Lessons and reflections from four cases. New Directions for Evaluation, 129, 87–95.

Brandon, P. R., Smith, N. L., & Hwalek, M. (2011). Aspects of successful evaluation practice at an established private evaluation firm. American Journal of Evaluation, 32, 295–307.

Brandon, P. R., Smith, N. L., Trenholm, C., & Devaney, C. (2010). Evaluation exemplar: The critical importance of stakeholder relations in a national, experimental abstinence education evaluation. American Journal of Evaluation, 31, 517–531.

Brandon, P. R., & Smith, N. L. (2010). Exemplars editorial statement. American Journal of Evaluation, 31, 252–253.

Smith, N. L., Brandon, P. R., Lawton, B. E., & Krohn-Ching, V. (2010). Evaluation exemplar: Exemplary aspects of a small group-randomized local educational program evaluation. American Journal of Evaluation, 31, 254–265.

Brandon, P. R., & Singh, J. M. (2009). The strength of the methodological warrants for the findings of research on program evaluation use. American Journal of Evaluation, 30, 123–157.

Brandon, P. R., Young, D. B., Taum, A. K. H., & Pottenger, F. M. (2009). The Inquiry Science Implementation Scale: Instrument development and the results of validation studies. International Journal of Science and Mathematics Education, 7, 1135–1147.

Ayala, C. C., Shavelson, R. J., Brandon, P. R., Yin, Y., Furtak, E. M., Ruiz-Primo, M. A., Young, D. B., & Tomita, M. (2008). From formal embedded assessments to reflective lessons: The development of formative assessment suites. Applied Measurement in Education, 21, 315–334.

Brandon, P. R., Taum, A. K. H., Young, D. B., Pottenger, F. M., & Speitel, T. W. (2008). The complexity of measuring the quality of program implementation with observations: The case of middle-school inquiry-based science. American Journal of Evaluation, 29, 235–250.

Brandon, P. R., Taum, A. K. H., Young, D. B., & Pottenger, F. M. (2008). The development and validation of the Inquiry Science Observation Coding Sheet. Evaluation and Program Planning, 31, 247–258.

Brandon, P. R., Young, D. B., Shavelson, R. J., Jones, R., Ayala, C. C., Ruiz-Primo, M. A., Yin, Y., Tomita, M., & Furtak, E. (2008). Embedding formative assessments: Lessons learned and recommendations for future “romances” between curriculum and assessment developers. Applied Measurement in Education, 21, 390–402.

Furtak, E. M., Ruiz-Primo, M. A., Shemwell, J. T., Ayala, C. C., Brandon, P. R., Shavelson, R. J., Tomita, M., & Yin, Y. (2008). On the fidelity of implementing embedded formative assessments and its relation to student learning. Applied Measurement in Education, 21, 360–389.

Higa, T. A. F., & Brandon, P. R. (2008). Participatory evaluation as seen in a Vygotskian framework. Canadian Journal of Program Evaluation, 23(3), 103–125.

Shavelson, R. J., Young, D. B., Ayala, C. C., Brandon, P. R., Furtak, E. M., Ruiz-Primo, M. A., Tomita, M., & Yin, Y. (2008). On the impact of curriculum-embedded formative assessment on learning: A collaboration between curriculum and assessment developers. Applied Measurement in Education, 21, 295–314.

Yin, Y., Ayala, C. C., Shavelson, R. J., Ruiz-Primo, M. A., Tomita, M., Furtak, E. M., Brandon, P. R., & Young, D. B. (2008). On the measurement and impact of formative assessment on students’ motivation, achievement, and conceptual change.

Applied Measurement in Education, 21, 335–359.

Brandon, P. R. (2005). Using test standard-setting methods in educational program evaluation: Addressing the issue of how good is good enough. Journal of Multidisciplinary Evaluation, 3, 1–29.

Brandon, P. R., & Higa, T. H. (2004). An empirical study of building the evaluation capacity of K–12 site-managed project personnel. Canadian Journal of Program Evaluation, 19(1), 125–142.

Heck, R. H., Brandon, P. R., & Wang, J. (2001). Implementing site-managed educational changes: Examining levels of implementation and effect. Educational Policy, 15, 302–322.

Brandon, P. R. (1999). Involving program stakeholders in reviewing evaluators’ recommendations for program revisions. Evaluation and Program Planning, 22, 363–372.

Brandon, P. R. (1998). Stakeholder participation for the purpose of helping ensure evaluation validity: Bridging the gap between collaborative and non-collaborative evaluations. American Journal of Evaluation, 19, 325–337.

Brandon, P. R., & Heck, R. H. (1998). The use of teacher expertise in decision making during school-conducted needs assessments: A multilevel perspective. Evaluation and Program Planning, 21, 323–331.

Cooper, J. E., Brandon, P. R., & Lindberg, M. A. (1998). Evaluators’ use of peer debriefing: Three impressionist tales. Qualitative Inquiry, 4, 265–279.

Brandon, P. R., Wang, Z., & Heck, R. H. (1994). Teacher involvement in school-conducted needs assessments: Issues of decision-making process and validity. Evaluation Review, 18, 458–471.

Brandon, P. R., Lindberg, M. A., & Wang, Z. (1993). Involving program beneficiaries in the early stages of evaluation: Issues of consequential validity and influence. Educational Evaluation and Policy Analysis, 15, 420–428.

Brandon, P. R., Newton, B. J., & Harman, J. W. (1993). Enhancing validity through beneficiaries’ equitable involvement in identifying and prioritizing homeless children’s educational problems. Evaluation and Program Planning, 16, 287–293.

Heath, R. W., & Brandon, P. R. (1982). An alternative approach to the evaluation of educational and social programs. Educational Evaluation and Policy Analysis, 4(4), 477–486.

Presentations

Brandon, P. R. (2012, April). Ruminations on research on evaluation. Paper presented at the meeting of the American Educational Research Association, Vancouver, BC.

Brandon, P. R., Vallin, L. M., & Philippoff, J. (2012, October). A quantitative summary of research on evaluation published in the American Journal of Evaluation from 1998 through 2011. Paper presented at the meeting of the American Evaluation Association, Minneapolis.

Fukunaga, L. L., & Brandon, P. R. (2011, November). Findings on stakeholder involvement: A review of empirical studies of stakeholder involvement in evaluation. Paper presented at the meeting of the American Evaluation Association, Anaheim, CA.

Brandon, P. R., Harrison, G. M., & Lawton, B. E. (2011, November). Intraclass correlation coefficients and effect sizes for school-randomized experiments. Paper presented at the meeting of the American Evaluation Association, Anaheim, CA.

Brandon, P. R., Lawton, B. E., & Harrison, G. (2011, April). Development and validation of the Interest-in-the-Arts Questionnaire. Paper presented at the annual meeting of the American Educational Research Association, New Orleans.

Fukunaga, L., & Brandon, P. R. (2010, November). An overview of the methods of the empirical studies of stakeholder involvement in program evaluation. Paper presented at the meeting of the American Evaluation Association, San Antonio, TX.

Harrison, G., & Brandon, P. R. (2010, September). Statistics for planning school-randomized experiments in Hawai‘i. Poster presented at the annual meeting of the Hawai‘i-Pacific Evaluation Association, Honolulu.

Brandon, P. R. (2009, November). Conducting research on program evaluation in conjunction with evaluation studies. Paper presented at the meeting of the American Evaluation Association, Orlando, FL.

Brandon, P. R., & Lawton, B. (2009, April). An empirical basis for interpreting effect sizes: Distributions of between-school effect sizes in seven grades. Poster presented at the meeting of the American Educational Research Association, San Diego, CA.

Brandon, P. R., & Singh, J. M. (2008, November). Conclusions from research on evaluation use: How strong are the methodological warrants? Paper presented at the meeting of the American Evaluation Association, Denver, CO.

Lawton, B., & Brandon, P. R. (2008, November). Evaluation of a formative and summative method for judging teachers’ quality of program implementation. Paper presented at the meeting of the American Evaluation Association, Denver, CO.

Shavelson, R. J., Young, D. B., Ayala, C. C., Brandon, P. R., Furtak, E. M., Ruiz-Primo, M. A., Tomita, M., & Yin, Y. (2008, June). On the impact of curriculum-embedded formative assessment on learning. Paper presented at the annual meeting of the Pacific Circle Consortium, Apia, Samoa.

Brandon, P. R. (2008, January). Overview of a small body of research on stakeholder participation in program evaluation and possible reasons for the inattention given to it in the evaluation literature. Paper presented at the meeting of the Hawai‘i Educational Research Association, Honolulu.

Brandon, P. R., Taum, A. K. H., Young, D. B., Pottenger, F. M., Speitel, T. W., & Gray, M. (2007, April). Development, validation, and trial of a method for judging the quality of using questioning strategies in a middle-school inquiry science program. Paper presented at the meeting of the American Educational Research Association, Chicago.

Taum, A. K. H., & Brandon, P. R. (2006, April). The iterative process of developing an inquiry science classroom observation protocol. Paper presented at the annual meeting of the American Educational Research Association. San Francisco.

Brandon, P. R., & Taum, A. K. H. (2005, October). Development and validation of the Inquiry Science Teacher Log and the Inquiry Science Teacher Questionnaire. Paper presented at the meeting of the American Evaluation Association, Toronto.

Taum, A. K. H., & Brandon, P. R. (2005, October). The development of the Inquiry Science Observation Code Sheet. Paper presented at the annual meeting of the American Evaluation Association, Toronto.

Brandon, P. R., & Taum, A. K. H. (2005, April). Instrument development for a study comparing two versions of inquiry science professional development. Paper presented at the annual meeting of the American Educational Research Association, Montreal.

Taum, A. H. K., & Brandon, P. R. (2005, April). Coding teachers in inquiry science classrooms using the Inquiry Science Observation Guide. Paper presented at the annual meeting of the American Educational Research Association, Montreal.

Brandon, P. R. (2004, July). Using test standard-setting methods in program evaluation. Paper presented at the annual meeting of the American Psychological Association, Honolulu.

Brandon, P. R. (2003, November). Using test standard-setting methods to address the “how good is good enough” question in educational program evaluation. Paper presented at the meeting of the American Evaluation Association, Reno, NV.

Cousins, J. B., Brandon, P. R., Goh, S. C., Quon, T., & Heck, R. (2003, November). A comparative study of organizational readiness for evaluation. Paper presented at the meeting of the American Evaluation Association, Reno, NV.

Brandon, P. R., & Higa, T. F. (2002, April). An empirical examination of the effects of providing training and consultation in practical participatory evaluations. Paper presented at the meeting of the American Educational Research Association, New Orleans.

Young, D. B., Brandon, P. R., Shavelson, R. J., Ruiz-Primo, M. A., Ayala, C. C., Pottenger, F. M., Lai, M., Feldman, A., Tomita, M., Scarlett, T., Haynes, R. L., & Chin-Chance, S. (2002, February). Embedding assessments in the FAST curriculum: On beginning the romance among curriculum, teaching, and assessment. Paper presented at the annual meeting of the Hawai‘i Educational Research Association, Honolulu.

Brandon, P. R. (2001, November). Test standard setting in program evaluation: The beginnings of an evaluation standard-setting model. Paper presented at the meeting of the American Evaluation Association, St. Louis.

Brandon, P. R. (2001, April). The contrasting-groups standard-setting method as a means of judging program merit in small educational evaluations. Paper presented at the meeting of the American Educational Research Association, Seattle.

Brandon, P. R. (2000, November). A review of the Angoff standard-setting method: A potential procedure for judging program performance during low-stakes evaluations. Paper presented at the meeting of the American Evaluation Association, Honolulu.

Brandon, P. R. (1998, November). Setting program-performance standards: What program evaluators can learn from the testing and assessment literature. Paper presented at the meeting of the American Evaluation Association, Chicago.

Brandon, P. R., & Higa, T. F. (1998, April). Setting standards to use when judging program performance in stakeholder-assisted evaluations of small educational programs. Paper presented at the meeting of the American Educational Research Association, San Diego, CA.

Brandon, P. R. (1997, November). A contrast of the stakeholder-assisted and participatory-evaluation approaches, using Smith’s meta-model of evaluation practice. In N. L. Smith (Chair), Examining Evaluation Practice. Symposium conducted at the meeting of the American Evaluation Association, San Diego, CA.

Brandon, P. R., & Wang, Z. (1997, March). The soundness of evidence supporting the conclusions of needs assessments for site-managed programs. Paper presented at the meeting of the American Educational Research Association, Chicago.

Brandon, P. R. (1996, November). A description and critique of the use of the Toulmin argument model for metaevaluative purposes. Paper presented at the meeting of the American Evaluation Association, Atlanta.

Brandon, P. R., & Heck, R. H. (1995, April). The use of teacher expertise in site-managed educational needs assessments. Paper presented at the meeting of the American Educational Research Association, San Francisco.

Brandon, P. R. (1994, November). Involving program beneficiaries in reviews of evaluators’ recommendations for curriculum revisions. Paper presented at the meeting of the American Evaluation Association, Boston.

Brandon, P. R. (1993, November). Studying the implementation of a medical-school problem-based learning curriculum: Lessons learned about the component-evaluation approach. Paper presented at the meeting of the American Evaluation Association, Dallas.

Brandon, P. R., Wang, Z., & Heck, R. (1993, April). Shared decision-making in participatory needs assessment. Paper presented at the meeting of the American Educational Research Association, Atlanta.

Brandon, P. R., Lindberg, M. A., & Wang, Z. (1992, April). Faculty and student shared decision-making in a stakeholder-based evaluation of a medical-school problem-based learning curriculum. Paper presented at the meeting of the American Educational Research Association, San Francisco.

Brandon, P. R., & Newton, B. J. (1990, October). An approach for involving stakeholders in the development of plans for serving homeless children. Paper presented at the meeting of the American Evaluation Association, Washington, DC.

Heath, R. W., & Brandon, P. R. (1980, April). An alternative paradigm for the evaluation of educational programs. Paper presented at the meeting of the American Educational Research Association, Boston.


Affiliation

University of Hawai‘i at Mānoa
College of Education