- Author: Carol Kochhar-Bryant
- Date: November 12, 2020
The previous blog in the Wayfinding series introduced the ‘logic model’ as a special kind of compass and roadmap…
The term ‘wayfinding’ historically refers to the techniques used by ancient peoples of Polynesia to explore the islands of the Pacific. More recently, the term has been applied to helping people navigate their communities, and fits neatly with the concept of human-centered mobility which places the user directly at the heart of design.
‘Wayfinding’ provides a useful metaphor for thinking about evaluation and performance measurement as a journey of dedication, discovery, and strategy. However, the success of the ancient seafaring explorers can be attributed only partly to technique and skill; the other part was the stamina born of a profound sense of mission – the needs and voices of the people.
I recently provided an online performance evaluation workshop for volunteer driver agencies and mobility managers who serve clients who need specialized transportation services (Maricopa Association of Governments, Phoenix, AZ). One key ‘take-away’ from the presentation was this: keep your performance evaluation client-centered, or people-centered – in other words, centered on the needs of the people for whom you are doing this work.
Mobility management represents a customer-focused approach to connecting people with transportation services so that seniors, low-income individuals, people with disabilities, and youth can participate fully in community life. The clients’/participants’ experiences and perspectives provide the true measure of the quality of services that mobility managers provide. This means that you give priority to the collection of data that focuses on your clients, participants, or riders — Who is participating? Is the number of clients increasing? Are the services working for them? What can we learn from their experiences? Their voices should take center stage. It is logical, then, that you would be interested in a performance evaluation that is ‘client-centered.’ In that case, what do you look for in an evaluator who claims to be ‘client-centered’? (And that means you as well, since client-centered evaluation necessarily requires that you engage in the evaluation, too.)
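To make questions like “Who is participating?” and “Is the number of clients increasing?” answerable, an agency needs only a simple tally of distinct clients served per period. The sketch below is illustrative, not from the blog post: the ride-log fields, client IDs, and quarterly grouping are all assumed for the example.

```python
from collections import defaultdict

# Hypothetical ride log: (client_id, quarter) pairs. The field names and
# quarterly grouping are assumptions for illustration only.
rides = [
    ("c1", "2020-Q1"), ("c2", "2020-Q1"), ("c1", "2020-Q1"),
    ("c1", "2020-Q2"), ("c2", "2020-Q2"), ("c3", "2020-Q2"),
]

def unique_clients_by_quarter(rides):
    """Count distinct clients served each quarter -- one way to track
    whether the number of clients is increasing over time."""
    clients = defaultdict(set)
    for client_id, quarter in rides:
        clients[quarter].add(client_id)
    return {q: len(ids) for q, ids in sorted(clients.items())}

print(unique_clients_by_quarter(rides))  # {'2020-Q1': 2, '2020-Q2': 3}
```

Counting distinct clients (rather than total trips) keeps the measure focused on people served, which is the client-centered emphasis the paragraph above describes.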
Many program evaluators use an accountability-focused approach aimed at improving a program as well as judging its merit and worth. In a client-focused approach (or ‘responsive’ evaluation), the evaluator is the ‘surrogate client’ and treats the client’s welfare as a primary justification for the program. The ‘responsive’ evaluator connects with the experience of personally being there, feeling the activities and the tensions, and getting to know the people and their values. The evaluator gets acquainted with the concerns of stakeholders by giving extra attention to program uniqueness and to the culture of the people (Stake, 2010). The evaluator does not analyze data and write reports without working to know the stakeholders and the clients they are determined to impact, and without developing a deep appreciation for the work and the challenges. He or she does not act as the program’s singular and final judge, but rather collects and reports on the opinions, perspectives, and judgments of a range of stakeholders.
The client-centered evaluator uses many information sources, both quantitative and qualitative, that can help communicate the complexity and innovative nature of programs – to tell their stories. Evaluation is governed by a set of principles that include the following:
Client-centered evaluation is relaxed, flexible, interactive, holistic, constructed together around all stakeholders’ concerns and issues, and service-oriented. What is key is that all methods used are developed with reference to the core stakeholder groups. The philosophy that guides such an evaluation – and evaluator – is that there can be many, and conflicting, interpretations of data and findings that are equally plausible (Stufflebeam, 2019). This is particularly important when people are implementing service innovations and novel solutions. During program innovation and solution testing, the evaluator must be attentive to collecting useful information that may not have been anticipated at the beginning. Authoritative, inflexible conclusions are not possible. The evaluator’s aim is to travel the journey with the stakeholders and, together, to continuously search for the right questions, to investigate and discover.
Community stakeholders and beneficiary groups should help shape the questions that drive an evaluation. This is particularly important because in programs that apply a person-centered or individualized planning approach, a key component is serving individuals in the most integrated settings appropriate to their needs. Integrated and inclusive settings are based on five guiding principles: community integration; individual choice; individual rights; optimizing autonomy; and choice regarding services and providers. The client-centered approach to program evaluation aligns well with person-centered planning principles.
Client-centered evaluation can serve many purposes, and so the questions and performance measures can vary. These purposes include the following:
Let’s look at these purposes alongside some examples of client-centered evaluation questions. These are a composite of mobility management programs in several states.
Though many mobility management strategies have been proven to be effective, too often successful programs are grant funded and disappear when grant funding expires. It is therefore essential that program leaders strategically use evaluation data and findings that have appeal to potential funders.
Client-centered evaluation requires that the people who are implementing, funding, and using the programs are heavily involved in interpreting and using the findings to improve their processes, understanding, and decisions for the future.
Bloom, M. (2010). Client-Centered Evaluation: Ethics for 21st Century Practitioners. Journal of Social Work Values and Ethics, Volume 7, Number 1. White Hat Communications.
Mitch, D., Claris, S., & Edge, D. (2016). Human-Centered Mobility: A New Approach to Designing and Improving Our Urban Transport Infrastructure. DOI: 10.1016/J.ENG.2016.01.030
Rahman, M.M., Deb, S., Strawderman, L., Smith, B., & Burch, R. (2019). Evaluation of transportation alternatives for aging population in the era of self-driving vehicles, IATSS Research, https://doi.org/10.1016/j.iatssr.2019.05.004
Spiegel, A.N., Bruning, R.H., & Giddings, L. (1999). Using Responsive Evaluation to Evaluate a Professional Conference. Educational Psychology Papers and Publications, 183. http://digitalcommons.unl.edu/edpsychpapers/183
Stake, R.E. (2010). Program Evaluation Particularly Responsive Evaluation. Journal of Multi-Disciplinary Evaluation, v. 7, n. 15, p. 180-201. Accessed 2/27/20 from http://journals.sfu.ca/jmde/index.php/jmde_1/article/view/303
Stufflebeam, D. (2019). New Directions for Evaluation, No. 89 (Spring 2001). Jossey-Bass.
Stufflebeam, D.L., & Madaus, G.F. (Eds.). (2000). Evaluation Models: Viewpoints on Educational and Human Services Evaluation. Kluwer Academic Publishers.
Carol Kochhar-Bryant is an Evaluation Consultant with the National Center for Mobility Management, and Professor Emeritus at the George Washington University, Washington D.C. She lives in Reston, Virginia.
Have more mobility news that we should be reading and sharing? Let us know! Reach out to Sage Kashner (firstname.lastname@example.org).