contact me

(303) 886-5116
 

on a personal note...

I have lived and worked in Denver since 1990. In my free time, I like to be with friends and family, go for walks in my neighborhood, and see movies, live music, theater, and dance.

 

I also have several avocational passions and would especially like to do additional evaluation and facilitation work in these areas:

  • Education, racial justice, and respect for individuals in the learning environment

  • The built environment and community, including neighborhood change

  • Aging and end-of-life issues

  • Working with people who speak Spanish

  • The intersection of any of the above (and they do intersect!)

an interview with maggie

Q.

How did you get into program planning and evaluation?

A.

In 2001, I was a director of a program at a nonprofit organization and we got a grant from a foundation. The program officer asked, “What are you going to do about evaluation?” I thought we’d do a survey at the end; I didn’t know then that evaluation ideally starts at the planning stage.


The program officer said, “You need to put 15% of your budget toward evaluation and hire an outside evaluator.” I interviewed three evaluators. I chose the evaluator who said she was all about building capacity. She said, “I’m going to teach you how to do it and you’ll do all the work.” I found that intriguing!

Q.

How did program evaluation become important historically speaking?

A.

I’m learning new things about this right now because I’ve been doing a lot of reading and thinking about equity evaluation practices. I learned how evaluation came into the foundation world (and thus the nonprofit world) from the federal government, and thus had very specific and narrow ideas about what “truth” is, as well as what “knowing” is, and what “value” means. That paradigm also brought narrow ideas about evaluation approaches, methodology, and accountability.


Where evaluation came from and how it can function today are two different things. Today we can look at evaluation with a lot more flexibility, and it can be such a useful tool for learning about ourselves and about our programs! And of course, when necessary, it’s still a tool for accountability to stakeholders.

Q.

What is the most overlooked element of program planning and/or evaluation?

A.

The most overlooked aspect of program planning and evaluation is the idea that evaluation should be built into the program plan. When you’re planning the program you are, ideally, thinking about what you want to accomplish and what you want to measure.


Also overlooked: actually using the data that you’ve collected! I typically include a meeting at the end of my work with clients for no charge, because I want to be sure the client is positioned to use the data.


The place where nonprofits usually get stuck is when they have a 4-inch stack of surveys that never get entered, analyzed, or used. A remedy for that is working with them to create a system that isn’t too burdensome to get the job done. I help clients work with staff and volunteers to make that happen.

Q.

How does the evaluation work you do differ across sectors?

A.

Honestly, it doesn’t. I’ll tell you what all the work has in common no matter the sector. With all my clients, I meet them where they’re at in terms of their reasons for doing evaluation, their interest in evaluation, and their capacity and budget.


Also, no matter who my client is, I count on them to be the content expert in their field. I’ve learned a ton about museums and Jewish life and early childhood science learning; I will never know as much about these things as my clients. We end up being partners where I bring the knowledge about evaluation and they bring the knowledge about content.


Finally, I want to say that no matter the sector, we live in a multicultural world, and a foundational aspect of evaluating in any sector is cultural humility. I could go on about that… For now, I’ll say that sometimes I’m not the best person for the job. Other times I may be the pick for the job, but then I’m really clear about my own limitations, being the person I am with my particular lived experience, and I’m always sure to work with cultural navigators as part of the scope of the work.

Q.

Do different sectors have different needs when it comes to evaluation?

A.

Good question! Again, I first think about what they all have in common, which is data to help them make decisions. But thinking about different needs in different sectors, I’ll use K-12 public education as an example. I’m really interested in working with individual public schools who want data beyond “achievement” and test score data; where they want to hear the voices of teachers, students, and families to help them make decisions about their program.

Q.

What drives your interest in evaluation for K-12 education?

A.

I love education. With the recent political developments, I made it a priority in my life to focus on a specific area of activism, and I chose public education. So, in my vocational life it feels natural to work in the area of evaluation related to equity in K-12 education.


What can I do as an evaluator to support equity in evaluation? I need to have more conversations with current and prospective clients about this. But what I hope people will say is, “I am moving toward equity in the culture of my school and I want to elevate the voice of students and the voice of families.” As an evaluator, that’s my job: to ask questions of people, analyze what they say, and bring those stories to the people who will use the information to make programmatic decisions.

Q.

What is your wish for the education system when it comes to data analysis and program evaluation?

A.

We hear a lot about test scores. They have their place, although I think they are overused. What you don’t hear so much are the voices of the students, teachers, and families. I have several nonprofit clients who seek out feedback from these groups of people. I would like to work with individual public schools who want to hear more of those voices as well.

Q.

If an organization had to choose one step toward program evaluation this year, what should they prioritize?

A.

It depends what they are doing and how they want to use the evaluation. It’s always helpful to start with a logic model. People have different opinions of logic models. I am known for making logic models fun. That is my reputation. I own it. And it’s possible to make a logic model without ever using those two words in a sentence!

 

It’s really about articulating these two things: “What do we think we want to accomplish in our program?” and “How do we think we’ll accomplish it?” That’s the foundation for the program evaluation. After that, we can get curious about progress toward our intended outcomes.

Q.

What does it mean to be curious?

A.

Sometimes I think people just need to be given space and time to be curious. To be in conversation. To be able to take a step back and have somebody ask you about your program from an outside perspective.

Q.

What do you think is the future of evaluation for nonprofits?

A.

Some of it is driven by funding; we have to be honest about that. And each nonprofit I work with is on their own journey and in such a different place from others.


But big-picture-wise, there’s a move afoot in the foundation world toward equity in evaluation and that’s raising everyone’s awareness. It’s getting beyond the idea of, “How can I measure if an organization is moving toward equity/diversity/inclusion?” It’s looking at evaluation itself as a tool to leverage equity. I understand that this may not be the top priority for every client, but I love to think about that being in the future of evaluation for nonprofits.