Get to know: Elizabeth Bondi-Kelly

Her work breaks new ground in the application of artificial intelligence for social impact.
Elizabeth Bondi-Kelly

Elizabeth Bondi-Kelly joined CSE as an Assistant Professor in Fall 2023. Her research focuses on artificial intelligence (AI) for social impact, particularly the development and application of multi-agent systems and machine learning tools for conservation and public health. She leads the Realize Lab in hopes of realizing AI for positive social impact.

Bondi-Kelly completed her PhD in computer science at Harvard University and subsequently worked as a Postdoctoral Fellow at MIT through the CSAIL METEOR Fellowship.

Bondi-Kelly’s research has been widely published in distinguished venues, including the proceedings of top conferences such as the AAAI Conference on Artificial Intelligence, the International Joint Conference on Artificial Intelligence, and others. She also founded and leads Try AI, a 501(c)(3) nonprofit committed to increasing diversity, equity, inclusion, and belonging in the field of AI through community-building educational programs.

In a recent conversation with Bondi-Kelly, we learned more about her research interests and goals for her future work in CSE: 

What are the major research problems you are working to address in your research?

Working on AI systems in domains like conservation, education, and public health, I’ve found that at least two major research themes arise. First, once we introduce these systems in the real world, a great deal of uncertainty arises. What if we are building an algorithm to identify species in images of animals, and we find an image containing only the tip of an animal’s tail, or an image captured during a storm? Much of my work involves figuring out how to model and account for that uncertainty in these systems.

The second research theme is the human element of these systems. When we deploy these systems in the real world, we are working with humans, so we need to account for our human users and stakeholders throughout the design and development of these systems. And that could be at any stage in the pipeline, from understanding what challenges people are facing, all the way to trying to develop a model and iterate on a developed system.

What’s unique about your approach to tackling these problems?

One thing that distinguishes my research is that I try to seek out and engage stakeholders in conservation and health, including nonprofits and interdisciplinary collaborators. Our lab is currently working with a conservation nonprofit to better understand some of the challenges they face in identifying pollutants in the environment, as well as with clinicians working in reproductive health and antimicrobial resistance.

I’ve found collaborations like these to be very important in terms of understanding problems and then making sure that the AI solution we develop makes sense and is useful in the real world for these collaborators and beyond. I really strive to prioritize engagement with stakeholders in my work.

Societal impact is at the center of your work. What is a specific example of a way your work has impacted society at large?

During my PhD, I started a nonprofit called Try AI, where our goal is to broaden participation in the field of AI. To do this, we’ve organized multiple two-week micro-internships, where we match early undergraduate students with PhD student mentors working in AI. During those two weeks, the students conduct research on AI in society. Our hope is that these programs give students the opportunity to build a network and try AI research in an inviting, collaborative environment.

What are your future goals with regard to research?

I plan to continue working in the three application domains we’ve discussed so far: conservation, education, and public health. I’m looking forward to building relationships with stakeholders in each domain and developing those projects, as well as thinking through how to generally involve stakeholders throughout the design and development of AI solutions.

What’s most important to you as a mentor to graduate students?

Mentorship is very important to me, and it’s one of the reasons I wanted to become a faculty member rather than go into industry. What matters most to me as a mentor is supporting students and helping them reach their goals. For example, I try to help students identify opportunities and network with people in our field, and I work to foster a collaborative environment in our lab.

What are qualities you look for in the students you work with? 

I seek out, and am privileged to work with, students who are interested in realizing social impact with AI. It can take a great deal of effort to work with stakeholders and incorporate them throughout the full AI development pipeline, so I believe it’s important that our lab is committed to engaging in these collaborations.

When you’re not thinking about computer science, what else do you do?

My spouse and I have a little one-year-old, so we’re usually chasing her around and reading as much as possible as a family. We also love outdoor activities like hiking.