Podcast
Air Date:
June 12, 2025

#007: Is It Too Late to Opt Out? - Dr. Avriel Epps on Algorithmic Bias, AI Harms, and Parenting Through Surveillance

Dr. Avriel Epps

Dr. Avriel Epps is a computational social scientist, founder of AI4Abolition, and a leading voice on algorithmic justice. Her work sits at the intersection of data science, ethics, and social impact, with a focus on how AI systems reproduce bias—particularly in the criminal legal system, healthcare, and childhood development. Through public scholarship, organizing, and platform accountability, she advocates for community-led approaches to technology that prioritize equity, consent, and care. Dr. Epps holds a Ph.D. from Harvard and is known for her sharp critiques of surveillance tech, her refusal to normalize facial recognition, and her commitment to raising children free from algorithmic harm.

What do bail decisions, Spotify playlists, and your child's future have in common? They're all shaped by algorithms, often without your knowledge or consent. Dr. Avriel Epps, founder of AI4Abolition, joins Siara to reveal the real-world consequences of algorithmic bias in areas like criminal justice, healthcare, and childhood development. They also discuss what it means to practice digital consent as a parent—from opting out of facial recognition at airports to refusing to post photos of their child online.

Connect with Dr. Epps
EPISODE TRANSCRIPT

Dr. Avriel Epps (She/They) (00:10)
algorithms, computational systems in general, are not value neutral. They're not necessarily positive and they're not necessarily negative, but they're not neutral.

the analogy with like a fire, you know, fire is an amazing tool. It could be used to cook nourishing meals. And it could also be used to burn entire cities down.

Siara Singleton (00:32)
I've come across the facial recognition thing dozens of times at the airport and have still not mustered up the courage to say no, because I'm afraid I will look suspicious or something. I'm like, if I opt out, will they be like, oh, she has something to hide?

Dr. Avriel Epps (She/They) (00:46)
I will always be like, no, I don't wanna do that. Look at my face, look at my passport. It's not because I'm like, I don't want the government to have my face. The government already has my face. That's not the point.

What I am trying to do in that moment is to do that curl, that rep, exercising that muscle of saying, no, I don't want to consent to this. I feel like we all need to be in that practice.

There's a lot of predictive technology being used in the criminal legal system. A system called COMPAS is used to determine who stays in jail pretrial and who gets let out on bail. So Julia Angwin did a kind of large-scale analysis on the COMPAS system and found that it was significantly biased against Black folks. And what that meant was that Black defendants were more likely to stay detained pretrial compared to their white counterparts.

I think Black Mirror should do an entire season where you have to guess, after watching the whole episode, if this is a technology that already exists or doesn't. So half of the season should be stuff that's already here and is deeply dystopian, and half of it should be right on the edge. That would be so good.


Siara Singleton (01:54)
Hi, welcome back to the Log Out Podcast.

Siara Singleton (01:56)
I want you all to consider this question. What are you okay with algorithms deciding about your life? What are you okay with algorithms deciding about your neighbor's life? What about your child's life? If I were to do this exercise, for instance, I'd say, I'm okay with an AI suggesting what podcast I might like or using an algorithm to tell me the fastest route through traffic. I am...

not okay with a crappy AI determining who gets a job, or worse, who gets put into prison or let out on bail. I definitely don't remember voting for or signing off on that.

I think it's worth all of us considering, especially right now because decisions are being made quietly, rapidly, and often without our input.

I feel like I'm stating the obvious: whether or not we're okay with it, AI is influencing the trajectory of our lives, and it's going to continue to escalate. Some ways, sadly, are out of our hands, but in others, I believe, and experts believe, we absolutely have agency. We just need the...

We can make active decisions to have more control over our lives and mitigate harm to ourselves and our communities. I really believe that.

Reminder, I'm bullish on AI. I think it could solve a lot of societal issues, and it makes life easier. It makes my life easier.

But I'm not bullish on blind, unchecked, or irresponsible use of AI, especially when it disproportionately harms marginalized communities.

And it does. Did you know AI is oftentimes behind who gets what type of health insurance or whether or not your mortgage gets approved? Like critical functions.

You would think the data supporting these models can be trusted, but there is a lot of evidence that points otherwise.

In the very first episode of this podcast, I speak with an AI researcher from Princeton University about AI snake oil. If you've seen that episode, you know exactly what I'm talking about. These are the types of things that we're not seeing in the headlines about this technology, the types of things I'd imagine more people would like to be aware of. And you know, even if these don't alarm you, your privacy should. We're living in a world of systems that

constantly track us. They're looking at who we are, where we go, what we say, who we love, and it's being collected and sold and interpreted and used, sometimes without consent and almost always without recourse. My thing is, if it's not a problem, why does it need to happen quietly?

Today I'm speaking with Dr. Avriel Epps, a computational social scientist with a PhD from Harvard. Dr. Epps is the founder of AI4Abolition and a leading researcher on algorithmic bias.

Their work has shaped policies at Spotify and beyond.

Dr. Epps will shed light on how algorithms influence our lives at seemingly low stakes levels, like the music we stream on Spotify, but also the higher stakes ones, like who goes to prison or what type of insurance a person is eligible for. We'll talk about data integrity, the basics of algorithms, and how one might think about raising a child in this new world of technology.

We'll also get into opting out of facial recognition software at the airport.

They're a wildly fascinating person with an abundance of knowledge to share. So let's meet Dr. Epps.

Siara Singleton (05:05)
Dr. Avriel, welcome to the show. So happy to have you here.

Dr. Avriel Epps (She/They) (05:07)
Thank you so much. I'm so excited to be here. It's a pleasure.

Siara Singleton (05:10)
So just to jump in, I think that something that I found myself Google searching when I was looking into your work was your title, one of your many titles, which is Computational Social Scientist. Can you explain to those of us who are laymen exactly what that is? Because I thought I knew what it was when I read it, and then I looked into it, and I thought, that is way cooler than what I thought. So please enlighten us.

Dr. Avriel Epps (She/They) (05:35)
Well, I

think there's still some debate in the field about what a computational social scientist is. We're still formalizing that as a community, but for me, it means I'm trained in social science disciplines, developmental science, psychology, and I use computational methods to answer questions that are social science questions. So, machine learning methods, often. I

do a lot of research that requires me to scrape large amounts of information off the web. Specifically, when we talk about some other research projects that I did with Spotify, for example, a little bit later, I was using really large data sets, which require a lot of computational power that is different than traditional social science methods.

And then I guess the flip side of that is: computational social scientists are not always necessarily interested in questions that have to do with computation, but I am definitely interested in how computers impact society, specifically how artificial intelligence and predictive technologies influence society. And so

there's kind of a double meaning there that I care about computation as a topic of research, but I also use computational methods in my research.

Siara Singleton (06:47)
Thank you.

Okay, so just to clarify for the audience: a social scientist might use surveys or interviews to come to certain conclusions in their research, but a computational social scientist, that field couldn't really have existed before mass data and big data were available. Is that correct?

Dr. Avriel Epps (She/They) (07:16)
Correct, yeah, this is a very new form of social science for sure.

Siara Singleton (07:21)
Yeah, I mean, it's kind of cool to know. Sometimes it's scary to think about all of the data that's out there, but it's nice to know that there are professionals doing something productive with it, so I think that's positive. So what drew you specifically to this?

Dr. Avriel Epps (She/They) (07:33)
that's a great question.

I really became interested in social science research when I was an undergraduate at UCLA. And for me, research was an opportunity to do me-search, to really try to make better sense of what my personal experiences were up until that point. So at that time I had been in the music industry and the entertainment industry really my entire life. And

I had just kind of been given a formal education about how systemic biases manifest in the entertainment industry. And I was having a slightly different experience online as part of the first generation of artists who really grew up on the internet, you know, sharing their music on Myspace, using YouTube to gain an audience.

And I had a lot of questions about how these new tools were changing the opportunities for artists like myself, especially those who were impacted by systemic biases, to express themselves. And so that kind of snowballed into larger questions about how does the internet shape identity development for young people, not just young artists?

Eventually the question became what do the underlying technological structures of the internet have to do with how we see ourselves and our place in the world and how we relate to other people given all of our unique identities and positionality in the world.

Siara Singleton (09:04)
So, when you're studying this work... because it seems sometimes like it's all very much an Oz-like system, where there's a lot of data happening in the background, but we as the consumer don't really know what's going on unless we really look into it. And even then, you know, companies can be kind of secretive about what's happening there. So I'm curious how access to that data becomes available to researchers like yourself. And then, do you feel like you have even more data to work with now that

AI is opening itself up to consumers as such a more advanced method of collecting data on consumers? I mean, I'm curious how it all kind of works together.

Dr. Avriel Epps (She/They) (09:42)
That's

a really excellent question. I mean, data, even though it's plentiful, as a researcher, especially someone who's auditing these systems, it's not easy to come by. You know, folks take different approaches. I have been on the inside of organizations, having access to all the data that an organization, or a company, has access to on its users.

And I've also done research from the outside, having to audit a platform by looking at how it's performing on the user side. Obviously, it's a lot easier to do this kind of work inside of companies, but then whether or not your work sees the light of day depends on whether the conclusions you come to are in the best interest of the company or not. You know, we've seen how Frances Haugen,


Dr. Avriel Epps (She/They) (10:28)
with the Facebook Files, has really shown us how much of the internal research gets squashed. You know, Timnit Gebru is a former AI researcher at Google who now runs the Distributed AI Research Institute. She was essentially fired because she wanted to publish research that wasn't shining the best light on Google. So,


Dr. Avriel Epps (She/They) (10:51)
there are significant limitations even when you do have full access to all the data that these companies are collecting on us. Most of my work now is like, how do I get access to that data? And it's always imperfect data, right? At best, individual users are theoretically able to access all of the data that has been collected on them through

laws that were put in place in California through the California Consumer Privacy Act. So you, Siara, could go log into your TikTok or Instagram and request to download all of the data that a company has collected on you. And then you might choose to share that data with researchers or donate it to science. And if we did that at scale, we could theoretically have access to some

Siara Singleton (11:24)
Okay.

Dr. Avriel Epps (She/They) (11:44)
of the data that the companies have access to. But short of that, we have to kind of simulate user experiences in some ways that may or may not be in alignment with terms and conditions of some of these platforms to try to just get an understanding of how they're working. And yeah, and I think there's some other interesting research methods that folks have used to try to gather data, especially around like how people interact with chatbots, for example.

Siara Singleton (12:00)
I see.


Dr. Avriel Epps (She/They) (12:10)
just observing people as they use the platforms. But even that is hard to do at the scale at which the companies are able to collect data.

Siara Singleton (12:22)
I have a side question that might seem very random, but I think about companies like Bluesky often, just because obviously it's an extremely different approach to infrastructure for social media that seems more open. And I don't know if that's how it necessarily works, but could a platform like that, if it really becomes widely adopted, and it seems like it's going in that direction, would that be more helpful for computational social scientists in terms of

getting data that is helpful.

Dr. Avriel Epps (She/They) (12:49)
Potentially, yeah. But, you know, Bluesky is such a new platform, a new-ish platform compared to the others, and is also not under the same kind of, what's the right word? It's not under the same microscope that the other platforms are under, because it still has pretty niche adoption in comparison to, say, something like Google or YouTube.

Siara Singleton (13:11)
right.

Dr. Avriel Epps (She/They) (13:12)
But yeah, I'm very interested in how folks are beginning to use Bluesky data, and the hope is that it will be a little bit more transparent, but I haven't seen enough examples of that yet to be able to really talk about it.

Siara Singleton (13:27)
Yeah, it's very early. You mentioned being inside of companies to do your work. What can you tell us about the work that you did at Spotify? Can you tell us what you discovered there and what that was like?

Dr. Avriel Epps (She/They) (13:39)
Yeah,

so when I was at Spotify, it was really born out of this research that I said I was doing as an undergrad, trying to understand opportunities for musicians like myself online. And when I got there, I was again interested in these questions around algorithms, algorithmic bias, and Spotify had been dealing

with trying to understand if there were biases in its recommendation systems. This was back in 2017 or so, and there were a few years that they had been working on this problem, really trying to understand, like, okay, do we recommend more male artists? Are we limiting opportunities for female artists to reach audiences? What about artists that are like massive superstars versus artists that are a little bit smaller or like the middle-class artists like...

are we limiting opportunities for people across different statuses? And so I was really curious about that. And one of the questions we asked was what is happening on the user side? So are users being recommended fewer non-male or female or gender non-conforming or mixed artists?

compared to male artists. How does that align with the actual supply of artists on the platform? Because you have to understand that part of the reasons why algorithmic biases exist is because there are these long historical biases that are then reflected in the algorithmic system. So we know that the music industry for its 100-plus year run has historically

not been a welcoming place for women and gender non-conforming artists. So how is that manifesting in the way that the algorithms work? And then the other question we had was, depending on the proportion of male to non-male artists that are being recommended to people, does that impact their downstream listening? So like,

if you're recommended more male artists, are you more likely in the future to listen to more male artists? Or if we switch that up a little bit and you're recommended more female artists, are you more likely to change your listening downstream? And so, you know, we found the obvious, which is that there was some bias in the recommendation system, but it was mostly in alignment with the actual supply of female artists on the platform. And so there's kind of a double-edged issue there.

And it's also a chicken and the egg problem too, right? Cause like, you know, we need to increase the number of female artists that are on the platform. But if female artists aren't getting the same kind of like support and recognition on the platform, then like it's hard to support them in those ways, right? And then we also found that there is, there are some effects for downstream listening.

in the directions that you might assume. And those vary a little bit based on the listener's or the user's gender and age as well. So, if I remember correctly, and we'll have to go back and look at this study because I don't have it off the top of my mind, it was published many years ago, listeners that were in their late teens or early adulthood were the most

likely to have those downstream effects. So yeah, it kind of pushed that science around gender biases and recommendations a little bit further. And you know, I really liked working with Spotify and I'm proud of the team over there in a lot of ways, at least the research team over there, because they were taking these issues really seriously and they really wanted to understand

What is the role of the recommender in this? And then also what is the role of the human beings in the entire kind of music ecosystem as well?
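(For a concrete sense of what an audit like this can involve, here is a minimal, hypothetical sketch in Python of the comparison Dr. Epps describes: the share of non-male artists among users' recommendations versus their share of the overall catalog, the "supply." The data, column names, and numbers below are invented for illustration; this is not Spotify's actual data or methodology.)

```python
import pandas as pd

# Hypothetical data: one row per recommended track impression.
# Values and column names are made up purely for illustration.
recs = pd.DataFrame({
    "user_id":       [1, 1, 2, 2, 3, 3, 3],
    "artist_gender": ["male", "male", "female", "male", "male", "nonbinary", "male"],
})

# Hypothetical catalog ("supply"): one row per artist on the platform.
catalog = pd.DataFrame({
    "artist_id":     range(10),
    "artist_gender": ["male"] * 7 + ["female"] * 2 + ["nonbinary"],
})

# Share of non-male artists among recommendations vs. in the catalog.
rec_nonmale = (recs["artist_gender"] != "male").mean()
supply_nonmale = (catalog["artist_gender"] != "male").mean()

print(f"non-male share of recommendations: {rec_nonmale:.2%}")
print(f"non-male share of catalog (supply): {supply_nonmale:.2%}")
print(f"representation gap: {rec_nonmale - supply_nonmale:+.2%}")
```

(A fuller audit, as described in the conversation, would also break this down by genre and follow users' downstream listening over time.)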

Siara Singleton (17:16)
That's so fascinating because...

As I kind of explore this world, I find fewer and fewer things that I realize have not been influenced by algorithms. I grew up on Spotify; I've been on Spotify for, I feel like, over 10 years. And I'll go back and look at my, you know, high school playlists, and now I'm wondering how much of the algorithm was kind of pushing me towards certain genres and whatnot, because I'm all over the place genre-wise. So yeah, that's super fascinating. Are there any other cool stories from

companies and things that you discovered that you are able to share.

Dr. Avriel Epps (She/They) (17:50)
Yeah, well you

brought up genre, and that's another thing that we touched on in that paper, which is that gender biases in recommendation are also genre-specific. You know, pop actually doesn't really have as big of a problem around gender representation and the way the algorithms recommend music to folks as much as, like, heavy metal does, which intuitively makes a lot of sense.

And so, you know, for me, that makes me think about what happens on other platforms, which are like these kind of gateways toward bias or like these pathways for bias, where, you know, if you're a heavy, heavy metal listener, you're probably not getting exposed to the themes that like women might write about in heavy metal music. And like

How is that reinforcing your pre-existing belief system, your cultural practices, etc. in real life? I think there's an analogy there with YouTube and how folks who might be interested in pretty benign content like around cars or fitness or whatever end up getting kind of funneled into or not hearing the perspectives of a huge proportion of the population.

and maybe end up in a Manosphere space on the internet. And so what is that doing to their pre-existing beliefs? And the larger question then for me becomes, what about the kids? So the people who have been exposed to much, much less in their life because their lives have been much shorter. I think about the 10-year-old who's interested in sports cars because, I don't know, they just


Dr. Avriel Epps (She/They) (19:27)
got a little sports car for a toy one day, and they're on YouTube now looking at sports car content. Where might they be led, content-wise, and how might that impact how they think about themselves and their beliefs and identity?

Siara Singleton (19:40)
I think you might be the perfect person to do this: give the layman, the non-technical person, a formal definition of exactly what an algorithm is. Because I think it's something that we all kind of talk about, but I feel like a clearer definition would help us all conceptualize it when we talk about it.

Dr. Avriel Epps (She/They) (19:58)
Absolutely.

So in its simplest terms, an algorithm is just a set of instructions for a computer to follow. It is like, if this happens, then do this, and then do that. I think of it almost like a recipe, or like for a different analogy, like my kid loves to build Legos, and they come with those like booklets with the instructions. You could think of that as an algorithm.

Siara Singleton (20:19)
get out.

Dr. Avriel Epps (She/They) (20:20)
Algorithms are used in computer science in a lot of different ways that have nothing to do with prediction or machine learning or artificial intelligence. But when we're typically as a society talking about algorithms these days, we're talking about algorithmic systems that perform machine learning. And so in that case, when it's machine learning, we are taking an input, which is like data about you.

Siara, and we're feeding it into a list of instructions for how to analyze that data, make sense of it mathematically, and then make a prediction, which is an output based on what it thinks that data might look like in the future without seeing the future. And so just like a recipe, you've got ingredients, the data; you've got the list of things that you need to do, the algorithm; and then you get the


Dr. Avriel Epps (She/They) (21:16)
the output, the prediction, which is like the cake that you have baked. And that's how I like to think about what an algorithm is in those different contexts.
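(To make the recipe analogy concrete, here is a tiny, illustrative sketch in Python: the "ingredients" are data about past listeners, the "recipe" is a model that learns a pattern from them, and the "cake" is a prediction about a new listener. The data and the prediction task are made up for illustration and are not any real platform's system.)

```python
from sklearn.linear_model import LogisticRegression

# "Ingredients": toy data about past listeners (hours listened, songs skipped)
# and whether they kept their subscription. Entirely made up for illustration.
X = [[10, 2], [1, 30], [8, 5], [0, 40], [12, 1], [2, 25]]
y = [1, 0, 1, 0, 1, 0]   # 1 = kept subscription, 0 = cancelled

# "Recipe": a set of instructions that learns a pattern from the data.
model = LogisticRegression()
model.fit(X, y)

# "Cake": a prediction about a new listener the model has never seen.
new_listener = [[9, 3]]                   # 9 hours listened, 3 songs skipped
print(model.predict(new_listener))        # predicted class
print(model.predict_proba(new_listener))  # predicted probability
```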

Siara Singleton (21:25)
The recipe metaphor is perfect, because you can obviously get a lot of different results based on the ingredients that you're given, or, I'm sorry, the recipe that you're given, which leads me to my next question. I think algorithms have, for some, been marketed as objective or neutral, but what's your take on that? How true is that really?

Dr. Avriel Epps (She/They) (21:44)
Yeah,

algorithms, computational systems in general, are not value neutral. They're not necessarily positive and they're not necessarily negative, but they're not neutral. And what I mean by that is that, you know, they are tools that can be used towards whatever end that you care about. Obviously people talk about

you know, the analogy with fire. Yeah, let's go with fire. You know, fire is an amazing tool. It could be used to cook nourishing meals, if we're going to keep going down this recipe analogy. And it could also be used to burn entire cities down. And so we can think about the technology in that way too.

The thing about technology, though, is that

so much of it is predicated on what the society has looked like up until that point. And so it's not the same as fire, where every time you start a new fire, it is just fire, and the way you use it in that moment is what determines whether or not it's good or bad. Every time you build a machine learning algorithm, it's not just the machine learning algorithm. It's the machine learning algorithm plus all of

the 100 years worth of data that you've collected or 30 years worth of data that you've collected in the past. And so that is actually what determines in a large part whether or not it has a positive or a negative impact on society. And then of course, what you use it for also really, really matters, right? So you can take facial recognition technology and it has all the biases baked into it because

let's just assume that there are fewer people with dark skin who are represented in facial recognition datasets. And so that presents a series of issues, right? Does it work as well for different sets of people? But then also how you use it matters, right? So like facial recognition technology becomes increasingly bad if you're using it for surveillance, to criminalize people or to surveil protesters.

versus facial recognition technology that's used for something more benign, like, you know, to help educators know whether or not a student has shown up for school that day and whether they need to offer more support to the parents to help get the kid to school. So yeah, it's never objective. It always has the decisions

of decision makers of the past baked into it, and also the decisions and the values of the people who are coding it in that moment as well. Yeah, I hope that's clear.

Siara Singleton (24:23)
Yeah,

no, that's very clear. I think it explains perfectly how

bias happens. And I think algorithmic bias has been coming up a lot more often. And

I think that there's a lot more instances of the bias happening than people realize.

Dr. Avriel Epps (She/They) (24:39)
Yeah, absolutely. I think a lot of people don't realize how much machine learning is embedded into our healthcare system. I think that's one of the most invisibilized ways that AI has a direct impact on people's lives and life outcomes that we really don't see. So everything from, you know, insurance claims being denied or accepted,

which is done by machine learning in a lot of cases for a lot of insurance companies, to diagnostic algorithms. Like, let's run the results from this person's labs to decide whether or not they're at a higher risk or a lower risk for kidney disease. And, you know, those systems are biased in a lot of ways. I mean, in the medical system, it's not a surprise, or it shouldn't be a surprise to many of your listeners, that there are

lots of racial and gender biases in medicine. Just think about the decades and decades of medical research that haven't even included female subjects, or haven't looked at questions that have to do with the female body or intersex bodies. And the same goes for all of the racial stereotypes that proliferate throughout medical research and medical practice.

All of those things get baked into these algorithms. And so if you're using them to diagnose folks, to treat them, to help come up with treatment plans, increasingly folks are interfacing with chatbots, delivering some form of administrative care or like actual healthcare. And then also the insurance company that decides whether or not you're even allowed to get treated or have care.

and that obviously has myriad financial consequences for many, many people. That significantly impacts your life. And if you're a person who's been historically marginalized in the medical system, but you don't understand how those biases are baked into these processes and are being reinforced or sometimes even exacerbated, that puts you at significant risk.

Siara Singleton (26:40)
Okay, super

back-to-the-basics question, because I think someone might ask, who is deciding what data sets make it into an algorithm? Because you would hope most of us learned about statistical significance in school. You would hope that's worked into it, but then I guess, significance of what? Like, can you look at a study and say, okay, I'm getting the data set from this study, but we know that this study did not do adequate research on women, or adequate research on people with this type of health condition, or, like you said, gender identity or assignment? So how are those data sets decided on when they're incorporated into an algorithm? Because some algorithms are kind of whatever, you're recommending something for Netflix maybe, but then some are very serious and have serious consequences. So are there different levels of

carefulness when it comes to actually designing these algorithms?

Dr. Avriel Epps (She/They) (27:34)
The short

answer to the question is yes. Not everybody in, like, medical science, or anything that's adjacent to medical decision making in a research setting, is trained in a critical understanding of the world and our society. Like, it's pretty rare actually for PhD researchers

to get that kind of training that allows them to ask those kinds of questions. Like who's being included, who's not?

Siara Singleton (27:57)
Let's go.

Dr. Avriel Epps (She/They) (28:04)
What factors are playing into this or not? There are a lot of assumptions, I think, made in these projects

that because something has been used for a long time, or because a data set has been used for a long time, therefore it is a good data set. And quite frankly, a lot of people don't necessarily care about minorities in data sets. And then I think the other thing is that, you know, there is a not insignificant population of researchers

Siara Singleton (28:29)
Yeah.

Dr. Avriel Epps (She/They) (28:39)
across a variety of fields that use machine learning who simply don't have the cultural competency that folks who are trained in critical methods, or who have lived experience being a marginalized person in this particular society, have, to be able to ask those questions well and find the right answers. I mean, the problem really boils down to the fact that

Black and Indigenous people, and women in general, are not well represented in the academy, especially in these highly technical fields, and that's a huge part of the problem.

Siara Singleton (29:20)
What cases of algorithmic bias concern you the most?

Dr. Avriel Epps (She/They) (29:23)
I am most concerned by...

biases in algorithms in the criminal justice system, in the criminal legal system, and algorithmic systems that apply the same logic, the same carceral, punishment logic of the U.S. criminal legal system, that disproportionately harms Black and Brown communities in this country. And that's everything from facial recognition technology that's used by law enforcement, to sentencing and bail-setting algorithms used in the court system, to parole algorithms that are used. And

I couldn't even speak to the massive amounts of surveillance that various law enforcement agencies across the country and at the national level use, which deeply concern me, just in their ability to detect and identify different groups of people accurately. And I don't think in those settings they're necessarily concerned with that. I don't think that's the motivation, if that makes sense.

Siara Singleton (30:17)
I don't think so either.

Dr. Avriel Epps (She/They) (30:34)
Because it's never...

that's never been the motivation in that setting. Like, you know, there's nothing in history that we can point to where, yeah, they really care about getting the right person. It's like, no, anybody who mildly fits the description, which is usually Black or brown, could get locked up. So anyway, at that point, it feels like putting this computational veneer on a deeply unjust system to be able to

Siara Singleton (30:38)
Yeah.

Dr. Avriel Epps (She/They) (31:03)
lie to people and say, now it's objective, because everyone's going to assume it's a computer, so of course it's objective, it's just math. But in reality, that's not what's happening.

Siara Singleton (31:07)

Yeah, absolutely. And I spoke with someone about predictive AI recently that really opened my eyes as someone who was almost actually hopeful about it, but for different use cases. And so when I heard the possibility of predictive AI being used in the case of

incarceration, I became deeply concerned. And I actually don't know the answer to the question, is predictive AI actually used in our systems today? Let's keep it to the US for now,

Dr. Avriel Epps (She/They) (31:43)
Yeah,

There's a lot of examples of predictive technology being used in the criminal legal system. The one that I talk about the most in my work is the system called COMPAS, which is used to determine, for pretrial decisions, who stays in jail before trial and who gets let out on bail. So Julia Angwin, who was with ProPublica at the time, did a kind of large-scale analysis on the COMPAS system


Dr. Avriel Epps (She/They) (32:16)
and found that it was significantly biased against Black folks. And what that meant was that Black people, or Black defendants, were more likely to stay detained pretrial compared to their white counterparts.

And so that's just a very clear example. And that system is still being used in multiple states. And despite...

all of the research and the investigative journalism and the, you know, bad press, Julia's work was done almost 10 years ago, and there have not been significant attempts to try to fix or correct for some of those biases. And, you know, I'm always really shocked that, even though this research came out 10 years ago and it's been such an important inspiration

for my work, so many people still don't know about COMPAS. And to me, it just feels like such an evil piece

Siara Singleton (33:04)
Yes.

Dr. Avriel Epps (She/They) (33:09)
of technology. It's the stuff that people make sci-fi thrillers about. You know, millions of people would go see a box office movie about this technology if that movie existed. So, like, why doesn't everyone know about this? And why isn't everyone really upset about it?
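(For context on what "biased against Black defendants" looks like in practice: audits like ProPublica's compare error rates across groups, for example, how often people who did not go on to reoffend were nonetheless labeled high risk. Here is a minimal, illustrative sketch of that kind of group-wise comparison in Python, using made-up numbers rather than the actual COMPAS data.)

```python
import pandas as pd

# Made-up example data, not the real COMPAS dataset:
# each row is a defendant, the tool's risk label, whether they actually
# reoffended later, and their racial group.
df = pd.DataFrame({
    "group":      ["Black", "Black", "Black", "white", "white", "white"],
    "high_risk":  [1, 1, 0, 0, 1, 0],   # 1 = labeled high risk by the tool
    "reoffended": [0, 1, 0, 0, 1, 0],   # 1 = actually reoffended later
})

# False positive rate per group: among people who did NOT reoffend,
# what fraction were still labeled high risk?
did_not_reoffend = df[df["reoffended"] == 0]
fpr = did_not_reoffend.groupby("group")["high_risk"].mean()
print(fpr)
```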

Siara Singleton (33:25)
Oh my gosh, I'm proposing that Black Mirror dedicate an episode to COMPAS and then at the end just say, like, this is reality or something.

Dr. Avriel Epps (She/They) (33:32)
Right,

Siara Singleton (33:33)
That's alright. Like, this is possible.

Dr. Avriel Epps (She/They) (33:33)
You know what? I think Black Mirror

actually needs to do a whole season just on like existing fucked up uses of technology and then like or maybe like half the season. Sorry, I shouldn't have cussed on your podcast. Maybe half this. OK, cool.

Siara Singleton (33:48)
Oh, hi! Welcome. You're welcome to cuss.

Dr. Avriel Epps (She/They) (33:52)
Yeah, I think

Black Mirror should do an entire season where you have to guess after watching the whole episode if this is a technology that already exists or doesn't. And it's like a little game that you get to play with every episode, so half of the season should be stuff that's already here and is deeply dystopian and half of it should be like right on the edge, you don't know. That would be so good.


Dr. Avriel Epps (She/They) (34:13)
Somebody propose that to them.

Siara Singleton (34:13)
Yeah, I mean, it's already happened where I'm like, well, I remember this happening in season one of Black Mirror and now I'm witnessing it with my own eyes. So it's just not very far from reality a lot of the time.

Dr. Avriel Epps (She/They) (34:26)
No, exactly. That's

the uncomfortable part of watching that show for me.

Siara Singleton (34:30)
Season three is waiting in the queue. When I'm emotionally ready, I'll get into it.

Dr. Avriel Epps (She/They) (34:36)
Totally.

Also, like, well, I, the only piece of technology on Black Mirror that I actually want to exist in real life is the, did you watch San Junipero? That's the like lesbian love story, but they're like in the afterlife, their consciousness has been uploaded to the computer and they get to spend eternity together and they're like debating whether or not they want


Dr. Avriel Epps (She/They) (34:56)
I've watched that episode like five times. It's my favorite. And I would, I would do that.

Siara Singleton (35:00)
I need to re-watch that one.

Dr. Avriel Epps (She/They) (35:03)
Yeah, you should.

Siara Singleton (35:04)
So my next question was that, and I think the facial recognition piece relates to this: we sometimes unknowingly opt into these systems. Or, I guess, are we really opting in if we didn't have a choice? We're pulled into them without ever having given explicit consent.

But for the ones that we do choose, are there red flags that you think consumers should watch out for when they, you know, maybe sign up for a service or even enter a space? What do you think about what people should look out for before opting into something new? It could be an app, but really these days it could be anything.

Dr. Avriel Epps (She/They) (35:41)
I'm not going to answer it directly, but I'll do my own version.

Instead of trying to evaluate every single piece of technology that you interact with, whether or not you should sign up for it or not, we should be in the practice of exercising the muscle of revoking our consent or not giving our consent. So when there are opportunities to do that, we should practice it, period.

And there doesn't necessarily need to be a rhyme or reason to it, because often folks will feel like, well, I already gave my data over to X, Y, and Z company. What does it matter if, you know, A, B, and C company also has my data? Which is honestly kind of true, right? Like, any data that we give to any company, for the most part, is probably going to be sold to some kind of third-party data broker that can be bought by anybody else who wants it. So when I talk to folks about,

Siara Singleton (36:30)
Mm-hmm.

Dr. Avriel Epps (She/They) (36:33)
for instance,

opting out of facial recognition technology used at the airport. You walk up to the little TSA agent, they tell you to look in the camera, and I will always be like, no, I don't wanna do that. Look at my face, look at my passport. It's not because I'm like, I don't want the government to have my face. The government already has my face. That's not the point. Like, Clearview AI has already scraped all of our faces off of every social media platform.

Siara Singleton (36:58)
Clearview AI?

Dr. Avriel Epps (She/They) (36:59)
Yeah,

Clearview AI is like a facial recognition AI company.

Siara Singleton (37:03)
Okay. Is that the CLEAR, like in the airport, the CLEAR? Okay.

Dr. Avriel Epps (She/They) (37:06)
No, that's a different company. That's a different company.

But the point being that the DMV has a database of all of our faces in it. I'm not concerned about the government having my face. What I am trying to do in that moment, though, is to do that curl, that rep, exercising that muscle of saying, no, I don't want to consent to this. I feel like we all need to be in that practice whenever we're

we remember to do it. It's like, I don't know, doing your Kegels, or remembering to do your stretches or whatever. Like, when you have the opportunity, you are reminded: I could either opt into this thing or opt out. Sometimes choose to opt out, because it's a good muscle to have. That, I think, is... if enough people begin to do that, if we are all in that practice,

Siara Singleton (37:47)
Mm.

Dr. Avriel Epps (She/They) (37:57)
Then the next step in the movement, the data sovereignty, the data rights movement, is being like, hmm, we can't actually meaningfully opt out of any of these things. That's a problem. Let's organize ourselves in our communities to try to figure out how to solve that problem. Because

our individually being able to opt in or out really doesn't make a difference. But we need to be in the practice of thinking critically about these things and then eventually be in the practice of organizing around these issues, because they require systemic solutions, not individualized solutions.

Siara Singleton (38:34)
Yeah, the facial recognition at the airport, I think, is more than a bicep curl, but I will give you points, because I like to, like you say, exercise opting out when I can. It's a little easier to do online than it is in person. I've now come across the facial recognition thing dozens of times at the airport and have still not mustered up the courage to say no, because I'm afraid I will look suspicious or something. I'm like, if I opt out, will they be like, she has

something to hide? So I'm using this conversation as the bravery I need to do it next time, because I agree with you.

It's more so the sentiment of, you know, just exercising that muscle.

I am also wondering about a recent metaphor that you shared. You compared technology to the Amazon rainforest. And I learned something about the Amazon rainforest, but I'll let you share it. Can you restate it for the audience here and explain what you mean?

Dr. Avriel Epps (She/They) (39:24)
Yeah, absolutely. I was thinking about how to explain technology to a group of middle schoolers, which I often do. I'm obviously often called in to talk to little kids and teenagers about these issues. And, you know, I am not a total techno-pessimist.

I think there are many people who are, and that's totally fine. I think that's a valid viewpoint to have, that all technology is bad and we need to just, you know, go back to a simpler time or something like that. It feels a little MAGA-esque to me to take that position, but it's fine. It's fine. Anyway, but I also know that little kids, middle schoolers, don't understand, don't even know


Dr. Avriel Epps (She/They) (40:08)
what a world without technology looks like. All they know is a world with technology. So if you come in guns blazing saying like, this is so bad for you, da da da da, like they're just not gonna take you seriously. How could they even imagine what it would be like to take away their Minecraft or whatever? so I was trying to think about how do I hold that balance?

And for me, the Amazon rainforest was like such a perfect metaphor because one, the Amazon rainforest is manmade. It's not some like natural, you know, it's a world wonder of course, but it didn't occur naturally. Human beings made the Amazon rainforest through like super sophisticated advanced agroforestry practices that we could all actually learn something from in this climate crisis that we're in. And,

But you know, the Amazon has like lots of scary animals in it. It's got poisonous mushrooms in it. It's got a lot of scary stuff, right? You know, you could make the metaphor be that like the poisonous mushrooms that you don't want to eat that look similar to the mushrooms that might be tasty are like the social media algorithms or something that could cause you to become addicted to social media or encourage you to harm yourself in some way.

And at the same time, there's medicinal food, there's yummy fruits that you can climb up a tree and eat, there's places to be at rest in the Amazon rainforest and just appreciate the nature. And I think all of those things also exist on the internet. And so, for me, it feels like a really powerful metaphor. And also it gives me an opportunity to teach kids, like,

not just about technology, but also some fun facts that they might not know about our natural world. Which, to me, is also deeply important: in this work that I do around technology and the, like, unnatural world, I'm always trying to stay connected to and rooted in the real, carbon-life-based world as well, just for my own sanity.

Siara Singleton (42:05)
And speaking of teaching kids about AI, you just released your newest book. I have it right here. I read through this a few times, and I have to say, I would like to give this to many adults.

I think this is the basic information that a lot of adults don't have, so claps to you on that. But when I was reading it, I was thinking, this is low-key a sneaky, great way to teach adults about AI bias. If they have to read it to their children, you could just slap "AI Bias for Dummies" on the cover and it would still be really useful. I mean that. I'm wondering, what was your inspiration for the book?

Dr. Avriel Epps (She/They) (42:26)
Thank you.

Siara Singleton (42:45)
What is it like teaching kids about AI? Are they picking it up rather quickly since this is already their world?

Dr. Avriel Epps (She/They) (42:51)
Yeah, you hit the nail on the head.

That was my inspiration for writing the book. I am such a fan of so many of the books that have been written on technology, specifically AI harms, by authors like Safiya Noble, Ruha Benjamin, Kate Crawford, Joy Buolamwini. All of those books are so amazing. I've learned so much from those women. And I also know, as a parent of a young child myself,


Dr. Avriel Epps (She/They) (43:19)
that that's not accessible for a lot of people. That kind of college-level text, for folks who have already left college and are not coming back to a space where they can engage with these really dense and important texts, and folks who may not ever make it to college, like, we need an intervention for those folks too. So that was my inspiration. And yes, it's a total bonus that

elementary school kids, middle schoolers, and I think arguably even high schoolers can get a primer on these topics that they're all really interested in and are thinking about and hearing about every single day. It was also really important to build that kind of critical AI literacy at a young age, not just "AI is great, AI is fun," which is a lot of the messaging that they're hearing. Or,

increasingly, I'm noticing, because I consume a lot of media with my kid. I don't really let him consume alone; he doesn't get to watch Netflix by himself, really, when he's with me. So we'll sit and I'll watch the movies. And so many of them are about machines gone awry, robots who are going to, like, take over. You know, what was the last one we were watching? I think it was the Plankton movie, the SpongeBob SquarePants Plankton movie,

Siara Singleton (44:30)
Okay.

Dr. Avriel Epps (She/They) (44:35)
about

how his robot wife goes, like gets really upset and turns into a super monster and like destroys Bikini Bottom. The other, the most recent Chicken Run movie is about an AI robot that like multiplies itself and builds other robots and then tries to like kill Wallace and Gromit. No, it's not Chicken Run, it's Wallace and Gromit.

Siara Singleton (44:58)
Yeah.

Dr. Avriel Epps (She/They) (44:58)
Sorry, the most recent Wallace and Gromit movie. Anyway, so

I'm watching these movies with him and I'm like, oh, not only are you in school learning how to code and doing all your STEM programming, because that's the most important thing in education right now, which I take issue with, you are also then watching all these stories about how the robots are gonna take over the world and how they're all evil. Like, where's the sober take in the middle?

Siara Singleton (45:22)
Mmm.

Dr. Avriel Epps (She/They) (45:27)
So that also felt really, really important to me and one of the main motivations for writing the book.

Siara Singleton (45:31)
How do you feel like AI is shaping childhood development right now? As someone who grew up with AI around me, I didn't really know it was AI, but now it's very in our face. You can actively use it, you can actively train it. So how's that changing the next generation?

Dr. Avriel Epps (She/They) (45:48)
We can think about this in a couple of different ways. I think the first way is like, let's just take the narrow form of AI that most people are usually thinking about, which is like a chatbot, take the example of like Character AI, which is this app that so many young people are on. I want to say it's like 60 million users and the majority of them are young. And of course, there are these really egregious

really heartbreaking, disturbing examples of children being led to harm themselves by that chatbot. And that, I think, is just an example of a larger thing: the way that young people interact with chatbots really shapes the

way that they think about themselves. This goes back to the conversation we were having at the beginning of this interview. But also, the decisions that they make in their real lives, and the way that they are being trained, and maybe you could even use the word groomed, to engage in social interactions. It's very different to interact with a highly complimentary chatbot,

or a chatbot that's designed to just keep you engaged. And maybe for some people, maybe that means just giving them lots of compliments, saying they're brilliant, being like, that's such a good idea, da, da, da, da. For some people that's you know, tapping into some shadow that like gives them not so great things about themselves. But the common thread there is that the chatbot

is trained to keep your attention for as long as possible. That's very different than interacting with another human being. That other human being has motivations way beyond just keeping you engaged or just keeping you around or giving you what you want to hear. So what then happens to young people who are accustomed to interacting with AI that has a single motivation?

to then stepping into a much more complex, messy world of interacting with real human beings that have many, many, many different conflicting sometimes motivations. How do they learn how to deal with conflict? Maybe they retreat from conflict in the real world because that's just too difficult. And then what happens to those parts of their development? I'm very concerned about that.

Siara Singleton (48:09)
Yeah.

Dr. Avriel Epps (She/They) (48:13)
I don't think there's enough good research. I know there's not enough good research on that to be able to say definitively what is happening or what is changing in development in those ways. But then beyond the chatbots: social media algorithms, access to search, like on Google or DuckDuckGo or whatever, the healthcare algorithms that we were talking about earlier, law enforcement algorithms,

predictive algorithms that are used in educational settings. All of these technologies are determining, and I would argue sometimes limiting, the future outcomes for any given human being on a moment-to-moment basis. So TikTok is determining and limiting what you are going to learn about, or be entertained by, on a

millisecond to millisecond basis. And same goes for any higher education admissions algorithm that might be being used, right? That's a bigger determination, but it's the same logic. And one could argue that human beings did a lot of that for us before the technology was doing that, but I think like,

because it's so systematic and so hyper-personalized, and also so opaque, and something we really have very little control over. Like, what's the alternative to TikTok? Like, Instagram, there's one other alternative, right? YouTube Shorts, maybe two. That's very different than, what's the alternative to your local elementary school if you don't like that school, or a different teacher in that school. Like, there's many, many, many more

Siara Singleton (49:34)
Yeah.

Dr. Avriel Epps (She/They) (49:45)
alternatives to, like, where you're getting your information from, etc., outside of these monopolistic tech companies. And so that's another thing that I'm thinking a lot about: what are the implications for individual human development in that context, but then also our development as societies, and our species-level development as well.

Siara Singleton (49:53)
Mm-hmm.

Mm-hmm.

Something else that you've shared is that with your own child, you've chosen not to post photos of them. I don't have children of my own; I have a dog who, unfortunately, I post many, many photos of online. But if I do ever have children, I agree. I am completely... there's no safe place online that I'm aware of today, based on everything that I know. So I'm just curious why you made that decision. But also, for parents who

already

shared photos, is it too late, or can they make some changes to help mitigate the chances of some of the things that you're about to explain from happening? Because I think some people are discouraged by the fact that they've probably already, with no ill intent, shared photos, because they want to, you know, share what's happening with their family. But now they feel like it's sort of a lost cause. So I'm super curious what you would have to say.

Dr. Avriel Epps (She/They) (51:00)
Yeah, well, first I want to say that I started off not having this rule. So I did post pictures of my child when he was a baby and maybe a young toddler, which I ended up deleting. But okay, before I talk about that: I think the reasoning behind that choice for me as a parent, and I've asked, you know, his whole community, not just me or his dad, but also grandparents, aunties, etc.,

is... there's a few different reasons. One, he cannot provide meaningful consent, and I want to give him as much time as possible before he develops the agency and the ability to really understand how his image and likeness might be co-opted and used by some kind of

facial recognition or computer vision model. So that's the first thing. He just can't consent to his image being put into a database to be used to train AI. The second thing is,

It's unfortunately, deeply unfortunately true that children's images in these models are often used to produce child sexual abuse materials. It's like the deep dark shadow of image generation or generative AI used for creating images and videos is that oftentimes it's used to create

child sexual abuse materials and I do not want him to be implicated by that in any way shape or form. Every time a photo of somebody is generated by AI, the first thought that pops into my mind is who is the human being that that picture is like trying to replicate?


Dr. Avriel Epps (She/They) (52:44)
because that human being exists somewhere very close to that image. Or maybe it's a combination of a couple of human beings. Like, I remember one time early on, early 2023 maybe, or sorry, late 2023, seeing an AI-generated image of a Black person, a Black woman. And I was like, whoa, that looks like my partner. That is so creepy. Like,

Siara Singleton (53:10)
Mmm.

Dr. Avriel Epps (She/They) (53:11)
what images of her were used to train this, such that when I put in some nondescript prompt to generate an image, it produced something that looked almost exactly like her? So, like, I just can't stomach the idea that someone would generate any kind of image, it doesn't have to be abuse material, but any kind of image, and it somehow ends up looking like my kid, and he doesn't have any control or agency or say in that.

Siara Singleton (53:22)
Right.

Yeah.

Dr. Avriel Epps (She/They) (53:36)
And then the final thing that I'm increasingly concerned about, especially with other family members who may not be as AI literate or AI skeptical as I am, I'm deeply concerned about emerging scams that are AI powered. So I think about if I put a clip of his voice on the internet in a video and it's long enough for a voice cloning,

Siara Singleton (53:51)
Yeah.

Dr. Avriel Epps (She/They) (54:01)
system to be used on it, what if somebody calls his grandparents and is like, it's, you know, Grammy, Grandpa, I need your help, or whatever? And, like, I don't know, I just... oh my God. And that's happening to people right now, you know. Or something that is used to create a grown-up version of him, and that's used to do identity theft in some way. I don't know, there's a lot of really

not great uses of this technology that I just want to try to protect him from in some way. So given all of that, I think that because kids grow and their faces change, their voices change, it's never too late to be like, I'm actually going to stop posting pictures of my kid or posting videos of my kid. It's like really not too late. You can decide today even if you spent the last seven years doing it. And also we have the right to delete.

So it's a long, complicated process, and some people argue that maybe all of your data doesn't get fully deleted when you request platforms to do so. But if these are things you're concerned about, or you're just learning about, it's worth going through the trouble to delete the photos and then make requests of the platforms to delete the data on their end too.

Siara Singleton (55:17)
Yeah, they make it difficult, as someone who's tried, but I agree. It's worth it. Mm-hmm.

Dr. Avriel Epps (She/They) (55:22)
But you have a legal right to it.

And I would argue your child has an even bigger legal right to it, because they didn't consent to those pictures being put up on the internet in the first place by you. I'm not a legal expert, so don't quote me on that, but.

Siara Singleton (55:39)
At what point do you feel like kids can give informed consent? Because I imagine I was around nine when I asked for my first phone, and I was on social media at age 11. So I know that the age is just getting lower and lower. When can they give informed consent to put themselves online if they so choose? I know that's a tough question.

Dr. Avriel Epps (She/They) (56:02)
Yeah,

I mean, in research settings, when we think about informed consent, it's 18. Anything younger than 18, you need parental consent, and I think it should be the same here. So how old a kid should be to be able to use social media is not the question here for me. The question for me is how old

Siara Singleton (56:07)
Yeah.

Okay.

Great, yeah.

Dr. Avriel Epps (She/They) (56:29)
can a child meaningfully consent to giving their data over to a company that creates prediction machines with that data? We could argue till we're blue in the face about whether 13 or 16 or 21 is best for our cognitive and emotional development to be on social media. I think later is better. As somebody who grew up on social media, essentially grew up on MySpace, I'm obviously a little older than you, but,

my goal is to try to keep my kid off of social media for as long as humanly possible. And it might be until he goes off to college, I don't know. Maybe it also depends on all of his other family members being on board with that, right? It's complicated. But the question for me is about data being used in these specific ways. And that's why I think it's more akin to

informed consent in the research setting, when you're giving over your data to researchers to be able to make predictions, do science, whatever. Not so much how old they have to be to handle watching something on YouTube. But yeah, I'm working, with all the other parents in his class at my kid's school, on a pledge

Siara Singleton (57:40)
Right. Okay.

Dr. Avriel Epps (She/They) (57:50)
to not give our children smartphones until eighth grade or going into ninth grade, so that all of the students in his cohort, his classmates, are in the same boat. He can't come home and basically be like, but Susie has a phone and David has a phone, why can't I have one? And so we'll all be able to say, actually, that's not true, so you can't get a phone, sorry.

Siara Singleton (57:54)
Yeah.

Dr. Avriel Epps (She/They) (58:14)
But I think, you know, for me, more so than the hard cutoffs, it's kind of like sex education in a way. For me, it's: how do we maintain the open dialogue, and how do I create the scaffolding for you to learn healthy digital habits, to have a healthy skepticism of these tools, and to have the critical thinking skills you need to navigate them independently?

And there's no way I could give all of that education to him before he's eight or nine years old, right? But the age at which he decides to get on social media, in the same way as the age at which he decides to first start engaging in romantic partnerships or romantic intimacy, I don't really have total control over that. I know that as a parent, even though I want to have total control over it, I just don't.

So my job is to make sure that I do as much preparation for him as possible, so that when he makes those decisions, he's making them with the most tools he can have.

Siara Singleton (59:15)
Are there any other specific decisions that you've made as a parent as it pertains to technology as a whole?

Dr. Avriel Epps (She/They) (59:23)
Yeah, I learned this from a professor I had in undergrad who was one of my mentors, Travis Dixon. When I was in undergrad, he had young twins. He's a communication studies professor and he studies racism in the media.

And so I asked him one time, like, how do you think about what your kids are allowed to watch on TV, given what you know from your research about how racist TV media is? Aren't you freaked out about this? And he was like, well, mostly I just sit and watch things with them so I can have conversations with them about it.

Dr. Avriel Epps (She/They) (59:58)
Like, I'm there as a resource to help them navigate, so that they don't internalize that messaging, they see it as a problem with the media and its representation, and eventually they can assess what's good for them and what's not. And so I take that same approach. We don't watch a whole lot; we don't even really have TVs in my house, but we watch Netflix sometimes on the laptop. So,

Dr. Avriel Epps (She/They) (1:00:19)
When it's movie night, we watch together and we have conversations about it. When he's got a question about something or wants to do a Google search because he wants to draw his Wings of Fire character that he really loves, like I try to have conversations with him about what it is we're seeing and why the technology works in that way. And this is why when I released the book, I really wanted to build a game to like put the concepts of the book into action.

It's a really simple card game, just 10 different experiments that you can run. You shuffle the cards and then pick one, and it gives you instructions, like, you know, go do a Google search for a scientist and then make your observations about what is shown and why you think that is. So that parents have a resource, a conversation-starting resource, with their kids. And I do that stuff with mine all the time.
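
For anyone who wants to improvise the same kind of exercise at home before picking up the game, here is a minimal sketch of the shuffle-and-draw mechanic described above. The prompts are illustrative assumptions, not the wording of the published cards.

```python
import random

# Hypothetical conversation-starter prompts, loosely inspired by the
# experiments described above; the real card game's wording will differ.
CARDS = [
    "Do an image search for 'scientist'. Who shows up first? Who is missing?",
    "Search for 'beautiful hair'. What patterns do you notice in the results?",
    "Ask a voice assistant the same question twice, phrased differently. Compare the answers.",
]

def draw_card(cards):
    """Shuffle a copy of the deck and draw one prompt."""
    deck = list(cards)
    random.shuffle(deck)
    return deck[0]

if __name__ == "__main__":
    print("Tonight's experiment:", draw_card(CARDS))
```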

Siara Singleton (1:01:11)
That's so smart too, because I feel like some parents might hear all of the harms and all of the possibilities and want to almost shield them, but they're going to be exposed to it. They're going to go to school, their friends are going to have technology, and they're just going to live life. So I love that, because it's a little bit less scary, a little bit less restrictive, and a little bit more open, and it kind of,

you know, arms your children with the tools that they need to make decisions about things as they come up when you're not in the room. So I really love that. For yourself, I imagine that you have maybe a list of personal boundaries when it comes to digital use? Do you have a personal rule book that you follow?

Dr. Avriel Epps (She/They) (1:01:52)
I recently turned my phone into a dumb phone. And what that meant for me was turning it to grayscale, because I read that grayscale is less addictive than full color. I turned a bunch of screen time restrictions and settings on so I can't access anything except for my messages and my weather app before 9 a.m.

Dr. Avriel Epps (She/They) (1:02:15)
and everything

gets shut off at 8 p.m. I don't have any social media on my phone. I don't have any shopping apps on my phone. I eventually would love to get rid of my email too, but I'm just like too much on the go to be able to do that and still like be a functioning professional. I've made a lot of strides toward just like those kinds of

hygiene practices, I would call them. And I really admire people who try really hard to maximize their digital privacy by using things like ProtonMail instead of Gmail or CryptPad instead of Google Docs. I aspire to eventually get to that. I'm just not there yet.

Dr. Avriel Epps (She/They) (1:02:59)
Yeah, but I do, I mean, you know, along the lines of digital privacy, I take two-factor authentication very seriously. I think everyone should. And I am really confused by people who don't use password managers and just use the same password for everything. I don't know. Like,

You know, we could have that conversation if you want. And then, yeah, I think we already had that conversation about just practicing or exercising that consent muscle. I try to do that all the time.
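
On the password-manager point, the underlying idea is simply that every account gets its own long, random password that no human has to remember. Here is a minimal sketch of that idea using Python's standard secrets module; the site names are placeholders, not a recommendation of any particular tool.

```python
import secrets
import string

def random_password(length=20):
    """Generate a long random password, the way a password manager would."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One unique password per site, never reused across accounts.
for site in ("example-bank.com", "example-mail.com"):
    print(site, random_password())
```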

Siara Singleton (1:03:29)
Yeah, I'm right there with you on the password manager thing. That's awesome. Okay, so you run AI for Abolition. Can you tell us a little bit more about the mission and what you've been up to lately?

Dr. Avriel Epps (She/They) (1:03:43)
Yeah, absolutely. So AI for Abolition is an organization where we're trying to build collective power with and around artificial intelligence. So we do that in a couple of different ways. One is through kind of community-based AI literacy projects, and that looks different depending on who we're trying to reach and what we're trying to do. So...

The kids' book about AI bias is definitely an AI literacy intervention, for young people, for parents, for grandparents. I make a lot of short-form video content for TikTok and Instagram to reach Gen Z and millennials. And then we also put on programs in real life. So

We recently had a partnership with the Los Angeles Public Library System to do a multi-part event series for folks who are frequent users of the library. And we also just partnered with Black Girls Code to do an event with the Crenshaw Dairy Mart in Inglewood, California. That was kind of a multi-generational event looking at themes around algorithmic bias and also

Dr. Avriel Epps (She/They) (1:04:48)
facial recognition surveillance stuff. So that's one arm of what we do. And then the other arm is building open-source, open-science, community-led and community-developed AI tools. The first project that we're working on in that vein is this app called Repair, which is a platform for transformative

justice and restorative justice practitioners. So folks who work outside of the criminal legal system to help people who have committed some kind of harm, and people who have survived or experienced some kind of harm, find accountability, heal together, and find a way to move forward after the harm has occurred. And we're building a digital platform that's AI-powered to help them connect with each other, connect with other practitioners, share resources, develop resources, and increase their capacity to do that kind of work.

The vision there is: can technology be used to build, and help people imagine, alternatives to these super harmful systems? Because right now, when folks talk about abolishing the prison industrial complex, the first thing you think is, okay, but what do we do with people? This harm is still going to occur, we're human beings, so what's the alternative? And people can't really imagine that.

Dr. Avriel Epps (She/They) (1:06:05)
But there is a really amazing, committed group of people who are building the alternative to the criminal justice system. I've been lucky enough to be invited into transformative justice circles and experience that firsthand. And the problem, or one of the problems, is how do we do that at a scale that rivals the existing prison industrial complex? And so that's where...

you know, the theory of change, or the hypothesis, that AI for Abolition is putting forward is that this is a way technology can actually be used for good, with an abolitionist politic.

Siara Singleton (1:06:43)
That's incredible. Yeah, if you read about social justice a lot and then you hear things like this, it really feels like this is the modern movement, and this is what excites me the most about change and having hope for real, true change. So that's incredible. How can people learn more about your work? Is there a specific website or specific social channels that we can look out for?

Dr. Avriel Epps (She/They) (1:07:05)
Yeah, AI for Abolition is at AI, the number four, dot org. And then we're also on social media. I'm King Avriel on Instagram and TikTok, and AI for Abolition is just AI, the number four, abolition on those channels as well.

Siara Singleton (1:07:21)
Awesome. Okay, one last question. I ask every single guest this and it's totally up to you how you interpret it. But what are you logging out of this year and then what are you logging into?

Dr. Avriel Epps (She/They) (1:07:32)
Ooh, hold on, I have to think about that. I love that question.

Siara Singleton (1:07:35)
Yeah, take all the time that you need. It changes for me on a weekly basis, so.

Dr. Avriel Epps (She/They) (1:07:40)
This year I am continuing to log out of hustle culture and the increasing speed of work that technologies, especially AI now, are asking us to produce at. And I'm logging into being more rooted in

Dr. Avriel Epps (She/They) (1:08:00)
land-based practices, being more connected to nature, being more in real life with human beings and community, going to the store to talk with the store clerk instead of just buying something on an app. Those kinds of practices feel really important and nourishing, and that's what I want to lean into.

Siara Singleton (1:08:20)
Mm, I love that. More human interactions. Awesome. Well, thank you so much. This has been a huge learning experience for me and I know for the audience. I hope to have you back on the show again one day. I want to hear all about what you learn as you continue your research and your work with AI for Abolition.

Dr. Avriel Epps (She/They) (1:08:31)
Well, thank you for having me.

Absolutely, thank you so much for having me. I really appreciate it.

Siara Singleton
Host, The Log Out Podcast
