Systemic changes to the criminal legal system can be met with fear and misunderstanding. In recent years, this has been especially true in states and counties that have sought to improve their pretrial justice systems. Adding to this challenge, jurisdictions often lack the data and research capacity needed to tell a clear story about why change is needed, refute false narratives about reform, and report the results of system improvements.
David Olson, PhD, understands the complexities of analyzing changes to pretrial systems. He is a professor in the Criminal Justice and Criminology Department at Loyola University Chicago and co-director of Loyola’s interdisciplinary Center for Criminal Justice.
In addition to extensive research across most areas of the criminal legal system, both nationally and in his home state of Illinois, his work evaluating Illinois’ Pretrial Fairness Act (PFA), from development through post-implementation, has been instrumental in assessing the law’s success and identifying areas for potential adjustment.
APPR spoke with Dr. Olson about the importance of conducting quality pretrial research and the lessons he has learned in the field.
The conversation has been edited and condensed.
APPR: What is pretrial justice research? What factors does it examine, and what can we learn from it?
Dr. David Olson: There are two basic categories of questions about pretrial systems that all practitioners and policymakers ask, or should ask, and researchers are often the ones who try to answer them. The first category is descriptive: what factors drive pretrial decisions, and what are the outcomes of those decisions—how often, for example, courts impose money bonds, set specific pretrial conditions, or release people on their own recognizance, known as ROR. It then looks at the individual characteristics that influence those decisions, like the nature of the crime, current supervision status, or other information that may be presented through a risk assessment instrument. This can help us understand who gets ROR, who gets a $10,000 bond versus a $20,000 bond, what percentage of people are detained, those kinds of things. It also looks at the influence of factors that should not play a role in decision making, such as race or income level. And it looks at things like how long someone stays in pretrial detention if they have to post bond—hours, days, weeks, or months? All of these are what I would consider basic questions, but they are not always asked by practitioners or policymakers.
The next category of research looks at what happens during pretrial release. How compliant are people with court appearances? Do they get charged with new crimes? Scholars often ask whether people who are released pretrial have different outcomes than people who are detained. Are there differences in how often they plead guilty, are convicted, or receive punitive sentences?
The first category of research is necessary to inform the second category. We need to know the descriptive stuff so we can understand the outcomes. And we need to know both things to establish a baseline before we embark on law or policy change, so we can measure what changes and by how much.
What we find in most places—including Illinois, where my colleagues and I are tracking the outcomes of major system change implemented in 2023—is that local and state stakeholders often can’t answer those baseline questions. So they don’t really know what’s happening in the moment. If you ask, “What does it look like now? What percentage of people don’t appear in court or are rearrested during the pretrial period?”—in most places, they don’t know whether it’s 10 percent or 50 percent.
APPR: Do most jurisdictions collect or have access to the data needed to answer those questions?
Dr. Olson: Yes and no. Practitioners collect a lot of information, but most of it is used to decide how they’re going to deal with each individual case. Rarely do they look at it more systematically. That’s just a reflection of our field—most agencies don’t have researchers on staff. So they use data to do their jobs, but not always to find trends or aggregate outcomes. And because they are not trained researchers, they may not recognize the wealth of information they have to help them better understand their operations and impact.
The Pretrial Fairness Act required the creation of a statewide pretrial data dashboard to give stakeholders and the public access to a lot of descriptive information. I was on the data dashboard task force, and we learned from stakeholders in a lot of local jurisdictions that they didn’t know the answers to many basic questions, like what percentage of people have to post bond and what percentage actually post the bond.
One issue I’ve seen is that, despite a lack of reliable data or answers to basic questions—or maybe because of it—people make assumptions about how things work and the potential impact of changes. For example, prior to the passage of the PFA and during its early implementation, many assumed that eliminating money bond would result in some huge surge of people being released. It was further assumed that those people wouldn’t show up to court and would commit more crime. Our pre-PFA research showed that most people posted bond or were released within a week. So the people they were so worried about getting released were already getting released under the old system. Having data like that can help deflate some of those uninformed or misinformed arguments.
APPR: How does pretrial research help support effective and fair systems?
Dr. Olson: In a couple of ways. First, to operate systems that are fair and effective, we must have good pretrial research. We can’t know how we’re doing or whether we’re improving if we don’t measure it. That’s the easy answer.
Second, we know that evidence-based or data-driven practices can help promote fairness and effectiveness. To do that, you need evidence and data, and those come from research. You might adopt an evidence-based practice or policy, but are you evaluating your implementation? Are you measuring the impact in your jurisdiction? The importance of the pretrial phase has historically been minimized. People think that, you know, people are charged with crimes, they go to court, the judge orders them to pay bond, they wait a couple of days, they pay the bond, no big deal. But the pretrial phase of the justice system really punches above its weight in terms of its impact on all subsequent stages. We’ve learned in the last decade or so that what happens during the pretrial phase influences everything that comes after it. Fairer pretrial decisions lead to fairer dispositions and fairer sentences, and so on.
APPR: In your research, both in Illinois and nationally, have you found commonalities or through-lines?
Dr. Olson: Yes. We’ve seen that in most places where some kind of pretrial reform has occurred—usually in the form of limiting the role that money plays in release—these efforts haven’t had an impact on release rates or on crime.
This makes sense once you understand that, as I said before, in most places most people who are arrested are released during the pretrial phase, whether they have to pay money or not. The reforms don’t change release rates much; they just change how people get released. Unless you believe that paying money for release has a rehabilitative effect or deters crime—and it doesn’t—there’s no reason to expect a change in crime rates, up or down.
Another thing we’ve learned is that the impact of pretrial policies varies between urban and rural areas. Most of the places we’ve studied are large urban jurisdictions that tend to have more violent crime. Rural areas may have more drug offenses and can struggle with providing indigent defense and treatment resources. A statewide pretrial system, like we have now in Illinois, is going to look different depending on where you are. As we continue collecting data and studying pretrial in Illinois, we are keeping an eye on smaller and rural jurisdictions to learn more.
APPR: Has your research uncovered anything that surprised you?
Dr. Olson: Mostly, I’ve been surprised by the consistency in our findings across jurisdictions. In the places we’ve studied, including the eleven jurisdictions we examined in the Guggenheim report [1], we see really consistent court appearance rates and relatively low rates of rearrest for new violent crimes. It makes sense, because most of the cities we’ve studied have roughly the same mix of cases, the same level of policing, and so on. Now, as we work in individual jurisdictions in Illinois, when we see a failure-to-appear rate of around 20 percent and a new violent crime arrest rate of less than 5 percent, we’re like, “yep, that seems about right.” As a researcher, I find that replication and consistent findings help me recognize anomalies, so I know when to look more closely.
APPR: What are the benefits to jurisdictions for partnering with academic institutions and researchers?
Dr. Olson: To understand the kind of data you should be collecting and the questions you should be asking, or to make sense of the data you have, you need to partner with a qualified researcher.
What’s most important is finding the right people to work with for your jurisdiction. I’ve seen plenty of academics who get all the data from a jurisdiction, and then they never talk to them again. Or they don’t share the findings with the local jurisdiction before presenting them at a conference or putting out something that the media picks up.
When we work with a jurisdiction, we agree not to show anybody the findings until after we’ve shared them with our practitioner partners and gotten their feedback and thoughts. That’s not so they can censor us or prohibit us from sharing the results, but to make sure we understand what we’re seeing and can correct any coding errors. When I work with multiple cities or counties, it can be a bear to process all their individual feedback, but it’s worth it.
Also, working with researchers should ideally be seen as a long-term relationship. This helps practitioners get used to looking at data, thinking about it, and interpreting it. It also helps researchers understand the context, environment, and practices that influence the research findings. Over time, data and research become normalized and expected.