Michael Rempel

Challenges of and Tips for Conducting Randomized Trials in a Criminal Court Setting

Posted on May 31, 2011

Michael Rempel (bio) of the Center for Court Innovation discusses the challenges of and offers tips for conducting randomized clinical trials in criminal court settings.

 

Q: What was the purpose of the randomized clinical trials you conducted?
A: The purpose was to test the effectiveness of (1) court-mandated batterer programs and (2) judicial monitoring in cases of domestic violence. To test the former, we worked with the court system in the Bronx, New York. For the latter, we moved to Rochester, New York. Essentially, we wanted to determine whether court-mandated attendance at a batterer program (the Bronx) or judicial monitoring in conjunction with some type of program -- batterer, substance abuse, mental health -- (Rochester) could elicit real change in a defendant population. In both trials, the primary outcomes were official re-arrests or victim reports of re-abuse by the same offender.
Q: What were the greatest challenges you faced when implementing the trials?
A: The most obvious challenge was an ethical one: potentially denying treatment to a segment of the study population. In the Bronx trial, offenders were randomized into one of four conditions involving various combinations of batterer program participation and/or monitoring. Because previous studies had not clearly established the effectiveness of batterer programs, and little research of any kind had been conducted on different forms of monitoring with domestic violence offenders, these concerns did not prevent us from conducting the study. In fact, some argue that in this situation the ethical imperative is to conduct a randomized trial, since otherwise policymakers will continue to order individuals into interventions without any knowledge that those interventions achieve the desired benefits. Nonetheless, we wanted to be sure that the offenders' victims remained safe. We addressed this by: (1) attempting to contact victims at regular intervals post-sentencing (though a lack of accurate contact information made this very difficult), (2) examining key results halfway through the study to check for significant differences among the randomized groups, and (3) agreeing to stop the study immediately if any significant differences were detected.
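The interim safety check in step (2) can be sketched as a two-proportion comparison of re-arrest rates between randomized arms. This is a minimal illustration of the kind of test such a check might use, not the study's actual analysis; the arm labels and counts below are hypothetical.

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-proportion z-test: compare re-arrest rates in two randomized arms."""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    # Pooled proportion under the null hypothesis of no difference between arms
    p = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical interim counts: 18/100 re-arrests in the program arm,
# 22/95 in the no-program arm.
z = two_proportion_z(18, 100, 22, 95)
significant = abs(z) > 1.96  # two-sided test at alpha = 0.05
print(round(z, 3), significant)
```

If `significant` came back true at the halfway point, the stopping rule in step (3) would be triggered.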

Next, we faced the problem of upholding due process rights for the offenders without compromising the integrity of the study. Any study protocol involving the criminal justice system has to take into account defendants' constitutional rights, as well as the responsibility judges have to uphold court mandates. We learned from our experiences in the Bronx that defense attorneys would be unlikely to accept any study protocol that might result in a defendant receiving a more onerous sentence than he would have received were he not involved in the study. We also learned that judges felt strongly it was their legal responsibility to impose the exact terms of sentence -- thus, randomization could not take place after the sentencing process. Eventually, we settled upon in-court allocution and assignment, in which the sentencing judge would fully explain to the defendant the ramifications of pleading guilty, after which randomization would take place at a bench conference. The judge would then inform the defendant of the selected condition as part of the sentencing process. This method worked well, and we implemented a similar randomization process in Rochester.
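The in-court allocation described above (randomization at a bench conference, immediately before the condition is imposed as part of the sentence) can be sketched with a permuted-block randomizer that keeps the four Bronx conditions balanced over time. The condition labels here are illustrative, not the study's exact terms, and the study's actual allocation mechanism is not described in this interview.

```python
import random

# Four conditions (combinations of batterer program and monitoring);
# these labels are hypothetical stand-ins for the Bronx conditions.
CONDITIONS = ["program+monthly", "program+graduated",
              "no-program+monthly", "no-program+graduated"]

def permuted_block(conditions, rng):
    """One randomly ordered block containing each condition exactly once,
    so group sizes stay balanced as cases accumulate."""
    block = list(conditions)
    rng.shuffle(block)
    return block

def allocator(conditions, seed=None):
    """Yield an endless stream of assignments, one per randomized defendant."""
    rng = random.Random(seed)
    while True:
        for condition in permuted_block(conditions, rng):
            yield condition

# At each bench conference, the next assignment is drawn after the plea:
assignments = allocator(CONDITIONS, seed=42)
first_eight = [next(assignments) for _ in range(8)]
# Every consecutive block of four contains each condition exactly once.
print(sorted(first_eight[:4]) == sorted(CONDITIONS))
```

Drawing the assignment only after the allocution is complete mirrors the sequence the judges insisted on: the defendant hears the full ramifications of the plea first, and the selected condition is announced as part of sentencing.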

In addition, we were not sufficiently skeptical about court volume estimates at either the Bronx or Rochester site. Case volume estimates at both proved significantly higher than the actual caseloads, and in Rochester we had to extend the study by a full year in order to meet our targeted recruitment goal.

Finally, we learned that stakeholder collaboration and buy-in were essential. In Rochester especially, we encountered a tight-knit stakeholder community that was unfamiliar with our work. Although we had agreed to conduct the trial with the judiciary, we could not implement several key components (mainly the victim and defendant interviews) without the support of the local prosecutor and the victim advocacy community, neither of which we had contacted in the initial study planning phase. In our case, we were eventually able to work together with all of the involved stakeholders, although this outcome cannot always be guaranteed. In general, it is best to let every major stakeholder know as early as possible that a randomized trial might be taking place and to solicit their input.
Q: Can you suggest any methods you found helpful for monitoring study fidelity?
A: Greater study fidelity may be achieved, first, by working with both high-level stakeholders and rank-and-file staff. Meetings with stakeholder leaders are a necessary starting point, because they are the people who will sign off on the study and make policy changes. Once stakeholder leaders have signed on, it becomes equally important to meet with the people who will be responsible for the day-to-day implementation of study protocols. Actions by rank-and-file staff will ultimately determine who is found study eligible (and whether those findings correspond with how things are supposed to work), how often judges or others intentionally exclude eligible offenders from the study (from a research standpoint, such exclusions should be extremely infrequent), and whether the interventions themselves are implemented in a way that conforms with best practices.

Second, having an on-site research presence can encourage fidelity to the implementation plan and provide immediate resolutions to challenges as they arise. It also conveys to stakeholders a level of commitment to the process that, ultimately, contributes to long-term buy-in and cooperation.

Third, the study itself may precipitate unintended effects. It is important to watch for such consequences and address any attendant changes in practice. For example, we wondered if prosecutors would be less likely to offer a plea bargain, because the defendants would have only a 50 percent chance of being assigned to a batterer program (the Bronx) or receiving judicial monitoring (Rochester). In the Bronx, we collected pre-study data to assess major differences between the characteristics of the pre-study sample of defendants versus the participant sample. We found no significant differences. In Rochester, we monitored the overall distribution of sentences in the domestic violence courtroom to catch any changes happening as a direct result of the study.

Finally, close monitoring of data collection is critical. For example, a problem in the Bronx site only came to light during the final data analysis, at which point we discovered that the judicial hearing officer had not been consistently applying the graduated monitoring protocols (noncompliant participants were supposed to be assigned a more stringent reporting schedule; in most cases, this did not take place). Although we could not fix this problem, we could at least take it into account and make appropriate adjustments when analyzing our data.
Q: Both studies took substantially longer to conduct than you originally thought they would. What were the reasons behind this and what lesson did you learn?
A: The lesson learned was that process planning takes time. A lot of time in some cases. For us, there were a number of factors that resulted in a substantially drawn out process. First, the number of stakeholders involved at both study sites made coordinating meetings difficult. In addition, all of these stakeholders contributed input and their viewpoints often differed, sometimes drastically. Collecting, processing, and synthesizing these varying opinions took time. Lastly, in the Bronx, we already had a working relationship with many of the relevant stakeholders, including the court system and local victim advocacy groups. By contrast, we had to build relationships and trust with stakeholder groups in Rochester from scratch, something we did not take into account when planning the study.
Q: Given how long it takes to secure stakeholder buy-in, which is more important to secure first, buy-in or funding?
A: In the Bronx study, we chose to secure funding prior to involving all the relevant stakeholders. Although we concluded that it would have been premature to approach all stakeholders before that point, we also acknowledge that the decision led to a conflict. When it came time to write our proposal, we could not guarantee the necessary buy-in from all stakeholders. In addition, the decision lengthened our planning process substantially: once we were funded, we had to solicit buy-in from those stakeholders we had not previously approached, a process that took six months longer than the one month we had allotted in our proposal.

In Rochester, as noted previously, the challenges related to stakeholder buy-in were even more severe. In general, having learned from the Bronx experience, we involved more stakeholders in Rochester prior to securing funding. For instance, we reached out to the local judiciary in Rochester from the outset rather than planning the study initially with only a statewide administrative judge. Yet, we still encountered resentment from the local prosecutor and victim advocacy community in Rochester, who believed that they should have been able to provide input at the earliest possible time. In general, it is difficult to achieve the right balance on this question, because it is obviously inefficient to hold multiple meetings with a wide array of individuals when funding is not yet in place. We can only recommend extreme care in thinking through whom to notify and what level of detail to provide at the proposal stage.

Once funding is secured, it then becomes important to initiate broader outreach sooner rather than later. In a third, more recent trial, we again obtained the buy-in of both the statewide and local judiciary prior to obtaining funding, as we had in Rochester; but unlike in Rochester, we sought in-depth meetings with prosecutors, the defense bar, and line staff as soon as possible after funding was secured, so that their input could be obtained at a time when we could feasibly incorporate it into the final study implementation plan.
Q: Was there one element of the study protocol that you felt you got particularly "right"?
A: I would say that our decision to keep all of the events relevant to the randomization process within one discrete court appearance (the initial explanation of the study to the defendant by the judge, the random selection of a condition, and the imposition of that condition, i.e., the specific sentence) was something we got right. No one attempted to withdraw a plea after being randomly assigned to a condition and receiving a sentence (e.g., assigned to a batterer program or not, or assigned to monthly versus graduated monitoring). We can't take sole credit for this decision, because it was a product of extensive discussions with the bench judges in the Bronx, but the outcome was right, and the same approach worked well in Rochester.

 

 


Adapted from the 2010 article "Lessons Learned from the Implementation of Two Randomized Trials in a Criminal Court Setting," by Melissa Labriola, Michael Rempel, and Amanda Cissner.
