
Statistics, Clergy Abuse, and Homosexuality: The Sullins Report

Updated: Feb 7, 2019

by Nathan


A study about the clergy sex abuse crises has been making the rounds in Catholic circles over the past few months, gaining attention from organizations such as the National Catholic Register, Catholic News Agency, and even the Wall Street Journal. Published by the Ruth Institute and authored by D. Paul Sullins, a former sociology professor at The Catholic University of America, the paper investigates a potential cause of the crises that was dismissed by the John Jay reports but which many Catholics still wish to explore: homosexuality in seminaries and the priesthood. The paper’s title states its primary question: “Is Catholic clergy sex abuse related to homosexual priests?”


Before going any further, I should pause to say a few things. This is obviously a hot-button issue. Many readers hold strong opinions one way or the other on this question. But regardless, Catholics have a duty to take into account empirical research as we together work for justice and healing in our Church and the world. As St. John Paul II famously wrote, “Faith and reason are like two wings on which the human spirit rises to the contemplation of truth” (Fides et Ratio). Science can never contradict Dogma; truth can never contradict truth. Catholics not only can but must turn to the methods of science to answer some of the questions we need to ask in order to move forward on this issue.


Thus, we should approach Sullins’ study with intellectual humility, seeking to learn what we can from it. As an actuary, and therefore someone who works with statistics every day, I would like to use statistical analysis to walk through the questions I have about Sullins’ headline claims, which are summarized in his Figures 9 and 10.


Sullins’ Claims


Sullins presents two conclusions at the top of both the paper and his executive summary: a 0.9 correlation between the proportion of homosexual priests and the incidence of abuse, and a 0.96 correlation between the prevalence of homosexual subcultures in seminaries and the incidence of abuse. These are very high numbers, and if they can be substantiated, they might point to a major link between homosexuality and abuse. Thus, we should take a closer look at the graphs and his explanations of them to make sure we know what Sullins is claiming in relation to what the data tell us.
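
To make the arithmetic behind a correlation figure concrete, here is a minimal Python sketch of how a Pearson correlation between two series of five-year aggregates might be computed. The numbers below are placeholders I invented purely for illustration, not Sullins’ data, and the calculation is a generic one rather than a reconstruction of his method.

```python
# Minimal sketch: Pearson correlation between two series aggregated into
# five-year periods. The values are placeholders, NOT Sullins' data.
from scipy.stats import pearsonr

periods = ["1950-54", "1955-59", "1960-64", "1965-69", "1970-74",
           "1975-79", "1980-84", "1985-89", "1990-94", "1995-99"]

# Hypothetical percent of all abuse incidents occurring in each period.
abuse_pct = [3, 6, 11, 15, 18, 17, 13, 9, 5, 3]

# Hypothetical percent of each ordination cohort identifying as homosexual.
homosexual_pct = [4, 6, 9, 13, 16, 17, 15, 12, 10, 9]

r, p_value = pearsonr(abuse_pct, homosexual_pct)
print(f"Pearson r = {r:.2f} across {len(periods)} five-year periods (p = {p_value:.3f})")
```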


The blue bars in both figures show the percentage of abuse incidents occurring in each five-year period. Note that the blue bars add up to (approximately) 100 percent, since the percentage for each bar is calculated from the total number of cases that occurred from 1950 to 2000. These data are drawn from three sources: the John Jay reports, the Pennsylvania Grand Jury report published in the summer of 2018, and the yearly reports published by the Center for Applied Research in the Apostolate (CARA).
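
As a rough sketch of that kind of aggregation, here is how incident counts could be binned into five-year periods and expressed as a percent of the total. The incident years are hypothetical stand-ins, not values from the John Jay, Grand Jury, or CARA data, and I treat 1950-1999 as ten clean five-year periods for simplicity.

```python
# Sketch: bin abuse incidents into five-year periods and express each bin
# as a percent of all incidents in the window (illustrative data only).
from collections import Counter

def percent_by_period(incident_years, start=1950, end=1999, width=5):
    """Return {period_start_year: percent of all incidents in that period}."""
    in_range = [y for y in incident_years if start <= y <= end]
    counts = Counter((y - start) // width for y in in_range)
    total = len(in_range)
    return {start + b * width: 100.0 * counts[b] / total for b in sorted(counts)}

# Hypothetical incident years, standing in for the combined datasets.
example_years = [1962, 1968, 1971, 1975, 1975, 1979, 1983, 1988, 1994]
print(percent_by_period(example_years))
```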


[Figure 9]


The red bars in Figure 9 are the proportion of priests reporting as homosexual, drawn from a fourth source: a survey conducted by the LA Times in 2002, which used a modified Kinsey scale to determine the sexual orientation of priests. The numbers in the red bars are aggregated by year of ordination; each bar represents the combined percentage of priests reporting as homosexual across the ordination classes in that five-year period. Thus, the first red bar represents the percentage of priests ordained in 1950-1954 who were identified by the survey as homosexual.
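
Aggregation by ordination cohort works a little differently from the incident binning above: within each five-year ordination class, one computes the share of surveyed priests classified as homosexual. Here is a hedged sketch, assuming a simple list of (ordination year, classification) pairs rather than the actual LA Times microdata.

```python
# Sketch: percent of surveyed priests classified as homosexual, grouped by
# five-year ordination cohort (made-up records, not the LA Times responses).
from collections import defaultdict

def percent_by_cohort(records, start=1950, width=5):
    """records: iterable of (ordination_year, classified_homosexual) pairs."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for year, classified_homosexual in records:
        cohort = start + ((year - start) // width) * width
        totals[cohort] += 1
        positives[cohort] += int(classified_homosexual)
    return {c: 100.0 * positives[c] / totals[c] for c in sorted(totals)}

# Hypothetical survey records for illustration only.
example = [(1952, False), (1953, True), (1967, True), (1968, False),
           (1969, True), (1984, False), (1986, True)]
print(percent_by_cohort(example))
```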


The pink bars in Figure 10 are the proportion of priests reporting a homosexual subculture at their seminary, which is another data point collected by the same LA Times survey. These data are also aggregated by year of ordination.


[Figure 10]


The dotted lines in each graph are the trend lines of each variable, added to illustrate the close relationship between them.
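
For readers curious how a trend line like this is typically produced, here is a minimal sketch using an ordinary least-squares fit over the period midpoints. I am assuming a linear fit purely for illustration; the paper’s figures may use a different smoothing, and the values here are placeholders.

```python
# Sketch: fit a simple least-squares trend line over period midpoints
# (numpy only; a linear fit is assumed purely for illustration).
import numpy as np

period_midpoints = np.arange(1952, 2000, 5)              # 1952, 1957, ..., 1997
values = np.array([3, 6, 11, 15, 18, 17, 13, 9, 5, 3])   # placeholder percents

slope, intercept = np.polyfit(period_midpoints, values, deg=1)
trend_line = slope * period_midpoints + intercept
print(np.round(trend_line, 1))
```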


The Model, and Problem #1


Sullins also builds several models to support his findings in Figures 9 and 10. They are summarized in the following table:


[Table 1]


The first thing that one ought to notice here is the note at the very bottom of Table 1: “Outcomes reference current allegations only.” Sullins defines “current allegations” as those which “report abuse occurring in the same year as the allegation.” This means that the data utilized by Sullins excludes the vast majority of allegations. For example, the John Jay Report documents a spike in reporting of abuse incidents in 2002 after the Boston Globe’s reporting, even though the average reporting delay for those incidents was 30 years. Model 1 would include only those incidents which were both reported in 2002 and also occurred in 2002. If an incident were reported in 2002 but occurred in 2001, it would be excluded. Table 1 thus represents a very small minority of allegations.
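
To see how restrictive that definition is in practice, consider a short sketch of the filter. The column names and rows here are my own hypothetical simplification of allegation data, not Sullins’ dataset.

```python
# Sketch: keep only "current allegations", i.e. those reported in the same
# year the abuse occurred. Rows and column names are hypothetical.
import pandas as pd

allegations = pd.DataFrame({
    "report_year": [2002, 2002, 2002, 1995, 1980],
    "abuse_year":  [2002, 1972, 1985, 1995, 1979],
})

current = allegations[allegations["report_year"] == allegations["abuse_year"]]
print(f"{len(current)} of {len(allegations)} allegations count as 'current'")
# With an average reporting delay of roughly 30 years, most real allegations
# would fail this filter.
```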


Further, it appears that Figures 9 and 10 do not use the same data as Table 1. Note that the blue bars in Figures 9 and 10 indicate “all abuse incidents,” while the models in Table 1 are restricted to current allegations. This means his models are built on a different dataset than the one used to calculate the correlations. Thus his models, from a statistical perspective, do nothing to support the correlation finding.


This does not mean, however, that nothing can be gained from Sullins’ paper. Rather, it means that his data need much more exploration before we can truly understand their significance. It also means that much of the reporting on his paper mischaracterizes the data and what can be gained from it.


Subsequent posts will discuss other limitations of Sullins’ paper, in the hope of equipping readers to better evaluate the data put before them. We hope that Catholics can use what they gain from these posts to review more critically the ways in which statistics are used to make claims. In addressing the clergy abuse crises, Catholics need the best that faith and reason have to offer, and we hope these posts help equip Catholics to that end.


Posts in this series:
