Can AI Reduce Harm to Children?: Gabriel Fernandez and the Case for Machine Learning

Could AI have saved Gabriel Fernandez’s life? That’s the question posed in Episode 5 of Netflix’s The Trials of Gabriel Fernandez. The docuseries explores the aftermath of Gabriel’s tragic death at the age of eight. Several witnesses made reports to Los Angeles County officials on his behalf, yet Gabriel remained in the abusive home of his mother and her boyfriend. In 2013, he lost his life.

A poster for the documentary The Trials of Gabriel Fernandez: a photograph showing the darkened silhouette of a boy's head and neck against a white background.
The Trials of Gabriel Fernandez is currently available on Netflix. (Source: IMDB)

In the series, data and social scientists argue that machine learning might have kept him alive, and that it could save future Gabriels. How? By improving the identification of potential signs of abuse.

One evangelist for the technology is Marc Cherna, Director of the Allegheny County Department of Human Services (ACDHS). Cherna believes that it would be unethical not to use such algorithms in the administration of child protective services.

But what exactly is this potentially life-saving AI? Keep reading to learn more about predictive risk modeling, how governments currently use similar AI, and the ethical questions such work raises.

What is predictive risk modeling and how can it help social workers?

At the heart of the AI explored in the series is the predictive risk model (PRM). In lay terms, a PRM uses an algorithm trained on a large dataset to develop a risk score for an adverse event. The model’s developers determine what sort of event the model predicts. 
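To make the idea concrete, here is a deliberately simplified Python sketch of a PRM. The data, features, and labels are all invented for illustration; nothing here reflects the actual model used in Allegheny County.

```python
# A minimal, hypothetical sketch of a predictive risk model (PRM).
# This is NOT the Allegheny model; the features, data, and labels here
# are invented purely to illustrate the general idea.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend historical records: each row is a past referral, each column a
# feature the developers chose (e.g., prior referrals, prior system contact).
X_history = rng.integers(0, 5, size=(1000, 3))
# The "adverse event" label is whatever the developers decide to predict,
# e.g., re-referral within two years (synthetic here).
y_history = rng.integers(0, 2, size=1000)

model = LogisticRegression().fit(X_history, y_history)

# Scoring a new referral yields a probability, i.e., a risk score.
new_case = np.array([[2, 0, 1]])
risk = model.predict_proba(new_case)[0, 1]
print(f"Estimated risk of the adverse event: {risk:.2f}")
```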

In The Trials of Gabriel Fernandez, interviewees discuss the Allegheny Family Screening Tool (AFST). Allegheny County implemented the AFST in 2016, becoming the first jurisdiction to use a PRM in child welfare screening. Rhema Vaithianathan, co-director of the Centre for Social Data Analytics, and members of the Children’s Data Network, including Emily Putnam-Hornstein, developed the model and screening tool.

An animated image from the film Minority Report showing a red ball descending a curved clear tube.
Machine learning can be used to predict adverse events. (Image source: Giphy.com)

The algorithm does not predict abuse per se. Rather, it predicts future involvement with the social services system. Using historical data from numerous Allegheny County departments, the AFST considers factors such as parental history, evidence of substance abuse, and criminal background when determining risk.

Making the first call count

How does all this technology help children? A critical juncture in child welfare administration is the initial screening. Social workers, with scant resources and little time for research, often rely on a phone call and instinct to decide whether to “screen a case in.” The algorithm’s designers discovered that, in Allegheny County at least, those quick judgment calls were often wrong: over a quarter of high-risk cases were being “screened out.”

Rhema Vaithianathan discusses the building of the Allegheny Family Screening Tool.

The AFST comes in at this precise moment, before the screener makes a decision. The tool predicts the risk of future system involvement, showing the screener a score from 1 to 20. Social workers may choose to override the tool’s recommendation in certain cases, but Allegheny County officials believe it improves decision-making. And they have research to back it up.
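As a rough sketch of how a raw risk estimate might be turned into that kind of screener-facing score, consider the Python below. The score mapping and the screen-in cutoff are assumptions made for illustration, not Allegheny County's actual policy.

```python
# A hedged sketch: turning a model's risk probability into a 1-20 score that
# a screener sees, with room for a human override. The mapping and the cutoff
# are invented for illustration; they are not the AFST's actual rules.
import numpy as np

def to_screening_score(risk_probability):
    """Map a probability in [0, 1] onto an integer score from 1 to 20."""
    return int(np.clip(np.ceil(risk_probability * 20), 1, 20))

def screening_decision(score, worker_override=None):
    """The tool recommends; the human screener can still override."""
    if worker_override is not None:
        return "screen in" if worker_override else "screen out"
    return "screen in" if score >= 15 else "use clinical judgment"  # assumed cutoff

score = to_screening_score(0.82)
print(score, screening_decision(score))         # the tool's recommendation
print(score, screening_decision(score, False))  # a worker overriding it
```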

Bias in, bias out

Critics point out that these models are not without pitfalls. Two of the most concerning are data and algorithmic bias. The concepts are simple: train your model on biased data, and it will produce biased predictions. Similar software used to predict recidivism in several states was “nearly twice as likely to be inaccurate when assessing” imprisoned African-Americans as when assessing their white counterparts.
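A toy simulation makes the mechanism easy to see. In the sketch below, two groups have identical underlying risk, but one group's incidents are reported more often; a model trained on those reports learns to score that group higher. Every number here is invented.

```python
# A toy illustration of "bias in, bias out": if one group is over-reported in
# the training data, a model trained on that data can score that group higher
# even when the true underlying risk is identical. All numbers are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, n)                  # 0 = group A, 1 = group B
true_risk = rng.random(n) < 0.10               # identical underlying risk

# Biased labels: group B's incidents are recorded twice as often.
report_rate = np.where(group == 1, 0.8, 0.4)
label = true_risk & (rng.random(n) < report_rate)

X = np.column_stack([group, rng.random(n)])    # group membership leaks into the features
model = LogisticRegression().fit(X, label)
scores = model.predict_proba(X)[:, 1]

for g in (0, 1):
    print(f"group {g}: mean predicted risk = {scores[group == g].mean():.3f}")
```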

African-Americans and the poor are overrepresented in the data of public institutions such as child protective services, so how much should we trust a model trained on that data? In the series, Erin Dalton, Deputy Director of the Allegheny Office of Analysis, Technology and Planning, admits that what the AFST predicts is a “function of who gets reported.” AI might have saved Gabriel Fernandez, but it also has the potential to endanger many other children.

A circle diagram. At the top center is a blue box with the words Real World Bias. The arc of the circle below it is captioned "Is reflected in." At the right is a blue box with the words Data Bias. The arc of the circle below it is captioned "Is exposed by." The arc of the circle above it is captioned "Is acted upon by." At the left is a blue box with the words Business Bias. The arc of the circle above it is captioned "Which impacts." The arc ends in an arrow that points to the top box.
Anthony J. Bradley illustrates how bias reinforces itself at every point of the machine learning process. 

Though the AFST does not explicitly use race as a factor for analysis, other factors (zip code, criminal justice history) are often proxies for it. In fact, the tool does produce higher scores for black children than for white children.

The model also considers prior reports to child protective services in its analysis, introducing the potential for a self-reinforcing feedback loop: families already in the data score higher, higher scores invite more intervention, and intervention generates yet more records. But proponents of this use of PRMs argue that there are ways to mitigate discriminatory outcomes. The AFST’s Ethical Analysis emphasizes that it is a screening tool: workers who screen a case in must follow up in person to confirm or refute abuse claims. Still, it acknowledges the burden that county intervention imposes even on those cleared of wrongdoing.
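Here is a minimal, purely hypothetical simulation of that loop. The scoring rule and threshold are stand-ins invented for illustration, not anything Allegheny County actually uses.

```python
# A toy simulation of the feedback loop critics worry about: past reports
# raise the score, higher scores lead to more screen-ins, and screen-ins
# generate new records that feed the next score. All parameters are invented.
def simulate_feedback(prior_reports, rounds=5, threshold=3):
    history = [prior_reports]
    for _ in range(rounds):
        score = history[-1]                      # stand-in for a model score
        screened_in = score >= threshold
        # A screen-in adds another system record, whether or not abuse is found.
        history.append(history[-1] + (1 if screened_in else 0))
    return history

print(simulate_feedback(prior_reports=3))  # family already in the data: 3, 4, 5, 6, 7, 8
print(simulate_feedback(prior_reports=1))  # family not in the data: 1, 1, 1, 1, 1, 1
```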

Transparency and open data

The use of AI in child protective services raises another common machine learning issue: the black box problem. In short, it is difficult, and in some cases impossible, to know exactly how an algorithm makes its decisions. Vaithianathan identifies regular ethical reviews, independent evaluation, and openness about data and methods as necessary for ensuring algorithmic transparency and accountability. For a model to serve the public interest, its inputs and processes should be open, and its outputs published and reviewed.

The word Input followed by an arrow pointing right and a large black cube labeled Black Box followed by an arrow pointing right at the word Output. Underneath the cube is the sentence "Internal behavior of the code is unknown." Underneath the image is the caption: It can be difficult to understand where the output really comes from.
(Image source: Matthew Cress)
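One common way to pry the box open is to measure how much each input actually drives a model's predictions. The Python sketch below uses permutation importance on a synthetic stand-in model; the features and data are invented and have nothing to do with the AFST.

```python
# A hedged sketch of one way to peek inside a "black box": permutation
# importance shows how much each input drives the model's predictions.
# The model and features here are synthetic stand-ins, not the AFST.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(2)
feature_names = ["prior_referrals", "household_size", "unrelated_noise"]
X = rng.random((500, 3))
y = (X[:, 0] > 0.6).astype(int)                # outcome driven by one feature

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, importance in zip(feature_names, result.importances_mean):
    print(f"{name}: {importance:.3f}")
```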

Allegheny County owns its tool and data and regularly publishes process and impact reports. Not all human services programs do, however. Agencies in several states use a PRM developed by Eckerd Connects, a family services provider. Eckerd staff review the high-risk cases the company’s Rapid Safety Feedback tool flags. Though Eckerd is a non-profit, its for-profit arm sells the software used to administer the service. As such, the inputs, measures, and processes of its algorithm are proprietary.

In 2017, Illinois ceased using the tool over concerns about the model’s output and Eckerd’s lack of transparency. Those who’ve watched the series may also be concerned about the practice of contracting out child welfare services: in Gabriel’s case, outsourced social workers were accused of neglecting actionable reports about the danger he was in.

Saving lives with AI

The effects of machine learning on fields such as disaster relief and medicine prove that AI can save lives. In fact, it already has. Whether it could have saved Gabriel’s is an open question. What is known is that with its power to aid people comes the power to harm them. After all, we did create it in our own image.

Still, there are many ways in which machine learning’s ability to detect patterns and make predictions far exceeds our own, making it indispensable to our present and future. It is up to us to ensure that these technologies are transparent and regularly reviewed. The lives we save may be our own.