The Problems Of The NCMEC CyberTipline Apply To All Stakeholders

DATE POSTED: April 25, 2024

The failures of the NCMEC CyberTipline to combat child sexual abuse material (CSAM) as effectively as it could are extremely frustrating. But as you look at the details, you realize there just aren’t any particularly easy fixes. While there are a few areas where things could improve at the margins, the deeper you look, the more challenging the whole setup becomes. There aren’t any easy answers.

And that sucks, because Congress and the media often expect easy answers to complex problems. Here, those easy answers may simply not exist.

This is the second post about the Stanford Internet Observatory’s report on the NCMEC CyberTipline, which is the somewhat useful, but tragically limited, main channel through which investigations of online CSAM are initiated. In the first post, we discussed the structure of the system, and how the incentive structure around law enforcement is a big part of what makes the system less impactful than it otherwise might be.

In this post, I want to dig a little deeper into the specific challenges in making the CyberTipline work better.

The Constitution

I’m not saying that the Constitution is a problem, but it represents a challenge here. In the first post, I briefly mentioned Jeff Kosseff’s important article about how the Fourth Amendment and the structure of NCMEC make things tricky, but it’s worth digging in a bit here to understand the details.

The US government set up NCMEC as a private non-profit in part because, if it were a government agency doing this work, there would be significant Fourth Amendment concerns about evidence collected without a warrant. If NCMEC were a government agency, the law could not require companies to hand over the info without a warrant.

So, Congress did a kind of two-step dance here: they set up this “private” non-profit, and then created a law that requires companies that come across CSAM online to report it to the organization. And all of this seems to rely on a kind of fiction that if we pretend NCMEC isn’t a government agent, then there’s no 4th Amendment issue.

From the Stanford report:

The government agent doctrine explains why Section 2258A allows, but does not require, online platforms to search for CSAM. Indeed, the statute includes an express disclaimer that it does not require any affirmative searching or monitoring. Many U.S. platforms nevertheless proactively monitor their services for CSAM, yielding millions of CyberTipline reports per year. Those searches’ legality hinges on their voluntariness. The Fourth Amendment prohibits unreasonable searches and seizures by the government; warrantless searches are typically considered unreasonable. The Fourth Amendment doesn’t generally bind private parties, however the government may not sidestep the Fourth Amendment by making a private entity conduct a search that it could not constitutionally do itself. If a private party acts as the government’s “instrument or agent” rather than “on his own initiative” in conducting a search, then the Fourth Amendment does apply to the search. That’s the case where a statute either mandates a private party to search or “so strongly encourages a private party to conduct a search that the search is not primarily the result of private initiative.” And it’s also true in situations where, with the government’s knowledge or acquiescence, a private actor carries out a search primarily to assist the government rather than to further its own purposes, though this is a case-by-case analysis for which the factors evaluated vary by court.

Without a warrant, searches by government agents are generally unconstitutional. The usual remedy for an unconstitutional search is for a court to throw out all evidence obtained as a result of it (the so-called “exclusionary rule”). If a platform acts as a government agent when searching a user’s account for CSAM, there is a risk that the resulting evidence could not be introduced against the user in court, making a conviction (or plea bargain) harder for the prosecution to obtain. This is why Section 2258A does not and could not require online platforms to search for CSAM: it would be unconstitutional and self-defeating.

In CSAM cases involving CyberTipline reports, defendants have tried unsuccessfully to characterize platforms as government agents whose searches were compelled by Section 2258A and/or by particular government agencies or investigators. But courts, pointing to the statute’s express disclaimer language (and, often, the testimony of investigators and platform employees), have repeatedly held that platforms are not government agents and their CSAM searches were voluntary choices motivated mainly by their own business interests in keeping such repellent material off their services.

So, it’s quite important that the service providers that are finding and reporting CSAM are not seen as agents of the government, because that would destroy the ability to use the evidence in prosecuting cases. And, as the report notes, it’s also why it would be a terrible idea to require social media platforms to proactively hunt down CSAM. If the government required it, it would effectively light all that evidence on fire and prevent its use in prosecutions.

That said, the courts (including in a ruling by Neil Gorsuch while he was on the appeals court) have made it pretty damn clear that, while platforms may not be government agents, NCMEC and the CyberTipline are. And that creates some difficulties.

In a landmark case called Ackerman, one federal appeals court held that NCMEC is a “governmental entity or agent.” Writing for the Tenth Circuit panel, then-judge Neil Gorsuch concluded that NCMEC counts as a government entity in light of NCMEC’s authorizing statutes and the functions Congress gave it to perform, particularly its CyberTipline functions. Even if NCMEC isn’t itself a governmental entity, the court continued, it acted as an agent of the government in opening and viewing the defendant’s email and four attached images that the online platform had (as required) reported to NCMEC. The court ruled that those actions by NCMEC were a warrantless search that rendered the images inadmissible as evidence. Ackerman followed a trial court-level decision, Keith, which had also deemed NCMEC a government agent: its review of reported images served law enforcement interests, it operated the CyberTipline for public not private interests, and the government exerts control over NCMEC including its funding and legal obligations. As an appellate-level decision, Ackerman carries more weight than Keith, but both have proved influential.

The private search doctrine is the other Fourth Amendment doctrine commonly raised in CSAM cases. It determines what the government or its agents may view without a warrant upon receiving a CyberTipline report from a platform. As said, the Fourth Amendment generally does not apply to searches by private parties. “If a private party conducted an initial search independent of any agency relationship with the government,” the private search doctrine allows law enforcement (or NCMEC) to repeat the same search so long as they do not exceed the original private search’s scope. Thus, if a platform reports CSAM that its searches had flagged, NCMEC and law enforcement may open and view the files without a warrant so long as someone at the platform had done so already. The CyberTipline form lets the reporting platform indicate which attached files it has reviewed, if any, and which files were publicly available.

For files that were not opened by the platform (such as where a CyberTipline submission is automated without any human review), Ackerman and a 2021 Ninth Circuit case called Wilson hold that the private search exception does not apply, meaning the government or its agents (i.e., NCMEC) may not open the unopened files without a warrant. Wilson disagreed with the position, adopted by two other appeals-court decisions, that investigators’ warrantless opening of unopened files is permissible if the files are hash matches for files that had previously been viewed and confirmed as CSAM by platform personnel. Ackerman concluded by predicting that law enforcement “will struggle not at all to obtain warrants to open emails when the facts in hand suggest, as they surely did here, that a crime against a child has taken place.”

To sum up: Online platforms’ compliance with their CyberTipline reporting obligations does not convert them into government agents so long as they act voluntarily in searching their platforms for CSAM. That voluntariness is crucial to maintaining the legal viability of the millions of reports platforms make to the CyberTipline each year. This imperative shapes the interactions between platforms and U.S.-based legislatures, law enforcement, and NCMEC. Government authorities must avoid crossing the line into telling or impermissibly pressuring platforms to search for CSAM or what to search for and report. Similarly, platforms have an incentive to maintain their CSAM searches’ independence from government influence and to justify those searches on rationales “separate from assisting law enforcement.” When platforms (voluntarily) report suspected CSAM to the CyberTipline, Ackerman and Wilson interpret the private search doctrine to let law enforcement and NCMEC warrantlessly open and view only user files that had first been opened by platform personnel before submitting the tip or were publicly available.

This is all pretty important in making sure that the whole system stays on the right side of the 4th Amendment. As much as some people really want to force social media companies to proactively search for and report CSAM, mandating that creates real problems under the 4th Amendment.

As for the NCMEC and law enforcement side of things, the requirement to get a warrant for unopened communications remains important. But, as noted below, sometimes law enforcement doesn’t want to get a warrant. If you’ve been reading Techdirt for any length of time, this shouldn’t surprise you. We see all sorts of areas where law enforcement refuses to take that basic step of getting a warrant.

That framing is important to understanding the rest of this, including where each of the stakeholders falls down. Let’s start with the biggest problem of all: where law enforcement fails.

Law Enforcement

In the first article on this report, we noted that the incentive structure has made it such that law enforcement often tries to evade this entire process. Agencies sometimes don’t want to go through the process of getting warrants. They don’t want to associate with the ICAC task forces because they feel it puts too much of a burden on them, and if they don’t take care of it, someone else on the task force will. And sometimes they don’t want to deal with CyberTipline reports at all, because they’re afraid that if they’re too slow after getting a report, they might face liability.

Most of these issues seem to boil down to law enforcement not wanting to do its job.

But the report details some of the other challenges for law enforcement. And it starts with just how many reports are coming in:

Almost across the board law enforcement expressed stress over their inability to fully investigate all CyberTipline reports due to constraints in time and resources. An ICAC Task Force officer said “You have a stack [of CyberTipline reports] on your desk and you have to be ok with not getting to it all today. There is a kid in there, it’s really quite horrible.” A single Task Force detective focused on internet crimes against children may be personally responsible for 2,000 CyberTipline reports each year. That detective is responsible for working through all of their tips and either sending them out to affiliates or investigating them personally. This process involves reading the tip, assessing whether a crime was committed, and determining jurisdiction; just determining jurisdiction might necessitate multiple subpoenas. Some reports are sent out to affiliates and some are fully investigated by detectives at the Task Force.

An officer at a Task Force with a relatively high CyberTipline report arrest rate said “we are stretched incredibly thin like everyone.” An officer in a local police department said they were personally responsible for 240 reports a year, and that all of them were actionable. When asked if they felt overwhelmed by this volume, they said yes. While some tips involve self-generated content requiring only outreach to the child, many necessitate numerous search warrants. Another officer, operating in a city with a population of 100,000, reported receiving 18–50 CyberTipline reports annually, actively investigating around 12 at any given time. “You have to manage that between other egregious crimes like homicides,” they said. This report will not extensively cover the issue of volume and law enforcement capacity, as this challenge is already well-documented and detailed in the 2021 U.S. Department of Homeland Security commissioned report, in Cullen et al., and in a 2020 Government Accountability Office report. “People think this is a one-in-a-million thing,” a Task Force officer said. “What they don’t know is that this is a crime of secrecy, and could be happening at four of your neighbors’ houses.”

And of course, making social media platforms more liable doesn’t fix much here. If anything, it makes things worse, because it encourages even more reporting by the platforms, which only further overloads law enforcement.

Given all those reports the cops are receiving, you’d hope they had a good system for managing them. But your hope would not be fulfilled:

Law enforcement pick a certain percentage of reports to investigate. The selection is not done in a very scientific way—one respondent described it as “They hold their finger up in the air to feel the wind.” An ICAC Task Force officer said triage is more of an art than a science. They said that with experience you get a feel for whether a case will have legs, but that you can never be certain, and yet you still have to prioritize something.

That seems less than ideal.

Another problem, though, is that a lot of the reports are not prosecutable at all. Because of the incentives discussed in the first post, apparently certain known memes get reported to the CyberTipline quite frequently, and police feel they just clog up the system. But because the platforms fear significant liability if they don’t report those memes, they keep reporting them.

U.S. law requires that platforms report this content if they find it, and that NCMEC send every report to law enforcement. When NCMEC knows a report contains viral content or memes they will label it “informational,” a category that U.S. law enforcement typically interpret as meaning the report can be ignored, but not all such reports get labeled “informational.” Additionally there are an abundance of “age difficult” reports that are unlikely to lead to prosecution. Law enforcement may have policies requiring some level of investigation or at least processing into all noninformational reports. Consequently, officers often feel inundated with reports unlikely to result in prosecution. In this scenario, neither the platforms, NCMEC, nor law enforcement agencies feel comfortable explicitly ignoring certain types of reports. An employee from a platform that is relatively new to NCMEC reporting expressed the belief that “It’s best to over-report, that’s what we think.”

At the very least, this seems to annoy law enforcement, but it’s a function of how the system works:

An officer expressed frustration over platforms submitting CyberTipline reports that, in their view, obviously involve adults: “Tech companies have the ability to […] determine with a high level of certainty if it’s an adult, and they need to stop sending [tips of adults].” This respondent also expressed a desire that NCMEC do more filtering in this regard. While NCMEC could probably do this to some extent, they are again limited by the fact that they cannot view an image if the platform did not check the “reviewed” box (Figure 5.3 on page 26). NCMEC’s inability to use cloud services also makes it difficult for them to use machine learning age classifiers. When we asked NCMEC about the hurdles they face, they raised the “firehose of I’ll just report everything” problem.

Again, this all seems pretty messy. Of course you want companies to report anything they find that might be CSAM. And, of course, you want NCMEC to pass those reports on to law enforcement. But the end result is overwhelmed law enforcement agencies with no clear triage process, dealing with a lot of reports that were filed out of an abundance of caution but that aren’t actually useful.

And, of course, there are other challenges that policymakers probably don’t think about. For example: how do you deal with hacked accounts? How much information is it right for the company to share with law enforcement?

One law enforcement officer provided an interesting example of a type of report he found frustrating: he said he frequently gets reports from one platform where an account was hacked and then used to share CSAM. This platform provided the dates of multiple password changes in the report, which the officer interpreted as indicating the account had been hacked. Despite this, they felt obligated to investigate the original account holder. In a recent incident they described, they were correct that the account had been hacked. They expressed that if the platform explicitly stated their suspicion in the narrative section of the report, such as by saying something like “we think this account may have been hacked,” they would then feel comfortable de-prioritizing these tips. We subsequently learned from another respondent that this platform provides time stamps for password changes for all of their reports, putting the burden on law enforcement to assess whether the password changes were of normal frequency, or whether they reflected suspicious activity.

With that said, the officer raised a valid issue: whether platforms should include their interpretation of the information they are reporting. One platform employee we interviewed who had previously worked in law enforcement acknowledged that they would have found the platform’s unwillingness to explicitly state their hunch frustrating as well. However, in their current role they also would not have been comfortable sharing a hunch in a tip: “I have preached to the team that anything they report to NCMEC, including contextual information, needs to be 100% accurate and devoid of personal interpretation as much as possible, in part because it may be quoted in legal process and case reports down the line.” They said if a platform states one thing in a tip, but law enforcement discovers that is not the case, that could make it more difficult for law enforcement to prosecute, and could even ruin their case. Relatedly, a former platform employee said some platforms believe if they provide detailed information in their reports courts may find the reports inadmissible. Another platform employee said they avoid sharing such hunches for fear of it creating “some degree of liability [even if ] not legal liability” if they get it wrong.
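To make concrete the kind of judgment call that gets pushed onto an overloaded detective, here’s a minimal sketch in Python of what “assessing whether the password changes were of normal frequency” might look like if automated. Everything here is a hypothetical illustration: the `password_changes` input, the 30-day window, and the “more than two changes is suspicious” threshold are assumptions made up for the example, not anything drawn from NCMEC’s actual report format or any platform’s practice.

```python
from datetime import datetime, timedelta

def looks_like_account_takeover(password_changes, report_time,
                                window_days=30, max_normal_changes=2):
    """Heuristic sketch: a burst of password changes shortly before the
    reported activity *might* indicate the account was hijacked.
    Thresholds here are illustrative guesses, not an established standard."""
    report_dt = datetime.fromisoformat(report_time)
    window_start = report_dt - timedelta(days=window_days)
    recent = [
        ts for ts in (datetime.fromisoformat(t) for t in password_changes)
        if window_start <= ts <= report_dt
    ]
    return len(recent) > max_normal_changes

# Hypothetical tip: three password changes in the two weeks before the report.
changes = ["2024-03-30T08:12:00", "2024-04-02T21:40:00", "2024-04-09T03:05:00"]
print(looks_like_account_takeover(changes, "2024-04-10T12:00:00"))  # True
```

The point isn’t that this particular heuristic is right; it’s that a platform that ships only raw timestamps, without stating its own suspicion, is implicitly asking each detective to reinvent something like this on every tip.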

The report details how local prosecutors are also loath to bring cases, because it’s tricky to find a jury that can handle a CSAM case:

It is not just police chiefs who may shy away from CSAM cases. An assistant U.S. attorney said that potential jurors will disqualify themselves from jury duty to avoid having to think about and potentially view CSAM. As a result, it can take longer than normal to find a sufficient number of jurors, deterring prosecutors from taking such cases to trial. There is a tricky balance to strike in how much content to show jurors, but viewing content may be necessary. While there are many tools to mitigate the effect of viewing CSAM for law enforcement and platform moderators, in this case the goal is to ensure that those viewing the content understand the horror. The assistant U.S. attorney said that they receive victim consent before showing the content in the context of a trial. Judges may also not want to view content, and may not need to if the content is not contested, but seeing it can be important as it may shape sentencing decisions.

There are also law enforcement issues outside the US. As noted in the first article, NCMEC has become the de facto global reporting center, because so many companies are based in the US and report there. And the CyberTipline tries to share reports with foreign law enforcement too, but that’s difficult:

For example, in the European Union, companies’ legal ability to voluntarily scan for CSAM required the passage of a special exception to the EU’s so-called “ePrivacy Directive”. Plus, against a background where companies are supposed to retain personal data no longer than reasonably necessary, EU member states’ data retention laws have repeatedly been struck down on privacy grounds by the courts for retention periods as short as four or ten weeks (as in Germany) and as long as a year (as in France). As a result, even if a CyberTipline report had an IP address that was linked to a specific individual and their physical address at the time of the report, it may not be possible to retrieve that information after some amount of time.

Law enforcement agencies abroad have varying approaches to CyberTipline reports and triage. Some law enforcement agencies will say if they get 500 CyberTipline reports a year, that will be 500 cases. Another country might receive 40,000 CyberTipline reports that led to just 150 search warrants. In some countries the rate of tips leading to arrests is lower than in the U.S. Some countries may find that many of their CyberTipline reports are not violations of domestic law. The age of consent may be lower than in the U.S., for example. In 2021 Belgium received about 15,000 CyberTipline reports, but only 40% contained content that violated Belgian law.

And in lower-income countries, the problems can be even worse, including confusion about how the entire CyberTipline process works.

We interviewed two individuals in Mexico who outlined a litany of obstacles to investigating CyberTipline reports even where a child is known to be in imminent danger. Mexican federal law enforcement have a small team of people who work to process the reports (in 2023 Mexico received 717,468 tips), and there is little rotation. There are people on this team who have been viewing CyberTipline reports day in and day out for a decade. One respondent suggested that recent laws in Mexico have resulted in most CyberTipline reports needing to be investigated at the state level, but many states lack the know-how to investigate these tips. Mexico also has rules that require only specific professionals to assess the age of individuals in media, and it can take months to receive assessments from these individuals, which is required even if the image is of a toddler.

The investigator also noted that judges often will not admit CyberTipline reports as evidence because they were provided proactively and not via a court order as part of an investigation. They may not understand that legally U.S. platforms must report content to NCMEC and that the tips are not an extrajudicial invasion of privacy. As a result, officers may need a court order to obtain information that they already have in the CyberTipline report, confusing platforms who receive requests for data they put in a report a year ago. This issue is not unique to Mexico; NCMEC staff told us that they see “jaws drop” in other countries during trainings when they inform participants about U.S. federal law that requires platforms to report CSAM.

NCMEC Itself

The report also details some of the limitations of NCMEC and the CyberTipline itself, some of which are imposed by law (and where it seems like the law should be updated).

There appears to be a big issue with repeat reports, where NCMEC needs to “deconflict” them, but has limited technology to do so:

Improvements to the entity matching process would improve CyberTipline report prioritization processes and detection, but implementation is not always as straightforward as it might appear. The current automated entity matching process is based solely on exact matches. Introducing fuzzy matching, which would catch similarity between, for example, bobsmithlovescats1 and bobsmithlovescats2, could be useful in identifying situations where a user, after suspension, creates a new account with an only slightly altered username. With a more expansive entity matching system, a law enforcement officer proposed that tips could gain higher priority if certain identifiers are found across multiple tips. This process, however, may also require an analyst in the loop to assess whether a fuzzy match is meaningful.

It is common to hear of instances where detectives received dozens of separate tips for the same offender. For instance, the Belgium Federal Police noted receiving over 500 distinct CyberTipline reports about a single offender within a span of five months. This situation can arise when a platform automatically submits a tip each time a user attempts to upload CSAM; if the same individual tries to upload the same CSAM 60 times, it could result in 60 separate tips. Complications also arise if the offender uses a Virtual Private Network (VPN); the tips may be distributed across different law enforcement agencies. One respondent told us that a major challenge is ensuring that all tips concerning the same offender are directed to the same agency and that the detective handling them is aware that these numerous tips pertain to a single individual.
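The fuzzy-matching idea in that excerpt is straightforward to sketch. Below is a minimal, purely illustrative Python example (nothing like NCMEC’s actual entity-matching system, which the report says is exact-match only) that clusters hypothetical tips whose usernames are nearly identical, so that bobsmithlovescats1 and bobsmithlovescats2 end up attributed to one likely offender rather than two unrelated ones. As the report notes, a real deployment would still want an analyst in the loop to confirm that a fuzzy match is meaningful.

```python
from difflib import SequenceMatcher

def similar(a, b, threshold=0.9):
    """Fuzzy match: treat two usernames as the same entity if they are
    nearly identical (e.g., only a trailing digit differs)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def cluster_tips(tips):
    """Greedy clustering: add each tip to the first cluster whose
    representative username it fuzzy-matches, else start a new cluster."""
    clusters = []
    for tip in tips:
        for cluster in clusters:
            if similar(tip["username"], cluster[0]["username"]):
                cluster.append(tip)
                break
        else:
            clusters.append([tip])
    return clusters

# Hypothetical tips: the second account was created after a suspension.
tips = [
    {"id": 1, "username": "bobsmithlovescats1"},
    {"id": 2, "username": "bobsmithlovescats2"},
    {"id": 3, "username": "unrelated_user"},
]
for cluster in cluster_tips(tips):
    print([t["id"] for t in cluster])   # [1, 2] then [3]
```

An exact-match system, by contrast, would treat tips 1 and 2 as entirely separate people, which is exactly the gap the report is describing.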

As the report notes, there are a variety of challenges, both economic and legal, in enabling NCMEC to upgrade its technology:

First, NCMEC operates with a limited budget and as a nonprofit they may not be able to compete with industry salaries for qualified technical staff. The status quo may be “understandable given resource constraints, but the pace at which industry moves is a mismatch with NCMEC’s pace.” Additionally, NCMEC must also balance prioritizing improving the CyberTipline’s technical infrastructure with the need to maintain the existing infrastructure, review tips, or execute other non-Tipline projects at the organization. Finally, NCMEC is feeding information to law enforcement, which work within bureaucracies that are also slow to update their technology. A change in how NCMEC reports CyberTipline information may also require law enforcement agencies to change or adjust their systems for receiving that information.

NCMEC also faces another technical constraint not shared with most technology companies: because the CyberTipline processes harmful and illegal content, it cannot be housed on commercially available cloud services. While NCMEC has limited legal liability for hosting CSAM, other entities currently do not, which constrains NCMEC’s ability to work with outside vendors. Inability to transfer data to cloud services makes some of NCMEC’s work more resource intensive and therefore stymies some technical developments. Cloud services provide access to proprietary machine learning models, hardware-accelerated machine learning training and inference, on-demand resource availability and easier to use services. For example, with CyberTipline files in the cloud, NCMEC could more easily conduct facial recognition at scale and match photos from the missing children side of their work with CyberTipline files. Access to cloud services could potentially allow for scaled detection of AI-generated images and more generally make it easier for NCMEC to take advantage of existing machine learning classifiers. Moving millions of CSAM files to cloud services is not without risks, and reasonable people disagree about whether the benefits outweigh the risks. For example, using a cloud facial recognition service would mean that a third party service likely has access to the image. There are a number of pending bills in Congress that, if passed, would enable NCMEC to use cloud services for the CyberTipline while providing the necessary legal protections to the cloud hosting providers.

Platforms

And, yes, there are some concerns about the platforms. But while public discussion seems to focus almost exclusively on the ways platforms have supposedly failed to take this issue seriously, the report suggests platforms’ failures are much more limited.

The report notes that it’s a bit tricky to get platforms up and running with CyberTipline reporting, and that while NCMEC will do some onboarding, that help is kept very limited to avoid some of the 4th Amendment concerns discussed above.

And, again, some of the problem with onboarding is due to outdated tech on NCMEC’s side. I mean… XML? Really?

Once NCMEC provides a platform with an API key and the corresponding manual, integrating their workflow with the reporting API can still present challenges. The API is XML-based, which requires considerably more code to integrate with than simpler JSON-based APIs and may be unfamiliar to younger developers. NCMEC is aware that this is an issue. “Surprisingly large companies are using the manual form,” one respondent said. One respondent at a small platform had a more moderate view; he thought the API was fine and the documentation “good.” But another respondent called the API “crap.”
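For readers who haven’t had to do it, the XML complaint is easy to illustrate. The sketch below uses entirely made-up field names, not NCMEC’s real schema (which is only documented in the manual the report mentions); the point is just the relative ceremony of producing an XML payload versus a JSON one from the same data.

```python
import json
import xml.etree.ElementTree as ET

# Hypothetical report fields -- NOT NCMEC's actual schema.
report = {
    "incidentType": "csam_upload",
    "reportedUsername": "example_user",
    "fileViewedByCompany": True,
    "ipAddress": "203.0.113.7",
}

# JSON-style submission: one call on a plain dict.
json_payload = json.dumps(report)

# XML-style submission: build each element by hand and handle types yourself.
root = ET.Element("report")
for key, value in report.items():
    child = ET.SubElement(root, key)
    child.text = str(value).lower() if isinstance(value, bool) else str(value)
xml_payload = ET.tostring(root, encoding="unicode")

print(json_payload)
print(xml_payload)
```

Multiply that by nested elements, attachments, and schema validation, and it’s easier to see why even surprisingly large companies fall back to the manual form.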

There are also challenges under the law about what needs to be reported. As noted above and in the first article, that can often lead to over-reporting. But it can also make things difficult for companies trying to make determinations.

Platforms will additionally face policy decisions. While prohibiting illegal content is a standard approach, platforms often lack specific guidelines for moderators on how to interpret nuanced legal terms such as “lascivious exhibition.” This term is crucial for differentiating between, for example, an innocent photo of a baby in a bathtub, and a similar photo that appears designed to show the baby in a way that would be sexually arousing to a certain type of viewer. Trust and safety employees will need to develop these policies and train moderators.

And, of course, as has been widely discussed elsewhere, it’s not great that platforms have to hire human beings and expose them to this kind of content.

However, the biggest issue with reporting seems not to be companies’ unwillingness to report, but how much information they pass along. And even here, the issue is not so much a lack of cooperation as the incentives.

Memes and viral content pose a huge challenge for CyberTipline stakeholders. In the best case scenario, a platform checks the “Potential Meme” box and NCMEC automatically sends the report to an ICAC Task Force as “informational,” which appears to mean that no one at the Task Force needs to look at the report.

In practice, a platform may not check the “Potential Meme” box (possibly due to fixable process issues or minor changes in the image that change the hash value) and also not check the “File Viewed by Company” box. In this case NCMEC is unable to view the file, due to the Ackerman and Wilson decisions as discussed in Chapter 3. A Task Force could view the file without a search warrant and realize it is a meme, but even in that scenario it takes several minutes to close out the report. At many Task Forces there are multiple fields that have to be entered to close the report, and if Task Forces are receiving hundreds of reports of memes this becomes hugely time consuming. Sometimes, however, law enforcement may not realize the report is a meme until they have invested valuable time into getting a search warrant to view the report.

NCMEC recently introduced the ability for platforms to “batch report” memes after receiving confirmation from NCMEC that that meme is not actionable. This lets NCMEC label the whole batch as informational, which reduces the burden on law enforcement.

We heard about an example where a platform classified a meme as CSAM, but NCMEC (and at least one law enforcement officer we spoke to about this meme) did not classify it as CSAM. NCMEC told the platform they did not classify the meme as CSAM, but according to NCMEC the platform said because they do consider it CSAM they were going to continue to report it. Because the platform is not consistently checking the “Potential Meme” box, law enforcement are still receiving it at scale and spending substantial time closing out these reports.

There is a related challenge when a platform neglects to mark content as “viral”. Most viral images are shared in outrage, not with an intent to harm. However, these viral images can be very graphic. The omission of the “viral” label can lead law enforcement to mistakenly prioritize these cases, unaware that the surge in reports stems from multiple individuals sharing the same image in dismay.

We spoke to one platform employee about the general challenge of a platform deeming a meme CSAM while NCMEC or law enforcement agencies disagree. They noted that everyone is doing their best to apply the Dost test. Additionally, there is no mechanism to get an assurance that a file is not CSAM: “No one blesses you and says you’ve done what you need to do. It’s a very unsettling place to be.” They added that different juries might come to different conclusions about what counts as CSAM, and if a platform fails to report a file that is later deemed CSAM the platform could be fined $300,000 and face significant public backlash: “The incentive is to make smart, conservative decisions.”
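One technical detail buried in those excerpts is worth unpacking: why “minor changes in the image that change the hash value” can defeat the “Potential Meme” labeling. Exact-match systems compare file hashes, and with a cryptographic hash even a one-byte difference (a re-encode, a crop, a screenshot of the meme) yields a completely different digest, so the altered copy no longer matches the known-meme list. Here’s a minimal Python illustration using SHA-256 and stand-in bytes:

```python
import hashlib

original = b"...bytes of a widely shared meme image..."
recompressed = original + b"\x00"   # stand-in for a trivial re-encoding change

# A hypothetical list of hashes already confirmed as non-actionable memes.
known_meme_hashes = {hashlib.sha256(original).hexdigest()}

h = hashlib.sha256(recompressed).hexdigest()
print(h in known_meme_hashes)   # False: the altered copy is no longer recognized
```

Perceptual hashes (PhotoDNA-style fingerprints that score visual similarity rather than byte identity) are the usual mitigation for this brittleness, but as the report makes clear, someone still has to actually check the box.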

This is all pretty fascinating, and suggests that while there may be ways to improve things, it’s difficult to get the structure right and make the incentives align properly.

And, again, the same incentives pressure the platforms to just overreport, no matter what:

Once a platform integrates with NCMEC’s CyberTipline reporting API, they are incentivized to overreport. Consider an explicit image of a 22-year-old who looks like they could be 17: if a platform identified the content internally but did not file a report and it turned out to be a 17-year-old, they may have broken the law. In such cases, they will err on the side of caution and report the image. Platform incentives are to report any content that they think is violative of the law, even if it has a low probability of prosecution. This conservative approach will also lead to reports from what Meta describes as “non-malicious users”—for instance, individuals sharing CSAM in outrage. Although such reports could theoretically yield new findings, such as uncovering previously unknown content, it is more likely that they overload the system with extraneous reports.

All in all, the real lesson to be taken from this report is that this shit is super complicated, like all of trust & safety, and tradeoffs abound. But here it’s way more fraught than in most cases, in terms of the seriousness of the issue, the potential for real harm, and the potentially destructive criminal penalties involved.

The report has some recommendations, though they mostly deal with things at the margins: increase funding for NCMEC, allow it to update its technology (and hire the staff to do so), and provide more information to help platforms get set up.

Of course, what’s notable is that this does not include things like “make platforms liable for any mistake they make.” This is because, as the report shows, most platforms seem to take this stuff pretty seriously already, and the liability is very clear, to the point that they often over-report to avoid it, which actually makes the results worse by overwhelming both NCMEC and law enforcement.

All in all, this report is a hugely important contribution to this discussion, and provides a ton of real-world information about the CyberTipline that was previously known basically only to the people working on it, while many observers, media, and policymakers were left in the dark.

It would be nice if Congress read this report and understood the issues. However, when it comes to things like CSAM, expecting anyone to bother with reading a big report and understanding the tradeoffs and nuances is probably asking too much.