Judge Slams Elon Musk For Filing Vexatious SLAPP Suit Against Critic, Calling Out How It Was Designed To Suppress Speech

DATE POSTED: March 25, 2024

Self-described “free speech absolutist” Elon Musk has just had a judge slam him for trying to punish and suppress the speech of critics. Judge Charles Breyer did not hold back in his ruling dismissing Musk’s utterly vexatious SLAPP suit against the Center for Countering Digital Hate (CCDH).

Sometimes it is unclear what is driving a litigation, and only by reading between the lines of a complaint can one attempt to surmise a plaintiff’s true purpose. Other times, a complaint is so unabashedly and vociferously about one thing that there can be no mistaking that purpose. This case represents the latter circumstance. This case is about punishing the Defendants for their speech.

As a reminder, this case was brought after Musk got upset with CCDH for releasing a report claiming that hate speech was up on the site. I’ve noted repeatedly that I’m not a fan of CCDH and am skeptical of any research they release, as I’ve found them to have very shoddy, results-driven methodologies. So it’s entirely possible that the report they put out is bullshit (though there have been other reports that seem to confirm its findings).

But the way to respond to such things is, as any actual “free speech absolutist” would tell you, with more speech. Refute the findings. Explain why they were wrong. Dispute the methodology.

But Elon Musk didn’t do that.

He sued CCDH for their speech and put up a silly façade to pretend it wasn’t about that speech. Instead, he framed it as some sort of breach of contract claim plus a Computer Fraud and Abuse Act (CFAA) claim (which long-time Techdirt readers will recognize as a law intended to target “hacking” but which has regularly been used in abusive ways against “things we don’t like”).

Despite my general distrust of CCDH’s research and methodologies, this case seemed like an obvious attack on speech. I was happy to see the Public Participation Project (where I am a board member) file an amicus brief (put together by the Harvard Cyberlaw clinic) calling out that this was a clear SLAPP suit.

CCDH itself filed an anti-SLAPP motion, also calling out how the entire lawsuit was clearly an attempt to punish the organization for its speech.

And, thankfully, the judge agreed. The judge noted that it’s clear from the complaint and the details of the case that the lawsuit is about CCDH’s speech, at which point the burden shifts to the plaintiff to prove a “reasonable probability” of prevailing on the merits to allow the case to move forward.

Let’s just say that ExTwitter’s arguments don’t go over well. Its lawyers argued that the company wasn’t really complaining about CCDH’s speech, but rather about the way CCDH collected the data underlying its claims about hate speech. The judge doesn’t buy it:

But even accepting that the conduct that forms the specific wrongdoing in the state law claims is CCDH’s illegal access of X Corp. data, that conduct (scraping the X platform and accessing the Brandwatch data using ECF’s login credentials) is newsgathering—and claims based on newsgathering arise from protected activity….

It is also just not true that the complaint is only about data collection. See Reply at 3 (arguing that X Corp.’s contention that its “claims arise from ‘illegal access of data,’ as opposed to speech,” is the “artifice” at “the foundation of [this] whole case.”) (quoting Opp’n at 10). It is impossible to read the complaint and not conclude that X Corp. is far more concerned about CCDH’s speech than it is its data collection methods. In its first breath, the complaint alleges that CCDH cherry-picks data in order to produce reports and articles as part of a “scare campaign” in which it falsely claims statistical support for the position that the X platform “is overwhelmed with harmful content” in order “to drive advertisers from the X platform.” See FAC ¶ 1. Of course, there can be no false claim without communication. Indeed, the complaint is littered with allegations emphasizing CCDH’s communicative use of the acquired data. See, e.g., id. ¶¶ 17–20 (reports/articles are based on “flawed ‘research’ methodologies,” which “present an extremely distorted picture of what is actually being discussed and debated” on the X platform, in order to “silence” speech with which CCDH disagrees); id. ¶ 43 (CCDH “used limited, selective, and incomplete data from that source . . . that CCDH then presented out of context in a false and misleading manner in purported ‘research’ reports and articles.”), id. ¶ 56 (“CCDH’s reports and articles . . . have attracted attention in the press, with media outlets repeating CCDH’s incorrect assertions that hate speech is increasing on X.”).

The judge also correctly calls out that not suing for defamation doesn’t mean you can pretend you’re not suing over speech, when that’s clearly the intent here:

Whatever X Corp. could or could not allege, it plainly chose not to bring a defamation claim. As the Court commented at the motion hearing, that choice was significant. Tr. of 2/29/24 Hearing at 62:6–10. It is apparent to the Court that X Corp. wishes to have it both ways—to be spared the burdens of pleading a defamation claim, while bemoaning the harm to its reputation, and seeking punishing damages based on reputational harm.

For the purposes of the anti-SLAPP motion, what X Corp. calls its claims is not actually important. The California Supreme Court has held “that the anti-SLAPP statute should be broadly construed.” Martinez, 113 Cal. App. 4th at 187 (citing Equilon Enters. v. Consumer Cause, Inc., 29 Cal. 4th 53, 60 n.3 (2002)). Critically, “a plaintiff cannot avoid operation of the anti-SLAPP statute by attempting, through artifices of pleading, to characterize an action as a ‘garden variety breach of contract [or] fraud claim’ when in fact the liability claim is based on protected speech or conduct.” Id. at 188 (quoting Navellier, 29 Cal. 4th at 90–92); see also Baral v. Schnitt, 1 Cal. 5th 376, 393 (2016) (“courts may rule on plaintiffs’ specific claims of protected activity, rather than reward artful pleading”); Navellier, 29 Cal. 4th at 92 (“conduct alleged to constitute breach of contract may also come within constitutionally protected speech or petitioning. The anti-SLAPP statute’s definitional focus is not the form of the plaintiff’s cause of action[.]”).

This is important, as we’ve seen other efforts to try to avoid anti-SLAPP claims by pretending that a case is not about speech. It’s good to see a judge call bullshit on this. Indeed, in a footnote, the judge even calls out Elon’s lawyers for trying to mislead him about all this:

At the motion hearing, X Corp. asserted that it was “not trying to avoid defamation” and claimed to have “pleaded falsity” in paragraph 50 of the complaint. Tr. of 2/29/24 Hearing at 59:20–23. In fact, paragraph 50 did not allege falsity, or actual malice, though it used the word “incorrect.” See FAC ¶ 50 (“incorrect implications . . . that hate speech viewed on X is on the rise” and “incorrect assertions that X Corp. ‘doesn’t care about hate speech’”). When the Court asked X Corp. why it had not brought a defamation claim, it responded rather weakly that “to us, this is a contract and intentional tort case,” and “we simply did not bring it.” Tr. of 2/29/24 Hearing at 60:1–12; see also id. at 60:21–22 (“That’s not necessarily to say we would want to amend to bring a defamation claim.”).

The judge outright mocks ExTwitter’s claim that it was really only concerned with the data scraping techniques and that it would have brought the same case even if CCDH hadn’t published its report. He notes that each of the torts under which the case was brought requires ExTwitter to show harm, and the only “harms” it discusses are “harms” from the speech:

X Corp.’s many allegations about CCDH’s speech do more than add color to a complaint about data collection—they are not “incidental to a cause of action based essentially on nonprotected activity.” See Martinez, 113 Cal. App. 4th at 187. Instead, the allegations about CCDH’s misleading publications provide the only support for X Corp.’s contention that it has been harmed. See FAC ¶ 78 (breach of contract claim: alleging that CCDH “mischaracterized the data . . . in efforts to claim X is overwhelmed with harmful conduct, and support CCDH’s call to companies to stop advertising on X. . . . As a direct and proximate result of CCDH’s breaches of the ToS in scraping X, X has suffered monetary and other damages in the amount of at least tens of millions of dollars”); ¶¶ 92– 93 (intentional interference claim: alleging that Defendants “intended for CCDH to mischaracterize the data regarding X in the various reports and articles . . . to support Defendants’ demands for companies to stop advertising on X” and that “[a]s a direct and proximate result of Defendants intentionally interfering with the Brandwatch Agreements . . . X Corp. has suffered monetary and other damages of at least tens of millions of dollars”); ¶¶ 98–99 (inducing breach of contract claim: alleging that “X Corp. was harmed and suffered damages as a result of Defendants’ conduct when companies paused or refrained from advertising on X, in direct response to CCDH’s reports and articles” and that “[a]s a direct and proximate result of Defendants inducing Brandwatch to breach the Brandwatch Agreements . . . X Corp. has suffered monetary and other damages in the amount of at least tens of millions of dollars.”).

The “at least tens of millions of dollars” that X Corp. seeks as damages in each of those claims is entirely based on the allegation that companies paused paid advertising on the X platform in response to CCDH’s “allegations against X Corp. and X regarding hate speech and other types of content on X.” See id. ¶ 70. As CCDH says, “X Corp. alleges no damages that it could possibly trace to the CCDH Defendants if they had never spoken at all.” Reply at 3. Indeed, X Corp. even conceded at the motion hearing that it had not alleged damages that would have been incurred if CCDH “had scraped and discarded the information,” or scraped “and never issued a report, or scraped and never told anybody about it.” See Tr. of 2/29/24 Hearing at 7:22–8:3. The element of damages in each state law claim therefore arises entirely from CCDH’s speech.

In other words, stop your damned lying. This case was always about trying to punish CCDH for its speech.

ExTwitter could still get past the anti-SLAPP motion by showing a reasonable probability of succeeding on its claims, but no such luck. First up are the “breach of contract” claims, which are inherently silly for a variety of reasons (not all of which I’ll go over here).

But the biggest one, again, is that if this were a legitimate breach of contract claim, Musk wouldn’t be seeking “reputational damages.” But he is. And thus:

Another reason that the damages X Corp. seeks—“at least tens of millions of dollars” of lost revenue that X Corp. suffered when CCDH’s reports criticizing X Corp. caused advertisers to pause spending, see FAC ¶ 70—are problematic is that X Corp. has alleged a breach of contract but seeks reputation damages. Of course, the main problem with X Corp.’s theory is that the damages alleged for the breach of contract claim all spring from CCDH’s speech in the Toxic Twitter report, and not its scraping of the X platform. See Reply at 7 (“Because X Corp. seeks (impermissibly) to hold CCDH U.S. liable for speech without asserting a defamation claim, it is forced to allege damages that are (impermissibly) attenuated from its claimed breach.”). One way we know that this is true is that if CCDH had scraped the X platform and never spoken, there would be no damages. Cf. ACLU Br. at 12. (“Had CCDH U.S. praised rather than criticized X Corp., there would be no damages to claim and therefore no lawsuit.”). Again, X Corp. conceded this point at the motion hearing.

CCDH’s reputation damages argument is another way of saying that the damages X Corp. suffered when advertisers paused their spending in response to CCDH’s reporting was not a foreseeable result of a claimed breach.

In other words, the actual issue here wasn’t any “breach of contract”. Just the speech. Which is protected.

And that becomes important for a separate analysis of whether or not the First Amendment applies here, with the judge going through the relevant precedents and noting that there’s just nothing in the case that suggests any of the relevant damages are due to any claimed contractual breach, rather than from CCDH’s protected speech:

Here, X Corp. is not seeking some complicated mix of damages—some caused by the reactions of third parties and some caused directly by the alleged breach. The Court can say, as a matter of law, whether the single type of damages that X Corp. seeks constitutes “impermissible defamation-like publication damages that were caused by the actions and reactions of third parties to” speech or “permissible damages that were caused by [CCDH’s] breaches of contract.”….

The breach that X Corp. alleges here is CCDH’s scraping of the X platform. FAC ¶ 77. X Corp. does not allege any damages stemming directly from CCDH’s scraping of the X platform.19 X Corp. seeks only damages based on the reactions of advertisers (third parties) to CCDH’s speech in the Toxic Twitter report, which CCDH created after the scraping. See FAC ¶¶ 70, 78; see also ACLU Br. at 12 (“The damages X Corp. seeks . . . are tied to reputational harm only, with no basis in any direct physical, operational or other harm that CCDH U.S.’s alleged scraping activities inflicted on X Corp.”). That is just what the Fourth Circuit disallowed in Food Lion, 194 F.3d at 522. The speech was not the breach, as it was in Cohen. And X Corp.’s damages would not have existed even if the speech had never occurred, as in Newman, 51 F.4th at 1134. Here, there would be no damages without the subsequent speech. Accordingly, the Court can hold as a matter of law that the damages alleged are impermissible defamation-like publication damages caused by the actions of third parties to CCDH’s report.

And, finally, the court says it would be “futile” to allow ExTwitter to amend the complaint and try again. Everything here is about punishing CCDH for its speech. While the judge notes that courts should often give plaintiffs leave to amend where it makes sense, here he rightly fears that an amended complaint would serve only to drag out the litigation and further punish CCDH for that speech.

ExTwitter argued that it could amend the complaint to show that the data scraping harmed the “safety and security” of the site, and that the company was harmed by having to spend resources protecting the site. Again, the judge points out how ridiculous both arguments are.

On the “safety and security” side, the judge effectively says the legal equivalent of “I wasn’t born yesterday.” Nothing in this scraping puts users at risk. It was just a sophisticated search of Twitter’s site:

While security and safety are noble concepts, they have nothing to do with this case. The Toxic Twitter report stated that CCDH had used the SNScrape tool, “which utilizes Twitter’s search function,” to “gather tweets from” “ten reinstated accounts,” resulting in a “dataset of 9,615 tweets posted by the accounts.” Toxic Twitter at 17. There is no allegation in the complaint, and X Corp. did not assert that it could add an allegation, that CCDH scraped anything other than public tweets that ten X platform users deliberately broadcast to the world.21 No private user information was involved—no social security numbers, no account balances, no account numbers, no passwords, not even “gender, relationship status, ad interests etc.” See Meta Platforms, Inc. v. BrandTotal Ltd., 605 F. Supp. 3d 1218, 1273 (2022).
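
To make concrete how modest the conduct at issue is, here is a minimal, hypothetical sketch of the sort of public-tweet collection the report describes, using the open-source snscrape Python library that the ruling mentions. The ruling doesn’t spell out CCDH’s exact invocation, so the account handles, per-account limit, and field names below are assumptions for illustration only (and changes to X’s public endpoints since then may keep this from running today).

```python
# Hypothetical sketch: gather public tweets from a short list of accounts using
# snscrape, which wraps Twitter's own public search/profile endpoints. Account
# handles and limits are invented for illustration; this is not CCDH's actual
# script, and recent changes to X's endpoints may prevent it from running.
import snscrape.modules.twitter as sntwitter

ACCOUNTS = ["example_account_1", "example_account_2"]  # hypothetical handles
MAX_TWEETS_PER_ACCOUNT = 1_000

dataset = []
for handle in ACCOUNTS:
    # TwitterUserScraper iterates over a user's public timeline, newest first
    for i, tweet in enumerate(sntwitter.TwitterUserScraper(handle).get_items()):
        if i >= MAX_TWEETS_PER_ACCOUNT:
            break
        dataset.append({
            "account": handle,
            "date": tweet.date,
            # the text attribute is "content" or "rawContent" depending on the
            # snscrape version
            "text": getattr(tweet, "rawContent", getattr(tweet, "content", "")),
        })

print(f"Collected {len(dataset)} public tweets from {len(ACCOUNTS)} accounts")
```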

And, importantly, the judge notes that even if some users of ExTwitter want to delete their tweets, it doesn’t implicate their “safety and security” that someone might have seen (or even saved) their tweets earlier:

When asked why the collecting of public tweets implicates users’ security interests, X Corp. insisted that “this all goes to control over your data,” and that users expect that they will be able to take down their tweets later, or to change them—abilities they are robbed of when the “data” is scraped. See Tr. of 2/29/24 Hearing at 40:17–24. But even assuming that it is “very important” to a user that he be able to amend or remove his pro-neo-Nazi tweets at some point after he has tweeted them, see id. at 40:24–25, a user can have no expectation that a tweet that he has publicly disseminated will not be seen by the public before that user has a chance to amend or remove it. While scraping is one way to collect a user’s tweets, members of the public could follow that user and view his tweets in their feeds, or use the X platform’s search tool (as SNScrape did) and view his tweets that way.

X Corp.’s assertion that the scraping alleged here violates a user’s “safety and security” in his publicly disseminated tweets is therefore a non-starter.

As for the claimed harm of ExTwitter then having to “protect” its users, the judge laughs that one off as well. He notes that scraping is a common part of how the web works, even if some websites don’t like it:

The problem with this argument is that it is at odds with what X Corp. has alleged.

Although social media platforms do not like it, scraping, for various ends, is commonplace. See, e.g., ACLU Br. at 7 (“Researchers and Journalists Use Scraping to Enable Speech in the Public Interest and Hold Power to Account.”); see also id. at 8–9 (collecting sources); Andrew Sellars, “Twenty Years of Web Scraping and the Computer Fraud and Abuse Act,” 24 B.U. J. Sci. & Tech. L 372, 375 (2018) (“Web scraping has proliferated beneath the shadow of the [CFAA].”); hiQ 2022 Circuit opinion, 31 F.4th at 1202 (“HiQ points out that data scraping is a common method of gathering information, used by search engines, academic researchers, and many others. According to hiQ, letting established entities that already have accumulated large user data sets decide who can scrape that data from otherwise public websites gives those entities outsized control over how such data may be put to use . . . the public interest favors hiQ’s position.”); see also id. at 1186 (“LinkedIn blocks approximately 95 million automated attempts to scrape data every day”).

Furthermore, while the judge notes that there can, of course, be cases where scraping creates real harms for a site, this case is not an example of that. CCDH was just doing a basic search of publicly available info:

This is not such a case. Here, CCDH is alleged to have used Twitter’s own search tool to collect 9,615 public tweets from ten Twitter users, see FAC ¶ 77; Toxic Twitter at 17, and then to have announced that it did so in a public report, see id. Assuming for present purposes that this conduct amounts to the “scraping” barred by the ToS,23 the extent of CCDH’s scraping was not a mystery. As CCDH asked at the motion hearing, “What CCDH did and the specific tweets that it gathered, what tool it used, how it used that tool and what the results were are documented explicitly in its public report. So what is it that they’re investigating?” Tr. of 2/29/24 Hearing at 22: 3–7. Nor was this the kind of large-scale, commercial scraping—as in hiQ, as alleged in Bright Data—that could conceivably harm the X platform or overburden its servers. It is not plausible that this small-scale, non-commercial scraping would prompt X Corp. to divert “dozens, if not over a hundred personnel hours across disciplines,” see Tr. of 2/29/24 Hearing at 8:7–11, of resources toward the repair of X Corp.’s systems. Nor would such expenditures have been foreseeable to CCDH in 2019. In 2019, if CCDH had thought about the no-scraping provision in the ToS at all, it would have expected X Corp. to incur damages only in response to breaches of that provision that could actually harm the X platform. It would not have expected X Corp. to incur damages in connection with a technical breach of that provision that involved the use of Twitter’s search tool to look at ten users and 9,615 public tweets.

And the judge notes that even if ExTwitter did have to spend time and money to “protect” the site here, those costs wouldn’t stem from the scraping, but from CCDH’s (again, protected) speech.

And thus the anti-SLAPP motion wins, the case is dismissed, and Elon can’t file an amended complaint. Indeed, the judge calls out the silliness of ExTwitter claiming that CCDH was seeking to suppress speech, by noting that it’s quite obvious the opposite is going on here:

The Court notes, too, that X Corp.’s motivation in bringing this case is evident. X Corp. has brought this case in order to punish CCDH for CCDH publications that criticized X Corp.—and perhaps in order to dissuade others who might wish to engage in such criticism. Although X Corp. accuses CCDH of trying “to censor viewpoints that CCDH disagrees with,” FAC ¶ 20, it is X Corp. that demands “at least tens of millions of dollars” in damages—presumably enough to torpedo the operations of a small nonprofit—because of the views expressed in the nonprofit’s publications…. If CCDH’s publications were defamatory, that would be one thing, but X Corp. has carefully avoided saying that they are.

Given these circumstances, the Court is concerned that X Corp.’s desire to amend its breach of contract claim has a dilatory motive—forcing CCDH to spend more time and money defending itself before it can hope to get out from under this potentially ruinous litigation. See PPP Br. at 2 (“Without early dismissal, the free speech interests that the California legislature sought to protect will vanish in piles of discovery motions.”). As CCDH argued at the motion hearing, the anti-SLAPP “statute recognizes that very often the litigation itself is the punishment.” Tr. of 2/29/24 Hearing at 33:12–34:5. It would be wrong to allow X Corp. to amend again when the damages it now alleges, and the damages it would like to allege, are so problematic, and when X Corp.’s motivation is so clear.

Accordingly, the Court STRIKES the breach of contract claim and will not allow X Corp. to add the proposed new allegations as to that claim.

The judge also drops a hammer on the silly CFAA claims. First, the court notes that, per the Supreme Court’s Van Buren ruling, if you’re claiming losses from hacking, the loss has to come from “technological harms” associated with the unauthorized access. ExTwitter claims that the “loss” was the cost of the investigation it had to conduct to stop such violations, but the judge isn’t buying it. (For what it’s worth, Riana Pfefferkorn notes that the requirement that harms be “technological harms” came from dicta in the Supreme Court’s opinion, not a holding, but it appears that courts are treating it as one…)

When the lawsuit was filed, we called out a big part of the problem: it was a separate entity, Brandwatch, that gave CCDH access to its tools to do research on Twitter. So if ExTwitter had any complaint here, it might (weakly!) be against Brandwatch for giving CCDH access. But it can’t plausibly argue that CCDH violated the CFAA.

X Corp.’s losses in connection with “attempting to conduct internal investigations in efforts to ascertain the nature and scope of CCDH’s unauthorized access to the data,” see FAC ¶ 87, are not technological in nature. The data that CCDH accessed does not belong to X Corp., see Kaplan Decl. Ex. A at 13 (providing that users own their content and grant X Corp. “a worldwide, non-exclusive, royalty-free license”), and there is no allegation that it was corrupted, changed, or deleted. Moreover, the servers that CCDH accessed are not even X Corp.’s servers. X Corp. asserted at the motion hearing that its servers “stream data to Brandwatch servers in response to queries from a logged in user” and so “you cannot say fairly it’s not our systems.” Tr. of 2/29/24 Hearing at 28:16–20. But that is not what the complaint alleges. The complaint alleges that “X Corp. provided non-public data to Brandwatch” and “[t]hat data was then stored on a protected computer.” FAC ¶ 83; see also id. ¶ 86 (“the Licensed Materials were stored on servers located in the United States that Brandwatch used for its applications. CCDH and ECF thus knew that, in illegally using ECF’s login credentials and querying the Licensed Materials, CCDH was targeting and gaining unauthorized access to servers used by Brandwatch in the United States.”) (emphasis added); id. ¶ 29 (Twitter would stream its Licensed Materials from its servers, “including in California,” to “servers used by Brandwatch [] located in the United States, which Brandwatch’s applications accessed to enable [its] users with login credentials to analyze the data.”).28 It is therefore hard to see how an investigation by X Corp. into what data CCDH copied from Brandwatch’s servers could amount to “costs caused by harm to computer data, programs, systems, or information services.” See Van Buren, 141 S. Ct. at 1659–60.

The court also laughs off the idea that the cost of attorneys could count as “technological harm” under the CFAA. And thus, the CFAA claims are dropped.

As we noted in our original post, the other claims were throw-ins designed to piggyback on the contract and CFAA claims, sound scary, and drive up the legal fees. The judge sees them for what they are and dismisses them easily.

CCDH’s arguments for dismissing the tort claims are that: (1) the complaint shows that CCDH did not cause a breach; (2) the complaint has failed to plausibly allege a breach; (3) the complaint has failed to plausibly allege CCDH’s knowledge; and (4) the complaint fails to adequately allege damages. MTD&S at 23–26. The Court concludes that CCDH’s arguments about causation and about damages are persuasive, and does not reach its other arguments.

The court also calls out how weird it is (and how telling, in exposing Musk’s nonsense) that the complaint seeks to hold CCDH liable for Brandwatch allowing CCDH to use its services:

X Corp.’s response is that “the access is the breach.” Opp’n at 30. In other words, Brandwatch agreed to keep the Licensed Materials secure, and by allowing CCDH to access the Licensed Materials, Brandwatch necessarily—and simultaneously—breached its agreement to keep the Licensed Materials secure. The Court rejects that tortured reasoning. Any failure by Brandwatch to secure the Licensed Materials was a precondition to CCDH’s access. In addition, to the extent that X Corp. maintains that CCDH need not have done anything to impact Brandwatch’s behavior, then it is seeking to hold CCDH liable for breaching a contract to which it was not a party. That does not work either.

The complaint also includes some utter nonsense that the judge isn’t buying. An unnamed senator had called CCDH a “foreign dark money group,” and so ExTwitter suggested there might be further claims against unnamed “John Does,” but the judge points out that this conspiracy theory isn’t backed up by anything legitimate:

CCDH’s last argument is that X Corp. fails to state a claim against the Doe defendants. MTD&S at 26. X Corp. alleges that “one [unnamed] United States senator referred to CCDH as ‘[a] foreign dark money group,’” and that “[o]ther articles have claimed that CCDH is, in part, funded and supported by foreign organizations and entities whose directors, trustees, and other decision-makers are affiliated with legacy media organizations.” FAC ¶ 62. It further alleges that “CCDH is acting . . . at the behest of and in concert with funders, supporters, and other entities.” Id. ¶ 63. These allegations are vague and conclusory, and do not state a plausible claim against the Doe defendants. See Iqbal, 556 U.S. at 678 (claim is plausible “when the plaintiff pleads factual content that allows the court to draw the reasonable inference that the defendant is liable for the misconduct alleged.”).

So, case dismissed. Of course, Elon might appeal to the 9th Circuit and run up the costs of what he’ll eventually have to pay CCDH’s lawyers, just to keep trying to bully and suppress CCDH’s speech (and to chill the speech of other critics).

And, yeah, that might happen. All from the “free speech absolutist.”

On the whole, though, this is a good, clear ruling that really highlights (1) the bullshit, vexatious, censorial nature of Elon’s SLAPP suit and (2) the value and importance of a strong anti-SLAPP law like California’s.