Published: October 23, 2025

I think @DavidSacks is right. Anthropic has a strategy. It will create regulatory capture. But while the below post fired people up, I haven't seen anyone point out that @jackclarkSF actually told people the strategy. A 🧵

2/ Last week, @jackclarkSF, Anthropic’s co-founder / Head of Policy, posted a speech he gave at The Curve. It had no surprises if you've talked with Jack. He is both in awe of & very worried about AI. His post crystallizes his fears into memorable stories. https://x.com/jackclarkSF/stat...

3/ But then David Sacks, venture capitalist and chair of Trump’s Council of Advisors on Science and Technology, criticized Jack’s post. This tweet set off a mini-maelstrom that swept in @sriramk , @reidhoffman , and @Scott_Wiener, among many others. https://x.com/Scott_Wiener/sta...

4/ The debate even got mainstream media coverage.

Image in tweet by Neil Chilson ⤴️⬆️🆙📈 🚀

4/ It also eventually spurred a response from Anthropic's CEO, @DarioAmodei. https://www.anthropic.com/news...

Image in tweet by Neil Chilson ⤴️⬆️🆙📈 🚀

5/ I’ve been watching this conversation, waiting for someone to point out the obvious: Jack pretty much said what Sacks accuses him of. But since no one else has pointed this out, I guess I will.

6/ But first, credit where due. Anthropic executes. Claude is a joy to use. Their policy team is super friendly and shockingly candid. As Amodei wrote, Anthropic intends to “keep being honest and straightforward, and will stand up for the policies we believe are right.” That spirit deserves respect.

7/ (I share this thread in that same spirit, and hope it is received as such.)

8/ Ok, to the story. I mentioned that Jack’s post is based on a rousing keynote he gave at The Curve conference in Berkeley, CA, to an audience that broadly shares Jack’s AI safety concerns. I was there, too. But Jack said more that didn't make it into the essay.

9/ See, Jack did Q&A at The Curve. I said earlier that his essay wasn’t surprising, but some of his answers in the Q&A were provocative and very relevant now, given Sacks’ criticism.

10/ In Q&A, Jack said that the only thing that fixes AI safety risk is a major government regime with teeth. Yes, he called for transparency mandates so strong that AI companies are see-through. But he also called for pre-deployment testing, expanded forms of liability, and more.

11/ Now, those policies probably aren't a surprise if you've followed the AI safety movement. Plenty of folks in the audience have advocated for even more onerous regulation.

12/ The spiciest part of Jack's answers, the one that supports Sacks' accusation, was his frank discussion of Anthropic's strategy. He said that Anthropic supports transparency requirements in order to scare people enough to draw attention and create the political will to pass stronger rules.

13/ Yes, he said that. Quite plainly. I am paraphrasing (barely) because the rules of The Curve are that there can be no quotes without authorization, but good faith paraphrasing of conversations is ok. I believe this is a good faith paraphrase of what Jack said.

14/ So, when Amodei in yesterday’s post explains that Anthropic supported California’s SB 53 (a transparency bill that carves out smaller companies), understand that is not their end game. And when he argues for a federal transparency framework, it’s a means, not the end.

15/ As Amodei himself explained elsewhere, “Having this national transparency standard would help not only the public but also Congress understand how the technology is developing, so that lawmakers can decide whether further government action is needed.” https://www.nytimes.com/2025/0...

16/ That’s Anthropic's strategy. Transparency is their first step toward their goal of imposing a pre-deployment testing regime with teeth.

17/ Now, what’s that have to do with regulatory capture? Sacks argues that Anthropic wants regulation in order to achieve regulatory capture. I’m not sure about that. I think Anthropic staff are deeply sincere. This isn't merely a play for market share. Now, Anthropic may not intend regulatory capture at all.

18/ (Although, as I pointed out, it is at least possible that one could sincerely believe that regulatory capture is the best way to achieve AI safety.) https://x.com/neil_chilson/sta...

19/ Ultimately, however, it doesn’t really matter whether Anthropic intends to achieve regulatory capture, or why. What matters is what will happen.

20/ And pre-approval regimes almost always result in regulatory capture. Any industry that needs government favor to pursue its business model will invest in influence.

21/ This is how capture works:
- Firms must secure government approval to deploy.
- Large incumbents absorb compliance costs; smaller challengers struggle.
- Regulators lean on incumbent firms for expertise, who frame all issues in their favor.
- Eventually, the regulated staff the regulator.

22/ Look at sectors where pre‑approval is core to the business model—defense procurement, pharmaceuticals, broadcast licensing. The more existential the permission, the greater the incentive (and budget) to shape the permission‑giver. Eventually, the rulebook reflects incumbent interests.

23/ So, putting this together: Anthropic has a plan: use transparency regulations to scare people and thus generate the political will to impose a pre-approval government regime with teeth. That regime will inevitably devolve into capture.

END/ If that isn’t a regulatory capture strategy based on fear-mongering, then what is it? Maybe it's merely a fear‑mobilization strategy whose logical endpoint is capture. Does that make you feel better?

@neil_chilson @DavidSacks @jackclarkSF Transparency != fear-mongering. Transparency = information on which to make informed decisions (which TBC could be regulate or don't regulate). If one thought policymakers / the public were at risk of making bad choices through lack of info, transparency is a pro-social solution.

@neil_chilson @DavidSacks @jackclarkSF If that’s true, what’s the game-theoretical logic behind it? How would the other players rationally respond?

@neil_chilson @DavidSacks @jackclarkSF @grok should there be a balance between “open source” and regulation? Suggestions?

@neil_chilson @DavidSacks @jackclarkSF Exactly. “Safety” often becomes the language of market consolidation. The call for regulation rarely protects the public, it protects whoever’s already closest to compliance.

@neil_chilson @DavidSacks @jackclarkSF It's obvious. A few sneaky individuals want to use fear mongering so people will support overly strict rules. They want power in the hands of a few companies who control the government agencies writing the regulations. David Sacks is trying to keep this from happening.

@neil_chilson @DavidSacks @jackclarkSF Surgical and necessary breakdown. Thank you.

@neil_chilson @DavidSacks @jackclarkSF Interesting observation! Have you discussed this further?

@neil_chilson @DavidSacks @jackclarkSF Interesting observation! It's crucial to pay attention to the details.

@neil_chilson @DavidSacks @jackclarkSF Everyone’s focused on OpenAI vs Anthropic, but the real story might be who ends up writing the “safety” laws for everyone else. 👀

@neil_chilson @DavidSacks @jackclarkSF Ah, the classic competitive weakness playbook. Policy moats are for those who can't ship slope. Maximalist builders concentrate on compounding learning, not lobbying for fear-based handicaps.

@neil_chilson @DavidSacks @jackclarkSF Very interesting analysis
