“Something like over 70 percent of [Anthropic’s] pull requests are now Claude Code–written,” Krieger told me. As for what those engineers are doing with the extra time, Krieger said they’re orchestrating the Claude codebase and, of course, attending meetings. “It really becomes apparent how much else is in the software engineering role,” he noted.
The pair fiddled with Voss water bottles and answered an array of questions from the press about an upcoming compute cluster with Amazon (Amodei says “parts of that cluster are already being used for research”) and the displacement of workers due to AI (“I don’t think you can offload your company strategy to something like that,” Krieger said).
We’d been told by spokespeople that we weren’t allowed to ask questions about policy and regulation, but Amodei offered some unprompted insight into his views on a controversial provision in President Trump’s megabill that would ban state-level AI regulation for 10 years: “If you’re driving the car, it’s one thing to say ‘we don’t have to drive with the steering wheel now.’ It’s another thing to say ‘we’ll rip out the steering wheel, and we can’t put it back in for 10 years,’” Amodei said.
What does Amodei think about the most? He says it’s the race to the bottom, where safety measures are cut in order to compete in the AI race.
“The whole puzzle of running Anthropic is that we somehow have to find a way to do both,” Amodei said, meaning the company has to compete and deploy AI safely. “You might have heard this stereotype that, ‘Oh, the companies that are the safest, they take the longest to do the safety testing. They’re the slowest.’ That’s not what we found at all.”