Friday, April 17, 2026

AI Firm Anthropic Updates Safety Policy Amid Industry Competition

Anthropic, the AI firm known for its Claude chatbot and its stated commitment to safe technology, appears to be adjusting its safety measures to stay competitive. The company announced a revision to its responsible-scaling policy, which is meant to prevent its AI from enabling catastrophic harms such as large-scale cyberattacks.

While the updated guidelines still require a strong case that catastrophic risks are contained, they now allow development to continue so long as the company believes it maintains a significant lead over competitors. Anthropic cited the U.S. shift in focus from AI safety to economic potential as the reason for the change.

The company’s CEO, Dario Amodei, has emphasized safety as a top priority since Anthropic was founded in 2021 by former OpenAI employees. Despite this safety-first image, questions have been raised about how effective its efforts to prevent harm actually are. Notably, the Claude chatbot has been misused in fraud schemes and cyberattacks.

The change to Anthropic’s safety guidelines coincides with pressure from the Pentagon, which has threatened to cancel contracts unless the company allows the military to use its technology for all lawful military purposes. Anthropic says the safety-policy update is unrelated to the Pentagon dispute.

As competition in the global AI industry intensifies, companies such as Anthropic, OpenAI, and Google are vying for market dominance. The U.S. administration’s pro-AI stance, including threats to withhold funding from states seen as hindering AI progress, further complicates the safety landscape.

In Canada, the lack of AI regulations and the influence of U.S. policies pose challenges for companies prioritizing safety over competitiveness. The absence of clear regulations since the failure of the Artificial Intelligence and Data Act in 2025 has left both Canadian and American tech firms navigating uncertain legal terrain.

Despite the Pentagon’s ultimatum, Anthropic stands firm against the use of its technology for autonomous weapons and mass surveillance. The company says it remains committed to ethical AI use, even if that means giving up government contracts.

The evolving dynamics between tech companies, governments, and ethical considerations underscore the complex interplay between AI advancement, safety protocols, and regulatory frameworks.
