April 27, 2025 – In the ongoing competition between Anthropic's Claude Code and OpenAI's Codex CLI, two AI-powered coding assistants designed to streamline development workflows, the latter has emerged as the preferred choice among developers, according to a TechCrunch report published today. Industry observers attribute the preference largely to Anthropic's restrictive handling of intellectual property and community engagement, which contrasts sharply with OpenAI's more permissive approach.
Both tools, launched in close succession to capture developer attention, leverage cloud-based AI models to automate coding tasks. Their licensing frameworks, however, differ significantly. OpenAI distributed Codex CLI under the open-source Apache 2.0 license, which permits commercial use and redistribution, whereas Anthropic opted for a proprietary license that requires the company's explicit approval for modifications to Claude Code. That restriction has fueled criticism, particularly after Anthropic issued a takedown notice to a developer who had reverse-engineered Claude Code's source code.

Adding to the controversy, Anthropic obscured Claude Code's source code with obfuscation techniques, a practice developers argue undermines transparency. When a developer circumvented those measures and published the de-obfuscated code on GitHub, Anthropic responded with a Digital Millennium Copyright Act (DMCA) complaint, prompting its removal. The move drew backlash on social media, where users contrasted Anthropic's actions unfavorably with OpenAI's rapid integration of community feedback into Codex CLI. Within a week of its release, OpenAI had already incorporated dozens of developer suggestions, among them a feature that lets Codex CLI work with rival AI models, including Anthropic's.
Anthropic has yet to publicly address the fallout, though industry insiders note that Claude Code remains in beta and continues to draw bug reports. Observers speculate that the obfuscation is a security precaution and that the company may shift to a more open-source strategy in future iterations. Analysts suggest the episode underscores a broader tension in AI development: balancing corporate interests with the collaborative ethos of open-source communities.
As the debate unfolds, developers are closely monitoring whether Anthropic will recalibrate its licensing model or risk ceding further ground to competitors prioritizing community-driven innovation.