July 14, 2025 – Aravind Srinivas, CEO of the U.S.-based AI startup Perplexity, announced today that the company was impressed by the performance of Moonshot AI’s Kimi K2 model in recent tests and is considering using it for post-training development.
In January of this year, Live Mint reported that Perplexity had previously incorporated DeepSeek R1 into its model training process.

Kimi K2, Moonshot AI’s first trillion-parameter open-source model, was unveiled just yesterday. Built on a Mixture of Experts (MoE) architecture with 1 trillion total parameters, of which 32 billion are activated per inference, the model places a strong emphasis on coding capabilities and proficiency in general agent tasks.
According to official information from Moonshot AI, Kimi K2 has achieved state-of-the-art (SOTA) results among open-source models on benchmarks such as SWE-bench Verified, Tau2, and AceBench, highlighting its leading capabilities in coding, agent tasks, and mathematical reasoning.