September 4, 2025 – Google’s Tensor Processing Unit (TPU), a custom AI chip developed in-house and refined through multiple generations over the years, has long been a cornerstone of the tech giant’s AI operations. Besides powering Google’s own AI services, it has also been made available to partners such as OpenAI through Google Cloud.
Until now, all of Google’s TPU chips have been hosted exclusively within its own data centers, with no deployments in external facilities. That is about to change, according to a report by The Information.

According to the report, Google has recently been in talks with smaller cloud service providers that primarily lease NVIDIA’s AI GPUs, aiming to have these companies host and deploy Google’s TPU chips in their data centers as well. Google has already reached an agreement with at least one such partner, Fluidstack.
For Google, this move could signal a major shift in its AI strategy. By leveraging its position as a leading AI ASIC (Application-Specific Integrated Circuit) designer, Google is poised to challenge NVIDIA’s dominance in the AI chip market. Moreover, deploying TPUs externally would free Google’s TPU computing capacity from the limits of its self-built data center footprint, opening up new room for growth and expansion.