Blink and you’ll miss it. That’s how quickly AI evolved in 2024.

OpenAI revealed a preview of its o1 model, the first AI model supposedly capable of enhanced reasoning and the biggest step yet toward artificial general intelligence (AGI). DeepMind announced AlphaFold 3, an AI capable of predicting the structures of protein complexes, and Apple introduced “Apple Intelligence”, putting powerful generative AI capabilities into the palms of millions of hands.

Barely weeks into 2025, the Consumer Electronics Show (CES) showcased a plethora of new product announcements – from Nvidia's new RTX 50-series graphics chip with AI-driven rendering to Halliday's augmented reality smart glasses – each leveraging AI as a cornerstone of its design.

AI is already so entwined with our digital ecosystems that it can no longer be regarded as a “fringe” technology. According to MIT, a breathtaking 95 percent of businesses are already utilizing AI in one form or another, and more than half are aiming for full-scale integration by 2026.

But let’s step back for a moment. AI is ubiquitous, but that does not mean it has reached maturity. Those familiar with Gartner’s Hype Cycle will know that every new technology goes through phases before it “settles” and becomes part of our business landscape.

Generative AI is now in the “peak of inflated expectations” phase, meaning the hype around the technology is likely bigger than our ability to harness and sustain it. Before the “slope of enlightenment” and the “plateau of productivity” – where successful technologies end up – we must first endure the “trough of disillusionment”. This “trough” phase is basically a reality check for the technology, where its capabilities and limitations are made apparent and expectations are redrawn. What is unique about AI, however, is that any limitations are not inherent to the technology itself, but rather our ability to leverage it.

For AI to flourish, we need to make sure we have the infrastructure in place to support it. Algorithmic workloads that use real-time inference, for instance, will suffer greatly without high-performance, low-latency connectivity. This can stifle a technology before it’s had time to prove itself, leaving many businesses out in the cold when it comes to AI adoption. The appetite for AI is there, but without the network infrastructure to support it, the fall into the “trough” is going to be long and painful. So, what can companies do to avoid such a fall and ensure they are as ready for AI as AI is for them?

Looking beyond the cloud

The rapid migration of businesses to the cloud is now well documented, with cloud-based data lakes and warehouses becoming far more viable than on-premises facilities due to the sheer amount of data now used by businesses.

When it comes to AI, however, cloud utilization isn’t just about volume or storage – it’s about performance. Most AI models, particularly those requiring intensive model training, need to leverage the kind of power density and GPU capability that can only be offered by hyperscalers in the cloud. To make the most of the AI applications hosted in these environments, businesses need to ensure they have the connectivity in place to support high-volume data transfer and low-latency performance. Selecting the perfect cloud provider won’t mean much without the network infrastructure to realize its potential.

This is where interconnection comes in. Typically, businesses will rely on the public Internet for cloud access, and while Internet speeds can be impressive, this route is prone to the same issues as any public Internet connection – congestion, bottlenecks, unpredictable data routes, variable latency, and exposure to security risks. Data center-neutral interconnection platforms, equipped with cloud and AI exchange capabilities, offer a direct form of connectivity to solve these issues, providing businesses with resilient, direct, and secure pathways to multiple leading cloud operators.

The data center landscape in 2025

Data centers are booming in response to AI. In the UK, data centers are now classified as “critical infrastructure” with all the protections that come with it. In the US, the rollout of new data centers specifically to support AI has become a key priority. This growth has not been sudden; according to our research, the US now has more than 11,000 MW of data center capacity, and the number of data center operators has increased by 250 percent in the past decade.

Selecting the right data center mix is critical for organizations aiming to harness the potential of AI. A strategic blend of hyperscalers, colocation facilities, and on-premises options offers a balanced foundation to meet the complex demands of AI. Hyperscalers, with their unmatched power density and GPU capabilities, are of course essential for training large AI models, while colocation data centers, though typically smaller, will continue to play a vital role in supporting inference – where AI algorithms process real-time tasks – due to their proximity to end users.

The renewed focus on data centers will certainly be welcomed by businesses in the US in 2025. Despite the data center boom of the past decade, vacancy rates in data center hubs have dwindled to as low as 1-4 percent as businesses race to capitalize on AI and LLM applications.

As these locations become increasingly crowded, businesses are having to broaden their horizons and seek homes for their data in secondary markets – often outside of urban centers. This “forced migration” once again puts connectivity under the spotlight. If a business utilizes a data center outside of its own city, or even state, it needs to ensure that it has the network capabilities in place to avoid latency and congestion.


The future of connectivity

When it comes to optimizing cloud workloads and migrating to available data centers, connectivity is the “make or break” technology. This is why Internet Exchanges (IXs) – physical platforms where multiple networks interconnect to exchange traffic directly with one another via peering – have become indispensable. An IX allows businesses to bypass the public Internet and find the shortest and fastest network pathways for their data, dramatically improving performance and reducing latency for all participants. Importantly, smart use of an IX facility will enable businesses to connect seamlessly to data centers outside of their “home” region, removing geography as a barrier and easing the burden on data center hubs.
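The latency gains from shorter paths are easy to verify for yourself. As a rough sketch (the hostnames below are placeholders, not real endpoints), timing TCP handshakes – each of which takes one full network round trip – gives a simple proxy for comparing a transit route against a peered route:

```python
import socket
import statistics
import time


def tcp_rtt_samples(host: str, port: int = 443, n: int = 5) -> list[float]:
    """Measure TCP handshake round-trip times to a host, in milliseconds.

    Each sample times a full TCP connect, which costs one network
    round trip - a crude but serviceable proxy for path latency.
    """
    samples = []
    for _ in range(n):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=3):
            pass
        samples.append((time.perf_counter() - start) * 1000)
    return samples


def summarize(samples: list[float]) -> dict[str, float]:
    """Reduce raw samples to median and 95th-percentile latency."""
    ordered = sorted(samples)
    p95_index = round(0.95 * (len(ordered) - 1))
    return {
        "median_ms": statistics.median(ordered),
        "p95_ms": ordered[p95_index],
    }


# Hypothetical comparison - substitute your own endpoints:
# transit = summarize(tcp_rtt_samples("cloud-endpoint.example.net"))
# peered = summarize(tcp_rtt_samples("ix-fabric.example.net"))
```

Reporting the 95th percentile alongside the median matters here: public Internet paths tend to show occasional latency spikes from congestion and route changes, so tail latency often separates a transit path from a peered one even when medians look similar.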

This form of connectivity is becoming increasingly popular, with the number of IXs in the US surging by more than 350 percent in the past decade. The use of IXs itself is nothing new, but what is relatively new is the neutral model they now employ. A neutral IX isn’t tied to a specific carrier or data center, which means businesses have more connectivity options open to them, increasing redundancy and enhancing resilience. Our own research in 2024 revealed that more than 80 percent of IXs in the US are now data center and carrier-neutral, making it the dominant interconnection model.

As businesses propel themselves forward into the age of AI, waiting for new data capacity isn’t an option. Instead, they need to gain control over their connectivity and use interconnection to overcome the geographical barriers that have traditionally held them back. AI is ready for businesses; it’s up to businesses to ensure that they are ready for AI.