Current data infrastructure has been able to handle the influx of cloud computing, 5G networks and video streaming, but it likely won’t be enough to support the latest digital transformations emerging from the full embrace of artificial intelligence (AI), GlobeSt.com reports.
Instead, AI likely will need a completely separate cloud-computing framework for its digital infrastructure. This new framework would need to redefine existing data center networks in terms of where particular data center clusters are located and what functionalities they possess.
Tech companies are on board with the AI trend
The oft-discussed ChatGPT AI chatbot has more than 1 million users and received a $10 billion investment from Microsoft, GlobeSt.com reports. Additionally, Amazon Web Services partnered with Stability AI in November, and Google has created a similar system called LaMDA. Meanwhile, Meta recently announced it was putting its data center buildouts on hold so it can reconfigure its server farms to meet AI’s data processing requirements.
AI platforms’ data processing needs have grown to the point that OpenAI, ChatGPT’s creator, would be unable to keep operating the platform without Microsoft’s upcoming upgraded Azure cloud platform, GlobeSt.com reports.
Why AI needs new data infrastructure
The “brain” of an AI platform like ChatGPT functions through two different “hemispheres” or “lobes,” according to GlobeSt.com: the “training” lobe, which pulls in all of the data needed to meet user content requests, and the “inference” lobe, which supports the generative platforms that answer users’ questions in a more “human” manner moments after they are asked.
The training lobe requires enormous “computational firepower” to process all of the data points needed to generate the content ChatGPT creates. Essentially, the training lobe pulls in data points and reorganizes them into a model. This process is repeated, and each time the AI entity gets better at understanding: it teaches itself how to take in information and communicate what it has learned as a human would.
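As a rough illustration of that iterative process, a model repeatedly ingesting data and adjusting itself, here is a minimal sketch of a training loop in Python. The toy linear model and update rule are simplified stand-ins for the far larger neural networks behind platforms like ChatGPT, not anyone’s actual code:

```python
# Toy "training lobe": a model repeatedly ingests data points and
# adjusts its parameters, getting a little better on every pass.
# Here the model is a simple line (y = w*x + b) fit by gradient descent.

def train(data, epochs=500, lr=0.02):
    w, b = 0.0, 0.0  # model parameters, refined on every pass
    for _ in range(epochs):  # each epoch is one pass over all the data
        for x, y in data:
            pred = w * x + b          # model's current guess
            err = pred - y            # how wrong the guess was
            # nudge the parameters to shrink the error (the "learning" step)
            w -= lr * err * x
            b -= lr * err
    return w, b

# Toy dataset following y = 2x + 1; training should recover w ≈ 2, b ≈ 1.
data = [(x, 2 * x + 1) for x in range(-5, 6)]
w, b = train(data)
print(round(w, 2), round(b, 2))
```

The expensive part in real systems is exactly this loop: repeating the ingest-and-adjust cycle over billions of parameters and data points is what demands the GPU firepower and power capacity described below.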
While it’s an interesting process, the training lobe needs not only massive computing power but also the most advanced graphics processing unit (GPU) semiconductors to function at full capacity. Additionally, any infrastructure focused on “training” AI platforms will need a lot of power, so data centers will have to be located near gigawatts of renewable energy. New liquid-based cooling systems and redesigned backup power and generator systems will also have to be installed, GlobeSt.com reports.
The other half of an AI platform’s brain, the inference lobe, which answers questions seconds after a user asks them, has its own set of needs that current data infrastructure cannot meet. The good news is that current connected data center networks can be adapted to meet those needs, but the facilities would have to be upgraded to handle the vast amount of required processing capacity. The facilities would also have to be near power substations.
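To make the contrast between the two lobes concrete, here is a companion sketch of the inference side, again using a toy linear model as a hypothetical stand-in. Applying an already-trained model to one request is cheap compared with training it; the infrastructure challenge is answering enormous volumes of such requests in moments:

```python
import time

# Toy "inference lobe": an already-trained model (here, the line
# y = 2x + 1, with parameters produced by a prior training run)
# answering individual user requests. Each request is cheap compared
# with training, but must come back in moments, and a production
# platform must serve huge volumes of them concurrently.

TRAINED_W, TRAINED_B = 2.0, 1.0  # parameters frozen after training

def answer(query_x):
    """Apply the trained model to a single user request."""
    return TRAINED_W * query_x + TRAINED_B

start = time.perf_counter()
result = answer(10)  # one user query
latency = time.perf_counter() - start
print(result)          # the model's answer: 21.0
print(latency < 1.0)   # inference is fast once training is done: True
```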
The biggest cloud computing providers today are offering data-crunching power to AI startup companies that need it, GlobeSt.com reports. They’re willing to offer it because they see the AI startups as potential long-term customers.
“There’s somewhat of a proxy war going on between the big cloud companies,” Matt McIlwain, managing director at Seattle’s Madrona Venture Group LLC, told Bloomberg. “They are really the only ones that can afford to build the really big (AI platforms) with gazillions of parameters.”