As the engine behind AI, data center networks play a critical role in interconnecting GPUs and maximizing their utilization. Reducing job completion time (JCT), the time it takes to finish an AI training job, is key to faster training of AI models and, ultimately, to cost savings. However, traditional data center technologies and designs fall short of the demanding performance and capacity requirements that AI workloads place on network infrastructure.
Seeking to avoid the vendor lock-in and supply chain bottlenecks associated with InfiniBand, enterprises are increasingly turning to Ethernet as the preferred, open networking alternative for AI data centers.
This webcast will examine the networking requirements of AI workloads and how organizations can leverage Ethernet to deliver the performance and capacity that AI/ML systems demand.