Spirent Communications is a leading provider of test and assurance solutions for next-generation devices and networks. Pulse 2.0 interviewed Spirent’s VP of Wireline Product Management Aniket Khosla to learn more about the company.
Aniket Khosla’s Background
What is Aniket Khosla’s background? Khosla said:
“I pursued my bachelor’s degree in computer science at an Indian institute before continuing my education in the US, where I completed my master’s at USC with a focus on computer networking. My career in the computer networking industry began in the year 2000, marking over two decades of experience in the field. Starting as a QA engineer, I transitioned to product management and progressively advanced in my roles. With a background in the core Ethernet market and the associated test industry, I have over 22 years of expertise in this domain. I joined Spirent three and a half years ago.”
“Within Spirent, I oversee the Wireline product lines for the Automated Test & Assurance business unit, which includes the complete Ethernet hardware and software portfolio. This includes test solutions ranging from 1G to 800G Ethernet, encompassing Ethernet testers across hardware platforms and the Spirent TestCenter software platforms. My role involves product management responsibilities for all these components within the business unit.”
Significant Milestones
What have been some of the company’s most significant milestones? Khosla cited:
“One of the things that excited me most about joining Spirent was that it has always been driven by innovation and is constantly creating new industry milestones. Our recent industry-first AI workload traffic emulation platform for Ethernet is a perfect example of how we constantly work with customers to understand the challenges they face, and then create innovative, groundbreaking solutions that exceed their expectations and help them overcome major industry challenges.”
Differentiation From The Competition
What differentiates the company from its competition? Khosla affirmed:
“At Spirent, we have a strong history of innovation, such as our success with 800G technology. We are confident that our new AI test solutions will also position us strongly in the market. This reflects the culture of technological innovation that thrives within Spirent, where we continuously operate at the forefront of industry advancements. I think our ability to stay ahead is what sets us apart, and we’ve been doing that for over two decades.”
“This culture of innovation is evident in how we nurture and fund technological advancements, encouraging individuals to explore new ideas and step outside their comfort zone. Spirent instills a mindset of embracing innovation and actively promotes a culture of trying new approaches, even if they may fail, emphasizing the importance of experimentation and learning from experiences.”
Challenges With The AI Data Center
What are some of the challenges with the AI data center and how do organizations overcome them? Khosla acknowledged:
“Right now, companies want to throw more bandwidth and graphics processing units (GPUs) at the problem, but that’s not the solution. The key is optimizing the entire infrastructure, including the network, for maximum efficiency. This requires a continuous, proactive test and verification process to ensure organizations get the most out of the infrastructure as they scale out.”
What is Spirent doing along those lines?
“We’ve been at the forefront of the Ethernet test space for over two decades. With AI data centers, we’ve been working to remove the complexity of AI fabric testing in a pre-production environment. Our hardware can emulate GPUs and AI workloads on Spirent equipment without our customers needing to source, manage, and operate real GPUs and real server farms, which are hugely expensive and in short supply.”
“We’ve built the capabilities into our products so that we can emulate these GPUs in a lab, and customers can run tests on the AI Ethernet fabric to make sure there are no performance bottlenecks, which would render expensive GPUs idle. When our customers actually deploy these massive AI clusters, they’re getting the best performance possible out of their data center, optimizing their AI network investments.”
Future Of The AI Data Center
How do you see things developing in the next 12 months with the AI data center? Khosla concluded:
“NVIDIA recently unveiled advancements in its upcoming next-generation products, most notably the Blackwell line, which boasts double the performance and computing capacity of its current H100s while significantly reducing power consumption.”
“At the same time, hyperscalers like Amazon and Meta are getting into chip development, introducing chips such as Amazon’s Trainium and Inferentia. This strategic shift towards internal chip manufacturing by hyperscalers aims to lessen dependence on single vendors for comprehensive AI infrastructure solutions. Expect considerable innovation to emerge from these tech giants, particularly focused on enhancing energy efficiency and managing power consumption to support seamless scalability.”
“Furthermore, a shift is on the horizon for large language models (LLMs) within AI data centers. These centers currently operate vast language models that demand a lot of computing resources. The future landscape may involve breaking down these extensive models into smaller, specialized segments tailored for specific industries like finance, healthcare, and others. Fragmenting these large language models into more targeted components could optimize efficiency and mitigate the extensive scale currently required for their operation.”