Takai Kennouche joins this episode of DCD>Talks to kick off the year’s content with a job title that would have been unimaginable just a few decades ago.
As an AI architect for Viavi Solutions, Kennouche brings a unique perspective on network design in the evolving data center industry of 2025. His career reflects the industry’s increasing versatility, beginning in telecommunications and expanding into electrical and computer engineering with a focus on communication systems. Over time, as networking challenges grew more complex, Kennouche turned to artificial intelligence (AI) and machine learning (ML) as powerful tools to address these issues.
“At some point, I decided to focus heavily on exploring AI and ML applications to solve challenging problems in the telco space,” he explains. “That led me into data solutions, where my role has evolved from research scientist/engineer to lead architect. At Viavi, I work on identifying use cases, designing new architecture patterns, transforming these into products, and operationalizing solutions to help both our customers and our company solve the right problems in the right way.”
The defining factors of resilience and reliability
To begin to understand AI’s impact on network resilience and reliability, we must first consider the preexisting definitions of these concepts within network design. Viavi Solutions specializes in building hardware- and software-based systems and products to address issues related to network performance, resilience, and reliability.
“Fundamentally, this means that what gets deployed is actually ready for deployment and will not harm or diminish the value to be extracted from an operational network,” Kennouche explains.
As a key outcome for its customers, Viavi is keen to ensure that the introduction of AI applications does not redefine what resilience means, but rather heightens its importance.
Kennouche provides a practical example, explaining that the integration of AI and ML shifts focus away from performing integration and functionality tests offline before deployment and instead places it on the capability itself.
“The capability in itself is almost like a living being,” he illustrates. “It’s a data-driven entity where its behavior and function cannot be perfectly quantified in the lab and then deployed with fully understood reliability characteristics.”
“As a result, you end up in a situation where, while the capability is live in the system, certain conditions will inevitably lead to failures. These are what we call the failure modes of AI and ML models.”
In essence, Kennouche’s point is that these failure modes must be anticipated and addressed, particularly in highly sensitive, critical systems and infrastructure, such as data centers or telecommunications networks.
AI as an asset rather than a burden
One challenge that Viavi addresses through AI applications is determining the most effective approach to testing system capabilities.
Kennouche explains firstly that, alongside its traditional test and measurement tools, Viavi has been developing methods tailored specifically to ML-driven systems. This includes adversarial testing, which deliberately exposes models to inputs designed to trigger failures, helping to uncover potential weaknesses.
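To give a rough sense of the idea, the sketch below shows what a minimal adversarial-style robustness test might look like: a known-good input is repeatedly perturbed and the cases that flip the model’s verdict are collected as candidate failure modes. This is an illustrative Python sketch only, not Viavi’s implementation; the placeholder classifier, KPI values, and thresholds are hypothetical.

```python
import numpy as np

# Hypothetical sketch of adversarial-style robustness testing for an ML model
# that scores network health from a vector of KPIs. "predict" stands in for
# any trained classifier; it is not a Viavi API.

def predict(kpis: np.ndarray) -> int:
    """Placeholder network-health classifier: 1 = healthy, 0 = degraded."""
    return int(kpis.mean() > 0.5)

def find_failure_modes(baseline: np.ndarray, noise_scale: float = 0.05,
                       trials: int = 1000, seed: int = 0) -> list:
    """Randomly perturb a known-good KPI vector and collect the perturbations
    that flip the model's output, i.e. candidate failure modes."""
    rng = np.random.default_rng(seed)
    expected = predict(baseline)
    failures = []
    for _ in range(trials):
        perturbed = baseline + rng.normal(0.0, noise_scale, size=baseline.shape)
        if predict(perturbed) != expected:
            failures.append(perturbed)
    return failures

if __name__ == "__main__":
    healthy_kpis = np.array([0.7, 0.6, 0.8, 0.55])  # e.g. normalized throughput, latency scores
    modes = find_failure_modes(healthy_kpis)
    print(f"{len(modes)} of 1000 perturbations flipped the model's verdict")
```

The point of such a loop is not to prove the model safe, but to surface the conditions under which it misbehaves before the capability goes live.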
The next focus area that Kennouche describes is monitoring. To ensure network resiliency, Kennouche emphasizes the importance of comprehensive network observation, considering all components such as topology, key performance indicators (KPIs), and changes over time.
“The benefit of AI here is that it can effectively define network characteristics, explore topology, and address critical issues like poor data quality,” he notes.
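As a loose illustration of what data-driven monitoring can look like in practice, the snippet below flags KPI samples that deviate sharply from their recent rolling baseline. It is a simplified sketch rather than Viavi’s tooling; the KPI, window size, and threshold are assumed for the example.

```python
import numpy as np

# Hypothetical illustration of data-driven KPI monitoring: flag samples that
# drift far from the mean of the preceding rolling window.

def rolling_anomalies(kpi: np.ndarray, window: int = 24, z_threshold: float = 3.0) -> list:
    """Return indices where a KPI deviates more than z_threshold standard
    deviations from the mean of the previous `window` samples."""
    anomalies = []
    for i in range(window, len(kpi)):
        history = kpi[i - window:i]
        mean, std = history.mean(), history.std()
        if std > 0 and abs(kpi[i] - mean) / std > z_threshold:
            anomalies.append(i)
    return anomalies

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    latency_ms = rng.normal(20.0, 1.0, 200)  # synthetic latency KPI
    latency_ms[150] = 45.0                   # injected fault
    print("Anomalous samples at indices:", rolling_anomalies(latency_ms))
```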
The third focus area is optimization. Once data is collected and the network is continuously monitored and tested, the next step is to manage it in a data-driven way, enabling informed optimization decisions.
“When making optimization decisions, it’s crucial to understand the trends and issues in the backhaul or backbone,” Kennouche explains. “This insight helps determine how to scale resources up or down, where to commission new sites, and where to deploy technicians for the most impactful troubleshooting or diagnosis.”
Training versus inferencing workloads
Touching on the broader ecosystem of AI capabilities, DCD’s Alex Dickins explores the key differentiators of large language models (LLMs) for network testing and monitoring.
Kennouche explains that while generative AI is undoubtedly impressive, it comes with quirks that businesses must responsibly address.
“I wouldn’t want to rely on anyone’s network capabilities if I can’t quantify their behavior or understand their failure modes. That wouldn’t be wise from a business perspective. In the context of network resilience and reliability, large models are intriguing because of the emerging dynamics surrounding them,” he states.
Generative AI and LLMs require significant investment to build; once trained, they can be deployed to perform inference across a wide range of tasks, which is the ultimate purpose of training them in the first place.
Larger companies often take on the heavy lifting of training models, while developers leverage these pre-trained models through inference, applying them to specific, practical tasks.
“Another nuance of this dynamic is the concept of inference-as-a-service,” Kennouche explains. “By utilizing this infrastructure – featuring high-speed chips in the cloud and accessible via APIs – you can tap into the latest and most powerful generative AI models. This approach is vital for rapidly introducing innovative capabilities to market and staying aligned with the latest advancements, as new and more powerful language models are released nearly every week.”
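To make the inference-as-a-service pattern concrete, the sketch below shows what calling a hosted model over a generic HTTP API might look like. The endpoint URL, request fields, model name, and response shape are illustrative assumptions, not any specific provider’s API.

```python
import os
import requests  # generic HTTP client; a provider-specific SDK could be used instead

# Hypothetical inference-as-a-service call: endpoint, payload fields and
# response format are placeholders, not a real vendor's API.
ENDPOINT = "https://api.example-inference.com/v1/generate"  # placeholder URL

def summarize_alarms(alarm_log: str) -> str:
    """Send raw alarm text to a hosted LLM and return its summary."""
    response = requests.post(
        ENDPOINT,
        headers={"Authorization": f"Bearer {os.environ.get('INFERENCE_API_KEY', 'YOUR_KEY')}"},
        json={
            "model": "example-large-model",  # placeholder model name
            "prompt": f"Summarize these network alarms:\n{alarm_log}",
            "max_tokens": 200,
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["text"]  # assumed response field

if __name__ == "__main__":
    print(summarize_alarms("03:12 BGP session down on edge-router-4"))
```

Because the heavy model sits behind the API, adopting a newer, more powerful model becomes a configuration change rather than a retraining effort, which is the agility Kennouche describes.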
Returning to the core topic of resiliency and reliability in telecommunications networks, Kennouche emphasizes that resiliency must focus on ensuring consistently reliable systems. This means preventing AI and ML model failures from causing widespread network collapse, highlighting the importance of self-healing capabilities – an area of growing significance in the data center industry in 2025 and beyond.
Kennouche concludes with advice for others in the telecommunications sector:
“I think it’s important not to get lost in the weeds of generative AI. From an operationalization standpoint, we should focus on leveraging inference-as-a-service. This dynamic is thriving as the cost of developing and delivering powerful applications continues to decrease, enabling faster deployment of groundbreaking innovations.”
Watch the full DCD>Talks episode with Takai Kennouche from Viavi Solutions here.