Following OpenAI’s seismic announcement at the end of last year, it’s no surprise that AI – and generative AI in particular – has suddenly become ubiquitous. For network engineers, two significant areas of transformation stand out. The first is AI for the network: AI that integrates with networks to strengthen their security, boost resilience, and optimize performance. The second is the network for AI: networks supporting AI workloads and generative AI models require extreme scalability, resilience, and capacity to process massive data volumes rapidly.
Network engineers will need to develop new skills to leverage AI effectively on their networks. The stakes are high: new AI variants will enter our daily lives in ways we can hardly anticipate today. Even before the recent surge in generative AI, various forms of artificial intelligence had been applied across a range of industries, including law enforcement and supply chain management. Unless the networks running AI are robust and secure, and the models operating on them are similarly safeguarded, identity theft, misinformation, and bias – already pressing concerns – will only continue to proliferate.
Present-day networks are already facing mounting pressure. In a recent survey of expert-level certification holders, nearly one-quarter of participants reported that AI-driven workloads are having a significant (“important” or “transformative”) impact on their networks. Despite this, most organizations remain in the early stages of deploying generative AI technologies.
To accelerate the development of high-performing IT teams capable of designing, deploying, and securing networks that support AI applications, we launched the CCDE-AI Infrastructure certification at Cisco Live. To build the certification, we began with a thorough job role analysis to pinpoint the most crucial skills. We spoke with stakeholders across the AI community to gauge their needs as this transformative technology evolves and its applications proliferate. Only some companies may need networks for training large language models, but the vast majority should consider the privacy, security, and cost implications – at the very least – of running generative AI applications.
In designing our blueprint, training, labs, and exam, we carefully considered several factors. These included:
Reliable, high-performance Ethernet, using protocols such as RoCEv2 (RDMA over Converged Ethernet), is crucial for efficient access to vast amounts of data and for continuous training of large language models. Memory offload is common in generative AI applications, and RoCEv2 enables remote direct memory access, so data on a remote node can be presented as though it were in local memory. Without this optimization, repeated copying of data adds latency.
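The latency cost of repeated buffer copies can be illustrated in miniature. The sketch below is a hypothetical analogy, not an RDMA implementation: it contrasts a path that copies a buffer at every hop with a zero-copy view over the same memory, which is the kind of saving RDMA provides at the network level. All sizes and hop counts are made up for illustration.

```python
import time

# Hypothetical payload standing in for a slice of training data.
PAYLOAD = bytes(64 * 1024 * 1024)  # 64 MB of zeros

def staged_copies(buf: bytes, hops: int = 3) -> bytes:
    """Simulate a traditional path where each hop (e.g. NIC -> kernel ->
    user space) makes a full copy of the buffer."""
    for _ in range(hops):
        buf = bytes(buf)  # full copy at every hop
    return buf

def zero_copy(buf: bytes) -> memoryview:
    """Simulate RDMA-style access: a view over the same memory, no copies."""
    return memoryview(buf)

t0 = time.perf_counter()
staged_copies(PAYLOAD)
copy_time = time.perf_counter() - t0

t0 = time.perf_counter()
view = zero_copy(PAYLOAD)
view_time = time.perf_counter() - t0

print(f"copied path: {copy_time:.4f}s, zero-copy view: {view_time:.6f}s")
```

On any machine the view is created orders of magnitude faster than the copies, which is the intuition behind avoiding repeated data movement.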
Many data security challenges associated with AI workloads mirror those of other workloads; the concepts of data at rest and data in motion remain the same. The crucial difference is the sheer volume and scope of the data processed and transferred, especially when training an AI model. Some data requires no encryption at all – anonymization can be a more efficient alternative. Either way, a rigorous choice must be made, based on a precise understanding of the relevant use case.
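As a concrete illustration of the anonymization alternative, the sketch below pseudonymizes a direct identifier with a keyed hash before a record leaves a secure zone, rather than encrypting the whole record. This is a minimal example using Python's standard `hmac` module; the field names, key handling, and token length are assumptions for illustration, not a production design.

```python
import hashlib
import hmac

# Assumption: in practice this key would be managed out of band and rotated.
SECRET_KEY = b"rotate-me-in-production"

def pseudonymize(value: str, key: bytes = SECRET_KEY) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).
    The same input always maps to the same token, so joins across records
    still work, but the original value is not recoverable without the key."""
    return hmac.new(key, value.encode(), hashlib.sha256).hexdigest()[:16]

# Hypothetical record: only the identifier needs protection.
record = {"patient_id": "P-10442", "heart_rate": 72}
safe_record = {**record, "patient_id": pseudonymize(record["patient_id"])}
print(safe_record)
```

A keyed hash (rather than a plain one) prevents dictionary attacks against predictable identifiers, while leaving the non-sensitive measurements usable for training.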
Generative AI introduces another crucial consideration: securing the model itself. OWASP, for example, maintains a Top 10 list of vulnerabilities for large language model applications.
Data gravity is closely linked to security, resilience, and speed. As data volumes grow, they acquire mass – drawing in more applications and services to minimize latency. These large data sets become increasingly difficult to move. With AI, we can run training and processing in the cloud while keeping sensitive data on premises. In some cases, data is so sensitive or so large that it makes more sense to bring the model to the data. Under other conditions, deploying the model in a cloud-based infrastructure and sending data to the model may be the viable option.
These choices will diverge substantially by use case, and some situations may not require rapid movement of large data volumes at all. An online medical portal, for example, may need no centralized repository of data, since the application can dynamically retrieve required data as needed.
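The move-the-model-or-move-the-data decision often comes down to simple transfer arithmetic. The sketch below is a naive back-of-the-envelope estimator; the dataset size, model size, and link speed are invented for illustration, and real planning would also account for protocol overhead, congestion, and regulatory constraints.

```python
def transfer_hours(size_gb: float, link_gbps: float) -> float:
    """Naive transfer-time estimate: gigabytes over a link measured in
    gigabits per second, ignoring overhead and congestion."""
    return (size_gb * 8) / link_gbps / 3600

# Hypothetical numbers for illustration only.
dataset_gb = 500_000   # 500 TB of training data
model_gb = 350         # weights of a large model
link_gbps = 10         # WAN link between sites

print(f"move data to model: {transfer_hours(dataset_gb, link_gbps):,.1f} h")
print(f"move model to data: {transfer_hours(model_gb, link_gbps):,.2f} h")
```

With numbers like these, shipping the model to the data wins by several orders of magnitude, which is the data-gravity effect in miniature.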
Within the CCDE-AI Infrastructure certification, we cover hosting implications with respect to security. Where should AI training data reside? Do security requirements demand an air-gapped environment, so that training must take place within those constraints? As with other exam questions, these are posed within a specific context: more than one answer may be technically correct, but only one will fit the scenario’s requirements.
High-speed networks place heavy demands on central processing units (CPUs). They can consume substantial processing power, limiting the cycles available for application processing. Fortunately, various specialized hardware components are designed to relieve CPU strain: graphics processing units (GPUs), data processing units (DPUs), field-programmable gate arrays (FPGAs), and application-specific integrated circuits (ASICs) can offload specific tasks from CPUs and accomplish them swiftly.
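A simple way to think about this division of labor is as a dispatch table mapping task types to offload targets. The mapping below is a deliberately simplified assumption for illustration; real placement decisions depend on drivers, topology, workload profiles, and cost.

```python
# Hypothetical mapping of task types to offload targets (illustrative only).
OFFLOAD_TARGETS = {
    "matrix_math": "GPU",        # massively parallel floating-point work
    "packet_processing": "DPU",  # network and storage data path
    "custom_pipeline": "FPGA",   # reprogrammable special-purpose logic
    "fixed_function": "ASIC",    # highest efficiency, least flexibility
}

def place_task(task_type: str) -> str:
    """Return the preferred offload target for a task type, falling back
    to the CPU when no specialized accelerator applies."""
    return OFFLOAD_TARGETS.get(task_type, "CPU")

for t in ("matrix_math", "packet_processing", "report_generation"):
    print(t, "->", place_task(t))
```

The fallback branch captures the point of the paragraph: the CPU handles general-purpose work, and everything that can be offloaded frees cycles for it.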
To excel here, IT professionals must not only be familiar with these alternatives but also deeply understand each option’s capabilities and appropriate applications. Network architects need the flexibility to balance every technology decision against business constraints such as cost, scalability, and physical space.
While the industry acknowledges the sustainability concerns around AI’s energy and water usage, a reckoning is still on the horizon. Sustainability currently accounts for only a small portion of the exam, but we firmly believe its significance will continue to grow over time.
This discussion also raises another common question: why place this new certification at the expert level? There are two reasons. First, this area of expertise centers on network design, aligning naturally with the CCDE certification. Second, the optimal design for an AI infrastructure is intricately linked to the specific enterprise context in which it operates, requiring a deep understanding of the organization’s unique needs and constraints.
We’re not expecting candidates to design a completely secure, fast, and robust network from scratch in a hypothetical greenfield scenario. The exam presents realistic scenarios and asks candidates to think critically and develop effective solutions. This is closer to the environment where our certified experts typically operate: a pre-existing network is already in place, and the task is to optimize it to support AI workloads and training. Resources aren’t infinite, so designs must leverage existing equipment and software, which might otherwise be secondary choices in a different context.
That’s why this certification remains vendor-neutral. An expert at the top of their craft should be able to walk into any environment and make a noticeable impact. It’s a big ask, but a heavy lift is what Cisco-certified experts have traditionally taken on.
As we move forward, we’re eager to see this innovative technology flourish through collaborative efforts to identify ideal applications and build robust infrastructure, ultimately realizing its full potential.