NVIDIA Partners with Tech Companies to Advance Enterprise AI Solutions

Published 4 days ago · 4-minute read

JFrog and NVIDIA have partnered to integrate JFrog's DevSecOps tools with the NVIDIA Enterprise AI Factory validated design, a collaboration aimed at helping enterprises build on-premises artificial intelligence systems. JFrog will serve as the primary software artifact repository and secure model registry for NVIDIA's agentic AI architecture. As AI adoption grows, the integration is intended to provide a blueprint for secure, scalable, and efficient AI and machine learning operations (MLOps) on-premises, addressing regulatory, privacy, and data-control requirements. The JFrog Platform is being integrated into the NVIDIA Enterprise AI Factory, a suite of technologies for managing AI workloads, including agentic AI, physical AI, and high-performance computing, that is particularly relevant for industries with stringent compliance and security needs.

Shlomi Ben Haim, CEO and co-founder of JFrog, highlighted the importance of trust, control, and seamless execution in AI, emphasizing that ML models should be managed as first-class software artifacts. Key features of the JFrog Platform include secure, governed visibility: machine learning models and software artifacts can be scanned for security risks, versioned, and traced. The platform also supports end-to-end management, with seamless uploading, hosting, and deployment of AI models, datasets, containers, and dependencies optimized for the NVIDIA Enterprise AI Factory. This streamlines configuration and deployment by eliminating the need for runtime environments to retrieve components from external sources.
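The idea of treating ML models as first-class, versioned artifacts can be illustrated with a minimal sketch. The class and method names below are hypothetical stand-ins, not JFrog's API: a registry entry pairs each model version with a content hash, so any copy pulled at deployment time can be verified against the published version.

```python
import hashlib


class ModelRegistry:
    """Toy in-memory model registry (illustrative only, not JFrog's API):
    versioned entries keyed by (name, version), each storing a SHA-256
    digest so downloaded copies can be verified for integrity/traceability."""

    def __init__(self):
        self._entries = {}  # (name, version) -> hex digest

    def publish(self, name, version, model_bytes):
        # Record the content hash of this exact model version.
        digest = hashlib.sha256(model_bytes).hexdigest()
        self._entries[(name, version)] = digest
        return digest

    def verify(self, name, version, model_bytes):
        # True only if the bytes match what was originally published.
        expected = self._entries.get((name, version))
        return expected == hashlib.sha256(model_bytes).hexdigest()


registry = ModelRegistry()
weights = b"\x00\x01\x02"  # stand-in for serialized model weights
registry.publish("fraud-detector", "1.2.0", weights)
ok = registry.verify("fraud-detector", "1.2.0", weights)            # untampered copy
tampered = registry.verify("fraud-detector", "1.2.0", weights + b"x")  # modified copy
```

In a production registry the same principle extends to scan results and provenance metadata attached to each version, which is what enables the "versioning and traceability" described above.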

Justin Boitano, VP at NVIDIA, noted that enterprises need to manage the complexity of AI adoption while ensuring performance, governance, and trust. The partnership supports the JFrog Platform running natively on NVIDIA Blackwell systems, reducing latency and improving efficiency for demanding AI workloads across sectors such as finance, healthcare, and manufacturing. It leverages NVIDIA's expertise and partner network to reduce deployment risks and accelerate time to value from AI solutions.

Dell Technologies has announced advancements to its Dell AI Factory, including AI infrastructure, partner ecosystem solutions, and professional services. These enhancements aim to simplify and accelerate AI deployments, addressing challenges such as data quality, security, and costs. Dell positions the Dell AI Factory approach as more cost-effective for running large language model (LLM) inference on-premises than in the public cloud. Dell offers a comprehensive AI portfolio for deployments across client devices, data centers, edge locations, and clouds, with over 3,000 global customers.

Dell's infrastructure advancements include the Dell Pro Max AI PC with a Qualcomm® AI 100 PC Inference Card, providing on-device inferencing for large AI models. Dell has also redefined AI cooling with the PowerCool Enclosed Rear Door Heat Exchanger (eRDHx), reducing cooling energy costs by up to 60%. Dell PowerEdge servers will support AMD Instinct™ MI350 series GPUs, enhancing inferencing performance and reducing cooling costs. Dell AI Data Platform updates improve access to high-quality data, and Project Lightning accelerates training time for AI workflows.

Dell is collaborating with AI ecosystem players to deliver tailored solutions, including on-premises deployment of Cohere North, integration with Google Gemini, and prototyping with Dell AI Solutions with Llama. The Dell AI Factory also includes advancements to the Dell AI Platform with AMD and Intel. Jeff Clarke, COO of Dell Technologies, stated that these advancements are designed to help organizations of every size seamlessly adopt AI.

Dell Technologies also announced innovations across the Dell AI Factory with NVIDIA to accelerate AI adoption, including enhanced compute, data storage, data management, and networking solutions. New Dell PowerEdge servers support NVIDIA Blackwell Ultra GPUs and offer efficiency at rack scale for training and inference. Dell AI Data Platform advancements improve AI data management, and software updates help organizations deploy agentic AI. The Dell Managed Services for the Dell AI Factory with NVIDIA simplify AI operations with 24x7 monitoring and support.

NetApp has partnered with NVIDIA to support the NVIDIA AI Data Platform reference design in NetApp AIPod solutions, aiming to improve data infrastructure for AI applications in Australia and New Zealand. This partnership addresses challenges in managing fragmented data environments. NetApp AIPod deployments built on the NVIDIA AI Data Platform will provide secure, governed, and scalable data pipelines for retrieval-augmented generation (RAG) and inference tasks. Sandeep Singh of NetApp emphasized the importance of unified data storage for businesses to leverage their data effectively. The integrated solution incorporates NVIDIA accelerated computing to run NVIDIA NeMo Retriever microservices, connecting processing nodes to scalable storage via NetApp's platform. This supports more accurate and effective AI agents.
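A retrieval-augmented generation (RAG) pipeline of the kind described here can be sketched generically. The scoring function and prompt format below are illustrative stand-ins, not the NeMo Retriever API: documents are ranked against a query, the top matches are pulled from storage, and the retrieved passages are packed into a prompt that grounds the model's answer.

```python
from collections import Counter


def score(query, doc):
    """Naive relevance score: count of overlapping terms between
    query and document (a real retriever would use embeddings)."""
    q_terms = Counter(query.lower().split())
    d_terms = Counter(doc.lower().split())
    return sum(min(q_terms[t], d_terms[t]) for t in q_terms)


def retrieve(query, corpus, k=2):
    """Return the k most relevant documents from the corpus."""
    ranked = sorted(corpus, key=lambda d: score(query, d), reverse=True)
    return ranked[:k]


def build_prompt(query, passages):
    """Pack retrieved passages plus the question into a single prompt."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"


corpus = [
    "NetApp AIPod pairs scalable storage with accelerated compute nodes.",
    "RAG grounds model answers in retrieved enterprise documents.",
    "Quarterly sales figures are stored in the finance data mart.",
]
query = "How does RAG ground answers in documents?"
passages = retrieve(query, corpus)
prompt = build_prompt(query, passages)
```

In the NetApp/NVIDIA design, the corpus lives on governed, scalable storage and the retrieval and generation steps run on accelerated compute; the sketch only shows the data flow between those stages.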

Rob Davis of NVIDIA highlighted the importance of fast access to high-quality data for agentic AI. The NVIDIA AI Data Platform is designed to align with NetApp's approach to advanced data management. This solution is positioned for government agencies and highly regulated sectors, ensuring secure and governed data access for current and future AI initiatives.

From Zeal News Studio (Terms and Conditions)
