REVISITING THE TURNKEY WinFast WS940 AI/DL/ML SUPERWORKSTATION FOR NVIDIA OMNIVERSE
The Leadtek WinFast WS940 SuperWorkstation is a versatile, high-performance workstation built to support the wide range of development activities in AI/DL/ML, data science, rendering and visual design for the metaverse and NVIDIA Omniverse.
The system supports Omniverse Connectors for your favourite tools, such as Revit, Rhino, Maya, Unreal Engine and more, so you can interact with shared 3D worlds. Choose the bundling option that best fits your projects for immediate benefits.
Leadtek WinFast WS940 c/w 4x RTX A6000 Includes:
1x Intel Xeon® W-3375 (38 cores / 2.5 GHz)
16x 32GB DDR4 3200 ECC RDIMM
4x 3.84TB SATA3 Enterprise SSD (Hot-swap)
1x 10GbE & 1x 1GbE Ports
2200W Titanium Redundant (1+1) Power Supplies
4x NVIDIA RTX A6000 48 GB GDDR6 with ECC
2x NVLink Bridge 2-Slot
Leadtek RTX AI Software Pack
NVIDIA Certified Workstation
Warranty: 3 Years, 9 x 5 x Next Business Day
Key Features:
Factory Pre-installed RTX AI Software Toolkit (a quick verification sketch follows this feature list)
Automated installation of:
Operating System
GPU Drivers
CUDA Toolkit
cuDNN
NCCL
NVIDIA Docker Package
NVIDIA DCGM
One-Click System Restore Function
Pre-loaded AI/DL/ML Frameworks optimised via NVIDIA NGC (minimum: 11 applications)
Support for:
NVIDIA GPU Cloud (NGC)
NVIDIA Clara Imaging
NVIDIA RAPIDS
NVIDIA Omniverse
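As a quick sanity check of that factory-installed stack, the short Python sketch below queries the driver version and each installed GPU through NVML. It is a minimal sketch, assuming only that the NVIDIA driver and the nvidia-ml-py (pynvml) package are present; it is not part of the Leadtek toolkit itself.

# Minimal GPU-stack check, assuming the NVIDIA driver and the
# nvidia-ml-py (pynvml) package are installed.
import pynvml

pynvml.nvmlInit()
# Driver version is a str on recent pynvml releases (bytes on older ones).
print("Driver version:", pynvml.nvmlSystemGetDriverVersion())
for i in range(pynvml.nvmlDeviceGetCount()):
    handle = pynvml.nvmlDeviceGetHandleByIndex(i)
    name = pynvml.nvmlDeviceGetName(handle)        # e.g. "NVIDIA RTX A6000"
    mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
    print(f"GPU {i}: {name}, {mem.total / 1024**3:.0f} GiB memory")
pynvml.nvmlShutdown()

On a healthy WS940 this should list four RTX A6000 boards with 48 GiB each; for continuous health monitoring, NVIDIA DCGM (also pre-installed) is the heavier-duty tool.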
NVIDIA LaunchPad is a free program that provides users short-term access to a large catalog of hands-on labs. Enterprises and organizations can immediately tap into the necessary hardware and software stacks to experience end-to-end solution workflows in the areas of AI, data science, 3D design collaboration and simulation, and more.
What to Expect on NVIDIA LaunchPad
What is NVIDIA AI Enterprise
The NVIDIA AI Enterprise Catalog gives you complete access to an end-to-end, cloud-native suite of AI and data analytics software, optimized and certified by NVIDIA. It's certified to deploy anywhere, from the enterprise data center to the public cloud, and includes global enterprise support and training. It includes key enabling technologies and software from NVIDIA for rapid deployment, management, and scaling of AI workloads in the modern hybrid cloud.
NVIDIA AI Enterprise enables the following:
Leverage fully integrated, optimized, certified, and supported software from NVIDIA for AI workloads.
Run NVIDIA AI frameworks and tools optimized for GPU acceleration, reducing deployment time and ensuring reliable performance.
Deploy anywhere, including on popular data center platforms from VMware and Red Hat, on mainstream NVIDIA-Certified Systems configured with or without GPUs, and on GPU-accelerated instances in the public cloud.
Leverage the jointly certified NVIDIA and Red Hat solution to deploy and manage AI workloads in containers or VMs with optimized software.
Scale out to multiple nodes, enabling even the largest deep learning training models to run on VMware vSphere (see the sketch after this list). Previously, scaling with bare-metal performance in a fully virtualized environment was limited to a single node, which constrained the complexity and size of the AI workloads that could be supported.
Run AI workloads at near bare-metal performance with new optimizations for GPU acceleration on vSphere, including support for the latest Ampere architecture GPUs such as the NVIDIA A100. Additionally, technologies like GPUDirect communications can now be supported on vSphere, providing direct communication between GPU memory and storage across a cluster for improved performance.
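To make the multi-node point concrete, here is a minimal sketch of data-parallel training with PyTorch DistributedDataParallel, a common way to express such scale-out jobs; inside vSphere VMs the same script runs unchanged. The toy linear model, the node counts and the rendezvous endpoint are illustrative assumptions, not part of NVIDIA AI Enterprise itself.

# Minimal multi-node data-parallel training sketch. Launch on every node
# with torchrun, e.g. (addresses illustrative):
#   torchrun --nnodes=2 --nproc_per_node=4 \
#            --rdzv_backend=c10d --rdzv_endpoint=<head-node>:29500 train.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")    # NCCL handles GPU collectives
    local_rank = int(os.environ["LOCAL_RANK"]) # set by torchrun
    torch.cuda.set_device(local_rank)

    model = DDP(torch.nn.Linear(1024, 10).cuda(local_rank),  # toy model
                device_ids=[local_rank])
    opt = torch.optim.SGD(model.parameters(), lr=1e-3)

    for _ in range(100):                       # dummy training loop
        x = torch.randn(32, 1024, device=local_rank)
        y = torch.randint(0, 10, (32,), device=local_rank)
        loss = torch.nn.functional.cross_entropy(model(x), y)
        opt.zero_grad()
        loss.backward()                        # gradients all-reduced via NCCL
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()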
NVIDIA AI Enterprise includes:
TensorFlow and PyTorch for machine learning
NVIDIA TAO Toolkit, for a faster, easier way to accelerate training and quickly create highly accurate, performant, domain-specific vision and conversational AI models
NVIDIA TensorRT, for GPU-optimized deep learning inference, and Triton Inference Server, to deploy trained AI models at scale
Triton Inference Server supports all major frameworks, such as TensorFlow, TensorRT, PyTorch, MXNet, Python and more. Triton Inference Server also includes the RAPIDS FIL backend for the best inference performance for tree-based models on GPUs.
NVIDIA RAPIDS, for end-to-end data science, machine learning and analytics pipelines (a cuDF sketch follows this list)
NVIDIA GPU and Network Operators, to deploy and manage NVIDIA GPU and networking resources in Kubernetes
NVIDIA vGPU Software, to deploy vGPU on common data center platforms, including VMware and Red Hat
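To give a flavour of the RAPIDS item above, here is a minimal cuDF sketch: a pandas-style groupby aggregation executed entirely on the GPU. The support-ticket data is made up for illustration, and the sketch assumes cuDF is installed, for example from an NGC RAPIDS container.

# Minimal RAPIDS cuDF example; the DataFrame contents are illustrative.
import cudf

df = cudf.DataFrame({
    "queue":   ["billing", "tech", "billing", "tech", "tech"],
    "minutes": [30, 45, 25, 60, 50],
})
# The familiar pandas groupby/aggregate API, executed on the GPU.
print(df.groupby("queue")["minutes"].mean())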
Deploying NVIDIA AI Enterprise on NVIDIA LaunchPad
Through the LaunchPad portal, NVIDIA provides a series of steps that you can follow to install the NVIDIA AI Enterprise software stack. For example, the demonstration below shows how to “Train and Deploy an AI Support Chatbot” using TensorFlow, TensorRT and Triton Inference Server from the NVIDIA AI Enterprise Catalog.
1. Connect to the VMware vCenter.
2. Create your first NVIDIA AI Enterprise VM and install the operating system. We also added an NVIDIA Virtual GPU device to the VM to accelerate the workloads later on.
3. Install Docker and the Docker Utility Engine for NVIDIA GPUs.
4. Install TensorFlow to train the BERT model for conversational AI (NLP).
5. Install Triton Inference Server to serve the trained BERT model (a minimal client check is sketched below).
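Once step 5 is complete, a client can query the served model over HTTP. The sketch below uses the tritonclient Python package; the model name "bert" and the tensor names "input_ids" and "logits" are hypothetical placeholders, since the real names come from the model's configuration in the lab.

# Minimal Triton HTTP client check; model and tensor names are placeholders.
import numpy as np
import tritonclient.http as httpclient

# Connect to Triton's HTTP endpoint (port 8000 by default).
client = httpclient.InferenceServerClient(url="localhost:8000")
assert client.is_server_ready()
assert client.is_model_ready("bert")           # hypothetical model name

# Dummy token IDs standing in for a tokenized user question.
token_ids = np.zeros((1, 128), dtype=np.int32)
infer_input = httpclient.InferInput("input_ids", list(token_ids.shape), "INT32")
infer_input.set_data_from_numpy(token_ids)

result = client.infer("bert", inputs=[infer_input])
print(result.as_numpy("logits").shape)         # raw model output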
For more details and enquiries about NVIDIA products, including NVIDIA LaunchPad and NVIDIA AI Enterprise, please contact our sales team at: