MSI EdgeXpert MS-C931 - Gen5 1TB


Free Shipping
Product Features
NVIDIA® Grace Blackwell Architecture:
- NVIDIA Blackwell GPU & Arm 20-core CPU
- NVIDIA® NVLink®-C2C CPU-GPU memory interconnect
128 GB LPDDR5x coherent, unified system memory
1000 AI TOPS (FP4) AI performance
Full-stack hardware and software solution designed for AI developers

SKU: CS-MSC931A


Capacity options: 1TB / 4TB


Price: HK$ 23,999



MSI EdgeXpert AI Supercomputer Product Page

MSI EdgeXpert AI Supercomputer

Next-Level AI Power, Right at Your Desk

Desktop AI Supercomputing Redefined
The MSI EdgeXpert AI Supercomputer redefines desktop AI computing, delivering petaflop-scale performance through the cutting-edge NVIDIA® GB10 Grace Blackwell Superchip—the same powerhouse at the core of NVIDIA DGX™ Spark. Purpose-built for developers, AI researchers, and data scientists, the EdgeXpert empowers local AI development with unmatched performance, scalability, and advanced features—all in a compact, desktop-ready form.

Core Architecture and Performance

NVIDIA® Grace Blackwell Architecture
Featuring the NVIDIA Blackwell GPU and Arm 20-core CPU, this architecture optimizes data preprocessing and orchestration to accelerate model tuning and enable real-time inference with greater efficiency.
NVLink®-C2C Technology
Offers a seamless CPU+GPU memory model with up to five times the bandwidth of PCIe 5.0, ensuring ultra-fast data access and transfer between components.
Unmatched AI Computing Power
Delivers **1000 AI TOPS (FP4) Tensor Performance** for blazing-fast execution of complex AI workloads at scale.
Unified System Memory
Includes **128 GB LPDDR5x** unified system memory, providing the large capacity needed for smooth model development, rapid experimentation, and high-efficiency inference.
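As a rough sanity check on how 128 GB of unified memory relates to large-model capacity, the weight footprint of a quantized model can be estimated from its parameter count. This is a sketch, not an MSI specification: it counts model weights only and ignores KV cache, activations, and runtime overhead.

```python
def weights_gb(params_billion: float, bits_per_param: int) -> float:
    """Estimate model weight storage in GB for a given precision."""
    return params_billion * 1e9 * bits_per_param / 8 / 1e9

# A 200B-parameter model quantized to FP4 (4 bits per weight)
# needs roughly 100 GB for its weights, which fits within the
# 128 GB of unified system memory.
print(weights_gb(200, 4))  # 100.0 GB

# Two linked systems (256 GB combined) can hold a 405B-parameter
# FP4 model's weights (~202.5 GB).
print(weights_gb(405, 4))  # 202.5 GB
```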

Local LLM Scaling and Data Security

Running LLMs Locally (Up to 200B Parameters)
Supports AI models with up to 200 billion parameters. Run LLMs locally for enhanced data security, low latency, cost control, and a complete AI workflow: prototyping, fine-tuning, and inference.
High-Performance Stacking via NVIDIA ConnectX
High-performance NVIDIA ConnectX networking enables two MSI EdgeXpert systems to be linked, supporting AI models with up to **405 billion parameters** and facilitating large-scale AI model development and performance testing.
Seamless AI Model Scaling from Desktop to Cloud
Leverage NVIDIA’s AI software stack to scale seamlessly from the desktop to NVIDIA DGX™ Cloud or other NVIDIA-accelerated data centers and cloud infrastructures, with only minimal code changes.
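As an illustration of the local inference workflow described above, a model served on the desktop could be queried through an OpenAI-compatible HTTP endpoint. This is a minimal sketch assuming a local serving stack (such as Ollama or vLLM) is already running; the URL, port, and model name are illustrative, not part of the MSI product.

```python
import json
import urllib.request


def build_chat_request(prompt: str, model: str = "llama-3.1-70b") -> dict:
    """Build an OpenAI-style chat-completion payload.

    The model name is a placeholder; substitute whatever model
    the local server actually hosts.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def ask_local_llm(prompt: str,
                  url: str = "http://localhost:8000/v1/chat/completions") -> str:
    """POST the prompt to a locally hosted, OpenAI-compatible endpoint."""
    body = json.dumps(build_chat_request(prompt)).encode()
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Because the endpoint follows the widely used OpenAI chat-completion schema, the same client code can later point at a data-center or cloud deployment by changing only the URL, which is the "minimal code changes" scaling path described above.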

Designed for AI Developers and Researchers

Enterprise / Data Scientist
Fine-tune AI models and develop AI inference applications locally, then easily migrate to the company data center or cloud for production scaling.
Academic / Education
Practice AI training or develop AI inference applications in the classroom or lab, with the ability to migrate to school server rooms or AI laboratories to scale up.
Enthusiast / Independent AI Developer
Experiment with fine-tuning AI models or run local inference to test or create local AI assistants. Easily migrate to the cloud for more computing power when needed.

Applications Supported by NVIDIA® AI Frameworks

Medical
Utilizes **NVIDIA Holoscan** for medical devices.
Robotics
Utilizes **NVIDIA Isaac** for robot development.
Retail
Utilizes **NVIDIA Riva** for retail solutions.
Smart City
Utilizes **NVIDIA Metropolis** for smart-city infrastructure.
