Nexvec™
Turnkey Open Infrastructure Solutions for Enterprise AI
Composable Computing
In partnership with Liqid, Edgecore’s composable infrastructure solution utilizes industry-standard data center components to create a flexible, scalable architecture—built from pools of disaggregated resources.

Dynamic Resource Allocation, On Demand
Compute, networking, storage, GPU, FPGA, and Intel® Optane™ memory are interconnected via intelligent fabrics, enabling dynamically-configurable bare-metal servers. Each server is precisely tailored with only the physical resources required by the application—nothing more, nothing less.
Improve Efficiency, Reduce Waste
By disaggregating and reallocating hardware as needed, you can double or even triple resource utilization, significantly reducing power consumption and lowering your carbon footprint—especially valuable for AI-centric deployments.
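As a toy illustration of why pooling raises utilization (the numbers below are illustrative, not measured): when GPUs are locked inside individual servers, a host's idle devices cannot serve another host's workload, whereas a disaggregated pool can compose every idle GPU into whichever bare-metal server needs it.

```python
# Toy model: GPU utilization with per-server (static) allocation
# versus a single disaggregated pool. All numbers are illustrative.

servers = 4
gpus_per_server = 8
total = servers * gpus_per_server        # 32 GPUs in the rack

demands = [2, 7, 3, 1]                   # one workload pinned per server

# Static: each workload can only use the GPUs in its own server,
# so idle GPUs on other hosts are stranded.
static_used = sum(min(d, gpus_per_server) for d in demands)

# Pooled: idle GPUs are composed on demand, so additional workloads
# that had no single server with free capacity can now run too.
extra_demands = [6, 9]
pooled_used = min(sum(demands) + sum(extra_demands), total)

print(f"static utilization: {static_used / total:.0%}")
print(f"pooled utilization: {pooled_used / total:.0%}")
```

In this hypothetical rack, pooling roughly doubles utilization (13 of 32 GPUs busy versus 28 of 32), which is the effect the paragraph above describes.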
Powered by Liqid Matrix Technology
Edgecore’s composable infrastructure, driven by Liqid Matrix, allows infrastructure to adapt in real time to workload demands, making full utilization achievable while improving scalability and responsiveness.
Automation for Next-Gen Workloads
Infrastructure processes can be fully automated, unlocking new efficiencies to meet the data demands of next-generation applications such as AI, IoT, DevOps, cloud, and edge computing, with support for NVMe-over-Fabric (NVMe-oF) and GPU-over-Fabric (GPU-oF) technologies.

Highest AI Performance
Optimized Efficiency
Maximum Flexibility
Lower Power Consumption
More GPU Horsepower. Fewer Servers. Greater AI Results.
Scale up to 30 GPUs per server to meet your AI workload demands while lowering power consumption and increasing GPU utilization.


Drive Down AI Costs with Smarter GPU Utilization
Achieve up to 100% GPU Utilization for Maximum Tokens per Watt and Dollar
Leverage Multi-Vendor GPUs
Your AI, your choice. Harness the power of silicon diversity for unmatched flexibility and agility.


The Path to a Self-driving Fabric Starts Here
Build your own private AI inference cloud with Liqid Matrix® software, Kubernetes, and NVIDIA NIM automation.
Accelerate AI with On-Demand GPU Provisioning
Choose your own infrastructure adventure. Leverage our intuitive UI, CLI, and northbound APIs for Kubernetes, VMware, and SLURM.
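As a hedged sketch of what on-demand GPU provisioning looks like from the Kubernetes side, the snippet below builds a plain pod manifest that requests GPUs through the standard `nvidia.com/gpu` extended resource. The pod name, image, and GPU count are hypothetical, and the Liqid Matrix API calls that compose the underlying hardware are not shown here.

```python
import json

def gpu_pod_manifest(name: str, image: str, gpus: int) -> dict:
    """Build a minimal Kubernetes pod manifest requesting `gpus` GPUs.

    Uses the standard `nvidia.com/gpu` extended resource; a device
    plugin on the node is assumed to expose GPUs to the scheduler.
    """
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "containers": [{
                "name": name,
                "image": image,
                "resources": {
                    # Extended resources must be set in limits;
                    # Kubernetes does not allow over-committing them.
                    "limits": {"nvidia.com/gpu": str(gpus)},
                },
            }],
        },
    }

# Hypothetical inference pod asking the scheduler for 4 GPUs.
manifest = gpu_pod_manifest("inference-worker",
                            "example.com/llm-serve:latest", 4)
print(json.dumps(manifest, indent=2))
```

The same manifest could be applied with `kubectl apply -f`, with the composable fabric attaching physical GPUs to the target node before the pod is scheduled.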

Related Resources