Nexvec™

A Turnkey Open Infrastructure Solution for Enterprise AI

Composable Compute

In partnership with Liqid, Edgecore’s composable infrastructure solution utilizes industry-standard data center components to create a flexible, scalable architecture—built from pools of disaggregated resources.

Dynamic Resource Allocation, On Demand

Compute, networking, storage, GPU, FPGA, and Intel® Optane™ memory are interconnected via intelligent fabrics, enabling dynamically-configurable bare-metal servers. Each server is precisely tailored with only the physical resources required by the application—nothing more, nothing less.

Improve Efficiency, Reduce Waste

By disaggregating and reallocating hardware as needed, you can double or even triple resource utilization, significantly reducing power consumption and lowering your carbon footprint—especially valuable for AI-centric deployments.

Powered by Liqid Matrix Technology

Edgecore’s composable infrastructure, driven by Liqid Matrix, adapts in real time to workload demands—making full utilization achievable while improving scalability and responsiveness.

Automation for Next-Gen Workloads

Infrastructure processes can be fully automated, unlocking new efficiencies to meet the data demands of next-generation applications—AI, IoT, DevOps, cloud, and edge computing—with support for NVMe over Fabrics (NVMe-oF) and GPU-over-Fabric (GPU-oF) technologies.

Highest AI Performance

Maximum Flexibility

Lower Power Consumption

More GPU Horsepower. Fewer Servers. Greater AI Results.

Scale up to 30 GPUs per server to meet your AI workload demands while lowering power consumption and increasing GPU utilization.

Drive Down AI Costs with Smarter GPU Utilization

Achieve up to 100% GPU utilization for maximum tokens per watt and per dollar.
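As a back-of-the-envelope illustration of these efficiency metrics (the throughput, power, and cost figures below are hypothetical placeholders, not Edgecore benchmarks), tokens per watt and tokens per dollar follow directly from measured throughput, power draw, and hourly cost:

```python
def tokens_per_watt(tokens_per_second: float, power_watts: float) -> float:
    """Energy efficiency: tokens generated per second, per watt drawn."""
    return tokens_per_second / power_watts

def tokens_per_dollar(tokens_per_second: float, cost_per_hour: float) -> float:
    """Cost efficiency: tokens generated per dollar of hourly infrastructure cost."""
    tokens_per_hour = tokens_per_second * 3600
    return tokens_per_hour / cost_per_hour

# Hypothetical example: doubling GPU utilization at the same power draw
# and hourly cost doubles both efficiency metrics.
low = tokens_per_watt(tokens_per_second=500, power_watts=700)    # ~50% utilization
high = tokens_per_watt(tokens_per_second=1000, power_watts=700)  # ~100% utilization
print(f"{low:.2f} -> {high:.2f} tokens/s per watt")
```

The same arithmetic explains why raising utilization—rather than adding servers—is the cheapest lever for inference economics.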

Leverage Multi-Vendor GPUs

Your AI, your choice. Harness the power of silicon diversity for unmatched flexibility and agility.

The Path to a Self-driving Fabric Starts Here

Build your own private AI inference cloud with Liqid Matrix® software, Kubernetes, and NVIDIA NIM automation.

Accelerate AI with On-Demand GPU Provisioning

Choose your own infrastructure adventure. Leverage our intuitive UI, CLI, and northbound APIs for Kubernetes, VMware, and Slurm.
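On a Kubernetes cluster, on-demand GPU provisioning ultimately surfaces as a resource request in a pod spec. A minimal sketch of what that looks like (the image name and GPU count are placeholders, and the Liqid-specific northbound API calls that compose the underlying hardware are not shown):

```python
import json

# Minimal pod manifest requesting GPUs via the standard Kubernetes
# device-plugin resource name "nvidia.com/gpu". Image and count are
# placeholders for illustration only.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "inference-worker"},
    "spec": {
        "containers": [{
            "name": "inference",
            "image": "example.com/inference:latest",  # placeholder image
            "resources": {
                "limits": {"nvidia.com/gpu": "4"},  # request 4 GPUs
            },
        }],
        "restartPolicy": "Never",
    },
}

print(json.dumps(pod, indent=2))  # pipe to `kubectl apply -f -`
```

With a composable fabric underneath, the GPUs backing this request can be attached to the host dynamically rather than being permanently installed in one server.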

Related Resource

2025 Product Catalogue