Hardware and Software Requirements for Headend

For supported software information, see the Supported Software Information section below.

This article describes the hardware and software required to install Versa Networks headend components.

CPU Requirements

Bare-Metal Platforms

For a bare-metal platform, you can deploy the headend components on either a Sandy Bridge CPU architecture or a Westmere CPU architecture:

Versa Director

  • Sandy Bridge CPU requirements: AES, AVX, PCLMULQDQ, SSE, SSE2, SSE3, SSE4.1, and SSE4.2
  • Westmere CPU requirements: Not supported

Versa Analytics

  • Sandy Bridge CPU requirements: AES, AVX, PCLMULQDQ, SSE, SSE2, SSE3, SSE4.1, and SSE4.2
  • Westmere CPU requirements: Not supported

Versa Operating System™ (VOS™) device

  • Sandy Bridge CPU requirements: AES, AVX, PCLMULQDQ, SSE, SSE2, SSE3, SSE4.1, and SSE4.2
  • Westmere CPU requirements: AES, PCLMULQDQ, SSE, SSE2, SSE3, SSE4.1, SSE4.2, and RDRAND (requires a VOS software image with a .wsm filename suffix)
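
Before you install on a bare-metal platform, you can compare the CPU flags that the host advertises against the requirements above. The following is a minimal sketch using standard Linux tooling; note that /proc/cpuinfo reports SSE3 as the pni flag and lists all flag names in lowercase.

    # Report which of the required instruction-set extensions the host CPU advertises.
    # SSE3 appears as "pni" in /proc/cpuinfo.
    for flag in aes avx pclmulqdq sse sse2 pni sse4_1 sse4_2 rdrand; do
        if grep -qw "$flag" /proc/cpuinfo; then
            echo "$flag: present"
        else
            echo "$flag: MISSING"
        fi
    done

If a flag required for your deployment is missing, the platform does not meet the corresponding CPU requirement.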

Virtual Machine Platforms

For a virtual machine (VM) deployment, you must allocate dedicated CPUs and memory (1:1 provisioning) to the headend components. Depending on the number of sockets in the host, you might need to use CPU pinning. It is recommended that you turn off hyperthreading at the host level by disabling it in the host's BIOS. When hyperthreading is enabled, the operating system reports twice as many logical CPUs as there are physical cores. To verify the number of active cores, issue the lscpu command in the host's operating system.
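
For example, on a Linux host you can confirm the thread and core topology with lscpu; with hyperthreading disabled, Thread(s) per core is 1. (Field names can vary slightly between lscpu versions.)

    # Show the logical CPU count, threads per core, cores per socket, and socket count.
    lscpu | grep -E '^CPU\(s\)|Thread\(s\) per core|Core\(s\) per socket|Socket\(s\)'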

Hardware and VM Requirements

This section lists the bare-metal, AWS, and Azure requirements for headend components.

The following figure illustrates a representative headend topology.

[Figure: Representative headend topology]

The following figure illustrates the network interfaces required for each headend component.

[Figure: Network interfaces required for each headend component]

Bare-Metal Hardware Requirements

Recommended Bare-Metal Hardware Requirements

The following table lists the recommended bare-metal hardware requirements for headend components for topologies with up to 3,500 CPE devices and up to 500 tenants.

Component Up to 3,500 CPEs and 500 Tenants Up to 2,500 CPEs and 500 Tenants Up to 1,000 CPEs and 200 Tenants Up to 500 CPEs and 100 Tenants Up to 250 CPEs and 50 Tenants
Analytics Cluster (Analytics and Search Nodes)

At least 2 Analytics clusters (each cluster supports up to 2,500 CPEs)

For per-cluster resource requirements, see the Up to 2,500 CPEs column (the next column)

Use an Analytics aggregator for a consolidated view across clusters

6 single-socket servers per cluster (cluster provides HA)

Of the 6 servers:

  • 4 of type analytics, with at least 2048 GB, SSD recommended
  • 2 of type search, with at least 1024 GB, SSD recommended

For each server:

  • 16 cores
  • 64 GB RAM
  • 3 network ports

4 single-socket servers per cluster (cluster provides HA)

Of the 4 servers:

  • 2 of type analytics, with at least 1024 GB, SSD recommended
  • 2 of type search, with at least 1024 GB, SSD recommended

For each server:

  • 16 cores
  • 64 GB RAM
  • 3 network ports

4 single-socket servers per cluster (cluster provides HA)

Of the 4 servers:

  • 2 of type analytics, with at least 1024 GB, SSD recommended
  • 2 of type search, with at least 512 GB, SSD recommended

For each server:

  • 16 cores
  • 64 GB RAM
  • 3 network ports

4 single-socket servers per cluster (cluster provides HA)

Of the 4 servers:

  • 2 of type analytics, with at least 1024 GB, SSD recommended
  • 2 of type search, with at least 512 GB, SSD recommended

For each server:

  • 16 cores
  • 32 GB RAM
  • 3 network ports
Analytics Log Forwarder Nodes

8 single-socket servers per cluster per region

For each server:

  • 4 cores
  • 8 GB RAM
  • 128 GB, preferably SSD
  • 2 network ports

8 single-socket servers per cluster per region

For each server:

  • 4 cores
  • 8 GB RAM
  • 128 GB, preferably SSD
  • 2 network ports

4 single-socket servers per cluster per region

For each server:

  • 4 cores
  • 8 GB RAM
  • 128 GB, preferably SSD
  • 2 network ports

2 single-socket servers per cluster per region

For each server:

  • 4 cores
  • 8 GB RAM
  • 128 GB, preferably SSD
  • 2 network ports

2 single-socket servers per cluster per region

For each server:

  • 4 cores
  • 8 GB RAM
  • 128 GB, preferably SSD
  • 2 network ports
Concerto Arbiter 

1 server:

  • 8 cores
  • 32 GB RAM
  • 512 GB, preferably SSD
  • 2 network ports

1 server:

  • 8 cores
  • 32 GB RAM
  • 512 GB, preferably SSD
  • 2 network ports

1 server:

  • 8 cores
  • 32 GB RAM
  • 512 GB, preferably SSD
  • 2 network ports

1 server:

  • 8 cores
  • 32 GB RAM
  • 512 GB, preferably SSD
  • 2 network ports

1 server:

  • 8 cores
  • 32 GB RAM
  • 512 GB, preferably SSD
  • 2 network ports
Concerto Cluster

2 worker servers for 3-node cluster:

  • 1 primary node
  • 1 secondary node
  • 1 arbiter node

For each server:

  • 20 cores
  • 48 GB RAM
  • 512 GB, preferably SSD
  • 2 network ports

2 worker servers for 3-node cluster:

  • 1 primary node
  • 1 secondary node
  • 1 arbiter node

For each server:

  • 16 cores
  • 32 GB RAM
  • 512 GB, preferably SSD
  • 2 network ports

2 worker servers for 3-node cluster:

  • 1 primary node
  • 1 secondary node
  • 1 arbiter node

For each server:

  • 8 cores
  • 32 GB RAM
  • 512 GB, preferably SSD
  • 2 network ports

2 worker servers for 3-node cluster:

  • 1 primary node
  • 1 secondary node
  • 1 arbiter node

For each server:

  • 8 cores
  • 32 GB RAM
  • 512 GB, preferably SSD
  • 2 network ports

2 worker servers for 3-node cluster:

  • 1 primary node
  • 1 secondary node
  • 1 arbiter node

For each server:

  • 8 cores
  • 32 GB RAM
  • 512 GB, preferably SSD
  • 2 network ports
Controller

V930, V1000, V1800, CSG1500, or CSG2500

2 each for non-HA, 4 each for HA; HA is highly recommended. 

For Hub-Controller nodes, see below.

Hub-controllers:

Hub-Controllers should have a higher specification than the Controllers. Each Hub-Controller node must include 8 additional cores dedicated solely to the control plane, in addition to the cores used for the data plane.

V930, V1000, V1800, CSG1500, or CSG2500

2 each for non-HA, 4 each for HA; HA is highly recommended. 

For Hub-Controller nodes, see below.

Hub-controllers:

Hub-Controllers should have a higher specification than the Controllers. Each Hub-Controller node must include 8 additional cores dedicated solely to the control plane, in addition to the cores used for the data plane.

V930, V1000, V1800, CSG1500, or CSG2500

1 each for non-HA, 2 each for HA; HA is highly recommended.

For Hub-Controller nodes, see below.

Hub-controllers:

Hub-Controllers should have a higher specification than the Controllers. Each Hub-Controller node must include 8 additional cores dedicated solely to the control plane, in addition to the cores used for the data plane.

V930, V1000, V1800, CSG1500, or CSG2500

1 each for non-HA, 2 each for HA; HA is highly recommended.

V930, V1000, V1800, CSG1500, or CSG2500

1 each for non-HA, 2 each for HA; HA is highly recommended. 

Director

1 single-socket server for non-HA, 2 single-socket servers for HA

For each server:

  • 24 cores
  • 64 GB RAM
  • 200 GB, preferably SSD
  • 3 network ports

CSG2500, Versa1800, and VEP-4600-V930 are some examples of servers suitable for this deployment.

1 single-socket server for non-HA, 2 single-socket servers for HA

For each server:

  • 24 cores
  • 64 GB RAM
  • 200 GB, preferably SSD
  • 3 network ports

CSG2500, Versa1800, and VEP-4600-V930 are some examples of servers suitable for this deployment.

1 single-socket server for non-HA, 2 single-socket servers for HA

For each server:

  • 16 cores
  • 48 GB or 64 GB RAM
  • 200 GB, preferably SSD
  • 3 network ports

CSG2500, Versa1800, and VEP-4600-V930 are some examples of servers suitable for this deployment.

1 single-socket server for non-HA, 2 single-socket servers for HA

For each server:

  • 16 cores
  • 48 GB or 64 GB RAM
  • 200 GB, preferably SSD
  • 3 network ports

CSG2500, Versa1800, and VEP-4600-V930 are some examples of servers suitable for this deployment.

1 single-socket server for non-HA, 2 single-socket servers for HA

For each server:

  • 16 cores
  • 32 GB RAM
  • 200 GB, preferably SSD
  • 3 network ports
Staging Controller (optional)

CSG355, CSG365, CSG750, or Versa 210

CSG355, CSG365, CSG750, or Versa 210

CSG355, CSG365, CSG750, or Versa 210

CSG355, CSG365, CSG750, or Versa 210

CSG355, CSG365, CSG750, or Versa 210
VOS underlay PE Router CSG770 or Versa 220, 2 each CSG770 or Versa 220, 2 each CSG770 or Versa 220, 2 each CSG770 or Versa 220, 2 each CSG770 or Versa 220, 2 each

The following table lists the recommended bare-metal hardware requirements for headend components for topologies up to 20,000 CPE devices and up to 4,000 tenants.

Component Up to 20,000 CPEs and 4,000 Tenants Up to 10,000 CPEs and 2,000 Tenants Up to 5,000 CPEs and 1,000 Tenants

Analytics

8 Analytics clusters (each cluster supports up to 2,500 CPEs)

For cluster resource requirements, see the 2,500 CPE column in the previous table

Use an Analytics aggregator for a consolidated view across clusters

4 Analytics clusters (each cluster supports up to 2,500 CPEs)

For cluster resource requirements, see the 2,500 CPE column in the previous table

Use an Analytics aggregator for a consolidated view across clusters

2 Analytics clusters (each cluster supports up to 2,500 CPEs)

For cluster resource requirements, see the 2,500 CPE column in the previous table

Use an Analytics aggregator for a consolidated view across clusters

Analytics Aggregator Cluster

3 single-socket servers per cluster (cluster provides HA)

For each server:

  • 16 cores
  • 64 GB RAM
  • 512 GB, SSD recommended
  • 2 network ports

3 single-socket servers per cluster (cluster provides HA)

For each server:

  • 16 cores
  • 64 GB RAM
  • 512 GB, SSD recommended
  • 2 network ports

2 single-socket servers per cluster (cluster provides HA)

For each server:

  • 16 cores
  • 64 GB RAM
  • 512 GB, SSD recommended
  • 2 network ports
Analytics Log Forwarder Nodes

8 single-socket servers per cluster per region

For each server:

  • 4 cores
  • 8 GB RAM
  • 128 GB, preferably SSD
  • 2 network ports

8 single-socket servers per cluster per region

For each server:

  • 4 cores
  • 8 GB RAM
  • 128 GB, preferably SSD
  • 2 network ports

8 single-socket servers per cluster per region

For each server:

  • 4 cores
  • 8 GB RAM
  • 128 GB, preferably SSD
  • 2 network ports
Concerto Arbiter 

1 server:

  • 24 cores
  • 64 GB RAM
  • 512 GB, preferably SSD
  • 2 network ports

1 server:

  • 16 cores
  • 32 GB RAM
  • 512 GB, preferably SSD
  • 2 network ports

1 server:

  • 16 cores
  • 32 GB RAM
  • 512 GB, preferably SSD
  • 2 network ports
Concerto Cluster

4 worker servers for 5-node cluster:

  • 2 primary nodes
  • 2 secondary nodes
  • 1 arbiter node

For each server:

  • 24 cores
  • 64 GB RAM
  • 512 GB, preferably SSD
  • 2 network ports

4 worker servers for 5-node cluster:

  • 2 primary nodes
  • 2 secondary nodes
  • 1 arbiter node

For each server:

  • 24 cores
  • 64 GB RAM
  • 256 GB, preferably SSD
  • 2 network ports

2 worker servers for 3-node cluster:

  • 1 primary node
  • 1 secondary node
  • 1 arbiter node

For each server:

  • 24 cores
  • 48 GB RAM
  • 512 GB, preferably SSD
  • 2 network ports
Controller

V930, V1000, V1800, CSG1500, or CSG2500

16 each for non-HA, 32 each for HA; HA is highly recommended. For Hub-Controller nodes, see below.

Hub-controllers:

Hub-Controllers should have a higher specification than the Controllers. Each Hub-Controller node must include 8 additional cores dedicated solely to the control plane, in addition to the cores used for the data plane.

V930, V1000, V1800, CSG1500, or CSG2500

8 each for non-HA, 16 each for HA; HA is highly recommended. For Hub-Controller nodes, see below.

Hub-controllers:

Hub-Controllers should have a higher specification than the Controllers. Each Hub-Controller node must include 8 additional cores dedicated solely to the control plane, in addition to the cores used for the data plane.

V930, V1000, V1800, CSG1500, or CSG2500

4 each for non-HA, 8 each for HA; HA is highly recommended. For Hub-Controller nodes, see below.

Hub-controllers:

Hub-Controllers should have a higher specification than the Controllers. Each Hub-Controller node must include 8 additional cores dedicated solely to the control plane, in addition to the cores used for the data plane.
Director

8 single-socket servers for non-HA, 16 single-socket servers for HA; HA is highly recommended

For each server:

  • 24 cores
  • 64 GB RAM
  • 512 GB, preferably SSD
  • 2 network ports

4 single-socket servers for non-HA, 8 single-socket servers for HA; HA is highly recommended

For each server:

  • 24 cores
  • 64 GB RAM
  • 512 GB, preferably SSD
  • 2 network ports

2 single-socket servers for non-HA, 4 single-socket servers for HA; HA is highly recommended

For each server:

  • 24 cores
  • 64 GB RAM
  • 512 GB, preferably SSD
  • 2 network ports
Staging Controller (optional) CSG355, CSG365, CSG750, or Versa 210 CSG355, CSG365, CSG750, or Versa 210 CSG355, CSG365, CSG750, or Versa 210
VOS underlay PE Router CSG770 or Versa 220, 2 each CSG770 or Versa 220, 2 each CSG770 or Versa 220, 2 each

Minimum Bare-Metal Hardware Requirements

The following table lists the minimum bare-metal hardware requirements for headend components. Note that for Analytics nodes, the table shows only the minimum storage recommendations. You may need to increase storage depending on logging rate, log retention, and other factors.

Use the minimum hardware resources only for lab or proof-of-concept (POC) environments. For production environments, use the hardware listed in Recommended Bare-Metal Hardware Requirements, above.

Component Up to 2,500 CPEs and 500 Tenants Up to 1,000 CPEs and 200 Tenants Up to 500 CPEs and 100 Tenants Up to 250 CPEs and 50 Tenants
Analytics

6 single-socket servers per cluster (cluster provides HA)

Of the 6 servers:

  • 4 of type analytics, with at least 2048 GB, preferably SSD
  • 2 of type search, with at least 1024 GB, preferably SSD

For each server:

  • 16 cores
  • 64 GB RAM
  • 2 network ports

4 single-socket servers per cluster (cluster provides HA)

Of the 4 servers:

  • 2 of type analytics, with at least 1024 GB, preferably SSD
  • 2 of type search, with at least 1024 GB, preferably SSD

For each server:

  • 16 cores
  • 64 GB RAM
  • 2 network ports

4 single-socket servers per cluster (cluster provides HA)

Of the 4 servers:

  • 2 of type analytics, with at least 1024 GB, preferably SSD
  • 2 of type search, with at least 512 GB, preferably SSD

For each server:

  • 8 cores
  • 16 GB RAM
  • 2 network ports

4 single-socket servers per cluster (cluster provides HA)

Of the 4 servers:

  • 2 of type analytics, with at least 512 GB, preferably SSD
  • 2 of type search, with at least 256 GB, preferably SSD

For each server:

  • 8 cores
  • 16 GB RAM
  • 2 network ports
Analytics Log Forwarder Nodes

8 single-socket servers per cluster per region

For each server:

  • 4 cores
  • 8 GB RAM
  • 128 GB, preferably SSD
  • 2 network ports

4 single-socket servers per cluster per region

For each server:

  • 4 cores
  • 8 GB RAM
  • 128 GB, preferably SSD
  • 2 network ports

2 single-socket servers per cluster per region

For each server:

  • 4 cores
  • 8 GB RAM
  • 128 GB, preferably SSD
  • 2 network ports

2 single-socket servers per cluster per region

For each server:

  • 4 cores
  • 8 GB RAM
  • 128 GB, preferably SSD
  • 2 network ports
Controller

V930, V1000, V1800, CSG1500, or CSG2500

2 each for non-HA, 4 each for HA;

HA is highly recommended. For Hub-Controller nodes, see below.

Hub-controllers:

Hub-Controllers should have a higher specification than the Controllers. Each Hub-Controller node must include 8 additional cores dedicated solely to the control plane, in addition to the cores used for the data plane.

V930, V1000, V1800, CSG1500, or CSG2500

1 each for non-HA, 2 each for HA;

HA is highly recommended. For Hub-Controller nodes, see below.

Hub-controllers:

Hub-Controllers should have a higher specification than the Controllers. Each Hub-Controller node must include 8 additional cores dedicated solely to the control plane, in addition to the cores used for the data plane.

CSG770, V810, or V910

1 each for non-HA, 2 each for HA;

HA is highly recommended.

 

CSG770, V810, or V910

1 each for non-HA, 2 each for HA;

HA is highly recommended.

Director

1 single-socket server, or
2 single-socket servers (for HA)

For each server:

  • 16 cores
  • 32 GB RAM
  • 200 GB, preferably SSD
  • 2 network ports

1 single-socket server, or 2 single-socket servers (for HA)

For each server:

  • 16 cores
  • 32 GB RAM
  • 200 GB, preferably SSD
  • 2 network ports

1 single-socket server, or 2 single-socket servers (for HA)

For each server:

  • 8 cores
  • 16 GB RAM
  • 200 GB, preferably SSD
  • 2 network ports

1 single-socket server, or 2 single-socket servers (for HA)

For each server:

  • 8 cores
  • 16 GB RAM
  • 200 GB, preferably SSD
  • 2 network ports
Staging Controller (optional)

CSG355, CSG365, CSG750, or Versa 210

CSG355, CSG365, CSG750, or Versa 210

CSG355, CSG365, CSG750, or Versa 210

CSG355, CSG365, CSG750, or Versa 210
VOS underlay PE Router CSG770 or Versa 220, 2 each CSG770 or Versa 220, 2 each CSG770 or Versa 220, 2 each CSG770 or Versa 220, 2 each

General Notes about Bare-Metal Hardware Requirements

  • Each Controller node can support a maximum of 255 tenants (organizations).
  • The Versa 210 and Versa 220 devices are provided by Advantech, Dell, and Lanner. For more information, see Versa SD-WAN White-Box Appliances.
  • After you start the Versa services, the Versa Director node reserves a fixed amount of memory for core processes such as Spring Boot and Tomcat, regardless of real-time utilization. This allocation can account for up to 70 percent of available memory. Because of this preallocation, RAM utilization does not increase linearly when you have a low to moderate number of managed devices.

AWS Requirements

The following table shows the minimum headend requirements for AWS virtual machine (VM) installations. Note that for Analytics nodes, the table shows only the minimum storage recommendations. You may need to increase storage depending on logging rate, log retention, and other factors.

To achieve interchassis HA, two EC2 instances are required for Director and Controller nodes. For Analytics, interchassis HA is achieved through clustering, which is accounted for in the recommended number of EC2 instances.

Component Up to 2,500 CPEs and 500 Tenants Up to 1,000 CPEs and 200 Tenants Up to 500 CPEs and 100 Tenants Up to 250 CPEs and 50 Tenants
Analytics

6 c5.4xlarge instances (cluster provides HA)

2 virtual NIC ports

Of the 6 instances:

– 4 of type analytics, with at least 2048 GB, preferably SSD

– 2 of type search, with at least 1024 GB, preferably SSD

4 c5.4xlarge instances (cluster provides HA)

2 virtual NIC ports

Of the 4 instances:

– 2 of type analytics, with at least 1024 GB, preferably SSD

– 2 of type search, with at least 1024 GB, preferably SSD

4 c5.2xlarge instances (cluster provides HA)
2 virtual NIC ports

Of the 4 instances:

– 2 of type analytics, with at least 1024 GB, preferably SSD

– 2 of type search, with at least 512 GB, preferably SSD

If HA for the database is not required, you can enable only 1 analytics and 1 search instance, which still ensures HA for log data

4 c5.2xlarge instances (cluster provides HA)
2 virtual NIC ports

Of the 4 instances:

– 2 of type analytics, with at least 1024 GB, preferably SSD

– 2 of type search, with at least 512 GB, preferably SSD

If HA for the database is not required, you can enable only 1 analytics and 1 search instance, which still ensures HA for log data

Analytics Log Forwarder Nodes 8 c5.xlarge instances per region
2 virtual NIC ports
4 c5.xlarge instances per region
2 virtual NIC ports

2 c5.xlarge instances per region
2 virtual NIC ports

If regional log collectors are not required, external log collector/forwarder is optional

2 c5.xlarge instances per region
2 virtual NIC ports

If regional log collectors are not required, external log collector/forwarder is optional

Controller

2 c5.4xlarge instances, or 4 c5.4xlarge instances (for HA)

3 virtual NIC ports per instance

120 GB, preferably SSD

1 Controller pair supports up to 256 tenants; 2 pairs needed for 500 tenants.

For Hub-Controller nodes, see below.

Hub-controllers:

Hub-Controllers should have a higher specification than the Controllers. Each Hub-Controller node must include 8 additional cores dedicated solely to the control plane, in addition to the cores used for the data plane.

1 c5.4xlarge instance, or 2 c5.4xlarge instances (for HA)

3 virtual NIC ports per instance

120 GB, preferably SSD

For Hub-Controller nodes, see below.

Hub-controllers:

Hub-Controllers should have a higher specification than the Controllers. Each Hub-Controller node must include 8 additional cores dedicated solely to the control plane, in addition to the cores used for the data plane.

1 c5.2xlarge instance, or 2 c5.2xlarge instances (for HA)

3 virtual NIC ports per instance

120 GB, preferably SSD

1 c5.2xlarge instance, or 2 c5.2xlarge instances (for HA)

3 virtual NIC ports per instance

120 GB, preferably SSD

Director 1 c5.4xlarge instance, or 2 c5.4xlarge instances (for HA)
2 virtual NIC ports per instance

200 GB, preferably SSD

1 c5.4xlarge instance, or 2 c5.4xlarge instances (for HA)
2 virtual NIC ports per instance

200 GB, preferably SSD

1 c5.4xlarge instance, or 2 c5.4xlarge instances (for HA)
2 virtual NIC ports per instance

200 GB, preferably SSD

1 c5.4xlarge instance, or 2 c5.4xlarge instances (for HA)
2 virtual NIC ports per instance

200 GB, preferably SSD

VOS Underlay PE Router (to connect to VPC)

2 c5.xlarge instances

2 c5.xlarge instances

2 c5.xlarge instances

2 c5.xlarge instances
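
As an illustration only, the following sketch launches a single Controller-sized instance (c5.4xlarge) with a 120 GB gp3 (SSD-backed) root volume using the AWS CLI. The AMI ID, key pair, and subnet ID are placeholders, the root device name depends on the AMI, and the additional virtual NIC ports listed above must still be attached as separate network interfaces.

    # Launch one c5.4xlarge instance with a 120 GB gp3 root volume.
    # Replace the placeholder AMI, key pair, and subnet with your own values.
    aws ec2 run-instances \
        --image-id ami-0123456789abcdef0 \
        --instance-type c5.4xlarge \
        --count 1 \
        --key-name my-keypair \
        --subnet-id subnet-0123456789abcdef0 \
        --block-device-mappings '[{"DeviceName":"/dev/sda1","Ebs":{"VolumeSize":120,"VolumeType":"gp3"}}]'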

Azure Requirements

The following table shows the requirements for Azure VM installations. Note that for Analytics nodes, the table shows only the minimum storage recommendations. You may need to increase storage depending on logging rate, log retention, and other factors.

To achieve interchassis HA, two F-series VMs are required for Versa Director and Versa Controller. For Versa Analytics, interchassis HA is achieved through clustering, which is accounted for in the recommended number of VM instances.

Component Up to 2,500 CPEs and 500 Tenants Up to 1,000 CPEs and 200 Tenants Up to 500 CPEs and 100 Tenants Up to 250 CPEs and 50 Tenants
Analytics 6 standard_F16s_v2 VMs (cluster provides redundancy)
2 virtual NIC ports per instance

Of the 6 instances:

– 4 of type analytics, with at least 2048 GB, preferably SSD

– 2 of type search, with at least 1024 GB, preferably SSD

4 standard_F16s_v2 VMs (cluster provides redundancy)
2 virtual NIC ports per instance

Of the 4 instances:

– 2 of type analytics, with at least 1024 GB, preferably SSD

– 2 of type search, with at least 1024 GB, preferably SSD

4 standard_F8s_v2 VMs (cluster provides redundancy)
2 virtual NIC ports per instance

Of the 4 instances:

– 2 of type analytics, with at least 1024 GB, preferably SSD

– 2 of type search, with at least 512 GB, preferably SSD

If HA for the database is not required, you can enable only 1 analytics and 1 search instance, which still ensures HA for log data

4 standard_F8s_v2 VMs (cluster provides redundancy)
2 virtual NIC ports per instance

Of the 4 instances:

– 2 of type analytics, with at least 1024 GB, preferably SSD

– 2 of type search, with at least 512 GB, preferably SSD

If HA for the database is not required, you can enable only 1 analytics and 1 search instance, which still ensures HA for log data

Analytics Log Forwarder Nodes 8 standard_F4s VMs per region
2 virtual NIC ports
4 standard_F4s VMs per region
2 virtual NIC ports

2 standard_F4s VMs per region
2 virtual NIC ports

If regional log collectors are not required, external log collector/forwarder is optional

2 standard_F4s VMs per region
2 virtual NIC ports

If regional log collectors are not required, external log collector/forwarder is optional

Controller

2 standard_F16s_v2 VMs, or 4 standard_F16s_v2 VMs (for HA)
3 virtual NIC ports per instance

120 GB, preferably SSD.

1 Controller pair supports up to 256 tenants; 2 pairs needed for 500 tenants. For Hub-Controller nodes, see below.

Hub-controllers:

Hub-Controllers should have a higher specification than the Controllers. Each Hub-Controller node must include 8 additional cores dedicated solely to the control plane, in addition to the cores used for the data plane.

1 standard_F16s_v2 VM, or 2 standard_F16s_v2 VMs (for HA)
3 virtual NIC ports per instance

120 GB, preferably SSD. 

For Hub-Controller nodes, see below.

 

Hub-controllers:

Hub-Controllers should have a higher specification than the Controllers. Each Hub-Controller node must include 8 additional cores dedicated solely to the control plane, in addition to the cores used for the data plane.

1 standard_F8s_v2 VM, or 2 standard_F8s_v2 VMs (for HA)
3 virtual NIC ports per instance

120 GB, preferably SSD.

1 standard_F8s_v2 VM, or 2 standard_F8s_v2 VMs (for HA)
3 virtual NIC ports per instance

120 GB, preferably SSD.

Director

1 standard_F16s_v2 VM, or 2 standard_F16s_v2 VMs (for HA)
2 virtual NIC ports per instance

200 GB, preferably SSD.

1 standard_F16s_v2 VM, or 2 standard_F16s_v2 VMs (for HA)
2 virtual NIC ports per instance

200 GB, preferably SSD.

1 standard_F16s_v2 VM, or 2 standard_F16s_v2 VMs (for HA)
2 virtual NIC ports per instance

200 GB, preferably SSD.

1 standard_F16s_v2 VM, or 2 standard_F16s_v2 VMs (for HA)
2 virtual NIC ports per instance

200 GB, preferably SSD.

VOS Underlay PE Router (to connect to VNet) 2 standard_F2s_v2 VMs 2 standard_F2s_v2 VMs 2 standard_F2s_v2 VMs 2 standard_F2s_v2 VMs
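
As an illustration only, the following sketch creates a single Director-sized VM (Standard_F16s_v2) with a 200 GB OS disk using the Azure CLI. The resource group, VM name, image reference, and admin user are placeholders; the second virtual NIC port listed above must still be added, and the image itself must be the one provided by Versa Networks.

    # Create one Standard_F16s_v2 VM with a 200 GB OS disk.
    # Replace the placeholder resource group, name, and image with your own values.
    az vm create \
        --resource-group versa-headend-rg \
        --name director-1 \
        --size Standard_F16s_v2 \
        --image versa-director-image-placeholder \
        --os-disk-size-gb 200 \
        --admin-username versa \
        --generate-ssh-keys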

Versa Concerto Hardware Requirements

Versa Concerto supports both bare-metal and virtualized deployments. The following table lists the system requirements for an on-premises Concerto deployment for both bare metal and virtual environments.

  Up to 2,500 Branches Up to 5,000 Branches Up to 10,000 Branches Up to 20,000 Branches

Number of tenants 500 1024 1024 1024
Deployment options Bare metal or VM Bare metal or VM Bare metal or VM Bare metal or VM
Processors High-end x86 server class High-end x86 server class High-end x86 server class High-end x86 server class
Worker nodes
  • Servers

2 worker servers for 3-node cluster:

  • 1 primary node
  • 1 secondary node
  • 1 arbiter node

2 worker servers for 3-node cluster:

  • 1 primary node
  • 1 secondary node
  • 1 arbiter node

4 worker servers for 5-node cluster:

  • 2 primary nodes
  • 2 secondary nodes
  • 1 arbiter node

8 worker servers for 9-node cluster:

  • 4 primary nodes
  • 4 secondary nodes
  • 1 arbiter node
  • Cores
16 24 24 24
  • Storage capacity
512 GB 512 GB 512 GB 512 GB
  • DRAM
32 GB 48 GB 64 GB 64 GB
  • Network ports per server
2 2 2 2
Arbiter node        
  • Servers
1 1 1 1
  • Cores
8 8 16 16
  • Storage capacity
512 GB 512 GB 512 GB 512 GB
  • DRAM
16 GB 16 GB 32 GB 32 GB
  • Network ports per server
2 2 2 2

Azure

Up to 2,500 branches:

Worker nodes

  • 1 x Standard_F16s_v2 (primary node)
  • 1 x Standard_F16s_v2 (secondary node)

Arbiter node

  • 1 x Standard_F8s_v2

Up to 5,000 branches:

Worker nodes

  • 1 x Standard_F32s_v2 (primary node)
  • 1 x Standard_F32s_v2 (secondary node)

Arbiter node

  • 1 x Standard_F8s_v2

Up to 10,000 branches:

Worker nodes

  • 2 x Standard_F32s_v2 (primary nodes)
  • 2 x Standard_F32s_v2 (secondary nodes)

Arbiter node

  • 1 x Standard_F16s_v2

Up to 20,000 branches:

Worker nodes

  • 2 x Standard_F32s_v2 (primary nodes)
  • 2 x Standard_F32s_v2 (secondary nodes)

Arbiter node

  • 1 x Standard_F32s_v2

AWS

Up to 2,500 branches:

Worker nodes

  • 1 x c5.4xlarge (primary node)
  • 1 x c5.4xlarge (secondary node)

Arbiter node

  • 1 x c5.2xlarge

Up to 5,000 branches:

Worker nodes

  • 1 x c5.9xlarge (primary node)
  • 1 x c5.9xlarge (secondary node)

Arbiter node

  • 1 x c5.2xlarge

Up to 10,000 branches:

Worker nodes

  • 2 x c5.9xlarge (primary nodes)
  • 2 x c5.9xlarge (secondary nodes)

Arbiter node

  • 1 x c5.4xlarge

Up to 20,000 branches:

Worker nodes

  • 2 x c5.9xlarge (primary nodes)
  • 2 x c5.9xlarge (secondary nodes)

Arbiter node

  • 1 x c5.9xlarge

General Hardware and VM Requirements

The following are general requirements for bare-metal deployments:

  • Servers must have Sandy Bridge or newer class of CPUs.
  • Server hardware must support Ubuntu 18.04.
  • For Director and Analytics nodes, it is recommended that you use server models that Versa Networks customers have already deployed in production networks. If you want to use other servers, discuss this with your Versa Networks Sales Engineer, and plan to qualify the hardware in your lab before placing it into production. Variables to validate include RAID controllers, which might not be supported by the Ubuntu 18.04 release used by the Versa software, and persistent NIC-ordering settings.
  • The Director and Analytics nodes must support Ubuntu 18.04. Typical network interfaces that are supported are:
    • For 1-GB interfaces—i350-based and i210-based network adapters
    • For 10-GB interfaces—X710-based and 82599-based network adapters
  • You must disable hyperthreading in the BIOS.
  • The following are the minimum IOPS requirements for Director and VOS nodes; however, it is recommended that the disk have more IOPS (a sample fio measurement appears after this list):
    • Random 4K Reads—78k IOPS
    • Random 4K Writes—86k IOPS
  • When more than 500 sites are connected to a Controller or Hub-Controller node, allocate eight CPUs for control plane processing by issuing the following CLI command and then rebooting the node:
Controller-CLI> request system isolate-cpu enable num-control-cpu 8

Note: This command assumes that the system has a minimum of 16 cores, of which 8 are reserved for the control plane.
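
As a hedged illustration, you can measure random 4K IOPS with fio before placing a node into production and compare the result against the minimums above; the target path, test size, and job count below are examples only. Rerun with --rw=randwrite to measure random write IOPS.

    # Measure random 4K read IOPS on the target disk for 60 seconds.
    fio --name=rand4kread --filename=/var/tmp/fio.test --rw=randread \
        --bs=4k --size=4G --ioengine=libaio --iodepth=32 --numjobs=4 \
        --direct=1 --runtime=60 --time_based --group_reporting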

For VM deployments, note the following:

  • With the same resources as a bare-metal installation, the performance of VM headend components is about 25 percent lower because of virtualization overhead.
  • For public cloud instances, Versa publishes instance names based on scale and throughput requirements. If these instances are used, CPU pinning, dedicated CPU, memory, and hyperthreading are already optimized.
  • For non-cloud virtual instances:
    • On the VM host, you must allocate a dedicated CPU and dedicated memory to the Versa headend VMs. This is called 1:1 provisioning.
    • You must pin the CPU to the VM vCPU.
    • You must disable hyperthreading on the VM host.
  • The preferred NICs are SR-IOV NICs, which provide the best network I/O (a quick sysfs check for SR-IOV support appears after this list). A second choice is a VirtIO NIC driver. For VMware, it is recommended that you use VMXNET3 on all interfaces.
  • OpenStack and KVM are supported. If you host the headend components on OpenStack, you must have a good understanding of the OpenStack infrastructure, and you must be able to troubleshoot hypervisor, underlay, and firewall issues.
  • Ensure that the disk IO is optimized. The following are the minimum IOPS requirements for Director and VOS nodes; however, it is recommended that the disk have more IOPS:
    • Random 4K Reads—78k IOPS
    • Random 4K Writes—86k IOPS
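
As a hedged illustration, on a Linux KVM host you can check through sysfs whether a NIC supports SR-IOV and how many virtual functions (VFs) it can expose; the interface name eth1 is a placeholder.

    # Check how many VFs the NIC can expose (requires an SR-IOV capable NIC).
    cat /sys/class/net/eth1/device/sriov_totalvfs
    # Enable, for example, 4 VFs on that NIC (requires root).
    echo 4 > /sys/class/net/eth1/device/sriov_numvfs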

The following table lists the VM software supported for VOS devices.

Software Type Supported Software
Cloud Platforms Amazon Machine Image (AMI)
Google Cloud Platform
Microsoft Azure VHD
Oracle Cloud Infrastructure (OCI)
Hypervisors KVM (Ubuntu 18.04 and later)
VMware vSphere 5.5, 6.0, 6.5, 6.6, 6.7, 7.0
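
For KVM, the following is a minimal sketch of defining a VOS guest with virt-install. The guest name, image path, resource sizing, and bridge names are illustrative only; size the guest according to the tables above and use the qcow2 image provided by Versa Networks.

    # Define and start a VOS guest from an existing qcow2 image on a KVM host.
    virt-install \
        --name vos-branch-01 \
        --vcpus 4 \
        --cpu host-passthrough \
        --memory 8192 \
        --import \
        --disk path=/var/lib/libvirt/images/versa-vos.qcow2,bus=virtio \
        --network bridge=br-mgmt,model=virtio \
        --network bridge=br-wan,model=virtio \
        --os-variant ubuntu18.04 \
        --noautoconsole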

For information about AWS, Azure, and Google cloud instances that have been qualified by Versa Networks and that you can use for headend and VOS devices, see Qualified AWS, Azure, and Google Cloud Instances.

Versa Messaging Service

For Releases 21.2.1 and later.

The following are the minimum hardware requirements to install the Versa messaging service (VMS) software on a bare-metal platform:

  • 8 cores
  • 16-GB RAM
  • 250-GB solid state drive (SSD)

You can also deploy VMS on KVM and VMware virtual machines. When you install VMS on virtual machines, Versa Networks recommends the following for the VM deployment:

  • Deploy the virtual machine using a SCSI controller.
  • Disable hyperthreading.
  • Ensure that CPU affinity and core pinning for the VMS is on a single socket (see the sketch following this list).
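
The following is a minimal sketch of pinning a VMS guest to a single socket on a KVM host, assuming a hypothetical libvirt domain named vms-node1 and that physical cores 0 through 7 all belong to socket 0 (verify the topology with lscpu -e).

    # Pin each vCPU of the vms-node1 guest to one physical core on socket 0.
    for vcpu in 0 1 2 3 4 5 6 7; do
        virsh vcpupin vms-node1 "$vcpu" "$vcpu" --config
    done
    # Keep the emulator threads on the same socket.
    virsh emulatorpin vms-node1 0-7 --config

On VMware, you can achieve the same result with the VM's scheduling affinity settings.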

Software Requirements

All headend components must run the same Versa software version. For example, if you are using Release 21.2.1, you must install this same software version on the Versa Director, Versa Analytics, and Versa Controller nodes.

Versa Concerto Release 10.1.1 is supported with Release 20.2.3 and later versions of the Versa DCA complex. Concerto is not supported with Release 21.1.1 of Versa DCA.

For information about the latest software release version, contact Versa Networks Customer Support.

Note: Versa Networks does not support the installation of any software packages other than those provided by Versa Networks. Installing non-Versa Networks software packages voids any service agreement with Versa technical support.

 

Supported Software Information

Releases 20.2 and later support all content described in this article, except as noted.

  • Releases 20.2.3 and later versions of the Versa DCA complex support Versa Concerto Release 10.1.1.
    Note: Release 21.1.1 of Versa DCA does not support Concerto Release 10.1.1.
  • Releases 21.2.1 and later add support for Versa messaging service.

Revision History

April 15, 2022—Update Concerto hardware requirements.
April 21, 2022—Update Controller requirements for 500 tenants for bare metal, AWS, and Azure.
May 31, 2023—Expand description of Analytics node requirements.
June 26, 2023—Update AWS Director requirements for 500 CPEs and 100 tenants from c5.2xlarge to c5.4xlarge. Update Azure Director requirements for 500 CPEs and 100 tenants from 1 standard_F8s_v2 VM, or 2 standard_F8s_v2 VMs (for HA), to 1 standard_F16s_v2 VM, or 2 standard_F16s_v2 VMs (for HA).
August 23, 2023—Add bare-metal requirements for 5,000, 10,000, and 20,000 CPEs.
September 8, 2023—Add Controller and Director node information for 5,000, 10,000, and 20,000 CPEs.
November 1, 2023—Add list of VM software supported for VOS devices. Add CSG770 to minimum bare-metal requirements as a supported Controller for 250 and 500 CPEs.
July 15, 2024—Add instructions for control CPU allocation based on site connectivity thresholds for Controller and Hub-Controller nodes.
