Configure Appliance Clustering
For supported software information, see Supported Software Information below.
You can configure elastic clustering to group multiple Versa Operating System™ (VOS™) appliances at the same physical location, such as a branch or data center, into a single cluster. In an appliance cluster, two or more VOS appliances form a logical unit that can scale out on both the service plane and the IO (input/output) plane. Within a cluster, the appliances configured as service nodes can perform stateful services, such as SD-WAN termination, TLS decryption, and antivirus (AV), at any location, hub, or spoke. In addition, the appliances in the cluster can use different underlay connectivity to reach each other using IP addresses, which allows the cluster to extend to the cloud.
An appliance cluster supports high throughput with bare-metal and virtual nodes. A cluster can handle packets even if packets of the same flow are received on different cluster nodes. The appliance cluster can automatically add more service-plane and IO-plane capacity as required, without the upfront costs of over-provisioning to handle peak demand. As the network and service utilization of a location reaches its limit, you can add capacity by augmenting existing devices with additional scale-out devices. This protects existing investments and avoids expensive replacements.
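The distribution described above rests on flow hashing. The following sketch is a minimal, generic model of that idea, not the VOS implementation; the node names, key fields, and hash function are assumptions for illustration only. Because every node computes the same hash over a flow's key fields, packets of one flow are steered to one service node no matter which IO node receives them, so stateful processing stays in one place.

# Illustrative sketch only: a generic flow-hash model, not the VOS implementation.
# Any IO node that computes the same hash over a flow's key fields picks the
# same service node, so a stateful session is serviced in one place.
import hashlib

SERVICE_NODES = ["VOS-1", "VOS-2", "VOS-3"]   # hypothetical cluster members

def service_node_for(src_ip, dst_ip, proto, src_port, dst_port):
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return SERVICE_NODES[digest % len(SERVICE_NODES)]

# Two different IO nodes receiving packets of the same flow agree on the owner.
flow = ("10.10.50.100", "203.0.113.7", "tcp", 51514, 443)
print("IO node A steers flow to:", service_node_for(*flow))
print("IO node B steers flow to:", service_node_for(*flow))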
Before You Begin
Before you configure appliance clustering, make sure that the following prerequisites are completed:
- Configure SD-WAN nodes in a full mesh topology. (See SD-WAN Topologies.)
- Enable direct internet access (DIA) for the client at the bottom of the topology, with working DNS. (See Configure Direct Breakout to the Internet.)
- Enable next-generation firewall (NGFW) with applicable traffic rules on the VOS nodes. (See Configure Next-Gen Firewall.)
- Ensure that the VOS image used on the branches and Controllers supports clustering.
- Ensure that the Controllers run a software version that is the same as or later than that of the branches.
- Configure equal-cost multipath (ECMP) or Virtual Router Redundancy Protocol (VRRP) as the first-hop or load-balancing protocol. (See Configure SD-WAN Traffic Engineering or Configure VRRP.)
Configure Appliance Clusters
- In Director view:
- Select the Administration tab in the top menu bar.
- Select Appliances in the left menu bar.

- Select a device name in the main panel. The view changes to Appliance view.
- Select the Configuration tab in the top menu bar.
- Select Others > System > Configuration > Configuration in the left menu bar. The following screen displays.

- Click the Edit icon in the Service Scaling section. In the Edit Service Scaling window, enter the following information.

| Field | Description |
| --- | --- |
| Cluster ID | Enter a value for the cluster ID. Range: 0 through 65535. Default: 1 |
| Control Node Priority | Enter a control node priority value. Range: 0 through 255. Default: 1 |
| Service Weight | Enter a service weight value. Range: 10 through 100. Default: 100 |
| Flow Hash Type | Select a flow hash type: Two Tuple or Five Tuple |
| Control Node | Select to enable the appliance as a control node. |
| Input Output Node | Select to enable the appliance as an input/output (IO) node. |
| Service Node | Select to enable the appliance as a service node. |

(For an illustration of how the flow hash type and service weight can affect session distribution, see the sketch that follows these steps.)

- Click OK.
- Repeat Step 1 through Step 5 to add additional appliances to the cluster.
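The Flow Hash Type and Service Weight fields govern how sessions are spread across service nodes. The sketch below is illustrative only and is not a description of the VOS scheduler: it assumes the standard networking meanings of a two-tuple (source and destination IP address) and a five-tuple (source and destination IP address, protocol, and source and destination ports), and uses hypothetical node names, weights, and a weighted rendezvous hash. It shows that a five-tuple hash spreads many connections from one source address across nodes, while a two-tuple hash keeps them on one node, and that a lower service weight attracts a proportionally smaller share.

# Illustrative sketch only: how flow hash type (two tuple vs. five tuple) and
# service weight could influence session distribution. Node names, weights,
# and the weighted rendezvous hashing scheme are hypothetical, not the VOS scheduler.
import hashlib
import math
from collections import Counter

NODES = {"VOS-1": 100, "VOS-2": 100, "VOS-3": 50}  # assumed service weights

def flow_key(pkt, hash_type="five-tuple"):
    if hash_type == "two-tuple":           # source and destination IP only
        fields = (pkt["src_ip"], pkt["dst_ip"])
    else:                                  # plus protocol and ports
        fields = (pkt["src_ip"], pkt["dst_ip"], pkt["proto"],
                  pkt["src_port"], pkt["dst_port"])
    return "|".join(map(str, fields))

def pick_node(pkt, hash_type="five-tuple"):
    """Weighted rendezvous hashing: heavier nodes win proportionally more flows."""
    key = flow_key(pkt, hash_type)
    def score(node):
        h = int.from_bytes(hashlib.sha256(f"{key}|{node}".encode()).digest()[:8], "big")
        u = (h + 0.5) / 2**64              # uniform value in (0, 1)
        return -NODES[node] / math.log(u)
    return max(NODES, key=score)

# Many connections from one source IP collapse onto a single node with a
# two-tuple hash, but spread out (roughly 2:2:1 by weight) with a five-tuple hash.
pkts = [{"src_ip": "10.10.50.100", "dst_ip": "203.0.113.7", "proto": "tcp",
         "src_port": 40000 + i, "dst_port": 443} for i in range(1000)]
for ht in ("two-tuple", "five-tuple"):
    print(ht, Counter(pick_node(p, ht) for p in pkts))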
Add Virtual Routers
To enable appliance clusters, you must add the appropriate virtual routers at the provider organization level.
- In Director view, go to Organization > Limits.
- Select an organization. The Edit Organization Limit screen displays with the General tab selected by default.
- Click the Resources tab.

- In the Owned Service Scaling Routing Instances list, select one or more of the available routing instances.
- Click OK.
Check the VRRP State
Before generating traffic, note which cluster member is the VRRP master. Most of the traffic arrives at the master node, but clustering should result in an even distribution of sessions across the cluster members.
In the example below, the VRRP master is VOS-1.
admin@VOS-1-cli> show vrrp group summ
GROUP PRIORITY
INTERFACE ID TNT STATE MODE CONF(CURR) TYPE IP Address
---------------- -------- --- ---------- ---------- ------------ -------- -------------------
vni-0/2.0 10 2 Master Active 100(100) Primary 10.10.50.14
Virtual 10.10.50.50
[ok][2025-05-06 13:45:11]
admin@VOS-1-cli>
Check the SLA State
To check the state of the SLA, issue the show orgs org Witkop-Tenant sd-wan sla-monitor status command.
admin@VOS-3-cli> show orgs org Witkop-Tenant sd-wan sla-monitor status | tab
LOCAL REMOTE
WAN WAN
PATH REMOTE SITE FWD LOCAL REMOTE WAN LINK LINK PATH ADAPTIVE DAMP DAMP CONN LAST
SITE NAME HANDLE NAME CLASS WAN LINK LINK ID ID MTU MONITORING STATE FLAPS STATE FLAPS FLAPPED
---------------------------------------------------------------------------------------------------------------------------------------------
Branch-1 6885636 Branch-1 fc_ef INTERNET INTERNET 1 1 1500 active disable 0 up 1 00:01:01
6885892 Branch-1 fc_ef INTERNET Fremont-Inet 1 2 - active disable 0 down 1 00:00:58
Controller-1 135424 Controller-1 fc_nc INTERNET INTERNET 1 1 1500 disable disable 0 up 1 00:01:00
VOS-1 6689028 VOS-1 fc_ef INTERNET INTERNET 1 1 1500 active disable 0 up 2 00:00:26
VOS-2 6623492 VOS-2 fc_ef INTERNET INTERNET 1 1 1500 active disable 0 up 2 00:00:42
[ok][2025-05-06 17:02:49]
Repeat this command on all cluster members.
Check the Current Number of Sessions
To check the current number of sessions, issue the show orgs org Witkop-Tenant sessions summary command.
admin@VOS-1-cli> show orgs org Witkop-Tenant sessions summary
sessions summary 0
 session-count       0
 session-created     318
 session-closed      318
 nat-session-count   0
 nat-session-created 0
 nat-session-closed  0
 session-failed      0
 session-count-max   100000
 tcp-session-count   0
 udp-session-count   0
 icmp-session-count  0
 other-session-count 0
[ok][2025-05-06 16:28:26]
admin@VOS-1-cli>
Repeat this command on the other VOS nodes in the cluster. Make a note of the current sessions and the distribution of those sessions.
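To compare the distribution, one option is to save each node's sessions summary output to a text file and tally the counts with a short script. The sketch below is such a helper and is not part of VOS or Director; the file names are hypothetical, and it assumes the output format shown above.

# Illustrative helper, not part of VOS or Director: tally "session-count"
# from saved copies of the "show orgs org <org> sessions summary" output.
# File names are hypothetical; adjust them to however you capture the output.
import re
from pathlib import Path

FILES = ["VOS-1.txt", "VOS-2.txt", "VOS-3.txt"]  # one saved output per node

counts = {}
for name in FILES:
    text = Path(name).read_text()
    match = re.search(r"^\s*session-count\s+(\d+)", text, re.MULTILINE)
    counts[name] = int(match.group(1)) if match else 0

total = sum(counts.values())
for name, count in counts.items():
    share = 100 * count / total if total else 0.0
    print(f"{name}: {count} sessions ({share:.1f}% of {total} total)")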
On all VOS nodes, clear all sessions and start fresh by entering the request clear sessions all command.
admin@VOS-1-cli> request clear sessions all
[ok][2025-05-06 16:37:19]
admin@VOS-1-cli>
Supported Software Information
Releases 23.1.1 and later support all content described in this article.
