
Configure Analytics Aggregator Nodes

For supported software information, see Supported Software Information at the end of this article.

Versa Analytics clusters collect, store, and analyze log records for tenants in Versa topologies. Some topologies include multiple Analytics clusters. For example, you might configure multiple Analytics clusters to fulfill the following requirements:

  • Scalability—It is recommended that a single Analytics cluster have a maximum of 2500 VOS devices to ensure easier management and reduce the impact of multiple failures. For deployments of more than 2500 devices, it is recommended that you install multiple Analytics clusters.
  • Compliance—You can install regional clusters to meet local data privacy and compliance requirements. For example, you can deploy multiple clusters to meet the General Data Protection Regulation (GDPR) requirements that data must reside within a specific country or regional boundary.

In topologies that have multiple Analytics clusters, data for a single tenant may reside on two or more clusters. This situation requires you to switch between each cluster to view reports for the tenant and there is no consolidated view.

To provide a consolidated view for tenant-level reports, you can configure an Analytics aggregator node, which can consolidate and aggregate data from multiple standard Analytics clusters. Aggregator nodes generate reports by pulling data individually from each of the standard Analytics clusters, called child clusters.

This article describes how to configure an Analytics aggregator node and how to view the status of the aggregator node and its child clusters.

Note: In Versa Concerto, you can combine a set of aggregator nodes into an aggregator cluster. The cluster provides redundancy and increased performance for Concerto View tab dashboards. You configure individual aggregator nodes as described below, and then configure the aggregator cluster in Concerto. For information about configuring a Concerto aggregator cluster, see Add a Concerto Analytics Aggregator Cluster in Install Concerto.

For high availability (HA), you can configure multiple aggregator nodes. The aggregator node performs no caching or preloading of reports, so reports generated with the same parameters do not differ when viewed from an alternate aggregator node.

The following figure shows a topology in which child Analytics clusters (cluster-1 through cluster-n) reside in multiple locations and some of the clusters may store data for the same tenant. An aggregator node consolidates data for a tenant from each appropriate child cluster when it receives a request from Concerto, custom applications, or the Analytics application GUI, which is accessible from the Analytics tab in the Versa Director GUI or directly by accessing the Analytics IP address from the browser.
 

Global_aggregator_cluster_pic.png

The aggregator node maintains a mapping of cluster, tenant, and appliance so that it can connect to the appropriate clusters to retrieve tenant-level data or to an individual cluster to retrieve appliance-level data.

For tenant-level data, when accessing data for an Analytics report, chart, or table, the aggregator node transfers the data from child clusters and then merges the data. This process can be complex, and it results in the following differences from standard clusters when viewing reports, charts, and tables from an aggregator node:

  • For reports that include statistics metrics, data is summed after the aggregator node receives data from all the child clusters.

  • For summary (pie, column, and bar) charts, in the consolidation process for top-N values, the top M values (where M > N) are determined by the child clusters, and the data is then consolidated on the aggregator node.

  • For top-N time series charts, the top-N summary is computed across the clusters, and for the top-N values, N time series queries are triggered.

  • For data tables (grid data), when the user queries with a specific filter from the aggregator node, each child cluster is queried with the filter. A maximum of 500 rows is retrieved from all the clusters combined, and the rows are sorted to produce the consolidated report. In the following example, the user filters alarm events to display only events with severity major, and the table displays the 500-row maximum.

    Grid_500_Max_Rows.png

  • When viewing log screens, you can select a specific cluster from the appliance drop-down menu to see cluster-specific data. In the following example screenshot, cluster child-1 is selected in the appliance drop-down menu, and the chart displays major alarm event information for child-1.

    Child-1-Alarms-Chart-V2.png

  • Exporting logs from the report preview area generates multiple exported reports, one for each child cluster. For information about exported reports, see Export Logs from the Preview Area in Manage Analytics Reports.
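As a rough illustration of the top-N consolidation described above (the exact merge logic is internal to Analytics; the file names and values below are invented), each child cluster returns its local top values, and the aggregator sums the values per key and keeps the global top N:

```shell
# Hypothetical data: each child cluster's local "top" values for some metric.
printf 'a 10\nb 7\nc 3\n' > child1.txt
printf 'a 5\nd 9\n'       > child2.txt

# Aggregator side: sum the per-key values across children, then keep the
# global top 2 (the real implementation fetches top M > N from each child).
cat child1.txt child2.txt \
  | awk '{sum[$1] += $2} END {for (k in sum) print k, sum[k]}' \
  | sort -k2 -rn | head -n 2
```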

Set Up an Aggregator Node

To set up an aggregator node, you install the standard Analytics image, as described in Headend Installation. Then, after setting up Director nodes as described in Perform Initial Software Configuration, you do the following:

  • Add entries to /etc/hosts files.
  • Run the vansetup.py setup script.
  • Configure DNS.
  • Configure NTP.
  • Optionally, configure SMTP.
  • Configure connectors to child clusters.

Finally, from the Director GUI, you configure a connector to the aggregator node.

 

Configure /etc/hosts Files

Add entries to the /etc/hosts files on the Director, Analytics cluster, and aggregator nodes to facilitate communication between the nodes. The entries can use the northbound or southbound IP address, depending on your topology.

  1. On Director and Analytics cluster nodes, add an entry for the aggregator node to the /etc/hosts file:
admin@Director$ vi /etc/hosts
...
10.40.235.242 Aggregator1
...​​
admin@Analytics$ vi /etc/hosts
...
10.40.235.242 Aggregator1
...
  2. On the aggregator node, add entries for Director and Analytics cluster nodes, and verify the hostname in the /etc/hostname file.
admin@Aggregator1$ vi /etc/hosts
...
10.100.251.15 Director1
10.100.251.16 Director2
10.100.251.10 Analytics1
10.100.251.11 Analytics2
10.100.251.12 Analytics3
10.100.251.13 Analytics4
...

admin@Aggregator1$ cat /etc/hostname
Aggregator1
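Once the entries are in place, a quick sanity check confirms that each peer name actually resolves. The helper below is a minimal sketch; the hostnames shown in the usage comment are the ones from the example above.

```shell
# Resolve each hostname via /etc/hosts (or DNS) and flag any that fail.
check_hosts() {
  for h in "$@"; do
    getent hosts "$h" > /dev/null || { echo "unresolved: $h"; return 1; }
  done
  echo "all resolved"
}
# Example usage on the aggregator node:
# check_hosts Director1 Director2 Analytics1 Analytics2 Analytics3 Analytics4
```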

Note: The /etc/hosts entry for Director nodes must match the values from the SSL certificate. If you are using a wildcard certificate, use the full domain name when creating an entry in the /etc/hosts file. For example, if the CN value is *.utt.com, use hostnames Director1.utt.com and Director2.utt.com. To troubleshoot certificate issues, see Troubleshoot Analytics Access and Certificate Issues.
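To compare a hosts entry against the certificate, you can read the certificate's CN directly. The helper below is a sketch; the certificate path in the usage comment is illustrative, not a fixed Versa location.

```shell
# Print the CN from a PEM certificate so it can be compared against the
# hostname used in /etc/hosts.
cert_cn() {
  openssl x509 -noout -subject -in "$1" | sed -n 's/.*CN *= *//p'
}
# Example (path is illustrative):
# cert_cn /path/to/director-cert.pem   # e.g. prints: *.utt.com
```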

Run the Setup Script

To configure an Analytics aggregator node, from a shell on the node, you modify the vansetup.conf file, which provides input for the vansetup.py setup script, and then you run the vansetup.py script.

To configure an aggregator node:

  1. Log in to a shell on the aggregator node. 
  2. Verify that certificates are present in the /opt/versa/var/van-app/certificates directory:
admin@Aggregator1$ ls /opt/versa/var/van-app/certificates

If the directory is empty, issue the following command:

admin@Aggregator1$ sudo /opt/versa/scripts/van-scripts/van-cert-install.sh
  3. In the /opt/versa/scripts/van-scripts directory, verify that the vansetup.conf file is present. This file is the input file for the vansetup.py program.
admin@Aggregator1$ cd /opt/versa/scripts/van-scripts
admin@Aggregator1$ ls vansetup.conf
vansetup.conf
  4. Edit the vansetup.conf file and change the value of cluster_type to aggregate. The line should appear as follows:

    cluster_type:aggregate

  5. Run the vansetup.py script:
admin@Aggregator1$ sudo vansetup.py
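The vansetup.conf edit can also be done non-interactively. The sketch below operates on a local copy; on the node, the real file is /opt/versa/scripts/van-scripts/vansetup.conf and the edit needs sudo.

```shell
# Dry-run sketch of the cluster_type edit.
CONF=${CONF:-vansetup.conf}
[ -f "$CONF" ] || printf 'cluster_type:analytics\n' > "$CONF"   # sample file for a local dry run

# Set the cluster type to aggregate, then confirm the change.
sed -i 's/^cluster_type:.*/cluster_type:aggregate/' "$CONF"
grep '^cluster_type:' "$CONF"   # prints: cluster_type:aggregate
```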

Configure a DNS Name Server

You configure a DNS name server on the node by editing the base file in the /etc/resolvconf/resolv.conf.d directory. You do this from the shell.

  1. Configure the IP address of the DNS name server:

admin@Aggregator1$ sudo vi /etc/resolvconf/resolv.conf.d/base
nameserver x.x.x.x

For example:

admin@Aggregator1$ sudo vi /etc/resolvconf/resolv.conf.d/base
nameserver 8.8.8.8
  2. Update the configuration file to save the name server entry:

admin@Aggregator1$ sudo resolvconf -u
  3. Verify the configuration. For example:

admin@Aggregator1$ ping www.google.com
PING www.google.com (216.58.214.4) 56(84) bytes of data.
64 bytes from lhr26s05-in-f4.1e100.net (216.58.214.4): icmp_seq=1 ttl=117 time=15.4 ms
64 bytes from lhr26s05-in-f4.1e100.net (216.58.214.4): icmp_seq=2 ttl=117 time=10.5 ms

Configure NTP

To ensure that the time on the node is synchronized, it is recommended that you configure NTP. You do this from the shell.

  1. Display a list of available timezones:

admin@Aggregator1$ timedatectl list-timezones
  2. Set the timezone:
admin@Aggregator1$ sudo timedatectl set-timezone timezone

For example:

admin@Aggregator1$ sudo timedatectl set-timezone Africa/Cairo

Note that the default timezone is America/Los_Angeles. To change the default, edit the /etc/timezone file.

  3. Verify that the timezone is correct:

admin@Aggregator1$ timedatectl
  4. To set the global timezone, edit the /etc/ntp.conf file, and add the appropriate server. For a list of available time servers, see https://www.ntppool.org/zone/@.
  5. To set the local time, add the following line to the /etc/ntp.conf file:
server your.ntp.server.local prefer iburst
  6. Verify that the NTP time server is correct:

 admin@Aggregator1$ ntpq -pn
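In the ntpq -pn output, the peer that the daemon has selected as its synchronization source is marked with an asterisk in the first column. The helper below is a minimal sketch of checking for that marker; the sample peer line is illustrative.

```shell
# Report "synchronized" if any peer line begins with '*', the tally code
# ntpq uses for the currently selected sync source.
ntp_selected() {
  grep -q '^\*' && echo "synchronized" || echo "not synchronized"
}
# Illustrative sample line (on the node, pipe real output: ntpq -pn | ntp_selected)
printf '*203.0.113.10  .GPS.  1 u  45  64  377  1.2  0.01  0.05\n' | ntp_selected   # prints: synchronized
```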

Configure Connectors to Aggregator Nodes

To enable communication between a Director node and the Analytics application on an aggregator node, you configure a connector. When the connector is enabled, the aggregator node name displays on the drop-down menu in the Analytics tab in the Director GUI.

To configure a connector to an aggregator node:

  1. Log in to the Director GUI.
  2. In Director view, select the Administration tab in the top menu bar.
  3. Select Connectors > Analytics Cluster in the left menu bar. The following screen displays.

    Administration_Connectors_AnalyticsCluster.png
     
  4. Click the + Add icon. In the Add Analytics Cluster popup window, enter information for the following fields.

    2023-05-23_16-51-16.png
     
    Field Description
    Cluster Name (Required) Enter a name for the aggregator node. This name combines with the northbound IP name to identify the node in the drop-down menu in the Analytics tab.
    Northbound IP (Table)  
    • Name (Required)
    Enter a name for the northbound interface of the aggregator node.
    • Northbound IP (Required)
    Enter the IP address of the northbound interface of the aggregator node.
    • Add icon
    Click to add the northbound IP address.
    Port Select the port number to use for the northbound connection.
  5. Click OK.

Configure SMTP

To allow the node to send reports and email notifications, configure the Simple Mail Transfer Protocol (SMTP) for the node.

To configure SMTP:

  1. In Director view, select the Analytics tab in the top menu bar.
  2. Hover over the Analytics tab and then select the aggregator node in the drop-down menu.
  3. Select Admin > Configurations > Settings in the left menu bar.
  4. Select the Email Configuration tab. The following screen displays.

    Email_configuration.png
     
  5. Enter information for the following fields.
     
    Field Description
    SMTP Host Enter the name of the SMTP server.
    SMTP Port Enter the port number of the SMTP server.
    Username Enter the username to use to connect to the SMTP server.
    Password Enter the password to use to connect to the SMTP server.
    Sender Email Enter the email address to place in the From: field of the email.
    System Email Notifications Enter the email address to which to send Analytics monitoring notifications. If you enter more than one email address, separate them with commas.
    SSL, TLS Click to enable SSL or TLS security on the email connection.
  6. Click Save Configuration.

Configure Connectors to Child Clusters

To enable communication between the aggregator node and its child clusters, configure a connector from the aggregator node to each child Analytics cluster. The connector uses a secure HTTPS connection and its bandwidth is primarily used to process REST requests and responses. A bandwidth of approximately 200 Mbps uplink/downlink is sufficient for this interface.

To configure a connector to a child cluster:

  1. In Director view, select the Analytics tab.
  2. Hover over the Analytics tab, and then select the aggregator node in the drop-down menu.
  3. Select Administration > Configurations > Settings in the left menu bar.
  4. In the main pane, select the Connectors tab. The following screen displays.

    Display_connectors.png
     
  5. Click Add Connector. The Connector popup window displays.

    Add_Aggregator_Connector.png
     
  6. Enter information for the following fields.
     
    Field Description

    Name

    Enter a name for the connector.
    Hostname

    Enter the IP address or FQDN of the child Analytics node. To ensure load balancing across the available nodes, you can configure multiple IP addresses or FQDNs, separated by commas.

    Username Enter the username used to log in to the child Analytics node.
    Password Enter the password.
  7. Click Save. This creates a connector that automatically connects to the child cluster. The connector credentials are saved securely on the aggregator node.
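Before adding a connector, you may want to confirm that the aggregator node can reach the child cluster over HTTPS. The sketch below tests plain TCP reachability only; port 443 is an assumption, so adjust it for your deployment.

```shell
# Succeed if a TCP connection to host:port (default 443) opens within 3 seconds.
https_reachable() {
  timeout 3 bash -c "exec 3<>/dev/tcp/$1/${2:-443}" 2>/dev/null \
    && echo "reachable" || echo "unreachable"
}
# Example usage from the aggregator node:
# https_reachable Analytics1
```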

Verify the Configuration

To verify the configuration, you access the aggregator node from the Analytics tab in the Director GUI, and then display the cluster status. To troubleshoot Analytics access issues, see Troubleshoot Analytics Access and Certificate Issues.

To verify the configuration:

  1. In Director view, select the Analytics tab.
  2. Hover over the Analytics tab, and then select the aggregator node in the drop-down menu.
  3. Select Administration > System Status > Status in the left menu bar. The Status screen displays.

    aggregator_cluster_status.png
     
  4. Verify that the nodes in the child Analytics clusters display. Child cluster nodes display their cluster name in parentheses after their hostname.

Supported Software Information

Releases 22.1.1 and later support all content described in this article.