diff --git a/content/cumulus-netq-411/_index.md b/content/cumulus-netq-411/_index.md index 737b43042a..1eb40656be 100644 --- a/content/cumulus-netq-411/_index.md +++ b/content/cumulus-netq-411/_index.md @@ -9,6 +9,7 @@ cascade: version: "4.11" imgData: cumulus-netq siteSlug: cumulus-netq + old: true --- NVIDIA® NetQ™ is a network operations tool set that provides visibility into your overlay and underlay networks, enabling troubleshooting in real-time. NetQ delivers data and statistics about the health of your data center—from the container, virtual machine, or host, all the way to the switch and port. NetQ correlates configuration and operational status, and tracks state changes while simplifying management for the entire Linux-based data center. With NetQ, network operations change from a manual, reactive, node-by-node approach to an automated, informed, and agile one. Visit {{}} to learn more. diff --git a/content/cumulus-netq-412/Installation-Management/Backup-and-Restore-NetQ.md b/content/cumulus-netq-412/Installation-Management/Backup-and-Restore-NetQ.md index bff7ffb4a5..e3ea873ec6 100644 --- a/content/cumulus-netq-412/Installation-Management/Backup-and-Restore-NetQ.md +++ b/content/cumulus-netq-412/Installation-Management/Backup-and-Restore-NetQ.md @@ -75,19 +75,19 @@ If you restore NetQ data to a server with an IP address that is different from t {{}} ``` -cumulus@netq-appliance:~$ sudo vm-backuprestore.sh --restore --backupfile /home/cumulus/backup-netq-standalone-onprem-4.9.0-2029-02-06_12_37_29_UTC.tar +cumulus@netq-appliance:~$ sudo vm-backuprestore.sh --restore --backupfile /home/cumulus/backup-netq-standalone-onprem-4.10.0-2029-02-06_12_37_29_UTC.tar Mon Feb 6 12:39:57 2024 - Please find detailed logs at: /var/log/vm-backuprestore.log Mon Feb 6 12:39:57 2024 - Starting restore of data Mon Feb 6 12:39:57 2024 - Extracting release file from backup tar Mon Feb 6 12:39:57 2024 - Cleaning the system -Mon Feb 6 12:39:57 2024 - Restoring data from tarball /home/cumulus/backup-netq-standalone-onprem-4.9.0-2024-02-06_12_37_29_UTC.tar +Mon Feb 6 12:39:57 2024 - Restoring data from tarball /home/cumulus/backup-netq-standalone-onprem-4.10.0-2024-02-06_12_37_29_UTC.tar Data restored successfully Please follow the below instructions to bootstrap the cluster The config key restored is EhVuZXRxLWVuZHBvaW50LWdhdGVfYXkYsagDIix2OUJhMUpyekMwSHBBaitUdTVDaTRvbVJDR3F6Qlo4VHhZRytjUUhLZGJRPQ==, alternately the config key is available in file /tmp/config-key Pass the config key while bootstrapping: - Example(standalone): netq install standalone full interface eth0 bundle /mnt/installables/NetQ-4.11.0.tgz config-key EhVuZXRxLWVuZHBvaW50LWdhdGV3YXkYsagDIix2OUJhMUpyekMwSHBbaitUdTVDaTRvbVJDR3F6Qlo4VHhZRytjUUhLZGJRPQ== - Example(cluster): netq install cluster full interface eth0 bundle /mnt/installables/NetQ-4.11.0.tgz config-key EhVuZXRxLWVuZHBvaW50LWdhdGV3YXkYsagDIix2OUJhMUpyekMwSHBbaitUdTVDaTRvbVJDR3F6Qlo4VHhZRytjUUhLZGJRPQ== + Example(standalone): netq install standalone full interface eth0 bundle /mnt/installables/NetQ-4.12.0.tgz config-key EhVuZXRxLWVuZHBvaW50LWdhdGV3YXkYsagDIix2OUJhMUpyekMwSHBbaitUdTVDaTRvbVJDR3F6Qlo4VHhZRytjUUhLZGJRPQ== + Example(cluster): netq install cluster full interface eth0 bundle /mnt/installables/NetQ-4.12.0.tgz config-key EhVuZXRxLWVuZHBvaW50LWdhdGV3YXkYsagDIix2OUJhMUpyekMwSHBbaitUdTVDaTRvbVJDR3F6Qlo4VHhZRytjUUhLZGJRPQ== Alternately you can setup config-key post bootstrap in case you missed to pass it during bootstrap Example(standalone): netq install standalone 
activate-job config-key EhVuZXRxLWVuZHBvaW50LWdhdGV3YXkYsagDIix2OUJhMUpyekMwSHBbaitUdTVDaTRvbVJDR3F6Qlo4VHhZRytjUUhLZGJRPQ== Example(cluster): netq install cluster activate-job config-key EhVuZXRxLWVuZHBvaW50LWdhdGV3YXkYsagDIix2OUJhMUpyekMwSHBbaitUdTVDaTRvbVJDR3F6Qlo4VHhZRytjUUhLZGJRPQ== diff --git a/content/cumulus-netq-412/Installation-Management/Install-NetQ/Before-You-Install.md b/content/cumulus-netq-412/Installation-Management/Install-NetQ/Before-You-Install.md index bf6c53378a..3c68aef8f5 100644 --- a/content/cumulus-netq-412/Installation-Management/Install-NetQ/Before-You-Install.md +++ b/content/cumulus-netq-412/Installation-Management/Install-NetQ/Before-You-Install.md @@ -14,13 +14,15 @@ Consider the following deployment options and requirements before you install th | Single Server | High-Availability Cluster| High-Availability Scale Cluster | | --- | --- | --- | | On-premises or cloud | On-premises or cloud | On-premises only | -| Low scale| Medium scale| High scale| +| Network size: small| Network size: medium| Network size: large| | KVM or VMware hypervisor | KVM or VMware hypervisor | KVM or VMware hypervisor | | System requirements

On-premises: 16 virtual CPUs, 64GB RAM, 500GB SSD disk

Cloud: 4 virtual CPUs, 8GB RAM, 64GB SSD disk | System requirements (per node)

On-premises: 16 virtual CPUs, 64GB RAM, 500GB SSD disk

Cloud: 4 virtual CPUs, 8GB RAM, 64GB SSD disk | System requirements (per node)

    On-premises: 48 virtual CPUs, 512GB RAM, 3.2TB SSD disk| -| All features supported | All features supported| No support for: Limited support for:| +| All features supported | All features supported| No support for:| + +*Exact device support counts can vary based on multiple factors, such as the number of links, routes, and IP addresses in your network. Contact NVIDIA for assistance in selecting the appropriate deployment model for your network. -NetQ is also available through NVIDIA Base Command Manager. To get started, refer to the {{}}. -## Deployment Type: On-premises or Cloud + +## Deployment Type: On-Premises or Cloud **On-premises deployments** are hosted at your location and require the in-house skill set to install, configure, back up, and maintain NetQ. This model is a good choice if you want very limited or no access to the internet from switches and hosts in your network. @@ -30,13 +32,17 @@ In all deployment models, the NetQ Agents reside on the switches and hosts they ## Server Arrangement: Single or Cluster -A **single server** is easier to set up, configure, and manage, but can limit your ability to scale your network monitoring quickly. Deploying multiple servers is more complicated, but you limit potential downtime and increase availability by having more than one server that can run the software and store the data. Select the standalone, single-server arrangements for smaller, simpler deployments. +A **single server** is easier to set up, configure, and manage, but limits your ability to scale your network monitoring. Deploying multiple servers allows you to limit potential downtime and increase availability by having more than one server that can run the software and store the data. Select the standalone, single-server arrangement for smaller, simpler deployments. -Select the **high-availability cluster** deployment for greater device support and high availability for your network. The clustering implementation comprises three servers: one master and two workers. NetQ supports high availability server-cluster deployments using a virtual IP address. Even if the master node fails, NetQ services remain operational. However, keep in mind that the master hosts the Kubernetes control plane so anything that requires connectivity with the Kubernetes cluster—such as upgrading NetQ or rescheduling pods to other workers if a worker goes down—will not work. +The **high-availability cluster** deployment supports a greater number of switches and provides high availability for your network. The clustering implementation comprises three servers: one master and two worker nodes. NetQ supports high availability server-cluster deployments using a virtual IP address. Even if the master node fails, NetQ services remain operational. However, keep in mind that the master hosts the Kubernetes control plane so anything that requires connectivity with the Kubernetes cluster—such as upgrading NetQ or rescheduling pods to other workers if a worker goes down—will not work. During the installation process, you configure a virtual IP address that enables redundancy for the Kubernetes control plane. In this configuration, the majority of nodes must be operational for NetQ to function. For example, a three-node cluster can tolerate a one-node failure, but not a two-node failure. For more information, refer to the {{}}. -The **high-availability scale cluster** deployment provides support for the greatest number of devices and provides an extensible framework for greater scalability. 
    
+The **high-availability scale cluster** deployment provides the same benefits as the high-availability cluster deployment, but supports larger networks of up to 1,000 switches. NVIDIA recommends this option for networks that have over 100 switches and at least 100 interfaces per switch. It offers the highest level of scalability, allowing you to adjust NetQ's network monitoring capacity as your network expands. + +Tabular data in the UI is limited to 10,000 rows. For large networks, NVIDIA recommends downloading and exporting the tabular data as a CSV or JSON file and opening it in a spreadsheet program for further analysis. Refer to the installation overview table at the beginning of this section for additional HA scale cluster deployment support information. + + ### Cluster Deployments and Load Balancers @@ -44,6 +50,10 @@ As an alternative to the three-node cluster deployment with a virtual IP address However, you need to be mindful of where you {{}} for the NetQ UI (port 443); otherwise, you cannot access the NetQ UI. If you are using a load balancer in your deployment, NVIDIA recommends that you install the certificates directly on the load balancer for SSL offloading. However, if you install the certificates on the master node, then configure the load balancer to allow for SSL passthrough. +## Base Command Manager + +NetQ is also available through NVIDIA Base Command Manager. To get started, refer to the {{}}. + ## Next Steps After you've decided on your deployment type, you're ready to {{}}. \ No newline at end of file diff --git a/content/cumulus-netq-412/Installation-Management/Install-NetQ/HA-scale-cluster.md b/content/cumulus-netq-412/Installation-Management/Install-NetQ/HA-scale-cluster.md index 20999b8a5b..2aae35ba70 100644 --- a/content/cumulus-netq-412/Installation-Management/Install-NetQ/HA-scale-cluster.md +++ b/content/cumulus-netq-412/Installation-Management/Install-NetQ/HA-scale-cluster.md @@ -5,16 +5,16 @@ weight: 227 toc: 5 bookhidden: true --- -Follow these steps to set up and configure your VM on a cluster of servers in an on-premises deployment. First configure the VM on the master node, and then configure the VM on *each* additional node. NVIDIA recommends installing the virtual machines on different servers to increase redundancy in the event of a hardware failure. +Follow these steps to set up and configure your VM on a cluster of servers in an on-premises deployment. First configure the VM on the master node, and then configure the VM on each additional node. NVIDIA recommends installing the virtual machines on different servers to increase redundancy in the event of a hardware failure. {{%notice note%}} -NetQ 4.12.0 only supports a 3-node HA scale cluster consisting of one master and 2 additional HA worker nodes. +NetQ 4.12.0 supports a 3-node HA scale cluster consisting of 1 master and 2 additional HA worker nodes. {{%/notice%}} - - - ## System Requirements -Verify that each node in your cluster meets the VM requirements. +Verify that *each node* in your cluster meets the VM requirements. | Resource | Minimum Requirements | | :--- | :--- | @@ -199,9 +199,6 @@ cumulus@netq-server:~$ vim /tmp/cluster-install-config.json { "ip": "" }, - { - "ip": "" - }, { "ip": "" } @@ -215,7 +212,34 @@ cumulus@netq-server:~$ vim /tmp/cluster-install-config.json | `cluster-vip` | The cluster virtual IP address must be an unused IP address allocated from the same subnet assigned to the default interface for your master and worker nodes. 
| | `master-ip` | The IP address assigned to the interface on your master node used for NetQ connectivity. | | `is-ipv6` | Set the value to `true` if your network connectivity and node address assignments are IPv6. | -| `ha-nodes` | The IP addresses of each of the HA nodes in your cluster, including the `master-ip`. | +| `ha-nodes` | The IP addresses of each of the HA nodes in your cluster. | + +{{%notice note%}} + +NetQ uses the 10.244.0.0/16 (`pod-ip-range`) and 10.96.0.0/16 (`service-ip-range`) networks for internal communication by default. If you are using these networks, you must override each range by specifying new subnets for these parameters in the cluster configuration JSON file: + +``` +cumulus@netq-server:~$ vim /tmp/cluster-install-config.json +{ + "version": "v2.0", + "interface": "eth0", + "cluster-vip": "10.176.235.101", + "master-ip": "10.176.235.50", + "is-ipv6": false, + "pod-ip-range": "192.168.0.1/32", + "service-ip-range": "172.168.0.1/32", + "ha-nodes": [ + { + "ip": "10.176.235.51" + }, + { + "ip": "10.176.235.52" + } + ] +} +``` + +{{%/notice%}} {{< /tab >}} {{< tab "Completed JSON Example ">}} @@ -229,9 +253,6 @@ cumulus@netq-server:~$ vim /tmp/cluster-install-config.json "master-ip": "10.176.235.50", "is-ipv6": false, "ha-nodes": [ - { - "ip": "10.176.235.50" - }, { "ip": "10.176.235.51" }, @@ -247,7 +268,7 @@ cumulus@netq-server:~$ vim /tmp/cluster-install-config.json | `cluster-vip` | The cluster virtual IP address must be an unused IP address allocated from the same subnet assigned to the default interface for your master and worker nodes. | | `master-ip` | The IP address assigned to the interface on your master node used for NetQ connectivity. | | `is-ipv6` | Set the value to `true` if your network connectivity and node address assignments are IPv6. | -| `ha-nodes` | The IP addresses of each of the HA nodes in your cluster, including the `master-ip`. | +| `ha-nodes` | The IP addresses of each of the HA nodes in your cluster. | {{< /tab >}} {{< /tabs >}} @@ -258,15 +279,6 @@ cumulus@netq-server:~$ vim /tmp/cluster-install-config.json cumulus@:~$ netq install cluster bundle /mnt/installables/NetQ-4.12.0.tgz /tmp/cluster-install-config.json ``` - -

If this step fails for any reason, run netq bootstrap reset and then try again.

## Verify Installation Status diff --git a/content/cumulus-netq-412/Installation-Management/Install-NetQ/Install-NetQ-CLI.md b/content/cumulus-netq-412/Installation-Management/Install-NetQ/Install-NetQ-CLI.md index 9b3f9a1762..05a2a330ae 100644 --- a/content/cumulus-netq-412/Installation-Management/Install-NetQ/Install-NetQ-CLI.md +++ b/content/cumulus-netq-412/Installation-Management/Install-NetQ/Install-NetQ-CLI.md @@ -189,7 +189,7 @@ deb https://apps3.cumulusnetworks.com/repos/deb CumulusLinux-4 netq-latest ``` {{}} -You can specify a NetQ CLI version in the repository configuration. The following example shows the repository configuration to retrieve NetQ CLI v4.3:
deb https://apps3.cumulusnetworks.com/repos/deb CumulusLinux-4 netq-4.3
+You can specify a NetQ CLI version in the repository configuration. The following example shows the repository configuration to retrieve NetQ CLI v4.12:
deb https://apps3.cumulusnetworks.com/repos/deb CumulusLinux-4 netq-4.12
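    After updating the repository configuration, refresh the package index and install the CLI package so the pinned version takes effect. A minimal sketch, assuming the standard apt workflow on the switch:
    cumulus@switch:~$ sudo apt-get update
    cumulus@switch:~$ sudo apt-get install netq-apps
    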
{{
}} @@ -206,7 +206,7 @@ You can specify a NetQ CLI version in the repository configuration. The followin cumulus@switch:~$ dpkg-query -W -f '${Package}\t${Version}\n' netq-apps ``` -{{}} +{{}} 4. Continue with NetQ CLI configuration in the next section. @@ -227,7 +227,7 @@ You can specify a NetQ CLI version in the repository configuration. The followin root@ubuntu:~# dpkg-query -W -f '${Package}\t${Version}\n' netq-apps ``` -{{}} +{{}} 3. Continue with NetQ CLI configuration in the next section. diff --git a/content/cumulus-netq-412/Installation-Management/Install-NetQ/Install-NetQ-System.md b/content/cumulus-netq-412/Installation-Management/Install-NetQ/Install-NetQ-System.md index 663fb2c421..6cb8e4f1bf 100644 --- a/content/cumulus-netq-412/Installation-Management/Install-NetQ/Install-NetQ-System.md +++ b/content/cumulus-netq-412/Installation-Management/Install-NetQ/Install-NetQ-System.md @@ -6,6 +6,8 @@ toc: 3 --- You can install NetQ either on your premises or as a remote, cloud solution. If you are unsure which option is best for your network, refer to {{}}. +For installation troubleshooting, see {{}}. + ## On-Premises Deployment Options -| Server Arrangement | Hypervisor | Requirements & Installation | +| Server Arrangement           | Hypervisor | Requirements & Installation | | :--- | --- | :---: | | Single server | KVM or VMware | {{}} | -| High-availability cluster | KVM or VMware | {{}}| +| High-availability cluster| KVM or VMware | {{}}| ## Base Command Manager diff --git "a/content/cumulus-netq-412/Installation-Management/Install-NetQ/In\342\200\214stall-NetQ-Agents.md" "b/content/cumulus-netq-412/Installation-Management/Install-NetQ/In\342\200\214stall-NetQ-Agents.md" index f210228eda..f6579a1bb7 100644 --- "a/content/cumulus-netq-412/Installation-Management/Install-NetQ/In\342\200\214stall-NetQ-Agents.md" +++ "b/content/cumulus-netq-412/Installation-Management/Install-NetQ/In\342\200\214stall-NetQ-Agents.md" @@ -8,7 +8,7 @@ toc: 4 After installing the NetQ software, you should install the NetQ Agents on each switch you want to monitor. You can install NetQ Agents on switches and servers running: - Cumulus Linux 5.0.0 or later (Spectrum switches) -- Ubuntu 20.04, 22.04 +- Ubuntu 22.04, 20.04 ## Prepare for NetQ Agent Installation @@ -73,7 +73,7 @@ deb https://apps3.cumulusnetworks.com/repos/deb CumulusLinux-4 netq-latest ``` {{}} -You can specify a NetQ Agent version in the repository configuration. The following example shows the repository configuration to retrieve NetQ Agent 4.9:
deb https://apps3.cumulusnetworks.com/repos/deb CumulusLinux-4 netq-4.9
+You can specify a NetQ Agent version in the repository configuration. The following example shows the repository configuration to retrieve NetQ Agent 4.12:
deb https://apps3.cumulusnetworks.com/repos/deb CumulusLinux-4 netq-4.12
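    After updating the repository configuration, refresh the package index and install the pinned agent package. A minimal sketch, assuming the standard apt workflow on the switch:
    cumulus@switch:~$ sudo apt-get update
    cumulus@switch:~$ sudo apt-get install netq-agent
    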
{{
}} 2. Add the `apps3.cumulusnetworks.com` authentication key to Cumulus Linux: @@ -232,7 +232,7 @@ Cumulus Linux 4.4 and later includes the `netq-agent` package by default. To ins cumulus@switch:~$ dpkg-query -W -f '${Package}\t${Version}\n' netq-agent ``` - {{}} + {{}} 3. Restart `rsyslog` so it sends log files to the correct destination. @@ -261,7 +261,7 @@ To install the NetQ Agent: root@ubuntu:~# dpkg-query -W -f '${Package}\t${Version}\n' netq-agent ``` - {{}} + {{}} 3. Restart `rsyslog` so it sends log files to the correct destination. diff --git a/content/cumulus-netq-412/Installation-Management/Install-NetQ/KVM-Setup-clstr-cld.md b/content/cumulus-netq-412/Installation-Management/Install-NetQ/KVM-Setup-clstr-cld.md index a827087d9e..daa1276c1d 100644 --- a/content/cumulus-netq-412/Installation-Management/Install-NetQ/KVM-Setup-clstr-cld.md +++ b/content/cumulus-netq-412/Installation-Management/Install-NetQ/KVM-Setup-clstr-cld.md @@ -17,11 +17,11 @@ Follow these steps to set up and configure your VM on a cluster of servers in a 3. Download the NetQ image. - {{}} + {{}} 4. Set up and configure your VM. - {{}} + {{}} 5. Log in to the VM and change the password. @@ -53,7 +53,7 @@ Make a note of the private IP address you assign to the worker node. You need it 13. Install and activate the NetQ software using the CLI: -{{}} +{{}} After NetQ is installed, you can {{}} from your browser. diff --git a/content/cumulus-netq-412/Installation-Management/Install-NetQ/KVM-Setup-clstr-op-ha.md b/content/cumulus-netq-412/Installation-Management/Install-NetQ/KVM-Setup-clstr-op-ha.md index f67baaa242..90ab54aa5e 100644 --- a/content/cumulus-netq-412/Installation-Management/Install-NetQ/KVM-Setup-clstr-op-ha.md +++ b/content/cumulus-netq-412/Installation-Management/Install-NetQ/KVM-Setup-clstr-op-ha.md @@ -19,11 +19,11 @@ Follow these steps to set up and configure your VM on a cluster of servers in an 3. Download the NetQ image. - {{}} + {{}} 4. Set up and configure your VM. - {{}} + {{}} 5. Log in to the VM and change the password. @@ -54,6 +54,6 @@ Make a note of the private IP address you assign to the worker node. You need it 13. Install and activate the NetQ software using the CLI: -{{}} +{{}} After NetQ is installed, you can {{}} from your browser. diff --git a/content/cumulus-netq-412/Installation-Management/Install-NetQ/KVM-Setup-sngl-cld.md b/content/cumulus-netq-412/Installation-Management/Install-NetQ/KVM-Setup-sngl-cld.md index 4beed6fcdc..cabebb6e01 100644 --- a/content/cumulus-netq-412/Installation-Management/Install-NetQ/KVM-Setup-sngl-cld.md +++ b/content/cumulus-netq-412/Installation-Management/Install-NetQ/KVM-Setup-sngl-cld.md @@ -17,11 +17,11 @@ Follow these steps to set up and configure your VM on a single server in a cloud 3. Download the NetQ image. - {{}} + {{}} 4. Set up and configure your VM. - {{}} + {{}} 5. Log in to the VM and change the password. @@ -37,6 +37,6 @@ Follow these steps to set up and configure your VM on a single server in a cloud 8. Install and activate the NetQ software using the CLI: -{{}} +{{}} After NetQ is installed, you can {{}} from your browser. 
\ No newline at end of file diff --git a/content/cumulus-netq-412/Installation-Management/Install-NetQ/KVM-Setup-sngl-op.md b/content/cumulus-netq-412/Installation-Management/Install-NetQ/KVM-Setup-sngl-op.md index 308c69a9c9..5425788d3b 100644 --- a/content/cumulus-netq-412/Installation-Management/Install-NetQ/KVM-Setup-sngl-op.md +++ b/content/cumulus-netq-412/Installation-Management/Install-NetQ/KVM-Setup-sngl-op.md @@ -17,11 +17,11 @@ Follow these steps to set up and configure your VM on a single server in an on-p 3. Download the NetQ image. - {{}} + {{}} 4. Set up and configure your VM. - {{}} + {{}} 5. Log in to the VM and change the password. @@ -37,6 +37,6 @@ Follow these steps to set up and configure your VM on a single server in an on-p 8. Run the install command on your NetQ server: -{{}} +{{}} After NetQ is installed, you can {{}} from your browser. \ No newline at end of file diff --git a/content/cumulus-netq-412/Installation-Management/Install-NetQ/VMware-Setup-clstr-cld.md b/content/cumulus-netq-412/Installation-Management/Install-NetQ/VMware-Setup-clstr-cld.md index bfdda71f70..847dc2f45b 100644 --- a/content/cumulus-netq-412/Installation-Management/Install-NetQ/VMware-Setup-clstr-cld.md +++ b/content/cumulus-netq-412/Installation-Management/Install-NetQ/VMware-Setup-clstr-cld.md @@ -19,11 +19,11 @@ Follow these steps to set up and configure your VM on a cluster of servers in a 3. Download the NetQ image. - {{}} + {{}} 4. Set up and configure your VM. - {{}} + {{}} 5. Log in to the VM and change the password. @@ -55,6 +55,6 @@ Make a note of the private IP address you assign to the worker node. You will ne 13. Install and activate the NetQ software using the CLI: - {{}} + {{}} After NetQ is installed, you can {{}} from your browser. \ No newline at end of file diff --git a/content/cumulus-netq-412/Installation-Management/Install-NetQ/VMware-Setup-clstr-op-ha.md b/content/cumulus-netq-412/Installation-Management/Install-NetQ/VMware-Setup-clstr-op-ha.md index 297e9b8db9..afc0e0f694 100644 --- a/content/cumulus-netq-412/Installation-Management/Install-NetQ/VMware-Setup-clstr-op-ha.md +++ b/content/cumulus-netq-412/Installation-Management/Install-NetQ/VMware-Setup-clstr-op-ha.md @@ -19,11 +19,11 @@ Follow these steps to set up and configure your VM cluster for an on-premises de 3. Download the NetQ image. - {{}} + {{}} 4. Set up and configure your VM. - {{}} + {{}} 5. Log in to the VM and change the password. @@ -55,6 +55,6 @@ Make a note of the private IP address you assign to the worker node. You need it 13. Install and activate the NetQ software using the CLI: -{{}} +{{}} After NetQ is installed, you can {{}} from your browser. \ No newline at end of file diff --git a/content/cumulus-netq-412/Installation-Management/Install-NetQ/VMware-Setup-sngl-cld.md b/content/cumulus-netq-412/Installation-Management/Install-NetQ/VMware-Setup-sngl-cld.md index 59c925b929..e76ad5d0a9 100644 --- a/content/cumulus-netq-412/Installation-Management/Install-NetQ/VMware-Setup-sngl-cld.md +++ b/content/cumulus-netq-412/Installation-Management/Install-NetQ/VMware-Setup-sngl-cld.md @@ -15,11 +15,11 @@ Follow these steps to set up and configure your VM for a cloud deployment: 3. Download the NetQ image. - {{}} + {{}} 4. Set up and configure your VM. - {{}} + {{}} 5. Log in to the VM and change the password. @@ -35,6 +35,6 @@ Follow these steps to set up and configure your VM for a cloud deployment: 8. 
Install and activate the NetQ software using the CLI: -{{}} +{{}} After NetQ is installed, you can {{}} from your browser. \ No newline at end of file diff --git a/content/cumulus-netq-412/Installation-Management/Install-NetQ/VMware-Setup-sngl-op.md b/content/cumulus-netq-412/Installation-Management/Install-NetQ/VMware-Setup-sngl-op.md index 78c2536f0a..008f0e35aa 100644 --- a/content/cumulus-netq-412/Installation-Management/Install-NetQ/VMware-Setup-sngl-op.md +++ b/content/cumulus-netq-412/Installation-Management/Install-NetQ/VMware-Setup-sngl-op.md @@ -17,11 +17,11 @@ Follow these steps to set up and configure your VM on a single server in an on-p 3. Download the NetQ image. - {{}} + {{}} 4. Set up and configure your VM. - {{}} + {{}} 5. Log in to the VM and change the password. @@ -37,6 +37,6 @@ Follow these steps to set up and configure your VM on a single server in an on-p 8. Install and activate the NetQ software using the CLI: - {{}} + {{}} After NetQ is installed, you can {{}} from your browser. \ No newline at end of file diff --git a/content/cumulus-netq-412/Installation-Management/Upgrade-NetQ/Upgrade-Agents.md b/content/cumulus-netq-412/Installation-Management/Upgrade-NetQ/Upgrade-Agents.md index 187cec4e68..1c45e71dec 100644 --- a/content/cumulus-netq-412/Installation-Management/Upgrade-NetQ/Upgrade-Agents.md +++ b/content/cumulus-netq-412/Installation-Management/Upgrade-NetQ/Upgrade-Agents.md @@ -46,7 +46,7 @@ Run the following command to view the NetQ Agent version. cumulus@switch:~$ dpkg-query -W -f '${Package}\t${Version}\n' netq-agent ``` -{{}} +{{}} {{}} @@ -56,7 +56,7 @@ cumulus@switch:~$ dpkg-query -W -f '${Package}\t${Version}\n' netq-agent root@ubuntu:~# dpkg-query -W -f '${Package}\t${Version}\n' netq-agent ``` -{{}} +{{}} {{}} diff --git a/content/cumulus-netq-412/Installation-Management/Upgrade-NetQ/Upgrade-System.md b/content/cumulus-netq-412/Installation-Management/Upgrade-NetQ/Upgrade-System.md index 903e3249d2..d804865cbd 100644 --- a/content/cumulus-netq-412/Installation-Management/Upgrade-NetQ/Upgrade-System.md +++ b/content/cumulus-netq-412/Installation-Management/Upgrade-NetQ/Upgrade-System.md @@ -22,7 +22,7 @@ cumulus@masternode:~$ /home/cumulus# kubectl get pods|grep admin netq-app-admin-masternode 1/1 Running 0 15m ``` -If the output of this command displays errors or returns an empty response, you will not be able to upgrade NetQ. Try waiting and then re-run the command. If after several attempts the command continues to fail, reset the NetQ server with `netq bootstrap reset keep-db` and perform a fresh installation of the tarball with the appropriate {{}} command for your deployment type. +If the output of this command displays errors or returns an empty response, you will not be able to upgrade NetQ. Try waiting and then re-run the command. If after several attempts the command continues to fail, reset the NetQ server with `netq bootstrap reset keep-db` and perform a fresh installation of the tarball with the appropriate {{}} command for your deployment type. For more information, refer to {{}}. 2. {{}}. This is an optional step for on-premises deployments. NVIDIA automatically creates backups for NetQ cloud deployments. @@ -60,11 +60,11 @@ If the output of this command displays errors or returns an empty response, you ... Fetched 39.8 MB in 3s (13.5 MB/s) ... - Unpacking netq-agent (4.11.0-ub20.04u48~1722675045.0390e155f) ... + Unpacking netq-agent (4.12.0-ub20.04u49~1731404061.ffa541ea6) ... ... 
- Unpacking netq-apps (4.11.0-ub20.04u48~1722675045.0390e155f) ... - Setting up netq-apps (4.11.0-ub20.04u48~1722675045.0390e155f) ... - Setting up netq-agent (4.11.0-ub20.04u48~1722675045.0390e155f) ... + Unpacking netq-apps (4.12.0-ub20.04u49~1731404061.ffa541ea6) ... + Setting up netq-apps (4.12.0-ub20.04u49~1731404061.ffa541ea6) ... + Setting up netq-agent (4.12.0-ub20.04u49~1731404061.ffa541ea6) ... Processing triggers for rsyslog (8.32.0-1ubuntu4) ... Processing triggers for man-db (2.8.3-2ubuntu0.1) ... ``` @@ -74,7 +74,7 @@ If the output of this command displays errors or returns an empty response, you 1. Download the upgrade tarball. - {{}} + {{}} 2. Copy the tarball to the `/mnt/installables/` directory on your NetQ VM. diff --git a/content/cumulus-netq-412/Installation-Management/Upgrade-NetQ/_index.md b/content/cumulus-netq-412/Installation-Management/Upgrade-NetQ/_index.md index cb65197084..a27b9c9495 100644 --- a/content/cumulus-netq-412/Installation-Management/Upgrade-NetQ/_index.md +++ b/content/cumulus-netq-412/Installation-Management/Upgrade-NetQ/_index.md @@ -6,8 +6,6 @@ toc: 3 --- This section describes how to upgrade from your current installation to NetQ {{}}. Refer to the {{}} before you upgrade. -You must upgrade your NetQ on-premises or cloud virtual machines. Upgrading NetQ Agents is optional, but recommended. If you want access to new and updated commands, you can upgrade the CLI on your physical servers or VMs, and monitored switches and hosts as well. - Follow these steps to upgrade your on-premises or cloud deployment. Note that these steps are sequential; **you must upgrade your NetQ virtual machine before you upgrade the NetQ Agents.** 1. {{}} diff --git a/content/cumulus-netq-412/Lifecycle-Management/CL-Upgrade-LCM.md b/content/cumulus-netq-412/Lifecycle-Management/CL-Upgrade-LCM.md index 57511ba4bb..403f52d1a0 100644 --- a/content/cumulus-netq-412/Lifecycle-Management/CL-Upgrade-LCM.md +++ b/content/cumulus-netq-412/Lifecycle-Management/CL-Upgrade-LCM.md @@ -41,10 +41,10 @@ Before you upgrade, make sure you have the appropriate files and credentials: {{}} If you are upgrading to Cumulus Linux 5.9 or later and select the option to roll back to a previous Cumulus Linux version (for unsuccessful upgrade attempts), you must upload a total of four netq-apps and netq-agents packages to NetQ. Cumulus Linux 5.9 or later packages include cld12. Prior versions of Cumulus Linux include cl4u.

For example, you must upload the following packages for amd64 architecture: -- netq-agent_4.11.0-cl4u48~1722675371.0390e155f_amd64.deb -- netq-apps_4.11.0-cl4u48~1722675371.0390e155f_amd64.deb -- netq-agent_4.11.0-cld12u48~1722675256.0390e155f_amd64.deb -- netq-apps_4.11.0-cld12u48~1722675256.0390e155f_amd64.deb +- netq-agent_4.12.0-cl4u49~1731404368.ffa541ea6_amd64.deb +- netq-apps_4.12.0-cl4u49~1731404368.ffa541ea6_amd64.deb +- netq-agent_4.12.0-cld12u49~1731404238.ffa541ea6_amd64.deb +- netq-apps_4.12.0-cld12u49~1731404238.ffa541ea6_amd64.deb {{
}} 2. (Optional) Specify a {{}}. @@ -108,7 +108,7 @@ You can exclude selected services and protocols from the snapshots by clicking t Perform the upgrade using the {{}} command, providing a name for the upgrade job, the Cumulus Linux and NetQ version, and a comma-separated list of the hostname(s) to be upgraded: ``` -cumulus@switch:~$ netq lcm upgrade cl-image job-name upgrade-example cl-version 5.9.1 netq-version 4.11.0 hostnames spine01,spine02 +cumulus@switch:~$ netq lcm upgrade cl-image job-name upgrade-example cl-version 5.9.1 netq-version 4.12.0 hostnames spine01,spine02 ``` ### Create a Network Snapshot @@ -116,7 +116,7 @@ cumulus@switch:~$ netq lcm upgrade cl-image job-name upgrade-example cl-version You can also generate a network snapshot before and after the upgrade by adding the `run-snapshot-before-after` option to the command: ``` -cumulus@switch:~$ netq lcm upgrade cl-image job-name upgrade-example cl-version 5.9.1 netq-version 4.11.0 hostnames spine01,spine02,leaf01,leaf02 order spine,leaf run-snapshot-before-after +cumulus@switch:~$ netq lcm upgrade cl-image job-name upgrade-example cl-version 5.9.1 netq-version 4.12.0 hostnames spine01,spine02,leaf01,leaf02 order spine,leaf run-snapshot-before-after ``` ### Restore upon an Upgrade Failure @@ -124,7 +124,7 @@ cumulus@switch:~$ netq lcm upgrade cl-image job-name upgrade-example cl-version (Recommended) You can restore the previous version of Cumulus Linux if the upgrade job fails by adding the `run-restore-on-failure` option to the command. ``` -cumulus@switch:~$ netq lcm upgrade cl-image name upgrade-example cl-version 5.9.1 netq-version 4.11.0 hostnames spine01,spine02,leaf01,leaf02 order spine,leaf run-restore-on-failure +cumulus@switch:~$ netq lcm upgrade cl-image name upgrade-example cl-version 5.9.1 netq-version 4.12.0 hostnames spine01,spine02,leaf01,leaf02 order spine,leaf run-restore-on-failure ``` {{}} diff --git a/content/cumulus-netq-412/Lifecycle-Management/Image-Management.md b/content/cumulus-netq-412/Lifecycle-Management/Image-Management.md index f0c5d1adb7..f7e147aecf 100644 --- a/content/cumulus-netq-412/Lifecycle-Management/Image-Management.md +++ b/content/cumulus-netq-412/Lifecycle-Management/Image-Management.md @@ -7,7 +7,7 @@ toc: 4 NetQ and network operating system images are managed with LCM. This section explains how to check for missing images, upgrade images, and specify default images. -The network OS and NetQ images are available in several variants based on the software version, the CPU architecture, platform, and SHA checksum. Download both the `netq-apps` and `netq-agents` packages according to the version of Cumulus Linux you are running. {{}} +The network OS and NetQ images are available in several variants based on the software version, the CPU architecture, platform, and SHA checksum. Download both the `netq-apps` and `netq-agents` packages according to the version of Cumulus Linux you are running. {{}} ## View and Upload Missing Images @@ -90,7 +90,7 @@ If you have already specified a default image, you must click Manage}}, selecting the appropriate OS version and architecture. Place the files in an accessible part of your local network. +4. Download the NetQ Debian packages needed for upgrade from the {{}}, selecting the appropriate OS version and architecture. Place the files in an accessible part of your local network. 5. In the UI, click {{}} **Add image** above the table. 
@@ -120,13 +120,13 @@ If you have already specified a default image, you must click Manage}}, selecting the appropriate version and hypervisor/platform. Place them in an accessible part of your local network. +2. Download the NetQ Debian packages needed for upgrade from the {{}}, selecting the appropriate version and hypervisor/platform. Place them in an accessible part of your local network. -3. Upload the images to the LCM repository. This example uploads the two packages (`netq-agent` and `netq-apps`) required for NetQ version 4.11.0 for a NetQ appliance or VM running Ubuntu 20.04 with an AMD 64 architecture. +3. Upload the images to the LCM repository. This example uploads the two packages (`netq-agent` and `netq-apps`) required for NetQ version 4.12.0 for a NetQ appliance or VM running Ubuntu 20.04 with an AMD 64 architecture. ``` - cumulus@switch:~$ netq lcm add netq-image /path/to/download/netq-agent_4.11.0-ub20.04u48~1722675045.0390e155f_amd64.deb - cumulus@switch:~$ netq lcm add netq-image /path/to/download/netq-apps_4.11.0-ub20.04u48~1722675045.0390e155f_amd64.deb + cumulus@switch:~$ netq lcm add netq-image /path/to/download/netq-agent_4.12.0-ub20.04u49~1731404061.ffa541ea6_amd64.deb + cumulus@switch:~$ netq lcm add netq-image /path/to/download/netq-apps_4.12.0-ub20.04u49~1731404061.ffa541ea6_amd64.deb ``` {{}} @@ -135,7 +135,7 @@ netq lcm show netq-images ## Upload Upgrade Images -To upload the network OS or NetQ images that you want to use for upgrade, first download the Cumulus Linux disk images (*.bin* files) and NetQ Debian packages from the {{}} and {{}}, respectively. Place them in an accessible part of your local network. +To upload the network OS or NetQ images that you want to use for upgrade, first download the Cumulus Linux disk images (*.bin* files) and NetQ Debian packages from the {{}} and {{}}, respectively. Place them in an accessible part of your local network. If you are upgrading the network OS on switches with different ASIC vendors or CPU architectures, you need more than one image. For NetQ, you need both the `netq-apps` and `netq-agent` packages for each variant. 
@@ -172,8 +172,8 @@ cumulus@switch:~$ netq lcm add image /path/to/download/cumulus-linux-5.9.1-mlx-a NetQ images: ``` -cumulus@switch:~$ netq lcm add image /path/to/download/netq-agent_4.11.0-ub20.04u48~1722675045.0390e155f_amd64.deb -cumulus@switch:~$ netq lcm add image /path/to/download/netq-apps_4.11.0-ub20.04u48~1722675045.0390e155f_amd64.deb +cumulus@switch:~$ netq lcm add image /path/to/download/netq-agent_4.12.0-cl4u49~1731404368.ffa541ea6_amd64.deb +cumulus@switch:~$ netq lcm add image /path/to/download/netq-apps_4.12.0-cl4u49~1731404368.ffa541ea6_amd64.deb ``` {{}} diff --git a/content/cumulus-netq-412/Lifecycle-Management/Manage-NetQ-Agents.md b/content/cumulus-netq-412/Lifecycle-Management/Manage-NetQ-Agents.md index a92169da35..17e9fe3df2 100644 --- a/content/cumulus-netq-412/Lifecycle-Management/Manage-NetQ-Agents.md +++ b/content/cumulus-netq-412/Lifecycle-Management/Manage-NetQ-Agents.md @@ -97,7 +97,7 @@ cumulus@switch~:$ netq show agents opta Matching agents records: Hostname Status NTP Sync Version Sys Uptime Agent Uptime Reinitialize Time Last Changed ----------------- ---------------- -------- ------------------------------------ ------------------------- ------------------------- -------------------------- ------------------------- -netq-ts Fresh yes 4.11.0-ub20.04u48~1601393774.104fb9 Mon Sep 21 16:46:53 2020 Tue Sep 29 21:13:07 2020 Tue Sep 29 21:13:07 2020 Thu Oct 1 16:29:51 2020 +netq-ts Fresh yes 4.12.0-ub20.04u48~1601393774.104fb9 Mon Sep 21 16:46:53 2020 Tue Sep 29 21:13:07 2020 Tue Sep 29 21:13:07 2020 Thu Oct 1 16:29:51 2020 ``` ## View NetQ Agent Configuration diff --git a/content/cumulus-netq-412/Monitor-Operations/Monitor-ECMP.md b/content/cumulus-netq-412/Monitor-Operations/Monitor-ECMP.md index 7111a5cfe4..49dbcba3f0 100644 --- a/content/cumulus-netq-412/Monitor-Operations/Monitor-ECMP.md +++ b/content/cumulus-netq-412/Monitor-Operations/Monitor-ECMP.md @@ -30,21 +30,18 @@ You can view resource utilization for ECMP next hops in the full-screen switch c Select **Forwarding resources** from the side menu. The ECMP next hops column displays the maximum number of hops seen in the forwarding table, the number used, and the percentage of this usage compared to the maximum number. -{{
}} +{{
}} ## Adaptive Routing Adaptive routing is a load balancing feature that improves network utilization for eligible IP packets by selecting forwarding paths dynamically based on the state of the switch, such as queue occupancy and port utilization. You can use the adaptive routing dashboard to view switches with adaptive routing capabilities, events related to adaptive routing, RoCE settings, and egress queue lengths in the form of histograms. -{{}} - -Adaptive routing monitoring is supported on Spectrum-4 switches. It requires a switch fabric running Cumulus Linux 5.5.0 and later. - -{{}} - ### Requirements -To display adaptive routing data, you must have adaptive routing configured on the switch; it can be either enabled or disabled. Switches without an adaptive routing configuration will not appear in the UI or CLI. Additionally, {{}} must be enabled to display adaptive routing data. Switches with RoCE lossy mode enabled will appear in the UI and CLI, but will not display adaptive routing data. +- Adaptive routing monitoring is supported on Spectrum-4 switches. It requires a switch fabric running Cumulus Linux 5.5.0 or later. +- To display adaptive routing data, you must {{}} on the switch; it can be either enabled or disabled. Switches without an adaptive routing configuration will not appear in the UI or CLI. +- {{}} must be enabled to display adaptive routing data. Switches with RoCE *lossy* mode enabled will appear in the UI and CLI, but will not display adaptive routing data. +- To view a switch's {{}} and adaptive routing imbalance events, you must enable {{}} on the switch. If you stop the `asic-monitor` service, NetQ will report values of 0 for all histogram metrics (P95, standard deviation, mean, and maximum queue lengths). ### Adaptive Routing Commands @@ -57,7 +54,7 @@ netq show adaptive-routing config interface ### Access the Adaptive Routing Dashboard -From the header or {{}} menu, select **Spectrum-X**, then **Adaptive routing**. +From the header or {{}} Menu, select **Spectrum-X**, then **Adaptive routing**. The adaptive routing dashboard displays: diff --git a/content/cumulus-netq-412/Monitor-Operations/Topology-View.md b/content/cumulus-netq-412/Monitor-Operations/Topology-View.md index 691eb615a7..431468cb07 100644 --- a/content/cumulus-netq-412/Monitor-Operations/Topology-View.md +++ b/content/cumulus-netq-412/Monitor-Operations/Topology-View.md @@ -10,6 +10,8 @@ The network topology dashboard displays a visual representation of your network, To open the topology view, click **Topology** in the workbench header. The UI displays the highest-level view of your network's topology, showing devices as part of tiers corresponding to your network's architecture: a two-tier architecture is made up of leaf and spine devices; a three-tier architecture is made up of leaf, spine, and super-spine devices. The bottom-most tier is reserved for devices which do not have a role assigned to them. +{{
}} + If your devices appear as a single tier, navigate to the device tab and select the **Assign roles** button. Select the switches to assign to the same role, then select {{Assign Role}} **Assign role** above the table and follow the steps in the UI. {{%notice tip%}} @@ -21,6 +23,8 @@ After assigning roles to the switches, return to the topology view and select ** The topology screen features a main panel displaying tiers or, when zoomed in, the individual devices that comprise the tiers. You can zoom in or out of the topology via the zoom controls at the bottom-right corner of the screen, a mouse with a scroll wheel, or with a trackpad on your computer. You can also adjust the focus by clicking anywhere on the topology and dragging it with your mouse to view a different portion of the network diagram. Above the zoom controls, a smaller screen reflects a macro view of your network and helps with orienting, similar to mapping applications. +{{
}} + ### View Device and Link Data Select a device to view the connections between that devices and others in the network. A side panel displays additional device data, including: diff --git a/content/cumulus-netq-412/More-Documents/NetQ-CLI-Reference-Manual/install.md b/content/cumulus-netq-412/More-Documents/NetQ-CLI-Reference-Manual/install.md index ebeba07560..0f093e8aac 100644 --- a/content/cumulus-netq-412/More-Documents/NetQ-CLI-Reference-Manual/install.md +++ b/content/cumulus-netq-412/More-Documents/NetQ-CLI-Reference-Manual/install.md @@ -109,38 +109,44 @@ cumulus@:~$ netq install cluster bundle /mnt/installables/NetQ-4.12.0. - `netq install cluster config generate` - - - - ### Syntax ``` netq install cluster config generate [] - -netq install cluster config generate workers - - [] ``` ### Required Arguments -| Argument | Value | Description | -| ---- | ---- | ---- | -| NA | \ | | +None + ### Options | Option | Value | Description | | ---- | ---- | ---- | -| NA | \ | | +| NA | \ | Generate the file at this location; you must specify a full path | ### Sample Usage +``` +cumulus@netq-server:~$ netq install cluster config generate +2024-10-28 17:29:53.260462: master-node-installer: Writing cluster installation configuration template file @ /tmp/cluster-install-config.json +``` + ### Related Commands - `netq install cluster bundle` - - - ---> + ## netq install cluster full Installs the NetQ software for an on-premises, server cluster deployment. Run this command on your *master* node. You must have the hostname or IP address of the master node, two worker nodes, virtual IP address, and the NetQ software bundle to run the command. @@ -225,7 +231,7 @@ cumulus@:~$ netq install cluster master-init - `netq install cluster worker-init` - - - - | Test Number | Test Name | Description | | :---------: | --------- | ----------- | -| 0 | EVPN BGP Session | Checks if:
  • BGP EVPN sessions are established
  • The EVPN address family advertisement is consistent
| -| 1 | EVPN VNI Type Consistency | Because a VNI can be of type L2 or L3, checks that for a given VNI, its type is consistent across the network | -| 2 | EVPN Type 2 | Checks for consistency of IP-MAC binding and the location of a given IP-MAC across all VTEPs | -| 3 | EVPN Type 3 | Checks for consistency of replication group across all VTEPs | -| 4 | EVPN Session | For each EVPN session, checks if:
  • adv_all_vni is enabled
  • FDB learning is disabled on tunnel interface
| -| 5 | VLAN Consistency | Checks for consistency of VLAN to VNI mapping across the network | -| 6 | VRF Consistency | Checks for consistency of VRF to L3 VNI mapping across the network | +| 0 | EVPN BGP session | Checks if:
  • BGP EVPN sessions are established
  • The EVPN address family advertisement is consistent
| +| 1 | EVPN VNI type consistency | Because a VNI can be of type L2 or L3, checks that for a given VNI, its type is consistent across the network | +| 2 | EVPN type 2 | Checks for consistency of IP-MAC binding and the location of a given IP-MAC across all VTEPs | +| 3 | EVPN type 3 | Checks for consistency of replication group across all VTEPs | +| 4 | EVPN session | For each EVPN session, checks if:
  • adv_all_vni is enabled
  • FDB learning is disabled on tunnel interface
| +| 5 | VLAN consistency | Checks for consistency of VLAN to VNI mapping across the network | +| 6 | VRF consistency | Checks for consistency of VRF to L3 VNI mapping across the network | ## Interface Validation Tests @@ -70,10 +70,10 @@ The interface validation tests look for consistent configuration between two nod | Test Number | Test Name | Description | | :---------: | --------- | ----------- | -| 0 | Admin State | Checks for consistency of administrative state on two sides of a physical interface | -| 1 | Oper State | Checks for consistency of operational state on two sides of a physical interface | +| 0 | Administrative state | Checks for consistency of administrative state on two sides of a physical interface | +| 1 | Operational state | Checks for consistency of operational state on two sides of a physical interface | | 2 | Speed | Checks for consistency of the speed setting on two sides of a physical interface | -| 3 | Autoneg | Checks for consistency of the auto-negotiation setting on two sides of a physical interface | +| 3 | Auto-negotiation | Checks for consistency of the auto-negotiation setting on two sides of a physical interface | ## Link MTU Validation Tests @@ -81,7 +81,7 @@ The link MTU validation tests look for consistency across an interface and appro | Test Number | Test Name | Description | | :---------: | --------- | ----------- | -| 0 | Link MTU Consistency | Checks for consistency of MTU setting on two sides of a physical interface | +| 0 | Link MTU consistency | Checks for consistency of MTU setting on two sides of a physical interface | | 1 | VLAN interface | Checks if the MTU of an SVI is no smaller than the parent interface, subtracting the VLAN tag size | | 2 | Bridge interface | Checks if the MTU on a bridge is not arbitrarily smaller than the smallest MTU among its members | @@ -95,20 +95,20 @@ The MLAG validation tests look for misconfigurations, peering status, and bond e | 1 | Backup IP | Checks if:
  • MLAG backup IP configuration is missing on an MLAG node
  • MLAG backup IP is correctly pointing to the MLAG peer and its connectivity is available
| | 2 | MLAG Sysmac | Checks if:
  • MLAG Sysmac is consistently configured on both nodes in an MLAG pair
  • Any duplication of an MLAG sysmac exists within a bridge domain
| | 3 | VXLAN Anycast IP | Checks if the VXLAN anycast IP address is consistently configured on both nodes in an MLAG pair | -| 4 | Bridge Membership | Checks if the MLAG peerlink is part of bridge | -| 5 | Spanning Tree | Checks if:
  • STP is enabled and running on the MLAG nodes
  • MLAG peerlink role is correct from STP perspective
  • The bridge ID is consistent between two nodes of an MLAG pair
  • The VNI in the bridge has BPDU guard and BPDU filter enabled
| -| 6 | Dual Home | Checks for:
  • MLAG bonds that are not in dually connected state
  • Dually connected bonds have consistent VLAN and MTU configuration on both sides
  • STP has consistent view of bonds' dual connectedness
| -| 7 | Single Home | Checks for:
  • Singly connected bonds
  • STP has consistent view of bond's single connectedness
| -| 8 | Conflicted Bonds | Checks for bonds in MLAG conflicted state and shows the reason | -| 9 | ProtoDown Bonds | Checks for bonds in protodown state and shows the reason | +| 4 | Bridge membership | Checks if the MLAG peerlink is part of bridge | +| 5 | Spanning tree | Checks if:
  • STP is enabled and running on the MLAG nodes
  • MLAG peerlink role is correct from STP perspective
  • The bridge ID is consistent between two nodes of an MLAG pair
  • The VNI in the bridge has BPDU guard and BPDU filter enabled
| +| 6 | Dual home | Checks for:
  • MLAG bonds that are not in dually connected state
  • Dually connected bonds have consistent VLAN and MTU configuration on both sides
  • STP has consistent view of bonds' dual connectedness
| +| 7 | Single home | Checks for:
  • Singly connected bonds
  • STP has consistent view of bond's single connectedness
| +| 8 | Conflicted bonds | Checks for bonds in MLAG conflicted state and shows the reason | +| 9 | ProtoDown bonds | Checks for bonds in protodown state and shows the reason | | 10 | SVI | Checks if:
  • Both sides of an MLAG pair have an SVI configured
  • SVI on both sides have consistent MTU setting
    | -| 11 | Package Mismatch | Checks for package mismatch on an MLAG pair | +| 11 | Package mismatch | Checks for package mismatch on an MLAG pair | ## NTP Validation Tests The NTP validation test looks for poor operational status of the NTP service. | Test Number | Test Name | Description | | :---------: | --------- | ----------- | -| 0 | NTP Sync | Checks if the NTP service is running and in sync state | +| 0 | NTP sync | Checks if the NTP service is running and in sync state | ## RoCE Validation Tests @@ -116,12 +116,12 @@ The RoCE validation tests look for consistent RoCE and QoS configurations across | Test Number | Test Name | Description | | :---------: | --------- | ----------- | -| 0 | RoCE Mode | Checks whether RoCE is configured for lossy or lossless mode | -| 1 | Classification | Checks for consistency of DSCP, service pool, port group, and traffic class settings | -| 2 | Congestion Control | Checks for consistency of ECN and RED threshold settings | -| 3 | Flow Control | Checks for consistency of PFC configuration for RoCE lossless mode | -| 4 | ETS | Checks for consistency of Enhanced Transmission Selection settings | -| 5 | RoCE Miscellaneous | Checks for consistency across related services | +| 0 | RoCE mode | Checks whether RoCE is configured for lossy or lossless mode | +| 1 | RoCE classification | Checks for consistency of DSCP, service pool, port group, and traffic class settings | +| 2 | RoCE congestion control | Checks for consistency of ECN and RED threshold settings | +| 3 | RoCE flow control | Checks for consistency of PFC configuration for RoCE lossless mode | +| 4 | RoCE ETS mode | Checks for consistency of Enhanced Transmission Selection settings | +| 5 | RoCE miscellaneous | Checks for consistency across related services | ## Sensor Validation Tests The sensor validation tests looks for chassis power supply, fan, and temperature sensors that are not operating as expected. @@ -146,8 +146,8 @@ The VLAN validation tests look for configuration consistency between two nodes. | Test Number | Test Name | Description | | :---------: | --------- | ----------- | -| 0 | Link Neighbor VLAN Consistency | Checks for consistency of VLAN configuration on two sides of a port or a bond | -| 1 | CLAG Bond VLAN Consistency | Checks for consistent VLAN membership of a CLAG (MLAG) bond on each side of the CLAG (MLAG) pair | +| 0 | Link neighbor VLAN consistency | Checks for consistency of VLAN configuration on two sides of a port or a bond | +| 1 | MLAG bond VLAN consistency | Checks for consistent VLAN membership of an MLAG bond on each side of the MLAG pair | ## VXLAN Validation Tests The VXLAN validation tests look for configuration consistency across all VTEPs. 
    
| Test Number | Test Name | Description | | :---------: | --------- | ----------- | -| 0 | VLAN Consistency | Checks for consistent VLAN to VXLAN mapping across all VTEPs | +| 0 | VLAN consistency | Checks for consistent VLAN to VXLAN mapping across all VTEPs | | 1 | BUM replication | Checks for consistent replication group membership across all VTEPs | diff --git a/content/cumulus-netq-412/Whats-New/_index.md b/content/cumulus-netq-412/Whats-New/_index.md index 7ff17eab6c..211fb67be2 100644 --- a/content/cumulus-netq-412/Whats-New/_index.md +++ b/content/cumulus-netq-412/Whats-New/_index.md @@ -12,13 +12,13 @@ This page summarizes new features and improvements for the NetQ {{}} re NetQ 4.12.0 includes the following new features and improvements: -- {{}} that supports up to 1,000 devices +- {{}} that supports up to 1,000 switches - {{}} with {{}} that you can add to your existing workbenches - Compare interfaces and view counter data across links with the {{}} (beta) - View a switch's BGP and EVPN session information from the full-screen {{}} - New option to send all events to a notification channel as part of {{}} - The {{}} is now generally available -- {{}} are now generally available +- The {{}} is now generally available ## Upgrade Paths @@ -29,13 +29,13 @@ For deployments running: ## Compatible Agent Versions -The NetQ 4.12 server is compatible with NetQ Agent 4.10.1 or later. You can install NetQ Agents on switches and servers running: +The NetQ 4.12 server is compatible with the NetQ 4.12 agent. You can install NetQ agents on switches and servers running: - Cumulus Linux 5.0.0 or later (Spectrum switches) - Ubuntu 22.04, 20.04 -You must upgrade to the latest agent version to enable 4.12 features. +## Release Considerations -{{%notice info%}} -Switches running Cumulus Linux 5.9 or later require the NetQ 4.10 or later agent package. See {{}}. -{{%/notice%}} \ No newline at end of file +- NetQ 4.12 is not backward compatible with previous NetQ agent versions. You must install NetQ agent version 4.12 after upgrading your NetQ server to 4.12. +- When you upgrade to NetQ 4.12, any pre-existing event and validation data will be lost. +- If you upgrade a NetQ server with scheduled OSPF validations, they might still appear in the UI but will display results from previous validations. \ No newline at end of file diff --git a/content/cumulus-netq-412/_index.md b/content/cumulus-netq-412/_index.md index 109a5bf5d6..b5fc5413ad 100644 --- a/content/cumulus-netq-412/_index.md +++ b/content/cumulus-netq-412/_index.md @@ -9,7 +9,6 @@ cascade: version: "4.12" imgData: cumulus-netq siteSlug: cumulus-netq - draft: true --- NVIDIA® NetQ™ is a network operations tool set that provides visibility into your overlay and underlay networks, enabling troubleshooting in real-time. NetQ delivers data and statistics about the health of your data center—from the container, virtual machine, or host, all the way to the switch and port. NetQ correlates configuration and operational status, and tracks state changes while simplifying management for the entire Linux-based data center. With NetQ, network operations change from a manual, reactive, node-by-node approach to an automated, informed, and agile one. Visit {{}} to learn more. 
diff --git a/content/cumulus-netq-412/pdf.md b/content/cumulus-netq-412/pdf.md index b2b133bf67..af7e98a91e 100644 --- a/content/cumulus-netq-412/pdf.md +++ b/content/cumulus-netq-412/pdf.md @@ -8,6 +8,5 @@ version: "4.12" imgData: cumulus-netq siteSlug: cumulus-netq pdfhidden: true -draft: true --- \ No newline at end of file diff --git a/content/cumulus-netq-412/weights.md b/content/cumulus-netq-412/weights.md index acc4a50f4b..5cab53295e 100644 --- a/content/cumulus-netq-412/weights.md +++ b/content/cumulus-netq-412/weights.md @@ -8,7 +8,6 @@ version: "4.12" imgData: cumulus-netq siteSlug: cumulus-netq pdfhidden: true -draft: true --- diff --git a/static/images/netq/topo-device-view-412.png b/static/images/netq/topo-device-view-412.png new file mode 100644 index 0000000000..4d5d64bdbd Binary files /dev/null and b/static/images/netq/topo-device-view-412.png differ diff --git a/static/images/netq/topo-tier-412.png b/static/images/netq/topo-tier-412.png new file mode 100644 index 0000000000..7ade1d3f77 Binary files /dev/null and b/static/images/netq/topo-tier-412.png differ diff --git a/themes/netDocs/layouts/shortcodes/netq-install/agent-version.html b/themes/netDocs/layouts/shortcodes/netq-install/agent-version.html index d3d1dbb95e..0b36b9a58b 100644 --- a/themes/netDocs/layouts/shortcodes/netq-install/agent-version.html +++ b/themes/netDocs/layouts/shortcodes/netq-install/agent-version.html @@ -1,3 +1,36 @@ +{{- if eq (.Get "version") "4.12.0" -}} + +You should see version 4.12.0 and update 49 in the results.
+
+{{- if eq (.Get "opsys") "cl" -}}
+
+- Cumulus Linux 5.9.0 or later: netq-agent_4.12.0-cld12u49~1731404238.ffa541ea6_amd64.deb
+- Cumulus Linux 5.8.0 or earlier, ARM platforms: netq-agent_4.12.0-cl4u49~1731403923.ffa541ea6_armel.deb
+- Cumulus Linux 5.8.0 or earlier, amd64 platforms: netq-agent_4.12.0-cl4u49~1731404368.ffa541ea6_amd64.deb
+
+{{ end }}
+
+{{- if eq (.Get "opsys") "ub" -}}
+
+- Ubuntu 20.04: netq-agent_4.12.0-ub20.04u49~1731404061.ffa541ea6_amd64.deb
+- Ubuntu 22.04: netq-agent_4.12.0-ub22.04u49~1731404070.ffa541ea6_amd64.deb
+
+{{ end }}
+
+{{- if eq (.Get "opsys") "rh" -}}
+
+- netq-agent-4.12.0-rh7u49~?????????.x86_64.rpm
+
+{{ end }}
+
+{{ end }}
+
 {{- if eq (.Get "version") "4.11.0" -}}

 You should see version 4.11.0 and update 48 in the results.
diff --git a/themes/netDocs/layouts/shortcodes/netq-install/cli-version.html b/themes/netDocs/layouts/shortcodes/netq-install/cli-version.html
index 9b66f99373..8b37d3fdce 100644
--- a/themes/netDocs/layouts/shortcodes/netq-install/cli-version.html
+++ b/themes/netDocs/layouts/shortcodes/netq-install/cli-version.html
@@ -1,3 +1,37 @@
+{{- if eq (.Get "version") "4.12" -}}
+
+You should see version 4.12.0 and update 49 in the results. For example:
+
+{{- if eq (.Get "opsys") "cl" -}}
+
+- Cumulus Linux 5.9.0 or later: netq-apps_4.12.0-cld12u49~1731404238.ffa541ea6_amd64.deb
+- Cumulus Linux 5.8.0 or earlier, ARM platforms: netq-apps_4.12.0-cl4u49~1731403923.ffa541ea6_armel.deb
+- Cumulus Linux 5.8.0 or earlier, amd64 platforms: netq-apps_4.12.0-cl4u49~1731404368.ffa541ea6_amd64.deb
+
+{{ end }}
+
+{{- if eq (.Get "opsys") "ub" -}}
+
+- Ubuntu 20.04: netq-apps_4.12.0-ub20.04u49~1731404061.ffa541ea6_amd64.deb
+- Ubuntu 22.04: netq-apps_4.12.0-ub22.04u49~1731404070.ffa541ea6_amd64.deb
+
+{{ end }}
+
+{{- if eq (.Get "opsys") "rh" -}}
+
+- netq-apps_4.12.0-rh7u49~????????.x86_64.rpm
+
+{{ end }}
+
+{{ end }}
+
 {{- if eq (.Get "version") "4.11" -}}

 You should see version 4.11.0 and update 48 in the results. For example:

diff --git a/themes/netDocs/layouts/shortcodes/netq-install/install-with-cli.html b/themes/netDocs/layouts/shortcodes/netq-install/install-with-cli.html
index beaaab7d62..e1302ead37 100644
--- a/themes/netDocs/layouts/shortcodes/netq-install/install-with-cli.html
+++ b/themes/netDocs/layouts/shortcodes/netq-install/install-with-cli.html
@@ -1,3 +1,158 @@
+{{- if eq (.Get "version") "4.12" -}}
+
+{{- if eq (.Get "deployment") "onprem-single" -}}
+
cumulus@hostname:~$ netq install standalone full interface eth0 bundle /mnt/installables/NetQ-4.12.0.tgz
+ +

NetQ uses the 10.244.0.0/16 (pod-ip-range) and 10.96.0.0/16 (service-ip-range) networks for internal communication by default. If you are using these networks, you must override each range by specifying new subnets for these parameters in the install command:

+
cumulus@hostname:~$ netq install standalone full interface eth0 bundle /mnt/installables/NetQ-4.12.0.tgz pod-ip-range <pod-ip-range> service-ip-range <service-ip-range>

You can specify the IP address of the server instead of the interface name using the ip-addr <ip-address> argument:

+
cumulus@hostname:~$ netq install standalone full ip-addr <ip-address> bundle /mnt/installables/NetQ-4.12.0.tgz

If you change the server IP address or hostname after installing NetQ, you must reset the server with the netq bootstrap reset keep-db command and rerun the install command.
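For example, after changing the management IP address of a standalone on-premises server, the sequence might look like the following sketch; the hostname, interface, and bundle path are illustrative and mirror the standalone example above:

```
cumulus@netq-server:~$ netq bootstrap reset keep-db
cumulus@netq-server:~$ netq install standalone full interface eth0 bundle /mnt/installables/NetQ-4.12.0.tgz
```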

+

+ +

If this step fails for any reason, you can run netq bootstrap reset and then try again.

+ +{{end}} + + +{{- if eq (.Get "deployment") "onprem-cluster-ha" -}} + +

Run the following command on your master node to initialize the cluster. Copy the output of the command to use on your worker nodes:

+ +
cumulus@<hostname>:~$ netq install cluster master-init
+    Please run the following command on all worker nodes:
+    netq install cluster worker-init c3NoLXJzYSBBQUFBQjNOemFDMXljMkVBQUFBREFRQUJBQUFCQVFDM2NjTTZPdVVUWWJ5c2Q3NlJ4SHdseHBsOHQ4N2VMRWVGR05LSWFWVnVNcy94OEE4RFNMQVhKOHVKRjVLUXBnVjdKM2lnMGJpL2hDMVhmSVVjU3l3ZmhvVDVZM3dQN1oySVZVT29ZTi8vR1lOek5nVlNocWZQMDNDRW0xNnNmSzVvUWRQTzQzRFhxQ3NjbndIT3dwZmhRYy9MWTU1a
+
+ +

Run the netq install cluster worker-init <ssh-key> command on each of your worker nodes.
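For example, on one of the worker nodes (the hostname is illustrative and <ssh-key> stands for the string printed by the master-init command above):

```
cumulus@worker01:~$ netq install cluster worker-init <ssh-key>
```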

+ +

Run the following command on your master node, using the IP addresses of your worker nodes and the HA cluster virtual IP address (VIP):

+ +

The HA cluster virtual IP must be:

+
    +
- An unused IP address allocated from the same subnet assigned to the default interface for your master and worker nodes. The default interface is the interface used in the netq install command.
- A different IP address than the primary IP assigned to the default interface.
+

+ +
cumulus@<hostname>:~$ netq install cluster full interface eth0 bundle /mnt/installables/NetQ-4.12.0.tgz workers <worker-1-ip> <worker-2-ip> cluster-vip <vip-ip>
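A filled-in sketch of the same command, assuming two workers at 10.1.0.21 and 10.1.0.22 in the same subnet as the master's eth0 interface, and an unused address 10.1.0.100 chosen as the VIP (all addresses are illustrative):

```
cumulus@netq-master:~$ netq install cluster full interface eth0 bundle /mnt/installables/NetQ-4.12.0.tgz workers 10.1.0.21 10.1.0.22 cluster-vip 10.1.0.100
```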
+ +

NetQ uses the 10.244.0.0/16 (pod-ip-range) and 10.96.0.0/16 (service-ip-range) networks for internal communication by default. If you are using these networks, you must override each range by specifying new subnets for these parameters in the install command:

+
cumulus@hostname:~$ netq install cluster full interface eth0 bundle /mnt/installables/NetQ-4.12.0.tgz workers <worker-1-ip> <worker-2-ip> pod-ip-range <pod-ip-range> service-ip-range <service-ip-range>

You can specify the IP address of the server instead of the interface name using the ip-addr <ip-address> argument:

+
cumulus@hostname:~$ netq install cluster full ip-addr <ip-address> bundle /mnt/installables/NetQ-4.12.0.tgz workers <worker-1-ip> <worker-2-ip>

If you change the server IP address or hostname after installing NetQ, you must reset the server with the netq bootstrap reset keep-db command and rerun the install command.

+

+ +

If this step fails for any reason, you can run netq bootstrap reset and then try again.

+ +{{end}} + +{{- if eq (.Get "deployment") "cloud-single" -}} + +

Run the following command on your NetQ cloud appliance with the config-key obtained from the email you received from NVIDIA titled NetQ Access Link. You can also obtain the configuration key through the NetQ UI.

+ +
cumulus@<hostname>:~$ netq install opta standalone full interface eth0 bundle /mnt/installables/NetQ-4.12.0-opta.tgz config-key <your-config-key> [proxy-host <proxy-hostname> proxy-port <proxy-port>]
+
+ +

NetQ uses the 10.244.0.0/16 (pod-ip-range) and 10.96.0.0/16 (service-ip-range) networks for internal communication by default. If you are using these networks, you must override each range by specifying new subnets for these parameters in the install command:

+
cumulus@hostname:~$ netq install opta standalone full interface eth0 bundle /mnt/installables/NetQ-4.12.0-opta.tgz config-key <your-config-key> pod-ip-range <pod-ip-range> service-ip-range <service-ip-range>

You can specify the IP address of the server instead of the interface name using the ip-addr <address> argument:

+
cumulus@hostname:~$ netq install opta standalone full ip-addr <ip-address> bundle /mnt/installables/NetQ-4.12.0-opta.tgz config-key <your-config-key>

If you change the server IP address or hostname after installing NetQ, you must reset the server with the netq bootstrap reset keep-db command and rerun the install command.

+

+ +

If this step fails for any reason, you can run netq bootstrap reset and then try again.

+ + +{{end}} + +{{- if eq (.Get "deployment") "cloud-cluster-ha" -}} + +

Run the following command on your master node to initialize the cluster. Copy the output of the command to use on your worker nodes:

cumulus@<hostname>:~$ netq install cluster master-init
+    Please run the following command on all worker nodes:
+    netq install cluster worker-init c3NoLXJzYSBBQUFBQjNOemFDMXljMkVBQUFBREFRQUJBQUFCQVFDM2NjTTZPdVVUWWJ5c2Q3NlJ4SHdseHBsOHQ4N2VMRWVGR05LSWFWVnVNcy94OEE4RFNMQVhKOHVKRjVLUXBnVjdKM2lnMGJpL2hDMVhmSVVjU3l3ZmhvVDVZM3dQN1oySVZVT29ZTi8vR1lOek5nVlNocWZQMDNDRW0xNnNmSzVvUWRQTzQzRFhxQ3NjbndIT3dwZmhRYy9MWTU1a
+    

Run the netq install cluster worker-init <ssh-key> command on each of your worker nodes.

+ +

Run the following command on your NetQ cloud appliance with the config-key obtained from the email you received from NVIDIA titled NetQ Access Link. You can also obtain the configuration key through the NetQ UI. Use the IP addresses of your worker nodes and the HA cluster virtual IP address (VIP).

+ +

The HA cluster virtual IP must be:

+
    +
- An unused IP address allocated from the same subnet assigned to the default interface for your master and worker nodes. The default interface is the interface used in the netq install command.
- A different IP address than the primary IP assigned to the default interface.
+

+ +
cumulus@<hostname>:~$ netq install opta cluster full interface eth0 bundle /mnt/installables/NetQ-4.12.0-opta.tgz config-key <your-config-key> workers <worker-1-ip> <worker-2-ip> cluster-vip <vip-ip> [proxy-host <proxy-hostname> proxy-port <proxy-port>]
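A filled-in sketch of the cloud cluster command, again with illustrative worker addresses and VIP; <your-config-key> stands for the configuration key from your NetQ Access Link email, and the optional proxy arguments are omitted:

```
cumulus@netq-master:~$ netq install opta cluster full interface eth0 bundle /mnt/installables/NetQ-4.12.0-opta.tgz config-key <your-config-key> workers 10.1.0.21 10.1.0.22 cluster-vip 10.1.0.100
```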
+    
+ +

NetQ uses the 10.244.0.0/16 (pod-ip-range) and 10.96.0.0/16 (service-ip-range) networks for internal communication by default. If you are using these networks, you must override each range by specifying new subnets for these parameters in the install command:

+
cumulus@hostname:~$ netq install opta cluster full interface eth0 bundle /mnt/installables/NetQ-4.12.0-opta.tgz config-key <your-config-key> pod-ip-range <pod-ip-range> service-ip-range <service-ip-range>

You can specify the IP address of the server instead of the interface name using the ip-addr <ip-address> argument:

+
cumulus@hostname:~$ netq install opta cluster full ip-addr <ip-address> bundle /mnt/installables/NetQ-4.12.0-opta.tgz config-key <your-config-key>

If you change the server IP address or hostname after installing NetQ, you must reset the server with the netq bootstrap reset keep-db command and rerun the install command.

+

+ + +

If this step fails for any reason, you can run netq bootstrap reset and then try again.

+ +{{end}} + +{{- if eq (.Get "deployment") "cloud-cluster-ha" "onprem-cluster-ha" -}} + +

Verify Installation Status

To view the status of the installation, use the netq show status [verbose] command. The following example shows a successful on-premises installation:

State: Active
+    NetQ Live State: Active
+    Installation Status: FINISHED
+    Version: 4.12.0
+    Installer Version: 4.12.0
+    Installation Type: Cluster
+    Activation Key: EhVuZXRxLWVuZHBvaW50LWdhdGV3YXkYsagDIixPSUJCOHBPWUFnWXI2dGlGY2hTRzExR2E5aSt6ZnpjOUvpVVTaDdpZEhFPQ==
+    Master SSH Public Key: c3NoLXJzYSBBQUFBQjNOemFDMXljMkVBQUFBREFRQUJBQUFCZ1FDNW9iVXB6RkczNkRC
+    Is Cloud: False
+    
+    Kubernetes Cluster Nodes Status:
+    IP Address    Hostname     Role    NodeStatus    Virtual IP
+    ------------  -----------  ------  ------------  ------------
+    10.213.7.52   10.213.7.52  Worker  Ready         10.213.7.53
+    10.213.7.51   10.213.7.51  Worker  Ready         10.213.7.53
+    10.213.7.49   10.213.7.49  Master  Ready         10.213.7.53
+    
+    In Summary, Live state of the NetQ is... Active
+ +{{end}} + +{{- if ne (.Get "deployment") "cloud-cluster-ha" "onprem-cluster-ha" -}} + +

Verify Installation Status

To view the status of the installation, use the netq show status [verbose] command. The following example shows a successful on-premises installation:

State: Active
+    Version: 4.12.0
+    Installer Version: 4.12.0
+    Installation Type: Cluster
+    Activation Key: PKrgipMGEhVuZXRxLWVuZHBvaW50LWdhdGV3YXkYsagDIixUQmFLTUhzZU80RUdTL3pOT01uQ2lnRnrrUhTbXNPUGRXdnUwTVo5SEpBPTIHZGVmYXVsdDoHbmV0cWRldgz=
+    Master SSH Public Key: a3NoLXJzYSBBQUFBQjNOemFDMXljMkVBQUFBREFRQUJBQUFCQVFEazliekZDblJUajkvQVhOZ0hteXByTzZIb3Y2cVZBWFdsNVNtKzVrTXo3dmMrcFNZTGlOdWl1bEhZeUZZVDhSNmU3bFdqS3NrSE10bzArNFJsQVd6cnRvbVVzLzlLMzQ4M3pUMjVZQXpIU2N1ZVhBSE1TdTZHZ0JyUkpXYUpTNjJ2RTkzcHBDVjBxWWJvUFo3aGpCY3ozb0VVWnRsU1lqQlZVdjhsVjBNN3JEWW52TXNGSURWLzJ2eks3K0x2N01XTG5aT054S09hdWZKZnVOT0R4YjFLbk1mN0JWK3hURUpLWW1mbTY1ckoyS1ArOEtFUllrr5TkF3bFVRTUdmT3daVHF2RWNoZnpQajMwQ29CWDZZMzVST2hDNmhVVnN5OEkwdjVSV0tCbktrWk81MWlMSDAyZUpJbXJHUGdQa2s1SzhJdGRrQXZISVlTZ0RwRlpRb3Igcm9vdEBucXRzLTEwLTE4OC00NC0xNDc=
+    Is Cloud: False
+    
+    Cluster Status:
+    IP Address     Hostname       Role    Status
+    -------------  -------------  ------  --------
+    10.188.44.147  10.188.44.147  Role    Ready
+    
+    NetQ... Active
+    
+ + {{end}} + +

Run the netq show opta-health command to verify all applications are operating properly. Allow 10-15 minutes for all applications to come up and report their status.

cumulus@hostname:~$ netq show opta-health
+    Application                                            Status    Namespace      Restarts    Timestamp
+    -----------------------------------------------------  --------  -------------  ----------  ------------------------
+    cassandra-rc-0-w7h4z                                   READY     default        0           Fri Apr 10 16:08:38 2020
+    cp-schema-registry-deploy-6bf5cbc8cc-vwcsx             READY     default        0           Fri Apr 10 16:08:38 2020
+    kafka-broker-rc-0-p9r2l                                READY     default        0           Fri Apr 10 16:08:38 2020
+    kafka-connect-deploy-7799bcb7b4-xdm5l                  READY     default        0           Fri Apr 10 16:08:38 2020
+    netq-api-gateway-deploy-55996ff7c8-w4hrs               READY     default        0           Fri Apr 10 16:08:38 2020
+    netq-app-address-deploy-66776ccc67-phpqk               READY     default        0           Fri Apr 10 16:08:38 2020
+    netq-app-admin-oob-mgmt-server                         READY     default        0           Fri Apr 10 16:08:38 2020
+    netq-app-bgp-deploy-7dd4c9d45b-j9bfr                   READY     default        0           Fri Apr 10 16:08:38 2020
+    netq-app-clagsession-deploy-69564895b4-qhcpr           READY     default        0           Fri Apr 10 16:08:38 2020
+    netq-app-configdiff-deploy-ff54c4cc4-7rz66             READY     default        0           Fri Apr 10 16:08:38 2020
+    ...
+    

If any of the applications or services display Status as DOWN after 30 minutes, open a support ticket and attach the output of the opta-support command.
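For example, run the command on the NetQ server and attach the file it generates to the ticket (the hostname is illustrative):

```
cumulus@netq-server:~$ opta-support
```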

+
+{{end}}
+
+
 {{- if eq (.Get "version") "4.11" -}}
 {{- if eq (.Get "deployment") "onprem-single" -}}
diff --git a/themes/netDocs/layouts/shortcodes/netq-install/kvm-platform-image.html b/themes/netDocs/layouts/shortcodes/netq-install/kvm-platform-image.html
index 52d219bf49..c0db0ef1bf 100644
--- a/themes/netDocs/layouts/shortcodes/netq-install/kvm-platform-image.html
+++ b/themes/netDocs/layouts/shortcodes/netq-install/kvm-platform-image.html
@@ -824,4 +824,42 @@
 {{ end }}
+{{ end }}
+
+{{- if eq (.Get "version") "4.12" -}}
+
+{{- if eq (.Get "deployment") "onprem" -}}
    +
1. On the NVIDIA Application Hub, log in to your account.
2. Select NVIDIA Licensing Portal.
3. Select Software Downloads from the menu.
4. Click Product Family and select NetQ.
5. Locate the NetQ SW 4.12 KVM image and select Download.
6. If prompted, read the license agreement and proceed with the download.
+ +

For enterprise customers, if you do not see a link to the NVIDIA Licensing Portal on the NVIDIA Application Hub, contact NVIDIA support.

+
+

For NVIDIA employees, download NetQ directly from the NVIDIA Licensing Portal.

+ +{{ end }} + +{{- if eq (.Get "deployment") "cloud" -}} + +
    +
1. On the NVIDIA Application Hub, log in to your account.
2. Select NVIDIA Licensing Portal.
3. Select Software Downloads from the menu.
4. Click Product Family and select NetQ.
5. Locate the NetQ SW 4.12 KVM Cloud image and select Download.
6. If prompted, read the license agreement and proceed with the download.
+ +

For enterprise customers, if you do not see a link to the NVIDIA Licensing Portal on the NVIDIA Application Hub, contact NVIDIA support.

+
+

For NVIDIA employees, download NetQ directly from the NVIDIA Licensing Portal.

+ +{{ end }} + {{ end }} \ No newline at end of file diff --git a/themes/netDocs/layouts/shortcodes/netq-install/upgrade-image.html b/themes/netDocs/layouts/shortcodes/netq-install/upgrade-image.html index ea848b885b..7c1d03a51b 100644 --- a/themes/netDocs/layouts/shortcodes/netq-install/upgrade-image.html +++ b/themes/netDocs/layouts/shortcodes/netq-install/upgrade-image.html @@ -334,4 +334,23 @@

For NVIDIA employees, download NetQ directly from the NVIDIA Licensing Portal.

+{{ end }} + +{{- if eq (.Get "version") "4.12" -}} + +
    +
1. On the NVIDIA Application Hub, log in to your account.
2. Select NVIDIA Licensing Portal.
3. Select Software Downloads from the menu.
4. Click Product Family and select NetQ.
5. Select the relevant software for your hypervisor:
   - If you are upgrading NetQ software for a NetQ on-premises VM, select NetQ SW 4.12.0 Appliance to download the NetQ-4.12.0.tgz file.
   - If you are upgrading NetQ software for a NetQ cloud VM, select NetQ SW 4.12.0 Appliance Cloud to download the NetQ-4.12.0-opta.tgz file.
6. If prompted, read the license agreement and proceed with the download.
+ +

For enterprise customers, if you do not see a link to the NVIDIA Licensing Portal on the NVIDIA Application Hub, contact NVIDIA support.

+
+

For NVIDIA employees, download NetQ directly from the NVIDIA Licensing Portal.

+ + {{ end }} \ No newline at end of file diff --git a/themes/netDocs/layouts/shortcodes/netq-install/vmw-platform-image.html b/themes/netDocs/layouts/shortcodes/netq-install/vmw-platform-image.html index 07bdc61bd0..0fe529f4ab 100644 --- a/themes/netDocs/layouts/shortcodes/netq-install/vmw-platform-image.html +++ b/themes/netDocs/layouts/shortcodes/netq-install/vmw-platform-image.html @@ -794,4 +794,40 @@

For NVIDIA employees, download NetQ directly from the NVIDIA Licensing Portal.

{{ end }} +{{ end }} + +{{- if eq (.Get "version") "4.12" -}} + +{{- if eq (.Get "deployment") "onprem" -}} + +
    +
1. On the NVIDIA Application Hub, log in to your account.
2. Select NVIDIA Licensing Portal.
3. Select Software Downloads from the menu.
4. Click Product Family and select NetQ.
5. Locate the NetQ SW 4.12 VMware image and select Download.
6. If prompted, read the license agreement and proceed with the download.
+ +

For enterprise customers, if you do not see a link to the NVIDIA Licensing Portal on the NVIDIA Application Hub, contact NVIDIA support.

+
+

For NVIDIA employees, download NetQ directly from the NVIDIA Licensing Portal.

+{{ end }} + +{{- if eq (.Get "deployment") "cloud" -}} + +
    +
1. On the NVIDIA Application Hub, log in to your account.
2. Select NVIDIA Licensing Portal.
3. Select Software Downloads from the menu.
4. Click Product Family and select NetQ.
5. Locate the NetQ SW 4.12 VMware Cloud image and select Download.
6. If prompted, read the license agreement and proceed with the download.
+ +

For enterprise customers, if you do not see a link to the NVIDIA Licensing Portal on the NVIDIA Application Hub, contact NVIDIA support.

+
+

For NVIDIA employees, download NetQ directly from the NVIDIA Licensing Portal.

+{{ end }} + {{ end }} \ No newline at end of file