diff --git a/content/cumulus-linux-510/Whats-New/rn.md b/content/cumulus-linux-510/Whats-New/rn.md index 9d3427de50..fbce34c855 100644 --- a/content/cumulus-linux-510/Whats-New/rn.md +++ b/content/cumulus-linux-510/Whats-New/rn.md @@ -18,7 +18,6 @@ pdfhidden: True | [4135919](#4135919)
| You might experience a memory leak in ospfd when processing next hops due to network changes. | 5.9.1-5.10.1 | | | [4129344](#4129344)
| When you create an ACL rule that matches TCP state and more than seven TCP or UDP source or destination ports, the rule does not get framed properly and is rejected by the kernel.
To work around this issue, create another rule number when the number of ports you want to match is more than seven. | 5.9.1-5.10.1 | | | [4128913](#4128913)
| In an EVPN configuration, when you use NVUE to configure a new host bond and a multihoming ESI at the same time, the Split-Horizon preventive traffic class rule is not programmed in the egress direction. To work around this issue, configure the host bond and apply the configuration, then configure the EVPN multihoming ESI on the host bonds and apply the configuration in a separate step. | 5.9.1-5.10.1 | | -| [4122543](#4122543)
| When you use the NVUE nv set bridge domain stp priority command to configure the STP priority on two bridges, the STP priority on the second bridge does not apply.
To work around this issue, configure an NVUE snippet for the second bridge and apply it to the switch; for example:
Create the vlan-aware_bridge_snippet.yaml file and add the following:
- set:
system:
config:
snippet:
ifupdown2_eni:
bridge2: \|mstpctl-treeprio 8192

Save the file, run the nv config patch vlan-aware_bridge_snippet.yaml command, then the nv config apply command. | 5.9.1-5.10.1 | | | [4119696](#4119696)
| If you run nv set commands after you perform an upgrade but before a reboot, NVUE creates a revision based off the pre-upgrade version. After reboot, the revision contains pre-upgrade data that might cause it to fail during config apply. To work around this issue, detach the stale revision after upgrade with the nv config detach command. | 5.10.0-5.10.1 | | | [4119621](#4119621)
| When you set the SNMP server listening address to listen on all IPv4 and IPv6 addresses in a VRF with the nv set service snmp-server listening-address all vrf and nv set service snmp-server listening-address all-v6 vrf commands, SNMP requests over IPv6 addresses do not work. | 5.8.0-5.10.1 | | | [4102992](#4102992)
| When you use the NVUE nv set bridge domain stp priority command to configure the STP priority on two bridges, the STP priority on the second bridge does not apply.
To work around this issue, configure an NVUE snippet for the second bridge and apply it to the switch; for example:
Create the vlan-aware_bridge_snippet.yaml file and add the following:
- set:
system:
config:
snippet:
ifupdown2_eni:
bridge2: \|mstpctl-treeprio 8192

Save the file, run the nv config patch vlan-aware_bridge_snippet.yaml command, then the nv config apply command. | 5.9.1-5.10.1 | | @@ -117,7 +116,6 @@ pdfhidden: True | [4135919](#4135919)
| You might experience a memory leak in ospfd when processing next hops due to network changes. | 5.9.1-5.10.1 | | | [4129344](#4129344)
| When you create an ACL rule that matches TCP state and more than seven TCP or UDP source or destination ports, the rule does not get framed properly and is rejected by the kernel.
To work around this issue, create another rule number when the number of ports you want to match is more than seven. | 5.9.1-5.10.1 | | | [4128913](#4128913)
| In an EVPN configuration, when you use NVUE to configure a new host bond and a multihoming ESI at the same time, the Split-Horizon preventive traffic class rule is not programmed in the egress direction. To work around this issue, configure the host bond and apply the configuration, then configure the EVPN multihoming ESI on the host bonds and apply the configuration in a separate step. | 5.9.1-5.10.1 | | -| [4122543](#4122543)
| When you use the NVUE nv set bridge domain stp priority command to configure the STP priority on two bridges, the STP priority on the second bridge does not apply.
To work around this issue, configure an NVUE snippet for the second bridge and apply it to the switch; for example:
Create the vlan-aware_bridge_snippet.yaml file and add the following:
- set:
system:
config:
snippet:
ifupdown2_eni:
bridge2: \|mstpctl-treeprio 8192

Save the file, run the nv config patch vlan-aware_bridge_snippet.yaml command, then the nv config apply command. | 5.9.1-5.10.1 | | | [4119696](#4119696)
| If you run nv set commands after you perform an upgrade but before a reboot, NVUE creates a revision based off the pre-upgrade version. After reboot, the revision contains pre-upgrade data that might cause it to fail during config apply. To work around this issue, detach the stale revision after upgrade with the nv config detach command. | 5.10.0-5.10.1 | | | [4119621](#4119621)
| When you set the SNMP server listening address to listen on all IPv4 and IPv6 addresses in a VRF with the nv set service snmp-server listening-address all vrf and nv set service snmp-server listening-address all-v6 vrf commands, SNMP requests over IPv6 addresses do not work. | 5.8.0-5.10.1 | | | [4102992](#4102992)
| When you use the NVUE nv set bridge domain stp priority command to configure the STP priority on two bridges, the STP priority on the second bridge does not apply.
To work around this issue, configure an NVUE snippet for the second bridge and apply it to the switch; for example:
Create the vlan-aware_bridge_snippet.yaml file and add the following:
- set:
system:
config:
snippet:
ifupdown2_eni:
bridge2: \|mstpctl-treeprio 8192

Save the file, run the nv config patch vlan-aware_bridge_snippet.yaml command, then the nv config apply command. | 5.9.1-5.10.1 | | diff --git a/content/cumulus-linux-510/rn.xml b/content/cumulus-linux-510/rn.xml index 67c5b7407b..001f6099c6 100644 --- a/content/cumulus-linux-510/rn.xml +++ b/content/cumulus-linux-510/rn.xml @@ -31,21 +31,6 @@ -4122543 -When you use the NVUE {{nv set bridge domain <bridge-id> stp priority}} command to configure the STP priority on two bridges, the STP priority on the second bridge does not apply. To work around this issue, configure an NVUE snippet for the second bridge and apply it to the switch; for example: -Create the {{vlan-aware_bridge_snippet.yaml}} file and add the following: -- set: - system: - config: - snippet: - ifupdown2_eni: - bridge2: | -mstpctl-treeprio 8192 -Save the file, run the {{nv config patch vlan-aware_bridge_snippet.yaml}} command, then the {{nv config apply}} command. -5.9.1-5.10.1 - - - 4119696 If you run {{nv set}} commands after you perform an upgrade but before a reboot, NVUE creates a revision based off the pre-upgrade version. After reboot, the revision contains pre-upgrade data that might cause it to fail during {{config apply}}. To work around this issue, detach the stale revision after upgrade with the {{nv config detach}} command. 5.10.0-5.10.1 @@ -624,21 +609,6 @@ Fixed: 2.6.0+dfsg.1-1+deb10u1 -4122543 -When you use the NVUE {{nv set bridge domain <bridge-id> stp priority}} command to configure the STP priority on two bridges, the STP priority on the second bridge does not apply. To work around this issue, configure an NVUE snippet for the second bridge and apply it to the switch; for example: -Create the {{vlan-aware_bridge_snippet.yaml}} file and add the following: -- set: - system: - config: - snippet: - ifupdown2_eni: - bridge2: | -mstpctl-treeprio 8192 -Save the file, run the {{nv config patch vlan-aware_bridge_snippet.yaml}} command, then the {{nv config apply}} command. 
-5.9.1-5.10.1 - - - 4119696 If you run {{nv set}} commands after you perform an upgrade but before a reboot, NVUE creates a revision based off the pre-upgrade version. After reboot, the revision contains pre-upgrade data that might cause it to fail during {{config apply}}. To work around this issue, detach the stale revision after upgrade with the {{nv config detach}} command. 5.10.0-5.10.1 diff --git a/content/cumulus-linux-59/Whats-New/rn.md b/content/cumulus-linux-59/Whats-New/rn.md index 905f4c3593..2963e57d3f 100644 --- a/content/cumulus-linux-59/Whats-New/rn.md +++ b/content/cumulus-linux-59/Whats-New/rn.md @@ -17,7 +17,6 @@ pdfhidden: True | [4135919](#4135919)
| You might experience a memory leak in ospfd when processing next hops due to network changes. | 5.9.1-5.10.1 | | | [4129344](#4129344)
| When you create an ACL rule that matches TCP state and more than seven TCP or UDP source or destination ports, the rule does not get framed properly and is rejected by the kernel.
To work around this issue, create another rule number when the number of ports you want to match is more than seven. | 5.9.1-5.10.1 | | | [4128913](#4128913)
| In an EVPN configuration, when you use NVUE to configure a new host bond and a multihoming ESI at the same time, the Split-Horizon preventive traffic class rule is not programmed in the egress direction. To work around this issue, configure the host bond and apply the configuration, then configure the EVPN multihoming ESI on the host bonds and apply the configuration in a separate step. | 5.9.1-5.10.1 | | -| [4122543](#4122543)
| When you use the NVUE nv set bridge domain stp priority command to configure the STP priority on two bridges, the STP priority on the second bridge does not apply.
To work around this issue, configure an NVUE snippet for the second bridge and apply it to the switch; for example:
Create the vlan-aware_bridge_snippet.yaml file and add the following:
- set:
system:
config:
snippet:
ifupdown2_eni:
bridge2: \|mstpctl-treeprio 8192

Save the file, run the nv config patch vlan-aware_bridge_snippet.yaml command, then the nv config apply command. | 5.9.1-5.10.1 | | | [4119621](#4119621)
| When you set the SNMP server listening address to listen on all IPv4 and IPv6 addresses in a VRF with the nv set service snmp-server listening-address all vrf and nv set service snmp-server listening-address all-v6 vrf commands, SNMP requests over IPv6 addresses do not work. | 5.8.0-5.10.1 | | | [4102992](#4102992)
| When you use the NVUE nv set bridge domain stp priority command to configure the STP priority on two bridges, the STP priority on the second bridge does not apply.
To work around this issue, configure an NVUE snippet for the second bridge and apply it to the switch; for example:
Create the vlan-aware_bridge_snippet.yaml file and add the following:
- set:
system:
config:
snippet:
ifupdown2_eni:
bridge2: \|mstpctl-treeprio 8192

Save the file, run the nv config patch vlan-aware_bridge_snippet.yaml command, then the nv config apply command. | 5.9.1-5.10.1 | | | [4101808](#4101808)
| When the SNMP service is busy for more than approximately a minute, the applications using net-snmp APIs to support their MIBs (such as FRR) become blocked. | 5.9.0-5.10.1 | | @@ -149,7 +148,6 @@ pdfhidden: True | [4135919](#4135919)
| You might experience a memory leak in ospfd when processing next hops due to network changes. | 5.9.1-5.10.1 | | | [4129344](#4129344)
| When you create an ACL rule that matches TCP state and more than seven TCP or UDP source or destination ports, the rule does not get framed properly and is rejected by the kernel.
To work around this issue, create another rule number when the number of ports you want to match is more than seven. | 5.9.1-5.10.1 | | | [4128913](#4128913)
| In an EVPN configuration, when you use NVUE to configure a new host bond and a multihoming ESI at the same time, the Split-Horizon preventive traffic class rule is not programmed in the egress direction. To work around this issue, configure the host bond and apply the configuration, then configure the EVPN multihoming ESI on the host bonds and apply the configuration in a separate step. | 5.9.1-5.10.1 | | -| [4122543](#4122543)
| When you use the NVUE nv set bridge domain stp priority command to configure the STP priority on two bridges, the STP priority on the second bridge does not apply.
To work around this issue, configure an NVUE snippet for the second bridge and apply it to the switch; for example:
Create the vlan-aware_bridge_snippet.yaml file and add the following:
- set:
system:
config:
snippet:
ifupdown2_eni:
bridge2: \|mstpctl-treeprio 8192

Save the file, run the nv config patch vlan-aware_bridge_snippet.yaml command, then the nv config apply command. | 5.9.1-5.10.1 | | | [4119621](#4119621)
| When you set the SNMP server listening address to listen on all IPv4 and IPv6 addresses in a VRF with the nv set service snmp-server listening-address all vrf and nv set service snmp-server listening-address all-v6 vrf commands, SNMP requests over IPv6 addresses do not work. | 5.8.0-5.10.1 | | | [4102992](#4102992)
| When you use the NVUE nv set bridge domain stp priority command to configure the STP priority on two bridges, the STP priority on the second bridge does not apply.
To work around this issue, configure an NVUE snippet for the second bridge and apply it to the switch; for example:
Create the vlan-aware_bridge_snippet.yaml file and add the following:
- set:
system:
config:
snippet:
ifupdown2_eni:
bridge2: \|mstpctl-treeprio 8192

Save the file, run the nv config patch vlan-aware_bridge_snippet.yaml command, then the nv config apply command. | 5.9.1-5.10.1 | | | [4101808](#4101808)
| When the SNMP service is busy for more than approximately a minute, the applications using net-snmp APIs to support their MIBs (such as FRR) become blocked. | 5.9.0-5.10.1 | | diff --git a/content/cumulus-linux-59/rn.xml b/content/cumulus-linux-59/rn.xml index 646e234b8f..ec47a1be7c 100644 --- a/content/cumulus-linux-59/rn.xml +++ b/content/cumulus-linux-59/rn.xml @@ -25,21 +25,6 @@ -4122543 -When you use the NVUE {{nv set bridge domain <bridge-id> stp priority}} command to configure the STP priority on two bridges, the STP priority on the second bridge does not apply. To work around this issue, configure an NVUE snippet for the second bridge and apply it to the switch; for example: -Create the {{vlan-aware_bridge_snippet.yaml}} file and add the following: -- set: - system: - config: - snippet: - ifupdown2_eni: - bridge2: | -mstpctl-treeprio 8192 -Save the file, run the {{nv config patch vlan-aware_bridge_snippet.yaml}} command, then the {{nv config apply}} command. -5.9.1-5.10.1 - - - 4119621 When you set the SNMP server listening address to listen on all IPv4 and IPv6 addresses in a VRF with the {{nv set service snmp-server listening-address all vrf}} and {{nv set service snmp-server listening-address all-v6 vrf}} commands, SNMP requests over IPv6 addresses do not work. 5.8.0-5.10.1 @@ -792,21 +777,6 @@ This issue occurs because {{poectl}} is called on non-PoE switches. To work arou
To work around this issue, configure an NVUE snippet for the second bridge and apply it to the switch; for example: -Create the {{vlan-aware_bridge_snippet.yaml}} file and add the following: -- set: - system: - config: - snippet: - ifupdown2_eni: - bridge2: | -mstpctl-treeprio 8192 -Save the file, run the {{nv config patch vlan-aware_bridge_snippet.yaml}} command, then the {{nv config apply}} command. -5.9.1-5.10.1 - - - 4119621 When you set the SNMP server listening address to listen on all IPv4 and IPv6 addresses in a VRF with the {{nv set service snmp-server listening-address all vrf}} and {{nv set service snmp-server listening-address all-v6 vrf}} commands, SNMP requests over IPv6 addresses do not work. 5.8.0-5.10.1 diff --git a/content/cumulus-netq-411/Whats-New/rn.md b/content/cumulus-netq-411/Whats-New/rn.md index ac6100ffd1..9f7973509d 100644 --- a/content/cumulus-netq-411/Whats-New/rn.md +++ b/content/cumulus-netq-411/Whats-New/rn.md @@ -15,6 +15,7 @@ pdfhidden: True | Issue ID | Description | Affects | Fixed | |--- |--- |--- |--- | | [4001098](#4001098)
| When you use NetQ LCM to upgrade a Cumulus Linux switch from version 5.9 to 5.10 and the upgrade fails, NetQ rolls back to version 5.9 and reverts the cumulus user password to the default password. After rollback, reconfigure the password with the nv set system aaa user cumulus password \ command. | 4.11.0 | | +| [4000939](#4000939)
| When you upgrade a NetQ VM with devices in the inventory that have been rotten for 7 or more days, NetQ inventory cards in the UI and table output might show inconsistent results and might not display the rotten devices. To work around this issue, decommission the rotten device and ensure it's running the appropriate NetQ agent version. | 4.11.0 | | | [3995266](#3995266)
| When you use NetQ LCM to upgrade a Cumulus Linux switch with NTP configured using NVUE in a VRF that is not mgmt, the upgrade fails to complete. To work around this issue, first unset the NTP configuration with the nv unset service ntp and nv config apply commands, and reconfigure NTP after the upgrade completes. | 4.11.0 | | | [3985598](#3985598)
| When you configure multiple threshold-crossing events for the same TCA event ID on the same device, NetQ will only display one TCA event for each hostname per TCA event ID, even if both thresholds are crossed or status events are triggered. | 4.11.0 | | | [3981655](#3981655)
| When you upgrade your NetQ VM, some devices in the NetQ inventory might appear as rotten. To work around this issue, restart NetQ agents on devices or upgrade them to the latest agent version after the NetQ VM upgrade is completed. | 4.11.0 | | diff --git a/content/cumulus-netq-411/rn.xml b/content/cumulus-netq-411/rn.xml index 04440d9e7c..92d1ab453b 100644 --- a/content/cumulus-netq-411/rn.xml +++ b/content/cumulus-netq-411/rn.xml @@ -13,6 +13,12 @@ +4000939 +When you upgrade a NetQ VM with devices in the inventory that have been rotten for 7 or more days, NetQ inventory cards in the UI and table output might show inconsistent results and might not display the rotten devices. To work around this issue, decommission the rotten device and ensure it's running the appropriate NetQ agent version. +4.11.0 + + + 3995266 When you use NetQ LCM to upgrade a Cumulus Linux switch with NTP configured using NVUE in a VRF that is not {{mgmt}}, the upgrade fails to complete. To work around this issue, first unset the NTP configuration with the {{nv unset service ntp}} and {{nv config apply}} commands, and reconfigure NTP after the upgrade completes. 4.11.0 diff --git a/content/cumulus-netq-49/Whats-New/rn.md b/content/cumulus-netq-49/Whats-New/rn.md index c723fab8e1..b3c3428008 100644 --- a/content/cumulus-netq-49/Whats-New/rn.md +++ b/content/cumulus-netq-49/Whats-New/rn.md @@ -14,31 +14,24 @@ pdfhidden: True | Issue ID | Description | Affects | Fixed | |--- |--- |--- |--- | -| [3820671](#3820671)
| When you upgrade NetQ cluster deployments with DPUs in the device inventory, the DPUs might not be visible in the NetQ UI after the upgrade. To work around this issue, restart the DTS container on the DPUs in your network. | 4.9.0 | | -| [3819688](#3819688)
| When you upgrade NetQ cluster deployments, the configured LCM credential profile assigned to switches in the inventory is reset to the default access profile. To work around this issue, reconfigure the correct access profile on switches before managing them with LCM after the upgrade. | 4.9.0 | | -| [3819364](#3819364)
| When you attempt to delete a scheduled trace using the NetQ UI, the trace record is not deleted. | 4.7.0-4.9.0 | | -| [3814701](#3814701)
| After you upgrade NetQ, devices that were in a rotten state before the upgrade might not appear in the UI or CLI after the upgrade. To work around this issue, decommission rotten devices before performing the upgrade. | 4.9.0 | | -| [3813819](#3813819)
| When you perform a switch discovery by specifying an IP range, an error message is displayed if switches included in the range have different credentials. To work around this issue, batch switches based on their credentials and run a switch discovery for each batch. | 4.9.0 | | -| [3813078](#3813078)
| When you perform a NetQ upgrade, the upgrade might fail with the following error message:
Command '['kubectl', 'version --client']' returned non-zero exit status 1.
To work around this issue, run the netq bootstrap reset keep-db command and then reinstall NetQ using the netq install command for your deployment. | 4.9.0 | | -| [3808200](#3808200)
| When you perform a netq bootstrap reset on a NetQ cluster VM and perform a fresh install with the netq install command, the install might fail with the following error:
 master-node-installer: Running sanity check on cluster_vip: 10.10.10.10 Virtual IP 10.10.10.10 is already used
To work around this issue, run the netq install command again. | 4.9.0 | | -| [3800434](#3800434)
| When you upgrade NetQ from a version prior to 4.9.0, What Just Happened data that was collected before the upgrade is no longer present. | 4.9.0 | | -| [3798677](#3798677)
| In a NetQ cluster environment, if your master node goes offline and is restored, subsequent NetQ validations for MLAG and EVPN might unexpectedly indicate failures. To work around this issue, either restart NetQ agents on devices in the inventory or wait up to 24 hours for the issue to clear. | 4.9.0 | | -| [3787946](#3787946)
| When you install a NetQ cluster deployment on a subnet with other NetQ clusters or other devices using VRRP, there might be connectivity loss to the cluster virtual IP (VIP). The VIP is established using the VRRP protocol, and during cluster installation a virtual router ID (VRID) is selected. If another device on the subnet running VRRP selects the same VRID, connectivity issues may occur. To work around this issue, avoid multiple VRRP speakers on the subnet, or ensure the VRID used on all VRRP devices is unique. To validate the VRID used by NetQ, check the assigned virtual_router_id value in /mnt/keepalived/keepalived.conf. | 4.9.0 | | -| [3773879](#3773879)
| When you upgrade a switch running Cumulus Linux using NetQ LCM, any configuration files in /etc/cumulus/switchd.d for adaptive routing or other features are not restored after the upgrade. To work around this issue, manually back up these files and restore them after the upgrade. | 4.9.0 | | -| [3772274](#3772274)
| After you upgrade NetQ, data from snapshots taken prior to the NetQ upgrade will contain unreliable data and should not be compared to any snapshots taken after the upgrade. In cluster deployments, snapshots from prior NetQ versions will not be visible in the UI. | 4.9.0 | | -| [3771124](#3771124)
| When you reconfigure a VNI to map to a different VRF or remove and recreate a VNI in the same VRF, NetQ EVPN validations might incorrectly indicate a failure for the VRF consistency test. | 4.9.0 | | -| [3769936](#3769936)
| When there is a NetQ interface validation failure for admin state mismatch, the validation failure might clear unexpectedly while one side of the link is still administratively down. | 4.9.0 | | -| [3764718](#3764718)
| When you reboot the master node of a NetQ cluster deployment, NIC telemetry is no longer collected. To recover from this issue, restart the Prometheus pod with the following commands:
1. Retrieve the Prometheus pod name with the kubectl get pods \| grep netq-prom command
2. Restart the pod by deleting the pod name with the kubectl delete pod command
Example:
cumulus@netq-server:~$ kubectl get pods \| grep netq-prom
netq-prom-adapter-ffd9b874d-hxhbz 2/2 Running 0 3h50m
cumulus@netq-server:~$ kubectl delete pod netq-prom-adapter-ffd9b874d-hxhbz
| 4.9.0 | | -| [3760442](#3760442)
| When you export events from NetQ to a CSV file, the timestamp of the exported events does not match the timestamp reported in the NetQ UI based on the user profile's time zone setting. | 4.9.0 | | -| [3755207](#3755207)
| When you export digital optics table data from NetQ, some fields might be visible in the UI that are not exported to CSV or JSON files. | 4.9.0 | | -| [3752422](#3752422)
| When you run a NetQ trace and specify MAC addresses for the source and destination, NetQ displays the message “No valid path to destination” and does not display trace data. | 4.9.0 | | -| [3738840](#3738840)
| When you upgrade a Cumulus Linux switch configured for TACACS authentication using NetQ LCM, the switch's TACACS configuration is not restored after upgrade. | 4.8.0-4.9.0 | | -| [3721754](#3721754)
| After you decommission a switch, the switch's interfaces are still displayed in the NetQ UI in the Interfaces view. | 4.9.0 | | -| [3656965](#3656965)
| After you upgrade NetQ and try to decommission a switch, the decommission might fail with the message "Timeout encountered while processing." | 4.8.0-4.9.0 | | -| [3633458](#3633458)
| The legacy topology diagram might categorize devices into tiers incorrectly. To work around this issue, use the updated topology diagram by selecting Topology Beta in the latest version of the NetQ UI. | 4.7.0-4.9.0 | | -| [3613811](#3613811)
| LCM operations using in-band management are unsupported on switches that use eth0 connected to an out-of-band network. To work around this issue, configure NetQ to use out-of-band management in the mgmt VRF on Cumulus Linux switches when interface eth0 is in use. | 4.8.0-4.9.0 | | -| [3735959](#3735959)
| When you upgrade a Cumulus Linux switch using NetQ LCM, NetQ presents a warning indicating that there are unsaved NVUE configuration changes when NVUE is not in use and no NVUE configuration is present on the switch. You can safely ignore this warning. | 4.9.0 | | -| [3824873](#3824873)
| When you upgrade an on-premises NetQ deployment, the upgrade might fail with the following message:
master-node-installer: Upgrading NetQ Appliance with tarball : /mnt/installables/NetQ-4.9.0.tgz
master-node-installer: Migrating H2 db list index out of range.
To work around this issue, re-run the netq upgrade command. | 4.9.0 | | +| [3824873](#3824873)
| When you upgrade an on-premises NetQ deployment, the upgrade might fail with the following message:
master-node-installer: Upgrading NetQ Appliance with tarball : /mnt/installables/NetQ-4.9.0.tgz
master-node-installer: Migrating H2 db list index out of range.
To work around this issue, re-run the netq upgrade command. | 4.9.0 | 4.10.0-4.11.0| +| [3820671](#3820671)
| When you upgrade NetQ cluster deployments with DPUs in the device inventory, the DPUs might not be visible in the NetQ UI after the upgrade. To work around this issue, restart the DTS container on the DPUs in your network. | 4.9.0 | 4.10.0-4.11.0| +| [3819688](#3819688)
| When you upgrade NetQ cluster deployments, the configured LCM credential profile assigned to switches in the inventory is reset to the default access profile. To work around this issue, reconfigure the correct access profile on switches before managing them with LCM after the upgrade. | 4.9.0 | 4.10.0-4.11.0| +| [3819364](#3819364)
| When you attempt to delete a scheduled trace using the NetQ UI, the trace record is not deleted. | 4.7.0-4.9.0 | 4.10.0-4.11.0| +| [3813819](#3813819)
| When you perform a switch discovery by specifying an IP range, an error message is displayed if switches included in the range have different credentials. To work around this issue, batch switches based on their credentials and run a switch discovery for each batch. | 4.9.0 | 4.10.0-4.11.0| +| [3813078](#3813078)
| When you perform a NetQ upgrade, the upgrade might fail with the following error message:
Command '['kubectl', 'version --client']' returned non-zero exit status 1.
To work around this issue, run the netq bootstrap reset keep-db command and then reinstall NetQ using the netq install command for your deployment. | 4.9.0 | 4.10.0-4.11.0| +| [3808200](#3808200)
| When you perform a netq bootstrap reset on a NetQ cluster VM and perform a fresh install with the netq install command, the install might fail with the following error:
 master-node-installer: Running sanity check on cluster_vip: 10.10.10.10 Virtual IP 10.10.10.10 is already used
To work around this issue, run the netq install command again. | 4.9.0 | 4.10.0-4.11.0| +| [3800434](#3800434)
| When you upgrade NetQ from a version prior to 4.9.0, What Just Happened data that was collected before the upgrade is no longer present. | 4.9.0-4.11.0 | | +| [3773879](#3773879)
| When you upgrade a switch running Cumulus Linux using NetQ LCM, any configuration files in /etc/cumulus/switchd.d for adaptive routing or other features are not restored after the upgrade. To work around this issue, manually back up these files and restore them after the upgrade. | 4.9.0 | 4.10.0-4.11.0| +| [3772274](#3772274)
| After you upgrade NetQ, data from snapshots taken prior to the NetQ upgrade will contain unreliable data and should not be compared to any snapshots taken after the upgrade. In cluster deployments, snapshots from prior NetQ versions will not be visible in the UI. | 4.9.0-4.11.0 | | +| [3771124](#3771124)
| When you reconfigure a VNI to map to a different VRF or remove and recreate a VNI in the same VRF, NetQ EVPN validations might incorrectly indicate a failure for the VRF consistency test. | 4.9.0 | 4.10.0-4.11.0| +| [3769936](#3769936)
| When there is a NetQ interface validation failure for admin state mismatch, the validation failure might clear unexpectedly while one side of the link is still administratively down. | 4.9.0-4.11.0 | | +| [3760442](#3760442)
| When you export events from NetQ to a CSV file, the timestamp of the exported events does not match the timestamp reported in the NetQ UI based on the user profile's time zone setting. | 4.9.0 | 4.10.0-4.11.0| +| [3755207](#3755207)
| When you export digital optics table data from NetQ, some fields might be visible in the UI that are not exported to CSV or JSON files. | 4.9.0 | 4.10.0-4.11.0| +| [3752422](#3752422)
| When you run a NetQ trace and specify MAC addresses for the source and destination, NetQ displays the message “No valid path to destination” and does not display trace data. | 4.9.0-4.11.0 | | +| [3738840](#3738840)
| When you upgrade a Cumulus Linux switch configured for TACACS authentication using NetQ LCM, the switch's TACACS configuration is not restored after upgrade. | 4.8.0-4.9.0 | 4.10.0-4.11.0| +| [3721754](#3721754)
| After you decommission a switch, the switch's interfaces are still displayed in the NetQ UI in the Interfaces view. | 4.9.0-4.10.1 | 4.11.0| +| [3613811](#3613811)
| LCM operations using in-band management are unsupported on switches that use eth0 connected to an out-of-band network. To work around this issue, configure NetQ to use out-of-band management in the mgmt VRF on Cumulus Linux switches when interface eth0 is in use. | 4.8.0-4.11.0 | | ### Fixed Issues in 4.9.0 | Issue ID | Description | Affects | @@ -55,4 +48,5 @@ pdfhidden: True | [3634648](#3634648)
| The topology graph might show unexpected connections when devices in the topology do not have LLDP adjacencies. | 4.8.0 | | | [3632378](#3632378)
| After you upgrade your on-premises NetQ VM from version 4.7.0 to 4.8.0, NIC telemetry using the Prometheus adapter is not collected. To work around this issue, run the following commands on your NetQ VM:
sudo kubectl set image deployment/netq-prom-adapter netq-prom-adapter=docker-registry:5000/netq-prom-adapter:4.8.0
sudo kubectl set image deployment/netq-prom-adapter prometheus=docker-registry:5000/prometheus-v2.41.0:4.8.0
| 4.8.0 | | | [3549877](#3549877)
| NetQ cloud deployments might unexpectedly display validation results for checks that did not run on any nodes. | 4.6.0-4.8.0 | | -| [3429528](#3429528)
| EVPN and RoCE validation cards in the NetQ UI might not display data when Cumulus Linux switches are configured with high VNI scale. | 4.6.0-4.8.0 | | \ No newline at end of file +| [3429528](#3429528)
| EVPN and RoCE validation cards in the NetQ UI might not display data when Cumulus Linux switches are configured with high VNI scale. | 4.6.0-4.8.0 | | +