Bug 2035519 - Re-attach unmanaged ports to bridge controller after rollback
Summary: Re-attach unmanaged ports to bridge controller after rollback
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: NetworkManager
Version: 8.4
Hardware: All
OS: Linux
Priority: urgent
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Thomas Haller
QA Contact: Filip Pokryvka
URL:
Whiteboard:
Depends On:
Blocks: 2061711 2062609 2076131
 
Reported: 2021-12-24 11:45 UTC by nijin ashok
Modified: 2022-12-14 02:32 UTC
CC List: 16 users

Fixed In Version: NetworkManager-1.36.0-3.el8
Doc Type: No Doc Update
Doc Text:
Clone Of:
: 2061711 2062609 (view as bug list)
Environment:
Last Closed: 2022-05-10 14:55:01 UTC
Type: Bug
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHELPLAN-110538 0 None None None 2022-02-01 14:33:04 UTC
Red Hat Product Errata RHEA-2022:1985 0 None None None 2022-05-10 14:55:37 UTC
freedesktop.org Gitlab NetworkManager NetworkManager-ci merge_requests 995 0 None merged general: add libnm_snapshot_reattach_unmanaged_ports_to_bridge 2022-03-18 15:08:28 UTC
freedesktop.org Gitlab NetworkManager NetworkManager merge_requests 1131 0 None merged [th/checkpoint-preserve-external-ports] preserve external ports during checkpoint rollback 2022-03-08 07:05:21 UTC

Description nijin ashok 2021-12-24 11:45:06 UTC
Description of problem:

A VM was running on node worker-1, which has a vNIC on bridge br1.

~~~
[root@worker1 ~]# bridge link show
3: enp7s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master br1 state forwarding priority 32 cost 100 
4: vethfc546558@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master br1 state forwarding priority 32 cost 2    <<<<

Dec 24 10:35:16 worker1.ocp4.shiftvirt.com kernel: br1: port 2(vethfc546558) entered forwarding state
~~~

Then the nncp (NodeNetworkConfigurationPolicy) was edited to update a route configuration.

~~~
Dec 24 10:35:41 worker1.ocp4.shiftvirt.com NetworkManager[1515]: <info>  [1640342141.1636] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/163" pid=2320992 uid=0 result="success"
Dec 24 10:35:41 worker1.ocp4.shiftvirt.com NetworkManager[1515]: <info>  [1640342141.1954] audit: op="connection-update" uuid="32af0fb4-2390-4ec5-a048-52e52c755e44" name="br1" args="ipv4.routes" pid=2320992 uid=0 result="success"
~~~

However, the DNS probe failed.

~~~
{"level":"info","ts":1640342141.9202867,"logger":"probe","msg":"Running 'ping' probe"}
{"level":"info","ts":1640342142.4557729,"logger":"probe","msg":"Running 'dns' probe"}
{"level":"info","ts":1640342149.1017618,"logger":"probe","msg":"Running 'ping' probe"}
{"level":"info","ts":1640342149.6388083,"logger":"probe","msg":"Running 'dns' probe"}
{"level":"info","ts":1640342150.166052,"logger":"probe","msg":"Running 'api-server' probe"}
{"level":"info","ts":1640342159.802868,"logger":"probe","msg":"Running 'node-readiness' probe"}
{"level":"info","ts":1640342159.8029985,"logger":"enactmentconditions","msg":"NotifyFailedToConfigure","enactment":"worker1.ocp4.shiftvirt.com.worker1"}


"level":"info","ts":1640342159.8030884,"logger":"enactmentstatus","msg":"status: {DesiredState:interfaces:\n- bridge:\n    options:\n      stp:\n        enabled: false\n    port:\n    - name: enp7s0\n      vlan: {}\n  description: Linux
 bridge with enp7s0 as a port\n  ipv4:\n    address:\n    - ip: 192.168.122.1\n      prefix-length: 24\n    enabled: true\n  name: br1\n  state: up\n  type: linux-bridge\nroutes:\n  config:\n  - destination: 10.140.110.8/24\n    next-hop
-address: 192.168.122.37\n    next-hop-interface: br1\n PolicyGeneration:13 Conditions:[{Type:Failing Status:True Reason:FailedToConfigure Message:error reconciling NodeNetworkConfigurationPolicy at desired state apply: , rolling back de
sired state configuration: failed runnig probes after network changes: failed runnig probe 'dns' with after network reconfiguration -> currentState: ---\ndns-resolver:\n  config:\n    search: []\n    server:\n    - 10.74.128.144\n  runni
ng:\n    search: []\n    server:\n    - 10.74.128.144\n ..........................................enabled: true\n    address:\n    - ip: fe80::e819:d5ff:feef:809a\n      prefix-length: 64\n  lldp:\n    enabled: false\n  mac-address: EA:19:D5:EF:80:9A\n  mtu: 65000\n  vxlan:\n    base-iface: ''\n    destination-port: 4789\n    id: 0\n    remote: ''\n: failed checking DNS connectivity: [failed looking up NS root-server.net using name sever 10.74.128.144: lookup root-server.net on 192.168.122.1:53: read udp 10.74.130.149:33734->10.74.128.144:53: read: connection refused]
~~~

This initiated a rollback, which deactivated and reactivated the bridge, removing the port "vethfc546558" in the process and causing the VM running on this host to lose network connectivity.

~~~
Dec 24 10:35:48 worker1.ocp4.shiftvirt.com NetworkManager[1515]: <info>  [1640342148.4185] checkpoint[0x56494aaab1d0]: rollback of /org/freedesktop/NetworkManager/Checkpoint/163
Dec 24 10:35:48 worker1.ocp4.shiftvirt.com NetworkManager[1515]: <info>  [1640342148.4225] device (br1): disconnecting for new activation request.

Dec 24 10:35:48 worker1.ocp4.shiftvirt.com NetworkManager[1515]: <info>  [1640342148.4472] device (br1): detached bridge port enp7s0
Dec 24 10:35:48 worker1.ocp4.shiftvirt.com NetworkManager[1515]: <info>  [1640342148.4722] device (br1): detached bridge port vethfc546558
Dec 24 10:35:48 worker1.ocp4.shiftvirt.com NetworkManager[1515]: <info>  [1640342148.4722] device (vethfc546558): released from master device br1

Dec 24 10:35:48 worker1.ocp4.shiftvirt.com NetworkManager[1515]: <info>  [1640342148.4798] device (br1): Activation: starting connection 'br1' (32af0fb4-2390-4ec5-a048-52e52c755e44)
Dec 24 10:35:48 worker1.ocp4.shiftvirt.com NetworkManager[1515]: <info>  [1640342148.4822] device (br1): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')

Dec 24 10:35:48 worker1.ocp4.shiftvirt.com NetworkManager[1515]: <info>  [1640342148.5029] device (br1): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed')
Dec 24 10:35:48 worker1.ocp4.shiftvirt.com NetworkManager[1515]: <info>  [1640342148.5049] device (br1): Activation: successful, device activated.

After rollback

worker1 ~]# bridge link show
3: enp7s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master br1 state forwarding priority 32 cost 100
~~~
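The before/after `bridge link show` listings above can be compared mechanically to see what the rollback detached. A minimal sketch, with the sample output embedded from the log above (a live check would capture the real command output twice instead):

```shell
#!/bin/bash
# "bridge link show" output before and after the rollback (copied from the
# log above; on a live system run the command and capture its output).
before='3: enp7s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master br1 state forwarding priority 32 cost 100
4: vethfc546558@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master br1 state forwarding priority 32 cost 2'
after='3: enp7s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master br1 state forwarding priority 32 cost 100'

# Second field is the port name; strip the trailing ":" and any "@peer" part.
ports() { printf '%s\n' "$1" | awk '{sub(/:$/,"",$2); sub(/@.*/,"",$2); print $2}'; }

# Any port present before but not after was detached by the rollback.
pa=" $(ports "$after" | tr '\n' ' ') "
detached=""
for p in $(ports "$before"); do
    case "$pa" in *" $p "*) ;; *) detached="$detached$p";; esac
done
echo "detached by rollback: $detached"
```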

Version-Release number of selected component (if applicable):

4.8.13

How reproducible:

100%

Steps to Reproduce:

This was observed in a customer environment. It can be reproduced by making the DNS probe fail.

knmstate first checks whether the DNS probe is working and adds it to the list of probes to check again after applying the configuration.

~~~
 func ApplyDesiredState(client client.Client, desiredState shared.State) (string, error) {
	if len(string(desiredState.Raw)) == 0 {
		return "Ignoring empty desired state", nil
	}

	out, err := EnableVlanFiltering(desiredState)
	if err != nil {
		return out, fmt.Errorf("failed to enable vlan filtering via nmcli: %s", err.Error())
	}

	// Before apply we get the probes that are working fine, they should be
	// working fine after apply
	probes := probe.Select(client)            <<<<<<<<<<<

	setOutput, err := nmstatectl.Set(desiredState, DesiredStateConfigurationTimeout)
	if err != nil {
		return setOutput, err
	}

	err = probe.Run(client, probes)           <<<<<<<<<
~~~

So DNS was working while knmstate checked for healthy probes, and I then blocked the connection between the DNS server and the worker node before it re-ran the probes after applying the desired configuration.
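Blocking the worker node's path to the DNS server can be done with a firewall rule. A hedged sketch (the server IP 10.74.128.144 comes from the probe log above; the commands are printed rather than executed here, since running them needs root on the worker node):

```shell
#!/bin/bash
DNS_SERVER=10.74.128.144   # from the failing "dns" probe log above

# Printed only; run these on the worker node to break DNS for the probe.
block=$(cat <<EOF
iptables -I OUTPUT -d $DNS_SERVER -p udp --dport 53 -j REJECT
iptables -I OUTPUT -d $DNS_SERVER -p tcp --dport 53 -j REJECT
EOF
)
printf '%s\n' "$block"
```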


Actual results:

A failure in the DNS probe during nncp reconciliation leads to loss of VM network connectivity.

Expected results:

VM network connectivity continues to work.

Additional info:

Regardless of DNS probe, any potential NetworkManager rollback results in loss of network connectivity of VMs.

Comment 2 nijin ashok 2021-12-27 13:43:57 UTC
(In reply to nijin ashok from comment #1)
> Additional info:
> 
> Regardless of DNS probe, any potential NetworkManager rollback results in
> loss of network connectivity of VMs.

Looks like the issue is because the veth interfaces are not managed by NetworkManager.

~~~
grep veth /usr/lib/udev/rules.d/85-nm-unmanaged.rules
ENV{ID_NET_DRIVER}=="veth", ENV{NM_UNMANAGED}="1"
~~~

So the rollback does not take these interfaces into account.

This issue does not occur in KVM/RHV, where the "vnet" interfaces are managed by NetworkManager and the connection information contains the "connection.master" and "connection.slave-type" settings.

Comment 3 Petr Horáček 2022-01-06 09:26:05 UTC
Thanks for reporting this, it is indeed worrying.

Comment 4 Petr Horáček 2022-01-20 13:36:31 UTC
We are working with the nmstate team on a resolution for this.

Ruth is going to introduce test automation for this case.

Comment 5 Radim Hrazdil 2022-02-01 14:26:39 UTC
After deeper debugging, we believe this should be addressed in NetworkManager, as kubernetes-nmstate can't make assumptions about the cause of a probe
failure and decide whether or not to roll back. If something bad happens, including unplugging the DNS server, rolling back to the last checkpoint should always be a safe operation.

The desired NetworkManager behaviour for our use case is to put the bridge into the same state it was in when the checkpoint was captured, including unmanaged veth ports.
Currently, NetworkManager detaches unmanaged interfaces that were attached at the time of checkpoint creation.

This has potentially very bad consequences, because in CNV, Virtual Machines are connected to the Linux bridge via unmanaged veth interfaces.
Unplugging all the veths means that all CNV Virtual Machines are disconnected.



Let me summarize the scenario again:
Note that the actions are done with nmstate.

1. created bridge named test
interfaces:
- name: test
  description: Linux bridge with eth1 as a port
  type: linux-bridge
  state: up
  bridge:
    options:
      stp:
        enabled: true
  port:
  - name: eth1

Jan 25 11:47:36 node02 NetworkManager[7135]: <info>  [1643111256.7433] manager: (test): new Bridge device (/org/freedesktop/NetworkManager/Devices/18)


2. Two VMs are connected to the bridge:
Jan 25 11:58:59 node02 NetworkManager[7135]: <info>  [1643111939.3819] manager: (veth a0c26a52): new Veth device (/org/freedesktop/NetworkManager/Devices/21)
Jan 25 11:59:01 node02 NetworkManager[7135]: <info>  [1643111941.6424] manager: (cali8b59b96228e): new Veth device (/org/freedesktop/NetworkManager/Devices/22)


3. The bridge is updated (new port is attached), a new checkpoint is created before performing the change: 

interfaces:
- name: test
  description: Linux bridge with eth1 as a port
  type: linux-bridge
  state: up
  bridge:
    options:
      stp:
        enabled: true
  port:
  - name: eth1
  - name: eth2

Jan 25 12:03:13 node02 NetworkManager[7135]: <trace> [1643112193.1014] dbus-object[87d88214781f23b5]: export: "/org/freedesktop/NetworkManager/Checkpoint/85"
Jan 25 12:03:13 node02 NetworkManager[7135]: <info>  [1643112193.1015] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/85" pid=168237 uid=0 result="success"
Jan 25 12:03:13 node02 NetworkManager[7135]: <trace> [1643112193.1230] auth: call[23198]: CheckAuthorization(org.freedesktop.NetworkManager.settings.modify.system), subject=unix-process[pid=168237, uid=0, start=927518] (succeeding for root)
Jan 25 12:03:13 node02 NetworkManager[7135]: <trace> [1643112193.1231] auth: call[23198]: completed: authorized=1, challenge=0 (simulated)
Jan 25 12:03:13 node02 NetworkManager[7135]: <trace> [1643112193.1232] ifcfg-rh: write: write connection eth2 (4f627270-7279-4be0-b4ab-ef2b98d4dc44) to file "/etc/sysconfig/network-scripts/ifcfg-eth2"
Jan 25 12:03:13 node02 NetworkManager[7135]: <trace> [1643112193.1236] ifcfg-rh: commit: 4f627270-7279-4be0-b4ab-ef2b98d4dc44 (eth2) added as "/etc/sysconfig/network-scripts/ifcfg-eth2"
Jan 25 12:03:13 node02 NetworkManager[7135]: <trace> [1643112193.1237] settings: add-connection: successfully added connection 4f627270-7279-4be0-b4ab-ef2b98d4dc44,'eth2' (d701f501e2bed8e5/ifcfg-rh, "/etc/sysconfig/network-scripts/ifcfg-eth2")
Jan 25 12:03:13 node02 NetworkManager[7135]: <trace> [1643112193.1237] settings: storage[4f627270-7279-4be0-b4ab-ef2b98d4dc44,d701f501e2bed8e5/ifcfg-rh]: change event with connection "eth2" (file "/etc/sysconfig/network-scripts/ifcfg-eth2")
Jan 25 12:03:13 node02 NetworkManager[7135]: <trace> [1643112193.1237] settings: update[4f627270-7279-4be0-b4ab-ef2b98d4dc44]: adding connection "eth2" (d701f501e2bed8e5/ifcfg-rh)
Jan 25 12:03:13 node02 NetworkManager[7135]: <trace> [1643112193.1238] settings-connection[8d3dc5505cf91345,4f627270-7279-4be0-b4ab-ef2b98d4dc44]: timestamp: no timestamp from keyfile database "/var/lib/NetworkManager/timestamps"
Jan 25 12:03:13 node02 NetworkManager[7135]: <trace> [1643112193.1238] settings-connection[8d3dc5505cf91345,4f627270-7279-4be0-b4ab-ef2b98d4dc44]: autoconnect: blocked reason: user-request
Jan 25 12:03:13 node02 NetworkManager[7135]: <trace> [1643112193.1238] settings-connection[8d3dc5505cf91345,4f627270-7279-4be0-b4ab-ef2b98d4dc44]: update settings-connection flags to visible (was none)
Jan 25 12:03:13 node02 NetworkManager[7135]: <trace> [1643112193.1238] dbus-object[8d3dc5505cf91345]: export: "/org/freedesktop/NetworkManager/Settings/29"
Jan 25 12:03:13 node02 NetworkManager[7135]: <debug> [1643112193.1239] ++ connection 'new connection' (0x55a914e5eee0/NMSimpleConnection/"802-3-ethernet"): [/org/freedesktop/NetworkManager/Settings/29]


4. The update of the bridge succeeded; now we perform some connectivity checks. Let's assume our connectivity checks fail for some reason (DNS server unreachable, or gateways unreachable).
   If that happens, we perform a rollback.
Jan 25 12:03:13 node02 NetworkManager[7135]: <info>  [1643112193.8342] checkpoint[0x55a914ee6350]: rollback of /org/freedesktop/NetworkManager/Checkpoint/85


5. The rollback succeeds, but the veth ports are detached along with eth2
Jan 25 12:03:14 node02 NetworkManager[7135]: <debug> [1643112194.0049] platform-linux: do-change-link[23]: success changing link: success
Jan 25 12:03:14 node02 NetworkManager[7135]: <info>  [1643112194.0049] device (test): detached bridge port vetha0c26a52

Jan 25 12:03:14 node02 NetworkManager[7135]: <debug> [1643112194.0190] platform-linux: do-change-link[25]: success changing link: success
Jan 25 12:03:14 node02 NetworkManager[7135]: <info>  [1643112194.0190] device (test): detached bridge port vethd07b020d


The desired behaviour for our use case is to put the bridge into the state it was in at the time Checkpoint/85 was captured.
At that point in time, the unmanaged veth ports were already connected to the bridge, so it's desirable to keep them attached
after the rollback.




There's also an issue on the NetworkManager GitLab repo where this behaviour was discussed: https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/issues/909

Comment 8 Beniamino Galvani 2022-02-08 10:55:54 UTC
To summarize, the acceptance criteria for this would be:

Given a Linux system with NM, when I create a bridge/bond/team with NM
and attach a port not managed by NM, then after a checkpoint and
rollback the port is still attached to the controller.
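Under the hood, checkpoint and rollback are D-Bus calls on the org.freedesktop.NetworkManager interface. A sketch of the two calls involved (printed here rather than executed, since they need a live NetworkManager; the 60-second timeout, the flag value 1 for "destroy all", and the checkpoint path are illustrative assumptions):

```shell
#!/bin/bash
# CheckpointCreate(devices, rollback_timeout, flags): an empty device array
# ("0") covers all devices. CheckpointRollback takes the object path returned
# by CheckpointCreate. All concrete values below are illustrative.
calls=$(cat <<'EOF'
busctl call org.freedesktop.NetworkManager /org/freedesktop/NetworkManager \
    org.freedesktop.NetworkManager CheckpointCreate aouu 0 60 1
busctl call org.freedesktop.NetworkManager /org/freedesktop/NetworkManager \
    org.freedesktop.NetworkManager CheckpointRollback o \
    /org/freedesktop/NetworkManager/Checkpoint/85
EOF
)
printf '%s\n' "$calls"
```

The checkpoint.py example script used in comment 13 wraps these same calls.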

Comment 10 Thomas Haller 2022-02-22 17:55:08 UTC
(In reply to Radim Hrazdil from comment #5)
> After deeper debugging, we believe this should be addressed in

The log discussed here is at https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/issues/909#note_1230891

Comment 13 Thomas Haller 2022-03-04 09:27:15 UTC
I used this script for testing:





#!/bin/bash

set -x

ip link del x-eth0
ip link del x-eth1
ip link del x-eth2
ip link del br0

nmcli connection delete c-br0
nmcli connection delete c-br0-2
nmcli connection delete c-br0-eth0

ip link add x-eth0 type veth peer y-eth0
ip link add x-eth1 type veth peer y-eth1
ip link add x-eth2 type veth peer y-eth2

ip link set y-eth0 up
ip link set y-eth1 up
ip link set y-eth2 up

nmcli connection add type bridge con-name c-br0 ifname br0 ipv4.method disabled ipv6.method disabled autoconnect no
nmcli connection add type ethernet con-name c-br0-eth0 ifname x-eth0 master c-br0 autoconnect no
nmcli connection clone c-br0 c-br0-2

nmcli connection up c-br0-eth0

ip link set x-eth1 master br0
ip link set x-eth2 master br0

./examples/python/gi/checkpoint.py create 0 --destroy-all

./examples/python/gi/checkpoint.py

nmcli connection up c-br0-2

ip link set x-eth1 master br0
ip link set x-eth2 master br0

sleep 1
./examples/python/gi/checkpoint.py rollback

Comment 15 Thomas Haller 2022-03-07 14:08:08 UTC
(In reply to Thomas Haller from comment #13)
> I used this script for testing:

and as a final step afterwards, check with `ip link` whether x-eth1/x-eth2 are still attached.
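That final check can be scripted. A small sketch, with sample `ip -o link show master br0` output assumed inline (index numbers and peer names are illustrative; on a live system, run the command itself):

```shell
#!/bin/bash
# Assumed post-rollback "ip -o link show master br0" output.
links='5: x-eth1@y-eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master br0 state forwarding
6: x-eth2@y-eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master br0 state forwarding'

# Verify each expected port still shows up enslaved to br0.
ok=yes
for port in x-eth1 x-eth2; do
    printf '%s\n' "$links" | grep -q "^[0-9]*: $port@" || ok=no
done
echo "x-eth1/x-eth2 still attached: $ok"
```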

Comment 25 Thomas Haller 2022-03-15 12:48:05 UTC
(In reply to Beniamino Galvani from comment #8)
> To summarize, the acceptance criteria for this would be:
> 
> Given a Linux system with NM, when I create a bridge/bond/team with NM
> and attach a port not managed by NM, then after a checkpoint and
> rollback the port is still attached to the controller.

What was implemented only takes effect for Linux bridges. There is no change for the bond/team drivers.
The reason is that it's not clear how this would be useful for bond/team.

Comment 27 errata-xmlrpc 2022-05-10 14:55:01 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (NetworkManager bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2022:1985

Comment 28 Thomas Haller 2022-05-17 08:06:17 UTC
Hi Peter,

This bug and bug 2076131 were reported to address a certain problem with usage of NetworkManager/nmstate.
But the general problem may not be fully fixed or understood.

In this bug, the attached logfile shows that simply a rollback happens. That is fixed, so that a rollback now preserves external ports.
Also, unlike originally suggested, rollback does not restore the ports that were attached at the moment the checkpoint was created (which was possibly a long time ago). Instead, the solution on this rhbz is that rollback keeps the ports that are attached at the moment of rollback. I think that makes sense, because otherwise (what was originally requested on this rhbz) it would not cover ports that are attached/detached between creation and rollback of the checkpoint.

In bug 2076131, the reproducer script re-activates the bridge profile before rollback -- doing that causes a full reactivation, meaning that ports get detached. If you do `nmcli connection up $BRIDGE && nm-rollback`, then you already lose the ports in the first step, and the rollback isn't going to fix that. Also, I don't think that rollback is the solution to "doing a full reactivation of the bridge detaches the ports": rollback is a failsafe, not what usually happens. A solution needs to work also if no rollback is happening.
The solution for the reproducer in 2076131 makes "reapply" work so that a full reactivation is not necessary (reapply is a less disruptive API call in NetworkManager for making changes).



So, unless a new use-case (in the form of a reproducer or a trace logfile) is provided, these issues are considered fixed. Please provide such a use-case and explain what exactly is done by the user of NetworkManager/nmstate (please open a new rhbz). Please also test the latest NM builds (for rhel-8.7/rhel-9.1) to see whether an important use-case of yours is still not covered.

1) A possible/preferable fix is that reapply works for all cases as a less disruptive way of configuring devices.
2) Another possible fix is to just not do a full reactivation. Why are you even doing that? It does not seem necessary in common cases, and reapply should suffice.
3) Another fix is that, if you do a full reactivation, the upper layers reattach the ports themselves. After all, it was not NetworkManager that attached them in the first place.
3b) After a full reactivation, restart/recycle the affected containers/VMs.
4) Another possible fix is that a full reactivation preserves the external ports. But that seems to go against the reason we do the reactivation in the first place. I would prefer to avoid this.
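Option 3 can be sketched as a small hook the upper layer runs after a full reactivation. The bridge and port names below are taken from the logs in this bug and would in practice come from the upper layer's own bookkeeping; DRY_RUN=echo keeps the sketch runnable without root:

```shell
#!/bin/bash
BRIDGE=br1                          # bridge name, from the log above
PORTS="vethfc546558 vethd07b020d"   # veths the upper layer itself created
DRY_RUN=echo                        # drop this to actually run "ip link set"

# Re-attach every externally created port after the bridge was reactivated.
out=$(for p in $PORTS; do $DRY_RUN ip link set "$p" master "$BRIDGE"; done)
printf '%s\n' "$out"
```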

Comment 29 Petr Horáček 2022-05-19 09:12:08 UTC
Hello,

Our use-case is roughly the following:

* A bridge is configured through nmstate
* Containers get attached to it, that is done by setting one side of a veth pair as a port of the bridge, directly through netlink, and moving the other side to containers netns
* Some time passes
* Admin decides to change a parameter of the bridge, they mess up and rollback is triggered

We expect that the veth ifaces are left intact. We don't need them to be recovered by rollback if a user explicitly removed them.

IIUIC that use-case is solved by this BZ, right?

I cannot comment on the 5 points you raised. IIUIC, those are all internal details of nmstate.

Comment 30 Thomas Haller 2022-05-19 09:40:05 UTC
(In reply to Petr Horáček from comment #29)
> Hello,
> 
> Our use-case is roughly the following:
> 
> * A bridge is configured through nmstate
> * Containers get attached to it, that is done by setting one side of a veth
> pair as a port of the bridge, directly through netlink, and moving the other
> side to containers netns
> * Some time passes
> * Admin decides to change a parameter of the bridge, they mess up and
> rollback is triggered
> 
> We expect that the veth ifaces are left intact. We don't need them to be
> recovered by rollback if a user explicitly removed them.
> 
> IIUIC that use-case is solved by this BZ, right?

This scenario (with this form of detail) is supposed to be fixed.

There are some cases that would not work:

1) the Admin deletes (or deletes and recreates) the bridge
2) the Admin does a change to the interface, which requires a full-reactivation. Usually "reapply" should be able to perform most changes and avoid this problem.


In particular, this rhbz is fixed if we follow what the reporter did, according to the provided log file. The reporter did neither 1) nor 2), and rollback is now improved to not mess up the currently existing(!!) configuration.

Maybe we need to better handle 1) or 2). The destructive thing already happens at those steps. It seems to me that the solution to this is not "rollback fixes it" (because rollback is not what you usually do, but the exception; also, it's not clear how rollback would know what to configure). In any case, that's a different bug. If you have a more specific scenario (or reproducer), please report it.

