Description of problem:
[UI] [Text] - Improve the event log for "sync all host networks" if the host is in good state or in non-responsive state

"Sync all hosts in cluster" invokes the pre-existing "sync all host networks" operation for each host in the cluster. "Sync all host networks" in turn invokes "setup networks" without checking whether the host is in or out of sync (sketched below). This means that if the host was in good state and all networks on it are already synced, we still see events saying that changes were applied on the host and succeeded, and that the sync succeeded as well. This is confusing, because the host was in good state. We could add a message saying that nothing was synced on the host because everything is already in sync (or something like that).

Also, if "Sync all hosts in cluster" is invoked while a host is in non-responsive state, the engine will try to perform changes on that host and then fail to sync its networks. This is also a bit confusing. Should we ignore a non-responsive host in such a case, or just say that we failed to sync?

Version-Release number of selected component (if applicable):
4.2.6.4-0.0.master.20180817095858.git52c4ae5.el7

How reproducible:
100%

Steps to Reproduce:
1. Create 2 DCs and have 3 hosts in DC1
2. Create 3 networks:
   - net1 in DC1, net1 as non-VM network in DC2
   - net2 in DC1, net2 with VLAN in DC2
   - net3 in DC1, net3 with MTU 5000 in DC2
3. Attach the networks to host1 only
4. Move host1 to DC2 - all 3 networks on host1 are out-of-sync
5. Sync all networks in the cluster in DC2
6. Move host1 back to DC1 - all 3 networks on host1 are out-of-sync
7. Reboot host3
8. Sync all networks in the cluster in DC1

Actual results:
- host1 - networks get synced
- host2 - there was no need for sync on this host, but we still see events saying: changes applied, changes succeeded, sync succeeded
- host3 - host is in non-responsive state - we see events saying: changes applied, changes failed, sync has failed

Expected results:
- host1 - networks get synced
- host2 - no need for sync
- host3 - host is non-responsive; should we try? Should we write a different message?
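For illustration, here is a minimal, self-contained sketch of the call chain described above. All names (syncAllClusterNetworks, syncAllHostNetworks, setupNetworks, the Host record) are hypothetical and do not come from the actual ovirt-engine code; the point is only that setup networks is invoked unconditionally for every host, so an already-synced host still produces "applied/succeeded" events and a non-responsive host produces failure events.

import java.util.List;

// Hypothetical sketch of the flow described above; names and types are
// illustrative, not taken from the real ovirt-engine code base.
public class SyncAllClusterNetworksSketch {

    record Host(String name, boolean responsive, boolean networksInSync) {}

    // "Sync all hosts in cluster": fans out to every host in the cluster.
    static void syncAllClusterNetworks(List<Host> clusterHosts) {
        for (Host host : clusterHosts) {
            syncAllHostNetworks(host);
        }
    }

    // "Sync all host networks": invokes "setup networks" without ever
    // consulting host.networksInSync(), which is the behaviour this bug
    // report complains about.
    static void syncAllHostNetworks(Host host) {
        setupNetworks(host);
    }

    static void setupNetworks(Host host) {
        System.out.println("Event: applying network changes on host " + host.name());
        if (!host.responsive()) {
            // A non-responsive host cannot be reached, so the operation fails.
            System.out.println("Event: failed to apply network changes on host " + host.name());
            return;
        }
        // Even a host whose networks are already in sync reaches this point,
        // producing a confusing "succeeded" event for an empty change set.
        System.out.println("Event: network changes succeeded on host " + host.name());
    }

    public static void main(String[] args) {
        syncAllClusterNetworks(List.of(
                new Host("host1", true, false),    // out of sync -> real work
                new Host("host2", true, true),     // already in sync -> noise
                new Host("host3", false, false))); // non-responsive -> failure events
    }
}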
4.2.7 is blockers-only. Postponing, even though it is an EasyFix.
In an off-line discussion it was decided that 'sync cluster networks' will:

- continue to attempt to sync all hosts regardless of their operational status (up, maintenance, non-operational, non-responsive, etc.), because they might become available before the sync reaches them.

- refrain from syncing hosts which are already in sync, because there is no point in that: the sync request would be empty, and this scenario just creates noise in the log and event log.

Burman, please ack.
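A minimal sketch of the behaviour agreed above, again with hypothetical names rather than the real engine implementation: every host is still attempted regardless of its operational status, but a host whose networks are already in sync is skipped with an informational message instead of an empty setup-networks call.

import java.util.List;

// Hypothetical sketch of the agreed behaviour; not the actual
// ovirt-engine implementation.
public class SyncClusterNetworksAgreedBehaviour {

    record Host(String name, boolean networksInSync) {}

    static void syncAllClusterNetworks(List<Host> clusterHosts) {
        for (Host host : clusterHosts) {
            // Hosts are attempted regardless of operational status (up,
            // maintenance, non-operational, non-responsive, ...), because
            // they may become available before the sync reaches them.
            if (host.networksInSync()) {
                // Nothing to do: skipping avoids an empty setup-networks
                // request and the resulting noise in the log and event log.
                System.out.println("Event: networks on host " + host.name()
                        + " are already in sync, nothing to do");
                continue;
            }
            System.out.println("Syncing networks on host " + host.name());
        }
    }

    public static void main(String[] args) {
        syncAllClusterNetworks(List.of(
                new Host("host1", false),
                new Host("host2", true)));
    }
}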
(In reply to eraviv from comment #2)
> In an off-line discussion it was decided that 'sync cluster networks' will:
>
> - continue to attempt to sync all hosts regardless of their operational
> status (up, maint., non-operational, non-responsive etc.) because they might
> become available before the sync reaches them.
>
> - refrain from syncing hosts which are in sync because there is no point in
> that, since the sync request will be empty. this scenario just creates noise
> in the log and event log.
>
> Burman, please ack.

ACK
Burman,

Dominik assigned 4.3.6 as target milestone, meaning a backport of the merge on master is required.
Do you ack the backport?

Thanks
(In reply to eraviv from comment #4)
> Burman,
>
> Dominik assigned 4.3.6 as target milestone meaning a backport of the merge
> on master is required.
> Do you ack the backport?
>
> Thanks

Eitan, do you mean backport from master to 4.3.z?
(In reply to Michael Burman from comment #5)
> (In reply to eraviv from comment #4)
> > Burman,
> >
> > Dominik assigned 4.3.6 as target milestone meaning a backport of the merge
> > on master is required.
> > Do you ack the backport?
> >
> > Thanks
>
> Eitan, do you mean backport from master to 4.3.z?

yes
Verified on - 4.3.6.1-0.1.el7
This bugzilla is included in oVirt 4.3.6 release, published on September 26th 2019. Since the problem described in this bug report should be resolved in oVirt 4.3.6 release, it has been closed with a resolution of CURRENT RELEASE. If the solution does not work for you, please open a new bug report.