Bug 1979976

Summary: Update network settings in Satellite correctly after they were changed manually on the server.
Product: Red Hat Satellite
Reporter: Stefan Nemeth <snemeth>
Component: Hosts
Assignee: satellite6-bugs <satellite6-bugs>
Status: CLOSED CURRENTRELEASE
QA Contact: Satellite QE Team <sat-qe-bz-list>
Severity: low
Priority: low
Version: 6.9.0
CC: aruzicka, inecas
Target Milestone: Unspecified
Keywords: Triaged
Target Release: Unused
Hardware: Unspecified
OS: Unspecified
Doc Type: If docs needed, set a value
Last Closed: 2023-08-02 13:50:17 UTC
Type: Bug

Description Stefan Nemeth 2021-07-07 14:21:54 UTC
Description of problem:


Version-Release number of selected component (if applicable):


How reproducible:

100%


Steps to Reproduce:

1. Create a VM with 3 interfaces (eth0, eth1, eth2), where eth0 is the primary interface.


2. Then create a bond in 802.3ad mode over eth0 and eth1 (a hedged nmcli sketch follows the output below). The resulting state:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master nm-bond state UP group default qlen 1000
    link/ether 52:54:00:32:37:19 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master nm-bond state UP group default qlen 1000
    link/ether 52:54:00:32:37:19 brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:69:92:46 brd ff:ff:ff:ff:ff:ff
    inet 192.168.133.11/24 brd 192.168.133.255 scope global noprefixroute dynamic eth2
       valid_lft 41800sec preferred_lft 41800sec
    inet6 fe80::5054:ff:fe69:9246/64 scope link 
       valid_lft forever preferred_lft forever
6: nm-bond: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 52:54:00:32:37:19 brd ff:ff:ff:ff:ff:ff
    inet 192.168.140.233/24 brd 192.168.140.255 scope global noprefixroute dynamic nm-bond
       valid_lft 42697sec preferred_lft 42697sec
    inet6 fe80::3c05:5ddf:37a1:2a20/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
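
For reference, a minimal sketch of how a bond like the nm-bond above could be created with NetworkManager. The connection and device names are taken from the output above; the exact nmcli syntax may vary slightly between RHEL releases:

# nmcli connection add type bond ifname nm-bond con-name nm-bond bond.options "mode=802.3ad"
# nmcli connection add type ethernet ifname eth0 master nm-bond
# nmcli connection add type ethernet ifname eth1 master nm-bond
# nmcli connection up nm-bond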

3. Update the facts and refresh the subscription data:

# subscription-manager facts --update
# subscription-manager refresh
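
After the refresh, the host's interface records on the Satellite side can be inspected with hammer (the host name here is taken from the output below):

# hammer host info --name snemeth-interfaces.sysmgmt.lan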


Actual results:


Id:                       3160
UUID:                     a810cde8-1b31-41fe-bd0d-69d1d078df9a
Name:                     snemeth-interfaces.sysmgmt.lan
Organization:             Default Organization
Location:                 Default Location
Host Group:               RHEL7
Compute Resource:         satotest
Compute Profile:          1-Small
Puppet Environment:       production
Puppet CA Proxy:          ktordeur-sat65.sysmgmt.lan
Puppet Master Proxy:      ktordeur-sat65.sysmgmt.lan
Cert name:                snemeth-interfaces.sysmgmt.lan
Managed:                  yes
Installed at:             2021/06/02 12:16:17
Last report:              2021/06/08 11:35:17
Uptime (seconds):         2346
Status:                   
    Global Status: Error
    Build Status:  Installed
Network:                  
    IPv4 address: 192.168.140.233
    MAC:          52:54:00:32:37:19
    Subnet ipv4:  192.168.140.0/24
    Domain:       sysmgmt.lan
Network interfaces:       
 1) Id:          3331
    Identifier:  eth1
    Type:        interface
    MAC address: 52:54:00:24:33:1d
    FQDN:
 2) Id:          3332
    Identifier:  eth2
    Type:        interface
    MAC address: 52:54:00:69:92:46
    FQDN:
 3) Id:           3330
    Identifier:   nm-bond
    Type:         interface (primary, provision, execution)
    MAC address:  52:54:00:32:37:19
    IPv4 address: 192.168.140.233
    FQDN:         snemeth-interfaces.sysmgmt.lan
Operating system:         
    Architecture:           x86_64
    Operating System:       RHEL Server 7.9
    Build:                  no
    Partition Table:        Kickstart default
    PXE Loader:             PXELinux BIOS
    Custom partition table:
Parameters:         

This information does not correspond to the real network settings. In Satellite, eth0 appears to have been rewritten to nm-bond: the nm-bond record carries eth0's MAC address, there is no eth0 record at all, and the bond is listed with type "interface" rather than "bond".



Expected results:

Information corresponding to the real network settings.

Additional info:

It is not possible to delete the primary interface, so maybe we can start from there, since the primary interface was changed on the VM and cannot be updated in Satellite.

Re-registering the machine fixes the interfaces.
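
A minimal sketch of that workaround (the organization name and activation key are placeholders for whatever was used at initial registration):

# subscription-manager unregister
# subscription-manager clean
# subscription-manager register --org="Default_Organization" --activationkey="<key>"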

Comment 1 Mike McCune 2021-08-25 14:00:08 UTC
This is more of a bug than an RFE; reformatting accordingly.

Comment 4 Marek Hulan 2022-09-12 11:29:30 UTC
For some reason, the Nic parsed from the facts ended up being an instance of Nic::Base instead of Nic::Managed or Nic::Bond; that is the source of the issue. Given this is an existing state in the customer DB, we should provide a cleanup script/rake task that handles all interfaces like that (see the sketch below).
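
As a hedged sketch only (not an official script), such stray records could be counted from a Rails console on the Satellite server, assuming the affected rows carry the literal type value Nic::Base:

# echo 'puts Nic::Base.where(type: "Nic::Base").count' | foreman-rake console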

Comment 5 Marek Hulan 2022-12-09 09:47:44 UTC
The next steps we agreed on:
0) We need a reproducer, ideally in the form of host facts, so we can try to import them and get the Nic::Base record created.
1) We need to prevent the issue by adding the abstract class definition to the Nic::Base class; such instances should never be saved.
2) Since there are databases that already contain such records, we'll provide a new check for foreman-maintain (satellite-maintain) that will detect such objects and convert them to Nic::Managed (a hedged sketch follows).
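
A hedged sketch of the conversion from step 2, pending the official satellite-maintain check (same assumption about the literal type value as above; update_all skips validations, so back up the database first):

# echo 'Nic::Base.where(type: "Nic::Base").update_all(type: "Nic::Managed")' | foreman-rake console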

Comment 7 Brad Buckingham 2023-01-04 22:48:23 UTC
Upon review of our valid but aging backlog, the Satellite Team has concluded that this Bugzilla does not meet the criteria for resolution in the near term and is planning to close it in a month. This message may be a repeat of a previous update, and the bug is again being considered for closure. If you have any concerns about this, please contact your Red Hat account team. Thank you.

Comment 8 Marek Hulan 2023-01-11 15:14:06 UTC
We have reviewed this and the bug shouldn't be closed. As stated in comment 5, we at least need to provide something that will help fix the data in the database. The next step is to fix the root cause, which will be more complicated, since we don't have a reliable reproducer yet. I'm sorry for the confusion on our side.

Comment 11 Adam Ruzicka 2023-08-02 13:50:17 UTC
Per comment #10, this seems to have been fixed in 6.13, so I'll go ahead and close this. If the issue comes back, feel free to reopen.