Description of problem:
If a hypervisor is deleted on the Satellite webUI and virt-who then sends that hypervisor to Satellite again, it does not show up on the webUI.

Version-Release number of selected component (if applicable):
virt-who-0.20.2-1.el7sat.noarch
subscription-manager-1.17.15-1.el7.x86_64
python-rhsm-1.17.10-1.el7_3.x86_64

How reproducible:
50%

Steps to Reproduce:
1. Register the system to Satellite, make virt-who run in esx mode, then restart virt-who. virt-who sends two esx hypervisors ("bootp-73-131-104.rhts.eng.pek2.redhat.com" and "bootp-73-131-168.rhts.eng.pek2.redhat.com") to Satellite.

[root@esx-rhel7 virt-who.d]# cat /etc/sysconfig/virt-who | grep -v ^# | grep -v ^$
VIRTWHO_DEBUG=1
VIRTWHO_ESX=1
VIRTWHO_ESX_OWNER=Default_Organization
VIRTWHO_ESX_ENV=Default_Organization
VIRTWHO_ESX_SERVER=10.73.131.114
VIRTWHO_ESX_USERNAME=administrator
VIRTWHO_ESX_PASSWORD=Welcome1!
[root@esx-rhel7 virt-who.d]# service virt-who restart

2. In the Satellite webUI, delete one esx hypervisor, hypervisor1 (bootp-73-131-104.rhts.eng.pek2.redhat.com).

3. Disable esx mode in /etc/sysconfig/virt-who and make virt-who run in fake mode with the same data as the previous esx mode, so that virt-who sends the same mapping info (the two esx hypervisors from before) to Satellite.

[root@esx-rhel7 virt-who.d]# virt-who --esx --esx-owner=Default_Organization --esx-env=Default_Organization --esx-server=10.73.131.114 --esx-username=administrator --esx-password=Welcome1!
-p -d > /tmp/fake_file
[root@esx-rhel7 virt-who.d]# cat /etc/virt-who.d/fake.conf
[fake]
type=fake
file=/tmp/fake_file
is_hypervisor=True
owner=Default_Organization
env=Default_Organization
[root@esx-rhel7 virt-who.d]# service virt-who restart | tail -f /var/log/rhsm/rhsm.log
2017-07-25 05:52:51,156 [virtwho.init WARNING] MainProcess(1263):MainThread @config.py:checkOptions:473 - Option `env` is not used in non-hypervisor fake mode
2017-07-25 05:52:51,157 [virtwho.init WARNING] MainProcess(1263):MainThread @config.py:checkOptions:475 - Option `owner` is not used in non-hypervisor fake mode
2017-07-25 05:52:51,157 [virtwho.init DEBUG] MainProcess(1263):MainThread @executor.py:__init__:52 - Using config named 'fake'
2017-07-25 05:52:51,157 [virtwho.init INFO] MainProcess(1263):MainThread @main.py:main:183 - Using configuration "fake" ("fake" mode)
2017-07-25 05:52:51,157 [virtwho.init INFO] MainProcess(1263):MainThread @main.py:main:185 - Using reporter_id='esx-rhel7.3-sattool.redhat.com-e751cca94ca1443687e77561e29c9871'
2017-07-25 05:52:51,160 [virtwho.main DEBUG] MainProcess(1263):MainThread @executor.py:run:186 - Starting infinite loop with 3600 seconds interval
2017-07-25 05:52:51,172 [virtwho.fake DEBUG] MainProcess(1263):Thread-2 @virt.py:run:375 - Thread 'fake' started
2017-07-25 05:52:51,174 [virtwho.fake INFO] MainProcess(1263):Thread-2 @virt.py:_send_data:912 - Report for config "fake" gathered, placing in datastore
2017-07-25 05:52:51,175 [virtwho.destination_3096823934142387789 DEBUG] MainProcess(1263):Thread-3 @virt.py:run:375 - Thread 'destination_3096823934142387789' started
2017-07-25 05:52:51,176 [virtwho.destination_3096823934142387789 INFO] MainProcess(1263):Thread-3 @virt.py:_send_data:590 - Hosts-to-guests mapping for config "fake": 2 hypervisors and 3 guests found
2017-07-25 05:52:51,178 [rhsm.connection INFO] MainProcess(1263):Thread-3 @connection.py:__init__:828 - Connection built: host=satellite63-ohsnap-rhel7.redhat.com port=443 handler=/rhsm
auth=identity_cert ca_dir=/etc/rhsm/ca/ verify=0
2017-07-25 05:52:51,177 [virtwho.destination_3096823934142387789 DEBUG] MainProcess(1263):Thread-3 @subscriptionmanager.py:_connect:132 - Authenticating with certificate: /etc/pki/consumer/cert.pem
2017-07-25 05:52:51,180 [rhsm.connection DEBUG] MainProcess(1263):Thread-3 @connection.py:_request:572 - Making request: GET /rhsm/status/
2017-07-25 05:52:51,250 [rhsm.connection DEBUG] MainProcess(1263):Thread-3 @connection.py:_request:601 - Response: status=200
2017-07-25 05:52:51,252 [rhsm.connection DEBUG] MainProcess(1263):Thread-3 @connection.py:_request:572 - Making request: GET /rhsm/status
2017-07-25 05:52:51,251 [virtwho.destination_3096823934142387789 DEBUG] MainProcess(1263):Thread-3 @subscriptionmanager.py:_is_rhsm_server_async:224 - Checking if server has capability 'hypervisor_async'
2017-07-25 05:52:51,320 [rhsm.connection DEBUG] MainProcess(1263):Thread-3 @connection.py:_request:601 - Response: status=200
2017-07-25 05:52:51,322 [rhsm.connection DEBUG] MainProcess(1263):Thread-3 @connection.py:_load_manager_capabilities:873 - Server has the following capabilities: ['cores', 'ram', 'instance_multiplier', 'derived_product', 'cert_v3', 'guest_limit', 'vcpu', 'hypervisors_async', 'storage_band', 'remove_by_pool_id', 'batch_bind', 'org_level_content_access']
2017-07-25 05:52:51,323 [virtwho.destination_3096823934142387789 DEBUG] MainProcess(1263):Thread-3 @subscriptionmanager.py:_is_rhsm_server_async:228 - Server has capability 'hypervisors_async'
2017-07-25 05:52:51,324 [rhsm.connection DEBUG] MainProcess(1263):Thread-3 @connection.py:_request:572 - Making request: POST /rhsm/hypervisors/Default_Organization?reporter_id=esx-rhel7.3-sattool.redhat.com-e751cca94ca1443687e77561e29c9871&cloaked=False&env=Default_Organization
2017-07-25 05:52:51,324 [virtwho.destination_3096823934142387789 DEBUG] MainProcess(1263):Thread-3 @subscriptionmanager.py:hypervisorCheckIn:182 - Host-to-guest mapping: {
    "hypervisors": [
        {
            "hypervisorId": {
                "hypervisorId": "09fa4d56-ca62-583c-b5db-7db9fab938bc"
            },
            "name": "bootp-73-131-104.rhts.eng.pek2.redhat.com",  ===> Deleted before
            "guestIds": [
                {
                    "guestId": "564d447f-0d3f-f6e3-5df1-d38961ecf6e3",
                    "state": 1,
                    "attributes": {
                        "active": 1,
                        "virtWhoType": "esx"
                    }
                },
                {
                    "guestId": "564d580c-8c98-1480-f760-d901499b1f63",
                    "state": 1,
                    "attributes": {
                        "active": 1,
                        "virtWhoType": "esx"
                    }
                },
                {
                    "guestId": "564d701d-1dcc-d20e-063e-0bcf2281a311",
                    "state": 5,
                    "attributes": {
                        "active": 0,
                        "virtWhoType": "esx"
                    }
                }
            ],
            "facts": {
                "hypervisor.type": "VMware ESXi",
                "hypervisor.version": "6.5.0",
                "cpu.cpu_socket(s)": "2"
            }
        },
        {
            "hypervisorId": {
                "hypervisorId": "abce4d56-514b-42da-d052-e26dc66003df"
            },
            "name": "bootp-73-131-168.rhts.eng.pek2.redhat.com",
            "guestIds": [],
            "facts": {
                "hypervisor.type": "VMware ESXi",
                "hypervisor.version": "6.5.0",
                "cpu.cpu_socket(s)": "2"
            }
        }
    ]
}
2017-07-25 05:52:51,652 [rhsm.connection DEBUG] MainProcess(1263):Thread-3 @connection.py:_request:601 - Response: status=200
2017-07-25 05:52:51,654 [rhsm.connection INFO] MainProcess(1263):Thread-3 @connection.py:__init__:828 - Connection built: host=satellite63-ohsnap-rhel7.redhat.com port=443 handler=/rhsm auth=identity_cert ca_dir=/etc/rhsm/ca/ verify=0
2017-07-25 05:52:51,654 [rhsm.connection DEBUG] MainProcess(1263):Thread-3 @connection.py:_request:572 - Making request: GET /rhsm/status/
2017-07-25 05:52:51,653 [virtwho.destination_3096823934142387789 DEBUG] MainProcess(1263):Thread-3 @subscriptionmanager.py:_connect:132 - Authenticating with certificate: /etc/pki/consumer/cert.pem
2017-07-25 05:52:51,734 [rhsm.connection DEBUG] MainProcess(1263):Thread-3 @connection.py:_request:601 - Response: status=200
2017-07-25 05:52:51,735 [virtwho.destination_3096823934142387789 DEBUG] MainProcess(1263):Thread-3 @subscriptionmanager.py:check_report_state:263 - Checking status of job hypervisor_update_5121702f-975d-49e3-bd33-23c7ca08d98f
2017-07-25 05:52:51,736 [rhsm.connection DEBUG]
MainProcess(1263):Thread-3 @connection.py:_request:572 - Making request: GET /rhsm/jobs/hypervisor_update_5121702f-975d-49e3-bd33-23c7ca08d98f?result_data=True
2017-07-25 05:52:51,813 [rhsm.connection DEBUG] MainProcess(1263):Thread-3 @connection.py:_request:601 - Response: status=200
2017-07-25 05:52:51,815 [virtwho.destination_3096823934142387789 DEBUG] MainProcess(1263):Thread-3 @subscriptionmanager.py:check_report_state:286 - Number of mappings unchanged: 1
2017-07-25 05:52:51,815 [virtwho.destination_3096823934142387789 INFO] MainProcess(1263):Thread-3 @subscriptionmanager.py:check_report_state:287 - Mapping for config "destination_3096823934142387789" updated

4. In the Satellite webUI, go to Hosts --> Content Hosts and check the esx hypervisor1 (bootp-73-131-104.rhts.eng.pek2.redhat.com) that was deleted in step 2.

Actual results:
Although virt-who has sent the esx hypervisors to the server, the deleted hypervisor still does not show on the Satellite webUI.

Expected results:
The deleted esx hypervisor should show on the webUI again, since virt-who has sent it to the server.

Additional info:
If it can't be reproduced after step 4, delete the esx hypervisor on the Satellite webUI again and restart virt-who; it should then reproduce. If it still can't be reproduced, run the same test once more; it should reproduce.
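A quick way to sanity-check the mapping payload above against the "2 hypervisors and 3 guests found" log line is a short script. This is only an illustrative sketch: the payload is a trimmed copy of the JSON from the log, and summarize() is a made-up helper, not part of virt-who.

```python
# Trimmed copy of the host-to-guest mapping from the rhsm.log excerpt above
# (facts and guest attributes omitted for brevity).
MAPPING = {
    "hypervisors": [
        {
            "hypervisorId": {"hypervisorId": "09fa4d56-ca62-583c-b5db-7db9fab938bc"},
            "name": "bootp-73-131-104.rhts.eng.pek2.redhat.com",  # deleted in step 2
            "guestIds": [
                {"guestId": "564d447f-0d3f-f6e3-5df1-d38961ecf6e3", "state": 1},
                {"guestId": "564d580c-8c98-1480-f760-d901499b1f63", "state": 1},
                {"guestId": "564d701d-1dcc-d20e-063e-0bcf2281a311", "state": 5},
            ],
        },
        {
            "hypervisorId": {"hypervisorId": "abce4d56-514b-42da-d052-e26dc66003df"},
            "name": "bootp-73-131-168.rhts.eng.pek2.redhat.com",
            "guestIds": [],
        },
    ]
}

def summarize(mapping):
    """Return (hypervisor count, total guest count) for a mapping payload."""
    hypervisors = mapping["hypervisors"]
    guests = sum(len(h["guestIds"]) for h in hypervisors)
    return len(hypervisors), guests

print(summarize(MAPPING))  # -> (2, 3), matching the log line
```

The same kind of check, pointed at the contents of /tmp/fake_file, should confirm that the deleted hypervisor is still present in what virt-who reports in fake mode, assuming the dump has the same shape as the log payload.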
Hi shihui, I tried it many times but can't reproduce it any more, so I closed this bug. If you can reproduce it again, please reopen it.
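For anyone retrying this: the job-status request that virt-who makes in step 3 (GET /rhsm/jobs/<job id>?result_data=True) can be reissued by hand to confirm that the server actually finished the hypervisor update. A minimal sketch of building that URL, reusing the host and job id from the log above (the helper name is made up):

```python
# Host and job id copied from the rhsm.log excerpt in this report;
# substitute your own values.
SATELLITE = "https://satellite63-ohsnap-rhel7.redhat.com"
JOB_ID = "hypervisor_update_5121702f-975d-49e3-bd33-23c7ca08d98f"

def job_status_url(base, job_id):
    """Build the job-status URL that virt-who polls (see the log above)."""
    return "{0}/rhsm/jobs/{1}?result_data=True".format(base, job_id)

print(job_status_url(SATELLITE, JOB_ID))
```

The resulting URL can then be fetched with curl using the consumer certificate, the same way virt-who authenticates, e.g. `curl -k --cert /etc/pki/consumer/cert.pem --key /etc/pki/consumer/key.pem "<url>"` (the key path is the usual subscription-manager location; adjust if your setup differs).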