Bug 2057053 - subscription-manager should not try to gather cloud metadata when there is no strong sign that the VM is running in a public cloud
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: subscription-manager
Version: 8.6
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: medium
Target Milestone: rc
Target Release: 8.7
Assignee: Jiri Hnidek
QA Contact: Red Hat subscription-manager QE Team
URL:
Whiteboard:
Duplicates: 2058431
Depends On:
Blocks: 2083642 2175823
 
Reported: 2022-02-22 16:11 UTC by Jiri Hnidek
Modified: 2024-01-22 17:51 UTC
CC List: 11 users

Fixed In Version: subscription-manager-1.28.30-1.el8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 2083642 2175823
Environment:
Last Closed: 2022-11-08 10:48:04 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments: none


Links
System ID Private Priority Status Summary Last Updated
Github candlepin subscription-manager pull 3008 0 None Merged 2057053: [1.28] Facts: do no use heuristics detection of cloud 2023-03-02 08:52:16 UTC
Red Hat Issue Tracker RHELPLAN-113062 0 None None None 2022-02-22 16:22:18 UTC
Red Hat Knowledge Base (Solution) 6971359 0 None None None 2022-08-09 18:23:48 UTC
Red Hat Product Errata RHEA-2022:7719 0 None None None 2022-11-08 10:48:27 UTC

Description Jiri Hnidek 2022-02-22 16:11:57 UTC
Description of problem:
We use gathered system facts (mostly output from dmidecode) to detect whether the machine is running on a public cloud. Detection of the cloud provider from system facts uses two methods. The first method relies on strong signs (e.g. dmi.chassis.asset_tag == '7783-7084-3265-9085-8269-3286-77' is a strong sign that the machine is running on the Azure cloud provider). When detection using strong signs fails, we fall back to heuristic detection (we look for keywords such as Microsoft, Azure, AWS, Amazon, etc.). Heuristic detection yields a list of candidate cloud providers, and we then try to contact the metadata servers of the providers on this list. This approach is great for automatic detection, but it is too aggressive when we only want to gather facts.

When the system runs in a VM under the Hyper-V hypervisor, heuristic detection can produce a false positive for Azure, and the system then tries to reach the Azure metadata server. This happens when the system is registered, when rhsmcertd gathers system facts, and when 'subscription-manager facts' is executed.
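The two-phase flow above can be sketched as follows. This is a minimal illustration with hypothetical function and keyword names (the real logic lives in the cloud-what package used by subscription-manager), not the actual implementation:

```python
# Hypothetical sketch of strong-sign detection with a heuristic fallback.

AZURE_ASSET_TAG = "7783-7084-3265-9085-8269-3286-77"

def detect_by_strong_signs(facts):
    """Return a provider only for an unambiguous marker, else None."""
    if facts.get("dmi.chassis.asset_tag") == AZURE_ASSET_TAG:
        return "azure"
    return None

def detect_by_heuristics(facts):
    """Score each provider by keyword hits across all fact values."""
    keywords = {
        "aws": ("amazon", "aws"),
        "azure": ("microsoft", "azure"),
        "gcp": ("google",),
    }
    blob = " ".join(str(value).lower() for value in facts.values())
    return {
        provider: sum(word in blob for word in words) / len(words)
        for provider, words in keywords.items()
    }

def candidate_providers(facts):
    """Strong signs first; fall back to heuristics only when they fail."""
    provider = detect_by_strong_signs(facts)
    if provider is not None:
        return [provider]
    scores = detect_by_heuristics(facts)
    # Every provider with a nonzero score becomes a candidate whose
    # metadata server gets probed -- the step this bug asks to skip
    # when facts are merely being gathered.
    return [p for p, score in scores.items() if score > 0.0]
```

With the Hyper-V facts from the reproducer, "microsoft" matches one of the two Azure keywords, which is exactly the 0.6-style partial score seen in rhsm.log and why the metadata server gets probed.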


Version-Release number of selected component (if applicable):
[root@localhost ~]# subscription-manager version
server type: Red Hat Subscription Management
subscription management server: 3.2.22-1
subscription management rules: 5.41
subscription-manager: 1.28.21-3.el8

How reproducible:
100%

Steps to Reproduce:
1. Install RHEL 8 on a VM using Hyper-V, or mimic this environment using the following custom facts file (/etc/rhsm/facts/hyperv.facts):

{
  "dmi.chassis.manufacturer": "microsoft",
  "virt.host_type": "hyperv",
  "virt.is_guest": true
}

2. Set the debug environment variables: https://www.candlepinproject.org/docs/subscription-manager/debug_http_traffic.html

3. Run 'subscription-manager facts'

Actual results:

The system tries to reach the metadata server, as you can see here:

[root@localhost ~]# subscription-manager facts

Making request: GET http://169.254.169.254/metadata/instance?api-version=2021-02-01 {User-Agent: cloud-what/1.0, Accept-Encoding: gzip, deflate, br, Accept: */*, Connection: keep-alive, Metadata: true}


Making request: GET http://169.254.169.254/metadata/versions {User-Agent: cloud-what/1.0, Accept-Encoding: gzip, deflate, br, Accept: */*, Connection: keep-alive, Metadata: true}

cpu.core(s)_per_socket: 4
cpu.cpu(s): 8
cpu.cpu_socket(s): 1
cpu.thread(s)_per_core: 2
cpu.topology_source: kernel /sys cpu sibling lists
distribution.id: Workstation Edition
distribution.name: Fedora Linux
...


Expected results:
subscription-manager doesn't try to reach the metadata server, because there is no strong sign that the system is running on a cloud provider; Azure was detected only by heuristics.

Additional info:

Content of rhsm.log

2022-02-22 17:10:29,366 [DEBUG] subscription_manager.py:119690:MainThread @https.py:57 - Using standard libs to provide httplib and ssl
2022-02-22 17:10:29,712 [DEBUG] subscription_manager.py:119690:MainThread @plugins.py:564 - loaded plugin modules: []
2022-02-22 17:10:29,712 [DEBUG] subscription_manager.py:119690:MainThread @plugins.py:565 - loaded plugins: {}
2022-02-22 17:10:29,713 [DEBUG] subscription_manager.py:119690:MainThread @identity.py:135 - Loading consumer info from identity certificates.
2022-02-22 17:10:29,725 [DEBUG] subscription_manager.py:119690:MainThread @cli.py:302 - X-Correlation-ID: 280db92a0fe641b0a0bf28396300cfde
2022-02-22 17:10:29,725 [DEBUG] subscription_manager.py:119690:MainThread @cli.py:191 - Client Versions: {'subscription-manager': 'RPM_VERSION'}
2022-02-22 17:10:29,725 [DEBUG] subscription_manager.py:119690:MainThread @connection.py:170 - Environment variable NO_PROXY= will be used
2022-02-22 17:10:29,725 [DEBUG] subscription_manager.py:119690:MainThread @connection.py:268 - Connection built: host=svice port=8443 handler=/candlepin auth=identity_cert ca_dir=/etc/rhsm/ca/ insecure=True
2022-02-22 17:10:29,725 [DEBUG] subscription_manager.py:119690:MainThread @connection.py:170 - Environment variable NO_PROXY= will be used
2022-02-22 17:10:29,726 [DEBUG] subscription_manager.py:119690:MainThread @connection.py:268 - Connection built: host=svice port=8443 handler=/candlepin auth=none
2022-02-22 17:10:29,731 [DEBUG] subscription_manager.py:119690:MainThread @dmiinfo.py:71 - Using dmidecode dump file: /dev/mem
2022-02-22 17:10:29,738 [WARNING] subscription_manager.py:119690:MainThread @dmiinfo.py:95 - Error reading system DMI information with memory: <built-in function memory> returned a result with an exception set
2022-02-22 17:10:29,739 [WARNING] subscription_manager.py:119690:MainThread @dmiinfo.py:125 - Warnings while reading system DMI information:
# SMBIOS implementations newer than version 2.7 are not
# fully supported by this version of dmidecode.


2022-02-22 17:10:29,823 [DEBUG] subscription_manager.py:119690:MainThread @hwprobe.py:656 - Parsing lscpu JSON: b'{\n   "lscpu": [\n      {\n         "field": "Architecture:",\n         "data": "x86_64"\n      },{\n         "field": "CPU op-mode(s):",\n         "data": "32-bit, 64-bit"\n      },{\n         "field": "Address sizes:",\n         "data": "39 bits physical, 48 bits virtual"\n      },{\n         "field": "Byte Order:",\n         "data": "Little Endian"\n      },{\n         "field": "CPU(s):",\n         "data": "8"\n      },{\n         "field": "On-line CPU(s) list:",\n         "data": "0-7"\n      },{\n         "field": "Vendor ID:",\n         "data": "GenuineIntel"\n      },{\n         "field": "BIOS Vendor ID:",\n         "data": "Intel(R) Corporation"\n      },{\n         "field": "Model name:",\n         "data": "Intel(R) Core(TM) i7-8650U CPU @ 1.90GHz"\n      },{\n         "field": "BIOS Model name:",\n         "data": "Intel(R) Core(TM) i7-8650U CPU @ 1.90GHz"\n      },{\n         "field": "CPU family:",\n         "data": "6"\n      },{\n         "field": "Model:",\n         "data": "142"\n      },{\n         "field": "Thread(s) per core:",\n         "data": "2"\n      },{\n         "field": "Core(s) per socket:",\n         "data": "4"\n      },{\n         "field": "Socket(s):",\n         "data": "1"\n      },{\n         "field": "Stepping:",\n         "data": "10"\n      },{\n         "field": "CPU max MHz:",\n         "data": "4200.0000"\n      },{\n         "field": "CPU min MHz:",\n         "data": "400.0000"\n      },{\n         "field": "BogoMIPS:",\n         "data": "4199.88"\n      },{\n         "field": "Flags:",\n         "data": "fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 
xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d arch_capabilities"\n      },{\n         "field": "Virtualization:",\n         "data": "VT-x"\n      },{\n         "field": "L1d cache:",\n         "data": "128 KiB (4 instances)"\n      },{\n         "field": "L1i cache:",\n         "data": "128 KiB (4 instances)"\n      },{\n         "field": "L2 cache:",\n         "data": "1 MiB (4 instances)"\n      },{\n         "field": "L3 cache:",\n         "data": "8 MiB (1 instance)"\n      },{\n         "field": "NUMA node(s):",\n         "data": "1"\n      },{\n         "field": "NUMA node0 CPU(s):",\n         "data": "0-7"\n      },{\n         "field": "Vulnerability Itlb multihit:",\n         "data": "KVM: Mitigation: Split huge pages"\n      },{\n         "field": "Vulnerability L1tf:",\n         "data": "Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable"\n      },{\n         "field": "Vulnerability Mds:",\n         "data": "Mitigation; Clear CPU buffers; SMT vulnerable"\n      },{\n         "field": "Vulnerability Meltdown:",\n         "data": "Mitigation; PTI"\n      },{\n         "field": "Vulnerability Spec store bypass:",\n         "data": "Mitigation; Speculative Store Bypass disabled via prctl"\n      },{\n         "field": "Vulnerability Spectre v1:",\n         "data": "Mitigation; usercopy/swapgs barriers and __user pointer sanitization"\n      },{\n         "field": "Vulnerability Spectre v2:",\n         "data": "Mitigation; Full generic retpoline, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling"\n      },{\n         "field": "Vulnerability Srbds:",\n        
 "data": "Mitigation; Microcode"\n      },{\n         "field": "Vulnerability Tsx async abort:",\n         "data": "Mitigation; TSX disabled"\n      }\n   ]\n}\n'
2022-02-22 17:10:29,829 [DEBUG] subscription_manager.py:119690:MainThread @custom.py:85 - Loading custom facts from: /etc/rhsm/facts/hyperv.facts
2022-02-22 17:10:29,830 [DEBUG] subscription_manager.py:119690:MainThread @insights.py:63 - Unable to read insights machine_id file: /etc/insights-client/machine-id, error: [Errno 2] No such file or directory: '/etc/insights-client/machine-id'
2022-02-22 17:10:29,830 [DEBUG] subscription_manager.py:119690:MainThread @insights.py:63 - Unable to read insights machine_id file: /etc/redhat-access-insights/machine-id, error: [Errno 2] No such file or directory: '/etc/redhat-access-insights/machine-id'
2022-02-22 17:10:29,830 [DEBUG] subscription_manager.py:119690:MainThread @provider.py:81 - Trying to detect cloud provider
2022-02-22 17:10:29,830 [DEBUG] subscription_manager.py:119690:MainThread @provider.py:99 - No cloud provider detected using strong signs
2022-02-22 17:10:29,831 [DEBUG] subscription_manager.py:119690:MainThread @provider.py:114 - Cloud provider aws has probability: 0.0
2022-02-22 17:10:29,831 [DEBUG] subscription_manager.py:119690:MainThread @provider.py:114 - Cloud provider azure has probability: 0.6
2022-02-22 17:10:29,831 [DEBUG] subscription_manager.py:119690:MainThread @provider.py:114 - Cloud provider gcp has probability: 0.0
2022-02-22 17:10:29,831 [DEBUG] subscription_manager.py:119690:MainThread @provider.py:125 - Following cloud providers detected using heuristics: azure
2022-02-22 17:10:29,831 [DEBUG] subscription_manager.py:119690:MainThread @_base_provider.py:373 - Trying to get metadata from http://169.254.169.254/metadata/instance?api-version=2021-02-01
2022-02-22 17:10:29,833 [DEBUG] subscription_manager.py:119690:MainThread @_base_provider.py:387 - Unable to get azure metadata: HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /metadata/instance?api-version=2021-02-01 (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f4e55b2bf10>: Failed to establish a new connection: [Errno 111] Connection refused'))
2022-02-22 17:10:29,834 [DEBUG] subscription_manager.py:119690:MainThread @_base_provider.py:373 - Trying to get api_versions from http://169.254.169.254/metadata/versions
2022-02-22 17:10:29,835 [DEBUG] subscription_manager.py:119690:MainThread @_base_provider.py:387 - Unable to get azure api_versions: HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /metadata/versions (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f4e558a4310>: Failed to establish a new connection: [Errno 111] Connection refused'))
2022-02-22 17:10:29,836 [ERROR] subscription_manager.py:119690:MainThread @azure.py:143 - Unable to decode Azure API versions: the JSON object must be str, bytes or bytearray, not NoneType
2022-02-22 17:10:29,836 [DEBUG] subscription_manager.py:119690:MainThread @provider.py:161 - Unable to get metadata from any cloud provider detected using heuristics
2022-02-22 17:10:29,838 [DEBUG] subscription_manager.py:119690:MainThread @repolib.py:167 - The rhsm.auto_enable_yum_plugins is disabled. Skipping the enablement of yum plugins.

Comment 1 Pino Toscano 2022-03-01 15:10:24 UTC
*** Bug 2058431 has been marked as a duplicate of this bug. ***

Comment 2 Jiri Hnidek 2022-03-01 16:14:41 UTC
If you want to avoid this issue with the current RHEL 8.5, you can use the following workaround.

Create the file /etc/rhsm/facts/fix_ms_chassis.facts with the following content:

{
    "dmi.chassis.manufacturer": "Redacted"
}

This custom fact overrides the output of dmidecode, so there will be no false positive detection of Azure.
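The reason the workaround is effective can be sketched as below. This is a simplified illustration with hypothetical helper names (the real loader is subscription-manager's custom facts collector): every /etc/rhsm/facts/*.facts JSON file is merged on top of the facts gathered from dmidecode, so the overridden manufacturer value is what the heuristics actually see.

```python
import glob
import json
import os

def load_custom_facts(facts_dir="/etc/rhsm/facts"):
    """Read every *.facts JSON file in facts_dir, later files winning."""
    custom = {}
    for path in sorted(glob.glob(os.path.join(facts_dir, "*.facts"))):
        with open(path) as handle:
            custom.update(json.load(handle))
    return custom

def effective_facts(collected, custom):
    """Custom facts take precedence over collected (dmidecode) facts."""
    merged = dict(collected)
    merged.update(custom)
    return merged
```

With fix_ms_chassis.facts in place, the effective dmi.chassis.manufacturer is "Redacted", so the "microsoft" keyword never matches and the heuristic score for Azure stays at zero.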

Comment 5 Zdenek Petracek 2022-05-13 14:59:10 UTC
Reproducing the bug:

[root@kvm-02-guest24 ~]# subscription-manager version
server type: This system is currently not registered.
subscription management server: 3.2.22-1
subscription management rules: 5.41
subscription-manager: 1.28.29-3.el8

printing communication between SUBMAN and the server:
[root@kvm-02-guest24 ~]# export SUBMAN_DEBUG_PRINT_REQUEST=1
[root@kvm-02-guest24 ~]# export SUBMAN_DEBUG_PRINT_REQUEST_HEADER=1
[root@kvm-02-guest24 ~]# export SUBMAN_DEBUG_PRINT_REQUEST_BODY=1
[root@kvm-02-guest24 ~]# export SUBMAN_DEBUG_PRINT_RESPONSE=1

mimicking Hyper-V environment:
[root@kvm-02-guest24 ~]# nano /etc/rhsm/facts/hyperv.facts
{
  "dmi.chassis.manufacturer": "microsoft",
  "virt.host_type": "hyperv",
  "virt.is_guest": true
}

checking if system will try to reach metadata server:
[root@kvm-02-guest24 ~]# subscription-manager facts

Making request: GET http://169.254.169.254/metadata/instance?api-version=2021-02-01 {User-Agent: cloud-what/1.0, Accept-Encoding: gzip, deflate, Accept: */*, Connection: keep-alive, Metadata: true}


Making request: GET http://169.254.169.254/metadata/versions {User-Agent: cloud-what/1.0, Accept-Encoding: gzip, deflate, Accept: */*, Connection: keep-alive, Metadata: true}

cpu.core(s)_per_socket: 1
cpu.cpu(s): 1
cpu.cpu_socket(s): 1
cpu.thread(s)_per_core: 1
cpu.topology_source: kernel /sys cpu sibling lists
distribution.id: Ootpa
distribution.name: Red Hat Enterprise Linux
distribution.version: 8.7
distribution.version.modifier: Unknown
...

--> system tried to reach metadata server

Pre-verification:
Pre-verifying on version:

[root@kvm-02-guest24 ~]# subscription-manager version
server type: This system is currently not registered.
subscription management server: 3.2.22-1
subscription management rules: 5.41
subscription-manager: 1.28.29+30.gdab220594-1.git.0.e2b8079

printing communication between SUBMAN and the server:
[root@kvm-02-guest24 ~]# export SUBMAN_DEBUG_PRINT_REQUEST=1
[root@kvm-02-guest24 ~]# export SUBMAN_DEBUG_PRINT_REQUEST_HEADER=1
[root@kvm-02-guest24 ~]# export SUBMAN_DEBUG_PRINT_REQUEST_BODY=1
[root@kvm-02-guest24 ~]# export SUBMAN_DEBUG_PRINT_RESPONSE=1

checking if system will try to reach metadata server:
[root@kvm-02-guest24 ~]# subscription-manager facts
cpu.core(s)_per_socket: 1
cpu.cpu(s): 1
cpu.cpu_socket(s): 1
cpu.thread(s)_per_core: 1
cpu.topology_source: kernel /sys cpu sibling lists
distribution.id: Ootpa
distribution.name: Red Hat Enterprise Linux
distribution.version: 8.7
distribution.version.modifier: Unknown
dmi.bios.address: 0xe8000
dmi.bios.bios_revision: 1.0
dmi.bios.release_date: 01/01/2011
dmi.bios.rom_size: 64 KB
dmi.bios.runtime_size: 96 KB
dmi.bios.vendor: Seabios
dmi.bios.version: 0.5.1
...

--> system didn't try to reach metadata server = pre-verification PASSED

Comment 9 Zdenek Petracek 2022-05-26 12:37:51 UTC
Final verification:

Version:
[root@kvm-02-guest08 ~]# subscription-manager version
server type: This system is currently not registered.
subscription management server: 3.2.22-1
subscription management rules: 5.41
subscription-manager: 1.28.30-1.el8

Printing communication between SM and server:
[root@kvm-02-guest08 ~]# export SUBMAN_DEBUG_PRINT_REQUEST=1
[root@kvm-02-guest08 ~]# export SUBMAN_DEBUG_PRINT_REQUEST_HEADER=1
[root@kvm-02-guest08 ~]# export SUBMAN_DEBUG_PRINT_REQUEST_BODY=1
[root@kvm-02-guest08 ~]# export SUBMAN_DEBUG_PRINT_RESPONSE=1

Configuring host to mimic hyperv environment:
[root@kvm-02-guest08 ~]# cat /etc/rhsm/facts/hyperv.facts
{
  "dmi.chassis.manufacturer": "microsoft",
  "virt.host_type": "hyperv",
  "virt.is_guest": true
}

Checking if system will try to reach metadata server:
[root@kvm-02-guest08 ~]# subscription-manager facts
cpu.core(s)_per_socket: 1
cpu.cpu(s): 1
cpu.cpu_socket(s): 1
cpu.thread(s)_per_core: 1
cpu.topology_source: kernel /sys cpu sibling lists
distribution.id: Ootpa
distribution.name: Red Hat Enterprise Linux
distribution.version: 8.7
distribution.version.modifier: Unknown
dmi.bios.address: 0xe8000
dmi.bios.bios_revision: 1.0
dmi.bios.release_date: 01/01/2011
dmi.bios.rom_size: 64 KB
dmi.bios.runtime_size: 96 KB
dmi.bios.vendor: Seabios
dmi.bios.version: 0.5.1
dmi.chassis.asset_tag: Unknown
...

--> system didn't try to reach metadata server = final verification PASSED

Comment 12 errata-xmlrpc 2022-11-08 10:48:04 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (subscription-manager bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2022:7719

