Bug 1542959 - IPA displays IPMI credentials in DEBUG logs during cleaning
Summary: IPA displays IPMI credentials in DEBUG logs during cleaning
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-ironic
Version: 12.0 (Pike)
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: z2
Target Release: 12.0 (Pike)
Assignee: Dmitry Tantsur
QA Contact: mlammon
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-02-07 13:05 UTC by Dmitry Tantsur
Modified: 2018-03-28 17:34 UTC
CC List: 7 users

Fixed In Version: openstack-ironic-9.1.3-1.el7ost
Doc Type: Bug Fix
Doc Text:
The ironic-python-agent (IPA) would display IPMI credentials in DEBUG-level logs during node cleaning. This fix masks the IPMI credentials.
Clone Of: 1542954
Environment:
Last Closed: 2018-03-28 17:34:12 UTC
Target Upstream Version:
Embargoed:




Links
Launchpad bug 1744836 (last updated 2018-02-07 13:05:57 UTC)
OpenStack gerrit change 541703, MERGED: "Do not pass credentials to the ramdisk on cleaning" (last updated 2020-12-08 16:11:35 UTC)
Red Hat Product Errata RHBA-2018:0612 (last updated 2018-03-28 17:34:30 UTC)

Comment 2 mlammon 2018-03-16 20:55:56 UTC
Deployed osp10 latest (2018-03-10.1).

Enabled automated cleaning in /etc/ironic/ironic.conf.
Added the IPA debug option ipa-debug=1 to /etc/ironic/ironic.conf (see the config sketch below).
Restarted the conductor: systemctl restart openstack-ironic-conductor
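
For reference, the ironic.conf settings used are roughly the following. A minimal sketch, assuming defaults elsewhere; everything in pxe_append_params other than ipa-debug=1 is deployment-specific:

[conductor]
# Run the automated clean steps (e.g. erase_devices_metadata) whenever
# a node is unprovisioned.
automated_clean = true

[pxe]
# Kernel command line handed to the deploy ramdisk; ipa-debug=1 turns
# on DEBUG logging inside IPA.
pxe_append_params = nofb nomodeset vga=normal ipa-debug=1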

Deleted the compute node with nova delete.

Checked ironic-conductor.log:

The output below shows u'ipmi_password': u'******', which I believe verifies this bug. Double-checking with the dev team.

DEBUG ironic.drivers.modules.agent_base_vendor [-] Cleaning command status for node 9d038c62-ded2-40f6-96fd-e3a64e842720 on step {u'priority': 10, u'interface': u'deploy', u'reboot_requested': False, u'abortable': True, u'step': u'erase_devices_metadata'}: {u'command_error': None, u'command_status': u'SUCCEEDED', u'command_params': {u'node': {u'target_power_state': None, u'inspect_interface': None, u'raid_interface': None, u'target_provision_state': u'available', u'last_error': None, u'storage_interface': u'noop', u'updated_at': u'2018-03-16T20:44:03.555233', u'boot_interface': None, u'chassis_id': None, u'provision_state': u'cleaning', u'clean_step': {u'priority': 10, u'interface': u'deploy', u'reboot_requested': False, u'abortable': True, u'step': u'erase_devices_metadata'}, u'id': 1, u'vendor_interface': None, u'uuid': u'9d038c62-ded2-40f6-96fd-e3a64e842720', u'console_enabled': False, u'extra': {u'hardware_swift_object': u'extra_hardware-9d038c62-ded2-40f6-96fd-e3a64e842720'}, u'raid_config': {}, u'provision_updated_at': u'2018-03-16T20:44:03.000000', u'maintenance': False, u'target_raid_config': {}, u'network_interface': u'flat', u'conductor_affinity': 1, u'inspection_started_at': None, u'inspection_finished_at': None, u'power_state': u'power on', u'driver': u'pxe_ipmitool', u'power_interface': None, u'maintenance_reason': None, u'reservation': u'localhost.localdomain', u'management_interface': None, u'properties': {u'memory_mb': u'6144', u'cpu_arch': u'x86_64', u'local_gb': u'19', u'cpus': u'2', u'capabilities': u'profile:compute,boot_option:local'}, u'instance_uuid': None, u'name': u'compute-0', u'driver_info': {u'ipmi_port': u'6230', u'ipmi_username': u'admin', u'deploy_kernel': u'307a7882-9a98-4bf6-8610-7e15b662239b', u'ipmi_address': u'172.16.0.1', u'deploy_ramdisk': u'9ce4ad3f-ab48-4d66-aaad-10fe9317cec0', u'ipmi_password': u'******'}, u'resource_class': u'baremetal', u'created_at': u'2018-03-13T21:27:47.000000', u'deploy_interface': None, u'console_interface': None, u'driver_internal_info': {u'clean_step_index': 0, u'agent_cached_clean_steps_refreshed': u'2018-03-16 20:44:03.456095', u'agent_cached_clean_steps': {u'deploy': [{u'priority': 99, u'interface': u'deploy', u'reboot_requested': False, u'abortable': True, u'step': u'erase_devices_metadata'}, {u'priority': 10, u'interface': u'deploy', u'reboot_requested': False, u'abortable': True, u'step': u'erase_devices'}]}, u'clean_steps': [{u'priority': 10, u'interface': u'deploy', u'reboot_requested': False, u'abortable': True, u'step': u'erase_devices_metadata'}], u'hardware_manager_version': {u'generic_hardware_manager': u'1.1'}, u'is_whole_disk_image': False, u'agent_continue_if_ata_erase_failed': False, u'agent_erase_devices_iterations': 1, u'agent_erase_devices_zeroize': True, u'root_uuid_or_disk_id': u'92288e77-99c5-41a0-86ab-04c9f4b1a97b', u'agent_url': u'http://192.168.24.7:9999'}, u'instance_info': {}}, u'step': {u'priority': 10, u'interface': u'deploy', u'reboot_requested': False, u'abortable': True, u'step': u'erase_devices_metadata'}, u'ports': [{u'local_link_connection': {}, u'uuid': u'c5049cf2-d1c2-4216-a51d-4aaac7cb660c', u'extra': {}, u'pxe_enabled': True, u'created_at': u'2018-03-13T21:27:48.000000', u'portgroup_id': None, u'updated_at': u'2018-03-16T20:43:41.000000', u'node_id': 1, u'physical_network': None, u'address': u'52:54:00:5f:74:9a', u'internal_info': {u'cleaning_vif_port_id': u'162cbc77-78fa-4af6-b48d-eced1b6dad43'}, u'id': 1}], u'clean_version': {u'generic_hardware_manager': u'1.1'}}, u'command_result': 
{u'clean_step': {u'priority': 10, u'interface': u'deploy', u'reboot_requested': False, u'abortable': True, u'step': u'erase_devices_metadata'}, u'clean_result': None}, u'id': u'62cbe191-7d67-4eab-bcc1-40c60c2e13d5', u'command_name': u'execute_clean_step'} continue_cleaning /usr/lib/python2.7/site-packages/ironic/drivers/modules/agent_base_vendor.py:439
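
The masking seen in driver_info above is the standard oslo.utils treatment of password-like keys. A minimal sketch of that behavior, assuming oslo.utils is installed (whether the conductor calls exactly this helper at this log site is an assumption; the actual change is gerrit 541703):

from oslo_utils import strutils

# driver_info as it appears in the node record; the password value here
# is made up for illustration.
driver_info = {
    u'ipmi_address': u'172.16.0.1',
    u'ipmi_username': u'admin',
    u'ipmi_password': u'not-a-real-password',
}

# mask_dict_password replaces the value of any key that looks
# password-like ('ipmi_password' matches) with the secret string.
print(strutils.mask_dict_password(driver_info, secret=u'******'))
# {u'ipmi_address': u'172.16.0.1', u'ipmi_username': u'admin',
#  u'ipmi_password': u'******'}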

Comment 3 Bob Fournier 2018-03-18 23:05:19 UTC
Mike - to verify this, you must use the IPA log files at /var/log/ironic/deploy/XXX, where XXX matches the date/time of the deploy or cleaning, not ironic-conductor.log. That is where the changes were made. A quick check is sketched below.
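
Something like this works; a sketch, assuming the default deploy log settings (the tarball name is illustrative - real files are named after the node and timestamp):

# List the ramdisk log tarballs collected by the conductor.
ls /var/log/ironic/deploy/

# Unpack one and grep for credentials; after the fix no plaintext
# ipmi_password should appear in the IPA logs.
mkdir -p /tmp/ipa-logs
tar -xzf /var/log/ironic/deploy/<node>_<timestamp>.tar.gz -C /tmp/ipa-logs
grep -ri ipmi_password /tmp/ipa-logs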

Comment 4 Bob Fournier 2018-03-19 13:17:24 UTC
According to Dmitry, ipmi_password: '******' is exactly what is expected after the fix. The fact that this appears in the conductor logs indirectly proves that the fix is working, because exactly this JSON is sent to IPA.

Can mark VERIFIED.
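
For context, the upstream change ("Do not pass credentials to the ramdisk on cleaning", gerrit 541703) goes further than log masking: credential fields are kept out of what the conductor hands to the ramdisk. A rough sketch of the idea only, not the actual patch; the function and the list of sensitive suffixes here are hypothetical:

# Illustrative only - not ironic's real code.
SENSITIVE_SUFFIXES = ('password', 'auth_token')

def scrub_driver_info(driver_info):
    """Mask credential-like fields before a node record is sent to IPA."""
    return {
        key: (u'******' if key.lower().endswith(SENSITIVE_SUFFIXES) else value)
        for key, value in driver_info.items()
    }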

Comment 7 errata-xmlrpc 2018-03-28 17:34:12 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:0612

