Bug 1542958 - IPA displays IPMI credentials in DEBUG logs during cleaning
Summary: IPA displays IPMI credentials in DEBUG logs during cleaning
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-ironic
Version: 11.0 (Ocata)
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: z5
Target Release: 11.0 (Ocata)
Assignee: Dmitry Tantsur
QA Contact: mlammon
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-02-07 13:03 UTC by Dmitry Tantsur
Modified: 2018-05-18 17:14 UTC (History)
5 users

Fixed In Version: openstack-ironic-7.0.4-1.el7ost
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1542954
Environment:
Last Closed: 2018-05-18 17:14:29 UTC
Target Upstream Version:
Embargoed:




Links
System                  ID              Last Updated
Launchpad               1744836         2018-02-07 13:03:01 UTC
OpenStack gerrit        541704          2018-02-07 13:03:01 UTC
Red Hat Product Errata  RHBA-2018:1618  2018-05-18 17:14:45 UTC

Comment 2 mlammon 2018-05-09 21:01:43 UTC
Deployed the latest OSP 11 puddle: 2018-05-03.2


Edit /etc/ironic/ironic.conf:
ipa-debug=1             # enable debug mode
automated_clean = True  # enable automated cleaning
# then restart the conductor:
systemctl restart openstack-ironic-conductor

Next, I deleted the stack, after which I could see that cleaning took place.

Note: in the DEBUG output from ironic-conductor, the IPMI password is now masked:
" u'cfd4baab-1da9-4c0d-b78b-47d74772e841', u'ipmi_password': u'******'}"

2018-05-09 15:47:09.145 7014 DEBUG ironic.drivers.modules.agent_base_vendor [-] Cleaning command status for node 0ddfdc9a-26b8-4593-815f-61fd9f58cdbf on step {u'priority': 10, u'interface': u'deploy', u'reboot_requested': False, u'abortable': True, u'step': u'erase_devices_metadata'}: {u'command_error': None, u'command_status': u'SUCCEEDED', u'command_params': {u'node': {u'target_power_state': None, u'inspect_interface': None, u'raid_interface': None, u'target_provision_state': u'available', u'last_error': None, u'storage_interface': u'noop', u'updated_at': u'2018-05-09T19:47:08.998605', u'boot_interface': None, u'chassis_id': None, u'provision_state': u'cleaning', u'clean_step': {u'priority': 10, u'interface': u'deploy', u'reboot_requested': False, u'abortable': True, u'step': u'erase_devices_metadata'}, u'id': 2, u'vendor_interface': None, u'uuid': u'0ddfdc9a-26b8-4593-815f-61fd9f58cdbf', u'console_enabled': False, u'extra': {u'hardware_swift_object': u'extra_hardware-0ddfdc9a-26b8-4593-815f-61fd9f58cdbf'}, u'raid_config': {}, u'provision_updated_at': u'2018-05-09T19:47:08.000000', u'maintenance': False, u'target_raid_config': {}, u'network_interface': u'flat', u'conductor_affinity': 1, u'inspection_started_at': None, u'inspection_finished_at': None, u'power_state': u'power on', u'driver': u'pxe_ipmitool', u'power_interface': None, u'maintenance_reason': None, u'reservation': u'localhost.localdomain', u'management_interface': None, u'properties': {u'memory_mb': u'16384', u'cpu_arch': u'x86_64', u'local_gb': u'29', u'cpus': u'4', u'capabilities': u'profile:controller,boot_option:local'}, u'instance_uuid': None, u'name': u'controller-0', u'driver_info': {u'ipmi_port': u'6231', u'ipmi_username': u'admin', u'deploy_kernel': u'af1ec959-c7e2-44d6-b93b-1572af8da7d3', u'ipmi_address': u'172.16.0.1', u'deploy_ramdisk': u'cfd4baab-1da9-4c0d-b78b-47d74772e841', u'ipmi_password': u'******'}, u'resource_class': None, u'created_at': u'2018-05-09T14:11:29.000000', u'deploy_interface': None, u'console_interface': None, u'driver_internal_info': {u'clean_step_index': 0, u'agent_cached_clean_steps_refreshed': u'2018-05-09 19:47:08.907050', u'agent_cached_clean_steps': {u'deploy': [{u'priority': 99, u'interface': u'deploy', u'reboot_requested': False, u'abortable': True, u'step': u'erase_devices_metadata'}, {u'priority': 10, u'interface': u'deploy', u'reboot_requested': False, u'abortable': True, u'step': u'erase_devices'}]}, u'clean_steps': [{u'priority': 10, u'interface': u'deploy', u'reboot_requested': False, u'abortable': True, u'step': u'erase_devices_metadata'}], u'hardware_manager_version': {u'generic_hardware_manager': u'1.1'}, u'is_whole_disk_image': False, u'agent_continue_if_ata_erase_failed': False, u'agent_erase_devices_iterations': 1, u'agent_erase_devices_zeroize': True, u'root_uuid_or_disk_id': u'88ab852c-dba2-4aea-ab61-216517f0ab51', u'agent_url': u'http://192.168.24.7:9999'}, u'instance_info': {}}, u'step': {u'priority': 10, u'interface': u'deploy', u'reboot_requested': False, u'abortable': True, u'step': u'erase_devices_metadata'}, u'ports': [{u'local_link_connection': {}, u'uuid': u'0a457b05-45c8-4af1-b438-3320c5e6275c', u'extra': {}, u'pxe_enabled': True, u'created_at': u'2018-05-09T14:11:29.000000', u'portgroup_id': None, u'updated_at': u'2018-05-09T19:46:49.000000', u'node_id': 2, u'address': u'52:54:00:dc:55:7f', u'internal_info': {u'cleaning_vif_port_id': u'587311de-6eff-47a8-8dd7-933bfc4bff4a'}, u'id': 2}], u'clean_version': {u'generic_hardware_manager': u'1.1'}}, u'command_result': {u'clean_step': {u'priority': 10, u'interface': u'deploy', u'reboot_requested': Fals

Comment 5 errata-xmlrpc 2018-05-18 17:14:29 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:1618

