Description of problem: During bulk introspection, the client writes an error to stderr saying a node is locked. However, running bulk status afterwards shows everything is fine, and an overcloud deployment can be attempted too. It is not clear why the message is emitted.

Version-Release number of selected component (if applicable):

How reproducible:
Run bulk introspection.

[stack@osp8-director ~]$ openstack baremetal introspection bulk start
Setting nodes for introspection to manageable...
Starting introspection of node: ef0b835a-7482-4fcb-997f-1483b841be9e
Starting introspection of node: 2892bca7-9a10-47ed-a3a4-4caef475759f
Starting introspection of node: d1fe80cd-1cae-4d69-8b4d-166a4d8ace60
Starting introspection of node: 85ccf5e9-421f-411f-9c92-5abbbcabf3a4
Starting introspection of node: 1cb0e9af-f317-49e0-95f9-b1c1f95a70ce
Starting introspection of node: 5027cce0-ae65-4087-a436-f568e6ab484d
Starting introspection of node: 676805e8-7e37-44c6-96ff-85124531583d
Starting introspection of node: fe0c1a34-31d5-45b8-ab1f-fbfc58b16bce
Starting introspection of node: 2195986d-c7b1-4f5a-b2df-f1518b237df7
Starting introspection of node: 49a49727-b1b2-4ba1-987d-705f79a53132
Waiting for introspection to finish...
Introspection for UUID 2892bca7-9a10-47ed-a3a4-4caef475759f finished successfully.
Introspection for UUID ef0b835a-7482-4fcb-997f-1483b841be9e finished successfully.
Introspection for UUID 1cb0e9af-f317-49e0-95f9-b1c1f95a70ce finished successfully.
Introspection for UUID 85ccf5e9-421f-411f-9c92-5abbbcabf3a4 finished successfully.
Introspection for UUID 5027cce0-ae65-4087-a436-f568e6ab484d finished successfully.
Introspection for UUID d1fe80cd-1cae-4d69-8b4d-166a4d8ace60 finished successfully.
Introspection for UUID 676805e8-7e37-44c6-96ff-85124531583d finished successfully.
Introspection for UUID fe0c1a34-31d5-45b8-ab1f-fbfc58b16bce finished successfully.
Introspection for UUID 2195986d-c7b1-4f5a-b2df-f1518b237df7 finished successfully.
Introspection for UUID 49a49727-b1b2-4ba1-987d-705f79a53132 finished successfully.
Setting manageable nodes to available...
Node ef0b835a-7482-4fcb-997f-1483b841be9e has been set to available.
Node 2892bca7-9a10-47ed-a3a4-4caef475759f has been set to available.
Node d1fe80cd-1cae-4d69-8b4d-166a4d8ace60 has been set to available.
Node 85ccf5e9-421f-411f-9c92-5abbbcabf3a4 has been set to available.
Node 1cb0e9af-f317-49e0-95f9-b1c1f95a70ce has been set to available.
Node 5027cce0-ae65-4087-a436-f568e6ab484d has been set to available.
Node 676805e8-7e37-44c6-96ff-85124531583d has been set to available.
Node fe0c1a34-31d5-45b8-ab1f-fbfc58b16bce has been set to available.
Request returned failure status.
Error contacting Ironic server: Node 2195986d-c7b1-4f5a-b2df-f1518b237df7 is locked by host osp8-director.cisco.com, please retry after the current operation is completed.
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 142, in inner
    return func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/ironic/conductor/manager.py", line 1150, in do_provisioning_action
    % action) as task:
  File "/usr/lib/python2.7/site-packages/ironic/conductor/task_manager.py", line 152, in acquire
    driver_name=driver_name, purpose=purpose)
  File "/usr/lib/python2.7/site-packages/ironic/conductor/task_manager.py", line 221, in __init__
    self.release_resources()
  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 204, in __exit__
    six.reraise(self.type_, self.value, self.tb)
  File "/usr/lib/python2.7/site-packages/ironic/conductor/task_manager.py", line 203, in __init__
    self._lock()
  File "/usr/lib/python2.7/site-packages/ironic/conductor/task_manager.py", line 242, in _lock
    reserve_node()
  File "/usr/lib/python2.7/site-packages/retrying.py", line 68, in wrapped_f
    return Retrying(*dargs, **dkw).call(f, *args, **kw)
  File "/usr/lib/python2.7/site-packages/retrying.py", line 229, in call
    raise attempt.get()
  File "/usr/lib/python2.7/site-packages/retrying.py", line 261, in get
    six.reraise(self.value[0], self.value[1], self.value[2])
  File "/usr/lib/python2.7/site-packages/retrying.py", line 217, in call
    attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
  File "/usr/lib/python2.7/site-packages/ironic/conductor/task_manager.py", line 235, in reserve_node
    self.node_id)
  File "/usr/lib/python2.7/site-packages/ironic/objects/node.py", line 228, in reserve
    db_node = cls.dbapi.reserve_node(tag, node_id)
  File "/usr/lib/python2.7/site-packages/ironic/db/sqlalchemy/api.py", line 226, in reserve_node
    host=node['reservation'])
NodeLocked: Node 2195986d-c7b1-4f5a-b2df-f1518b237df7 is locked by host osp8-director.cisco.com, please retry after the current operation is completed. (HTTP 409). Attempt 1 of 6
Node 2195986d-c7b1-4f5a-b2df-f1518b237df7 has been set to available.
Node 49a49727-b1b2-4ba1-987d-705f79a53132 has been set to available.

Running bulk status afterwards reports that introspection finished without error.

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:
Should not throw such errors.

Additional info:
Hi! This error is not related to ironic-inspector; tripleoclient should probably increase its retry count. But just to be sure, please provide the ironic-conductor logs from /var/log/ironic/ironic-conductor.log or from journalctl.
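For illustration only: the traceback shows the conductor-side lock being contested, and the suggestion above is that the client should keep retrying on the HTTP 409 (NodeLocked) response. A minimal sketch of that idea is below. It does not reflect actual tripleoclient code; the client class, the NodeLockedError exception, and the attempts/delay values are all assumptions made up for this example.

```python
import time


class NodeLockedError(Exception):
    """Hypothetical stand-in for the HTTP 409 NodeLocked error from Ironic."""


def set_provision_state_with_retry(client, node_uuid, state,
                                   attempts=6, delay=2.0):
    """Retry a provision-state call while the node's reservation is held.

    ``client.node.set_provision_state`` is an assumed interface here; the
    point is the retry loop: on a lock conflict, wait and try again, and
    only re-raise once the retry budget is exhausted.
    """
    for attempt in range(1, attempts + 1):
        try:
            return client.node.set_provision_state(node_uuid, state)
        except NodeLockedError:
            if attempt == attempts:
                raise  # budget exhausted; surface the conflict to the caller
            time.sleep(delay)
```

With a larger retry budget (or a longer delay), a transient lock held by a just-finished introspection would be absorbed instead of printed to stderr.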
Created attachment 1171983 [details] ironic-conductor
Created attachment 1171984 [details] ironic-api
As this does not cause functional problems and is flagged as low priority, it will not be backported to OSP-8.