Bug 2227529

Summary: node deploying fails with ironic-conductor error mount: /tmp/tmp63ht2umu: /dev/sdr3 already mounted on /tmp/tmp63ht2umu
Product: Red Hat OpenStack
Component: openstack-ironic-python-agent
Version: 16.2 (Train)
Hardware: Unspecified
OS: Unspecified
Status: CLOSED DUPLICATE
Severity: high
Priority: high
Keywords: Triaged
Target Milestone: ---
Target Release: ---
Reporter: alisci <alisci>
Assignee: Steve Baker <sbaker>
CC: gkadam, jelle.hoylaerts.ext, sbaker, shtiwari
Doc Type: If docs needed, set a value
Type: Bug
Last Closed: 2023-08-21 19:42:18 UTC

Description alisci 2023-07-30 13:41:48 UTC
Description of problem:
Failure while scaling out a Ceph node.
The deployment shows the following error in the ironic-conductor logs:

ERROR ironic.conductor.utils [req-ce122274-9557-446c-bf1a-0e08cf8e436c - - - - -] Failed to install a bootloader when deploying node xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx. Error: {'type': 'CommandExecutionError', 'code': 500, 'message': "Command execution failed: Installing GRUB2 boot loader to device /dev/sdr failed with Unexpected error while running command.\nCommand: mount /dev/sdr3 /tmp/tmp63ht2umu\nExit code: 32\nStdout: ''\nStderr: 'mount: /tmp/tmp63ht2umu: /dev/sdr3 already mounted on /tmp/tmp63ht2umu.\\n'.", 'details': "Installing GRUB2 boot loader to device /dev/sdr failed with Unexpected error while running command.\nCommand: mount /dev/sdr3 /tmp/tmp63ht2umu\nExit code: 32\nStdout: ''\nStderr: 'mount: /tmp/tmp63ht2umu: /dev/sdr3 already mounted on /tmp/tmp63ht2umu.\\n'."}
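For context, mount(8) exits with status 32 when the requested partition is already mounted on the target directory, which is exactly what the agent reports above: a second "mount /dev/sdr3 /tmp/tmp63ht2umu" fails because the first mount was never cleaned up. The sketch below is illustrative only; it assumes findmnt(8) is available on the ramdisk, and the helper names (is_mounted, mount_partition) are hypothetical and not taken from ironic-python-agent.

# Illustrative sketch, not ironic-python-agent code: shows why the repeated
# mount fails with exit code 32 and how a findmnt guard would avoid it.
import subprocess


def is_mounted(device: str, mountpoint: str) -> bool:
    """Return True if `device` is already mounted on `mountpoint` (checked via findmnt)."""
    result = subprocess.run(
        ["findmnt", "--source", device, "--mountpoint", mountpoint],
        capture_output=True,
        text=True,
    )
    # findmnt exits 0 only when a matching mount entry exists.
    return result.returncode == 0


def mount_partition(device: str, mountpoint: str) -> None:
    """Mount `device` on `mountpoint` only if it is not already mounted there."""
    if is_mounted(device, mountpoint):
        # Without this guard a second mount attempt fails with exit code 32:
        # "mount: <dir>: <device> already mounted on <dir>."
        return
    subprocess.run(["mount", device, mountpoint], check=True)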


Version-Release number of selected component (if applicable):
OSP 16.2.3


Further details are in the private attachments.

Comment 11 Steve Baker 2023-08-21 19:42:18 UTC
Closing for now on the assumption that this issue will go away after the cluster is updated to z4. It can be reopened if necessary.

*** This bug has been marked as a duplicate of bug 2134529 ***