Bug 1813941 - overcloud update with NFS volume attached to instance fails
Summary: overcloud update with NFS volume attached to instance fails
Keywords:
Status: CLOSED DUPLICATE of bug 1816918
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-tripleo-heat-templates
Version: 16.0 (Train)
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: medium
Target Milestone: z2
Target Release: 16.0 (Train on RHEL 8.1)
Assignee: Piotr Kopec
QA Contact: David Rosenfeld
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-03-16 14:28 UTC by Jacob Ansari
Modified: 2024-03-25 15:48 UTC
CC List: 6 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
See https://bugzilla.redhat.com/show_bug.cgi?id=1816918
Clone Of:
Environment:
Last Closed: 2020-04-30 13:24:49 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Launchpad 1869020 0 None None None 2020-03-25 15:14:06 UTC
OpenStack gerrit 716280 0 None MERGED Tolerate NFS exports in /var/lib/nova when selinux relabelling 2021-02-04 22:27:31 UTC
OpenStack gerrit 716924 0 None MERGED Fix selinux denial on centos8/rhel8 when relabelling /var/lib/nova 2021-02-04 22:27:31 UTC
Red Hat Issue Tracker OSP-31716 0 None None None 2024-03-25 15:48:19 UTC

Description Jacob Ansari 2020-03-16 14:28:53 UTC
Description of problem:
When a compute node has a guest with an NFS-backed volume attached, "openstack overcloud deploy" fails.


Version-Release number of selected component (if applicable):
openstack-tripleo-heat-templates-11.3.2-0.20200211065546.d3d6dc3.el8ost.noarch

How reproducible:
Always

Steps to Reproduce:
1. On a compute node, have an instance with an NFS-backed volume attached (see the example commands after these steps).
2. Attempt an overcloud stack update
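
As a rough illustration of step 1, assuming a Cinder NFS backend is already configured and exposed through a volume type (the type and resource names below are placeholders), the volume can be created and attached with:

# Placeholder names; assumes an NFS-backed Cinder volume type already exists
openstack volume create --size 10 --type <nfs-backed-type> nfs-test-vol
openstack server add volume <instance-name> nfs-test-vol
# Once attached, the NFS export is mounted on the compute node under
# /var/lib/nova/mnt/<hash>, as seen in the df output in the additional info below.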


Actual results:
Overcloud stack update fails

Expected results:
Overcloud stack update succeeds 

Additional info:
Example:

(tpapod4-infr) [stack@tpainfrucld0 ~]$ openstack server list --long --column Name --column Host
+------------+---------------------------------------+
| Name       | Host                                  |
+------------+---------------------------------------+
| xtest-srv1 | tpapod4-infr-comp2-0.vici.verizon.com |
| xtest-srv0 | tpapod4-infr-comp2-0.vici.verizon.com |
+------------+---------------------------------------+
(tpapod4-infr) [stack@tpainfrucld0 ~]$ openstack volume list --long
+--------------------------------------+-----------------+--------+------+---------+----------+-------------------------------------+------------+
| ID                                   | Name            | Status | Size | Type    | Bootable | Attached to                         | Properties |
+--------------------------------------+-----------------+--------+------+---------+----------+-------------------------------------+------------+
| ccfe606c-dab5-4783-933f-2d8c25ae9124 | xtest-srv1-boot | in-use |   10 | tripleo | true     | Attached to xtest-srv1 on /dev/vda  |            |
| 6bf9c624-427b-4277-876a-d4cc343e1394 | xtest-srv0-boot | in-use |   10 | tripleo | true     | Attached to xtest-srv0 on /dev/vda  |            |
+--------------------------------------+-----------------+--------+------+---------+----------+-------------------------------------+------------+

2020-03-12 10:19:44,882 p=281432 u=mistral |  fatal: [tpapod4-infr-comp2-0]: FAILED! => {"ansible_job_id": "945381463512.463109", "attempts": 9, "changed": false, "finished": 1, "msg": "Paunch failed with config_id tripleo_step3", "rc": 126, "stderr": "Did not find container with \"['podman', 'ps', '-a', '--filter', 'label=container_name=neutron_ovs_bridge', '--filter', 'label=config_id=tripleo_step3', '--format', '{{.Names}}']\" - retrying without config_id\nDid not find container with \"['podman', 'ps', '-a', '--filter', 'label=container_name=neutron_ovs_bridge', '--format', '{{.Names}}']\"\nDid not find container with \"['podman', 'ps', '-a', '--filter', 'label=container_name=nova_statedir_owner', '--filter', 'label=config_id=tripleo_step3', '--format', '{{.Names}}']\" - retrying without config_id\nDid not find container with \"['podman', 'ps', '-a', '--filter', 'label=container_name=nova_statedir_owner', '--format', '{{.Names}}']\"\nError running ['podman', 'run', '--name', 'nova_statedir_owner', '--label', 'config_id=tripleo_step3', '--label', 'container_name=nova_statedir_owner', '--label', 'managed_by=tripleo-Compute2', '--label', 'config_data={\"command\": \"/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py\", \"detach\": false, \"environment\": {\"TRIPLEO_DEPLOY_IDENTIFIER\": \"1584020954\", \"__OS_DEBUG\": \"false\"}, \"image\": \"tpavcpsat.vici.verizon.com:5000/corona-openstack_containers-nova-compute:16.0-83\", \"net\": \"none\", \"privileged\": false, \"user\": \"root\", \"volumes\": [\"/var/lib/nova:/var/lib/nova:shared,z\", \"/var/lib/container-config-scripts/:/container-config-scripts/:z\"]}', '--conmon-pidfile=/var/run/nova_statedir_owner.pid', '--log-driver', 'k8s-file', '--log-opt', 'path=/var/log/containers/stdouts/nova_statedir_owner.log', '--env=TRIPLEO_DEPLOY_IDENTIFIER=1584020954', '--env=__OS_DEBUG=false', '--net=none', '--privileged=false', '--user=root', '--volume=/var/lib/nova:/var/lib/nova:shared,z', '--volume=/var/lib/container-config-scripts/:/container-config-scripts/:z', '--cpuset-cpus=0,32,64,96,16,48,80,112', 'tpavcpsat.vici.verizon.com:5000/corona-openstack_containers-nova-compute:16.0-83', '/container-config-scripts/pyshim.sh', '/container-config-scripts/nova_statedir_ownership.py']. 
[126]\n\nstdout: \nstderr: Error: relabel failed \"/var/lib/nova\": operation not supported\n\n", "stderr_lines": ["Did not find container with \"['podman', 'ps', '-a', '--filter', 'label=container_name=neutron_ovs_bridge', '--filter', 'label=config_id=tripleo_step3', '--format', '{{.Names}}']\" - retrying without config_id", "Did not find container with \"['podman', 'ps', '-a', '--filter', 'label=container_name=neutron_ovs_bridge', '--format', '{{.Names}}']\"", "Did not find container with \"['podman', 'ps', '-a', '--filter', 'label=container_name=nova_statedir_owner', '--filter', 'label=config_id=tripleo_step3', '--format', '{{.Names}}']\" - retrying without config_id", "Did not find container with \"['podman', 'ps', '-a', '--filter', 'label=container_name=nova_statedir_owner', '--format', '{{.Names}}']\"", "Error running ['podman', 'run', '--name', 'nova_statedir_owner', '--label', 'config_id=tripleo_step3', '--label', 'container_name=nova_statedir_owner', '--label', 'managed_by=tripleo-Compute2', '--label', 'config_data={\"command\": \"/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py\", \"detach\": false, \"environment\": {\"TRIPLEO_DEPLOY_IDENTIFIER\": \"1584020954\", \"__OS_DEBUG\": \"false\"}, \"image\": \"tpavcpsat.vici.verizon.com:5000/corona-openstack_containers-nova-compute:16.0-83\", \"net\": \"none\", \"privileged\": false, \"user\": \"root\", \"volumes\": [\"/var/lib/nova:/var/lib/nova:shared,z\", \"/var/lib/container-config-scripts/:/container-config-scripts/:z\"]}', '--conmon-pidfile=/var/run/nova_statedir_owner.pid', '--log-driver', 'k8s-file', '--log-opt', 'path=/var/log/containers/stdouts/nova_statedir_owner.log', '--env=TRIPLEO_DEPLOY_IDENTIFIER=1584020954', '--env=__OS_DEBUG=false', '--net=none', '--privileged=false', '--user=root', '--volume=/var/lib/nova:/var/lib/nova:shared,z', '--volume=/var/lib/container-config-scripts/:/container-config-scripts/:z', '--cpuset-cpus=0,32,64,96,16,48,80,112', 'tpavcpsat.vici.verizon.com:5000/corona-openstack_containers-nova-compute:16.0-83', '/container-config-scripts/pyshim.sh', '/container-config-scripts/nova_statedir_ownership.py']. 
[126]", "", "stdout: ", "stderr: Error: relabel failed \"/var/lib/nova\": operation not supported", ""], "stdout": "\u001b[0;32mInfo: Loading facts\u001b[0m\n\u001b[0;32mInfo: Loading facts\u001b[0m\n\u001b[0;32mInfo: Loading facts\u001b[0m\n\u001b[0;32mInfo: Loading facts\u001b[0m\n\u001b[0;32mInfo: Loading facts\u001b[0m\n\u001b[0;32mInfo: Loading facts\u001b[0m\n\u001b[0;32mInfo: Loading facts\u001b[0m\n\u001b[0;32mInfo: Loading facts\u001b[0m\n\u001b[0;32mInfo: Loading facts\u001b[0m\n\u001b[0;32mInfo: Loading facts\u001b[0m\n\u001b[0;32mInfo: Loading facts\u001b[0m\n\u001b[0;32mInfo: Loading facts\u001b[0m\n\u001b[0;32mInfo: Loading facts\u001b[0m\n\u001b[0;32mInfo: Loading facts\u001b[0m\n\u001b[0;32mInfo: Loading facts\u001b[0m\n\u001b[0;32mInfo: Loading facts\u001b[0m\n\u001b[0;32mInfo: Loading facts\u001b[0m\n\u001b[0;32mInfo: Loading facts\u001b[0m\n\u001b[0;32mInfo: Loading facts\u001b[0m\n\u001b[0;32mInfo: Loading facts\u001b[0m\n\u001b[0;32mInfo: Loading facts\u001b[0m\n\u001b[0;32mInfo: Loading facts\u001b[0m\n\u001b[0;32mInfo: Loading facts\u001b[0m\n\u001b[0;32mInfo: Loading facts\u001b[0m\n\u001b[0;32mInfo: Loading facts\u001b[0m\n\u001b[0;32mInfo: Loading facts\u001b[0m\n\u001b[0;32mInfo: Loading facts\u001b[0m\n\u001b[0;32mInfo: Loading facts\u001b[0m\n\u001b[0;32mInfo: Loading facts\u001b[0m\n\u001b[0;32mInfo: Loading facts\u001b[0m\n\u001b[0;32mInfo: Loading facts\u001b[0m\n\u001b[0;32mInfo: Loading facts\u001b[0m\n\u001b[0;32mInfo: Loading facts\u001b[0m\n\u001b[0;32mInfo: Loading facts\u001b[0m\n\u001b[0;32mInfo: Loading facts\u001b[0m\n\u001b[0;32mInfo: Loading facts\u001b[0m\n\u001b[0;32mInfo: Loading facts\u001b[0m\n\u001b[0;32mInfo: Loading facts\u001b[0m\n\u001b[0;32mInfo: Loading facts\u001b[0m\n\u001b[0;32mInfo: Loading facts\u001b[0m\n\u001b[0;32mInfo: Loading facts\u001b[0m\n\u001b[0;32mInfo: Loading facts\u001b[0m\n\u001b[0;32mInfo: Loading facts\u001b[0m\n\u001b[0;32mInfo: Loading facts\u001b[0m\n\u001b[0;32mInfo: Loading facts\u001b[0m\n\u001b[0;32mInfo: Loading facts\u001b[0m\n\u001b[0;32mInfo: Loading facts\u001b[0m\n\u001b[0;32mInfo: Loading facts\u001b[0m\n\u001b[mNotice: Compiled catalog for tpapod4-infr-comp2-0.vici.verizon.com in environment production in 0.52 seconds\u001b[0m\n\u001b[0;32mInfo: Applying configuration version '1584022782'\u001b[0m\n\u001b[0;32mInfo: Creating state file /var/lib/puppet/state/state.yaml\u001b[0m\n\u001b[mNotice: Applied catalog in 0.09 seconds\u001b[0m\n", "stdout_lines": ["\u001b[0;32mInfo: Loading facts\u001b[0m", "\u001b[0;32mInfo: Loading facts\u001b[0m", "\u001b[0;32mInfo: Loading facts\u001b[0m", "\u001b[0;32mInfo: Loading facts\u001b[0m", "\u001b[0;32mInfo: Loading facts\u001b[0m", "\u001b[0;32mInfo: Loading facts\u001b[0m", "\u001b[0;32mInfo: Loading facts\u001b[0m", "\u001b[0;32mInfo: Loading facts\u001b[0m", "\u001b[0;32mInfo: Loading facts\u001b[0m", "\u001b[0;32mInfo: Loading facts\u001b[0m", "\u001b[0;32mInfo: Loading facts\u001b[0m", "\u001b[0;32mInfo: Loading facts\u001b[0m", "\u001b[0;32mInfo: Loading facts\u001b[0m", "\u001b[0;32mInfo: Loading facts\u001b[0m", "\u001b[0;32mInfo: Loading facts\u001b[0m", "\u001b[0;32mInfo: Loading facts\u001b[0m", "\u001b[0;32mInfo: Loading facts\u001b[0m", "\u001b[0;32mInfo: Loading facts\u001b[0m", "\u001b[0;32mInfo: Loading facts\u001b[0m", "\u001b[0;32mInfo: Loading facts\u001b[0m", "\u001b[0;32mInfo: Loading facts\u001b[0m", "\u001b[0;32mInfo: Loading facts\u001b[0m", "\u001b[0;32mInfo: Loading facts\u001b[0m", "\u001b[0;32mInfo: Loading facts\u001b[0m", 
"\u001b[0;32mInfo: Loading facts\u001b[0m", "\u001b[0;32mInfo: Loading facts\u001b[0m", "\u001b[0;32mInfo: Loading facts\u001b[0m", "\u001b[0;32mInfo: Loading facts\u001b[0m", "\u001b[0;32mInfo: Loading facts\u001b[0m", "\u001b[0;32mInfo: Loading facts\u001b[0m", "\u001b[0;32mInfo: Loading facts\u001b[0m", "\u001b[0;32mInfo: Loading facts\u001b[0m", "\u001b[0;32mInfo: Loading facts\u001b[0m", "\u001b[0;32mInfo: Loading facts\u001b[0m", "\u001b[0;32mInfo: Loading facts\u001b[0m", "\u001b[0;32mInfo: Loading facts\u001b[0m", "\u001b[0;32mInfo: Loading facts\u001b[0m", "\u001b[0;32mInfo: Loading facts\u001b[0m", "\u001b[0;32mInfo: Loading facts\u001b[0m", "\u001b[0;32mInfo: Loading facts\u001b[0m", "\u001b[0;32mInfo: Loading facts\u001b[0m", "\u001b[0;32mInfo: Loading facts\u001b[0m", "\u001b[0;32mInfo: Loading facts\u001b[0m", "\u001b[0;32mInfo: Loading facts\u001b[0m", "\u001b[0;32mInfo: Loading facts\u001b[0m", "\u001b[0;32mInfo: Loading facts\u001b[0m", "\u001b[0;32mInfo: Loading facts\u001b[0m", "\u001b[0;32mInfo: Loading facts\u001b[0m", "\u001b[mNotice: Compiled catalog for tpapod4-infr-comp2-0.vici.verizon.com in environment production in 0.52 seconds\u001b[0m", "\u001b[0;32mInfo: Applying configuration version '1584022782'\u001b[0m", "\u001b[0;32mInfo: Creating state file /var/lib/puppet/state/state.yaml\u001b[0m", "\u001b[mNotice: Applied catalog in 0.09 seconds\u001b[0m"]}

This looks to be due to the SELinux relabel of /var/lib/nova failing while an NFS export is mounted underneath it:

[heat-admin@tpapod4-infr-comp2-0 ~]$ df
Filesystem                         1K-blocks     Used  Available Use% Mounted on
devtmpfs                           259613296        0  259613296   0% /dev
tmpfs                              528084992       84  528084908   1% /dev/shm
tmpfs                              528084992     4908  528080084   1% /run
tmpfs                              528084992        0  528084992   0% /sys/fs/cgroup
/dev/sda2                          468773676 14776172  453997504   4% /
tmpfs                              105616996        0  105616996   0% /run/user/1000
172.18.248.157:/tpainfr_cinder_01 4294967296  9089408 4285877888   1% /var/lib/nova/mnt/b8900afc8b1954cf3b9465e268cdf9be
[heat-admin@tpapod4-infr-comp2-0 ~]$ sudo podman start nova_statedir_owner
Error: unable to start container "nova_statedir_owner": relabel failed "/var/lib/nova": operation not supported
[heat-admin@tpapod4-infr-comp2-0 ~]$ df
Filesystem     1K-blocks     Used Available Use% Mounted on
devtmpfs       259613296        0 259613296   0% /dev
tmpfs          528084992       84 528084908   1% /dev/shm
tmpfs          528084992     4896 528080096   1% /run
tmpfs          528084992        0 528084992   0% /sys/fs/cgroup
/dev/sda2      468773676 14775596 453998080   4% /
tmpfs          105616996        0 105616996   0% /run/user/1000
[heat-admin@tpapod4-infr-comp2-0 ~]$ sudo podman start nova_statedir_owner
nova_statedir_owner
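
Note that in the second df output above the NFS export is no longer mounted, and the same "podman start" then succeeds. The failure comes from the ":z" suffix on the "/var/lib/nova:/var/lib/nova:shared,z" volume in the paunch config: it asks podman to recursively relabel the bind-mounted host path for SELinux, and NFS does not support setting SELinux labels. A minimal sketch of the same failure outside of TripleO (the NFS server address, export path and container image are placeholders):

sudo mkdir -p /var/lib/nova/mnt/test
sudo mount -t nfs 172.18.248.157:/export /var/lib/nova/mnt/test
# ":z" requests a recursive SELinux relabel of the bind-mounted host path
sudo podman run --rm --volume /var/lib/nova:/var/lib/nova:shared,z \
    registry.access.redhat.com/ubi8 true
# Error: relabel failed "/var/lib/nova": operation not supported
# After "umount /var/lib/nova/mnt/test" (or without the ":z" flag) the run succeeds.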


SOS report for the compute node where the issue was reproduced:
sosreport-tpapod4-infr-comp2-0-02607154-2020-03-12-zhtrzky.tar.xz|https://access.redhat.com/hydra/rest/cases/02607154/attachments/aa59748e-8e3c-4715-9dfc-9c039b03571b

See also https://bugzilla.redhat.com/show_bug.cgi?id=1727260 for a similar NFS-related issue in a different use case.

Comment 9 Ollie Walsh 2020-04-30 13:24:49 UTC

*** This bug has been marked as a duplicate of bug 1816918 ***

