Bug 1731264 - Deployment with mds fails with message: 'dict object' has no attribute 'pg_num'
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-tripleo-heat-templates
Version: 15.0 (Stein)
Hardware: Unspecified
OS: Unspecified
Severity: high
Priority: high
Target Milestone: rc
Target Release: 15.0 (Stein)
Assignee: Giulio Fidente
QA Contact: Eliad Cohen
URL:
Whiteboard:
Depends On:
Blocks:
TreeView+ depends on / blocked
 
Reported: 2019-07-18 19:51 UTC by Eliad Cohen
Modified: 2019-10-22 10:26 UTC (History)
9 users

Fixed In Version: openstack-tripleo-heat-templates-10.6.1-0.20190812140519.2a684c0.el8ost
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-09-21 11:24:01 UTC
Target Upstream Version:


Attachments (Terms of Use)
Undercloud files from CI (2.92 MB, application/x-xz), 2019-07-18 19:51 UTC, Eliad Cohen
Var folder from undercloud (10.64 MB, application/x-xz), 2019-07-18 19:52 UTC, Eliad Cohen


Links
System ID Priority Status Summary Last Updated
OpenStack gerrit 671686 None None None 2019-07-19 09:05:43 UTC
OpenStack gerrit 673568 None None None 2019-07-30 15:49:45 UTC
OpenStack gerrit 673569 None None None 2019-07-30 15:49:45 UTC
Red Hat Product Errata RHEA-2019:2811 None None None 2019-09-21 11:24:19 UTC

Description Eliad Cohen 2019-07-18 19:51:48 UTC
Created attachment 1591855 [details]
Undercloud files from CI

Description of problem:
When deploying with ceph-mds enabled, the deployment fails on the task TASK [ceph-mds : create filesystem pools] with the error message shown in [1].

Version-Release number of selected component (if applicable):
Openstack: RHOS_TRUNK-15.0-RHEL-8-20190716.n.0
Ceph-ansible: ceph-ansible-4.0.0-0.1.rc11.el8cp.noarch

How reproducible:
100%

Steps to Reproduce:
1. Deploy with mds enabled; see the script used in this instance [2]

Actual results:
Deployment fails

Expected results:
Deployment should pass

Additional info:
[1] http://pastebin.test.redhat.com/781278
[2] http://pastebin.test.redhat.com/781280
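The error message in the summary is the one Jinja2 raises when a template such as "{{ item.pg_num }}" is rendered against a pool definition dict that does not carry a 'pg_num' key: Jinja2 tries attribute access first, then falls back to key access, and reports the attribute form when both fail. A minimal Python sketch of that gap and one defensive workaround (hypothetical illustration only, not the actual fix shipped in openstack-tripleo-heat-templates; the function name and default value are assumptions):

```python
# Hypothetical sketch: a pool dict missing 'pg_num' is what produces
# "'dict object' has no attribute 'pg_num'" when ceph-ansible renders
# "{{ item.pg_num }}". A .get() default here plays the role that a
# "| default(...)" filter would play in a playbook template.

def pool_create_cmd(pool, default_pg_num=32):
    """Build the 'ceph osd pool create' argument list for one pool dict."""
    pg_num = pool.get("pg_num", default_pg_num)  # guard against a missing key
    return ["ceph", "osd", "pool", "create", pool["name"], str(pg_num)]

# A definition like the one that triggered the bug (no pg_num key) versus
# the complete form seen in the passing run logged in comment 17.
broken_pool = {"name": "manila_data", "application": "cephfs"}
fixed_pool = {"name": "manila_data", "application": "cephfs",
              "pg_num": 32, "rule_name": "replicated_rule"}

print(pool_create_cmd(broken_pool))  # falls back to the default
print(pool_create_cmd(fixed_pool))   # uses the key supplied by the templates
```

The fixed-in tripleo-heat-templates build resolves this on the producer side, by making sure the pool definitions handed to ceph-ansible carry the 'pg_num' key, as the successful log in comment 17 shows.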

Comment 1 Eliad Cohen 2019-07-18 19:52:20 UTC
Created attachment 1591856 [details]
Var folder from undercloud

Comment 17 John Fulton 2019-08-19 17:54:46 UTC
The CI job DFG-ceph-rhos-15_director-rhel-virthost-3cont_2comp_3ceph-ipv4-geneve-ceph-rgw-mds-ganesha with the fixed-in RPM [1] didn't hit this issue [2]. The task which failed as reported in comment #0 now passes.

[1] http://cougar11.scl.lab.tlv.redhat.com/DFG-ceph-rhos-15_director-rhel-virthost-3cont_2comp_3ceph-ipv4-geneve-ceph-rgw-mds-ganesha/36/undercloud-0.tar.gz?undercloud-0/var/log/rpm.list

[2] http://cougar11.scl.lab.tlv.redhat.com/DFG-ceph-rhos-15_director-rhel-virthost-3cont_2comp_3ceph-ipv4-geneve-ceph-rgw-mds-ganesha/36/undercloud-0.tar.gz?undercloud-0/var/lib/mistral/overcloud/ceph-ansible/ceph_ansible_command.log

2019-08-15 18:20:21,806 p=236307 u=root |  TASK [ceph-mds : create filesystem pools] **************************************
2019-08-15 18:20:21,806 p=236307 u=root |  task path: /usr/share/ceph-ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml:24
2019-08-15 18:20:21,806 p=236307 u=root |  Thursday 15 August 2019  18:20:21 +0000 (0:00:00.139)       0:12:49.284 ******* 
2019-08-15 18:20:21,811 p=236307 u=root |  META: noop
2019-08-15 18:20:21,812 p=236307 u=root |  META: noop
2019-08-15 18:20:23,449 p=236307 u=root |  ok: [controller-0 -> 192.168.24.19] => (item={'application': 'cephfs', 'name': 'manila_data', 'pg_num': 32, 'rule_name': 'replicated_rule'}) => changed=false 
  ansible_loop_var: item
  cmd:
  - podman
  - exec
  - ceph-mon-controller-0
  - ceph
  - --cluster
  - ceph
  - osd
  - pool
  - create
  - manila_data
  - '32'
  - '32'
  - replicated_rule
  - '1'
  delta: '0:00:01.296956'
  end: '2019-08-15 18:20:23.415933'
  item:
    application: cephfs
    name: manila_data
    pg_num: 32
    rule_name: replicated_rule
  rc: 0
  start: '2019-08-15 18:20:22.118977'
  stderr: pool 'manila_data' created
  stderr_lines: <omitted>
  stdout: ''
  stdout_lines: <omitted>
2019-08-15 18:20:24,779 p=236307 u=root |  ok: [controller-0 -> 192.168.24.19] => (item={'application': 'cephfs', 'name': 'manila_metadata', 'pg_num': 32, 'rule_name': 'replicated_rule'}) => changed=false 
  ansible_loop_var: item
  cmd:
  - podman
  - exec
  - ceph-mon-controller-0
  - ceph
  - --cluster
  - ceph
  - osd
  - pool
  - create
  - manila_metadata
  - '32'
  - '32'
  - replicated_rule
  - '1'
  delta: '0:00:01.007097'
  end: '2019-08-15 18:20:24.746319'
  item:
    application: cephfs
    name: manila_metadata
    pg_num: 32
    rule_name: replicated_rule
  rc: 0
  start: '2019-08-15 18:20:23.739222'
  stderr: pool 'manila_metadata' created
  stderr_lines: <omitted>
  stdout: ''
  stdout_lines: <omitted>
2019-08-15 18:20:24,898 p=236307 u=root |  TASK [ceph-mds : customize pool size] ******************************************
2019-08-15 18:20:24,898 p=236307 u=root |  task path: /usr/share/ceph-ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml:40
2019-08-15 18:20:24,898 p=236307 u=root |  Thursday 15 August 2019  18:20:24 +0000 (0:00:03.091)       0:12:52.376 ******* 
2019-08-15 18:20:24,904 p=236307 u=root |  META: noop
2019-08-15 18:20:24,904 p=236307 u=root |  META: noop
2019-08-15 18:20:25,776 p=236307 u=root |  ok: [controller-0 -> 192.168.24.19] => (item={'application': 'cephfs', 'name': 'manila_data', 'pg_num': 32, 'rule_name': 'replicated_rule'}) => changed=false 
  ansible_loop_var: item
  cmd:
  - podman
  - exec
  - ceph-mon-controller-0
  - ceph
  - --cluster
  - ceph
  - osd
  - pool
  - set
  - manila_data
  - size
  - '3'
  delta: '0:00:00.599331'
  end: '2019-08-15 18:20:25.743560'
  item:
    application: cephfs
    name: manila_data
    pg_num: 32
    rule_name: replicated_rule
  rc: 0
  start: '2019-08-15 18:20:25.144229'
  stderr: set pool 6 size to 3
  stderr_lines: <omitted>
  stdout: ''
  stdout_lines: <omitted>
2019-08-15 18:20:26,788 p=236307 u=root |  ok: [controller-0 -> 192.168.24.19] => (item={'application': 'cephfs', 'name': 'manila_metadata', 'pg_num': 32, 'rule_name': 'replicated_rule'}) => changed=false 
  ansible_loop_var: item
  cmd:
  - podman
  - exec
  - ceph-mon-controller-0
  - ceph
  - --cluster
  - ceph
  - osd
  - pool
  - set
  - manila_metadata
  - size
  - '3'
  delta: '0:00:00.765375'
  end: '2019-08-15 18:20:26.759282'
  item:
    application: cephfs
    name: manila_metadata
    pg_num: 32
    rule_name: replicated_rule
  rc: 0
  start: '2019-08-15 18:20:25.993907'
  stderr: set pool 7 size to 3
  stderr_lines: <omitted>
  stdout: ''
  stdout_lines: <omitted>
2019-08-15 18:20:26,899 p=236307 u=root |  TASK [ceph-mds : customize pool min_size] **************************************

Comment 18 Eliad Cohen 2019-08-19 17:56:17 UTC
(In reply to John Fulton from comment #17)
> The CI job
> DFG-ceph-rhos-15_director-rhel-virthost-3cont_2comp_3ceph-ipv4-geneve-ceph-
> rgw-mds-ganesha with the fixed-in RPM [1] didn't hit this issue [2]. The
> task which failed as reported in comment #0 now passes.
> [...]

+1 Thanks fultonj!

Verified with CI jobs.

Comment 22 errata-xmlrpc 2019-09-21 11:24:01 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:2811
