Description of problem:

During installation testing, I created a two-node cluster using the default values. It reported an error:

[root@rhel-mon ~]# ceph health detail
HEALTH_WARN Reduced data availability: 32 pgs inactive; Degraded data redundancy: 32 pgs unclean; too few PGs per OSD (16 < min 30)
PG_AVAILABILITY Reduced data availability: 32 pgs inactive

I realized the default pool size was 3, so with only two OSD nodes I was never going to reach a clean state. I purged the cluster with '# ansible-playbook purge-cluster.yml', which completed successfully. I then re-ran '# ansible-playbook site.yml' after overriding the default pool size to 2, and got the following result:

failed: [rhel-osd1] (item=[{'_ansible_parsed': True, 'stderr_lines': [], '_ansible_item_result': True, u'end': u'2017-09-15 14:05:51.609473', '_ansible_no_log': False, u'stdout': u'', u'cmd': u"parted --script /dev/sdb print | egrep -sq '^ 1.*ceph'", u'rc': 1, 'item': [{'_ansible_parsed': True, 'stderr_lines': [], '_ansible_item_result': True, u'end': u'2017-09-15 14:05:49.853054', '_ansible_no_log': False, u'stdout': u'', u'cmd': u"readlink -f /dev/sdb | egrep '/dev/([hsv]d[a-z]{1,2}|cciss/c[0-9]d[0-9]p|nvme[0-9]n[0-9]p)[0-9]{1,2}|fio[a-z]{1,2}[0-9]{1,2}$'", u'rc': 1, 'item': u'/dev/sdb', u'delta': u'0:00:00.009494', u'stderr': u'', u'changed': False, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': True, u'_raw_params': u"readlink -f /dev/sdb | egrep '/dev/([hsv]d[a-z]{1,2}|cciss/c[0-9]d[0-9]p|nvme[0-9]n[0-9]p)[0-9]{1,2}|fio[a-z]{1,2}[0-9]{1,2}$'", u'removes': None, u'creates': None, u'chdir': None}}, 'stdout_lines': [], 'failed_when_result': False, u'start': u'2017-09-15 14:05:49.843560', 'failed': False}, u'/dev/sdb'], u'delta': u'0:00:00.020079', u'stderr': u'', u'changed': False, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': True, u'_raw_params': u"parted --script /dev/sdb print | egrep -sq '^ 1.*ceph'", u'removes': None, u'creates': None, 
u'chdir': None}}, 'stdout_lines': [], 'failed_when_result': False, u'start': u'2017-09-15 14:05:51.589394', 'failed': False}, {'_ansible_parsed': True, 'stderr_lines': [], '_ansible_item_result': True, u'end': u'2017-09-15 14:05:49.853054', '_ansible_no_log': False, u'stdout': u'', u'cmd': u"readlink -f /dev/sdb | egrep '/dev/([hsv]d[a-z]{1,2}|cciss/c[0-9]d[0-9]p|nvme[0-9]n[0-9]p)[0-9]{1,2}|fio[a-z]{1,2}[0-9]{1,2}$'", u'rc': 1, 'item': u'/dev/sdb', u'delta': u'0:00:00.009494', u'stderr': u'', u'changed': False, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': True, u'_raw_params': u"readlink -f /dev/sdb | egrep '/dev/([hsv]d[a-z]{1,2}|cciss/c[0-9]d[0-9]p|nvme[0-9]n[0-9]p)[0-9]{1,2}|fio[a-z]{1,2}[0-9]{1,2}$'", u'removes': None, u'creates': None, u'chdir': None}}, 'stdout_lines': [], 'failed_when_result': False, u'start': u'2017-09-15 14:05:49.843560', 'failed': False}, u'/dev/sdb']) => {"changed": true, "cmd": ["ceph-disk", "prepare", "--cluster", "ceph", "--filestore", "/dev/sdb"], "delta": "0:00:00.366718", "end": "2017-09-15 14:05:53.415466", "failed": true, "item": [{"_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "cmd": "parted --script /dev/sdb print | egrep -sq '^ 1.*ceph'", "delta": "0:00:00.020079", "end": "2017-09-15 14:05:51.609473", "failed": false, "failed_when_result": false, "invocation": {"module_args": {"_raw_params": "parted --script /dev/sdb print | egrep -sq '^ 1.*ceph'", "_uses_shell": true, "chdir": null, "creates": null, "executable": null, "removes": null, "warn": true}}, "item": [{"_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "cmd": "readlink -f /dev/sdb | egrep '/dev/([hsv]d[a-z]{1,2}|cciss/c[0-9]d[0-9]p|nvme[0-9]n[0-9]p)[0-9]{1,2}|fio[a-z]{1,2}[0-9]{1,2}$'", "delta": "0:00:00.009494", "end": "2017-09-15 14:05:49.853054", "failed": false, "failed_when_result": false, "invocation": {"module_args": 
{"_raw_params": "readlink -f /dev/sdb | egrep '/dev/([hsv]d[a-z]{1,2}|cciss/c[0-9]d[0-9]p|nvme[0-9]n[0-9]p)[0-9]{1,2}|fio[a-z]{1,2}[0-9]{1,2}$'", "_uses_shell": true, "chdir": null, "creates": null, "executable": null, "removes": null, "warn": true}}, "item": "/dev/sdb", "rc": 1, "start": "2017-09-15 14:05:49.843560", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}, "/dev/sdb"], "rc": 1, "start": "2017-09-15 14:05:51.589394", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}, {"_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "cmd": "readlink -f /dev/sdb | egrep '/dev/([hsv]d[a-z]{1,2}|cciss/c[0-9]d[0-9]p|nvme[0-9]n[0-9]p)[0-9]{1,2}|fio[a-z]{1,2}[0-9]{1,2}$'", "delta": "0:00:00.009494", "end": "2017-09-15 14:05:49.853054", "failed": false, "failed_when_result": false, "invocation": {"module_args": {"_raw_params": "readlink -f /dev/sdb | egrep '/dev/([hsv]d[a-z]{1,2}|cciss/c[0-9]d[0-9]p|nvme[0-9]n[0-9]p)[0-9]{1,2}|fio[a-z]{1,2}[0-9]{1,2}$'", "_uses_shell": true, "chdir": null, "creates": null, "executable": null, "removes": null, "warn": true}}, "item": "/dev/sdb", "rc": 1, "start": "2017-09-15 14:05:49.843560", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}, "/dev/sdb"], "rc": 1, "start": "2017-09-15 14:05:53.048748", "stderr": "Could not create partition 2 from 10487808 to 20973567\nError encountered; not saving changes.\n'/sbin/sgdisk --new=2:0:+5120M --change-name=2:ceph journal --partition-guid=2:e4d6155e-224b-45e6-97fd-437b1c512dd8 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/sdb' failed with status code 4", "stderr_lines": ["Could not create partition 2 from 10487808 to 20973567", "Error encountered; not saving changes.", "'/sbin/sgdisk --new=2:0:+5120M --change-name=2:ceph journal --partition-guid=2:e4d6155e-224b-45e6-97fd-437b1c512dd8 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/sdb' failed with status 
code 4"], "stdout": "", "stdout_lines": []} failed: [rhel-osd0] (item=[{'_ansible_parsed': True, 'stderr_lines': [], '_ansible_item_result': True, u'end': u'2017-09-15 14:05:51.613405', '_ansible_no_log': False, u'stdout': u'', u'cmd': u"parted --script /dev/sdb print | egrep -sq '^ 1.*ceph'", u'rc': 1, 'item': [{'_ansible_parsed': True, 'stderr_lines': [], '_ansible_item_result': True, u'end': u'2017-09-15 14:05:49.859364', '_ansible_no_log': False, u'stdout': u'', u'cmd': u"readlink -f /dev/sdb | egrep '/dev/([hsv]d[a-z]{1,2}|cciss/c[0-9]d[0-9]p|nvme[0-9]n[0-9]p)[0-9]{1,2}|fio[a-z]{1,2}[0-9]{1,2}$'", u'rc': 1, 'item': u'/dev/sdb', u'delta': u'0:00:00.007687', u'stderr': u'', u'changed': False, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': True, u'_raw_params': u"readlink -f /dev/sdb | egrep '/dev/([hsv]d[a-z]{1,2}|cciss/c[0-9]d[0-9]p|nvme[0-9]n[0-9]p)[0-9]{1,2}|fio[a-z]{1,2}[0-9]{1,2}$'", u'removes': None, u'creates': None, u'chdir': None}}, 'stdout_lines': [], 'failed_when_result': False, u'start': u'2017-09-15 14:05:49.851677', 'failed': False}, u'/dev/sdb'], u'delta': u'0:00:00.019700', u'stderr': u'', u'changed': False, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': True, u'_raw_params': u"parted --script /dev/sdb print | egrep -sq '^ 1.*ceph'", u'removes': None, u'creates': None, u'chdir': None}}, 'stdout_lines': [], 'failed_when_result': False, u'start': u'2017-09-15 14:05:51.593705', 'failed': False}, {'_ansible_parsed': True, 'stderr_lines': [], '_ansible_item_result': True, u'end': u'2017-09-15 14:05:49.859364', '_ansible_no_log': False, u'stdout': u'', u'cmd': u"readlink -f /dev/sdb | egrep '/dev/([hsv]d[a-z]{1,2}|cciss/c[0-9]d[0-9]p|nvme[0-9]n[0-9]p)[0-9]{1,2}|fio[a-z]{1,2}[0-9]{1,2}$'", u'rc': 1, 'item': u'/dev/sdb', u'delta': u'0:00:00.007687', u'stderr': u'', u'changed': False, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': True, 
u'_raw_params': u"readlink -f /dev/sdb | egrep '/dev/([hsv]d[a-z]{1,2}|cciss/c[0-9]d[0-9]p|nvme[0-9]n[0-9]p)[0-9]{1,2}|fio[a-z]{1,2}[0-9]{1,2}$'", u'removes': None, u'creates': None, u'chdir': None}}, 'stdout_lines': [], 'failed_when_result': False, u'start': u'2017-09-15 14:05:49.851677', 'failed': False}, u'/dev/sdb']) => {"changed": true, "cmd": ["ceph-disk", "prepare", "--cluster", "ceph", "--filestore", "/dev/sdb"], "delta": "0:00:00.374064", "end": "2017-09-15 14:05:53.452243", "failed": true, "item": [{"_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "cmd": "parted --script /dev/sdb print | egrep -sq '^ 1.*ceph'", "delta": "0:00:00.019700", "end": "2017-09-15 14:05:51.613405", "failed": false, "failed_when_result": false, "invocation": {"module_args": {"_raw_params": "parted --script /dev/sdb print | egrep -sq '^ 1.*ceph'", "_uses_shell": true, "chdir": null, "creates": null, "executable": null, "removes": null, "warn": true}}, "item": [{"_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "cmd": "readlink -f /dev/sdb | egrep '/dev/([hsv]d[a-z]{1,2}|cciss/c[0-9]d[0-9]p|nvme[0-9]n[0-9]p)[0-9]{1,2}|fio[a-z]{1,2}[0-9]{1,2}$'", "delta": "0:00:00.007687", "end": "2017-09-15 14:05:49.859364", "failed": false, "failed_when_result": false, "invocation": {"module_args": {"_raw_params": "readlink -f /dev/sdb | egrep '/dev/([hsv]d[a-z]{1,2}|cciss/c[0-9]d[0-9]p|nvme[0-9]n[0-9]p)[0-9]{1,2}|fio[a-z]{1,2}[0-9]{1,2}$'", "_uses_shell": true, "chdir": null, "creates": null, "executable": null, "removes": null, "warn": true}}, "item": "/dev/sdb", "rc": 1, "start": "2017-09-15 14:05:49.851677", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}, "/dev/sdb"], "rc": 1, "start": "2017-09-15 14:05:51.593705", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}, {"_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": 
false, "cmd": "readlink -f /dev/sdb | egrep '/dev/([hsv]d[a-z]{1,2}|cciss/c[0-9]d[0-9]p|nvme[0-9]n[0-9]p)[0-9]{1,2}|fio[a-z]{1,2}[0-9]{1,2}$'", "delta": "0:00:00.007687", "end": "2017-09-15 14:05:49.859364", "failed": false, "failed_when_result": false, "invocation": {"module_args": {"_raw_params": "readlink -f /dev/sdb | egrep '/dev/([hsv]d[a-z]{1,2}|cciss/c[0-9]d[0-9]p|nvme[0-9]n[0-9]p)[0-9]{1,2}|fio[a-z]{1,2}[0-9]{1,2}$'", "_uses_shell": true, "chdir": null, "creates": null, "executable": null, "removes": null, "warn": true}}, "item": "/dev/sdb", "rc": 1, "start": "2017-09-15 14:05:49.851677", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}, "/dev/sdb"], "rc": 1, "start": "2017-09-15 14:05:53.078179", "stderr": "Could not create partition 2 from 10487808 to 20973567\nError encountered; not saving changes.\n'/sbin/sgdisk --new=2:0:+5120M --change-name=2:ceph journal --partition-guid=2:026ca94b-b443-41d7-87ec-ff90f29c37b8 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/sdb' failed with status code 4", "stderr_lines": ["Could not create partition 2 from 10487808 to 20973567", "Error encountered; not saving changes.", "'/sbin/sgdisk --new=2:0:+5120M --change-name=2:ceph journal --partition-guid=2:026ca94b-b443-41d7-87ec-ff90f29c37b8 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/sdb' failed with status code 4"], "stdout": "", "stdout_lines": []}

It seems that purge-cluster.yml is not purging or zapping the virtual disks. The VMs were created with RHEL 7.4, each with a system disk (sda); the OSD nodes have a separate disk (sdb) for the OSD.
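For reference, overriding the default pool size as described above can be done through ceph-ansible's ceph_conf_overrides mechanism. A minimal sketch, assuming the overrides live in group_vars/all.yml (ceph_conf_overrides itself is a standard ceph-ansible variable; the min_size line is my addition for a two-node layout):

```yaml
# group_vars/all.yml (sketch): make new pools 2-way replicated so a
# cluster with two OSD nodes can reach a clean state.
ceph_conf_overrides:
  global:
    osd_pool_default_size: 2
    osd_pool_default_min_size: 1
```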
John, could you provide:
- the full playbook log
- the content of group_vars/*.yml
- your inventory host file
- the ceph-ansible version used

In the meantime, I'll still try to reproduce your issue.
I cannot reproduce your issue; I tried something similar to your setup and the disks were cleaned. We still need more information. Since we haven't heard from you for almost three weeks, I'm closing this; if you think it needs to be re-opened, please do so. Thanks.
I saw a similar thing today when I tried to purge my existing cluster and reinstall. The cause is that ceph-ansible's purge-cluster.yml is not clearing the disk partitions on the OSD nodes. To work around it:

1) Run the purge-cluster.yml playbook.
2) On each OSD node, remove the leftover partitions:
   # for i in 1 2 ; do for j in b c d e ; do parted -s /dev/xvd$j rm $i ; done ; done
3) Now that the OSD partitions are wiped, reinstall Ceph by running site.yml.

As a better fix, we should probably add a 'parted -s <device> rm <id>' step to purge-cluster.yml itself.
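The per-node cleanup in step 2 can be sketched as a small script. This is only an illustration of the manual workaround: the device letters (xvdb..xvde) and partition numbers (1, 2) come from the loop above and must be adapted to the actual hosts, and the DRYRUN guard is my addition so the commands can be previewed before anything destructive runs.

```shell
#!/bin/sh
# Remove leftover Ceph partitions 1 and 2 from each OSD disk.
# DRYRUN=1 (the default here) only prints the commands; set DRYRUN=0
# on a real OSD node to actually delete the partitions.
DRYRUN=${DRYRUN:-1}
for j in b c d e; do
  for i in 1 2; do
    if [ "$DRYRUN" = 1 ]; then
      echo "would run: parted -s /dev/xvd$j rm $i"
    else
      parted -s "/dev/xvd$j" rm "$i"
    fi
  done
done
```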
The fixes will be in v3.1.0rc3 and v3.0.34.
v3.1.0rc3 and v3.0.34 have been released in the meantime; the fixes will be in v3.1.0rc4 and v3.0.35.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2018:2819