Bug 1384846 - [ceph-ansible]: can fail with "Invalid partition data!"
Summary: [ceph-ansible]: can fail with "Invalid partition data!"
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Storage Console
Classification: Red Hat Storage
Component: ceph-ansible
Version: 2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: 2
Assignee: Sébastien Han
QA Contact: Tejas
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-10-14 09:17 UTC by John Harrigan
Modified: 2020-03-11 15:18 UTC
CC List: 15 users

Fixed In Version: ceph-ansible-2.1.9-1.el7scon
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-06-19 13:15:40 UTC
Embargoed:


Attachments
purge-cluster behavior (30.45 KB, text/plain)
2016-10-19 12:17 UTC, John Harrigan


Links
GitHub ceph/ceph-ansible issue 759 (closed): sgdisk fails to fully clear/wipe device. Last updated: 2020-05-20 16:19:35 UTC
Red Hat Product Errata RHBA-2017:1496 (SHIPPED_LIVE): ceph-installer, ceph-ansible, and ceph-iscsi-ansible update. Last updated: 2017-06-19 17:14:02 UTC

Description John Harrigan 2016-10-14 09:17:24 UTC
Description of problem:
TASK: [ceph-osd | prepare osd disk(s)] can fail with an 'Invalid partition data!' message. More thorough disk zapping could avoid the failure.

Version-Release number of selected component (if applicable):
ceph-ansible.noarch                  1.0.5-34.el7scon                  @RHSCON-2

How reproducible:
Somewhat. I was installing on systems that had previously been running RHCS 2, so the disks were already stamped with Ceph partitions and carried an FSID label. In an effort to clean them I ran 'purge-cluster.yml'. The failure occurred on the NVMe journal devices (see Actual Results below).

Steps to Reproduce:
1. On systems with RHCS 2 already installed, reprovision with RHEL 7.3 Snapshot 5
2. Run ceph-ansible with defaults (to create an FSID)
3. ceph-ansible fails with non-matching FSIDs
4. Run 'purge-cluster.yml'
5. Run ceph-ansible again, resulting in the 'Invalid partition data!' failure

Actual results:
Sample failure message:
failed: [gprfs041.sbu.lab.eng.bos.redhat.com] => (item=[{u'cmd': u"parted --script /dev/sdb print | egrep -sq '^ 1.*ceph'", u'end': u'2016-10-13 09:18:18.994689', 'failed': False, u'stdout': u'', u'changed': False, u'rc': 1, u'start': u'2016-10-13 09:18:18.988061', 'item': '/dev/sdb', u'warnings': [], u'delta': u'0:00:00.006628', 'invocation': {'module_name': u'shell', 'module_complex_args': {}, 'module_args': u"parted --script /dev/sdb print | egrep -sq '^ 1.*ceph'"}, 'stdout_lines': [], 'failed_when_result': False, u'stderr': u''}, {u'cmd': u"echo '/dev/sdb' | egrep '/dev/([hsv]d[a-z]{1,2}|cciss/c[0-9]d[0-9]p|nvme[0-9]n[0-9]p)[0-9]{1,2}$'", u'end': u'2016-10-13 09:17:50.624717', 'failed': False, u'stdout': u'', u'changed': False, u'rc': 1, u'start': u'2016-10-13 09:17:50.618632', 'item': '/dev/sdb', u'warnings': [], u'delta': u'0:00:00.006085', 'invocation': {'module_name': u'shell', 'module_complex_args': {}, 'module_args': u"echo '/dev/sdb' | egrep '/dev/([hsv]d[a-z]{1,2}|cciss/c[0-9]d[0-9]p|nvme[0-9]n[0-9]p)[0-9]{1,2}$'"}, 'stdout_lines': [], 'failed_when_result': False, u'stderr': u''}, '/dev/sdb', '/dev/nvme0n1']) => {"changed": false, "cmd": ["ceph-disk", "prepare", "--cluster", "ceph", "/dev/sdb", "/dev/nvme0n1"], "delta": "0:00:00.258114", "end": "2016-10-13 09:18:25.367822", "item": [{"changed": false, "cmd": "parted --script /dev/sdb print | egrep -sq '^ 1.*ceph'", "delta": "0:00:00.006628", "end": "2016-10-13 09:18:18.994689", "failed": false, "failed_when_result": false, "invocation": {"module_args": "parted --script /dev/sdb print | egrep -sq '^ 1.*ceph'", "module_complex_args": {}, "module_name": "shell"}, "item": "/dev/sdb", "rc": 1, "start": "2016-10-13 09:18:18.988061", "stderr": "", "stdout": "", "stdout_lines": [], "warnings": []}, {"changed": false, "cmd": "echo '/dev/sdb' | egrep '/dev/([hsv]d[a-z]{1,2}|cciss/c[0-9]d[0-9]p|nvme[0-9]n[0-9]p)[0-9]{1,2}$'", "delta": "0:00:00.006085", "end": "2016-10-13 09:17:50.624717", "failed": false, "failed_when_result": false, "invocation": {"module_args": "echo '/dev/sdb' | egrep '/dev/([hsv]d[a-z]{1,2}|cciss/c[0-9]d[0-9]p|nvme[0-9]n[0-9]p)[0-9]{1,2}$'", "module_complex_args": {}, "module_name": "shell"}, "item": "/dev/sdb", "rc": 1, "start": "2016-10-13 09:17:50.618632", "stderr": "", "stdout": "", "stdout_lines": [], "warnings": []}, "/dev/sdb", "/dev/nvme0n1"], "rc": 1, "start": "2016-10-13 09:18:25.109708", "warnings": []}
stderr: prepare_device: OSD will not be hot-swappable if journal is not the same device as the osd data
Invalid partition data!
Traceback (most recent call last):
  File "/sbin/ceph-disk", line 9, in <module>
    load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 4994, in run
    main(sys.argv[1:])
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 4947, in main
    main_catch(args.func, args)
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 4972, in main_catch
    func(args)
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 1774, in main
    Prepare.factory(args).prepare()
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 1762, in prepare
    self.prepare_locked()
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 1794, in prepare_locked
    self.data.prepare(self.journal)
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 2446, in prepare
    self.prepare_device(*to_prepare_list)
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 2622, in prepare_device
    to_prepare.prepare()
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 1964, in prepare
    self.prepare_device()
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 2054, in prepare_device
    num=num)
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 1522, in create_partition
    self.path,
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 439, in command_check_call
    return subprocess.check_call(arguments)
  File "/usr/lib64/python2.7/subprocess.py", line 542, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/sbin/sgdisk', '--new=1:0:+5000M', '--change-name=1:ceph journal', '--partition-guid=1:5934c7e3-5623-496e-88d2-22fc40679987', '--typecode=1:45b0969e-9b03-4f30-b4c6-b4b80ceff106', '--mbrtogpt', '--', '/dev/nvme0n1']' returned non-zero exit status 2


Expected results:
TASK: [ceph-osd | prepare osd disk(s)] should succeed

Additional info:
By executing "sgdisk --zap-all" on the devices before running ceph-ansible, I was able to continue.
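
A minimal sketch of that manual zap (device names are placeholders for the
OSD data and journal devices in the inventory; run on each OSD node before
ceph-ansible):

  # Wipe both the data disk and the NVMe journal device so no stale
  # GPT/MBR metadata survives from the previous cluster.
  for dev in /dev/sdb /dev/nvme0n1; do
      sgdisk --zap-all "$dev"    # destroy GPT and MBR data structures
      partprobe "$dev"           # have the kernel re-read the now-empty table
  done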

Comment 2 John Harrigan 2016-10-14 09:32:25 UTC
The approach of running a separate instance of "sgdisk --zap-all" is described here:
  https://bugs.launchpad.net/ubuntu/+source/gdisk/+bug/1303903
See comment #4

Comment 5 seb 2016-10-17 09:23:10 UTC
I'm confused: the error shown doesn't come from purging anything, it comes from creating a partition. Where is the actual purge-cluster error from the attempt to purge the NVMe device?

Thanks!

Comment 6 John Harrigan 2016-10-19 12:15:05 UTC
I have attached the output from purge-cluster.yml.

I have also added Ben England to the CC, since he helped me debug this.

Comment 7 John Harrigan 2016-10-19 12:17:06 UTC
Created attachment 1212132 [details]
purge-cluster behavior

Comment 8 Alfredo Deza 2016-10-19 12:28:19 UTC
It looks like ceph-disk should also get this ticket; ceph-ansible already doubles down on calling zap in other places:

https://github.com/ceph/ceph-ansible/blob/master/roles/ceph-osd/tasks/check_devices_static.yml#L18-L24
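
For illustration, a hedged sketch of the kind of extra zap that task applies
before devices are handed to ceph-disk (the authoritative command is in the
task linked above; the device name is a placeholder):

  dev=/dev/nvme0n1                                # placeholder journal/OSD device
  sgdisk --zap-all --clear --mbrtogpt -- "$dev"   # destroy existing GPT/MBR data and write a fresh empty GPT
  partprobe "$dev"                                # have the kernel pick up the empty table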

Comment 9 John Harrigan 2016-10-19 13:07:21 UTC
I wanted to add that this cluster was failing deployment due to
firewalld settings, getting stuck at the 'activate OSD devices' task.

Once the firewalld service was stopped, ceph-ansible deployed
successfully.

I don't believe the errors in the 'prepare osd disk(s)' task cited in this
BZ were due to the firewall settings, but I wanted to add the info.

Comment 10 seb 2016-10-20 13:56:45 UTC
Can you try with the latest version of purge-cluster.yml?
https://github.com/ceph/ceph-ansible/blob/master/infrastructure-playbooks/purge-cluster.yml

Thanks!
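
Note that this playbook uses relative role includes, so it needs to be run
from a full ceph-ansible tree rather than as a standalone file. A hedged
sketch, assuming the RPM install location /usr/share/ceph-ansible:

  cd /usr/share/ceph-ansible
  mkdir -p infrastructure-playbooks
  curl -o infrastructure-playbooks/purge-cluster.yml \
      https://raw.githubusercontent.com/ceph/ceph-ansible/master/infrastructure-playbooks/purge-cluster.yml
  ansible-playbook infrastructure-playbooks/purge-cluster.yml

Fetching only the playbook file and running it elsewhere leaves
../roles/ceph-common unresolved, which appears to be what the next comment
runs into.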

Comment 11 John Harrigan 2016-10-26 15:32:53 UTC
I downloaded the new version of *only* that file and ran it. Errors resulted.
Note that this is on an existing RHCS 2.0 cluster.

-------------------------
# ansible-playbook purge-cluster.yml 
Are you sure you want to purge the cluster? [no]: yes

PLAY [confirm whether user really meant to purge the cluster] ***************** 

TASK: [exit playbook, if user did not mean to purge cluster] ****************** 
skipping: [localhost]

PLAY [gather facts and check if using systemd] ******************************** 

GATHERING FACTS *************************************************************** 
ok: [gprfc092.sbu.lab.eng.bos.redhat.com]
ok: [gprfs044.sbu.lab.eng.bos.redhat.com]
ok: [gprfs042.sbu.lab.eng.bos.redhat.com]
ok: [gprfs041.sbu.lab.eng.bos.redhat.com]

TASK: [are we using systemd] ************************************************** 
changed: [gprfs042.sbu.lab.eng.bos.redhat.com]
changed: [gprfs041.sbu.lab.eng.bos.redhat.com]
changed: [gprfc092.sbu.lab.eng.bos.redhat.com]
changed: [gprfs044.sbu.lab.eng.bos.redhat.com]

PLAY [purge ceph mds cluster] ************************************************* 
skipping: no hosts matched

PLAY [purge ceph rgw cluster] ************************************************* 
skipping: no hosts matched

PLAY [purge ceph rbd-mirror cluster] ****************************************** 
skipping: no hosts matched

PLAY [purge ceph nfs cluster] ************************************************* 
skipping: no hosts matched

PLAY [purge ceph osd cluster] ************************************************* 

TASK: [include_vars ../roles/ceph-common/defaults/main.yml] ******************* 
failed: [gprfs041.sbu.lab.eng.bos.redhat.com] => {"failed": true, "file": "/usr/share/roles/ceph-common/defaults/main.yml"}
msg: Source file not found.
failed: [gprfs042.sbu.lab.eng.bos.redhat.com] => {"failed": true, "file": "/usr/share/roles/ceph-common/defaults/main.yml"}
msg: Source file not found.
failed: [gprfs044.sbu.lab.eng.bos.redhat.com] => {"failed": true, "file": "/usr/share/roles/ceph-common/defaults/main.yml"}
msg: Source file not found.

FATAL: all hosts have already failed -- aborting

PLAY RECAP ******************************************************************** 
           to retry, use: --limit @/root/purge-cluster.retry

gprfc092.sbu.lab.eng.bos.redhat.com : ok=2    changed=1    unreachable=0    failed=0   
gprfs041.sbu.lab.eng.bos.redhat.com : ok=2    changed=1    unreachable=0    failed=1   
gprfs042.sbu.lab.eng.bos.redhat.com : ok=2    changed=1    unreachable=0    failed=1   
gprfs044.sbu.lab.eng.bos.redhat.com : ok=2    changed=1    unreachable=0    failed=1   
localhost                  : ok=0    changed=0    unreachable=0    failed=0   
-----------------------------------

I then reverted to the original version of 'purge-cluster.yml' and got
a clean run, including the 'zapping' tasks.

At this point I need to take the cluster to the latest version of RHCS 2.1,
so I am re-installing.

Comment 12 Ken Dreyer (Red Hat) 2017-03-03 17:17:16 UTC
We think this is fixed in the latest builds currently undergoing testing
(ceph-ansible-2.1.9-1.el7scon as of this writing). Would you please retest
with these builds?

Comment 13 John Harrigan 2017-03-03 22:43:16 UTC
Hello Ken,

I am out of the office early next week.
I expect I can take a look at this at the end of next week.

- John

Comment 14 John Harrigan 2017-03-17 13:19:12 UTC
I will not be able to verify this in the near term.
The condition that triggered the failure requires considerable setup
time, namely starting with a RHCS 2.0 cluster, purging it, and then
installing RHCS 2.1. Given other project priorities and limited
hardware resources, I cannot reproduce this now.

sorry,
John

Comment 17 Tejas 2017-05-11 07:24:21 UTC
Hi Seb,

This issue was seen only on NVMe disks. Since we don't have the hardware, is there any other way I can verify this, or at least simulate it?

Thanks,
Tejas

Comment 18 seb 2017-05-12 12:45:50 UTC
Hmm, you can try to link any given partition to /dev/nvme0n1.

Something like ln -f /dev/sdb1 /dev/nvme0n1

Then add /dev/nvme0n1 to your device list.
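
Put together, a hedged sketch of that simulation (/dev/sdb1 is a placeholder
spare partition on the OSD node):

  ln -f /dev/sdb1 /dev/nvme0n1   # expose an existing partition node under an NVMe-style name
  ls -l /dev/nvme0n1             # confirm the fake device node exists
  # then list /dev/nvme0n1 under 'devices:' in the ceph-ansible OSD variables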

Comment 19 Tejas 2017-05-19 11:50:38 UTC
Hi Seb,

  Creating a link to a normal hard disk doesn't work.
I tried creating the /dev/nvme0n1 link file, but ceph-ansible fails to read the partition table of this disk.
Any other way to verify this BZ?

Thanks,
Tejas

Comment 20 seb 2017-05-19 11:53:25 UTC
Can I see the error?

Comment 22 seb 2017-05-19 15:23:16 UTC
OK, it seems that this little hack won't work then.
It looks like we might have to wait for an NVMe drive...

Comment 23 Ben England 2017-05-19 15:49:35 UTC
Cc'ing Deepthi Dharwar. She has NVMe drives in her ceph-ansible BAGL configuration and has used NVMe partitions both as SSD journal devices and as OSDs, so she may have observed this problem. We also have NVMe drives in the scale lab.

From what I'm told, NVMe drives as SSD journals work fine in the scale lab; all the storage servers there have at least one NVMe drive, I think. So we can try out ceph-ansible when we get a Pike build there and see, or we can just run ceph-ansible directly on those machines.

Comment 24 Harish NV Rao 2017-05-22 09:02:27 UTC
Deepthi, can you please check as per comment 23 and help us verify this bug?

Comment 25 seb 2017-05-22 10:03:56 UTC
Thanks for jumping in Ben :)

Comment 26 Deepthi Dharwar 2017-05-22 10:31:53 UTC
I have been using NVMe drives both as SSD journal devices and as OSDs for my benchmark runs.
I have purged the cluster a few times in the past but have not seen this issue.

At present I do not want to tear down my cluster as I am in the middle of runs.
I will definitely keep you updated if I do so in the near future.

Running RHEL 7.3 
ceph-ansible-1.0.5-34.el7scon.noarch
ceph version 10.2.3-2.el7cp (e3499ea386b9456f7e17417e091f0a1fefddb3f5)

Comment 27 Harish NV Rao 2017-05-24 10:10:04 UTC
(In reply to John Harrigan from comment #14)
> I will not be able to verify this in the near term.
> The condition that triggered the failure requires considerable setup
> time, namely starting with a RHCS 2.0 cluster, purging it, and then
> installing RHCS 2.1. Given other project priorities and limited
> hardware resources, I cannot reproduce this now.
> 
> sorry,
> John

Hi John,

Would it be possible for you to test this BZ with the latest versions of ceph-ansible (>= ceph-ansible-2.1.9-1.el7scon) and rhceph (10.2.7-x)?

Please let me know.

Regards,
Harish

Comment 28 John Harrigan 2017-05-24 14:48:46 UTC
Actually I just finished installing RHCS 2.3 pre-release on a cluster which was previously running RHCS 2.2. 
I grabbed the bits from here:
  baseurl=http://download-node-02.eng.bos.redhat.com/rcm-guest/ceph-drops/auto/ceph-2-rhel-7-compose/latest-RHCEPH-2-RHEL-7/compose

which installed: 
  # yum list installed | grep ceph
  ceph-ansible.noarch                  2.2.6-1.el7scon          @RHSCON-2_3       
  ceph-common.x86_64                   1:10.2.7-19.el7cp        @RHCEPH-23-MON    
  ceph-iscsi-ansible.noarch            1.5-4.el7scon            installed         
  libcephfs1.x86_64                    1:10.2.7-19.el7cp        @RHCEPH-23-MON    
  python-cephfs.x86_64                 1:10.2.7-19.el7cp        @RHCEPH-23-MON    


The TASK: [ceph-osd | prepare osd disk(s)] completed with no issues.

- John

Comment 29 Harish NV Rao 2017-05-24 14:50:36 UTC
Thanks, John. If the issue is resolved, could you please move the defect to the VERIFIED state?

Comment 30 seb 2017-06-15 16:38:52 UTC
This should not be part of the release note.

Comment 32 errata-xmlrpc 2017-06-19 13:15:40 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:1496

