Description  Ameena Suhani S H  2020-09-10 07:21:46 UTC
Description of problem:
Upgrade from 4.1z1 to 4.1z2 fails at TASK [activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus].
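For quick reference, the failing task effectively runs the command below on each OSD host. The command itself is copied verbatim from the failure output further down; attributing it to this task is an inference from the task name, not from the playbook source.

  # Command executed by the failing task (copied from the failure output below)
  ceph-volume --cluster=ceph simple activate --all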
Version-Release number of selected component (if applicable):
ceph-ansible-4.0.30-1.el7cp.noarch
ansible-2.8.13-1.el7ae.noarch
How reproducible:
1/1
Steps to Reproduce:
1. Install 3.3z6 with the ceph-disk OSD scenario.
2. Upgrade to 4.1z1.
3. Upgrade from 4.1z1 to 4.1z2 (see the invocation sketch below).
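A minimal sketch of how such an upgrade is typically driven with ceph-ansible's rolling update playbook; the install path, inventory location, and extra variable are assumptions and are not taken from this report:

  # Rolling upgrade sketch (install path, inventory, and the extra var are assumptions)
  cd /usr/share/ceph-ansible
  ansible-playbook -vv -i /etc/ansible/hosts \
      infrastructure-playbooks/rolling_update.yml \
      -e ireallymeanit=yes   # answers the playbook's confirmation prompt non-interactively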
Actual results:
Upgrade failed with the following error:
TASK [activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus]
fatal: [magna074]: FAILED! => changed=true
cmd:
- ceph-volume
- --cluster=ceph
- simple
- activate
- --all
delta: '0:00:01.015613'
end: '2020-09-10 06:08:53.377978'
invocation:
module_args:
_raw_params: ceph-volume --cluster=ceph simple activate --all
_uses_shell: false
argv: null
chdir: null
creates: null
executable: null
removes: null
stdin: null
stdin_add_newline: true
strip_empty_ends: true
warn: true
msg: non-zero return code
rc: 1
start: '2020-09-10 06:08:52.362365'
stderr: |-
--> activating OSD specified in /etc/ceph/osd/6-ffb10c54-0a4a-4a22-a5ad-a37f2561d95e.json
Running command: /bin/mount -v /var/lib/ceph/osd/ceph-6
stderr: mount: is write-protected, mounting read-only
stderr: mount: unknown filesystem type '(null)'
Traceback (most recent call last):
File "/sbin/ceph-volume", line 9, in <module>
load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')()
File "/usr/lib/python2.7/site-packages/ceph_volume/main.py", line 39, in __init__
self.main(self.argv)
File "/usr/lib/python2.7/site-packages/ceph_volume/decorators.py", line 59, in newfunc
return f(*a, **kw)
File "/usr/lib/python2.7/site-packages/ceph_volume/main.py", line 150, in main
terminal.dispatch(self.mapper, subcommand_args)
File "/usr/lib/python2.7/site-packages/ceph_volume/terminal.py", line 194, in dispatch
instance.main()
File "/usr/lib/python2.7/site-packages/ceph_volume/devices/simple/main.py", line 33, in main
terminal.dispatch(self.mapper, self.argv)
File "/usr/lib/python2.7/site-packages/ceph_volume/terminal.py", line 194, in dispatch
instance.main()
File "/usr/lib/python2.7/site-packages/ceph_volume/devices/simple/activate.py", line 280, in main
self.activate(args)
File "/usr/lib/python2.7/site-packages/ceph_volume/decorators.py", line 16, in is_root
return func(*a, **kw)
File "/usr/lib/python2.7/site-packages/ceph_volume/devices/simple/activate.py", line 178, in activate
process.run(['mount', '-v', data_device, osd_dir])
File "/usr/lib/python2.7/site-packages/ceph_volume/process.py", line 153, in run
raise RuntimeError(msg)
RuntimeError: command returned non-zero exit status: 32
stderr_lines: <omitted>
stdout: ''
stdout_lines: <omitted>
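Note: the mount call in the traceback (/bin/mount -v /var/lib/ceph/osd/ceph-6) appears to be missing its source device argument, which together with "unknown filesystem type '(null)'" suggests the scanned metadata for OSD 6 does not carry a usable data device or filesystem type. A hedged diagnostic sketch; the JSON key layout, the placeholder device, and the --force behaviour are assumptions based on how ceph-volume simple scan normally works:

  # Inspect the scanned metadata that 'simple activate' consumes
  # (a "data" entry with a device path and a filesystem type is expected;
  #  the exact key names are an assumption, not confirmed in this report)
  python -m json.tool /etc/ceph/osd/6-ffb10c54-0a4a-4a22-a5ad-a37f2561d95e.json

  # Re-scan the OSD's data partition to regenerate the JSON, then retry activation
  # (/dev/<osd6-data-partition> is a placeholder; --force is assumed to overwrite
  #  an existing JSON file - check 'ceph-volume simple scan --help')
  ceph-volume --cluster=ceph simple scan --force /dev/<osd6-data-partition>
  ceph-volume --cluster=ceph simple activate --all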
Expected results:
Upgrade should be successful.
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory (Red Hat Ceph Storage 4.1 Bug Fix update), and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHBA-2020:4144