Bug 1877672 - [Ceph-volume] Upgrade fails from 4.1z1 to 4.1z2 at TASK [activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus]
Summary: [Ceph-volume] Upgrade fails from 4.1z1 to 4.1z2 at TASK [activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus]
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Volume
Version: 4.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: urgent
Target Milestone: z2
Target Release: 4.1
Assignee: Guillaume Abrioux
QA Contact: Ameena Suhani S H
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-09-10 07:21 UTC by Ameena Suhani S H
Modified: 2020-09-30 17:27 UTC
CC List: 11 users

Fixed In Version: ceph-14.2.8-111.el8cp, ceph-14.2.8-111.el7cp
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-09-30 17:27:03 UTC
Embargoed:




Links
Github ceph/ceph pull 37093 (closed): ceph-volume: fix simple activate when legacy osd (last updated 2020-11-21 23:34:09 UTC)
Github ceph/ceph pull 37234 (closed): ceph-volume: fix wrong type passed in terminal.warning() (last updated 2020-11-21 23:34:09 UTC)
Red Hat Product Errata RHBA-2020:4144 (last updated 2020-09-30 17:27:15 UTC)

Description Ameena Suhani S H 2020-09-10 07:21:46 UTC
Description of problem:
Upgrade fails from 4.1z1 to 4.1z2 at TASK [activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus]

Version-Release number of selected component (if applicable):
ceph-ansible-4.0.30-1.el7cp.noarch
ansible-2.8.13-1.el7ae.noarch


How reproducible:
1/1

Steps to Reproduce:
1. Install 3.3z6 with the ceph-disk OSD scenario
2. Upgrade to 4.1z1
3. Upgrade from 4.1z1 to 4.1z2

Actual results:
Upgrade failed with the following error:
TASK [activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus]
fatal: [magna074]: FAILED! => changed=true
  cmd:
  - ceph-volume
  - --cluster=ceph
  - simple
  - activate
  - --all
  delta: '0:00:01.015613'
  end: '2020-09-10 06:08:53.377978'
  invocation:
    module_args:
      _raw_params: ceph-volume --cluster=ceph simple activate --all
      _uses_shell: false
      argv: null
      chdir: null
      creates: null
      executable: null
      removes: null
      stdin: null
      stdin_add_newline: true
      strip_empty_ends: true
      warn: true
  msg: non-zero return code
  rc: 1
  start: '2020-09-10 06:08:52.362365'
  stderr: |-
    --> activating OSD specified in /etc/ceph/osd/6-ffb10c54-0a4a-4a22-a5ad-a37f2561d95e.json
    Running command: /bin/mount -v  /var/lib/ceph/osd/ceph-6
     stderr: mount:  is write-protected, mounting read-only
     stderr: mount: unknown filesystem type '(null)'
    Traceback (most recent call last):
      File "/sbin/ceph-volume", line 9, in <module>
        load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')()
File "/usr/lib/python2.7/site-packages/ceph_volume/main.py", line 39, in __init__
        self.main(self.argv)
      File "/usr/lib/python2.7/site-packages/ceph_volume/decorators.py", line 59, in newfunc
        return f(*a, **kw)
      File "/usr/lib/python2.7/site-packages/ceph_volume/main.py", line 150, in main
        terminal.dispatch(self.mapper, subcommand_args)
      File "/usr/lib/python2.7/site-packages/ceph_volume/terminal.py", line 194, in dispatch
        instance.main()
      File "/usr/lib/python2.7/site-packages/ceph_volume/devices/simple/main.py", line 33, in main
        terminal.dispatch(self.mapper, self.argv)
      File "/usr/lib/python2.7/site-packages/ceph_volume/terminal.py", line 194, in dispatch
        instance.main()
      File "/usr/lib/python2.7/site-packages/ceph_volume/devices/simple/activate.py", line 280, in main
        self.activate(args)
      File "/usr/lib/python2.7/site-packages/ceph_volume/decorators.py", line 16, in is_root
        return func(*a, **kw)
      File "/usr/lib/python2.7/site-packages/ceph_volume/devices/simple/activate.py", line 178, in activate
        process.run(['mount', '-v', data_device, osd_dir])
      File "/usr/lib/python2.7/site-packages/ceph_volume/process.py", line 153, in run
        raise RuntimeError(msg)
    RuntimeError: command returned non-zero exit status: 32
  stderr_lines: <omitted>
  stdout: ''
  stdout_lines: <omitted>
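
The traceback shows activate() failing at process.run(['mount', '-v', data_device, osd_dir]) with an empty data_device (note the double space in the logged mount command), so mount exits non-zero and ceph-volume raises RuntimeError. The following minimal sketch reproduces that failure mode; it is not the actual ceph-volume code, and the JSON field names and paths are illustrative only:

    #!/usr/bin/env python3
    # Illustration of the failure: if the device path recovered from the
    # scanned legacy ceph-disk OSD JSON is empty, mount is invoked as
    # "mount -v '' <osd_dir>" and fails, which is then raised as a
    # RuntimeError (mirroring ceph_volume.process.run behaviour).
    import json
    import subprocess

    # Hypothetical scanned OSD record whose data device did not resolve.
    osd_metadata = json.loads('{"data": {"path": ""}}')

    data_device = osd_metadata["data"]["path"]   # empty -> broken mount call
    osd_dir = "/var/lib/ceph/osd/ceph-6"

    result = subprocess.run(["mount", "-v", data_device, osd_dir])
    if result.returncode != 0:
        raise RuntimeError(
            "command returned non-zero exit status: %d" % result.returncode)

The referenced pull request (37093, "ceph-volume: fix simple activate when legacy osd") addresses this activate path for OSDs originally deployed with ceph-disk.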


Expected results:
Upgrade should be successful.

Comment 22 errata-xmlrpc 2020-09-30 17:27:03 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 4.1 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4144

