Bug 1880458 - [Ceph-volume] clone of bug 1877672 : Upgrade fail at [activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus]
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Volume
Version: 4.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 4.2
Assignee: Guillaume Abrioux
QA Contact: Ameena Suhani S H
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-09-18 15:34 UTC by Vasishta
Modified: 2021-01-12 14:57 UTC
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-01-12 14:57:11 UTC
Embargoed:


Attachments: None


Links
Github ceph/ceph pull 37093 (closed): ceph-volume: fix simple activate when legacy osd (last updated 2021-01-31 12:59:10 UTC)
Github ceph/ceph pull 37234 (closed): ceph-volume: fix wrong type passed in terminal.warning() (last updated 2021-01-31 12:59:09 UTC)
Red Hat Product Errata RHSA-2021:0081 (last updated 2021-01-12 14:57:38 UTC)

Description Vasishta 2020-09-18 15:34:22 UTC
This bug was initially created as a copy of Bug #1877672

I am copying this bug to track the respective fixes in the 4.2 branch.

Description of problem:
Upgrade fails from 4.1z1 to 4.1z2 at TASK [activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus]

Version-Release number of selected component (if applicable):
ceph-ansible-4.0.30-1.el7cp.noarch
ansible-2.8.13-1.el7ae.noarch


How reproducible:
1/1

Steps to Reproduce:
1. Install 3.3z6 with the ceph-disk OSD scenario.
2. Upgrade to 4.1z1.
3. Upgrade from 4.1z1 to 4.1z2.

Actual results:
Upgrade failed with the following error:
TASK [activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus]
fatal: [magna074]: FAILED! => changed=true
  cmd:
  - ceph-volume
  - --cluster=ceph
  - simple
  - activate
  - --all
  delta: '0:00:01.015613'
  end: '2020-09-10 06:08:53.377978'
  invocation:
    module_args:
      _raw_params: ceph-volume --cluster=ceph simple activate --all
      _uses_shell: false
      argv: null
      chdir: null
      creates: null
      executable: null
      removes: null
      stdin: null
      stdin_add_newline: true
      strip_empty_ends: true
      warn: true
  msg: non-zero return code
  rc: 1
  start: '2020-09-10 06:08:52.362365'
  stderr: |-
    --> activating OSD specified in /etc/ceph/osd/6-ffb10c54-0a4a-4a22-a5ad-a37f2561d95e.json
    Running command: /bin/mount -v  /var/lib/ceph/osd/ceph-6
     stderr: mount:  is write-protected, mounting read-only
     stderr: mount: unknown filesystem type '(null)'
    Traceback (most recent call last):
      File "/sbin/ceph-volume", line 9, in <module>
        load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')()
File "/usr/lib/python2.7/site-packages/ceph_volume/main.py", line 39, in __init__
        self.main(self.argv)
      File "/usr/lib/python2.7/site-packages/ceph_volume/decorators.py", line 59, in newfunc
        return f(*a, **kw)
      File "/usr/lib/python2.7/site-packages/ceph_volume/main.py", line 150, in main
        terminal.dispatch(self.mapper, subcommand_args)
      File "/usr/lib/python2.7/site-packages/ceph_volume/terminal.py", line 194, in dispatch
        instance.main()
      File "/usr/lib/python2.7/site-packages/ceph_volume/devices/simple/main.py", line 33, in main
        terminal.dispatch(self.mapper, self.argv)
      File "/usr/lib/python2.7/site-packages/ceph_volume/terminal.py", line 194, in dispatch
        instance.main()
      File "/usr/lib/python2.7/site-packages/ceph_volume/devices/simple/activate.py", line 280, in main
        self.activate(args)
      File "/usr/lib/python2.7/site-packages/ceph_volume/decorators.py", line 16, in is_root
        return func(*a, **kw)
      File "/usr/lib/python2.7/site-packages/ceph_volume/devices/simple/activate.py", line 178, in activate
        process.run(['mount', '-v', data_device, osd_dir])
      File "/usr/lib/python2.7/site-packages/ceph_volume/process.py", line 153, in run
        raise RuntimeError(msg)
    RuntimeError: command returned non-zero exit status: 32
  stderr_lines: <omitted>
  stdout: ''
  stdout_lines: <omitted>
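
For illustration, the sketch below reproduces the failing step in isolation: "simple activate" loads the scanned JSON from /etc/ceph/osd/ and mounts the recorded data device onto the OSD directory, and in the traceback above mount is invoked with an empty device string, so it exits with status 32. The JSON key names ('whoami', 'data'/'path') and the guard against an empty device are assumptions made only to illustrate the failure mode; this is not the actual fix, which is tracked in the linked pull request 37093.

# Minimal standalone sketch (NOT the upstream fix): read the scanned JSON,
# resolve the data device, and refuse to call mount(8) with an empty string.
# The JSON layout ('whoami', 'data' -> 'path') is an assumption for illustration.
import json
import subprocess
import sys


def activate_from_json(json_path):
    with open(json_path) as f:
        osd_metadata = json.load(f)

    osd_id = osd_metadata.get('whoami')
    data_device = (osd_metadata.get('data') or {}).get('path')
    osd_dir = '/var/lib/ceph/osd/ceph-{}'.format(osd_id)

    if not data_device:
        # Without this check the empty string reaches mount(8) and the
        # command fails exactly like the log above (exit status 32).
        raise RuntimeError('no data device recorded in {}; '
                           'refusing to run mount'.format(json_path))

    # Roughly equivalent to process.run(['mount', '-v', data_device, osd_dir])
    subprocess.check_call(['mount', '-v', data_device, osd_dir])


if __name__ == '__main__':
    activate_from_json(sys.argv[1])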


Expected results:
Upgrade should be successful.

Comment 3 Yaniv Kaul 2020-10-01 12:26:31 UTC
So, do we need this BZ or not?

Comment 10 errata-xmlrpc 2021-01-12 14:57:11 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat Ceph Storage 4.2 Security and Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:0081

