
Bug 1592982

Summary: [docs] OSD removal guide should cover removing containerized OSDs
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: John Fulton <johfulto>
Component: Documentation
Assignee: Ranjini M N <rmandyam>
Status: CLOSED CURRENTRELEASE
QA Contact: Tejas <tchandra>
Severity: high
Docs Contact:
Priority: unspecified
Version: 3.2
CC: agunn, edonnell, kdreyer, mmurthy, rmandyam
Target Milestone: z4
Target Release: 3.3
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2020-06-11 12:11:54 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1809203

Description John Fulton 2018-06-19 19:00:53 UTC
Either the RHCS 3 Container Guide [1] should cover removing a containerized OSD, or the Ceph Administration Guide [2] should also cover removing an OSD that runs in a container. For example, this is fine for non-containerized deployments:

# systemctl disable ceph-osd@4
# systemctl stop ceph-osd@4

but in containers the systemd unit instance is named after the data device rather than the numeric OSD ID, so the above becomes:

# systemctl disable ceph-osd@sdb
# systemctl stop ceph-osd@sdb

Perhaps you could just add to the administration guide [2] how the ID can vary?
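One way the docs could illustrate the mapping is with a small helper that derives the unit instance from the data device. This is only a sketch: the function name osd_unit_for_device is hypothetical, and it assumes the unit instance is simply the device basename (e.g. /dev/sdb -> ceph-osd@sdb), matching the examples above.

```shell
#!/bin/sh
# Hypothetical helper (not part of the product): derive the systemd unit
# name of a containerized OSD from its data device path. Assumes the unit
# instance is the device basename, as in the examples above.
osd_unit_for_device() {
    echo "ceph-osd@$(basename "$1")"
}

osd_unit_for_device /dev/sdb   # prints: ceph-osd@sdb
```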

[1] https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html-single/container_guide/#starting-stopping-and-restarting-ceph-daemons-that-run-in-containers

[2] https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html/administration_guide/managing_cluster_size#removing_an_osd_with_the_command_line_interface

[3] Example systemctl status output for a containerized OSD:

[root@lab-ceph03 ~]# systemctl status ceph-osd@vdb
● ceph-osd - Ceph OSD
   Loaded: loaded (/etc/systemd/system/ceph-osd@.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2018-06-19 03:05:26 UTC; 15h ago
 Main PID: 22269 (ceph-osd-run.sh)
   CGroup: /system.slice/system-ceph\x2dosd.slice/ceph-osd
           ├─22269 /bin/bash /usr/share/ceph-osd-run.sh vdb
           └─22493 /usr/bin/docker-current run --rm --net=host --privileged=true --...

Jun 19 03:05:36 lab-ceph03 ceph-osd-run.sh[22269]: move_mount: Moving mount to fin....
Jun 19 03:05:36 lab-ceph03 ceph-osd-run.sh[22269]: command_check_call: Running com...2
Jun 19 03:05:36 lab-ceph03 ceph-osd-run.sh[22269]: command_check_call: Running com...5
Jun 19 03:05:36 lab-ceph03 ceph-osd-run.sh[22269]: 2018-06-19 03:05:36  /entrypoin...S
Jun 19 03:05:36 lab-ceph03 ceph-osd-run.sh[22269]: exec: PID 24425: spawning /usr/...k
Jun 19 03:05:37 lab-ceph03 ceph-osd-run.sh[22269]: starting osd.2 at - osd_data /v...l
Jun 19 03:05:37 lab-ceph03 ceph-osd-run.sh[22269]: 2018-06-19 03:05:37.072212 7f8e...c
Jun 19 03:05:37 lab-ceph03 ceph-osd-run.sh[22269]: 2018-06-19 03:05:37.072230 7f8e...c
Jun 19 03:05:37 lab-ceph03 ceph-osd-run.sh[22269]: 2018-06-19 03:05:37.084953 7f8e...}
Jun 19 03:05:38 lab-ceph03 ceph-osd-run.sh[22269]: 2018-06-19 03:05:38.640648 7f8e...p
Hint: Some lines were ellipsized, use -l to show in full.
[root@lab-ceph03 ~]#
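Note that the numeric OSD ID still appears in the unit's journal (the "starting osd.2" line above), so the docs could also show how to recover it. As a sketch, the sample line below is hardcoded from the output above; on a live node you would pipe journalctl -u ceph-osd@vdb instead of the echo.

```shell
#!/bin/sh
# Sketch: extract the numeric OSD ID from ceph-osd-run.sh journal output.
# The sample line is copied from the status output above; on a real node,
# replace the printf with: journalctl -u ceph-osd@vdb
sample='Jun 19 03:05:37 lab-ceph03 ceph-osd-run.sh[22269]: starting osd.2 at - osd_data /v...l'
osd_id=$(printf '%s\n' "$sample" | sed -n 's/.*starting osd\.\([0-9][0-9]*\).*/\1/p')
echo "$osd_id"   # prints: 2
```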

Comment 3 Giridhar Ramaraju 2019-08-05 13:05:55 UTC
Updating the QA Contact to Hemant. Hemant will reroute them to the appropriate QE Associate.

Regards,
Giri

Comment 4 Giridhar Ramaraju 2019-08-05 13:08:39 UTC
Updating the QA Contact to Hemant. Hemant will reroute them to the appropriate QE Associate.

Regards,
Giri

Comment 5 Giridhar Ramaraju 2019-08-20 07:13:10 UTC
Level-setting the severity of this defect to "High" with a bulk update. Please refine it to a more accurate value, as defined by the severity definitions in
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity