Bug 1464465 - [Container-DOC]:- Need proper steps in the doc for manually upgrading the container images
Status: CLOSED ERRATA
Product: Red Hat Ceph Storage
Classification: Red Hat
Component: Documentation
Version: 2.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: 2.4
Assigned To: Erin Donnelly
QA Contact: ceph-qe-bugs
Depends On:
Blocks:
Reported: 2017-06-23 09:43 EDT by shylesh
Modified: 2017-06-29 15:01 EDT
CC: 10 users

See Also:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-06-29 15:01:08 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description shylesh 2017-06-23 09:43:00 EDT
Description of problem:

Following the doc https://access.redhat.com/articles/2789521 to upgrade a containerized cluster from 2.3 GA to the 2.3 async CVV, I am not able to upgrade successfully.

The doc needs correct steps for this upgrade.

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:

1. Created a 2.3 GA containerized cluster using the ceph-ansible method.

2. Tried to upgrade to the 2.3 async CVV by following the doc https://access.redhat.com/articles/2789521.

3. A few steps do not work as described in the doc.


1. Step 1 and subsection "a" from the doc work fine; I am able to stop MONs and OSDs using systemctl.
2. Step 2 also worked fine, and I was able to pull the latest Ceph container image.
3. I followed Step 3, which says "If you are upgrading to a new major version of the container image:", even though I am not doing a major version upgrade, because there was no other way to tell Docker about the latest container image version.

   In this step, subsection "a)" clearly says "if the cluster was deployed by ceph-ansible edit the /usr/share/ceph-osd-run.sh" ==> this is fine for OSDs, but it says nothing about what should be changed for MONs ==> so a doc update is needed here.

   As a workaround for this step (which, according to the doc, is actually for manually installed clusters), I edited the /etc/systemd/system/multi-user.target.wants/ceph-mon@host.service file and changed the container image version to the latest.

   Then I performed 3b) systemctl daemon-reload.

4. I followed 4a) systemctl start ceph-mon@host.service, but the daemon started with the old container image; rebooting the host solved the problem (not sure what the issue is).
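The MON workaround above can be sketched as a small shell function. This is a sketch, not the official procedure: the image name and tags are parameters, the `UNIT_DIR` override and the `sed` edit are my assumptions (any equivalent edit of the unit file works).

```shell
#!/bin/sh
# Manual MON image upgrade, following the workaround above.
# UNIT_DIR can be overridden for testing; the default is the unit
# file path mentioned in this report.
upgrade_mon() {
    host=$1 image=$2 old_tag=$3 new_tag=$4
    unit="${UNIT_DIR:-/etc/systemd/system/multi-user.target.wants}/ceph-mon@${host}.service"

    systemctl stop "ceph-mon@${host}.service"     # step 1: stop the daemon
    docker pull "${image}:${new_tag}"             # step 2: pull the new image
    sed -i "s|:${old_tag}|:${new_tag}|" "$unit"   # step 3: point the unit at the new tag
    systemctl daemon-reload                       # step 3b: reload systemd
    systemctl start "ceph-mon@${host}.service"    # step 4: start with the new image
}
```

If the daemon still comes up with the old image (as in step 4 above), `docker ps -a` may show a stale container created from the old tag; that is only a guess at the cause, which the report does not confirm.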



The doc also needs a separate section for the ceph-rgw and ceph-mds daemons' upgrade procedures, which is not present.

I tried to upgrade both RGW and MDS:
1. systemctl stop the RGW or MDS service
2. Pull the new image
3. Update the tag in /etc/systemd/system/multi-user.target.wants/ceph-rgw@host.service or ceph-mds@host.service
4. systemctl start the RGW or MDS service
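The four steps above can be generalized to any daemon type, since only the unit file name differs. A sketch under the same caveats as before (the unit-file path comes from this report; the `UNIT_DIR` override and `sed` edit are mine):

```shell
#!/bin/sh
# Generic per-daemon image upgrade: works for rgw and mds (and mon),
# because only the systemd unit name differs between them.
upgrade_daemon() {
    daemon=$1 host=$2 image=$3 old_tag=$4 new_tag=$5
    svc="ceph-${daemon}@${host}.service"
    unit="${UNIT_DIR:-/etc/systemd/system/multi-user.target.wants}/${svc}"

    systemctl stop "$svc"                         # 1. stop the daemon
    docker pull "${image}:${new_tag}"             # 2. pull the new image
    sed -i "s|:${old_tag}|:${new_tag}|" "$unit"   # 3. update the tag in the unit file
    systemctl daemon-reload
    systemctl start "$svc"                        # 4. start with the new image
}
```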
Comment 4 seb 2017-06-28 12:52:13 EDT
I just logged into magna053 and stopped the monitor:

systemctl stop ceph-mon@magna053.service

Edited /etc/systemd/system/multi-user.target.wants/ceph-mon@magna053.service and changed the image tag from 2.3-2 to 2.3-1.

Then I did: systemctl daemon-reload

Started the mon again:

systemctl start ceph-mon@magna053.service

As you can see, this works as expected:

[root@magna053 ~]# docker inspect 39b4a4f3bb04 | grep Image
        "Image": "sha256:8d1c4c834a53806b5bbd9d963fc9497d7c074b77d50ab58d12a9d506f0bdb36a",
            "Image": "brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/rhceph:2.3-2",
[root@magna053 ~]# systemctl stop ceph-mon@magna053.service
[root@magna053 ~]#
[root@magna053 ~]# vim /etc/systemd/system/multi-user.target.wants/ceph-mon@magna053.service
[root@magna053 ~]# systemctl daemon-reload
[root@magna053 ~]#
[root@magna053 ~]# systemctl start ceph-mon@magna053.service
[root@magna053 ~]#
[root@magna053 ~]# docker ps
CONTAINER ID        IMAGE                                                               COMMAND             CREATED             STATUS                  PORTS               NAMES
6c1e0dd49eec        brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/rhceph:2.3-1   "/entrypoint.sh"    2 seconds ago       Up Less than a second                       ceph-mon-magna053


[root@magna053 ~]# docker inspect 6c1e0dd49eec | grep Image
        "Image": "sha256:77c48be375e74a4ef23e6c5a197ba77765bc0c580654e3a9fc1e6e383b2551c7",
            "Image": "brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/rhceph:2.3-1",


We must have missed something.

Erin, the procedure for RGW and MDS is identical; only the systemd unit files differ.
For an RGW it's ceph-rgw@<hostname>.service, and for an MDS it's ceph-mds@<hostname>.service.

The path in /etc/ is the same; only the name changes.
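The `docker inspect | grep Image` check above can be narrowed with an inspect format string, which prints only the configured image of a running container. The `--format` template is standard Docker; the container name is just an example from this cluster.

```shell
#!/bin/sh
# Print only the image a running ceph container was started from,
# e.g. verify_image ceph-mon-magna053
verify_image() {
    docker inspect --format '{{.Config.Image}}' "$1"
}
```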
Comment 7 seb 2017-06-29 04:08:53 EDT
It's all good Erin, thanks!
Comment 11 errata-xmlrpc 2017-06-29 15:01:08 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:1667
