Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1472370

Summary: [3.5] Restart of atomic-openshift-node service terminates pod glusterfs mount
Product: OpenShift Container Platform
Reporter: Scott Dodson <sdodson>
Component: Storage
Assignee: Jan Safranek <jsafrane>
Status: CLOSED ERRATA
QA Contact: Jianwei Hou <jhou>
Severity: high
Docs Contact:
Priority: unspecified
Version: 3.5.1
CC: aos-bugs, aos-storage-staff, atumball, bchilds, bleanhar, bmchugh, csaba, ekuric, eparis, erich, hchiramm, jchaloup, jhou, jialiu, jkaur, jnordell, jokerman, jsafrane, knakayam, lxia, mmccomas, mrobson, ppospisi, rcyriac, rhs-bugs, sdodson, ssaha, sudo, tlarsson, trankin, wehe, xtian
Target Milestone: ---
Keywords: NeedsTestCase, Reopened
Target Release: 3.5.z
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: When the atomic-openshift-node service is restarted, all processes in its control group are terminated, including the glusterfs mount processes. Consequence: Each glusterfs volume in OpenShift corresponds to one mount point; when all mount points are lost, so are all the volumes. Fix: Set the service's control group kill mode to terminate only the main process, leaving the glusterfs mount processes untouched. Result: When the atomic-openshift-node service is restarted, no glusterfs mount point is terminated.
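The fix described above corresponds to systemd's control-group kill behavior. A minimal sketch of what such a change looks like as a drop-in unit file, assuming the unit name atomic-openshift-node.service (the drop-in path and exact option shipped in the actual fix may differ):

```ini
# /etc/systemd/system/atomic-openshift-node.service.d/override.conf
# Hypothetical drop-in illustrating the control-group behavior only.
[Service]
# "process" makes systemd kill only the unit's main process on stop/restart,
# leaving other processes in the control group (the glusterfs mount helpers)
# running. The default, "control-group", kills every process in the cgroup.
KillMode=process
```

With the default KillMode=control-group, a service restart sends SIGTERM to every process in the unit's control group, which is what terminated the glusterfs mounts.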
Story Points: ---
Clone Of: 1423640
Environment:
Last Closed: 2017-10-25 13:02:19 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1423640, 1424680, 1472372    
Bug Blocks: 1462254, 1466217    

Comment 1 Jan Safranek 2017-08-10 11:05:11 UTC
backport to 3.5: https://github.com/openshift/ose/pull/830

Comment 3 Magnus Glantz 2017-09-01 08:09:13 UTC
This is affecting a strategic customer of mine, please see case 01806322.

Comment 4 Jan Safranek 2017-09-01 15:46:13 UTC
Merged

Comment 7 Wenqi He 2017-09-06 08:10:22 UTC
I have tested on the version below:
openshift v3.5.5.31.27
kubernetes v1.5.2+43a9be4

This bug is fixed and can be verified once it is ON_QA.

[root@host-8-241-77 gluster]# oc rsh gluster
/ # df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/docker-253:0-16798059-dc5dcfd3c62f23aa00fa5aabb3e295b6afd95317f60a1e633ed96475f0e4b3b8
                      10467328     42872  10424456   0% /
tmpfs                  1940952         0   1940952   0% /dev
tmpfs                  1940952         0   1940952   0% /sys/fs/cgroup
/dev/mapper/rhel-root
...

Comment 12 Jan Safranek 2017-09-11 08:48:15 UTC
*** Bug 1466848 has been marked as a duplicate of this bug. ***

Comment 15 Jianwei Hou 2017-09-15 03:29:59 UTC
Verified on v3.5.5.31.27; this bug is fixed.

Comment 17 errata-xmlrpc 2017-10-25 13:02:19 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:3049