Bug 1401124

Summary: oo-accept-node reports missing quota if filesystem name contains gear uuid
Product: OpenShift Container Platform Reporter: Rory Thrasher <rthrashe>
Component: Containers    Assignee: Rory Thrasher <rthrashe>
Status: CLOSED ERRATA QA Contact: Gaoyun Pei <gpei>
Severity: unspecified Docs Contact:
Priority: unspecified    
Version: 2.2.0    CC: agrimm, aos-bugs, dma, gpei, jgoulding, jialiu, jokerman, mmccomas, rthrashe
Target Milestone: ---   
Target Release: ---   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: openshift-origin-node-util-1.38.8.1-1.el6op Doc Type: Bug Fix
Doc Text:
Cause/Consequence: Because of a grep in oo-accept-node, using a gear's UUID in a logical volume name caused oo-accept-node to fail. Fix: The grep in question was fixed. Result: Using the gear UUID in a logical volume name no longer causes oo-accept-node to fail.
Story Points: ---
Clone Of: 1367909 Environment:
Last Closed: 2017-01-04 20:23:29 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Bug Depends On: 1367909    
Bug Blocks: 1277547    
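The grep referred to in the doc text operates on quota reporting output (see the `repquota -a` output in comment 5). A minimal sketch of the failure mode, assuming the check greps that output for the gear UUID (the exact pattern oo-accept-node uses may differ; $GEAR_UUID below is a placeholder):

# GEAR_UUID=<uuid of an existing gear>
# repquota -a | grep "$GEAR_UUID"        # unanchored: also matches the "*** Report ... device ..." header of any LV whose name contains the UUID
# repquota -a | grep "^$GEAR_UUID "      # anchored: matches only the gear's own per-user quota line

An unanchored match can return a device header line that carries no per-user quota columns, so a gear with correct quotas can still be reported as missing them.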

Comment 1 Rory Thrasher 2016-12-12 21:56:26 UTC
QA,

Can we please verify this using the 2.2.11 puddle <http://etherpad.corp.redhat.com/puddle-2-2-2016-12-12> and the following instructions (sketched as commands after step 3):

1. Create a new app on a new gear.

2. Create a logical volume from additional storage.  Use the gear UUID from step 1 as the name of the volume.  Something like /dev/mapper/EBSStore02-<UUID>

3. Run oo-accept-node.  It should PASS.
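A rough sketch of steps 2 and 3 as commands on the node (the volume group name EBSStore02 is taken from the example above, the LV size is arbitrary, and <UUID> stands for the gear UUID from step 1):

# lvcreate -L 1G -n <UUID> EBSStore02      # step 2: LV named with the gear UUID
# ls /dev/mapper/EBSStore02-<UUID>         # device-mapper node for the new LV
# oo-accept-node -v                        # step 3: should end with PASS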

Comment 3 Gaoyun Pei 2016-12-14 03:04:32 UTC
Verified this bug with puddle 2.2/2016-12-12.1:

1. Create an app; a gear named yes-app2-1 is created on node1

[root@node1 ~]# ls /var/lib/openshift/
aquota.user  lost+found  yes-app2-1

2. Create an LV with the same name as this gear (device-mapper doubles the dashes of the LV name under /dev/mapper)
[root@node1 ~]# lvcreate -L 5G -n yes-app2-1 vg_rhel68release
 Logical volume "yes-app2-1" created.

[root@node1 ~]# ls /dev/mapper/vg_rhel68release-yes--app2--1 
/dev/mapper/vg_rhel68release-yes--app2--1

3. Run "oo-accept-node"
[root@node1 ~]# oo-accept-node -v
INFO: using default accept-node extensions
INFO: loading node configuration file /etc/openshift/node.conf
INFO: loading resource limit file /etc/openshift/resource_limits.conf
INFO: finding external network device
INFO: checking that external network device has a globally scoped IPv4 address
INFO: checking node public hostname resolution
INFO: checking selinux status
INFO: checking selinux openshift-origin policy
INFO: checking selinux booleans
INFO: checking package list
INFO: checking services
INFO: checking kernel semaphores >= 512
INFO: checking cgroups configuration
INFO: checking cgroups tasks
INFO: find district uuid: 584feef382611dc026000001
INFO: determining node uid range: 1000 to 6999
INFO: checking presence of tc qdisc
INFO: checking for cgroup filter
INFO: checking presence of tc classes
INFO: checking filesystem quotas
INFO: checking quota db file selinux label
INFO: checking 1 user accounts
INFO: checking application dirs
INFO: checking system httpd configs
INFO: checking cartridge repository
PASS

Comment 4 Gaoyun Pei 2016-12-14 03:11:23 UTC
Also tested with USE_PREDICTABLE_GEAR_UUIDS=false:

[root@node2 ~]# ls /var/lib/openshift/
5850b7ad82611d5be4000001  aquota.user  lost+found  yes-app1-1

[root@node2 ~]# ls /dev/mapper/vg_rhel68release-5850b7ad82611d5be4000001 
/dev/mapper/vg_rhel68release-5850b7ad82611d5be4000001

[root@node2 ~]# oo-accept-node 
PASS

Comment 5 Johnny Liu 2016-12-14 08:20:37 UTC
Steps to reproduce:
1. Create an app; its UUID is "584fccd0d1bd4d7ca8000001"
2. Create a new volume - /dev/mapper/testvg-gear--584fccd0d1bd4d7ca8000001
# mount -o usrquota /dev/mapper/testvg-gear--584fccd0d1bd4d7ca8000001 /mnt
# chcon system_u:object_r:openshift_var_lib_t:s0 /mnt
# restorecon -R /mnt/*
# quotaoff /mnt
# quotacheck -cmug /mnt
# quotaon /mnt
3. Make sure the newly created volume is shown before the real openshift gear data device (/dev/mapper/testvg-gear--584faccfd1bd4d3ce2000009) in the output of `repquota -a`
# repquota -a
*** Report for user quotas on device /dev/mapper/testvg-584fccd0d1bd4d7ca8000001
Block grace time: 7days; Inode grace time: 7days
                        Block limits                File limits
User            used    soft    hard  grace    used  soft  hard  grace
----------------------------------------------------------------------
root      --      14       0       0              2     0     0       


*** Report for user quotas on device /dev/mapper/testvg-gear--584faccfd1bd4d3ce2000009
Block grace time: 7days; Inode grace time: 7days
                        Block limits                File limits
User            used    soft    hard  grace    used  soft  hard  grace
----------------------------------------------------------------------
root      --    8480       0       0            938     0     0       
584fccd0d1bd4d7ca8000001 --    6380       0 1048576            243     0 80000       
584fcfc4d1bd4dbab8000003 --    6348       0 1048576            241     0 80000       
584fdb45d1bd4dbab8000026 --   31512       0 1048576           2085     0 80000       
5850f90cd1bd4dbab80000f1 --    1152       0 1048576            248     0 80000       
5850f950d1bd4dbab8000113 --    1144       0 1048576            246     0 80000       
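If needed, the order in which repquota reports the devices can be confirmed directly (a sketch against the same output as above):

# repquota -a | grep '^\*\*\* Report'

This prints the per-device report headers in the order repquota emits them; the volume created in step 2 should be listed before the real openshift gear data device (/dev/mapper/testvg-gear--584faccfd1bd4d3ce2000009).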

4. Reproduce it:
# oo-accept-node -v
INFO: using default accept-node extensions
INFO: loading node configuration file /etc/openshift/node.conf
INFO: loading resource limit file /etc/openshift/resource_limits.conf
INFO: finding external network device
INFO: checking that external network device has a globally scoped IPv4 address
INFO: checking node public hostname resolution
INFO: checking selinux status
INFO: checking selinux openshift-origin policy
INFO: checking selinux booleans
INFO: checking package list
INFO: checking services
INFO: checking kernel semaphores >= 512
INFO: checking cgroups configuration
INFO: checking cgroups tasks
INFO: find district uuid: 584e72f8d1bd4d8f5c000001
INFO: determining node uid range: 1000 to 6999
INFO: traffic control not enabled in /etc/openshift/node.conf, set TRAFFIC_CONTROL_ENABLED=true to enable
INFO: checking filesystem quotas
INFO: checking quota db file selinux label
INFO: checking 5 user accounts
FAIL: user 584fccd0d1bd4d7ca8000001 does not have quotas imposed. This can be addressed by running: oo-devel-node set-quota --with-container-uuid 584fccd0d1bd4d7ca8000001 --blocks 1048576 --inodes 80000
INFO: checking application dirs
INFO: checking system httpd configs
INFO: checking cartridge repository
1 ERRORS
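The FAIL above is a false positive: the per-user quota line for this gear does exist on the real gear data device, as the repquota output in step 3 shows. An anchored lookup (a sketch; not necessarily the pattern oo-accept-node itself uses) finds it:

# repquota -a | grep '^584fccd0d1bd4d7ca8000001 '
584fccd0d1bd4d7ca8000001 --    6380       0 1048576            243     0 80000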


Verified this bug with openshift-origin-node-util-1.38.8.1-1.el6op.noarch; oo-accept-node now reports PASS.

# oo-accept-node -v
INFO: using default accept-node extensions
INFO: loading node configuration file /etc/openshift/node.conf
INFO: loading resource limit file /etc/openshift/resource_limits.conf
INFO: finding external network device
INFO: checking that external network device has a globally scoped IPv4 address
INFO: checking node public hostname resolution
INFO: checking selinux status
INFO: checking selinux openshift-origin policy
INFO: checking selinux booleans
INFO: checking package list
INFO: checking services
INFO: checking kernel semaphores >= 512
INFO: checking cgroups configuration
INFO: checking cgroups tasks
INFO: find district uuid: 584e72f8d1bd4d8f5c000001
INFO: determining node uid range: 1000 to 6999
INFO: traffic control not enabled in /etc/openshift/node.conf, set TRAFFIC_CONTROL_ENABLED=true to enable
INFO: checking filesystem quotas
INFO: checking quota db file selinux label
INFO: checking 5 user accounts
INFO: checking application dirs
INFO: checking system httpd configs
INFO: checking cartridge repository
PASS

Comment 7 errata-xmlrpc 2017-01-04 20:23:29 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2017-0017.html