Bug 1490722 - Local PV capacity is incorrect.
Summary: Local PV capacity is incorrect.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Storage
Version: 3.7.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: low
Target Milestone: ---
Target Release: 3.10.0
Assignee: Tomas Smetana
QA Contact: Qin Ping
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-09-12 06:44 UTC by Qin Ping
Modified: 2018-07-30 19:09 UTC
CC: 6 users

Fixed In Version: openshift-external-storage-0.0.2-1.gitd3c94f0.el7
Doc Type: Bug Fix
Doc Text:
Cause: The capacity of local persistent storage volumes was computed incorrectly. Consequence: The local persistent volume (PV) capacity differed from the capacity reported by the `df` utility. Fix: The capacity computation was corrected. Result: The local PV capacity is reported correctly and is consistent with the capacity reported by `df` and other tools.
Clone Of:
Environment:
Last Closed: 2018-07-30 19:08:59 UTC




Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2018:1816 None None None 2018-07-30 19:09:27 UTC

Description Qin Ping 2017-09-12 06:44:41 UTC
Description of problem:
The PV capacity reported by `oc get pv` differs from the mount point capacity reported by `df`.

Version-Release number of selected component (if applicable):
openshift v3.7.0-0.125.0
kubernetes v1.7.0+695f48a16f

How reproducible:
Always

Steps to Reproduce:
1. Enable the PersistentLocalVolumes feature gate.
2. Prepare a namespace (mytest) for the local volume daemonset.
3. Allow pods in the project to run with HostPath volumes and privileged containers.
4. On one node, create a directory (/mnt/disks) and mount a local block device at /mnt/disks/vol3:
mkfs.xfs /dev/vdb
mkdir -p /mnt/disks/vol3; mount /dev/vdb /mnt/disks/vol3
5. Create configuration of provisioner daemonset.
$ oc create -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-volume-config
data:
  "local-storage": |
    {
      "hostDir": "/mnt/disks"
    }
EOF
6. Create an admin account that is able to create PVs and will be used by the daemonset.
$ oc create -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: local-storage-bootstrapper
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: local-storage:bootstrapper
subjects:
- kind: ServiceAccount
  name: local-storage-bootstrapper
  namespace: mytest
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
EOF
7. Run a bootstrapper image that creates daemonset.
$ oc create -f - <<EOF
apiVersion: batch/v1
kind: Job
metadata:
  name: local-volume-provisioner-bootstrap
spec:
  template:
    spec:
      serviceAccountName: local-storage-bootstrapper
      restartPolicy: Never
      containers:
      - name: bootstrapper
        image: quay.io/external_storage/local-volume-provisioner-bootstrap:latest
        env:
        - name: MY_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        args:
        - --image=quay.io/external_storage/local-volume-provisioner:v1.0.0
        - --volume-config=local-volume-config
EOF
8. Ensure that each node has a local-volume-provisioner-XXXXX pod that is up and ready.
9. Find the PV created for the mount point and get its configuration.
$ oc get pv local-pv-xxxxxxxx -o yaml


Actual results:
The PV's capacity differs from the mount point capacity.


Expected results:
The PV's capacity matches the mount point capacity.

Master Log:

Node Log (of failed PODs):

PV Dump:

PVC Dump:

StorageClass Dump (if StorageClass used by PV/PVC):

Additional info:
# df -h /mnt/disks/vol3/
Filesystem      Size  Used Avail Use% Mounted on
/dev/vdb       1014M   33M  982M   4% /mnt/disks/vol3

$ oc get pv local-pv-a8259128 -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: local-volume-provisioner-host-8-241-89.host.centralci.eng.rdu2.redhat.com-fc2ed8a0-9692-11e7-be24-fa163e0f3e54
    volume.alpha.kubernetes.io/node-affinity: '{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"kubernetes.io/hostname","operator":"In","values":["host-8-241-89.host.centralci.eng.rdu2.redhat.com"]}]}]}}'
  creationTimestamp: 2017-09-12T05:30:05Z
  name: local-pv-a8259128
  resourceVersion: "119665"
  selfLink: /api/v1/persistentvolumes/local-pv-a8259128
  uid: 6ede2975-977b-11e7-9dd9-fa163e0f3e54
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 10230Mi
  local:
    path: /mnt/disks/vol3
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
status:
  phase: Available

Comment 1 Jan Safranek 2017-09-13 14:16:30 UTC
This seems to be fixed in external-storage repo by https://github.com/kubernetes-incubator/external-storage/commit/8d4f6da5ee7624c38b6d8ffcf667e0591cc0a7d7

I am not sure if we released new images to quay.io.

Comment 2 Matthew Wong 2017-09-13 14:33:17 UTC
Yes, the fix should be in quay.io/external_storage/local-volume-provisioner:v1.0.1

Update the config & daemonset to use the new version:
        - --image=quay.io/external_storage/local-volume-provisioner:v1.0.1

Please confirm if this works.

Comment 3 Jan Safranek 2017-09-14 12:57:46 UTC
I updated https://mojo.redhat.com/docs/DOC-1149250 with the new image version.

Comment 4 Qin Ping 2017-09-15 05:38:44 UTC
Confirmed: with the new image version, the capacity problem still occurs.

$ oc get pod local-volume-provisioner-tb8gc -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubernetes.io/created-by: |
      {"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"DaemonSet","namespace":"mytest","name":"local-volume-provisioner","uid":"47f6cf56-99d6-11e7-a2e2-fa163e49b386","apiVersion":"extensions","resourceVersion":"14399"}}
    openshift.io/scc: restricted
  creationTimestamp: 2017-09-15T05:25:26Z
  generateName: local-volume-provisioner-
  labels:
    app: local-volume-provisioner
    controller-revision-hash: "3095297645"
    pod-template-generation: "1"
  name: local-volume-provisioner-tb8gc
  namespace: mytest
  ownerReferences:
  - apiVersion: extensions/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: DaemonSet
    name: local-volume-provisioner
    uid: 47f6cf56-99d6-11e7-a2e2-fa163e49b386
  resourceVersion: "14448"
  selfLink: /api/v1/namespaces/mytest/pods/local-volume-provisioner-tb8gc
  uid: 47fbf88f-99d6-11e7-a2e2-fa163e49b386
spec:
  containers:
  - env:
    - name: MY_NODE_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: spec.nodeName
    - name: MY_NAMESPACE
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.namespace
    - name: VOLUME_CONFIG_NAME
      value: local-volume-config
    image: quay.io/external_storage/local-volume-provisioner:latest
    imagePullPolicy: Always
    name: provisioner
    resources: {}
    securityContext:
      capabilities:
        drop:
        - KILL
        - MKNOD
        - SETGID
        - SETUID
        - SYS_CHROOT
      privileged: true
      runAsUser: 1000070000
      seLinuxOptions:
        level: s0:c8,c7
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /mnt/local-storage/mnt~disks
      name: mount-bc2cd56a
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: local-storage-admin-token-lsmcr
      readOnly: true
  dnsPolicy: ClusterFirst
  imagePullSecrets:
  - name: local-storage-admin-dockercfg-13zgd
  nodeName: openshift-123.lab.sjc.redhat.com
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext:
    fsGroup: 1000070000
    seLinuxOptions:
      level: s0:c8,c7
  serviceAccount: local-storage-admin
  serviceAccountName: local-storage-admin
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.alpha.kubernetes.io/notReady
    operator: Exists
  - effect: NoExecute
    key: node.alpha.kubernetes.io/unreachable
    operator: Exists
  volumes:
  - hostPath:
      path: /mnt/disks
    name: mount-bc2cd56a
  - name: local-storage-admin-token-lsmcr
    secret:
      defaultMode: 420
      secretName: local-storage-admin-token-lsmcr
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2017-09-15T05:25:26Z
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: 2017-09-15T05:25:39Z
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: 2017-09-15T05:25:39Z
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://95f13e5dacf3975e034d621fc561bf7bb11470a38509fe5ec22543a38a5f655a
    image: quay.io/external_storage/local-volume-provisioner:latest
    imageID: docker-pullable://quay.io/external_storage/local-volume-provisioner@sha256:eca4cf03f22cf7dcf7bd23fc77b670a46286152dfff390a2b8565a8616dca134
    lastState: {}
    name: provisioner
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: 2017-09-15T05:25:38Z
  hostIP: 10.14.6.123
  phase: Running
  podIP: 10.128.0.3
  qosClass: BestEffort
  startTime: 2017-09-15T05:25:26Z


$ oc --config=../admin.kubeconfig get pv
NAME                CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM                 REASON    AGE
local-pv-1aeac8f9   1014Mi     RWO           Delete          Available                                   7m
local-pv-54fb265c   10230Mi    RWO           Delete          Available                                   2m
local-pv-78f6cc6b   10230Mi    RWO           Delete          Available                                   5m
local-pv-ff4a3b02   10230Mi    RWO           Delete          Available                                   6m


[root@openshift-132 ~]# df -h /mnt/disks/vol*
Filesystem             Size  Used Avail Use% Mounted on
/dev/vdb              1014M   35M  980M   4% /mnt/disks/vol1
/dev/mapper/rhel-root   10G  1.7G  8.4G  17% /
vol3                   3.9G     0  3.9G   0% /mnt/disks/vol3
/dev/vdc               4.8G   20M  4.6G   1% /mnt/disks/vol4


$ oc --config=../admin.kubeconfig get pv -o yaml 
apiVersion: v1
items:
- apiVersion: v1
  kind: PersistentVolume
  metadata:
    annotations:
      pv.kubernetes.io/provisioned-by: local-volume-provisioner-openshift-132.lab.sjc.redhat.com-04bc13fd-99c0-11e7-a1c5-fa163e49b386
      volume.alpha.kubernetes.io/node-affinity: '{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"kubernetes.io/hostname","operator":"In","values":["openshift-132.lab.sjc.redhat.com"]}]}]}}'
    creationTimestamp: 2017-09-15T05:25:39Z
    name: local-pv-1aeac8f9
    namespace: ""
    resourceVersion: "14446"
    selfLink: /api/v1/persistentvolumes/local-pv-1aeac8f9
    uid: 4fdb84d8-99d6-11e7-a2e2-fa163e49b386
  spec:
    accessModes:
    - ReadWriteOnce
    capacity:
      storage: 1014Mi
    local:
      path: /mnt/disks/vol1
    persistentVolumeReclaimPolicy: Delete
    storageClassName: local-storage
  status:
    phase: Available
- apiVersion: v1
  kind: PersistentVolume
  metadata:
    annotations:
      pv.kubernetes.io/provisioned-by: local-volume-provisioner-openshift-132.lab.sjc.redhat.com-04bc13fd-99c0-11e7-a1c5-fa163e49b386
      volume.alpha.kubernetes.io/node-affinity: '{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"kubernetes.io/hostname","operator":"In","values":["openshift-132.lab.sjc.redhat.com"]}]}]}}'
    creationTimestamp: 2017-09-15T05:30:40Z
    name: local-pv-54fb265c
    namespace: ""
    resourceVersion: "14788"
    selfLink: /api/v1/persistentvolumes/local-pv-54fb265c
    uid: 02b49472-99d7-11e7-a2e2-fa163e49b386
  spec:
    accessModes:
    - ReadWriteOnce
    capacity:
      storage: 10230Mi
    local:
      path: /mnt/disks/vol4
    persistentVolumeReclaimPolicy: Delete
    storageClassName: local-storage
  status:
    phase: Available
- apiVersion: v1
  kind: PersistentVolume
  metadata:
    annotations:
      pv.kubernetes.io/provisioned-by: local-volume-provisioner-openshift-132.lab.sjc.redhat.com-04bc13fd-99c0-11e7-a1c5-fa163e49b386
      volume.alpha.kubernetes.io/node-affinity: '{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"kubernetes.io/hostname","operator":"In","values":["openshift-132.lab.sjc.redhat.com"]}]}]}}'
    creationTimestamp: 2017-09-15T05:27:50Z
    name: local-pv-78f6cc6b
    namespace: ""
    resourceVersion: "14597"
    selfLink: /api/v1/persistentvolumes/local-pv-78f6cc6b
    uid: 9d5d5d32-99d6-11e7-a2e2-fa163e49b386
  spec:
    accessModes:
    - ReadWriteOnce
    capacity:
      storage: 10230Mi
    local:
      path: /mnt/disks/vol3
    persistentVolumeReclaimPolicy: Delete
    storageClassName: local-storage
  status:
    phase: Available
- apiVersion: v1
  kind: PersistentVolume
  metadata:
    annotations:
      pv.kubernetes.io/provisioned-by: local-volume-provisioner-openshift-132.lab.sjc.redhat.com-04bc13fd-99c0-11e7-a1c5-fa163e49b386
      volume.alpha.kubernetes.io/node-affinity: '{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"kubernetes.io/hostname","operator":"In","values":["openshift-132.lab.sjc.redhat.com"]}]}]}}'
    creationTimestamp: 2017-09-15T05:26:40Z
    name: local-pv-ff4a3b02
    namespace: ""
    resourceVersion: "14515"
    selfLink: /api/v1/persistentvolumes/local-pv-ff4a3b02
    uid: 73a14b77-99d6-11e7-a2e2-fa163e49b386
  spec:
    accessModes:
    - ReadWriteOnce
    capacity:
      storage: 10230Mi
    local:
      path: /mnt/disks/vol2
    persistentVolumeReclaimPolicy: Delete
    storageClassName: local-storage
  status:
    phase: Available

Comment 5 Jan Safranek 2017-09-15 08:40:55 UTC
The root cause is that there is no mount propagation from the host to the container. When the admin creates a new subdirectory /mnt/disks/new-disk1 and mounts something there while the provisioner pod is still running, the mount is not propagated into the provisioner, which sees only an empty directory at /mnt/disks/new-disk1. The provisioner then creates a new PV for this directory, and its capacity is the capacity of the root device.

This is a known limitation of local storage in 1.7; see https://github.com/kubernetes-incubator/external-storage/blob/0869479b00da5fe7a8eccf8447d1d886d0fb69f7/local-volume/README.md
We should document that clearly.

It will get better when we have mount propagation in 1.8 (but it's alpha and needs to be explicitly enabled).
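The failure mode can be reproduced without Kubernetes at all: when nothing is mounted at a directory, `df` (and the underlying statfs call) simply resolves to the filesystem that contains it. A minimal sketch, assuming GNU coreutils on Linux (the directory names here are made up for illustration):

```shell
#!/bin/sh
# When no filesystem is mounted at a subdirectory, df reports the
# capacity of the parent filesystem: exactly what the provisioner saw
# for /mnt/disks/new-disk1 when the mount was not propagated into it.
dir=$(mktemp -d)
mkdir "$dir/new-disk1"

parent=$(df -P "$dir" | awk 'NR==2 {print $2}')
child=$(df -P "$dir/new-disk1" | awk 'NR==2 {print $2}')

# Both queries hit the same filesystem, so the reported sizes match.
[ "$parent" = "$child" ] && echo "subdirectory inherits parent capacity: ${child}K"

rm -rf "$dir"
```

This is why the bogus PVs above all show the root device's capacity (10230Mi) instead of the tmpfs or block-device size.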

Comment 6 Jan Safranek 2018-02-21 14:26:12 UTC
Can you please re-test with 3.9 with MountPropagation enabled? You do not need to update any template or YAML file; only the master and node configs need changes to enable the MountPropagation feature. That should fix the bug, IMO.

In 1.10, MountPropagation should be enabled by default so this bug goes away.
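For reference, enabling an alpha feature gate in OCP 3.x is done through the master and node configuration files. The snippets below are an illustrative sketch only; the exact stanza names should be verified against the 3.9 configuration docs:

```yaml
# node-config.yaml (illustrative): pass the gate to the kubelet
kubeletArguments:
  feature-gates:
  - "MountPropagation=true"
```

```yaml
# master-config.yaml (illustrative): pass the gate to the API server
# and controllers
kubernetesMasterConfig:
  apiServerArguments:
    feature-gates:
    - "MountPropagation=true"
  controllerArguments:
    feature-gates:
    - "MountPropagation=true"
```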

Comment 7 Qin Ping 2018-02-22 03:38:54 UTC
With MountPropagation enabled:

1. If the mount (e.g. mount -t tmpfs vol1 /mnt/local-storage/fast/vol1) is done before the local-volume-provisioner creates a PV for the volume, the PV's size is the size of the mount point.
# mkdir /mnt/local-storage/fast/vol2;mount -t tmpfs fvol2 /mnt/local-storage/fast/vol2;chcon -Rt svirt_sandbox_file_t /mnt/local-storage/fast/vol2

2. If the mount is done after the local-volume-provisioner has created a PV for the volume, the PV's size is the size of the parent directory. Delete the PV and the local-volume-provisioner will recreate it; the new PV's size is then the size of the mount point.
# mkdir /mnt/local-storage/fast/vol3
# mount -t tmpfs fvol3 /mnt/local-storage/fast/vol3
# chcon -Rt svirt_sandbox_file_t /mnt/local-storage/fast/vol3

Comment 8 Jan Safranek 2018-02-22 12:17:28 UTC
Re 2): this has been fixed upstream in https://github.com/kubernetes-incubator/external-storage/pull/499: the provisioner now ignores directories that are not mount points and creates a PV for a directory only when something is mounted there.
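Comparing device numbers is the classic way to make that mount-point check; a sketch of the idea in shell (assumption only: the upstream Go code uses the equivalent stat-based test, and `is_mount_point` is a name made up here):

```shell
#!/bin/sh
# A directory is a mount point when it resides on a different device
# than its parent directory. ("/" is a special case: its parent is
# itself, so it would need separate handling.)
is_mount_point() {
  [ "$(stat -c %d "$1")" != "$(stat -c %d "$1/..")" ]
}

dir=$(mktemp -d)
mkdir "$dir/not-mounted"
# A freshly created subdirectory sits on the same device as its parent,
# so a provisioner with this check would skip it instead of creating a
# bogus PV with the parent filesystem's capacity.
is_mount_point "$dir/not-mounted" || echo "skipping $dir/not-mounted: not a mount point"
rm -rf "$dir"
```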

Comment 9 Qin Ping 2018-02-22 12:42:56 UTC
Will verify this issue when a new image is built.

Now the local-provisioner version is:
openshift-external-storage-local-provisioner-0.0.1-8.git78d6339.el7.x86_64

Comment 10 Jan Safranek 2018-02-22 16:38:25 UTC
https://github.com/kubernetes-incubator/external-storage/pull/499 basically requires a rebase of external-storage in 3.9. IMO it's quite late in the release cycle for this; all external-storage images (nfs, efs, snapshots, local storage) would need to be re-tested.

There is a chapter in our docs that says the provisioner must be offline when adding new devices (i.e. tmpfs mounts) and that "Omitting any of these steps may result in a wrong PV being created".

https://github.com/openshift/openshift-docs/blob/master/install_config/configuring_local.adoc#adding-new-devices

Is it good enough for technical preview? Of course, we'll fix it properly in 1.10.

Comment 11 Qin Ping 2018-02-23 02:03:47 UTC
LGTM

Comment 12 Jan Safranek 2018-02-23 08:29:09 UTC
Moving to 3.10.

Comment 13 Tomas Smetana 2018-05-02 12:01:44 UTC
Should be fixed in the latest external-storage.

Comment 15 Qin Ping 2018-05-17 05:32:23 UTC
Verified this issue in OCP:
oc v3.10.0-0.47.0
openshift v3.10.0-0.47.0
kubernetes v1.10.0+b81c8f8

openshift-external-storage-local-provisioner-0.0.2-2.gitd3c94f0.el7.x86_64

# uname -a
Linux host-172-16-120-49 3.10.0-862.2.3.el7.x86_64 #1 SMP Mon Apr 30 12:37:51 EDT 2018 x86_64 x86_64 x86_64 GNU/Linux

# cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 7.5 (Maipo)

Add a new device after the daemonset is running:
1. Create a subdirectory /mnt/local-storage/fast/vol1.
2. mount -t tmpfs vol1 /mnt/local-storage/fast/vol1
3. Delete the daemonset.
4. Recreate the daemonset.
5. A new PV was created and its capacity is correct.

So, marked as verified.

Comment 18 errata-xmlrpc 2018-07-30 19:08:59 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:1816

