Bug 1564271

Summary: dockergc daemon set pods are crashing with error: devicemapper storage driver is not supported
Product: OpenShift Container Platform
Component: Containers
Version: 3.9.0
Hardware: x86_64
OS: Linux
Status: CLOSED NOTABUG
Severity: high
Priority: unspecified
Reporter: Wolfgang Kulhanek <wkulhane>
Assignee: Mrunal Patel <mpatel>
QA Contact: DeShuai Ma <dma>
CC: amurdaca, aos-bugs, dwalsh, jokerman, mmccomas, mpatel
Type: Bug
Last Closed: 2018-05-03 20:54:37 UTC

Description Wolfgang Kulhanek 2018-04-05 21:01:02 UTC
Description of problem:
The dockergc DaemonSet pods go into CrashLoopBackOff with "error: devicemapper storage driver is not supported" on a cluster installed with CRI-O support enabled.

Version-Release number of selected component (if applicable):
3.9.14

How reproducible:
Every time

Steps to Reproduce:
1. Deploy OCP with CRI-O support enabled
2. Patch the dockergc DaemonSet to fix the wrong command (BZ 1552505); see the patch sketch below
3. The DaemonSet pods still crash
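
For step 2, the patch was roughly of the following shape. This is only a sketch: the actual corrected command is the one described in BZ 1552505, so the binary path in the value below is a placeholder, and the "default" namespace is assumed from the dockergc-7xwdc pod shown further down.

# Sketch only -- the real corrected command comes from BZ 1552505.
# "/usr/bin/dockergc" and the "default" namespace are placeholders/assumptions.
oc patch daemonset dockergc -n default --type=json \
  -p '[{"op": "replace", "path": "/spec/template/spec/containers/0/command", "value": ["/usr/bin/dockergc"]}]'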

Actual results:

DaemonSet pods go into CrashLoopBackOff:

oc logs -f dockergc-7xwdc
I0405 20:26:54.310781       1 dockergc.go:242] docker build garbage collection daemon
I0405 20:26:54.310856       1 dockergc.go:246] MinimumGCAge: {1h0m0s}, ImageGCHighThresholdPercent: 80, ImageGCLowThresholdPercent: 60
error: devicemapper storage driver is not supported
rpc error: code = Unknown desc = container with ID starting with ac8c05ec6ca9b37c8e315f5383a9f088487ffe37cce37a513275f2c5e020a0f0 not found: ID does not exist


This is on RHEL 7.4 with Docker 1.13.1, running on an AWS c4.4xlarge instance. Storage looks like this:

[root@master1 ~]# lsblk
NAME                              MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda                              202:0    0  50G  0 disk
├─xvda1                           202:1    0   1M  0 part
└─xvda2                           202:2    0  50G  0 part /
xvdb                              202:16   0  20G  0 disk
└─xvdb1                           202:17   0  20G  0 part
  ├─docker--vg-docker--pool_tmeta 253:0    0  24M  0 lvm
  │ └─docker--vg-docker--pool     253:2    0   8G  0 lvm
  └─docker--vg-docker--pool_tdata 253:1    0   8G  0 lvm
    └─docker--vg-docker--pool     253:2    0   8G  0 lvm
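
For reference, the configured storage drivers on the node can be checked like this (the crio.conf path assumes a default CRI-O installation):

# storage driver the docker daemon is actually running with
docker info 2>/dev/null | grep -i 'storage driver'
# storage options handed to the docker daemon on RHEL
cat /etc/sysconfig/docker-storage
# CRI-O's storage driver, if one is set explicitly (default install path assumed)
grep -i storage_driver /etc/crio/crio.conf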

Comment 1 Daniel Walsh 2018-05-03 19:47:03 UTC
We only support overlay as the backing store at this time. Can you change to overlay?

Is this docker or crio complaining about the storage driver?
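
Switching a node from devicemapper to overlay2 usually looks roughly like the following. This is only a sketch and it throws away the existing images and the devicemapper thin pool; the VG/LV names are taken from the lsblk output above:

# Sketch only -- destroys existing container images and the thin pool.
systemctl stop docker
rm -rf /var/lib/docker/*
lvremove -y docker-vg/docker-pool
# minimal setup: with only STORAGE_DRIVER set, overlay2 ends up on the root filesystem
echo 'STORAGE_DRIVER=overlay2' > /etc/sysconfig/docker-storage-setup
docker-storage-setup
systemctl start docker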

Comment 2 Wolfgang Kulhanek 2018-05-03 20:27:20 UTC
I did change my docker to overlay2. It is working now.

I am not sure of the implications though. There must have been a reason devicemapper was the default.
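
A quick way to double-check that a node is really on overlay2, and that the backing filesystem can support it, is something like this (assuming the root filesystem is xfs, the RHEL 7 default):

docker info 2>/dev/null | grep -i 'storage driver'
# overlay2 over xfs requires the filesystem to have been created with ftype=1
xfs_info / | grep ftype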

Comment 3 Daniel Walsh 2018-05-03 20:54:23 UTC
Well, up to RHEL 7.5 devicemapper was the default. I think the issue you were seeing was that for some reason your system was not set up to handle devicemapper. I will close this as not a bug.