Bug 1523319 - Error starting daemon: error initializing graphdriver: devmapper: Unable to take ownership of thin-pool (atomicos-docker--pool) that already has used data blocks
Summary: Error starting daemon: error initializing graphdriver: devmapper: Unable to t...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: docker
Version: 7.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Daniel Walsh
QA Contact: atomic-bugs@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-12-07 17:41 UTC by Micah Abbott
Modified: 2023-09-15 00:05 UTC
CC List: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-04-11 00:01:10 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
System ID: Red Hat Product Errata RHBA-2018:1071
Private: 0   Priority: None   Status: None   Summary: None
Last Updated: 2018-04-11 00:02:39 UTC

Description Micah Abbott 2017-12-07 17:41:11 UTC
I rebased a RHELAH 7.4.3 system to the nightly 7.5 compose[0] and observed the error in the Summary when the system came up in 7.5.


$ rpm-ostree status
State: idle
Deployments:
● custom:rhel-atomic-host/7/x86_64/standard
                   Version: 7.5.0 (2017-12-06 20:33:00)
                    Commit: 065a640f9d05fcb6bfb039fe4f38296d250454b05292ba23f3c0dd4bb1dbf044

$ rpm -q docker
docker-1.12.6-66.gitd6f7b83.el7.x86_64

$ sudo systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/docker.service.d
           └─flannel.conf
   Active: failed (Result: exit-code) since Thu 2017-12-07 16:14:37 UTC; 5min ago
     Docs: http://docs.docker.com
  Process: 1134 ExecStart=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES (code=exited, status=1/FAILURE)
 Main PID: 1134 (code=exited, status=1/FAILURE)

Dec 07 16:14:35 micah-rhelah-vm1207a systemd[1]: Starting Docker Application Container Engine...
Dec 07 16:14:36 micah-rhelah-vm1207a dockerd-current[1134]: time="2017-12-07T16:14:36.132115511Z" level=info msg="libcontainerd: new containerd process, pid: 1151"
Dec 07 16:14:37 micah-rhelah-vm1207a dockerd-current[1134]: time="2017-12-07T16:14:37.156421718Z" level=fatal msg="Error starting daemon: error initializing graphdriver: devmapper: Unable to take... data blocks"
Dec 07 16:14:37 micah-rhelah-vm1207a systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE
Dec 07 16:14:37 micah-rhelah-vm1207a systemd[1]: Failed to start Docker Application Container Engine.
Dec 07 16:14:37 micah-rhelah-vm1207a systemd[1]: Unit docker.service entered failed state.
Dec 07 16:14:37 micah-rhelah-vm1207a systemd[1]: docker.service failed.
Hint: Some lines were ellipsized, use -l to show in full.

$ sudo journalctl -b -u docker --no-pager
-- Logs begin at Thu 2017-12-07 16:14:26 UTC, end at Thu 2017-12-07 16:20:18 UTC. --
Dec 07 16:14:35 micah-rhelah-vm1207a systemd[1]: Starting Docker Application Container Engine...
Dec 07 16:14:36 micah-rhelah-vm1207a dockerd-current[1134]: time="2017-12-07T16:14:36.132115511Z" level=info msg="libcontainerd: new containerd process, pid: 1151"
Dec 07 16:14:37 micah-rhelah-vm1207a dockerd-current[1134]: time="2017-12-07T16:14:37.156421718Z" level=fatal msg="Error starting daemon: error initializing graphdriver: devmapper: Unable to take ownership of thin-pool (atomicos-docker--pool) that already has used data blocks"
Dec 07 16:14:37 micah-rhelah-vm1207a systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE
Dec 07 16:14:37 micah-rhelah-vm1207a systemd[1]: Failed to start Docker Application Container Engine.
Dec 07 16:14:37 micah-rhelah-vm1207a systemd[1]: Unit docker.service entered failed state.
Dec 07 16:14:37 micah-rhelah-vm1207a systemd[1]: docker.service failed.


There were some AVC denials, but they didn't appear related to docker:

$ sudo journalctl -b | grep 'avc:  denied'
Dec 07 16:14:28 micah-rhelah-vm1207a kernel: type=1400 audit(1512663268.288:4): avc:  denied  { write } for  pid=570 comm="systemd-journal" name="var" dev="dm-0" ino=5069304 scontext=system_u:system_r:syslogd_t:s0 tcontext=system_u:object_r:var_t:s0 tclass=dir
Dec 07 16:14:28 micah-rhelah-vm1207a kernel: type=1400 audit(1512663268.314:5): avc:  denied  { write } for  pid=570 comm="systemd-journal" name="var" dev="dm-0" ino=5069304 scontext=system_u:system_r:syslogd_t:s0 tcontext=system_u:object_r:var_t:s0 tclass=dir
Dec 07 16:14:28 micah-rhelah-vm1207a kernel: type=1400 audit(1512663268.329:6): avc:  denied  { write } for  pid=570 comm="systemd-journal" name="var" dev="dm-0" ino=5069304 scontext=system_u:system_r:syslogd_t:s0 tcontext=system_u:object_r:var_t:s0 tclass=dir
Dec 07 16:14:28 micah-rhelah-vm1207a kernel: type=1400 audit(1512663268.332:7): avc:  denied  { write } for  pid=570 comm="systemd-journal" name="var" dev="dm-0" ino=5069304 scontext=system_u:system_r:syslogd_t:s0 tcontext=system_u:object_r:var_t:s0 tclass=dir
Dec 07 16:14:28 micah-rhelah-vm1207a kernel: type=1400 audit(1512663268.336:8): avc:  denied  { write } for  pid=570 comm="systemd-journal" name="var" dev="dm-0" ino=5069304 scontext=system_u:system_r:syslogd_t:s0 tcontext=system_u:object_r:var_t:s0 tclass=dir
Dec 07 16:14:28 micah-rhelah-vm1207a kernel: type=1400 audit(1512663268.340:9): avc:  denied  { write } for  pid=570 comm="systemd-journal" name="var" dev="dm-0" ino=5069304 scontext=system_u:system_r:syslogd_t:s0 tcontext=system_u:object_r:var_t:s0 tclass=dir
Dec 07 16:14:28 micah-rhelah-vm1207a kernel: type=1400 audit(1512663268.346:10): avc:  denied  { write } for  pid=570 comm="systemd-journal" name="var" dev="dm-0" ino=5069304 scontext=system_u:system_r:syslogd_t:s0 tcontext=system_u:object_r:var_t:s0 tclass=dir
Dec 07 16:14:54 micah-rhelah-vm1207a kernel: type=1400 audit(1512663294.085:142): avc:  denied  { read } for  pid=10855 comm="kexec" name="vmlinuz-3.10.0-799.el7.x86_64" dev="vda1" ino=531207 scontext=system_u:system_r:kdump_t:s0 tcontext=system_u:object_r:modules_object_t:s0 tclass=file
Dec 07 16:14:54 micah-rhelah-vm1207a kernel: type=1400 audit(1512663294.089:143): avc:  denied  { read } for  pid=10855 comm="kexec" name="vmlinuz-3.10.0-799.el7.x86_64" dev="vda1" ino=531207 scontext=system_u:system_r:kdump_t:s0 tcontext=system_u:object_r:modules_object_t:s0 tclass=file
Dec 07 16:14:54 micah-rhelah-vm1207a kernel: type=1400 audit(1512663294.091:144): avc:  denied  { read } for  pid=10855 comm="kexec" name="vmlinuz-3.10.0-799.el7.x86_64" dev="vda1" ino=531207 scontext=system_u:system_r:kdump_t:s0 tcontext=system_u:object_r:modules_object_t:s0 tclass=file

Comment 2 Micah Abbott 2017-12-07 17:42:17 UTC
Forgot to link to the nightly compose:  

http://download-node-02.eng.bos.redhat.com/nightly//RHAH-7.5.20171206.n.5/compose/Server/x86_64/ostree/repo/

Comment 3 Vivek Goyal 2017-12-07 19:33:28 UTC
I think this is a configuration error. Docker is being passed a thin pool that is not empty, and docker can't take ownership of a non-empty thin pool.

How do you reproduce this issue? What's the workflow for creating the thin pool and passing it to docker?

I think a storage reset and restarting docker should fix this.

Comment 4 Qian Cai 2017-12-07 20:03:25 UTC
Well, it looks like a fresh-installed RHEL Atomic Host 7.4.X always has a non-empty thin-pool by default.

# lvs
  LV          VG       Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  docker-pool atomicos twi-a-t--- <12.64g             0.15   0.12
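
For reference, one way to confirm this state is to query the pool's data usage directly; a minimal sketch, assuming the default "atomicos/docker-pool" names shown in the output above:

# A non-zero Data% on a pool that dockerd has never used is exactly what
# trips the "already has used data blocks" ownership check.
sudo lvs -o lv_name,vg_name,lv_attr,data_percent,metadata_percent atomicos/docker-pool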

Comment 5 Micah Abbott 2017-12-07 20:18:06 UTC
(In reply to Vivek Goyal from comment #3)

> How do you reproduce this issue? What's the workflow for creating the thin
> pool and passing it to docker?

1)  Boot RHELAH 7.4.3 qcow2 image
2)  Add an ostree remote that points to the 7.5 compose in comment #2

# ostree remote add --no-gpg-verify custom http://download-node-02.eng.bos.redhat.com/nightly//RHAH-7.5.20171206.n.5/compose/Server/x86_64/ostree/repo/

3)  Rebase to 7.5 compose

# rpm-ostree rebase custom:rhel-atomic-host/7/x86_64/standard

4)  Reboot
5)  Observe errors

> I think a storage reset and restarting docker should fix this.

Yes, this does work around the issue.  But I don't think customers are going to want to have to delete all their Docker data when they upgrade to RHELAH 7.5, right?
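
For anyone hitting this on an affected compose, the storage reset amounts to roughly the following (a sketch only; "atomic storage reset" removes all local images and containers, so only use it if that data is disposable):

# Workaround sketch: wipe docker storage so the thin pool and
# /var/lib/docker are recreated together on the next start.
sudo systemctl stop docker
sudo atomic storage reset     # removes the thin pool as well as /var/lib/docker
sudo systemctl start docker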

Comment 6 Vivek Goyal 2017-12-07 20:35:32 UTC
(In reply to CAI Qian from comment #4)
> Well, it looks like a fresh-installed RHEL Atomic Host 7.4.X always has a
> non-empty thin-pool by default.
> 
> # lvs
>   LV          VG       Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
>   docker-pool atomicos twi-a-t--- <12.64g             0.15   0.12

This is strange. So docker has not run at all yet?

Is there any magic going on w.r.t. atomic system containers?

This kind of error happens when docker has been started and has set up the thin pool, and then somebody has deleted /var/lib/docker (but not the thin pool) and restarted docker.
 
So do the following.

- start docker
- stop docker
- rm -rf /var/lib/docker
- start docker

And you will see the error that docker can't take ownership of a non-empty thin pool. The problem is that docker thinks it is running for the first time (because /var/lib/docker/ is empty), but the thin pool is not empty: it has data created by a previous run of docker.
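
Spelled out as commands, that sequence is roughly the following (a sketch that deliberately reproduces the failure, so don't run it on a machine whose docker data matters):

# Reproducer sketch for the ownership error described above.
sudo systemctl start docker     # dockerd takes ownership of the thin pool and records state in /var/lib/docker
sudo systemctl stop docker
sudo rm -rf /var/lib/docker     # docker's metadata is gone, but the pool still has used data blocks
sudo systemctl start docker     # fails: "Unable to take ownership of thin-pool ... that already has used data blocks"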

That's why, when a storage reset happens, we remove the thin pool as well as /var/lib/docker/.

So I am not sure what is happening in the atomic world that somehow triggers the above sequence, where the thin pool is old but docker has lost its old data and is starting fresh.

Comment 7 Vivek Goyal 2017-12-07 20:48:17 UTC
(In reply to Micah Abbott from comment #5)
> (In reply to Vivek Goyal from comment #3)
> 
> > How do you reproduce this issue? What's the workflow for creating the thin
> > pool and passing it to docker?
> 
> 1)  Boot RHELAH 7.4.3 qcow2 image
> 2)  Add an ostree remote that points to the 7.5 compose in comment #2
> 
> # ostree remote add --no-gpg-verify custom
> http://download-node-02.eng.bos.redhat.com/nightly//RHAH-7.5.20171206.n.5/
> compose/Server/x86_64/ostree/repo/
> 
> 3)  Rebase to 7.5 compose
> 
> # rpm-ostree rebase custom:rhel-atomic-host/7/x86_64/standard
> 
> 4)  Reboot
> 5)  Observe errors
> 
> > I think a storage reset and restarting docker should fix this.
> 
> Yes, this does work around the issue.  But I don't think customers are going
> to want to have to delete all their Docker data when they upgrade to
> RHELAH 7.5, right?

Is there any chance that we lost the old /var/lib/docker/ after the upgrade?

Comment 8 Micah Abbott 2017-12-07 21:30:49 UTC
This is starting to look like a side-effect of the interaction between 'rpm-ostree' and the 7.5 kernel as described here:

https://bugzilla.redhat.com/show_bug.cgi?id=1428677#c17


Some relevant IRC logs:

<caiqian> vgoyal, yes, the /var/lib/docker was lost after upgrade to 7.5

<vgoyal> caiqian: we should not lose /var/lib/docker/ after upgrade
<vgoyal> caiqian: container-storage-setup does not purge /var/lib/docker/ AFAIK
<vgoyal> caiqian: "atomic storage reset" can do it
<vgoyal> caiqian: is /var/lib/docker on rootfs or a special mount
<vgoyal> caiqian: could it be that after upgrade it is overlayfs on a separate volume somehow mounted over /var/lib/docker/
<caiqian> vgoyal, rootfs

<vgoyal> caiqian: i am just scratching my head. i don't know how we ended up in a situation where the old /var/lib/docker/ is not visible 

<caiqian> vgoyal, wow, the old /var/lib/docker is hiding here
<caiqian> find /sysroot/ostree/deploy/rhel-atomic-host/var/lib/docker/
<caiqian> vgoyal, somehow the atomic host upgrade replaced /var/lib/docker with a new one.

<vgoyal> caiqian: i suspect that this has something to do with ostree and the rebase process
<vgoyal> caiqian: findmnt says that pre-upgrade my /var/lib/docker is here.
<vgoyal> ─/var                                /dev/mapper/atomicos-root[/ostree/deploy/rhel-atomic-host/var]
<vgoyal> basically /ostree/deploy/rhel-atomic-host/var is mounted on /var
<caiqian> vgoyal, yes, after upgrade, /var is no longer a separate mount
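
On an affected system, the two things worth checking are therefore whether /var is still mounted from the deployment's var directory and where the pre-upgrade docker data ended up; a sketch based on the paths quoted above:

# Is /var still a mount of the ostree deployment's var directory?
findmnt -o TARGET,SOURCE /var
# Look for the pre-upgrade docker state under the deployment tree.
sudo ls /sysroot/ostree/deploy/rhel-atomic-host/var/lib/docker/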

Comment 9 Vivek Goyal 2017-12-07 21:33:36 UTC
Colin, what do you think?

Comment 10 Colin Walters 2017-12-07 23:02:16 UTC
@miabbott does systemctl --failed show the tmpfiles.d generator failed?

@vivek: Can you glance at https://bugzilla.redhat.com/show_bug.cgi?id=1428677 ?  Does anything in the container stack use O_TMPFILE?  `git grep O_TMPFILE` in projectatomic/docker has zero hits.

Comment 11 Colin Walters 2017-12-07 23:02:59 UTC
The RHELAH 7.5 ISO also immediately blows up with the same issue from https://bugzilla.redhat.com/show_bug.cgi?id=1428677 ... for which I may just add EINVAL to our "ignore" list, but I'd really like a statement from the kernel side that O_TMPFILE is expected to work.

Comment 12 Micah Abbott 2017-12-08 14:21:36 UTC
(In reply to Colin Walters from comment #10)
> @miabbott does systemctl --failed show the tmpfiles.d generator failed?

Nope.

-bash-4.2$ systemctl --failed
  UNIT           LOAD   ACTIVE SUB    DESCRIPTION
● docker.service loaded failed failed Docker Application Container Engine
● kdump.service  loaded failed failed Crash recovery kernel arming

LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

2 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.

Comment 13 Vivek Goyal 2018-02-05 17:51:24 UTC
(In reply to Colin Walters from comment #11)
> The RHELAH 7.5 ISO also immediately blows up with the same issue from
> https://bugzilla.redhat.com/show_bug.cgi?id=1428677 ... for which I may just
> add EINVAL to our "ignore" list, but I'd really like a statement from the
> kernel side that O_TMPFILE is expected to work.

The O_TMPFILE issue is now fixed in the 7.5 kernel. So is this bug still happening?

Comment 14 Colin Walters 2018-02-05 20:12:52 UTC
I can't reproduce this; I just did a quick rebase:

# rpm-ostree status
State: idle
Deployments:
● rhah-20180205.n.0:rhel-atomic-host/7/x86_64/standard
                   Version: 7.5.0 (2018-02-05 18:36:43)
                    Commit: 90db2b58446d807c92d02443bed3560f18a6f1ba7f6ddf640b2559000cd27046

  rhel-atomic-host-ostree:rhel-atomic-host/7/x86_64/standard
                   Version: 7.4.5 (2018-02-05 00:07:59)
                    Commit: bb8c244eadbb48f5dfceaaf44630a82c986dfcc9ee431143a53935b2f3a2dcd0
              GPGSignature: Valid signature by 567E347AD0044ADE55BA8A5F199E2F91FD431D51

Micah, is this still valid?

Comment 15 Daniel Walsh 2018-02-05 20:32:44 UTC
I am going to claim this is fixed in the current release.  Micah, please reopen if that is not the case.

Comment 16 Micah Abbott 2018-02-05 20:57:35 UTC
Agreed, can't reproduce using the latest RHELAH 7.5 compose.

Comment 21 errata-xmlrpc 2018-04-11 00:01:10 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:1071

Comment 22 Red Hat Bugzilla 2023-09-15 00:05:38 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 500 days

