Bug 1244300 - docker: sparse file handling causes out-of-space issues
Summary: docker: sparse file handling causes out-of-space issues
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: docker
Version: 7.1
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: 7.5
Assignee: Antonio Murdaca
QA Contact: atomic-bugs@redhat.com
URL:
Whiteboard:
Depends On:
Blocks: 1186913 1298243 1385242 1420851 1546181 1564516
 
Reported: 2015-07-17 17:22 UTC by Karl Hastings
Modified: 2024-03-25 14:54 UTC
CC List: 21 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Clones: 1564516 (view as bug list)
Environment:
Last Closed: 2020-06-09 20:25:46 UTC
Target Upstream Version:
Embargoed:



Description Karl Hastings 2015-07-17 17:22:25 UTC
Description of problem:
It appears that sparse files may not be handled correctly during a docker build. This can cause the image being built to consume all available space, which then causes XFS to get caught in an infinite retry loop (BZ#1240437).


Version-Release number of selected component (if applicable):
docker-1.6.2-14.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
[root@rhel7 ~]# uname -a
Linux rhel7.example.com 3.10.0-229.7.2.el7.x86_64 #1 SMP Fri May 15 21:38:46 EDT 2015 x86_64 x86_64 x86_64 GNU/Linux
[root@rhel7 ~]# rpm -q kernel docker
kernel-3.10.0-229.7.2.el7.x86_64
docker-1.6.2-14.el7.x86_64
[root@rhel7 ~]# mkdir usertest
[root@rhel7 ~]# cd usertest/
[root@rhel7 usertest]# cat > Dockerfile <<EOF
> FROM registry.access.redhat.com/rhel6.5:latest
> RUN useradd -u 800173295 -G 100 -c "User, Test" -d /home/tuser tuser
> EOF
[root@rhel7 usertest]# docker build .
Sending build context to Docker daemon 2.048 kB
Sending build context to Docker daemon 
Step 0 : FROM registry.access.redhat.com/rhel6.5:latest
Trying to pull repository registry.access.redhat.com/rhel6.5 ...
17352558d512: Download complete 
Status: Downloaded newer image for registry.access.redhat.com/rhel6.5:latest
 ---> 17352558d512
Step 1 : RUN useradd -u 800173295 -G 100 -c "User, Test" -d /home/tuser tuser
 ---> Running in 3a1ea5c66162


Actual results:
'docker build' hangs on the useradd.

The thin pool fills up:
[root@rhel7 usertest]# docker info
Containers: 1
Images: 2
Storage Driver: devicemapper
 Pool Name: vg_docker-docker--pool
 Pool Blocksize: 524.3 kB
 Backing Filesystem: xfs
 Data file: 
 Metadata file: 
 Data Space Used: 7.411 GB
 Data Space Total: 7.411 GB
 Data Space Available: 0 B
 Metadata Space Used: 540.7 kB
 Metadata Space Total: 12.58 MB
 Metadata Space Available: 12.04 MB
 Udev Sync Supported: true
 Library Version: 1.02.93-RHEL7 (2015-01-28)
Execution Driver: native-0.2
Kernel Version: 3.10.0-229.7.2.el7.x86_64
Operating System: Employee SKU
CPUs: 1
Total Memory: 719.5 MiB
Name: rhel7.example.com
ID: N2NT:3V75:BXIY:R2WI:QCMH:JVX4:OKVG:3DF7:XLXL:L766:ZKAK:ORC4

/var/log/messages has:
[...]
Jul 16 15:47:48 rhel7 lvm[608]: Thin vg_docker-docker--pool is now 83% full.
Jul 16 15:47:49 rhel7 lvm[608]: Insufficient free space: 354 extents needed, but only 274 available
Jul 16 15:47:49 rhel7 lvm[608]: Failed to extend thin vg_docker-docker--pool.
Jul 16 15:47:50 rhel7 lvm[608]: No longer monitoring thin vg_docker-docker--pool.
Jul 16 15:47:54 rhel7 kernel: device-mapper: thin: 253:4: reached low water mark for data device: sending event.
Jul 16 15:47:55 rhel7 kernel: device-mapper: thin: 253:4: switching pool to out-of-data-space mode
Jul 16 15:47:55 rhel7 lvm[608]: Thin vg_docker-docker--pool is now 100% full.
Jul 16 15:47:55 rhel7 lvm[608]: Insufficient free space: 354 extents needed, but only 274 available
Jul 16 15:47:55 rhel7 lvm[608]: Failed to extend thin vg_docker-docker--pool.
[...]
Jul 16 15:48:56 rhel7 kernel: XFS (dm-6): metadata I/O error: block 0x1 ("xfs_buf_iodone_callbacks") error 5 numblks 1
Jul 16 15:48:56 rhel7 kernel: Buffer I/O error on device dm-6, logical block 1996829
Jul 16 15:48:56 rhel7 kernel: lost page write due to I/O error on dm-6
Jul 16 15:48:56 rhel7 kernel: Buffer I/O error on device dm-6, logical block 1996830
Jul 16 15:48:56 rhel7 kernel: lost page write due to I/O error on dm-6
[...]
Jul 16 15:48:58 rhel7 kernel: XFS (dm-6): Detected failing async write on buffer block 0x12c1bf8. Retrying async write.

Jul 16 15:48:58 rhel7 kernel: XFS (dm-6): Detected failing async write on buffer block 0x12bc418. Retrying async write.

Jul 16 15:48:58 rhel7 kernel: XFS (dm-6): Detected failing async write on buffer block 0x12bc402. Retrying async write.
[...]
Jul 16 15:52:19 rhel7 kernel: INFO: task kworker/u8:3:4333 blocked for more than 120 seconds.
Jul 16 15:52:19 rhel7 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jul 16 15:52:19 rhel7 kernel: kworker/u8:3    D ffff88013fc13680     0  4333      2 0x00000080
Jul 16 15:52:19 rhel7 kernel: Workqueue: writeback bdi_writeback_workfn (flush-253:6)
Jul 16 15:52:19 rhel7 kernel: ffff88001a7cb710 0000000000000046 ffff88013745f1c0 ffff88001a7cbfd8
Jul 16 15:52:19 rhel7 kernel: ffff88001a7cbfd8 ffff88001a7cbfd8 ffff88013745f1c0 ffff8800ba377400
Jul 16 15:52:19 rhel7 kernel: ffff880011ce4a10 ffff8800ba3775c0 00000000000565e0 0000000000000004
Jul 16 15:52:19 rhel7 kernel: Call Trace:
Jul 16 15:52:19 rhel7 kernel: [<ffffffff816096a9>] schedule+0x29/0x70
Jul 16 15:52:19 rhel7 kernel: [<ffffffffa01f659d>] xlog_grant_head_wait+0x9d/0x180 [xfs]
Jul 16 15:52:19 rhel7 kernel: [<ffffffffa01f671e>] xlog_grant_head_check+0x9e/0x110 [xfs]
Jul 16 15:52:19 rhel7 kernel: [<ffffffffa01fa0af>] xfs_log_reserve+0xdf/0x1b0 [xfs]
Jul 16 15:52:19 rhel7 kernel: [<ffffffffa01b3684>] xfs_trans_reserve+0x204/0x210 [xfs]
Jul 16 15:52:19 rhel7 kernel: [<ffffffffa01a8fc9>] xfs_iomap_write_allocate+0x1c9/0x350 [xfs]
Jul 16 15:52:19 rhel7 kernel: [<ffffffffa0194356>] xfs_map_blocks+0x216/0x240 [xfs]
Jul 16 15:52:19 rhel7 kernel: [<ffffffffa01955bb>] xfs_vm_writepage+0x25b/0x5d0 [xfs]
Jul 16 15:52:19 rhel7 kernel: [<ffffffff81160d63>] __writepage+0x13/0x50
Jul 16 15:52:19 rhel7 kernel: [<ffffffff81161881>] write_cache_pages+0x251/0x4d0
Jul 16 15:52:19 rhel7 kernel: [<ffffffff81160d50>] ? global_dirtyable_memory+0x70/0x70
Jul 16 15:52:19 rhel7 kernel: [<ffffffff81161b4d>] generic_writepages+0x4d/0x80
Jul 16 15:52:19 rhel7 kernel: [<ffffffffa0194ea3>] xfs_vm_writepages+0x43/0x50 [xfs]
Jul 16 15:52:19 rhel7 kernel: [<ffffffff81162bfe>] do_writepages+0x1e/0x40
Jul 16 15:52:19 rhel7 kernel: [<ffffffff811f0000>] __writeback_single_inode+0x40/0x220
Jul 16 15:52:19 rhel7 kernel: [<ffffffff811f0cfe>] writeback_sb_inodes+0x25e/0x420
Jul 16 15:52:19 rhel7 kernel: [<ffffffff811f0f5f>] __writeback_inodes_wb+0x9f/0xd0
Jul 16 15:52:19 rhel7 kernel: [<ffffffff811f17a3>] wb_writeback+0x263/0x2f0
Jul 16 15:52:19 rhel7 kernel: [<ffffffff811f2cdc>] bdi_writeback_workfn+0x1cc/0x460
Jul 16 15:52:19 rhel7 kernel: [<ffffffff8108f0bb>] process_one_work+0x17b/0x470
Jul 16 15:52:19 rhel7 kernel: [<ffffffff8108fe8b>] worker_thread+0x11b/0x400
Jul 16 15:52:19 rhel7 kernel: [<ffffffff8108fd70>] ? rescuer_thread+0x400/0x400
Jul 16 15:52:19 rhel7 kernel: [<ffffffff8109726f>] kthread+0xcf/0xe0
Jul 16 15:52:19 rhel7 kernel: [<ffffffff810971a0>] ? kthread_create_on_node+0x140/0x140
Jul 16 15:52:19 rhel7 kernel: [<ffffffff81614158>] ret_from_fork+0x58/0x90
Jul 16 15:52:19 rhel7 kernel: [<ffffffff810971a0>] ? kthread_create_on_node+0x140/0x140
Jul 16 15:52:19 rhel7 kernel: INFO: task docker:6395 blocked for more than 120 seconds.
Jul 16 15:52:19 rhel7 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jul 16 15:52:19 rhel7 kernel: docker          D ffff88013fc13680     0  6395   4313 0x00000080
Jul 16 15:52:19 rhel7 kernel: ffff880008dbbaf8 0000000000000082 ffff8800b8628000 ffff880008dbbfd8
Jul 16 15:52:19 rhel7 kernel: ffff880008dbbfd8 ffff880008dbbfd8 ffff8800b8628000 ffff8800ba377400
Jul 16 15:52:19 rhel7 kernel: ffff880011ce4398 ffff8800ba3775c0 00000000000105cc 0000000000000000
Jul 16 15:52:19 rhel7 kernel: Call Trace:
Jul 16 15:52:19 rhel7 kernel: [<ffffffff816096a9>] schedule+0x29/0x70
Jul 16 15:52:19 rhel7 kernel: [<ffffffffa01f659d>] xlog_grant_head_wait+0x9d/0x180 [xfs]
Jul 16 15:52:19 rhel7 kernel: [<ffffffffa01f671e>] xlog_grant_head_check+0x9e/0x110 [xfs]
Jul 16 15:52:19 rhel7 kernel: [<ffffffffa01fa0af>] xfs_log_reserve+0xdf/0x1b0 [xfs]
Jul 16 15:52:19 rhel7 kernel: [<ffffffffa01b3684>] xfs_trans_reserve+0x204/0x210 [xfs]
Jul 16 15:52:19 rhel7 kernel: [<ffffffffa01a9696>] xfs_vn_update_time+0x56/0x190 [xfs]
Jul 16 15:52:19 rhel7 kernel: [<ffffffff811e16c5>] update_time+0x25/0xd0
Jul 16 15:52:19 rhel7 kernel: [<ffffffff811e1970>] file_update_time+0xa0/0xf0
Jul 16 15:52:19 rhel7 kernel: [<ffffffffa01a0a1b>] xfs_file_aio_write_checks+0xdb/0xf0 [xfs]
Jul 16 15:52:19 rhel7 kernel: [<ffffffffa01a0ac3>] xfs_file_buffered_aio_write+0x93/0x260 [xfs]
Jul 16 15:52:19 rhel7 kernel: [<ffffffffa01a0d60>] xfs_file_aio_write+0xd0/0x150 [xfs]
Jul 16 15:52:19 rhel7 kernel: [<ffffffff811c5ebd>] do_sync_write+0x8d/0xd0
Jul 16 15:52:19 rhel7 kernel: [<ffffffff811c665d>] vfs_write+0xbd/0x1e0
Jul 16 15:52:19 rhel7 kernel: [<ffffffff811c70a8>] SyS_write+0x58/0xb0
Jul 16 15:52:19 rhel7 kernel: [<ffffffff81614209>] system_call_fastpath+0x16/0x1b
[...]

The above errors repeat ad infinitum even after you ^C the docker build.

Expected results:
Build should finish


Additional info:


https://github.com/docker/docker/issues/5419 discusses this issue and claims it is not docker's fault.

The problem appears to be in how sparse files are handled: creating a user with a very large UID causes /var/log/lastlog (a sparse file) to grow to cover all the intervening entries.

This can be worked around by passing the --no-log-init (-l) option to useradd.

However, this is something that should be fixed, and other sparse files could trigger the same problem.
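To make the workaround concrete, a minimal sketch (the UID and home directory are taken from the reproducer above; the du comparison is just one hypothetical way to see the sparseness):

```bash
# Workaround sketch: -l / --no-log-init tells useradd to skip the lastlog update,
# so the huge sparse /var/log/lastlog never ends up in the image layer.
useradd -l -u 800173295 -G 100 -c "User, Test" -d /home/tuser tuser

# Without -l, the sparseness shows up as allocated size << apparent size:
du -h /var/log/lastlog                    # blocks actually allocated (small)
du -h --apparent-size /var/log/lastlog    # apparent size (huge for a large UID)
```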

Comment 2 Daniel Walsh 2015-08-21 03:59:42 UTC
Vivek any ideas on this one?

Comment 4 Vivek Goyal 2015-09-29 18:18:12 UTC
I ran the above useradd command in a rhel6.5 container and it works fine. No large sparse file was created.

So does that mean this happens only when it is run during docker build via a RUN command?

Comment 5 Vivek Goyal 2015-09-29 18:19:35 UTC
An upstream comment from unclejack says that a large 32 GB sparse file is created. It is not clear what creates that sparse file. I assumed it would be useradd, but I can't see that.

Is it possible that docker commit is not handling sparse files well?

Comment 6 Daniel Walsh 2015-09-29 18:43:33 UTC
wtmp file?

Comment 7 Karl Hastings 2015-09-29 19:13:20 UTC
This is only a problem during `docker build`.

/var/log/lastlog is a sparse file, indexed by UID. When an entry is written for a very large UID, the file's apparent size has to grow to cover all the intervening entries.

This is why passing '-l' to useradd works around the problem: that option tells useradd to skip updating /var/log/lastlog.
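To put a number on it (assuming glibc's usual 292-byte lastlog record on x86_64; the exact record size varies by platform):

```bash
# lastlog holds one fixed-size record per UID, written at offset UID * record_size,
# so a UID of 800173295 implies an apparent file size of roughly:
echo $((800173295 * 292))    # 233650602140 bytes, i.e. ~218 GiB
ls -ls /var/log/lastlog      # apparent size vs. blocks actually allocated
```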

Comment 8 Vivek Goyal 2015-09-29 19:34:20 UTC
I thought this might be an issue with docker commit and sparse files, and I am running into a strange issue when I create a sparse file in a container and then try to commit the container.

[root@vm2-f22 rhvgoyal-docker]# docker commit 2b7c139146d8
Error response from daemon: open /var/lib/docker/devicemapper/mnt/2b7c139146d8fb0cd6110c05f12ccd5d99e4cf59d6108e1ec97233f1f776a931-init/rootfs/usr/share/zoneinfo/America/Indiana: no such file or directory

Comment 9 Vivek Goyal 2015-09-29 19:40:12 UTC
So I tried this with the overlayfs backend now. I create a 1 GB sparse file and do a docker commit, and the resulting image is 1.26 GB in size. That means docker commit can't handle sparse files well and bloats them.

$ docker run -ti fedora bash
$ truncate -s 1G test.txt
$ exit
$ docker commit <above-container-id>
$ docker images

And the resulting image is 1.26 GB.


docker history shows that the topmost layer itself is 1.074 GB, which confirms that docker has bloated the sparse file.

I am testing with the latest upstream docker, so the problem is still present; it first needs to be fixed upstream.
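A rough way to verify this (hypothetical container/image names; exact sizes will vary):

```bash
# Create a 1 GiB sparse file in a container, commit it, and compare sizes.
docker run --name sparse-src fedora truncate -s 1G /test.txt
docker commit sparse-src sparse-img
docker images sparse-img                                     # layer includes the expanded 1 GiB
docker run --rm sparse-img du -h /test.txt                   # ~1.0G allocated: no longer sparse
docker run --rm sparse-img du -h --apparent-size /test.txt   # ~1.0G apparent
```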

Comment 12 Daniel Walsh 2015-09-29 20:29:08 UTC
docker diff must show a huge file. Perhaps golang's archive/tar cannot handle sparse files?

Comment 13 Vincent Batts 2015-10-28 17:57:30 UTC
Golang's archive/tar can extract from a sparse archive, but cannot create one. Of all the tar implementations, only GNU tar creates sparse archives; many only barely support extracting them.
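For comparison, GNU tar only stores holes when asked to (a quick local sketch; sizes are approximate):

```bash
# GNU tar records holes only with -S/--sparse; without it the holes are written out as zeroes.
truncate -s 1G sparse.img
tar -cSf with-sparse.tar sparse.img    # small archive: hole map plus real data
tar -cf  plain.tar sparse.img          # ~1 GiB archive: holes expanded to zeroes
ls -lh with-sparse.tar plain.tar
```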

Comment 14 Vivek Goyal 2015-10-28 18:02:52 UTC
Now docker has re-opened this PR.

https://github.com/docker/docker/issues/5419

vbatts, sounds like fixing this is not going to be easy. Will require fundamental changes to golang archive/tar to be able to create sparse files.

Comment 15 Vincent Batts 2015-10-28 19:50:04 UTC
(In reply to Vivek Goyal from comment #14)
> Now docker has re-opened this PR.
> 
> https://github.com/docker/docker/issues/5419
> 
> vbatts, sounds like fixing this is not going to be easy. Will require
> fundamental changes to golang archive/tar to be able to create sparse files.

I've contemplated these changes before, but it's not straightforward and would be a slight departure from the API.
Though this lack of sparse file handling does relate to our previous discussions of block-level CoW.

Comment 16 Daniel Walsh 2015-10-29 15:53:16 UTC
Really needs to be fixed upstream.

Comment 17 Daniel Walsh 2016-02-22 20:14:21 UTC
No movement on this.

Comment 18 Daniel Walsh 2016-05-19 14:06:35 UTC
https://github.com/docker/docker/issues/20707

Comment 19 Daniel Walsh 2016-08-19 22:27:27 UTC
No movement on this since May.

Comment 20 Daniel Walsh 2016-08-19 22:27:55 UTC
Nalin, any chance we can do a better job on this with container/storage?

Comment 22 Qian Cai 2016-09-30 20:23:07 UTC
Encountered this as well on RHEL 7.3.

There is a sparse file inside the container backed by overlay2/xfs.

# du -sh trinity-testfile2
18M	trinity-testfile2

# ll trinity-testfile2
----r--r--. 1 test test 2704441078443517033 Sep 30 16:06 trinity-testfile2

# filefrag trinity-testfile2 
trinity-testfile2: 10 extents found

Then, "docker commit" is running out of disk space because there is huge file (still growing) in the host during docker-untar.

# du -sh
...
28G	/var/lib/docker-latest/overlay2/907fba5d781faf78bb48680dfe31646bd60b893216ada69f5a9a14daab3559b2/diff/home/test/trinity-testfile2
...

Comment 24 Antonio Murdaca 2016-11-28 19:55:12 UTC
We will probably have this fixed in Fedora and RHEL 8 via https://bugzilla.redhat.com/show_bug.cgi?id=951564.

I understand this is causing issues, though. There's an upstream issue in Golang about having a tar writer that supports sparse files (https://github.com/golang/go/issues/13548), but that issue has been stuck for more than a year now.

On the Docker side, we could implement a tar writer wrapper that handles sparse files, yes. We'll run into trouble when hashing the resulting tar files, though, since docker relies on content-addressable hashes. I'd say this isn't really doable in docker: by going with a custom wrapper we may end up relying on something that could change upstream in Golang, making any docker layer created with our wrapper "invalid".

I'd also point out that, while there are tools that support sparse files (rsync, tar), it's not their default behavior either (tar needs -S, for instance).

The proposal in https://bugzilla.redhat.com/show_bug.cgi?id=951564 is to change how the lastlog db is built (from my understanding at least). That makes the most sense here: as pointed out in that BZ, people don't really expect a 400 GB lastlog file lying around, and tooling generally doesn't support sparse files.

Let's keep this open for now. But really, we should look forward to having the lastlog structure modified so it doesn't end up too large to be tarred up by docker.

Comment 25 Daniel Walsh 2016-11-28 21:07:24 UTC
Sadly, you are linking to a very old Bugzilla bug that has had no action in the last year.

Comment 26 Antonio Murdaca 2016-11-28 21:11:22 UTC
I know :( Unfortunately, as explained above, that's one (if not the only) viable option to me.

Comment 27 Antonio Murdaca 2016-11-29 11:01:41 UTC
Interestingly, people at Docker ran into this today with https://github.com/docker/docker/issues/28920, and they realized it could DDoS their own Docker Hub when auto-building images.

Comment 28 Mark Thacker 2016-11-30 21:17:12 UTC
So what is the status of this bug? It sounds like there is no forward momentum.
Do we have scope for a fix in 7.4? If not, it should be re-targeted to a different / future release.

Comment 29 Daniel Walsh 2016-11-30 22:02:30 UTC
There is no easy fix. Perhaps for lastlog we could just replace it with an empty file. Perhaps docker commit should look for sparse files and create empty files where they exist. I'm not sure if there is an easy way to check whether a file is sparse, so I wrote a script from some notes I found on the internet.

cat /usr/bin/sparse
#!/bin/bash
# A file is sparse when its allocated size (blocks * block size) is smaller
# than its apparent size; stat prints that expression and $(( )) evaluates it.
sparse() {
	file=$1
	if [ "$((`stat -c '%b*%B-%s' -- "$file"`))" -lt 0 ]; then
	    echo "$file" is sparse
	fi
}
# Read candidate file names from stdin, one per line.
while IFS='' read -r file || [[ -n "$file" ]]; do
      sparse "$file"
done


You can execute something like

# find /var/log -type f ! -size 0 | sparse
/var/log/lastlog is sparse

Comment 30 Daniel Walsh 2016-11-30 22:03:29 UTC
Would Docker be interested in a patch that blocked the import of sparse files on docker commit?

Comment 35 Daniel Walsh 2017-01-10 20:12:15 UTC
As far as I know this bug is not fixed.

Comment 39 Vivek Goyal 2017-01-31 15:15:41 UTC
I am providing a devel cond_nack for this. This issue is still an open problem and is not fixed even upstream; it can't be fixed yet.

Comment 40 Red Hat Bugzilla Rules Engine 2017-01-31 15:15:52 UTC
Development Management has reviewed and declined this request. You may appeal this decision by reopening this request.

Comment 42 Vincent Batts 2017-04-24 12:24:35 UTC
(In reply to Daniel Walsh from comment #30)
> Would Docker be interested in a patch that blocked the import of sparse
> files on docker commit?

Certainly not. Blocking or clobbering sparse files with an empty file would be unpredictable behavior for many apps.

There might be a future with proper sparse file support in golang's `archive/tar`. I'm not giving up hope there.

As for detecting sparse files on disk, it's the same in golang as your bash example:

```golang
package main

import (
  "fmt"
  "log"
  "os"
  "path/filepath"
  "syscall"
)

func main() {
  err := filepath.Walk(".", func(path string, stat os.FileInfo, err error) error {
    if err != nil {
      return err
    }
    if !stat.Mode().IsRegular() {
      return nil
    }
    statT, ok := stat.Sys().(*syscall.Stat_t)
    if !ok {
      return fmt.Errorf("no syscall.Stat_t available for %s", path)
    }
    // st_blocks counts 512-byte units; if the bytes actually allocated are
    // fewer than the apparent size, the file has holes (it is sparse).
    allocated := statT.Blocks * 512
    if allocated < statT.Size {
      fmt.Printf("%s: { allocated: %d < size: %d }\n", path, allocated, statT.Size)
    }

    return nil
  })
  if err != nil {
    log.Fatal(err)
  }
}
```

This just checks whether the file's apparent size is larger than the bytes currently allocated on disk. The next step would be using things like SEEK_HOLE to jump between holes and allocated ranges and count the offsets.
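From the shell, the same two checks can be approximated with stat and xfs_io (which exposes SEEK_DATA/SEEK_HOLE); a sketch, assuming a reasonably recent xfsprogs is installed:

```bash
# Allocation check: st_blocks is in 512-byte units; compare against the apparent size.
stat -c '%n: blocks=%b (512-byte units), apparent size=%s bytes' /var/log/lastlog
# Data/hole map via SEEK_DATA/SEEK_HOLE (works on any filesystem that supports them):
xfs_io -r -c "seek -a -r 0" /var/log/lastlog
```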

Comment 43 Daniel Walsh 2017-06-30 15:57:45 UTC
Any movement on this?

Comment 46 Vincent Batts 2017-09-19 18:52:07 UTC
https://github.com/golang/go/issues/13548 is still an active design discussion.

Comment 50 Vincent Batts 2017-10-11 14:18:21 UTC
Upstream Go has merged the first support for sparse files in archive/tar. It is queued for go1.10. Currently you can review this API here: https://tip.golang.org/pkg/archive/tar/

The go1.10 release may not happen until the beginning of February 2018, and this API for sparse file support may still change before then, so it is not something to backport.

Comment 51 Vivek Goyal 2017-10-11 14:32:43 UTC
Given it's a new API, that means docker will require changes too? If yes, that means first go1.10 will be released, then docker will make changes, and then we will backport everything into RHEL. So this sounds more like April/May 2018 to me.

Comment 53 Petr Špaček 2017-12-19 12:18:57 UTC
Please note that lastlog is not the only thing that uses sparse files.

I just ran into this problem while trying to build a container with an LMDB instance inside. LMDB is used extensively by OpenLDAP and other projects, and it creates sparse files by design.

Comment 62 Antonio Murdaca 2018-08-27 10:21:43 UTC
no update, this depends on golang

Comment 63 Vincent Batts 2018-09-11 19:35:58 UTC
(In reply to Antonio Murdaca from comment #62)
> no update, this depends on golang

Correct. There has been an ongoing effort upstream to get sparse file support. It _almost_ landed in go1.10, but was pulled out at the last minute for further API review. The conversation to re-add it is here: https://github.com/golang/go/issues/22735

Comment 64 Tom Sweeney 2020-06-09 20:25:46 UTC
We have no plans to ship another version of Docker at this time. RHEL 7 is in its final support stages, where only security fixes will be released. Customers should move to using Podman, which is available starting in RHEL 7.6.

