Bug 1490910 - Upgraded rhel7.3 to 7.4 uses "extfs" as backing storage instead of xfs when using overlay2 for docker storage
Summary: Upgraded rhel7.3 to 7.4 uses "extfs" as backing storage instead of xfs when using overlay2 for docker storage
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: docker
Version: 7.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Daniel Walsh
QA Contact: atomic-bugs@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-09-12 13:10 UTC by Eduardo Minguez
Modified: 2019-03-06 01:18 UTC (History)
CC List: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-09-12 15:57:32 UTC
Target Upstream Version:
Embargoed:


Attachments
rpm -qa | sort output from the fresh installed instance (10.08 KB, text/plain)
2017-09-12 13:12 UTC, Eduardo Minguez
rpm -qa | sort output from the fresh installed instance upgraded to latest packages (11.47 KB, text/plain)
2017-09-12 13:13 UTC, Eduardo Minguez

Description Eduardo Minguez 2017-09-12 13:10:26 UTC
Description of problem:
A RHEL 7.3 instance freshly installed in OSP from the cloud image, configured for RHN repos and upgraded to the latest released packages (yum update -y), ends up with "extfs" as the backing storage when docker storage is configured to use overlay2.
The same procedure with the RHEL 7.4 image yields "xfs".
This makes the OCP installation fail, since one of its checks requires that docker storage using overlay2 be backed by xfs.

Version-Release number of selected component (if applicable):
$ cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 7.3 (Maipo)
Upgraded to latest rhel7.4:

Docker version:


How reproducible:
Deploy a RHEL 7.3 instance using the RHEL cloud image. Upgrade the packages to latest. Install docker. Run docker-storage-setup with a dedicated disk, start docker, and observe the backing storage.


Steps to Reproduce:
1. cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.3 (Maipo)
2. subscription-manager register
3. subscription-manager attach --pool=<mypoolid>
4. subscription-manager repos --disable=*
5. subscription-manager repos \
    --enable=rhel-7-server-rpms \
    --enable=rhel-7-server-extras-rpms
Just in case: rpm -qa | sort >> rhel73.packages.txt
6. yum update -y
7. reboot
Just in case: rpm -qa | sort >> rhel74.packages.txt
8. lsblk
NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda    253:0    0  40G  0 disk 
└─vda1 253:1    0  40G  0 part /
vdb    253:16   0  20G  0 disk 
9. yum install -y docker
10. docker version
Client:
 Version:         1.12.6
 API version:     1.24
 Package version: docker-1.12.6-55.gitc4618fb.el7.x86_64
 Go version:      go1.8.3
 Git commit:      c4618fb/1.12.6
 Built:           Thu Aug 24 14:48:49 2017
 OS/Arch:         linux/amd64
Cannot connect to the Docker daemon. Is the docker daemon running on this host?
11. docker-storage-setup 
INFO: Volume group backing root filesystem could not be determined
INFO: Device node /dev/vdb1 exists.
  Physical volume "/dev/vdb1" successfully created.
  Volume group "docker_vol" successfully created
12. systemctl start docker
13. docker info | grep -i backing

Actual results:
Backing Filesystem: extfs

Expected results:
Backing Filesystem: xfs

Additional info:
Same procedure using RHEL 7.4 image, shows XFS.
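The difference can be spot-checked without docker at all: a minimal sketch (assuming a stock RHEL cloud image; `stat` is from coreutils) that reports the filesystem backing /var, which is where overlay2 ends up when no separate container volume is configured:

```shell
#!/bin/sh
# With no CONTAINER_ROOT_LV_* settings, overlay2 sits on the filesystem
# that backs /var, so this is what "docker info" later reports as
# "Backing Filesystem" ("extfs" for ext2/3/4, "xfs" for XFS).
fs_type=$(stat -f -c %T /var)
echo "Filesystem backing /var: ${fs_type}"
```

On the upgraded 7.3 cloud image this shows an ext filesystem (reported as "extfs" by docker); on a fresh 7.4 image it shows xfs.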

Comment 2 Eduardo Minguez 2017-09-12 13:12:38 UTC
Created attachment 1324894 [details]
rpm -qa | sort output from the fresh installed instance

Comment 3 Eduardo Minguez 2017-09-12 13:13:05 UTC
Created attachment 1324895 [details]
rpm -qa | sort output from the fresh installed instance upgraded to latest packages

Comment 4 Daniel Walsh 2017-09-12 14:01:20 UTC
We cannot update a RHEL 7.3 system to RHEL 7.4 and run overlay on it.  The XFS on RHEL 7.3 is configured differently than on RHEL 7.4 fresh installs.  This causes some strange issues on an overlay system.  The only way to modify the RHEL 7.3 system's XFS is to reinstall it.  Another option would be to put /var/lib/docker onto a brand-new XFS partition, which you could then use with overlay.

Vivek, could you give more information on the issues?

Comment 5 Eduardo Minguez 2017-09-12 14:10:04 UTC
(In reply to Daniel Walsh from comment #4)
> We can not update a RHEL7.3 system to RHEL7.4 and run overlay on it.  The
> XFS on RHEL7.3 is configured differently then on RHEL7.4 fresh installs. 
> This causes some strange issues on an overlay system.  The only way to
> modify the RHEL7.3 systems XFS is to reinstall it.  Another option would be
> to put the /var/lib/docker onto a brand new XFS partition, and then you
> could use it with Overlay.
> 
> Vivek could you give more information on the issues.

Thanks for the quick answer, Dan. The thing is, I did nothing with the docker volume (/dev/vdb, a dedicated disk) until I had the system updated to 7.4.
If the issue is just because it was 7.3, I think we should add a note to the documentation warning about this.

Thanks.

Comment 6 Qian Cai 2017-09-12 14:13:34 UTC
Can you post the full output of docker info?

Comment 7 Eduardo Minguez 2017-09-12 14:17:59 UTC
(In reply to CAI Qian from comment #6)
> Can you post the full output of docker info?

Sure.

[root@overlay2testrhel7 ~]# docker info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: 1.12.6
Storage Driver: overlay2
 Backing Filesystem: extfs
Logging Driver: journald
Cgroup Driver: systemd
Plugins:
 Volume: local
 Network: null host bridge overlay
 Authorization: rhel-push-plugin
Swarm: inactive
Runtimes: docker-runc runc
Default Runtime: docker-runc
Security Options: seccomp selinux
Kernel Version: 3.10.0-693.2.1.el7.x86_64
Operating System: Employee SKU
OSType: linux
Architecture: x86_64
Number of Docker Hooks: 3
CPUs: 2
Total Memory: 3.702 GiB
Name: overlay2testrhel7.novalocal
ID: P3TD:LTUY:CJL3:CC7Y:7LID:K64X:LD3Z:2BI6:TPS2:XKNU:BUIB:LKQS
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://registry.access.redhat.com/v1/
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
Insecure Registries:
 127.0.0.0/8
Registries: registry.access.redhat.com (secure), docker.io (secure)

Comment 8 Qian Cai 2017-09-12 14:33:28 UTC
(In reply to Daniel Walsh from comment #4)
> We can not update a RHEL7.3 system to RHEL7.4 and run overlay on it.  The
> XFS on RHEL7.3 is configured differently then on RHEL7.4 fresh installs. 
> This causes some strange issues on an overlay system.  The only way to
> modify the RHEL7.3 systems XFS is to reinstall it.  Another option would be
> to put the /var/lib/docker onto a brand new XFS partition, and then you
> could use it with Overlay.
I don't think this is correct. XFS formatted by RHEL 7.3 installer will have d_type=1 by default.
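For context, d_type support can be checked on a live system with `xfs_info <mountpoint> | grep ftype`. A hedged sketch that parses a hypothetical sample `xfs_info` naming line (the real command needs a mounted XFS filesystem, so the sample values here are illustrative only):

```shell
#!/bin/sh
# overlay2 needs d_type support from the backing filesystem; XFS provides
# it only when formatted with ftype=1 (mkfs.xfs -n ftype=1).
# Sample "naming" line as printed by xfs_info (hypothetical values):
sample='naming   =version 2              bsize=4096   ascii-ci=0 ftype=1'
ftype=$(printf '%s\n' "$sample" | grep -o 'ftype=[01]' | cut -d= -f2)
if [ "$ftype" = "1" ]; then
    echo "ftype=1: d_type supported, overlay2 is safe on this XFS"
else
    echo "ftype=0: overlay2 will misbehave; reformat with mkfs.xfs -n ftype=1"
fi
```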

Comment 9 Qian Cai 2017-09-12 15:00:05 UTC
I think this is what happened. According to your email, your docker-storage-setup config is invalid (missing CONTAINER_ROOT_LV_NAME and CONTAINER_ROOT_LV_MOUNT_PATH, as mentioned in the emails).

DEVS=/dev/vdb
VG=docker_vol
STORAGE_DRIVER=overlay2

This will still put overlay on the same filesystem as your original /var, and both DEVS and VG will be ignored (except that c-s-s still partitions and formats the disk for nothing).

Therefore, the OSP RHEL 7.3 cloud image must use ext4 for /var. You could confirm it with a fresh-installed RHEL 7.3. On the other hand, the OSP RHEL 7.4 cloud image uses XFS for /var. That explains why you see differences between the upgrade case and the fresh-install case.

There are multiple problems. The first is that people expect the overlay/overlay2 driver to work like the devicemapper config when adding an extra disk, so it might be a good idea to fix c-s-s so that it works with a config file like this:

DEVS=/dev/vdb
VG=docker_vol
STORAGE_DRIVER=overlay2

People expect that c-s-s will create a partition, format it with XFS, add it to the VG, and mount it under /var/lib/docker for use by the overlay2 driver.

The other issue is that the OCP installer should remove the check preventing the overlay/overlay2 driver with ext4 from working. Technically, I don't see why it wouldn't work.

Comment 10 Eduardo Minguez 2017-09-12 15:24:06 UTC
(In reply to CAI Qian from comment #9)
> I think this is what happened. According to your email, your
> docker-storage-setup is invalid (missing CONTAINER_ROOT_LV_NAME and
> CONTAINER_ROOT_LV_MOUNT_PATH as mentioned in emails).
> 
> DEVS=/dev/vdb
> VG=docker_vol
> STORAGE_DRIVER=overlay2
> 
> This will still put overlay on the filesystem the same as your original /var
> where both DEVS and VG will be ignored (expect c-s-s still try to partition
> and format the disk for nothing).
> 
> Therefore, the OSP RHEL 7.3 cloud image must use ext4 for /var. You could
> confirm it with the flesh-installed RHEL 7.3. On the other hand, OSP RHEL
> 7.4 cloud image uses XFS for /var. That will tell why you see differences
> between upgrade case and fresh-install case.

It looks like we have a winner, you are right:

# cat /etc/fstab 

UUID=28ed3875-45ea-43fc-837a-e1b555c60162 /                       ext4    defaults        1 1

> 
> There are multiple problems. The first is that people expect
> overlay/overlay2 driver works like devicemapper config when adding an extra
> disk, so it might be a good idea to fix c-s-s to make it works with a config
> file like this.
> 
> DEVS=/dev/vdb
> VG=docker_vol
> STORAGE_DRIVER=overlay2

TBH the OCP documentation just links to the RHEL documentation, but I think the documentation can be improved.

> 
> People expect that c-s-s will create a partition and format it with XFS to
> add to the VG, and mount it under /var/lib/docker to be used by overlay2
> driver.
> 

Exactly.

> The other issue is that the OCP installer should remove the check preventing
> overlay/overlay2 driver with ext4 from working. Technically, I don't see why
> it won't work.

Thanks CAI Qian!

Comment 11 Vivek Goyal 2017-09-13 12:04:18 UTC
(In reply to CAI Qian from comment #9)
> 
> There are multiple problems. The first is that people expect
> overlay/overlay2 driver works like devicemapper config when adding an extra
> disk, so it might be a good idea to fix c-s-s to make it works with a config
> file like this.
> 
> DEVS=/dev/vdb
> VG=docker_vol
> STORAGE_DRIVER=overlay2
> 
> People expect that c-s-s will create a partition and format it with XFS to
> add to the VG, and mount it under /var/lib/docker to be used by overlay2
> driver.

If this is confusing, then I think we should improve the documentation. Both modes are valid. If you just specify STORAGE_DRIVER=overlay2, then no new logical volume will be created and overlay2 will be set up on the rootfs.

One needs to specify CONTAINER_ROOT_LV_NAME and CONTAINER_ROOT_LV_MOUNT_PATH to create a new logical volume for storing container images and container rootfs.
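For instance, a hedged sketch of a /etc/sysconfig/docker-storage-setup that does create a dedicated XFS logical volume for overlay2 (the LV name and size here are illustrative, not taken from this report):

```
# /etc/sysconfig/docker-storage-setup (sketch; LV name and size are illustrative)
DEVS=/dev/vdb
VG=docker_vol
STORAGE_DRIVER=overlay2
CONTAINER_ROOT_LV_NAME=dockerlv
CONTAINER_ROOT_LV_MOUNT_PATH=/var/lib/docker
```

With both CONTAINER_ROOT_LV_* variables set, c-s-s carves the logical volume out of the VG, formats it with XFS, and mounts it at the given path before docker starts.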

Comment 12 Eduardo Minguez 2017-09-13 12:21:43 UTC
I've created a separate bugzilla for the documentation update, https://bugzilla.redhat.com/show_bug.cgi?id=1491268, as this one has been marked as NOTABUG.

Comment 13 Qian Cai 2017-09-13 12:29:15 UTC
(In reply to Vivek Goyal from comment #11)
> (In reply to CAI Qian from comment #9)
> > 
> > There are multiple problems. The first is that people expect
> > overlay/overlay2 driver works like devicemapper config when adding an extra
> > disk, so it might be a good idea to fix c-s-s to make it works with a config
> > file like this.
> > 
> > DEVS=/dev/vdb
> > VG=docker_vol
> > STORAGE_DRIVER=overlay2
> > 
> > People expect that c-s-s will create a partition and format it with XFS to
> > add to the VG, and mount it under /var/lib/docker to be used by overlay2
> > driver.
> 
> I this is confusing then I think we should improve documentation. Both the
> modes are valid. If you just specify STORAGE_DRIVER=overlay2, then no new
> logical volume will be created and overlay2 will be setup on rootfs. 
> 
> One needs to specify CONTAINER_ROOT_LV_NAME and CONTAINER_ROOT_LV_MOUNT_PATH
> to create a new logical volume for storing container images and container
> rootfs.

Well, I would also prefer that c-s-s just warned people that CONTAINER_ROOT_LV_NAME and CONTAINER_ROOT_LV_MOUNT_PATH are missing from the above 3-line config, rather than still partitioning and formatting the extra disk and adding it to the VG for nothing.

Comment 14 Vivek Goyal 2017-09-13 12:42:02 UTC
So who created this config file? Did the user get it via an upgrade, or did the user create it? If the user created it, then it is just a configuration issue and the user needs to go read the documentation. There are so many ways a configuration can be wrong, and I can't keep track of all these situations and keep warning about them.

Comment 15 Qian Cai 2017-09-13 13:09:11 UTC
This is probably going to be the top configuration that people expect to work with an extra disk and overlay2 when they switch from devicemapper. Every doc says that just appending STORAGE_DRIVER=overlay2 works in all cases, i.e., cloud-init, kickstart, etc., so it is not as if people will use other questionable configurations as often as this one.

Comment 16 Vivek Goyal 2017-09-13 13:46:37 UTC
(In reply to CAI Qian from comment #15)

> Every doc said that just by appending STORAGE_DRIVER=overlay2 in all cases
> it would work, i.e, cloud-init, kickstart etc, so it is not like people will
> use other controversial configuration as much as like this.

Sure, and it is working. You just specify STORAGE_DRIVER=overlay2 and you will get the overlay2 driver working with docker. Where is the problem?

It never said that it will set up overlay2 on a separate logical volume rather than on top of the rootfs. Which document says that the above configuration will set it up on a separate logical volume?

Why are you assuming that by default people want to set up overlay2 on a separate logical volume and not on the rootfs? I would argue that in most cases people would like to set up overlay2 on top of the rootfs, and only in a select few cases would they want a separate logical volume with overlay2 set up on it.

Comment 17 Eduardo Minguez 2017-09-13 13:51:56 UTC
(In reply to Vivek Goyal from comment #16)

> Why are you assuming that by default people want to setup overlay2 on a
> separate logical volume and not on rootfs. I would argue that for most of
> the cases, people would like to setup overlay2 on top of rootfs and only in
> select few cases they would like to use a separate logical volume and
> overlay2 being setup on that.

For OCP installations, this[1] encourages you to use a different block device.

[1] https://docs.openshift.com/container-platform/3.6/install_config/install/host_preparation.html#configuring-docker-storage

Comment 18 Qian Cai 2017-09-25 15:20:24 UTC
(In reply to Vivek Goyal from comment #16)
> (In reply to CAI Qian from comment #15)
> 
> > Every doc said that just by appending STORAGE_DRIVER=overlay2 in all cases
> > it would work, i.e, cloud-init, kickstart etc, so it is not like people will
> > use other controversial configuration as much as like this.
> 
> Sure and it is working. You just specify STORAGE_DRIVER=overlay2 and and you
> will get overlay2 driver working with docker. Where is the problem.
> 
> It never said that it will setup overlay2 on a separate logical volume and
> not on top of rootfs. Which document says that above configuration will set
> it up on a separate logical volume?
> 
> Why are you assuming that by default people want to setup overlay2 on a
> separate logical volume and not on rootfs. I would argue that for most of
> the cases, people would like to setup overlay2 on top of rootfs and only in
> select few cases they would like to use a separate logical volume and
> overlay2 being setup on that.
Because people want isolation between their docker storage and the rootfs. They do not want full or corrupted docker storage to break the whole system by filling up the rootfs. They want a way to extend the size of docker storage separately from the rootfs. They want a way to add a separate block device to back docker storage.

Comment 19 Vivek Goyal 2017-09-25 18:54:52 UTC
(In reply to CAI Qian from comment #18)

[..]
> > Why are you assuming that by default people want to setup overlay2 on a
> > separate logical volume and not on rootfs. I would argue that for most of
> > the cases, people would like to setup overlay2 on top of rootfs and only in
> > select few cases they would like to use a separate logical volume and
> > overlay2 being setup on that.
> Because people want isolation between their docker storage and rootfs. They
> want in case of full of docker storage or corruption is not make the whole
> system is broken due to rootfs full. They want a way that they can extend
> the size of docker storage separately from rootfs. They want to a way to add
> a separate block device to back docker storage.

Sure. Those who want docker storage on a separate disk with overlayfs can specify CONTAINER_ROOT_LV_NAME. But that does not mean it should be the default for container-storage-setup.

Why doesn't docker set up the docker root on a separate volume by default? By default all apps share the same rootfs, and if an admin wants some apps not to share rootfs storage, they need to configure that explicitly. This is no different.

Comment 20 Qian Cai 2017-09-25 19:06:57 UTC
(In reply to Vivek Goyal from comment #19)
> (In reply to CAI Qian from comment #18)
> 
> [..]
> > > Why are you assuming that by default people want to setup overlay2 on a
> > > separate logical volume and not on rootfs. I would argue that for most of
> > > the cases, people would like to setup overlay2 on top of rootfs and only in
> > > select few cases they would like to use a separate logical volume and
> > > overlay2 being setup on that.
> > Because people want isolation between their docker storage and rootfs. They
> > want in case of full of docker storage or corruption is not make the whole
> > system is broken due to rootfs full. They want a way that they can extend
> > the size of docker storage separately from rootfs. They want to a way to add
> > a separate block device to back docker storage.
> 
> Sure. Those who want docker storage on a separate disk with overlayfs, then
> can specify CONTAINER_ROOT_LV_NAME. But that does not mean that should be
> default of container-storage-setup.
I am not asking to make it the default. I am simply asking to make the transition from devicemapper to overlayfs more intuitive and smooth for people adding a block device, i.e., to make DEVS + STORAGE_DRIVER=overlay2/overlay work without CONTAINER_ROOT_LV_NAME and CONTAINER_ROOT_LV_PATH, so that by default CONTAINER_ROOT_LV_NAME would be some generated name (similar to the auto-generated docker thin pool name) and CONTAINER_ROOT_LV_PATH would default to /var/lib/docker.

> Why docker does not setup docker root on a separate volume by default? By
> default all the apps share same rootfs. And if admin wants some apps to not
> share rootfs storage, they need to configure it explicitly. And this is no
> different.
I would say docker has no idea whether people have a separate block device to back the storage or not, but our docs always recommend that people set one up where possible, as mentioned by Eduardo.

Comment 21 Vivek Goyal 2017-09-25 20:16:25 UTC
(In reply to CAI Qian from comment #20)
> (In reply to Vivek Goyal from comment #19)
> > (In reply to CAI Qian from comment #18)
> > 
> > [..]
> > > > Why are you assuming that by default people want to setup overlay2 on a
> > > > separate logical volume and not on rootfs. I would argue that for most of
> > > > the cases, people would like to setup overlay2 on top of rootfs and only in
> > > > select few cases they would like to use a separate logical volume and
> > > > overlay2 being setup on that.
> > > Because people want isolation between their docker storage and rootfs. They
> > > want in case of full of docker storage or corruption is not make the whole
> > > system is broken due to rootfs full. They want a way that they can extend
> > > the size of docker storage separately from rootfs. They want to a way to add
> > > a separate block device to back docker storage.
> > 
> > Sure. Those who want docker storage on a separate disk with overlayfs, then
> > can specify CONTAINER_ROOT_LV_NAME. But that does not mean that should be
> > default of container-storage-setup.
> I am not asking to make it default. I simply ask to make it more intuitive
> and smooth for people to transit from devicemapper to overlayfs when adding
> a block device, i.e., to make DEVS + STORAGE_DRIVER=overlay2/overlay works
> without CONTAINER_ROOT_LV_NAME and CONTAINER_ROOT_LV_PATH, 

Can't do that in the current framework. Unfortunately, DEVS= is used for two purposes. One is to be able to add more disks to the volume group used by the rootfs of the image. The second is setting up container storage.

If container-storage-setup were being used only for setting up container storage, then we could assume that DEVS is for the exclusive use of container storage. But that's not the case.

At some point, if all the logic w.r.t. the rootfs of the atomic image moves into a separate component, then one could think of doing something like that.


> so by default
> CONTAINER_ROOT_LV_NAME would be some generated name (similar to an
> auto-generated docker thin pool name) and CONTAINER_ROOT_LV_PATH would set
> default to /var/lib/docker.

We explicitly stayed away from that because we wanted to make this tool generic so that other container runtimes like crio could make use of it. Unfortunately we have decided not to use it for crio for now. But still, the design theme is that this is "container-storage-setup", not "docker-storage-setup", so we can't assume ROOT_LV_PATH defaults to "/var/lib/docker".

> 
> > Why docker does not setup docker root on a separate volume by default? By
> > default all the apps share same rootfs. And if admin wants some apps to not
> > share rootfs storage, they need to configure it explicitly. And this is no
> > different.
> I would docker has no idea if people would have a separate block device to
> back it the storage or not, but our doc always recommend people to setup one
> where possible as mentioned by Eduardo.

container-storage-setup does not make any recommendation either way. If openshift does, then openshift needs to fix their docs and make sure CONTAINER_ROOT_LV_NAME is specified.

Comment 22 Eduardo Minguez 2017-09-26 07:02:16 UTC
Actually, we do recommend a separate disk here http://www.projectatomic.io/docs/docker-storage-recommendation/ and in the OCP documentation.

Also, it would be nice to clarify the options in the official documentation: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html/managing_containers/managing_storage_with_docker_formatted_containers#overlay_graph_driver, adding a good description of what every variable does.

Comment 23 Qian Cai 2017-09-26 13:51:35 UTC
> > so by default
> > CONTAINER_ROOT_LV_NAME would be some generated name (similar to an
> > auto-generated docker thin pool name) and CONTAINER_ROOT_LV_PATH would set
> > default to /var/lib/docker.
> 
> We explicitly stayed away from that because we wanted to make this tool
> generic so that other container runtimes like crio could make use of it.
> Unfortunately we have decided not to make use of this for crio for now. But
> still, design theme is that this is "container-storage-setup" and not
> "docker-storage-setup". So can't assume ROOT_LV_PATH to "/var/lib/docker".
With docker as 100% of the use cases now, it does sound more efficient to set the default to /var/lib/docker.

Comment 24 Vivek Goyal 2017-09-26 17:25:32 UTC
(In reply to CAI Qian from comment #23)
> > > so by default
> > > CONTAINER_ROOT_LV_NAME would be some generated name (similar to an
> > > auto-generated docker thin pool name) and CONTAINER_ROOT_LV_PATH would set
> > > default to /var/lib/docker.
> > 
> > We explicitly stayed away from that because we wanted to make this tool
> > generic so that other container runtimes like crio could make use of it.
> > Unfortunately we have decided not to make use of this for crio for now. But
> > still, design theme is that this is "container-storage-setup" and not
> > "docker-storage-setup". So can't assume ROOT_LV_PATH to "/var/lib/docker".
> With 100% use-case now for docker, It does sound more efficient to set
> default to /var/lib/docker.

No. It took us a long time to move away from hard coding. Hard coding is never a good idea. You never know if crio will come back to using container-storage-setup.

So please fix the documentation and ask users to read it.

Comment 25 Qian Cai 2017-09-26 17:58:33 UTC
(In reply to Vivek Goyal from comment #24)
> (In reply to CAI Qian from comment #23)
> > > > so by default
> > > > CONTAINER_ROOT_LV_NAME would be some generated name (similar to an
> > > > auto-generated docker thin pool name) and CONTAINER_ROOT_LV_PATH would set
> > > > default to /var/lib/docker.
> > > 
> > > We explicitly stayed away from that because we wanted to make this tool
> > > generic so that other container runtimes like crio could make use of it.
> > > Unfortunately we have decided not to make use of this for crio for now. But
> > > still, design theme is that this is "container-storage-setup" and not
> > > "docker-storage-setup". So can't assume ROOT_LV_PATH to "/var/lib/docker".
> > With 100% use-case now for docker, It does sound more efficient to set
> > default to /var/lib/docker.
> 
> No. We took a long time to move away from hard coding. Hard coding is never
> a good idea. You never know if crio comes back to using
> container-storage-setup.
This is not the first time there has been hard coding in c-s-s. Consider the devicemapper cases:

_DOCKER_ROOT_LV_NAME="docker-root-lv"
_DOCKER_ROOT_DIR="/var/lib/docker"
_DOCKER_METADATA_DIR="/var/lib/docker"
DOCKER_ROOT_VOLUME_SIZE=40%FREE

It is all about consistency. If you are not going to hard-code for overlayfs, it is a good idea to remove all that hard coding for devicemapper as well.

Comment 26 Vivek Goyal 2017-09-26 18:20:33 UTC
(In reply to CAI Qian from comment #25)
> (In reply to Vivek Goyal from comment #24)
> > (In reply to CAI Qian from comment #23)
> > > > > so by default
> > > > > CONTAINER_ROOT_LV_NAME would be some generated name (similar to an
> > > > > auto-generated docker thin pool name) and CONTAINER_ROOT_LV_PATH would set
> > > > > default to /var/lib/docker.
> > > > 
> > > > We explicitly stayed away from that because we wanted to make this tool
> > > > generic so that other container runtimes like crio could make use of it.
> > > > Unfortunately we have decided not to make use of this for crio for now. But
> > > > still, design theme is that this is "container-storage-setup" and not
> > > > "docker-storage-setup". So can't assume ROOT_LV_PATH to "/var/lib/docker".
> > > With 100% use-case now for docker, It does sound more efficient to set
> > > default to /var/lib/docker.
> > 
> > No. We took a long time to move away from hard coding. Hard coding is never
> > a good idea. You never know if crio comes back to using
> > container-storage-setup.
> This is not the first time there are hard coding in c-s-s. Considering
> devicemapper cases,
> 
> _DOCKER_ROOT_LV_NAME="docker-root-lv"
> _DOCKER_ROOT_DIR="/var/lib/docker"
> _DOCKER_METADATA_DIR="/var/lib/docker"
> DOCKER_ROOT_VOLUME_SIZE=40%FREE
> 
> It is all about consistent. If you are not going to hard-code for overlayfs,
> it is a good idea to remove all those hard-code for devicemapper as well.

We have deprecated the usage of DOCKER_ROOT_VOLUME and DOCKER_ROOT_VOLUME_SIZE, primarily because of the hardcoding; refer to the man page. The only remaining bits are there because we introduced them once and could not get rid of them.

It's all about writing software that makes sense, and avoiding hard coding as much as possible. That makes the software generic and usable in many more situations.

Just because a user does not want to read the documentation and provide the right config option does not mean we should start hard coding.

Comment 27 Qian Cai 2017-09-26 18:34:42 UTC
(In reply to Vivek Goyal from comment #26)
> (In reply to CAI Qian from comment #25)
> > (In reply to Vivek Goyal from comment #24)
> > > (In reply to CAI Qian from comment #23)
> > > > > > so by default
> > > > > > CONTAINER_ROOT_LV_NAME would be some generated name (similar to an
> > > > > > auto-generated docker thin pool name) and CONTAINER_ROOT_LV_PATH would set
> > > > > > default to /var/lib/docker.
> > > > > 
> > > > > We explicitly stayed away from that because we wanted to make this tool
> > > > > generic so that other container runtimes like crio could make use of it.
> > > > > Unfortunately we have decided not to make use of this for crio for now. But
> > > > > still, design theme is that this is "container-storage-setup" and not
> > > > > "docker-storage-setup". So can't assume ROOT_LV_PATH to "/var/lib/docker".
> > > > With 100% use-case now for docker, It does sound more efficient to set
> > > > default to /var/lib/docker.
> > > 
> > > No. We took a long time to move away from hard coding. Hard coding is never
> > > a good idea. You never know if crio comes back to using
> > > container-storage-setup.
> > This is not the first time there are hard coding in c-s-s. Considering
> > devicemapper cases,
> > 
> > _DOCKER_ROOT_LV_NAME="docker-root-lv"
> > _DOCKER_ROOT_DIR="/var/lib/docker"
> > _DOCKER_METADATA_DIR="/var/lib/docker"
> > DOCKER_ROOT_VOLUME_SIZE=40%FREE
> > 
> > It is all about consistent. If you are not going to hard-code for overlayfs,
> > it is a good idea to remove all those hard-code for devicemapper as well.
> 
> We have deprecated usage of DOCKER_ROOT_VOLUME and DOCKER_ROOT_VOLUME_SIZE.
> Primarily because of hardcoding. Refer to man page. Only remaining bits are
> there because we had introduced them once and could not get rid of them.
Actually, the _DOCKER_ROOT_LV_NAME and _DOCKER_METADATA_DIR hard-coding has nothing to do with the deprecated options. Also, we do have a compact mode in c-s-s, so we have already dealt with docker-specific defaults in compact mode for devicemapper:

# Check subcommands
case $# in
  0)
    CONTAINER_THINPOOL=docker-pool
    _DOCKER_COMPAT_MODE=1

> 
> Its all about writing software which makes sense. And avoid hard coding as
> much as possible. This makes software generic and usable in many more
> situations.
> 
> Just because a user does not want to read documentation and does not want to
> provide the right config option, does not mean we start hard coding.

Comment 28 Vivek Goyal 2017-09-26 18:42:21 UTC
I could answer this, but now I am really tired of continuing this conversation.

Please fix the documentation: if the user wants xfs on a separate volume, they should use the new CONTAINER* options. Otherwise, by default, no separate volume will be set up.

