Bug 2123460 - Podman skips mounting one of the volumes silently inside the container while creating container
Summary: Podman skips mounting one of the volumes silently inside the container while creating container
Keywords:
Status: CLOSED DUPLICATE of bug 2124952
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: podman
Version: 8.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Giuseppe Scrivano
QA Contact: atomic-bugs@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-09-01 18:05 UTC by mangesh.panche
Modified: 2023-09-19 04:25 UTC
CC List: 12 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-09-14 13:00:21 UTC
Type: Bug
Target Upstream Version:
Embargoed:
pm-rhel: mirror+


Attachments: (none)


Links:
Red Hat Issue Tracker RHELPLAN-133054 (Private: 0, Priority: None, Status: None, Summary: None, Last Updated: 2022-09-01 18:14:20 UTC)

Description mangesh.panche 2022-09-01 18:05:44 UTC
Description of problem:
Podman silently skips mounting one of the volumes inside the container while creating the container.

Version-Release number of selected component (if applicable):
RHEL 8.4
podman-4.0.2-6.module+el8.6.0+14877+f643d2d6.x86_64


How reproducible:
Intermittent


Steps to Reproduce:
1. Create a container with multiple bind mount volumes 

Actual results:
Container starts successfully but one of the volumes is not mounted inside the container

Expected results:
All volumes are mounted successfully
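
For reference, a minimal skeleton of this step with plain podman is sketched below; the volume, container and image names are hypothetical, and the real setup uses a compose file plus third-party volume drivers (shown under Additional info), so this is only an outline of the pattern, not an exact reproducer.

podman volume create vol-a
podman volume create vol-b
podman run -d --name multivol-test \
    -v vol-a:/mnt/a \
    -v vol-b:/mnt/b \
    -v /etc/os-release:/mnt/os-release:ro \
    registry.access.redhat.com/ubi8/ubi sleep infinity

# Every destination requested above should appear as a mount inside the container:
podman exec multivol-test df /mnt/a /mnt/b /mnt/os-release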

Additional info:
Compose file contents (excerpt):
version: "2.2"
services:
    intHO_17-0:
.
.
.
        volumes:
            - vntHO_logs:/mnt/nblogs
            - vntHO_msdpcat:/mnt/msdpcat
            - vntHO_msdpdata-0:/mnt/msdp/vol0            <--- volume present in the config but missing inside the container
            - vntHO_msdpdata-1:/mnt/msdp/vol1
            - vntHO_vpfs_shares:/mnt/vpfs_shares:Z
            - /etc/vxos-release:/etc/flex-release:ro
            - /opt/veritas/appliance/appha/status-codes:/opt/veritas/vxplat-manage/health-include:ro
            - /opt/veritas/flex/tools/instance-vxapp-addons.sh:/opt/veritas/vxplat-manage/add-ons:ro
            - /opt/VRTSperl:/opt/VRTSperl:ro
            - /mnt/data/apps/.lockdown-mode.conf:/opt/veritas/vxplat-manage/lockdown-mode.conf:z,ro
            - /usr/lib64/vxfsmisc.so:/usr/lib64/vxfsmisc.so:ro
            - /mnt/worm:/opt/veritas/vxplat-manage/clock:ro
.
.
. 

Podman volume ls shows the volume

# podman volume ls 
DRIVER      VOLUME NAME
local       208ec26cfe43b7934e78961b2660dac49e4f136961bd79247e0ae029c77a7290
local       4225861e76dd8df8f39b1149489634fcec0737875e90b6b7fddfe6d92031e4ad
local       b5337900019d1cb11c67df8b6fa8c8012c6b400ca28ca7f74f12fbf22344c6da
veritas     etcd_data
veritas     metrics_data
veritas     vUDqq_advdisk-0
filevol     vUDqq_config
veritas     vUDqq_logs
veritas     vUDqq_msdpdata-0          <----
filevol     vUDqq_nfs_shares
veritas     vXIwW_catalog
veritas     vXIwW_logs
veritas     vntHO_logs
veritas     vntHO_msdpcat
veritas     vntHO_msdpdata-0
veritas     vntHO_msdpdata-1
filevol     vntHO_vpfs_shares


# docker inspect intHO_17-0  | jq .[].Mounts
[
  {
    "Type": "volume",
    "Name": "vntHO_logs",
    "Source": "/mnt/containers/vntHO_logs_vxosdg",
    "Destination": "/mnt/nblogs",
    "Driver": "veritas",
    "Mode": "",
    "Options": [
      "nosuid",
      "nodev",
      "rbind"
    ],
    "RW": true,
    "Propagation": "rprivate"
  },
  {
    "Type": "volume",
    "Name": "vntHO_msdpdata-0",
    "Source": "/mnt/containers/vntHO_msdpdata-0_vxosdg",
    "Destination": "/mnt/msdp/vol0",
    "Driver": "veritas",
    "Mode": "",
    "Options": [
      "nosuid",
      "nodev",
      "rbind"
    ],
    "RW": true,
    "Propagation": "rprivate"
  },
  {
    "Type": "volume",
    "Name": "b5337900019d1cb11c67df8b6fa8c8012c6b400ca28ca7f74f12fbf22344c6da",
    "Source": "/var/lib/containers/storage/volumes/b5337900019d1cb11c67df8b6fa8c8012c6b400ca28ca7f74f12fbf22344c6da/_data",
    "Destination": "/opt/veritas/vxapp-manage",
    "Driver": "local",
    "Mode": "z",
    "Options": [
      "nosuid",
      "nodev",
      "rbind"
    ],
    "RW": false,
    "Propagation": "rprivate"
  },
  {
    "Type": "volume",
    "Name": "vntHO_msdpcat",
    "Source": "/mnt/containers/vntHO_msdpcat_vxosdg",
    "Destination": "/mnt/msdpcat",
    "Driver": "veritas",
    "Mode": "",
    "Options": [
      "nosuid",
      "nodev",
      "rbind"
    ],
    "RW": true,
    "Propagation": "rprivate"
  },
  {
    "Type": "volume",
    "Name": "vntHO_msdpdata-1",
    "Source": "/mnt/containers/vntHO_msdpdata-1_vxosdg",
    "Destination": "/mnt/msdp/vol1",
    "Driver": "veritas",
    "Mode": "",
    "Options": [
      "nosuid",
      "nodev",
      "rbind"
    ],
    "RW": true,
    "Propagation": "rprivate"
  },
  {
    "Type": "volume",
    "Name": "vntHO_vpfs_shares",
    "Source": "/mnt/containers/vntHO_vpfs_shares",
    "Destination": "/mnt/vpfs_shares",
    "Driver": "filevol",
    "Mode": "Z",
    "Options": [
      "nosuid",
      "nodev",
      "rbind"
    ],
    "RW": true,
    "Propagation": "rprivate"
  },
  {
    "Type": "bind",
    "Source": "/etc/vxos-release",
    "Destination": "/etc/flex-release",
    "Driver": "",
    "Mode": "",
    "Options": [
      "rbind"
    ],
    "RW": false,
    "Propagation": "rprivate"
  },
  {
    "Type": "bind",
    "Source": "/mnt/worm",
    "Destination": "/opt/veritas/vxplat-manage/clock",
    "Driver": "",
    "Mode": "",
    "Options": [
      "rbind"
    ],
    "RW": false,
    "Propagation": "rprivate"
  },
  {
    "Type": "bind",
    "Source": "/opt/veritas/flex/tools/instance-vxapp-addons.sh",
    "Destination": "/opt/veritas/vxplat-manage/add-ons",
    "Driver": "",
    "Mode": "",
    "Options": [
      "rbind"
    ],
    "RW": false,
    "Propagation": "rprivate"
  },
  {
    "Type": "bind",
    "Source": "/opt/VRTSperl",
    "Destination": "/opt/VRTSperl",
    "Driver": "",
    "Mode": "",
    "Options": [
      "rbind"
    ],
    "RW": false,
    "Propagation": "rprivate"
  },
  {
    "Type": "bind",
    "Source": "/usr/lib64/vxfsmisc.so",
    "Destination": "/usr/lib64/vxfsmisc.so",
    "Driver": "",
    "Mode": "",
    "Options": [
      "rbind"
    ],
    "RW": false,
    "Propagation": "rprivate"
  },
  {
    "Type": "bind",
    "Source": "/opt/veritas/appliance/appha/status-codes",
    "Destination": "/opt/veritas/vxplat-manage/health-include",
    "Driver": "",
    "Mode": "",
    "Options": [
      "rbind"
    ],
    "RW": false,
    "Propagation": "rprivate"
  },
  {
    "Type": "bind",
    "Source": "/mnt/data/apps/.lockdown-mode.conf",
    "Destination": "/opt/veritas/vxplat-manage/lockdown-mode.conf",
    "Driver": "",
    "Mode": "",
    "Options": [
      "rbind"
    ],
    "RW": false,
    "Propagation": "rprivate"
  }
]


Podman logs at that time:
Aug 31 02:18:55 eagappnbu466 podman[35397]: @ - - [31/Aug/2022:02:18:55 +0000] "GET /v1.40/networks/bond0 HTTP/1.1" 200 867 "" "docker-compose/1.29.2 docker-py/5.0.0 Linux/4.18.0-305.49.1.el8_4.x86_64"
Aug 31 02:18:55 eagappnbu466 podman[35397]: @ - - [31/Aug/2022:02:18:55 +0000] "GET /v1.40/volumes/vntHO_logs HTTP/1.1" 200 280 "" "docker-compose/1.29.2 docker-py/5.0.0 Linux/4.18.0-305.49.1.el8_4.x86_64"
Aug 31 02:18:55 eagappnbu466 podman[35397]: @ - - [31/Aug/2022:02:18:55 +0000] "GET /v1.40/volumes/vntHO_msdpcat HTTP/1.1" 200 281 "" "docker-compose/1.29.2 docker-py/5.0.0 Linux/4.18.0-305.49.1.el8_4.x86_64"
Aug 31 02:18:55 eagappnbu466 podman[35397]: @ - - [31/Aug/2022:02:18:55 +0000] "GET /v1.40/volumes/vntHO_msdpdata-0 HTTP/1.1" 200 333 "" "docker-compose/1.29.2 docker-py/5.0.0 Linux/4.18.0-305.49.1.el8_4.x86_64"
Aug 31 02:18:55 eagappnbu466 podman[35397]: @ - - [31/Aug/2022:02:18:55 +0000] "GET /v1.40/volumes/vntHO_msdpdata-1 HTTP/1.1" 200 294 "" "docker-compose/1.29.2 docker-py/5.0.0 Linux/4.18.0-305.49.1.el8_4.x86_64"
Aug 31 02:18:55 eagappnbu466 podman[35397]: @ - - [31/Aug/2022:02:18:55 +0000] "GET /v1.40/volumes/vntHO_vpfs_shares HTTP/1.1" 200 156 "" "docker-compose/1.29.2 docker-py/5.0.0 Linux/4.18.0-305.49.1.el8_4.x86_64"
Aug 31 02:18:55 eagappnbu466 podman[35397]: time="2022-08-31T02:18:55Z" level=info msg="About to copy up into volume b5337900019d1cb11c67df8b6fa8c8012c6b400ca28ca7f74f12fbf22344c6da"
Aug 31 02:18:55 eagappnbu466 podman[35397]: @ - - [31/Aug/2022:02:18:55 +0000] "POST /v1.40/containers/create?name=ntHO_tool HTTP/1.1" 201 88 "" "docker-compose/1.29.2 docker-py/5.0.0 Linux/4.18.0-305.49.1.el8_4.x86_64"
Aug 31 02:18:55 eagappnbu466 podman[35397]: @ - - [31/Aug/2022:02:18:55 +0000] "GET /v1.40/containers/1ddd6a87791c30d60080c3404ee08068e6c7bc71bfa9c65ed52ca0d5e2ee14f8/json HTTP/1.1" 200 4831 "" "docker-compose/1.29.2 docker-py/5.0.0 Linux/4.18.0-305.49.1.el8_4.x86_64"
Aug 31 02:18:55 eagappnbu466 podman[35397]: time="2022-08-31T02:18:55Z" level=info msg="Mounting volume vntHO_logs using plugin veritas for container 2f73349cfc4630255319c6c8dfc1b46a8996ace9d14d8e07563b165915918ec2"
Aug 31 02:18:56 eagappnbu466 podman[35397]: time="2022-08-31T02:18:56Z" level=info msg="Mounting volume vntHO_msdpcat using plugin veritas for container 2f73349cfc4630255319c6c8dfc1b46a8996ace9d14d8e07563b165915918ec2"
Aug 31 02:18:57 eagappnbu466 podman[35397]: time="2022-08-31T02:18:57Z" level=info msg="Mounting volume vntHO_msdpdata-1 using plugin veritas for container 2f73349cfc4630255319c6c8dfc1b46a8996ace9d14d8e07563b165915918ec2" 
Aug 31 02:18:57 eagappnbu466 podman[35397]: time="2022-08-31T02:18:57Z" level=info msg="Mounting volume vntHO_vpfs_shares using plugin filevol for container 2f73349cfc4630255319c6c8dfc1b46a8996ace9d14d8e07563b165915918ec2"
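
The excerpt above has "Mounting volume ..." lines for vntHO_logs, vntHO_msdpcat, vntHO_msdpdata-1 and vntHO_vpfs_shares, but none for vntHO_msdpdata-0, which matches the missing mount. A hedged way to pull the same evidence from the journal on an affected host (assumes podman logs to the journal as in the excerpt; the time window is taken from it and should be adjusted as needed):

# List which volumes podman reported mounting in the relevant window,
# then compare against the volumes section of the compose file.
journalctl --since "2022-08-31 02:18:00" --until "2022-08-31 02:19:00" | grep "Mounting volume"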

Comment 2 Matthew Heon 2022-09-01 19:19:26 UTC
Is Docker Compose or Podman Compose in use? From the REST API logs, I suspect it is Docker Compose.

Comment 3 mangesh.panche 2022-09-01 19:40:27 UTC
Hi Matthew,

Yes, Docker Compose is used in this case.

However, the podman inspect output provided above shows all the volumes (including vntHO_msdpdata-0), yet that volume is not mounted inside the container. The podman log also does not show it being mounted.

# docker exec -it intHO_17-0 bash
engine : df
Filesystem                                                1K-blocks     Used   Available Use% Mounted on
overlay                                                    10485760     1628    10484132   1% /
tmpfs                                                     131703508      444   131703064   1% /run
tmpfs                                                         65536        0       65536   0% /dev
tmpfs                                                     131703508       12   131703496   1% /tmp
tmpfs                                                     131703508   569052   131134456   1% /dev/log
tmpfs                                                     131703508        0   131703508   0% /run/lock
/dev/mapper/system-thunder_2.2--20220825090426_630c1f50    10255636  5648744     4066220  59% /opt/VRTSperl
/dev/vx/dsk/vxosdg/vntHO_logs                             262144000    85512   260012176   1% /mnt/nblogs
shm                                                           64000        0       64000   0% /dev/shm
/dev/vx/dsk/vxosdg/vntHO_msdpcat                         5368709120   502600  5326268368   1% /mnt/msdpcat
/dev/loop2                                                   999320     2564      927944   1% /mnt/vpfs_shares
/dev/vx/dsk/vxosdg/vntHO_msdpdata-1                     53159056384  1957656 52888663280   1% /mnt/msdp/vol1
/dev/mapper/system-containers                             153524992 13665712   139859280   9% /opt/veritas/vxapp-manage
tmpfs                                                     131703508        0   131703508   0% /var/log/journal
tmpfs                                                     131703508        0   131703508   0% /sys/fs/cgroup
/dev/vx/dsk/vxosdg/data                                    41943040  3416268    36119817   9% /opt/veritas/vxplat-manage/lockdown-mode.conf
/dev/vx/dsk/vxosdg/worm                                     1048576    18776      965445   2% /opt/veritas/vxplat-manage/clock
tmpfs                                                     131703508        0   131703508   0% /proc/acpi
tmpfs                                                     131703508        0   131703508   0% /proc/scsi
tmpfs                                                     131703508        0   131703508   0% /sys/firmware
tmpfs                                                     131703508        0   131703508   0% /sys/fs/selinux
tmpfs                                                      26340704        0    26340704   0% /run/user/0
tmpfs                                                      26340704        0    26340704   0% /run/user/3000
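
The df output above shows that /mnt/msdp/vol0 (the destination of vntHO_msdpdata-0) is the only configured destination that is missing. A hedged, generic way to make that comparison straight from the inspect data rather than by eye (assumes jq is installed on the host and that /proc/mounts is readable inside the container):

# For every destination configured on the container, check whether it is
# actually mounted inside the container and report the ones that are not.
for dst in $(podman inspect intHO_17-0 | jq -r '.[].Mounts[].Destination'); do
    podman exec intHO_17-0 grep -qsF " $dst " /proc/mounts || echo "configured but not mounted: $dst"
done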

Comment 4 Giuseppe Scrivano 2022-09-02 08:06:44 UTC
I am not able to reproduce here using the "local" driver.

Could you please provide a reproducer?

Could the volume be getting unmounted on the host while the container runs, with the unmount simply propagating into the container?  Is the veritas driver using a FUSE file system?

Comment 5 mangesh.panche 2022-09-02 21:40:32 UTC
This looks like a side effect of bug 2080458.

During the previous container stop, the unmount for volume vUDqq_msdpdata-0 timed out in Podman, but the veritas driver successfully unmounted the volume.


Aug 31 02:17:08 eagappnbu466 podman[35397]: time="2022-08-31T02:17:08Z" level=info msg="Unmounting volume vntHO_logs using plugin veritas for container 2f73349cfc4630255319c6c8dfc1b46a8996ace9d14d8e07563b165915918ec2"
Aug 31 02:17:08 eagappnbu466 podman[35397]: time="2022-08-31T02:17:08Z" level=info msg="Unmounting volume vntHO_vpfs_shares using plugin filevol for container 2f73349cfc4630255319c6c8dfc1b46a8996ace9d14d8e07563b165915918ec2"
Aug 31 02:17:08 eagappnbu466 podman[35397]: time="2022-08-31T02:17:08Z" level=info msg="Unmounting volume vntHO_msdpcat using plugin veritas for container 2f73349cfc4630255319c6c8dfc1b46a8996ace9d14d8e07563b165915918ec2"
Aug 31 02:17:09 eagappnbu466 podman[35397]: time="2022-08-31T02:17:09Z" level=info msg="Unmounting volume vntHO_msdpdata-0 using plugin veritas for container 2f73349cfc4630255319c6c8dfc1b46a8996ace9d14d8e07563b165915918ec2"
Aug 31 02:17:14 eagappnbu466 podman[35397]: time="2022-08-31T02:17:14Z" level=info msg="Request Failed(Internal Server Error): error unmounting container a257b49ed699bd61804c435da92b59bc2572057f54d556f2579d6a584da7c646 storage: error unmounting volume vntHO_msdpdata-0 for container a257b49ed699bd61804c435da92b59bc2572057f54d556f2579d6a584da7c646: error sending request to volume plugin veritas endpoint /VolumeDriver.Unmount: Post \"http://plugin/VolumeDriver.Unmount\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"

As a result, MountCount did not get decremented and stayed at 1, even though the volume had actually been unmounted.

# docker volume inspect vUDqq_msdpdata-0
[
     {
          "Name": "vUDqq_msdpdata-0",
          "Driver": "veritas",
          "Mountpoint": "/mnt/containers/vUDqq_msdpdata-0_vxosdg",
          "CreatedAt": "2022-08-30T08:36:37.307331782Z",
          "Labels": {},
          "Scope": "local",
          "Options": {
               "context": "system_u:object_r:container_file_t:s0:c527",
               "extended_attr": " B2098764002A44D140FAF8EBF3",
               "extended_fs_attr": "bsize=8192",
               "size": "10737418240KB"
          },
          "MountCount": 1,
          "NeedsCopyUp": true,
          "NeedsChown": true
     }
]
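
A hedged way to confirm on a live host that the counter is stale, reusing the Mountpoint path from the inspect output above (any plugin volume's mountpoint would do; this one is specific to this report):

# MountCount claims the volume is mounted once...
docker volume inspect vUDqq_msdpdata-0 | jq '.[0].MountCount'
# ...but the host mountpoint is not actually mounted, so the counter is stale.
findmnt /mnt/containers/vUDqq_msdpdata-0_vxosdg || echo "not mounted on the host"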

So, during the next container start, the volume did not get mounted, since MountCount > 0 (see the code below):

func (v *Volume) mount() error {
	if !v.needsMount() {
		return nil
	}

	// Update the volume from the DB to get an accurate mount counter.
	if err := v.update(); err != nil {
		return err
	}

	// If the count is non-zero, the volume is already mounted.
	// Nothing to do.
	if v.state.MountCount > 0 { // the stale count of 1 takes this branch, so the real mount is skipped
		v.state.MountCount++
		logrus.Debugf("Volume %s mount count now at %d", v.Name(), v.state.MountCount)
		return v.save()
	}


Bug 2080458 should be prioritized. We are seeing more and more repercussions of that bug, and it should be addressed as soon as possible.

Can we get the fix for it on RHEL 8.6?

Comment 6 Tom Sweeney 2022-09-02 21:43:45 UTC
@gscrivan Please see the latest comments here.  @dornelas, heads up: this looks to be a potential backport/zero-day.

Comment 7 Tom Sweeney 2022-09-06 21:07:32 UTC
@mangesh.panche Given we have several BZs for the config change, can we close this as resolved through those BZs?  The RHEL 8.6 BZ is https://bugzilla.redhat.com/show_bug.cgi?id=2080458, the one for a potential RHEL 8.7 zero day is https://bugzilla.redhat.com/show_bug.cgi?id=2080458, and the one for a potential 9.1 zero day is https://bugzilla.redhat.com/show_bug.cgi?id=2124676

Comment 8 mangesh.panche 2022-09-06 23:37:45 UTC
@tsweeney The target release for 2080458 is set to 8.8. We are looking for the fix on 8.6.

Comment 9 Tom Sweeney 2022-09-07 14:33:34 UTC
@mangesh.panche For 2080458, I've removed the target for that one; it's still 8.6.  I just realized that I had neglected to make an 8.8 variant of the BZ so we could zero-day it for 8.7.  I've done so with https://bugzilla.redhat.com/show_bug.cgi?id=2124952.

Given that, are you OK with closing this BZ now?

Comment 10 mangesh.panche 2022-09-14 02:23:40 UTC
I am fine with closing this; so far the issue is being addressed through 2080458.

Comment 11 Tom Sweeney 2022-09-14 13:00:21 UTC
Closing as a duplicate of 2124952, as the fix for that also fixes this.

*** This bug has been marked as a duplicate of bug 2124952 ***

Comment 12 Red Hat Bugzilla 2023-09-19 04:25:42 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days

