Bug 2093079 - Podman does not detect volume from the volume plugin, unlike docker
Summary: Podman does not detect volume from the volume plugin, unlike docker
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: podman
Version: 8.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Jindrich Novy
QA Contact: Joy Pu
URL:
Whiteboard:
Depends On:
Blocks: 2109295
 
Reported: 2022-06-02 21:21 UTC by mangesh.panche
Modified: 2022-11-08 09:33 UTC
CC List: 15 users

Fixed In Version: podman-4.1.1-6.el8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 2109295
Environment:
Last Closed: 2022-11-08 09:15:47 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments: none


Links
GitHub containers/podman pull 14713 (Merged): add podman volume reload to sync volume plugins (last updated 2022-06-29 09:47:41 UTC)
GitHub containers/podman pull 14860 (Merged): [v4.1.1-rhel] Backport volume reload (last updated 2022-07-08 14:19:07 UTC)
Red Hat Issue Tracker RHELPLAN-124140 (last updated 2022-06-02 21:34:47 UTC)
Red Hat Product Errata RHSA-2022:7457 (last updated 2022-11-08 09:16:27 UTC)

Description mangesh.panche 2022-06-02 21:21:47 UTC
Description of problem:
If a volume is created out of band in the volume plugin, it is not detected by Podman, unlike Docker. This affects deployment of Podman in a clustered environment: a volume created on shared storage from one node is not detected by Podman running on another node.

Version-Release number of selected component (if applicable):
Version:      4.0.2

How reproducible:
Easily reproducible

Steps to Reproduce:
1. Create a volume using a custom volume driver, on shared storage, on one node of a two-node cluster.
2. Podman running on the other node, using the same custom volume driver, does not detect the volume.

Actual results:
Podman detects only volumes created locally using podman volume create.

Expected results:
Podman should detect all the volumes seen by the volume plugin driver.

Additional info:
Docker supports this configuration, wherein volumes available in the plugin are detected and listed. This is a must-have requirement for deployment in clustered environments. Not supporting this use case would make migration from Docker to Podman difficult in those deployments.

The presence of the same volume name in two different plugins can be treated as an invalid configuration or a corner case, and the behavior can be defined and handled accordingly. Even in Docker's case the behavior is nondeterministic if two drivers report the same volume.

The following options can be considered:

- If the same volume is reported by two plugins, removal of the volume has to be done by specifying the driver.
- A mechanism to discover the volumes from the plugin, e.g. podman volume refresh/reload.
- An option to bypass the database for a plugin, so that volume create/delete for that plugin is sent to the driver directly.

Comment 1 Tom Sweeney 2022-06-03 21:00:19 UTC
Matt, can you take a look at this please?

Comment 3 Daniel Walsh 2022-06-06 16:47:42 UTC
Can't you configure in containers.conf the plugin that is used for this volume?  Docker watches a directory for sockets and then `discovers` them while Podman requires you to define them in containers.conf, I believe.

Why is looking in a particular directory for sockets coming and going better than having a configuration file that lists where the sockets are?
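
For reference, a volume plugin is registered in containers.conf roughly like this (a minimal sketch; the plugin name and socket path are illustrative, not taken from this bug):

[engine.volume_plugins]
# Maps a plugin name to the Unix socket Podman contacts for that plugin.
# "myplugin" and the path below are hypothetical examples.
myplugin = "/run/docker/plugins/myplugin.sock"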

Comment 4 mangesh.panche 2022-06-07 18:16:54 UTC
Hi @Daniel Walsh,

The plugin is configured in containers.conf, and that is how volumes are created using the plugin on the first node. On the second node, even though containers.conf has the plugin info, Podman does not detect the volumes from the plugin.

Comment 5 Daniel Walsh 2022-06-07 21:18:06 UTC
Ok that would be a bug.

Comment 6 mangesh.panche 2022-06-17 18:29:03 UTC
@dwalsh Is there any update on this?

Comment 7 Derrick Ornelas 2022-06-17 22:16:00 UTC
We're looking at how to prioritize this work against other ongoing work and priorities. We should be able to provide further details in the next one to two weeks. Thanks for your patience.

Comment 8 Brent Baude 2022-06-23 15:51:19 UTC
@mangesh, we would like to schedule a quick talk with you to discuss our proposal on how to deal with this bugzilla. Would you be willing to have this conversation with us? If so, we can email each other directly to get it set up.

Comment 9 mangesh.panche 2022-06-23 17:53:12 UTC
@Brent Sure, we can have a call to discuss the proposals. Please email me at mangesh.panche.

Comment 10 Paul Holzinger 2022-06-27 15:42:39 UTC
I implemented this feature as podman volume reload command. This will sync the volume plugins with the libpod db volumes.
Will this work for you?

Upstream PR: https://github.com/containers/podman/pull/14713
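
For illustration, the intended two-node flow looks roughly like this (a sketch based on the command as described above; the node prompts, plugin name, and volume name are hypothetical):

# On node 1: create a volume on shared storage via the plugin
node1# podman volume create --driver myplugin shared_vol
shared_vol

# On node 2: sync the libpod db with the plugin's state
node2# podman volume reload
Added:
shared_vol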

Comment 11 Brent Baude 2022-06-27 17:38:18 UTC
@mangesh, does this satisfy your requirements?

Comment 12 Tom Sweeney 2022-06-29 00:06:53 UTC
This will be fixed in RHEL 8.7 and 9.1 with Podman v4.2 or later.  Setting to POST and assigning to Jindrich for any further BZ/packaging needs.

Comment 13 mangesh.panche 2022-06-29 17:55:20 UTC
We are on RHEL 8.6 and would like the fix in the upcoming release of 8.6, i.e. 8.6.0.2, which I understand is in August.
What are the timelines for 8.7?

Comment 14 Jindrich Novy 2022-06-30 11:11:51 UTC
8.7 is currently planned to be GA during November.

Comment 16 mangesh.panche 2022-06-30 20:27:28 UTC
Regarding the solution proposed,

Is this operation "podman volume reload" idempotent? 

What's the performance impact of running the operation when 100 volumes are configured
- no new volumes discovered and
- 25 new volumes discovered?

Comment 17 Tom Sweeney 2022-07-01 15:50:43 UTC
@pholzing can you address Mangesh's questions in comment https://bugzilla.redhat.com/show_bug.cgi?id=2093079#c16 please?

Comment 18 Paul Holzinger 2022-07-04 12:23:35 UTC
> Regarding the solution proposed,
> 
> Is this operation "podman volume reload" idempotent? 

Yes, it will sync the local libpod db against the plugins' state, so it will add volumes to the db if new ones
are found in the plugin, but it will also remove volumes from the db if they were deleted from the plugin.
So if the plugin volumes do not change, podman volume reload does nothing and returns successfully.
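
In other words, repeated runs converge on the plugin's state. A hypothetical session (volume name illustrative):

# After shared_vol was deleted from the plugin on another node:
node2# podman volume reload
Removed:
shared_vol
# Running it again with no changes prints nothing and exits 0:
node2# podman volume reload
node2# echo $?
0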


> What's the performance impact of running the operation when 100 volumes are
> configured
> - no new volumes discovered and
> - 25 new volumes discovered?

In all cases we have to get the full list of volumes from the configured plugins and iterate through them.
From a quick test with 100 new volumes, "podman volume reload" took 0.47 seconds on my laptop, and
running it again with no newly discovered volumes took 0.18 seconds.
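
To reproduce such a measurement, one could create the volumes out of band in the plugin (e.g. from another node) and then time the sync; exact numbers will vary by machine and plugin:

node2# time podman volume reload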

Comment 22 mangesh.panche 2022-07-06 18:07:50 UTC
@Paul Thanks for the information. 

We would need the fix in RHEL 8.6 as discussed earlier.

Comment 24 Tom Sweeney 2022-07-07 18:17:12 UTC
@pholzing please do the backport to the v4.1.1-rhel branch.

Comment 25 Paul Holzinger 2022-07-07 19:12:38 UTC
PR: https://github.com/containers/podman/pull/14860

Comment 34 Joy Pu 2022-07-18 16:24:11 UTC
Tested with podman-4.1.1-6.module+el8.7.0+15895+a6753917.x86_64 and it works as expected.

Set up two hosts using one volume plugin (convoy) and used podman volume reload to pick up the volume created on the other host; the volume can be fetched and used as expected. So setting this to VERIFIED. More details:

Create from one host:
[root@kvm-06-guest25 ~]# podman volume create --driver convoy test_othernode
test_othernode

Reload and use it from another host:
[root@ibm-x3250m6-06 podman]# podman volume reload
Added:
test_othernode
[root@ibm-x3250m6-06 podman]# podman run -it -v test_othernode:/test quay.io/libpod/busybox 
Trying to pull quay.io/libpod/busybox:latest...
Getting image source signatures
Copying blob 9758c28807f2 done  
Copying config f0b02e9d09 done  
Writing manifest to image destination
Storing signatures
/ # ls 
bin   dev   etc   home  proc  root  run   sys   test  tmp   usr   var
/ # cd test
/test # ls
/test # echo "hello world" > test
/test # cat test
hello world
/test # exit
[root@ibm-x3250m6-06 podman]# podman rm -fa
602c019f8820148cd5f4106eb2de37a39f93e4de1ef738e8a06a605857d62c09
[root@ibm-x3250m6-06 podman]# podman volume rm test_othernode
test_othernode


Use reload to check that the volume is also deleted on the first host.
[root@kvm-06-guest25 ~]# podman volume reload
Removed:
test_othernode

Comment 40 errata-xmlrpc 2022-11-08 09:15:47 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: container-tools:rhel8 security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:7457

