Bug 1928935 - RFE: Let `podman volume prune` show the volumes that are going to be removed
Summary: RFE: Let `podman volume prune` show the volumes that are going to be removed
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: podman
Version: 8.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: low
Target Milestone: rc
Target Release: ---
Assignee: Jindrich Novy
QA Contact: Joy Pu
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-02-15 20:03 UTC by Joerg K
Modified: 2021-11-09 19:31 UTC
CC List: 10 users

Fixed In Version: podman-3.3.0-0.4.el8 or newer
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-11-09 17:37:05 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments: (none)


Links
GitHub containers/podman issue #7862 (closed): Volume prune command removes used volume (last updated 2021-02-15 20:03:50 UTC)
GitHub containers/podman issue #8913 (closed): RFE: Let `podman volume prune` show the volumes that are going to be removed (last updated 2021-02-15 20:03:50 UTC)
GitHub containers/podman pull #9116 (closed): List volumes before pruning (last updated 2021-02-15 20:03:50 UTC)
Red Hat Product Errata RHSA-2021:4154 (last updated 2021-11-09 17:37:36 UTC)

Description Joerg K 2021-02-15 20:03:50 UTC
Description of problem:

When you run podman volume prune, you have to confirm the operation, but you don't see which volumes will actually be removed once you confirm the command:

~~~
$ podman volume prune
WARNING! This will remove all volumes not used by at least one container.
Are you sure you want to continue? [y/N]
~~~

I was bitten by the bug described in GitHub issue [#7862](https://github.com/containers/podman/issues/7862). This could have been avoided by showing the volumes that are going to be removed before the removal is confirmed: the user would then have a chance to notice that a volume still in use by a running container instance is about to be removed. Possible data loss could be prevented this way.

This feature is already implemented upstream; could it be included in an upcoming build for RHEL 8? What do you think about it?
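
For anyone on a build that predates the fix, the prune candidates can be previewed manually before confirming; a minimal sketch, assuming the installed podman build supports the `dangling` filter of `podman volume ls`:

~~~
# List volumes not referenced by any container, i.e. the prune candidates
# (assumes this podman build supports the `dangling` filter)
podman volume ls --filter dangling=true

# Review the list, then run the prune as usual
podman volume prune
~~~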

Version-Release number of selected component (if applicable):
podman.x86_64 2.0.5-5.module+el8.3.0+8221+97165c3f @rhel-8-for-x86_64-appstream-rpms

How reproducible:
Always

Steps to Reproduce:
1. podman volume prune

Actual results:
WARNING! This will remove all volumes not used by at least one container.
Are you sure you want to continue? [y/N]

Expected results:
WARNING! This will remove all volumes not used by at least one container. The following volumes are going to be removed:
volume1
volume3
...
Are you sure you want to continue? [y/N]

Additional info:

Comment 2 Daniel Walsh 2021-06-11 14:43:18 UTC
Fixed in podman 3.2

Comment 3 Tom Sweeney 2021-06-11 15:31:53 UTC
Setting to POST and assigning to Jindrich for any packaging or BZ needs.

Comment 4 Jindrich Novy 2021-06-15 09:33:35 UTC
https://patch-diff.githubusercontent.com/raw/containers/podman/pull/9116.patch is applied in the current code base.

Can we get a QA ack, please?

Comment 5 Joy Pu 2021-07-12 08:19:47 UTC
Tested with podman-3.3.0-0.21.module+el8.5.0+11747+c7c34607.x86_64 and the command works as expected:
# podman volume prune
WARNING! This will remove all volumes not used by at least one container. The following volumes will be removed:
test
Are you sure you want to continue? [y/N] n
[root@kvm-06-guest34 rpms]# podman volume prune
WARNING! This will remove all volumes not used by at least one container. The following volumes will be removed:
test
Are you sure you want to continue? [y/N] y
test

Comment 9 Joy Pu 2021-08-06 09:11:23 UTC
Tested with podman-3.3.0-2.module+el8.5.0+12136+c1ac9593.x86_64 and it works as expected, so setting this to VERIFIED. Details:
# podman run -dt -v test:/test registry.access.redhat.com/ubi8:latest sleep 999
14b72aa48c6d253b7fc05a165895fedbd90faa28a22758ce531cf53ec7cafbd8
# podman volume create test1
test1
# podman volume ls
DRIVER      VOLUME NAME
local       test
local       test1
# podman volume prune
WARNING! This will remove all volumes not used by at least one container. The following volumes will be removed:
test1
Are you sure you want to continue? [y/N] y
test1
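
For scripted cleanups, the confirmation prompt (and with it the listing) can be skipped; a minimal sketch, assuming the `--force`/`-f` flag of `podman volume prune`:

~~~
# Non-interactive prune: --force skips the confirmation prompt;
# the names of the removed volumes are still printed on completion
podman volume prune --force
~~~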

Comment 11 errata-xmlrpc 2021-11-09 17:37:05 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: container-tools:rhel8 security, bug fix, and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:4154
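
To pick up the fixed packages on a RHEL 8 host, the usual path is a dnf update once the advisory repositories are available; a minimal sketch, assuming podman is installed from the container-tools:rhel8 AppStream module:

~~~
# Update podman to the errata level that contains the fix
dnf update podman

# Verify the installed build is podman-3.3.0-0.4.el8 or newer
rpm -q podman
~~~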

