I found a case where the "not changed" number is > 0:
[root@taft-01 ~]# pvscan
PV /dev/sda2 VG vg_taft01 lvm2 [67.75 GiB / 0 free]
PV /dev/sdb1 lvm2 [135.67 GiB]
PV /dev/sdc1 lvm2 [135.67 GiB]
PV /dev/sdd1 lvm2 [135.67 GiB]
PV /dev/sde1 lvm2 [135.67 GiB]
PV /dev/sdf1 lvm2 [135.67 GiB]
PV /dev/sdg1 lvm2 [135.67 GiB]
PV /dev/sdh1 lvm2 [135.67 GiB]
Total: 8 [1017.41 GiB] / in use: 1 [67.75 GiB] / in no VG: 7 [949.66 GiB]
[root@taft-01 ~]# pvchange --deltag 1 --deltag 2 /dev/sdb1
Can't change tag on Physical Volume /dev/sdb1 not in volume group
0 physical volumes changed / 1 physical volume not changed
[root@taft-01 ~]# pvchange --deltag 1 --deltag 2 /dev/sdb1 /dev/sdc1
Can't change tag on Physical Volume /dev/sdb1 not in volume group
Can't change tag on Physical Volume /dev/sdc1 not in volume group
0 physical volumes changed / 2 physical volumes not changed
I'd still argue that this message is worthless. All the PVs on this system were "not changed", regardless of how many were listed on the cmdline.
This is apparently not a technical problem but a question of wording and understandability.
Alasdair, can you please comment on it?
The fix is either a one-liner (if you decide to change it) or close as not-a-bug :-)
Check the exit status too: if any are 'not changed' that indicates an error.
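For scripting, that exit status is the reliable signal rather than parsing the "changed / not changed" summary line. A minimal sketch (the wrapper function name is hypothetical; the pvchange invocation and device paths mirror the transcript above):

```shell
# Hypothetical helper: rely on pvchange's exit status instead of
# parsing its "X changed / Y not changed" summary text.
change_tags() {
    if pvchange --deltag 1 --deltag 2 "$@"; then
        echo "all PVs changed"
    else
        # pvchange exits non-zero when any listed PV was not changed
        echo "some PVs not changed (see messages above)" >&2
        return 1
    fi
}
```

Used as `change_tags /dev/sdb1 /dev/sdc1`, the caller gets a non-zero return whenever any of the listed PVs could not be changed, matching the "not changed > 0" case in the transcript.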
Only the PVs specified on the cmdline are included - i.e. it's talking about "of the PVs you asked me to change, I changed X and didn't change Y".
So the first two cases look right. The 3rd case could maybe be changed, but I'm not sure: Should we include /dev/foobar (which it discovered not to be a PV) in a total number of PVs?
(In reply to comment #4)
> I'm not sure: Should we include /dev/foobar (which it discovered not to be a
> PV) in a total number of PVs?
/dev/foobar is not a PV - calling a nonexistent device a PV would be more confusing than it is today, I think. It's OK as it is. So I'm proposing CLOSED/NOTABUG. Corey, Alasdair?