Bug 1320729 - renaming RAID volumes to previously existing volume names causes confusion when lvresize calls fsadm to reduce
Summary: renaming RAID volumes to previously existing volume names causes confusion wh...
Keywords:
Status: CLOSED NEXTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: lvm2
Version: 6.8
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Zdenek Kabelac
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-03-23 20:49 UTC by Corey Marthaler
Modified: 2017-06-02 09:43 UTC
CC List: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-06-02 09:39:35 UTC
Target Upstream Version:
Embargoed:


Attachments: None


Links:
Red Hat Bugzilla 1253833 (Priority: unspecified, Status: CLOSED): renaming logical volumes to previously existing volume names causes confusion when lvresize calls fsadm (Last Updated: 2021-09-03 12:56:05 UTC)

Internal Links: 1253833

Description Corey Marthaler 2016-03-23 20:49:33 UTC
Description of problem:
This is basically bug 1196910 but attempted with raid volumes.

The resizefs step (fsadm) does the umount and the filesystem reduction even though LVM RAID reduction is not yet supported, so the filesystem ends up smaller than the LV.
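
Condensed, the reproduction boils down to the following sequence (same VG, LV names, and mount points as in the full transcript with output below):

lvcreate --nosync --type raid1 -m 1 -n resizeA -L 400M raid_sanity
lvcreate --nosync --type raid1 -m 1 -n resizeB -L 400M raid_sanity
mkfs /dev/raid_sanity/resizeA && mount /dev/raid_sanity/resizeA /mnt/resizeA
mkfs /dev/raid_sanity/resizeB && mount /dev/raid_sanity/resizeB /mnt/resizeB

# free up the name "resizeA", then reuse it for the other, still mounted, LV
lvrename /dev/raid_sanity/resizeA /dev/raid_sanity/resizeC
lvrename /dev/raid_sanity/resizeB /dev/raid_sanity/resizeA

# fsadm unmounts and shrinks the filesystem, then the RAID reduce itself fails
lvreduce -r -n -f -L 200M /dev/raid_sanity/resizeA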



SCENARIO (raid1) - [raid_fsadm_reduce_after_rename_to_previously_used]
Create raid, add and mount fs, swap (rename) device names, and then attempt to resize it while online

host-118.virt.lab.msp.redhat.com: lvcreate  --nosync --type raid1 -m 1 -n resizeA -L 400M raid_sanity
  WARNING: New raid1 won't be synchronised. Don't read what you didn't write!
Placing an ext on resizeA volume
mkfs /dev/raid_sanity/resizeA
mke2fs 1.41.12 (17-May-2010)
mount /dev/raid_sanity/resizeA /mnt/resizeA

host-118.virt.lab.msp.redhat.com: lvcreate  --nosync --type raid1 -m 1 -n resizeB -L 400M raid_sanity
  WARNING: New raid1 won't be synchronised. Don't read what you didn't write!
Placing an ext on resizeB volume
mkfs /dev/raid_sanity/resizeB
mke2fs 1.41.12 (17-May-2010)
mount /dev/raid_sanity/resizeB /mnt/resizeB

lvrename /dev/raid_sanity/resizeA /dev/raid_sanity/resizeC
lvrename /dev/raid_sanity/resizeB /dev/raid_sanity/resizeA

[root@host-118 ~]# lvs -a -o +devices
  LV                 VG          Attr       LSize    Cpy%Sync Devices
  resizeA            raid_sanity Rwi-aor--- 400.00m  100.00   resizeA_rimage_0(0),resizeA_rimage_1(0)
  [resizeA_rimage_0] raid_sanity iwi-aor--- 400.00m           /dev/sde2(102)
  [resizeA_rimage_1] raid_sanity iwi-aor--- 400.00m           /dev/sde1(102)
  [resizeA_rmeta_0]  raid_sanity ewi-aor---   4.00m           /dev/sde2(101)
  [resizeA_rmeta_1]  raid_sanity ewi-aor---   4.00m           /dev/sde1(101)
  resizeC            raid_sanity Rwi-aor--- 400.00m  100.00   resizeC_rimage_0(0),resizeC_rimage_1(0)
  [resizeC_rimage_0] raid_sanity iwi-aor--- 400.00m           /dev/sde2(1)
  [resizeC_rimage_1] raid_sanity iwi-aor--- 400.00m           /dev/sde1(1)
  [resizeC_rmeta_0]  raid_sanity ewi-aor---   4.00m           /dev/sde2(0)
  [resizeC_rmeta_1]  raid_sanity ewi-aor---   4.00m           /dev/sde1(0)

[root@host-118 ~]# lvreduce -r -n -f -L 200M /dev/raid_sanity/resizeA
Do you want to unmount "/mnt/resizeB"? [Y|n] y
fsck from util-linux-ng 2.17.2

/dev/mapper/raid_sanity-resizeA: 11/102400 files (0.0% non-contiguous), 15246/409600 blocks
resize2fs 1.41.12 (17-May-2010)
Resizing the filesystem on /dev/mapper/raid_sanity-resizeA to 204800 (1k) blocks.
The filesystem on /dev/mapper/raid_sanity-resizeA is now 204800 blocks long.

  Unable to reduce RAID LV - operation not implemented.


# Filesystem is now reported as ~200M, and it's additionally confusing to now see two mounted "raid_sanity-resizeA" filesystems of differing sizes, though the user only has themselves to blame for that part.

[root@host-118 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_host118-lv_root
                      6.5G  3.2G  3.1G  51% /
tmpfs                 499M     0  499M   0% /dev/shm
/dev/vda1             477M   34M  418M   8% /boot
/dev/mapper/raid_sanity-resizeA
                      388M  2.3M  366M   1% /mnt/resizeA
/dev/mapper/raid_sanity-resizeA
                      194M  1.6M  183M   1% /mnt/resizeB
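
# Not part of the test run: one way to tell the two identically named mounts apart is to
# match each mount's major:minor from /proc/self/mountinfo (third field) against the
# current device-mapper names reported by dmsetup, which do reflect the renames:
grep -E '/mnt/resize[AB]' /proc/self/mountinfo
dmsetup ls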

# LVM still reports the 400M size
[root@host-118 ~]# lvs -a -o +devices
  LV                 VG          Attr       LSize    Cpy%Sync Devices
  resizeA            raid_sanity Rwi-aor--- 400.00m  100.00   resizeA_rimage_0(0),resizeA_rimage_1(0)
  [resizeA_rimage_0] raid_sanity iwi-aor--- 400.00m           /dev/sde2(102)
  [resizeA_rimage_1] raid_sanity iwi-aor--- 400.00m           /dev/sde1(102)
  [resizeA_rmeta_0]  raid_sanity ewi-aor---   4.00m           /dev/sde2(101)
  [resizeA_rmeta_1]  raid_sanity ewi-aor---   4.00m           /dev/sde1(101)
  resizeC            raid_sanity Rwi-aor--- 400.00m  100.00   resizeC_rimage_0(0),resizeC_rimage_1(0)
  [resizeC_rimage_0] raid_sanity iwi-aor--- 400.00m           /dev/sde2(1)
  [resizeC_rimage_1] raid_sanity iwi-aor--- 400.00m           /dev/sde1(1)
  [resizeC_rmeta_0]  raid_sanity ewi-aor---   4.00m           /dev/sde2(0)
  [resizeC_rmeta_1]  raid_sanity ewi-aor---   4.00m           /dev/sde1(0)
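
# Also not part of the test, just for completeness: one possible way to bring the shrunken
# filesystem back in line with the still-400M LV is to grow it to the device size again
# (offline, since the ext2 filesystem made by plain mkfs cannot be grown while mounted);
# assumes the run above, with the formerly-resizeB filesystem mounted at /mnt/resizeB:
umount /mnt/resizeB
e2fsck -f /dev/raid_sanity/resizeA       # resize2fs wants a recently checked filesystem
resize2fs /dev/raid_sanity/resizeA       # no size argument -> grow to fill the 400M LV
mount /dev/raid_sanity/resizeA /mnt/resizeB
df -h /mnt/resizeB                       # should again report ~388M, as before the reduce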


Version-Release number of selected component (if applicable):
2.6.32-633.el6.x86_64

lvm2-2.02.143-3.el6    BUILT: Tue Mar 22 09:26:10 CDT 2016
lvm2-libs-2.02.143-3.el6    BUILT: Tue Mar 22 09:26:10 CDT 2016
lvm2-cluster-2.02.143-3.el6    BUILT: Tue Mar 22 09:26:10 CDT 2016
udev-147-2.72.el6    BUILT: Tue Mar  1 06:14:05 CST 2016
device-mapper-1.02.117-3.el6    BUILT: Tue Mar 22 09:26:10 CDT 2016
device-mapper-libs-1.02.117-3.el6    BUILT: Tue Mar 22 09:26:10 CDT 2016
device-mapper-event-1.02.117-3.el6    BUILT: Tue Mar 22 09:26:10 CDT 2016
device-mapper-event-libs-1.02.117-3.el6    BUILT: Tue Mar 22 09:26:10 CDT 2016
device-mapper-persistent-data-0.6.2-0.1.rc7.el6    BUILT: Tue Mar 22 08:58:09 CDT 2016
cmirror-2.02.143-3.el6    BUILT: Tue Mar 22 09:26:10 CDT 2016

