Bug 1164347 - Failed rename when splitting off raid image - there's still an existing device with the name to which we're trying to rename
Summary: Failed rename when splitting off raid image - there's still an existing device with the name to which we're trying to rename
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Zdenek Kabelac
QA Contact: Cluster QE
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-11-14 17:55 UTC by Corey Marthaler
Modified: 2021-09-03 12:52 UTC
CC List: 9 users

Fixed In Version: lvm2-2.02.115-3.el7
Doc Type: Bug Fix
Doc Text:
Does not require doc text - this was an upstream regression between RHEL releases.
Clone Of:
Environment:
Last Closed: 2015-03-05 13:10:37 UTC
Target Upstream Version:
Embargoed:


Attachments: (none)


Links:
  Red Hat Product Errata RHBA-2015:0513 (normal, SHIPPED_LIVE): lvm2 bug fix and enhancement update - last updated 2015-03-05 16:14:41 UTC

Description Corey Marthaler 2014-11-14 17:55:51 UTC
Description of problem:
Now that image splitting should "work" (bug 1091543), I hit this issue when attempting to split a PV from a cache pool.

./split_image -e sequentially_split_off_all_stacked_cache_pool_raid1_pvs

SCENARIO - [sequentially_split_off_all_stacked_cache_pool_raid1_pvs]
Create raid1 volume with many legs, convert to cache pool, and then sequentially split off each one of the PVs
host-110.virt.lab.msp.redhat.com: lvcreate  -L 500M -n origin split_image /dev/sdb1

host-110.virt.lab.msp.redhat.com: lvcreate  --type raid1 -m 4 -n split_cache -L 500M split_image /dev/sdc1 /dev/sdf1 /dev/sde1 /dev/sdd1 /dev/sda1
host-110.virt.lab.msp.redhat.com: lvcreate  --type raid1 -m 4 -n split_cache_meta -L 8M split_image /dev/sdc1 /dev/sdf1 /dev/sde1 /dev/sdd1 /dev/sda1
Waiting until all mirror|raid volumes become fully syncd...
   1/2 mirror(s) are fully synced: ( 48.89% 100.00% )
   1/2 mirror(s) are fully synced: ( 78.81% 100.00% )
   2/2 mirror(s) are fully synced: ( 100.00% 100.00% )

Create cache pool volume by combining the cache data and cache metadata (fast) volumes
lvconvert --yes --type cache-pool --poolmetadata split_image/split_cache_meta split_image/split_cache
  WARNING: Converting logical volume split_image/split_cache and split_image/split_cache_meta to pool's data and metadata volumes.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
Create cached volume by combining the cache pool (fast) and origin (slow) volumes
lvconvert --yes --type cache --cachepool split_image/split_cache split_image/origin



[root@host-110 ~]# lvs -a -o +devices
 LV                           Attr       LSize   Pool          Origin         Data% Meta% Cpy%Sync Devices
 [lvol0_pmspare]              ewi-------   8.00m                                                   /dev/sda1(129)
 origin                       Cwi-a-C--- 500.00m [split_cache] [origin_corig] 0.06  1.07  0.00     origin_corig(0)
 [origin_corig]               owi-aoC--- 500.00m                                                   /dev/sdb1(0)
 [split_cache]                Cwi---C--- 500.00m                              0.06  1.07  0.00     split_cache_cdata(0)
 [split_cache_cdata]          Cwi-aor--- 500.00m                                          100.00   split_cache_cdata_rimage_0(0),split_cache_cdata_rimage_1(0),split_cache_cdata_rimage_2(0),split_cache_cdata_rimage_3(0),split_cache_cdata_rimage_4(0)
 [split_cache_cdata_rimage_0] iwi-aor--- 500.00m                                                   /dev/sdc1(1)
 [split_cache_cdata_rimage_1] iwi-aor--- 500.00m                                                   /dev/sdf1(1)
 [split_cache_cdata_rimage_2] iwi-aor--- 500.00m                                                   /dev/sde1(1)
 [split_cache_cdata_rimage_3] iwi-aor--- 500.00m                                                   /dev/sdd1(1)
 [split_cache_cdata_rimage_4] iwi-aor--- 500.00m                                                   /dev/sda1(1)
 [split_cache_cdata_rmeta_0]  ewi-aor---   4.00m                                                   /dev/sdc1(0)
 [split_cache_cdata_rmeta_1]  ewi-aor---   4.00m                                                   /dev/sdf1(0)
 [split_cache_cdata_rmeta_2]  ewi-aor---   4.00m                                                   /dev/sde1(0)
 [split_cache_cdata_rmeta_3]  ewi-aor---   4.00m                                                   /dev/sdd1(0)
 [split_cache_cdata_rmeta_4]  ewi-aor---   4.00m                                                   /dev/sda1(0)
 [split_cache_cmeta]          ewi-aor---   8.00m                                          100.00   split_cache_cmeta_rimage_0(0),split_cache_cmeta_rimage_1(0),split_cache_cmeta_rimage_2(0),split_cache_cmeta_rimage_3(0),split_cache_cmeta_rimage_4(0)
 [split_cache_cmeta_rimage_0] iwi-aor---   8.00m                                                   /dev/sdc1(127)
 [split_cache_cmeta_rimage_1] iwi-aor---   8.00m                                                   /dev/sdf1(127)
 [split_cache_cmeta_rimage_2] iwi-aor---   8.00m                                                   /dev/sde1(127)
 [split_cache_cmeta_rimage_3] iwi-aor---   8.00m                                                   /dev/sdd1(127)
 [split_cache_cmeta_rimage_4] iwi-aor---   8.00m                                                   /dev/sda1(127)
 [split_cache_cmeta_rmeta_0]  ewi-aor---   4.00m                                                   /dev/sdc1(126)
 [split_cache_cmeta_rmeta_1]  ewi-aor---   4.00m                                                   /dev/sdf1(126)
 [split_cache_cmeta_rmeta_2]  ewi-aor---   4.00m                                                   /dev/sde1(126)
 [split_cache_cmeta_rmeta_3]  ewi-aor---   4.00m                                                   /dev/sdd1(126)
 [split_cache_cmeta_rmeta_4]  ewi-aor---   4.00m                                                   /dev/sda1(126)

splitting off legs:
         /dev/sdd1

lvconvert --yes --splitmirrors 1 --name new0 split_image/split_cache_cdata /dev/sdd1

couldn't split image
  device-mapper: rename ioctl on split_image-split_cache_cdata_rimage_4 failed: Device or resource busy
  Failed to rename split_image-split_cache_cdata_rimage_4 (253:12) to split_image-split_cache_cdata_rimage_3
  Failed to resume split_image/split_cache_cdata after committing changes
  Releasing activation in critical section.
  libdevmapper exiting with 24 device(s) still suspended.


[root@host-110 ~]# lvs -a -o +devices
[DEADLOCK]

Nov 14 11:40:54 host-110 kernel: INFO: task lvs:25669 blocked for more than 120 seconds.
Nov 14 11:40:54 host-110 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 14 11:40:54 host-110 kernel: lvs             D ffff88003fc14600     0 25669  22467 0x00000080
Nov 14 11:40:54 host-110 kernel: ffff8800172b7a70 0000000000000086 ffff880014306660 ffff8800172b7fd8
Nov 14 11:40:54 host-110 kernel: ffff8800172b7fd8 ffff8800172b7fd8 ffff880014306660 ffff88003fc14ec8
Nov 14 11:40:54 host-110 kernel: ffff880036a6d400 ffff880014306660 0000000000000000 0000000000000000
Nov 14 11:40:54 host-110 kernel: Call Trace:
Nov 14 11:40:54 host-110 kernel: [<ffffffff81608c6d>] io_schedule+0x9d/0x130
Nov 14 11:40:54 host-110 kernel: [<ffffffff81203163>] do_blockdev_direct_IO+0xc03/0x2620
Nov 14 11:40:54 host-110 kernel: [<ffffffff812b9fe0>] ? disk_map_sector_rcu+0x80/0x80
Nov 14 11:40:54 host-110 kernel: [<ffffffff811fed10>] ? I_BDEV+0x10/0x10
Nov 14 11:40:54 host-110 kernel: [<ffffffff81204bd5>] __blockdev_direct_IO+0x55/0x60
Nov 14 11:40:54 host-110 kernel: [<ffffffff811fed10>] ? I_BDEV+0x10/0x10
Nov 14 11:40:54 host-110 kernel: [<ffffffff811ff367>] blkdev_direct_IO+0x57/0x60
Nov 14 11:40:54 host-110 kernel: [<ffffffff811fed10>] ? I_BDEV+0x10/0x10
Nov 14 11:40:54 host-110 kernel: [<ffffffff811581f3>] generic_file_aio_read+0x6d3/0x750
Nov 14 11:40:54 host-110 kernel: [<ffffffff811e569e>] ? mntput_no_expire+0x3e/0x120
Nov 14 11:40:54 host-110 kernel: [<ffffffff811e57a4>] ? mntput+0x24/0x40
Nov 14 11:40:54 host-110 kernel: [<ffffffff811ff8ec>] blkdev_aio_read+0x4c/0x70
Nov 14 11:40:54 host-110 kernel: [<ffffffff811c557d>] do_sync_read+0x8d/0xd0
Nov 14 11:40:54 host-110 kernel: [<ffffffff811c5c5c>] vfs_read+0x9c/0x170
Nov 14 11:40:54 host-110 kernel: [<ffffffff811c6788>] SyS_read+0x58/0xb0
Nov 14 11:40:54 host-110 kernel: [<ffffffff816134e9>] system_call_fastpath+0x16/0x1b

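The hang in 'lvs' above follows from the devices libdevmapper left suspended: the direct-I/O label scan shown in the trace blocks against a suspended device-mapper device. A hedged recovery sketch, not taken from this report, using standard dmsetup commands (device name illustrative):

   dmsetup info split_image-split_cache_cdata_rimage_4    # "State: SUSPENDED" marks a stuck device
   dmsetup resume split_image-split_cache_cdata_rimage_4  # resume it; repeat for each suspended device
   vgchange -an split_image                               # then retry normal deactivation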

Version-Release number of selected component (if applicable):
3.10.0-189.el7.x86_64
lvm2-2.02.112-1.el7    BUILT: Tue Nov 11 09:39:35 CST 2014
lvm2-libs-2.02.112-1.el7    BUILT: Tue Nov 11 09:39:35 CST 2014
lvm2-cluster-2.02.112-1.el7    BUILT: Tue Nov 11 09:39:35 CST 2014
device-mapper-1.02.91-1.el7    BUILT: Tue Nov 11 09:39:35 CST 2014
device-mapper-libs-1.02.91-1.el7    BUILT: Tue Nov 11 09:39:35 CST 2014
device-mapper-event-1.02.91-1.el7    BUILT: Tue Nov 11 09:39:35 CST 2014
device-mapper-event-libs-1.02.91-1.el7    BUILT: Tue Nov 11 09:39:35 CST 2014
device-mapper-persistent-data-0.3.2-1.el7    BUILT: Thu Apr  3 09:58:51 CDT 2014
cmirror-2.02.112-1.el7    BUILT: Tue Nov 11 09:39:35 CST 2014

Comment 2 Zdenek Kabelac 2014-11-26 20:52:23 UTC
Version 2.02.112 is not a good release for cache testing.

Please retest once 113 is available.

It passes on my 3.18.0-0.rc5.git0.2.fc22.x86_64:

# lvconvert --yes --splitmirrors 1 --name new0 vg/split_cache_cdata 

#  lvs -a
  LV                           VG   Attr       LSize  Pool          Origin         Data%  Meta%  Move Log Cpy%Sync Convert
  lvol1                        vg   -wi-a----- 10,00m                                                                     
  [lvol2_pmspare]              vg   ewi-------  8,00m                                                                     
  new0                         vg   -wa-a----- 10,00m                                                                     
  origin                       vg   Cwi-a-C--- 10,00m [split_cache] [origin_corig] 1,25   0,34            0,00            
  [origin_corig]               vg   owi-aoC--- 10,00m                                                                     
  [split_cache]                vg   Cwi---C--- 10,00m                              1,25   0,34            0,00            
  [split_cache_cdata]          vg   Cwa-aor--- 10,00m                                                     100,00          
  [split_cache_cdata_rimage_0] vg   iwa-aor--- 10,00m                                                                     
  [split_cache_cdata_rimage_1] vg   iwa-aor--- 10,00m                                                                     
  [split_cache_cdata_rimage_2] vg   iwa-aor--- 10,00m                                                                     
  [split_cache_cdata_rimage_3] vg   iwa-aor--- 10,00m                                                                     
  [split_cache_cdata_rmeta_0]  vg   ewa-aor--- 32,00k                                                                     
  [split_cache_cdata_rmeta_1]  vg   ewa-aor--- 32,00k                                                                     
  [split_cache_cdata_rmeta_2]  vg   ewa-aor--- 32,00k                                                                     
  [split_cache_cdata_rmeta_3]  vg   ewa-aor--- 32,00k                                                                     
  [split_cache_cmeta]          vg   ewa-aor---  8,00m                                                     100,00          
  [split_cache_cmeta_rimage_0] vg   iwa-aor---  8,00m                                                                     
  [split_cache_cmeta_rimage_1] vg   iwa-aor---  8,00m                                                                     
  [split_cache_cmeta_rimage_2] vg   iwa-aor---  8,00m                                                                     
  [split_cache_cmeta_rimage_3] vg   iwa-aor---  8,00m                                                                     
  [split_cache_cmeta_rimage_4] vg   iwa-aor---  8,00m                                                                     
  [split_cache_cmeta_rmeta_0]  vg   ewa-aor--- 32,00k                                                                     
  [split_cache_cmeta_rmeta_1]  vg   ewa-aor--- 32,00k                                                                     
  [split_cache_cmeta_rmeta_2]  vg   ewa-aor--- 32,00k                                                                     
  [split_cache_cmeta_rmeta_3]  vg   ewa-aor--- 32,00k                                                                     
  [split_cache_cmeta_rmeta_4]  vg   ewa-aor--- 32,00k

Comment 3 Corey Marthaler 2014-12-03 23:23:53 UTC
The deadlock no longer exists in lvm2-2.02.114, but there is still an issue here (unless I need a newer kernel than kernel-3.10.0-206).


[root@host-118 ~]# lvs -a -o +devices
  LV                           Attr       LSize   Pool          Origin         Data%  Meta% Cpy%Sync Devices
  [lvol0_pmspare]              ewi-------   8.00m                                                    /dev/sdd1(125)
  origin                       Cwi-a-C--- 500.00m [split_cache] [origin_corig] 0.06   1.07  0.00     origin_corig(0)
  [origin_corig]               owi-aoC--- 500.00m                                                    /dev/sdd1(0)
  [split_cache]                Cwi---C--- 500.00m                              0.06   1.07  0.00     split_cache_cdata(0)
  [split_cache_cdata]          Cwi-aor--- 500.00m                                           100.00   split_cache_cdata_rimage_0(0),split_cache_cdata_rimage_1(0),split_cache_cdata_rimage_2(0),split_cache_cdata_rimage_3(0),split_cache_cdata_rimage_4(0)
  [split_cache_cdata_rimage_0] iwi-aor--- 500.00m                                                    /dev/sdc1(1)
  [split_cache_cdata_rimage_1] iwi-aor--- 500.00m                                                    /dev/sda1(1)
  [split_cache_cdata_rimage_2] iwi-aor--- 500.00m                                                    /dev/sde1(1)
  [split_cache_cdata_rimage_3] iwi-aor--- 500.00m                                                    /dev/sdf1(1)
  [split_cache_cdata_rimage_4] iwi-aor--- 500.00m                                                    /dev/sdb1(1)
  [split_cache_cdata_rmeta_0]  ewi-aor---   4.00m                                                    /dev/sdc1(0)
  [split_cache_cdata_rmeta_1]  ewi-aor---   4.00m                                                    /dev/sda1(0)
  [split_cache_cdata_rmeta_2]  ewi-aor---   4.00m                                                    /dev/sde1(0)
  [split_cache_cdata_rmeta_3]  ewi-aor---   4.00m                                                    /dev/sdf1(0)
  [split_cache_cdata_rmeta_4]  ewi-aor---   4.00m                                                    /dev/sdb1(0)
  [split_cache_cmeta]          ewi-aor---   8.00m                                           100.00   split_cache_cmeta_rimage_0(0),split_cache_cmeta_rimage_1(0),split_cache_cmeta_rimage_2(0),split_cache_cmeta_rimage_3(0),split_cache_cmeta_rimage_4(0)
  [split_cache_cmeta_rimage_0] iwi-aor---   8.00m                                                    /dev/sdc1(127)
  [split_cache_cmeta_rimage_1] iwi-aor---   8.00m                                                    /dev/sda1(127)
  [split_cache_cmeta_rimage_2] iwi-aor---   8.00m                                                    /dev/sde1(127)
  [split_cache_cmeta_rimage_3] iwi-aor---   8.00m                                                    /dev/sdf1(127)
  [split_cache_cmeta_rimage_4] iwi-aor---   8.00m                                                    /dev/sdb1(127)
  [split_cache_cmeta_rmeta_0]  ewi-aor---   4.00m                                                    /dev/sdc1(126)
  [split_cache_cmeta_rmeta_1]  ewi-aor---   4.00m                                                    /dev/sda1(126)
  [split_cache_cmeta_rmeta_2]  ewi-aor---   4.00m                                                    /dev/sde1(126)
  [split_cache_cmeta_rmeta_3]  ewi-aor---   4.00m                                                    /dev/sdf1(126)
  [split_cache_cmeta_rmeta_4]  ewi-aor---   4.00m                                                    /dev/sdb1(126)

[root@host-118 ~]# lvconvert --yes --splitmirrors 1 --name new0 split_image/split_cache_cdata /dev/sdf1
  device-mapper: rename ioctl on split_image-split_cache_cdata_rimage_4 failed: Device or resource busy
  Failed to rename split_image-split_cache_cdata_rimage_4 (253:12) to split_image-split_cache_cdata_rimage_3
  Failed to resume split_image/split_cache_cdata after committing changes
  Releasing activation in critical section.
  libdevmapper exiting with 24 device(s) still suspended.
[root@host-118 ~]# echo $?
5


Dec  3 17:04:03 host-118 kernel: device-mapper: raid: RAID1 device #4 now at position #3
Dec  3 17:04:03 host-118 kernel: md/raid1:mdX: active with 4 out of 4 mirrors
Dec  3 17:04:03 host-118 kernel: created bitmap (1 pages) for device mdX
Dec  3 17:04:03 host-118 lvm[1427]: No longer monitoring RAID device split_image-split_cache_cdata for events.
Dec  3 17:04:03 host-118 lvm[1427]: No longer monitoring RAID device split_image-split_cache_cmeta for events.
Dec  3 17:04:04 host-118 kernel: device-mapper: ioctl: Unable to change name on mapped device split_image-split_cache_cdata_rimage_4 to one that already exists: split_image-split_cache_cdata_rimage_3


[root@host-118 ~]# lvs -a -o +devices
  LV                                   Attr       LSize   Pool          Origin         Data%  Meta% Cpy%Sync Devices
  [lvol0_pmspare]                      ewi-------   8.00m                                                    /dev/sdd1(125)
  new0                                 -wi-so---- 500.00m                                                    /dev/sdf1(1)
  origin                               Cwi-s-C--- 500.00m [split_cache] [origin_corig] 0.06   1.56  0.00     origin_corig(0)
  [origin_corig]                       owi-soC--- 500.00m                                                    /dev/sdd1(0)
  [split_cache]                        Cwi---C--- 500.00m                              0.06   1.56  0.00     split_cache_cdata(0)
  [split_cache_cdata]                  Cwi-sor--- 500.00m                                           100.00   split_cache_cdata_rimage_0(0),split_cache_cdata_rimage_1(0),split_cache_cdata_rimage_2(0),split_cache_cdata_rimage_3(0)                              
  [split_cache_cdata_rimage_0]         iwi-sor--- 500.00m                                                    /dev/sdc1(1)
  [split_cache_cdata_rimage_1]         iwi-sor--- 500.00m                                                    /dev/sda1(1)
  [split_cache_cdata_rimage_2]         iwi-sor--- 500.00m                                                    /dev/sde1(1)
  [split_cache_cdata_rimage_3]         iwi-sor--- 500.00m                                                    /dev/sdb1(1)
  [split_cache_cdata_rmeta_0]          ewi-sor---   4.00m                                                    /dev/sdc1(0)
  [split_cache_cdata_rmeta_1]          ewi-sor---   4.00m                                                    /dev/sda1(0)
  [split_cache_cdata_rmeta_2]          ewi-sor---   4.00m                                                    /dev/sde1(0)
  [split_cache_cdata_rmeta_3]          ewi-sor---   4.00m                                                    /dev/sdb1(0)
  split_cache_cdata_rmeta_3__extracted -wi-so----   4.00m                                                    /dev/sdf1(0)
  [split_cache_cmeta]                  ewi-sor---   8.00m                                           100.00   split_cache_cmeta_rimage_0(0),split_cache_cmeta_rimage_1(0),split_cache_cmeta_rimage_2(0),split_cache_cmeta_rimage_3(0),split_cache_cmeta_rimage_4(0)
  [split_cache_cmeta_rimage_0]         iwi-sor---   8.00m                                                    /dev/sdc1(127)
  [split_cache_cmeta_rimage_1]         iwi-sor---   8.00m                                                    /dev/sda1(127)
  [split_cache_cmeta_rimage_2]         iwi-sor---   8.00m                                                    /dev/sde1(127)
  [split_cache_cmeta_rimage_3]         iwi-sor---   8.00m                                                    /dev/sdf1(127)
  [split_cache_cmeta_rimage_4]         iwi-sor---   8.00m                                                    /dev/sdb1(127)
  [split_cache_cmeta_rmeta_0]          ewi-sor---   4.00m                                                    /dev/sdc1(126)
  [split_cache_cmeta_rmeta_1]          ewi-sor---   4.00m                                                    /dev/sda1(126)
  [split_cache_cmeta_rmeta_2]          ewi-sor---   4.00m                                                    /dev/sde1(126)
  [split_cache_cmeta_rmeta_3]          ewi-sor---   4.00m                                                    /dev/sdf1(126)
  [split_cache_cmeta_rmeta_4]          ewi-sor---   4.00m                                                    /dev/sdb1(126)

# Can't be deactivated/removed without getting out a hammer
[root@host-118 ~]# vgchange -an split_image
  Logical volume split_image/new0 is used by another device.
  Can't deactivate volume group "split_image" with 2 open logical volume(s)
[root@host-118 ~]# lvchange -an split_image/new0
  Logical volume split_image/new0 is used by another device.
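
A hedged diagnostic sketch, not from this report, for finding out what still holds the split-off LV open before reaching for a bigger hammer (names illustrative):

   dmsetup info split_image-new0                           # an Open count > 0 means something still references it
   dm=$(basename "$(readlink -f /dev/split_image/new0)")   # resolve the LV to its underlying dm-N node
   ls /sys/block/$dm/holders                               # list the device-mapper devices holding it open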

Comment 4 Corey Marthaler 2014-12-04 23:34:54 UTC
FWIW, the same thing happens when attempting to split from raid cache origin volumes.

[root@host-119 ~]# lvconvert --yes --splitmirrors 1 --name new0 split_image/split_origin_corig /dev/sda1
  device-mapper: rename ioctl on split_image-split_origin_corig_rimage_1 failed: Device or resource busy
  Failed to rename split_image-split_origin_corig_rimage_1 (253:5) to split_image-split_origin_corig_rimage_0
  Failed to resume split_image/split_origin_corig after committing changes
  Releasing activation in critical section.
  libdevmapper exiting with 14 device(s) still suspended.

Comment 5 Zdenek Kabelac 2014-12-17 10:32:20 UTC
My assumption here is -

The raid split-image code performs an improper rename operation: it tries to reuse the same device name while the split-off leg's name is effectively still in use.

But it appears to be a different kind of upstream bug.


The split-off leg needs to be 'deactivated' so the rimage_3 name is 'released' and can be reused for the leg rename of rimage_4 -> rimage_3.

(device-mapper: ioctl: Unable to change name on mapped device split_image-split_cache_cdata_rimage_4 to one that already exists: split_image-split_cache_cdata_rimage_3)
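
A minimal, self-contained illustration of the underlying device-mapper constraint (demo names are made up, not from this report): a rename onto a name that still exists is refused, which is exactly the "Device or resource busy" / "already exists" failure seen above.

   dmsetup create demo_old --table '0 8 zero'
   dmsetup create demo_new --table '0 8 zero'
   dmsetup rename demo_new demo_old   # fails: cannot rename to a name that already exists
   dmsetup remove demo_new
   dmsetup remove demo_old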

Comment 6 Peter Rajnoha 2014-12-17 14:38:24 UTC
LVM2 v112 was not a good release to test. The tested v114 did not have the deadlock, but it has a different issue, so I'm renaming the bug.

Comment 7 Zdenek Kabelac 2014-12-17 14:56:19 UTC
I think we should be able to apply the very same logic here that we deployed for the cache target with the 'PENDING_REMOVE' flag - that way we could 'remove' during the raid resume path without an 'extra' metadata commit.

Unsure which is currently the better fit - either go with a multi metadata write/commit path, or add PENDING for the raid target in a similar way to what we have for the cache target.

Comment 8 Corey Marthaler 2015-01-08 19:38:26 UTC
Adding the magic keywords since even the basic case is broken and it used to work in 7.0.


3.10.0-219.el7.x86_64
lvm2-2.02.114-4.el7    BUILT: Wed Jan  7 07:07:47 CST 2015
lvm2-libs-2.02.114-4.el7    BUILT: Wed Jan  7 07:07:47 CST 2015
lvm2-cluster-2.02.114-4.el7    BUILT: Wed Jan  7 07:07:47 CST 2015
device-mapper-1.02.92-4.el7    BUILT: Wed Jan  7 07:07:47 CST 2015
device-mapper-libs-1.02.92-4.el7    BUILT: Wed Jan  7 07:07:47 CST 2015
device-mapper-event-1.02.92-4.el7    BUILT: Wed Jan  7 07:07:47 CST 2015
device-mapper-event-libs-1.02.92-4.el7    BUILT: Wed Jan  7 07:07:47 CST 2015
device-mapper-persistent-data-0.4.1-2.el7    BUILT: Wed Nov 12 12:39:46 CST 2014
cmirror-2.02.114-4.el7    BUILT: Wed Jan  7 07:07:47 CST 2015


[root@host-110 ~]# lvs -a -o +devices
  LV                                Attr       LSize   Cpy%Sync Devices
  split_pvs_sequentially            rwi-a-r--- 300.00m 100.00   split_pvs_sequentially_rimage_0(0),split_pvs_sequentially_rimage_1(0),split_pvs_sequentially_rimage_2(0),split_pvs_sequentially_rimage_3(0),split_pvs_sequentially_rimage_4(0)
  [split_pvs_sequentially_rimage_0] iwi-aor--- 300.00m          /dev/sde1(1)
  [split_pvs_sequentially_rimage_1] iwi-aor--- 300.00m          /dev/sdd1(1)
  [split_pvs_sequentially_rimage_2] iwi-aor--- 300.00m          /dev/sdb1(1)
  [split_pvs_sequentially_rimage_3] iwi-aor--- 300.00m          /dev/sda1(1)
  [split_pvs_sequentially_rimage_4] iwi-aor--- 300.00m          /dev/sdf1(1)
  [split_pvs_sequentially_rmeta_0]  ewi-aor---   4.00m          /dev/sde1(0)
  [split_pvs_sequentially_rmeta_1]  ewi-aor---   4.00m          /dev/sdd1(0)
  [split_pvs_sequentially_rmeta_2]  ewi-aor---   4.00m          /dev/sdb1(0)
  [split_pvs_sequentially_rmeta_3]  ewi-aor---   4.00m          /dev/sda1(0)
  [split_pvs_sequentially_rmeta_4]  ewi-aor---   4.00m          /dev/sdf1(0)

[root@host-110 ~]# lvconvert --splitmirrors 1 --name new0 split_image/split_pvs_sequentially /dev/sda1
  device-mapper: rename ioctl on split_image-split_pvs_sequentially_rimage_4 failed: Device or resource busy
  Failed to rename split_image-split_pvs_sequentially_rimage_4 (253:11) to split_image-split_pvs_sequentially_rimage_3
  Failed to resume split_image/split_pvs_sequentially after committing changes
  Releasing activation in critical section.
  libdevmapper exiting with 11 device(s) still suspended.

[  257.238435] device-mapper: ioctl: Unable to change name on mapped device split_image-split_pvs_sequentially_rimage_4 to one that already exists: split_image-split_pvs_sequentially_rimage_3


[root@host-110 ~]# lvs -a -o +devices
  LV                                        Attr       LSize   Cpy%Sync Devices
  new0                                      -wi-so---- 300.00m          /dev/sda1(1)
  split_pvs_sequentially                    rwi-s-r--- 300.00m 100.00   split_pvs_sequentially_rimage_0(0),split_pvs_sequentially_rimage_1(0),split_pvs_sequentially_rimage_2(0),split_pvs_sequentially_rimage_3(0)
  [split_pvs_sequentially_rimage_0]         iwi-sor--- 300.00m          /dev/sde1(1)
  [split_pvs_sequentially_rimage_1]         iwi-sor--- 300.00m          /dev/sdd1(1)
  [split_pvs_sequentially_rimage_2]         iwi-sor--- 300.00m          /dev/sdb1(1)
  [split_pvs_sequentially_rimage_3]         iwi-sor--- 300.00m          /dev/sdf1(1)
  [split_pvs_sequentially_rmeta_0]          ewi-sor---   4.00m          /dev/sde1(0)
  [split_pvs_sequentially_rmeta_1]          ewi-sor---   4.00m          /dev/sdd1(0)
  [split_pvs_sequentially_rmeta_2]          ewi-sor---   4.00m          /dev/sdb1(0)
  [split_pvs_sequentially_rmeta_3]          ewi-sor---   4.00m          /dev/sdf1(0)
  split_pvs_sequentially_rmeta_3__extracted -wi-so----   4.00m          /dev/sda1(0)

Comment 10 Heinz Mauelshagen 2015-01-13 14:59:47 UTC
(In reply to Zdenek Kabelac from comment #5)
> My assumption here is -
> 
> The raid split-image code performs an improper rename operation: it tries to
> reuse the same device name while the split-off leg's name is effectively still in use.
> 
> But it appears to be a different kind of upstream bug.
> 
> 
> The split-off leg needs to be 'deactivated' so the rimage_3 name is 'released'
> and can be reused for the leg rename of  rimage_4 ->  rimage_3.
> 
> (device-mapper: ioctl: Unable to change name on mapped device
> split_image-split_cache_cdata_rimage_4 to one that already exists:
> split_image-split_cache_cdata_rimage_3)

FYI: my development code on the upstream branch avoids device name shifting altogether, thus avoiding the problem in question.

Comment 11 Zdenek Kabelac 2015-01-26 14:49:32 UTC
This bug is unrelated to cache - treating it solely as a RAID problem.

small test suite example:

lvcreate --type raid1 -m 4 -l 2 -n $lv1 $vg
lvconvert --yes --splitmirrors 1 $vg/$lv1 "$dev1"
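
For reference, a hedged, self-contained variant of the reproducer above using loop devices as stand-in PVs (device, VG and LV names are illustrative, not from the test suite):

   for i in 0 1 2 3 4; do
       truncate -s 64M /tmp/pv$i.img
       devs="$devs $(losetup -f --show /tmp/pv$i.img)"
   done
   vgcreate testvg $devs
   lvcreate --type raid1 -m 4 -l 2 -n lv1 testvg
   # wait for Cpy%Sync to reach 100.00 before splitting
   lvconvert --yes --splitmirrors 1 testvg/lv1 $(echo $devs | awk '{print $1}')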

Comment 12 Zdenek Kabelac 2015-01-27 11:02:39 UTC
So the origin of the problem dates from my upstream commit 62c7027a7c675dfef8f772b1e20ac18705b847a9.

This commit started to operate properly on the devices which actually hold the lock - this is especially important in 'clustered' usage, where a sub-device of a top-level device does not hold the lock and thus appears inactive.

For the same reason we cannot operate on sub-devices directly, since in case of any error or power-off the whole table logic becomes inconsistent.

Another important use case is device stacking, where 'raid1' is used as a sub-device of a thin pool - in this case we go from the thin-pool level to suspend/resume the raid device used as the data or metadata pool LV.

As Jon stated, name rotation is a mandatory feature for raid1 functionality (since leg 0 is the 'master' leg), so I'm looking at how the code could be fixed so that the top-level suspends/resumes work properly.

We cannot revert the original commit that caused the rename problem, as that would disable device stacking.
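
As a concrete, hedged example of the stacking case described above - a raid1 LV that gets suspended/resumed from the thin-pool level rather than on its own (names illustrative):

   lvcreate --type raid1 -m 1 -L 100M -n pooldata vg
   lvcreate -L 8M -n poolmeta vg
   lvconvert --yes --type thin-pool --poolmetadata vg/poolmeta vg/pooldata
   lvcreate -V 50M -T vg/pooldata -n thinvol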

Comment 13 Heinz Mauelshagen 2015-01-27 13:43:51 UTC
Is it really important to keep a leg suffixed with '0' for the master-leg logic to work correctly? The leg in slot 0 should serve as the master whatever suffix its name has, no?

Comment 16 Corey Marthaler 2015-01-28 20:41:19 UTC
Fix verified in the latest rpms. With the exception of bug 1186903, all of the raid image splitting test cases (using a specified PV) work again.


3.10.0-225.el7.x86_64
lvm2-2.02.115-3.el7    BUILT: Wed Jan 28 09:59:01 CST 2015
lvm2-libs-2.02.115-3.el7    BUILT: Wed Jan 28 09:59:01 CST 2015
lvm2-cluster-2.02.115-3.el7    BUILT: Wed Jan 28 09:59:01 CST 2015
device-mapper-1.02.93-3.el7    BUILT: Wed Jan 28 09:59:01 CST 2015
device-mapper-libs-1.02.93-3.el7    BUILT: Wed Jan 28 09:59:01 CST 2015
device-mapper-event-1.02.93-3.el7    BUILT: Wed Jan 28 09:59:01 CST 2015
device-mapper-event-libs-1.02.93-3.el7    BUILT: Wed Jan 28 09:59:01 CST 2015
device-mapper-persistent-data-0.4.1-2.el7    BUILT: Wed Nov 12 12:39:46 CST 2014
cmirror-2.02.115-3.el7    BUILT: Wed Jan 28 09:59:01 CST 2015




SCENARIO - [sequentially_split_off_all_stacked_cache_pool_raid1_pvs]
Create raid1 volume with many legs, convert to cache pool, and then sequentially split off each one of the PVs
host-115.virt.lab.msp.redhat.com: lvcreate  -L 500M -n origin split_image /dev/sdd1

host-115.virt.lab.msp.redhat.com: lvcreate  --type raid1 -m 4 -n split_cache -L 500M split_image /dev/sde1 /dev/sdc1 /dev/sdh1 /dev/sdf1 /dev/sdb1
host-115.virt.lab.msp.redhat.com: lvcreate  --type raid1 -m 4 -n split_cache_meta -L 8M split_image /dev/sde1 /dev/sdc1 /dev/sdh1 /dev/sdf1 /dev/sdb1
Waiting until all mirror|raid volumes become fully syncd...
   2/2 mirror(s) are fully synced: ( 100.00% 100.00% )

Create cache pool volume by combining the cache data and cache metadata (fast) volumes
lvconvert --yes --type cache-pool --poolmetadata split_image/split_cache_meta split_image/split_cache
  WARNING: Converting logical volume split_image/split_cache and split_image/split_cache_meta to pool's data and metadata volumes.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
Create cached volume by combining the cache pool (fast) and origin (slow) volumes
lvconvert --yes --type cache --cachepool split_image/split_cache split_image/origin

splitting off legs:
         /dev/sdh1 /dev/sdb1 /dev/sde1 /dev/sdc1


SCENARIO - [sequentially_split_off_all_raid1_pvs]
Create a raid1 with many legs and then sequentially split off each one of the PVs
host-115.virt.lab.msp.redhat.com: lvcreate  --type raid1 -m 4 -n split_pvs_sequentially -L 300M split_image
Waiting until all mirror|raid volumes become fully syncd...
   1/1 mirror(s) are fully synced: ( 100.00% )

splitting off legs:
         /dev/sdd1 /dev/sdc1 /dev/sde1 /dev/sdf1


SCENARIO - [sequentially_split_off_all_stacked_cache_origin_raid1_pvs]
Create raid1 volume with many legs, convert to cache origin, and then sequentially split off each one of the PVs
host-115.virt.lab.msp.redhat.com: lvcreate  --type raid1 -m 4 -n split_origin -L 500M split_image /dev/sde1 /dev/sdc1 /dev/sdh1 /dev/sdf1 /dev/sdd1

Waiting until all mirror|raid volumes become fully syncd...
   0/1 mirror(s) are fully synced: ( 84.49% )
   1/1 mirror(s) are fully synced: ( 100.00% )

host-115.virt.lab.msp.redhat.com: lvcreate  -n cache -L 500M split_image /dev/sdb1
host-115.virt.lab.msp.redhat.com: lvcreate  -n cache_meta -L 8M split_image /dev/sdb1
Create cache pool volume by combining the cache data and cache metadata (fast) volumes
lvconvert --yes --type cache-pool --poolmetadata split_image/cache_meta split_image/cache
  WARNING: Converting logical volume split_image/cache and split_image/cache_meta to pool's data and metadata volumes.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
Create cached volume by combining the cache pool (fast) and origin (slow) volumes
lvconvert --yes --type cache --cachepool split_image/cache split_image/split_origin

splitting off legs:
         /dev/sdh1 /dev/sdf1 /dev/sdd1 /dev/sdc1

Comment 18 errata-xmlrpc 2015-03-05 13:10:37 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-0513.html

