| Summary: | targets not resumed when creating multiple snapshots of mirrors | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 6 | Reporter: | Nate Straz <nstraz> |
| Component: | lvm2 | Assignee: | LVM and device-mapper development team <lvm-team> |
| Status: | CLOSED WORKSFORME | QA Contact: | Corey Marthaler <cmarthal> |
| Severity: | high | Priority: | high |
| Version: | 6.1 | CC: | agk, coughlan, dwysocha, heinzm, jbrassow, mbroz, prajnoha, prockai, thornber, zkabelac |
| Target Milestone: | rc | Target Release: | --- |
| Hardware: | Unspecified | OS: | Unspecified |
| Doc Type: | Bug Fix | Story Points: | --- |
| Last Closed: | 2011-09-06 18:36:27 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | Category: | --- |
| oVirt Team: | --- | Cloudforms Team: | --- |
| Attachments: | clvmd -d output during reproduction (attachment 490354 [details]) | | |
Since RHEL 6.1 External Beta has begun, and this bug remains unresolved, it has been rejected as it is not proposed as an exception or blocker. Red Hat invites you to ask your support representative to propose this request, if appropriate and relevant, in the next release of Red Hat Enterprise Linux.

Created attachment 490354 [details]
clvmd -d output during reproduction
I ran through this scenario with debugging enabled twice. The first time I typed the commands in by hand, which did not reproduce the hang. The second time I pasted all of the commands into the terminal at once, which did reproduce it.
The log contains all output from both runs, with # comments between each stage.
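Since the hang only appeared when the commands arrived back-to-back, a script can stand in for the paste. This is a minimal sketch, not part of the original test suite: the commands and names are taken from the Steps to Reproduce below, and it assumes a clustered volume group named mirror_sanity already exists with clvmd running on the node.

#!/bin/bash
# Sketch of a batch reproducer: run the commands with no inter-command
# delay, mimicking a paste into the terminal. Assumes the clustered VG
# mirror_sanity already exists and clvmd is running.
set -x
lvcreate -m 1 -n exclusive_origin -L 100M mirror_sanity
lvchange -an /dev/mirror_sanity/exclusive_origin
lvchange -aye /dev/mirror_sanity/exclusive_origin   # activate exclusively
for i in 1 2 3 4 5; do
    # The original report hangs on the third of these snapshot creates.
    lvcreate -s mirror_sanity/exclusive_origin -n msnap_$i -L 20M
done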
Should try to reproduce with upstream code to see if a recent fix helps.

Adding QA ack for 6.2. Devel will need to provide unit testing results, however, before this bug can be ultimately verified by QA.

Nate, are you able to re-try this with upstream code?

No, I don't have the bandwidth to try this on upstream code.

Do we have test results with recent upstream code by now? Is this reproducible with current 6.2 build (lvm2-2.02.87-1.el6)?

It does not appear that this bug exists in the latest 6.2 rpms.
============================================================
Iteration 20 of 20 started at Tue Sep 6 12:25:28 CDT 2011
============================================================
SCENARIO - [snaphot_exclusive_mirror]
Snapshot an exclusively activated mirror
grant-03: lvcreate -m 1 -n exclusive_origin -L 100M mirror_sanity
Deactivate and then exclusively activate mirror
Taking multiple snapshots of exclusive mirror
1 2 3 4 5
Removing snapshots of exclusive mirror
1 2 3 4 5
Deactivating mirror exclusive_origin... and removing
2.6.32-192.el6.x86_64
lvm2-2.02.87-1.el6 BUILT: Fri Aug 12 06:11:57 CDT 2011
lvm2-libs-2.02.87-1.el6 BUILT: Fri Aug 12 06:11:57 CDT 2011
lvm2-cluster-2.02.87-1.el6 BUILT: Fri Aug 12 06:11:57 CDT 2011
udev-147-2.37.el6 BUILT: Wed Aug 10 07:48:15 CDT 2011
device-mapper-1.02.66-1.el6 BUILT: Fri Aug 12 06:11:57 CDT 2011
device-mapper-libs-1.02.66-1.el6 BUILT: Fri Aug 12 06:11:57 CDT 2011
device-mapper-event-1.02.66-1.el6 BUILT: Fri Aug 12 06:11:57 CDT 2011
device-mapper-event-libs-1.02.66-1.el6 BUILT: Fri Aug 12 06:11:57 CDT 2011
cmirror-2.02.87-1.el6 BUILT: Fri Aug 12 06:11:57 CDT 2011
Description of problem:
The mirror_sanity scenario snaphot_exclusive_mirror hangs. It attempts to create five snapshots of an exclusively activated cluster mirror, but the third lvcreate hangs. dmsetup info output shows that several devices are left in a suspended state.

Version-Release number of selected component (if applicable):
2.6.32-128.el6.x86_64
lvm2-2.02.83-3.el6 BUILT: Fri Mar 18 09:31:10 CDT 2011
lvm2-libs-2.02.83-3.el6 BUILT: Fri Mar 18 09:31:10 CDT 2011
lvm2-cluster-2.02.83-3.el6 BUILT: Fri Mar 18 09:31:10 CDT 2011
udev-147-2.35.el6 BUILT: Wed Mar 30 07:32:05 CDT 2011
device-mapper-1.02.62-3.el6 BUILT: Fri Mar 18 09:31:10 CDT 2011
device-mapper-libs-1.02.62-3.el6 BUILT: Fri Mar 18 09:31:10 CDT 2011
device-mapper-event-1.02.62-3.el6 BUILT: Fri Mar 18 09:31:10 CDT 2011
device-mapper-event-libs-1.02.62-3.el6 BUILT: Fri Mar 18 09:31:10 CDT 2011
cmirror-2.02.83-3.el6 BUILT: Fri Mar 18 09:31:10 CDT 2011

How reproducible:
Easily.

Steps to Reproduce:
lvcreate -m 1 -n exclusive_origin -L 100M mirror_sanity
lvchange -an /dev/mirror_sanity/exclusive_origin
lvchange -aye /dev/mirror_sanity/exclusive_origin
lvcreate -s mirror_sanity/exclusive_origin -n msnap_1 -L 20M
lvcreate -s mirror_sanity/exclusive_origin -n msnap_2 -L 20M
lvcreate -s mirror_sanity/exclusive_origin -n msnap_3 -L 20M   <-- hangs

Actual results:
SCENARIO - [snaphot_exclusive_mirror]
Snapshot an exclusively activated mirror
buzz-01: lvcreate -m 1 -n exclusive_origin -L 100M mirror_sanity
Deactivate and then exclusively activate mirror
Taking multiple snapshots of exclusive mirror
1 2 3
Error locking on node buzz-01: Command timed out
Failed to suspend origin exclusive_origin
Error locking on node buzz-01: Command timed out
Attempt to drop cached metadata failed after reverted update for VG mirror_sanity.
Error locking on node buzz-01: Command timed out
Error locking on node buzz-01: Command timed out

[root@buzz-01 ~]# dmsetup info
Name:              mirror_sanity-msnap_2
State:             SUSPENDED
Read Ahead:        256
Tables present:    LIVE
Open count:        0
Event number:      0
Major, minor:      253, 10
Number of targets: 1
UUID: LVM-9VZslOJx0ir7TWhvrbCpOc8deNDaMlMyu1BbbGrvMqVuOdQRuQJxupYP05yBXdJo

Name:              mirror_sanity-msnap_3-cow
State:             ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        1
Event number:      0
Major, minor:      253, 13
Number of targets: 1
UUID: LVM-9VZslOJx0ir7TWhvrbCpOc8deNDaMlMyipNpM1jt8J0dC6L6O8UWgqF7GokXaRk2-cow

Name:              mirror_sanity-msnap_1
State:             SUSPENDED
Read Ahead:        256
Tables present:    LIVE
Open count:        0
Event number:      0
Major, minor:      253, 7
Number of targets: 1
UUID: LVM-9VZslOJx0ir7TWhvrbCpOc8deNDaMlMyKsXBmpFgjrxz6TWxVuXmt6d9BhbjkcmD

Name:              mirror_sanity-exclusive_origin_mimage_1
State:             ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        1
Event number:      0
Major, minor:      253, 5
Number of targets: 1
UUID: LVM-9VZslOJx0ir7TWhvrbCpOc8deNDaMlMy9OXr1prrwVPscf5DWclfloAKBRfwKKzV

Name:              mirror_sanity-exclusive_origin_mimage_0
State:             ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        1
Event number:      0
Major, minor:      253, 4
Number of targets: 1
UUID: LVM-9VZslOJx0ir7TWhvrbCpOc8deNDaMlMyAwD1WWxn1bt8kMNUYeBLrnegOQxOuHVE

Name:              mirror_sanity-msnap_1-cow
State:             ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        1
Event number:      0
Major, minor:      253, 9
Number of targets: 1
UUID: LVM-9VZslOJx0ir7TWhvrbCpOc8deNDaMlMyKsXBmpFgjrxz6TWxVuXmt6d9BhbjkcmD-cow

Name:              mirror_sanity-exclusive_origin_mlog
State:             SUSPENDED
Read Ahead:        256
Tables present:    LIVE
Open count:        1
Event number:      0
Major, minor:      253, 3
Number of targets: 1
UUID: LVM-9VZslOJx0ir7TWhvrbCpOc8deNDaMlMyYHQW6DNxDAINJw989Eel6hM8X5Zvapfw

Name:              mirror_sanity-exclusive_origin
State:             SUSPENDED
Read Ahead:        256
Tables present:    LIVE
Open count:        0
Event number:      0
Major, minor:      253, 6
Number of targets: 1
UUID: LVM-9VZslOJx0ir7TWhvrbCpOc8deNDaMlMyReG9l74Dcqj0S69nYOYSnuLX6Ar2HUsL

Name:              mirror_sanity-exclusive_origin-real
State:             ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        4
Event number:      0
Major, minor:      253, 8
Number of targets: 1
UUID: LVM-9VZslOJx0ir7TWhvrbCpOc8deNDaMlMyReG9l74Dcqj0S69nYOYSnuLX6Ar2HUsL-real

Name:              mirror_sanity-msnap_2-cow
State:             ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        1
Event number:      0
Major, minor:      253, 11
Number of targets: 1
UUID: LVM-9VZslOJx0ir7TWhvrbCpOc8deNDaMlMyu1BbbGrvMqVuOdQRuQJxupYP05yBXdJo-cow

Name:              mirror_sanity-msnap_3
State:             ACTIVE
Read Ahead:        256
Tables present:    LIVE & INACTIVE
Open count:        0
Event number:      0
Major, minor:      253, 12
Number of targets: 1
UUID: LVM-9VZslOJx0ir7TWhvrbCpOc8deNDaMlMyipNpM1jt8J0dC6L6O8UWgqF7GokXaRk2

[root@buzz-01 ~]# dmsetup table --target mirror
mirror_sanity-exclusive_origin-real: 0 204800 mirror userspace 4 LVM-9VZslOJx0ir7TWhvrbCpOc8deNDaMlMyYHQW6DNxDAINJw989Eel6hM8X5Zvapfw clustered-disk 253:3 1024 2 253:4 0 253:5 0 1 handle_errors

Expected results:
All five snapshots are created and the origin is resumed after each one; no devices are left suspended.

Additional info:
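To free a node wedged in this state, the suspended devices have to be resumed by hand. A minimal recovery sketch, assuming the dmsetup info output format shown above; dmsetup resume is a standard dmsetup subcommand, but resuming devices behind clvmd's back is only appropriate for unsticking a test node, not for production use:

# Resume any device-mapper devices left SUSPENDED, parsing the same
# "Name:" / "State:" pairs visible in the dmsetup info output above.
dmsetup info | awk '/^Name:/ { name = $2 } /^State:/ && /SUSPENDED/ { print name }' |
while read dev; do
    dmsetup resume "$dev"    # releases I/O queued against the device
done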