Bug 524040 - block device cannot be detached from DomU
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: kernel
Version: 6.1
Hardware: All
OS: Linux
Priority: low
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Andrew Jones
QA Contact: Martin Jenner
URL:
Whiteboard:
Depends On: 524039
Blocks:
 
Reported: 2009-09-17 16:48 UTC by Andrew Jones
Modified: 2011-01-05 10:10 UTC
CC List: 5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of: 524039
Environment:
Last Closed: 2010-06-17 12:07:38 UTC
Target Upstream Version:
Embargoed:



Description Andrew Jones 2009-09-17 16:48:26 UTC
+++ This bug was initially created as a clone of Bug #524039 +++

Description of problem:

On a PV DomU running kernel 2.6.31-12.fc12.x86_64, detaching a block device fails and dumps a backtrace to the console.

How reproducible:

100%

Steps to Reproduce:

On Dom0

dd if=/dev/zero of=/var/lib/xen/images/disk1.dsk bs=1 count=1 seek=10G   # create a sparse 10G backing file
virsh attach-disk rawhide --driver file /var/lib/xen/images/disk1.dsk xvdb   # attach it to the DomU "rawhide" as xvdb

The attach completes successfully, and from the DomU the new device can be seen and used without any problem.
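
Before detaching, the attach can be double-checked from inside the DomU (a rough sketch; assumes the guest sees the disk as xvdb and that it is a scratch disk):

# inside the DomU
grep xvdb /proc/partitions          # the new disk should be listed
mkfs.ext3 /dev/xvdb                 # scratch disk only: destroys any data on it
mount /dev/xvdb /mnt && touch /mnt/t && umount /mnt   # exercise some I/O on it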

virsh detach-disk rawhide xvdb

Actual results:

The detach appears to complete successfully from Dom0, but a stack backtrace is dumped to the DomU's console and the device is only partially removed.
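
The leftover state can be spotted from inside the DomU (a rough check; the exact residue may vary):

# inside the DomU, after the detach
grep xvdb /proc/partitions   # the partition entry may already be gone...
ls /sys/block/               # ...while a stale xvdb entry may linger in sysfs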

Expected results:

Device is completely removed without failure.

Additional info:

The backtrace:

general protection fault: 0000 [#1] SMP
last sysfs file: /sys/devices/vbd-51728/block/xvdb/size
CPU 0
Modules linked in: nfsd lockd nfs_acl auth_rpcgss exportfs sunrpc ip6t_REJECT nf_conntrack_ipv6 ip6table_filter ip6_tables ipv6 dm_multipath uinput joydev xen_netfront xen_blkfront [last unloaded: microcode]
Pid: 11, comm: xenwatch Tainted: G        W  2.6.31-12.fc12.x86_64 #1
RIP: e030:[<ffffffff812136eb>]  [<ffffffff812136eb>] debugfs_remove+0x27/0x90
RSP: e02b:ffff88007d461bf0  EFLAGS: 00010202
RAX: 0000000000000000 RBX: ffff8800041fb850 RCX: 00000000001a0006
RDX: 0000000000000000 RSI: ffffea000321bfa8 RDI: 6b6b6b6b6b6b6b6b
RBP: ffff88007d461c10 R08: 0000000000000000 R09: 00000000e027cfb5
R10: ffffffff81678e0f R11: 0000000000000000 R12: 6b6b6b6b6b6b6b6b
R13: ffff88007d461c80 R14: ffff8800798d5520 R15: ffffffffa0005048
FS:  00007f9f6266e7c0(0000) GS:ffffc90000000000(0000) knlGS:0000000000000000
CS:  e033 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: 00007f97099ac000 CR3: 000000007b978000 CR4: 0000000000002620
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000000
Process xenwatch (pid: 11, threadinfo ffff88007d460000, task ffff88007d4524a0)
Stack:
 ffff88007d461c30 00000000e027cfb5 ffff8800041fb850 0000000000000000
<0> ffff88007d461c40 ffffffff811126bf ffff88007d461c40 00000000e027cfb5
<0> ffff88007d461c70 ffff88007ad3c480 ffff88007d461c70 ffffffff8126ad2c
Call Trace:
 [<ffffffff811126bf>] bdi_unregister+0x36/0x74
 [<ffffffff8126ad2c>] unlink_gendisk+0x43/0x75
 [<ffffffff811a537b>] del_gendisk+0x9e/0x10d
 [<ffffffffa000134a>] blkfront_closing+0xad/0x105 [xen_blkfront]
 [<ffffffffa00017ef>] backend_changed+0x44d/0x52e [xen_blkfront]
 [<ffffffff8100ec42>] ? check_events+0x12/0x20
 [<ffffffff8130fa30>] otherend_changed+0xf3/0x194
 [<ffffffff8100ec2f>] ? xen_restore_fl_direct_end+0x0/0x1
 [<ffffffff8130e52d>] xenwatch_thread+0x119/0x160
 [<ffffffff81081487>] ? autoremove_wake_function+0x0/0x5f
 [<ffffffff8130e414>] ? xenwatch_thread+0x0/0x160
 [<ffffffff81081034>] kthread+0xac/0xb4
 [<ffffffff8101312a>] child_rip+0xa/0x20
 [<ffffffff81012a90>] ? restore_args+0x0/0x30
 [<ffffffff81013120>] ? child_rip+0x0/0x20
Code: 41 5e c9 c3 55 48 89 e5 41 54 53 48 83 ec 10 0f 1f 44 00 00 65 48 8b 04 25 28 00 00 00 48 89 45 e8 31 c0 48 85 ff 49 89 fc 74 4e <48> 8b 5f 68 48 85 db 74 45 48 8b 7b 50 48 85 ff 74 3c 48 81 c7
RIP  [<ffffffff812136eb>] debugfs_remove+0x27/0x90
 RSP <ffff88007d461bf0>
---[ end trace a7919e7f17c0a729 ]---
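
A note on the registers: RDI and R12 both hold the repeated byte 0x6b, which is the slab poison pattern written over freed memory, so debugfs_remove() was apparently handed an already-freed pointer (a use-after-free in the teardown path). To map the faulting RIP back to a source line, addr2line can be run against a vmlinux with debug info (a sketch; the path assumes the matching Fedora kernel-debuginfo package is installed):

addr2line -e /usr/lib/debug/lib/modules/2.6.31-12.fc12.x86_64/vmlinux ffffffff812136eb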

Comment 2 RHEL Program Management 2009-09-17 17:28:41 UTC
This request was evaluated by Red Hat Product Management for inclusion in a Red Hat Enterprise Linux major release. Product Management has requested further review of this request by Red Hat Engineering for potential inclusion in a Red Hat Enterprise Linux major release. This request is not yet committed for inclusion.

Comment 3 Andrew Jones 2010-02-24 11:47:18 UTC
Retesting with the latest bits shows that this is working now. We should still try to identify what fixed it, though, and in which revision.
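
One way to pin that down is to bisect upstream for the fix rather than for a regression, i.e. with the usual good/bad labels inverted (a sketch; the endpoint tags are assumptions: pick a kernel known to crash and one known to work):

git bisect start
git bisect good v2.6.31   # inverted: "good" = detach still crashes here
git bisect bad v2.6.33    # inverted: "bad" = detach works here
# at each step: build, boot the DomU, run the attach/detach reproducer,
# then mark it "good" if it still crashes or "bad" if it works; the first
# "bad" commit bisect reports is the one that fixed the detach path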

Comment 4 Andrew Jones 2010-06-17 12:07:38 UTC
This has been working quite stably for a while now. I will close this out as CURRENTRELEASE. We can reopen it as 6.1 work if we need to learn more about it later.

