Bug 1095078
| Summary: | migration with block I/O error when using glusterfs storage backends | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Sibiao Luo <sluo> |
| Component: | glusterfs | Assignee: | Vijay Bellur <vbellur> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | SATHEESARAN <sasundar> |
| Severity: | high | Docs Contact: | |
| Priority: | high | | |
| Version: | 7.0 | CC: | amainkar, amit.shah, areis, asrivast, chayang, juzhang, knoel, lmiksik, mazhang, michen, ovasik, qzhang, rbalakri, rcyriac, rwheeler, sasundar, tlavigne, vagarwal, vbellur, virt-maint, xfu |
| Target Milestone: | rc | Keywords: | Regression, ZStream |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | glusterfs-3.6.0.41-1.el7 | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2015-03-26 11:34:36 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Sibiao Luo
2014-05-07 05:18:35 UTC
My glusterfs version and volume info:

```
# rpm -q glusterfs
glusterfs-3.5qa2-0.425.git9360107.el7.x86_64

# gluster
gluster> volume info

Volume Name: sluo_volume
Type: Distribute
Volume ID: 25ec85a1-e90b-499e-86e6-12db3e02b745
Status: Started
Snap Volume: no
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 10.66.106.6:/home/brick1
```

Host CPU info:

```
# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                16
On-line CPU(s) list:   0-15
Thread(s) per core:    1
Core(s) per socket:    8
Socket(s):             2
NUMA node(s):          4
Vendor ID:             AuthenticAMD
CPU family:            16
Model:                 9
Model name:            AMD Opteron(tm) Processor 6128
Stepping:              1
CPU MHz:               2000.154
BogoMIPS:              4000.07
Virtualization:        AMD-V
L1d cache:             64K
L1i cache:             64K
L2 cache:              512K
L3 cache:              5118K
NUMA node0 CPU(s):     0,2,4,6
NUMA node1 CPU(s):     8,10,12,14
NUMA node2 CPU(s):     9,11,13,15
NUMA node3 CPU(s):     1,3,5,7
```

(In reply to Sibiao Luo from comment #0)
> How reproducible:
> always

Not 100% with a single migration, but 100% with ping-pong migration according to my testing. BTW, using glusterfs (FUSE) rather than glusterfs (native) hits another problem after ping-pong migration; please let me know whether to file a separate bug for it if it is not the same issue, thanks.

```
block I/O error in device 'drive-virtio-disk': File descriptor in bad state (77)
(qemu) info status
VM status: paused (io-error)
(qemu) cont
(qemu) block I/O error in device 'drive-virtio-disk': File descriptor in bad state (77)
```

Best regards,
sluo

1. I tested both virtio-blk and virtio-scsi; both hit it.
2. Migration from RHEL 7.0 (qemu-kvm-1.5.3-60.el7.x86_64) to RHEL 7.0 (qemu-kvm-1.5.3-60.el7.x86_64) also hits it.
3. With glusterfs on a RHEL 6.6 server-side host as the storage backend, the issue did not occur.
   - glusterfs server (RHEL 6 side): glusterfs-3.5qa2-0.340.gitc193996.el7.x86_64
   - glusterfs client (RHEL 7 side): glusterfs-3.5qa2-0.425.git9360107.el7.x86_64

So I think this is an issue with the glusterfs RHEL 7 package.
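For context, the ping-pong reproduction described above boils down to roughly the following; the hostnames, port, and image path are illustrative placeholders, since the original comment does not include the exact command lines:

```shell
# On the destination host: start QEMU listening for an incoming migration,
# with the guest disk opened through QEMU's native gluster (libgfapi) driver.
qemu-kvm -drive file=gluster://10.66.106.6/sluo_volume/guest.img,if=virtio \
    -monitor stdio -incoming tcp:0:5800

# On the source host, in the QEMU monitor: migrate, then repeat in the
# opposite direction ("ping-pong") until the guest pauses with an I/O error.
# (qemu) migrate -d tcp:dest-host:5800
# (qemu) info migrate
# (qemu) info status    <- after the failure: "VM status: paused (io-error)"
# (qemu) cont           <- resuming immediately re-triggers the I/O error
```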
Please help move this to the right component if I have it wrong, thanks.

Best regards,
sluo

(In reply to Sibiao Luo from comment #4)

To clear it up, the combinations tested were:

I: Does not hit this issue.
- glusterfs server: RHEL 6 host, glusterfs-3.4.0.59rhs-1.el6rhs.x86_64
- glusterfs client: RHEL 7 host, glusterfs-3.5qa2-0.425.git9360107.el7.x86_64

II: Does not hit this issue.
- glusterfs server: RHEL 7 host, glusterfs-3.5qa2-0.340.gitc193996.el7.x86_64
- glusterfs client: RHEL 7 host, glusterfs-3.5qa2-0.425.git9360107.el7.x86_64

III: Hits this issue.
- glusterfs server: RHEL 7 host, glusterfs-3.5qa2-0.340.gitc193996.el7.x86_64 (a later re-quote of this matrix amends the server line to add glusterfs-3.5qa2-0.425.git9360107.el7.x86_64)
- glusterfs client: RHEL 7 host, glusterfs-3.5qa2-0.425.git9360107.el7.x86_64

So I think this is a glusterfs RHEL 7 package issue; please help move it to the right component if that is mistaken.

Tested live migration of a guest (RHEL 7.0) from one RHEL 7.1 host to another RHEL 7.1 host, where the guest uses a glusterfs shared storage domain via the libgfapi access mechanism. The guest migration was successful and the guest VM was running healthy post migration. Repeated the steps a couple of times and could not reproduce this issue.

Tested with:
- RHEL 7.1 nightly: http://download.devel.redhat.com/composes/nightly/RHEL-7.1-20150116.n.0/
- glusterfs RPMs: glusterfs-3.6.0.42-1.el7rhs (http://download.devel.redhat.com/brewroot/packages/glusterfs/3.6.0.42/1.el7rhs/x86_64/)
- qemu-kvm: qemu-kvm-1.5.3-60.el7_0.11.x86_64, qemu-kvm-common-1.5.3-60.el7_0.11.x86_64, qemu-kvm-tools-1.5.3-60.el7_0.11.x86_64

Marking this bug as VERIFIED with the test results as commented in comment 11.
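For reference, the libgfapi access mechanism mentioned above is typically configured in the libvirt domain XML as a network disk with the gluster protocol; this is a minimal config sketch in which the volume name, image path, and server address are illustrative placeholders, not values taken from this report:

```xml
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <!-- name is "volume/path-to-image" inside the gluster volume -->
  <source protocol='gluster' name='sluo_volume/guest.img'>
    <!-- 24007 is the default glusterd management port -->
    <host name='10.66.106.6' port='24007'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>
```

With this configuration QEMU opens the image directly through libgfapi, bypassing the FUSE mount that the reporter contrasts against the native access path.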