Bug 1462108 - 0xc0000001 error when install win2016 guest with gluster backend
Status: CLOSED INSUFFICIENT_DATA
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm-rhev
Version: 7.4
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assigned To: Jeff Cody
QA Contact: Longxiang Lyu
Keywords: Regression
Depends On:
Blocks: 1473046
Reported: 2017-06-16 04:13 EDT by Suqin Huang
Modified: 2017-12-11 10:26 EST (History)
10 users

See Also:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-12-11 10:26:29 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Comment 2 Suqin Huang 2017-06-16 04:20:20 EDT
1. Could not reproduce with qemu-kvm-rhev-2.6.0-28.el7_3.2.x86_64 or qemu-kvm-rhev-2.9.0-1.el7.x86_64.

2. virtio-win: 
virtio-win-1.9.1-0
Comment 7 Suqin Huang 2017-06-19 22:04:19 EDT
The environment for the qemu-kvm-rhev-2.6.0-28.el7_3.2.x86_64 test is the same as for qemu-kvm-rhev-2.9.0-10.el7.x86_64.rpm.

I need to double-check qemu-kvm-rhev-2.9.0-1.el7.x86_64.rpm: when I tested it earlier the server was the same, but I am not sure about the client.
Comment 8 Suqin Huang 2017-06-20 01:51:11 EDT
# qemu-img info gluster://bootp-73-199-197.lab.eng.pek2.redhat.com:0/gv0/win2016-64-virtio-scsi.qcow2
[2017-06-20 05:50:52.975478] I [MSGID: 104045] [glfs-master.c:91:notify] 0-gfapi: New graph 68702d64-6c33-3838-6738-2d31362e7268 (0) coming up
[2017-06-20 05:50:52.975554] I [MSGID: 114020] [client.c:2356:notify] 0-gv0-client-0: parent translators are ready, attempting connect on transport
[2017-06-20 05:50:52.979341] I [MSGID: 114020] [client.c:2356:notify] 0-gv0-client-1: parent translators are ready, attempting connect on transport
[2017-06-20 05:50:52.980121] I [rpc-clnt.c:2001:rpc_clnt_reconfig] 0-gv0-client-0: changing port to 49152 (from 0)
[2017-06-20 05:50:52.983242] I [rpc-clnt.c:2001:rpc_clnt_reconfig] 0-gv0-client-1: changing port to 49152 (from 0)
[2017-06-20 05:50:52.985821] I [MSGID: 114057] [client-handshake.c:1439:select_server_supported_programs] 0-gv0-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2017-06-20 05:50:52.986350] I [MSGID: 114046] [client-handshake.c:1215:client_setvolume_cbk] 0-gv0-client-0: Connected to gv0-client-0, attached to remote volume '/data/brick1/gv0'.
[2017-06-20 05:50:52.986365] I [MSGID: 114047] [client-handshake.c:1226:client_setvolume_cbk] 0-gv0-client-0: Server and Client lk-version numbers are not same, reopening the fds
[2017-06-20 05:50:52.986411] I [MSGID: 108005] [afr-common.c:4659:afr_notify] 0-gv0-replicate-0: Subvolume 'gv0-client-0' came back up; going online.
[2017-06-20 05:50:52.986937] I [MSGID: 114035] [client-handshake.c:201:client_set_lk_version_cbk] 0-gv0-client-0: Server lk version = 1
[2017-06-20 05:50:52.988417] I [MSGID: 114057] [client-handshake.c:1439:select_server_supported_programs] 0-gv0-client-1: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2017-06-20 05:50:52.989045] I [MSGID: 114046] [client-handshake.c:1215:client_setvolume_cbk] 0-gv0-client-1: Connected to gv0-client-1, attached to remote volume '/data/brick1/gv0'.
[2017-06-20 05:50:52.989060] I [MSGID: 114047] [client-handshake.c:1226:client_setvolume_cbk] 0-gv0-client-1: Server and Client lk-version numbers are not same, reopening the fds
[2017-06-20 05:50:53.004248] I [MSGID: 114035] [client-handshake.c:201:client_set_lk_version_cbk] 0-gv0-client-1: Server lk version = 1
[2017-06-20 05:50:53.006433] I [MSGID: 104041] [glfs-resolve.c:885:__glfs_active_subvol] 0-gv0: switched to graph 68702d64-6c33-3838-6738-2d31362e7268 (0)
[2017-06-20 05:50:53.008280] W [MSGID: 114031] [client-rpc-fops.c:2211:client3_3_seek_cbk] 0-gv0-client-0: remote operation failed [No such device or address]
[2017-06-20 05:50:53.014621] I [MSGID: 114021] [client.c:2365:notify] 0-gv0-client-0: current graph is no longer active, destroying rpc_client 
[2017-06-20 05:50:53.014660] I [MSGID: 114021] [client.c:2365:notify] 0-gv0-client-1: current graph is no longer active, destroying rpc_client 
[2017-06-20 05:50:53.014667] I [MSGID: 114018] [client.c:2280:client_rpc_notify] 0-gv0-client-0: disconnected from gv0-client-0. Client process will keep trying to connect to glusterd until brick's port is available
[2017-06-20 05:50:53.014726] I [MSGID: 114018] [client.c:2280:client_rpc_notify] 0-gv0-client-1: disconnected from gv0-client-1. Client process will keep trying to connect to glusterd until brick's port is available
[2017-06-20 05:50:53.014745] E [MSGID: 108006] [afr-common.c:4684:afr_notify] 0-gv0-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.
[2017-06-20 05:50:53.015548] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy] 0-gfapi: size=84 max=1 total=1
[2017-06-20 05:50:53.015642] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy] 0-gfapi: size=188 max=2 total=2
[2017-06-20 05:50:53.015956] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy] 0-gfapi: size=140 max=2 total=8
[2017-06-20 05:50:53.016002] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy] 0-gv0-client-0: size=1316 max=3 total=6
[2017-06-20 05:50:53.016013] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy] 0-gv0-client-1: size=1316 max=2 total=8
[2017-06-20 05:50:53.016026] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy] 0-gv0-replicate-0: size=11708 max=2 total=8
[2017-06-20 05:50:53.016180] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy] 0-gv0-dht: size=1148 max=0 total=0
[2017-06-20 05:50:53.016238] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy] 0-gv0-dht: size=3380 max=2 total=7
[2017-06-20 05:50:53.016366] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy] 0-gv0-read-ahead: size=188 max=0 total=0
[2017-06-20 05:50:53.016379] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy] 0-gv0-readdir-ahead: size=60 max=0 total=0
[2017-06-20 05:50:53.016392] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy] 0-gv0-io-cache: size=68 max=2 total=2
[2017-06-20 05:50:53.016402] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy] 0-gv0-io-cache: size=252 max=2 total=11
[2017-06-20 05:50:53.016432] I [io-stats.c:3822:fini] 0-gv0: io-stats translator unloaded
[2017-06-20 05:50:53.016651] I [MSGID: 101191] [event-epoll.c:659:event_dispatch_epoll_worker] 0-epoll: Exited thread with index 2
[2017-06-20 05:50:53.016688] I [MSGID: 101191] [event-epoll.c:659:event_dispatch_epoll_worker] 0-epoll: Exited thread with index 1
image: gluster://bootp-73-199-197.lab.eng.pek2.redhat.com:0/gv0/win2016-64-virtio-scsi.qcow2
file format: qcow2
virtual size: 30G (32212254720 bytes)
disk size: 9.5G
cluster_size: 65536
Format specific information:
    compat: 1.1
    lazy refcounts: false
    refcount bits: 16
    corrupt: false
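The `client3_3_seek_cbk ... remote operation failed [No such device or address]` warning in the log above is errno ENXIO coming back from a seek call: `qemu-img` probes block allocation with SEEK_DATA/SEEK_HOLE, and ENXIO is the defined result when the requested offset is at or beyond end-of-file. A minimal local sketch of that semantic (plain POSIX lseek on a temporary file, not gluster-specific):

```python
import errno
import os
import tempfile

# lseek(2) with SEEK_DATA at an offset past EOF fails with ENXIO
# ("No such device or address"), the same errno text that appears
# in the gluster client log above.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello")          # file is only 5 bytes long
    path = f.name

fd = os.open(path, os.O_RDONLY)
try:
    os.lseek(fd, 100, os.SEEK_DATA)  # offset 100 is past EOF
except OSError as e:
    print(errno.errorcode[e.errno])
finally:
    os.close(fd)
    os.unlink(path)
```

So on its own this warning is expected behavior during an allocation probe rather than evidence of data loss, which is consistent with `qemu-img info` still printing valid image metadata afterwards.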
Comment 11 Suqin Huang 2017-07-26 03:00:58 EDT
Filesystem                                     1K-blocks       Used Available Use% Mounted on
/dev/mapper/rhel_intel--3323--24--2-root        52403200   15057960  37345240  29% /
devtmpfs                                        12314352          0  12314352   0% /dev
tmpfs                                           12342508         12  12342496   1% /dev/shm
tmpfs                                           12342508      25148  12317360   1% /run
tmpfs                                           12342508          0  12342508   0% /sys/fs/cgroup
/dev/sda1                                        1038336     255792    782544  25% /boot
/dev/mapper/rhel_intel--3323--24--2-home       178183164     783920 177399244   1% /home
bootp-73-199-200.lab.eng.pek2.redhat.com:/gv0   97608448   24135040  73473408  25% /mnt/gluster
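As a sanity check on the df numbers above, Use% is used / (used + available), rounded up; recomputing it for the root-filesystem line (figures copied from the report):

```python
import math

# Root filesystem line from the df output above:
# 1K-blocks=52403200, Used=15057960, Available=37345240, Use%=29%
used, avail = 15057960, 37345240
use_pct = math.ceil(used * 100 / (used + avail))  # df rounds up
print(f"{use_pct}%")
```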
Comment 15 Longxiang Lyu 2017-12-05 22:34:41 EST
Failed to reproduce with the latest version.
