Bug 1223839 - /lib64/libglusterfs.so.0(+0x21725)[0x7f248655a725] ))))) 0-rpc_transport: invalid argument: this
Summary: /lib64/libglusterfs.so.0(+0x21725)[0x7f248655a725] ))))) 0-rpc_transport: inv...
Keywords:
Status: CLOSED EOL
Alias: None
Product: GlusterFS
Classification: Community
Component: libgfapi
Version: 3.7.0
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact: Sudhir D
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-05-21 14:11 UTC by Nikolai Sednev
Modified: 2017-03-08 10:48 UTC
9 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-03-08 10:48:49 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Attachments
alma02 logs (269.62 KB, application/x-gzip)
2015-05-21 14:20 UTC, Nikolai Sednev
no flags
sosreportalma02 (6.47 MB, application/x-xz)
2015-05-21 14:28 UTC, Nikolai Sednev
no flags
latest logs from alma03 (15.13 MB, application/x-gzip)
2015-05-26 14:06 UTC, Nikolai Sednev
no flags

Description Nikolai Sednev 2015-05-21 14:11:08 UTC
Description of problem:
Looks like we have a Gluster issue here; it seems that Gluster is no longer responding:
[2015-05-21 10:52:46.565452] W [socket.c:3059:socket_connect] 0-nfs: Ignore failed connection attempt on /var/run/gluster/9fe4e0a3980d08f6d7b2c8a10bf0e93d.socket, (No such file or directory)
[2015-05-21 10:52:46.565532] W [socket.c:642:__socket_rwv] 0-nfs: readv on /var/run/gluster/9fe4e0a3980d08f6d7b2c8a10bf0e93d.socket failed (Invalid argument)

engine's VM was terminated
qemu: terminating on signal 15 from pid 52851
[2015-05-21 08:58:23.165365] E [rpc-transport.c:512:rpc_transport_unref] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x186)[0x7fcaeba09f16] (--> /lib64/libgfrpc.so.0(rpc_transport_unref+0xa3)[0x7fcaeb7d65a3] (--> /lib64/libgfrpc.so.0(rpc_clnt_unref+0x5c)[0x7fcaeb7d98ec] (--> /lib64/libglusterfs.so.0(+0x21791)[0x7fcaeba06791] (--> /lib64/libglusterfs.so.0(+0x21725)[0x7fcaeba06725] ))))) 0-rpc_transport: invalid argument: this
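For context on the error above: the "invalid argument: this" message is what GlusterFS's argument-validation guard prints when rpc_transport_unref() is handed a NULL transport, which matches the backtrace showing _gf_log_callingfn being called from rpc_transport_unref. Below is a minimal, self-contained sketch of that guard pattern; it is an illustration only, with simplified stand-in names (VALIDATE_OR_GOTO, rpc_transport_unref_sketch), not the actual GlusterFS source.

#include <stdio.h>
#include <stdlib.h>

/* Simplified stand-in for the transport object. */
typedef struct rpc_transport {
        int refcount;
} rpc_transport_t;

/* Stand-in for the GlusterFS validation macro: log the offending
 * argument name and bail out instead of dereferencing NULL. */
#define VALIDATE_OR_GOTO(name, arg, label)                                \
        do {                                                              \
                if (!(arg)) {                                             \
                        fprintf(stderr, "E 0-%s: invalid argument: %s\n", \
                                (name), #arg);                            \
                        goto label;                                       \
                }                                                         \
        } while (0)

static int
rpc_transport_unref_sketch(rpc_transport_t *this)
{
        int refcount = -1;

        /* This is the branch that produces the logged message when the
         * caller passes a NULL transport, e.g. after the client side has
         * already been torn down. */
        VALIDATE_OR_GOTO("rpc_transport", this, out);

        refcount = --this->refcount;
        if (refcount == 0)
                free(this);
out:
        return refcount;
}

int
main(void)
{
        /* Calling with NULL reproduces the message seen in the logs. */
        rpc_transport_unref_sketch(NULL);
        return 0;
}

In other words, the message only says that some caller dropped a reference on a transport that was already NULL; which teardown path got there is what the attached logs should show.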


Version-Release number of selected component (if applicable):
libvirt-daemon-driver-nwfilter-1.2.8-16.el7_1.3.x86_64                    
ovirt-hosted-engine-ha-1.3.0-0.0.master.20150424113553.20150424113551.git7c14f4c.el7.noarch
vdsm-python-4.17.0-834.gitd066d4a.el7.noarch                                               
glusterfs-3.7.0beta2-0.2.gitc1cd4fa.el7.centos.x86_64                                      
libvirt-daemon-driver-nodedev-1.2.8-16.el7_1.3.x86_64                                      
vdsm-gluster-4.17.0-834.gitd066d4a.el7.noarch                                              
qemu-img-ev-2.1.2-23.el7_1.3.1.x86_64                                                      
libvirt-daemon-config-nwfilter-1.2.8-16.el7_1.3.x86_64                                     
libvirt-daemon-driver-qemu-1.2.8-16.el7_1.3.x86_64
ovirt-release-master-001-0.8.master.noarch
vdsm-yajsonrpc-4.17.0-834.gitd066d4a.el7.noarch
glusterfs-api-3.7.0beta2-0.2.gitc1cd4fa.el7.centos.x86_64
glusterfs-geo-replication-3.7.0beta2-0.2.gitc1cd4fa.el7.centos.x86_64
libvirt-daemon-1.2.8-16.el7_1.3.x86_64
libvirt-lock-sanlock-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-driver-secret-1.2.8-16.el7_1.3.x86_64
mom-0.4.4-0.0.master.20150515133332.git2d32797.el7.noarch
sanlock-3.2.2-2.el7.x86_64
qemu-kvm-tools-ev-2.1.2-23.el7_1.3.1.x86_64
ovirt-host-deploy-1.4.0-0.0.master.20150505205623.giteabc23b.el7.noarch
qemu-kvm-common-ev-2.1.2-23.el7_1.3.1.x86_64
glusterfs-client-xlators-3.7.0beta2-0.2.gitc1cd4fa.el7.centos.x86_64
glusterfs-cli-3.7.0beta2-0.2.gitc1cd4fa.el7.centos.x86_64
vdsm-infra-4.17.0-834.gitd066d4a.el7.noarch
vdsm-jsonrpc-4.17.0-834.gitd066d4a.el7.noarch
ovirt-engine-sdk-python-3.6.0.0-0.14.20150520.git8420a90.el7.centos.noarch
libvirt-daemon-driver-interface-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-kvm-1.2.8-16.el7_1.3.x86_64
sanlock-lib-3.2.2-2.el7.x86_64
vdsm-4.17.0-834.gitd066d4a.el7.noarch
glusterfs-fuse-3.7.0beta2-0.2.gitc1cd4fa.el7.centos.x86_64
libvirt-python-1.2.8-7.el7_1.1.x86_64
libvirt-daemon-driver-network-1.2.8-16.el7_1.3.x86_64
vdsm-xmlrpc-4.17.0-834.gitd066d4a.el7.noarch
glusterfs-libs-3.7.0beta2-0.2.gitc1cd4fa.el7.centos.x86_64
qemu-kvm-ev-2.1.2-23.el7_1.3.1.x86_64
glusterfs-server-3.7.0beta2-0.2.gitc1cd4fa.el7.centos.x86_64
libvirt-client-1.2.8-16.el7_1.3.x86_64
libvirt-daemon-driver-storage-1.2.8-16.el7_1.3.x86_64
ovirt-hosted-engine-setup-1.3.0-0.0.master.20150518075146.gitdd9741f.el7.noarch
sanlock-python-3.2.2-2.el7.x86_64
vdsm-cli-4.17.0-834.gitd066d4a.el7.noarch
glusterfs-rdma-3.7.0beta2-0.2.gitc1cd4fa.el7.centos.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Install HC.
2. Run the deployment and establish the oVirt 3.6.0-2 engine on a RHEL 6.6 VM.
3. When the engine's installation and deployment are complete, continue with the HC-HE deployment.

Actual results:
Got stuck at:
[ INFO  ] Engine replied: DB Up!Welcome to Health Status!
[ INFO  ] Connecting to the Engine
          Enter the name of the cluster to which you want to add the host (Default) [Default]:
[ INFO  ] Waiting for the host to become operational in the engine. This may take several minutes...
[ INFO  ] Still waiting for VDSM host to become operational...
[ INFO  ] Still waiting for VDSM host to become operational...
[ INFO  ] Still waiting for VDSM host to become operational...

Expected results:
Should get VM started and finish the deployment.

Additional info:
Logs from the host are attached.

Comment 1 Nikolai Sednev 2015-05-21 14:20:18 UTC
Created attachment 1028201 [details]
alma02 logs

Comment 2 Nikolai Sednev 2015-05-21 14:28:12 UTC
Created attachment 1028220 [details]
sosreportalma02

Comment 3 Sandro Bonazzola 2015-05-21 15:00:59 UTC
Closing as a duplicate of bug #1210137; this should already be fixed in gluster-3.7.0 GA.

*** This bug has been marked as a duplicate of bug 1210137 ***

Comment 4 Nikolai Sednev 2015-05-26 14:04:42 UTC
Not fixed.
Hi Sandro,
I failed to deploy the HC again today. Can you confirm that all components are up to date and take a look at the attached logs?


The engine failed to add the host and got stuck at:
[ INFO ] Waiting for the host to become operational in the engine. This may take several minutes...
[ INFO ] Still waiting for VDSM host to become operational...
[ INFO ] Still waiting for VDSM host to become operational...

Meanwhile, the engine itself died in the background.
Components on RHEL 7.1 are as follows:
glusterfs-server-3.7.0-2.el7.x86_64
glusterfs-cli-3.7.0-2.el7.x86_64
qemu-kvm-ev-2.1.2-23.el7_1.3.1.x86_64
vdsm-4.17.0-860.git92219e2.el7.noarch
glusterfs-3.7.0-2.el7.x86_64
ovirt-release-master-001-0.9.master.noarch
ovirt-host-deploy-1.4.0-0.0.master.20150525194300.git9a06f4b.el7.noarch
glusterfs-libs-3.7.0-2.el7.x86_64
ovirt-engine-sdk-python-3.6.0.0-0.14.20150520.git8420a90.el7.centos.noarch
mom-0.4.4-0.0.master.20150525150210.git93ec8be.el7.noarch
glusterfs-api-3.7.0-2.el7.x86_64
glusterfs-rdma-3.7.0-2.el7.x86_64
ovirt-hosted-engine-setup-1.3.0-0.0.master.20150518075146.gitdd9741f.el7.noarch
glusterfs-client-xlators-3.7.0-2.el7.x86_64
glusterfs-geo-replication-3.7.0-2.el7.x86_64
ovirt-hosted-engine-ha-1.3.0-0.0.master.20150424113553.20150424113551.git7c14f4c.el7.noarch
sanlock-3.2.2-2.el7.x86_64
qemu-kvm-tools-ev-2.1.2-23.el7_1.3.1.x86_64
glusterfs-fuse-3.7.0-2.el7.x86_64
qemu-kvm-common-ev-2.1.2-23.el7_1.3.1.x86_64
qemu-img-ev-2.1.2-23.el7_1.3.1.x86_64


Errors from the VDSM log:
Detector thread::ERROR::2015-05-26 13:18:20,833::sslutils::332::ProtocolDetector.SSLHandshakeDispatcher::(handle_read) Error during handshake: unexpected eof
clientIFinit::DEBUG::2015-05-26 13:18:25,142::task::592::Storage.TaskManager.Task::(_updateState) Task=`a131cf48-d22f-4d6c-99a9-dfb60f10f772`::moving from state init -> state preparing
clientIFinit::INFO::2015-05-26 13:18:25,155::logUtils::48::dispatcher::(wrapper) Run and protect: getConnectedStoragePoolsList(options=None)
clientIFinit::INFO::2015-05-26 13:18:25,156::logUtils::51::dispatcher::(wrapper) Run and protect: getConnectedStoragePoolsList, Return response: {'poollist': []}
clientIFinit::DEBUG::2015-05-26 13:18:25,156::task::1188::Storage.TaskManager.Task::(prepare) Task=`a131cf48-d22f-4d6c-99a9-dfb60f10f772`::finished: {'poollist': []}
clientIFinit::DEBUG::2015-05-26 13:18:25,156::task::592::Storage.TaskManager.Task::(_updateState) Task=`a131cf48-d22f-4d6c-99a9-dfb60f10f772`::moving from state preparing -> state finished
clientIFinit::DEBUG::2015-05-26 13:18:25,156::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
clientIFinit::DEBUG::2015-05-26 13:18:25,156::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
clientIFinit::DEBUG::2015-05-26 13:18:25,156::task::990::Storage.TaskManager.Task::(_decref) Task=`a131cf48-d22f-4d6c-99a9-dfb60f10f772`::ref 0 aborting False
periodic.Executor-worker-2::WARNING::2015-05-26 13:18:29,172::periodic::253::periodic.VmDispatcher::(__call__) could not run <class 'virt.periodic.BlockjobMonitor'> on ['eeab2f2a-a643-4de9-850b-120009c021f4']
clientIFinit::DEBUG::2015-05-26 13:18:30,162::task::592::Storage.TaskManager.Task::(_updateState) Task=`bee37f8a-c1ff-470c-962b-f2dddadf2028`::moving from state init -> state preparing
clientIFinit::INFO::2015-05-26 13:18:30,162::logUtils::48::dispatcher::(wrapper) Run and protect: getConnectedStoragePoolsList(options=None)
clientIFinit::INFO::2015-05-26 13:18:30,162::logUtils::51::dispatcher::(wrapper) Run and protect: getConnectedStoragePoolsList, Return response: {'poollist': []}
clientIFinit::DEBUG::2015-05-26 13:18:30,163::task::1188::Storage.TaskManager.Task::(prepare) Task=`bee37f8a-c1ff-470c-962b-f2dddadf2028`::finished: {'poollist': []}
clientIFinit::DEBUG::2015-05-26 13:18:30,163::task::592::Storage.TaskManager.Task::(_updateState) Task=`bee37f8a-c1ff-470c-962b-f2dddadf2028`::moving from state preparing -> state finished
clientIFinit::DEBUG::2015-05-26 13:18:30,163::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
clientIFinit::DEBUG::2015-05-26 13:18:30,163::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
clientIFinit::DEBUG::2015-05-26 13:18:30,163::task::990::Storage.TaskManager.Task::(_decref) Task=`bee37f8a-c1ff-470c-962b-f2dddadf2028`::ref 0 aborting False



Thread-13::ERROR::2015-05-26 12:00:59,142::sdc::137::Storage.StorageDomainCache::(_findDomain) looking for unfetched domain 3fee9170-0710-4df7-a25f-c02565dd6aef
Thread-13::ERROR::2015-05-26 12:00:59,142::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain) looking for domain 3fee9170-0710-4df7-a25f-c02565dd6aef
Thread-13::DEBUG::2015-05-26 12:00:59,142::lvm::371::Storage.OperationMutex::(_reloadvgs) Got the operational mutex
Thread-13::DEBUG::2015-05-26 12:00:59,143::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name 3fee9170-0710-4df7-a25f-c02565dd6aef (cwd None)
storageRefresh::DEBUG::2015-05-26 12:00:59,149::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n'; <rc> = 0


Thread-93::ERROR::2015-05-26 14:59:53,871::monitor::366::Storage.Monitor::(_releaseHostId) Error releasing host id 1 for domain 3fee9170-0710-4df7-a25f-c02565dd6aef
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/monitor.py", line 363, in _releaseHostId
    self.domain.releaseHostId(self.hostId, unused=True)
  File "/usr/share/vdsm/storage/sd.py", line 480, in releaseHostId
    self._clusterLock.releaseHostId(hostId, async, unused)
  File "/usr/share/vdsm/storage/clusterlock.py", line 252, in releaseHostId
    raise se.ReleaseHostIdFailure(self._sdUUID, e)
ReleaseHostIdFailure: Cannot release host id: (u'3fee9170-0710-4df7-a25f-c02565dd6aef', SanlockException(16, 'Sanlock lockspace remove failure', 'Device or resource busy'))
MainThread::DEBUG::2015-05-26 14:59:53,872::taskManager::90::Storage.TaskManager::(prepareForShutdown) Request to stop all tasks
MainThread::INFO::2015-05-26 14:59:53,873::taskManager::96::Storage.TaskManager::(prepareForShutdown) fb078ba3-4147-4e17-b443-888c4b269c72
MainThread::INFO::2015-05-26 14:59:53,873::logUtils::51::dispatcher::(wrapper) Run and protect: prepareForShutdown, Return response: None
MainThread::DEBUG::2015-05-26 14:59:53,873::task::1188::Storage.TaskManager.Task::(prepare) Task=`4d8218c0-f687-46c1-b612-3fbb0d73eafc`::finished: None
MainThread::DEBUG::2015-05-26 14:59:53,873::task::592::Storage.TaskManager.Task::(_updateState) Task=`4d8218c0-f687-46c1-b612-3fbb0d73eafc`::moving from state preparing -> state finished
MainThread::DEBUG::2015-05-26 14:59:53,873::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
MainThread::DEBUG::2015-05-26 14:59:53,873::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
MainThread::DEBUG::2015-05-26 14:59:53,874::task::990::Storage.TaskManager.Task::(_decref) Task=`4d8218c0-f687-46c1-b612-3fbb0d73eafc`::ref 0 aborting False

From the Gluster log:
[2015-05-26 10:47:28.189446] I [dht-rename.c:1422:dht_rename] 0-hosted_engine_glusterfs-dht: renaming /3fee9170-0710-4df7-a25f-c02565dd6aef/master/tasks/da3b5428-4917-4273-8c40-d6dcac85e2a7.temp (hash=hosted_engine_glusterfs-client-0/cache=hosted_engine_glusterfs-client-0) => /3fee9170-0710-4df7-a25f-c02565dd6aef/master/tasks/da3b5428-4917-4273-8c40-d6dcac85e2a7 (hash=hosted_engine_glusterfs-client-0/cache=<nul>)
The message "I [MSGID: 109036] [dht-common.c:6689:dht_log_new_layout_for_dir_selfheal] 0-hosted_engine_glusterfs-dht: Setting layout of /3fee9170-0710-4df7-a25f-c02565dd6aef/master/tasks/da3b5428-4917-4273-8c40-d6dcac85e2a7.temp with [Subvol_name: hosted_engine_glusterfs-client-0, Err: -1 , Start: 0 , Stop: 4294967295 , Hash: 1 ], " repeated 4 times between [2015-05-26 10:47:27.723660] and [2015-05-26 10:47:28.175946]
[2015-05-26 10:52:33.741009] W [fuse-bridge.c:1263:fuse_err_cbk] 0-glusterfs-fuse: 3279: REMOVEXATTR() /__DIRECT_IO_TEST__ => -1 (No data available)
[2015-05-26 10:57:34.211909] W [fuse-bridge.c:1263:fuse_err_cbk] 0-glusterfs-fuse: 3984: REMOVEXATTR() /__DIRECT_IO_TEST__ => -1 (No data available)
[2015-05-26 11:02:34.837164] W [fuse-bridge.c:1263:fuse_err_cbk] 0-glusterfs-fuse: 4619: REMOVEXATTR() /__DIRECT_IO_TEST__ => -1 (No data available)
[2015-05-26 11:07:35.360679] W [fuse-bridge.c:1263:fuse_err_cbk] 0-glusterfs-fuse: 5290: REMOVEXATTR() /__DIRECT_IO_TEST__ => -1 (No data available)
[2015-05-26 11:12:35.854795] W [fuse-bridge.c:1263:fuse_err_cbk] 0-glusterfs-fuse: 5919: REMOVEXATTR() /__DIRECT_IO_TEST__ => -1 (No data available)
[2015-05-26 11:17:36.345393] W [fuse-bridge.c:1263:fuse_err_cbk] 0-glusterfs-fuse: 6548: REMOVEXATTR() /__DIRECT_IO_TEST__ => -1 (No data available)
[2015-05-26 11:22:36.841516] W [fuse-bridge.c:1263:fuse_err_cbk] 0-glusterfs-fuse: 7166: REMOVEXATTR() /__DIRECT_IO_TEST__ => -1 (No data available)
[2015-05-26 11:27:37.330561] W [fuse-bridge.c:1263:fuse_err_cbk] 0-glusterfs-fuse: 7795: REMOVEXATTR() /__DIRECT_IO_TEST__ => -1 (No data available)
[2015-05-26 11:32:37.826158] W [fuse-bridge.c:1263:fuse_err_cbk] 0-glusterfs-fuse: 8424: REMOVEXATTR() /__DIRECT_IO_TEST__ => -1 (No data available)
[2015-05-26 11:37:38.318198] W [fuse-bridge.c:1263:fuse_err_cbk] 0-glusterfs-fuse: 9042: REMOVEXATTR() /__DIRECT_IO_TEST__ => -1 (No data available)
[2015-05-26 11:42:38.804247] W [fuse-bridge.c:1263:fuse_err_cbk] 0-glusterfs-fuse: 9671: REMOVEXATTR() /__DIRECT_IO_TEST__ => -1 (No data available)
[2015-05-26 11:47:39.317402] W [fuse-bridge.c:1263:fuse_err_cbk] 0-glusterfs-fuse: 10300: REMOVEXATTR() /__DIRECT_IO_TEST__ => -1 (No data available)
[2015-05-26 11:52:39.813964] W [fuse-bridge.c:1263:fuse_err_cbk] 0-glusterfs-fuse: 10918: REMOVEXATTR() /__DIRECT_IO_TEST__ => -1 (No data available)
[2015-05-26 11:57:40.301172] W [fuse-bridge.c:1263:fuse_err_cbk] 0-glusterfs-fuse: 11547: REMOVEXATTR() /__DIRECT_IO_TEST__ => -1 (No data available)
[2015-05-26 09:00:46.422998] W [socket.c:642:__socket_rwv] 0-glusterfs: readv on 10.35.117.24:24007 failed (No data available)
[2015-05-26 09:00:56.611177] I [glusterfsd-mgmt.c:1512:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing
[2015-05-26 09:00:59.204112] W [fuse-bridge.c:1263:fuse_err_cbk] 0-glusterfs-fuse: 11896: REMOVEXATTR() /__DIRECT_IO_TEST__ => -1 (No data available)


From the engine log:
2015-05-26 10:48:11.123+0000: starting up
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin QEMU_AUDIO_DRV=none /usr/libexec/qemu-kvm -name HostedEngine -S -machine rhel6.5.0,accel=kvm,usb=off -cpu SandyBridge -m 4096 -realtime mlock=off -smp 2,sockets=2,cores=1,threads=1 -uuid eeab2f2a-a643-4de9-850b-120009c021f4 -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=7.1-1.el7_1.1,serial=4C4C4544-0059-4410-8053-B7C04F573032_a0:36:9f:3a:c4:f0,uuid=eeab2f2a-a643-4de9-850b-120009c021f4 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/HostedEngine.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2015-05-26T10:48:10,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet -no-reboot -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -drive if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive file=gluster://alma03.qa.lab.tlv.redhat.com/hosted_engine_glusterfs/3fee9170-0710-4df7-a25f-c02565dd6aef/images/fe742d0e-864c-49bb-9045-0d50b63dd5f1/edbe0790-f77f-43e0-a9ad-87cb4d27b3e1,if=none,id=drive-virtio-disk0,format=raw,serial=fe742d0e-864c-49bb-9045-0d50b63dd5f1,cache=none,werror=stop,rerror=stop,aio=threads -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0 -netdev tap,fd=27,id=hostnet0,vhost=on,vhostfd=29 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:7b:b8:53,bus=pci.0,addr=0x3,bootindex=1 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/eeab2f2a-a643-4de9-850b-120009c021f4.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/eeab2f2a-a643-4de9-850b-120009c021f4.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev socket,id=charchannel2,path=/var/lib/libvirt/qemu/channels/eeab2f2a-a643-4de9-850b-120009c021f4.org.ovirt.hosted-engine-setup.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=org.ovirt.hosted-engine-setup.0 -chardev pty,id=charconsole0 -device virtconsole,chardev=charconsole0,id=console0 -vnc 0:0,password -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -msg timestamp=on
char device redirected to /dev/pts/1 (label charconsole0)
qemu: terminating on signal 15 from pid 105714
[2015-05-26 10:49:11.159161] E [rpc-transport.c:512:rpc_transport_unref] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x186)[0x7f5a681edf16] (--> /lib64/libgfrpc.so.0(rpc_transport_unref+0xa3)[0x7f5a67fba5a3] (--> /lib64/libgfrpc.so.0(rpc_clnt_unref+0x5c)[0x7f5a67fbd8ec] (--> /lib64/libglusterfs.so.0(+0x21791)[0x7f5a681ea791] (--> /lib64/libglusterfs.so.0(+0x21725)[0x7f5a681ea725] ))))) 0-rpc_transport: invalid argument: this
2015-05-26 10:49:12.390+0000: shutting down
2015-05-26 10:49:19.267+0000: starting up
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin QEMU_AUDIO_DRV=none /usr/libexec/qemu-kvm -name HostedEngine -S -machine rhel6.5.0,accel=kvm,usb=off -cpu SandyBridge -m 4096 -realtime mlock=off -smp 2,sockets=2,cores=1,threads=1 -uuid eeab2f2a-a643-4de9-850b-120009c021f4 -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=7.1-1.el7_1.1,serial=4C4C4544-0059-4410-8053-B7C04F573032_a0:36:9f:3a:c4:f0,uuid=eeab2f2a-a643-4de9-850b-120009c021f4 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/HostedEngine.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2015-05-26T10:49:18,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet -no-reboot -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -drive if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive file=gluster://alma03.qa.lab.tlv.redhat.com/hosted_engine_glusterfs/3fee9170-0710-4df7-a25f-c02565dd6aef/images/fe742d0e-864c-49bb-9045-0d50b63dd5f1/edbe0790-f77f-43e0-a9ad-87cb4d27b3e1,if=none,id=drive-virtio-disk0,format=raw,serial=fe742d0e-864c-49bb-9045-0d50b63dd5f1,cache=none,werror=stop,rerror=stop,aio=threads -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0 -netdev tap,fd=27,id=hostnet0,vhost=on,vhostfd=29 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:7b:b8:53,bus=pci.0,addr=0x3,bootindex=1 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/eeab2f2a-a643-4de9-850b-120009c021f4.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/eeab2f2a-a643-4de9-850b-120009c021f4.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev socket,id=charchannel2,path=/var/lib/libvirt/qemu/channels/eeab2f2a-a643-4de9-850b-120009c021f4.org.ovirt.hosted-engine-setup.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=org.ovirt.hosted-engine-setup.0 -chardev pty,id=charconsole0 -device virtconsole,chardev=charconsole0,id=console0 -vnc 0:0,password -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -msg timestamp=on
char device redirected to /dev/pts/1 (label charconsole0)
[2015-05-26 10:53:58.132433] E [rpc-transport.c:512:rpc_transport_unref] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x186)[0x7fa98cd89f16] (--> /lib64/libgfrpc.so.0(rpc_transport_unref+0xa3)[0x7fa98cb565a3] (--> /lib64/libgfrpc.so.0(rpc_clnt_unref+0x5c)[0x7fa98cb598ec] (--> /lib64/libglusterfs.so.0(+0x21791)[0x7fa98cd86791] (--> /lib64/libglusterfs.so.0(+0x21725)[0x7fa98cd86725] ))))) 0-rpc_transport: invalid argument: this
2015-05-26 10:53:59.366+0000: shutting down
2015-05-26 10:54:10.150+0000: starting up
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin QEMU_AUDIO_DRV=none /usr/libexec/qemu-kvm -name HostedEngine -S -machine rhel6.5.0,accel=kvm,usb=off -cpu SandyBridge -m 4096 -realtime mlock=off -smp 2,sockets=2,cores=1,threads=1 -uuid eeab2f2a-a643-4de9-850b-120009c021f4 -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=7.1-1.el7_1.1,serial=4C4C4544-0059-4410-8053-B7C04F573032_a0:36:9f:3a:c4:f0,uuid=eeab2f2a-a643-4de9-850b-120009c021f4 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/HostedEngine.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2015-05-26T10:54:09,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet -no-reboot -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -drive if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive file=gluster://alma03.qa.lab.tlv.redhat.com/hosted_engine_glusterfs/3fee9170-0710-4df7-a25f-c02565dd6aef/images/fe742d0e-864c-49bb-9045-0d50b63dd5f1/edbe0790-f77f-43e0-a9ad-87cb4d27b3e1,if=none,id=drive-virtio-disk0,format=raw,serial=fe742d0e-864c-49bb-9045-0d50b63dd5f1,cache=none,werror=stop,rerror=stop,aio=threads -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0 -netdev tap,fd=27,id=hostnet0,vhost=on,vhostfd=29 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:7b:b8:53,bus=pci.0,addr=0x3,bootindex=1 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/eeab2f2a-a643-4de9-850b-120009c021f4.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/eeab2f2a-a643-4de9-850b-120009c021f4.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev socket,id=charchannel2,path=/var/lib/libvirt/qemu/channels/eeab2f2a-a643-4de9-850b-120009c021f4.org.ovirt.hosted-engine-setup.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=org.ovirt.hosted-engine-setup.0 -chardev pty,id=charconsole0 -device virtconsole,chardev=charconsole0,id=console0 -vnc 0:0,password -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -msg timestamp=on
char device redirected to /dev/pts/1 (label charconsole0)
[2015-05-26 11:01:41.117528] E [rpc-transport.c:512:rpc_transport_unref] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x186)[0x7f248655df16] (--> /lib64/libgfrpc.so.0(rpc_transport_unref+0xa3)[0x7f248632a5a3] (--> /lib64/libgfrpc.so.0(rpc_clnt_unref+0x5c)[0x7f248632d8ec] (--> /lib64/libglusterfs.so.0(+0x21791)[0x7f248655a791] (--> /lib64/libglusterfs.so.0(+0x21725)[0x7f248655a725] ))))) 0-rpc_transport: invalid argument: this
2015-05-26 11:01:42.285+0000: shutting down
2015-05-26 11:04:44.879+0000: starting up
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin QEMU_AUDIO_DRV=none /usr/libexec/qemu-kvm -name HostedEngine -S -machine rhel6.5.0,accel=kvm,usb=off -cpu SandyBridge -m 4096 -realtime mlock=off -smp 2,sockets=2,cores=1,threads=1 -uuid eeab2f2a-a643-4de9-850b-120009c021f4 -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=7.1-1.el7_1.1,serial=4C4C4544-0059-4410-8053-B7C04F573032_a0:36:9f:3a:c4:f0,uuid=eeab2f2a-a643-4de9-850b-120009c021f4 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/HostedEngine.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2015-05-26T11:04:44,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet -no-reboot -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -drive if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive file=gluster://alma03.qa.lab.tlv.redhat.com/hosted_engine_glusterfs/3fee9170-0710-4df7-a25f-c02565dd6aef/images/fe742d0e-864c-49bb-9045-0d50b63dd5f1/edbe0790-f77f-43e0-a9ad-87cb4d27b3e1,if=none,id=drive-virtio-disk0,format=raw,serial=fe742d0e-864c-49bb-9045-0d50b63dd5f1,cache=none,werror=stop,rerror=stop,aio=threads -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=27,id=hostnet0,vhost=on,vhostfd=29 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:7b:b8:53,bus=pci.0,addr=0x3 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/eeab2f2a-a643-4de9-850b-120009c021f4.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/eeab2f2a-a643-4de9-850b-120009c021f4.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev socket,id=charchannel2,path=/var/lib/libvirt/qemu/channels/eeab2f2a-a643-4de9-850b-120009c021f4.org.ovirt.hosted-engine-setup.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=org.ovirt.hosted-engine-setup.0 -chardev pty,id=charconsole0 -device virtconsole,chardev=charconsole0,id=console0 -vnc 0:0,password -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -msg timestamp=on
char device redirected to /dev/pts/1 (label charconsole0)
2015-05-26 09:00:56.436+0000: shutting down
qemu: terminating on signal 15 from pid 115077
[2015-05-26 09:00:56.797604] E [rpc-transport.c:512:rpc_transport_unref] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x186)[0x7fa6c5f14f16] (--> /lib64/libgfrpc.so.0(rpc_transport_unref+0xa3)[0x7fa6c5ce15a3] (--> /lib64/libgfrpc.so.0(rpc_clnt_unref+0x5c)[0x7fa6c5ce48ec] (--> /lib64/libglusterfs.so.0(+0x21791)[0x7fa6c5f11791] (--> /lib64/libglusterfs.so.0(+0x21725)[0x7fa6c5f11725] ))))) 0-rpc_transport: invalid argument: this

Comment 5 Nikolai Sednev 2015-05-26 14:06:15 UTC
Created attachment 1029961 [details]
latest logs from alma03

Comment 6 Sandro Bonazzola 2015-05-26 14:20:36 UTC
Moving to Gluster. It looks like the THIS issue with qemu was not solved in 3.7.0-2.
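The THIS mentioned here presumably refers to GlusterFS's per-thread xlator context pointer, which gfapi swaps in and out around each call. As a rough, self-contained illustration only (hypothetical names, not gfapi source), the save/restore pattern at stake looks like this; if the pointer is not restored correctly, later cleanup such as rpc_transport_unref() can run against a stale or NULL context and log "invalid argument: this".

#include <stdio.h>

typedef struct xlator {
        const char *name;
} xlator_t;

/* Hypothetical stand-in for GlusterFS's per-thread THIS pointer. */
static __thread xlator_t *this_xl;

static void
gfapi_call_sketch(xlator_t *fs_master)
{
        xlator_t *saved = this_xl;   /* save the caller's context */

        this_xl = fs_master;         /* switch context for this call */
        printf("operating as %s\n", this_xl->name);
        this_xl = saved;             /* restore before returning; skipping
                                      * this (or racing with teardown)
                                      * leaves later code with a stale or
                                      * NULL context */
}

int
main(void)
{
        xlator_t master = { "hosted_engine_glusterfs" };
        gfapi_call_sketch(&master);
        return 0;
}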

Comment 7 Kaushal 2017-03-08 10:48:49 UTC
This bug is being closed because GlusterFS-3.7 has reached its end of life.

Note: This bug is being closed using a script. No verification has been performed to check if it still exists on newer releases of GlusterFS.
If this bug still exists in newer GlusterFS releases, please reopen this bug against the newer release.

