Bug 869956 - 3.1.z vdsm 6.4 - Can not create NFS storage in rhevm (with RHEL6.4 kernel - 2.6.32-335)
Summary: 3.1.z vdsm 6.4 - Can not create NFS storage in rhevm (with RHEL6.4 kernel - 2.6.32-335)
Keywords:
Status: CLOSED DUPLICATE of bug 902677
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: vdsm
Version: 6.4
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: high
Target Milestone: rc
Target Release: 6.4
Assignee: Saggi Mizrahi
QA Contact: Haim
URL:
Whiteboard: storage
Duplicates: 873855 (view as bug list)
Depends On:
Blocks:
 
Reported: 2012-10-25 08:33 UTC by EricLee
Modified: 2014-01-13 00:54 UTC (History)
CC List: 25 users

Fixed In Version: v4.10.1
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-01-28 06:44:44 UTC
Target Upstream Version:
Embargoed:


Attachments
RHEVM /var/log/ovirt-engine/engine.log (7.05 KB, text/plain)
2012-10-25 08:33 UTC, EricLee
no flags Details
VDSM client host /var/log/vdsm/vdsm.log (32.09 KB, text/plain)
2012-10-25 08:34 UTC, EricLee
no flags Details
RHEVM /var/log/ovirt-engine/engine.log using vdsm-41 (18.13 KB, text/plain)
2012-11-07 02:31 UTC, EricLee
no flags Details
VDSM client host /var/log/vdsm/vdsm.log using vdsm-41 (137.77 KB, text/plain)
2012-11-07 02:31 UTC, EricLee
no flags Details
vdsm log (25.65 KB, text/plain)
2013-01-17 13:44 UTC, Mike Burns
no flags Details

Description EricLee 2012-10-25 08:33:58 UTC
Created attachment 633227 [details]
RHEVM /var/log/ovirt-engine/engine.log

Description of problem:
Cannot create NFS storage in RHEVM.

Versions:
VDSM client:
# rpm -qa libvirt qemu-kvm-rhev vdsm kernel seabios
qemu-kvm-rhev-0.12.1.2-2.317.el6.x86_64
seabios-0.6.1.2-25.el6.x86_64
libvirt-0.10.2-5.el6.x86_64
kernel-2.6.32-330.el6.x86_64
vdsm-4.9.6-39.0.el6_3.x86_64
RHEVM:
# rpm -qa rhevm; uname -r
rhevm-3.1.0-22.el6ev.noarch
2.6.32-330.el6.x86_64

How reproducible:
100%

Steps to reproduce:
1. Set up a RHEVM environment and register a VDSM client host to it.

2. Configure an NFS server.

3. Create a new NFS domain in RHEVM using the NFS server you just configured.

This fails with the error:
Error while executing action New NFS Storage Domain: Error creating a storage domain

4. Check /var/log/ovirt-engine/engine.log on the RHEVM server; it contains the following errors:
2012-10-25 16:02:49,158 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (ajp-/127.0.0.1:8702-10) [ea38d4f] Error code StorageDomainCreationError and error message VDSGenericException: VDSErrorException: Failed to CreateStorageDomainVDS, error = Error creating a storage domain: ('storageType=1, sdUUID=9a16b215-78c6-4873-a26b-76e9b8946d73, domainName=512, domClass=1, typeSpecificArg=10.66.5.12:/mnt/data/rhevm domVersion=3',)
2012-10-25 16:02:49,158 ERROR [org.ovirt.engine.core.vdsbroker.VDSCommandBase] (ajp-/127.0.0.1:8702-10) [ea38d4f] Command CreateStorageDomainVDS execution failed. Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed to CreateStorageDomainVDS, error = Error creating a storage domain: ('storageType=1, sdUUID=9a16b215-78c6-4873-a26b-76e9b8946d73, domainName=512, domClass=1, typeSpecificArg=10.66.5.12:/mnt/data/rhevm domVersion=3',)
2012-10-25 16:02:49,158 ERROR [org.ovirt.engine.core.bll.storage.AddNFSStorageDomainCommand] (ajp-/127.0.0.1:8702-10) [ea38d4f] Command org.ovirt.engine.core.bll.storage.AddNFSStorageDomainCommand throw Vdc Bll exception. With error message VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to CreateStorageDomainVDS, error = Error creating a storage domain: ('storageType=1, sdUUID=9a16b215-78c6-4873-a26b-76e9b8946d73, domainName=512, domClass=1, typeSpecificArg=10.66.5.12:/mnt/data/rhevm domVersion=3',)

5. Check /var/log/vdsm/vdsm.log on the VDSM client host; it contains the following errors:
Thread-3856::DEBUG::2012-10-25 16:02:49,156::persistentDict::287::Storage.PersistentDict::(flush) about to write lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=512', 'IOOPTIMEOUTSEC=1', 'LEASERETRIES=3', 'LEASETIMESEC=5', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=10.66.5.12:/mnt/data/rhevm', 'ROLE=Regular', 'SDUUID=9a16b215-78c6-4873-a26b-76e9b8946d73', 'TYPE=NFS', 'VERSION=3', '_SHA_CKSUM=fd67fa49acb463a04b632e53a4a336164e7a80b0']
Thread-3856::DEBUG::2012-10-25 16:02:49,158::persistentDict::170::Storage.PersistentDict::(transaction) Finished transaction
Thread-3856::DEBUG::2012-10-25 16:02:49,159::fileSD::107::Storage.StorageDomain::(__init__) Reading domain in path /rhev/data-center/mnt/10.66.5.12:_mnt_data_rhevm/9a16b215-78c6-4873-a26b-76e9b8946d73
Thread-3856::DEBUG::2012-10-25 16:02:49,159::persistentDict::185::Storage.PersistentDict::(__init__) Created a persistant dict with FileMetadataRW backend
Thread-3856::WARNING::2012-10-25 16:02:49,169::remoteFileHandler::185::Storage.CrabRPCProxy::(callCrabRPCFunction) Problem with handler, treating as timeout
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 177, in callCrabRPCFunction
    rawLength = self._recvAll(LENGTH_STRUCT_LENGTH, timeout)
  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 146, in _recvAll
    timeLeft):
  File "/usr/lib64/python2.6/contextlib.py", line 83, in helper
    return GeneratorContextManager(func(*args, **kwds))
  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 133, in _poll
    raise Timeout()
Timeout
Thread-3856::ERROR::2012-10-25 16:02:49,170::task::853::TaskManager.Task::(_setError) Task=`5c73ae30-0df5-414d-9bfb-59745167e3e7`::Unexpected error
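
A note on the code path in this traceback: the CrabRPC proxy talks to a helper subprocess over a pipe and waits for each reply with a deadline. If nothing arrives in time (for example because the helper process died), the read is abandoned and Timeout is raised, which produces the "Problem with handler, treating as timeout" warning above. Below is a rough, illustrative sketch of that deadline-bounded read pattern; the names and transport are assumptions, not vdsm's actual code.

# Rough sketch (illustrative only) of a deadline-bounded read like the one
# in the traceback: poll the fd until the requested number of bytes arrive
# or the deadline passes, raising Timeout on expiry or on EOF.
import os
import select
import time

class Timeout(Exception):
    pass

def recv_all(fd, length, timeout):
    deadline = time.time() + timeout
    data = b""
    while len(data) < length:
        time_left = deadline - time.time()
        if time_left <= 0:
            raise Timeout()
        readable, _, _ = select.select([fd], [], [], time_left)
        if not readable:
            raise Timeout()
        chunk = os.read(fd, length - len(data))
        if not chunk:  # EOF: the peer (helper process) went away
            raise Timeout()
        data += chunk
    return data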

Actual results:
The NFS storage domain creation fails as described in the steps above.

Expected results:
The NFS storage domain should be created successfully.

Comment 1 EricLee 2012-10-25 08:34:54 UTC
Created attachment 633228 [details]
VDSM client host /var/log/vdsm/vdsm.log

Comment 13 EricLee 2012-11-07 02:30:07 UTC
Please see the next two comments.

Comment 14 EricLee 2012-11-07 02:31:13 UTC
Created attachment 639763 [details]
RHEVM /var/log/ovirt-engine/engine.log using vdsm-41

Comment 15 EricLee 2012-11-07 02:31:57 UTC
Created attachment 639764 [details]
VDSM client host /var/log/vdsm/vdsm.log using vdsm-41

Comment 22 Ayal Baron 2012-11-07 14:42:54 UTC
*** Bug 873855 has been marked as a duplicate of this bug. ***

Comment 23 Ayal Baron 2012-11-07 14:43:59 UTC
From bug 873855: "Description of problem:
VDSM doesn't use recommended alignment and buffer size and this can cause a segmentation fault in new kernels"
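
For context, the quoted root cause appears to concern direct I/O: when a file is opened with O_DIRECT, the userspace buffer, the offset, and the transfer size generally must be aligned to the device's logical block size, and getting that wrong is what the quote blames for the failures on newer kernels. As a minimal sketch only (not vdsm's actual implementation; the block size here is an assumption), one way to perform such a read from Python without managing buffer alignment by hand is to delegate it to dd with iflag=direct:

# Minimal sketch, not vdsm's code: delegate a direct-I/O read to dd, which
# allocates a suitably aligned buffer itself. block_size must be a multiple
# of the device's logical block size (commonly 512 bytes).
import subprocess

def direct_read(path, block_size=4096, count=1):
    cmd = ["dd", "if=%s" % path, "iflag=direct",
           "bs=%d" % block_size, "count=%d" % count]
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)
    out, err = proc.communicate()
    if proc.returncode != 0:
        raise OSError("direct read of %s failed: %s" % (path, err))
    return out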

Comment 25 EricLee 2012-11-22 05:17:53 UTC
I cannot reproduce this bug with the following packages:
# rpm -qa libvirt qemu-kvm-rhev kernel vdsm
qemu-kvm-rhev-0.12.1.2-2.330.el6.x86_64
kernel-2.6.32-338.el6.x86_64
libvirt-0.10.2-9.el6.x86_64
vdsm-4.10.2-1.0.el6.x86_64
And I can now create an NFS storage domain successfully (with V3 as the default NFS version in the RHEVM GUI).

Comment 26 Ayal Baron 2013-01-03 20:58:27 UTC
Change-Id: Iadea310039b30073197b7ad90afb930c460bda17

Comment 27 Mike Burns 2013-01-17 13:44:41 UTC
Created attachment 680207 [details]
vdsm log

Testing with the RHEV-H 6.4 Snapshot 4 build, I am unable to either create or attach NFS export domains.

The relevant section of the vdsm log is attached.

Versions:

rhev-hypervisor6-6.4-20130116.3.0.el6
vdsm-4.10.2-1.1.el6.x86_64
ovirt-node-2.5.0-15.el6.noarch

Comment 28 gaoshang 2013-01-21 08:59:51 UTC
This bug blocks RHEV-H + RHEVM (vdsm/rhevm mode) host/guest association acceptance testing and needs to be fixed ASAP.

Comment 29 haiyang,dong 2013-01-23 03:06:40 UTC
Test versions:
rhev-hypervisor6-6.4-20130116.3.0.auto1412.el6.iso
vdsm-4.10.2-1.2.el6.x86_64

After adding the rpcbind service to this auto RHEV-H 6.4 build and registering the RHEV-H host into RHEVM SI, NFS storage domains can now be attached successfully. For the "Miss /etc/init.d/rpcbind service in rhev-h 6.4" issue, see bug 902677.

Comment 30 Allon Mureinik 2013-01-23 10:10:37 UTC
According to the comments above, this seems to have been fixed.
Moving to ON_QA for verification.

Comment 31 gaoshang 2013-01-25 12:53:44 UTC
This bug still exists in RHEV-H 6.4 Snapshot 5 [1]: creating NFS storage in RHEVM fails, and the /etc/init.d/rpcbind service cannot be found.

[root@hp-dc5850-02 ~]# service rpcbind status
rpcbind: unrecognized service


Our host/guest association acceptance testing is still blocked, and virt-who errata testing needs to be completed before EOB on Jan 30, so we hope this can be fixed ASAP.

[1]http://download.englab.nay.redhat.com/rel-eng/RHEL6.4-Snapshot-5.0/6.4/RHEVH/

Comment 34 Yaniv Kaul 2013-01-28 06:44:44 UTC

*** This bug has been marked as a duplicate of bug 902677 ***

