Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1583899

Summary: ConnectStoragePoolVDS failed: Cannot find master domain
Product: [oVirt] vdsm
Reporter: 曾浩 <754267513>
Component: Core
Assignee: Fred Rolland <frolland>
Status: CLOSED INSUFFICIENT_DATA
QA Contact: Elad <ebenahar>
Severity: medium
Priority: unspecified
Version: 4.20.19
CC: 754267513, bugs, tnisan
Target Milestone: ovirt-4.3.5
Flags: rule-engine: ovirt-4.3+
Target Release: ---
Hardware: x86_64
OS: Unspecified
Whiteboard:
Fixed In Version: ---
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: ---
Environment:
Last Closed: 2019-04-22 10:26:31 UTC
Type: Bug
Regression: ---
Mount Type: ---
oVirt Team: Storage
Embargoed:

Attachments: new node vdsm logs

Description 曾浩 2018-05-30 02:43:25 UTC
Created attachment 1445609 [details]
new node vdsm logs

Description of problem:

Shared storage over iSCSI.
One node in the cluster is already up and its status is OK.
When I try to add a new node, activating it fails.


Version-Release number of selected component (if applicable):

Mar 11 06:03:22 Installed: ovirt-release42-4.2.1.1-1.el7.centos.noarch
Mar 11 06:06:26 Installed: python-ovirt-engine-sdk4-4.2.4-2.el7.centos.x86_64
Mar 11 06:06:58 Installed: ovirt-engine-sdk-python-3.6.9.1-1.el7.noarch
Mar 11 06:07:08 Installed: ovirt-setup-lib-1.1.4-1.el7.centos.noarch
Mar 11 06:07:09 Installed: ovirt-host-deploy-1.7.2-1.el7.centos.noarch
Mar 11 06:07:52 Installed: ovirt-node-ng-nodectl-4.2.0-0.20180214.0.el7.noarch
Mar 11 06:10:23 Installed: ovirt-vmconsole-1.0.4-1.el7.noarch
Mar 11 06:10:46 Installed: ovirt-vmconsole-host-1.0.4-1.el7.noarch
Mar 11 06:10:57 Installed: ovirt-imageio-common-1.2.1-0.el7.centos.noarch
Mar 11 06:12:12 Installed: ovirt-imageio-daemon-1.2.1-0.el7.centos.noarch
Mar 11 06:12:13 Installed: ovirt-host-dependencies-4.2.1-1.el7.centos.x86_64
Mar 11 06:12:14 Installed: ovirt-provider-ovn-driver-1.2.5-1.el7.centos.noarch
Mar 11 06:12:17 Installed: ovirt-hosted-engine-ha-2.2.4-1.el7.centos.noarch
Mar 11 06:12:23 Installed: cockpit-ovirt-dashboard-0.11.11-0.1.el7.centos.noarch
Mar 11 06:12:26 Installed: ovirt-hosted-engine-setup-2.2.9-1.el7.centos.noarch
Mar 11 06:12:26 Installed: ovirt-host-4.2.1-1.el7.centos.x86_64
Mar 11 06:12:33 Installed: ovirt-release-host-node-4.2.1.1-1.el7.centos.noarch
Mar 11 06:12:33 Installed: ovirt-node-ng-image-update-placeholder-4.2.1.1-1.el7.centos.noarch


How reproducible:


Steps to Reproduce:
1. Install a new node and add it as a new host in ovirt-engine
2. Activate the new node
3. Activation fails; errors appear in the ovirt-engine log

Actual results:


Expected results:


Additional info:

2018-05-30 09:56:43,413+0800 INFO  (jsonrpc/6) [storage.Multipath] Resizing map '36f86eee1002f3e7112345678000103e8' (map_size=16384, slave_size=0) (multipath:119)
2018-05-30 09:56:43,432+0800 ERROR (jsonrpc/6) [storage.Multipath] Could not resize device 36f86eee1002f3e7112345678000103e8 (multipath:98)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/multipath.py", line 96, in resize_devices
    _resize_if_needed(guid)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/multipath.py", line 120, in _resize_if_needed
    supervdsm.getProxy().resizeMap(name)
  File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 55, in __call__
    return callMethod()
  File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 53, in <lambda>
    **kwargs)
  File "<string>", line 2, in resizeMap
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod
    raise convert_to_error(kind, result)
Error: Resizing map 'dm-13' failed: out='fail\n' err=''


See the attached vdsm logs for more details.
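For reference, the resize that vdsm attempted in the traceback above can be exercised manually with multipathd; a hedged sketch as root on the affected node (the WWID is the one from this log, adjust for your system):

```
# Inspect the multipath map and the state of its path devices
multipath -ll 36f86eee1002f3e7112345678000103e8

# Ask multipathd to resize the map, as vdsm's resizeMap ultimately does;
# a healthy reply is "ok" -- the failure above came back as "fail"
multipathd resize map 36f86eee1002f3e7112345678000103e8
```

If `multipath -ll` shows stale or faulty paths, that would explain multipathd refusing the resize.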

Comment 1 曾浩 2018-06-04 06:30:56 UTC
The iSCSI storage LUN was grown by 10TB, from 10TB to 20TB.
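After growing an iSCSI LUN like this, the new size normally has to propagate through each storage layer; when the automatic resize fails as in this bug, a hedged outline of the usual manual sequence on the host is (device names taken from this log, adjust for your system):

```
# Re-read LUN sizes on all logged-in iSCSI sessions
iscsiadm -m session --rescan

# Resize the multipath map on top of the rescanned paths
multipathd resize map 36f86eee1002f3e7112345678000103e8

# If the LUN backs an LVM-based storage domain, grow the PV as well
pvresize /dev/mapper/36f86eee1002f3e7112345678000103e8
```

This is a generic troubleshooting outline, not a confirmed fix for this report; whether it applies depends on why multipathd returned "fail" in the first place.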

Comment 2 Sandro Bonazzola 2019-01-28 09:36:54 UTC
This bug has not been marked as blocker for oVirt 4.3.0.
Since we are releasing it tomorrow, January 29th, this bug has been re-targeted to 4.3.1.

Comment 3 Fred Rolland 2019-03-17 16:38:48 UTC
Hi,

What do you mean by "the node is failed"?


Can you provide a simple reproducing flow?

Comment 4 Fred Rolland 2019-04-22 10:26:31 UTC
No reproducing scenario was provided, and it is not clear what the actual problem is.

Closing for now, as we don't have enough information to investigate.

Please feel free to reopen with the information.

Comment 5 Red Hat Bugzilla 2023-09-14 04:29:09 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days