Bug 1697706 - Null pointer exception observed while setting up remote data sync on the storage domain leads
Summary: Null pointer exception observed while setting up remote data sync on the stor...
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: BLL.Gluster
Version: 4.3.2.1
Hardware: x86_64
OS: Linux
Severity: high
Priority: urgent
Target Milestone: ovirt-4.3.4
Target Release: 4.3.4
Assignee: Sahina Bose
QA Contact: bipin
URL:
Whiteboard:
Depends On:
Blocks: 1697704
 
Reported: 2019-04-09 03:34 UTC by SATHEESARAN
Modified: 2019-06-11 06:25 UTC
CC: 7 users

Fixed In Version: ovirt-engine-4.3.4
Doc Type: Bug Fix
Doc Text:
Cause: To determine the gluster volume ID for a storage domain that has none associated, a query matches the brick address against the host's IPv4 address. Consequence: This check fails with a NullPointerException when no IPv4 address is assigned to any interface on the host. Fix: The check now handles null addresses as well as IPv6 addresses. Result: Remote data sync setup works as expected.
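The failure mode described in the Doc Text can be illustrated with a minimal, self-contained Java sketch. This is not the actual oVirt code; the Iface class and method names are hypothetical stand-ins. A stream anyMatch that dereferences a possibly-null IPv4 address throws an NPE, while a null-safe variant skips null addresses and also compares the IPv6 address.

```java
import java.util.Arrays;
import java.util.List;

public class HostBrickMatch {
    // Hypothetical stand-in for the engine's host network interface model.
    static class Iface {
        final String ipv4Address; // may be null on an IPv6-only interface
        final String ipv6Address; // may be null on an IPv4-only interface
        Iface(String v4, String v6) { ipv4Address = v4; ipv6Address = v6; }
    }

    // NPE-prone form: assumes every interface has a non-null IPv4 address.
    static boolean matchesBuggy(List<Iface> ifaces, String brickHost) {
        return ifaces.stream()
                .anyMatch(i -> i.ipv4Address.equals(brickHost)); // NPE when ipv4Address is null
    }

    // Fixed form: compare with the literal on the left so nulls are skipped,
    // and also consider the IPv6 address.
    static boolean matchesFixed(List<Iface> ifaces, String brickHost) {
        return ifaces.stream()
                .anyMatch(i -> brickHost.equals(i.ipv4Address)
                            || brickHost.equals(i.ipv6Address));
    }

    public static void main(String[] args) {
        List<Iface> ifaces = Arrays.asList(
                new Iface(null, "fd00::10"),    // IPv6-only interface: the problematic case
                new Iface("10.0.0.5", null));

        try {
            matchesBuggy(ifaces, "10.0.0.5");
        } catch (NullPointerException e) {
            System.out.println("buggy check: NPE");
        }
        System.out.println("fixed check (IPv4): " + matchesFixed(ifaces, "10.0.0.5"));
        System.out.println("fixed check (IPv6): " + matchesFixed(ifaces, "fd00::10"));
    }
}
```

The sketch mirrors the shape of the anyMatch lambda in the stack trace; the actual fix is in the linked gerrit patches.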
Clone Of: 1697704
Environment:
Last Closed: 2019-06-11 06:25:44 UTC
oVirt Team: Gluster
Embargoed:
pm-rhel: ovirt-4.3+


Attachments (Terms of Use)
engine.log (18.31 MB, application/octet-stream)
2019-04-09 03:37 UTC, SATHEESARAN


Links
System ID Private Priority Status Summary Last Updated
oVirt gerrit 99388 0 master MERGED gluster: Fix NPE and handle IPv6 for host check 2019-04-16 06:31:50 UTC
oVirt gerrit 99468 0 ovirt-engine-4.3 MERGED gluster: Fix NPE and handle IPv6 for host check 2019-04-23 12:32:07 UTC

Description SATHEESARAN 2019-04-09 03:34:28 UTC
Description of problem:
-----------------------
While setting up remote data sync, an NPE is observed in the logs and the 'Remote data sync setup' window is stuck indefinitely.

Version-Release number of selected component (if applicable):
--------------------------------------------------------------
RHV 4.3.3
RHHI-V 1.6

How reproducible:
-----------------
Always ( 100% )

Steps to Reproduce:
-------------------
1. Complete HC deployment with 3 nodes
2. Create a geo-rep session from the CLI on one of the nodes to the remote ( secondary ) site
3. Sync the geo-rep session for the corresponding volume in RHV Manager UI -> Storage -> Volumes
4. On the corresponding storage domain, click on 'Remote data sync setup'

Actual results:
---------------
Null Pointer Exception observed in the logs, 'remote data sync setup' window is stuck indefinitely

Expected results:
-----------------
No NPE; remote data sync setup completes successfully.


Additional info:

--- Additional comment from SATHEESARAN on 2019-04-09 03:33:05 UTC ---

2019-04-09 08:10:00,382+05 ERROR [org.ovirt.engine.core.bll.gluster.GetGeoRepSessionsForStorageDomainQuery] (default task-157) [38a90e40-f083-45e6-b145-b80f24ed0b75] Query 'GetGeoRepSessionsForStorageDomainQuery
' failed: null
2019-04-09 08:10:00,382+05 ERROR [org.ovirt.engine.core.bll.gluster.GetGeoRepSessionsForStorageDomainQuery] (default task-157) [38a90e40-f083-45e6-b145-b80f24ed0b75] Exception: java.lang.NullPointerException
        at org.ovirt.engine.core.bll.gluster.GetGeoRepSessionsForStorageDomainQuery.lambda$null$0(GetGeoRepSessionsForStorageDomainQuery.java:83) [bll.jar:]
        at java.util.stream.MatchOps$1MatchSink.accept(MatchOps.java:90) [rt.jar:1.8.0_201]
        at java.util.ArrayList$ArrayListSpliterator.tryAdvance(ArrayList.java:1359) [rt.jar:1.8.0_201]
        at java.util.stream.ReferencePipeline.forEachWithCancel(ReferencePipeline.java:126) [rt.jar:1.8.0_201]
        at java.util.stream.AbstractPipeline.copyIntoWithCancel(AbstractPipeline.java:498) [rt.jar:1.8.0_201]
        at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:485) [rt.jar:1.8.0_201]
        at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) [rt.jar:1.8.0_201]
        at java.util.stream.MatchOps$MatchOp.evaluateSequential(MatchOps.java:230) [rt.jar:1.8.0_201]
        at java.util.stream.MatchOps$MatchOp.evaluateSequential(MatchOps.java:196) [rt.jar:1.8.0_201]
        at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) [rt.jar:1.8.0_201]
        at java.util.stream.ReferencePipeline.anyMatch(ReferencePipeline.java:449) [rt.jar:1.8.0_201]
        at org.ovirt.engine.core.bll.gluster.GetGeoRepSessionsForStorageDomainQuery.lambda$executeQueryCommand$1(GetGeoRepSessionsForStorageDomainQuery.java:83) [bll.jar:]
        at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:174) [rt.jar:1.8.0_201]
        at java.util.ArrayList$ArrayListSpliterator.tryAdvance(ArrayList.java:1359) [rt.jar:1.8.0_201]
        at java.util.stream.ReferencePipeline.forEachWithCancel(ReferencePipeline.java:126) [rt.jar:1.8.0_201]
        at java.util.stream.AbstractPipeline.copyIntoWithCancel(AbstractPipeline.java:498) [rt.jar:1.8.0_201]
        at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:485) [rt.jar:1.8.0_201]
        at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) [rt.jar:1.8.0_201]
        at java.util.stream.FindOps$FindOp.evaluateSequential(FindOps.java:152) [rt.jar:1.8.0_201]

Comment 1 SATHEESARAN 2019-04-09 03:37:36 UTC
Created attachment 1553777 [details]
engine.log

Comment 2 SATHEESARAN 2019-04-09 10:35:43 UTC
This issue occurs when setting up 'remote data sync setup' on a storage domain
that was automatically created during HC deployment via cockpit; such a storage
domain does not have the relevant gluster volume's UUID linked to it.

Comment 3 bipin 2019-06-07 17:15:42 UTC
Tested with ovirt-engine-4.3.4.3-0.1.el7.noarch and observed no NPE. Moving the bug to verified.

Steps:
=====
1. Deploy HC in 3 POD
2. Create a geo-replication session between the source and destination
3. Click on the "sync" button for the geo-replication volume
4. On the corresponding storage domain, click on "Remote Data Sync Setup"

Comment 4 Sandro Bonazzola 2019-06-11 06:25:44 UTC
This bugzilla is included in oVirt 4.3.4 release, published on June 11th 2019.

Since the problem described in this bug report should be
resolved in the oVirt 4.3.4 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.

