Bug 2102397
Summary: | OpenShift Regional Disaster Recovery with Advanced Cluster Management | |
---|---|---|---
Product: | [Red Hat Storage] Red Hat OpenShift Data Foundation | Reporter: | rskruhak
Component: | odf-dr | Assignee: | umanga <uchapaga>
odf-dr sub component: | multicluster-orchestrator | QA Contact: | krishnaram Karthick <kramdoss>
Status: | CLOSED DUPLICATE | Docs Contact: |
Severity: | urgent | |
Priority: | unspecified | CC: | akandath, hnallurv, jdobson, kseeger, lsantann, madam, muagarwa, ncho, ocs-bugs, odf-bz-bot, olakra, srangana, uchapaga
Version: | 4.10 | |
Target Milestone: | --- | |
Target Release: | --- | |
Hardware: | All | |
OS: | Unspecified | |
Whiteboard: | | |
Fixed In Version: | | Doc Type: | Known Issue
Doc Text: | .Ceph does not recognize global IP assigned by Globalnet <br> Ceph does not recognize the global IP assigned by Globalnet, so a disaster recovery solution cannot be configured between clusters with overlapping service CIDRs even when Globalnet is used. As a result, disaster recovery does not work when the service CIDRs overlap. | |
Story Points: | --- | |
Clone Of: | | Environment: |
Last Closed: | 2022-10-29 03:39:11 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Bug Depends On: | | |
Bug Blocks: | 2094357 | |
Description (rskruhak, 2022-06-29 20:50:00 UTC)
This is similar to the following BZ: https://bugzilla.redhat.com/show_bug.cgi?id=2100751. Re-assigning to the ODF team to investigate. We are also seeing the following on the rook-ceph-rbd-mirror pod on both clusters: "stderr Failed to find physical volume "/dev/sdb"". The log is below:

    [2022-06-28 17:20:25,282][ceph_volume.main][INFO ] Running command: ceph-volume --log-path /var/log/ceph/ocs-deviceset-drstorage-0-data-XXXX raw prepare --bluestore --data /mnt/ocs-deviceset-drstorage-0-data-XXXX
    [2022-06-28 17:20:25,283][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk -plno KNAME,NAME,TYPE
    [2022-06-28 17:20:25,292][ceph_volume.process][INFO ] stdout /dev/loop0 /dev/loop0 loop
    [2022-06-28 17:20:25,292][ceph_volume.process][INFO ] stdout /dev/sda /dev/sda disk
    [2022-06-28 17:20:25,292][ceph_volume.process][INFO ] stdout /dev/sda1 /dev/sda1 part
    [2022-06-28 17:20:25,292][ceph_volume.process][INFO ] stdout /dev/sda2 /dev/sda2 part
    [2022-06-28 17:20:25,293][ceph_volume.process][INFO ] stdout /dev/sda3 /dev/sda3 part
    [2022-06-28 17:20:25,293][ceph_volume.process][INFO ] stdout /dev/sda4 /dev/sda4 part
    [2022-06-28 17:20:25,293][ceph_volume.process][INFO ] stdout /dev/sdb /dev/sdb disk
    [2022-06-28 17:20:25,301][ceph_volume.process][INFO ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a --units=b --nosuffix -S lv_path=/mnt/ocs-deviceset-drstorage-0-data-XXXX -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
    [2022-06-28 17:20:25,443][ceph_volume.process][INFO ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /mnt/ocs-deviceset-drstorage-0-data-XXXX
    [2022-06-28 17:20:25,452][ceph_volume.process][INFO ] stdout NAME="sdb" KNAME="sdb" MAJ:MIN="8:16" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="Virtual disk " SIZE="500G" STATE="running" OWNER="root" GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="1" SCHED="mq-deadline" TYPE="disk" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0" PKNAME="" PARTLABEL=""
    [2022-06-28 17:20:25,453][ceph_volume.process][INFO ] Running command: /usr/sbin/blkid -c /dev/null -p /mnt/ocs-deviceset-drstorage-0-data-XXXX
    [2022-06-28 17:20:25,460][ceph_volume.process][INFO ] Running command: /usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /mnt/ocs-deviceset-drstorage-0-data-XXXX
    [2022-06-28 17:20:25,598][ceph_volume.process][INFO ] stderr Failed to find physical volume "/dev/sdb".
    [2022-06-28 17:20:25,599][ceph_volume.util.disk][INFO ] opening device /mnt/ocs-deviceset-drstorage-0-data-XXXX to check for BlueStore label
    [2022-06-28 17:20:25,599][ceph_volume.util.disk][INFO ] opening device /mnt/ocs-deviceset-drstorage-0-data-XXXX to check for BlueStore label
    [2022-06-28 17:20:25,600][ceph_volume.process][INFO ] Running command: /usr/sbin/udevadm info --query=property /mnt/ocs-deviceset-drstorage-0-data-XXXX
    [2022-06-28 17:20:25,716][ceph_volume.process][INFO ] stderr Unknown device, --name=, --path=, or absolute path in /dev/ or /sys expected.

One thing to add: we are using Globalnet with the Submariner add-on in ACM because our clusters have overlapping CIDRs.

There seem to be multiple issues in this setup. The issue with using Globalnet is tracked here: https://bugzilla.redhat.com/show_bug.cgi?id=2104971.
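For anyone reproducing this, a quick way to confirm the overlapping service CIDRs mentioned above is to compare the cluster network configuration on both managed clusters. This is a minimal sketch, not taken from the report: the kubeconfig context names `cluster1`/`cluster2` are placeholders, and the Globalnet resources available can vary with the Submariner version.

```sh
# Print the service CIDR(s) of each managed cluster; overlapping ranges are what
# force Globalnet on (context names are placeholders, adjust to your kubeconfig).
oc --context cluster1 get network.config.openshift.io cluster \
  -o jsonpath='{.spec.serviceNetwork}{"\n"}'
oc --context cluster2 get network.config.openshift.io cluster \
  -o jsonpath='{.spec.serviceNetwork}{"\n"}'

# With Globalnet enabled, exported services are assigned global IPs; per the doc
# text above, Ceph does not recognize these global IPs (tracked in BZ 2104971).
oc --context cluster1 get globalingressips.submariner.io -A
```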
We'll document a workaround in that BZ (2104971). The issue with RBACs in the token-exchange pods was fixed in version 4.11, so it should no longer be an issue. Even in 4.10, without the RBACs, there is no impact on functionality; only status updates are affected.

*** Bug 2100751 has been marked as a duplicate of this bug. ***

*** Bug 2072996 has been marked as a duplicate of this bug. ***

Is there a way I can get added to the Bugzilla for Globalnet? https://bugzilla.redhat.com/show_bug.cgi?id=2104971

Please fill in the doc text.

*** This bug has been marked as a duplicate of bug 2104971 ***