Bug 1637379 - Portmap entries showing stale brick entries when bricks are down
Summary: Portmap entries showing stale brick entries when bricks are down
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterd
Version: rhgs-3.4
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: RHGS 3.4.z Batch Update 3
Assignee: Mohit Agrawal
QA Contact: Upasana
URL:
Whiteboard:
Depends On:
Blocks: 1646892
 
Reported: 2018-10-09 07:34 UTC by Upasana
Modified: 2019-02-04 07:41 UTC
CC List: 9 users

Fixed In Version: glusterfs-3.12.2-33
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Cloned As: 1646892
Environment:
Last Closed: 2019-02-04 07:41:25 UTC
Embargoed:




Links:
Red Hat Product Errata RHBA-2019:0263 (last updated 2019-02-04 07:41:38 UTC)

Description Upasana 2018-10-09 07:34:11 UTC
Description of problem:
=======================
With brick multiplexing enabled, if we kill the brick process on a node and take a glusterd statedump, the portmap entries show stale brick entries.

Version-Release number of selected component (if applicable):
===========================================================
glusterfs-server-3.12.2-18.1.el7rhgs.x86_64


How reproducible:
=================
Inconsistent (2/6)

Steps to Reproduce:
====================
1. Enabled brick multiplexing, created about 34 volumes, and started them
2. Took a statedump of glusterd
3. Killed the brick PID of b1 using kill -9
4. Took a statedump of glusterd again (a shell sketch of these steps follows)
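
A minimal shell sketch of the above, assuming a 3-node replica setup; the host placeholders, volume names, and brick paths are illustrative, and glusterd writes a statedump to /var/run/gluster when it receives SIGUSR1:

# enable brick multiplexing cluster-wide, then create and start the volumes
gluster volume set all cluster.brick-multiplex on
for i in $(seq 1 34); do
    gluster volume create vol$i replica 3 <host1>:/gluster/brick1/vol$i \
        <host2>:/gluster/brick1/vol$i <host3>:/gluster/brick1/vol$i force
    gluster volume start vol$i
done

# dump glusterd state, kill one brick, then dump again
kill -SIGUSR1 $(pidof glusterd)
kill -9 <brick-pid>          # brick PID taken from 'gluster volume status'
kill -SIGUSR1 $(pidof glusterd)
grep pmap /var/run/gluster/glusterdump.$(pidof glusterd).dump.*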


Actual results:
===============
[root@dhcp35-36 gluster]# cat glusterdump.5096.dump.1539067684|grep pmap
glusterd.pmap_port=49152
glusterd.pmap[49152].type=4
glusterd.pmap[49152].brickname=/gluster/brick1/v2 /gluster/brick2/vol3 /gluster/brick2/vol4 /gluster/brick2/vol5 /gluster/brick2/vol6 /gluster/brick1/vol_-1-1 /gluster/brick2/vol_-1-2 /gluster/brick3/vol_-1-3 /gluster/brick1/vol_-10-1 /gluster/brick2/vol_-10-2 /gluster/brick3/vol_-10-3 /gluster/brick1/vol_-2-1 /gluster/brick2/vol_-2-2 /gluster/brick3/vol_-2-3 /gluster/brick1/vol_-3-1 /gluster/brick2/vol_-3-2 /gluster/brick3/vol_-3-3 /gluster/brick1/vol_-4-1 /gluster/brick2/vol_-4-2 /gluster/brick3/vol_-4-3 /gluster/brick1/vol_-5-1 /gluster/brick2/vol_-5-2 /gluster/brick3/vol_-5-3 /gluster/brick1/vol_-6-1 /gluster/brick2/vol_-6-2 /gluster/brick3/vol_-6-3 /gluster/brick1/vol_-7-1 /gluster/brick2/vol_-7-2 /gluster/brick3/vol_-7-3 /gluster/brick1/vol_-8-1 /gluster/brick2/vol_-8-2 /gluster/brick3/vol_-8-3 /gluster/brick1/vol_-9-1 /gluster/brick2/vol_-9-2
glusterd.pmap_port=49153
glusterd.pmap[49153].type=0
glusterd.pmap[49153].brickname=(null)


Expected results:
=================
The statedump should now show portmap details like the following:

glusterd.pmap_port=49152
glusterd.pmap[49152].type=0
glusterd.pmap[49152].brickname=(null)


Additional info:
===============
Further performed the following two steps:
1. Created a new volume and started it; a statedump showed it was assigned a new port:


[root@dhcp35-36 gluster]# cat glusterdump.5096.dump.1539068690|grep pmap
glusterd.pmap_port=49152
glusterd.pmap[49152].type=4
glusterd.pmap[49152].brickname=/gluster/brick1/v2 /gluster/brick2/vol3 /gluster/brick2/vol4 /gluster/brick2/vol5 /gluster/brick2/vol6 /gluster/brick1/vol_-1-1 /gluster/brick2/vol_-1-2 /gluster/brick3/vol_-1-3 /gluster/brick1/vol_-10-1 /gluster/brick2/vol_-10-2 /gluster/brick3/vol_-10-3 /gluster/brick1/vol_-2-1 /gluster/brick2/vol_-2-2 /gluster/brick3/vol_-2-3 /gluster/brick1/vol_-3-1 /gluster/brick2/vol_-3-2 /gluster/brick3/vol_-3-3 /gluster/brick1/vol_-4-1 /gluster/brick2/vol_-4-2 /gluster/brick3/vol_-4-3 /gluster/brick1/vol_-5-1 /gluster/brick2/vol_-5-2 /gluster/brick3/vol_-5-3 /gluster/brick1/vol_-6-1 /gluster/brick2/vol_-6-2 /gluster/brick3/vol_-6-3 /gluster/brick1/vol_-7-1 /gluster/brick2/vol_-7-2 /gluster/brick3/vol_-7-3 /gluster/brick1/vol_-8-1 /gluster/brick2/vol_-8-2 /gluster/brick3/vol_-8-3 /gluster/brick1/vol_-9-1 /gluster/brick2/vol_-9-2
glusterd.pmap_port=49153
glusterd.pmap[49153].type=4
glusterd.pmap[49153].brickname=/gluster/brick1/v21


Stopped and started the volume and took a statedump; it was the same as above.

2. Started one of those stopped volumes using force and took a statedump:

glusterd.pmap_port=49152
glusterd.pmap[49152].type=4
glusterd.pmap[49152].brickname=/gluster/brick1/v2 /gluster/brick2/vol3 /gluster/brick2/vol4 /gluster/brick2/vol5 /gluster/brick2/vol6 /gluster/brick1/vol_-1-1 /gluster/brick2/vol_-1-2 /gluster/brick3/vol_-1-3 /gluster/brick1/vol_-10-1 /gluster/brick2/vol_-10-2 /gluster/brick3/vol_-10-3 /gluster/brick1/vol_-2-1 /gluster/brick2/vol_-2-2 /gluster/brick3/vol_-2-3 /gluster/brick1/vol_-3-1 /gluster/brick2/vol_-3-2 /gluster/brick3/vol_-3-3 /gluster/brick1/vol_-4-1 /gluster/brick2/vol_-4-2 /gluster/brick3/vol_-4-3 /gluster/brick1/vol_-5-1 /gluster/brick2/vol_-5-2 /gluster/brick3/vol_-5-3 /gluster/brick1/vol_-6-1 /gluster/brick2/vol_-6-2 /gluster/brick3/vol_-6-3 /gluster/brick1/vol_-7-1 /gluster/brick2/vol_-7-2 /gluster/brick3/vol_-7-3 /gluster/brick1/vol_-8-1 /gluster/brick2/vol_-8-2 /gluster/brick3/vol_-8-3 /gluster/brick1/vol_-9-1 /gluster/brick2/vol_-9-2
glusterd.pmap_port=49153
glusterd.pmap[49153].type=0
glusterd.pmap[49153].brickname=(null)
glusterd.pmap_port=49154
glusterd.pmap[49154].type=4
glusterd.pmap[49154].brickname=/gluster/brick1/v21 /gluster/brick2/vol_-9-2
[root@dhcp35-36 gluster]# 

The brick path /gluster/brick2/vol_-9-2 appears under both ports 49152 and 49154; see the sketch below for spotting such duplicates.
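
A quick way to flag brick paths registered under more than one pmap port in a glusterd statedump (a minimal sketch; point it at the latest dump file, shown here with a placeholder timestamp):

grep 'brickname=' glusterdump.5096.dump.<timestamp> \
    | sed 's/.*brickname=//' | tr ' ' '\n' \
    | grep -v '(null)' | sort | uniq -d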



[root@dhcp35-36 gluster]# gluster vol info
 
Volume Name: replica-vol10
Type: Replicate
Volume ID: 3dc07b81-b26a-4600-9ea4-8a1aa3412bd8
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.35.36:/gluster/brick1/v2
Brick2: 10.70.35.78:/gluster/brick1/v2
Brick3: 10.70.35.192:/gluster/brick1/v2
Options Reconfigured:
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
cluster.max-bricks-per-process: 0
cluster.brick-multiplex: true
 
Volume Name: replica-vol11
Type: Replicate
Volume ID: 03dcf4c6-9418-4a0b-89d7-d730cafe4c43
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.35.36:/gluster/brick1/v21
Brick2: 10.70.35.78:/gluster/brick1/v21
Brick3: 10.70.35.192:/gluster/brick1/v21
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.max-bricks-per-process: 0
cluster.brick-multiplex: true
 
Volume Name: replica-vol3
Type: Replicate
Volume ID: dc41a775-d65b-40d7-a9d5-c30a6405d648
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.35.36:/gluster/brick2/vol3
Brick2: 10.70.35.78:/gluster/brick2/vol3
Brick3: 10.70.35.192:/gluster/brick2/vol3
Options Reconfigured:
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
cluster.max-bricks-per-process: 0
cluster.brick-multiplex: true
 
Volume Name: replica-vol4
Type: Replicate
Volume ID: 2762fa7a-21df-4e94-9f83-dfd0ec96967a
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.35.36:/gluster/brick2/vol4
Brick2: 10.70.35.78:/gluster/brick2/vol4
Brick3: 10.70.35.192:/gluster/brick2/vol4
Options Reconfigured:
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
cluster.max-bricks-per-process: 0
cluster.brick-multiplex: true
 
Volume Name: replica-vol5
Type: Replicate
Volume ID: 3fe31981-2701-42bc-b174-40f26a028994
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.35.36:/gluster/brick2/vol5
Brick2: 10.70.35.78:/gluster/brick2/vol5
Brick3: 10.70.35.192:/gluster/brick2/vol5
Options Reconfigured:
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
cluster.max-bricks-per-process: 0
cluster.brick-multiplex: true
 
Volume Name: replica-vol6
Type: Replicate
Volume ID: dded6e4f-9240-41fd-bb1f-55d32404ec97
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.35.36:/gluster/brick2/vol6
Brick2: 10.70.35.78:/gluster/brick2/vol6
Brick3: 10.70.35.192:/gluster/brick2/vol6
Options Reconfigured:
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
cluster.max-bricks-per-process: 0
cluster.brick-multiplex: true
 
Volume Name: vol_-1-1
Type: Replicate
Volume ID: c8642846-f469-4890-aa44-836ad113f8f6
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.35.224:/gluster/brick1/vol_-1-1
Brick2: 10.70.35.78:/gluster/brick1/vol_-1-1
Brick3: 10.70.35.36:/gluster/brick1/vol_-1-1
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.max-bricks-per-process: 0
cluster.brick-multiplex: true
 
Volume Name: vol_-1-2
Type: Replicate
Volume ID: ff010649-eb46-4f6b-97ef-f1b5fa8fcf45
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.35.224:/gluster/brick2/vol_-1-2
Brick2: 10.70.35.78:/gluster/brick2/vol_-1-2
Brick3: 10.70.35.36:/gluster/brick2/vol_-1-2
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.max-bricks-per-process: 0
cluster.brick-multiplex: true
 
Volume Name: vol_-1-3
Type: Replicate
Volume ID: e1d8f22b-5f95-4c0d-aa72-a6f0198c3e9f
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.35.224:/gluster/brick3/vol_-1-3
Brick2: 10.70.35.78:/gluster/brick3/vol_-1-3
Brick3: 10.70.35.36:/gluster/brick3/vol_-1-3
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.max-bricks-per-process: 0
cluster.brick-multiplex: true
 
Volume Name: vol_-10-1
Type: Replicate
Volume ID: c36876a9-8a97-4062-9ed6-7ae01b9ff8a8
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.35.224:/gluster/brick1/vol_-10-1
Brick2: 10.70.35.78:/gluster/brick1/vol_-10-1
Brick3: 10.70.35.36:/gluster/brick1/vol_-10-1
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.max-bricks-per-process: 0
cluster.brick-multiplex: true
 
Volume Name: vol_-10-2
Type: Replicate
Volume ID: 1832ed72-6287-4a09-a45e-1b500c655b7e
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.35.224:/gluster/brick2/vol_-10-2
Brick2: 10.70.35.78:/gluster/brick2/vol_-10-2
Brick3: 10.70.35.36:/gluster/brick2/vol_-10-2
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.max-bricks-per-process: 0
cluster.brick-multiplex: true
 
Volume Name: vol_-10-3
Type: Replicate
Volume ID: 231f5121-fa0f-484e-9152-f2f95485ff3e
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.35.224:/gluster/brick3/vol_-10-3
Brick2: 10.70.35.78:/gluster/brick3/vol_-10-3
Brick3: 10.70.35.36:/gluster/brick3/vol_-10-3
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.max-bricks-per-process: 0
cluster.brick-multiplex: true
 
Volume Name: vol_-2-1
Type: Replicate
Volume ID: b5ea7e9c-af42-4d6a-b3d5-6f3f648c5631
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.35.224:/gluster/brick1/vol_-2-1
Brick2: 10.70.35.78:/gluster/brick1/vol_-2-1
Brick3: 10.70.35.36:/gluster/brick1/vol_-2-1
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.max-bricks-per-process: 0
cluster.brick-multiplex: true
 
Volume Name: vol_-2-2
Type: Replicate
Volume ID: ee9ed353-ce96-459b-a01d-dfabfa2fb9e9
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.35.224:/gluster/brick2/vol_-2-2
Brick2: 10.70.35.78:/gluster/brick2/vol_-2-2
Brick3: 10.70.35.36:/gluster/brick2/vol_-2-2
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.max-bricks-per-process: 0
cluster.brick-multiplex: true
 
Volume Name: vol_-2-3
Type: Replicate
Volume ID: 10792991-8754-4e44-97ca-2fea440bce9d
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.35.224:/gluster/brick3/vol_-2-3
Brick2: 10.70.35.78:/gluster/brick3/vol_-2-3
Brick3: 10.70.35.36:/gluster/brick3/vol_-2-3
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.max-bricks-per-process: 0
cluster.brick-multiplex: true
 
Volume Name: vol_-3-1
Type: Replicate
Volume ID: 9d2e378b-6128-472b-8763-790198e5dba8
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.35.224:/gluster/brick1/vol_-3-1
Brick2: 10.70.35.78:/gluster/brick1/vol_-3-1
Brick3: 10.70.35.36:/gluster/brick1/vol_-3-1
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.max-bricks-per-process: 0
cluster.brick-multiplex: true
 
Volume Name: vol_-3-2
Type: Replicate
Volume ID: 07745afc-bdca-4dda-a26a-49c59ce720ec
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.35.224:/gluster/brick2/vol_-3-2
Brick2: 10.70.35.78:/gluster/brick2/vol_-3-2
Brick3: 10.70.35.36:/gluster/brick2/vol_-3-2
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.max-bricks-per-process: 0
cluster.brick-multiplex: true
 
Volume Name: vol_-3-3
Type: Replicate
Volume ID: 99d59c09-e223-448d-96e4-07d0e389e2f8
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.35.224:/gluster/brick3/vol_-3-3
Brick2: 10.70.35.78:/gluster/brick3/vol_-3-3
Brick3: 10.70.35.36:/gluster/brick3/vol_-3-3
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.max-bricks-per-process: 0
cluster.brick-multiplex: true
 
Volume Name: vol_-4-1
Type: Replicate
Volume ID: 5cf57641-29a0-4619-a46e-48db7d222ea0
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.35.224:/gluster/brick1/vol_-4-1
Brick2: 10.70.35.78:/gluster/brick1/vol_-4-1
Brick3: 10.70.35.36:/gluster/brick1/vol_-4-1
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.max-bricks-per-process: 0
cluster.brick-multiplex: true
 
Volume Name: vol_-4-2
Type: Replicate
Volume ID: 3774d248-aea3-4fe5-97a8-623511390125
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.35.224:/gluster/brick2/vol_-4-2
Brick2: 10.70.35.78:/gluster/brick2/vol_-4-2
Brick3: 10.70.35.36:/gluster/brick2/vol_-4-2
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.max-bricks-per-process: 0
cluster.brick-multiplex: true
 
Volume Name: vol_-4-3
Type: Replicate
Volume ID: 8e43775d-dbe9-42b8-8891-70786585b7f9
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.35.224:/gluster/brick3/vol_-4-3
Brick2: 10.70.35.78:/gluster/brick3/vol_-4-3
Brick3: 10.70.35.36:/gluster/brick3/vol_-4-3
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.max-bricks-per-process: 0
cluster.brick-multiplex: true
 
Volume Name: vol_-5-1
Type: Replicate
Volume ID: d3702d74-4acc-4136-ba6f-eaf0060bbc42
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.35.224:/gluster/brick1/vol_-5-1
Brick2: 10.70.35.78:/gluster/brick1/vol_-5-1
Brick3: 10.70.35.36:/gluster/brick1/vol_-5-1
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.max-bricks-per-process: 0
cluster.brick-multiplex: true
 
Volume Name: vol_-5-2
Type: Replicate
Volume ID: 2d534e2d-e640-4b27-941a-687150666b1d
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.35.224:/gluster/brick2/vol_-5-2
Brick2: 10.70.35.78:/gluster/brick2/vol_-5-2
Brick3: 10.70.35.36:/gluster/brick2/vol_-5-2
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.max-bricks-per-process: 0
cluster.brick-multiplex: true
 
Volume Name: vol_-5-3
Type: Replicate
Volume ID: 7ff6c489-1861-4d8a-a7ae-335d40ed1412
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.35.224:/gluster/brick3/vol_-5-3
Brick2: 10.70.35.78:/gluster/brick3/vol_-5-3
Brick3: 10.70.35.36:/gluster/brick3/vol_-5-3
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.max-bricks-per-process: 0
cluster.brick-multiplex: true
 
Volume Name: vol_-6-1
Type: Replicate
Volume ID: ee03b122-ad01-43ac-8db8-f9aa221b5b1f
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.35.224:/gluster/brick1/vol_-6-1
Brick2: 10.70.35.78:/gluster/brick1/vol_-6-1
Brick3: 10.70.35.36:/gluster/brick1/vol_-6-1
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.max-bricks-per-process: 0
cluster.brick-multiplex: true
 
Volume Name: vol_-6-2
Type: Replicate
Volume ID: 858a87ef-6556-4f9b-96ec-0e9632339ab7
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.35.224:/gluster/brick2/vol_-6-2
Brick2: 10.70.35.78:/gluster/brick2/vol_-6-2
Brick3: 10.70.35.36:/gluster/brick2/vol_-6-2
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.max-bricks-per-process: 0
cluster.brick-multiplex: true
 
Volume Name: vol_-6-3
Type: Replicate
Volume ID: 8205d65e-0418-4126-86c4-53399c449e2d
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.35.224:/gluster/brick3/vol_-6-3
Brick2: 10.70.35.78:/gluster/brick3/vol_-6-3
Brick3: 10.70.35.36:/gluster/brick3/vol_-6-3
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.max-bricks-per-process: 0
cluster.brick-multiplex: true
 
Volume Name: vol_-7-1
Type: Replicate
Volume ID: aa5bebec-1a08-4df0-bdd2-a5c8c20d1648
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.35.224:/gluster/brick1/vol_-7-1
Brick2: 10.70.35.78:/gluster/brick1/vol_-7-1
Brick3: 10.70.35.36:/gluster/brick1/vol_-7-1
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.max-bricks-per-process: 0
cluster.brick-multiplex: true
 
Volume Name: vol_-7-2
Type: Replicate
Volume ID: b1306d40-0c40-4054-93ae-dbe457991a52
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.35.224:/gluster/brick2/vol_-7-2
Brick2: 10.70.35.78:/gluster/brick2/vol_-7-2
Brick3: 10.70.35.36:/gluster/brick2/vol_-7-2
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.max-bricks-per-process: 0
cluster.brick-multiplex: true
 
Volume Name: vol_-7-3
Type: Replicate
Volume ID: 28b0fae0-2bb2-423b-9bae-326499ffc056
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.35.224:/gluster/brick3/vol_-7-3
Brick2: 10.70.35.78:/gluster/brick3/vol_-7-3
Brick3: 10.70.35.36:/gluster/brick3/vol_-7-3
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.max-bricks-per-process: 0
cluster.brick-multiplex: true
 
Volume Name: vol_-8-1
Type: Replicate
Volume ID: bf70d466-40df-4a22-8fb0-bc962be14d8e
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.35.224:/gluster/brick1/vol_-8-1
Brick2: 10.70.35.78:/gluster/brick1/vol_-8-1
Brick3: 10.70.35.36:/gluster/brick1/vol_-8-1
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.max-bricks-per-process: 0
cluster.brick-multiplex: true
 
Volume Name: vol_-8-2
Type: Replicate
Volume ID: 33d3e0c5-2a3c-47c9-8781-88f7578d8529
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.35.224:/gluster/brick2/vol_-8-2
Brick2: 10.70.35.78:/gluster/brick2/vol_-8-2
Brick3: 10.70.35.36:/gluster/brick2/vol_-8-2
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.max-bricks-per-process: 0
cluster.brick-multiplex: true
 
Volume Name: vol_-8-3
Type: Replicate
Volume ID: 5a51b1b0-98fa-41bc-818f-2cacbb60ddf7
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.35.224:/gluster/brick3/vol_-8-3
Brick2: 10.70.35.78:/gluster/brick3/vol_-8-3
Brick3: 10.70.35.36:/gluster/brick3/vol_-8-3
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.max-bricks-per-process: 0
cluster.brick-multiplex: true
 
Volume Name: vol_-9-1
Type: Replicate
Volume ID: b510a570-dd5b-4eb3-bd8a-fb9ba75aa060
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.35.224:/gluster/brick1/vol_-9-1
Brick2: 10.70.35.78:/gluster/brick1/vol_-9-1
Brick3: 10.70.35.36:/gluster/brick1/vol_-9-1
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.max-bricks-per-process: 0
cluster.brick-multiplex: true
 
Volume Name: vol_-9-2
Type: Replicate
Volume ID: ba250ecc-ca4d-4697-a1cf-6917662baf0c
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.35.224:/gluster/brick2/vol_-9-2
Brick2: 10.70.35.78:/gluster/brick2/vol_-9-2
Brick3: 10.70.35.36:/gluster/brick2/vol_-9-2
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.max-bricks-per-process: 0
cluster.brick-multiplex: true
[root@dhcp35-36 gluster]# 




[root@dhcp35-36 gluster]# gluster vol status
Status of volume: replica-vol10
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.35.36:/gluster/brick1/v2        N/A       N/A        N       N/A  
Brick 10.70.35.78:/gluster/brick1/v2        49153     0          Y       24313
Brick 10.70.35.192:/gluster/brick1/v2       49152     0          Y       20214
Self-heal Daemon on localhost               N/A       N/A        Y       6922 
Self-heal Daemon on 10.70.35.192            N/A       N/A        Y       6697 
Self-heal Daemon on 10.70.35.23             N/A       N/A        Y       8956 
Self-heal Daemon on 10.70.35.159            N/A       N/A        Y       8862 
Self-heal Daemon on 10.70.35.224            N/A       N/A        Y       1670 
Self-heal Daemon on 10.70.35.78             N/A       N/A        Y       12130
 
Task Status of Volume replica-vol10
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: replica-vol11
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.35.36:/gluster/brick1/v21       49154     0          Y       6750 
Brick 10.70.35.78:/gluster/brick1/v21       49153     0          Y       24313
Brick 10.70.35.192:/gluster/brick1/v21      49152     0          Y       20214
Self-heal Daemon on localhost               N/A       N/A        Y       6922 
Self-heal Daemon on 10.70.35.23             N/A       N/A        Y       8956 
Self-heal Daemon on 10.70.35.159            N/A       N/A        Y       8862 
Self-heal Daemon on 10.70.35.78             N/A       N/A        Y       12130
Self-heal Daemon on 10.70.35.192            N/A       N/A        Y       6697 
Self-heal Daemon on 10.70.35.224            N/A       N/A        Y       1670 
 
Task Status of Volume replica-vol11
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: replica-vol3
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.35.36:/gluster/brick2/vol3      N/A       N/A        N       N/A  
Brick 10.70.35.78:/gluster/brick2/vol3      49153     0          Y       24313
Brick 10.70.35.192:/gluster/brick2/vol3     49152     0          Y       20214
Self-heal Daemon on localhost               N/A       N/A        Y       6922 
Self-heal Daemon on 10.70.35.159            N/A       N/A        Y       8862 
Self-heal Daemon on 10.70.35.23             N/A       N/A        Y       8956 
Self-heal Daemon on 10.70.35.224            N/A       N/A        Y       1670 
Self-heal Daemon on 10.70.35.192            N/A       N/A        Y       6697 
Self-heal Daemon on 10.70.35.78             N/A       N/A        Y       12130
 
Task Status of Volume replica-vol3
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: replica-vol4
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.35.36:/gluster/brick2/vol4      N/A       N/A        N       N/A  
Brick 10.70.35.78:/gluster/brick2/vol4      49153     0          Y       24313
Brick 10.70.35.192:/gluster/brick2/vol4     49152     0          Y       20214
Self-heal Daemon on localhost               N/A       N/A        Y       6922 
Self-heal Daemon on 10.70.35.192            N/A       N/A        Y       6697 
Self-heal Daemon on 10.70.35.23             N/A       N/A        Y       8956 
Self-heal Daemon on 10.70.35.159            N/A       N/A        Y       8862 
Self-heal Daemon on 10.70.35.224            N/A       N/A        Y       1670 
Self-heal Daemon on 10.70.35.78             N/A       N/A        Y       12130
 
Task Status of Volume replica-vol4
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: replica-vol5
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.35.36:/gluster/brick2/vol5      N/A       N/A        N       N/A  
Brick 10.70.35.78:/gluster/brick2/vol5      49153     0          Y       24313
Brick 10.70.35.192:/gluster/brick2/vol5     49152     0          Y       20214
Self-heal Daemon on localhost               N/A       N/A        Y       6922 
Self-heal Daemon on 10.70.35.159            N/A       N/A        Y       8862 
Self-heal Daemon on 10.70.35.78             N/A       N/A        Y       12130
Self-heal Daemon on 10.70.35.224            N/A       N/A        Y       1670 
Self-heal Daemon on 10.70.35.23             N/A       N/A        Y       8956 
Self-heal Daemon on 10.70.35.192            N/A       N/A        Y       6697 
 
Task Status of Volume replica-vol5
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: replica-vol6
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.35.36:/gluster/brick2/vol6      N/A       N/A        N       N/A  
Brick 10.70.35.78:/gluster/brick2/vol6      49153     0          Y       24313
Brick 10.70.35.192:/gluster/brick2/vol6     49152     0          Y       20214
Self-heal Daemon on localhost               N/A       N/A        Y       6922 
Self-heal Daemon on 10.70.35.23             N/A       N/A        Y       8956 
Self-heal Daemon on 10.70.35.159            N/A       N/A        Y       8862 
Self-heal Daemon on 10.70.35.192            N/A       N/A        Y       6697 
Self-heal Daemon on 10.70.35.224            N/A       N/A        Y       1670 
Self-heal Daemon on 10.70.35.78             N/A       N/A        Y       12130
 
Task Status of Volume replica-vol6
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: vol_-1-1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.35.224:/gluster/brick1/vol_-1-1 49152     0          Y       23450
Brick 10.70.35.78:/gluster/brick1/vol_-1-1  49153     0          Y       24313
Brick 10.70.35.36:/gluster/brick1/vol_-1-1  N/A       N/A        N       N/A  
Self-heal Daemon on localhost               N/A       N/A        Y       6922 
Self-heal Daemon on 10.70.35.192            N/A       N/A        Y       6697 
Self-heal Daemon on 10.70.35.23             N/A       N/A        Y       8956 
Self-heal Daemon on 10.70.35.78             N/A       N/A        Y       12130
Self-heal Daemon on 10.70.35.159            N/A       N/A        Y       8862 
Self-heal Daemon on 10.70.35.224            N/A       N/A        Y       1670 
 
Task Status of Volume vol_-1-1
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: vol_-1-2
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.35.224:/gluster/brick2/vol_-1-2 49152     0          Y       23450
Brick 10.70.35.78:/gluster/brick2/vol_-1-2  49153     0          Y       24313
Brick 10.70.35.36:/gluster/brick2/vol_-1-2  N/A       N/A        N       N/A  
Self-heal Daemon on localhost               N/A       N/A        Y       6922 
Self-heal Daemon on 10.70.35.23             N/A       N/A        Y       8956 
Self-heal Daemon on 10.70.35.192            N/A       N/A        Y       6697 
Self-heal Daemon on 10.70.35.159            N/A       N/A        Y       8862 
Self-heal Daemon on 10.70.35.78             N/A       N/A        Y       12130
Self-heal Daemon on 10.70.35.224            N/A       N/A        Y       1670 
 
Task Status of Volume vol_-1-2
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: vol_-1-3
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.35.224:/gluster/brick3/vol_-1-3 49152     0          Y       23450
Brick 10.70.35.78:/gluster/brick3/vol_-1-3  49153     0          Y       24313
Brick 10.70.35.36:/gluster/brick3/vol_-1-3  N/A       N/A        N       N/A  
Self-heal Daemon on localhost               N/A       N/A        Y       6922 
Self-heal Daemon on 10.70.35.192            N/A       N/A        Y       6697 
Self-heal Daemon on 10.70.35.23             N/A       N/A        Y       8956 
Self-heal Daemon on 10.70.35.159            N/A       N/A        Y       8862 
Self-heal Daemon on 10.70.35.78             N/A       N/A        Y       12130
Self-heal Daemon on 10.70.35.224            N/A       N/A        Y       1670 
 
Task Status of Volume vol_-1-3
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: vol_-10-1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.35.224:/gluster/brick1/vol_-10-1 49152    0          Y       23450
Brick 10.70.35.78:/gluster/brick1/vol_-10-1 49153     0          Y       24313
Brick 10.70.35.36:/gluster/brick1/vol_-10-1 N/A       N/A        N       N/A  
Self-heal Daemon on localhost               N/A       N/A        Y       6922 
Self-heal Daemon on 10.70.35.192            N/A       N/A        Y       6697 
Self-heal Daemon on 10.70.35.23             N/A       N/A        Y       8956 
Self-heal Daemon on 10.70.35.78             N/A       N/A        Y       12130
Self-heal Daemon on 10.70.35.159            N/A       N/A        Y       8862 
Self-heal Daemon on 10.70.35.224            N/A       N/A        Y       1670 
 
Task Status of Volume vol_-10-1
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: vol_-10-2
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.35.224:/gluster/brick2/vol_-10-2 49152    0          Y       23450
Brick 10.70.35.78:/gluster/brick2/vol_-10-2 49153     0          Y       24313
Brick 10.70.35.36:/gluster/brick2/vol_-10-2 N/A       N/A        N       N/A  
Self-heal Daemon on localhost               N/A       N/A        Y       6922 
Self-heal Daemon on 10.70.35.192            N/A       N/A        Y       6697 
Self-heal Daemon on 10.70.35.23             N/A       N/A        Y       8956 
Self-heal Daemon on 10.70.35.159            N/A       N/A        Y       8862 
Self-heal Daemon on 10.70.35.78             N/A       N/A        Y       12130
Self-heal Daemon on 10.70.35.224            N/A       N/A        Y       1670 
 
Task Status of Volume vol_-10-2
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: vol_-10-3
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.35.224:/gluster/brick3/vol_-10-3 49152    0          Y       23450
Brick 10.70.35.78:/gluster/brick3/vol_-10-3 49153     0          Y       24313
Brick 10.70.35.36:/gluster/brick3/vol_-10-3 N/A       N/A        N       N/A  
Self-heal Daemon on localhost               N/A       N/A        Y       6922 
Self-heal Daemon on 10.70.35.192            N/A       N/A        Y       6697 
Self-heal Daemon on 10.70.35.23             N/A       N/A        Y       8956 
Self-heal Daemon on 10.70.35.78             N/A       N/A        Y       12130
Self-heal Daemon on 10.70.35.159            N/A       N/A        Y       8862 
Self-heal Daemon on 10.70.35.224            N/A       N/A        Y       1670 
 
Task Status of Volume vol_-10-3
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: vol_-2-1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.35.224:/gluster/brick1/vol_-2-1 49152     0          Y       23450
Brick 10.70.35.78:/gluster/brick1/vol_-2-1  49153     0          Y       24313
Brick 10.70.35.36:/gluster/brick1/vol_-2-1  N/A       N/A        N       N/A  
Self-heal Daemon on localhost               N/A       N/A        Y       6922 
Self-heal Daemon on 10.70.35.192            N/A       N/A        Y       6697 
Self-heal Daemon on 10.70.35.23             N/A       N/A        Y       8956 
Self-heal Daemon on 10.70.35.159            N/A       N/A        Y       8862 
Self-heal Daemon on 10.70.35.224            N/A       N/A        Y       1670 
Self-heal Daemon on 10.70.35.78             N/A       N/A        Y       12130
 
Task Status of Volume vol_-2-1
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: vol_-2-2
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.35.224:/gluster/brick2/vol_-2-2 49152     0          Y       23450
Brick 10.70.35.78:/gluster/brick2/vol_-2-2  49153     0          Y       24313
Brick 10.70.35.36:/gluster/brick2/vol_-2-2  N/A       N/A        N       N/A  
Self-heal Daemon on localhost               N/A       N/A        Y       6922 
Self-heal Daemon on 10.70.35.192            N/A       N/A        Y       6697 
Self-heal Daemon on 10.70.35.23             N/A       N/A        Y       8956 
Self-heal Daemon on 10.70.35.159            N/A       N/A        Y       8862 
Self-heal Daemon on 10.70.35.78             N/A       N/A        Y       12130
Self-heal Daemon on 10.70.35.224            N/A       N/A        Y       1670 
 
Task Status of Volume vol_-2-2
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: vol_-2-3
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.35.224:/gluster/brick3/vol_-2-3 49152     0          Y       23450
Brick 10.70.35.78:/gluster/brick3/vol_-2-3  49153     0          Y       24313
Brick 10.70.35.36:/gluster/brick3/vol_-2-3  N/A       N/A        N       N/A  
Self-heal Daemon on localhost               N/A       N/A        Y       6922 
Self-heal Daemon on 10.70.35.192            N/A       N/A        Y       6697 
Self-heal Daemon on 10.70.35.23             N/A       N/A        Y       8956 
Self-heal Daemon on 10.70.35.159            N/A       N/A        Y       8862 
Self-heal Daemon on 10.70.35.78             N/A       N/A        Y       12130
Self-heal Daemon on 10.70.35.224            N/A       N/A        Y       1670 
 
Task Status of Volume vol_-2-3
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: vol_-3-1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.35.224:/gluster/brick1/vol_-3-1 49152     0          Y       23450
Brick 10.70.35.78:/gluster/brick1/vol_-3-1  49153     0          Y       24313
Brick 10.70.35.36:/gluster/brick1/vol_-3-1  N/A       N/A        N       N/A  
Self-heal Daemon on localhost               N/A       N/A        Y       6922 
Self-heal Daemon on 10.70.35.192            N/A       N/A        Y       6697 
Self-heal Daemon on 10.70.35.23             N/A       N/A        Y       8956 
Self-heal Daemon on 10.70.35.159            N/A       N/A        Y       8862 
Self-heal Daemon on 10.70.35.78             N/A       N/A        Y       12130
Self-heal Daemon on 10.70.35.224            N/A       N/A        Y       1670 
 
Task Status of Volume vol_-3-1
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: vol_-3-2
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.35.224:/gluster/brick2/vol_-3-2 49152     0          Y       23450
Brick 10.70.35.78:/gluster/brick2/vol_-3-2  49153     0          Y       24313
Brick 10.70.35.36:/gluster/brick2/vol_-3-2  N/A       N/A        N       N/A  
Self-heal Daemon on localhost               N/A       N/A        Y       6922 
Self-heal Daemon on 10.70.35.192            N/A       N/A        Y       6697 
Self-heal Daemon on 10.70.35.23             N/A       N/A        Y       8956 
Self-heal Daemon on 10.70.35.159            N/A       N/A        Y       8862 
Self-heal Daemon on 10.70.35.78             N/A       N/A        Y       12130
Self-heal Daemon on 10.70.35.224            N/A       N/A        Y       1670 
 
Task Status of Volume vol_-3-2
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: vol_-3-3
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.35.224:/gluster/brick3/vol_-3-3 49152     0          Y       23450
Brick 10.70.35.78:/gluster/brick3/vol_-3-3  49153     0          Y       24313
Brick 10.70.35.36:/gluster/brick3/vol_-3-3  N/A       N/A        N       N/A  
Self-heal Daemon on localhost               N/A       N/A        Y       6922 
Self-heal Daemon on 10.70.35.192            N/A       N/A        Y       6697 
Self-heal Daemon on 10.70.35.23             N/A       N/A        Y       8956 
Self-heal Daemon on 10.70.35.78             N/A       N/A        Y       12130
Self-heal Daemon on 10.70.35.159            N/A       N/A        Y       8862 
Self-heal Daemon on 10.70.35.224            N/A       N/A        Y       1670 
 
Task Status of Volume vol_-3-3
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: vol_-4-1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.35.224:/gluster/brick1/vol_-4-1 49152     0          Y       23450
Brick 10.70.35.78:/gluster/brick1/vol_-4-1  49153     0          Y       24313
Brick 10.70.35.36:/gluster/brick1/vol_-4-1  N/A       N/A        N       N/A  
Self-heal Daemon on localhost               N/A       N/A        Y       6922 
Self-heal Daemon on 10.70.35.192            N/A       N/A        Y       6697 
Self-heal Daemon on 10.70.35.23             N/A       N/A        Y       8956 
Self-heal Daemon on 10.70.35.159            N/A       N/A        Y       8862 
Self-heal Daemon on 10.70.35.78             N/A       N/A        Y       12130
Self-heal Daemon on 10.70.35.224            N/A       N/A        Y       1670 
 
Task Status of Volume vol_-4-1
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: vol_-4-2
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.35.224:/gluster/brick2/vol_-4-2 49152     0          Y       23450
Brick 10.70.35.78:/gluster/brick2/vol_-4-2  49153     0          Y       24313
Brick 10.70.35.36:/gluster/brick2/vol_-4-2  N/A       N/A        N       N/A  
Self-heal Daemon on localhost               N/A       N/A        Y       6922 
Self-heal Daemon on 10.70.35.192            N/A       N/A        Y       6697 
Self-heal Daemon on 10.70.35.159            N/A       N/A        Y       8862 
Self-heal Daemon on 10.70.35.23             N/A       N/A        Y       8956 
Self-heal Daemon on 10.70.35.78             N/A       N/A        Y       12130
Self-heal Daemon on 10.70.35.224            N/A       N/A        Y       1670 
 
Task Status of Volume vol_-4-2
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: vol_-4-3
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.35.224:/gluster/brick3/vol_-4-3 49152     0          Y       23450
Brick 10.70.35.78:/gluster/brick3/vol_-4-3  49153     0          Y       24313
Brick 10.70.35.36:/gluster/brick3/vol_-4-3  N/A       N/A        N       N/A  
Self-heal Daemon on localhost               N/A       N/A        Y       6922 
Self-heal Daemon on 10.70.35.192            N/A       N/A        Y       6697 
Self-heal Daemon on 10.70.35.159            N/A       N/A        Y       8862 
Self-heal Daemon on 10.70.35.23             N/A       N/A        Y       8956 
Self-heal Daemon on 10.70.35.78             N/A       N/A        Y       12130
Self-heal Daemon on 10.70.35.224            N/A       N/A        Y       1670 
 
Task Status of Volume vol_-4-3
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: vol_-5-1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.35.224:/gluster/brick1/vol_-5-1 49152     0          Y       23450
Brick 10.70.35.78:/gluster/brick1/vol_-5-1  49153     0          Y       24313
Brick 10.70.35.36:/gluster/brick1/vol_-5-1  N/A       N/A        N       N/A  
Self-heal Daemon on localhost               N/A       N/A        Y       6922 
Self-heal Daemon on 10.70.35.159            N/A       N/A        Y       8862 
Self-heal Daemon on 10.70.35.192            N/A       N/A        Y       6697 
Self-heal Daemon on 10.70.35.78             N/A       N/A        Y       12130
Self-heal Daemon on 10.70.35.23             N/A       N/A        Y       8956 
Self-heal Daemon on 10.70.35.224            N/A       N/A        Y       1670 
 
Task Status of Volume vol_-5-1
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: vol_-5-2
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.35.224:/gluster/brick2/vol_-5-2 49152     0          Y       23450
Brick 10.70.35.78:/gluster/brick2/vol_-5-2  49153     0          Y       24313
Brick 10.70.35.36:/gluster/brick2/vol_-5-2  N/A       N/A        N       N/A  
Self-heal Daemon on localhost               N/A       N/A        Y       6922 
Self-heal Daemon on 10.70.35.78             N/A       N/A        Y       12130
Self-heal Daemon on 10.70.35.192            N/A       N/A        Y       6697 
Self-heal Daemon on 10.70.35.23             N/A       N/A        Y       8956 
Self-heal Daemon on 10.70.35.159            N/A       N/A        Y       8862 
Self-heal Daemon on 10.70.35.224            N/A       N/A        Y       1670 
 
Task Status of Volume vol_-5-2
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: vol_-5-3
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.35.224:/gluster/brick3/vol_-5-3 49152     0          Y       23450
Brick 10.70.35.78:/gluster/brick3/vol_-5-3  49153     0          Y       24313
Brick 10.70.35.36:/gluster/brick3/vol_-5-3  N/A       N/A        N       N/A  
Self-heal Daemon on localhost               N/A       N/A        Y       6922 
Self-heal Daemon on 10.70.35.192            N/A       N/A        Y       6697 
Self-heal Daemon on 10.70.35.159            N/A       N/A        Y       8862 
Self-heal Daemon on 10.70.35.23             N/A       N/A        Y       8956 
Self-heal Daemon on 10.70.35.78             N/A       N/A        Y       12130
Self-heal Daemon on 10.70.35.224            N/A       N/A        Y       1670 
 
Task Status of Volume vol_-5-3
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: vol_-6-1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.35.224:/gluster/brick1/vol_-6-1 49152     0          Y       23450
Brick 10.70.35.78:/gluster/brick1/vol_-6-1  49153     0          Y       24313
Brick 10.70.35.36:/gluster/brick1/vol_-6-1  N/A       N/A        N       N/A  
Self-heal Daemon on localhost               N/A       N/A        Y       6922 
Self-heal Daemon on 10.70.35.192            N/A       N/A        Y       6697 
Self-heal Daemon on 10.70.35.159            N/A       N/A        Y       8862 
Self-heal Daemon on 10.70.35.78             N/A       N/A        Y       12130
Self-heal Daemon on 10.70.35.23             N/A       N/A        Y       8956 
Self-heal Daemon on 10.70.35.224            N/A       N/A        Y       1670 
 
Task Status of Volume vol_-6-1
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: vol_-6-2
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.35.224:/gluster/brick2/vol_-6-2 49152     0          Y       23450
Brick 10.70.35.78:/gluster/brick2/vol_-6-2  49153     0          Y       24313
Brick 10.70.35.36:/gluster/brick2/vol_-6-2  N/A       N/A        N       N/A  
Self-heal Daemon on localhost               N/A       N/A        Y       6922 
Self-heal Daemon on 10.70.35.192            N/A       N/A        Y       6697 
Self-heal Daemon on 10.70.35.159            N/A       N/A        Y       8862 
Self-heal Daemon on 10.70.35.78             N/A       N/A        Y       12130
Self-heal Daemon on 10.70.35.23             N/A       N/A        Y       8956 
Self-heal Daemon on 10.70.35.224            N/A       N/A        Y       1670 
 
Task Status of Volume vol_-6-2
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: vol_-6-3
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.35.224:/gluster/brick3/vol_-6-3 49152     0          Y       23450
Brick 10.70.35.78:/gluster/brick3/vol_-6-3  49153     0          Y       24313
Brick 10.70.35.36:/gluster/brick3/vol_-6-3  N/A       N/A        N       N/A  
Self-heal Daemon on localhost               N/A       N/A        Y       6922 
Self-heal Daemon on 10.70.35.192            N/A       N/A        Y       6697 
Self-heal Daemon on 10.70.35.23             N/A       N/A        Y       8956 
Self-heal Daemon on 10.70.35.159            N/A       N/A        Y       8862 
Self-heal Daemon on 10.70.35.224            N/A       N/A        Y       1670 
Self-heal Daemon on 10.70.35.78             N/A       N/A        Y       12130
 
Task Status of Volume vol_-6-3
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: vol_-7-1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.35.224:/gluster/brick1/vol_-7-1 49152     0          Y       23450
Brick 10.70.35.78:/gluster/brick1/vol_-7-1  49153     0          Y       24313
Brick 10.70.35.36:/gluster/brick1/vol_-7-1  N/A       N/A        N       N/A  
Self-heal Daemon on localhost               N/A       N/A        Y       6922 
Self-heal Daemon on 10.70.35.192            N/A       N/A        Y       6697 
Self-heal Daemon on 10.70.35.23             N/A       N/A        Y       8956 
Self-heal Daemon on 10.70.35.159            N/A       N/A        Y       8862 
Self-heal Daemon on 10.70.35.78             N/A       N/A        Y       12130
Self-heal Daemon on 10.70.35.224            N/A       N/A        Y       1670 
 
Task Status of Volume vol_-7-1
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: vol_-7-2
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.35.224:/gluster/brick2/vol_-7-2 49152     0          Y       23450
Brick 10.70.35.78:/gluster/brick2/vol_-7-2  49153     0          Y       24313
Brick 10.70.35.36:/gluster/brick2/vol_-7-2  N/A       N/A        N       N/A  
Self-heal Daemon on localhost               N/A       N/A        Y       6922 
Self-heal Daemon on 10.70.35.192            N/A       N/A        Y       6697 
Self-heal Daemon on 10.70.35.23             N/A       N/A        Y       8956 
Self-heal Daemon on 10.70.35.159            N/A       N/A        Y       8862 
Self-heal Daemon on 10.70.35.78             N/A       N/A        Y       12130
Self-heal Daemon on 10.70.35.224            N/A       N/A        Y       1670 
 
Task Status of Volume vol_-7-2
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: vol_-7-3
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.35.224:/gluster/brick3/vol_-7-3 49152     0          Y       23450
Brick 10.70.35.78:/gluster/brick3/vol_-7-3  49153     0          Y       24313
Brick 10.70.35.36:/gluster/brick3/vol_-7-3  N/A       N/A        N       N/A  
Self-heal Daemon on localhost               N/A       N/A        Y       6922 
Self-heal Daemon on 10.70.35.192            N/A       N/A        Y       6697 
Self-heal Daemon on 10.70.35.23             N/A       N/A        Y       8956 
Self-heal Daemon on 10.70.35.159            N/A       N/A        Y       8862 
Self-heal Daemon on 10.70.35.78             N/A       N/A        Y       12130
Self-heal Daemon on 10.70.35.224            N/A       N/A        Y       1670 
 
Task Status of Volume vol_-7-3
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: vol_-8-1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.35.224:/gluster/brick1/vol_-8-1 49152     0          Y       23450
Brick 10.70.35.78:/gluster/brick1/vol_-8-1  49153     0          Y       24313
Brick 10.70.35.36:/gluster/brick1/vol_-8-1  N/A       N/A        N       N/A  
Self-heal Daemon on localhost               N/A       N/A        Y       6922 
Self-heal Daemon on 10.70.35.192            N/A       N/A        Y       6697 
Self-heal Daemon on 10.70.35.23             N/A       N/A        Y       8956 
Self-heal Daemon on 10.70.35.159            N/A       N/A        Y       8862 
Self-heal Daemon on 10.70.35.78             N/A       N/A        Y       12130
Self-heal Daemon on 10.70.35.224            N/A       N/A        Y       1670 
 
Task Status of Volume vol_-8-1
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: vol_-8-2
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.35.224:/gluster/brick2/vol_-8-2 49152     0          Y       23450
Brick 10.70.35.78:/gluster/brick2/vol_-8-2  49153     0          Y       24313
Brick 10.70.35.36:/gluster/brick2/vol_-8-2  N/A       N/A        N       N/A  
Self-heal Daemon on localhost               N/A       N/A        Y       6922 
Self-heal Daemon on 10.70.35.192            N/A       N/A        Y       6697 
Self-heal Daemon on 10.70.35.23             N/A       N/A        Y       8956 
Self-heal Daemon on 10.70.35.159            N/A       N/A        Y       8862 
Self-heal Daemon on 10.70.35.224            N/A       N/A        Y       1670 
Self-heal Daemon on 10.70.35.78             N/A       N/A        Y       12130
 
Task Status of Volume vol_-8-2
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: vol_-8-3
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.35.224:/gluster/brick3/vol_-8-3 49152     0          Y       23450
Brick 10.70.35.78:/gluster/brick3/vol_-8-3  49153     0          Y       24313
Brick 10.70.35.36:/gluster/brick3/vol_-8-3  N/A       N/A        N       N/A  
Self-heal Daemon on localhost               N/A       N/A        Y       6922 
Self-heal Daemon on 10.70.35.192            N/A       N/A        Y       6697 
Self-heal Daemon on 10.70.35.159            N/A       N/A        Y       8862 
Self-heal Daemon on 10.70.35.23             N/A       N/A        Y       8956 
Self-heal Daemon on 10.70.35.224            N/A       N/A        Y       1670 
Self-heal Daemon on 10.70.35.78             N/A       N/A        Y       12130
 
Task Status of Volume vol_-8-3
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: vol_-9-1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.35.224:/gluster/brick1/vol_-9-1 49152     0          Y       23450
Brick 10.70.35.78:/gluster/brick1/vol_-9-1  49153     0          Y       24313
Brick 10.70.35.36:/gluster/brick1/vol_-9-1  N/A       N/A        N       N/A  
Self-heal Daemon on localhost               N/A       N/A        Y       6922 
Self-heal Daemon on 10.70.35.192            N/A       N/A        Y       6697 
Self-heal Daemon on 10.70.35.23             N/A       N/A        Y       8956 
Self-heal Daemon on 10.70.35.78             N/A       N/A        Y       12130
Self-heal Daemon on 10.70.35.224            N/A       N/A        Y       1670 
Self-heal Daemon on 10.70.35.159            N/A       N/A        Y       8862 
 
Task Status of Volume vol_-9-1
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: vol_-9-2
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.35.224:/gluster/brick2/vol_-9-2 49152     0          Y       23450
Brick 10.70.35.78:/gluster/brick2/vol_-9-2  49153     0          Y       24313
Brick 10.70.35.36:/gluster/brick2/vol_-9-2  N/A       N/A        N       N/A  
Self-heal Daemon on localhost               N/A       N/A        Y       6922 
Self-heal Daemon on 10.70.35.192            N/A       N/A        Y       6697 
Self-heal Daemon on 10.70.35.159            N/A       N/A        Y       8862 
Self-heal Daemon on 10.70.35.23             N/A       N/A        Y       8956 
Self-heal Daemon on 10.70.35.224            N/A       N/A        Y       1670 
Self-heal Daemon on 10.70.35.78             N/A       N/A        Y       12130
 
Task Status of Volume vol_-9-2
------------------------------------------------------------------------------
There are no active volume tasks
 
[root@dhcp35-36 gluster]#

Comment 5 Atin Mukherjee 2018-10-09 12:21:21 UTC
"With brickmux enabled if we kill the brick process on a node and take a glusterd statedump the portmap entries show stale brick entries" ---> Can you please explain what do you mean by stale brick entries in the portmap here?

A snip from the description:

<snip>
glusterd.pmap_port=49153
glusterd.pmap[49153].type=0
glusterd.pmap[49153].brickname=(null)
glusterd.pmap_port=49154
glusterd.pmap[49154].type=4
glusterd.pmap[49154].brickname=/gluster/brick1/v21 /gluster/brick2/vol_-9-2
[root@dhcp35-36 gluster]# 

The brick path /gluster/brick2/vol_-9-2 appears under both ports.
</snip>

Are you assuming /gluster/brick2/vol_-9-2 is assigned to port 49153 as well? If so, why? Remember that the brickname of that portmap entry is set to NULL.

This isn't a bug. If you have any justification for why you think otherwise, please add a comment.

Related to the brick not coming up, I see a separate bug https://bugzilla.redhat.com/show_bug.cgi?id=1637445 has been filed; it will be updated shortly.

Comment 6 Upasana 2018-10-09 13:01:28 UTC
(In reply to Atin Mukherjee from comment #5)
> "With brickmux enabled if we kill the brick process on a node and take a
> glusterd statedump the portmap entries show stale brick entries" ---> Can
> you please explain what you mean by stale brick entries in the portmap
> here?
> 
> A snip from the description:
> 
> <snip>
> glusterd.pmap_port=49153
> glusterd.pmap[49153].type=0
> glusterd.pmap[49153].brickname=(null)
> glusterd.pmap_port=49154
> glusterd.pmap[49154].type=4
> glusterd.pmap[49154].brickname=/gluster/brick1/v21 /gluster/brick2/vol_-9-2
> [root@dhcp35-36 gluster]# 
> 
> The brick path /gluster/brick2/vol_-9-2 appears under both ports.
> </snip>

glusterd.pmap_port=49152
glusterd.pmap[49152].type=4
glusterd.pmap[49152].brickname=/gluster/brick1/v2 /gluster/brick2/vol3 /gluster/brick2/vol4 /gluster/brick2/vol5 /gluster/brick2/vol6 /gluster/brick1/vol_-1-1 /gluster/brick2/vol_-1-2 /gluster/brick3/vol_-1-3 /gluster/brick1/vol_-10-1 /gluster/brick2/vol_-10-2 /gluster/brick3/vol_-10-3 /gluster/brick1/vol_-2-1 /gluster/brick2/vol_-2-2 /gluster/brick3/vol_-2-3 /gluster/brick1/vol_-3-1 /gluster/brick2/vol_-3-2 /gluster/brick3/vol_-3-3 /gluster/brick1/vol_-4-1 /gluster/brick2/vol_-4-2 /gluster/brick3/vol_-4-3 /gluster/brick1/vol_-5-1 /gluster/brick2/vol_-5-2 /gluster/brick3/vol_-5-3 /gluster/brick1/vol_-6-1 /gluster/brick2/vol_-6-2 /gluster/brick3/vol_-6-3 /gluster/brick1/vol_-7-1 /gluster/brick2/vol_-7-2 /gluster/brick3/vol_-7-3 /gluster/brick1/vol_-8-1 /gluster/brick2/vol_-8-2 /gluster/brick3/vol_-8-3 /gluster/brick1/vol_-9-1 /gluster/brick2/vol_-9-2
glusterd.pmap_port=49153
glusterd.pmap[49153].type=0
glusterd.pmap[49153].brickname=(null)
glusterd.pmap_port=49154
glusterd.pmap[49154].type=4
glusterd.pmap[49154].brickname=/gluster/brick1/v21 /gluster/brick2/vol_-9-2
[root@dhcp35-36 gluster]# 

As per this statedump, port 49152 still has all of those bricks assigned to it, even though the brick process serving them has been killed.

Can you please confirm whether this is still not a bug?

Comment 7 Atin Mukherjee 2018-10-09 13:27:12 UTC
Thanks for the confirmation. I overlooked that information; I'm reopening the BZ. But please note that kill -9 is not the right way to bring down a process gracefully, as it doesn't initiate the PMAP_SIGNOUT event. Let me check where the problem is.
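
As generic background on why kill -9 bypasses the signout (standard POSIX signal semantics, not glusterfs-specific code): a SIGTERM handler gets a chance to run cleanup before exit, whereas SIGKILL can never be caught. A minimal illustration:

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

/* Illustration only (not glusterfs code): a SIGTERM handler can run
 * cleanup (for a brick process, the analogue would be signing out of
 * the glusterd portmap before exiting), but SIGKILL can never be
 * caught, so kill -9 always bypasses any such cleanup. */
static void
on_sigterm(int sig)
{
    (void)sig;
    /* graceful teardown / portmap signout would happen here */
    _exit(0);
}

int
main(void)
{
    if (signal(SIGTERM, on_sigterm) == SIG_ERR)
        perror("signal(SIGTERM)");
    if (signal(SIGKILL, on_sigterm) == SIG_ERR)
        perror("signal(SIGKILL)"); /* always fails: SIGKILL is uncatchable */
    pause(); /* wait for a signal */
    return 0;
}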

Comment 8 Atin Mukherjee 2018-10-09 16:30:32 UTC
Analysis so far:

What's happening here: in __glusterd_brick_rpc_notify (), when we would initiate pmap_registry_remove () after a started brick's process is killed, we first check for the existence of the pidfile and whether the service is running (the relevant code path is visible in the patch context quoted in comment 10 below). Ideally the service should already be dead. However, we still see the process as running, so pmap_registry_remove () is not invoked and we are left with the stale entries. This doesn't happen every time. I'll dig into this further.

Comment 9 Atin Mukherjee 2018-10-10 03:59:24 UTC
So here's the experiment I did: if I add a couple of log entries in gf_is_service_running () to see whether the process is running or not, the race goes away. gf_is_service_running (), which calls gf_is_pid_running (), reads the pid from the pidfile and checks whether /proc/<pid>/cmdline exists.

This leads me to the hypothesis that by the time glusterd starts processing the brick disconnect event in __glusterd_brick_rpc_notify (), the pid entry in /proc has not yet been wiped out. In other words, the kernel takes additional time to complete the post-kill teardown of the process, and during that window glusterd assumes the process is still running and happily skips initiating pmap_registry_remove ().

We'd need to see how we can get this working.
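
A minimal standalone sketch of this style of check (illustrative only; is_pid_running below is a stand-in, not the actual libglusterfs function):

#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

/* Illustrative stand-in for gf_is_pid_running (): treat a pid as
 * "running" as long as /proc/<pid>/cmdline is still present.  A
 * process killed with SIGKILL can linger in /proc for a short window
 * while the kernel tears it down, so a check that races with the kill
 * can still report the process as alive, which is exactly the window
 * described above. */
static int
is_pid_running(pid_t pid)
{
    char proc_path[64];

    snprintf(proc_path, sizeof(proc_path), "/proc/%d/cmdline", (int)pid);
    return (access(proc_path, F_OK) == 0);
}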

Comment 10 Atin Mukherjee 2018-10-29 13:44:15 UTC
I just came up with a patch that I'd like to test; unfortunately, however, I am no longer able to reproduce this. No luck with the upstream master, rhgs-3.4.0, and rhgs-3.4.1 branches.

I need help getting a setup where this is reproducible; I can then provide a private build with the following change to test this out:

atin@dhcp35-96:~/codebase/upstream/glusterfs_master/glusterfs$ gd
diff --git a/xlators/mgmt/glusterd/src/glusterd-handler.c b/xlators/mgmt/glusterd/src/glusterd-handler.c
index aa8892784..4748ea337 100644
--- a/xlators/mgmt/glusterd/src/glusterd-handler.c
+++ b/xlators/mgmt/glusterd/src/glusterd-handler.c
@@ -6209,7 +6209,9 @@ __glusterd_brick_rpc_notify(struct rpc_clnt *rpc, void *mydata,
                  * the same if the process is not running
                  */
                 GLUSTERD_GET_BRICK_PIDFILE(pidfile, volinfo, brickinfo, conf);
-                if (!gf_is_service_running(pidfile, &pid)) {
+                if (!gf_is_service_running(pidfile, &pid) ||
+                    (is_brick_mx_enabled() &&
+                    !search_brick_path_from_proc(pid, brickinfo->path))) {
                     ret = pmap_registry_remove(
                         THIS, brickinfo->port, brickinfo->path,
                         GF_PMAP_PORT_BRICKSERVER, NULL, _gf_true);
diff --git a/xlators/mgmt/glusterd/src/glusterd-utils.h b/xlators/mgmt/glusterd/src/glusterd-utils.h
index 4bdc048dd..5c6fa95ae 100644
--- a/xlators/mgmt/glusterd/src/glusterd-utils.h
+++ b/xlators/mgmt/glusterd/src/glusterd-utils.h
@@ -873,4 +873,7 @@ glusterd_get_volinfo_from_brick(char *brick, glusterd_volinfo_t **volinfo);
 gf_boolean_t
 glusterd_is_profile_on(glusterd_volinfo_t *volinfo);
 
+char *
+search_brick_path_from_proc(pid_t brick_pid, char *brickpath);
+
 #endif
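
For completeness: the diff above only declares search_brick_path_from_proc (); its body is not part of this snippet. A hypothetical sketch of how such a helper could work, assuming it confirms brick ownership by scanning the open file descriptors of the (possibly multiplexed) brick process under /proc:

#include <dirent.h>
#include <limits.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

/* Hypothetical sketch only; the real helper added by the patch may
 * work differently.  Walk /proc/<pid>/fd and return the brick path if
 * any file the process holds open resolves under that path, i.e. this
 * particular brick is still being served by the process. */
char *
search_brick_path_from_proc(pid_t brick_pid, char *brickpath)
{
    char fd_dir[64];
    char link_path[PATH_MAX];
    char target[PATH_MAX];
    struct dirent *entry = NULL;
    DIR *dir = NULL;
    char *found = NULL;

    snprintf(fd_dir, sizeof(fd_dir), "/proc/%d/fd", (int)brick_pid);
    dir = opendir(fd_dir);
    if (!dir)
        return NULL;

    while ((entry = readdir(dir)) != NULL) {
        ssize_t len;

        if (entry->d_name[0] == '.')
            continue;
        snprintf(link_path, sizeof(link_path), "%s/%s", fd_dir,
                 entry->d_name);
        len = readlink(link_path, target, sizeof(target) - 1);
        if (len < 0)
            continue;
        target[len] = '\0';
        if (strncmp(target, brickpath, strlen(brickpath)) == 0) {
            found = brickpath;
            break;
        }
    }
    closedir(dir);
    return found;
}

Either way, the intent of the patched condition is clear: with brick multiplexing enabled, the pid being alive says nothing about one specific brick, so the disconnect handler additionally checks whether this brick path is still served by the process before deciding to keep its portmap entry.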

Comment 22 Atin Mukherjee 2018-11-09 11:26:35 UTC
Upstream patch : https://review.gluster.org/#/c/21568/

Comment 31 errata-xmlrpc 2019-02-04 07:41:25 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0263

