Description of problem:
For each gluster-block hosting volume, the gluster-block option group must be applied:
    gluster volume set <vol1> group gluster-block
and the volume restarted. Heketi should set this option group on every block-hosting volume it creates and restart the volume before allowing any block device to be created on it.

Version-Release number of selected component (if applicable):
cns-deploy-5.0.0-23.el7rhgs.x86_64

How reproducible:
always
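A minimal sketch of the manual workaround described above, assuming a block-hosting volume named vol1 (a placeholder, not a real volume from this report). Applying an option group changes the volume's effective options, and the stop/start cycle is the restart the report asks Heketi to perform automatically:

```shell
#!/bin/sh
# Placeholder volume name for illustration only.
VOL=vol1

# Apply the gluster-block option group to the block-hosting volume.
gluster volume set "$VOL" group gluster-block

# Restart the volume so the new options take effect
# before any block device is created on it.
gluster volume stop "$VOL" --mode=script
gluster volume start "$VOL"
```

Run as root on a node in the trusted storage pool; stopping the volume briefly interrupts access, which is why the report asks for this to happen before block devices exist on the volume.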
The block-hosting volume now has the group gluster-block options set in build cns-deploy-5.0.0-29.el7rhgs.x86_64:

Volume Name: vol_e7a5fa9bcd676b546d8ba5f6700b5fe1
Type: Replicate
Volume ID: ba4141da-430c-4a91-a0fd-99ddff3f16b0
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.46.197:/var/lib/heketi/mounts/vg_45d6887a929d3d5a6f96e1b7fec36875/brick_a6dbc8a4397b04aa6913a4ee4b5aa1fd/brick
Brick2: 10.70.46.193:/var/lib/heketi/mounts/vg_d7ad1e190eb717fdf389a758d1b244c2/brick_4280677f48f4b75a9c889f0e6d4e1427/brick
Brick3: 10.70.46.203:/var/lib/heketi/mounts/vg_e133d9fa814d7d8e867847071e238e2c/brick_6ef5ac3cb567c77b0271de55b6f3f433/brick
Options Reconfigured:
server.allow-insecure: on
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: enable
performance.readdir-ahead: off
performance.open-behind: off
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
nfs.disable: on
cluster.brick-multiplex: on

Moving the bug to verified.
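A sketch of how this verification can be repeated, assuming a Heketi-created block-hosting volume name like the one above (the volume name and grep pattern here are illustrative; the specific options checked are a subset of those listed in the output above):

```shell
#!/bin/sh
# Placeholder: substitute the block-hosting volume created by Heketi.
VOL=vol_e7a5fa9bcd676b546d8ba5f6700b5fe1

# Show the reconfigured options and pick out a few that the
# gluster-block group is expected to have applied.
gluster volume info "$VOL" \
    | grep -E 'features.shard|network.remote-dio|performance.quick-read'
```

If the group was applied, the output should include features.shard: on and the performance.* translators turned off, matching the Options Reconfigured section shown above.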
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2017:2879