Red Hat Bugzilla – Attachment 1480237 Details for Bug 1624578: Gluster node reboot fails after gdeploy configuration if brick filesystems have VDO beneath
Sample gdeploy configuration file
gdeploy.conf (text/plain), 8.74 KB, created by Giuseppe Ragusa on 2018-09-01 18:44:04 UTC
Description: Sample gdeploy configuration file
Filename:    gdeploy.conf
MIME Type:   text/plain
Creator:     Giuseppe Ragusa
Created:     2018-09-01 18:44:04 UTC
Size:        8.74 KB
# gDeploy Gluster configuration file for HVP

# Nodes in the trusted pool
[hosts]
pinkiepie.gluster.private

#[script1]
#action=execute
#file=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -h pinkiepie.gluster.private
#ignore_script_errors=no

# Blacklist devices in multipath.conf
[script2]
action=execute
file=/usr/share/gdeploy/scripts/blacklist_all_disks.sh
ignore_script_errors=no

# Disable hooks
[script3]
action=execute
file=/usr/share/gdeploy/scripts/disable-gluster-hooks.sh

# SELinux is active
[selinux]
yes

#[disktype]
#jbod


# Setup LVM

[vdo01:pinkiepie.gluster.private]
action=create
devices=sdb
names=vdo_sdb
logicalsize=751G
blockmapcachesize=128M
readcache=enabled
readcachesize=20M
emulate512=yes
writepolicy=auto
ignore_vdo_errors=no
slabsize=32G

[vdo02:pinkiepie.gluster.private]
action=create
devices=sdc
names=vdo_sdc
logicalsize=4916G
blockmapcachesize=128M
readcache=enabled
readcachesize=20M
emulate512=yes
writepolicy=auto
ignore_vdo_errors=no
slabsize=32G

[pv01:pinkiepie.gluster.private]
action=create
devices=mapper/vdo_sdb
ignore_pv_errors=no

[pv02:pinkiepie.gluster.private]
action=create
devices=mapper/vdo_sdc
ignore_pv_errors=no

[vg01:pinkiepie.gluster.private]
action=create
vgname=vgdisk1
pvname=mapper/vdo_sdb
ignore_vg_errors=no

[vg02:pinkiepie.gluster.private]
action=create
vgname=vgdisk2
pvname=mapper/vdo_sdc
ignore_vg_errors=no

[lv01:pinkiepie.gluster.private]
action=create
vgname=vgdisk2
lvtype=thick
lvname=ctdb
mount=/gluster_bricks/ctdb/brick1
size=1GB
ignore_lv_errors=no

[lv02:pinkiepie.gluster.private]
action=create
vgname=vgdisk1
lvtype=thick
lvname=enginedomain
mount=/gluster_bricks/enginedomain/brick1
size=96GB
ignore_lv_errors=no

[lv03:pinkiepie.gluster.private]
action=create
vgname=vgdisk1
lvtype=thinpool
poolname=lvthinpool
chunksize=1536k
extent=90%FREE
ignore_lv_errors=no

[lv04:pinkiepie.gluster.private]
action=create
vgname=vgdisk1
lvtype=thinlv
poolname=lvthinpool
lvname=vmstoredomain
mount=/gluster_bricks/vmstoredomain/brick1
virtualsize=500GB
ignore_lv_errors=no

[lv05:pinkiepie.gluster.private]
action=create
vgname=vgdisk1
lvtype=thinlv
poolname=lvthinpool
lvname=isodomain
mount=/gluster_bricks/isodomain/brick1
virtualsize=30GB
ignore_lv_errors=no

[lv06:pinkiepie.gluster.private]
action=create
vgname=vgdisk2
lvtype=thinpool
poolname=lvthinpool
chunksize=1536k
extent=90%FREE
ignore_lv_errors=no

[lv07:pinkiepie.gluster.private]
action=create
vgname=vgdisk2
lvtype=thinlv
poolname=lvthinpool
lvname=winshare
mount=/gluster_bricks/winshare/brick1
virtualsize=1024GB
ignore_lv_errors=no

[lv08:pinkiepie.gluster.private]
action=create
vgname=vgdisk2
lvtype=thinlv
poolname=lvthinpool
lvname=unixshare
mount=/gluster_bricks/unixshare/brick1
virtualsize=1024GB
ignore_lv_errors=no

[lv09:pinkiepie.gluster.private]
action=create
vgname=vgdisk2
lvtype=thinlv
poolname=lvthinpool
lvname=blockshare
mount=/gluster_bricks/blockshare/brick1
virtualsize=1024GB
ignore_lv_errors=no

[lv010:pinkiepie.gluster.private]
action=create
vgname=vgdisk2
lvtype=thinlv
poolname=lvthinpool
lvname=backup
mount=/gluster_bricks/backup/brick1
virtualsize=1024GB
ignore_lv_errors=no

[service1]
action=enable
service=chronyd

[service2]
action=restart
service=chronyd

[service3]
action=enable
service=glusterd

[service4]
action=restart
service=glusterd
slice_setup=yes

[firewalld]
action=add
ports=111/tcp,2049/tcp,54321/tcp,5900/tcp,5900-6923/tcp,5666/tcp,16514/tcp,54322/tcp,24010/tcp,3260/tcp
services=glusterfs

# Gluster volume definitions

[volume1]
action=create
volname=engine
transport=tcp

force=yes

# Note: single node does not support cluster.shd-max-threads (part of group virt settings, so that must be avoided too)
key=storage.owner-uid,storage.owner-gid,features.shard,features.shard-block-size,performance.low-prio-threads,cluster.data-self-heal-algorithm,cluster.locking-scheme,cluster.shd-wait-qlength,network.ping-timeout,user.cifs,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal,cluster.use-compound-fops,performance.quick-read,performance.read-ahead,performance.io-cache,cluster.eager-lock
value=36,36,on,64MB,32,full,granular,10000,10,off,on,off,on,on,off,off,off,enable

brick_dirs=/gluster_bricks/enginedomain/brick1/brick
ignore_volume_errors=no

[volume2]
action=create
volname=vmstore
transport=tcp

force=yes

# Note: single node does not support cluster.shd-max-threads (part of group virt settings, so that must be avoided too)
key=storage.owner-uid,storage.owner-gid,features.shard,features.shard-block-size,performance.low-prio-threads,cluster.data-self-heal-algorithm,cluster.locking-scheme,cluster.shd-wait-qlength,network.ping-timeout,user.cifs,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal,cluster.use-compound-fops,performance.quick-read,performance.read-ahead,performance.io-cache,cluster.eager-lock
value=36,36,on,64MB,32,full,granular,10000,10,off,on,off,on,on,off,off,off,enable

brick_dirs=/gluster_bricks/vmstoredomain/brick1/brick
ignore_volume_errors=no

[volume3]
action=create
volname=iso
transport=tcp

force=yes

# Note: single node does not support cluster.shd-max-threads (part of group virt settings, so that must be avoided too)
key=storage.owner-uid,storage.owner-gid,features.shard,features.shard-block-size,performance.low-prio-threads,cluster.data-self-heal-algorithm,cluster.locking-scheme,cluster.shd-wait-qlength,network.ping-timeout,user.cifs,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal,cluster.use-compound-fops,performance.quick-read,performance.read-ahead,performance.io-cache,cluster.eager-lock,nfs.disable
value=36,36,on,64MB,32,full,granular,10000,10,off,on,off,on,on,off,off,off,enable,off

brick_dirs=/gluster_bricks/isodomain/brick1/brick
ignore_volume_errors=no

[volume4]
action=create
volname=ctdb
transport=tcp

force=yes

# Note: single node does not support cluster.shd-max-threads
key=storage.owner-uid,storage.owner-gid,performance.low-prio-threads,cluster.data-self-heal-algorithm,cluster.locking-scheme,cluster.shd-wait-qlength,network.ping-timeout,user.cifs,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal,cluster.use-compound-fops,performance.quick-read,performance.read-ahead,performance.io-cache,cluster.eager-lock
value=36,36,32,full,granular,10000,10,off,on,off,on,on,off,off,off,enable

brick_dirs=/gluster_bricks/ctdb/brick1/brick
ignore_volume_errors=no

[volume5]
action=create
volname=winshare
transport=tcp

force=yes

# Note: single node does not support cluster.shd-max-threads
key=group,storage.owner-uid,storage.owner-gid,features.shard,features.shard-block-size,performance.low-prio-threads,cluster.data-self-heal-algorithm,cluster.locking-scheme,cluster.shd-wait-qlength,network.ping-timeout,user.cifs,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal,cluster.use-compound-fops
value=metadata-cache,0,0,on,64MB,32,full,granular,10000,10,off,on,off,on,on

brick_dirs=/gluster_bricks/winshare/brick1/brick
ignore_volume_errors=no

[volume6]
action=create
volname=unixshare
transport=tcp

force=yes

# Note: single node does not support cluster.shd-max-threads
key=group,storage.owner-uid,storage.owner-gid,features.shard,features.shard-block-size,performance.low-prio-threads,cluster.data-self-heal-algorithm,cluster.locking-scheme,cluster.shd-wait-qlength,network.ping-timeout,user.cifs,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal,cluster.use-compound-fops,nfs.disable
value=metadata-cache,0,0,on,64MB,32,full,granular,10000,10,off,on,off,on,on,off

brick_dirs=/gluster_bricks/unixshare/brick1/brick
ignore_volume_errors=no

[volume7]
action=create
volname=blockshare
transport=tcp

force=yes

# Note: single node does not support cluster.shd-max-threads (part of group gluster-block settings, so that must be avoided too)
key=storage.owner-uid,storage.owner-gid,features.shard,features.shard-block-size,performance.low-prio-threads,cluster.data-self-heal-algorithm,cluster.locking-scheme,cluster.shd-wait-qlength,network.ping-timeout,user.cifs,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal,cluster.use-compound-fops
value=0,0,on,64MB,32,full,granular,10000,10,off,on,off,on,on

brick_dirs=/gluster_bricks/blockshare/brick1/brick
ignore_volume_errors=no

[volume8]
action=create
volname=backup
transport=tcp

force=yes

# Note: single node does not support cluster.shd-max-threads
key=group,storage.owner-uid,storage.owner-gid,features.shard,features.shard-block-size,performance.low-prio-threads,cluster.data-self-heal-algorithm,cluster.locking-scheme,cluster.shd-wait-qlength,network.ping-timeout,user.cifs,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal,cluster.use-compound-fops,nfs.disable
value=metadata-cache,0,0,on,64MB,32,full,granular,10000,10,off,on,off,on,on,off

brick_dirs=/gluster_bricks/backup/brick1/brick
ignore_volume_errors=no
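For reference, a configuration file like this is normally fed to the gdeploy command-line tool from the deployment host. The invocation below is a minimal sketch, assuming gdeploy is installed and password-less SSH to pinkiepie.gluster.private is already set up; the follow-up commands simply list the VDO and Gluster volumes the run should have created.

# Run the deployment described by the attached configuration (sketch)
gdeploy -c gdeploy.conf

# Inspect the resulting VDO devices and Gluster volumes on the node
vdo status
gluster volume info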