Red Hat Bugzilla – Attachment 1479235 Details for Bug 1621436 – Heketi returns HTTP code 500 when we try to delete expanded volumes in parallel
Description: Heketi server logs for "rally_2018_08_28.log"
Filename: heketi_server_2018_08_28.log
MIME Type: text/plain
Creator: Valerii Ponomarov
Created: 2018-08-28 12:21:17 UTC
Size: 134.41 KB
[negroni] Started POST /volumes
[heketi] INFO 2018/08/28 11:43:13 Allocating brick set #0
[negroni] Started POST /volumes
[negroni] Completed 202 Accepted in 147.766111ms
[heketi] INFO 2018/08/28 11:43:13 Allocating brick set #0
[asynchttp] INFO 2018/08/28 11:43:13 asynchttp.go:288: Started job 4d912af6697c58bab51dadea32d72059
[heketi] INFO 2018/08/28 11:43:13 Started async operation: Create Volume
[heketi] INFO 2018/08/28 11:43:13 Creating brick 54f8b2d556bf6a30d3990382dccacc95
[negroni] Started GET /queue/4d912af6697c58bab51dadea32d72059
[negroni] Completed 200 OK in 97.933µs
[kubeexec] DEBUG 2018/08/28 11:43:13 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-1 Pod: glusterfs-cns-qrfrz Command: mkdir -p /var/lib/heketi/mounts/vg_6c0538bc7bed0679f0f595c61c72656f/brick_54f8b2d556bf6a30d3990382dccacc95
Result:
[negroni] Completed 202 Accepted in 252.468466ms
[asynchttp] INFO 2018/08/28 11:43:13 asynchttp.go:288: Started job 7849db9f0fc271c365e91a686a04246f
[heketi] INFO 2018/08/28 11:43:13 Started async operation: Create Volume
[heketi] INFO 2018/08/28 11:43:13 Creating brick 7d951cd9233c2bafe3c5d611e3d8883f
[negroni] Started GET /queue/7849db9f0fc271c365e91a686a04246f
[negroni] Completed 200 OK in 97.57µs
[kubeexec] DEBUG 2018/08/28 11:43:13 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-0 Pod: glusterfs-cns-jlgq2 Command: mkdir -p /var/lib/heketi/mounts/vg_80280148c66c9e91e0c27f64a751900f/brick_7d951cd9233c2bafe3c5d611e3d8883f
Result:
[kubeexec] DEBUG 2018/08/28 11:43:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-1 Pod: glusterfs-cns-qrfrz Command: lvcreate --autobackup=n --poolmetadatasize 8192K --chunksize 256K --size 1048576K --thin vg_6c0538bc7bed0679f0f595c61c72656f/tp_98ea781d875cd291ab399d971f8462fd --virtualsize 1048576K --name brick_54f8b2d556bf6a30d3990382dccacc95
Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
  Logical volume "brick_54f8b2d556bf6a30d3990382dccacc95" created.
[kubeexec] DEBUG 2018/08/28 11:43:14 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-0 Pod: glusterfs-cns-jlgq2 Command: lvcreate --autobackup=n --poolmetadatasize 8192K --chunksize 256K --size 1048576K --thin vg_80280148c66c9e91e0c27f64a751900f/tp_c3276a72cbfdfb7bd2902c2abfa2c469 --virtualsize 1048576K --name brick_7d951cd9233c2bafe3c5d611e3d8883f
Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
  Logical volume "brick_7d951cd9233c2bafe3c5d611e3d8883f" created.
[negroni] Started GET /queue/4d912af6697c58bab51dadea32d72059
[negroni] Completed 200 OK in 214.219µs
[negroni] Started GET /queue/7849db9f0fc271c365e91a686a04246f
[negroni] Completed 200 OK in 120.18µs
[kubeexec] DEBUG 2018/08/28 11:43:16 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-1 Pod: glusterfs-cns-qrfrz Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_6c0538bc7bed0679f0f595c61c72656f-brick_54f8b2d556bf6a30d3990382dccacc95
Result: meta-data=/dev/mapper/vg_6c0538bc7bed0679f0f595c61c72656f-brick_54f8b2d556bf6a30d3990382dccacc95 isize=512 agcount=8, agsize=32768 blks
         =          sectsz=512 attr=2, projid32bit=1
         =          crc=1 finobt=0, sparse=0
data     =          bsize=4096 blocks=262144, imaxpct=25
         =          sunit=64 swidth=64 blks
naming   =version 2 bsize=8192 ascii-ci=0 ftype=1
log      =internal log bsize=4096 blocks=2560, version=2
         =          sectsz=512 sunit=64 blks, lazy-count=1
realtime =none      extsz=4096 blocks=0, rtextents=0
[kubeexec] DEBUG 2018/08/28 11:43:16 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-1 Pod: glusterfs-cns-qrfrz Command: awk "BEGIN {print \"/dev/mapper/vg_6c0538bc7bed0679f0f595c61c72656f-brick_54f8b2d556bf6a30d3990382dccacc95 /var/lib/heketi/mounts/vg_6c0538bc7bed0679f0f595c61c72656f/brick_54f8b2d556bf6a30d3990382dccacc95 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
Result:
[kubeexec] DEBUG 2018/08/28 11:43:16 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-1 Pod: glusterfs-cns-qrfrz Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_6c0538bc7bed0679f0f595c61c72656f-brick_54f8b2d556bf6a30d3990382dccacc95 /var/lib/heketi/mounts/vg_6c0538bc7bed0679f0f595c61c72656f/brick_54f8b2d556bf6a30d3990382dccacc95
Result:
[kubeexec] DEBUG 2018/08/28 11:43:16 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-0 Pod: glusterfs-cns-jlgq2 Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_80280148c66c9e91e0c27f64a751900f-brick_7d951cd9233c2bafe3c5d611e3d8883f
Result: meta-data=/dev/mapper/vg_80280148c66c9e91e0c27f64a751900f-brick_7d951cd9233c2bafe3c5d611e3d8883f isize=512 agcount=8, agsize=32768 blks
         =          sectsz=512 attr=2, projid32bit=1
         =          crc=1 finobt=0, sparse=0
data     =          bsize=4096 blocks=262144, imaxpct=25
         =          sunit=64 swidth=64 blks
naming   =version 2 bsize=8192 ascii-ci=0 ftype=1
log      =internal log bsize=4096 blocks=2560, version=2
         =          sectsz=512 sunit=64 blks, lazy-count=1
realtime =none      extsz=4096 blocks=0, rtextents=0
[kubeexec] DEBUG 2018/08/28 11:43:16 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-1 Pod: glusterfs-cns-qrfrz Command: mkdir /var/lib/heketi/mounts/vg_6c0538bc7bed0679f0f595c61c72656f/brick_54f8b2d556bf6a30d3990382dccacc95/brick
Result:
[cmdexec] INFO 2018/08/28 11:43:16 Creating volume rally-fkx14ljxp9o267 with no durability
[kubeexec] DEBUG 2018/08/28 11:43:16 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-0 Pod: glusterfs-cns-jlgq2 Command: awk "BEGIN {print \"/dev/mapper/vg_80280148c66c9e91e0c27f64a751900f-brick_7d951cd9233c2bafe3c5d611e3d8883f /var/lib/heketi/mounts/vg_80280148c66c9e91e0c27f64a751900f/brick_7d951cd9233c2bafe3c5d611e3d8883f xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
Result:
[kubeexec] DEBUG 2018/08/28 11:43:17 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-0 Pod: glusterfs-cns-jlgq2 Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_80280148c66c9e91e0c27f64a751900f-brick_7d951cd9233c2bafe3c5d611e3d8883f /var/lib/heketi/mounts/vg_80280148c66c9e91e0c27f64a751900f/brick_7d951cd9233c2bafe3c5d611e3d8883f
Result:
[kubeexec] DEBUG 2018/08/28 11:43:17 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-0 Pod: glusterfs-cns-jlgq2 Command: mkdir /var/lib/heketi/mounts/vg_80280148c66c9e91e0c27f64a751900f/brick_7d951cd9233c2bafe3c5d611e3d8883f/brick
Result:
[cmdexec] INFO 2018/08/28 11:43:17 Creating volume rally-tljuuhzo4iht8d with no durability
[negroni] Started GET /queue/4d912af6697c58bab51dadea32d72059
[negroni] Completed 200 OK in 109.883µs
[negroni] Started GET /queue/7849db9f0fc271c365e91a686a04246f
[negroni] Completed 200 OK in 109.815µs
[kubeexec] DEBUG 2018/08/28 11:43:17 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-1 Pod: glusterfs-cns-qrfrz Command: gluster --mode=script volume create rally-fkx14ljxp9o267 10.70.46.10:/var/lib/heketi/mounts/vg_6c0538bc7bed0679f0f595c61c72656f/brick_54f8b2d556bf6a30d3990382dccacc95/brick
Result: volume create: rally-fkx14ljxp9o267: success: please start the volume to access data
[kubeexec] DEBUG 2018/08/28 11:43:17 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-0 Pod: glusterfs-cns-jlgq2 Command: gluster --mode=script volume create rally-tljuuhzo4iht8d 10.70.46.26:/var/lib/heketi/mounts/vg_80280148c66c9e91e0c27f64a751900f/brick_7d951cd9233c2bafe3c5d611e3d8883f/brick
Result: volume create: rally-tljuuhzo4iht8d: success: please start the volume to access data
[kubeexec] DEBUG 2018/08/28 11:43:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-0 Pod: glusterfs-cns-jlgq2 Command: gluster --mode=script volume start rally-tljuuhzo4iht8d
Result: volume start: rally-tljuuhzo4iht8d: success
[kubeexec] DEBUG 2018/08/28 11:43:18 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-1 Pod: glusterfs-cns-qrfrz Command: gluster --mode=script volume start rally-fkx14ljxp9o267
Result: volume start: rally-fkx14ljxp9o267: success
[heketi] INFO 2018/08/28 11:43:18 Create Volume succeeded
[asynchttp] INFO 2018/08/28 11:43:18 asynchttp.go:292: Completed job 7849db9f0fc271c365e91a686a04246f in 5.226129414s
[heketi] INFO 2018/08/28 11:43:18 Create Volume succeeded
[asynchttp] INFO 2018/08/28 11:43:18 asynchttp.go:292: Completed job 4d912af6697c58bab51dadea32d72059 in 5.406452391s
[negroni] Started GET /queue/4d912af6697c58bab51dadea32d72059
[negroni] Completed 303 See Other in 139.972µs
[negroni] Started GET /volumes/094c35c2fc6baf80fa9497f4fb393314
[negroni] Completed 200 OK in 4.232626ms
[negroni] Started POST /volumes/094c35c2fc6baf80fa9497f4fb393314/expand
[heketi] INFO 2018/08/28 11:43:19 Allocating brick set #0
[negroni] Started GET /queue/7849db9f0fc271c365e91a686a04246f
[negroni] Completed 303 See Other in 126.883µs
[negroni] Started GET /volumes/c1db889aef3227f9988c520891cdb889
[negroni] Completed 200 OK in 3.222616ms
[negroni] Completed 202 Accepted in 136.234771ms
[asynchttp] INFO 2018/08/28 11:43:19 asynchttp.go:288: Started job 5c5d7b82ad5b415eb4aefc6e88f9cb4c
[heketi] INFO 2018/08/28 11:43:19 Started async operation: Expand Volume
[heketi] INFO 2018/08/28 11:43:19 Creating brick b8381259199cde3925065e7c54e98386
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 129.777µs
[negroni] Started POST /volumes/c1db889aef3227f9988c520891cdb889/expand
[heketi] INFO 2018/08/28 11:43:19 Allocating brick set #0
[kubeexec] DEBUG 2018/08/28 11:43:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-2 Pod: glusterfs-cns-hzqg6 Command: mkdir -p /var/lib/heketi/mounts/vg_91c1336b9d8010eb5c52368eef886671/brick_b8381259199cde3925065e7c54e98386
Result:
[negroni] Completed 202 Accepted in 111.306614ms
[asynchttp] INFO 2018/08/28 11:43:19 asynchttp.go:288: Started job 4babb6131e96ebff51df1783a8123656
[heketi] INFO 2018/08/28 11:43:19 Started async operation: Expand Volume
[heketi] INFO 2018/08/28 11:43:19 Creating brick 09687c15792025019098e4cba8244507
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 105.04µs
[kubeexec] DEBUG 2018/08/28 11:43:19 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-0 Pod: glusterfs-cns-jlgq2 Command: mkdir -p /var/lib/heketi/mounts/vg_b1ebb7cf4e45c57d379df092a591ec0a/brick_09687c15792025019098e4cba8244507
Result:
[kubeexec] DEBUG 2018/08/28 11:43:20 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-2 Pod: glusterfs-cns-hzqg6 Command: lvcreate --autobackup=n --poolmetadatasize 8192K --chunksize 256K --size 1048576K --thin vg_91c1336b9d8010eb5c52368eef886671/tp_69ee637f55a586f983695e4c10f809fd --virtualsize 1048576K --name brick_b8381259199cde3925065e7c54e98386
Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
  Logical volume "brick_b8381259199cde3925065e7c54e98386" created.
[kubeexec] DEBUG 2018/08/28 11:43:20 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-0 Pod: glusterfs-cns-jlgq2 Command: lvcreate --autobackup=n --poolmetadatasize 8192K --chunksize 256K --size 1048576K --thin vg_b1ebb7cf4e45c57d379df092a591ec0a/tp_6d8bfcab38687a07c32ec7980fd037e2 --virtualsize 1048576K --name brick_09687c15792025019098e4cba8244507
Result: Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
  Logical volume "brick_09687c15792025019098e4cba8244507" created.
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 138.519µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 197.746µs
[kubeexec] DEBUG 2018/08/28 11:43:22 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-2 Pod: glusterfs-cns-hzqg6 Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_91c1336b9d8010eb5c52368eef886671-brick_b8381259199cde3925065e7c54e98386
Result: meta-data=/dev/mapper/vg_91c1336b9d8010eb5c52368eef886671-brick_b8381259199cde3925065e7c54e98386 isize=512 agcount=8, agsize=32768 blks
         =          sectsz=512 attr=2, projid32bit=1
         =          crc=1 finobt=0, sparse=0
data     =          bsize=4096 blocks=262144, imaxpct=25
         =          sunit=64 swidth=64 blks
naming   =version 2 bsize=8192 ascii-ci=0 ftype=1
log      =internal log bsize=4096 blocks=2560, version=2
         =          sectsz=512 sunit=64 blks, lazy-count=1
realtime =none      extsz=4096 blocks=0, rtextents=0
[kubeexec] DEBUG 2018/08/28 11:43:22 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-2 Pod: glusterfs-cns-hzqg6 Command: awk "BEGIN {print \"/dev/mapper/vg_91c1336b9d8010eb5c52368eef886671-brick_b8381259199cde3925065e7c54e98386 /var/lib/heketi/mounts/vg_91c1336b9d8010eb5c52368eef886671/brick_b8381259199cde3925065e7c54e98386 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
Result:
[kubeexec] DEBUG 2018/08/28 11:43:22 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-0 Pod: glusterfs-cns-jlgq2 Command: mkfs.xfs -i size=512 -n size=8192 /dev/mapper/vg_b1ebb7cf4e45c57d379df092a591ec0a-brick_09687c15792025019098e4cba8244507
Result: meta-data=/dev/mapper/vg_b1ebb7cf4e45c57d379df092a591ec0a-brick_09687c15792025019098e4cba8244507 isize=512 agcount=8, agsize=32768 blks
         =          sectsz=512 attr=2, projid32bit=1
         =          crc=1 finobt=0, sparse=0
data     =          bsize=4096 blocks=262144, imaxpct=25
         =          sunit=64 swidth=64 blks
naming   =version 2 bsize=8192 ascii-ci=0 ftype=1
log      =internal log bsize=4096 blocks=2560, version=2
         =          sectsz=512 sunit=64 blks, lazy-count=1
realtime =none      extsz=4096 blocks=0, rtextents=0
[kubeexec] DEBUG 2018/08/28 11:43:22 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-0 Pod: glusterfs-cns-jlgq2 Command: awk "BEGIN {print \"/dev/mapper/vg_b1ebb7cf4e45c57d379df092a591ec0a-brick_09687c15792025019098e4cba8244507 /var/lib/heketi/mounts/vg_b1ebb7cf4e45c57d379df092a591ec0a/brick_09687c15792025019098e4cba8244507 xfs rw,inode64,noatime,nouuid 1 2\" >> \"/var/lib/heketi/fstab\"}"
Result:
[kubeexec] DEBUG 2018/08/28 11:43:23 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-2 Pod: glusterfs-cns-hzqg6 Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_91c1336b9d8010eb5c52368eef886671-brick_b8381259199cde3925065e7c54e98386 /var/lib/heketi/mounts/vg_91c1336b9d8010eb5c52368eef886671/brick_b8381259199cde3925065e7c54e98386
Result:
[kubeexec] DEBUG 2018/08/28 11:43:23 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-0 Pod: glusterfs-cns-jlgq2 Command: mount -o rw,inode64,noatime,nouuid /dev/mapper/vg_b1ebb7cf4e45c57d379df092a591ec0a-brick_09687c15792025019098e4cba8244507 /var/lib/heketi/mounts/vg_b1ebb7cf4e45c57d379df092a591ec0a/brick_09687c15792025019098e4cba8244507
Result:
[kubeexec] DEBUG 2018/08/28 11:43:23 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-2 Pod: glusterfs-cns-hzqg6 Command: mkdir /var/lib/heketi/mounts/vg_91c1336b9d8010eb5c52368eef886671/brick_b8381259199cde3925065e7c54e98386/brick
Result:
[kubeexec] DEBUG 2018/08/28 11:43:23 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-0 Pod: glusterfs-cns-jlgq2 Command: mkdir /var/lib/heketi/mounts/vg_b1ebb7cf4e45c57d379df092a591ec0a/brick_09687c15792025019098e4cba8244507/brick
Result:
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 112.195µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 119.053µs
[kubeexec] DEBUG 2018/08/28 11:43:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-0 Pod: glusterfs-cns-jlgq2 Command: gluster --mode=script volume add-brick rally-tljuuhzo4iht8d 10.70.46.26:/var/lib/heketi/mounts/vg_b1ebb7cf4e45c57d379df092a591ec0a/brick_09687c15792025019098e4cba8244507/brick
Result: volume add-brick: success
[kubeexec] DEBUG 2018/08/28 11:43:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-2 Pod: glusterfs-cns-hzqg6 Command: gluster --mode=script volume add-brick rally-fkx14ljxp9o267 10.70.47.176:/var/lib/heketi/mounts/vg_91c1336b9d8010eb5c52368eef886671/brick_b8381259199cde3925065e7c54e98386/brick
Result: volume add-brick: success
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 197.033µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 121.709µs
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 123.581µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 184.075µs
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 132.752µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 127.146µs
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 196.135µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 204.625µs
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 131.761µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 137.108µs
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 187.409µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 166.385µs
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 115.629µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 184.524µs
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 185.775µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 115.562µs
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 121.138µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 172.749µs
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 130.084µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 177.836µs
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 158.472µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 190.459µs
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 132µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 115.085µs
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 134.479µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 227.276µs
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 190.442µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 189.206µs
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 219.832µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 113.956µs
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 215.342µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 188.052µs
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 120.906µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 151.088µs
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 128.865µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 182.959µs
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 183.533µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 183.069µs
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 207.156µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 188.209µs
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 120.788µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 120.204µs
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 181.376µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 178.959µs
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 180.082µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 187.686µs
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 111.537µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 198.601µs
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 186.499µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 202.572µs
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 185.906µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 134.504µs
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 140.234µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 172.11µs
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 113.463µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 120.856µs
[heketi] INFO 2018/08/28 11:44:21 Starting Node Health Status refresh
[cmdexec] INFO 2018/08/28 11:44:21 Check Glusterd service status in node vp-ansible-v311-app-cns-0
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 198.499µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 187.416µs
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 123.786µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 118.817µs
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 124.938µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 116.452µs
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 151.958µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 110.97µs
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 172.267µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 160.139µs
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 122.434µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 126.445µs
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 115.992µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 113.115µs
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 113.042µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 111.082µs
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 212.852µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 129.018µs
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 178.929µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 217.092µs
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 137.187µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 211.849µs
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 184.546µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 124.513µs
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 178.145µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 159.326µs
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 183.529µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 137.249µs
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 195.909µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 177.207µs
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 201.536µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 184.249µs
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 122.053µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 175.266µs
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 185.062µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 175.873µs
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 179.796µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 126.26µs
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 188.166µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 113.413µs
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 184.2µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 175.425µs
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 115.253µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 129.052µs
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 177.375µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 171.123µs
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 178.732µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 120.043µs
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 160.766µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 173.947µs
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 176.259µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 184.613µs
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 177.539µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 122.182µs
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 182.226µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 123.693µs
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 196.759µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 202.262µs
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 147.74µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 178.075µs
[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c
[negroni] Completed 200 OK in 200.484µs
[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656
[negroni] Completed 200 OK in 230.413µs
[kubeexec] ERROR 2018/08/28 11:45:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume rebalance rally-tljuuhzo4iht8d start] on glusterfs-cns-jlgq2: Err[command terminated with exit code 1]: Stdout [Error : Request timed out
]: Stderr []
[cmdexec] ERROR 2018/08/28 11:45:24 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:124: Unable to start rebalance on the volume &{[{/var/lib/heketi/mounts/vg_b1ebb7cf4e45c57d379df092a591ec0a/brick_09687c15792025019098e4cba8244507/brick 10.70.46.26}] rally-tljuuhzo4iht8d 0 [] 0 0 1 false}: Unable to execute command on glusterfs-cns-jlgq2:
[cmdexec] ERROR 2018/08/28 11:45:24 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:125: Action Required: run rebalance manually on the volume &{[{/var/lib/heketi/mounts/vg_b1ebb7cf4e45c57d379df092a591ec0a/brick_09687c15792025019098e4cba8244507/brick 10.70.46.26}] rally-tljuuhzo4iht8d 0 [] 0 0 1 false}
[kubeexec] DEBUG 2018/08/28 11:45:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-0 Pod: glusterfs-cns-jlgq2 Command: systemctl status glusterd
Result: ● glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2018-08-22 09:56:03 UTC; 6 days ago
  Process: 429 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 430 (glusterd)
   CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod630cf184_a5f1_11e8_9a3f_005056a549ca.slice/docker-24ed69a095449460f61deded6aaa6a7de0753c88a3cf0a5e07b85b9cd4a92805.scope/system.slice/glusterd.service
           ├─  430 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
           ├─  852 /usr/sbin/glusterfsd -s 10.70.46.26 --volfile-id heketidbstorage.10.70.46.26.var-lib-heketi-mounts-vg_b1ebb7cf4e45c57d379df092a591ec0a-brick_cdc9f126aeab5fc4af31b482980e213e-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.26-var-lib-heketi-mounts-vg_b1ebb7cf4e45c57d379df092a591ec0a-brick_cdc9f126aeab5fc4af31b482980e213e-brick.pid -S /var/run/gluster/6a2a03eab842b37b7169d67e7d04d6c2.socket --brick-name /var/lib/heketi/mounts/vg_b1ebb7cf4e45c57d379df092a591ec0a/brick_cdc9f126aeab5fc4af31b482980e213e/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_b1ebb7cf4e45c57d379df092a591ec0a-brick_cdc9f126aeab5fc4af31b482980e213e-brick.log --xlator-option *-posix.glusterd-uuid=0202e093-70b5-4ddd-927d-bdb37c3daa5f --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152
           ├─ 1267 /usr/sbin/glusterfsd -s 10.70.46.26 --volfile-id rally-tljuuhzo4iht8d.10.70.46.26.var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_7d951cd9233c2bafe3c5d611e3d8883f-brick -p /var/run/gluster/vols/rally-tljuuhzo4iht8d/10.70.46.26-var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_7d951cd9233c2bafe3c5d611e3d8883f-brick.pid -S /var/run/gluster/1aa61f85bdaf434a953a7efb3d468b98.socket --brick-name /var/lib/heketi/mounts/vg_80280148c66c9e91e0c27f64a751900f/brick_7d951cd9233c2bafe3c5d611e3d8883f/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_7d951cd9233c2bafe3c5d611e3d8883f-brick.log --xlator-option *-posix.glusterd-uuid=0202e093-70b5-4ddd-927d-bdb37c3daa5f --brick-port 49154 --xlator-option rally-tljuuhzo4iht8d-server.listen-port=49154
           ├─29575 /usr/sbin/glusterfsd -s 10.70.46.26 --volfile-id vol_a833a9314f4557589a9d874105357140.10.70.46.26.var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_3510a38a8d5d693a612baa22d916237b-brick -p /var/run/gluster/vols/vol_a833a9314f4557589a9d874105357140/10.70.46.26-var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_3510a38a8d5d693a612baa22d916237b-brick.pid -S /var/run/gluster/4d30a5ebbf6049f7294f2c96c0889f0e.socket --brick-name /var/lib/heketi/mounts/vg_80280148c66c9e91e0c27f64a751900f/brick_3510a38a8d5d693a612baa22d916237b/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_3510a38a8d5d693a612baa22d916237b-brick.log --xlator-option *-posix.glusterd-uuid=0202e093-70b5-4ddd-927d-bdb37c3daa5f --brick-port 49153 --xlator-option vol_a833a9314f4557589a9d874105357140-server.listen-port=49153
           └─31589 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/13b5dab3bb9b7241888ca2821e42e2c0.socket --xlator-option *replicate*.node-uuid=0202e093-70b5-4ddd-927d-bdb37c3daa5f
[heketi] INFO 2018/08/28 11:45:24 Periodic health check status: node 064f1f469119e5c69a56a8c81b5fd96a up=true
[cmdexec] INFO 2018/08/28 11:45:24 Check Glusterd service status in node vp-ansible-v311-app-cns-1
[kubeexec] ERROR 2018/08/28 11:45:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume rebalance rally-fkx14ljxp9o267 start] on glusterfs-cns-hzqg6: Err[command terminated with exit code 1]: Stdout [Error : Request timed out
]: Stderr []
[cmdexec] ERROR 2018/08/28 11:45:24 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:124: Unable to start rebalance on the volume &{[{/var/lib/heketi/mounts/vg_91c1336b9d8010eb5c52368eef886671/brick_b8381259199cde3925065e7c54e98386/brick 10.70.47.176}] rally-fkx14ljxp9o267 0 [] 0 0 1 false}: Unable to execute command on glusterfs-cns-hzqg6:
[cmdexec] ERROR 2018/08/28 11:45:24 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:125: Action Required: run rebalance manually on the volume &{[{/var/lib/heketi/mounts/vg_91c1336b9d8010eb5c52368eef886671/brick_b8381259199cde3925065e7c54e98386/brick 10.70.47.176}] rally-fkx14ljxp9o267 0 [] 0 0 1 false}
[heketi] INFO 2018/08/28 11:45:24
Expand Volume succeeded >[asynchttp] INFO 2018/08/28 11:45:24 asynchttp.go:292: Completed job 4babb6131e96ebff51df1783a8123656 in 2m4.681097315s >[kubeexec] DEBUG 2018/08/28 11:45:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-1 Pod: glusterfs-cns-qrfrz Command: systemctl status glusterd >Result: â glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Wed 2018-08-22 09:55:53 UTC; 6 days ago > Process: 431 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 432 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod631c8e73_a5f1_11e8_9a3f_005056a549ca.slice/docker-0c00732c129d13630118d564e7628c6b1a7329d974aa476e520ae3be5990a263.scope/system.slice/glusterd.service > ââ 432 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ââ 856 /usr/sbin/glusterfsd -s 10.70.46.10 --volfile-id heketidbstorage.10.70.46.10.var-lib-heketi-mounts-vg_557622450e27d9663fd36087fbe35bee-brick_55d03facb36899f57cdabd107689eba0-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.10-var-lib-heketi-mounts-vg_557622450e27d9663fd36087fbe35bee-brick_55d03facb36899f57cdabd107689eba0-brick.pid -S /var/run/gluster/eb9a06fb398d5e1d9578e0fc2dc82d85.socket --brick-name /var/lib/heketi/mounts/vg_557622450e27d9663fd36087fbe35bee/brick_55d03facb36899f57cdabd107689eba0/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_557622450e27d9663fd36087fbe35bee-brick_55d03facb36899f57cdabd107689eba0-brick.log --xlator-option *-posix.glusterd-uuid=a14e3b75-4e27-49cc-9591-52deb56b0223 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ââ29465 /usr/sbin/glusterfsd -s 10.70.46.10 --volfile-id 
rally-fkx14ljxp9o267.10.70.46.10.var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_54f8b2d556bf6a30d3990382dccacc95-brick -p /var/run/gluster/vols/rally-fkx14ljxp9o267/10.70.46.10-var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_54f8b2d556bf6a30d3990382dccacc95-brick.pid -S /var/run/gluster/6801d81632622820f0b62ec2150f3d75.socket --brick-name /var/lib/heketi/mounts/vg_6c0538bc7bed0679f0f595c61c72656f/brick_54f8b2d556bf6a30d3990382dccacc95/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_54f8b2d556bf6a30d3990382dccacc95-brick.log --xlator-option *-posix.glusterd-uuid=a14e3b75-4e27-49cc-9591-52deb56b0223 --brick-port 49154 --xlator-option rally-fkx14ljxp9o267-server.listen-port=49154 > ââ29683 /usr/sbin/glusterfsd -s 10.70.46.10 --volfile-id vol_a833a9314f4557589a9d874105357140.10.70.46.10.var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_3dc24288734a047a8fcc00ab2c7a0974-brick -p /var/run/gluster/vols/vol_a833a9314f4557589a9d874105357140/10.70.46.10-var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_3dc24288734a047a8fcc00ab2c7a0974-brick.pid -S /var/run/gluster/43582dc96bce5442618f73c71cf50bbd.socket --brick-name /var/lib/heketi/mounts/vg_6c0538bc7bed0679f0f595c61c72656f/brick_3dc24288734a047a8fcc00ab2c7a0974/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_3dc24288734a047a8fcc00ab2c7a0974-brick.log --xlator-option *-posix.glusterd-uuid=a14e3b75-4e27-49cc-9591-52deb56b0223 --brick-port 49153 --xlator-option vol_a833a9314f4557589a9d874105357140-server.listen-port=49153 > ââ31513 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/97a465235ca3ff589c72ec017f4ab551.socket --xlator-option *replicate*.node-uuid=a14e3b75-4e27-49cc-9591-52deb56b0223 >[heketi] INFO 2018/08/28 11:45:24 Periodic health check 
status: node 284f3e3a4c2fd7c0e78b2e759afa9847 up=true >[cmdexec] INFO 2018/08/28 11:45:24 Check Glusterd service status in node vp-ansible-v311-app-cns-2 >[heketi] INFO 2018/08/28 11:45:24 Expand Volume succeeded >[asynchttp] INFO 2018/08/28 11:45:24 asynchttp.go:292: Completed job 5c5d7b82ad5b415eb4aefc6e88f9cb4c in 2m4.9551472s >[kubeexec] DEBUG 2018/08/28 11:45:24 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-2 Pod: glusterfs-cns-hzqg6 Command: systemctl status glusterd >Result: â glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Wed 2018-08-22 09:56:11 UTC; 6 days ago > Process: 431 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 432 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6310af79_a5f1_11e8_9a3f_005056a549ca.slice/docker-55cc0a963be6c53dd856f9025694fb7897b40fb9112aca2b194ddc65ec54d76f.scope/system.slice/glusterd.service > ââ 432 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ââ 842 /usr/sbin/glusterfsd -s 10.70.47.176 --volfile-id heketidbstorage.10.70.47.176.var-lib-heketi-mounts-vg_b2c811c50490ee9832f1e0ecfb15f660-brick_7251b95b366cb335bf154a71d67f1250-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.176-var-lib-heketi-mounts-vg_b2c811c50490ee9832f1e0ecfb15f660-brick_7251b95b366cb335bf154a71d67f1250-brick.pid -S /var/run/gluster/fb2b61aa838bbe43a440ffc2986e9bc7.socket --brick-name /var/lib/heketi/mounts/vg_b2c811c50490ee9832f1e0ecfb15f660/brick_7251b95b366cb335bf154a71d67f1250/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_b2c811c50490ee9832f1e0ecfb15f660-brick_7251b95b366cb335bf154a71d67f1250-brick.log --xlator-option *-posix.glusterd-uuid=dda5114b-4ef3-40c8-948c-cdb3b1b56f4d --brick-port 49152 
--xlator-option heketidbstorage-server.listen-port=49152 > ââ 6707 /usr/sbin/glusterfsd -s 10.70.47.176 --volfile-id rally-fkx14ljxp9o267.10.70.47.176.var-lib-heketi-mounts-vg_91c1336b9d8010eb5c52368eef886671-brick_b8381259199cde3925065e7c54e98386-brick -p /var/run/gluster/vols/rally-fkx14ljxp9o267/10.70.47.176-var-lib-heketi-mounts-vg_91c1336b9d8010eb5c52368eef886671-brick_b8381259199cde3925065e7c54e98386-brick.pid -S /var/run/gluster/61260ed6fbb5d2443c6491e364f5216c.socket --brick-name /var/lib/heketi/mounts/vg_91c1336b9d8010eb5c52368eef886671/brick_b8381259199cde3925065e7c54e98386/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_91c1336b9d8010eb5c52368eef886671-brick_b8381259199cde3925065e7c54e98386-brick.log --xlator-option *-posix.glusterd-uuid=dda5114b-4ef3-40c8-948c-cdb3b1b56f4d --brick-port 49155 --xlator-option rally-fkx14ljxp9o267-server.listen-port=49155 > ââ29609 /usr/sbin/glusterfsd -s 10.70.47.176 --volfile-id vol_a833a9314f4557589a9d874105357140.10.70.47.176.var-lib-heketi-mounts-vg_91c1336b9d8010eb5c52368eef886671-brick_88fc94292cf739f1608b4abb3a42a5aa-brick -p /var/run/gluster/vols/vol_a833a9314f4557589a9d874105357140/10.70.47.176-var-lib-heketi-mounts-vg_91c1336b9d8010eb5c52368eef886671-brick_88fc94292cf739f1608b4abb3a42a5aa-brick.pid -S /var/run/gluster/a8eee47923e596454bd4f1c2e122327b.socket --brick-name /var/lib/heketi/mounts/vg_91c1336b9d8010eb5c52368eef886671/brick_88fc94292cf739f1608b4abb3a42a5aa/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_91c1336b9d8010eb5c52368eef886671-brick_88fc94292cf739f1608b4abb3a42a5aa-brick.log --xlator-option *-posix.glusterd-uuid=dda5114b-4ef3-40c8-948c-cdb3b1b56f4d --brick-port 49153 --xlator-option vol_a833a9314f4557589a9d874105357140-server.listen-port=49153 > ââ31591 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/645e4eea62d3f97263e806a381eeabda.socket 
--xlator-option *replicate*.node-uuid=dda5114b-4ef3-40c8-948c-cdb3b1b56f4d >[heketi] INFO 2018/08/28 11:45:24 Periodic health check status: node f48c1fefdee6989a05560260afcd0a2d up=true >[heketi] INFO 2018/08/28 11:45:24 Cleaned 0 nodes from health cache >[negroni] Started GET /queue/5c5d7b82ad5b415eb4aefc6e88f9cb4c >[negroni] Completed 303 See Other in 192.496µs >[negroni] Started GET /volumes/094c35c2fc6baf80fa9497f4fb393314 >[negroni] Completed 200 OK in 6.863635ms >[negroni] Started DELETE /volumes/094c35c2fc6baf80fa9497f4fb393314 >[negroni] Completed 202 Accepted in 158.802333ms >[asynchttp] INFO 2018/08/28 11:45:25 asynchttp.go:288: Started job 5089afd05668f88c7772e3b3f47248ba >[heketi] INFO 2018/08/28 11:45:25 Started async operation: Delete Volume >[negroni] Started GET /queue/5089afd05668f88c7772e3b3f47248ba >[negroni] Completed 200 OK in 162.401µs >[kubeexec] DEBUG 2018/08/28 11:45:25 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-1 Pod: glusterfs-cns-qrfrz Command: gluster --mode=script snapshot list rally-fkx14ljxp9o267 --xml >Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> ><cliOutput> > <opRet>0</opRet> > <opErrno>0</opErrno> > <opErrstr/> > <snapList> > <count>0</count> > </snapList> ></cliOutput> >[negroni] Started GET /queue/4babb6131e96ebff51df1783a8123656 >[negroni] Completed 303 See Other in 137.163µs >[negroni] Started GET /volumes/c1db889aef3227f9988c520891cdb889 >[negroni] Completed 200 OK in 2.297332ms >[kubeexec] ERROR 2018/08/28 11:45:25 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop rally-fkx14ljxp9o267 force] on glusterfs-cns-qrfrz: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: rally-fkx14ljxp9o267: failed: Another transaction is in progress for rally-fkx14ljxp9o267. Please try again after sometime. 
>]
>[cmdexec] ERROR 2018/08/28 11:45:25 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:145: Unable to stop volume rally-fkx14ljxp9o267: Unable to execute command on glusterfs-cns-qrfrz: volume stop: rally-fkx14ljxp9o267: failed: Another transaction is in progress for rally-fkx14ljxp9o267. Please try again after sometime.
>[negroni] Started DELETE /volumes/c1db889aef3227f9988c520891cdb889
>[negroni] Completed 202 Accepted in 132.073246ms
>[asynchttp] INFO 2018/08/28 11:45:25 asynchttp.go:288: Started job e8ae4e03ae059ee4acd4e77f903cee39
>[heketi] INFO 2018/08/28 11:45:25 Started async operation: Delete Volume
>[negroni] Started GET /queue/e8ae4e03ae059ee4acd4e77f903cee39
>[negroni] Completed 200 OK in 108.031µs
>[kubeexec] ERROR 2018/08/28 11:45:25 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete rally-fkx14ljxp9o267] on glusterfs-cns-qrfrz: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: rally-fkx14ljxp9o267: failed: Another transaction is in progress for rally-fkx14ljxp9o267. Please try again after sometime.
>]
>[cmdexec] ERROR 2018/08/28 11:45:25 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:154: Unable to delete volume rally-fkx14ljxp9o267: Unable to execute command on glusterfs-cns-qrfrz: volume delete: rally-fkx14ljxp9o267: failed: Another transaction is in progress for rally-fkx14ljxp9o267. Please try again after sometime.
>[heketi] ERROR 2018/08/28 11:45:25 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:673: Unable to delete volume: Unable to delete volume rally-fkx14ljxp9o267: Unable to execute command on glusterfs-cns-qrfrz: volume delete: rally-fkx14ljxp9o267: failed: Another transaction is in progress for rally-fkx14ljxp9o267. Please try again after sometime.
>[heketi] ERROR 2018/08/28 11:45:25 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:433: Error executing delete volume: Unable to delete volume rally-fkx14ljxp9o267: Unable to execute command on glusterfs-cns-qrfrz: volume delete: rally-fkx14ljxp9o267: failed: Another transaction is in progress for rally-fkx14ljxp9o267. Please try again after sometime.
>[asynchttp] INFO 2018/08/28 11:45:25 asynchttp.go:292: Completed job 5089afd05668f88c7772e3b3f47248ba in 671.34354ms
>[heketi] ERROR 2018/08/28 11:45:25 /src/github.com/heketi/heketi/apps/glusterfs/operations_manage.go:113: Delete Volume Failed: Unable to delete volume rally-fkx14ljxp9o267: Unable to execute command on glusterfs-cns-qrfrz: volume delete: rally-fkx14ljxp9o267: failed: Another transaction is in progress for rally-fkx14ljxp9o267. Please try again after sometime.
>[kubeexec] DEBUG 2018/08/28 11:45:25 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-0 Pod: glusterfs-cns-jlgq2 Command: gluster --mode=script snapshot list rally-tljuuhzo4iht8d --xml
>Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
><cliOutput>
> <opRet>0</opRet>
> <opErrno>0</opErrno>
> <opErrstr/>
> <snapList>
> <count>0</count>
> </snapList>
></cliOutput>
>[kubeexec] ERROR 2018/08/28 11:45:26 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume stop rally-tljuuhzo4iht8d force] on glusterfs-cns-jlgq2: Err[command terminated with exit code 1]: Stdout []: Stderr [volume stop: rally-tljuuhzo4iht8d: failed: Another transaction is in progress for rally-tljuuhzo4iht8d. Please try again after sometime.
>]
>[cmdexec] ERROR 2018/08/28 11:45:26 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:145: Unable to stop volume rally-tljuuhzo4iht8d: Unable to execute command on glusterfs-cns-jlgq2: volume stop: rally-tljuuhzo4iht8d: failed: Another transaction is in progress for rally-tljuuhzo4iht8d. Please try again after sometime.
>[kubeexec] ERROR 2018/08/28 11:45:26 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:242: Failed to run command [gluster --mode=script volume delete rally-tljuuhzo4iht8d] on glusterfs-cns-jlgq2: Err[command terminated with exit code 1]: Stdout []: Stderr [volume delete: rally-tljuuhzo4iht8d: failed: Another transaction is in progress for rally-tljuuhzo4iht8d. Please try again after sometime.
>]
>[cmdexec] ERROR 2018/08/28 11:45:26 /src/github.com/heketi/heketi/executors/cmdexec/volume.go:154: Unable to delete volume rally-tljuuhzo4iht8d: Unable to execute command on glusterfs-cns-jlgq2: volume delete: rally-tljuuhzo4iht8d: failed: Another transaction is in progress for rally-tljuuhzo4iht8d. Please try again after sometime.
>[heketi] ERROR 2018/08/28 11:45:26 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:673: Unable to delete volume: Unable to delete volume rally-tljuuhzo4iht8d: Unable to execute command on glusterfs-cns-jlgq2: volume delete: rally-tljuuhzo4iht8d: failed: Another transaction is in progress for rally-tljuuhzo4iht8d. Please try again after sometime.
>[heketi] ERROR 2018/08/28 11:45:26 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:433: Error executing delete volume: Unable to delete volume rally-tljuuhzo4iht8d: Unable to execute command on glusterfs-cns-jlgq2: volume delete: rally-tljuuhzo4iht8d: failed: Another transaction is in progress for rally-tljuuhzo4iht8d. Please try again after sometime.
>[asynchttp] INFO 2018/08/28 11:45:26 asynchttp.go:292: Completed job e8ae4e03ae059ee4acd4e77f903cee39 in 625.237002ms
>[heketi] ERROR 2018/08/28 11:45:26 /src/github.com/heketi/heketi/apps/glusterfs/operations_manage.go:113: Delete Volume Failed: Unable to delete volume rally-tljuuhzo4iht8d: Unable to execute command on glusterfs-cns-jlgq2: volume delete: rally-tljuuhzo4iht8d: failed: Another transaction is in progress for rally-tljuuhzo4iht8d. Please try again after sometime.
>[negroni] Started GET /queue/5089afd05668f88c7772e3b3f47248ba >[negroni] Completed 500 Internal Server Error in 112.532µs >[negroni] Started GET /queue/e8ae4e03ae059ee4acd4e77f903cee39 >[negroni] Completed 500 Internal Server Error in 141.338µs >[heketi] INFO 2018/08/28 11:46:21 Starting Node Health Status refresh >[cmdexec] INFO 2018/08/28 11:46:21 Check Glusterd service status in node vp-ansible-v311-app-cns-0 >[kubeexec] DEBUG 2018/08/28 11:46:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-0 Pod: glusterfs-cns-jlgq2 Command: systemctl status glusterd >Result: â glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Wed 2018-08-22 09:56:03 UTC; 6 days ago > Process: 429 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 430 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod630cf184_a5f1_11e8_9a3f_005056a549ca.slice/docker-24ed69a095449460f61deded6aaa6a7de0753c88a3cf0a5e07b85b9cd4a92805.scope/system.slice/glusterd.service > ââ 430 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ââ 852 /usr/sbin/glusterfsd -s 10.70.46.26 --volfile-id heketidbstorage.10.70.46.26.var-lib-heketi-mounts-vg_b1ebb7cf4e45c57d379df092a591ec0a-brick_cdc9f126aeab5fc4af31b482980e213e-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.26-var-lib-heketi-mounts-vg_b1ebb7cf4e45c57d379df092a591ec0a-brick_cdc9f126aeab5fc4af31b482980e213e-brick.pid -S /var/run/gluster/6a2a03eab842b37b7169d67e7d04d6c2.socket --brick-name /var/lib/heketi/mounts/vg_b1ebb7cf4e45c57d379df092a591ec0a/brick_cdc9f126aeab5fc4af31b482980e213e/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_b1ebb7cf4e45c57d379df092a591ec0a-brick_cdc9f126aeab5fc4af31b482980e213e-brick.log --xlator-option 
*-posix.glusterd-uuid=0202e093-70b5-4ddd-927d-bdb37c3daa5f --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ââ 1267 /usr/sbin/glusterfsd -s 10.70.46.26 --volfile-id rally-tljuuhzo4iht8d.10.70.46.26.var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_7d951cd9233c2bafe3c5d611e3d8883f-brick -p /var/run/gluster/vols/rally-tljuuhzo4iht8d/10.70.46.26-var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_7d951cd9233c2bafe3c5d611e3d8883f-brick.pid -S /var/run/gluster/1aa61f85bdaf434a953a7efb3d468b98.socket --brick-name /var/lib/heketi/mounts/vg_80280148c66c9e91e0c27f64a751900f/brick_7d951cd9233c2bafe3c5d611e3d8883f/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_7d951cd9233c2bafe3c5d611e3d8883f-brick.log --xlator-option *-posix.glusterd-uuid=0202e093-70b5-4ddd-927d-bdb37c3daa5f --brick-port 49154 --xlator-option rally-tljuuhzo4iht8d-server.listen-port=49154 > ââ29575 /usr/sbin/glusterfsd -s 10.70.46.26 --volfile-id vol_a833a9314f4557589a9d874105357140.10.70.46.26.var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_3510a38a8d5d693a612baa22d916237b-brick -p /var/run/gluster/vols/vol_a833a9314f4557589a9d874105357140/10.70.46.26-var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_3510a38a8d5d693a612baa22d916237b-brick.pid -S /var/run/gluster/4d30a5ebbf6049f7294f2c96c0889f0e.socket --brick-name /var/lib/heketi/mounts/vg_80280148c66c9e91e0c27f64a751900f/brick_3510a38a8d5d693a612baa22d916237b/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_3510a38a8d5d693a612baa22d916237b-brick.log --xlator-option *-posix.glusterd-uuid=0202e093-70b5-4ddd-927d-bdb37c3daa5f --brick-port 49153 --xlator-option vol_a833a9314f4557589a9d874105357140-server.listen-port=49153 > ââ31589 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log 
-S /var/run/gluster/13b5dab3bb9b7241888ca2821e42e2c0.socket --xlator-option *replicate*.node-uuid=0202e093-70b5-4ddd-927d-bdb37c3daa5f >[heketi] INFO 2018/08/28 11:46:21 Periodic health check status: node 064f1f469119e5c69a56a8c81b5fd96a up=true >[cmdexec] INFO 2018/08/28 11:46:21 Check Glusterd service status in node vp-ansible-v311-app-cns-1 >[kubeexec] DEBUG 2018/08/28 11:46:22 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-1 Pod: glusterfs-cns-qrfrz Command: systemctl status glusterd >Result: â glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Wed 2018-08-22 09:55:53 UTC; 6 days ago > Process: 431 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 432 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod631c8e73_a5f1_11e8_9a3f_005056a549ca.slice/docker-0c00732c129d13630118d564e7628c6b1a7329d974aa476e520ae3be5990a263.scope/system.slice/glusterd.service > ââ 432 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ââ 856 /usr/sbin/glusterfsd -s 10.70.46.10 --volfile-id heketidbstorage.10.70.46.10.var-lib-heketi-mounts-vg_557622450e27d9663fd36087fbe35bee-brick_55d03facb36899f57cdabd107689eba0-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.10-var-lib-heketi-mounts-vg_557622450e27d9663fd36087fbe35bee-brick_55d03facb36899f57cdabd107689eba0-brick.pid -S /var/run/gluster/eb9a06fb398d5e1d9578e0fc2dc82d85.socket --brick-name /var/lib/heketi/mounts/vg_557622450e27d9663fd36087fbe35bee/brick_55d03facb36899f57cdabd107689eba0/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_557622450e27d9663fd36087fbe35bee-brick_55d03facb36899f57cdabd107689eba0-brick.log --xlator-option *-posix.glusterd-uuid=a14e3b75-4e27-49cc-9591-52deb56b0223 --brick-port 49152 
--xlator-option heketidbstorage-server.listen-port=49152 > ââ29465 /usr/sbin/glusterfsd -s 10.70.46.10 --volfile-id rally-fkx14ljxp9o267.10.70.46.10.var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_54f8b2d556bf6a30d3990382dccacc95-brick -p /var/run/gluster/vols/rally-fkx14ljxp9o267/10.70.46.10-var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_54f8b2d556bf6a30d3990382dccacc95-brick.pid -S /var/run/gluster/6801d81632622820f0b62ec2150f3d75.socket --brick-name /var/lib/heketi/mounts/vg_6c0538bc7bed0679f0f595c61c72656f/brick_54f8b2d556bf6a30d3990382dccacc95/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_54f8b2d556bf6a30d3990382dccacc95-brick.log --xlator-option *-posix.glusterd-uuid=a14e3b75-4e27-49cc-9591-52deb56b0223 --brick-port 49154 --xlator-option rally-fkx14ljxp9o267-server.listen-port=49154 > ââ29683 /usr/sbin/glusterfsd -s 10.70.46.10 --volfile-id vol_a833a9314f4557589a9d874105357140.10.70.46.10.var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_3dc24288734a047a8fcc00ab2c7a0974-brick -p /var/run/gluster/vols/vol_a833a9314f4557589a9d874105357140/10.70.46.10-var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_3dc24288734a047a8fcc00ab2c7a0974-brick.pid -S /var/run/gluster/43582dc96bce5442618f73c71cf50bbd.socket --brick-name /var/lib/heketi/mounts/vg_6c0538bc7bed0679f0f595c61c72656f/brick_3dc24288734a047a8fcc00ab2c7a0974/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_3dc24288734a047a8fcc00ab2c7a0974-brick.log --xlator-option *-posix.glusterd-uuid=a14e3b75-4e27-49cc-9591-52deb56b0223 --brick-port 49153 --xlator-option vol_a833a9314f4557589a9d874105357140-server.listen-port=49153 > ââ31513 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/97a465235ca3ff589c72ec017f4ab551.socket --xlator-option 
*replicate*.node-uuid=a14e3b75-4e27-49cc-9591-52deb56b0223 >[heketi] INFO 2018/08/28 11:46:22 Periodic health check status: node 284f3e3a4c2fd7c0e78b2e759afa9847 up=true >[cmdexec] INFO 2018/08/28 11:46:22 Check Glusterd service status in node vp-ansible-v311-app-cns-2 >[kubeexec] DEBUG 2018/08/28 11:46:22 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-2 Pod: glusterfs-cns-hzqg6 Command: systemctl status glusterd >Result: â glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Wed 2018-08-22 09:56:11 UTC; 6 days ago > Process: 431 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 432 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6310af79_a5f1_11e8_9a3f_005056a549ca.slice/docker-55cc0a963be6c53dd856f9025694fb7897b40fb9112aca2b194ddc65ec54d76f.scope/system.slice/glusterd.service > ââ 432 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ââ 842 /usr/sbin/glusterfsd -s 10.70.47.176 --volfile-id heketidbstorage.10.70.47.176.var-lib-heketi-mounts-vg_b2c811c50490ee9832f1e0ecfb15f660-brick_7251b95b366cb335bf154a71d67f1250-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.176-var-lib-heketi-mounts-vg_b2c811c50490ee9832f1e0ecfb15f660-brick_7251b95b366cb335bf154a71d67f1250-brick.pid -S /var/run/gluster/fb2b61aa838bbe43a440ffc2986e9bc7.socket --brick-name /var/lib/heketi/mounts/vg_b2c811c50490ee9832f1e0ecfb15f660/brick_7251b95b366cb335bf154a71d67f1250/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_b2c811c50490ee9832f1e0ecfb15f660-brick_7251b95b366cb335bf154a71d67f1250-brick.log --xlator-option *-posix.glusterd-uuid=dda5114b-4ef3-40c8-948c-cdb3b1b56f4d --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ââ 6707 
/usr/sbin/glusterfsd -s 10.70.47.176 --volfile-id rally-fkx14ljxp9o267.10.70.47.176.var-lib-heketi-mounts-vg_91c1336b9d8010eb5c52368eef886671-brick_b8381259199cde3925065e7c54e98386-brick -p /var/run/gluster/vols/rally-fkx14ljxp9o267/10.70.47.176-var-lib-heketi-mounts-vg_91c1336b9d8010eb5c52368eef886671-brick_b8381259199cde3925065e7c54e98386-brick.pid -S /var/run/gluster/61260ed6fbb5d2443c6491e364f5216c.socket --brick-name /var/lib/heketi/mounts/vg_91c1336b9d8010eb5c52368eef886671/brick_b8381259199cde3925065e7c54e98386/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_91c1336b9d8010eb5c52368eef886671-brick_b8381259199cde3925065e7c54e98386-brick.log --xlator-option *-posix.glusterd-uuid=dda5114b-4ef3-40c8-948c-cdb3b1b56f4d --brick-port 49155 --xlator-option rally-fkx14ljxp9o267-server.listen-port=49155 > ââ29609 /usr/sbin/glusterfsd -s 10.70.47.176 --volfile-id vol_a833a9314f4557589a9d874105357140.10.70.47.176.var-lib-heketi-mounts-vg_91c1336b9d8010eb5c52368eef886671-brick_88fc94292cf739f1608b4abb3a42a5aa-brick -p /var/run/gluster/vols/vol_a833a9314f4557589a9d874105357140/10.70.47.176-var-lib-heketi-mounts-vg_91c1336b9d8010eb5c52368eef886671-brick_88fc94292cf739f1608b4abb3a42a5aa-brick.pid -S /var/run/gluster/a8eee47923e596454bd4f1c2e122327b.socket --brick-name /var/lib/heketi/mounts/vg_91c1336b9d8010eb5c52368eef886671/brick_88fc94292cf739f1608b4abb3a42a5aa/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_91c1336b9d8010eb5c52368eef886671-brick_88fc94292cf739f1608b4abb3a42a5aa-brick.log --xlator-option *-posix.glusterd-uuid=dda5114b-4ef3-40c8-948c-cdb3b1b56f4d --brick-port 49153 --xlator-option vol_a833a9314f4557589a9d874105357140-server.listen-port=49153 > ââ31591 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/645e4eea62d3f97263e806a381eeabda.socket --xlator-option *replicate*.node-uuid=dda5114b-4ef3-40c8-948c-cdb3b1b56f4d 
>[heketi] INFO 2018/08/28 11:46:22 Periodic health check status: node f48c1fefdee6989a05560260afcd0a2d up=true >[heketi] INFO 2018/08/28 11:46:22 Cleaned 0 nodes from health cache >[heketi] INFO 2018/08/28 11:48:21 Starting Node Health Status refresh >[cmdexec] INFO 2018/08/28 11:48:21 Check Glusterd service status in node vp-ansible-v311-app-cns-0 >[kubeexec] DEBUG 2018/08/28 11:48:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-0 Pod: glusterfs-cns-jlgq2 Command: systemctl status glusterd >Result: â glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Wed 2018-08-22 09:56:03 UTC; 6 days ago > Process: 429 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 430 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod630cf184_a5f1_11e8_9a3f_005056a549ca.slice/docker-24ed69a095449460f61deded6aaa6a7de0753c88a3cf0a5e07b85b9cd4a92805.scope/system.slice/glusterd.service > ââ 430 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ââ 852 /usr/sbin/glusterfsd -s 10.70.46.26 --volfile-id heketidbstorage.10.70.46.26.var-lib-heketi-mounts-vg_b1ebb7cf4e45c57d379df092a591ec0a-brick_cdc9f126aeab5fc4af31b482980e213e-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.26-var-lib-heketi-mounts-vg_b1ebb7cf4e45c57d379df092a591ec0a-brick_cdc9f126aeab5fc4af31b482980e213e-brick.pid -S /var/run/gluster/6a2a03eab842b37b7169d67e7d04d6c2.socket --brick-name /var/lib/heketi/mounts/vg_b1ebb7cf4e45c57d379df092a591ec0a/brick_cdc9f126aeab5fc4af31b482980e213e/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_b1ebb7cf4e45c57d379df092a591ec0a-brick_cdc9f126aeab5fc4af31b482980e213e-brick.log --xlator-option *-posix.glusterd-uuid=0202e093-70b5-4ddd-927d-bdb37c3daa5f --brick-port 
49152 --xlator-option heketidbstorage-server.listen-port=49152 > ├─ 1267 /usr/sbin/glusterfsd -s 10.70.46.26 --volfile-id rally-tljuuhzo4iht8d.10.70.46.26.var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_7d951cd9233c2bafe3c5d611e3d8883f-brick -p /var/run/gluster/vols/rally-tljuuhzo4iht8d/10.70.46.26-var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_7d951cd9233c2bafe3c5d611e3d8883f-brick.pid -S /var/run/gluster/1aa61f85bdaf434a953a7efb3d468b98.socket --brick-name /var/lib/heketi/mounts/vg_80280148c66c9e91e0c27f64a751900f/brick_7d951cd9233c2bafe3c5d611e3d8883f/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_7d951cd9233c2bafe3c5d611e3d8883f-brick.log --xlator-option *-posix.glusterd-uuid=0202e093-70b5-4ddd-927d-bdb37c3daa5f --brick-port 49154 --xlator-option rally-tljuuhzo4iht8d-server.listen-port=49154 > ├─29575 /usr/sbin/glusterfsd -s 10.70.46.26 --volfile-id vol_a833a9314f4557589a9d874105357140.10.70.46.26.var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_3510a38a8d5d693a612baa22d916237b-brick -p /var/run/gluster/vols/vol_a833a9314f4557589a9d874105357140/10.70.46.26-var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_3510a38a8d5d693a612baa22d916237b-brick.pid -S /var/run/gluster/4d30a5ebbf6049f7294f2c96c0889f0e.socket --brick-name /var/lib/heketi/mounts/vg_80280148c66c9e91e0c27f64a751900f/brick_3510a38a8d5d693a612baa22d916237b/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_3510a38a8d5d693a612baa22d916237b-brick.log --xlator-option *-posix.glusterd-uuid=0202e093-70b5-4ddd-927d-bdb37c3daa5f --brick-port 49153 --xlator-option vol_a833a9314f4557589a9d874105357140-server.listen-port=49153 > └─31589 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/13b5dab3bb9b7241888ca2821e42e2c0.socket 
--xlator-option *replicate*.node-uuid=0202e093-70b5-4ddd-927d-bdb37c3daa5f >[heketi] INFO 2018/08/28 11:48:21 Periodic health check status: node 064f1f469119e5c69a56a8c81b5fd96a up=true >[cmdexec] INFO 2018/08/28 11:48:21 Check Glusterd service status in node vp-ansible-v311-app-cns-1 >[kubeexec] DEBUG 2018/08/28 11:48:22 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-1 Pod: glusterfs-cns-qrfrz Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Wed 2018-08-22 09:55:53 UTC; 6 days ago > Process: 431 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 432 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod631c8e73_a5f1_11e8_9a3f_005056a549ca.slice/docker-0c00732c129d13630118d564e7628c6b1a7329d974aa476e520ae3be5990a263.scope/system.slice/glusterd.service > ├─ 432 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 856 /usr/sbin/glusterfsd -s 10.70.46.10 --volfile-id heketidbstorage.10.70.46.10.var-lib-heketi-mounts-vg_557622450e27d9663fd36087fbe35bee-brick_55d03facb36899f57cdabd107689eba0-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.10-var-lib-heketi-mounts-vg_557622450e27d9663fd36087fbe35bee-brick_55d03facb36899f57cdabd107689eba0-brick.pid -S /var/run/gluster/eb9a06fb398d5e1d9578e0fc2dc82d85.socket --brick-name /var/lib/heketi/mounts/vg_557622450e27d9663fd36087fbe35bee/brick_55d03facb36899f57cdabd107689eba0/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_557622450e27d9663fd36087fbe35bee-brick_55d03facb36899f57cdabd107689eba0-brick.log --xlator-option *-posix.glusterd-uuid=a14e3b75-4e27-49cc-9591-52deb56b0223 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > 
├─29465 /usr/sbin/glusterfsd -s 10.70.46.10 --volfile-id rally-fkx14ljxp9o267.10.70.46.10.var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_54f8b2d556bf6a30d3990382dccacc95-brick -p /var/run/gluster/vols/rally-fkx14ljxp9o267/10.70.46.10-var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_54f8b2d556bf6a30d3990382dccacc95-brick.pid -S /var/run/gluster/6801d81632622820f0b62ec2150f3d75.socket --brick-name /var/lib/heketi/mounts/vg_6c0538bc7bed0679f0f595c61c72656f/brick_54f8b2d556bf6a30d3990382dccacc95/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_54f8b2d556bf6a30d3990382dccacc95-brick.log --xlator-option *-posix.glusterd-uuid=a14e3b75-4e27-49cc-9591-52deb56b0223 --brick-port 49154 --xlator-option rally-fkx14ljxp9o267-server.listen-port=49154 > ├─29683 /usr/sbin/glusterfsd -s 10.70.46.10 --volfile-id vol_a833a9314f4557589a9d874105357140.10.70.46.10.var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_3dc24288734a047a8fcc00ab2c7a0974-brick -p /var/run/gluster/vols/vol_a833a9314f4557589a9d874105357140/10.70.46.10-var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_3dc24288734a047a8fcc00ab2c7a0974-brick.pid -S /var/run/gluster/43582dc96bce5442618f73c71cf50bbd.socket --brick-name /var/lib/heketi/mounts/vg_6c0538bc7bed0679f0f595c61c72656f/brick_3dc24288734a047a8fcc00ab2c7a0974/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_3dc24288734a047a8fcc00ab2c7a0974-brick.log --xlator-option *-posix.glusterd-uuid=a14e3b75-4e27-49cc-9591-52deb56b0223 --brick-port 49153 --xlator-option vol_a833a9314f4557589a9d874105357140-server.listen-port=49153 > └─31513 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/97a465235ca3ff589c72ec017f4ab551.socket --xlator-option *replicate*.node-uuid=a14e3b75-4e27-49cc-9591-52deb56b0223 
>[heketi] INFO 2018/08/28 11:48:22 Periodic health check status: node 284f3e3a4c2fd7c0e78b2e759afa9847 up=true >[cmdexec] INFO 2018/08/28 11:48:22 Check Glusterd service status in node vp-ansible-v311-app-cns-2 >[kubeexec] DEBUG 2018/08/28 11:48:22 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-2 Pod: glusterfs-cns-hzqg6 Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Wed 2018-08-22 09:56:11 UTC; 6 days ago > Process: 431 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 432 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6310af79_a5f1_11e8_9a3f_005056a549ca.slice/docker-55cc0a963be6c53dd856f9025694fb7897b40fb9112aca2b194ddc65ec54d76f.scope/system.slice/glusterd.service > ├─ 432 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 842 /usr/sbin/glusterfsd -s 10.70.47.176 --volfile-id heketidbstorage.10.70.47.176.var-lib-heketi-mounts-vg_b2c811c50490ee9832f1e0ecfb15f660-brick_7251b95b366cb335bf154a71d67f1250-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.176-var-lib-heketi-mounts-vg_b2c811c50490ee9832f1e0ecfb15f660-brick_7251b95b366cb335bf154a71d67f1250-brick.pid -S /var/run/gluster/fb2b61aa838bbe43a440ffc2986e9bc7.socket --brick-name /var/lib/heketi/mounts/vg_b2c811c50490ee9832f1e0ecfb15f660/brick_7251b95b366cb335bf154a71d67f1250/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_b2c811c50490ee9832f1e0ecfb15f660-brick_7251b95b366cb335bf154a71d67f1250-brick.log --xlator-option *-posix.glusterd-uuid=dda5114b-4ef3-40c8-948c-cdb3b1b56f4d --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ├─ 6707 /usr/sbin/glusterfsd -s 10.70.47.176 --volfile-id 
rally-fkx14ljxp9o267.10.70.47.176.var-lib-heketi-mounts-vg_91c1336b9d8010eb5c52368eef886671-brick_b8381259199cde3925065e7c54e98386-brick -p /var/run/gluster/vols/rally-fkx14ljxp9o267/10.70.47.176-var-lib-heketi-mounts-vg_91c1336b9d8010eb5c52368eef886671-brick_b8381259199cde3925065e7c54e98386-brick.pid -S /var/run/gluster/61260ed6fbb5d2443c6491e364f5216c.socket --brick-name /var/lib/heketi/mounts/vg_91c1336b9d8010eb5c52368eef886671/brick_b8381259199cde3925065e7c54e98386/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_91c1336b9d8010eb5c52368eef886671-brick_b8381259199cde3925065e7c54e98386-brick.log --xlator-option *-posix.glusterd-uuid=dda5114b-4ef3-40c8-948c-cdb3b1b56f4d --brick-port 49155 --xlator-option rally-fkx14ljxp9o267-server.listen-port=49155 > ├─29609 /usr/sbin/glusterfsd -s 10.70.47.176 --volfile-id vol_a833a9314f4557589a9d874105357140.10.70.47.176.var-lib-heketi-mounts-vg_91c1336b9d8010eb5c52368eef886671-brick_88fc94292cf739f1608b4abb3a42a5aa-brick -p /var/run/gluster/vols/vol_a833a9314f4557589a9d874105357140/10.70.47.176-var-lib-heketi-mounts-vg_91c1336b9d8010eb5c52368eef886671-brick_88fc94292cf739f1608b4abb3a42a5aa-brick.pid -S /var/run/gluster/a8eee47923e596454bd4f1c2e122327b.socket --brick-name /var/lib/heketi/mounts/vg_91c1336b9d8010eb5c52368eef886671/brick_88fc94292cf739f1608b4abb3a42a5aa/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_91c1336b9d8010eb5c52368eef886671-brick_88fc94292cf739f1608b4abb3a42a5aa-brick.log --xlator-option *-posix.glusterd-uuid=dda5114b-4ef3-40c8-948c-cdb3b1b56f4d --brick-port 49153 --xlator-option vol_a833a9314f4557589a9d874105357140-server.listen-port=49153 > └─31591 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/645e4eea62d3f97263e806a381eeabda.socket --xlator-option *replicate*.node-uuid=dda5114b-4ef3-40c8-948c-cdb3b1b56f4d >[heketi] INFO 2018/08/28 11:48:22 Periodic health 
check status: node f48c1fefdee6989a05560260afcd0a2d up=true >[heketi] INFO 2018/08/28 11:48:22 Cleaned 0 nodes from health cache >[heketi] INFO 2018/08/28 11:50:21 Starting Node Health Status refresh >[cmdexec] INFO 2018/08/28 11:50:21 Check Glusterd service status in node vp-ansible-v311-app-cns-0 >[kubeexec] DEBUG 2018/08/28 11:50:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-0 Pod: glusterfs-cns-jlgq2 Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Wed 2018-08-22 09:56:03 UTC; 6 days ago > Process: 429 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 430 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod630cf184_a5f1_11e8_9a3f_005056a549ca.slice/docker-24ed69a095449460f61deded6aaa6a7de0753c88a3cf0a5e07b85b9cd4a92805.scope/system.slice/glusterd.service > ├─ 430 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 852 /usr/sbin/glusterfsd -s 10.70.46.26 --volfile-id heketidbstorage.10.70.46.26.var-lib-heketi-mounts-vg_b1ebb7cf4e45c57d379df092a591ec0a-brick_cdc9f126aeab5fc4af31b482980e213e-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.26-var-lib-heketi-mounts-vg_b1ebb7cf4e45c57d379df092a591ec0a-brick_cdc9f126aeab5fc4af31b482980e213e-brick.pid -S /var/run/gluster/6a2a03eab842b37b7169d67e7d04d6c2.socket --brick-name /var/lib/heketi/mounts/vg_b1ebb7cf4e45c57d379df092a591ec0a/brick_cdc9f126aeab5fc4af31b482980e213e/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_b1ebb7cf4e45c57d379df092a591ec0a-brick_cdc9f126aeab5fc4af31b482980e213e-brick.log --xlator-option *-posix.glusterd-uuid=0202e093-70b5-4ddd-927d-bdb37c3daa5f --brick-port 49152 --xlator-option 
heketidbstorage-server.listen-port=49152 > ├─ 1267 /usr/sbin/glusterfsd -s 10.70.46.26 --volfile-id rally-tljuuhzo4iht8d.10.70.46.26.var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_7d951cd9233c2bafe3c5d611e3d8883f-brick -p /var/run/gluster/vols/rally-tljuuhzo4iht8d/10.70.46.26-var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_7d951cd9233c2bafe3c5d611e3d8883f-brick.pid -S /var/run/gluster/1aa61f85bdaf434a953a7efb3d468b98.socket --brick-name /var/lib/heketi/mounts/vg_80280148c66c9e91e0c27f64a751900f/brick_7d951cd9233c2bafe3c5d611e3d8883f/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_7d951cd9233c2bafe3c5d611e3d8883f-brick.log --xlator-option *-posix.glusterd-uuid=0202e093-70b5-4ddd-927d-bdb37c3daa5f --brick-port 49154 --xlator-option rally-tljuuhzo4iht8d-server.listen-port=49154 > ├─29575 /usr/sbin/glusterfsd -s 10.70.46.26 --volfile-id vol_a833a9314f4557589a9d874105357140.10.70.46.26.var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_3510a38a8d5d693a612baa22d916237b-brick -p /var/run/gluster/vols/vol_a833a9314f4557589a9d874105357140/10.70.46.26-var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_3510a38a8d5d693a612baa22d916237b-brick.pid -S /var/run/gluster/4d30a5ebbf6049f7294f2c96c0889f0e.socket --brick-name /var/lib/heketi/mounts/vg_80280148c66c9e91e0c27f64a751900f/brick_3510a38a8d5d693a612baa22d916237b/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_3510a38a8d5d693a612baa22d916237b-brick.log --xlator-option *-posix.glusterd-uuid=0202e093-70b5-4ddd-927d-bdb37c3daa5f --brick-port 49153 --xlator-option vol_a833a9314f4557589a9d874105357140-server.listen-port=49153 > └─31589 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/13b5dab3bb9b7241888ca2821e42e2c0.socket --xlator-option 
*replicate*.node-uuid=0202e093-70b5-4ddd-927d-bdb37c3daa5f >[heketi] INFO 2018/08/28 11:50:21 Periodic health check status: node 064f1f469119e5c69a56a8c81b5fd96a up=true >[cmdexec] INFO 2018/08/28 11:50:21 Check Glusterd service status in node vp-ansible-v311-app-cns-1 >[kubeexec] DEBUG 2018/08/28 11:50:22 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-1 Pod: glusterfs-cns-qrfrz Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Wed 2018-08-22 09:55:53 UTC; 6 days ago > Process: 431 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 432 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod631c8e73_a5f1_11e8_9a3f_005056a549ca.slice/docker-0c00732c129d13630118d564e7628c6b1a7329d974aa476e520ae3be5990a263.scope/system.slice/glusterd.service > ├─ 432 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 856 /usr/sbin/glusterfsd -s 10.70.46.10 --volfile-id heketidbstorage.10.70.46.10.var-lib-heketi-mounts-vg_557622450e27d9663fd36087fbe35bee-brick_55d03facb36899f57cdabd107689eba0-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.10-var-lib-heketi-mounts-vg_557622450e27d9663fd36087fbe35bee-brick_55d03facb36899f57cdabd107689eba0-brick.pid -S /var/run/gluster/eb9a06fb398d5e1d9578e0fc2dc82d85.socket --brick-name /var/lib/heketi/mounts/vg_557622450e27d9663fd36087fbe35bee/brick_55d03facb36899f57cdabd107689eba0/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_557622450e27d9663fd36087fbe35bee-brick_55d03facb36899f57cdabd107689eba0-brick.log --xlator-option *-posix.glusterd-uuid=a14e3b75-4e27-49cc-9591-52deb56b0223 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ├─29465 
/usr/sbin/glusterfsd -s 10.70.46.10 --volfile-id rally-fkx14ljxp9o267.10.70.46.10.var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_54f8b2d556bf6a30d3990382dccacc95-brick -p /var/run/gluster/vols/rally-fkx14ljxp9o267/10.70.46.10-var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_54f8b2d556bf6a30d3990382dccacc95-brick.pid -S /var/run/gluster/6801d81632622820f0b62ec2150f3d75.socket --brick-name /var/lib/heketi/mounts/vg_6c0538bc7bed0679f0f595c61c72656f/brick_54f8b2d556bf6a30d3990382dccacc95/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_54f8b2d556bf6a30d3990382dccacc95-brick.log --xlator-option *-posix.glusterd-uuid=a14e3b75-4e27-49cc-9591-52deb56b0223 --brick-port 49154 --xlator-option rally-fkx14ljxp9o267-server.listen-port=49154 > ├─29683 /usr/sbin/glusterfsd -s 10.70.46.10 --volfile-id vol_a833a9314f4557589a9d874105357140.10.70.46.10.var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_3dc24288734a047a8fcc00ab2c7a0974-brick -p /var/run/gluster/vols/vol_a833a9314f4557589a9d874105357140/10.70.46.10-var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_3dc24288734a047a8fcc00ab2c7a0974-brick.pid -S /var/run/gluster/43582dc96bce5442618f73c71cf50bbd.socket --brick-name /var/lib/heketi/mounts/vg_6c0538bc7bed0679f0f595c61c72656f/brick_3dc24288734a047a8fcc00ab2c7a0974/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_3dc24288734a047a8fcc00ab2c7a0974-brick.log --xlator-option *-posix.glusterd-uuid=a14e3b75-4e27-49cc-9591-52deb56b0223 --brick-port 49153 --xlator-option vol_a833a9314f4557589a9d874105357140-server.listen-port=49153 > └─31513 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/97a465235ca3ff589c72ec017f4ab551.socket --xlator-option *replicate*.node-uuid=a14e3b75-4e27-49cc-9591-52deb56b0223 >[heketi] 
INFO 2018/08/28 11:50:22 Periodic health check status: node 284f3e3a4c2fd7c0e78b2e759afa9847 up=true >[cmdexec] INFO 2018/08/28 11:50:22 Check Glusterd service status in node vp-ansible-v311-app-cns-2 >[kubeexec] DEBUG 2018/08/28 11:50:22 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-2 Pod: glusterfs-cns-hzqg6 Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Wed 2018-08-22 09:56:11 UTC; 6 days ago > Process: 431 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 432 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6310af79_a5f1_11e8_9a3f_005056a549ca.slice/docker-55cc0a963be6c53dd856f9025694fb7897b40fb9112aca2b194ddc65ec54d76f.scope/system.slice/glusterd.service > ├─ 432 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 842 /usr/sbin/glusterfsd -s 10.70.47.176 --volfile-id heketidbstorage.10.70.47.176.var-lib-heketi-mounts-vg_b2c811c50490ee9832f1e0ecfb15f660-brick_7251b95b366cb335bf154a71d67f1250-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.176-var-lib-heketi-mounts-vg_b2c811c50490ee9832f1e0ecfb15f660-brick_7251b95b366cb335bf154a71d67f1250-brick.pid -S /var/run/gluster/fb2b61aa838bbe43a440ffc2986e9bc7.socket --brick-name /var/lib/heketi/mounts/vg_b2c811c50490ee9832f1e0ecfb15f660/brick_7251b95b366cb335bf154a71d67f1250/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_b2c811c50490ee9832f1e0ecfb15f660-brick_7251b95b366cb335bf154a71d67f1250-brick.log --xlator-option *-posix.glusterd-uuid=dda5114b-4ef3-40c8-948c-cdb3b1b56f4d --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ├─ 6707 /usr/sbin/glusterfsd -s 10.70.47.176 --volfile-id 
rally-fkx14ljxp9o267.10.70.47.176.var-lib-heketi-mounts-vg_91c1336b9d8010eb5c52368eef886671-brick_b8381259199cde3925065e7c54e98386-brick -p /var/run/gluster/vols/rally-fkx14ljxp9o267/10.70.47.176-var-lib-heketi-mounts-vg_91c1336b9d8010eb5c52368eef886671-brick_b8381259199cde3925065e7c54e98386-brick.pid -S /var/run/gluster/61260ed6fbb5d2443c6491e364f5216c.socket --brick-name /var/lib/heketi/mounts/vg_91c1336b9d8010eb5c52368eef886671/brick_b8381259199cde3925065e7c54e98386/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_91c1336b9d8010eb5c52368eef886671-brick_b8381259199cde3925065e7c54e98386-brick.log --xlator-option *-posix.glusterd-uuid=dda5114b-4ef3-40c8-948c-cdb3b1b56f4d --brick-port 49155 --xlator-option rally-fkx14ljxp9o267-server.listen-port=49155 > ├─29609 /usr/sbin/glusterfsd -s 10.70.47.176 --volfile-id vol_a833a9314f4557589a9d874105357140.10.70.47.176.var-lib-heketi-mounts-vg_91c1336b9d8010eb5c52368eef886671-brick_88fc94292cf739f1608b4abb3a42a5aa-brick -p /var/run/gluster/vols/vol_a833a9314f4557589a9d874105357140/10.70.47.176-var-lib-heketi-mounts-vg_91c1336b9d8010eb5c52368eef886671-brick_88fc94292cf739f1608b4abb3a42a5aa-brick.pid -S /var/run/gluster/a8eee47923e596454bd4f1c2e122327b.socket --brick-name /var/lib/heketi/mounts/vg_91c1336b9d8010eb5c52368eef886671/brick_88fc94292cf739f1608b4abb3a42a5aa/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_91c1336b9d8010eb5c52368eef886671-brick_88fc94292cf739f1608b4abb3a42a5aa-brick.log --xlator-option *-posix.glusterd-uuid=dda5114b-4ef3-40c8-948c-cdb3b1b56f4d --brick-port 49153 --xlator-option vol_a833a9314f4557589a9d874105357140-server.listen-port=49153 > └─31591 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/645e4eea62d3f97263e806a381eeabda.socket --xlator-option *replicate*.node-uuid=dda5114b-4ef3-40c8-948c-cdb3b1b56f4d >[heketi] INFO 2018/08/28 11:50:22 Periodic health 
check status: node f48c1fefdee6989a05560260afcd0a2d up=true >[heketi] INFO 2018/08/28 11:50:22 Cleaned 0 nodes from health cache >[heketi] INFO 2018/08/28 11:52:21 Starting Node Health Status refresh >[cmdexec] INFO 2018/08/28 11:52:21 Check Glusterd service status in node vp-ansible-v311-app-cns-0 >[kubeexec] DEBUG 2018/08/28 11:52:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-0 Pod: glusterfs-cns-jlgq2 Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Wed 2018-08-22 09:56:03 UTC; 6 days ago > Process: 429 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 430 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod630cf184_a5f1_11e8_9a3f_005056a549ca.slice/docker-24ed69a095449460f61deded6aaa6a7de0753c88a3cf0a5e07b85b9cd4a92805.scope/system.slice/glusterd.service > ├─ 430 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 852 /usr/sbin/glusterfsd -s 10.70.46.26 --volfile-id heketidbstorage.10.70.46.26.var-lib-heketi-mounts-vg_b1ebb7cf4e45c57d379df092a591ec0a-brick_cdc9f126aeab5fc4af31b482980e213e-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.26-var-lib-heketi-mounts-vg_b1ebb7cf4e45c57d379df092a591ec0a-brick_cdc9f126aeab5fc4af31b482980e213e-brick.pid -S /var/run/gluster/6a2a03eab842b37b7169d67e7d04d6c2.socket --brick-name /var/lib/heketi/mounts/vg_b1ebb7cf4e45c57d379df092a591ec0a/brick_cdc9f126aeab5fc4af31b482980e213e/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_b1ebb7cf4e45c57d379df092a591ec0a-brick_cdc9f126aeab5fc4af31b482980e213e-brick.log --xlator-option *-posix.glusterd-uuid=0202e093-70b5-4ddd-927d-bdb37c3daa5f --brick-port 49152 --xlator-option 
heketidbstorage-server.listen-port=49152 > ├─ 1267 /usr/sbin/glusterfsd -s 10.70.46.26 --volfile-id rally-tljuuhzo4iht8d.10.70.46.26.var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_7d951cd9233c2bafe3c5d611e3d8883f-brick -p /var/run/gluster/vols/rally-tljuuhzo4iht8d/10.70.46.26-var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_7d951cd9233c2bafe3c5d611e3d8883f-brick.pid -S /var/run/gluster/1aa61f85bdaf434a953a7efb3d468b98.socket --brick-name /var/lib/heketi/mounts/vg_80280148c66c9e91e0c27f64a751900f/brick_7d951cd9233c2bafe3c5d611e3d8883f/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_7d951cd9233c2bafe3c5d611e3d8883f-brick.log --xlator-option *-posix.glusterd-uuid=0202e093-70b5-4ddd-927d-bdb37c3daa5f --brick-port 49154 --xlator-option rally-tljuuhzo4iht8d-server.listen-port=49154 > ├─29575 /usr/sbin/glusterfsd -s 10.70.46.26 --volfile-id vol_a833a9314f4557589a9d874105357140.10.70.46.26.var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_3510a38a8d5d693a612baa22d916237b-brick -p /var/run/gluster/vols/vol_a833a9314f4557589a9d874105357140/10.70.46.26-var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_3510a38a8d5d693a612baa22d916237b-brick.pid -S /var/run/gluster/4d30a5ebbf6049f7294f2c96c0889f0e.socket --brick-name /var/lib/heketi/mounts/vg_80280148c66c9e91e0c27f64a751900f/brick_3510a38a8d5d693a612baa22d916237b/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_3510a38a8d5d693a612baa22d916237b-brick.log --xlator-option *-posix.glusterd-uuid=0202e093-70b5-4ddd-927d-bdb37c3daa5f --brick-port 49153 --xlator-option vol_a833a9314f4557589a9d874105357140-server.listen-port=49153 > └─31589 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/13b5dab3bb9b7241888ca2821e42e2c0.socket --xlator-option 
*replicate*.node-uuid=0202e093-70b5-4ddd-927d-bdb37c3daa5f >[heketi] INFO 2018/08/28 11:52:21 Periodic health check status: node 064f1f469119e5c69a56a8c81b5fd96a up=true >[cmdexec] INFO 2018/08/28 11:52:21 Check Glusterd service status in node vp-ansible-v311-app-cns-1 >[kubeexec] DEBUG 2018/08/28 11:52:22 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-1 Pod: glusterfs-cns-qrfrz Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Wed 2018-08-22 09:55:53 UTC; 6 days ago > Process: 431 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 432 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod631c8e73_a5f1_11e8_9a3f_005056a549ca.slice/docker-0c00732c129d13630118d564e7628c6b1a7329d974aa476e520ae3be5990a263.scope/system.slice/glusterd.service > ├─ 432 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 856 /usr/sbin/glusterfsd -s 10.70.46.10 --volfile-id heketidbstorage.10.70.46.10.var-lib-heketi-mounts-vg_557622450e27d9663fd36087fbe35bee-brick_55d03facb36899f57cdabd107689eba0-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.10-var-lib-heketi-mounts-vg_557622450e27d9663fd36087fbe35bee-brick_55d03facb36899f57cdabd107689eba0-brick.pid -S /var/run/gluster/eb9a06fb398d5e1d9578e0fc2dc82d85.socket --brick-name /var/lib/heketi/mounts/vg_557622450e27d9663fd36087fbe35bee/brick_55d03facb36899f57cdabd107689eba0/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_557622450e27d9663fd36087fbe35bee-brick_55d03facb36899f57cdabd107689eba0-brick.log --xlator-option *-posix.glusterd-uuid=a14e3b75-4e27-49cc-9591-52deb56b0223 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ├─29465 
/usr/sbin/glusterfsd -s 10.70.46.10 --volfile-id rally-fkx14ljxp9o267.10.70.46.10.var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_54f8b2d556bf6a30d3990382dccacc95-brick -p /var/run/gluster/vols/rally-fkx14ljxp9o267/10.70.46.10-var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_54f8b2d556bf6a30d3990382dccacc95-brick.pid -S /var/run/gluster/6801d81632622820f0b62ec2150f3d75.socket --brick-name /var/lib/heketi/mounts/vg_6c0538bc7bed0679f0f595c61c72656f/brick_54f8b2d556bf6a30d3990382dccacc95/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_54f8b2d556bf6a30d3990382dccacc95-brick.log --xlator-option *-posix.glusterd-uuid=a14e3b75-4e27-49cc-9591-52deb56b0223 --brick-port 49154 --xlator-option rally-fkx14ljxp9o267-server.listen-port=49154 > ├─29683 /usr/sbin/glusterfsd -s 10.70.46.10 --volfile-id vol_a833a9314f4557589a9d874105357140.10.70.46.10.var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_3dc24288734a047a8fcc00ab2c7a0974-brick -p /var/run/gluster/vols/vol_a833a9314f4557589a9d874105357140/10.70.46.10-var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_3dc24288734a047a8fcc00ab2c7a0974-brick.pid -S /var/run/gluster/43582dc96bce5442618f73c71cf50bbd.socket --brick-name /var/lib/heketi/mounts/vg_6c0538bc7bed0679f0f595c61c72656f/brick_3dc24288734a047a8fcc00ab2c7a0974/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_3dc24288734a047a8fcc00ab2c7a0974-brick.log --xlator-option *-posix.glusterd-uuid=a14e3b75-4e27-49cc-9591-52deb56b0223 --brick-port 49153 --xlator-option vol_a833a9314f4557589a9d874105357140-server.listen-port=49153 > └─31513 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/97a465235ca3ff589c72ec017f4ab551.socket --xlator-option *replicate*.node-uuid=a14e3b75-4e27-49cc-9591-52deb56b0223 >[heketi] 
INFO 2018/08/28 11:52:22 Periodic health check status: node 284f3e3a4c2fd7c0e78b2e759afa9847 up=true >[cmdexec] INFO 2018/08/28 11:52:22 Check Glusterd service status in node vp-ansible-v311-app-cns-2 >[kubeexec] DEBUG 2018/08/28 11:52:22 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-2 Pod: glusterfs-cns-hzqg6 Command: systemctl status glusterd >Result: ● glusterd.service - GlusterFS, a clustered file-system server > Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) > Active: active (running) since Wed 2018-08-22 09:56:11 UTC; 6 days ago > Process: 431 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS) > Main PID: 432 (glusterd) > CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6310af79_a5f1_11e8_9a3f_005056a549ca.slice/docker-55cc0a963be6c53dd856f9025694fb7897b40fb9112aca2b194ddc65ec54d76f.scope/system.slice/glusterd.service > ├─ 432 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > ├─ 842 /usr/sbin/glusterfsd -s 10.70.47.176 --volfile-id heketidbstorage.10.70.47.176.var-lib-heketi-mounts-vg_b2c811c50490ee9832f1e0ecfb15f660-brick_7251b95b366cb335bf154a71d67f1250-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.176-var-lib-heketi-mounts-vg_b2c811c50490ee9832f1e0ecfb15f660-brick_7251b95b366cb335bf154a71d67f1250-brick.pid -S /var/run/gluster/fb2b61aa838bbe43a440ffc2986e9bc7.socket --brick-name /var/lib/heketi/mounts/vg_b2c811c50490ee9832f1e0ecfb15f660/brick_7251b95b366cb335bf154a71d67f1250/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_b2c811c50490ee9832f1e0ecfb15f660-brick_7251b95b366cb335bf154a71d67f1250-brick.log --xlator-option *-posix.glusterd-uuid=dda5114b-4ef3-40c8-948c-cdb3b1b56f4d --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152 > ├─ 6707 /usr/sbin/glusterfsd -s 10.70.47.176 --volfile-id 
rally-fkx14ljxp9o267.10.70.47.176.var-lib-heketi-mounts-vg_91c1336b9d8010eb5c52368eef886671-brick_b8381259199cde3925065e7c54e98386-brick -p /var/run/gluster/vols/rally-fkx14ljxp9o267/10.70.47.176-var-lib-heketi-mounts-vg_91c1336b9d8010eb5c52368eef886671-brick_b8381259199cde3925065e7c54e98386-brick.pid -S /var/run/gluster/61260ed6fbb5d2443c6491e364f5216c.socket --brick-name /var/lib/heketi/mounts/vg_91c1336b9d8010eb5c52368eef886671/brick_b8381259199cde3925065e7c54e98386/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_91c1336b9d8010eb5c52368eef886671-brick_b8381259199cde3925065e7c54e98386-brick.log --xlator-option *-posix.glusterd-uuid=dda5114b-4ef3-40c8-948c-cdb3b1b56f4d --brick-port 49155 --xlator-option rally-fkx14ljxp9o267-server.listen-port=49155 > ├─29609 /usr/sbin/glusterfsd -s 10.70.47.176 --volfile-id vol_a833a9314f4557589a9d874105357140.10.70.47.176.var-lib-heketi-mounts-vg_91c1336b9d8010eb5c52368eef886671-brick_88fc94292cf739f1608b4abb3a42a5aa-brick -p /var/run/gluster/vols/vol_a833a9314f4557589a9d874105357140/10.70.47.176-var-lib-heketi-mounts-vg_91c1336b9d8010eb5c52368eef886671-brick_88fc94292cf739f1608b4abb3a42a5aa-brick.pid -S /var/run/gluster/a8eee47923e596454bd4f1c2e122327b.socket --brick-name /var/lib/heketi/mounts/vg_91c1336b9d8010eb5c52368eef886671/brick_88fc94292cf739f1608b4abb3a42a5aa/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_91c1336b9d8010eb5c52368eef886671-brick_88fc94292cf739f1608b4abb3a42a5aa-brick.log --xlator-option *-posix.glusterd-uuid=dda5114b-4ef3-40c8-948c-cdb3b1b56f4d --brick-port 49153 --xlator-option vol_a833a9314f4557589a9d874105357140-server.listen-port=49153 > └─31591 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/645e4eea62d3f97263e806a381eeabda.socket --xlator-option *replicate*.node-uuid=dda5114b-4ef3-40c8-948c-cdb3b1b56f4d >[heketi] INFO 2018/08/28 11:52:22 Periodic health 
check status: node f48c1fefdee6989a05560260afcd0a2d up=true
>[heketi] INFO 2018/08/28 11:52:22 Cleaned 0 nodes from health cache
>[heketi] INFO 2018/08/28 11:54:21 Starting Node Health Status refresh
>[cmdexec] INFO 2018/08/28 11:54:21 Check Glusterd service status in node vp-ansible-v311-app-cns-0
>[kubeexec] DEBUG 2018/08/28 11:54:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-0 Pod: glusterfs-cns-jlgq2 Command: systemctl status glusterd
>Result: ● glusterd.service - GlusterFS, a clustered file-system server
> Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
> Active: active (running) since Wed 2018-08-22 09:56:03 UTC; 6 days ago
> Process: 429 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
> Main PID: 430 (glusterd)
> CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod630cf184_a5f1_11e8_9a3f_005056a549ca.slice/docker-24ed69a095449460f61deded6aaa6a7de0753c88a3cf0a5e07b85b9cd4a92805.scope/system.slice/glusterd.service
> ├─ 430 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
> ├─ 852 /usr/sbin/glusterfsd -s 10.70.46.26 --volfile-id heketidbstorage.10.70.46.26.var-lib-heketi-mounts-vg_b1ebb7cf4e45c57d379df092a591ec0a-brick_cdc9f126aeab5fc4af31b482980e213e-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.26-var-lib-heketi-mounts-vg_b1ebb7cf4e45c57d379df092a591ec0a-brick_cdc9f126aeab5fc4af31b482980e213e-brick.pid -S /var/run/gluster/6a2a03eab842b37b7169d67e7d04d6c2.socket --brick-name /var/lib/heketi/mounts/vg_b1ebb7cf4e45c57d379df092a591ec0a/brick_cdc9f126aeab5fc4af31b482980e213e/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_b1ebb7cf4e45c57d379df092a591ec0a-brick_cdc9f126aeab5fc4af31b482980e213e-brick.log --xlator-option *-posix.glusterd-uuid=0202e093-70b5-4ddd-927d-bdb37c3daa5f --brick-port 49152 --xlator-option
heketidbstorage-server.listen-port=49152
> ├─ 1267 /usr/sbin/glusterfsd -s 10.70.46.26 --volfile-id rally-tljuuhzo4iht8d.10.70.46.26.var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_7d951cd9233c2bafe3c5d611e3d8883f-brick -p /var/run/gluster/vols/rally-tljuuhzo4iht8d/10.70.46.26-var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_7d951cd9233c2bafe3c5d611e3d8883f-brick.pid -S /var/run/gluster/1aa61f85bdaf434a953a7efb3d468b98.socket --brick-name /var/lib/heketi/mounts/vg_80280148c66c9e91e0c27f64a751900f/brick_7d951cd9233c2bafe3c5d611e3d8883f/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_7d951cd9233c2bafe3c5d611e3d8883f-brick.log --xlator-option *-posix.glusterd-uuid=0202e093-70b5-4ddd-927d-bdb37c3daa5f --brick-port 49154 --xlator-option rally-tljuuhzo4iht8d-server.listen-port=49154
> ├─29575 /usr/sbin/glusterfsd -s 10.70.46.26 --volfile-id vol_a833a9314f4557589a9d874105357140.10.70.46.26.var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_3510a38a8d5d693a612baa22d916237b-brick -p /var/run/gluster/vols/vol_a833a9314f4557589a9d874105357140/10.70.46.26-var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_3510a38a8d5d693a612baa22d916237b-brick.pid -S /var/run/gluster/4d30a5ebbf6049f7294f2c96c0889f0e.socket --brick-name /var/lib/heketi/mounts/vg_80280148c66c9e91e0c27f64a751900f/brick_3510a38a8d5d693a612baa22d916237b/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_3510a38a8d5d693a612baa22d916237b-brick.log --xlator-option *-posix.glusterd-uuid=0202e093-70b5-4ddd-927d-bdb37c3daa5f --brick-port 49153 --xlator-option vol_a833a9314f4557589a9d874105357140-server.listen-port=49153
> └─31589 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/13b5dab3bb9b7241888ca2821e42e2c0.socket --xlator-option
*replicate*.node-uuid=0202e093-70b5-4ddd-927d-bdb37c3daa5f
>[heketi] INFO 2018/08/28 11:54:21 Periodic health check status: node 064f1f469119e5c69a56a8c81b5fd96a up=true
>[cmdexec] INFO 2018/08/28 11:54:21 Check Glusterd service status in node vp-ansible-v311-app-cns-1
>[kubeexec] DEBUG 2018/08/28 11:54:22 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-1 Pod: glusterfs-cns-qrfrz Command: systemctl status glusterd
>Result: ● glusterd.service - GlusterFS, a clustered file-system server
> Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
> Active: active (running) since Wed 2018-08-22 09:55:53 UTC; 6 days ago
> Process: 431 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
> Main PID: 432 (glusterd)
> CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod631c8e73_a5f1_11e8_9a3f_005056a549ca.slice/docker-0c00732c129d13630118d564e7628c6b1a7329d974aa476e520ae3be5990a263.scope/system.slice/glusterd.service
> ├─ 432 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
> ├─ 856 /usr/sbin/glusterfsd -s 10.70.46.10 --volfile-id heketidbstorage.10.70.46.10.var-lib-heketi-mounts-vg_557622450e27d9663fd36087fbe35bee-brick_55d03facb36899f57cdabd107689eba0-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.10-var-lib-heketi-mounts-vg_557622450e27d9663fd36087fbe35bee-brick_55d03facb36899f57cdabd107689eba0-brick.pid -S /var/run/gluster/eb9a06fb398d5e1d9578e0fc2dc82d85.socket --brick-name /var/lib/heketi/mounts/vg_557622450e27d9663fd36087fbe35bee/brick_55d03facb36899f57cdabd107689eba0/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_557622450e27d9663fd36087fbe35bee-brick_55d03facb36899f57cdabd107689eba0-brick.log --xlator-option *-posix.glusterd-uuid=a14e3b75-4e27-49cc-9591-52deb56b0223 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152
> ├─29465
/usr/sbin/glusterfsd -s 10.70.46.10 --volfile-id rally-fkx14ljxp9o267.10.70.46.10.var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_54f8b2d556bf6a30d3990382dccacc95-brick -p /var/run/gluster/vols/rally-fkx14ljxp9o267/10.70.46.10-var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_54f8b2d556bf6a30d3990382dccacc95-brick.pid -S /var/run/gluster/6801d81632622820f0b62ec2150f3d75.socket --brick-name /var/lib/heketi/mounts/vg_6c0538bc7bed0679f0f595c61c72656f/brick_54f8b2d556bf6a30d3990382dccacc95/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_54f8b2d556bf6a30d3990382dccacc95-brick.log --xlator-option *-posix.glusterd-uuid=a14e3b75-4e27-49cc-9591-52deb56b0223 --brick-port 49154 --xlator-option rally-fkx14ljxp9o267-server.listen-port=49154
> ├─29683 /usr/sbin/glusterfsd -s 10.70.46.10 --volfile-id vol_a833a9314f4557589a9d874105357140.10.70.46.10.var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_3dc24288734a047a8fcc00ab2c7a0974-brick -p /var/run/gluster/vols/vol_a833a9314f4557589a9d874105357140/10.70.46.10-var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_3dc24288734a047a8fcc00ab2c7a0974-brick.pid -S /var/run/gluster/43582dc96bce5442618f73c71cf50bbd.socket --brick-name /var/lib/heketi/mounts/vg_6c0538bc7bed0679f0f595c61c72656f/brick_3dc24288734a047a8fcc00ab2c7a0974/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_3dc24288734a047a8fcc00ab2c7a0974-brick.log --xlator-option *-posix.glusterd-uuid=a14e3b75-4e27-49cc-9591-52deb56b0223 --brick-port 49153 --xlator-option vol_a833a9314f4557589a9d874105357140-server.listen-port=49153
> └─31513 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/97a465235ca3ff589c72ec017f4ab551.socket --xlator-option *replicate*.node-uuid=a14e3b75-4e27-49cc-9591-52deb56b0223
>[heketi]
INFO 2018/08/28 11:54:22 Periodic health check status: node 284f3e3a4c2fd7c0e78b2e759afa9847 up=true
>[cmdexec] INFO 2018/08/28 11:54:22 Check Glusterd service status in node vp-ansible-v311-app-cns-2
>[kubeexec] DEBUG 2018/08/28 11:54:22 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-2 Pod: glusterfs-cns-hzqg6 Command: systemctl status glusterd
>Result: ● glusterd.service - GlusterFS, a clustered file-system server
> Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
> Active: active (running) since Wed 2018-08-22 09:56:11 UTC; 6 days ago
> Process: 431 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
> Main PID: 432 (glusterd)
> CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6310af79_a5f1_11e8_9a3f_005056a549ca.slice/docker-55cc0a963be6c53dd856f9025694fb7897b40fb9112aca2b194ddc65ec54d76f.scope/system.slice/glusterd.service
> ├─ 432 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
> ├─ 842 /usr/sbin/glusterfsd -s 10.70.47.176 --volfile-id heketidbstorage.10.70.47.176.var-lib-heketi-mounts-vg_b2c811c50490ee9832f1e0ecfb15f660-brick_7251b95b366cb335bf154a71d67f1250-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.176-var-lib-heketi-mounts-vg_b2c811c50490ee9832f1e0ecfb15f660-brick_7251b95b366cb335bf154a71d67f1250-brick.pid -S /var/run/gluster/fb2b61aa838bbe43a440ffc2986e9bc7.socket --brick-name /var/lib/heketi/mounts/vg_b2c811c50490ee9832f1e0ecfb15f660/brick_7251b95b366cb335bf154a71d67f1250/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_b2c811c50490ee9832f1e0ecfb15f660-brick_7251b95b366cb335bf154a71d67f1250-brick.log --xlator-option *-posix.glusterd-uuid=dda5114b-4ef3-40c8-948c-cdb3b1b56f4d --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152
> ├─ 6707 /usr/sbin/glusterfsd -s 10.70.47.176 --volfile-id
rally-fkx14ljxp9o267.10.70.47.176.var-lib-heketi-mounts-vg_91c1336b9d8010eb5c52368eef886671-brick_b8381259199cde3925065e7c54e98386-brick -p /var/run/gluster/vols/rally-fkx14ljxp9o267/10.70.47.176-var-lib-heketi-mounts-vg_91c1336b9d8010eb5c52368eef886671-brick_b8381259199cde3925065e7c54e98386-brick.pid -S /var/run/gluster/61260ed6fbb5d2443c6491e364f5216c.socket --brick-name /var/lib/heketi/mounts/vg_91c1336b9d8010eb5c52368eef886671/brick_b8381259199cde3925065e7c54e98386/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_91c1336b9d8010eb5c52368eef886671-brick_b8381259199cde3925065e7c54e98386-brick.log --xlator-option *-posix.glusterd-uuid=dda5114b-4ef3-40c8-948c-cdb3b1b56f4d --brick-port 49155 --xlator-option rally-fkx14ljxp9o267-server.listen-port=49155
> ├─29609 /usr/sbin/glusterfsd -s 10.70.47.176 --volfile-id vol_a833a9314f4557589a9d874105357140.10.70.47.176.var-lib-heketi-mounts-vg_91c1336b9d8010eb5c52368eef886671-brick_88fc94292cf739f1608b4abb3a42a5aa-brick -p /var/run/gluster/vols/vol_a833a9314f4557589a9d874105357140/10.70.47.176-var-lib-heketi-mounts-vg_91c1336b9d8010eb5c52368eef886671-brick_88fc94292cf739f1608b4abb3a42a5aa-brick.pid -S /var/run/gluster/a8eee47923e596454bd4f1c2e122327b.socket --brick-name /var/lib/heketi/mounts/vg_91c1336b9d8010eb5c52368eef886671/brick_88fc94292cf739f1608b4abb3a42a5aa/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_91c1336b9d8010eb5c52368eef886671-brick_88fc94292cf739f1608b4abb3a42a5aa-brick.log --xlator-option *-posix.glusterd-uuid=dda5114b-4ef3-40c8-948c-cdb3b1b56f4d --brick-port 49153 --xlator-option vol_a833a9314f4557589a9d874105357140-server.listen-port=49153
> └─31591 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/645e4eea62d3f97263e806a381eeabda.socket --xlator-option *replicate*.node-uuid=dda5114b-4ef3-40c8-948c-cdb3b1b56f4d
>[heketi] INFO 2018/08/28 11:54:22 Periodic health
check status: node f48c1fefdee6989a05560260afcd0a2d up=true
>[heketi] INFO 2018/08/28 11:54:22 Cleaned 0 nodes from health cache
>[negroni] Started GET /volumes
>[negroni] Completed 200 OK in 5.128471ms
>[negroni] Started GET /volumes/094c35c2fc6baf80fa9497f4fb393314
>[negroni] Completed 200 OK in 3.003055ms
>[negroni] Started GET /volumes/7c63529f0b15f298ebf42de88f10bc57
>[negroni] Completed 200 OK in 1.622124ms
>[negroni] Started GET /volumes/88eb861a7ad3f268c2d092be1287cef6
>[negroni] Completed 200 OK in 539.227µs
>[negroni] Started GET /volumes/a833a9314f4557589a9d874105357140
>[negroni] Completed 200 OK in 673.574µs
>[negroni] Started GET /volumes/c1db889aef3227f9988c520891cdb889
>[negroni] Completed 200 OK in 1.048082ms
>[heketi] INFO 2018/08/28 11:56:21 Starting Node Health Status refresh
>[cmdexec] INFO 2018/08/28 11:56:21 Check Glusterd service status in node vp-ansible-v311-app-cns-0
>[kubeexec] DEBUG 2018/08/28 11:56:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-0 Pod: glusterfs-cns-jlgq2 Command: systemctl status glusterd
>Result: ● glusterd.service - GlusterFS, a clustered file-system server
> Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
> Active: active (running) since Wed 2018-08-22 09:56:03 UTC; 6 days ago
> Process: 429 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
> Main PID: 430 (glusterd)
> CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod630cf184_a5f1_11e8_9a3f_005056a549ca.slice/docker-24ed69a095449460f61deded6aaa6a7de0753c88a3cf0a5e07b85b9cd4a92805.scope/system.slice/glusterd.service
> ├─ 430 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
> ├─ 852 /usr/sbin/glusterfsd -s 10.70.46.26 --volfile-id heketidbstorage.10.70.46.26.var-lib-heketi-mounts-vg_b1ebb7cf4e45c57d379df092a591ec0a-brick_cdc9f126aeab5fc4af31b482980e213e-brick -p
/var/run/gluster/vols/heketidbstorage/10.70.46.26-var-lib-heketi-mounts-vg_b1ebb7cf4e45c57d379df092a591ec0a-brick_cdc9f126aeab5fc4af31b482980e213e-brick.pid -S /var/run/gluster/6a2a03eab842b37b7169d67e7d04d6c2.socket --brick-name /var/lib/heketi/mounts/vg_b1ebb7cf4e45c57d379df092a591ec0a/brick_cdc9f126aeab5fc4af31b482980e213e/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_b1ebb7cf4e45c57d379df092a591ec0a-brick_cdc9f126aeab5fc4af31b482980e213e-brick.log --xlator-option *-posix.glusterd-uuid=0202e093-70b5-4ddd-927d-bdb37c3daa5f --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152
> ├─ 1267 /usr/sbin/glusterfsd -s 10.70.46.26 --volfile-id rally-tljuuhzo4iht8d.10.70.46.26.var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_7d951cd9233c2bafe3c5d611e3d8883f-brick -p /var/run/gluster/vols/rally-tljuuhzo4iht8d/10.70.46.26-var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_7d951cd9233c2bafe3c5d611e3d8883f-brick.pid -S /var/run/gluster/1aa61f85bdaf434a953a7efb3d468b98.socket --brick-name /var/lib/heketi/mounts/vg_80280148c66c9e91e0c27f64a751900f/brick_7d951cd9233c2bafe3c5d611e3d8883f/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_7d951cd9233c2bafe3c5d611e3d8883f-brick.log --xlator-option *-posix.glusterd-uuid=0202e093-70b5-4ddd-927d-bdb37c3daa5f --brick-port 49154 --xlator-option rally-tljuuhzo4iht8d-server.listen-port=49154
> ├─29575 /usr/sbin/glusterfsd -s 10.70.46.26 --volfile-id vol_a833a9314f4557589a9d874105357140.10.70.46.26.var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_3510a38a8d5d693a612baa22d916237b-brick -p /var/run/gluster/vols/vol_a833a9314f4557589a9d874105357140/10.70.46.26-var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_3510a38a8d5d693a612baa22d916237b-brick.pid -S /var/run/gluster/4d30a5ebbf6049f7294f2c96c0889f0e.socket --brick-name
/var/lib/heketi/mounts/vg_80280148c66c9e91e0c27f64a751900f/brick_3510a38a8d5d693a612baa22d916237b/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_3510a38a8d5d693a612baa22d916237b-brick.log --xlator-option *-posix.glusterd-uuid=0202e093-70b5-4ddd-927d-bdb37c3daa5f --brick-port 49153 --xlator-option vol_a833a9314f4557589a9d874105357140-server.listen-port=49153
> └─31589 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/13b5dab3bb9b7241888ca2821e42e2c0.socket --xlator-option *replicate*.node-uuid=0202e093-70b5-4ddd-927d-bdb37c3daa5f
>[heketi] INFO 2018/08/28 11:56:21 Periodic health check status: node 064f1f469119e5c69a56a8c81b5fd96a up=true
>[cmdexec] INFO 2018/08/28 11:56:21 Check Glusterd service status in node vp-ansible-v311-app-cns-1
>[kubeexec] DEBUG 2018/08/28 11:56:22 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-1 Pod: glusterfs-cns-qrfrz Command: systemctl status glusterd
>Result: ● glusterd.service - GlusterFS, a clustered file-system server
> Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
> Active: active (running) since Wed 2018-08-22 09:55:53 UTC; 6 days ago
> Process: 431 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
> Main PID: 432 (glusterd)
> CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod631c8e73_a5f1_11e8_9a3f_005056a549ca.slice/docker-0c00732c129d13630118d564e7628c6b1a7329d974aa476e520ae3be5990a263.scope/system.slice/glusterd.service
> ├─ 432 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
> ├─ 856 /usr/sbin/glusterfsd -s 10.70.46.10 --volfile-id
heketidbstorage.10.70.46.10.var-lib-heketi-mounts-vg_557622450e27d9663fd36087fbe35bee-brick_55d03facb36899f57cdabd107689eba0-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.10-var-lib-heketi-mounts-vg_557622450e27d9663fd36087fbe35bee-brick_55d03facb36899f57cdabd107689eba0-brick.pid -S /var/run/gluster/eb9a06fb398d5e1d9578e0fc2dc82d85.socket --brick-name /var/lib/heketi/mounts/vg_557622450e27d9663fd36087fbe35bee/brick_55d03facb36899f57cdabd107689eba0/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_557622450e27d9663fd36087fbe35bee-brick_55d03facb36899f57cdabd107689eba0-brick.log --xlator-option *-posix.glusterd-uuid=a14e3b75-4e27-49cc-9591-52deb56b0223 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152
> ├─29465 /usr/sbin/glusterfsd -s 10.70.46.10 --volfile-id rally-fkx14ljxp9o267.10.70.46.10.var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_54f8b2d556bf6a30d3990382dccacc95-brick -p /var/run/gluster/vols/rally-fkx14ljxp9o267/10.70.46.10-var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_54f8b2d556bf6a30d3990382dccacc95-brick.pid -S /var/run/gluster/6801d81632622820f0b62ec2150f3d75.socket --brick-name /var/lib/heketi/mounts/vg_6c0538bc7bed0679f0f595c61c72656f/brick_54f8b2d556bf6a30d3990382dccacc95/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_54f8b2d556bf6a30d3990382dccacc95-brick.log --xlator-option *-posix.glusterd-uuid=a14e3b75-4e27-49cc-9591-52deb56b0223 --brick-port 49154 --xlator-option rally-fkx14ljxp9o267-server.listen-port=49154
> ├─29683 /usr/sbin/glusterfsd -s 10.70.46.10 --volfile-id vol_a833a9314f4557589a9d874105357140.10.70.46.10.var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_3dc24288734a047a8fcc00ab2c7a0974-brick -p /var/run/gluster/vols/vol_a833a9314f4557589a9d874105357140/10.70.46.10-var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_3dc24288734a047a8fcc00ab2c7a0974-brick.pid -S
/var/run/gluster/43582dc96bce5442618f73c71cf50bbd.socket --brick-name /var/lib/heketi/mounts/vg_6c0538bc7bed0679f0f595c61c72656f/brick_3dc24288734a047a8fcc00ab2c7a0974/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_3dc24288734a047a8fcc00ab2c7a0974-brick.log --xlator-option *-posix.glusterd-uuid=a14e3b75-4e27-49cc-9591-52deb56b0223 --brick-port 49153 --xlator-option vol_a833a9314f4557589a9d874105357140-server.listen-port=49153
> └─31513 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/97a465235ca3ff589c72ec017f4ab551.socket --xlator-option *replicate*.node-uuid=a14e3b75-4e27-49cc-9591-52deb56b0223
>[heketi] INFO 2018/08/28 11:56:22 Periodic health check status: node 284f3e3a4c2fd7c0e78b2e759afa9847 up=true
>[cmdexec] INFO 2018/08/28 11:56:22 Check Glusterd service status in node vp-ansible-v311-app-cns-2
>[kubeexec] DEBUG 2018/08/28 11:56:22 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-2 Pod: glusterfs-cns-hzqg6 Command: systemctl status glusterd
>Result: ● glusterd.service - GlusterFS, a clustered file-system server
> Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
> Active: active (running) since Wed 2018-08-22 09:56:11 UTC; 6 days ago
> Process: 431 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
> Main PID: 432 (glusterd)
> CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6310af79_a5f1_11e8_9a3f_005056a549ca.slice/docker-55cc0a963be6c53dd856f9025694fb7897b40fb9112aca2b194ddc65ec54d76f.scope/system.slice/glusterd.service
> ├─ 432 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
> ├─ 842 /usr/sbin/glusterfsd -s 10.70.47.176 --volfile-id
heketidbstorage.10.70.47.176.var-lib-heketi-mounts-vg_b2c811c50490ee9832f1e0ecfb15f660-brick_7251b95b366cb335bf154a71d67f1250-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.176-var-lib-heketi-mounts-vg_b2c811c50490ee9832f1e0ecfb15f660-brick_7251b95b366cb335bf154a71d67f1250-brick.pid -S /var/run/gluster/fb2b61aa838bbe43a440ffc2986e9bc7.socket --brick-name /var/lib/heketi/mounts/vg_b2c811c50490ee9832f1e0ecfb15f660/brick_7251b95b366cb335bf154a71d67f1250/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_b2c811c50490ee9832f1e0ecfb15f660-brick_7251b95b366cb335bf154a71d67f1250-brick.log --xlator-option *-posix.glusterd-uuid=dda5114b-4ef3-40c8-948c-cdb3b1b56f4d --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152
> ├─ 6707 /usr/sbin/glusterfsd -s 10.70.47.176 --volfile-id rally-fkx14ljxp9o267.10.70.47.176.var-lib-heketi-mounts-vg_91c1336b9d8010eb5c52368eef886671-brick_b8381259199cde3925065e7c54e98386-brick -p /var/run/gluster/vols/rally-fkx14ljxp9o267/10.70.47.176-var-lib-heketi-mounts-vg_91c1336b9d8010eb5c52368eef886671-brick_b8381259199cde3925065e7c54e98386-brick.pid -S /var/run/gluster/61260ed6fbb5d2443c6491e364f5216c.socket --brick-name /var/lib/heketi/mounts/vg_91c1336b9d8010eb5c52368eef886671/brick_b8381259199cde3925065e7c54e98386/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_91c1336b9d8010eb5c52368eef886671-brick_b8381259199cde3925065e7c54e98386-brick.log --xlator-option *-posix.glusterd-uuid=dda5114b-4ef3-40c8-948c-cdb3b1b56f4d --brick-port 49155 --xlator-option rally-fkx14ljxp9o267-server.listen-port=49155
> ├─29609 /usr/sbin/glusterfsd -s 10.70.47.176 --volfile-id vol_a833a9314f4557589a9d874105357140.10.70.47.176.var-lib-heketi-mounts-vg_91c1336b9d8010eb5c52368eef886671-brick_88fc94292cf739f1608b4abb3a42a5aa-brick -p /var/run/gluster/vols/vol_a833a9314f4557589a9d874105357140/10.70.47.176-var-lib-heketi-mounts-vg_91c1336b9d8010eb5c52368eef886671-brick_88fc94292cf739f1608b4abb3a42a5aa-brick.pid -S
/var/run/gluster/a8eee47923e596454bd4f1c2e122327b.socket --brick-name /var/lib/heketi/mounts/vg_91c1336b9d8010eb5c52368eef886671/brick_88fc94292cf739f1608b4abb3a42a5aa/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_91c1336b9d8010eb5c52368eef886671-brick_88fc94292cf739f1608b4abb3a42a5aa-brick.log --xlator-option *-posix.glusterd-uuid=dda5114b-4ef3-40c8-948c-cdb3b1b56f4d --brick-port 49153 --xlator-option vol_a833a9314f4557589a9d874105357140-server.listen-port=49153
> └─31591 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/645e4eea62d3f97263e806a381eeabda.socket --xlator-option *replicate*.node-uuid=dda5114b-4ef3-40c8-948c-cdb3b1b56f4d
>[heketi] INFO 2018/08/28 11:56:22 Periodic health check status: node f48c1fefdee6989a05560260afcd0a2d up=true
>[heketi] INFO 2018/08/28 11:56:22 Cleaned 0 nodes from health cache
>[negroni] Started GET /db/dump
>[negroni] Completed 200 OK in 9.641438ms
>[negroni] Started GET /db/dump
>[negroni] Completed 200 OK in 3.348109ms
>[heketi] INFO 2018/08/28 11:58:21 Starting Node Health Status refresh
>[cmdexec] INFO 2018/08/28 11:58:21 Check Glusterd service status in node vp-ansible-v311-app-cns-0
>[kubeexec] DEBUG 2018/08/28 11:58:21 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-0 Pod: glusterfs-cns-jlgq2 Command: systemctl status glusterd
>Result: ● glusterd.service - GlusterFS, a clustered file-system server
> Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
> Active: active (running) since Wed 2018-08-22 09:56:03 UTC; 6 days ago
> Process: 429 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
> Main PID: 430 (glusterd)
> CGroup:
/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod630cf184_a5f1_11e8_9a3f_005056a549ca.slice/docker-24ed69a095449460f61deded6aaa6a7de0753c88a3cf0a5e07b85b9cd4a92805.scope/system.slice/glusterd.service
> ├─ 430 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
> ├─ 852 /usr/sbin/glusterfsd -s 10.70.46.26 --volfile-id heketidbstorage.10.70.46.26.var-lib-heketi-mounts-vg_b1ebb7cf4e45c57d379df092a591ec0a-brick_cdc9f126aeab5fc4af31b482980e213e-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.26-var-lib-heketi-mounts-vg_b1ebb7cf4e45c57d379df092a591ec0a-brick_cdc9f126aeab5fc4af31b482980e213e-brick.pid -S /var/run/gluster/6a2a03eab842b37b7169d67e7d04d6c2.socket --brick-name /var/lib/heketi/mounts/vg_b1ebb7cf4e45c57d379df092a591ec0a/brick_cdc9f126aeab5fc4af31b482980e213e/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_b1ebb7cf4e45c57d379df092a591ec0a-brick_cdc9f126aeab5fc4af31b482980e213e-brick.log --xlator-option *-posix.glusterd-uuid=0202e093-70b5-4ddd-927d-bdb37c3daa5f --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152
> ├─ 1267 /usr/sbin/glusterfsd -s 10.70.46.26 --volfile-id rally-tljuuhzo4iht8d.10.70.46.26.var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_7d951cd9233c2bafe3c5d611e3d8883f-brick -p /var/run/gluster/vols/rally-tljuuhzo4iht8d/10.70.46.26-var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_7d951cd9233c2bafe3c5d611e3d8883f-brick.pid -S /var/run/gluster/1aa61f85bdaf434a953a7efb3d468b98.socket --brick-name /var/lib/heketi/mounts/vg_80280148c66c9e91e0c27f64a751900f/brick_7d951cd9233c2bafe3c5d611e3d8883f/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_7d951cd9233c2bafe3c5d611e3d8883f-brick.log --xlator-option *-posix.glusterd-uuid=0202e093-70b5-4ddd-927d-bdb37c3daa5f --brick-port 49154 --xlator-option rally-tljuuhzo4iht8d-server.listen-port=49154
> ├─29575 /usr/sbin/glusterfsd -s 10.70.46.26 --volfile-id
vol_a833a9314f4557589a9d874105357140.10.70.46.26.var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_3510a38a8d5d693a612baa22d916237b-brick -p /var/run/gluster/vols/vol_a833a9314f4557589a9d874105357140/10.70.46.26-var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_3510a38a8d5d693a612baa22d916237b-brick.pid -S /var/run/gluster/4d30a5ebbf6049f7294f2c96c0889f0e.socket --brick-name /var/lib/heketi/mounts/vg_80280148c66c9e91e0c27f64a751900f/brick_3510a38a8d5d693a612baa22d916237b/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_80280148c66c9e91e0c27f64a751900f-brick_3510a38a8d5d693a612baa22d916237b-brick.log --xlator-option *-posix.glusterd-uuid=0202e093-70b5-4ddd-927d-bdb37c3daa5f --brick-port 49153 --xlator-option vol_a833a9314f4557589a9d874105357140-server.listen-port=49153
> └─31589 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/13b5dab3bb9b7241888ca2821e42e2c0.socket --xlator-option *replicate*.node-uuid=0202e093-70b5-4ddd-927d-bdb37c3daa5f
>[heketi] INFO 2018/08/28 11:58:21 Periodic health check status: node 064f1f469119e5c69a56a8c81b5fd96a up=true
>[cmdexec] INFO 2018/08/28 11:58:21 Check Glusterd service status in node vp-ansible-v311-app-cns-1
>[kubeexec] DEBUG 2018/08/28 11:58:22 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-1 Pod: glusterfs-cns-qrfrz Command: systemctl status glusterd
>Result: ● glusterd.service - GlusterFS, a clustered file-system server
> Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
> Active: active (running) since Wed 2018-08-22 09:55:53 UTC; 6 days ago
> Process: 431 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
> Main PID: 432 (glusterd)
> CGroup:
/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod631c8e73_a5f1_11e8_9a3f_005056a549ca.slice/docker-0c00732c129d13630118d564e7628c6b1a7329d974aa476e520ae3be5990a263.scope/system.slice/glusterd.service
> ├─ 432 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
> ├─ 856 /usr/sbin/glusterfsd -s 10.70.46.10 --volfile-id heketidbstorage.10.70.46.10.var-lib-heketi-mounts-vg_557622450e27d9663fd36087fbe35bee-brick_55d03facb36899f57cdabd107689eba0-brick -p /var/run/gluster/vols/heketidbstorage/10.70.46.10-var-lib-heketi-mounts-vg_557622450e27d9663fd36087fbe35bee-brick_55d03facb36899f57cdabd107689eba0-brick.pid -S /var/run/gluster/eb9a06fb398d5e1d9578e0fc2dc82d85.socket --brick-name /var/lib/heketi/mounts/vg_557622450e27d9663fd36087fbe35bee/brick_55d03facb36899f57cdabd107689eba0/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_557622450e27d9663fd36087fbe35bee-brick_55d03facb36899f57cdabd107689eba0-brick.log --xlator-option *-posix.glusterd-uuid=a14e3b75-4e27-49cc-9591-52deb56b0223 --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152
> ├─29465 /usr/sbin/glusterfsd -s 10.70.46.10 --volfile-id rally-fkx14ljxp9o267.10.70.46.10.var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_54f8b2d556bf6a30d3990382dccacc95-brick -p /var/run/gluster/vols/rally-fkx14ljxp9o267/10.70.46.10-var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_54f8b2d556bf6a30d3990382dccacc95-brick.pid -S /var/run/gluster/6801d81632622820f0b62ec2150f3d75.socket --brick-name /var/lib/heketi/mounts/vg_6c0538bc7bed0679f0f595c61c72656f/brick_54f8b2d556bf6a30d3990382dccacc95/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_54f8b2d556bf6a30d3990382dccacc95-brick.log --xlator-option *-posix.glusterd-uuid=a14e3b75-4e27-49cc-9591-52deb56b0223 --brick-port 49154 --xlator-option rally-fkx14ljxp9o267-server.listen-port=49154
> ├─29683 /usr/sbin/glusterfsd -s 10.70.46.10 --volfile-id
vol_a833a9314f4557589a9d874105357140.10.70.46.10.var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_3dc24288734a047a8fcc00ab2c7a0974-brick -p /var/run/gluster/vols/vol_a833a9314f4557589a9d874105357140/10.70.46.10-var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_3dc24288734a047a8fcc00ab2c7a0974-brick.pid -S /var/run/gluster/43582dc96bce5442618f73c71cf50bbd.socket --brick-name /var/lib/heketi/mounts/vg_6c0538bc7bed0679f0f595c61c72656f/brick_3dc24288734a047a8fcc00ab2c7a0974/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_6c0538bc7bed0679f0f595c61c72656f-brick_3dc24288734a047a8fcc00ab2c7a0974-brick.log --xlator-option *-posix.glusterd-uuid=a14e3b75-4e27-49cc-9591-52deb56b0223 --brick-port 49153 --xlator-option vol_a833a9314f4557589a9d874105357140-server.listen-port=49153
> └─31513 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/97a465235ca3ff589c72ec017f4ab551.socket --xlator-option *replicate*.node-uuid=a14e3b75-4e27-49cc-9591-52deb56b0223
>[heketi] INFO 2018/08/28 11:58:22 Periodic health check status: node 284f3e3a4c2fd7c0e78b2e759afa9847 up=true
>[cmdexec] INFO 2018/08/28 11:58:22 Check Glusterd service status in node vp-ansible-v311-app-cns-2
>[kubeexec] DEBUG 2018/08/28 11:58:22 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:246: Host: vp-ansible-v311-app-cns-2 Pod: glusterfs-cns-hzqg6 Command: systemctl status glusterd
>Result: ● glusterd.service - GlusterFS, a clustered file-system server
> Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
> Active: active (running) since Wed 2018-08-22 09:56:11 UTC; 6 days ago
> Process: 431 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
> Main PID: 432 (glusterd)
> CGroup:
/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6310af79_a5f1_11e8_9a3f_005056a549ca.slice/docker-55cc0a963be6c53dd856f9025694fb7897b40fb9112aca2b194ddc65ec54d76f.scope/system.slice/glusterd.service
> ├─ 432 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
> ├─ 842 /usr/sbin/glusterfsd -s 10.70.47.176 --volfile-id heketidbstorage.10.70.47.176.var-lib-heketi-mounts-vg_b2c811c50490ee9832f1e0ecfb15f660-brick_7251b95b366cb335bf154a71d67f1250-brick -p /var/run/gluster/vols/heketidbstorage/10.70.47.176-var-lib-heketi-mounts-vg_b2c811c50490ee9832f1e0ecfb15f660-brick_7251b95b366cb335bf154a71d67f1250-brick.pid -S /var/run/gluster/fb2b61aa838bbe43a440ffc2986e9bc7.socket --brick-name /var/lib/heketi/mounts/vg_b2c811c50490ee9832f1e0ecfb15f660/brick_7251b95b366cb335bf154a71d67f1250/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_b2c811c50490ee9832f1e0ecfb15f660-brick_7251b95b366cb335bf154a71d67f1250-brick.log --xlator-option *-posix.glusterd-uuid=dda5114b-4ef3-40c8-948c-cdb3b1b56f4d --brick-port 49152 --xlator-option heketidbstorage-server.listen-port=49152
> ├─ 6707 /usr/sbin/glusterfsd -s 10.70.47.176 --volfile-id rally-fkx14ljxp9o267.10.70.47.176.var-lib-heketi-mounts-vg_91c1336b9d8010eb5c52368eef886671-brick_b8381259199cde3925065e7c54e98386-brick -p /var/run/gluster/vols/rally-fkx14ljxp9o267/10.70.47.176-var-lib-heketi-mounts-vg_91c1336b9d8010eb5c52368eef886671-brick_b8381259199cde3925065e7c54e98386-brick.pid -S /var/run/gluster/61260ed6fbb5d2443c6491e364f5216c.socket --brick-name /var/lib/heketi/mounts/vg_91c1336b9d8010eb5c52368eef886671/brick_b8381259199cde3925065e7c54e98386/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_91c1336b9d8010eb5c52368eef886671-brick_b8381259199cde3925065e7c54e98386-brick.log --xlator-option *-posix.glusterd-uuid=dda5114b-4ef3-40c8-948c-cdb3b1b56f4d --brick-port 49155 --xlator-option rally-fkx14ljxp9o267-server.listen-port=49155
> ├─29609 /usr/sbin/glusterfsd -s 10.70.47.176 --volfile-id vol_a833a9314f4557589a9d874105357140.10.70.47.176.var-lib-heketi-mounts-vg_91c1336b9d8010eb5c52368eef886671-brick_88fc94292cf739f1608b4abb3a42a5aa-brick -p /var/run/gluster/vols/vol_a833a9314f4557589a9d874105357140/10.70.47.176-var-lib-heketi-mounts-vg_91c1336b9d8010eb5c52368eef886671-brick_88fc94292cf739f1608b4abb3a42a5aa-brick.pid -S /var/run/gluster/a8eee47923e596454bd4f1c2e122327b.socket --brick-name /var/lib/heketi/mounts/vg_91c1336b9d8010eb5c52368eef886671/brick_88fc94292cf739f1608b4abb3a42a5aa/brick -l /var/log/glusterfs/bricks/var-lib-heketi-mounts-vg_91c1336b9d8010eb5c52368eef886671-brick_88fc94292cf739f1608b4abb3a42a5aa-brick.log --xlator-option *-posix.glusterd-uuid=dda5114b-4ef3-40c8-948c-cdb3b1b56f4d --brick-port 49153 --xlator-option vol_a833a9314f4557589a9d874105357140-server.listen-port=49153
> └─31591 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/645e4eea62d3f97263e806a381eeabda.socket --xlator-option *replicate*.node-uuid=dda5114b-4ef3-40c8-948c-cdb3b1b56f4d
>[heketi] INFO 2018/08/28 11:58:22 Periodic health check status: node f48c1fefdee6989a05560260afcd0a2d up=true
>[heketi] INFO 2018/08/28 11:58:22 Cleaned 0 nodes from health cache