Bug 1659825 - Regularly health-check failed, going down
Summary: Regularly health-check failed, going down
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: GlusterFS
Classification: Community
Component: posix
Version: 5
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Mohit Agrawal
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-12-16 20:02 UTC by aedu@wyssmann.com
Modified: 2018-12-19 09:05 UTC (History)
4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-12-19 09:05:44 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Attachments (Terms of Use)
logs of all 3 nodes (2.44 MB, application/zip)
2018-12-16 20:02 UTC, aedu@wyssmann.com
no flags Details


Links
System ID Private Priority Status Summary Last Updated
Github 616 0 None None None 2020-10-01 04:15:48 UTC

Description aedu@wyssmann.com 2018-12-16 20:02:47 UTC
Created attachment 1514921 [details]
logs of all 3 nodes

I have a cluster set up as described in #615, which consists of 3 nodes:

    server1, 192.168.100.1
    server2, 192.168.100.2
    server3, 192.168.100.3

I am using Gluster 5.2. The volume status is healthy, but when, for testing, I start copying a bunch of files to the mounted volume, I suddenly get

data-gluster-brick1[10104]: [2018-12-16 11:54:53.398787] M [MSGID: 113075] [posix-helpers.c:1957:posix_health_check_thread_proc] 0-datavol-posix: health-check failed, going down

Broadcast message from systemd-journald@server3 (Sun 2018-12-16 12:54:53 CET):
data-gluster-brick1[10104]: [2018-12-16 11:54:53.398861] M [MSGID: 113075] [posix-helpers.c:1975:posix_health_check_thread_proc] 0-datavol-posix: still alive! -> SIGTERM

Message from syslogd@localhost at Dec 16 12:54:53 ...
 data-gluster-brick1[10104]: [2018-12-16 11:54:53.398861] M [MSGID: 113075] [posix-helpers.c:1975:posix_health_check_thread_proc] 0-datavol-posix: still alive! -> SIGTERM

glusterd reports

Dec 16 12:45:47 server1 data-gluster-brick1[15946]: [2018-12-16 11:45:47.940510] M [MSGID: 113075] [posix-helpers.c:1957:posix_health_check_thread_proc] 0-datavol-posix: health-check failed, going down
Dec 16 12:45:47 server1 data-gluster-brick1[15946]: [2018-12-16 11:45:47.940650] M [MSGID: 113075] [posix-helpers.c:1975:posix_health_check_thread_proc] 0-datavol-posix: still alive! -> SIGTERM

The volume is a replicated one, with 1 brick per node. The bricks are on top of a thin pool:

# lvdisplay 
  --- Logical volume ---
  LV Name                vg_md3_thinpool
  VG Name                vg_md3
  LV UUID                w9Obnd-rPz0-kPUX-UQpw-8WBv-JsWp-iNWgHH
  LV Write Access        read/write
  LV Creation host, time server1, 2018-12-10 15:01:19 +0100
  LV Pool metadata       vg_md3_thinpool_tmeta
  LV Pool data           vg_md3_thinpool_tdata
  LV Status              available
  # open                 2
  LV Size                1.70 TiB
  Allocated pool data    0.63%
  Allocated metadata     0.16%
  Current LE             445645
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:2
   
  --- Logical volume ---
  LV Path                /dev/vg_md3/vg_md3_thinlv
  LV Name                vg_md3_thinlv
  VG Name                vg_md3
  LV UUID                h3J0tR-qN6u-X5Ea-B5di-TnfR-mt9c-HAkYH1
  LV Write Access        read/write
  LV Creation host, time server1, 2018-12-10 15:01:21 +0100
  LV Pool name           vg_md3_thinpool
  LV Status              available
  # open                 1
  LV Size                1.70 TiB
  Mapped size            0.63%
  Current LE             445645
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:4

Disk config looks like this:

# fdisk -l
Disk /dev/sdb: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: C90B8273-29EF-4411-83E3-F8896BE33F22

Device          Start        End    Sectors  Size Type
/dev/sdb1        4096   33558527   33554432   16G Linux RAID
/dev/sdb2    33558528   34607103    1048576  512M Linux RAID
/dev/sdb3    34607104 2182090751 2147483648    1T Linux RAID
/dev/sdb4  2182090752 5860533134 3678442383  1.7T Linux RAID
/dev/sdb5        2048       4095       2048    1M BIOS boot

Partition table entries are not in disk order.


Disk /dev/sda: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 97DB8175-3C0A-4A10-AB86-DBDE6BEA65A2

Device          Start        End    Sectors  Size Type
/dev/sda1        4096   33558527   33554432   16G Linux RAID
/dev/sda2    33558528   34607103    1048576  512M Linux RAID
/dev/sda3    34607104 2182090751 2147483648    1T Linux RAID
/dev/sda4  2182090752 5860533134 3678442383  1.7T Linux RAID
/dev/sda5        2048       4095       2048    1M BIOS boot

Partition table entries are not in disk order.


Disk /dev/md3: 1.7 TiB, 1883228274688 bytes, 3678180224 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/md2: 1023.9 GiB, 1099377410048 bytes, 2147221504 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/md1: 511.4 MiB, 536281088 bytes, 1047424 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/md0: 16 GiB, 17163091968 bytes, 33521664 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vg_md3-vg_md3_thinlv: 1.7 TiB, 1869170606080 bytes, 3650723840 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 262144 bytes / 262144 bytes

Comment 1 Mohit Agrawal 2018-12-17 02:52:47 UTC
Hi,

Based on the current logs, it seems more than one brick process is running on the nodes for the same brick.

The brick was first started at:

>>>>>>>>>>>>>>>>>>>>>>

[2018-12-15 10:37:15.124245] I [MSGID: 100030] [glusterfsd.c:2454:main] 0-/usr/sbin/glusterfsd: Started running /usr/sbin/glusterfsd version 3.8.8 (args: /usr/sbin/glusterfsd -s 192.168.100.1 --volfile-id datavol.192.168.100.1.data-gluster-brick1 -p /var/lib/glusterd/vols/datavol/run/192.168.100.1-data-gluster-brick1.pid -S /var/run/gluster/94414a1c51f6146a22e4158fdc3505f2.socket --brick-name /data/gluster/brick1 -l /var/log/glusterfs/bricks/data-gluster-brick1.log --xlator-option *-posix.glusterd-uuid=e0f96163-8510-42a8-8437-60f4883f9a03 --brick-port 49152 --xlator-option datavol-server.listen-port=49152)

>>>>>>>>>>>>>>>>>>>>>

After that I am not able to see any message specific to a graceful shutdown of the brick process, but a new brick process was started.
Did you reboot the node at that time? That would explain why there is no cleanup_and_exit message for the brick shutdown.

>>>>>>>>>>>>>>>>>>>.

[2018-12-15 17:04:25.642955] I [MSGID: 115029] [server-handshake.c:692:server_setvolume] 0-datavol-server: accepted client from CTX_ID:3d9429d7-f51c-4481-9217-cd489951fcf4-GRAPH_ID:0-PID:17284-HOST:server2-PC_NAME:datavol-client-0-RECON_NO:-0 (version: 5.2)
[2018-12-15 17:04:25.643509] I [login.c:76:gf_auth] 0-auth/login: allowed user names: ff65d1fc-1bbd-41c6-bb71-fe0fe3028d15
[2018-12-15 17:04:25.643544] I [MSGID: 115029] [server-handshake.c:692:server_setvolume] 0-datavol-server: accepted client from server3-5745-2018/12/15-17:04:20:745092-datavol-client-0-0-0 (version: 3.8.8)
[2018-12-15 17:04:26.259670] I [MSGID: 100030] [glusterfsd.c:2691:main] 0-/usr/sbin/glusterfsd: Started running /usr/sbin/glusterfsd version 5.2 (args: /usr/sbin/glusterfsd -s 192.168.100.1 --volfile-id datavol.192.168.100.1.data-gluster-brick1 -p /var/run/gluster/vols/datavol/192.168.100.1-data-gluster-brick1.pid -S /var/run/gluster/1a6b22a893ad975c.socket --brick-name /data/gluster/brick1 -l /var/log/glusterfs/bricks/data-gluster-brick1.log --xlator-option *-posix.glusterd-uuid=e0f96163-8510-42a8-8437-60f4883f9a03 --process-name brick --brick-port 49153 --xlator-option datavol-server.listen-port=49153)
[2018-12-15 17:04:26.263795] I [MSGID: 101190] [event-epoll.c:622:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1


>>>>>>>>>>>>>>>>>>>>>

Below is the health-check timestamp as shared by you. It means the brick is still running and some thread is still updating the timestamp in the health-check file, because I am not able to see any message at the same timestamp in the logs:

server1 # cat /data/gluster/brick1/.glusterfs/health_check
2018-12-16 16:26:46
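
As a side note, a quick way to see how stale that timestamp is (a sketch only; assumes GNU date, and the brick path is the one shown above):

```shell
#!/bin/sh
# seconds_since prints how many seconds ago the UTC timestamp stored in
# a file was written; the health_check file holds "YYYY-MM-DD HH:MM:SS".
seconds_since() {
    last=$(date -u -d "$(cat "$1")" +%s)
    now=$(date -u +%s)
    echo $((now - last))
}

# Brick path taken from this report; adjust for your setup.
HC=/data/gluster/brick1/.glusterfs/health_check
[ -f "$HC" ] && echo "last health-check update: $(seconds_since "$HC")s ago"
```

If the number keeps resetting to a small value while the logged brick claims to be down, some other brick process is still writing the file.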


Thanks,
Mohit Agrawal

Comment 2 Atin Mukherjee 2018-12-17 09:38:41 UTC
How did we end up running two brick instances at the same time?

Mohit - could you elaborate more? Does glusterd log indicate that it tried to start the brick twice?

Comment 3 Shyamsundar 2018-12-17 15:14:14 UTC
Dropping in a reference to a 2-brick-instance case that I saw in the regression tests: bz #1637743.

It may be different, but worth a cross post.

Comment 4 aedu@wyssmann.com 2018-12-17 15:24:18 UTC
I installed gluster from the standard Debian repo (3.8, I believe) and did all the setup. Afterwards I upgraded to the latest version, 5.2. Perhaps a migration path is missing, or something?
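
One way duplicate brick processes could survive an upgrade like this (purely a hypothesis, not confirmed in this report) is that the pid-file location changed between versions, as the two start commands quoted in comment 1 show (`/var/lib/glusterd/vols/datavol/run/` for 3.8 vs `/var/run/gluster/vols/datavol/` for 5.x). A sketch to scan both locations for live brick processes:

```shell
#!/bin/sh
# list_live_bricks scans the given directories for *.pid files and
# reports which ones point at a running process. The directories below
# are the two pid-file locations seen in this report's brick start
# commands; adjust for your volume.
list_live_bricks() {
    for dir in "$@"; do
        for pidfile in "$dir"/*.pid; do
            [ -f "$pidfile" ] || continue
            pid=$(cat "$pidfile")
            if kill -0 "$pid" 2>/dev/null; then
                echo "live process $pid from $pidfile"
            fi
        done
    done
}

list_live_bricks /var/lib/glusterd/vols/datavol/run /var/run/gluster/vols/datavol
```

Two live pids for the same brick would point at exactly the duplicate-instance situation described above.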

Comment 5 Mohit Agrawal 2018-12-17 16:25:01 UTC
@Atin,

We don't have the old glusterd logs. I said two brick instances were running on the basis of the timestamp: ideally the timestamp should match when the posix_health_check thread goes down, but it was not matching in the logs.

@Adrian

As you mentioned, you missed the migration path; that could be the reason for two brick instances. To confirm, you can check the brick start time (STIME) in the ps output.

Can you confirm for now how many brick instances are running on the node?
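
For example, a sketch to count them (illustrative only; the brick path matches this report, and `lstart` prints the full start time where STIME truncates it):

```shell
#!/bin/sh
# Count glusterfsd processes whose command line mentions a given brick
# path; anything above 1 means duplicate brick instances.
count_brick_procs() {
    ps -eo pid,lstart,args | grep glusterfsd | grep -- "$1" | grep -cv grep
}

count_brick_procs "/data/gluster/brick1"
```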
 

Thanks,
Mohit Agrawal

Comment 6 aedu@wyssmann.com 2018-12-19 05:59:22 UTC
@Mohit

In the meantime I have completely wiped my 3 nodes and done the setup again. Now the cluster seems stable.

 ps -aux | grep gluster
root     17833  0.0  0.0 516320 17048 ?        Ssl  Dec18   0:00 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
root     21378  0.0  0.0 1255148 16916 ?       Ssl  Dec18   0:09 /usr/sbin/glusterfsd -s 192.168.100.1 --volfile-id datavol.192.168.100.1.data-gluster-brick1 -p /var/run/gluster/vols/datavol/192.168.100.1-data-gluster-brick1.pid -S /var/run/gluster/1a6b22a893ad975c.socket --brick-name /data/gluster/brick1 -l /var/log/glusterfs/bricks/data-gluster-brick1.log --xlator-option *-posix.glusterd-uuid=4243f229-c7d6-43d5-96b2-d08b1493a73e --process-name brick --brick-port 49152 --xlator-option datavol-server.listen-port=49152
root     21401  0.0  0.0 555792  8376 ?        Ssl  Dec18   0:00 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/6725f36305610489.socket --xlator-option *replicate*.node-uuid=4243f229-c7d6-43d5-96b2-d08b1493a73e --process-name glustershd
root     29542  0.0  0.0  15540  1020 pts/0    S+   06:46   0:00 grep gluster


This time I installed the latest gluster packages BEFORE creating the cluster.

Comment 7 Atin Mukherjee 2018-12-19 08:47:36 UTC
I'm moving this to Posix component for now. This doesn't look like a glusterd issue.

Comment 8 Mohit Agrawal 2018-12-19 09:03:06 UTC
Hi Adrian,

I believe gluster is now stable, so I am closing this bug.
You can reopen it if you face any issue.

Thanks,
Mohit Agrawal

