Bug 1610787 - gluster-blockd is not coming up after gluster pod reboot because of dependency tcmu-runner
Summary: gluster-blockd is not coming up after gluster pod reboot because of dependency tcmu-runner
Keywords:
Status: CLOSED DUPLICATE of bug 1560418
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: gluster-block
Version: cns-3.10
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Assignee: Prasanna Kumar Kalever
QA Contact: Nitin Goyal
URL:
Whiteboard:
Depends On:
Blocks: 1568862 1598322
 
Reported: 2018-08-01 12:47 UTC by Nitin Goyal
Modified: 2019-04-23 04:38 UTC
9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-08-02 06:46:34 UTC
Embargoed:


Attachments

Description Nitin Goyal 2018-08-01 12:47:16 UTC
Description of problem:
The gluster-blockd service does not come up after the gluster pod is restarted, because its dependency (tcmu-runner) fails to start. tcmu-runner is stuck in the VFS layer. There is no access() system call issued by the tcmu-runner code itself, so it must be invoked through some user-space library; the process simply gets stuck in the access() system call. From the logs, this currently looks like a kernel bug.
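
For reference, a minimal way to spot such stuck tasks and capture their kernel stacks from inside the gluster pod is sketched below (replace <pid> with the PID of the stuck process):

# list tasks in uninterruptible sleep (D state)
ps axo pid,stat,comm | awk '$2 ~ /^D/'
# dump the kernel stack of one stuck task
cat /proc/<pid>/stack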

Version-Release number of selected component (if applicable):
sh-4.2# uname -r
3.10.0-862.11.2.el7.x86_64
sh-4.2# 
sh-4.2# rpm -qa | grep gluster
glusterfs-client-xlators-3.8.4-54.15.el7rhgs.x86_64
glusterfs-fuse-3.8.4-54.15.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-54.15.el7rhgs.x86_64
glusterfs-libs-3.8.4-54.15.el7rhgs.x86_64
glusterfs-3.8.4-54.15.el7rhgs.x86_64
glusterfs-api-3.8.4-54.15.el7rhgs.x86_64
glusterfs-cli-3.8.4-54.15.el7rhgs.x86_64
glusterfs-server-3.8.4-54.15.el7rhgs.x86_64
gluster-block-0.2.1-23.el7rhgs.x86_64
sh-4.2# 
sh-4.2# rpm -qa | grep tcmu
libtcmu-1.2.0-23.el7rhgs.x86_64
tcmu-runner-1.2.0-23.el7rhgs.x86_64

How reproducible:


Steps to Reproduce:
1. Create 300 block hosting volumes.
2. Create 100 block devices.
3. Delete one gluster pod (a rough command sketch follows below).
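
A rough sketch of steps 2 and 3 (names, sizes and the HA host list are illustrative; block hosting volumes were created through heketi in this setup, and the exact gluster-block options depend on the installed version):

# create one block device on an existing block hosting volume (repeat for ~100 devices)
gluster-block create blockhostvol1/block001 ha 3 10.70.46.217,10.70.47.165,10.70.47.138 1GiB
# delete one gluster pod so that a new one gets scheduled
oc delete pod glusterfs-storage-f469d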

Actual results:
gluster-blockd and tcmu-runner do not come up.


Expected results:
gluster-blockd and tcmu-runner should come up.

Additional info:

Comment 2 Nitin Goyal 2018-08-01 12:49:18 UTC
From the analysis it can be seen that tcmu-runner is stuck in the D (uninterruptible sleep) state:

sh-4.2# ps -aux | grep Ds
root      1516  0.0  0.0 584520 19656 ?        Ds   Jul31   0:00 /usr/bin/tcmu-runner --tcmu-log-dir /var/log/glusterfs/gluster-block
root     27770  0.0  0.0   9088   660 pts/4    S+   05:56   0:00 grep Ds
sh-4.2# 
sh-4.2# systemctl status tcmu-runner -l
● tcmu-runner.service - LIO Userspace-passthrough daemon
   Loaded: loaded (/usr/lib/systemd/system/tcmu-runner.service; static; vendor preset: disabled)
   Active: failed (Result: timeout) since Tue 2018-07-31 13:14:20 UTC; 16h ago
  Process: 1501 ExecStartPre=/usr/libexec/gluster-block/wait-for-bricks.sh 120 (code=exited, status=1/FAILURE)
 Main PID: 1516
   CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ccc22aa_94c1_11e8_8bae_005056a5f2d4.slice/docker-028937c3bdc69b3af7bd5a67bfa26254edd155a755d6aa386eff7aad98874365.scope/system.slice/tcmu-runner.service
           └─1516 /usr/bin/tcmu-runner --tcmu-log-dir /var/log/glusterfs/gluster-block

Jul 31 13:01:40 dhcp46-217.lab.eng.blr.redhat.com tcmu-runner[1516]: 2018-07-31 13:01:40.044 1516 [ERROR] add_device:516 : could not open /dev/uio1
Jul 31 13:01:40 dhcp46-217.lab.eng.blr.redhat.com tcmu-runner[1516]: add_device:516 : could not open /dev/uio1
Jul 31 13:01:40 dhcp46-217.lab.eng.blr.redhat.com tcmu-runner[1516]: 2018-07-31 13:01:40.044 1516 [ERROR] add_device:516 : could not open /dev/uio10
Jul 31 13:01:40 dhcp46-217.lab.eng.blr.redhat.com tcmu-runner[1516]: add_device:516 : could not open /dev/uio10
Jul 31 13:11:20 dhcp46-217.lab.eng.blr.redhat.com systemd[1]: tcmu-runner.service start operation timed out. Terminating.
Jul 31 13:12:50 dhcp46-217.lab.eng.blr.redhat.com systemd[1]: tcmu-runner.service stop-final-sigterm timed out. Killing.
Jul 31 13:14:20 dhcp46-217.lab.eng.blr.redhat.com systemd[1]: tcmu-runner.service still around after final SIGKILL. Entering failed mode.
Jul 31 13:14:20 dhcp46-217.lab.eng.blr.redhat.com systemd[1]: Failed to start LIO Userspace-passthrough daemon.
Jul 31 13:14:20 dhcp46-217.lab.eng.blr.redhat.com systemd[1]: Unit tcmu-runner.service entered failed state.
Jul 31 13:14:20 dhcp46-217.lab.eng.blr.redhat.com systemd[1]: tcmu-runner.service failed.
sh-4.2# 
sh-4.2# systemctl status gluster-blockd -l
● gluster-blockd.service - Gluster block storage utility
   Loaded: loaded (/usr/lib/systemd/system/gluster-blockd.service; enabled; vendor preset: disabled)
   Active: inactive (dead)

Jul 31 13:14:20 dhcp46-217.lab.eng.blr.redhat.com systemd[1]: Dependency failed for Gluster block storage utility.
Jul 31 13:14:20 dhcp46-217.lab.eng.blr.redhat.com systemd[1]: Job gluster-blockd.service/start failed with result 'dependency'.
sh-4.2# 
sh-4.2# systemctl status gluster-block-target -l
● gluster-block-target.service - Restore LIO kernel target configuration
   Loaded: loaded (/usr/lib/systemd/system/gluster-block-target.service; disabled; vendor preset: disabled)
   Active: inactive (dead)

Jul 31 13:14:20 dhcp46-217.lab.eng.blr.redhat.com systemd[1]: Dependency failed for Restore LIO kernel target configuration.
Jul 31 13:14:20 dhcp46-217.lab.eng.blr.redhat.com systemd[1]: Job gluster-block-target.service/start failed with result 'dependency'.
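
For completeness, the systemd dependency chain that makes gluster-blockd and gluster-block-target fail when tcmu-runner cannot start can be inspected from inside the pod with something like the following (a sketch; the exact Requires=/After= wiring depends on the shipped unit files):

systemctl list-dependencies gluster-blockd
systemctl show -p Requires -p After gluster-blockd gluster-block-target tcmu-runner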

Comment 4 Xiubo Li 2018-08-02 05:33:59 UTC
The call trace of the D-state process:
sh-4.2# cat /proc/1516/stack
[<ffffffffbcd1079e>] lookup_slow+0x33/0xa7
[<ffffffffbc82e6e8>] path_lookupat+0x838/0x8b0
[<ffffffffbc82e78b>] filename_lookup+0x2b/0xc0
[<ffffffffbc831bc7>] user_path_at_empty+0x67/0xc0
[<ffffffffbc831c31>] user_path_at+0x11/0x20
[<ffffffffbc81d4b2>] SyS_faccessat+0xb2/0x230
[<ffffffffbc81d648>] SyS_access+0x18/0x20
[<ffffffffbcd2579b>] system_call_fastpath+0x22/0x27
[<ffffffffffffffff>] 0xffffffffffffffff
sh-4.2# 

It is stuck in the VFS layer. There is no access() system call made directly from the tcmu-runner code, so it must be invoked through some user-space library; the process simply gets stuck in the access() system call. From the logs, this currently looks like a kernel bug.
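
To see the raw arguments of the blocked access() call and to get a picture of all blocked tasks at once, something along these lines can be used (a sketch; 1516 is the stuck PID from above, and SysRq must be enabled for the second part):

# raw syscall number and arguments of the blocked task (access is syscall 21 on x86_64)
cat /proc/1516/syscall
# ask the kernel to log all uninterruptible (blocked) tasks together with their stacks
echo w > /proc/sysrq-trigger
dmesg | tail -n 100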

Thanks,

Comment 5 Xiubo Li 2018-08-02 06:46:34 UTC
It is a little strange that some processes are missing when looking from inside the pod and can only be seen on the node:

[root@dhcp47-153 ~]# oc get pods -o wide
NAME                                          READY     STATUS    RESTARTS   AGE       IP             NODE
glusterblock-storage-provisioner-dc-1-g5hjn   1/1       Running   3          5d        10.131.0.6     dhcp47-138.lab.eng.blr.redhat.com
glusterfs-storage-djgdp                       1/1       Running   1          2d        10.70.47.165   dhcp47-165.lab.eng.blr.redhat.com
glusterfs-storage-f469d                       1/1       Running   0          1d        10.70.46.217   dhcp46-217.lab.eng.blr.redhat.com
glusterfs-storage-s7nh6                       1/1       Running   3          5d        10.70.47.138   dhcp47-138.lab.eng.blr.redhat.com
heketi-storage-1-gtcf5                        1/1       Running   8          2d        10.128.0.13    dhcp47-153.lab.eng.blr.redhat.com
[root@dhcp47-153 ~]#
[root@dhcp47-153 ~]#
[root@dhcp47-153 ~]# ssh root@10.70.46.217
[root@dhcp46-217 ~]# ps -aux |grep -e targetctl -e tcmu -e iscsi
root      1941  0.0  0.0  31508   512 ?        Ss   Jul31   0:05 /usr/sbin/iscsid
root      1943  0.0  0.0  34184  5572 ?        S<Ls Jul31   0:00 /usr/sbin/iscsid
root      1946  0.0  0.0      0     0 ?        S<   Jul31   0:00 [iscsi_eh]
root      4691  0.0  0.0 625572 21856 ?        Ds   Jul31   0:00 /usr/bin/tcmu-runner --tcmu-log-dir /var/log/glusterfs/gluster-block
root     12161  0.0  0.0 584520 19656 ?        Ds   Jul31   0:00 /usr/bin/tcmu-runner --tcmu-log-dir /var/log/glusterfs/gluster-block
root     16373  0.0  0.0 122668 11080 ?        Ds   Jul31   0:00 /usr/bin/python /usr/bin/targetctl clear
[root@dhcp46-217 ~]# 
[root@dhcp46-217 ~]# 
[root@dhcp46-217 ~]# cat /proc/16373/stack
[<ffffffffc021f25e>] tcmu_netlink_event+0x2ae/0x3e0 [target_core_user]
[<ffffffffc021f48e>] tcmu_destroy_device+0x4e/0x190 [target_core_user]
[<ffffffffc01d1004>] target_free_device+0xb4/0x110 [target_core_mod]
[<ffffffffc01cc135>] target_core_dev_release+0x15/0x20 [target_core_mod]
[<ffffffffbc8ad72a>] config_item_release+0x6a/0xf0
[<ffffffffbc8ad7dc>] config_item_put+0x2c/0x30
[<ffffffffbc8aba0b>] configfs_rmdir+0x1eb/0x310
[<ffffffffbc82da4c>] vfs_rmdir+0xdc/0x150
[<ffffffffbc830fd1>] do_rmdir+0x1f1/0x220
[<ffffffffbc832206>] SyS_rmdir+0x16/0x20
[<ffffffffbcd2579b>] system_call_fastpath+0x22/0x27
[<ffffffffffffffff>] 0xffffffffffffffff
[root@dhcp46-217 ~]# 

When the storage pod is restarted, for some reason the netlink event for the 'targetctl clear' process is lost, and the process gets stuck here in D state forever. This is a known BZ that has been fixed for the next kernel release; see https://bugzilla.redhat.com/show_bug.cgi?id=1608677 and https://bugzilla.redhat.com/show_bug.cgi?id=1560418.

Once the 'targetctl clear' process is stuck in D state here, it is still holding the inode mutex it acquired via mutex_lock(&dentry->d_inode->i_mutex).

Any other process that then tries to acquire this lock goes to sleep and is put into D state as well; see the mutex_lock() kernel code path:

mutex_lock() --> __mutex_fastpath_lock() --> __mutex_lock_slowpath():

static __used noinline void __sched
__mutex_lock_slowpath(atomic_t *lock_count)
{
        struct mutex *lock = container_of(lock_count, struct mutex, count);

        __mutex_lock_common(lock, TASK_UNINTERRUPTIBLE, 0,  /* <-- the D state is set here */
                            NULL, _RET_IP_, NULL);
}

So the newly restarted tcmu-runner, as well as any other new targetctl/targetcli command, gets stuck in D state while trying to take this mutex, and stays there forever.
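
A quick way to confirm the pile-up is to dump the kernel stack of every D-state task and check that they are all sleeping in paths that take the same inode mutex (lookup_slow, configfs_rmdir, and so on); a minimal sketch:

for pid in $(ps axo pid,stat | awk '$2 ~ /^D/ {print $1}'); do
    echo "== $pid =="
    cat /proc/$pid/stack
done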

This is a duplicate of BZ 1560418.

Thanks.

*** This bug has been marked as a duplicate of bug 1560418 ***

