Bug 1283990
| Summary: | DHT: ls is taking 11 mins to display 100 files | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | RajeshReddy <rmekala> |
| Component: | distribute | Assignee: | Raghavendra G <rgowdapp> |
| Status: | CLOSED WORKSFORME | QA Contact: | Prasad Desala <tdesala> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | medium | | |
| Version: | rhgs-3.1 | CC: | mzywusko, nbalacha, rgowdapp, rhinduja, rhs-bugs, sanandpa, smohan, tdesala |
| Target Milestone: | --- | Keywords: | Triaged, ZStream |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | dht-rca-unknown, dht-readdirp-perf, dht-retest | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2018-03-01 07:26:25 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1356454 | Bug Blocks: | |
Description (RajeshReddy, 2015-11-20 13:27:14 UTC)
sosreports are available at /home/repo/sosreports/bug.1283990 on rhsqe-repo.lab.eng.blr.redhat.com.

Tested with build glusterfs-3.7.5-7.el6rhs.x86_64: created a 2x2 volume and exported the Samba share to a Windows client. Executed the Perl script mentioned in the bug description, which creates nested folders and files. Executed an add-brick operation while I/O was in progress, then ran a rebalance. The mount point is still accessible and there are no I/O errors on the Windows client. Marking the BZ verified.

This was moved to VERIFIED by mistake. The bug should be RCAed to determine the cause of the slow ls. It is likely due to readdirp performance combined with the deep directory structure.

[root@unused glusterfs]# gluster volume info

Volume Name: dist-rep
Type: Distributed-Replicate
Volume ID: 75866874-eacd-425b-bd40-c348f6049a78
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: booradley:/home/export/dist-rep/5
Brick2: booradley:/home/export/dist-rep/6
Brick3: booradley:/home/export/dist-rep/7
Brick4: booradley:/home/export/dist-rep/8
Brick5: booradley:/home/export/dist-rep/9
Brick6: booradley:/home/export/dist-rep/10
Options Reconfigured:
transport.address-family: inet
nfs.disable: on

[root@unused ~]# mount -t glusterfs booradley:/dist-rep /mnt/glusterfs
[root@unused glusterfs]# mount | grep glusterfs
booradley:/dist-rep on /mnt/glusterfs type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
[root@unused ~]# cd /mnt/glusterfs
[root@unused glusterfs]# time ls -lR . > /dev/null

real    0m5.107s
user    0m0.021s
sys     0m0.103s

[root@unused glusterfs]# ls
1
[root@unused glusterfs]# find . -type f | wc -l
100
[root@unused glusterfs]# find . -iname 200
./1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90/91/92/93/94/95/96/97/98/99/100/101/102/103/104/105/106/107/108/109/110/111/112/113/114/115/116/117/118/119/120/121/122/123/124/125/126/127/128/129/130/131/132/133/134/135/136/137/138/139/140/141/142/143/144/145/146/147/148/149/150/151/152/153/154/155/156/157/158/159/160/161/162/163/164/165/166/167/168/169/170/171/172/173/174/175/176/177/178/179/180/181/182/183/184/185/186/187/188/189/190/191/192/193/194/195/196/197/198/199/200

ls -lR completed in 5 seconds, so the issue is not reproducible on upstream master (commit: 96b33b4b278391ca8a7755cf274931d4f1808cb5).

The original test was slightly different:

[root@dht-rhs-23 subdir200]# time ls

It looks like that was an ls run from inside the deepest directory.
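The Perl script itself is not attached to this BZ. For reference, here is a minimal shell sketch that produces an equivalent layout, assuming a FUSE mount at /mnt/glusterfs. The 200-level directory chain and the 100-file count are taken from the find output above; where the original script placed the files is not recorded, so this sketch puts them in the deepest directory.

```
#!/bin/bash
# Hypothetical stand-in for the Perl script referenced in the bug
# description; the real script is not attached here.
MOUNT=/mnt/glusterfs              # assumed FUSE mount of dist-rep

# Build the 200-level chain "1/2/3/.../200" with a single mkdir -p call.
DEEP=$(seq -s/ 1 200)
mkdir -p "$MOUNT/$DEEP"

# Create 100 files. Placing them in the deepest directory is an
# assumption, but it matches "find . -type f | wc -l" returning 100.
for i in $(seq 1 100); do
    touch "$MOUNT/$DEEP/file_$i"
done

# The measurement from the transcript above.
time ls -lR "$MOUNT" > /dev/null
```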
The volume was then stopped, restarted, and remounted, and ls was timed from inside the deepest directory:

[root@unused ~]# umount /mnt/glusterfs
[root@unused ~]# gluster volume stop dist-rep
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: dist-rep: success
[root@unused ~]# gluster volume start dist-rep
volume start: dist-rep: success
[root@unused ~]# mount -t glusterfs booradley:/dist-rep /mnt/glusterfs
[root@unused ~]# cd /mnt/glusterfs
[root@unused glusterfs]# cd ./1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90/91/92/93/94/95/96/97/98/99/100/101/102/103/104/105/106/107/108/109/110/111/112/113/114/115/116/117/118/119/120/121/122/123/124/125/126/127/128/129/130/131/132/133/134/135/136/137/138/139/140/141/142/143/144/145/146/147/148/149/150/151/152/153/154/155/156/157/158/159/160/161/162/163/164/165/166/167/168/169/170/171/172/173/174/175/176/177/178/179/180/181/182/183/184/185/186/187/188/189/190/191/192/193/194/195/196/197/198/199/200
[root@unused 200]# time ls -l > /dev/null

real    0m0.735s
user    0m0.004s
sys     0m0.008s

Prasad, can you verify whether this bug is reproducible on rhgs-3.3.0?

regards,
Raghavendra
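Given the dht-readdirp-perf tag on the whiteboard, one way to structure the rhgs-3.3.0 retest is to time the listing with stock settings, then again with the standard readdir-related tunables toggled, restarting and remounting between runs as above so cached dentries do not skew the numbers. These commands are not part of this BZ's transcript; they are generic GlusterFS volume options offered only as a sketch:

```
# Generic readdir-related tunables (not steps recorded in this BZ):
umount /mnt/glusterfs
gluster volume set dist-rep cluster.readdir-optimize on
gluster volume set dist-rep performance.readdir-ahead on

# Restart and remount so each timed run starts cold, as in the
# transcript above; --mode=script skips the interactive y/n prompt.
gluster --mode=script volume stop dist-rep
gluster volume start dist-rep
mount -t glusterfs booradley:/dist-rep /mnt/glusterfs
time ls -lR /mnt/glusterfs > /dev/null
```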