Description of problem:
=======================
DHT: ls is taking 11 minutes to display 100 files.

Version-Release number of selected component (if applicable):
=============================================================
glusterfs-server-3.7.5-6

How reproducible:

Steps to Reproduce:
===================
1. Create a 2x2 volume and mount it on a client using FUSE.
2. Create 200 nested directories, then create 100 files in the deepest directory (.../dir199/dir200).
3. Add two new bricks to the volume and then run rebalance start force.

Although rebalance took only 177 seconds, ls took 11 minutes to display all files under .../subdir199/subdir200.

Actual results:

Expected results:
=================
ls should not take this much time.

Additional info:
================
[root@rhs-client19 data]# gluster vol info afr2x2_temp

Volume Name: afr2x2_temp
Type: Distributed-Replicate
Volume ID: b032d61d-c115-4af7-9f9f-5f1554180b11
Status: Started
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: rhs-client18.lab.eng.blr.redhat.com:/rhs/brick5/afr2x2_temp
Brick2: rhs-client19.lab.eng.blr.redhat.com:/rhs/brick5/afr2x2_temp
Brick3: rhs-client19.lab.eng.blr.redhat.com:/rhs/brick6/afr2x2_temp
Brick4: rhs-client18.lab.eng.blr.redhat.com:/rhs/brick6/afr2x2_temp
Brick5: rhs-client18.lab.eng.blr.redhat.com:/rhs/brick7/afr2x2_temp_2
Brick6: rhs-client19.lab.eng.blr.redhat.com:/rhs/brick7/afr2x2_temp_2
Options Reconfigured:
performance.readdir-ahead: on

[root@rhs-client18 subdir200]# gluster vol rebalance afr2x2_temp status
Node                                 Rebalanced-files  size   scanned  failures  skipped  status     run time in secs
---------                            ----------------  -----  -------  --------  -------  ---------  ----------------
localhost                            70                3.9GB  743      0         0        completed  177.00
rhs-client19.lab.eng.blr.redhat.com  607               4.2KB  934      0         0        completed  72.00
volume rebalance: afr2x2_temp: success

[root@dht-rhs-23 subdir200]# time ls
createdir.sh  create.sh  test  file1  file2  ...  file1000
[full listing of ~1000 entries trimmed for brevity]

real    11m57.684s
user    0m0.078s
sys     0m0.147s
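The directory layout from step 2 can be reproduced with a short shell loop. This is a sketch only: the original createdir.sh/create.sh scripts are not attached to this BZ, so the directory names and counts below follow the description, not the actual scripts. It is meant to be run from the FUSE mount point of the volume.

```shell
#!/bin/sh
# Sketch of the reproduction layout described above (assumed layout;
# the original createdir.sh/create.sh are not attached to this bug).
# Run from the FUSE mount point of the volume, e.g. /mnt/afr2x2_temp.

base=${1:-.}

# Step 2a: build the path dir1/dir2/.../dir200 and create it in one go.
dir="$base"
i=1
while [ "$i" -le 200 ]; do
    dir="$dir/dir$i"
    i=$((i + 1))
done
mkdir -p "$dir"

# Step 2b: create 100 files in the deepest directory.
i=1
while [ "$i" -le 100 ]; do
    touch "$dir/file$i"
    i=$((i + 1))
done

echo "created 100 files under $dir"
```

After running this, `time ls` inside the deepest directory should exercise the same code path as the report above.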
sosreports are available @ /home/repo/sosreports/bug.1283990 on rhsqe-repo.lab.eng.blr.redhat.com
Tested with build glusterfs-3.7.5-7.el6rhs.x86_64: created a 2x2 volume and exported the Samba share to a Windows client. Executed the Perl script mentioned in the bug description, which creates nested folders and files. Executed an add-brick operation while I/O was in progress, then executed rebalance. The mount point is still accessible and there are no I/O errors on the Windows client. Marking the BZ verified.
This was moved to VERIFIED by mistake.
This should be RCAed to determine the cause of the slow ls. It is likely due to readdirp performance combined with the deep directory structure.
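One way to gather data for that RCA is per-brick FOP latency profiling. The sketch below uses the standard gluster CLI profiling commands; the volume name afr2x2_temp is taken from the report above, and the exact grep pattern is only a convenience, not part of any official procedure.

```shell
# Sketch: collect per-brick FOP latency stats while reproducing the
# slow ls, to check whether READDIRP latency or call count dominates.
# Volume name assumed to be afr2x2_temp as in the report above.
gluster volume profile afr2x2_temp start

# On the client, reproduce the slow listing in the deepest directory:
#   cd /mnt/afr2x2_temp/.../dir200 && time ls

# Inspect cumulative stats; look at READDIRP avg/max latency per brick.
gluster volume profile afr2x2_temp info | grep -B1 -A2 -i readdirp

gluster volume profile afr2x2_temp stop
```

If READDIRP dominates the per-brick latency tables, that would support the readdirp hypothesis; otherwise the deep path resolution (lookups per component) is the more likely suspect.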
[root@unused glusterfs]# gluster volume info

Volume Name: dist-rep
Type: Distributed-Replicate
Volume ID: 75866874-eacd-425b-bd40-c348f6049a78
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: booradley:/home/export/dist-rep/5
Brick2: booradley:/home/export/dist-rep/6
Brick3: booradley:/home/export/dist-rep/7
Brick4: booradley:/home/export/dist-rep/8
Brick5: booradley:/home/export/dist-rep/9
Brick6: booradley:/home/export/dist-rep/10
Options Reconfigured:
transport.address-family: inet
nfs.disable: on

[root@unused ~]# mount -t glusterfs booradley:/dist-rep /mnt/glusterfs
[root@unused glusterfs]# mount | grep glusterfs
booradley:/dist-rep on /mnt/glusterfs type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)

[root@unused ~]# cd /mnt/glusterfs
[root@unused glusterfs]# time ls -lR . > /dev/null

real    0m5.107s
user    0m0.021s
sys     0m0.103s

[root@unused glusterfs]# ls
1
[root@unused glusterfs]# find . -type f | wc -l
100
[root@unused glusterfs]# find . -iname 200
./1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90/91/92/93/94/95/96/97/98/99/100/101/102/103/104/105/106/107/108/109/110/111/112/113/114/115/116/117/118/119/120/121/122/123/124/125/126/127/128/129/130/131/132/133/134/135/136/137/138/139/140/141/142/143/144/145/146/147/148/149/150/151/152/153/154/155/156/157/158/159/160/161/162/163/164/165/166/167/168/169/170/171/172/173/174/175/176/177/178/179/180/181/182/183/184/185/186/187/188/189/190/191/192/193/194/195/196/197/198/199/200

ls -lR completed in 5 seconds, so the issue is not reproducible on upstream master (commit: 96b33b4b278391ca8a7755cf274931d4f1808cb5).
The test was slightly different: [root@dht-rhs-23 subdir200]# time ls — it looks like ls was run from inside the deepest directory.
[root@unused ~]# umount /mnt/glusterfs
[root@unused ~]# gluster volume stop dist-rep
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: dist-rep: success
[root@unused ~]# gluster volume start dist-rep
volume start: dist-rep: success
[root@unused ~]# mount -t glusterfs booradley:/dist-rep /mnt/glusterfs
[root@unused ~]# cd /mnt/glusterfs
[root@unused glusterfs]# cd ./1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90/91/92/93/94/95/96/97/98/99/100/101/102/103/104/105/106/107/108/109/110/111/112/113/114/115/116/117/118/119/120/121/122/123/124/125/126/127/128/129/130/131/132/133/134/135/136/137/138/139/140/141/142/143/144/145/146/147/148/149/150/151/152/153/154/155/156/157/158/159/160/161/162/163/164/165/166/167/168/169/170/171/172/173/174/175/176/177/178/179/180/181/182/183/184/185/186/187/188/189/190/191/192/193/194/195/196/197/198/199/200
[root@unused 200]# time ls -l > /dev/null

real    0m0.735s
user    0m0.004s
sys     0m0.008s

[root@unused 200]#
Prasad,

Can you verify whether this bug is reproducible on rhgs-3.3.0?

regards,
Raghavendra