Bug 1035777

Summary: iozone gets interrupted while setting a gluster volume option.
Product: Red Hat Gluster Storage
Reporter: Anil Shah <ashah>
Component: glusterfs
Assignee: Nagaprasad Sathyanarayana <nsathyan>
Status: CLOSED EOL
QA Contact: storage-qa-internal <storage-qa-internal>
Severity: medium
Docs Contact:
Priority: unspecified
Version: 2.1
CC: nsathyan, smohan, vbellur
Target Milestone: ---
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard: glusterd
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-12-03 17:11:57 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:

Description Anil Shah 2013-11-28 13:02:57 UTC
Description of problem:

IOzone gets interrupted when the cluster.metadata-change-log and cluster.data-change-log options are set to off.

Version-Release number of selected component (if applicable):
glusterfs 3.4.0.44.1u2rhs

How reproducible:
Every time

Steps to Reproduce:
1. Start iozone on the NFS mount point.
2. Run the command: gluster vol set dist-rep2 cluster.data-change-log off
   volume set: success

3. Observe that the running iozone is interrupted.

Actual results:
IOzone gets stopped after the gluster volume set command.

Expected results:
IOzone should not be stopped.

Additional info:


[root@rhsauto001 ~]# gluster v i
 
Volume Name: dist-rep2
Type: Distributed-Replicate
Volume ID: 65a2f89e-7d92-4632-b10e-bfc7119a8f9a
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.36.236:/rhs/brick1/d2r12
Brick2: 10.70.36.237:/rhs/brick1/d2r22
Brick3: 10.70.36.236:/rhs/brick1/d4r12
Brick4: 10.70.36.237:/rhs/brick1/d4r22
Brick5: 10.70.36.236:/rhs/brick1/d6r12
Brick6: 10.70.36.237:/rhs/brick1/d6r22
Brick7: 10.70.36.231:/rhs/brick1/d1r12
Brick8: 10.70.36.233:/rhs/brick1/d1r22
Brick9: 10.70.36.231:/rhs/brick1/d3r12
Brick10: 10.70.36.233:/rhs/brick1/d3r22
Brick11: 10.70.36.231:/rhs/brick1/d5r12
Brick12: 10.70.36.233:/rhs/brick1/d5r22
Options Reconfigured:
cluster.metadata-change-log: off
cluster.data-change-log: off


[root@rhsauto010 glusterfs]# time  /opt/./iozone -a
	Iozone: Performance Test of File I/O
	        Version $Revision: 3.326 $
		Compiled for 64 bit mode.
		Build: linux 

	Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
	             Al Slater, Scott Rhine, Mike Wisner, Ken Goss
	             Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
	             Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
	             Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy,
	             Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root.

	Run began: Thu Nov 28 05:44:14 2013

	Auto Mode
	Command line used: /opt/./iozone -a
	Output is in Kbytes/sec
	Time Resolution = 0.000001 seconds.
	Processor cache size set to 1024 Kbytes.
	Processor cache line size set to 32 bytes.
	File stride size set to 17 * record size.
                                                            random  random    bkwd   record   stride                                   
              KB  reclen   write rewrite    read    reread    read   write    read  rewrite     read   fwrite frewrite   fread  freread
              64       4  249985  496362    18807  1421755 1780008  889431    2789   769584    31066   633392   789967   12802  1562436
              64       8  387461 1484662    27375  1599680 2133730 1033216    4956  1163036    27294   763021   955947   12132  1828508
              64      16  418903 1562436    27623  3203069  821391  983980   10814  1013707    21753   842003  1101022   26196   831569
              64      32  557144  673098    24874  1734015 2444640 1163036   14496  1330167    19040   926260  1492919   19650  2006158
              64      64  470274  889431    21080  1780008 2662899 1226821   16297   955947    21549   855419  1049372   19219  2067979
             128       4  546819  848397    29849  1684006 1229084  962469    2631  1229084    15678   726674   876086   23231  1663139
             128       8  633247  978253    31512  1939522 1220700 1162547    5013  1451764    59646   826202   809997   25226  1175271
             128      16  592699 1038825    42035  1967960 2511022  864797    9814  1684006    54307   677178  1133103   26621  2098744
             128      32  673778 1032829    46868  2132084 1185654 1349580   16320  1209698    31926  1048974   889145   33825  2248149
             128      64  488594 1102844    44535  2286447 2905056 1373754   30717  1451764    42511   901083  1755594   36068  1254941
             128     128  684081  876086    49021  2132084 2985839 1391557   43880  1266785    40137   971174  1206978   35076  2326073
             256       4  609455  900936    57233  1534342 1405778  988889    3196  1471270    18965   725596   748353   54353  1422540
             256       8  691930 1028679    62668  2065659 2419396 1019886    5359  1319408    28397   844929  1027695   46026  1471270
             256      16  644965 1132871    66032  2187712 2665657 1347557   12682  2148318   124278   917880  1132871   55689  1704877
             256      32  733527 1011241    72560  2325094 2719671 1168627   17570  1437779    84071   762164   984356   39012  4422226
             256      64  669913 1190657    78210  2419396 2081678 1453348   29972  1778290    63225   883152  2441400   74139  2509882
             256     128  585845 1195962    70758  1778290 3197509 1016025   50814  1588832    57918  1108314  1829808   69905  2413957
             256     256  599585 1506359    68562  4350555 3087188 1361224   55942  2533571    72297   668246  1300235   78525  3123106
             512       4  577310 1718254    47284  1565443 1897396  867738    2904  1257451    21268   622855   622855   81681  1541840
             512       8fsync: Remote I/O error

iozone: interrupted

exiting iozone


real	0m4.705s
user	0m0.059s
sys	0m0.340s

Comment 3 vsomyaju 2013-12-20 10:42:46 UTC
Observation:
------------
If we set the afr data-change-log option to off, iozone at the mount point gets stuck. While debugging I found that the afr_writev_wind function checks on how many nodes the pre-op succeeded, and if the pre-op success count is zero, the write returns back with local->op_ret set to -1.

Comment 5 Vivek Agarwal 2015-12-03 17:11:57 UTC
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release you requested us to review is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/

If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.