Bug 1309576 - AWS: Linux untar is hanging
Status: CLOSED WORKSFORME
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: core
Version: 3.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assigned To: Bug Updates Notification Mailing List
QA Contact: Anoop
Keywords: ZStream
Depends On:
Blocks:
Reported: 2016-02-18 02:47 EST by RajeshReddy
Modified: 2018-02-06 23:26 EST (History)
CC List: 3 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-02-06 23:26:46 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description RajeshReddy 2016-02-18 02:47:10 EST
Description of problem:
================
AWS: Linux untar is hanging 

Version-Release number of selected component (if applicable):
=============
glusterfs-server-3.7.5-19.el7rhgs.x86_64


How reproducible:


Steps to Reproduce:
===============
1. Create an AWS instance with 24 magnetic EBS volumes of 800 GB each and one 100 GB SSD volume
2. Create a RAID 0 group out of the 24 magnetic EBS volumes and then create two bricks on it
3. Create a hot brick out of the 100 GB SSD
4. Create a 1x2 replicated volume and then attach the hot tier
5. Mount the volume on a client using FUSE and run a Linux kernel untar; the I/O hangs due to an XFS lock issue
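For reference, the environment above can be provisioned roughly as follows. This is a sketch, not the exact commands used: it assumes the magnetic EBS volumes appear as /dev/xvd[b-y] and the SSD as /dev/xvdz (matching the fdisk output below), and that the SSD was also placed in a single-member RAID 0 as md1 (which the 512 KiB chunk geometry of md1 in the logs suggests). Mount points, LV names, and sizes follow the vgs/lvs output; the hostname placeholder is illustrative.

```shell
# Cold tier: RAID 0 across the 24 magnetic EBS volumes, 512 KiB chunk
mdadm --create /dev/md0 --level=0 --raid-devices=24 --chunk=512 /dev/xvd[b-y]
# Hot tier: the single SSD, apparently also wrapped in RAID 0 (requires --force for 1 member)
mdadm --create /dev/md1 --level=0 --raid-devices=1 --force --chunk=512 /dev/xvdz

# Thin-provisioned LVM on the cold RAID set, then XFS bricks
pvcreate /dev/md0 && vgcreate rhs_vg /dev/md0
lvcreate -L 1.71t -T rhs_vg/rhs_pool
lvcreate -V 850g -T rhs_vg/rhs_pool -n rhs_lv1
lvcreate -V 850g -T rhs_vg/rhs_pool -n rhs_lv2
mkfs.xfs -i size=512 /dev/rhs_vg/rhs_lv1 && mount /dev/rhs_vg/rhs_lv1 /rhs/brick1
mkfs.xfs -i size=512 /dev/rhs_vg/rhs_lv2 && mount /dev/rhs_vg/rhs_lv2 /rhs/brick2

# Same layout for the hot brick on the SSD
pvcreate /dev/md1 && vgcreate rhs_vghot /dev/md1
lvcreate -L 74g -T rhs_vghot/rhs_poolhot
lvcreate -V 74g -T rhs_vghot/rhs_poolhot -n rhs_lvhot
mkfs.xfs -i size=512 /dev/rhs_vghot/rhs_lvhot && mount /dev/rhs_vghot/rhs_lvhot /rhs/hot

# 1x2 replicated volume, hot tier attached, FUSE mount on the client
HOST=ip-172-31-63-234.ec2.internal
gluster volume create tier_perform replica 2 \
    $HOST:/rhs/brick1/tier_perform $HOST:/rhs/brick2/tier_perform
gluster volume start tier_perform
gluster volume attach-tier tier_perform $HOST:/rhs/hot/tier_perform
mount -t glusterfs $HOST:/tier_perform /mnt/tier_perform
```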

Actual results:


Expected results:


Additional info:
============
[ec2-user@ip-172-31-63-234 ~]$ sudo gluster vol info 
 
Volume Name: tier_perform
Type: Tier
Volume ID: 28166f21-e0d0-4ca4-a502-144a836d1deb
Status: Started
Number of Bricks: 3
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distribute
Number of Bricks: 1
Brick1: ip-172-31-63-234.ec2.internal:/rhs/hot/tier_perform
Cold Tier:
Cold Tier Type : Replicate
Number of Bricks: 1 x 2 = 2
Brick2: ip-172-31-63-234.ec2.internal:/rhs/brick1/tier_perform
Brick3: ip-172-31-63-234.ec2.internal:/rhs/brick2/tier_perform
Options Reconfigured:
performance.readdir-ahead: on
features.ctr-enabled: on
cluster.tier-mode: cache
[ec2-user@ip-172-31-63-234 ~]$ 

[ec2-user@ip-172-31-63-234 ~]$ sudo fdisk -l
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/xvda: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt


#         Start          End    Size  Type            Name
 1         2048         4095      1M  BIOS boot parti 
 2         4096     41943006     20G  Microsoft basic 

Disk /dev/xvdi: 751.6 GB, 751619276800 bytes, 1468006400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/xvdh: 751.6 GB, 751619276800 bytes, 1468006400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/xvdg: 751.6 GB, 751619276800 bytes, 1468006400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/xvdf: 751.6 GB, 751619276800 bytes, 1468006400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/xvde: 751.6 GB, 751619276800 bytes, 1468006400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/xvdd: 751.6 GB, 751619276800 bytes, 1468006400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/xvdc: 751.6 GB, 751619276800 bytes, 1468006400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/xvdb: 751.6 GB, 751619276800 bytes, 1468006400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/xvdz: 107.4 GB, 107374182400 bytes, 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/xvdy: 751.6 GB, 751619276800 bytes, 1468006400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/xvdx: 751.6 GB, 751619276800 bytes, 1468006400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/xvdw: 751.6 GB, 751619276800 bytes, 1468006400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/xvdv: 751.6 GB, 751619276800 bytes, 1468006400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/xvdu: 751.6 GB, 751619276800 bytes, 1468006400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/xvdt: 751.6 GB, 751619276800 bytes, 1468006400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/xvds: 751.6 GB, 751619276800 bytes, 1468006400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/xvdr: 751.6 GB, 751619276800 bytes, 1468006400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/xvdq: 751.6 GB, 751619276800 bytes, 1468006400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/xvdp: 751.6 GB, 751619276800 bytes, 1468006400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/xvdo: 751.6 GB, 751619276800 bytes, 1468006400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/xvdn: 751.6 GB, 751619276800 bytes, 1468006400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/xvdm: 751.6 GB, 751619276800 bytes, 1468006400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/xvdl: 751.6 GB, 751619276800 bytes, 1468006400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/xvdk: 751.6 GB, 751619276800 bytes, 1468006400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/xvdj: 751.6 GB, 751619276800 bytes, 1468006400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/md0: 18035.6 GB, 18035641417728 bytes, 35225862144 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 524288 bytes / 12582912 bytes


Disk /dev/md1: 107.3 GB, 107307073536 bytes, 209584128 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 524288 bytes / 524288 bytes


Disk /dev/mapper/rhs_vg-rhs_pool_tmeta: 17.0 GB, 16978542592 bytes, 33161216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 524288 bytes / 12582912 bytes
Alignment offset: 262144 bytes


Disk /dev/mapper/rhs_vg-rhs_pool_tdata: 1879.0 GB, 1879048192000 bytes, 3670016000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 524288 bytes / 12582912 bytes


Disk /dev/mapper/rhs_vghot-rhs_poolhot_tmeta: 11.0 GB, 11035607040 bytes, 21553920 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 524288 bytes / 524288 bytes
Alignment offset: 262144 bytes


Disk /dev/mapper/rhs_vghot-rhs_poolhot_tdata: 79.5 GB, 79456894976 bytes, 155189248 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 524288 bytes / 524288 bytes
Alignment offset: 393216 bytes


Disk /dev/mapper/rhs_vghot-rhs_poolhot-tpool: 79.5 GB, 79456894976 bytes, 155189248 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 1310720 bytes


Disk /dev/mapper/rhs_vghot-rhs_poolhot: 79.5 GB, 79456894976 bytes, 155189248 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 1310720 bytes


Disk /dev/mapper/rhs_vghot-rhs_lvhot: 79.5 GB, 79456894976 bytes, 155189248 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 1310720 bytes


Disk /dev/mapper/rhs_vg-rhs_pool-tpool: 1879.0 GB, 1879048192000 bytes, 3670016000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 1310720 bytes


Disk /dev/mapper/rhs_vg-rhs_pool: 1879.0 GB, 1879048192000 bytes, 3670016000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 1310720 bytes


Disk /dev/mapper/rhs_vg-rhs_lv1: 912.7 GB, 912680550400 bytes, 1782579200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 1310720 bytes


Disk /dev/mapper/rhs_vg-rhs_lv2: 912.7 GB, 912680550400 bytes, 1782579200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 1310720 bytes


Disk /dev/mapper/rhs_vg-69c186c44a454453a246b44f472a97d4_1: 912.7 GB, 912680550400 bytes, 1782579200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 1310720 bytes

[ec2-user@ip-172-31-63-234 ~]$ 

[ec2-user@ip-172-31-63-234 ~]$ sudo vgs
  VG        #PV #LV #SN Attr   VSize  VFree 
  rhs_vg      1   4   0 wz--n- 16.40t 14.66t
  rhs_vghot   1   2   0 wz--n- 99.94g  5.38g
[ec2-user@ip-172-31-63-234 ~]$ 



[ec2-user@ip-172-31-63-234 ~]$ sudo lvs
  LV                                 VG        Attr       LSize   Pool        Origin  Data%  Meta%  Move Log Cpy%Sync Convert
  69c186c44a454453a246b44f472a97d4_1 rhs_vg    Vwi-a-t--- 850.00g rhs_pool    rhs_lv2 0.09                                   
  rhs_lv1                            rhs_vg    Vwi-aot--- 850.00g rhs_pool            10.62                                  
  rhs_lv2                            rhs_vg    Vwi-aot--- 850.00g rhs_pool            10.53                                  
  rhs_pool                           rhs_vg    twi-aot---   1.71t                     10.31  0.04                            
  rhs_lvhot                          rhs_vghot Vwi-aot---  74.00g rhs_poolhot         91.44                                  
  rhs_poolhot                        rhs_vghot twi-aot---  74.00g                     91.44  0.02                            


Dmesg
============
[55364.157442] lost page write due to I/O error on dm-6
[55364.157445] Buffer I/O error on device dm-6, logical block 6428
[55364.160781] lost page write due to I/O error on dm-6
[55364.160783] Buffer I/O error on device dm-6, logical block 6429
[55364.164077] lost page write due to I/O error on dm-6
[55364.164081] Buffer I/O error on device dm-6, logical block 6430
[55364.166907] lost page write due to I/O error on dm-6
[55364.166909] Buffer I/O error on device dm-6, logical block 6431
[55364.169985] lost page write due to I/O error on dm-6
[55364.169988] Buffer I/O error on device dm-6, logical block 6432
[55364.172892] lost page write due to I/O error on dm-6
[55364.172893] Buffer I/O error on device dm-6, logical block 6433
[55364.175863] lost page write due to I/O error on dm-6
[55369.178070] md/raid0:md1: make_request bug: can't convert block across chunks or bigger than 512k 115260408 12
[55369.178488] quiet_error: 5 callbacks suppressed
[55369.178490] Buffer I/O error on device dm-6, logical block 13341471
[55369.181691] lost page write due to I/O error on dm-6
[55369.181693] Buffer I/O error on device dm-6, logical block 13341472
[55369.183483] md/raid0:md1: make_request bug: can't convert block across chunks or bigger than 512k 134135792 12
[55369.183491] Buffer I/O error on device dm-6, logical block 15766174
[55369.183491] lost page write due to I/O error on dm-6
[55369.183492] Buffer I/O error on device dm-6, logical block 15766175
[55369.183493] lost page write due to I/O error on dm-6
[55369.183493] Buffer I/O error on device dm-6, logical block 15766176
[55369.183494] lost page write due to I/O error on dm-6
[55369.184187] md/raid0:md1: make_request bug: can't convert block across chunks or bigger than 512k 162458488 76
[55369.184194] Buffer I/O error on device dm-6, logical block 6671
[55369.184194] lost page write due to I/O error on dm-6
[55369.184195] Buffer I/O error on device dm-6, logical block 6672
[55369.184196] lost page write due to I/O error on dm-6
[55369.184196] Buffer I/O error on device dm-6, logical block 6673
[55369.184197] lost page write due to I/O error on dm-6
[55369.184198] Buffer I/O error on device dm-6, logical block 6674
[55369.184198] lost page write due to I/O error on dm-6
[55369.184199] Buffer I/O error on device dm-6, logical block 6675
[55369.184199] lost page write due to I/O error on dm-6
[55369.207958] lost page write due to I/O error on dm-6
[55374.212293] md/raid0:md1: make_request bug: can't convert block across chunks or bigger than 512k 134136824 16
[55374.212303] quiet_error: 15 callbacks suppressed
[55374.212305] Buffer I/O error on device dm-6, logical block 15766303
[55374.215943] lost page write due to I/O error on dm-6
[55374.215947] Buffer I/O error on device dm-6, logical block 15766304
[55374.219247] lost page write due to I/O error on dm-6
[55374.219249] Buffer I/O error on device dm-6, logical block 15766305
[55374.222551] lost page write due to I/O error on dm-6
[55374.222553] Buffer I/O error on device dm-6, logical block 15766306
[55374.225870] lost page write due to I/O error on dm-6
[55379.250506] md/raid0:md1: make_request bug: can't convert block across chunks or bigger than 512k 134139816 60
[55379.250517] Buffer I/O error on device dm-6, logical block 15766677
[55379.255541] lost page write due to I/O error on dm-6
[55379.255544] Buffer I/O error on device dm-6, logical block 15766678
[55379.259697] lost page write due to I/O error on dm-6
[55379.259700] Buffer I/O error on device dm-6, logical block 15766679
[55379.264224] lost page write due to I/O error on dm-6
[55379.264227] Buffer I/O error on device dm-6, logical block 15766680
[55379.269163] lost page write due to I/O error on dm-6
[55379.269165] Buffer I/O error on device dm-6, logical block 15766681
[55379.273135] lost page write due to I/O error on dm-6
[55379.273138] Buffer I/O error on device dm-6, logical block 15766682
[55379.277808] lost page write due to I/O error on dm-6
[55379.277811] Buffer I/O error on device dm-6, logical block 15766683
[55379.281493] lost page write due to I/O error on dm-6
[55379.281495] Buffer I/O error on device dm-6, logical block 15766684
[55379.288505] lost page write due to I/O error on dm-6
[55379.288508] Buffer I/O error on device dm-6, logical block 15766685
[55379.293705] lost page write due to I/O error on dm-6
[55379.293708] Buffer I/O error on device dm-6, logical block 15766686
[55379.299472] lost page write due to I/O error on dm-6
[55379.938231] SELinux: initialized (dev tmpfs, type tmpfs), uses transition SIDs
[55384.271155] md/raid0:md1: make_request bug: can't convert block across chunks or bigger than 512k 134140736 120
[55384.271678] quiet_error: 5 callbacks suppressed
[55384.271680] Buffer I/O error on device dm-6, logical block 15766792
[55384.273593] md/raid0:md1: make_request bug: can't convert block across chunks or bigger than 512k 134141936 32
[55384.273601] Buffer I/O error on device dm-6, logical block 15766942
[55384.273601] lost page write due to I/O error on dm-6
[55384.273603] Buffer I/O error on device dm-6, logical block 15766943
[55384.273603] lost page write due to I/O error on dm-6
[55384.273604] Buffer I/O error on device dm-6, logical block 15766944
[55384.273604] lost page write due to I/O error on dm-6
[55384.273605] Buffer I/O error on device dm-6, logical block 15766945
[55384.273605] lost page write due to I/O error on dm-6
[55384.273606] Buffer I/O error on device dm-6, logical block 15766946
[55384.273607] lost page write due to I/O error on dm-6
[55384.273607] Buffer I/O error on device dm-6, logical block 15766947
[55384.273608] lost page write due to I/O error on dm-6
[55384.273608] Buffer I/O error on device dm-6, logical block 15766948
[55384.273609] lost page write due to I/O error on dm-6
[55384.273610] Buffer I/O error on device dm-6, logical block 15766949
[55384.273610] lost page write due to I/O error on dm-6
[55384.296109] md/raid0:md1: make_request bug: can't convert block across chunks or bigger than 512k 115261320 76
[55384.296118] Buffer I/O error on device dm-6, logical block 13341585
[55384.296118] lost page write due to I/O error on dm-6
[55384.302998] lost page write due to I/O error on dm-6
[55389.313021] md/raid0:md1: make_request bug: can't convert block across chunks or bigger than 512k 153008120 92
[55389.313032] quiet_error: 47 callbacks suppressed
[55389.313034] Buffer I/O error on device dm-6, logical block 18191135
[55389.316417] lost page write due to I/O error on dm-6
[55389.316420] Buffer I/O error on device dm-6, logical block 18191136
[55389.319694] lost page write due to I/O error on dm-6
[55389.319697] Buffer I/O error on device dm-6, logical block 18191137
[55389.322878] lost page write due to I/O error on dm-6
[55389.322881] Buffer I/O error on device dm-6, logical block 18191138
[55389.326095] lost page write due to I/O error on dm-6
[55389.326098] Buffer I/O error on device dm-6, logical block 18191139
[55389.329342] lost page write due to I/O error on dm-6
[55389.329345] Buffer I/O error on device dm-6, logical block 18191140
[55389.329687] md/raid0:md1: make_request bug: can't convert block across chunks or bigger than 512k 115264504 8
[55389.329694] Buffer I/O error on device dm-6, logical block 13341983
[55389.329695] lost page write due to I/O error on dm-6
[55389.329696] Buffer I/O error on device dm-6, logical block 13341984
[55389.329697] lost page write due to I/O error on dm-6
[55389.338605] lost page write due to I/O error on dm-6
[55389.338608] Buffer I/O error on device dm-6, logical block 18191141
[55389.339003] md/raid0:md1: make_request bug: can't convert block across chunks or bigger than 512k 153009064 60
[55389.339012] Buffer I/O error on device dm-6, logical block 18191253
[55389.339013] lost page write due to I/O error on dm-6
[55389.339349] md/raid0:md1: make_request bug: can't convert block across chunks or bigger than 512k 115267568 12
[55389.344509] lost page write due to I/O error on dm-6
[55394.378353] md/raid0:md1: make_request bug: can't convert block across chunks or bigger than 512k 115269576 44
[55394.378363] quiet_error: 33 callbacks suppressed
[55394.378365] Buffer I/O error on device dm-6, logical block 13342617
[55394.381379] lost page write due to I/O error on dm-6
[55394.381381] Buffer I/O error on device dm-6, logical block 13342618
[55394.384311] lost page write due to I/O error on dm-6
[55394.384314] Buffer I/O error on device dm-6, logical block 13342619
[55394.387192] lost page write due to I/O error on dm-6
[55394.387195] Buffer I/O error on device dm-6, logical block 13342620
[55394.390051] lost page write due to I/O error on dm-6
[55394.390053] Buffer I/O error on device dm-6, logical block 13342621
[55394.392964] lost page write due to I/O error on dm-6
[55394.392968] Buffer I/O error on device dm-6, logical block 13342622
[55394.395750] lost page write due to I/O error on dm-6
[55394.395752] Buffer I/O error on device dm-6, logical block 13342623
[55394.398587] lost page write due to I/O error on dm-6
[55394.398590] Buffer I/O error on device dm-6, logical block 13342624
[55394.401374] lost page write due to I/O error on dm-6
[55394.401376] Buffer I/O error on device dm-6, logical block 13342625
[55394.404213] lost page write due to I/O error on dm-6
[55394.404215] Buffer I/O error on device dm-6, logical block 13342626
[55394.407166] lost page write due to I/O error on dm-6
[55399.391422] md/raid0:md1: make_request bug: can't convert block across chunks or bigger than 512k 115270608 48
[55399.391503] quiet_error: 1 callbacks suppressed
[55399.391505] Buffer I/O error on device dm-6, logical block 13342746
[55399.394575] md/raid0:md1: make_request bug: can't convert block across chunks or bigger than 512k 115271656 44
[55399.394583] Buffer I/O error on device dm-6, logical block 13342877
[55399.394584] lost page write due to I/O error on dm-6
[55399.394586] Buffer I/O error on device dm-6, logical block 13342878
[55399.394586] lost page write due to I/O error on dm-6
[55399.394587] Buffer I/O error on device dm-6, logical block 13342879
[55399.394588] lost page write due to I/O error on dm-6
[55399.394589] Buffer I/O error on device dm-6, logical block 13342880
[55399.394589] lost page write due to I/O error on dm-6
[55399.394590] Buffer I/O error on device dm-6, logical block 13342881
[55399.394590] lost page write due to I/O error on dm-6
[55399.394591] Buffer I/O error on device dm-6, logical block 13342882
[55399.394591] lost page write due to I/O error on dm-6
[55399.394592] Buffer I/O error on device dm-6, logical block 13342883
[55399.394593] lost page write due to I/O error on dm-6
[55399.394593] Buffer I/O error on device dm-6, logical block 13342884
[55399.394594] lost page write due to I/O error on dm-6
[55399.394595] Buffer I/O error on device dm-6, logical block 13342885
[55399.394595] lost page write due to I/O error on dm-6
[55399.401019] md/raid0:md1: make_request bug: can't convert block across chunks or bigger than 512k 134143992 8
[55399.412745] md/raid0:md1: make_request bug: can't convert block across chunks or bigger than 512k 51311504 64
[55399.415718] md/raid0:md1: make_request bug: can't convert block across chunks or bigger than 512k 124710872 24
[55399.421747] md/raid0:md1: make_request bug: can't convert block across chunks or bigger than 512k 124711912 16
[55399.423970] lost page write due to I/O error on dm-6
[55404.458890] md/raid0:md1: make_request bug: can't convert block across chunks or bigger than 512k 162461664 32
[55404.458900] quiet_error: 41 callbacks suppressed
[55404.458901] Buffer I/O error on device dm-6, logical block 7068
[55404.461808] lost page write due to I/O error on dm-6
[55404.461811] Buffer I/O error on device dm-6, logical block 7069
[55404.462452] md/raid0:md1: make_request bug: can't convert block across chunks or bigger than 512k 124712920 28
[55404.462459] Buffer I/O error on device dm-6, logical block 14555675
[55404.462460] lost page write due to I/O error on dm-6
[55404.462461] Buffer I/O error on device dm-6, logical block 14555676
[55404.462461] lost page write due to I/O error on dm-6
[55404.462462] Buffer I/O error on device dm-6, logical block 14555677
[55404.462462] lost page write due to I/O error on dm-6
[55404.462463] Buffer I/O error on device dm-6, logical block 14555678
[55404.462464] lost page write due to I/O error on dm-6
[55404.462464] Buffer I/O error on device dm-6, logical block 14555679
[55404.462465] lost page write due to I/O error on dm-6
[55404.462466] Buffer I/O error on device dm-6, logical block 14555680
[55404.462466] lost page write due to I/O error on dm-6
[55404.462467] Buffer I/O error on device dm-6, logical block 14555681
[55404.462467] lost page write due to I/O error on dm-6
[55404.468512] md/raid0:md1: make_request bug: can't convert block across chunks or bigger than 512k 115272688 72
[55404.468584] Buffer I/O error on device dm-6, logical block 13343006
[55404.468584] lost page write due to I/O error on dm-6
[55404.472809] md/raid0:md1: make_request bug: can't convert block across chunks or bigger than 512k 51312584 64
[55404.487790] lost page write due to I/O error on dm-6
[55431.672533] md/raid0:md1: make_request bug: can't convert block across chunks or bigger than 512k 153011168 40
[55431.672566] quiet_error: 39 callbacks suppressed
[55431.672568] Buffer I/O error on device dm-6, logical block 18191516
[55431.675383] lost page write due to I/O error on dm-6
[55431.675386] Buffer I/O error on device dm-6, logical block 18191517
[55431.678140] lost page write due to I/O error on dm-6
[55431.678143] Buffer I/O error on device dm-6, logical block 18191518
[55431.680866] lost page write due to I/O error on dm-6
[55431.680867] Buffer I/O error on device dm-6, logical block 18191519
[55431.683603] lost page write due to I/O error on dm-6
[55431.683604] Buffer I/O error on device dm-6, logical block 18191520
[55431.686365] lost page write due to I/O error on dm-6
[55431.686366] Buffer I/O error on device dm-6, logical block 18191521
[55431.689095] lost page write due to I/O error on dm-6
[55431.689097] Buffer I/O error on device dm-6, logical block 18191522
[55431.691833] lost page write due to I/O error on dm-6
[55431.691834] Buffer I/O error on device dm-6, logical block 18191523
[55431.694555] lost page write due to I/O error on dm-6
[55431.694557] Buffer I/O error on device dm-6, logical block 18191524
[55431.697304] lost page write due to I/O error on dm-6
[55431.697305] Buffer I/O error on device dm-6, logical block 18191525
[55431.700031] lost page write due to I/O error on dm-6
[55438.960726] SELinux: initialized (dev tmpfs, type tmpfs), uses transition SIDs
[55498.986697] SELinux: initialized (dev tmpfs, type tmpfs), uses transition SIDs
[55559.007757] SELinux: initialized (dev tmpfs, type tmpfs), uses transition SIDs
[55619.028755] SELinux: initialized (dev tmpfs, type tmpfs), uses transition SIDs
[55679.054899] SELinux: initialized (dev tmpfs, type tmpfs), uses transition SIDs
[55681.404095] INFO: task glusterfsd:6645 blocked for more than 120 seconds.
[55681.407105] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[55681.410728] glusterfsd      D ffff88080fc53680     0  6645      1 0x00000080
[55681.410732]  ffff8807d1c83af8 0000000000000082 ffff8807d166e660 ffff8807d1c83fd8
[55681.410736]  ffff8807d1c83fd8 ffff8807d1c83fd8 ffff8807d166e660 ffff8807cbccc800
[55681.410739]  ffff8807cefd1228 ffff8807cbccc9c0 00000000000412cc 0000000000000000
[55681.410742] Call Trace:
[55681.410752]  [<ffffffff81609839>] schedule+0x29/0x70
[55681.410782]  [<ffffffffa020859d>] xlog_grant_head_wait+0x9d/0x180 [xfs]
[55681.410795]  [<ffffffffa020871e>] xlog_grant_head_check+0x9e/0x110 [xfs]
[55681.410809]  [<ffffffffa020c0af>] xfs_log_reserve+0xdf/0x1b0 [xfs]
[55681.410820]  [<ffffffffa01c5684>] xfs_trans_reserve+0x204/0x210 [xfs]
[55681.410831]  [<ffffffffa01bb696>] xfs_vn_update_time+0x56/0x190 [xfs]
[55681.410835]  [<ffffffff811e1a05>] update_time+0x25/0xd0
[55681.410838]  [<ffffffff811e1cb0>] file_update_time+0xa0/0xf0
[55681.410847]  [<ffffffffa01b2a1b>] xfs_file_aio_write_checks+0xdb/0xf0 [xfs]
[55681.410856]  [<ffffffffa01b2ac3>] xfs_file_buffered_aio_write+0x93/0x260 [xfs]
[55681.410866]  [<ffffffffa01b2d60>] xfs_file_aio_write+0xd0/0x150 [xfs]
[55681.410870]  [<ffffffff811c61fd>] do_sync_write+0x8d/0xd0
[55681.410873]  [<ffffffff811c699d>] vfs_write+0xbd/0x1e0
[55681.410875]  [<ffffffff811d6cdd>] ? putname+0x3d/0x60
[55681.410878]  [<ffffffff811c73e8>] SyS_write+0x58/0xb0
[55681.410881]  [<ffffffff81614389>] system_call_fastpath+0x16/0x1b
[55681.410883] INFO: task glusterfsd:31155 blocked for more than 120 seconds.
[55681.413855] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[55681.417510] glusterfsd      D ffff88080fc53680     0 31155      1 0x00000080
[55681.417513]  ffff8807d2e53930 0000000000000082 ffff8807cd602220 ffff8807d2e53fd8
[55681.417516]  ffff8807d2e53fd8 ffff8807d2e53fd8 ffff8807cd602220 ffff8807cbccc800
[55681.417518]  ffff8807cefd1000 ffff8807cbccc9c0 0000000000167c88 000000000000000f
[55681.417520] Call Trace:
[55681.417523]  [<ffffffff81609839>] schedule+0x29/0x70
[55681.417537]  [<ffffffffa020859d>] xlog_grant_head_wait+0x9d/0x180 [xfs]
[55681.417548]  [<ffffffffa020871e>] xlog_grant_head_check+0x9e/0x110 [xfs]
[55681.417559]  [<ffffffffa020c0af>] xfs_log_reserve+0xdf/0x1b0 [xfs]
[55681.417569]  [<ffffffffa01c5684>] xfs_trans_reserve+0x204/0x210 [xfs]
[55681.417579]  [<ffffffffa01ccd9a>] xfs_attr_set_int+0x15a/0x470 [xfs]
[55681.417582]  [<ffffffff81285865>] ? mls_sid_to_context+0x2a5/0x2e0
[55681.417591]  [<ffffffffa01bb960>] ? xfs_init_security+0x20/0x20 [xfs]
[55681.417601]  [<ffffffffa01cd5ff>] xfs_attr_set+0x9f/0xb0 [xfs]
[55681.417609]  [<ffffffffa01bb9a1>] xfs_initxattrs+0x41/0x60 [xfs]
[55681.417613]  [<ffffffff81268d5a>] security_inode_init_security+0xea/0x120
[55681.417621]  [<ffffffffa01bb958>] xfs_init_security+0x18/0x20 [xfs]
[55681.417629]  [<ffffffffa01bba9d>] xfs_vn_mknod+0xdd/0x1e0 [xfs]
[55681.417638]  [<ffffffffa01bbbd3>] xfs_vn_create+0x13/0x20 [xfs]
[55681.417640]  [<ffffffff811d2dcd>] vfs_create+0xcd/0x130
[55681.417642]  [<ffffffff811d600f>] do_last+0xb8f/0x1270
[55681.417645]  [<ffffffff811abc4e>] ? kmem_cache_alloc_trace+0x1ce/0x1f0
[55681.417647]  [<ffffffff811d67b2>] path_openat+0xc2/0x490
[55681.417649]  [<ffffffff811d7e82>] ? user_path_at_empty+0x72/0xc0
[55681.417651]  [<ffffffff811d7f7b>] do_filp_open+0x4b/0xb0
[55681.417654]  [<ffffffff811e49d7>] ? __alloc_fd+0xa7/0x130
[55681.417657]  [<ffffffff811c5c73>] do_sys_open+0xf3/0x1f0
[55681.417659]  [<ffffffff811c5d8e>] SyS_open+0x1e/0x20
[55681.417662]  [<ffffffff81614389>] system_call_fastpath+0x16/0x1b
[55681.417669] INFO: task kworker/u30:3:31178 blocked for more than 120 seconds.
[55681.420708] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[55681.424402] kworker/u30:3   D ffff88080fc33680     0 31178      2 0x00000080
[55681.424409] Workqueue: writeback bdi_writeback_workfn (flush-253:6)
[55681.424412]  ffff8807d094f710 0000000000000046 ffff8807cbe271c0 ffff8807d094ffd8
[55681.424415]  ffff8807d094ffd8 ffff8807d094ffd8 ffff8807cbe271c0 ffff8807cbccc800
[55681.424417]  ffff8807d2fd7730 ffff8807cbccc9c0 00000000000b7398 0000000000000004
[55681.424420] Call Trace:
[55681.424423]  [<ffffffff81609839>] schedule+0x29/0x70
[55681.424437]  [<ffffffffa020859d>] xlog_grant_head_wait+0x9d/0x180 [xfs]
[55681.424449]  [<ffffffffa020871e>] xlog_grant_head_check+0x9e/0x110 [xfs]
[55681.424461]  [<ffffffffa020c0af>] xfs_log_reserve+0xdf/0x1b0 [xfs]
[55681.424472]  [<ffffffffa01c5684>] xfs_trans_reserve+0x204/0x210 [xfs]
[55681.424481]  [<ffffffffa01bafc9>] xfs_iomap_write_allocate+0x1c9/0x350 [xfs]
[55681.424489]  [<ffffffffa01a6356>] xfs_map_blocks+0x216/0x240 [xfs]
[55681.424496]  [<ffffffffa01a75bb>] xfs_vm_writepage+0x25b/0x5d0 [xfs]
[55681.424501]  [<ffffffff811610a3>] __writepage+0x13/0x50
[55681.424505]  [<ffffffff81161bc1>] write_cache_pages+0x251/0x4d0
[55681.424508]  [<ffffffff81161090>] ? global_dirtyable_memory+0x70/0x70
[55681.424511]  [<ffffffff81161e8d>] generic_writepages+0x4d/0x80
[55681.424518]  [<ffffffffa01a6ea3>] xfs_vm_writepages+0x43/0x50 [xfs]
[55681.424520]  [<ffffffff81162f3e>] do_writepages+0x1e/0x40
[55681.424523]  [<ffffffff811f0340>] __writeback_single_inode+0x40/0x220
[55681.424526]  [<ffffffff811f103e>] writeback_sb_inodes+0x25e/0x420
[55681.424529]  [<ffffffff811f129f>] __writeback_inodes_wb+0x9f/0xd0
[55681.424532]  [<ffffffff811f1ae3>] wb_writeback+0x263/0x2f0
[55681.424534]  [<ffffffff811f311b>] bdi_writeback_workfn+0x2cb/0x460
[55681.424537]  [<ffffffff8108f0cb>] process_one_work+0x17b/0x470
[55681.424540]  [<ffffffff8108fe9b>] worker_thread+0x11b/0x400
[55681.424542]  [<ffffffff8108fd80>] ? rescuer_thread+0x400/0x400
[55681.424544]  [<ffffffff8109727f>] kthread+0xcf/0xe0
[55681.424546]  [<ffffffff810971b0>] ? kthread_create_on_node+0x140/0x140
[55681.424549]  [<ffffffff816142d8>] ret_from_fork+0x58/0x90
[55681.424551]  [<ffffffff810971b0>] ? kthread_create_on_node+0x140/0x140
[55681.424553] INFO: task kworker/2:0:31297 blocked for more than 120 seconds.
[55681.427623] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[55681.431256] kworker/2:0     D ffff88080fc53680     0 31297      2 0x00000080
[55681.431271] Workqueue: xfs-log/dm-6 xfs_log_worker [xfs]
[55681.431273]  ffff8806853f3cc0 0000000000000046 ffff8807e388b8e0 ffff8806853f3fd8
[55681.431276]  ffff8806853f3fd8 ffff8806853f3fd8 ffff8807e388b8e0 ffff8807cbccc800
[55681.431278]  ffff8807cefd1da8 ffff8807cbccc9c0 00000000000412b4 0000000000000000
[55681.431281] Call Trace:
[55681.431284]  [<ffffffff81609839>] schedule+0x29/0x70
[55681.431295]  [<ffffffffa020859d>] xlog_grant_head_wait+0x9d/0x180 [xfs]
[55681.431306]  [<ffffffffa020871e>] xlog_grant_head_check+0x9e/0x110 [xfs]
[55681.431317]  [<ffffffffa020c0af>] xfs_log_reserve+0xdf/0x1b0 [xfs]
[55681.431328]  [<ffffffffa01c5684>] xfs_trans_reserve+0x204/0x210 [xfs]
[55681.431336]  [<ffffffffa01b4d45>] xfs_fs_log_dummy+0x35/0x80 [xfs]
[55681.431347]  [<ffffffffa020b918>] xfs_log_worker+0x48/0x50 [xfs]
[55681.431350]  [<ffffffff8108f0cb>] process_one_work+0x17b/0x470
[55681.431354]  [<ffffffff8108fe9b>] worker_thread+0x11b/0x400
[55681.431356]  [<ffffffff8108fd80>] ? rescuer_thread+0x400/0x400
[55681.431358]  [<ffffffff8109727f>] kthread+0xcf/0xe0
[55681.431360]  [<ffffffff810971b0>] ? kthread_create_on_node+0x140/0x140
[55681.431363]  [<ffffffff816142d8>] ret_from_fork+0x58/0x90
[55681.431365]  [<ffffffff810971b0>] ? kthread_create_on_node+0x140/0x140
[55739.082949] SELinux: initialized (dev tmpfs, type tmpfs), uses transition SIDs
[55799.107587] SELinux: initialized (dev tmpfs, type tmpfs), uses transition SIDs
[55801.431108] INFO: task glusterfsd:6645 blocked for more than 120 seconds.
[55801.434120] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[55801.437872] glusterfsd      D ffff88080fc53680     0  6645      1 0x00000080
[55801.437876]  ffff8807d1c83af8 0000000000000082 ffff8807d166e660 ffff8807d1c83fd8
[55801.437879]  ffff8807d1c83fd8 ffff8807d1c83fd8 ffff8807d166e660 ffff8807cbccc800
[55801.437882]  ffff8807cefd1228 ffff8807cbccc9c0 00000000000412cc 0000000000000000
[55801.437884] Call Trace:
[55801.437893]  [<ffffffff81609839>] schedule+0x29/0x70
[55801.437924]  [<ffffffffa020859d>] xlog_grant_head_wait+0x9d/0x180 [xfs]
[55801.437937]  [<ffffffffa020871e>] xlog_grant_head_check+0x9e/0x110 [xfs]
[55801.437950]  [<ffffffffa020c0af>] xfs_log_reserve+0xdf/0x1b0 [xfs]
[55801.437960]  [<ffffffffa01c5684>] xfs_trans_reserve+0x204/0x210 [xfs]
[55801.437970]  [<ffffffffa01bb696>] xfs_vn_update_time+0x56/0x190 [xfs]
[55801.437973]  [<ffffffff811e1a05>] update_time+0x25/0xd0
[55801.437976]  [<ffffffff811e1cb0>] file_update_time+0xa0/0xf0
[55801.437984]  [<ffffffffa01b2a1b>] xfs_file_aio_write_checks+0xdb/0xf0 [xfs]
[55801.437992]  [<ffffffffa01b2ac3>] xfs_file_buffered_aio_write+0x93/0x260 [xfs]
[55801.438001]  [<ffffffffa01b2d60>] xfs_file_aio_write+0xd0/0x150 [xfs]
[55801.438004]  [<ffffffff811c61fd>] do_sync_write+0x8d/0xd0
[55801.438007]  [<ffffffff811c699d>] vfs_write+0xbd/0x1e0
[55801.438009]  [<ffffffff811d6cdd>] ? putname+0x3d/0x60
[55801.438012]  [<ffffffff811c73e8>] SyS_write+0x58/0xb0
[55801.438015]  [<ffffffff81614389>] system_call_fastpath+0x16/0x1b
[55801.438017] INFO: task glusterfsd:31155 blocked for more than 120 seconds.
[55801.440976] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[55801.444926] glusterfsd      D ffff88080fc53680     0 31155      1 0x00000080
[55801.444930]  ffff8807d2e53930 0000000000000082 ffff8807cd602220 ffff8807d2e53fd8
[55801.444933]  ffff8807d2e53fd8 ffff8807d2e53fd8 ffff8807cd602220 ffff8807cbccc800
[55801.444935]  ffff8807cefd1000 ffff8807cbccc9c0 0000000000167c88 000000000000000f
[55801.444938] Call Trace:
[55801.444942]  [<ffffffff81609839>] schedule+0x29/0x70
[55801.444960]  [<ffffffffa020859d>] xlog_grant_head_wait+0x9d/0x180 [xfs]
[55801.444978]  [<ffffffffa020871e>] xlog_grant_head_check+0x9e/0x110 [xfs]
[55801.444996]  [<ffffffffa020c0af>] xfs_log_reserve+0xdf/0x1b0 [xfs]
[55801.445013]  [<ffffffffa01c5684>] xfs_trans_reserve+0x204/0x210 [xfs]
[55801.445030]  [<ffffffffa01ccd9a>] xfs_attr_set_int+0x15a/0x470 [xfs]
[55801.445035]  [<ffffffff81285865>] ? mls_sid_to_context+0x2a5/0x2e0
[55801.445050]  [<ffffffffa01bb960>] ? xfs_init_security+0x20/0x20 [xfs]
[55801.445066]  [<ffffffffa01cd5ff>] xfs_attr_set+0x9f/0xb0 [xfs]
[55801.445083]  [<ffffffffa01bb9a1>] xfs_initxattrs+0x41/0x60 [xfs]
[55801.445100]  [<ffffffff81268d5a>] security_inode_init_security+0xea/0x120
[55801.445115]  [<ffffffffa01bb958>] xfs_init_security+0x18/0x20 [xfs]
[55801.445129]  [<ffffffffa01bba9d>] xfs_vn_mknod+0xdd/0x1e0 [xfs]
[55801.445143]  [<ffffffffa01bbbd3>] xfs_vn_create+0x13/0x20 [xfs]
[55801.445148]  [<ffffffff811d2dcd>] vfs_create+0xcd/0x130
[55801.445151]  [<ffffffff811d600f>] do_last+0xb8f/0x1270
[55801.445156]  [<ffffffff811abc4e>] ? kmem_cache_alloc_trace+0x1ce/0x1f0
[55801.445159]  [<ffffffff811d67b2>] path_openat+0xc2/0x490
[55801.445163]  [<ffffffff811d7e82>] ? user_path_at_empty+0x72/0xc0
[55801.445166]  [<ffffffff811d7f7b>] do_filp_open+0x4b/0xb0
[55801.445170]  [<ffffffff811e49d7>] ? __alloc_fd+0xa7/0x130
[55801.445175]  [<ffffffff811c5c73>] do_sys_open+0xf3/0x1f0
[55801.445180]  [<ffffffff811c5d8e>] SyS_open+0x1e/0x20
[55801.445184]  [<ffffffff81614389>] system_call_fastpath+0x16/0x1b
[55801.445194] INFO: task kworker/1:0:31128 blocked for more than 120 seconds.
[55801.448440] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[55801.452106] kworker/1:0     D ffff88080fc33680     0 31128      2 0x00000080
[55801.452121] Workqueue: xfs-eofblocks/dm-6 xfs_eofblocks_worker [xfs]
[55801.452122]  ffff8807d13e3a28 0000000000000046 ffff8807d1656660 ffff8807d13e3fd8
[55801.452125]  ffff8807d13e3fd8 ffff8807d13e3fd8 ffff8807d1656660 ffff8807cbccc800
[55801.452128]  ffff8807d2fd72e0 ffff8807cbccc9c0 00000000000ee780 0000000000000000
[55801.452132] Call Trace:
[55801.452137]  [<ffffffff81609839>] schedule+0x29/0x70
[55801.452155]  [<ffffffffa020859d>] xlog_grant_head_wait+0x9d/0x180 [xfs]
[55801.452174]  [<ffffffffa020871e>] xlog_grant_head_check+0x9e/0x110 [xfs]
[55801.452192]  [<ffffffffa020c0af>] xfs_log_reserve+0xdf/0x1b0 [xfs]
[55801.452208]  [<ffffffffa01c5684>] xfs_trans_reserve+0x204/0x210 [xfs]
[55801.452221]  [<ffffffffa01aaa87>] xfs_free_eofblocks+0x147/0x270 [xfs]
[55801.452227]  [<ffffffff8110e42e>] ? irq_get_irq_data+0xe/0x10
[55801.452241]  [<ffffffffa01b7085>] xfs_inode_free_eofblocks+0x95/0x170 [xfs]
[55801.452255]  [<ffffffffa01b54a6>] xfs_inode_ag_walk.isra.9+0x256/0x380 [xfs]
[55801.452269]  [<ffffffffa01b6ff0>] ? xfs_inode_clear_eofblocks_tag+0x170/0x170 [xfs]
[55801.452274]  [<ffffffff8107ef77>] ? internal_add_timer+0x17/0x40
[55801.452278]  [<ffffffff81080b0d>] ? mod_timer+0x11d/0x240
[55801.452283]  [<ffffffff812da2a9>] ? radix_tree_gang_lookup_tag+0x99/0xf0
[55801.452302]  [<ffffffffa0205fd2>] ? xfs_perag_get_tag+0x42/0xe0 [xfs]
[55801.452317]  [<ffffffffa01b6886>] xfs_inode_ag_iterator_tag+0x76/0xc0 [xfs]
[55801.452331]  [<ffffffffa01b6ff0>] ? xfs_inode_clear_eofblocks_tag+0x170/0x170 [xfs]
[55801.452344]  [<ffffffffa01b6bad>] xfs_icache_free_eofblocks+0x2d/0x40 [xfs]
[55801.452359]  [<ffffffffa01b6bdb>] xfs_eofblocks_worker+0x1b/0x30 [xfs]
[55801.452365]  [<ffffffff8108f0cb>] process_one_work+0x17b/0x470
[55801.452368]  [<ffffffff8108fe9b>] worker_thread+0x11b/0x400
[55801.452372]  [<ffffffff8108fd80>] ? rescuer_thread+0x400/0x400
[55801.452376]  [<ffffffff8109727f>] kthread+0xcf/0xe0
[55801.452380]  [<ffffffff810971b0>] ? kthread_create_on_node+0x140/0x140
[55801.452384]  [<ffffffff816142d8>] ret_from_fork+0x58/0x90
[55801.452387]  [<ffffffff810971b0>] ? kthread_create_on_node+0x140/0x140
[55801.452390] INFO: task kworker/u30:3:31178 blocked for more than 120 seconds.
[55801.455725] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[55801.459378] kworker/u30:3   D ffff88080fc33680     0 31178      2 0x00000080
[55801.459385] Workqueue: writeback bdi_writeback_workfn (flush-253:6)
[55801.459388]  ffff8807d094f710 0000000000000046 ffff8807cbe271c0 ffff8807d094ffd8
[55801.459391]  ffff8807d094ffd8 ffff8807d094ffd8 ffff8807cbe271c0 ffff8807cbccc800
[55801.459393]  ffff8807d2fd7730 ffff8807cbccc9c0 00000000000b7398 0000000000000004
[55801.459395] Call Trace:
[55801.459399]  [<ffffffff81609839>] schedule+0x29/0x70
[55801.459417]  [<ffffffffa020859d>] xlog_grant_head_wait+0x9d/0x180 [xfs]
[55801.459434]  [<ffffffffa020871e>] xlog_grant_head_check+0x9e/0x110 [xfs]
[55801.459453]  [<ffffffffa020c0af>] xfs_log_reserve+0xdf/0x1b0 [xfs]
[55801.459471]  [<ffffffffa01c5684>] xfs_trans_reserve+0x204/0x210 [xfs]
[55801.459485]  [<ffffffffa01bafc9>] xfs_iomap_write_allocate+0x1c9/0x350 [xfs]
[55801.459498]  [<ffffffffa01a6356>] xfs_map_blocks+0x216/0x240 [xfs]
[55801.459510]  [<ffffffffa01a75bb>] xfs_vm_writepage+0x25b/0x5d0 [xfs]
[55801.459516]  [<ffffffff811610a3>] __writepage+0x13/0x50
[55801.459520]  [<ffffffff81161bc1>] write_cache_pages+0x251/0x4d0
[55801.459524]  [<ffffffff81161090>] ? global_dirtyable_memory+0x70/0x70
[55801.459529]  [<ffffffff81161e8d>] generic_writepages+0x4d/0x80
[55801.459541]  [<ffffffffa01a6ea3>] xfs_vm_writepages+0x43/0x50 [xfs]
[55801.459545]  [<ffffffff81162f3e>] do_writepages+0x1e/0x40
[55801.459549]  [<ffffffff811f0340>] __writeback_single_inode+0x40/0x220
[55801.459552]  [<ffffffff811f103e>] writeback_sb_inodes+0x25e/0x420
[55801.459556]  [<ffffffff811f129f>] __writeback_inodes_wb+0x9f/0xd0
[55801.459560]  [<ffffffff811f1ae3>] wb_writeback+0x263/0x2f0
[55801.459564]  [<ffffffff811f311b>] bdi_writeback_workfn+0x2cb/0x460
[55801.459567]  [<ffffffff8108f0cb>] process_one_work+0x17b/0x470
[55801.459571]  [<ffffffff8108fe9b>] worker_thread+0x11b/0x400
[55801.459574]  [<ffffffff8108fd80>] ? rescuer_thread+0x400/0x400
[55801.459578]  [<ffffffff8109727f>] kthread+0xcf/0xe0
[55801.459581]  [<ffffffff810971b0>] ? kthread_create_on_node+0x140/0x140
[55801.459585]  [<ffffffff816142d8>] ret_from_fork+0x58/0x90
[55801.459588]  [<ffffffff810971b0>] ? kthread_create_on_node+0x140/0x140
[55801.459592] INFO: task kworker/2:0:31297 blocked for more than 120 seconds.
[55801.462896] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[55801.466566] kworker/2:0     D ffff88080fc53680     0 31297      2 0x00000080
[55801.466584] Workqueue: xfs-log/dm-6 xfs_log_worker [xfs]
[55801.466586]  ffff8806853f3cc0 0000000000000046 ffff8807e388b8e0 ffff8806853f3fd8
[55801.466591]  ffff8806853f3fd8 ffff8806853f3fd8 ffff8807e388b8e0 ffff8807cbccc800
[55801.466594]  ffff8807cefd1da8 ffff8807cbccc9c0 00000000000412b4 0000000000000000
[55801.466598] Call Trace:
[55801.466602]  [<ffffffff81609839>] schedule+0x29/0x70
[55801.466622]  [<ffffffffa020859d>] xlog_grant_head_wait+0x9d/0x180 [xfs]
[55801.466639]  [<ffffffffa020871e>] xlog_grant_head_check+0x9e/0x110 [xfs]
[55801.466658]  [<ffffffffa020c0af>] xfs_log_reserve+0xdf/0x1b0 [xfs]
[55801.466674]  [<ffffffffa01c5684>] xfs_trans_reserve+0x204/0x210 [xfs]
[55801.466689]  [<ffffffffa01b4d45>] xfs_fs_log_dummy+0x35/0x80 [xfs]
[55801.466708]  [<ffffffffa020b918>] xfs_log_worker+0x48/0x50 [xfs]
[55801.466712]  [<ffffffff8108f0cb>] process_one_work+0x17b/0x470
[55801.466716]  [<ffffffff8108fe9b>] worker_thread+0x11b/0x400
[55801.466719]  [<ffffffff8108fd80>] ? rescuer_thread+0x400/0x400
[55801.466722]  [<ffffffff8109727f>] kthread+0xcf/0xe0
[55801.466726]  [<ffffffff810971b0>] ? kthread_create_on_node+0x140/0x140
[55801.466730]  [<ffffffff816142d8>] ret_from_fork+0x58/0x90
[55801.466733]  [<ffffffff810971b0>] ? kthread_create_on_node+0x140/0x140
[55859.129197] SELinux: initialized (dev tmpfs, type tmpfs), uses transition SIDs
[55919.152606] SELinux: initialized (dev tmpfs, type tmpfs), uses transition SIDs
[55921.466098] INFO: task glusterfsd:6645 blocked for more than 120 seconds.
[55921.469087] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[55921.472729] glusterfsd      D ffff88080fc53680     0  6645      1 0x00000080
[55921.472733]  ffff8807d1c83af8 0000000000000082 ffff8807d166e660 ffff8807d1c83fd8
[55921.472737]  ffff8807d1c83fd8 ffff8807d1c83fd8 ffff8807d166e660 ffff8807cbccc800
[55921.472739]  ffff8807cefd1228 ffff8807cbccc9c0 00000000000412cc 0000000000000000
[55921.472742] Call Trace:
[55921.472749]  [<ffffffff81609839>] schedule+0x29/0x70
[55921.472779]  [<ffffffffa020859d>] xlog_grant_head_wait+0x9d/0x180 [xfs]
[55921.472793]  [<ffffffffa020871e>] xlog_grant_head_check+0x9e/0x110 [xfs]
[55921.472806]  [<ffffffffa020c0af>] xfs_log_reserve+0xdf/0x1b0 [xfs]
[55921.472817]  [<ffffffffa01c5684>] xfs_trans_reserve+0x204/0x210 [xfs]
[55921.472826]  [<ffffffffa01bb696>] xfs_vn_update_time+0x56/0x190 [xfs]
[55921.472833]  [<ffffffff811e1a05>] update_time+0x25/0xd0
[55921.472835]  [<ffffffff811e1cb0>] file_update_time+0xa0/0xf0
[55921.472844]  [<ffffffffa01b2a1b>] xfs_file_aio_write_checks+0xdb/0xf0 [xfs]
[55921.472853]  [<ffffffffa01b2ac3>] xfs_file_buffered_aio_write+0x93/0x260 [xfs]
[55921.472861]  [<ffffffffa01b2d60>] xfs_file_aio_write+0xd0/0x150 [xfs]
[55921.472866]  [<ffffffff811c61fd>] do_sync_write+0x8d/0xd0
[55921.472869]  [<ffffffff811c699d>] vfs_write+0xbd/0x1e0
[55921.472871]  [<ffffffff811d6cdd>] ? putname+0x3d/0x60
[55921.472874]  [<ffffffff811c73e8>] SyS_write+0x58/0xb0
[55921.472877]  [<ffffffff81614389>] system_call_fastpath+0x16/0x1b
[ec2-user@ip-172-31-63-234 ~]$
Comment 2 RajeshReddy 2016-03-16 01:53:14 EDT
Created a VG out of the 24 EBS (1 TB) disks, created two logical volumes from it, and used those two volumes as bricks. Ran the same workload (Linux untar) multiple times and did not encounter the issue mentioned above, so it looks like a problem with software RAID (mdadm).
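For reference, the LVM-based brick layout described above can be sketched roughly as below. This is a hypothetical provisioning fragment, not the exact commands used in the test: the device names (/dev/xvdb through /dev/xvdy), VG/LV names, and sizes are assumptions, and it must be run as root on an instance that actually has the EBS volumes attached.

```shell
#!/bin/sh
# Sketch: replace mdadm RAID 0 with an LVM volume group built from the
# 24 EBS disks, then carve two logical volumes to use as Gluster bricks.
# Device names and sizes below are illustrative, not from the bug report.
set -e

DISKS=$(echo /dev/xvd{b..y})          # 24 attached EBS volumes (assumed names)

pvcreate $DISKS                        # initialize each disk as an LVM PV
vgcreate rhgs_vg $DISKS                # one VG spanning all 24 disks

# Two LVs for the two cold-tier bricks; striping across all PVs is optional
# but mimics the RAID 0 bandwidth (-i = stripe count, -I = stripe size KiB).
lvcreate -n brick1 -L 9T -i 24 -I 256 rhgs_vg
lvcreate -n brick2 -L 9T -i 24 -I 256 rhgs_vg

mkfs.xfs -i size=512 /dev/rhgs_vg/brick1   # inode size 512 as recommended for RHGS
mkfs.xfs -i size=512 /dev/rhgs_vg/brick2

mkdir -p /rhs/brick1 /rhs/brick2
mount /dev/rhgs_vg/brick1 /rhs/brick1
mount /dev/rhgs_vg/brick2 /rhs/brick2
```

With this layout the XFS log sits on LVM rather than on an md RAID 0 device, which is the variable Comment 2 changed when the hang stopped reproducing.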
Comment 4 Amar Tumballi 2018-02-06 23:26:46 EST
We have noticed that the bug does not reproduce on the latest version of the product (RHGS-3.3.1+).

If the bug is still relevant and reproducible, feel free to reopen it.