Bug 589070 - GFS2 lockdep warnings when quota is enabled
Status: NEW
Product: Fedora
Classification: Fedora
Component: kernel
Version: rawhide
Hardware: All
OS: Linux
Priority: low
Severity: medium
Assigned To: Steve Whitehouse
QA Contact: Fedora Extras Quality Assurance
Depends On: 636287 437149
Reported: 2010-05-05 06:18 EDT by chellwig@redhat.com
Modified: 2014-09-08 20:40 EDT
CC: 9 users

Doc Type: Bug Fix

Attachments: None
Description chellwig@redhat.com 2010-05-05 06:18:18 EDT
When doing simple file creations on a quota-enabled gfs2 filesystem, lockdep is not very happy:

[  288.229653] GFS2: fsid=: Trying to join cluster "lock_nolock", "vdb6"
[  288.231487] GFS2: fsid=vdb6.0: Now mounting FS...
[  288.319026] GFS2: fsid=vdb6.0: jid=0, already locked for use
[  288.320782] GFS2: fsid=vdb6.0: jid=0: Looking at journal...
[  288.334825] GFS2: fsid=vdb6.0: jid=0: Done
[  288.507120] 
[  288.507124] =======================================================
[  288.508476] [ INFO: possible circular locking dependency detected ]
[  288.508476] 2.6.34-rc1-xfs #553
[  288.508476] -------------------------------------------------------
[  288.508476] xfs_io/5079 is trying to acquire lock:
[  288.508476]  (&sdp->sd_quota_mutex){+.+...}, at: [<c06132f0>] do_qc+0x40/0x1d0
[  288.508476] 
[  288.508476] but task is already holding lock:
[  288.508476]  (&ip->i_rw_mutex){++++..}, at: [<c05f6e92>] gfs2_block_map+0x72/0xd90
[  288.508476] 
[  288.508476] which lock already depends on the new lock.
[  288.508476] 
[  288.508476] 
[  288.508476] the existing dependency chain (in reverse order) is:
[  288.508476] 
[  288.508476] -> #1 (&ip->i_rw_mutex){++++..}:
[  288.508476]        [<c01933a4>] __lock_acquire+0xc74/0x1280
[  288.508476]        [<c0193a44>] lock_acquire+0x94/0x100
[  288.508476]        [<c08ef7c7>] down_read+0x47/0x90
[  288.508476]        [<c05f712b>] gfs2_block_map+0x30b/0xd90
[  288.508476]        [<c06141c8>] bh_get+0x98/0x1c0
[  288.508476]        [<c061440e>] qdsb_get+0x11e/0x150
[  288.508476]        [<c06144af>] gfs2_quota_hold+0x6f/0x180
[  288.508476]        [<c06145e4>] gfs2_quota_lock+0x24/0xf0
[  288.508476]        [<c0605cff>] gfs2_createi+0x26f/0x900
[  288.508476]        [<c0610776>] gfs2_create+0x56/0x120
[  288.508476]        [<c020962c>] vfs_create+0x7c/0x90
[  288.508476]        [<c020a7f8>] do_last+0x4c8/0x560
[  288.508476]        [<c020c494>] do_filp_open+0x194/0x470
[  288.508476]        [<c01feb8f>] do_sys_open+0x4f/0x110
[  288.508476]        [<c01fecb9>] sys_open+0x29/0x40
[  288.508476]        [<c012f19c>] sysenter_do_call+0x12/0x3c
[  288.508476] 
[  288.508476] -> #0 (&sdp->sd_quota_mutex){+.+...}:
[  288.508476]        [<c01937c4>] __lock_acquire+0x1094/0x1280
[  288.508476]        [<c0193a44>] lock_acquire+0x94/0x100
[  288.508476]        [<c08ef097>] __mutex_lock_common+0x47/0x370
[  288.508476]        [<c08ef475>] mutex_lock_nested+0x35/0x40
[  288.508476]        [<c06132f0>] do_qc+0x40/0x1d0
[  288.508476]        [<c06134f0>] gfs2_quota_change+0x70/0xc0
[  288.508476]        [<c0618698>] gfs2_alloc_block+0x198/0x2e0
[  288.508476]        [<c05f72bd>] gfs2_block_map+0x49d/0xd90
[  288.508476]        [<c02246b8>] __block_prepare_write+0x148/0x3a0
[  288.508476]        [<c0224936>] block_prepare_write+0x26/0x40
[  288.508476]        [<c060b09d>] gfs2_write_begin+0x38d/0x490
[  288.508476]        [<c01ce08d>] generic_file_buffered_write+0xcd/0x1f0
[  288.508476]        [<c01d0893>] __generic_file_aio_write+0x3d3/0x4f0
[  288.508476]        [<c01d0a0e>] generic_file_aio_write+0x5e/0xc0
[  288.508476]        [<c060cd8a>] gfs2_file_aio_write+0x6a/0x90
[  288.508476]        [<c02005ac>] do_sync_write+0x9c/0xd0
[  288.508476]        [<c02007fa>] vfs_write+0x9a/0x160
[  288.508476]        [<c0201073>] sys_pwrite64+0x63/0x80
[  288.508476]        [<c012f19c>] sysenter_do_call+0x12/0x3c
[  288.508476] 
[  288.508476] other info that might help us debug this:
[  288.508476] 
[  288.508476] 3 locks held by xfs_io/5079:
[  288.508476]  #0:  (&sb->s_type->i_mutex_key#12){+.+.+.}, at: [<c01d09fb>] generic_file_aio_write+0x4b/0xc0
[  288.508476]  #1:  (&sdp->sd_log_flush_lock){++++..}, at: [<c0607f52>] gfs2_log_reserve+0x102/0x1a0
[  288.508476]  #2:  (&ip->i_rw_mutex){++++..}, at: [<c05f6e92>] gfs2_block_map+0x72/0xd90
[  288.508476] 
[  288.508476] stack backtrace:
[  288.508476] Pid: 5079, comm: xfs_io Tainted: G        W  2.6.34-rc1-xfs #553
[  288.508476] Call Trace:
[  288.508476]  [<c08eddcf>] ? printk+0x18/0x1a
[  288.508476]  [<c0191282>] print_circular_bug+0xc2/0xd0
[  288.508476]  [<c01937c4>] __lock_acquire+0x1094/0x1280
[  288.508476]  [<c01348a8>] ? sched_clock+0x8/0x10
[  288.508476]  [<c0193a44>] lock_acquire+0x94/0x100
[  288.508476]  [<c06132f0>] ? do_qc+0x40/0x1d0
[  288.508476]  [<c08ef097>] __mutex_lock_common+0x47/0x370
[  288.508476]  [<c06132f0>] ? do_qc+0x40/0x1d0
[  288.508476]  [<c08ef475>] mutex_lock_nested+0x35/0x40
[  288.508476]  [<c06132f0>] ? do_qc+0x40/0x1d0
[  288.508476]  [<c06132f0>] do_qc+0x40/0x1d0
[  288.508476]  [<c061a45c>] ? gfs2_statfs_change+0xec/0x1a0
[  288.508476]  [<c06134f0>] gfs2_quota_change+0x70/0xc0
[  288.508476]  [<c0618698>] gfs2_alloc_block+0x198/0x2e0
[  288.508476]  [<c05f72bd>] gfs2_block_map+0x49d/0xd90
[  288.508476]  [<c08f0cdd>] ? _raw_spin_unlock+0x1d/0x20
[  288.508476]  [<c02246b8>] __block_prepare_write+0x148/0x3a0
[  288.508476]  [<c05f6e20>] ? gfs2_block_map+0x0/0xd90
[  288.508476]  [<c0224936>] block_prepare_write+0x26/0x40
[  288.508476]  [<c05f6e20>] ? gfs2_block_map+0x0/0xd90
[  288.508476]  [<c060b09d>] gfs2_write_begin+0x38d/0x490
[  288.508476]  [<c05f6e20>] ? gfs2_block_map+0x0/0xd90
[  288.508476]  [<c01ce08d>] generic_file_buffered_write+0xcd/0x1f0
[  288.508476]  [<c01d0893>] __generic_file_aio_write+0x3d3/0x4f0
[  288.508476]  [<c01d0a0e>] generic_file_aio_write+0x5e/0xc0
[  288.508476]  [<c060cd8a>] gfs2_file_aio_write+0x6a/0x90
[  288.508476]  [<c018eafb>] ? trace_hardirqs_off+0xb/0x10
[  288.508476]  [<c01837ad>] ? cpu_clock+0x6d/0x70
[  288.508476]  [<c02005ac>] do_sync_write+0x9c/0xd0
[  288.508476]  [<c01e3e76>] ? might_fault+0x46/0xa0
[  288.508476]  [<c01e3e76>] ? might_fault+0x46/0xa0
[  288.508476]  [<c02007fa>] vfs_write+0x9a/0x160
[  288.508476]  [<c0200510>] ? do_sync_write+0x0/0xd0
[  288.508476]  [<c012f1d5>] ? sysenter_exit+0xf/0x1a
[  288.508476]  [<c0201073>] sys_pwrite64+0x63/0x80
[  288.508476]  [<c012f19c>] sysenter_do_call+0x12/0x3c
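
The trace above is a classic AB-BA inversion: per the recorded dependency chain, the quota-hold path (gfs2_quota_hold -> qdsb_get -> bh_get) acquires sd_quota_mutex and then takes ip->i_rw_mutex inside gfs2_block_map, while the write path takes ip->i_rw_mutex in gfs2_block_map and then tries sd_quota_mutex inside do_qc. Below is a minimal userspace reduction of the same shape using plain pthreads. This is illustrative only, not GFS2 code; the file name and all identifiers are invented for the sketch:

/*
 * Two threads acquiring two mutexes in opposite orders -- the shape of
 * the dependency cycle lockdep reports above.  Compile with:
 *   cc -pthread abba.c
 * Most runs complete; an unlucky interleaving deadlocks, which is the
 * hazard lockdep flags without needing the deadlock to actually hit.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t quota_mutex = PTHREAD_MUTEX_INITIALIZER; /* stands in for sd_quota_mutex */
static pthread_mutex_t inode_rw    = PTHREAD_MUTEX_INITIALIZER; /* stands in for ip->i_rw_mutex */

/* mirrors gfs2_quota_hold -> qdsb_get -> bh_get -> gfs2_block_map: quota lock, then inode lock */
static void *quota_hold_path(void *unused)
{
        (void)unused;
        pthread_mutex_lock(&quota_mutex);
        pthread_mutex_lock(&inode_rw);
        pthread_mutex_unlock(&inode_rw);
        pthread_mutex_unlock(&quota_mutex);
        return NULL;
}

/* mirrors gfs2_block_map -> gfs2_alloc_block -> do_qc: inode lock, then quota lock (inverted) */
static void *write_path(void *unused)
{
        (void)unused;
        pthread_mutex_lock(&inode_rw);
        pthread_mutex_lock(&quota_mutex);
        pthread_mutex_unlock(&quota_mutex);
        pthread_mutex_unlock(&inode_rw);
        return NULL;
}

int main(void)
{
        pthread_t t1, t2;
        pthread_create(&t1, NULL, quota_hold_path, NULL);
        pthread_create(&t2, NULL, write_path, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        puts("no deadlock this run, but the inverted ordering is still there");
        return 0;
}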
Comment 1 Bug Zapper 2010-07-30 07:33:07 EDT
This bug appears to have been reported against 'rawhide' during the Fedora 14 development cycle.
Changing version to '14'.

More information and reason for this action is here:
http://fedoraproject.org/wiki/BugZappers/HouseKeeping
Comment 2 Josh Boyer 2011-08-30 14:29:06 EDT
Did this get fixed?
Comment 3 chellwig@redhat.com 2011-08-31 01:35:50 EDT
No idea.  Steve just asked me to file this bug when I ran into it.  I do not regularly test gfs2.
Comment 4 Josh Boyer 2011-08-31 07:10:14 EDT
Steve?
Comment 5 Steve Whitehouse 2011-08-31 07:25:44 EDT
It has not been fixed. The issue is that this triggers on the initial mount of a gfs2 filesystem on any kernel with lockdep turned on, and since the first report disables lockdep, we never discover whether gfs2 would trigger any further lockdep issues.

I did send an upstream patch a little while back which added an annotation to turn off the messages for that particular lock. It was rejected because the real problem was the way in which that lock was being used, and the suggestion was to fix that usage instead.

However, the lock in question is used in a rather complicated way, so I didn't get around to coding up a patch to resolve the lock ordering issue.

See this thread:

http://markmail.org/message/hjupiiktn2blrihz
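
For reference, the annotation approach looks roughly like the fragment below. This is a generic sketch of lockdep subclass annotation in do_qc(), not the actual rejected patch (see the linked thread for that); it assumes the usual fs/gfs2 context where sdp is the struct gfs2_sbd:

#include <linux/mutex.h>    /* mutex_lock_nested(), SINGLE_DEPTH_NESTING */

/*
 * Sketch only: mutex_lock_nested() places this acquisition in a
 * separate lockdep subclass, so the existing sd_quota_mutex ->
 * i_rw_mutex edge and the new i_rw_mutex -> sd_quota_mutex edge are
 * no longer treated as one cycle.  It silences the report without
 * changing the actual lock ordering.
 */
mutex_lock_nested(&sdp->sd_quota_mutex, SINGLE_DEPTH_NESTING);
/* ... record the quota change ... */
mutex_unlock(&sdp->sd_quota_mutex);

As the upstream review pointed out, this hides the inversion rather than removing it, which is why restructuring the use of sd_quota_mutex is the preferred resolution.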
Comment 6 Josh Boyer 2011-08-31 07:46:04 EDT
Move this to rawhide then.
Comment 8 Fedora End Of Life 2013-04-03 15:05:03 EDT
This bug appears to have been reported against 'rawhide' during the Fedora 19 development cycle.
Changing version to '19'.

(As we did not run this process for some time, it may also affect pre-Fedora 19 development
cycle bugs. We are very sorry. It will help us with cleanup during the Fedora 19 End Of Life. Thank you.)

More information and reason for this action is here:
https://fedoraproject.org/wiki/BugZappers/HouseKeeping/Fedora19
Comment 9 Justin M. Forbes 2013-04-05 12:27:52 EDT
Is this still a problem with 3.9 based F19 kernels?
Comment 10 Steve Whitehouse 2013-04-05 12:49:58 EDT
Yes, we'd have closed these bugs if they were fixed.
Comment 11 Josh Boyer 2013-09-18 16:30:55 EDT
*********** MASS BUG UPDATE **************

We apologize for the inconvenience.  There are a large number of bugs to go through, and several of them have gone stale.  Due to this, we are doing a mass bug update across all of the Fedora 19 kernel bugs.

Fedora 19 has now been rebased to 3.11.1-200.fc19.  Please test this kernel update and let us know if your issue has been resolved or if it is still present with the newer kernel.

If you experience different issues, please open a new bug report for those.
Comment 12 Justin M. Forbes 2014-01-03 17:07:49 EST
*********** MASS BUG UPDATE **************

We apologize for the inconvenience.  There are a large number of bugs to go through, and several of them have gone stale.  Due to this, we are doing a mass bug update across all of the Fedora 19 kernel bugs.

Fedora 19 has now been rebased to 3.12.6-200.fc19.  Please test this kernel update (or newer) and let us know if your issue has been resolved or if it is still present with the newer kernel.

If you have moved on to Fedora 20, and are still experiencing this issue, please change the version to Fedora 20.

If you experience different issues, please open a new bug report for those.
