Bug 151402 - divide error: 0000 -- in get_local_rgrp( ) ?
Summary: divide error: 0000 -- in get_local_rgrp( ) ?
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Cluster Suite
Classification: Retired
Component: gfs
Version: 3
Hardware: All Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Brian Stevens
QA Contact: GFS Bugs
URL:
Whiteboard:
Keywords:
Depends On:
Blocks:
 
Reported: 2005-03-17 17:08 UTC by Adam "mantis" Manthei
Modified: 2014-09-09 00:40 UTC (History)
0 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2010-10-29 21:49:23 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---



Description Adam "mantis" Manthei 2005-03-17 17:08:36 UTC
Description of problem:

divide error: 0000
autofs lock_gulm crc32 gfs lock_harness pool iscsi_sfnet e1000 microcode
keybdev mousedev hid input usb-uhci usbcore ext3 jbd qla2300 sd_mod scsi_mod
CPU:    0
EIP:    0060:[<e02ac820>]    Not tainted
EFLAGS: 00010246

EIP is at recent_rgrp_add [gfs] 0x20 (2.4.21-31.EL/i686)
eax: 00000022   ebx: 00000000   ecx: e02c153c   edx: 00000000
esi: dd50cc00   edi: dd7c7400   ebp: e02c1000   esp: d54c1d7c
ds: 0068   es: 0068   ss: 0068
Process fsstress (pid: 10650, stackpage=d54c1000)
Stack: dd7c7400 dd50cc00 00000001 00000007 e02acad8 dd7c7400 dbb94c00 00000001
       dbb94d00 00000003 00000000 dbb94c00 dd50cc00 e02c1000 00000000 dbb94c00
       d5f73204 dbb94cd4 e02acbe8 d5f73204 dbb94cd4 00000001 00000000 e02c1000
Call Trace:   [<e02acad8>] get_local_rgrp [gfs] 0x1e8 (0xd54c1d8c)
[<e02acbe8>] gfs_inplace_reserv [gfs] 0x78 (0xd54c1dc4)
[<e029a484>] gfs_createi [gfs] 0x214 (0xd54c1dec)
[<e029a187>] gfs_lookupi [gfs] 0x487 (0xd54c1e14)
[<e02966c7>] gfs_glock_dq [gfs] 0xa7 (0xd54c1e2c)
[<e0294b0b>] gfs_init_holder [gfs] 0x3b (0xd54c1e40)
[<e027816b>] gfs_drevalidate [gfs] 0x10b (0xd54c1e5c)
[<e027ffaa>] gfs_create [gfs] 0x9a (0xd54c1e70)
[<e027fd60>] gfs_lookup [gfs] 0x0 (0xd54c1ebc)
[<c01623ab>] vfs_create [kernel] 0x9b (0xd54c1f08)
[<c01629dd>] open_namei [kernel] 0x58d (0xd54c1f28)
[<c0153993>] filp_open [kernel] 0x43 (0xd54c1f58)
[<c0153da3>] sys_open [kernel] 0x63 (0xd54c1f90)
[<c0153e6f>] sys_creat [kernel] 0x1f (0xd54c1fb0)

Code: f7 b5 68 27 10 00 8b 95 3c 05 00 00 39 ca 89 c6 74 12 8d 42

Kernel panic: Fatal exception


Version-Release number of selected component (if applicable):
GFS-6.0.2.8
GFS-modules-6.0.2.8-0
kernel-2.4.21-31.EL


How reproducible:
I can reproduce this within an hour.

Steps to Reproduce:
1. Create two filesystems on an iSCSI device (not sure if iSCSI matters) with 50
   32M journals (i.e. mkfs.gfs -j 50 -J 32 ... )
2. Mount both filesystems on 12 different machines
3. On each filesystem, run `fsstress -p 10 -d $filesystem`
4. wait for panic
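The steps above can be sketched as a script. This is a hedged sketch, not a verbatim repro: the device paths, the cluster/filesystem names, and the mount points are placeholders, and the arguments elided after `mkfs.gfs -j 50 -J 32 ...` in the report are filled in with illustrative lock_gulm values. By default it only prints the commands (DRYRUN=1); set DRYRUN=0 on a real 12-node gulm cluster to actually run them, and run the mount/fsstress half on each node.

```shell
#!/bin/sh
# Hypothetical reproduction sketch for the get_local_rgrp() divide error.
# DEV1/DEV2, cluster1:fs1/fs2, and /mnt/fs* are placeholders; the -p/-t
# arguments are illustrative (the report elides them with "...").
set -e

DEV1="${DEV1:-/dev/iscsi/disk1}"
DEV2="${DEV2:-/dev/iscsi/disk2}"

run() {
    # DRYRUN=1 (the default) prints each command instead of executing it.
    if [ "${DRYRUN:-1}" = "1" ]; then echo "$@"; else "$@"; fi
}

# Step 1: two filesystems, each with 50 journals of 32M
# (one journal per potential mounter; 12 nodes mount them below).
run mkfs.gfs -j 50 -J 32 -p lock_gulm -t cluster1:fs1 "$DEV1"
run mkfs.gfs -j 50 -J 32 -p lock_gulm -t cluster1:fs2 "$DEV2"

# Steps 2-3, on each of the 12 nodes: mount both filesystems and
# stress each with 10 fsstress processes.
run mount -t gfs "$DEV1" /mnt/fs1
run mount -t gfs "$DEV2" /mnt/fs2
run fsstress -p 10 -d /mnt/fs1 &
run fsstress -p 10 -d /mnt/fs2 &
wait

# Step 4: wait for the panic.
```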

Actual results:
panic

Expected results:
no panic

Additional info:
o I forgot to limit the number of users in fsstress at first, so there might
  be an issue with quota IDs, since fsstress picks random UIDs from 0
  to 65535

o I was occasionally trying to grab lock dumps from my three-node dedicated RLM
  gulm lock server.  On large lock spaces, this can take a few seconds

o I'm using iSCSI as the storage.  I don't know whether that has any effect on
  the setup or not.

Comment 1 Ken Preslan 2005-03-18 19:39:49 UTC
There is a fix for this.  It's just not in GFS-modules-6.0.2.8.  I don't know if it's made it into an RPM yet.

Comment 2 Ken Preslan 2005-04-20 19:34:00 UTC
This is in the RPMs now.



Comment 5 Lon Hohberger 2010-10-29 21:49:23 UTC
This bugzilla is reported to have been fixed years ago.

