Bug 763916 (GLUSTER-2184)

Summary: Gluster nodes not reporting correct status of drives over 2TB size
Product: [Retired] GlusterSP
Reporter: Jim <mcintyrejamest>
Component: core
Assignee: Balamurugan Arumugam <bala>
Status: CLOSED WONTFIX
Severity: medium
Priority: high
Version: 3.1.0
CC: platform, shireesh
Target Milestone: 3.2
Target Release: ---
Hardware: x86_64
OS: Linux
Doc Type: Bug Fix

Description Jim 2010-12-02 19:07:33 UTC
I have one primary and two nodes, each with a Jetstor SCSI/SATA RAID unit connected to it. Each Jetstor has 16 1TB drives set up as RAID 6 / 64-bit LBA / 32-bit striping with 4 hot spares, which gives a 10TB volume per Jetstor. The Jetstor connected to the primary initializes and formats correctly and reports a ready status. The ones connected to the nodes initialize but will not format: the progress bar reaches 100% and then the status reverts to failed. If I swap the Jetstors around, whichever unit is connected to the primary shows a ready status; the nodes never report a ready status.

Do the nodes have issues reporting drives larger than 2TB?
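
To compare what the primary and the nodes actually see, here is a minimal diagnostic sketch (the device name sdb is an assumption; substitute whatever the Jetstor LUN enumerates as on each machine, per /proc/partitions or dmesg). It prints the capacity the kernel reports for the device. If a node reports only about 2TB instead of about 10TB, the HBA or driver on that node is truncating the LUN; if it reports the full size, the 2TB suspicion more likely points at the partition label, since an msdos/MBR label cannot address beyond 2TiB.

#!/usr/bin/env python
# Diagnostic sketch: print the size the kernel reports for a block device.
# DEVICE is an assumption -- replace it with the name the Jetstor LUN gets
# on each machine.

DEVICE = "sdb"

def reported_size_bytes(dev):
    # /sys/block/<dev>/size is the device length in 512-byte sectors.
    with open("/sys/block/%s/size" % dev) as f:
        sectors = int(f.read().strip())
    return sectors * 512

if __name__ == "__main__":
    size = reported_size_bytes(DEVICE)
    print("%s: %d bytes (%.2f TB)" % (DEVICE, size, size / 1e12))

Running this on the primary and on each node and comparing the output should show quickly whether the nodes are seeing the full 10TB device.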