Red Hat Bugzilla – Bug 497676
hdd power management: palimpsest reports a false positive
Last modified: 2009-08-05 17:10:19 EDT
Description of problem:
I have 2x 750GB Seagate drives in JBOD on my system that I got out of a free NAS I won at an RH conference :). These drives shut themselves down after 15 minutes of inactivity. When this happens (actually soon after), palimpsest reports that the disks are failing because it doesn't know they're having a bit of a power nap.
Version-Release number of selected component (if applicable):
Fedora 11 Rawhide 26/04/09
Steps to Reproduce:
1. install drives that have firmware that hibernates them automatically
Expected results:
no annoying messages
I have been unable to find any way to disable the hibernate feature using 'hdparm -Z' or any other means. Either palimpsest needs to be made aware of this situation, or it needs to be able to disable the hibernate feature, or I need to figure out a way to poll the device every 10 minutes to stop it going to sleep.
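For the third option (polling the device so it never spins down), a minimal sketch like the following could work. The device path, the 10-minute interval, and the use of dd as the poll are all assumptions of mine, not anything the drive's firmware or the bug report specifies:

```shell
#!/bin/sh
# Hypothetical workaround sketch (not from the bug report): keep a disk
# awake by reading one sector at a fixed interval, so the firmware's
# spindown timer never expires. Device path and interval are assumptions.

keep_awake_once() {
    # Read a single 512-byte sector from the device. On a real disk,
    # consider adding iflag=direct so the read bypasses the page cache
    # and actually reaches the drive.
    dd if="$1" of=/dev/null bs=512 count=1 2>/dev/null
}

# Main loop; guarded by RUN_KEEPALIVE=1 so the function above can be
# sourced or tested without blocking forever.
if [ "${RUN_KEEPALIVE:-0}" = "1" ]; then
    DEV="${1:-/dev/sdc}"
    while :; do
        keep_awake_once "$DEV"
        sleep 600   # 10 minutes, comfortably under the 15-minute timeout
    done
fi
```

Run as e.g. `RUN_KEEPALIVE=1 ./keepawake.sh /dev/sdc &` (hypothetical filename) to poll sdc every 10 minutes in the background.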
Please try updating to the latest rawhide and see if the problem persists.
libatasmart and DeviceKit-disks have seen several updates since April 9, to fix issues in this area.
Sounds like a libatasmart issue or a problem with your disks. Please include the output of 'skdump /dev/sdX' when
1. the disks are not sleeping
2. the disks are sleeping
Also let us know what version of libatasmart you are using (rpm -q libatasmart).
Also, what exactly does palimpsest say about the disks when you run it when this happens?
How are the devices connected to the machine? SATA/eSATA? USB?
Created attachment 341546 [details]
tgz of screenshots and command output
I managed to screw up the data gathering from sdc, but the sdd output was fine. As there were so many files, I zipped it all up for convenience.
Both sdc and sdd are connected to SATA ports 2/3; my Raptors (sda/sdb) are on ports 0 and 1 respectively.
[root@x64 ~]# rpm -q libatasmart
When I "refresh" the disk through the palimpsest util, the drive checks out as OK and the error messages go away. Both disks seem to 'fail' simultaneously.
Thanks for the info - please avoid zipping up attachments; it makes it much harder to read the bug report. Separate attachments, please!
According to this info libatasmart says things are peachy so I'm wondering if it's a DeviceKit-disks bug. Please attach the output of 'devkit-disks --show-info /dev/sdX' for the disk as well. Thanks!
Created attachment 341719 [details]
output of devkit-disks sdc
Created attachment 341721 [details]
output of devkit-disks sdd
Sorry about the zippage; it seemed like a good idea at the time. Anyway, here's the requested output.
Cheers for looking into this.
Thanks for the info in comment 6 and comment 7. Hmm, this looks fine, was there a "disk is failing" icon / notification when you captured this? If not, please try again when that icon is being shown.
Ideally, please include both 'skdump /dev/sdX' and 'devkit-disks --show-info /dev/sdX' info captured when the "disk is failing" icon is shown.
Yes, I'd need the 'skdump' output when this problem happens. Also attach the dump that 'skdump --save=mysmartdata /dev/sdX' produces.
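To make gathering this easier, a small helper along these lines could capture everything in one go while the failure icon is showing. The helper and its output filenames are my invention; only the skdump and devkit-disks invocations themselves are the ones requested above:

```shell
#!/bin/sh
# Hypothetical data-gathering helper (not part of the bug report): capture
# the requested SMART output for a device while the "disk is failing"
# icon is visible. Output filenames are assumptions.

capture_smart() {
    dev="$1"
    name=$(basename "$dev")
    # Human-readable SMART dump from libatasmart
    skdump "$dev" > "skdump-$name.txt" 2>&1
    # Raw SMART blob, as requested above
    skdump --save="smartblob-$name" "$dev"
    # DeviceKit-disks' view of the same device
    devkit-disks --show-info "$dev" > "devkit-$name.txt" 2>&1
}
```

For example, run `capture_smart /dev/sdc` and `capture_smart /dev/sdd` while the icon is shown, then attach each resulting file separately (per the maintainers' request for unzipped attachments).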
This bug appears to have been reported against 'rawhide' during the Fedora 11 development cycle.
Changing version to '11'.
More information and the reason for this action are here:
Same here: ASUS 1000HE on Fedora 11. I get the error that "one or more disks are failing", and there is only one hard disk in this machine (160GB). I checked the disk using some tools I have here (I checked it outside Linux, sector by sector; not a single error).
Looking at fedoraforum, I see that I'm not the only one for whom this applet reports wrong info, so how about disabling this applet for now so people won't panic over nothing?
Hetz, your problem appears to be unrelated to the power management issue discussed here. Please post a separate bug report for your issue and include the SMART blob data generated by 'skdump --save=mysmartdata /dev/sdX'.
Phil, could you please provide the requested SMART data dumps, too?
Hmm, or actually, given that this bug has been in NEEDINFO since April, I'll now close this as INSUFFICIENT_DATA.