Bug 734440 - anaconda coredumps while detecting storage
Summary: anaconda coredumps while detecting storage
Keywords:
Status: CLOSED DUPLICATE of bug 728949
Alias: None
Product: Fedora
Classification: Fedora
Component: parted
Version: 16
Hardware: powerpc
OS: Unspecified
Priority: high
Severity: urgent
Target Milestone: ---
Assignee: Brian Lane
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks: F16Alphappc
 
Reported: 2011-08-30 12:11 UTC by Karsten Hopp
Modified: 2012-03-14 13:06 UTC
2 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2011-09-01 21:46:46 UTC


Attachments
gdb backtrace of the core file (31.60 KB, text/plain)
2011-08-30 12:13 UTC, Karsten Hopp
first few sectors of /dev/sda (17.00 KB, application/octet-stream)
2011-08-30 16:17 UTC, Karsten Hopp
first few sectors of /dev/sdb (17.00 KB, application/octet-stream)
2011-08-30 16:18 UTC, Karsten Hopp
anaconda.log (5.24 KB, text/plain)
2011-08-30 16:19 UTC, Karsten Hopp
program.log (2.14 KB, text/plain)
2011-08-30 16:20 UTC, Karsten Hopp
storage.log (9.30 KB, text/plain)
2011-08-30 16:21 UTC, Karsten Hopp
syslog (88.09 KB, text/plain)
2011-08-30 16:21 UTC, Karsten Hopp

Description Karsten Hopp 2011-08-30 12:11:51 UTC
Description of problem:
anaconda coredumps while detecting storage on a PPC system. This currently blocks any progress toward making F16 installable on PPC.

Version-Release number of selected component (if applicable):
anaconda-16.14.6-1.fc16
parted-3.0-2.fc16

How reproducible:
always

Steps to Reproduce:
1. Install a PPC machine with, e.g.,
http://ppc.koji.fedoraproject.org/scratch/karsten/iso/Fedora-20110825-ppc64-netinst.iso
2. observe the anaconda abort during storage detection
3. grab core file from /root
  
Additional info:

I can provide a machine for debugging the core file if necessary. If absolutely required, I can even provide a machine at the stage directly after the failure, but as that's the only machine I currently have, I'd prefer to avoid that.

Comment 1 Karsten Hopp 2011-08-30 12:13:50 UTC
Created attachment 520603 [details]
gdb backtrace of the core file

Comment 2 Brian Lane 2011-08-30 16:13:15 UTC
The first few sectors of the disk may help as well.

dd if=/dev/whatever of=disk.img bs=512 count=34

And the logs from /tmp/*log attached as individual files.

Comment 3 Karsten Hopp 2011-08-30 16:17:22 UTC
Created attachment 520634 [details]
first few sectors of /dev/sda

Comment 4 Karsten Hopp 2011-08-30 16:18:05 UTC
Created attachment 520635 [details]
first few sectors of /dev/sdb

Comment 5 Karsten Hopp 2011-08-30 16:19:42 UTC
Created attachment 520636 [details]
anaconda.log

Comment 6 Karsten Hopp 2011-08-30 16:20:58 UTC
Created attachment 520637 [details]
program.log

Comment 7 Karsten Hopp 2011-08-30 16:21:24 UTC
Created attachment 520638 [details]
storage.log

Comment 8 Karsten Hopp 2011-08-30 16:21:48 UTC
Created attachment 520639 [details]
syslog

Comment 9 Brian Lane 2011-09-01 21:46:46 UTC
Are one or both of these devices really short?

ped_geometry_read_alloc (geom=0x172589a8, buffer=0xfffe46dda28, offset=-8, count=1)

There's a bug in some of the probing code: it doesn't check the device length before calculating the offset, so on very small devices the offset ends up negative.
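The failure mode described above can be sketched as follows. This is a hypothetical illustration with made-up names, not parted's actual code; it assumes the prober computes the backup-header offset as "device length minus a fixed reserved area" without a length check:

```python
BACKUP_TABLE_SECTORS = 8  # hypothetical size of the reserved area at the end of the disk

def backup_header_offset(dev_length_sectors: int) -> int:
    # Buggy pattern: no check that the device is at least BACKUP_TABLE_SECTORS
    # long, so a tiny device yields a negative offset (cf. offset=-8 above).
    return dev_length_sectors - BACKUP_TABLE_SECTORS

def backup_header_offset_fixed(dev_length_sectors: int):
    # Fixed pattern: refuse to probe devices too small to hold the structure.
    if dev_length_sectors < BACKUP_TABLE_SECTORS:
        return None
    return dev_length_sectors - BACKUP_TABLE_SECTORS

print(backup_header_offset(0))        # negative offset, as in the backtrace
print(backup_header_offset_fixed(0))  # None: probe skipped safely
```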

*** This bug has been marked as a duplicate of bug 728949 ***

Comment 10 Karsten Hopp 2011-09-01 23:59:37 UTC
There are small partitions on this system, but no partition is as small as the 8 blocks that are mentioned in bug 728949:

bash-4.2# parted -l
Model: ATA Maxtor 6Y160M0 (scsi)
Disk /dev/sda: 164GB
Sector size (logical/physical): 512B/512B
Partition Table: mac

Number  Start   End     Size    File system  Name                  Flags
 1      512B    32.8kB  32.3kB               Apple
 2      32.8kB  1081kB  1049kB  hfs          untitled              boot
 3      134MB   14.9GB  14.8GB  hfs+         Apple_HFS_Untitled_1
 4      14.9GB  15.1GB  210MB   ext4         untitled
 5      15.1GB  164GB   149GB                untitled              lvm


Model: ATA SAMSUNG HD103SJ (scsi)
Disk /dev/sdb: 1000GB
Sector size (logical/physical): 512B/512B
Partition Table: mac

Number  Start   End     Size    File system  Name   Flags
 1      512B    32.8kB  32.3kB               Apple
 2      32.8kB  1000kB  968kB   ext3         hfs    boot
 3      1000kB  500MB   499MB                83
 4      500MB   50.0GB  49.5GB  ext3         83
 5      50.0GB  100GB   50.0GB  ext3         83
 6      100GB   1000GB  900GB   ext3         83


Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/vg_macg5-LogVol02: 124GB
Sector size (logical/physical): 512B/512B
Partition Table: loop

Number  Start  End    Size   File system  Flags
 1      0.00B  124GB  124GB  ext4


Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/vg_macg5-LogVol03: 10.5GB
Sector size (logical/physical): 512B/512B
Partition Table: loop

Number  Start  End     Size    File system  Flags
 1      0.00B  10.5GB  10.5GB  ext4


Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/vg_macg5-lv_swap: 4295MB
Sector size (logical/physical): 512B/512B
Partition Table: loop

Number  Start  End     Size    File system     Flags
 1      0.00B  4295MB  4295MB  linux-swap(v1)


Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/vg_macg5-lv_root: 10.5GB
Sector size (logical/physical): 512B/512B
Partition Table: loop

Number  Start  End     Size    File system  Flags
 1      0.00B  10.5GB  10.5GB  ext4

Comment 11 Karsten Hopp 2011-09-02 00:09:06 UTC
BTW, what's mangling all those strings in the DEBUG storage output:

'DEVLINKS': '/dev/scd0 /dev/disk/by-id/ata-IPNOEE_RVD-DWR_VD-R01D7_D .....
                                           ^^^^^^^^^^^^^^^^^^^^^^^

'ID_MODEL': 'IPNOEE_RVD-DWR_VD-R01D7'
             ^^^^^^^^^^^^^^^^^^^^^^^
..............


udev has it correct:
# udevadm info --query=all --name=/dev/sr0 | grep ID_MODEL=
E: ID_MODEL=DVD-RW_DVR-107D
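As a side observation (not confirmed in this bug): the mangled strings look exactly like an ATA IDENTIFY model string read with its 16-bit words un-byte-swapped, which would be plausible on big-endian PPC. A quick sketch, assuming the real model string is "PIONEER DVD-RW  DVR-107D" and that whitespace runs are collapsed to a single underscore, udev-style:

```python
import re

def swap16(s: str) -> str:
    """Swap each adjacent byte pair, as when ATA IDENTIFY words are read unswapped."""
    b = bytearray(s.encode("ascii"))
    for i in range(0, len(b) - 1, 2):
        b[i], b[i + 1] = b[i + 1], b[i]
    return b.decode("ascii")

model = "PIONEER DVD-RW  DVR-107D"  # assumed real model string (note double space)
garbled = re.sub(r"\s+", "_", swap16(model))
print(garbled)  # reproduces the 'IPNOEE_...' string seen in the DEBUG output
```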

