Bug 869482

Summary: Anaconda recognizes a ZFS ZPOOL (EFI Labeled) as having 'FREE SPACE' (POTENTIAL DATA LOSS)

Product: Fedora
Reporter: Reartes Guillermo <rtguille>
Component: anaconda
Assignee: Anaconda Maintenance Team <anaconda-maint-list>
Status: CLOSED WONTFIX
QA Contact: Fedora Extras Quality Assurance <extras-qa>
Severity: high
Priority: unspecified
Version: 18
CC: anaconda-maint-list, awilliam, bugzilla, g.kaviyarasu, jonathan, robatino, tflink, vanmeeuwen+fedora
Keywords: CommonBugs
Hardware: x86_64
OS: Linux
Whiteboard: RejectedBlocker https://fedoraproject.org/wiki/Common_F18_bugs#solaris-zfs-freespace
Doc Type: Bug Fix
Type: Bug
Last Closed: 2014-02-05 12:41:37 UTC
Attachments:
  storage.log (sdc is already wiped by anaconda)
  screenshot of 3.
  screenshot of 6.
  anaconda.log
  program.log
  storage.log
  syslog file
  screenshot #1
  screenshot #2
  screenshot #3
  Example#2: anaconda.log
  Example#2: program.log
  Example#2: storage.log
  screenshot #4
  screenshot #5
  Example#3: anaconda.log
  Example#3: program.log
  Example#3: storage.log
  result of "# dd if=/dev/sdc of=~/gptprimary.bin count=34"
  result of "# dd if=/dev/sdc of=~/gptbackup.bin skip=9215965"

Description Reartes Guillermo 2012-10-24 04:16:43 UTC
Created attachment 632535 [details]
storage.log (sdc is already wiped by anaconda)

Description of problem:

When selecting sdc or sdd, each disk is treated as having 'plenty of free space' and can be used with AUTOMATIC PARTITIONING. According to the gpt disklabel, there is no such free space in this scenario. Each disk is a member of a ZPOOL.

Worse, if I select MANUAL PARTITIONING, anaconda shows no preexisting partitions for those disks. That is not good...

Installing with automatic partitioning on sdc results in an 'IndexError: tuple index out of range' error (already reported), so the installation process does not complete.

Booting Solaris, zpool shows the pool as degraded, and the affected disk has 3 Linux partitions (/boot, swap and /). So the ZPOOL was damaged (in this case).

Version-Release number of selected component (if applicable):
F18b TC6

How reproducible:
always

Steps to Reproduce:
1. Do either automatic partitioning or manual partitioning, selecting sdc or sdd
  
Actual results: 

POTENTIAL DATA LOSS, if installing with ZFS ZPOOLs attached (EFI Labeled).
Since disks containing a ZPOOL with an EFI Label are considered as having free space, F18 will happily wipe them.
Also, nothing is shown in manual partitioning when selecting those disks.

Expected results:
Fedora should offer the option to delete or preserve.

Additional info:

The guest has the following disk setup when starting F18:

sda  unpartitioned
sdb  msdos disklabel, VTOC (bootdisk)  
sdc  gpt disklabel, EFI Label, contains a ZPOOL (111kb free space)
sdd  gpt disklabel, EFI Label, contains a ZPOOL (111kb free space)


$ cat solx86-tst3_gdisk-l_sda.out 
GPT fdisk (gdisk) version 0.8.4

Partition table scan:
  MBR: not present
  BSD: not present
  APM: not present
  GPT: not present

Creating new GPT entries.
Disk /dev/sda: 9216000 sectors, 4.4 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 1A483F23-6E74-4728-B641-136F1F81863B
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 9215966
Partitions will be aligned on 2048-sector boundaries
Total free space is 9215933 sectors (4.4 GiB)

Number  Start (sector)    End (sector)  Size       Code  Name


$ cat solx86-tst3_gdisk-l_sdb.out 
GPT fdisk (gdisk) version 0.8.4

Partition table scan:
  MBR: MBR only
  BSD: not present
  APM: not present
  GPT: not present


***************************************************************
Found invalid GPT and valid MBR; converting MBR to GPT format.
***************************************************************

Disk /dev/sdb: 19456000 sectors, 9.3 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 1CAB5B37-162F-4B93-8CBC-DC5D4179390B
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 19455966
Partitions will be aligned on 1-sector boundaries
Total free space is 17283 sectors (8.4 MiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1           16065        19454714   9.3 GiB     BF00  Solaris root
   
$ cat solx86-tst3_gdisk-l_sdc.out 
GPT fdisk (gdisk) version 0.8.4

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sdc: 9216000 sectors, 4.4 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): E1D8DD0C-7929-9FC3-FEFA-8486E38D4149
Partition table holds up to 9 entries
First usable sector is 34, last usable sector is 9215102
Partitions will be aligned on 1-sector boundaries
Total free space is 222 sectors (111.0 KiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1             256         9198718   4.4 GiB     BF01  zfs
   9         9198719         9215102   8.0 MiB     BF07
   
   
$ cat solx86-tst3_gdisk-l_sdd.out 
GPT fdisk (gdisk) version 0.8.4

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sdd: 9216000 sectors, 4.4 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 7D05098C-9B0E-47EC-F6C7-9B42D1990C3C
Partition table holds up to 9 entries
First usable sector is 34, last usable sector is 9215102
Partitions will be aligned on 1-sector boundaries
Total free space is 222 sectors (111.0 KiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1             256         9198718   4.4 GiB     BF01  zfs
   9         9198719         9215102   8.0 MiB     BF07

Comment 1 Reartes Guillermo 2012-11-04 17:52:29 UTC
Tried with F18b TC7 and there was a partial improvement in the handling of ZFS ZPOOLs with EFI LABEL. 

1 - For Automatic Partitioning:

If selecting each disk containing the ZFS ZPOOL (EFI LABEL) individually,
it was recognized as 'having free space' in STORAGE: INSTALLATION
DESTINATION, but when returning to the MAIN HUB, STORAGE displayed
'Error checking storage configuration'. So at least the data will
not be wiped by accident.

However, if one selects multiple disks containing a ZFS ZPOOL (EFI
LABEL), they are recognized as having free space and the 'begin install'
button is UNLOCKED, so one can still lose data.

2 - For Manual Partitioning:

Each individually selected disk containing the ZFS ZPOOL (EFI
LABEL) was recognized as having free space in STORAGE: INSTALLATION
DESTINATION; in MANUAL PARTITIONING, no preexisting partitions were shown.
Luckily, it is not possible to use "click here to create them...":
the red banner complains "new lv is too large...", so it is not
possible to wipe the data like that.
One can create the partitions manually, but when one returns to the
MAIN HUB, STORAGE displays 'Error checking storage configuration'.
So at least the data will not be wiped.

However, if one selects multiple disks containing a ZFS ZPOOL (EFI
LABEL), no preexisting partitions are shown.
"Click here to create them..." does create proposed partitions,
and if one returns to the MAIN HUB, 'begin installation' is UNLOCKED.
So one can still lose data.

Comment 2 Reartes Guillermo 2012-11-25 21:31:15 UTC
Tried with F18b RC1 and there was another partial improvement (and some worsening) in the handling of ZFS ZPOOLs with EFI LABEL.


1 - For AUTOMATIC PARTITIONING:

If selecting each disk (which contains the ZPOOL) individually, anaconda detected that there is something. Anaconda presents the dialog to 'preserve|shrink|delete'. This is an improvement.

If one selects 'shrink' (one should not be able to) or 'delete', anaconda currently accepts it, but later in the MAIN HUB it refuses to accept the partitions. So still no data loss.

However, if one selects multiple disks containing a ZFS ZPOOL (EFI LABEL), they are recognized as having free space and the 'begin install' button is UNLOCKED, so one can still lose data (no change since the last time I tested). It is still possible for data loss to happen in this way.

If one selects multiple disks as in the previous paragraph and, instead of launching the installation, enters STORAGE again and selects only one disk, anaconda presents a dialog without any action (no preserve, shrink, delete); the only option is 'cancel'. Hitting 'cancel' and entering STORAGE again fixes it until it happens again.

2 - For MANUAL PARTITIONING:

A- Each disk containing the ZFS ZPOOL (EFI LABEL), selected INDIVIDUALLY, was recognized as NOT having free space. That is ok.

But in STORAGE: INSTALLATION DESTINATION, in MANUAL PARTITIONING, no preexisting partitions were shown.

Luckily, it is not possible to use "click here to create them...": the red banner complains "new lv is too large...", so it is not possible to wipe the data like that.

One can create the partitions manually, but when one returns to the MAIN HUB, the 'Begin Installation' button is UNLOCKED. This is WORSE than before; now data loss might happen.


B- If one selects MULTIPLE disks containing a ZFS ZPOOL (EFI LABEL), no preexisting partitions are shown.

"Click here to create them..." does create proposed partitions, and if one returns to the MAIN HUB, 'begin installation' is UNLOCKED. So one can still lose data.

Comment 3 David Lehman 2012-12-11 20:26:03 UTC
Please attach logs using any anaconda newer than 18.19-1. Thanks.

Comment 4 Reartes Guillermo 2012-12-12 21:20:13 UTC
New Try:

0. Boot F18 smoke5 (18.37)

1. Enter STORAGE: INSTALLATION DESTINATION

2. Select a single disk that is part of a ZPOOL and click 'continue'

3. The 'reclaim space' dialog is shown, click 'cancel' to return to the 'main hub'

4. Enter STORAGE: INSTALLATION DESTINATION (AGAIN)

5. Add another disk that is also part of a ZPOOL and click 'continue' (now with 2 disks)

6. The 'installation options' dialog is shown, and it shows that there is free space.

7. Click 'continue' and anaconda will return to the 'main hub' with 'automatic partitioning selected' and the 'begin installation' button UNLOCKED.

I will attach screenshots of steps 3 and 6 and the logs of the actions performed above.

Comment 5 Reartes Guillermo 2012-12-12 21:20:58 UTC
Created attachment 662640 [details]
screenshot of 3.

Comment 6 Reartes Guillermo 2012-12-12 21:21:38 UTC
Created attachment 662641 [details]
screenshot of 6.

Comment 7 Reartes Guillermo 2012-12-12 21:22:34 UTC
Created attachment 662642 [details]
anaconda.log

Comment 8 Reartes Guillermo 2012-12-12 21:23:05 UTC
Created attachment 662643 [details]
program.log

Comment 9 Reartes Guillermo 2012-12-12 21:23:44 UTC
Created attachment 662644 [details]
storage.log

Comment 10 Reartes Guillermo 2012-12-12 21:24:38 UTC
Created attachment 662645 [details]
syslog file

Comment 11 Reartes Guillermo 2013-01-04 14:28:22 UTC
Still an issue with 18.37.8 (TC4)

Comment 12 Chris Murphy 2013-01-07 23:29:13 UTC
Why was this never proposed as a blocker? I'm going to test this now.

Comment 13 Chris Murphy 2013-01-08 00:19:17 UTC
Anaconda 18.37.10-1 

I have a 3TB disk. 1TB = ext4, 1MB=BIOS Boot, 2TB=FreeBSD ZFS. Parted recognizes all three. 

Installation Options reports 983KB Free space. 1TB reclaimable. 2TB reclaimable by deleting. Reclaim Space sees the FreeBSD ZFS as "Unknown". And in Manual Partitioning this partition shows up under Unknown as "sda" type Unknown. The ext4 and BIOS Boot are recognized as such.

So what's the problem? Multidisk ZFS?

Comment 14 Chris Murphy 2013-01-08 00:54:01 UTC
Two 3TB disks, a 1TB ZFS partition on one, a 2TB ZFS partition on the other. Both combined in one zpool.

Anaconda still sees just 1GB free space. Both ZFS partitions are seen as Unknown, whether in guided autopart's Reclaim Disk Space or in Manual Partitioning.

I'm not understanding the problem. I went through the comment 4 steps: select one disk, back out, select the other disk in addition; I still don't get any free space.

Comment 15 Reartes Guillermo 2013-01-08 00:56:52 UTC
The issue is that if you select a single disk which is part of a zpool (EFI Labeled), anaconda acts correctly. It will be necessary to 'delete' it before continuing.

But if one selects two or more disks which are part of a zpool (EFI labeled), anaconda will just say "there is free space", and that is not ok. That is in fact how it found free space and produced bug 888293 comment 11.

It should not matter if I select 1, 2, or n disks; if they contain a zpool (EFI Labeled), one must always reclaim space.

@Chris Murphy
I do not believe that you will ever hit the problem with your setup, because you are not using an "EFI Labeled zpool" but zfs on a slice/partition. That is working ok.

Review the description of the bug report, specifically the gdisk output of the EFI-labeled zpool disks: you will see "Partition table holds up to 9 entries" (instead of the usual 128). It is not just "any" GPT disklabel. Anaconda cannot use that disklabel; anaconda must replace it with another GPT disklabel. I believe that this confuses anaconda.
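
For illustration, a minimal Python sketch (not part of anaconda; assumes a 512-byte logical sector size, as in the gdisk output above) that reads the entry-count field directly from the primary GPT header; on one of these Solaris 10 disks it should print 9 rather than the usual 128:

import struct

SECTOR = 512  # logical sector size, per the gdisk output above

# LBA 0 is the protective MBR; the primary GPT header lives at LBA 1.
with open("/dev/sdc", "rb") as disk:
    disk.seek(1 * SECTOR)
    header = disk.read(SECTOR)

assert header[0:8] == b"EFI PART", "no GPT signature at LBA 1"

# Number of partition entries: little-endian uint32 at byte
# offset 80 (0x50) of the header, per the UEFI spec.
(num_entries,) = struct.unpack_from("<I", header, 80)
print("partition table holds up to", num_entries, "entries")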

Comment 16 Chris Murphy 2013-01-08 01:23:45 UTC
OK I get it. So it's yet another partition scheme used in the Solaris world. Color me surprised. And it's not weirdly/confusingly named either.

oldUI brought up a big dialog whenever such a disk was encountered saying essentially "I have no idea what's on this disk, can I discard all of the data on it?" and it's identified by a bunch of numbers, model and/or serial if I remember right.

Windows' installer, I think, will see such a disk as unallocated. No further warning that you're about to nuke something possibly important. OS X's installer only shows volumes, never devices; so since the disk has neither a supported partition map nor a supported volume, it simply isn't visible. You have to use a different application to intentionally blow that disk away to use it.

So I think it's a valid question for the anaconda team how they want to handle this, considering the present paradigm is "show all physical local disks" regardless of what's on them. Present behavior does sorta invite data loss in these obscure cases compared to oldUI, seems to me.

Comment 17 Reartes Guillermo 2013-01-08 01:29:40 UTC
No, it is a GPT disklabel, but one that is unusable by anaconda.

A Solaris EFI Label is a GPT disklabel, but not all GPT disklabels are Solaris EFI Labels. It is not another 'partition scheme' at all.

Comment 18 Chris Murphy 2013-01-08 01:59:08 UTC
What does parted have to say about it? If it's a completely conforming GPT, it should be recognized. Can you post the two files from:

dd if=/dev/sdc of=~/gptprimary.bin count=34
dd if=/dev/sdc of=~/gptbackup.bin skip=9215102
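
(For reference: count=34 grabs LBA 0, the protective MBR, plus LBA 1, the header, plus the 32 sectors a standard 128-entry array occupies. A minimal Python sketch, assuming the gptprimary.bin dump requested above, to sanity-check the dump and see where the header itself says its backup copy lives:)

import struct

SECTOR = 512

# gptprimary.bin was taken with: dd if=/dev/sdc of=~/gptprimary.bin count=34
# so the GPT header (LBA 1) starts at byte offset 512 of the file.
with open("gptprimary.bin", "rb") as f:
    f.seek(1 * SECTOR)
    hdr = f.read(92)  # the defined header fields fit in the first 92 bytes

print("signature:", hdr[0:8])                       # expect b'EFI PART'
(backup_lba,) = struct.unpack_from("<Q", hdr, 32)   # offset 0x20
(last_usable,) = struct.unpack_from("<Q", hdr, 48)  # offset 0x30
print("backup header claimed at LBA:", backup_lba)
print("last usable LBA:", last_usable)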

Comment 19 Chris Murphy 2013-01-08 05:25:42 UTC
I can't reproduce this with a Solaris 11.1 formatted, EFI-labeled disk. gdisk shows me a 128-entry partition table, not a 9-entry one. The GUID is Solaris ZFS, as yours is.

Parted doesn't complain about either disk. And with neither a single disk nor two disks does anaconda ("guided" or Manual Partitioning) see either disk as having free space. It does see them as having Unknown partitions.

So far I can't reproduce this bug.

Comment 20 Reartes Guillermo 2013-01-08 07:53:07 UTC
Well, comment #19 shows that the issue is less general; that is good. I am currently testing with S10, not S11.

I will retry the issue with the RC1 media and upload current logs and screenshots.

Guest: SOLx86-TST3 (RAM: 4096 MB)
OS: Solaris 10

Disks:

* sda s/n: VM_SOLx86-TST3-001
Content: msdos disklabel with a solaris partition containing a solaris vtoc

* sdb s/n: VM_SOLx86-TST3-002
* sdc s/n: VM_SOLx86-TST3-003
Content: an EFI Labeled zpool (a mirror named testpool)

* sdd s/n: VM_SOLx86-TST3-004
* sde s/n: VM_SOLx86-TST3-005
* sdf s/n: VM_SOLx86-TST3-006
* sdg s/n: VM_SOLx86-TST3-007
Content: an EFI Labeled zpool (a raidz named rz5pool)

Comment 21 Reartes Guillermo 2013-01-08 07:53:57 UTC
Created attachment 674572 [details]
screenshot #1

Test #1:

0. Boot the RC1 media without any kernel parameter
1. Enter STORAGE: INSTALLATION DESTINATION
2. Select disk sda (s/n: VM_SOLx86-TST3-001)
3. Anaconda behaves correctly, see screenshot #1
4. Cancel, because I won't install over the OS.

Comment 22 Reartes Guillermo 2013-01-08 07:55:12 UTC
Created attachment 674573 [details]
screenshot #2

Test #2:

0. Boot the RC1 media (with or without the gpt kernel parameter)
1. Enter STORAGE: INSTALLATION DESTINATION
2. Select any disk, but no more than one at a time (do not select sda (s/n: VM_SOLx86-TST3-001))
For example, I selected sdb (s/n: VM_SOLx86-TST3-002)
3. Anaconda behaves correctly, see screenshots #2 and #3
4. Reclaim space, 'delete' the disk (screenshot #3)
5. Anaconda rejects the storage configuration (screenshot #4)

Comment 23 Reartes Guillermo 2013-01-08 07:55:48 UTC
Created attachment 674574 [details]
screenshot #3

Comment 24 Reartes Guillermo 2013-01-08 07:56:47 UTC
Created attachment 674576 [details]
Example#2: anaconda.log

Comment 25 Reartes Guillermo 2013-01-08 07:57:16 UTC
Created attachment 674577 [details]
Example#2: program.log

Comment 26 Reartes Guillermo 2013-01-08 07:58:04 UTC
Created attachment 674578 [details]
Example#2: storage.log

Comment 27 Reartes Guillermo 2013-01-08 07:59:03 UTC
Created attachment 674579 [details]
screenshot #4

Comment 28 Reartes Guillermo 2013-01-08 08:00:07 UTC
Created attachment 674580 [details]
screenshot #5

Test #3:

0. Boot the RC1 media without any kernel parameter
1. Enter STORAGE: INSTALLATION DESTINATION
2. Select 2 disks (do not select sda (s/n: VM_SOLx86-TST3-001))
For example, I selected sdb (s/n: VM_SOLx86-TST3-002) and sde (s/n: VM_SOLx86-TST3-005)
3. Anaconda behaves incorrectly: it says I have free space, see screenshot #5
4. Anaconda will accept the storage configuration; it will be possible to start installing, but it will crash.

Comment 29 Reartes Guillermo 2013-01-08 08:00:59 UTC
Created attachment 674581 [details]
Example#3: anaconda.log

Comment 30 Reartes Guillermo 2013-01-08 08:02:08 UTC
Created attachment 674582 [details]
Example#3: program.log

Comment 31 Reartes Guillermo 2013-01-08 08:02:54 UTC
Created attachment 674583 [details]
Example#3: storage.log

Comment 32 Reartes Guillermo 2013-01-08 08:07:36 UTC
Created attachment 674585 [details]
result of "# dd if=/dev/sdc of=~/gptprimary.bin count=34"

> What does parted have to say about it? If it's a completely 
> conforming GPT, it should be recognized. Can you post the
> two files from:

The disk has been reformatted since then; this is the current GPT disklabel:

$ cat gdisk-l-sdc.out 
GPT fdisk (gdisk) version 0.8.5

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sdc: 9216000 sectors, 4.4 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 699E9F17-C966-7EED-DC6F-A10ECA889BF5
Partition table holds up to 9 entries
First usable sector is 34, last usable sector is 9215965
Partitions will be aligned on 2-sector boundaries
Total free space is 222 sectors (111.0 KiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1             256         9199581   4.4 GiB     BF01  zfs
   9         9199582         9215965   8.0 MiB     BF07

I used skip=9215965 instead of the old 9215102, since the last usable sector changed after reformatting:

# dd if=/dev/sdc of=~/gptprimary.bin count=34
# dd if=/dev/sdc of=~/gptbackup.bin skip=9215965

Parted gives a lot of errors, unlike gdisk. I have found these relevant entries
regarding parted errors and the 9-entry partition table:

(Possibly Broken GPT from Solaris):
http://lists.gnu.org/archive/html/bug-parted/2012-01/msg00033.html
* This is about the 9-entry table; there are commits mentioned to handle this.

(Minimum size of GPT partition table header):
http://lists.gnu.org/archive/html/bug-parted/2012-09/msg00007.html

Comment 33 Reartes Guillermo 2013-01-08 08:08:33 UTC
Created attachment 674586 [details]
result of "# dd if=/dev/sdc of=~/gptbackup.bin skip=9215965"

Comment 34 Reartes Guillermo 2013-01-08 08:12:38 UTC
In Example #2 from comment #22, anaconda rejects the storage configuration; that is actually bad. (This corresponds to comment #27, screenshot #4.)

Comment 35 Reartes Guillermo 2013-01-08 08:19:24 UTC
(ZFS Support in Parted)
http://lists.gnu.org/archive/html/bug-parted/2012-12/msg00003.html

Comment 36 Chris Murphy 2013-01-09 01:25:13 UTC
The header size of your GPT is not 92 bytes like all other GPTs I've encountered. If I've got it correct, the four bytes 00h 02h 00h 00h in little endian mean 512. The spec says the header can be 92 bytes up to, but not larger than, the sector size. So 512 makes sense, whereas 2 bytes doesn't. But this might make anaconda hiccup; I'm not sure what it's parsing for.

The ZFS partition and GUID are valid and in a reasonable location; however, it's where all other disks have their EFI System partition. So I wonder if anaconda looks for an EFI System partition, since that's required for booting according to the UEFI spec, but this isn't UEFI hardware you're booting, presumably. So it's a gray area where a spec isn't going to help much.
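
(To make the decoding concrete, a quick Python sketch; the bytes are the ones quoted above, and GPT stores the header-size field as a little-endian uint32 at byte offset 12 of the header:)

import struct

raw = bytes([0x00, 0x02, 0x00, 0x00])       # the four header-size bytes quoted above
(header_size,) = struct.unpack("<I", raw)   # GPT fields are little-endian
print(header_size)                          # 512, versus the usual 92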

There's more on header size in this thread:
http://lists.gnu.org/archive/html/bug-parted/2012-09/msg00007.html

In any case the Solaris 11.1 GPT doesn't behave with anaconda in the way the Solaris 10 GPT does. So I wonder if Oracle decided to do more to conform rather than doing their own kinda odd thing.

Comment 37 Chris Murphy 2013-01-09 01:31:18 UTC
SUMMARY of this 36-comment thread:

A Solaris 10 partitioned and ZFS-formatted disk can appear in anaconda as free space. While the GPT on the disk is uncommon, it still appears to be a valid GPT, and it defines (effectively) no free space.

Given that the UI doesn't help the user identify a disk the user may not want to choose for any use/modification, subsequently identifying the disk as containing free space is a problem.

Proposing as blocker, Fedora 18 Beta Criterion #9: "The installer must be able to complete an installation using automatic partitioning to a validly-formatted disk with sufficient empty space, using the empty space and installing a bootloader but leaving the pre-existing partitions and data untouched."

The installer would allow pre-existing partitions and data to be overwritten.

This very well could be an edge case, but I think it's better for more experienced people to take a look at this and make a decision than to just let it slip under the radar.

Comment 38 Adam Williamson 2013-01-09 01:34:31 UTC
I've been following it along, and I'm highly inclined to -1, CommonBugs it. I dunno if we can block for every exotic storage format at this point.

Comment 39 Tim Flink 2013-01-09 02:05:05 UTC
-1/-1 for similar reasons Adam stated - this is too much of a corner case to be fixing so late.

Comment 40 Kevin Fenzi 2013-01-09 02:31:46 UTC
I agree with Adam. -1 blocker, document in common bugs.

Comment 41 Dennis Gilmore 2013-01-09 03:13:57 UTC
After reading through this, I'm inclined to -1 blocker, document in common bugs.

Comment 42 Adam Williamson 2013-01-09 06:39:49 UTC
So far that's -4, setting to RejectedBlocker.

Comment 43 Fedora End Of Life 2013-12-21 09:11:08 UTC
This message is a reminder that Fedora 18 is nearing its end of life.
Approximately 4 (four) weeks from now Fedora will stop maintaining
and issuing updates for Fedora 18. It is Fedora's policy to close all
bug reports from releases that are no longer maintained. At that time
this bug will be closed as WONTFIX if it remains open with a Fedora 
'version' of '18'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version' 
to a later Fedora version prior to Fedora 18's end of life.

Thank you for reporting this issue and we are sorry that we may not be 
able to fix it before Fedora 18 is end of life. If you would still like 
to see this bug fixed and are able to reproduce it against a later version 
of Fedora, you are encouraged to change the 'version' to a later Fedora 
version prior to Fedora 18's end of life.

Although we aim to fix as many bugs as possible during every release's 
lifetime, sometimes those efforts are overtaken by events. Often a 
more recent Fedora release includes newer upstream software that fixes 
bugs or makes them obsolete.

Comment 44 Adam Williamson 2013-12-21 19:28:36 UTC
Has anyone checked this with 19 or 20?

Comment 45 Reartes Guillermo 2013-12-21 20:10:45 UTC
I will check it later, in more detail.

Comment 46 Fedora End Of Life 2014-02-05 12:41:41 UTC
Fedora 18 changed to end-of-life (EOL) status on 2014-01-14. Fedora 18 is
no longer maintained, which means that it will not receive any further
security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of
Fedora please feel free to reopen this bug against that version. If you
are unable to reopen this bug, please file a new report against the
current release. If you experience problems, please add a comment to this
bug.

Thank you for reporting this bug and we are sorry it could not be fixed.