Bug 737949 - DeviceError: ('device has not been created', 'vda4')
Summary: DeviceError: ('device has not been created', 'vda4')
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: anaconda
Version: 6.2
Hardware: x86_64
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Anaconda Maintenance Team
QA Contact: Release Test Team
URL:
Whiteboard: abrt_hash:2732c3f98b61eabecc814885387...
Depends On:
Blocks: 691780
 
Reported: 2011-09-13 13:18 UTC by Alexander Todorov
Modified: 2013-02-06 09:05 UTC
CC List: 0 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2011-10-10 13:48:14 UTC
Target Upstream Version:
Embargoed:


Attachments
File: anaconda-tb-I9X_v4 (597.55 KB, text/plain)
2011-09-13 13:18 UTC, Alexander Todorov

Description Alexander Todorov 2011-09-13 13:18:05 UTC
abrt version: 2.0.5
executable:     /mnt/runtime/usr/bin/python
hashmarkername: anaconda
kernel:         2.6.32-195.el6.x86_64
product:        Red Hat Enterprise Linux
reason:         DeviceError: ('device has not been created', 'vda4')
time:           Tue Sep 13 09:17:05 2011
version:        6.2

anaconda-tb-I9X_v4: Binary file, 611892 bytes

description:
:The following was filed automatically by anaconda:
:anaconda 13.21.138 exception report
:Traceback (most recent call first):
:  File "/usr/lib/anaconda/storage/devices.py", line 1398, in destroy
:    raise DeviceError("device has not been created", self.name)
:  File "/usr/lib/anaconda/storage/deviceaction.py", line 219, in execute
:    self.device.destroy()
:  File "/usr/lib/anaconda/storage/devicetree.py", line 713, in processActions
:    action.execute(intf=self.intf)
:  File "/usr/lib/anaconda/storage/__init__.py", line 344, in doIt
:    self.devicetree.processActions()
:  File "/usr/lib/anaconda/packages.py", line 110, in turnOnFilesystems
:    anaconda.id.storage.doIt()
:  File "/usr/lib/anaconda/dispatch.py", line 208, in moveStep
:    rc = stepFunc(self.anaconda)
:  File "/usr/lib/anaconda/dispatch.py", line 126, in gotoNext
:    self.moveStep()
:  File "/usr/lib/anaconda/gui.py", line 1390, in nextClicked
:    self.anaconda.dispatch.gotoNext()
:  File "/usr/lib/anaconda/gui.py", line 1528, in keyRelease
:    self.nextClicked()
:DeviceError: ('device has not been created', 'vda4')
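
The traceback shows the error coming from the guard at the top of the device's destroy() method in devices.py: calling destroy() on a device whose "exists" flag was never set raises this DeviceError. A minimal sketch of that pattern, with simplified, illustrative names (the real anaconda class carries far more state):

    class DeviceError(Exception):
        pass

    class StorageDevice(object):
        def __init__(self, name, exists=False):
            self.name = name
            self.exists = exists  # True only once the device exists on disk

        def destroy(self):
            # The guard seen at devices.py line 1398 in the traceback:
            # refuse to destroy a device that was never actually created.
            if not self.exists:
                raise DeviceError("device has not been created", self.name)
            # ... real code would tear down the on-disk device here ...
            self.exists = False

    # Destroying a partition that was never created on disk reproduces
    # the same message as this report:
    try:
        StorageDevice("vda4").destroy()
    except DeviceError as e:
        print("DeviceError: %s" % (e.args,))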

Comment 1 Alexander Todorov 2011-09-13 13:18:09 UTC
Created attachment 522928 [details]
File: anaconda-tb-I9X_v4

Comment 2 Alexander Todorov 2011-09-13 13:21:08 UTC
What I did: 

/boot - ext4 - force as primary
swap - force as primary
sw raid - force as primary
sw raid - force as primary
md0 - RAID0, PV - encrypt
lv_root - ext4 - no encryption, use all space

proceed with installation and get this traceback. 

vda4 should be the last sw raid member, which I forced as primary.

Comment 3 David Cantrell 2011-10-10 13:48:14 UTC
I just performed an installation of RHEL 6.2 Beta in a KVM guest on a RHEL 6.1 host using the steps you posted in comment #2.  I don't get a traceback and the partitions are set up as you describe.  However, vda4 is not the last sw raid member; it is the swap partition.  The partitioning code sorts the swap partition down to the end of the disk.  But the partitions are still created as you describe:

/boot is an ext4 primary partition at vda1
swap is a primary partition at vda4
vda2 and vda3 are primary partitions
md0 is created as a RAID 0 array of vda2 and vda3, encrypted, and LVM PV type
a volume group is created on the new LVM PV on md0
a single logical volume called lv_root is created on the volume group, mounted at /

I don't get a traceback and the only difference is that the swap partition ends up getting sorted down to vda4, but I don't think that's a problem.  All of the partitions are primary, as requested.
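
For context on the reordering described above: the partitioning code orders new partition requests before allocating them, which is why swap ends up at the end of the disk. A rough illustration of that kind of ordering (the field names and weights below are made up for the example; the real rules live in anaconda's partitioning code):

    # Hypothetical weights: higher-weight requests are allocated first,
    # so the swap request is pushed to the end of the disk (vda4 here).
    requests = [
        {"mount": "/boot", "type": "ext4",     "weight": 2000},
        {"mount": "swap",  "type": "swap",     "weight": -100},
        {"mount": None,    "type": "mdmember", "weight": 0},
        {"mount": None,    "type": "mdmember", "weight": 0},
    ]

    ordered = sorted(requests, key=lambda r: r["weight"], reverse=True)
    for index, req in enumerate(ordered, start=1):
        print("vda%d -> %s" % (index, req["mount"] or req["type"]))
    # vda1 -> /boot, vda2/vda3 -> RAID members, vda4 -> swap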

Comment 4 Alexander Todorov 2013-02-06 08:18:45 UTC
I hit this bug again today with the latest 6.4 while creating and removing partitions. I will try to reproduce it more reliably.

Comment 5 Alexander Todorov 2013-02-06 09:05:02 UTC
On a KVM guest with 3 disks: 

vda - 2GB
vdb - 10GB
vdc - 10GB


* Start with: 
vda1 - ext4, vda2 - swap
vdb1 - physical volume
vdc1 - physical volume
VG containing vdb1 and vdc1

* Delete the VG and vdb1

* On vdb create 4 ext4 partitions, DO NOT force as primary. Use mount points like /1, /2, /3, /4


* vdb4 is created as an extended partition, holding vdb5.


* Delete vdb3; at this point the previous vdb4/vdb5 becomes vdb3 (distinguishable by its mount point /4)

* Edit vdb3 - force as primary - it moves to the top of the partitions list in the UI (shown as vdb1, /4)

* Edit vdb2 (/1) and then vdb3 (/2) and force them as primary


* Now the order is vdb1 /1, vdb2 /2, vdb3 /4


* Create a new partition, mount it under /3, force as primary - it becomes vdb3 (/3), and the /4 partition shifts to vdb4.


* On vdb, attempt to create a 5th primary partition - anaconda tells you this is not possible.

* Return to the disk layout screen, select vdb and delete all partitions by pressing Alt+D


* On vdb create RAID partition

* Edit vdc1 and format as software raid

* Create md0  - RAID0 array mounted under /boot

* Delete md0

* Edit vdb1 and vdc1 and change the partitioning type from software raid to physical volume.

* Create a VG and a LV for /boot, press F12 (Next)

* Anaconda tells you bootable partitions can't be on LVM

* Delete /boot

* Create new LV for /

* Edit vda1 and vda2 and reformat for /boot and swap

* Proceed with installation


I have the feeling that creating/editing primary partitions on vdb causes this bug, but I wasn't able to reproduce it. I tried several combinations of the steps above, starting with clean disks, non-clean disks, etc., but failed to reproduce it reliably.
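
One guess at the mechanism (not confirmed from the logs): the create/delete churn above could leave a destroy action queued for a partition that later edits have already removed or renumbered, so by the time processActions() runs the action, the target's "exists" flag is False and the guard from the original traceback fires. A sketch of that scenario with placeholder names:

    class DeviceError(Exception):
        pass

    class Partition(object):
        def __init__(self, name):
            self.name = name
            self.exists = True  # pretend it already exists on disk

        def destroy(self):
            if not self.exists:
                raise DeviceError("device has not been created", self.name)
            self.exists = False

    vdb4 = Partition("vdb4")
    # Two destroy actions for the same device end up queued, e.g. one from
    # deleting the partition and one left over from an earlier edit.
    actions = [vdb4.destroy, vdb4.destroy]

    try:
        for action in actions:  # analogous to the processActions() loop
            action()
    except DeviceError as e:
        print("DeviceError: %s" % (e.args,))  # the second destroy hits the guard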

