(In reply to Lokesh Mandvekar from comment #4)
> Luwen,
>
> The patch at
> https://github.com/projectatomic/docker-storage-setup/commit/
> df2af9439577cedc2c502512d887c8df10a33cbf should be fixing this along with
> the change that we replace docker with docker-latest everywhere in
> docker-latest-storage-setup except we retain the local variable
> docker_devmapper_meta_dir as-is.
>
> Vivek, please correct me if i'm wrong.
Thanks, the wipe signatures step now works fine for me; moving to VERIFIED.
The steps below are just for reference: run docker-latest-storage-setup twice. Previously, the second run would trigger the fatal error "Failed to wipe signatures on device ${dev}1" and exit.
# cat /etc/sysconfig/docker-latest-storage-setup
WIPE_SIGNATURES=true
DEVS=vdb
VG=docker-latest
# docker-latest-storage-setup
INFO: Volume group backing root filesystem could not be determined
INFO: Wipe Signatures is set to true. Any signatures on /dev/vdb will be wiped.
/dev/vdb: 2 bytes were erased at offset 0x00000438 (ext2): 53 ef
create_disk_partions
Checking that no-one is using this disk right now ...
OK
Disk /dev/vdb: 16644 cylinders, 16 heads, 63 sectors/track
sfdisk: /dev/vdb: unrecognized partition table type
Old situation:
sfdisk: No partitions found
New situation:
Units: sectors of 512 bytes, counting from 0
Device Boot Start End #sectors Id System
/dev/vdb1 2048 16777215 16775168 8e Linux LVM
/dev/vdb2 0 - 0 0 Empty
/dev/vdb3 0 - 0 0 Empty
/dev/vdb4 0 - 0 0 Empty
Warning: partition 1 does not start at a cylinder boundary
Warning: partition 1 does not end at a cylinder boundary
Warning: no primary partition is marked bootable (active)
This does not matter for LILO, but the DOS MBR will not boot this disk.
Successfully wrote the new partition table
Re-reading the partition table ...
If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
to zero the first 512 bytes: dd if=/dev/zero of=/dev/foo7 bs=512 count=1
(See fdisk(8).)
wipefs executing.......
result....
Physical volume "/dev/vdb1" successfully created
Volume group "docker-latest" successfully created
Rounding up size to full physical extent 12.00 MiB
Logical volume "docker-latest-poolmeta" created.
Logical volume "docker-latest-pool" created.
WARNING: Converting logical volume docker-latest/docker-latest-pool and docker-latest/docker-latest-poolmeta to pool's data and metadata volumes.
THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
Converted docker-latest/docker-latest-pool to thin pool.
Logical volume "docker-latest-pool" changed.
[root@myregistrydomain ~]# docker-latest-storage-setup
INFO: Volume group backing root filesystem could not be determined
INFO: Wipe Signatures is set to true. Any signatures on /dev/vdb will be wiped.
wipefs: error: /dev/vdb: probing initialization failed: Device or resource busy
ERROR: Failed to wipe signatures on device /dev/vdb
OK, found it: if DEVS=/dev/vdb is used, this problem does not happen.
Please use the full device path instead of just the disk name. I will fix it in the script anyway, but using the bare name is not good practice.
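To illustrate the kind of fix described above, here is a minimal sketch of how a setup script might canonicalize DEVS entries so that a bare disk name like "vdb" and a full path like "/dev/vdb" are treated identically before wipefs is invoked. This is a hypothetical helper for illustration only, not the actual docker-storage-setup code:

```shell
# Hypothetical helper: normalize a DEVS entry to a full /dev path.
# "vdb" becomes "/dev/vdb"; entries that already start with "/" pass through.
canonicalize_dev() {
  local dev="$1"
  case "$dev" in
    /*) echo "$dev" ;;        # already a full path, keep as-is
    *)  echo "/dev/$dev" ;;   # bare disk name, prepend /dev/
  esac
}

canonicalize_dev vdb        # prints /dev/vdb
canonicalize_dev /dev/vdb   # prints /dev/vdb
```

With a normalization step like this, the script's later checks (for example, whether the device was already partitioned on a previous run) would match regardless of how the user wrote the DEVS value.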
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory, and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.
https://rhn.redhat.com/errata/RHEA-2016-1057.html