Bug 1330714 - docker-storage-setup frequently fails when lvm2 not initially installed in system
Summary: docker-storage-setup frequently fails when lvm2 not initially installed in system
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: docker-latest
Version: 7.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Lokesh Mandvekar
QA Contact: atomic-bugs@redhat.com
Docs Contact: Yoana Ruseva
URL:
Whiteboard:
Depends On: 1330290
Blocks: 1330706
 
Reported: 2016-04-26 18:45 UTC by Lokesh Mandvekar
Modified: 2016-05-12 14:56 UTC
CC List: 10 users

Fixed In Version: docker-latest-1.10.3-20.el7
Doc Type: Bug Fix
Doc Text:
Clone Of: 1330290
Environment:
Last Closed: 2016-05-12 14:56:18 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
System: Red Hat Product Errata
ID: RHEA-2016:1057
Private: 0
Priority: normal
Status: SHIPPED_LIVE
Summary: new packages: docker-latest
Last Updated: 2016-05-12 18:51:24 UTC

Comment 4 Lokesh Mandvekar 2016-05-04 19:37:07 UTC
Luwen,

The patch at https://github.com/projectatomic/docker-storage-setup/commit/df2af9439577cedc2c502512d887c8df10a33cbf should fix this, together with the change that replaces docker with docker-latest everywhere in docker-latest-storage-setup while keeping the local variable docker_devmapper_meta_dir as-is.

Vivek, please correct me if I'm wrong.
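For illustration only, the kind of rename described above might look like the following shell step; the file names and the sed invocation are assumptions for this sketch, not the actual packaging change:

# Hypothetical sketch of the rename in comment 4: substitute docker ->
# docker-latest throughout, then restore the one local variable that is
# meant to keep its original name.
sed -e 's/docker/docker-latest/g' \
    -e 's/docker-latest_devmapper_meta_dir/docker_devmapper_meta_dir/g' \
    docker-storage-setup > docker-latest-storage-setup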

Comment 5 Luwen Su 2016-05-05 03:17:02 UTC
(In reply to Lokesh Mandvekar from comment #4)

Thanks, the signature wiping now works fine for me; moving to VERIFIED.

The steps below are just for reference: run docker-latest-storage-setup twice; the second run triggers the fatal "Failed to wipe signatures on device ${dev}1" error and exits.

# cat /etc/sysconfig/docker-latest-storage-setup
WIPE_SIGNATURES=true
DEVS=vdb
VG=docker-latest

# docker-latest-storage-setup 
INFO: Volume group backing root filesystem could not be determined
INFO: Wipe Signatures is set to true. Any signatures on /dev/vdb will be wiped.
/dev/vdb: 2 bytes were erased at offset 0x00000438 (ext2): 53 ef
create_disk_partions
Checking that no-one is using this disk right now ...
OK

Disk /dev/vdb: 16644 cylinders, 16 heads, 63 sectors/track
sfdisk:  /dev/vdb: unrecognized partition table type

Old situation:
sfdisk: No partitions found

New situation:
Units: sectors of 512 bytes, counting from 0

   Device Boot    Start       End   #sectors  Id  System
/dev/vdb1          2048  16777215   16775168  8e  Linux LVM
/dev/vdb2             0         -          0   0  Empty
/dev/vdb3             0         -          0   0  Empty
/dev/vdb4             0         -          0   0  Empty
Warning: partition 1 does not start at a cylinder boundary
Warning: partition 1 does not end at a cylinder boundary
Warning: no primary partition is marked bootable (active)
This does not matter for LILO, but the DOS MBR will not boot this disk.
Successfully wrote the new partition table

Re-reading the partition table ...

If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
to zero the first 512 bytes:  dd if=/dev/zero of=/dev/foo7 bs=512 count=1
(See fdisk(8).)
wipefs executing.......
result....
  Physical volume "/dev/vdb1" successfully created
  Volume group "docker-latest" successfully created
  Rounding up size to full physical extent 12.00 MiB
  Logical volume "docker-latest-poolmeta" created.
  Logical volume "docker-latest-pool" created.
  WARNING: Converting logical volume docker-latest/docker-latest-pool and docker-latest/docker-latest-poolmeta to pool's data and metadata volumes.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted docker-latest/docker-latest-pool to thin pool.
  Logical volume "docker-latest-pool" changed.
[root@myregistrydomain ~]# docker-latest-storage-setup 
INFO: Volume group backing root filesystem could not be determined
INFO: Wipe Signatures is set to true. Any signatures on /dev/vdb will be wiped.
wipefs: error: /dev/vdb: probing initialization failed: Device or resource busy
ERROR: Failed to wipe signatures on device /dev/vdb

Comment 6 Vivek Goyal 2016-05-05 12:42:57 UTC
Ok, this issue is solved by the udevadm settle patch.

https://github.com/projectatomic/docker-storage-setup/commit/80648727909370d394b6e72af76e9c88fe8e1f4c

Also, I think WIPE_SIGNATURES should take effect only when the disk is not already part of a volume group. I need to look at the code more closely to see why it is hit again when you run docker storage setup a second time.
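As a rough illustration of the pattern behind the udevadm settle commit above (the device name and placement are assumptions here, not the literal patch):

# Assumed illustration, not the actual commit: wait for the udev event queue
# to drain after repartitioning, then wipe signatures on the new partition.
udevadm settle
wipefs -a /dev/vdb1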

Comment 7 Vivek Goyal 2016-05-05 13:02:22 UTC
Ok, found it. If DEVS=/dev/vdb is used, this problem does not happen.

Please use the full device path rather than just the disk name. I will fix it anyway, but using the bare name is not good practice.
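Restated as a config snippet, the workaround is to give DEVS the full device path; the other settings are just those from the reproduction in comment 5:

# /etc/sysconfig/docker-latest-storage-setup
WIPE_SIGNATURES=true
DEVS=/dev/vdb    # full device path, not the bare name "vdb"
VG=docker-latest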

Comment 8 Vivek Goyal 2016-05-05 14:02:05 UTC
Created a PR to fix the issue of wipefs being called on a disk that is already part of a volume group.

https://github.com/projectatomic/docker-storage-setup/pull/131
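A shell sketch of the idea behind that PR; the pvs-based check and the exact condition are illustrative guesses, not the code in the pull request:

# Illustrative only, not the actual PR: skip signature wiping if the disk
# (or one of its partitions) is already an LVM physical volume.
dev=/dev/vdb
if pvs --noheadings -o pv_name 2>/dev/null | grep -q "^ *${dev}"; then
  echo "INFO: ${dev} is already in use by LVM; skipping wipefs."
else
  wipefs -a "$dev"
fi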

Comment 9 Daniel Walsh 2016-05-05 14:19:25 UTC
We will need to live with this problem with DEVS=vdb until the next release, 7.2.5.

Comment 11 errata-xmlrpc 2016-05-12 14:56:18 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHEA-2016-1057.html

