Bug 1119045 - vgscan 'Parse error at byte 1587 (line 101): unexpected token Error parsing metadata for VG pve.'
Keywords:
Status: CLOSED EOL
Alias: None
Product: Fedora
Classification: Fedora
Component: lvm2
Version: 20
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: LVM and device-mapper development team
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-07-13 14:48 UTC by colin
Modified: 2015-06-30 01:04 UTC
CC List: 12 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-06-30 01:04:41 UTC
Type: Bug
Embargoed:


Attachments
Strace output for vgscan that fails (65.70 KB, text/plain)
2014-07-13 14:48 UTC, colin
Strace output for lvdisplay that fails (62.76 KB, text/plain)
2014-07-13 14:52 UTC, colin
first 1Meg of /dev/sda1 (1000.00 KB, application/octet-stream)
2014-07-14 12:38 UTC, colin
LVM config - snipped from attachment dd_dev_sda1.bin 0000:2000 to 0000:2640 (1.58 KB, text/plain)
2014-07-14 18:29 UTC, colin

Description colin 2014-07-13 14:48:13 UTC
Created attachment 917628 [details]
Strace output for vgscan that fails

Description of problem:
Unable to mount an LVM partition.

Version-Release number of selected component (if applicable):

# pvdisplay --version
  LVM version:     2.02.106(2) (2014-04-10)
  Library version: 1.02.85 (2014-04-10)
  Driver version:  4.27.0

# vgdisplay --version
  LVM version:     2.02.106(2) (2014-04-10)
  Library version: 1.02.85 (2014-04-10)
  Driver version:  4.27.0

How reproducible:


Steps to Reproduce:
1. I have a SATA drive with two partitions: one ext3 and one LVM.
2. I cannot access or examine the LVM partition due to an LVM parse error.

Actual results:
# vgdisplay 
  Parse error at byte 1587 (line 101): unexpected token
  Error parsing metadata for VG pve.
  Skipping volume group pve
  Internal error: Volume Group pve was not unlocked

# pvdisplay 
  Parse error at byte 1587 (line 101): unexpected token
  Error parsing metadata for VG pve.
  Skipping volume group pve
  Internal error: Volume Group pve was not unlocked

# pvscan 
  PV /dev/sda2   VG pve   lvm2 [233.26 GiB / 15.99 GiB free]
  Total: 1 [233.26 GiB] / in use: 1 [233.26 GiB] / in no VG: 0 [0   ]

# vgscan 
  Reading all physical volumes.  This may take a while...
  Parse error at byte 1587 (line 101): unexpected token
  Error parsing metadata for VG pve.
  Skipping volume group pve
  Internal error: Volume Group pve was not unlocked


Expected results:

 Should be able to mount the LVM partition and examine the data.

Additional info:

 The drive was taken out of a Proxmox VE 3.1 server.
 Proxmox VE 3.x is based on Debian 7.x (Wheezy).

 LVM was installed by the Proxmox installer.
 This is a distro compatible with mainline Debian Stable, so it is not unreasonable to expect that Fedora should be able to read the data partition.

# fdisk -l /dev/sda                

Disk /dev/sda: 233.8 GiB, 251000193024 bytes, 490234752 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x000ae428

Device    Boot     Start       End    Blocks  Id System
/dev/sda1 *         2048   1048575    523264  83 Linux
/dev/sda2        1048576 490233855 244592640  8e Linux LVM

straces attached:
1) strace vgscan 2> strace_vgscan.err
2) strace lvdisplay 2> strace_lvdisplay.err

Comment 1 colin 2014-07-13 14:52:15 UTC
Created attachment 917629 [details]
Strace output for lvdisplay that fails

Error output from:
#strace lvdisplay 2> strace_lvdisplay.err

Comment 2 colin 2014-07-13 14:58:10 UTC
Comment on attachment 917629 [details]
Strace output for lvdisplay that fails


I can see that the error output line:
'write(2, "Parse error at byte 1587 (line 1"..., 53Parse error at byte 1587 (line 101): unexpected token) = 53'

is created at line 865.

Comment 3 colin 2014-07-13 15:00:16 UTC
Comment on attachment 917628 [details]
Strace output for vgscan that fails

In strace_vgscan.err

I can see that the error output line:
'write(2, "Parse error at byte 1587 (line 1"..., 53Parse error at byte 1587 (line 101): unexpected token) = 53'

is created at line 901
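
For reference, the line can be located directly in a saved trace with something like the following (a sketch, using the filename from the commands above):

$ grep -n 'Parse error' strace_vgscan.err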

Comment 4 Bryn M. Reeves 2014-07-14 12:00:24 UTC
The tools are complaining that an illegal token (character, word or other element) appeared in the LVM metadata read from the device.

An strace just records the system calls made by a process, so it's not very helpful for understanding a problem like this, where we're reading unexpected data from the devices (as a first cut, the output of '<lvmtool> -vvv' is usually a better starting point).

To understand what's happening here it would help to see the metadata from the devices; you can either grab this using 'dd' (~1M or so from the start of each PV) or use the '-m' option to lvmdump:

  $ sudo lvmdump -m
  [sudo] password for bmr: 
  Creating dump directory: /root/lvmdump-localhost.localdomain-20140714115540
   
  Gathering LVM & device-mapper version info...
  Gathering dmsetup info...
  Gathering process info...
  Gathering console messages...
  Gathering /etc/lvm info...
  Gathering /dev listing...
  Gathering /sys/block listing...
  Gathering LVM metadata from Physical Volumes...
    /dev/sda2
  Creating report tarball in /root/lvmdump-localhost.localdomain-20140714115540.tgz...

Bear in mind that, depending on the previous content of the disks, it's possible that other data will be present in the metadata captures; use the 'private' flag when attaching the tarball if you'd like to restrict access to it.
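
For the 'dd' route, a minimal sketch (the device and output filename here are examples; repeat per PV):

  # dd if=/dev/sda2 of=pv_metadata.bin bs=512 count=2048

That captures the first 1 MiB of the PV, which is normally enough to cover the label and the metadata area.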

Comment 5 colin 2014-07-14 12:38:05 UTC
Created attachment 917782 [details]
first 1Meg of /dev/sda1

Output from: # dd bs=512 count=2000 if=/dev/sda2 of=dd_dev_sda1.bin

Comment 6 colin 2014-07-14 12:45:24 UTC
Thank you for your rapid and informative reply, Bryn.

I can add a few more data points today.

1) I tried to examine the LVM partition with gparted, but it barfed with a similar error.

[root@#localhost ~]# yum info gparted
Loaded plugins: langpacks, refresh-packagekit
Installed Packages
Name        : gparted
Arch        : x86_64
Version     : 0.18.0
Release     : 1.fc20
Size        : 6.1 M
Repo        : installed
From repo   : updates

2) So I booted with gparted live iso image:
gparted-live-0.18.0-3-amd64.iso
That worked.

3) I booted with Centos Live iso:
CentOS-7.0-1406-x86_64-KdeLive
That worked.

4) As you suggested I ran 'lvmdump -m'
Output pasted below:

[root@#localhost ~]# lvmdump -m
/sbin/lvmdump: line 97: test: too many arguments
Creating dump directory: /root/lvmdump-#localhost.localdomain
#localhost
K8.localdomain-20140714124404
/sbin/lvmdump: line 104: /root/lvmdump-#localhost.localdomain
#localhost
K8.localdomain-20140714124404/lvmdump.log: No such file or directory
 
Gathering LVM & device-mapper version info...
/sbin/lvmdump: line 104: /root/lvmdump-#localhost.localdomain
#localhost
K8.localdomain-20140714124404/lvmdump.log: No such file or directory
/sbin/lvmdump: line 183: /root/lvmdump-#localhost.localdomain
#localhost
K8.localdomain-20140714124404/versions: No such file or directory
/sbin/lvmdump: line 184: /root/lvmdump-#localhost.localdomain
#localhost
K8.localdomain-20140714124404/versions: No such file or directory
/sbin/lvmdump: line 185: /root/lvmdump-#localhost.localdomain
#localhost
K8.localdomain-20140714124404/versions: No such file or directory
/sbin/lvmdump: line 186: /root/lvmdump-#localhost.localdomain
#localhost
K8.localdomain-20140714124404/versions: No such file or directory
/sbin/lvmdump: line 187: /root/lvmdump-#localhost.localdomain
#localhost
K8.localdomain-20140714124404/versions: No such file or directory
/sbin/lvmdump: line 188: /root/lvmdump-#localhost.localdomain
#localhost
K8.localdomain-20140714124404/versions: No such file or directory
/sbin/lvmdump: line 189: /root/lvmdump-#localhost.localdomain
#localhost
K8.localdomain-20140714124404/versions: No such file or directory
/sbin/lvmdump: line 190: /root/lvmdump-#localhost.localdomain
#localhost
K8.localdomain-20140714124404/versions: No such file or directory
Gathering dmsetup info...
/sbin/lvmdump: line 104: /root/lvmdump-#localhost.localdomain
#localhost
K8.localdomain-20140714124404/lvmdump.log: No such file or directory
/sbin/lvmdump: line 108: /root/lvmdump-#localhost.localdomain
#localhost
K8.localdomain-20140714124404/lvmdump.log: No such file or directory
/sbin/lvmdump: line 109: /root/lvmdump-#localhost.localdomain
#localhost
K8.localdomain-20140714124404/dmsetup_info: No such file or directory
/sbin/lvmdump: line 108: /root/lvmdump-#localhost.localdomain
#localhost
K8.localdomain-20140714124404/lvmdump.log: No such file or directory
/sbin/lvmdump: line 109: /root/lvmdump-#localhost.localdomain
#localhost
K8.localdomain-20140714124404/dmsetup_table: No such file or directory
/sbin/lvmdump: line 108: /root/lvmdump-#localhost.localdomain
#localhost
K8.localdomain-20140714124404/lvmdump.log: No such file or directory
/sbin/lvmdump: line 109: /root/lvmdump-#localhost.localdomain
#localhost
K8.localdomain-20140714124404/dmsetup_status: No such file or directory
/sbin/lvmdump: line 108: /root/lvmdump-#localhost.localdomain
#localhost
K8.localdomain-20140714124404/lvmdump.log: No such file or directory
/sbin/lvmdump: line 109: /root/lvmdump-#localhost.localdomain
#localhost
K8.localdomain-20140714124404/lvmdump.log: No such file or directory
/sbin/lvmdump: line 111: /root/lvmdump-#localhost.localdomain
#localhost
K8.localdomain-20140714124404/dmsetup_ls_tree: No such file or directory
Gathering process info...
/sbin/lvmdump: line 104: /root/lvmdump-#localhost.localdomain
#localhost
K8.localdomain-20140714124404/lvmdump.log: No such file or directory
/sbin/lvmdump: line 108: /root/lvmdump-#localhost.localdomain
#localhost
K8.localdomain-20140714124404/lvmdump.log: No such file or directory
/sbin/lvmdump: line 109: /root/lvmdump-#localhost.localdomain
#localhost
K8.localdomain-20140714124404/ps_info: No such file or directory
Gathering console messages...
/sbin/lvmdump: line 104: /root/lvmdump-#localhost.localdomain
#localhost
K8.localdomain-20140714124404/lvmdump.log: No such file or directory
/sbin/lvmdump: line 108: /root/lvmdump-#localhost.localdomain
#localhost
K8.localdomain-20140714124404/lvmdump.log: No such file or directory
/sbin/lvmdump: line 109: /root/lvmdump-#localhost.localdomain
#localhost
K8.localdomain-20140714124404/messages: No such file or directory
Gathering /etc/lvm info...
/sbin/lvmdump: line 104: /root/lvmdump-#localhost.localdomain
#localhost
K8.localdomain-20140714124404/lvmdump.log: No such file or directory
/sbin/lvmdump: line 108: /root/lvmdump-#localhost.localdomain
#localhost
K8.localdomain-20140714124404/lvmdump.log: No such file or directory
/sbin/lvmdump: line 109: /root/lvmdump-#localhost.localdomain
#localhost
K8.localdomain-20140714124404/etc_lvm_listing: No such file or directory
/sbin/lvmdump: line 108: /root/lvmdump-#localhost.localdomain
#localhost
K8.localdomain-20140714124404/lvmdump.log: No such file or directory
/sbin/lvmdump: line 109: /root/lvmdump-#localhost.localdomain
#localhost
K8.localdomain-20140714124404/lvmdump.log: No such file or directory
/sbin/lvmdump: line 108: /root/lvmdump-#localhost.localdomain
#localhost
K8.localdomain-20140714124404/lvmdump.log: No such file or directory
/sbin/lvmdump: line 109: /root/lvmdump-#localhost.localdomain
#localhost
K8.localdomain-20140714124404/lvmdump.log: No such file or directory
/sbin/lvmdump: line 108: /root/lvmdump-#localhost.localdomain
#localhost
K8.localdomain-20140714124404/lvmdump.log: No such file or directory
/sbin/lvmdump: line 109: /root/lvmdump-#localhost.localdomain
#localhost
K8.localdomain-20140714124404/lvmdump.log: No such file or directory
Gathering /dev listing...
/sbin/lvmdump: line 104: /root/lvmdump-#localhost.localdomain
#localhost
K8.localdomain-20140714124404/lvmdump.log: No such file or directory
/sbin/lvmdump: line 108: /root/lvmdump-#localhost.localdomain
#localhost
K8.localdomain-20140714124404/lvmdump.log: No such file or directory
/sbin/lvmdump: line 109: /root/lvmdump-#localhost.localdomain
#localhost
K8.localdomain-20140714124404/dev_listing: No such file or directory
Gathering /sys/block listing...
/sbin/lvmdump: line 104: /root/lvmdump-#localhost.localdomain
#localhost
K8.localdomain-20140714124404/lvmdump.log: No such file or directory
/sbin/lvmdump: line 108: /root/lvmdump-#localhost.localdomain
#localhost
K8.localdomain-20140714124404/lvmdump.log: No such file or directory
/sbin/lvmdump: line 109: /root/lvmdump-#localhost.localdomain
#localhost
K8.localdomain-20140714124404/sysblock_listing: No such file or directory
/sbin/lvmdump: line 108: /root/lvmdump-#localhost.localdomain
#localhost
K8.localdomain-20140714124404/lvmdump.log: No such file or directory
/sbin/lvmdump: line 109: /root/lvmdump-#localhost.localdomain
#localhost
K8.localdomain-20140714124404/sysblock_listing: No such file or directory
Gathering LVM metadata from Physical Volumes...
/sbin/lvmdump: line 104: /root/lvmdump-#localhost.localdomain
#localhost
K8.localdomain-20140714124404/lvmdump.log: No such file or directory
/sbin/lvmdump: line 108: /root/lvmdump-#localhost.localdomain
#localhost
K8.localdomain-20140714124404/lvmdump.log: No such file or directory
Creating report tarball in /root/lvmdump-#localhost.localdomain
#localhost
K8.localdomain-20140714124404.tgz...
[root@#localhost ~]#

Comment 7 colin 2014-07-14 12:58:26 UTC
I spent a while looking for the private attachment feature, but I stopped when I found this:
http://www.bugzilla.org/features/#private

Private Attachments and Comments

If you are in the "insider group," you can mark certain attachments and comments as private, and then they will be invisible to users who are not in the insider group.

Users will know that a comment was hidden (because the comment numbering will look something like "1, 2, 3, 5" to them), but they will not be able to access its contents.

If posting of private attachments can be enabled, then I am happy to do so.

Judging by the appearance of all those lvmdump warnings, the file K8.localdomain-20140714124404.tgz might not be very useful anyway.

Do let me know if there is anything else I can do to aid in diagnosing this bug.

many thanks,
Colin.

Comment 8 Bryn M. Reeves 2014-07-14 12:58:27 UTC
One problem here is that you have a '#' character in your hostname:

[root@#localhost ~]
      ^

It's not strictly illegal in a system hostname (although hash is forbidden in DNS names), but it can cause problems for programs that don't correctly handle the required escaping or sanitisation (which appears to include lvmdump).
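
Incidentally, the dump directory name above spans several lines, which suggests the hostname also contains embedded newlines; an unquoted expansion of a value like that is exactly what produces the 'test: too many arguments' errors. A minimal illustration (hypothetical variable, not lvmdump's actual code):

  $ dir="/root/dump-weird host.name"   # value containing whitespace
  $ test -d $dir                       # unquoted: test receives extra arguments
  bash: test: too many arguments
  $ test -d "$dir"                     # quoted: evaluated as a single path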

Comment 9 colin 2014-07-14 13:03:35 UTC
TBH, I didn't think there was originally a # in the hostname!

IIRC, the Proxmox server was (unimaginatively) named localhost.localdomain.

I plugged that hard drive into my workstation to retrieve the VM images stored on it, so it looks like something has gone wrong with the naming along the way.

Comment 10 Bryn M. Reeves 2014-07-14 13:17:46 UTC
You have a valid LVM2 label at offset 0x200:

0000200: 4c41 4245 4c4f 4e45 0100 0000 0000 0000  LABELONE........
0000210: 852e f3d8 2000 0000 4c56 4d32 2030 3031  .... ...LVM2 001
0000220: 4a46 6d32 3276 4870 3358 596b 6679 5970  JFm22vHp3XYkfyYp
0000230: 4361 414b 486c 3575 326d 687a 584c 6157  CaAKHl5u2mhzXLaW
0000240: 0000 c050 3a00 0000 0000 1000 0000 0000  ...P:...........
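
(For reference, a view like this can be reproduced from the attached dump with something like 'xxd -s 0x200 -l 80 dd_dev_sda1.bin'.)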

Followed by some garbage (looks like database table data - probably left over from a previous use of the disk, although if it was written later it may have overwritten volume group metadata):

[...]
00008e0: e2e2 e0ff e2e2 e0ff e1e1 dfff e1e1 dfff  ................
00008f0: dfdf ddff dcdc dbff d5d5 d3ff cbcb c9ff  ................
0000900: 9d9b a0ff 3e39 49ff 3f3b 4aff 5a54 6bff  ....>9I.?;J.ZTk.
0000910: 6d67 7fff 736c 84ff 6e67 80ff 6760 7aff  mg..sl..ng..g`z.
0000920: 6861 7aff 6a62 7bff 6b65 7dff 6d67 7eff  haz.jb{.ke}.mg~.
0000930: 7169 81ff 746d 83ff 716a 81ff 6b64 7bff  qi..tm..qj..kd{.
0000940: 6159 71ff 554d 64ff 4640 53fb 0d0d 1239  aYq.UMd.F@S....9
0000950: 4747 4700 4747 4700 4747 4700 4747 4700  GGG.GGG.GGG.GGG.
0000960: 4747 4700 4747 4700 4747 4700 4747 4700  GGG.GGG.GGG.GGG.
0000970: 4747 4700 4747 4700 4747 4700 4747 4700  GGG.GGG.GGG.GGG.
0000980: 4747 4700 4747 4700 4747 4700 4747 4700  GGG.GGG.GGG.GGG.
0000990: 4747 4700 4747 4700 4747 4700 4747 4700  GGG.GGG.GGG.GGG.
00009a0: 4747 4700 4747 4700 4747 4700 4747 4700  GGG.GGG.GGG.GGG.
00009b0: 4747 4700 4747 4700 4747 4700 4747 4700  GGG.GGG.GGG.GGG.
00009c0: 4747 4700 4747 4700 4747 4700 c7c8 c5e2  GGG.GGG.GGG.....
[...]

There is LVM2 volume group metadata present beginning at offset 0x1200

0001200: 7076 6520 7b0a 6964 203d 2022 6d67 6b6b  pve {.id = "mgkk
0001210: 564f 2d66 3442 4a2d 6a30 5637 2d51 4d38  VO-f4BJ-j0V7-QM8
0001220: 702d 4450 7666 2d4b 5679 542d 6263 3272  p-DPvf-KVyT-bc2r
0001230: 4a77 220a 7365 716e 6f20 3d20 310a 666f  Jw".seqno = 1.fo
[...]

LVM2 uses a circular buffer to store metadata, meaning several generations may be present on-disk. They are distinguished by the sequence number and by the active/backup metadata pointers in the header.

We seem to have up to seqno = 4 here:

$ xxd /tmp/dd_dev_sda1.bin | grep seqno
0001230: 4a77 220a 7365 716e 6f20 3d20 310a 666f  Jw".seqno = 1.fo
0001630: 4a77 220a 7365 716e 6f20 3d20 320a 666f  Jw".seqno = 2.fo
0001a30: 4a77 220a 7365 716e 6f20 3d20 330a 666f  Jw".seqno = 3.fo
0002030: 4a77 220a 7365 716e 6f20 3d20 340a 666f  Jw".seqno = 4.fo

This begins at offset 0x2000:

0002000: 7076 6520 7b0a 6964 203d 2022 6d67 6b6b  pve {.id = "mgkk
0002010: 564f 2d66 3442 4a2d 6a30 5637 2d51 4d38  VO-f4BJ-j0V7-QM8
0002020: 702d 4450 7666 2d4b 5679 542d 6263 3272  p-DPvf-KVyT-bc2r
0002030: 4a77 220a 7365 716e 6f20 3d20 340a 666f  Jw".seqno = 4.fo

Pulling that generation out and tidying the formatting up a bit shows the metadata is there and is consistent (brace structure is correct and the required sections are present):

  http://fpaste.org/117802/43368140/
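
For reference, that generation can be pulled out of the dump with something along these lines (offset 0x2000 as shown above; 1617 bytes is the on-disk metadata size later reported in the -vvvv trace, and the output filename is just an example):

  $ dd if=dd_dev_sda1.bin bs=1 skip=$((0x2000)) count=1617 2>/dev/null > pve_seqno4.txt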

It's not immediately clear why the parser is objecting to this; the line/character reference (line 101, byte 1587) is right at the end of the metadata for seqno 4. I'll try to reproduce this on a test system.

Do you know what version of lvm2 is included in proxmox?

Comment 11 Bryn M. Reeves 2014-07-14 13:18:50 UTC
"looks like database table data - probably left over from a previous use of the disk although if it was written later ... it may have overwritten volume group metadata".

Looking at the full output, though, this is not the case; the complete set of metadata is there, and it's just not clear why the parser is choking on it.

Comment 12 colin 2014-07-14 13:23:12 UTC
As it happens, just yesterday I did a fresh install of a new server with the same Proxmox version.

So I can run some diagnostics on it if you tell me what you would like.

thanks,
Colin.

Comment 13 Alasdair Kergon 2014-07-14 13:43:27 UTC
Can you upload the output of a command with -vvvv added to the command line?

Comment 14 colin 2014-07-14 13:48:03 UTC
Yup, can do.

If you paste here what you want me to execute,
I will run it when I get back from lunch. ;-)

Comment 15 Bryn M. Reeves 2014-07-14 15:03:52 UTC
Could you attach the output of the command 'vgscan -vvvv' please?

Comment 16 colin 2014-07-14 15:28:54 UTC
> Do you know what version of lvm2 is included in proxmox?

root@proxmox31:~# lvm version
  LVM version:     2.02.98(2) (2012-10-15)
  Library version: 1.02.77 (2012-10-15)
  Driver version:  4.23.6

> Could you attach the output of the command 'vgscan -vvvv' please?

[root@#localhost vgscan]# pwd
/home/colin/lvm_sda/vgscan
[root@#localhost vgscan]# ls -alrt
total 8
drwxr-xr-x 2 root  root  4096 Jul 14 16:26 .
drwxr-xr-x 9 colin colin 4096 Jul 14 16:26 ..
[root@#localhost vgscan]# vgscan -vvvv
#libdm-config.c:940       Setting activation/monitoring to 1
#lvmcmdline.c:1152         Processing: vgscan -vvvv
#lvmcmdline.c:1155         O_DIRECT will be used
#libdm-config.c:876       Setting global/locking_type to 1
#libdm-config.c:940       Setting global/wait_for_locks to 1
#locking/locking.c:244       File-based locking selected.
#libdm-config.c:845       Setting global/locking_dir to /run/lock/lvm
#libdm-config.c:940       Setting global/prioritise_write_locks to 1
#locking/file_locking.c:246       Locking /run/lock/lvm/P_global WB
#locking/file_locking.c:150         _do_flock /run/lock/lvm/P_global:aux WB
#locking/file_locking.c:150         _do_flock /run/lock/lvm/P_global WB
#locking/file_locking.c:60         _undo_flock /run/lock/lvm/P_global:aux
#cache/lvmcache.c:438         Metadata cache has no info for vgname: "#global"
#filters/filter-persistent.c:52     Wiping cache of LVM-capable devices
#device/dev-cache.c:336         /dev/sda: Added to device cache
#device/dev-cache.c:333         /dev/disk/by-id/ata-WDC_WD2500YS-01SHB1_WD-WCANY3816234: Aliased to /dev/sda in device cache
#device/dev-cache.c:333         /dev/disk/by-id/wwn-0x50014ee1ab3f09a1: Aliased to /dev/sda in device cache
#device/dev-cache.c:336         /dev/sda1: Added to device cache
#device/dev-cache.c:333         /dev/disk/by-id/ata-WDC_WD2500YS-01SHB1_WD-WCANY3816234-part1: Aliased to /dev/sda1 in device cache
#device/dev-cache.c:333         /dev/disk/by-id/wwn-0x50014ee1ab3f09a1-part1: Aliased to /dev/sda1 in device cache
#device/dev-cache.c:333         /dev/disk/by-uuid/8fb5afaa-a3b7-4a70-99b8-3055a280c0b4: Aliased to /dev/sda1 in device cache
#device/dev-cache.c:336         /dev/sda2: Added to device cache
#device/dev-cache.c:333         /dev/disk/by-id/ata-WDC_WD2500YS-01SHB1_WD-WCANY3816234-part2: Aliased to /dev/sda2 in device cache
#device/dev-cache.c:333         /dev/disk/by-id/lvm-pv-uuid-JFm22v-Hp3X-Ykfy-YpCa-AKHl-5u2m-hzXLaW: Aliased to /dev/sda2 in device cache
#device/dev-cache.c:333         /dev/disk/by-id/wwn-0x50014ee1ab3f09a1-part2: Aliased to /dev/sda2 in device cache
#device/dev-cache.c:336         /dev/sdb: Added to device cache
#device/dev-cache.c:333         /dev/disk/by-id/ata-WDC_WD20EFRX-68AX9N0_WD-WMC301572447: Aliased to /dev/sdb in device cache
#device/dev-cache.c:333         /dev/disk/by-id/wwn-0x50014ee058e09fe4: Aliased to /dev/sdb in device cache
#device/dev-cache.c:336         /dev/sdb1: Added to device cache
#device/dev-cache.c:333         /dev/disk/by-id/ata-WDC_WD20EFRX-68AX9N0_WD-WMC301572447-part1: Aliased to /dev/sdb1 in device cache
#device/dev-cache.c:333         /dev/disk/by-id/wwn-0x50014ee058e09fe4-part1: Aliased to /dev/sdb1 in device cache
#device/dev-cache.c:333         /dev/disk/by-uuid/5331b414-8cb1-480e-b0df-6f0e149cb165: Aliased to /dev/sdb1 in device cache
#device/dev-cache.c:336         /dev/sdb2: Added to device cache
#device/dev-cache.c:333         /dev/disk/by-id/ata-WDC_WD20EFRX-68AX9N0_WD-WMC301572447-part2: Aliased to /dev/sdb2 in device cache
#device/dev-cache.c:333         /dev/disk/by-id/wwn-0x50014ee058e09fe4-part2: Aliased to /dev/sdb2 in device cache
#device/dev-cache.c:336         /dev/sdb3: Added to device cache
#device/dev-cache.c:333         /dev/disk/by-id/ata-WDC_WD20EFRX-68AX9N0_WD-WMC301572447-part3: Aliased to /dev/sdb3 in device cache
#device/dev-cache.c:333         /dev/disk/by-id/wwn-0x50014ee058e09fe4-part3: Aliased to /dev/sdb3 in device cache
#device/dev-cache.c:333         /dev/disk/by-uuid/6d993902-83ee-49b2-be5a-a434f33e3169: Aliased to /dev/sdb3 in device cache
#device/dev-cache.c:336         /dev/sdb4: Added to device cache
#device/dev-cache.c:333         /dev/disk/by-id/ata-WDC_WD20EFRX-68AX9N0_WD-WMC301572447-part4: Aliased to /dev/sdb4 in device cache
#device/dev-cache.c:333         /dev/disk/by-id/wwn-0x50014ee058e09fe4-part4: Aliased to /dev/sdb4 in device cache
#device/dev-cache.c:336         /dev/sdb5: Added to device cache
#device/dev-cache.c:333         /dev/disk/by-id/ata-WDC_WD20EFRX-68AX9N0_WD-WMC301572447-part5: Aliased to /dev/sdb5 in device cache
#device/dev-cache.c:333         /dev/disk/by-id/wwn-0x50014ee058e09fe4-part5: Aliased to /dev/sdb5 in device cache
#device/dev-cache.c:333         /dev/disk/by-label/swap: Aliased to /dev/sdb5 in device cache
#device/dev-cache.c:333         /dev/disk/by-uuid/79a60b48-aa84-48f7-8bf5-bcf5141cf27b: Aliased to /dev/sdb5 in device cache
#device/dev-cache.c:336         /dev/sdb6: Added to device cache
#device/dev-cache.c:333         /dev/disk/by-id/ata-WDC_WD20EFRX-68AX9N0_WD-WMC301572447-part6: Aliased to /dev/sdb6 in device cache
#device/dev-cache.c:333         /dev/disk/by-id/wwn-0x50014ee058e09fe4-part6: Aliased to /dev/sdb6 in device cache
#device/dev-cache.c:333         /dev/disk/by-label/home: Aliased to /dev/sdb6 in device cache
#device/dev-cache.c:333         /dev/disk/by-uuid/ab6a8674-b388-436d-88bb-97d7c30307ce: Aliased to /dev/sdb6 in device cache
#device/dev-cache.c:336         /dev/sdb7: Added to device cache
#device/dev-cache.c:333         /dev/disk/by-id/ata-WDC_WD20EFRX-68AX9N0_WD-WMC301572447-part7: Aliased to /dev/sdb7 in device cache
#device/dev-cache.c:333         /dev/disk/by-id/wwn-0x50014ee058e09fe4-part7: Aliased to /dev/sdb7 in device cache
#device/dev-cache.c:333         /dev/disk/by-label/data: Aliased to /dev/sdb7 in device cache
#device/dev-cache.c:333         /dev/disk/by-uuid/b54aea3e-e019-40ea-be30-8295dfeaf78d: Aliased to /dev/sdb7 in device cache
#device/dev-cache.c:336         /dev/sr0: Added to device cache
#device/dev-cache.c:333         /dev/cdrom: Aliased to /dev/sr0 in device cache (preferred name)
#device/dev-cache.c:333         /dev/disk/by-id/ata-ASUS_DRW-24B5ST_D1D0CL139846: Aliased to /dev/cdrom in device cache
#device/dev-cache.c:336         /dev/dm-0: Added to device cache
#device/dev-cache.c:333         /dev/disk/by-id/dm-name-pve-swap: Aliased to /dev/dm-0 in device cache (preferred name)
#device/dev-cache.c:333         /dev/disk/by-id/dm-uuid-LVM-mgkkVOf4BJj0V7QM8pDPvfKVyTbc2rJwE3QS4lzT7yBJR0y3eQeFpIQgQJrkMSVS: Aliased to /dev/disk/by-id/dm-name-pve-swap in device cache
#device/dev-cache.c:333         /dev/disk/by-uuid/ef433440-7a3f-40e8-ad6d-ca9ae63d3042: Aliased to /dev/disk/by-id/dm-name-pve-swap in device cache
#device/dev-cache.c:333         /dev/mapper/pve-swap: Aliased to /dev/disk/by-id/dm-name-pve-swap in device cache (preferred name)
#device/dev-cache.c:333         /dev/pve/swap: Aliased to /dev/mapper/pve-swap in device cache (preferred name)
#device/dev-cache.c:336         /dev/dm-1: Added to device cache
#device/dev-cache.c:333         /dev/disk/by-id/dm-name-pve-root: Aliased to /dev/dm-1 in device cache (preferred name)
#device/dev-cache.c:333         /dev/disk/by-id/dm-uuid-LVM-mgkkVOf4BJj0V7QM8pDPvfKVyTbc2rJw3lfvvziNln5sIqrpOrHqCkSda77U0iu0: Aliased to /dev/disk/by-id/dm-name-pve-root in device cache
#device/dev-cache.c:333         /dev/disk/by-uuid/0f25f544-b5d2-46f8-b6c8-6e448f276869: Aliased to /dev/disk/by-id/dm-name-pve-root in device cache
#device/dev-cache.c:333         /dev/mapper/pve-root: Aliased to /dev/disk/by-id/dm-name-pve-root in device cache (preferred name)
#device/dev-cache.c:333         /dev/pve/root: Aliased to /dev/mapper/pve-root in device cache (preferred name)
#device/dev-cache.c:336         /dev/dm-2: Added to device cache
#device/dev-cache.c:333         /dev/disk/by-id/dm-name-pve-data: Aliased to /dev/dm-2 in device cache (preferred name)
#device/dev-cache.c:333         /dev/disk/by-id/dm-uuid-LVM-mgkkVOf4BJj0V7QM8pDPvfKVyTbc2rJwKq1jBv5H9KTpMBpl1TAJCLWOJDCN9A8q: Aliased to /dev/disk/by-id/dm-name-pve-data in device cache
#device/dev-cache.c:333         /dev/disk/by-uuid/25303dd4-4062-4dd5-abeb-afa259c04355: Aliased to /dev/disk/by-id/dm-name-pve-data in device cache
#device/dev-cache.c:333         /dev/mapper/pve-data: Aliased to /dev/disk/by-id/dm-name-pve-data in device cache (preferred name)
#device/dev-cache.c:333         /dev/pve/data: Aliased to /dev/mapper/pve-data in device cache (preferred name)
#cache/lvmcache.c:1627     Wiping internal VG cache
#cache/lvmcache.c:438         Metadata cache has no info for vgname: "#global"
#cache/lvmcache.c:438         Metadata cache has no info for vgname: "#orphans_lvm1"
#cache/lvmcache.c:438         Metadata cache has no info for vgname: "#orphans_lvm1"
#cache/lvmcache.c:1355         lvmcache: initialised VG #orphans_lvm1
#cache/lvmcache.c:438         Metadata cache has no info for vgname: "#orphans_pool"
#cache/lvmcache.c:438         Metadata cache has no info for vgname: "#orphans_pool"
#cache/lvmcache.c:1355         lvmcache: initialised VG #orphans_pool
#cache/lvmcache.c:438         Metadata cache has no info for vgname: "#orphans_lvm2"
#cache/lvmcache.c:438         Metadata cache has no info for vgname: "#orphans_lvm2"
#cache/lvmcache.c:1355         lvmcache: initialised VG #orphans_lvm2
#vgscan.c:61   Reading all physical volumes.  This may take a while...
#toollib.c:674     Finding all volume groups
#cache/lvmetad.c:655         Asking lvmetad for complete list of known VGs
#libdm-config.c:845       Setting response to OK
#libdm-config.c:845       Setting response to OK
#cache/lvmetad.c:387         Asking lvmetad for VG mgkkVO-f4BJ-j0V7-QM8p-DPvf-KVyT-bc2rJw (name unknown)
#libdm-config.c:845       Setting response to OK
#libdm-config.c:845       Setting response to OK
#libdm-config.c:845       Setting name to pve
#libdm-config.c:845       Setting metadata/format to lvm2
#cache/lvmcache.c:438         Metadata cache has no info for vgname: "pve"
#format_text/format-text.c:1945         <backtrace>
#libdm-config.c:845       Setting id to JFm22v-Hp3X-Ykfy-YpCa-AKHl-5u2m-hzXLaW
#libdm-config.c:845       Setting format to lvm2
#libdm-config.c:876       Setting device to 2050
#libdm-config.c:876       Setting dev_size to 489185280
#libdm-config.c:876       Setting label_sector to 1
#filters/filter-mpath.c:156         /dev/sda2: Device is a partition, using primary device /dev/sda for mpath component detection
#device/dev-io.c:537         Opened /dev/sda2 RO O_DIRECT
#device/dev-io.c:314       /dev/sda2: size is 489185280 sectors
#device/dev-io.c:591         Closed /dev/sda2
#device/dev-io.c:314       /dev/sda2: size is 489185280 sectors
#device/dev-io.c:537         Opened /dev/sda2 RO O_DIRECT
#device/dev-io.c:145         /dev/sda2: block size is 4096 bytes
#device/dev-io.c:156         /dev/sda2: physical block size is 512 bytes
#device/dev-io.c:591         Closed /dev/sda2
#cache/lvmcache.c:1353         lvmcache: /dev/sda2: now in VG #orphans_lvm2 (#orphans_lvm2) with 0 mdas
#libdm-config.c:876       Setting size to 1044480
#libdm-config.c:876       Setting start to 4096
#libdm-config.c:876       Setting ignore to 0
#metadata/vg.c:60         Allocated VG pve at 0x7f5d1e773e70.
#cache/lvmcache.c:438         Metadata cache has no info for vgname: "pve"
#cache/lvmcache.c:438         Metadata cache has no info for vgname: "pve"
#cache/lvmcache.c:1353         lvmcache: /dev/sda2: now in VG pve with 1 mdas
#cache/lvmcache.c:1130         lvmcache: /dev/sda2: setting pve VGID to mgkkVOf4BJj0V7QM8pDPvfKVyTbc2rJw
#metadata/vg.c:75         Freeing VG pve at 0x7f5d1e773e70.
#toollib.c:574     Finding volume group "pve"
#locking/file_locking.c:246       Locking /run/lock/lvm/V_pve RB
#locking/file_locking.c:150         _do_flock /run/lock/lvm/V_pve:aux WB
#locking/file_locking.c:60         _undo_flock /run/lock/lvm/V_pve:aux
#locking/file_locking.c:150         _do_flock /run/lock/lvm/V_pve RB
#cache/lvmetad.c:387         Asking lvmetad for VG mgkkVO-f4BJ-j0V7-QM8p-DPvf-KVyT-bc2rJw (pve)
#libdm-config.c:845       Setting response to OK
#libdm-config.c:845       Setting response to OK
#libdm-config.c:845       Setting name to pve
#libdm-config.c:845       Setting metadata/format to lvm2
#libdm-config.c:845       Setting id to JFm22v-Hp3X-Ykfy-YpCa-AKHl-5u2m-hzXLaW
#libdm-config.c:845       Setting format to lvm2
#libdm-config.c:876       Setting device to 2050
#libdm-config.c:876       Setting dev_size to 489185280
#libdm-config.c:876       Setting label_sector to 1
#libdm-config.c:876       Setting size to 1044480
#libdm-config.c:876       Setting start to 4096
#libdm-config.c:876       Setting ignore to 0
#metadata/vg.c:60         Allocated VG pve at 0x7f5d1e76fe60.
#metadata/pv_manip.c:354         /dev/sda2 0:      0   1792: swap(0:0)
#metadata/pv_manip.c:354         /dev/sda2 1:   1792  14912: root(0:0)
#metadata/pv_manip.c:354         /dev/sda2 2:  16704  38916: data(0:0)
#metadata/pv_manip.c:354         /dev/sda2 3:  55620   4094: NULL(0:0)
#libdm-config.c:489   Parse error at byte 1587 (line 101): unexpected token
#libdm-config.c:424         <backtrace>
#libdm-config.c:171         <backtrace>
#libdm-config.c:185         <backtrace>
#format_text/export.c:860   Error parsing metadata for VG pve.
#format_text/export.c:862         <backtrace>
#metadata/metadata.c:876         <backtrace>
#metadata/metadata.c:896         <backtrace>
#metadata/replicator_manip.c:571         Failed to vg_read pve
#toollib.c:175   Skipping volume group pve
#toollib.c:590         <backtrace>
#metadata/vg.c:75         Freeing VG pve at 0x7f5d1e76fe60.
#locking/file_locking.c:83       Unlocking /run/lock/lvm/P_global
#locking/file_locking.c:60         _undo_flock /run/lock/lvm/P_global
#cache/lvmcache.c:438         Metadata cache has no info for vgname: "#global"
#locking/file_locking.c:83       Unlocking /run/lock/lvm/V_pve
#locking/file_locking.c:60         _undo_flock /run/lock/lvm/V_pve
#lvmcmdline.c:1201         Completed: vgscan -vvvv
  Internal error: Volume Group pve was not unlocked
[root@#localhost vgscan]#

Comment 17 Zdenek Kabelac 2014-07-14 15:41:58 UTC
Do you have the same problems without lvmetad?

(/etc/lvm/lvm.conf -- use_lvmetad=0)
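
One way to flip that non-interactively, keeping a backup (note the stock lvm.conf may write the key with spaces, 'use_lvmetad = 1'):

  # sed -i.orig 's/use_lvmetad *= *1/use_lvmetad = 0/' /etc/lvm/lvm.conf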

Comment 18 colin 2014-07-14 16:02:43 UTC
[root@#localhost lvm_sda]# cp /etc/lvm/lvm.conf /etc/lvm/lvm.conf.orig
[root@#localhost lvm_sda]# joe /etc/lvm/lvm.conf

(/etc/lvm/lvm.conf: changed use_lvmetad=1 to use_lvmetad=0)


[root@#localhost lvm_sda]# service lvm2-lvmetad status
Redirecting to /bin/systemctl status  lvm2-lvmetad.service
lvm2-lvmetad.service - LVM2 metadata daemon
   Loaded: loaded (/usr/lib/systemd/system/lvm2-lvmetad.service; disabled)
   Active: active (running) since Mon 2014-07-14 16:19:15 BST; 39min ago
     Docs: man:lvmetad(8)
 Main PID: 407 (lvmetad)
   CGroup: /system.slice/lvm2-lvmetad.service
           └─407 /usr/sbin/lvmetad -f

Jul 14 16:19:15 localhost.localdomain systemd[1]: Started LVM2 metadata daemon.
[root@#localhost lvm_sda]# service lvm2-lvmetad stop
Redirecting to /bin/systemctl stop  lvm2-lvmetad.service
Warning: Stopping lvm2-lvmetad.service, but it can still be activated by:
  lvm2-lvmetad.socket
[root@#localhost lvm_sda]# service lvm2-lvmetad start
Redirecting to /bin/systemctl start  lvm2-lvmetad.service
[root@#localhost lvm_sda]# service lvm2-lvmetad status
Redirecting to /bin/systemctl status  lvm2-lvmetad.service
lvm2-lvmetad.service - LVM2 metadata daemon
   Loaded: loaded (/usr/lib/systemd/system/lvm2-lvmetad.service; disabled)
   Active: active (running) since Mon 2014-07-14 16:59:31 BST; 5s ago
     Docs: man:lvmetad(8)
 Main PID: 5409 (lvmetad)
   CGroup: /system.slice/lvm2-lvmetad.service
           └─5409 /usr/sbin/lvmetad -f

Jul 14 16:59:31 #localhost.localdomain
#localhost
K8.localdomain systemd[1]: Started LVM2 metadata daemon.

[root@#localhost lvm_sda]# vgscan -vvvv
  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!
#libdm-config.c:940       Setting activation/monitoring to 1
#lvmcmdline.c:1152         Processing: vgscan -vvvv
#lvmcmdline.c:1155         O_DIRECT will be used
#libdm-config.c:876       Setting global/locking_type to 1
#libdm-config.c:940       Setting global/wait_for_locks to 1
#locking/locking.c:244       File-based locking selected.
#libdm-config.c:845       Setting global/locking_dir to /run/lock/lvm
#libdm-config.c:940       Setting global/prioritise_write_locks to 1
#locking/file_locking.c:246       Locking /run/lock/lvm/P_global WB
#locking/file_locking.c:150         _do_flock /run/lock/lvm/P_global:aux WB
#locking/file_locking.c:150         _do_flock /run/lock/lvm/P_global WB
#locking/file_locking.c:60         _undo_flock /run/lock/lvm/P_global:aux
#cache/lvmcache.c:438         Metadata cache has no info for vgname: "#global"
#filters/filter-persistent.c:52     Wiping cache of LVM-capable devices
#device/dev-cache.c:336         /dev/sda: Added to device cache
#device/dev-cache.c:333         /dev/disk/by-id/ata-WDC_WD2500YS-01SHB1_WD-WCANY3816234: Aliased to /dev/sda in device cache
#device/dev-cache.c:333         /dev/disk/by-id/wwn-0x50014ee1ab3f09a1: Aliased to /dev/sda in device cache
#device/dev-cache.c:336         /dev/sda1: Added to device cache
#device/dev-cache.c:333         /dev/disk/by-id/ata-WDC_WD2500YS-01SHB1_WD-WCANY3816234-part1: Aliased to /dev/sda1 in device cache
#device/dev-cache.c:333         /dev/disk/by-id/wwn-0x50014ee1ab3f09a1-part1: Aliased to /dev/sda1 in device cache
#device/dev-cache.c:333         /dev/disk/by-uuid/8fb5afaa-a3b7-4a70-99b8-3055a280c0b4: Aliased to /dev/sda1 in device cache
#device/dev-cache.c:336         /dev/sda2: Added to device cache
#device/dev-cache.c:333         /dev/disk/by-id/ata-WDC_WD2500YS-01SHB1_WD-WCANY3816234-part2: Aliased to /dev/sda2 in device cache
#device/dev-cache.c:333         /dev/disk/by-id/lvm-pv-uuid-JFm22v-Hp3X-Ykfy-YpCa-AKHl-5u2m-hzXLaW: Aliased to /dev/sda2 in device cache
#device/dev-cache.c:333         /dev/disk/by-id/wwn-0x50014ee1ab3f09a1-part2: Aliased to /dev/sda2 in device cache
#device/dev-cache.c:336         /dev/sdb: Added to device cache
#device/dev-cache.c:333         /dev/disk/by-id/ata-WDC_WD20EFRX-68AX9N0_WD-WMC301572447: Aliased to /dev/sdb in device cache
#device/dev-cache.c:333         /dev/disk/by-id/wwn-0x50014ee058e09fe4: Aliased to /dev/sdb in device cache
#device/dev-cache.c:336         /dev/sdb1: Added to device cache
#device/dev-cache.c:333         /dev/disk/by-id/ata-WDC_WD20EFRX-68AX9N0_WD-WMC301572447-part1: Aliased to /dev/sdb1 in device cache
#device/dev-cache.c:333         /dev/disk/by-id/wwn-0x50014ee058e09fe4-part1: Aliased to /dev/sdb1 in device cache
#device/dev-cache.c:333         /dev/disk/by-uuid/5331b414-8cb1-480e-b0df-6f0e149cb165: Aliased to /dev/sdb1 in device cache
#device/dev-cache.c:336         /dev/sdb2: Added to device cache
#device/dev-cache.c:333         /dev/disk/by-id/ata-WDC_WD20EFRX-68AX9N0_WD-WMC301572447-part2: Aliased to /dev/sdb2 in device cache
#device/dev-cache.c:333         /dev/disk/by-id/wwn-0x50014ee058e09fe4-part2: Aliased to /dev/sdb2 in device cache
#device/dev-cache.c:336         /dev/sdb3: Added to device cache
#device/dev-cache.c:333         /dev/disk/by-id/ata-WDC_WD20EFRX-68AX9N0_WD-WMC301572447-part3: Aliased to /dev/sdb3 in device cache
#device/dev-cache.c:333         /dev/disk/by-id/wwn-0x50014ee058e09fe4-part3: Aliased to /dev/sdb3 in device cache
#device/dev-cache.c:333         /dev/disk/by-uuid/6d993902-83ee-49b2-be5a-a434f33e3169: Aliased to /dev/sdb3 in device cache
#device/dev-cache.c:336         /dev/sdb4: Added to device cache
#device/dev-cache.c:333         /dev/disk/by-id/ata-WDC_WD20EFRX-68AX9N0_WD-WMC301572447-part4: Aliased to /dev/sdb4 in device cache
#device/dev-cache.c:333         /dev/disk/by-id/wwn-0x50014ee058e09fe4-part4: Aliased to /dev/sdb4 in device cache
#device/dev-cache.c:336         /dev/sdb5: Added to device cache
#device/dev-cache.c:333         /dev/disk/by-id/ata-WDC_WD20EFRX-68AX9N0_WD-WMC301572447-part5: Aliased to /dev/sdb5 in device cache
#device/dev-cache.c:333         /dev/disk/by-id/wwn-0x50014ee058e09fe4-part5: Aliased to /dev/sdb5 in device cache
#device/dev-cache.c:333         /dev/disk/by-label/swap: Aliased to /dev/sdb5 in device cache
#device/dev-cache.c:333         /dev/disk/by-uuid/79a60b48-aa84-48f7-8bf5-bcf5141cf27b: Aliased to /dev/sdb5 in device cache
#device/dev-cache.c:336         /dev/sdb6: Added to device cache
#device/dev-cache.c:333         /dev/disk/by-id/ata-WDC_WD20EFRX-68AX9N0_WD-WMC301572447-part6: Aliased to /dev/sdb6 in device cache
#device/dev-cache.c:333         /dev/disk/by-id/wwn-0x50014ee058e09fe4-part6: Aliased to /dev/sdb6 in device cache
#device/dev-cache.c:333         /dev/disk/by-label/home: Aliased to /dev/sdb6 in device cache
#device/dev-cache.c:333         /dev/disk/by-uuid/ab6a8674-b388-436d-88bb-97d7c30307ce: Aliased to /dev/sdb6 in device cache
#device/dev-cache.c:336         /dev/sdb7: Added to device cache
#device/dev-cache.c:333         /dev/disk/by-id/ata-WDC_WD20EFRX-68AX9N0_WD-WMC301572447-part7: Aliased to /dev/sdb7 in device cache
#device/dev-cache.c:333         /dev/disk/by-id/wwn-0x50014ee058e09fe4-part7: Aliased to /dev/sdb7 in device cache
#device/dev-cache.c:333         /dev/disk/by-label/data: Aliased to /dev/sdb7 in device cache
#device/dev-cache.c:333         /dev/disk/by-uuid/b54aea3e-e019-40ea-be30-8295dfeaf78d: Aliased to /dev/sdb7 in device cache
#device/dev-cache.c:336         /dev/sr0: Added to device cache
#device/dev-cache.c:333         /dev/cdrom: Aliased to /dev/sr0 in device cache (preferred name)
#device/dev-cache.c:333         /dev/disk/by-id/ata-ASUS_DRW-24B5ST_D1D0CL139846: Aliased to /dev/cdrom in device cache
#device/dev-cache.c:336         /dev/dm-0: Added to device cache
#device/dev-cache.c:333         /dev/disk/by-id/dm-name-pve-swap: Aliased to /dev/dm-0 in device cache (preferred name)
#device/dev-cache.c:333         /dev/disk/by-id/dm-uuid-LVM-mgkkVOf4BJj0V7QM8pDPvfKVyTbc2rJwE3QS4lzT7yBJR0y3eQeFpIQgQJrkMSVS: Aliased to /dev/disk/by-id/dm-name-pve-swap in device cache
#device/dev-cache.c:333         /dev/disk/by-uuid/ef433440-7a3f-40e8-ad6d-ca9ae63d3042: Aliased to /dev/disk/by-id/dm-name-pve-swap in device cache
#device/dev-cache.c:333         /dev/mapper/pve-swap: Aliased to /dev/disk/by-id/dm-name-pve-swap in device cache (preferred name)
#device/dev-cache.c:333         /dev/pve/swap: Aliased to /dev/mapper/pve-swap in device cache (preferred name)
#device/dev-cache.c:336         /dev/dm-1: Added to device cache
#device/dev-cache.c:333         /dev/disk/by-id/dm-name-pve-root: Aliased to /dev/dm-1 in device cache (preferred name)
#device/dev-cache.c:333         /dev/disk/by-id/dm-uuid-LVM-mgkkVOf4BJj0V7QM8pDPvfKVyTbc2rJw3lfvvziNln5sIqrpOrHqCkSda77U0iu0: Aliased to /dev/disk/by-id/dm-name-pve-root in device cache
#device/dev-cache.c:333         /dev/disk/by-uuid/0f25f544-b5d2-46f8-b6c8-6e448f276869: Aliased to /dev/disk/by-id/dm-name-pve-root in device cache
#device/dev-cache.c:333         /dev/mapper/pve-root: Aliased to /dev/disk/by-id/dm-name-pve-root in device cache (preferred name)
#device/dev-cache.c:333         /dev/pve/root: Aliased to /dev/mapper/pve-root in device cache (preferred name)
#device/dev-cache.c:336         /dev/dm-2: Added to device cache
#device/dev-cache.c:333         /dev/disk/by-id/dm-name-pve-data: Aliased to /dev/dm-2 in device cache (preferred name)
#device/dev-cache.c:333         /dev/disk/by-id/dm-uuid-LVM-mgkkVOf4BJj0V7QM8pDPvfKVyTbc2rJwKq1jBv5H9KTpMBpl1TAJCLWOJDCN9A8q: Aliased to /dev/disk/by-id/dm-name-pve-data in device cache
#device/dev-cache.c:333         /dev/disk/by-uuid/25303dd4-4062-4dd5-abeb-afa259c04355: Aliased to /dev/disk/by-id/dm-name-pve-data in device cache
#device/dev-cache.c:333         /dev/mapper/pve-data: Aliased to /dev/disk/by-id/dm-name-pve-data in device cache (preferred name)
#device/dev-cache.c:333         /dev/pve/data: Aliased to /dev/mapper/pve-data in device cache (preferred name)
#cache/lvmcache.c:1627     Wiping internal VG cache
#cache/lvmcache.c:438         Metadata cache has no info for vgname: "#global"
#cache/lvmcache.c:438         Metadata cache has no info for vgname: "#orphans_lvm1"
#cache/lvmcache.c:438         Metadata cache has no info for vgname: "#orphans_lvm1"
#cache/lvmcache.c:1355         lvmcache: initialised VG #orphans_lvm1
#cache/lvmcache.c:438         Metadata cache has no info for vgname: "#orphans_pool"
#cache/lvmcache.c:438         Metadata cache has no info for vgname: "#orphans_pool"
#cache/lvmcache.c:1355         lvmcache: initialised VG #orphans_pool
#cache/lvmcache.c:438         Metadata cache has no info for vgname: "#orphans_lvm2"
#cache/lvmcache.c:438         Metadata cache has no info for vgname: "#orphans_lvm2"
#cache/lvmcache.c:1355         lvmcache: initialised VG #orphans_lvm2
#vgscan.c:61   Reading all physical volumes.  This may take a while...
#toollib.c:674     Finding all volume groups
#device/dev-io.c:537         Opened /dev/sda RO O_DIRECT
#device/dev-io.c:314       /dev/sda: size is 490234752 sectors
#device/dev-io.c:145         /dev/sda: block size is 4096 bytes
#device/dev-io.c:156         /dev/sda: physical block size is 512 bytes
#filters/filter-partitioned.c:45         /dev/sda: Skipping: Partition table signature found
#device/dev-io.c:591         Closed /dev/sda
#filters/filter-type.c:27         /dev/cdrom: Skipping: Unrecognised LVM device type 11
#ioctl/libdm-iface.c:1751         dm version   OF   [16384] (*1)
#ioctl/libdm-iface.c:1751         dm status   (253:0) OF   [16384] (*1)
#device/dev-io.c:537         Opened /dev/pve/swap RO O_DIRECT
#device/dev-io.c:314       /dev/pve/swap: size is 14680064 sectors
#device/dev-io.c:591         Closed /dev/pve/swap
#device/dev-io.c:314       /dev/pve/swap: size is 14680064 sectors
#device/dev-io.c:537         Opened /dev/pve/swap RO O_DIRECT
#device/dev-io.c:145         /dev/pve/swap: block size is 4096 bytes
#device/dev-io.c:156         /dev/pve/swap: physical block size is 512 bytes
#device/dev-io.c:591         Closed /dev/pve/swap
#device/dev-cache.c:1049         Using /dev/pve/swap
#device/dev-io.c:537         Opened /dev/pve/swap RO O_DIRECT
#device/dev-io.c:145         /dev/pve/swap: block size is 4096 bytes
#device/dev-io.c:156         /dev/pve/swap: physical block size is 512 bytes
#label/label.c:179       /dev/pve/swap: No label detected
#label/label.c:282         <backtrace>
#device/dev-io.c:591         Closed /dev/pve/swap
#filters/filter-mpath.c:156         /dev/sda1: Device is a partition, using primary device /dev/sda for mpath component detection
#device/dev-io.c:537         Opened /dev/sda1 RO O_DIRECT
#device/dev-io.c:314       /dev/sda1: size is 1046528 sectors
#device/dev-io.c:591         Closed /dev/sda1
#device/dev-io.c:314       /dev/sda1: size is 1046528 sectors
#device/dev-io.c:537         Opened /dev/sda1 RO O_DIRECT
#device/dev-io.c:145         /dev/sda1: block size is 4096 bytes
#device/dev-io.c:156         /dev/sda1: physical block size is 512 bytes
#device/dev-io.c:591         Closed /dev/sda1
#device/dev-cache.c:1049         Using /dev/sda1
#device/dev-io.c:537         Opened /dev/sda1 RO O_DIRECT
#device/dev-io.c:145         /dev/sda1: block size is 4096 bytes
#device/dev-io.c:156         /dev/sda1: physical block size is 512 bytes
#label/label.c:179       /dev/sda1: No label detected
#label/label.c:282         <backtrace>
#device/dev-io.c:591         Closed /dev/sda1
#ioctl/libdm-iface.c:1751         dm status   (253:1) OF   [16384] (*1)
#device/dev-io.c:537         Opened /dev/pve/root RO O_DIRECT
#device/dev-io.c:314       /dev/pve/root: size is 122159104 sectors
#device/dev-io.c:591         Closed /dev/pve/root
#device/dev-io.c:314       /dev/pve/root: size is 122159104 sectors
#device/dev-io.c:537         Opened /dev/pve/root RO O_DIRECT
#device/dev-io.c:145         /dev/pve/root: block size is 4096 bytes
#device/dev-io.c:156         /dev/pve/root: physical block size is 512 bytes
#device/dev-io.c:591         Closed /dev/pve/root
#device/dev-cache.c:1049         Using /dev/pve/root
#device/dev-io.c:537         Opened /dev/pve/root RO O_DIRECT
#device/dev-io.c:145         /dev/pve/root: block size is 4096 bytes
#device/dev-io.c:156         /dev/pve/root: physical block size is 512 bytes
#label/label.c:179       /dev/pve/root: No label detected
#label/label.c:282         <backtrace>
#device/dev-io.c:591         Closed /dev/pve/root
#filters/filter-mpath.c:156         /dev/sda2: Device is a partition, using primary device /dev/sda for mpath component detection
#device/dev-io.c:537         Opened /dev/sda2 RO O_DIRECT
#device/dev-io.c:314       /dev/sda2: size is 489185280 sectors
#device/dev-io.c:591         Closed /dev/sda2
#device/dev-io.c:314       /dev/sda2: size is 489185280 sectors
#device/dev-io.c:537         Opened /dev/sda2 RO O_DIRECT
#device/dev-io.c:145         /dev/sda2: block size is 4096 bytes
#device/dev-io.c:156         /dev/sda2: physical block size is 512 bytes
#device/dev-io.c:591         Closed /dev/sda2
#device/dev-cache.c:1049         Using /dev/sda2
#device/dev-io.c:537         Opened /dev/sda2 RO O_DIRECT
#device/dev-io.c:145         /dev/sda2: block size is 4096 bytes
#device/dev-io.c:156         /dev/sda2: physical block size is 512 bytes
#label/label.c:155       /dev/sda2: lvm2 label detected at sector 1
#cache/lvmcache.c:1353         lvmcache: /dev/sda2: now in VG #orphans_lvm2 (#orphans_lvm2) with 0 mdas
#format_text/format-text.c:1207         /dev/sda2: Found metadata at 8192 size 1617 (in area at 4096 size 1044480) for pve (mgkkVO-f4BJ-j0V7-QM8p-DPvf-KVyT-bc2rJw)
#cache/lvmcache.c:438         Metadata cache has no info for vgname: "pve"
#cache/lvmcache.c:438         Metadata cache has no info for vgname: "pve"
#cache/lvmcache.c:1353         lvmcache: /dev/sda2: now in VG pve with 1 mdas
#cache/lvmcache.c:1130         lvmcache: /dev/sda2: setting pve VGID to mgkkVOf4BJj0V7QM8pDPvfKVyTbc2rJw
#cache/lvmcache.c:1390         lvmcache: /dev/sda2: VG pve: Set creation host to proxmox.
#device/dev-io.c:591         Closed /dev/sda2
#ioctl/libdm-iface.c:1751         dm status   (253:2) OF   [16384] (*1)
#device/dev-io.c:537         Opened /dev/pve/data RO O_DIRECT
#device/dev-io.c:314       /dev/pve/data: size is 318799872 sectors
#device/dev-io.c:591         Closed /dev/pve/data
#device/dev-io.c:314       /dev/pve/data: size is 318799872 sectors
#device/dev-io.c:537         Opened /dev/pve/data RO O_DIRECT
#device/dev-io.c:145         /dev/pve/data: block size is 4096 bytes
#device/dev-io.c:156         /dev/pve/data: physical block size is 512 bytes
#device/dev-io.c:591         Closed /dev/pve/data
#device/dev-cache.c:1049         Using /dev/pve/data
#device/dev-io.c:537         Opened /dev/pve/data RO O_DIRECT
#device/dev-io.c:145         /dev/pve/data: block size is 4096 bytes
#device/dev-io.c:156         /dev/pve/data: physical block size is 512 bytes
#label/label.c:179       /dev/pve/data: No label detected
#label/label.c:282         <backtrace>
#device/dev-io.c:591         Closed /dev/pve/data
#device/dev-io.c:537         Opened /dev/sdb RO O_DIRECT
#device/dev-io.c:314       /dev/sdb: size is 3907029168 sectors
#device/dev-io.c:145         /dev/sdb: block size is 4096 bytes
#device/dev-io.c:156         /dev/sdb: physical block size is 4096 bytes
#filters/filter-partitioned.c:45         /dev/sdb: Skipping: Partition table signature found
#device/dev-io.c:591         Closed /dev/sdb
#filters/filter-mpath.c:156         /dev/sdb1: Device is a partition, using primary device /dev/sdb for mpath component detection
#device/dev-io.c:537         Opened /dev/sdb1 RO O_DIRECT
#device/dev-io.c:314       /dev/sdb1: size is 104857600 sectors
#device/dev-io.c:591         Closed /dev/sdb1
#device/dev-io.c:314       /dev/sdb1: size is 104857600 sectors
#device/dev-io.c:537         Opened /dev/sdb1 RO O_DIRECT
#device/dev-io.c:145         /dev/sdb1: block size is 4096 bytes
#device/dev-io.c:156         /dev/sdb1: physical block size is 4096 bytes
#device/dev-io.c:591         Closed /dev/sdb1
#device/dev-cache.c:1049         Using /dev/sdb1
#device/dev-io.c:537         Opened /dev/sdb1 RO O_DIRECT
#device/dev-io.c:145         /dev/sdb1: block size is 4096 bytes
#device/dev-io.c:156         /dev/sdb1: physical block size is 4096 bytes
#label/label.c:179       /dev/sdb1: No label detected
#label/label.c:282         <backtrace>
#device/dev-io.c:591         Closed /dev/sdb1
#filters/filter-mpath.c:156         /dev/sdb2: Device is a partition, using primary device /dev/sdb for mpath component detection
#device/dev-io.c:537         Opened /dev/sdb2 RO O_DIRECT
#device/dev-io.c:314       /dev/sdb2: size is 83886080 sectors
#device/dev-io.c:591         Closed /dev/sdb2
#device/dev-io.c:314       /dev/sdb2: size is 83886080 sectors
#device/dev-io.c:537         Opened /dev/sdb2 RO O_DIRECT
#device/dev-io.c:145         /dev/sdb2: block size is 4096 bytes
#device/dev-io.c:156         /dev/sdb2: physical block size is 4096 bytes
#device/dev-io.c:591         Closed /dev/sdb2
#device/dev-cache.c:1049         Using /dev/sdb2
#device/dev-io.c:537         Opened /dev/sdb2 RO O_DIRECT
#device/dev-io.c:145         /dev/sdb2: block size is 4096 bytes
#device/dev-io.c:156         /dev/sdb2: physical block size is 4096 bytes
#label/label.c:179       /dev/sdb2: No label detected
#label/label.c:282         <backtrace>
#device/dev-io.c:591         Closed /dev/sdb2
#filters/filter-mpath.c:156         /dev/sdb3: Device is a partition, using primary device /dev/sdb for mpath component detection
#device/dev-io.c:537         Opened /dev/sdb3 RO O_DIRECT
#device/dev-io.c:314       /dev/sdb3: size is 1048576 sectors
#device/dev-io.c:591         Closed /dev/sdb3
#device/dev-io.c:314       /dev/sdb3: size is 1048576 sectors
#device/dev-io.c:537         Opened /dev/sdb3 RO O_DIRECT
#device/dev-io.c:145         /dev/sdb3: block size is 4096 bytes
#device/dev-io.c:156         /dev/sdb3: physical block size is 4096 bytes
#device/dev-io.c:591         Closed /dev/sdb3
#device/dev-cache.c:1049         Using /dev/sdb3
#device/dev-io.c:537         Opened /dev/sdb3 RO O_DIRECT
#device/dev-io.c:145         /dev/sdb3: block size is 4096 bytes
#device/dev-io.c:156         /dev/sdb3: physical block size is 4096 bytes
#label/label.c:179       /dev/sdb3: No label detected
#label/label.c:282         <backtrace>
#device/dev-io.c:591         Closed /dev/sdb3
#filters/filter-mpath.c:156         /dev/sdb4: Device is a partition, using primary device /dev/sdb for mpath component detection
#device/dev-io.c:537         Opened /dev/sdb4 RO O_DIRECT
#device/dev-io.c:314       /dev/sdb4: size is 2 sectors
#filters/filter-partitioned.c:39         /dev/sdb4: Skipping: Too small to hold a PV
#device/dev-io.c:591         Closed /dev/sdb4
#filters/filter-mpath.c:156         /dev/sdb5: Device is a partition, using primary device /dev/sdb for mpath component detection
#device/dev-io.c:537         Opened /dev/sdb5 RO O_DIRECT
#device/dev-io.c:314       /dev/sdb5: size is 16777216 sectors
#device/dev-io.c:591         Closed /dev/sdb5
#device/dev-io.c:314       /dev/sdb5: size is 16777216 sectors
#device/dev-io.c:537         Opened /dev/sdb5 RO O_DIRECT
#device/dev-io.c:145         /dev/sdb5: block size is 4096 bytes
#device/dev-io.c:156         /dev/sdb5: physical block size is 4096 bytes
#device/dev-io.c:591         Closed /dev/sdb5
#device/dev-cache.c:1049         Using /dev/sdb5
#device/dev-io.c:537         Opened /dev/sdb5 RO O_DIRECT
#device/dev-io.c:145         /dev/sdb5: block size is 4096 bytes
#device/dev-io.c:156         /dev/sdb5: physical block size is 4096 bytes
#label/label.c:179       /dev/sdb5: No label detected
#label/label.c:282         <backtrace>
#device/dev-io.c:591         Closed /dev/sdb5
#filters/filter-mpath.c:156         /dev/sdb6: Device is a partition, using primary device /dev/sdb for mpath component detection
#device/dev-io.c:537         Opened /dev/sdb6 RO O_DIRECT
#device/dev-io.c:314       /dev/sdb6: size is 83886080 sectors
#device/dev-io.c:591         Closed /dev/sdb6
#device/dev-io.c:314       /dev/sdb6: size is 83886080 sectors
#device/dev-io.c:537         Opened /dev/sdb6 RO O_DIRECT
#device/dev-io.c:145         /dev/sdb6: block size is 4096 bytes
#device/dev-io.c:156         /dev/sdb6: physical block size is 4096 bytes
#device/dev-io.c:591         Closed /dev/sdb6
#device/dev-cache.c:1049         Using /dev/sdb6
#device/dev-io.c:537         Opened /dev/sdb6 RO O_DIRECT
#device/dev-io.c:145         /dev/sdb6: block size is 4096 bytes
#device/dev-io.c:156         /dev/sdb6: physical block size is 4096 bytes
#label/label.c:179       /dev/sdb6: No label detected
#label/label.c:282         <backtrace>
#device/dev-io.c:591         Closed /dev/sdb6
#filters/filter-mpath.c:156         /dev/sdb7: Device is a partition, using primary device /dev/sdb for mpath component detection
#device/dev-io.c:537         Opened /dev/sdb7 RO O_DIRECT
#device/dev-io.c:314       /dev/sdb7: size is 83886080 sectors
#device/dev-io.c:591         Closed /dev/sdb7
#device/dev-io.c:314       /dev/sdb7: size is 83886080 sectors
#device/dev-io.c:537         Opened /dev/sdb7 RO O_DIRECT
#device/dev-io.c:145         /dev/sdb7: block size is 4096 bytes
#device/dev-io.c:156         /dev/sdb7: physical block size is 4096 bytes
#device/dev-io.c:591         Closed /dev/sdb7
#device/dev-cache.c:1049         Using /dev/sdb7
#device/dev-io.c:537         Opened /dev/sdb7 RO O_DIRECT
#device/dev-io.c:145         /dev/sdb7: block size is 4096 bytes
#device/dev-io.c:156         /dev/sdb7: physical block size is 4096 bytes
#label/label.c:179       /dev/sdb7: No label detected
#label/label.c:282         <backtrace>
#device/dev-io.c:591         Closed /dev/sdb7
#toollib.c:574     Finding volume group "pve"
#locking/file_locking.c:246       Locking /run/lock/lvm/V_pve RB
#locking/file_locking.c:150         _do_flock /run/lock/lvm/V_pve:aux WB
#locking/file_locking.c:60         _undo_flock /run/lock/lvm/V_pve:aux
#locking/file_locking.c:150         _do_flock /run/lock/lvm/V_pve RB
#label/label.c:265         Using cached label for /dev/sda2
#device/dev-io.c:537         Opened /dev/sda2 RO O_DIRECT
#device/dev-io.c:145         /dev/sda2: block size is 4096 bytes
#device/dev-io.c:156         /dev/sda2: physical block size is 512 bytes
#metadata/vg.c:60         Allocated VG pve at 0x7f0575a41e20.
#label/label.c:265         Using cached label for /dev/sda2
#format_text/format-text.c:538         Read pve metadata (4) from /dev/sda2 at 8192 size 1617
#metadata/pv_manip.c:354         /dev/sda2 0:      0   1792: swap(0:0)
#metadata/pv_manip.c:354         /dev/sda2 1:   1792  14912: root(0:0)
#metadata/pv_manip.c:354         /dev/sda2 2:  16704  38916: data(0:0)
#metadata/pv_manip.c:354         /dev/sda2 3:  55620   4094: NULL(0:0)
#libdm-config.c:489   Parse error at byte 1587 (line 101): unexpected token
#libdm-config.c:424         <backtrace>
#libdm-config.c:171         <backtrace>
#libdm-config.c:185         <backtrace>
#format_text/export.c:860   Error parsing metadata for VG pve.
#format_text/export.c:862         <backtrace>
#metadata/metadata.c:876         <backtrace>
#metadata/metadata.c:896         <backtrace>
#metadata/replicator_manip.c:571         Failed to vg_read pve
#toollib.c:175   Skipping volume group pve
#toollib.c:590         <backtrace>
#metadata/vg.c:75         Freeing VG pve at 0x7f0575a41e20.
#locking/file_locking.c:83       Unlocking /run/lock/lvm/P_global
#locking/file_locking.c:60         _undo_flock /run/lock/lvm/P_global
#cache/lvmcache.c:438         Metadata cache has no info for vgname: "#global"
#locking/file_locking.c:83       Unlocking /run/lock/lvm/V_pve
#locking/file_locking.c:60         _undo_flock /run/lock/lvm/V_pve
#lvmcmdline.c:1201         Completed: vgscan -vvvv
  Internal error: Volume Group pve was not unlocked
  Device '/dev/sda2' has been left open (0 remaining references).
  Internal error: 1 device(s) were left open and have been closed.
[root@#localhost lvm_sda]#

Comment 19 Zdenek Kabelac 2014-07-14 16:09:25 UTC
OK, so the problem appears to be in the binary.

So now - the metadata were generated on 3.16.0-0.rc4.git1.1.fc21.x86_64 - and they are normally readable on my x86_64 platform.

But you are running this on some virtual server - what endianness and what CPU are actually in use?
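For reference, both can be checked directly on the affected machine with standard tools; a minimal sketch, nothing bug-specific:

  uname -m                             # machine hardware name, e.g. x86_64
  lscpu | grep -i 'byte order'         # e.g. Byte Order: Little Endian
  grep -m1 'model name' /proc/cpuinfo  # exact CPU model string on x86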

Comment 20 colin 2014-07-14 16:11:41 UTC
 Your comment was:

    So with lvmetad now disabled.

    [root@#localhost lvm_sda]# pvscan
      WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!
      PV /dev/sda2   VG pve   lvm2 [233.26 GiB / 15.99 GiB free]
      Total: 1 [233.26 GiB] / in use: 1 [233.26 GiB] / in no VG: 0 [0   ]

    [root@#localhost lvm_sda]# vgscan
      WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!
      Reading all physical volumes.  This may take a while...
      Parse error at byte 1587 (line 101): unexpected token
      Error parsing metadata for VG pve.
      Skipping volume group pve
      Internal error: Volume Group pve was not unlocked
      Device '/dev/sda2' has been left open (0 remaining references).
      Internal error: 1 device(s) were left open and have been closed.

    The parsing fault is still exhibited.

    Maybe it is not important, but I notice that the problematic drive has an msdos partition table, which must be a hangover from a former life as a Fedora box.

    A freshly installed Proxmox 3.1 is given only the LVM volumes, with no partition table.
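That difference is easy to confirm with standard tools, for what it is worth; a quick sketch (read-only commands):

  fdisk -l /dev/sda | grep -i disklabel   # reports 'dos' when an msdos partition table is present
  pvs -o pv_name,vg_name                  # a whole-disk PV lists as /dev/sdX, a partitioned one as /dev/sdXN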

Comment 21 colin 2014-07-14 16:16:21 UTC
> So now - the metadata were generated on 3.16.0-0.rc4.git1.1.fc21.x86_64 - and they are normally readable on my x86_64 platform.
>
> But you are running this on some virtual server - what endianness and what CPU are actually in use?

I am not entirely sure exactly what you are asking here.

The problematic drive came from an AMD64X6 server, running native x86_64.

Comment 22 colin 2014-07-14 16:19:27 UTC
root@proxmox31:~/lvmstuff# uname -a
Linux proxmox31 2.6.32-26-pve #1 SMP Mon Oct 14 08:22:20 CEST 2013 x86_64 GNU/Linux

Comment 23 Zdenek Kabelac 2014-07-14 17:38:32 UTC
So the problematic text/char for your binary - 8192 + 1587 -> offset 0x2633 - is 0x09 (a tab):

00002510 20 31 36 37 │ 30 34 0A 5D │ 0A 7D 0A 7D │ 0A 7D 0A 7D   16704.].}.}.}.}
00002520 0A 23 20 47 │ 65 6E 65 72 │ 61 74 65 64 │ 20 62 79 20  .# Generated by
00002530 4C 56 4D 32 │ 20 76 65 72 │ 73 69 6F 6E │ 20 32 2E 30  LVM2 version 2.0
00002540 32 2E 39 38 │ 28 32 29 20 │ 28 32 30 31 │ 32 2D 31 30  2.98(2) (2012-10
00002550 2D 31 35 29 │ 3A 20 57 65 │ 64 20 4E 6F │ 76 20 32 30  -15): Wed Nov 20
00002560 20 31 39 3A │ 30 31 3A 34 │ 32 20 32 30 │ 31 33 0A 0A   19:01:42 2013..
00002570 63 6F 6E 74 │ 65 6E 74 73 │ 20 3D 20 22 │ 54 65 78 74  contents = "Text
00002580 20 46 6F 72 │ 6D 61 74 20 │ 56 6F 6C 75 │ 6D 65 20 47   Format Volume G
00002590 72 6F 75 70 │ 22 0A 76 65 │ 72 73 69 6F │ 6E 20 3D 20  roup".version =
000025A0 31 0A 0A 64 │ 65 73 63 72 │ 69 70 74 69 │ 6F 6E 20 3D  1..description =
000025B0 20 22 22 0A │ 0A 63 72 65 │ 61 74 69 6F │ 6E 5F 68 6F   ""..creation_ho
000025C0 73 74 20 3D │ 20 22 70 72 │ 6F 78 6D 6F │ 78 22 09 23  st = "proxmox".#
000025D0 20 4C 69 6E │ 75 78 20 70 │ 72 6F 78 6D │ 6F 78 20 32   Linux proxmox 2
000025E0 2E 36 2E 33 │ 32 2D 32 36 │ 2D 70 76 65 │ 20 23 31 20  .6.32-26-pve #1
000025F0 53 4D 50 20 │ 4D 6F 6E 20 │ 4F 63 74 20 │ 31 34 20 30  SMP Mon Oct 14 0
00002600 38 3A 32 32 │ 3A 32 30 20 │ 43 45 53 54 │ 20 32 30 31  8:22:20 CEST 201
00002610 33 20 78 38 │ 36 5F 36 34 │ 0A 63 72 65 │ 61 74 69 6F  3 x86_64.creatio
00002620 6E 5F 74 69 │ 6D 65 20 3D │ 20 31 33 38 │ 34 39 37 34  n_time = 1384974
00002630 31 30 32 09 │ 23 20 57 65 │ 64 20 4E 6F │ 76 20 32 30  102.# Wed Nov 20
00002640 20 31 39 3A │ 30 31 3A 34 │ 32 20 32 30 │ 31 33 0A 0A   19:01:42 2013..
00002650 00 EF EE FF │ EC ED EC FF │ E9 EA E9 FF │ E4 E5 E3 FF  .
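For anyone following along, that byte can be pulled straight out of the dump with standard tools; a minimal sketch, assuming the first-MiB dump of /dev/sda1 attached earlier is saved as dd_dev_sda1.bin:

  dd if=dd_dev_sda1.bin bs=1 skip=$((8192 + 1587)) count=1 2>/dev/null | xxd
  dd if=dd_dev_sda1.bin bs=1 skip=$((0x2600)) count=80 2>/dev/null | xxd   # surrounding context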

Comment 24 colin 2014-07-14 18:29:10 UTC
Created attachment 917964 [details]
LVM config - snipped from attachment dd_dev_sda1.bin 0000:2000 to 0000:2640

Bryn attached a link to a pastebin of the extracted LVM config text, but I don't see it now, so I made another one with Okteta. I will attach it for completeness, in case it's useful.

Comment 25 Alasdair Kergon 2014-07-14 19:02:42 UTC
Could we see the output of 'uname -a' from the machine on which you're having this problem, and also 'date'?

Comment 26 Alasdair Kergon 2014-07-14 19:23:15 UTC
(In reply to Zdenek Kabelac from comment #23)
> So the problematic text/char for your binary - 8192 + 1587 -> offset 0x2633 - is 0x09 (a tab)

But it's not the on-disk metadata causing this error: that has already been read off disk and parsed successfully prior to the error.

The error occurs while parsing an internally-generated representation of the metadata, which will include the machine's *own* hostname as returned by uname(2) and ctime(time(NULL)), and which also relies on alloca().
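To make that failure mode concrete, here is a minimal shell sketch - hypothetical, not LVM's actual export code, and the hostname value is invented - of what an embedded newline does to a format whose exporter writes the host name into a one-line '# ...' trailer, as visible in the hexdump in comment 23:

  HN="$(printf '#old-name\n#other-old-name\nbox.localdomain')"   # hypothetical multi-line hostname
  printf 'creation_host = "%s"\t# Linux %s x86_64\n' "$HN" "$HN"

What the exporter intends as one logical line comes out as five physical lines; the continuations that do not start with '#' are no longer comments, so a line-oriented parser reports an unexpected token partway through - the same shape as the failure at line 101 of the generated text.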

Comment 27 Alasdair Kergon 2014-07-14 19:36:12 UTC
I can reproduce a similar error along these lines.

Comment 28 Alasdair Kergon 2014-07-14 19:37:58 UTC
  Parse error at byte 1588 (line 101): unexpected token

Comment 29 Alasdair Kergon 2014-07-14 19:39:40 UTC
LVM does not cope with hostnames that span several lines...

Comment 30 colin 2014-07-14 20:27:29 UTC
>Could we see the output of 'uname -a' from the machine on which you're having this problem, and also 'date'?

[colin@#localhost ~]$ date
Mon 14 Jul 21:14:31 BST 2014

[colin@#localhost ~]$ uname -a
Linux #localhost.localdomain
#localhost
K8.localdomain 3.15.3-200.fc20.x86_64 #1 SMP Tue Jul 1 16:18:00 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

Aha! You have cracked it.

There are 2 commented-out lines in /etc/hostname :-(

Now deleted.

Thank you for spotting this issue.
:-)
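For anyone hitting the same symptom, a quick way to check for (and fix) stray lines in the hostname file; a minimal sketch using standard Fedora tooling, with the hostname from this report as the example value:

  cat -A /etc/hostname                      # makes commented-out or stray lines visible
  wc -l /etc/hostname                       # should report exactly 1
  hostnamectl set-hostname K8.localdomain   # rewrites it as a single clean line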

Comment 31 Fedora End Of Life 2015-05-29 12:21:37 UTC
This message is a reminder that Fedora 20 is nearing its end of life.
Approximately 4 (four) weeks from now Fedora will stop maintaining
and issuing updates for Fedora 20. It is Fedora's policy to close all
bug reports from releases that are no longer maintained. At that time
this bug will be closed as EOL if it remains open with a Fedora 'version'
of '20'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version' 
to a later Fedora version.

Thank you for reporting this issue and we are sorry that we were not
able to fix it before Fedora 20 reached end of life. If you would still like
to see this bug fixed and are able to reproduce it against a later version
of Fedora, you are encouraged to change the 'version' to a later Fedora
version before this bug is closed, as described in the policy above.

Although we aim to fix as many bugs as possible during every release's 
lifetime, sometimes those efforts are overtaken by events. Often a 
more recent Fedora release includes newer upstream software that fixes 
bugs or makes them obsolete.

Comment 32 Fedora End Of Life 2015-06-30 01:04:41 UTC
Fedora 20 changed to end-of-life (EOL) status on 2015-06-23. Fedora 20 is
no longer maintained, which means that it will not receive any further
security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of
Fedora please feel free to reopen this bug against that version. If you
are unable to reopen this bug, please file a new report against the
current release. If you experience problems, please add a comment to this
bug.

Thank you for reporting this bug and we are sorry it could not be fixed.

