There seems to be a regression between the last snap1 F-11 live image and the latest F-11 updates. The snap1 F-11 live image has no problems reading the LVM partitions of my F-10 disk. I installed F-11 on a new disk (using the snap1 live image from a USB stick), installed all updates, and rebooted from the disk. Now if I plug in my old F-10 disk (in a USB enclosure), F-11 cannot mount the LVM partitions from the F-10 disk. The LVM GUI shows the correct physical and logical layout, but it reports the partitions as having "no filesystems". More importantly, the /dev devices (/dev/Volxxx) are not created, so I can't even mount the LVM partitions manually. This worked fine from the live image. Let me know how I can provide more information.
When you run vgscan and "vgchange -a y" from the console, is the old VG still not activated? If the problem persists, please post the output of "vgchange -a y -vvvv" and, if possible, the output of the "lvmdump -m" command.
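For reference, the full sequence (run as root) would look roughly like this; the log filename is just an example:

    vgscan                                 # rescan block devices for LVM metadata
    vgchange -a y                          # activate all volume groups found
    vgchange -a y -vvvv 2> vgchange.log    # verbose run; diagnostics go to stderr
    lvmdump -m                             # bundle LVM metadata and logs into a tarball to attach here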
Yes, running 'vgchange -a y' made the volume show up. The question now is: why did this happen automatically on the live image, but not on the installed F-11? Isn't this supposed to happen automatically when you hotplug a disk? Closing, since this doesn't appear to be an lvm2 problem...
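For anyone hitting the same thing, the manual workaround looks roughly like this (the VG/LV names below are placeholders for whatever the F-10 install actually used):

    vgchange -a y VolGroup00               # creates the /dev/VolGroup00/* device nodes
    mount /dev/VolGroup00/LogVol00 /mnt    # then the LV can be mounted as usual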
Hotplugging an LVM PV has never caused the VG to be activated automatically (other than in special environments like the installer/live image). Once the upstream lvm/udev integration work is completed, this should be quite straightforward (you could hack together a udev rule to do it now if required, but it would be a bit messy since lvm and udev are not tightly integrated yet; a rough sketch follows).
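An untested sketch of such a hack is below. The filename is arbitrary, it assumes the stock udev rules have already set ID_FS_TYPE (via vol_id/blkid), and it naively activates every VG whenever any PV appears, which can misfire for VGs spanning multiple PVs:

    # /etc/udev/rules.d/90-lvm-activate.rules -- hypothetical example, not a supported mechanism
    # When a block device carrying an LVM2 PV signature appears,
    # attempt to activate all volume groups.
    SUBSYSTEM=="block", ACTION=="add", ENV{ID_FS_TYPE}=="LVM2_member", RUN+="/sbin/vgchange -a y"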