| Summary: | need information about new changes introduced in lvm.conf file in RHEL 6.8 Snapshot1 | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 6 | Reporter: | Gopal Gosavi <gopal.gosavi> |
| Component: | lvm2 | Assignee: | LVM and device-mapper development team <lvm-team> |
| lvm2 sub component: | Configuration files (RHEL6) | QA Contact: | cluster-qe <cluster-qe> |
| Status: | CLOSED CURRENTRELEASE | Docs Contact: | |
| Severity: | urgent | | |
| Priority: | unspecified | CC: | agk, heinzm, jbrassow, msnitzer, prajnoha, prockai, saket.pusalkar, zkabelac |
| Version: | 6.8 | | |
| Target Milestone: | rc | | |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2016-04-05 06:59:46 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
Description
Gopal Gosavi
2016-03-31 05:51:36 UTC
(In reply to Gopal Gosavi from comment #0)

> Description of problem:
> OS: Red Hat Enterprise Linux 6.8 Snapshot1
> There are certain changes in the lvm.conf file which are breaking the
> Veritas DMP native support feature.
> We need an answer to the following question: are there any new changes,
> variables, or filters introduced in the new lvm.conf for RHEL 6.8
> Snapshot1?

There shouldn't be any incompatible change that would also change filtering in a way that things stop working. Reading your "steps to reproduce", it seems you had your root FS on LVM, which is now not recognized, and the system is not booting. Is this what you're hitting? Are there any specific error messages issued during boot?

There are no specific error messages seen by the OS, nor by our Veritas product.
When we set dmp_native_support to on, the root LVM on a SAN disk with four paths gets migrated to a DMP (Dynamic Multi-Pathing) node.
For example, /dev/vgroot/rootvol, whose physical volume path was (/dev/mapper/sd'x'),
gets migrated to /dev/vx/dmp/emc001_01 (the dmpnode path).
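For reference, the PV path that LVM associates with the volume can be listed with standard lvs reporting options. A minimal sketch, using the vgroot/rootvol names from this report; note, per the comment on bug 1215228 below, that with filters applied this shows the path LVM believes should be in use, not necessarily the one the kernel is actually using:

# Show the backing device(s) of vgroot/rootvol as LVM reports them
lvs -o lv_name,vg_name,devices vgroot/rootvol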
The global filter inside the VxDMP initrd, after dmp_native_support is set to on:
/]# lsinitrd -f etc/lvm/lvm.conf /boot/VxDMP_initrd-2.6.32-627.el6.x86_64 | grep filter | grep -v '#'
global_filter=[ "a|/dev/vx/dmp/.*|", "r|.*|" ]
This accepts the DMP devices and rejects everything else. The global_filter in the on-disk /etc/lvm/lvm.conf, on the other hand:
root@vmr710-02 /]# cat /etc/lvm/lvm.conf|grep global_filter
global_filter = [ "r|^/dev/(sdx|sdw|sdv|sdu|sdr|sdq|sdb|sdan|sdam|sdal|sdak|sdaj|sdai|sdah|sdag|sdaf|sdae|sdad|sdac|sda)[0-9]*$|", "r|/dev/VxDMP.*|", "r|/dev/vx/dmpconfig|", "r|/dev/vx/rdmp/.*|", "r|/dev/dm-[0-9]*|", "r|/dev/mpath/mpath[0-9]*|", "r|/dev/mapper/mpath[0-9]*|", ]
This rejects the devices listed above.
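For context on how these two filters behave, a sketch of LVM's documented filter semantics from lvm.conf(5), using hypothetical device names: devices are tested against the patterns in order, the first matching pattern wins ("a" accepts, "r" rejects), and a device that matches no pattern at all is accepted by default. So a reject-only list like the one above still accepts the /dev/vx/dmp/* nodes it never mentions.

# Filter matching in lvm.conf: patterns are tried in order, first match wins
# ("a" = accept, "r" = reject); a device matching no pattern is accepted.
#
#   global_filter = [ "a|/dev/vx/dmp/.*|", "r|.*|" ]
#     /dev/vx/dmp/emc001_01 -> matches pattern 1 -> accepted
#     /dev/sda              -> matches pattern 2 -> rejected
#
#   global_filter = [ "r|/dev/dm-[0-9]*|" ]        (reject-only list)
#     /dev/dm-3             -> matches pattern 1 -> rejected
#     /dev/vx/dmp/emc001_01 -> matches nothing   -> accepted by default

# A candidate filter can also be tried without editing lvm.conf, via --config:
pvs --config 'devices { global_filter = [ "a|/dev/vx/dmp/.*|", "r|.*|" ] }'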
* The issue is that when doing dd reads (IOs) on /dev/vgroot/rootvol,
because the volume has been migrated, the IOs should go through the dmpnode, i.e. /dev/vx/dmp/emc001_01.
* The root LVM is recognized and the machine boots properly.
* Are we missing anything in the reject filters? Is there something that should be rejected, something new in RHEL 6.8 Snapshot1?
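To pin down what, if anything, changed in the shipped configuration between releases, a sketch; it assumes the lvmconfig tool shipped with the lvm2 2.02.143 packages in 6.8, and a hypothetical saved copy of the 6.7 config for comparison:

# Print only the settings that differ from the compiled-in defaults
lvmconfig --type diff

# Compare against a config saved from the previous release
# (/root/lvm.conf.rhel67 is a hypothetical saved copy)
diff -u /root/lvm.conf.rhel67 /etc/lvm/lvm.conf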
I don't quite understand the problem yet. How precisely are you detecting what path is being used for the IOs? A method independent of LVM? (LVM commands can be confusing when filters are applied; that is something we're fixing in the next snapshot.) Please also be aware of bug 1215228: when there are filters, the path displayed is the path LVM thinks the kernel should be using, not necessarily the path it is actually using, which can lead to confusion about what the system is really doing.

Hi Alas, thanks a lot for the information. Let me try with Snapshot 2. If the issue is resolved, we will close the case. Thanks much.

Please try the latest version, lvm2-2.02.143-6.el6, which contains recent fixes (that is Snapshot 4). The packages are also available here: http://people.redhat.com/~prajnoha/lvm2/rpms/RHEL6/bz1322676/

I am not seeing any issue in Snapshot 2:

[root@vmr710-02 /]# lvm version
  LVM version:     2.02.143(2)-RHEL6 (2016-03-22)
  Library version: 1.02.117-RHEL6 (2016-03-22)
  Driver version:  4.33.1

So I am closing the case. Thanks a lot for all your efforts.
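On the question above about checking the IO path by a method independent of LVM, one sketch (using the vgroot/rootvol names from this report and a hypothetical 253:4 major:minor pair) is to query device-mapper and sysfs directly:

# The DM table of the active LV; the major:minor pairs in it identify
# the underlying device(s) the kernel actually submits IO to
dmsetup table vgroot-rootvol

# The same information as a dependency list
dmsetup deps vgroot-rootvol

# Resolve a major:minor pair (hypothetically 253:4) to a device name
cat /sys/dev/block/253:4/uevent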