Bug 1330933 - mirror creation with tag fails during zeroing when activation volume_list has corresponding tag restriction
Summary: mirror creation with tag fails during zeroing when activation volume_list has corresponding tag restriction
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: lvm2
Version: 6.8
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: LVM and device-mapper development team
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks: 1331889
 
Reported: 2016-04-27 10:45 UTC by michal novacek
Modified: 2017-03-21 12:02 UTC
CC List: 9 users

Fixed In Version: lvm2-2.02.143-9.el6
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-03-21 12:02:47 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
System:       Red Hat Product Errata
ID:           RHBA-2017:0798
Private:      0
Priority:     normal
Status:       SHIPPED_LIVE
Summary:      lvm2 bug fix update
Last Updated: 2017-03-21 12:51:51 UTC

Description michal novacek 2016-04-27 10:45:19 UTC
Description of problem:

I was unable to activate a raid LV in an HA-LVM setup after creating it. See the
additional info for the commands used.

I'm filing this as an lvm bug because no pacemaker resource agent is
involved yet.

This also affects the other --type raidX variants.

Version-Release number of selected component (if applicable):
lvm2-2.02.143-7.el6.x86_64

How reproducible: always

Steps to Reproduce:
1. See additional info

Actual results: The mirrored LV cannot be activated.

Expected results: The mirrored LV is active.


Additional info:

[root@virt-145 ~]# lvs -a
  LV      VG         Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv_root vg_virt163 -wi-ao----   6.79g                                                    
  lv_swap vg_virt163 -wi-ao---- 824.00m                                                    

[root@virt-145 ~]# vgs -a
  VG         #PV #LV #SN Attr   VSize  VFree
  raidvg       6   0   0 wz--n- 29.95g 29.95g
  vg_virt163   1   2   0 wz--n-  7.59g     0

[root@virt-145 ~]# lvcreate -ay --addtag abcd \
    --config activation{volume_list=[\"@$(hostname -f)\"]} \
    --name raidlv --type raid1 --extents 100%VG --nosync raidvg
  WARNING: New raid1 won't be synchronised. Don't read what you didn't write!
  Volume "raidvg/raidlv_rmeta_0" is not active locally.
  Failed to zero raidvg/raidlv_rmeta_0
[root@virt-145 ~]# echo $?
5
[root@virt-145 ~]# lvs -a
  LV                VG         Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  raidlv            raidvg     Rwi---r---      0                                                    
  [raidlv_rimage_0] raidvg     Iwi---r---  14.97g                                                    
  [raidlv_rimage_1] raidvg     Iwi---r---  14.97g                                                    
  raidlv_rmeta_0    raidvg     ewi---r---   4.00m                                                    
  raidlv_rmeta_1    raidvg     ewi---r---   4.00m                                                    
  lv_root           vg_virt163 -wi-ao----   6.79g                                                    
  lv_swap           vg_virt163 -wi-ao---- 824.00m
 
[root@virt-145 ~]# lvs @$(hostname -f)
  LV     VG     Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  raidlv raidvg Rwi---r---    0
 
[root@virt-145 ~]# lvchange -ay --config activation{volume_list=[\"@$(hostname -f)\"]} raidvg/raidlv
[root@virt-145 ~]# echo $?
0
[root@virt-145 ~]# lvdisplay /dev/vg/raidlv | grep 'LV Status'
  LV Status              NOT available



---

[root@virt-145 ~]# lvmconfig 
config {
        checks=1
        abort_on_errors=0
        profile_dir="/etc/lvm/profile"
}
local {
}
dmeventd {
        mirror_library="libdevmapper-event-lvm2mirror.so"
        snapshot_library="libdevmapper-event-lvm2snapshot.so"
        thin_library="libdevmapper-event-lvm2thin.so"
}
activation {
        volume_list=["vg_virt163","@virt-145.cluster-qe.lab.eng.brq.redhat.com"]
        checks=0
        udev_sync=1
        udev_rules=1
        verify_udev_operations=0
        retry_deactivation=1
        missing_stripe_filler="error"
        use_linear_target=1
        reserved_stack=64
        reserved_memory=8192
        process_priority=-18
        raid_region_size=512
        readahead="auto"
        raid_fault_policy="warn"
        mirror_image_fault_policy="remove"
        mirror_log_fault_policy="allocate"
        snapshot_autoextend_threshold=100
        snapshot_autoextend_percent=20
        thin_pool_autoextend_threshold=100
        thin_pool_autoextend_percent=20
        use_mlockall=0
        monitoring=1
        polling_interval=15
        activation_mode="degraded"
}
global {
        umask=63
        test=0
        units="h"
        si_unit_consistency=1
        suffix=1
        activation=1
        proc="/proc"
        etc="/etc"
        locking_type=1
        wait_for_locks=1
        fallback_to_clustered_locking=1
        fallback_to_local_locking=1
        locking_dir="/var/lock/lvm"
        prioritise_write_locks=1
        abort_on_internal_errors=0
        detect_internal_vg_cache_corruption=0
        metadata_read_only=0
        mirror_segtype_default="mirror"
        raid10_segtype_default="mirror"
        sparse_segtype_default="snapshot"
        use_lvmetad=0
        use_lvmlockd=0
        system_id_source="none"
        use_lvmpolld=0
}
shell {
        history_size=100
}
backup {
        backup=1
        backup_dir="/etc/lvm/backup"
        archive=1
        archive_dir="/etc/lvm/archive"
        retain_min=10
        retain_days=30
}
log {
        verbose=0
        silent=0
        syslog=1
        overwrite=0
        level=0
        indent=1
        command_names=0
        prefix="  "
        activation=0
        debug_classes=["memory","devices","activation","allocation","lvmetad","metadata","cache","locking","lvmpolld"]
}
allocation {
        maximise_cling=1
        use_blkid_wiping=0
        wipe_signatures_when_zeroing_new_lvs=1
        mirror_logs_require_separate_pvs=0
        cache_pool_metadata_require_separate_pvs=0
        thin_pool_metadata_require_separate_pvs=0
}
devices {
        dir="/dev"
        scan="/dev"
        obtain_device_list_from_udev=0
        external_device_info_source="none"
        preferred_names=["^/dev/mpath/","^/dev/mapper/mpath","^/dev/[hs]d"]
        cache_dir="/etc/lvm/cache"
        cache_file_prefix=""
        write_cache_state=1
        sysfs_scan=1
        multipath_component_detection=1
        md_component_detection=1
        fw_raid_component_detection=0
        md_chunk_alignment=1
        data_alignment_detection=1
        data_alignment=0
        data_alignment_offset_detection=1
        ignore_suspended_devices=0
        ignore_lvm_mirrors=1
        disable_after_error_count=0
        require_restorefile_with_uuid=1
        pv_min_size=2048
        issue_discards=0
}

Comment 2 michal novacek 2016-04-27 15:32:48 UTC
> [root@virt-145 ~]# \
>    lvcreate -ay --addtag abcd \
>    --config activation{volume_list=[\"@$(hostname -f)\"]} \
>    --name raidlv --type raid1 --extents 100%VG --nosync raidvg
> 

There is a mistake in the lvcreate command above: the --addtag parameter should not be "abcd" but "$(hostname -f)". The correct value was lost when filing the bug.
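
For reference, the reproducer with the corrected tag (the same command form later used in the verification) would be:

  lvcreate -ay --addtag $(hostname -f) \
      --config activation{volume_list=[\"@$(hostname -f)\"]} \
      --name raidlv --type raid1 --extents 100%VG --nosync raidvg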

Comment 3 Alasdair Kergon 2016-04-27 17:29:06 UTC
So the case as reported is behaving correctly: that command fails because it expects to be able to activate the LV, but the specified configuration prevents that.


Now, in the case where the *same* tag is provided, the first activation for zeroing still fails: the device needs to have the tag applied temporarily and then removed afterwards. We also need to check all other code that performs temporary activations for similar purposes.
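
For context, an "@tag" entry in activation/volume_list only permits activation of LVs that carry that tag. As a minimal illustration with hypothetical LVs (assume vg/lv1 is tagged "mytag" and vg/lv2 is not; neither is taken from this report), the first command activates lv1 while the second is skipped:

  lvchange -ay --config activation{volume_list=[\"@mytag\"]} vg/lv1
  lvchange -ay --config activation{volume_list=[\"@mytag\"]} vg/lv2

In the reported case, the sub-LV activated for zeroing (raidvg/raidlv_rmeta_0) presumably does not carry the tag yet, hence the temporary tag application described above.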

Comment 4 Corey Marthaler 2016-04-27 18:19:30 UTC
Appears related to bug 1161347.

Comment 5 Zdenek Kabelac 2016-04-29 14:17:59 UTC
Yep - as a workaround for the initial volume creation, use an empty volume list,
then deactivate, add the tag, and activate again later.
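
A hedged sketch of that workaround sequence using the names from this report ("empty volume list" is read here as creating under a volume_list override that does not restrict this VG):

  lvcreate -ay --config activation{volume_list=[\"raidvg\"]} \
      --name raidlv --type raid1 --extents 100%VG --nosync raidvg
  lvchange -an raidvg/raidlv
  lvchange --addtag $(hostname -f) raidvg/raidlv
  lvchange -ay --config activation{volume_list=[\"@$(hostname -f)\"]} raidvg/raidlv

The first command creates and zeroes the LV while activation is not tag-restricted; the LV is then deactivated, tagged for HA-LVM, and activated again under the tag-restricted configuration.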

Comment 6 Alasdair Kergon 2016-04-29 20:01:34 UTC
Workaround for now, continuing to respect tags.  Sequence of operations/failure path remains poor.

https://www.redhat.com/archives/lvm-devel/2016-April/msg00147.html

https://git.fedorahosted.org/cgit/lvm2.git/patch/?id=c76df666c903b59f069292c4c1507b1ac37a5590

Comment 7 Alasdair Kergon 2016-04-29 23:39:09 UTC
lvcreate --addtag tag1 now works if volume_list allows tag1 to be activated.

Comment 8 Alasdair Kergon 2016-04-29 23:39:48 UTC
A wider review is pushed to upstream bug 1331889.

Comment 9 Corey Marthaler 2016-05-05 15:42:43 UTC
Will this fix also address the lvconvert scenarios that fail with respect to HA-LVM volumes?

# Here the existing mirror log failed with a log fault policy of "allocate", so a new log should have been properly added.

May  5 10:01:27 host-077 lvm[5887]: Monitoring mirror device revolution_9-mirror_1 for events.
May  5 10:01:27 host-077 lvm[5887]: Mirror status: 1 of 3 images failed.
May  5 10:01:27 host-077 lvm[5887]: Mirror log status: 1 of 1 images failed.
May  5 10:01:27 host-077 lvm[5887]: Trying to up-convert to 2 images, 1 logs.
May  5 10:01:27 host-077 lvm[5887]: Volume "revolution_9/mirror_1_mlog" is not active locally.
May  5 10:01:27 host-077 lvm[5887]: Aborting. Failed to wipe mirror log.
May  5 10:01:27 host-077 lvm[5887]: Failed to initialise mirror log.
May  5 10:01:27 host-077 lvm[5887]: Trying to up-convert to 2 images, 0 logs.
May  5 10:01:35 host-077 lvm[5887]: Monitoring mirror device revolution_9-mirror_1 for events.
May  5 10:01:35 host-077 lvm[5887]: WARNING: Failed to replace 1 of 1 logs in volume mirror_1
May  5 10:01:36 host-077 lvm[5887]: Repair of mirrored device revolution_9-mirror_1 finished successfully.

Comment 10 Alasdair Kergon 2016-05-06 18:22:18 UTC
I'm not sure exactly what sequence of commands HA LVM is using.  There's also bug 1332909 which might have some connection - or not.

Comment 11 Alasdair Kergon 2016-05-06 18:26:55 UTC
If you find similar failures with lvconvert commands after using the new code here, please open fresh bug(s) and I'll see whether a similar fix can be applied to the other code paths.

Comment 15 Roman Bednář 2016-11-18 10:19:41 UTC
Marking verified with the latest rpms. Activation of a mirror LV now works properly when the volume_list value matches a tag on the LV. The workaround of using an empty volume list for LV creation is no longer needed.

-----------------------------------------------------------------------
Before fix:

lvm2-2.02.143-7.el6_8.1

# lvcreate -ay --addtag $(hostname -f) \
> --config activation{volume_list=[\"@$(hostname -f)\"]} \
> --name raidlv --type raid1 --extents 100%VG --nosync vg
  WARNING: New raid1 won't be synchronised. Don't read what you didn't write!
  Volume "vg/raidlv_rmeta_0" is not active locally.
  Failed to zero vg/raidlv_rmeta_0

-----------------------------------------------------------------------
After fix:

lvm2-2.02.143-9.el6

# lvcreate -ay --addtag $(hostname -f) \
> --config activation{volume_list=[\"@$(hostname -f)\"]} \
> --name raidlv --type raid1 --extents 100%VG --nosync vg
  WARNING: New raid1 won't be synchronised. Don't read what you didn't write!

# lvs -a
  LV                VG         Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  raidlv            vg         Rwi-a-r---  39.99g                                    100.00          
  [raidlv_rimage_0] vg         iwi-aor---  39.99g                                                    
  [raidlv_rimage_1] vg         iwi-aor---  39.99g                                                    
  [raidlv_rmeta_0]  vg         ewi-aor---   4.00m                                                    
  [raidlv_rmeta_1]  vg         ewi-aor---   4.00m                                                    
  ...

-----------------------------------------------------------------------
2.6.32-663.el6.x86_64

lvm2-2.02.143-9.el6    BUILT: Thu Nov 10 10:21:10 CET 2016
lvm2-libs-2.02.143-9.el6    BUILT: Thu Nov 10 10:21:10 CET 2016
lvm2-cluster-2.02.143-9.el6    BUILT: Thu Nov 10 10:21:10 CET 2016
udev-147-2.73.el6_8.2    BUILT: Tue Aug 30 15:17:19 CEST 2016
device-mapper-1.02.117-9.el6    BUILT: Thu Nov 10 10:21:10 CET 2016
device-mapper-libs-1.02.117-9.el6    BUILT: Thu Nov 10 10:21:10 CET 2016
device-mapper-event-1.02.117-9.el6    BUILT: Thu Nov 10 10:21:10 CET 2016
device-mapper-event-libs-1.02.117-9.el6    BUILT: Thu Nov 10 10:21:10 CET 2016
device-mapper-persistent-data-0.6.2-0.1.rc7.el6    BUILT: Tue Mar 22 14:58:09 CET 2016
cmirror-2.02.143-9.el6    BUILT: Thu Nov 10 10:21:10 CET 2016

Comment 17 errata-xmlrpc 2017-03-21 12:02:47 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2017-0798.html

