Bug 1227742 - [New] - Cannot see lv,vg and pv on the system when user tries to create a brick from UI
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: rhsc
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: RHGS 3.1.0
Assignee: Timothy Asir
QA Contact: RamaKasturi
URL:
Whiteboard:
Depends On: 1230495
Blocks: 1202842 1227788
 
Reported: 2015-06-03 12:13 UTC by RamaKasturi
Modified: 2023-09-14 03:00 UTC
CC List: 14 users

Fixed In Version: rhsc-3.1.0-59, vdsm-4.16.20-1.1
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-07-29 05:31:22 UTC
Embargoed:


Attachments
Attaching audit.log file (700.08 KB, text/plain), 2015-06-04 11:43 UTC, RamaKasturi


Links
Red Hat Product Errata RHEA-2015:1494 (SHIPPED_LIVE): Red Hat Gluster Storage Console 3.1 Enhancement and bug fixes, last updated 2015-07-29 09:24:02 UTC
oVirt gerrit 42142 (master, MERGED): gluster: refresh lvm devices after brick create
oVirt gerrit 42365 (ovirt-3.5-gluster, MERGED): gluster: refresh lvm devices after brick create

Description RamaKasturi 2015-06-03 12:13:38 UTC
Description of problem:
When the user creates a brick from the UI, brick creation is successful, but once the brick is created, the LV, VG, and PV are not visible on the backend system.

Once the system reboots and comes back up, the mounted brick is not seen in the df -Th output, although an /etc/fstab entry is present.
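For reference, a minimal sketch of the checks described above (the "brick" grep pattern is illustrative only, not taken from this setup):

pvs && vgs && lvs        # the newly created PV, VG and LV do not appear in the output
df -Th | grep brick      # after the reboot, the brick mount is missing
grep brick /etc/fstab    # although the corresponding fstab entry is still present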

Version-Release number of selected component (if applicable):
rhsc-3.1.0-0.58.master.el6.noarch

How reproducible:
Always

Steps to Reproduce:
1. Add a new RHS node to the console.
2. Select the device from which the brick needs to be created.
3. Click on Create Brick.
4. Reboot the system; once it comes back up, the mounted XFS partition (brick) is not visible.

Actual results:
After step 3, the brick is created successfully, but on the backend system the LV, VG, and PV are not visible.
After step 4, the mounted brick is missing from the df -Th output even though the /etc/fstab entry is present.

Expected results:
Brick creation should succeed, the user should be able to see all of the LVs, VGs, and PVs, and the XFS partition should be present on the system.

Additional info:

Comment 2 Ramesh N 2015-06-03 14:01:10 UTC
I am not able to reproduce this issue consistently, either in my setup or in the setup given by Kasturi. After creating a brick, commands like vgs and pvs sometimes do not display the VGs and PVs. However, if I try to remove the hidden VG using the vgremove command, it works, and after removing the VG, the pvs command displays the PVs. This is strange and we need to investigate it further.

Comment 3 RamaKasturi 2015-06-03 17:21:23 UTC
I am setting up new nodes; I will try to reproduce the issue there and update the bug.

Comment 4 Timothy Asir 2015-06-04 05:54:25 UTC
Can you provide a few more details:

1) What is the output of blkid?
2) Are you facing this issue while creating bricks on normal devices or
   on RAID devices?
3) What is the LVM version?
4) Are you able to see the devices if you run vgscan or lvscan, and what
   output do they show?

Comment 5 RamaKasturi 2015-06-04 07:08:06 UTC
Here are the answers to the questions you asked.

1) Output of blkid:
-------------------

/dev/mapper/vg_dhcp37111-lv_root: UUID="9c1fca03-401a-481f-b12b-2f4b3066c7a3" TYPE="ext4" 
/dev/vda1: UUID="7cff5f2e-1ffb-4023-9cca-8edbe987ea10" TYPE="ext4" 
/dev/vda2: UUID="BI4rIV-otPh-WGD9-4lwl-uLT3-hC4e-bv2WEO" TYPE="LVM2_member" 
/dev/mapper/vg_dhcp37111-lv_swap: UUID="9cf9adea-9646-4a57-9449-fd31e3646773" TYPE="swap" 
/dev/vdb1: UUID="QpHvsA-qZe0-gdo9-Hdv6-KbAf-yJTc-zYwmdb" TYPE="LVM2_member" 
/dev/vdc1: UUID="FscXk7-4Z7t-b305-YnD0-XX2f-diJP-qfdQGj" TYPE="LVM2_member" 
/dev/vdd1: UUID="TGxshA-f89y-sXA1-C7Ul-EWuK-REHN-YULw9Z" TYPE="LVM2_member" 
/dev/mapper/vg--brick1-pool--brick1_tdata: UUID="03ee4cb9-4283-4f99-81f8-f9fda7fcbaf3" TYPE="xfs" 
/dev/mapper/vg--brick1-pool--brick1-tpool: UUID="03ee4cb9-4283-4f99-81f8-f9fda7fcbaf3" TYPE="xfs" 
/dev/mapper/vg--brick1-pool--brick1: UUID="03ee4cb9-4283-4f99-81f8-f9fda7fcbaf3" TYPE="xfs" 
/dev/mapper/vg--brick1-brick1: UUID="03ee4cb9-4283-4f99-81f8-f9fda7fcbaf3" TYPE="xfs" 
/dev/mapper/vg--brick2-pool--brick2_tdata: UUID="66adec5a-917b-4e00-aedd-e9be718ea225" TYPE="xfs" 
/dev/mapper/vg--brick2-pool--brick2-tpool: UUID="66adec5a-917b-4e00-aedd-e9be718ea225" TYPE="xfs" 
/dev/mapper/vg--brick2-pool--brick2: UUID="66adec5a-917b-4e00-aedd-e9be718ea225" TYPE="xfs" 
/dev/mapper/vg--brick2-brick2: UUID="66adec5a-917b-4e00-aedd-e9be718ea225" TYPE="xfs" 
/dev/mapper/vg--brick3-pool--brick3_tdata: UUID="1cf5b057-c9cb-4c17-9d25-e21ddaac2057" TYPE="xfs" 
/dev/mapper/vg--brick3-pool--brick3-tpool: UUID="1cf5b057-c9cb-4c17-9d25-e21ddaac2057" TYPE="xfs" 
/dev/mapper/vg--brick3-pool--brick3: UUID="1cf5b057-c9cb-4c17-9d25-e21ddaac2057" TYPE="xfs" 
/dev/mapper/vg--brick3-brick3: UUID="1cf5b057-c9cb-4c17-9d25-e21ddaac2057" TYPE="xfs" 

2) Are you facing this issue while creating bricks on normal devices or
   on RAID devices?
Normal devices.

3) What is the LVM version?

[root@dhcp37-111 ~]# rpm -qa | grep lvm
lvm2-libs-2.02.111-2.el6.x86_64
lvm2-2.02.111-2.el6.x86_64
mesa-private-llvm-3.4-3.el6.x86_64

4) Are you able to see the devices if you run vgscan or lvscan, and what
   output do they show?

Yes, I am able to see the devices after I run vgscan.

[root@dhcp37-111 ~]# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "vg-brick3" using metadata type lvm2
  Found volume group "vg-brick2" using metadata type lvm2
  Found volume group "vg-brick1" using metadata type lvm2
  Found volume group "vg_dhcp37111" using metadata type lvm2

Comment 6 RamaKasturi 2015-06-04 07:12:22 UTC
Blivet logs when bricks are created.

http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/rhsc/1227742/

Comment 8 RamaKasturi 2015-06-04 07:51:59 UTC
prasanth,

   I have tried putting SELinux in permissive mode on one of the nodes I added to the console, and I am able to reproduce the issue there too. I could resolve the issue of the VG, PV, and LV not being displayed by running vgscan on the node. Based on this exercise, I do not think this is an SELinux issue.

Thanks
kasturi.

Comment 9 RamaKasturi 2015-06-04 11:41:50 UTC
Hi,

  I am seeing the following AVCs in my audit.log, but I am able to perform all operations without any issues. Can you please look at these and let me know whether they need to be fixed?

#============= glusterd_t ==============
allow glusterd_t fixed_disk_device_t:blk_file { read write getattr open ioctl };
allow glusterd_t fsadm_exec_t:file { execute execute_no_trans };
allow glusterd_t glusterd_var_lib_t:file { execute execute_no_trans };

#!!!! This avc is allowed in the current policy
allow glusterd_t hostname_exec_t:file { execute execute_no_trans };
allow glusterd_t kernel_t:system ipc_info;
#!!!! The source type 'glusterd_t' can write to a 'chr_file' of the following types:
# initrc_devpts_t, null_device_t, zero_device_t, fuse_device_t, devtty_t, ptynode, ttynode, tty_device_t, devpts_t

allow glusterd_t lvm_control_t:chr_file { read write getattr open };
allow glusterd_t lvm_exec_t:file { execute execute_no_trans };

#!!!! This avc is a constraint violation.  You will need to add an attribute to either the source or target type to make it work.
#Contraint rule: 
allow glusterd_t lvm_lock_t:file create;
#!!!! The source type 'glusterd_t' can write to a 'fifo_file' of the following type:
# glusterd_brick_t

allow glusterd_t lvm_var_run_t:fifo_file { write read getattr open lock };
allow glusterd_t node_t:rawip_socket node_bind;
allow glusterd_t self:capability { sys_nice ipc_lock net_raw };
allow glusterd_t self:process setsched;
allow glusterd_t self:rawip_socket { bind create };
allow glusterd_t self:sem { unix_read write unix_write associate read destroy create };
allow glusterd_t ssh_keygen_exec_t:file { execute execute_no_trans };
allow glusterd_t var_run_t:sock_file { write unlink };

#============= logrotate_t ==============

#!!!! This avc is allowed in the current policy
allow logrotate_t virt_cache_t:dir read;

Thanks
kasturi

Comment 10 RamaKasturi 2015-06-04 11:43:45 UTC
Created attachment 1034673 [details]
Attaching audit.log file

Comment 11 Milos Malik 2015-06-04 11:49:28 UTC
Either the machine was in permissive mode when AVCs appeared:

# getenforce

or the gluster domain is a permissive one:

# semanage permissive -l | grep glusterd_t

Comment 12 RamaKasturi 2015-06-04 11:53:24 UTC
Machine was always in Enforcing mode. It was never put in permissive mode.

[root@dhcp37-216 ~]# getenforce
Enforcing
[root@dhcp37-216 ~]# semanage permissive -l | grep glusterd_t
glusterd_t

Comment 14 Ramesh N 2015-06-05 05:12:06 UTC
Running 'vgscan' after creating the brick makes the VGs visible.
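As a rough sketch of the workaround on the affected node (VG names follow the vg-brick* pattern shown in comment 5):

vgscan    # rescan all block devices and rebuild the LVM metadata cache
vgs       # the vg-brick* volume groups are now listed
pvs       # and the corresponding physical volumes appear as well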

Comment 16 Bala.FA 2015-06-10 10:43:53 UTC
(In reply to Ramesh N from comment #14)
> Running 'vgscan' after creating the brick makes the VGs visible.

It's unusual to run 'vgscan' every time. The vgscan man page says:

"In LVM2, vgscans take place automatically; but you might still need to run one explicitly after changing hardware."

We should identify why the VGs disappear.

Comment 17 Ramesh N 2015-06-10 11:16:47 UTC
(In reply to Bala.FA from comment #16)
> (In reply to Ramesh N from comment #14)
> > Running 'vgscan' after creating the brick makes the VGs visible.
> 
> It's unusual to run 'vgscan' every time. The vgscan man page says:

I feel the same; this was a workaround suggested to QE so that testing could continue.
More updates on this bug are available in the dependent bug bz#1227788.

> "In LVM2, vgscans take place automatically; but you might still need to run
> one explicitly after changing hardware."
> 
> We should identify why the VGs disappear.

Comment 19 Timothy Asir 2015-06-12 15:15:14 UTC
My observations and findings on this issue are as follows:

1) RHEL 6.6:
   Tried creating a VG and PV using python-blivet.
   Unable to see the VG devices when I run vgs.
   But after I run vgscan, the bug does not appear again,
   even after any number of reboots.


2) RHEL 6.6 with vdsm:
   Tried creating a VG and PV using python-blivet.
   Unable to see the VG devices when I run vgs,
   and the devices are not mounted on the next reboot.
   But once I run vgscan after the first brick create,
   the bug never occurs again, even after any number of reboots.

3) Plain RHEL 6.7 without vdsm installed, updated to the latest kernel:
   Tried creating a VG and PV using python-blivet and
   observed this issue most of the time.
   But once I run vgscan or pvs, the issue no longer occurs.
   I could create any number of bricks or LVM devices and see
   the devices when running vgs without any issues, even after
   any number of reboots, and the bricks got mounted properly.

4) RHEL 6.7 with vdsm installed, vgscan executed only once
   after registration and kernel update:
   Observed this issue until vgscan or pvs is run at least once
   after the first brick or PV creation.

I tried the above kinds of setups many times on plain RHEL 6.7 or RHEL 6.6
nodes, with and without vdsm installed, and was able to see the issue
until I ran vgscan at least once after the first brick creation.

This issue is more likely on PVs which are not identified by udev. We are creating a PV without specifying any metadata size; LVM should update the details automatically, but it fails here. This can be resolved by running an optional service called lvm2-lvmetad, which was suggested by Mulhern. It works without any issue when I run this service after enabling it in lvm.conf on RHEL 6. But when I checked with the vdsm team, I came to know that VDSM disables this service for another valid reason. I have not tried whether running pvscan instead of pvs or vgscan helps. This issue is not observed on RHEL 7. So I feel refreshing the devices could be the fix. I have raised a bug against LVM for this issue.
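For reference, enabling lvmetad on RHEL 6 would look roughly like the following; this is only an illustration of the alternative mentioned above, since VDSM intentionally keeps the service disabled:

# set use_lvmetad = 1 in the global section of /etc/lvm/lvm.conf, then:
service lvm2-lvmetad start      # start the metadata caching daemon now
chkconfig lvm2-lvmetad on       # keep it enabled across reboots
pvscan --cache                  # repopulate the lvmetad cache from all devices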

Kernel version after update:
[root@dhcp42-45 ~]# uname -r
2.6.32-567.el6.x86_64

Comment 20 Timothy Asir 2015-06-13 14:23:41 UTC
Sent a workaround patch for this issue: https://gerrit.ovirt.org/#/c/42142/
This will be removed once this issue is fixed in LVM and Blivet.
Blivet has to check whether it is passing any argument that is causing the problem.
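Conceptually (this is only a sketch; the actual vdsm change is in the gerrit links above), the workaround makes the node refresh its LVM view immediately after the brick is created, so no manual step is needed:

# run on the storage node right after the blivet-driven brick create
vgscan       # refresh the LVM metadata so the new PV/VG/LV become visible
vgs && lvs   # the newly created brick devices are now listed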

Comment 21 Bala.FA 2015-06-14 02:07:17 UTC
1. Have you checked whether vdsm enables multipath settings on those LVM devices after reboot?

2. How consistently is this behaviour reproducible?

Comment 22 RamaKasturi 2015-06-18 09:07:48 UTC
Verified and works fine with build rhsc-3.1.0-0.60.el6.noarch.

Python-blivet version:
***************************
python-blivet-1.0.0.1-1.el6rhs.noarch.

lvm version:
*****************
rpm -qa | grep lvm
lvm2-libs-2.02.118-2.el6.x86_64
mesa-private-llvm-3.4-3.el6.x86_64
lvm2-2.02.118-2.el6.x86_64

kernel:
***************
uname -a
Linux birdman.lab.eng.blr.redhat.com 2.6.32-569.el6.x86_64 #1 SMP Mon Jun 15 16:04:10 EDT 2015 x86_64 x86_64 x86_64 GNU/Linux


When the user creates a brick using RAID devices, VM disks, or logical partitions, all the VGs, PVs, and LVs are shown.

Comment 23 errata-xmlrpc 2015-07-29 05:31:22 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHEA-2015-1494.html

Comment 24 Red Hat Bugzilla 2023-09-14 03:00:06 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days

