Bug 1203837 - [RFE] Support for XFS-based local storage domains on 7.x based RHEV-H
Summary: [RFE] Support for XFS-based local storage domains on 7.x based RHEV-H
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-node
Version: 3.4.5
Hardware: All
OS: Linux
Priority: high
Severity: high
Target Milestone: ovirt-3.6.0-rc
Target Release: 3.6.0
Assignee: Fabian Deutsch
QA Contact: Ying Cui
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-03-19 18:33 UTC by Allie DeVolder
Modified: 2019-10-10 09:46 UTC (History)
21 users

Fixed In Version: ovirt-node-3.3.0-0.4.20150906git14a6024.el7ev
Doc Type: Enhancement
Doc Text:
With this update, XFS-based storage for local storage domains is now supported on the Red Hat Enterprise Virtualization Hypervisor. Red Hat Enterprise Virtualization 3.6 is not required to use XFS-based storage domains.
Clone Of:
Environment:
Last Closed: 2016-03-09 14:18:45 UTC
oVirt Team: Node
Target Upstream Version:
Embargoed:
sherold: Triaged+


Attachments
Steps on RHEV-H FC with xfs (14.90 KB, text/plain)
2015-11-17 08:39 UTC, Ying Cui
Steps on RHEV-H iSCSI as xfs local domain (11.08 KB, text/plain)
2015-11-19 11:04 UTC, Ying Cui


Links
System ID Private Priority Status Summary Last Updated
Red Hat Knowledge Base (Solution) 2161881 0 None None None 2016-02-15 12:57:07 UTC
Red Hat Product Errata RHBA-2016:0378 0 normal SHIPPED_LIVE ovirt-node bug fix and enhancement update for RHEV 3.6 2016-03-09 19:06:36 UTC
oVirt gerrit 38962 0 master MERGED Add xfs support Never

Description Allie DeVolder 2015-03-19 18:33:08 UTC
Add XFS support to RHEV-H.

- Why does the customer need this? (List the business requirements here)

The customer would like to use local storage domains larger than 16GB. For this to be possible, XFS must be in the RHEV-H kernel and supported.

- How would the customer like to achieve this? (List the functional
requirements here)

   - Add XFS to the RHEV-H kernel
   - Support XFS as the file system for local storage domains

Comment 2 Itamar Heim 2015-03-22 16:21:24 UTC
Allon - any considerations from the storage team, or should this "just work" (as we just consume local paths, not create the fs itself)?
(To rephrase, would this be supported on a RHEL host already?)

Comment 3 Allon Mureinik 2015-03-22 16:34:52 UTC
(In reply to Itamar Heim from comment #2)
> Allon - any considerations from the storage team, or should this "just work"
> (as we just consume local paths, not create the fs itself)?
> (To rephrase, would this be supported on a RHEL host already?)
A local FS is essentially a POSIX FS that we assume is accessed from only one host, and whose mounting is someone else's responsibility.

I'd have a quick QA cycle on it, but I have no reason to expect any problems.

Or, TL;DR - "should just work".

Comment 4 Fabian Deutsch 2015-03-23 10:08:06 UTC
Let me note that, on the RHEV-H side, we need to whitelist the appropriate xfs kernel modules and the necessary xfsprogs/userspace tools. This is done by patch 38962; a quick sanity check on an installed host is sketched below.
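
A minimal sketch for checking that the whitelisting took effect on an installed RHEV-H (the kernel version, module path, and package NVR will of course differ per build):

# uname -r                    # running kernel version
# modinfo -n xfs              # resolves to xfs.ko under /lib/modules/$(uname -r)/kernel/fs/xfs/
# rpm -q xfsprogs             # userspace tools package
# which mkfs.xfs xfs_repair   # confirm the tools are on the PATH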

Comment 5 Ying Cui 2015-04-28 07:22:06 UTC
Fabian, for this bug, patch 38962 adds xfs to the RHEV-H kernel. I have one concern: how do we test it against a storage connection? How does the user get an xfs file system connected to a local storage domain? As we know, RHEV-H uses an ext4 file system and connects to the local storage domain at /data/images/rhev.

I will give qa_ack+ once QE has thought this bug through and is clear on how to verify it. Thanks.

Comment 6 Fabian Deutsch 2015-04-28 07:50:53 UTC
The use case I can imagine is:

1. Install RHEV-H
2. Have a separate storage disk formatted with XFS
3. Mount separate storage from 2. into /data


Allan, can you please clarify whether the customer wants more than 16GB or more than 16TB of local storage?

Because, according to this table, https://access.redhat.com/solutions/1532, ext4 supports filesystem sizes up to 16TB.

Comment 7 Ying Cui 2015-04-28 08:29:28 UTC
(In reply to Fabian Deutsch from comment #6)
> The use case I can imagine is:
> 
> 1. Install RHEV-H
> 2. Have a separate storage disk formatted with XFS
> 3. Mount separate storage from 2. into /data

qa_ack+, Thanks.

One more point: RHEV-H will then support ext4 local storage and XFS local storage at the same time on the same host, with storage type Local. I am not sure whether a vdsm storage RFE is needed for this as well?

Comment 8 Fabian Deutsch 2015-04-28 08:44:25 UTC
(In reply to Ying Cui from comment #7)
…
> One more point: RHEV-H will then support ext4 local storage and XFS local
> storage at the same time on the same host, with storage type Local. I am not
> sure whether a vdsm storage RFE is needed for this as well?

To me this bug is only about adding the platform support for XFS.
I expect that the use case described above is an exception.

For now I would not open a storage RFE for vdsm.

Comment 9 Ying Cui 2015-04-28 08:52:34 UTC
(In reply to Fabian Deutsch from comment #8)
> To me this bug is only about adding the platform support for XFS.
> I expect that the use case described above is an exception.
> 
> For now I would not open a storage RFE for vdsm.

ack, thanks.

Comment 10 Allon Mureinik 2015-04-28 10:41:58 UTC
(In reply to Ying Cui from comment #7)
> One more point: RHEV-H will then support ext4 local storage and XFS local
> storage at the same time on the same host, with storage type Local. I am not
> sure whether a vdsm storage RFE is needed for this as well?

As far as VDSM is concerned, a local storage domain is:
1. A POSIX-compliant file system (which XFS is)
2. A file system that does not need to be mounted by VDSM (i.e., something other than VDSM takes care of it)
3. A file system that VDSM can assume no other host attempts to write to

There's no dev effort needed from VDSM's side to support XFS (at least not up-front), but if this is a customer request, it may be useful to have a test-only BZ for RHEV QE to also verify this from their side. (A minimal sketch of what points 2 and 3 mean in practice follows.)
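
To make points 2 and 3 concrete, here is a rough sketch of what the host administrator (not VDSM) is expected to do before a local domain is pointed at the path; /dev/sdb1 is only an example device:

# echo '/dev/sdb1 /data/images/rhev xfs defaults,noatime 0 0' >> /etc/fstab   # the host, not VDSM, owns the mount
# mount /data/images/rhev                                                     # VDSM only consumes the mounted path
# chown 36:36 /data/images/rhev                                               # vdsm:kvm, so VDSM can write to it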

Comment 16 Fabian Deutsch 2015-09-24 08:30:33 UTC
The XFS support would be as follows (a rough shell sketch follows the list):

1. Install RHEV-H
2. On another disk (USB, iSCSI, FC), manually create an xfs filesystem
3. Mount it at the path later used for local storage
4. Set up local storage in Engine
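
With a spare disk /dev/sdb (the device name is only an example; a multipath device or an LV works the same way, as the test runs below show), steps 2-3 could look roughly like this:

# mkfs.xfs -f /dev/sdb1                      # step 2: create the xfs filesystem (after partitioning the disk)
# mount -t xfs /dev/sdb1 /data/images/rhev   # step 3: mount it at the local-storage path
# chown 36:36 /data/images/rhev              # give the path to vdsm:kvm

For step 4, use a Data Center with storage type Local in the Engine and create a new storage domain whose path is /data/images/rhev.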

Comment 17 Ying Cui 2015-10-30 09:01:51 UTC
Tested PASS with another local disk (ATA).

Version:
# cat /etc/redhat-release 
Red Hat Enterprise Virtualization Hypervisor release 7.2 (20151025.0.el7ev)
# rpm -qa ovirt-node
ovirt-node-3.3.0-0.18.20151022git82dc52c.el7ev.noarch

Test steps:
Test machine with two disks.
1. Installed RHEV-H on /dev/sda successfully, and set up the network via DHCP.

2. On the Node side, according to the patches, verify as follows:
# ls -al /lib/modules/3.10.0-325.el7.x86_64/kernel/fs/xfs/
total 1460
drwxr-xr-x.  2 root root    4096 Oct 25 10:28 .
drwxr-xr-x. 26 root root    4096 Oct 25 10:28 ..
-rw-r--r--.  1 root root 1468393 Oct 16 22:42 xfs.ko

# rpm -qa xfsprogs
xfsprogs-3.2.2-2.el7.x86_64

3. Press F2 to get a shell on RHEV-H, format the disk with xfs, and mount it at /data/images/rhev/ on RHEV-H.

# multipath -ll
ST3320613AS_9SZ4LP5S dm-6 ATA     ,ST3320613AS     
size=298G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  `- 1:0:0:0 sdb 8:16 active ready running

# lsblk 
NAME                   MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                      8:0    0 465.8G  0 disk  
├─sda1                   8:1    0   243M  0 part  
├─sda2                   8:2    0     4G  0 part  
├─sda3                   8:3    0     4G  0 part  /dev/.initramfs/live
└─sda4                   8:4    0 457.5G  0 part  
  ├─HostVG-Swap        253:2    0   7.7G  0 lvm   [SWAP]
  ├─HostVG-Config      253:3    0     8M  0 lvm   /config
  ├─HostVG-Logging     253:4    0     2G  0 lvm   /var/log
  └─HostVG-Data        253:5    0 447.4G  0 lvm   /data
sdb                      8:16   0 298.1G  0 disk  
└─ST3320613AS_9SZ4LP5S 253:6    0 298.1G  0 mpath 
loop0                    7:0    0 238.2M  1 loop  
loop1                    7:1    0   1.5G  1 loop  
├─live-rw              253:0    0   1.5G  0 dm    /
└─live-base            253:1    0   1.5G  1 dm    
loop2                    7:2    0   512M  0 loop  
└─live-rw              253:0    0   1.5G  0 dm    /

# fdisk /dev/mapper/ST3320613AS_9SZ4LP5S 
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0xa48940e0.

Command (m for help): p

Disk /dev/mapper/ST3320613AS_9SZ4LP5S: 320.1 GB, 320072933376 bytes, 625142448 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xa48940e0

                           Device Boot      Start         End      Blocks   Id  System
Command (m for help): n
Partition number (1-128, default 1): 
First sector (2048-625142414, default 2048): 
Last sector, +sectors or +size{K,M,G,T,P} (2048-625142414, default 625142414): 
Created partition 1


Command (m for help): wq
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 22: Invalid argument.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
[root@dhcp-9-205 admin]# partprobe 
[root@dhcp-9-205 admin]# lsblk 
NAME                      MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                         8:0    0 465.8G  0 disk  
├─sda1                      8:1    0   243M  0 part  
├─sda2                      8:2    0     4G  0 part  
├─sda3                      8:3    0     4G  0 part  /dev/.initramfs/live
└─sda4                      8:4    0 457.5G  0 part  
  ├─HostVG-Swap           253:2    0   7.7G  0 lvm   [SWAP]
  ├─HostVG-Config         253:3    0     8M  0 lvm   /config
  ├─HostVG-Logging        253:4    0     2G  0 lvm   /var/log
  └─HostVG-Data           253:5    0 447.4G  0 lvm   /data
sdb                         8:16   0 298.1G  0 disk  
├─sdb1                      8:17   0 298.1G  0 part  
└─ST3320613AS_9SZ4LP5S    253:6    0 298.1G  0 mpath 
  └─ST3320613AS_9SZ4LP5S1 253:7    0 298.1G  0 part  
loop0                       7:0    0 238.2M  1 loop  
loop1                       7:1    0   1.5G  1 loop  
├─live-rw                 253:0    0   1.5G  0 dm    /
└─live-base               253:1    0   1.5G  1 dm    
loop2                       7:2    0   512M  0 loop  
└─live-rw                 253:0    0   1.5G  0 dm    /

# mkfs.xfs -f /dev/mapper/ST3320613AS_9SZ4LP5S1 
meta-data=/dev/mapper/ST3320613AS_9SZ4LP5S1 isize=256    agcount=4, agsize=19535637 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=78142545, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=38155, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

# ls -al /data/images/rhev/
total 8
drwxr-xr-x. 2 vdsm kvm  4096 Oct 30 07:11 .
drwxr-xr-x. 3 root root 4096 Oct 30 07:11 ..

# mount -t xfs /dev/mapper/ST3320613AS_9SZ4LP5S1 /data/images/rhev/

# mount | grep /dev/mapper/ST3320613AS_9SZ4LP5S1
/dev/mapper/ST3320613AS_9SZ4LP5S1 on /data/images/rhev type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/mapper/ST3320613AS_9SZ4LP5S1 on /var/lib/libvirt/images/rhev type xfs (rw,relatime,seclabel,attr2,inode64,noquota)

# vi /etc/fstab 
/dev/mapper/ST3320613AS_9SZ4LP5S1 /data/images/rhev xfs defaults,noatime 0 0

# ls -al /data/images/rhev
total 4
drwxr-xr-x. 2 root root    6 Oct 30 07:27 .
drwxr-xr-x. 3 root root 4096 Oct 30 07:11 ..

# chown 36:36 /data/images/rhev

# ls -al /data/images/rhev
total 4
drwxr-xr-x. 2 vdsm kvm     6 Oct 30 07:27 .
drwxr-xr-x. 3 root root 4096 Oct 30 07:11 ..

# df -Th /data/images/rhev/
Filesystem                        Type  Size  Used Avail Use% Mounted on
/dev/mapper/ST3320613AS_9SZ4LP5S1 xfs   298G   33M  298G   1% /data/images/rhev

4. Registered RHEV-H to RHEVM 3.6.0-0.18 successfully.
5. Approved RHEV-H into a DC of LocalStorageType, 3.6 Compatibility Version.
6. RHEV-H is UP.
7. New Domain path is /data/images/rhev.
8. The storage is active.
9. Put the storage into maintenance.
10. Put the host into maintenance.
11. Rebooted the RHEV-H.
12. After the RHEV-H starts, check RHEV-H status on RHEV-M: activate the host (it is UP) and activate the storage; all works well.

After the RHEV-H reboot, check the mount on RHEV-H:

# mount | grep /dev/mapper/ST3320613AS_9SZ4LP5S1
/dev/mapper/ST3320613AS_9SZ4LP5S1 on /data/images/rhev type xfs (rw,noatime,seclabel,attr2,inode64,noquota)
/dev/mapper/ST3320613AS_9SZ4LP5S1 on /var/lib/libvirt/images/rhev type xfs (rw,noatime,seclabel,attr2,inode64,noquota)

# df -Th /data/images/rhev/
Filesystem                        Type  Size  Used Avail Use% Mounted on
/dev/mapper/ST3320613AS_9SZ4LP5S1 xfs   298G   37M  298G   1% /data/images/rhev

Comment 20 Ying Cui 2015-11-02 04:34:00 UTC
Tested PASS with another USB 3.0 flash disk, using an LV for the data.

Version:
# cat /etc/redhat-release 
Red Hat Enterprise Virtualization Hypervisor release 7.2 (20151025.0.el7ev)
# rpm -qa ovirt-node
ovirt-node-3.3.0-0.18.20151022git82dc52c.el7ev.noarch

Test steps:

1. Installed RHEV-H on /dev/sda successfully, and set up the network via DHCP.

2. Node-side verification (same checks as in comment 17):
# ls -al /lib/modules/3.10.0-325.el7.x86_64/kernel/fs/xfs/
total 1460
drwxr-xr-x.  2 root root    4096 Oct 25 10:28 .
drwxr-xr-x. 26 root root    4096 Oct 25 10:28 ..
-rw-r--r--.  1 root root 1468393 Oct 16 22:42 xfs.ko

# rpm -qa xfsprogs
xfsprogs-3.2.2-2.el7.x86_64

# lsblk 
NAME                                              MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                                                 8:0    0 465.8G  0 disk  
├─sda1                                              8:1    0   243M  0 part  
├─sda2                                              8:2    0     4G  0 part  
├─sda3                                              8:3    0     4G  0 part  /dev/.initramfs/live
└─sda4                                              8:4    0 457.5G  0 part  
  ├─HostVG-Swap                                   253:2    0   7.7G  0 lvm   [SWAP]
  ├─HostVG-Config                                 253:3    0     8M  0 lvm   /config
  ├─HostVG-Logging                                253:4    0     2G  0 lvm   /var/log
  └─HostVG-Data                                   253:5    0 447.4G  0 lvm   /data
sdb                                                 8:16   0 298.1G  0 disk  
└─ST3320613AS_9SZ4LP5S                            253:7    0 298.1G  0 mpath 
sdc                                                 8:32   1  29.1G  0 disk  
└─Kingston_DataTraveler_2.0_C86000BDBA11EEB15A2A010B-0:0
                                                  253:6    0  29.1G  0 mpath 
loop0                                               7:0    0 238.2M  1 loop  
loop1                                               7:1    0   1.5G  1 loop  
├─live-rw                                         253:0    0   1.5G  0 dm    /
└─live-base                                       253:1    0   1.5G  1 dm    
loop2                                               7:2    0   512M  0 loop  
└─live-rw                                         253:0    0   1.5G  0 dm    /

# multipath -ll
Kingston_DataTraveler_2.0_C86000BDBA11EEB15A2A010B-0:0 dm-6 Kingston,DataTraveler 2.0
size=29G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  `- 6:0:0:0 sdc 8:32 active ready running
ST3320613AS_9SZ4LP5S dm-7 ATA     ,ST3320613AS     
size=298G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  `- 1:0:0:0 sdb 8:16 active ready running

# fdisk /dev/mapper/Kingston_DataTraveler_2.0_C86000BDBA11EEB15A2A010B-0\:0 
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0xf0ddc007.

Command (m for help): p

Disk /dev/mapper/Kingston_DataTraveler_2.0_C86000BDBA11EEB15A2A010B-0:0: 31.2 GB, 31221153792 bytes, 60978816 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xf0ddc007

                                                              Device Boot      Start         End      Blocks   Id  System

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-60978815, default 2048): 
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-60978815, default 60978815): 
Using default value 60978815
Partition 1 of type Linux and of size 29.1 GiB is set

Command (m for help): p

Disk /dev/mapper/Kingston_DataTraveler_2.0_C86000BDBA11EEB15A2A010B-0:0: 31.2 GB, 31221153792 bytes, 60978816 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xf0ddc007

                                                              Device Boot      Start         End      Blocks   Id  System
/dev/mapper/Kingston_DataTraveler_2.0_C86000BDBA11EEB15A2A010B-0:0p1            2048    60978815    30488384   83  Linux

Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): 8e
Changed type of partition 'Linux' to 'Linux LVM'

Command (m for help): p

Disk /dev/mapper/Kingston_DataTraveler_2.0_C86000BDBA11EEB15A2A010B-0:0: 31.2 GB, 31221153792 bytes, 60978816 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xf0ddc007

                                                              Device Boot      Start         End      Blocks   Id  System
/dev/mapper/Kingston_DataTraveler_2.0_C86000BDBA11EEB15A2A010B-0:0p1            2048    60978815    30488384   8e  Linux LVM

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 22: Invalid argument.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.

# partprobe 


# fdisk -l /dev/mapper/Kingston_DataTraveler_2.0_C86000BDBA11EEB15A2A010B-0\:0

Disk /dev/mapper/Kingston_DataTraveler_2.0_C86000BDBA11EEB15A2A010B-0:0: 31.2 GB, 31221153792 bytes, 60978816 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xf0ddc007

                                                              Device Boot      Start         End      Blocks   Id  System
/dev/mapper/Kingston_DataTraveler_2.0_C86000BDBA11EEB15A2A010B-0:0p1            2048    60978815    30488384   8e  Linux LVM


# pvcreate /dev/mapper/Kingston_DataTraveler_2.0_C86000BDBA11EEB15A2A010B-0\:0p1 
  Physical volume "/dev/mapper/Kingston_DataTraveler_2.0_C86000BDBA11EEB15A2A010B-0:0p1" successfully created


# pvs
  PV                                                                   VG     Fmt  Attr PSize   PFree  
  /dev/mapper/Kingston_DataTraveler_2.0_C86000BDBA11EEB15A2A010B-0:0p1        lvm2 ---   29.08g  29.08g
  /dev/sda4                                                            HostVG lvm2 a--  457.51g 404.00m

# vgcreate TestxfsVG /dev/mapper/Kingston_DataTraveler_2.0_C86000BDBA11EEB15A2A010B-0\:0p1 
  Volume group "TestxfsVG" successfully created
[root@localhost admin]# vgs
  VG        #PV #LV #SN Attr   VSize   VFree  
  HostVG      1   4   0 wz--n- 457.51g 404.00m
  TestxfsVG   1   0   0 wz--n-  29.07g  29.07g

# lvcreate -L 29G -n LVxfsData TestxfsVG
  Logical volume "LVxfsData" created.
[root@localhost admin]# mkfs.xfs -f /dev/TestxfsVG/LVxfsData 
meta-data=/dev/TestxfsVG/LVxfsData isize=256    agcount=4, agsize=1900544 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=7602176, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=3712, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

# mount -t xfs /dev/TestxfsVG/LVxfsData /data/images/rhev/

# vi /etc/fstab 
/dev/TestxfsVG/LVxfsData /data/images/rhev/ xfs defaults,noatime 0 0


# chown 36:36 /data/images/rhev/

# df -Th /data/images/rhev/
Filesystem                      Type  Size  Used Avail Use% Mounted on
/dev/mapper/TestxfsVG-LVxfsData xfs    29G   33M   29G   1% /data/images/rhev

# mount | grep /dev/mapper/TestxfsVG-LVxfsData
/dev/mapper/TestxfsVG-LVxfsData on /data/images/rhev type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/mapper/TestxfsVG-LVxfsData on /var/lib/libvirt/images/rhev type xfs (rw,relatime,seclabel,attr2,inode64,noquota)

4. Registered RHEV-H to RHEVM 3.6.0-0.18 successfully.
5. Approved RHEV-H into a DC of LocalStorageType, 3.6 Compatibility Version.
6. RHEV-H is UP.
7. New Domain path is /data/images/rhev.
8. The storage is active.
9. Put the storage into maintenance.
10. Put the host into maintenance.
11. Rebooted the RHEV-H.
12. After the RHEV-H starts, check RHEV-H status on RHEV-M: activate the host (it is UP) and activate the storage; all works well.

Comment 22 Ying Cui 2015-11-17 08:39:17 UTC
Created attachment 1095260 [details]
Steps on RHEV-H FC with xfs

Tested PASS on RHEV-H FC with xfs local storage.
Detailed steps are in the attachment.

Comment 23 Ying Cui 2015-11-19 11:04:00 UTC
Created attachment 1096583 [details]
Steps on RHEV-H iSCSI as xfs local domain

Steps on RHEV-H iSCSI as xfs local domain - PASS

For detailed steps, please see the attachment.

Comment 24 Ying Cui 2015-11-19 11:05:59 UTC
According to comment 17, comment 20, comment 22 and comment 23, I have verified this bug on the Node side. Thanks.

Comment 35 errata-xmlrpc 2016-03-09 14:18:45 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0378.html

