Bug 1447887 - [downstream clone - 4.1.3] [RFE] RHV-H should meet NIST 800-53 partitioning requirements by default
Summary: [downstream clone - 4.1.3] [RFE] RHV-H should meet NIST 800-53 partitioning r...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: rhev-hypervisor-ng
Version: 4.1.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ovirt-4.1.3
Assignee: Ryan Barry
QA Contact: Qin Yuan
URL:
Whiteboard:
Depends On: 1420068 1457670
Blocks: 1486087
 
Reported: 2017-05-04 07:34 UTC by rhev-integ
Modified: 2019-04-28 13:49 UTC
CC: 15 users

Fixed In Version: imgbased-0.9.24-0.1.el7ev
Doc Type: Enhancement
Doc Text:
Red Hat Virtualization Host (RHVH) now supports NIST 800-53 partitioning requirements to improve security. RHVH uses a NIST 800-53 partition layout by default, and existing configurations will be changed on update.
Clone Of: 1420068
Environment:
Last Closed: 2017-07-11 08:41:19 UTC
oVirt Team: Node
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2017:1714 0 normal SHIPPED_LIVE redhat-virtualization-host bug fix and enhancement update 2017-07-11 12:40:41 UTC
oVirt gerrit 72301 0 master MERGED osupdater: migrate to NIST partitioning 2020-07-23 08:43:16 UTC
oVirt gerrit 73212 0 ovirt-4.1 ABANDONED Revert "osupdater: migrate to NIST partitioning" 2020-07-23 08:43:15 UTC
oVirt gerrit 74162 0 master MERGED osupdater: Fix NIST migration 2020-07-23 08:43:15 UTC
oVirt gerrit 76356 0 ovirt-4.1 MERGED osupdater: Fix NIST migration 2020-07-23 08:43:15 UTC

Description rhev-integ 2017-05-04 07:34:45 UTC
+++ This bug is a downstream clone. The original bug is: +++
+++   bug 1420068 +++
======================================================================

Description of problem:

The default install of RHV-H should meet basic partitioning requirements as defined in NIST 800-53 SC-32 System Partitioning.

Each of these directories should be mounted on its own partition:
/tmp
/var
/var/log/audit
/home



Version-Release number of selected component (if applicable):


How reproducible:
100%

Steps to Reproduce:
1. Run a default install of RHV-H
2. Check the resulting partitioning

Actual results:
There is only one partition - /

Expected results:
There are many partitions 
/
/home
/tmp
/var
/var/log/audit

Additional info:
The partitions can be logical volumes. The purpose is to prevent a malicious user from filling one of these directories and shutting the system down. It also prevents runaway logging from filling the system and shutting it down.

For further information please see

http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-53r4.pdf

https://benchmarks.cisecurity.org/tools2/linux/CIS_Red_Hat_Enterprise_Linux_7_Benchmark_v1.1.0.pdf

(Originally by Donny Davis)

Comment 1 rhev-integ 2017-05-04 07:34:53 UTC
Well, RHV-H also has /var as a separate partition already.

We can change our autoinstall class to match these requirements, though.

(Originally by Ryan Barry)

Comment 4 rhev-integ 2017-05-04 07:35:04 UTC
/tmp -- Is it sufficient for this to be a tmpfs? Or is it required to be a separate on-disk partition?

/var
/var/log/audit
/home

The three others seem reasonable.

(Originally by Fabian Deutsch)

Comment 5 rhev-integ 2017-05-04 07:35:11 UTC
I pulled this directly from the requirements documentation.
https://benchmarks.cisecurity.org/tools2/linux/CIS_Red_Hat_Enterprise_Linux_7_Benchmark_v1.1.0.pdf

It does not specify the requirement for the partition to be on disk. That would just be standard practice. 

1.1.1 Create Separate Partition for /tmp

Description:
The /tmp directory is a world-writable directory used for temporary storage by all users and some applications.

Rationale:

Since the /tmp directory is intended to be world-writable, there is a risk of resource exhaustion if it is not bound to a separate partition. In addition, making /tmp its own file system allows an administrator to set the noexec option on the mount, making /tmp useless for an attacker to install executable code. It would also prevent an attacker from establishing a hardlink to a system setuid program and wait for it to be updated. Once the program was updated, the hardlink would be broken and the attacker would have his own copy of the program. If the program happened to have a security vulnerability, the attacker could continue to exploit the known flaw.
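The noexec hardening described above is applied as mount options on the /tmp filesystem. A hypothetical /etc/fstab entry (the device path is illustrative, not taken from this bug) could look like:

```
# Dedicated /tmp with hardened mount options (illustrative device name)
/dev/mapper/rhv-tmp  /tmp  ext4  defaults,nodev,nosuid,noexec  0 0
```

With noexec set, binaries dropped into /tmp cannot be executed directly from that filesystem.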

(Originally by Donny Davis)

Comment 7 rhev-integ 2017-05-04 07:35:21 UTC
(In reply to Fabian Deutsch from comment #3)
> /tmp -- Is it sufficient if this would be a tmpfs? Or is it required to be a
> separate-on-disk-partition?
> 
> /var
> /var/log/audit
> /home
> 
> The three others seem reasonable.


The guiding principle is to address availability. For example, a DoS attack might fill up the audit logs; if /var/log/audit were a directory off /, the disk would fill up and the system could crash.

Implementation of partitioning has not been very prescriptive. Some people use disk partitions, others LVM, others tmpfs or ramfs. 

Our US Government-accepted unit tests only verify that /tmp is its own partition. We don't look at how that is accomplished under the covers. 

Code Ref:
https://github.com/OpenSCAP/scap-security-guide/blob/master/shared/oval/partition_for_tmp.xml#L20#L22

(Originally by Shawn Wells)

Comment 8 rhev-integ 2017-05-04 07:35:26 UTC
(In reply to Shawn Wells from comment #6)
> (In reply to Fabian Deutsch from comment #3)
> > /tmp -- Is it sufficient if this would be a tmpfs? Or is it required to be a
> > separate-on-disk-partition?
> > 
> > /var
> > /var/log/audit
> > /home
> > 
> > The three others seem reasonable.
> 
> 
> Guiding principal is to address availability. For example, some DoS attack
> might fill up audit logs. If /var/log/audit were a directory off /, the disk
> would get full and system could crash.
> 
> Implementation of partitioning has not been very prescriptive. Some people
> do disk partitions, others LVM, others as tmpfs or ramfs. 
> 
> Our US Gov accepted unit tests only verify that /tmp is its own partition.
> We don't look at how that is accomplished under the covers. 
> 
> Code Ref:
> https://github.com/OpenSCAP/scap-security-guide/blob/master/shared/oval/partition_for_tmp.xml#L20#L22


P.S. Under the covers, *all* the partition checks evaluate /proc/mounts to see if a dedicated partition exists. We don't check *how* that is accomplished.

Ref for /home:
https://github.com/OpenSCAP/scap-security-guide/blob/master/shared/oval/partition_for_home.xml#L22#L24

Ref for /tmp:
https://github.com/OpenSCAP/scap-security-guide/blob/master/shared/oval/partition_for_tmp.xml#L20#L22

Ref for /var:
https://github.com/OpenSCAP/scap-security-guide/blob/master/shared/oval/partition_for_var.xml#L22#L24

Ref for /var/log:
https://github.com/OpenSCAP/scap-security-guide/blob/master/shared/oval/partition_for_var_log.xml#L19#L21

Ref for /var/log/audit:
https://github.com/OpenSCAP/scap-security-guide/blob/master/shared/oval/partition_for_var_log_audit.xml#L22#L25
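As a rough illustration of what these checks amount to, the sketch below (assumed logic, not the actual scap-security-guide OVAL code) parses /proc/mounts-style text and reports whether a path is a dedicated mountpoint:

```python
def is_dedicated_mount(mounts_text, path):
    """Return True if `path` appears as a mountpoint in /proc/mounts-style text."""
    for line in mounts_text.splitlines():
        fields = line.split()
        # /proc/mounts fields: device, mountpoint, fstype, options, dump, pass
        if len(fields) >= 2 and fields[1] == path:
            return True
    return False

# Sample content modeled on the df output later in this bug (illustrative only).
SAMPLE = """\
/dev/mapper/rhv-root / ext4 rw 0 0
/dev/mapper/rhv-tmp /tmp ext4 rw,nosuid,nodev 0 0
/dev/mapper/rhv-var /var ext4 rw 0 0
"""

print(is_dedicated_mount(SAMPLE, "/tmp"))      # True: /tmp is its own mount
print(is_dedicated_mount(SAMPLE, "/var/log"))  # False: no dedicated mount
```

On a live system the same function could be fed the contents of /proc/mounts directly, which is exactly the file the referenced OVAL definitions evaluate.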

(Originally by Shawn Wells)

Comment 9 rhev-integ 2017-05-04 07:35:31 UTC
Thanks for your input, Shawn Wells!

(Originally by dougsland)

Comment 10 rhev-integ 2017-05-04 07:35:37 UTC
Absolutely.

Separating out the partitioning will also help us meet a broad range of US Government standards, such as the DoD STIG (baseline mandated for military systems), C2S (for many U.S. Intelligence systems), and USGCB (for civilian agencies). 

It should also be noted that this will help in US commercial markets. We're in the process of helping to establish configuration guidance for "Controlled Unclassified Information" (CUI), which applies to private companies that do business with the Government or are regulated by it; examples are transportation and finance. While not finalized, the CUI configuration baselines will likely have the exact same partitioning requirements.

A full listing of industries impacted by CUI can be found here:
https://www.archives.gov/cui/registry#categories

(Originally by Shawn Wells)

Comment 11 rhev-integ 2017-05-04 07:35:42 UTC
I have added the entries below to our Anaconda class and it worked (as discussed with Ryan, we still need the imgbased change in osupdater.py): 

For now, added:
-------------------
/home          - 1GiB
/tmp           - 1GiB
/var/log       - 1GiB
/var/log/audit - 500MiB

We kept:
--------
/              - 6GiB
/var           - 15GiB
/boot          - 1GiB

https://github.com/dougsland/anaconda/commit/400c127b144809a4288f0c4cb94c7e8e08d1084a
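In kickstart terms, a layout like the one above could be sketched roughly as follows. This is an illustrative fragment under assumed names (VG "rhv", thin pool "pool00", sizes in MiB), not the actual install class; see the linked commit for the real change:

```
# Illustrative kickstart fragment, not the actual RHV-H install class
part /boot --size=1024 --fstype=ext4
logvol swap           --vgname=rhv --name=swap          --size=4096 --fstype=swap
logvol none           --vgname=rhv --name=pool00        --thinpool --size=45000
logvol /              --vgname=rhv --name=root          --size=6144  --thin --poolname=pool00
logvol /var           --vgname=rhv --name=var           --size=15360 --thin --poolname=pool00
logvol /var/log       --vgname=rhv --name=var_log       --size=1024  --thin --poolname=pool00
logvol /var/log/audit --vgname=rhv --name=var_log_audit --size=500   --thin --poolname=pool00
logvol /tmp           --vgname=rhv --name=tmp           --size=1024  --thin --poolname=pool00
logvol /home          --vgname=rhv --name=home          --size=1024  --thin --poolname=pool00
```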


# cat /etc/redhat-release 
Red Hat Enterprise Linux release 7.3

# df -h
Filesystem                                 Size  Used Avail Use% Mounted on
/dev/mapper/rhv-rhvh--4.1--0.20170208.0+1   27G  1.5G   24G   7% /
devtmpfs                                   4.8G     0  4.8G   0% /dev
tmpfs                                      4.8G  4.0K  4.8G   1% /dev/shm
tmpfs                                      4.8G  8.5M  4.8G   1% /run
tmpfs                                      4.8G     0  4.8G   0% /sys/fs/cgroup
/dev/sda1                                  976M  164M  745M  19% /boot
/dev/mapper/rhv-var                         15G   42M   14G   1% /var
/dev/mapper/rhv-tmp                        976M  2.6M  907M   1% /tmp
/dev/mapper/rhv-home                       976M  2.6M  907M   1% /home
/dev/mapper/rhv-var_log                    976M  5.3M  904M   1% /var/log
/dev/mapper/rhv-var_log_audit              477M  2.4M  445M   1% /var/log/audit
tmpfs                                      979M     0  979M   0% /run/user/0

# lvdisplay 
  --- Logical volume ---
  LV Path                /dev/rhv/swap
  LV Name                swap
  VG Name                rhv
  LV UUID                2U2v2d-21uz-APzn-arQh-1qrn-UGA3-Jft6xq
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2017-02-09 18:57:20 -0600
  LV Status              available
  # open                 2
  LV Size                4.81 GiB
  Current LE             1232
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
   
  --- Logical volume ---
  LV Name                pool00
  VG Name                rhv
  LV UUID                xJmX3Z-nsaE-d1yT-EICc-sbif-yVVd-zWfBjk
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2017-02-09 18:57:20 -0600
  LV Pool metadata       pool00_tmeta
  LV Pool data           pool00_tdata
  LV Status              available
  # open                 8
  LV Size                45.15 GiB
  Allocated pool data    6.10%
  Allocated metadata     3.65%
  Current LE             11558
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:3
   
  --- Logical volume ---
  LV Path                /dev/rhv/var_log_audit
  LV Name                var_log_audit
  VG Name                rhv
  LV UUID                KrdChm-vMN1-pN3Y-PnV0-XkYL-S6nq-GyNjWi
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2017-02-09 18:57:20 -0600
  LV Pool name           pool00
  LV Status              available
  # open                 1
  LV Size                500.00 MiB
  Mapped size            5.00%
  Current LE             125
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:6
   
  --- Logical volume ---
  LV Path                /dev/rhv/var_log
  LV Name                var_log
  VG Name                rhv
  LV UUID                e2Kdd2-wLv9-0rzj-gTHZ-D1X8-EPUa-JbeD4B
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2017-02-09 18:57:21 -0600
  LV Pool name           pool00
  LV Status              available
  # open                 1
  LV Size                1.00 GiB
  Mapped size            5.28%
  Current LE             256
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:7
   
  --- Logical volume ---
  LV Path                /dev/rhv/var
  LV Name                var
  VG Name                rhv
  LV UUID                K2WChB-awu1-2vOV-PnXO-MyH5-GXLy-XBhUDa
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2017-02-09 18:57:22 -0600
  LV Pool name           pool00
  LV Status              available
  # open                 1
  LV Size                15.00 GiB
  Mapped size            3.69%
  Current LE             3840
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:8
   
  --- Logical volume ---
  LV Path                /dev/rhv/tmp
  LV Name                tmp
  VG Name                rhv
  LV UUID                vpIGI5-2hqs-m5x8-zrbj-nI96-LV4g-fddqrq
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2017-02-09 18:57:24 -0600
  LV Pool name           pool00
  LV Status              available
  # open                 1
  LV Size                1.00 GiB
  Mapped size            4.80%
  Current LE             256
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:9
   
  --- Logical volume ---
  LV Path                /dev/rhv/home
  LV Name                home
  VG Name                rhv
  LV UUID                MBFcUe-dpDc-hkkn-IyYP-0S4Z-8wvC-fbksUu
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2017-02-09 18:57:25 -0600
  LV Pool name           pool00
  LV Status              available
  # open                 1
  LV Size                1.00 GiB
  Mapped size            4.79%
  Current LE             256
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:10
   
  --- Logical volume ---
  LV Path                /dev/rhv/root
  LV Name                root
  VG Name                rhv
  LV UUID                k67k4d-JQ1C-m32b-IG2G-dn0R-24nh-0d32HH
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2017-02-09 18:57:25 -0600
  LV Pool name           pool00
  LV Status              available
  # open                 0
  LV Size                26.66 GiB
  Mapped size            7.49%
  Current LE             6825
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:11
   
  --- Logical volume ---
  LV Path                /dev/rhv/rhvh-4.1-0.20170208.0
  LV Name                rhvh-4.1-0.20170208.0
  VG Name                rhv
  LV UUID                Z8JiJz-1k9H-R1gh-RYBB-HD1a-k8Bj-iO6kT7
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2017-02-09 19:02:12 -0600
  LV Pool name           pool00
  LV Thin origin name    root
  LV Status              NOT available
  LV Size                26.66 GiB
  Current LE             6825
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
   
  --- Logical volume ---
  LV Path                /dev/rhv/rhvh-4.1-0.20170208.0+1
  LV Name                rhvh-4.1-0.20170208.0+1
  VG Name                rhv
  LV UUID                8dKEM1-Shsf-zCcF-vvFQ-zOvf-ZgtB-RPwaNS
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2017-02-09 19:02:13 -0600
  LV Pool name           pool00
  LV Thin origin name    rhvh-4.1-0.20170208.0
  LV Status              available
  # open                 1
  LV Size                26.66 GiB
  Mapped size            7.53%
  Current LE             6825
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:4


Ryan, could you please review the GitHub change? Do the minimum/default sizes look good? Other points of view are welcome as well.

(Originally by dougsland)

Comment 12 rhev-integ 2017-05-04 07:35:48 UTC
I would definitely err on the side of caution, especially since changes to anaconda don't have an immediate turnaround time.

We had a bug in 4.0.6 which required an async release because vdsm logs were filling 15 GB. 1 GB is very small.

Can someone in QE give an estimate of sizes on a long-running system?

(Originally by Ryan Barry)

Comment 13 rhev-integ 2017-05-04 07:35:53 UTC
We have a longevity environment; it has been running rhvh-4.0-0.20161012.0 for about 99 days. The detailed information is below: 

# imgbase w
[INFO] You are on rhvh-4.0-0.20161012.0+1
# rpm -qa |grep imgbase
imgbased-0.8.5-0.1.el7ev.noarch

# uptime
14:53:57 up 99 days, 42 min,  3 users,  load average: 1.03, 1.16, 1.37

# du -sh /home
12K	/home

# du -sh /tmp
4.0K	/tmp

# du -sh /var/log
1017M	/var/log

# du -sh /var/log/audit
26M	/var/log/audit

(Originally by Qin Yuan)

Comment 14 rhev-integ 2017-05-04 07:35:59 UTC
Thanks Qin.

How much activity does this environment see? I'm worried because the NIST standards on RHEL assume log shipping, but we can't make that assumption if we're making NIST the default in RHV-H for all customers.

(Originally by Ryan Barry)

Comment 16 rhev-integ 2017-05-04 07:36:10 UTC
I don't foresee a need for such a large default /home, since for the most part hypervisors shouldn't have users logging in and storing much data. I have my hypervisors linked to IdM, and I use my profile to store random config or debug files. 

Also included are my log sizes from an RHV/Gluster pod that has been up for a couple of months, but was rebooted last week. This system has OpenShift running on it, and also a user developing an Ansible role against it, so machines are spun up and down many times per day. 

I would say something like this would be a sane set of defaults:
-------------------
/              - 5GiB
/home          - 512MiB
/tmp           - 512MiB
/var/log       - 4GiB
/var/log/audit - 512MiB
/var           - 15GiB
/boot          - 1GiB


This brings the minimum total space requirement for RHV-H to 26.5 GiB. 
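The 26.5 GiB figure is just the sum of the proposed sizes; a quick arithmetic check:

```python
# Proposed default sizes in GiB, taken from the list above
sizes = {
    "/": 5, "/home": 0.5, "/tmp": 0.5, "/var/log": 4,
    "/var/log/audit": 0.5, "/var": 15, "/boot": 1,
}
total = sum(sizes.values())
print(total)  # 26.5
```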


# uptime
11:15:30 up 8 days,  5:45,  1 user,  load average: 0.96, 1.14, 1.05

# du -sh /home
16K	/home

# du -sh /tmp
80K	/tmp

# du -sh /var/log
2.1G	/var/log

# du -sh /var/log/audit
35M	/var/log/audit

(Originally by Donny Davis)

Comment 18 rhev-integ 2017-05-04 07:36:21 UTC
Thanks Donny. Those numbers look a lot safer to me.

(Originally by Ryan Barry)

Comment 19 rhev-integ 2017-05-04 07:36:27 UTC
(In reply to Ryan Barry from comment #13)
> Thanks Qin.
> 
> How much activity does this environment see? I'm worried because the NIST
> standards on RHEL assume log shipping, but we can't make that assumption if
> we're making NIST the default in RHV-H for all customers.

Our longevity hypervisor has 18 VMs on it, but it may be too simple to simulate customers' most common scenarios. The sizes listed above are just for reference.

(Originally by Qin Yuan)

Comment 20 rhev-integ 2017-05-04 07:36:33 UTC
Here is the info from RHEV QE production systems:

# ansible all -a "du -sh /home"  -u root  | grep /
4.0K	/home
4.0K	/home
4.0K	/home
20K	/home
4.0K	/home
4.0K	/home
4.0K	/home
4.0K	/home
4.0K	/home
4.0K	/home
4.0K	/home
4.0K	/home
4.0K	/home
4.0K	/home
4.0K	/home
20K	/home
20K	/home
4.0K	/home
792K	/home
4.0K	/home

# ansible all -a "du -sh /tmp"  -u root  | grep /
11M	/tmp
11M	/tmp
11M	/tmp
11M	/tmp
11M	/tmp
11M	/tmp
11M	/tmp
11M	/tmp
11M	/tmp
11M	/tmp
11M	/tmp
11M	/tmp
11M	/tmp
11M	/tmp
11M	/tmp
11M	/tmp
11M	/tmp
11M	/tmp
11M	/tmp
11M	/tmp

# ansible all -a "du -sh /var/log"  -u root  | grep /
179M	/var/log
194M	/var/log
56G	/var/log
25M	/var/log
164M	/var/log
1.3G	/var/log
1.3G	/var/log
1.3G	/var/log
165M	/var/log
161M	/var/log
1.2G	/var/log
167M	/var/log
151M	/var/log
153M	/var/log
2.4G	/var/log
378M	/var/log
388M	/var/log
385M	/var/log
388M	/var/log
376M	/var/log

# ansible all -a "du -sh /var/log/audit"  -u root  | grep /
72K	/var/log/audit
240K	/var/log/audit
76K	/var/log/audit
108K	/var/log/audit
33M	/var/log/audit
4.0K	/var/log/audit
128K	/var/log/audit
124K	/var/log/audit
128K	/var/log/audit
156K	/var/log/audit
4.0K	/var/log/audit
152K	/var/log/audit
152K	/var/log/audit
156K	/var/log/audit
4.0K	/var/log/audit
180K	/var/log/audit
4.0K	/var/log/audit
4.0K	/var/log/audit
40M	/var/log/audit
34M	/var/log/audit

(Originally by Gil Klein)

Comment 21 rhev-integ 2017-05-04 07:36:38 UTC
Thanks Gil.

Douglas, we also got some feedback from other sources.

The largest /var/log which has been observed is ~6.5 GB.

/var/log/audit is ~1.2 GB.

We probably want to size /var/log at 7 or 8 GB, and /var/log/audit at 2 GB.

(Originally by Ryan Barry)

Comment 26 rhev-integ 2017-05-04 07:37:03 UTC
(In reply to Ryan Barry from comment #20)
> Thanks Gil.
> 
> Douglas, we also got some feedback from other sources.
> 
> The largest /var/log which has been observed is ~6.5GB.
> 
> /var/log/audit is ~1.2 GB
> 
> We probably want to size /var/log at 7 or 8 GB.
> /var/log audit at 2GB.

Sure, I will prepare a new patch for Anaconda folks.

(Originally by dougsland)

Comment 27 rhev-integ 2017-05-04 07:37:08 UTC
For reference only:

The pull request for the rhel7-branch is below, and the patch from the imgbased side has been added to External Trackers.

Add support for NIST 800-53 SC-32 System Partitioning schema.
https://github.com/rhinstaller/anaconda/pull/963

Any comments are welcome.

(Originally by dougsland)

Comment 33 rhev-integ 2017-05-04 07:37:40 UTC
We have checked that the patch in bug 1422952 works with a 40 GB disk. The actual minimum might be a bit lower. Please make sure your installation documentation is updated when you consume the installer with the updated RHV install class (it should be included in RHEL 7.4).

(Originally by Radek Vykydal)

Comment 34 Huijuan Zhao 2017-05-11 06:33:50 UTC
Ryan, QE tested the upgrade scenario related to this bug. The main workflow is OK, but there are some differences from before. Could you help check whether this is the expected result?


Test version:
From: redhat-virtualization-host-4.0-20170307.1
To:   redhat-virtualization-host-4.1-20170506.0

# imgbase layout
rhvh-4.0-0.20170307.0
 +- rhvh-4.0-0.20170307.0+1
rhvh-4.1-0.20170506.0
 +- rhvh-4.1-0.20170506.0+1

Test steps:
1. Install redhat-virtualization-host-4.0-20170307.1
2. Touch file_1 in /var/log in rhvh-4.0
3. Set up local repos in rhvh-4.0, and upgrade to redhat-virtualization-host-4.1-20170506.0:
   # yum update
4. After upgrade, login new build rhvh-4.1-20170506.0, check file_1 in /var/log

5. Touch file_0506 in /var/log in rhvh-4.1-20170506.0
   Reboot host, login old image/layer rhvh-4.0-20170307.1, check file_0506 in /var/log.

6. Touch file_0307 in /var/log in rhvh-4.0-20170307.1
   Reboot host, login new image/layer rhvh-4.1-20170506.0, check file_0307 in /var/log

Actual results:
1. After step 4, file_1 in /var/log is visible. 
   This means files in /var/log are synced during the upgrade.

2. After step 5, file_0506 in /var/log is NOT visible.
3. After step 6, file_0307 in /var/log is NOT visible.

This means that after upgrading from rhvh-4.0 to rhvh-4.1, because the partitions and mounted filesystems changed (including /var/log), newly created files in /var/log do not sync between images/layers. 
But in a previous build, for example upgrading from rhvh-4.0-20170307.1 to rhvh-4.1-20170421.0, newly created files in /var/log did sync between all images/layers.


Expected results:
1. After step 4, file_1 in /var/log is visible. 
2. After step 5, file_0506 in /var/log is visible.
3. After step 6, file_0307 in /var/log is visible.


So, are the step 5 and step 6 results expected?
Thanks!

Comment 35 Qin Yuan 2017-05-11 08:30:17 UTC
Test Versions:
redhat-virtualization-host-4.1-20170506.0
imgbased-0.9.24-0.1.el7ev.noarch

Test Steps:
1. Install RHVH, choosing auto partitioning.
2. Log into RHVH, run checking cmds:
   # lvs
   # df -Th
3. Install RHVH again, choosing manual partitioning and creating a /home partition of 50 GiB.
4. Log into RHVH, run the same checking cmds as in step2.

Test Results:
1. After step 2, the checking results are:
[root@rhvh ~]# lvs
  LV                      VG              Attr       LSize   Pool   Origin                Data%  Meta%  Move Log Cpy%Sync Convert
  home                    rhvh_dhcp-8-226 Vwi-aotz--   1.00g pool00                       4.83                                   
  pool00                  rhvh_dhcp-8-226 twi-aotz-- 212.00g                              2.38   0.19                            
  rhvh-4.1-0.20170506.0   rhvh_dhcp-8-226 Vwi---tz-k 197.00g pool00 root                                                         
  rhvh-4.1-0.20170506.0+1 rhvh_dhcp-8-226 Vwi-aotz-- 197.00g pool00 rhvh-4.1-0.20170506.0 1.72                                   
  root                    rhvh_dhcp-8-226 Vwi-a-tz-- 197.00g pool00                       1.69                                   
  swap                    rhvh_dhcp-8-226 -wi-ao----   3.88g                                                                     
  tmp                     rhvh_dhcp-8-226 Vwi-aotz--   2.00g pool00                       4.80                                   
  var                     rhvh_dhcp-8-226 Vwi-aotz--  15.00g pool00                       3.49                                   
  var-log                 rhvh_dhcp-8-226 Vwi-aotz--   8.00g pool00                       2.04                                   
  var-log-audit           rhvh_dhcp-8-226 Vwi-aotz--   2.00g pool00                       4.32   

[root@rhvh ~]# df -Th
Filesystem                                              Type      Size  Used Avail Use% Mounted on
/dev/mapper/rhvh_dhcp--8--226-rhvh--4.1--0.20170506.0+1 ext4      194G  1.6G  183G   1% /
devtmpfs                                                devtmpfs  1.9G     0  1.9G   0% /dev
tmpfs                                                   tmpfs     1.9G  4.0K  1.9G   1% /dev/shm
tmpfs                                                   tmpfs     1.9G   17M  1.9G   1% /run
tmpfs                                                   tmpfs     1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/mapper/rhvh_dhcp--8--226-tmp                       ext4      2.0G  6.1M  1.8G   1% /tmp
/dev/mapper/rhvh_dhcp--8--226-home                      ext4      976M  2.6M  907M   1% /home
/dev/mapper/rhvh_dhcp--8--226-var                       ext4       15G   45M   14G   1% /var
/dev/mapper/rhvh_dhcp--8--226-var--log                  ext4      7.8G   39M  7.3G   1% /var/log
/dev/mapper/rhvh_dhcp--8--226-var--log--audit           ext4      2.0G  6.1M  1.8G   1% /var/log/audit
/dev/sda1                                               ext4      976M  165M  745M  19% /boot
tmpfs                                                   tmpfs     378M     0  378M   0% /run/user/0

2. After step 4, the checking results are:
[root@rhvh ~]# lvs
  LV                      VG        Attr       LSize   Pool   Origin                Data%  Meta%  Move Log Cpy%Sync Convert
  home                    rhvh_rhvh Vwi-aotz--  50.00g pool00                       1.83                                   
  pool00                  rhvh_rhvh twi-aotz-- 207.88g                              2.61   0.20                            
  rhvh-4.1-0.20170506.0   rhvh_rhvh Vwi---tz-k 142.88g pool00 root                                                         
  rhvh-4.1-0.20170506.0+1 rhvh_rhvh Vwi-aotz-- 142.88g pool00 rhvh-4.1-0.20170506.0 2.15                                   
  root                    rhvh_rhvh Vwi-a-tz-- 142.88g pool00                       2.17                                   
  swap                    rhvh_rhvh -wi-ao----   8.00g                                                                     
  tmp                     rhvh_rhvh Vwi-aotz--   2.00g pool00                       3.64                                   
  var                     rhvh_rhvh Vwi-aotz--  15.00g pool00                       3.47                                   
  var-log                 rhvh_rhvh Vwi-aotz--   8.00g pool00                       1.87                                   
  var-log-audit           rhvh_rhvh Vwi-aotz--   2.00g pool00                       4.80    

[root@rhvh ~]# df -Th
Filesystem                                      Type      Size  Used Avail Use% Mounted on
/dev/mapper/rhvh_rhvh-rhvh--4.1--0.20170506.0+1 ext4      141G  1.6G  132G   2% /
devtmpfs                                        devtmpfs  1.9G     0  1.9G   0% /dev
tmpfs                                           tmpfs     1.9G  4.0K  1.9G   1% /dev/shm
tmpfs                                           tmpfs     1.9G   17M  1.9G   1% /run
tmpfs                                           tmpfs     1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/sda1                                       ext4      976M  165M  745M  19% /boot
/dev/mapper/rhvh_rhvh-tmp                       ext4      2.0G  6.1M  1.8G   1% /tmp
/dev/mapper/rhvh_rhvh-var                       ext4       15G   45M   14G   1% /var
/dev/mapper/rhvh_rhvh-home                      ext4       50G   53M   47G   1% /home
/dev/mapper/rhvh_rhvh-var--log                  ext4      7.8G   39M  7.3G   1% /var/log
/dev/mapper/rhvh_rhvh-var--log--audit           ext4      2.0G  6.1M  1.8G   1% /var/log/audit
tmpfs                                           tmpfs     378M     0  378M   0% /run/user/0

Conclusions:
For /home, /tmp, /var/log, and /var/log/audit:
1. In step 1, they were not created manually; the installer created them automatically, and the sizes are correct.
2. In step 3, /home was created manually; the installer created only the other needed partitions and did not recreate /home, since its size is 50 GiB, not 1 GiB.
3. The newly added partitions are mounted at boot time.

The test results match expectations, so the installation test passes.

Needinfo huzhao for upgrade testing results.

Comment 36 Huijuan Zhao 2017-05-11 08:48:10 UTC
qiyuan, please see Comment 34.

Ryan, here are some updates from umounting/mounting /var/log, following comment 34:

1. After step 6, login new image/layer rhvh-4.1-20170506.0, 
   # umount -l -f /var/log
   The file_0307 is visible, but file_0506 is NOT visible.

   # mount /dev/mapper/rhvh_dhcp--10--16-var--log /var/log
   The file_0307 is NOT visible, but file_0506 is visible.


2. After step 6, login old image/layer rhvh-4.0-20170307.0,
   # mount /dev/mapper/rhvh_dhcp--10--16-var--log /var/log
   The file_0307 is NOT visible, but file_0506 is visible.

   # umount /var/log
   The file_0307 is visible, but file_0506 is NOT visible.

So I think you were right: the file is there, but hidden by the mount.

Just wondering, are these the expected results?

Comment 37 Qin Yuan 2017-05-16 02:08:39 UTC
According to comment #34, comment #36, and bug 1450831, QE has to set the bug status back to ASSIGNED.

Comment 38 Ryan Barry 2017-05-16 04:34:17 UTC
comment#34 and comment#36 were answered separately (albeit in email).

This is expected behavior, and trying to work around this would make RHVH behave differently than UNIX has behaved for decades. 

Files present on a mountpoint are masked when something is mounted over it.

Files on a filesystem which is mounted (/var/log, for example) are not visible when that filesystem is not mounted.

Any attempt at a fix would lead to racing systemd .mount targets and risks losing early boot logging: since we could not rely on systemd to properly handle the .mount units, the information would need to be copied every time some ancillary service ran, open file handles to soon-to-be-masked files closed, all the services restarted, and so on.

bug#1450831 is not related to this. It's a timeout from the engine due to fixing the SELinux denials. The patch for that landed just before blocker+ was removed from the relevant bug, and it iterates through the rpmdb to find all scripts. This takes longer than the engine's timeout.

That patch will likely be reverted and delayed to 4.1.3, so the engine has time to find a workaround.

This bug is NOT FailedQA.

Comment 39 Qin Yuan 2017-05-16 10:05:37 UTC
According to comment#34, comment#35, comment#36, and comment#38, QE tested installation and upgrade scenarios about this bug, and all passed. So, set the bug status to VERIFIED.

Comment 41 Ying Cui 2017-05-23 00:58:19 UTC
This was confirmed by Ryan via email: we reverted the NIST patches in imgbased-0.9.27-0.1.el7ev to check whether we still encounter the regression in bug 1450831 (Failed to upgrade RHVH host on RHVM side). So moving this bug to ASSIGNED status until we get the final solution.

Comment 42 Ryan Barry 2017-05-23 01:11:16 UTC
It would be better to move this back to MODIFIED.

Still not FailedQA. It was reverted to work around engine timeouts, not because the patch does not work.

Comment 44 Qin Yuan 2017-06-02 06:49:15 UTC
Installation verification:

Test Versions:
redhat-virtualization-host-4.1-20170531.0
imgbased-0.9.30-0.1.el7ev.noarch

Test Steps:
Run cases RHEVM-21642 and RHEVM-21643

Test Results:
Cases RHEVM-21642 and RHEVM-21643 both passed.

Installation verification for this RFE passed, Needinfo huzhao for upgrade verification results.

Comment 45 Huijuan Zhao 2017-06-02 07:21:38 UTC
The Target Milestone is 4.1.3, but both current 4.1.3 builds (imgbased-0.9.28-0.1.el7ev and imgbased-0.9.30-0.1.el7ev) have blocker bugs (bug 1457111 and bug 1457670) that block upgrade verification, so QE will test this bug once a new 4.1.3 build is available.

Comment 46 cshao 2017-06-02 08:06:51 UTC
Removed 1457111 from the depends list because the installation testing passed.

Comment 47 Huijuan Zhao 2017-06-12 05:30:03 UTC
Test version:
From: rhvh-4.0-0.20170307.0
To:   rhvh-4.1-0.20170609.0

Test steps:
Tested the below 3 test cases,
RHEVM-21647
RHEVM-18103
RHEVM-18102

Test results:
All 3 cases (RHEVM-21647, RHEVM-18103 and RHEVM-18102) passed.

So upgrade verification for this RFE passed in rhvh-4.1-0.20170609.0.

Comment 48 Qin Yuan 2017-06-12 07:38:22 UTC
Reverified installation for the latest build:

Test Versions:
redhat-virtualization-host-4.1-20170609.2
imgbased-0.9.31-0.1.el7ev.noarch

Test Steps:
The same as steps in comment #44

Test results:
Cases RHEVM-21642 and RHEVM-21643 both passed.

For the latest build, the installation verification for this RFE passed. According to comment #47, the upgrade verification also passed, so setting the status to VERIFIED.

Comment 52 cshao 2017-07-06 03:13:11 UTC
Re-verified this bug according to comment #48.

Comment 54 errata-xmlrpc 2017-07-11 08:41:19 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:1714

