Bug 1420068 - [RFE] RHV-H should meet NIST 800-53 partitioning requirements by default
Summary: [RFE] RHV-H should meet NIST 800-53 partitioning requirements by default
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: rhev-hypervisor-ng
Version: 4.1.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ovirt-4.2.0
Target Release: 4.2.0
Assignee: Ryan Barry
QA Contact: Qin Yuan
URL:
Whiteboard:
Depends On:
Blocks: 1422952 1447887
 
Reported: 2017-02-07 17:06 UTC by Donny Davis
Modified: 2019-05-16 13:03 UTC
CC List: 15 users

Fixed In Version: imgbased-0.9.24-0.1.el7ev
Doc Type: Enhancement
Doc Text:
In this release, Red Hat Virtualization Host supports NIST SP 800-53 partitioning requirements to improve the security. Environments upgrading to Red Hat Virtualization 4.2 will also be configured to match NIST SP 800-53 partitioning requirements.
Clone Of:
: 1422952 1447887 (view as bug list)
Environment:
Last Closed: 2018-05-15 17:57:40 UTC
oVirt Team: Node
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2018:1524 0 None None None 2018-05-15 17:58:49 UTC
oVirt gerrit 72301 0 master MERGED osupdater: migrate to NIST partitioning 2021-02-04 16:54:42 UTC
oVirt gerrit 73212 0 ovirt-4.1 ABANDONED Revert "osupdater: migrate to NIST partitioning" 2021-02-04 16:54:43 UTC
oVirt gerrit 74162 0 master MERGED osupdater: Fix NIST migration 2021-02-04 16:54:43 UTC
oVirt gerrit 76356 0 ovirt-4.1 MERGED osupdater: Fix NIST migration 2021-02-04 16:54:43 UTC

Description Donny Davis 2017-02-07 17:06:31 UTC
Description of problem:

The default install of RHV-H should meet basic partitioning requirements as defined in NIST 800-53 SC-32 System Partitioning.

Each one of these directories should be mounted on its own partition:
/tmp
/var
/var/log/audit
/home



Version-Release number of selected component (if applicable):


How reproducible:
100%

Steps to Reproduce:
1. Run a default install of RHV-H
2. Check the resulting partitioning

Actual results:
There is only one partition - /

Expected results:
There are separate partitions:
/
/home
/tmp
/var
/var/log/audit

Additional info:
The partitions can be logical volumes. The purpose is to prevent a malicious user from filling one of these directories and shutting the system down. This also prevents logging from filling the root filesystem and taking the system down.

For further information, please see:

http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-53r4.pdf

https://benchmarks.cisecurity.org/tools2/linux/CIS_Red_Hat_Enterprise_Linux_7_Benchmark_v1.1.0.pdf
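As a rough illustration of what the SC-32 requirement amounts to, here is a minimal Python sketch (hypothetical helper names, not part of RHV-H or the SCAP content) that flags which of these directories are not dedicated mount points, given /proc/mounts-style text:

```python
# Hypothetical sketch: given the text of /proc/mounts, report which of the
# NIST 800-53 SC-32 directories are NOT dedicated mount points.
# (Octal escapes in mount paths are ignored for simplicity.)

REQUIRED = ["/tmp", "/var", "/var/log/audit", "/home"]

def mounted_paths(proc_mounts_text):
    """Return the set of mount targets (second field of each mounts line)."""
    targets = set()
    for line in proc_mounts_text.splitlines():
        fields = line.split()
        if len(fields) >= 2:
            targets.add(fields[1])
    return targets

def missing_partitions(proc_mounts_text, required=REQUIRED):
    """Return the required directories that are not their own mount point."""
    mounted = mounted_paths(proc_mounts_text)
    return [d for d in required if d not in mounted]
```

On a live system the input would simply be `open("/proc/mounts").read()`; note the check is agnostic to *how* the mount is backed (disk partition, LVM, tmpfs).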

Comment 1 Ryan Barry 2017-02-07 17:18:55 UTC
Well, RHV-H also has /var as a separate partition already.

We can change our autoinstall class to match these requirements, though.

Comment 3 Fabian Deutsch 2017-02-09 11:31:27 UTC
/tmp -- Is it sufficient if this would be a tmpfs? Or is it required to be a separate-on-disk-partition?

/var
/var/log/audit
/home

The three others seem reasonable.

Comment 4 Donny Davis 2017-02-09 14:16:05 UTC
I pulled this directly from the requirements documentation.
https://benchmarks.cisecurity.org/tools2/linux/CIS_Red_Hat_Enterprise_Linux_7_Benchmark_v1.1.0.pdf

It does not specify the requirement for the partition to be on disk. That would just be standard practice. 

1.1.1 Create Separate Partition for /tmp

Description:
The /tmp directory is a world-writable directory used for temporary storage by all users and some applications.

Rationale:

Since the /tmp directory is intended to be world-writable, there is a risk of resource exhaustion if it is not bound to a separate partition. In addition, making /tmp its own file system allows an administrator to set the noexec option on the mount, making /tmp useless for an attacker to install executable code. It would also prevent an attacker from establishing a hardlink to a system setuid program and wait for it to be updated. Once the program was updated, the hardlink would be broken and the attacker would have his own copy of the program. If the program happened to have a security vulnerability, the attacker could continue to exploit the known flaw.

Comment 6 Shawn Wells 2017-02-09 17:24:57 UTC
(In reply to Fabian Deutsch from comment #3)
> /tmp -- Is it sufficient if this would be a tmpfs? Or is it required to be a
> separate-on-disk-partition?
> 
> /var
> /var/log/audit
> /home
> 
> The three others seem reasonable.


Guiding principle is to address availability. For example, some DoS attack might fill up the audit logs. If /var/log/audit were a directory off /, the disk would get full and the system could crash.

Implementation of partitioning has not been very prescriptive. Some people do disk partitions, others LVM, others as tmpfs or ramfs. 

Our US Gov accepted unit tests only verify that /tmp is its own partition. We don't look at how that is accomplished under the covers. 

Code Ref:
https://github.com/OpenSCAP/scap-security-guide/blob/master/shared/oval/partition_for_tmp.xml#L20#L22

Comment 7 Shawn Wells 2017-02-09 17:34:15 UTC
(In reply to Shawn Wells from comment #6)
> (In reply to Fabian Deutsch from comment #3)
> > /tmp -- Is it sufficient if this would be a tmpfs? Or is it required to be a
> > separate-on-disk-partition?
> > 
> > /var
> > /var/log/audit
> > /home
> > 
> > The three others seem reasonable.
> 
> 
> Guiding principle is to address availability. For example, some DoS attack
> might fill up the audit logs. If /var/log/audit were a directory off /, the disk
> would get full and the system could crash.
> 
> Implementation of partitioning has not been very prescriptive. Some people
> do disk partitions, others LVM, others as tmpfs or ramfs. 
> 
> Our US Gov accepted unit tests only verify that /tmp is its own partition.
> We don't look at how that is accomplished under the covers. 
> 
> Code Ref:
> https://github.com/OpenSCAP/scap-security-guide/blob/master/shared/oval/
> partition_for_tmp.xml#L20#L22


p.s - under the covers, *all* the partition checks evaluate /proc/mounts to see whether a dedicated partition exists. We don't check *how* that is accomplished.

Ref for /home:
https://github.com/OpenSCAP/scap-security-guide/blob/master/shared/oval/partition_for_home.xml#L22#L24

Ref for /tmp:
https://github.com/OpenSCAP/scap-security-guide/blob/master/shared/oval/partition_for_tmp.xml#L20#L22

Ref for /var:
https://github.com/OpenSCAP/scap-security-guide/blob/master/shared/oval/partition_for_var.xml#L22#L24

Ref for /var/log:
https://github.com/OpenSCAP/scap-security-guide/blob/master/shared/oval/partition_for_var_log.xml#L19#L21

Ref for /var/log/audit:
https://github.com/OpenSCAP/scap-security-guide/blob/master/shared/oval/partition_for_var_log_audit.xml#L22#L25
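A minimal sketch of the same idea at the filesystem level, assuming a Linux host: a path is its own partition exactly when it sits on a different device than its parent directory, regardless of how it is backed (disk partition, LVM, tmpfs). This is a hypothetical helper, not the OVAL implementation:

```python
import os

def is_own_partition(path):
    """True if `path` is a mount point, i.e. it lives on a different
    device (st_dev) than its parent directory."""
    normalized = os.path.abspath(path).rstrip("/")
    parent = os.path.dirname(normalized) or "/"
    return os.stat(path).st_dev != os.stat(parent).st_dev
```

The stdlib offers `os.path.ismount()` for the same question; the explicit `st_dev` comparison just makes the mechanism visible.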

Comment 8 Douglas Schilling Landgraf 2017-02-09 17:56:59 UTC
Thanks for your input, Shawn Wells!

Comment 9 Shawn Wells 2017-02-09 18:07:53 UTC
Absolutely.

Separating out the partitioning will also help us meet a broad range of US Government standards, such as the DoD STIG (baseline mandated for military systems), C2S (for many U.S. Intelligence systems), and USGCB (for civilian agencies). 

It should also be noted this will help with US commercial markets. We're in the process of helping to establish configuration guidance for "Controlled Unclassified Information" or "CUI" -- aka private companies that do business with the Government or are regulated by it. Examples are transportation and finance. While not finalized, the config baselines for Controlled Unclassified Information will likely have the exact same partitioning requirements.

A full listing of industries impacted by Controlled Unclassified Information can be found here:
https://www.archives.gov/cui/registry#categories

Comment 10 Douglas Schilling Landgraf 2017-02-10 01:34:13 UTC
I have added the entries below to our Anaconda class and it worked (as discussed with Ryan, we still need the imgbased change in osupdater.py):

For now, added:
-------------------
/home          - 1GiB
/tmp           - 1GiB
/var/log       - 1GiB
/var/log/audit - 500MiB

We kept:
--------
/              - 6GiB
/var           - 15GiB
/boot          - 1GiB

https://github.com/dougsland/anaconda/commit/400c127b144809a4288f0c4cb94c7e8e08d1084a


# cat /etc/redhat-release 
Red Hat Enterprise Linux release 7.3

# df -h
Filesystem                                 Size  Used Avail Use% Mounted on
/dev/mapper/rhv-rhvh--4.1--0.20170208.0+1   27G  1.5G   24G   7% /
devtmpfs                                   4.8G     0  4.8G   0% /dev
tmpfs                                      4.8G  4.0K  4.8G   1% /dev/shm
tmpfs                                      4.8G  8.5M  4.8G   1% /run
tmpfs                                      4.8G     0  4.8G   0% /sys/fs/cgroup
/dev/sda1                                  976M  164M  745M  19% /boot
/dev/mapper/rhv-var                         15G   42M   14G   1% /var
/dev/mapper/rhv-tmp                        976M  2.6M  907M   1% /tmp
/dev/mapper/rhv-home                       976M  2.6M  907M   1% /home
/dev/mapper/rhv-var_log                    976M  5.3M  904M   1% /var/log
/dev/mapper/rhv-var_log_audit              477M  2.4M  445M   1% /var/log/audit
tmpfs                                      979M     0  979M   0% /run/user/0

# lvdisplay 
  --- Logical volume ---
  LV Path                /dev/rhv/swap
  LV Name                swap
  VG Name                rhv
  LV UUID                2U2v2d-21uz-APzn-arQh-1qrn-UGA3-Jft6xq
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2017-02-09 18:57:20 -0600
  LV Status              available
  # open                 2
  LV Size                4.81 GiB
  Current LE             1232
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
   
  --- Logical volume ---
  LV Name                pool00
  VG Name                rhv
  LV UUID                xJmX3Z-nsaE-d1yT-EICc-sbif-yVVd-zWfBjk
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2017-02-09 18:57:20 -0600
  LV Pool metadata       pool00_tmeta
  LV Pool data           pool00_tdata
  LV Status              available
  # open                 8
  LV Size                45.15 GiB
  Allocated pool data    6.10%
  Allocated metadata     3.65%
  Current LE             11558
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:3
   
  --- Logical volume ---
  LV Path                /dev/rhv/var_log_audit
  LV Name                var_log_audit
  VG Name                rhv
  LV UUID                KrdChm-vMN1-pN3Y-PnV0-XkYL-S6nq-GyNjWi
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2017-02-09 18:57:20 -0600
  LV Pool name           pool00
  LV Status              available
  # open                 1
  LV Size                500.00 MiB
  Mapped size            5.00%
  Current LE             125
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:6
   
  --- Logical volume ---
  LV Path                /dev/rhv/var_log
  LV Name                var_log
  VG Name                rhv
  LV UUID                e2Kdd2-wLv9-0rzj-gTHZ-D1X8-EPUa-JbeD4B
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2017-02-09 18:57:21 -0600
  LV Pool name           pool00
  LV Status              available
  # open                 1
  LV Size                1.00 GiB
  Mapped size            5.28%
  Current LE             256
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:7
   
  --- Logical volume ---
  LV Path                /dev/rhv/var
  LV Name                var
  VG Name                rhv
  LV UUID                K2WChB-awu1-2vOV-PnXO-MyH5-GXLy-XBhUDa
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2017-02-09 18:57:22 -0600
  LV Pool name           pool00
  LV Status              available
  # open                 1
  LV Size                15.00 GiB
  Mapped size            3.69%
  Current LE             3840
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:8
   
  --- Logical volume ---
  LV Path                /dev/rhv/tmp
  LV Name                tmp
  VG Name                rhv
  LV UUID                vpIGI5-2hqs-m5x8-zrbj-nI96-LV4g-fddqrq
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2017-02-09 18:57:24 -0600
  LV Pool name           pool00
  LV Status              available
  # open                 1
  LV Size                1.00 GiB
  Mapped size            4.80%
  Current LE             256
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:9
   
  --- Logical volume ---
  LV Path                /dev/rhv/home
  LV Name                home
  VG Name                rhv
  LV UUID                MBFcUe-dpDc-hkkn-IyYP-0S4Z-8wvC-fbksUu
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2017-02-09 18:57:25 -0600
  LV Pool name           pool00
  LV Status              available
  # open                 1
  LV Size                1.00 GiB
  Mapped size            4.79%
  Current LE             256
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:10
   
  --- Logical volume ---
  LV Path                /dev/rhv/root
  LV Name                root
  VG Name                rhv
  LV UUID                k67k4d-JQ1C-m32b-IG2G-dn0R-24nh-0d32HH
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2017-02-09 18:57:25 -0600
  LV Pool name           pool00
  LV Status              available
  # open                 0
  LV Size                26.66 GiB
  Mapped size            7.49%
  Current LE             6825
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:11
   
  --- Logical volume ---
  LV Path                /dev/rhv/rhvh-4.1-0.20170208.0
  LV Name                rhvh-4.1-0.20170208.0
  VG Name                rhv
  LV UUID                Z8JiJz-1k9H-R1gh-RYBB-HD1a-k8Bj-iO6kT7
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2017-02-09 19:02:12 -0600
  LV Pool name           pool00
  LV Thin origin name    root
  LV Status              NOT available
  LV Size                26.66 GiB
  Current LE             6825
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
   
  --- Logical volume ---
  LV Path                /dev/rhv/rhvh-4.1-0.20170208.0+1
  LV Name                rhvh-4.1-0.20170208.0+1
  VG Name                rhv
  LV UUID                8dKEM1-Shsf-zCcF-vvFQ-zOvf-ZgtB-RPwaNS
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2017-02-09 19:02:13 -0600
  LV Pool name           pool00
  LV Thin origin name    rhvh-4.1-0.20170208.0
  LV Status              available
  # open                 1
  LV Size                26.66 GiB
  Mapped size            7.53%
  Current LE             6825
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:4


Ryan, could you please review the GitHub change? Do the minimum/default sizes look good? Other points of view are welcome as well.

Comment 11 Ryan Barry 2017-02-10 04:15:37 UTC
I would definitely err on the side of caution, especially since changes to anaconda don't have an immediate turnaround time.

We had a bug in 4.0.6 which required an async release because vdsm logs were filling 15 GB. 1 GB is very small.

Can someone in QE give an estimate of sizes on a long-running system?

Comment 12 Qin Yuan 2017-02-10 07:13:55 UTC
We have a longevity environment; it has been running rhvh-4.0-0.20161012.0 for about 99 days. The detailed information is below:

# imgbase w
[INFO] You are on rhvh-4.0-0.20161012.0+1
# rpm -qa |grep imgbase
imgbased-0.8.5-0.1.el7ev.noarch

# uptime
14:53:57 up 99 days, 42 min,  3 users,  load average: 1.03, 1.16, 1.37

# du -sh /home
12K	/home

# du -sh /tmp
4.0K	/tmp

# du -sh /var/log
1017M	/var/log

# du -sh /var/log/audit
26M	/var/log/audit

Comment 13 Ryan Barry 2017-02-10 07:17:15 UTC
Thanks Qin.

How much activity does this environment see? I'm worried because the NIST standards on RHEL assume log shipping, but we can't make that assumption if we're making NIST the default in RHV-H for all customers.

Comment 15 Donny Davis 2017-02-10 11:30:21 UTC
I don't foresee a need for such a large default /home, as for the most part hypervisors shouldn't have users logging in and storing much data. I have my hypervisors linked to IdM and I use my profile to store random config or debug stuff.

Also included are my log files from a RHV/Gluster pod that has been up for a couple of months but was rebooted last week. This system has OpenShift running on it, and also a user developing an Ansible role against it, so machines are spun up and down many times per day.

I would say something like this would be a sane set of defaults
-------------------
/              - 5GiB
/home          - 512MiB
/tmp           - 512MiB
/var/log       - 4GiB
/var/log/audit - 512MiB
/var           - 15GiB
/boot          - 1GiB


This brings the total space requirement for RHV-H to a minimum of 26.5 GiB.
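The 26.5 figure can be sanity-checked by summing the proposed sizes (a throwaway snippet, not project code):

```python
# Proposed default layout from the comment above, sizes in GiB.
proposed = {
    "/": 5, "/home": 0.5, "/tmp": 0.5, "/var/log": 4,
    "/var/log/audit": 0.5, "/var": 15, "/boot": 1,
}
total_gib = sum(proposed.values())  # minimum disk size implied by the layout
```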


# uptime
11:15:30 up 8 days,  5:45,  1 user,  load average: 0.96, 1.14, 1.05

# du -sh /home
16K	/home

# du -sh /tmp
80K	/tmp

# du -sh /var/log
2.1G	/var/log

# du -sh /var/log/audit
35M	/var/log/audit

Comment 17 Ryan Barry 2017-02-10 13:57:05 UTC
Thanks Donny. Those numbers look a lot safer to me.

Comment 18 Qin Yuan 2017-02-11 01:34:29 UTC
(In reply to Ryan Barry from comment #13)
> Thanks Qin.
> 
> How much activity does this environment see? I'm worried because the NIST
> standards on RHEL assume log shipping, but we can't make that assumption if
> we're making NIST the default in RHV-H for all customers.

Our longevity hypervisor has 18 VMs on it, but it may be too simple to simulate customers' most common scenarios. The sizes listed above are just for reference.

Comment 19 Gil Klein 2017-02-13 08:58:58 UTC
Here is the info from RHEV QE production systems:

# ansible all -a "du -sh /home"  -u root  | grep /
4.0K	/home
4.0K	/home
4.0K	/home
20K	/home
4.0K	/home
4.0K	/home
4.0K	/home
4.0K	/home
4.0K	/home
4.0K	/home
4.0K	/home
4.0K	/home
4.0K	/home
4.0K	/home
4.0K	/home
20K	/home
20K	/home
4.0K	/home
792K	/home
4.0K	/home

# ansible all -a "du -sh /tmp"  -u root  | grep /
11M	/tmp
11M	/tmp
11M	/tmp
11M	/tmp
11M	/tmp
11M	/tmp
11M	/tmp
11M	/tmp
11M	/tmp
11M	/tmp
11M	/tmp
11M	/tmp
11M	/tmp
11M	/tmp
11M	/tmp
11M	/tmp
11M	/tmp
11M	/tmp
11M	/tmp
11M	/tmp

# ansible all -a "du -sh /var/log"  -u root  | grep /
179M	/var/log
194M	/var/log
56G	/var/log
25M	/var/log
164M	/var/log
1.3G	/var/log
1.3G	/var/log
1.3G	/var/log
165M	/var/log
161M	/var/log
1.2G	/var/log
167M	/var/log
151M	/var/log
153M	/var/log
2.4G	/var/log
378M	/var/log
388M	/var/log
385M	/var/log
388M	/var/log
376M	/var/log

# ansible all -a "du -sh /var/log/audit"  -u root  | grep /
72K	/var/log/audit
240K	/var/log/audit
76K	/var/log/audit
108K	/var/log/audit
33M	/var/log/audit
4.0K	/var/log/audit
128K	/var/log/audit
124K	/var/log/audit
128K	/var/log/audit
156K	/var/log/audit
4.0K	/var/log/audit
152K	/var/log/audit
152K	/var/log/audit
156K	/var/log/audit
4.0K	/var/log/audit
180K	/var/log/audit
4.0K	/var/log/audit
4.0K	/var/log/audit
40M	/var/log/audit
34M	/var/log/audit
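A small hypothetical helper (not part of any project tooling) for condensing `du -sh` surveys like the one above into the largest observed size per directory, which is what the sizing decision below is based on:

```python
# Parse human-readable `du -sh` sizes ("56G", "1.3G", "72K") into bytes
# and report the maximum across a survey. Unit handling is illustrative
# (powers of 1024, as du -h uses).

UNITS = {"K": 1024, "M": 1024**2, "G": 1024**3, "T": 1024**4}

def parse_size(token):
    """Convert a du -h size token to bytes."""
    if token[-1] in UNITS:
        return float(token[:-1]) * UNITS[token[-1]]
    return float(token)  # plain byte count

def largest(du_lines):
    """Max size in bytes across 'SIZE<TAB>PATH' lines from du -sh."""
    return max(parse_size(line.split()[0]) for line in du_lines)
```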

Comment 20 Ryan Barry 2017-02-13 14:08:23 UTC
Thanks Gil.

Douglas, we also got some feedback from other sources.

The largest /var/log which has been observed is ~6.5GB.

/var/log/audit is ~1.2 GB

We probably want to size /var/log at 7 or 8 GB,
and /var/log/audit at 2 GB.

Comment 25 Douglas Schilling Landgraf 2017-02-14 04:11:24 UTC
(In reply to Ryan Barry from comment #20)
> Thanks Gil.
> 
> Douglas, we also got some feedback from other sources.
> 
> The largest /var/log which has been observed is ~6.5GB.
> 
> /var/log/audit is ~1.2 GB
> 
> We probably want to size /var/log at 7 or 8 GB.
> /var/log audit at 2GB.

Sure, I will prepare a new patch for the Anaconda folks.

Comment 26 Douglas Schilling Landgraf 2017-02-15 22:32:37 UTC
For reference only:

The pull request for the rhel7-branch is below; I have also added the patch from the imgbased side in External Trackers.

Add support for NIST 800-53 SC-32 System Partitioning schema.
https://github.com/rhinstaller/anaconda/pull/963

Any comments are welcome.

Comment 32 Radek Vykydal 2017-03-06 10:57:55 UTC
We have checked that the patch in bug 1422952 works with a 40GB disk. The value might actually be a bit lower. Please make sure your installation documentation is updated when you consume the installer with the updated RHV install class (it should be included in RHEL 7.4).

Comment 33 Emma Heftman 2017-10-16 09:48:52 UTC
Hey Ryan
I need help understanding the doc text and also working out the impact of the feature on the documentation.

1. For the doc text, when you say "and existing configurations will changed to match on update", can you please clarify what you mean?

2. Can you let me know whether the only thing that will change in the 4.2 documentation will be the size of the partitions, as described here: 

https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html/planning_and_prerequisites_guide/requirements#storage_requirements

Or does this affect other procedures such as upgrades?

Comment 34 Ryan Barry 2017-10-16 13:10:58 UTC
1) What's meant by this is that RHVH 4.0, or RHVH 4.1 from before NIST support was added, will have the NIST partitions added and activated when upgrading.

2) That table is correct. NIST support was actually added in 4.1.3 as a Z-stream backport, so the documentation should be identical between these bugs.

Comment 35 Emma Heftman 2017-10-25 08:32:34 UTC
(In reply to Ryan Barry from comment #34)
> 1) What's meant by this is that RHVH 4.0 or RHVH 4.1 prior to NIST was added
> will have the NIST partitions added and activated when upgrading.
> 
> 2) That table is correct. NIST support was actually added in 4.1.3 as a
> Z-stream backport, so the documentation should be identical between these
> bugs.

Thanks Ryan, so even for upgrades there is nothing to mention?

Comment 36 Qin Yuan 2017-11-09 09:15:47 UTC
Verify Versions:
RHVH-4.2-20171105.2-RHVH-x86_64-dvd1.iso
imgbased-1.0.1-0.1.el7ev.noarch


Verify Steps and Results:

scenario 1: auto partitioning
1. Install RHVH iso
2. Choose auto partitioning
3. Finish installation
4. Run `lvs -a`, `df -Th`, `mount | grep discard`

The thin volumes home, tmp, var, var_log, and var_log_audit were created; their sizes were 1G, 1G, 15G, 8G, and 2G, respectively.
The corresponding file systems were mounted on /home, /tmp, /var, /var/log, and /var/log/audit.
The discard option was present in each thin volume's mount options.


scenario 2: custom partitioning, configure /boot, /, /var, /home, swap
1. Install RHVH iso
2. Choose custom partitioning, add /boot, /var=15G, /home=20G, swap and / manually.
3. Finish installation
4. Run `lvs -a`, `df -Th`, `mount | grep discard`

The results were similar to scenario 1, except the size of /home was 20G.


Conclusions:
1. /home, /tmp, /var, /var/log, and /var/log/audit are created as separate partitions by default.
2. When choosing auto partitioning, the default sizes of /home, /tmp, /var, /var/log, and /var/log/audit are 1G, 1G, 15G, 8G, and 2G, respectively.
3. When some of the NIST partitions are created manually, after installation finishes the sizes of those partitions are as configured, and the rest of the NIST partitions are created with default sizes.


Huijuan, please help to provide the verification results of upgrade process.
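A minimal sketch of how the default-size check in these verification steps could be automated, assuming `lvs --noheadings -o lv_name,lv_size` style output (hypothetical helper, not the actual QE tooling):

```python
# Compare the NIST logical volumes reported by lvs against the expected
# default sizes from the verification above. Names/format are illustrative.

EXPECTED = {"home": "1.00g", "tmp": "1.00g", "var": "15.00g",
            "var_log": "8.00g", "var_log_audit": "2.00g"}

def check_defaults(lvs_output, expected=EXPECTED):
    """Return the volumes whose reported size differs from `expected`."""
    seen = {}
    for line in lvs_output.splitlines():
        fields = line.split()
        if len(fields) >= 2:
            # lvs prints rounded sizes like "<2.22t"; drop the "<" marker.
            seen[fields[0]] = fields[1].lstrip("<")
    return [lv for lv, size in expected.items() if seen.get(lv) != size]
```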

Comment 37 Huijuan Zhao 2017-11-09 11:28:23 UTC
Tested 2 upgrade scenarios; both passed.
1. Upgrade from a NIST system to rhvh-4.2
2. Upgrade from a non-NIST system to rhvh-4.2

Scenario 1: Upgrade from NIST system to rhvh-4.2 
Test version:
From: rhvh-4.1-0.20171101.0
To:   rhvh-4.2-0.20171102.0

Test steps:
1. Install rhvh-4.1-0.20171101.0
2. Upgrade rhvh to rhvh-4.2-0.20171102.0
3. Log in to rhvh-4.2 and run some checks:
#lvs -a
#findmnt
#df -Th
#mount | grep discard
4. Roll back to rhvh-4.1 and repeat the checks in step 3

Test results:
1. After step 3:
In addition to /home, /tmp, /var, /var/log, and /var/log/audit, there is another partition, /var/crash, which is 10GB.

# lvs -a
  LV                      VG                  Attr       LSize   Pool   Origin                Data%  Meta%  Move Log Cpy%Sync Convert
  home                    rhvh_ibm-x3650m5-04 Vwi-aotz--   1.00g pool00                       4.79                              …
     
  root                    rhvh_ibm-x3650m5-04 Vwi---tz--  <2.22t pool00                                                              
  swap                    rhvh_ibm-x3650m5-04 -wi-ao---- <15.44g                                                                     
  tmp                     rhvh_ibm-x3650m5-04 Vwi-aotz--   1.00g pool00                       4.86                                   
  var                     rhvh_ibm-x3650m5-04 Vwi-aotz--  15.00g pool00                       2.58                                   
  var_crash               rhvh_ibm-x3650m5-04 Vwi-aotz--  10.00g pool00                       2.86                                   
  var_log                 rhvh_ibm-x3650m5-04 Vwi-aotz--   8.00g pool00                       3.89                                   
  var_log_audit           rhvh_ibm-x3650m5-04 Vwi-aotz--   2.00g pool00                       5.74

# findmnt
TARGET                           SOURCE     FSTYPE     OPTIONS
...
├─/tmp                           /dev/mapper/rhvh_ibm--x3650m5--04-tmp
                                            ext4       rw,relatime,seclabel,discard,stripe=16,data=ordered
├─/var                           /dev/mapper/rhvh_ibm--x3650m5--04-var
                                            ext4       rw,relatime,seclabel,discard,stripe=16,data=ordered
│ ├─/var/crash                   /dev/mapper/rhvh_ibm--x3650m5--04-var_crash
                                            ext4       rw,relatime,seclabel,discard,stripe=16,data=ordered
│ └─/var/log                     /dev/mapper/rhvh_ibm--x3650m5--04-var_log
                                            ext4       rw,relatime,seclabel,discard,stripe=16,data=ordered
│   └─/var/log/audit             /dev/mapper/rhvh_ibm--x3650m5--04-var_log_audit
                                            ext4       rw,relatime,seclabel,discard,stripe=16,data=ordered
└─/home                          /dev/mapper/rhvh_ibm--x3650m5--04-home
                                            ext4       rw,relatime,seclabel,discard,stripe=16,data=ordered

# df -Th
Filesystem                                                  Type      Size  Used Avail Use% Mounted on
...
/dev/mapper/rhvh_ibm--x3650m5--04-var                       ext4       15G   42M   14G   1% /var
/dev/mapper/rhvh_ibm--x3650m5--04-tmp                       ext4      976M  2.8M  906M   1% /tmp
/dev/mapper/rhvh_ibm--x3650m5--04-home                      ext4      976M  2.6M  907M   1% /home
/dev/mapper/rhvh_ibm--x3650m5--04-var_log                   ext4      7.8G   85M  7.3G   2% /var/log
/dev/mapper/rhvh_ibm--x3650m5--04-var_crash                 ext4      9.8G   37M  9.2G   1% /var/crash
/dev/mapper/rhvh_ibm--x3650m5--04-var_log_audit             ext4      2.0G   26M  1.8G   2% /var/log/audit

# mount | grep discard
/dev/mapper/rhvh_ibm--x3650m5--04-rhvh--4.2--0.20171102.0+1 on / type ext4 (rw,relatime,seclabel,discard,stripe=16,data=ordered)
/dev/mapper/rhvh_ibm--x3650m5--04-tmp on /tmp type ext4 (rw,relatime,seclabel,discard,stripe=16,data=ordered)
/dev/mapper/rhvh_ibm--x3650m5--04-var on /var type ext4 (rw,relatime,seclabel,discard,stripe=16,data=ordered)
/dev/mapper/rhvh_ibm--x3650m5--04-home on /home type ext4 (rw,relatime,seclabel,discard,stripe=16,data=ordered)
/dev/mapper/rhvh_ibm--x3650m5--04-var_log on /var/log type ext4 (rw,relatime,seclabel,discard,stripe=16,data=ordered)
/dev/mapper/rhvh_ibm--x3650m5--04-var_crash on /var/crash type ext4 (rw,relatime,seclabel,discard,stripe=16,data=ordered)
/dev/mapper/rhvh_ibm--x3650m5--04-var_log_audit on /var/log/audit type ext4 (rw,relatime,seclabel,discard,stripe=16,data=ordered)

2. After step 4:
The result of "# lvs -a" is the same as in step 3.
The results of "# findmnt", "# df -Th" and "# mount | grep discard" differ from step 3: they do NOT have "/var/crash" mounted.



Scenario 2: Upgrade from non-NIST system to rhvh-4.2.
Test version:
From: rhvh-4.1-0.20170522.0
To:   rhvh-4.2-0.20171102.0

Test steps:
Same as Scenario 1.

Test results:
1. After step3, test results are same as Scenario 1
2. After step4, The result of "#lvs -a " is same as step3.
The results of "# findmnt", "# df -Th" and "# mount | grep discard" are different as step3, do NOT have "/var/crash", "/var/log", "/var/log/audit", "/tmp", "/home" mounted.

Comment 38 cshao 2017-11-09 12:04:01 UTC
Verified this bug according to #c36 & #c37.

Comment 39 Emma Heftman 2017-11-16 09:52:32 UTC
(In reply to Emma Heftman from comment #35)
> (In reply to Ryan Barry from comment #34)
> > 1) What's meant by this is that RHVH 4.0 or RHVH 4.1 prior to NIST was added
> > will have the NIST partitions added and activated when upgrading.
> > 
> > 2) That table is correct. NIST support was actually added in 4.1.3 as a
> > Z-stream backport, so the documentation should be identical between these
> > bugs.
> 
> Thanks Ryan, so even for upgrades there is nothing to mention?

Hey Ryan, can you please confirm that there is nothing that needs to be changed for upgrades? Thanks

Comment 40 Ryan Barry 2017-11-16 10:23:35 UTC
Confirmed

Comment 44 errata-xmlrpc 2018-05-15 17:57:40 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:1524

Comment 45 Franta Kust 2019-05-16 13:03:02 UTC
BZ<2>Jira Resync

