Bug 1854191

Summary: storage: ignore null-blk devices in find_unused_disk
Product: Red Hat Enterprise Linux 8
Component: rhel-system-roles
Version: 8.3
Status: CLOSED ERRATA
Severity: unspecified
Priority: unspecified
Reporter: Zhang Yi <yizhan>
Assignee: Pavel Cahyna <pcahyna>
QA Contact: Zhang Yi <yizhan>
CC: djez
Keywords: Rebase
Target Milestone: rc
Target Release: 8.3
Hardware: Unspecified
OS: Unspecified
Whiteboard: role:storage
Fixed In Version: rhel-system-roles-1.0-15.el8
Doc Type: If docs needed, set a value
Last Closed: 2020-11-04 04:03:45 UTC
Type: Bug

Description Zhang Yi 2020-07-06 17:14:07 UTC
Description of problem:
The find_unused_disk module in the storage role can return null_blk (nullb) devices as unused disks. LVM refuses to operate on these devices (they are excluded by its device filter), so any pool the role then builds on a nullb device fails at pvcreate time. find_unused_disk should ignore null-blk devices.

Version-Release number of selected component (if applicable):


How reproducible:
Always, whenever find_unused_disk picks a null_blk device.

Steps to Reproduce:
1. Make a null_blk device available (modprobe null_blk creates /dev/nullb0 by default).
2. Run a storage-role playbook that selects its disks via get_unused_disk.yml (see the playbook below).
3. find_unused_disk returns nullb0 and the role tries to build an LVM pool on it.

Actual results:
The "manage the pools and volumes to match the specified state" task fails with "Failed to commit changes to disk": pvcreate on /dev/nullb0 is rejected by the LVM device filter (full log below).

Expected results:
find_unused_disk skips null-blk devices and returns only disks that LVM will accept.

Additional info:
LVM rejects nullb devices even when invoked directly:

```
$ lsblk -o NAME,FSTYPE,TYPE  /dev/nullb0 /dev/nvme0n1
NAME    FSTYPE TYPE
nullb0         disk
nvme0n1        disk
$ vgcreate foo4 /dev/nullb0 
  Device /dev/nullb0 excluded by a filter.
$ pvcreate /dev/nullb0 
  Device /dev/nullb0 excluded by a filter.
```
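
Since LVM itself refuses to touch nullb devices, find_unused_disk should never offer them as candidates. A minimal sketch of the kind of filter the module needs, assuming a device-name-prefix check against /sys/block (IGNORED_PREFIXES and candidate_disks are illustrative names; the actual patch shipped in rhel-system-roles-1.0-15.el8 may differ):

```
import os

# Prefixes of device names that should never be reported as unused disks.
# null_blk devices always show up as /dev/nullb<N>.
IGNORED_PREFIXES = ("nullb",)

def candidate_disks():
    """Yield names of whole-disk block devices worth considering.

    /sys/block has one entry per block device; partitions live under
    their parent device, so this only sees whole disks.
    """
    for name in sorted(os.listdir("/sys/block")):
        if name.startswith(IGNORED_PREFIXES):
            # Skip null_blk test devices: LVM's filter rejects them,
            # so handing one to the storage role guarantees a failure.
            continue
        yield name
```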

### Playbook
```
$ cat tests/nullb0.yml 
---
- hosts: all
  become: true
  vars:
    volume_group_size: '10g'
    volume_size: '80g'
    storage_safe_mode: false

  tasks:
    - include_role:
        name: storage

    - include_tasks: get_unused_disk.yml
      vars:
        min_size: "{{ volume_group_size }}"
        max_return: 2

    - name: Create one logical volume which has a 4 char vg and 78 lv
      include_role:
        name: storage
      vars:
        storage_pools:
            - name: foo4
              disks: ["{{ unused_disks[0] }}"]
              volumes:
                - name: test1
                  size: "{{ volume_size }}"
                  mount_point: '/opt/test1'

    - include_tasks: verify-role-results.yml

    - name: Clean up
      include_role:
        name: storage
      vars:
        storage_pools:
            - name: foo4
              disks: ["{{ unused_disks[0] }}"]
              state: absent
              volumes: []
```

```
$ ansible-playbook -i inventory tests/nullb0.yml -vvvv
TASK [storage : debug] ******************************************************************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:84
ok: [localhost] => {
    "_storage_pools": [
        {
            "disks": [
                "nullb0"
            ],
            "name": "foo4",
            "state": "present",
            "type": "lvm",
            "volumes": [
                {
                    "fs_create_options": "",
                    "fs_label": "",
                    "fs_overwrite_existing": true,
                    "fs_type": "xfs",
                    "mount_check": 0,
                    "mount_device_identifier": "uuid",
                    "mount_options": "defaults",
                    "mount_passno": 0,
                    "mount_point": "/opt/test1",
                    "name": "test1",
                    "pool": "foo4",
                    "size": "80g",
                    "state": "present",
                    "type": "lvm"
                }
            ]
        }
    ]
}

TASK [storage : debug] ******************************************************************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:87
ok: [localhost] => {
    "_storage_volumes": []
}

TASK [storage : get required packages] **************************************************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:90
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: root
<localhost> EXEC /bin/sh -c 'echo ~root && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir /root/.ansible/tmp/ansible-tmp-1590068525.539217-12932-14548696496167 && echo ansible-tmp-1590068525.539217-12932-14548696496167="` echo /root/.ansible/tmp/ansible-tmp-1590068525.539217-12932-14548696496167 `" ) && sleep 0'
Using module file /root/test/storage/library/blivet.py
<localhost> PUT /root/.ansible/tmp/ansible-local-12439gse6jap4/tmp4o2eo1nd TO /root/.ansible/tmp/ansible-tmp-1590068525.539217-12932-14548696496167/AnsiballZ_blivet.py
<localhost> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1590068525.539217-12932-14548696496167/ /root/.ansible/tmp/ansible-tmp-1590068525.539217-12932-14548696496167/AnsiballZ_blivet.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1590068525.539217-12932-14548696496167/AnsiballZ_blivet.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1590068525.539217-12932-14548696496167/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
    "actions": [],
    "changed": false,
    "invocation": {
        "module_args": {
            "disklabel_type": null,
            "packages_only": true,
            "pools": [
                {
                    "disks": [
                        "nullb0"
                    ],
                    "name": "foo4",
                    "state": "present",
                    "type": "lvm",
                    "volumes": [
                        {
                            "fs_create_options": "",
                            "fs_label": "",
                            "fs_overwrite_existing": true,
                            "fs_type": "xfs",
                            "mount_check": 0,
                            "mount_device_identifier": "uuid",
                            "mount_options": "defaults",
                            "mount_passno": 0,
                            "mount_point": "/opt/test1",
                            "name": "test1",
                            "pool": "foo4",
                            "size": "80g",
                            "state": "present",
                            "type": "lvm"
                        }
                    ]
                }
            ],
            "safe_mode": true,
            "use_partitions": null,
            "volumes": []
        }
    },
    "leaves": [],
    "mounts": [],
    "packages": [
        "lvm2",
        "xfsprogs"
    ],
    "pools": [],
    "volumes": []
}

TASK [storage : make sure required packages are installed] ******************************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:99
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: root
<localhost> EXEC /bin/sh -c 'echo ~root && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir /root/.ansible/tmp/ansible-tmp-1590068529.3606427-13013-276846597370977 && echo ansible-tmp-1590068529.3606427-13013-276846597370977="` echo /root/.ansible/tmp/ansible-tmp-1590068529.3606427-13013-276846597370977 `" ) && sleep 0'
Using module file /usr/lib/python3.6/site-packages/ansible/modules/packaging/os/dnf.py
<localhost> PUT /root/.ansible/tmp/ansible-local-12439gse6jap4/tmp23kyjljz TO /root/.ansible/tmp/ansible-tmp-1590068529.3606427-13013-276846597370977/AnsiballZ_dnf.py
<localhost> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1590068529.3606427-13013-276846597370977/ /root/.ansible/tmp/ansible-tmp-1590068529.3606427-13013-276846597370977/AnsiballZ_dnf.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1590068529.3606427-13013-276846597370977/AnsiballZ_dnf.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1590068529.3606427-13013-276846597370977/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
    "changed": false,
    "invocation": {
        "module_args": {
            "allow_downgrade": false,
            "autoremove": false,
            "bugfix": false,
            "conf_file": null,
            "disable_excludes": null,
            "disable_gpg_check": false,
            "disable_plugin": [],
            "disablerepo": [],
            "download_dir": null,
            "download_only": false,
            "enable_plugin": [],
            "enablerepo": [],
            "exclude": [],
            "install_repoquery": true,
            "install_weak_deps": true,
            "installroot": "/",
            "list": null,
            "lock_timeout": 30,
            "name": [
                "lvm2",
                "xfsprogs"
            ],
            "releasever": null,
            "security": false,
            "skip_broken": false,
            "state": "present",
            "update_cache": false,
            "update_only": false,
            "validate_certs": true
        }
    },
    "msg": "Nothing to do",
    "rc": 0,
    "results": []
}

TASK [storage : manage the pools and volumes to match the specified state] **************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:104
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: root
<localhost> EXEC /bin/sh -c 'echo ~root && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir /root/.ansible/tmp/ansible-tmp-1590068533.2727416-13029-153970000470128 && echo ansible-tmp-1590068533.2727416-13029-153970000470128="` echo /root/.ansible/tmp/ansible-tmp-1590068533.2727416-13029-153970000470128 `" ) && sleep 0'
Using module file /root/test/storage/library/blivet.py
<localhost> PUT /root/.ansible/tmp/ansible-local-12439gse6jap4/tmpwlp4zaij TO /root/.ansible/tmp/ansible-tmp-1590068533.2727416-13029-153970000470128/AnsiballZ_blivet.py
<localhost> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1590068533.2727416-13029-153970000470128/ /root/.ansible/tmp/ansible-tmp-1590068533.2727416-13029-153970000470128/AnsiballZ_blivet.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1590068533.2727416-13029-153970000470128/AnsiballZ_blivet.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1590068533.2727416-13029-153970000470128/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
  File "/tmp/ansible_blivet_payload_vlkee3mf/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 835, in run_module
  File "/usr/lib/python3.6/site-packages/blivet/actionlist.py", line 48, in wrapped_func
    return func(obj, *args, **kwargs)
  File "/usr/lib/python3.6/site-packages/blivet/actionlist.py", line 327, in process
    action.execute(callbacks)
  File "/usr/lib/python3.6/site-packages/blivet/threads.py", line 53, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/blivet/deviceaction.py", line 656, in execute
    options=self.device.format_args)
  File "/usr/lib/python3.6/site-packages/blivet/threads.py", line 53, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/blivet/formats/__init__.py", line 513, in create
    self._create(**kwargs)
  File "/usr/lib/python3.6/site-packages/blivet/threads.py", line 53, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/blivet/formats/lvmpv.py", line 124, in _create
    blockdev.lvm.pvcreate(self.device, data_alignment=self.data_alignment, extra=[ea_yes])
  File "/usr/lib64/python3.6/site-packages/gi/overrides/BlockDev.py", line 993, in wrapped
    raise transform[1](msg)
fatal: [localhost]: FAILED! => {
    "actions": [],
    "changed": false,
    "invocation": {
        "module_args": {
            "disklabel_type": null,
            "packages_only": false,
            "pools": [
                {
                    "disks": [
                        "nullb0"
                    ],
                    "name": "foo4",
                    "state": "present",
                    "type": "lvm",
                    "volumes": [
                        {
                            "_device": "/dev/mapper/foo4-test1",
                            "_mount_id": "/dev/mapper/foo4-test1",
                            "fs_create_options": "",
                            "fs_label": "",
                            "fs_overwrite_existing": true,
                            "fs_type": "xfs",
                            "mount_check": 0,
                            "mount_device_identifier": "uuid",
                            "mount_options": "defaults",
                            "mount_passno": 0,
                            "mount_point": "/opt/test1",
                            "name": "test1",
                            "pool": "foo4",
                            "size": "80g",
                            "state": "present",
                            "type": "lvm"
                        }
                    ]
                }
            ],
            "safe_mode": false,
            "use_partitions": null,
            "volumes": []
        }
    },
    "leaves": [],
    "mounts": [],
    "msg": "Failed to commit changes to disk",
    "packages": [
        "lvm2",
        "e2fsprogs",
        "dosfstools",
        "xfsprogs"
    ],
    "pools": [],
    "volumes": []
}

PLAY RECAP ******************************************************************************************************************************************************************************************************************************************
localhost                  : ok=35   changed=0    unreachable=0    failed=1    skipped=12   rescued=0    ignored=0   

```
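
The traceback pins down the failure path: blivet's LVM PV format code calls blockdev.lvm.pvcreate() on /dev/nullb0, and libblockdev raises because LVM's device filter rejects the device, the same "excluded by a filter" error seen with the manual pvcreate above. Besides name-based filtering, a more defensive variant would probe each candidate with pvcreate in test mode before returning it; a hedged sketch (lvm_accepts is a hypothetical helper, not part of the shipped fix):

```
import subprocess

def lvm_accepts(device):
    """Return True if LVM's device filter would accept `device`.

    `pvcreate --test` runs all checks, including filter evaluation,
    without writing anything to the device. Note this is stricter than
    a pure filter check: it also fails for devices that are in use
    (existing PV, filesystem signature, ...), which is acceptable for
    an "unused disk" scan.
    """
    result = subprocess.run(
        ["pvcreate", "--test", device],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0
```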

Comment 9 errata-xmlrpc 2020-11-04 04:03:45 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (rhel-system-roles bug fix and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:4809