Bug 1848248 - storage: tests_lvm_errors.yml failed due to "Kernel module ext3 not available"
Summary: storage: tests_lvm_errors.yml failed due to "Kernel module ext3 not available"
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: rhel-system-roles
Version: 8.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: 8.3
Assignee: Pavel Cahyna
QA Contact: Zhang Yi
URL:
Whiteboard:
Depends On: 1855344
Blocks:
Reported: 2020-06-18 05:07 UTC by Zhang Yi
Modified: 2020-07-31 07:20 UTC
CC: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-07-31 07:20:00 UTC
Type: Bug
Target Upstream Version:
Embargoed:
pm-rhel: mirror+


Links:
Github linux-system-roles/storage issue 110 (closed): storage: tests_lvm_errors.yml failed due to "Kernel module ext3 not available" (last updated 2020-11-09 16:49:27 UTC)

Description Zhang Yi 2020-06-18 05:07:42 UTC
Description of problem:
storage: tests_lvm_errors.yml failed due to "Kernel module ext3 not available"

Cloned from https://github.com/linux-system-roles/storage/issues/110


Version-Release number of selected component (if applicable):
rhel-system-roles-1.0-11.el8

How reproducible:


Steps to Reproduce:
# ansible-playbook -i inventory tests/tests_lvm_errors.yml -vvvv
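
Note: the command assumes an inventory file that resolves to the host carrying the spare test disks. The transcript below runs against localhost over a local connection, so a minimal inventory sketch (YAML format; the file name matches whatever is passed to -i) would be:

all:
  hosts:
    localhost:
      ansible_connection: local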

TASK [Try to replace a pool by a file system on disk in safe mode] **************************************************************************************************************************************************
task path: /root/test/storage/tests/tests_lvm_errors.yml:397

TASK [storage : Set version specific variables] *********************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main.yml:2
ok: [localhost] => (item=/root/test/storage/vars/RedHat-8.yml) => {
    "ansible_facts": {
        "blivet_package_list": [
            "python3-blivet",
            "libblockdev-dm",
            "libblockdev-lvm",
            "libblockdev-part"
        ]
    },
    "ansible_included_var_files": [
        "/root/test/storage/vars/RedHat-8.yml"
    ],
    "ansible_loop_var": "item",
    "changed": false,
    "item": "/root/test/storage/vars/RedHat-8.yml"
}
--snip--
TASK [storage : debug] **********************************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:87
ok: [localhost] => {
    "_storage_volumes": [
        {
            "disks": [
                "sdk"
            ],
            "fs_create_options": "",
            "fs_label": "",
            "fs_overwrite_existing": true,
            "fs_type": "ext3",
            "mount_check": 0,
            "mount_device_identifier": "uuid",
            "mount_options": "defaults",
            "mount_passno": 0,
            "mount_point": "",
            "name": "test1",
            "size": 0,
            "state": "present",
            "type": "disk"
        }
    ]
}

TASK [storage : get required packages] ******************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:90
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: root
<localhost> EXEC /bin/sh -c 'echo ~root && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir /root/.ansible/tmp/ansible-tmp-1592294822.8819156-4676-31572033814412 && echo ansible-tmp-1592294822.8819156-4676-31572033814412="` echo /root/.ansible/tmp/ansible-tmp-1592294822.8819156-4676-31572033814412 `" ) && sleep 0'
Using module file /root/test/storage/library/blivet.py
<localhost> PUT /root/.ansible/tmp/ansible-local-2092m5baly1_/tmpqex7kqkz TO /root/.ansible/tmp/ansible-tmp-1592294822.8819156-4676-31572033814412/AnsiballZ_blivet.py
<localhost> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1592294822.8819156-4676-31572033814412/ /root/.ansible/tmp/ansible-tmp-1592294822.8819156-4676-31572033814412/AnsiballZ_blivet.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1592294822.8819156-4676-31572033814412/AnsiballZ_blivet.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1592294822.8819156-4676-31572033814412/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
    "actions": [],
    "changed": false,
    "invocation": {
        "module_args": {
            "disklabel_type": null,
            "packages_only": true,
            "pools": [],
            "safe_mode": true,
            "use_partitions": null,
            "volumes": [
                {
                    "disks": [
                        "sdk"
                    ],
                    "fs_create_options": "",
                    "fs_label": "",
                    "fs_overwrite_existing": true,
                    "fs_type": "ext3",
                    "mount_check": 0,
                    "mount_device_identifier": "uuid",
                    "mount_options": "defaults",
                    "mount_passno": 0,
                    "mount_point": "",
                    "name": "test1",
                    "size": 0,
                    "state": "present",
                    "type": "disk"
                }
            ]
        }
    },
    "leaves": [],
    "mounts": [],
    "packages": [
        "e2fsprogs"
    ],
    "pools": [],
    "volumes": []
}

TASK [storage : make sure required packages are installed] **********************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:99
Running dnf
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: root
<localhost> EXEC /bin/sh -c 'echo ~root && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir /root/.ansible/tmp/ansible-tmp-1592294826.6364615-4733-173430448428291 && echo ansible-tmp-1592294826.6364615-4733-173430448428291="` echo /root/.ansible/tmp/ansible-tmp-1592294826.6364615-4733-173430448428291 `" ) && sleep 0'
Using module file /usr/lib/python3.6/site-packages/ansible/modules/packaging/os/dnf.py
<localhost> PUT /root/.ansible/tmp/ansible-local-2092m5baly1_/tmpigd7fnjy TO /root/.ansible/tmp/ansible-tmp-1592294826.6364615-4733-173430448428291/AnsiballZ_dnf.py
<localhost> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1592294826.6364615-4733-173430448428291/ /root/.ansible/tmp/ansible-tmp-1592294826.6364615-4733-173430448428291/AnsiballZ_dnf.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1592294826.6364615-4733-173430448428291/AnsiballZ_dnf.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1592294826.6364615-4733-173430448428291/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
    "changed": false,
    "invocation": {
        "module_args": {
            "allow_downgrade": false,
            "autoremove": false,
            "bugfix": false,
            "conf_file": null,
            "disable_excludes": null,
            "disable_gpg_check": false,
            "disable_plugin": [],
            "disablerepo": [],
            "download_dir": null,
            "download_only": false,
            "enable_plugin": [],
            "enablerepo": [],
            "exclude": [],
            "install_repoquery": true,
            "install_weak_deps": true,
            "installroot": "/",
            "list": null,
            "lock_timeout": 30,
            "name": [
                "e2fsprogs"
            ],
            "releasever": null,
            "security": false,
            "skip_broken": false,
            "state": "present",
            "update_cache": false,
            "update_only": false,
            "validate_certs": true
        }
    },
    "msg": "Nothing to do",
    "rc": 0,
    "results": []
}

TASK [storage : manage the pools and volumes to match the specified state] ******************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:104
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: root
<localhost> EXEC /bin/sh -c 'echo ~root && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir /root/.ansible/tmp/ansible-tmp-1592294830.480519-4749-39146031242317 && echo ansible-tmp-1592294830.480519-4749-39146031242317="` echo /root/.ansible/tmp/ansible-tmp-1592294830.480519-4749-39146031242317 `" ) && sleep 0'
Using module file /root/test/storage/library/blivet.py
<localhost> PUT /root/.ansible/tmp/ansible-local-2092m5baly1_/tmpl59s_7ae TO /root/.ansible/tmp/ansible-tmp-1592294830.480519-4749-39146031242317/AnsiballZ_blivet.py
<localhost> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1592294830.480519-4749-39146031242317/ /root/.ansible/tmp/ansible-tmp-1592294830.480519-4749-39146031242317/AnsiballZ_blivet.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1592294830.480519-4749-39146031242317/AnsiballZ_blivet.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1592294830.480519-4749-39146031242317/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
  File "/tmp/ansible_blivet_payload_9tzqeryj/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 820, in run_module
  File "/tmp/ansible_blivet_payload_9tzqeryj/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 607, in manage_volume
  File "/tmp/ansible_blivet_payload_9tzqeryj/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 280, in manage
  File "/tmp/ansible_blivet_payload_9tzqeryj/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 250, in _reformat
  File "/tmp/ansible_blivet_payload_9tzqeryj/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 207, in _get_format
fatal: [localhost]: FAILED! => {
    "actions": [],
    "changed": false,
    "invocation": {
        "module_args": {
            "disklabel_type": null,
            "packages_only": false,
            "pools": [],
            "safe_mode": true,
            "use_partitions": null,
            "volumes": [
                {
                    "disks": [
                        "sdk"
                    ],
                    "fs_create_options": "",
                    "fs_label": "",
                    "fs_overwrite_existing": true,
                    "fs_type": "ext3",
                    "mount_check": 0,
                    "mount_device_identifier": "uuid",
                    "mount_options": "defaults",
                    "mount_passno": 0,
                    "mount_point": "",
                    "name": "test1",
                    "size": 0,
                    "state": "present",
                    "type": "disk"
                }
            ]
        }
    },
    "leaves": [],
    "mounts": [],
    "msg": "required tools for file system 'ext3' are missing",
    "packages": [],
    "pools": [],
    "volumes": []
}

TASK [Check that we failed in the role] *****************************************************************************************************************************************************************************
task path: /root/test/storage/tests/tests_lvm_errors.yml:413
ok: [localhost] => {
    "changed": false,
    "msg": "All assertions passed"
}

TASK [Verify the output] ********************************************************************************************************************************************************************************************
task path: /root/test/storage/tests/tests_lvm_errors.yml:419
fatal: [localhost]: FAILED! => {
    "assertion": "blivet_output.failed and blivet_output.msg|regex_search('cannot remove existing formatting on volume.*in safe mode') and not blivet_output.changed",
    "changed": false,
    "evaluated_to": false,
    "msg": "Unexpected behavior w/ existing data on specified disks"
}
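
For context, the failed check corresponds to an assert task in tests_lvm_errors.yml; the sketch below is reconstructed from the assertion text in the output above, not copied verbatim from the test file. The role did fail, but with "required tools for file system 'ext3' are missing" rather than the expected safe-mode message, so regex_search finds no match and the whole expression evaluates to false.

- name: Verify the output
  assert:
    that: >-
      blivet_output.failed and
      blivet_output.msg|regex_search('cannot remove existing formatting on volume.*in safe mode') and
      not blivet_output.changed
    msg: "Unexpected behavior w/ existing data on specified disks"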

PLAY RECAP **********************************************************************************************************************************************************************************************************
localhost                  : ok=208  changed=1    unreachable=0    failed=1    skipped=50   rescued=11   ignored=0 

Actual results:


Expected results:


Additional info:

# tail -30 /tmp/blivet.log

  VG space used = 1024 MiB
2020-06-16 04:07:13,901 INFO program/MainThread: Running [13] dmsetup info -co subsystem --noheadings testpool1-testvol1 ...
2020-06-16 04:07:13,907 INFO program/MainThread: stdout[13]: LVM

2020-06-16 04:07:13,907 INFO program/MainThread: stderr[13]: 
2020-06-16 04:07:13,907 INFO program/MainThread: ...done [13] (exit code: 0)
2020-06-16 04:07:13,913 DEBUG blivet/MainThread:                    DeviceTree.handle_format: name: testpool1-testvol1 ;
2020-06-16 04:07:13,913 DEBUG blivet/MainThread: no type or existing type for testpool1-testvol1, bailing
2020-06-16 04:07:13,913 INFO program/MainThread: Running... udevadm settle --timeout=300
2020-06-16 04:07:13,935 DEBUG program/MainThread: Return code: 0
2020-06-16 04:07:13,970 INFO blivet/MainThread: edd: MBR signature on sda is zero. new disk image?
2020-06-16 04:07:13,970 INFO blivet/MainThread: edd: collected mbr signatures: {'sdl': '0x000178c0'}
2020-06-16 04:07:13,978 DEBUG blivet/MainThread:                DeviceTree.get_device_by_path: path: /dev/mapper/rhel_storageqe--62-root ; incomplete: False ; hidden: False ;
2020-06-16 04:07:13,982 DEBUG blivet/MainThread:                DeviceTree.get_device_by_path returned existing 70 GiB lvmlv rhel_storageqe-62-root (56) with existing xfs filesystem
2020-06-16 04:07:13,982 DEBUG blivet/MainThread: resolved '/dev/mapper/rhel_storageqe--62-root' to 'rhel_storageqe-62-root' (lvmlv)
2020-06-16 04:07:13,983 DEBUG blivet/MainThread: resolved 'UUID=0c459216-6a71-4860-8e5f-97bfc9c93095' to 'sda2' (partition)
2020-06-16 04:07:13,983 DEBUG blivet/MainThread: resolved 'UUID=3189-4B31' to 'sda1' (partition)
2020-06-16 04:07:13,986 DEBUG blivet/MainThread:                DeviceTree.get_device_by_path: path: /dev/mapper/rhel_storageqe--62-home ; incomplete: False ; hidden: False ;
2020-06-16 04:07:13,990 DEBUG blivet/MainThread:                DeviceTree.get_device_by_path returned existing 199.93 GiB lvmlv rhel_storageqe-62-home (43) with existing xfs filesystem
2020-06-16 04:07:13,991 DEBUG blivet/MainThread: resolved '/dev/mapper/rhel_storageqe--62-home' to 'rhel_storageqe-62-home' (lvmlv)
2020-06-16 04:07:13,994 DEBUG blivet/MainThread:                DeviceTree.get_device_by_path: path: /dev/mapper/rhel_storageqe--62-swap ; incomplete: False ; hidden: False ;
2020-06-16 04:07:13,997 DEBUG blivet/MainThread:                DeviceTree.get_device_by_path returned existing 7.88 GiB lvmlv rhel_storageqe-62-swap (69) with existing swap
2020-06-16 04:07:13,998 DEBUG blivet/MainThread: resolved '/dev/mapper/rhel_storageqe--62-swap' to 'rhel_storageqe-62-swap' (lvmlv)
2020-06-16 04:07:14,001 DEBUG blivet/MainThread:                  DeviceTree.get_device_by_name: name: sdk ; incomplete: False ; hidden: False ;
2020-06-16 04:07:14,005 DEBUG blivet/MainThread:                  DeviceTree.get_device_by_name returned existing 279.4 GiB disk sdk (133) with existing lvmpv
2020-06-16 04:07:14,006 DEBUG blivet/MainThread: resolved 'sdk' to 'sdk' (disk)
2020-06-16 04:07:14,011 DEBUG blivet/MainThread:                   Ext3FS.supported: supported: True ;
2020-06-16 04:07:14,011 DEBUG blivet/MainThread: Kernel module ext3 not available
2020-06-16 04:07:14,011 DEBUG blivet/MainThread: get_format('ext3') returning Ext3FS instance with object id 225
2020-06-16 04:07:14,014 DEBUG blivet/MainThread:                Ext3FS.supported: supported: False ;
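
The last four lines above show the root cause: Ext3FS.supported flips from True to False once blivet concludes the "ext3" kernel module is unavailable. On RHEL 8 there is no standalone ext3 module (the ext4 driver services ext2/ext3 as well), so a probe by module name can fail even though the filesystem is supported; this points at detection logic in python3-blivet rather than at the storage role (see the comments below). One way to confirm kernel-side ext3 support on the target is to check /proc/filesystems, e.g. with a task sketch like the following (task names and grep approach are illustrative only; /proc/filesystems lists only filesystems currently registered with the kernel):

- name: Check whether ext3 is registered with the kernel
  command: grep -w ext3 /proc/filesystems
  register: ext3_support
  changed_when: false
  failed_when: ext3_support.rc not in [0, 1]

- name: Report ext3 support
  debug:
    msg: "ext3 {{ 'is' if ext3_support.rc == 0 else 'is not' }} registered with the kernel"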

Comment 5 Zhang Yi 2020-07-31 06:49:19 UTC
Confirmed this issue was fixed in python3-blivet-3.2.2-5.el8.noarch.

Thanks
Yi

Comment 6 Pavel Cahyna 2020-07-31 07:20:00 UTC
This is not a problem in the storage role at all; closing as NOTABUG.

