Bug 1623944
| Field | Value |
|---|---|
| Summary | podman devicemapper support is broken |
| Product | Red Hat Enterprise Linux 7 |
| Component | podman |
| Version | 7.6 |
| Status | CLOSED WONTFIX |
| Severity | high |
| Priority | high |
| Reporter | Qian Cai <qcai> |
| Assignee | Nalin Dahyabhai <nalin> |
| QA Contact | Martin Jenner <mjenner> |
| Docs Contact | |
| CC | 1336374132, dwalsh, jligon, lsm5, mheon, mifiedle, mpatel, nalin, umohnani, vgoyal, vrothber |
| Target Milestone | rc |
| Target Release | --- |
| Keywords | Extras |
| Hardware | All |
| OS | Linux |
| Whiteboard | |
| Fixed In Version | |
| Doc Type | If docs needed, set a value |
| Doc Text | |
| Story Points | --- |
| Clone Of | |
| | 1625394 (view as bug list) |
| Environment | |
| Last Closed | 2020-06-03 14:10:28 UTC |
| Type | Bug |
| Regression | --- |
| Mount Type | --- |
| Documentation | --- |
| CRM | |
| Verified Versions | |
| Category | --- |
| oVirt Team | --- |
| RHEL 7.3 requirements from Atomic Host | |
| Cloudforms Team | --- |
| Target Upstream Version | |
| Embargoed | |
| Bug Depends On | |
| Bug Blocks | 1625394 |
Description

Qian Cai, 2018-08-30 14:08:31 UTC

It also hangs during deletion of a container. This only happens when using devicemapper.

```
# podman rm httpd
time="2018-08-30T11:22:03-04:00" level=error msg="error joining network namespace for container 2452ad78a5a2b763e03429c1e10efda88633dac8674b71eac3979edb53691052"
```

Can you paste the output of `lvm pvscan` and `lvm config` from the host? Probably a dumb question, but /dev/vdb is a disk that's attached to the VM, right?

Yes, /dev/vdb is an extra block disk attached to the VM.

```
# lvm pvscan
  PV /dev/vdb   VG storage   lvm2 [<10.00 GiB / 312.00 MiB free]
  Total: 1 [<10.00 GiB] / in use: 1 [<10.00 GiB] / in no VG: 0 [0 ]
```

```
# lvm config
config { checks=1 abort_on_errors=0 profile_dir="/etc/lvm/profile" }
local { }
dmeventd { mirror_library="libdevmapper-event-lvm2mirror.so" snapshot_library="libdevmapper-event-lvm2snapshot.so" thin_library="libdevmapper-event-lvm2thin.so" }
activation { checks=0 udev_sync=1 udev_rules=1 verify_udev_operations=0 retry_deactivation=1 missing_stripe_filler="error" use_linear_target=1 reserved_stack=64 reserved_memory=8192 process_priority=-18 raid_region_size=2048 readahead="auto" raid_fault_policy="warn" mirror_image_fault_policy="remove" mirror_log_fault_policy="allocate" snapshot_autoextend_threshold=100 snapshot_autoextend_percent=20 thin_pool_autoextend_threshold=100 thin_pool_autoextend_percent=20 use_mlockall=0 monitoring=1 polling_interval=15 activation_mode="degraded" }
global { umask=63 test=0 units="r" si_unit_consistency=1 suffix=1 activation=1 proc="/proc" etc="/etc" locking_type=1 wait_for_locks=1 fallback_to_clustered_locking=1 fallback_to_local_locking=1 locking_dir="/run/lock/lvm" prioritise_write_locks=1 abort_on_internal_errors=0 metadata_read_only=0 mirror_segtype_default="raid1" raid10_segtype_default="raid10" sparse_segtype_default="thin" use_lvmetad=1 use_lvmlockd=0 system_id_source="none" use_lvmpolld=1 notify_dbus=1 }
shell { history_size=100 }
backup { backup=1 backup_dir="/etc/lvm/backup" archive=1 archive_dir="/etc/lvm/archive" retain_min=10 retain_days=30 }
log { verbose=0 silent=0 syslog=1 overwrite=0 level=0 indent=1 command_names=0 prefix=" " activation=0 debug_classes=["memory","devices","io","activation","allocation","lvmetad","metadata","cache","locking","lvmpolld","dbus"] }
allocation { maximise_cling=1 use_blkid_wiping=1 wipe_signatures_when_zeroing_new_lvs=1 mirror_logs_require_separate_pvs=0 cache_pool_metadata_require_separate_pvs=0 thin_pool_metadata_require_separate_pvs=0 }
devices { dir="/dev" scan="/dev" obtain_device_list_from_udev=1 external_device_info_source="none" preferred_names=["^/dev/mpath/","^/dev/mapper/mpath","^/dev/[hs]d"] cache_dir="/etc/lvm/cache" cache_file_prefix="" write_cache_state=1 sysfs_scan=1 multipath_component_detection=1 md_component_detection=1 fw_raid_component_detection=0 md_chunk_alignment=1 data_alignment_detection=1 data_alignment=0 data_alignment_offset_detection=1 ignore_suspended_devices=0 ignore_lvm_mirrors=1 disable_after_error_count=0 require_restorefile_with_uuid=1 pv_min_size=2048 issue_discards=0 allow_changes_with_duplicate_pvs=1 }
```

This should not be a blocker, as we primarily support overlayfs for podman.

Still broken in podman-0.8.4-3.git9f9b8cf.el7:

```
# podman pull brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/rhel7:7.6
time="2018-09-04T09:06:22-04:00" level=error msg="File descriptor 6 (/dev/mapper/control) leaked on pvdisplay invocation. Parent PID 26543: podman
  Failed to find physical volume "/dev/loop0".
" error="exit status 5"
time="2018-09-04T09:06:22-04:00" level=error error="exit status 2"

# podman rm httpd
time="2018-09-04T09:08:06-04:00" level=error msg="devmapper: Error unmounting device 983d636e98f6ab3c184c7f29072f2692caa6e018543bdcb2bb385af183e58e18: invalid argument"
failed to delete container b43176c655d4a84e835a76a34c7bfc385786f51b9fae3361acc31cb6a03b9def: error removing container b43176c655d4a84e835a76a34c7bfc385786f51b9fae3361acc31cb6a03b9def root filesystem: invalid argument
```

Still broken in podman-0.10.1.3-5:

```
# cat /etc/containers/storage.conf
driver = "devicemapper"
directlvm_device = "/dev/sdb"
directlvm_device_force = "True"

# podman info
ERRO[0000] Failed to GetDriver graph devicemapper /var/lib/containers/storage
could not get runtime: failed to GetDriver graph devicemapper /var/lib/containers/storage: driver not supported
```

I don't believe we have any intention of supporting this. Devicemapper has several fundamental issues that prevent it from being used from multiple processes, and these would be very difficult to fix. Given that we're encouraging the use of overlayfs instead, I don't think we're going to make the substantial time investment required to fix this one, so devicemapper will remain disabled.

Actually, Nalin had some ideas on how to fix devicemapper, although the long-term goal has always been to support an LVM driver. The reason I would want an LVM/devicemapper driver is to better support the use of Kata Containers. Nalin, can you look into this? I would like to see this fixed eventually.

Still in the backlog.

We do not plan on supporting devicemapper in podman.

The needinfo request[s] on this closed bug have been removed, as they have been unresolved for 1000 days.
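For reference, since the maintainers recommend the overlay driver instead of devicemapper, a minimal `/etc/containers/storage.conf` selecting overlayfs might look like the sketch below. This is illustrative only and not taken from the bug report: the paths shown are the common packaged defaults, and the commented `mount_program` line is an optional setting for hosts without usable native overlay support.

```toml
# /etc/containers/storage.conf -- minimal overlay setup (illustrative sketch;
# adjust paths for your host)
[storage]
driver = "overlay"
runroot = "/run/containers/storage"
graphroot = "/var/lib/containers/storage"

[storage.options]
# Optional: fall back to fuse-overlayfs where the kernel cannot provide
# native overlay mounts (e.g. some rootless or older-kernel setups)
# mount_program = "/usr/bin/fuse-overlayfs"
```

After switching drivers, `podman info` reports the graph driver actually in use, which makes it easy to confirm the change took effect. Note that changing the storage driver effectively abandons existing images and containers stored under the old driver.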