Bug 1623944 - podman devicemapper support is broken [NEEDINFO]
Summary: podman devicemapper support is broken
Keywords:
Status: ASSIGNED
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: podman
Version: 7.6
Hardware: All
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Nalin Dahyabhai
QA Contact: Martin Jenner
URL:
Whiteboard:
Depends On:
Blocks: 1625394
Reported: 2018-08-30 14:08 UTC by Qian Cai
Modified: 2019-08-14 10:46 UTC (History)
11 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1625394 (view as bug list)
Environment:
Last Closed:
vrothber: needinfo? (nalin)



Description Qian Cai 2018-08-30 14:08:31 UTC
Description of problem:
I can reproduce this using both a loopback device and a real block device. Docker's devicemapper driver works fine on the same host.

# cat /etc/containers/storage.conf
directlvm_device = '/dev/vdb'
directlvm_device_force = "True"

# podman images
ERRO[0000] File descriptor 6 (/dev/mapper/control) leaked on pvdisplay invocation. Parent PID 13583: podman
  Failed to find physical volume "/dev/vdb".
  error="exit status 5"
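For context, the directlvm settings shown above normally sit under the [storage] and [storage.options] sections of containers-storage's storage.conf; a minimal sketch (paths and the device name are illustrative, following the containers-storage.conf conventions) looks like:

```toml
[storage]
driver = "devicemapper"
runroot = "/var/run/containers/storage"
graphroot = "/var/lib/containers/storage"

[storage.options]
# Block device handed to the devicemapper driver (illustrative value)
directlvm_device = "/dev/vdb"
# Initialize the device even if it already contains data
directlvm_device_force = "True"
```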

# podman pull brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/rhel7:7.6
Trying to pull brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/rhel7:7.6...Getting image source signatures
Copying blob sha256:288a0f069a2e5c98d17f62f7a4e3cab45ec7ec68dbead75b594c1229b21d65c8
 72.21 MB / 72.21 MB [======================================================] 3s
Copying blob sha256:8f1daf29626156672bd5d638cad27217925cbfd628ca49e8a712d464edbaad37
 1.20 KB / 1.20 KB [========================================================] 0s
Copying config sha256:fd17bcfba895e4cc3f5cdcad66c868898e53a887835e1ea91e3bd35f105289b6
 6.19 KB / 6.19 KB [========================================================] 0s
Writing manifest to image destination
Storing signatures
fd17bcfba895e4cc3f5cdcad66c868898e53a887835e1ea91e3bd35f105289b6

# podman run -d brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/rhel7:7.6 sleep 1000
3177030dc86ebd0dda9b022b0881595772de149d13672b5fc7660b81c8bf8db0

# podman stop 3177030dc86e
3177030dc86ebd0dda9b022b0881595772de149d13672b5fc7660b81c8bf8db0

# podman rm 3177030dc86e
ERRO[0000] devmapper: Error unmounting device 42e1d50bad86be0231a1b27c5259fa93ebd3546916c44346bfc356258688890b: invalid argument 
failed to delete container 3177030dc86ebd0dda9b022b0881595772de149d13672b5fc7660b81c8bf8db0: error removing container 3177030dc86ebd0dda9b022b0881595772de149d13672b5fc7660b81c8bf8db0 root filesystem: invalid argument

Version-Release number of selected component (if applicable):
podman-0.7.3-1.git0791210.el7.x86_64
RHEL-7.6-Snapshot-1.0

How reproducible:
always
Expected results:


Additional info:

Comment 1 Qian Cai 2018-08-30 15:25:43 UTC
It also hangs while deleting a container. This only happens when using devicemapper.

# podman rm httpd
time="2018-08-30T11:22:03-04:00" level=error msg="error joining network namespace for container 2452ad78a5a2b763e03429c1e10efda88633dac8674b71eac3979edb53691052"

Comment 2 Nalin Dahyabhai 2018-08-30 17:36:36 UTC
Can you paste the output of 'lvm pvscan' and 'lvm config' from the host?  Probably a dumb question, but /dev/vdb is a disk that's attached to the VM, right?

Comment 3 Qian Cai 2018-08-30 17:40:33 UTC
Yes, /dev/vdb is an extra block disk attached to the VM.

# lvm pvscan
  PV /dev/vdb   VG storage         lvm2 [<10.00 GiB / 312.00 MiB free]
  Total: 1 [<10.00 GiB] / in use: 1 [<10.00 GiB] / in no VG: 0 [0   ]

# lvm config
config {
	checks=1
	abort_on_errors=0
	profile_dir="/etc/lvm/profile"
}
local {
}
dmeventd {
	mirror_library="libdevmapper-event-lvm2mirror.so"
	snapshot_library="libdevmapper-event-lvm2snapshot.so"
	thin_library="libdevmapper-event-lvm2thin.so"
}
activation {
	checks=0
	udev_sync=1
	udev_rules=1
	verify_udev_operations=0
	retry_deactivation=1
	missing_stripe_filler="error"
	use_linear_target=1
	reserved_stack=64
	reserved_memory=8192
	process_priority=-18
	raid_region_size=2048
	readahead="auto"
	raid_fault_policy="warn"
	mirror_image_fault_policy="remove"
	mirror_log_fault_policy="allocate"
	snapshot_autoextend_threshold=100
	snapshot_autoextend_percent=20
	thin_pool_autoextend_threshold=100
	thin_pool_autoextend_percent=20
	use_mlockall=0
	monitoring=1
	polling_interval=15
	activation_mode="degraded"
}
global {
	umask=63
	test=0
	units="r"
	si_unit_consistency=1
	suffix=1
	activation=1
	proc="/proc"
	etc="/etc"
	locking_type=1
	wait_for_locks=1
	fallback_to_clustered_locking=1
	fallback_to_local_locking=1
	locking_dir="/run/lock/lvm"
	prioritise_write_locks=1
	abort_on_internal_errors=0
	metadata_read_only=0
	mirror_segtype_default="raid1"
	raid10_segtype_default="raid10"
	sparse_segtype_default="thin"
	use_lvmetad=1
	use_lvmlockd=0
	system_id_source="none"
	use_lvmpolld=1
	notify_dbus=1
}
shell {
	history_size=100
}
backup {
	backup=1
	backup_dir="/etc/lvm/backup"
	archive=1
	archive_dir="/etc/lvm/archive"
	retain_min=10
	retain_days=30
}
log {
	verbose=0
	silent=0
	syslog=1
	overwrite=0
	level=0
	indent=1
	command_names=0
	prefix="  "
	activation=0
	debug_classes=["memory","devices","io","activation","allocation","lvmetad","metadata","cache","locking","lvmpolld","dbus"]
}
allocation {
	maximise_cling=1
	use_blkid_wiping=1
	wipe_signatures_when_zeroing_new_lvs=1
	mirror_logs_require_separate_pvs=0
	cache_pool_metadata_require_separate_pvs=0
	thin_pool_metadata_require_separate_pvs=0
}
devices {
	dir="/dev"
	scan="/dev"
	obtain_device_list_from_udev=1
	external_device_info_source="none"
	preferred_names=["^/dev/mpath/","^/dev/mapper/mpath","^/dev/[hs]d"]
	cache_dir="/etc/lvm/cache"
	cache_file_prefix=""
	write_cache_state=1
	sysfs_scan=1
	multipath_component_detection=1
	md_component_detection=1
	fw_raid_component_detection=0
	md_chunk_alignment=1
	data_alignment_detection=1
	data_alignment=0
	data_alignment_offset_detection=1
	ignore_suspended_devices=0
	ignore_lvm_mirrors=1
	disable_after_error_count=0
	require_restorefile_with_uuid=1
	pv_min_size=2048
	issue_discards=0
	allow_changes_with_duplicate_pvs=1
}

Comment 4 Mrunal Patel 2018-09-03 14:16:07 UTC
This should not be a blocker as we primarily support overlayfs for podman.

Comment 6 Qian Cai 2018-09-04 13:11:01 UTC
Still broken in podman-0.8.4-3.git9f9b8cf.el7

# podman pull brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/rhel7:7.6
time="2018-09-04T09:06:22-04:00" level=error msg="File descriptor 6 (/dev/mapper/control) leaked on pvdisplay invocation. Parent PID 26543: podman
  Failed to find physical volume "/dev/loop0".
" error="exit status 5" 
time="2018-09-04T09:06:22-04:00" level=error error="exit status 2" 

# podman rm httpd
time="2018-09-04T09:08:06-04:00" level=error msg="devmapper: Error unmounting device 983d636e98f6ab3c184c7f29072f2692caa6e018543bdcb2bb385af183e58e18: invalid argument" 
failed to delete container b43176c655d4a84e835a76a34c7bfc385786f51b9fae3361acc31cb6a03b9def: error removing container b43176c655d4a84e835a76a34c7bfc385786f51b9fae3361acc31cb6a03b9def root filesystem: invalid argument

Comment 9 Liu Jing 2018-12-25 02:43:42 UTC
Still broken in podman-0.10.1.3-5

# cat /etc/containers/storage.conf
driver = "devicemapper"
directlvm_device = "/dev/sdb"
directlvm_device_force = "True"

# podman info
ERRO[0000] Failed to GetDriver graph devicemapper /var/lib/containers/storage 
could not get runtime: failed to GetDriver graph devicemapper /var/lib/containers/storage: driver not supported

Comment 10 Matthew Heon 2019-01-02 14:23:57 UTC
I don't believe we have any intention of supporting this. Devicemapper has several fundamental issues that prevent it from being used from multiple processes, which would be very difficult to fix. Given that we're encouraging the use of overlayfs instead, I don't think we're going to be making the substantial time investment required to fix this one, so devicemapper will remain disabled.
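Since overlayfs is the recommended driver, the practical remedy for anyone hitting this is to switch the storage driver. A minimal sketch (the path is the standard config location; the backup step and the warning about wiping old state are mine, since layers created under the old driver become inaccessible):

```shell
# Switch an existing storage.conf from devicemapper to overlay.
# WARNING: images/containers created under the old driver will no longer
# be visible; remove /var/lib/containers/storage first if starting fresh.
conf=/etc/containers/storage.conf
cp "$conf" "$conf.bak"                              # keep a backup of the old config
sed -i 's/^driver = .*/driver = "overlay"/' "$conf" # flip the driver line in place
grep '^driver' "$conf"                              # confirm the new setting
```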

Comment 11 Daniel Walsh 2019-01-02 15:14:57 UTC
Actually, Nalin had some ideas on how to fix devicemapper, although our long-term goal has always been to support an LVM driver.

The reason I would want an LVM/devicemapper driver is to better support the use of Kata Containers.

Comment 12 Daniel Walsh 2019-01-10 20:26:31 UTC
Nalin can you look into this?

Comment 14 Daniel Walsh 2019-07-25 16:19:57 UTC
I would like to see this fixed eventually.

Comment 15 Daniel Walsh 2019-08-14 10:46:13 UTC
Still in the backlog.

