Bug 2091795

Summary: thin_pool_zero is commented out in lvm.conf which can be confusing
Product: [Red Hat Storage] Red Hat OpenShift Data Foundation
Reporter: Shay Rozen <srozen>
Component: lvm-operator
Assignee: N Balachandran <nibalach>
Status: CLOSED DUPLICATE
QA Contact: Shay Rozen <srozen>
Severity: high
Docs Contact:
Priority: unspecified
Version: 4.11
CC: jolmomar, lgangava, muagarwa, ocs-bugs, odf-bz-bot, sapillai
Target Milestone: ---   
Target Release: ---   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2022-06-01 10:21:21 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Shay Rozen 2022-05-31 04:52:29 UTC
Description of problem (please be as detailed as possible and provide log
snippets):
When checking /etc/lvm/lvm.conf or the output of "lvm lvmconfig", thin_pool_zero is not enabled.


Version of all relevant components (if applicable):
ODF 4.11

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
Data erasure (zeroing of newly provisioned thin pool blocks) is not enabled.

Is there any workaround available to the best of your knowledge?
Set thin_pool_zero manually in /etc/lvm/lvm.conf and restart lvmd, as sketched below.
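
A minimal sketch of that workaround, assuming direct shell access to the node; how lvmd is restarted under the LVM operator (e.g. restarting the pod that runs it) is an assumption, not taken from this report:

$ sudo vi /etc/lvm/lvm.conf
    # in the allocation section, uncomment/set:  thin_pool_zero = 1
$ sudo lvm lvmconfig --type full allocation/thin_pool_zero
    # verify the effective value is now 1 (--type full includes compiled-in defaults)
$ # restart lvmd so the change takes effect (deployment-specific; under the LVM
$ # operator this likely means restarting the pod that runs lvmd - an assumption)
$ # optionally, enable zeroing on an already-created thin pool (names are placeholders):
$ sudo lvchange --zero y <vg_name>/<thin_pool_lv>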

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
1

Can this issue reproducible?
yes

Can this issue reproduce from the UI?
Not relevant.

If this is a regression, please provide more details to justify this:
No

Steps to Reproduce:
1. Install SNO and install LVMO 4.11.
2. Log in to the node and check thin_pool_zero in /etc/lvm/lvm.conf (see the check commands below).
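
For step 2, the following commands illustrate one way to check the setting; the grep pattern assumes the option appears only as a commented-out line in lvm.conf, per the bug summary:

$ grep thin_pool_zero /etc/lvm/lvm.conf
    # shows only a commented-out entry when the option is not set
$ sudo lvm lvmconfig --type full allocation/thin_pool_zero
    # prints the effective value, including compiled-in defaults
$ sudo lvm lvmconfig
    # the default ("current") view lists only values set in lvm.conf, which is
    # why thin_pool_zero does not appear in the dump in "Additional info" below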



Actual results:
thin_pool_zero is not enabled.


Expected results:
thin_pool_zero should be enabled.

Additional info:
[core@control-plane-0 ~]$ sudo lvm lvmconfig
config {
	checks=1
	abort_on_errors=0
	profile_dir="/etc/lvm/profile"
}
local {
}
dmeventd {
}
activation {
	checks=0
	udev_sync=1
	udev_rules=1
	retry_deactivation=1
	missing_stripe_filler="error"
	raid_region_size=2048
	raid_fault_policy="warn"
	mirror_image_fault_policy="remove"
	mirror_log_fault_policy="allocate"
	snapshot_autoextend_threshold=100
	snapshot_autoextend_percent=20
	thin_pool_autoextend_threshold=100
	thin_pool_autoextend_percent=20
	monitoring=1
	activation_mode="degraded"
}
global {
	umask=63
	test=0
	units="r"
	si_unit_consistency=1
	suffix=1
	activation=1
	proc="/proc"
	etc="/etc"
	wait_for_locks=1
	locking_dir="/run/lock/lvm"
	prioritise_write_locks=1
	abort_on_internal_errors=0
	metadata_read_only=0
	mirror_segtype_default="raid1"
	raid10_segtype_default="raid10"
	sparse_segtype_default="thin"
	use_lvmlockd=0
	system_id_source="none"
	use_lvmpolld=1
	notify_dbus=1
}
shell {
	history_size=100
}
backup {
	backup=1
	backup_dir="/etc/lvm/backup"
	archive=1
	archive_dir="/etc/lvm/archive"
	retain_min=10
	retain_days=30
}
log {
	verbose=0
	silent=0
	syslog=1
	overwrite=0
	level=0
	command_names=0
	prefix="  "
	activation=0
	debug_classes=["memory","devices","io","activation","allocation","metadata","cache","locking","lvmpolld","dbus"]
}
allocation {
	maximise_cling=1
	use_blkid_wiping=1
	wipe_signatures_when_zeroing_new_lvs=1
	mirror_logs_require_separate_pvs=0
}
devices {
	dir="/dev"
	scan="/dev"
	obtain_device_list_from_udev=1
	external_device_info_source="none"
	sysfs_scan=1
	scan_lvs=0
	multipath_component_detection=1
	md_component_detection=1
	fw_raid_component_detection=0
	md_chunk_alignment=1
	data_alignment_detection=1
	data_alignment=0
	data_alignment_offset_detection=1
	ignore_suspended_devices=0
	ignore_lvm_mirrors=1
	require_restorefile_with_uuid=1
	pv_min_size=2048
	issue_discards=0
	allow_changes_with_duplicate_pvs=0
	allow_mixed_block_sizes=0
}

Comment 3 Shay Rozen 2022-06-01 10:21:21 UTC

*** This bug has been marked as a duplicate of bug 2092349 ***