Description of problem: See bug 1908602. The solution for it, which we had to rush in quickly to fix breakage while migrating to CentOS Stream, was to call Cli._read_conf_file. We should use only public API when dealing with dnf. I am still not sure what's the best thing to do in this case - call update_from_etc, call Cli.configure (which is not underscore-private but is not documented, AFAICT), or ask dnf to add a new method to the API.
Marek, can you please advise us about the best course of action? Thanks!
Sorry for the delay, I had a pretty long PTO during Xmas. The problem is that variable values for substitutions (files from /etc/dnf/vars) are unfortunately not automatically loaded, and CentOS Stream uses custom variables for constructing the repo URL. The loading (as you discovered) is done via the update_from_etc() method of the Substitutions class. So all you should need is adding the `base.conf.substitutions.update_from_etc(base.conf.installroot)` call. I do not think you should even instantiate the Cli class - is there any particular reason for doing so? This class is meant to be used by the dnf command line interface, and API users should not rely on it. So your _createBase() method could look like this:

    def _createBase(self, offline=False):
        base = dnf.Base()
        # This avoids DNF trying to remove packages that were not touched by
        # its own transaction when doing a rollback.
        base.conf.clean_requirements_on_remove = False
        # This causes DNF to either use the highest available version or fail.
        # When it fails, it should show the reasons. Without this, it will
        # simply ignore the unsatisfied dependencies of the highest version,
        # and either install an older version or do nothing.
        base.conf.best = True
        base.conf.substitutions.update_from_etc(base.conf.installroot)
        base.init_plugins(disabled_glob=self._disabledPlugins)
        base.read_all_repos()
        base.repos.all().set_progress_bar(self._MyDownloadProgress(self._sink))
        # dnf does not keep packages for offline usage
        # if offline:
        #     base.repos.all().md_only_cached = True
        base.fill_sack()
        base.read_comps()
        return base

There still remains one problem - the directories with variables could be altered by a 'varsdir=/the/path' line in the /etc/dnf/dnf.conf file. Unfortunately the only way to get the actual value of `varsdir` is using another private function:

    base.conf.read(priority=dnf.conf.PRIO_MAINCONFIG)
    varsdir = base.conf._get_value('varsdir')
    base.conf.substitutions.update_from_etc(base.conf.installroot, varsdir=varsdir)
Sorry for the misleading info - of course there is an easy way to read a value from the configuration - simply accessing an attribute of the conf object:

    base = dnf.Base()
    # read the configuration from /etc/dnf/dnf.conf
    base.conf.read(priority=dnf.conf.PRIO_MAINCONFIG)
    # update variable substitutions from /etc/dnf/vars
    # (or other location configured via varsdir)
    base.conf.substitutions.update_from_etc(base.conf.installroot,
                                            varsdir=base.conf.varsdir)
    base.init_plugins(disabled_glob=self._disabledPlugins)
    base.read_all_repos()
    ...
(In reply to Marek Blaha from comment #3)
> Sorry for misleading info - of course there is an easy way how to read a
> value from the configuration - simple accessing an attribute of the conf
> object:
>
> base = dnf.Base()
> # read the configuration from /etc/dnf/dnf.conf
> base.conf.read(priority=dnf.conf.PRIO_MAINCONFIG)
> # update variable substitutions from /etc/dnf/vars (or other location
> configured via varsdir)
> base.conf.substitutions.update_from_etc(base.conf.installroot,
> varsdir=base.conf.varsdir)
> base.init_plugins(disabled_glob=self._disabledPlugins)
> base.read_all_repos()

This fails for me [1][2]:

2021-01-10 13:48:18,513+0000 ERROR otopi.plugins.otopi.packagers.dnfpackager dnfpackager.error:84 DNF Locklist not set
2021-01-10 13:48:18,514+0000 DEBUG otopi.context context._executeMethod:145 method exception
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/otopi/context.py", line 132, in _executeMethod
    method['method']()
  File "/usr/share/otopi/plugins/otopi/packagers/dnfpackager.py", line 205, in _setup
    with self._minidnf.transaction():
  File "/usr/lib/python3.6/site-packages/otopi/minidnf.py", line 289, in __enter__
    self._managed.beginTransaction()
  File "/usr/lib/python3.6/site-packages/otopi/minidnf.py", line 565, in beginTransaction
    self._base = self._createBase()
  File "/usr/lib/python3.6/site-packages/otopi/minidnf.py", line 354, in _createBase
    base.fill_sack()
  File "/usr/lib/python3.6/site-packages/dnf/base.py", line 422, in fill_sack
    self._plugins.run_sack()
  File "/usr/lib/python3.6/site-packages/dnf/plugin.py", line 155, in run_sack
    self._caller('sack')
  File "/usr/lib/python3.6/site-packages/dnf/plugin.py", line 104, in _caller
    getattr(plugin, method)()
  File "/usr/lib/python3.6/site-packages/dnf-plugins/versionlock.py", line 85, in sack
    for pat in _read_locklist():
  File "/usr/lib/python3.6/site-packages/dnf-plugins/versionlock.py", line 215, in _read_locklist
    raise dnf.exceptions.Error(NO_LOCKLIST)
dnf.exceptions.Error: Locklist not set

So it seems like this is not enough to also read e.g. /etc/dnf/plugins/versionlock.conf. Any idea? Feel free to reply here, or on gerrit. For the latter, see also [3]. Thanks!

[1] https://gerrit.ovirt.org/c/otopi/+/112911
[2] https://jenkins.ovirt.org/job/otopi_standard-check-patch/348/artifact/check-patch.el8.x86_64/logs/otopi-packager-20210110134815-ivsm7m.log
[3] https://ovirt.org/develop/dev-process/working-with-gerrit.html
I can reproduce your issue by installing the versionlock DNF plugin. The thing is that dnf plugins are initialized, but they are not configured. You need to run two more API calls - base.pre_configure_plugins() before the repositories are configured and base.configure_plugins() after that:

    base = dnf.Base()
    # read the configuration from /etc/dnf/dnf.conf
    base.conf.read(priority=dnf.conf.PRIO_MAINCONFIG)
    base.conf.clean_requirements_on_remove = False
    base.conf.best = True
    # update variable substitutions from /etc/dnf/vars
    # (or other location configured via varsdir)
    base.conf.substitutions.update_from_etc(base.conf.installroot,
                                            varsdir=base.conf.varsdir)
    base.init_plugins(disabled_glob=self._disabledPlugins)
    base.pre_configure_plugins()
    base.read_all_repos()
    base.configure_plugins()
    base.fill_sack()
    base.read_comps()

Hope this helps.
(In reply to Marek Blaha from comment #5)
> I can reproduce your issue by installing the versionlock DNF plugin.
> The thing is that dnf plugins are initialized, but they are not configured.
> You need to run two more API calls - base.pre_configure_plugins() before the
> repositories are configured and base.configure_plugins() after that:
>
> base = dnf.Base()
> # read the configuration from /etc/dnf/dnf.conf
> base.conf.read(priority=dnf.conf.PRIO_MAINCONFIG)
> base.conf.clean_requirements_on_remove = False
> base.conf.best = True
> # update variable substitutions from /etc/dnf/vars (or other location
> configured via varsdir)
> base.conf.substitutions.update_from_etc(base.conf.installroot,
> varsdir=base.conf.varsdir)
> base.init_plugins(disabled_glob=self._disabledPlugins)
> base.pre_configure_plugins()
> base.read_all_repos()
> base.configure_plugins()
> base.fill_sack()
> base.read_comps()
>
> Hope this helps.

It does, thanks!
Marek, it's not enough :-(.

The current code/patch:

https://github.com/oVirt/otopi/commit/2fc16ac947f329eacdbea63da6beac268f429198

With this, we got bug 1919803. What happens is that the relevant code calls minidnf with:

    disabledPlugins=('versionlock',)

But it seems they are not disabled - we do not see updates of packages that are in versionlock. So I tend to guess this is due to the change for the current bug. Any idea? Thanks!
*** Bug 1919803 has been marked as a duplicate of this bug. ***
(In reply to Yedidyah Bar David from comment #7)
> Marek, it's not enough :-(.
>
> The current code/patch:
>
> https://github.com/oVirt/otopi/commit/2fc16ac947f329eacdbea63da6beac268f429198
>
> With this, we got bug 1919803. What happens is that the relevant code calls
> minidnf with:
>
> disabledPlugins=('versionlock',)
>
> But it seems they are not disabled - we do not see updates of packages that
> are in versionlock.
> So I tend to guess this is due to the change for current bug. Any idea?
> Thanks!

Marek - this seems to fix:

https://gerrit.ovirt.org/113151

This does not sound to me like intended behavior. OK to merge? Do you want a bug on dnf for this? Something else? Thanks!
Target release should be placed once a package build is known to fix an issue. Since this bug is not modified, the target version has been reset. Please use target milestone to plan a fix for an oVirt release.
(In reply to Yedidyah Bar David from comment #9)
> Marek - this seems to fix:
>
> https://gerrit.ovirt.org/113151
>
> This does not sound to me like intended behavior. OK to merge? Do you want a
> bug on dnf for this? Something else? Thanks!

To provide some more context, after spending quite some time adding debug logs and looking at them:

engine-setup (using otopi) instantiates dnf several times. Some of them with no disabled plugins, others with 'versionlock'. The log line for 'Loaded plugins' shows that versionlock is "always" included, despite _get_plugins_files not loading them again. This seems to be because _plugin_classes returns 'Plugin.__subclasses__()', which is not cleared between instantiations (so it seems). Didn't investigate further to see how this actually affects behavior.

Also: the behavior being dependent on cli being passed seems to be in the versionlock plugin itself - it has (also):

    def locking_enabled(self):
        if self.cli is None:
            enabled = True  # loaded via the api, not called by cli
        ...

I also realized it's related simply by carefully reading the first patch for the current bug - before it, we also passed cli.
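The "not cleared between instantiations" behavior is a property of Python itself, not just of dnf. A minimal standalone sketch (plain Python, not dnf code - `Plugin` and `load_plugin` here are illustrative stand-ins) shows why a registry built on `__subclasses__()` only ever accumulates:

```python
class Plugin:
    """Stand-in for dnf's plugin base class."""

loaded = []  # keeps strong references, as a real module import would

def load_plugin(name):
    # Simulates loading a plugin module: merely defining a subclass
    # registers it with the base class for the interpreter's lifetime.
    cls = type(name, (Plugin,), {})
    loaded.append(cls)
    return cls

load_plugin('versionlock')
first = [c.__name__ for c in Plugin.__subclasses__()]
print(first)  # ['versionlock']

# A second "instantiation" that loads a different plugin set still sees
# the previously registered subclass - nothing unregisters it.
load_plugin('needs_restarting')
second = [c.__name__ for c in Plugin.__subclasses__()]
print(second)  # both names are now registered
```

So any code that derives "the loaded plugins" from `Plugin.__subclasses__()` will see the union of everything ever loaded in the process, which matches the symptom described above.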
The problem is that the dnf plugins were designed with the dnf CLI workflow in mind - you set up the Base object, do what you need to do and then the program ends. Unfortunately this does not work well enough when you are re-initializing dnf several times.

Can you please summarize what your use-case is? We might be able to suggest some workarounds if we know what you are trying to achieve.

Regarding your latest patch - I'm afraid the problem still exists:

1. If you do not initialize the Cli() object, the versionlock filtering is always active and you rely on the plugin not being loaded at all (and here is the unfortunately persistent Plugin.__subclasses__ issue). So you cannot disable versionlock.

2. OTOH if you do initialize the Cli() class the way you did, the default values for cli.demands are the following:

    demands.plugin_filtering_enabled = None
    demands.resolving = False

Thus the locking_enabled() method of the versionlock plugin will always return False, versionlock filtering is always disabled, and you cannot enable it.

There might be a (kind of clumsy) workaround of setting cli.demands.plugin_filtering_enabled to True/False according to the presence of the versionlock plugin in disabled_plugins.
(In reply to Marek Blaha from comment #12)
> The problem is that the dnf plugins were designed with the dnf CLI workflow
> in mind - you setup the Base object, do what you need to do and than the
> program ends. Unfortunately this does not work well enough when you are
> re-initializing the dnf several times.
>
> Can you please summarize what is your use-case? We might be able to suggest
> you some workarounds if we know what are you trying to achieve.

otopi is a small framework for writing setup/install programs. One of its plugins is a "dnf packager", which allows such programs to also do package management (remove, install, upgrade, etc.). The main user (program) relevant to us for this bug is engine-setup, a utility for setting up and/or upgrading the oVirt engine. During its run, it might do several different relevant actions - some just meant to get information ("Do we have an update for these packages?"), others to also change stuff (mainly update).

The engine itself is closely tied to its database, and we want to prevent updating the engine package without running engine-setup (which also updates the db schema). So during engine-setup, we also add relevant packages to versionlock.list, and on upgrades, we disable it so that we can upgrade.

> Regarding your latest patch - I'm afraid the problem still exists:
>
> 1. if you do not initialize the Cli() object the versionlock filtering is
> always active and you rely on that the plugin is not loaded at all (and here
> is the unfortunately persistent Plugin.__subclasses__ issue). So you cannot
> disable versionlock
>
> 2. OTOH if you do initialize Cli() class the way you did, the default values
> for cli.demands are following:
> demands.plugin_filtering_enabled = None
> demands.resolving = False
> Thus locking_enabled() method of the versionlock plugin will always return
> False and versionlock filtering is always disabled and you cannot enable it.

Not sure I follow, but this does work.
My question is/was whether this is by design (which I find weird) or by mistake (meaning it might break us if/when this mistake is fixed). If the latter, please suggest a proper fix. If this requires a change in dnf (or core plugins), that's ok - the current patch works for now.

BTW, see also bug 1542492 comment 1 about base._plugins._unload. I didn't yet open a bug about this.

> There might be (kind of clumsy) workaround on setting
> cli.demands.plugin_filtering_enabled to True/False according to presence of
> versionlock plugin in disabled_plugins.
(In reply to Yedidyah Bar David from comment #13)
> The engine itself is closely tied to its database, and we want to prevent
> updating the engine package without running engine-setup (which also updates
> the db schema). So during engine-setup, we also add relevant packages to
> versionlock.list, and on upgrades, we disable it so that we can upgrade.

Did you consider using excludes instead of the versionlock plugin? Excludes are designed specifically for use cases such as yours. So instead of plugin initialization and setting up the versionlock.list file you can use something like:

    base.conf.exclude_pkgs(['engine-setup', 'ovirt'])

See https://dnf.readthedocs.io/en/latest/api_conf.html#dnf.conf.Conf.exclude_pkgs.
(In reply to Marek Blaha from comment #14)
>
> Did you consider using excludes instead of the versionlock plugin? Excludes

I wasn't involved in the decision to use versionlock. It was taken ~10 years ago, in the very beginning of RHV/oVirt (in the patch for bug 695324) (we didn't use gerrit then, so that's a bit hard to find). I do not think this will change.

> are designed specifically for such use cases as yours. So instead of plugin
> initialization and setting up the versionlock.list file you can use
> something like:
>
> base.conf.exclude_pkgs(['engine-setup', 'ovirt'])
>
> See
> https://dnf.readthedocs.io/en/latest/api_conf.html#dnf.conf.Conf.exclude_pkgs

Not sure what exactly you suggest here. Perhaps you meant includepkgs? I think this might work, but it sounds harder to maintain over time and more fragile (as it's linked to a specific repo, IIUC). The only advantage is not needing an "extra" plugin.
If you want to update all packages on a system except for your `engine` package you can do it this way (using the dnf API):

    base = dnf.Base()
    base.conf.exclude_pkgs(['engine'])
    base.read_all_repos()
    base.fill_sack()
    base.upgrade_all()
    base.resolve()
    base.download_packages(base.transaction.install_set)
    base.do_transaction()

Includepkgs are less used - they could be useful on a repository level. Let's say you want to apply all upgrades from the `updates` repo and only upgrades for selected packages from the `updates-testing` repo. Then you might set `includepkgs=pkg1,pkg2,pkg3` in the `updates-testing` repo configuration. But this doesn't seem like your use-case.

excludepkgs=pkg1,pkg2 means that packages `pkg1` and `pkg2` are not used for any rpm transaction. Dnf "does not see" them and will not touch them. This setting can be used globally (either in /etc/dnf/dnf.conf or via an API call as in the example) or/and only for a specific repository in the /etc/yum.repos.d/*.repo configuration.

includepkgs=pkg1,pkg2 means that from the given repo only `pkg1` and `pkg2` (and no other packages) are visible.
(In reply to Marek Blaha from comment #16)
> If you want to update all packages on a system except for your `engine`
> package you can do it this way (using dnf API):

It's the other way around. I want to _prevent_ users from updating the engine package. Inside engine-setup, I do want to update it (and perhaps others). So I thought what you meant is something like:

1. User adds ovirt repos (which include neither excludepkgs nor includepkgs options). User can run 'dnf update' and this updates everything.
2. User runs engine-setup. This configures the relevant repos to have excludepkgs=engine.
3. User runs 'dnf update'. This does not update the engine, even if there are updates, because of excludepkgs.
4. User runs engine-setup. engine-setup sees that there is an update to the engine, so (temporarily?) sets excludepkgs to an empty list, updates, then restores excludepkgs.

Is it so?
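If steps 2 and 4 above were implemented literally as repo-file edits, it could look roughly like this sketch (the repo id, file name, and helper are made up for illustration; it only exercises the excludepkgs= key with the standard configparser module):

```python
import configparser
import os
import tempfile

def set_excludepkgs(repo_path, repo_id, packages):
    # Rewrite (or drop) the excludepkgs= line of one repo section:
    # a non-empty list locks the packages (step 2), an empty list
    # unlocks them again (step 4).
    cfg = configparser.ConfigParser()
    cfg.read(repo_path)
    if packages:
        cfg[repo_id]['excludepkgs'] = ','.join(packages)
    else:
        cfg[repo_id].pop('excludepkgs', None)
    with open(repo_path, 'w') as f:
        cfg.write(f)

# Demo against a throwaway repo file (hypothetical id/baseurl).
path = os.path.join(tempfile.mkdtemp(), 'ovirt-demo.repo')
with open(path, 'w') as f:
    f.write('[ovirt]\nname=oVirt demo\nbaseurl=https://example.com/repo\n')

set_excludepkgs(path, 'ovirt', ['engine'])  # step 2: lock
cfg = configparser.ConfigParser()
cfg.read(path)
print(cfg['ovirt']['excludepkgs'])  # engine

set_excludepkgs(path, 'ovirt', [])  # step 4: unlock
cfg = configparser.ConfigParser()
cfg.read(path)
print('excludepkgs' in cfg['ovirt'])  # False
```

The same effect could of course be achieved purely in memory via base.conf.exclude_pkgs() as suggested earlier; the file-based variant is what would make the lock visible to a user's own 'dnf update' runs between engine-setup invocations.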
Oh, I was not aware that users are also involved. I was thinking that we were talking about some automatically generated environment where you use your tool to manage packages. But you want to set up an environment where users would not accidentally upgrade your engine. In this case using versionlock makes sense, although it sounds kind of brittle to me. The user can still disable the versionlock plugin or reconfigure it, and as a result your engine gets updated.
(In reply to Marek Blaha from comment #18)
> Oh, I was not aware that users are also involved. I was thinking that we are
> talking about some automatically generated environment where you use your
> tool to manage packages. But you want to set up environment where users
> would not accidentally upgrade your engine.
> In this case using versionlock make sense, although it sounds kind of
> brittle to me. The user still can disable the versionlock plugin or
> reconfigure it and as a result - your engine gets updated.

Correct. In reality, though, we didn't get too many reports about breakage caused by such things. We did discuss alternatives several years ago (e.g. making the engine check db content on start and verify that things are ok, or keeping also old compatibility/functionality, or making the engine upgrade the db by itself), but nothing was actually done - mainly because what we have works more-or-less ok.

So, coming back to my question in comment 9: we already merged the patch there. Do you see a problem with this? Potential/future problems? Anything you want to track anywhere (as an otopi bug, dnf bug, something else)?
I do not think there is a problem with the patch itself. But I do not see how the cli instantiation could affect filtration of disabled plugins. There is a bug in dnf and according to the code (and my tests) adding the cli parameter to the base.init_plugins() call does not work around it. The problem in dnf is that once a plugin is initialized it stays initialized regardless of the value of the disabled_glob argument. So basically the only thing that matters is the order in which plugins are initialized. If you do this:

    import dnf

    # first initialize plugins with versionlock disabled
    disabled_plugins = ('versionlock',)
    base = dnf.Base()
    base.init_plugins(disabled_glob=disabled_plugins)
    # now the versionlock plugin is not loaded

    base._plugins._unload()  # this is another ugly call of a private method

    # then initialize all plugins
    disabled_plugins = ()
    base = dnf.Base()
    base.init_plugins(disabled_glob=disabled_plugins)
    # now the versionlock plugin is loaded

everything seems to work. But if you change the order of the blocks:

    import dnf

    # first initialize all plugins
    disabled_plugins = ()
    base = dnf.Base()
    base.init_plugins(disabled_glob=disabled_plugins)
    # now the versionlock plugin is loaded

    base._plugins._unload()  # this is another ugly call of a private method

    # then try to disable the versionlock plugin
    disabled_plugins = ('versionlock',)
    base = dnf.Base()
    base.init_plugins(disabled_glob=disabled_plugins)
    # ERR - now the versionlock plugin is still loaded

And it does behave the same way even with cli initialized.

What cli instantiation actually affects is the behavior of the versionlock plugin itself. As you noticed, when cli=None the versionlock plugin is always enabled. When cli is initialized, the plugin is enabled according to the value of cli.demands.plugin_filtering_enabled (or cli.demands.resolving if the former is None). Given that you do not set any cli.demands in your code, the versionlock plugin will always be disabled (as I tried to explain in comment#12).
So I think an ugly workaround for this could be adding:

    cli.demands.plugin_filtering_enabled = 'versionlock' not in disabled_plugins

This demand is supported from dnf-plugins-core-4.0.12 and dnf-4.2.17.

I'm going to file a bug on the bad plugins behavior when dnf.Base() is initialized multiple times in one python session. ATM the only workaround that comes to my mind is running each dnf.Base instance in a separate process. But I understand that this might not be a feasible option for you.
(In reply to Marek Blaha from comment #20)
> I do not think there is a problem with the patch itself.
> But I do not see how the cli instantiation could affect filtration of
> disabled plugins.

I didn't try to fully understand either, but it does :-).

> There is a bug in dnf and according to the code (and my
> tests) adding cli parameter to base.init_plugins() call does not workaround
> it. The problem in dnf is that once plugin is initialized it stays
> initialized regardless the value of disabled_glob argument. So basically the
> only thing that matters is the order in which plugins are initialized. If
> you do this:
>
> import dnf
>
> # first initialize plugins with versionlock disabled
> disabled_plugins = ('versionlock',)
> base = dnf.Base()
> base.init_plugins(disabled_glob=disabled_plugins)
> # now the versionlock plugin is not loaded
>
> base._plugins._unload() # this is another ugly call of private method

( I know - commented about this in bug 1542492 . Still didn't file a dnf bug )

> # then initialize all plugins
> disabled_plugins = ()
> base = dnf.Base()
> base.init_plugins(disabled_glob=disabled_plugins)
> # now the versionlock plugin is loaded
>
> Everything seems to work. But if you change the order of blocks:
>
> import dnf
>
> # first initialize all plugins
> disabled_plugins = ()
> base = dnf.Base()
> base.init_plugins(disabled_glob=disabled_plugins)
> # now the versionlock plugin is loaded
>
> base._plugins._unload() # this is another ugly call of private method
>
> # then try to disable the versionlock plugin
> disabled_plugins = ('versionlock',)
> base = dnf.Base()
> base.init_plugins(disabled_glob=disabled_plugins)
> # ERR - now the versionlock plugin is still loaded
>
> And it does behave the same way even with cli initialized.
>
> What cli instantiation actually affects is the behavior of the versionlock
> plugin itself. As you noticed when cli=None the versionlock plugin is always
> enabled.
> When cli is initialized the plugin is enabled according to values
> in cli.demands.plugin_filtering_enabled (or cli.demands.resolving if the
> former one is None). Given that you do not set any cli.demands in your code,
> the versionlock plugin will always be disabled. (as I tried to explain in
> comment#12).
>
> So I think an ugly workaround for this could be adding
>
> cli.demands.plugin_filtering_enabled = 'versionlock' not in disabled_plugins
>
> This demand is supported from dnf-plugins-core-4.0.12 and dnf-4.2.17.
>
> I'm going to file bug on bad plugins behavior when the dnf.Base() is
> initialized multiple times in one python session. ATM the only workaround
> that comes to my mind is running each dnf.Base instance in separate process.
> But I understand that this might not be feasible option for you.

Not really. But please note that on each transaction end we do:

    base._plugins._unload()
    base.close()

Perhaps this is why it works for me?

You might want to search for 'base' in minidnf.py - it's not that large a file, 992 lines currently. I agree this will not provide a complete picture, because it does not include the code using it. But I think that code does very little re dnf internals (e.g. does not access base directly, IIRC).
On Red Hat Enterprise Linux release 8.3 (Ootpa) everything works just fine. I successfully deployed HE over NFS and didn't see any unusual behavior as described within this bug, e.g. "Failed to execute stage 'Environment setup': No supported package manager found in your system." during engine-setup, while updating the engine.

Tested on:
rhvm-appliance-4.4-20201117.0.el8ev.x86_64
ovirt-hosted-engine-setup-2.4.9-4.el8ev.noarch
ovirt-hosted-engine-ha-2.4.6-1.el8ev.noarch
Linux 4.18.0-240.21.1.el8_3.x86_64 #1 SMP Wed Mar 17 11:34:58 EDT 2021 x86_64 x86_64 x86_64 GNU/Linux

Engine got successfully updated from 4.4.3.12-0.1.el8ev to 4.4.5.10-0.1.el8ev.noarch, using the "engine-setup" command. "[ INFO ] Execution of setup completed successfully".
This bugzilla is included in oVirt 4.4.5 release, published on March 18th 2021. Since the problem described in this bug report should be resolved in oVirt 4.4.5 release, it has been closed with a resolution of CURRENT RELEASE. If the solution does not work for you, please open a new bug report.