In the mdadm-4.1-3.fc32 spec file, under %post, only raid-check.timer should be enabled; raid-check.service should not be enabled at boot. You don't want to perform a raid-check on every boot.

/usr/lib/systemd/system/raid-check.service does not need an [Install] section. That section is only needed if you plan to enable the service. raid-check should not run at boot, only through a timer at a specified time. Depending on the size of the RAID, doing a raid-check at boot increases boot time.

Please look at fstrim.service and fstrim.timer for comparison.
(In reply to louisgtwo from comment #0)
> mdadm-4.1-3.fc32 spec file under %post only raid-check.timer should be
> enabled at boot, raid-check.service should not be enabled at boot. You don't
> want to perform a raid-check on every boot.

Hi,

%post in a spec file means: a scriptlet that is executed just after the package is installed on the target system. So it doesn't enable raid-check.service on every boot? It just runs raid-check.service after installing the package.

> /usr/lib/systemd/system/raid-check.service does not need [Install] section.
> This is only needed if you plan to enable the service. Raid-check should not
> be able to run at boot only through a timer at a specified time. Depending
> on the size of raid doing a raid-check at boot increases boot time.

%install: a command or series of commands for copying the desired build artifacts from the %builddir (where the build happens) to the %buildroot directory (which contains the directory structure with the files to be packaged). This usually means copying files from ~/rpmbuild/BUILD to ~/rpmbuild/BUILDROOT and creating the necessary directories in ~/rpmbuild/BUILDROOT. This is only run when creating a package, not when the end user installs the package.

It looks like %install is only used to copy files. It's not used to enable services.

> Please look at fstrim.service fstrim.timer

I read these two files. Could you explain more about why you ask to see them?

Thanks
Xiao
https://docs.fedoraproject.org/en-US/packaging-guidelines/Scriptlets/#_systemd

Packages containing systemd unit files need to use scriptlets to ensure proper handling of those services. Services can either be enabled or disabled by default.

I suggest modifying mdadm.spec from

%post
%systemd_post mdmonitor.service raid-check.{service,timer}

to

%post
%systemd_post mdmonitor.service raid-check.timer

https://www.freedesktop.org/software/systemd/man/systemd.unit.html

Unit files may include an "[Install]" section, which carries installation information for the unit. This section is not interpreted by systemd(1) during runtime; it is used by the enable and disable commands of the systemctl(1) tool during installation of a unit.

We do not want to enable raid-check.service, as this would slow down booting. Removing the [Install] section from raid-check.service prevents the service from being enabled at boot.

I suggest modifying raid-check.service from

[Unit]
Description=RAID setup health check

[Service]
Type=oneshot
ExecStart=/usr/sbin/raid-check

[Install]
WantedBy=multi-user.target

to

[Unit]
Description=RAID setup health check

[Service]
Type=oneshot
ExecStart=/usr/sbin/raid-check
I think the dnf package is a better example.
This bug appears to have been reported against 'rawhide' during the Fedora 32 development cycle. Changing version to 32.
> https://docs.fedoraproject.org/en-US/packaging-guidelines/Scriptlets/#_systemd

Thanks for the doc.

> Packages containing systemd unit files need to use scriptlets to ensure
> proper handling of those services. Services can either be enabled or
> disabled by default.
>
> I suggest modifying mdadm.spec
>
> %post
> %systemd_post mdmonitor.service raid-check.{service,timer}

I checked the status of raid-check.service and raid-check.timer. They are disabled after installing mdadm:

[root@storageqe-72 ~]# systemctl status raid-check.timer
● raid-check.timer - Weekly RAID setup health check
   Loaded: loaded (/usr/lib/systemd/system/raid-check.timer; disabled; vendor preset: disabled)
   Active: inactive (dead)
  Trigger: n/a

[root@storageqe-72 ~]# systemctl status raid-check.service
● raid-check.service - RAID setup health check
   Loaded: loaded (/usr/lib/systemd/system/raid-check.service; disabled; vendor preset: disabled)
   Active: inactive (dead)

[root@storageqe-72 ~]# rpm -qa | grep mdadm
mdadm-4.1-4.fc31.x86_64

They should be enabled, right? If so, it should be a bug.

[root@storageqe-72 ~]# systemctl status mdmonitor
● mdmonitor.service - Software RAID monitoring and management
   Loaded: loaded (/usr/lib/systemd/system/mdmonitor.service; enabled; vendor preset: enabled)
   Active: inactive (dead)

mdmonitor is enabled.

> to
>
> %post
> %systemd_post mdmonitor.service raid-check.timer
>
> https://www.freedesktop.org/software/systemd/man/systemd.unit.html
>
> Unit files may include an "[Install]" section, which carries installation
> information for the unit. This section is not interpreted by systemd(1)
> during runtime; it is used by the enable and disable commands of the
> systemctl(1) tool during installation of a unit.
>
> We should not want to enable raid-check.service as this would slow down
> booting. Remove [Install] section from raid-check.service prevents this
> service from being enabled at boot.
>
> I suggest modifying raid-check.service
>
> [Unit]
> Description=RAID setup health check
>
> [Service]
> Type=oneshot
> ExecStart=/usr/sbin/raid-check
>
> [Install]
> WantedBy=multi-user.target
>
> to
>
> [Unit]
> Description=RAID setup health check
>
> [Service]
> Type=oneshot
> ExecStart=/usr/sbin/raid-check

agree++
We should only enable raid-check.timer.

raid-check.service should never be enabled, and its [Install] section should be removed.
> [root@storageqe-72 ~]# rpm -qa | grep mdadm
> mdadm-4.1-4.fc31.x86_64

We are talking about fc32, not fc31.
(In reply to louisgtwo from comment #6)
> We should only enable raid-check.timer
>
> raid-check.service should never be enabled and the [Install] section should
> be removed.

Agree. I'll remove raid-check.service.

As for the disabled problem, I'm just curious: %systemd_post should enable them, but in fact they are disabled. I'll check again after removing raid-check.service.
(In reply to louisgtwo from comment #7)
> > [root@storageqe-72 ~]# rpm -qa | grep mdadm
> > mdadm-4.1-4.fc31.x86_64
>
> We are talking about fc32 not fc31

Sorry, fc31 has the same problem too. I'll fix this in fc31 as well.
> For the disabled problem, I'm just curious. Because %systemd_post should enable them.
> But in fact, they are disabled. I'll check again after removing raid-check.service

%systemd_post mdmonitor.service raid-check.{service,timer}

Maybe %systemd_post does not like the { } brace expansion.
Please take a look at bug 855372.

/usr/lib/systemd/system-preset/90-default.preset

# https://bugzilla.redhat.com/show_bug.cgi?id=855372
enable mdmonitor.service
enable mdmonitor-takeover.service

Maybe you need to put raid-check.timer in this file?
https://docs.fedoraproject.org/en-US/packaging-guidelines/Scriptlets/#_systemd

If your package includes one or more systemd units that need to be enabled by default on package installation, they MUST be covered by the Fedora preset policy.

https://docs.fedoraproject.org/en-US/packaging-guidelines/DefaultServices/

You should add 'enable raid-check.timer' to /usr/lib/systemd/system-preset/90-default.preset.
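For illustration, the suggested preset change amounts to one extra line in 90-default.preset (a file that, as far as I know, is shipped by fedora-release rather than by mdadm, which is why it is outside the scope of this package). Using the entries quoted earlier in this bug:

/usr/lib/systemd/system-preset/90-default.preset

# https://bugzilla.redhat.com/show_bug.cgi?id=855372
enable mdmonitor.service
enable mdmonitor-takeover.service
enable raid-check.timer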
Sorry for all the comments. I did a test build locally using the mdadm-4.1-4 package; these were my modifications:

1. Left the spec file alone. The %systemd_post is correct.

2. raid-check.service (removed the [Install] section):

[Unit]
Description=RAID setup health check

[Service]
Type=oneshot
ExecStart=/usr/sbin/raid-check

3. This is outside the scope of this package: I added 'enable raid-check.timer' to /usr/lib/systemd/system-preset/90-default.preset.

When I installed the package:

$ sudo rpm -i mdadm-4.1-4.fc31.x86_64.rpm
Created symlink /etc/systemd/system/multi-user.target.wants/mdmonitor.service → /usr/lib/systemd/system/mdmonitor.service.
Created symlink /etc/systemd/system/timers.target.wants/raid-check.timer → /usr/lib/systemd/system/raid-check.timer.

To list timers:

$ sudo systemctl list-timers --all
NEXT                         LEFT        LAST                         PASSED        UNIT                          ACTIVATES
Wed 2020-03-25 14:35:41 EDT  6min left   n/a                          n/a           dnf-makecache.timer           dnf-makecache.service
Wed 2020-03-25 14:40:41 EDT  11min left  n/a                          n/a           systemd-tmpfiles-clean.timer  systemd-tmpfiles-clean.service
Thu 2020-03-26 00:00:00 EDT  9h left     Wed 2020-03-25 08:41:02 EDT  5h 48min ago  mlocate-updatedb.timer        mlocate-updatedb.service
Thu 2020-03-26 00:00:00 EDT  9h left     Wed 2020-03-25 08:41:02 EDT  5h 48min ago  unbound-anchor.timer          unbound-anchor.service
n/a                          n/a         n/a                          n/a           raid-check.timer              raid-check.service
(In reply to louisgtwo from comment #13)
> Sorry for all the comments. I did a test build locally using the mdadm-4.1-4
> package, this was my modifications:
>
> 1. Left the specfile alone. The %systemd_post is correct.

You mean we don't need to modify as follows?

%systemd_post mdmonitor.service raid-check.{service,timer}
to
%systemd_post mdmonitor.service raid-check.timer

If so, I'm confused now. What's the usage of %systemd_post? I thought %systemd_post is used to enable a systemd service or timer.

> 2. raid-check.service (removed [Install] section).
>
> [Unit]
> Description=RAID setup health check
>
> [Service]
> Type=oneshot
> ExecStart=/usr/sbin/raid-check
>
> 3. This is outside the scope of this package. I added 'enable
> raid-check.timer' to /usr/lib/systemd/system-preset/90-default.preset

I commented out the %systemd_post line and built a new package. Then I added 'enable raid-check.timer' to 90-default.preset.

[root@dell-per620-01 ~]# systemctl status raid-check.timer
● raid-check.timer - Weekly RAID setup health check
   Loaded: loaded (/usr/lib/systemd/system/raid-check.timer; enabled; vendor preset: enabled)
   Active: inactive (dead)
  Trigger: n/a
 Triggers: ● raid-check.service

So it looks like %systemd_post doesn't have any effect. Can we remove it?

> When I installed the package
> $ sudo rpm -i mdadm-4.1-4.fc31.x86_64.rpm
> Created symlink /etc/systemd/system/multi-user.target.wants/mdmonitor.service →
> /usr/lib/systemd/system/mdmonitor.service.
> Created symlink /etc/systemd/system/timers.target.wants/raid-check.timer →
> /usr/lib/systemd/system/raid-check.timer.
>
> To list timers
> $ sudo systemctl list-timers --all
>
> NEXT LEFT LAST PASSED UNIT ACTIVATES
> Wed 2020-03-25 14:35:41 EDT 6min left n/a n/a dnf-makecache.timer dnf-makecache.service
> Wed 2020-03-25 14:40:41 EDT 11min left n/a n/a systemd-tmpfiles-clean.timer systemd-tmpfiles-clean.service
> Thu 2020-03-26 00:00:00 EDT 9h left Wed 2020-03-25 08:41:02 EDT 5h 48min ago mlocate-updatedb.timer mlocate-updatedb.service
> Thu 2020-03-26 00:00:00 EDT 9h left Wed 2020-03-25 08:41:02 EDT 5h 48min ago unbound-anchor.timer unbound-anchor.service
> n/a n/a n/a n/a raid-check.timer raid-check.service
I filed a new bug for the preset: https://bugzilla.redhat.com/show_bug.cgi?id=1817492
> You mean we don't need to modify as follows?
> %systemd_post mdmonitor.service raid-check.{service,timer}
> to
> %systemd_post mdmonitor.service raid-check.timer
> If so, I'm confused now. What's the usage of %systemd_post?
> I thought %systemd_post is used to enable a systemd service or timer.

Right, I think %systemd_post is a trigger for the presets. It does nothing unless you have a preset set in /usr/lib/systemd/system-preset/*.preset. You can remove raid-check.service from %systemd_post since you have no trigger for it in the presets. But raid-check.timer and mdmonitor.service are needed in %systemd_post for the trigger to happen.

https://bugzilla.redhat.com/show_bug.cgi?id=1817491 needs info. Hopefully you can have this squared away before final freeze.
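For reference, on Fedora of that era %systemd_post expanded to roughly the shell below (quoted from memory; check /usr/lib/rpm/macros.d/macros.systemd on an installed system for the authoritative definition):

# approximate expansion of: %systemd_post foo.service bar.timer
if [ $1 -eq 1 ] ; then
        # Initial installation
        systemctl --no-reload preset foo.service bar.timer &>/dev/null || :
fi

In other words, on first install it runs `systemctl preset` on the listed units, and `systemctl preset` enables or disables each unit according to the files in /usr/lib/systemd/system-preset/. That is why the macro appears to do nothing when no preset entry exists.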
(In reply to louisgtwo from comment #16)
> Right, I think %system_post is a trigger for the presets. It does nothing
> unless you have a preset set in /usr/lib/systemd/system-preset/*.preset. You
> can remove raid-check.service from %systemd_post since you have no trigger
> in the presets. But raid-check.timer and mdmonitor.service is needed in
> %systemd_post for the trigger to happen.

What will happen if I remove mdmonitor.service and raid-check.timer from %systemd_post too? Could you explain "trigger" in detail?

> https://bugzilla.redhat.com/show_bug.cgi?id=1817491 need info. Hopefully you
> can have this squared away before final freeze.

I gave my answer there.
By the way, I find that raid-check.timer isn't started after installing mdadm, even after adding "enable raid-check.timer" to 90-default.preset.

[root@dell-per620-01 ~]# systemctl status raid-check.timer
● raid-check.timer - Weekly RAID setup health check
   Loaded: loaded (/usr/lib/systemd/system/raid-check.timer; enabled; vendor preset: enabled)
   Active: inactive (dead)
  Trigger: n/a
 Triggers: ● raid-check.service

After a reboot, raid-check.timer starts successfully.

[root@dell-per620-01 ~]# systemctl status raid-check.timer
● raid-check.timer - Weekly RAID setup health check
   Loaded: loaded (/usr/lib/systemd/system/raid-check.timer; enabled; vendor preset: enabled)
   Active: active (waiting) since Thu 2020-03-26 11:53:59 EDT; 1min 42s ago
  Trigger: Sun 2020-03-29 01:00:00 EDT; 2 days left
 Triggers: ● raid-check.service

Do we need to add this?

diff --git a/mdadm.spec b/mdadm.spec
index 9bc4443..4d21336 100644
--- a/mdadm.spec
+++ b/mdadm.spec
@@ -67,6 +67,7 @@ install -m644 %{SOURCE6} %{buildroot}/etc/libreport/events.d
 %post
 %systemd_post mdmonitor.service raid-check.{service,timer}
+%{_bindir}/systemctl start raid-check.timer >/dev/null 2>&1 || :
 %{_bindir}/systemctl disable mdmonitor-takeover.service >/dev/null 2>&1 || :
> What will it happen if I remove mdmonitor.service and raid-check.timer from %systemd_post too?
> Could you explain "trigger" in detail?

You need %systemd_post; it is a scriptlet that triggers the presets in /usr/lib/systemd/system-preset/*.preset. %systemd_post requires arguments.

I just did another local build with no %systemd_post. Installing the package did not enable the services.

> After reboot, raid-check.timer can start successfully.

Yes, %systemd_post is there to enable the service/timer, not to start them. This is also true of mdmonitor.service.

> %post
> %systemd_post mdmonitor.service raid-check.{service,timer}
> +%{_bindir}/systemctl start raid-check.timer >/dev/null 2>&1 || :
> %{_bindir}/systemctl disable mdmonitor-takeover.service >/dev/null 2>&1 || :

This is not needed. Notice that mdmonitor.service is not started this way either.

On a side note: to be correct, if you don't want mdmonitor-takeover.service to be enabled by default, you should ask for that service to be removed from /usr/lib/systemd/system-preset/*.preset.

%{_bindir}/systemctl disable mdmonitor-takeover.service >/dev/null 2>&1 || :

is a hack.
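A quick way to see the enable/start distinction after installing the package is plain systemctl (nothing package-specific is assumed here):

# will it be pulled in at the next boot?
systemctl is-enabled raid-check.timer    # -> enabled
# is it running right now?
systemctl is-active raid-check.timer     # -> inactive, until the next boot
# an admin can start it immediately by hand if desired
systemctl start raid-check.timer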
(In reply to louisgtwo from comment #19)
> You need %systemd_post it is a scriptlet to trigger the presets in
> /usr/lib/systemd/system-preset/*.preset
>
> %systemd_post requires arguments.
>
> Just did another local build with no %systemd_post. Installing the package
> did not enable the services.

I tried this too.

Commented out %systemd_post:

 %post
-%systemd_post mdmonitor.service raid-check.timer
+#%systemd_post mdmonitor.service raid-check.timer

Added an enable line to 90-default.preset:

enable raid-check.timer

After installing the package:

[root@dell-per620-01 ~]# systemctl status raid-check.timer
● raid-check.timer - Weekly RAID setup health check
   Loaded: loaded (/usr/lib/systemd/system/raid-check.timer; enabled; vendor preset: enabled)   //enabled
   Active: inactive (dead)
  Trigger: n/a
 Triggers: ● raid-check.service

[root@dell-per620-01 ~]# systemctl status mdmonitor
● mdmonitor.service - Software RAID monitoring and management
   Loaded: loaded (/usr/lib/systemd/system/mdmonitor.service; enabled; vendor preset: enabled)
   Active: inactive (dead)

They are all enabled, and the spec file doesn't have the %systemd_post line. Am I missing something? I get different test results from you; if so, what's the difference?

So I don't understand how %systemd_post is a scriptlet that triggers the presets. Could you explain this in detail and give an example?

> > After reboot, raid-check.timer can start successfully.
>
> Yes %systemd_post is to enable service/timer not start them. This is also
> true with mdmonitor.service.
>
> > %post
> > %systemd_post mdmonitor.service raid-check.{service,timer}
> > +%{_bindir}/systemctl start raid-check.timer >/dev/null 2>&1 || :
> > %{_bindir}/systemctl disable mdmonitor-takeover.service >/dev/null 2>&1 || :
>
> This is not needed. Notice mdmonitor.service is not started this way either.

Yes, the [Install] section takes care of that, so mdmonitor and raid-check.timer can start after boot. But after installing the package, mdmonitor.service and raid-check.timer don't start automatically; we must reboot the machine. Is that the right way?

> On a side note: to be correct, if you don't want mdmonitor-takeover.service
> to be enabled by default you should ask this service be removed from
> /usr/lib/systemd/system-preset/*.preset
>
> %{_bindir}/systemctl disable mdmonitor-takeover.service >/dev/null 2>&1 || :
>
> is a hack.

It looks like the package doesn't have this unit. I'll remove this line.

Thanks
Xiao
> They are all enabled. The spec file doesn't have %systemd_post line.
> Do I miss something? I get different test results with you. If so, what's
> the differences?
>
> So I don't understand %systemd_post is a scriptlet to trigger the presets.
> Could you explain this in detail and give an example?

Are you upgrading from a previous package? They may have been enabled before; upgrading a package will not disable previous enables. What I do is remove the package with rpm -e --nodeps mdadm (mdadm is needed by libblockdev-mdraid) and then do a fresh install with rpm -i mdadm-<newversion>.

> But after installing the package, the mdmonitor.service and raid-check.timer don't start
> automatically. We must reboot the machine. Is it the right way?

Yes. You may need to ask someone at Red Hat or the maintainer of systemd.
(In reply to louisgtwo from comment #21)
> Are you upgrading from a previous package? They may have been enabled
> before. Upgrading a package will not disable previous enables. What I do is
> remove rpm -e --nodeps mdadm (mdadm is needed by libblockdev-mdraid) and do
> a new install rpm -i mdadm-<newversion> to install the package.

I used `rpm -e` without --nodeps; it didn't report a dependency error. Then I used rpm -i mdadm-<newversion>. So I got different results.

[root@dell-per620-01 ~]# rpm -e mdadm-4.1-5.fc32.x86_64
Removed /etc/systemd/system/timers.target.wants/raid-check.timer.
Removed /etc/systemd/system/multi-user.target.wants/mdmonitor.service.
[root@dell-per620-01 ~]# systemctl status raid-check.timer
Unit raid-check.timer could not be found.
[root@dell-per620-01 ~]# rpm -ivh mdadm-4.1-5.fc32.x86_64.rpm
Verifying...                          ################################# [100%]
Preparing...                          ################################# [100%]
Updating / installing...
   1:mdadm-4.1-5.fc32                 ################################# [100%]
Created symlink /etc/systemd/system/multi-user.target.wants/mdmonitor.service → /usr/lib/systemd/system/mdmonitor.service.
Created symlink /etc/systemd/system/timers.target.wants/raid-check.timer → /usr/lib/systemd/system/raid-check.timer.
[root@dell-per620-01 ~]# systemctl status raid-check.timer
● raid-check.timer - Weekly RAID setup health check
   Loaded: loaded (/usr/lib/systemd/system/raid-check.timer; enabled; vendor preset: enabled)
   Active: inactive (dead)
  Trigger: n/a
 Triggers: ● raid-check.service

This is the version I used: https://koji.fedoraproject.org/koji/taskinfo?taskID=42873759
It's just a test version to test the usage of %systemd_post.

diff --git a/mdadm.spec b/mdadm.spec
index 9bc4443..a3d3649 100644
--- a/mdadm.spec
+++ b/mdadm.spec
@@ -1,7 +1,7 @@
 Name: mdadm
 Version: 4.1
 #define subversion rc2
-Release: 4%{?subversion:.%{subversion}}%{?dist}
+Release: 5%{?subversion:.%{subversion}}%{?dist}
 Summary: The mdadm program controls Linux md devices (software RAID arrays)
 URL: http://www.kernel.org/pub/linux/utils/raid/mdadm/
 License: GPLv2+
@@ -66,7 +66,7 @@ mkdir -p %{buildroot}/etc/libreport/events.d
 install -m644 %{SOURCE6} %{buildroot}/etc/libreport/events.d

 %post
-%systemd_post mdmonitor.service raid-check.{service,timer}
+#%systemd_post mdmonitor.service raid-check.{service,timer}
> %post
> -%systemd_post mdmonitor.service raid-check.{service,timer}
> +#%systemd_post mdmonitor.service raid-check.{service,timer}

That is the problem. Do not comment this line out; delete it outright. Somehow # does nothing in %post:

%post
%{_bindir}/systemctl disable mdmonitor-takeover.service >/dev/null 2>&1 || :

# If there is any cron daemon configured, enable the systemd timer to avoid
# breaking the configuration silently when upgrading from 4.1-rc2.0.2 or
# earlier versions
(In reply to louisgtwo from comment #24)
> > %post
> > -%systemd_post mdmonitor.service raid-check.{service,timer}
> > +#%systemd_post mdmonitor.service raid-check.{service,timer}
>
> That is the problem. Do not comment this line out, delete it outright.
> Somehow # does nothing in %post

Thanks very much for pointing that out.

> %post
> %{_bindir}/systemctl disable mdmonitor-takeover.service >/dev/null 2>&1 || :
>
> # If there is any cron daemon configured, enable the systemd timer to avoid
> # breaking the configuration silently when upgrading from 4.1-rc2.0.2 or
> # earlier versions
Hi all,

This is the patch I want to use to fix this bug. Could you review it? I removed the "systemctl enable raid-check.timer" part too, because we don't enable a systemd service that way.

diff --git a/mdadm.spec b/mdadm.spec
index 9bc4443..0b167c0 100644
--- a/mdadm.spec
+++ b/mdadm.spec
@@ -1,7 +1,7 @@
 Name: mdadm
 Version: 4.1
 #define subversion rc2
-Release: 4%{?subversion:.%{subversion}}%{?dist}
+Release: 5%{?subversion:.%{subversion}}%{?dist}
 Summary: The mdadm program controls Linux md devices (software RAID arrays)
 URL: http://www.kernel.org/pub/linux/utils/raid/mdadm/
 License: GPLv2+
@@ -66,14 +66,7 @@ mkdir -p %{buildroot}/etc/libreport/events.d
 install -m644 %{SOURCE6} %{buildroot}/etc/libreport/events.d

 %post
-%systemd_post mdmonitor.service raid-check.{service,timer}
-%{_bindir}/systemctl disable mdmonitor-takeover.service >/dev/null 2>&1 || :
-
-# If there is any cron daemon configured, enable the systemd timer to avoid
-# breaking the configuration silently when upgrading from 4.1-rc2.0.2 or
-# earlier versions
-%triggerin -- mdadm < 4.1-rc2.0.2
-[ -e %{_sysconfdir}/crontab -o -e %{_sysconfdir}/anacrontab -o -e %{_sysconfdir}/fcrontab ] && %{_bindir}/systemctl enable raid-check.timer &>/dev/null || :
+%systemd_post mdmonitor.service raid-check.timer
This looks good to me. One minor detail: since we don't want to enable raid-check.service, this additional patch is needed to prevent raid-check.service from being enabled by mistake.

--- a/raid-check.service
+++ b/raid-check.service
@@ -4,6 +4,3 @@
 [Service]
 Type=oneshot
 ExecStart=/usr/sbin/raid-check
-
-[Install]
-WantedBy=multi-user.target

https://www.freedesktop.org/software/systemd/man/systemd.unit.html

[Install] Section Options

Unit files may include an "[Install]" section, which carries installation information for the unit. This section is not interpreted by systemd(1) during runtime; it is used by the enable and disable commands of the systemctl(1) tool during installation of a unit.
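For completeness, the companion raid-check.timer keeps its own [Install] section, which is what makes "enable raid-check.timer" meaningful. Reconstructed from the status output earlier in this bug (the exact OnCalendar value shipped by the package may differ), it looks roughly like:

[Unit]
Description=Weekly RAID setup health check

[Timer]
OnCalendar=Sun *-*-* 01:00:00

[Install]
WantedBy=timers.target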
With my patch from the previous comment I get this when trying to enable raid-check.service. I think this is correct, since enabling raid-check.service would do a raid-check on every boot, unless you see scenarios where admins would like to do this.

$ sudo systemctl enable raid-check.service
The unit files have no installation config (WantedBy=, RequiredBy=, Also=,
Alias= settings in the [Install] section, and DefaultInstance= for template
units). This means they are not meant to be enabled using systemctl.

Possible reasons for having this kind of units are:
• A unit may be statically enabled by being symlinked from another unit's
  .wants/ or .requires/ directory.
• A unit's purpose may be to act as a helper for some other unit which has
  a requirement dependency on it.
• A unit may be started when needed via activation (socket, path, timer,
  D-Bus, udev, scripted systemctl call, ...).
• In case of template units, the unit is meant to be enabled with some
  instance name specified.
Hi louisgtwo,

Thanks for reminding me about this. Yes, the [Install] section needs to be removed.

Thanks
Xiao
`fedpkg push` ran successfully, but `fedpkg build` failed. I filed a new issue at https://pagure.io/fedora-infrastructure/issue/8800
What is the status? Will this make it into f32?
(In reply to louisgtwo from comment #31)
> What is the status? Will this make it into f32?

It has been fixed in mdadm-4.1-5.fc32.
Yes, I've seen this package in Koji for the past week, but not in updates-testing for f32 development. Is there a hold?
I changed the state to POST to have a try.
Any update on mdadm-4.1-5.fc32 getting pushed to f32?
I'm not sure how to get a bug fix included in a Fedora release. mdadm-4.1-5.fc32 hasn't been included in f32. I set fedora_prioritized_bug to have a try, and I'll ask other people about this.
The fedora_prioritized_bug flag is for the Prioritized Bugs process: https://docs.fedoraproject.org/en-US/program_management/prioritized_bugs/

It has no effect on package updates (nor does the status of a bug in Bugzilla. Just the opposite, in fact: as updates move through the system, Bodhi will update the status of the bug appropriately).

In this case, I don't see an update in Bodhi. Xiao, did you submit an update (e.g. with `fedpkg update`) or did you just do a Koji build? If you only did a Koji build, that's why the update hasn't appeared: it doesn't exist. See: https://fedoraproject.org/wiki/Package_update_HOWTO#Later_Branched_and_stable_releases

(You may also want to apply this patch to Rawhide.)

I am clearing the fedora_prioritized_bug flag.
(In reply to Ben Cotton from comment #37)
> In this case, I don't see an update in Bodhi. Xiao, did you submit an update
> (e.g. with `fedpkg update`) or did you just do a Koji build? If you only did
> a Koji build, that's why the update hasn't appeared: it doesn't exist. See:
> https://fedoraproject.org/wiki/Package_update_HOWTO#Later_Branched_and_stable_releases

Ben, I did `fedpkg push` and `fedpkg build`. So I must run `fedpkg update` after building? If so, I guess that's the reason.

> (You may also want to apply this patch to Rawhide)
>
> I am clearing the fedora_prioritized_bug flag.

Yes, thanks for reminding me about this.
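For anyone following along, the branched-release workflow being described is roughly the following sketch; see the Package_update_HOWTO linked above for the authoritative steps:

fedpkg switch-branch f32
fedpkg push
fedpkg build
fedpkg update    # creates the Bodhi update; list this bug number so Bodhi can track and close it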
FEDORA-2020-5607de2fbc has been submitted as an update to Fedora 32. https://bodhi.fedoraproject.org/updates/FEDORA-2020-5607de2fbc
FEDORA-2020-5607de2fbc has been pushed to the Fedora 32 testing repository. In short time you'll be able to install the update with the following command: `sudo dnf upgrade --enablerepo=updates-testing --advisory=FEDORA-2020-5607de2fbc` You can provide feedback for this update here: https://bodhi.fedoraproject.org/updates/FEDORA-2020-5607de2fbc See also https://fedoraproject.org/wiki/QA:Updates_Testing for more information on how to test updates.
How about packages for f31, f30, and rawhide?
FEDORA-2020-5607de2fbc has been pushed to the Fedora 32 stable repository. If problem still persists, please make note of it in this bug report.
Hi Louis,

f30 doesn't have this problem; it was introduced in f31. So we just need to fix it in f31 and rawhide. I already filed a new bug, https://bugzilla.redhat.com/show_bug.cgi?id=1838409, for rawhide.

But I'm thinking about another problem. The patch removed this part:

 %post
-%systemd_post mdmonitor.service raid-check.{service,timer}
-%{_bindir}/systemctl disable mdmonitor-takeover.service >/dev/null 2>&1 || :
-
-# If there is any cron daemon configured, enable the systemd timer to avoid
-# breaking the configuration silently when upgrading from 4.1-rc2.0.2 or
-# earlier versions
-%triggerin -- mdadm < 4.1-rc2.0.2
-[ -e %{_sysconfdir}/crontab -o -e %{_sysconfdir}/anacrontab -o -e %{_sysconfdir}/fcrontab ] && %{_bindir}/systemctl enable raid-check.timer &>/dev/null || :
+%systemd_post mdmonitor.service raid-check.timer

If the customer is using cron and we add raid-check.timer, does it mean raid-check.timer replaces the cron job? What's the right way to avoid breaking users?

Thanks
Xiao
mdadm-4.1-3 replaced the cron job with a systemd timer; the package does not have both. So on fresh installs or upgrades, the customer will see a systemd timer instead of a cron job.

You might want to check whether /sbin/raid-check has protection against running concurrently, but since there are no bug reports about this, no worries.

Can you check the status of https://bugzilla.redhat.com/show_bug.cgi?id=1817491?

Thanks
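(For reference only: a typical guard against concurrent runs looks like the sketch below. This is illustrative, not a claim about what the shipped raid-check script actually contains.)

#!/bin/bash
# hypothetical locking wrapper around a periodic check
exec 9>/var/lock/raid-check.lock
if ! flock -n 9; then
    echo "raid-check is already running, exiting" >&2
    exit 0
fi
# ... the actual check work would go here ...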
(In reply to louisgtwo from comment #44)
> mdadm-4.1-3 replaced the cron job with a systemd timer. The package does not
> have both. So fresh installs or upgrading the customer will instead of
> seeing a cron job will see a systemd timer.

Thanks for noting this.

@XiaoNi, I didn't catch this because I didn't expect such a change to be made; it was never advertised. In the future, please announce this as a self-contained Change if there are any service or cron related changes.

I have a custom cron job that overrides the default of 1 week with 1 month (first Sunday only) due to the large RAIDs I run. While the change did not remove my override, I would have been surprised if the timer got enabled and wondered why two raid-checks were running.
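(As an aside: with the timer-based setup, the same "first Sunday of the month" schedule can be expressed without a cron job, using a drop-in override. A sketch, assuming the unit is named raid-check.timer and the installed systemd accepts date ranges in OnCalendar:)

# systemctl edit raid-check.timer
# creates /etc/systemd/system/raid-check.timer.d/override.conf containing:
[Timer]
OnCalendar=
OnCalendar=Sun *-*-1..7 01:00:00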
(In reply to Michael Cronenworth from comment #45)
> Thanks for noting this.
>
> @XiaoNi, I didn't catch this because I didn't expect such a change to be
> made; it was never advertised. In the future, please announce this as a
> self-contained Change if there are any service or cron related changes.
>
> I have a custom cron job that overrides the default of 1 week with 1 month
> (first Sunday only) due to the large RAIDs I run. While the change did not
> remove my override, I would have been surprised if the timer got enabled and
> wondered why two raid-checks were running.

Hi Michael,

This gave me a headache too. In fact, I didn't notice it at first. It was introduced by

commit 8491dee0428b45db4cabf3ddee8bb5b9c6e18f83
Author: Alejandro Domínguez <adomu>
Date:   Sun Feb 9 08:35:31 2020 +0100

    Replace raid-check cron job with systemd timer

In fact, I don't know the best way to resolve this problem. This bug only disables raid-check.service, and we fixed that. But as I mentioned in comment 43, I'm worried about the case in which you are using cron. It looks like it's better to keep it as it is now? Do you have any better suggestions?

Thanks
Xiao
Yes, the change should have been done in Rawhide development. We cannot think of all possible situations, or else this package will be full of hacks. The few customers who have a custom cron job should be aware of the change.

How can this change be added to the release notes for f33?

Louis
Hi Ben,

Do you know the process for adding a release note for f33?

By the way, could you help me check https://bugzilla.redhat.com/show_bug.cgi?id=1838409? The Fedora update was successful, https://bodhi.fedoraproject.org/updates/FEDORA-2020-e0a6867e43, but the bug state hasn't changed to ON_QA.

Thanks
Xiao
(In reply to XiaoNi from comment #48)
> Do you know the process about adding a release note for f33?

Open an issue in the Release Notes tracker: https://pagure.io/fedora-docs/release-notes/new_issue

> By the way, could you help me check
> https://bugzilla.redhat.com/show_bug.cgi?id=1838409.
> The fedora update is successful,
> https://bodhi.fedoraproject.org/updates/FEDORA-2020-e0a6867e43
> But the bug state hasn't changed to ON_QA

There's no bug associated with the update. When you create the update, you have to specify the bug(s) that it applies to. Otherwise, the tooling can't update the affected bugs. (It does not parse the changelog or any git commits for this). See, for example: https://bodhi.fedoraproject.org/updates/FEDORA-2020-9dd972289d

You'll have to manually update the state for this bug as the update moves through the system.
(In reply to Ben Cotton from comment #49)
> Open an issue in the Release Notes tracker:
> https://pagure.io/fedora-docs/release-notes/new_issue
>
> There's no bug associated with the update. When you create the update, you
> have to specify the bug(s) that it applies to. Otherwise, the tooling can't
> update the affected bugs. (It does not parse the changelog or any git
> commits for this). See, for example:
> https://bodhi.fedoraproject.org/updates/FEDORA-2020-9dd972289d
>
> You'll have to manually update the state for this bug as the update moves
> through the system.

Thanks for pointing out the problem.

You mean I just need to move the state of bug 1838409 to ON_QA or CLOSED manually?

Thanks
Xiao
(In reply to XiaoNi from comment #50)
> Thanks for pointing the problem.
>
> You mean I just need to move the state of bug 1838409 to ON_QA or CLOSED
> manually?

No. You do not touch the Bugzilla side at all. Bodhi has a place to type in a bug number. When you type in this bug number it will attach it to the Bodhi update. Bodhi will then automatically update the bug status and you do not need to do anything on the Bugzilla side.
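(If the web UI is awkward, I believe the bodhi command-line client can attach bugs to a pending/testing update as well; the exact option name is an assumption on my part, so verify with `bodhi updates edit --help`:)

# option name assumed; check --help before relying on it
bodhi updates edit FEDORA-2020-e0a6867e43 --bugs 1838409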
Hi Michael,

I moved the state to Assigned again, but I can't find the place to add a bug at https://bodhi.fedoraproject.org/updates/FEDORA-2020-e0a6867e43.

Thanks
Xiao
Sorry, I meant to move 1838409 to Assigned, not this bug.
The update you linked is not a good example.

- Rawhide updates are generated automatically.
- Updates marked as "Stable" cannot be edited.

In order to put a bug on an update, the update must either be brand new, "pending", or in "testing" to be editable.

For Rawhide-only builds you are best off manually marking the bug "CLOSED" / "RAWHIDE" and placing the package N-V-R into the "Fixed in Version" field.
Hi Michael,

Thanks for the information. I installed the latest Rawhide (f33) and ran yum install mdadm; mdadm-4.1-5.fc33.x86_64 is there. I'll close 1838409 manually as you said.

Thanks
Xiao