Bug 1892552 - varnish fails to start with default configuration (VCL compilation failed)
Summary: varnish fails to start with default configuration (VCL compilation failed)
Keywords:
Status: CLOSED DUPLICATE of bug 1896457
Alias: None
Product: Fedora
Classification: Fedora
Component: varnish
Version: 33
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Ingvar Hagelund
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-10-29 06:35 UTC by Emil Malinov
Modified: 2021-11-09 08:57 UTC (History)
CC List: 5 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2021-11-09 08:57:26 UTC
Type: Bug
Embargoed:



Description Emil Malinov 2020-10-29 06:35:27 UTC
Description of problem:


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. Clean install Fedora 33 Server
2. Install varnish (dnf install varnish)
3. Start varnish with default configuration (systemctl start varnish)


Actual results:
[root@proxy ~]# systemctl start varnish
Job for varnish.service failed because the control process exited with error code.
See "systemctl status varnish.service" and "journalctl -xe" for details.
[root@proxy ~]# systemctl status varnish.service
● varnish.service - Varnish Cache, a high-performance HTTP accelerator
     Loaded: loaded (/usr/lib/systemd/system/varnish.service; disabled; vendor preset: disabled)
     Active: failed (Result: exit-code) since Thu 2020-10-29 19:21:33 NZDT; 12s ago
    Process: 24019 ExecStart=/usr/sbin/varnishd -a :6081 -f /etc/varnish/default.vcl -s malloc,256m (code=exited, status=255/EXCEPTION)
        CPU: 232ms

Oct 29 19:21:33 proxy varnishd[24020]: lto-wrapper: fatal error: gcc returned 1 exit status
Oct 29 19:21:33 proxy varnishd[24020]: compilation terminated.
Oct 29 19:21:33 proxy varnishd[24020]: /usr/bin/ld: error: lto-wrapper failed
Oct 29 19:21:33 proxy varnishd[24020]: collect2: error: ld returned 1 exit status
Oct 29 19:21:33 proxy varnishd[24020]: Running C-compiler failed, exited with 1
Oct 29 19:21:33 proxy varnishd[24020]: VCL compilation failed
Oct 29 19:21:33 proxy systemd[1]: varnish.service: Control process exited, code=exited, status=255/EXCEPTION
Oct 29 19:21:33 proxy systemd[1]: varnish.service: Failed with result 'exit-code'.
Oct 29 19:21:33 proxy systemd[1]: varnish.service: Unit process 24020 (varnishd) remains running after unit stopped.
Oct 29 19:21:33 proxy systemd[1]: Failed to start Varnish Cache, a high-performance HTTP accelerator.



Expected results:
varnish starts with default configuration


Additional info:
Ran the same steps on Fedora 32 Server with success.

Comment 1 Ingvar Hagelund 2020-10-29 10:21:22 UTC
This smells of a selinux problem. Can you post the output of 

sudo journalctl -u varnish -xe

and the latest entries in 

journalctl SYSLOG_IDENTIFIER=setroubleshoot


On a freshly installed f33 system, I get this error (the name of the tmpfile changing for each try, of course):

SELinux is preventing lto1-wpa from map access on the file /tmp/ccTOlFtj.o.

varnishd needs to be able to map .o files in /tmp, as it compiles its VCL configuration into a binary shared object, which it then links into the varnishd process itself.

For testing purposes, you may try

# setsebool domain_can_mmap_files on
# systemctl start varnish
# systemctl status varnish
# systemctl stop varnish
# setsebool domain_can_mmap_files off

If varnishd starts with domain_can_mmap_files set to 1, then this is probably the reason. It seems the policy has changed: the boolean is the same in f32, but there it does not prevent varnishd from mapping .o files in /tmp.

Someone who knows selinux should work out the generic change for varnishd and add it to the general selinux policy.
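For illustration only, a minimal local policy module covering the recorded denial might look like the sketch below. The type and class names are taken from the AVC message quoted above; the module name is made up, and the proper fix belongs in the distribution policy rather than a local module like this:

```
module varnishd_vcl_tmp 1.0;

require {
    type varnishd_t;
    type varnishd_tmp_t;
    class file map;
}

# Allow the varnishd domain (including the compiler helpers it execs,
# such as lto1-wpa) to mmap the temporary .o files produced while
# compiling VCL.
allow varnishd_t varnishd_tmp_t:file map;
```

This is the same shape of rule that `ausearch ... | audit2allow -M` would generate from the denial, just written out by hand.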



Ingvar

Comment 2 Emil Malinov 2020-10-29 20:12:51 UTC
Hi Ingvar,

Thanks for your reply.

Here is the output:

[root@proxy ~]# systemctl start varnish
Job for varnish.service failed because the control process exited with error code.
See "systemctl status varnish.service" and "journalctl -xe" for details.
[root@proxy ~]# journalctl -u varnish -xe
-- Logs begin at Fri 2020-10-30 08:45:55 NZDT, end at Fri 2020-10-30 08:48:02 NZDT. --
Oct 30 08:47:47 proxy systemd[1]: Starting Varnish Cache, a high-performance HTTP accelerator...
░░ Subject: A start job for unit varnish.service has begun execution
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░
░░ A start job for unit varnish.service has begun execution.
░░
░░ The job identifier is 997.
Oct 30 08:47:48 proxy varnishd[23964]: Error:
Oct 30 08:47:48 proxy varnishd[23964]: Message from C-compiler:
Oct 30 08:47:48 proxy varnishd[23964]: lto1: fatal error: Cannot map /tmp/ccUUg65n.o
Oct 30 08:47:48 proxy varnishd[23964]: compilation terminated.
Oct 30 08:47:48 proxy varnishd[23964]: lto-wrapper: fatal error: gcc returned 1 exit status
Oct 30 08:47:48 proxy varnishd[23964]: compilation terminated.
Oct 30 08:47:48 proxy varnishd[23964]: /usr/bin/ld: error: lto-wrapper failed
Oct 30 08:47:48 proxy varnishd[23964]: collect2: error: ld returned 1 exit status
Oct 30 08:47:48 proxy varnishd[23964]: Running C-compiler failed, exited with 1
Oct 30 08:47:48 proxy varnishd[23964]: VCL compilation failed
Oct 30 08:47:48 proxy systemd[1]: varnish.service: Control process exited, code=exited, status=255/EXCEPTION
░░ Subject: Unit process exited
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░
░░ An ExecStart= process belonging to unit varnish.service has exited.
░░
░░ The process' exit code is 'exited' and its exit status is 255.
Oct 30 08:47:48 proxy systemd[1]: varnish.service: Failed with result 'exit-code'.
░░ Subject: Unit failed
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░
░░ The unit varnish.service has entered the 'failed' state with result 'exit-code'.
Oct 30 08:47:48 proxy systemd[1]: varnish.service: Unit process 23964 (varnishd) remains running after unit stopped.
Oct 30 08:47:48 proxy systemd[1]: Failed to start Varnish Cache, a high-performance HTTP accelerator.
░░ Subject: A start job for unit varnish.service has failed
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░
░░ A start job for unit varnish.service has finished with a failure.
░░
░░ The job identifier is 997 and the job result is failed.


[root@proxy ~]# journalctl SYSLOG_IDENTIFIER=setroubleshoot
-- Logs begin at Fri 2020-10-30 08:45:55 NZDT, end at Fri 2020-10-30 08:49:25 NZDT. --
Oct 30 08:47:51 proxy setroubleshoot[23974]: AnalyzeThread.run(): Cancel pending alarm
Oct 30 08:47:52 proxy setroubleshoot[23974]: SELinux is preventing lto1-wpa from map access on the file /tmp/ccUUg65n.o. For complete SELinux messages run: sealert -l 76dc3bd7-5f3b-4919-9414-f2e72524ac27
Oct 30 08:47:52 proxy setroubleshoot[23974]: AnalyzeThread.run(): Set alarm timeout to 10
[root@proxy ~]# sealert -l 76dc3bd7-5f3b-4919-9414-f2e72524ac27
SELinux is preventing lto1-wpa from map access on the file /tmp/ccUUg65n.o.

*****  Plugin catchall_boolean (89.3 confidence) suggests   ******************

If you want to allow domain to can mmap files
Then you must tell SELinux about this by enabling the 'domain_can_mmap_files' boolean.

Do
setsebool -P domain_can_mmap_files 1

*****  Plugin catchall (11.6 confidence) suggests   **************************

If you believe that lto1-wpa should be allowed map access on the ccUUg65n.o file by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'lto1-wpa' --raw | audit2allow -M my-lto1wpa
# semodule -X 300 -i my-lto1wpa.pp


Additional Information:
Source Context                system_u:system_r:varnishd_t:s0
Target Context                system_u:object_r:varnishd_tmp_t:s0
Target Objects                /tmp/ccUUg65n.o [ file ]
Source                        lto1-wpa
Source Path                   lto1-wpa
Port                          <Unknown>
Host                          proxy
Source RPM Packages
Target RPM Packages
SELinux Policy RPM            selinux-policy-targeted-3.14.6-28.fc33.noarch
Local Policy RPM              selinux-policy-targeted-3.14.6-28.fc33.noarch
Selinux Enabled               True
Policy Type                   targeted
Enforcing Mode                Enforcing
Host Name                     proxy
Platform                      Linux proxy 5.8.15-301.fc33.x86_64 #1 SMP Thu Oct
                              15 16:58:06 UTC 2020 x86_64 x86_64
Alert Count                   1
First Seen                    2020-10-30 08:47:48 NZDT
Last Seen                     2020-10-30 08:47:48 NZDT
Local ID                      76dc3bd7-5f3b-4919-9414-f2e72524ac27

Raw Audit Messages
type=AVC msg=audit(1604000868.64:265): avc:  denied  { map } for  pid=23973 comm="lto1-wpa" path="/tmp/ccUUg65n.o" dev="tmpfs" ino=56637 scontext=system_u:system_r:varnishd_t:s0 tcontext=system_u:object_r:varnishd_tmp_t:s0 tclass=file permissive=0


Hash: lto1-wpa,varnishd_t,varnishd_tmp_t,file,map
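As an aside for readers less familiar with AVC records: the raw audit message above is just a denied-permission set plus key=value fields, and can be picked apart mechanically. A small illustrative parser (nothing varnish-specific; the regular expressions here are mine, not part of any audit tool):

```python
import re

# The raw AVC record pasted above, as one string (wrapped for readability).
avc = ('type=AVC msg=audit(1604000868.64:265): avc:  denied  { map } '
       'for  pid=23973 comm="lto1-wpa" path="/tmp/ccUUg65n.o" dev="tmpfs" '
       'ino=56637 scontext=system_u:system_r:varnishd_t:s0 '
       'tcontext=system_u:object_r:varnishd_tmp_t:s0 tclass=file permissive=0')

# The denied permission set sits between braces after "denied".
perms = re.search(r'denied\s+\{ ([^}]+) \}', avc).group(1).split()

# Everything else is key=value pairs; quoted values keep their quotes here.
fields = dict(re.findall(r'(\w+)=("[^"]*"|\S+)', avc))

print(perms)               # ['map']
print(fields['scontext'])  # system_u:system_r:varnishd_t:s0
print(fields['tclass'])    # file
```

The three pieces printed are exactly what the policy module has to mention: the source domain (varnishd_t), the target type (varnishd_tmp_t), and the class/permission (file map).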


#########################################################################################################################################


I then disabled selinux and tried again. Varnish did not start.


[root@proxy ~]# ls /etc/selinux/
config  semanage.conf  targeted
[root@proxy ~]# vi /etc/selinux/config
[root@proxy ~]# reboot now
[root@proxy ~]#
login as: root
root.141.20's password:
Web console: https://proxy.mshome.net:9090/ or https://172.17.141.20:9090/

Last login: Fri Oct 30 08:46:33 2020 from 172.17.141.17
[root@proxy ~]# getenforce
Disabled
[root@proxy ~]# systemctl start varnish
Job for varnish.service failed because the control process exited with error code.
See "systemctl status varnish.service" and "journalctl -xe" for details.
[root@proxy ~]# journalctl -u varnish -xe
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░
░░ An ExecStart= process belonging to unit varnish.service has exited.
░░
░░ The process' exit code is 'exited' and its exit status is 255.
Oct 30 08:47:48 proxy systemd[1]: varnish.service: Failed with result 'exit-code'.
░░ Subject: Unit failed
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░
░░ The unit varnish.service has entered the 'failed' state with result 'exit-code'.
Oct 30 08:47:48 proxy systemd[1]: varnish.service: Unit process 23964 (varnishd) remains running after unit stopped.
Oct 30 08:47:48 proxy systemd[1]: Failed to start Varnish Cache, a high-performance HTTP accelerator.
░░ Subject: A start job for unit varnish.service has failed
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░
░░ A start job for unit varnish.service has finished with a failure.
░░
░░ The job identifier is 997 and the job result is failed.
-- Reboot --
Oct 30 08:54:07 proxy systemd[1]: Starting Varnish Cache, a high-performance HTTP accelerator...
░░ Subject: A start job for unit varnish.service has begun execution
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░
░░ A start job for unit varnish.service has begun execution.
░░
░░ The job identifier is 847.
Oct 30 08:54:07 proxy varnishd[841]: Error:
Oct 30 08:54:07 proxy varnishd[841]: Message from C-compiler:
Oct 30 08:54:07 proxy varnishd[841]: lto-wrapper: fatal error: execvp: No such file or directory
Oct 30 08:54:07 proxy varnishd[841]: compilation terminated.
Oct 30 08:54:07 proxy varnishd[841]: /usr/bin/ld: error: lto-wrapper failed
Oct 30 08:54:07 proxy varnishd[841]: collect2: error: ld returned 1 exit status
Oct 30 08:54:07 proxy varnishd[841]: Running C-compiler failed, exited with 1
Oct 30 08:54:07 proxy varnishd[841]: VCL compilation failed
Oct 30 08:54:07 proxy systemd[1]: varnish.service: Control process exited, code=exited, status=255/EXCEPTION
░░ Subject: Unit process exited
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░
░░ An ExecStart= process belonging to unit varnish.service has exited.
░░
░░ The process' exit code is 'exited' and its exit status is 255.
Oct 30 08:54:07 proxy systemd[1]: varnish.service: Failed with result 'exit-code'.
░░ Subject: Unit failed
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░
░░ The unit varnish.service has entered the 'failed' state with result 'exit-code'.
Oct 30 08:54:07 proxy systemd[1]: varnish.service: Unit process 841 (varnishd) remains running after unit stopped.
Oct 30 08:54:07 proxy systemd[1]: varnish.service: Unit process 851 (lto-wrapper) remains running after unit stopped.
Oct 30 08:54:07 proxy systemd[1]: Failed to start Varnish Cache, a high-performance HTTP accelerator.
░░ Subject: A start job for unit varnish.service has failed
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░
░░ A start job for unit varnish.service has finished with a failure.
░░
░░ The job identifier is 847 and the job result is failed.



[root@proxy ~]# journalctl SYSLOG_IDENTIFIER=setroubleshoot
-- Logs begin at Fri 2020-10-30 08:45:55 NZDT, end at Fri 2020-10-30 08:54:07 NZDT. --
Oct 30 08:47:51 proxy setroubleshoot[23974]: AnalyzeThread.run(): Cancel pending alarm
Oct 30 08:47:52 proxy setroubleshoot[23974]: SELinux is preventing lto1-wpa from map access on the file /tmp/ccUUg65n.o. For complete SELinux messages run: sealert -l 76dc3bd7-5f3b-4919-9414-f2e72524ac27
Oct 30 08:47:52 proxy setroubleshoot[23974]: AnalyzeThread.run(): Set alarm timeout to 10



Note that this output is from the first try (with selinux enabled); with selinux disabled there are no entries.



I then enabled selinux again and tried the suggested troubleshooting steps. Varnish failed to start, with no new selinux entries.

[root@proxy ~]# vi /etc/selinux/config
[root@proxy ~]# reboot now
[root@proxy ~]#
login as: root
root.141.20's password:
Web console: https://proxy.mshome.net:9090/ or https://172.17.141.20:9090/

Last login: Fri Oct 30 08:53:24 2020 from 172.17.141.17
[root@proxy ~]# setsebool domain_can_mmap_files on
[root@proxy ~]# systemctl start varnish
Job for varnish.service failed because the control process exited with error code.
See "systemctl status varnish.service" and "journalctl -xe" for details.
[root@proxy ~]# systemctl status varnish
● varnish.service - Varnish Cache, a high-performance HTTP accelerator
     Loaded: loaded (/usr/lib/systemd/system/varnish.service; disabled; vendor preset: disabled)
     Active: failed (Result: exit-code) since Fri 2020-10-30 09:06:21 NZDT; 8s ago
    Process: 854 ExecStart=/usr/sbin/varnishd -a :6081 -f /etc/varnish/default.vcl -s malloc,256m (code=exited, status=255/EXCEPTION)
        CPU: 166ms

Oct 30 09:06:21 proxy varnishd[855]: compilation terminated.
Oct 30 09:06:21 proxy varnishd[855]: /usr/bin/ld: error: lto-wrapper failed
Oct 30 09:06:21 proxy varnishd[855]: collect2: error: ld returned 1 exit status
Oct 30 09:06:21 proxy varnishd[855]: Running C-compiler failed, exited with 1
Oct 30 09:06:21 proxy varnishd[855]: VCL compilation failed
Oct 30 09:06:21 proxy systemd[1]: varnish.service: Control process exited, code=exited, status=255/EXCEPTION
Oct 30 09:06:21 proxy systemd[1]: varnish.service: Failed with result 'exit-code'.
Oct 30 09:06:21 proxy systemd[1]: varnish.service: Unit process 855 (varnishd) remains running after unit stopped.
Oct 30 09:06:21 proxy systemd[1]: varnish.service: Unit process 865 (lto-wrapper) remains running after unit stopped.
Oct 30 09:06:21 proxy systemd[1]: Failed to start Varnish Cache, a high-performance HTTP accelerator.
[root@proxy ~]# setsebool domain_can_mmap_files off
[root@proxy ~]# journalctl SYSLOG_IDENTIFIER=setroubleshoot
-- Logs begin at Fri 2020-10-30 08:45:55 NZDT, end at Fri 2020-10-30 09:07:10 NZDT. --
Oct 30 08:47:51 proxy setroubleshoot[23974]: AnalyzeThread.run(): Cancel pending alarm
Oct 30 08:47:52 proxy setroubleshoot[23974]: SELinux is preventing lto1-wpa from map access on the file /tmp/ccUUg65n.o. For complete SELinux messages run: sealert -l 76dc3bd7-5f3b-4919-9414-f2e72524ac27
Oct 30 08:47:52 proxy setroubleshoot[23974]: AnalyzeThread.run(): Set alarm timeout to 10




############################################################################################

Finally, I should clarify that I get the same error on a fresh Fedora 32 Server install. The one that runs is also Fedora 32, which I installed a month ago and hadn't used; I must have done something on it that I don't remember now.


Thanks,
Emil

Comment 3 Emil Malinov 2020-10-31 00:59:59 UTC
I get the following output from

# varnishd -C -f /etc/varnish/default.vcl

Message from C-compiler:
lto-wrapper: fatal error: execvp: Permission denied
compilation terminated.
/usr/bin/ld: error: lto-wrapper failed
collect2: error: ld returned 1 exit status
Running C-compiler failed, exited with 1



# gcc -v

Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/usr/libexec/gcc/x86_64-redhat-linux/10/lto-wrapper
OFFLOAD_TARGET_NAMES=nvptx-none
OFFLOAD_TARGET_DEFAULT=1
Target: x86_64-redhat-linux
Configured with: ../configure --enable-bootstrap --enable-languages=c,c++,fortran,objc,obj-c++,ada,go,d,lto --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --with-bugurl=http://bugzilla.redhat.com/bugzilla --enable-shared --enable-threads=posix --enable-checking=release --enable-multilib --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-object --enable-linker-build-id --with-gcc-major-version-only --with-linker-hash-style=gnu --enable-plugin --enable-initfini-array --with-isl --enable-offload-targets=nvptx-none --without-cuda-driver --enable-gnu-indirect-function --enable-cet --with-tune=generic --with-arch_32=i686 --build=x86_64-redhat-linux
Thread model: posix
Supported LTO compression algorithms: zlib zstd
gcc version 10.2.1 20200826 (Red Hat 10.2.1-3) (GCC)


I looked for LTO bugs and found these:

https://bugzilla.redhat.com/show_bug.cgi?id=1866012 duplicate of 1823349
https://bugzilla.redhat.com/show_bug.cgi?id=1823349

######################################################################################
Installing make removes the compiler message from


# varnishd -C -f /etc/varnish/default.vcl

Message from C-compiler:
lto-wrapper: fatal error: execvp: Permission denied
compilation terminated.
/usr/bin/ld: error: lto-wrapper failed
collect2: error: ld returned 1 exit status
Running C-compiler failed, exited with 1


######################################################################################

Attempting to start varnish fails again, but this time with a selinux issue:


[root@proxy ~]# systemctl start varnish
Job for varnish.service failed because the control process exited with error code.
See "systemctl status varnish.service" and "journalctl -xe" for details.
[root@proxy ~]# journalctl -xe
░░ A start job for unit system-dbus\x2d:1.6\x2dorg.fedoraproject.Setroubleshootd.slice has finished successfully.
░░
░░ The job identifier is 1529.
Oct 31 13:54:58 proxy systemd[1]: Started dbus-:1.6-org.fedoraproject.Setroubleshootd.
░░ Subject: A start job for unit dbus-:1.6-org.fedoraproject.Setroubleshootd has finished successfully
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░
░░ A start job for unit dbus-:1.6-org.fedoraproject.Setroubleshootd has finished successfully.
░░
░░ The job identifier is 1528.
Oct 31 13:54:58 proxy audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-:1.6-org.fedoraproject.Setroubleshootd@0>
Oct 31 13:54:59 proxy setroubleshoot[1314]: AnalyzeThread.run(): Cancel pending alarm
Oct 31 13:54:59 proxy systemd[1]: Created slice system-dbus\x2d:1.6\x2dorg.fedoraproject.SetroubleshootPrivileged.slice.
░░ Subject: A start job for unit system-dbus\x2d:1.6\x2dorg.fedoraproject.SetroubleshootPrivileged.slice has finished successfully
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░
░░ A start job for unit system-dbus\x2d:1.6\x2dorg.fedoraproject.SetroubleshootPrivileged.slice has finished successfully.
░░
░░ The job identifier is 1605.
Oct 31 13:54:59 proxy systemd[1]: Started dbus-:1.6-org.fedoraproject.SetroubleshootPrivileged.
░░ Subject: A start job for unit dbus-:1.6-org.fedoraproject.SetroubleshootPrivileged has finished successfully
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░
░░ A start job for unit dbus-:1.6-org.fedoraproject.SetroubleshootPrivileged has finished successfully.
░░
░░ The job identifier is 1604.
Oct 31 13:54:59 proxy audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-:1.6-org.fedoraproject.SetroubleshootPri>
Oct 31 13:55:00 proxy kernel: hv_balloon: Balloon request will be partially fulfilled. Balloon floor reached.
Oct 31 13:55:01 proxy setroubleshoot[1314]: SELinux is preventing lto1-wpa from map access on the file /tmp/ccr5mxoT.o. For complete SELinux messages run: sealert -l 14a3c5d5-a6>
Oct 31 13:55:01 proxy python3[1314]: SELinux is preventing lto1-wpa from map access on the file /tmp/ccr5mxoT.o.

                                     *****  Plugin catchall_boolean (89.3 confidence) suggests   ******************

                                     If you want to allow domain to can mmap files
                                     Then you must tell SELinux about this by enabling the 'domain_can_mmap_files' boolean.

                                     Do
                                     setsebool -P domain_can_mmap_files 1

                                     *****  Plugin catchall (11.6 confidence) suggests   **************************

                                     If you believe that lto1-wpa should be allowed map access on the ccr5mxoT.o file by default.
                                     Then you should report this as a bug.
                                     You can generate a local policy module to allow this access.
                                     Do
                                     allow this access for now by executing:
                                     # ausearch -c 'lto1-wpa' --raw | audit2allow -M my-lto1wpa
                                     # semodule -X 300 -i my-lto1wpa.pp

Oct 31 13:55:01 proxy setroubleshoot[1314]: AnalyzeThread.run(): Set alarm timeout to 10



####################################################################################################
[root@proxy ~]# setenforce 0
[root@proxy ~]# getenforce
Permissive
[root@proxy ~]# systemctl start varnish
[root@proxy ~]#

Turning off selinux enables varnish to start.


####################################################################################################
Next, back to looking at selinux and allowing lto1-wpa map access to the temp file.

Comment 4 Zdenek Pytela 2020-11-12 10:21:21 UTC
If there is no other problem than the AVC denial on mapping tmp files, I'd close this bz as a dup of bz#1896457.

Comment 5 Emil Malinov 2020-11-12 10:29:33 UTC
OK, thanks

Comment 6 Ben Cotton 2021-11-04 17:19:05 UTC
This message is a reminder that Fedora 33 is nearing its end of life.
Fedora will stop maintaining and issuing updates for Fedora 33 on 2021-11-30.
It is Fedora's policy to close all bug reports from releases that are no longer
maintained. At that time this bug will be closed as EOL if it remains open with a
Fedora 'version' of '33'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version' 
to a later Fedora version.

Thank you for reporting this issue and we are sorry that we were not 
able to fix it before Fedora 33 reached end of life. If you would still like 
to see this bug fixed and are able to reproduce it against a later version 
of Fedora, you are encouraged to change the 'version' to a later Fedora 
version before this bug is closed, as described in the policy above.

Although we aim to fix as many bugs as possible during every release's 
lifetime, sometimes those efforts are overtaken by events. Often a 
more recent Fedora release includes newer upstream software that fixes 
bugs or makes them obsolete.

Comment 7 Ingvar Hagelund 2021-11-09 08:57:26 UTC

*** This bug has been marked as a duplicate of bug 1896457 ***

