Red Hat Bugzilla – Attachment 1906656 Details for Bug 2119992 – Updates rolled back on boot
Output of journalctl -b

Description: Output of journalctl -b
Filename: journal
MIME Type: text/plain
Creator: Daan Vanoverloop
Created: 2022-08-20 17:38:56 UTC
Size: 982.20 KB
>Jul 14 00:00:00 fedora kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] >Jul 14 00:00:00 fedora kernel: Linux version 5.18.16-200.fc36.aarch64 (mockbuild@buildvm-a64-12.iad2.fedoraproject.org) (gcc (GCC) 12.1.1 20220507 (Red Hat 12.1.1-1), GNU ld version 2.37-27.fc36) #1 SMP PREEMPT_DYNAMIC Wed Aug 3 15:07:15 UTC 2022 >Jul 14 00:00:00 fedora kernel: random: crng init done >Jul 14 00:00:00 fedora kernel: Machine model: Raspberry Pi 4 Model B Rev 1.1 >Jul 14 00:00:00 fedora kernel: efi: EFI v2.80 by Das U-Boot >Jul 14 00:00:00 fedora kernel: efi: RTPROP=0x3cb2f040 SMBIOS=0x3cb2b000 MOKvar=0x3ca12000 RNG=0x3ca04040 MEMRESERVE=0x3c9f4040 >Jul 14 00:00:00 fedora kernel: efi: seeding entropy pool >Jul 14 00:00:00 fedora kernel: Reserved memory: created CMA memory pool at 0x000000002c000000, size 64 MiB >Jul 14 00:00:00 fedora kernel: OF: reserved mem: initialized node linux,cma, compatible id shared-dma-pool >Jul 14 00:00:00 fedora kernel: NUMA: No NUMA configuration found >Jul 14 00:00:00 fedora kernel: NUMA: Faking a node at [mem 0x0000000000000000-0x00000000fbffffff] >Jul 14 00:00:00 fedora kernel: NUMA: NODE_DATA [mem 0xfb79a6c0-0xfb7b0fff] >Jul 14 00:00:00 fedora kernel: Zone ranges: >Jul 14 00:00:00 fedora kernel: DMA [mem 0x0000000000000000-0x000000003fffffff] >Jul 14 00:00:00 fedora kernel: DMA32 [mem 0x0000000040000000-0x00000000fbffffff] >Jul 14 00:00:00 fedora kernel: Normal empty >Jul 14 00:00:00 fedora kernel: Device empty >Jul 14 00:00:00 fedora kernel: Movable zone start for each node >Jul 14 00:00:00 fedora kernel: Early memory node ranges >Jul 14 00:00:00 fedora kernel: node 0: [mem 0x0000000000000000-0x0000000000000fff] >Jul 14 00:00:00 fedora kernel: node 0: [mem 0x0000000000001000-0x000000003ca03fff] >Jul 14 00:00:00 fedora kernel: node 0: [mem 0x000000003ca04000-0x000000003ca04fff] >Jul 14 00:00:00 fedora kernel: node 0: [mem 0x000000003ca05000-0x000000003ca11fff] >Jul 14 00:00:00 fedora kernel: node 0: [mem 
0x000000003ca12000-0x000000003ca12fff] >Jul 14 00:00:00 fedora kernel: node 0: [mem 0x000000003ca13000-0x000000003ca52fff] >Jul 14 00:00:00 fedora kernel: node 0: [mem 0x000000003ca53000-0x000000003ca53fff] >Jul 14 00:00:00 fedora kernel: node 0: [mem 0x000000003ca54000-0x000000003cb2afff] >Jul 14 00:00:00 fedora kernel: node 0: [mem 0x000000003cb2b000-0x000000003cb2bfff] >Jul 14 00:00:00 fedora kernel: node 0: [mem 0x000000003cb2c000-0x000000003cb2efff] >Jul 14 00:00:00 fedora kernel: node 0: [mem 0x000000003cb2f000-0x000000003cb31fff] >Jul 14 00:00:00 fedora kernel: node 0: [mem 0x000000003cb32000-0x000000003cb32fff] >Jul 14 00:00:00 fedora kernel: node 0: [mem 0x000000003cb33000-0x000000003cb36fff] >Jul 14 00:00:00 fedora kernel: node 0: [mem 0x000000003cb37000-0x000000003df4ffff] >Jul 14 00:00:00 fedora kernel: node 0: [mem 0x000000003df50000-0x000000003df5ffff] >Jul 14 00:00:00 fedora kernel: node 0: [mem 0x000000003df60000-0x000000003dffffff] >Jul 14 00:00:00 fedora kernel: node 0: [mem 0x000000003f127000-0x000000003f127fff] >Jul 14 00:00:00 fedora kernel: node 0: [mem 0x0000000040000000-0x00000000fbffffff] >Jul 14 00:00:00 fedora kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000000fbffffff] >Jul 14 00:00:00 fedora kernel: On node 0, zone DMA: 4391 pages in unavailable ranges >Jul 14 00:00:00 fedora kernel: On node 0, zone DMA32: 3800 pages in unavailable ranges >Jul 14 00:00:00 fedora kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges >Jul 14 00:00:00 fedora kernel: percpu: Embedded 31 pages/cpu s88744 r8192 d30040 u126976 >Jul 14 00:00:00 fedora kernel: pcpu-alloc: s88744 r8192 d30040 u126976 alloc=31*4096 >Jul 14 00:00:00 fedora kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 >Jul 14 00:00:00 fedora kernel: Detected PIPT I-cache on CPU0 >Jul 14 00:00:00 fedora kernel: CPU features: detected: Spectre-v2 >Jul 14 00:00:00 fedora kernel: CPU features: detected: Spectre-v3a >Jul 14 00:00:00 fedora kernel: CPU features: detected: Spectre-v4 
>Jul 14 00:00:00 fedora kernel: CPU features: detected: Spectre-BHB >Jul 14 00:00:00 fedora kernel: CPU features: kernel page table isolation forced ON by KASLR >Jul 14 00:00:00 fedora kernel: CPU features: detected: Kernel page table isolation (KPTI) >Jul 14 00:00:00 fedora kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 >Jul 14 00:00:00 fedora kernel: Fallback order for Node 0: 0 >Jul 14 00:00:00 fedora kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007873 >Jul 14 00:00:00 fedora kernel: Policy zone: DMA32 >Jul 14 00:00:00 fedora kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos2)/ostree/fedora-iot-e8397e68f7793d7e9cf907a131f5ac7c5f92558899e0a5bda2335a6bd44d8a19/vmlinuz-5.18.16-200.fc36.aarch64 modprobe.blacklist=vc4 console=tty0 root=UUID=433fb0a4-33dc-42c7-9208-3270ba010d05 ostree=/ostree/boot.0/fedora-iot/e8397e68f7793d7e9cf907a131f5ac7c5f92558899e0a5bda2335a6bd44d8a19/0 >Jul 14 00:00:00 fedora kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos2)/ostree/fedora-iot-e8397e68f7793d7e9cf907a131f5ac7c5f92558899e0a5bda2335a6bd44d8a19/vmlinuz-5.18.16-200.fc36.aarch64 ostree=/ostree/boot.0/fedora-iot/e8397e68f7793d7e9cf907a131f5ac7c5f92558899e0a5bda2335a6bd44d8a19/0", will be passed to user space. 
>Jul 14 00:00:00 fedora kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) >Jul 14 00:00:00 fedora kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) >Jul 14 00:00:00 fedora kernel: mem auto-init: stack:off, heap alloc:off, heap free:off >Jul 14 00:00:00 fedora kernel: software IO TLB: mapped [mem 0x00000000389f1000-0x000000003c9f1000] (64MB) >Jul 14 00:00:00 fedora kernel: Memory: 3754444K/4096004K available (16384K kernel code, 4350K rwdata, 13168K rodata, 7360K init, 10809K bss, 276024K reserved, 65536K cma-reserved) >Jul 14 00:00:00 fedora kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 >Jul 14 00:00:00 fedora kernel: ftrace: allocating 55939 entries in 219 pages >Jul 14 00:00:00 fedora kernel: ftrace: allocated 219 pages with 6 groups >Jul 14 00:00:00 fedora kernel: trace event string verifier disabled >Jul 14 00:00:00 fedora kernel: Dynamic Preempt: voluntary >Jul 14 00:00:00 fedora kernel: rcu: Preemptible hierarchical RCU implementation. >Jul 14 00:00:00 fedora kernel: rcu: RCU restricting CPUs from NR_CPUS=4096 to nr_cpu_ids=4. >Jul 14 00:00:00 fedora kernel: Trampoline variant of Tasks RCU enabled. >Jul 14 00:00:00 fedora kernel: Rude variant of Tasks RCU enabled. >Jul 14 00:00:00 fedora kernel: Tracing variant of Tasks RCU enabled. >Jul 14 00:00:00 fedora kernel: rcu: RCU calculated value of scheduler-enlistment delay is 10 jiffies. >Jul 14 00:00:00 fedora kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 >Jul 14 00:00:00 fedora kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 >Jul 14 00:00:00 fedora kernel: Root IRQ handler: gic_handle_irq >Jul 14 00:00:00 fedora kernel: GIC: Using split EOI/Deactivate mode >Jul 14 00:00:00 fedora kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____) >Jul 14 00:00:00 fedora kernel: arch_timer: cp15 timer(s) running at 54.00MHz (phys). 
>Jul 14 00:00:00 fedora kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0xc743ce346, max_idle_ns: 440795203123 ns >Jul 14 00:00:00 fedora kernel: sched_clock: 56 bits at 54MHz, resolution 18ns, wraps every 4398046511102ns >Jul 14 00:00:00 fedora kernel: Console: colour dummy device 80x25 >Jul 14 00:00:00 fedora kernel: printk: console [tty0] enabled >Jul 14 00:00:00 fedora kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 108.00 BogoMIPS (lpj=540000) >Jul 14 00:00:00 fedora kernel: pid_max: default: 32768 minimum: 301 >Jul 14 00:00:00 fedora kernel: LSM: Security Framework initializing >Jul 14 00:00:00 fedora kernel: Yama: becoming mindful. >Jul 14 00:00:00 fedora kernel: SELinux: Initializing. >Jul 14 00:00:00 fedora kernel: LSM support for eBPF active >Jul 14 00:00:00 fedora kernel: landlock: Up and running. >Jul 14 00:00:00 fedora kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) >Jul 14 00:00:00 fedora kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) >Jul 14 00:00:00 fedora kernel: cblist_init_generic: Setting adjustable number of callback queues. >Jul 14 00:00:00 fedora kernel: cblist_init_generic: Setting shift to 2 and lim to 1. >Jul 14 00:00:00 fedora kernel: cblist_init_generic: Setting shift to 2 and lim to 1. >Jul 14 00:00:00 fedora kernel: cblist_init_generic: Setting shift to 2 and lim to 1. >Jul 14 00:00:00 fedora kernel: rcu: Hierarchical SRCU implementation. >Jul 14 00:00:00 fedora kernel: Remapping and enabling EFI services. >Jul 14 00:00:00 fedora kernel: smp: Bringing up secondary CPUs ... 
>Jul 14 00:00:00 fedora kernel: Detected PIPT I-cache on CPU1 >Jul 14 00:00:00 fedora kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] >Jul 14 00:00:00 fedora kernel: Detected PIPT I-cache on CPU2 >Jul 14 00:00:00 fedora kernel: CPU2: Booted secondary processor 0x0000000002 [0x410fd083] >Jul 14 00:00:00 fedora kernel: Detected PIPT I-cache on CPU3 >Jul 14 00:00:00 fedora kernel: CPU3: Booted secondary processor 0x0000000003 [0x410fd083] >Jul 14 00:00:00 fedora kernel: smp: Brought up 1 node, 4 CPUs >Jul 14 00:00:00 fedora kernel: SMP: Total of 4 processors activated. >Jul 14 00:00:00 fedora kernel: CPU features: detected: 32-bit EL0 Support >Jul 14 00:00:00 fedora kernel: CPU features: detected: 32-bit EL1 Support >Jul 14 00:00:00 fedora kernel: CPU features: detected: CRC32 instructions >Jul 14 00:00:00 fedora kernel: CPU features: emulated: Privileged Access Never (PAN) using TTBR0_EL1 switching >Jul 14 00:00:00 fedora kernel: CPU: All CPU(s) started at EL2 >Jul 14 00:00:00 fedora kernel: alternatives: patching kernel code >Jul 14 00:00:00 fedora kernel: devtmpfs: initialized >Jul 14 00:00:00 fedora kernel: Registered cp15_barrier emulation handler >Jul 14 00:00:00 fedora kernel: Registered setend emulation handler >Jul 14 00:00:00 fedora kernel: KASLR enabled >Jul 14 00:00:00 fedora kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604462750000 ns >Jul 14 00:00:00 fedora kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) >Jul 14 00:00:00 fedora kernel: pinctrl core: initialized pinctrl subsystem >Jul 14 00:00:00 fedora kernel: SMBIOS 3.0 present. 
>Jul 14 00:00:00 fedora kernel: DMI: Unknown Unknown Product/Unknown Product, BIOS 2021.10 10/01/2021 >Jul 14 00:00:00 fedora kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family >Jul 14 00:00:00 fedora kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations >Jul 14 00:00:00 fedora kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations >Jul 14 00:00:00 fedora kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations >Jul 14 00:00:00 fedora kernel: audit: initializing netlink subsys (disabled) >Jul 14 00:00:00 fedora kernel: audit: type=2000 audit(0.040:1): state=initialized audit_enabled=0 res=1 >Jul 14 00:00:00 fedora kernel: thermal_sys: Registered thermal governor 'fair_share' >Jul 14 00:00:00 fedora kernel: thermal_sys: Registered thermal governor 'step_wise' >Jul 14 00:00:00 fedora kernel: thermal_sys: Registered thermal governor 'user_space' >Jul 14 00:00:00 fedora kernel: cpuidle: using governor menu >Jul 14 00:00:00 fedora kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. >Jul 14 00:00:00 fedora kernel: ASID allocator initialised with 32768 entries >Jul 14 00:00:00 fedora kernel: Serial: AMBA PL011 UART driver >Jul 14 00:00:00 fedora kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages >Jul 14 00:00:00 fedora kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages >Jul 14 00:00:00 fedora kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages >Jul 14 00:00:00 fedora kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages >Jul 14 00:00:00 fedora kernel: cryptd: max_cpu_qlen set to 1000 >Jul 14 00:00:00 fedora kernel: raid6: skipped pq benchmark and selected neonx8 >Jul 14 00:00:00 fedora kernel: raid6: using neon recovery algorithm >Jul 14 00:00:00 fedora kernel: fbcon: Taking over console >Jul 14 00:00:00 fedora kernel: ACPI: Interpreter disabled. 
>Jul 14 00:00:00 fedora kernel: iommu: Default domain type: Translated >Jul 14 00:00:00 fedora kernel: iommu: DMA domain TLB invalidation policy: lazy mode >Jul 14 00:00:00 fedora kernel: SCSI subsystem initialized >Jul 14 00:00:00 fedora kernel: libata version 3.00 loaded. >Jul 14 00:00:00 fedora kernel: usbcore: registered new interface driver usbfs >Jul 14 00:00:00 fedora kernel: usbcore: registered new interface driver hub >Jul 14 00:00:00 fedora kernel: usbcore: registered new device driver usb >Jul 14 00:00:00 fedora kernel: pps_core: LinuxPPS API ver. 1 registered >Jul 14 00:00:00 fedora kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it> >Jul 14 00:00:00 fedora kernel: PTP clock support registered >Jul 14 00:00:00 fedora kernel: EDAC MC: Ver: 3.0.0 >Jul 14 00:00:00 fedora kernel: Registered efivars operations >Jul 14 00:00:00 fedora kernel: NetLabel: Initializing >Jul 14 00:00:00 fedora kernel: NetLabel: domain hash size = 128 >Jul 14 00:00:00 fedora kernel: NetLabel: protocols = UNLABELED CIPSOv4 CALIPSO >Jul 14 00:00:00 fedora kernel: NetLabel: unlabeled traffic allowed by default >Jul 14 00:00:00 fedora kernel: mctp: management component transport protocol core >Jul 14 00:00:00 fedora kernel: NET: Registered PF_MCTP protocol family >Jul 14 00:00:00 fedora kernel: vgaarb: loaded >Jul 14 00:00:00 fedora kernel: clocksource: Switched to clocksource arch_sys_counter >Jul 14 00:00:00 fedora kernel: VFS: Disk quotas dquot_6.6.0 >Jul 14 00:00:00 fedora kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) >Jul 14 00:00:00 fedora kernel: pnp: PnP ACPI: disabled >Jul 14 00:00:00 fedora kernel: NET: Registered PF_INET protocol family >Jul 14 00:00:00 fedora kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) >Jul 14 00:00:00 fedora kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) >Jul 14 00:00:00 fedora kernel: Table-perturb hash table 
entries: 65536 (order: 6, 262144 bytes, linear) >Jul 14 00:00:00 fedora kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) >Jul 14 00:00:00 fedora kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) >Jul 14 00:00:00 fedora kernel: TCP: Hash tables configured (established 32768 bind 32768) >Jul 14 00:00:00 fedora kernel: MPTCP token hash table entries: 4096 (order: 4, 98304 bytes, linear) >Jul 14 00:00:00 fedora kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) >Jul 14 00:00:00 fedora kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) >Jul 14 00:00:00 fedora kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family >Jul 14 00:00:00 fedora kernel: NET: Registered PF_XDP protocol family >Jul 14 00:00:00 fedora kernel: PCI: CLS 0 bytes, default 64 >Jul 14 00:00:00 fedora kernel: Trying to unpack rootfs image as initramfs... >Jul 14 00:00:00 fedora kernel: hw perfevents: enabled with armv8_cortex_a72 PMU driver, 7 counters available >Jul 14 00:00:00 fedora kernel: kvm [1]: IPA Size Limit: 44 bits >Jul 14 00:00:00 fedora kernel: kvm [1]: vgic interrupt IRQ9 >Jul 14 00:00:00 fedora kernel: kvm [1]: Hyp mode initialized successfully >Jul 14 00:00:00 fedora kernel: Initialise system trusted keyrings >Jul 14 00:00:00 fedora kernel: Key type blacklist registered >Jul 14 00:00:00 fedora kernel: workingset: timestamp_bits=37 max_order=20 bucket_order=0 >Jul 14 00:00:00 fedora kernel: zbud: loaded >Jul 14 00:00:00 fedora kernel: integrity: Platform Keyring initialized >Jul 14 00:00:00 fedora kernel: integrity: Machine keyring initialized >Jul 14 00:00:00 fedora kernel: NET: Registered PF_ALG protocol family >Jul 14 00:00:00 fedora kernel: xor: measuring software checksum speed >Jul 14 00:00:00 fedora kernel: 8regs : 2909 MB/sec >Jul 14 00:00:00 fedora kernel: 32regs : 3135 MB/sec >Jul 14 00:00:00 fedora kernel: arm64_neon : 2332 MB/sec >Jul 14 00:00:00 fedora kernel: xor: using 
function: 32regs (3135 MB/sec) >Jul 14 00:00:00 fedora kernel: Key type asymmetric registered >Jul 14 00:00:00 fedora kernel: Asymmetric key parser 'x509' registered >Jul 14 00:00:00 fedora kernel: Freeing initrd memory: 74244K >Jul 14 00:00:00 fedora kernel: alg: self-tests for CTR-KDF (hmac(sha256)) passed >Jul 14 00:00:00 fedora kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 243) >Jul 14 00:00:00 fedora kernel: io scheduler mq-deadline registered >Jul 14 00:00:00 fedora kernel: io scheduler kyber registered >Jul 14 00:00:00 fedora kernel: io scheduler bfq registered >Jul 14 00:00:00 fedora kernel: atomic64_test: passed >Jul 14 00:00:00 fedora kernel: irq_brcmstb_l2: registered L2 intc (/soc/interrupt-controller@7ef00100, parent irq: 39) >Jul 14 00:00:00 fedora kernel: Serial: 8250/16550 driver, 32 ports, IRQ sharing enabled >Jul 14 00:00:00 fedora kernel: bcm2835-aux-uart fe215040.serial: there is not valid maps for state default >Jul 14 00:00:00 fedora kernel: fe215040.serial: ttyS0 at MMIO 0xfe215040 (irq = 22, base_baud = 62499999) is a 16550 >Jul 14 00:00:00 fedora kernel: msm_serial: driver initialized >Jul 14 00:00:00 fedora kernel: cacheinfo: Unable to detect cache hierarchy for CPU 0 >Jul 14 00:00:00 fedora kernel: bcm2835-power bcm2835-power: Broadcom BCM2835 power domains driver >Jul 14 00:00:00 fedora kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver >Jul 14 00:00:00 fedora kernel: ehci-pci: EHCI PCI platform driver >Jul 14 00:00:00 fedora kernel: usbcore: registered new interface driver usbserial_generic >Jul 14 00:00:00 fedora kernel: usbserial: USB Serial support registered for generic >Jul 14 00:00:00 fedora kernel: mousedev: PS/2 mouse device common for all mice >Jul 14 00:00:00 fedora kernel: brcmstb-i2c fef04500.i2c: @97500hz registered in polling mode >Jul 14 00:00:00 fedora kernel: brcmstb-i2c fef09500.i2c: @97500hz registered in polling mode >Jul 14 00:00:00 fedora kernel: device-mapper: core: 
CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. >Jul 14 00:00:00 fedora kernel: device-mapper: uevent: version 1.0.3 >Jul 14 00:00:00 fedora kernel: device-mapper: ioctl: 4.46.0-ioctl (2022-02-22) initialised: dm-devel@redhat.com >Jul 14 00:00:00 fedora kernel: ledtrig-cpu: registered to indicate activity on CPUs >Jul 14 00:00:00 fedora kernel: hid: raw HID events driver (C) Jiri Kosina >Jul 14 00:00:00 fedora kernel: usbcore: registered new interface driver usbhid >Jul 14 00:00:00 fedora kernel: usbhid: USB HID core driver >Jul 14 00:00:00 fedora kernel: bcm2835-mbox fe00b880.mailbox: mailbox enabled >Jul 14 00:00:00 fedora kernel: drop_monitor: Initializing network drop monitor service >Jul 14 00:00:00 fedora kernel: Initializing XFRM netlink socket >Jul 14 00:00:00 fedora kernel: NET: Registered PF_INET6 protocol family >Jul 14 00:00:00 fedora kernel: Segment Routing with IPv6 >Jul 14 00:00:00 fedora kernel: RPL Segment Routing with IPv6 >Jul 14 00:00:00 fedora kernel: In-situ OAM (IOAM) with IPv6 >Jul 14 00:00:00 fedora kernel: mip6: Mobile IPv6 >Jul 14 00:00:00 fedora kernel: NET: Registered PF_PACKET protocol family >Jul 14 00:00:00 fedora kernel: registered taskstats version 1 >Jul 14 00:00:00 fedora kernel: Loading compiled-in X.509 certificates >Jul 14 00:00:00 fedora kernel: Loaded X.509 cert 'Fedora kernel signing key: 2f2d8f22ba966a76b1a9c4185cbfb4919892bf21' >Jul 14 00:00:00 fedora kernel: zswap: loaded using pool lzo/zbud >Jul 14 00:00:00 fedora kernel: page_owner is disabled >Jul 14 00:00:00 fedora kernel: Key type ._fscrypt registered >Jul 14 00:00:00 fedora kernel: Key type .fscrypt registered >Jul 14 00:00:00 fedora kernel: Key type fscrypt-provisioning registered >Jul 14 00:00:00 fedora kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=yes, fsverity=yes >Jul 14 00:00:00 fedora kernel: Key type big_key registered >Jul 14 00:00:00 fedora kernel: Key type encrypted registered >Jul 14 00:00:00 
fedora kernel: ima: secureboot mode disabled >Jul 14 00:00:00 fedora kernel: ima: No TPM chip found, activating TPM-bypass! >Jul 14 00:00:00 fedora kernel: Loading compiled-in module X.509 certificates >Jul 14 00:00:00 fedora kernel: Loaded X.509 cert 'Fedora kernel signing key: 2f2d8f22ba966a76b1a9c4185cbfb4919892bf21' >Jul 14 00:00:00 fedora kernel: ima: Allocated hash algorithm: sha256 >Jul 14 00:00:00 fedora kernel: ima: No architecture policies found >Jul 14 00:00:00 fedora kernel: evm: Initialising EVM extended attributes: >Jul 14 00:00:00 fedora kernel: evm: security.selinux >Jul 14 00:00:00 fedora kernel: evm: security.SMACK64 (disabled) >Jul 14 00:00:00 fedora kernel: evm: security.SMACK64EXEC (disabled) >Jul 14 00:00:00 fedora kernel: evm: security.SMACK64TRANSMUTE (disabled) >Jul 14 00:00:00 fedora kernel: evm: security.SMACK64MMAP (disabled) >Jul 14 00:00:00 fedora kernel: evm: security.apparmor (disabled) >Jul 14 00:00:00 fedora kernel: evm: security.ima >Jul 14 00:00:00 fedora kernel: evm: security.capability >Jul 14 00:00:00 fedora kernel: evm: HMAC attrs: 0x1 >Jul 14 00:00:00 fedora kernel: alg: No test for 842 (842-scomp) >Jul 14 00:00:00 fedora kernel: alg: No test for 842 (842-generic) >Jul 14 00:00:00 fedora kernel: uart-pl011 fe201000.serial: there is not valid maps for state default >Jul 14 00:00:00 fedora kernel: fe201000.serial: ttyAMA1 at MMIO 0xfe201000 (irq = 55, base_baud = 0) is a PL011 rev2 >Jul 14 00:00:00 fedora kernel: raspberrypi-firmware soc:firmware: Attached to firmware from 2021-09-30T19:21:54 >Jul 14 00:00:00 fedora kernel: Freeing unused kernel memory: 7360K >Jul 14 00:00:00 fedora kernel: Checked W+X mappings: passed, no W+X pages found >Jul 14 00:00:00 fedora kernel: rodata_test: all tests were successful >Jul 14 00:00:00 fedora kernel: Run /init as init process >Jul 14 00:00:00 fedora kernel: with arguments: >Jul 14 00:00:00 fedora kernel: /init >Jul 14 00:00:00 fedora kernel: with environment: >Jul 14 00:00:00 fedora 
kernel: HOME=/ >Jul 14 00:00:00 fedora kernel: TERM=linux >Jul 14 00:00:00 fedora kernel: BOOT_IMAGE=(hd0,msdos2)/ostree/fedora-iot-e8397e68f7793d7e9cf907a131f5ac7c5f92558899e0a5bda2335a6bd44d8a19/vmlinuz-5.18.16-200.fc36.aarch64 >Jul 14 00:00:00 fedora kernel: ostree=/ostree/boot.0/fedora-iot/e8397e68f7793d7e9cf907a131f5ac7c5f92558899e0a5bda2335a6bd44d8a19/0 >Jul 14 00:00:00 fedora systemd[1]: System time before build time, advancing clock. >Jul 14 00:00:00 fedora systemd[1]: systemd v250.8-1.fc36 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 +PWQUALITY +P11KIT +QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD +BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) >Jul 14 00:00:00 fedora systemd[1]: Detected architecture arm64. >Jul 14 00:00:00 fedora systemd[1]: Running in initial RAM disk. >Jul 14 00:00:00 fedora systemd[1]: No hostname configured, using default hostname. >Jul 14 00:00:00 fedora systemd[1]: Hostname set to <fedora>. >Jul 14 00:00:00 fedora systemd[1]: Initializing machine ID from random generator. >Jul 14 00:00:00 fedora systemd[1]: Failed to link BPF program. Assuming BPF is not available >Jul 14 00:00:00 fedora systemd[1]: Queued start job for default target initrd.target. >Jul 14 00:00:00 fedora systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. >Jul 14 00:00:00 fedora systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. >Jul 14 00:00:00 fedora systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). >Jul 14 00:00:00 fedora systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. >Jul 14 00:00:00 fedora systemd[1]: Reached target ignition-diskful-subsequent.target - Ignition Subsequent Boot Disk Setup. 
>Jul 14 00:00:00 fedora systemd[1]: Reached target ignition-subsequent.target - Subsequent (Not Ignition) boot complete. >Jul 14 00:00:00 fedora systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. >Jul 14 00:00:00 fedora systemd[1]: Reached target local-fs.target - Local File Systems. >Jul 14 00:00:00 fedora systemd[1]: Reached target paths.target - Path Units. >Jul 14 00:00:00 fedora systemd[1]: Reached target slices.target - Slice Units. >Jul 14 00:00:00 fedora systemd[1]: Reached target swap.target - Swaps. >Jul 14 00:00:00 fedora systemd[1]: Reached target timers.target - Timer Units. >Jul 14 00:00:00 fedora systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. >Jul 14 00:00:00 fedora systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. >Jul 14 00:00:00 fedora systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). >Jul 14 00:00:00 fedora systemd[1]: Listening on systemd-journald.socket - Journal Socket. >Jul 14 00:00:00 fedora systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. >Jul 14 00:00:00 fedora systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. >Jul 14 00:00:00 fedora systemd[1]: Reached target sockets.target - Socket Units. >Jul 14 00:00:00 fedora systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... >Jul 14 00:00:00 fedora systemd[1]: Starting systemd-journald.service - Journal Service... >Jul 14 00:00:00 fedora systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... >Jul 14 00:00:00 fedora systemd[1]: Starting systemd-sysusers.service - Create System Users... >Jul 14 00:00:00 fedora systemd[1]: Starting systemd-vconsole-setup.service - Setup Virtual Console... >Jul 14 00:00:00 fedora systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
>Jul 14 00:00:00 fedora kernel: Asymmetric key parser 'pkcs8' registered >Jul 14 00:00:00 fedora systemd[1]: Finished systemd-vconsole-setup.service - Setup Virtual Console. >Jul 14 00:00:00 fedora systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... >Jul 14 00:00:00 fedora systemd[1]: Finished systemd-sysusers.service - Create System Users. >Jul 14 00:00:00 fedora systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... >Jul 14 00:00:00 fedora systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. >Jul 14 00:00:00 fedora kernel: audit: type=1130 audit(1657756800.889:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Jul 14 00:00:00 fedora systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... >Jul 14 00:00:00 fedora systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. >Jul 14 00:00:00 fedora kernel: audit: type=1130 audit(1657756800.909:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Jul 14 00:00:00 fedora systemd-journald[220]: Journal started >Jul 14 00:00:00 fedora systemd-journald[220]: Runtime Journal (/run/log/journal/5dadadc7233f41f9b30125b1f33fbfef) is 8.0M, max 76.2M, 68.2M free. >Jul 14 00:00:00 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Jul 14 00:00:00 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' >Jul 14 00:00:00 fedora systemd[1]: Started systemd-journald.service - Journal Service. >Jul 14 00:00:00 fedora systemd-sysusers[222]: Creating group 'nobody' with GID 65534. >Jul 14 00:00:00 fedora systemd-sysusers[222]: Creating group 'users' with GID 100. >Jul 14 00:00:00 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Jul 14 00:00:00 fedora systemd-sysusers[222]: Creating group 'root' with GID 999. >Jul 14 00:00:00 fedora systemd-sysusers[222]: Creating group 'dbus' with GID 998. >Jul 14 00:00:00 fedora systemd-sysusers[222]: Creating user 'dbus' (System Message Bus) with UID 998 and GID 998. >Jul 14 00:00:00 fedora systemd-modules-load[221]: Inserted module 'pkcs8_key_parser' >Jul 14 00:00:00 fedora systemd-modules-load[221]: Inserted module 'ip_tables' >Jul 14 00:00:00 fedora kernel: audit: type=1130 audit(1657756800.939:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Jul 14 00:00:00 fedora systemd-modules-load[221]: Inserted module 'ip6_tables' >Jul 14 00:00:00 fedora systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... >Jul 14 00:00:00 fedora systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. >Jul 14 00:00:00 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Jul 14 00:00:00 fedora kernel: audit: type=1130 audit(1657756800.949:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' >Jul 14 00:00:00 fedora systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. >Jul 14 00:00:00 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Jul 14 00:00:00 fedora kernel: audit: type=1130 audit(1657756800.969:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Jul 14 00:00:00 fedora systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... >Jul 14 00:00:01 fedora systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. >Jul 14 00:00:01 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Jul 14 00:00:01 fedora kernel: audit: type=1130 audit(1657756801.009:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Jul 14 00:00:01 fedora dracut-cmdline[242]: dracut-36.20220810.0 (IoT Edition) dracut-056-1.fc36 >Jul 14 00:00:01 fedora dracut-cmdline[242]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=(hd0,msdos2)/ostree/fedora-iot-e8397e68f7793d7e9cf907a131f5ac7c5f92558899e0a5bda2335a6bd44d8a19/vmlinuz-5.18.16-200.fc36.aarch64 modprobe.blacklist=vc4 console=tty0 root=UUID=433fb0a4-33dc-42c7-9208-3270ba010d05 ostree=/ostree/boot.0/fedora-iot/e8397e68f7793d7e9cf907a131f5ac7c5f92558899e0a5bda2335a6bd44d8a19/0 >Jul 14 00:00:01 fedora systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Jul 14 00:00:01 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 00:00:01 fedora systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 14 00:00:01 fedora kernel: audit: type=1130 audit(1657756801.559:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 00:00:01 fedora systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 14 00:00:01 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 00:00:01 fedora audit: BPF prog-id=21 op=LOAD
Jul 14 00:00:01 fedora audit: BPF prog-id=22 op=LOAD
Jul 14 00:00:01 fedora audit: BPF prog-id=23 op=LOAD
Jul 14 00:00:01 fedora kernel: audit: type=1130 audit(1657756801.739:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 00:00:01 fedora kernel: audit: type=1334 audit(1657756801.739:10): prog-id=21 op=LOAD
Jul 14 00:00:01 fedora kernel: audit: type=1334 audit(1657756801.739:11): prog-id=22 op=LOAD
Jul 14 00:00:01 fedora systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 14 00:00:01 fedora systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 14 00:00:01 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 00:00:01 fedora systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 14 00:00:01 fedora systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 14 00:00:01 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 00:00:01 fedora systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 14 00:00:02 fedora systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 14 00:00:02 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 00:00:02 fedora systemd[1]: Reached target sysinit.target - System Initialization.
Jul 14 00:00:02 fedora systemd[1]: Reached target basic.target - Basic System.
Jul 14 00:00:02 fedora systemd[1]: nm-initrd.service was skipped because of a failed condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jul 14 00:00:02 fedora systemd[1]: Reached target network.target - Network.
Jul 14 00:00:02 fedora systemd[1]: nm-wait-online-initrd.service was skipped because of a failed condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jul 14 00:00:02 fedora systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 14 00:00:02 fedora kernel: usb_phy_generic phy: supply vcc not found, using dummy regulator
Jul 14 00:00:02 fedora kernel: usb_phy_generic phy: dummy supplies not allowed for exclusive requests
Jul 14 00:00:02 fedora kernel: sdhci: Secure Digital Host Controller Interface driver
Jul 14 00:00:02 fedora kernel: sdhci: Copyright(c) Pierre Ossman
Jul 14 00:00:02 fedora kernel: brcm-pcie fd500000.pcie: host bridge /scb/pcie@7d500000 ranges:
Jul 14 00:00:02 fedora kernel: brcm-pcie fd500000.pcie: No bus range found for /scb/pcie@7d500000, using [bus 00-ff]
Jul 14 00:00:02 fedora kernel: brcm-pcie fd500000.pcie: MEM 0x0600000000..0x063fffffff -> 0x00c0000000
Jul 14 00:00:02 fedora kernel: brcm-pcie fd500000.pcie: IB MEM 0x0000000000..0x00bfffffff -> 0x0400000000
Jul 14 00:00:02 fedora kernel: sdhci-pltfm: SDHCI platform and OF driver helper
Jul 14 00:00:02 fedora kernel: brcm-pcie fd500000.pcie: link up, 5.0 GT/s PCIe x1 (SSC)
Jul 14 00:00:02 fedora kernel: brcm-pcie fd500000.pcie: PCI host bridge to bus 0000:00
Jul 14 00:00:02 fedora kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 14 00:00:02 fedora kernel: pci_bus 0000:00: root bus resource [mem 0x600000000-0x63fffffff] (bus address [0xc0000000-0xffffffff])
Jul 14 00:00:02 fedora kernel: pci 0000:00:00.0: [14e4:2711] type 01 class 0x060400
Jul 14 00:00:02 fedora kernel: pci 0000:00:00.0: PME# supported from D0 D3hot
Jul 14 00:00:02 fedora kernel: pci 0000:01:00.0: [1106:3483] type 00 class 0x0c0330
Jul 14 00:00:02 fedora kernel: pci 0000:01:00.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Jul 14 00:00:02 fedora kernel: pci 0000:01:00.0: PME# supported from D0 D3cold
Jul 14 00:00:02 fedora kernel: mmc1: SDHCI controller on fe300000.mmcnr [fe300000.mmcnr] using PIO
Jul 14 00:00:02 fedora kernel: pci 0000:00:00.0: BAR 14: assigned [mem 0x600000000-0x6000fffff]
Jul 14 00:00:02 fedora kernel: pci 0000:01:00.0: BAR 0: assigned [mem 0x600000000-0x600000fff 64bit]
Jul 14 00:00:02 fedora kernel: pci 0000:00:00.0: PCI bridge to [bus 01]
Jul 14 00:00:02 fedora kernel: pci 0000:00:00.0: bridge window [mem 0x600000000-0x6000fffff]
Jul 14 00:00:02 fedora kernel: pcieport 0000:00:00.0: enabling device (0000 -> 0002)
Jul 14 00:00:02 fedora kernel: pcieport 0000:00:00.0: PME: Signaling with IRQ 56
Jul 14 00:00:02 fedora kernel: bcmgenet fd580000.ethernet: GENET 5.0 EPHY: 0x0000
Jul 14 00:00:02 fedora kernel: pcieport 0000:00:00.0: AER: enabled with IRQ 56
Jul 14 00:00:02 fedora kernel: xhci_hcd 0000:01:00.0: enabling device (0000 -> 0002)
Jul 14 00:00:02 fedora kernel: xhci_hcd 0000:01:00.0: xHCI Host Controller
Jul 14 00:00:02 fedora kernel: xhci_hcd 0000:01:00.0: new USB bus registered, assigned bus number 1
Jul 14 00:00:02 fedora kernel: xhci_hcd 0000:01:00.0: hcc params 0x002841eb hci version 0x100 quirks 0x0000040000000890
Jul 14 00:00:02 fedora kernel: xhci_hcd 0000:01:00.0: xHCI Host Controller
Jul 14 00:00:02 fedora kernel: xhci_hcd 0000:01:00.0: new USB bus registered, assigned bus number 2
Jul 14 00:00:02 fedora kernel: xhci_hcd 0000:01:00.0: Host supports USB 3.0 SuperSpeed
Jul 14 00:00:02 fedora kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0002, bcdDevice= 5.18
Jul 14 00:00:02 fedora kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jul 14 00:00:02 fedora kernel: usb usb1: Product: xHCI Host Controller
Jul 14 00:00:02 fedora kernel: usb usb1: Manufacturer: Linux 5.18.16-200.fc36.aarch64 xhci-hcd
Jul 14 00:00:02 fedora kernel: usb usb1: SerialNumber: 0000:01:00.0
Jul 14 00:00:02 fedora kernel: hub 1-0:1.0: USB hub found
Jul 14 00:00:02 fedora kernel: hub 1-0:1.0: 1 port detected
Jul 14 00:00:02 fedora kernel: usb usb2: New USB device found, idVendor=1d6b, idProduct=0003, bcdDevice= 5.18
Jul 14 00:00:02 fedora kernel: usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jul 14 00:00:02 fedora kernel: usb usb2: Product: xHCI Host Controller
Jul 14 00:00:02 fedora kernel: usb usb2: Manufacturer: Linux 5.18.16-200.fc36.aarch64 xhci-hcd
Jul 14 00:00:02 fedora kernel: usb usb2: SerialNumber: 0000:01:00.0
Jul 14 00:00:02 fedora kernel: hub 2-0:1.0: USB hub found
Jul 14 00:00:02 fedora kernel: hub 2-0:1.0: 4 ports detected
Jul 14 00:00:02 fedora kernel: mmc1: new high speed SDIO card at address 0001
Jul 14 00:00:02 fedora kernel: bcm2835-wdt bcm2835-wdt: Poweroff handler already present!
Jul 14 00:00:02 fedora kernel: bcm2835-wdt bcm2835-wdt: Broadcom BCM2835 watchdog timer
Jul 14 00:00:03 fedora kernel: unimac-mdio unimac-mdio.-19: Broadcom UniMAC MDIO bus
Jul 14 00:00:03 fedora kernel: dwc2 fe980000.usb: supply vusb_d not found, using dummy regulator
Jul 14 00:00:03 fedora kernel: dwc2 fe980000.usb: supply vusb_a not found, using dummy regulator
Jul 14 00:00:03 fedora kernel: mmc0: SDHCI controller on fe340000.mmc [fe340000.mmc] using ADMA
Jul 14 00:00:03 fedora kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Jul 14 00:00:03 fedora kernel: mmc0: new ultra high speed DDR50 SDXC card at address aaaa
Jul 14 00:00:03 fedora systemd-udevd[407]: Using default interface naming scheme 'v250'.
Jul 14 00:00:03 fedora kernel: dwc2 fe980000.usb: EPs: 8, dedicated fifos, 4080 entries in SPRAM
Jul 14 00:00:03 fedora kernel: dwc2 fe980000.usb: DWC OTG Controller
Jul 14 00:00:03 fedora kernel: dwc2 fe980000.usb: new USB bus registered, assigned bus number 3
Jul 14 00:00:03 fedora kernel: dwc2 fe980000.usb: irq 24, io mem 0xfe980000
Jul 14 00:00:03 fedora kernel: usb usb3: New USB device found, idVendor=1d6b, idProduct=0002, bcdDevice= 5.18
Jul 14 00:00:03 fedora kernel: usb usb3: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jul 14 00:00:03 fedora kernel: usb usb3: Product: DWC OTG Controller
Jul 14 00:00:03 fedora kernel: usb usb3: Manufacturer: Linux 5.18.16-200.fc36.aarch64 dwc2_hsotg
Jul 14 00:00:03 fedora kernel: usb usb3: SerialNumber: fe980000.usb
Jul 14 00:00:03 fedora kernel: hub 3-0:1.0: USB hub found
Jul 14 00:00:03 fedora kernel: hub 3-0:1.0: 1 port detected
Jul 14 00:00:03 fedora kernel: mmcblk0: mmc0:aaaa SN64G 59.5 GiB
Jul 14 00:00:03 fedora kernel: mmcblk0: p1 p2 p3
Jul 14 00:00:03 fedora kernel: usb 1-1: New USB device found, idVendor=2109, idProduct=3431, bcdDevice= 4.21
Jul 14 00:00:03 fedora kernel: usb 1-1: New USB device strings: Mfr=0, Product=1, SerialNumber=0
Jul 14 00:00:03 fedora kernel: usb 1-1: Product: USB2.0 Hub
Jul 14 00:00:03 fedora kernel: hub 1-1:1.0: USB hub found
Jul 14 00:00:03 fedora kernel: hub 1-1:1.0: 4 ports detected
Jul 14 00:00:03 fedora systemd[1]: Found device dev-disk-by\x2duuid-433fb0a4\x2d33dc\x2d42c7\x2d9208\x2d3270ba010d05.device - /dev/disk/by-uuid/433fb0a4-33dc-42c7-9208-3270ba010d05.
Jul 14 00:00:03 fedora systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 14 00:00:03 fedora systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 14 00:00:03 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 00:00:03 fedora systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 14 00:00:03 fedora systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 14 00:00:03 fedora systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 14 00:00:03 fedora systemd[1]: dracut-pre-mount.service - dracut pre-mount hook was skipped because all trigger condition checks failed.
Jul 14 00:00:03 fedora systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-uuid/433fb0a4-33dc-42c7-9208-3270ba010d05...
Jul 14 00:00:03 fedora systemd-fsck[428]: /dev/mmcblk0p3: clean, 843419/3637760 files, 12831628/15201280 blocks
Jul 14 00:00:03 fedora systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-uuid/433fb0a4-33dc-42c7-9208-3270ba010d05.
Jul 14 00:00:03 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 00:00:03 fedora kernel: usb 1-1.1: new full-speed USB device number 3 using xhci_hcd
Jul 14 00:00:03 fedora systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 14 00:00:03 fedora systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 14 00:00:03 fedora kernel: EXT4-fs (mmcblk0p3): mounted filesystem with ordered data mode. Quota mode: none.
Jul 14 00:00:03 fedora systemd[1]: Starting ostree-prepare-root.service - OSTree Prepare OS/...
Jul 14 00:00:03 fedora ostree-prepare-root[433]: preparing sysroot at /sysroot
Jul 14 00:00:03 fedora ostree-prepare-root[433]: Resolved OSTree target to: /sysroot/ostree/deploy/fedora-iot/deploy/32990b844ce9eb4bba708bcba295a2ebab368a1ff454076b06fe7a438d9d99e0.0
Jul 14 00:00:03 fedora ostree-prepare-root[433]: filesystem at /sysroot currently writable: 0
Jul 14 00:00:03 fedora ostree-prepare-root[433]: sysroot.readonly configuration value: 0
Jul 14 00:00:03 fedora kernel: usb 1-1.1: New USB device found, idVendor=0451, idProduct=16a8, bcdDevice= 0.09
Jul 14 00:00:03 fedora kernel: usb 1-1.1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
Jul 14 00:00:03 fedora kernel: usb 1-1.1: Product: TI CC2531 USB CDC
Jul 14 00:00:03 fedora kernel: usb 1-1.1: Manufacturer: Texas Instruments
Jul 14 00:00:03 fedora kernel: usb 1-1.1: SerialNumber: __0X00124B001CCDE3F2
Jul 14 00:00:03 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ostree-prepare-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 00:00:03 fedora systemd[1]: sysroot-ostree-deploy-fedora\x2diot-deploy-32990b844ce9eb4bba708bcba295a2ebab368a1ff454076b06fe7a438d9d99e0.0.mount: Deactivated successfully.
Jul 14 00:00:03 fedora systemd[1]: Finished ostree-prepare-root.service - OSTree Prepare OS/.
Jul 14 00:00:03 fedora systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 14 00:00:03 fedora systemd[1]: Starting initrd-parse-etc.service - Reload Configuration from the Real Root...
Jul 14 00:00:03 fedora systemd[1]: Reloading.
Jul 14 00:00:04 fedora audit: BPF prog-id=24 op=LOAD
Jul 14 00:00:04 fedora audit: BPF prog-id=0 op=UNLOAD
Jul 14 00:00:04 fedora audit: BPF prog-id=25 op=LOAD
Jul 14 00:00:04 fedora audit: BPF prog-id=26 op=LOAD
Jul 14 00:00:04 fedora audit: BPF prog-id=0 op=UNLOAD
Jul 14 00:00:04 fedora audit: BPF prog-id=0 op=UNLOAD
Jul 14 00:00:04 fedora audit: BPF prog-id=27 op=LOAD
Jul 14 00:00:04 fedora audit: BPF prog-id=0 op=UNLOAD
Jul 14 00:00:04 fedora audit: BPF prog-id=28 op=LOAD
Jul 14 00:00:04 fedora audit: BPF prog-id=29 op=LOAD
Jul 14 00:00:04 fedora audit: BPF prog-id=0 op=UNLOAD
Jul 14 00:00:04 fedora audit: BPF prog-id=0 op=UNLOAD
Jul 14 00:00:04 fedora systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 14 00:00:04 fedora systemd[1]: Finished initrd-parse-etc.service - Reload Configuration from the Real Root.
Jul 14 00:00:04 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 00:00:04 fedora audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 00:00:04 fedora systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 14 00:00:04 fedora systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 14 00:00:04 fedora systemd[1]: Starting dracut-mount.service - dracut mount hook...
Jul 14 00:00:04 fedora systemd[1]: Finished dracut-mount.service - dracut mount hook.
Jul 14 00:00:04 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 00:00:04 fedora systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 14 00:00:05 fedora systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 14 00:00:05 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 00:00:05 fedora systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 14 00:00:05 fedora systemd[1]: Stopped target network.target - Network.
Jul 14 00:00:05 fedora systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 14 00:00:05 fedora systemd[1]: Stopped target timers.target - Timer Units.
Jul 14 00:00:05 fedora systemd[1]: dbus.socket: Deactivated successfully.
Jul 14 00:00:05 fedora systemd[1]: Closed dbus.socket - D-Bus System Message Bus Socket.
Jul 14 00:00:05 fedora systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 14 00:00:05 fedora systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 14 00:00:05 fedora audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 00:00:05 fedora systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 14 00:00:05 fedora systemd[1]: Stopped target basic.target - Basic System.
Jul 14 00:00:05 fedora systemd[1]: Stopped target ignition-subsequent.target - Subsequent (Not Ignition) boot complete.
Jul 14 00:00:05 fedora systemd[1]: Stopped target ignition-diskful-subsequent.target - Ignition Subsequent Boot Disk Setup.
Jul 14 00:00:05 fedora systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 14 00:00:05 fedora systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 14 00:00:05 fedora systemd[1]: Stopped target paths.target - Path Units.
Jul 14 00:00:05 fedora systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 14 00:00:05 fedora systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 14 00:00:05 fedora systemd[1]: Stopped target slices.target - Slice Units.
Jul 14 00:00:05 fedora systemd[1]: Stopped target sockets.target - Socket Units.
Jul 14 00:00:05 fedora systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 14 00:00:05 fedora systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 14 00:00:05 fedora systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 14 00:00:05 fedora systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 14 00:00:05 fedora systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 14 00:00:05 fedora systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 14 00:00:05 fedora systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 14 00:00:05 fedora systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 14 00:00:05 fedora systemd[1]: Stopped target swap.target - Swaps.
Jul 14 00:00:05 fedora systemd[1]: dracut-mount.service: Deactivated successfully.
Jul 14 00:00:05 fedora systemd[1]: Stopped dracut-mount.service - dracut mount hook.
Jul 14 00:00:05 fedora audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 00:00:05 fedora systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 14 00:00:05 fedora systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 14 00:00:05 fedora audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 00:00:05 fedora systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 14 00:00:05 fedora systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 14 00:00:05 fedora audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 00:00:05 fedora systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 14 00:00:05 fedora systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 14 00:00:05 fedora audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 00:00:05 fedora systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 14 00:00:05 fedora systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 14 00:00:05 fedora audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 00:00:05 fedora systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 14 00:00:05 fedora systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 14 00:00:05 fedora audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 00:00:05 fedora systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 14 00:00:05 fedora systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 14 00:00:05 fedora audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 00:00:05 fedora systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 14 00:00:05 fedora systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 14 00:00:05 fedora systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 14 00:00:05 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 00:00:05 fedora audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 00:00:05 fedora systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 14 00:00:05 fedora systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 14 00:00:05 fedora audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 00:00:05 fedora systemd[1]: systemd-udevd.service: Consumed 3.449s CPU time.
Jul 14 00:00:05 fedora systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 14 00:00:05 fedora systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 14 00:00:05 fedora systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 14 00:00:05 fedora systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 14 00:00:05 fedora systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 14 00:00:05 fedora systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 14 00:00:05 fedora audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 00:00:05 fedora systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 14 00:00:05 fedora systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 14 00:00:05 fedora audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 00:00:05 fedora systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 14 00:00:05 fedora systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 14 00:00:05 fedora audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 00:00:05 fedora systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 14 00:00:05 fedora systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 14 00:00:05 fedora systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 14 00:00:05 fedora audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 00:00:05 fedora systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 14 00:00:05 fedora systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 14 00:00:05 fedora audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 00:00:05 fedora systemd[1]: systemd-sysusers.service: Deactivated successfully.
Jul 14 00:00:05 fedora systemd[1]: Stopped systemd-sysusers.service - Create System Users.
Jul 14 00:00:05 fedora audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 00:00:05 fedora systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 14 00:00:05 fedora systemd[1]: Stopped systemd-vconsole-setup.service - Setup Virtual Console.
Jul 14 00:00:05 fedora audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 00:00:05 fedora systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Jul 14 00:00:05 fedora audit: BPF prog-id=0 op=UNLOAD
Jul 14 00:00:05 fedora systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 14 00:00:05 fedora systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 14 00:00:05 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 00:00:05 fedora audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 00:00:05 fedora systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 14 00:00:05 fedora systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 14 00:00:05 fedora systemd[1]: Switching root.
Jul 14 00:00:05 fedora systemd-journald[220]: Journal stopped
Jul 14 00:00:10 fedora systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
Jul 14 00:00:10 fedora kernel: kauditd_printk_skb: 44 callbacks suppressed
Jul 14 00:00:10 fedora kernel: audit: type=1404 audit(1657756806.559:56): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Jul 14 00:00:10 fedora kernel: SELinux: policy capability network_peer_controls=1
Jul 14 00:00:10 fedora kernel: SELinux: policy capability open_perms=1
Jul 14 00:00:10 fedora kernel: SELinux: policy capability extended_socket_class=1
Jul 14 00:00:10 fedora kernel: SELinux: policy capability always_check_network=0
Jul 14 00:00:10 fedora kernel: SELinux: policy capability cgroup_seclabel=1
Jul 14 00:00:10 fedora kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 14 00:00:10 fedora kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Jul 14 00:00:10 fedora kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 14 00:00:10 fedora kernel: audit: type=1403 audit(1657756807.179:57): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 14 00:00:10 fedora systemd[1]: Successfully loaded SELinux policy in 632.702ms.
Jul 14 00:00:10 fedora systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 271.667ms.
Jul 14 00:00:10 fedora systemd[1]: systemd v250.8-1.fc36 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 +PWQUALITY +P11KIT +QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD +BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul 14 00:00:10 fedora systemd[1]: Detected architecture arm64.
Jul 14 00:00:10 fedora kernel: audit: type=1334 audit(1657756807.709:58): prog-id=30 op=LOAD
Jul 14 00:00:10 fedora kernel: audit: type=1334 audit(1657756807.709:59): prog-id=0 op=UNLOAD
Jul 14 00:00:10 fedora kernel: audit: type=1334 audit(1657756807.709:60): prog-id=31 op=LOAD
Jul 14 00:00:10 fedora kernel: audit: type=1334 audit(1657756807.709:61): prog-id=0 op=UNLOAD
Jul 14 00:00:10 fedora kernel: audit: type=1334 audit(1657756807.739:62): prog-id=32 op=LOAD
Jul 14 00:00:10 fedora kernel: audit: type=1334 audit(1657756807.739:63): prog-id=0 op=UNLOAD
Jul 14 00:00:10 fedora kernel: audit: type=1334 audit(1657756807.739:64): prog-id=33 op=LOAD
Jul 14 00:00:10 fedora kernel: audit: type=1334 audit(1657756807.739:65): prog-id=0 op=UNLOAD
Jul 14 00:00:10 fedora systemd[1]: Failed to link BPF program. Assuming BPF is not available
Jul 14 00:00:10 fedora kernel: zram: Added device: zram0
Jul 14 00:00:10 fedora systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 14 00:00:10 fedora systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 14 00:00:10 fedora systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 14 00:00:10 fedora systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 14 00:00:10 fedora systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 14 00:00:10 fedora systemd[1]: Created slice system-sshd\x2dkeygen.slice - Slice /system/sshd-keygen.
Jul 14 00:00:10 fedora systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 14 00:00:10 fedora systemd[1]: Created slice system-systemd\x2dzram\x2dsetup.slice - Slice /system/systemd-zram-setup.
Jul 14 00:00:10 fedora systemd[1]: Created slice user.slice - User and Session Slice.
Jul 14 00:00:10 fedora systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 14 00:00:10 fedora systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 14 00:00:10 fedora systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 14 00:00:10 fedora systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 14 00:00:10 fedora systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 14 00:00:10 fedora systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 14 00:00:10 fedora systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 14 00:00:10 fedora systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 14 00:00:10 fedora systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 14 00:00:10 fedora systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 14 00:00:10 fedora systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 14 00:00:10 fedora systemd[1]: Reached target slices.target - Slice Units.
Jul 14 00:00:10 fedora systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 14 00:00:10 fedora systemd[1]: Listening on dm-event.socket - Device-mapper event daemon FIFOs.
Jul 14 00:00:10 fedora systemd[1]: Listening on lvm2-lvmpolld.socket - LVM2 poll daemon socket.
Jul 14 00:00:10 fedora systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 14 00:00:10 fedora systemd[1]: Listening on systemd-initctl.socket - initctl Compatibility Named Pipe.
Jul 14 00:00:10 fedora systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket.
Jul 14 00:00:10 fedora systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
>Jul 14 00:00:10 fedora systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
>Jul 14 00:00:10 fedora systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
>Jul 14 00:00:10 fedora systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
>Jul 14 00:00:10 fedora systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
>Jul 14 00:00:10 fedora systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
>Jul 14 00:00:10 fedora systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
>Jul 14 00:00:10 fedora systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
>Jul 14 00:00:10 fedora systemd[1]: Starting lvm2-monitor.service - Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
>Jul 14 00:00:10 fedora systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
>Jul 14 00:00:10 fedora systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
>Jul 14 00:00:10 fedora systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
>Jul 14 00:00:10 fedora systemd[1]: ostree-prepare-root.service: Deactivated successfully.
>Jul 14 00:00:10 fedora systemd[1]: Stopped ostree-prepare-root.service - OSTree Prepare OS/.
>Jul 14 00:00:10 fedora systemd[1]: Stopped systemd-journald.service - Journal Service.
>Jul 14 00:00:10 fedora systemd[1]: Starting systemd-journald.service - Journal Service...
>Jul 14 00:00:10 fedora systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
>Jul 14 00:00:10 fedora systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
>Jul 14 00:00:10 fedora systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
>Jul 14 00:00:10 fedora systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because all trigger condition checks failed.
>Jul 14 00:00:10 fedora systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
>Jul 14 00:00:10 fedora systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
>Jul 14 00:00:10 fedora systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
>Jul 14 00:00:10 fedora systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
>Jul 14 00:00:10 fedora systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
>Jul 14 00:00:10 fedora systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
>Jul 14 00:00:10 fedora kernel: fuse: init (API version 7.36)
>Jul 14 00:00:10 fedora systemd[1]: modprobe@configfs.service: Deactivated successfully.
>Jul 14 00:00:10 fedora systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
>Jul 14 00:00:10 fedora systemd[1]: modprobe@drm.service: Deactivated successfully.
>Jul 14 00:00:10 fedora systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
>Jul 14 00:00:10 fedora systemd[1]: modprobe@fuse.service: Deactivated successfully.
>Jul 14 00:00:10 fedora systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
>Jul 14 00:00:10 fedora systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
>Jul 14 00:00:10 fedora systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
>Jul 14 00:00:10 fedora kernel: EXT4-fs (mmcblk0p3): re-mounted. Quota mode: none.
>Jul 14 00:00:10 fedora systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
>Jul 14 00:00:10 fedora systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
>Jul 14 00:00:10 fedora systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
>Jul 14 00:00:10 fedora systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
>Jul 14 00:00:10 fedora systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
>Jul 14 00:00:10 fedora systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
>Jul 14 00:00:10 fedora systemd[1]: systemd-firstboot.service - First Boot Wizard was skipped because of a failed condition check (ConditionFirstBoot=yes).
>Jul 14 00:00:10 fedora systemd[1]: systemd-hwdb-update.service - Rebuild Hardware Database was skipped because of a failed condition check (ConditionNeedsUpdate=/etc).
>Jul 14 00:00:10 fedora systemd[1]: systemd-sysusers.service - Create System Users was skipped because of a failed condition check (ConditionNeedsUpdate=/etc).
>Jul 14 00:00:10 fedora systemd-journald[545]: Journal started
>Jul 14 00:00:10 fedora systemd-journald[545]: Runtime Journal (/run/log/journal/f6bc28022d7945348b2f18008b67b029) is 8.0M, max 76.2M, 68.2M free.
>Jul 14 00:00:06 fedora audit: MAC_STATUS enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
>Jul 14 00:00:07 fedora audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
>Jul 14 00:00:07 fedora audit: BPF prog-id=30 op=LOAD
>Jul 14 00:00:07 fedora audit: BPF prog-id=0 op=UNLOAD
>Jul 14 00:00:07 fedora audit: BPF prog-id=31 op=LOAD
>Jul 14 00:00:07 fedora audit: BPF prog-id=0 op=UNLOAD
>Jul 14 00:00:07 fedora audit: BPF prog-id=32 op=LOAD
>Jul 14 00:00:07 fedora audit: BPF prog-id=0 op=UNLOAD
>Jul 14 00:00:07 fedora audit: BPF prog-id=33 op=LOAD
>Jul 14 00:00:07 fedora audit: BPF prog-id=0 op=UNLOAD
>Jul 14 00:00:07 fedora audit: BPF prog-id=34 op=LOAD
>Jul 14 00:00:07 fedora audit: BPF prog-id=0 op=UNLOAD
>Jul 14 00:00:07 fedora audit: BPF prog-id=35 op=LOAD
>Jul 14 00:00:07 fedora audit: BPF prog-id=36 op=LOAD
>Jul 14 00:00:07 fedora audit: BPF prog-id=0 op=UNLOAD
>Jul 14 00:00:07 fedora audit: BPF prog-id=0 op=UNLOAD
>Jul 14 00:00:07 fedora audit: BPF prog-id=37 op=LOAD
>Jul 14 00:00:07 fedora audit: BPF prog-id=0 op=UNLOAD
>Jul 14 00:00:07 fedora audit: BPF prog-id=38 op=LOAD
>Jul 14 00:00:07 fedora audit: BPF prog-id=0 op=UNLOAD
>Jul 14 00:00:07 fedora audit: BPF prog-id=39 op=LOAD
>Jul 14 00:00:07 fedora audit: BPF prog-id=0 op=UNLOAD
>Jul 14 00:00:07 fedora audit: BPF prog-id=40 op=LOAD
>Jul 14 00:00:07 fedora audit: BPF prog-id=0 op=UNLOAD
>Jul 14 00:00:07 fedora audit: BPF prog-id=41 op=LOAD
>Jul 14 00:00:07 fedora audit: BPF prog-id=42 op=LOAD
>Jul 14 00:00:07 fedora audit: BPF prog-id=0 op=UNLOAD
>Jul 14 00:00:07 fedora audit: BPF prog-id=43 op=LOAD
>Jul 14 00:00:07 fedora audit: BPF prog-id=0 op=UNLOAD
>Jul 14 00:00:07 fedora audit: BPF prog-id=0 op=UNLOAD
>Jul 14 00:00:07 fedora audit: BPF prog-id=44 op=LOAD
>Jul 14 00:00:07 fedora audit: BPF prog-id=0 op=UNLOAD
>Jul 14 00:00:08 fedora audit: BPF prog-id=45 op=LOAD
>Jul 14 00:00:08 fedora audit: BPF prog-id=0 op=UNLOAD
>Jul 14 00:00:08 fedora audit: BPF prog-id=46 op=LOAD
>Jul 14 00:00:08 fedora audit: BPF prog-id=0 op=UNLOAD
>Jul 14 00:00:10 fedora audit: BPF prog-id=47 op=LOAD
>Jul 14 00:00:10 fedora audit: BPF prog-id=0 op=UNLOAD
>Jul 14 00:00:10 fedora audit: BPF prog-id=48 op=LOAD
>Jul 14 00:00:10 fedora audit: BPF prog-id=49 op=LOAD
>Jul 14 00:00:10 fedora audit: BPF prog-id=0 op=UNLOAD
>Jul 14 00:00:10 fedora audit: BPF prog-id=0 op=UNLOAD
>Jul 14 00:00:10 fedora audit: BPF prog-id=50 op=LOAD
>Jul 14 00:00:10 fedora audit: BPF prog-id=0 op=UNLOAD
>Jul 14 00:00:10 fedora audit: BPF prog-id=51 op=LOAD
>Jul 14 00:00:10 fedora audit: BPF prog-id=52 op=LOAD
>Jul 14 00:00:10 fedora audit: BPF prog-id=0 op=UNLOAD
>Jul 14 00:00:10 fedora systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
>Jul 14 00:00:10 fedora systemd[1]: Started systemd-journald.service - Journal Service.
>Jul 14 00:00:10 fedora audit: BPF prog-id=0 op=UNLOAD
>Jul 14 00:00:10 fedora audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:10 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:10 fedora audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:10 fedora audit: BPF prog-id=0 op=UNLOAD
>Jul 14 00:00:10 fedora audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=ostree-prepare-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:10 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:10 fedora audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:10 fedora audit: BPF prog-id=53 op=LOAD
>Jul 14 00:00:10 fedora audit: BPF prog-id=54 op=LOAD
>Jul 14 00:00:10 fedora audit: BPF prog-id=55 op=LOAD
>Jul 14 00:00:10 fedora audit: BPF prog-id=0 op=UNLOAD
>Jul 14 00:00:10 fedora audit: BPF prog-id=0 op=UNLOAD
>Jul 14 00:00:10 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:10 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:10 fedora audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:10 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:10 fedora audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:10 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:10 fedora audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:10 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:10 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:10 fedora audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:syslogd_t:s0 res=1
>Jul 14 00:00:10 fedora audit[545]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffccc935e0 a2=4000 a3=ffff974c15a0 items=0 ppid=1 pid=545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:syslogd_t:s0 key=(null)
>Jul 14 00:00:10 fedora audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
>Jul 14 00:00:10 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:10 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:10 fedora systemd[1]: Queued start job for default target multi-user.target.
>Jul 14 00:00:10 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:10 fedora systemd[1]: systemd-journald.service: Deactivated successfully.
>Jul 14 00:00:10 fedora systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
>Jul 14 00:00:11 fedora systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
>Jul 14 00:00:11 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:11 fedora audit: BPF prog-id=56 op=LOAD
>Jul 14 00:00:11 fedora audit: BPF prog-id=57 op=LOAD
>Jul 14 00:00:11 fedora audit: BPF prog-id=58 op=LOAD
>Jul 14 00:00:11 fedora audit: BPF prog-id=0 op=UNLOAD
>Jul 14 00:00:11 fedora audit: BPF prog-id=0 op=UNLOAD
>Jul 14 00:00:11 fedora systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
>Jul 14 00:00:11 fedora systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
>Jul 14 00:00:11 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:11 fedora systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
>Jul 14 00:00:11 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:11 fedora kernel: kauditd_printk_skb: 72 callbacks suppressed
>Jul 14 00:00:11 fedora kernel: audit: type=1130 audit(1657756811.619:136): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:11 fedora systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
>Jul 14 00:00:11 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:11 fedora audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:11 fedora systemd[1]: modprobe@configfs.service: Deactivated successfully.
>Jul 14 00:00:11 fedora systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
>Jul 14 00:00:11 fedora kernel: audit: type=1130 audit(1657756811.959:137): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:11 fedora kernel: audit: type=1131 audit(1657756811.959:138): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:12 fedora systemd-udevd[578]: mmcblk0p1: /usr/lib/udev/rules.d/60-block-scheduler.rules:5 Failed to write ATTR{/sys/devices/platform/emmc2bus/fe340000.mmc/mmc_host/mmc0/mmc0:aaaa/block/mmcblk0/mmcblk0p1/queue/scheduler}, ignoring: No such file or directory
>Jul 14 00:00:12 fedora systemd-udevd[559]: mmcblk0p3: /usr/lib/udev/rules.d/60-block-scheduler.rules:5 Failed to write ATTR{/sys/devices/platform/emmc2bus/fe340000.mmc/mmc_host/mmc0/mmc0:aaaa/block/mmcblk0/mmcblk0p3/queue/scheduler}, ignoring: No such file or directory
>Jul 14 00:00:12 fedora kernel: iproc-rng200 fe104000.rng: hwrng registered
>Jul 14 00:00:12 fedora systemd-udevd[575]: Using default interface naming scheme 'v250'.
>Jul 14 00:00:12 fedora systemd-udevd[580]: mmcblk0p2: /usr/lib/udev/rules.d/60-block-scheduler.rules:5 Failed to write ATTR{/sys/devices/platform/emmc2bus/fe340000.mmc/mmc_host/mmc0/mmc0:aaaa/block/mmcblk0/mmcblk0p2/queue/scheduler}, ignoring: No such file or directory
>Jul 14 00:00:12 fedora systemd[1]: Found device dev-zram0.device - /dev/zram0.
>Jul 14 00:00:12 fedora systemd[1]: Starting systemd-zram-setup@zram0.service - Create swap on /dev/zram0...
>Jul 14 00:00:12 fedora systemd[1]: Reached target usb-gadget.target - Hardware activated USB gadget.
>Jul 14 00:00:12 fedora kernel: zram0: detected capacity change from 0 to 7802880
>Jul 14 00:00:12 fedora zram-generator[590]: Setting up swapspace version 1, size = 3.7 GiB (3995070464 bytes)
>Jul 14 00:00:12 fedora zram-generator[590]: LABEL=zram0, UUID=d5ef8a6a-b52d-457f-b4e2-cf7bf0d35975
>Jul 14 00:00:12 fedora systemd-makefs[589]: /dev/zram0 successfully formatted as swap (label "zram0", uuid d5ef8a6a-b52d-457f-b4e2-cf7bf0d35975)
>Jul 14 00:00:12 fedora kernel: audit: type=1130 audit(1657756812.659:139): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-zram-setup@zram0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:12 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-zram-setup@zram0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:12 fedora systemd[1]: Finished systemd-zram-setup@zram0.service - Create swap on /dev/zram0.
>Jul 14 00:00:12 fedora systemd[1]: Activating swap dev-zram0.swap - Compressed Swap on /dev/zram0...
>Jul 14 00:00:13 fedora systemd[1]: Activated swap dev-zram0.swap - Compressed Swap on /dev/zram0.
>Jul 14 00:00:13 fedora systemd[1]: Reached target swap.target - Swaps.
>Jul 14 00:00:13 fedora systemd[1]: tmp.mount - Temporary Directory /tmp was skipped because of a failed condition check (ConditionPathIsSymbolicLink=!/tmp).
>Jul 14 00:00:13 fedora kernel: Adding 3901436k swap on /dev/zram0. Priority:100 extents:1 across:3901436k SSDscFS
>Jul 14 00:00:13 fedora kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
>Jul 14 00:00:13 fedora kernel: cfg80211: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
>Jul 14 00:00:13 fedora systemd[1]: Finished lvm2-monitor.service - Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
>Jul 14 00:00:13 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=lvm2-monitor comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:13 fedora systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
>Jul 14 00:00:13 fedora kernel: audit: type=1130 audit(1657756813.109:140): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=lvm2-monitor comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:13 fedora systemd[1]: var.mount: Directory /var to mount over is not empty, mounting anyway.
>Jul 14 00:00:13 fedora kernel: cdc_acm 1-1.1:1.0: ttyACM0: USB ACM device
>Jul 14 00:00:13 fedora kernel: usbcore: registered new interface driver cdc_acm
>Jul 14 00:00:13 fedora kernel: cdc_acm: USB Abstract Control Model driver for USB modems and ISDN adapters
>Jul 14 00:00:13 fedora systemd[1]: Mounting var.mount - /var...
>Jul 14 00:00:13 fedora systemd[1]: Starting systemd-fsck@dev-disk-by\x2duuid-DABC\x2d1692.service - File System Check on /dev/disk/by-uuid/DABC-1692...
>Jul 14 00:00:13 fedora systemd[1]: Starting systemd-fsck@dev-disk-by\x2duuid-a8e09fbc\x2daffa\x2d4db5\x2d8162\x2d17d0c4bd7a57.service - File System Check on /dev/disk/by-uuid/a8e09fbc-affa-4db5-8162-17d0c4bd7a57...
>Jul 14 00:00:13 fedora systemd[1]: Mounted var.mount - /var.
>Jul 14 00:00:13 fedora systemd[1]: Starting ostree-remount.service - OSTree Remount OS/ Bind Mounts...
>Jul 14 00:00:13 fedora ostree-remount[600]: Remounted rw: /sysroot
>Jul 14 00:00:13 fedora ostree-remount[600]: Remounted rw: /var
>Jul 14 00:00:13 fedora kernel: EXT4-fs (mmcblk0p3): re-mounted. Quota mode: none.
>Jul 14 00:00:13 fedora kernel: EXT4-fs (mmcblk0p3): re-mounted. Quota mode: none.
>Jul 14 00:00:13 fedora systemd[1]: Finished ostree-remount.service - OSTree Remount OS/ Bind Mounts.
>Jul 14 00:00:13 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=ostree-remount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:13 fedora systemd[1]: Listening on systemd-rfkill.socket - Load/Save RF Kill Switch Status /dev/rfkill Watch.
>Jul 14 00:00:13 fedora kernel: audit: type=1130 audit(1657756813.239:141): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=ostree-remount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:13 fedora systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
>Jul 14 00:00:13 fedora systemd[1]: Starting systemd-random-seed.service - Load/Save Random Seed...
>Jul 14 00:00:13 fedora kernel: brcmfmac: brcmf_fw_alloc_request: using brcm/brcmfmac43455-sdio for chip BCM4345/6
>Jul 14 00:00:13 fedora kernel: usbcore: registered new interface driver brcmfmac
>Jul 14 00:00:13 fedora kernel: brcmfmac mmc1:0001:1: Direct firmware load for brcm/brcmfmac43455-sdio.raspberrypi,4-model-b.bin failed with error -2
>Jul 14 00:00:13 fedora systemd-fsck[616]: Cannot initialize conversion from codepage 850 to UTF-8: Invalid argument
>Jul 14 00:00:13 fedora systemd-fsck[616]: Cannot initialize conversion from UTF-8 to codepage 850: Invalid argument
>Jul 14 00:00:13 fedora systemd-fsck[616]: Using internal CP850 conversion table
>Jul 14 00:00:13 fedora systemd-journald[545]: Time spent on flushing to /var/log/journal/f6bc28022d7945348b2f18008b67b029 is 44.980ms for 857 entries.
>Jul 14 00:00:13 fedora systemd-journald[545]: System Journal (/var/log/journal/f6bc28022d7945348b2f18008b67b029) is 3.9G, max 4.0G, 14.1M free.
>Jul 14 00:00:14 fedora systemd-journald[545]: Received client request to flush runtime journal.
>Jul 14 00:00:14 fedora systemd-journald[545]: File /var/log/journal/f6bc28022d7945348b2f18008b67b029/system.journal corrupted or uncleanly shut down, renaming and replacing.
>Jul 14 00:00:14 fedora kernel: audit: type=1130 audit(1657756813.579:142): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:14 fedora kernel: brcmfmac: brcmf_fw_alloc_request: using brcm/brcmfmac43455-sdio for chip BCM4345/6
>Jul 14 00:00:14 fedora kernel: brcmfmac: brcmf_c_preinit_dcmds: Firmware: BCM4345/6 wl0: Apr 15 2021 03:03:20 version 7.45.234 (4ca95bb CY) FWID 01-996384e2
>Jul 14 00:00:14 fedora kernel: Bluetooth: Core ver 2.22
>Jul 14 00:00:14 fedora kernel: NET: Registered PF_BLUETOOTH protocol family
>Jul 14 00:00:14 fedora kernel: Bluetooth: HCI device and connection manager initialized
>Jul 14 00:00:14 fedora kernel: Bluetooth: HCI socket layer initialized
>Jul 14 00:00:14 fedora kernel: Bluetooth: L2CAP socket layer initialized
>Jul 14 00:00:14 fedora kernel: Bluetooth: SCO socket layer initialized
>Jul 14 00:00:14 fedora kernel: audit: type=1130 audit(1657756813.699:143): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2duuid-DABC\x2d1692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:14 fedora kernel: audit: type=1130 audit(1657756813.699:144): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2duuid-a8e09fbc\x2daffa\x2d4db5\x2d8162\x2d17d0c4bd7a57 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:14 fedora kernel: EXT4-fs (mmcblk0p2): mounted filesystem with ordered data mode. Quota mode: none.
>Jul 14 00:00:14 fedora kernel: audit: type=1130 audit(1657756814.039:145): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-rfkill comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:13 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:13 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2duuid-DABC\x2d1692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:13 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2duuid-a8e09fbc\x2daffa\x2d4db5\x2d8162\x2d17d0c4bd7a57 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:14 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-rfkill comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:14 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dracut-shutdown comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:14 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:14 fedora systemd-fsck[615]: /dev/mmcblk0p2: clean, 1047/65536 files, 68436/262144 blocks
>Jul 14 00:00:13 fedora systemd[1]: Finished systemd-random-seed.service - Load/Save Random Seed.
>Jul 14 00:00:13 fedora wireless[622]: Could not determine country! Unable to set regulatory domain.
>Jul 14 00:00:14 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:14 fedora systemd-fsck[616]: fsck.fat 4.2 (2021-01-31)
>Jul 14 00:00:14 fedora systemd-fsck[616]: /dev/mmcblk0p1: 280 files, 3913/64091 clusters
>Jul 14 00:00:13 fedora systemd[1]: first-boot-complete.target - First Boot Complete was skipped because of a failed condition check (ConditionFirstBoot=yes).
>Jul 14 00:00:13 fedora systemd-udevd[570]: Using default interface naming scheme 'v250'.
>Jul 14 00:00:13 fedora systemd[1]: Starting systemd-rfkill.service - Load/Save RF Kill Switch Status...
>Jul 14 00:00:13 fedora systemd[1]: Finished systemd-fsck@dev-disk-by\x2duuid-DABC\x2d1692.service - File System Check on /dev/disk/by-uuid/DABC-1692.
>Jul 14 00:00:13 fedora systemd[1]: Finished systemd-fsck@dev-disk-by\x2duuid-a8e09fbc\x2daffa\x2d4db5\x2d8162\x2d17d0c4bd7a57.service - File System Check on /dev/disk/by-uuid/a8e09fbc-affa-4db5-8162-17d0c4bd7a57.
>Jul 14 00:00:14 fedora bootctl[631]: Couldn't find EFI system partition, skipping.
>Jul 14 00:00:13 fedora systemd[1]: Mounting boot.mount - /boot...
>Jul 14 00:00:13 fedora systemd[1]: Mounted boot.mount - /boot.
>Jul 14 00:00:13 fedora systemd[1]: Mounting boot-efi.mount - /boot/efi...
>Jul 14 00:00:13 fedora systemd[1]: Mounted boot-efi.mount - /boot/efi.
>Jul 14 00:00:13 fedora systemd[1]: Reached target local-fs.target - Local File Systems.
>Jul 14 00:00:13 fedora systemd-udevd[574]: phy0: Process '/usr/sbin/setregdomain' failed with exit code 1.
>Jul 14 00:00:14 fedora systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
>Jul 14 00:00:14 fedora systemd[1]: ldconfig.service - Rebuild Dynamic Linker Cache was skipped because all trigger condition checks failed.
>Jul 14 00:00:14 fedora systemd[1]: selinux-autorelabel-mark.service - Mark the need to relabel after reboot was skipped because of a failed condition check (ConditionSecurity=!selinux).
>Jul 14 00:00:14 fedora systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because all trigger condition checks failed.
>Jul 14 00:00:14 fedora systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of a failed condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
>Jul 14 00:00:14 fedora systemd[1]: Starting systemd-boot-update.service - Automatic Boot Loader Update...
>Jul 14 00:00:14 fedora systemd[1]: systemd-machine-id-commit.service - Commit a transient machine-id on disk was skipped because of a failed condition check (ConditionPathIsMountPoint=/etc/machine-id).
>Jul 14 00:00:14 fedora systemd[1]: Started systemd-rfkill.service - Load/Save RF Kill Switch Status.
>Jul 14 00:00:14 fedora systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
>Jul 14 00:00:14 fedora systemd[1]: Finished systemd-boot-update.service - Automatic Boot Loader Update.
>Jul 14 00:00:14 fedora systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
>Jul 14 00:00:14 fedora systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
>Jul 14 00:00:14 fedora systemd-tmpfiles[632]: /usr/lib/tmpfiles.d/pkg-man-db.conf:1: Duplicate line for path "/var/cache/man", ignoring.
>Jul 14 00:00:14 fedora systemd-tmpfiles[632]: /usr/lib/tmpfiles.d/tmp.conf:12: Duplicate line for path "/var/tmp", ignoring.
>Jul 14 00:00:14 fedora systemd-tmpfiles[632]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
>Jul 14 00:00:14 fedora systemd-tmpfiles[632]: /usr/lib/tmpfiles.d/var.conf:19: Duplicate line for path "/var/cache", ignoring.
>Jul 14 00:00:14 fedora systemd-tmpfiles[632]: /usr/lib/tmpfiles.d/var.conf:21: Duplicate line for path "/var/lib", ignoring.
>Jul 14 00:00:14 fedora systemd-tmpfiles[632]: /usr/lib/tmpfiles.d/var.conf:23: Duplicate line for path "/var/spool", ignoring.
>Jul 14 00:00:16 fedora systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
>Jul 14 00:00:16 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:16 fedora systemd[1]: Starting auditd.service - Security Auditing Service...
>Jul 14 00:00:16 fedora systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
>Jul 14 00:00:16 fedora audit: BPF prog-id=59 op=LOAD
>Jul 14 00:00:16 fedora audit: BPF prog-id=60 op=LOAD
>Jul 14 00:00:16 fedora audit: BPF prog-id=61 op=LOAD
>Jul 14 00:00:16 fedora systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer...
>Jul 14 00:00:16 fedora audit: BPF prog-id=62 op=LOAD
>Jul 14 00:00:16 fedora systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
>Jul 14 00:00:16 fedora systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
>Jul 14 00:00:16 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:16 fedora audit[637]: AVC avc: denied { read } for pid=637 comm="auditd" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:auditd_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Jul 14 00:00:16 fedora audit[637]: SYSCALL arch=c00000b7 syscall=56 success=no exit=-13 a0=ffffffffffffff9c a1=ffffa26946c0 a2=80000 a3=0 items=0 ppid=633 pid=637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditd" exe="/usr/sbin/auditd" subj=system_u:system_r:auditd_t:s0 key=(null)
>Jul 14 00:00:16 fedora audit: PROCTITLE proctitle="/sbin/auditd"
>Jul 14 00:00:16 fedora auditd[637]: audit dispatcher initialized with q_depth=1200 and 1 active plugins
>Jul 14 00:00:16 fedora kernel: kauditd_printk_skb: 9 callbacks suppressed
>Jul 14 00:00:16 fedora kernel: audit: type=1400 audit(1657756816.639:155): avc: denied { read } for pid=637 comm="auditd" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:auditd_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Jul 14 00:00:16 fedora kernel: audit: type=1300 audit(1657756816.639:155): arch=c00000b7 syscall=56 success=no exit=-13 a0=ffffffffffffff9c a1=ffffa26946c0 a2=80000 a3=0 items=0 ppid=633 pid=637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditd" exe="/usr/sbin/auditd" subj=system_u:system_r:auditd_t:s0 key=(null)
>Jul 14 00:00:16 fedora kernel: audit: type=1327 audit(1657756816.639:155): proctitle="/sbin/auditd"
>Jul 14 00:00:16 fedora kernel: audit: type=1305 audit(1657756816.649:156): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:auditd_t:s0 res=1
>Jul 14 00:00:16 fedora kernel: audit: type=1300 audit(1657756816.649:156): arch=c00000b7 syscall=206 success=yes exit=60 a0=3 a1=ffffdea43c40 a2=3c a3=0 items=0 ppid=633 pid=637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditd" exe="/usr/sbin/auditd" subj=system_u:system_r:auditd_t:s0 key=(null)
>Jul 14 00:00:16 fedora kernel: audit: type=1327 audit(1657756816.649:156): proctitle="/sbin/auditd"
>Jul 14 00:00:16 fedora audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:auditd_t:s0 res=1
>Jul 14 00:00:16 fedora audit[637]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=60 a0=3 a1=ffffdea43c40 a2=3c a3=0 items=0 ppid=633 pid=637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditd" exe="/usr/sbin/auditd" subj=system_u:system_r:auditd_t:s0 key=(null)
>Jul 14 00:00:16 fedora audit: PROCTITLE proctitle="/sbin/auditd"
>Jul 14 00:00:16 fedora audit: CONFIG_CHANGE op=set audit_pid=637 old=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:auditd_t:s0 res=1
>Jul 14 00:00:16 fedora audit[637]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=60 a0=3 a1=ffffdea418f0 a2=3c a3=0 items=0 ppid=633 pid=637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditd" exe="/usr/sbin/auditd" subj=system_u:system_r:auditd_t:s0 key=(null)
>Jul 14 00:00:16 fedora audit: PROCTITLE proctitle="/sbin/auditd"
>Jul 14 00:00:16 fedora auditd[637]: Init complete, auditd 3.0.8 listening for events (startup state enable)
>Jul 14 00:00:16 fedora systemd[1]: Starting systemd-update-done.service - Update is Completed...
>Jul 14 00:00:16 fedora audit: BPF prog-id=63 op=LOAD
>Jul 14 00:00:16 fedora audit: BPF prog-id=64 op=LOAD
>Jul 14 00:00:16 fedora audit: BPF prog-id=65 op=LOAD
>Jul 14 00:00:16 fedora systemd[1]: Starting systemd-userdbd.service - User Database Manager...
>Jul 14 00:00:16 fedora systemd[1]: Finished systemd-update-done.service - Update is Completed.
>Jul 14 00:00:16 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:16 fedora systemd[1]: Started systemd-userdbd.service - User Database Manager.
>Jul 14 00:00:16 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:16 fedora augenrules[643]: /sbin/augenrules: No change
>Jul 14 00:00:16 fedora audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:unconfined_service_t:s0 op=add_rule key=(null) list=1 res=1
>Jul 14 00:00:16 fedora audit[657]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe5813e50 a2=420 a3=0 items=0 ppid=643 pid=657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:unconfined_service_t:s0 key=(null)
>Jul 14 00:00:16 fedora audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
>Jul 14 00:00:16 fedora augenrules[657]: No rules
>Jul 14 00:00:16 fedora systemd[1]: Started auditd.service - Security Auditing Service.
>Jul 14 00:00:16 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=auditd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:16 fedora systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
>Jul 14 00:00:16 fedora audit[662]: SYSTEM_BOOT pid=662 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:16 fedora systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
>Jul 14 00:00:16 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:16 fedora systemd[1]: Reached target sysinit.target - System Initialization.
>Jul 14 00:00:16 fedora systemd[1]: Started ostree-finalize-staged.path - OSTree Monitor Staged Deployment.
>Jul 14 00:00:16 fedora systemd[1]: fstrim.timer: Not using persistent file timestamp Sat 2022-08-20 03:22:40 UTC as it is in the future.
>Jul 14 00:00:16 fedora systemd[1]: Started fstrim.timer - Discard unused blocks once a week.
>Jul 14 00:00:16 fedora systemd[1]: podman-auto-update.timer: Not using persistent file timestamp Sat 2022-08-20 01:57:56 UTC as it is in the future.
>Jul 14 00:00:16 fedora systemd[1]: Started podman-auto-update.timer - Podman auto-update timer.
>Jul 14 00:00:16 fedora systemd[1]: Started rpm-ostree-countme.timer - Weekly rpm-ostree Count Me timer.
>Jul 14 00:00:16 fedora systemd[1]: Started rpm-ostreed-automatic.timer - rpm-ostree Automatic Update Trigger.
>Jul 14 00:00:16 fedora systemd[1]: rpm-ostreed-upgrade-reboot.timer: Not using persistent file timestamp Sat 2022-08-20 02:02:33 UTC as it is in the future.
>Jul 14 00:00:16 fedora systemd[1]: Started rpm-ostreed-upgrade-reboot.timer - rpm-ostree auto-update timer.
>Jul 14 00:00:16 fedora systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
>Jul 14 00:00:16 fedora systemd[1]: Started zezere_ignition.timer - Trigger Ignition for Zezere until it finishes.
>Jul 14 00:00:16 fedora systemd[1]: Reached target paths.target - Path Units.
>Jul 14 00:00:16 fedora systemd[1]: Starting cockpit.socket - Cockpit Web Service Socket...
>Jul 14 00:00:16 fedora systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
>Jul 14 00:00:16 fedora systemd[1]: Listening on podman.socket - Podman API Socket.
>Jul 14 00:00:16 fedora systemd[1]: rpmdb-migrate.service - RPM database migration to /usr was skipped because of a failed condition check (ConditionPathExists=/var/lib/rpm/.migratedb).
>Jul 14 00:00:16 fedora systemd[1]: rpmdb-rebuild.service - RPM database rebuild was skipped because of a failed condition check (ConditionPathExists=/usr/lib/sysimage/rpm/.rebuilddb).
>Jul 14 00:00:17 fedora systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer.
>Jul 14 00:00:16 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:17 fedora systemd[1]: tmp.mount - Temporary Directory /tmp was skipped because of a failed condition check (ConditionPathIsSymbolicLink=!/tmp).
>Jul 14 00:00:17 fedora audit: BPF prog-id=66 op=LOAD
>Jul 14 00:00:17 fedora systemd-resolved[636]: Positive Trust Anchors:
>Jul 14 00:00:17 fedora systemd-resolved[636]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
>Jul 14 00:00:17 fedora systemd-resolved[636]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
>Jul 14 00:00:17 fedora systemd[1]: Starting dbus-broker.service - D-Bus System Message Bus...
>Jul 14 00:00:17 fedora dbus-broker-launch[669]: Looking up NSS user entry for 'dbus'...
>Jul 14 00:00:17 fedora dbus-broker-launch[669]: NSS returned NAME 'dbus' and UID '81'
>Jul 14 00:00:17 fedora dbus-broker-launch[669]: Looking up NSS group entry for 'netdev'...
>Jul 14 00:00:17 fedora dbus-broker-launch[669]: NSS returned no entry for 'netdev'
>Jul 14 00:00:17 fedora dbus-broker-launch[669]: Invalid group-name in /usr/share/dbus-1/system.d/ead-dbus.conf +20: group="netdev"
>Jul 14 00:00:17 fedora dbus-broker-launch[669]: Looking up NSS user entry for 'systemd-timesync'...
>Jul 14 00:00:17 fedora dbus-broker-launch[669]: NSS returned NAME 'systemd-timesync' and UID '993'
>Jul 14 00:00:17 fedora dbus-broker-launch[669]: Looking up NSS user entry for 'systemd-resolve'...
>Jul 14 00:00:17 fedora dbus-broker-launch[669]: NSS returned NAME 'systemd-resolve' and UID '990'
>Jul 14 00:00:17 fedora dbus-broker-launch[669]: Looking up NSS user entry for 'polkitd'...
>Jul 14 00:00:17 fedora dbus-broker-launch[669]: NSS returned NAME 'polkitd' and UID '999'
>Jul 14 00:00:17 fedora dbus-broker-launch[669]: Invalid group-name in /usr/share/dbus-1/system.d/iwd-dbus.conf +21: group="netdev"
>Jul 14 00:00:17 fedora dbus-broker-launch[669]: Looking up NSS user entry for 'systemd-oom'...
>Jul 14 00:00:17 fedora dbus-broker-launch[669]: NSS returned NAME 'systemd-oom' and UID '985'
>Jul 14 00:00:17 fedora dbus-broker-launch[669]: Looking up NSS user entry for 'setroubleshoot'...
>Jul 14 00:00:17 fedora dbus-broker-launch[669]: NSS returned NAME 'setroubleshoot' and UID '978'
>Jul 14 00:00:17 fedora systemd[1]: Started dbus-broker.service - D-Bus System Message Bus.
>Jul 14 00:00:17 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-broker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:17 fedora systemd[1]: Listening on cockpit.socket - Cockpit Web Service Socket.
>Jul 14 00:00:17 fedora systemd[1]: Reached target sockets.target - Socket Units.
>Jul 14 00:00:17 fedora systemd[1]: Reached target basic.target - Basic System.
>Jul 14 00:00:17 fedora dbus-broker-lau[669]: Ready
>Jul 14 00:00:17 fedora audit: BPF prog-id=67 op=LOAD
>Jul 14 00:00:17 fedora systemd[1]: Starting chronyd.service - NTP client/server...
>Jul 14 00:00:17 fedora systemd[1]: Starting greenboot-rpm-ostree-grub2-check-fallback.service - Check for fallback boot...
>Jul 14 00:00:17 fedora systemd[1]: Starting ostree-boot-complete.service - OSTree Complete Boot...
>Jul 14 00:00:17 fedora systemd[1]: Starting parsec.service - Parsec Service...
>Jul 14 00:00:17 fedora systemd[1]: Starting polkit.service - Authorization Manager...
>Jul 14 00:00:17 fedora systemd-resolved[636]: Using system hostname 'fedora'.
>Jul 14 00:00:17 fedora systemd[1]: sshd-keygen@ecdsa.service - OpenSSH ecdsa Server Key Generation was skipped because all trigger condition checks failed.
>Jul 14 00:00:17 fedora systemd[1]: sshd-keygen@ed25519.service - OpenSSH ed25519 Server Key Generation was skipped because all trigger condition checks failed.
>Jul 14 00:00:17 fedora systemd[1]: sshd-keygen@rsa.service - OpenSSH rsa Server Key Generation was skipped because all trigger condition checks failed.
>Jul 14 00:00:17 fedora systemd[1]: Reached target sshd-keygen.target.
>Jul 14 00:00:17 fedora audit: BPF prog-id=68 op=LOAD
>Jul 14 00:00:17 fedora audit: BPF prog-id=69 op=LOAD
>Jul 14 00:00:17 fedora audit: BPF prog-id=70 op=LOAD
>Jul 14 00:00:17 fedora systemd[1]: Starting systemd-logind.service - User Login Management...
>Jul 14 00:00:17 fedora systemd[1]: Starting zezere_ignition_banner.service - Update issue banner to include Zezere instructions...
>Jul 14 00:00:17 fedora systemd[1]: Started systemd-resolved.service - Network Name Resolution.
>Jul 14 00:00:17 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:17 fedora systemd[1]: Finished greenboot-rpm-ostree-grub2-check-fallback.service - Check for fallback boot.
>Jul 14 00:00:17 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=greenboot-rpm-ostree-grub2-check-fallback comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:17 fedora systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
>Jul 14 00:00:17 fedora systemd-logind[681]: New seat seat0.
>Jul 14 00:00:17 fedora systemd[1]: Started systemd-logind.service - User Login Management.
>Jul 14 00:00:17 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-logind comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:17 fedora ostree[676]: error: ostree-finalize-staged.service failed on previous boot: Bootloader write config: grub2-mkconfig: Child process exited with code 1
>Jul 14 00:00:17 fedora systemd[1]: ostree-boot-complete.service: Main process exited, code=exited, status=1/FAILURE
>Jul 14 00:00:17 fedora systemd[1]: ostree-boot-complete.service: Failed with result 'exit-code'.
>Jul 14 00:00:17 fedora systemd[1]: Failed to start ostree-boot-complete.service - OSTree Complete Boot.
>Jul 14 00:00:17 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=ostree-boot-complete comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
>Jul 14 00:00:17 fedora audit[692]: AVC avc: denied { read } for pid=692 comm="chronyd" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:chronyd_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Jul 14 00:00:17 fedora chronyd[692]: chronyd version 4.2 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
>Jul 14 00:00:18 fedora systemd[1]: Started parsec.service - Parsec Service.
>Jul 14 00:00:18 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:18 fedora chronyd[692]: Frequency 11.339 +/- 0.035 ppm read from /var/lib/chrony/drift
>Jul 14 00:00:18 fedora audit[692]: AVC avc: denied { read } for pid=692 comm="chronyd" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:chronyd_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Jul 14 00:00:18 fedora audit[692]: AVC avc: denied { read } for pid=692 comm="chronyd" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:chronyd_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Jul 14 00:00:18 fedora audit[692]: AVC avc: denied { read } for pid=692 comm="chronyd" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:chronyd_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Jul 14 00:00:18 fedora audit[692]: AVC avc: denied { read } for pid=692 comm="chronyd" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:chronyd_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Jul 14 00:00:18 fedora chronyd[692]: Using right/UTC timezone to obtain leap second data
>Jul 14 00:00:18 fedora chronyd[692]: Loaded seccomp filter (level 2)
>Jul 14 00:00:18 fedora systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
>Jul 14 00:00:18 fedora systemd[1]: Started chronyd.service - NTP client/server.
>Jul 14 00:00:18 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=chronyd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:18 fedora dbus-parsec[698]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
>Jul 14 00:00:18 fedora systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
>Jul 14 00:00:18 fedora systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
>Jul 14 00:00:18 fedora systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Jul 14 00:00:18 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
>Jul 14 00:00:18 fedora zezere-ignition[682]: Updated issue banner
>Jul 14 00:00:18 fedora systemd[1]: zezere_ignition_banner.service: Deactivated successfully.
>Jul 14 00:00:18 fedora systemd[1]: Finished zezere_ignition_banner.service - Update issue banner to include Zezere instructions.
>Jul 14 00:00:18 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=zezere_ignition_banner comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:18 fedora audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=zezere_ignition_banner comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:18 fedora audit[678]: AVC avc: denied { read } for pid=678 comm="polkitd" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:policykit_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Jul 14 00:00:18 fedora polkitd[678]: Started polkitd version 0.120
>Jul 14 00:00:18 fedora audit[678]: AVC avc: denied { read } for pid=678 comm="polkitd" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:policykit_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Jul 14 00:00:18 fedora polkitd[678]: Loading rules from directory /etc/polkit-1/rules.d
>Jul 14 00:00:18 fedora polkitd[678]: Loading rules from directory /usr/share/polkit-1/rules.d
>Jul 14 00:00:18 fedora audit[678]: AVC avc: denied { read } for pid=678 comm="polkitd" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:policykit_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Jul 14 00:00:18 fedora polkitd[678]: Finished loading, compiling and executing 3 rules
>Jul 14 00:00:18 fedora audit[678]: AVC avc: denied { read } for pid=678 comm="polkitd" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:policykit_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Jul 14 00:00:18 fedora systemd[1]: Started polkit.service - Authorization Manager.
>Jul 14 00:00:18 fedora audit[678]: AVC avc: denied { read } for pid=678 comm="polkitd" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:policykit_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Jul 14 00:00:18 fedora polkitd[678]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
>Jul 14 00:00:18 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=polkit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:19 fedora systemd[1]: Starting ModemManager.service - Modem Manager...
>Jul 14 00:00:19 fedora systemd[1]: Starting firewalld.service - firewalld - dynamic firewall daemon...
>Jul 14 00:00:19 fedora systemd[1]: systemd-rfkill.service: Deactivated successfully.
>Jul 14 00:00:19 fedora audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-rfkill comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:19 fedora audit[709]: AVC avc: denied { read } for pid=709 comm="firewalld" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:firewalld_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Jul 14 00:00:19 fedora systemd[1]: Created slice system-dbus\x2d:1.2\x2dorg.fedoraproject.Setroubleshootd.slice - Slice /system/dbus-:1.2-org.fedoraproject.Setroubleshootd.
>Jul 14 00:00:19 fedora systemd[1]: Started dbus-:1.2-org.fedoraproject.Setroubleshootd@0.service.
>Jul 14 00:00:19 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-:1.2-org.fedoraproject.Setroubleshootd@0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:19 fedora audit[708]: AVC avc: denied { read } for pid=708 comm="ModemManager" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:modemmanager_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Jul 14 00:00:19 fedora ModemManager[708]: <info> ModemManager (version 1.18.8-1.fc36) starting in system bus...
>Jul 14 00:00:19 fedora ModemManager[708]: [qrtr] socket lookup from 1:0
>Jul 14 00:00:19 fedora ModemManager[708]: [qrtr] initial lookup finished
>Jul 14 00:00:19 fedora systemd[1]: Started ModemManager.service - Modem Manager.
>Jul 14 00:00:19 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=ModemManager comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:19 fedora kernel: NET: Registered PF_QIPCRTR protocol family
>Jul 14 00:00:21 fedora systemd[1]: Started firewalld.service - firewalld - dynamic firewall daemon.
>Jul 14 00:00:21 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=firewalld comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:21 fedora systemd[1]: Reached target network-pre.target - Preparation for Network.
>Jul 14 00:00:21 fedora systemd[1]: Starting NetworkManager.service - Network Manager...
>Jul 14 00:00:21 fedora NetworkManager[717]: <info> [1657756821.3428] NetworkManager (version 1.38.2-1.fc36) is starting... (for the first time)
>Jul 14 00:00:21 fedora NetworkManager[717]: <info> [1657756821.3452] Read config: /etc/NetworkManager/NetworkManager.conf
>Jul 14 00:00:21 fedora systemd[1]: Started NetworkManager.service - Network Manager.
>Jul 14 00:00:21 fedora NetworkManager[717]: <info> [1657756821.3558] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
>Jul 14 00:00:21 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=NetworkManager comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:21 fedora systemd[1]: Reached target network.target - Network.
>Jul 14 00:00:21 fedora systemd[1]: Starting NetworkManager-wait-online.service - Network Manager Wait Online...
>Jul 14 00:00:21 fedora systemd[1]: Starting fail2ban.service - Fail2Ban Service...
>Jul 14 00:00:21 fedora audit: BPF prog-id=71 op=LOAD
>Jul 14 00:00:21 fedora systemd[1]: Starting postfix.service - Postfix Mail Transport Agent...
>Jul 14 00:00:21 fedora systemd[1]: Starting sshd.service - OpenSSH server daemon...
>Jul 14 00:00:21 fedora systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
>Jul 14 00:00:21 fedora systemd[1]: Started fail2ban.service - Fail2Ban Service.
>Jul 14 00:00:21 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=fail2ban comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:21 fedora systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
>Jul 14 00:00:21 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-user-sessions comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:21 fedora systemd[1]: Started getty@tty1.service - Getty on tty1.
>Jul 14 00:00:21 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=getty@tty1 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:21 fedora systemd[1]: Reached target getty.target - Login Prompts.
>Jul 14 00:00:21 fedora restorecon[730]: /usr/sbin/restorecon: lstat(/var/spool/postfix/pid/master.pid) failed: No such file or directory
>Jul 14 00:00:21 fedora audit[731]: AVC avc: denied { read } for pid=731 comm="sshd" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:sshd_t:s0-s0:c0.c1023 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Jul 14 00:00:21 fedora sshd[731]: Server listening on 0.0.0.0 port 22.
>Jul 14 00:00:21 fedora sshd[731]: Server listening on :: port 22.
>Jul 14 00:00:21 fedora systemd[1]: Started sshd.service - OpenSSH server daemon.
>Jul 14 00:00:21 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=sshd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:21 fedora audit[740]: AVC avc: denied { read } for pid=740 comm="newaliases" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:sendmail_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Jul 14 00:00:22 fedora NetworkManager[717]: <info> [1657756822.0594] manager[0xaaaadba97000]: monitoring kernel firmware directory '/lib/firmware'.
>Jul 14 00:00:22 fedora systemd[1]: tmp.mount - Temporary Directory /tmp was skipped because of a failed condition check (ConditionPathIsSymbolicLink=!/tmp).
>Jul 14 00:00:22 fedora audit: BPF prog-id=72 op=LOAD
>Jul 14 00:00:22 fedora audit: BPF prog-id=73 op=LOAD
>Jul 14 00:00:22 fedora systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
>Jul 14 00:00:22 fedora ModemManager[708]: <info> [base-manager] couldn't check support for device '/sys/devices/platform/scb/fd500000.pcie/pci0000:00/0000:00:00.0/0000:01:00.0/usb1/1-1/1-1.1': not supported by any plugin
>Jul 14 00:00:22 fedora ModemManager[708]: <info> [base-manager] couldn't check support for device '/sys/devices/platform/scb/fd580000.ethernet': not supported by any plugin
>Jul 14 00:00:22 fedora ModemManager[708]: <info> [base-manager] couldn't check support for device '/sys/devices/platform/soc/fe300000.mmcnr/mmc_host/mmc1/mmc1:0001/mmc1:0001:1': not supported by any plugin
>Jul 14 00:00:22 fedora systemd[1]: Started systemd-hostnamed.service - Hostname Service.
>Jul 14 00:00:22 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:22 fedora audit[740]: AVC avc: denied { read } for pid=740 comm="postalias" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:postfix_master_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Jul 14 00:00:22 fedora NetworkManager[717]: <info> [1657756822.4667] hostname: hostname: using hostnamed
>Jul 14 00:00:22 fedora NetworkManager[717]: <info> [1657756822.4727] dns-mgr[0xaaaadba71250]: init: dns=default,systemd-resolved rc-manager=symlink (auto)
>Jul 14 00:00:22 fedora NetworkManager[717]: <info> [1657756822.4812] rfkill0: found Wi-Fi radio killswitch (at /sys/devices/platform/soc/fe300000.mmcnr/mmc_host/mmc1/mmc1:0001/mmc1:0001:1/ieee80211/phy0/rfkill0) (driver brcmfmac)
>Jul 14 00:00:22 fedora NetworkManager[717]: <info> [1657756822.4824] manager[0xaaaadba97000]: rfkill: Wi-Fi hardware radio set enabled
>Jul 14 00:00:22 fedora NetworkManager[717]: <info> [1657756822.4825] manager[0xaaaadba97000]: rfkill: WWAN hardware radio set enabled
>Jul 14 00:00:22 fedora NetworkManager[717]: <info> [1657756822.5156] Loaded device plugin: NMWwanFactory (/usr/lib64/NetworkManager/1.38.2-1.fc36/libnm-device-plugin-wwan.so)
>Jul 14 00:00:22 fedora NetworkManager[717]: <info> [1657756822.5277] Loaded device plugin: NMWifiFactory (/usr/lib64/NetworkManager/1.38.2-1.fc36/libnm-device-plugin-wifi.so)
>Jul 14 00:00:22 fedora NetworkManager[717]: <info> [1657756822.7100] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.38.2-1.fc36/libnm-device-plugin-team.so)
>Jul 14 00:00:22 fedora NetworkManager[717]: <info> [1657756822.7104] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
>Jul 14 00:00:22 fedora NetworkManager[717]: <info> [1657756822.7107] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
>Jul 14 00:00:22 fedora NetworkManager[717]: <info> [1657756822.7110] manager: Networking is enabled by state file
>Jul 14 00:00:22 fedora NetworkManager[717]: <info> [1657756822.7176] settings: Loaded settings plugin: keyfile (internal)
>Jul 14 00:00:22 fedora NetworkManager[717]: <info> [1657756822.7482] dhcp-init: Using DHCP client 'internal'
>Jul 14 00:00:22 fedora NetworkManager[717]: <info> [1657756822.7484] device (lo): carrier: link connected
>Jul 14 00:00:22 fedora NetworkManager[717]: <info> [1657756822.7493] manager: (lo): new Generic device (/org/freedesktop/NetworkManager/Devices/1)
>Jul 14 00:00:22 fedora NetworkManager[717]: <info> [1657756822.7536] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
>Jul 14 00:00:22 fedora NetworkManager[717]: <info> [1657756822.7552] device (eth0): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external')
>Jul 14 00:00:22 fedora systemd[1]: Starting NetworkManager-dispatcher.service - Network Manager Script Dispatcher Service...
>Jul 14 00:00:22 fedora kernel: bcmgenet fd580000.ethernet: configuring instance for external RGMII (RX delay)
>Jul 14 00:00:22 fedora kernel: bcmgenet fd580000.ethernet eth0: Link is Down
>Jul 14 00:00:22 fedora NetworkManager[717]: <info> [1657756822.9331] device (wlan0): driver supports Access Point (AP) mode
>Jul 14 00:00:22 fedora NetworkManager[717]: <info> [1657756822.9361] manager: (wlan0): new 802.11 Wi-Fi device (/org/freedesktop/NetworkManager/Devices/3)
>Jul 14 00:00:22 fedora NetworkManager[717]: <info> [1657756822.9372] device (wlan0): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external')
>Jul 14 00:00:23 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=NetworkManager-dispatcher comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:23 fedora systemd[1]: Started NetworkManager-dispatcher.service - Network Manager Script Dispatcher Service.
>Jul 14 00:00:23 fedora NetworkManager[717]: <info> [1657756823.4890] device (wlan0): set-hw-addr: set MAC address to B2:3F:53:76:F3:81 (scanning)
>Jul 14 00:00:23 fedora audit[760]: AVC avc: denied { read } for pid=760 comm="postfix" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:postfix_master_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Jul 14 00:00:23 fedora NetworkManager[717]: <info> [1657756823.5206] modem-manager: ModemManager available
>Jul 14 00:00:23 fedora systemd[1]: Starting wpa_supplicant.service - WPA supplicant...
>Jul 14 00:00:23 fedora audit[780]: AVC avc: denied { read } for pid=780 comm="master" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:postfix_master_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Jul 14 00:00:23 fedora audit[709]: NETFILTER_CFG table=firewalld:2 family=1 entries=1 op=nft_register_table pid=709 subj=system_u:system_r:firewalld_t:s0 comm="firewalld"
>Jul 14 00:00:23 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=wpa_supplicant comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:23 fedora systemd[1]: Started wpa_supplicant.service - WPA supplicant.
>Jul 14 00:00:23 fedora audit[771]: AVC avc: denied { read } for pid=771 comm="wpa_supplicant" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:NetworkManager_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Jul 14 00:00:23 fedora wpa_supplicant[771]: Successfully initialized wpa_supplicant
>Jul 14 00:00:24 fedora NetworkManager[717]: <info> [1657756824.3180] device (wlan0): supplicant interface state: internal-starting -> disconnected
>Jul 14 00:00:24 fedora NetworkManager[717]: <info> [1657756824.3185] Wi-Fi P2P device controlled by interface wlan0 created
>Jul 14 00:00:24 fedora NetworkManager[717]: <info> [1657756824.3195] manager: (p2p-dev-wlan0): new 802.11 Wi-Fi P2P device (/org/freedesktop/NetworkManager/Devices/4)
>Jul 14 00:00:24 fedora NetworkManager[717]: <info> [1657756824.3203] device (p2p-dev-wlan0): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external')
>Jul 14 00:00:24 fedora NetworkManager[717]: <info> [1657756824.3221] device (wlan0): state change: unavailable -> disconnected (reason 'supplicant-available', sys-iface-state: 'managed')
>Jul 14 00:00:24 fedora NetworkManager[717]: <info> [1657756824.3447] device (p2p-dev-wlan0): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'managed')
>Jul 14 00:00:24 fedora fail2ban-server[733]: Server ready
>Jul 14 00:00:24 fedora audit[838]: AVC avc: denied { read } for pid=838 comm="sendmail" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:system_mail_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Jul 14 00:00:24 fedora audit[843]: AVC avc: denied { read } for pid=843 comm="postdrop" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:postfix_postdrop_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Jul 14 00:00:24 fedora audit[872]: AVC avc: denied { read } for pid=872 comm="postsuper" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:postfix_master_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Jul 14 00:00:25 fedora audit[908]: AVC avc: denied { read } for pid=908 comm="postlog" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:postfix_master_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Jul 14 00:00:25 fedora postfix/postfix-script[908]: starting the Postfix mail system
>Jul 14 00:00:25 fedora audit[909]: AVC avc: denied { read } for pid=909 comm="master" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:postfix_master_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Jul 14 00:00:25 fedora postfix/master[911]: daemon started -- version 3.6.4, configuration /etc/postfix
>Jul 14 00:00:25 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=postfix comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:25 fedora systemd[1]: Started postfix.service - Postfix Mail Transport Agent.
>Jul 14 00:00:25 fedora audit[709]: NETFILTER_CFG table=firewalld:3 family=1 entries=452 op=nft_register_obj pid=709 subj=system_u:system_r:firewalld_t:s0 comm="firewalld"
>Jul 14 00:00:25 fedora audit[709]: NETFILTER_CFG table=firewalld:3 family=1 entries=647 op=nft_register_chain pid=709 subj=system_u:system_r:firewalld_t:s0 comm="firewalld"
>Jul 14 00:00:25 fedora audit[922]: AVC avc: denied { read } for pid=922 comm="qmgr" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:postfix_qmgr_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Jul 14 00:00:25 fedora audit[921]: AVC avc: denied { read } for pid=921 comm="pickup" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:postfix_pickup_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Jul 14 00:00:25 fedora audit[921]: AVC avc: denied { read } for pid=921 comm="pickup" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:postfix_pickup_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Jul 14 00:00:25 fedora audit[922]: AVC avc: denied { read } for pid=922 comm="qmgr" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:postfix_qmgr_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Jul 14 00:00:25 fedora audit[924]: AVC avc: denied { read } for pid=924 comm="cleanup" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:postfix_cleanup_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Jul 14 00:00:25 fedora audit[924]: AVC avc: denied { read } for pid=924 comm="cleanup" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:postfix_cleanup_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Jul 14 00:00:25 fedora postfix/pickup[921]: 6C4104902: uid=0 from=<fail2ban@vanoverloop.xyz>
>Jul 14 00:00:25 fedora audit[921]: AVC avc: denied { read } for pid=921 comm="pickup" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:postfix_pickup_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Jul 14 00:00:25 fedora audit[925]: AVC avc: denied { read } for pid=925 comm="trivial-rewrite" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:postfix_master_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Jul 14 00:00:25 fedora audit[925]: AVC avc: denied { read } for pid=925 comm="trivial-rewrite" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:postfix_master_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Jul 14 00:00:25 fedora postfix/cleanup[924]: 6C4104902: message-id=<20220714000025.6C4104902@fedora.localdomain>
>Jul 14 00:00:25 fedora postfix/qmgr[922]: 6C4104902: from=<fail2ban@vanoverloop.xyz>, size=408, nrcpt=1 (queue active)
>Jul 14 00:00:25 fedora postfix/pickup[921]: warning: 8E2CD2A03: message dated 3258623 seconds into the future
>Jul 14 00:00:25 fedora audit[927]: AVC avc: denied { read } for pid=927 comm="cleanup" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:postfix_cleanup_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Jul 14 00:00:25 fedora audit[927]: AVC avc: denied { read } for pid=927 comm="cleanup" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:postfix_cleanup_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Jul 14 00:00:25 fedora audit[928]: AVC avc: denied { read } for pid=928 comm="local" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:postfix_local_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Jul 14 00:00:25 fedora audit[928]: AVC avc: denied { read } for pid=928 comm="local" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:postfix_local_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Jul 14 00:00:25 fedora postfix/pickup[921]: 8E2CD2A03: uid=0 from=<fail2ban@vanoverloop.xyz>
>Jul 14 00:00:25 fedora audit[921]: AVC avc: denied { read } for pid=921 comm="pickup" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:postfix_pickup_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Jul 14 00:00:25 fedora postfix/cleanup[924]: 8E2CD2A03: message-id=<20220714000025.8E2CD2A03@fedora.localdomain>
>Jul 14 00:00:25 fedora audit[928]: AVC avc: denied { read } for pid=928 comm="local" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:postfix_local_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Jul 14 00:00:25 fedora postfix/qmgr[922]: 8E2CD2A03: from=<fail2ban@vanoverloop.xyz>, size=391, nrcpt=1 (queue active)
>Jul 14 00:00:25 fedora audit[929]: AVC avc: denied { read } for pid=929 comm="local" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:postfix_local_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Jul 14 00:00:25 fedora audit[929]: AVC avc: denied { read } for pid=929 comm="local" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:postfix_local_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Jul 14 00:00:25 fedora postfix/local[928]: 6C4104902: to=<root@fedora.localdomain>, orig_to=<root>, relay=local, delay=1.2, delays=1/0.07/0/0.11, dsn=2.0.0, status=sent (delivered to mailbox)
>Jul 14 00:00:25 fedora postfix/qmgr[922]: 6C4104902: removed
>Jul 14 00:00:26 fedora audit[929]: AVC avc: denied { read } for pid=929 comm="local" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:postfix_local_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Jul 14 00:00:26 fedora postfix/local[929]: 8E2CD2A03: to=<root@fedora.localdomain>, orig_to=<root>, relay=local, delay=0, delays=0/0.02/0/0.95, dsn=2.0.0, status=sent (delivered to mailbox)
>Jul 14 00:00:26 fedora postfix/qmgr[922]: 8E2CD2A03: removed
>Jul 14 00:00:26 fedora systemd[1]: Created slice system-dbus\x2d:1.2\x2dorg.fedoraproject.SetroubleshootPrivileged.slice - Slice /system/dbus-:1.2-org.fedoraproject.SetroubleshootPrivileged.
>Jul 14 00:00:26 fedora audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-:1.2-org.fedoraproject.SetroubleshootPrivileged@0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:26 fedora systemd[1]: Started dbus-:1.2-org.fedoraproject.SetroubleshootPrivileged@0.service.
>Jul 14 00:00:27 fedora NetworkManager[717]: <info> [1657756827.0750] device (eth0): carrier: link connected
>Jul 14 00:00:27 fedora NetworkManager[717]: <info> [1657756827.0767] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', sys-iface-state: 'managed')
>Jul 14 00:00:27 fedora NetworkManager[717]: <info> [1657756827.0801] policy: auto-activating connection 'Wired connection 1' (5fe61ab7-0527-34bb-aa30-cdb21b797ac8)
>Jul 14 00:00:27 fedora NetworkManager[717]: <info> [1657756827.0817] device (eth0): Activation: starting connection 'Wired connection 1' (5fe61ab7-0527-34bb-aa30-cdb21b797ac8)
>Jul 14 00:00:27 fedora NetworkManager[717]: <info> [1657756827.0828] device (eth0): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
>Jul 14 00:00:27 fedora kernel: bcmgenet fd580000.ethernet eth0: Link is Up - 1Gbps/Full - flow control rx/tx
>Jul 14 00:00:27 fedora kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
>Jul 14 00:00:27 fedora NetworkManager[717]: <info> [1657756827.0842] manager: NetworkManager state is now CONNECTING
>Jul 14 00:00:27 fedora NetworkManager[717]: <info> [1657756827.0847] device (eth0): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
>Jul 14 00:00:27 fedora audit[709]: NETFILTER_CFG table=firewalld:4 family=1 entries=6 op=nft_register_rule pid=709 subj=system_u:system_r:firewalld_t:s0 comm="firewalld"
>Jul 14 00:00:27 fedora NetworkManager[717]: <info> [1657756827.1818] device (eth0): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
>Jul 14 00:00:27 fedora NetworkManager[717]: <info> [1657756827.1855] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
>Jul 14 00:00:27 fedora kernel: filter_IN_public_REJECT: IN=eth0 OUT= MAC=dc:a6:32:38:46:e7:28:87:ba:2a:e1:ff:08:00 SRC=10.0.3.1 DST=10.0.3.10 LEN=329 TOS=0x00 PREC=0xC0 TTL=64 ID=63880 PROTO=UDP SPT=67 DPT=68 LEN=309
>Jul 14 00:00:27 fedora NetworkManager[717]: <info> [1657756827.3220] dhcp4 (eth0): state changed new lease, address=10.0.3.10
>Jul 14 00:00:27 fedora NetworkManager[717]: <info> [1657756827.3257] policy: set-hostname: set hostname to 'pi' (from DHCPv4)
>Jul 14 00:00:27 pi systemd-resolved[636]: System hostname changed to 'pi'.
>Jul 14 00:00:27 pi systemd-hostnamed[744]: Hostname set to <pi> (transient) >Jul 14 00:00:27 pi NetworkManager[717]: <info> [1657756827.3394] device (eth0): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed') >Jul 14 00:00:27 pi NetworkManager[717]: <info> [1657756827.3575] device (eth0): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed') >Jul 14 00:00:27 pi NetworkManager[717]: <info> [1657756827.3625] device (eth0): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed') >Jul 14 00:00:27 pi NetworkManager[717]: <info> [1657756827.3713] manager: NetworkManager state is now CONNECTED_LOCAL >Jul 14 00:00:27 pi NetworkManager[717]: <info> [1657756827.3746] manager: NetworkManager state is now CONNECTED_SITE >Jul 14 00:00:27 pi NetworkManager[717]: <info> [1657756827.3750] policy: set 'Wired connection 1' (eth0) as default for IPv4 routing and DNS >Jul 14 00:00:27 pi NetworkManager[717]: <info> [1657756827.3756] policy: set 'Wired connection 1' (eth0) as default for IPv6 routing and DNS >Jul 14 00:00:27 pi systemd-resolved[636]: eth0: Bus client set search domain list to: lan >Jul 14 00:00:27 pi systemd-resolved[636]: eth0: Bus client set default route setting: yes >Jul 14 00:00:27 pi systemd-resolved[636]: eth0: Bus client set DNS server list to: 10.0.3.1 >Jul 14 00:00:27 pi NetworkManager[717]: <info> [1657756827.5082] device (eth0): Activation: successful, device activated. >Jul 14 00:00:27 pi NetworkManager[717]: <info> [1657756827.5106] manager: NetworkManager state is now CONNECTED_GLOBAL >Jul 14 00:00:27 pi NetworkManager[717]: <info> [1657756827.5154] manager: startup complete >Jul 14 00:00:27 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=NetworkManager-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' >Jul 14 00:00:27 pi systemd[1]: Finished NetworkManager-wait-online.service - Network Manager Wait Online. >Jul 14 00:00:27 pi systemd[1]: Reached target network-online.target - Network is Online. >Jul 14 00:00:27 pi systemd[1]: Starting cockpit-motd.service - Cockpit motd updater service... >Jul 14 00:00:27 pi systemd[1]: Starting container-oauth2-proxy.service - Podman container-oauth2-proxy.service... >Jul 14 00:00:27 pi systemd[1]: Starting container-pihole.service - Podman container-pihole.service... >Jul 14 00:00:27 pi systemd[1]: Starting container-proxy-internal.service - Podman container-proxy-internal.service... >Jul 14 00:00:27 pi systemd[1]: Starting container-proxy.service - Podman container-proxy.service... >Jul 14 00:00:27 pi systemd[1]: Starting container-vaultwarden-server.service - Podman container-vaultwarden-server.service... >Jul 14 00:00:27 pi systemd[1]: Starting greenboot-healthcheck.service - greenboot Health Checks Runner... >Jul 14 00:00:27 pi systemd[1]: Starting pmcd.service - Performance Metrics Collector Daemon... >Jul 14 00:00:27 pi systemd[1]: Starting pod-gitea.service - Podman pod-gitea.service... >Jul 14 00:00:27 pi systemd[1]: Starting pod-home-assistant.service - Podman pod-home-assistant.service... >Jul 14 00:00:27 pi systemd[1]: Starting pod-nextcloud.service - Podman pod-nextcloud.service... >Jul 14 00:00:27 pi greenboot[1002]: Running Required Health Check Scripts... >Jul 14 00:00:27 pi systemd[1]: Starting pod-web.service - Podman pod-web.service... >Jul 14 00:00:27 pi systemd[1]: Starting zezere_ignition.service - Run Ignition for Zezere... >Jul 14 00:00:27 pi systemd[1]: cockpit-motd.service: Deactivated successfully. >Jul 14 00:00:27 pi systemd[1]: Finished cockpit-motd.service - Cockpit motd updater service. >Jul 14 00:00:27 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=cockpit-motd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' >Jul 14 00:00:27 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=cockpit-motd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Jul 14 00:00:28 pi 00_required_scripts_start.sh[1041]: Running greenboot Required Health Check Scripts >Jul 14 00:00:28 pi greenboot[1002]: Script '00_required_scripts_start.sh' SUCCESS >Jul 14 00:00:28 pi 01_repository_dns_check.sh[1058]: All domains have resolved correctly >Jul 14 00:00:28 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 1. >Jul 14 00:00:28 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Jul 14 00:00:28 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Jul 14 00:00:28 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Jul 14 00:00:28 pi greenboot[1002]: Script '01_repository_dns_check.sh' SUCCESS >Jul 14 00:00:28 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon... >Jul 14 00:00:28 pi dbus-parsec[1092]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist))) >Jul 14 00:00:28 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE >Jul 14 00:00:28 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'. >Jul 14 00:00:28 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' >Jul 14 00:00:28 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Jul 14 00:00:28 pi 02_watchdog.sh[1090]: Watchdog check is disabled >Jul 14 00:00:28 pi greenboot[1002]: Script '02_watchdog.sh' SUCCESS >Jul 14 00:00:28 pi greenboot[1002]: Running Wanted Health Check Scripts... >Jul 14 00:00:28 pi 00_wanted_scripts_start.sh[1108]: Running greenboot Wanted Health Check Scripts >Jul 14 00:00:28 pi greenboot[1002]: Script '00_wanted_scripts_start.sh' SUCCESS >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception: >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last): >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: result = self._handle_call( >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context))) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: 
build_module_type_cache() >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store: >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules' >Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing chronyd from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l ecb48473-7dd9-473d-ba43-787d5fc195cd >Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing chronyd from read access on the lnk_file localtime. > > ***** Plugin catchall (100. confidence) suggests ************************** > > If you believe that chronyd should be allowed read access on the localtime lnk_file by default. > Then you should report this as a bug. > You can generate a local policy module to allow this access. 
> Do > allow this access for now by executing: > # ausearch -c 'chronyd' --raw | audit2allow -M my-chronyd > # semodule -X 300 -i my-chronyd.pp > >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception: >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last): >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: result = self._handle_call( >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context))) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: build_module_type_cache() >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store: >Jul 14 00:00:29 pi 
SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules' >Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing chronyd from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l ecb48473-7dd9-473d-ba43-787d5fc195cd >Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing chronyd from read access on the lnk_file localtime. > > ***** Plugin catchall (100. confidence) suggests ************************** > > If you believe that chronyd should be allowed read access on the localtime lnk_file by default. > Then you should report this as a bug. > You can generate a local policy module to allow this access. > Do > allow this access for now by executing: > # ausearch -c 'chronyd' --raw | audit2allow -M my-chronyd > # semodule -X 300 -i my-chronyd.pp > >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception: >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last): >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: result = self._handle_call( >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 
625, in get_rpm_nvr_by_scontext >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context))) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: build_module_type_cache() >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store: >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules' >Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing chronyd from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l ecb48473-7dd9-473d-ba43-787d5fc195cd >Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing chronyd from read access on the lnk_file localtime. > > ***** Plugin catchall (100. confidence) suggests ************************** > > If you believe that chronyd should be allowed read access on the localtime lnk_file by default. > Then you should report this as a bug. > You can generate a local policy module to allow this access. 
> Do > allow this access for now by executing: > # ausearch -c 'chronyd' --raw | audit2allow -M my-chronyd > # semodule -X 300 -i my-chronyd.pp > >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception: >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last): >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: result = self._handle_call( >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context))) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: build_module_type_cache() >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store: >Jul 14 00:00:29 pi 
SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules' >Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing chronyd from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l ecb48473-7dd9-473d-ba43-787d5fc195cd >Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing chronyd from read access on the lnk_file localtime. > > ***** Plugin catchall (100. confidence) suggests ************************** > > If you believe that chronyd should be allowed read access on the localtime lnk_file by default. > Then you should report this as a bug. > You can generate a local policy module to allow this access. > Do > allow this access for now by executing: > # ausearch -c 'chronyd' --raw | audit2allow -M my-chronyd > # semodule -X 300 -i my-chronyd.pp > >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception: >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last): >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: result = self._handle_call( >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 
625, in get_rpm_nvr_by_scontext >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context))) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: build_module_type_cache() >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store: >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules' >Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing chronyd from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l ecb48473-7dd9-473d-ba43-787d5fc195cd >Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing chronyd from read access on the lnk_file localtime. > > ***** Plugin catchall (100. confidence) suggests ************************** > > If you believe that chronyd should be allowed read access on the localtime lnk_file by default. > Then you should report this as a bug. > You can generate a local policy module to allow this access. 
> Do
> allow this access for now by executing:
> # ausearch -c 'chronyd' --raw | audit2allow -M my-chronyd
> # semodule -X 300 -i my-chronyd.pp
>
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception:
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last):
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: result = self._handle_call(
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args)
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext)
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context)))
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: build_module_type_cache()
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store:
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules'
>Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing polkitd from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l f44d5ae8-a0d6-40c1-82db-5f3b5a9d0462
>Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing polkitd from read access on the lnk_file localtime.
>
> ***** Plugin catchall (100. confidence) suggests **************************
>
> If you believe that polkitd should be allowed read access on the localtime lnk_file by default.
> Then you should report this as a bug.
> You can generate a local policy module to allow this access.
> Do
> allow this access for now by executing:
> # ausearch -c 'polkitd' --raw | audit2allow -M my-polkitd
> # semodule -X 300 -i my-polkitd.pp
>
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception:
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last):
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: result = self._handle_call(
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args)
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext)
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context)))
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: build_module_type_cache()
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store:
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules'
>Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing polkitd from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l f44d5ae8-a0d6-40c1-82db-5f3b5a9d0462
>Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing polkitd from read access on the lnk_file localtime.
>
> ***** Plugin catchall (100. confidence) suggests **************************
>
> If you believe that polkitd should be allowed read access on the localtime lnk_file by default.
> Then you should report this as a bug.
> You can generate a local policy module to allow this access.
> Do
> allow this access for now by executing:
> # ausearch -c 'polkitd' --raw | audit2allow -M my-polkitd
> # semodule -X 300 -i my-polkitd.pp
>
>Jul 14 00:00:29 pi NetworkManager[717]: <info> [1657756829.4549] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception:
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last):
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: result = self._handle_call(
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args)
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext)
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context)))
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: build_module_type_cache()
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store:
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules'
>Jul 14 00:00:29 pi systemd-resolved[636]: eth0: Bus client set DNS server list to: 10.0.3.1, fdac:aba8:b3ae::1
>Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing polkitd from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l f44d5ae8-a0d6-40c1-82db-5f3b5a9d0462
>Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing polkitd from read access on the lnk_file localtime.
>
> ***** Plugin catchall (100. confidence) suggests **************************
>
> If you believe that polkitd should be allowed read access on the localtime lnk_file by default.
> Then you should report this as a bug.
> You can generate a local policy module to allow this access.
> Do
> allow this access for now by executing:
> # ausearch -c 'polkitd' --raw | audit2allow -M my-polkitd
> # semodule -X 300 -i my-polkitd.pp
>
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception:
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last):
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: result = self._handle_call(
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args)
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext)
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context)))
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: build_module_type_cache()
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store:
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules'
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception:
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last):
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: result = self._handle_call(
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args)
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext)
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context)))
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: build_module_type_cache()
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store:
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules'
>Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing polkitd from read access on the lnk_file localtime.
>
> ***** Plugin catchall (100. confidence) suggests **************************
>
> If you believe that polkitd should be allowed read access on the localtime lnk_file by default.
> Then you should report this as a bug.
> You can generate a local policy module to allow this access.
> Do
> allow this access for now by executing:
> # ausearch -c 'polkitd' --raw | audit2allow -M my-polkitd
> # semodule -X 300 -i my-polkitd.pp
>
>Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing polkitd from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l f44d5ae8-a0d6-40c1-82db-5f3b5a9d0462
>Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing polkitd from read access on the lnk_file localtime.
>
> ***** Plugin catchall (100. confidence) suggests **************************
>
> If you believe that polkitd should be allowed read access on the localtime lnk_file by default.
> Then you should report this as a bug.
> You can generate a local policy module to allow this access.
> Do
> allow this access for now by executing:
> # ausearch -c 'polkitd' --raw | audit2allow -M my-polkitd
> # semodule -X 300 -i my-polkitd.pp
>
>Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing polkitd from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l f44d5ae8-a0d6-40c1-82db-5f3b5a9d0462
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception:
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last):
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: result = self._handle_call(
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args)
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext)
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context)))
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: build_module_type_cache()
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store:
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules'
>Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing firewalld from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l e0ba2d69-c9bb-4f11-92e7-fd2487a3aa99
>Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing firewalld from read access on the lnk_file localtime.
>
> ***** Plugin catchall (100. confidence) suggests **************************
>
> If you believe that firewalld should be allowed read access on the localtime lnk_file by default.
> Then you should report this as a bug.
> You can generate a local policy module to allow this access.
> Do
> allow this access for now by executing:
> # ausearch -c 'firewalld' --raw | audit2allow -M my-firewalld
> # semodule -X 300 -i my-firewalld.pp
>
>Jul 14 00:00:29 pi NetworkManager[717]: <info> [1657756829.5285] dhcp6 (eth0): state changed new lease
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception:
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last):
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: result = self._handle_call(
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args)
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext)
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context)))
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: build_module_type_cache()
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store:
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules'
>Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing ModemManager from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l a29ee025-da16-4bd4-a4ed-ab6e2465ad9f
>Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing ModemManager from read access on the lnk_file localtime.
>
> ***** Plugin catchall (100. confidence) suggests **************************
>
> If you believe that ModemManager should be allowed read access on the localtime lnk_file by default.
> Then you should report this as a bug.
> You can generate a local policy module to allow this access.
> Do
> allow this access for now by executing:
> # ausearch -c 'ModemManager' --raw | audit2allow -M my-ModemManager
> # semodule -X 300 -i my-ModemManager.pp
>
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception:
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last):
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: result = self._handle_call(
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args)
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext)
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context)))
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: build_module_type_cache()
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store:
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules'
>Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing sshd from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l fb5d3036-bfaa-4e08-bf2a-b438e323148d
>Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing sshd from read access on the lnk_file localtime.
>
> ***** Plugin catchall (100. confidence) suggests **************************
>
> If you believe that sshd should be allowed read access on the localtime lnk_file by default.
> Then you should report this as a bug.
> You can generate a local policy module to allow this access.
> Do
> allow this access for now by executing:
> # ausearch -c 'sshd' --raw | audit2allow -M my-sshd
> # semodule -X 300 -i my-sshd.pp
>
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception:
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last):
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: result = self._handle_call(
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args)
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext)
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context)))
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: build_module_type_cache()
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store:
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules'
>Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing newaliases from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l 8f4eee3a-87a3-47c9-b464-837ebe9468fe
>Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing newaliases from read access on the lnk_file localtime.
>
> ***** Plugin catchall (100. confidence) suggests **************************
>
> If you believe that newaliases should be allowed read access on the localtime lnk_file by default.
> Then you should report this as a bug.
> You can generate a local policy module to allow this access.
> Do
> allow this access for now by executing:
> # ausearch -c 'newaliases' --raw | audit2allow -M my-newaliases
> # semodule -X 300 -i my-newaliases.pp
>
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception:
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last):
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: result = self._handle_call(
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args)
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext)
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context)))
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: build_module_type_cache()
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store:
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules'
>Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing postalias from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l fb4965cf-81ff-4357-ae12-5942fd8fe25c
>Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing postalias from read access on the lnk_file localtime.
>
> ***** Plugin catchall (100. confidence) suggests **************************
>
> If you believe that postalias should be allowed read access on the localtime lnk_file by default.
> Then you should report this as a bug.
> You can generate a local policy module to allow this access.
> Do
> allow this access for now by executing:
> # ausearch -c 'postalias' --raw | audit2allow -M my-postalias
> # semodule -X 300 -i my-postalias.pp
>
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception:
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last):
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: result = self._handle_call(
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args)
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext)
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context)))
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: build_module_type_cache()
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store:
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules'
>Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing postalias from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l fb4965cf-81ff-4357-ae12-5942fd8fe25c
>Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing postalias from read access on the lnk_file localtime.
>
> ***** Plugin catchall (100. confidence) suggests **************************
>
> If you believe that postalias should be allowed read access on the localtime lnk_file by default.
> Then you should report this as a bug.
> You can generate a local policy module to allow this access.
> Do
> allow this access for now by executing:
> # ausearch -c 'postalias' --raw | audit2allow -M my-postalias
> # semodule -X 300 -i my-postalias.pp
>
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception:
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last):
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: result = self._handle_call(
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args)
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext)
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context)))
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: build_module_type_cache()
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store:
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules'
>Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing postalias from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l fb4965cf-81ff-4357-ae12-5942fd8fe25c
>Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing postalias from read access on the lnk_file localtime.
>
> ***** Plugin catchall (100. confidence) suggests **************************
>
> If you believe that postalias should be allowed read access on the localtime lnk_file by default.
> Then you should report this as a bug.
> You can generate a local policy module to allow this access.
> Do > allow this access for now by executing: > # ausearch -c 'postalias' --raw | audit2allow -M my-postalias > # semodule -X 300 -i my-postalias.pp > >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception: >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last): >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: result = self._handle_call( >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context))) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: build_module_type_cache() >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store: >Jul 14 00:00:29 pi 
SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules' >Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing wpa_supplicant from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l e3e72954-1467-44c6-a0be-9d1fabcfb20c >Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing wpa_supplicant from read access on the lnk_file localtime. > > ***** Plugin catchall (100. confidence) suggests ************************** > > If you believe that wpa_supplicant should be allowed read access on the localtime lnk_file by default. > Then you should report this as a bug. > You can generate a local policy module to allow this access. > Do > allow this access for now by executing: > # ausearch -c 'wpa_supplicant' --raw | audit2allow -M my-wpasupplicant > # semodule -X 300 -i my-wpasupplicant.pp > >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception: >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last): >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: result = self._handle_call( >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File 
"/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context))) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: build_module_type_cache() >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store: >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules' >Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing sendmail from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l 0e217529-ca23-4da1-b593-2fd89cf0f1f2 >Jul 14 00:00:29 pi audit[1236]: AVC avc: denied { read } for pid=1236 comm="pmcd" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:pcp_pmcd_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0 >Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing sendmail from read access on the lnk_file localtime. > > ***** Plugin catchall (100. confidence) suggests ************************** > > If you believe that sendmail should be allowed read access on the localtime lnk_file by default. > Then you should report this as a bug. > You can generate a local policy module to allow this access. 
> Do > allow this access for now by executing: > # ausearch -c 'sendmail' --raw | audit2allow -M my-sendmail > # semodule -X 300 -i my-sendmail.pp > >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception: >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last): >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: result = self._handle_call( >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context))) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: build_module_type_cache() >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store: >Jul 14 00:00:29 pi 
SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules' >Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing postdrop from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l 4f5780fe-008c-4886-9fe3-a475dea65bc8 >Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing postdrop from read access on the lnk_file localtime. > > ***** Plugin catchall (100. confidence) suggests ************************** > > If you believe that postdrop should be allowed read access on the localtime lnk_file by default. > Then you should report this as a bug. > You can generate a local policy module to allow this access. > Do > allow this access for now by executing: > # ausearch -c 'postdrop' --raw | audit2allow -M my-postdrop > # semodule -X 300 -i my-postdrop.pp > >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception: >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last): >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: result = self._handle_call( >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", 
line 625, in get_rpm_nvr_by_scontext >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context))) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: build_module_type_cache() >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store: >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules' >Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing postalias from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l fb4965cf-81ff-4357-ae12-5942fd8fe25c >Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing postalias from read access on the lnk_file localtime. > > ***** Plugin catchall (100. confidence) suggests ************************** > > If you believe that postalias should be allowed read access on the localtime lnk_file by default. > Then you should report this as a bug. > You can generate a local policy module to allow this access. 
> Do > allow this access for now by executing: > # ausearch -c 'postalias' --raw | audit2allow -M my-postalias > # semodule -X 300 -i my-postalias.pp > >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception: >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last): >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: result = self._handle_call( >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context))) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: build_module_type_cache() >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store: >Jul 14 00:00:29 pi 
SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules' >Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing postalias from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l fb4965cf-81ff-4357-ae12-5942fd8fe25c >Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing postalias from read access on the lnk_file localtime. > > ***** Plugin catchall (100. confidence) suggests ************************** > > If you believe that postalias should be allowed read access on the localtime lnk_file by default. > Then you should report this as a bug. > You can generate a local policy module to allow this access. > Do > allow this access for now by executing: > # ausearch -c 'postalias' --raw | audit2allow -M my-postalias > # semodule -X 300 -i my-postalias.pp > >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception: >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last): >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: result = self._handle_call( >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File 
"/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context))) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: build_module_type_cache() >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store: >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules' >Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing postalias from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l fb4965cf-81ff-4357-ae12-5942fd8fe25c >Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing postalias from read access on the lnk_file localtime. > > ***** Plugin catchall (100. confidence) suggests ************************** > > If you believe that postalias should be allowed read access on the localtime lnk_file by default. > Then you should report this as a bug. > You can generate a local policy module to allow this access. 
> Do > allow this access for now by executing: > # ausearch -c 'postalias' --raw | audit2allow -M my-postalias > # semodule -X 300 -i my-postalias.pp > >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception: >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last): >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: result = self._handle_call( >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context))) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: build_module_type_cache() >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store: >Jul 14 00:00:29 pi 
SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules' >Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing pickup from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l 53c4c92f-237b-414c-aa63-3278bd6dc83d >Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing pickup from read access on the lnk_file localtime. > > ***** Plugin catchall (100. confidence) suggests ************************** > > If you believe that pickup should be allowed read access on the localtime lnk_file by default. > Then you should report this as a bug. > You can generate a local policy module to allow this access. > Do > allow this access for now by executing: > # ausearch -c 'pickup' --raw | audit2allow -M my-pickup > # semodule -X 300 -i my-pickup.pp > >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception: >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last): >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: result = self._handle_call( >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in 
get_rpm_nvr_by_scontext >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context))) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: build_module_type_cache() >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store: >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules' >Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing qmgr from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l b6a6974b-8f83-442f-b910-0f3e5324ec50 >Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing qmgr from read access on the lnk_file localtime. > > ***** Plugin catchall (100. confidence) suggests ************************** > > If you believe that qmgr should be allowed read access on the localtime lnk_file by default. > Then you should report this as a bug. > You can generate a local policy module to allow this access. 
> Do > allow this access for now by executing: > # ausearch -c 'qmgr' --raw | audit2allow -M my-qmgr > # semodule -X 300 -i my-qmgr.pp > >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception: >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last): >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: result = self._handle_call( >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context))) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: build_module_type_cache() >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store: >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: 
FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules' >Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing pickup from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l 53c4c92f-237b-414c-aa63-3278bd6dc83d >Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing pickup from read access on the lnk_file localtime. > > ***** Plugin catchall (100. confidence) suggests ************************** > > If you believe that pickup should be allowed read access on the localtime lnk_file by default. > Then you should report this as a bug. > You can generate a local policy module to allow this access. > Do > allow this access for now by executing: > # ausearch -c 'pickup' --raw | audit2allow -M my-pickup > # semodule -X 300 -i my-pickup.pp > >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception: >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last): >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: result = self._handle_call( >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext >Jul 14 
00:00:29 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context))) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: build_module_type_cache() >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store: >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules' >Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing qmgr from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l b6a6974b-8f83-442f-b910-0f3e5324ec50 >Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing qmgr from read access on the lnk_file localtime. > > ***** Plugin catchall (100. confidence) suggests ************************** > > If you believe that qmgr should be allowed read access on the localtime lnk_file by default. > Then you should report this as a bug. > You can generate a local policy module to allow this access. 
> Do > allow this access for now by executing: > # ausearch -c 'qmgr' --raw | audit2allow -M my-qmgr > # semodule -X 300 -i my-qmgr.pp > >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception: >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last): >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: result = self._handle_call( >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context))) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: build_module_type_cache() >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store: >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: 
FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules' >Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing cleanup from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l 64677dad-7a3b-4ab2-882c-022e7e77ae4a >Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing cleanup from read access on the lnk_file localtime. > > ***** Plugin catchall (100. confidence) suggests ************************** > > If you believe that cleanup should be allowed read access on the localtime lnk_file by default. > Then you should report this as a bug. > You can generate a local policy module to allow this access. > Do > allow this access for now by executing: > # ausearch -c 'cleanup' --raw | audit2allow -M my-cleanup > # semodule -X 300 -i my-cleanup.pp > >Jul 14 00:00:29 pi audit[1240]: AVC avc: denied { read } for pid=1240 comm="pmdaroot" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:pcp_pmcd_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0 >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception: >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last): >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: result = self._handle_call( >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext >Jul 14 00:00:29 pi 
SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context))) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: build_module_type_cache() >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store: >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules' >Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing cleanup from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l 64677dad-7a3b-4ab2-882c-022e7e77ae4a >Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing cleanup from read access on the lnk_file localtime. > > ***** Plugin catchall (100. confidence) suggests ************************** > > If you believe that cleanup should be allowed read access on the localtime lnk_file by default. > Then you should report this as a bug. > You can generate a local policy module to allow this access. 
> Do > allow this access for now by executing: > # ausearch -c 'cleanup' --raw | audit2allow -M my-cleanup > # semodule -X 300 -i my-cleanup.pp > >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception: >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last): >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: result = self._handle_call( >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context))) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: build_module_type_cache() >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store: >Jul 14 00:00:29 pi 
SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules' >Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing pickup from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l aedb4e09-22cb-4f7c-8d56-d0151893a67a >Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing pickup from read access on the lnk_file localtime. > > ***** Plugin catchall (100. confidence) suggests ************************** > > If you believe that pickup should be allowed read access on the localtime lnk_file by default. > Then you should report this as a bug. > You can generate a local policy module to allow this access. > Do > allow this access for now by executing: > # ausearch -c 'pickup' --raw | audit2allow -M my-pickup > # semodule -X 300 -i my-pickup.pp > >Jul 14 00:00:29 pi audit[1241]: AVC avc: denied { read } for pid=1241 comm="pmdaproc" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:pcp_pmcd_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0 >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception: >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last): >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: result = self._handle_call( >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext 
>Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context))) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: build_module_type_cache() >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store: >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules' >Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing trivial-rewrite from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l 521afb1f-74a2-4744-bec8-7deaa11548d9 >Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing trivial-rewrite from read access on the lnk_file localtime. > > ***** Plugin catchall (100. confidence) suggests ************************** > > If you believe that trivial-rewrite should be allowed read access on the localtime lnk_file by default. > Then you should report this as a bug. > You can generate a local policy module to allow this access. 
> Do > allow this access for now by executing: > # ausearch -c 'trivial-rewrite' --raw | audit2allow -M my-trivialrewrite > # semodule -X 300 -i my-trivialrewrite.pp > >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception: >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last): >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: result = self._handle_call( >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context))) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: build_module_type_cache() >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store: >Jul 14 00:00:29 pi 
SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules' >Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing trivial-rewrite from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l 521afb1f-74a2-4744-bec8-7deaa11548d9 >Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing trivial-rewrite from read access on the lnk_file localtime. > > ***** Plugin catchall (100. confidence) suggests ************************** > > If you believe that trivial-rewrite should be allowed read access on the localtime lnk_file by default. > Then you should report this as a bug. > You can generate a local policy module to allow this access. > Do > allow this access for now by executing: > # ausearch -c 'trivial-rewrite' --raw | audit2allow -M my-trivialrewrite > # semodule -X 300 -i my-trivialrewrite.pp > >Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing cleanup from read access on the lnk_file localtime. > > ***** Plugin catchall (100. confidence) suggests ************************** > > If you believe that cleanup should be allowed read access on the localtime lnk_file by default. > Then you should report this as a bug. > You can generate a local policy module to allow this access. > Do > allow this access for now by executing: > # ausearch -c 'cleanup' --raw | audit2allow -M my-cleanup > # semodule -X 300 -i my-cleanup.pp > >Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing cleanup from read access on the lnk_file localtime. 
For complete SELinux messages run: sealert -l 64677dad-7a3b-4ab2-882c-022e7e77ae4a >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception: >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last): >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: result = self._handle_call( >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context))) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: build_module_type_cache() >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store: >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or 
directory: '/var/lib/selinux/targeted/active/modules' >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception: >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last): >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: result = self._handle_call( >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context))) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: build_module_type_cache() >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store: >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: 
'/var/lib/selinux/targeted/active/modules' >Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing cleanup from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l 64677dad-7a3b-4ab2-882c-022e7e77ae4a >Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing cleanup from read access on the lnk_file localtime. > > ***** Plugin catchall (100. confidence) suggests ************************** > > If you believe that cleanup should be allowed read access on the localtime lnk_file by default. > Then you should report this as a bug. > You can generate a local policy module to allow this access. > Do > allow this access for now by executing: > # ausearch -c 'cleanup' --raw | audit2allow -M my-cleanup > # semodule -X 300 -i my-cleanup.pp > >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception: >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last): >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: result = self._handle_call( >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: 
return get_rpm_nvr_by_type(str(selinux.context_type_get(context))) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: build_module_type_cache() >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store: >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules' >Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing local from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l eb2658b5-1abf-4d18-963a-5442680b3102 >Jul 14 00:00:29 pi audit[1242]: AVC avc: denied { read } for pid=1242 comm="pmdaxfs" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:pcp_pmcd_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0 >Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing local from read access on the lnk_file localtime. > > ***** Plugin catchall (100. confidence) suggests ************************** > > If you believe that local should be allowed read access on the localtime lnk_file by default. > Then you should report this as a bug. > You can generate a local policy module to allow this access. 
> Do > allow this access for now by executing: > # ausearch -c 'local' --raw | audit2allow -M my-local > # semodule -X 300 -i my-local.pp > >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception: >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last): >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: result = self._handle_call( >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context))) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: build_module_type_cache() >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store: >Jul 14 00:00:29 pi 
SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules' >Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing local from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l eb2658b5-1abf-4d18-963a-5442680b3102 >Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing local from read access on the lnk_file localtime. > > ***** Plugin catchall (100. confidence) suggests ************************** > > If you believe that local should be allowed read access on the localtime lnk_file by default. > Then you should report this as a bug. > You can generate a local policy module to allow this access. > Do > allow this access for now by executing: > # ausearch -c 'local' --raw | audit2allow -M my-local > # semodule -X 300 -i my-local.pp > >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception: >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last): >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: result = self._handle_call( >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in 
get_rpm_nvr_by_scontext >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context))) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: build_module_type_cache() >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store: >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules' >Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing pickup from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l aedb4e09-22cb-4f7c-8d56-d0151893a67a >Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing pickup from read access on the lnk_file localtime. > > ***** Plugin catchall (100. confidence) suggests ************************** > > If you believe that pickup should be allowed read access on the localtime lnk_file by default. > Then you should report this as a bug. > You can generate a local policy module to allow this access. 
> Do > allow this access for now by executing: > # ausearch -c 'pickup' --raw | audit2allow -M my-pickup > # semodule -X 300 -i my-pickup.pp > >Jul 14 00:00:29 pi audit[1243]: AVC avc: denied { read } for pid=1243 comm="pmdalinux" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:pcp_pmcd_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0 >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception: >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last): >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: result = self._handle_call( >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context))) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: build_module_type_cache() >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File 
"/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store: >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules' >Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing local from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l eb2658b5-1abf-4d18-963a-5442680b3102 >Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing local from read access on the lnk_file localtime. > > ***** Plugin catchall (100. confidence) suggests ************************** > > If you believe that local should be allowed read access on the localtime lnk_file by default. > Then you should report this as a bug. > You can generate a local policy module to allow this access. > Do > allow this access for now by executing: > # ausearch -c 'local' --raw | audit2allow -M my-local > # semodule -X 300 -i my-local.pp > >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception: >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last): >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: result = self._handle_call( >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in 
get_rpm_nvr_by_scontext >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context))) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: build_module_type_cache() >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store: >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules' >Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing local from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l eb2658b5-1abf-4d18-963a-5442680b3102 >Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing local from read access on the lnk_file localtime. > > ***** Plugin catchall (100. confidence) suggests ************************** > > If you believe that local should be allowed read access on the localtime lnk_file by default. > Then you should report this as a bug. > You can generate a local policy module to allow this access. 
> Do > allow this access for now by executing: > # ausearch -c 'local' --raw | audit2allow -M my-local > # semodule -X 300 -i my-local.pp > >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception: >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last): >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: result = self._handle_call( >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context))) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: build_module_type_cache() >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store: >Jul 14 00:00:29 pi 
SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules' >Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing local from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l eb2658b5-1abf-4d18-963a-5442680b3102 >Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing local from read access on the lnk_file localtime. > > ***** Plugin catchall (100. confidence) suggests ************************** > > If you believe that local should be allowed read access on the localtime lnk_file by default. > Then you should report this as a bug. > You can generate a local policy module to allow this access. > Do > allow this access for now by executing: > # ausearch -c 'local' --raw | audit2allow -M my-local > # semodule -X 300 -i my-local.pp > >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception: >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last): >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: result = self._handle_call( >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in 
get_rpm_nvr_by_scontext >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context))) >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: build_module_type_cache() >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store: >Jul 14 00:00:29 pi SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules' >Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing local from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l eb2658b5-1abf-4d18-963a-5442680b3102 >Jul 14 00:00:29 pi setroubleshoot[710]: SELinux is preventing local from read access on the lnk_file localtime. > > ***** Plugin catchall (100. confidence) suggests ************************** > > If you believe that local should be allowed read access on the localtime lnk_file by default. > Then you should report this as a bug. > You can generate a local policy module to allow this access. 
> Do
> allow this access for now by executing:
> # ausearch -c 'local' --raw | audit2allow -M my-local
> # semodule -X 300 -i my-local.pp
>
>Jul 14 00:00:30 pi audit[1244]: AVC avc: denied { read } for pid=1244 comm="pmdakvm" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:pcp_pmcd_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Jul 14 00:00:30 pi audit[1244]: AVC avc: denied { search } for pid=1244 comm="pmdakvm" name="/" dev="tracefs" ino=1 scontext=system_u:system_r:pcp_pmcd_t:s0 tcontext=system_u:object_r:tracefs_t:s0 tclass=dir permissive=0
>Jul 14 00:00:30 pi systemd[1]: Started pmcd.service - Performance Metrics Collector Daemon.
>Jul 14 00:00:30 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=pmcd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:30 pi systemd[1]: Starting pmie.service - Performance Metrics Inference Engine...
>Jul 14 00:00:30 pi systemd[1]: Starting pmlogger.service - Performance Metrics Archive Logger...
>Jul 14 00:00:30 pi zezere-ignition[1053]: INFO : Ignition 2.14.0
>Jul 14 00:00:30 pi zezere-ignition[1053]: INFO : Stage: fetch
>Jul 14 00:00:30 pi zezere-ignition[1053]: INFO : no config dir at "/usr/lib/ignition/base.d"
>Jul 14 00:00:30 pi zezere-ignition[1053]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file"
>Jul 14 00:00:30 pi zezere-ignition[1053]: DEBUG : parsed url from cmdline: ""
>Jul 14 00:00:30 pi zezere-ignition[1053]: INFO : no config URL provided
>Jul 14 00:00:30 pi zezere-ignition[1053]: INFO : reading system config file "/usr/lib/ignition/user.ign"
>Jul 14 00:00:30 pi zezere-ignition[1053]: INFO : no config at "/usr/lib/ignition/user.ign"
>Jul 14 00:00:30 pi zezere-ignition[1053]: INFO : using config file at "/tmp/zezere-ignition-config-2jz8tpam.ign"
>Jul 14 00:00:30 pi zezere-ignition[1053]: DEBUG : parsing config with SHA512: 4202432df9c76a84fcaf0bb3dd81d9aec055ae366dee2bee1f88d5233bae27ac14ab0d1f9a3fbfa25e382806a76e601d9375479e30d495e751038695e77877a2
>Jul 14 00:00:30 pi zezere-ignition[1053]: INFO : GET https://provision.fedoraproject.org/netboot/aarch64/ignition/dc:a6:32:38:46:e7: attempt #1
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception:
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last):
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: result = self._handle_call(
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args)
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext)
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context)))
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: build_module_type_cache()
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store:
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules'
>Jul 14 00:00:31 pi setroubleshoot[710]: SELinux is preventing pmcd from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l 8e794a19-abba-4d8d-938c-039330fc479a
>Jul 14 00:00:31 pi setroubleshoot[710]: SELinux is preventing pmcd from read access on the lnk_file localtime.
>
> ***** Plugin catchall (100. confidence) suggests **************************
>
> If you believe that pmcd should be allowed read access on the localtime lnk_file by default.
> Then you should report this as a bug.
> You can generate a local policy module to allow this access.
> Do
> allow this access for now by executing:
> # ausearch -c 'pmcd' --raw | audit2allow -M my-pmcd
> # semodule -X 300 -i my-pmcd.pp
>
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception:
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last):
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: result = self._handle_call(
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args)
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext)
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context)))
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: build_module_type_cache()
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store:
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules'
>Jul 14 00:00:31 pi setroubleshoot[710]: SELinux is preventing pmcd from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l 8e794a19-abba-4d8d-938c-039330fc479a
>Jul 14 00:00:31 pi setroubleshoot[710]: SELinux is preventing pmcd from read access on the lnk_file localtime.
>
> ***** Plugin catchall (100. confidence) suggests **************************
>
> If you believe that pmcd should be allowed read access on the localtime lnk_file by default.
> Then you should report this as a bug.
> You can generate a local policy module to allow this access.
> Do
> allow this access for now by executing:
> # ausearch -c 'pmcd' --raw | audit2allow -M my-pmcd
> # semodule -X 300 -i my-pmcd.pp
>
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception:
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last):
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: result = self._handle_call(
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args)
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext)
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context)))
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: build_module_type_cache()
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store:
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules'
>Jul 14 00:00:31 pi setroubleshoot[710]: SELinux is preventing pmcd from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l 8e794a19-abba-4d8d-938c-039330fc479a
>Jul 14 00:00:31 pi setroubleshoot[710]: SELinux is preventing pmcd from read access on the lnk_file localtime.
>
> ***** Plugin catchall (100. confidence) suggests **************************
>
> If you believe that pmcd should be allowed read access on the localtime lnk_file by default.
> Then you should report this as a bug.
> You can generate a local policy module to allow this access.
> Do
> allow this access for now by executing:
> # ausearch -c 'pmcd' --raw | audit2allow -M my-pmcd
> # semodule -X 300 -i my-pmcd.pp
>
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception:
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last):
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: result = self._handle_call(
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args)
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext)
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context)))
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: build_module_type_cache()
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store:
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules'
>Jul 14 00:00:31 pi setroubleshoot[710]: SELinux is preventing pmcd from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l 8e794a19-abba-4d8d-938c-039330fc479a
>Jul 14 00:00:31 pi setroubleshoot[710]: SELinux is preventing pmcd from read access on the lnk_file localtime.
>
> ***** Plugin catchall (100. confidence) suggests **************************
>
> If you believe that pmcd should be allowed read access on the localtime lnk_file by default.
> Then you should report this as a bug.
> You can generate a local policy module to allow this access.
> Do
> allow this access for now by executing:
> # ausearch -c 'pmcd' --raw | audit2allow -M my-pmcd
> # semodule -X 300 -i my-pmcd.pp
>
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception:
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last):
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: result = self._handle_call(
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args)
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext)
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context)))
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: build_module_type_cache()
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store:
>Jul 14 00:00:31 pi SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules'
>Jul 14 00:00:31 pi setroubleshoot[710]: SELinux is preventing pmcd from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l 8e794a19-abba-4d8d-938c-039330fc479a
>Jul 14 00:00:31 pi setroubleshoot[710]: SELinux is preventing pmcd from read access on the lnk_file localtime.
>
> ***** Plugin catchall (100. confidence) suggests **************************
>
> If you believe that pmcd should be allowed read access on the localtime lnk_file by default.
> Then you should report this as a bug.
> You can generate a local policy module to allow this access.
> Do
> allow this access for now by executing:
> # ausearch -c 'pmcd' --raw | audit2allow -M my-pmcd
> # semodule -X 300 -i my-pmcd.pp
>
>Jul 14 00:00:31 pi systemd[1]: var-lib-containers-storage-overlay-metacopy\x2dcheck2565398540-merged.mount: Deactivated successfully.
>Jul 14 00:00:31 pi zezere-ignition[1053]: INFO : GET error: Get "https://provision.fedoraproject.org/netboot/aarch64/ignition/dc:a6:32:38:46:e7": x509: certificate has expired or is not yet valid: current time 2022-07-14T00:00:31Z is before 2022-07-19T00:00:00Z
>Jul 14 00:00:32 pi zezere-ignition[1053]: INFO : GET https://provision.fedoraproject.org/netboot/aarch64/ignition/dc:a6:32:38:46:e7: attempt #2
>Jul 14 00:00:32 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception:
>Jul 14 00:00:32 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last):
>Jul 14 00:00:32 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback
>Jul 14 00:00:32 pi SetroubleshootPrivileged.py[952]: result = self._handle_call(
>Jul 14 00:00:32 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call
>Jul 14 00:00:32 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args)
>Jul 14 00:00:32 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext
>Jul 14 00:00:32 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext)
>Jul 14 00:00:32 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext
>Jul 14 00:00:32 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context)))
>Jul 14 00:00:32 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type
>Jul 14 00:00:32 pi SetroubleshootPrivileged.py[952]: build_module_type_cache()
>Jul 14 00:00:32 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache
>Jul 14 00:00:32 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store:
>Jul 14 00:00:32 pi SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules'
>Jul 14 00:00:32 pi setroubleshoot[710]: SELinux is preventing pmcd from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l 8e794a19-abba-4d8d-938c-039330fc479a
>Jul 14 00:00:32 pi setroubleshoot[710]: SELinux is preventing pmcd from read access on the lnk_file localtime.
>
> ***** Plugin catchall (100. confidence) suggests **************************
>
> If you believe that pmcd should be allowed read access on the localtime lnk_file by default.
> Then you should report this as a bug.
> You can generate a local policy module to allow this access.
> Do
> allow this access for now by executing:
> # ausearch -c 'pmcd' --raw | audit2allow -M my-pmcd
> # semodule -X 300 -i my-pmcd.pp
>
>Jul 14 00:00:32 pi zezere-ignition[1053]: INFO : GET error: Get "https://provision.fedoraproject.org/netboot/aarch64/ignition/dc:a6:32:38:46:e7": x509: certificate has expired or is not yet valid: current time 2022-07-14T00:00:32Z is before 2022-07-19T00:00:00Z
>Jul 14 00:00:32 pi audit[1736]: AVC avc: denied { read } for pid=1736 comm="pmie" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:pcp_pmie_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Jul 14 00:00:32 pi systemd[1]: Started pmie.service - Performance Metrics Inference Engine.
>Jul 14 00:00:32 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=pmie comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Jul 14 00:00:32 pi systemd[1]: Started pmie_check.timer - Half-hourly check of PMIE instances.
>Jul 14 00:00:32 pi systemd[1]: pmie_daily.timer: Not using persistent file timestamp Sat 2022-08-20 01:54:56 UTC as it is in the future.
>Jul 14 00:00:32 pi systemd[1]: Started pmie_daily.timer - Daily processing of PMIE logs.
>Jul 14 00:00:32 pi audit[1727]: AVC avc: denied { read } for pid=1727 comm="pmlogger" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:pcp_pmlogger_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Jul 14 00:00:32 pi audit[1727]: AVC avc: denied { read } for pid=1727 comm="pmlogger" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:pcp_pmlogger_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Jul 14 00:00:32 pi chronyd[692]: Selected source 193.158.22.13 (2.fedora.pool.ntp.org)
>Aug 20 17:12:57 pi systemd-journald[545]: Oldest entry in /var/log/journal/f6bc28022d7945348b2f18008b67b029/system.journal is older than the configured file retention duration (1month), suggesting rotation.
>Aug 20 17:12:58 pi systemd-journald[545]: /var/log/journal/f6bc28022d7945348b2f18008b67b029/system.journal: Journal header limits reached or header out-of-date, rotating.
>Aug 20 17:12:55 pi audit[692]: AVC avc: denied { read } for pid=692 comm="chronyd" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:chronyd_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Aug 20 17:12:55 pi audit[692]: AVC avc: denied { read } for pid=692 comm="chronyd" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:chronyd_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Aug 20 17:12:55 pi audit[1236]: AVC avc: denied { read } for pid=1236 comm="pmcd" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:pcp_pmcd_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Aug 20 17:12:55 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=pmie_farm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:12:55 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=pmie_check comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:12:55 pi audit[1757]: AVC avc: denied { read } for pid=1757 comm="pmdalinux" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:pcp_pmcd_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Aug 20 17:12:55 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=pmie_daily comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:12:56 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=greenboot-healthcheck comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:12:56 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=greenboot-task-runner comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:12:56 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=pmlogger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
>Aug 20 17:12:56 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=pmie_check comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:12:56 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=greenboot-grub2-set-success comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:12:56 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=pmlogger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:12:56 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=pmlogger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:12:56 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=zezere_ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:12:56 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=zezere_ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:12:56 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=pmlogger_farm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:12:57 pi audit[692]: AVC avc: denied { read } for pid=692 comm="chronyd" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:chronyd_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Aug 20 17:12:57 pi audit[692]: AVC avc: denied { read } for pid=692 comm="chronyd" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:chronyd_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Aug 20 17:12:57 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=pmie_daily comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:12:58 pi audit[2649]: AVC avc: denied { read } for pid=2649 comm="pmlogger" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:pcp_pmlogger_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Aug 20 17:12:58 pi audit[2649]: AVC avc: denied { read } for pid=2649 comm="pmlogger" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:pcp_pmlogger_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Aug 20 17:12:58 pi zezere-ignition[1053]: INFO : GET https://provision.fedoraproject.org/netboot/aarch64/ignition/dc:a6:32:38:46:e7: attempt #3
>Aug 20 17:12:58 pi zezere-ignition[1053]: INFO : GET result: Not Found
>Aug 20 17:12:58 pi zezere-ignition[1053]: WARNING : failed to fetch config: resource not found
>Aug 20 17:12:58 pi zezere-ignition[1053]: CRITICAL : failed to acquire config: resource not found
>Aug 20 17:12:58 pi zezere-ignition[1053]: CRITICAL : Ignition failed: resource not found
>Aug 20 17:12:58 pi greenboot[1002]: Script '01_update_platforms_check.sh' FAILURE (exit code '1'). Continuing...
>Aug 20 17:12:58 pi greenboot[1002]: Running Required Health Check Scripts...
>Aug 20 17:12:58 pi greenboot[1002]: Running Wanted Health Check Scripts...
>Jul 14 00:00:32 pi chronyd[692]: System clock wrong by 3258742.954349 seconds
>Aug 20 17:12:55 pi systemd-resolved[636]: Clock change detected. Flushing caches.
>Aug 20 17:12:58 pi 01_update_platforms_check.sh[1137]: There are problems connecting with the following URLs:
>Aug 20 17:12:58 pi 01_update_platforms_check.sh[1137]: https://ostree.fedoraproject.org/iot
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception:
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last):
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: result = self._handle_call(
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args)
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext)
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context)))
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: build_module_type_cache()
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store:
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules'
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception:
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last):
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: result = self._handle_call(
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args)
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext)
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context)))
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type
>Aug 20 17:12:58 pi zezere-ignition[2115]: INFO : Ignition 2.14.0
>Aug 20 17:12:58 pi zezere-ignition[2115]: INFO : Stage: disks
>Aug 20 17:12:58 pi zezere-ignition[2115]: INFO : no config dir at "/usr/lib/ignition/base.d"
>Aug 20 17:12:58 pi zezere-ignition[2115]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file"
>Aug 20 17:12:58 pi zezere-ignition[2115]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory
>Aug 20 17:12:58 pi zezere-ignition[2115]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory
>Aug 20 17:12:55 pi chronyd[692]: System clock was stepped by 3258742.954349 seconds
>Aug 20 17:12:55 pi systemd[1]: Starting pmie_farm.service - pmie farm service...
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: build_module_type_cache()
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store:
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules'
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception:
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last):
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: result = self._handle_call(
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args)
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext)
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context)))
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: build_module_type_cache()
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store:
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules'
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception:
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last):
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: result = self._handle_call(
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args)
>Aug 20 17:12:58 pi
SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext >Aug 20 17:12:58 pi zezere-ignition[2128]: INFO : Ignition 2.14.0 >Aug 20 17:12:58 pi zezere-ignition[2128]: INFO : Stage: mount >Aug 20 17:12:58 pi zezere-ignition[2128]: INFO : no config dir at "/usr/lib/ignition/base.d" >Aug 20 17:12:58 pi zezere-ignition[2128]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file" >Aug 20 17:12:58 pi zezere-ignition[2128]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory >Aug 20 17:12:58 pi zezere-ignition[2128]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory >Aug 20 17:12:55 pi chronyd[692]: System clock TAI offset set to 37 seconds >Aug 20 17:12:55 pi systemd[1]: Starting pmie_check.service - Check PMIE instances are running... >Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext) >Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext >Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context))) >Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type >Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: build_module_type_cache() >Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache >Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store: >Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules' >Aug 20 17:12:58 pi 
SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception: >Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last): >Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback >Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: result = self._handle_call( >Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call >Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args) >Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext >Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext) >Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext >Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context))) >Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type >Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: build_module_type_cache() >Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache >Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store: >Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules' >Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: The 
call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception: >Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last): >Aug 20 17:12:58 pi zezere-ignition[2141]: INFO : Ignition 2.14.0 >Aug 20 17:12:58 pi zezere-ignition[2141]: INFO : Stage: files >Aug 20 17:12:58 pi zezere-ignition[2141]: INFO : no config dir at "/usr/lib/ignition/base.d" >Aug 20 17:12:58 pi zezere-ignition[2141]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file" >Aug 20 17:12:58 pi zezere-ignition[2141]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory >Aug 20 17:12:58 pi zezere-ignition[2141]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory >Aug 20 17:12:56 pi setroubleshoot[710]: failed to retrieve rpm info for path '/sys/kernel/tracing': >Aug 20 17:12:55 pi systemd[1]: Starting pmie_daily.service - Process PMIE logs... >Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback >Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: result = self._handle_call( >Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call >Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args) >Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext >Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext) >Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext >Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context))) >Aug 20 
17:12:58 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type >Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: build_module_type_cache() >Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache >Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store: >Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules' >Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception: >Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last): >Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback >Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: result = self._handle_call( >Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call >Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args) >Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext >Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext) >Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext >Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context))) >Aug 20 17:12:58 pi 
SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type >Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: build_module_type_cache() >Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache >Aug 20 17:12:58 pi zezere-ignition[2153]: INFO : Ignition 2.14.0 >Aug 20 17:12:58 pi zezere-ignition[2153]: INFO : Stage: umount >Aug 20 17:12:58 pi zezere-ignition[2153]: INFO : no config dir at "/usr/lib/ignition/base.d" >Aug 20 17:12:58 pi zezere-ignition[2153]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file" >Aug 20 17:12:58 pi zezere-ignition[2153]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory >Aug 20 17:12:58 pi zezere-ignition[2153]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory >Aug 20 17:12:56 pi setroubleshoot[710]: SELinux is preventing pmdakvm from search access on the directory /sys/kernel/tracing. For complete SELinux messages run: sealert -l 391c5a59-f45c-482c-b943-313c7461ce95 >Aug 20 17:12:55 pi systemd[1]: Started pmie_farm.service - pmie farm service. 
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store:
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules'
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception:
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last):
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: result = self._handle_call(
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args)
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext)
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context)))
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: build_module_type_cache()
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store:
>Aug 20 17:12:58 pi SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules'
>Aug 20 17:12:58 pi zezere-ignition[1012]: Running stage fetch with config file /tmp/zezere-ignition-config-2jz8tpam.ign
>Aug 20 17:12:58 pi zezere-ignition[1012]: Running stage disks with config file /tmp/zezere-ignition-config-2jz8tpam.ign
>Aug 20 17:12:58 pi zezere-ignition[1012]: Running stage mount with config file /tmp/zezere-ignition-config-2jz8tpam.ign
>Aug 20 17:12:58 pi zezere-ignition[1012]: Running stage files with config file /tmp/zezere-ignition-config-2jz8tpam.ign
>Aug 20 17:12:58 pi zezere-ignition[1012]: Running stage umount with config file /tmp/zezere-ignition-config-2jz8tpam.ign
>Aug 20 17:12:56 pi setroubleshoot[710]: SELinux is preventing pmie from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l 969629a8-7dc2-4596-bc46-94d73f0f3d90
>Aug 20 17:12:55 pi systemd[1]: Started pmie_farm_check.timer - Half-hourly check of pmie farm instances.
>Aug 20 17:12:58 pi greenboot[2053]: Boot Status is GREEN - Health Check SUCCESS
>Aug 20 17:12:58 pi greenboot[2053]: Running Green Scripts...
>Aug 20 17:12:56 pi setroubleshoot[710]: SELinux is preventing pmlogger from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l 2cfa4e5c-fdb9-4616-80fc-ee24fde56ecc
>Aug 20 17:12:55 pi systemd[1]: Started pmie_check.service - Check PMIE instances are running.
>Aug 20 17:12:56 pi setroubleshoot[710]: SELinux is preventing pmlogger from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l 2cfa4e5c-fdb9-4616-80fc-ee24fde56ecc
>Aug 20 17:12:55 pi systemd[1]: Started pmie_daily.service - Process PMIE logs.
>Aug 20 17:12:57 pi setroubleshoot[710]: SELinux is preventing chronyd from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l 84107c7f-c559-4be1-9c6d-4fc04980bcf4
>Aug 20 17:12:56 pi systemd[1]: Finished greenboot-healthcheck.service - greenboot Health Checks Runner.
>Aug 20 17:12:57 pi setroubleshoot[710]: SELinux is preventing chronyd from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l 84107c7f-c559-4be1-9c6d-4fc04980bcf4
>Aug 20 17:12:56 pi systemd[1]: Reached target boot-complete.target - Boot Completion Check.
>Aug 20 17:12:57 pi setroubleshoot[710]: SELinux is preventing pmcd from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l 8e794a19-abba-4d8d-938c-039330fc479a
>Aug 20 17:12:56 pi setroubleshoot[710]: SELinux is preventing pmdakvm from search access on the directory /sys/kernel/tracing.
>
> ***** Plugin catchall (100. confidence) suggests **************************
>
> If you believe that pmdakvm should be allowed search access on the tracing directory by default.
> Then you should report this as a bug.
> You can generate a local policy module to allow this access.
> Do
> allow this access for now by executing:
> # ausearch -c 'pmdakvm' --raw | audit2allow -M my-pmdakvm
> # semodule -X 300 -i my-pmdakvm.pp
>
>Aug 20 17:12:57 pi setroubleshoot[710]: SELinux is preventing pmcd from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l 8e794a19-abba-4d8d-938c-039330fc479a
>Aug 20 17:12:56 pi setroubleshoot[710]: SELinux is preventing pmie from read access on the lnk_file localtime.
>
> ***** Plugin catchall (100. confidence) suggests **************************
>
> If you believe that pmie should be allowed read access on the localtime lnk_file by default.
> Then you should report this as a bug.
> You can generate a local policy module to allow this access.
> Do
> allow this access for now by executing:
> # ausearch -c 'pmie' --raw | audit2allow -M my-pmie
> # semodule -X 300 -i my-pmie.pp
>
>Aug 20 17:12:56 pi systemd[1]: Starting greenboot-grub2-set-success.service - Mark boot as successful in grubenv...
>Aug 20 17:12:56 pi systemd[1]: Starting greenboot-task-runner.service - greenboot Success Scripts Runner...
>Aug 20 17:12:56 pi setroubleshoot[710]: SELinux is preventing pmlogger from read access on the lnk_file localtime.
>
> ***** Plugin catchall (100. confidence) suggests **************************
>
> If you believe that pmlogger should be allowed read access on the localtime lnk_file by default.
> Then you should report this as a bug.
> You can generate a local policy module to allow this access.
> Do
> allow this access for now by executing:
> # ausearch -c 'pmlogger' --raw | audit2allow -M my-pmlogger
> # semodule -X 300 -i my-pmlogger.pp
>
>Aug 20 17:12:56 pi systemd[1]: Finished greenboot-task-runner.service - greenboot Success Scripts Runner.
>Aug 20 17:12:56 pi systemd[1]: Starting greenboot-status.service - greenboot MotD Generator...
>Aug 20 17:12:56 pi setroubleshoot[710]: SELinux is preventing pmlogger from read access on the lnk_file localtime.
>
> ***** Plugin catchall (100. confidence) suggests **************************
>
> If you believe that pmlogger should be allowed read access on the localtime lnk_file by default.
> Then you should report this as a bug.
> You can generate a local policy module to allow this access.
> Do
> allow this access for now by executing:
> # ausearch -c 'pmlogger' --raw | audit2allow -M my-pmlogger
> # semodule -X 300 -i my-pmlogger.pp
>
>Aug 20 17:12:56 pi systemd[1]: pmlogger.service: Failed with result 'protocol'.
>Aug 20 17:12:56 pi systemd[1]: Failed to start pmlogger.service - Performance Metrics Archive Logger.
>Aug 20 17:12:56 pi systemd[1]: pmlogger.service: Consumed 1.643s CPU time.
>Aug 20 17:12:56 pi systemd[1]: Started pmlogger_check.timer - Half-hourly check of pmlogger instances.
>Aug 20 17:12:56 pi systemd[1]: Started pmlogger_daily.timer - Daily processing of archive logs.
>Aug 20 17:12:56 pi systemd[1]: Starting pmlogger_farm.service - pmlogger farm service...
>Aug 20 17:12:56 pi systemd[1]: pmie_check.service: Deactivated successfully.
>Aug 20 17:12:56 pi systemd[1]: Finished greenboot-grub2-set-success.service - Mark boot as successful in grubenv.
>Aug 20 17:12:56 pi systemd[1]: pmlogger.service: Scheduled restart job, restart counter is at 1.
>Aug 20 17:12:56 pi systemd[1]: Stopped pmlogger.service - Performance Metrics Archive Logger.
>Aug 20 17:12:56 pi systemd[1]: pmlogger.service: Consumed 1.643s CPU time.
>Aug 20 17:12:56 pi systemd[1]: Starting pmlogger.service - Performance Metrics Archive Logger...
>Aug 20 17:12:56 pi systemd[1]: zezere_ignition.service: Deactivated successfully.
>Aug 20 17:12:56 pi systemd[1]: Finished zezere_ignition.service - Run Ignition for Zezere.
>Aug 20 17:12:56 pi systemd[1]: Started pmlogger_farm.service - pmlogger farm service.
>Aug 20 17:12:56 pi systemd[1]: Started pmlogger_farm_check.timer - Half-hourly check of pmlogger farm instances.
>Aug 20 17:12:56 pi systemd[1]: Reached target timers.target - Timer Units.
>Aug 20 17:12:57 pi setroubleshoot[710]: SELinux is preventing chronyd from read access on the lnk_file localtime.
>
> ***** Plugin catchall (100. confidence) suggests **************************
>
> If you believe that chronyd should be allowed read access on the localtime lnk_file by default.
> Then you should report this as a bug.
> You can generate a local policy module to allow this access.
> Do
> allow this access for now by executing:
> # ausearch -c 'chronyd' --raw | audit2allow -M my-chronyd
> # semodule -X 300 -i my-chronyd.pp
>
>Aug 20 17:12:57 pi setroubleshoot[710]: SELinux is preventing chronyd from read access on the lnk_file localtime.
>
> ***** Plugin catchall (100. confidence) suggests **************************
>
> If you believe that chronyd should be allowed read access on the localtime lnk_file by default.
> Then you should report this as a bug.
> You can generate a local policy module to allow this access.
> Do
> allow this access for now by executing:
> # ausearch -c 'chronyd' --raw | audit2allow -M my-chronyd
> # semodule -X 300 -i my-chronyd.pp
>
>Aug 20 17:12:57 pi setroubleshoot[710]: SELinux is preventing pmcd from read access on the lnk_file localtime.
>
> ***** Plugin catchall (100. confidence) suggests **************************
>
> If you believe that pmcd should be allowed read access on the localtime lnk_file by default.
> Then you should report this as a bug.
> You can generate a local policy module to allow this access.
> Do
> allow this access for now by executing:
> # ausearch -c 'pmcd' --raw | audit2allow -M my-pmcd
> # semodule -X 300 -i my-pmcd.pp
>
>Aug 20 17:12:57 pi setroubleshoot[710]: SELinux is preventing pmcd from read access on the lnk_file localtime.
>
> ***** Plugin catchall (100. confidence) suggests **************************
>
> If you believe that pmcd should be allowed read access on the localtime lnk_file by default.
> Then you should report this as a bug.
> You can generate a local policy module to allow this access.
> Do
> allow this access for now by executing:
> # ausearch -c 'pmcd' --raw | audit2allow -M my-pmcd
> # semodule -X 300 -i my-pmcd.pp
>
>Aug 20 17:12:57 pi systemd[1]: pmie_daily.service: Deactivated successfully.
>Aug 20 17:12:57 pi systemd[1]: pmie_daily.service: Consumed 1.508s CPU time.
>Aug 20 17:12:58 pi systemd[1]: Created slice machine.slice - Slice /machine.
>Aug 20 17:12:58 pi systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
>Aug 20 17:12:58 pi systemd[1]: Created slice machine-libpod_pod_79ea372dd1a8ce7ef2c969cbd7cf75ec2fbd961e91b694a058907d967e4fb842.slice - cgroup machine-libpod_pod_79ea372dd1a8ce7ef2c969cbd7cf75ec2fbd961e91b694a058907d967e4fb842.slice.
>Aug 20 17:12:58 pi systemd[1]: Created slice machine-libpod_pod_b826c337d5e8277e5a9d0f9b433cad0636147b1b1c02e195c23f8ddc4ced0b45.slice - cgroup machine-libpod_pod_b826c337d5e8277e5a9d0f9b433cad0636147b1b1c02e195c23f8ddc4ced0b45.slice.
>Aug 20 17:12:58 pi podman[1013]: 2022-08-20 17:12:58.892061787 +0000 UTC m=+7.127930815 system refresh
>Aug 20 17:12:59 pi systemd[1]: Created slice machine-libpod_pod_a8df0c44b354b8703bdeea30933a12cc7135f89fa5d006e39599c07736d46755.slice - cgroup machine-libpod_pod_a8df0c44b354b8703bdeea30933a12cc7135f89fa5d006e39599c07736d46755.slice.
>Aug 20 17:12:59 pi systemd[1]: Created slice machine-libpod_pod_645e04616c2c4282e0f20da2eafaa5fce670b11b756fe6cf042e2e027706279d.slice - cgroup machine-libpod_pod_645e04616c2c4282e0f20da2eafaa5fce670b11b756fe6cf042e2e027706279d.slice.
>Aug 20 17:12:59 pi systemd[1]: Created slice machine-libpod_pod_69c38bffc6cf4d82ae114c9532823ec24762e5f289ec8e175316c69219940892.slice - cgroup machine-libpod_pod_69c38bffc6cf4d82ae114c9532823ec24762e5f289ec8e175316c69219940892.slice.
>Aug 20 17:12:59 pi systemd[1]: Created slice machine-libpod_pod_3731cc27db45225d8d891c1410b72a0ff670500d479e8d00146b464e33981b33.slice - cgroup machine-libpod_pod_3731cc27db45225d8d891c1410b72a0ff670500d479e8d00146b464e33981b33.slice.
>Aug 20 17:12:59 pi audit[2649]: AVC avc: denied { read } for pid=2649 comm="pmlogger" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:pcp_pmlogger_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Aug 20 17:12:59 pi audit[2649]: AVC avc: denied { read } for pid=2649 comm="pmlogger" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:pcp_pmlogger_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Aug 20 17:12:59 pi audit[2649]: AVC avc: denied { read } for pid=2649 comm="pmlogger" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:pcp_pmlogger_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Aug 20 17:12:59 pi podman[1013]: 2022-08-20 17:12:58.910691445 +0000 UTC m=+7.146560529 image pull quay.io/oauth2-proxy/oauth2-proxy
>Aug 20 17:12:59 pi audit[2649]: AVC avc: denied { read } for pid=2649 comm="pmlogger" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:pcp_pmlogger_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Aug 20 17:12:59 pi audit[1236]: AVC avc: denied { read } for pid=1236 comm="pmcd" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:pcp_pmcd_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Aug 20 17:12:59 pi audit[1236]: AVC avc: denied { read } for pid=1236 comm="pmcd" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:pcp_pmcd_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0
>Aug 20 17:12:59 pi systemd[1]: Started pmlogger.service - Performance Metrics Archive Logger.
>Aug 20 17:12:59 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=pmlogger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:12:59 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception:
>Aug 20 17:12:59 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last):
>Aug 20 17:12:59 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback
>Aug 20 17:12:59 pi SetroubleshootPrivileged.py[952]: result = self._handle_call(
>Aug 20 17:12:59 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call
>Aug 20 17:12:59 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args)
>Aug 20 17:12:59 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext
>Aug 20 17:12:59 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext)
>Aug 20 17:12:59 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext
>Aug 20 17:12:59 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context)))
>Aug 20 17:12:59 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type
>Aug 20 17:12:59 pi SetroubleshootPrivileged.py[952]: build_module_type_cache()
>Aug 20 17:12:59 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache
>Aug 20 17:12:59 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store:
>Aug 20 17:12:59 pi SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules'
>Aug 20 17:12:59 pi setroubleshoot[710]: SELinux is preventing chronyd from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l 84107c7f-c559-4be1-9c6d-4fc04980bcf4
>Aug 20 17:12:59 pi setroubleshoot[710]: SELinux is preventing chronyd from read access on the lnk_file localtime.
>
> ***** Plugin catchall (100. confidence) suggests **************************
>
> If you believe that chronyd should be allowed read access on the localtime lnk_file by default.
> Then you should report this as a bug.
> You can generate a local policy module to allow this access.
> Do
> allow this access for now by executing:
> # ausearch -c 'chronyd' --raw | audit2allow -M my-chronyd
> # semodule -X 300 -i my-chronyd.pp
>
>Aug 20 17:12:59 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception:
>Aug 20 17:12:59 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last):
>Aug 20 17:12:59 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback
>Aug 20 17:12:59 pi SetroubleshootPrivileged.py[952]: result = self._handle_call(
>Aug 20 17:12:59 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call
>Aug 20 17:12:59 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args)
>Aug 20 17:12:59 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext
>Aug 20 17:12:59 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext)
>Aug 20 17:12:59 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext
>Aug 20 17:12:59 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context)))
>Aug 20 17:12:59 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type
>Aug 20 17:12:59 pi SetroubleshootPrivileged.py[952]: build_module_type_cache()
>Aug 20 17:12:59 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache
>Aug 20 17:12:59 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store:
>Aug 20 17:12:59 pi SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules'
>Aug 20 17:12:59 pi setroubleshoot[710]: SELinux is preventing chronyd from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l 84107c7f-c559-4be1-9c6d-4fc04980bcf4
>Aug 20 17:12:59 pi setroubleshoot[710]: SELinux is preventing chronyd from read access on the lnk_file localtime.
>
> ***** Plugin catchall (100. confidence) suggests **************************
>
> If you believe that chronyd should be allowed read access on the localtime lnk_file by default.
> Then you should report this as a bug.
> You can generate a local policy module to allow this access.
> Do
> allow this access for now by executing:
> # ausearch -c 'chronyd' --raw | audit2allow -M my-chronyd
> # semodule -X 300 -i my-chronyd.pp
>
>Aug 20 17:12:59 pi podman[1016]: 2022-08-20 17:12:58.916476971 +0000 UTC m=+7.158349356 image pull docker.io/vaultwarden/server:latest
>Aug 20 17:12:59 pi podman[1030]: 2022-08-20 17:12:58.909677951 +0000 UTC m=+7.155386829 image pull docker.io/pihole/pihole:latest
>Aug 20 17:13:00 pi podman[1029]: 2022-08-20 17:12:58.910616362 +0000 UTC m=+7.147894059 image pull docker.io/jc21/nginx-proxy-manager:latest
>Aug 20 17:13:00 pi audit[2741]: CRYPTO_KEY_USER pid=2741 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=destroy kind=server fp=SHA256:91:66:5d:fc:b4:87:a9:5b:84:e9:df:57:a3:9f:93:77:1b:f7:ee:ca:a4:ed:1b:f9:44:78:e6:4c:a8:27:4e:43 direction=? spid=2741 suid=0 exe="/usr/sbin/sshd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:13:00 pi audit[2740]: CRYPTO_SESSION pid=2740 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=start direction=from-server cipher=aes256-gcm@openssh.com ksize=256 mac=<implicit> pfs=curve25519-sha256 spid=2741 suid=74 rport=55144 laddr=10.0.3.10 lport=22 exe="/usr/sbin/sshd" hostname=? addr=10.0.1.19 terminal=? res=success'
>Aug 20 17:13:00 pi audit[2740]: CRYPTO_SESSION pid=2740 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=start direction=from-client cipher=aes256-gcm@openssh.com ksize=256 mac=<implicit> pfs=curve25519-sha256 spid=2741 suid=74 rport=55144 laddr=10.0.3.10 lport=22 exe="/usr/sbin/sshd" hostname=? addr=10.0.1.19 terminal=? res=success'
>Aug 20 17:13:00 pi kernel: overlayfs: idmapped layers are currently not supported
>Aug 20 17:13:00 pi systemd[1]: var-lib-containers-storage-overlay-compat1964943507-lower\x2dmapped.mount: Deactivated successfully.
>Aug 20 17:13:00 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception: >Aug 20 17:13:00 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last): >Aug 20 17:13:00 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback >Aug 20 17:13:00 pi SetroubleshootPrivileged.py[952]: result = self._handle_call( >Aug 20 17:13:00 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call >Aug 20 17:13:00 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args) >Aug 20 17:13:00 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext >Aug 20 17:13:00 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext) >Aug 20 17:13:00 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext >Aug 20 17:13:00 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context))) >Aug 20 17:13:00 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type >Aug 20 17:13:00 pi SetroubleshootPrivileged.py[952]: build_module_type_cache() >Aug 20 17:13:00 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache >Aug 20 17:13:00 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store: >Aug 20 17:13:00 pi SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules' >Aug 20 17:13:00 pi 
setroubleshoot[710]: SELinux is preventing pmlogger from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l 2cfa4e5c-fdb9-4616-80fc-ee24fde56ecc >Aug 20 17:13:00 pi setroubleshoot[710]: SELinux is preventing pmlogger from read access on the lnk_file localtime. > > ***** Plugin catchall (100. confidence) suggests ************************** > > If you believe that pmlogger should be allowed read access on the localtime lnk_file by default. > Then you should report this as a bug. > You can generate a local policy module to allow this access. > Do > allow this access for now by executing: > # ausearch -c 'pmlogger' --raw | audit2allow -M my-pmlogger > # semodule -X 300 -i my-pmlogger.pp > >Aug 20 17:13:00 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception: >Aug 20 17:13:00 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last): >Aug 20 17:13:00 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback >Aug 20 17:13:00 pi SetroubleshootPrivileged.py[952]: result = self._handle_call( >Aug 20 17:13:00 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call >Aug 20 17:13:00 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args) >Aug 20 17:13:00 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext >Aug 20 17:13:00 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext) >Aug 20 17:13:00 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext >Aug 20 17:13:00 pi SetroubleshootPrivileged.py[952]: return 
get_rpm_nvr_by_type(str(selinux.context_type_get(context))) >Aug 20 17:13:00 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type >Aug 20 17:13:00 pi SetroubleshootPrivileged.py[952]: build_module_type_cache() >Aug 20 17:13:00 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache >Aug 20 17:13:00 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store: >Aug 20 17:13:00 pi SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules' >Aug 20 17:13:00 pi setroubleshoot[710]: SELinux is preventing pmlogger from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l 2cfa4e5c-fdb9-4616-80fc-ee24fde56ecc >Aug 20 17:13:00 pi setroubleshoot[710]: SELinux is preventing pmlogger from read access on the lnk_file localtime. > > ***** Plugin catchall (100. confidence) suggests ************************** > > If you believe that pmlogger should be allowed read access on the localtime lnk_file by default. > Then you should report this as a bug. > You can generate a local policy module to allow this access. > Do > allow this access for now by executing: > # ausearch -c 'pmlogger' --raw | audit2allow -M my-pmlogger > # semodule -X 300 -i my-pmlogger.pp > >Aug 20 17:13:00 pi audit[2740]: USER_AUTH pid=2740 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=pubkey_auth grantors=auth-key acct="pi" exe="/usr/sbin/sshd" hostname=? addr=10.0.1.19 terminal=? 
res=success' >Aug 20 17:13:00 pi audit[2740]: CRYPTO_KEY_USER pid=2740 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=negotiate kind=auth-key fp=SHA256:2e:dc:cb:66:f4:70:b1:00:a3:c2:a6:15:6e:67:0a:14:0a:51:90:0f:f4:75:f6:88:9e:ca:60:6b:cc:c9:e0:5a exe="/usr/sbin/sshd" hostname=? addr=10.0.1.19 terminal=? res=success' >Aug 20 17:13:01 pi podman[1015]: 2022-08-20 17:12:58.911194044 +0000 UTC m=+7.148484371 image pull docker.io/jc21/nginx-proxy-manager:latest >Aug 20 17:13:01 pi podman[1023]: >Aug 20 17:13:01 pi audit[2740]: USER_ACCT pid=2740 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=PAM:accounting grantors=pam_unix,pam_localuser acct="pi" exe="/usr/sbin/sshd" hostname=10.0.1.19 addr=10.0.1.19 terminal=ssh res=success' >Aug 20 17:13:01 pi audit[2740]: AVC avc: denied { read } for pid=2740 comm="sshd" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:sshd_t:s0-s0:c0.c1023 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0 >Aug 20 17:13:01 pi sshd[2740]: Accepted publickey for pi from 10.0.1.19 port 55144 ssh2: RSA SHA256:LtzLZvRwsQCjwqYVbmcKFApRkA/0dfaInspga8zJ4Fo >Aug 20 17:13:01 pi audit[2740]: CRYPTO_KEY_USER pid=2740 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=destroy kind=session fp=? direction=both spid=2741 suid=74 rport=55144 laddr=10.0.3.10 lport=22 exe="/usr/sbin/sshd" hostname=? addr=10.0.1.19 terminal=? 
res=success' >Aug 20 17:13:01 pi audit[2740]: CRED_ACQ pid=2740 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=PAM:setcred grantors=pam_env,pam_localuser,pam_unix acct="pi" exe="/usr/sbin/sshd" hostname=10.0.1.19 addr=10.0.1.19 terminal=ssh res=success' >Aug 20 17:13:01 pi audit[2740]: USER_ROLE_CHANGE pid=2740 uid=0 auid=1000 ses=1 subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 msg='pam: default-context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 selected-context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 exe="/usr/sbin/sshd" hostname=10.0.1.19 addr=10.0.1.19 terminal=ssh res=success' >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception: >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last): >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: result = self._handle_call( >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args) >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext) >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context))) >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: File 
"/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: build_module_type_cache() >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store: >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules' >Aug 20 17:13:01 pi setroubleshoot[710]: SELinux is preventing pmlogger from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l 2cfa4e5c-fdb9-4616-80fc-ee24fde56ecc >Aug 20 17:13:01 pi setroubleshoot[710]: SELinux is preventing pmlogger from read access on the lnk_file localtime. > > ***** Plugin catchall (100. confidence) suggests ************************** > > If you believe that pmlogger should be allowed read access on the localtime lnk_file by default. > Then you should report this as a bug. > You can generate a local policy module to allow this access. 
> Do > allow this access for now by executing: > # ausearch -c 'pmlogger' --raw | audit2allow -M my-pmlogger > # semodule -X 300 -i my-pmlogger.pp > >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception: >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last): >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: result = self._handle_call( >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args) >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext) >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context))) >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: build_module_type_cache() >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store: >Aug 20 17:13:01 pi 
SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules' >Aug 20 17:13:01 pi setroubleshoot[710]: SELinux is preventing pmlogger from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l 2cfa4e5c-fdb9-4616-80fc-ee24fde56ecc >Aug 20 17:13:01 pi setroubleshoot[710]: SELinux is preventing pmlogger from read access on the lnk_file localtime. > > ***** Plugin catchall (100. confidence) suggests ************************** > > If you believe that pmlogger should be allowed read access on the localtime lnk_file by default. > Then you should report this as a bug. > You can generate a local policy module to allow this access. > Do > allow this access for now by executing: > # ausearch -c 'pmlogger' --raw | audit2allow -M my-pmlogger > # semodule -X 300 -i my-pmlogger.pp > >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception: >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last): >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: result = self._handle_call( >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args) >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext) >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", 
line 625, in get_rpm_nvr_by_scontext >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context))) >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: build_module_type_cache() >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store: >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules' >Aug 20 17:13:01 pi setroubleshoot[710]: SELinux is preventing pmlogger from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l 2cfa4e5c-fdb9-4616-80fc-ee24fde56ecc >Aug 20 17:13:01 pi setroubleshoot[710]: SELinux is preventing pmlogger from read access on the lnk_file localtime. > > ***** Plugin catchall (100. confidence) suggests ************************** > > If you believe that pmlogger should be allowed read access on the localtime lnk_file by default. > Then you should report this as a bug. > You can generate a local policy module to allow this access. 
> Do > allow this access for now by executing: > # ausearch -c 'pmlogger' --raw | audit2allow -M my-pmlogger > # semodule -X 300 -i my-pmlogger.pp > >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception: >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last): >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: result = self._handle_call( >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args) >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext) >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context))) >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: build_module_type_cache() >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store: >Aug 20 17:13:01 pi 
SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules' >Aug 20 17:13:01 pi setroubleshoot[710]: SELinux is preventing pmlogger from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l 2cfa4e5c-fdb9-4616-80fc-ee24fde56ecc >Aug 20 17:13:01 pi setroubleshoot[710]: SELinux is preventing pmlogger from read access on the lnk_file localtime. > > ***** Plugin catchall (100. confidence) suggests ************************** > > If you believe that pmlogger should be allowed read access on the localtime lnk_file by default. > Then you should report this as a bug. > You can generate a local policy module to allow this access. > Do > allow this access for now by executing: > # ausearch -c 'pmlogger' --raw | audit2allow -M my-pmlogger > # semodule -X 300 -i my-pmlogger.pp > >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception: >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last): >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: result = self._handle_call( >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args) >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext) >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", 
line 625, in get_rpm_nvr_by_scontext >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context))) >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: build_module_type_cache() >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store: >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules' >Aug 20 17:13:01 pi setroubleshoot[710]: SELinux is preventing pmcd from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l 8e794a19-abba-4d8d-938c-039330fc479a >Aug 20 17:13:01 pi setroubleshoot[710]: SELinux is preventing pmcd from read access on the lnk_file localtime. > > ***** Plugin catchall (100. confidence) suggests ************************** > > If you believe that pmcd should be allowed read access on the localtime lnk_file by default. > Then you should report this as a bug. > You can generate a local policy module to allow this access. 
> Do > allow this access for now by executing: > # ausearch -c 'pmcd' --raw | audit2allow -M my-pmcd > # semodule -X 300 -i my-pmcd.pp > >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception: >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last): >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: result = self._handle_call( >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args) >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext) >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context))) >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: build_module_type_cache() >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store: >Aug 20 17:13:01 pi SetroubleshootPrivileged.py[952]: 
FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules' >Aug 20 17:13:01 pi podman[1023]: 2022-08-20 17:13:01.384472757 +0000 UTC m=+9.620978759 container create f1b6a138a0a0a233ff7ad96641b4aff22d6720546194405d437b14f086178c00 (image=localhost/podman-pause:4.1.1-1658516809, name=645e04616c2c-infra, io.buildah.version=1.26.1, PODMAN_SYSTEMD_UNIT=pod-home-assistant.service) >Aug 20 17:13:01 pi setroubleshoot[710]: SELinux is preventing pmcd from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l 8e794a19-abba-4d8d-938c-039330fc479a >Aug 20 17:13:01 pi setroubleshoot[710]: SELinux is preventing pmcd from read access on the lnk_file localtime. > > ***** Plugin catchall (100. confidence) suggests ************************** > > If you believe that pmcd should be allowed read access on the localtime lnk_file by default. > Then you should report this as a bug. > You can generate a local policy module to allow this access. > Do > allow this access for now by executing: > # ausearch -c 'pmcd' --raw | audit2allow -M my-pmcd > # semodule -X 300 -i my-pmcd.pp > >Aug 20 17:13:01 pi podman[1023]: 2022-08-20 17:13:01.397453685 +0000 UTC m=+9.633959594 pod create 645e04616c2c4282e0f20da2eafaa5fce670b11b756fe6cf042e2e027706279d (image=, name=home-assistant) >Aug 20 17:13:01 pi podman[1023]: 645e04616c2c4282e0f20da2eafaa5fce670b11b756fe6cf042e2e027706279d >Aug 20 17:13:01 pi systemd[1]: Created slice user-1000.slice - User Slice of UID 1000. >Aug 20 17:13:01 pi systemd[1]: Starting user-runtime-dir@1000.service - User Runtime Directory /run/user/1000... >Aug 20 17:13:01 pi systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. >Aug 20 17:13:01 pi systemd-logind[681]: New session 1 of user pi. 
>Aug 20 17:13:01 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=user-runtime-dir@1000 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:13:01 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:13:01 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:13:01 pi systemd[1]: Finished user-runtime-dir@1000.service - User Runtime Directory /run/user/1000. >Aug 20 17:13:01 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 2. >Aug 20 17:13:01 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:13:01 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon... >Aug 20 17:13:01 pi systemd[1]: Starting user@1000.service - User Manager for UID 1000... >Aug 20 17:13:01 pi dbus-parsec[2790]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist))) >Aug 20 17:13:01 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE >Aug 20 17:13:01 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'. >Aug 20 17:13:01 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:13:01 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' >Aug 20 17:13:01 pi audit[2791]: USER_ACCT pid=2791 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_localuser acct="pi" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:13:01 pi audit[2791]: CRED_ACQ pid=2791 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='op=PAM:setcred grantors=? acct="pi" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' >Aug 20 17:13:01 pi audit[2791]: USER_ROLE_CHANGE pid=2791 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='pam: default-context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 selected-context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:13:01 pi systemd[2791]: pam_unix(systemd-user:session): session opened for user pi(uid=1000) by (uid=0) >Aug 20 17:13:01 pi audit[2791]: USER_START pid=2791 uid=0 auid=1000 ses=2 subj=system_u:system_r:init_t:s0 msg='op=PAM:session_open grantors=pam_selinux,pam_selinux,pam_loginuid,pam_namespace,pam_keyinit,pam_limits,pam_systemd,pam_unix acct="pi" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:13:02 pi systemd[1]: Started libpod-f1b6a138a0a0a233ff7ad96641b4aff22d6720546194405d437b14f086178c00.scope - libcrun container. 
>Aug 20 17:13:02 pi audit: BPF prog-id=74 op=LOAD >Aug 20 17:13:02 pi podman[2783]: 2022-08-20 17:13:02.32737299 +0000 UTC m=+0.704774710 container init f1b6a138a0a0a233ff7ad96641b4aff22d6720546194405d437b14f086178c00 (image=localhost/podman-pause:4.1.1-1658516809, name=645e04616c2c-infra, io.buildah.version=1.26.1, PODMAN_SYSTEMD_UNIT=pod-home-assistant.service) >Aug 20 17:13:02 pi podman[2783]: 2022-08-20 17:13:02.375834202 +0000 UTC m=+0.753235921 container start f1b6a138a0a0a233ff7ad96641b4aff22d6720546194405d437b14f086178c00 (image=localhost/podman-pause:4.1.1-1658516809, name=645e04616c2c-infra, PODMAN_SYSTEMD_UNIT=pod-home-assistant.service, io.buildah.version=1.26.1) >Aug 20 17:13:02 pi podman[2783]: 2022-08-20 17:13:02.376140906 +0000 UTC m=+0.753542644 pod start 645e04616c2c4282e0f20da2eafaa5fce670b11b756fe6cf042e2e027706279d (image=, name=home-assistant) >Aug 20 17:13:02 pi podman[2783]: 645e04616c2c4282e0f20da2eafaa5fce670b11b756fe6cf042e2e027706279d >Aug 20 17:13:02 pi systemd[2791]: Queued start job for default target default.target. >Aug 20 17:13:02 pi systemd[2791]: Created slice app.slice - User Application Slice. >Aug 20 17:13:02 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=pod-home-assistant comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:13:02 pi systemd[2791]: Started grub-boot-success.timer - Mark boot as successful after the user session has run 2 minutes. >Aug 20 17:13:02 pi systemd[2791]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. >Aug 20 17:13:02 pi systemd[2791]: Reached target paths.target - Paths. >Aug 20 17:13:02 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=user@1000 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' >Aug 20 17:13:02 pi systemd[2791]: Reached target timers.target - Timers. >Aug 20 17:13:02 pi systemd[2791]: Starting dbus.socket - D-Bus User Message Bus Socket... >Aug 20 17:13:02 pi systemd[2791]: Listening on podman.socket - Podman API Socket. >Aug 20 17:13:02 pi systemd[2791]: Starting systemd-tmpfiles-setup.service - Create User's Volatile Files and Directories... >Aug 20 17:13:02 pi systemd[2791]: Listening on dbus.socket - D-Bus User Message Bus Socket. >Aug 20 17:13:02 pi systemd[2791]: Reached target sockets.target - Sockets. >Aug 20 17:13:02 pi systemd[2791]: Finished systemd-tmpfiles-setup.service - Create User's Volatile Files and Directories. >Aug 20 17:13:02 pi systemd[1]: Started pod-home-assistant.service - Podman pod-home-assistant.service. >Aug 20 17:13:02 pi systemd[2791]: Reached target basic.target - Basic System. >Aug 20 17:13:02 pi systemd[2791]: Reached target default.target - Main User Target. >Aug 20 17:13:02 pi systemd[2791]: Startup finished in 750ms. >Aug 20 17:13:02 pi systemd[1]: Started user@1000.service - User Manager for UID 1000. >Aug 20 17:13:02 pi systemd[1]: Started session-1.scope - Session 1 of User pi. >Aug 20 17:13:02 pi systemd[1]: Starting container-hass-mosquitto.service - Podman container-mosquitto.service... >Aug 20 17:13:02 pi systemd[1]: Starting container-hass-postgres.service - Podman container-hass-postgres.service... >Aug 20 17:13:02 pi systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully. >Aug 20 17:13:02 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=NetworkManager-dispatcher comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' >Aug 20 17:13:02 pi sshd[2740]: pam_unix(sshd:session): session opened for user pi(uid=1000) by (uid=0) >Aug 20 17:13:03 pi podman[1024]: >Aug 20 17:13:03 pi podman[3113]: 2022-08-20 17:13:02.910524718 +0000 UTC m=+0.272218225 image pull docker.io/eclipse-mosquitto >Aug 20 17:13:03 pi podman[1024]: 2022-08-20 17:13:03.227045851 +0000 UTC m=+11.472647453 container create 48412f81ab911b2563de7937a6c54cbf6598ff611e0be4bb2a59200d9f6df38f (image=localhost/podman-pause:4.1.1-1658516809, name=a8df0c44b354-infra, PODMAN_SYSTEMD_UNIT=pod-nextcloud.service, io.buildah.version=1.26.1) >Aug 20 17:13:03 pi podman[1024]: 2022-08-20 17:13:03.237029978 +0000 UTC m=+11.482631636 pod create a8df0c44b354b8703bdeea30933a12cc7135f89fa5d006e39599c07736d46755 (image=, name=nextcloud) >Aug 20 17:13:03 pi podman[1024]: a8df0c44b354b8703bdeea30933a12cc7135f89fa5d006e39599c07736d46755 >Aug 20 17:13:03 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception: >Aug 20 17:13:03 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last): >Aug 20 17:13:03 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback >Aug 20 17:13:03 pi SetroubleshootPrivileged.py[952]: result = self._handle_call( >Aug 20 17:13:03 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call >Aug 20 17:13:03 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args) >Aug 20 17:13:03 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext >Aug 20 17:13:03 pi SetroubleshootPrivileged.py[952]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext) >Aug 20 17:13:03 pi SetroubleshootPrivileged.py[952]: File 
"/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext >Aug 20 17:13:03 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context))) >Aug 20 17:13:03 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type >Aug 20 17:13:03 pi SetroubleshootPrivileged.py[952]: build_module_type_cache() >Aug 20 17:13:03 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache >Aug 20 17:13:03 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store: >Aug 20 17:13:03 pi SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules' >Aug 20 17:13:03 pi setroubleshoot[710]: SELinux is preventing sshd from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l 7e21cdd6-7d79-45a0-8e5c-cc8cbcff55d6 >Aug 20 17:13:03 pi setroubleshoot[710]: SELinux is preventing sshd from read access on the lnk_file localtime. > > ***** Plugin catchall (100. confidence) suggests ************************** > > If you believe that sshd should be allowed read access on the localtime lnk_file by default. > Then you should report this as a bug. > You can generate a local policy module to allow this access. 
> Do > allow this access for now by executing: > # ausearch -c 'sshd' --raw | audit2allow -M my-sshd > # semodule -X 300 -i my-sshd.pp > >Aug 20 17:13:03 pi podman[1025]: >Aug 20 17:13:04 pi podman[3118]: 2022-08-20 17:13:02.903613796 +0000 UTC m=+0.266462297 image pull docker.io/postgres:14 >Aug 20 17:13:07 pi podman[1025]: 2022-08-20 17:13:07.635906096 +0000 UTC m=+15.871776235 container create 4b20f34528b5b05398842f871cda38afbd90eaf500951135b9dd6491d55f8c1a (image=localhost/podman-pause:4.1.1-1658516809, name=3731cc27db45-infra, PODMAN_SYSTEMD_UNIT=pod-web.service, io.buildah.version=1.26.1) >Aug 20 17:13:07 pi kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. >Aug 20 17:13:07 pi systemd-udevd[5606]: Using default interface naming scheme 'v250'. >Aug 20 17:13:07 pi NetworkManager[717]: <info> [1661015587.8792] manager: (cni-podman0): new Bridge device (/org/freedesktop/NetworkManager/Devices/5) >Aug 20 17:13:08 pi podman[1025]: 2022-08-20 17:13:08.053749265 +0000 UTC m=+16.289619238 pod create 3731cc27db45225d8d891c1410b72a0ff670500d479e8d00146b464e33981b33 (image=, name=web) >Aug 20 17:13:08 pi podman[1025]: 3731cc27db45225d8d891c1410b72a0ff670500d479e8d00146b464e33981b33 >Aug 20 17:13:08 pi systemd-udevd[5605]: Using default interface naming scheme 'v250'. 
>Aug 20 17:13:08 pi audit: ANOM_PROMISCUOUS dev=veth6a374b0a prom=256 old_prom=0 auid=4294967295 uid=0 gid=0 ses=4294967295 >Aug 20 17:13:08 pi NetworkManager[717]: <info> [1661015588.1046] manager: (veth6a374b0a): new Veth device (/org/freedesktop/NetworkManager/Devices/6) >Aug 20 17:13:08 pi kernel: cni-podman0: port 1(veth6a374b0a) entered blocking state >Aug 20 17:13:08 pi kernel: cni-podman0: port 1(veth6a374b0a) entered disabled state >Aug 20 17:13:08 pi kernel: device veth6a374b0a entered promiscuous mode >Aug 20 17:13:08 pi kernel: cni-podman0: port 1(veth6a374b0a) entered blocking state >Aug 20 17:13:08 pi kernel: cni-podman0: port 1(veth6a374b0a) entered forwarding state >Aug 20 17:13:08 pi kernel: cni-podman0: port 1(veth6a374b0a) entered disabled state >Aug 20 17:13:08 pi podman[1022]: >Aug 20 17:13:08 pi podman[1022]: 2022-08-20 17:13:08.50295982 +0000 UTC m=+16.740300664 container create 5c8631e3596e13ace3b0b45d2dad00f9e1adbefe22cf11e1c5566acae71dd485 (image=localhost/podman-pause:4.1.1-1658516809, name=69c38bffc6cf-infra, PODMAN_SYSTEMD_UNIT=pod-gitea.service, io.buildah.version=1.26.1) >Aug 20 17:13:08 pi podman[1022]: 2022-08-20 17:13:08.512979132 +0000 UTC m=+16.750319976 pod create 69c38bffc6cf4d82ae114c9532823ec24762e5f289ec8e175316c69219940892 (image=, name=gitea) >Aug 20 17:13:08 pi podman[1022]: 69c38bffc6cf4d82ae114c9532823ec24762e5f289ec8e175316c69219940892 >Aug 20 17:13:08 pi NetworkManager[717]: <info> [1661015588.6035] device (cni-podman0): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external') >Aug 20 17:13:08 pi NetworkManager[717]: <info> [1661015588.6058] device (veth6a374b0a): carrier: link connected >Aug 20 17:13:08 pi kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth6a374b0a: link becomes ready >Aug 20 17:13:08 pi kernel: cni-podman0: port 1(veth6a374b0a) entered blocking state >Aug 20 17:13:08 pi kernel: cni-podman0: port 1(veth6a374b0a) entered forwarding state >Aug 20 17:13:08 pi 
NetworkManager[717]: <info> [1661015588.6078] device (cni-podman0): carrier: link connected >Aug 20 17:13:08 pi NetworkManager[717]: <info> [1661015588.6220] device (cni-podman0): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'external') >Aug 20 17:13:08 pi NetworkManager[717]: <info> [1661015588.6297] device (cni-podman0): Activation: starting connection 'cni-podman0' (0c81fc96-f3df-48d3-970f-3aee38e14aaa) >Aug 20 17:13:08 pi audit[5635]: NETFILTER_CFG table=nat:5 family=2 entries=2 op=nft_register_chain pid=5635 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:08 pi NetworkManager[717]: <info> [1661015588.6353] device (cni-podman0): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'external') >Aug 20 17:13:08 pi NetworkManager[717]: <info> [1661015588.6366] device (cni-podman0): state change: prepare -> config (reason 'none', sys-iface-state: 'external') >Aug 20 17:13:08 pi NetworkManager[717]: <info> [1661015588.6376] device (cni-podman0): state change: config -> ip-config (reason 'none', sys-iface-state: 'external') >Aug 20 17:13:08 pi NetworkManager[717]: <info> [1661015588.6390] device (cni-podman0): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'external') >Aug 20 17:13:08 pi systemd[1]: Starting NetworkManager-dispatcher.service - Network Manager Script Dispatcher Service... >Aug 20 17:13:08 pi systemd[1]: Started NetworkManager-dispatcher.service - Network Manager Script Dispatcher Service. >Aug 20 17:13:08 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=NetworkManager-dispatcher comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' >Aug 20 17:13:08 pi NetworkManager[717]: <info> [1661015588.7424] device (cni-podman0): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'external') >Aug 20 17:13:08 pi NetworkManager[717]: <info> [1661015588.7433] device (cni-podman0): state change: secondaries -> activated (reason 'none', sys-iface-state: 'external') >Aug 20 17:13:08 pi NetworkManager[717]: <info> [1661015588.7449] device (cni-podman0): Activation: successful, device activated. >Aug 20 17:13:08 pi audit[5652]: NETFILTER_CFG table=nat:6 family=2 entries=1 op=nft_register_rule pid=5652 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:09 pi audit[5655]: NETFILTER_CFG table=nat:7 family=2 entries=1 op=nft_register_rule pid=5655 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:09 pi audit[5657]: NETFILTER_CFG table=nat:8 family=2 entries=2 op=nft_register_chain pid=5657 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:09 pi audit[5666]: NETFILTER_CFG table=nat:9 family=2 entries=1 op=nft_register_chain pid=5666 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:09 pi audit[5669]: NETFILTER_CFG table=nat:10 family=2 entries=1 op=nft_register_rule pid=5669 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:09 pi audit[5671]: NETFILTER_CFG table=nat:11 family=2 entries=1 op=nft_register_chain pid=5671 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:09 pi audit[5673]: NETFILTER_CFG table=nat:12 family=2 entries=1 op=nft_register_rule pid=5673 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:09 pi audit[5675]: NETFILTER_CFG table=nat:13 family=2 entries=1 op=nft_register_rule pid=5675 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:09 pi audit[5678]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_chain pid=5678 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:09 pi audit[5681]: NETFILTER_CFG 
table=nat:15 family=2 entries=2 op=nft_register_chain pid=5681 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:09 pi audit[5684]: NETFILTER_CFG table=nat:16 family=2 entries=2 op=nft_register_chain pid=5684 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:09 pi audit[5691]: NETFILTER_CFG table=nat:17 family=2 entries=1 op=nft_register_chain pid=5691 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:09 pi audit[2740]: USER_START pid=2740 uid=0 auid=1000 ses=1 subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=PAM:session_open grantors=pam_selinux,pam_loginuid,pam_selinux,pam_namespace,pam_keyinit,pam_keyinit,pam_limits,pam_systemd,pam_unix,pam_umask,pam_lastlog acct="pi" exe="/usr/sbin/sshd" hostname=10.0.1.19 addr=10.0.1.19 terminal=ssh res=success' >Aug 20 17:13:09 pi audit[5694]: CRYPTO_KEY_USER pid=5694 uid=0 auid=1000 ses=1 subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=destroy kind=server fp=SHA256:91:66:5d:fc:b4:87:a9:5b:84:e9:df:57:a3:9f:93:77:1b:f7:ee:ca:a4:ed:1b:f9:44:78:e6:4c:a8:27:4e:43 direction=? spid=5694 suid=0 exe="/usr/sbin/sshd" hostname=? addr=? terminal=? 
res=success' >Aug 20 17:13:09 pi audit[5694]: CRED_ACQ pid=5694 uid=0 auid=1000 ses=1 subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=PAM:setcred grantors=pam_env,pam_localuser,pam_unix acct="pi" exe="/usr/sbin/sshd" hostname=10.0.1.19 addr=10.0.1.19 terminal=ssh res=success' >Aug 20 17:13:09 pi audit[5695]: NETFILTER_CFG table=nat:18 family=2 entries=1 op=nft_register_rule pid=5695 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:09 pi audit[5697]: NETFILTER_CFG table=nat:19 family=2 entries=1 op=nft_register_rule pid=5697 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:09 pi audit[2740]: AVC avc: denied { read } for pid=2740 comm="sshd" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:sshd_t:s0-s0:c0.c1023 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0 >Aug 20 17:13:09 pi audit[2740]: USER_LOGIN pid=2740 uid=0 auid=1000 ses=1 subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=login id=1000 exe="/usr/sbin/sshd" hostname=? addr=10.0.1.19 terminal=/dev/pts/0 res=success' >Aug 20 17:13:09 pi audit[2740]: USER_START pid=2740 uid=0 auid=1000 ses=1 subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=login id=1000 exe="/usr/sbin/sshd" hostname=? addr=10.0.1.19 terminal=/dev/pts/0 res=success' >Aug 20 17:13:09 pi audit[2740]: CRYPTO_KEY_USER pid=2740 uid=0 auid=1000 ses=1 subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=destroy kind=server fp=SHA256:91:66:5d:fc:b4:87:a9:5b:84:e9:df:57:a3:9f:93:77:1b:f7:ee:ca:a4:ed:1b:f9:44:78:e6:4c:a8:27:4e:43 direction=? spid=5700 suid=1000 exe="/usr/sbin/sshd" hostname=? addr=? terminal=? 
res=success' >Aug 20 17:13:09 pi podman[1013]: >Aug 20 17:13:09 pi audit[5703]: NETFILTER_CFG table=nat:20 family=2 entries=1 op=nft_register_rule pid=5703 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:09 pi podman[1013]: 2022-08-20 17:13:09.859461807 +0000 UTC m=+18.095330817 container create 795a01a9042ccf55388bfab87c9088d9e7689f602eb754e32c525b77d49ac752 (image=quay.io/oauth2-proxy/oauth2-proxy:latest, name=oauth2-proxy, PODMAN_SYSTEMD_UNIT=container-oauth2-proxy.service) >Aug 20 17:13:09 pi podman[1030]: >Aug 20 17:13:09 pi audit[5708]: NETFILTER_CFG table=nat:21 family=2 entries=1 op=nft_register_rule pid=5708 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:10 pi podman[1030]: 2022-08-20 17:13:10.145003514 +0000 UTC m=+18.390712596 container create f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, org.opencontainers.image.created=2022-07-10T13:08:55.626Z, org.opencontainers.image.title=docker-pi-hole, org.opencontainers.image.description=Pi-hole in a docker container, org.opencontainers.image.licenses=, org.opencontainers.image.revision=7e69551be1b76d175fffc1b8c53733e74ee82520, org.opencontainers.image.version=2022.07.1, org.opencontainers.image.url=https://github.com/pi-hole/docker-pi-hole, io.containers.autoupdate=registry, PODMAN_SYSTEMD_UNIT=container-pihole.service, org.opencontainers.image.source=https://github.com/pi-hole/docker-pi-hole) >Aug 20 17:13:10 pi podman[1016]: >Aug 20 17:13:10 pi firewalld[709]: WARNING: ZONE_ALREADY_SET: '10.88.0.2/32' already bound to 'trusted' >Aug 20 17:13:10 pi podman[1016]: 2022-08-20 17:13:10.281979424 +0000 UTC m=+18.523851827 container create c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 (image=docker.io/vaultwarden/server:latest, name=vaultwarden-server, org.opencontainers.image.url=https://hub.docker.com/r/vaultwarden/server, org.opencontainers.image.licenses=GPL-3.0-only, 
org.opencontainers.image.version=1.25.2, io.balena.qemu.version=7.0.0+balena1-aarch64, org.opencontainers.image.revision=ce9d93003cd37a79edba1ba830a6c6d3fa22c2c8, org.opencontainers.image.created=2022-07-27T18:44:18+00:00, org.opencontainers.image.source=https://github.com/dani-garcia/vaultwarden, io.containers.autoupdate=registry, io.balena.architecture=aarch64, PODMAN_SYSTEMD_UNIT=container-vaultwarden-server.service, org.opencontainers.image.documentation=https://github.com/dani-garcia/vaultwarden/wiki) >Aug 20 17:13:10 pi NetworkManager[717]: <info> [1661015590.7836] manager: (vethca8d2788): new Veth device (/org/freedesktop/NetworkManager/Devices/7) >Aug 20 17:13:10 pi audit: ANOM_PROMISCUOUS dev=vethca8d2788 prom=256 old_prom=0 auid=4294967295 uid=0 gid=0 ses=4294967295 >Aug 20 17:13:10 pi kernel: cni-podman0: port 2(vethca8d2788) entered blocking state >Aug 20 17:13:10 pi kernel: cni-podman0: port 2(vethca8d2788) entered disabled state >Aug 20 17:13:10 pi kernel: device vethca8d2788 entered promiscuous mode >Aug 20 17:13:10 pi NetworkManager[717]: <info> [1661015590.8025] device (vethca8d2788): carrier: link connected >Aug 20 17:13:10 pi kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready >Aug 20 17:13:10 pi kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethca8d2788: link becomes ready >Aug 20 17:13:10 pi kernel: cni-podman0: port 2(vethca8d2788) entered blocking state >Aug 20 17:13:10 pi kernel: cni-podman0: port 2(vethca8d2788) entered forwarding state >Aug 20 17:13:10 pi audit[6068]: NETFILTER_CFG table=nat:22 family=2 entries=1 op=nft_register_chain pid=6068 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:10 pi audit[6070]: NETFILTER_CFG table=nat:23 family=2 entries=1 op=nft_register_rule pid=6070 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:10 pi audit[6072]: NETFILTER_CFG table=nat:24 family=2 entries=1 op=nft_register_rule pid=6072 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:10 pi 
audit[6074]: NETFILTER_CFG table=nat:25 family=2 entries=1 op=nft_register_rule pid=6074 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:10 pi audit[6092]: NETFILTER_CFG table=nat:26 family=2 entries=1 op=nft_register_chain pid=6092 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:10 pi audit[6094]: NETFILTER_CFG table=nat:27 family=2 entries=1 op=nft_register_rule pid=6094 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:10 pi audit[6096]: NETFILTER_CFG table=nat:28 family=2 entries=1 op=nft_register_rule pid=6096 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:11 pi audit[6098]: NETFILTER_CFG table=nat:29 family=2 entries=1 op=nft_register_rule pid=6098 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:11 pi audit[6103]: NETFILTER_CFG table=nat:30 family=2 entries=1 op=nft_register_rule pid=6103 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:11 pi firewalld[709]: WARNING: ZONE_ALREADY_SET: '10.88.0.3/32' already bound to 'trusted' >Aug 20 17:13:11 pi SetroubleshootPrivileged.py[952]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception: >Aug 20 17:13:11 pi SetroubleshootPrivileged.py[952]: Traceback (most recent call last): >Aug 20 17:13:11 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback >Aug 20 17:13:11 pi SetroubleshootPrivileged.py[952]: result = self._handle_call( >Aug 20 17:13:11 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call >Aug 20 17:13:11 pi SetroubleshootPrivileged.py[952]: return handler(*parameters, **additional_args) >Aug 20 17:13:11 pi SetroubleshootPrivileged.py[952]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext >Aug 20 17:13:11 pi SetroubleshootPrivileged.py[952]: 
rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext) >Aug 20 17:13:11 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext >Aug 20 17:13:11 pi SetroubleshootPrivileged.py[952]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context))) >Aug 20 17:13:11 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type >Aug 20 17:13:11 pi SetroubleshootPrivileged.py[952]: build_module_type_cache() >Aug 20 17:13:11 pi SetroubleshootPrivileged.py[952]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache >Aug 20 17:13:11 pi SetroubleshootPrivileged.py[952]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store: >Aug 20 17:13:11 pi SetroubleshootPrivileged.py[952]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules' >Aug 20 17:13:11 pi setroubleshoot[710]: SELinux is preventing sshd from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l 7e21cdd6-7d79-45a0-8e5c-cc8cbcff55d6 >Aug 20 17:13:11 pi setroubleshoot[710]: SELinux is preventing sshd from read access on the lnk_file localtime. > > ***** Plugin catchall (100. confidence) suggests ************************** > > If you believe that sshd should be allowed read access on the localtime lnk_file by default. > Then you should report this as a bug. > You can generate a local policy module to allow this access. 
> Do > allow this access for now by executing: > # ausearch -c 'sshd' --raw | audit2allow -M my-sshd > # semodule -X 300 -i my-sshd.pp > >Aug 20 17:13:11 pi NetworkManager[717]: <info> [1661015591.1822] manager: (vethbb47b8e2): new Veth device (/org/freedesktop/NetworkManager/Devices/8) >Aug 20 17:13:11 pi audit: ANOM_PROMISCUOUS dev=vethbb47b8e2 prom=256 old_prom=0 auid=4294967295 uid=0 gid=0 ses=4294967295 >Aug 20 17:13:11 pi kernel: cni-podman0: port 3(vethbb47b8e2) entered blocking state >Aug 20 17:13:11 pi kernel: cni-podman0: port 3(vethbb47b8e2) entered disabled state >Aug 20 17:13:11 pi kernel: device vethbb47b8e2 entered promiscuous mode >Aug 20 17:13:11 pi kernel: cni-podman0: port 3(vethbb47b8e2) entered blocking state >Aug 20 17:13:11 pi kernel: cni-podman0: port 3(vethbb47b8e2) entered forwarding state >Aug 20 17:13:11 pi NetworkManager[717]: <info> [1661015591.2279] device (vethbb47b8e2): carrier: link connected >Aug 20 17:13:11 pi kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethbb47b8e2: link becomes ready >Aug 20 17:13:11 pi audit[6136]: NETFILTER_CFG table=nat:31 family=2 entries=1 op=nft_register_chain pid=6136 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:11 pi audit[6138]: NETFILTER_CFG table=nat:32 family=2 entries=1 op=nft_register_rule pid=6138 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:11 pi audit[6140]: NETFILTER_CFG table=nat:33 family=2 entries=1 op=nft_register_rule pid=6140 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:11 pi audit[6142]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_rule pid=6142 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:11 pi podman[1015]: >Aug 20 17:13:11 pi audit[6160]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_chain pid=6160 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:11 pi audit[6162]: NETFILTER_CFG table=nat:36 family=2 entries=1 op=nft_register_rule pid=6162 
subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:11 pi audit[6164]: NETFILTER_CFG table=nat:37 family=2 entries=1 op=nft_register_rule pid=6164 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:11 pi audit[6166]: NETFILTER_CFG table=nat:38 family=2 entries=1 op=nft_register_rule pid=6166 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:11 pi podman[1015]: 2022-08-20 17:13:11.456454957 +0000 UTC m=+19.693745284 container create 26a887dcca3f88b2f74d5a34e234764edcc9bfaa2a1fc7b08c0e21008d6f4691 (image=docker.io/jc21/nginx-proxy-manager:latest, name=proxy-internal, org.label-schema.schema-version=1.0, org.label-schema.cmd=docker run --rm -ti jc21/nginx-proxy-manager:latest, org.label-schema.description=Docker container for managing Nginx proxy hosts with a simple, powerful interface , org.label-schema.url=https://github.com/jc21/nginx-proxy-manager, io.containers.autoupdate=registry, maintainer=Jamie Curnow <jc@jc21.com>, PODMAN_SYSTEMD_UNIT=container-proxy-internal.service, org.label-schema.license=MIT, org.label-schema.name=nginx-proxy-manager, org.label-schema.vcs-url=https://github.com/jc21/nginx-proxy-manager.git) >Aug 20 17:13:11 pi podman[1029]: >Aug 20 17:13:11 pi audit[6169]: NETFILTER_CFG table=nat:39 family=2 entries=1 op=nft_register_rule pid=6169 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:11 pi firewalld[709]: WARNING: ZONE_ALREADY_SET: '10.88.0.4/32' already bound to 'trusted' >Aug 20 17:13:11 pi NetworkManager[717]: <info> [1661015591.5462] manager: (veth8693126c): new Veth device (/org/freedesktop/NetworkManager/Devices/9) >Aug 20 17:13:11 pi audit: ANOM_PROMISCUOUS dev=veth8693126c prom=256 old_prom=0 auid=4294967295 uid=0 gid=0 ses=4294967295 >Aug 20 17:13:11 pi kernel: cni-podman0: port 4(veth8693126c) entered blocking state >Aug 20 17:13:11 pi kernel: cni-podman0: port 4(veth8693126c) entered disabled state >Aug 20 17:13:11 pi kernel: device veth8693126c entered promiscuous 
mode >Aug 20 17:13:11 pi kernel: cni-podman0: port 4(veth8693126c) entered blocking state >Aug 20 17:13:11 pi kernel: cni-podman0: port 4(veth8693126c) entered forwarding state >Aug 20 17:13:11 pi kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth8693126c: link becomes ready >Aug 20 17:13:11 pi NetworkManager[717]: <info> [1661015591.5686] device (veth8693126c): carrier: link connected >Aug 20 17:13:11 pi podman[1029]: 2022-08-20 17:13:11.570896993 +0000 UTC m=+19.808174542 container create 802033efbdc52f81721a61d0a389c102df750e725685b80f038666e1fe1d3c2b (image=docker.io/jc21/nginx-proxy-manager:latest, name=proxy, org.label-schema.description=Docker container for managing Nginx proxy hosts with a simple, powerful interface , io.containers.autoupdate=registry, org.label-schema.url=https://github.com/jc21/nginx-proxy-manager, org.label-schema.cmd=docker run --rm -ti jc21/nginx-proxy-manager:latest, PODMAN_SYSTEMD_UNIT=container-proxy.service, org.label-schema.vcs-url=https://github.com/jc21/nginx-proxy-manager.git, org.label-schema.license=MIT, org.label-schema.schema-version=1.0, maintainer=Jamie Curnow <jc@jc21.com>, org.label-schema.name=nginx-proxy-manager) >Aug 20 17:13:11 pi audit[6201]: NETFILTER_CFG table=nat:40 family=2 entries=1 op=nft_register_chain pid=6201 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:11 pi audit[6203]: NETFILTER_CFG table=nat:41 family=2 entries=1 op=nft_register_rule pid=6203 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:11 pi audit[6205]: NETFILTER_CFG table=nat:42 family=2 entries=1 op=nft_register_rule pid=6205 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:11 pi audit[6207]: NETFILTER_CFG table=nat:43 family=2 entries=1 op=nft_register_rule pid=6207 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:11 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 3. 
>Aug 20 17:13:11 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:13:11 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:13:11 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:13:11 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon... >Aug 20 17:13:11 pi audit[6226]: NETFILTER_CFG table=nat:44 family=2 entries=1 op=nft_register_chain pid=6226 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:11 pi audit[6228]: NETFILTER_CFG table=nat:45 family=2 entries=1 op=nft_register_rule pid=6228 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:11 pi audit[6230]: NETFILTER_CFG table=nat:46 family=2 entries=1 op=nft_register_rule pid=6230 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:11 pi audit[6232]: NETFILTER_CFG table=nat:47 family=2 entries=1 op=nft_register_rule pid=6232 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:11 pi dbus-parsec[6225]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist))) >Aug 20 17:13:11 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE >Aug 20 17:13:11 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'. >Aug 20 17:13:11 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:13:11 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' >Aug 20 17:13:11 pi audit[6234]: NETFILTER_CFG table=nat:48 family=2 entries=1 op=nft_register_rule pid=6234 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:11 pi audit[6236]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_rule pid=6236 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:11 pi audit[6238]: NETFILTER_CFG table=nat:50 family=2 entries=1 op=nft_register_rule pid=6238 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:11 pi audit[6240]: NETFILTER_CFG table=nat:51 family=2 entries=1 op=nft_register_rule pid=6240 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:11 pi audit[6242]: NETFILTER_CFG table=nat:52 family=2 entries=1 op=nft_register_rule pid=6242 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:11 pi audit[6244]: NETFILTER_CFG table=nat:53 family=2 entries=1 op=nft_register_rule pid=6244 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:11 pi audit[6246]: NETFILTER_CFG table=nat:54 family=2 entries=1 op=nft_register_rule pid=6246 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:11 pi audit[6248]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_rule pid=6248 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:12 pi audit[6250]: NETFILTER_CFG table=nat:56 family=2 entries=1 op=nft_register_rule pid=6250 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:12 pi audit[6252]: NETFILTER_CFG table=nat:57 family=2 entries=1 op=nft_register_rule pid=6252 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:12 pi audit[6254]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_rule pid=6254 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:12 pi firewalld[709]: WARNING: ZONE_ALREADY_SET: '10.88.0.5/32' already bound to 'trusted' >Aug 20 17:13:12 pi NetworkManager[717]: <info> [1661015592.1735] manager: (veth34a90a9c): new Veth 
device (/org/freedesktop/NetworkManager/Devices/10) >Aug 20 17:13:12 pi audit: ANOM_PROMISCUOUS dev=veth34a90a9c prom=256 old_prom=0 auid=4294967295 uid=0 gid=0 ses=4294967295 >Aug 20 17:13:12 pi kernel: cni-podman0: port 5(veth34a90a9c) entered blocking state >Aug 20 17:13:12 pi kernel: cni-podman0: port 5(veth34a90a9c) entered disabled state >Aug 20 17:13:12 pi kernel: device veth34a90a9c entered promiscuous mode >Aug 20 17:13:12 pi NetworkManager[717]: <info> [1661015592.1920] device (veth34a90a9c): carrier: link connected >Aug 20 17:13:12 pi kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready >Aug 20 17:13:12 pi kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth34a90a9c: link becomes ready >Aug 20 17:13:12 pi kernel: cni-podman0: port 5(veth34a90a9c) entered blocking state >Aug 20 17:13:12 pi kernel: cni-podman0: port 5(veth34a90a9c) entered forwarding state >Aug 20 17:13:12 pi audit[6285]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_chain pid=6285 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:12 pi audit[6287]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=6287 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:12 pi audit[6289]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_rule pid=6289 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:12 pi audit[6291]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=6291 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:12 pi podman[3113]: 2022-08-20 17:13:12.314275952 +0000 UTC m=+9.675969459 volume create efe7140fe64be1f036226f07d6e8334ca35a57e710ea6c23703d387f1245ce63 >Aug 20 17:13:12 pi podman[3113]: >Aug 20 17:13:12 pi audit[6309]: NETFILTER_CFG table=nat:63 family=2 entries=1 op=nft_register_chain pid=6309 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:12 pi audit[6311]: NETFILTER_CFG table=nat:64 family=2 entries=1 op=nft_register_rule 
pid=6311 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:12 pi audit[6313]: NETFILTER_CFG table=nat:65 family=2 entries=1 op=nft_register_rule pid=6313 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:12 pi audit[6315]: NETFILTER_CFG table=nat:66 family=2 entries=1 op=nft_register_rule pid=6315 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:12 pi podman[3113]: 2022-08-20 17:13:12.441349344 +0000 UTC m=+9.803042999 container create fdf7d8d189355d920cd9a5509c4715162f4de5e0f712aa7dd12f962d9095c204 (image=docker.io/library/eclipse-mosquitto:latest, name=hass-mosquitto, PODMAN_SYSTEMD_UNIT=container-hass-mosquitto.service, description=Eclipse Mosquitto MQTT Broker, maintainer=Roger Light <roger@atchoo.org>, io.containers.autoupdate=registry) >Aug 20 17:13:12 pi audit[6317]: NETFILTER_CFG table=nat:67 family=2 entries=1 op=nft_register_rule pid=6317 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:12 pi firewalld[709]: WARNING: ZONE_ALREADY_SET: '10.88.0.6/32' already bound to 'trusted' >Aug 20 17:13:12 pi systemd[1]: sysroot-tmp-crun.p4PC7B.mount: Deactivated successfully. >Aug 20 17:13:12 pi systemd[1]: Started libpod-48412f81ab911b2563de7937a6c54cbf6598ff611e0be4bb2a59200d9f6df38f.scope - libcrun container. 
>Aug 20 17:13:12 pi audit: BPF prog-id=75 op=LOAD
>Aug 20 17:13:12 pi podman[3118]:
>Aug 20 17:13:12 pi podman[3118]: 2022-08-20 17:13:12.847936623 +0000 UTC m=+10.210785068 container create 1b91561f7d41db345df39c9d941ae48147602b5ed1f081823073fe92ea37278c (image=docker.io/library/postgres:14, name=hass-postgres, io.containers.autoupdate=registry, PODMAN_SYSTEMD_UNIT=container-hass-postgres.service)
>Aug 20 17:13:12 pi podman[3714]: 2022-08-20 17:13:12.954804003 +0000 UTC m=+9.314066974 container init 48412f81ab911b2563de7937a6c54cbf6598ff611e0be4bb2a59200d9f6df38f (image=localhost/podman-pause:4.1.1-1658516809, name=a8df0c44b354-infra, PODMAN_SYSTEMD_UNIT=pod-nextcloud.service, io.buildah.version=1.26.1)
>Aug 20 17:13:12 pi podman[3714]: 2022-08-20 17:13:12.995238508 +0000 UTC m=+9.354501441 container start 48412f81ab911b2563de7937a6c54cbf6598ff611e0be4bb2a59200d9f6df38f (image=localhost/podman-pause:4.1.1-1658516809, name=a8df0c44b354-infra, PODMAN_SYSTEMD_UNIT=pod-nextcloud.service, io.buildah.version=1.26.1)
>Aug 20 17:13:12 pi podman[3714]: 2022-08-20 17:13:12.995469301 +0000 UTC m=+9.354732291 pod start a8df0c44b354b8703bdeea30933a12cc7135f89fa5d006e39599c07736d46755 (image=, name=nextcloud)
>Aug 20 17:13:12 pi podman[3714]: a8df0c44b354b8703bdeea30933a12cc7135f89fa5d006e39599c07736d46755
>Aug 20 17:13:13 pi NetworkManager[717]: <info> [1661015593.0464] manager: (veth8fd30713): new Veth device (/org/freedesktop/NetworkManager/Devices/11)
>Aug 20 17:13:13 pi audit: ANOM_PROMISCUOUS dev=veth8fd30713 prom=256 old_prom=0 auid=4294967295 uid=0 gid=0 ses=4294967295
>Aug 20 17:13:13 pi kernel: cni-podman0: port 6(veth8fd30713) entered blocking state
>Aug 20 17:13:13 pi kernel: cni-podman0: port 6(veth8fd30713) entered disabled state
>Aug 20 17:13:13 pi kernel: device veth8fd30713 entered promiscuous mode
>Aug 20 17:13:13 pi kernel: cni-podman0: port 6(veth8fd30713) entered blocking state
>Aug 20 17:13:13 pi kernel: cni-podman0: port 6(veth8fd30713) entered forwarding
state
>Aug 20 17:13:13 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=pod-nextcloud comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:13:13 pi systemd[1]: Started pod-nextcloud.service - Podman pod-nextcloud.service.
>Aug 20 17:13:13 pi NetworkManager[717]: <info> [1661015593.0874] device (veth8fd30713): carrier: link connected
>Aug 20 17:13:13 pi kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth8fd30713: link becomes ready
>Aug 20 17:13:13 pi audit[6669]: NETFILTER_CFG table=nat:68 family=2 entries=1 op=nft_register_chain pid=6669 subj=system_u:system_r:iptables_t:s0 comm="iptables"
>Aug 20 17:13:13 pi audit[6671]: NETFILTER_CFG table=nat:69 family=2 entries=1 op=nft_register_rule pid=6671 subj=system_u:system_r:iptables_t:s0 comm="iptables"
>Aug 20 17:13:13 pi audit[6673]: NETFILTER_CFG table=nat:70 family=2 entries=1 op=nft_register_rule pid=6673 subj=system_u:system_r:iptables_t:s0 comm="iptables"
>Aug 20 17:13:13 pi systemd[1]: Starting container-nextcloud-postgres.service - Podman container-postgres.service...
>Aug 20 17:13:13 pi audit[6675]: NETFILTER_CFG table=nat:71 family=2 entries=1 op=nft_register_rule pid=6675 subj=system_u:system_r:iptables_t:s0 comm="iptables"
>Aug 20 17:13:13 pi systemd[1]: Starting container-nextcloud-redis.service - Podman container-redis.service...
>Aug 20 17:13:13 pi audit[6721]: NETFILTER_CFG table=nat:72 family=2 entries=1 op=nft_register_chain pid=6721 subj=system_u:system_r:iptables_t:s0 comm="iptables"
>Aug 20 17:13:13 pi audit[6725]: NETFILTER_CFG table=nat:73 family=2 entries=1 op=nft_register_rule pid=6725 subj=system_u:system_r:iptables_t:s0 comm="iptables"
>Aug 20 17:13:13 pi audit[6729]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_rule pid=6729 subj=system_u:system_r:iptables_t:s0 comm="iptables"
>Aug 20 17:13:13 pi audit[6732]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=6732 subj=system_u:system_r:iptables_t:s0 comm="iptables"
>Aug 20 17:13:13 pi audit[6734]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=6734 subj=system_u:system_r:iptables_t:s0 comm="iptables"
>Aug 20 17:13:13 pi audit[6736]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_rule pid=6736 subj=system_u:system_r:iptables_t:s0 comm="iptables"
>Aug 20 17:13:13 pi audit[6738]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=6738 subj=system_u:system_r:iptables_t:s0 comm="iptables"
>Aug 20 17:13:13 pi systemd[1]: Started libpod-fdf7d8d189355d920cd9a5509c4715162f4de5e0f712aa7dd12f962d9095c204.scope - libcrun container.
>Aug 20 17:13:13 pi audit: BPF prog-id=76 op=LOAD
>Aug 20 17:13:13 pi audit[6742]: NETFILTER_CFG table=nat:79 family=2 entries=1 op=nft_register_rule pid=6742 subj=system_u:system_r:iptables_t:s0 comm="iptables"
>Aug 20 17:13:13 pi audit[709]: NETFILTER_CFG table=firewalld:80 family=1 entries=6 op=nft_register_rule pid=709 subj=system_u:system_r:firewalld_t:s0 comm="firewalld"
>Aug 20 17:13:13 pi NetworkManager[717]: <info> [1661015593.5691] manager: (vethf9df2be8): new Veth device (/org/freedesktop/NetworkManager/Devices/12)
>Aug 20 17:13:13 pi podman[6679]: 2022-08-20 17:13:13.377681708 +0000 UTC m=+0.176080116 image pull docker.io/postgres:13
>Aug 20 17:13:13 pi audit: ANOM_PROMISCUOUS dev=vethf9df2be8 prom=256 old_prom=0 auid=4294967295 uid=0 gid=0 ses=4294967295
>Aug 20 17:13:13 pi podman[6685]: 2022-08-20 17:13:13.38206688 +0000 UTC m=+0.175438104 image pull docker.io/redis:alpine
>Aug 20 17:13:13 pi kernel: cni-podman0: port 7(vethf9df2be8) entered blocking state
>Aug 20 17:13:13 pi kernel: cni-podman0: port 7(vethf9df2be8) entered disabled state
>Aug 20 17:13:13 pi kernel: device vethf9df2be8 entered promiscuous mode
>Aug 20 17:13:13 pi NetworkManager[717]: <info> [1661015593.6049] device (vethf9df2be8): carrier: link connected
>Aug 20 17:13:13 pi kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
>Aug 20 17:13:13 pi kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethf9df2be8: link becomes ready
>Aug 20 17:13:13 pi kernel: cni-podman0: port 7(vethf9df2be8) entered blocking state
>Aug 20 17:13:13 pi kernel: cni-podman0: port 7(vethf9df2be8) entered forwarding state
>Aug 20 17:13:13 pi audit[6779]: NETFILTER_CFG table=nat:81 family=2 entries=1 op=nft_register_chain pid=6779 subj=system_u:system_r:iptables_t:s0 comm="iptables"
>Aug 20 17:13:13 pi audit[6781]: NETFILTER_CFG table=nat:82 family=2 entries=1 op=nft_register_rule pid=6781 subj=system_u:system_r:iptables_t:s0 comm="iptables"
>Aug 20 17:13:13 pi podman[3113]: 2022-08-20 17:13:13.665107967 +0000
UTC m=+11.026801511 container init fdf7d8d189355d920cd9a5509c4715162f4de5e0f712aa7dd12f962d9095c204 (image=docker.io/library/eclipse-mosquitto:latest, name=hass-mosquitto, PODMAN_SYSTEMD_UNIT=container-hass-mosquitto.service, description=Eclipse Mosquitto MQTT Broker, maintainer=Roger Light <roger@atchoo.org>, io.containers.autoupdate=registry)
>Aug 20 17:13:13 pi audit[6784]: NETFILTER_CFG table=nat:83 family=2 entries=1 op=nft_register_rule pid=6784 subj=system_u:system_r:iptables_t:s0 comm="iptables"
>Aug 20 17:13:13 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=container-hass-mosquitto comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:13:13 pi audit[6786]: NETFILTER_CFG table=nat:84 family=2 entries=1 op=nft_register_rule pid=6786 subj=system_u:system_r:iptables_t:s0 comm="iptables"
>Aug 20 17:13:13 pi audit: BPF prog-id=77 op=LOAD
>Aug 20 17:13:13 pi systemd[1]: Started libpod-4b20f34528b5b05398842f871cda38afbd90eaf500951135b9dd6491d55f8c1a.scope - libcrun container.
>Aug 20 17:13:13 pi systemd[1]: Started container-hass-mosquitto.service - Podman container-mosquitto.service.
>Aug 20 17:13:13 pi systemd[1]: Starting container-hass-zigbee2mqtt.service - Podman container-hass-zigbee2mqtt.service...
>Aug 20 17:13:13 pi podman[5619]: 2022-08-20 17:13:13.80146468 +0000 UTC m=+5.525713896 container init 4b20f34528b5b05398842f871cda38afbd90eaf500951135b9dd6491d55f8c1a (image=localhost/podman-pause:4.1.1-1658516809, name=3731cc27db45-infra, PODMAN_SYSTEMD_UNIT=pod-web.service, io.buildah.version=1.26.1)
>Aug 20 17:13:13 pi podman[5619]: 2022-08-20 17:13:13.838990143 +0000 UTC m=+5.563239267 container start 4b20f34528b5b05398842f871cda38afbd90eaf500951135b9dd6491d55f8c1a (image=localhost/podman-pause:4.1.1-1658516809, name=3731cc27db45-infra, io.buildah.version=1.26.1, PODMAN_SYSTEMD_UNIT=pod-web.service)
>Aug 20 17:13:13 pi podman[5619]: 2022-08-20 17:13:13.839226937 +0000 UTC m=+5.563476246 pod start 3731cc27db45225d8d891c1410b72a0ff670500d479e8d00146b464e33981b33 (image=, name=web)
>Aug 20 17:13:13 pi podman[5619]: 3731cc27db45225d8d891c1410b72a0ff670500d479e8d00146b464e33981b33
>Aug 20 17:13:13 pi systemd[1]: sysroot-tmp-crun.FhJE8w.mount: Deactivated successfully.
>Aug 20 17:13:13 pi audit[6817]: NETFILTER_CFG table=nat:85 family=2 entries=1 op=nft_register_chain pid=6817 subj=system_u:system_r:iptables_t:s0 comm="iptables"
>Aug 20 17:13:13 pi audit[6819]: NETFILTER_CFG table=nat:86 family=2 entries=1 op=nft_register_rule pid=6819 subj=system_u:system_r:iptables_t:s0 comm="iptables"
>Aug 20 17:13:13 pi hass-mosquitto[6718]: 1661015593: The 'bind_address' option is now deprecated and will be removed in a future version. The behaviour will default to true.
>Aug 20 17:13:13 pi hass-mosquitto[6718]: 1661015593: mosquitto version 2.0.15 starting
>Aug 20 17:13:13 pi hass-mosquitto[6718]: 1661015593: Config loaded from /mosquitto/config/mosquitto.conf.
>Aug 20 17:13:13 pi hass-mosquitto[6718]: 1661015593: Opening ipv4 listen socket on port 1883.
>Aug 20 17:13:13 pi hass-mosquitto[6718]: 1661015593: mosquitto version 2.0.15 running
>Aug 20 17:13:13 pi audit[6821]: NETFILTER_CFG table=nat:87 family=2 entries=1 op=nft_register_rule pid=6821 subj=system_u:system_r:iptables_t:s0 comm="iptables"
>Aug 20 17:13:13 pi audit[6823]: NETFILTER_CFG table=nat:88 family=2 entries=1 op=nft_register_rule pid=6823 subj=system_u:system_r:iptables_t:s0 comm="iptables"
>Aug 20 17:13:13 pi audit[6825]: NETFILTER_CFG table=nat:89 family=2 entries=1 op=nft_register_rule pid=6825 subj=system_u:system_r:iptables_t:s0 comm="iptables"
>Aug 20 17:13:13 pi audit[6827]: NETFILTER_CFG table=nat:90 family=2 entries=1 op=nft_register_rule pid=6827 subj=system_u:system_r:iptables_t:s0 comm="iptables"
>Aug 20 17:13:14 pi audit[6829]: NETFILTER_CFG table=nat:91 family=2 entries=1 op=nft_register_rule pid=6829 subj=system_u:system_r:iptables_t:s0 comm="iptables"
>Aug 20 17:13:14 pi audit[6831]: NETFILTER_CFG table=nat:92 family=2 entries=1 op=nft_register_rule pid=6831 subj=system_u:system_r:iptables_t:s0 comm="iptables"
>Aug 20 17:13:14 pi firewalld[709]: WARNING: ZONE_ALREADY_SET: '10.88.0.18/32' already bound to 'trusted'
>Aug 20 17:13:14 pi podman[3113]: 2022-08-20 17:13:14.066381898 +0000 UTC m=+11.428075423 container start fdf7d8d189355d920cd9a5509c4715162f4de5e0f712aa7dd12f962d9095c204 (image=docker.io/library/eclipse-mosquitto:latest, name=hass-mosquitto, maintainer=Roger Light <roger@atchoo.org>, io.containers.autoupdate=registry, PODMAN_SYSTEMD_UNIT=container-hass-mosquitto.service, description=Eclipse Mosquitto MQTT Broker)
>Aug 20 17:13:14 pi podman[3113]: fdf7d8d189355d920cd9a5509c4715162f4de5e0f712aa7dd12f962d9095c204
>Aug 20 17:13:14 pi NetworkManager[717]: <info> [1661015594.0838] manager: (veth886c4496): new Veth device (/org/freedesktop/NetworkManager/Devices/13)
>Aug 20 17:13:14 pi kernel: cni-podman0: port 8(veth886c4496) entered blocking state
>Aug 20 17:13:14 pi kernel: cni-podman0: port 8(veth886c4496) entered disabled
state
>Aug 20 17:13:14 pi kernel: device veth886c4496 entered promiscuous mode
>Aug 20 17:13:14 pi kernel: cni-podman0: port 8(veth886c4496) entered blocking state
>Aug 20 17:13:14 pi kernel: cni-podman0: port 8(veth886c4496) entered forwarding state
>Aug 20 17:13:14 pi audit: ANOM_PROMISCUOUS dev=veth886c4496 prom=256 old_prom=0 auid=4294967295 uid=0 gid=0 ses=4294967295
>Aug 20 17:13:14 pi systemd[1]: sysroot-tmp-crun.GX4E7F.mount: Deactivated successfully.
>Aug 20 17:13:14 pi NetworkManager[717]: <info> [1661015594.1559] device (veth886c4496): carrier: link connected
>Aug 20 17:13:14 pi kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth886c4496: link becomes ready
>Aug 20 17:13:14 pi audit[6868]: NETFILTER_CFG table=nat:93 family=2 entries=1 op=nft_register_chain pid=6868 subj=system_u:system_r:iptables_t:s0 comm="iptables"
>Aug 20 17:13:14 pi systemd[1]: Started libpod-1b91561f7d41db345df39c9d941ae48147602b5ed1f081823073fe92ea37278c.scope - libcrun container.
>Aug 20 17:13:14 pi audit: BPF prog-id=78 op=LOAD
>Aug 20 17:13:14 pi audit[6870]: NETFILTER_CFG table=nat:94 family=2 entries=1 op=nft_register_rule pid=6870 subj=system_u:system_r:iptables_t:s0 comm="iptables"
>Aug 20 17:13:14 pi audit[6872]: NETFILTER_CFG table=nat:95 family=2 entries=1 op=nft_register_rule pid=6872 subj=system_u:system_r:iptables_t:s0 comm="iptables"
>Aug 20 17:13:14 pi audit[6874]: NETFILTER_CFG table=nat:96 family=2 entries=1 op=nft_register_rule pid=6874 subj=system_u:system_r:iptables_t:s0 comm="iptables"
>Aug 20 17:13:14 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=pod-web comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:13:14 pi systemd[1]: Started pod-web.service - Podman pod-web.service.
>Aug 20 17:13:14 pi systemd[1]: Starting container-php-fpm.service - Podman container-php-fpm.service...
>Aug 20 17:13:14 pi podman[6685]: 2022-08-20 17:13:14.413942938 +0000 UTC m=+1.207313940 volume create d60edf009986cfa00f404d8623f3b14583ee9a9c44411bd37cab9bf3046a4fe3
>Aug 20 17:13:14 pi podman[6685]:
>Aug 20 17:13:14 pi audit[6903]: NETFILTER_CFG table=nat:97 family=2 entries=1 op=nft_register_chain pid=6903 subj=system_u:system_r:iptables_t:s0 comm="iptables"
>Aug 20 17:13:14 pi audit[6905]: NETFILTER_CFG table=nat:98 family=2 entries=1 op=nft_register_rule pid=6905 subj=system_u:system_r:iptables_t:s0 comm="iptables"
>Aug 20 17:13:14 pi audit[6908]: NETFILTER_CFG table=nat:99 family=2 entries=1 op=nft_register_rule pid=6908 subj=system_u:system_r:iptables_t:s0 comm="iptables"
>Aug 20 17:13:14 pi audit[6910]: NETFILTER_CFG table=nat:100 family=2 entries=1 op=nft_register_rule pid=6910 subj=system_u:system_r:iptables_t:s0 comm="iptables"
>Aug 20 17:13:14 pi audit[6912]: NETFILTER_CFG table=nat:101 family=2 entries=1 op=nft_register_rule pid=6912 subj=system_u:system_r:iptables_t:s0 comm="iptables"
>Aug 20 17:13:14 pi audit[6914]: NETFILTER_CFG table=nat:102 family=2 entries=1 op=nft_register_rule pid=6914 subj=system_u:system_r:iptables_t:s0 comm="iptables"
>Aug 20 17:13:14 pi audit[6916]: NETFILTER_CFG table=nat:103 family=2 entries=1 op=nft_register_rule pid=6916 subj=system_u:system_r:iptables_t:s0 comm="iptables"
>Aug 20 17:13:14 pi audit[6918]: NETFILTER_CFG table=nat:104 family=2 entries=1 op=nft_register_rule pid=6918 subj=system_u:system_r:iptables_t:s0 comm="iptables"
>Aug 20 17:13:14 pi podman[6685]: 2022-08-20 17:13:14.535127175 +0000 UTC m=+1.328498195 container create dd210ac28e21a69885a7cd5973cce5fd59303328446005507e7f433bcbfcd146 (image=docker.io/library/redis:alpine, name=nextcloud-redis, io.containers.autoupdate=registry, PODMAN_SYSTEMD_UNIT=container-nextcloud-redis.service)
>Aug 20 17:13:14 pi audit[6920]: NETFILTER_CFG table=nat:105 family=2 entries=1 op=nft_register_rule pid=6920 subj=system_u:system_r:iptables_t:s0 comm="iptables"
>Aug 20
17:13:14 pi greenboot-status[6922]: Boot Status is GREEN - Health Check SUCCESS
>Aug 20 17:13:14 pi systemd[1]: Finished greenboot-status.service - greenboot MotD Generator.
>Aug 20 17:13:14 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=greenboot-status comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:13:14 pi audit[6924]: NETFILTER_CFG table=nat:106 family=2 entries=1 op=nft_register_rule pid=6924 subj=system_u:system_r:iptables_t:s0 comm="iptables"
>Aug 20 17:13:14 pi podman[3118]: 2022-08-20 17:13:14.574382101 +0000 UTC m=+11.937230601 container init 1b91561f7d41db345df39c9d941ae48147602b5ed1f081823073fe92ea37278c (image=docker.io/library/postgres:14, name=hass-postgres, io.containers.autoupdate=registry, PODMAN_SYSTEMD_UNIT=container-hass-postgres.service)
>Aug 20 17:13:14 pi audit[6927]: NETFILTER_CFG table=nat:107 family=2 entries=1 op=nft_register_rule pid=6927 subj=system_u:system_r:iptables_t:s0 comm="iptables"
>Aug 20 17:13:14 pi audit[6929]: NETFILTER_CFG table=nat:108 family=2 entries=1 op=nft_register_rule pid=6929 subj=system_u:system_r:iptables_t:s0 comm="iptables"
>Aug 20 17:13:14 pi audit[6932]: NETFILTER_CFG table=nat:109 family=2 entries=1 op=nft_register_rule pid=6932 subj=system_u:system_r:iptables_t:s0 comm="iptables"
>Aug 20 17:13:14 pi systemd[1]: Started container-hass-postgres.service - Podman container-hass-postgres.service.
>Aug 20 17:13:14 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=container-hass-postgres comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success'
>Aug 20 17:13:14 pi audit[6934]: NETFILTER_CFG table=nat:110 family=2 entries=1 op=nft_register_rule pid=6934 subj=system_u:system_r:iptables_t:s0 comm="iptables"
>Aug 20 17:13:14 pi firewalld[709]: WARNING: ZONE_ALREADY_SET: '10.88.0.17/32' already bound to 'trusted'
>Aug 20 17:13:14 pi podman[3118]: 2022-08-20 17:13:14.718459411 +0000 UTC m=+12.081307893 container start 1b91561f7d41db345df39c9d941ae48147602b5ed1f081823073fe92ea37278c (image=docker.io/library/postgres:14, name=hass-postgres, io.containers.autoupdate=registry, PODMAN_SYSTEMD_UNIT=container-hass-postgres.service)
>Aug 20 17:13:14 pi podman[3118]: 1b91561f7d41db345df39c9d941ae48147602b5ed1f081823073fe92ea37278c
>Aug 20 17:13:14 pi podman[6788]: 2022-08-20 17:13:14.310413279 +0000 UTC m=+0.551066049 image pull docker.io/koenkk/zigbee2mqtt
>Aug 20 17:13:14 pi systemd[1]: Started libpod-5c8631e3596e13ace3b0b45d2dad00f9e1adbefe22cf11e1c5566acae71dd485.scope - libcrun container.
>Aug 20 17:13:14 pi audit: BPF prog-id=79 op=LOAD
>Aug 20 17:13:14 pi systemd[1]: Started libpod-795a01a9042ccf55388bfab87c9088d9e7689f602eb754e32c525b77d49ac752.scope - libcrun container.
>Aug 20 17:13:14 pi audit: BPF prog-id=80 op=LOAD
>Aug 20 17:13:14 pi systemd[1]: Started libpod-f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.scope - libcrun container.
>Aug 20 17:13:14 pi audit: BPF prog-id=81 op=LOAD
>Aug 20 17:13:14 pi podman[6885]: 2022-08-20 17:13:14.664679521 +0000 UTC m=+0.360873149 image pull docker.io/php:fpm-alpine
>Aug 20 17:13:14 pi podman[5682]: 2022-08-20 17:13:14.916375 +0000 UTC m=+5.208885063 container init 5c8631e3596e13ace3b0b45d2dad00f9e1adbefe22cf11e1c5566acae71dd485 (image=localhost/podman-pause:4.1.1-1658516809, name=69c38bffc6cf-infra, PODMAN_SYSTEMD_UNIT=pod-gitea.service, io.buildah.version=1.26.1)
>Aug 20 17:13:15 pi podman[1013]: 2022-08-20 17:13:15.163030981 +0000 UTC m=+23.398900009 container init 795a01a9042ccf55388bfab87c9088d9e7689f602eb754e32c525b77d49ac752 (image=quay.io/oauth2-proxy/oauth2-proxy:latest, name=oauth2-proxy, PODMAN_SYSTEMD_UNIT=container-oauth2-proxy.service)
>Aug 20 17:13:15 pi systemd[1]: sysroot-tmp-crun.Sw5ljG.mount: Deactivated successfully.
>Aug 20 17:13:15 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=container-oauth2-proxy comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:13:15 pi systemd[1]: Started container-oauth2-proxy.service - Podman container-oauth2-proxy.service.
>Aug 20 17:13:15 pi podman[6679]:
>Aug 20 17:13:15 pi systemd[1]: Started f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.timer - /usr/bin/podman healthcheck run f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.
>Aug 20 17:13:15 pi podman[1030]: 2022-08-20 17:13:15.377142757 +0000 UTC m=+23.622851691 container init f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, org.opencontainers.image.licenses=, org.opencontainers.image.title=docker-pi-hole, org.opencontainers.image.created=2022-07-10T13:08:55.626Z, org.opencontainers.image.url=https://github.com/pi-hole/docker-pi-hole, PODMAN_SYSTEMD_UNIT=container-pihole.service, org.opencontainers.image.revision=7e69551be1b76d175fffc1b8c53733e74ee82520, org.opencontainers.image.version=2022.07.1, io.containers.autoupdate=registry, org.opencontainers.image.description=Pi-hole in a docker container, org.opencontainers.image.source=https://github.com/pi-hole/docker-pi-hole)
>Aug 20 17:13:15 pi podman[6679]: 2022-08-20 17:13:15.39221703 +0000 UTC m=+2.190615456 container create 3dd752829c85e02463e7210bc0d86af083a6de75248c41d6e9ac413da80395e6 (image=docker.io/library/postgres:13, name=nextcloud-postgres, PODMAN_SYSTEMD_UNIT=container-nextcloud-postgres.service, io.containers.autoupdate=registry)
>Aug 20 17:13:15 pi systemd[1]: Started container-pihole.service - Podman container-pihole.service.
>Aug 20 17:13:15 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=container-pihole comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:13:15 pi systemd[1]: Started f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service - /usr/bin/podman healthcheck run f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.
>Aug 20 17:13:15 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success'
>Aug 20 17:13:15 pi podman[5682]: 2022-08-20 17:13:15.57743106 +0000 UTC m=+5.869941104 container start 5c8631e3596e13ace3b0b45d2dad00f9e1adbefe22cf11e1c5566acae71dd485 (image=localhost/podman-pause:4.1.1-1658516809, name=69c38bffc6cf-infra, PODMAN_SYSTEMD_UNIT=pod-gitea.service, io.buildah.version=1.26.1)
>Aug 20 17:13:15 pi podman[5682]: 69c38bffc6cf4d82ae114c9532823ec24762e5f289ec8e175316c69219940892
>Aug 20 17:13:15 pi podman[5682]: 2022-08-20 17:13:15.578525195 +0000 UTC m=+5.871035258 pod start 69c38bffc6cf4d82ae114c9532823ec24762e5f289ec8e175316c69219940892 (image=, name=gitea)
>Aug 20 17:13:15 pi podman[1013]: 2022-08-20 17:13:15.645493454 +0000 UTC m=+23.881362464 container start 795a01a9042ccf55388bfab87c9088d9e7689f602eb754e32c525b77d49ac752 (image=quay.io/oauth2-proxy/oauth2-proxy:latest, name=oauth2-proxy, PODMAN_SYSTEMD_UNIT=container-oauth2-proxy.service)
>Aug 20 17:13:15 pi podman[1013]: 795a01a9042ccf55388bfab87c9088d9e7689f602eb754e32c525b77d49ac752
>Aug 20 17:13:15 pi systemd[1]: Started pod-gitea.service - Podman pod-gitea.service.
>Aug 20 17:13:15 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=pod-gitea comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success'
>Aug 20 17:13:15 pi podman[1030]: 2022-08-20 17:13:15.697762232 +0000 UTC m=+23.943471092 container start f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, org.opencontainers.image.description=Pi-hole in a docker container, org.opencontainers.image.revision=7e69551be1b76d175fffc1b8c53733e74ee82520, org.opencontainers.image.url=https://github.com/pi-hole/docker-pi-hole, PODMAN_SYSTEMD_UNIT=container-pihole.service, org.opencontainers.image.version=2022.07.1, io.containers.autoupdate=registry, org.opencontainers.image.licenses=, org.opencontainers.image.title=docker-pi-hole, org.opencontainers.image.created=2022-07-10T13:08:55.626Z, org.opencontainers.image.source=https://github.com/pi-hole/docker-pi-hole)
>Aug 20 17:13:15 pi podman[1030]: f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e
>Aug 20 17:13:15 pi systemd[1]: Starting container-gitea-postgres.service - Podman container-gitea-postgres.service...
>Aug 20 17:13:15 pi systemd[1]: Started libpod-802033efbdc52f81721a61d0a389c102df750e725685b80f038666e1fe1d3c2b.scope - libcrun container.
>Aug 20 17:13:15 pi audit: BPF prog-id=82 op=LOAD
>Aug 20 17:13:15 pi systemd[1]: Started libpod-c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518.scope - libcrun container.
>Aug 20 17:13:15 pi systemd[1]: Started libpod-dd210ac28e21a69885a7cd5973cce5fd59303328446005507e7f433bcbfcd146.scope - libcrun container.
>Aug 20 17:13:15 pi audit: BPF prog-id=83 op=LOAD
>Aug 20 17:13:15 pi audit: BPF prog-id=84 op=LOAD
>Aug 20 17:13:15 pi podman[1029]: 2022-08-20 17:13:15.98895393 +0000 UTC m=+24.226231424 container init 802033efbdc52f81721a61d0a389c102df750e725685b80f038666e1fe1d3c2b (image=docker.io/jc21/nginx-proxy-manager:latest, name=proxy, PODMAN_SYSTEMD_UNIT=container-proxy.service, org.label-schema.name=nginx-proxy-manager, org.label-schema.url=https://github.com/jc21/nginx-proxy-manager, org.label-schema.license=MIT, org.label-schema.schema-version=1.0, org.label-schema.description=Docker container for managing Nginx proxy hosts with a simple, powerful interface , io.containers.autoupdate=registry, maintainer=Jamie Curnow <jc@jc21.com>, org.label-schema.cmd=docker run --rm -ti jc21/nginx-proxy-manager:latest, org.label-schema.vcs-url=https://github.com/jc21/nginx-proxy-manager.git)
>Aug 20 17:13:15 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=container-proxy comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:13:16 pi systemd[1]: Started container-proxy.service - Podman container-proxy.service.
>Aug 20 17:13:16 pi podman[1029]: 2022-08-20 17:13:16.01414133 +0000 UTC m=+24.251418879 container start 802033efbdc52f81721a61d0a389c102df750e725685b80f038666e1fe1d3c2b (image=docker.io/jc21/nginx-proxy-manager:latest, name=proxy, PODMAN_SYSTEMD_UNIT=container-proxy.service, org.label-schema.description=Docker container for managing Nginx proxy hosts with a simple, powerful interface , org.label-schema.name=nginx-proxy-manager, io.containers.autoupdate=registry, org.label-schema.vcs-url=https://github.com/jc21/nginx-proxy-manager.git, maintainer=Jamie Curnow <jc@jc21.com>, org.label-schema.cmd=docker run --rm -ti jc21/nginx-proxy-manager:latest, org.label-schema.url=https://github.com/jc21/nginx-proxy-manager, org.label-schema.license=MIT, org.label-schema.schema-version=1.0)
>Aug 20 17:13:16 pi podman[1029]: 802033efbdc52f81721a61d0a389c102df750e725685b80f038666e1fe1d3c2b
>Aug 20 17:13:16 pi systemd[1]: Started libpod-26a887dcca3f88b2f74d5a34e234764edcc9bfaa2a1fc7b08c0e21008d6f4691.scope - libcrun container.
>Aug 20 17:13:16 pi audit: BPF prog-id=85 op=LOAD
>Aug 20 17:13:16 pi podman[7002]: 2022-08-20 17:13:16.034962816 +0000 UTC m=+0.240227351 image pull docker.io/postgres:11
>Aug 20 17:13:16 pi podman[6685]: 2022-08-20 17:13:16.41504895 +0000 UTC m=+3.208420007 container init dd210ac28e21a69885a7cd5973cce5fd59303328446005507e7f433bcbfcd146 (image=docker.io/library/redis:alpine, name=nextcloud-redis, PODMAN_SYSTEMD_UNIT=container-nextcloud-redis.service, io.containers.autoupdate=registry)
>Aug 20 17:13:16 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=container-nextcloud-redis comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:13:16 pi systemd[1]: Started container-nextcloud-redis.service - Podman container-redis.service.
>Aug 20 17:13:16 pi podman[6685]: 2022-08-20 17:13:16.562099208 +0000 UTC m=+3.355470321 container start dd210ac28e21a69885a7cd5973cce5fd59303328446005507e7f433bcbfcd146 (image=docker.io/library/redis:alpine, name=nextcloud-redis, io.containers.autoupdate=registry, PODMAN_SYSTEMD_UNIT=container-nextcloud-redis.service)
>Aug 20 17:13:16 pi podman[6685]: dd210ac28e21a69885a7cd5973cce5fd59303328446005507e7f433bcbfcd146
>Aug 20 17:13:16 pi podman[1015]: 2022-08-20 17:13:16.642623425 +0000 UTC m=+24.879913752 container init 26a887dcca3f88b2f74d5a34e234764edcc9bfaa2a1fc7b08c0e21008d6f4691 (image=docker.io/jc21/nginx-proxy-manager:latest, name=proxy-internal, io.containers.autoupdate=registry, org.label-schema.description=Docker container for managing Nginx proxy hosts with a simple, powerful interface , org.label-schema.url=https://github.com/jc21/nginx-proxy-manager, maintainer=Jamie Curnow <jc@jc21.com>, org.label-schema.license=MIT, org.label-schema.name=nginx-proxy-manager, org.label-schema.cmd=docker run --rm -ti jc21/nginx-proxy-manager:latest, org.label-schema.vcs-url=https://github.com/jc21/nginx-proxy-manager.git, PODMAN_SYSTEMD_UNIT=container-proxy-internal.service, org.label-schema.schema-version=1.0)
>Aug 20 17:13:16 pi systemd[1]: Started container-proxy-internal.service - Podman container-proxy-internal.service.
>Aug 20 17:13:16 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=container-proxy-internal comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:13:16 pi systemd[1]: Started c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518.timer - /usr/bin/podman healthcheck run c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518.
>Aug 20 17:13:16 pi podman[1016]: 2022-08-20 17:13:16.83062883 +0000 UTC m=+25.072501215 container init c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 (image=docker.io/vaultwarden/server:latest, name=vaultwarden-server, org.opencontainers.image.revision=ce9d93003cd37a79edba1ba830a6c6d3fa22c2c8, org.opencontainers.image.created=2022-07-27T18:44:18+00:00, org.opencontainers.image.source=https://github.com/dani-garcia/vaultwarden, org.opencontainers.image.documentation=https://github.com/dani-garcia/vaultwarden/wiki, org.opencontainers.image.url=https://hub.docker.com/r/vaultwarden/server, org.opencontainers.image.licenses=GPL-3.0-only, io.containers.autoupdate=registry, io.balena.qemu.version=7.0.0+balena1-aarch64, io.balena.architecture=aarch64, PODMAN_SYSTEMD_UNIT=container-vaultwarden-server.service, org.opencontainers.image.version=1.25.2)
>Aug 20 17:13:16 pi podman[1015]: 2022-08-20 17:13:16.863484087 +0000 UTC m=+25.100774469 container start 26a887dcca3f88b2f74d5a34e234764edcc9bfaa2a1fc7b08c0e21008d6f4691 (image=docker.io/jc21/nginx-proxy-manager:latest, name=proxy-internal, maintainer=Jamie Curnow <jc@jc21.com>, org.label-schema.name=nginx-proxy-manager, org.label-schema.vcs-url=https://github.com/jc21/nginx-proxy-manager.git, PODMAN_SYSTEMD_UNIT=container-proxy-internal.service, org.label-schema.license=MIT, org.label-schema.schema-version=1.0, io.containers.autoupdate=registry, org.label-schema.cmd=docker run --rm -ti jc21/nginx-proxy-manager:latest, org.label-schema.description=Docker container for managing Nginx proxy hosts with a simple, powerful interface , org.label-schema.url=https://github.com/jc21/nginx-proxy-manager)
>Aug 20 17:13:16 pi podman[1015]: 26a887dcca3f88b2f74d5a34e234764edcc9bfaa2a1fc7b08c0e21008d6f4691
>Aug 20 17:13:16 pi systemd[1]: Started container-vaultwarden-server.service - Podman container-vaultwarden-server.service.
>Aug 20 17:13:16 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=container-vaultwarden-server comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:13:17 pi systemd[1]: Started c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518.service - /usr/bin/podman healthcheck run c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518.
>Aug 20 17:13:17 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:13:17 pi podman[1016]: 2022-08-20 17:13:17.244198066 +0000 UTC m=+25.486070433 container start c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 (image=docker.io/vaultwarden/server:latest, name=vaultwarden-server, org.opencontainers.image.licenses=GPL-3.0-only, PODMAN_SYSTEMD_UNIT=container-vaultwarden-server.service, io.balena.qemu.version=7.0.0+balena1-aarch64, org.opencontainers.image.revision=ce9d93003cd37a79edba1ba830a6c6d3fa22c2c8, org.opencontainers.image.created=2022-07-27T18:44:18+00:00, org.opencontainers.image.source=https://github.com/dani-garcia/vaultwarden, io.containers.autoupdate=registry, io.balena.architecture=aarch64, org.opencontainers.image.documentation=https://github.com/dani-garcia/vaultwarden/wiki, org.opencontainers.image.version=1.25.2, org.opencontainers.image.url=https://hub.docker.com/r/vaultwarden/server)
>Aug 20 17:13:17 pi podman[1016]: c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518
>Aug 20 17:13:17 pi hass-postgres[6856]:
>Aug 20 17:13:17 pi hass-postgres[6856]: PostgreSQL Database directory appears to contain a database; Skipping initialization
>Aug 20 17:13:17 pi hass-postgres[6856]:
>Aug 20 17:13:17 pi podman[6981]: 2022-08-20 17:13:17.370234415 +0000 UTC
m=+1.860170000 container exec f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, org.opencontainers.image.source=https://github.com/pi-hole/docker-pi-hole, org.opencontainers.image.description=Pi-hole in a docker container, org.opencontainers.image.licenses=, org.opencontainers.image.url=https://github.com/pi-hole/docker-pi-hole, org.opencontainers.image.version=2022.07.1, org.opencontainers.image.revision=7e69551be1b76d175fffc1b8c53733e74ee82520, PODMAN_SYSTEMD_UNIT=container-pihole.service, io.containers.autoupdate=registry, org.opencontainers.image.created=2022-07-10T13:08:55.626Z, org.opencontainers.image.title=docker-pi-hole) >Aug 20 17:13:17 pi systemd[1]: Started libpod-3dd752829c85e02463e7210bc0d86af083a6de75248c41d6e9ac413da80395e6.scope - libcrun container. >Aug 20 17:13:17 pi audit: BPF prog-id=86 op=LOAD >Aug 20 17:13:17 pi podman[7002]: >Aug 20 17:13:17 pi podman[7067]: 2022-08-20 17:13:17.538399212 +0000 UTC m=+0.484922169 container exec c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 (image=docker.io/vaultwarden/server:latest, name=vaultwarden-server, org.opencontainers.image.version=1.25.2, io.containers.autoupdate=registry, org.opencontainers.image.created=2022-07-27T18:44:18+00:00, org.opencontainers.image.documentation=https://github.com/dani-garcia/vaultwarden/wiki, io.balena.architecture=aarch64, io.balena.qemu.version=7.0.0+balena1-aarch64, org.opencontainers.image.licenses=GPL-3.0-only, org.opencontainers.image.revision=ce9d93003cd37a79edba1ba830a6c6d3fa22c2c8, org.opencontainers.image.source=https://github.com/dani-garcia/vaultwarden, org.opencontainers.image.url=https://hub.docker.com/r/vaultwarden/server, PODMAN_SYSTEMD_UNIT=container-vaultwarden-server.service) >Aug 20 17:13:17 pi podman[7002]: 2022-08-20 17:13:17.711489989 +0000 UTC m=+1.916754488 container create 8a066e4f9d572d346f082f84e0576276928a2967a42311ff0a0432e4b58ebc31 
(image=docker.io/library/postgres:11, name=gitea-postgres, PODMAN_SYSTEMD_UNIT=container-gitea-postgres.service, io.containers.autoupdate=registry) >Aug 20 17:13:17 pi podman[6885]: >Aug 20 17:13:17 pi podman[6885]: 2022-08-20 17:13:17.819102565 +0000 UTC m=+3.515296194 container create 281971e998e590f8841e3b6d7df9ac3a82ada94e3b044cd5b5d71242d226747e (image=docker.io/library/php:fpm-alpine, name=php-fpm, PODMAN_SYSTEMD_UNIT=container-php-fpm.service, io.containers.autoupdate=registry) >Aug 20 17:13:18 pi systemd[1]: Starting pmie_check.service - Check PMIE instances are running... >Aug 20 17:13:18 pi systemd[1]: Starting pmie_farm_check.service - Check and migrate non-primary pmie farm instances... >Aug 20 17:13:18 pi systemd[1]: Starting pmlogger_check.service - Check pmlogger instances are running... >Aug 20 17:13:18 pi systemd[1]: Starting pmlogger_farm_check.service - Check and migrate non-primary pmlogger farm instances... >Aug 20 17:13:18 pi systemd[1]: Started pmie_check.service - Check PMIE instances are running. >Aug 20 17:13:18 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=pmie_check comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:13:18 pi systemd[1]: Started pmlogger_check.service - Check pmlogger instances are running. >Aug 20 17:13:18 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=pmlogger_check comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:13:18 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=pmlogger_farm_check comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' >Aug 20 17:13:18 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=pmie_farm_check comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:13:18 pi systemd[1]: Started pmlogger_farm_check.service - Check and migrate non-primary pmlogger farm instances. >Aug 20 17:13:18 pi systemd[1]: Started pmie_farm_check.service - Check and migrate non-primary pmie farm instances. >Aug 20 17:13:18 pi podman[6788]: >Aug 20 17:13:18 pi nextcloud-redis[7010]: 1:C 20 Aug 2022 17:13:18.455 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo >Aug 20 17:13:18 pi nextcloud-redis[7010]: 1:C 20 Aug 2022 17:13:18.455 # Redis version=7.0.4, bits=64, commit=00000000, modified=0, pid=1, just started >Aug 20 17:13:18 pi nextcloud-redis[7010]: 1:C 20 Aug 2022 17:13:18.455 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf >Aug 20 17:13:18 pi nextcloud-redis[7010]: 1:M 20 Aug 2022 17:13:18.459 * monotonic clock: POSIX clock_gettime >Aug 20 17:13:18 pi podman[6788]: 2022-08-20 17:13:18.496331312 +0000 UTC m=+4.736984007 container create 8243ecfa6162a3844bcab6aa50b82ba7963ff45f5ffc07207c09f2eebdb3785e (image=docker.io/koenkk/zigbee2mqtt:latest, name=hass-zigbee2mqtt, PODMAN_SYSTEMD_UNIT=container-hass-zigbee2mqtt.service, io.containers.autoupdate=registry) >Aug 20 17:13:18 pi nextcloud-redis[7010]: 1:M 20 Aug 2022 17:13:18.511 * Running mode=standalone, port=6379. >Aug 20 17:13:18 pi nextcloud-redis[7010]: 1:M 20 Aug 2022 17:13:18.511 # Server initialized >Aug 20 17:13:18 pi nextcloud-redis[7010]: 1:M 20 Aug 2022 17:13:18.512 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect. 
>Aug 20 17:13:18 pi nextcloud-redis[7010]: 1:M 20 Aug 2022 17:13:18.777 * Ready to accept connections >Aug 20 17:13:18 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=NetworkManager-dispatcher comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:13:18 pi systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully. >Aug 20 17:13:19 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=pmlogger_farm_check comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:13:19 pi systemd[1]: pmlogger_farm_check.service: Deactivated successfully. >Aug 20 17:13:19 pi podman[6679]: 2022-08-20 17:13:19.039977962 +0000 UTC m=+5.838376517 container init 3dd752829c85e02463e7210bc0d86af083a6de75248c41d6e9ac413da80395e6 (image=docker.io/library/postgres:13, name=nextcloud-postgres, io.containers.autoupdate=registry, PODMAN_SYSTEMD_UNIT=container-nextcloud-postgres.service) >Aug 20 17:13:19 pi systemd[1]: sysroot-tmp-crun.s4NOfZ.mount: Deactivated successfully. >Aug 20 17:13:19 pi podman[6679]: 2022-08-20 17:13:19.14468146 +0000 UTC m=+5.943079849 container start 3dd752829c85e02463e7210bc0d86af083a6de75248c41d6e9ac413da80395e6 (image=docker.io/library/postgres:13, name=nextcloud-postgres, io.containers.autoupdate=registry, PODMAN_SYSTEMD_UNIT=container-nextcloud-postgres.service) >Aug 20 17:13:19 pi podman[6679]: 3dd752829c85e02463e7210bc0d86af083a6de75248c41d6e9ac413da80395e6 >Aug 20 17:13:19 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=container-nextcloud-postgres comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' >Aug 20 17:13:19 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=pmie_check comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:13:19 pi systemd[1]: Started container-nextcloud-postgres.service - Podman container-postgres.service. >Aug 20 17:13:19 pi systemd[1]: pmie_check.service: Deactivated successfully. >Aug 20 17:13:19 pi systemd[1]: Starting container-nextcloud-fpm.service - Podman container-nextcloud-fpm.service... >Aug 20 17:13:19 pi systemd[1]: sysroot-tmp-crun.m8qcyN.mount: Deactivated successfully. >Aug 20 17:13:19 pi systemd[1]: Started libpod-8243ecfa6162a3844bcab6aa50b82ba7963ff45f5ffc07207c09f2eebdb3785e.scope - libcrun container. >Aug 20 17:13:19 pi systemd[1]: pmie_farm_check.service: Deactivated successfully. >Aug 20 17:13:19 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=pmie_farm_check comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:13:19 pi audit: BPF prog-id=87 op=LOAD >Aug 20 17:13:19 pi podman[6788]: 2022-08-20 17:13:19.480026138 +0000 UTC m=+5.720678797 container init 8243ecfa6162a3844bcab6aa50b82ba7963ff45f5ffc07207c09f2eebdb3785e (image=docker.io/koenkk/zigbee2mqtt:latest, name=hass-zigbee2mqtt, io.containers.autoupdate=registry, PODMAN_SYSTEMD_UNIT=container-hass-zigbee2mqtt.service) >Aug 20 17:13:19 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=container-hass-zigbee2mqtt comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:13:19 pi systemd[1]: Started container-hass-zigbee2mqtt.service - Podman container-hass-zigbee2mqtt.service. 
>Aug 20 17:13:19 pi nextcloud-postgres[7088]: >Aug 20 17:13:19 pi nextcloud-postgres[7088]: PostgreSQL Database directory appears to contain a database; Skipping initialization >Aug 20 17:13:19 pi nextcloud-postgres[7088]: >Aug 20 17:13:19 pi systemd[1]: Starting container-hass-app.service - Podman container-hass-app.service... >Aug 20 17:13:19 pi podman[6788]: 2022-08-20 17:13:19.582135555 +0000 UTC m=+5.822788250 container start 8243ecfa6162a3844bcab6aa50b82ba7963ff45f5ffc07207c09f2eebdb3785e (image=docker.io/koenkk/zigbee2mqtt:latest, name=hass-zigbee2mqtt, PODMAN_SYSTEMD_UNIT=container-hass-zigbee2mqtt.service, io.containers.autoupdate=registry) >Aug 20 17:13:19 pi podman[6788]: 8243ecfa6162a3844bcab6aa50b82ba7963ff45f5ffc07207c09f2eebdb3785e >Aug 20 17:13:19 pi hass-zigbee2mqtt[7556]: Using '/app/data' as data directory >Aug 20 17:13:19 pi proxy[6994]: [s6-init] making user provided files available at /var/run/s6/etc...exited 0. >Aug 20 17:13:19 pi proxy-internal[7035]: [s6-init] making user provided files available at /var/run/s6/etc...exited 0. >Aug 20 17:13:19 pi podman[7679]: 2022-08-20 17:13:19.902477052 +0000 UTC m=+0.293016623 image pull docker.io/homeassistant/raspberrypi4-64-homeassistant:stable >Aug 20 17:13:20 pi podman[7553]: 2022-08-20 17:13:19.647235779 +0000 UTC m=+0.403088726 image pull docker.io/nextcloud:fpm-alpine >Aug 20 17:13:20 pi oauth2-proxy[6953]: [2022/08/20 17:13:20] [provider.go:55] Performing OIDC Discovery... >Aug 20 17:13:21 pi audit: BPF prog-id=0 op=UNLOAD >Aug 20 17:13:21 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-:1.2-org.fedoraproject.SetroubleshootPrivileged@0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' >Aug 20 17:13:21 pi oauth2-proxy[6953]: [2022/08/20 17:13:21] [main.go:60] ERROR: Failed to initialise OAuth2 Proxy: error intiailising provider: could not create provider data: error building OIDC ProviderVerifier: could not get verifier builder: error while discovery OIDC configuration: failed to discover OIDC configuration: error performing request: Get "https://auth.vanoverloop.xyz/realms/master/.well-known/openid-configuration": dial tcp 10.0.3.10:443: connect: connection refused >Aug 20 17:13:21 pi systemd[1]: libpod-795a01a9042ccf55388bfab87c9088d9e7689f602eb754e32c525b77d49ac752.scope: Deactivated successfully. >Aug 20 17:13:21 pi systemd[1]: dbus-:1.2-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully. >Aug 20 17:13:21 pi systemd[1]: dbus-:1.2-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 2.600s CPU time. >Aug 20 17:13:21 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:13:21 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:13:21 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 4. >Aug 20 17:13:21 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:13:21 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon... 
>Aug 20 17:13:21 pi podman[7876]: 2022-08-20 17:13:21.967613245 +0000 UTC m=+0.282778221 container died 795a01a9042ccf55388bfab87c9088d9e7689f602eb754e32c525b77d49ac752 (image=quay.io/oauth2-proxy/oauth2-proxy:latest, name=oauth2-proxy) >Aug 20 17:13:22 pi dbus-parsec[7929]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist))) >Aug 20 17:13:22 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE >Aug 20 17:13:22 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'. >Aug 20 17:13:22 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:13:22 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' >Aug 20 17:13:22 pi systemd[1]: dbus-:1.2-org.fedoraproject.Setroubleshootd@0.service: Deactivated successfully. >Aug 20 17:13:22 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-:1.2-org.fedoraproject.Setroubleshootd@0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:13:22 pi systemd[1]: dbus-:1.2-org.fedoraproject.Setroubleshootd@0.service: Consumed 9.365s CPU time. >Aug 20 17:13:22 pi pihole[6958]: [s6-init] making user provided files available at /var/run/s6/etc...exited 0. >Aug 20 17:13:22 pi proxy[6994]: [s6-init] ensuring user provided files have correct perms...exited 0. >Aug 20 17:13:22 pi proxy-internal[7035]: [s6-init] ensuring user provided files have correct perms...exited 0. >Aug 20 17:13:22 pi proxy[6994]: [fix-attrs.d] applying ownership & permissions fixes... >Aug 20 17:13:22 pi proxy-internal[7035]: [fix-attrs.d] applying ownership & permissions fixes... >Aug 20 17:13:22 pi proxy[6994]: [fix-attrs.d] done. >Aug 20 17:13:22 pi proxy-internal[7035]: [fix-attrs.d] done. 
>Aug 20 17:13:22 pi proxy[6994]: [cont-init.d] executing container initialization scripts... >Aug 20 17:13:22 pi proxy-internal[7035]: [cont-init.d] executing container initialization scripts... >Aug 20 17:13:22 pi proxy-internal[7035]: [cont-init.d] 01_perms.sh: executing... >Aug 20 17:13:22 pi proxy[6994]: [cont-init.d] 01_perms.sh: executing... >Aug 20 17:13:23 pi podman[7553]: >Aug 20 17:13:23 pi podman[7553]: 2022-08-20 17:13:23.308916646 +0000 UTC m=+4.064769629 container create 958ee2e818e523891fb600e4aef33739375390eb9cbc8485a2949186f0aa167e (image=docker.io/library/nextcloud:fpm-alpine, name=nextcloud-fpm, PODMAN_SYSTEMD_UNIT=container-nextcloud-fpm.service, io.containers.autoupdate=registry) >Aug 20 17:13:23 pi podman[7067]: 2022-08-20 17:13:23.370310468 +0000 UTC m=+6.316833592 container exec_died c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 (image=docker.io/vaultwarden/server:latest, name=vaultwarden-server, execID=03dcf80ccec132d1129089609af861ca17d1841986bdea11d55bd57933285116) >Aug 20 17:13:23 pi pihole[6958]: [s6-init] ensuring user provided files have correct perms...exited 0. >Aug 20 17:13:23 pi pihole[6958]: [fix-attrs.d] applying ownership & permissions fixes... >Aug 20 17:13:23 pi pihole[6958]: [fix-attrs.d] 01-resolver-resolv: applying... >Aug 20 17:13:23 pi pihole[6958]: [fix-attrs.d] 01-resolver-resolv: exited 0. >Aug 20 17:13:23 pi pihole[6958]: [fix-attrs.d] done. >Aug 20 17:13:23 pi pihole[6958]: [cont-init.d] executing container initialization scripts... >Aug 20 17:13:23 pi pihole[6958]: [cont-init.d] 05-changer-uid-gid.sh: executing... 
>Aug 20 17:13:23 pi audit[709]: NETFILTER_CFG table=firewalld:111 family=1 entries=6 op=nft_unregister_rule pid=709 subj=system_u:system_r:firewalld_t:s0 comm="firewalld" >Aug 20 17:13:23 pi audit[8809]: NETFILTER_CFG table=nat:112 family=2 entries=3 op=nft_unregister_rule pid=8809 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:23 pi audit[8811]: NETFILTER_CFG table=nat:113 family=2 entries=1 op=nft_unregister_rule pid=8811 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:23 pi audit[8812]: NETFILTER_CFG table=nat:114 family=2 entries=1 op=nft_unregister_chain pid=8812 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:23 pi audit[8813]: NETFILTER_CFG table=nat:115 family=2 entries=1 op=nft_register_chain pid=8813 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:23 pi audit[8815]: NETFILTER_CFG table=nat:116 family=2 entries=1 op=nft_unregister_chain pid=8815 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:23 pi audit[8816]: NETFILTER_CFG table=nat:117 family=10 entries=2 op=nft_register_chain pid=8816 subj=system_u:system_r:iptables_t:s0 comm="ip6tables" >Aug 20 17:13:23 pi podman[7067]: unhealthy >Aug 20 17:13:23 pi audit[8818]: NETFILTER_CFG table=nat:118 family=10 entries=1 op=nft_unregister_chain pid=8818 subj=system_u:system_r:iptables_t:s0 comm="ip6tables" >Aug 20 17:13:23 pi audit[8819]: NETFILTER_CFG table=nat:119 family=10 entries=1 op=nft_register_chain pid=8819 subj=system_u:system_r:iptables_t:s0 comm="ip6tables" >Aug 20 17:13:23 pi podman[7679]: >Aug 20 17:13:23 pi audit[8821]: NETFILTER_CFG table=nat:120 family=10 entries=1 op=nft_unregister_chain pid=8821 subj=system_u:system_r:iptables_t:s0 comm="ip6tables" >Aug 20 17:13:23 pi systemd[1]: c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518.service: Main process exited, code=exited, status=1/FAILURE >Aug 20 17:13:23 pi systemd[1]: 
c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518.service: Failed with result 'exit-code'. >Aug 20 17:13:23 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' >Aug 20 17:13:23 pi audit: ANOM_PROMISCUOUS dev=vethbb47b8e2 prom=0 old_prom=256 auid=4294967295 uid=0 gid=0 ses=4294967295 >Aug 20 17:13:23 pi kernel: cni-podman0: port 3(vethbb47b8e2) entered disabled state >Aug 20 17:13:23 pi kernel: device vethbb47b8e2 left promiscuous mode >Aug 20 17:13:23 pi kernel: cni-podman0: port 3(vethbb47b8e2) entered disabled state >Aug 20 17:13:24 pi NetworkManager[717]: <info> [1661015604.0158] device (vethbb47b8e2): released from master device cni-podman0 >Aug 20 17:13:24 pi audit[8844]: NETFILTER_CFG table=nat:121 family=2 entries=1 op=nft_unregister_rule pid=8844 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:24 pi audit[8847]: NETFILTER_CFG table=nat:122 family=2 entries=2 op=nft_unregister_rule pid=8847 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:24 pi audit[8848]: NETFILTER_CFG table=nat:123 family=2 entries=1 op=nft_unregister_chain pid=8848 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:24 pi systemd[1]: run-netns-netns\x2da8509f51\x2d0751\x2dadca\x2d4dab\x2d1d40b86ef142.mount: Deactivated successfully. 
>Aug 20 17:13:24 pi podman[7679]: 2022-08-20 17:13:24.149586446 +0000 UTC m=+4.540126035 container create 9c411440b0d38e0731c51dfc13ac5f126dc4a7bd580e7efb56feac08083646e4 (image=docker.io/homeassistant/raspberrypi4-64-homeassistant:stable, name=hass-app, io.hass.machine=raspberrypi4-64, org.opencontainers.image.url=https://www.home-assistant.io/, io.hass.base.version=2022.06.2, org.opencontainers.image.documentation=https://www.home-assistant.io/docs/, io.hass.version=2022.8.6, org.opencontainers.image.licenses=Apache License 2.0, io.hass.base.arch=aarch64, io.containers.autoupdate=registry, org.opencontainers.image.version=2022.8.6, io.hass.arch=aarch64, io.hass.base.name=python, org.opencontainers.image.authors=The Home Assistant Authors, org.opencontainers.image.title=Home Assistant, io.hass.base.image=homeassistant/aarch64-base:3.16, io.hass.type=core, org.opencontainers.image.created=2022-08-18 15:37:26+00:00, org.opencontainers.image.description=Open-source home automation platform running on Python 3, PODMAN_SYSTEMD_UNIT=container-hass-app.service, org.opencontainers.image.source=https://github.com/home-assistant/core) >Aug 20 17:13:24 pi pihole[6958]: [cont-init.d] 05-changer-uid-gid.sh: exited 0. >Aug 20 17:13:24 pi pihole[6958]: [cont-init.d] 20-start.sh: executing... >Aug 20 17:13:24 pi systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-795a01a9042ccf55388bfab87c9088d9e7689f602eb754e32c525b77d49ac752-userdata-shm.mount: Deactivated successfully. >Aug 20 17:13:24 pi systemd[1]: var-lib-containers-storage-overlay-ae5bafe7cae5d3ec541bb48f759406e9ca15a858d2a14f1e24e4f55357b89866-merged.mount: Deactivated successfully. >Aug 20 17:13:24 pi proxy-internal[7035]: Changing ownership of /data/logs to 0:0 >Aug 20 17:13:24 pi proxy[6994]: Changing ownership of /data/logs to 0:0 >Aug 20 17:13:24 pi proxy-internal[7035]: [cont-init.d] 01_perms.sh: exited 0. >Aug 20 17:13:24 pi proxy[6994]: [cont-init.d] 01_perms.sh: exited 0. 
>Aug 20 17:13:24 pi proxy-internal[7035]: [cont-init.d] 01_s6-secret-init.sh: executing... >Aug 20 17:13:24 pi proxy[6994]: [cont-init.d] 01_s6-secret-init.sh: executing... >Aug 20 17:13:25 pi systemd[1]: Started libpod-8a066e4f9d572d346f082f84e0576276928a2967a42311ff0a0432e4b58ebc31.scope - libcrun container. >Aug 20 17:13:25 pi audit: BPF prog-id=88 op=LOAD >Aug 20 17:13:25 pi pihole[6958]: ::: Starting docker specific checks & setup for docker pihole/pihole >Aug 20 17:13:25 pi proxy-internal[7035]: [cont-init.d] 01_s6-secret-init.sh: exited 0. >Aug 20 17:13:25 pi proxy[6994]: [cont-init.d] 01_s6-secret-init.sh: exited 0. >Aug 20 17:13:25 pi proxy-internal[7035]: [cont-init.d] done. >Aug 20 17:13:25 pi proxy[6994]: [cont-init.d] done. >Aug 20 17:13:25 pi proxy-internal[7035]: [services.d] starting services >Aug 20 17:13:25 pi proxy[6994]: [services.d] starting services >Aug 20 17:13:26 pi podman[7002]: 2022-08-20 17:13:26.01445865 +0000 UTC m=+10.219723389 container init 8a066e4f9d572d346f082f84e0576276928a2967a42311ff0a0432e4b58ebc31 (image=docker.io/library/postgres:11, name=gitea-postgres, io.containers.autoupdate=registry, PODMAN_SYSTEMD_UNIT=container-gitea-postgres.service) >Aug 20 17:13:26 pi systemd[1]: Started container-gitea-postgres.service - Podman container-gitea-postgres.service. >Aug 20 17:13:26 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=container-gitea-postgres comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:13:26 pi proxy[6994]: [services.d] done. >Aug 20 17:13:26 pi proxy-internal[7035]: [services.d] done. >Aug 20 17:13:26 pi systemd[1]: Starting container-gitea-app.service - Podman container-gitea-app.service... 
>Aug 20 17:13:26 pi podman[7876]: 2022-08-20 17:13:26.112891423 +0000 UTC m=+4.428056418 container remove 795a01a9042ccf55388bfab87c9088d9e7689f602eb754e32c525b77d49ac752 (image=quay.io/oauth2-proxy/oauth2-proxy:latest, name=oauth2-proxy, PODMAN_SYSTEMD_UNIT=container-oauth2-proxy.service) >Aug 20 17:13:26 pi systemd[1]: container-oauth2-proxy.service: Main process exited, code=exited, status=1/FAILURE >Aug 20 17:13:26 pi podman[7002]: 2022-08-20 17:13:26.206539083 +0000 UTC m=+10.411803563 container start 8a066e4f9d572d346f082f84e0576276928a2967a42311ff0a0432e4b58ebc31 (image=docker.io/library/postgres:11, name=gitea-postgres, io.containers.autoupdate=registry, PODMAN_SYSTEMD_UNIT=container-gitea-postgres.service) >Aug 20 17:13:26 pi podman[7002]: 8a066e4f9d572d346f082f84e0576276928a2967a42311ff0a0432e4b58ebc31 >Aug 20 17:13:26 pi audit[8309]: USER_AUTH pid=8309 uid=1000 auid=1000 ses=1 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:authentication grantors=pam_usertype,pam_localuser,pam_unix acct="pi" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success' >Aug 20 17:13:26 pi audit[8309]: USER_ACCT pid=8309 uid=1000 auid=1000 ses=1 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:accounting grantors=pam_unix,pam_localuser acct="pi" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success' >Aug 20 17:13:26 pi audit[8309]: USER_CMD pid=8309 uid=1000 auid=1000 ses=1 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='cwd="/var/home/pi" cmd=72706D2D6F737472656520737461747573 exe="/usr/bin/sudo" terminal=pts/0 res=success' >Aug 20 17:13:26 pi sudo[8309]: pi : TTY=pts/0 ; PWD=/var/home/pi ; USER=root ; COMMAND=/usr/bin/rpm-ostree status >Aug 20 17:13:26 pi audit[8309]: CRED_REFR pid=8309 uid=1000 auid=1000 ses=1 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:setcred grantors=pam_localuser,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? 
terminal=/dev/pts/0 res=success' >Aug 20 17:13:26 pi sudo[8309]: pam_unix(sudo:session): session opened for user root(uid=0) by pi(uid=1000) >Aug 20 17:13:26 pi audit[8309]: USER_START pid=8309 uid=1000 auid=1000 ses=1 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:session_open grantors=pam_keyinit,pam_limits,pam_keyinit,pam_limits,pam_systemd,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success' >Aug 20 17:13:26 pi systemd[1]: container-oauth2-proxy.service: Failed with result 'exit-code'. >Aug 20 17:13:26 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=container-oauth2-proxy comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' >Aug 20 17:13:26 pi systemd[1]: container-oauth2-proxy.service: Consumed 3.788s CPU time. >Aug 20 17:13:26 pi systemd[1]: container-oauth2-proxy.service: Scheduled restart job, restart counter is at 1. >Aug 20 17:13:26 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=container-oauth2-proxy comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:13:26 pi systemd[1]: Stopped container-oauth2-proxy.service - Podman container-oauth2-proxy.service. >Aug 20 17:13:26 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=container-oauth2-proxy comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:13:26 pi systemd[1]: container-oauth2-proxy.service: Consumed 3.788s CPU time. >Aug 20 17:13:26 pi podman[8991]: 2022-08-20 17:13:26.386908149 +0000 UTC m=+0.237787397 image pull docker.io/gitea/gitea:latest >Aug 20 17:13:26 pi systemd[1]: Starting container-oauth2-proxy.service - Podman container-oauth2-proxy.service... 
>Aug 20 17:13:26 pi pihole[6958]: >Aug 20 17:13:26 pi pihole[6958]: [i] Installing configs from /etc/.pihole... >Aug 20 17:13:26 pi pihole[6958]: [i] Existing dnsmasq.conf found... it is not a Pi-hole file, leaving alone! >Aug 20 17:13:27 pi pihole[6958]: [101B blob data] >Aug 20 17:13:27 pi systemd[1]: Starting rpm-ostreed.service - rpm-ostree System Management Daemon... >Aug 20 17:13:27 pi podman[8991]: >Aug 20 17:13:27 pi podman[8991]: 2022-08-20 17:13:27.729380537 +0000 UTC m=+1.580259785 container create c30bcd67f13ae78a7fb25156206a81106bfcf717f184f802609bcbcb5f673c91 (image=docker.io/gitea/gitea:latest, name=gitea-app, maintainer=maintainers@gitea.io, org.opencontainers.image.created=2022-08-18T20:10:51Z, org.opencontainers.image.revision=68cceb5321fac936147e8038c3ad26462de47b7d, org.opencontainers.image.source=https://github.com/go-gitea/gitea.git, org.opencontainers.image.url=https://github.com/go-gitea/gitea, io.containers.autoupdate=registry, PODMAN_SYSTEMD_UNIT=container-gitea-app.service) >Aug 20 17:13:27 pi podman[9018]: 2022-08-20 17:13:27.541981681 +0000 UTC m=+0.779321589 image pull quay.io/oauth2-proxy/oauth2-proxy >Aug 20 17:13:27 pi rpm-ostree[9050]: Reading config file '/etc/rpm-ostreed.conf' >Aug 20 17:13:27 pi pihole[6958]: [110B blob data] >Aug 20 17:13:27 pi nextcloud-postgres[7088]: 2022-08-20 17:13:27.819 UTC [1] LOG: starting PostgreSQL 13.8 (Debian 13.8-1.pgdg110+1) on aarch64-unknown-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit >Aug 20 17:13:27 pi nextcloud-postgres[7088]: 2022-08-20 17:13:27.870 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432 >Aug 20 17:13:27 pi nextcloud-postgres[7088]: 2022-08-20 17:13:27.870 UTC [1] LOG: listening on IPv6 address "::", port 5432 >Aug 20 17:13:27 pi nextcloud-postgres[7088]: 2022-08-20 17:13:27.889 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" >Aug 20 17:13:27 pi systemd[1]: Started 
libpod-9c411440b0d38e0731c51dfc13ac5f126dc4a7bd580e7efb56feac08083646e4.scope - libcrun container. >Aug 20 17:13:27 pi audit: BPF prog-id=89 op=LOAD >Aug 20 17:13:28 pi nextcloud-postgres[7088]: 2022-08-20 17:13:28.051 UTC [23] LOG: database system was shut down at 2022-08-20 17:10:45 UTC >Aug 20 17:13:28 pi proxy[6994]: ❯ Enabling IPV6 in hosts: /etc/nginx/conf.d >Aug 20 17:13:28 pi proxy[6994]: ❯ /etc/nginx/conf.d/include/block-exploits.conf >Aug 20 17:13:28 pi proxy-internal[7035]: ❯ Enabling IPV6 in hosts: /etc/nginx/conf.d >Aug 20 17:13:28 pi proxy-internal[7035]: ❯ /etc/nginx/conf.d/include/block-exploits.conf >Aug 20 17:13:28 pi vaultwarden-server[7012]: /--------------------------------------------------------------------\ >Aug 20 17:13:28 pi vaultwarden-server[7012]: | Starting Vaultwarden | >Aug 20 17:13:28 pi vaultwarden-server[7012]: | Version 1.25.2 | >Aug 20 17:13:28 pi vaultwarden-server[7012]: |--------------------------------------------------------------------| >Aug 20 17:13:28 pi vaultwarden-server[7012]: | This is an *unofficial* Bitwarden implementation, DO NOT use the | >Aug 20 17:13:28 pi vaultwarden-server[7012]: | official channels to report bugs/features, regardless of client. | >Aug 20 17:13:28 pi vaultwarden-server[7012]: | Send usage/configuration questions or feature requests to: | >Aug 20 17:13:28 pi vaultwarden-server[7012]: | https://vaultwarden.discourse.group/ | >Aug 20 17:13:28 pi vaultwarden-server[7012]: | Report suspected bugs/issues in the software itself at: | >Aug 20 17:13:28 pi vaultwarden-server[7012]: | https://github.com/dani-garcia/vaultwarden/issues/new | >Aug 20 17:13:28 pi vaultwarden-server[7012]: \--------------------------------------------------------------------/ >Aug 20 17:13:28 pi vaultwarden-server[7012]: >Aug 20 17:13:28 pi vaultwarden-server[7012]: [INFO] No .env file found. 
>Aug 20 17:13:28 pi vaultwarden-server[7012]: >Aug 20 17:13:28 pi podman[9018]: >Aug 20 17:13:28 pi proxy[6994]: ❯ /etc/nginx/conf.d/include/assets.conf >Aug 20 17:13:28 pi proxy-internal[7035]: ❯ /etc/nginx/conf.d/include/assets.conf >Aug 20 17:13:28 pi proxy[6994]: ❯ /etc/nginx/conf.d/include/force-ssl.conf >Aug 20 17:13:28 pi proxy-internal[7035]: ❯ /etc/nginx/conf.d/include/force-ssl.conf >Aug 20 17:13:28 pi proxy[6994]: ❯ /etc/nginx/conf.d/include/letsencrypt-acme-challenge.conf >Aug 20 17:13:28 pi proxy-internal[7035]: ❯ /etc/nginx/conf.d/include/letsencrypt-acme-challenge.conf >Aug 20 17:13:28 pi proxy[6994]: ❯ /etc/nginx/conf.d/include/ssl-ciphers.conf >Aug 20 17:13:28 pi proxy-internal[7035]: ❯ /etc/nginx/conf.d/include/ssl-ciphers.conf >Aug 20 17:13:28 pi podman[9018]: 2022-08-20 17:13:28.662101108 +0000 UTC m=+1.899441072 container create a0beee1966d3997daee759068459b728714895ebe340a168878953b846d1a78c (image=quay.io/oauth2-proxy/oauth2-proxy:latest, name=oauth2-proxy, PODMAN_SYSTEMD_UNIT=container-oauth2-proxy.service) >Aug 20 17:13:28 pi proxy[6994]: ❯ /etc/nginx/conf.d/include/ip_ranges.conf >Aug 20 17:13:28 pi proxy-internal[7035]: ❯ /etc/nginx/conf.d/include/ip_ranges.conf >Aug 20 17:13:28 pi NetworkManager[717]: <info> [1661015608.7120] manager: (veth21ef9e09): new Veth device (/org/freedesktop/NetworkManager/Devices/14) >Aug 20 17:13:28 pi audit: ANOM_PROMISCUOUS dev=veth21ef9e09 prom=256 old_prom=0 auid=4294967295 uid=0 gid=0 ses=4294967295 >Aug 20 17:13:28 pi kernel: cni-podman0: port 3(veth21ef9e09) entered blocking state >Aug 20 17:13:28 pi kernel: cni-podman0: port 3(veth21ef9e09) entered disabled state >Aug 20 17:13:28 pi kernel: device veth21ef9e09 entered promiscuous mode >Aug 20 17:13:28 pi systemd-udevd[9414]: Using default interface naming scheme 'v250'. 
>Aug 20 17:13:28 pi kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready >Aug 20 17:13:28 pi kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth21ef9e09: link becomes ready >Aug 20 17:13:28 pi kernel: cni-podman0: port 3(veth21ef9e09) entered blocking state >Aug 20 17:13:28 pi kernel: cni-podman0: port 3(veth21ef9e09) entered forwarding state >Aug 20 17:13:28 pi NetworkManager[717]: <info> [1661015608.7375] device (veth21ef9e09): carrier: link connected >Aug 20 17:13:28 pi audit[9431]: NETFILTER_CFG table=nat:124 family=2 entries=1 op=nft_register_chain pid=9431 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:28 pi audit[9433]: NETFILTER_CFG table=nat:125 family=2 entries=1 op=nft_register_rule pid=9433 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:28 pi audit[9435]: NETFILTER_CFG table=nat:126 family=2 entries=1 op=nft_register_rule pid=9435 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:28 pi audit[9437]: NETFILTER_CFG table=nat:127 family=2 entries=1 op=nft_register_rule pid=9437 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:28 pi proxy[6994]: ❯ /etc/nginx/conf.d/include/proxy.conf >Aug 20 17:13:28 pi proxy-internal[7035]: ❯ /etc/nginx/conf.d/include/proxy.conf >Aug 20 17:13:28 pi proxy-internal[7035]: ❯ /etc/nginx/conf.d/include/resolvers.conf >Aug 20 17:13:28 pi proxy[6994]: ❯ /etc/nginx/conf.d/include/resolvers.conf >Aug 20 17:13:28 pi proxy-internal[7035]: ❯ /etc/nginx/conf.d/production.conf >Aug 20 17:13:28 pi proxy[6994]: ❯ /etc/nginx/conf.d/production.conf >Aug 20 17:13:28 pi audit[9461]: NETFILTER_CFG table=nat:128 family=2 entries=1 op=nft_register_chain pid=9461 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:28 pi audit[9463]: NETFILTER_CFG table=nat:129 family=2 entries=1 op=nft_register_rule pid=9463 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:28 pi audit[9465]: NETFILTER_CFG table=nat:130 family=2 entries=1 op=nft_register_rule 
pid=9465 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:29 pi proxy-internal[7035]: ❯ /etc/nginx/conf.d/default.conf >Aug 20 17:13:29 pi proxy[6994]: ❯ /etc/nginx/conf.d/default.conf >Aug 20 17:13:29 pi audit[9470]: NETFILTER_CFG table=nat:131 family=2 entries=1 op=nft_register_rule pid=9470 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:29 pi proxy-internal[7035]: ❯ Enabling IPV6 in hosts: /data/nginx >Aug 20 17:13:29 pi proxy-internal[7035]: ❯ /data/nginx/proxy_host/1.conf >Aug 20 17:13:29 pi proxy[6994]: ❯ Enabling IPV6 in hosts: /data/nginx >Aug 20 17:13:29 pi proxy[6994]: ❯ /data/nginx/redirection_host/1.conf >Aug 20 17:13:29 pi audit[9482]: NETFILTER_CFG table=nat:132 family=2 entries=1 op=nft_register_rule pid=9482 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:29 pi firewalld[709]: WARNING: ZONE_ALREADY_SET: '10.88.0.19/32' already bound to 'trusted' >Aug 20 17:13:29 pi podman[6981]: 2022-08-20 17:13:29.170882339 +0000 UTC m=+13.660817980 container exec_died f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, execID=3658185ab136f23176f4d1ed2a726bf6499c1ff289161af06f6c5c7543b205ce) >Aug 20 17:13:29 pi proxy[6994]: ❯ /data/nginx/proxy_host/1.conf >Aug 20 17:13:29 pi proxy-internal[7035]: ❯ /data/nginx/proxy_host/4.conf >Aug 20 17:13:29 pi proxy-internal[7035]: ❯ /data/nginx/proxy_host/2.conf >Aug 20 17:13:29 pi proxy-internal[7035]: ❯ /data/nginx/proxy_host/3.conf >Aug 20 17:13:29 pi nextcloud-postgres[7088]: 2022-08-20 17:13:29.207 UTC [1] LOG: database system is ready to accept connections >Aug 20 17:13:29 pi podman[6981]: unhealthy >Aug 20 17:13:29 pi systemd[1]: Started libpod-a0beee1966d3997daee759068459b728714895ebe340a168878953b846d1a78c.scope - libcrun container. 
>Aug 20 17:13:29 pi audit: BPF prog-id=90 op=LOAD >Aug 20 17:13:29 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' >Aug 20 17:13:29 pi systemd[1]: f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service: Main process exited, code=exited, status=1/FAILURE >Aug 20 17:13:29 pi systemd[1]: f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service: Failed with result 'exit-code'. >Aug 20 17:13:29 pi podman[9018]: 2022-08-20 17:13:29.449201498 +0000 UTC m=+2.686541406 container init a0beee1966d3997daee759068459b728714895ebe340a168878953b846d1a78c (image=quay.io/oauth2-proxy/oauth2-proxy:latest, name=oauth2-proxy, PODMAN_SYSTEMD_UNIT=container-oauth2-proxy.service) >Aug 20 17:13:29 pi systemd[1]: Started container-oauth2-proxy.service - Podman container-oauth2-proxy.service. >Aug 20 17:13:29 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=container-oauth2-proxy comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:13:29 pi oauth2-proxy[9511]: [2022/08/20 17:13:29] [provider.go:55] Performing OIDC Discovery... 
>Aug 20 17:13:29 pi oauth2-proxy[9511]: [2022/08/20 17:13:29] [main.go:60] ERROR: Failed to initialise OAuth2 Proxy: error intiailising provider: could not create provider data: error building OIDC ProviderVerifier: could not get verifier builder: error while discovery OIDC configuration: failed to discover OIDC configuration: error performing request: Get "https://auth.vanoverloop.xyz/realms/master/.well-known/openid-configuration": dial tcp 10.0.3.10:443: connect: connection refused >Aug 20 17:13:29 pi systemd[1]: libpod-a0beee1966d3997daee759068459b728714895ebe340a168878953b846d1a78c.scope: Deactivated successfully. >Aug 20 17:13:29 pi podman[9018]: 2022-08-20 17:13:29.596184442 +0000 UTC m=+2.833524369 container start a0beee1966d3997daee759068459b728714895ebe340a168878953b846d1a78c (image=quay.io/oauth2-proxy/oauth2-proxy:latest, name=oauth2-proxy, PODMAN_SYSTEMD_UNIT=container-oauth2-proxy.service) >Aug 20 17:13:29 pi podman[9018]: a0beee1966d3997daee759068459b728714895ebe340a168878953b846d1a78c >Aug 20 17:13:29 pi audit: BPF prog-id=0 op=UNLOAD >Aug 20 17:13:29 pi proxy[6994]: ❯ /data/nginx/proxy_host/7.conf >Aug 20 17:13:29 pi proxy[6994]: ❯ /data/nginx/proxy_host/8.conf >Aug 20 17:13:29 pi proxy[6994]: ❯ /data/nginx/proxy_host/10.conf >Aug 20 17:13:29 pi proxy[6994]: ❯ /data/nginx/proxy_host/9.conf >Aug 20 17:13:29 pi podman[9529]: 2022-08-20 17:13:29.737724504 +0000 UTC m=+0.164143640 container died a0beee1966d3997daee759068459b728714895ebe340a168878953b846d1a78c (image=quay.io/oauth2-proxy/oauth2-proxy:latest, name=oauth2-proxy) >Aug 20 17:13:29 pi proxy[6994]: ❯ /data/nginx/proxy_host/4.conf >Aug 20 17:13:29 pi proxy[6994]: ❯ /data/nginx/proxy_host/2.conf >Aug 20 17:13:29 pi proxy[6994]: ❯ /data/nginx/proxy_host/13.conf >Aug 20 17:13:29 pi gitea-postgres[8895]: >Aug 20 17:13:29 pi gitea-postgres[8895]: PostgreSQL Database directory appears to contain a database; Skipping initialization >Aug 20 17:13:29 pi gitea-postgres[8895]: >Aug 20 17:13:30 pi proxy[6994]: 
❯ /data/nginx/proxy_host/3.conf >Aug 20 17:13:30 pi proxy[6994]: ❯ /data/nginx/proxy_host/14.conf >Aug 20 17:13:30 pi proxy[6994]: ❯ /data/nginx/proxy_host/5.conf >Aug 20 17:13:30 pi audit[709]: NETFILTER_CFG table=firewalld:133 family=1 entries=6 op=nft_unregister_rule pid=709 subj=system_u:system_r:firewalld_t:s0 comm="firewalld" >Aug 20 17:13:30 pi audit[9926]: NETFILTER_CFG table=nat:134 family=2 entries=3 op=nft_unregister_rule pid=9926 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:30 pi audit[9928]: NETFILTER_CFG table=nat:135 family=2 entries=1 op=nft_unregister_rule pid=9928 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:30 pi audit[9929]: NETFILTER_CFG table=nat:136 family=2 entries=1 op=nft_unregister_chain pid=9929 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:30 pi audit[9930]: NETFILTER_CFG table=nat:137 family=2 entries=1 op=nft_register_chain pid=9930 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:30 pi audit[9932]: NETFILTER_CFG table=nat:138 family=2 entries=1 op=nft_unregister_chain pid=9932 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:30 pi audit[9933]: NETFILTER_CFG table=nat:139 family=10 entries=1 op=nft_register_chain pid=9933 subj=system_u:system_r:iptables_t:s0 comm="ip6tables" >Aug 20 17:13:30 pi audit[9936]: NETFILTER_CFG table=nat:140 family=10 entries=1 op=nft_unregister_chain pid=9936 subj=system_u:system_r:iptables_t:s0 comm="ip6tables" >Aug 20 17:13:30 pi audit[9937]: NETFILTER_CFG table=nat:141 family=10 entries=1 op=nft_register_chain pid=9937 subj=system_u:system_r:iptables_t:s0 comm="ip6tables" >Aug 20 17:13:30 pi audit[9939]: NETFILTER_CFG table=nat:142 family=10 entries=1 op=nft_unregister_chain pid=9939 subj=system_u:system_r:iptables_t:s0 comm="ip6tables" >Aug 20 17:13:30 pi audit: ANOM_PROMISCUOUS dev=veth21ef9e09 prom=0 old_prom=256 auid=4294967295 uid=0 gid=0 ses=4294967295 >Aug 20 17:13:30 pi kernel: cni-podman0: port 
3(veth21ef9e09) entered disabled state >Aug 20 17:13:30 pi kernel: device veth21ef9e09 left promiscuous mode >Aug 20 17:13:30 pi kernel: cni-podman0: port 3(veth21ef9e09) entered disabled state >Aug 20 17:13:30 pi NetworkManager[717]: <info> [1661015610.7075] device (veth21ef9e09): released from master device cni-podman0 >Aug 20 17:13:30 pi audit[9957]: NETFILTER_CFG table=nat:143 family=2 entries=1 op=nft_unregister_rule pid=9957 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:30 pi audit[9962]: NETFILTER_CFG table=nat:144 family=2 entries=2 op=nft_unregister_rule pid=9962 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:30 pi audit[9963]: NETFILTER_CFG table=nat:145 family=2 entries=1 op=nft_unregister_chain pid=9963 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:30 pi systemd[1]: run-netns-netns\x2d16532ef5\x2d7ba1\x2dd8b2\x2d788b\x2dcf1caf191dc0.mount: Deactivated successfully. >Aug 20 17:13:30 pi systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a0beee1966d3997daee759068459b728714895ebe340a168878953b846d1a78c-userdata-shm.mount: Deactivated successfully. >Aug 20 17:13:30 pi systemd[1]: var-lib-containers-storage-overlay-f21527e2d8d3b31626657d200d2f862adaa989671163a3093fb1f8ed7f3a0289-merged.mount: Deactivated successfully. >Aug 20 17:13:31 pi podman[9529]: 2022-08-20 17:13:31.722729772 +0000 UTC m=+2.149148908 container remove a0beee1966d3997daee759068459b728714895ebe340a168878953b846d1a78c (image=quay.io/oauth2-proxy/oauth2-proxy:latest, name=oauth2-proxy, PODMAN_SYSTEMD_UNIT=container-oauth2-proxy.service) >Aug 20 17:13:31 pi systemd[1]: container-oauth2-proxy.service: Main process exited, code=exited, status=1/FAILURE >Aug 20 17:13:32 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' >Aug 20 17:13:32 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:13:32 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 5. >Aug 20 17:13:32 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:13:32 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon... >Aug 20 17:13:32 pi systemd[1]: container-oauth2-proxy.service: Failed with result 'exit-code'. >Aug 20 17:13:32 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=container-oauth2-proxy comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' >Aug 20 17:13:32 pi systemd[1]: container-oauth2-proxy.service: Consumed 4.170s CPU time. >Aug 20 17:13:32 pi dbus-parsec[10039]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist))) >Aug 20 17:13:32 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE >Aug 20 17:13:32 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'. >Aug 20 17:13:32 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:13:32 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' >Aug 20 17:13:32 pi systemd[1]: container-oauth2-proxy.service: Scheduled restart job, restart counter is at 2. >Aug 20 17:13:32 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=container-oauth2-proxy comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' >Aug 20 17:13:32 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=container-oauth2-proxy comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:13:32 pi systemd[1]: Stopped container-oauth2-proxy.service - Podman container-oauth2-proxy.service. >Aug 20 17:13:32 pi systemd[1]: container-oauth2-proxy.service: Consumed 4.170s CPU time. >Aug 20 17:13:32 pi systemd[1]: Starting container-oauth2-proxy.service - Podman container-oauth2-proxy.service... >Aug 20 17:13:32 pi vaultwarden-server[7012]: [2022-08-20 19:13:32.495][start][INFO] Rocket has launched from http://0.0.0.0:80 >Aug 20 17:13:32 pi podman[10056]: 2022-08-20 17:13:32.721198523 +0000 UTC m=+0.400723161 image pull quay.io/oauth2-proxy/oauth2-proxy >Aug 20 17:13:36 pi podman[10056]: >Aug 20 17:13:36 pi podman[10056]: 2022-08-20 17:13:36.154485369 +0000 UTC m=+3.834009913 container create 6e7660648be5cb6fe720bc98f5e6eb40c2cd05fc3bd5e02fc823f03fc03f8dfd (image=quay.io/oauth2-proxy/oauth2-proxy:latest, name=oauth2-proxy, PODMAN_SYSTEMD_UNIT=container-oauth2-proxy.service) >Aug 20 17:13:36 pi kernel: cni-podman0: port 3(vethd1879dfb) entered blocking state >Aug 20 17:13:36 pi kernel: cni-podman0: port 3(vethd1879dfb) entered disabled state >Aug 20 17:13:36 pi NetworkManager[717]: <info> [1661015616.1979] manager: (vethd1879dfb): new Veth device (/org/freedesktop/NetworkManager/Devices/15) >Aug 20 17:13:36 pi audit: ANOM_PROMISCUOUS dev=vethd1879dfb prom=256 old_prom=0 auid=4294967295 uid=0 gid=0 ses=4294967295 >Aug 20 17:13:36 pi systemd-udevd[10447]: Using default interface naming scheme 'v250'. 
>Aug 20 17:13:36 pi kernel: device vethd1879dfb entered promiscuous mode >Aug 20 17:13:36 pi kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready >Aug 20 17:13:36 pi kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethd1879dfb: link becomes ready >Aug 20 17:13:36 pi kernel: cni-podman0: port 3(vethd1879dfb) entered blocking state >Aug 20 17:13:36 pi kernel: cni-podman0: port 3(vethd1879dfb) entered forwarding state >Aug 20 17:13:36 pi NetworkManager[717]: <info> [1661015616.2241] device (vethd1879dfb): carrier: link connected >Aug 20 17:13:36 pi audit[10467]: NETFILTER_CFG table=nat:146 family=2 entries=1 op=nft_register_chain pid=10467 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:36 pi audit[10469]: NETFILTER_CFG table=nat:147 family=2 entries=1 op=nft_register_rule pid=10469 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:36 pi audit[10474]: NETFILTER_CFG table=nat:148 family=2 entries=1 op=nft_register_rule pid=10474 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:36 pi audit[10476]: NETFILTER_CFG table=nat:149 family=2 entries=1 op=nft_register_rule pid=10476 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:36 pi audit[10505]: NETFILTER_CFG table=nat:150 family=2 entries=1 op=nft_register_chain pid=10505 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:36 pi audit[10507]: NETFILTER_CFG table=nat:151 family=2 entries=1 op=nft_register_rule pid=10507 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:36 pi audit[10511]: NETFILTER_CFG table=nat:152 family=2 entries=1 op=nft_register_rule pid=10511 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:36 pi audit[10515]: NETFILTER_CFG table=nat:153 family=2 entries=1 op=nft_register_rule pid=10515 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:36 pi systemd[1]: Started libpod-281971e998e590f8841e3b6d7df9ac3a82ada94e3b044cd5b5d71242d226747e.scope - libcrun container. 
>Aug 20 17:13:36 pi audit: BPF prog-id=91 op=LOAD >Aug 20 17:13:36 pi audit[10518]: NETFILTER_CFG table=nat:154 family=2 entries=1 op=nft_register_rule pid=10518 subj=system_u:system_r:iptables_t:s0 comm="iptables" >Aug 20 17:13:36 pi firewalld[709]: WARNING: ZONE_ALREADY_SET: '10.88.0.20/32' already bound to 'trusted' >Aug 20 17:13:36 pi systemd[1]: Started libpod-6e7660648be5cb6fe720bc98f5e6eb40c2cd05fc3bd5e02fc823f03fc03f8dfd.scope - libcrun container. >Aug 20 17:13:36 pi audit: BPF prog-id=92 op=LOAD >Aug 20 17:13:37 pi podman[6885]: 2022-08-20 17:13:37.020127534 +0000 UTC m=+22.716321180 container init 281971e998e590f8841e3b6d7df9ac3a82ada94e3b044cd5b5d71242d226747e (image=docker.io/library/php:fpm-alpine, name=php-fpm, io.containers.autoupdate=registry, PODMAN_SYSTEMD_UNIT=container-php-fpm.service) >Aug 20 17:13:37 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=pmlogger_check comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:13:37 pi systemd[1]: pmlogger_check.service: Deactivated successfully. >Aug 20 17:13:37 pi systemd[1]: pmlogger_check.service: Consumed 4.545s CPU time. >Aug 20 17:13:37 pi systemd[1]: Started container-php-fpm.service - Podman container-php-fpm.service. >Aug 20 17:13:37 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=container-php-fpm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' >Aug 20 17:13:37 pi podman[6885]: 2022-08-20 17:13:37.061591749 +0000 UTC m=+22.757785433 container start 281971e998e590f8841e3b6d7df9ac3a82ada94e3b044cd5b5d71242d226747e (image=docker.io/library/php:fpm-alpine, name=php-fpm, io.containers.autoupdate=registry, PODMAN_SYSTEMD_UNIT=container-php-fpm.service) >Aug 20 17:13:37 pi podman[6885]: 281971e998e590f8841e3b6d7df9ac3a82ada94e3b044cd5b5d71242d226747e >Aug 20 17:13:37 pi systemd[1]: Starting container-nginx-web.service - Podman container-nginx-web.service... >Aug 20 17:13:37 pi podman[10056]: 2022-08-20 17:13:37.131417716 +0000 UTC m=+4.810942261 container init 6e7660648be5cb6fe720bc98f5e6eb40c2cd05fc3bd5e02fc823f03fc03f8dfd (image=quay.io/oauth2-proxy/oauth2-proxy:latest, name=oauth2-proxy, PODMAN_SYSTEMD_UNIT=container-oauth2-proxy.service) >Aug 20 17:13:37 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=container-oauth2-proxy comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:13:37 pi systemd[1]: Started container-oauth2-proxy.service - Podman container-oauth2-proxy.service. >Aug 20 17:13:37 pi podman[10056]: 2022-08-20 17:13:37.175531938 +0000 UTC m=+4.855056446 container start 6e7660648be5cb6fe720bc98f5e6eb40c2cd05fc3bd5e02fc823f03fc03f8dfd (image=quay.io/oauth2-proxy/oauth2-proxy:latest, name=oauth2-proxy, PODMAN_SYSTEMD_UNIT=container-oauth2-proxy.service) >Aug 20 17:13:37 pi podman[10056]: 6e7660648be5cb6fe720bc98f5e6eb40c2cd05fc3bd5e02fc823f03fc03f8dfd >Aug 20 17:13:37 pi oauth2-proxy[10546]: [2022/08/20 17:13:37] [provider.go:55] Performing OIDC Discovery... 
>Aug 20 17:13:37 pi podman[10562]: 2022-08-20 17:13:37.281722438 +0000 UTC m=+0.140079857 image pull docker.io/nginx >Aug 20 17:13:37 pi podman[7679]: 2022-08-20 17:13:37.416499539 +0000 UTC m=+17.807039110 container init 9c411440b0d38e0731c51dfc13ac5f126dc4a7bd580e7efb56feac08083646e4 (image=docker.io/homeassistant/raspberrypi4-64-homeassistant:stable, name=hass-app, io.hass.machine=raspberrypi4-64, org.opencontainers.image.url=https://www.home-assistant.io/, io.containers.autoupdate=registry, PODMAN_SYSTEMD_UNIT=container-hass-app.service, org.opencontainers.image.licenses=Apache License 2.0, io.hass.base.name=python, org.opencontainers.image.description=Open-source home automation platform running on Python 3, io.hass.base.arch=aarch64, org.opencontainers.image.documentation=https://www.home-assistant.io/docs/, org.opencontainers.image.authors=The Home Assistant Authors, org.opencontainers.image.source=https://github.com/home-assistant/core, io.hass.type=core, org.opencontainers.image.created=2022-08-18 15:37:26+00:00, io.hass.base.image=homeassistant/aarch64-base:3.16, org.opencontainers.image.version=2022.8.6, io.hass.arch=aarch64, io.hass.version=2022.8.6, io.hass.base.version=2022.06.2, org.opencontainers.image.title=Home Assistant) >Aug 20 17:13:37 pi systemd[1]: Started container-hass-app.service - Podman container-hass-app.service. >Aug 20 17:13:37 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=container-hass-app comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' >Aug 20 17:13:37 pi podman[7679]: 2022-08-20 17:13:37.64562283 +0000 UTC m=+18.036162364 container start 9c411440b0d38e0731c51dfc13ac5f126dc4a7bd580e7efb56feac08083646e4 (image=docker.io/homeassistant/raspberrypi4-64-homeassistant:stable, name=hass-app, PODMAN_SYSTEMD_UNIT=container-hass-app.service, org.opencontainers.image.title=Home Assistant, io.hass.arch=aarch64, org.opencontainers.image.authors=The Home Assistant Authors, org.opencontainers.image.url=https://www.home-assistant.io/, org.opencontainers.image.description=Open-source home automation platform running on Python 3, io.hass.base.arch=aarch64, io.containers.autoupdate=registry, io.hass.base.image=homeassistant/aarch64-base:3.16, io.hass.version=2022.8.6, org.opencontainers.image.documentation=https://www.home-assistant.io/docs/, org.opencontainers.image.version=2022.8.6, org.opencontainers.image.created=2022-08-18 15:37:26+00:00, io.hass.machine=raspberrypi4-64, org.opencontainers.image.source=https://github.com/home-assistant/core, io.hass.base.name=python, io.hass.base.version=2022.06.2, org.opencontainers.image.licenses=Apache License 2.0, io.hass.type=core) >Aug 20 17:13:37 pi podman[7679]: 9c411440b0d38e0731c51dfc13ac5f126dc4a7bd580e7efb56feac08083646e4 >Aug 20 17:13:38 pi gitea-postgres[8895]: 2022-08-20 17:13:38.309 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432 >Aug 20 17:13:38 pi gitea-postgres[8895]: 2022-08-20 17:13:38.309 UTC [1] LOG: listening on IPv6 address "::", port 5432 >Aug 20 17:13:38 pi gitea-postgres[8895]: 2022-08-20 17:13:38.374 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" >Aug 20 17:13:39 pi oauth2-proxy[10546]: [2022/08/20 17:13:39] [providers.go:145] Warning: Your provider supports PKCE methods ["plain" "S256"], but you have not enabled one with --code-challenge-method >Aug 20 17:13:39 pi oauth2-proxy[10546]: [2022/08/20 17:13:39] [proxy.go:89] mapping path "/" => upstream "http://pi.lan:5180/" >Aug 20 17:13:39 pi 
oauth2-proxy[10546]: [2022/08/20 17:13:39] [oauthproxy.go:156] OAuthProxy configured for Keycloak OIDC Client ID: oauth2-proxy >Aug 20 17:13:39 pi oauth2-proxy[10546]: [2022/08/20 17:13:39] [oauthproxy.go:162] Cookie settings: name:_oauth2_proxy secure(https):true httponly:true expiry:168h0m0s domains:.vanoverloop.xyz path:/ samesite: refresh:disabled >Aug 20 17:13:39 pi gitea-postgres[8895]: 2022-08-20 17:13:39.450 UTC [21] LOG: database system was shut down at 2022-08-20 17:10:47 UTC >Aug 20 17:13:39 pi podman[10562]: >Aug 20 17:13:39 pi podman[10562]: 2022-08-20 17:13:39.67891031 +0000 UTC m=+2.537267729 container create 11868febf0fd32a94b834fdfdc0adcec5005c14c81e3a876d63235f54cef16c4 (image=docker.io/library/nginx:latest, name=nginx-web, PODMAN_SYSTEMD_UNIT=container-nginx-web.service, maintainer=NGINX Docker Maintainers <docker-maint@nginx.com>, io.containers.autoupdate=registry) >Aug 20 17:13:39 pi gitea-postgres[8895]: 2022-08-20 17:13:39.954 UTC [1] LOG: database system is ready to accept connections >Aug 20 17:13:39 pi systemd[1]: systemd-hostnamed.service: Deactivated successfully. >Aug 20 17:13:39 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:13:40 pi audit: BPF prog-id=0 op=UNLOAD >Aug 20 17:13:40 pi audit: BPF prog-id=0 op=UNLOAD >Aug 20 17:13:40 pi systemd[1]: Started libpod-11868febf0fd32a94b834fdfdc0adcec5005c14c81e3a876d63235f54cef16c4.scope - libcrun container. 
>Aug 20 17:13:40 pi audit: BPF prog-id=93 op=LOAD >Aug 20 17:13:40 pi podman[10562]: 2022-08-20 17:13:40.894084086 +0000 UTC m=+3.752441523 container init 11868febf0fd32a94b834fdfdc0adcec5005c14c81e3a876d63235f54cef16c4 (image=docker.io/library/nginx:latest, name=nginx-web, PODMAN_SYSTEMD_UNIT=container-nginx-web.service, maintainer=NGINX Docker Maintainers <docker-maint@nginx.com>, io.containers.autoupdate=registry) >Aug 20 17:13:40 pi systemd[1]: Started container-nginx-web.service - Podman container-nginx-web.service. >Aug 20 17:13:40 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=container-nginx-web comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:13:40 pi podman[10562]: 2022-08-20 17:13:40.921719494 +0000 UTC m=+3.780076895 container start 11868febf0fd32a94b834fdfdc0adcec5005c14c81e3a876d63235f54cef16c4 (image=docker.io/library/nginx:latest, name=nginx-web, PODMAN_SYSTEMD_UNIT=container-nginx-web.service, maintainer=NGINX Docker Maintainers <docker-maint@nginx.com>, io.containers.autoupdate=registry) >Aug 20 17:13:40 pi podman[10562]: 11868febf0fd32a94b834fdfdc0adcec5005c14c81e3a876d63235f54cef16c4 >Aug 20 17:13:41 pi pihole[6958]: Converting DNS1 to PIHOLE_DNS_ >Aug 20 17:13:41 pi pihole[6958]: Converting DNS2 to PIHOLE_DNS_ >Aug 20 17:13:41 pi pihole[6958]: Setting DNS servers based on PIHOLE_DNS_ variable >Aug 20 17:13:41 pi pihole[6958]: ::: Assigning password defined by Environment Variable >Aug 20 17:13:41 pi pihole[6958]: [✓] New password set >Aug 20 17:13:41 pi pihole[6958]: [✓] Set temperature unit to C >Aug 20 17:13:41 pi pihole[6958]: DNSMasq binding to default interface: eth0 >Aug 20 17:13:41 pi nginx-web[10710]: /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration >Aug 20 17:13:41 pi nginx-web[10710]: /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/ >Aug 20 17:13:41 
pi systemd[1]: Started rpm-ostreed.service - rpm-ostree System Management Daemon. >Aug 20 17:13:41 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=rpm-ostreed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:13:41 pi rpm-ostree[9050]: In idle state; will auto-exit in 61 seconds >Aug 20 17:13:41 pi rpm-ostree[9050]: client(id:cli dbus:1.82 unit:session-1.scope uid:0) added; new total=1 >Aug 20 17:13:42 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 6. >Aug 20 17:13:42 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:13:42 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:13:42 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:13:42 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon... >Aug 20 17:13:42 pi dbus-parsec[10855]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist))) >Aug 20 17:13:42 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE >Aug 20 17:13:42 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'. >Aug 20 17:13:42 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:13:42 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' >Aug 20 17:13:42 pi systemd[1]: Started libpod-c30bcd67f13ae78a7fb25156206a81106bfcf717f184f802609bcbcb5f673c91.scope - libcrun container. >Aug 20 17:13:42 pi audit: BPF prog-id=94 op=LOAD >Aug 20 17:13:42 pi podman[8991]: 2022-08-20 17:13:42.656727431 +0000 UTC m=+16.507606808 container init c30bcd67f13ae78a7fb25156206a81106bfcf717f184f802609bcbcb5f673c91 (image=docker.io/gitea/gitea:latest, name=gitea-app, org.opencontainers.image.source=https://github.com/go-gitea/gitea.git, org.opencontainers.image.url=https://github.com/go-gitea/gitea, io.containers.autoupdate=registry, PODMAN_SYSTEMD_UNIT=container-gitea-app.service, maintainer=maintainers@gitea.io, org.opencontainers.image.created=2022-08-18T20:10:51Z, org.opencontainers.image.revision=68cceb5321fac936147e8038c3ad26462de47b7d) >Aug 20 17:13:42 pi systemd[1]: Started container-gitea-app.service - Podman container-gitea-app.service. >Aug 20 17:13:42 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=container-gitea-app comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' >Aug 20 17:13:42 pi podman[8991]: 2022-08-20 17:13:42.698048462 +0000 UTC m=+16.548927728 container start c30bcd67f13ae78a7fb25156206a81106bfcf717f184f802609bcbcb5f673c91 (image=docker.io/gitea/gitea:latest, name=gitea-app, org.opencontainers.image.source=https://github.com/go-gitea/gitea.git, org.opencontainers.image.url=https://github.com/go-gitea/gitea, io.containers.autoupdate=registry, PODMAN_SYSTEMD_UNIT=container-gitea-app.service, maintainer=maintainers@gitea.io, org.opencontainers.image.created=2022-08-18T20:10:51Z, org.opencontainers.image.revision=68cceb5321fac936147e8038c3ad26462de47b7d) >Aug 20 17:13:42 pi podman[8991]: c30bcd67f13ae78a7fb25156206a81106bfcf717f184f802609bcbcb5f673c91 >Aug 20 17:13:43 pi rpm-ostree[9050]: client(id:cli dbus:1.82 unit:session-1.scope uid:0) vanished; remaining=0 >Aug 20 17:13:43 pi rpm-ostree[9050]: In idle state; will auto-exit in 61 seconds >Aug 20 17:13:43 pi nginx-web[10710]: /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh >Aug 20 17:13:43 pi sudo[8309]: pam_unix(sudo:session): session closed for user root >Aug 20 17:13:43 pi audit[8309]: USER_END pid=8309 uid=1000 auid=1000 ses=1 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:session_close grantors=pam_keyinit,pam_limits,pam_keyinit,pam_limits,pam_systemd,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success' >Aug 20 17:13:43 pi audit[8309]: CRED_DISP pid=8309 uid=1000 auid=1000 ses=1 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:setcred grantors=pam_localuser,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? 
terminal=/dev/pts/0 res=success' >Aug 20 17:13:43 pi pihole[6958]: Added ENV to php: >Aug 20 17:13:43 pi pihole[6958]: "TZ" => "Europe/Brussels", >Aug 20 17:13:43 pi pihole[6958]: "PIHOLE_DOCKER_TAG" => "2022.07.1", >Aug 20 17:13:43 pi pihole[6958]: "PHP_ERROR_LOG" => "/var/log/lighttpd/error-pihole.log", >Aug 20 17:13:43 pi pihole[6958]: "ServerIP" => "0.0.0.0", >Aug 20 17:13:43 pi pihole[6958]: "CORS_HOSTS" => "", >Aug 20 17:13:43 pi pihole[6958]: "VIRTUAL_HOST" => "0.0.0.0", >Aug 20 17:13:44 pi hass-postgres[6856]: 2022-08-20 17:13:44.102 UTC [1] LOG: starting PostgreSQL 14.5 (Debian 14.5-1.pgdg110+1) on aarch64-unknown-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit >Aug 20 17:13:44 pi hass-postgres[6856]: 2022-08-20 17:13:44.104 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432 >Aug 20 17:13:44 pi hass-postgres[6856]: 2022-08-20 17:13:44.104 UTC [1] LOG: listening on IPv6 address "::", port 5432 >Aug 20 17:13:44 pi hass-postgres[6856]: 2022-08-20 17:13:44.170 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" >Aug 20 17:13:44 pi hass-postgres[6856]: 2022-08-20 17:13:44.475 UTC [21] LOG: database system was shut down at 2022-08-20 17:10:46 UTC >Aug 20 17:13:44 pi pihole[6958]: Using IPv4 and IPv6 >Aug 20 17:13:44 pi pihole[6958]: ::: Preexisting ad list /etc/pihole/adlists.list detected ((exiting setup_blocklists early)) >Aug 20 17:13:44 pi pihole[6958]: https://raw.githubusercontent.com/StevenBlack/hosts/master/hosts >Aug 20 17:13:45 pi hass-postgres[6856]: 2022-08-20 17:13:45.227 UTC [1] LOG: database system is ready to accept connections >Aug 20 17:13:45 pi hass-app[9061]: s6-rc: info: service s6rc-oneshot-runner: starting >Aug 20 17:13:45 pi hass-app[9061]: s6-rc: info: service s6rc-oneshot-runner successfully started >Aug 20 17:13:45 pi hass-app[9061]: s6-rc: info: service fix-attrs: starting >Aug 20 17:13:45 pi hass-app[9061]: s6-rc: info: service fix-attrs successfully started >Aug 20 17:13:45 pi 
hass-app[9061]: s6-rc: info: service legacy-cont-init: starting >Aug 20 17:13:45 pi nginx-web[10710]: 10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf >Aug 20 17:13:45 pi hass-app[9061]: s6-rc: info: service legacy-cont-init successfully started >Aug 20 17:13:45 pi hass-app[9061]: s6-rc: info: service legacy-services: starting >Aug 20 17:13:46 pi php-fpm[10499]: [20-Aug-2022 19:13:46] NOTICE: fpm is running, pid 1 >Aug 20 17:13:46 pi php-fpm[10499]: [20-Aug-2022 19:13:46] NOTICE: ready to handle connections >Aug 20 17:13:46 pi hass-app[9061]: services-up: info: copying legacy longrun home-assistant (no readiness notification) >Aug 20 17:13:46 pi nginx-web[10710]: 10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf >Aug 20 17:13:46 pi nginx-web[10710]: /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh >Aug 20 17:13:46 pi nginx-web[10710]: /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh >Aug 20 17:13:46 pi nginx-web[10710]: /docker-entrypoint.sh: Configuration complete; ready for start up >Aug 20 17:13:46 pi pihole[6958]: ::: Testing lighttpd config: Syntax OK >Aug 20 17:13:46 pi pihole[6958]: ::: All config checks passed, cleared for startup ... >Aug 20 17:13:46 pi pihole[6958]: ::: Enabling Query Logging >Aug 20 17:13:47 pi pihole[6958]: [i] Enabling logging... 
>Aug 20 17:13:47 pi pihole[6958]: [38B blob data] >Aug 20 17:13:47 pi pihole[6958]: ::: Docker start setup complete >Aug 20 17:13:47 pi pihole[6958]: Checking if custom gravity.db is set in /etc/pihole/pihole-FTL.conf >Aug 20 17:13:47 pi nginx-web[10710]: 2022/08/20 17:13:47 [notice] 1#1: using the "epoll" event method >Aug 20 17:13:47 pi nginx-web[10710]: 2022/08/20 17:13:47 [notice] 1#1: nginx/1.23.1 >Aug 20 17:13:47 pi nginx-web[10710]: 2022/08/20 17:13:47 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6) >Aug 20 17:13:47 pi nginx-web[10710]: 2022/08/20 17:13:47 [notice] 1#1: OS: Linux 5.18.16-200.fc36.aarch64 >Aug 20 17:13:47 pi nginx-web[10710]: 2022/08/20 17:13:47 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576 >Aug 20 17:13:47 pi nginx-web[10710]: 2022/08/20 17:13:47 [notice] 1#1: start worker processes >Aug 20 17:13:47 pi nginx-web[10710]: 2022/08/20 17:13:47 [notice] 1#1: start worker process 26 >Aug 20 17:13:47 pi nginx-web[10710]: 2022/08/20 17:13:47 [notice] 1#1: start worker process 27 >Aug 20 17:13:47 pi nginx-web[10710]: 2022/08/20 17:13:47 [notice] 1#1: start worker process 28 >Aug 20 17:13:47 pi nginx-web[10710]: 2022/08/20 17:13:47 [notice] 1#1: start worker process 29 >Aug 20 17:13:47 pi pihole[6958]: Pi-hole version is v5.11.4 (Latest: v5.11.4) >Aug 20 17:13:47 pi pihole[6958]: AdminLTE version is v5.13 (Latest: v5.13) >Aug 20 17:13:47 pi pihole[6958]: FTL version is v5.16.1 (Latest: v5.16.2) >Aug 20 17:13:47 pi pihole[6958]: Container tag is: 2022.07.1 >Aug 20 17:13:47 pi pihole[6958]: [cont-init.d] 20-start.sh: exited 0. 
>Aug 20 17:13:47 pi proxy-internal[7035]: [8/20/2022] [5:13:47 PM] [Global ] › ℹ info No valid environment variables for database provided, using default SQLite file '/data/database.sqlite' >Aug 20 17:13:47 pi proxy[6994]: [8/20/2022] [5:13:47 PM] [Global ] › ℹ info No valid environment variables for database provided, using default SQLite file '/data/database.sqlite' >Aug 20 17:13:47 pi proxy[6994]: [8/20/2022] [5:13:47 PM] [Global ] › ℹ info Generating SQLite knex configuration >Aug 20 17:13:47 pi proxy-internal[7035]: [8/20/2022] [5:13:47 PM] [Global ] › ℹ info Generating SQLite knex configuration >Aug 20 17:13:47 pi pihole[6958]: [cont-init.d] done. >Aug 20 17:13:47 pi proxy-internal[7035]: [8/20/2022] [5:13:47 PM] [Global ] › ⬤ debug Wrote db configuration to config file: ./config/production.json >Aug 20 17:13:47 pi proxy[6994]: [8/20/2022] [5:13:47 PM] [Global ] › ⬤ debug Wrote db configuration to config file: ./config/production.json >Aug 20 17:13:47 pi pihole[6958]: [services.d] starting services >Aug 20 17:13:47 pi pihole[6958]: Starting pihole-FTL (no-daemon) as pihole >Aug 20 17:13:47 pi pihole[6958]: Starting crond >Aug 20 17:13:47 pi pihole[6958]: Starting lighttpd >Aug 20 17:13:47 pi pihole[6958]: [services.d] done. >Aug 20 17:13:48 pi hass-app[9061]: s6-rc: info: service legacy-services successfully started >Aug 20 17:13:48 pi audit[11158]: AVC avc: denied { read } for pid=11158 comm="pickup" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:postfix_pickup_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0 >Aug 20 17:13:48 pi audit[11158]: AVC avc: denied { read } for pid=11158 comm="pickup" name="localtime" dev="mmcblk0p3" ino=35137 scontext=system_u:system_r:postfix_pickup_t:s0 tcontext=system_u:object_r:container_file_t:s0 tclass=lnk_file permissive=0 >Aug 20 17:13:49 pi gitea-app[10857]: Server listening on :: port 22. >Aug 20 17:13:49 pi gitea-app[10857]: Server listening on 0.0.0.0 port 22. 
>Aug 20 17:13:50 pi systemd[1]: Started dbus-:1.2-org.fedoraproject.Setroubleshootd@1.service. >Aug 20 17:13:50 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-:1.2-org.fedoraproject.Setroubleshootd@1 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:13:52 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 7. >Aug 20 17:13:52 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:13:52 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:13:52 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:13:52 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon... >Aug 20 17:13:52 pi systemd[1]: Starting zezere_ignition.service - Run Ignition for Zezere... >Aug 20 17:13:52 pi dbus-parsec[11202]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist))) >Aug 20 17:13:52 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' >Aug 20 17:13:52 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE >Aug 20 17:13:52 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'. >Aug 20 17:13:52 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. 
>Aug 20 17:13:52 pi zezere-ignition[11205]: INFO : Ignition 2.14.0 >Aug 20 17:13:52 pi zezere-ignition[11205]: INFO : Stage: fetch >Aug 20 17:13:52 pi zezere-ignition[11205]: INFO : no config dir at "/usr/lib/ignition/base.d" >Aug 20 17:13:52 pi zezere-ignition[11205]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file" >Aug 20 17:13:52 pi zezere-ignition[11205]: DEBUG : parsed url from cmdline: "" >Aug 20 17:13:52 pi zezere-ignition[11205]: INFO : no config URL provided >Aug 20 17:13:52 pi zezere-ignition[11205]: INFO : reading system config file "/usr/lib/ignition/user.ign" >Aug 20 17:13:52 pi zezere-ignition[11205]: INFO : no config at "/usr/lib/ignition/user.ign" >Aug 20 17:13:52 pi zezere-ignition[11205]: INFO : using config file at "/tmp/zezere-ignition-config-jucp1714.ign" >Aug 20 17:13:52 pi zezere-ignition[11205]: DEBUG : parsing config with SHA512: 4202432df9c76a84fcaf0bb3dd81d9aec055ae366dee2bee1f88d5233bae27ac14ab0d1f9a3fbfa25e382806a76e601d9375479e30d495e751038695e77877a2 >Aug 20 17:13:52 pi zezere-ignition[11205]: INFO : GET https://provision.fedoraproject.org/netboot/aarch64/ignition/dc:a6:32:38:46:e7: attempt #1 >Aug 20 17:13:53 pi zezere-ignition[11205]: INFO : GET result: Not Found >Aug 20 17:13:53 pi zezere-ignition[11205]: WARNING : failed to fetch config: resource not found >Aug 20 17:13:53 pi zezere-ignition[11205]: CRITICAL : failed to acquire config: resource not found >Aug 20 17:13:53 pi zezere-ignition[11205]: CRITICAL : Ignition failed: resource not found >Aug 20 17:13:53 pi zezere-ignition[11218]: INFO : Ignition 2.14.0 >Aug 20 17:13:53 pi zezere-ignition[11218]: INFO : Stage: disks >Aug 20 17:13:53 pi zezere-ignition[11218]: INFO : no config dir at "/usr/lib/ignition/base.d" >Aug 20 17:13:53 pi zezere-ignition[11218]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file" >Aug 20 17:13:53 pi zezere-ignition[11218]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory >Aug 20 
17:13:53 pi zezere-ignition[11218]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory >Aug 20 17:13:53 pi zezere-ignition[11224]: INFO : Ignition 2.14.0 >Aug 20 17:13:53 pi zezere-ignition[11224]: INFO : Stage: mount >Aug 20 17:13:53 pi zezere-ignition[11224]: INFO : no config dir at "/usr/lib/ignition/base.d" >Aug 20 17:13:53 pi zezere-ignition[11224]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file" >Aug 20 17:13:53 pi zezere-ignition[11224]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory >Aug 20 17:13:53 pi zezere-ignition[11224]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory >Aug 20 17:13:53 pi zezere-ignition[11230]: INFO : Ignition 2.14.0 >Aug 20 17:13:53 pi zezere-ignition[11230]: INFO : Stage: files >Aug 20 17:13:53 pi zezere-ignition[11230]: INFO : no config dir at "/usr/lib/ignition/base.d" >Aug 20 17:13:53 pi zezere-ignition[11230]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file" >Aug 20 17:13:53 pi zezere-ignition[11230]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory >Aug 20 17:13:53 pi zezere-ignition[11230]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory >Aug 20 17:13:53 pi zezere-ignition[11237]: INFO : Ignition 2.14.0 >Aug 20 17:13:53 pi zezere-ignition[11237]: INFO : Stage: umount >Aug 20 17:13:53 pi zezere-ignition[11237]: INFO : no config dir at "/usr/lib/ignition/base.d" >Aug 20 17:13:53 pi zezere-ignition[11237]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file" >Aug 20 17:13:53 pi zezere-ignition[11237]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory >Aug 20 17:13:53 pi zezere-ignition[11237]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory >Aug 20 17:13:53 pi zezere-ignition[11203]: Running stage fetch with config file 
/tmp/zezere-ignition-config-jucp1714.ign >Aug 20 17:13:53 pi zezere-ignition[11203]: Running stage disks with config file /tmp/zezere-ignition-config-jucp1714.ign >Aug 20 17:13:53 pi zezere-ignition[11203]: Running stage mount with config file /tmp/zezere-ignition-config-jucp1714.ign >Aug 20 17:13:53 pi zezere-ignition[11203]: Running stage files with config file /tmp/zezere-ignition-config-jucp1714.ign >Aug 20 17:13:53 pi zezere-ignition[11203]: Running stage umount with config file /tmp/zezere-ignition-config-jucp1714.ign >Aug 20 17:13:53 pi systemd[1]: zezere_ignition.service: Deactivated successfully. >Aug 20 17:13:53 pi systemd[1]: Finished zezere_ignition.service - Run Ignition for Zezere. >Aug 20 17:13:53 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=zezere_ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:13:53 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=zezere_ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:13:55 pi systemd[1]: Started dbus-:1.2-org.fedoraproject.SetroubleshootPrivileged@1.service. >Aug 20 17:13:55 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-:1.2-org.fedoraproject.SetroubleshootPrivileged@1 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:13:57 pi systemd[1]: sysroot-tmp-crun.fzwE9T.mount: Deactivated successfully. >Aug 20 17:13:57 pi systemd[1]: Started libpod-958ee2e818e523891fb600e4aef33739375390eb9cbc8485a2949186f0aa167e.scope - libcrun container. 
>Aug 20 17:13:57 pi audit: BPF prog-id=95 op=LOAD >Aug 20 17:13:57 pi SetroubleshootPrivileged.py[11262]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception: >Aug 20 17:13:57 pi SetroubleshootPrivileged.py[11262]: Traceback (most recent call last): >Aug 20 17:13:57 pi SetroubleshootPrivileged.py[11262]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback >Aug 20 17:13:57 pi SetroubleshootPrivileged.py[11262]: result = self._handle_call( >Aug 20 17:13:57 pi SetroubleshootPrivileged.py[11262]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call >Aug 20 17:13:57 pi SetroubleshootPrivileged.py[11262]: return handler(*parameters, **additional_args) >Aug 20 17:13:57 pi SetroubleshootPrivileged.py[11262]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext >Aug 20 17:13:57 pi SetroubleshootPrivileged.py[11262]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext) >Aug 20 17:13:57 pi SetroubleshootPrivileged.py[11262]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext >Aug 20 17:13:57 pi SetroubleshootPrivileged.py[11262]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context))) >Aug 20 17:13:57 pi SetroubleshootPrivileged.py[11262]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type >Aug 20 17:13:57 pi SetroubleshootPrivileged.py[11262]: build_module_type_cache() >Aug 20 17:13:57 pi SetroubleshootPrivileged.py[11262]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache >Aug 20 17:13:57 pi SetroubleshootPrivileged.py[11262]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store: >Aug 20 17:13:57 pi SetroubleshootPrivileged.py[11262]: FileNotFoundError: [Errno 2] No such file or directory: 
'/var/lib/selinux/targeted/active/modules' >Aug 20 17:13:57 pi setroubleshoot[11166]: SELinux is preventing pickup from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l 7eeaa8a0-45a8-4b45-9291-fb2c448c2619 >Aug 20 17:13:57 pi setroubleshoot[11166]: SELinux is preventing pickup from read access on the lnk_file localtime. > > ***** Plugin catchall (100. confidence) suggests ************************** > > If you believe that pickup should be allowed read access on the localtime lnk_file by default. > Then you should report this as a bug. > You can generate a local policy module to allow this access. > Do > allow this access for now by executing: > # ausearch -c 'pickup' --raw | audit2allow -M my-pickup > # semodule -X 300 -i my-pickup.pp > >Aug 20 17:13:58 pi SetroubleshootPrivileged.py[11262]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception: >Aug 20 17:13:58 pi SetroubleshootPrivileged.py[11262]: Traceback (most recent call last): >Aug 20 17:13:58 pi SetroubleshootPrivileged.py[11262]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback >Aug 20 17:13:58 pi SetroubleshootPrivileged.py[11262]: result = self._handle_call( >Aug 20 17:13:58 pi SetroubleshootPrivileged.py[11262]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call >Aug 20 17:13:58 pi SetroubleshootPrivileged.py[11262]: return handler(*parameters, **additional_args) >Aug 20 17:13:58 pi SetroubleshootPrivileged.py[11262]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext >Aug 20 17:13:58 pi SetroubleshootPrivileged.py[11262]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext) >Aug 20 17:13:58 pi SetroubleshootPrivileged.py[11262]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext >Aug 20 17:13:58 pi 
SetroubleshootPrivileged.py[11262]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context))) >Aug 20 17:13:58 pi SetroubleshootPrivileged.py[11262]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type >Aug 20 17:13:58 pi SetroubleshootPrivileged.py[11262]: build_module_type_cache() >Aug 20 17:13:58 pi SetroubleshootPrivileged.py[11262]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache >Aug 20 17:13:58 pi SetroubleshootPrivileged.py[11262]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store: >Aug 20 17:13:58 pi SetroubleshootPrivileged.py[11262]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules' >Aug 20 17:13:57 pi setroubleshoot[11166]: SELinux is preventing pickup from read access on the lnk_file localtime. For complete SELinux messages run: sealert -l 7eeaa8a0-45a8-4b45-9291-fb2c448c2619 >Aug 20 17:13:57 pi setroubleshoot[11166]: SELinux is preventing pickup from read access on the lnk_file localtime. > > ***** Plugin catchall (100. confidence) suggests ************************** > > If you believe that pickup should be allowed read access on the localtime lnk_file by default. > Then you should report this as a bug. > You can generate a local policy module to allow this access. > Do > allow this access for now by executing: > # ausearch -c 'pickup' --raw | audit2allow -M my-pickup > # semodule -X 300 -i my-pickup.pp > >Aug 20 17:13:59 pi systemd[1]: Started f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service - /usr/bin/podman healthcheck run f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e. 
>Aug 20 17:13:59 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:14:01 pi chronyd[692]: Selected source 116.203.219.116 (2.fedora.pool.ntp.org) >Aug 20 17:14:02 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 8. >Aug 20 17:14:02 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:14:02 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:14:02 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' >Aug 20 17:14:02 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:14:03 pi dbus-parsec[11292]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist))) >Aug 20 17:14:02 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon... >Aug 20 17:14:02 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE >Aug 20 17:14:02 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'. >Aug 20 17:14:02 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. 
>Aug 20 17:14:03 pi podman[7553]: 2022-08-20 17:14:03.045455809 +0000 UTC m=+43.801308904 container init 958ee2e818e523891fb600e4aef33739375390eb9cbc8485a2949186f0aa167e (image=docker.io/library/nextcloud:fpm-alpine, name=nextcloud-fpm, PODMAN_SYSTEMD_UNIT=container-nextcloud-fpm.service, io.containers.autoupdate=registry) >Aug 20 17:14:03 pi systemd[1]: sysroot-tmp-crun.LJUudV.mount: Deactivated successfully. >Aug 20 17:14:03 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=container-nextcloud-fpm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:14:03 pi systemd[1]: Started container-nextcloud-fpm.service - Podman container-nextcloud-fpm.service. >Aug 20 17:14:03 pi systemd[1]: Starting container-nextcloud-nginx.service - Podman container-nextcloud-nginx.service... >Aug 20 17:14:06 pi podman[7553]: 2022-08-20 17:14:06.225750404 +0000 UTC m=+46.981603518 container start 958ee2e818e523891fb600e4aef33739375390eb9cbc8485a2949186f0aa167e (image=docker.io/library/nextcloud:fpm-alpine, name=nextcloud-fpm, PODMAN_SYSTEMD_UNIT=container-nextcloud-fpm.service, io.containers.autoupdate=registry) >Aug 20 17:14:06 pi podman[7553]: 958ee2e818e523891fb600e4aef33739375390eb9cbc8485a2949186f0aa167e >Aug 20 17:14:06 pi nextcloud-fpm[11272]: Configuring Redis as session handler >Aug 20 17:14:06 pi podman[11295]: 2022-08-20 17:14:06.27669897 +0000 UTC m=+3.086187364 image pull docker.io/nginx >Aug 20 17:14:07 pi systemd[1]: dbus-:1.2-org.fedoraproject.SetroubleshootPrivileged@1.service: Deactivated successfully. >Aug 20 17:14:07 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-:1.2-org.fedoraproject.SetroubleshootPrivileged@1 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' >Aug 20 17:14:07 pi systemd[1]: dbus-:1.2-org.fedoraproject.SetroubleshootPrivileged@1.service: Consumed 2.169s CPU time. >Aug 20 17:14:08 pi systemd[1]: dbus-:1.2-org.fedoraproject.Setroubleshootd@1.service: Deactivated successfully. >Aug 20 17:14:08 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-:1.2-org.fedoraproject.Setroubleshootd@1 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:14:08 pi systemd[1]: dbus-:1.2-org.fedoraproject.Setroubleshootd@1.service: Consumed 5.471s CPU time. >Aug 20 17:14:12 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 9. >Aug 20 17:14:12 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:14:12 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:14:12 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:14:12 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon... >Aug 20 17:14:13 pi dbus-parsec[11320]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist))) >Aug 20 17:14:13 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE >Aug 20 17:14:13 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'. >Aug 20 17:14:13 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:14:13 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' >Aug 20 17:14:14 pi podman[11284]: 2022-08-20 17:14:14.736492261 +0000 UTC m=+14.719976594 container exec f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, org.opencontainers.image.created=2022-07-10T13:08:55.626Z, org.opencontainers.image.description=Pi-hole in a docker container, org.opencontainers.image.title=docker-pi-hole, PODMAN_SYSTEMD_UNIT=container-pihole.service, io.containers.autoupdate=registry, org.opencontainers.image.licenses=, org.opencontainers.image.revision=7e69551be1b76d175fffc1b8c53733e74ee82520, org.opencontainers.image.url=https://github.com/pi-hole/docker-pi-hole, org.opencontainers.image.version=2022.07.1, org.opencontainers.image.source=https://github.com/pi-hole/docker-pi-hole) >Aug 20 17:14:14 pi podman[11284]: 2022-08-20 17:14:14.899926822 +0000 UTC m=+14.883411192 container exec_died f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, execID=3a5c44a4bd2bf62d8207b91b2fb8c8d56aaef5ec3124e2e4e8dc178231b3ccb0) >Aug 20 17:14:15 pi podman[11295]: >Aug 20 17:14:15 pi systemd[1]: f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service: Deactivated successfully. >Aug 20 17:14:15 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:14:15 pi systemd[1]: f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service: Consumed 6.492s CPU time. 
>Aug 20 17:14:15 pi podman[11295]: 2022-08-20 17:14:15.253225528 +0000 UTC m=+12.062713923 container create a1db16389e8f7194d5a7e21ccd40f0f2d579034658ed6d4f969d1e2c526fce0f (image=docker.io/library/nginx:latest, name=nextcloud-nginx, io.containers.autoupdate=registry, PODMAN_SYSTEMD_UNIT=container-nextcloud-nginx.service, maintainer=NGINX Docker Maintainers <docker-maint@nginx.com>) >Aug 20 17:14:16 pi systemd[1]: Started libpod-a1db16389e8f7194d5a7e21ccd40f0f2d579034658ed6d4f969d1e2c526fce0f.scope - libcrun container. >Aug 20 17:14:16 pi audit: BPF prog-id=96 op=LOAD >Aug 20 17:14:16 pi podman[11295]: 2022-08-20 17:14:16.606061782 +0000 UTC m=+13.415550250 container init a1db16389e8f7194d5a7e21ccd40f0f2d579034658ed6d4f969d1e2c526fce0f (image=docker.io/library/nginx:latest, name=nextcloud-nginx, PODMAN_SYSTEMD_UNIT=container-nextcloud-nginx.service, maintainer=NGINX Docker Maintainers <docker-maint@nginx.com>, io.containers.autoupdate=registry) >Aug 20 17:14:16 pi systemd[1]: sysroot-tmp-crun.BKnQcv.mount: Deactivated successfully. >Aug 20 17:14:16 pi systemd[1]: Started container-nextcloud-nginx.service - Podman container-nextcloud-nginx.service. >Aug 20 17:14:16 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=container-nextcloud-nginx comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:14:16 pi systemd[1]: Reached target multi-user.target - Multi-User System. 
>Aug 20 17:14:16 pi nextcloud-nginx[11362]: /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration >Aug 20 17:14:16 pi nextcloud-nginx[11362]: /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/ >Aug 20 17:14:16 pi nextcloud-nginx[11362]: /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh >Aug 20 17:14:16 pi nextcloud-nginx[11362]: 10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf >Aug 20 17:14:16 pi nextcloud-nginx[11362]: 10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf >Aug 20 17:14:16 pi nextcloud-nginx[11362]: /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh >Aug 20 17:14:16 pi nextcloud-nginx[11362]: /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh >Aug 20 17:14:16 pi nextcloud-nginx[11362]: /docker-entrypoint.sh: Configuration complete; ready for start up >Aug 20 17:14:16 pi systemd[1]: Starting systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP... >Aug 20 17:14:16 pi podman[11295]: 2022-08-20 17:14:16.748816869 +0000 UTC m=+13.558305226 container start a1db16389e8f7194d5a7e21ccd40f0f2d579034658ed6d4f969d1e2c526fce0f (image=docker.io/library/nginx:latest, name=nextcloud-nginx, io.containers.autoupdate=registry, PODMAN_SYSTEMD_UNIT=container-nextcloud-nginx.service, maintainer=NGINX Docker Maintainers <docker-maint@nginx.com>) >Aug 20 17:14:16 pi podman[11295]: a1db16389e8f7194d5a7e21ccd40f0f2d579034658ed6d4f969d1e2c526fce0f >Aug 20 17:14:16 pi audit[11391]: SYSTEM_RUNLEVEL pid=11391 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='old-level=N new-level=3 comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' >Aug 20 17:14:16 pi systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. 
>Aug 20 17:14:16 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-update-utmp-runlevel comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:14:16 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-update-utmp-runlevel comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:14:16 pi systemd[1]: Finished systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP. >Aug 20 17:14:16 pi systemd[1]: Startup finished in 5.082s (kernel) + 6.537s (initrd) + 1min 47.282s (userspace) = 1min 58.901s. >Aug 20 17:14:21 pi nextcloud-fpm[11272]: [20-Aug-2022 19:14:21] NOTICE: fpm is running, pid 1 >Aug 20 17:14:21 pi nextcloud-fpm[11272]: [20-Aug-2022 19:14:21] NOTICE: ready to handle connections >Aug 20 17:14:21 pi audit[11469]: AVC avc: denied { nlmsg_read } for pid=11469 comm="ss" scontext=system_u:system_r:container_t:s0:c504,c855 tcontext=system_u:system_r:container_t:s0:c504,c855 tclass=netlink_tcpdiag_socket permissive=0 >Aug 20 17:14:21 pi audit[11469]: AVC avc: denied { nlmsg_read } for pid=11469 comm="ss" scontext=system_u:system_r:container_t:s0:c504,c855 tcontext=system_u:system_r:container_t:s0:c504,c855 tclass=netlink_tcpdiag_socket permissive=0 >Aug 20 17:14:21 pi audit[11474]: AVC avc: denied { nlmsg_read } for pid=11474 comm="ss" scontext=system_u:system_r:container_t:s0:c504,c855 tcontext=system_u:system_r:container_t:s0:c504,c855 tclass=netlink_tcpdiag_socket permissive=0 >Aug 20 17:14:21 pi audit[11474]: AVC avc: denied { nlmsg_read } for pid=11474 comm="ss" scontext=system_u:system_r:container_t:s0:c504,c855 tcontext=system_u:system_r:container_t:s0:c504,c855 tclass=netlink_tcpdiag_socket permissive=0 >Aug 20 17:14:23 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 10. 
>Aug 20 17:14:23 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:14:23 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:14:23 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:14:23 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon... >Aug 20 17:14:23 pi systemd[1]: Started dbus-:1.2-org.fedoraproject.Setroubleshootd@2.service. >Aug 20 17:14:23 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-:1.2-org.fedoraproject.Setroubleshootd@2 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:14:23 pi dbus-parsec[11480]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist))) >Aug 20 17:14:23 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE >Aug 20 17:14:23 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'. >Aug 20 17:14:23 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:14:23 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' >Aug 20 17:14:24 pi systemd[1]: Started c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518.service - /usr/bin/podman healthcheck run c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518. 
>Aug 20 17:14:24 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:14:24 pi gitea-app[10857]: 2022/08/20 19:14:24 cmd/web.go:106:runWeb() [I] Starting Gitea on PID: 12 >Aug 20 17:14:24 pi gitea-app[10857]: 2022/08/20 19:14:24 ...s/setting/setting.go:594:deprecatedSetting() [E] Deprecated fallback `[server]` `LFS_CONTENT_PATH` present. Use `[lfs]` `PATH` instead. This fallback will be removed in v1.18.0 >Aug 20 17:14:24 pi gitea-app[10857]: 2022/08/20 19:14:24 cmd/web.go:157:runWeb() [I] Global init >Aug 20 17:14:24 pi gitea-app[10857]: 2022/08/20 19:14:24 ...s/setting/setting.go:594:deprecatedSetting() [E] Deprecated fallback `[server]` `LFS_CONTENT_PATH` present. Use `[lfs]` `PATH` instead. This fallback will be removed in v1.18.0 >Aug 20 17:14:24 pi podman[11482]: 2022-08-20 17:14:24.877531258 +0000 UTC m=+0.369460685 container exec c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 (image=docker.io/vaultwarden/server:latest, name=vaultwarden-server, io.balena.qemu.version=7.0.0+balena1-aarch64, io.containers.autoupdate=registry, org.opencontainers.image.url=https://hub.docker.com/r/vaultwarden/server, io.balena.architecture=aarch64, org.opencontainers.image.version=1.25.2, org.opencontainers.image.documentation=https://github.com/dani-garcia/vaultwarden/wiki, org.opencontainers.image.source=https://github.com/dani-garcia/vaultwarden, PODMAN_SYSTEMD_UNIT=container-vaultwarden-server.service, org.opencontainers.image.licenses=GPL-3.0-only, org.opencontainers.image.revision=ce9d93003cd37a79edba1ba830a6c6d3fa22c2c8, org.opencontainers.image.created=2022-07-27T18:44:18+00:00) >Aug 20 17:14:24 pi podman[11482]: 2022-08-20 17:14:24.969640849 +0000 UTC m=+0.461570350 container exec_died 
c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 (image=docker.io/vaultwarden/server:latest, name=vaultwarden-server, execID=726644a3a98088658de47d81c5a4c4bb0429d1035f4c1682b43433b07aacf70c) >Aug 20 17:14:25 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:14:25 pi systemd[1]: c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518.service: Deactivated successfully. >Aug 20 17:14:25 pi gitea-app[10857]: 2022/08/20 19:14:25 routers/init.go:104:GlobalInitInstalled() [I] Git Version: 2.36.2, Wire Protocol Version 2 Enabled (home: /data/gitea/home) >Aug 20 17:14:25 pi gitea-app[10857]: 2022/08/20 19:14:25 routers/init.go:107:GlobalInitInstalled() [I] AppPath: /usr/local/bin/gitea >Aug 20 17:14:25 pi gitea-app[10857]: 2022/08/20 19:14:25 routers/init.go:108:GlobalInitInstalled() [I] AppWorkPath: /app/gitea >Aug 20 17:14:25 pi gitea-app[10857]: 2022/08/20 19:14:25 routers/init.go:109:GlobalInitInstalled() [I] Custom path: /data/gitea >Aug 20 17:14:25 pi gitea-app[10857]: 2022/08/20 19:14:25 routers/init.go:110:GlobalInitInstalled() [I] Log path: /data/gitea/log >Aug 20 17:14:25 pi gitea-app[10857]: 2022/08/20 19:14:25 routers/init.go:111:GlobalInitInstalled() [I] Configuration file: /data/gitea/conf/app.ini >Aug 20 17:14:25 pi gitea-app[10857]: 2022/08/20 19:14:25 routers/init.go:112:GlobalInitInstalled() [I] Run Mode: Prod >Aug 20 17:14:26 pi gitea-app[10857]: 2022/08/20 19:14:26 ...dules/setting/log.go:288:newLogService() [I] Gitea v1.17.1 built with GNU Make 4.3, go1.18.5 : bindata, timetzdata, sqlite, sqlite_unlock_notify >Aug 20 17:14:26 pi gitea-app[10857]: 2022/08/20 19:14:26 ...dules/setting/log.go:335:newLogService() [I] Gitea Log Mode: Console(Console:info) >Aug 20 17:14:26 pi gitea-app[10857]: 2022/08/20 19:14:26 
...dules/setting/log.go:249:generateNamedLogger() [I] Router Log: Console(console:info) >Aug 20 17:14:26 pi gitea-app[10857]: 2022/08/20 19:14:26 ...les/setting/cache.go:76:newCacheService() [I] Cache Service Enabled >Aug 20 17:14:26 pi gitea-app[10857]: 2022/08/20 19:14:26 ...les/setting/cache.go:91:newCacheService() [I] Last Commit Cache Service Enabled >Aug 20 17:14:26 pi gitea-app[10857]: 2022/08/20 19:14:26 ...s/setting/session.go:73:newSessionService() [I] Session Service Enabled >Aug 20 17:14:26 pi gitea-app[10857]: 2022/08/20 19:14:26 ...s/storage/storage.go:176:initAttachments() [I] Initialising Attachment storage with type: >Aug 20 17:14:26 pi gitea-app[10857]: 2022/08/20 19:14:26 ...les/storage/local.go:46:NewLocalStorage() [I] Creating new Local Storage at /data/gitea/attachments >Aug 20 17:14:26 pi gitea-app[10857]: 2022/08/20 19:14:26 ...s/storage/storage.go:170:initAvatars() [I] Initialising Avatar storage with type: >Aug 20 17:14:26 pi gitea-app[10857]: 2022/08/20 19:14:26 ...les/storage/local.go:46:NewLocalStorage() [I] Creating new Local Storage at /data/gitea/avatars >Aug 20 17:14:26 pi gitea-app[10857]: 2022/08/20 19:14:26 ...s/storage/storage.go:188:initRepoAvatars() [I] Initialising Repository Avatar storage with type: >Aug 20 17:14:26 pi gitea-app[10857]: 2022/08/20 19:14:26 ...les/storage/local.go:46:NewLocalStorage() [I] Creating new Local Storage at /data/gitea/repo-avatars >Aug 20 17:14:26 pi gitea-app[10857]: 2022/08/20 19:14:26 ...s/storage/storage.go:182:initLFS() [I] Initialising LFS storage with type: >Aug 20 17:14:26 pi gitea-app[10857]: 2022/08/20 19:14:26 ...les/storage/local.go:46:NewLocalStorage() [I] Creating new Local Storage at /data/git/lfs >Aug 20 17:14:26 pi gitea-app[10857]: 2022/08/20 19:14:26 ...s/storage/storage.go:194:initRepoArchives() [I] Initialising Repository Archive storage with type: >Aug 20 17:14:26 pi gitea-app[10857]: 2022/08/20 19:14:26 ...les/storage/local.go:46:NewLocalStorage() [I] Creating new Local 
Storage at /data/gitea/repo-archive >Aug 20 17:14:26 pi gitea-app[10857]: 2022/08/20 19:14:26 ...s/storage/storage.go:200:initPackages() [I] Initialising Packages storage with type: >Aug 20 17:14:26 pi gitea-app[10857]: 2022/08/20 19:14:26 ...les/storage/local.go:46:NewLocalStorage() [I] Creating new Local Storage at /data/gitea/packages >Aug 20 17:14:26 pi gitea-app[10857]: 2022/08/20 19:14:26 routers/init.go:130:GlobalInitInstalled() [I] SQLite3 support is enabled >Aug 20 17:14:26 pi gitea-app[10857]: 2022/08/20 19:14:26 routers/common/db.go:20:InitDBEngine() [I] Beginning ORM engine initialization. >Aug 20 17:14:26 pi gitea-app[10857]: 2022/08/20 19:14:26 routers/common/db.go:27:InitDBEngine() [I] ORM engine initialization attempt #1/10... >Aug 20 17:14:26 pi gitea-app[10857]: 2022/08/20 19:14:26 cmd/web.go:160:runWeb() [I] PING DATABASE postgres >Aug 20 17:14:26 pi proxy[6994]: [8/20/2022] [5:14:26 PM] [Migrate ] › ℹ info Current database version: none >Aug 20 17:14:26 pi proxy-internal[7035]: [8/20/2022] [5:14:26 PM] [Migrate ] › ℹ info Current database version: none >Aug 20 17:14:27 pi proxy[6994]: [8/20/2022] [5:14:27 PM] [Setup ] › ℹ info Creating a new JWT key pair... >Aug 20 17:14:27 pi proxy-internal[7035]: [8/20/2022] [5:14:27 PM] [Setup ] › ℹ info Creating a new JWT key pair... >Aug 20 17:14:27 pi gitea-app[10857]: 2022/08/20 19:14:27 routers/init.go:135:GlobalInitInstalled() [W] Table user Column max_repo_creation db default is '-1', struct default is -1 >Aug 20 17:14:28 pi gitea-app[10857]: 2022/08/20 19:14:28 routers/init.go:135:GlobalInitInstalled() [W] Table push_mirror has column sync_on_commit but struct has not related field >Aug 20 17:14:29 pi systemd[1]: Started dbus-:1.2-org.fedoraproject.SetroubleshootPrivileged@2.service. 
>Aug 20 17:14:29 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-:1.2-org.fedoraproject.SetroubleshootPrivileged@2 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:14:31 pi SetroubleshootPrivileged.py[11523]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception: >Aug 20 17:14:31 pi SetroubleshootPrivileged.py[11523]: Traceback (most recent call last): >Aug 20 17:14:31 pi SetroubleshootPrivileged.py[11523]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback >Aug 20 17:14:31 pi SetroubleshootPrivileged.py[11523]: result = self._handle_call( >Aug 20 17:14:31 pi SetroubleshootPrivileged.py[11523]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call >Aug 20 17:14:31 pi SetroubleshootPrivileged.py[11523]: return handler(*parameters, **additional_args) >Aug 20 17:14:31 pi SetroubleshootPrivileged.py[11523]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext >Aug 20 17:14:31 pi SetroubleshootPrivileged.py[11523]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext) >Aug 20 17:14:31 pi SetroubleshootPrivileged.py[11523]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext >Aug 20 17:14:31 pi SetroubleshootPrivileged.py[11523]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context))) >Aug 20 17:14:31 pi SetroubleshootPrivileged.py[11523]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type >Aug 20 17:14:31 pi SetroubleshootPrivileged.py[11523]: build_module_type_cache() >Aug 20 17:14:31 pi SetroubleshootPrivileged.py[11523]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache >Aug 20 17:14:31 pi 
SetroubleshootPrivileged.py[11523]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store: >Aug 20 17:14:31 pi SetroubleshootPrivileged.py[11523]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules' >Aug 20 17:14:31 pi setroubleshoot[11481]: SELinux is preventing ss from nlmsg_read access on the netlink_tcpdiag_socket labeled container_t. For complete SELinux messages run: sealert -l 82a52597-347e-451d-afa2-fee1ecc35d27 >Aug 20 17:14:31 pi setroubleshoot[11481]: SELinux is preventing ss from nlmsg_read access on the netlink_tcpdiag_socket labeled container_t. > > ***** Plugin catchall_boolean (89.3 confidence) suggests ****************** > > If you want to allow virt to sandbox use netlink > Then you must tell SELinux about this by enabling the 'virt_sandbox_use_netlink' boolean. > > Do > setsebool -P virt_sandbox_use_netlink 1 > > ***** Plugin catchall (11.6 confidence) suggests ************************** > > If you believe that ss should be allowed nlmsg_read access on netlink_tcpdiag_socket labeled container_t by default. > Then you should report this as a bug. > You can generate a local policy module to allow this access. 
> Do > allow this access for now by executing: > # ausearch -c 'ss' --raw | audit2allow -M my-ss > # semodule -X 300 -i my-ss.pp > >Aug 20 17:14:31 pi SetroubleshootPrivileged.py[11523]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception: >Aug 20 17:14:31 pi SetroubleshootPrivileged.py[11523]: Traceback (most recent call last): >Aug 20 17:14:31 pi SetroubleshootPrivileged.py[11523]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback >Aug 20 17:14:31 pi SetroubleshootPrivileged.py[11523]: result = self._handle_call( >Aug 20 17:14:31 pi SetroubleshootPrivileged.py[11523]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call >Aug 20 17:14:31 pi SetroubleshootPrivileged.py[11523]: return handler(*parameters, **additional_args) >Aug 20 17:14:31 pi SetroubleshootPrivileged.py[11523]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext >Aug 20 17:14:31 pi SetroubleshootPrivileged.py[11523]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext) >Aug 20 17:14:31 pi SetroubleshootPrivileged.py[11523]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext >Aug 20 17:14:31 pi SetroubleshootPrivileged.py[11523]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context))) >Aug 20 17:14:31 pi SetroubleshootPrivileged.py[11523]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type >Aug 20 17:14:31 pi SetroubleshootPrivileged.py[11523]: build_module_type_cache() >Aug 20 17:14:31 pi SetroubleshootPrivileged.py[11523]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache >Aug 20 17:14:31 pi SetroubleshootPrivileged.py[11523]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store: >Aug 20 17:14:31 pi 
SetroubleshootPrivileged.py[11523]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules' >Aug 20 17:14:31 pi setroubleshoot[11481]: SELinux is preventing ss from nlmsg_read access on the netlink_tcpdiag_socket labeled container_t. For complete SELinux messages run: sealert -l 82a52597-347e-451d-afa2-fee1ecc35d27 >Aug 20 17:14:31 pi setroubleshoot[11481]: SELinux is preventing ss from nlmsg_read access on the netlink_tcpdiag_socket labeled container_t. > > ***** Plugin catchall_boolean (89.3 confidence) suggests ****************** > > If you want to allow virt to sandbox use netlink > Then you must tell SELinux about this by enabling the 'virt_sandbox_use_netlink' boolean. > > Do > setsebool -P virt_sandbox_use_netlink 1 > > ***** Plugin catchall (11.6 confidence) suggests ************************** > > If you believe that ss should be allowed nlmsg_read access on netlink_tcpdiag_socket labeled container_t by default. > Then you should report this as a bug. > You can generate a local policy module to allow this access. 
> Do > allow this access for now by executing: > # ausearch -c 'ss' --raw | audit2allow -M my-ss > # semodule -X 300 -i my-ss.pp > >Aug 20 17:14:31 pi SetroubleshootPrivileged.py[11523]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception: >Aug 20 17:14:31 pi SetroubleshootPrivileged.py[11523]: Traceback (most recent call last): >Aug 20 17:14:31 pi SetroubleshootPrivileged.py[11523]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback >Aug 20 17:14:31 pi SetroubleshootPrivileged.py[11523]: result = self._handle_call( >Aug 20 17:14:31 pi SetroubleshootPrivileged.py[11523]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call >Aug 20 17:14:31 pi SetroubleshootPrivileged.py[11523]: return handler(*parameters, **additional_args) >Aug 20 17:14:31 pi SetroubleshootPrivileged.py[11523]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext >Aug 20 17:14:31 pi SetroubleshootPrivileged.py[11523]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext) >Aug 20 17:14:31 pi SetroubleshootPrivileged.py[11523]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext >Aug 20 17:14:31 pi SetroubleshootPrivileged.py[11523]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context))) >Aug 20 17:14:31 pi SetroubleshootPrivileged.py[11523]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type >Aug 20 17:14:31 pi SetroubleshootPrivileged.py[11523]: build_module_type_cache() >Aug 20 17:14:31 pi SetroubleshootPrivileged.py[11523]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache >Aug 20 17:14:31 pi SetroubleshootPrivileged.py[11523]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store: >Aug 20 17:14:31 pi 
SetroubleshootPrivileged.py[11523]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules' >Aug 20 17:14:31 pi setroubleshoot[11481]: SELinux is preventing ss from nlmsg_read access on the netlink_tcpdiag_socket labeled container_t. For complete SELinux messages run: sealert -l 82a52597-347e-451d-afa2-fee1ecc35d27 >Aug 20 17:14:31 pi setroubleshoot[11481]: SELinux is preventing ss from nlmsg_read access on the netlink_tcpdiag_socket labeled container_t. > > ***** Plugin catchall_boolean (89.3 confidence) suggests ****************** > > If you want to allow virt to sandbox use netlink > Then you must tell SELinux about this by enabling the 'virt_sandbox_use_netlink' boolean. > > Do > setsebool -P virt_sandbox_use_netlink 1 > > ***** Plugin catchall (11.6 confidence) suggests ************************** > > If you believe that ss should be allowed nlmsg_read access on netlink_tcpdiag_socket labeled container_t by default. > Then you should report this as a bug. > You can generate a local policy module to allow this access. 
> Do > allow this access for now by executing: > # ausearch -c 'ss' --raw | audit2allow -M my-ss > # semodule -X 300 -i my-ss.pp > >Aug 20 17:14:31 pi SetroubleshootPrivileged.py[11523]: The call org.fedoraproject.SetroubleshootPrivileged.get_rpm_nvr_by_scontext has failed with an exception: >Aug 20 17:14:31 pi SetroubleshootPrivileged.py[11523]: Traceback (most recent call last): >Aug 20 17:14:31 pi SetroubleshootPrivileged.py[11523]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 455, in _method_callback >Aug 20 17:14:31 pi SetroubleshootPrivileged.py[11523]: result = self._handle_call( >Aug 20 17:14:31 pi SetroubleshootPrivileged.py[11523]: File "/usr/lib/python3.10/site-packages/dasbus/server/handler.py", line 265, in _handle_call >Aug 20 17:14:31 pi SetroubleshootPrivileged.py[11523]: return handler(*parameters, **additional_args) >Aug 20 17:14:31 pi SetroubleshootPrivileged.py[11523]: File "/usr/share/setroubleshoot/SetroubleshootPrivileged.py", line 57, in get_rpm_nvr_by_scontext >Aug 20 17:14:31 pi SetroubleshootPrivileged.py[11523]: rpmnvr = setroubleshoot.util.get_rpm_nvr_by_scontext(scontext) >Aug 20 17:14:31 pi SetroubleshootPrivileged.py[11523]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 625, in get_rpm_nvr_by_scontext >Aug 20 17:14:31 pi SetroubleshootPrivileged.py[11523]: return get_rpm_nvr_by_type(str(selinux.context_type_get(context))) >Aug 20 17:14:31 pi SetroubleshootPrivileged.py[11523]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 513, in get_rpm_nvr_by_type >Aug 20 17:14:31 pi SetroubleshootPrivileged.py[11523]: build_module_type_cache() >Aug 20 17:14:31 pi SetroubleshootPrivileged.py[11523]: File "/usr/lib/python3.10/site-packages/setroubleshoot/util.py", line 561, in build_module_type_cache >Aug 20 17:14:31 pi SetroubleshootPrivileged.py[11523]: with os.scandir("/var/lib/selinux/{}/active/modules".format(policytype)) as module_store: >Aug 20 17:14:31 pi 
SetroubleshootPrivileged.py[11523]: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/selinux/targeted/active/modules' >Aug 20 17:14:31 pi gitea-app[10857]: 2022/08/20 19:14:31 routers/init.go:136:GlobalInitInstalled() [I] ORM engine initialization successful! >Aug 20 17:14:31 pi setroubleshoot[11481]: SELinux is preventing ss from nlmsg_read access on the netlink_tcpdiag_socket labeled container_t. For complete SELinux messages run: sealert -l 82a52597-347e-451d-afa2-fee1ecc35d27 >Aug 20 17:14:31 pi setroubleshoot[11481]: SELinux is preventing ss from nlmsg_read access on the netlink_tcpdiag_socket labeled container_t. > > ***** Plugin catchall_boolean (89.3 confidence) suggests ****************** > > If you want to allow virt to sandbox use netlink > Then you must tell SELinux about this by enabling the 'virt_sandbox_use_netlink' boolean. > > Do > setsebool -P virt_sandbox_use_netlink 1 > > ***** Plugin catchall (11.6 confidence) suggests ************************** > > If you believe that ss should be allowed nlmsg_read access on netlink_tcpdiag_socket labeled container_t by default. > Then you should report this as a bug. > You can generate a local policy module to allow this access. 
> Do > allow this access for now by executing: > # ausearch -c 'ss' --raw | audit2allow -M my-ss > # semodule -X 300 -i my-ss.pp > >Aug 20 17:14:32 pi gitea-app[10857]: 2022/08/20 19:14:32 ...xer/stats/indexer.go:39:populateRepoIndexer() [I] Populating the repo stats indexer with existing repositories >Aug 20 17:14:32 pi gitea-app[10857]: 2022/08/20 19:14:32 ...er/issues/indexer.go:174:func2() [I] [63011678-3] PID 12: Initializing Issue Indexer: bleve >Aug 20 17:14:32 pi gitea-app[10857]: 2022/08/20 19:14:32 ...er/issues/indexer.go:270:func3() [I] [63011678-3] Issue Indexer Initialization took 60.754402ms >Aug 20 17:14:32 pi gitea-app[10857]: 2022/08/20 19:14:32 ...xer/stats/indexer.go:85:populateRepoIndexer() [I] Done (re)populating the repo stats indexer with existing repositories >Aug 20 17:14:33 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 11. >Aug 20 17:14:33 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:14:33 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:14:33 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:14:33 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon... >Aug 20 17:14:33 pi dbus-parsec[11534]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist))) >Aug 20 17:14:33 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE >Aug 20 17:14:33 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'. 
>Aug 20 17:14:33 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' >Aug 20 17:14:33 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:14:34 pi gitea-app[10857]: 2022/08/20 19:14:34 cmd/web.go:217:listen() [I] [6301167a] Listen: http://0.0.0.0:3000 >Aug 20 17:14:34 pi gitea-app[10857]: 2022/08/20 19:14:34 cmd/web.go:221:listen() [I] [6301167a] AppURL(ROOT_URL): https://git.vanoverloop.xyz/ >Aug 20 17:14:34 pi gitea-app[10857]: 2022/08/20 19:14:34 cmd/web.go:224:listen() [I] [6301167a] LFS server enabled >Aug 20 17:14:34 pi gitea-app[10857]: 2022/08/20 19:14:34 ...s/graceful/server.go:61:NewServer() [I] [6301167a] Starting new Web server: tcp:0.0.0.0:3000 on PID: 12 >Aug 20 17:14:35 pi hass-zigbee2mqtt[7556]: Zigbee2MQTT:info 2022-08-20 19:14:35: Logging to console and directory: '/app/data/log/2022-08-20.19-14-35' filename: log.txt >Aug 20 17:14:35 pi hass-zigbee2mqtt[7556]: Zigbee2MQTT:info 2022-08-20 19:14:35: Starting Zigbee2MQTT version 1.27.0 (commit #a9b8808) >Aug 20 17:14:35 pi hass-zigbee2mqtt[7556]: Zigbee2MQTT:info 2022-08-20 19:14:35: Starting zigbee-herdsman (0.14.46) >Aug 20 17:14:35 pi gitea-app[10857]: 2022/08/20 19:14:35 [6301167b] router: completed GET /Danacus/university-stuff/action/watch?redirect_to=%2FDanacus%2Funiversity-stuff%2Fcommits%2Fcommit%2F31ca387f733dbd56601f960364de9fa25b4c9a37%2FBewijzen%2520en%2520redeneren%2FHuistaak-Week8.pdf.xopp for 10.88.0.1:60460, 405 Method Not Allowed in 5.3ms @ web/goget.go:21(web.goGet) >Aug 20 17:14:37 pi hass-app[9061]: 2022-08-20 19:14:37.333 WARNING (SyncWorker_0) [homeassistant.loader] We found a custom integration scheduler which has not been tested by Home Assistant. 
This component might cause stability problems, be sure to disable it if you experience issues with Home Assistant >Aug 20 17:14:37 pi hass-app[9061]: 2022-08-20 19:14:37.336 WARNING (SyncWorker_0) [homeassistant.loader] We found a custom integration homewizard_energy which has not been tested by Home Assistant. This component might cause stability problems, be sure to disable it if you experience issues with Home Assistant >Aug 20 17:14:37 pi hass-app[9061]: 2022-08-20 19:14:37.338 WARNING (SyncWorker_0) [homeassistant.loader] We found a custom integration adaptive_lighting which has not been tested by Home Assistant. This component might cause stability problems, be sure to disable it if you experience issues with Home Assistant >Aug 20 17:14:37 pi hass-app[9061]: 2022-08-20 19:14:37.340 WARNING (SyncWorker_0) [homeassistant.loader] We found a custom integration pirateweather which has not been tested by Home Assistant. This component might cause stability problems, be sure to disable it if you experience issues with Home Assistant >Aug 20 17:14:37 pi hass-app[9061]: 2022-08-20 19:14:37.342 WARNING (SyncWorker_0) [homeassistant.loader] We found a custom integration hacs which has not been tested by Home Assistant. 
This component might cause stability problems, be sure to disable it if you experience issues with Home Assistant >Aug 20 17:14:37 pi hass-zigbee2mqtt[7556]: Zigbee2MQTT:info 2022-08-20 19:14:37: zigbee-herdsman started (resumed) >Aug 20 17:14:37 pi hass-zigbee2mqtt[7556]: Zigbee2MQTT:info 2022-08-20 19:14:37: Coordinator firmware version: '{"meta":{"maintrel":3,"majorrel":2,"minorrel":6,"product":0,"revision":20190619,"transportrev":2},"type":"zStack12"}' >Aug 20 17:14:37 pi hass-zigbee2mqtt[7556]: Zigbee2MQTT:info 2022-08-20 19:14:37: Currently 8 devices are joined: >Aug 20 17:14:37 pi hass-zigbee2mqtt[7556]: Zigbee2MQTT:info 2022-08-20 19:14:37: 0x60a423fffe04e361 (0x60a423fffe04e361): HG06337 - Lidl Silvercrest smart plug (EU, CH, FR, BS, DK) (Router) >Aug 20 17:14:37 pi hass-zigbee2mqtt[7556]: Zigbee2MQTT:info 2022-08-20 19:14:37: 0x5c0272fffe285808 (0x5c0272fffe285808): HG06337 - Lidl Silvercrest smart plug (EU, CH, FR, BS, DK) (Router) >Aug 20 17:14:37 pi hass-zigbee2mqtt[7556]: Zigbee2MQTT:info 2022-08-20 19:14:37: 0x60a423fffe07c407 (0x60a423fffe07c407): HG06337 - Lidl Silvercrest smart plug (EU, CH, FR, BS, DK) (Router) >Aug 20 17:14:37 pi hass-zigbee2mqtt[7556]: Zigbee2MQTT:info 2022-08-20 19:14:37: 0x0c4314fffe5a707a (0x0c4314fffe5a707a): HG07834C - Lidl Livarno Lux E27 bulb RGB (Router) >Aug 20 17:14:37 pi hass-zigbee2mqtt[7556]: Zigbee2MQTT:info 2022-08-20 19:14:37: 0x0c4314fffe35dc29 (0x0c4314fffe35dc29): HG07834A - Lidl Livarno Lux GU10 spot RGB (Router) >Aug 20 17:14:37 pi hass-zigbee2mqtt[7556]: Zigbee2MQTT:info 2022-08-20 19:14:37: 0x60a423fffe23d626 (0x60a423fffe23d626): HG07834A - Lidl Livarno Lux GU10 spot RGB (Router) >Aug 20 17:14:37 pi hass-zigbee2mqtt[7556]: Zigbee2MQTT:info 2022-08-20 19:14:37: 0x0c4314fffe76cf26 (0x0c4314fffe76cf26): HG07834A - Lidl Livarno Lux GU10 spot RGB (Router) >Aug 20 17:14:37 pi hass-zigbee2mqtt[7556]: Zigbee2MQTT:info 2022-08-20 19:14:37: 0x0c4314fffe881a63 (0x0c4314fffe881a63): HG07834A - Lidl Livarno Lux GU10 
spot RGB (Router) >Aug 20 17:14:37 pi hass-zigbee2mqtt[7556]: Zigbee2MQTT:warn 2022-08-20 19:14:37: `permit_join` set to `true` in configuration.yaml. >Aug 20 17:14:37 pi hass-zigbee2mqtt[7556]: Zigbee2MQTT:warn 2022-08-20 19:14:37: Allowing new devices to join. >Aug 20 17:14:37 pi hass-zigbee2mqtt[7556]: Zigbee2MQTT:warn 2022-08-20 19:14:37: Set `permit_join` to `false` once you joined all devices. >Aug 20 17:14:37 pi hass-zigbee2mqtt[7556]: Zigbee2MQTT:info 2022-08-20 19:14:37: Zigbee: allowing new devices to join. >Aug 20 17:14:38 pi hass-zigbee2mqtt[7556]: Zigbee2MQTT:info 2022-08-20 19:14:38: Connecting to MQTT server at mqtt://localhost >Aug 20 17:14:38 pi hass-mosquitto[6718]: 1661015678: New connection from 127.0.0.1:44884 on port 1883. >Aug 20 17:14:38 pi hass-mosquitto[6718]: 1661015678: New client connected from 127.0.0.1:44884 as mqttjs_e342e190 (p2, c1, k60). >Aug 20 17:14:38 pi hass-zigbee2mqtt[7556]: Zigbee2MQTT:info 2022-08-20 19:14:38: Connected to MQTT server >Aug 20 17:14:38 pi hass-zigbee2mqtt[7556]: Zigbee2MQTT:info 2022-08-20 19:14:38: MQTT publish: topic 'zigbee2mqtt/bridge/state', payload 'online' >Aug 20 17:14:38 pi hass-zigbee2mqtt[7556]: Zigbee2MQTT:info 2022-08-20 19:14:38: MQTT publish: topic 'zigbee2mqtt/bridge/config', payload '{"commit":"a9b8808","coordinator":{"meta":{"maintrel":3,"majorrel":2,"minorrel":6,"product":0,"revision":20190619,"transportrev":2},"type":"zStack12"},"log_level":"info","network":{"channel":11,"extendedPanID":"0xdddddddddddddddd","panID":6754},"permit_join":true,"version":"1.27.0"}' >Aug 20 17:14:38 pi hass-zigbee2mqtt[7556]: Zigbee2MQTT:info 2022-08-20 19:14:38: MQTT publish: topic 'homeassistant/switch/0x60a423fffe04e361/switch/config', payload '{"availability":[{"topic":"zigbee2mqtt/bridge/state"}],"command_topic":"zigbee2mqtt/0x60a423fffe04e361/set","device":{"identifiers":["zigbee2mqtt_0x60a423fffe04e361"],"manufacturer":"Lidl","model":"Silvercrest smart plug (EU, CH, FR, BS, DK) 
(HG06337)","name":"0x60a423fffe04e361"},"json_attributes_topic":"zigbee2mqtt/0x60a423fffe04e361","name":"0x60a423fffe04e361","payload_off":"OFF","payload_on":"ON","state_topic":"zigbee2mqtt/0x60a423fffe04e361","unique_id":"0x60a423fffe04e361_switch_zigbee2mqtt","value_template":"{{ value_json.state }}"}' >Aug 20 17:14:38 pi hass-zigbee2mqtt[7556]: Zigbee2MQTT:info 2022-08-20 19:14:38: MQTT publish: topic 'homeassistant/sensor/0x60a423fffe04e361/linkquality/config', payload '{"availability":[{"topic":"zigbee2mqtt/bridge/state"}],"device":{"identifiers":["zigbee2mqtt_0x60a423fffe04e361"],"manufacturer":"Lidl","model":"Silvercrest smart plug (EU, CH, FR, BS, DK) (HG06337)","name":"0x60a423fffe04e361"},"enabled_by_default":false,"entity_category":"diagnostic","icon":"mdi:signal","json_attributes_topic":"zigbee2mqtt/0x60a423fffe04e361","name":"0x60a423fffe04e361 linkquality","state_class":"measurement","state_topic":"zigbee2mqtt/0x60a423fffe04e361","unique_id":"0x60a423fffe04e361_linkquality_zigbee2mqtt","unit_of_measurement":"lqi","value_template":"{{ value_json.linkquality }}"}' >Aug 20 17:14:38 pi hass-zigbee2mqtt[7556]: Zigbee2MQTT:info 2022-08-20 19:14:38: MQTT publish: topic 'homeassistant/switch/0x5c0272fffe285808/switch/config', payload '{"availability":[{"topic":"zigbee2mqtt/bridge/state"}],"command_topic":"zigbee2mqtt/0x5c0272fffe285808/set","device":{"identifiers":["zigbee2mqtt_0x5c0272fffe285808"],"manufacturer":"Lidl","model":"Silvercrest smart plug (EU, CH, FR, BS, DK) (HG06337)","name":"0x5c0272fffe285808"},"json_attributes_topic":"zigbee2mqtt/0x5c0272fffe285808","name":"0x5c0272fffe285808","payload_off":"OFF","payload_on":"ON","state_topic":"zigbee2mqtt/0x5c0272fffe285808","unique_id":"0x5c0272fffe285808_switch_zigbee2mqtt","value_template":"{{ value_json.state }}"}' >Aug 20 17:14:38 pi hass-zigbee2mqtt[7556]: Zigbee2MQTT:info 2022-08-20 19:14:38: MQTT publish: topic 'homeassistant/sensor/0x5c0272fffe285808/linkquality/config', payload 
'{"availability":[{"topic":"zigbee2mqtt/bridge/state"}],"device":{"identifiers":["zigbee2mqtt_0x5c0272fffe285808"],"manufacturer":"Lidl","model":"Silvercrest smart plug (EU, CH, FR, BS, DK) (HG06337)","name":"0x5c0272fffe285808"},"enabled_by_default":false,"entity_category":"diagnostic","icon":"mdi:signal","json_attributes_topic":"zigbee2mqtt/0x5c0272fffe285808","name":"0x5c0272fffe285808 linkquality","state_class":"measurement","state_topic":"zigbee2mqtt/0x5c0272fffe285808","unique_id":"0x5c0272fffe285808_linkquality_zigbee2mqtt","unit_of_measurement":"lqi","value_template":"{{ value_json.linkquality }}"}' >Aug 20 17:14:38 pi hass-zigbee2mqtt[7556]: Zigbee2MQTT:info 2022-08-20 19:14:38: MQTT publish: topic 'homeassistant/switch/0x60a423fffe07c407/switch/config', payload '{"availability":[{"topic":"zigbee2mqtt/bridge/state"}],"command_topic":"zigbee2mqtt/0x60a423fffe07c407/set","device":{"identifiers":["zigbee2mqtt_0x60a423fffe07c407"],"manufacturer":"Lidl","model":"Silvercrest smart plug (EU, CH, FR, BS, DK) (HG06337)","name":"0x60a423fffe07c407"},"json_attributes_topic":"zigbee2mqtt/0x60a423fffe07c407","name":"0x60a423fffe07c407","payload_off":"OFF","payload_on":"ON","state_topic":"zigbee2mqtt/0x60a423fffe07c407","unique_id":"0x60a423fffe07c407_switch_zigbee2mqtt","value_template":"{{ value_json.state }}"}' >Aug 20 17:14:38 pi hass-zigbee2mqtt[7556]: Zigbee2MQTT:info 2022-08-20 19:14:38: MQTT publish: topic 'homeassistant/sensor/0x60a423fffe07c407/linkquality/config', payload '{"availability":[{"topic":"zigbee2mqtt/bridge/state"}],"device":{"identifiers":["zigbee2mqtt_0x60a423fffe07c407"],"manufacturer":"Lidl","model":"Silvercrest smart plug (EU, CH, FR, BS, DK) (HG06337)","name":"0x60a423fffe07c407"},"enabled_by_default":false,"entity_category":"diagnostic","icon":"mdi:signal","json_attributes_topic":"zigbee2mqtt/0x60a423fffe07c407","name":"0x60a423fffe07c407 
linkquality","state_class":"measurement","state_topic":"zigbee2mqtt/0x60a423fffe07c407","unique_id":"0x60a423fffe07c407_linkquality_zigbee2mqtt","unit_of_measurement":"lqi","value_template":"{{ value_json.linkquality }}"}' >Aug 20 17:14:38 pi hass-zigbee2mqtt[7556]: Zigbee2MQTT:info 2022-08-20 19:14:38: MQTT publish: topic 'homeassistant/light/0x0c4314fffe5a707a/light/config', payload '{"availability":[{"topic":"zigbee2mqtt/bridge/state"}],"brightness":true,"brightness_scale":254,"color_mode":true,"command_topic":"zigbee2mqtt/0x0c4314fffe5a707a/set","device":{"identifiers":["zigbee2mqtt_0x0c4314fffe5a707a"],"manufacturer":"Lidl","model":"Livarno Lux E27 bulb RGB (HG07834C)","name":"0x0c4314fffe5a707a"},"effect":true,"effect_list":["blink","breathe","okay","channel_change","finish_effect","stop_effect"],"json_attributes_topic":"zigbee2mqtt/0x0c4314fffe5a707a","max_mireds":500,"min_mireds":153,"name":"0x0c4314fffe5a707a","schema":"json","state_topic":"zigbee2mqtt/0x0c4314fffe5a707a","supported_color_modes":["xy","color_temp"],"unique_id":"0x0c4314fffe5a707a_light_zigbee2mqtt"}' >Aug 20 17:14:38 pi hass-zigbee2mqtt[7556]: Zigbee2MQTT:info 2022-08-20 19:14:38: MQTT publish: topic 'homeassistant/sensor/0x0c4314fffe5a707a/linkquality/config', payload '{"availability":[{"topic":"zigbee2mqtt/bridge/state"}],"device":{"identifiers":["zigbee2mqtt_0x0c4314fffe5a707a"],"manufacturer":"Lidl","model":"Livarno Lux E27 bulb RGB (HG07834C)","name":"0x0c4314fffe5a707a"},"enabled_by_default":false,"entity_category":"diagnostic","icon":"mdi:signal","json_attributes_topic":"zigbee2mqtt/0x0c4314fffe5a707a","name":"0x0c4314fffe5a707a linkquality","state_class":"measurement","state_topic":"zigbee2mqtt/0x0c4314fffe5a707a","unique_id":"0x0c4314fffe5a707a_linkquality_zigbee2mqtt","unit_of_measurement":"lqi","value_template":"{{ value_json.linkquality }}"}' >Aug 20 17:14:38 pi hass-zigbee2mqtt[7556]: Zigbee2MQTT:info 2022-08-20 19:14:38: MQTT publish: topic 
'homeassistant/light/0x0c4314fffe35dc29/light/config', payload '{"availability":[{"topic":"zigbee2mqtt/bridge/state"}],"brightness":true,"brightness_scale":254,"color_mode":true,"command_topic":"zigbee2mqtt/0x0c4314fffe35dc29/set","device":{"identifiers":["zigbee2mqtt_0x0c4314fffe35dc29"],"manufacturer":"Lidl","model":"Livarno Lux GU10 spot RGB (HG07834A)","name":"0x0c4314fffe35dc29"},"effect":true,"effect_list":["blink","breathe","okay","channel_change","finish_effect","stop_effect"],"json_attributes_topic":"zigbee2mqtt/0x0c4314fffe35dc29","max_mireds":500,"min_mireds":153,"name":"0x0c4314fffe35dc29","schema":"json","state_topic":"zigbee2mqtt/0x0c4314fffe35dc29","supported_color_modes":["xy","color_temp"],"unique_id":"0x0c4314fffe35dc29_light_zigbee2mqtt"}' >Aug 20 17:14:38 pi hass-zigbee2mqtt[7556]: Zigbee2MQTT:info 2022-08-20 19:14:38: MQTT publish: topic 'homeassistant/sensor/0x0c4314fffe35dc29/linkquality/config', payload '{"availability":[{"topic":"zigbee2mqtt/bridge/state"}],"device":{"identifiers":["zigbee2mqtt_0x0c4314fffe35dc29"],"manufacturer":"Lidl","model":"Livarno Lux GU10 spot RGB (HG07834A)","name":"0x0c4314fffe35dc29"},"enabled_by_default":false,"entity_category":"diagnostic","icon":"mdi:signal","json_attributes_topic":"zigbee2mqtt/0x0c4314fffe35dc29","name":"0x0c4314fffe35dc29 linkquality","state_class":"measurement","state_topic":"zigbee2mqtt/0x0c4314fffe35dc29","unique_id":"0x0c4314fffe35dc29_linkquality_zigbee2mqtt","unit_of_measurement":"lqi","value_template":"{{ value_json.linkquality }}"}' >Aug 20 17:14:38 pi hass-zigbee2mqtt[7556]: Zigbee2MQTT:info 2022-08-20 19:14:38: MQTT publish: topic 'homeassistant/light/0x60a423fffe23d626/light/config', payload '{"availability":[{"topic":"zigbee2mqtt/bridge/state"}],"brightness":true,"brightness_scale":254,"color_mode":true,"command_topic":"zigbee2mqtt/0x60a423fffe23d626/set","device":{"identifiers":["zigbee2mqtt_0x60a423fffe23d626"],"manufacturer":"Lidl","model":"Livarno Lux GU10 spot RGB 
(HG07834A)","name":"0x60a423fffe23d626"},"effect":true,"effect_list":["blink","breathe","okay","channel_change","finish_effect","stop_effect"],"json_attributes_topic":"zigbee2mqtt/0x60a423fffe23d626","max_mireds":500,"min_mireds":153,"name":"0x60a423fffe23d626","schema":"json","state_topic":"zigbee2mqtt/0x60a423fffe23d626","supported_color_modes":["xy","color_temp"],"unique_id":"0x60a423fffe23d626_light_zigbee2mqtt"}' >Aug 20 17:14:38 pi hass-zigbee2mqtt[7556]: Zigbee2MQTT:info 2022-08-20 19:14:38: MQTT publish: topic 'homeassistant/sensor/0x60a423fffe23d626/linkquality/config', payload '{"availability":[{"topic":"zigbee2mqtt/bridge/state"}],"device":{"identifiers":["zigbee2mqtt_0x60a423fffe23d626"],"manufacturer":"Lidl","model":"Livarno Lux GU10 spot RGB (HG07834A)","name":"0x60a423fffe23d626"},"enabled_by_default":false,"entity_category":"diagnostic","icon":"mdi:signal","json_attributes_topic":"zigbee2mqtt/0x60a423fffe23d626","name":"0x60a423fffe23d626 linkquality","state_class":"measurement","state_topic":"zigbee2mqtt/0x60a423fffe23d626","unique_id":"0x60a423fffe23d626_linkquality_zigbee2mqtt","unit_of_measurement":"lqi","value_template":"{{ value_json.linkquality }}"}' >Aug 20 17:14:38 pi hass-zigbee2mqtt[7556]: Zigbee2MQTT:info 2022-08-20 19:14:38: MQTT publish: topic 'homeassistant/light/0x0c4314fffe76cf26/light/config', payload '{"availability":[{"topic":"zigbee2mqtt/bridge/state"}],"brightness":true,"brightness_scale":254,"color_mode":true,"command_topic":"zigbee2mqtt/0x0c4314fffe76cf26/set","device":{"identifiers":["zigbee2mqtt_0x0c4314fffe76cf26"],"manufacturer":"Lidl","model":"Livarno Lux GU10 spot RGB 
(HG07834A)","name":"0x0c4314fffe76cf26"},"effect":true,"effect_list":["blink","breathe","okay","channel_change","finish_effect","stop_effect"],"json_attributes_topic":"zigbee2mqtt/0x0c4314fffe76cf26","max_mireds":500,"min_mireds":153,"name":"0x0c4314fffe76cf26","schema":"json","state_topic":"zigbee2mqtt/0x0c4314fffe76cf26","supported_color_modes":["xy","color_temp"],"unique_id":"0x0c4314fffe76cf26_light_zigbee2mqtt"}' >Aug 20 17:14:38 pi hass-zigbee2mqtt[7556]: Zigbee2MQTT:info 2022-08-20 19:14:38: MQTT publish: topic 'homeassistant/sensor/0x0c4314fffe76cf26/linkquality/config', payload '{"availability":[{"topic":"zigbee2mqtt/bridge/state"}],"device":{"identifiers":["zigbee2mqtt_0x0c4314fffe76cf26"],"manufacturer":"Lidl","model":"Livarno Lux GU10 spot RGB (HG07834A)","name":"0x0c4314fffe76cf26"},"enabled_by_default":false,"entity_category":"diagnostic","icon":"mdi:signal","json_attributes_topic":"zigbee2mqtt/0x0c4314fffe76cf26","name":"0x0c4314fffe76cf26 linkquality","state_class":"measurement","state_topic":"zigbee2mqtt/0x0c4314fffe76cf26","unique_id":"0x0c4314fffe76cf26_linkquality_zigbee2mqtt","unit_of_measurement":"lqi","value_template":"{{ value_json.linkquality }}"}' >Aug 20 17:14:38 pi hass-zigbee2mqtt[7556]: Zigbee2MQTT:info 2022-08-20 19:14:38: MQTT publish: topic 'homeassistant/light/0x0c4314fffe881a63/light/config', payload '{"availability":[{"topic":"zigbee2mqtt/bridge/state"}],"brightness":true,"brightness_scale":254,"color_mode":true,"command_topic":"zigbee2mqtt/0x0c4314fffe881a63/set","device":{"identifiers":["zigbee2mqtt_0x0c4314fffe881a63"],"manufacturer":"Lidl","model":"Livarno Lux GU10 spot RGB 
(HG07834A)","name":"0x0c4314fffe881a63"},"effect":true,"effect_list":["blink","breathe","okay","channel_change","finish_effect","stop_effect"],"json_attributes_topic":"zigbee2mqtt/0x0c4314fffe881a63","max_mireds":500,"min_mireds":153,"name":"0x0c4314fffe881a63","schema":"json","state_topic":"zigbee2mqtt/0x0c4314fffe881a63","supported_color_modes":["xy","color_temp"],"unique_id":"0x0c4314fffe881a63_light_zigbee2mqtt"}' >Aug 20 17:14:38 pi hass-zigbee2mqtt[7556]: Zigbee2MQTT:info 2022-08-20 19:14:38: MQTT publish: topic 'homeassistant/sensor/0x0c4314fffe881a63/linkquality/config', payload '{"availability":[{"topic":"zigbee2mqtt/bridge/state"}],"device":{"identifiers":["zigbee2mqtt_0x0c4314fffe881a63"],"manufacturer":"Lidl","model":"Livarno Lux GU10 spot RGB (HG07834A)","name":"0x0c4314fffe881a63"},"enabled_by_default":false,"entity_category":"diagnostic","icon":"mdi:signal","json_attributes_topic":"zigbee2mqtt/0x0c4314fffe881a63","name":"0x0c4314fffe881a63 linkquality","state_class":"measurement","state_topic":"zigbee2mqtt/0x0c4314fffe881a63","unique_id":"0x0c4314fffe881a63_linkquality_zigbee2mqtt","unit_of_measurement":"lqi","value_template":"{{ value_json.linkquality }}"}' >Aug 20 17:14:38 pi hass-zigbee2mqtt[7556]: Zigbee2MQTT:info 2022-08-20 19:14:38: MQTT publish: topic 'zigbee2mqtt/bridge/state', payload 'online' >Aug 20 17:14:38 pi hass-zigbee2mqtt[7556]: Zigbee2MQTT:info 2022-08-20 19:14:38: MQTT publish: topic 'zigbee2mqtt/0x60a423fffe04e361', payload '{"linkquality":null,"state":"OFF"}' >Aug 20 17:14:38 pi hass-zigbee2mqtt[7556]: Zigbee2MQTT:info 2022-08-20 19:14:38: MQTT publish: topic 'zigbee2mqtt/0x5c0272fffe285808', payload '{"linkquality":null,"state":"OFF"}' >Aug 20 17:14:38 pi hass-zigbee2mqtt[7556]: Zigbee2MQTT:info 2022-08-20 19:14:38: MQTT publish: topic 'zigbee2mqtt/0x60a423fffe07c407', payload '{"linkquality":null,"state":"OFF"}' >Aug 20 17:14:38 pi hass-zigbee2mqtt[7556]: Zigbee2MQTT:info 2022-08-20 19:14:38: MQTT publish: topic 
'zigbee2mqtt/0x0c4314fffe35dc29', payload '{"brightness":8,"color":{"x":0.488,"y":0.4148},"color_mode":"color_temp","color_temp":419,"linkquality":null,"state":"OFF"}' >Aug 20 17:14:38 pi hass-zigbee2mqtt[7556]: Zigbee2MQTT:info 2022-08-20 19:14:38: MQTT publish: topic 'zigbee2mqtt/0x60a423fffe23d626', payload '{"brightness":8,"color":{"x":0.488,"y":0.4148},"color_mode":"color_temp","color_temp":419,"linkquality":null,"state":"OFF"}' >Aug 20 17:14:38 pi hass-zigbee2mqtt[7556]: Zigbee2MQTT:info 2022-08-20 19:14:38: MQTT publish: topic 'zigbee2mqtt/0x0c4314fffe76cf26', payload '{"brightness":8,"color":{"x":0.488,"y":0.4148},"color_mode":"color_temp","color_temp":419,"linkquality":null,"state":"OFF"}' >Aug 20 17:14:38 pi hass-zigbee2mqtt[7556]: Zigbee2MQTT:info 2022-08-20 19:14:38: MQTT publish: topic 'zigbee2mqtt/0x0c4314fffe881a63', payload '{"brightness":8,"color":{"x":0.488,"y":0.4148},"color_mode":"color_temp","color_temp":419,"linkquality":null,"state":"OFF"}' >Aug 20 17:14:40 pi hass-app[9061]: 2022-08-20 19:14:40.879 WARNING (Recorder) [homeassistant.components.recorder.util] The system could not validate that the sqlite3 database at //config/home-assistant_v2.db was shutdown cleanly >Aug 20 17:14:41 pi hass-app[9061]: 2022-08-20 19:14:41.165 WARNING (Recorder) [homeassistant.components.recorder.util] Ended unfinished session (id=502 from 2022-08-20 01:59:11.326658) >Aug 20 17:14:41 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-:1.2-org.fedoraproject.SetroubleshootPrivileged@2 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:14:41 pi systemd[1]: dbus-:1.2-org.fedoraproject.SetroubleshootPrivileged@2.service: Deactivated successfully. >Aug 20 17:14:41 pi systemd[1]: dbus-:1.2-org.fedoraproject.SetroubleshootPrivileged@2.service: Consumed 2.397s CPU time. 
>Aug 20 17:14:42 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-:1.2-org.fedoraproject.Setroubleshootd@2 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:14:42 pi systemd[1]: dbus-:1.2-org.fedoraproject.Setroubleshootd@2.service: Deactivated successfully. >Aug 20 17:14:42 pi systemd[1]: dbus-:1.2-org.fedoraproject.Setroubleshootd@2.service: Consumed 6.167s CPU time. >Aug 20 17:14:42 pi kernel: filter_IN_public_REJECT: IN=eth0 OUT= MAC= SRC=10.0.3.10 DST=239.255.255.250 LEN=118 TOS=0x00 PREC=0x00 TTL=2 ID=11389 DF PROTO=UDP SPT=33477 DPT=1900 LEN=98 >Aug 20 17:14:42 pi kernel: filter_IN_public_REJECT: IN=eth0 OUT= MAC= SRC=fe80:0000:0000:0000:4ec7:1f5e:5274:16ba DST=ff02:0000:0000:0000:0000:0000:0000:000c LEN=134 TC=0 HOPLIMIT=2 FLOWLBL=968519 PROTO=UDP SPT=35818 DPT=1900 LEN=94 >Aug 20 17:14:42 pi kernel: filter_IN_public_REJECT: IN=eth0 OUT= MAC= SRC=10.0.3.10 DST=255.255.255.255 LEN=118 TOS=0x00 PREC=0x00 TTL=64 ID=3208 DF PROTO=UDP SPT=33477 DPT=1900 LEN=98 >Aug 20 17:14:43 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:14:43 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:14:43 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 12. >Aug 20 17:14:43 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:14:43 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon... 
>Aug 20 17:14:43 pi dbus-parsec[11568]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist))) >Aug 20 17:14:43 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE >Aug 20 17:14:43 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'. >Aug 20 17:14:43 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:14:43 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' >Aug 20 17:14:44 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=rpm-ostreed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:14:44 pi systemd[1]: rpm-ostreed.service: Deactivated successfully. >Aug 20 17:14:44 pi systemd[1]: rpm-ostreed.service: Consumed 3.629s CPU time. >Aug 20 17:14:45 pi proxy[6994]: [8/20/2022] [5:14:45 PM] [Setup ] › ℹ info Wrote JWT key pair to config file: /app/config/production.json >Aug 20 17:14:45 pi proxy[6994]: [8/20/2022] [5:14:45 PM] [Setup ] › ℹ info Logrotate Timer initialized >Aug 20 17:14:45 pi proxy[6994]: [8/20/2022] [5:14:45 PM] [Setup ] › ℹ info Logrotate completed. >Aug 20 17:14:45 pi proxy[6994]: [8/20/2022] [5:14:45 PM] [IP Ranges] › ℹ info Fetching IP Ranges from online services... >Aug 20 17:14:45 pi proxy[6994]: [8/20/2022] [5:14:45 PM] [IP Ranges] › ℹ info Fetching https://ip-ranges.amazonaws.com/ip-ranges.json >Aug 20 17:14:45 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' >Aug 20 17:14:45 pi systemd[1]: Started f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service - /usr/bin/podman healthcheck run f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e. >Aug 20 17:14:45 pi podman[11571]: 2022-08-20 17:14:45.827866143 +0000 UTC m=+0.316697235 container exec f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, org.opencontainers.image.description=Pi-hole in a docker container, org.opencontainers.image.source=https://github.com/pi-hole/docker-pi-hole, org.opencontainers.image.url=https://github.com/pi-hole/docker-pi-hole, PODMAN_SYSTEMD_UNIT=container-pihole.service, org.opencontainers.image.licenses=, org.opencontainers.image.revision=7e69551be1b76d175fffc1b8c53733e74ee82520, org.opencontainers.image.created=2022-07-10T13:08:55.626Z, org.opencontainers.image.title=docker-pi-hole, io.containers.autoupdate=registry, org.opencontainers.image.version=2022.07.1) >Aug 20 17:14:45 pi gitea-app[10857]: 2022/08/20 19:14:45 [63011685] router: completed GET /Danacus/university-stuff/src/commit/a3382ccbb2c61efe658e1a9719b94f15bdf2733d/IW/FlowFree/FlowFree.pdb for 10.88.0.1:57140, 200 OK in 352.7ms @ repo/view.go:732(repo.Home) >Aug 20 17:14:45 pi podman[11571]: 2022-08-20 17:14:45.909533554 +0000 UTC m=+0.398364683 container exec_died f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, execID=687916c77d478ac15672a9e2b1dd5d985c1e9fd8f82f9d2ea1ad1b1b12078fa7) >Aug 20 17:14:45 pi proxy[6994]: [8/20/2022] [5:14:45 PM] [IP Ranges] › ℹ info Fetching https://www.cloudflare.com/ips-v4 >Aug 20 17:14:46 pi systemd[1]: f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service: Deactivated successfully. 
>Aug 20 17:14:45 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:14:46 pi proxy[6994]: [8/20/2022] [5:14:46 PM] [IP Ranges] › ℹ info Fetching https://www.cloudflare.com/ips-v6 >Aug 20 17:14:46 pi proxy[6994]: [8/20/2022] [5:14:46 PM] [SSL ] › ℹ info Let's Encrypt Renewal Timer initialized >Aug 20 17:14:46 pi proxy[6994]: [8/20/2022] [5:14:46 PM] [SSL ] › ℹ info Renewing SSL certs close to expiry... >Aug 20 17:14:46 pi proxy[6994]: [8/20/2022] [5:14:46 PM] [IP Ranges] › ℹ info IP Ranges Renewal Timer initialized >Aug 20 17:14:46 pi proxy[6994]: [8/20/2022] [5:14:46 PM] [Global ] › ℹ info Backend PID 243 listening on port 3000 ... >Aug 20 17:14:52 pi hass-app[9061]: 2022-08-20 19:14:52.338 WARNING (MainThread) [homeassistant.components.mqtt.mixins] Manually configured MQTT sensor(s) found under platform key 'sensor', please move to the mqtt integration key, see https://www.home-assistant.io/integrations/sensor.mqtt/#new_format >Aug 20 17:14:52 pi hass-mosquitto[6718]: 1661015692: New connection from 127.0.0.1:41889 on port 1883. >Aug 20 17:14:52 pi hass-mosquitto[6718]: 1661015692: New client connected from 127.0.0.1:41889 as 6TOgaQ1ortSicJqVmfdI98 (p2, c1, k60). >Aug 20 17:14:53 pi hass-app[9061]: 2022-08-20 19:14:53.256 WARNING (MainThread) [homeassistant.helpers.frame] Detected integration that uses deprecated `async_get_registry` to access device registry, use async_get instead. Please report issue to the custom integration author for scheduler using this method at custom_components/scheduler/__init__.py, line 49: device_registry = await dr.async_get_registry(hass) >Aug 20 17:14:53 pi systemd[1]: Starting zezere_ignition.service - Run Ignition for Zezere... 
>Aug 20 17:14:53 pi zezere-ignition[11606]: INFO : Ignition 2.14.0 >Aug 20 17:14:53 pi zezere-ignition[11606]: INFO : Stage: fetch >Aug 20 17:14:53 pi zezere-ignition[11606]: INFO : no config dir at "/usr/lib/ignition/base.d" >Aug 20 17:14:53 pi zezere-ignition[11606]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file" >Aug 20 17:14:53 pi zezere-ignition[11606]: DEBUG : parsed url from cmdline: "" >Aug 20 17:14:53 pi zezere-ignition[11606]: INFO : no config URL provided >Aug 20 17:14:53 pi zezere-ignition[11606]: INFO : reading system config file "/usr/lib/ignition/user.ign" >Aug 20 17:14:53 pi zezere-ignition[11606]: INFO : no config at "/usr/lib/ignition/user.ign" >Aug 20 17:14:53 pi zezere-ignition[11606]: INFO : using config file at "/tmp/zezere-ignition-config-p1qg3ms3.ign" >Aug 20 17:14:53 pi zezere-ignition[11606]: DEBUG : parsing config with SHA512: 4202432df9c76a84fcaf0bb3dd81d9aec055ae366dee2bee1f88d5233bae27ac14ab0d1f9a3fbfa25e382806a76e601d9375479e30d495e751038695e77877a2 >Aug 20 17:14:53 pi hass-app[9061]: 2022-08-20 19:14:53.803 ERROR (MainThread) [homeassistant.components.webostv] The 'webostv' option near /config/configuration.yaml:2 has been removed, please remove it from your configuration >Aug 20 17:14:53 pi zezere-ignition[11606]: INFO : GET https://provision.fedoraproject.org/netboot/aarch64/ignition/dc:a6:32:38:46:e7: attempt #1 >Aug 20 17:14:53 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:14:53 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:14:53 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 13. 
>Aug 20 17:14:53 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:14:53 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon... >Aug 20 17:14:54 pi dbus-parsec[11613]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist))) >Aug 20 17:14:54 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE >Aug 20 17:14:54 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'. >Aug 20 17:14:54 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:14:54 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' >Aug 20 17:14:54 pi proxy-internal[7035]: [8/20/2022] [5:14:54 PM] [Setup ] › ℹ info Wrote JWT key pair to config file: /app/config/production.json >Aug 20 17:14:54 pi proxy-internal[7035]: [8/20/2022] [5:14:54 PM] [Setup ] › ℹ info Logrotate Timer initialized >Aug 20 17:14:54 pi proxy-internal[7035]: [8/20/2022] [5:14:54 PM] [Setup ] › ℹ info Logrotate completed. >Aug 20 17:14:54 pi proxy-internal[7035]: [8/20/2022] [5:14:54 PM] [IP Ranges] › ℹ info Fetching IP Ranges from online services... 
>Aug 20 17:14:54 pi proxy-internal[7035]: [8/20/2022] [5:14:54 PM] [IP Ranges] › ℹ info Fetching https://ip-ranges.amazonaws.com/ip-ranges.json >Aug 20 17:14:54 pi proxy-internal[7035]: [8/20/2022] [5:14:54 PM] [IP Ranges] › ℹ info Fetching https://www.cloudflare.com/ips-v4 >Aug 20 17:14:54 pi proxy-internal[7035]: [8/20/2022] [5:14:54 PM] [IP Ranges] › ℹ info Fetching https://www.cloudflare.com/ips-v6 >Aug 20 17:14:54 pi zezere-ignition[11606]: INFO : GET result: Not Found >Aug 20 17:14:54 pi zezere-ignition[11606]: WARNING : failed to fetch config: resource not found >Aug 20 17:14:54 pi zezere-ignition[11606]: CRITICAL : failed to acquire config: resource not found >Aug 20 17:14:54 pi zezere-ignition[11606]: CRITICAL : Ignition failed: resource not found >Aug 20 17:14:54 pi proxy-internal[7035]: [8/20/2022] [5:14:54 PM] [SSL ] › ℹ info Let's Encrypt Renewal Timer initialized >Aug 20 17:14:54 pi proxy-internal[7035]: [8/20/2022] [5:14:54 PM] [SSL ] › ℹ info Renewing SSL certs close to expiry... >Aug 20 17:14:54 pi zezere-ignition[11621]: INFO : Ignition 2.14.0 >Aug 20 17:14:54 pi zezere-ignition[11621]: INFO : Stage: disks >Aug 20 17:14:54 pi zezere-ignition[11621]: INFO : no config dir at "/usr/lib/ignition/base.d" >Aug 20 17:14:54 pi zezere-ignition[11621]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file" >Aug 20 17:14:54 pi zezere-ignition[11621]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory >Aug 20 17:14:54 pi zezere-ignition[11621]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory >Aug 20 17:14:54 pi proxy-internal[7035]: [8/20/2022] [5:14:54 PM] [IP Ranges] › ℹ info IP Ranges Renewal Timer initialized >Aug 20 17:14:54 pi proxy[6994]: [8/20/2022] [5:14:54 PM] [Nginx ] › ℹ info Reloading Nginx >Aug 20 17:14:55 pi zezere-ignition[11627]: INFO : Ignition 2.14.0 >Aug 20 17:14:55 pi zezere-ignition[11627]: INFO : Stage: mount >Aug 20 17:14:55 pi zezere-ignition[11627]: INFO : no config dir 
at "/usr/lib/ignition/base.d" >Aug 20 17:14:55 pi zezere-ignition[11627]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file" >Aug 20 17:14:55 pi zezere-ignition[11627]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory >Aug 20 17:14:55 pi zezere-ignition[11627]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory >Aug 20 17:14:55 pi proxy-internal[7035]: [8/20/2022] [5:14:55 PM] [Global ] › ℹ info Backend PID 243 listening on port 3000 ... >Aug 20 17:14:55 pi zezere-ignition[11636]: INFO : Ignition 2.14.0 >Aug 20 17:14:55 pi zezere-ignition[11636]: INFO : Stage: files >Aug 20 17:14:55 pi zezere-ignition[11636]: INFO : no config dir at "/usr/lib/ignition/base.d" >Aug 20 17:14:55 pi zezere-ignition[11636]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file" >Aug 20 17:14:55 pi zezere-ignition[11636]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory >Aug 20 17:14:55 pi zezere-ignition[11636]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory >Aug 20 17:14:55 pi zezere-ignition[11644]: INFO : Ignition 2.14.0 >Aug 20 17:14:55 pi zezere-ignition[11644]: INFO : Stage: umount >Aug 20 17:14:55 pi zezere-ignition[11644]: INFO : no config dir at "/usr/lib/ignition/base.d" >Aug 20 17:14:55 pi zezere-ignition[11644]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file" >Aug 20 17:14:55 pi zezere-ignition[11644]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory >Aug 20 17:14:55 pi zezere-ignition[11644]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory >Aug 20 17:14:55 pi zezere-ignition[11605]: Running stage fetch with config file /tmp/zezere-ignition-config-p1qg3ms3.ign >Aug 20 17:14:55 pi zezere-ignition[11605]: Running stage disks with config file /tmp/zezere-ignition-config-p1qg3ms3.ign >Aug 20 17:14:55 pi zezere-ignition[11605]: Running 
stage mount with config file /tmp/zezere-ignition-config-p1qg3ms3.ign >Aug 20 17:14:55 pi zezere-ignition[11605]: Running stage files with config file /tmp/zezere-ignition-config-p1qg3ms3.ign >Aug 20 17:14:55 pi zezere-ignition[11605]: Running stage umount with config file /tmp/zezere-ignition-config-p1qg3ms3.ign >Aug 20 17:14:55 pi systemd[1]: zezere_ignition.service: Deactivated successfully. >Aug 20 17:14:55 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=zezere_ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:14:55 pi systemd[1]: Finished zezere_ignition.service - Run Ignition for Zezere. >Aug 20 17:14:55 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=zezere_ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:14:55 pi proxy[6994]: [8/20/2022] [5:14:55 PM] [SSL ] › ℹ info Renew Complete >Aug 20 17:14:59 pi proxy-internal[7035]: [8/20/2022] [5:14:59 PM] [Nginx ] › ℹ info Reloading Nginx >Aug 20 17:14:59 pi proxy-internal[7035]: [8/20/2022] [5:14:59 PM] [SSL ] › ℹ info Renew Complete >Aug 20 17:14:59 pi hass-app[9061]: 2022-08-20 19:14:59.366 WARNING (MainThread) [homeassistant.config_entries] Config entry '10.0.4.9' for daikin integration not ready yet: Server disconnected; Retrying in background >Aug 20 17:14:59 pi hass-app[9061]: 2022-08-20 19:14:59.439 WARNING (MainThread) [homeassistant.config_entries] Config entry '10.0.4.8' for daikin integration not ready yet: Server disconnected; Retrying in background >Aug 20 17:15:02 pi gitea-app[10857]: 2022/08/20 19:15:02 [63011695] router: completed GET /Danacus/university-stuff/src/commit/875a4b71c8132b9ee2eea3b8ea48335ab272ee61/Declaratieve%20Talen/prolog/belichting for 10.88.0.1:50548, 200 OK in 223.2ms @ repo/view.go:732(repo.Home) >Aug 20 17:15:03 pi hass-app[9061]: 2022-08-20 
19:15:03.032 WARNING (MainThread) [custom_components.hacs] You have 'DCSBL/ha-homewizard-energy' installed with HACS this repository has been removed from HACS, please consider removing it. Removal reason (Added to Home Assistant core) >Aug 20 17:15:03 pi kernel: filter_IN_public_REJECT: IN=eth0 OUT= MAC=dc:a6:32:38:46:e7:28:87:ba:2a:e1:ff:08:00 SRC=10.0.4.8 DST=10.0.3.10 LEN=357 TOS=0x00 PREC=0x00 TTL=254 ID=43320 PROTO=UDP SPT=49156 DPT=30000 LEN=337 >Aug 20 17:15:04 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 14. >Aug 20 17:15:04 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:15:04 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:15:04 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:15:04 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon... >Aug 20 17:15:04 pi systemd[1]: Started rpm-ostreed-upgrade-reboot.service - rpm-ostree upgrade and reboot. >Aug 20 17:15:04 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=rpm-ostreed-upgrade-reboot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:15:04 pi dbus-parsec[11852]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist))) >Aug 20 17:15:04 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE >Aug 20 17:15:04 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'. >Aug 20 17:15:04 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. 
>Aug 20 17:15:04 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' >Aug 20 17:15:04 pi kernel: filter_IN_public_REJECT: IN=eth0 OUT= MAC=dc:a6:32:38:46:e7:28:87:ba:2a:e1:ff:08:00 SRC=10.0.4.9 DST=10.0.3.10 LEN=351 TOS=0x00 PREC=0x00 TTL=254 ID=58218 PROTO=UDP SPT=49190 DPT=30000 LEN=331 >Aug 20 17:15:04 pi systemd[1]: Starting rpm-ostreed.service - rpm-ostree System Management Daemon... >Aug 20 17:15:04 pi rpm-ostree[11857]: Reading config file '/etc/rpm-ostreed.conf' >Aug 20 17:15:04 pi rpm-ostree[11857]: In idle state; will auto-exit in 61 seconds >Aug 20 17:15:04 pi systemd[1]: Started rpm-ostreed.service - rpm-ostree System Management Daemon. >Aug 20 17:15:04 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=rpm-ostreed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' >Aug 20 17:15:04 pi rpm-ostree[11857]: client(id:cli dbus:1.93 unit:rpm-ostreed-upgrade-reboot.service uid:0) added; new total=1 >Aug 20 17:15:04 pi rpm-ostree[11857]: Locked sysroot >Aug 20 17:15:04 pi rpm-ostree[11857]: Initiated txn Upgrade for client(id:cli dbus:1.93 unit:rpm-ostreed-upgrade-reboot.service uid:0): /org/projectatomic/rpmostree1/fedora_iot >Aug 20 17:15:04 pi rpm-ostree[11857]: Process [pid: 11853 uid: 0 unit: rpm-ostreed-upgrade-reboot.service] connected to transaction progress >Aug 20 17:15:09 pi gitea-app[10857]: 2022/08/20 19:15:09 [6301169d] router: completed GET /Danacus/university-stuff/src/commit/6b3808900ac3849e64af3fe38cb55ddff18e12df/OGP/.metadata/.mylyn for 10.88.0.1:38732, 200 OK in 68.3ms @ repo/view.go:732(repo.Home) >Aug 20 17:15:09 pi rpm-ostree[11857]: libostree pull from 'fedora-iot' for fedora/stable/aarch64/iot complete > security: GPG: commit > security: SIGN: disabled http: TLS > non-delta: meta: 2 content: 0 > transfer: secs: 5 size: 788 bytes >Aug 20 17:15:10 pi rpm-ostree[11857]: 2 metadata, 0 content objects fetched; 788 B transferred in 5 seconds; 0 bytes content written >Aug 20 17:15:10 pi rpm-ostree[11853]: 2 metadata, 0 content objects fetched; 788 B transferred in 5 seconds; 0 bytes content written >Aug 20 17:15:12 pi hass-app[9061]: 2022-08-20 19:15:12.557 ERROR (MainThread) [pyemby.server] Error fetching Emby data: >Aug 20 17:15:12 pi hass-app[9061]: 2022-08-20 19:15:12.565 ERROR (MainThread) [pyemby.server] Unable to register emby client. 
>Aug 20 17:15:12 pi hass-zigbee2mqtt[7556]: Zigbee2MQTT:info 2022-08-20 19:15:12: MQTT publish: topic 'zigbee2mqtt/0x5c0272fffe285808', payload '{"linkquality":31,"state":"OFF"}'
>Aug 20 17:15:12 pi hass-zigbee2mqtt[7556]: Zigbee2MQTT:info 2022-08-20 19:15:12: MQTT publish: topic 'zigbee2mqtt/0x5c0272fffe285808', payload '{"linkquality":26,"state":"OFF"}'
>Aug 20 17:15:12 pi hass-zigbee2mqtt[7556]: Zigbee2MQTT:info 2022-08-20 19:15:12: MQTT publish: topic 'zigbee2mqtt/0x60a423fffe07c407', payload '{"linkquality":70,"state":"OFF"}'
>Aug 20 17:15:12 pi hass-app[9061]: 2022-08-20 19:15:12.922 ERROR (SyncWorker_5) [homeassistant.components.dhcp] Cannot watch for dhcp packets: [Errno 1] Operation not permitted
>Aug 20 17:15:13 pi hass-zigbee2mqtt[7556]: Zigbee2MQTT:info 2022-08-20 19:15:13: MQTT publish: topic 'zigbee2mqtt/0x60a423fffe04e361', payload '{"linkquality":70,"state":"OFF"}'
>Aug 20 17:15:13 pi hass-zigbee2mqtt[7556]: Zigbee2MQTT:info 2022-08-20 19:15:13: MQTT publish: topic 'zigbee2mqtt/0x60a423fffe04e361', payload '{"linkquality":70,"state":"OFF"}'
>Aug 20 17:15:14 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 15.
>Aug 20 17:15:14 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:15:14 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:15:14 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:15:14 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
>Aug 20 17:15:14 pi dbus-parsec[12058]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
>Aug 20 17:15:14 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
>Aug 20 17:15:14 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
>Aug 20 17:15:14 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:15:14 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
>Aug 20 17:15:16 pi systemd[1]: Started f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service - /usr/bin/podman healthcheck run f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.
>Aug 20 17:15:16 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:15:19 pi podman[12059]: 2022-08-20 17:15:19.518368814 +0000 UTC m=+3.026586326 container exec f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, PODMAN_SYSTEMD_UNIT=container-pihole.service, org.opencontainers.image.created=2022-07-10T13:08:55.626Z, org.opencontainers.image.licenses=, org.opencontainers.image.title=docker-pi-hole, io.containers.autoupdate=registry, org.opencontainers.image.source=https://github.com/pi-hole/docker-pi-hole, org.opencontainers.image.version=2022.07.1, org.opencontainers.image.description=Pi-hole in a docker container, org.opencontainers.image.revision=7e69551be1b76d175fffc1b8c53733e74ee82520, org.opencontainers.image.url=https://github.com/pi-hole/docker-pi-hole)
>Aug 20 17:15:19 pi systemd[2791]: Starting grub-boot-success.service - Mark boot as successful...
>Aug 20 17:15:19 pi podman[12059]: 2022-08-20 17:15:19.6396684 +0000 UTC m=+3.147885671 container exec_died f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, execID=5a71a94ae669b260e0774266113cbe57bb6ef0ca2c4b30407c82967cb70e9001)
>Aug 20 17:15:19 pi systemd[2791]: Finished grub-boot-success.service - Mark boot as successful.
>Aug 20 17:15:19 pi systemd[1]: f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service: Deactivated successfully.
>Aug 20 17:15:19 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:15:19 pi systemd[1]: f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service: Consumed 5.643s CPU time.
>Aug 20 17:15:20 pi rpm-ostree[11853]: Checking out tree 48814d4...done
>Aug 20 17:15:20 pi rpm-ostree[11857]: Librepo version: 1.14.3 with CURL_GLOBAL_ACK_EINTR support (libcurl/7.82.0 OpenSSL/3.0.5 zlib/1.2.11 brotli/1.0.9 libidn2/2.3.3 libpsl/0.21.1 (+libidn2/2.3.2) libssh/0.9.6/openssl/zlib nghttp2/1.46.0 OpenLDAP/2.6.2)
>Aug 20 17:15:21 pi rpm-ostree[11853]: Inactive requests:
>Aug 20 17:15:21 pi rpm-ostree[11853]: catatonit (already provided by catatonit-0.1.7-5.fc36.aarch64)
>Aug 20 17:15:21 pi rpm-ostree[11853]: Enabled rpm-md repositories: updates fedora fedora-cisco-openh264
>Aug 20 17:15:24 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 16.
>Aug 20 17:15:24 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:15:24 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:15:24 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:15:24 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
>Aug 20 17:15:24 pi dbus-parsec[12083]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
>Aug 20 17:15:24 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
>Aug 20 17:15:24 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
>Aug 20 17:15:24 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:15:24 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
>Aug 20 17:15:25 pi systemd[1]: Started c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518.service - /usr/bin/podman healthcheck run c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518.
>Aug 20 17:15:25 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:15:25 pi podman[12086]: 2022-08-20 17:15:25.770767363 +0000 UTC m=+0.261291325 container exec c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 (image=docker.io/vaultwarden/server:latest, name=vaultwarden-server, PODMAN_SYSTEMD_UNIT=container-vaultwarden-server.service, org.opencontainers.image.url=https://hub.docker.com/r/vaultwarden/server, io.balena.qemu.version=7.0.0+balena1-aarch64, org.opencontainers.image.created=2022-07-27T18:44:18+00:00, org.opencontainers.image.documentation=https://github.com/dani-garcia/vaultwarden/wiki, org.opencontainers.image.version=1.25.2, io.balena.architecture=aarch64, io.containers.autoupdate=registry, org.opencontainers.image.licenses=GPL-3.0-only, org.opencontainers.image.revision=ce9d93003cd37a79edba1ba830a6c6d3fa22c2c8, org.opencontainers.image.source=https://github.com/dani-garcia/vaultwarden)
>Aug 20 17:15:25 pi podman[12086]: 2022-08-20 17:15:25.879633538 +0000 UTC m=+0.370157519 container exec_died c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 (image=docker.io/vaultwarden/server:latest, name=vaultwarden-server, execID=5873ade15a239ee52646d8add2998ce63528697e76c031912ce3ea6f1a18525e)
>Aug 20 17:15:25 pi systemd[1]: c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518.service: Deactivated successfully.
>Aug 20 17:15:25 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:15:32 pi rpm-ostree[11853]: Importing rpm-md...done
>Aug 20 17:15:32 pi rpm-ostree[11853]: rpm-md repo 'updates' (cached); generated: 2022-08-20T01:39:05Z solvables: 19466
>Aug 20 17:15:32 pi rpm-ostree[11853]: rpm-md repo 'fedora' (cached); generated: 2022-05-04T21:15:55Z solvables: 58687
>Aug 20 17:15:32 pi rpm-ostree[11853]: rpm-md repo 'fedora-cisco-openh264' (cached); generated: 2022-04-07T16:52:38Z solvables: 4
>Aug 20 17:15:32 pi rpm-ostree[11857]: Preparing pkg txn; enabled repos: ['updates', 'fedora', 'fedora-cisco-openh264'] solvables: 78157
>Aug 20 17:15:32 pi rpm-ostree[11853]: Resolving dependencies...done
>Aug 20 17:15:32 pi rpm-ostree[11857]: Txn Upgrade on /org/projectatomic/rpmostree1/fedora_iot failed: Could not depsolve transaction; 1 problem detected:
> Problem: conflicting requests
> - package podman-docker-4:4.2.0-2.fc36.noarch requires podman = 4:4.2.0-2.fc36, but none of the providers can be installed
> - package podman-docker-3:4.0.2-1.fc36.noarch requires podman = 3:4.0.2-1.fc36, but none of the providers can be installed
> - cannot install both podman-4:4.2.0-2.fc36.aarch64 and podman-4:4.1.1-3.fc36.aarch64
> - cannot install both podman-3:4.0.2-1.fc36.aarch64 and podman-4:4.1.1-3.fc36.aarch64
>Aug 20 17:15:32 pi rpm-ostree[11857]: Unlocked sysroot
>Aug 20 17:15:32 pi rpm-ostree[11857]: Process [pid: 11853 uid: 0 unit: rpm-ostreed-upgrade-reboot.service] disconnected from transaction progress
>Aug 20 17:15:32 pi rpm-ostree[11857]: client(id:cli dbus:1.93 unit:rpm-ostreed-upgrade-reboot.service uid:0) vanished; remaining=0
>Aug 20 17:15:32 pi rpm-ostree[11857]: In idle state; will auto-exit in 61 seconds
>Aug 20 17:15:32 pi rpm-ostree[11853]: error: Could not depsolve transaction; 1 problem detected:
>Aug 20 17:15:32 pi rpm-ostree[11853]: Problem: conflicting requests
>Aug 20 17:15:32 pi rpm-ostree[11853]: - package podman-docker-4:4.2.0-2.fc36.noarch requires podman = 4:4.2.0-2.fc36, but none of the providers can be installed
>Aug 20 17:15:32 pi rpm-ostree[11853]: - package podman-docker-3:4.0.2-1.fc36.noarch requires podman = 3:4.0.2-1.fc36, but none of the providers can be installed
>Aug 20 17:15:32 pi rpm-ostree[11853]: - cannot install both podman-4:4.2.0-2.fc36.aarch64 and podman-4:4.1.1-3.fc36.aarch64
>Aug 20 17:15:32 pi rpm-ostree[11853]: - cannot install both podman-3:4.0.2-1.fc36.aarch64 and podman-4:4.1.1-3.fc36.aarch64
>Aug 20 17:15:32 pi systemd[1]: rpm-ostreed-upgrade-reboot.service: Main process exited, code=exited, status=1/FAILURE
>Aug 20 17:15:32 pi systemd[1]: rpm-ostreed-upgrade-reboot.service: Failed with result 'exit-code'.
>Aug 20 17:15:32 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=rpm-ostreed-upgrade-reboot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
>Aug 20 17:15:34 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 17.
>Aug 20 17:15:34 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:15:34 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:15:34 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:15:34 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
>Aug 20 17:15:35 pi dbus-parsec[12112]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
>Aug 20 17:15:35 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
>Aug 20 17:15:35 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
>Aug 20 17:15:35 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:15:35 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
>Aug 20 17:15:35 pi hass-zigbee2mqtt[7556]: Zigbee2MQTT:info 2022-08-20 19:15:35: MQTT publish: topic 'zigbee2mqtt/0x60a423fffe04e361', payload '{"linkquality":68,"state":"OFF"}'
>Aug 20 17:15:35 pi hass-zigbee2mqtt[7556]: Zigbee2MQTT:info 2022-08-20 19:15:35: MQTT publish: topic 'zigbee2mqtt/0x5c0272fffe285808', payload '{"linkquality":26,"state":"OFF"}'
>Aug 20 17:15:35 pi hass-zigbee2mqtt[7556]: Zigbee2MQTT:info 2022-08-20 19:15:35: MQTT publish: topic 'zigbee2mqtt/0x60a423fffe07c407', payload '{"linkquality":70,"state":"OFF"}'
>Aug 20 17:15:35 pi hass-zigbee2mqtt[7556]: Zigbee2MQTT:info 2022-08-20 19:15:35: MQTT publish: topic 'zigbee2mqtt/0x0c4314fffe35dc29', payload '{"brightness":8,"color":{"x":0.488,"y":0.4148},"color_mode":"color_temp","color_temp":419,"linkquality":null,"state":"OFF"}'
>Aug 20 17:15:35 pi hass-zigbee2mqtt[7556]: Zigbee2MQTT:info 2022-08-20 19:15:35: MQTT publish: topic 'zigbee2mqtt/0x60a423fffe23d626', payload '{"brightness":8,"color":{"x":0.488,"y":0.4148},"color_mode":"color_temp","color_temp":419,"linkquality":null,"state":"OFF"}'
>Aug 20 17:15:35 pi hass-zigbee2mqtt[7556]: Zigbee2MQTT:info 2022-08-20 19:15:35: MQTT publish: topic 'zigbee2mqtt/0x0c4314fffe76cf26', payload '{"brightness":8,"color":{"x":0.488,"y":0.4148},"color_mode":"color_temp","color_temp":419,"linkquality":null,"state":"OFF"}'
>Aug 20 17:15:35 pi hass-zigbee2mqtt[7556]: Zigbee2MQTT:info 2022-08-20 19:15:35: MQTT publish: topic 'zigbee2mqtt/0x0c4314fffe881a63', payload '{"brightness":8,"color":{"x":0.488,"y":0.4148},"color_mode":"color_temp","color_temp":419,"linkquality":null,"state":"OFF"}'
>Aug 20 17:15:42 pi gitea-app[10857]: 2022/08/20 19:15:42 [63011678-25] router: slow GET /Danacus/university-stuff/commits/commit/fc327651fc9032b58eebd90044f247d941995b4d/SOCS/hack.out for 10.88.0.1:45230, elapsed 3340.4ms @ repo/repo.go:43(repo.MustBeNotEmpty)
>Aug 20 17:15:45 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 18.
>Aug 20 17:15:45 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:15:45 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:15:45 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:15:45 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
>Aug 20 17:15:45 pi dbus-parsec[12121]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
>Aug 20 17:15:45 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
>Aug 20 17:15:45 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
>Aug 20 17:15:45 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
>Aug 20 17:15:45 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:15:46 pi gitea-app[10857]: 2022/08/20 19:15:46 [630116bb] router: completed GET /Danacus/university-stuff/commits/commit/fc327651fc9032b58eebd90044f247d941995b4d/SOCS/hack.out for 10.88.0.1:45230, 200 OK in 6905.2ms @ repo/commit.go:37(repo.RefCommits)
>Aug 20 17:15:47 pi gitea-app[10857]: 2022/08/20 19:15:47 [630116c3] router: completed GET /Danacus/university-stuff/src/commit/f1a5039ad050d55116ca92235371f77f4242345e/Declaratieve%20Talen/haskell/les1/dist-newstyle/tmp/environment.-176627 for 10.88.0.1:33690, 200 OK in 80.1ms @ repo/view.go:732(repo.Home)
>Aug 20 17:15:50 pi systemd[1]: Started f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service - /usr/bin/podman healthcheck run f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.
>Aug 20 17:15:50 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:15:52 pi gitea-app[10857]: 2022/08/20 19:15:52 [630116c8] router: completed GET /Danacus/university-stuff/action/star?redirect_to=%2FDanacus%2Funiversity-stuff%2Fcommits%2Fcommit%2Fc82e9bca75e5ec2e55ac8dff96303c6da5a97795%2FIW%2Fbeamer%2Fpresentatie.out for 10.88.0.1:33698, 405 Method Not Allowed in 0.5ms @ web/goget.go:21(web.goGet)
>Aug 20 17:15:52 pi podman[12129]: 2022-08-20 17:15:52.858703049 +0000 UTC m=+2.344305651 container exec f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, org.opencontainers.image.created=2022-07-10T13:08:55.626Z, org.opencontainers.image.description=Pi-hole in a docker container, PODMAN_SYSTEMD_UNIT=container-pihole.service, org.opencontainers.image.revision=7e69551be1b76d175fffc1b8c53733e74ee82520, io.containers.autoupdate=registry, org.opencontainers.image.source=https://github.com/pi-hole/docker-pi-hole, org.opencontainers.image.title=docker-pi-hole, org.opencontainers.image.url=https://github.com/pi-hole/docker-pi-hole, org.opencontainers.image.version=2022.07.1, org.opencontainers.image.licenses=)
>Aug 20 17:15:52 pi podman[12129]: 2022-08-20 17:15:52.919640987 +0000 UTC m=+2.405243681 container exec_died f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, execID=7958aeec79cb8cd242b461c2c96e0aa986e0214ba10f7d5792615790a750134c)
>Aug 20 17:15:53 pi systemd[1]: f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service: Deactivated successfully.
>Aug 20 17:15:53 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:15:53 pi systemd[1]: f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service: Consumed 4.575s CPU time.
>Aug 20 17:15:55 pi gitea-app[10857]: 2022/08/20 19:15:55 [630116cb] router: completed GET /Danacus/university-stuff/action/star?redirect_to=%2FDanacus%2Funiversity-stuff%2Fblame%2Fcommit%2F861fab1ebde9763752f62451cb075b31445d4966%2FSOCS%2Foef_2_1.dra for 10.88.0.1:37144, 405 Method Not Allowed in 0.9ms @ web/goget.go:21(web.goGet)
>Aug 20 17:15:55 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 19.
>Aug 20 17:15:55 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:15:55 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:15:55 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:15:55 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
>Aug 20 17:15:55 pi systemd[1]: Starting zezere_ignition.service - Run Ignition for Zezere...
>Aug 20 17:15:55 pi dbus-parsec[12152]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
>Aug 20 17:15:55 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
>Aug 20 17:15:55 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
>Aug 20 17:15:55 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:15:55 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
>Aug 20 17:15:55 pi zezere-ignition[12154]: INFO : Ignition 2.14.0
>Aug 20 17:15:55 pi zezere-ignition[12154]: INFO : Stage: fetch
>Aug 20 17:15:55 pi zezere-ignition[12154]: INFO : no config dir at "/usr/lib/ignition/base.d"
>Aug 20 17:15:55 pi zezere-ignition[12154]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file"
>Aug 20 17:15:55 pi zezere-ignition[12154]: DEBUG : parsed url from cmdline: ""
>Aug 20 17:15:55 pi zezere-ignition[12154]: INFO : no config URL provided
>Aug 20 17:15:55 pi zezere-ignition[12154]: INFO : reading system config file "/usr/lib/ignition/user.ign"
>Aug 20 17:15:55 pi zezere-ignition[12154]: INFO : no config at "/usr/lib/ignition/user.ign"
>Aug 20 17:15:55 pi zezere-ignition[12154]: INFO : using config file at "/tmp/zezere-ignition-config-navk1_zs.ign"
>Aug 20 17:15:55 pi zezere-ignition[12154]: DEBUG : parsing config with SHA512: 4202432df9c76a84fcaf0bb3dd81d9aec055ae366dee2bee1f88d5233bae27ac14ab0d1f9a3fbfa25e382806a76e601d9375479e30d495e751038695e77877a2
>Aug 20 17:15:55 pi zezere-ignition[12154]: INFO : GET https://provision.fedoraproject.org/netboot/aarch64/ignition/dc:a6:32:38:46:e7: attempt #1
>Aug 20 17:15:56 pi zezere-ignition[12154]: INFO : GET result: Not Found
>Aug 20 17:15:56 pi zezere-ignition[12154]: WARNING : failed to fetch config: resource not found
>Aug 20 17:15:56 pi zezere-ignition[12154]: CRITICAL : failed to acquire config: resource not found
>Aug 20 17:15:56 pi zezere-ignition[12154]: CRITICAL : Ignition failed: resource not found
>Aug 20 17:15:56 pi zezere-ignition[12163]: INFO : Ignition 2.14.0
>Aug 20 17:15:56 pi zezere-ignition[12163]: INFO : Stage: disks
>Aug 20 17:15:56 pi zezere-ignition[12163]: INFO : no config dir at "/usr/lib/ignition/base.d"
>Aug 20 17:15:56 pi zezere-ignition[12163]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file"
>Aug 20 17:15:56 pi zezere-ignition[12163]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory
>Aug 20 17:15:56 pi zezere-ignition[12163]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory
>Aug 20 17:15:56 pi zezere-ignition[12170]: INFO : Ignition 2.14.0
>Aug 20 17:15:56 pi zezere-ignition[12170]: INFO : Stage: mount
>Aug 20 17:15:56 pi zezere-ignition[12170]: INFO : no config dir at "/usr/lib/ignition/base.d"
>Aug 20 17:15:56 pi zezere-ignition[12170]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file"
>Aug 20 17:15:56 pi zezere-ignition[12170]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory
>Aug 20 17:15:56 pi zezere-ignition[12170]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory
>Aug 20 17:15:56 pi zezere-ignition[12176]: INFO : Ignition 2.14.0
>Aug 20 17:15:56 pi zezere-ignition[12176]: INFO : Stage: files
>Aug 20 17:15:56 pi zezere-ignition[12176]: INFO : no config dir at "/usr/lib/ignition/base.d"
>Aug 20 17:15:56 pi zezere-ignition[12176]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file"
>Aug 20 17:15:56 pi zezere-ignition[12176]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory
>Aug 20 17:15:56 pi zezere-ignition[12176]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory
>Aug 20 17:15:56 pi zezere-ignition[12182]: INFO : Ignition 2.14.0
>Aug 20 17:15:56 pi zezere-ignition[12182]: INFO : Stage: umount
>Aug 20 17:15:56 pi zezere-ignition[12182]: INFO : no config dir at "/usr/lib/ignition/base.d"
>Aug 20 17:15:56 pi zezere-ignition[12182]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file"
>Aug 20 17:15:56 pi zezere-ignition[12182]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory
>Aug 20 17:15:56 pi zezere-ignition[12182]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory
>Aug 20 17:15:56 pi zezere-ignition[12153]: Running stage fetch with config file /tmp/zezere-ignition-config-navk1_zs.ign
>Aug 20 17:15:56 pi zezere-ignition[12153]: Running stage disks with config file /tmp/zezere-ignition-config-navk1_zs.ign
>Aug 20 17:15:56 pi zezere-ignition[12153]: Running stage mount with config file /tmp/zezere-ignition-config-navk1_zs.ign
>Aug 20 17:15:56 pi zezere-ignition[12153]: Running stage files with config file /tmp/zezere-ignition-config-navk1_zs.ign
>Aug 20 17:15:56 pi zezere-ignition[12153]: Running stage umount with config file /tmp/zezere-ignition-config-navk1_zs.ign
>Aug 20 17:15:56 pi systemd[1]: zezere_ignition.service: Deactivated successfully.
>Aug 20 17:15:56 pi systemd[1]: Finished zezere_ignition.service - Run Ignition for Zezere.
>Aug 20 17:15:56 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=zezere_ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:15:56 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=zezere_ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:16:01 pi gitea-app[10857]: 2022/08/20 19:16:01 [630116d1] router: completed GET /Danacus/university-stuff/src/commit/d11378bf2c5bac5f405b24ec1e0e89c8dcc59451/Computergrafiek/lecture07-acceleration(7).xopp for 10.88.0.1:37146, 200 OK in 89.5ms @ repo/view.go:732(repo.Home)
>Aug 20 17:16:05 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 20.
>Aug 20 17:16:05 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:16:05 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:16:05 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:16:05 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
>Aug 20 17:16:05 pi dbus-parsec[12199]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
>Aug 20 17:16:05 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
>Aug 20 17:16:05 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
>Aug 20 17:16:05 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:16:05 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
>Aug 20 17:16:06 pi gitea-app[10857]: 2022/08/20 19:16:06 [630116d6] router: completed GET /Danacus/dotfiles/src/commit/de1473b8a3677023e398f245f4314877beb28745/Pictures/wallpapers for 10.88.0.1:57320, 200 OK in 76.5ms @ repo/view.go:732(repo.Home)
>Aug 20 17:16:13 pi gitea-app[10857]: 2022/08/20 19:16:13 [630116dd] router: completed GET /Danacus/university-stuff/commits/commit/875a4b71c8132b9ee2eea3b8ea48335ab272ee61/Besturingssystemen/Zitting6/raid/raid.iml for 10.88.0.1:57326, 200 OK in 382.5ms @ repo/commit.go:37(repo.RefCommits)
>Aug 20 17:16:15 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 21.
>Aug 20 17:16:15 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:16:15 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:16:15 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:16:15 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
>Aug 20 17:16:16 pi dbus-parsec[12213]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
>Aug 20 17:16:16 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
>Aug 20 17:16:16 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
>Aug 20 17:16:16 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:16:16 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
>Aug 20 17:16:23 pi systemd[1]: Started f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service - /usr/bin/podman healthcheck run f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.
>Aug 20 17:16:23 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:16:23 pi podman[12215]: 2022-08-20 17:16:23.742253686 +0000 UTC m=+0.238373500 container exec f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, org.opencontainers.image.description=Pi-hole in a docker container, org.opencontainers.image.title=docker-pi-hole, io.containers.autoupdate=registry, org.opencontainers.image.licenses=, org.opencontainers.image.source=https://github.com/pi-hole/docker-pi-hole, org.opencontainers.image.version=2022.07.1, org.opencontainers.image.created=2022-07-10T13:08:55.626Z, org.opencontainers.image.revision=7e69551be1b76d175fffc1b8c53733e74ee82520, org.opencontainers.image.url=https://github.com/pi-hole/docker-pi-hole, PODMAN_SYSTEMD_UNIT=container-pihole.service)
>Aug 20 17:16:23 pi podman[12215]: 2022-08-20 17:16:23.800156289 +0000 UTC m=+0.296276399 container exec_died f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, execID=743ed65693974f6d8e87e5ea8915b9f10450e8a292f015c216817d6d31fe79d6)
>Aug 20 17:16:23 pi systemd[1]: f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service: Deactivated successfully.
>Aug 20 17:16:23 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:16:26 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 22.
>Aug 20 17:16:26 pi systemd[1]: Started c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518.service - /usr/bin/podman healthcheck run c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518.
>Aug 20 17:16:26 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:16:26 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:16:26 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:16:26 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:16:26 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
>Aug 20 17:16:26 pi dbus-parsec[12239]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
>Aug 20 17:16:26 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
>Aug 20 17:16:26 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
>Aug 20 17:16:26 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:16:26 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
>Aug 20 17:16:30 pi podman[12238]: 2022-08-20 17:16:30.144364415 +0000 UTC m=+3.879577399 container exec c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 (image=docker.io/vaultwarden/server:latest, name=vaultwarden-server, PODMAN_SYSTEMD_UNIT=container-vaultwarden-server.service, io.balena.architecture=aarch64, io.balena.qemu.version=7.0.0+balena1-aarch64, io.containers.autoupdate=registry, org.opencontainers.image.created=2022-07-27T18:44:18+00:00, org.opencontainers.image.documentation=https://github.com/dani-garcia/vaultwarden/wiki, org.opencontainers.image.source=https://github.com/dani-garcia/vaultwarden, org.opencontainers.image.url=https://hub.docker.com/r/vaultwarden/server, org.opencontainers.image.revision=ce9d93003cd37a79edba1ba830a6c6d3fa22c2c8, org.opencontainers.image.version=1.25.2, org.opencontainers.image.licenses=GPL-3.0-only)
>Aug 20 17:16:30 pi podman[12238]: 2022-08-20 17:16:30.199568864 +0000 UTC m=+3.934782033 container exec_died c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 (image=docker.io/vaultwarden/server:latest, name=vaultwarden-server, execID=273a535e7c33310996f667e9161952ca56b215dc814337669a7aa292245224e9)
>Aug 20 17:16:30 pi systemd[1]: c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518.service: Deactivated successfully.
>Aug 20 17:16:30 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:16:30 pi systemd[1]: c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518.service: Consumed 7.651s CPU time.
>Aug 20 17:16:33 pi systemd[1]: rpm-ostreed.service: Deactivated successfully.
>Aug 20 17:16:33 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=rpm-ostreed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:16:33 pi systemd[1]: rpm-ostreed.service: Consumed 15.052s CPU time.
>Aug 20 17:16:35 pi gitea-app[10857]: 2022/08/20 19:16:35 [630116f2] router: completed GET /Danacus/dotfiles/src/commit/c7479986701906eb04a215afe042f858f03f1749/.config/river/exit.sh for 10.88.0.1:53508, 200 OK in 81.1ms @ repo/view.go:732(repo.Home)
>Aug 20 17:16:36 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 23.
>Aug 20 17:16:36 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:16:36 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:16:36 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:16:36 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
>Aug 20 17:16:36 pi dbus-parsec[12272]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
>Aug 20 17:16:36 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
>Aug 20 17:16:36 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
>Aug 20 17:16:36 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:16:36 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
>Aug 20 17:16:37 pi gitea-app[10857]: 2022/08/20 19:16:37 [630116f5] router: completed GET /Danacus/university-stuff/src/commit/4eb41779c48bfc58b4fb3468d4d445499b8b8086/OGP/les2/lib/junit-platform-commons-1.4.0-javadoc.jar for 10.88.0.1:53516, 200 OK in 75.3ms @ repo/view.go:732(repo.Home)
>Aug 20 17:16:42 pi kernel: filter_IN_public_REJECT: IN=eth0 OUT= MAC= SRC=10.0.3.10 DST=239.255.255.250 LEN=118 TOS=0x00 PREC=0x00 TTL=2 ID=23229 DF PROTO=UDP SPT=33477 DPT=1900 LEN=98
>Aug 20 17:16:42 pi kernel: filter_IN_public_REJECT: IN=eth0 OUT= MAC= SRC=fe80:0000:0000:0000:4ec7:1f5e:5274:16ba DST=ff02:0000:0000:0000:0000:0000:0000:000c LEN=134 TC=0 HOPLIMIT=2 FLOWLBL=968519 PROTO=UDP SPT=35818 DPT=1900 LEN=94
>Aug 20 17:16:42 pi kernel: filter_IN_public_REJECT: IN=eth0 OUT= MAC= SRC=10.0.3.10 DST=255.255.255.255 LEN=118 TOS=0x00 PREC=0x00 TTL=64 ID=10055 DF PROTO=UDP SPT=33477 DPT=1900 LEN=98
>Aug 20 17:16:46 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 24.
>Aug 20 17:16:46 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:16:46 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:16:46 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:16:46 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
>Aug 20 17:16:46 pi dbus-parsec[12280]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
>Aug 20 17:16:46 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
>Aug 20 17:16:46 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
>Aug 20 17:16:46 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:16:46 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
>Aug 20 17:16:54 pi systemd[1]: Started f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service - /usr/bin/podman healthcheck run f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.
>Aug 20 17:16:54 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:16:55 pi podman[12281]: 2022-08-20 17:16:55.526524007 +0000 UTC m=+0.972794400 container exec f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, org.opencontainers.image.created=2022-07-10T13:08:55.626Z, org.opencontainers.image.version=2022.07.1, PODMAN_SYSTEMD_UNIT=container-pihole.service, org.opencontainers.image.revision=7e69551be1b76d175fffc1b8c53733e74ee82520, io.containers.autoupdate=registry, org.opencontainers.image.description=Pi-hole in a docker container, org.opencontainers.image.title=docker-pi-hole, org.opencontainers.image.url=https://github.com/pi-hole/docker-pi-hole, org.opencontainers.image.licenses=, org.opencontainers.image.source=https://github.com/pi-hole/docker-pi-hole)
>Aug 20 17:16:55 pi systemd[1]: Starting zezere_ignition.service - Run Ignition for Zezere...
>Aug 20 17:16:55 pi podman[12281]: 2022-08-20 17:16:55.590256875 +0000 UTC m=+1.036527342 container exec_died f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, execID=66dfae270abb891a55304c9f28da33113a47d025f66b6de5b9f0f3a72638b695)
>Aug 20 17:16:55 pi systemd[1]: f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service: Deactivated successfully.
>Aug 20 17:16:55 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:16:55 pi systemd[1]: f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service: Consumed 1.776s CPU time.
>Aug 20 17:16:55 pi zezere-ignition[12305]: INFO : Ignition 2.14.0
>Aug 20 17:16:55 pi zezere-ignition[12305]: INFO : Stage: fetch
>Aug 20 17:16:55 pi zezere-ignition[12305]: INFO : no config dir at "/usr/lib/ignition/base.d"
>Aug 20 17:16:55 pi zezere-ignition[12305]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file"
>Aug 20 17:16:55 pi zezere-ignition[12305]: DEBUG : parsed url from cmdline: ""
>Aug 20 17:16:55 pi zezere-ignition[12305]: INFO : no config URL provided
>Aug 20 17:16:55 pi zezere-ignition[12305]: INFO : reading system config file "/usr/lib/ignition/user.ign"
>Aug 20 17:16:55 pi zezere-ignition[12305]: INFO : no config at "/usr/lib/ignition/user.ign"
>Aug 20 17:16:55 pi zezere-ignition[12305]: INFO : using config file at "/tmp/zezere-ignition-config-_o7fft09.ign"
>Aug 20 17:16:55 pi zezere-ignition[12305]: DEBUG : parsing config with SHA512: 4202432df9c76a84fcaf0bb3dd81d9aec055ae366dee2bee1f88d5233bae27ac14ab0d1f9a3fbfa25e382806a76e601d9375479e30d495e751038695e77877a2
>Aug 20 17:16:55 pi zezere-ignition[12305]: INFO : GET https://provision.fedoraproject.org/netboot/aarch64/ignition/dc:a6:32:38:46:e7: attempt #1
>Aug 20 17:16:56 pi zezere-ignition[12305]: INFO : GET result: Not Found
>Aug 20 17:16:56 pi zezere-ignition[12305]: WARNING : failed to fetch config: resource not found
>Aug 20 17:16:56 pi zezere-ignition[12305]: CRITICAL : failed to acquire config: resource not found
>Aug 20 17:16:56 pi zezere-ignition[12305]: CRITICAL : Ignition failed: resource not found
>Aug 20 17:16:56 pi zezere-ignition[12313]: INFO : Ignition 2.14.0
>Aug 20 17:16:56 pi zezere-ignition[12313]: INFO : Stage: disks
>Aug 20 17:16:56 pi zezere-ignition[12313]: INFO : no config dir at "/usr/lib/ignition/base.d"
>Aug 20 17:16:56 pi zezere-ignition[12313]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file"
>Aug 20 17:16:56 pi zezere-ignition[12313]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory
>Aug 20 17:16:56 pi zezere-ignition[12313]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory
>Aug 20 17:16:56 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 25.
>Aug 20 17:16:56 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:16:56 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:16:56 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:16:56 pi zezere-ignition[12319]: INFO : Ignition 2.14.0
>Aug 20 17:16:56 pi zezere-ignition[12319]: INFO : Stage: mount
>Aug 20 17:16:56 pi zezere-ignition[12319]: INFO : no config dir at "/usr/lib/ignition/base.d"
>Aug 20 17:16:56 pi zezere-ignition[12319]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file"
>Aug 20 17:16:56 pi zezere-ignition[12319]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory
>Aug 20 17:16:56 pi zezere-ignition[12319]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory
>Aug 20 17:16:56 pi zezere-ignition[12325]: INFO : Ignition 2.14.0
>Aug 20 17:16:56 pi zezere-ignition[12325]: INFO : Stage: files
>Aug 20 17:16:56 pi zezere-ignition[12325]: INFO : no config dir at "/usr/lib/ignition/base.d"
>Aug 20 17:16:56 pi zezere-ignition[12325]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file"
>Aug 20 17:16:56 pi zezere-ignition[12325]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory
>Aug 20 17:16:56 pi zezere-ignition[12325]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory
>Aug 20 17:16:56 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
>Aug 20 17:16:56 pi zezere-ignition[12331]: INFO : Ignition 2.14.0
>Aug 20 17:16:56 pi zezere-ignition[12331]: INFO : Stage: umount
>Aug 20 17:16:56 pi zezere-ignition[12331]: INFO : no config dir at "/usr/lib/ignition/base.d"
>Aug 20 17:16:56 pi zezere-ignition[12331]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file"
>Aug 20 17:16:56 pi zezere-ignition[12331]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory
>Aug 20 17:16:56 pi zezere-ignition[12331]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory
>Aug 20 17:16:57 pi zezere-ignition[12304]: Running stage fetch with config file /tmp/zezere-ignition-config-_o7fft09.ign
>Aug 20 17:16:57 pi zezere-ignition[12304]: Running stage disks with config file /tmp/zezere-ignition-config-_o7fft09.ign
>Aug 20 17:16:57 pi zezere-ignition[12304]: Running stage mount with config file /tmp/zezere-ignition-config-_o7fft09.ign
>Aug 20 17:16:57 pi zezere-ignition[12304]: Running stage files with config file /tmp/zezere-ignition-config-_o7fft09.ign
>Aug 20 17:16:57 pi zezere-ignition[12304]: Running stage umount with config file /tmp/zezere-ignition-config-_o7fft09.ign
>Aug 20 17:16:57 pi dbus-parsec[12332]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
>Aug 20 17:16:57 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
>Aug 20 17:16:57 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
>Aug 20 17:16:57 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:16:57 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
>Aug 20 17:16:57 pi systemd[1]: zezere_ignition.service: Deactivated successfully.
>Aug 20 17:16:57 pi systemd[1]: Finished zezere_ignition.service - Run Ignition for Zezere.
>Aug 20 17:16:57 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=zezere_ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:16:57 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=zezere_ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:16:58 pi gitea-app[10857]: 2022/08/20 19:16:58 [6301170a] router: completed GET /Danacus/university-stuff/src/commit/e258876bec8aadcadec0488db242b3883b308301/Besturingssystemen/Zitting7/Opgave7 for 10.88.0.1:57818, 200 OK in 77.5ms @ repo/view.go:732(repo.Home)
>Aug 20 17:17:07 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 26.
>Aug 20 17:17:07 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:17:07 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:17:07 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:17:07 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
>Aug 20 17:17:07 pi dbus-parsec[12352]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
>Aug 20 17:17:07 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
>Aug 20 17:17:07 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
>Aug 20 17:17:07 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:17:07 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
>Aug 20 17:17:14 pi chronyd[692]: Selected source 131.188.3.221 (2.fedora.pool.ntp.org)
>Aug 20 17:17:17 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 27.
>Aug 20 17:17:17 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:17:17 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:17:17 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:17:17 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
>Aug 20 17:17:17 pi dbus-parsec[12353]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
>Aug 20 17:17:17 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
>Aug 20 17:17:17 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
>Aug 20 17:17:17 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:17:17 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
>Aug 20 17:17:26 pi systemd[1]: Started f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service - /usr/bin/podman healthcheck run f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.
>Aug 20 17:17:26 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:17:26 pi podman[12356]: 2022-08-20 17:17:26.749484041 +0000 UTC m=+0.193052458 container exec f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, org.opencontainers.image.licenses=, org.opencontainers.image.title=docker-pi-hole, org.opencontainers.image.created=2022-07-10T13:08:55.626Z, org.opencontainers.image.description=Pi-hole in a docker container, org.opencontainers.image.revision=7e69551be1b76d175fffc1b8c53733e74ee82520, PODMAN_SYSTEMD_UNIT=container-pihole.service, org.opencontainers.image.url=https://github.com/pi-hole/docker-pi-hole, org.opencontainers.image.source=https://github.com/pi-hole/docker-pi-hole, org.opencontainers.image.version=2022.07.1, io.containers.autoupdate=registry)
>Aug 20 17:17:26 pi podman[12356]: 2022-08-20 17:17:26.832255194 +0000 UTC m=+0.275823741 container exec_died f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, execID=85b69996ecfe30333b9f55f5697af20cfcab8000803bc82dd83bdde564f18897)
>Aug 20 17:17:27 pi systemd[1]: f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service: Deactivated successfully.
>Aug 20 17:17:27 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:17:27 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 28.
>Aug 20 17:17:27 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:17:27 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:17:27 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:17:27 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
>Aug 20 17:17:27 pi dbus-parsec[12377]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
>Aug 20 17:17:27 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
>Aug 20 17:17:27 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
>Aug 20 17:17:27 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:17:27 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
>Aug 20 17:17:30 pi systemd[1]: Started c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518.service - /usr/bin/podman healthcheck run c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518.
>Aug 20 17:17:30 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:17:31 pi gitea-app[10857]: 2022/08/20 19:17:31 [6301172b] router: completed GET /Danacus/university-stuff/src/commit/0c9cfb2cf3f93b1147983ce5cd18605d5324ee41/Gegevensbanken/lahman2016-sql/lahman2016.sql for 10.88.0.1:41728, 200 OK in 57.6ms @ repo/view.go:732(repo.Home)
>Aug 20 17:17:35 pi podman[12380]: 2022-08-20 17:17:35.136842922 +0000 UTC m=+4.633854277 container exec c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 (image=docker.io/vaultwarden/server:latest, name=vaultwarden-server, io.balena.architecture=aarch64, io.balena.qemu.version=7.0.0+balena1-aarch64, org.opencontainers.image.created=2022-07-27T18:44:18+00:00, PODMAN_SYSTEMD_UNIT=container-vaultwarden-server.service, org.opencontainers.image.version=1.25.2, io.containers.autoupdate=registry, org.opencontainers.image.documentation=https://github.com/dani-garcia/vaultwarden/wiki, org.opencontainers.image.revision=ce9d93003cd37a79edba1ba830a6c6d3fa22c2c8, org.opencontainers.image.url=https://hub.docker.com/r/vaultwarden/server, org.opencontainers.image.licenses=GPL-3.0-only, org.opencontainers.image.source=https://github.com/dani-garcia/vaultwarden)
>Aug 20 17:17:35 pi podman[12380]: 2022-08-20 17:17:35.189548173 +0000 UTC m=+4.686559509 container exec_died c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 (image=docker.io/vaultwarden/server:latest, name=vaultwarden-server, execID=1f0f9fa3aabbc98c703f2e756314f6663d7d3b725771fc93e1125c99430f075e)
>Aug 20 17:17:35 pi systemd[1]: c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518.service: Deactivated successfully.
>Aug 20 17:17:35 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:17:35 pi systemd[1]: c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518.service: Consumed 9.146s CPU time.
>Aug 20 17:17:37 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 29.
>Aug 20 17:17:37 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:17:37 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:17:37 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:17:37 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
>Aug 20 17:17:38 pi dbus-parsec[12414]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
>Aug 20 17:17:38 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
>Aug 20 17:17:38 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
>Aug 20 17:17:38 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:17:38 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
>Aug 20 17:17:48 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 30.
>Aug 20 17:17:48 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:17:48 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:17:48 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:17:48 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
>Aug 20 17:17:48 pi dbus-parsec[12417]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
>Aug 20 17:17:48 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
>Aug 20 17:17:48 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
>Aug 20 17:17:48 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:17:48 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
>Aug 20 17:17:57 pi systemd[1]: Started f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service - /usr/bin/podman healthcheck run f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.
>Aug 20 17:17:57 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:17:57 pi systemd[1]: Starting zezere_ignition.service - Run Ignition for Zezere...
>Aug 20 17:17:57 pi gitea-app[10857]: 2022/08/20 19:17:57 [63011745] router: completed GET /Danacus/university-stuff/action/star?redirect_to=%2FDanacus%2Funiversity-stuff%2Fcommits%2Fcommit%2F4c8df90242840ccec37c22a6ca349e23915788a9%2FIW for 10.88.0.1:38484, 405 Method Not Allowed in 0.8ms @ web/goget.go:21(web.goGet)
>Aug 20 17:17:57 pi podman[12422]: 2022-08-20 17:17:57.712063638 +0000 UTC m=+0.206630729 container exec f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, org.opencontainers.image.created=2022-07-10T13:08:55.626Z, org.opencontainers.image.title=docker-pi-hole, org.opencontainers.image.revision=7e69551be1b76d175fffc1b8c53733e74ee82520, org.opencontainers.image.version=2022.07.1, PODMAN_SYSTEMD_UNIT=container-pihole.service, io.containers.autoupdate=registry, org.opencontainers.image.description=Pi-hole in a docker container, org.opencontainers.image.source=https://github.com/pi-hole/docker-pi-hole, org.opencontainers.image.url=https://github.com/pi-hole/docker-pi-hole, org.opencontainers.image.licenses=)
>Aug 20 17:17:57 pi podman[12422]: 2022-08-20 17:17:57.747383192 +0000 UTC m=+0.241950339 container exec_died f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, execID=944a95e259d165c034bd6c1e73551935f52cc10f37f999d9b7c9bc608db12f1b)
>Aug 20 17:17:57 pi zezere-ignition[12445]: INFO : Ignition 2.14.0
>Aug 20 17:17:57 pi zezere-ignition[12445]: INFO : Stage: fetch
>Aug 20 17:17:57 pi zezere-ignition[12445]: INFO : no config dir at "/usr/lib/ignition/base.d"
>Aug 20 17:17:57 pi zezere-ignition[12445]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file"
>Aug 20 17:17:57 pi zezere-ignition[12445]: DEBUG : parsed url from cmdline: ""
>Aug 20 17:17:57 pi zezere-ignition[12445]: INFO : no config URL provided
>Aug 20 17:17:57 pi zezere-ignition[12445]: INFO : reading system config file "/usr/lib/ignition/user.ign"
>Aug 20 17:17:57 pi zezere-ignition[12445]: INFO : no config at "/usr/lib/ignition/user.ign"
>Aug 20 17:17:57 pi zezere-ignition[12445]: INFO : using config file at "/tmp/zezere-ignition-config-j42g4iga.ign"
>Aug 20 17:17:57 pi zezere-ignition[12445]: DEBUG : parsing config with SHA512: 4202432df9c76a84fcaf0bb3dd81d9aec055ae366dee2bee1f88d5233bae27ac14ab0d1f9a3fbfa25e382806a76e601d9375479e30d495e751038695e77877a2
>Aug 20 17:17:57 pi zezere-ignition[12445]: INFO : GET https://provision.fedoraproject.org/netboot/aarch64/ignition/dc:a6:32:38:46:e7: attempt #1
>Aug 20 17:17:57 pi systemd[1]: f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service: Deactivated successfully.
>Aug 20 17:17:57 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:17:58 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 31.
>Aug 20 17:17:58 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:17:58 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:17:58 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:17:58 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
>Aug 20 17:17:58 pi dbus-parsec[12453]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
>Aug 20 17:17:58 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
>Aug 20 17:17:58 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
>Aug 20 17:17:58 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:17:58 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
>Aug 20 17:17:58 pi zezere-ignition[12445]: INFO : GET result: Not Found
>Aug 20 17:17:58 pi zezere-ignition[12445]: WARNING : failed to fetch config: resource not found
>Aug 20 17:17:58 pi zezere-ignition[12445]: CRITICAL : failed to acquire config: resource not found
>Aug 20 17:17:58 pi zezere-ignition[12445]: CRITICAL : Ignition failed: resource not found
>Aug 20 17:17:58 pi zezere-ignition[12454]: INFO : Ignition 2.14.0
>Aug 20 17:17:58 pi zezere-ignition[12454]: INFO : Stage: disks
>Aug 20 17:17:58 pi zezere-ignition[12454]: INFO : no config dir at "/usr/lib/ignition/base.d"
>Aug 20 17:17:58 pi zezere-ignition[12454]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file"
>Aug 20 17:17:58 pi zezere-ignition[12454]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory
>Aug 20 17:17:58 pi zezere-ignition[12454]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory
>Aug 20 17:17:58 pi zezere-ignition[12460]: INFO : Ignition 2.14.0
>Aug 20 17:17:58 pi zezere-ignition[12460]: INFO : Stage: mount
>Aug 20 17:17:58 pi zezere-ignition[12460]: INFO : no config dir at "/usr/lib/ignition/base.d"
>Aug 20 17:17:58 pi zezere-ignition[12460]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file"
>Aug 20 17:17:58 pi zezere-ignition[12460]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory
>Aug 20 17:17:58 pi zezere-ignition[12460]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory
>Aug 20 17:17:58 pi zezere-ignition[12467]: INFO : Ignition 2.14.0
>Aug 20 17:17:58 pi zezere-ignition[12467]: INFO : Stage: files
>Aug 20 17:17:58 pi zezere-ignition[12467]: INFO : no config dir at "/usr/lib/ignition/base.d"
>Aug 20 17:17:58 pi zezere-ignition[12467]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file"
>Aug 20 17:17:58 pi zezere-ignition[12467]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory
>Aug 20 17:17:58 pi zezere-ignition[12467]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory
>Aug 20 17:17:58 pi zezere-ignition[12473]: INFO : Ignition 2.14.0
>Aug 20 17:17:58 pi zezere-ignition[12473]: INFO : Stage: umount
>Aug 20 17:17:58 pi zezere-ignition[12473]: INFO : no config dir at "/usr/lib/ignition/base.d"
>Aug 20 17:17:58 pi zezere-ignition[12473]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file"
>Aug 20 17:17:58 pi zezere-ignition[12473]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory
>Aug 20 17:17:58 pi zezere-ignition[12473]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory
>Aug 20 17:17:58 pi zezere-ignition[12423]: Running stage fetch with config file /tmp/zezere-ignition-config-j42g4iga.ign
>Aug 20 17:17:58 pi zezere-ignition[12423]: Running stage disks with config file /tmp/zezere-ignition-config-j42g4iga.ign
>Aug 20 17:17:58 pi zezere-ignition[12423]: Running stage mount with config file /tmp/zezere-ignition-config-j42g4iga.ign
>Aug 20 17:17:58 pi zezere-ignition[12423]: Running stage files with config file /tmp/zezere-ignition-config-j42g4iga.ign
>Aug 20 17:17:58 pi zezere-ignition[12423]: Running stage umount with config file /tmp/zezere-ignition-config-j42g4iga.ign
>Aug 20 17:17:58 pi systemd[1]: zezere_ignition.service: Deactivated successfully.
>Aug 20 17:17:58 pi systemd[1]: Finished zezere_ignition.service - Run Ignition for Zezere.
>Aug 20 17:17:58 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=zezere_ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:17:58 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=zezere_ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:17:59 pi gitea-app[10857]: 2022/08/20 19:17:59 [63011747] router: completed GET /Danacus/university-stuff/raw/commit/7647538f1caeb643328630bbc4d9e35bd5189696/Numerieke/oef5.m for 10.88.0.1:38500, 200 OK in 79.8ms @ repo/download.go:123(repo.SingleDownload)
>Aug 20 17:18:01 pi gitea-app[10857]: 2022/08/20 19:18:01 [63011749] router: completed GET /Danacus/university-stuff/src/commit/ec55cf89712ef058338e1cafe09a977bcfbae089/.vim/coc-settings.json for 10.88.0.1:38508, 200 OK in 83.0ms @ repo/view.go:732(repo.Home)
>Aug 20 17:18:03 pi gitea-app[10857]: 2022/08/20 19:18:03 [6301174b] router: completed GET /Danacus/university-stuff/src/commit/4f6704b7814f72e9387672f07891ea3f05a9872c/Numerieke/Inleiding_en_Foutenanalyse_G0N90B.pdf.xopp~ for 10.88.0.1:38510, 200 OK in 73.9ms @ repo/view.go:732(repo.Home)
>Aug 20 17:18:08 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 32.
>Aug 20 17:18:08 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:18:08 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:18:08 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:18:08 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
>Aug 20 17:18:08 pi dbus-parsec[12503]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
>Aug 20 17:18:08 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
>Aug 20 17:18:08 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
>Aug 20 17:18:08 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:18:08 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
>Aug 20 17:18:10 pi gitea-app[10857]: 2022/08/20 19:18:10 [63011752] router: completed GET /Danacus/university-stuff/src/commit/9b8bf2715ba9277f4aacb5ce2f2b3b715c9651ca for 10.88.0.1:54678, 200 OK in 126.0ms @ repo/view.go:732(repo.Home)
>Aug 20 17:18:13 pi gitea-app[10857]: 2022/08/20 19:18:13 [63011755] router: completed GET /Danacus/university-stuff/src/commit/fbe44a25cd05f56bbbd056f5901103a6e651c608/IW/C/oefenzitting/oef/average for 10.88.0.1:54684, 200 OK in 74.5ms @ repo/view.go:732(repo.Home)
>Aug 20 17:18:18 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 33.
>Aug 20 17:18:18 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:18:18 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:18:18 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:18:18 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
>Aug 20 17:18:19 pi dbus-parsec[12516]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
>Aug 20 17:18:19 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
>Aug 20 17:18:19 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
>Aug 20 17:18:19 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:18:19 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
>Aug 20 17:18:27 pi gitea-app[10857]: 2022/08/20 19:18:27 [63011763] router: completed GET /Danacus/university-stuff/src/commit/e61ff08c96fefd71992d56b60dd47516a7972f2c/OGP/les03/bin/AddOverflowException.class for 10.88.0.1:56886, 200 OK in 75.1ms @ repo/view.go:732(repo.Home)
>Aug 20 17:18:28 pi systemd[1]: Started f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service - /usr/bin/podman healthcheck run f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.
>Aug 20 17:18:28 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:18:28 pi podman[12525]: 2022-08-20 17:18:28.711720708 +0000 UTC m=+0.198349291 container exec f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, io.containers.autoupdate=registry, org.opencontainers.image.source=https://github.com/pi-hole/docker-pi-hole, org.opencontainers.image.created=2022-07-10T13:08:55.626Z, org.opencontainers.image.licenses=, org.opencontainers.image.revision=7e69551be1b76d175fffc1b8c53733e74ee82520, PODMAN_SYSTEMD_UNIT=container-pihole.service, org.opencontainers.image.version=2022.07.1, org.opencontainers.image.description=Pi-hole in a docker container, org.opencontainers.image.title=docker-pi-hole, org.opencontainers.image.url=https://github.com/pi-hole/docker-pi-hole)
>Aug 20 17:18:28 pi systemd[2791]: Created slice background.slice - User Background Tasks Slice.
>Aug 20 17:18:28 pi systemd[2791]: Starting systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories...
>Aug 20 17:18:28 pi systemd[2791]: Finished systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories.
>Aug 20 17:18:28 pi podman[12525]: 2022-08-20 17:18:28.779627693 +0000 UTC m=+0.266256332 container exec_died f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, execID=0a803cc2826ac075faf7d252dbf7c02f4de17bc5b4b7422c8a4029e8a0a13533)
>Aug 20 17:18:28 pi systemd[1]: f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service: Deactivated successfully.
>Aug 20 17:18:28 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:18:29 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 34.
>Aug 20 17:18:29 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:18:29 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:18:29 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:18:29 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
>Aug 20 17:18:29 pi dbus-parsec[12547]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
>Aug 20 17:18:29 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
>Aug 20 17:18:29 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
>Aug 20 17:18:29 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:18:29 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
>Aug 20 17:18:35 pi systemd[1]: Started c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518.service - /usr/bin/podman healthcheck run c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518.
>Aug 20 17:18:35 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:18:35 pi podman[12550]: 2022-08-20 17:18:35.699506927 +0000 UTC m=+0.195681707 container exec c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 (image=docker.io/vaultwarden/server:latest, name=vaultwarden-server, org.opencontainers.image.source=https://github.com/dani-garcia/vaultwarden, org.opencontainers.image.url=https://hub.docker.com/r/vaultwarden/server, io.balena.architecture=aarch64, io.balena.qemu.version=7.0.0+balena1-aarch64, org.opencontainers.image.licenses=GPL-3.0-only, PODMAN_SYSTEMD_UNIT=container-vaultwarden-server.service, io.containers.autoupdate=registry, org.opencontainers.image.created=2022-07-27T18:44:18+00:00, org.opencontainers.image.revision=ce9d93003cd37a79edba1ba830a6c6d3fa22c2c8, org.opencontainers.image.version=1.25.2, org.opencontainers.image.documentation=https://github.com/dani-garcia/vaultwarden/wiki)
>Aug 20 17:18:35 pi podman[12550]: 2022-08-20 17:18:35.749577338 +0000 UTC m=+0.245752525 container exec_died c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 (image=docker.io/vaultwarden/server:latest, name=vaultwarden-server, execID=a9802ae81b88bb0f9fbfdbc3dc5264fa61dd739c7a8319d0f8a180b9700b112a)
>Aug 20 17:18:35 pi systemd[1]: c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518.service: Deactivated successfully.
>Aug 20 17:18:35 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:18:39 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 35.
>Aug 20 17:18:39 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:18:39 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:18:39 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:18:39 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
>Aug 20 17:18:39 pi dbus-parsec[12574]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
>Aug 20 17:18:39 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
>Aug 20 17:18:39 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
>Aug 20 17:18:39 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:18:39 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
>Aug 20 17:18:41 pi kernel: filter_IN_home_REJECT: IN=eth0 OUT= MAC=dc:a6:32:38:46:e7:28:87:ba:2a:e1:ff:08:00 SRC=10.0.1.168 DST=10.0.3.10 LEN=52 TOS=0x00 PREC=0x00 TTL=63 ID=65189 DF PROTO=TCP SPT=38379 DPT=443 WINDOW=200 RES=0x00 ACK RST URGP=0
>Aug 20 17:18:42 pi kernel: filter_IN_public_REJECT: IN=eth0 OUT= MAC= SRC=10.0.3.10 DST=239.255.255.250 LEN=118 TOS=0x00 PREC=0x00 TTL=2 ID=30224 DF PROTO=UDP SPT=33477 DPT=1900 LEN=98
>Aug 20 17:18:42 pi kernel: filter_IN_public_REJECT: IN=eth0 OUT= MAC= SRC=fe80:0000:0000:0000:4ec7:1f5e:5274:16ba DST=ff02:0000:0000:0000:0000:0000:0000:000c LEN=134 TC=0 HOPLIMIT=2 FLOWLBL=968519 PROTO=UDP SPT=35818 DPT=1900 LEN=94
>Aug 20 17:18:42 pi kernel: filter_IN_public_REJECT: IN=eth0 OUT= MAC= SRC=10.0.3.10 DST=255.255.255.255 LEN=118 TOS=0x00 PREC=0x00 TTL=64 ID=15096 DF PROTO=UDP SPT=33477 DPT=1900 LEN=98
>Aug 20 17:18:43 pi gitea-app[10857]: 2022/08/20 19:18:43 [63011773] router: completed GET /Danacus/university-stuff/src/commit/9b8bf2715ba9277f4aacb5ce2f2b3b715c9651ca/BvP/knights.py for 10.88.0.1:57914, 200 OK in 63.8ms @ repo/view.go:732(repo.Home)
>Aug 20 17:18:49 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 36.
>Aug 20 17:18:49 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:18:49 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:18:49 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:18:49 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
>Aug 20 17:18:49 pi dbus-parsec[12581]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
>Aug 20 17:18:49 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
>Aug 20 17:18:49 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
>Aug 20 17:18:49 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:18:49 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
>Aug 20 17:18:58 pi systemd[1]: Starting zezere_ignition.service - Run Ignition for Zezere...
>Aug 20 17:18:58 pi gitea-app[10857]: 2022/08/20 19:18:58 [63011782] router: completed GET /Danacus/university-stuff/action/star?redirect_to=%2FDanacus%2Funiversity-stuff%2Fcommits%2Fcommit%2F3b6a42b1224679859459f1f1e7a46f3dc06a693e%2FOGP%2FTest%2F.classpath for 10.88.0.1:43128, 405 Method Not Allowed in 0.7ms @ web/goget.go:21(web.goGet)
>Aug 20 17:18:58 pi zezere-ignition[12585]: INFO : Ignition 2.14.0
>Aug 20 17:18:58 pi zezere-ignition[12585]: INFO : Stage: fetch
>Aug 20 17:18:58 pi zezere-ignition[12585]: INFO : no config dir at "/usr/lib/ignition/base.d"
>Aug 20 17:18:58 pi zezere-ignition[12585]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file"
>Aug 20 17:18:58 pi zezere-ignition[12585]: DEBUG : parsed url from cmdline: ""
>Aug 20 17:18:58 pi zezere-ignition[12585]: INFO : no config URL provided
>Aug 20 17:18:58 pi zezere-ignition[12585]: INFO : reading system config file "/usr/lib/ignition/user.ign"
>Aug 20 17:18:58 pi zezere-ignition[12585]: INFO : no config at "/usr/lib/ignition/user.ign"
>Aug 20 17:18:58 pi zezere-ignition[12585]: INFO : using config file at "/tmp/zezere-ignition-config-pbmeki34.ign"
>Aug 20 17:18:58 pi zezere-ignition[12585]: DEBUG : parsing config with SHA512: 4202432df9c76a84fcaf0bb3dd81d9aec055ae366dee2bee1f88d5233bae27ac14ab0d1f9a3fbfa25e382806a76e601d9375479e30d495e751038695e77877a2
>Aug 20 17:18:58 pi zezere-ignition[12585]: INFO : GET https://provision.fedoraproject.org/netboot/aarch64/ignition/dc:a6:32:38:46:e7: attempt #1
>Aug 20 17:18:59 pi systemd[1]: Started f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service - /usr/bin/podman healthcheck run f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.
>Aug 20 17:18:59 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:18:59 pi zezere-ignition[12585]: INFO : GET result: Not Found
>Aug 20 17:18:59 pi zezere-ignition[12585]: WARNING : failed to fetch config: resource not found
>Aug 20 17:18:59 pi zezere-ignition[12585]: CRITICAL : failed to acquire config: resource not found
>Aug 20 17:18:59 pi zezere-ignition[12585]: CRITICAL : Ignition failed: resource not found
>Aug 20 17:18:59 pi zezere-ignition[12603]: INFO : Ignition 2.14.0
>Aug 20 17:18:59 pi zezere-ignition[12603]: INFO : Stage: disks
>Aug 20 17:18:59 pi zezere-ignition[12603]: INFO : no config dir at "/usr/lib/ignition/base.d"
>Aug 20 17:18:59 pi zezere-ignition[12603]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file"
>Aug 20 17:18:59 pi zezere-ignition[12603]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory
>Aug 20 17:18:59 pi zezere-ignition[12603]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory
>Aug 20 17:18:59 pi zezere-ignition[12610]: INFO : Ignition 2.14.0
>Aug 20 17:18:59 pi zezere-ignition[12610]: INFO : Stage: mount
>Aug 20 17:18:59 pi zezere-ignition[12610]: INFO : no config dir at "/usr/lib/ignition/base.d"
>Aug 20 17:18:59 pi zezere-ignition[12610]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file"
>Aug 20 17:18:59 pi zezere-ignition[12610]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory
>Aug 20 17:18:59 pi zezere-ignition[12610]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory
>Aug 20 17:18:59 pi zezere-ignition[12616]: INFO : Ignition 2.14.0
>Aug 20 17:18:59 pi zezere-ignition[12616]: INFO : Stage: files
>Aug 20 17:18:59 pi zezere-ignition[12616]: INFO : no config dir at "/usr/lib/ignition/base.d"
>Aug 20 17:18:59 pi zezere-ignition[12616]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file"
>Aug 20 17:18:59 pi zezere-ignition[12616]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory
>Aug 20 17:18:59 pi zezere-ignition[12616]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory
>Aug 20 17:18:59 pi zezere-ignition[12623]: INFO : Ignition 2.14.0
>Aug 20 17:18:59 pi zezere-ignition[12623]: INFO : Stage: umount
>Aug 20 17:18:59 pi zezere-ignition[12623]: INFO : no config dir at "/usr/lib/ignition/base.d"
>Aug 20 17:18:59 pi zezere-ignition[12623]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file"
>Aug 20 17:18:59 pi zezere-ignition[12623]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory
>Aug 20 17:18:59 pi zezere-ignition[12623]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory
>Aug 20 17:18:59 pi zezere-ignition[12584]: Running stage fetch with config file /tmp/zezere-ignition-config-pbmeki34.ign
>Aug 20 17:18:59 pi zezere-ignition[12584]: Running stage disks with config file /tmp/zezere-ignition-config-pbmeki34.ign
>Aug 20 17:18:59 pi zezere-ignition[12584]: Running stage mount with config file /tmp/zezere-ignition-config-pbmeki34.ign
>Aug 20 17:18:59 pi zezere-ignition[12584]: Running stage files with config file /tmp/zezere-ignition-config-pbmeki34.ign
>Aug 20 17:18:59 pi zezere-ignition[12584]: Running stage umount with config file /tmp/zezere-ignition-config-pbmeki34.ign
>Aug 20 17:18:59 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=zezere_ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:18:59 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=zezere_ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:18:59 pi systemd[1]: zezere_ignition.service: Deactivated successfully.
>Aug 20 17:18:59 pi systemd[1]: Finished zezere_ignition.service - Run Ignition for Zezere.
>Aug 20 17:18:59 pi podman[12594]: 2022-08-20 17:18:59.767855018 +0000 UTC m=+0.245764931 container exec f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, org.opencontainers.image.created=2022-07-10T13:08:55.626Z, org.opencontainers.image.licenses=, org.opencontainers.image.title=docker-pi-hole, org.opencontainers.image.url=https://github.com/pi-hole/docker-pi-hole, PODMAN_SYSTEMD_UNIT=container-pihole.service, org.opencontainers.image.source=https://github.com/pi-hole/docker-pi-hole, org.opencontainers.image.version=2022.07.1, io.containers.autoupdate=registry, org.opencontainers.image.description=Pi-hole in a docker container, org.opencontainers.image.revision=7e69551be1b76d175fffc1b8c53733e74ee82520)
>Aug 20 17:18:59 pi podman[12594]: 2022-08-20 17:18:59.83948128 +0000 UTC m=+0.317391193 container exec_died f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, execID=a8d3441b3cc406a8a6fbb9f9cf3ef389b7a5d60a5e701960343c4eff14502bc8)
>Aug 20 17:18:59 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 37.
>Aug 20 17:18:59 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:18:59 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:18:59 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:18:59 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
>Aug 20 17:18:59 pi dbus-parsec[12641]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
>Aug 20 17:18:59 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
>Aug 20 17:18:59 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
>Aug 20 17:18:59 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:18:59 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
>Aug 20 17:19:00 pi systemd[1]: f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service: Deactivated successfully.
>Aug 20 17:19:00 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:19:09 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 38.
>Aug 20 17:19:09 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:19:09 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:19:09 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:19:09 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
>Aug 20 17:19:10 pi dbus-parsec[12649]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
>Aug 20 17:19:10 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
>Aug 20 17:19:10 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
>Aug 20 17:19:10 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:19:10 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
>Aug 20 17:19:20 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 39.
>Aug 20 17:19:20 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:19:20 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:19:20 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:19:20 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
>Aug 20 17:19:20 pi dbus-parsec[12650]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
>Aug 20 17:19:20 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
>Aug 20 17:19:20 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
>Aug 20 17:19:20 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:19:20 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
>Aug 20 17:19:20 pi gitea-app[10857]: 2022/08/20 19:19:20 [63011798] router: completed GET /Danacus/university-stuff/src/commit/a79a26e9854260ea65641f0ff43cff2cbdd3838b/Declaratieve%20Talen/haskell/Spawn%20Analyser for 10.88.0.1:42830, 200 OK in 80.2ms @ repo/view.go:732(repo.Home)
>Aug 20 17:19:22 pi gitea-app[10857]: 2022/08/20 19:19:22 [6301179a] router: completed GET /Danacus/university-stuff/action/watch?redirect_to=%2FDanacus%2Funiversity-stuff%2Fblame%2Fcommit%2F7414f8289ce00e41b20df70175e0c5d5b38df0cc%2FTAI%2FSyndroomdecodering.ipynb for 10.88.0.1:42844, 405 Method Not Allowed in 0.8ms @ web/goget.go:21(web.goGet)
>Aug 20 17:19:30 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 40.
>Aug 20 17:19:30 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:19:30 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:19:30 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:19:30 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
>Aug 20 17:19:30 pi systemd[1]: Started f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service - /usr/bin/podman healthcheck run f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.
>Aug 20 17:19:30 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:19:30 pi dbus-parsec[12660]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
>Aug 20 17:19:30 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
>Aug 20 17:19:30 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
>Aug 20 17:19:30 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:19:30 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
>Aug 20 17:19:30 pi podman[12661]: 2022-08-20 17:19:30.720553172 +0000 UTC m=+0.189340217 container exec f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, PODMAN_SYSTEMD_UNIT=container-pihole.service, org.opencontainers.image.created=2022-07-10T13:08:55.626Z, org.opencontainers.image.licenses=, org.opencontainers.image.revision=7e69551be1b76d175fffc1b8c53733e74ee82520, org.opencontainers.image.version=2022.07.1, io.containers.autoupdate=registry, org.opencontainers.image.source=https://github.com/pi-hole/docker-pi-hole, org.opencontainers.image.url=https://github.com/pi-hole/docker-pi-hole, org.opencontainers.image.description=Pi-hole in a docker container, org.opencontainers.image.title=docker-pi-hole)
>Aug 20 17:19:30 pi podman[12661]: 2022-08-20 17:19:30.780420056 +0000 UTC m=+0.249207248 container exec_died f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, execID=8783ed7b4a982b8f86f58c567f4180a481d2ca5f717476e31e134e07a4e7d788)
>Aug 20 17:19:30 pi systemd[1]: f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service: Deactivated successfully.
>Aug 20 17:19:30 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:19:36 pi systemd[1]: Started c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518.service - /usr/bin/podman healthcheck run c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518.
>Aug 20 17:19:36 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:19:39 pi NetworkManager[717]: <info> [1661015979.3945] device (wlan0): set-hw-addr: set MAC address to 62:52:40:2F:34:E6 (scanning)
>Aug 20 17:19:39 pi podman[12683]: 2022-08-20 17:19:39.534470705 +0000 UTC m=+3.031560708 container exec c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 (image=docker.io/vaultwarden/server:latest, name=vaultwarden-server, org.opencontainers.image.version=1.25.2, PODMAN_SYSTEMD_UNIT=container-vaultwarden-server.service, org.opencontainers.image.created=2022-07-27T18:44:18+00:00, org.opencontainers.image.url=https://hub.docker.com/r/vaultwarden/server, org.opencontainers.image.revision=ce9d93003cd37a79edba1ba830a6c6d3fa22c2c8, org.opencontainers.image.source=https://github.com/dani-garcia/vaultwarden, org.opencontainers.image.documentation=https://github.com/dani-garcia/vaultwarden/wiki, io.balena.architecture=aarch64, io.balena.qemu.version=7.0.0+balena1-aarch64, io.containers.autoupdate=registry, org.opencontainers.image.licenses=GPL-3.0-only)
>Aug 20 17:19:39 pi NetworkManager[717]: <info> [1661015979.5438] device (wlan0): supplicant interface state: disconnected -> inactive
>Aug 20 17:19:39 pi NetworkManager[717]: <info> [1661015979.5440] device (p2p-dev-wlan0): supplicant management interface state: disconnected -> inactive
>Aug 20 17:19:39 pi podman[12683]: 2022-08-20 17:19:39.590224501 +0000 UTC m=+3.087314634 container exec_died c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 (image=docker.io/vaultwarden/server:latest, name=vaultwarden-server, execID=fb514768aaf6aeaef3437820ee73ea7d6301f8ef750963b36669826159a6f179)
>Aug 20 17:19:39 pi systemd[1]: c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518.service: Deactivated successfully.
>Aug 20 17:19:39 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:19:39 pi systemd[1]: c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518.service: Consumed 5.941s CPU time. >Aug 20 17:19:40 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 41. >Aug 20 17:19:40 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:19:40 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:19:40 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:19:40 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon... >Aug 20 17:19:40 pi dbus-parsec[12706]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist))) >Aug 20 17:19:40 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE >Aug 20 17:19:40 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'. >Aug 20 17:19:40 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:19:40 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' >Aug 20 17:19:48 pi gitea-app[10857]: 2022/08/20 19:19:48 [630117b4] router: completed GET /Danacus/university-stuff/action/star?redirect_to=%2FDanacus%2Funiversity-stuff%2Fblame%2Fcommit%2F2d6826d8ac2d62ee801094e1200728e22d20c70c%2FNumerieke%2Fzit09_matlab%2Fsplinestelsel.m for 10.88.0.1:39758, 405 Method Not Allowed in 0.8ms @ web/goget.go:21(web.goGet) >Aug 20 17:19:50 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 42. >Aug 20 17:19:50 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:19:50 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:19:50 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:19:50 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon... >Aug 20 17:19:51 pi dbus-parsec[12716]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist))) >Aug 20 17:19:51 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE >Aug 20 17:19:51 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'. >Aug 20 17:19:51 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:19:51 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' >Aug 20 17:19:51 pi gitea-app[10857]: 2022/08/20 19:19:51 [630117b6] router: completed GET /Danacus/university-stuff/blame/commit/1150c51e957e67e4ede3ffa0785c911d0dd9f588/OGP/Iterators/src/Experiment.java for 10.88.0.1:39772, 200 OK in 593.8ms @ repo/blame.go:47(repo.RefBlame) >Aug 20 17:19:54 pi gitea-app[10857]: 2022/08/20 19:19:54 [630117ba] router: completed GET /Danacus/university-stuff/action/watch?redirect_to=%2FDanacus%2Funiversity-stuff%2Fcommits%2Fcommit%2F6b3808900ac3849e64af3fe38cb55ddff18e12df%2FNumerieke%2Foef12.m for 10.88.0.1:57242, 405 Method Not Allowed in 0.6ms @ web/goget.go:21(web.goGet) >Aug 20 17:19:57 pi gitea-app[10857]: 2022/08/20 19:19:57 [630117bd] router: completed GET /Danacus/Dotfiles/src/commit/a8916b36940328263874c585b63ae993d55584a0/.config/awesome/theme.lua for 10.88.0.1:57252, 200 OK in 188.1ms @ repo/view.go:732(repo.Home) >Aug 20 17:20:00 pi systemd[1]: Starting zezere_ignition.service - Run Ignition for Zezere... >Aug 20 17:20:00 pi zezere-ignition[12733]: INFO : Ignition 2.14.0 >Aug 20 17:20:00 pi zezere-ignition[12733]: INFO : Stage: fetch >Aug 20 17:20:00 pi zezere-ignition[12733]: INFO : no config dir at "/usr/lib/ignition/base.d" >Aug 20 17:20:00 pi zezere-ignition[12733]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file" >Aug 20 17:20:00 pi zezere-ignition[12733]: DEBUG : parsed url from cmdline: "" >Aug 20 17:20:00 pi zezere-ignition[12733]: INFO : no config URL provided >Aug 20 17:20:00 pi zezere-ignition[12733]: INFO : reading system config file "/usr/lib/ignition/user.ign" >Aug 20 17:20:00 pi zezere-ignition[12733]: INFO : no config at "/usr/lib/ignition/user.ign" >Aug 20 17:20:00 pi zezere-ignition[12733]: INFO : using config file at "/tmp/zezere-ignition-config-jidxu1ud.ign" >Aug 20 17:20:00 pi zezere-ignition[12733]: DEBUG : parsing config with SHA512: 4202432df9c76a84fcaf0bb3dd81d9aec055ae366dee2bee1f88d5233bae27ac14ab0d1f9a3fbfa25e382806a76e601d9375479e30d495e751038695e77877a2 >Aug 20 
17:20:00 pi zezere-ignition[12733]: INFO : GET https://provision.fedoraproject.org/netboot/aarch64/ignition/dc:a6:32:38:46:e7: attempt #1 >Aug 20 17:20:01 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 43. >Aug 20 17:20:01 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:20:01 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:20:01 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:20:01 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon... >Aug 20 17:20:01 pi systemd[1]: Started f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service - /usr/bin/podman healthcheck run f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e. >Aug 20 17:20:01 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:20:01 pi dbus-parsec[12739]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist))) >Aug 20 17:20:01 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE >Aug 20 17:20:01 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'. >Aug 20 17:20:01 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:20:01 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=failed' >Aug 20 17:20:01 pi zezere-ignition[12733]: INFO : GET result: Not Found >Aug 20 17:20:01 pi zezere-ignition[12733]: WARNING : failed to fetch config: resource not found >Aug 20 17:20:01 pi zezere-ignition[12733]: CRITICAL : failed to acquire config: resource not found >Aug 20 17:20:01 pi zezere-ignition[12733]: CRITICAL : Ignition failed: resource not found >Aug 20 17:20:01 pi zezere-ignition[12750]: INFO : Ignition 2.14.0 >Aug 20 17:20:01 pi zezere-ignition[12750]: INFO : Stage: disks >Aug 20 17:20:01 pi zezere-ignition[12750]: INFO : no config dir at "/usr/lib/ignition/base.d" >Aug 20 17:20:01 pi zezere-ignition[12750]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file" >Aug 20 17:20:01 pi zezere-ignition[12750]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory >Aug 20 17:20:01 pi zezere-ignition[12750]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory >Aug 20 17:20:01 pi zezere-ignition[12758]: INFO : Ignition 2.14.0 >Aug 20 17:20:01 pi zezere-ignition[12758]: INFO : Stage: mount >Aug 20 17:20:01 pi zezere-ignition[12758]: INFO : no config dir at "/usr/lib/ignition/base.d" >Aug 20 17:20:01 pi zezere-ignition[12758]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file" >Aug 20 17:20:01 pi zezere-ignition[12758]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory >Aug 20 17:20:01 pi zezere-ignition[12758]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory >Aug 20 17:20:01 pi zezere-ignition[12764]: INFO : Ignition 2.14.0 >Aug 20 17:20:01 pi zezere-ignition[12764]: INFO : Stage: files >Aug 20 17:20:01 pi zezere-ignition[12764]: INFO : no config dir at "/usr/lib/ignition/base.d" >Aug 20 17:20:01 pi zezere-ignition[12764]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file" >Aug 20 17:20:01 pi zezere-ignition[12764]: CRITICAL : failed to acquire config: open 
/run/ignition.json: no such file or directory >Aug 20 17:20:01 pi zezere-ignition[12764]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory >Aug 20 17:20:01 pi zezere-ignition[12770]: INFO : Ignition 2.14.0 >Aug 20 17:20:01 pi zezere-ignition[12770]: INFO : Stage: umount >Aug 20 17:20:01 pi zezere-ignition[12770]: INFO : no config dir at "/usr/lib/ignition/base.d" >Aug 20 17:20:01 pi zezere-ignition[12770]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file" >Aug 20 17:20:01 pi zezere-ignition[12770]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory >Aug 20 17:20:01 pi zezere-ignition[12770]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory >Aug 20 17:20:01 pi zezere-ignition[12732]: Running stage fetch with config file /tmp/zezere-ignition-config-jidxu1ud.ign >Aug 20 17:20:01 pi zezere-ignition[12732]: Running stage disks with config file /tmp/zezere-ignition-config-jidxu1ud.ign >Aug 20 17:20:01 pi zezere-ignition[12732]: Running stage mount with config file /tmp/zezere-ignition-config-jidxu1ud.ign >Aug 20 17:20:01 pi zezere-ignition[12732]: Running stage files with config file /tmp/zezere-ignition-config-jidxu1ud.ign >Aug 20 17:20:01 pi zezere-ignition[12732]: Running stage umount with config file /tmp/zezere-ignition-config-jidxu1ud.ign >Aug 20 17:20:01 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=zezere_ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:20:01 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=zezere_ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:20:01 pi systemd[1]: zezere_ignition.service: Deactivated successfully. 
>Aug 20 17:20:01 pi systemd[1]: Finished zezere_ignition.service - Run Ignition for Zezere. >Aug 20 17:20:08 pi podman[12740]: 2022-08-20 17:20:08.237461837 +0000 UTC m=+6.966009389 container exec f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, org.opencontainers.image.version=2022.07.1, PODMAN_SYSTEMD_UNIT=container-pihole.service, org.opencontainers.image.licenses=, org.opencontainers.image.title=docker-pi-hole, org.opencontainers.image.source=https://github.com/pi-hole/docker-pi-hole, org.opencontainers.image.url=https://github.com/pi-hole/docker-pi-hole, org.opencontainers.image.created=2022-07-10T13:08:55.626Z, io.containers.autoupdate=registry, org.opencontainers.image.description=Pi-hole in a docker container, org.opencontainers.image.revision=7e69551be1b76d175fffc1b8c53733e74ee82520) >Aug 20 17:20:08 pi podman[12740]: 2022-08-20 17:20:08.300687463 +0000 UTC m=+7.029235312 container exec_died f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, execID=0eb5016950a9cf41a0737b77ab37b98712ca9fa5e3de2b6ccb6ed6b58f248a70) >Aug 20 17:20:08 pi systemd[1]: f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service: Deactivated successfully. >Aug 20 17:20:08 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:20:08 pi systemd[1]: f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service: Consumed 13.782s CPU time. 
>Aug 20 17:20:11 pi gitea-app[10857]: 2022/08/20 19:20:11 [630117cb] router: completed GET /Danacus/university-stuff/raw/commit/afc3fdac9d25d323ed31351bb8955eff746ce85a/OGP/.metadata/version.ini for 10.88.0.1:57108, 200 OK in 76.3ms @ repo/download.go:123(repo.SingleDownload) >Aug 20 17:20:11 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 44. >Aug 20 17:20:11 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:20:11 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:20:11 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:20:11 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon... >Aug 20 17:20:11 pi dbus-parsec[12814]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist))) >Aug 20 17:20:11 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE >Aug 20 17:20:11 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'. >Aug 20 17:20:11 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:20:11 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' >Aug 20 17:20:12 pi gitea-app[10857]: 2022/08/20 19:20:12 [630117cc] router: completed GET /Danacus/university-stuff/action/watch?redirect_to=%2FDanacus%2Funiversity-stuff%2Fcommits%2Fcommit%2Fe61ff08c96fefd71992d56b60dd47516a7972f2c%2FSOCS%2FSamenvattingen%2Fltximg for 10.88.0.1:57114, 405 Method Not Allowed in 0.8ms @ web/goget.go:21(web.goGet) >Aug 20 17:20:21 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 45. >Aug 20 17:20:21 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:20:21 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:20:21 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:20:21 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon... >Aug 20 17:20:21 pi dbus-parsec[12815]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist))) >Aug 20 17:20:21 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE >Aug 20 17:20:21 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'. >Aug 20 17:20:21 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:20:21 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' >Aug 20 17:20:23 pi gitea-app[10857]: 2022/08/20 19:20:23 [630117d7] router: completed GET /Danacus/university-stuff/src/commit/d7f3e031870dc04cc935bff1b38832d85b2df616/Numerieke/zit06_matlab/probleem2.m for 10.88.0.1:43324, 200 OK in 68.0ms @ repo/view.go:732(repo.Home) >Aug 20 17:20:25 pi gitea-app[10857]: 2022/08/20 19:20:25 [630117d9] router: completed GET /Danacus/dotfiles/action/watch?redirect_to=%2FDanacus%2Fdotfiles%2Fcommits%2Fcommit%2Fa3ffda66107a00a9b215c1bbf498b2a7ab12fd20%2F.config%2Fnvim%2Flua%2Fsession.lua for 10.88.0.1:49234, 405 Method Not Allowed in 0.8ms @ web/goget.go:21(web.goGet) >Aug 20 17:20:31 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 46. >Aug 20 17:20:31 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:20:31 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:20:31 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:20:31 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon... >Aug 20 17:20:32 pi dbus-parsec[12826]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist))) >Aug 20 17:20:32 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE >Aug 20 17:20:32 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'. >Aug 20 17:20:32 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:20:32 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' >Aug 20 17:20:38 pi systemd[1]: Started f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service - /usr/bin/podman healthcheck run f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e. >Aug 20 17:20:38 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:20:38 pi podman[12827]: 2022-08-20 17:20:38.704108828 +0000 UTC m=+0.191388174 container exec f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, org.opencontainers.image.version=2022.07.1, io.containers.autoupdate=registry, org.opencontainers.image.source=https://github.com/pi-hole/docker-pi-hole, org.opencontainers.image.title=docker-pi-hole, PODMAN_SYSTEMD_UNIT=container-pihole.service, org.opencontainers.image.url=https://github.com/pi-hole/docker-pi-hole, org.opencontainers.image.licenses=, org.opencontainers.image.revision=7e69551be1b76d175fffc1b8c53733e74ee82520, org.opencontainers.image.created=2022-07-10T13:08:55.626Z, org.opencontainers.image.description=Pi-hole in a docker container) >Aug 20 17:20:38 pi podman[12827]: 2022-08-20 17:20:38.759998778 +0000 UTC m=+0.247278605 container exec_died f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, execID=a90dcbf5c5dba1400b18ab3c960bb626387b82e0a7678bdde0a30d887808809b) >Aug 20 17:20:38 pi systemd[1]: f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service: Deactivated successfully. >Aug 20 17:20:38 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' >Aug 20 17:20:39 pi gitea-app[10857]: 2022/08/20 19:20:39 [630117e7] router: completed GET /Danacus/university-stuff/src/commit/861fab1ebde9763752f62451cb075b31445d4966/Logica/oef_fixed for 10.88.0.1:54512, 200 OK in 75.1ms @ repo/view.go:732(repo.Home) >Aug 20 17:20:40 pi systemd[1]: Started c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518.service - /usr/bin/podman healthcheck run c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518. >Aug 20 17:20:40 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:20:40 pi podman[12854]: 2022-08-20 17:20:40.716402443 +0000 UTC m=+0.193476199 container exec c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 (image=docker.io/vaultwarden/server:latest, name=vaultwarden-server, org.opencontainers.image.url=https://hub.docker.com/r/vaultwarden/server, io.balena.qemu.version=7.0.0+balena1-aarch64, org.opencontainers.image.source=https://github.com/dani-garcia/vaultwarden, io.containers.autoupdate=registry, org.opencontainers.image.licenses=GPL-3.0-only, org.opencontainers.image.documentation=https://github.com/dani-garcia/vaultwarden/wiki, org.opencontainers.image.revision=ce9d93003cd37a79edba1ba830a6c6d3fa22c2c8, PODMAN_SYSTEMD_UNIT=container-vaultwarden-server.service, io.balena.architecture=aarch64, org.opencontainers.image.created=2022-07-27T18:44:18+00:00, org.opencontainers.image.version=1.25.2) >Aug 20 17:20:40 pi podman[12854]: 2022-08-20 17:20:40.752500157 +0000 UTC m=+0.229573894 container exec_died c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 (image=docker.io/vaultwarden/server:latest, name=vaultwarden-server, execID=92c9e0b6d34edbd44a20fefff2be6740a697475e02e01ad5f27bb0eb9c718d2a) >Aug 20 17:20:40 pi systemd[1]: 
c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518.service: Deactivated successfully. >Aug 20 17:20:40 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:20:42 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 47. >Aug 20 17:20:42 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:20:42 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:20:42 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:20:42 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon... >Aug 20 17:20:42 pi dbus-parsec[12878]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist))) >Aug 20 17:20:42 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE >Aug 20 17:20:42 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'. >Aug 20 17:20:42 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:20:42 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' >Aug 20 17:20:42 pi kernel: filter_IN_public_REJECT: IN=eth0 OUT= MAC= SRC=10.0.3.10 DST=239.255.255.250 LEN=118 TOS=0x00 PREC=0x00 TTL=2 ID=32079 DF PROTO=UDP SPT=33477 DPT=1900 LEN=98 >Aug 20 17:20:42 pi kernel: filter_IN_public_REJECT: IN=eth0 OUT= MAC= SRC=fe80:0000:0000:0000:4ec7:1f5e:5274:16ba DST=ff02:0000:0000:0000:0000:0000:0000:000c LEN=134 TC=0 HOPLIMIT=2 FLOWLBL=968519 PROTO=UDP SPT=35818 DPT=1900 LEN=94 >Aug 20 17:20:42 pi kernel: filter_IN_public_REJECT: IN=eth0 OUT= MAC= SRC=10.0.3.10 DST=255.255.255.255 LEN=118 TOS=0x00 PREC=0x00 TTL=64 ID=22482 DF PROTO=UDP SPT=33477 DPT=1900 LEN=98 >Aug 20 17:20:45 pi gitea-app[10857]: 2022/08/20 19:20:45 [630117ed] router: completed GET /Danacus/university-stuff/src/commit/fbe44a25cd05f56bbbd056f5901103a6e651c608/IW/C/oef3/src for 10.88.0.1:40948, 200 OK in 62.3ms @ repo/view.go:732(repo.Home) >Aug 20 17:20:52 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 48. >Aug 20 17:20:52 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:20:52 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:20:52 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:20:52 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon... >Aug 20 17:20:52 pi dbus-parsec[12884]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist))) >Aug 20 17:20:52 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE >Aug 20 17:20:52 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'. 
>Aug 20 17:20:52 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:20:52 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' >Aug 20 17:20:58 pi kernel: filter_IN_home_REJECT: IN=eth0 OUT= MAC=dc:a6:32:38:46:e7:28:87:ba:2a:e1:ff:08:00 SRC=10.0.1.168 DST=10.0.3.10 LEN=52 TOS=0x00 PREC=0x00 TTL=63 ID=31117 DF PROTO=TCP SPT=39928 DPT=443 WINDOW=200 RES=0x00 ACK FIN URGP=0 >Aug 20 17:20:58 pi kernel: filter_IN_home_REJECT: IN=eth0 OUT= MAC=dc:a6:32:38:46:e7:28:87:ba:2a:e1:ff:08:00 SRC=10.0.1.168 DST=10.0.3.10 LEN=52 TOS=0x00 PREC=0x00 TTL=63 ID=31118 DF PROTO=TCP SPT=39928 DPT=443 WINDOW=200 RES=0x00 ACK FIN URGP=0 >Aug 20 17:21:02 pi systemd[1]: Starting zezere_ignition.service - Run Ignition for Zezere... >Aug 20 17:21:02 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 49. >Aug 20 17:21:02 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:21:02 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:21:02 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:21:02 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon... 
Aug 20 17:21:02 pi zezere-ignition[12893]: INFO     : Ignition 2.14.0
Aug 20 17:21:02 pi zezere-ignition[12893]: INFO     : Stage: fetch
Aug 20 17:21:02 pi zezere-ignition[12893]: INFO     : no config dir at "/usr/lib/ignition/base.d"
Aug 20 17:21:02 pi zezere-ignition[12893]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/file"
Aug 20 17:21:02 pi zezere-ignition[12893]: DEBUG    : parsed url from cmdline: ""
Aug 20 17:21:02 pi zezere-ignition[12893]: INFO     : no config URL provided
Aug 20 17:21:02 pi zezere-ignition[12893]: INFO     : reading system config file "/usr/lib/ignition/user.ign"
Aug 20 17:21:02 pi zezere-ignition[12893]: INFO     : no config at "/usr/lib/ignition/user.ign"
Aug 20 17:21:02 pi zezere-ignition[12893]: INFO     : using config file at "/tmp/zezere-ignition-config-34f5ra1r.ign"
Aug 20 17:21:02 pi zezere-ignition[12893]: DEBUG    : parsing config with SHA512: 4202432df9c76a84fcaf0bb3dd81d9aec055ae366dee2bee1f88d5233bae27ac14ab0d1f9a3fbfa25e382806a76e601d9375479e30d495e751038695e77877a2
Aug 20 17:21:02 pi zezere-ignition[12893]: INFO     : GET https://provision.fedoraproject.org/netboot/aarch64/ignition/dc:a6:32:38:46:e7: attempt #1
Aug 20 17:21:02 pi dbus-parsec[12892]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
Aug 20 17:21:02 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
Aug 20 17:21:02 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
Aug 20 17:21:02 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
Aug 20 17:21:02 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Aug 20 17:21:03 pi zezere-ignition[12893]: INFO     : GET result: Not Found
Aug 20 17:21:03 pi zezere-ignition[12893]: WARNING  : failed to fetch config: resource not found
Aug 20 17:21:03 pi zezere-ignition[12893]: CRITICAL : failed to acquire config: resource not found
Aug 20 17:21:03 pi zezere-ignition[12893]: CRITICAL : Ignition failed: resource not found
Aug 20 17:21:03 pi zezere-ignition[12900]: INFO     : Ignition 2.14.0
Aug 20 17:21:03 pi zezere-ignition[12900]: INFO     : Stage: disks
Aug 20 17:21:03 pi zezere-ignition[12900]: INFO     : no config dir at "/usr/lib/ignition/base.d"
Aug 20 17:21:03 pi zezere-ignition[12900]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/file"
Aug 20 17:21:03 pi zezere-ignition[12900]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory
Aug 20 17:21:03 pi zezere-ignition[12900]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory
Aug 20 17:21:03 pi zezere-ignition[12907]: INFO     : Ignition 2.14.0
Aug 20 17:21:03 pi zezere-ignition[12907]: INFO     : Stage: mount
Aug 20 17:21:03 pi zezere-ignition[12907]: INFO     : no config dir at "/usr/lib/ignition/base.d"
Aug 20 17:21:03 pi zezere-ignition[12907]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/file"
Aug 20 17:21:03 pi zezere-ignition[12907]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory
Aug 20 17:21:03 pi zezere-ignition[12907]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory
Aug 20 17:21:03 pi zezere-ignition[12913]: INFO     : Ignition 2.14.0
Aug 20 17:21:03 pi zezere-ignition[12913]: INFO     : Stage: files
Aug 20 17:21:03 pi zezere-ignition[12913]: INFO     : no config dir at "/usr/lib/ignition/base.d"
Aug 20 17:21:03 pi zezere-ignition[12913]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/file"
Aug 20 17:21:03 pi zezere-ignition[12913]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory
Aug 20 17:21:03 pi zezere-ignition[12913]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory
Aug 20 17:21:03 pi zezere-ignition[12919]: INFO     : Ignition 2.14.0
Aug 20 17:21:03 pi zezere-ignition[12919]: INFO     : Stage: umount
Aug 20 17:21:03 pi zezere-ignition[12919]: INFO     : no config dir at "/usr/lib/ignition/base.d"
Aug 20 17:21:03 pi zezere-ignition[12919]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/file"
Aug 20 17:21:03 pi zezere-ignition[12919]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory
Aug 20 17:21:03 pi zezere-ignition[12919]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory
Aug 20 17:21:03 pi zezere-ignition[12891]: Running stage fetch with config file /tmp/zezere-ignition-config-34f5ra1r.ign
Aug 20 17:21:03 pi zezere-ignition[12891]: Running stage disks with config file /tmp/zezere-ignition-config-34f5ra1r.ign
Aug 20 17:21:03 pi zezere-ignition[12891]: Running stage mount with config file /tmp/zezere-ignition-config-34f5ra1r.ign
Aug 20 17:21:03 pi zezere-ignition[12891]: Running stage files with config file /tmp/zezere-ignition-config-34f5ra1r.ign
Aug 20 17:21:03 pi zezere-ignition[12891]: Running stage umount with config file /tmp/zezere-ignition-config-34f5ra1r.ign
Aug 20 17:21:03 pi systemd[1]: zezere_ignition.service: Deactivated successfully.
Aug 20 17:21:03 pi systemd[1]: Finished zezere_ignition.service - Run Ignition for Zezere.
Aug 20 17:21:03 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=zezere_ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:21:03 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=zezere_ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:21:09 pi systemd[1]: Started f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service - /usr/bin/podman healthcheck run f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.
Aug 20 17:21:09 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:21:12 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 50.
Aug 20 17:21:12 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:21:12 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:21:12 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
Aug 20 17:21:12 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
Aug 20 17:21:13 pi dbus-parsec[12933]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
Aug 20 17:21:13 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Aug 20 17:21:13 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
Aug 20 17:21:13 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
Aug 20 17:21:13 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
Aug 20 17:21:13 pi podman[12925]: 2022-08-20 17:21:13.166023193 +0000 UTC m=+3.662778114 container exec f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, org.opencontainers.image.created=2022-07-10T13:08:55.626Z, org.opencontainers.image.licenses=, org.opencontainers.image.version=2022.07.1, org.opencontainers.image.description=Pi-hole in a docker container, org.opencontainers.image.title=docker-pi-hole, org.opencontainers.image.url=https://github.com/pi-hole/docker-pi-hole, io.containers.autoupdate=registry, org.opencontainers.image.revision=7e69551be1b76d175fffc1b8c53733e74ee82520, org.opencontainers.image.source=https://github.com/pi-hole/docker-pi-hole, PODMAN_SYSTEMD_UNIT=container-pihole.service)
Aug 20 17:21:13 pi podman[12925]: 2022-08-20 17:21:13.219983431 +0000 UTC m=+3.716738483 container exec_died f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, execID=8e17f7a46b7d295808fd38306b5f03b43a6569629dcc2e01987f1485fc825bf4)
Aug 20 17:21:13 pi systemd[1]: f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service: Deactivated successfully.
Aug 20 17:21:13 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:21:13 pi systemd[1]: f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service: Consumed 7.211s CPU time.
Aug 20 17:21:14 pi gitea-app[10857]: 2022/08/20 19:21:14 [6301180a] router: completed GET /Danacus/dotfiles/src/commit/56b660e47aafa25d3bcfc27bf4511a329f9c9550/Pictures for 10.88.0.1:36476, 200 OK in 65.7ms @ repo/view.go:732(repo.Home)
Aug 20 17:21:19 pi gitea-app[10857]: 2022/08/20 19:21:19 [6301180f] router: completed GET /Danacus/university-stuff/action/watch?redirect_to=%2FDanacus%2Funiversity-stuff%2Fcommits%2Fcommit%2F4c8df90242840ccec37c22a6ca349e23915788a9%2FTAI%2F.ipynb_checkpoints for 10.88.0.1:39802, 405 Method Not Allowed in 0.8ms @ web/goget.go:21(web.goGet)
Aug 20 17:21:23 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 51.
Aug 20 17:21:23 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:21:23 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:21:23 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
Aug 20 17:21:23 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
Aug 20 17:21:23 pi dbus-parsec[12953]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
Aug 20 17:21:23 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
Aug 20 17:21:23 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
Aug 20 17:21:23 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
Aug 20 17:21:23 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Aug 20 17:21:27 pi gitea-app[10857]: 2022/08/20 19:21:27 [63011817] router: completed GET /Danacus/university-stuff/src/commit/e61ff08c96fefd71992d56b60dd47516a7972f2c/.metadata/.plugins/org.eclipse.core.resources/.root/.indexes for 10.88.0.1:51014, 200 OK in 63.5ms @ repo/view.go:732(repo.Home)
Aug 20 17:21:32 pi gitea-app[10857]: 2022/08/20 19:21:32 [6301181c] router: completed GET /Danacus/university-stuff/src/commit/275c6abfa6711912a3c464cd43f4dbfdc56e2326/BvP/default/lib/python3.7/site-packages/pylint/test/__pycache__/unittest_checker_similar.cpython-37.pyc for 10.88.0.1:51016, 200 OK in 78.7ms @ repo/view.go:732(repo.Home)
Aug 20 17:21:33 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 52.
Aug 20 17:21:33 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:21:33 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:21:33 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
Aug 20 17:21:33 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
Aug 20 17:21:33 pi dbus-parsec[12969]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
Aug 20 17:21:33 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
Aug 20 17:21:33 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
Aug 20 17:21:33 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
Aug 20 17:21:33 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Aug 20 17:21:34 pi gitea-app[10857]: 2022/08/20 19:21:34 [6301181e] router: completed GET /Danacus/university-stuff/action/watch?redirect_to=%2FDanacus%2Funiversity-stuff%2Fblame%2Fcommit%2F4eb41779c48bfc58b4fb3468d4d445499b8b8086%2FBvP%2Fnqueens.py for 10.88.0.1:51028, 405 Method Not Allowed in 0.8ms @ web/goget.go:21(web.goGet)
Aug 20 17:21:40 pi gitea-app[10857]: 2022/08/20 19:21:40 [63011824] router: completed GET /robots.txt for 10.88.0.1:47474, 404 Not Found in 19.3ms @ context/user.go:18(context.UserAssignmentWeb)
Aug 20 17:21:40 pi gitea-app[10857]: 2022/08/20 19:21:40 [63011824-2] router: completed GET /Danacus/university-stuff/src/commit/fbe44a25cd05f56bbbd056f5901103a6e651c608/IW for 10.88.0.1:47484, 200 OK in 100.8ms @ repo/view.go:732(repo.Home)
Aug 20 17:21:41 pi systemd[1]: Started c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518.service - /usr/bin/podman healthcheck run c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518.
Aug 20 17:21:41 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:21:43 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:21:43 pi systemd[1]: Started f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service - /usr/bin/podman healthcheck run f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.
Aug 20 17:21:43 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 53.
Aug 20 17:21:43 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
Aug 20 17:21:43 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:21:43 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:21:43 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
Aug 20 17:21:43 pi dbus-parsec[12992]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
Aug 20 17:21:43 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Aug 20 17:21:43 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
Aug 20 17:21:43 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
Aug 20 17:21:43 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
Aug 20 17:21:43 pi systemd[1]: sysroot-tmp-crun.hmG5QF.mount: Deactivated successfully.
Aug 20 17:21:43 pi podman[12977]: 2022-08-20 17:21:43.714542507 +0000 UTC m=+2.221621536 container exec c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 (image=docker.io/vaultwarden/server:latest, name=vaultwarden-server, org.opencontainers.image.version=1.25.2, io.balena.qemu.version=7.0.0+balena1-aarch64, io.containers.autoupdate=registry, org.opencontainers.image.url=https://hub.docker.com/r/vaultwarden/server, org.opencontainers.image.licenses=GPL-3.0-only, org.opencontainers.image.revision=ce9d93003cd37a79edba1ba830a6c6d3fa22c2c8, org.opencontainers.image.source=https://github.com/dani-garcia/vaultwarden, io.balena.architecture=aarch64, org.opencontainers.image.documentation=https://github.com/dani-garcia/vaultwarden/wiki, PODMAN_SYSTEMD_UNIT=container-vaultwarden-server.service, org.opencontainers.image.created=2022-07-27T18:44:18+00:00)
Aug 20 17:21:43 pi podman[12977]: 2022-08-20 17:21:43.76953337 +0000 UTC m=+2.276612398 container exec_died c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 (image=docker.io/vaultwarden/server:latest, name=vaultwarden-server, execID=6dc99effbb4e767a12ad9e6e1b2133b62fca05061a3b4175e7f9c6c1e377d3d5)
Aug 20 17:21:43 pi systemd[1]: c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518.service: Deactivated successfully.
Aug 20 17:21:43 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:21:43 pi systemd[1]: c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518.service: Consumed 4.283s CPU time.
Aug 20 17:21:47 pi podman[12985]: 2022-08-20 17:21:47.19229404 +0000 UTC m=+3.668119842 container exec f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, io.containers.autoupdate=registry, org.opencontainers.image.description=Pi-hole in a docker container, PODMAN_SYSTEMD_UNIT=container-pihole.service, org.opencontainers.image.licenses=, org.opencontainers.image.version=2022.07.1, org.opencontainers.image.title=docker-pi-hole, org.opencontainers.image.revision=7e69551be1b76d175fffc1b8c53733e74ee82520, org.opencontainers.image.url=https://github.com/pi-hole/docker-pi-hole, org.opencontainers.image.created=2022-07-10T13:08:55.626Z, org.opencontainers.image.source=https://github.com/pi-hole/docker-pi-hole)
Aug 20 17:21:47 pi podman[12985]: 2022-08-20 17:21:47.250472421 +0000 UTC m=+3.726298556 container exec_died f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, execID=515f2577552b1b51f6142524c13f60f0e90183cc476dc8a9bac24d98dcaea8bb)
Aug 20 17:21:47 pi systemd[1]: f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service: Deactivated successfully.
Aug 20 17:21:47 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:21:47 pi systemd[1]: f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service: Consumed 7.192s CPU time.
Aug 20 17:21:48 pi gitea-app[10857]: 2022/08/20 19:21:48 [6301182c] router: completed GET /Danacus/university-stuff/action/watch?redirect_to=%2FDanacus%2Funiversity-stuff%2Fcommits%2Fcommit%2F0c9cfb2cf3f93b1147983ce5cd18605d5324ee41%2FWetCom for 10.88.0.1:60430, 405 Method Not Allowed in 1.0ms @ web/goget.go:21(web.goGet)
Aug 20 17:21:53 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 54.
Aug 20 17:21:53 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:21:53 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:21:53 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
Aug 20 17:21:53 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
Aug 20 17:21:53 pi dbus-parsec[13023]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
Aug 20 17:21:53 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
Aug 20 17:21:53 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
Aug 20 17:21:53 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
Aug 20 17:21:53 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Aug 20 17:21:57 pi gitea-app[10857]: 2022/08/20 19:21:57 [63011835] router: completed GET /Danacus/university-stuff/src/commit/4c8df90242840ccec37c22a6ca349e23915788a9/Gegevensbanken/GB2019%20-%206%20SQL2%20en%20relationele%20calculus.pdf.xopp~ for 10.88.0.1:47660, 200 OK in 78.1ms @ repo/view.go:732(repo.Home)
Aug 20 17:21:59 pi gitea-app[10857]: 2022/08/20 19:21:59 [63011837] router: completed GET /Danacus/university-stuff/src/commit/31ca387f733dbd56601f960364de9fa25b4c9a37/Declaratieve%20Talen/haskell/voorbereiding2 for 10.88.0.1:47672, 200 OK in 70.3ms @ repo/view.go:732(repo.Home)
Aug 20 17:22:03 pi systemd[1]: Starting zezere_ignition.service - Run Ignition for Zezere...
Aug 20 17:22:03 pi zezere-ignition[13043]: INFO     : Ignition 2.14.0
Aug 20 17:22:03 pi zezere-ignition[13043]: INFO     : Stage: fetch
Aug 20 17:22:03 pi zezere-ignition[13043]: INFO     : no config dir at "/usr/lib/ignition/base.d"
Aug 20 17:22:03 pi zezere-ignition[13043]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/file"
Aug 20 17:22:03 pi zezere-ignition[13043]: DEBUG    : parsed url from cmdline: ""
Aug 20 17:22:03 pi zezere-ignition[13043]: INFO     : no config URL provided
Aug 20 17:22:03 pi zezere-ignition[13043]: INFO     : reading system config file "/usr/lib/ignition/user.ign"
Aug 20 17:22:03 pi zezere-ignition[13043]: INFO     : no config at "/usr/lib/ignition/user.ign"
Aug 20 17:22:03 pi zezere-ignition[13043]: INFO     : using config file at "/tmp/zezere-ignition-config-wc55jdjs.ign"
Aug 20 17:22:03 pi zezere-ignition[13043]: DEBUG    : parsing config with SHA512: 4202432df9c76a84fcaf0bb3dd81d9aec055ae366dee2bee1f88d5233bae27ac14ab0d1f9a3fbfa25e382806a76e601d9375479e30d495e751038695e77877a2
Aug 20 17:22:03 pi zezere-ignition[13043]: INFO     : GET https://provision.fedoraproject.org/netboot/aarch64/ignition/dc:a6:32:38:46:e7: attempt #1
Aug 20 17:22:03 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 55.
Aug 20 17:22:03 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
Aug 20 17:22:03 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:22:03 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:22:03 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
Aug 20 17:22:04 pi dbus-parsec[13051]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
Aug 20 17:22:04 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
Aug 20 17:22:04 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
Aug 20 17:22:04 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
Aug 20 17:22:04 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Aug 20 17:22:04 pi zezere-ignition[13043]: INFO     : GET result: Not Found
Aug 20 17:22:04 pi zezere-ignition[13043]: WARNING  : failed to fetch config: resource not found
Aug 20 17:22:04 pi zezere-ignition[13043]: CRITICAL : failed to acquire config: resource not found
Aug 20 17:22:04 pi zezere-ignition[13043]: CRITICAL : Ignition failed: resource not found
Aug 20 17:22:04 pi zezere-ignition[13052]: INFO     : Ignition 2.14.0
Aug 20 17:22:04 pi zezere-ignition[13052]: INFO     : Stage: disks
Aug 20 17:22:04 pi zezere-ignition[13052]: INFO     : no config dir at "/usr/lib/ignition/base.d"
Aug 20 17:22:04 pi zezere-ignition[13052]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/file"
Aug 20 17:22:04 pi zezere-ignition[13052]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory
Aug 20 17:22:04 pi zezere-ignition[13052]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory
Aug 20 17:22:04 pi zezere-ignition[13059]: INFO     : Ignition 2.14.0
Aug 20 17:22:04 pi zezere-ignition[13059]: INFO     : Stage: mount
Aug 20 17:22:04 pi zezere-ignition[13059]: INFO     : no config dir at "/usr/lib/ignition/base.d"
Aug 20 17:22:04 pi zezere-ignition[13059]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/file"
Aug 20 17:22:04 pi zezere-ignition[13059]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory
Aug 20 17:22:04 pi zezere-ignition[13059]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory
Aug 20 17:22:04 pi zezere-ignition[13065]: INFO     : Ignition 2.14.0
Aug 20 17:22:04 pi zezere-ignition[13065]: INFO     : Stage: files
Aug 20 17:22:04 pi zezere-ignition[13065]: INFO     : no config dir at "/usr/lib/ignition/base.d"
Aug 20 17:22:04 pi zezere-ignition[13065]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/file"
Aug 20 17:22:04 pi zezere-ignition[13065]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory
Aug 20 17:22:04 pi zezere-ignition[13065]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory
Aug 20 17:22:04 pi zezere-ignition[13071]: INFO     : Ignition 2.14.0
Aug 20 17:22:04 pi zezere-ignition[13071]: INFO     : Stage: umount
Aug 20 17:22:04 pi zezere-ignition[13071]: INFO     : no config dir at "/usr/lib/ignition/base.d"
Aug 20 17:22:04 pi zezere-ignition[13071]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/file"
Aug 20 17:22:04 pi zezere-ignition[13071]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory
Aug 20 17:22:04 pi zezere-ignition[13071]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory
Aug 20 17:22:04 pi zezere-ignition[13042]: Running stage fetch with config file /tmp/zezere-ignition-config-wc55jdjs.ign
Aug 20 17:22:04 pi zezere-ignition[13042]: Running stage disks with config file /tmp/zezere-ignition-config-wc55jdjs.ign
Aug 20 17:22:04 pi zezere-ignition[13042]: Running stage mount with config file /tmp/zezere-ignition-config-wc55jdjs.ign
Aug 20 17:22:04 pi zezere-ignition[13042]: Running stage files with config file /tmp/zezere-ignition-config-wc55jdjs.ign
Aug 20 17:22:04 pi zezere-ignition[13042]: Running stage umount with config file /tmp/zezere-ignition-config-wc55jdjs.ign
Aug 20 17:22:04 pi systemd[1]: zezere_ignition.service: Deactivated successfully.
Aug 20 17:22:04 pi systemd[1]: Finished zezere_ignition.service - Run Ignition for Zezere.
Aug 20 17:22:04 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=zezere_ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:22:04 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=zezere_ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:22:05 pi gitea-app[10857]: 2022/08/20 19:22:05 [6301183d] router: completed GET /Danacus/university-stuff/src/commit/fc327651fc9032b58eebd90044f247d941995b4d/Declaratieve%20Talen/haskell/Week%203/Preparation%203/Testing.hs for 10.88.0.1:54922, 200 OK in 56.5ms @ repo/view.go:732(repo.Home)
Aug 20 17:22:09 pi gitea-app[10857]: 2022/08/20 19:22:09 [63011841] router: completed GET /Danacus/university-stuff/blame/commit/0c7fd6962972e3a7dd00087e40de01701b33ca31/Numerieke/zit08_matlab/test_lagrange3.m for 10.88.0.1:54924, 200 OK in 347.9ms @ repo/blame.go:47(repo.RefBlame)
Aug 20 17:22:14 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 56.
Aug 20 17:22:14 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:22:14 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:22:14 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
Aug 20 17:22:14 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
Aug 20 17:22:14 pi dbus-parsec[13092]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
Aug 20 17:22:14 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
Aug 20 17:22:14 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
Aug 20 17:22:14 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
Aug 20 17:22:14 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Aug 20 17:22:17 pi systemd[1]: Started f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service - /usr/bin/podman healthcheck run f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.
Aug 20 17:22:17 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:22:18 pi gitea-app[10857]: 2022/08/20 19:22:18 [6301184a] router: completed GET /Danacus/university-stuff/src/commit/7d45c2001d65f9a1b8495eb7d6f8c8ca56614014/SoRTES/.2020_3_Memory.autosave.xopp for 10.88.0.1:52670, 200 OK in 55.7ms @ repo/view.go:732(repo.Home)
Aug 20 17:22:21 pi gitea-app[10857]: 2022/08/20 19:22:21 [6301184d] router: completed GET /Danacus/university-stuff/src/commit/6e1e945814294f6e4aed91e63230f19bf77812ec/Bewijzen%20en%20redeneren/Huistaak-Week8.pdf.xopp~ for 10.88.0.1:52682, 200 OK in 62.2ms @ repo/view.go:732(repo.Home)
Aug 20 17:22:22 pi podman[13094]: 2022-08-20 17:22:22.349516249 +0000 UTC m=+4.825215810 container exec f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, org.opencontainers.image.description=Pi-hole in a docker container, org.opencontainers.image.revision=7e69551be1b76d175fffc1b8c53733e74ee82520, org.opencontainers.image.version=2022.07.1, org.opencontainers.image.url=https://github.com/pi-hole/docker-pi-hole, org.opencontainers.image.source=https://github.com/pi-hole/docker-pi-hole, io.containers.autoupdate=registry, org.opencontainers.image.licenses=, org.opencontainers.image.title=docker-pi-hole, org.opencontainers.image.created=2022-07-10T13:08:55.626Z, PODMAN_SYSTEMD_UNIT=container-pihole.service)
Aug 20 17:22:22 pi podman[13094]: 2022-08-20 17:22:22.410457466 +0000 UTC m=+4.886157360 container exec_died f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, execID=aa519d963ea50719bb1ed72121bf9f45bf7be52444f97dc918473d07a8c8ad7f)
Aug 20 17:22:22 pi systemd[1]: f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service: Deactivated successfully.
Aug 20 17:22:22 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:22:22 pi systemd[1]: f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service: Consumed 9.541s CPU time.
Aug 20 17:22:24 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 57.
Aug 20 17:22:24 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:22:24 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:22:24 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
Aug 20 17:22:24 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
Aug 20 17:22:24 pi dbus-parsec[13127]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
Aug 20 17:22:24 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
Aug 20 17:22:24 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
Aug 20 17:22:24 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:22:24 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' >Aug 20 17:22:24 pi gitea-app[10857]: 2022/08/20 19:22:24 [63011850] router: completed GET /Danacus/university-stuff/src/commit/6b3808900ac3849e64af3fe38cb55ddff18e12df/Logica/output1b.dat for 10.88.0.1:52690, 200 OK in 64.8ms @ repo/view.go:732(repo.Home) >Aug 20 17:22:34 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 58. >Aug 20 17:22:34 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:22:34 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:22:34 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:22:34 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon... >Aug 20 17:22:34 pi dbus-parsec[13138]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist))) >Aug 20 17:22:34 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE >Aug 20 17:22:34 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'. >Aug 20 17:22:34 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:22:34 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' >Aug 20 17:22:42 pi kernel: filter_IN_public_REJECT: IN=eth0 OUT= MAC= SRC=10.0.3.10 DST=239.255.255.250 LEN=118 TOS=0x00 PREC=0x00 TTL=2 ID=34958 DF PROTO=UDP SPT=33477 DPT=1900 LEN=98 >Aug 20 17:22:42 pi kernel: filter_IN_public_REJECT: IN=eth0 OUT= MAC= SRC=fe80:0000:0000:0000:4ec7:1f5e:5274:16ba DST=ff02:0000:0000:0000:0000:0000:0000:000c LEN=134 TC=0 HOPLIMIT=2 FLOWLBL=968519 PROTO=UDP SPT=35818 DPT=1900 LEN=94 >Aug 20 17:22:42 pi kernel: filter_IN_public_REJECT: IN=eth0 OUT= MAC= SRC=10.0.3.10 DST=255.255.255.255 LEN=118 TOS=0x00 PREC=0x00 TTL=64 ID=24868 DF PROTO=UDP SPT=33477 DPT=1900 LEN=98 >Aug 20 17:22:44 pi systemd[1]: Started c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518.service - /usr/bin/podman healthcheck run c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518. >Aug 20 17:22:44 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:22:44 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:22:44 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:22:44 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 59. >Aug 20 17:22:44 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:22:44 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon... 
>Aug 20 17:22:45 pi dbus-parsec[13153]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist))) >Aug 20 17:22:45 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' >Aug 20 17:22:45 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE >Aug 20 17:22:45 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'. >Aug 20 17:22:45 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:22:51 pi podman[13145]: 2022-08-20 17:22:51.069527 +0000 UTC m=+6.546304634 container exec c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 (image=docker.io/vaultwarden/server:latest, name=vaultwarden-server, org.opencontainers.image.version=1.25.2, PODMAN_SYSTEMD_UNIT=container-vaultwarden-server.service, io.balena.architecture=aarch64, io.containers.autoupdate=registry, org.opencontainers.image.documentation=https://github.com/dani-garcia/vaultwarden/wiki, org.opencontainers.image.revision=ce9d93003cd37a79edba1ba830a6c6d3fa22c2c8, org.opencontainers.image.url=https://hub.docker.com/r/vaultwarden/server, org.opencontainers.image.source=https://github.com/dani-garcia/vaultwarden, io.balena.qemu.version=7.0.0+balena1-aarch64, org.opencontainers.image.licenses=GPL-3.0-only, org.opencontainers.image.created=2022-07-27T18:44:18+00:00) >Aug 20 17:22:51 pi podman[13145]: 2022-08-20 17:22:51.119402948 +0000 UTC m=+6.596180582 container exec_died c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 (image=docker.io/vaultwarden/server:latest, name=vaultwarden-server, execID=db0e3796d3f541b90a57ef7f5ec05d6ee012cf215057208df5b0c0f33e9acd23) >Aug 20 17:22:51 pi systemd[1]: c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518.service: Deactivated successfully. 
>Aug 20 17:22:51 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:22:51 pi systemd[1]: c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518.service: Consumed 12.973s CPU time. >Aug 20 17:22:53 pi systemd[1]: Started f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service - /usr/bin/podman healthcheck run f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e. >Aug 20 17:22:53 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:22:55 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 60. >Aug 20 17:22:55 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:22:55 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:22:55 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:22:55 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon... 
>Aug 20 17:22:55 pi podman[13171]: 2022-08-20 17:22:55.278950479 +0000 UTC m=+1.756231600 container exec f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, org.opencontainers.image.licenses=, org.opencontainers.image.url=https://github.com/pi-hole/docker-pi-hole, org.opencontainers.image.description=Pi-hole in a docker container, org.opencontainers.image.revision=7e69551be1b76d175fffc1b8c53733e74ee82520, org.opencontainers.image.source=https://github.com/pi-hole/docker-pi-hole, org.opencontainers.image.created=2022-07-10T13:08:55.626Z, org.opencontainers.image.title=docker-pi-hole, org.opencontainers.image.version=2022.07.1, PODMAN_SYSTEMD_UNIT=container-pihole.service, io.containers.autoupdate=registry) >Aug 20 17:22:55 pi dbus-parsec[13190]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist))) >Aug 20 17:22:55 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE >Aug 20 17:22:55 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'. >Aug 20 17:22:55 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:22:55 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' >Aug 20 17:22:55 pi podman[13171]: 2022-08-20 17:22:55.330124308 +0000 UTC m=+1.807405633 container exec_died f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, execID=88b544404abb4085c3bd67ad88cf7a0a63c56158e1cdf6b3fe7ca4d1e9907631) >Aug 20 17:22:55 pi systemd[1]: f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service: Deactivated successfully. 
>Aug 20 17:22:55 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:22:55 pi systemd[1]: f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service: Consumed 3.310s CPU time. >Aug 20 17:23:03 pi gitea-app[10857]: 2022/08/20 19:23:03 [63011877] router: completed GET /Danacus/university-stuff/src/commit/0c7fd6962972e3a7dd00087e40de01701b33ca31/Modellering%20en%20simulatie/oefeningen for 10.88.0.1:45470, 200 OK in 65.4ms @ repo/view.go:732(repo.Home) >Aug 20 17:23:05 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 61. >Aug 20 17:23:05 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:23:05 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:23:05 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:23:05 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon... >Aug 20 17:23:05 pi systemd[1]: Starting zezere_ignition.service - Run Ignition for Zezere... >Aug 20 17:23:05 pi dbus-parsec[13210]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist))) >Aug 20 17:23:05 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE >Aug 20 17:23:05 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'. >Aug 20 17:23:05 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. 
>Aug 20 17:23:05 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' >Aug 20 17:23:05 pi zezere-ignition[13212]: INFO : Ignition 2.14.0 >Aug 20 17:23:05 pi zezere-ignition[13212]: INFO : Stage: fetch >Aug 20 17:23:05 pi zezere-ignition[13212]: INFO : no config dir at "/usr/lib/ignition/base.d" >Aug 20 17:23:05 pi zezere-ignition[13212]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file" >Aug 20 17:23:05 pi zezere-ignition[13212]: DEBUG : parsed url from cmdline: "" >Aug 20 17:23:05 pi zezere-ignition[13212]: INFO : no config URL provided >Aug 20 17:23:05 pi zezere-ignition[13212]: INFO : reading system config file "/usr/lib/ignition/user.ign" >Aug 20 17:23:05 pi zezere-ignition[13212]: INFO : no config at "/usr/lib/ignition/user.ign" >Aug 20 17:23:05 pi zezere-ignition[13212]: INFO : using config file at "/tmp/zezere-ignition-config-flrkdk17.ign" >Aug 20 17:23:05 pi zezere-ignition[13212]: DEBUG : parsing config with SHA512: 4202432df9c76a84fcaf0bb3dd81d9aec055ae366dee2bee1f88d5233bae27ac14ab0d1f9a3fbfa25e382806a76e601d9375479e30d495e751038695e77877a2 >Aug 20 17:23:05 pi zezere-ignition[13212]: INFO : GET https://provision.fedoraproject.org/netboot/aarch64/ignition/dc:a6:32:38:46:e7: attempt #1 >Aug 20 17:23:06 pi zezere-ignition[13212]: INFO : GET result: Not Found >Aug 20 17:23:06 pi zezere-ignition[13212]: WARNING : failed to fetch config: resource not found >Aug 20 17:23:06 pi zezere-ignition[13212]: CRITICAL : failed to acquire config: resource not found >Aug 20 17:23:06 pi zezere-ignition[13212]: CRITICAL : Ignition failed: resource not found >Aug 20 17:23:06 pi zezere-ignition[13219]: INFO : Ignition 2.14.0 >Aug 20 17:23:06 pi zezere-ignition[13219]: INFO : Stage: disks >Aug 20 17:23:06 pi zezere-ignition[13219]: INFO : no config dir at "/usr/lib/ignition/base.d" >Aug 20 17:23:06 pi 
zezere-ignition[13219]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file" >Aug 20 17:23:06 pi zezere-ignition[13219]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory >Aug 20 17:23:06 pi zezere-ignition[13219]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory >Aug 20 17:23:06 pi zezere-ignition[13225]: INFO : Ignition 2.14.0 >Aug 20 17:23:06 pi zezere-ignition[13225]: INFO : Stage: mount >Aug 20 17:23:06 pi zezere-ignition[13225]: INFO : no config dir at "/usr/lib/ignition/base.d" >Aug 20 17:23:06 pi zezere-ignition[13225]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file" >Aug 20 17:23:06 pi zezere-ignition[13225]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory >Aug 20 17:23:06 pi zezere-ignition[13225]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory >Aug 20 17:23:06 pi zezere-ignition[13232]: INFO : Ignition 2.14.0 >Aug 20 17:23:06 pi zezere-ignition[13232]: INFO : Stage: files >Aug 20 17:23:06 pi zezere-ignition[13232]: INFO : no config dir at "/usr/lib/ignition/base.d" >Aug 20 17:23:06 pi zezere-ignition[13232]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file" >Aug 20 17:23:06 pi zezere-ignition[13232]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory >Aug 20 17:23:06 pi zezere-ignition[13232]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory >Aug 20 17:23:06 pi zezere-ignition[13238]: INFO : Ignition 2.14.0 >Aug 20 17:23:06 pi zezere-ignition[13238]: INFO : Stage: umount >Aug 20 17:23:06 pi zezere-ignition[13238]: INFO : no config dir at "/usr/lib/ignition/base.d" >Aug 20 17:23:06 pi zezere-ignition[13238]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file" >Aug 20 17:23:06 pi zezere-ignition[13238]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory >Aug 20 
17:23:06 pi zezere-ignition[13238]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory >Aug 20 17:23:06 pi zezere-ignition[13211]: Running stage fetch with config file /tmp/zezere-ignition-config-flrkdk17.ign >Aug 20 17:23:06 pi zezere-ignition[13211]: Running stage disks with config file /tmp/zezere-ignition-config-flrkdk17.ign >Aug 20 17:23:06 pi zezere-ignition[13211]: Running stage mount with config file /tmp/zezere-ignition-config-flrkdk17.ign >Aug 20 17:23:06 pi zezere-ignition[13211]: Running stage files with config file /tmp/zezere-ignition-config-flrkdk17.ign >Aug 20 17:23:06 pi zezere-ignition[13211]: Running stage umount with config file /tmp/zezere-ignition-config-flrkdk17.ign >Aug 20 17:23:06 pi systemd[1]: zezere_ignition.service: Deactivated successfully. >Aug 20 17:23:06 pi systemd[1]: Finished zezere_ignition.service - Run Ignition for Zezere. >Aug 20 17:23:06 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=zezere_ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:23:06 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=zezere_ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:23:15 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 62. >Aug 20 17:23:15 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:23:15 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' >Aug 20 17:23:15 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:23:15 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon... >Aug 20 17:23:15 pi dbus-parsec[13245]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist))) >Aug 20 17:23:15 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE >Aug 20 17:23:15 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'. >Aug 20 17:23:15 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:23:15 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' >Aug 20 17:23:18 pi gitea-app[10857]: 2022/08/20 19:23:18 [63011886] router: completed GET /Danacus/university-stuff/action/star?redirect_to=%2FDanacus%2Funiversity-stuff%2Fcommits%2Fcommit%2Fafb00cdc62bc73b0cf5d5d6ee700f01273be80b4%2F.metadata%2F.plugins%2Forg.eclipse.ui.ide for 10.88.0.1:40028, 405 Method Not Allowed in 0.8ms @ web/goget.go:21(web.goGet) >Aug 20 17:23:25 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 63. >Aug 20 17:23:25 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:23:25 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:23:25 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:23:25 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon... 
>Aug 20 17:23:25 pi systemd[1]: Started f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service - /usr/bin/podman healthcheck run f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e. >Aug 20 17:23:25 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:23:26 pi dbus-parsec[13250]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist))) >Aug 20 17:23:26 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE >Aug 20 17:23:26 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'. >Aug 20 17:23:26 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:23:26 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' >Aug 20 17:23:31 pi gitea-app[10857]: 2022/08/20 19:23:31 [63011893] router: completed GET /Danacus/university-stuff/blame/commit/e258876bec8aadcadec0488db242b3883b308301/Logica/A10 for 10.88.0.1:53714, 200 OK in 167.2ms @ repo/blame.go:47(repo.RefBlame) >Aug 20 17:23:32 pi podman[13251]: 2022-08-20 17:23:32.62055161 +0000 UTC m=+6.599231715 container exec f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, org.opencontainers.image.revision=7e69551be1b76d175fffc1b8c53733e74ee82520, org.opencontainers.image.title=docker-pi-hole, org.opencontainers.image.version=2022.07.1, org.opencontainers.image.url=https://github.com/pi-hole/docker-pi-hole, org.opencontainers.image.created=2022-07-10T13:08:55.626Z, org.opencontainers.image.licenses=, org.opencontainers.image.source=https://github.com/pi-hole/docker-pi-hole, PODMAN_SYSTEMD_UNIT=container-pihole.service, org.opencontainers.image.description=Pi-hole in a docker container, io.containers.autoupdate=registry) >Aug 20 17:23:32 pi podman[13251]: 2022-08-20 17:23:32.663236778 +0000 UTC m=+6.641917161 container exec_died f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, execID=655b7a97d3f9f16fb33acecc9db7a7b902a84f922ae90e4e71e44c4292f06f07) >Aug 20 17:23:32 pi systemd[1]: f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service: Deactivated successfully. >Aug 20 17:23:32 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:23:32 pi systemd[1]: f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service: Consumed 13.006s CPU time. >Aug 20 17:23:36 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 64. 
>Aug 20 17:23:36 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:23:36 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:23:36 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:23:36 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon... >Aug 20 17:23:36 pi dbus-parsec[13282]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist))) >Aug 20 17:23:36 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE >Aug 20 17:23:36 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'. >Aug 20 17:23:36 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:23:36 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' >Aug 20 17:23:43 pi gitea-app[10857]: 2022/08/20 19:23:43 [6301189f] router: completed GET /Danacus/university-stuff/src/commit/861fab1ebde9763752f62451cb075b31445d4966/BvP/default/lib/python3.7/site-packages/isort for 10.88.0.1:45342, 200 OK in 59.0ms @ repo/view.go:732(repo.Home) >Aug 20 17:23:46 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 65. >Aug 20 17:23:46 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' >Aug 20 17:23:46 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:23:46 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:23:46 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon... >Aug 20 17:23:46 pi dbus-parsec[13289]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist))) >Aug 20 17:23:46 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE >Aug 20 17:23:46 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'. >Aug 20 17:23:46 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:23:46 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' >Aug 20 17:23:47 pi gitea-app[10857]: 2022/08/20 19:23:47 [630118a3] router: completed GET /Danacus/university-stuff/src/commit/875a4b71c8132b9ee2eea3b8ea48335ab272ee61/Logica/oef for 10.88.0.1:49774, 200 OK in 60.1ms @ repo/view.go:732(repo.Home) >Aug 20 17:23:48 pi gitea-app[10857]: 2022/08/20 19:23:48 [630118a3-7] router: completed GET /Danacus/university-stuff/src/commit/a3382ccbb2c61efe658e1a9719b94f15bdf2733d/Logica for 10.88.0.1:49788, 200 OK in 106.7ms @ repo/view.go:732(repo.Home) >Aug 20 17:23:50 pi gitea-app[10857]: 2022/08/20 19:23:50 [630118a6] router: completed GET /Danacus/university-stuff/src/commit/f1a5039ad050d55116ca92235371f77f4242345e/Besturingssystemen/Zitting7/Opgave4/VM/physical_address_width.S for 10.88.0.1:49796, 200 OK in 59.5ms @ repo/view.go:732(repo.Home) >Aug 20 17:23:51 pi systemd[1]: Started c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518.service - /usr/bin/podman healthcheck run c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518. >Aug 20 17:23:51 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' >Aug 20 17:23:56 pi podman[13306]: 2022-08-20 17:23:56.542938221 +0000 UTC m=+5.041417825 container exec c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 (image=docker.io/vaultwarden/server:latest, name=vaultwarden-server, io.balena.architecture=aarch64, org.opencontainers.image.documentation=https://github.com/dani-garcia/vaultwarden/wiki, org.opencontainers.image.source=https://github.com/dani-garcia/vaultwarden, io.balena.qemu.version=7.0.0+balena1-aarch64, io.containers.autoupdate=registry, org.opencontainers.image.licenses=GPL-3.0-only, org.opencontainers.image.url=https://hub.docker.com/r/vaultwarden/server, PODMAN_SYSTEMD_UNIT=container-vaultwarden-server.service, org.opencontainers.image.created=2022-07-27T18:44:18+00:00, org.opencontainers.image.version=1.25.2, org.opencontainers.image.revision=ce9d93003cd37a79edba1ba830a6c6d3fa22c2c8) >Aug 20 17:23:56 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 66. >Aug 20 17:23:56 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:23:56 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:23:56 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:23:56 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon... 
>Aug 20 17:23:56 pi podman[13306]: 2022-08-20 17:23:56.669828435 +0000 UTC m=+5.168308020 container exec_died c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 (image=docker.io/vaultwarden/server:latest, name=vaultwarden-server, execID=638acf0a31faf2a4e717fa604cad1adcb52df462a7f8a3c6bf567bd0d1478cdd) >Aug 20 17:23:56 pi dbus-parsec[13331]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist))) >Aug 20 17:23:56 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE >Aug 20 17:23:56 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'. >Aug 20 17:23:56 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:23:56 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' >Aug 20 17:23:57 pi systemd[1]: c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518.service: Deactivated successfully. >Aug 20 17:23:57 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:23:57 pi systemd[1]: c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518.service: Consumed 9.662s CPU time. >Aug 20 17:24:03 pi systemd[1]: Started f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service - /usr/bin/podman healthcheck run f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e. >Aug 20 17:24:03 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' >Aug 20 17:24:06 pi systemd[1]: Starting zezere_ignition.service - Run Ignition for Zezere... >Aug 20 17:24:06 pi zezere-ignition[13346]: INFO : Ignition 2.14.0 >Aug 20 17:24:06 pi zezere-ignition[13346]: INFO : Stage: fetch >Aug 20 17:24:06 pi zezere-ignition[13346]: INFO : no config dir at "/usr/lib/ignition/base.d" >Aug 20 17:24:06 pi zezere-ignition[13346]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file" >Aug 20 17:24:06 pi zezere-ignition[13346]: DEBUG : parsed url from cmdline: "" >Aug 20 17:24:06 pi zezere-ignition[13346]: INFO : no config URL provided >Aug 20 17:24:06 pi zezere-ignition[13346]: INFO : reading system config file "/usr/lib/ignition/user.ign" >Aug 20 17:24:06 pi zezere-ignition[13346]: INFO : no config at "/usr/lib/ignition/user.ign" >Aug 20 17:24:06 pi zezere-ignition[13346]: INFO : using config file at "/tmp/zezere-ignition-config-h0d1f5yn.ign" >Aug 20 17:24:06 pi zezere-ignition[13346]: DEBUG : parsing config with SHA512: 4202432df9c76a84fcaf0bb3dd81d9aec055ae366dee2bee1f88d5233bae27ac14ab0d1f9a3fbfa25e382806a76e601d9375479e30d495e751038695e77877a2 >Aug 20 17:24:06 pi zezere-ignition[13346]: INFO : GET https://provision.fedoraproject.org/netboot/aarch64/ignition/dc:a6:32:38:46:e7: attempt #1 >Aug 20 17:24:06 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 67. >Aug 20 17:24:06 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:24:06 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:24:06 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success'
>Aug 20 17:24:06 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
>Aug 20 17:24:07 pi dbus-parsec[13352]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
>Aug 20 17:24:07 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
>Aug 20 17:24:07 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
>Aug 20 17:24:07 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
>Aug 20 17:24:07 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:24:07 pi podman[13337]: 2022-08-20 17:24:07.430767682 +0000 UTC m=+3.926185896 container exec f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, org.opencontainers.image.title=docker-pi-hole, PODMAN_SYSTEMD_UNIT=container-pihole.service, io.containers.autoupdate=registry, org.opencontainers.image.version=2022.07.1, org.opencontainers.image.url=https://github.com/pi-hole/docker-pi-hole, org.opencontainers.image.description=Pi-hole in a docker container, org.opencontainers.image.revision=7e69551be1b76d175fffc1b8c53733e74ee82520, org.opencontainers.image.source=https://github.com/pi-hole/docker-pi-hole, org.opencontainers.image.created=2022-07-10T13:08:55.626Z, org.opencontainers.image.licenses=)
>Aug 20 17:24:07 pi podman[13337]: 2022-08-20 17:24:07.489450423 +0000 UTC m=+3.984868785 container exec_died f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, execID=178ac461c9654e1c2756519f10e4918e379061f370847b4a33d28636423faa76)
>Aug 20 17:24:07 pi zezere-ignition[13346]: INFO : GET result: Not Found
>Aug 20 17:24:07 pi zezere-ignition[13346]: WARNING : failed to fetch config: resource not found
>Aug 20 17:24:07 pi zezere-ignition[13346]: CRITICAL : failed to acquire config: resource not found
>Aug 20 17:24:07 pi zezere-ignition[13346]: CRITICAL : Ignition failed: resource not found
>Aug 20 17:24:07 pi systemd[1]: f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service: Deactivated successfully.
>Aug 20 17:24:07 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:24:07 pi systemd[1]: f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service: Consumed 7.648s CPU time.
>Aug 20 17:24:07 pi zezere-ignition[13368]: INFO : Ignition 2.14.0
>Aug 20 17:24:07 pi zezere-ignition[13368]: INFO : Stage: disks
>Aug 20 17:24:07 pi zezere-ignition[13368]: INFO : no config dir at "/usr/lib/ignition/base.d"
>Aug 20 17:24:07 pi zezere-ignition[13368]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file"
>Aug 20 17:24:07 pi zezere-ignition[13368]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory
>Aug 20 17:24:07 pi zezere-ignition[13368]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory
>Aug 20 17:24:07 pi zezere-ignition[13374]: INFO : Ignition 2.14.0
>Aug 20 17:24:07 pi zezere-ignition[13374]: INFO : Stage: mount
>Aug 20 17:24:07 pi zezere-ignition[13374]: INFO : no config dir at "/usr/lib/ignition/base.d"
>Aug 20 17:24:07 pi zezere-ignition[13374]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file"
>Aug 20 17:24:07 pi zezere-ignition[13374]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory
>Aug 20 17:24:07 pi zezere-ignition[13374]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory
>Aug 20 17:24:07 pi zezere-ignition[13380]: INFO : Ignition 2.14.0
>Aug 20 17:24:07 pi zezere-ignition[13380]: INFO : Stage: files
>Aug 20 17:24:07 pi zezere-ignition[13380]: INFO : no config dir at "/usr/lib/ignition/base.d"
>Aug 20 17:24:07 pi zezere-ignition[13380]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file"
>Aug 20 17:24:07 pi zezere-ignition[13380]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory
>Aug 20 17:24:07 pi zezere-ignition[13380]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory
>Aug 20 17:24:07 pi zezere-ignition[13386]: INFO : Ignition 2.14.0
>Aug 20 17:24:07 pi zezere-ignition[13386]: INFO : Stage: umount
>Aug 20 17:24:07 pi zezere-ignition[13386]: INFO : no config dir at "/usr/lib/ignition/base.d"
>Aug 20 17:24:07 pi zezere-ignition[13386]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file"
>Aug 20 17:24:07 pi zezere-ignition[13386]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory
>Aug 20 17:24:07 pi zezere-ignition[13386]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory
>Aug 20 17:24:07 pi zezere-ignition[13345]: Running stage fetch with config file /tmp/zezere-ignition-config-h0d1f5yn.ign
>Aug 20 17:24:07 pi zezere-ignition[13345]: Running stage disks with config file /tmp/zezere-ignition-config-h0d1f5yn.ign
>Aug 20 17:24:07 pi zezere-ignition[13345]: Running stage mount with config file /tmp/zezere-ignition-config-h0d1f5yn.ign
>Aug 20 17:24:07 pi zezere-ignition[13345]: Running stage files with config file /tmp/zezere-ignition-config-h0d1f5yn.ign
>Aug 20 17:24:07 pi zezere-ignition[13345]: Running stage umount with config file /tmp/zezere-ignition-config-h0d1f5yn.ign
>Aug 20 17:24:07 pi systemd[1]: zezere_ignition.service: Deactivated successfully.
>Aug 20 17:24:07 pi systemd[1]: Finished zezere_ignition.service - Run Ignition for Zezere.
>Aug 20 17:24:07 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=zezere_ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:24:07 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=zezere_ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:24:11 pi gitea-app[10857]: 2022/08/20 19:24:11 [630118bb] router: completed GET /robots.txt for 10.88.0.1:58730, 404 Not Found in 5.9ms @ context/user.go:18(context.UserAssignmentWeb)
>Aug 20 17:24:12 pi gitea-app[10857]: 2022/08/20 19:24:12 [630118bc] router: completed GET /Danacus/university-stuff/action/star?redirect_to=%2FDanacus%2Funiversity-stuff%2Fblame%2Fbranch%2Fmaster%2FBvP%2Fdoolhof.py for 10.88.0.1:58746, 405 Method Not Allowed in 0.8ms @ web/goget.go:21(web.goGet)
>Aug 20 17:24:17 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 68.
>Aug 20 17:24:17 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:24:17 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:24:17 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:24:17 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
>Aug 20 17:24:17 pi dbus-parsec[13392]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
>Aug 20 17:24:17 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
>Aug 20 17:24:17 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
>Aug 20 17:24:17 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:24:17 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
>Aug 20 17:24:27 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 69.
>Aug 20 17:24:27 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:24:27 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:24:27 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:24:27 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
>Aug 20 17:24:27 pi dbus-parsec[13395]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
>Aug 20 17:24:27 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
>Aug 20 17:24:27 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
>Aug 20 17:24:27 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:24:27 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
>Aug 20 17:24:27 pi gitea-app[10857]: 2022/08/20 19:24:27 [630118cb] router: completed GET /Danacus/university-stuff/src/commit/875a4b71c8132b9ee2eea3b8ea48335ab272ee61/OGP/Thieves/src/decks/SourceDeck.java for 10.88.0.1:34306, 200 OK in 64.1ms @ repo/view.go:732(repo.Home)
>Aug 20 17:24:33 pi gitea-app[10857]: 2022/08/20 19:24:33 ...irror/mirror_pull.go:268:runSync() [E] [630118d0-2] SyncMirrors [repo: 12:Danacus/graphics_project_21-22]: failed to update mirror repository:
>Aug 20 17:24:33 pi gitea-app[10857]: Stdout: Fetching origin
>Aug 20 17:24:33 pi gitea-app[10857]:
>Aug 20 17:24:33 pi gitea-app[10857]: Stderr: remote: Invalid username or password.
>Aug 20 17:24:33 pi gitea-app[10857]: fatal: Authentication failed for 'https://github.com/ComputerGraphicsResearchGroup/graphics_project_21-22-Danacus.git/'
>Aug 20 17:24:33 pi gitea-app[10857]: error: could not fetch origin
>Aug 20 17:24:33 pi gitea-app[10857]:
>Aug 20 17:24:33 pi gitea-app[10857]: Err: exit status 1
>Aug 20 17:24:37 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 70.
>Aug 20 17:24:37 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:24:37 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:24:37 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:24:37 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
>Aug 20 17:24:37 pi systemd[1]: Started f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service - /usr/bin/podman healthcheck run f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.
>Aug 20 17:24:37 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:24:37 pi systemd[1]: Starting podman-auto-update.service - Podman auto-update service...
>Aug 20 17:24:37 pi dbus-parsec[13411]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
>Aug 20 17:24:37 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
>Aug 20 17:24:37 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
>Aug 20 17:24:37 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:24:37 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
>Aug 20 17:24:37 pi systemd[1]: sysroot-tmp-crun.VRhFSN.mount: Deactivated successfully.
>Aug 20 17:24:38 pi podman[13412]: 2022-08-20 17:24:38.001514542 +0000 UTC m=+0.227490080 container exec f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, io.containers.autoupdate=registry, org.opencontainers.image.description=Pi-hole in a docker container, org.opencontainers.image.licenses=, org.opencontainers.image.revision=7e69551be1b76d175fffc1b8c53733e74ee82520, PODMAN_SYSTEMD_UNIT=container-pihole.service, org.opencontainers.image.source=https://github.com/pi-hole/docker-pi-hole, org.opencontainers.image.url=https://github.com/pi-hole/docker-pi-hole, org.opencontainers.image.version=2022.07.1, org.opencontainers.image.created=2022-07-10T13:08:55.626Z, org.opencontainers.image.title=docker-pi-hole)
>Aug 20 17:24:38 pi podman[13412]: 2022-08-20 17:24:38.079286541 +0000 UTC m=+0.305262098 container exec_died f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, execID=f08081d4b7e4500acf58aaa3a44ae3705c155fd13d5bf090ccc7be4290b7256b)
>Aug 20 17:24:38 pi systemd[1]: f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service: Deactivated successfully.
>Aug 20 17:24:38 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:24:39 pi podman[13413]: 2022-08-20 17:24:39.071584956 +0000 UTC m=+1.289519561 system auto-update
>Aug 20 17:24:42 pi gitea-app[10857]: 2022/08/20 19:24:42 [630118d9] router: completed GET /Danacus/university-stuff/src/branch/master/Besturingssystemen/Oefenzitting1/src for 10.88.0.1:57864, 200 OK in 183.3ms @ repo/view.go:732(repo.Home)
>Aug 20 17:24:42 pi kernel: filter_IN_public_REJECT: IN=eth0 OUT= MAC= SRC=10.0.3.10 DST=239.255.255.250 LEN=118 TOS=0x00 PREC=0x00 TTL=2 ID=37188 DF PROTO=UDP SPT=33477 DPT=1900 LEN=98
>Aug 20 17:24:42 pi kernel: filter_IN_public_REJECT: IN=eth0 OUT= MAC= SRC=fe80:0000:0000:0000:4ec7:1f5e:5274:16ba DST=ff02:0000:0000:0000:0000:0000:0000:000c LEN=134 TC=0 HOPLIMIT=2 FLOWLBL=968519 PROTO=UDP SPT=35818 DPT=1900 LEN=94
>Aug 20 17:24:42 pi kernel: filter_IN_public_REJECT: IN=eth0 OUT= MAC= SRC=10.0.3.10 DST=255.255.255.255 LEN=118 TOS=0x00 PREC=0x00 TTL=64 ID=28981 DF PROTO=UDP SPT=33477 DPT=1900 LEN=98
>Aug 20 17:24:47 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 71.
>Aug 20 17:24:47 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:24:47 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:24:47 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:24:47 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
>Aug 20 17:24:48 pi dbus-parsec[13451]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
>Aug 20 17:24:48 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
>Aug 20 17:24:48 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
>Aug 20 17:24:48 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:24:48 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
>Aug 20 17:24:55 pi gitea-app[10857]: 2022/08/20 19:24:55 [630118e7] router: completed GET /Danacus/university-stuff/commits/commit/c6914cb9b474d3b1143972584c4b821676206002/Numerieke/zit09_matlab/probleem4.m for 10.88.0.1:39686, 200 OK in 163.6ms @ repo/commit.go:37(repo.RefCommits)
>Aug 20 17:24:57 pi systemd[1]: Started c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518.service - /usr/bin/podman healthcheck run c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518.
>Aug 20 17:24:57 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:24:57 pi podman[13462]: 2022-08-20 17:24:57.499792452 +0000 UTC m=+0.195508062 container exec c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 (image=docker.io/vaultwarden/server:latest, name=vaultwarden-server, org.opencontainers.image.licenses=GPL-3.0-only, io.balena.qemu.version=7.0.0+balena1-aarch64, org.opencontainers.image.revision=ce9d93003cd37a79edba1ba830a6c6d3fa22c2c8, org.opencontainers.image.version=1.25.2, PODMAN_SYSTEMD_UNIT=container-vaultwarden-server.service, io.containers.autoupdate=registry, org.opencontainers.image.created=2022-07-27T18:44:18+00:00, org.opencontainers.image.source=https://github.com/dani-garcia/vaultwarden, org.opencontainers.image.url=https://hub.docker.com/r/vaultwarden/server, io.balena.architecture=aarch64, org.opencontainers.image.documentation=https://github.com/dani-garcia/vaultwarden/wiki)
>Aug 20 17:24:57 pi podman[13462]: 2022-08-20 17:24:57.549721394 +0000 UTC m=+0.245436838 container exec_died c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 (image=docker.io/vaultwarden/server:latest, name=vaultwarden-server, execID=e04f6b73d9054d71fc22c348edabf3339f7602ff09410ccb2ab6ddee35f7c332)
>Aug 20 17:24:57 pi systemd[1]: c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518.service: Deactivated successfully.
>Aug 20 17:24:57 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:24:58 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 72.
>Aug 20 17:24:58 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:24:58 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:24:58 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
>Aug 20 17:24:58 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:24:59 pi dbus-parsec[13485]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
>Aug 20 17:24:58 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
>Aug 20 17:24:58 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
>Aug 20 17:24:58 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
>Aug 20 17:24:58 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:25:00 pi systemd[1]: Starting pmlogger_check.service - Check pmlogger instances are running...
>Aug 20 17:25:00 pi systemd[1]: Started pmlogger_check.service - Check pmlogger instances are running.
>Aug 20 17:25:00 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=pmlogger_check comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:25:03 pi gitea-app[10857]: 2022/08/20 19:25:03 [630118ef] router: completed GET /Danacus/university-stuff/src/commit/2c887c0964f81e3c45b4f5d8480fff84d43b44cd/.metadata/.plugins/org.eclipse.tips.ide for 10.88.0.1:39702, 200 OK in 75.3ms @ repo/view.go:732(repo.Home)
>Aug 20 17:25:03 pi systemd[1]: pmlogger_check.service: Deactivated successfully.
>Aug 20 17:25:03 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=pmlogger_check comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:25:03 pi systemd[1]: pmlogger_check.service: Consumed 2.424s CPU time.
>Aug 20 17:25:06 pi podman[13413]: UNIT CONTAINER IMAGE POLICY UPDATED
>Aug 20 17:25:06 pi podman[13413]: container-nginx-web.service 11868febf0fd (nginx-web) docker.io/nginx registry false
>Aug 20 17:25:06 pi podman[13413]: container-nextcloud-nginx.service a1db16389e8f (nextcloud-nginx) docker.io/nginx registry false
>Aug 20 17:25:06 pi podman[13413]: container-nextcloud-fpm.service 958ee2e818e5 (nextcloud-fpm) docker.io/nextcloud:fpm-alpine registry false
>Aug 20 17:25:06 pi podman[13413]: container-nextcloud-redis.service dd210ac28e21 (nextcloud-redis) docker.io/redis:alpine registry false
>Aug 20 17:25:06 pi podman[13413]: container-php-fpm.service 281971e998e5 (php-fpm) docker.io/php:fpm-alpine registry false
>Aug 20 17:25:06 pi podman[13413]: container-nextcloud-postgres.service 3dd752829c85 (nextcloud-postgres) docker.io/postgres:13 registry false
>Aug 20 17:25:06 pi podman[13413]: container-vaultwarden-server.service c7d652e29be6 (vaultwarden-server) docker.io/vaultwarden/server:latest registry false
>Aug 20 17:25:06 pi podman[13413]: container-pihole.service f247765d76a1 (pihole) docker.io/pihole/pihole:latest registry false
>Aug 20 17:25:06 pi podman[13413]: container-hass-postgres.service 1b91561f7d41 (hass-postgres) docker.io/postgres:14 registry false
>Aug 20 17:25:06 pi podman[13413]: container-hass-app.service 9c411440b0d3 (hass-app) docker.io/homeassistant/raspberrypi4-64-homeassistant:stable registry false
>Aug 20 17:25:06 pi podman[13413]: container-gitea-postgres.service 8a066e4f9d57 (gitea-postgres) docker.io/postgres:11 registry false
>Aug 20 17:25:06 pi podman[13413]: container-gitea-app.service c30bcd67f13a (gitea-app) docker.io/gitea/gitea:latest registry false
>Aug 20 17:25:06 pi podman[13413]: container-hass-mosquitto.service fdf7d8d18935 (hass-mosquitto) docker.io/eclipse-mosquitto registry false
>Aug 20 17:25:06 pi podman[13413]: container-proxy-internal.service 26a887dcca3f (proxy-internal) docker.io/jc21/nginx-proxy-manager:latest registry false
>Aug 20 17:25:06 pi podman[13413]: container-proxy.service 802033efbdc5 (proxy) docker.io/jc21/nginx-proxy-manager:latest registry false
>Aug 20 17:25:06 pi podman[13413]: container-hass-zigbee2mqtt.service 8243ecfa6162 (hass-zigbee2mqtt) docker.io/koenkk/zigbee2mqtt registry false
>Aug 20 17:25:06 pi systemd[1]: Starting zezere_ignition.service - Run Ignition for Zezere...
>Aug 20 17:25:06 pi zezere-ignition[13877]: INFO : Ignition 2.14.0
>Aug 20 17:25:06 pi zezere-ignition[13877]: INFO : Stage: fetch
>Aug 20 17:25:06 pi zezere-ignition[13877]: INFO : no config dir at "/usr/lib/ignition/base.d"
>Aug 20 17:25:06 pi zezere-ignition[13877]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file"
>Aug 20 17:25:06 pi zezere-ignition[13877]: DEBUG : parsed url from cmdline: ""
>Aug 20 17:25:06 pi zezere-ignition[13877]: INFO : no config URL provided
>Aug 20 17:25:06 pi zezere-ignition[13877]: INFO : reading system config file "/usr/lib/ignition/user.ign"
>Aug 20 17:25:06 pi zezere-ignition[13877]: INFO : no config at "/usr/lib/ignition/user.ign"
>Aug 20 17:25:06 pi zezere-ignition[13877]: INFO : using config file at "/tmp/zezere-ignition-config-jdhtcxjh.ign"
>Aug 20 17:25:06 pi zezere-ignition[13877]: DEBUG : parsing config with SHA512: 4202432df9c76a84fcaf0bb3dd81d9aec055ae366dee2bee1f88d5233bae27ac14ab0d1f9a3fbfa25e382806a76e601d9375479e30d495e751038695e77877a2
>Aug 20 17:25:06 pi zezere-ignition[13877]: INFO : GET https://provision.fedoraproject.org/netboot/aarch64/ignition/dc:a6:32:38:46:e7: attempt #1
>Aug 20 17:25:07 pi systemd[1]: podman-auto-update.service: Deactivated successfully.
>Aug 20 17:25:07 pi systemd[1]: Finished podman-auto-update.service - Podman auto-update service.
>Aug 20 17:25:07 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=podman-auto-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:25:07 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=podman-auto-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:25:07 pi systemd[1]: podman-auto-update.service: Consumed 4.807s CPU time.
>Aug 20 17:25:07 pi zezere-ignition[13877]: INFO : GET result: Not Found
>Aug 20 17:25:07 pi zezere-ignition[13877]: WARNING : failed to fetch config: resource not found
>Aug 20 17:25:07 pi zezere-ignition[13877]: CRITICAL : failed to acquire config: resource not found
>Aug 20 17:25:07 pi zezere-ignition[13877]: CRITICAL : Ignition failed: resource not found
>Aug 20 17:25:07 pi zezere-ignition[13884]: INFO : Ignition 2.14.0
>Aug 20 17:25:07 pi zezere-ignition[13884]: INFO : Stage: disks
>Aug 20 17:25:07 pi zezere-ignition[13884]: INFO : no config dir at "/usr/lib/ignition/base.d"
>Aug 20 17:25:07 pi zezere-ignition[13884]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file"
>Aug 20 17:25:07 pi zezere-ignition[13884]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory
>Aug 20 17:25:07 pi zezere-ignition[13884]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory
>Aug 20 17:25:07 pi zezere-ignition[13890]: INFO : Ignition 2.14.0
>Aug 20 17:25:07 pi zezere-ignition[13890]: INFO : Stage: mount
>Aug 20 17:25:07 pi zezere-ignition[13890]: INFO : no config dir at "/usr/lib/ignition/base.d"
>Aug 20 17:25:07 pi zezere-ignition[13890]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file"
>Aug 20 17:25:07 pi zezere-ignition[13890]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory
>Aug 20 17:25:07 pi zezere-ignition[13890]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory
>Aug 20 17:25:07 pi zezere-ignition[13896]: INFO : Ignition 2.14.0
>Aug 20 17:25:07 pi zezere-ignition[13896]: INFO : Stage: files
>Aug 20 17:25:07 pi zezere-ignition[13896]: INFO : no config dir at "/usr/lib/ignition/base.d"
>Aug 20 17:25:07 pi zezere-ignition[13896]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file"
>Aug 20 17:25:07 pi zezere-ignition[13896]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory
>Aug 20 17:25:07 pi zezere-ignition[13896]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory
>Aug 20 17:25:08 pi zezere-ignition[13902]: INFO : Ignition 2.14.0
>Aug 20 17:25:08 pi zezere-ignition[13902]: INFO : Stage: umount
>Aug 20 17:25:08 pi zezere-ignition[13902]: INFO : no config dir at "/usr/lib/ignition/base.d"
>Aug 20 17:25:08 pi zezere-ignition[13902]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file"
>Aug 20 17:25:08 pi zezere-ignition[13902]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory
>Aug 20 17:25:08 pi zezere-ignition[13902]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory
>Aug 20 17:25:08 pi zezere-ignition[13874]: Running stage fetch with config file /tmp/zezere-ignition-config-jdhtcxjh.ign
>Aug 20 17:25:08 pi zezere-ignition[13874]: Running stage disks with config file /tmp/zezere-ignition-config-jdhtcxjh.ign
>Aug 20 17:25:08 pi zezere-ignition[13874]: Running stage mount with config file /tmp/zezere-ignition-config-jdhtcxjh.ign
>Aug 20 17:25:08 pi zezere-ignition[13874]: Running stage files with config file /tmp/zezere-ignition-config-jdhtcxjh.ign
>Aug 20 17:25:08 pi zezere-ignition[13874]: Running stage umount with config file /tmp/zezere-ignition-config-jdhtcxjh.ign
>Aug 20 17:25:08 pi systemd[1]: zezere_ignition.service: Deactivated successfully.
>Aug 20 17:25:08 pi systemd[1]: Finished zezere_ignition.service - Run Ignition for Zezere.
>Aug 20 17:25:08 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=zezere_ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:25:08 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=zezere_ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:25:08 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 73.
>Aug 20 17:25:08 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:25:08 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:25:08 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:25:08 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
>Aug 20 17:25:08 pi systemd[1]: Started f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service - /usr/bin/podman healthcheck run f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.
>Aug 20 17:25:08 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:25:08 pi dbus-parsec[13908]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
>Aug 20 17:25:08 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
>Aug 20 17:25:08 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
>Aug 20 17:25:08 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:25:08 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
>Aug 20 17:25:09 pi gitea-app[10857]: 2022/08/20 19:25:09 [630118f5] router: completed GET /Danacus/university-stuff/action/watch?redirect_to=%2FDanacus%2Funiversity-stuff%2Fcommits%2Fcommit%2F0c7fd6962972e3a7dd00087e40de01701b33ca31%2FDeclaratieve%2520Talen%2Foef1.pl for 10.88.0.1:39494, 405 Method Not Allowed in 0.4ms @ web/goget.go:21(web.goGet)
>Aug 20 17:25:11 pi gitea-app[10857]: 2022/08/20 19:25:11 [630118f7] router: completed GET /Danacus/university-stuff/src/commit/fbe44a25cd05f56bbbd056f5901103a6e651c608/2019-10-01-Note-10-35.xopp~ for 10.88.0.1:39508, 200 OK in 70.2ms @ repo/view.go:732(repo.Home)
>Aug 20 17:25:15 pi podman[13909]: 2022-08-20 17:25:15.064911522 +0000 UTC m=+6.532852303 container exec f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, org.opencontainers.image.url=https://github.com/pi-hole/docker-pi-hole, org.opencontainers.image.description=Pi-hole in a docker container, org.opencontainers.image.licenses=, org.opencontainers.image.revision=7e69551be1b76d175fffc1b8c53733e74ee82520, org.opencontainers.image.version=2022.07.1, PODMAN_SYSTEMD_UNIT=container-pihole.service, io.containers.autoupdate=registry, org.opencontainers.image.created=2022-07-10T13:08:55.626Z, org.opencontainers.image.source=https://github.com/pi-hole/docker-pi-hole, org.opencontainers.image.title=docker-pi-hole)
>Aug 20 17:25:15 pi systemd[1]: Starting pmlogger_farm_check.service - Check and migrate non-primary pmlogger farm instances...
>Aug 20 17:25:15 pi systemd[1]: Started pmlogger_farm_check.service - Check and migrate non-primary pmlogger farm instances.
>Aug 20 17:25:15 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=pmlogger_farm_check comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:25:15 pi podman[13909]: 2022-08-20 17:25:15.13961366 +0000 UTC m=+6.607554459 container exec_died f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, execID=c66247fb3062d321e89709f22988bc46bca2ec360c65ef39fc9f694ef93a2093)
>Aug 20 17:25:15 pi systemd[1]: f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service: Deactivated successfully.
>Aug 20 17:25:15 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:25:15 pi systemd[1]: f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service: Consumed 12.914s CPU time.
>Aug 20 17:25:15 pi systemd[1]: pmlogger_farm_check.service: Deactivated successfully.
>Aug 20 17:25:15 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=pmlogger_farm_check comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:25:18 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 74.
>Aug 20 17:25:18 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:25:18 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:25:18 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:25:18 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon... >Aug 20 17:25:18 pi dbus-parsec[14021]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist))) >Aug 20 17:25:18 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE >Aug 20 17:25:18 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'. >Aug 20 17:25:18 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:25:18 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' >Aug 20 17:25:26 pi gitea-app[10857]: 2022/08/20 19:25:26 [63011906] router: completed GET /Danacus/university-stuff/src/commit/a3382ccbb2c61efe658e1a9719b94f15bdf2733d/Modellering%20en%20simulatie for 10.88.0.1:56796, 200 OK in 119.6ms @ repo/view.go:732(repo.Home) >Aug 20 17:25:26 pi gitea-app[10857]: 2022/08/20 19:25:26 [63011906-8] router: completed GET /Danacus/dotfiles/raw/commit/806679a123b733236073f641b63626da770320df/.config/tilda/config_8 for 10.88.0.1:56804, 200 OK in 57.9ms @ repo/download.go:123(repo.SingleDownload) >Aug 20 17:25:28 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 75. 
>Aug 20 17:25:28 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:25:28 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:25:28 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:25:28 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon... >Aug 20 17:25:29 pi dbus-parsec[14037]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist))) >Aug 20 17:25:29 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE >Aug 20 17:25:29 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'. >Aug 20 17:25:29 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:25:29 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' >Aug 20 17:25:29 pi gitea-app[10857]: 2022/08/20 19:25:29 [63011909] router: completed GET /Danacus/university-stuff/src/commit/5427b21dfb3dcd67dc859584a73fdd5f78dbe0f6/.metadata/.plugins/org.eclipse.debug.ui for 10.88.0.1:56818, 200 OK in 74.3ms @ repo/view.go:732(repo.Home) >Aug 20 17:25:34 pi gitea-app[10857]: 2022/08/20 19:25:34 [6301190e] router: completed GET /Danacus/university-stuff/src/commit/2de3d09e6964810a62b763d141ece0c862b4da87/Bewijzen%20en%20redeneren/huistaak9 for 10.88.0.1:56834, 200 OK in 78.2ms @ repo/view.go:732(repo.Home) >Aug 20 17:25:38 pi gitea-app[10857]: 2022/08/20 19:25:38 [63011912] router: completed GET /Danacus/dotfiles/raw/commit/c7479986701906eb04a215afe042f858f03f1749/.zshrc for 10.88.0.1:46410, 200 OK in 64.7ms @ repo/download.go:123(repo.SingleDownload) >Aug 20 17:25:39 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 76. >Aug 20 17:25:39 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:25:39 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:25:39 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:25:39 pi gitea-app[10857]: 2022/08/20 19:25:39 [63011913] router: completed GET /Danacus/dotfiles/action/star?redirect_to=%2FDanacus%2Fdotfiles%2Fcommits%2Fcommit%2F8ca078725528b2c705d7a8ba9abb12b6595d09b6%2F.config%2Fmpd%2Fstate for 10.88.0.1:46416, 405 Method Not Allowed in 0.8ms @ web/goget.go:21(web.goGet) >Aug 20 17:25:39 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon... 
>Aug 20 17:25:39 pi dbus-parsec[14058]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist))) >Aug 20 17:25:39 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE >Aug 20 17:25:39 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'. >Aug 20 17:25:39 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:25:39 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' >Aug 20 17:25:41 pi gitea-app[10857]: 2022/08/20 19:25:41 [63011914] router: completed GET /Danacus/university-stuff/commit/b6c65ee436d42bb2379e77dd86466b7c9f50aa62.diff for 10.88.0.1:46432, 200 OK in 897.7ms @ repo/commit.go:383(repo.RawDiff) >Aug 20 17:25:45 pi systemd[1]: Started f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service - /usr/bin/podman healthcheck run f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e. >Aug 20 17:25:45 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' >Aug 20 17:25:45 pi podman[14066]: 2022-08-20 17:25:45.700727115 +0000 UTC m=+0.186889464 container exec f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, PODMAN_SYSTEMD_UNIT=container-pihole.service, org.opencontainers.image.url=https://github.com/pi-hole/docker-pi-hole, io.containers.autoupdate=registry, org.opencontainers.image.created=2022-07-10T13:08:55.626Z, org.opencontainers.image.description=Pi-hole in a docker container, org.opencontainers.image.revision=7e69551be1b76d175fffc1b8c53733e74ee82520, org.opencontainers.image.title=docker-pi-hole, org.opencontainers.image.licenses=, org.opencontainers.image.source=https://github.com/pi-hole/docker-pi-hole, org.opencontainers.image.version=2022.07.1) >Aug 20 17:25:45 pi podman[14066]: 2022-08-20 17:25:45.760139052 +0000 UTC m=+0.246301123 container exec_died f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, execID=05ec9e2e1b190534f55e2cf4055b63fa86e42e39395ef41d45fc91bcd51e82b2) >Aug 20 17:25:45 pi systemd[1]: f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service: Deactivated successfully. >Aug 20 17:25:45 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:25:46 pi gitea-app[10857]: 2022/08/20 19:25:46 [6301191a] router: completed GET /Danacus/university-stuff/commits/commit/43b2822b8e5888f019b8e48cd60143f7a8cf70f7/SOCS/oef_3_1.pre for 10.88.0.1:56258, 200 OK in 110.3ms @ repo/commit.go:37(repo.RefCommits) >Aug 20 17:25:49 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 77. 
>Aug 20 17:25:49 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:25:49 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:25:49 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:25:49 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon... >Aug 20 17:25:49 pi dbus-parsec[14095]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist))) >Aug 20 17:25:49 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE >Aug 20 17:25:49 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'. >Aug 20 17:25:49 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:25:49 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' >Aug 20 17:25:50 pi gitea-app[10857]: 2022/08/20 19:25:50 [6301191e] router: completed GET /Danacus/university-stuff/src/commit/fbe44a25cd05f56bbbd056f5901103a6e651c608/Fundamenten for 10.88.0.1:56262, 200 OK in 79.7ms @ repo/view.go:732(repo.Home) >Aug 20 17:25:52 pi gitea-app[10857]: 2022/08/20 19:25:52 [63011920] router: completed GET /Danacus/university-stuff/commits/commit/4c8df90242840ccec37c22a6ca349e23915788a9/IW/C/oefenzitting/demos/double-free/double-free.c for 10.88.0.1:56278, 200 OK in 183.4ms @ repo/commit.go:37(repo.RefCommits) >Aug 20 17:25:58 pi systemd[1]: Started c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518.service - /usr/bin/podman healthcheck run c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518. >Aug 20 17:25:58 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' >Aug 20 17:25:58 pi podman[14111]: 2022-08-20 17:25:58.713623987 +0000 UTC m=+0.189688675 container exec c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 (image=docker.io/vaultwarden/server:latest, name=vaultwarden-server, io.balena.qemu.version=7.0.0+balena1-aarch64, org.opencontainers.image.created=2022-07-27T18:44:18+00:00, org.opencontainers.image.licenses=GPL-3.0-only, org.opencontainers.image.source=https://github.com/dani-garcia/vaultwarden, org.opencontainers.image.url=https://hub.docker.com/r/vaultwarden/server, io.containers.autoupdate=registry, PODMAN_SYSTEMD_UNIT=container-vaultwarden-server.service, io.balena.architecture=aarch64, org.opencontainers.image.revision=ce9d93003cd37a79edba1ba830a6c6d3fa22c2c8, org.opencontainers.image.version=1.25.2, org.opencontainers.image.documentation=https://github.com/dani-garcia/vaultwarden/wiki) >Aug 20 17:25:58 pi podman[14111]: 2022-08-20 17:25:58.769318176 +0000 UTC m=+0.245382994 container exec_died c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 (image=docker.io/vaultwarden/server:latest, name=vaultwarden-server, execID=5a1d1838d0f70e1e9a5a509179bf813402992d544cfb6d2a1b603bdc1dd101df) >Aug 20 17:25:58 pi systemd[1]: c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518.service: Deactivated successfully. >Aug 20 17:25:58 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:25:59 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 78. >Aug 20 17:25:59 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' >Aug 20 17:25:59 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:25:59 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:25:59 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon... >Aug 20 17:25:59 pi dbus-parsec[14134]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist))) >Aug 20 17:25:59 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE >Aug 20 17:25:59 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'. >Aug 20 17:25:59 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:25:59 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' >Aug 20 17:26:00 pi gitea-app[10857]: 2022/08/20 19:26:00 [63011928] router: completed GET /Danacus/university-stuff/commits/commit/a40b406dedde43842e0f4dc84d849038580591bb/IW/LaTeX/__latexindent_temp.tex for 10.88.0.1:52478, 200 OK in 204.4ms @ repo/commit.go:37(repo.RefCommits) >Aug 20 17:26:01 pi gitea-app[10857]: 2022/08/20 19:26:01 [63011928-10] router: completed GET /Danacus/university-stuff/src/branch/master/Besturingssystemen/Oefenzitting1/.attach_pid42986 for 10.88.0.1:52492, 200 OK in 527.0ms @ repo/view.go:732(repo.Home) >Aug 20 17:26:08 pi systemd[1]: Starting zezere_ignition.service - Run Ignition for Zezere... 
>Aug 20 17:26:08 pi zezere-ignition[14156]: INFO : Ignition 2.14.0 >Aug 20 17:26:08 pi zezere-ignition[14156]: INFO : Stage: fetch >Aug 20 17:26:08 pi zezere-ignition[14156]: INFO : no config dir at "/usr/lib/ignition/base.d" >Aug 20 17:26:08 pi zezere-ignition[14156]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file" >Aug 20 17:26:08 pi zezere-ignition[14156]: DEBUG : parsed url from cmdline: "" >Aug 20 17:26:08 pi zezere-ignition[14156]: INFO : no config URL provided >Aug 20 17:26:08 pi zezere-ignition[14156]: INFO : reading system config file "/usr/lib/ignition/user.ign" >Aug 20 17:26:08 pi zezere-ignition[14156]: INFO : no config at "/usr/lib/ignition/user.ign" >Aug 20 17:26:08 pi zezere-ignition[14156]: INFO : using config file at "/tmp/zezere-ignition-config-oozhxswd.ign" >Aug 20 17:26:08 pi zezere-ignition[14156]: DEBUG : parsing config with SHA512: 4202432df9c76a84fcaf0bb3dd81d9aec055ae366dee2bee1f88d5233bae27ac14ab0d1f9a3fbfa25e382806a76e601d9375479e30d495e751038695e77877a2 >Aug 20 17:26:08 pi zezere-ignition[14156]: INFO : GET https://provision.fedoraproject.org/netboot/aarch64/ignition/dc:a6:32:38:46:e7: attempt #1 >Aug 20 17:26:09 pi zezere-ignition[14156]: INFO : GET result: Not Found >Aug 20 17:26:09 pi zezere-ignition[14156]: WARNING : failed to fetch config: resource not found >Aug 20 17:26:09 pi zezere-ignition[14156]: CRITICAL : failed to acquire config: resource not found >Aug 20 17:26:09 pi zezere-ignition[14156]: CRITICAL : Ignition failed: resource not found >Aug 20 17:26:09 pi zezere-ignition[14164]: INFO : Ignition 2.14.0 >Aug 20 17:26:09 pi zezere-ignition[14164]: INFO : Stage: disks >Aug 20 17:26:09 pi zezere-ignition[14164]: INFO : no config dir at "/usr/lib/ignition/base.d" >Aug 20 17:26:09 pi zezere-ignition[14164]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file" >Aug 20 17:26:09 pi zezere-ignition[14164]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory >Aug 20 
17:26:09 pi zezere-ignition[14164]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory >Aug 20 17:26:09 pi zezere-ignition[14170]: INFO : Ignition 2.14.0 >Aug 20 17:26:09 pi zezere-ignition[14170]: INFO : Stage: mount >Aug 20 17:26:09 pi zezere-ignition[14170]: INFO : no config dir at "/usr/lib/ignition/base.d" >Aug 20 17:26:09 pi zezere-ignition[14170]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file" >Aug 20 17:26:09 pi zezere-ignition[14170]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory >Aug 20 17:26:09 pi zezere-ignition[14170]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory >Aug 20 17:26:09 pi zezere-ignition[14177]: INFO : Ignition 2.14.0 >Aug 20 17:26:09 pi zezere-ignition[14177]: INFO : Stage: files >Aug 20 17:26:09 pi zezere-ignition[14177]: INFO : no config dir at "/usr/lib/ignition/base.d" >Aug 20 17:26:09 pi zezere-ignition[14177]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file" >Aug 20 17:26:09 pi zezere-ignition[14177]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory >Aug 20 17:26:09 pi zezere-ignition[14177]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory >Aug 20 17:26:09 pi zezere-ignition[14184]: INFO : Ignition 2.14.0 >Aug 20 17:26:09 pi zezere-ignition[14184]: INFO : Stage: umount >Aug 20 17:26:09 pi zezere-ignition[14184]: INFO : no config dir at "/usr/lib/ignition/base.d" >Aug 20 17:26:09 pi zezere-ignition[14184]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file" >Aug 20 17:26:09 pi zezere-ignition[14184]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory >Aug 20 17:26:09 pi zezere-ignition[14184]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory >Aug 20 17:26:09 pi zezere-ignition[14155]: Running stage fetch with config file 
/tmp/zezere-ignition-config-oozhxswd.ign >Aug 20 17:26:09 pi zezere-ignition[14155]: Running stage disks with config file /tmp/zezere-ignition-config-oozhxswd.ign >Aug 20 17:26:09 pi zezere-ignition[14155]: Running stage mount with config file /tmp/zezere-ignition-config-oozhxswd.ign >Aug 20 17:26:09 pi zezere-ignition[14155]: Running stage files with config file /tmp/zezere-ignition-config-oozhxswd.ign >Aug 20 17:26:09 pi zezere-ignition[14155]: Running stage umount with config file /tmp/zezere-ignition-config-oozhxswd.ign >Aug 20 17:26:09 pi systemd[1]: zezere_ignition.service: Deactivated successfully. >Aug 20 17:26:09 pi systemd[1]: Finished zezere_ignition.service - Run Ignition for Zezere. >Aug 20 17:26:09 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=zezere_ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:26:09 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=zezere_ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:26:09 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 79. >Aug 20 17:26:09 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:26:09 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:26:09 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:26:10 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon... 
>Aug 20 17:26:10 pi dbus-parsec[14190]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist))) >Aug 20 17:26:10 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE >Aug 20 17:26:10 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'. >Aug 20 17:26:10 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:26:10 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' >Aug 20 17:26:10 pi gitea-app[10857]: 2022/08/20 19:26:10 [63011932] router: completed GET /Danacus/university-stuff/src/commit/fbe44a25cd05f56bbbd056f5901103a6e651c608/SOCS for 10.88.0.1:60912, 200 OK in 197.6ms @ repo/view.go:732(repo.Home) >Aug 20 17:26:10 pi gitea-app[10857]: 2022/08/20 19:26:10 [63011932-8] router: completed GET /Danacus/university-stuff/src/commit/9c5960ff73049bf984f93e63e1aac96dc7f08bb2/OGP/les5/src/banking/money for 10.88.0.1:60922, 200 OK in 72.7ms @ repo/view.go:732(repo.Home) >Aug 20 17:26:14 pi gitea-app[10857]: 2022/08/20 19:26:14 [63011936] router: completed GET /Danacus/university-stuff/src/commit/a3382ccbb2c61efe658e1a9719b94f15bdf2733d/Gegevensbanken for 10.88.0.1:60930, 200 OK in 139.3ms @ repo/view.go:732(repo.Home) >Aug 20 17:26:16 pi systemd[1]: Started f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service - /usr/bin/podman healthcheck run f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e. >Aug 20 17:26:16 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' >Aug 20 17:26:16 pi podman[14210]: 2022-08-20 17:26:16.687433083 +0000 UTC m=+0.185494802 container exec f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, org.opencontainers.image.licenses=, PODMAN_SYSTEMD_UNIT=container-pihole.service, org.opencontainers.image.revision=7e69551be1b76d175fffc1b8c53733e74ee82520, org.opencontainers.image.source=https://github.com/pi-hole/docker-pi-hole, org.opencontainers.image.version=2022.07.1, org.opencontainers.image.title=docker-pi-hole, org.opencontainers.image.url=https://github.com/pi-hole/docker-pi-hole, io.containers.autoupdate=registry, org.opencontainers.image.created=2022-07-10T13:08:55.626Z, org.opencontainers.image.description=Pi-hole in a docker container) >Aug 20 17:26:16 pi podman[14210]: 2022-08-20 17:26:16.751075802 +0000 UTC m=+0.249137836 container exec_died f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, execID=a3527400500abc9abfc3d9e8df0965f437a78229c78cd540570aaa19fa2d149b) >Aug 20 17:26:16 pi systemd[1]: f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service: Deactivated successfully. >Aug 20 17:26:16 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:26:20 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 80. >Aug 20 17:26:20 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' >Aug 20 17:26:20 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:26:20 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:26:20 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon... >Aug 20 17:26:20 pi dbus-parsec[14231]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist))) >Aug 20 17:26:20 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE >Aug 20 17:26:20 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'. >Aug 20 17:26:20 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:26:20 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' >Aug 20 17:26:21 pi gitea-app[10857]: 2022/08/20 19:26:21 [6301193d] router: completed GET /Danacus/dotfiles/commits/commit/1fcc9a32e0134359344ed27ffdb218b82f1833d7/.config/river for 10.88.0.1:49400, 200 OK in 184.4ms @ repo/commit.go:37(repo.RefCommits) >Aug 20 17:26:30 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 81. >Aug 20 17:26:30 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:26:30 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:26:30 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. 
>Aug 20 17:26:30 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon... >Aug 20 17:26:30 pi dbus-parsec[14243]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist))) >Aug 20 17:26:30 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE >Aug 20 17:26:30 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'. >Aug 20 17:26:30 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:26:30 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' >Aug 20 17:26:32 pi NetworkManager[717]: <info> [1661016392.4251] device (wlan0): set-hw-addr: set MAC address to 46:FA:80:28:99:D6 (scanning) >Aug 20 17:26:32 pi NetworkManager[717]: <info> [1661016392.4351] device (wlan0): supplicant interface state: inactive -> disconnected >Aug 20 17:26:32 pi NetworkManager[717]: <info> [1661016392.4353] device (p2p-dev-wlan0): supplicant management interface state: inactive -> disconnected >Aug 20 17:26:32 pi NetworkManager[717]: <info> [1661016392.4375] device (wlan0): supplicant interface state: disconnected -> inactive >Aug 20 17:26:32 pi NetworkManager[717]: <info> [1661016392.4377] device (p2p-dev-wlan0): supplicant management interface state: disconnected -> inactive >Aug 20 17:26:37 pi gitea-app[10857]: 2022/08/20 19:26:37 [6301194d] router: completed GET /Danacus/university-stuff/src/commit/a3382ccbb2c61efe658e1a9719b94f15bdf2733d/Algebra for 10.88.0.1:50752, 200 OK in 116.3ms @ repo/view.go:732(repo.Home) >Aug 20 17:26:40 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 82. >Aug 20 17:26:40 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' >Aug 20 17:26:40 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:26:40 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:26:40 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon... >Aug 20 17:26:40 pi dbus-parsec[14253]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist))) >Aug 20 17:26:40 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE >Aug 20 17:26:40 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'. >Aug 20 17:26:40 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:26:40 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed'
>Aug 20 17:26:41 pi gitea-app[10857]: 2022/08/20 19:26:41 [63011951] router: completed GET /Danacus/university-stuff/action/watch?redirect_to=%2FDanacus%2Funiversity-stuff%2Fcommits%2Fcommit%2Fe61ff08c96fefd71992d56b60dd47516a7972f2c%2FWetCom%2Fpresentatie1%2Fpres.snm for 10.88.0.1:50766, 405 Method Not Allowed in 0.8ms @ web/goget.go:21(web.goGet)
>Aug 20 17:26:42 pi kernel: filter_IN_public_REJECT: IN=eth0 OUT= MAC= SRC=10.0.3.10 DST=239.255.255.250 LEN=118 TOS=0x00 PREC=0x00 TTL=2 ID=41103 DF PROTO=UDP SPT=33477 DPT=1900 LEN=98
>Aug 20 17:26:42 pi kernel: filter_IN_public_REJECT: IN=eth0 OUT= MAC= SRC=fe80:0000:0000:0000:4ec7:1f5e:5274:16ba DST=ff02:0000:0000:0000:0000:0000:0000:000c LEN=134 TC=0 HOPLIMIT=2 FLOWLBL=968519 PROTO=UDP SPT=35818 DPT=1900 LEN=94
>Aug 20 17:26:42 pi kernel: filter_IN_public_REJECT: IN=eth0 OUT= MAC= SRC=10.0.3.10 DST=255.255.255.255 LEN=118 TOS=0x00 PREC=0x00 TTL=64 ID=39827 DF PROTO=UDP SPT=33477 DPT=1900 LEN=98
>Aug 20 17:26:47 pi systemd[1]: Started f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service - /usr/bin/podman healthcheck run f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.
>Aug 20 17:26:47 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:26:48 pi podman[14255]: 2022-08-20 17:26:48.470822362 +0000 UTC m=+0.967458686 container exec f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, org.opencontainers.image.created=2022-07-10T13:08:55.626Z, org.opencontainers.image.licenses=, PODMAN_SYSTEMD_UNIT=container-pihole.service, io.containers.autoupdate=registry, org.opencontainers.image.description=Pi-hole in a docker container, org.opencontainers.image.source=https://github.com/pi-hole/docker-pi-hole, org.opencontainers.image.revision=7e69551be1b76d175fffc1b8c53733e74ee82520, org.opencontainers.image.title=docker-pi-hole, org.opencontainers.image.url=https://github.com/pi-hole/docker-pi-hole, org.opencontainers.image.version=2022.07.1)
>Aug 20 17:26:48 pi podman[14255]: 2022-08-20 17:26:48.512414691 +0000 UTC m=+1.009051015 container exec_died f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, execID=f5459014e82ddbe7f688a97046d7c8b159b139e1b4d56c787368d02b7463e35d)
>Aug 20 17:26:48 pi systemd[1]: f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service: Deactivated successfully.
>Aug 20 17:26:48 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:26:48 pi systemd[1]: f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service: Consumed 1.870s CPU time.
>Aug 20 17:26:49 pi gitea-app[10857]: 2022/08/20 19:26:49 [63011959] router: completed GET /Danacus/university-stuff/src/commit/a3382ccbb2c61efe658e1a9719b94f15bdf2733d/Lineaire%20Algebra for 10.88.0.1:54766, 200 OK in 95.1ms @ repo/view.go:732(repo.Home)
>Aug 20 17:26:50 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 83.
>Aug 20 17:26:50 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:26:50 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:26:50 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:26:50 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
>Aug 20 17:26:51 pi dbus-parsec[14282]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
>Aug 20 17:26:51 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
>Aug 20 17:26:51 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
>Aug 20 17:26:51 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:26:51 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
>Aug 20 17:26:56 pi gitea-app[10857]: 2022/08/20 19:26:56 [63011960] router: completed GET /Danacus/university-stuff/action/star?redirect_to=%2FDanacus%2Funiversity-stuff%2Fblame%2Fcommit%2F6b3808900ac3849e64af3fe38cb55ddff18e12df%2FNumerieke%2Fzit09_matlab%2FevalBspline.m for 10.88.0.1:57138, 405 Method Not Allowed in 0.9ms @ web/goget.go:21(web.goGet)
>Aug 20 17:26:59 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:26:59 pi systemd[1]: Started c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518.service - /usr/bin/podman healthcheck run c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518.
>Aug 20 17:26:59 pi podman[14286]: 2022-08-20 17:26:59.710135154 +0000 UTC m=+0.202202388 container exec c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 (image=docker.io/vaultwarden/server:latest, name=vaultwarden-server, io.containers.autoupdate=registry, org.opencontainers.image.licenses=GPL-3.0-only, org.opencontainers.image.source=https://github.com/dani-garcia/vaultwarden, org.opencontainers.image.version=1.25.2, PODMAN_SYSTEMD_UNIT=container-vaultwarden-server.service, org.opencontainers.image.created=2022-07-27T18:44:18+00:00, org.opencontainers.image.revision=ce9d93003cd37a79edba1ba830a6c6d3fa22c2c8, io.balena.architecture=aarch64, io.balena.qemu.version=7.0.0+balena1-aarch64, org.opencontainers.image.documentation=https://github.com/dani-garcia/vaultwarden/wiki, org.opencontainers.image.url=https://hub.docker.com/r/vaultwarden/server)
>Aug 20 17:26:59 pi podman[14286]: 2022-08-20 17:26:59.769515038 +0000 UTC m=+0.261582291 container exec_died c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 (image=docker.io/vaultwarden/server:latest, name=vaultwarden-server, execID=123007c7d00670ffd06bb84d422254c6fcc4bbe5569afdbc34f4bb96990c0ffe)
>Aug 20 17:26:59 pi systemd[1]: c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518.service: Deactivated successfully.
>Aug 20 17:26:59 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:27:01 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 84.
>Aug 20 17:27:01 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:27:01 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:27:01 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:27:01 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
>Aug 20 17:27:01 pi dbus-parsec[14317]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
>Aug 20 17:27:01 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
>Aug 20 17:27:01 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
>Aug 20 17:27:01 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:27:01 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
>Aug 20 17:27:11 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 85.
>Aug 20 17:27:11 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:27:11 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:27:11 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:27:11 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
>Aug 20 17:27:11 pi systemd[1]: Starting zezere_ignition.service - Run Ignition for Zezere...
>Aug 20 17:27:11 pi dbus-parsec[14318]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
>Aug 20 17:27:11 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
>Aug 20 17:27:11 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
>Aug 20 17:27:11 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:27:11 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
>Aug 20 17:27:11 pi zezere-ignition[14320]: INFO : Ignition 2.14.0
>Aug 20 17:27:11 pi zezere-ignition[14320]: INFO : Stage: fetch
>Aug 20 17:27:11 pi zezere-ignition[14320]: INFO : no config dir at "/usr/lib/ignition/base.d"
>Aug 20 17:27:11 pi zezere-ignition[14320]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file"
>Aug 20 17:27:11 pi zezere-ignition[14320]: DEBUG : parsed url from cmdline: ""
>Aug 20 17:27:11 pi zezere-ignition[14320]: INFO : no config URL provided
>Aug 20 17:27:11 pi zezere-ignition[14320]: INFO : reading system config file "/usr/lib/ignition/user.ign"
>Aug 20 17:27:11 pi zezere-ignition[14320]: INFO : no config at "/usr/lib/ignition/user.ign"
>Aug 20 17:27:11 pi zezere-ignition[14320]: INFO : using config file at "/tmp/zezere-ignition-config-34id9027.ign"
>Aug 20 17:27:11 pi zezere-ignition[14320]: DEBUG : parsing config with SHA512: 4202432df9c76a84fcaf0bb3dd81d9aec055ae366dee2bee1f88d5233bae27ac14ab0d1f9a3fbfa25e382806a76e601d9375479e30d495e751038695e77877a2
>Aug 20 17:27:11 pi zezere-ignition[14320]: INFO : GET https://provision.fedoraproject.org/netboot/aarch64/ignition/dc:a6:32:38:46:e7: attempt #1
>Aug 20 17:27:12 pi zezere-ignition[14320]: INFO : GET result: Not Found
>Aug 20 17:27:12 pi zezere-ignition[14320]: WARNING : failed to fetch config: resource not found
>Aug 20 17:27:12 pi zezere-ignition[14320]: CRITICAL : failed to acquire config: resource not found
>Aug 20 17:27:12 pi zezere-ignition[14320]: CRITICAL : Ignition failed: resource not found
>Aug 20 17:27:12 pi zezere-ignition[14328]: INFO : Ignition 2.14.0
>Aug 20 17:27:12 pi zezere-ignition[14328]: INFO : Stage: disks
>Aug 20 17:27:12 pi zezere-ignition[14328]: INFO : no config dir at "/usr/lib/ignition/base.d"
>Aug 20 17:27:12 pi zezere-ignition[14328]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file"
>Aug 20 17:27:12 pi zezere-ignition[14328]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory
>Aug 20 17:27:12 pi zezere-ignition[14328]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory
>Aug 20 17:27:12 pi zezere-ignition[14335]: INFO : Ignition 2.14.0
>Aug 20 17:27:12 pi zezere-ignition[14335]: INFO : Stage: mount
>Aug 20 17:27:12 pi zezere-ignition[14335]: INFO : no config dir at "/usr/lib/ignition/base.d"
>Aug 20 17:27:12 pi zezere-ignition[14335]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file"
>Aug 20 17:27:12 pi zezere-ignition[14335]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory
>Aug 20 17:27:12 pi zezere-ignition[14335]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory
>Aug 20 17:27:12 pi zezere-ignition[14341]: INFO : Ignition 2.14.0
>Aug 20 17:27:12 pi zezere-ignition[14341]: INFO : Stage: files
>Aug 20 17:27:12 pi zezere-ignition[14341]: INFO : no config dir at "/usr/lib/ignition/base.d"
>Aug 20 17:27:12 pi zezere-ignition[14341]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file"
>Aug 20 17:27:12 pi zezere-ignition[14341]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory
>Aug 20 17:27:12 pi zezere-ignition[14341]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory
>Aug 20 17:27:12 pi zezere-ignition[14347]: INFO : Ignition 2.14.0
>Aug 20 17:27:12 pi zezere-ignition[14347]: INFO : Stage: umount
>Aug 20 17:27:12 pi zezere-ignition[14347]: INFO : no config dir at "/usr/lib/ignition/base.d"
>Aug 20 17:27:12 pi zezere-ignition[14347]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file"
>Aug 20 17:27:12 pi zezere-ignition[14347]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory
>Aug 20 17:27:12 pi zezere-ignition[14347]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory
>Aug 20 17:27:12 pi zezere-ignition[14319]: Running stage fetch with config file /tmp/zezere-ignition-config-34id9027.ign
>Aug 20 17:27:12 pi zezere-ignition[14319]: Running stage disks with config file /tmp/zezere-ignition-config-34id9027.ign
>Aug 20 17:27:12 pi zezere-ignition[14319]: Running stage mount with config file /tmp/zezere-ignition-config-34id9027.ign
>Aug 20 17:27:12 pi zezere-ignition[14319]: Running stage files with config file /tmp/zezere-ignition-config-34id9027.ign
>Aug 20 17:27:12 pi zezere-ignition[14319]: Running stage umount with config file /tmp/zezere-ignition-config-34id9027.ign
>Aug 20 17:27:12 pi systemd[1]: zezere_ignition.service: Deactivated successfully.
>Aug 20 17:27:12 pi systemd[1]: Finished zezere_ignition.service - Run Ignition for Zezere.
>Aug 20 17:27:12 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=zezere_ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:27:12 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=zezere_ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:27:18 pi systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories...
>Aug 20 17:27:18 pi systemd-tmpfiles[14353]: /usr/lib/tmpfiles.d/pkg-man-db.conf:1: Duplicate line for path "/var/cache/man", ignoring.
>Aug 20 17:27:18 pi systemd-tmpfiles[14353]: /usr/lib/tmpfiles.d/tmp.conf:12: Duplicate line for path "/var/tmp", ignoring.
>Aug 20 17:27:18 pi systemd-tmpfiles[14353]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
>Aug 20 17:27:18 pi systemd-tmpfiles[14353]: /usr/lib/tmpfiles.d/var.conf:19: Duplicate line for path "/var/cache", ignoring.
>Aug 20 17:27:18 pi systemd-tmpfiles[14353]: /usr/lib/tmpfiles.d/var.conf:21: Duplicate line for path "/var/lib", ignoring.
>Aug 20 17:27:18 pi systemd-tmpfiles[14353]: /usr/lib/tmpfiles.d/var.conf:23: Duplicate line for path "/var/spool", ignoring.
>Aug 20 17:27:18 pi systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
>Aug 20 17:27:18 pi systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories.
>Aug 20 17:27:18 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-tmpfiles-clean comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:27:18 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-tmpfiles-clean comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:27:18 pi systemd[1]: Started f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service - /usr/bin/podman healthcheck run f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.
>Aug 20 17:27:18 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:27:19 pi podman[14354]: 2022-08-20 17:27:19.093735676 +0000 UTC m=+0.196397424 container exec f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, PODMAN_SYSTEMD_UNIT=container-pihole.service, org.opencontainers.image.description=Pi-hole in a docker container, org.opencontainers.image.licenses=, io.containers.autoupdate=registry, org.opencontainers.image.revision=7e69551be1b76d175fffc1b8c53733e74ee82520, org.opencontainers.image.title=docker-pi-hole, org.opencontainers.image.url=https://github.com/pi-hole/docker-pi-hole, org.opencontainers.image.version=2022.07.1, org.opencontainers.image.created=2022-07-10T13:08:55.626Z, org.opencontainers.image.source=https://github.com/pi-hole/docker-pi-hole)
>Aug 20 17:27:19 pi podman[14354]: 2022-08-20 17:27:19.150430186 +0000 UTC m=+0.253092101 container exec_died f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, execID=78f2d2172d031cdb8ecd47ba5de8b0bc56f2636689230795932539ebeab2a8af)
>Aug 20 17:27:19 pi systemd[1]: f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service: Deactivated successfully.
>Aug 20 17:27:19 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:27:21 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 86.
>Aug 20 17:27:21 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:27:21 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:27:21 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:27:21 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
>Aug 20 17:27:21 pi dbus-parsec[14375]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
>Aug 20 17:27:21 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
>Aug 20 17:27:21 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
>Aug 20 17:27:21 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:27:21 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
>Aug 20 17:27:22 pi gitea-app[10857]: 2022/08/20 19:27:22 [6301197a] router: completed GET /Danacus/university-stuff/src/commit/6e1e945814294f6e4aed91e63230f19bf77812ec/Gedistribueerde%20Systemen/2021-10-26-Note-14-01.xopp for 10.88.0.1:48174, 200 OK in 62.2ms @ repo/view.go:732(repo.Home)
>Aug 20 17:27:29 pi gitea-app[10857]: 2022/08/20 19:27:29 [63011981] router: completed GET /Danacus/university-stuff/issues?assignee=1&labels&milestone=0&q&sort=oldest&state=closed&type=all for 10.88.0.1:43452, 200 OK in 94.5ms @ repo/issue.go:386(repo.Issues)
>Aug 20 17:27:31 pi gitea-app[10857]: 2022/08/20 19:27:31 [63011983] router: completed GET /Danacus/university-stuff/src/commit/a3382ccbb2c61efe658e1a9719b94f15bdf2733d/OGP/Generics/.idea for 10.88.0.1:43466, 200 OK in 56.5ms @ repo/view.go:732(repo.Home)
>Aug 20 17:27:31 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 87.
>Aug 20 17:27:31 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:27:31 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:27:31 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:27:31 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
>Aug 20 17:27:32 pi dbus-parsec[14396]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
>Aug 20 17:27:32 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
>Aug 20 17:27:32 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
>Aug 20 17:27:32 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:27:32 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
>Aug 20 17:27:33 pi gitea-app[10857]: 2022/08/20 19:27:33 [63011985] router: completed GET /Danacus/university-stuff/src/commit/c6914cb9b474d3b1143972584c4b821676206002/Declaratieve%20Talen/prolog/Pipelines/prolog-pipelines.pl for 10.88.0.1:43480, 200 OK in 64.6ms @ repo/view.go:732(repo.Home)
>Aug 20 17:27:35 pi gitea-app[10857]: 2022/08/20 19:27:35 [63011987] router: completed GET /Danacus/university-stuff/src/commit/2de3d09e6964810a62b763d141ece0c862b4da87/Besturingssystemen/2019-10-24-Note-13-55.xopp for 10.88.0.1:41230, 200 OK in 64.2ms @ repo/view.go:732(repo.Home)
>Aug 20 17:27:38 pi gitea-app[10857]: 2022/08/20 19:27:38 [6301198a] router: completed GET /Danacus/university-stuff/src/commit/2c887c0964f81e3c45b4f5d8480fff84d43b44cd/Besturingssystemen/oef2/out/production/oef2/bridge for 10.88.0.1:41234, 200 OK in 55.0ms @ repo/view.go:732(repo.Home)
>Aug 20 17:27:42 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 88.
>Aug 20 17:27:42 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:27:42 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:27:42 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:27:42 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
>Aug 20 17:27:42 pi dbus-parsec[14413]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
>Aug 20 17:27:42 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
>Aug 20 17:27:42 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
>Aug 20 17:27:42 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:27:42 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
>Aug 20 17:27:49 pi systemd[1]: Started f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service - /usr/bin/podman healthcheck run f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.
>Aug 20 17:27:49 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:27:49 pi podman[14416]: 2022-08-20 17:27:49.689879215 +0000 UTC m=+0.196228018 container exec f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, org.opencontainers.image.version=2022.07.1, org.opencontainers.image.created=2022-07-10T13:08:55.626Z, org.opencontainers.image.description=Pi-hole in a docker container, org.opencontainers.image.source=https://github.com/pi-hole/docker-pi-hole, org.opencontainers.image.title=docker-pi-hole, io.containers.autoupdate=registry, org.opencontainers.image.licenses=, org.opencontainers.image.revision=7e69551be1b76d175fffc1b8c53733e74ee82520, PODMAN_SYSTEMD_UNIT=container-pihole.service, org.opencontainers.image.url=https://github.com/pi-hole/docker-pi-hole)
>Aug 20 17:27:49 pi podman[14416]: 2022-08-20 17:27:49.75019468 +0000 UTC m=+0.256543650 container exec_died f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, execID=81eef2f2ffc2f0da197f36d6e04bd89557ea552a8640bfa604a7a412fbe3556b)
>Aug 20 17:27:49 pi systemd[1]: f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service: Deactivated successfully.
>Aug 20 17:27:49 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:27:52 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 89.
>Aug 20 17:27:52 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:27:52 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:27:52 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:27:52 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
>Aug 20 17:27:52 pi dbus-parsec[14438]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
>Aug 20 17:27:52 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
>Aug 20 17:27:52 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
>Aug 20 17:27:52 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:27:52 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
>Aug 20 17:28:00 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:28:00 pi systemd[1]: Started c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518.service - /usr/bin/podman healthcheck run c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518.
>Aug 20 17:28:00 pi systemd[1]: Starting pmie_check.service - Check PMIE instances are running...
>Aug 20 17:28:00 pi systemd[1]: Started pmie_check.service - Check PMIE instances are running.
>Aug 20 17:28:00 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=pmie_check comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:28:01 pi systemd[1]: pmie_check.service: Deactivated successfully.
>Aug 20 17:28:01 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=pmie_check comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:28:01 pi podman[14446]: 2022-08-20 17:28:01.856059683 +0000 UTC m=+1.350352748 container exec c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 (image=docker.io/vaultwarden/server:latest, name=vaultwarden-server, PODMAN_SYSTEMD_UNIT=container-vaultwarden-server.service, org.opencontainers.image.licenses=GPL-3.0-only, org.opencontainers.image.revision=ce9d93003cd37a79edba1ba830a6c6d3fa22c2c8, io.balena.architecture=aarch64, io.containers.autoupdate=registry, org.opencontainers.image.created=2022-07-27T18:44:18+00:00, org.opencontainers.image.source=https://github.com/dani-garcia/vaultwarden, org.opencontainers.image.url=https://hub.docker.com/r/vaultwarden/server, io.balena.qemu.version=7.0.0+balena1-aarch64, org.opencontainers.image.documentation=https://github.com/dani-garcia/vaultwarden/wiki, org.opencontainers.image.version=1.25.2)
>Aug 20 17:28:01 pi podman[14446]: 2022-08-20 17:28:01.909420217 +0000 UTC m=+1.403713300 container exec_died c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 (image=docker.io/vaultwarden/server:latest, name=vaultwarden-server, execID=ac3dfd3e194b6e9ef76a8c61c3431fad182f4bb659843dc8b9aa47c08d595e23)
>Aug 20 17:28:02 pi systemd[1]: c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518.service: Deactivated successfully.
>Aug 20 17:28:02 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:28:02 pi systemd[1]: c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518.service: Consumed 2.608s CPU time.
>Aug 20 17:28:02 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 90.
>Aug 20 17:28:02 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:28:02 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:28:02 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:28:02 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
>Aug 20 17:28:02 pi dbus-parsec[14575]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
>Aug 20 17:28:02 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
>Aug 20 17:28:02 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
>Aug 20 17:28:02 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:28:02 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
>Aug 20 17:28:12 pi systemd[1]: Starting pmie_farm_check.service - Check and migrate non-primary pmie farm instances...
>Aug 20 17:28:12 pi systemd[1]: Starting zezere_ignition.service - Run Ignition for Zezere...
>Aug 20 17:28:12 pi systemd[1]: Started pmie_farm_check.service - Check and migrate non-primary pmie farm instances.
>Aug 20 17:28:12 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=pmie_farm_check comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:28:12 pi zezere-ignition[14617]: INFO : Ignition 2.14.0
>Aug 20 17:28:12 pi zezere-ignition[14617]: INFO : Stage: fetch
>Aug 20 17:28:12 pi zezere-ignition[14617]: INFO : no config dir at "/usr/lib/ignition/base.d"
>Aug 20 17:28:12 pi zezere-ignition[14617]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file"
>Aug 20 17:28:12 pi zezere-ignition[14617]: DEBUG : parsed url from cmdline: ""
>Aug 20 17:28:12 pi zezere-ignition[14617]: INFO : no config URL provided
>Aug 20 17:28:12 pi zezere-ignition[14617]: INFO : reading system config file "/usr/lib/ignition/user.ign"
>Aug 20 17:28:12 pi zezere-ignition[14617]: INFO : no config at "/usr/lib/ignition/user.ign"
>Aug 20 17:28:12 pi zezere-ignition[14617]: INFO : using config file at "/tmp/zezere-ignition-config-g3hl7xy3.ign"
>Aug 20 17:28:12 pi zezere-ignition[14617]: DEBUG : parsing config with SHA512: 4202432df9c76a84fcaf0bb3dd81d9aec055ae366dee2bee1f88d5233bae27ac14ab0d1f9a3fbfa25e382806a76e601d9375479e30d495e751038695e77877a2
>Aug 20 17:28:12 pi zezere-ignition[14617]: INFO : GET https://provision.fedoraproject.org/netboot/aarch64/ignition/dc:a6:32:38:46:e7: attempt #1
>Aug 20 17:28:12 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 91.
>Aug 20 17:28:12 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:28:12 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:28:12 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:28:12 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
>Aug 20 17:28:12 pi systemd[1]: pmie_farm_check.service: Deactivated successfully.
>Aug 20 17:28:12 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=pmie_farm_check comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:28:13 pi dbus-parsec[14669]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
>Aug 20 17:28:13 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
>Aug 20 17:28:13 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
>Aug 20 17:28:13 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
>Aug 20 17:28:13 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
>Aug 20 17:28:13 pi zezere-ignition[14617]: INFO : GET result: Not Found
>Aug 20 17:28:13 pi zezere-ignition[14617]: WARNING : failed to fetch config: resource not found
>Aug 20 17:28:13 pi zezere-ignition[14617]: CRITICAL : failed to acquire config: resource not found
>Aug 20 17:28:13 pi zezere-ignition[14617]: CRITICAL : Ignition failed: resource not found
>Aug 20 17:28:13 pi zezere-ignition[14671]: INFO : Ignition 2.14.0
>Aug 20 17:28:13 pi zezere-ignition[14671]: INFO : Stage: disks
>Aug 20 17:28:13 pi zezere-ignition[14671]: INFO : no config dir at "/usr/lib/ignition/base.d"
>Aug 20 17:28:13 pi zezere-ignition[14671]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file"
>Aug 20 17:28:13 pi zezere-ignition[14671]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory
>Aug 20 17:28:13 pi zezere-ignition[14671]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory
>Aug 20 17:28:13 pi zezere-ignition[14677]: INFO : Ignition 2.14.0
>Aug 20 17:28:13 pi zezere-ignition[14677]: INFO : Stage: mount
>Aug 20 17:28:13 pi zezere-ignition[14677]: INFO : no config dir at "/usr/lib/ignition/base.d"
>Aug 20 17:28:13 pi zezere-ignition[14677]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file"
>Aug 20 17:28:13 pi zezere-ignition[14677]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory
>Aug 20 17:28:13 pi zezere-ignition[14677]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory
>Aug 20 17:28:13 pi zezere-ignition[14683]: INFO : Ignition 2.14.0
>Aug 20 17:28:13 pi zezere-ignition[14683]: INFO : Stage: files
>Aug 20 17:28:13 pi zezere-ignition[14683]: INFO : no config dir at "/usr/lib/ignition/base.d"
>Aug 20 17:28:13 pi zezere-ignition[14683]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file"
>Aug 20 17:28:13 pi zezere-ignition[14683]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory
>Aug 20 17:28:13 pi zezere-ignition[14683]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory
>Aug 20 17:28:13 pi zezere-ignition[14689]: INFO : Ignition 2.14.0
>Aug 20 17:28:13 pi zezere-ignition[14689]: INFO : Stage: umount
>Aug 20 17:28:13 pi zezere-ignition[14689]: INFO : no config dir at "/usr/lib/ignition/base.d"
>Aug 20 17:28:13 pi zezere-ignition[14689]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file"
>Aug 20 17:28:13 pi zezere-ignition[14689]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory
>Aug 20 17:28:13 pi zezere-ignition[14689]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory
>Aug 20 17:28:13 pi zezere-ignition[14577]: Running stage fetch with config file /tmp/zezere-ignition-config-g3hl7xy3.ign
>Aug 20 17:28:13 pi zezere-ignition[14577]: Running stage disks with config file /tmp/zezere-ignition-config-g3hl7xy3.ign
>Aug 20 17:28:13 pi zezere-ignition[14577]: Running stage mount with config file /tmp/zezere-ignition-config-g3hl7xy3.ign
>Aug 20 17:28:13 pi zezere-ignition[14577]: Running stage files with config file /tmp/zezere-ignition-config-g3hl7xy3.ign
>Aug 20 17:28:13 pi zezere-ignition[14577]: Running stage umount with config file /tmp/zezere-ignition-config-g3hl7xy3.ign
>Aug 20 17:28:13 pi systemd[1]: zezere_ignition.service: Deactivated successfully.
>Aug 20 17:28:13 pi systemd[1]: Finished zezere_ignition.service - Run Ignition for Zezere.
>Aug 20 17:28:13 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=zezere_ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>Aug 20 17:28:13 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=zezere_ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' >Aug 20 17:28:19 pi gitea-app[10857]: 2022/08/20 19:28:19 [630119b3] router: completed GET /Danacus/university-stuff/action/watch?redirect_to=%2FDanacus%2Funiversity-stuff%2Fblame%2Fcommit%2F4f6704b7814f72e9387672f07891ea3f05a9872c%2FBesturingssystemen%2FOefenzitting1%2Fsrc%2FNoSchedulingAlgorithm.java for 10.88.0.1:49946, 405 Method Not Allowed in 0.8ms @ web/goget.go:21(web.goGet) >Aug 20 17:28:20 pi systemd[1]: Started f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service - /usr/bin/podman healthcheck run f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e. >Aug 20 17:28:20 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:28:22 pi gitea-app[10857]: 2022/08/20 19:28:22 [630119b6] router: completed GET /Danacus/university-stuff/src/commit/fbe44a25cd05f56bbbd056f5901103a6e651c608/bri_taak_2 for 10.88.0.1:49958, 200 OK in 146.8ms @ repo/view.go:732(repo.Home) >Aug 20 17:28:23 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 92. >Aug 20 17:28:23 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:28:23 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:28:23 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:28:23 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon... 
>Aug 20 17:28:23 pi dbus-parsec[14712]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist))) >Aug 20 17:28:23 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE >Aug 20 17:28:23 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'. >Aug 20 17:28:23 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:28:23 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' >Aug 20 17:28:24 pi podman[14695]: 2022-08-20 17:28:24.255918395 +0000 UTC m=+3.730996512 container exec f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, org.opencontainers.image.created=2022-07-10T13:08:55.626Z, org.opencontainers.image.description=Pi-hole in a docker container, org.opencontainers.image.title=docker-pi-hole, PODMAN_SYSTEMD_UNIT=container-pihole.service, org.opencontainers.image.source=https://github.com/pi-hole/docker-pi-hole, org.opencontainers.image.revision=7e69551be1b76d175fffc1b8c53733e74ee82520, org.opencontainers.image.url=https://github.com/pi-hole/docker-pi-hole, org.opencontainers.image.version=2022.07.1, io.containers.autoupdate=registry, org.opencontainers.image.licenses=) >Aug 20 17:28:24 pi podman[14695]: 2022-08-20 17:28:24.320408321 +0000 UTC m=+3.795486569 container exec_died f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, execID=d0ec9203821c54e557afa4d9b06b87d4ad5eec293599b315698a2754ee724b0a) >Aug 20 17:28:24 pi systemd[1]: f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service: Deactivated successfully. 
>Aug 20 17:28:24 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:28:24 pi systemd[1]: f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service: Consumed 7.362s CPU time. >Aug 20 17:28:26 pi gitea-app[10857]: 2022/08/20 19:28:26 [630119ba] router: completed GET /Danacus/university-stuff/src/commit/fbe44a25cd05f56bbbd056f5901103a6e651c608/R for 10.88.0.1:49618, 200 OK in 86.1ms @ repo/view.go:732(repo.Home) >Aug 20 17:28:33 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 93. >Aug 20 17:28:33 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:28:33 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:28:33 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:28:33 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon... >Aug 20 17:28:33 pi dbus-parsec[14736]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist))) >Aug 20 17:28:33 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE >Aug 20 17:28:33 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'. >Aug 20 17:28:33 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. 
>Aug 20 17:28:33 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' >Aug 20 17:28:39 pi gitea-app[10857]: 2022/08/20 19:28:39 [630119c7] router: completed GET /Danacus/university-stuff/commits/commit/a5a26716c0c3bf5cf9658499ffae462e2adabb0b/BvP/oef_5_E4.py for 10.88.0.1:38310, 200 OK in 189.2ms @ repo/commit.go:37(repo.RefCommits) >Aug 20 17:28:41 pi gitea-app[10857]: 2022/08/20 19:28:41 [630119c9] router: completed GET /Danacus/university-stuff/src/commit/7647538f1caeb643328630bbc4d9e35bd5189696/Gegevensbanken for 10.88.0.1:38318, 200 OK in 60.5ms @ repo/view.go:732(repo.Home) >Aug 20 17:28:42 pi kernel: filter_IN_public_REJECT: IN=eth0 OUT= MAC= SRC=10.0.3.10 DST=239.255.255.250 LEN=118 TOS=0x00 PREC=0x00 TTL=2 ID=43191 DF PROTO=UDP SPT=33477 DPT=1900 LEN=98 >Aug 20 17:28:42 pi kernel: filter_IN_public_REJECT: IN=eth0 OUT= MAC= SRC=fe80:0000:0000:0000:4ec7:1f5e:5274:16ba DST=ff02:0000:0000:0000:0000:0000:0000:000c LEN=134 TC=0 HOPLIMIT=2 FLOWLBL=968519 PROTO=UDP SPT=35818 DPT=1900 LEN=94 >Aug 20 17:28:42 pi kernel: filter_IN_public_REJECT: IN=eth0 OUT= MAC= SRC=10.0.3.10 DST=255.255.255.255 LEN=118 TOS=0x00 PREC=0x00 TTL=64 ID=46994 DF PROTO=UDP SPT=33477 DPT=1900 LEN=98 >Aug 20 17:28:43 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 94. >Aug 20 17:28:43 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:28:43 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' >Aug 20 17:28:43 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:28:43 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon... >Aug 20 17:28:43 pi dbus-parsec[14751]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist))) >Aug 20 17:28:43 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE >Aug 20 17:28:43 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'. >Aug 20 17:28:43 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:28:43 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' >Aug 20 17:28:46 pi gitea-app[10857]: 2022/08/20 19:28:46 [630119ce] router: completed GET /Danacus/dotfiles/src/commit/ea869f52e6c09d9a6eb6b3ea39288a9f7194cad1/.config/i3/rotate_normal.sh for 10.88.0.1:38906, 200 OK in 57.9ms @ repo/view.go:732(repo.Home) >Aug 20 17:28:50 pi audit[14758]: USER_AUTH pid=14758 uid=1000 auid=1000 ses=1 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:authentication grantors=pam_usertype,pam_localuser,pam_unix acct="pi" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success' >Aug 20 17:28:50 pi audit[14758]: USER_ACCT pid=14758 uid=1000 auid=1000 ses=1 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:accounting grantors=pam_unix,pam_localuser acct="pi" exe="/usr/bin/sudo" hostname=? addr=? 
terminal=/dev/pts/0 res=success' >Aug 20 17:28:50 pi audit[14758]: USER_CMD pid=14758 uid=1000 auid=1000 ses=1 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='cwd="/var/home/pi" cmd=72706D2D6F737472656520737461747573 exe="/usr/bin/sudo" terminal=pts/0 res=success' >Aug 20 17:28:50 pi sudo[14758]: pi : TTY=pts/0 ; PWD=/var/home/pi ; USER=root ; COMMAND=/usr/bin/rpm-ostree status >Aug 20 17:28:50 pi audit[14758]: CRED_REFR pid=14758 uid=1000 auid=1000 ses=1 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:setcred grantors=pam_localuser,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success' >Aug 20 17:28:50 pi audit[14758]: USER_START pid=14758 uid=1000 auid=1000 ses=1 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:session_open grantors=pam_keyinit,pam_limits,pam_keyinit,pam_limits,pam_systemd,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success' >Aug 20 17:28:50 pi sudo[14758]: pam_unix(sudo:session): session opened for user root(uid=0) by pi(uid=1000) >Aug 20 17:28:50 pi systemd[1]: Starting rpm-ostreed.service - rpm-ostree System Management Daemon... >Aug 20 17:28:50 pi rpm-ostree[14763]: Reading config file '/etc/rpm-ostreed.conf' >Aug 20 17:28:50 pi rpm-ostree[14763]: In idle state; will auto-exit in 62 seconds >Aug 20 17:28:50 pi systemd[1]: Started rpm-ostreed.service - rpm-ostree System Management Daemon. >Aug 20 17:28:50 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=rpm-ostreed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' >Aug 20 17:28:51 pi rpm-ostree[14763]: client(id:cli dbus:1.98 unit:session-1.scope uid:0) added; new total=1 >Aug 20 17:28:51 pi rpm-ostree[14763]: client(id:cli dbus:1.98 unit:session-1.scope uid:0) vanished; remaining=0 >Aug 20 17:28:51 pi rpm-ostree[14763]: In idle state; will auto-exit in 61 seconds >Aug 20 17:28:51 pi sudo[14758]: pam_unix(sudo:session): session closed for user root >Aug 20 17:28:51 pi audit[14758]: USER_END pid=14758 uid=1000 auid=1000 ses=1 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:session_close grantors=pam_keyinit,pam_limits,pam_keyinit,pam_limits,pam_systemd,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success' >Aug 20 17:28:51 pi audit[14758]: CRED_DISP pid=14758 uid=1000 auid=1000 ses=1 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:setcred grantors=pam_localuser,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success' >Aug 20 17:28:53 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 95. >Aug 20 17:28:53 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:28:53 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:28:53 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:28:53 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon... 
>Aug 20 17:28:54 pi dbus-parsec[14770]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist))) >Aug 20 17:28:54 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE >Aug 20 17:28:54 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'. >Aug 20 17:28:54 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:28:54 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' >Aug 20 17:28:55 pi systemd[1]: Started f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service - /usr/bin/podman healthcheck run f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e. >Aug 20 17:28:55 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' >Aug 20 17:28:59 pi podman[14773]: 2022-08-20 17:28:59.010155993 +0000 UTC m=+3.504730340 container exec f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, org.opencontainers.image.url=https://github.com/pi-hole/docker-pi-hole, org.opencontainers.image.version=2022.07.1, org.opencontainers.image.revision=7e69551be1b76d175fffc1b8c53733e74ee82520, PODMAN_SYSTEMD_UNIT=container-pihole.service, io.containers.autoupdate=registry, org.opencontainers.image.created=2022-07-10T13:08:55.626Z, org.opencontainers.image.description=Pi-hole in a docker container, org.opencontainers.image.source=https://github.com/pi-hole/docker-pi-hole, org.opencontainers.image.licenses=, org.opencontainers.image.title=docker-pi-hole) >Aug 20 17:28:59 pi podman[14773]: 2022-08-20 17:28:59.070351915 +0000 UTC m=+3.564926447 container exec_died f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, execID=8021410a1d6b15ddd5f42e5a6c844dcacf7f104871753f42df8b10e44a53f29c) >Aug 20 17:28:59 pi systemd[1]: f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service: Deactivated successfully. >Aug 20 17:28:59 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:28:59 pi systemd[1]: f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service: Consumed 6.828s CPU time. >Aug 20 17:29:02 pi systemd[1]: Started c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518.service - /usr/bin/podman healthcheck run c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518. 
>Aug 20 17:29:02 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:29:02 pi podman[14798]: 2022-08-20 17:29:02.704500213 +0000 UTC m=+0.188132492 container exec c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 (image=docker.io/vaultwarden/server:latest, name=vaultwarden-server, io.containers.autoupdate=registry, org.opencontainers.image.source=https://github.com/dani-garcia/vaultwarden, org.opencontainers.image.documentation=https://github.com/dani-garcia/vaultwarden/wiki, org.opencontainers.image.version=1.25.2, io.balena.qemu.version=7.0.0+balena1-aarch64, org.opencontainers.image.url=https://hub.docker.com/r/vaultwarden/server, PODMAN_SYSTEMD_UNIT=container-vaultwarden-server.service, org.opencontainers.image.created=2022-07-27T18:44:18+00:00, org.opencontainers.image.revision=ce9d93003cd37a79edba1ba830a6c6d3fa22c2c8, io.balena.architecture=aarch64, org.opencontainers.image.licenses=GPL-3.0-only) >Aug 20 17:29:02 pi podman[14798]: 2022-08-20 17:29:02.759393068 +0000 UTC m=+0.243025476 container exec_died c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 (image=docker.io/vaultwarden/server:latest, name=vaultwarden-server, execID=69b686b19da2f422e13c0b82a4b438439b847b9da697d393625bf623b1c69dfb) >Aug 20 17:29:02 pi systemd[1]: c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518.service: Deactivated successfully. >Aug 20 17:29:02 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:29:04 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 96. 
>Aug 20 17:29:04 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:29:04 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:29:04 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:29:04 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon... >Aug 20 17:29:04 pi dbus-parsec[14821]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist))) >Aug 20 17:29:04 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE >Aug 20 17:29:04 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'. >Aug 20 17:29:04 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:29:04 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' >Aug 20 17:29:06 pi gitea-app[10857]: 2022/08/20 19:29:06 [630119e2] router: completed GET /Danacus/university-stuff/raw/commit/a1a69e5a9fbd2a9b0a738065dd27e30e848612fa/Numerieke/zit08_matlab/evalueer_lagrange2.m for 10.88.0.1:49894, 200 OK in 89.6ms @ repo/download.go:123(repo.SingleDownload) >Aug 20 17:29:12 pi gitea-app[10857]: 2022/08/20 19:29:12 [630119e8] router: completed GET /Danacus/university-stuff/src/commit/fbe44a25cd05f56bbbd056f5901103a6e651c608/Algoritmen for 10.88.0.1:49904, 200 OK in 108.5ms @ repo/view.go:732(repo.Home) >Aug 20 17:29:14 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 97. 
>Aug 20 17:29:14 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:29:14 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:29:14 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:29:14 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon... >Aug 20 17:29:14 pi systemd[1]: Starting zezere_ignition.service - Run Ignition for Zezere... >Aug 20 17:29:14 pi dbus-parsec[14835]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist))) >Aug 20 17:29:14 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE >Aug 20 17:29:14 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'. >Aug 20 17:29:14 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:29:14 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' >Aug 20 17:29:14 pi zezere-ignition[14837]: INFO : Ignition 2.14.0 >Aug 20 17:29:14 pi zezere-ignition[14837]: INFO : Stage: fetch >Aug 20 17:29:14 pi zezere-ignition[14837]: INFO : no config dir at "/usr/lib/ignition/base.d" >Aug 20 17:29:14 pi zezere-ignition[14837]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file" >Aug 20 17:29:14 pi zezere-ignition[14837]: DEBUG : parsed url from cmdline: "" >Aug 20 17:29:14 pi zezere-ignition[14837]: INFO : no config URL provided >Aug 20 17:29:14 pi zezere-ignition[14837]: INFO : reading system config file "/usr/lib/ignition/user.ign" >Aug 20 17:29:14 pi zezere-ignition[14837]: INFO : no config at "/usr/lib/ignition/user.ign" >Aug 20 17:29:14 pi zezere-ignition[14837]: INFO : using config file at "/tmp/zezere-ignition-config-hx1q9dxk.ign" >Aug 20 17:29:14 pi zezere-ignition[14837]: DEBUG : parsing config with SHA512: 4202432df9c76a84fcaf0bb3dd81d9aec055ae366dee2bee1f88d5233bae27ac14ab0d1f9a3fbfa25e382806a76e601d9375479e30d495e751038695e77877a2 >Aug 20 17:29:14 pi zezere-ignition[14837]: INFO : GET https://provision.fedoraproject.org/netboot/aarch64/ignition/dc:a6:32:38:46:e7: attempt #1 >Aug 20 17:29:15 pi zezere-ignition[14837]: INFO : GET result: Not Found >Aug 20 17:29:15 pi zezere-ignition[14837]: WARNING : failed to fetch config: resource not found >Aug 20 17:29:15 pi zezere-ignition[14837]: CRITICAL : failed to acquire config: resource not found >Aug 20 17:29:15 pi zezere-ignition[14837]: CRITICAL : Ignition failed: resource not found >Aug 20 17:29:15 pi zezere-ignition[14844]: INFO : Ignition 2.14.0 >Aug 20 17:29:15 pi zezere-ignition[14844]: INFO : Stage: disks >Aug 20 17:29:15 pi zezere-ignition[14844]: INFO : no config dir at "/usr/lib/ignition/base.d" >Aug 20 17:29:15 pi zezere-ignition[14844]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file" >Aug 20 17:29:15 pi zezere-ignition[14844]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory 
>Aug 20 17:29:15 pi zezere-ignition[14844]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory >Aug 20 17:29:15 pi zezere-ignition[14850]: INFO : Ignition 2.14.0 >Aug 20 17:29:15 pi zezere-ignition[14850]: INFO : Stage: mount >Aug 20 17:29:15 pi zezere-ignition[14850]: INFO : no config dir at "/usr/lib/ignition/base.d" >Aug 20 17:29:15 pi zezere-ignition[14850]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file" >Aug 20 17:29:15 pi zezere-ignition[14850]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory >Aug 20 17:29:15 pi zezere-ignition[14850]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory >Aug 20 17:29:15 pi zezere-ignition[14856]: INFO : Ignition 2.14.0 >Aug 20 17:29:15 pi zezere-ignition[14856]: INFO : Stage: files >Aug 20 17:29:15 pi zezere-ignition[14856]: INFO : no config dir at "/usr/lib/ignition/base.d" >Aug 20 17:29:15 pi zezere-ignition[14856]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file" >Aug 20 17:29:15 pi zezere-ignition[14856]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory >Aug 20 17:29:15 pi zezere-ignition[14856]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory >Aug 20 17:29:15 pi zezere-ignition[14862]: INFO : Ignition 2.14.0 >Aug 20 17:29:15 pi zezere-ignition[14862]: INFO : Stage: umount >Aug 20 17:29:15 pi zezere-ignition[14862]: INFO : no config dir at "/usr/lib/ignition/base.d" >Aug 20 17:29:15 pi zezere-ignition[14862]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file" >Aug 20 17:29:15 pi zezere-ignition[14862]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory >Aug 20 17:29:15 pi zezere-ignition[14862]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory >Aug 20 17:29:15 pi zezere-ignition[14836]: Running stage fetch with config file 
/tmp/zezere-ignition-config-hx1q9dxk.ign >Aug 20 17:29:15 pi zezere-ignition[14836]: Running stage disks with config file /tmp/zezere-ignition-config-hx1q9dxk.ign >Aug 20 17:29:15 pi zezere-ignition[14836]: Running stage mount with config file /tmp/zezere-ignition-config-hx1q9dxk.ign >Aug 20 17:29:15 pi zezere-ignition[14836]: Running stage files with config file /tmp/zezere-ignition-config-hx1q9dxk.ign >Aug 20 17:29:15 pi zezere-ignition[14836]: Running stage umount with config file /tmp/zezere-ignition-config-hx1q9dxk.ign >Aug 20 17:29:15 pi systemd[1]: zezere_ignition.service: Deactivated successfully. >Aug 20 17:29:15 pi systemd[1]: Finished zezere_ignition.service - Run Ignition for Zezere. >Aug 20 17:29:15 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=zezere_ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:29:15 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=zezere_ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success'
Aug 20 17:29:18 pi gitea-app[10857]: 2022/08/20 19:29:18 [630119ee] router: completed GET /Danacus/university-stuff/src/commit/9b03f76a4db00167f6aeb176ae344cb456eda24c/Bewijzen%20en%20redeneren/huistaak10/taak.log for 10.88.0.1:46042, 200 OK in 89.1ms @ repo/view.go:732(repo.Home)
Aug 20 17:29:20 pi gitea-app[10857]: 2022/08/20 19:29:20 [630119f0] router: completed GET /Danacus/university-stuff/action/watch?redirect_to=%2FDanacus%2Funiversity-stuff%2Fblame%2Fcommit%2Fa5a26716c0c3bf5cf9658499ffae462e2adabb0b%2FBvP%2Foef_3_E2.py for 10.88.0.1:46058, 405 Method Not Allowed in 0.9ms @ web/goget.go:21(web.goGet)
Aug 20 17:29:22 pi gitea-app[10857]: 2022/08/20 19:29:22 [630119f2] router: completed GET /Danacus/university-stuff/src/branch/master/Numerieke/zit16_matlab/oef1.m for 10.88.0.1:46060, 200 OK in 240.7ms @ repo/view.go:732(repo.Home)
Aug 20 17:29:22 pi gitea-app[10857]: 2022/08/20 19:29:22 [630119f2-8] router: completed GET /Danacus/university-stuff/commits/commit/e258876bec8aadcadec0488db242b3883b308301/IW/LaTeX/cv.log for 10.88.0.1:46072, 200 OK in 174.2ms @ repo/commit.go:37(repo.RefCommits)
Aug 20 17:29:24 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 98.
Aug 20 17:29:24 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:29:24 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:29:24 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
Aug 20 17:29:24 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
Aug 20 17:29:24 pi dbus-parsec[14890]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
Aug 20 17:29:24 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
Aug 20 17:29:24 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
Aug 20 17:29:24 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
Aug 20 17:29:24 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Aug 20 17:29:29 pi systemd[1]: Started f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service - /usr/bin/podman healthcheck run f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.
Aug 20 17:29:29 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:29:32 pi podman[14893]: 2022-08-20 17:29:32.920708994 +0000 UTC m=+3.409008207 container exec f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, org.opencontainers.image.version=2022.07.1, org.opencontainers.image.licenses=, org.opencontainers.image.revision=7e69551be1b76d175fffc1b8c53733e74ee82520, org.opencontainers.image.source=https://github.com/pi-hole/docker-pi-hole, PODMAN_SYSTEMD_UNIT=container-pihole.service, org.opencontainers.image.created=2022-07-10T13:08:55.626Z, org.opencontainers.image.title=docker-pi-hole, org.opencontainers.image.url=https://github.com/pi-hole/docker-pi-hole, io.containers.autoupdate=registry, org.opencontainers.image.description=Pi-hole in a docker container)
Aug 20 17:29:32 pi podman[14893]: 2022-08-20 17:29:32.989763367 +0000 UTC m=+3.478062635 container exec_died f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, execID=9462ca8e16539a9b6eed29ee5307d2571ec4618fae7530f0183d7a9bf6652b86)
Aug 20 17:29:33 pi systemd[1]: f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service: Deactivated successfully.
Aug 20 17:29:33 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:29:33 pi systemd[1]: f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service: Consumed 6.645s CPU time.
Aug 20 17:29:34 pi gitea-app[10857]: 2022/08/20 19:29:34 [630119fd] router: completed GET /Danacus/university-stuff/commit/a5a26716c0c3bf5cf9658499ffae462e2adabb0b for 10.88.0.1:51464, 200 OK in 1169.9ms @ repo/commit.go:255(repo.Diff)
Aug 20 17:29:34 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 99.
Aug 20 17:29:34 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:29:34 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:29:34 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
Aug 20 17:29:34 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
Aug 20 17:29:35 pi dbus-parsec[14927]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
Aug 20 17:29:35 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
Aug 20 17:29:35 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
Aug 20 17:29:35 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
Aug 20 17:29:35 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Aug 20 17:29:36 pi gitea-app[10857]: 2022/08/20 19:29:36 [63011a00] router: completed GET /Danacus/university-stuff/src/commit/d22594b8b330f48c49fa0042e493f3038376a832/AI for 10.88.0.1:46660, 200 OK in 67.8ms @ repo/view.go:732(repo.Home)
Aug 20 17:29:45 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 100.
Aug 20 17:29:45 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:29:45 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:29:45 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
Aug 20 17:29:45 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
Aug 20 17:29:45 pi dbus-parsec[14936]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
Aug 20 17:29:45 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
Aug 20 17:29:45 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
Aug 20 17:29:45 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
Aug 20 17:29:45 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Aug 20 17:29:48 pi gitea-app[10857]: 2022/08/20 19:29:48 [63011a0c] router: completed GET /Danacus/university-stuff/src/commit/3b6a42b1224679859459f1f1e7a46f3dc06a693e/BvP/mini_sudoku.py for 10.88.0.1:49172, 200 OK in 67.0ms @ repo/view.go:732(repo.Home)
Aug 20 17:29:51 pi systemd[1]: rpm-ostreed.service: Deactivated successfully.
Aug 20 17:29:51 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=rpm-ostreed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:29:55 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 101.
Aug 20 17:29:55 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:29:55 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:29:55 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
Aug 20 17:29:55 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
Aug 20 17:29:55 pi dbus-parsec[14945]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
Aug 20 17:29:55 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
Aug 20 17:29:55 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
Aug 20 17:29:55 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
Aug 20 17:29:55 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Aug 20 17:30:03 pi systemd[1]: Started c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518.service - /usr/bin/podman healthcheck run c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518.
Aug 20 17:30:03 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:30:03 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:30:03 pi systemd[1]: Started f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service - /usr/bin/podman healthcheck run f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.
Aug 20 17:30:03 pi systemd[1]: sysroot-tmp-crun.TiAiMq.mount: Deactivated successfully.
Aug 20 17:30:03 pi podman[14968]: 2022-08-20 17:30:03.743582363 +0000 UTC m=+0.229145584 container exec f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, org.opencontainers.image.version=2022.07.1, org.opencontainers.image.title=docker-pi-hole, org.opencontainers.image.description=Pi-hole in a docker container, org.opencontainers.image.licenses=, org.opencontainers.image.source=https://github.com/pi-hole/docker-pi-hole, org.opencontainers.image.url=https://github.com/pi-hole/docker-pi-hole, PODMAN_SYSTEMD_UNIT=container-pihole.service, io.containers.autoupdate=registry, org.opencontainers.image.created=2022-07-10T13:08:55.626Z, org.opencontainers.image.revision=7e69551be1b76d175fffc1b8c53733e74ee82520)
Aug 20 17:30:03 pi podman[14968]: 2022-08-20 17:30:03.800828931 +0000 UTC m=+0.286392281 container exec_died f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, execID=49ea29a1890ee66f8dfea5b67e6aadf6e727d8f4b042a091a4d33af4b7893555)
Aug 20 17:30:03 pi systemd[1]: f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service: Deactivated successfully.
Aug 20 17:30:03 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:30:03 pi podman[14967]: 2022-08-20 17:30:03.990780327 +0000 UTC m=+0.483282077 container exec c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 (image=docker.io/vaultwarden/server:latest, name=vaultwarden-server, org.opencontainers.image.url=https://hub.docker.com/r/vaultwarden/server, PODMAN_SYSTEMD_UNIT=container-vaultwarden-server.service, io.balena.architecture=aarch64, io.balena.qemu.version=7.0.0+balena1-aarch64, io.containers.autoupdate=registry, org.opencontainers.image.created=2022-07-27T18:44:18+00:00, org.opencontainers.image.source=https://github.com/dani-garcia/vaultwarden, org.opencontainers.image.revision=ce9d93003cd37a79edba1ba830a6c6d3fa22c2c8, org.opencontainers.image.licenses=GPL-3.0-only, org.opencontainers.image.version=1.25.2, org.opencontainers.image.documentation=https://github.com/dani-garcia/vaultwarden/wiki)
Aug 20 17:30:04 pi podman[14967]: 2022-08-20 17:30:04.039464365 +0000 UTC m=+0.531966227 container exec_died c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 (image=docker.io/vaultwarden/server:latest, name=vaultwarden-server, execID=87596ec5c9c7d202c4eaf055e12990620bc944b0828cf906ceabd2168aeae1f8)
Aug 20 17:30:04 pi systemd[1]: c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518.service: Deactivated successfully.
Aug 20 17:30:04 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:30:05 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 102.
Aug 20 17:30:05 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:30:05 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:30:05 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
Aug 20 17:30:05 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
Aug 20 17:30:05 pi dbus-parsec[15010]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
Aug 20 17:30:05 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
Aug 20 17:30:05 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
Aug 20 17:30:05 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
Aug 20 17:30:05 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Aug 20 17:30:08 pi gitea-app[10857]: 2022/08/20 19:30:08 [63011a20] router: completed GET /Danacus/dotfiles/action/star?redirect_to=%2FDanacus%2Fdotfiles%2Fblame%2Fcommit%2F22b2916933829c962f57d5b968508e9e46f00f0d%2F.config%2Fi3%2Ffactorio.sh%21 for 10.88.0.1:59148, 405 Method Not Allowed in 0.9ms @ web/goget.go:21(web.goGet)
Aug 20 17:30:15 pi systemd[1]: Starting zezere_ignition.service - Run Ignition for Zezere...
Aug 20 17:30:15 pi zezere-ignition[15012]: INFO : Ignition 2.14.0
Aug 20 17:30:15 pi zezere-ignition[15012]: INFO : Stage: fetch
Aug 20 17:30:15 pi zezere-ignition[15012]: INFO : no config dir at "/usr/lib/ignition/base.d"
Aug 20 17:30:15 pi zezere-ignition[15012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file"
Aug 20 17:30:15 pi zezere-ignition[15012]: DEBUG : parsed url from cmdline: ""
Aug 20 17:30:15 pi zezere-ignition[15012]: INFO : no config URL provided
Aug 20 17:30:15 pi zezere-ignition[15012]: INFO : reading system config file "/usr/lib/ignition/user.ign"
Aug 20 17:30:15 pi zezere-ignition[15012]: INFO : no config at "/usr/lib/ignition/user.ign"
Aug 20 17:30:15 pi zezere-ignition[15012]: INFO : using config file at "/tmp/zezere-ignition-config-92j_2ijd.ign"
Aug 20 17:30:15 pi zezere-ignition[15012]: DEBUG : parsing config with SHA512: 4202432df9c76a84fcaf0bb3dd81d9aec055ae366dee2bee1f88d5233bae27ac14ab0d1f9a3fbfa25e382806a76e601d9375479e30d495e751038695e77877a2
Aug 20 17:30:15 pi zezere-ignition[15012]: INFO : GET https://provision.fedoraproject.org/netboot/aarch64/ignition/dc:a6:32:38:46:e7: attempt #1
Aug 20 17:30:15 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 103.
Aug 20 17:30:15 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
Aug 20 17:30:15 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:30:15 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:30:15 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
Aug 20 17:30:16 pi dbus-parsec[15018]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
Aug 20 17:30:16 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
Aug 20 17:30:16 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
Aug 20 17:30:16 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
Aug 20 17:30:16 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Aug 20 17:30:16 pi zezere-ignition[15012]: INFO : GET result: Not Found
Aug 20 17:30:16 pi zezere-ignition[15012]: WARNING : failed to fetch config: resource not found
Aug 20 17:30:16 pi zezere-ignition[15012]: CRITICAL : failed to acquire config: resource not found
Aug 20 17:30:16 pi zezere-ignition[15012]: CRITICAL : Ignition failed: resource not found
Aug 20 17:30:16 pi zezere-ignition[15021]: INFO : Ignition 2.14.0
Aug 20 17:30:16 pi zezere-ignition[15021]: INFO : Stage: disks
Aug 20 17:30:16 pi zezere-ignition[15021]: INFO : no config dir at "/usr/lib/ignition/base.d"
Aug 20 17:30:16 pi zezere-ignition[15021]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file"
Aug 20 17:30:16 pi zezere-ignition[15021]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory
Aug 20 17:30:16 pi zezere-ignition[15021]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory
Aug 20 17:30:16 pi zezere-ignition[15027]: INFO : Ignition 2.14.0
Aug 20 17:30:16 pi zezere-ignition[15027]: INFO : Stage: mount
Aug 20 17:30:16 pi zezere-ignition[15027]: INFO : no config dir at "/usr/lib/ignition/base.d"
Aug 20 17:30:16 pi zezere-ignition[15027]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file"
Aug 20 17:30:16 pi zezere-ignition[15027]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory
Aug 20 17:30:16 pi zezere-ignition[15027]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory
Aug 20 17:30:16 pi zezere-ignition[15033]: INFO : Ignition 2.14.0
Aug 20 17:30:16 pi zezere-ignition[15033]: INFO : Stage: files
Aug 20 17:30:16 pi zezere-ignition[15033]: INFO : no config dir at "/usr/lib/ignition/base.d"
Aug 20 17:30:16 pi zezere-ignition[15033]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file"
Aug 20 17:30:16 pi zezere-ignition[15033]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory
Aug 20 17:30:16 pi zezere-ignition[15033]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory
Aug 20 17:30:16 pi zezere-ignition[15039]: INFO : Ignition 2.14.0
Aug 20 17:30:16 pi zezere-ignition[15039]: INFO : Stage: umount
Aug 20 17:30:16 pi zezere-ignition[15039]: INFO : no config dir at "/usr/lib/ignition/base.d"
Aug 20 17:30:16 pi zezere-ignition[15039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file"
Aug 20 17:30:16 pi zezere-ignition[15039]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory
Aug 20 17:30:16 pi zezere-ignition[15039]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory
Aug 20 17:30:16 pi zezere-ignition[15011]: Running stage fetch with config file /tmp/zezere-ignition-config-92j_2ijd.ign
Aug 20 17:30:16 pi zezere-ignition[15011]: Running stage disks with config file /tmp/zezere-ignition-config-92j_2ijd.ign
Aug 20 17:30:16 pi zezere-ignition[15011]: Running stage mount with config file /tmp/zezere-ignition-config-92j_2ijd.ign
Aug 20 17:30:16 pi zezere-ignition[15011]: Running stage files with config file /tmp/zezere-ignition-config-92j_2ijd.ign
Aug 20 17:30:16 pi zezere-ignition[15011]: Running stage umount with config file /tmp/zezere-ignition-config-92j_2ijd.ign
Aug 20 17:30:16 pi systemd[1]: zezere_ignition.service: Deactivated successfully.
Aug 20 17:30:16 pi systemd[1]: Finished zezere_ignition.service - Run Ignition for Zezere.
Aug 20 17:30:16 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=zezere_ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:30:16 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=zezere_ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:30:24 pi gitea-app[10857]: 2022/08/20 19:30:24 [63011a30] router: completed GET /Danacus/university-stuff/src/commit/a3382ccbb2c61efe658e1a9719b94f15bdf2733d/Numerieke for 10.88.0.1:54224, 200 OK in 146.2ms @ repo/view.go:732(repo.Home)
Aug 20 17:30:24 pi gitea-app[10857]: 2022/08/20 19:30:24 [63011a30-8] router: completed GET /Danacus/dotfiles/src/commit/38aac0904f2f8536cca9fee8e5d826afcea7038a/.config/rofi/.sidetab.rasi.un~ for 10.88.0.1:54238, 200 OK in 66.0ms @ repo/view.go:732(repo.Home)
Aug 20 17:30:26 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 104.
Aug 20 17:30:26 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:30:26 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:30:26 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
Aug 20 17:30:26 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
Aug 20 17:30:26 pi dbus-parsec[15060]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
Aug 20 17:30:26 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
Aug 20 17:30:26 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
Aug 20 17:30:26 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
Aug 20 17:30:26 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Aug 20 17:30:31 pi audit[15062]: USER_ACCT pid=15062 uid=1000 auid=1000 ses=1 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:accounting grantors=pam_unix,pam_localuser acct="pi" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'
Aug 20 17:30:31 pi audit[15062]: USER_CMD pid=15062 uid=1000 auid=1000 ses=1 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='cwd="/var/home/pi" cmd="dmseg" exe="/usr/bin/sudo" terminal=pts/0 res=failed'
Aug 20 17:30:34 pi systemd[1]: Started f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service - /usr/bin/podman healthcheck run f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.
Aug 20 17:30:34 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:30:34 pi podman[15064]: 2022-08-20 17:30:34.750013907 +0000 UTC m=+0.215124175 container exec f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, org.opencontainers.image.licenses=, org.opencontainers.image.source=https://github.com/pi-hole/docker-pi-hole, io.containers.autoupdate=registry, org.opencontainers.image.revision=7e69551be1b76d175fffc1b8c53733e74ee82520, org.opencontainers.image.title=docker-pi-hole, PODMAN_SYSTEMD_UNIT=container-pihole.service, org.opencontainers.image.created=2022-07-10T13:08:55.626Z, org.opencontainers.image.url=https://github.com/pi-hole/docker-pi-hole, org.opencontainers.image.version=2022.07.1, org.opencontainers.image.description=Pi-hole in a docker container)
Aug 20 17:30:34 pi podman[15064]: 2022-08-20 17:30:34.81017794 +0000 UTC m=+0.275288504 container exec_died f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, execID=312027b0d3ba6fad8528c997d3e5eb41847f4c52a1a310a13c261c6f9dcb060b)
Aug 20 17:30:34 pi systemd[1]: f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service: Deactivated successfully.
Aug 20 17:30:34 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:30:36 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 105.
Aug 20 17:30:36 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:30:36 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:30:36 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
Aug 20 17:30:36 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
Aug 20 17:30:36 pi dbus-parsec[15085]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
Aug 20 17:30:36 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
Aug 20 17:30:36 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
Aug 20 17:30:36 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
Aug 20 17:30:36 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Aug 20 17:30:36 pi audit[15086]: USER_ACCT pid=15086 uid=1000 auid=1000 ses=1 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:accounting grantors=pam_unix,pam_localuser acct="pi" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'
Aug 20 17:30:36 pi audit[15086]: USER_CMD pid=15086 uid=1000 auid=1000 ses=1 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='cwd="/var/home/pi" cmd="dmesg" exe="/usr/bin/sudo" terminal=pts/0 res=success'
Aug 20 17:30:36 pi sudo[15086]: pi : TTY=pts/0 ; PWD=/var/home/pi ; USER=root ; COMMAND=/usr/bin/dmesg
Aug 20 17:30:36 pi audit[15086]: CRED_REFR pid=15086 uid=1000 auid=1000 ses=1 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:setcred grantors=pam_env,pam_localuser,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'
Aug 20 17:30:36 pi sudo[15086]: pam_unix(sudo:session): session opened for user root(uid=0) by pi(uid=1000)
Aug 20 17:30:36 pi audit[15086]: USER_START pid=15086 uid=1000 auid=1000 ses=1 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:session_open grantors=pam_keyinit,pam_limits,pam_keyinit,pam_limits,pam_systemd,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'
Aug 20 17:30:37 pi sudo[15086]: pam_unix(sudo:session): session closed for user root
Aug 20 17:30:36 pi audit[15086]: USER_END pid=15086 uid=1000 auid=1000 ses=1 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:session_close grantors=pam_keyinit,pam_limits,pam_keyinit,pam_limits,pam_systemd,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'
Aug 20 17:30:36 pi audit[15086]: CRED_DISP pid=15086 uid=1000 auid=1000 ses=1 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:setcred grantors=pam_env,pam_localuser,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'
Aug 20 17:30:42 pi kernel: filter_IN_public_REJECT: IN=eth0 OUT= MAC= SRC=10.0.3.10 DST=239.255.255.250 LEN=118 TOS=0x00 PREC=0x00 TTL=2 ID=50840 DF PROTO=UDP SPT=33477 DPT=1900 LEN=98
Aug 20 17:30:42 pi kernel: filter_IN_public_REJECT: IN=eth0 OUT= MAC= SRC=fe80:0000:0000:0000:4ec7:1f5e:5274:16ba DST=ff02:0000:0000:0000:0000:0000:0000:000c LEN=134 TC=0 HOPLIMIT=2 FLOWLBL=968519 PROTO=UDP SPT=35818 DPT=1900 LEN=94
Aug 20 17:30:42 pi kernel: filter_IN_public_REJECT: IN=eth0 OUT= MAC= SRC=10.0.3.10 DST=255.255.255.255 LEN=118 TOS=0x00 PREC=0x00 TTL=64 ID=52793 DF PROTO=UDP SPT=33477 DPT=1900 LEN=98
Aug 20 17:30:46 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 106.
Aug 20 17:30:46 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:30:46 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:30:46 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
Aug 20 17:30:46 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
Aug 20 17:30:46 pi dbus-parsec[15089]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
Aug 20 17:30:46 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
Aug 20 17:30:46 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
Aug 20 17:30:46 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
Aug 20 17:30:46 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Aug 20 17:30:56 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 107.
Aug 20 17:30:56 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:30:56 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:30:56 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
Aug 20 17:30:56 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
Aug 20 17:30:57 pi dbus-parsec[15092]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
Aug 20 17:30:57 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
Aug 20 17:30:57 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
Aug 20 17:30:57 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
Aug 20 17:30:57 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Aug 20 17:31:04 pi systemd[1]: Started c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518.service - /usr/bin/podman healthcheck run c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518.
Aug 20 17:31:04 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:31:04 pi podman[15103]: 2022-08-20 17:31:04.686095334 +0000 UTC m=+0.189011148 container exec c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 (image=docker.io/vaultwarden/server:latest, name=vaultwarden-server, org.opencontainers.image.revision=ce9d93003cd37a79edba1ba830a6c6d3fa22c2c8, io.balena.qemu.version=7.0.0+balena1-aarch64, io.containers.autoupdate=registry, org.opencontainers.image.documentation=https://github.com/dani-garcia/vaultwarden/wiki, org.opencontainers.image.url=https://hub.docker.com/r/vaultwarden/server, org.opencontainers.image.version=1.25.2, org.opencontainers.image.licenses=GPL-3.0-only, PODMAN_SYSTEMD_UNIT=container-vaultwarden-server.service, io.balena.architecture=aarch64, org.opencontainers.image.source=https://github.com/dani-garcia/vaultwarden, org.opencontainers.image.created=2022-07-27T18:44:18+00:00)
Aug 20 17:31:04 pi podman[15103]: 2022-08-20 17:31:04.739316523 +0000 UTC m=+0.242232448 container exec_died c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 (image=docker.io/vaultwarden/server:latest, name=vaultwarden-server, execID=6e175ec72deca51acac2fb6eb4b66ebd7e2ff8fb64a47be5653f19abcba4f449)
Aug 20 17:31:04 pi systemd[1]: c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518.service: Deactivated successfully.
Aug 20 17:31:04 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=c7d652e29be68fbf06776fb74d6f1b2bc3b2609b60f8d0ab39bd66708219b518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:31:05 pi systemd[1]: Started f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service - /usr/bin/podman healthcheck run f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.
Aug 20 17:31:05 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:31:05 pi podman[15127]: 2022-08-20 17:31:05.692934123 +0000 UTC m=+0.189167610 container exec f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, org.opencontainers.image.licenses=, org.opencontainers.image.title=docker-pi-hole, PODMAN_SYSTEMD_UNIT=container-pihole.service, io.containers.autoupdate=registry, org.opencontainers.image.url=https://github.com/pi-hole/docker-pi-hole, org.opencontainers.image.description=Pi-hole in a docker container, org.opencontainers.image.created=2022-07-10T13:08:55.626Z, org.opencontainers.image.source=https://github.com/pi-hole/docker-pi-hole, org.opencontainers.image.version=2022.07.1, org.opencontainers.image.revision=7e69551be1b76d175fffc1b8c53733e74ee82520)
Aug 20 17:31:05 pi podman[15127]: 2022-08-20 17:31:05.750114321 +0000 UTC m=+0.246347937 container exec_died f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e (image=docker.io/pihole/pihole:latest, name=pihole, execID=4cc98c9482d6bc81031583d9eab3a1615b72e9b2f8107457ab06ac788f6c3145)
Aug 20 17:31:05 pi systemd[1]: f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service: Deactivated successfully.
Aug 20 17:31:05 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:31:06 pi gitea-app[10857]: 2022/08/20 19:31:06 [63011a5a] router: completed GET /Danacus/university-stuff/src/commit/2de3d09e6964810a62b763d141ece0c862b4da87/TMI for 10.88.0.1:43192, 200 OK in 64.6ms @ repo/view.go:732(repo.Home)
Aug 20 17:31:07 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 108.
Aug 20 17:31:07 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:31:07 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 20 17:31:07 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
Aug 20 17:31:07 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon...
Aug 20 17:31:07 pi dbus-parsec[15153]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist)))
Aug 20 17:31:07 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE
Aug 20 17:31:07 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'.
Aug 20 17:31:07 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon.
Aug 20 17:31:07 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=failed' >Aug 20 17:31:08 pi gitea-app[10857]: 2022/08/20 19:31:08 [63011a5c] router: completed GET /Danacus/university-stuff/src/commit/0c7fd6962972e3a7dd00087e40de01701b33ca31/IW/C/test for 10.88.0.1:43202, 200 OK in 55.9ms @ repo/view.go:732(repo.Home) >Aug 20 17:31:11 pi gitea-app[10857]: 2022/08/20 19:31:11 [63011a5e] router: completed GET /Danacus/university-stuff/blame/commit/f902cd3073bcdc14a41b3650c59ed0869c298b69/Declaratieve%20Talen/prolog/oef4/1-Junior%20Interpreter/tests.pl for 10.88.0.1:43218, 200 OK in 307.7ms @ repo/blame.go:47(repo.RefBlame) >Aug 20 17:31:11 pi audit[15168]: USER_ACCT pid=15168 uid=1000 auid=1000 ses=1 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:accounting grantors=pam_unix,pam_localuser acct="pi" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success' >Aug 20 17:31:11 pi audit[15168]: USER_CMD pid=15168 uid=1000 auid=1000 ses=1 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='cwd="/var/home/pi" cmd=646D657367202D2D68656C70 exe="/usr/bin/sudo" terminal=pts/0 res=success' >Aug 20 17:31:11 pi sudo[15168]: pi : TTY=pts/0 ; PWD=/var/home/pi ; USER=root ; COMMAND=/usr/bin/dmesg --help >Aug 20 17:31:11 pi audit[15168]: CRED_REFR pid=15168 uid=1000 auid=1000 ses=1 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:setcred grantors=pam_env,pam_localuser,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success' >Aug 20 17:31:11 pi sudo[15168]: pam_unix(sudo:session): session opened for user root(uid=0) by pi(uid=1000) >Aug 20 17:31:11 pi audit[15168]: USER_START pid=15168 uid=1000 auid=1000 ses=1 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:session_open grantors=pam_keyinit,pam_limits,pam_keyinit,pam_limits,pam_systemd,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? 
terminal=/dev/pts/0 res=success' >Aug 20 17:31:11 pi sudo[15168]: pam_unix(sudo:session): session closed for user root >Aug 20 17:31:11 pi audit[15168]: USER_END pid=15168 uid=1000 auid=1000 ses=1 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:session_close grantors=pam_keyinit,pam_limits,pam_keyinit,pam_limits,pam_systemd,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success' >Aug 20 17:31:11 pi audit[15168]: CRED_DISP pid=15168 uid=1000 auid=1000 ses=1 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:setcred grantors=pam_env,pam_localuser,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success' >Aug 20 17:31:13 pi gitea-app[10857]: 2022/08/20 19:31:13 [63011a61] router: completed GET /Danacus/university-stuff/commits/commit/31ca387f733dbd56601f960364de9fa25b4c9a37/Bewijzen%20en%20redeneren/Voorbereiding-Week9.pdf.xopp for 10.88.0.1:43224, 200 OK in 251.4ms @ repo/commit.go:37(repo.RefCommits) >Aug 20 17:31:15 pi gitea-app[10857]: 2022/08/20 19:31:15 [63011a63] router: completed GET /Danacus/university-stuff/src/commit/39e385e4007ef64f37130772ca622db2f33aff73/AI for 10.88.0.1:55786, 200 OK in 79.9ms @ repo/view.go:732(repo.Home) >Aug 20 17:31:17 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 109. >Aug 20 17:31:17 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:31:17 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:31:17 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. 
>Aug 20 17:31:17 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon... >Aug 20 17:31:17 pi systemd[1]: Starting zezere_ignition.service - Run Ignition for Zezere... >Aug 20 17:31:17 pi gitea-app[10857]: 2022/08/20 19:31:17 [63011a65] router: completed GET /Danacus/dotfiles/src/commit/8696566273c48e6fd533de23845026a32ba58785/.config/picom/kill.sh for 10.88.0.1:55792, 200 OK in 66.8ms @ repo/view.go:732(repo.Home) >Aug 20 17:31:17 pi dbus-parsec[15188]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist))) >Aug 20 17:31:17 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE >Aug 20 17:31:17 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'. >Aug 20 17:31:17 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:31:17 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' >Aug 20 17:31:17 pi zezere-ignition[15191]: INFO : Ignition 2.14.0 >Aug 20 17:31:17 pi zezere-ignition[15191]: INFO : Stage: fetch >Aug 20 17:31:17 pi zezere-ignition[15191]: INFO : no config dir at "/usr/lib/ignition/base.d" >Aug 20 17:31:17 pi zezere-ignition[15191]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file" >Aug 20 17:31:17 pi zezere-ignition[15191]: DEBUG : parsed url from cmdline: "" >Aug 20 17:31:17 pi zezere-ignition[15191]: INFO : no config URL provided >Aug 20 17:31:17 pi zezere-ignition[15191]: INFO : reading system config file "/usr/lib/ignition/user.ign" >Aug 20 17:31:17 pi zezere-ignition[15191]: INFO : no config at "/usr/lib/ignition/user.ign" >Aug 20 17:31:17 pi zezere-ignition[15191]: INFO : using config file at "/tmp/zezere-ignition-config-ody89hxl.ign" >Aug 20 17:31:17 pi zezere-ignition[15191]: DEBUG : parsing config with SHA512: 4202432df9c76a84fcaf0bb3dd81d9aec055ae366dee2bee1f88d5233bae27ac14ab0d1f9a3fbfa25e382806a76e601d9375479e30d495e751038695e77877a2 >Aug 20 17:31:17 pi zezere-ignition[15191]: INFO : GET https://provision.fedoraproject.org/netboot/aarch64/ignition/dc:a6:32:38:46:e7: attempt #1 >Aug 20 17:31:18 pi zezere-ignition[15191]: INFO : GET result: Not Found >Aug 20 17:31:18 pi zezere-ignition[15191]: WARNING : failed to fetch config: resource not found >Aug 20 17:31:18 pi zezere-ignition[15191]: CRITICAL : failed to acquire config: resource not found >Aug 20 17:31:18 pi zezere-ignition[15191]: CRITICAL : Ignition failed: resource not found >Aug 20 17:31:18 pi zezere-ignition[15199]: INFO : Ignition 2.14.0 >Aug 20 17:31:18 pi zezere-ignition[15199]: INFO : Stage: disks >Aug 20 17:31:18 pi zezere-ignition[15199]: INFO : no config dir at "/usr/lib/ignition/base.d" >Aug 20 17:31:18 pi zezere-ignition[15199]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file" >Aug 20 17:31:18 pi zezere-ignition[15199]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory 
>Aug 20 17:31:18 pi zezere-ignition[15199]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory >Aug 20 17:31:18 pi zezere-ignition[15205]: INFO : Ignition 2.14.0 >Aug 20 17:31:18 pi zezere-ignition[15205]: INFO : Stage: mount >Aug 20 17:31:18 pi zezere-ignition[15205]: INFO : no config dir at "/usr/lib/ignition/base.d" >Aug 20 17:31:18 pi zezere-ignition[15205]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file" >Aug 20 17:31:18 pi zezere-ignition[15205]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory >Aug 20 17:31:18 pi zezere-ignition[15205]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory >Aug 20 17:31:18 pi zezere-ignition[15211]: INFO : Ignition 2.14.0 >Aug 20 17:31:18 pi zezere-ignition[15211]: INFO : Stage: files >Aug 20 17:31:18 pi zezere-ignition[15211]: INFO : no config dir at "/usr/lib/ignition/base.d" >Aug 20 17:31:18 pi zezere-ignition[15211]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file" >Aug 20 17:31:18 pi zezere-ignition[15211]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory >Aug 20 17:31:18 pi zezere-ignition[15211]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory >Aug 20 17:31:18 pi zezere-ignition[15218]: INFO : Ignition 2.14.0 >Aug 20 17:31:18 pi zezere-ignition[15218]: INFO : Stage: umount >Aug 20 17:31:18 pi zezere-ignition[15218]: INFO : no config dir at "/usr/lib/ignition/base.d" >Aug 20 17:31:18 pi zezere-ignition[15218]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/file" >Aug 20 17:31:18 pi zezere-ignition[15218]: CRITICAL : failed to acquire config: open /run/ignition.json: no such file or directory >Aug 20 17:31:18 pi zezere-ignition[15218]: CRITICAL : Ignition failed: open /run/ignition.json: no such file or directory >Aug 20 17:31:18 pi zezere-ignition[15190]: Running stage fetch with config file 
/tmp/zezere-ignition-config-ody89hxl.ign >Aug 20 17:31:18 pi zezere-ignition[15190]: Running stage disks with config file /tmp/zezere-ignition-config-ody89hxl.ign >Aug 20 17:31:18 pi zezere-ignition[15190]: Running stage mount with config file /tmp/zezere-ignition-config-ody89hxl.ign >Aug 20 17:31:18 pi zezere-ignition[15190]: Running stage files with config file /tmp/zezere-ignition-config-ody89hxl.ign >Aug 20 17:31:18 pi zezere-ignition[15190]: Running stage umount with config file /tmp/zezere-ignition-config-ody89hxl.ign >Aug 20 17:31:18 pi systemd[1]: zezere_ignition.service: Deactivated successfully. >Aug 20 17:31:18 pi systemd[1]: Finished zezere_ignition.service - Run Ignition for Zezere. >Aug 20 17:31:18 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=zezere_ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:31:18 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=zezere_ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' >Aug 20 17:31:19 pi gitea-app[10857]: 2022/08/20 19:31:19 [63011a67] router: completed GET /Danacus/university-stuff/src/commit/0bc8943e0212b046bf6f76d5b5e47be5ad43e585/.metadata/version.ini for 10.88.0.1:55796, 200 OK in 64.6ms @ repo/view.go:732(repo.Home) >Aug 20 17:31:21 pi gitea-app[10857]: 2022/08/20 19:31:21 [63011a69] router: completed GET /Danacus/university-stuff/action/watch?redirect_to=%2FDanacus%2Funiversity-stuff%2Fcommits%2Fcommit%2Fba7ff65c04626371df53d00cf55f00e69e8cfc83%2FPrinciples%2520of%2520machine%2520learning%2FComputational%2520learning%2520theory.xopp for 10.88.0.1:55798, 405 Method Not Allowed in 0.9ms @ web/goget.go:21(web.goGet) >Aug 20 17:31:24 pi gitea-app[10857]: 2022/08/20 19:31:24 [63011a6b] router: completed GET /Danacus/university-stuff/commits/commit/c82e9bca75e5ec2e55ac8dff96303c6da5a97795/Logica/homework6.txt for 10.88.0.1:55814, 200 OK in 143.5ms @ repo/commit.go:37(repo.RefCommits) >Aug 20 17:31:27 pi systemd[1]: dbus-parsec.service: Scheduled restart job, restart counter is at 110. >Aug 20 17:31:27 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:31:27 pi audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' >Aug 20 17:31:27 pi systemd[1]: Stopped dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:31:27 pi systemd[1]: Starting dbus-parsec.service - PARSEC-encrypted DBus secrets daemon... 
>Aug 20 17:31:27 pi dbus-parsec[15242]: Error: ParsecClient(Client(Interface(OpcodeDoesNotExist))) >Aug 20 17:31:27 pi systemd[1]: dbus-parsec.service: Main process exited, code=exited, status=1/FAILURE >Aug 20 17:31:27 pi systemd[1]: dbus-parsec.service: Failed with result 'exit-code'. >Aug 20 17:31:27 pi systemd[1]: Failed to start dbus-parsec.service - PARSEC-encrypted DBus secrets daemon. >Aug 20 17:31:27 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-parsec comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' >Aug 20 17:31:31 pi gitea-app[10857]: 2022/08/20 19:31:31 [63011a73] router: completed GET /Danacus/university-stuff/src/branch/master/BvP/default/lib64 for 10.88.0.1:34564, 200 OK in 278.7ms @ repo/view.go:732(repo.Home) >Aug 20 17:31:36 pi systemd[1]: Started f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e.service - /usr/bin/podman healthcheck run f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e. >Aug 20 17:31:36 pi audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=f247765d76a1e9e2fa6ca5c82570625a4a07353133a269d58aea1405b2f5422e comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'