Bug 1915579 - [Metal] redhat-support-tool became unavailable after tcpdump usage (BareMetal IPI)
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: RHCOS
Version: 4.7
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 4.7.0
Assignee: Timothée Ravier
QA Contact: Michael Nguyen
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-01-13 00:07 UTC by Elena German
Modified: 2021-02-24 15:52 UTC
CC List: 6 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-02-24 15:52:28 UTC
Target Upstream Version:
Embargoed:




Links:
Red Hat Product Errata RHSA-2020:5633 (last updated 2021-02-24 15:52:45 UTC)

Description Elena German 2021-01-13 00:07:17 UTC
Description of problem:


Version-Release number of selected component (if applicable):
Cluster version is 4.7.0-0.nightly-2021-01-10-070949
toolbox-0.0.8-1.rhaos4.7.el8.noarch

How reproducible:
Always, on a virtual environment.

Steps to Reproduce:
1. Choose a node from the list:
    oc get nodes
2. Open a debug session to the node:
    oc debug node/master-0-0
3. chroot /host
4. Choose an interface from the list:
    ip ad
5. Run the toolbox container:
    toolbox
6. Run tcpdump on the chosen interface:
    tcpdump -nn -s 0 -i enp4s0 -w /host/var/tmp/master-0-0_$(date +%d_%m_%Y-%H_%M_%S-%Z).pcap
7. Stop it
8. From within the toolbox container, try to run redhat-support-tool to attach the file directly to an existing Red Hat Support case:
    redhat-support-tool addattachment -c 01234567 /host/var/tmp/master-0-0_12_01_2021-23_08_52-UTC.pcap
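
For reference, steps 6 through 8, run from inside the toolbox container started in step 5, amount to the short sequence below (a sketch using the values from this report; substitute your own node, interface, and case number):

    NODE=master-0-0        # node chosen in step 1
    IFACE=enp4s0           # interface chosen in step 4
    CASE=01234567          # existing Red Hat Support case number
    PCAP=/host/var/tmp/${NODE}_$(date +%d_%m_%Y-%H_%M_%S-%Z).pcap
    tcpdump -nn -s 0 -i "$IFACE" -w "$PCAP"        # stop the capture with Ctrl-C
    redhat-support-tool addattachment -c "$CASE" "$PCAP"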

Actual results:
bash: redhat-support-tool: command not found

Expected results:
Success, or at most an error stating that a case with this number does not exist.

Additional info:
1. The test was executed on a standard deployment (not a restricted network).
2. A virtual setup was used.
3. Logs:
[kni@provisionhost-0-0 ~]$ oc get nodes
NAME         STATUS   ROLES    AGE     VERSION
master-0-0   Ready    master   6h24m   v1.20.0+394a5a3
master-0-1   Ready    master   6h23m   v1.20.0+394a5a3
master-0-2   Ready    master   6h24m   v1.20.0+394a5a3
worker-0-0   Ready    worker   5h48m   v1.20.0+394a5a3
worker-0-1   Ready    worker   5h50m   v1.20.0+394a5a3
[kni@provisionhost-0-0 ~]$ 
[kni@provisionhost-0-0 ~]$ oc debug node/master-0-0
Starting pod/master-0-0-debug ...
To use host binaries, run `chroot /host`
Pod IP: 192.168.123.133
If you don't see a command prompt, try pressing enter.
sh-4.4# chroot /host
sh-4.4# ip ad
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp4s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:f7:43:be brd ff:ff:ff:ff:ff:ff
    inet6 fd00:1101::3/64 scope global dynamic 
       valid_lft 8sec preferred_lft 8sec
    inet6 fe80::5054:ff:fef7:43be/64 scope link 
       valid_lft forever preferred_lft forever
3: enp5s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master ovs-system state UP group default qlen 1000
    link/ether 52:54:00:a7:89:86 brd ff:ff:ff:ff:ff:ff
4: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether ae:e7:37:23:cc:41 brd ff:ff:ff:ff:ff:ff
5: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 52:54:00:a7:89:86 brd ff:ff:ff:ff:ff:ff
    inet 192.168.123.133/24 brd 192.168.123.255 scope global dynamic noprefixroute br-ex
       valid_lft 2779sec preferred_lft 2779sec
    inet6 fe80::be7c:2e5c:c6e1:64aa/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
6: br-int: <BROADCAST,MULTICAST> mtu 1400 qdisc noop state DOWN group default qlen 1000
    link/ether 56:99:6b:a7:b6:5d brd ff:ff:ff:ff:ff:ff
7: genev_sys_6081: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65000 qdisc noqueue master ovs-system state UNKNOWN group default qlen 1000
    link/ether 5a:80:fa:42:4f:b6 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5880:faff:fe42:4fb6/64 scope link 
       valid_lft forever preferred_lft forever
8: ovn-k8s-mp0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether ae:db:fd:fd:e5:e0 brd ff:ff:ff:ff:ff:ff
    inet 10.129.0.2/23 brd 10.129.1.255 scope global ovn-k8s-mp0
       valid_lft forever preferred_lft forever
    inet6 fe80::acdb:fdff:fefd:e5e0/64 scope link 
       valid_lft forever preferred_lft forever
9: br-local: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether d6:c4:02:16:12:4f brd ff:ff:ff:ff:ff:ff
    inet6 fe80::d4c4:2ff:fe16:124f/64 scope link 
       valid_lft forever preferred_lft forever
10: ovn-k8s-gw0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 0a:58:a9:fe:00:01 brd ff:ff:ff:ff:ff:ff
    inet 169.254.0.1/20 brd 169.254.15.255 scope global ovn-k8s-gw0
       valid_lft forever preferred_lft forever
    inet6 fe80::858:a9ff:fefe:1/64 scope link 
       valid_lft forever preferred_lft forever
11: 93a70b26402384b@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue master ovs-system state UP group default 
    link/ether e2:42:7c:f5:c3:03 brd ff:ff:ff:ff:ff:ff link-netns 7882c0b4-8fe6-424d-a389-6c6b414781be
    inet6 fe80::e042:7cff:fef5:c303/64 scope link 
       valid_lft forever preferred_lft forever
12: 63f7288d0520b48@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default 
    link/ether 46:a4:5d:e1:9c:b4 brd ff:ff:ff:ff:ff:ff link-netns b0bc99a1-85bc-4693-aa7b-dbce22398bbd
    inet6 fe80::44a4:5dff:fee1:9cb4/64 scope link 
       valid_lft forever preferred_lft forever
13: bf18853dd405873@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default 
    link/ether 22:41:c2:50:e5:55 brd ff:ff:ff:ff:ff:ff link-netns 01b93d68-835b-47ac-abff-92a49dfb37c7
    inet6 fe80::2041:c2ff:fe50:e555/64 scope link 
       valid_lft forever preferred_lft forever
14: 1d3c86493e5fee2@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue master ovs-system state UP group default 
    link/ether 0a:a0:fe:14:5f:a6 brd ff:ff:ff:ff:ff:ff link-netns 92f30d8f-5eb0-4109-b8ab-b103a4e486da
    inet6 fe80::8a0:feff:fe14:5fa6/64 scope link 
       valid_lft forever preferred_lft forever
15: 7d171005a47f71c@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue master ovs-system state UP group default 
    link/ether ba:25:cd:fa:bf:91 brd ff:ff:ff:ff:ff:ff link-netns eef44ca0-3271-4fc0-abc3-9272183c17ff
    inet6 fe80::b825:cdff:fefa:bf91/64 scope link 
       valid_lft forever preferred_lft forever
18: 37a5126db0faa46@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue master ovs-system state UP group default 
    link/ether e2:84:04:73:d3:80 brd ff:ff:ff:ff:ff:ff link-netns 61a93549-ecdb-4563-aa13-8aa698e8ac68
    inet6 fe80::e084:4ff:fe73:d380/64 scope link 
       valid_lft forever preferred_lft forever
19: 0e6142a3a29c254@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default 
    link/ether 8e:ac:4e:2e:9f:45 brd ff:ff:ff:ff:ff:ff link-netns a8a0344b-9c24-423d-b978-33181026f3a4
    inet6 fe80::8cac:4eff:fe2e:9f45/64 scope link 
       valid_lft forever preferred_lft forever
20: 41d8c6069df6349@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default 
    link/ether 92:25:82:61:bf:a3 brd ff:ff:ff:ff:ff:ff link-netns a7e0c29e-f070-4cc3-8ef4-df10cd2dc6e5
    inet6 fe80::9025:82ff:fe61:bfa3/64 scope link 
       valid_lft forever preferred_lft forever
21: 0e4123a766304ed@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue master ovs-system state UP group default 
    link/ether 92:d3:ea:e2:bd:91 brd ff:ff:ff:ff:ff:ff link-netns 020c5ca1-bf76-4621-8c04-5891c90da934
    inet6 fe80::90d3:eaff:fee2:bd91/64 scope link 
       valid_lft forever preferred_lft forever
23: a83f31cad9ae6dd@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default 
    link/ether 96:39:e5:c7:d6:b0 brd ff:ff:ff:ff:ff:ff link-netns 161678be-eb56-4eca-bdc8-c43ca613f374
    inet6 fe80::9439:e5ff:fec7:d6b0/64 scope link 
       valid_lft forever preferred_lft forever
24: 9ee9f67f7cf2757@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default 
    link/ether b2:14:d8:2c:7e:52 brd ff:ff:ff:ff:ff:ff link-netns c10bc040-f132-4e5f-9d97-be2103d3df7f
    inet6 fe80::b014:d8ff:fe2c:7e52/64 scope link 
       valid_lft forever preferred_lft forever
25: 0f62f9cbd31e960@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default 
    link/ether 1e:6c:9e:84:8b:5d brd ff:ff:ff:ff:ff:ff link-netns b2ce44e4-4163-4a20-bebd-7dc06b9912f2
    inet6 fe80::1c6c:9eff:fe84:8b5d/64 scope link 
       valid_lft forever preferred_lft forever
26: 015cebb1a403fa3@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default 
    link/ether 9a:05:25:15:e9:d1 brd ff:ff:ff:ff:ff:ff link-netns bdc732af-9e7b-45f9-8cc0-c0ccf3ffeafe
    inet6 fe80::9805:25ff:fe15:e9d1/64 scope link 
       valid_lft forever preferred_lft forever
27: 22f334de371aed1@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default 
    link/ether ce:94:fd:b8:1c:d6 brd ff:ff:ff:ff:ff:ff link-netns 33d5ef09-48fa-4a83-9bab-ad9953d22158
    inet6 fe80::cc94:fdff:feb8:1cd6/64 scope link 
       valid_lft forever preferred_lft forever
28: e9413bc244e1556@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default 
    link/ether d2:8c:03:27:a5:40 brd ff:ff:ff:ff:ff:ff link-netns 476e3344-5518-43a2-9726-c77cfcfa4eb0
    inet6 fe80::d08c:3ff:fe27:a540/64 scope link 
       valid_lft forever preferred_lft forever
29: 8422fa45c36f100@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default 
    link/ether b6:17:fe:41:e3:2b brd ff:ff:ff:ff:ff:ff link-netns ccff154f-dc83-4755-ad0f-cf5753fe6003
    inet6 fe80::b417:feff:fe41:e32b/64 scope link 
       valid_lft forever preferred_lft forever
31: 205e044f528618c@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default 
    link/ether 6a:c9:f4:43:37:92 brd ff:ff:ff:ff:ff:ff link-netns 2646d6ec-53bc-4bca-8323-af6e4830bbd3
    inet6 fe80::68c9:f4ff:fe43:3792/64 scope link 
       valid_lft forever preferred_lft forever
32: 058200f6b3e9148@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default 
    link/ether 16:53:2c:14:44:47 brd ff:ff:ff:ff:ff:ff link-netns 2e241e4c-6cc9-4521-a61a-42d7ded2c913
    inet6 fe80::1453:2cff:fe14:4447/64 scope link 
       valid_lft forever preferred_lft forever
33: 72771067f9a37b6@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default 
    link/ether 6a:cf:b5:56:d4:c1 brd ff:ff:ff:ff:ff:ff link-netns abe52d31-9ace-4e01-8b3c-78f1c0c41d13
    inet6 fe80::68cf:b5ff:fe56:d4c1/64 scope link 
       valid_lft forever preferred_lft forever
34: 36c4cb144b75242@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default 
    link/ether 7e:6c:a3:75:69:aa brd ff:ff:ff:ff:ff:ff link-netns b18ab3a9-d8c7-49c1-8b09-48f636d7311b
    inet6 fe80::7c6c:a3ff:fe75:69aa/64 scope link 
       valid_lft forever preferred_lft forever
35: 41431a5252039db@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default 
    link/ether 62:1d:90:c2:48:8a brd ff:ff:ff:ff:ff:ff link-netns ad763f4c-eb4a-416f-a133-093e72845487
    inet6 fe80::601d:90ff:fec2:488a/64 scope link 
       valid_lft forever preferred_lft forever
36: 8fd13f94ad1ef9f@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default 
    link/ether ca:ca:1d:3f:aa:30 brd ff:ff:ff:ff:ff:ff link-netns 95a14784-ee11-4811-9a48-97412f7128b3
    inet6 fe80::c8ca:1dff:fe3f:aa30/64 scope link 
       valid_lft forever preferred_lft forever
37: 14b857b33772c1b@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default 
    link/ether c6:20:78:c5:cf:cd brd ff:ff:ff:ff:ff:ff link-netns db313d9d-c355-456a-bf97-3e83acd050be
    inet6 fe80::c420:78ff:fec5:cfcd/64 scope link 
       valid_lft forever preferred_lft forever
38: d1ec3a5a195ecb8@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default 
    link/ether 02:ba:fc:bd:c6:b1 brd ff:ff:ff:ff:ff:ff link-netns 77660e42-a329-432a-9d3a-ae36613e0520
    inet6 fe80::ba:fcff:febd:c6b1/64 scope link 
       valid_lft forever preferred_lft forever
39: a18fe937796041c@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default 
    link/ether ce:b7:f6:84:fd:31 brd ff:ff:ff:ff:ff:ff link-netns c831a2b5-960f-41b4-8468-a100541acac6
    inet6 fe80::ccb7:f6ff:fe84:fd31/64 scope link 
       valid_lft forever preferred_lft forever
40: 5e1b0cea932fa69@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default 
    link/ether ce:ea:b6:76:a2:7e brd ff:ff:ff:ff:ff:ff link-netns 9d47de9d-19af-4101-add3-e04a8c5e02cd
    inet6 fe80::ccea:b6ff:fe76:a27e/64 scope link 
       valid_lft forever preferred_lft forever
41: 0f296db845ea8c5@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default 
    link/ether c2:c0:70:92:bc:1f brd ff:ff:ff:ff:ff:ff link-netns 5d85d9c3-a377-4155-bd26-bed4218ecc13
    inet6 fe80::c0c0:70ff:fe92:bc1f/64 scope link 
       valid_lft forever preferred_lft forever
42: 953675b7279be2f@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default 
    link/ether 5e:be:6c:b3:7b:75 brd ff:ff:ff:ff:ff:ff link-netns 88c8b6c7-defd-46f7-82ac-f510cdd70f0d
    inet6 fe80::5cbe:6cff:feb3:7b75/64 scope link 
       valid_lft forever preferred_lft forever
44: 19b79a7e91b6de2@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default 
    link/ether da:73:a3:29:ae:c8 brd ff:ff:ff:ff:ff:ff link-netns 2de2b2ba-c668-45bc-be4f-d7420a9fa6b9
    inet6 fe80::d873:a3ff:fe29:aec8/64 scope link 
       valid_lft forever preferred_lft forever
45: 4da6426e5eedf1b@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default 
    link/ether 4e:a9:13:b6:d6:63 brd ff:ff:ff:ff:ff:ff link-netns 0ae564f8-e469-4e45-89ab-6d87f8d147fb
    inet6 fe80::4ca9:13ff:feb6:d663/64 scope link 
       valid_lft forever preferred_lft forever
46: dfd2d683cc92d0b@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default 
    link/ether 16:29:7a:e6:df:90 brd ff:ff:ff:ff:ff:ff link-netns a54fa8d2-10d2-447b-bc94-1af3b3fa6d26
    inet6 fe80::1429:7aff:fee6:df90/64 scope link 
       valid_lft forever preferred_lft forever
48: 6aed78d641d5905@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue master ovs-system state UP group default 
    link/ether e2:f8:01:11:9a:79 brd ff:ff:ff:ff:ff:ff link-netns edcff7c9-30ca-4986-aa23-3bc9e19a8103
    inet6 fe80::e0f8:1ff:fe11:9a79/64 scope link 
       valid_lft forever preferred_lft forever
49: 224ea656cd262f7@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default 
    link/ether ae:7b:da:8a:d1:db brd ff:ff:ff:ff:ff:ff link-netns 5677eaa7-d63b-47cd-b86f-a6296744ec17
    inet6 fe80::ac7b:daff:fe8a:d1db/64 scope link 
       valid_lft forever preferred_lft forever
50: 5e99f93555d1b13@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue master ovs-system state UP group default 
    link/ether 16:cb:f9:0e:63:cb brd ff:ff:ff:ff:ff:ff link-netns d39618f3-58e9-44d5-93b0-7bc6e28960d9
    inet6 fe80::14cb:f9ff:fe0e:63cb/64 scope link 
       valid_lft forever preferred_lft forever
51: 23498e7001cf4dc@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default 
    link/ether fa:51:07:5f:65:36 brd ff:ff:ff:ff:ff:ff link-netns d65f444c-6a47-429d-a8d4-179f2277739a
    inet6 fe80::f851:7ff:fe5f:6536/64 scope link 
       valid_lft forever preferred_lft forever
52: 96e8708e6b8f20c@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue master ovs-system state UP group default 
    link/ether de:05:89:32:75:7a brd ff:ff:ff:ff:ff:ff link-netns 9d9934d4-0678-4760-8ee7-ab7f721b9f07
    inet6 fe80::dc05:89ff:fe32:757a/64 scope link 
       valid_lft forever preferred_lft forever
53: 4c61df4fd37127f@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default 
    link/ether 16:4c:e4:35:f8:5c brd ff:ff:ff:ff:ff:ff link-netns e06943b7-13de-4545-9fbe-8ec40288136b
    inet6 fe80::144c:e4ff:fe35:f85c/64 scope link 
       valid_lft forever preferred_lft forever
54: b240f08c10ea1f9@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default 
    link/ether 4e:30:22:3b:ae:d8 brd ff:ff:ff:ff:ff:ff link-netns 118a2574-c10a-4c9b-ae3b-9028aa7501cb
    inet6 fe80::4c30:22ff:fe3b:aed8/64 scope link 
       valid_lft forever preferred_lft forever
55: 5ca9a351909ffad@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default 
    link/ether d6:ae:c9:f0:f5:0b brd ff:ff:ff:ff:ff:ff link-netns 2692730c-7736-4216-92ce-4041d680de1d
    inet6 fe80::d4ae:c9ff:fef0:f50b/64 scope link 
       valid_lft forever preferred_lft forever
56: a5a7e146bc6a43a@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default 
    link/ether ce:30:25:e6:dc:d8 brd ff:ff:ff:ff:ff:ff link-netns c78133be-ff8f-4f0c-a1a2-af579be43594
    inet6 fe80::cc30:25ff:fee6:dcd8/64 scope link 
       valid_lft forever preferred_lft forever
sh-4.4# toolbox
Error: error creating container storage: the container name "support-tools" is already in use by "259e9d770a9cdf9d1e47fc853c144ff2489e81ae64436abc97131759009be537". You have to remove that container to be able to reuse that name.: that name is already in use
Error: `/proc/self/exe run -it --name support-tools --privileged --ipc=host --net=host --pid=host -e HOST=/host -e NAME=support-tools -e IMAGE=registry.redhat.io/rhel8/support-tools:latest -v /run:/run -v /var/log:/var/log -v /etc/machine-id:/etc/machine-id -v /etc/localtime:/etc/localtime -v /:/host registry.redhat.io/rhel8/support-tools:latest` failed: exit status 125
Container 'toolbox-' already exists. Trying to start...
(To remove the container and start with a fresh toolbox, run: sudo podman rm 'toolbox-')
toolbox-
Container started successfully. To exit, type 'exit'.
[root@toolbox /]# tcpdump -nn -s 0 -i enp4s0 -w /host/var/tmp/master-0-0_$(date +%d_%m_%Y-%H_%M_%S-%Z).pcap
dropped privs to tcpdump
tcpdump: listening on enp4s0, link-type EN10MB (Ethernet), capture size 262144 bytes
^C169 packets captured
171 packets received by filter
0 packets dropped by kernel
[root@toolbox /]# redhat-support-tool addattachment -c 01234567 /host/var/tmp/master-0-0_12_01_2021-22_13_47-UTC.pcap
bash: redhat-support-tool: command not found
[root@toolbox /]# exit
exit
Error: exec session exited with non-zero exit code 127: OCI runtime error
sh-4.4# sudo podman rm 'toolbox-'
07d94efa2de96ce7370069dc6222b6eb73a58db30d44c520eb56cdf275f77e3c
sh-4.4# toolbox
Error: error creating container storage: the container name "support-tools" is already in use by "259e9d770a9cdf9d1e47fc853c144ff2489e81ae64436abc97131759009be537". You have to remove that container to be able to reuse that name.: that name is already in use
Error: `/proc/self/exe run -it --name support-tools --privileged --ipc=host --net=host --pid=host -e HOST=/host -e NAME=support-tools -e IMAGE=registry.redhat.io/rhel8/support-tools:latest -v /run:/run -v /var/log:/var/log -v /etc/machine-id:/etc/machine-id -v /etc/localtime:/etc/localtime -v /:/host registry.redhat.io/rhel8/support-tools:latest` failed: exit status 125
Spawning a container 'toolbox-' with image 'registry.redhat.io/rhel8/support-tools'
[root@toolbox /]# tcpdump -nn -s 0 -i enp4s0 -w /host/var/tmp/master-0-0_$(date +%d_%m_%Y-%H_%M_%S-%Z).pcap
dropped privs to tcpdump
tcpdump: listening on enp4s0, link-type EN10MB (Ethernet), capture size 262144 bytes
^C8 packets captured
8 packets received by filter
0 packets dropped by kernel
[root@toolbox /]# ls /host/var/tmp/
master-0-0_12_01_2021-23_08_52-UTC.pcap
[root@toolbox /]# redhat-support-tool addattachment -c 01234567 /host/var/tmp/master-0-0_12_01_2021-23_08_52-UTC.pcap
bash: redhat-support-tool: command not found
[root@toolbox /]# exit
exit
Error: exec session exited with non-zero exit code 127: OCI runtime error
sh-4.4#

Comment 1 Timothée Ravier 2021-01-15 13:08:04 UTC
I cannot reproduce this one so far with a toolbox that includes the fix from https://bugzilla.redhat.com/show_bug.cgi?id=1915318, so the same "fix" might help here.

Comment 2 Timothée Ravier 2021-01-15 15:31:47 UTC
Investigation will continue next sprint.

Comment 3 Micah Abbott 2021-01-18 19:46:27 UTC
Targeting this for 4.7 in the hope of fixing multiple problems with one PR - https://github.com/coreos/toolbox/pull/67

Comment 5 Michael Nguyen 2021-01-25 22:21:17 UTC
I can't seem to reproduce this to verify.  Everything seems to be working.  

From the BZ summary it looks like there is another container named "support-tools".  The toolbox runs a container named "toolbox-$USER".

@elgerman can you provide the output of `podman ps -a` after you chroot into the host?
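
For example, from the chrooted host shell (a minimal check; the container names below are the ones reported in the error output in the description):

    podman ps -a                                     # list all containers, including exited ones
    podman ps -a --filter name=support-tools         # the name the toolbox wrapper tries to create
    podman ps -a --filter name=toolbox-              # the 'toolbox-$USER' name seen in the logs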

$ oc debug node/ip-10-0-154-51.us-west-2.compute.internal 
Starting pod/ip-10-0-154-51us-west-2computeinternal-debug ...
To use host binaries, run `chroot /host`
If you don't see a command prompt, try pressing enter.
sh-4.2# chroot /host
sh-4.4# toolbox
Spawning a container 'toolbox-root' with image 'registry.redhat.io/rhel8/support-tools'
Detected RUN label in the container image. Using that as the default...
[root@ip-10-0-154-51 /]# which redhat-support-tools
/usr/bin/which: no redhat-support-tools in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin)
[root@ip-10-0-154-51 /]# which redhat-support-tool 
/usr/bin/redhat-support-tool
[root@ip-10-0-154-51 /]# rpm -ql $(which redhat-support-tool)
package /usr/bin/redhat-support-tool is not installed
[root@ip-10-0-154-51 /]# rpm -qf $(which redhat-support-tool)
redhat-support-tool-0.11.2-2.el8.noarch
[root@ip-10-0-154-51 /]# exit
exit
sh-4.4# toolbox
Container 'toolbox-root' already exists. Trying to start...
(To remove the container and start with a fresh toolbox, run: sudo podman rm 'toolbox-root')
toolbox-root
Container started successfully. To exit, type 'exit'.
bash-4.2# redhat-support-tool 
Welcome to the Red Hat Support Tool.
Command (? for help): q
bash-4.2# exit
exit
bash-4.2# exit          
exit
sh-4.4# rpm -q toolbox
toolbox-0.0.8-2.rhaos4.7.el8.noarch
sh-4.4# exit
exit
sh-4.2# exit  
exit

Removing debug pod ...
$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.7.0-0.nightly-2021-01-25-160335   True        False         51m     Cluster version is 4.7.0-0.nightly-2021-01-25-160335

Comment 7 Elena German 2021-02-03 12:31:10 UTC
[kni@provisionhost-0-0 ~]$ oc get nodes
NAME         STATUS   ROLES    AGE   VERSION
master-0-0   Ready    master   33h   v1.20.0+3b90e69
master-0-1   Ready    master   33h   v1.20.0+3b90e69
master-0-2   Ready    master   33h   v1.20.0+3b90e69
worker-0-0   Ready    worker   32h   v1.20.0+3b90e69
worker-0-1   Ready    worker   32h   v1.20.0+3b90e69
[kni@provisionhost-0-0 ~]$ oc debug node/worker-0-0
Starting pod/worker-0-0-debug ...
To use host binaries, run `chroot /host`
Pod IP: 192.168.123.133
If you don't see a command prompt, try pressing enter.
sh-4.4# chroot /host
sh-4.4# podman ps -a
CONTAINER ID  IMAGE   COMMAND  CREATED  STATUS  PORTS   NAMES
sh-4.4# sudo podman ps -a
CONTAINER ID  IMAGE   COMMAND  CREATED  STATUS  PORTS   NAMES
sh-4.4#

Comment 8 Michael Nguyen 2021-02-03 23:34:04 UTC
@elgerman Apologies for not being more descriptive.  Can you run through the reproduction steps, then run `podman ps -a`?

1. Choose a node from the list:
    oc get nodes
2. Open a debug session to the node:
    oc debug node/master-0-0
3. chroot /host
4. Choose an interface from the list:
    ip ad
5. Run the toolbox container:
    toolbox
6. Run tcpdump on the chosen interface:
    tcpdump -nn -s 0 -i enp4s0 -w /host/var/tmp/master-0-0_$(date +%d_%m_%Y-%H_%M_%S-%Z).pcap
7. Stop it
8. From within the toolbox container, try to run redhat-support-tool to attach the file directly to an existing Red Hat Support case:
    redhat-support-tool addattachment -c 01234567 /host/var/tmp/master-0-0_12_01_2021-23_08_52-UTC.pcap

Then exit out of the toolbox container and run `podman ps -a`.
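
If a stale container turns out to be the blocker, removing it should allow a fresh toolbox start, for example (a sketch only; the names are taken from the error output earlier in this report):

    sudo podman rm support-tools     # conflicting container from the "name is already in use" error
    sudo podman rm 'toolbox-'        # empty-$USER container name seen in the same logs
    toolbox                          # re-run to spawn a fresh support-tools container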

Comment 9 Elena German 2021-02-09 14:31:43 UTC
Good news: it seems this was fixed by https://github.com/coreos/toolbox/pull/67; at least on 4.7.0-0.nightly-2021-02-08-052658 I was not able to reproduce it.

Comment 12 errata-xmlrpc 2021-02-24 15:52:28 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5633

