Bug 1759603 - Stuck on waiting for domain to get an ip address
Summary: Stuck on waiting for domain to get an ip address
Keywords:
Status: CLOSED DUPLICATE of bug 1572916
Alias: None
Product: Fedora
Classification: Fedora
Component: kernel
Version: 29
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Kernel Maintainer List
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-10-08 16:11 UTC by Mattia Verga
Modified: 2019-10-09 17:01 UTC (History)
CC List: 23 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-10-09 17:01:40 UTC
Type: Bug
Embargoed:


Attachments (Terms of Use)
Guest machine error messages (5.62 KB, image/png), 2019-10-08 16:11 UTC, Mattia Verga
error running timeout 60 vagrant up --debug (23.87 KB, text/plain), 2019-10-09 16:29 UTC, Mattia Verga


Links
System: Red Hat Bugzilla
ID: 1572916
Private: 0
Priority: unspecified
Status: CLOSED
Summary: kernel after 4.17.0-0.rc2.git0.1.fc29 waits for random entropy on boot
Last Updated: 2021-02-22 00:41:40 UTC

Description Mattia Verga 2019-10-08 16:11:11 UTC
Created attachment 1623516 [details]
Guest machine error messages

On a fresh, fully updated F31 machine, when I try to start ('vagrant up') a vagrant box from the fedora-bodhi Vagrantfile, the process gets stuck forever on 'bodhi: Waiting for domain to get an IP address...'.

BUT, if I open the virtual machine in Virtual Machine Manager and move my mouse over the window, the guest machine shows the errors in the attached screenshot and the 'vagrant up' process moves on.

Comment 1 Pavel Valena 2019-10-09 12:34:17 UTC
Hello, thanks for opening the Bug!

> On a fresh, fully updated F31 machine, when I try to start ('vagrant up') a vagrant box from the fedora-bodhi Vagrantfile, the process gets stuck forever on 'bodhi: Waiting for domain to get an IP address...'.
>
> BUT, if I open the virtual machine in Virtual Machine Manager and move my mouse over the window, the guest machine shows the errors in the attached screenshot and the 'vagrant up' process moves on.

So it's probably stuck on 'ssh is not available'. Could you log into the VM and run `sudo systemctl status sshd`?
The '7 urandom warnings' error looks to me like there may not be enough randomness (entropy) available to start services that require it, which is unrelated to Vagrant. Moving the mouse => generating entropy => the stuck service getting unblocked would be consistent with that.
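
If you want to confirm the entropy theory directly, something like the following could be run inside the guest while it appears stuck (a rough sketch only, not anything shipped by Bodhi or Vagrant):

```
#!/usr/bin/env ruby
# Illustrative check: report how much entropy the guest kernel has collected
# and what state sshd is in while the boot appears stuck.

entropy = File.read('/proc/sys/kernel/random/entropy_avail').to_i
puts "entropy_avail: #{entropy}"   # a value near 0 suggests the CRNG is starved

# 'activating' here would match a service blocked waiting for randomness at boot.
state = `systemctl show -p ActiveState --value sshd`.strip
puts "sshd ActiveState: #{state}"
```

If entropy_avail stays near zero and sshd sits in 'activating' until you move the mouse or start typing, that would line up with the urandom warnings in your screenshot.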

Possibly related to https://bugzilla.redhat.com/show_bug.cgi?id=1572916.

In case sshd is not the cause:
  Are you using VirtualBox or Libvirt?
  Do other VMs boot as usual?
  Could you try other boxes / Vagrantfiles?
  Could you please attach the result of `timeout 60 vagrant up --debug`?
  What Vagrantfile are you using, specifically?

Comment 2 Mattia Verga 2019-10-09 16:28:09 UTC
Hi,
I'm using the Vagrantfile provided in https://raw.githubusercontent.com/fedora-infra/bodhi/develop/Vagrantfile under Libvirt.

I think you're right about sshd: even if I don't move the pointer over the VM window, the machine gets unstuck as soon as I start typing the username to log in. So I cannot see the sshd status while the machine is stuck.

I can't run `timeout 60 vagrant up --debug` because I got the attached error.

I have no other Vagrantfiles, only one other VM, which runs Fedora Rawhide, and it boots without problems.
But I've tried changing the base image in the Bodhi Vagrantfile from F29 to F30, and the F30 base image starts without manual intervention (after a couple of minutes).
So maybe the kernel in the F29 Cloud Base Image is affected by the kernel bug you pointed out.
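
For reference, the change amounts to a one-line box swap, roughly like this (a sketch only; the exact box names and other settings in the Bodhi Vagrantfile may differ):

```
# Rough sketch of the change: point the box at an F30 cloud base image instead
# of F29 (box names and memory value here are illustrative, not copied from the
# Bodhi Vagrantfile).
Vagrant.configure("2") do |config|
  config.vm.box = "fedora/30-cloud-base"   # previously "fedora/29-cloud-base"
  config.vm.provider :libvirt do |libvirt|
    libvirt.memory = 2048
  end
end
```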

Comment 3 Mattia Verga 2019-10-09 16:29:10 UTC
Created attachment 1623860 [details]
error running timeout 60 vagrant up --debug

Comment 4 Pavel Valena 2019-10-09 17:01:40 UTC
Well, I don't think this should be used in a Vagrantfile:

```
opts = GetoptLong.new(

```
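
As far as I understand, GetoptLong parses the whole ARGV, so an extra flag like --debug, which vagrant itself accepts, presumably makes it bail out with an unrecognized-option error. A common workaround is to pass custom knobs through environment variables and leave ARGV alone; a sketch (with made-up variable names) could look like:

```
# Sketch of an ENV-based alternative (variable names are made up): it never
# touches ARGV, so vagrant's own flags such as --debug can't break the parse.
use_nfs   = ENV.fetch('BODHI_USE_NFS', 'false') == 'true'
guest_mem = Integer(ENV.fetch('BODHI_GUEST_MEM', '2048'))

Vagrant.configure("2") do |config|
  config.vm.synced_folder ".", "/vagrant", type: "nfs" if use_nfs
  config.vm.provider :libvirt do |libvirt|
    libvirt.memory = guest_mem
  end
end
```

The invocation then becomes e.g. `BODHI_GUEST_MEM=4096 vagrant up --debug`, with no custom flags for GetoptLong to choke on.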

Anyway, closing, as this is not a Vagrant-related issue and it was probably solved in the linked ticket.

*** This bug has been marked as a duplicate of bug 1572916 ***

