Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 919799

Summary: Failed to launch 60 Instances on two compute nodes
Product: Red Hat OpenStack
Reporter: Ofer Blaut <oblaut>
Component: openstack-nova
Assignee: RHOS Maint <rhos-maint>
Status: CLOSED NOTABUG
QA Contact: Ofer Blaut <oblaut>
Severity: urgent
Docs Contact:
Priority: unspecified
Version: 2.0 (Folsom)
CC: ndipanov
Target Milestone: ---
Target Release: 2.1
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-03-19 16:59:07 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments:
Description Flags
Nova log files - starting - 2013-03-10 09:14:30 none

Description Ofer Blaut 2013-03-10 07:36:26 UTC
Created attachment 707750 [details]
Nova log files  - starting - 2013-03-10 09:14:30

Description of problem:
Failed to launch 60 instances on two compute nodes: only 26 of them moved to ACTIVE; the rest are in ERROR state.

Log files are attached.

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. update default quota

nova quota-update --instances 100 49202b0e09a4409c97475392341b57db
nova quota-update --cores 120 49202b0e09a4409c97475392341b57db
nova quota-show --tenant 49202b0e09a4409c97475392341b57db

2. Launch 20 + 40 VMs
nova boot VM-FED17 --image 3a8b059f-6c4d-482f-883c-84b80bc7b86b --flavor 1 --num-instances 20
nova boot VM-FED17 --image 3a8b059f-6c4d-482f-883c-84b80bc7b86b --flavor 1 --num-instances 40

3. Check status of VMs

nova list | grep ACTIVE |  wc -l
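To tally ACTIVE against ERROR in one pass, a small grep-based sketch like the following can be used. The sample `nova list` output below is illustrative only (it is not taken from this environment):

```shell
# Count ACTIVE vs ERROR instances from `nova list`-style output.
# In a real run, replace the here-doc sample with: sample=$(nova list)
sample=$(cat <<'EOF'
| 1 | VM-FED17 | ACTIVE |
| 2 | VM-FED17 | ERROR  |
| 3 | VM-FED17 | ACTIVE |
EOF
)
# grep -c counts matching lines, one per instance row.
active=$(printf '%s\n' "$sample" | grep -c ACTIVE)
error=$(printf '%s\n' "$sample" | grep -c ERROR)
echo "ACTIVE=$active ERROR=$error"
```

With the sample rows above this prints `ACTIVE=2 ERROR=1`; against the live cloud it gives a quick pass/fail summary of the 60-instance launch.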



Actual results:


Expected results:


Additional info:

Comment 2 Nikola Dipanov 2013-03-18 17:24:03 UTC
I am a bit confused by the log files attached here, as they show some connectivity issues.

I think this could be caused by a lack of resources (in that case we should see the scheduler hinting at that).

Alternatively, this could be caused by the recent nproc issue we had, bug #917534.

Could you please retry this with snap5 and let us know if it still happens.

Comment 3 Ofer Blaut 2013-03-19 13:08:24 UTC
I have retested on snap5.

Manually creating VMs works:
nova boot VM-FED17 --image 311a8c1c-e2e8-4c98-a02f-18b322f09b82 --flavor 1 --num-instances 2
works, but
nova boot VM-FED17 --image 311a8c1c-e2e8-4c98-a02f-18b322f09b82 --flavor 1 --num-instances 20
ends up in ERROR state.

I got the following error:
{u'message': u'NoValidHost', u'code': 500, u'created': u'2013-03-19T12:35:12Z'}

I have updated the quotas for cores/instances/quantum ports.

Comment 5 Nikola Dipanov 2013-03-19 16:59:07 UTC
NoValidHost is actually the intended behaviour here (unless you have enough RAM for 20 instances). Nova will allow 50% over-subscription on RAM (I am assuming here that RAM is the problem).

I suggest you take a look at the ram_allocation_ratio config option, as it is used to determine how much RAM over-subscription is allowed (similar options exist for CPUs and disk as well, and they are all used by the scheduler when choosing hosts).
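As a sketch, these ratios can be raised in nova.conf; the values below are examples for illustration, not recommendations for this setup:

```ini
[DEFAULT]
# Scheduler over-subscription ratios (example values; tune for your hardware).
# Effective schedulable RAM = physical RAM * ram_allocation_ratio,
# e.g. 8192 MB * 1.5 = 12288 MB as seen by the scheduler.
ram_allocation_ratio=1.5
cpu_allocation_ratio=16.0
```

After changing these, the scheduler services need a restart for the new ratios to take effect.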

You need to set these appropriately for your setup if you want to run these kinds of tests (or, of course, provide enough RAM/disk for 60 instances to run).

I will close this as not-a-bug.