Bug 839373 - Deltacloud/RHEVM: Multiple jobs can refer to the same running instance
Summary: Deltacloud/RHEVM: Multiple jobs can refer to the same running instance
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise MRG
Classification: Red Hat
Component: condor-deltacloud-gahp
Version: 2.2
Hardware: x86_64
OS: Linux
Priority: low
Severity: low
Target Milestone: ---
Assignee: grid-maint-list
QA Contact: MRG Quality Engineering
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2012-07-11 18:36 UTC by Luigi Toscano
Modified: 2016-05-26 20:14 UTC
CC List: 3 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-05-26 20:14:13 UTC
Target Upstream Version:
Embargoed:




Links:
Red Hat Bugzilla 803895 (last updated 2021-01-20 06:05:38 UTC)

Internal Links: 803895

Description Luigi Toscano 2012-07-11 18:36:57 UTC
Description of problem:
Create a deltacloud job (tested with RHEV-M), submit it.
Wait for it to be running.
Submit the same job again.

Result: the status of the new job switches to running almost immediately because it finds the existing running instance, so both jobs refer to the _same_ instance.

Version-Release number of selected component (if applicable):
condor-deltacloud-gahp-7.6.5-0.16
condor-7.6.5-0.16

Expected results:
A subsequent request matching an already-running machine should not be attached to that instance (perhaps the new job should go into the hold state?)

Comment 1 Luigi Toscano 2012-07-12 12:46:23 UTC
Please note that when the instance is terminated by invoking shutdown inside it, the first job terminates; the second job, however, stays alive and restarts the instance.

Example of job:
--------------------------------------------------
universe = grid
grid_resource = deltacloud http://dc.example.com:3002/api 
executable = rhevm_job
deltacloud_username = user@domain
deltacloud_password_file = user_pwd_file

deltacloud_instance_name = instance_name

log = /tmp/job_deltacloud_basic_$(cluster)_$(process).log
notification = NEVER
queue
-------------------------------------------------

Comment 3 Timothy St. Clair 2012-07-17 16:44:22 UTC
Presently we use instance.id, which appears to not be unique per "instance".

///////////////////////////////////////////////////////////
struct deltacloud_instance {
  char *href; /**< The full URL to this instance */
  char *id; /**< The ID of this instance */
  char *name; /**< The name of this instance */
  char *owner_id; /**< The owner ID of this instance */
  char *image_id; /**< The ID of the image this instance was launched from */
  char *image_href; /**< The full URL to the image */
  char *realm_id; /**< The ID of the realm this instance is in */
  char *realm_href; /**< The full URL to the realm this instance is in */
  char *state; /**< The current state of this instance (RUNNING, STOPPED, etc) */
  char *launch_time; /**< The time that this instance was launched */
  struct deltacloud_hardware_profile hwp; /**< The hardware profile this instance was launched with */
  struct deltacloud_action *actions; /**< A list of actions that can be taken on this instance */
  struct deltacloud_address *public_addresses; /**< A list of the public addresses assigned to this instance */
  struct deltacloud_address *private_addresses; /**< A list of the private addresses assigned to this instance */
  struct deltacloud_instance_auth auth; /**< The authentication method used to connect to this instance */

  struct deltacloud_instance *next;
};
///////////////////////////////////////////////////////////

GridJobId = "deltacloud mrgqe6v32 924095d8-98e1-4774-9781-a01a8813b8b8"  
is the same for both?

I need to dig some more, but it's possible we may need to create a hash that includes launch_time and always iterate over ->next.
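
A minimal sketch of that idea (assuming the struct above; find_instance is a hypothetical helper, not the shipped gahp code): walk the whole ->next list and require both id and launch_time to agree before treating an entry as the job's instance.

///////////////////////////////////////////////////////////
#include <stddef.h>
#include <string.h>

/* Hypothetical helper: returns the list entry whose id AND launch_time
 * both match, or NULL if no such entry exists.  "instances" is the head
 * of a deltacloud_instance list as returned by the library. */
static struct deltacloud_instance *
find_instance(struct deltacloud_instance *instances,
              const char *id, const char *launch_time)
{
    struct deltacloud_instance *cur;

    for (cur = instances; cur != NULL; cur = cur->next) {
        if (cur->id == NULL || cur->launch_time == NULL)
            continue;                        /* incomplete record, skip it */
        if (strcmp(cur->id, id) == 0 &&
            strcmp(cur->launch_time, launch_time) == 0)
            return cur;                      /* id and launch_time both agree */
    }
    return NULL;
}
///////////////////////////////////////////////////////////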

Comment 4 Luigi Toscano 2012-07-17 16:49:57 UTC
(In reply to comment #3)

> GridJobId = "deltacloud mrgqe6v32 924095d8-98e1-4774-9781-a01a8813b8b8"  
> is the same for both?

Yes, it is.

Comment 5 Anne-Louise Tangring 2016-05-26 20:14:13 UTC
MRG-Grid is in maintenance and only customer escalations will be considered. This issue can be reopened if a customer escalation associated with it occurs.

