Bug 1588042 - vm.hardware.nics[0].lan nil for RHV VMs
Summary: vm.hardware.nics[0].lan nil for RHV VMs
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat CloudForms Management Engine
Classification: Red Hat
Component: Providers
Version: 5.8.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: GA
Target Release: 5.9.3
Assignee: Alona Kaplan
QA Contact: Ilanit Stein
URL:
Whiteboard:
Depends On: 1572917
Blocks:
 
Reported: 2018-06-06 14:05 UTC by Satoe Imaishi
Modified: 2022-07-09 09:54 UTC
CC List: 13 users

Fixed In Version: 5.9.3.2
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1572917
Environment:
Last Closed: 2018-07-12 13:15:53 UTC
Category: ---
Cloudforms Team: RHEVM
Target Upstream Version:
Embargoed:




Links
Github: ManageIQ/manageiq-providers-ovirt pull 238 (last updated 2018-06-06 14:05:43 UTC)
Github: ManageIQ/manageiq-providers-ovirt pull 260 (last updated 2018-06-07 07:36:23 UTC)
Red Hat Product Errata: RHSA-2018:2184 (last updated 2018-07-12 13:16:43 UTC)

Comment 2 CFME Bot 2018-06-06 14:09:17 UTC
New commit detected on ManageIQ/manageiq-providers-ovirt/gaprindashvili:

https://github.com/ManageIQ/manageiq-providers-ovirt/commit/4bb0c1992649bb377d1d402f05d537fb0d804f80
commit 4bb0c1992649bb377d1d402f05d537fb0d804f80
Author:     Boris Od <boris.od>
AuthorDate: Tue May 29 08:35:58 2018 -0400
Commit:     Boris Od <boris.od>
CommitDate: Tue May 29 08:35:58 2018 -0400

    Merge pull request #238 from AlonaKaplan/lans

    vm.hardware.nics[i].lan should return the network attached to the vm
    (cherry picked from commit fc84f4bc8d5df5d5a11fdee34f6beb9d183dde50)

    Fixes https://bugzilla.redhat.com/show_bug.cgi?id=1588042

 app/models/manageiq/providers/redhat/infra_manager/inventory/strategies/v4.rb | 5 +
 app/models/manageiq/providers/redhat/infra_manager/refresh/parse/parser.rb | 8 +-
 app/models/manageiq/providers/redhat/infra_manager/refresh/parse/strategies/api4.rb | 4 +-
 app/models/manageiq/providers/redhat/infra_manager/refresh/parse/strategies/host_inventory.rb | 8 +-
 app/models/manageiq/providers/redhat/infra_manager/refresh/parse/strategies/vm_inventory.rb | 11 +-
 app/models/manageiq/providers/redhat/inventory/collector.rb | 6 +
 app/models/manageiq/providers/redhat/inventory/collector/infra_manager.rb | 4 +
 app/models/manageiq/providers/redhat/inventory/parser/infra_manager.rb | 17 +-
 spec/models/manageiq/providers/redhat/infra_manager/refresh/refresher_4_async_graph_spec.rb | 4 +-
 spec/models/manageiq/providers/redhat/infra_manager/refresh/refresher_4_async_spec.rb | 4 +-
 spec/models/manageiq/providers/redhat/infra_manager/refresh/refresher_4_custom_attributes_spec.rb | 3 +
 spec/models/manageiq/providers/redhat/infra_manager/refresh/refresher_graph_target_vm_spec.rb | 6 +
 spec/models/manageiq/providers/redhat/infra_manager/refresh/refresher_target_vm_4_spec.rb | 1 +
 spec/vcr_cassettes/manageiq/providers/redhat/infra_manager/refresh/ovirt_sdk_publish_vm_to_template.yml | 12 +
 spec/vcr_cassettes/manageiq/providers/redhat/infra_manager/refresh/ovirt_sdk_refresh_graph_target_template.yml | 12 +
 spec/vcr_cassettes/manageiq/providers/redhat/infra_manager/refresh/ovirt_sdk_refresh_graph_target_vm.yml | 12 +
 spec/vcr_cassettes/manageiq/providers/redhat/infra_manager/refresh/ovirt_sdk_refresh_recording.yml | 24 +
 spec/vcr_cassettes/manageiq/providers/redhat/infra_manager/refresh/ovirt_sdk_refresh_recording_custom_attrs.yml | 12 +
 spec/vcr_cassettes/manageiq/providers/redhat/infra_manager/refresh/ovirt_sdk_target_template_disconnect.yml | 12 +
 19 files changed, 142 insertions(+), 23 deletions(-)
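
For reference, a minimal sketch (not taken from the patch itself) of how the repaired association can be consumed from a Rails console once this fix is in place, based on the verification log in comment 12 below; the VM name "1_vm" is just the example used later in this bug:

  # Hedged sketch: assumes an RHV VM named "1_vm" exists in the VMDB.
  vm = Vm.find_by(:name => "1_vm")
  vm.hardware.nics.each_with_index do |nic, i|
    lan = nic.lan                        # was nil for RHV VMs before this fix
    puts "nic #{i}: lan=#{lan ? lan.name : 'nil'}"
  end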

Comment 5 Ilanit Stein 2018-06-07 10:28:56 UTC
In reply to Ian's comment above:

Thanks Ian,

I tried these commands, which failed:
# vmdb
# rails c
irb(main):002:0> vm = $evm(:vm).find_by_name("1_vm")     
SyntaxError: (irb):2: syntax error, unexpected '(', expecting end-of-input
vm = $evm(:vm).find_by_name("1_vm")
          ^

So I used:

#vmdb
#rails c
irb(main):006:0> v = Vm.where(name: "1_vm").last
...
irb(main):007:0> v.hardware.nics[0].lan
=> nil

and I see it returns nil.
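
(For reference: `$evm(:vm)` raises a SyntaxError because `$evm` is a variable, not a method, and it is not defined in a plain Rails console anyway. A minimal sketch of the Automate-service route, the same sequence used in comment 12 below; the admin user and the VM name "1_vm" are just the ones from this environment:

  workspace = MiqAeEngine::MiqAeWorkspaceRuntime.new
  workspace.ae_user = User.where(:userid => 'admin').first
  $evm = MiqAeMethodService::MiqAeService.new(workspace)
  vm = $evm.vmdb(:vm).find_by_name("1_vm")
  vm.hardware.nics[0].lan   # expected to return a Lan, e.g. "ovirtmgmt", rather than nil once fixed
)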

Comment 6 Alona Kaplan 2018-06-07 10:32:46 UTC
Hi Ilanit,

The bug was moved back to ON_DEV yesterday since the fix wasn't backported to gaprindashvili (the backport caused a Travis failure, so it was reverted).


The Travis failure has been fixed, and the fix will be backported in the next build.

Comment 7 CFME Bot 2018-06-12 15:09:22 UTC
New commits detected on ManageIQ/manageiq-providers-ovirt/gaprindashvili:

https://github.com/ManageIQ/manageiq-providers-ovirt/commit/7d88aefaf9fc77142cf18d912ac55da231a7f3e3
commit 7d88aefaf9fc77142cf18d912ac55da231a7f3e3
Author:     Boris Od <boris.od>
AuthorDate: Tue May 29 08:35:58 2018 -0400
Commit:     Boris Od <boris.od>
CommitDate: Tue May 29 08:35:58 2018 -0400

    Merge pull request #238 from AlonaKaplan/lans

    vm.hardware.nics[i].lan should return the network attached to the vm
    (cherry picked from commit fc84f4bc8d5df5d5a11fdee34f6beb9d183dde50)

    Fixes https://bugzilla.redhat.com/show_bug.cgi?id=1588042

 app/models/manageiq/providers/redhat/infra_manager/inventory/strategies/v4.rb | 5 +
 app/models/manageiq/providers/redhat/infra_manager/refresh/parse/parser.rb | 8 +-
 app/models/manageiq/providers/redhat/infra_manager/refresh/parse/strategies/api4.rb | 4 +-
 app/models/manageiq/providers/redhat/infra_manager/refresh/parse/strategies/host_inventory.rb | 8 +-
 app/models/manageiq/providers/redhat/infra_manager/refresh/parse/strategies/vm_inventory.rb | 11 +-
 app/models/manageiq/providers/redhat/inventory/collector.rb | 6 +
 app/models/manageiq/providers/redhat/inventory/collector/infra_manager.rb | 4 +
 app/models/manageiq/providers/redhat/inventory/parser/infra_manager.rb | 17 +-
 spec/models/manageiq/providers/redhat/infra_manager/refresh/refresher_4_async_graph_spec.rb | 4 +-
 spec/models/manageiq/providers/redhat/infra_manager/refresh/refresher_4_async_spec.rb | 4 +-
 spec/models/manageiq/providers/redhat/infra_manager/refresh/refresher_4_custom_attributes_spec.rb | 3 +
 spec/models/manageiq/providers/redhat/infra_manager/refresh/refresher_graph_target_vm_spec.rb | 6 +
 spec/models/manageiq/providers/redhat/infra_manager/refresh/refresher_target_vm_4_spec.rb | 1 +
 spec/vcr_cassettes/manageiq/providers/redhat/infra_manager/refresh/ovirt_sdk_publish_vm_to_template.yml | 12 +
 spec/vcr_cassettes/manageiq/providers/redhat/infra_manager/refresh/ovirt_sdk_refresh_graph_target_template.yml | 12 +
 spec/vcr_cassettes/manageiq/providers/redhat/infra_manager/refresh/ovirt_sdk_refresh_graph_target_vm.yml | 12 +
 spec/vcr_cassettes/manageiq/providers/redhat/infra_manager/refresh/ovirt_sdk_refresh_recording.yml | 24 +
 spec/vcr_cassettes/manageiq/providers/redhat/infra_manager/refresh/ovirt_sdk_refresh_recording_custom_attrs.yml | 12 +
 spec/vcr_cassettes/manageiq/providers/redhat/infra_manager/refresh/ovirt_sdk_target_template_disconnect.yml | 12 +
 19 files changed, 142 insertions(+), 23 deletions(-)


https://github.com/ManageIQ/manageiq-providers-ovirt/commit/cf381288ed0cfc7b4f30952bd5c5ba7de681caed
commit cf381288ed0cfc7b4f30952bd5c5ba7de681caed
Author:     Boris Od <boris.od>
AuthorDate: Thu Jun  7 01:39:41 2018 -0400
Commit:     Boris Od <boris.od>
CommitDate: Thu Jun  7 01:39:41 2018 -0400

    Merge pull request #260 from AlonaKaplan/fix_travis_network_id_nil

    Fix travis network id nil
    (cherry picked from commit 78dc6000e37045f33de1dced62ea15893931812a)

    https://bugzilla.redhat.com/show_bug.cgi?id=1588042

 app/models/manageiq/providers/redhat/inventory/parser/infra_manager.rb | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Comment 9 Ilanit Stein 2018-06-24 07:56:32 UTC
This bug cannot be tested on CFME-5.9.3.2 due to RHV blocker bug 1593202.
I tested it on CFME-5.9.3.2, which was upgraded from CFME-5.9.3.1, with the below commands, and it seems to still return nil for the VM's lan.
Therefore, I am moving the bug back to ASSIGNED.

#vmdb
#rails c
irb(main):006:0> v = Vm.where(name: "1_vm").last
...
irb(main):007:0> v.hardware.nics[0].lan
=> nil

Comment 12 Ilanit Stein 2018-06-25 10:45:29 UTC
Verified on CFME-5.9.3.2 + Fix for bug 1593202 / RHV-4.2.3.5

Using the steps mentioned in comment #11.

Log:

[root@dhcp-8-198-153 vmdb]# rails c
Loading production environment (Rails 5.0.6)


irb(main):001:0> workspace = MiqAeEngine::MiqAeWorkspaceRuntime.new
=> #<MiqAeEngine::MiqAeWorkspaceRuntime:0x0000000002df13b0 @readonly=false, @nodes=[], @current=[], @datastore_cache={}, @class_methods={}, @dom_search=#<MiqAeEngine::MiqAeDomainSearch:0x0000000002df0af0 @fqns_id_cache={}, @fqns_id_class_cache={}, @partial_ns=[], @prepend_namespace=nil>, @persist_state_hash={}, @current_state_info={}, @state_machine_objects=[], @ae_user=nil, @rbac=false>


irb(main):002:0> workspace.ae_user = User.where(:userid => 'admin').first
PostgreSQLAdapter#log_after_checkout, connection_pool: size: 5, connections: 1, in use: 1, waiting_in_queue: 0
=> #<User id: 1, name: "Administrator", email: nil, icon: nil, created_on: "2018-06-20 04:24:20", updated_on: "2018-06-25 10:28:32", userid: "admin", settings: {}, lastlogon: "2018-06-25 10:28:32", lastlogoff: nil, current_group_id: 2, first_name: nil, last_name: nil, password_digest: "$2a$10$oiAnkTpbV/UCL.Zd3AuTOu0UeSbYz4is2UfJsaLqFbA...">


irb(main):003:0> $evm = MiqAeMethodService::MiqAeService.new(workspace)
=> #<MiqAeMethodService::MiqAeService:0x000000000b7cc2d8 @drb_server_references=[], @inputs={}, @workspace=#<MiqAeEngine::MiqAeWorkspaceRuntime:0x0000000002df13b0 @readonly=false, @nodes=[], @current=[], @datastore_cache={}, @class_methods={}, @dom_search=#<MiqAeEngine::MiqAeDomainSearch:0x0000000002df0af0 @fqns_id_cache={}, @fqns_id_class_cache={}, @partial_ns=[], @prepend_namespace=nil>, @persist_state_hash={}, @current_state_info={}, @state_machine_objects=[], @ae_user=#<User id: 1, name: "Administrator", email: nil, icon: nil, created_on: "2018-06-20 04:24:20", updated_on: "2018-06-25 10:28:32", userid: "admin", settings: {}, lastlogon: "2018-06-25 10:28:32", lastlogoff: nil, current_group_id: 2, first_name: nil, last_name: nil, password_digest: "$2a$10$oiAnkTpbV/UCL.Zd3AuTOu0UeSbYz4is2UfJsaLqFbA...">, @rbac=false>, @persist_state_hash={}, @logger=#<Vmdb::Loggers::MulticastLogger:0x0000000002559d60 @loggers=#<Set: {#<VMDBLogger:0x0000000002559f90 @progname=nil, @level=1, @default_formatter=#<Logger::Formatter:0x0000000002559ef0 @datetime_format=nil>, @formatter=#<VMDBLogger::Formatter:0x0000000002559dd8 @datetime_format=nil>, @logdev=#<Logger::LogDevice:0x0000000002559ea0 @shift_size=1048576, @shift_age=0, @filename=#<Pathname:/var/www/miq/vmdb/log/automation.log>, @dev=#<File:/var/www/miq/vmdb/log/automation.log>, @mon_owner=nil, @mon_count=0, @mon_mutex=#<Thread::Mutex:0x0000000002559e78>>, @write_lock=#<Thread::Mutex:0x0000000002559db0>, @local_levels={}>}>, @level=1, @thread_hash_level_key=:"ThreadSafeLogger#19582640@level">>


irb(main):005:0> vm = $evm.vmdb(:vm).find_by_name("cfme-5.9.1.0")   
PostgreSQLAdapter#log_after_checkout, connection_pool: size: 5, connections: 1, in use: 1, waiting_in_queue: 0
PostgreSQLAdapter#log_after_checkin, connection_pool: size: 5, connections: 1, in use: 0, waiting_in_queue: 0
PostgreSQLAdapter#log_after_checkout, connection_pool: size: 5, connections: 1, in use: 1, waiting_in_queue: 0
PostgreSQLAdapter#log_after_checkin, connection_pool: size: 5, connections: 1, in use: 0, waiting_in_queue: 0
=> #<MiqAeServiceManageIQ_Providers_Redhat_InfraManager_Vm:0x7e68230 @object=#<ManageIQ::Providers::Redhat::InfraManager::Vm id: 19, vendor: "redhat", format: nil, version: nil, name: "cfme-5.9.1.0", description: nil, location: "f5a9dd0a-35ec-4413-b096-b623bfb03480.ovf", config_xml: nil, autostart: nil, host_id: nil, last_sync_on: nil, created_on: "2018-06-25 07:26:14", updated_on: "2018-06-25 07:26:14", storage_id: 12, guid: "9367bef5-4ea0-49d0-ac29-9d109a59fa92", ems_id: 5, last_scan_on: nil, last_scan_attempt_on: nil, uid_ems: "f5a9dd0a-35ec-4413-b096-b623bfb03480", retires_on: nil, retired: nil, boot_time: nil, tools_status: nil, standby_action: nil, power_state: "off", state_changed_on: "2018-06-25 07:26:14", previous_state: nil, connection_state: "connected", last_perf_capture_on: nil, registered: nil, busy: nil, smart: nil, memory_reserve: 1024, memory_reserve_expand: nil, memory_limit: 32768, memory_shares: nil, memory_shares_level: nil, cpu_reserve: nil, cpu_reserve_expand: nil, cpu_limit: nil, cpu_shares: nil, cpu_shares_level: nil, cpu_affinity: nil, ems_created_on: nil, template: false, evm_owner_id: nil, ems_ref_obj: "--- \"/api/vms/f5a9dd0a-35ec-4413-b096-b623bfb03480...", miq_group_id: 1, linked_clone: nil, fault_tolerance: nil, type: "ManageIQ::Providers::Redhat::InfraManager::Vm", ems_ref: "/api/vms/f5a9dd0a-35ec-4413-b096-b623bfb03480", ems_cluster_id: 6, retirement_warn: nil, retirement_last_warn: nil, vnc_port: nil, flavor_id: nil, availability_zone_id: nil, cloud: false, retirement_state: nil, cloud_network_id: nil, cloud_subnet_id: nil, cloud_tenant_id: nil, raw_power_state: "down", publicly_available: nil, orchestration_stack_id: nil, retirement_requester: nil, tenant_id: 1, resource_group_id: nil, deprecated: nil, storage_profile_id: nil, cpu_hot_add_enabled: nil, cpu_hot_remove_enabled: nil, memory_hot_add_enabled: nil, memory_hot_add_limit: nil, memory_hot_add_increment: nil>, @virtual_columns=["active", "aggressive_mem_recommended_change", "aggressive_mem_recommended_change_pct", "aggressive_recommended_mem", "aggressive_recommended_vcpus", "aggressive_vcpus_recommended_change", "aggressive_vcpus_recommended_change_pct", "allocated_disk_storage", "archived", "conservative_mem_recommended_change", "conservative_mem_recommended_change_pct", "conservative_recommended_mem", "conservative_recommended_vcpus", "conservative_vcpus_recommended_change", "conservative_vcpus_recommended_change_pct", "cpu_cores_per_socket", "cpu_total_cores", "cpu_usagemhz_rate_average_avg_over_time_period", "cpu_usagemhz_rate_average_high_over_time_period", "cpu_usagemhz_rate_average_low_over_time_period", "cpu_usagemhz_rate_average_max_over_time_period", "custom_1", "custom_2", "custom_3", "custom_4", "custom_5", "custom_6", "custom_7", "custom_8", "custom_9", "debris_size", "derived_memory_used_avg_over_time_period", "derived_memory_used_high_over_time_period", "derived_memory_used_low_over_time_period", "derived_memory_used_max_over_time_period", "disconnected", "disk_1_disk_type", "disk_1_mode", "disk_1_partitions_aligned", "disk_1_size", "disk_1_size_on_disk", "disk_1_used_percent_of_provisioned", "disk_2_disk_type", "disk_2_mode", "disk_2_partitions_aligned", "disk_2_size", "disk_2_size_on_disk", "disk_2_used_percent_of_provisioned", "disk_3_disk_type", "disk_3_mode", "disk_3_partitions_aligned", "disk_3_size", "disk_3_size_on_disk", "disk_3_used_percent_of_provisioned", "disk_4_disk_type", "disk_4_mode", "disk_4_partitions_aligned", "disk_4_size", "disk_4_size_on_disk", 
"disk_4_used_percent_of_provisioned", "disk_5_disk_type", "disk_5_mode", "disk_5_partitions_aligned", "disk_5_size", "disk_5_size_on_disk", "disk_5_used_percent_of_provisioned", "disk_6_disk_type", "disk_6_mode", "disk_6_partitions_aligned", "disk_6_size", "disk_6_size_on_disk", "disk_6_used_percent_of_provisioned", "disk_7_disk_type", "disk_7_mode", "disk_7_partitions_aligned", "disk_7_size", "disk_7_size_on_disk", "disk_7_used_percent_of_provisioned", "disk_8_disk_type", "disk_8_mode", "disk_8_partitions_aligned", "disk_8_size", "disk_8_size_on_disk", "disk_8_used_percent_of_provisioned", "disk_9_disk_type", "disk_9_mode", "disk_9_partitions_aligned", "disk_9_size", "disk_9_size_on_disk", "disk_9_used_percent_of_provisioned", "disk_size", "disks_aligned", "ems_cluster_name", "evm_owner_email", "evm_owner_name", "evm_owner_userid", "first_drift_state_timestamp", "has_rdm_disk", "host_name", "hostnames", "href_slug", "ipaddresses", "is_evm_appliance", "last_compliance_status", "last_compliance_timestamp", "last_drift_state_timestamp", "mac_addresses", "max_cpu_usage_rate_average_avg_over_time_period", "max_cpu_usage_rate_average_avg_over_time_period_without_overhead", "max_cpu_usage_rate_average_high_over_time_period", "max_cpu_usage_rate_average_high_over_time_period_without_overhead", "max_cpu_usage_rate_average_low_over_time_period", "max_cpu_usage_rate_average_low_over_time_period_without_overhead", "max_cpu_usage_rate_average_max_over_time_period", "max_mem_usage_absolute_average_avg_over_time_period", "max_mem_usage_absolute_average_avg_over_time_period_without_overhead", "max_mem_usage_absolute_average_high_over_time_period", "max_mem_usage_absolute_average_high_over_time_period_without_overhead", "max_mem_usage_absolute_average_low_over_time_period", "max_mem_usage_absolute_average_low_over_time_period_without_overhead", "max_mem_usage_absolute_average_max_over_time_period", "mem_cpu", "memory_exceeds_current_host_headroom", "moderate_mem_recommended_change", "moderate_mem_recommended_change_pct", "moderate_recommended_mem", "moderate_recommended_vcpus", "moderate_vcpus_recommended_change", "moderate_vcpus_recommended_change_pct", "num_cpu", "num_disks", "num_hard_disks", "orphaned", "os_image_name", "overallocated_mem_pct", "overallocated_vcpus_pct", "owned_by_current_ldap_group", "owned_by_current_user", "owning_ldap_group", "paravirtualization", "parent_blue_folder_1_name", "parent_blue_folder_2_name", "parent_blue_folder_3_name", "parent_blue_folder_4_name", "parent_blue_folder_5_name", "parent_blue_folder_6_name", "parent_blue_folder_7_name", "parent_blue_folder_8_name", "parent_blue_folder_9_name", "platform", "provisioned_storage", "ram_size", "ram_size_in_bytes", "recommended_mem", "recommended_vcpus", "region_description", "region_number", "snapshot_size", "storage_name", "thin_provisioned", "uncommitted_storage", "used_disk_storage", "used_storage", "used_storage_by_state", "v_annotation", "v_datastore_path", "v_host_vmm_product", "v_is_a_template", "v_owning_blue_folder", "v_owning_blue_folder_path", "v_owning_cluster", "v_owning_datacenter", "v_owning_folder", "v_owning_folder_path", "v_owning_resource_pool", "v_pct_free_disk_space", "v_pct_used_disk_space", "v_snapshot_newest_description", "v_snapshot_newest_name", "v_snapshot_newest_timestamp", "v_snapshot_newest_total_size", "v_snapshot_oldest_description", "v_snapshot_oldest_name", "v_snapshot_oldest_timestamp", "v_snapshot_oldest_total_size", "v_total_snapshots", "vendor_display", "vm_misc_size", "vm_ram_size", 
"vmsafe_agent_address", "vmsafe_agent_port", "vmsafe_enable", "vmsafe_fail_open", "vmsafe_immutable_vm", "vmsafe_timeout_ms"], @associations=["accounts", "compliances", "datacenter", "direct_service", "directories", "ems_blue_folder", "ems_cluster", "ems_events", "ems_folder", "ext_management_system", "files", "groups", "guest_applications", "hardware", "host", "last_compliance", "miq_provision", "operating_system", "owner", "resource_pool", "service", "snapshots", "storage", "tenant", "users"]>


irb(main):006:0> vm.hardware.nics[0].lan
PostgreSQLAdapter#log_after_checkout, connection_pool: size: 5, connections: 1, in use: 1, waiting_in_queue: 0
PostgreSQLAdapter#log_after_checkin, connection_pool: size: 5, connections: 1, in use: 0, waiting_in_queue: 0
PostgreSQLAdapter#log_after_checkout, connection_pool: size: 5, connections: 1, in use: 1, waiting_in_queue: 0
PostgreSQLAdapter#log_after_checkin, connection_pool: size: 5, connections: 1, in use: 0, waiting_in_queue: 0
PostgreSQLAdapter#log_after_checkout, connection_pool: size: 5, connections: 1, in use: 1, waiting_in_queue: 0
PostgreSQLAdapter#log_after_checkin, connection_pool: size: 5, connections: 1, in use: 0, waiting_in_queue: 0
PostgreSQLAdapter#log_after_checkout, connection_pool: size: 5, connections: 1, in use: 1, waiting_in_queue: 0
PostgreSQLAdapter#log_after_checkin, connection_pool: size: 5, connections: 1, in use: 0, waiting_in_queue: 0
PostgreSQLAdapter#log_after_checkout, connection_pool: size: 5, connections: 1, in use: 1, waiting_in_queue: 0
PostgreSQLAdapter#log_after_checkin, connection_pool: size: 5, connections: 1, in use: 0, waiting_in_queue: 0
PostgreSQLAdapter#log_after_checkout, connection_pool: size: 5, connections: 1, in use: 1, waiting_in_queue: 0
PostgreSQLAdapter#log_after_checkin, connection_pool: size: 5, connections: 1, in use: 0, waiting_in_queue: 0
PostgreSQLAdapter#log_after_checkout, connection_pool: size: 5, connections: 1, in use: 1, waiting_in_queue: 0
PostgreSQLAdapter#log_after_checkin, connection_pool: size: 5, connections: 1, in use: 0, waiting_in_queue: 0
=> #<MiqAeServiceLan:0x7168ff8 @object=#<Lan id: 3, switch_id: 3, name: "ovirtmgmt", tag: nil, created_on: "2018-06-25 07:26:10", updated_on: "2018-06-25 07:26:10", uid_ems: "2ec65083-017f-40db-936c-8e53a0553975", allow_promiscuous: nil, forged_transmits: nil, mac_changes: nil, computed_allow_promiscuous: nil, computed_forged_transmits: nil, computed_mac_changes: nil, parent_id: nil>, @virtual_columns=["href_slug", "region_description", "region_number"], @associations=["guest_devices", "hosts", "switch", "templates", "vms"]>
irb(main):007:0>

Comment 14 errata-xmlrpc 2018-07-12 13:15:53 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2184

