Bug 1464154 - Error generating reports after upgrading to 4.5
Summary: Error generating reports after upgrading to 4.5
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat CloudForms Management Engine
Classification: Red Hat
Component: Reporting
Version: 5.8.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: GA
Target Release: 5.9.0
Assignee: Yuri Rudman
QA Contact: Niyaz Akhtar Ansari
URL:
Whiteboard:
Depends On:
Blocks: 1478565
 
Reported: 2017-06-22 14:20 UTC by Saif Ali
Modified: 2020-12-14 08:56 UTC (History)
8 users

Fixed In Version: 5.9.0.1
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1478565 (view as bug list)
Environment:
Last Closed: 2018-03-06 15:47:29 UTC
Category: ---
Cloudforms Team: ---
Target Upstream Version:
Embargoed:


Attachments
Report (4.38 KB, text/x-vhdl)
2017-06-22 14:20 UTC, Saif Ali

Description Saif Ali 2017-06-22 14:20:20 UTC
Created attachment 1290742 [details]
Report

Description of problem:
Error generating reports after upgrading to 4.5. The failing report is based on Virtual Machine.

Version-Release number of selected component (if applicable):
5.8.0.17-20170525183055_6317a22

How reproducible:


Steps to Reproduce:
1. import the attached report
2. run report
3.

Actual results:


Expected results:


Additional info:

Comment 2 Saif Ali 2017-06-22 14:20:56 UTC
[----] E, [2017-06-21T10:14:16.732421 #12401:637130] ERROR -- : [ActiveRecord::StatementInvalid]: PG::NumericValueOutOfRange: ERROR:  integer out of range
: SELECT "vms".*,
  (SELECT "hardwares"."cpu_cores_per_socket" FROM "hardwares" WHERE "hardwares"."vm_or_template_id" = "vms"."id") AS cpu_cores_per_socket,
  ("vms"."connection_state" IS NULL OR "vms"."connection_state" != 'connected') AS disconnected,
  (SELECT "hardwares"."memory_mb" FROM "hardwares" WHERE "hardwares"."vm_or_template_id" = "vms"."id") AS mem_cpu,
  (SELECT "hardwares"."cpu_total_cores" FROM "hardwares" WHERE "hardwares"."vm_or_template_id" = "vms"."id") AS cpu_total_cores,
  (SELECT "hardwares"."memory_mb" FROM "hardwares" WHERE "hardwares"."vm_or_template_id" = "vms"."id") AS ram_size,
  (SELECT ("hardwares"."memory_mb" * 1048576) FROM "hardwares" WHERE "hardwares"."vm_or_template_id" = "vms"."id") AS ram_size_in_bytes,
  (SELECT "storages"."name" FROM "storages" WHERE "storages"."id" = "vms"."storage_id") AS storage_name,
  (SELECT ((COALESCE(((SELECT SUM("disks"."size") FROM "disks" WHERE "hardwares"."id" = "disks"."hardware_id")), 0)) + (COALESCE((CAST("hardwares"."memory_mb" AS bigint)), 0)) * 1048576) FROM "hardwares" WHERE "hardwares"."vm_or_template_id" = "vms"."id") AS provisioned_storage,
  ((SELECT COUNT(*) FROM "snapshots" WHERE "vms"."id" = "snapshots"."vm_or_template_id")) AS v_total_snapshots,
  (SELECT ((SELECT SUM((COALESCE("disks"."size_on_disk", "disks"."size", 0))) FROM "disks" WHERE "hardwares"."id" = "disks"."hardware_id")) FROM "hardwares" WHERE "hardwares"."vm_or_template_id" = "vms"."id") AS used_disk_storage,
  "vms"."id" AS t0_r0, "vms"."vendor" AS t0_r1, "vms"."format" AS t0_r2, "vms"."version" AS t0_r3, "vms"."name" AS t0_r4,
  "vms"."description" AS t0_r5, "vms"."location" AS t0_r6, "vms"."config_xml" AS t0_r7, "vms"."autostart" AS t0_r8, "vms"."host_id" AS t0_r9,
  "vms"."last_sync_on" AS t0_r10, "vms"."created_on" AS t0_r11, "vms"."updated_on" AS t0_r12, "vms"."storage_id" AS t0_r13, "vms"."guid" AS t0_r14,
  "vms"."ems_id" AS t0_r15, "vms"."last_scan_on" AS t0_r16, "vms"."last_scan_attempt_on" AS t0_r17, "vms"."uid_ems" AS t0_r18, "vms"."retires_on" AS t0_r19,
  "vms"."retired" AS t0_r20, "vms"."boot_time" AS t0_r21, "vms"."tools_status" AS t0_r22, "vms"."standby_action" AS t0_r23, "vms"."power_state" AS t0_r24,
  "vms"."state_changed_on" AS t0_r25, "vms"."previous_state" AS t0_r26, "vms"."connection_state" AS t0_r27, "vms"."last_perf_capture_on" AS t0_r28, "vms"."registered" AS t0_r29,
  "vms"."busy" AS t0_r30, "vms"."smart" AS t0_r31, "vms"."memory_reserve" AS t0_r32, "vms"."memory_reserve_expand" AS t0_r33, "vms"."memory_limit" AS t0_r34,
  "vms"."memory_shares" AS t0_r35, "vms"."memory_shares_level" AS t0_r36, "vms"."cpu_reserve" AS t0_r37, "vms"."cpu_reserve_expand" AS t0_r38, "vms"."cpu_limit" AS t0_r39,
  "vms"."cpu_shares" AS t0_r40, "vms"."cpu_shares_level" AS t0_r41, "vms"."cpu_affinity" AS t0_r42, "vms"."ems_created_on" AS t0_r43, "vms"."template" AS t0_r44,
  "vms"."evm_owner_id" AS t0_r45, "vms"."ems_ref_obj" AS t0_r46, "vms"."miq_group_id" AS t0_r47, "vms"."linked_clone" AS t0_r48, "vms"."fault_tolerance" AS t0_r49,
  "vms"."type" AS t0_r50, "vms"."ems_ref" AS t0_r51, "vms"."ems_cluster_id" AS t0_r52, "vms"."retirement_warn" AS t0_r53, "vms"."retirement_last_warn" AS t0_r54,
  "vms"."vnc_port" AS t0_r55, "vms"."flavor_id" AS t0_r56, "vms"."availability_zone_id" AS t0_r57, "vms"."cloud" AS t0_r58, "vms"."retirement_state" AS t0_r59,
  "vms"."cloud_network_id" AS t0_r60, "vms"."cloud_subnet_id" AS t0_r61, "vms"."cloud_tenant_id" AS t0_r62, "vms"."raw_power_state" AS t0_r63, "vms"."publicly_available" AS t0_r64,
  "vms"."orchestration_stack_id" AS t0_r65, "vms"."retirement_requester" AS t0_r66, "vms"."tenant_id" AS t0_r67, "vms"."resource_group_id" AS t0_r68, "vms"."deprecated" AS t0_r69,
  "vms"."storage_profile_id" AS t0_r70, "vms"."cpu_hot_add_enabled" AS t0_r71, "vms"."cpu_hot_remove_enabled" AS t0_r72, "vms"."memory_hot_add_enabled" AS t0_r73, "vms"."memory_hot_add_limit" AS t0_r74,
  "vms"."memory_hot_add_increment" AS t0_r75
  FROM "vms"
  WHERE "vms"."type" IN ('ManageIQ::Providers::InfraManager::Vm', 'ManageIQ::Providers::Microsoft::InfraManager::Vm', 'ManageIQ::Providers::Redhat::InfraManager::Vm', 'VmXen', 'ManageIQ::Providers::Vmware::InfraManager::Vm') AND "vms"."template" = $1  Method:[rescue in _async_generate_table]
[----] E, [2017-06-21T10:14:16.732594 #12401:637130] ERROR -- : /opt/rh/cfme-gemset/gems/activerecord-5.0.3/lib/active_record/connection_adapters/postgresql_adapter.rb:598:in `async_exec'
/opt/rh/cfme-gemset/gems/activerecord-5.0.3/lib/active_record/connection_adapters/postgresql_adapter.rb:598:in `block in exec_no_cache'
/opt/rh/cfme-gemset/gems/activerecord-5.0.3/lib/active_record/connection_adapters/abstract_adapter.rb:590:in `block in log'
/opt/rh/cfme-gemset/gems/activesupport-5.0.3/lib/active_support/notifications/instrumenter.rb:21:in `instrument'
/opt/rh/cfme-gemset/gems/activerecord-5.0.3/lib/active_record/connection_adapters/abstract_adapter.rb:583:in `log'
/opt/rh/cfme-gemset/gems/activerecord-5.0.3/lib/active_record/connection_adapters/postgresql_adapter.rb:598:in `exec_no_cache'
/opt/rh/cfme-gemset/gems/activerecord-5.0.3/lib/active_record/connection_adapters/postgresql_adapter.rb:587:in `execute_and_clear'
/opt/rh/cfme-gemset/gems/activerecord-5.0.3/lib/active_record/connection_adapters/postgresql/database_statements.rb:103:in `exec_query'
/opt/rh/cfme-gemset/gems/activerecord-5.0.3/lib/active_record/connection_adapters/abstract/database_statements.rb:373:in `select'
/opt/rh/cfme-gemset/gems/activerecord-5.0.3/lib/active_record/connection_adapters/abstract/database_statements.rb:41:in `select_all'
/opt/rh/cfme-gemset/gems/activerecord-5.0.3/lib/active_record/connection_adapters/abstract/query_cache.rb:95:in `select_all'
/opt/rh/cfme-gemset/gems/activerecord-5.0.3/lib/active_record/relation/finder_methods.rb:391:in `find_with_associations'
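For context: the failing column in the query above is ram_size_in_bytes, computed as "hardwares"."memory_mb" * 1048576. Both operands are 4-byte integers, so PostgreSQL evaluates the product in int4, whose maximum is 2147483647; any VM with 2048 MB of memory or more therefore triggers "integer out of range". A minimal reproduction in psql (an illustrative sketch, not taken from the report):

```sql
-- int4 * int4 is evaluated in int4; 2048 MB expressed in bytes
-- already exceeds the int4 maximum of 2147483647:
SELECT 2048 * 1048576;           -- ERROR:  integer out of range

-- Widening one operand to bigint makes the product int8:
SELECT 2048::bigint * 1048576;   -- 2147483648
```

Note that the provisioned_storage subquery in the same statement already guards against this with CAST("hardwares"."memory_mb" AS bigint), while the ram_size_in_bytes subquery does not.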

Comment 6 Yuri Rudman 2017-07-13 13:31:39 UTC
The second PR (https://github.com/ManageIQ/manageiq/pull/15554) will resolve the reported issue; there is no need to wait for the first PR to be merged.

Comment 7 CFME Bot 2017-07-13 13:31:42 UTC
New commit detected on ManageIQ/manageiq/master:
https://github.com/ManageIQ/manageiq/commit/5231493a22270f37d9fa85591bafaf31863ad593

commit 5231493a22270f37d9fa85591bafaf31863ad593
Author:     Yuri Rudman <yrudman>
AuthorDate: Wed Jul 12 12:17:22 2017 -0400
Commit:     Yuri Rudman <yrudman>
CommitDate: Thu Jul 13 08:10:48 2017 -0400

    cast virtual attribute 'ram_size_in_bytes' to bigint
    https://bugzilla.redhat.com/show_bug.cgi?id=1464154

 app/models/hardware.rb | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)
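The diffstat shows the change is confined to app/models/hardware.rb, where the ram_size_in_bytes virtual attribute is defined. At the SQL level, the effect of the commit's bigint cast would look roughly like this (a sketch; the exact SQL that Rails/Arel generates may differ):

```sql
-- Before: product computed in int4, overflowing for memory_mb >= 2048
(SELECT ("hardwares"."memory_mb" * 1048576)
   FROM "hardwares"
  WHERE "hardwares"."vm_or_template_id" = "vms"."id") AS ram_size_in_bytes

-- After: memory_mb is cast first, so the product is computed in int8,
-- matching the existing provisioned_storage subquery in the same report query
(SELECT (CAST("hardwares"."memory_mb" AS bigint) * 1048576)
   FROM "hardwares"
  WHERE "hardwares"."vm_or_template_id" = "vms"."id") AS ram_size_in_bytes
```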

