Bug 2040303

Summary: The sgpio tool does not handle the case where the SCSI host id is greater than 10
Product: Red Hat Enterprise Linux 8
Component: sgpio
Version: 8.4
Status: CLOSED WONTFIX
Severity: unspecified
Priority: unspecified
Target Milestone: rc
Hardware: Unspecified
OS: Unspecified
Type: Bug
Reporter: Chancel <13129778215>
Assignee: Lukáš Nykrýn <lnykryn>
QA Contact: qe-baseos-daemons
Last Closed: 2023-07-13 07:31:54 UTC

Description Chancel 2022-01-13 12:20:16 UTC
Description of problem:
The sgpio tool does not handle the case where the SCSI host id is greater than 10.

Version-Release number of selected component (if applicable):
sgpio-1.2.0.10-21.el8.x86_64.rpm


How reproducible: Always (whenever a disk's SCSI host id is greater than 10).


Steps to Reproduce:
1. Insert enough hard drives, without using a RAID card;
2. Install the OS in PCH mode;
3. yum install sgpio lsscsi;
4. Run lsscsi to find a hard disk whose host id is greater than 10, such as sdg;
5. Run "sgpio -d sdg -s locate".

Actual results:
It will output "buffer overflow" on the terminal.

Expected results:
The command executes successfully.

Additional info:
After analysis, this is a bug in the sgpio tool itself. The problem is in the led_set(int port_num) function in sgpio.c.

The offending code is:

if (sprintf(disks[index].name, "Port %d", port_num) < 0) {
    printf("Error: Unable to write port number to buffor!\n");
    return -1;
}

The disks[index].name buffer is only 7 bytes: just enough for "Port " (5 characters), a single digit, and the terminating NUL. As soon as the host id reaches two digits (10 or more), sprintf() needs 8 bytes and writes past the end of the buffer, which produces the "buffer overflow" error shown above.
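
For illustration only, below is a minimal, self-contained sketch of one possible fix: enlarging the name field and using a bounded snprintf(). The struct layout and sizes here are assumptions made for the example; only the original 7-byte field and the "Port %d" format come from sgpio.c as quoted above.

#include <stdio.h>

/* Hypothetical stand-in for the disk record in sgpio.c; only the original
 * 7-byte name field and the "Port %d" format come from this bug report. */
struct disk {
    char name[16];   /* enlarged from 7 so that two-digit port numbers fit */
};

int main(void)
{
    struct disk d;
    int port_num = 10;   /* a two-digit host id, the case that overflows */

    /* snprintf() never writes past the end of the buffer, and its return
     * value lets us detect truncation as well as output errors. */
    int n = snprintf(d.name, sizeof(d.name), "Port %d", port_num);
    if (n < 0 || (size_t)n >= sizeof(d.name)) {
        printf("Error: Unable to write port number to buffer!\n");
        return -1;
    }
    printf("%s\n", d.name);
    return 0;
}

With a change along these lines, two-digit port numbers are formatted without overrunning the buffer, and truncation is reported instead of corrupting memory.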

I hope this problem will be fixed in a new version. Thanks.

Comment 2 RHEL Program Management 2023-07-13 07:31:54 UTC
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release.  Therefore, it is being closed.  If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.