Bug 9659 - Error recovery on Magnetic, 4mm, 8mm Tape IO using ioctl
Summary: Error recovery on Magnetic, 4mm, 8mm Tape IO using ioctl
Alias: None
Product: Red Hat Linux
Classification: Retired
Component: kernel
Version: 6.1
Hardware: All Linux
Target Milestone: ---
Assignee: Arjan van de Ven
QA Contact: Brian Brock
Depends On:
Reported: 2000-02-21 21:12 UTC by Andrew Cseko, Jr.
Modified: 2007-04-18 16:26 UTC (History)

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2003-04-14 18:02:58 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Description Andrew Cseko, Jr. 2000-02-21 21:12:54 UTC
I recently uncovered some problems with the tape device driver in Red Hat
Linux 6.1 when I transferred a program I had written from Solaris to
Linux on a Pentium.

In my particular application I use a tape library system to
automatically cycle through 18 tapes in sequence.  My application needs
to wait for the library system to replace the current tape with the next
tape in sequence, and then wait for the drive to come online/ready.

To get my application to work correctly under Linux I had to add code to
close and re-open the file handle to the tape device in order to get a
correct tape status message.  I've appended a sample of the code to
illustrate this situation.  The man page for the driver does state that
the tape drive status returned is from the time the device was opened.
I believe this behavior is incorrect.
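The close/re-open workaround described above can be sketched as follows,
assuming the Linux st driver's MTIOCGET ioctl and GMT_ONLINE status bit;
the function name, retry interval, and error handling are illustrative
and not taken from the original attachment:

```c
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/mtio.h>

/* Poll for drive-ready by closing and re-opening the device on each
 * iteration, since the driver reports status as of open() time. */
static int wait_for_online(const char *dev)
{
    for (;;) {
        int fd = open(dev, O_RDONLY);
        if (fd < 0) {
            perror("open");
            return -1;              /* device node unavailable */
        }
        struct mtget st;
        if (ioctl(fd, MTIOCGET, &st) == 0 && GMT_ONLINE(st.mt_gstat))
            return fd;              /* drive is ready; caller keeps fd */
        close(fd);                  /* stale handle: re-open next pass */
        sleep(5);
    }
}
```

In practice the open() itself may also fail while no tape is loaded, so a
caller may need to retry the open rather than give up, as the sample below
effectively does.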

I also uncovered two other problems.  In my application I read from many
4mm tapes and have encountered bad blocks that resulted in tape read
errors.  Under Solaris I was able to identify the bad block using the
block_no field in the status message structure, and I implemented a fix
that skips over the bad block and continues processing.  Under your
version of the tape IO the block_no and file_no fields contain
impossible values, e.g. over 1 million, so my code to recover data
beyond a bad block no longer functions.
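A hedged sketch of the Solaris-style recovery using the corresponding
Linux fields and operations (mt_blkno/mt_fileno from MTIOCGET, a
forward-space via MTFSR); the function name is illustrative, and the
approach only works once the driver reports sane block numbers:

```c
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mtio.h>

/* After a failed read(), record the position of the bad block and
 * space forward one record so processing can resume past it.
 * Returns 0 if the skip succeeded, -1 otherwise. */
static int skip_bad_block(int fd)
{
    struct mtget st;
    if (ioctl(fd, MTIOCGET, &st) == 0)
        fprintf(stderr, "read error near block %ld, file %ld\n",
                (long)st.mt_blkno, (long)st.mt_fileno);

    struct mtop fsr = { MTFSR, 1 };   /* forward-space one record */
    return ioctl(fd, MTIOCTOP, &fsr) == 0 ? 0 : -1;
}
```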

There also seem to be situations where a seek to the end of data on a
tape does not return a correct position.  In one particular case I knew
that I had 17 files on a tape, but the seek-to-end-of-data command left
me at file six.  I was able to correctly seek to the end of data on a
Solaris machine.  I believe it may have something to do with whether an
end-of-file or end-of-data marker was written to the tape before the
tape was ejected.  For example, the program writing data to the tape was
aborted with a CTRL-C, so it never got a chance to close the tape file.
Then the tape was ejected with no additional data written.  At that
point there may be no end-of-file or end-of-data marker on the tape.
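The end-of-data seek described above maps to the MTEOM operation on
Linux; a minimal sketch (function name illustrative) that spaces to the
end of recorded media and reports the resulting file number:

```c
#include <sys/ioctl.h>
#include <sys/mtio.h>

/* Space to end of recorded data, then read back the position.  If the
 * tape was ejected before filemarks were written, the driver may stop
 * short of the true end of data, as described above. */
static long seek_to_eod(int fd)
{
    struct mtop eom = { MTEOM, 1 };
    if (ioctl(fd, MTIOCTOP, &eom) != 0)
        return -1;
    struct mtget st;
    if (ioctl(fd, MTIOCGET, &st) != 0)
        return -1;
    return (long)st.mt_fileno;    /* filemarks passed so far */
}
```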

Have these issues been resolved?  Are you still working on the driver?
If not, whom can I contact for additional information?

Please don't hesitate to contact me if you have any questions.


Andrew Cseko
Institute for Defense Analysis


    for( i_tapes=0 ; i_tapes < n_tapes ; i_tapes++ )
    {
      if (n_tapes != 1)
          fprintf( stdout, "TAPE %u of %u.\n", i_tapes+1, n_tapes );

#if defined(OS_WINDOWS)
      taperewind();      // rewind tape
      tapespace( 1, 1 ); // skip over VAX VMS volume header
#endif

#if defined(OS_UNIX) && defined(OS_UNIX_LINUX)
#define mt_flags  mt_gstat

      if (n_tapes != 1)
      {
          struct mtop  mt_command = { 0, 1 };
          struct mtget mt_status;

          /* wait/make sure tape drive is on-line and ready to be

Comment 1 Jeff Johnson 2001-01-06 21:57:01 UTC
These appear to be kernel tape driver issues, changing component.

Comment 2 Stephen John Smoogen 2003-01-25 04:04:49 UTC
Is this bug still appearing in Red Hat Linux 7.3 or 8.0?

Comment 3 Jay Turner 2003-04-14 18:02:58 UTC
Closing out due to bit-rot.
