Bug 116028 - Misleading time estimates
Summary: Misleading time estimates
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Linux
Classification: Retired
Component: dump
Version: 9
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Jindrich Novy
QA Contact: Ben Levenson
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2004-02-17 18:25 UTC by Damian Menscher
Modified: 2013-07-02 22:58 UTC (History)
2 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2005-01-13 15:21:27 UTC
Embargoed:


Attachments
backup logs (5.24 KB, text/plain)
2004-02-29 22:57 UTC, Damian Menscher
Improve ? the block estimates when dumping directories (1.47 KB, patch)
2004-03-02 10:15 UTC, Stelian Pop

Description Damian Menscher 2004-02-17 18:25:33 UTC
Description of problem:
While a dump is running, it reports how much time remains, counting down in
5-minute increments. But its estimates are off, leaving the admin to wonder
whether the backup really succeeded. Here's an example of an uncompressed
dump to a remote tape drive (Red Hat 9, Intel CPU):

  DUMP: 86.92% done at 727 kB/s, finished in 0:54
  DUMP: 88.12% done at 727 kB/s, finished in 0:49
  DUMP: 89.13% done at 726 kB/s, finished in 0:45
  DUMP: Closing /dev/nst0
  DUMP: Volume 1 completed at: Mon Feb 16 09:17:00 2004

I've seen this on other systems as well.  Here's an example of a
compressed (bzip2) disk-to-disk dump (Red Hat 8.0, dual AMD CPU):

  DUMP: 38.30% done at 557 kB/s, finished in 1:12
  DUMP: 43.94% done at 575 kB/s, finished in 1:03
  DUMP: 50.42% done at 600 kB/s, finished in 0:54
  DUMP: Closing /backup/home.2
  DUMP: Volume 1 completed at: Tue Feb 17 05:00:20 2004

Version-Release number of selected component (if applicable):
dump-0.4b28-7 

How reproducible:
Always

Steps to Reproduce:
1. dump a large filesystem (large enough to take > 1hr)

Comment 1 Stelian Pop 2004-02-29 09:59:39 UTC
Are you sure this is not a multi-volume backup, and that the logs you are
quoting relate to the first volume only? (In that case dump would be right
in its time estimates, since that is the time remaining for ALL the
volumes.)

Stelian.

Comment 2 Damian Menscher 2004-02-29 22:57:35 UTC
Created attachment 98149 [details]
backup logs

I suppose it's possible that it's trying to split across volumes, but if so
it's doing so without my knowledge.  Here are the specifics of one particular
case where I saw it fail (and I'll attach the output):

I'm dumping stuff from an ext3 partition.  Because of issues with inconsistent
backups if data is changed during the dump, I'm creating a snapshot and then
dumping that.  My original filesystem is /dev/Volume00/astrolv, so I create a
snapshot with '/sbin/lvcreate -L 250M -s -n b_astrolv /dev/Volume00/astrolv'. 
I mount it on /backup/astro with '/bin/mount -o ro /dev/Volume00/b_astrolv
/backup/astro'.  Then I call dump with TAPE=backup@tapehost:/dev/nst0 and
RSH=/usr/bin/ssh as '/sbin/dump -u0 /backup/astro'.  Finally I umount and
remove the snapshot.

This seems to work properly most of the time, and it even gives accurate time
estimates.  In the case of interest, I'm backing up 16-18G to a Travan 40 drive
(20G uncompressed), so I don't think space is an issue.  Interestingly, the
size estimate was 18G, but it only dumped 16G.  I'm attaching the full logs.

Comment 3 Stelian Pop 2004-03-01 15:46:30 UTC
Ok, you are not doing multi-volume dumps.

Looking at the logs, we can clearly see that what is wrong is the size
estimate. Time estimates are calculated from the size estimate, and the
45 minutes left is consistent with the difference between the estimated
total size and the real size (45 minutes at 720 KB/s ≈ 2,000,000 KB =
2,000,000 blocks).
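The arithmetic behind that consistency check can be reproduced directly (a sketch; the only assumption is dump's 1 KB block size, which makes KB and blocks interchangeable):

```python
# Figures from Comment 3: the last progress line before the dump
# closed reported roughly 720 KB/s with "finished in 0:45".
rate_kb_per_s = 720
remaining_minutes = 45

# How much data dump still believed it had to write when it finished:
overestimate_kb = rate_kb_per_s * remaining_minutes * 60
print(overestimate_kb)  # 1944000 KB, i.e. roughly 2,000,000 1 KB blocks
```

That ~2 GB of phantom remaining data matches the 18G estimate vs. 16G actually dumped reported above.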

As its name suggests, the size estimate is only an estimate, based on a
quick calculation of the total size of the data. To keep that calculation
fast, several assumptions are made, and those assumptions can make the
result a bit off (in particular, when calculating the size of a directory,
the size of the inode is taken into account in the estimate, but the real
dump will only dump the contents). In your case the estimate is 10% off
the real result, but I still consider that acceptable.

Anyway, this is all informational only; even if the estimates are a
bit off, the dump itself is still correct.

(And a small note regarding the snapshot: you can dump the snapshot
without mounting it, by specifying the device node directly to dump:
'dump /dev/Volume00/b_astrolv'.)

Comment 4 Damian Menscher 2004-03-01 19:22:37 UTC
Ok, I agree there's no bug here, so this should be closed, or perhaps 
changed to an RFE for better size estimates.  (If you're dumping only 
the *used* portion of an inode, perhaps you could get better size 
estimates by assuming that each file wastes half an inode.)
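The correction proposed here can be illustrated with a toy estimator (a hypothetical sketch of the idea only, not dump's actual code; the 1 KB block size and the example file sizes are assumptions):

```python
BLOCK = 1024  # dump writes 1 KB records

def worst_case_blocks(file_sizes):
    """Round every file up to whole blocks: systematically over-estimates."""
    return sum(-(-size // BLOCK) for size in file_sizes)

def half_block_blocks(file_sizes):
    """The suggestion above: count each file's full blocks, then charge
    half a block of slack per file on average for its partly filled tail."""
    full_blocks = sum(size // BLOCK for size in file_sizes)
    return full_blocks + len(file_sizes) // 2

sizes = [500, 1500, 3000]          # bytes; made-up example files
print(worst_case_blocks(sizes))    # 6 blocks
print(half_block_blocks(sizes))    # 4 blocks: a tighter estimate
```

On average the half-block rule under- and over-counts individual files equally, so the total estimate lands closer to the real dump size than always rounding up.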

With regard to your comment on mounting, for some reason I thought 
I'd determined that you had to mount (or at least have an fstab 
entry) in order to get incrementals to play nice with /etc/dumpdates.

Comment 5 Stelian Pop 2004-03-02 10:15:14 UTC
Created attachment 98181 [details]
Improve ? the block estimates when dumping directories

Comment 6 Stelian Pop 2004-03-02 10:15:55 UTC
Sure, let's give the 'half-inode consumption' idea a try. Find attached a
patch against 0.4b35. Please test it and report back.

Regarding the snapshot mounting: no, you don't have to mount it or have an
fstab entry for it; incrementals work by writing the *device name*
into /etc/dumpdates, so it will work fine.
(Please try the latest version, however; you appear to be using a
rather old dump, and things may have changed since then...)

Stelian.

Comment 7 Jindrich Novy 2004-09-14 12:57:47 UTC
Hi Damian,

do you still see misleading estimates with the recent dump-0.4b37?

Jindrich

