Bug 711383 - wrong peaks while deleting a snapshot
Summary: wrong peaks while deleting a snapshot
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Fedora
Classification: Fedora
Component: vnstat
Version: 14
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: Adrian Reber
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2011-06-07 10:36 UTC by Harald Reindl
Modified: 2012-04-02 18:54 UTC

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2012-04-02 18:54:20 UTC
Type: ---



Description Harald Reindl 2011-06-07 10:36:23 UTC
Please, can this be fixed in any way? It looks like it only affects x86_64, because on the ESXi cluster there is one i386 VM with no wrong peaks in vnstat - and yes, it makes sense to use "vnstat" in a VM guest :-)

Every night from Friday to Saturday, a snapshot of our Fedora VMware guests
is taken by "VMware Data Recovery" to make a consistent backup. While the
snapshot is being deleted, something feeds horribly wrong values to "vnstat",
which makes the monthly summary useless - see below :-(

>>Am 06.06.2011 17:15, schrieb Jerry James:
>> 2011/6/5 Reindl Harald <h.reindl@thelounge.net>:
>> has anybody an idea for which package i should file a bugreport
>> for this? i guess "vnstat" is only the postman
> 
> Maybe.  I notice that the bad value is 16777216.00 TiB, which is 2**64
> ... on a 64-bit machine.  A quick look through the vnstat code shows
> that it is using floating point arithmetic with rounding to print this
> value, so that's exactly what I would expect to see if somebody
> managed to stuff -1 (or any other negative value of small magnitude)
> into an unsigned 64-bit variable
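The arithmetic in that analysis is easy to verify. A minimal Python sketch (assuming, as the quote suggests, that the byte counter is divided by 2**40 and the result rounded to two decimals for TiB output):

```python
# A -1 stored in an unsigned 64-bit variable becomes 2**64 - 1.
wrapped = (-1) % 2**64
# Convert bytes to TiB the way a rounded floating-point printout would.
tib = wrapped / 2**40
print(f"{tib:.2f} TiB")  # -> 16777216.00 TiB, the bad value in the table below
```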

      05/09/11      2.35 GiB |   72.14 GiB |   74.49 GiB |    7.23 Mbit/s
      05/10/11      1.47 GiB |   11.41 GiB |   12.88 GiB |    1.25 Mbit/s
      05/11/11      1.11 GiB |    6.19 GiB |    7.30 GiB |  708.76 kbit/s
      05/12/11      1.17 GiB |    5.82 GiB |    6.99 GiB |  678.38 kbit/s
      05/13/11      1.12 GiB |    6.50 GiB |    7.62 GiB |  739.88 kbit/s
      05/14/11   33554432.00 TiB |    4.10 GiB | 33554432.00 TiB | 3336.00 Tbit/s
      05/15/11    778.85 MiB |    4.45 GiB |    5.21 GiB |  505.87 kbit/s
      05/16/11      1.30 GiB |    7.37 GiB |    8.67 GiB |  842.06 kbit/s
      05/17/11      1.38 GiB |    8.18 GiB |    9.56 GiB |  928.20 kbit/s
      05/18/11      1.21 GiB |    6.83 GiB |    8.04 GiB |  780.32 kbit/s
      05/19/11      1.03 GiB |    5.68 GiB |    6.72 GiB |  652.10 kbit/s
      05/20/11      1.11 GiB |    5.18 GiB |    6.29 GiB |  610.67 kbit/s
      05/21/11   16777216.00 TiB |    3.97 GiB | 16777216.00 TiB | 1668.00 Tbit/s
      05/22/11    902.15 MiB |    6.74 GiB |    7.62 GiB |  739.58 kbit/s
      05/23/11      1.28 GiB |   16.56 GiB |   17.84 GiB |    1.73 Mbit/s
      05/24/11      1.60 GiB |   11.42 GiB |   13.02 GiB |    1.26 Mbit/s
      05/25/11      1.47 GiB |    6.65 GiB |    8.12 GiB |  788.78 kbit/s
      05/26/11      1.23 GiB |    7.40 GiB |    8.64 GiB |  838.46 kbit/s
      05/27/11      1.43 GiB |    6.75 GiB |    8.19 GiB |  794.70 kbit/s
      05/28/11   33554432.00 TiB |    5.44 GiB | 33554432.00 TiB | 3336.00 Tbit/s
      05/29/11    855.65 MiB |    4.89 GiB |    5.72 GiB |  555.47 kbit/s
      05/30/11      1.43 GiB |    9.20 GiB |   10.62 GiB |    1.03 Mbit/s
      05/31/11      1.77 GiB |    9.52 GiB |   11.29 GiB |    1.10 Mbit/s
      06/01/11      1.51 GiB |    9.43 GiB |   10.94 GiB |    1.06 Mbit/s
      06/02/11    906.48 MiB |    5.90 GiB |    6.79 GiB |  658.85 kbit/s
      06/03/11      2.36 GiB |    9.40 GiB |   11.77 GiB |    1.14 Mbit/s
      06/04/11   16777216.00 TiB |    5.15 GiB | 16777216.00 TiB | 1668.00 Tbit/s
      06/05/11      3.04 GiB |    5.00 GiB |    8.04 GiB |  781.07 kbit/s
      06/06/11      3.66 GiB |    7.63 GiB |   11.28 GiB |    1.10 Mbit/s
      06/07/11    562.88 MiB |    2.65 GiB |    3.20 GiB |  596.86 kbit/s

Comment 1 Adrian Reber 2011-06-07 10:51:03 UTC
Can you try to read out /proc/net/dev just before and after the snapshot, so that I can see what values vnstat reads from there?

How are you running vnstat? Via cron job or as a daemon?

Does it happen while deleting the snapshot, or while taking it?

Comment 2 Harald Reindl 2011-06-07 11:01:33 UTC
> Can you try to read out /proc/net/dev just before and after the snapshot, so
> that I can see what values vnstat reads from there?

Not easily possible, because you do not know the exact point in time.
In other words: hard to reproduce.

> How are you running vnstat? Via cron job or as a daemon?

cat /etc/cron.d/vnstat 
MAILTO=root
# to enable interface monitoring via vnstat remove comment on next line
*/2 * * * *  vnstat /usr/sbin/vnstat.cron

> Does it happen while deleting the snapshot, or while taking it?

I think while removing the snapshot, because that is also a short period during which the guest network is unresponsive for some seconds; while the snapshot is being taken I notice no interruption.

Comment 3 Adrian Reber 2011-06-07 13:07:06 UTC
I will try to provide a version with some debug output.

Comment 4 Harald Reindl 2011-06-07 13:13:11 UTC
It would be interesting to have some checks whether the values are even possible (the Tbit/s figures are way too high for a 10 GbE vmxnet3). Only in that case, dump some information about how the values were obtained - and maybe add a sanity check that says: "wherever such a measurement comes from, it cannot be true; ignore it."
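A plausibility check of the kind asked for here could be sketched as follows (hypothetical code, not vnstat's actual implementation; the function name and the 1000 Mbit/s default are illustrative):

```python
def plausible(delta_bytes: int, interval_s: int, max_mbit: int = 1000) -> bool:
    """Reject a counter delta that implies a rate above the link's capacity."""
    # Bytes the link could carry at max_mbit over the sampling interval.
    max_bytes = max_mbit * 1_000_000 // 8 * interval_s
    return 0 <= delta_bytes <= max_bytes

print(plausible(10 * 2**20, 120))  # ~10 MiB in 2 minutes: plausible -> True
print(plausible(2**64 - 1, 120))   # a wrapped counter: rejected   -> False
```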

Comment 5 Adrian Reber 2011-06-07 13:24:15 UTC
I have a version for you to test. Each time it reads /proc/net/dev it prints out the line it read. Please redirect the output from your crontab to some file so that I can have a look at it.

You could also try to run vnstat as a daemon to see if that works better.

http://koji.fedoraproject.org/koji/taskinfo?taskID=3115924

Comment 6 Harald Reindl 2011-06-24 12:02:49 UTC
Additional info:

This also appears for tap0 devices (OpenVPN) on the other side of the city;
the OpenVPN server is hosted on the VMware host we are speaking about.
Below is "vnstat" output from my machine at home, connected via OpenVPN.

[root@srv-rhsoft:~]$ vnstat -d -i tap0

 tap0  /  daily

         day         rx      |     tx      |    total    |   avg. rate
     ------------------------+-------------+-------------+---------------
      06/12/11     18.76 MiB |   16.54 MiB |   35.30 MiB |    3.35 kbit/s
      06/13/11    137.26 MiB |   64.99 MiB |  202.25 MiB |   19.18 kbit/s
      06/14/11    124.79 MiB |   26.01 MiB |  150.80 MiB |   14.30 kbit/s
      06/15/11     44.18 MiB |   17.62 MiB |   61.80 MiB |    5.86 kbit/s
      06/16/11     69.97 MiB |  121.88 MiB |  191.86 MiB |   18.19 kbit/s
      06/17/11     27.45 MiB |   18.24 MiB |   45.69 MiB |    4.33 kbit/s
      06/18/11      4.04 GiB | 16777216.00 TiB | 16777216.00 TiB | 1668.00 Tbit/s
      06/19/11     85.19 MiB |   55.96 MiB |  141.15 MiB |   13.38 kbit/s
      06/20/11    204.30 MiB |  175.66 MiB |  379.96 MiB |   36.03 kbit/s
      06/21/11     76.83 MiB |   51.89 MiB |  128.72 MiB |   12.20 kbit/s
      06/22/11     79.85 MiB |   60.94 MiB |  140.79 MiB |   13.35 kbit/s
      06/23/11     92.42 MiB |   49.43 MiB |  141.85 MiB |   13.45 kbit/s
      06/24/11    112.09 MiB |   50.71 MiB |  162.80 MiB |   26.49 kbit/s
     ------------------------+-------------+-------------+---------------
     estimated       192 MiB |      85 MiB |     277 MiB |

Comment 7 Adrian Reber 2012-04-02 18:22:47 UTC
Did you ever have a chance to try the test build I provided?

A possible solution to your problem could also be setting MaxBandwidth in the configuration file.
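For reference, a sketch of the relevant configuration excerpt (MaxBandwidth is a real vnstat.conf option, given in Mbit/s; the value 1000 is an example for a gigabit interface):

```
# /etc/vnstat.conf (excerpt)
# Maximum accepted bandwidth in Mbit/s; a counter delta implying a
# higher rate is treated as invalid instead of being recorded.
MaxBandwidth 1000
```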

Comment 8 Harald Reindl 2012-04-02 18:37:07 UTC
Sorry for the missing feedback :-(

"MaxBandwidth" 1000 plus running vnstat as a service instead of via cron
is indeed the solution. Maybe one of the two would be enough on its own,
but this works.

Comment 9 Adrian Reber 2012-04-02 18:54:20 UTC
Okay. Closing it as NOTABUG. Thanks for the report.

