Prevent Circular buffer overrun

Prevent Circular buffer overrun

Mitja Pirih
Hi,

I am starting to see Circular buffer overrun errors, and they are more frequent than before. The only change I made was to add more ffmpeg instances (from 24 to 30/35). All instances run on an NVIDIA M2000 card. From the man page and other Google sources I understand this error usually means slow network performance, but I suspect that is not the case here, as I am using mumudvb locally to grab video from 3 dual-channel tuners and deliver the multicast data to the card over the lo interface. The GPU load is approx. 25% and the card RAM is approx. 55% used. I think I am hitting a bottleneck. Is anyone experienced enough to point me in the right direction?
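
One mitigation I found in the ffmpeg protocols documentation, but have not verified yet, is enlarging ffmpeg's own UDP circular buffer per input, since the overrun message comes from ffmpeg's udp protocol handler rather than the kernel. The address, port and sizes below are only placeholders for illustration (fifo_size is counted in 188-byte packets, default 7*4096, so this is roughly 4x the default):

ffmpeg -i "udp://239.1.1.1:1234?fifo_size=114688&overrun_nonfatal=1&buffer_size=8388608" ...

overrun_nonfatal=1 makes ffmpeg survive an overrun instead of failing the input.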


Thanks

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 384.111                Driver Version: 384.111                   |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Quadro M2000        Off  | 00000000:01:00.0 Off |                  N/A |
| 74%   75C    P0    39W /  75W |   2424MiB /  4038MiB |     25%      Default |
+-------------------------------+----------------------+----------------------+
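
(The 25% above is the SM utilization; the NVENC engine load is reported separately, e.g. with nvidia-smi dmon -s u, which I have not captured yet while the overruns occur.)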

top - 13:07:55 up 27 min,  1 user,  load average: 4.71, 4.43, 3.72
Tasks: 252 total,   1 running, 251 sleeping,   0 stopped,   0 zombie
%Cpu0  : 21.2 us,  5.0 sy,  0.0 ni, 73.5 id,  0.0 wa,  0.0 hi,  0.3 si,  0.0 st
%Cpu1  : 24.0 us,  2.7 sy,  0.0 ni, 72.9 id,  0.0 wa,  0.0 hi,  0.3 si,  0.0 st
%Cpu2  : 21.6 us,  4.3 sy,  0.0 ni, 73.4 id,  0.0 wa,  0.0 hi,  0.7 si,  0.0 st
%Cpu3  : 18.7 us,  4.1 sy,  0.0 ni, 75.5 id,  0.3 wa,  0.0 hi,  1.4 si,  0.0 st
%Cpu4  : 18.9 us,  3.0 sy,  0.0 ni, 77.4 id,  0.0 wa,  0.0 hi,  0.7 si,  0.0 st
%Cpu5  : 12.5 us, 11.5 sy,  0.0 ni, 75.0 id,  0.0 wa,  0.0 hi,  1.0 si,  0.0 st
%Cpu6  : 18.8 us,  3.0 sy,  0.0 ni, 76.5 id,  0.0 wa,  0.0 hi,  1.7 si,  0.0 st
%Cpu7  : 18.7 us,  3.7 sy,  0.0 ni, 76.6 id,  0.0 wa,  0.0 hi,  1.0 si,  0.0 st
KiB Mem :  7830636 total,  2448092 free,  4157680 used,  1224864 buff/cache
KiB Swap:  8253436 total,  8253436 free,        0 used.  2466980 avail Mem


Average:        IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
Average:           lo  14454.69  14454.69  20429.68  20429.68      0.00      0.00      0.00      0.00
Average:         eth0   1829.76     26.03   2397.29      9.03      0.00      0.00   1805.93      1.96


ffmpeg version N-87875-gf685bbc Copyright (c) 2000-2017 the FFmpeg developers
  built with gcc 5.4.0 (Ubuntu 5.4.0-6ubuntu1~16.04.5) 20160609
  configuration: --prefix=/home/mitja/ffmpeg_build --pkg-config-flags=--static --extra-cflags=-I/home/mitja/ffmpeg_build/include --extra-ldflags=-L/home/mitja/ffmpeg_build/lib --extra-libs=-lpthread --bindir=/home/mitja/bin --enable-cuda --enable-cuvid --enable-libnpp --extra-cflags=-I/usr/local/cuda/include --extra-ldflags=-L/usr/local/cuda/lib64 --enable-gpl --enable-libass --enable-libfdk-aac --enable-libfreetype --enable-libmp3lame --enable-libopus --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-libspeex --enable-nonfree --enable-nvenc
  libavutil      55. 79.100 / 55. 79.100
  libavcodec     57.108.100 / 57.108.100
  libavformat    57. 84.100 / 57. 84.100
  libavdevice    57. 11.100 / 57. 11.100
  libavfilter     6.108.100 /  6.108.100
  libswscale      4.  9.100 /  4.  9.100
  libswresample   2. 10.100 /  2. 10.100
  libpostproc    54.  8.100 / 54.  8.100
Hyper fast Audio and Video encoder


Re: Prevent Circular buffer overrun

Jernej Stopinšek
Hi,

try to play around with kernel parameters and increase values for:

net.ipv4.udp_rmem_min
net.ipv4.udp_mem
net.core.rmem_default
net.core.rmem_max
net.core.netdev_max_backlog
net.core.netdev_budget
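
For example, something along these lines (the values are purely illustrative, not tuned recommendations; adjust to your RAM and traffic):

sysctl -w net.core.rmem_default=26214400
sysctl -w net.core.rmem_max=67108864
sysctl -w net.ipv4.udp_rmem_min=131072
sysctl -w net.ipv4.udp_mem="262144 327680 393216"   (three page-count thresholds)
sysctl -w net.core.netdev_max_backlog=5000
sysctl -w net.core.netdev_budget=600

To make them persistent, put the same keys into a file under /etc/sysctl.d/ and run sysctl --system.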

also check interface Ring parameters:
ethtool -g [interface]

and increase RX and TX to the maximum. In my case:

ethtool -G eth0 rx 4096 tx 4096


Check out this document:
https://blog.packagecloud.io/eng/2016/06/22/monitoring-tuning-linux-networking-stack-receiving-data/ 

and enable RPS and RFS.
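
A rough sketch of enabling them on an 8-core box with a single RX queue (the CPU mask and flow counts here are examples; the article above explains how to size them properly):

echo 32768 > /proc/sys/net/core/rps_sock_flow_entries
echo ff > /sys/class/net/eth0/queues/rx-0/rps_cpus
echo 32768 > /sys/class/net/eth0/queues/rx-0/rps_flow_cnt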

This solved my circular buffer overrun problem when I was adding more instances.



Re: Prevent Circular buffer overrun

Mitja Pirih
On 20. 03. 2018 14:57, Jernej Stopinšek wrote:

> try to play around with kernel parameters and increase values for:
>
> net.ipv4.udp_rmem_min
> net.ipv4.udp_mem
> net.core.rmem_default
> net.core.rmem_max
> net.core.netdev_max_backlog
> net.core.netdev_budget
>
> also check interface Ring parameters:
> ethtool -g [interface]
>
> and increase RX and TX to the maximum. In my case:
>
> ethtool -G eth0 rx 4096 tx 4096
>
>
> Check out this document:
> https://blog.packagecloud.io/eng/2016/06/22/monitoring-tuning-linux-networking-stack-receiving-data/ 
>
> and enable RPS and RFS.
>
> This solved my circular buffer overrun problem when I was adding more instances.

I tested your suggestions, but they do not help. I tried values from 10%
above the defaults up to 10x the defaults (following a couple of tuning
guides). The only thing I cannot test is the ring parameters, because I
am getting all multicast traffic on the lo (loopback) interface. I am
still checking whether RPS and RFS are of any real use on lo, since it
is a virtual interface.



Re: Prevent Circular buffer overrun

Mitja Pirih
On 22. 03. 2018 14:44, Mitja Pirih wrote:

> On 20. 03. 2018 14:57, Jernej Stopinšek wrote:
>> try to play around with kernel parameters and increase values for:
>>
>> net.ipv4.udp_rmem_min
>> net.ipv4.udp_mem
>> net.core.rmem_default
>> net.core.rmem_max
>> net.core.netdev_max_backlog
>> net.core.netdev_budget
>>
>> also check interface Ring parameters:
>> ethtool -g [interface]
>>
>> and increase RX and TX to the maximum. In my case:
>>
>> ethtool -G eth0 rx 4096 tx 4096
>>
>>
>> Check out this document:
>> https://blog.packagecloud.io/eng/2016/06/22/monitoring-tuning-linux-networking-stack-receiving-data/ 
>>
>> and enable RPS and RFS.
>>
>> This solved my circular buffer overrun problem when I was adding more instances.
> I tested your suggestions, but they do not help. I tried values from 10%
> above the defaults up to 10x the defaults (following a couple of tuning
> guides). The only thing I cannot test is the ring parameters, because I
> am getting all multicast traffic on the lo (loopback) interface. I am
> still checking whether RPS and RFS are of any real use on lo, since it
> is a virtual interface.

Please correct my understanding of circular buffer overrun: it usually
happens when one experiences poor network performance, in which case the
solution is to tune the network-related buffers. Are there other cases
in which it happens?

I see the same error approx. 2-3 times a day. What distinguishes my
configuration is that the tuners (8) and the encoder (1) are on the same
device, so at this stage no physical network is involved yet. All
traffic from the tuners is dumped onto the same loopback interface. The
loopback interface is hit by an average of 400 Mbit/s of data (200 up +
200 down), while eth0 carries an average of 10 Mbit/s (9 Mbit/s up +
100 kbit/s down).

What would your steps be to correctly diagnose where the problem
(bottleneck?) lies in this case?
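
For what it's worth, the only checks I know of so far are the kernel's UDP drop counters, e.g.:

netstat -su
watch -n1 'grep Udp: /proc/net/snmp'

My assumption (please correct me) is that if InErrors/RcvbufErrors keep growing, the kernel socket buffers are overflowing, while overruns without kernel-side drops would point at the consumer side (ffmpeg/NVENC) instead.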

Thanks.

Br,
Mitja