Discussion:
[Libav-user] "Circular buffer overrun" error when reading UDP stream
Adi Shavit
2013-07-28 14:57:07 UTC
Permalink
Hi,

I'm decoding a UDP stream and getting "Circular buffer overrun. To
avoid, increase fifo_size URL option. To survive in such case, use
overrun_nonfatal option".
How exactly do I:

1. "increase fifo_size URL option. "
2. "use overrun_nonfatal option".

I couldn't find this in the docs.
Thanks,
Adi
Alex Cohn
2013-07-28 17:39:23 UTC
Permalink
Post by Adi Shavit
Hi,
I'm decoding a UDP stream and getting "Circular buffer overrun. To
avoid, increase fifo_size URL option. To survive in such case, use
overrun_nonfatal option".
1. "increase fifo_size URL option. "
2. "use overrun_nonfatal option".
I couldn't find this in the docs.
Thanks,
Adi
See http://ffmpeg.gusari.org/viewtopic.php?f=12&t=624. For ffmpeg
command line, you can specify the UDP buffer size in the URL, e.g.
udp://localhost:5002?fifo_size=1000000.
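In C code the same thing works: fifo_size and overrun_nonfatal are plain query options appended to the udp:// URL that you hand to the demuxer. A minimal sketch (host, port, and buffer size are placeholders; note that fifo_size is measured in 188-byte transport-packet units, not bytes):

```c
#include <stdio.h>

/* Build a udp:// input URL carrying the circular-buffer options.
   fifo_size counts 188-byte packets; overrun_nonfatal=1 makes an
   overrun a warning instead of a fatal error. */
static void build_udp_url(char *buf, size_t len,
                          const char *host, int port, int fifo_size)
{
    snprintf(buf, len, "udp://%s:%d?fifo_size=%d&overrun_nonfatal=1",
             host, port, fifo_size);
}

/* The resulting URL is then opened as usual, e.g.:
   AVFormatContext *fmt = NULL;
   avformat_open_input(&fmt, url, NULL, NULL);  */
```

The options travel inside the URL string, so nothing else in the open sequence changes.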

But if you get this message because you followed my advice about max
15 FPS, you are doing something wrong on your side: you should pull
the buffers faster. Did you add a sleep in the decoder loop? If your
client is at 100% CPU utilization, faster settings for the h264 decoder
may help.

The only "legitimate" case where increasing the fifo size may really
be the correct solution is a slow, unreliable network. In such a case,
UDP packets may arrive significantly out of order, and nothing but
fifo size will compensate for that.

Note that overrun_nonfatal will break the video output. If the network
is very bad, and you have IDR frames sent often enough, or if you use
an intra-refresh stream.

BR,
Alex Cohn
Adi Shavit
2013-07-30 06:37:50 UTC
Permalink
Hi Alex,
Post by Alex Cohn
See http://ffmpeg.gusari.org/viewtopic.php?f=12&t=624. For ffmpeg
command line, you can specify the UDP buffer size in the URL, e.g.
udp://localhost:5002?fifo_size=1000000.
Thanks. I did actually see this post, though I get errors from the
overrun_nonfatal argument.
Post by Alex Cohn
But if you get this message because you followed my advice about max
15 FPS, you are doing something wrong on your side: you should pull
the buffers faster.
Yes, I'm dropping frames where I can to decrease fps (though not yet
in the way you suggested).
But I'm decoding multiple streams simultaneously so my CPU is pretty stressed.
I don't use any sleep because I actually prefer processing frames early.
I also don't mind dropped frames in case of processing lag.
Post by Alex Cohn
Did you add a sleep in the decoder loop? If your
client is at 100% CPU utilization, faster settings for h264 decoder
may help.
What settings are those?
Post by Alex Cohn
The only "legitimate" case where increasing the fifo size may really
be the correct solution is a slow, unreliable network. In such a case,
UDP packets may arrive significantly out of order, and nothing but
fifo size will compensate for that.
ok...
Post by Alex Cohn
Note that overrun_nonfatal will break the video output. If the network
is very bad, and you have IDR frames sent often enough, or if you use
an intra-refresh stream.
Sorry, I don't understand what this means.


Basically, my requirements are as follows:

1. Decode multiple video streams simultaneously from a single MTP stream.
2. I do not display the frames, only process the pixels.
3. I'm not very sensitive about dropped frames.
4. I prefer to process the frames as early as possible, even before
they would have been displayed in "playing-time".

If the UDP network stream is bad, then just wait until you get some
proper packets/frames.

Thanks,
Adi
Alex Cohn
2013-07-30 10:56:08 UTC
Permalink
Post by Adi Shavit
Post by Alex Cohn
Note that overrun_nonfatal will break the video output. If the network
is very bad, and you have IDR frames sent often enough, or if you use
an intra-refresh stream.
Sorry, I don't understand what this means.
1. Decode multiple video streams simultaneously from a single MTP stream.
2. I do not display the frames, only process the pixels.
3. I'm not very sensitive about dropped frames.
4. I prefer to process the frames as early as possible, even before
they would have been displayed in "playing-time".
If the UDP network stream is bad, then just wait until you get some
proper packets/frames.
Thanks,
Adi
My bad, a few words got dropped from the last mail. Never mind, let's
address your scenario.

If your processing consistently falls behind the input stream(s), then
overrun_nonfatal or increasing the cyclic buffer will not really help.

You can skip h264 post-processing to save decoding time. Other than
that, if you detect that your process takes too long, you should apply
it only when the decoder catches up with the stream. In the worst
case, if an IDR frame arrives, you can discard all frames up to IDR,
and essentially start again.

If you cannot configure the encoder, the latter option depends totally
on the nature of your incoming stream. If you can configure the
source, you should be very careful about forcing IDR frames too often:
first of all, these frames take much more bandwidth, and second, they
take longer to decode.

Sincerely,
Alex Cohn
Adi Shavit
2013-07-30 11:16:44 UTC
Permalink
<snip>
Post by Alex Cohn
You can skip h264 post-processing to save decoding time.
How do I do this programmatically?
I didn't actually configure any codec explicitly, it was automatically
selected. I don't even know whether the stream will be H.264.
Post by Alex Cohn
Other than that, if you detect that your process takes too long, you should apply
it only when the decoder catches up with the stream. In the worst
case, if an IDR frame arrives, you can discard all frames up to IDR,
and essentially start again.
How do I detect an IDR (or non-IDR frame)?
Can I freely discard non-IDR frames without decoding them?
Post by Alex Cohn
If you cannot configure the encoder, the latter option depends totally
on the nature of your incoming stream. If you can configure the
source, you should be very careful about forcing IDR frames too often:
first of all, these frames take much more bandwidth, and second, they
take longer to decode.
Thanks!
Adi
Alex Cohn
2013-07-30 14:12:25 UTC
Permalink
Post by Adi Shavit
Post by Alex Cohn
You can skip h264 post-processing to save decoding time.
How do I do this programmatically?
I didn't actually configure any codec explicitly, it was automatically
selected. I don't even know whether the stream will be H.264.
See, for example,
http://ffmpeg.org/pipermail/ffmpeg-devel/2011-October/115966.html.

You can also try to use a multithreaded decoder. But this is not going
to help if you have more video streams in parallel than CPU cores.

BTW, what is your platform?

Good luck,
Alex Cohn
Adi Shavit
2013-07-30 14:26:03 UTC
Permalink
Post by Alex Cohn
Post by Adi Shavit
How do I do this programmatically?
I didn't actually configure any codec explicitly, it was automatically
selected. I don't even know whether the stream will be H.264.
See, for example,
http://ffmpeg.org/pipermail/ffmpeg-devel/2011-October/115966.html.
Thanks.
Post by Alex Cohn
You can also try to use multithreaded decoder. But this is not going
to help if you have more video streams in parallel than CPU cores.
I can give it a shot.
How do I set it up?
Post by Alex Cohn
BTW, what is your platform?
I'm currently testing on Windows, but ultimately Linux.

Adi
Kalileo
2013-07-30 14:53:22 UTC
Permalink
Post by Adi Shavit
Post by Alex Cohn
You can also try to use multithreaded decoder. But this is not going
to help if you have more video streams in parallel than CPU cores.
I can give it a shot.
How do I set it up?
I think Alex means something like this, here as an example with 2 threads:

AVDictionary *opts = NULL;
av_dict_set(&opts, "threads", "2", 0);
if (avcodec_open2(pVideoCodecCtx, pVideoCodec, &opts) < 0) {
    /* handle the open failure */
}
av_dict_free(&opts);

Vahid Kowsari
2013-07-30 15:18:30 UTC
Permalink
Doesn't the decoder do this by default? If I just use the API to decode a
video it creates multiple threads automatically.
Post by Kalileo
Post by Adi Shavit
Post by Alex Cohn
You can also try to use multithreaded decoder. But this is not going
to help if you have more video streams in parallel than CPU cores.
I can give it a shot.
How do I set it up?
AVDictionary *opts = NULL;
av_dict_set(&opts, "threads", "2", 0);
if (avcodec_open2(pVideoCodecCtx, pVideoCodec, &opts) < 0) {
    /* handle the open failure */
}
av_dict_free(&opts);
…
_______________________________________________
Libav-user mailing list
http://ffmpeg.org/mailman/listinfo/libav-user
Kalileo
2013-07-30 15:32:36 UTC
Permalink
Post by Kalileo
AVDictionary *opts = NULL;
av_dict_set(&opts, "threads", "2", 0);
if (avcodec_open2(pVideoCodecCtx, pVideoCodec, &opts) < 0) {
    /* handle the open failure */
}
av_dict_free(&opts);

Doesn't the decoder do this by default? If I just use the API to decode a video it creates multiple threads automatically.
Yes, it does (at least libx264 does), but you can also specify exactly how many you want it to use. This is useful if you have many decoders running and want to limit how many threads each decoder takes.
Alex Cohn
2013-07-30 14:15:28 UTC
Permalink
Post by Adi Shavit
How do I detect an IDR (or non-IDR frame)?
Can I freely discard non-IDR frames without decoding them?
Look for AV_PKT_FLAG_KEY in packet->flags

Alex
Adi Shavit
2013-07-30 14:31:26 UTC
Permalink
Great!
If I want to drop a particular frame, and (0 == (AV_PKT_FLAG_KEY &
packet->flags)), then I can just skip the whole packet and not even
decode it. Cool.
Post by Alex Cohn
Post by Adi Shavit
How do I detect an IDR (or non-IDR frame)?
Can I freely discard non-IDR frames without decoding them?
Look for AV_PKT_FLAG_KEY in packet->flags
Alex
Alex Cohn
2013-07-30 14:39:49 UTC
Permalink
Post by Adi Shavit
Great!
If I want to drop a particular frame, and (0 == (AV_PKT_FLAG_KEY &
packet->flags)), then I can just skip the whole packet and not even
decode it. Cool.
Wrong. If you drop a frame, then you must drop all frames after it
until you have (0 != (AV_PKT_FLAG_KEY & packet->flags)).
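The drop-until-keyframe rule Alex describes can be sketched as a tiny helper. AV_PKT_FLAG_KEY is the real libavcodec flag (value 0x0001); it is reproduced as a local define only so the sketch stands alone:

```c
/* Local copy of libavcodec's AV_PKT_FLAG_KEY (0x0001), so this
   sketch compiles without the FFmpeg headers. */
#define PKT_FLAG_KEY 0x0001

/* Decide whether a packet may be sent to the decoder.
   The caller sets *waiting_for_key to 1 whenever it deliberately
   drops a frame; it stays set until the next keyframe arrives. */
static int should_decode(int pkt_flags, int *waiting_for_key)
{
    if (*waiting_for_key) {
        if (pkt_flags & PKT_FLAG_KEY)
            *waiting_for_key = 0;  /* resync point: decode from here */
        else
            return 0;              /* references are missing: keep dropping */
    }
    return 1;
}
```

In the read loop this would look something like `if (!should_decode(packet->flags, &waiting)) { av_free_packet(&packet); continue; }` (using the packet-freeing call of that era), with `waiting` set to 1 whenever a frame is intentionally skipped.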

Alex
Adi Shavit
2013-07-30 14:59:07 UTC
Permalink
Post by Alex Cohn
Wrong. If you drop a frame, then you must drop all frames after it
until you have (0 != (AV_PKT_FLAG_KEY & packet->flags)).
Aha, thanks for the clarification.
Not as useful as I had hoped...
Adi