Encoding and streaming raw pixel stream via WiFi


Encoding and streaming raw pixel stream via WiFi

Ralf Ramsauer
Hi,

I'd like to use ffmpeg/ffserver to encode and stream a raw pixel stream.

My intention is to capture a v4l2 cam device, do some magic with opencv
and stream with ffserver and RTSP via WiFi, as the device is a flying
microcopter platform with an unstable WiFi connection.

This is the reason why I'd like to use RTSP via UDP. I decided to pass
the raw pixel stream to ffmpeg via an unnamed pipe. Tricky, but this
keeps all the encoding/streaming machinery out of my C++ code. ffmpeg
encodes the stream and sends it to an ffserver instance running on the
same platform. ffserver binds to the WiFi interface, and other computers
on the same network are allowed to watch the RTSP stream.

I'm open to better solutions. :)

However, the pixel stream has no fixed framerate; it produces as many
frames as possible. IOW, a new frame is sent to stdout as soon as it is
available. Effectively, this results in a framerate of about 28fps.
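For scale: one 8-bit gray 640x480 frame is exactly width x height bytes,
so at ~28fps the pipe alone carries roughly 8.6 MB/s of raw pixels
before any encoding:

```shell
# Size of one 8-bit grayscale 640x480 frame, in bytes:
echo $((640 * 480))                    # 307200
# Approximate raw data rate at 28 fps, in MB/s (integer-truncated):
echo $((640 * 480 * 28 / 1000000))     # 8
```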

This is how I invoke ffmpeg:

./my_opencv_app | ffmpeg -f rawvideo -r 30 -pixel_format gray
-video_size 640x480 -tune zerolatency -i - http://localhost:8081/foo.ffm

Please find the corresponding ffserver.conf below.

Everything *somehow* works, but not as intended. The video delay is
constantly increasing, and it produces high network load (~1.4MB/s) for
rather simple pictures (grayscale).

- My opencv app currently produces as many frames as possible. Should it
  produce frames at a (more or less) constant frame rate?

- Apparently, this approach makes ffmpeg use the mpeg4 codec by
  default. This results in high network load, though my frames are only
  grayscale.
  So I tried to switch to h26[45]. This broke everything. I get a bunch
  of errors and no video stream when I try to watch it with mplayer.

  I tried to replace my opencv app with a direct webcam stream through
  ffmpeg. Same issues.

- What are 'recommended' codec settings for my use case? Which codec
  would probably be the best for me? (Sorry, I'm not an A/V guy ;) )

- I have to specify the framerate as an ffmpeg parameter "-r 30", and in
  the ffserver.conf. Why do I have to specify it twice? Why do I have to
  specify a framerate at all?

- I'd like to keep latency as low as possible. Are there some special
  tweaks for achieving low latency with ffmpeg?

- Is h26[45] suitable for streaming with an unstable connection? Is it
  robust against random failures?

If it helps, I can push my stuff to some repository.

Anything helps!

Thanks
  Ralf

ffserver.conf:

HttpPort 8081
RtspPort 5554
HttpBindAddress 0.0.0.0
RTSPBindAddress 0.0.0.0
MaxClients 5
MaxBandwidth 100000
CustomLog -

NoDefaults

<Feed foo.ffm>
        File /tmp/foo.ffm
        FileMaxSize 1M
        ACL allow 127.0.0.1
</Feed>

<Stream foo.sdp>
        Feed foo.ffm
        Format rtp
        Noaudio

        VideoSize 640x480
        VideoFrameRate 30

        ACL allow 0.0.0.0
</Stream>
_______________________________________________
ffmpeg-user mailing list
[hidden email]
http://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
[hidden email] with subject "unsubscribe".

Re: Encoding and streaming raw pixel stream via WiFi

Moritz Barsnick
On Wed, May 17, 2017 at 19:29:52 +0200, Ralf Ramsauer wrote:

So many questions in just one e-mail. *sigh* ;-)

> My intention is to capture a v4l2 cam device, do some magic with opencv
> and stream with ffserver and RTSP via WiFi, as the device is a flying
> microcopter platform with an unstable WiFi connection.

May I ask what you do with opencv? Is it something an ffmpeg filter
could perhaps do?

> However, the pixel stream has no fixed framerate; it produces as many
> frames as possible. IOW, a new frame is sent to stdout as soon as it is
> available. Effectively, this results in a framerate of about 28fps.

I'm not sure ffmpeg handles variable framerates as a raw input (though
my opinion is it should).

> This is how I invoke ffmpeg:
>
> ./my_opencv_app | ffmpeg -f rawvideo -r 30 -pixel_format gray
> -video_size 640x480 -tune zerolatency -i - http://localhost:8081/foo.ffm
>
> Please find the corresponding ffserver.conf below.
>
> Everything *somehow* works, but not as intended. The video delay is
> constantly increasing, and it produces high network load (~1.4MB/s) for
> rather simple pictures (grayscaled).
>
> - My opencv app currently produces as many frames as possible. Should it
>   produce frames at a (more or less) constant frame rate?

I'm not sure, I'll let someone else answer. Optimally, ffmpeg would add
the wallclock as timestamp when receiving, and make a VFR stream of it.
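Something along these lines might be worth trying (an untested sketch;
-use_wallclock_as_timestamps is a generic libavformat input option, and
-vsync vfr keeps the stream VFR instead of forcing a constant rate):

```shell
# Untested sketch: stamp incoming raw frames with the wall clock on input
# and keep the resulting variable frame rate on output.
./my_opencv_app | ffmpeg -f rawvideo -pixel_format gray -video_size 640x480 \
    -use_wallclock_as_timestamps 1 -i - \
    -vsync vfr http://localhost:8081/foo.ffm
```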

> - Apparently, this approach makes ffmpeg use the mpeg4 codec by
>   default. This results in high network load, though my frames are only
>   grayscale.

This depends on how ffmpeg was configured and built, and on ffmpeg's
defaults for the output format. If your ffmpeg supports other codecs,
you are free to say "-c:v othercodec".

(BTW, I believe ffmpeg's mpeg4 encoder defaults to 200 kbits/s, that is
not really high network load!)
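For comparison, the 1.4 MB/s reported above works out to:

```shell
# Observed network load in kbit/s, next to the ~200 kbit/s mpeg4 default:
echo $((1400000 * 8 / 1000))   # 11200
```

If that figure is accurate, the bulk of the traffic is unlikely to be
the mpeg4 payload itself; the fixed-size packets of the FFM feed format
may account for part of it (speculation on my part).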

>   So I tried to switch to h26[45]. This broke everything. I get a bunch
>   of errors and no video stream when I try to watch it with mplayer.

Now I have a question: If you have an issue you would like help with,
why don't you show us these errors? Where do we get crystal balls from?
In other words: the command line and the complete, uncut console output
are missing.

>   I tried to replace my opencv app by a direct stream of the webcam with
>   ffmpeg. Same issues.

ffmpeg -f v4l2 -i /dev/video0 ??

> - What are 'recommended' codec settings for my use case? Which codec
>   would probably be the best for me? (Sorry, I'm not an A/V guy ;) )

What is your use case? I.e., what do the clients require? How much bandwidth
do you have available? How much latency can you tolerate? How much CPU
power do you have to spare? A/V is not trivial, indeed.

> - I have to specify the framerate as a ffmpeg parameter "-r 30", and in
>   the ffserver.conf. Why do I have to specify it twice? Why do I have to
>   specify a framerate at all?

I don't know much about ffserver. I pass.

> - I'd like to keep latency as low as possible. Are there some special
>   tweaks for achieving low latency with ffmpeg?

Certainly. Let me google that for you:
https://trac.ffmpeg.org/wiki/StreamingGuide#Latency
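Distilled from that page, a sketch for the pipeline at hand (untested
against ffserver; assumes libx264 is compiled in and that the ffserver
Stream selects it too). Note that -tune zerolatency is an encoder option
and belongs after -i; in the original invocation it was placed before
-i, where it does not reach the encoder:

```shell
# Untested sketch of common low-latency x264 settings from the guide:
./my_opencv_app | ffmpeg -f rawvideo -r 30 -pixel_format gray \
    -video_size 640x480 -i - \
    -c:v libx264 -preset ultrafast -tune zerolatency -g 30 \
    http://localhost:8081/foo.ffm
```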

Sorry, I left some stuff unanswered for others to pick up. :)

Moritz

Re: Encoding and streaming raw pixel stream via WiFi

Ralf Ramsauer
Hi Moritz,

On 05/17/2017 10:11 PM, Moritz Barsnick wrote:
> On Wed, May 17, 2017 at 19:29:52 +0200, Ralf Ramsauer wrote:
>
> So many questions in just one e-mail. *sigh* ;-)
Apologies, and I already tried to compress it ;)
>
>> My intention is to capture a v4l2 cam device, do some magic with opencv
>> and stream with ffserver and RTSP via WiFi, as the device is a flying
>> microcopter platform with an unstable WiFi connection.
>
> May I ask what you do with opencv? Is it something an ffmpeg filter
> could perhaps do?
Pretty trivial things: GaussianBlur and Canny filters. Maybe this could
be done by ffmpeg, but I'd like to stick to opencv (for demonstration
purposes...).
>
>> However, the pixel stream has no fixed framerate; it produces as many
>> frames as possible. IOW, a new frame is sent to stdout as soon as it is
>> available. Effectively, this results in a framerate of about 28fps.
>
> I'm not sure ffmpeg handles variable framerates as a raw input (though
> my opinion is it should).
Well, in fact the framerate is not variable, I just don't know it
exactly. There's currently nothing like a periodic timer that produces
frames at a fixed interval. Frames are sent when available, and on
average this results in about 28fps.
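As an aside, this producer behaviour can be simulated without OpenCV by
emitting fixed-size dummy frames, which lets one test the
ffmpeg/ffserver leg of the pipeline in isolation (a sketch; the
fractional sleep assumes GNU sleep):

```shell
# Simulate the producer: write N all-black 640x480 gray frames to stdout,
# loosely paced at ~30 fps. Pipe into the ffmpeg command in place of
# ./my_opencv_app.
frame_bytes=$((640 * 480))   # 307200 bytes per 8-bit gray frame
n=90                         # ~3 seconds worth of frames
i=0
while [ "$i" -lt "$n" ]; do
    head -c "$frame_bytes" /dev/zero
    sleep 0.03               # crude ~30 fps pacing
    i=$((i + 1))
done
```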

>
>> This is how I invoke ffmpeg:
>>
>> ./my_opencv_app | ffmpeg -f rawvideo -r 30 -pixel_format gray
>> -video_size 640x480 -tune zerolatency -i - http://localhost:8081/foo.ffm
>>
>> Please find the corresponding ffserver.conf below.
>>
>> Everything *somehow* works, but not as intended. The video delay is
>> constantly increasing, and it produces high network load (~1.4MB/s) for
>> rather simple pictures (grayscaled).
>>
>> - My opencv app currently produces as many frames as possible. Should it
>>   produce frames at a (more or less) constant frame rate?
>
> I'm not sure, I'll let someone else answer. Optimally, ffmpeg would add
> the wallclock as timestamp when receiving, and make a VFR stream of it.
In this case, I would expect that ffmpeg should be able to deal with a
dynamic frame rate. Hmm...
>
>> - Apparently, this approach makes ffmpeg use the mpeg4 codec by
>>   default. This results in high network load, though my frames are only
>>   grayscale.
>
> This depends on how ffmpeg was configured and built, and on ffmpeg's
> defaults for the output format. If your ffmpeg supports other codecs,
> you are free to say "-c:v othercodec".
OK, so why do I have to define the codec both in my ffserver.conf and
on the ffmpeg command line?

And what happens if they are not the same?

>
> (BTW, I believe ffmpeg's mpeg4 encoder defaults to 200 kbits/s, that is
> not really high network load!)
>
>>   So I tried to switch to h26[45]. This broke everything. I get a bunch
>>   of errors and no video stream when I try to watch it with mplayer.
>
> Now I have a question: If you have an issue you would like help with,
> why don't you show us these errors? Where do we get crystal balls from?
> In other words: Command line and complete, uncut console output
> missing.
Sure. I didn't want to flood my initial mail with this information.
Maybe the following reconstruction explains this decision:

ffserver.conf:

HttpPort 8081
RtspPort 5554
NoDefaults

<Feed test.ffm>
        File /tmp/test.ffm
        FileMaxSize 1M
        ACL allow 127.0.0.1
</Feed>

<Stream test.sdp>
        Feed test.ffm
        Format rtp
        Noaudio

        VideoSize 640x480
        VideoCodec libx264
        PixelFormat yuv420p
        VideoFrameRate 30
</Stream>

ffserver:
ffserver -f ffserver.conf

producer:
ffmpeg -f v4l2 -r 30 -i /dev/video0 -pixel_format mjpeg -video_size
640x480 http://127.0.0.1:8081/test.ffm

The output of ffmpeg indicates that it uses libx264, as defined in
ffserver's config.

consumer:
mplayer rtsp://127.0.0.1:5554/test.sdp

And here comes the look into the magic crystal ball:
[h264 @ 0x7f1db6b30f60]non-existing PPS 0 referenced
[h264 @ 0x7f1db6b30f60]non-existing PPS 0 referenced
[h264 @ 0x7f1db6b30f60]decode_slice_header error
[h264 @ 0x7f1db6b30f60]no frame!
[h264 @ 0x7f1db6b30f60]non-existing PPS 0 referenced
[h264 @ 0x7f1db6b30f60]non-existing PPS 0 referenced
[h264 @ 0x7f1db6b30f60]decode_slice_header error
[h264 @ 0x7f1db6b30f60]no frame!
[h264 @ 0x7f1db6b30f60]non-existing PPS 0 referenced
[h264 @ 0x7f1db6b30f60]non-existing PPS 0 referenced
[h264 @ 0x7f1db6b30f60]decode_slice_header error
[h264 @ 0x7f1db6b30f60]no frame!
[...]

This message repeats a couple of times. And after a while:

[lavf] stream 0: video (h264), -vid 0
VIDEO:  [H264]  640x480  0bpp  30.000 fps    0.0 kbps ( 0.0 kbyte/s)
==========================================================================
Opening video decoder: [ffmpeg] FFmpeg's libavcodec codec family
libavcodec version 57.89.100 (external)
Selected video codec: [ffh264] vfm: ffmpeg (FFmpeg H.264)
==========================================================================
Clip info:
 title: No Title
Audio: no sound
Starting playback...
[h264 @ 0x7fa780d15f60]co located POCs unavailable
[h264 @ 0x7fa780d15f60]mmco: unref short failure
[h264 @ 0x7fa780d15f60]co located POCs unavailable
Movie-Aspect is undefined - no prescaling applied.
VO: [xv] 640x480 => 640x480 Planar YV12
V:27020.6   0/  0  5%  0%  0.0% 0 0
[rtsp @ 0x7fa781751ec0]max delay reached. need to consume packet
[rtsp @ 0x7fa781751ec0]RTP: missed 156 packets
[h264 @ 0x7fa780d15f60]cabac decode of qscale diff failed at 4 23
[h264 @ 0x7fa780d15f60]error while decoding MB 4 23, bytestream 1531
[h264 @ 0x7fa780d15f60]concealing 325 DC, 325 AC, 325 MV errors in P frame
V:27020.7   0/  0  6%  0%  0.0% 0 0
[h264 @ 0x7fa780d15f60]co located POCs unavailable
[h264 @ 0x7fa780d15f60]mmco: unref short failure
[h264 @ 0x7fa780d15f60]co located POCs unavailable
[h264 @ 0x7fa780d15f60]co located POCs unavailable
V:27022.7   0/  0  4%  0%  0.0% 0 0


After a while, mplayer opens a window with a distorted image. Time seems
to be somehow stretched as well. The image 'undistorts' after a moment,
but it's still wobbly.

Screenshot: https://ramses-pyramidenbau.de/~ralf/ffmpeg.png

The funny thing is that
ffmpeg -f v4l2 -i /dev/video0 -pixel_format mjpeg -video_size 640x480
-c:v libx264 test.mp4

produces an x264 video file that is playable with mplayer without any
errors or warnings.
>
>>   I tried to replace my opencv app by a direct stream of the webcam with
>>   ffmpeg. Same issues.
>
> ffmpeg -f v4l2 -i /dev/video0 ??
Yep, exactly, this is basically what I tried. See above.
>
>> - What are 'recommended' codec settings for my use case? Which codec
>>   would probably be the best for me? (Sorry, I'm not an A/V guy ;) )
>
> What is your case? I.e. what do the clients require? How much bandwidth
> do you have available? How much latency can you tolerate? How much CPU
> power do you have to spare? A/V is not trivial, indeed.
There's only a single client, likely an Android tablet running VLC.
But that shouldn't matter for the choice of codec. Bandwidth can vary
from about 50Mbit/s to almost nothing. Losing the connection for a few
seconds is no problem, but the stream should automatically recover and
be failsafe. Missed frames should simply be dropped. That's the reason
why I chose RTSP. The lower the latency, the better. On average, latency
should not exceed ~400ms. I think that's feasible.

On the destination platform I have two Cortex A15 CPUs at 2.27GHz to
spare. If x264 or x265 is too heavy, I'll fall back to mpeg4. However,
anything is better than transmitting raw video. I'll have to juggle
codecs, but first of all I want to get it running on my local machine
(with enough computational power) via local loopback, without WiFi.
That alone seems hard enough :)
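For scale, raw 8-bit gray 640x480 at 30fps would by itself exceed the
best-case 50Mbit link:

```shell
# Raw 8-bit gray 640x480 at 30 fps, in Mbit/s (integer-truncated):
echo $((640 * 480 * 30 * 8 / 1000000))   # 73
```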

>
>> - I have to specify the framerate as a ffmpeg parameter "-r 30", and in
>>   the ffserver.conf. Why do I have to specify it twice? Why do I have to
>>   specify a framerate at all?
>
> I don't know much about ffserver. I pass.
>
>> - I'd like to keep latency as low as possible. Are there some special
>>   tweaks for achieving low latency with ffmpeg?
>
> Certainly. Let me google that for you:
> https://trac.ffmpeg.org/wiki/StreamingGuide#Latency
Thanks a lot!

  Ralf

>
> Sorry, I left some stuff unanswered for others to pick up. :)
>
> Moritz