Reducing image2pipe png decoder latency


Reducing image2pipe png decoder latency

Maxim Khitrov
Hi all,

I'm trying to encode two image streams into a single h264 video, with
the second stream scaled and overlaid over the first one. The video is
then streamed in real time as mpeg-ts, so I'm trying to reduce latency
as much as possible.

I have an image generator program that prints the current timecode to
stdout and burns it into each pair of images, so by taking a
screenshot of the terminal and ffplay windows, I can see exactly how
much latency (in frames) there is between the images that are sent to
ffmpeg and the live video stream.

When both images are JPGs, the latency is 3 frames at 30 fps. However,
I need the second image to be transparent, so the production version
will overlay PNG over JPG, and this combination results in 11 frames
of latency, which isn't ideal. At first, I thought this was the result
of RGB to YUV conversion or maybe some other filter graph computation,
but this delay seems independent of the image size, leading me to
believe that something else is going on. At the same time, if I only
encode one image stream (no filter graph), both JPG-only and PNG-only
versions result in ~3 frames of latency.

I'm running out of ideas for what else could be responsible for the
additional buffering and/or computational delays, so I thought I'd ask
here. The command and ffmpeg output are below. If you have any other
recommendations for improving real-time encode performance, please
share them.

Thanks,
Max

ffmpeg -f image2pipe -avioflags direct -fflags nobuffer -fpsprobesize
0 -framerate 30 -probesize 22023 -an -c:v mjpeg -pixel_format yuvj420p
-video_size 1280x720 -i \\.\pipe\ffmpeg-OTdHYsp8CvAJMsnK.0 -f
image2pipe -avioflags direct -fflags nobuffer -fpsprobesize 0
-framerate 30 -probesize 5811 -an -c:v png -pixel_format rgba
-video_size 640x360 -i \\.\pipe\ffmpeg-OTdHYsp8CvAJMsnK.1
-filter_complex
'sws_flags=fast_bilinear;[1:v]format=yuva420p[f1];[f1][0:v]scale2ref=w=iw:h=ih[s1][s0];[s0][s1]overlay=eval=init[v0]'
-filter_threads 1 -map '[v0]' -c:v libx264 -profile:v high -preset
superfast -tune zerolatency -x264-params
keyint=90:min-keyint=30:bframes=0 -intra-refresh 1 -r 30 -thread_type
slice -f mpegts -

[image2pipe @ 000001f9e6b9eec0] Stream #0: not enough frames to
estimate rate; consider increasing probesize
Input #0, image2pipe, from '\\.\pipe\ffmpeg-Mb7lgdkRbbyjzpol.0':
  Duration: N/A, bitrate: N/A
    Stream #0:0: Video: mjpeg, yuvj420p(bt470bg/unknown/unknown),
1280x720, 30 tbr, 30 tbn, 30 tbc
[image2pipe @ 000001f9e6bcc5c0] Stream #0: not enough frames to
estimate rate; consider increasing probesize
Input #1, image2pipe, from '\\.\pipe\ffmpeg-Mb7lgdkRbbyjzpol.1':
  Duration: N/A, bitrate: N/A
    Stream #1:0: Video: png, rgba(pc), 640x360, 30 tbr, 30 tbn, 30 tbc
Stream mapping:
  Stream #0:0 (mjpeg) -> scale2ref:ref
  Stream #1:0 (png) -> format
  overlay -> Stream #0:0 (libx264)
[libx264 @ 000001f9e6ba1dc0] using cpu capabilities: MMX2 SSE2Fast
SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 000001f9e6ba1dc0] profile High, level 3.1, 4:2:0, 8-bit
Output #0, mpegts, to 'pipe:':
  Metadata:
    encoder         : Lavf58.35.100
    Stream #0:0: Video: h264 (libx264), yuvj420p(pc), 1280x720,
q=-1--1, 30 fps, 90k tbn, 30 tbc (default)
    Metadata:
      encoder         : Lavc58.62.100 libx264
    Side data:
      cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
[mjpeg @ 000001f9e6bcaf00] error count: 64
[mjpeg @ 000001f9e6bcaf00] error y=2 x=1
[mjpeg @ 000001f9e6bcaf00] mjpeg_decode_dc: bad vlc: 0:0
(000001f9e6bcb5c8)bits/s speed=0.968x
[mjpeg @ 000001f9e6bcaf00] error dc
[mjpeg @ 000001f9e6bcaf00] error y=44 x=57
[mjpeg @ 000001f9e6bcaf00] mjpeg_decode_dc: bad vlc: 0:0
(000001f9e6bcb5c8)bits/s speed=0.975x
[mjpeg @ 000001f9e6bcaf00] error dc
[mjpeg @ 000001f9e6bcaf00] error y=44 x=68
frame=  310 fps= 30 q=23.0 Lsize=    1663kB time=00:00:10.33
bitrate=1318.2kbits/s speed=0.998x
video:1552kB audio:0kB subtitle:0kB other streams:0kB global
headers:0kB muxing overhead: 7.149707%
[libx264 @ 000001f9e6ba1dc0] frame I:1     Avg QP: 5.79  size: 10606
[libx264 @ 000001f9e6ba1dc0] frame P:309   Avg QP: 9.00  size:  5108
[libx264 @ 000001f9e6ba1dc0] mb I  I16..4: 92.3%  0.2%  7.5%
[libx264 @ 000001f9e6ba1dc0] mb P  I16..4: 82.6%  1.4%  1.4%  P16..4:
6.9%  0.0%  0.0%  0.0%  0.0%    skip: 7.8%
[libx264 @ 000001f9e6ba1dc0] 8x8 transform intra:1.6% inter:94.1%
[libx264 @ 000001f9e6ba1dc0] coded y,uvDC,uvAC intra: 1.5% 4.3% 3.0%
inter: 26.6% 35.7% 5.2%
[libx264 @ 000001f9e6ba1dc0] i16 v,h,dc,p: 78% 11% 10%  0%
[libx264 @ 000001f9e6ba1dc0] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu:  4%  5%
45% 40%  0%  0%  0%  0%  6%
[libx264 @ 000001f9e6ba1dc0] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 61% 19%
13%  1%  1%  1%  1%  1%  1%
[libx264 @ 000001f9e6ba1dc0] i8c dc,h,v,p: 72% 12% 16%  0%
[libx264 @ 000001f9e6ba1dc0] Weighted P-Frames: Y:1.0% UV:0.0%
[libx264 @ 000001f9e6ba1dc0] kb/s:1230.27

Re: Reducing image2pipe png decoder latency

Carl Eugen Hoyos-2
On Wed, Dec 11, 2019 at 2:24 PM Maxim Khitrov <[hidden email]> wrote:

> [command and full console output quoted from the previous message snipped]

There is something strange about your console output (many lines
seem to be missing). If this is a patched binary of FFmpeg, please
ask for support wherever you found the binary.

Carl Eugen

Re: Reducing image2pipe png decoder latency

Maxim Khitrov
On Wed, Dec 11, 2019 at 9:42 AM Carl Eugen Hoyos <[hidden email]> wrote:
> There is something strange about your console output (many lines
> seem to be missing), if this is a patched binary of FFmpeg, please
> ask for support wherever you found the binary.
>
> Carl Eugen

I'm using the ffmpeg-20191211-4110029-win64-static binary from
ffmpeg.zeranoe.com/builds/. Here's the verbose output:

[image2pipe @ 00000275256eefc0] Stream #0: not enough frames to
estimate rate; consider increasing probesize
Input #0, image2pipe, from '\\.\pipe\ffmpeg-fjTn1bOtYSpsze3Q.0':
  Duration: N/A, start: 1576092972.633333, bitrate: N/A
    Stream #0:0: Video: mjpeg, 1 reference frame,
yuvj420p(bt470bg/unknown/unknown, center), 1280x720, 30 tbr, 30 tbn,
30 tbc
[image2pipe @ 00000275256f4440] Stream #0: not enough frames to
estimate rate; consider increasing probesize
Input #1, image2pipe, from '\\.\pipe\ffmpeg-fjTn1bOtYSpsze3Q.1':
  Duration: N/A, start: 1576092972.633333, bitrate: N/A
    Stream #1:0: Video: png, 1 reference frame, rgba(pc), 640x360, 30
tbr, 30 tbn, 30 tbc
[Parsed_scale2ref_0 @ 0000027525717b80] w:iw h:ih flags:'bilinear' interl:0
Stream mapping:
  Stream #0:0 (mjpeg) -> scale2ref:ref
  Stream #1:0 (png) -> scale2ref:default
  overlay -> Stream #0:0 (libx264)
[Parsed_scale2ref_0 @ 000002752572fcc0] w:iw h:ih flags:'bilinear' interl:0
[graph 0 input from stream 1:0 @ 0000027525755840] w:640 h:360
pixfmt:rgba tb:1/30 fr:30/1 sar:0/1 sws_param:flags=2
[graph 0 input from stream 0:0 @ 0000027525755940] w:1280 h:720
pixfmt:yuvj420p tb:1/30 fr:30/1 sar:0/1 sws_param:flags=2
[Parsed_scale2ref_0 @ 000002752572fcc0] w:1280 h:720 fmt:yuvj420p
sar:0/1 -> w:1280 h:720 fmt:yuva420p sar:0/1 flags:0x2
[Parsed_overlay_1 @ 000002752572b980] x:0.000000 xi:0 y:0.000000 yi:0
[Parsed_overlay_1 @ 000002752572b980] main w:1280 h:720 fmt:yuvj420p
overlay w:1280 h:720 fmt:yuva420p
[Parsed_overlay_1 @ 000002752572b980] [framesync @ 00000275257565e8]
Selected 1/30 time base
[Parsed_overlay_1 @ 000002752572b980] [framesync @ 00000275257565e8]
Sync level 2
[libx264 @ 00000275256f71c0] using cpu capabilities: MMX2 SSE2Fast
SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 00000275256f71c0] profile High, level 3.1, 4:2:0, 8-bit
[mpegts @ 00000275256f4f40] service 1 using PCR in pid=256, pcr_period=100ms
[mpegts @ 00000275256f4f40] muxrate VBR, sdt every 500 ms, pat/pmt every 100 ms
Output #0, mpegts, to 'pipe:':
  Metadata:
    encoder         : Lavf58.35.101
    Stream #0:0: Video: h264 (libx264), 1 reference frame,
yuvj420p(pc), 1280x720, q=-1--1, 30 fps, 90k tbn, 30 tbc (default)
    Metadata:
      encoder         : Lavc58.64.101 libx264
    Side data:
      cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
[Parsed_overlay_1 @ 000002752572b980] [framesync @ 00000275257565e8]
Sync level 1 speed=0.973x
[Parsed_overlay_1 @ 000002752572b980] [framesync @ 00000275257565e8]
Sync level 0
No more output streams to write to, finishing.
frame=  241 fps= 30 q=22.0 Lsize=    1296kB time=00:00:08.00
bitrate=1326.9kbits/s speed=   1x
video:1209kB audio:0kB subtitle:0kB other streams:0kB global
headers:0kB muxing overhead: 7.180827%
Input file #0 (\\.\pipe\ffmpeg-fjTn1bOtYSpsze3Q.0):
  Input stream #0:0 (video): 240 packets read (5200330 bytes); 240
frames decoded;
  Total: 240 packets (5200330 bytes) demuxed
Input file #1 (\\.\pipe\ffmpeg-fjTn1bOtYSpsze3Q.1):
  Input stream #1:0 (video): 240 packets read (657386 bytes); 240
frames decoded;
  Total: 240 packets (657386 bytes) demuxed
Output file #0 (pipe:):
  Output stream #0:0 (video): 241 frames encoded; 241 packets muxed
(1238005 bytes);
  Total: 241 packets (1238005 bytes) muxed
[AVIOContext @ 00000275256f7f80] Statistics: 0 seeks, 241 writeouts
[libx264 @ 00000275256f71c0] frame I:1     Avg QP: 5.37  size:  8448
[libx264 @ 00000275256f71c0] frame P:240   Avg QP: 8.85  size:  5123
[libx264 @ 00000275256f71c0] mb I  I16..4: 93.3%  0.1%  6.6%
[libx264 @ 00000275256f71c0] mb P  I16..4: 83.3%  1.5%  1.3%  P16..4:
6.6%  0.0%  0.0%  0.0%  0.0%    skip: 7.3%
[libx264 @ 00000275256f71c0] 8x8 transform intra:1.7% inter:93.2%
[libx264 @ 00000275256f71c0] coded y,uvDC,uvAC intra: 1.5% 4.4% 3.1%
inter: 27.8% 34.2% 3.5%
[libx264 @ 00000275256f71c0] i16 v,h,dc,p: 78% 11% 10%  0%
[libx264 @ 00000275256f71c0] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu:  3%  5%
46% 40%  0%  0%  0%  0%  5%
[libx264 @ 00000275256f71c0] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 62% 19%
13%  1%  1%  1%  1%  1%  1%
[libx264 @ 00000275256f71c0] i8c dc,h,v,p: 72% 12% 15%  0%
[libx264 @ 00000275256f71c0] Weighted P-Frames: Y:0.0% UV:0.0%
[libx264 @ 00000275256f71c0] kb/s:1232.87
[AVIOContext @ 00000275256fa340] Statistics: 5243794 bytes read, 1 seeks
[AVIOContext @ 0000027525723d00] Statistics: 661981 bytes read, 1 seeks

Re: Reducing image2pipe png decoder latency

kumowoon1025
In reply to this post by Maxim Khitrov
Are you able to make it so the image generator program doesn’t compress the images beforehand? I think avoiding format conversions and avoiding any alpha channel would be faster; maybe the end result you want can be approximated without it.
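
Something like this, for example, if the generator can write raw BGRA
frames instead (untested sketch; the pipe name is just taken from your
command, and everything after the input would stay as in your original
command line):

ffmpeg -f rawvideo -pixel_format bgra -video_size 640x360 -framerate 30
-i \\.\pipe\ffmpeg-OTdHYsp8CvAJMsnK.1 ...

Raw 640x360 BGRA at 30 fps is roughly 28 MB/s, so that only really
works for a local pipe, not for sending over a network.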

Re: Reducing image2pipe png decoder latency

Maxim Khitrov
On Thu, Dec 12, 2019 at 1:42 AM Ted Park <[hidden email]> wrote:
>
> Are you able to make it so the image generator program doesn’t compress the images beforehand? I think avoiding format conversions and avoiding any alpha channel would be faster; maybe the end result you want can be approximated without it.

The generator program is just for testing. The production version has
to transport these images over the network first, so compression is
important.

Losing the alpha channel is not ideal. I can do that by encoding both
images to JPG and blending the two together, but the top image is
mostly blank, so that just results in a dark background. Sending the
background, overlay, and a separate grayscale alpha mask for the
overlay, all as JPGs, is another option, but that increases image
encoding time and bandwidth requirements.
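
(For reference, that mask variant would look roughly like this;
untested, with the background and overlay inputs as in my original
command, a hypothetical third pipe carrying the grayscale mask, and
everything else unchanged:

... -i mask_pipe -filter_complex
'[1:v]format=yuva420p[f1];[f1][2:v]alphamerge[ovl];[0:v][ovl]overlay=eval=init[v0]'
-map '[v0]' ...

alphamerge replaces the overlay's alpha plane with the mask, so the
rest of the pipeline wouldn't change.)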

Also, it looks like I made a mistake when testing PNG-only stream. I'm
now seeing the same 10+ frame latency with a single PNG input as with
JPG background and PNG overlay, which actually makes me feel a little
better since that eliminates the filter graph as the source of the
delay (unless it's the RGB -> YUV conversion?). I think it has to be
the PNG decoder.

-Max

Re: Reducing image2pipe png decoder latency

kumowoon1025
> Losing the alpha channel is not ideal. I can do that by encoding both
> images to JPG and blending the two together, but the top image is
> mostly blank, so that just results in a dark background. Sending the
> background, overlay, and a separate grayscale alpha mask for the
> overlay, all as JPGs, is another option, but that increases image
> encoding time and bandwidth requirements.
I can’t say for sure since I don’t know what your source is, but there probably was no alpha channel to begin with. If the top image is blank maybe you want to switch the order and/or use a different blend mode? As you say, generating a mask and applying it will be too complicated if 10 frames of delay is unacceptable.
 
> Also, it looks like I made a mistake when testing PNG-only stream. I'm
> now seeing the same 10+ frame latency with a single PNG input as with
> JPG background and PNG overlay, which actually makes me feel a little
> better since that eliminates the filter graph as the source of the
> delay (unless it's the RGB -> YUV conversion?). I think it has to be
> the PNG decoder.

What exactly is this latency being measured as, by the way? I thought it was between the two streams of images, but I guess not? Are you sure it’s not the network or the image generator taking longer?

As long as you have a PNG in there, the slower speed is unavoidable because of the conversion to RGB. The PNG stream is also probably a lot bigger, so if you have to transfer it over a network to process (back into YUV, it sounds like), you should probably try to keep it in YUV. Anything you do in RGB should be doable in YUV too with a little math, as long as you’re not rotoscoping things out or something.
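
For what it's worth, the full-range ("pc"/JPEG) BT.601 conversion that
the yuvj formats correspond to is roughly:

  Y  = 0.299*R + 0.587*G + 0.114*B
  Cb = 128 + 0.564*(B - Y)
  Cr = 128 + 0.713*(R - Y)

so most per-pixel blends you would do in RGB have a direct YUV
counterpart.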

Re: Reducing image2pipe png decoder latency

Maxim Khitrov
On Mon, Dec 16, 2019 at 11:04 AM Ted Park <[hidden email]> wrote:
>
> > Losing the alpha channel is not ideal. I can do that by encoding both
> > images to JPG and blending the two together, but the top image is
> > mostly blank, so that just results in a dark background. Send the
> > background, overlay, and a separate grayscale alpha mask for the
> > overlay, all as JPGs, is another option, but that increases image
> > encoding time and bandwidth requirements.
> I can’t say for sure since I don’t know what your source is, but there probably was no alpha channel to begin with. If the top image is blank maybe you want to switch the order and/or use a different blend mode? As you say, generating a mask and applying it will be too complicated if 10 frames of delay is unacceptable.

The background JPG is a video capture, the overlay is a HUD-like UI,
which is mostly transparent except for the actual UI elements.
Anything I do with color-keying or blend modes is still going to be
suboptimal relative to preserving the original alpha channel, which is
definitely there in the source.

> > Also, it looks like I made a mistake when testing PNG-only stream. I'm
> > now seeing the same 10+ frame latency with a single PNG input as with
> > JPG background and PNG overlay, which actually makes me feel a little
> > better since that eliminates the filter graph as the source of the
> > delay (unless it's the RGB -> YUV conversion?). I think it has to be
> > the PNG decoder.
>
> What exactly is this latency being measured as, by the way? I thought it was between the two streams of images, but I guess not? Are you sure it’s not the network or the image generator taking longer?
>
> As long as you have a PNG in there, the slower speed is unavoidable because of the conversion to RGB. The PNG stream is also probably a lot bigger, so if you have to transfer it over a network to process (back into YUV, it sounds like), you should probably try to keep it in YUV. Anything you do in RGB should be doable in YUV too with a little math, as long as you’re not rotoscoping things out or something.

My frame generator is writing two synchronized streams of images to
ffmpeg. Each pair of images is combined into one video frame. The
timecode of each frame is written both to stdout (just before the
images are submitted to ffmpeg) and burned into each image. I'm then
playing the ffmpeg output video stream with ffplay. By taking a
screenshot of my desktop, I can see the current frame that was just
written to ffmpeg (from stdout) and the current frame that's being
played in ffplay. The difference is the total latency of the encoder
pipeline.
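
In those terms, the 11-frame gap at 30 fps works out to 11/30 s, or
roughly 367 ms of end-to-end pipeline latency, versus 3/30 s = 100 ms
for the JPG-only case.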

The only variable I'm changing is whether the overlay image is encoded
as JPG vs PNG. Everything else is staying the same, and this is all
done locally right now, so no network latency. RGB to YUV conversion
has to happen somewhere because the source of the overlay is an RGB
OpenGL texture. Does it seem likely that this conversion alone would
account for 6+ additional frames of latency (200ms at 30 fps)?

The fact that the latency does not seem to depend on the size of the
image, with even 300x200 PNGs adding the same latency as 1920x1080,
leads me to believe that this is caused by buffering happening
somewhere in the png decoder.
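
One thing I still plan to try is pinning the decoder to a single
thread on the PNG input, in case the default frame-threaded decoding
is what's buffering frames (as far as I understand, frame threading
can add roughly one frame of delay per extra thread). Untested, but it
would just mean adding -threads 1 as an input option:

... -threads 1 -c:v png -pixel_format rgba -video_size 640x360 -i
\\.\pipe\ffmpeg-OTdHYsp8CvAJMsnK.1 ...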

-Max

Re: Reducing image2pipe png decoder latency

kumowoon1025
In reply to this post by Maxim Khitrov
At this point, everything has been rendered onto the black background anyway. I don’t know why the PNG would decode more slowly; how are these pipes being created?