I am writing an application that displays, encodes, and muxes live video using FFmpeg as the backend. The audio and video encoding runs asynchronously in the background, each in its own thread, so the encoded packets arrive at the av_interleaved_write_frame() call at different times. H.264 video encoding by itself works fine. However, when I add audio, the audio is out of sync with the video, even though the audio and video pts values agree (capture via avdevice, muxing with the matroska muxer). The cause of the problem is not clear to me. Specifically, here are my questions:
1. Can av_interleaved_write_frame() safely be called concurrently from separate audio and video threads, or must the calls be serialized?
2. The transcoding example builds a filter graph with the buffer/abuffer filters. My implementation does not use them because I apply no filtering before encoding. Are they required in my situation?
3. Encoding can start at an arbitrary point in the live stream, so the first pts the muxer receives is not zero. Is a zero starting pts required?
On Tue, Feb 26, 2019, at 1:33 PM, BIGLER Don (Framatome) wrote:
> I am writing an application that displays, encodes, and muxes live
> video using ffmpeg as the backend. The audio and video encoding occurs
> asynchronously in the background, each in its own thread, such that the
> encoded packets arrive to the av_interleaved_write_frame() function
> call at different times.
This mailing list (ffmpeg-user) is only for questions involving the FFmpeg command-line tools (ffmpeg, ffplay, ffprobe). Usage questions involving the FFmpeg libraries (libavcodec, libavformat, etc) should be asked at libav-user.