Delay filter reconfiguration

Delay filter reconfiguration

androiddevmar11
Hello,
my filter graph looks like this: I have "n" inputs (video files). Each input is connected to an "abuffer", each "abuffer" to an "adelay", and each "adelay" to "amix". I need to mix many inputs into one AAC file in the following way:
1. Take sound frames (from second 10 to second 15) from one input and mix them into the AAC file at, let's say, timestamps 25s to 30s. From the same input, take frames from second 17 to second 20 and mix them at, let's say, timestamps 35s to 38s. I assume I first need to seek to second 10 of this input to be at the correct place in the file. The delay filter should be configured with 25000 ms (the place I want to mix to). After the first batch of frames is mixed, I need to reconfigure the delay filter. The next frames should be added starting at 35s. Should I reconfigure the delay filter with the value 10 (35s-25s)?

In other words, I want to take frames from "n" inputs from any time and add them to the AAC file at any place.

I am trying to free the delay filter, recreate it with a new parameter, and recreate the links in the graph, but it is not working. Is this a good approach? Thank you for any help and hints.
Re: Delay filter reconfiguration

Paul B Mahol
On 6/10/14, androiddevmar11 <[hidden email]> wrote:


By "it is not working", what exactly is returned?

The adelay filter inserts silence, so using it for your case is not an optimal solution
(if I understood you correctly, amix would sum the samples from the 2 inputs).

_______________________________________________
ffmpeg-user mailing list
[hidden email]
http://ffmpeg.org/mailman/listinfo/ffmpeg-user
Re: Delay filter reconfiguration

androiddevmar11
Hello,
Actually there can be more than two inputs. To be specific, in the most typical scenario I will have one AAC file and more than one video file. I want to take some frames from a specific timestamp of a video file and add them at a specific timestamp in the AAC file. More than one piece of audio can be taken from one video input and added to one or more places in the AAC file. I am doing a proof of concept based on this code: https://gist.github.com/MrArtichaut/11136813. My init_filter_graph function looks like this:

static int init_filter_graph(AVFilterGraph **graph, AVFilterContext **src0, AVFilterContext **src1,
AVFilterContext **sink)
{
    AVFilterGraph *filter_graph;
    AVFilterContext *abuffer1_ctx;
    AVFilter        *abuffer1;
    AVFilterContext *abuffer0_ctx;
    AVFilter        *abuffer0;
    AVFilterContext *adelay_ctx;
    AVFilter        *adelay;
    AVFilterContext *mix_ctx;
    AVFilter        *mix_filter;
    AVFilterContext *abuffersink_ctx;
    AVFilter        *abuffersink;

    char args[512];

    int err;

    /* Create a new filter graph, which will contain all the filters. */
    filter_graph = avfilter_graph_alloc();
    if (!filter_graph) {
        //av_log(NULL, AV_LOG_ERROR, "Unable to create filter graph.\n");
        LOGE("Unable to create filter graph.\n");
        return AVERROR(ENOMEM);
    }

    /****** abuffer 0 ********/

    /* Create the abuffer filter; it will be used for feeding the data into the graph. */
    abuffer0 = avfilter_get_by_name("abuffer");
    if (!abuffer0) {
        //av_log(NULL, AV_LOG_ERROR, "Could not find the abuffer filter.\n");
        LOGE("Could not find the abuffer filter.\n");
        return AVERROR_FILTER_NOT_FOUND;
    }

    /* buffer audio source: the decoded frames from the decoder will be inserted here. */
    if (!input_codec_context_0->channel_layout)
        input_codec_context_0->channel_layout = av_get_default_channel_layout(input_codec_context_0->channels);
    snprintf(args, sizeof(args),
                         "sample_rate=%d:sample_fmt=%s:channel_layout=0x%"PRIx64,
             input_codec_context_0->sample_rate,
             av_get_sample_fmt_name(input_codec_context_0->sample_fmt), input_codec_context_0->channel_layout);


    err = avfilter_graph_create_filter(&abuffer0_ctx, abuffer0, "src0",
                                       args, NULL, filter_graph);
    if (err < 0) {
        //av_log(NULL, AV_LOG_ERROR, "Cannot create audio buffer source\n");
        LOGE("Cannot create audio buffer source\n");
        return err;
    }

    /****** abuffer 1 ********/

    /* Create the abuffer filter;
     * it will be used for feeding the data into the graph. */
    abuffer1 = avfilter_get_by_name("abuffer");
    if (!abuffer1) {
        //av_log(NULL, AV_LOG_ERROR, "Could not find the abuffer filter.\n");
        LOGE("Could not find the abuffer filter.\n");
        return AVERROR_FILTER_NOT_FOUND;
    }

    /* buffer audio source: the decoded frames from the decoder will be inserted here. */
    if (!input_codec_context_1->channel_layout)
        input_codec_context_1->channel_layout = av_get_default_channel_layout(input_codec_context_1->channels);
    snprintf(args, sizeof(args),"sample_rate=%d:sample_fmt=%s:channel_layout=0x%"PRIx64,
             input_codec_context_1->sample_rate,
             av_get_sample_fmt_name(input_codec_context_1->sample_fmt), input_codec_context_1->channel_layout);


    err = avfilter_graph_create_filter(&abuffer1_ctx, abuffer1, "src1",
                                       args, NULL, filter_graph);
    if (err < 0) {
        //av_log(NULL, AV_LOG_ERROR, "Cannot create audio buffer source\n");
        LOGE("Cannot create audio buffer source\n");
        return err;
    }

    /****** adelay ********/
    adelay = avfilter_get_by_name("adelay");
    if (!adelay) {
        LOGE("Could not find the adelay filter.\n");
        return AVERROR_FILTER_NOT_FOUND;
    }
    /* Configure the initial delay (in ms) and create the filter. */
    snprintf(args, sizeof(args), "delays=%d", 1000);
    err = avfilter_graph_create_filter(&adelay_ctx, adelay, "del1", args, NULL, filter_graph);
    if (err < 0) {
        LOGE("Cannot create adelay filter\n");
        return err;
    }

    /****** amix ********/
    /* Create mix filter. */
    mix_filter = avfilter_get_by_name("amix");
    if (!mix_filter) {
        //av_log(NULL, AV_LOG_ERROR, "Could not find the mix filter.\n");
        LOGE("Could not find the mix filter.\n");
        return AVERROR_FILTER_NOT_FOUND;
    }

    snprintf(args, sizeof(args), "inputs=2:duration=shortest");

    err = avfilter_graph_create_filter(&mix_ctx, mix_filter, "amix",
                                       args, NULL, filter_graph);

    if (err < 0) {
        //av_log(NULL, AV_LOG_ERROR, "Cannot create audio amix filter\n");
        LOGE("Cannot create audio amix filter\n");
        return err;
    }

    /* Finally create the abuffersink filter;
     * it will be used to get the filtered data out of the graph. */
    abuffersink = avfilter_get_by_name("abuffersink");
    if (!abuffersink) {
        //av_log(NULL, AV_LOG_ERROR, "Could not find the abuffersink filter.\n");
        LOGE("Could not find the abuffersink filter.\n");
        return AVERROR_FILTER_NOT_FOUND;
    }

    abuffersink_ctx = avfilter_graph_alloc_filter(filter_graph, abuffersink, "sink");
    if (!abuffersink_ctx) {
        //av_log(NULL, AV_LOG_ERROR, "Could not allocate the abuffersink instance.\n");
        LOGE("Could not allocate the abuffersink instance.\n");
        return AVERROR(ENOMEM);
    }

    /* Same sample fmts as the output file. */
    err = av_opt_set_int_list(abuffersink_ctx, "sample_fmts",
                              ((int[]){ SAMPLE_FORMAT, AV_SAMPLE_FMT_NONE }),
                              AV_SAMPLE_FMT_NONE, AV_OPT_SEARCH_CHILDREN);
    if (err < 0) {
        //av_log(NULL, AV_LOG_ERROR, "Could not set options on the abuffersink instance.\n");
        LOGE("Could not set options on the abuffersink instance.\n");
        return err;
    }

    char ch_layout[64];
    av_get_channel_layout_string(ch_layout, sizeof(ch_layout), 0, OUTPUT_CHANNELS);
    av_opt_set(abuffersink_ctx, "channel_layout", ch_layout, AV_OPT_SEARCH_CHILDREN);

    err = avfilter_init_str(abuffersink_ctx, NULL);
    if (err < 0) {
        //av_log(NULL, AV_LOG_ERROR, "Could not initialize the abuffersink instance.\n");
        LOGE("Could not initialize the abuffersink instance.\n");
        return err;
    }


    /* Connect the filters. */
    err = avfilter_link(abuffer0_ctx, 0, adelay_ctx, 0);
    if (err >= 0)
        err = avfilter_link(adelay_ctx, 0, mix_ctx, 0);
    if (err >= 0)
        err = avfilter_link(abuffer1_ctx, 0, mix_ctx, 1);
    if (err >= 0)
        err = avfilter_link(mix_ctx, 0, abuffersink_ctx, 0);
    if (err < 0) {
        av_log(NULL, AV_LOG_ERROR, "Error connecting filters\n");
        return err;
    }

    /* Configure the graph. */
    err = avfilter_graph_config(filter_graph, NULL);
    if (err < 0) {
        //av_log(NULL, AV_LOG_ERROR, "Error while configuring graph : %s\n", get_error_text(err));
        LOGE("Error while configuring graph : %s\n", get_error_text(err));
        return err;
    }

    // ------------------------------------------------
    // reconfigure delay filter:
    avfilter_free(adelay_ctx);

    snprintf(args, sizeof(args), "delays=%d", 2000);
    err = avfilter_graph_create_filter(&adelay_ctx, adelay, "del1", args, NULL, filter_graph);

    err = avfilter_link(abuffer0_ctx, 0, adelay_ctx, 0);
    err = avfilter_link(adelay_ctx, 0, mix_ctx, 0);
    err = avfilter_graph_config(filter_graph, NULL);
    // ------------------------------------------------

    char *dump = avfilter_graph_dump(filter_graph, NULL);
    //av_log(NULL, AV_LOG_ERROR, "Graph :\n%s\n", dump);
    LOGE("Graph :\n%s\n", dump);
    av_free(dump);

    *graph = filter_graph;
    *src0  = abuffer0_ctx;
    *src1  = abuffer1_ctx;
    *sink  = abuffersink_ctx;

    return 0;
}

Please have a look at the "reconfigure delay filter" part. The program crashes at av_buffersink_get_frame (in the process_all() function from the linked gist) with this error:

Execution stopped at: 0x6AEF0400
In thread 1 (OS thread id 2758)
In buffersink.c

You mentioned that using adelay is not optimal. What is the better solution? Another filter? Thank you for your help.

Re: Delay filter reconfiguration

Paul B Mahol
On 6/11/14, androiddevmar11 <[hidden email]> wrote:

> Hello,
> Actually there can be more than two inputs. To be specific, in most typical
> scenario I will have one AAC file and more than one video files. I want to
> get some frames from specific time stamp of video file and add them to
> specific time stamp in AAC file. More than one pieces of audio can be taken
> from one video input and added to one or more places in AAC file. I am
> doing
> proof of concept based on this code:
> https://gist.github.com/MrArtichaut/11136813. My init_filter_graph function
> looks like this:
>
> static int init_filter_graph(AVFilterGraph **graph, AVFilterContext **src0,
> AVFilterContext **src1,
> AVFilterContext **sink)
> {
>     AVFilterGraph *filter_graph;
>     AVFilterContext *abuffer1_ctx;
>     AVFilter        *abuffer1;
> AVFilterContext *abuffer0_ctx;
>     AVFilter        *abuffer0;
>     AVFilterContext *adelay_ctx;
>     AVFilter        *adelay;
>     AVFilterContext *mix_ctx;
>     AVFilter        *mix_filter;
>     AVFilterContext *abuffersink_ctx;
>     AVFilter        *abuffersink;
>
> char args[512];
>
>     int err;
>
>     /* Create a new filter graph, which will contain all the filters. */
>     filter_graph = avfilter_graph_alloc();
>     if (!filter_graph) {
>         //av_log(NULL, AV_LOG_ERROR, "Unable to create filter graph.\n");
>         LOGE("Unable to create filter graph.\n");
>         return AVERROR(ENOMEM);
>     }
>
> /****** abuffer 0 ********/
>
>     /* Create the abuffer filter, it will be used for feeding the data into
> the graph. */
>     abuffer0 = avfilter_get_by_name("abuffer");
>     if (!abuffer0) {
>         //av_log(NULL, AV_LOG_ERROR, "Could not find the abuffer
> filter.\n");
>         LOGE("Could not find the abuffer filter.\n");
>         return AVERROR_FILTER_NOT_FOUND;
>     }
>
> /* buffer audio source: the decoded frames from the decoder will be
> inserted here. */
>     if (!input_codec_context_0->channel_layout)
>         input_codec_context_0->channel_layout =
> av_get_default_channel_layout(input_codec_context_0->channels);
>     snprintf(args, sizeof(args),
> "sample_rate=%d:sample_fmt=%s:channel_layout=0x%"PRIx64,
>              input_codec_context_0->sample_rate,
>              av_get_sample_fmt_name(input_codec_context_0->sample_fmt),
> input_codec_context_0->channel_layout);
>
>
> err = avfilter_graph_create_filter(&abuffer0_ctx, abuffer0, "src0",
>                                        args, NULL, filter_graph);
>     if (err < 0) {
>         //av_log(NULL, AV_LOG_ERROR, "Cannot create audio buffer
> source\n");
>         LOGE("Cannot create audio buffer source\n");
>         return err;
>     }
>
> /****** abuffer 1 ******* */
>
> /* Create the abuffer filter;
>      * it will be used for feeding the data into the graph. */
>     abuffer1 = avfilter_get_by_name("abuffer");
>     if (!abuffer1) {
>         //av_log(NULL, AV_LOG_ERROR, "Could not find the abuffer
> filter.\n");
>         LOGE("Could not find the abuffer filter.\n");
>         return AVERROR_FILTER_NOT_FOUND;
>     }
>
> /* buffer audio source: the decoded frames from the decoder will be
> inserted here. */
>     if (!input_codec_context_1->channel_layout)
>         input_codec_context_1->channel_layout =
> av_get_default_channel_layout(input_codec_context_1->channels);
>     snprintf(args,
> sizeof(args),"sample_rate=%d:sample_fmt=%s:channel_layout=0x%"PRIx64,
>              input_codec_context_1->sample_rate,
>              av_get_sample_fmt_name(input_codec_context_1->sample_fmt),
> input_codec_context_1->channel_layout);
>
>
> err = avfilter_graph_create_filter(&abuffer1_ctx, abuffer1, "src1",
>                                        args, NULL, filter_graph);
>     if (err < 0) {
>         //av_log(NULL, AV_LOG_ERROR, "Cannot create audio buffer
> source\n");
>     LOGE("Cannot create audio buffer source\n");
>         return err;
>     }
>
>     /****** adelay ******* */
>     adelay = avfilter_get_by_name("adelay");
>     if (!adelay) {
>         LOGE("Could not find the adelay filter.\n");
>         return AVERROR_FILTER_NOT_FOUND;
>     }
> /* buffer audio source: the decoded frames from the decoder will be
> inserted here. */
>     snprintf(args, sizeof(args),"delays=%d", 1000);
> err = avfilter_graph_create_filter(&adelay_ctx, adelay, "del1", args,
> NULL,
> filter_graph);
>
>     /* ***** amix ******* */
>     /* Create mix filter. */
>     mix_filter = avfilter_get_by_name("amix");
>     if (!mix_filter) {
>         //av_log(NULL, AV_LOG_ERROR, "Could not find the mix filter.\n");
>         LOGE("Could not find the mix filter.\n");
>         return AVERROR_FILTER_NOT_FOUND;
>     }
>
>     snprintf(args, sizeof(args), "inputs=2:duration=shortest");
>
> err = avfilter_graph_create_filter(&mix_ctx, mix_filter, "amix",
>                                        args, NULL, filter_graph);
>
>     if (err < 0) {
>         av_log(NULL, AV_LOG_ERROR, "Cannot create audio amix filter\n");
>         LOGE("Cannot create audio amix filter\n");
>         return err;
>     }
>
>     /* Finally create the abuffersink filter;
>      * it will be used to get the filtered data out of the graph. */
>     abuffersink = avfilter_get_by_name("abuffersink");
>     if (!abuffersink) {
>         //av_log(NULL, AV_LOG_ERROR, "Could not find the abuffersink
> filter.\n");
>         LOGE("Could not find the abuffersink filter.\n");
>         return AVERROR_FILTER_NOT_FOUND;
>     }
>
>     abuffersink_ctx = avfilter_graph_alloc_filter(filter_graph,
> abuffersink,
> "sink");
>     if (!abuffersink_ctx) {
>         //av_log(NULL, AV_LOG_ERROR, "Could not allocate the abuffersink
> instance.\n");
>         LOGE("Could not allocate the abuffersink instance.\n");
>         return AVERROR(ENOMEM);
>     }
>
>     /* Same sample fmts as the output file. */
>     err = av_opt_set_int_list(abuffersink_ctx, "sample_fmts",
>                               ((int[]){ SAMPLE_FORMAT, AV_SAMPLE_FMT_NONE
> }),
>                               AV_SAMPLE_FMT_NONE, AV_OPT_SEARCH_CHILDREN);
>
>     uint8_t ch_layout[64];
>     av_get_channel_layout_string(ch_layout, sizeof(ch_layout), 0,
> OUTPUT_CHANNELS);
>     av_opt_set    (abuffersink_ctx, "channel_layout", ch_layout,
> AV_OPT_SEARCH_CHILDREN);
>
>     if (err < 0) {
>         //av_log(NULL, AV_LOG_ERROR, "Could set options to the abuffersink
> instance.\n");
>         LOGE("Could set options to the abuffersink instance.\n");
>         return err;
>     }
>
>     err = avfilter_init_str(abuffersink_ctx, NULL);
>     if (err < 0) {
>         //av_log(NULL, AV_LOG_ERROR, "Could not initialize the abuffersink
> instance.\n");
>         LOGE("Could not initialize the abuffersink instance.\n");
>         return err;
>     }
>
>
>     /* Connect the filters; */
>     err = avfilter_link(abuffer0_ctx, 0, adelay_ctx, 0);
> err = avfilter_link(adelay_ctx, 0, mix_ctx, 0);
> if (err >= 0)
>         err = avfilter_link(abuffer1_ctx, 0, mix_ctx, 1);
> if (err >= 0)
>         err = avfilter_link(mix_ctx, 0, abuffersink_ctx, 0);
>     if (err < 0) {
>         av_log(NULL, AV_LOG_ERROR, "Error connecting filters\n");
>         return err;
>     }
>
>     /* Configure the graph. */
>     err = avfilter_graph_config(filter_graph, NULL);
>     if (err < 0) {
>         //av_log(NULL, AV_LOG_ERROR, "Error while configuring graph :
> %s\n",
> get_error_text(err));
>         LOGE("Error while configuring graph : %s\n", get_error_text(err));
>         return err;
>     }
>
>     // ------------------------------------------------
>     // reconfigure delay filter:
> avfilter_free(adelay_ctx);
>
> snprintf(args, sizeof(args),"delays=%d", 2000);
> err = avfilter_graph_create_filter(&adelay_ctx, adelay, "del1", args,
> NULL,
> filter_graph);
>
>     err = avfilter_link(abuffer0_ctx, 0, adelay_ctx, 0);
> err = avfilter_link(adelay_ctx, 0, mix_ctx, 0);
> err = avfilter_graph_config(filter_graph, NULL);
>     // ------------------------------------------------
>
>     char* dump = avfilter_graph_dump(filter_graph, NULL);
>     //av_log(NULL, AV_LOG_ERROR, "Graph :\n%s\n", dump);
>     LOGE("Graph :\n%s\n", dump);
>
>     *graph = filter_graph;
>     *src0   = abuffer0_ctx;
> *src1   = abuffer1_ctx;
>     *sink  = abuffersink_ctx;
>
>     return 0;
> }
>
> Please have a look at "reconfigure delay filter" part. Program crashes at
> av_buffersink_get_frame (funciton process_all() from link) giving error:
>
> Execution stopped at: 0x6AEF0400
> In thread 1 (OS thread id 2758)
> In buffersink.c
>
> You mentioned that using adelay is not optimal. What is the better
> solution?
> Other filter?  Thank you for help.

Perhaps only the asetpts filter. Using adelay+amix is not optimal
because of the extra overhead of summing samples when you
basically only want to change frame timestamps.
Also be aware that in your case the adelay filter needs to delay
all available channels, not just the first one, so something like
"delays=1000|1000" should be used for stereo.

Re: Delay filter reconfiguration

androiddevmar11
Hello,
thank you for the hints. I am still trying to use adelay because the concept of PTS is not clear to me yet. Today I realized that I am probably not able to mix many inputs in one go. Let's say my mix scenario looks like this:

Input 1 (video):  [-- 5s delay --][ 10 frames ][-- 4s delay --][ 15 frames ]

Input 2 (video):  [-- 1s delay --][ 20 frames ]

Input 3 (AAC):    [ 60 frames ]

For input 1 I configure adelay with a value of 5 seconds. After the 10 frames from the first input are read, I need to destroy the whole graph and create a delay filter with a value of 4s. But I cannot do that, because reading frames from input 2 is not finished yet. I also cannot destroy the filter and recreate it after the frames from input 2 are read, because at that moment I am too far into the output file, so I will not achieve the correct delay for input 1. So it now looks like I cannot mix many inputs in one go. I have to mix input 1 with input 3 first, then mix input 2 into the result of the first mixing. But this is not very effective. I believe FFmpeg allows me to do it in a better way. Where is the mistake in this approach? Thank you for help.

   
Re: Delay filter reconfiguration

androiddevmar11
Hello,
after the last tests I noticed one more problem with this approach. In order to read data from input 1, I seek to the moment I want to read from. Based on the frame index I calculate the time in ms where I am in the input file. When I reach the appropriate moment, reading of the input file is finished. Then I destroy the whole graph and recreate it with the new delay value. But the problem is probably that not all data has been read from the sink. Sometimes the piece of sound from input 1 ends up at the correct place in the output file, sometimes not; I would say it is random. So I cannot even mix data from one input into the AAC file. Here is the function which does the mixing:

static int process_all(JNIEnv * env, jobject this) {
        int ret = 0;

        int data_present = 0;
        int finished = 0;

        int nb_inputs = utarray_len(audio_mixing_data_list);

        int total_out_samples = 0;
        int nb_finished = 0;
        AudioMixingDataType *element = NULL;

        while (nb_finished < nb_inputs) {
                int data_present_in_graph = 0;

                for (int i = 0; i < nb_inputs; i++) {

                        element = (AudioMixingDataType*)get_element_at(audio_mixing_data_list, i);
                        if (element == NULL || element->input_finished || element->input_to_read == 0) {
                                continue;
                        }

                        element->input_to_read = 0;

                        AVFrame *frame = NULL;

                        if (init_input_frame(&frame) > 0) {
                                process_all_handle_error(ret);
                                return 0;
                        }

			// Decode one frame of audio samples.
                        if ((ret = decode_audio_frame(frame, element->input_format_context,
                                        element->input_codec_context, &data_present, &finished, element))) {
                                process_all_handle_error(ret);
                                return 0;
                        }

                        if ((element->in_file_range_ms.length != -1) && get_time_stamp_for_frame_index(element->current_frame_index, element) > element->in_file_range_ms.length){

				// at this moment, reading of the piece of sound from the video file is finished:
                                finished = 1;
                                data_present = 0;

                        }

                        /**
                         * If we are at the end of the file and there are no more samples
                         * in the decoder which are delayed, we are actually finished.
                         * This must not be treated as an error.
                         */
                        if (finished && !data_present) {

				// get next range (location, length) from java code:
                                Range outFileRange =  get_next_out_file_subtrack_range(env, this);
                                Range inFileRange =  get_next_in_file_subtrack_range(env, this);

                                if (outFileRange.location != -1 && outFileRange.length != -1 && inFileRange.location != -1 && inFileRange.length != -1){

                                        free_graph();
                                        // seek to next piece of sound in video file:
                                        set_input_file_subtrack_range(inFileRange.location, inFileRange.length, element);
                                        set_output_file_subtrack_range_and_delay(outFileRange.location, outFileRange.length, element);
                                        int err = init_filter_graph(&graph, &sink);
                                        LOGE("Init err = %s\n", get_error_text(err));
                                        finished = 0;
                                        element->current_frame_index = 0;

                                }else{

                                        element->input_finished = 1;
                                        nb_finished++;
                                        ret = 0;
                                        LOGE("Input n°%d finished. Write NULL frame \n", i);

                                        ret = av_buffersrc_write_frame(element->buffer_filter_context, NULL);
                                        if (ret < 0) {
                                                av_log(NULL, AV_LOG_ERROR,
                                                                "Error writing EOF null frame for input %d\n", i);
                                                process_all_handle_error(ret);
                                                return 0;
                                        }
                                }
                        } else if (data_present) {
                                 process_all_data_present(element, frame);
                        }
                        if (frame != NULL){
                                av_frame_free(&frame);
                        }
                        data_present_in_graph = data_present | data_present_in_graph;
                }
                process_all_data_present_in_graph(data_present_in_graph, data_present, element);
        }

        return 0;
}


static void process_all_data_present(AudioMixingDataType * const element, AVFrame * const frame){
        /** If there is decoded data, convert and store it */
        /* push the audio data from decoded frame into the filter graph */
        int ret = av_buffersrc_write_frame(element->buffer_filter_context, frame);
        if (ret < 0) {
                LOGE("Error while feeding the audio filtergraph\n");
                process_all_handle_error(ret);
                return;
        }
}

static void process_all_data_present_in_graph(int data_present_in_graph, int data_present, AudioMixingDataType * const element){
        int nb_inputs = utarray_len(audio_mixing_data_list);
        if (data_present_in_graph) {
                AVFrame *filt_frame = av_frame_alloc();

                /* pull filtered audio from the filter graph */
                while (1) {
                        int ret = av_buffersink_get_frame(sink, filt_frame);
                        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
				for (int i = 0; i < nb_inputs; i++) {
					AudioMixingDataType *el = get_element_at(audio_mixing_data_list, i);
					if (el != NULL && av_buffersrc_get_nb_failed_requests(
							el->buffer_filter_context) > 0) {
						el->input_to_read = 1;

						LOGE("Need to read input %d\n", i);
					}
				}

                                break;
                        }
                        if (ret < 0){
                                process_all_handle_error(ret);
                                return;
                        }

                        ret = encode_audio_frame(filt_frame, output_format_context,
                                        output_codec_context, &data_present);
                        if (ret < 0){
                                process_all_handle_error(ret);
                                return;
                        }
                        av_frame_unref(filt_frame);
                }

                av_frame_free(&filt_frame);
        } else {
                av_log(NULL, AV_LOG_INFO, "No data in graph\n");
                for (int i = 0; i < nb_inputs; i++) {
                        AudioMixingDataType *el = get_element_at(audio_mixing_data_list, i);
                        if (el != NULL){
                                el->input_to_read = 1;
                        }
                }
        }
}

static void set_input_file_subtrack_range(jint location, jint length, AudioMixingDataType * const el){

        if (el != NULL){
                if (location != -1 && length != -1){
                        el->in_file_range_ms.location = location;
                        el->in_file_range_ms.length = length;
                        seek_frame(location, el);
                }
        }
}

static void set_output_file_subtrack_range_and_delay(jint location, jint length, AudioMixingDataType* element){
        if (element != NULL)
        {
                int delay = location;
		if (element->delay_in_output_file != -1 && element->out_file_range_ms.location != -1 && element->out_file_range_ms.length != -1){
                        delay = location - (element->out_file_range_ms.location + element->out_file_range_ms.length);
                }
                element->out_file_range_ms.location = location;
                element->out_file_range_ms.length = length;
                element->delay_in_output_file = delay;
                LOGE("Set_output_file_subtrack_range_and_delay %d:",delay);
        }
}

In this code, "element" is just the data gathered for one input: one element in the UTArray list. Could somebody look at it and propose a correct way of destroying and recreating the filters? Thank you very much.

BR, M