[Libav-user] Error encoding rgba to mpeg

Devin Neal devneal17 at gmail.com
Tue May 14 09:52:23 EEST 2019


I'm trying to convert a series of RGBA pixel arrays into mpeg4 video.
My approach is to do this one frame at a time, converting each RGBA
frame to YUV with sws_scale(), then encoding it to mpeg4 as is done
at http://ffmpeg.org/doxygen/4.1/encode__video_8c_source.html.
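
For reference, the rgba_yuv_swsctx used in the attached code is
created once before the loop, roughly like this (the SWS_BILINEAR
flag here is just an illustrative choice):

    /* RGBA source -> encoder pixel format, same dimensions */
    struct SwsContext *rgba_yuv_swsctx = sws_getContext(
        ctx->width, ctx->height, AV_PIX_FMT_RGBA,
        ctx->width, ctx->height, ctx->pix_fmt,
        SWS_BILINEAR, NULL, NULL, NULL);
    if (!rgba_yuv_swsctx) {
        fprintf(stderr, "Could not create swscale context\n");
        exit(1);
    }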

This works fine as long as I don't encode the first frame of my
input. If I do, and attempt to play the resulting file with ffplay,
the only video is a strange gray pattern moving across the screen and
the only audio is garbled static. If I instead use the initial dummy
image from the link above as the first frame and encode all of the
remaining frames of my input, the video is generated as expected
(with the dummy image as the first frame).

If I convert the dummy image to RGBA and back to YUV, the video is
still generated correctly. If I convert it to RGBA, copy my own pixel
data into the frame, and then convert back to YUV, the problem returns.
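
That round-trip test looks roughly like the following, where
yuv_rgba_swsctx is a second context going YUV -> RGBA (the name is
only for illustration):

    /* dummy YUV frame -> RGBA ... */
    sws_scale(yuv_rgba_swsctx,
              (const uint8_t * const *)frame->data, frame->linesize,
              0, ctx->height, frame2->data, frame2->linesize);
    /* ... optionally overwrite frame2->data[0] with my pixels ... */
    /* ... then back to YUV for encoding */
    sws_scale(rgba_yuv_swsctx,
              (const uint8_t * const *)frame2->data, frame2->linesize,
              0, ctx->height, frame3->data, frame3->linesize);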

If I create a working video, then append my frame data to it
(including the first frame), the resulting video works as expected
(prefixed with the initial video).

If I change the codec from mpeg4 to libx264, the resulting video
works as expected, although the video quality is reduced and seeking
is broken.
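
Switching between the two only changes how the encoder is looked up;
in my code it is roughly:

    /* mpeg4 by codec id ... */
    const AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_MPEG4);
    /* ... or libx264 by name:
    const AVCodec *codec = avcodec_find_encoder_by_name("libx264"); */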

This leads me to believe that encoding the initial YUV frame performs
some sort of initialization that my RGBA-sourced frames do not, but
comparing hexdumps of the working and broken video files shows that
the first 61 bytes of each are identical, although the working file is
much larger (about 1 MB more data for 1 second of video).

I've encountered this problem on ffmpeg versions n4.0.4 and n4.1.3. A
stripped version of my code is attached. Thanks for any help you
can provide.
-------------- next part --------------
AVPacket *pkt = av_packet_alloc();
if (!pkt)
    exit(1);

for (i = 0; i < 60; i++) {
    pkt->data = NULL;
    pkt->size = 0;
    fflush(stdout);

    // first frame: encode the dummy YUV image from the reference example
    if (i == 0) {
        AVFrame *frame = av_frame_alloc();
        // get and configure a frame
        if (!frame) {
            fprintf(stderr, "Could not allocate video frame\n");
            exit(1);
        }
        frame->format = ctx->pix_fmt;   /* YUV420P, matching the encoder */
        frame->width  = ctx->width;
        frame->height = ctx->height;
        ret = av_image_alloc(
            frame->data, frame->linesize,
            ctx->width, ctx->height,
            ctx->pix_fmt, 32);
        if (ret < 0) {
            fprintf(stderr, "Could not allocate the video frame data\n");
            exit(1);
        }
        /* prepare a dummy image */
        /* do exactly what's done in the link... */
        frame->pts = i;
        encode(ctx, frame, pkt, f);
        av_freep(&frame->data[0]);   /* av_image_alloc() buffer isn't freed by av_frame_free() */
        av_frame_free(&frame);
        continue;
    }

    // configure an RGBA frame
    AVFrame *frame2 = av_frame_alloc();
    if (!frame2) {
        fprintf(stderr, "Could not allocate video frame2\n");
        exit(1);
    }
    frame2->format = AV_PIX_FMT_RGBA;
    frame2->width  = ctx->width;
    frame2->height = ctx->height;
    ret = av_image_alloc(
        frame2->data, frame2->linesize,
        ctx->width, ctx->height,
        AV_PIX_FMT_RGBA, 32);
    if (ret < 0) {
        fprintf(stderr, "Could not allocate the video frame2 data\n");
        exit(1);
    }

    int bytes_per_frame = 4 * 2560 * 1440;   /* RGBA bytes per source frame */
    memcpy(frame2->data[0],
           frame_data + bytes_per_frame * i,
           bytes_per_frame);

    // configure YUV frame
    AVFrame *frame3 = av_frame_alloc();
    if (!frame3) {
        fprintf(stderr, "Could not allocate video frame3\n");
        exit(1);
    }
    frame3->format = ctx->pix_fmt;
    frame3->width  = ctx->width;
    frame3->height = ctx->height;
    ret = av_image_alloc(
        frame3->data, frame3->linesize,
        ctx->width, ctx->height,
        ctx->pix_fmt, 32);
    if (ret < 0) {
        fprintf(stderr, "Could not allocate the video frame3 data\n");
        exit(1);
    }

    // convert RGBA->YUV
    sws_scale(rgba_yuv_swsctx,
              (const uint8_t * const *)frame2->data,
              frame2->linesize,
              0,
              ctx->height,
              frame3->data,
              frame3->linesize);

    frame3->pts = i;
    encode(ctx, frame3, pkt, f);

    av_freep(&frame2->data[0]);   /* release the av_image_alloc() buffers */
    av_freep(&frame3->data[0]);
    av_frame_free(&frame2);
    av_frame_free(&frame3);
}
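
/* After the loop, the flush and teardown (stripped out above) roughly
   follow the linked example. */
encode(ctx, NULL, pkt, f);                /* drain delayed packets */

uint8_t endcode[] = { 0, 0, 1, 0xb7 };    /* MPEG sequence end code */
fwrite(endcode, 1, sizeof(endcode), f);
fclose(f);

sws_freeContext(rgba_yuv_swsctx);
avcodec_free_context(&ctx);
av_packet_free(&pkt);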

