[FFmpeg-devel] [PATCH 2/3] avformat/mov: Fix parsing of saio/siaz atoms in encrypted content.

Jacob Trimble modmaker at google.com
Sat Jan 6 00:58:53 EET 2018


On Fri, Jan 5, 2018 at 2:01 PM, Carl Eugen Hoyos <ceffmpeg at gmail.com> wrote:
> 2018-01-05 22:29 GMT+01:00 Jacob Trimble <modmaker-at-google.com at ffmpeg.org>:
>> On Fri, Jan 5, 2018 at 12:41 PM, Carl Eugen Hoyos <ceffmpeg at gmail.com> wrote:
>>> 2018-01-05 20:49 GMT+01:00 Jacob Trimble <modmaker-at-google.com at ffmpeg.org>:
>>>
>>>> +    entry_count = avio_rb32(pb);
>>>> +    encryption_index->auxiliary_offsets = av_malloc_array(sizeof(size_t), entry_count);
>>>
>>> (sizeof(variable) instead of sizeof(type), please.)
>>>
>>> But since this could be used for a dos attack, please change this
>>> to something similar to 1112ba01.
>>> If it is easy to avoid it, very short files should not allocate
>>> gigabytes.
>>
>> Switched to calculating the size based on the number of remaining
>> bytes and returning an error if it doesn't match what is read.
>
> Sorry if I miss something:
>
>> +    entry_count = (atom.size - 8 - (has_saio_type ? 8 : 0)) / (version == 0 ? 4 : 8);
>> +    if (avio_rb32(pb) != entry_count) {
>> +        av_log(c->fc, AV_LOG_ERROR, "incorrect entry_count in saio\n");
>> +        return AVERROR_INVALIDDATA;
>> +    }
>> +    encryption_index->auxiliary_offsets =
>> +        av_malloc_array(sizeof(*encryption_index->auxiliary_offsets), entry_count);
>
> Does this avoid a 1G allocation for a file of a few bytes?
>
> Didn't you simply increase the number of needed bytes to change in a valid
> mov file to behave maliciously from one to two?
>

From what I can tell, the mov_read_default function (which is what reads
child atoms) verifies that "atom.size" fits inside the parent atom.
This means that for "atom.size" to be excessively large relative to the
file size, the input would need to be non-seekable (since I think the
top-level atom is bounded by the file size) and all the enclosing atoms
would need to have invalid sizes.  But technically I guess it is possible.

But this pattern is basically "malloc N bytes, then read N bytes".  The
only alternative I can think of (in the face of non-seekable inputs) is
to read in chunks until we hit EOF or the end of the expected size.
That seems like a lot of extra work that isn't mirrored elsewhere; there
are several other places where this isn't explicitly checked.
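For what it's worth, the chunked-read alternative could look roughly like
this.  This is only a minimal sketch using plain stdio rather than the avio
API, and `read_offsets_chunked` and `CHUNK` are hypothetical names; the
point is that a bogus declared count on a short input never triggers a huge
up-front allocation, because the array only grows as entries actually arrive:

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define CHUNK 128  /* entries to grow the array by at a time */

/* Reads up to `declared` 8-byte entries from `in`, growing the output
 * array in small chunks.  Stops early at EOF and returns the number of
 * entries actually read, or -1 on allocation failure.  `*out` must be
 * freed by the caller. */
static long read_offsets_chunked(FILE *in, uint64_t declared, uint64_t **out)
{
    uint64_t *buf = NULL;
    size_t cap = 0;
    long n = 0;

    *out = NULL;
    while ((uint64_t)n < declared) {
        uint64_t v;
        if (n == (long)cap) {
            /* grow by a bounded chunk, never by the declared count */
            size_t new_cap = cap + CHUNK;
            uint64_t *tmp = realloc(buf, new_cap * sizeof(*tmp));
            if (!tmp) { free(buf); return -1; }
            buf = tmp;
            cap = new_cap;
        }
        if (fread(&v, sizeof(v), 1, in) != 1)
            break;  /* EOF before the declared count: stop early */
        buf[n++] = v;
    }
    *out = buf;
    return n;
}
```

With this shape, a tiny file that declares a billion entries costs at most
one CHUNK-sized allocation before the EOF check kicks in, at the price of
the extra bookkeeping described above.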

Also, how does this attack work?  If the number is way too big, we'll
get ENOMEM and error out.  If it is large but there is still enough
memory, we'll hit EOF and error out.  Would it be better to explicitly
check for EOF to avoid the loop running for longer than needed?  Or how
about freeing the memory on error, so we don't keep the useless buffer
around in the error case (even though the app will probably just free
the whole context anyway)?
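The free-on-error idea above is just the usual cleanup idiom; a minimal
sketch (again plain stdio, with a hypothetical `parse_offsets` standing in
for the saio parser) would be:

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Allocates `count` entries and reads them from `in`.  On a short read,
 * frees the partially filled array before returning, so the parser does
 * not leave a large dead allocation behind on the error path.
 * Returns 0 on success, -1 on ENOMEM-style failure, -2 on truncation
 * (standing in for AVERROR_INVALIDDATA). */
static int parse_offsets(FILE *in, uint64_t count, uint64_t **out)
{
    uint64_t *offsets = calloc(count, sizeof(*offsets));
    if (!offsets)
        return -1;
    if (fread(offsets, sizeof(*offsets), count, in) != count) {
        free(offsets);  /* don't keep the useless buffer around */
        *out = NULL;
        return -2;
    }
    *out = offsets;
    return 0;
}
```

Whether FFmpeg wants that here or just relies on the context being freed is
the question raised above; this only shows what the error-path free looks like.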

> Thank you, Carl Eugen
> _______________________________________________
> ffmpeg-devel mailing list
> ffmpeg-devel at ffmpeg.org
> http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
