[Ffmpeg-devel-irc] ffmpeg.log.20170806

burek burek021 at gmail.com
Mon Aug 7 03:05:02 EEST 2017


[00:08:50 CEST] <Roberth1990> hello, I'm new to this audio normalization thing, I'm trying to make dynaudnorm work, but with its default settings it seems way too aggressive. Let's say one person talks, and another person talks louder (raising his voice); with dynaudnorm the louder voice ends up even quieter than the other voice. How can I avoid this?
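A hedged guess at a gentler configuration for the question above: the dynaudnorm filter's `g` (gaussian window size, in frames, must be odd) and `m` (maximum gain factor) options smooth and cap the gain changes. The specific values below are starting-point assumptions, not a known fix.

```python
def gentle_dynaudnorm(g=51, m=5.0):
    # g: larger window -> gain changes ride over longer spans, so a
    #    deliberately raised voice is levelled less abruptly.
    # m: lower max gain -> quiet speech is not boosted past an
    #    intentionally louder speaker.
    return ["ffmpeg", "-i", "in.wav",
            "-af", f"dynaudnorm=g={g}:m={m}",
            "out.wav"]

print(gentle_dynaudnorm()[4])
```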
[02:01:03 CEST] <SomeClown> Quick question: Is it possible to use ffmpeg-python to retrieve all metadata from an M4V file? If so, can someone point me to appropriate documentation?
[02:04:19 CEST] <JEEB> "all metadata" is often a problematic thing, since first of all it's not well defined what counts as "metadata", and second of all it depends whether it's supported for reading by the libraries
[02:04:26 CEST] <JEEB> also no idea about the python wrapper
[02:04:43 CEST] <JEEB> in theory you can get a lot of stuff by just using ffprobe's `-of json` mode
[02:07:54 CEST] <SomeClown> Hmm... yeah, for all the power of Python, I'm finding it exceedingly difficult to read any metadata at all from Apple's media files (iTunes videos, music). Haven't found a single library that can do it, at least not yet.
[02:08:27 CEST] <JEEB> no idea who made that wrapper or how far it goes
[02:08:35 CEST] <SomeClown> Thanks, though... I'll play with ffmpeg for some other things.
[02:08:44 CEST] <JEEB> ffprobe's json output would at least be simple enough to parse from python :P
[02:08:47 CEST] <JEEB> from json import loads
[02:09:21 CEST] <JEEB> then use subprocess or something to execute ffprobe and catch stdout
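JEEB's recipe (run ffprobe with `-of json`, catch stdout via subprocess, feed it to `json.loads`) might look roughly like this in Python. The `probe` helper assumes `ffprobe` is on PATH; the sample string at the end stands in for real output.

```python
import json
import subprocess

def ffprobe_cmd(path):
    # The invocation JEEB describes: machine-readable JSON covering
    # container-level ("format") and per-stream metadata.
    return ["ffprobe", "-v", "quiet", "-of", "json",
            "-show_format", "-show_streams", path]

def probe(path):
    # Run ffprobe and parse its stdout (assumes ffprobe is on PATH).
    out = subprocess.run(ffprobe_cmd(path), capture_output=True,
                         text=True, check=True).stdout
    return json.loads(out)

# Parsing works the same on canned output, e.g. this trimmed sample:
sample = '{"format": {"filename": "in.m4v", "tags": {"title": "Demo"}}}'
meta = json.loads(sample)
print(meta["format"]["tags"]["title"])
```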
[02:09:51 CEST] <JEEB> (of course all that data is also available from the APIs as well but as I said I have no idea how well that library of yours integrates things)
[02:09:58 CEST] <SomeClown> I could certainly parse the json, but I'm trying to avoid running a command line tool from inside (using subprocess call).
[02:10:02 CEST] <JEEB> ok
[02:10:09 CEST] <SomeClown> Thanks, though!
[02:10:15 CEST] <JEEB> then godspeed because the wrappers tend to age fast
[02:10:29 CEST] <JEEB> with rust I'm using the mozilla-maintained bindgen
[02:10:41 CEST] <JEEB> and thankfully golang has pkg-config and header parsing by default
[02:11:45 CEST] <JEEB> manually writing wrappers just isn't a way to get things done with FFmpeg, due to the changes being made every now and then, and additional things being added even when backwards compatibility is kept
[02:12:01 CEST] <JEEB> anyways, in the API the metadata should be on the libavformat level
[02:12:14 CEST] <JEEB> open the demuxer and probe the input etc
[02:12:36 CEST] <JEEB> then the AVFormatContext should have a *lot* of data
[03:15:36 CEST] <thebombzen> sometimes it's just easier to execute ffmpeg.c or ffprobe.c and then grab the output
[03:15:51 CEST] <thebombzen> than it is to try to link to the libraries at least, because bindings and wrappers and shenanigans
[06:10:31 CEST] <xacktm> hey guys, is it possible to get nvenc working with a gtx 1060?  I get a "Cannot init CUDA" with yesterday's git HEAD https://bpaste.net/show/b6dc126a1354
[06:10:53 CEST] <xacktm> this forum post has a similar issue but there were no leads https://devtalk.nvidia.com/default/topic/740805/linux/nvenc-h-264-encoder-initialization-failed/2
[06:37:14 CEST] <FlorianBd> Hi! How can I turn   ffmpeg -i 'http://192.168.12.34/video/mjpg.cgi?profileid=1' -acodec copy -vcodec copy file.mp4   into something with motion detection that makes a new file every time new motion is detected?
[07:32:01 CEST] <mnr200> stream from rtsp server is not playing with ffplay >> like this  ffplay rtsp://localhost:8089/live.sdp
[07:37:17 CEST] <mnr200> what could be the possible problem?
[07:45:05 CEST] <mnr200> okay, got it working
[07:46:12 CEST] <mnr200> okay now I have another problem that I couldn't solve earlier, I got error like this >> Unknown input format: 'dshow'
[07:46:31 CEST] <mnr200> with this command >> ffmpeg -f dshow -i video="Virtual-Camera" -preset ultrafast -vcodec libx264 -tune zerolatency -b 900k -f mpegts rtsp://localhost:8554/live.sdp
[09:07:35 CEST] <npn> If I download an mp4, I can start playing it immediately while the download runs. The information about the total size seems to be present from the start. If I run ffmpeg on an mp4 - no encoding options just adding a watermark, I can not open the file until ffmpeg is complete. Is there any way to be able to start watching while ffmpeg is still running?
[09:17:01 CEST] <Blubberbub_> adding a watermark probably needs re-encoding
[09:20:07 CEST] <ritsuka> npn: you have to enable the fast start option
[09:20:34 CEST] <npn> Sorry for bad explanation. I AM reencoding, I meant- I'm not changing the format
[09:21:41 CEST] <npn> ffmpeg -i input.mp4 -vf drawtext="fontfile=/path/to/font.ttf: text='Test Watermark': fontcolor=white: fontsize=24: box=1: boxcolor=black@0.5: boxborderw=5: x=(w-text_w)/2: y=(h-text_h)/2" -codec:a copy output.mp4
[09:24:22 CEST] <Blubberbub_> do i understand you correctly, that you have ffmpeg write to a file that is being read from your webserver at the same time?
[09:25:25 CEST] <ritsuka> npn: sorry I didn't read your message carefully eh
[09:26:19 CEST] <npn> Yes- I have multiple users, and I want to watermark each video in a user specific way. I was hoping that the user could access the video immediately, while it is still being encoded.
[09:28:13 CEST] <npn> This https://stackoverflow.com/questions/6353519/is-it-possible-to-play-an-output-video-file-from-an-encoder-as-its-being-encode says NO for mp4, but maybe other formats would work?
[09:29:23 CEST] <bencoh> mpegts, matroska, ...
[09:29:51 CEST] <npn> The other constraint is that it needs to be HTML5/flowplayer compatible ;-) Webm ?
[09:30:00 CEST] <bencoh> fragmented mp4
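bencoh's fragmented-mp4 suggestion maps to the mov/mp4 muxer's `-movflags frag_keyframe+empty_moov`, which writes an empty moov up front and fragments on keyframes so a player can start reading before encoding finishes. A sketch of npn's command with that added (the drawtext arguments are trimmed for brevity; this builds the argument list rather than running it):

```python
def fragmented_mp4_cmd(src, dst):
    # frag_keyframe+empty_moov: progressively watchable fragmented MP4.
    # (The "+faststart" option ritsuka mentioned only helps *finished*
    # files, since it relocates the moov atom in a second pass.)
    return ["ffmpeg", "-i", src,
            "-vf", "drawtext=text='Test Watermark':fontcolor=white",
            "-codec:a", "copy",
            "-movflags", "frag_keyframe+empty_moov",
            dst]

cmd = fragmented_mp4_cmd("input.mp4", "output.mp4")
print(" ".join(cmd))
```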
[09:31:10 CEST] <bencoh> actually I'd expect flowplayer to support mpegts since they can play hls (iirc?), but ...
[09:31:11 CEST] <Blubberbub_> can that even work? I would assume it would break because the webserver can read the file faster than ffmpeg can write it, and would then decide it's finished. If it's even allowed to open a file that is being written in the first place
[09:31:39 CEST] <bencoh> Blubberbub_: true
[09:32:08 CEST] <bencoh> you'd have to throttle somehow
[09:32:15 CEST] <Blubberbub_> fragmenting could work
[09:32:37 CEST] <Blubberbub_> like have multiple files, so the user can already watch/load parts of the video
[09:33:06 CEST] <bencoh> Blubberbub_: you're basically reimplementing (a subset of) hls/dash/whatever :)
[09:33:47 CEST] <Blubberbub_> true
[09:34:48 CEST] <Blubberbub_> i wondered: is there a codec that allows "subregion encoding"? Let's say I have a watermark and I add it only to the lower right corner: then only re-encode the lower right corner, but copy the rest of the data?
[09:39:00 CEST] <npn> So close... using the fragmented mp4, yes I can start playing the video immediately, but the total length only shows as far as it has been fragmented. So the user gets cut off half way and then has to refresh the page to find out the true length. Is there any way to read the total length of the input file, and stick that on the output file from the start?
[09:42:53 CEST] <Blubberbub_> i would guess hls allows for something like that
[09:43:41 CEST] <npn> Tx
[09:43:52 CEST] <Blubberbub_> i don't know though
[09:44:33 CEST] <Blubberbub_> encoding every video for every user might take a lot of space...
[09:45:53 CEST] <npn> Yes indeed
[09:47:43 CEST] <Blubberbub_> a fast way to add a watermark would be great. But that would require deep knowledge of the codec, i guess.
[13:33:55 CEST] <lavalike> how do I translate x264 options to -x264-params format?
[13:34:52 CEST] <lavalike> I tried blindly s/ /:/g such as: -c:v libx264 -x264-params "cabac=1:ref=5:deblock=1:-2:-2:analyse=0x3:0x133"
[13:35:13 CEST] <lavalike> but the result has ref=3 so something's not right
[13:38:54 CEST] <furq> that's correct, so i guess it's being overwritten with the -preset medium defaults
[13:39:21 CEST] <furq> maybe try manually setting a preset before -x264-params
[13:39:50 CEST] <lavalike> ok!
[13:41:54 CEST] <lavalike> now some are right but others are wrong still
[13:42:05 CEST] <lavalike> I guess preset slow matches some of the settings I want to replicate, hehe
[13:43:03 CEST] <lavalike> could it be that   deblock=1:-2:-2  needs the : transformed to something else since they are used as a separator in the -x264-params ?
[13:58:18 CEST] <DHE> if you have a profile or level defined as well, ref might be clamped
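One plausible culprit for lavalike's `deblock=1:-2:-2` is exactly the separator clash suspected above: `:` also delimits options inside `-x264-params`, so everything after `deblock=1` is parsed as further options. A sketch of the blind s/ /:/g translation, with the inner colons of known multi-value keys rewritten as commas; whether x264's parser accepts `,` inside those values is an assumption to verify against your build, not a confirmed fix.

```python
# Keys whose printed values themselves contain ':' in x264's info line
# (an assumption based on the log; extend as needed).
MULTIVALUE_KEYS = {"deblock", "analyse"}

def to_x264_params(info_line):
    # Turn a space-separated x264 options dump into a ':'-separated
    # -x264-params string, protecting value-internal colons.
    parts = []
    for token in info_line.split(" "):
        key, _, value = token.partition("=")
        if key in MULTIVALUE_KEYS:
            value = value.replace(":", ",")
        parts.append(f"{key}={value}" if value else key)
    return ":".join(parts)

print(to_x264_params("cabac=1 ref=5 deblock=1:-2:-2"))
```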
[14:02:29 CEST] <ITR> I'm trying to copy a 32bit wav file to a video with "ffmpeg -loop 1 -framerate 30 -i ..\Pictures\P5.png -i ..\Music\NeverSawItComing.wav -c:v libx264 -preset medium -tune stillimage -crf 18 -c:a copy -shortest -pix_fmt yuv420p P5.mkv", but the audio gets corrupted, does anyone know what the error might be? Removing -c:a copy fixes it, but I assume that lowers audio quality?
[14:03:01 CEST] <ITR> By corrupted I mean that the sound becomes really horrible to listen to
[14:03:02 CEST] <DHE> you'll get a default audio codec with default bitrates.
[15:50:35 CEST] <TheWild> hello
[15:51:13 CEST] <TheWild> ffplay - how to disable spectrum when playing audio files? It just wastes processing power.
[15:51:49 CEST] <JEEB> just use a proper player like mpv instead which utilizes the libav* libraries underneath :)
[15:51:57 CEST] <JEEB> ffplay is just a Proof of Concept kind of thing
[15:53:27 CEST] <TheWild> I'm connected via SSH to the computer that serves as a jukebox.
[15:53:35 CEST] <TheWild> well, got it anyway. ffplay -nodisp
[15:53:51 CEST] <JEEB> yea, I see mpv or something working similarly enough
[15:54:07 CEST] <JEEB> since I use it to connect to pulseaudio and having no window of its own with audio :)
[15:54:18 CEST] <JEEB> (which is the default mode)
[16:12:17 CEST] <TheWild> awesome. Thanks for letting me know about mpv.
[16:15:34 CEST] <BtbN> Is there some mechanism in the hls muxer I could use to avoid ending up with several million files in one directory?
[16:15:48 CEST] <BtbN> Like, putting them in subdirs, with the first X digits of the segment index
[17:29:50 CEST] <DHE> BtbN: live? there's a flag for delete_segments so old segments go out
[17:30:00 CEST] <BtbN> I want all of them
[17:30:15 CEST] <BtbN> But not in one directory. That gets horribly slow
[17:30:31 CEST] <DHE> there are some flags to put timestamps or indexes into the directory names, but you'll have to look into the details yourself.
[17:30:43 CEST] <BtbN> There are no timestamps
[17:30:47 CEST] <BtbN> just an incrementing index
[17:31:01 CEST] <BtbN> And I can't see any format-string to get like the first X digits of it
[17:31:27 CEST] <BtbN> so segments 1500000 would end up in directory 150 or so
[17:34:36 CEST] <DHE> how about the option use_localtime_mkdir ? see the examples
[17:35:20 CEST] <BtbN> It's not a live stream
[17:35:28 CEST] <BtbN> So there is no useful localtime
[17:35:43 CEST] <DHE> ... right...
[17:40:13 CEST] <DHE> I don't have a good answer besides "rewrite it after it's done"
[17:40:38 CEST] <BtbN> it's actually over one million segments.
[17:40:50 CEST] <DHE> holy crap....
[17:41:13 CEST] <BtbN> I'm tempted to just hack it into the hlsenc.c
[17:41:32 CEST] <DHE> even if those are 1 second increments that's over 11 days
[17:41:49 CEST] <BtbN> it's two second segments, and 14 days
[17:43:38 CEST] <BtbN> Just gonna put some counter into there, and once it reaches 10k or something, it gets reset and another counter gets incremented, and that other counter is used for another directory name
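BtbN's reset-at-10k counter scheme is equivalent to deriving the directory straight from the segment index; a sketch (the helper name and the 10 000 threshold are just the numbers from this discussion, not anything hlsenc.c provides):

```python
import os

SEGMENTS_PER_DIR = 10_000  # the "once it reaches 10k" threshold

def segment_path(index, root="segments"):
    # Directory number is the index divided down, so segment
    # 1_500_000 lands in .../150/ -- the "first X digits" idea.
    return os.path.join(root, str(index // SEGMENTS_PER_DIR),
                        f"{index}.ts")

print(segment_path(1_500_000))
```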
[17:45:09 CEST] <c_14> BtbN: you could try using hls_flags +single_file ?
[17:45:25 CEST] <c_14> depends on if the clients support that of course
[17:46:56 CEST] <BtbN> I actually want those segments
[17:47:02 CEST] <BtbN> I don't really care about the playlist
[17:47:26 CEST] <BtbN> I have two gigantic single files right now
[17:49:21 CEST] <BtbN> it's for a script that operates on .ts or .flv segments, and stitches them together and streams them to YT
[17:57:33 CEST] <luc4> Hello! I built ffmpeg for armv8 cortex-a53 and I'm testing the performance of transcoding. When running it I see "using cpu capabilities: ARMv6 NEON". Is this correct?
[17:58:36 CEST] <JEEB> I will guess you have NEON and that's that primary thing
[18:01:37 CEST] <JodaZ> wouldn't you want to use an accelerator for transcoding on soc's ?
[18:07:27 CEST] <JEEB> JodaZ: depends on your needs and how crappy the API/SoC is
[18:07:36 CEST] <bencoh> errm, A53 should be armv8 though
[18:07:49 CEST] <bencoh> I suppose you're using it as a 32b target
[18:08:04 CEST] <JEEB> I've seen people have fallback openh264 or something because not everything encodes properly
[18:08:34 CEST] <bencoh> (armv8 is a 64b-capable architecture)
[18:09:02 CEST] <bencoh> (and neon set is different on armv8)
[18:09:24 CEST] <JEEB> yea, aarch64 is another thing there. a lot of people have their system on such SoCs armv7 due to vendor/whatever limitations
[18:09:41 CEST] <luc4> bencoh: yes, this is a cortexa53 64bit cpu, but the firmware is 32bit
[18:10:59 CEST] <luc4> bencoh: does this mean that when building for aarch32 the result is that only armv6 optimisations are applied?
[18:11:28 CEST] <JEEB> aarch64 optimizations of course aren't applied because it's not a 64bit binary
[18:11:39 CEST] <JEEB> NEON is as far as 32bit optimizations go
[18:12:01 CEST] <luc4> JEEB: neon is ok, but I see only armv6 optimisations
[18:12:22 CEST] <bencoh> neon is armv6-neon in your case
[18:12:23 CEST] <JEEB> I think that just means "I could use armv6 instructions"
[18:12:35 CEST] <bencoh> (which is not armv8 neon)
[18:12:37 CEST] <JEEB> there is no real other optimizations for ARM
[18:12:44 CEST] <luc4> JEEB: maybe there are no specific optimisations for armv7 and armv8?
[18:12:44 CEST] <JEEB> (in FFmpeg, other than aarch64)
[18:12:45 CEST] <kepstin> my impression is that armv8-compatible 64bit can run armv7 32bit stuff
[18:12:53 CEST] <bencoh> kepstin: it can, indeed
[18:13:08 CEST] <JEEB> luc4: well NEON is the only instruction set so that's what you really care about
[18:13:19 CEST] <JEEB> *only SIMD instruction set
[18:13:35 CEST] <luc4> so would you say this is a proper build that is using whatever ffmpeg can use?
[18:13:55 CEST] <JEEB> unless you disabled something I'd say yes
[18:14:06 CEST] <luc4> I'm not trying to fix a specific issue, I'm just experimenting.
[18:14:49 CEST] <luc4> the weird thing is that the same exact board seems to run ffmpeg faster when the firmware is one built for armv6 with a much older compiler
[18:15:02 CEST] <luc4> :-)
[18:15:14 CEST] <luc4> this is confusing me a bit
[18:16:37 CEST] <kepstin> I suspect the differences between armv6 and armv7 are pretty minimal for this application, assuming neon support is used.
[18:18:01 CEST] <kepstin> i guess it has the thumb-2 encoding support too, which might help.
[20:50:37 CEST] <vektorweg1> can ffmpeg capture gl? couldnt find anything in the docs.
[20:51:30 CEST] <JEEB> there are some screen capture modules but not sure if it's what you want
[20:52:57 CEST] <DHE> hardware accelerated capturing would be nice (on linux). all I know of is x11grab. CUDA grab could be used to make nvenc run really fast, right?
[20:54:34 CEST] <JEEB> no idea. I only know there's a nice API for screen capture on recent windows versions because virtualdub's author blogged about it
[20:54:43 CEST] <JEEB> (FFmpeg doesn't use that API anyways)
[20:57:04 CEST] <vektorweg1> afaik the new windows built-in screen capture is severely slowing down games even on relatively new hardware
[20:57:24 CEST] <BtbN> gdicapture is bad
[20:57:33 CEST] <BtbN> Use OBS if you want to capture a game
[20:57:54 CEST] <BtbN> DHE, CUDA grab? I don't think that's a thing.
[20:58:19 CEST] <BtbN> JEEB, they probably mean desktop duplication.
[20:58:26 CEST] <BtbN> OBS uses that for Monitor Capture
[20:58:43 CEST] <vektorweg1> BtbN: i was trying to avoid obs, because this software seems still pretty unstable to me and ffmpeg would be sufficient.
[20:59:04 CEST] <BtbN> what's unstable about it?
[20:59:05 CEST] <jkqxz> For single windows in X, you can use DRI2 to pick out the DRM object currently being used for scanout.  On Intel at least you can then give that straight to VAAPI to encode with no CPU involvement.
[20:59:16 CEST] <BtbN> We just ran a week long 24/7 marathon with it, and it didn't crash or fail once
[20:59:31 CEST] <vektorweg1> BtbN: i had crashes and output inconsistencies in obs.
[20:59:35 CEST] <DHE> BtbN: google says no, but an opengl capture can pass it over to cuda. that's probably what I was thinking of
[20:59:49 CEST] <BtbN> But how would you do OpenGL capture either?
[21:00:04 CEST] <BtbN> Windows gives you D3D surfaces at best
[21:00:28 CEST] <DHE> there is an opengl function that reads the framebuffer
[21:00:49 CEST] <BtbN> Of another application?
[21:01:25 CEST] <vektorweg1> BtbN: if there are tools that can record gl, why shouldn't ffmpeg be able to do it?
[21:01:36 CEST] <DHE> Maybe? I've had programs read their own framebuffers but also pick up window overlays (eg: right-click the title bar of the application)
[21:02:00 CEST] <BtbN> Because nobody wrote the sources
[21:02:09 CEST] <BtbN> A large part of the OBS code base is the capture code
[21:02:12 CEST] <vektorweg1> afaik, it's reading the framebuffer and registering GL swaps for timing.
[21:02:15 CEST] <BtbN> It's well tested and mature, use it
[21:02:41 CEST] <BtbN> OBS on Windows just injects a DLL and makes the game render into a texture, and sends that texture over to itself
[21:02:50 CEST] <BtbN> On Linux you just can't do that
[21:02:51 CEST] <vektorweg1> obs can capture a lot of other things too.
[21:03:05 CEST] <BtbN> OBS can capture anything using OpenGL or D3D on Windows that way
[21:05:47 CEST] <vektorweg1> hmm
[21:22:01 CEST] <vektorweg1> hm. how do overlays work then? how do some programs render in the gl context of another program?
[21:29:01 CEST] <BtbN> They inject a DLL, hook the swap function, and render their own stuff before actually swapping.
[21:44:54 CEST] <cq1> That's clever.
[21:48:06 CEST] <BtbN> It's pretty standard procedure on Windows. Even though it's batshit insane
[21:48:26 CEST] <JEEB> ye, even sound drivers can do random bullshit like that
[21:48:56 CEST] <JEEB> "yes we will totally filter your audio" "but it's float not integer?" "surefine.jpg" <MASSIVE NOISE>
[21:53:11 CEST] <vektorweg1> i'm kinda happy with pulseaudio about this now, because you can pick the audio you want instead of just capturing everything.
[21:55:04 CEST] <strangepr0gram> hello, I don't know if this is the right place to ask, but is it possible using ffmpeg or something else to remove these kinds of artifacts, for example present in this video https://www.shutterstock.com/video/clip-3566666-stock-footage-rapeseed-flower.html?src=recentview/3/3p --- If you look carefully enough you can see the blocks of pixels "jump around"
[21:56:45 CEST] <strangepr0gram> https://www.shutterstock.com/video/clip-26536403-stock-footage-panning-shot-of-flowers-barbarea-vulgaris.html?src=/V2nvmYJlWhQAYQsmdnJNIA:2:12/3p Can more easily be seen in this vid
[22:01:12 CEST] <kiroma> Does FFmpeg use -O3 or -O2 by default?
[22:02:50 CEST] <JEEB> kiroma: O2 but I think it disables a lot of the vectorization stuff
[22:02:56 CEST] <JEEB> due to miscompilation cases
[22:03:21 CEST] <cq1> JEEB: Woah, miscompilation? What do you mean?
[22:03:57 CEST] <JEEB> basically the last time people re-enabled vectorization etc optimizations there were failures in some versions of GCC etc for some architectures
[22:04:00 CEST] <BtbN> ffmpeg uses -O3 by default on clang and gcc
[22:04:04 CEST] <JEEB> oh right, yes
[22:04:23 CEST] <JEEB> I just remember the point of "it probably doesn't matter too much because vectorization"
[22:04:32 CEST] <JEEB> (which is disabled)
[22:04:50 CEST] <cq1> Bizarre.
[22:05:01 CEST] <JEEB> https://ffmpeg.org/pipermail/ffmpeg-devel/2016-May/194009.html
[22:05:07 CEST] <JEEB> here's parts of the mailing list discussion
[22:06:10 CEST] <cq1> Very interesting, thanks for the link.
[22:06:26 CEST] <cq1> I guess I am surprised that vectorization is that buggy, but then the codebase is pretty large.
[22:07:09 CEST] <JEEB> it seems to be a thing that happens every now and then. someone tries to re-enable vectorization. it might even get to master. then people find some compiler versions or architectures where things fail. then it gets retracted again.
[22:09:15 CEST] <kiroma> By the way, I've tried to compile with --enable-lto and it works but there are some warnings about type mismatches, should I be worried about them?
[22:09:54 CEST] <JEEB> no idea
[22:10:20 CEST] <JEEB> cq1: also do note that a lot of "heavy" stuff like H.264 or VP9 decoding has hand-written assembly in it
[22:17:48 CEST] <cq1> Certainly, understood.
[22:17:58 CEST] <cq1> I'm more surprised at the lack of stability than the fact that it's viable to disable it.
[22:18:05 CEST] <JEEB> yea
[22:18:48 CEST] <cq1> Although, I guess I shouldn't be too surprised about the lack of stability. I once discovered that the mere two lines of code struct T { std::string foobar; }; T* x = new T({""}); caused GCC 4.7.0 to spit out "internal compiler error".
[22:20:21 CEST] <JEEB> of course I wonder how much you would get in some less SIMD-induced things
[22:20:35 CEST] <JEEB> like HEVC decoding, which has a limited amount of SIMD but nowhere on the level of, say, VP9
[22:30:53 CEST] <Blaxar> tbh cq1: std::string is not a pointer in itself, that's why :d
[22:36:21 CEST] <strangepr0gram> Another question. is there a way to easily test 2 files to see quality differences?
[22:37:58 CEST] <strangepr0gram> like play both files at the same time but alternate between compressed versions each x seconds or something
[22:39:29 CEST] <JEEB> for actual quality there's no better way than eyes. for metrics there's psnr and ssim etc
[22:39:42 CEST] <JEEB> but do note that those do not match necessarily what the eye perceives as quality
[22:39:52 CEST] <JEEB> esp. not PSNR (favours blurring like hell)
[22:40:18 CEST] <JEEB> also, as usual you need to have the "original sample"
[22:40:23 CEST] <cq1> Blaxar: It's valid C++, as std::string has a non-explicit constructor from const char*, and thus the conversion is implicit. Regardless, even if it were invalid C++ the compiler shouldn't spit out "internal compiler error please submit a full bug report"
[22:40:30 CEST] <JEEB> you cannot do such comparisons without the source
[22:40:43 CEST] <JEEB> because you do not know what you are supposed to hit closest to
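For the metrics JEEB mentions, ffmpeg's own `psnr` and `ssim` filters do the comparison, and both require the original alongside the encode. A sketch of the invocation as an argument list; the filtergraph layout follows the two-input example in the ffmpeg filter documentation.

```python
def quality_metrics_cmd(encoded, original):
    # First input: the encode under test; second: the pristine source.
    # ssim picks up the first two streams implicitly, psnr gets them
    # by label. "-f null -" decodes and scores without writing a file.
    return ["ffmpeg", "-i", encoded, "-i", original,
            "-lavfi", "ssim;[0:v][1:v]psnr",
            "-f", "null", "-"]

cmd = quality_metrics_cmd("encode.mp4", "source.mp4")
print(" ".join(cmd))
```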
[22:40:54 CEST] <strangepr0gram> JEEB: yeah i know eyes
[22:41:45 CEST] <strangepr0gram> i mean I convert the original vid into like 5 versions, and I'd use ffmpeg or something to play the first one for 3 seconds, then the next one for 3 seconds [starting from second 3] etc, you get the idea
[22:41:59 CEST] <strangepr0gram> Extremely new to this
[22:43:50 CEST] <JEEB> basically, if you're looking for automated calculations you will also have to try and optimize the encodes not to human psychovisuals but specifically for some metric (thus, it becomes irrelevant to make a PSNR test of encodes if your encoder tried to optimize for visual perception)
[22:44:56 CEST] <JEEB> for visual comparison I would recommend getting something with which you can quickly swap between samples
[22:45:22 CEST] <JEEB> (and of course whatever you have that converts the YCbCr into RGB should be known to be same in all cases)
[22:45:41 CEST] <strangepr0gram> yea.
[22:46:40 CEST] <JEEB> what I've recently done is load up things with ffms2 or lwlibavvideosource in vapoursynth, and then use something like vapoursynth editor to preview
[22:46:52 CEST] <JEEB> not playback, but lets you compare things
[22:47:13 CEST] <JEEB> (you should generally compare things in motion)
[22:47:17 CEST] <markmedes2> Hello! I have a question: Is it possible to force ffmpeg to ignore input errors and continue encoding? I tried -err_detect ignore_err but it's not working
[22:48:44 CEST] <kiroma> What input errors are you getting?
[22:49:39 CEST] <markmedes2> Invalid data found when processing input
[22:49:44 CEST] <markmedes2> That's on the audio stream
[22:52:50 CEST] <markmedes2> I'm using ffmpeg to re-encode isdb-t video files from digital television, and the files usually have lots of errors due to the nature of airway transmissions and 4G interference
[22:59:39 CEST] <markmedes2> kiroma: https://pastebin.com/DCYGzmqQ
[23:00:16 CEST] <markmedes2> now I see it's in the video stream, not the audio
[23:02:13 CEST] <markmedes2> error while decoding MB 98 49, bytestream -8
[23:15:20 CEST] <c_14> use one of the values for -ec I guess (guess_mvs, deblock, favor_inter)
[23:15:50 CEST] <markmedes2> is that hint for me?
[23:16:17 CEST] <c_14> yes
[23:16:27 CEST] <markmedes2> ah ok, lemme check that
[23:16:53 CEST] <c_14> It's in the manpages, search for conceal
[23:20:01 CEST] <markmedes2> gonna take something close to an hour to test this flag, cause the error is at 01:58:30.31 and ffmpeg scans the whole file even if I use -ss
[23:22:23 CEST] <kiroma> put -ss before -i to skip most of the file
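kiroma's tip in code form: `-ss` placed before `-i` seeks in the input (fast, keyframe-based) instead of decoding and discarding two hours of frames, and `-t` bounds the duration from the seek point. The helper is hypothetical, just illustrating the argument order.

```python
def fast_seek_cmd(src, start, duration, dst):
    # -ss before -i: seek in the demuxer, skipping hours of decode.
    # -t: output duration measured from the seek point.
    return ["ffmpeg", "-ss", start, "-i", src, "-t", duration, dst]

cmd = fast_seek_cmd("in.mts", "01:58:00", "00:03:00", "out.ts")
print(" ".join(cmd))
```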
[23:22:41 CEST] <markmedes2> k
[23:23:33 CEST] <markmedes2> brilliant
[23:36:28 CEST] <markmedes2> Am I doing this right? Cause none of the -ec flags worked
[23:36:56 CEST] <markmedes2> time ffmpeg -ss 02:13:00 -i /home/user/mnt2/HBPVR/AMENINAROUBLIVROS.mts -to 02:16:00 -filter:v scale="640:trunc(ow/a/2)*2" -map 0:0 -map 0:1 -map 0:2 -vcodec libx264 -crf 20 -c:a copy -err_detect ignore_err -ec favor_inter /home/user/Desktop/output_favorinter.avi
[00:00:00 CEST] --- Mon Aug  7 2017


More information about the Ffmpeg-devel-irc mailing list