[Ffmpeg-devel-irc] ffmpeg.log.20180228

burek burek021 at gmail.com
Thu Mar 1 03:05:01 EET 2018


[00:00:24 CET] <alexpigment> well, it's probably because people are moving between formats and using similar command lines without knowing the peculiarities of each encoder
[00:00:36 CET] <alexpigment> and so someone smart decided "we should really not error on yuv420p for hardware encoders"
[00:00:46 CET] <JEEB> no, I don't think that was the case
[00:00:52 CET] <JEEB> what is the case you're talking about?
[00:01:00 CET] <alexpigment> i believe it's nvenc
[00:01:04 CET] <alexpigment> maybe qsv as well
[00:01:07 CET] <JEEB> can you please share the details so I can explain why it might happen the way it does
[00:01:51 CET] <alexpigment> hold on, testing for you
[00:02:30 CET] <JEEB> of course between yuv420p and nv12 the coded information is the same, so it isn't resampling. but I bet the reason for anything like that to happen is not as smart as that :P
[00:02:43 CET] <BtbN> nvenc natively supports yuv420p, there is no hacking going on
[00:02:48 CET] <BtbN> Any conversion is driver-internal.
[00:02:49 CET] <JEEB> ok, so that was it
[00:02:58 CET] <JEEB> the encoder supports 4:2:0 planar input
[00:03:04 CET] <JEEB> which is what yuv420p is
[00:03:20 CET] <JEEB> (NV12 is 4:2:0 half-packed)
[00:03:59 CET] <JEEB> but it shouldn't show NV12 in the encoder configuration then :P
[00:04:09 CET] <BtbN> it doesn't
[00:04:18 CET] <JEEB> yup
[00:04:35 CET] <alexpigment> yuv420p nv12
[00:04:38 CET] <alexpigment> it shows both
[00:04:44 CET] <jkqxz> Conversion is pretty much free on discrete cards with ~infinite memory bandwidth, so it makes sense to just take it there.  It won't when you're in main memory which is much more limited.
[00:04:48 CET] <alexpigment> but i'm pretty sure it always outputs nv12
[00:04:53 CET] <JEEB> the DECODER outputs NV12, yes
[00:05:11 CET] <alexpigment> i was talking about the encoder
[00:05:17 CET] <JEEB> then it cannot output NV12
[00:05:24 CET] <JEEB> it outputs H.264 which IIRC is internally planar
[00:05:25 CET] <BtbN> the encoder does output h264 or hevc
[00:05:42 CET] <JEEB> but in any case, it doesn't output any fscking pixel format
[00:06:31 CET] <jkqxz> H.264 4:2:0 is internally interleaved (blocks of 16x16 luma followed by 8x8 of each chroma).
[00:06:52 CET] <alexpigment> ok, i'll just claim ignorance on this
[00:07:10 CET] <JEEB> jkqxz: ok, consider me stupid then. :)
[00:07:35 CET] <BtbN> even x264 prefers nv12
[00:07:51 CET] <JEEB> ya, we could add support for NV12 in libx264.c
[00:08:04 CET] <JEEB> oh wait there is already
[00:08:15 CET] <jkqxz> So H.264 isn't YUV420P or NV12 in any real sense except insofar as it contains the same pixels in some order.
[00:08:32 CET] <JEEB> yes, that was the point I was more or less trying to make :P
[00:08:46 CET] <JEEB> (and it's compressed anyways)
[00:09:43 CET] <jkqxz> H.264 can be planar so the components are separate - that's separate_colour_plane_flag.  But that's not usable with subsampling.
[00:10:28 CET] <alexpigment> well, the original point i was trying to make was simply that when there is no option other than 48000, it doesn't make sense to fail. and there are other cases in ffmpeg where it does ignore what you specify
[00:10:51 CET] <alexpigment> for example, if you specify a bitrate for pcm, it says "one of the following blah blah blah isn't being used"
[00:11:50 CET] <JEEB> that's actually one of those things where I'd rather fail - you've set a setting but it can't be applied anywhere.
[00:12:02 CET] <alexpigment> i think that's just a developer mindset
[00:12:12 CET] <JEEB> uhhhh
[00:12:13 CET] <alexpigment> it's not an end-user friendly option
[00:12:30 CET] <JEEB> I've seen plenty of systems which silently ignore or just warn you about an option they can't recognize
[00:12:41 CET] <jkqxz> The "an option isn't being used" warning really is just a warning.
[00:12:42 CET] <JEEB> generally that's a Very Bad Idea. the user set an option, most likely for a reason
[00:13:06 CET] <JEEB> yes, I know it's a warning and I don't care enough to start PRs about it
[00:13:06 CET] <alexpigment> jkqxz: right. a warning is fine in this case
[00:13:18 CET] <alexpigment> anyway, you do you
[00:13:37 CET] <alexpigment> i disagree wholeheartedly on this type of thing, but it doesn't really bother me going forward, now that i know
[00:13:43 CET] <alexpigment> it's going to bite a few other people in the ass, i'm sure
[00:14:16 CET] <shtomik> good night to all ;)
[00:14:18 CET] <JEEB> well, you have set the feed to the encoder to something that isn't 48kHz, and the encoder only supports 48kHz. you have set that option FOR A REASON. if not, you remove it and go ahead
[00:14:32 CET] <JEEB> and yes, I agree that ffmpeg.c handles this with multiple different standards
[00:14:35 CET] <JEEB> sometimes it warns
[00:14:40 CET] <alexpigment> JEEB: you're saying this to a person who can read the command line
[00:14:42 CET] <JEEB> for example with AVOptions
[00:15:10 CET] <JEEB> if you've bought a fuck expensive application that uses ffmpeg.c in the background then you go complain at that person
[00:15:17 CET] <alexpigment> once ffmpeg is being used behind the scenes on some front end or application, it won't be obvious why it fails
[00:15:26 CET] <JEEB> or company
[00:15:40 CET] <alexpigment> JEEB: look, it's fine. you have a developer mindset. i've had this same type of conversation time and time again with you
[00:15:44 CET] <alexpigment> i don't agree with your standpoint on software
[00:15:53 CET] <alexpigment> i think most companies would fail with you at the helm :)
[00:16:06 CET] <alexpigment> but that's just my 2cents and i respect your opinion, even though i disagree with it
[00:16:31 CET] <JEEB> the wrapper on top of the libraries or the command line app should be the one doing things the right way and making the necessary things so that the filter chains etc go through
[00:16:52 CET] <JEEB> if the wrapper is stupid as fuck and doesn't even try to do that then the creator of that wrapper has failed
[00:17:50 CET] <JEEB> I value user-friendliness but I also value "you asked for X but this encoder cannot do X", especially since the darn encoder library has no resampling
[00:18:09 CET] <alexpigment> yes but ffmpeg does
[00:18:18 CET] <alexpigment> and it definitely does it automatically for you
[00:18:21 CET] <alexpigment> anyway, i'm done with this
[00:18:26 CET] <alexpigment> i respect your opinion
[00:18:42 CET] <JEEB> ffmpeg.c does when you don't tell it how to create the goddamn filter chain
[00:18:45 CET] <JEEB> filter chain goes to encoder
[00:19:19 CET] <alexpigment> ?
[00:19:20 CET] <hojuruku> jkqxz: JEEB - is there some way of using a pipe or a fifo to encode to work around this bug or you have to encode to intermediate files first and then mux it all together?
[00:19:47 CET] <alexpigment> is -ar not aformat=sample_rate?
[00:19:52 CET] <hojuruku> gstreamer is going to have to go off and fix their code too, because the hardware player can play it now that ffmpeg has muxed it, but gstreamer can't, which is interesting.
[00:20:04 CET] <JEEB> it is, it tells the filter chain output you want. the output of that filter chain is what then gets put into the encoder
[00:20:18 CET] <JEEB> when you don't set the audio rate it will look up what the encoder can take, most likely
[00:20:24 CET] <JEEB> and add an implicit conversion filter there
[00:20:25 CET] <JEEB> but
[00:20:37 CET] <JEEB> when you ask the filter chain's output to be X, that is a hard requirement from you
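To make JEEB's point concrete, a minimal sketch (not from the log), assuming a 44.1 kHz input and an encoder that only accepts 48 kHz; the encoder name is a placeholder:

    ffmpeg -i in.wav -ar 44100 -c:a encoder_that_only_does_48k out.mkv   # hard requirement from the user, so it fails
    ffmpeg -i in.wav -c:a encoder_that_only_does_48k out.mkv             # no rate asked for, ffmpeg.c inserts the resample itself

-ar is effectively shorthand for fixing the audio filter chain's output rate, roughly like -af aresample=44100.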
[00:20:47 CET] <hojuruku> JEEB: libx264 supports NV12 internally now too right? i thought you already allowed libx264 to use nv12 if available as an input instead of yuv420p
[00:21:04 CET] <JEEB> hojuruku: I just checked libx264.c and it has the NV12 define
[00:21:09 CET] <JEEB> as I noted it already did support NV12 :P
[00:21:22 CET] <BtbN> pretty sure it internally converts everything to NV12
[00:21:27 CET] <JEEB> yes
[00:21:28 CET] <BtbN> so NV12 is the original input format
[00:21:32 CET] <JEEB> it internally converts to NV12
[00:22:42 CET] <hojuruku> with the audio and video going out into separate files now, will there be sync issues or should it be ok?
[00:25:03 CET] <JEEB> alexpigment: the thing is that a command line can't stop you from writing whatever you want, the way a graphical user interface can by only letting you pick certain things (because it can query beforehand which sample rates an encoder supports etc). even in graphical interfaces I'm all for giving the user what the user asks for, but also making absolutely sure that whatever
[00:25:09 CET] <JEEB> is picked can actually be done :P
[00:25:15 CET] <shtomik> Okay, if I set the sample rate on enc_ctx, that's what the encoder requires, right? And if I set aformat, that's the format I get on the output of the filter?
[00:25:54 CET] <JEEB> in a command line app the best thing you can do is just tell the user that he's asked for something that can't be done after the fact, which does suck - yes
[00:58:47 CET] <hojuruku> hmm, what does this do: -vf 'format=nv12|vaapi,hwupload' - the | character?
[00:58:55 CET] <hojuruku> https://github.com/jp9000/obs-studio/pull/941#issuecomment-346235933
[01:00:26 CET] <atomnuker> hojuruku: you shouldn't use x11grab, you should use kmsgrab, its far faster
[01:01:12 CET] <hojuruku> ok atomnuker, i was interested in what the | character in format= does.
[01:01:47 CET] <atomnuker> it converts to nv12
[01:08:29 CET] <hojuruku> https://github.com/jp9000/obs-studio/pull/941#issuecomment-368129704 I wonder what he means by "ffmpeg's implementation". Does ffmpeg allow hardware acceleration to happen automatically if available?
[01:10:10 CET] <hojuruku> atomnuker: so it takes nv12 from -f x11grab (in that example - i know, i know, kmsgrab is better) and converts it to vaapi's internal format
[01:10:45 CET] <atomnuker> no, it takes bgr0 from x11grab (I think) and converts that to something that can be uploaded to the hardware
[01:28:06 CET] <jkqxz> hojuruku:  The '|' character is saying "or".  The 'format=nv12|vaapi,hwupload' construction is used when you don't know whether the input will decode in hardware or not - hwupload passes through vaapi surfaces directly, so you can use the same filter chain without knowing in advance which will happen.
[01:28:54 CET] <jkqxz> Using it and -hwaccel with x11grab does nothing; it's just cargo-culting on the part of whoever wrote that.
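For reference, the construction jkqxz describes usually appears in a full VAAPI transcode along these lines (a sketch only, with placeholder filenames):

    ffmpeg -hwaccel vaapi -hwaccel_output_format vaapi -i input.mp4 \
        -vf 'format=nv12|vaapi,hwupload' -c:v h264_vaapi output.mp4

If the input decoded in hardware the frames are already vaapi surfaces and hwupload passes them through; if it fell back to software they are converted to nv12 and then uploaded.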
[06:34:06 CET] <kota1> I'm using ffmpeg to do a stream, and I'm running into inconvenient behavior
[06:34:11 CET] <kota1> I'm using the realtime filter to output the stream in, well, realtime
[06:34:16 CET] <kota1> My files have subtitles that I'm burning into the stream
[06:34:25 CET] <kota1> If I try to seek the file using output seeking, the subtitles appear at the proper time (a line spoken at 00:30 is seen/heard at the same time subtitles appear for 00:30), but it doesn't jump to that location of the file
[06:34:35 CET] <kota1> If I try to seek the file using input seeking, it seeks immediately, but the subtitles do not appear at the proper time (a line spoken at 00:30 in the new timecode does not have its appropriate subtitle, the subtitles start appearing as though the file hadn't been seeked at all)
[06:34:44 CET] <kota1> Is there a way to reconcile this behavior so that I can seek a file, output it in real time for the stream, and have the subtitles timed properly?
[06:34:48 CET] <kota1> ffmpeg -ss 00:30:00 -i file.mkv -vf subtitles=file.mkv,realtime gives me mistimed subtitles
[06:34:52 CET] <kota1> ffmpeg -i file.mkv -ss 00:30:00 -vf subtitles=file.mkv,realtime gives me aligned subs, but takes 00:30:00 to start streaming
[07:42:33 CET] <nersesyan> hello. I have some questions.. Can anyone help me?
[08:01:01 CET] <furq> not unless you ask the question
[08:01:31 CET] <furq> kota1: does it work if you use -re instead of the realtime filter
[08:22:14 CET] <nersesyan> I have a problem with the output. I have an http input link from a dvb-t2 card, and when i type ffmpeg -i http://IP/play/stream_name -c:v copy -c:a copy -f flv rtmp://IP/myapp/strem_name the output has no video, but when i type ffmpeg -i http://IP/play/stream_name -c:v copy -c:a copy -f hls /tmp/myapp/stream_name.m3u8 i can see the video
[08:27:51 CET] <furq> nersesyan: run the rtmp command and pastebin the console output
[08:29:38 CET] <nersesyan> furq: you mean paste output here?
[08:32:02 CET] <Nethe> Hi guys. I have a little issue during the ffmpeg compilation. I installed the latest Nvidia Cuda, and tried to compile the ffmpeg with those parameters: --enable-cuda --enable-cuvid --enable-nvenc --extra-cflags=-I/usr/local/cuda/include --extra-ldflags=-L/usr/local/cuda/lib64 --enable-gpl --enable-libx264 --disable-x86asm --enable-libx265 --enable-libfdk-aac --enable-nonfree
[08:32:12 CET] <furq> nersesyan: put it on a pastebin
[08:32:19 CET] <Nethe> but the compilation told me: ERROR: cuvid requested, but not all dependencies are satisfied: cuda
[08:32:23 CET] <Nethe> ok sorry
[08:34:58 CET] <Nethe> Ok. Here are the parameters for compilation: https://pastebin.com/iheDrDjK . I am still getting ERROR: cuvid requested, but not all dependencies are satisfied: cuda, but latest nvidia cuda is installed. I am also using latest ffmpeg snapshot.
[08:35:13 CET] <nersesyan> furq: https://pastebin.com/3RPP5SCN
[08:37:20 CET] <furq> i don't see anything obviously wrong there
[08:41:29 CET] <nersesyan> furq: but when i play it in vlc, only the audio plays. And when I use -f hls /tmp/iptv/stream_name.m3u8 as the output, everything is ok
[08:49:23 CET] <nersesyan> furq: here is the ffprobe output https://pastebin.com/w8ck47Jq
[09:32:17 CET] <Nethe> Nobody? :/
[09:35:17 CET] <blap> not me
[10:16:46 CET] <shreyansh_k> Hi guys, College student here. A stupid test requires webcam access. Faked a webcam with v4l2loopback at /dev/video0. How do I feed it a black image continuously to satisfy the test requirement?
[10:17:14 CET] <shreyansh_k> able to feed it the screen with "ffmpeg -f x11grab -r 15 -s 1280x720 -i :0.0+0,0 -vcodec rawvideo -pix_fmt yuv420p -threads 0 -f v4l2 /dev/video0" but ideally i don't want to show them anything.
[10:19:34 CET] <Nacht> shreyansh_k: Is this something ? http://www.bogotobogo.com/FFMpeg/ffmpeg_video_test_patterns_src.php
[10:30:13 CET] <shreyansh_k> Nacht: able to generate black stream with 'ffplay -f lavfi -graph "color=c=black [out0]" dummy'. How do I dump it to the camera?
[10:41:36 CET] <shreyansh_k> Nacht: thanks. I figured it out. 'ffmpeg -f lavfi -graph "color=c=black [out0]" -i out0 -vcodec rawvideo -pix_fmt yuv420p -threads 0 -f v4l2 /dev/video1'
[10:49:39 CET] <furq> shreyansh_k: you don't need the output label and mapping
[10:50:04 CET] <furq> just -f lavfi -i color=c=black
[10:51:14 CET] <furq> or i guess -f lavfi -i color=c=black:s=1280x720
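Putting furq's suggestions together, the whole thing collapses to something like this (a sketch, assuming the same v4l2loopback device as above):

    ffmpeg -f lavfi -i color=c=black:s=1280x720:r=15 -pix_fmt yuv420p -f v4l2 /dev/video0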
[12:36:22 CET] <kms_> how to smooth zoom picture?
[12:36:34 CET] <kms_> without jiggle
[12:37:02 CET] <kms_> and without 10x pre upscaling
[12:42:19 CET] <durandal_1707> can't, you'd need to resample pixels
[12:48:41 CET] <olspookishmagus> I have a 3GP that has a variably encoded AAC audio channel which I'm trying to extract, and even though I use: ffmpeg -i foo.3gp -vn -c:a copy foo.m4a
[12:48:53 CET] <olspookishmagus> I get a file with a constant bitrate
[12:49:32 CET] <kms_> durandal_1707 resample every frame?
[12:50:39 CET] <olspookishmagus> From: (..., Bit rate mode : Variable, Bit rate : 32.0 kb/s, Maximum bit rate : 34.4 kb/s, ...) To: (..., Bit rate mode : Constant, Bit rate : 32.0 kb/s, ...)
[12:51:15 CET] <olspookishmagus> I will agree that there's nothing much to worry about
[13:09:56 CET] <durandal_1707> kms_: pixels
[13:10:23 CET] <kms_> how todo this?
[13:11:22 CET] <durandal_1707> it's very complicated iirc
[13:12:39 CET] <durandal_1707> you interpolate pixels
[13:13:10 CET] <durandal_1707> nearest would cause jiggle
[13:13:32 CET] <kms_> now i do this:    scale=-5:3600,zoompan=z='min(zoom+0.0005,5)':x='(iw/2-(iw/zoom/2))':y='(ih/2-(ih/zoom/2))':d=150:fps=30
[13:14:00 CET] <kms_> but it is very slow to process
[13:14:10 CET] <durandal_1707> you can't; you'd need to modify the zoompan code
[13:41:13 CET] <olspookishmagus> FYI: I've emailed: ffmpeg-users on what I was asking in here just before
[13:41:51 CET] <olspookishmagus> (in case someone else would like to see (if there will be) the answer)
[13:52:07 CET] <kiroma> Why do you need to recode a video entirely if you only want to change PTS?
[13:53:33 CET] <JEEB> because there's no simple option to override PTS with ffmpeg.c
[13:53:38 CET] <JEEB> it's possible in API just fine
[13:53:48 CET] <JEEB> it's stupid, I know :P
[13:54:31 CET] <kiroma> Ah okay.
[13:57:48 CET] <JEEB> (of course with various formats it really isn't /that/ simple, since you have to take reordering (with b-frames etc) into account)
[14:10:58 CET] <kiroma> No, I see - I was just wondering why that was the case, and if my understanding was correct.
[15:36:09 CET] <classsict> hi, can somebody help me with vaapi and filters?
[15:36:42 CET] <classsict> I already read the docs, but I don't understand when or why I have to initialize the vaapi device
[15:37:20 CET] <classsict> I get this error :  Failed setup for format vaapi_vld: hwaccel initialisation returned error.
[15:42:56 CET] <jkqxz> That probably means your stream is not supported by the hardware.
[15:53:27 CET] <classsict> like this: Codec h264 profile 66 not supported for hardware decode
[15:54:07 CET] <jkqxz> Indeed.  No hardware supports H.264 baseline profile.
[15:55:48 CET] <classsict> so, I have an ip camera
[15:56:26 CET] <classsict> yes! I changed it to main in the configuration and it works :D!
[16:00:53 CET] <classsict> thanks.
[16:46:53 CET] <zerodefect> Using the C-API, looking to encode an SRT to a DVB bitmap. Is anybody able to point me to some examples somewhere?
[16:56:21 CET] <pgorley> hi, is there a way to access the X264Context using libavcodec?
[16:57:22 CET] <pgorley> more specifically, i'd like to have access to the x264_picture_t
[17:06:13 CET] <kota1> furq: In my experience there haven't been any changes, and the -re flag reads through the input in realtime iirc. I'll try it I suppose
[17:06:44 CET] <hellyeah> hey
[17:06:56 CET] <hellyeah> can i convert an rtsp stream to an http stream with ffmpeg?
[17:07:27 CET] <BtbN> you can write to a streamable file and serve that via a http server of your choice, if that's what you mean by http streaming
[17:07:29 CET] <kota1> furq: Yeah, same behavior as before
[17:08:19 CET] <hellyeah> so putting a file that contains the rtsp stream inside an http server
[17:08:25 CET] <hellyeah> ?
[17:08:33 CET] <hellyeah> hm
[17:08:37 CET] <saml> what's rtsp
[17:08:45 CET] <BtbN> more like mkv or ts
[17:08:52 CET] <hellyeah> ip cam thing
[17:09:04 CET] <saml> if you have a file, you can serve it via http
[17:09:17 CET] <saml> there's also Dash and HLS
[17:09:37 CET] <hellyeah> i dont have file
[17:09:42 CET] <hellyeah> i can record it to file
[17:09:49 CET] <hellyeah> but file might be really big
[17:11:01 CET] <saml> you want to create a web application that can view rtsp?
[17:11:25 CET] <saml> and you don't want to archive it? (saving it to a file for later replay and download)
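A minimal sketch of the HLS route saml mentions, with a placeholder camera URL and output path; the segment options are just reasonable defaults and assume the camera already sends h264/aac so -c copy works:

    ffmpeg -rtsp_transport tcp -i rtsp://camera.local/stream -c copy \
        -f hls -hls_time 4 -hls_list_size 6 -hls_flags delete_segments /var/www/html/live.m3u8

The playlist and the .ts segments next to it can then be served by any plain http server.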
[17:11:33 CET] <kota1> I'm using ffmpeg to do a stream, and I'm running into inconvenient behavior
[17:11:37 CET] <kota1> I'm using the realtime filter to output the stream in, well, realtime
[17:11:42 CET] <kota1> My files have subtitles that I'm burning into the stream
[17:11:45 CET] <kota1> If I try to seek the file using output seeking, the subtitles appear at the proper time (a line spoken at 00:30 is seen/heard at the same time subtitles appear for 00:30), but it doesn't jump to that location of the file
[17:11:48 CET] <kota1> If I try to seek the file using input seeking, it seeks immediately, but the subtitles do not appear at the proper time (a line spoken at 00:30 in the new timecode does not have its appropriate subtitle, the subtitles start appearing as though the file hadn't been seeked at all)
[17:11:52 CET] <kota1> Is there a way to reconcile this behavior so that I can seek a file, output it in real time for the stream, and have the subtitles timed properly?
[17:11:56 CET] <kota1> ffmpeg -ss 00:30:00 -i file.mkv -vf subtitles=file.mkv,realtime gives me mistimed subtitles
[17:11:59 CET] <kota1> ffmpeg -i file.mkv -ss 00:30:00 -vf subtitles=file.mkv,realtime gives me aligned subs, but takes 00:30:00 to start streaming
[17:16:07 CET] <saml> kota1, why use realtime filter?
[17:16:25 CET] <saml> ffmpeg -i file.mkv -ss 00:30:00 -vf subtitles=file.mkv
[17:16:28 CET] <saml> without realtime works?
[18:18:22 CET] <alexpigment> is libfdk_aac still the only encoder that can do HE-AAC?
[18:18:27 CET] <alexpigment> or did that get added to the native encoder
[18:21:00 CET] <alexpigment> it seems like it's writing HE-AAC, but i don't see anything in the resultant file that identifies it as such
[18:21:16 CET] <BtbN> Just use opus
[18:21:23 CET] <alexpigment> this is not for quality reasons
[18:21:29 CET] <alexpigment> this is just building up a test bank of files to test against
[18:21:54 CET] <alexpigment> i guess i'll have to build a new copy of ffmpeg that explicitly includes fdk
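For the record, with an ffmpeg built against libfdk-aac, HE-AAC is selected through the encoder profile; a sketch with placeholder filenames:

    ffmpeg -i input.wav -c:a libfdk_aac -profile:a aac_he -b:a 64k output.m4a      # HE-AAC
    ffmpeg -i input.wav -c:a libfdk_aac -profile:a aac_he_v2 -b:a 32k output.m4a   # HE-AACv2

The native aac encoder only produces AAC-LC, so it cannot write HE-AAC.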
[19:00:05 CET] <saml> fdk aac is good
[19:00:14 CET] <saml> that's what i'm told
[19:00:28 CET] <saml> how do you build ffmpeg? can you share your ffmpeg building routine alexpigment ?
[19:00:36 CET] <saml> is it dockerized in the cloud?
[19:01:01 CET] <alexpigment> no, i build using this: https://github.com/rdp/ffmpeg-windows-build-helpers
[19:01:03 CET] <alexpigment> on linux
[19:01:04 CET] <alexpigment> for windows
[19:01:09 CET] <saml> wow leet
[19:01:21 CET] <alexpigment> a bit convoluted, sure, but it was easy enough for my non-developer brain to understand
[19:01:50 CET] <saml> i need to build ffmpeg myself to include fdk aac as well
[19:02:03 CET] <saml> but it's for alpinelinux
[19:02:15 CET] <alexpigment> well with that script, if you do nothing to it, the first prompt during build is whether you want to include it
[19:02:46 CET] <alexpigment> yeah, i'm not sure what all is needed to compile for linux, but i'm sure there are tutorials out there
[19:03:18 CET] <saml> yeah
[19:04:38 CET] <DHE> for linux it's pretty easy. build and install fdk, then build ffmpeg. the only thing  you need to watch out for is the pkg-config path if you're using the default prefix=/usr/local
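Roughly, the routine DHE describes, as a sketch assuming fdk-aac's default /usr/local prefix:

    git clone https://github.com/mstorsjo/fdk-aac && cd fdk-aac
    ./autogen.sh && ./configure && make && sudo make install
    cd ../ffmpeg
    PKG_CONFIG_PATH=/usr/local/lib/pkgconfig ./configure --enable-libfdk-aac --enable-nonfree
    make && sudo make install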
[19:09:00 CET] <c_14> kota1: use -copyts
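That is, keep input seeking for speed but preserve the original timestamps so the subtitles filter stays in sync; a sketch based on kota1's own command (exact option placement, or an extra output-side -ss, may be needed depending on the muxer):

    ffmpeg -ss 00:30:00 -copyts -i file.mkv -vf subtitles=file.mkv,realtime ...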
[20:17:11 CET] <GamleGaz> is memcpy a valid way to set frame data?
[20:17:45 CET] <GamleGaz> ie would memcpy((uint8_t*) frame->data[0], buffer, nr_bytes) work?
[20:21:01 CET] <BtbN> if you absolutely need to copy, sure
[20:21:48 CET] <GamleGaz> is there a better way to copy audio data into a frame?
[20:22:07 CET] <BtbN> don't copy at all, reference where it already is.
[20:23:44 CET] <DHE> there's an AVBufferRef system where the frame is reference counted under the hood, allowing for shared memory on all the "copies"
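A rough C sketch (not from the log) of the two approaches being contrasted here; the helper names are made up and error handling is trimmed:

    #include <string.h>
    #include <libavutil/buffer.h>
    #include <libavutil/frame.h>

    /* copy: let the frame allocate its own buffers, then memcpy into them */
    static int fill_by_copy(AVFrame *frame, const uint8_t *src, size_t size)
    {
        /* nb_samples, format, channel layout and sample_rate assumed already set */
        int ret = av_frame_get_buffer(frame, 0);
        if (ret < 0)
            return ret;
        memcpy(frame->data[0], src, size);
        return 0;
    }

    /* no copy: hand ownership of an existing av_malloc'd buffer to the frame */
    static int fill_by_reference(AVFrame *frame, uint8_t *buf, size_t size)
    {
        frame->buf[0] = av_buffer_create(buf, size, av_buffer_default_free,
                                         NULL, 0);
        if (!frame->buf[0])
            return AVERROR(ENOMEM);
        frame->data[0] = frame->buf[0]->data;
        return 0;
    }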
[20:44:04 CET] <GamleGaz> alright, thanks
[20:45:47 CET] <GamleGaz> also is there any difference in encoding/muxing the entire video first, then doing the audio vs alternating between them?
[22:52:09 CET] <marcurling> Hello, can you pause/interrupt an encoding and pick it up again later?
[22:57:47 CET] <DHE> on unix, you can probably CTRL+Z to pause execution. for file-to-file transcoding this is safe. just don't close the terminal
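Equivalently, from another terminal (the PID is whatever ps reports for the running ffmpeg):

    kill -STOP <ffmpeg-pid>   # same effect as Ctrl+Z: the process is frozen in place
    kill -CONT <ffmpeg-pid>   # resume where it left off

This only works while the process and the machine stay up; it does not survive a reboot.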
[22:58:32 CET] <marcurling> the thing is that I'd like to reboot ;(
[22:58:49 CET] <marcurling> but thanks, indeed
[23:00:10 CET] <DHE> oh dear
[23:01:16 CET] <jkqxz> No chance you happen to be running in a VM which you can snapshot?
[23:02:07 CET] <marcurling> Oh, that is a good idea for the future, jkqxz. Congrats!
[00:00:00 CET] --- Thu Mar  1 2018

