[Ffmpeg-devel-irc] ffmpeg.log.20171224

burek burek021 at gmail.com
Mon Dec 25 03:05:01 EET 2017


[00:34:07 CET] <therage3> ok, so these are the two 1920x1080 video streams available for a video on youtube, after i download them and parse them through ffprobe: https://pastebin.com/raw/yJrXXTHD    https://pastebin.com/raw/bkDCsM7b
[00:34:20 CET] <therage3> is there a way to figure out from just that which one is higher quality?
[00:37:31 CET] <Djfe> not really; first of all, quality is subjective
[00:38:00 CET] <therage3> i see
[00:38:15 CET] <Djfe> second: vp9 might be the newer codec but depending on the settings the output might look worse
[00:38:24 CET] <Djfe> how have you encoded these videos?
[00:38:32 CET] <Djfe> Visual comparison is probably the best
[00:39:12 CET] <therage3> that's the thing, *I* didn't encode them, youtube did
[00:39:21 CET] <therage3> i downloaded them from youtube (youtube-dl)
[00:39:33 CET] <therage3> so all the information i have is what ffprobe yanks out
[00:39:37 CET] <Djfe> The next important factor: YouTube will probably re-encode those videos anyway. It doesn't have to, but if you have a file with a higher bitrate available and a fast flat-rate internet connection, you might want to upload that instead and let YouTube do the transcoding
[00:39:49 CET] <Djfe> oh lol, I see
[00:39:53 CET] <therage3> wait, what??
[00:40:16 CET] <Djfe> I see, I thought you were the uploader
[00:40:19 CET] <therage3> no, no, I don't want to *upload* anything on youtube
[00:40:24 CET] <therage3> yeah
[00:40:27 CET] <Djfe> but you're the downloader and want the best looking output
[00:40:33 CET] <therage3> right, correct
[00:41:03 CET] <Djfe> H.264 is better supported across hardware, but VP9 is getting there. If your device supports vp9 playback, choose vp9 ;)
[00:41:26 CET] <therage3> i see, that's in terms of cross-compatibility, but not necessarily video quality
[00:41:30 CET] <Djfe> it'll likely look a bit better
[00:41:34 CET] <therage3> i see!
[00:41:41 CET] <therage3> ok that's an interesting fact
[00:44:08 CET] <Djfe> personally I'm looking forward to AV1, which will be finalized end of this month. It's the best open source codec to date and will be 30-35% better compared to hevc (all around not only at high resolution).
[00:44:25 CET] <therage3> that sounds good
[00:44:30 CET] <Djfe> YouTube, Netflix, Twitch everybody will switch to it on the web, once it's out
[00:44:36 CET] <Djfe> (from vp9)
[00:45:05 CET] <Djfe> though the encoders and decoders still need to be optimized before they can be used and hardware support will take >1 year till it's out I think
[00:45:55 CET] <therage3> i see
[00:46:08 CET] <Djfe> The programs will be optimized for better performance (faster encode times) once the bitstream is frozen (end of this month).
[00:46:57 CET] <therage3> youtube's stuff sort of sucks... the audio that comes with their video is lower quality than their separate audio-only streams for a lot of their videos
[00:47:10 CET] <therage3> so to get the best video and audio, I have to download each separately and mux them
[00:47:30 CET] <Djfe> http://www.streamingmedia.com/Articles/Editorial/Featured-Articles/AV1-A-Status-Update-120214.aspx
[00:48:01 CET] <Djfe> I didn't know that they offer audio separately, interesting ^^
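The separate-download-and-mux workflow described above can be sketched as a stream copy (no re-encoding); the filenames below are hypothetical placeholders for the two streams fetched separately:

```shell
# video.webm: VP9 video-only stream; audio.webm: Opus audio-only stream
# (placeholder names for files downloaded separately with youtube-dl).
# -c copy remuxes without re-encoding, so no quality is lost.
ffmpeg -i video.webm -i audio.webm -c copy -map 0:v:0 -map 1:a:0 muxed.webm
```

youtube-dl can also perform this mux itself via `-f bestvideo+bestaudio`, assuming ffmpeg is on the PATH.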
[00:48:27 CET] <CoreX> just go with dash
[00:48:29 CET] <CoreX> 248          webm       1920x1080  1080p 2423k , vp9, 24fps, video only, 16.96MiB
[00:48:29 CET] <CoreX> 137          mp4        1920x1080  DASH video 2516k , avc1.640028, 24fps, video only, 17.08MiB
[00:48:31 CET] <Djfe> 150kbit/s aac is already pretty transparent though. Are they using opus for vp9?
[00:48:46 CET] <therage3> no, but *I* am lol
[00:48:53 CET] <therage3> Opus is what I used to mux in with the vp9
[00:49:25 CET] <therage3> CoreX: I see, DASH is the one that isn't vp9, which Djfe suggested is perhaps the better one
[00:49:55 CET] <Djfe> vp9 isn't transported over dash?
[00:50:15 CET] <Djfe> dash is only the container, so I thought it shouldn't matter
[00:50:50 CET] <kazuma_> anyone know why im getting this error when trying to demux a .aac audio stream?
[00:50:52 CET] <kazuma_> [ipod @ 0000000000552240] aac bitstream error
[00:50:52 CET] <kazuma_> Last message repeated 11150 times
[00:51:35 CET] <Djfe> https://developers.google.com/youtube/v3/live/guides/encoding-with-dash#understanding-dash
[00:52:02 CET] <Djfe> therage3: I dunno why vp9 doesn't say dash, but it should actually be dash afaik
[00:52:03 CET] <therage3> Djfe: https://pastebin.com/raw/MnMLNMtU  << these are the available ones for download
[00:52:17 CET] <Djfe> dash is a protocol, not a container (I misused the term)
[00:52:27 CET] <therage3> I see
[00:53:11 CET] <Djfe> kazuma: you could report it as a bug, if you want it fixed: https://ffmpeg.org/bugreports.html
[00:53:32 CET] <Djfe> kazuma: I don't know what the error means, do you hear any difference after encoding it to let's say flac?
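A quick way to act on that suggestion, as a sketch (input.aac is a placeholder for kazuma_'s file): decode the stream to nowhere and show only errors, or transcode to FLAC for a listening test.

```shell
# Decode-only check: -f null discards the output; -v error prints just errors.
ffmpeg -v error -i input.aac -f null -
# Re-encode to FLAC for a listening comparison, as suggested above.
ffmpeg -i input.aac check.flac
```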
[00:54:36 CET] <therage3> Djfe: i think I may extract frames from these two videos and compare them to see how much they differ
[00:55:10 CET] <Djfe> maybe the devs added "DASH video" as a name instead of the resolution because they couldn't read the resolution at that stage, unlike with vp9
[00:55:39 CET] <Djfe> sounds good! :)
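For a rough objective comparison alongside eyeballing frames, ffmpeg's ssim filter can score one stream against the other. This is only indicative, since neither file is the pristine source; a.webm and b.mp4 are placeholders for the two 1080p downloads.

```shell
# Both inputs are 1920x1080, so no scaling is needed; ssim logs a global
# score ("All:") when the run finishes. The output itself is discarded.
ffmpeg -i a.webm -i b.mp4 -lavfi "[0:v][1:v]ssim" -f null -
```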
[00:56:04 CET] <Djfe> maybe you can get the original from the uploader (depends though)
[00:56:26 CET] <Djfe> anyways, I need to go to bed now, bye gn :)
[00:56:37 CET] <therage3> LOL
[00:56:46 CET] <therage3> the original was uploaded by Sega's official account
[00:56:49 CET] <therage3> I doubt that'll ever happen
[08:00:30 CET] <vush> does someone see an error if these two lines are in a loop?
[08:00:32 CET] <vush>     raw_image = p2.stdout.read(1280*720)
[08:00:32 CET] <vush>     frame = cv2.imdecode(np.frombuffer(raw_image, np.uint8), 1)
[08:01:12 CET] <vush> i'm using subprocess.Popen and PIPE.
[08:02:58 CET] <vush> and in the second run, frame is empty
[08:05:29 CET] <vush> p2 looks as follows:
[08:05:30 CET] <vush> p2 = Popen(["ffmpeg", "-i", "pipe:0", "-f", "rawvideo", "-filter:v", "fps=0.5", "-qscale:v", "1", "-f", "image2pipe", "-"], stdin=p1.stdout, stdout=PIPE, bufsize=10**8)
[08:05:40 CET] <vush> its messy i know, added some parameters, deleted some
[08:06:12 CET] <vush> -f rawvideo may be stupid, doesnt work either way
[08:16:16 CET] <vush> maybe because the frames come in bursts of about 4 seconds (the number of frames depends on fps, obviously), there must be something i am missing. like.. read the second frame from the raw data?
[08:37:03 CET] <vush> i read a solution about ramdisk. should note that i dont want to write/save ffmpeg output frames. it would be a solution but first im still trying to get the buffers right?
[08:37:38 CET] <vush> i guess 1280*720 is wrong
[09:27:48 CET] <vush> well the pipe breaks, nothing wrong with ffmpeg actually. gonna look around. thanks!
[09:37:12 CET] <ZexaronS> Hello
[09:37:44 CET] <ZexaronS> what's the "decomb" equivalent in ffmpeg? (that's what the deinterlace filter is called in HandBrake)
[09:38:51 CET] <ZexaronS> since I need to burn in dvbsub, which HandBrake doesn't support, I need to use ffmpeg. I used ffmpeg in the past to do pullup/detelecine/deinterlace, but it was quite some time ago; I do still have the command lines, which I'm looking at now
[09:48:40 CET] <ZexaronS> bwdif sounds similar
[10:04:14 CET] <vush> (i can stream to opencv now! it may have been parameters -f image2pipe and -vcodec rawvideo. before i went for images on HDD)
[11:41:49 CET] <ZexaronS> hello
[11:41:59 CET] <ZexaronS> what the heck is this "no filter found bwdif"
[11:43:33 CET] <ZexaronS> I'm really tired of these various builds with some things not included
[11:43:45 CET] <ZexaronS> I have zeranoe for winx64
[11:50:36 CET] <ZexaronS> ah oops forgot to copy ffmpeg.exe to the correct folder
[13:21:48 CET] <sazawal> I am using the command, "ffmpeg -ss timestamp -i filename -vframes 1 image.jpg" to extract frames every 0.01 seconds from an mp4 file. But the output I am getting has the same frame repeated a number of times and then appears a different frame. In other words, the frames are not continuous. What am I doing wrong? Or is there a problem in reading the mp4 file?
[13:22:17 CET] <ZexaronS> Hey
[13:22:57 CET] <ZexaronS> Is it still true that the fps option in filters isn't meant to be used with yadif or bwdif? It works and creates ok results, but that's what the documentation says
[13:24:02 CET] <ZexaronS> I could try removing it and recoding with the -r HZ option to drop duplicates
[13:24:26 CET] <ZexaronS> they seem to be 99% duplicates, there's a few pixels that change
[14:36:54 CET] <Guest36343> Hi, I'm using the following command to capture my screen and laptop video camera and place the video in the bottom right of the screen. It works just great
[14:37:54 CET] <pos> i'm doing -vcodec copy of some rtsp streams, the output files from some sources end up with invalid timestamps (appear to be several hours long even if they only last a few minutes
[14:38:12 CET] <pos> i've tried -fflags +genpts to no avail
[14:39:50 CET] <Guest36343> Hi, I'm using the following command to capture my screen and laptop video camera at the same time. The command places the video in the bottom right of the screen. It works just great. However there is a snag; I can't see the video of myself on screen while I'm recording, and I want to be able to see this so I can be sure I align myself with the camera. How can I do this?
[14:39:50 CET] <Guest36343> ffmpeg -f alsa -i default \
[14:39:50 CET] <Guest36343> -f x11grab -s `xdpyinfo | grep 'dimensions:'|awk '{print $2}'` -r 25 \
[14:39:50 CET] <Guest36343> -i :0.0 -f video4linux2 \
[14:39:52 CET] <Guest36343> -i /dev/video0 -filter_complex '[2:v]scale=380:-1[cam];[1:v][cam]overlay=W-w-8:H-h-8' \
[14:39:54 CET] <Guest36343> -c:a flac \
[14:39:56 CET] <Guest36343> -qscale 0 screen_and_video_grab.mkv
[14:55:06 CET] <sazawal> I am using the command, "ffmpeg -ss timestamp -i filename -vframes 1 image.jpg" to extract frames every 0.01 seconds from an mp4 file. But the output I am getting has the same frame repeated a number of times and then appears a different frame. In other words, the frames are not continuous. What am I doing wrong? Or is there a problem in reading the mp4 file?
[14:55:49 CET] <DHE> that only seeks to keyframes
[14:56:08 CET] <DHE> also 0.01 seconds is pretty tight. is your input video actually 100fps (or higher)?
[14:56:36 CET] <sazawal> DHE: I see. No its 25 fps i guess
[14:56:43 CET] <DHE> there's better ways. I suggest doing them all at once in a single command...
[14:57:19 CET] <sazawal> you mean including the fps argument in the command, right?
[14:57:44 CET] <DHE> well, what interval do you actually want images in? all frames? every nth?
[14:57:52 CET] <sazawal> I could do this, but then it would generate like 1000s of pics at once, is there a workaround to do it one at a time?
[14:58:27 CET] <sazawal> DHE: if it is 25 fps then i would like at 25 fps, or more, is there a way to choose this?
[14:58:59 CET] <DHE> if you put -ss after the input it will do decoding in order to select the exact frame you want. but this doesn't scale well to multiple runs to get 1 frame at a time.
[14:59:25 CET] <DHE> other than the disk space required, a single shot to generate the whole movie's worth of snapshots is probably the best way to do it
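The single-shot approach can be sketched like this (input.mp4 and the frames/ directory are placeholder names):

```shell
mkdir -p frames
# One decode pass, every frame written out; %06d is a zero-padded counter.
ffmpeg -i input.mp4 frames/%06d.png
# Or sample at a fixed rate instead of dumping every frame:
ffmpeg -i input.mp4 -vf fps=25 frames/%06d.png
```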
[15:01:49 CET] <sazawal> DHE: I see. There is also a command where the ss argument is placed after the inputfile argument. And it was quite slow when I checked it. How is it different from the one I am using?
[15:03:09 CET] <DHE> -ss on the input will do a file seek, but it always lands on a keyframe. -ss on the output will skip that much time worth of the input by decoding the video but discarding the result until the desired timeframe is reached
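The two -ss placements side by side, as a sketch (input.mp4 is a placeholder). Note that in recent ffmpeg versions, input-side -ss also decodes forward to the exact requested time when re-encoding; the keyframe limitation mainly bites with -c copy.

```shell
# Input seeking: fast, jumps via the container index to a nearby keyframe.
ffmpeg -ss 00:01:30 -i input.mp4 -frames:v 1 snap_fast.jpg
# Output seeking: decodes and discards 90 s of video; frame-accurate but slow.
ffmpeg -i input.mp4 -ss 00:01:30 -frames:v 1 snap_exact.jpg
```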
[15:04:21 CET] <sazawal> I see
[15:05:48 CET] <sazawal> my input file is a movie of 700 MB. If I use fps as the argument, like you suggested, would the total size of all snapshots be more than 700 MB?
[15:06:35 CET] <DHE> probably, because most video codecs operate using previous frames as references for decoding, whereas each jpeg must stand alone.
[15:07:43 CET] <DHE> also jpegs are variable quality/compression so it is highly content-dependent
[15:08:29 CET] <sazawal> what about pngs? Actually I need pngs, so anyway I am gonna convert the jpgs to pngs. I could directly generate pngs
[15:11:18 CET] <DHE> pngs are lossless, but have worse compression typically. especially with photographic pictures
[16:49:07 CET] <kepstin> if you need pngs, you should definitely generate them directly rather than convert to jpeg first
[16:51:22 CET] <sazawal> kepstin: DHE, right. Well, I still think converting the whole video into frames is not good for me. If I can get a way to do them one by one, then I can delete the previous one before generating the next one.
[16:51:43 CET] <kepstin> what are you actually using the result for?
[16:52:05 CET] <kepstin> if this is some custom tool you're writing, you should consider using a pipe with raw frames rather than saving to files.
[16:52:50 CET] <sazawal> I am writing a script to generate soft subtitles from a hardcoded one. After getting the frames, I will use OCR for text recognition.
[16:53:21 CET] <MarkGG> Why does the sound become muted? I'm trying to add a text caption to the top of a video, and the output is muted even though the original has sound:  ffmpeg -i public/tmp/5a3e761780cbb.mp4_topbar.jpg -i public/videos/5a3e761780cbb.mp4  -filter_complex '[0:v][1:v] vstack=inputs=2 [out]' -map [out] public/videos/proc_5a3e761780cbb.mp4 -y
[16:53:46 CET] <sazawal> kepstin: Sorry, what is the concept of piping?
[16:53:50 CET] <kepstin> MarkGG: if you use the '-map' option, ffmpeg will only include the streams explicitly listed and ignore the others
[16:54:10 CET] <kepstin> MarkGG: so you also have to map the audio, add e.g. "-map 1:a" to include the audio from the mp4 file
[16:55:50 CET] <kepstin> sazawal: hmm, if you're just writing a shell script, a pipe probably won't work for you. Might be best to just break the video into segments (e.g. 10s-1m long), then work on a segment at a time.
[16:56:40 CET] <MarkGG> kepstin: Thank you! I will test this right away
[16:56:59 CET] <zukunf> hei
[16:57:09 CET] <zukunf> hardcore upscaling settings?
[16:57:33 CET] <zukunf> something that is capable of enhancing to the max
[16:57:49 CET] <sazawal> kepstin: Yes I am writing a shell script. This sounds like a good idea to break the video into segments. By the way, the breaking would happen at the point I provide to ffmpeg? Or only at the keyframes?
[16:58:12 CET] <kepstin> sazawal: for this purpose, you'd probably want to break at keyframes so you're not re-encoding the video multiple times.
[16:58:24 CET] <kepstin> so you might get segments of varying length, but that's ok.
[16:58:54 CET] <sazawal> kepstin: I see. Does ffmpeg give the timestamps of keyframes?
[16:59:30 CET] <zukunf> what's a general purpose scaler?
[16:59:54 CET] <zukunf> in my case is backwards, gif to mp4
[16:59:55 CET] <kepstin> sazawal: you shouldn't need them - I'd suggest using ffmpeg with -c copy and the "segment" muxer to write segmented files, then process each segment file into pngs one at a time in your script.
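A sketch of that segment-then-process loop (movie.mp4 and the naming scheme are placeholders); the segment muxer cuts at keyframes, so segment lengths will vary slightly:

```shell
# Split losslessly into ~60 s chunks (cut points snap to keyframes).
ffmpeg -i movie.mp4 -c copy -map 0 -f segment -segment_time 60 seg%03d.mp4
for seg in seg*.mp4; do
    dir="${seg%.mp4}_png"
    mkdir -p "$dir"
    ffmpeg -i "$seg" "$dir/%06d.png"
    # ... OCR the PNGs in "$dir" here ...
    rm -r "$dir"               # reclaim disk space before the next segment
done
```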
[17:00:27 CET] <kepstin> zukunf: why do you need to scale at all? probably best to just encode at original resolution and scale at playback if needed
[17:00:51 CET] <kepstin> zukunf: and is this a sort of pixel-art thing or photographic?
[17:01:17 CET] <zukunf> kepstin: original is smaller than target output
[17:01:44 CET] <kepstin> ah, you have limitations in target? like you're encoding for a dvd or bd or something with a limited set of supported resolutions?
[17:02:24 CET] <kepstin> or merging multiple videos into a single stream, i guess
[17:02:26 CET] <sazawal> kepstin: I am not sure I understand you. But I can learn from the internet more about it. Still, I need the timing of the keyframes so that I can place the rendered text at the right timestamps in the subtitle file.
[17:03:00 CET] <kepstin> sazawal: you know the number of frames in each segment, because it's the number of png files you get. If you also know the framerate, you can calculate the timestamp from the frame number.
[17:03:15 CET] <kepstin> and just keep a running count as you do each segment
[17:03:31 CET] <sazawal> kepstin: Oh right, this makes sense.
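The running-count arithmetic can be sketched in shell; the frame rate, the number of frames already processed, and the in-segment frame index below are made-up example values.

```shell
fps=25            # assumed constant frame rate of the movie
frames_done=250   # running count of frames from earlier segments
n=12              # frame index within the current segment
# timestamp in seconds = (frames already seen + current index) / fps
ts=$(awk -v d="$frames_done" -v n="$n" -v f="$fps" 'BEGIN{printf "%.2f", (d+n)/f}')
echo "$ts"        # prints 10.48 for these example values
```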
[17:04:33 CET] <kepstin> sazawal: anyways, what kind of visual content does the gif have? some sort of pixel-art animation or something photographic/cg?
[17:04:55 CET] <zukunf> I need something simple
[17:05:12 CET] <sazawal> kepstin: Another question. I am not much aware of how videos are encoded. But is the length of video between two keyframes approximately fixed? What if a video has keyframes at 1-hour intervals?
[17:05:29 CET] <kepstin> er, i mentioned the wrong person there.
[17:05:34 CET] <kepstin> zukunf: anyways, what kind of visual content does the gif have? some sort of pixel-art animation or something photographic/cg?
[17:05:46 CET] <sazawal> kepstin: No, its a movie file. And I am extracting the hardcoded subtitles
[17:05:50 CET] <kepstin> sazawal: if the video had keyframes that far apart, you wouldn't be able to seek in it :)
[17:06:15 CET] <zukunf> kepstin: it's like a vid. So not a pixel art but real ppl
[17:06:41 CET] <sazawal> kepstin: Oh sorry, I thought you were asking me.
[17:07:07 CET] <zukunf> everything I search for is gif to mp4, so I'm doubtful about using those filters.
[17:07:09 CET] <kepstin> sazawal: keyframe interval varies, but for typical, uh, internet downloadable video each interval will normally be no more than a couple minutes.
[17:07:52 CET] <kepstin> zukunf: ok, so this is simple then. You can just use ffmpeg's standard scaler and it'll look ok. What's the original resolution and final video output size you want?
[17:08:49 CET] <sazawal> kepstin: Alright. Just for the information, the piping thing you mentioned, if I cannot do it in shell script, then what is the other option, C++? Well, I can integrate a C program with the shell script, if it is handy.
[17:08:59 CET] <zukunf> not much at all from 346x147 to 854x480
[17:09:42 CET] <kepstin> zukunf: hmm, so your gif is a bit wider than the video - you want black bars top/bottom?
[17:09:47 CET] <zukunf> thing is, there still some room for sharpening.
[17:10:12 CET] <zukunf> na, that part doesn't really matter.
[17:11:16 CET] <ZexaronS> I've got a weird MPEGTS circumstance which FFMPEG will not detect, but it's playable in MPC-HC and Handbrake
[17:11:49 CET] <ZexaronS> ffmpeg just says "none"
[17:11:59 CET] <kepstin> zukunf: ffmpeg -i file.gif -vf "scale=854:360,pad=854:480:0:60" out.mp4 is the basic command you'll need, but you might want some extra options to do additional filtering (like sharpening) and set video quality.
[17:11:59 CET] <ZexaronS> so does ffprobe, but i'll open a ticket
[17:12:44 CET] <kepstin> ZexaronS: even if you set -probesize and/or -analyzeduration longer?
[17:12:59 CET] <kepstin> ZexaronS: it would be helpful to see the complete ffmpeg output as well. (pastbin it)
[17:13:19 CET] <ZexaronS> kepstin: oops, didn't know there's a setting like that, but I was WONDERING about it for months and never got to search for it
[17:14:36 CET] <ZexaronS> yes, the MPEGTS files are all produced broken; I mostly used Handbrake for these ones because it's not something important or archival, but I need to do some subtitle burn-in, so I had to start using ffmpeg
[17:14:53 CET] <kepstin> ZexaronS: -probesize takes an amount in bytes, defaults to ~5mb and -analyzeduration takes a value in microseconds, defaults to 5 seconds. ffmpeg stops at whichever it hits first.
[17:15:03 CET] <ZexaronS> they're fine, just at the beginning and the end they're not proper, but that's how the device operates normally
[17:15:23 CET] <ZexaronS> Thanks for the headsup
[17:17:09 CET] <ZexaronS> microseconds, milliseconds, sure?
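Microseconds is correct for -analyzeduration, per the ffmpeg documentation; -probesize is in bytes, and both accept K/M/G suffixes. A sketch, with broken.ts as a placeholder for the recording:

```shell
# 100M bytes of probing and 100M microseconds (= 100 s) of analysis.
# Both options must come before the input they apply to.
ffprobe -probesize 100M -analyzeduration 100M broken.ts
ffmpeg -probesize 100M -analyzeduration 100M -i broken.ts -c copy remuxed.mkv
```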
[17:27:45 CET] <ZexaronS> kepstin: nope, I put in 300 MB and 900000000 and still nothing
[17:28:25 CET] <kepstin> ZexaronS: can you pastebin the ffmpeg output please?
[17:28:35 CET] <ZexaronS> already on it
[17:30:07 CET] <MarkGG> kepstin: Should I be doing this differently? (When adding the 1:a to the -map parameter, it has a new error: "Unable to find a suitable output format for '[out']"   --- Command: ffmpeg -i public/tmp/5a3e761780cbb.mp4_topbar.jpg -i public/videos/5a3e761780cbb.mp4  -filter_complex '[0:v][1:v] vstack=inputs=2 [out]' -map 1:a [out] public/videos/proc_5a3e761780cbb.mp4 -y
[17:30:36 CET] <kepstin> MarkGG: you need two separate -map options, one for each stream
[17:30:45 CET] <kepstin> MarkGG: so "-map [out] -map 1:a"
[17:31:15 CET] <MarkGG> Does it matter which order those are in?
[17:31:27 CET] <MarkGG> e.g. could it be "-map 1:a -map [out]"?
[17:31:37 CET] <kepstin> the order they're in is the order that the streams will be included in the output file
[17:31:45 CET] <kepstin> for one audio and one video, it doesn't really matter
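Putting the two -map options together, MarkGG's full command (with the original paths) would look like this:

```shell
ffmpeg -i public/tmp/5a3e761780cbb.mp4_topbar.jpg \
       -i public/videos/5a3e761780cbb.mp4 \
       -filter_complex '[0:v][1:v]vstack=inputs=2[out]' \
       -map '[out]' -map 1:a \
       public/videos/proc_5a3e761780cbb.mp4 -y
```

Note that vstack requires both inputs to have the same width, so the bar image must match the video's width.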
[17:32:00 CET] <MarkGG> Makes perfect sense, thank you! I appreciate it
[17:55:23 CET] <diverdude> hi, what lib should i reference to get access to av_init_packet?
[18:11:13 CET] <ZexaronS> Kepstin: it's an mpegts from an internal recorder on an HDTV. All of these have broken frames in common at the beginning; they start out rough, with various errors that are local to those frames, and a few moments into the file is where the good packets start. Handbrake had some problems with the ending of these files, leading to this fix for example https://github.com/HandBrake/HandBrake/commit/68762017fe256de8df143331d77e06d317edd977
[18:11:37 CET] <ZexaronS> Kepstin: even mediainfo seems to detect it fine, but there are missing streams and some texts off a bit
[18:11:46 CET] <ZexaronS> https://pastebin.com/zn6jL8sJ
[19:00:46 CET] <ZexaronS> i'm burning in subtitles using overlay and I get spammed with "changing frame properties on the fly is not supported by all filters" even tho all is ok in the result. It's a green message, not an error; it prints as it applies the overlay, and since the subs are pretty much constant, it messages constantly
[19:14:38 CET] <kepstin> hmm, that's probably an issue with the original video stream - mpeg-ts streams can change properties like frame size/aspect through the stream
[19:14:49 CET] <kepstin> so you get that warning if something like that happens
[19:15:05 CET] <kepstin> if it works, you can ignore it. but like it says, some filters might have issues.
[19:57:01 CET] <Guest88180> I'm using the following command to capture my screen cast and laptop video camera at the same time. Video from the camera appears in a smallish window in bottom right of the screen cast. This works just great. However there is a snag; I can't see the video of myself on screen while I'm recording and I want to be able to see myself in the bottom right window  so I can be sure I align myself with the camera. How can I do this?
[19:57:01 CET] <Guest88180> ffmpeg -f alsa -i default \
[19:57:02 CET] <Guest88180> -f x11grab -s `xdpyinfo | grep 'dimensions:'|awk '{print $2}'` -r 25 \
[19:57:02 CET] <Guest88180> -i :0.0 -f video4linux2 \
[19:57:03 CET] <Guest88180> -i /dev/video0 -filter_complex '[2:v]scale=380:-1[cam];[1:v][cam]overlay=W-w-8:H-h-8' \
[19:57:05 CET] <Guest88180> -c:a flac \
[19:57:07 CET] <Guest88180> -qscale 0 screen_and_video_grab.mkv
[20:06:25 CET] <BtbN> Guest88180, can't you just use OBS?
[20:09:53 CET] <Guest88180> I tried out OBS last night, among many other programs, and I couldn't figure it out. Ideally I will find a way to get ffmpeg to do this. I will look at OBS again later.
[20:31:00 CET] <c3r1c3-Win> Guest88180: Add a display capture and a V4L2 (Video4Linux2) capture source, select the webcam. Adjust placement and size to taste.
[20:34:44 CET] <Guest88180> <c3r1c3-Win>  I'm not that good with ffmpeg. It was a bit of a struggle to get to here.  Code above does the whole job, except does not display the video camera output in the bottom right corner. How would I adjust the given code to do that?
[20:35:40 CET] <c3r1c3-Win> Guest88180: I don't use ffmpeg for stuff like that, so I couldn't really tell you.
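One ffmpeg-only possibility, as an untested sketch: split the composited stream and send one copy to ffmpeg's SDL output device for a live preview window (this assumes the ffmpeg build includes the sdl output device):

```shell
ffmpeg -f alsa -i default \
       -f x11grab -s "$(xdpyinfo | awk '/dimensions:/{print $2}')" -r 25 -i :0.0 \
       -f video4linux2 -i /dev/video0 \
       -filter_complex '[2:v]scale=380:-1[cam];[1:v][cam]overlay=W-w-8:H-h-8,split=2[rec][prev]' \
       -map '[rec]' -map 0:a -c:a flac -qscale 0 screen_and_video_grab.mkv \
       -map '[prev]' -f sdl "self-view"
```

With explicit -map options in play, the audio must also be mapped (-map 0:a), unlike in the original command where default stream selection handled it.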
[21:12:12 CET] <ZexaronS> kepstin: Well, indeed all of them come out corrupted like this; it's supposed to be normal, and the original device can play back its own recordings normally. But sometimes, rarely, it won't play them; maybe this file is one of those occasions
[21:13:19 CET] <ZexaronS> kepstin: but judging from the fact that MPC-HC is able to play it, it's ffmpeg that's subpar in the extra detection capabilities, so I was thinking of trying to fix it, because one way or another I need this to work so I can burn the subtitles into the video
[21:14:02 CET] <ZexaronS> either fix ffmpeg to detect better, or repair the file's format
[21:14:38 CET] <kepstin> Well, if you have a file that works with some player but not ffmpeg, then you should probably open a ticket and include a sample
[21:16:09 CET] <ZexaronS> One of which I can do myself: fixing the file itself. Otherwise I can only open a ticket on trac and hope this is even an acceptable focus for the developers; worst case, it's considered a nonstandard file and not ffmpeg's problem to care about
[21:19:11 CET] <ZexaronS> However, it may just be the player's extra, above-standard ability to handle broken files; ultimately it may not be ffmpeg's fault, so I'm aware I can't expect this to be a priority or acceptable at all, but I'll try
[23:10:11 CET] <zyme> I like running ffplay on windows 10's ubuntu cli, it's like cool looking automatic easy-ASCII art out of any video, and is almost watchable with the font size way down lol. =)
[23:12:14 CET] <therage3> i just use Linux natively, no need to go through Windows' Linux subsystem
[23:25:32 CET] <zyme> dumb question, can you convert a HEVC mkv with ffmpeg to h.263 .tc video file format, and if anyone knows a short command for it offhand, I'm interested in testing performance and compatibility with hardware that doesn't like xvid but loves divx h.263... and I know it likes h264 .tc files from ffmpeg...
[00:00:00 CET] --- Mon Dec 25 2017

