burek021 at gmail.com
Sun Sep 17 03:05:01 EEST 2017
[00:02:34 CEST] <DHE> no, it's a safety setting in ffmpeg in case someone gives you a 32000x32000 resolution video in an attempt to cause your system to swap itself to death
[00:03:11 CEST] <DHE> you're looking for -maxrate for most codecs
[00:04:16 CEST] <Ober> hmm usage() is not returning that as a valid option
[00:04:31 CEST] <DHE> what are you looking at?
[00:04:39 CEST] <Ober> --help output
[00:04:43 CEST] <DHE> of ffmpeg?
[00:04:53 CEST] <Ober> Unrecognized option '-maxrate'.
[00:05:08 CEST] <Ober> this is 3.3.4
[00:06:47 CEST] <DHE> $ ffmpeg -f lavfi -i testsrc2 -t 30 -c libx264 -maxrate 1M -b 1M -bufsize 1M output.mp4 # generates a test pattern video named output.mp4, 30 seconds long, about 1 megabit on the video
[00:07:16 CEST] <Ober> what version? HEAD?
[00:07:40 CEST] <DHE> this particular version I'm running is a git build from june 2017
[00:07:52 CEST] <DHE> but it should still work as long as it has libx264 enabled
[00:07:52 CEST] <Ober> ahh. brew here
[00:11:55 CEST] <Ober> order of args, no doubt
[00:12:39 CEST] <Ober> thanks for the example
[00:16:46 CEST] <DHE> ffmpeg [input 1 opts] -i input1 [[input 2 opts] -i input2] ... [output1 opts] output1 [[output2 opts] output2]...
[00:17:01 CEST] <DHE> some args are global, so their position doesn't matter
[00:18:33 CEST] <Ober> clearly for options that are codec specific their position matters
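The placement rule in that synopsis can be made concrete with a minimal sketch (testsrc2 as a synthetic input; the output filename is hypothetical). Note that -t is placed as an input option here, before its -i, while the codec options sit in the output-option slot:

```shell
# Skip gracefully on machines without the needed tools (sketch, not a hard requirement).
command -v ffmpeg >/dev/null 2>&1 || { echo "ffmpeg not installed; skipping"; exit 0; }
ffmpeg -hide_banner -encoders 2>/dev/null | grep -q libx264 || { echo "libx264 not available; skipping"; exit 0; }

# Input options go BEFORE their -i; output options go before the output filename.
# -t 5 limits reading of the test source; -c:v/-b:v/-maxrate/-bufsize apply to the output.
ffmpeg -hide_banner -y \
    -t 5 -f lavfi -i testsrc2 \
    -c:v libx264 -b:v 1M -maxrate 1M -bufsize 1M \
    ordering_demo.mp4
```

Putting -maxrate before -i (or after the output filename) is exactly the kind of misplacement that produces "Unrecognized option" style failures.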
[00:35:51 CEST] <thebombzen> Ober: you can mitigate this by using -maxrate:v which causes it to only apply to video streams
[00:36:02 CEST] <thebombzen> same with -b, which you should by habit use -b:v anyway rather than just -b
[00:36:13 CEST] <thebombzen> otherwise -b will apply to the audio streams too
[00:36:55 CEST] <Ober> right.
[00:37:38 CEST] <thebombzen> so the order of options doesn't matter much for codecs. what does matter is that for output options, latter options take precedence
[00:37:53 CEST] <thebombzen> so if you use -c copy -c:v libx264, it'll copy all streams except it'll use libx264 for all video streams
[00:38:11 CEST] <thebombzen> using -c:v libx264 -c copy will copy everything, because -c copy applies to all streams and it comes later
[00:38:43 CEST] <thebombzen> the order of global options like -hide_banner or -y doesn't matter much (although -y should be an output option, I feel)
[00:42:11 CEST] <microchip_> Ober: keep in mind, the last option overwrites all previous ones :)
[00:42:53 CEST] <DHE> but order still matters. it has to come before the output filename, but after the input filename
[00:43:01 CEST] <microchip_> right
[00:43:05 CEST] <DHE> I suspect that was his problem
[00:50:14 CEST] <Ober> yeah it was
[05:28:37 CEST] <berndj> can i get a video-diff with ffmpeg, to see if (and how) two video files differ?
[05:33:55 CEST] <blap> you can diff the files
[05:37:45 CEST] <berndj> i don't want a bytewise diff; i want something like a frame-by-frame subtract operation
[07:03:53 CEST] <blap> berndj: interesting idea
[07:04:16 CEST] <blap> you could use it to detect manipulated video, or encoding errors
[07:05:04 CEST] <berndj> yes. right now i just want to see if anything changed in some random youtube video i downloaded some time ago, and apparently again two days ago. same video id, but different (bytewise) contents
[07:06:17 CEST] <blap> searching...
[07:09:13 CEST] <blap> the simplest plugin would be one that creates an output video simply consisting of per-pixel difference of each frame, perhaps normalized/expanded for visibility
[07:13:58 CEST] <blap> berndj: maybe the blend filter
[07:14:52 CEST] <blap> http://trac.ffmpeg.org/ticket/5586 this has an example of usage
[07:15:41 CEST] <Ober> so does webm have equivalents to maxrate, bufsize and bitrate?
[07:16:14 CEST] <blap> https://ffmpeg.org/ffmpeg-filters.html#blend
[07:16:54 CEST] <blap> use 'difference'
[07:18:16 CEST] <jasom> how do I encode streams that aren't present at the beginning of my input? I have a TS where the audio starts at ~.250 seconds in and -map 0:a:0 says "Stream map '0:a:0' matches no streams."
[07:19:54 CEST] <blap> you're welcome berndj :)
[07:19:59 CEST] <berndj> bleh, the version of ffmpeg i have doesn't have filter_complex (debian jessie version)
[07:20:16 CEST] <berndj> thanks though, at least i have something to look forward to
[07:22:07 CEST] <blap> or compile it. apt-get build-dep ffmpeg should pull in all needed libraries and headers
[07:44:21 CEST] <berndj> oh well, i did an eyeball-diff
[07:52:45 CEST] <Schwarzbaer> Hi. I'm using 'ffplay -i /dev/video0 -vf "vflip,hflip"' to use a weirdly upside-down webcam; that works well. I've set up a very basic ffserver, use ffmpeg to stream the camera to it, use ffplay to watch it; works nice. Then I added the aforementioned -vf arguments to the feeding ffmpeg; nothing happens. The image is as upside-down as it was before. Why, and how do I fix it?
[07:55:15 CEST] <blap> the video filter works for me
[08:02:26 CEST] <jasom> aha, I found the answer buried in the wiki; specifying -probesize 50M -analyzeduration 50M before the -i flag works. Couldn't find either of those in the official docs though
[08:03:31 CEST] <jasom> ah, it's in ffmpeg-formats
[08:09:09 CEST] <Ober> jasom: lisp? really?
[08:09:44 CEST] <jasom> Ober: no, not really; I'm just in that channel to specifically troll you
[08:11:12 CEST] <jasom> Ober: wait, are you akkad?
[08:32:26 CEST] <Ober> yes
[08:32:33 CEST] <Ober> why for?
[08:37:10 CEST] <Ober> jasom: ever use clave?
[08:45:45 CEST] <jasom> Ober: nope
[08:47:06 CEST] <Ober> jasom: you know akkad?
[08:47:22 CEST] <jasom> Ober: remember seeing you in #lisp in the past
[11:18:01 CEST] <Fyr> guys, when doing a weather forecast people replace the green color on video with another video. is it possible to do that with FFMPEG?
[11:22:38 CEST] <klaxa> https://ffmpeg.org/ffmpeg-filters.html#chromakey
[11:24:08 CEST] <Fyr> thanks
[11:24:24 CEST] <Fyr> I didn't know what I should google.
[11:26:11 CEST] <Fyr> I've found out how to put subtitles with scaling onto a video.
[11:26:27 CEST] <Fyr> using chromakey!
[11:42:15 CEST] <furq> didn't you want good antialiasing
[11:42:50 CEST] <furq> this was 15 minutes ago so you probably already found out for yourself, but you are not going to get nice edges with chromakey
[12:49:04 CEST] <Fyr> furq, I guess, those people that do weather forecast don't use FFMPEG.
[12:49:51 CEST] <furq> i should hope not
[12:50:01 CEST] <Fyr> =)
[12:50:47 CEST] <furq> i did something similar to this before but i can't remember what i did now
[12:50:56 CEST] <furq> something like color=#00000000,format=yuva420p
[12:51:08 CEST] <furq> then draw on that, resize and overlay
[12:58:01 CEST] <CoreX> green screen porn?
[12:59:28 CEST] <Fyr> CoreX, I'm trying to smooth subtitles on video when burning them.
[13:35:43 CEST] <Fyr> guys, one should add +faststart for MP4. is there the same option for MKV?
[13:36:15 CEST] <JEEB> no, matroska doesn't require the index for playback
[13:36:25 CEST] <JEEB> since it keeps the codec specific init data in a specific structure that's separate
[13:36:41 CEST] <Fyr> I found an option: reserve_index_space
[13:36:41 CEST] <JEEB> while in mp4 that data is in the index, which has to be written last due to well, duh
[13:37:15 CEST] <JEEB> yea, but it's not as useful because the initialization data is generally written in the beginning of the file
[13:37:21 CEST] <JEEB> since it's not in the index in matroska
[13:37:30 CEST] <Fyr> ok
[13:37:52 CEST] <JEEB> as in, if you start writing into a .mkv file you should be able to play it as soon as the init data is written
[14:43:02 CEST] <arpu> is it possible to use the av1 encoder with ffmpeg?
[14:44:47 CEST] <ZexaronS> bwaahha i looked at some video codecs in depth and noticed for the first time ever Xvid/DivX is called MPEG-4 ASP, ASP for Advanced Simple Profile, lmfao
[14:45:07 CEST] <ZexaronS> advanced simple
[14:45:54 CEST] <ZexaronS> this whole European bureaucrat standards thing just got to a new level of ridiculousness
[14:46:30 CEST] <ZexaronS> actual programmers in computers/gaming need to work on these things, not mathematicians and physicists from france, belgium and germany
[14:46:46 CEST] <ZexaronS> they have no clue wtf are they talking about
[14:48:46 CEST] <ZexaronS> there's really no proper archival codec out there, so I'm thinking of just making it myself then, on github, or finding such a project if one exists already
[14:49:03 CEST] <ZexaronS> it should be built from ground up to go well with stuff like ZFS/BTRFS
[14:49:06 CEST] <JEEB> EU is currently trying to standardize ffv1 and matroska for archival
[14:49:13 CEST] <JEEB> ffv1 being a lossless video format
[14:49:21 CEST] <JEEB> and matroska being a container
[14:50:02 CEST] <JEEB> (and I think FLAC is being poked as the audio, although usually there's not enough audio data compared to compressed video that it would raise the file size much even if it were raw PCM)
[14:50:41 CEST] <JEEB> arpu: libaom is not included yet because the whole AV1 stuff isn't stabilized yet
[14:50:54 CEST] <JEEB> so even if you'd build libaom and encode stuff with it today
[14:51:14 CEST] <JEEB> that might not be playback'able when the AV1 format is finished
[14:51:31 CEST] <JEEB> since the whole thing is still in development from the format point of view as well
[14:52:51 CEST] <ZexaronS> But that's a bit too hardcore unless you have a 500 billion dollar underground datacenter ... I'm not keen on totally uncompressed, I'm thinking of a really simple and super-future-proof, endtimes-dig-up-old-drive-from-a-frozen-lake kind of format ... it needs to have a ton of features built in and a design that's easily decodable, so the MPEG-TS kind of redundancy is what it should be based upon, but I'm not sure if going to index/metadata per frame would be so wise either; that would definitely prove very corruption-resistant, so you'd get all the non-corrupt frames out piece of cake, but it adds to the size
[14:53:41 CEST] <JEEB> well, consider that compressed lossless video will go to dozens of megabits if not hundreds
[14:53:56 CEST] <JEEB> and then you have a few megabits of audio at most
[14:54:05 CEST] <JEEB> you might as well not compress at that point :P
[14:55:22 CEST] <ZexaronS> but it wouldn't be like H264, it wouldn't be doing those kinds of quality tradeoffs; this archival format would also be meant to be made for quality
[14:55:57 CEST] <JEEB> you make no sense to be honest at this point
[14:56:07 CEST] <ZexaronS> well I meant slightly lossy compression, like 95% ... for example in JPEG, if you see the size diff between 100% and 96% it's a lot, but it doesn't look that much different
[14:56:38 CEST] <ZexaronS> then it would also have lossless compression mode and the totally uncompressed mode
[14:56:50 CEST] <JEEB> bits and pieces of something semi-coherent and a lot of stuff that just is out there most likely due to you not understanding the problem space
[14:57:23 CEST] <ZexaronS> No no, I don't need to understand the problems out there, I'm talking about a format/codec that I want
[14:57:45 CEST] <ZexaronS> for myself, but I would share it of course
[14:58:00 CEST] <JEEB> please just don't NIH shit for NIH's sake because you don't understand things
[14:58:23 CEST] <JEEB> also my "might as well not compress at that point" comment was towards audio, if that was not yet clear enough
[14:58:39 CEST] <JEEB> video should 100% be losslessly compressed for archival
[14:59:18 CEST] <JEEB> and lossy compression for archival just doesn't make sense
[14:59:23 CEST] <ZexaronS> I have no idea what NIH is and the stuff I was talking about is what I would want someday, If I get a good job and have the necessary resources I will pay developers to do it for me, as I don't have the necessary programming experience and have a ton of other projects I'd be working on to have time for learning deep c++
[14:59:33 CEST] <JEEB> Not Invented Here (syndrome)
[15:00:13 CEST] <JEEB> just fucking use ffv1+flac+matroska if you need to archive shit.
[15:00:23 CEST] <JEEB> since the EU folk are moving to standardizing that trio for archival
[15:00:32 CEST] <JEEB> and ffv1 actually can give good lossless compression ratios
[15:01:29 CEST] <ZexaronS> You are correct, but for me personally, I don't have the resources of a 100-billion-dollar datacenter to have it all uncompressed; that's a dream, so I'm not going to assume I'll ever have my own underground facility, so I can't go this extreme route
[15:01:40 CEST] <JEEB> are you even reading me?
[15:01:52 CEST] <JEEB> where am I saying uncompressed?
[15:02:02 CEST] <ZexaronS> okay okay
[15:02:04 CEST] <JEEB> other than regarding audio
[15:02:08 CEST] <JEEB> which is miniscule in the fucking size
[15:02:13 CEST] <JEEB> compared to video
[15:02:23 CEST] <JEEB> because generally you have 2 channels of PCM
[15:02:38 CEST] <ZexaronS> I didn't catch that, fine, but you don't need to take it like ... I was saying it in general.
[15:03:02 CEST] <JEEB> I'm just telling you there are solutions for the fucking archival problem
[15:03:08 CEST] <ZexaronS> I didn't know ffv1 is lossless compression
[15:03:18 CEST] <JEEB> I said it multiple times for fuck's sake
[15:03:34 CEST] <ZexaronS> the size seemed pretty big when I tried ... but yeah I didn't know much 3 years ago when I was doing those tests/initial research
[15:03:51 CEST] <JEEB> lossless is always big, and there's parameters to tweak
[15:04:00 CEST] <JEEB> I mean, bigger than highly compressed lossy shit
[15:04:06 CEST] <JEEB> but you're doing *archival*
[15:04:13 CEST] <JEEB> you'd archive something to conserve it
[15:04:19 CEST] <JEEB> as it was originally
[15:04:47 CEST] <JEEB> so you build your RAID, with redundancy, take checksums, do back-ups
[15:04:55 CEST] <JEEB> maybe use tapes
[15:05:16 CEST] <JEEB> check the consistency of data periodically between the copies of data etc
[15:06:43 CEST] <ZexaronS> Actually, I get your point about the visuals. the thing is, most of my things are not really visually important, they're just interviews, people sitting and talking; some of them are visually important, but the context/evidence is transferred to a future viewer the same no matter if visual quality is 100 or 80 or even 50%, imo
[15:07:26 CEST] <JEEB> then you go off for a slippery slope
[15:07:44 CEST] <JEEB> because it's easy to define the input as the thing you archive
[15:08:06 CEST] <JEEB> but if someone wants you to archive something, how do you define how much loss they're OK with?
[15:08:23 CEST] <ZexaronS> that's the stuff I'd archive right now; of course in future visuals might get a lot more important when HDR/WCG Rec2020 is involved, and I think we'll even have to put features in for authenticity stuff, the virtual reality graphics will be booming with all the fakes
[15:09:42 CEST] <JEEB> also lossless copies are easier to validate
[15:09:49 CEST] <ZexaronS> and ffv1 has nothing built in that would help an analyzer or analysis tool get metadata/info about the video to look for clues that would help with figuring out if it's forged/simulated
[15:10:13 CEST] <ZexaronS> alright then, we use ffv1 as a base and make ffv2
[15:10:17 CEST] <JEEB> that is not a fucking job of a fucking video format. you sign the fucking archived thing
[15:10:33 CEST] <JEEB> then if the signature is OK you know that data has not been played around with since archival
[15:10:46 CEST] <JEEB> if the archival process was fooled with you have a separate problem
[15:11:07 CEST] <ZexaronS> well that's more complicated then, in a post-doomsday scenario with ... high-tech is very fragile
[15:11:57 CEST] <JEEB> and? if your requirements are to hold through such a scenario then you have to design with it in mind
[15:12:07 CEST] <ZexaronS> This is crazy, I can't even find a full video of something from 3 years ago, only short crappy copies ... things move so fast that old things get forgotten, data is made but also erased out there
[15:12:21 CEST] <JEEB> it's not a job of your fucking video format. a video encoder takes in raw images and outputs decode'able pieces of data
[15:13:01 CEST] <JEEB> if you need validity of that data you use signing or something to "stamp" the matroska container of an archived "unit"
[15:13:08 CEST] <ZexaronS> Well look, I have it more thought out in my mind than I can bring out, so it feels like I'm throwing random things out, but I meant ...
[15:13:51 CEST] <JEEB> and then you can have per-sample metadata for checksums
[15:13:56 CEST] <JEEB> which is different from trust
[15:14:14 CEST] <JEEB> and rather putting data validity checkers there for backup verification
[15:14:58 CEST] <JEEB> and the whole nuclear doomsday scenario is anyways its own set of requirements and you just handle it on its own level, if it is required
[15:15:05 CEST] <ZexaronS> I meant that whatever camera would natively use it for encoding/saving, the authenticity stuff (metadata about the camera sensor, advanced deep stuff, stuff that doesn't exist yet) would be written in some fashion along with the codec. but why with the codec? well, it has to be per-frame if you want 100% of the video covered
[15:15:28 CEST] <ZexaronS> So if that video gets shared, you have it in there, in the source,
[15:15:28 CEST] <JEEB> or you just do per-sample metadata like I mentioned
[15:16:38 CEST] <JEEB> so if you have your magical camera metadata there per-frame, you have per-frame checksum metadata there and then you sign the fucker at the end of that single unit (whatever that is in the end)
[15:17:57 CEST] <ZexaronS> I get the point about authenticity of the datacenter itself, that's one thing; this is another thing: if the source gets shared, so you have many sites hosting the source ... SHA-256 won't be enough, the supercomputers out there in the future could crack it, use a forged video, bruteforce the necessary finetuning of the bits to make the hash identical, and we're screwed
[15:18:10 CEST] <ZexaronS> the datacenter would also be offline, not connected to net
[15:19:18 CEST] <arpu> JEEB, thx for this information , i found this https://bugzilla.mozilla.org/show_bug.cgi?id=1368838 so firefox nightly can play av1 now!
[15:19:29 CEST] <arpu> so i want to try this
[15:19:32 CEST] <JEEB> if someone cares enough to brute force N sample checksums and forge your signature with the private key that you have not published anywhere I'd commend them
[15:20:03 CEST] <JEEB> arpu: yea they started enabling libaom-based decoding
[15:20:20 CEST] <JEEB> and yes the comments there note exactly why FFmpeg doesn't yet support libaom out of the box
[15:20:31 CEST] <JEEB> because the format is still being developed
[15:20:38 CEST] <arpu> ok!
[15:20:52 CEST] <ZexaronS> When I get enthusiastic I may talk optimistically, a bit inaccurately. ISO people are EU bureaucrat mathematicians/physicists ... MPEG not so much from what I'm reading... canada is the founding place heh
[15:20:53 CEST] <JEEB> just use ffmpeg + libaom's cli encoder to create test clips if you really want to
[15:21:25 CEST] <JEEB> ZexaronS: anyways you're way out there enough that you're just creating fucking noise.
[15:21:47 CEST] <JEEB> you have no idea about things yet you think you can call things names and then you have this vague shit that you think you know should go somehow
[15:22:08 CEST] <JEEB> while it's pretty obvious that there's layers for all of that shit that you might or might not need in your post-apocalypse archival piece of shit
[15:22:13 CEST] <ZexaronS> that's because I'm thinking about solutions to the problems that don't exist yet
[15:23:01 CEST] <JEEB> for example, there are reasons to ridicule MPEG-4 Part 2 (which you incorrectly called MPEG-4 ASP - ASP is a goddamn profile of Part 2), but all of your comments just show how little you know of these subjects
[15:23:22 CEST] <ZexaronS> vague, because I'm not really seriously putting down a draft, I'm merely chatting about some of the stuff going on in my head. If I had more resources and met interesting people up for such a project, it'd be more serious and I'd have drafts drawn already and I'd have shown you those instead
[15:23:57 CEST] <JEEB> ZexaronS: you are fucking spamming this channel with even more shit at this point, which is what I'm trying to say. Maybe I should just be ignoring you instead of telling this to you, but whatever.
[15:24:55 CEST] <ZexaronS> well that ASP thing is from wikipedia, so my fault, I should have known not to take that as fact
[15:25:41 CEST] <ZexaronS> secondly, you seem like you want to talk, but you get limited because I'm going out of the box; like, innovations are made by not being inside the box, so whatever
[15:26:16 CEST] <JEEB> no, most of your shit is adding random requirements and then trying to dump things onto incorrect layers (like putting more and more shit into the video coding layer instead of the container layer)
[15:27:01 CEST] <JEEB> and no, you are not out there. people have been working with archival for years for fuck's sake
[15:27:06 CEST] <ZexaronS> thirdly you took my words too literally, I have merely brushed on the idea, but indeed that's more my mistake; I apologize for not putting more effort into my initial statements and being too hasty,
[15:27:13 CEST] <JEEB> they design their systems against their requirements
[15:27:25 CEST] <JEEB> if they have to go through a motherfucking nuclear war then the requirements start there
[15:27:36 CEST] <JEEB> then it goes to more and more details and/or implementation details
[15:27:59 CEST] <JEEB> if you need to not only make sure the data is valid but that it also comes from the source you think it comes from, then there's signing blah blah
[15:28:29 CEST] <JEEB> fucking hell, this is not new and/or groundbreaking. it's just something that you come up with while you keep a calm head and think about the problems you need to solve
[15:30:59 CEST] <ZexaronS> Yes I was mixing a lot of different things; the per-frame, codec-integrated camera-authentication stuff is more meant for video taken on a camera, when the video/audio is saved. how exactly that authentication part would be done, via hashing, encrypting, I can't say as I didn't dig into it yet. yes this part has no connection to the datacenter thing and the signing you mentioned there, but it seems we can have multiple layers of such security/authentication, not just one
[15:31:36 CEST] <Schwarzbaer> I'm using ffmpeg with a -vf, and write the output to a file. The filter is applied. I send the stream to an ffserver. The filter is not applied. This is bizarre...
[15:31:57 CEST] <JEEB> Schwarzbaer: my condolences on having ffserver in the mess
[15:32:30 CEST] <JEEB> but yea, not many here know anything of it and generally people are recommended to not utilize it
[15:33:58 CEST] <ZexaronS> What I just mentioned is more for stuff like UFO/alien videos ... in order for the person to be trusted, he would have to provide the exact camera he took the video with along with the exact source file of the video. then the analyzers would compare the camera with other same cameras to make sure the hardware isn't hacked first, then they would check the video file to see if it really came from that camera and look for various clues and anomalies that video editing software would leave behind, and of course anomalies in the visuals showing it was actually computer-generated graphics
[15:34:28 CEST] <ZexaronS> Now that I properly explained it, there's no way this isn't a good idea
[15:36:15 CEST] <JEEB> that would literally mean that each video would have to be identifiable to its own camera and that you trust hardware that is 100% in the hands of someone you don't trust to not have been tampered in a way that doesn't require physical modification. and in that case, that it wasn't reverse'd.
[15:37:23 CEST] <ZexaronS> Because all of these codecs throw a ton of sensor data away, no doubt forensics will have a hard time with a lossy thing like that, with a lot of false-positive artifacts that the lossy compression makes ... there's a ton of videos on youtube where such artifacts are fooling a ton of people and the blackhat monetization spammers abuse this a lot
[15:39:59 CEST] <ZexaronS> JEEB: yes it's actually quite complicated at the end, to do it really good you need so many checks and backups to really make sure, it's definitely not something any company would ever think about doing in consumer market, and companies really don't care about the truth to begin with
[15:40:44 CEST] <Schwarzbaer> JEEB, then how *do* people stream their videos? Also, I'd suspect the problem to be on ffmpeg's side; ffserver shouldn't even know about the filter.
[15:40:48 CEST] <ZexaronS> Google is fine with ad money generated from 20 million people being fooled with fake ufo videos each week
[15:40:49 CEST] <JEEB> also this is out of scope of this channel, but seriously - you'd be checking trust against something given to you by a non-trusted actor? all the darn metadata in the world can be faked and the software on the camera reverted back to a normal state afterwards. also privacy advocates would just kill you for marking each video against its camera. because those need to be unique
[15:41:10 CEST] <JEEB> Schwarzbaer: there are various solutions for various means of "streaming"
[15:41:25 CEST] <JEEB> Schwarzbaer: define your use case of "streaming" and someone might be able to help
[15:42:14 CEST] <ZexaronS> I guess you're right, I'm glossing over it, it's not something we do regularly, it's a good insight. policing things to keep strictly to one topic really doesn't help innovation either; there's a lot of value in sometimes looking at and connecting several topics together to get a better understanding
[15:43:32 CEST] <JEEB> yes, but you are clearly out there and lacking any understanding of the actual topics of this channel. see your mocking of MPEG-4 Part 2, which had ZERO of this: https://guru.multimedia.cx/15-reasons-why-mpeg4-sucks/ (this blog incorrectly calls it "mpeg4" instead of "mpeg-4 video" or "mpeg-4 part 2", but at that point other video formats in mpeg-4 did not yet have wide usage)
[15:44:46 CEST] <ZexaronS> JEEB: emm, sorry, with the term "metadata" I just meant like 10 things I can't think of right now how to explain, various features. secondly, I wasn't that direct on the video uniqueness ... I may have said it wrong, I meant more that it would prove the video came from that camera model and wasn't altered, not the exact camera serial number
[15:45:36 CEST] <ZexaronS> Those are details that can be changed/finetuned to of course alleviate those issues you mentioned
[15:46:16 CEST] <Schwarzbaer> It's less a matter of having a use case than of wanting to be able to use the toolset. Simplest case would be "make this webcam available to be watched over the internet", but in general I want to be able to point people and tools at an HTTP endpoint, and mix the video that they'll see.
[15:46:44 CEST] <JEEB> Schwarzbaer: for web-based streaming people usually use something like nginx-rtmp
[15:46:59 CEST] <JEEB> you feed it rtmp, and it outputs HLS/DASH for browsers and RTMP for flash
[15:47:24 CEST] <JEEB> the former supported by mobile clients as well, as well many other stuff at this point :P
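A hypothetical minimal nginx.conf sketch of that arrangement (nginx-rtmp must be compiled into nginx; paths, port, and the stream key are placeholders):

```nginx
rtmp {
    server {
        listen 1935;
        application live {
            live on;
            hls on;              # emit HLS segments + playlist for browsers
            hls_path /tmp/hls;   # serve this directory via a plain http{} block
            hls_fragment 2s;
        }
    }
}
# Publish side (hypothetical capture device and stream key), e.g.:
#   ffmpeg -f v4l2 -i /dev/video0 -c:v libx264 -f flv rtmp://localhost/live/cam
```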
[15:47:36 CEST] <ZexaronS> You are correct that I don't fully understand how stuff currently works, but if I'm building something to replace it, the question would be: why would we even have to fully understand it? we already see from the basics/fundamentals that it's under par, it's not really hard to see. it's like seeing it from space versus seeing it from the earth; you don't even need to send a probe to see it in detail imo
[15:48:27 CEST] <ZexaronS> Like the Sun is H264 ... it's yellow and red ... in details, more yellow and red shades
[15:49:00 CEST] <JEEB> Schwarzbaer: ffserver is a thing that is in the FFmpeg code repository and some people are fighting hard to keep it in there. but it's a problem filled thing that nobody can recommend nor help with
[15:49:17 CEST] <JEEB> so as soon as you mention ffserver people will generally take a step backwards
[15:49:33 CEST] <JEEB> "if it works for you, great. but don't come around here with any issues because nobody here uses/works with it"
[15:50:28 CEST] <Schwarzbaer> That's bizarre. How *do* people mangle their streams then, putting picture-in-picture, greenscreen, and all the stuff that ffmpeg offers?
[15:51:36 CEST] <ZexaronS> good talk tho, no need to be annoyed, I usually speak to the world, I don't intend to rant at anyone in particular
[15:52:07 CEST] <Schwarzbaer> For example, I was thinking of another setup with an output stream consisting of four tiles, each showing a feed, with the feeds going online or offline independent of each other. I wouldn't even know how to do something like that without ffserver.
[15:52:58 CEST] <JEEB> people either use ffmpeg.c or their own applications on top of the FFmpeg APIs and feed to a media server
[15:53:05 CEST] <JEEB> also did ffserver even do that?
[15:53:17 CEST] <JEEB> I'd think every time you'd lose a source things would go woo-woo
[15:53:47 CEST] <JEEB> because the overlay filter would still be waiting for the image on the missing source
[15:54:11 CEST] <JEEB> and the overlying application would have to know when it has or doesn't have an input, and it would have to know how to re-connect to that input
[15:54:24 CEST] <JEEB> it's certainly possible but I'd be surprised if any of ff* tools support that
[15:54:46 CEST] <Schwarzbaer> I see... And here I was hoping that it'd simply go black, or rather go defined-default-image...
[15:55:14 CEST] <JEEB> that generally is something that the user of that filter defines
[15:55:24 CEST] <JEEB> the filter itself is just taking in N inputs and overlaying
[15:55:34 CEST] <JEEB> I mean, that way you are handling the complexity where it should be
[15:55:47 CEST] <JEEB> Schwarzbaer: also I have never used ffserver just like many others here.
[15:55:54 CEST] <JEEB> so if it handles that, cool
[15:56:14 CEST] <JEEB> still doesn't make it any less of a black box in the sense of wtf it's doing there
[16:04:28 CEST] <Schwarzbaer> Okay, I've looked a little into nginx-rtmp now (meaning that I'm still skimming the intro page). Apparently it acts as a replacement for ffserver, meaning that I point ffmpegs at it to feed media streams, and (hopefully) ffplays to watch those streams. So in my "feeds going on and off" scenario, could I have a handful of locations that do or do not get feeds at any given time, and one continuously running ffmpeg
[16:04:29 CEST] <Schwarzbaer> instance that reads from those locations, mixes the output image, using default images if a given location is currently not being fed, and then feeds the output back into nginx-rtmp?
[16:05:15 CEST] <Schwarzbaer> Also, where can I read up on what RTMP, HLS, and DASH actually *are*?
[16:05:36 CEST] <JEEB> yes, or your own app if ffmpeg.c doesn't do the fallbacks and re-synchronization or whatever like you need
[16:06:05 CEST] <JEEB> basically you separate the actual transcoding and the serving of content
[16:06:11 CEST] <JEEB> which is a common separation line
[16:06:47 CEST] <JEEB> Schwarzbaer: if you want technicalities then for RTMP I guess https://wiki.multimedia.cx/index.php/RTMP is good
[16:06:58 CEST] <Schwarzbaer> Thanks.
[16:07:12 CEST] <JEEB> HLS and DASH are just ways of streaming video through HTTP with small chunks and a constantly updating playlist/manifest
[16:07:24 CEST] <JEEB> so the player keeps querying that file for new segments etc
[16:07:43 CEST] <Schwarzbaer> Also, there doesn't seem to be a Debian package for nginx-rtmp, which is unfortunate and bizarre.
[16:08:27 CEST] <JEEB> also I would generally keep the hell away from ffplay if not for testing purposes
[16:08:43 CEST] <JEEB> it's a proof-of-concept player done on top of the FFmpeg libraries and SDL(2)
[16:08:48 CEST] <BtbN> nginx-rtmp is a nginx module
[16:08:57 CEST] <BtbN> and nginx does not support truly dynamic modules
[16:08:58 CEST] <JEEB> if you want a player that bases on top of FFmpeg then I recommend mpv
[16:09:02 CEST] <BtbN> so you have to compile it in yourself
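[Editor's note: a sketch of what "compile it in yourself" looks like; the nginx version and paths are placeholders, and this assumes a typical build toolchain.]

```shell
# Fetch nginx sources and the rtmp module, then build nginx with the
# module statically compiled in via --add-module.
git clone https://github.com/arut/nginx-rtmp-module.git
wget https://nginx.org/download/nginx-1.12.1.tar.gz
tar xf nginx-1.12.1.tar.gz
cd nginx-1.12.1
./configure --add-module=../nginx-rtmp-module
make && sudo make install
```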
[16:09:25 CEST] <Schwarzbaer> Well, on a day-to-day basis I'm usually using mplayer.
[16:09:33 CEST] <JEEB> mpv is the least retarded fork of mplayer
[16:09:38 CEST] <BtbN> mplayer is old, half-dead and horrible
[16:10:28 CEST] <Schwarzbaer> I'll give mpv a try then.
[16:12:09 CEST] <JEEB> but for testing in browsers I recommend hls.js and dash.js since most players utilize one of those for their playback
[16:12:22 CEST] <JEEB> and for android you have the exoplayer example app
[16:12:31 CEST] <JEEB> and for iOS you can just embed the HLS url in a video tag
[16:16:52 CEST] <Schwarzbaer> TBH right now I don't care about phones, since I still don't have a smartphone. And web players are, right now, also a flight of fancy at best.
[16:17:30 CEST] <Schwarzbaer> Actually, to go on a bit of a tangent, do js players play nice with Tor?
[16:17:49 CEST] <Schwarzbaer> (bbiab)
[16:17:54 CEST] <JEEB> if the HTTP requests go through OK I don't see Tor as anything else than a proxy :P
[16:18:18 CEST] <JEEB> that said the tor network's management might not like you doing high bandwidth stuff over it
[16:19:14 CEST] <BtbN> That's the whole point of piping absolutely everything through http
[16:19:25 CEST] <BtbN> tor will usually be way too slow for any kind of streaming
[16:20:10 CEST] <JEEB> no, the point of HTTP and segments was to utilize the existing content delivery infra :D
[16:20:21 CEST] <JEEB> that's why things moved away from actual streaming protocols
[16:26:39 CEST] <wondiws> hello
[16:37:50 CEST] <Schwarzbaer> Well, I didn't *plan* to stream high bandwidth media, and I think that as my current upload is limited to 60kb/s, Tor management probably wouldn't even notice. It's more a matter of "If I should ever want to, could I?", to which the answer is apparently "Yes." And since I've got ffmpeg in the pipeline, I should be able to downsample the media streams arbitrarily anyway. Anybody wanna watch my 4x3 pixels webcam stream? :D
[16:39:17 CEST] <BtbN> 60kb/s is barely enough for music
[16:42:06 CEST] <JEEB> depends on your quality expectations etc
[16:42:29 CEST] <JEEB> x264 can compress your shit down, as noted by me setting 6 kbps as my maxrate and wondering where my image goes after the 16 megabit buffer gets filled
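[Editor's note: a runnable sketch of the rate capping JEEB describes. x264 only enforces -maxrate together with -bufsize (the VBV buffer mentioned above); the numbers and filename here are illustrative, not from the chat.]

```shell
# Cap x264 at a low bitrate with a matching VBV buffer; with a huge
# bufsize relative to maxrate, quality collapses once the buffer drains.
ffmpeg -v error -f lavfi -i "testsrc2=size=320x240:rate=25" -t 2 \
    -c:v libx264 -b:v 48k -maxrate 48k -bufsize 48k capped.mp4
```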
[16:47:29 CEST] <Schwarzbaer> As long as I can stream around strings within my LAN, I'll be happy for the moment. ^^
[17:29:42 CEST] <Schwarzbaer> Well then... Seems like I've just built nginx with rtmp. Time to look into configuring it. After getting some food...
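[Editor's note: a minimal nginx-rtmp configuration sketch for the setup and access-control questions above; names and addresses are illustrative.]

```nginx
# Goes in nginx.conf, alongside the http {} block.
rtmp {
    server {
        listen 1935;
        application live {
            live on;
            # Access control sketch: only accept publishers from localhost,
            # but let anyone play rtmp://host/live/<stream>.
            allow publish 127.0.0.1;
            deny publish all;
        }
    }
}
```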
[18:34:03 CEST] <arpu> any on an idea why i get Invalid data found when processing input on ffmpeg -i $(youtube-dl -J "https://www.youtube.com/watch?v=XmL19DOP_Ls" | jq -r ".requested_formats.manifest_url")
[18:34:24 CEST] <arpu> this should work since this commit message http://git.videolan.org/?p=ffmpeg.git;a=commit;h=96d70694aea64616c68db8be306c159c73fb3980
[18:34:54 CEST] <BtbN> Because it's far from perfect.
[18:39:03 CEST] <arpu> but why is it in the commit message? stevenliu
[18:39:43 CEST] <JEEB> because some URL at some point worked
[18:39:45 CEST] <JEEB> probably?
[18:40:18 CEST] <BtbN> I still think that changelog should not have been pushed in the commit message
[18:41:19 CEST] <JEEB> I tested a certain stream I had around that contained +5 seconds of audio compared to video, and I couldn't even get ffmpeg.c rolling :)
[18:41:30 CEST] <JEEB> since ffmpeg.c said I was buffering too many packets or something
[18:41:53 CEST] <JEEB> (the end point was the same, but there was for some reason +5 seconds of audio from before the video segments started)
[18:42:26 CEST] <JEEB> tested with two other DASH clients and it worked a-OK with those
[18:42:56 CEST] <JEEB> but since i've gotten such a message generally it feels like it's not DASH-specific and the demuxer was just pushing out all samples from the manifest :)
[18:43:12 CEST] <BtbN> DASH is stupidly complex
[23:54:29 CEST] <Schwarzbaer> Woohoo, I got nginx-rtmp to work, and indeed now my video filters get applied correctly. Now I just have to learn about how to configure access controls...
[23:54:58 CEST] <Schwarzbaer> And maybe about using an embedded player after all, just for the kick of it.
[23:56:35 CEST] <MelchiorGaspar> got a quick question,.... what FFmpeg cmd will display the source file's codec and resolution info?
[23:58:18 CEST] <JEEB> use ffprobe for getting info on a file
[23:58:32 CEST] <JEEB> if you are going to be using it by an app that reads its output, use -of json
[23:58:35 CEST] <JEEB> which outputs JSON
[23:58:40 CEST] <JEEB> then you can do stuff like -show_streams
[23:58:45 CEST] <JEEB> or -show_frames
[23:59:00 CEST] <JEEB> which gives a brickload of stuff for you to parse
[23:59:19 CEST] <JEEB> -show_streams shows the basic info on the streams in the file
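[Editor's note: putting JEEB's ffprobe advice together in one runnable sketch; a sample file is generated from a test source first, so no existing media file is assumed.]

```shell
# Create a 1-second sample, then dump its stream info as JSON.
ffmpeg -v error -f lavfi -i "testsrc2=size=320x240:rate=25" -t 1 \
    -c:v libx264 sample.mp4
# -of json emits machine-readable output; -show_streams includes codec
# name, width, height, pixel format, etc. for each stream.
ffprobe -v error -of json -show_streams sample.mp4
```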
[00:00:00 CEST] --- Sun Sep 17 2017