[Ffmpeg-devel-irc] ffmpeg.log.20170313

burek burek021 at gmail.com
Tue Mar 14 03:05:01 EET 2017


[00:52:52 CET] <faLUCE> is there any function in libAV which I can use in order to obtain the number of bytes for a given format ?
[00:53:31 CET] <faLUCE> for example:  AV_SAMPLE_FMT_FLT must return 4 (32 bits)
[01:02:08 CET] <faLUCE> found it:   av_get_bytes_per_sample()
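[Editor's note: the size table av_get_bytes_per_sample() encodes can be sketched outside libav entirely. The helper below is hypothetical (not part of ffmpeg); the format names and byte sizes are the standard libavutil AVSampleFormat ones, planar variants having the same per-sample size.]

```shell
# Hypothetical shell helper mirroring what av_get_bytes_per_sample() returns
# for the common AVSampleFormat names (planar "p" variants are the same size).
bytes_per_sample() {
  case "$1" in
    u8|u8p)             echo 1 ;;
    s16|s16p)           echo 2 ;;
    s32|s32p|flt|fltp)  echo 4 ;;
    s64|s64p|dbl|dblp)  echo 8 ;;
    *)                  echo 0 ;;   # unknown format
  esac
}
bytes_per_sample flt    # AV_SAMPLE_FMT_FLT -> 4
```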
[01:06:53 CET] <the_k_> can anyone help?
[01:06:56 CET] <the_k_> ffmpeg -i rtsp://10.0.0.10:554/Streaming/Channels/2 -vf "select=gt(scene\,0.005),setpts=N/(25*TB)" -f segment -segment_time 21600 -strftime 1 fd_motion_%%Y-%%m-%%d-%%H;%%M;%%S.ts
[01:07:42 CET] <the_k_> i added everything after "-vf" to make it so that i can read the file as it's being written, but this makes the recording really blurry
[01:07:51 CET] <the_k_> oops no
[01:08:20 CET] <the_k_> i added the "-f segment ..... " and everything following that
[01:09:16 CET] <the_k_> can anyone suggest what i could change to keep the original quality of the stream?
[01:45:34 CET] <the_k_> a friend seems to think that changing the output filename extension from .ts to .mkv will alter what ffmpeg does when outputting the file, could this be true?
[01:51:37 CET] <ps-auxw> the_k_: I'm pretty sure it does, yes.
[01:52:48 CET] <ps-auxw> If you put .mkv, it should use the matroska muxer and will probably use different default encoders for audio and video.
[01:54:00 CET] <ps-auxw> The output being blurry with your previous commandline might be due to insufficient bitrate or otherwise undesirable encoder default settings.
[01:56:39 CET] <ps-auxw> the_k_: Try with something like "-acodec copy -vcodec libx264 -crf 18 out.mkv", if the filesize is too big, make the crf number bigger, if quality isn't high enough make it smaller. You can also try different -preset settings for encoding time/quality trade-offs.
[02:05:42 CET] <the_k_> yeah i just looked at the file in a hex viewer
[02:05:45 CET] <the_k_> i couldn't believe it
[02:06:15 CET] <the_k_> can't i just dump it raw? it's an rtsp stream
[02:06:27 CET] <the_k_> i really don't want to reencode it
[02:11:04 CET] <ps-auxw> You can try -vcodec copy, but I'm not sure you can use -vf with that then.
[02:11:39 CET] <ps-auxw> Most filters work in a decode-process-encode way. Maybe all, I'm not sure.
[02:14:03 CET] <the_k_> -c:all copy
[02:14:20 CET] <the_k_> friend said to try this and it outputs fine to mkv
[02:15:07 CET] <the_k_> i'm not sure if it's actually doing anything though as when i tried the command without that and was outputting to mkv it was also fine.. no blur
[02:19:06 CET] <the_k_> it's possible to write another copy straight to disk without the motion detection part in the same command, right?
[02:29:08 CET] <the_k_> weird.. i get over 50% cpu when i do
[02:29:19 CET] <the_k_> but i don't if i'm using two instances of ffmpeg
[02:29:24 CET] <the_k_> also it loses a ton of frames
[02:30:24 CET] <the_k_> would be a lot better if i could use a single rtsp stream because then it means there's more available bandwidth for the camera and then it leaves room for one more device to connect to it
[05:17:36 CET] <the_k_> is there a way to stop ffmpeg from terminating when it reaches the end of an input file?
[05:18:00 CET] <the_k_> i need it to wait till the file is a bit bigger
[05:20:05 CET] <ZeroWalker> pipe it and close it manually or something perhaps
[05:22:30 CET] <the_k_> ahh
[05:22:31 CET] <the_k_> ok
[05:22:41 CET] <the_k_> hmm
[05:22:55 CET] <the_k_> wouldn't piping it stop it from writing a file though?
[05:23:13 CET] <the_k_> it would need two outputs then.. which always makes it drop frames
[05:30:36 CET] <ZeroWalker> well you are supposed to have one input and one output
[05:31:01 CET] <ZeroWalker> the output can be a standard one, and the input could be a pipe
[05:31:43 CET] <ZeroWalker> then you just write to that pipe somehow and when you are done with that input file you just close the pipe
[05:33:17 CET] <the_k_> so it's not good to have one input and two outputs
[05:33:19 CET] <the_k_> ?
[05:34:39 CET] <the_k_> i need one file to be written so that i get a HD recording 24/7 of the camera for security reasons
[05:34:52 CET] <ZeroWalker> and the other output?
[05:35:17 CET] <the_k_> and the secondary file is just so i can do motion detection on it so that i can fairly quickly scan through to check for any idiots that have been around as has been the case lately
[05:35:28 CET] <the_k_> gang of 3 tried to push the way in through the front door
[05:35:32 CET] <the_k_> all dressed in black
[05:36:19 CET] <ZeroWalker> so it's a different encoding, not just a copy
[05:36:27 CET] <the_k_> so i require these things: the best HD recording i can obtain .. written to file that doesn't skip a beat
[05:36:53 CET] <the_k_> and an easy way to view movement / line crosses throughout the day
[05:37:03 CET] <the_k_> oh and also a live feed.. doesn't need to be HD but is nice
[05:37:14 CET] <ZeroWalker> well it's all up to the performance of the hardware rly
[05:37:17 CET] <the_k_> the motion detection is, yes.. and the quality again doesn't really matter
[05:37:30 CET] <the_k_> i920 cpu, overclocked to 3.9mHz
[05:37:43 CET] <ZeroWalker> ghz i would assume
[05:37:45 CET] <the_k_> i have an nvidia 1080 gfx card though..
[05:37:49 CET] <the_k_> ah yea :D
[05:37:52 CET] <the_k_> haha
[05:37:57 CET] <the_k_> no it's a new model calculator!
[05:38:05 CET] <ZeroWalker> hmm is it called i920?
[05:38:06 CET] <ZeroWalker> ;P
[05:38:15 CET] <the_k_> yeah chinese branding
[05:38:18 CET] <ZeroWalker> well with the gpu you could use NVENC to encode
[05:38:26 CET] <the_k_> right yeah
[05:38:29 CET] <ZeroWalker> that's super fast and then performance is no problem
[05:38:32 CET] <the_k_> anything to split the loaed
[05:38:33 CET] <the_k_> load
[05:38:36 CET] <the_k_> mm
[05:38:39 CET] <ZeroWalker> though, size and quality will take a toll
[05:38:49 CET] <the_k_> for the HD recording?
[05:38:57 CET] <ZeroWalker> for anything that uses NVENC
[05:39:03 CET] <the_k_> i'm sure raw dumping is less intensive
[05:39:17 CET] <the_k_> so as long as the initial file recording is best quality i don't care
[05:39:21 CET] <ZeroWalker> NVENC is fast, but quality is bad, so you have to bump up the bitrate to get it closer to x264 level
[05:39:24 CET] <the_k_> the motion detection bit can use gpu
[05:39:39 CET] <ZeroWalker> well there is no such thing as "best quality" unless you do lossless
[05:39:58 CET] <the_k_> that's not a problem. i just need to see if there's any idiots outside
[05:39:58 CET] <ZeroWalker> you have to find what you think works best in terms of quality/size/performance
[05:40:09 CET] <the_k_> 1) live 2) looking back at the day
[05:40:27 CET] <ZeroWalker> well then you probably don't need any "good" quality
[05:40:33 CET] <the_k_> i think it's a raw dump that i'm doing with this?
[05:40:47 CET] <the_k_> e:\fd\ffmpeg -i rtsp://fd:554/Streaming/Channels/1 -c copy -f segment -segment_time 86400 -strftime 1 cam_fd_HD.ts
[05:40:51 CET] <ZeroWalker> a raw dump would probably be massive in size
[05:40:58 CET] <the_k_> that's a raw dump, right?
[05:41:03 CET] <the_k_> i don't mind
[05:41:10 CET] <ZeroWalker> hmm, yeah so it sends in mpeg2
[05:41:13 CET] <the_k_> i have a 3tb drive for it
[05:41:20 CET] <the_k_> erm
[05:41:22 CET] <ZeroWalker> well then it's easy enough, you can just do the copy thing
[05:41:25 CET] <the_k_> it's sending in h.264
[05:41:31 CET] <ZeroWalker> oh
[05:41:38 CET] <ZeroWalker> didn't know .ts supported that
[05:41:46 CET] <ZeroWalker> but that's even better
[05:41:55 CET] <the_k_> MPEG-4 AVC
[05:42:04 CET] <the_k_> i've never heard of the ts format before.. it's new to me
[05:42:09 CET] <ZeroWalker> yeah if you get that without encoding, then the camera handles that for you
[05:42:16 CET] <the_k_> but it worked for the job of being able to play the file live near its end
[05:42:31 CET] <the_k_> so that saves me bandwidth and allows others to view the live view of the camera
[05:42:31 CET] <ZeroWalker> well it's a weird format, i personally would suggest mp4 or mkv, but whatever works
[05:42:51 CET] <the_k_> right
[05:42:57 CET] <the_k_> can test mp4 now
[05:42:59 CET] <ZeroWalker> well with that setup, does it actually close randomly?
[05:43:09 CET] <the_k_> no
[05:43:12 CET] <ZeroWalker> i would have thought it would only close after the server closes
[05:43:15 CET] <ZeroWalker> ah
[05:43:17 CET] <the_k_> i close it manually
[05:43:28 CET] <ZeroWalker> so, what's the problem with that one?
[05:43:31 CET] <the_k_> i segment the file up into 24 hourly files
[05:43:34 CET] <the_k_> erm
[05:43:38 CET] <ZeroWalker> ah
[05:43:48 CET] <the_k_> the problems only come in when i want to do motion detection
[05:43:51 CET] <the_k_> and live viewing it
[05:44:11 CET] <ZeroWalker> you can't live view it while it's copying?
[05:44:23 CET] <the_k_> i can have two inputs into two separate instances of ffmpeg so that i get my hd recording and my motion detection file
[05:44:48 CET] <ZeroWalker> hmm
[05:44:52 CET] <the_k_> then i need a live view, so i play the hd recording near the end of the file (currently having issues with that for some reason and it's gone from 3s lag from live to 20)
[05:45:18 CET] <ZeroWalker> well i mean, the motion detection files, can't you just check the last 24 hours instead?
[05:45:40 CET] <the_k_> but that's two inputs .. two rtsp streams the camera has to fill the network with.. and like i say.. i want to be able to have other ppl and devices able to view the live feed
[05:45:54 CET] <ZeroWalker> hmm, pretty sure you should be able to display it with ffplay or something while you are copying, so you always have a live window
[05:46:02 CET] <the_k_> instead of live? no.. no doorbell up here and i prefer a live view
[05:46:10 CET] <the_k_> lots of deliveries
[05:46:41 CET] <ZeroWalker> you would get live if you display it right before you copy, or vice versa, in the same command thing
[05:46:43 CET] <the_k_> hmm.. did try that but it dropped frames
[05:46:56 CET] <ZeroWalker> hmm, on the result file or the display?
[05:47:03 CET] <the_k_> maybe i should have had the live view first
[05:47:09 CET] <the_k_> both
[05:47:20 CET] <ZeroWalker> yeah try the live view first
[05:47:20 CET] <the_k_> the image starts losing some of its horizontal lines
[05:47:27 CET] <the_k_> so you see the bottom of the image crawl upwards
[05:47:34 CET] <the_k_> ok
[05:47:38 CET] <the_k_> -f nut - | e:\fd\mpv\mpv.exe -
[05:47:45 CET] <the_k_> that's what i added to get a live view
[05:47:52 CET] <the_k_> but it uses a pipe so..
[05:48:02 CET] <the_k_> not sure that'll work but can test..
[05:48:11 CET] <ZeroWalker> that's... odd, it feels like it doesn't read the entire frame then, and then you will get this "circular frame" thingy occurring
[05:48:14 CET] <the_k_> e:\fd\ffmpeg -i rtsp://fd:554/Streaming/Channels/1 -f nut - | e:\fd\mpv\mpv.exe - -c copy -f segment -segment_time 86400 -strftime 1 cam_fd_HD.ts
[05:48:31 CET] <the_k_> it says something about circular something in the output
[05:48:32 CET] <the_k_> sec
[05:48:38 CET] <ZeroWalker> not sure what mpv is, isn't there like ffplay?
[05:48:53 CET] <the_k_> it's a more updated player
[05:49:00 CET] <the_k_> ffplay was apparently outdated and buggy or something
[05:49:06 CET] <the_k_> ppl here and elsewhere preferred it
[05:49:13 CET] <ZeroWalker> ah
[05:49:17 CET] <the_k_> it did seem better anyway
[05:49:31 CET] <the_k_> [udp @ 000000000064c6e0] 'circular_buffer_size' option was set but it is not supported on this build (pthread support is required)
[05:49:31 CET] <the_k_> [udp @ 000000000064c7a0] 'circular_buffer_size' option was set but it is not supported on this build (pthread support is required)
[05:49:53 CET] <the_k_> ah that's weird.. it doesn't even bring up mpv
[05:50:09 CET] <ZeroWalker> hmm, not sure what that does, but it does say that the build doesn't support it, so it might be worth trying another version
[05:50:13 CET] <the_k_> e:\fd\ffmpeg -i rtsp://fd:554/Streaming/Channels/1 -f nut - | e:\fd\mpv\mpv.exe - -c copy -f segment -segment_time 86400 -strftime 1 cam_fd_HD.ts
[05:50:16 CET] <the_k_> this must be badly formatted
[05:50:27 CET] <the_k_> i can't see a pipe in the middle of the command working
[05:50:45 CET] <ZeroWalker> hmm looks weird, you just open the exe file, then again
[05:50:46 CET] <the_k_> well this is the latest build
[05:50:48 CET] <ZeroWalker> hmm
[05:51:00 CET] <the_k_> then again?
[05:51:02 CET] <ZeroWalker> what happens if you open the stream manually in mpv while you are copying in ffmpeg?
[05:51:10 CET] <the_k_> it ends
[05:51:10 CET] <the_k_> sec
[05:51:16 CET] <the_k_> lemme make sure i'm right
[05:51:26 CET] <the_k_> e:\fd\ffmpeg -i rtsp://fd:554/Streaming/Channels/1 -c copy -f segment -segment_time 86400 -strftime 1 cam_fd_HD.ts
[05:51:29 CET] <the_k_> with this :
[05:52:01 CET] <the_k_> some players handle end of file differently
[05:52:17 CET] <the_k_> ah i'm 10s lagged but this is working now
[05:52:31 CET] <the_k_> 2 lagged
[05:52:33 CET] <the_k_> this is fine
[05:52:35 CET] <the_k_> hnn
[05:52:36 CET] <the_k_> hmm
[05:52:43 CET] <ZeroWalker> wait so you are currently copying and you have it play in mpv separately?
[05:52:51 CET] <the_k_> yeah
[05:52:56 CET] <ZeroWalker> well, that's great
[05:52:57 CET] <the_k_> i just opened the file up in mpv
[05:53:03 CET] <ZeroWalker> oh
[05:53:07 CET] <ZeroWalker> i meant, open the stream
[05:53:14 CET] <the_k_> in mpv?
[05:53:16 CET] <ZeroWalker> i rtsp://fd:554/Streaming/Channels/1 this
[05:53:18 CET] <ZeroWalker> yeah
[05:53:20 CET] <the_k_> no that would be bad
[05:53:23 CET] <the_k_> waste of bandwidth
[05:53:40 CET] <the_k_> the stream is max bandwidth the cam can output
[05:53:48 CET] <the_k_> highest fps, highest quality
[05:53:51 CET] <ZeroWalker> ah
[05:54:00 CET] <ZeroWalker> but, you have tried it?
[05:54:03 CET] <the_k_> and if i stream it twice then everything will drop frames
[05:54:26 CET] <the_k_> yeah but i ALSO need at least one more client to be able to get the stream from the webpage it serves
[05:54:37 CET] <the_k_> this is why i really need to be conservative
[05:55:01 CET] <ZeroWalker> ah, not that knowledgeable about how rtsp works
[05:55:21 CET] <the_k_> so at the moment i can play the file that's outputted and i have a good recording.. but if i want motion detection i have to start another ffmpeg up
[05:55:24 CET] <ZeroWalker> but hmm well the problem with playing the file is that it can play faster than it records (or vice versa)
[05:55:31 CET] <the_k_> well it just eats bandwidth
[05:55:44 CET] <the_k_> it's not like a network multicast
[05:55:49 CET] <the_k_> some cameras can do that
[05:56:24 CET] <the_k_> i.e. just spams the network but everyone can access it .. infinite clients without any extra bandwidth
[05:56:29 CET] <the_k_> like tuning into a radio
[05:56:46 CET] <the_k_> that would be ideal. the camera even has an option to set that but it doesn't work
[05:56:57 CET] <ZeroWalker> ah, heard about multicast, but never knew what it was, nice
[05:57:10 CET] <the_k_> it breaks the webpage live view that it serves and it doesn't reduce bandwidth
[05:57:24 CET] <the_k_> so i just turned it back to UDP
[05:57:45 CET] <the_k_> which means for each client that accesses the live view.. it sends out another stream
[06:00:55 CET] <ZeroWalker> e:\fd\ffmpeg -i rtsp://fd:554/Streaming/Channels/1 -f nut - | ffplay - | - -c copy -f segment -segment_time 86400 -strftime 1 cam_fd_HD.ts
[06:01:19 CET] <ZeroWalker> does something like that work. not really sure how you normally make a passthrough in a pipe
[06:02:43 CET] <the_k_> when you make a pipe you can't just carry on the command
[06:02:54 CET] <the_k_> it dumps the output data into the pipe
[06:03:03 CET] <ZeroWalker> think you can use -f tee
[06:03:26 CET] <ZeroWalker> https://trac.ffmpeg.org/wiki/Creating%20multiple%20outputs
[06:03:57 CET] <ZeroWalker> https://ffmpeg.org/ffmpeg-formats.html#tee
[06:04:10 CET] <the_k_> '-' is not recognized as an internal or external command,
[06:04:28 CET] <the_k_> that's what i get from that
[06:04:36 CET] <ZeroWalker> yea mine is plain wrong, but you should try using tee
[06:04:41 CET] <ZeroWalker> seems to do precisely what you want
[06:04:44 CET] <the_k_> ah
[06:04:52 CET] <the_k_> ah right nice!
[06:22:11 CET] <the_k_> [mpegts @ 00000000047415c0] H.264 bitstream malformed, no startcode found, use the video bitstream filter 'h264_mp4toannexb' to fix it ('-bsf:v h264_mp4toannexb' option with ffmpeg)
[06:22:11 CET] <the_k_> av_interleaved_write_frame(): Invalid data found when processing input
[06:22:34 CET] <the_k_> oh it's working now
[06:22:46 CET] <the_k_> skipping a couple frames but not too many
[06:25:32 CET] <the_k_> losing quite a lot of frames actually
[06:26:04 CET] <the_k_> e:\fd\ffmpeg -i rtsp://fd:554/Streaming/Channels/1 -c copy -f segment -segment_time 86400 -strftime 1 cam_fd_HD.ts -f tee -vf "select=gt(scene\,0.005),setpts=N/(25*TB)" -f segment -segment_time 21600 -strftime 1 fd_motion_%%Y-%%m-%%d--%%H;%%M;%%S.ts -f tee -f nut - | e:\fd\mpv\mpv.exe -
[06:26:10 CET] <the_k_> is my command formatted right?
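[Editor's note: it isn't. Per the tee muxer docs linked above, tee takes a single quoted output with `|`-separated slave entries and per-slave options in `[...]` brackets, not repeated `-f tee` flags; also, `-c copy` cannot be combined with the scene-detect `-vf` in the same pass, so the motion output still needs a separate encode. A sketch of the tee form for this setup, built as an echoed string since running it needs the live camera (URL and segment options taken from the log; the filename pattern is illustrative, and `%` becomes `%%` in a Windows batch file):]

```shell
# Sketch only: one input, two copy outputs via the tee muxer; the nut slave
# goes to stdout so a player can consume it, e.g.:  ... | mpv -
cmd='ffmpeg -i rtsp://fd:554/Streaming/Channels/1 -map 0 -c copy -f tee'
cmd="$cmd '[f=segment:segment_time=86400:strftime=1]cam_fd_HD_%Y-%m-%d.ts|[f=nut]pipe:1'"
echo "$cmd"
```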
[11:09:29 CET] <feliwir> wow... This file was changed a lot recently: https://github.com/FFmpeg/FFmpeg/blob/6e913f212907048d7009cf2f15551781c69b9985/libavcodec/vp56.c
[11:11:41 CET] <feliwir> vp56_render_mb isn't inside the latest ffmpeg version i have locally
[11:13:22 CET] <kfolman> Hi guys. I'm fiddling around with a problem mapping audio tracks. Discrete channels map fine, and the mono channel is mapped fine as well. However if i mix those two, i run into problems. Tracks 0:1 and 0:2 are mono tracks. Ch 0:3 has a two channel discrete track. But i can't seem to nail the command where i'm able to map 0:1 and 0:2:1 to a two channel output. This is my syntax so far. http://pastebin.com/GaGXZwiM
[12:21:42 CET] <faLUCE> Hello. How can I list with ffmpeg the PCM sound formats supported by AAC encoder?
[12:49:48 CET] <mmsky> https://thepasteb.in/p/mwh1z8vBKJPt5
[12:50:07 CET] <mmsky> hello anyway
[12:50:31 CET] <mmsky> can anyone help me with that? (above)
[12:50:59 CET] <c_14> ldd $HOME/bin/ffmpeg
[12:51:48 CET] <c_14> It's probably linking (at runtime) to a system version of ffmpeg, prefix the command with LD_LIBRARY_PATH="$HOME/ffmpeg_build/lib"
[12:52:02 CET] <mmsky> https://thepasteb.in/p/zmh86EK6X2wiZ
[12:52:21 CET] <c_14> yeah, exactly what I said
[12:52:45 CET] <c_14> either prefix the command at runtime with the LD_LIBRARY_PATH or set LD_RUN_PATH at compiletime
[12:52:56 CET] <c_14> Or --enable-static --disable-shared
[12:53:12 CET] <c_14> So that the libav* libraries are linked into the binaries statically instead of dynamically
[12:56:37 CET] <furq> cool, a new pastebin
[12:56:40 CET] <furq> i'll add it to the ball
[13:02:26 CET] <mmsky> ok, for now it works. I tried static before but got a different error with the vaapi version, but now it runs. I will test it
[13:02:34 CET] <mmsky> thanks very much ;)
[13:23:38 CET] <mmsky> another error with library
[13:23:51 CET] <mmsky> https://thepasteb.in/p/oYhlE5JBrJnfZ
[13:24:47 CET] <mmsky> i run with: ~/bin/ffmpeg -f decklink -i 'DeckLink Duo (0)@12' -c:v libx264 -pix_fmt yuv420p -preset ultrafast -y -f avi abc1.avi
[13:30:41 CET] <c_14> where is libDeckLinkAPI.so?
[13:37:14 CET] <acresearch> JEEB: hey, you online? the video works, but i get (AUDIO codec not supported),,, what should i do? this was the last command you sent me (ffmpeg -i input.mkv -c:v libx264 -preset veryfast -level 41 -crf 21 -vf scale=1920:1080 -c:a copy -sn out.mp4)
[13:39:08 CET] <acresearch> anyone here who can assist me?
[13:39:55 CET] <JEEB> acresearch: switch -c:a copy to -c:a aac -b:a 192k -ac 2
[13:40:01 CET] <JEEB> that should work
[13:40:47 CET] <acresearch> JEEB: so the final command is this? (ffmpeg -i input.mkv -c:v libx264 -preset veryfast -level 41 -crf 21 -vf scale=1920:1080 -c:a aac -b:a 192k -ac 2 -sn out.mp4)
[13:41:42 CET] <IntruderSRB> can someone point me in the right direction (documentation/tutorial/stackoverflow). I'm trying to pass byte array that contains multiple NAL units (one of them is VideoNAL H264) to my C lib (through JNI). My C lib will than utilize libavcodec to parse that byte array.
[13:41:58 CET] <JEEB> acresearch: looks good enough
[13:42:02 CET] <IntruderSRB> most of the stuff I found is related to file inputs ... any suggestions/pointers will be appreciated :)
[13:42:04 CET] <JEEB> that will transcode your DTS audio to AAC
[13:42:06 CET] <acresearch> JEEB: thanks
[13:42:18 CET] <JEEB> and makes it stereo
[13:43:03 CET] <acresearch> oh ok i see
[13:47:17 CET] <mmsky> ok, i didn't install the drivers but included the headers in the compilation. But now i get another error when running the command above: "Segmentation fault"
[13:49:04 CET] <mmsky> with that decklink is trouble at least, dunno what could be wrong?
[13:50:51 CET] <mmsky> i install .deb from Blackmagic_Desktop_Video_Linux_10.8.4/deb/amd64/desktopvideo_10.8.4a4_amd64.deb
[16:41:23 CET] <DelphiWorld> yo everyone
[16:41:49 CET] <DelphiWorld> how to transcode the entire content of a container, including all audio tracks?
[16:44:15 CET] <furq> DelphiWorld: -map 0
[16:44:50 CET] <DelphiWorld> furq, but for a stream where i don't know how many audio streams there are, how do i get them all?
[16:45:54 CET] <DelphiWorld> ah
[16:46:05 CET] <DelphiWorld> so -map 0 will copy everything, including subtitles
[20:13:26 CET] <tatack> Hi guys, can ffmpeg act as rtmp server (receiving rtmp stream as input)?
[20:13:39 CET] <tatack> According to https://www.ffmpeg.org/ffmpeg-protocols.html#rtmp it seems so, but i can't find how use the listen parameter :-(
[20:14:05 CET] <furq> tatack: -listen 1 -i rtmp://0.0.0.0:1935/live/foo
[20:14:43 CET] <BtbN> That's an rtmp client though.
[20:16:37 CET] <furq> ?
[20:17:13 CET] <furq> tatack: you probably want to use a proper rtmp server anyway
[20:17:38 CET] <furq> that works the same way as all of ffmpeg's server functionality, i.e. badly
[20:18:17 CET] <tatack> furq: :-) ok, thanks, i will try.
[20:19:21 CET] <DHE> yeah, get a real rtsp server and have ffmpeg feed it instead
[20:22:40 CET] <Fenrirthviti> tatack: nginx-rtmp is kinda the go-to free solution
[20:22:52 CET] <Fenrirthviti> for small scale private RTMP servers
[20:26:11 CET] <tatack> Fenrirthviti: thanks, i just looked at nginx. seems nice.
[20:26:23 CET] <furq> it is nice
[20:27:49 CET] <Fenrirthviti> It's got some quirks, but overall it's pretty solid for most things rtmp
[21:19:43 CET] <sinanksu> hi
[21:29:19 CET] <feliwir> Call me crazy, but to me it seems like it does columns first and then rows: https://github.com/FFmpeg/FFmpeg/blob/c87ea47481d35b0219e2e22d60f2a431286f725d/libavcodec/vp3dsp.c#L56
[21:29:42 CET] <feliwir> at least it seems like this from the calculation. So the comment would be wrong
[21:30:36 CET] <feliwir> also the original vp62 source code does it the opposite way: https://gist.github.com/feliwir/8aadbfdaca177ec67ebd6abc6d31b222
[21:35:23 CET] <faLUCE> is it possible to decode a "raw" AAC file? If I call avcodec_encode_audio2() and then fwrite() the encoded packets, I can't play the produced file with ffplay. I can play it if it's encoded with MP2 codec
[21:36:38 CET] <acresearch> hey JEEB you online?
[21:36:49 CET] <durandal_1707> never
[21:37:54 CET] <acresearch> JEEB: ok finished converting, the video went down from 23 GB to 1.4GB   big difference, i don't think it is HD anymore, the last command you sent me was this (ffmpeg -i in.mkv -c:v libx264 -preset veryfast -level 41 -crf 21 -vf scale=1920:1080 -strict -2 -c:a aac -b:a 192k -ac 2 -sn out.mp4)
[21:38:58 CET] <furq> 1080p looks pretty hd to me
[21:48:39 CET] <acresearch> furq: how do i know it is truly 1080p? could the conversion have compressed it wrongly?
[21:49:14 CET] <furq> because it says "-vf scale=1920:1080"
[21:49:49 CET] <acresearch> ok
[21:51:58 CET] <JEEB> acresearch: it's not Ultra HD (2160p), but it is HD (1080p) :P
[21:52:12 CET] <JEEB> your TV cannot play Ultra HD anyways
[22:07:00 CET] <faLUCE> any advice about that? It seems that the AAC encoder needs an AAC muxer in ffmpeg 3.2, while in the previous version (2.8) it didn't
[22:07:11 CET] <furq> it works fine here
[22:07:23 CET] <faLUCE> furq: what?
[22:07:30 CET] <acresearch> JEEB: hmmm, well i have a working copy, i want to try to see if i can increase the HD, the next step after 1080 is ?
[22:07:31 CET] <furq> decoding aac
[22:07:37 CET] <BtbN> I don't think there even is such a thing as an aac muxer
[22:07:53 CET] <furq> no idea about with the api, but ffmpeg decodes adts just fine
[22:08:59 CET] <faLUCE> If I simply add a fwrite() after avcodec_encode_audio2(), I can't decode the produced file
[22:09:35 CET] <faLUCE> so, it seems that a container (avformat) is needed
[22:10:36 CET] <faLUCE> I just added "fwrite(output_packet.data, 1, output_packet.size, encodedFile);" in transcode_aac.c example
[22:12:39 CET] <BtbN> of course, you always need to mux stuff. Writing raw data to disk rarely ever works.
[22:12:57 CET] <BtbN> aac and mp3 should be formats where it does though. Probably still needs some mini-container, like a header or something
[22:14:10 CET] <BtbN> Yeah, the .aac files are made using the adts muxer.
[22:16:14 CET] <faLUCE> BtbN: exactly. But in the previous version they were not
[22:16:55 CET] <JEEB> are you sure? that the parser/muxer weren't just silently enabled
[22:17:19 CET] <BtbN> Using which encoder? Maybe you're using fdk, faac or something, and it happens to output adts
[22:17:21 CET] <faLUCE> JEEB: I'm pretty sure. In addition, even if I do a simple fwrite, I see a header
[22:17:52 CET] <faLUCE> and I remember that I could decode the "raw" aac file in the previous version
[22:18:52 CET] <BtbN> those raw aac files are usually adts. If you write actual raw aac to disk, I don't think anything will be able to parse it
[22:19:29 CET] <faLUCE> this is pretty bad. I think it's a failure in the API, because with mp2 I can do that. And mp2 is conceptually the same as aac
[22:24:33 CET] <faLUCE> otherwise, the previous version of ffmpeg automatically added adts mux to aac encoded frames... what do you think?
[22:26:00 CET] <JEEB> or the muxer was just quietly enabled
[22:26:06 CET] <JEEB> or it was separated recently
[22:27:05 CET] <faLUCE> JEEB: uhmmmmm
[23:20:55 CET] <aptalca> While attempting hardware encode with intel haswell, I'm getting the following error "avcodec_open2 returned -38 for encoder 'h264_vaapi'" Does anyone know what code 38 means? Thanks.
[23:25:00 CET] <aptalca> Log here between lines 100 and 117: http://pastebin.com/NKkhfpsj
[23:35:50 CET] <jkqxz> It's just an errno.
[23:36:00 CET] <jkqxz> So ENOSYS.
[23:36:16 CET] <jkqxz> The interesting bit of log is "Encoding entrypoint not found (7 / 6).".
[23:37:25 CET] <jkqxz> Which suggests that your driver doesn't support encoding H.264 at high profile.
[23:51:16 CET] <feliwir> so for YUV 420 i get 3 planes created in ffmpeg. But i have no clue how to figure out the size of each plane in bytes (depending on width * height)
[23:52:44 CET] <DHE> ENOSYS - function not implemented. hardware capabilities (in part or in whole) not available
[23:53:21 CET] <JEEB> feliwir: there's something that gives you the size per sample (although you could look at the pix_fmt as well and make assumptions), and then you have to know that it's Y,Cb,Cr and that Cb and Cr in 4:2:0 are subsampled in width and height
[23:54:06 CET] <feliwir> JEEB, i just want to know how to do the calculation :( completely independent of ffmpeg
[23:54:18 CET] <JEEB> and then you have linesize which tells you the amount of uint8_t to go to the next line in the plane
[23:54:49 CET] <JEEB> (which can be more than just width*bytes_per_sample due to alignment etc)
[23:54:49 CET] <JEEB> feliwir: well YUV420P is 4:2:0, planar, 8bit
[23:55:02 CET] <DHE> but the number of bytes for a plane should be linesize * height, right?
[23:55:14 CET] <JEEB> yes
[23:55:17 CET] <kerio> are our eyes really so bad that they're fine with 4 bits of color
[23:55:18 CET] <JEEB> although it can be more
[23:55:22 CET] <feliwir> it's 6 bytes per 4 pixels for YUV 420 i've read
[23:55:38 CET] <DHE> kerio: more like 2 pixels can use the same 8 bits of colour
[23:55:43 CET] <iive> kerio: even worse
[23:56:03 CET] <iive> kerio: eyes are most sensitive to luminance
[23:56:22 CET] <feliwir> so it would be: (width*height/4)*6 bytes for a plane i'd guess?
[23:56:28 CET] <JEEB> feliwir: in 4:2:0 chroma is "one sample for 2x2 area" and then you have chroma location which has two defaults (one MPEG-1 style and another MPEG-2+ style)
[23:56:44 CET] <JEEB> top-left for mpeg-2 was it?
[23:56:50 CET] <JEEB> and mpeg-1 had just top or bottom?
[23:58:08 CET] <JEEB> feliwir: at the very minimum, without any alignment it would be (width*height) + 2*((width/2)*(height/2))
[23:59:27 CET] <feliwir> okay, thanks
[23:59:50 CET] <feliwir> that is per plane? Because it seems like every component has a separate plane in ffmpeg
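[Editor's note: JEEB's minimum-size formula above covers the whole frame, not one plane. A quick check for a 1920x1080 YUV420P frame, ignoring the linesize alignment padding JEEB mentions:]

```shell
# YUV420P: three separate planes; Cb and Cr are subsampled 2x in both axes.
w=1920 h=1080
y_plane=$(( w * h ))               # one 8-bit sample per pixel
c_plane=$(( (w / 2) * (h / 2) ))   # one sample per 2x2 block, per chroma plane
total=$(( y_plane + 2 * c_plane ))
echo "Y=$y_plane Cb=Cr=$c_plane total=$total"
# Y=2073600 Cb=Cr=518400 total=3110400
```

This matches the "6 bytes per 4 pixels" figure quoted earlier: (1920*1080/4)*6 = 3110400, i.e. 1.5 bytes per pixel for the whole frame.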
[00:00:00 CET] --- Tue Mar 14 2017


More information about the Ffmpeg-devel-irc mailing list