[Ffmpeg-devel-irc] ffmpeg.log.20170309

burek burek021 at gmail.com
Fri Mar 10 03:05:01 EET 2017


[00:04:41 CET] <blue_misfit> hey guys, why would I get dup and drop frames when transcoding an intra-only format like ProRes?
[00:12:23 CET] <blue_misfit> also, what does "clipping frame in rate conversion" mean?
[00:12:43 CET] <thebombzen> it means you're coercing the framerate
[00:12:56 CET] <thebombzen> try using -vsync passthrough or -vsync drop and see what happens
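A sketch of where those flags would go in such a command; the original command line wasn't posted, so the encoder choice and filenames below are assumptions:

    ffmpeg -i input.mov -c:v prores_ks -profile:v 3 -c:a copy -vsync passthrough output.mov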
[00:13:44 CET] <blue_misfit> very odd
[00:14:01 CET] <blue_misfit> this is a 1080p23.98 prores source and I'm trying to transcode it to the same format
[00:14:12 CET] <blue_misfit> interesting
[00:14:13 CET] <blue_misfit> I'll try
[00:18:37 CET] <blue_misfit> so using -vsync drop makes a file that doesn't play properly
[00:21:28 CET] <blue_misfit> using -vsync passthrough I get the desired number of frames in the output
[00:22:06 CET] <blue_misfit> but it seems to hold lastFrame-1 for the last 2 frames instead of displaying the final frame :)
[00:22:13 CET] <furq> does the source have vfr timestamps
[00:22:20 CET] <blue_misfit> how can I tell?
[00:22:30 CET] <blue_misfit> it shouldn't as it's prores and should be CFR afaik
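One way to check, for what it's worth, is to dump the per-frame timestamps and see whether the deltas are constant; input.mov is a placeholder here:

    ffprobe -v error -select_streams v:0 -show_entries frame=pkt_pts_time,pkt_dts_time -of csv=p=0 input.mov | head -20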
[00:24:31 CET] <blue_misfit> [mov @ 00000000025687a0] Non-monotonous DTS in output stream 0:0; previous: 181181, current: 181181; changing to 181182. This may result in incorrect timestamps in the output file.
[00:24:31 CET] <blue_misfit> [mov @ 00000000025687a0] Non-monotonous DTS in output stream 0:0; previous: 181182, current: 181181; changing to 181183. This may result in incorrect timestamps in the output file.
[00:24:31 CET] <blue_misfit> [mov @ 00000000025687a0] Non-monotonous DTS in output stream 0:0; previous: 181183, current: 181181; changing to 181184. This may result in incorrect timestamps in the output file.
[00:24:31 CET] <blue_misfit> [mov @ 00000000025687a0] Non-monotonous DTS in output stream 0:0; previous: 181184, current: 181181; changing to 181185. This may result in incorrect timestamps in the output file.
[00:24:34 CET] <blue_misfit> [mov @ 00000000025687a0] Non-monotonous DTS in output stream 0:0; previous: 181185, current: 181181; changing to 181186. This may result in incorrect timestamps in the output file.
[00:24:39 CET] <blue_misfit> [mov @ 00000000025687a0] Non-monotonous DTS in output stream 0:0; previous: 181186, current: 181181; changing to 181187. This may result in incorrect timestamps in the output file.
[00:24:42 CET] <blue_misfit> oy sorry guys
[00:24:45 CET] <blue_misfit> http://pasted.co/fead7a9c
[00:25:32 CET] <blue_misfit> looks like the last several frames have the same dts??
[00:56:30 CET] <blue_misfit> yeah timestamps are crazy at the end of the file
[00:56:49 CET] <blue_misfit> all frames leading up to the end have dts values that are 1001 longer than the previous
[00:56:56 CET] <blue_misfit> then there's a frame with 7007
[00:56:58 CET] <blue_misfit> then 6 with 1
[02:54:51 CET] <seggil> trying to track down why ffmpeg desktop recording stopped working (records blank video) for me on fedora 25 in the last few days, has anybody else seen this?
[11:39:58 CET] <furqan> i need some starting pointers to make simple program with ffmpeg
[11:40:20 CET] <furqan> i want to open video files with different format and concatenate into one format
[11:40:34 CET] <furqan> could someone give me directions
[11:42:25 CET] <JEEB> see the examples under docs/examples
[11:43:43 CET] <furqan> i am still not able to open mp4 file with avformat_open_input
[11:44:36 CET] <JEEB> then something is badly wrong :P
[11:45:09 CET] <JEEB> anyways, post code so someone can help you, I have a $dayjob to take care of
[11:45:15 CET] <JEEB> (in a pastebin, not on the channel)
[11:45:21 CET] <JEEB> and link the pastebin or something here
[11:46:23 CET] <furqan> https://gist.github.com/furqantariq/7384130b21b1b5c7ea8b4c8f0659a193
[12:26:42 CET] <mcjack> furqan: maybe try to read the actual number avformat_open_input returns... it gives clues about what went wrong...
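A minimal sketch of that suggestion, assuming FFmpeg 3.x: open the file and print why it failed; input.mp4 is a placeholder.

    #include <stdio.h>
    #include <libavformat/avformat.h>
    #include <libavutil/error.h>

    int main(void)
    {
        AVFormatContext *fmt = NULL;
        int ret;

        av_register_all();                       /* still required before FFmpeg 4.0 */

        ret = avformat_open_input(&fmt, "input.mp4", NULL, NULL);
        if (ret < 0) {
            fprintf(stderr, "open failed: %s\n", av_err2str(ret));
            return 1;
        }
        if (avformat_find_stream_info(fmt, NULL) < 0)
            fprintf(stderr, "could not read stream info\n");

        av_dump_format(fmt, 0, "input.mp4", 0);  /* print the detected streams */
        avformat_close_input(&fmt);
        return 0;
    }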
[12:27:02 CET] <furqan> i solved that
[12:27:10 CET] <mcjack> cool
[12:27:11 CET] <furqan> i used this with another file it worked
[12:50:39 CET] <feliwir> hey, how do i compile ffmpeg so it becomes debuggable? "./configure --enable-debug=3" didn't work for me
[12:58:40 CET] <flux> I've used --disable-stripping --disable-optimizations --disable-pthreads and I guess I didn't even need --enable-debug
[13:07:53 CET] <feliwir> works, thanks
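A sketch of the kind of debug build being described; ffmpeg_g is the unstripped binary the build tree produces, and the command run under gdb is a placeholder:

    ./configure --disable-optimizations --disable-stripping
    make -j4
    gdb --args ./ffmpeg_g -i input.mp4 output.mkv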
[13:13:12 CET] <faLUCE> Hello:   ffmpeg -i file -acodec copy -vcodec mjpeg -f mpegts output.ts    <--- output.ts doesn't contain any video, but only audio. If I switch to matroska (ffmpeg -i file -acodec copy -vcodec mjpeg -f matroska output.mkv) it works. Is there any known issue/bug?
[13:42:35 CET] <blackdot> hello. i'm trying to encode my files with the HAP codec (listed here: https://ffmpeg.org/ffmpeg-all.html#Hap ). with regular hap, it works, but when i try to use hap_alpha or hap_q, i get "Unknown encoder 'hap_alpha'". i'm doing it like this: http://pastebin.com/355kcWvn
[13:43:11 CET] <blackdot> am I using the syntax wrong?
[13:48:17 CET] <furqan> hello, what functions do i look into to remove audio from mp4
[13:49:57 CET] <blackdot> ah niice, my thing works with -c:v hap -format hap_alpha
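For reference, a sketch of that invocation; filenames are placeholders, and -format is the Hap encoder's private option selecting hap, hap_alpha or hap_q:

    ffmpeg -i input.mov -c:v hap -format hap_alpha output.mov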
[13:50:32 CET] <blackdot> furqan: you could just add -an before your output file path. that removes the audio completely afaik
[13:50:49 CET] <furqan> no using libraries
[13:51:10 CET] <blackdot> ah. that's beyond my skill then :)
[16:28:41 CET] <Conder> hello, is there some difference between Dolby Atmos and Dolby TrueHD?
[16:32:14 CET] <alexpigment> atmos is an extension of truehd but is backwards compatible with it
[16:32:35 CET] <alexpigment> the short answer is "yes, there's a difference"
[16:33:06 CET] <alexpigment> the long answer is: i'm not terribly familiar with *what* differences there are and how they're implemented (metadata vs actual stream data)
[16:34:07 CET] <alexpigment> i want to say you'll only hear a difference when you have more than a 7.1 setup
[16:35:16 CET] <faLUCE> Hello:   ffmpeg -i file -acodec copy -vcodec mjpeg -f mpegts output.ts    <--- output.ts doesn't contain any video, but only audio. If I switch to matroska (ffmpeg -i file -acodec copy -vcodec mjpeg -f matroska output.mkv) it works. Is there any known issue/bug?
[16:36:22 CET] <alexpigment> faluce: is mjpeg valid for the mpegts container?
[16:37:04 CET] <alexpigment> https://en.wikipedia.org/wiki/Comparison_of_video_container_formats
[16:39:45 CET] <Conder> thx
[16:40:29 CET] <faLUCE> alexpigment: it is valid. And the muxer doesn't give any warning about it....
[16:40:44 CET] <faLUCE> https://en.wikipedia.org/wiki/MPEG_transport_stream
[16:41:00 CET] <alexpigment> ah
[16:42:12 CET] <alexpigment> i've had trouble with the mpegts container in FFMPEG before, specifically with H.264 + pcm_bluray
[16:42:21 CET] <furqan> http://stackoverflow.com/questions/42699121/how-to-concatenate-videos-and-adding-background-audio-by-using-ffmpeg-library
[16:42:25 CET] <alexpigment> rather, pcm_l16be for bluray
[16:42:29 CET] <furqan> please
[16:42:58 CET] <alexpigment> anyway, point being that i had to do the same thing re: using MKV instead because of a silent failure to include a stream
[16:43:40 CET] <furq> furqan: -f concat -i concatlist -i foo.mp3 -map 0:v -map 1:a out.mp4
[16:43:55 CET] <furq> !muxer concat
[16:44:24 CET] <furq> er
[16:44:26 CET] <furq> !demuxer concat
[16:44:26 CET] <nfobot> furq: http://ffmpeg.org/ffmpeg-formats.html#concat-1
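For reference, the concatlist that -f concat reads is a plain text file in the concat demuxer's format, roughly like this (relative paths avoid needing -safe 0):

    # concatlist
    file 'part1.mp4'
    file 'part2.mp4'
    file 'part3.mp4'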
[16:44:33 CET] <furqan> by using c++
[16:44:43 CET] <furqan> with ffmpeg library
[16:44:51 CET] <furq> oh
[16:44:54 CET] <furq> you're on your own then
[16:45:42 CET] <furq> i assume this is for some kind of programming exercise because you could do this just as well with 10 lines of shell
[16:45:53 CET] <furqan> yea something like that
[16:46:36 CET] <furq> https://ffmpeg.org/doxygen/trunk/examples.html
[16:46:46 CET] <furq> that's about as much help as i can give
[16:46:54 CET] <alexpigment> faLUCE: did you also get an error about "deprecated pixel format used, make sure you did set range correctly"?
[16:47:24 CET] <furq> it's a nick completion minefield in here right now
[16:50:09 CET] <faLUCE> alexpigment: which debug option do I have to set in order to check that error? -v debug, -v info or what?
[16:50:39 CET] <alexpigment> i'm not sure. i tested using a static zeranoe win64 build from last month or so
[16:50:57 CET] <alexpigment> but i see the error there when trying it. i then tested mpeg2video and didn't get the error
[16:51:24 CET] <alexpigment> the thing is, yuvj420p, yuvj422p and yuvj444p are the only ones supported by the encoder
[16:51:47 CET] <alexpigment> so i don't know what the problem is
[16:52:06 CET] <faLUCE> alexpigment: anyway, you too have an issue, right?
[16:52:19 CET] <alexpigment> (and i'm not implying that this warning is important,
[16:52:24 CET] <alexpigment> just that it shows up)
[16:52:26 CET] <faLUCE> then, this bug is confirmed... I think
[16:52:50 CET] <Tragomox> hello. is it possible to use the display aspect ratio instead of the pixel aspect ratio when extracting a still using the following command? "ffmpeg -i input.flv -ss 00:00:14.435 -vframes 1 out.png"
[16:52:50 CET] <alexpigment> well, perhaps
[16:53:03 CET] <alexpigment> it also shows up when doing MKV, so...?
[16:54:24 CET] <kepstin> Tragomox: if you want to rescale the image so it has the correct dar when using 1:1 pixel aspect ratio, you'll have to ... rescale the image. by adding a scale filter.
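A sketch of the question's command with such a scale filter added; it resizes to square pixels using the stored sample aspect ratio and then resets the SAR to 1:1:

    ffmpeg -i input.flv -ss 00:00:14.435 -vframes 1 -vf "scale=iw*sar:ih,setsar=1" out.png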
[16:54:26 CET] <faLUCE> alexpigment: all is unclear...
[16:54:47 CET] <alexpigment> faLUCE: maybe this might be relevant, http://stackoverflow.com/questions/23067722/swscaler-warning-deprecated-pixel-format-used
[16:55:43 CET] <kepstin> all that warning means is that some code is using the old alias for jpeg-range yuv instead of the new method of indicating it - not an error, just a message telling programmers that they should update the code.
[16:56:59 CET] <alexpigment> faLUCE: well, to go back to the problem i described earlier with H.264 and PCM not working in mpegts via ffmpeg, i ended up having to use tsMuxeR. have you ever used it, and if so, is it possible that you can incorporate this into your workflow?
[17:01:17 CET] <Tragomox> kepstin, ok, thanks.
[17:01:49 CET] <faLUCE> alexpigment: the problem is that I obtain the same issue if I mux a native mjpeg stream....
[17:02:10 CET] <alexpigment> ahh
[17:02:53 CET] <alexpigment> out of curiosity - why MJPEG?
[17:03:30 CET] <alexpigment> rather than, say, AVC in intra-only mode
[17:04:25 CET] <faLUCE> alexpigment: I stream from a web camera, and the camera supports MJPEG besides YUYV422
[17:04:51 CET] <alexpigment> gotcha
[17:06:06 CET] <alexpigment> anyway, i've never seen a TS file with MJPEG in the wild, so even if it's technically supported (the wikipedia pages seem to disagree for some reason), it makes sense that it would be either a) unimplemented or b) broken in FFMPEG
[17:08:49 CET] <JeffATL> my video files play but the picture doesn't start for a few seconds; ffprobe says "Could not find codec parameters for stream 0 (Video: h264 (avc1 / 0x31637661), none, 720x480, 1920 kb/s): unspecified pixel format". my process is: split the audio off and clean it up in audacity, save as flac, recombine video and audio using -map. the problem starts when i cut up the result with -ss and -to.
[17:09:29 CET] <faLUCE> alexpigment: it seems broken, because I see the muxed packets, but they can't be demuxed
[17:10:11 CET] <alexpigment> faLUCE: probably so. i consider the mpegts muxer in ffmpeg a work in progress ;)
[18:38:43 CET] <AreaScout> Hi all, if i use FFmpeg newer than 2.8, FF_THREAD_FRAME doesn't seem to use all my cores for decoding; with <= 2.8 everything is ok. did i miss some API changes in the newer versions, is it handled differently?
[18:43:05 CET] <AreaScout> codec i am using is h264
[19:25:41 CET] <JEEB> AreaScout: is the amount of threads spawned the same?
[19:26:35 CET] <AreaScout> JEEB on <= 2.8 yes on > 2.8 nope
[19:27:14 CET] <JEEB> what kind of numbers are we talking about?
[19:28:49 CET] <AreaScout> here is a picture with FFmpeg 2.8 running on one CPU ( core 5-8 is a separate CPU arm A15 ) https://www.areascout.at/IMG_1364.PNG
[19:29:39 CET] <AreaScout> and FFmpeg runs on core 5-8 on that picture, well balanced as you can see
[19:29:44 CET] <AreaScout> everything works fine
[19:30:22 CET] <AreaScout> this is on FFmpeg higher than 2.8 https://www.areascout.at/IMG_1365.PNG
[19:31:37 CET] <JEEB> well the question was about the amount of threads
[19:31:40 CET] <JEEB> not the CPU usage
[19:32:02 CET] <JEEB> you should be able to compare
[19:32:27 CET] <AreaScout> i only have that picture on hand right now, moonlight is the process
[19:33:08 CET] <kepstin> are you actually running into an issue where it's not decoding the video fast enough?
[19:33:12 CET] <JEEB> so you can't just check the amount of the threads right now? then I can't really help. because in general I'm not getting any worse perf
[19:33:32 CET] <AreaScout> kepstin yes
[19:34:19 CET] <kepstin> AreaScout: and it's the same video both times? different videos can have different restrictions on how much parallelism you get
[19:34:21 CET] <AreaScout> JEEB, just in 10min i can test, if you are still here
[19:34:35 CET] <JEEB> not necessarily at the keyboard but I have my client around 24/7
[19:34:41 CET] <JEEB> so I can see any highlights
[19:35:00 CET] <AreaScout> kepstin it's the same game every time, it's a game streaming client
[19:35:12 CET] <AreaScout> JEEB, ok cool
[19:35:31 CET] <kepstin> AreaScout: ok, so different video, but the video encoder is the same, and the content is similar?
[19:35:56 CET] <AreaScout> the video encoding is done by NVIDIA
[19:36:06 CET] <AreaScout> yes, it is always the same
[19:36:24 CET] <kepstin> for low latency stuff like game streaming you might want to look into sliced encoding, which makes multithreaded decoding easier, but I dunno if that'll help here
[19:36:53 CET] <AreaScout> sliced is much slower on my hardware than frame threading
[19:36:56 CET] <AreaScout> https://www.youtube.com/watch?v=_uM-pRK6RPY
[19:38:04 CET] <kepstin> why are you doing software decoding at all, anyways? don't a lot of these arm chips on e.g. phones and stuff have hardware decoder isps?
[19:38:37 CET] <kepstin> i mean, you have enough cpu available that it should be working :/
[19:39:28 CET] <AreaScout> MFC (hardware decoding) runs slower than FFmpeg, the latency is very bad; it's only written for watching a video on TV, where it doesn't matter if your video starts 4 or 5 seconds later
[19:40:03 CET] <BtbN> frame threading itself seems pretty bad latency wise
[19:40:23 CET] <BtbN> as you are bound to have a couple frames delay, so each thread can have one
[19:40:35 CET] <kepstin> yeah, that's why slices are better for latency
[19:40:43 CET] <AreaScout> but i can reach a latency of 190ms, thats ok for gaming
[19:40:56 CET] <BtbN> that's not a very high number of threads then
[19:41:06 CET] Action: kepstin knows a lot of gamers who would disagree, but that's another matter :)
[19:41:09 CET] <BtbN> and 200ms for gaming doesn't seem ok to me. Usually you go for sub 10ms
[19:41:19 CET] <kepstin> what are you setting the "threads" avoption on the decoder to?
[19:41:54 CET] <AreaScout> maybe it's my hardware, but even on a board with a HW decoder like the ODROID C2 i can't reach 10ms
[19:42:04 CET] Action: kepstin notes that FF_THREAD_FRAME appears to still be respected in the ffmpeg h264 decoder, fwiw., but it's the default so that doesn't really matter.
[19:42:31 CET] <AreaScout> i measure this by recording the audio echo effect
[19:42:40 CET] <kepstin> er, the default is to allow both slice and frame threading (presumably which one is used depends on the input video)
[19:43:02 CET] <AreaScout> hmmm didn't try that :)
[19:43:27 CET] <AreaScout> i tried this OR that only
[19:44:00 CET] <kepstin> the default is FF_THREAD_FRAME|FF_THREAD_SLICE, which means the decoder can use either option depending on whether the input video has slices or not
[19:44:08 CET] <kepstin> but yeah - what are you setting as the thread count?
[19:44:16 CET] <AreaScout> 4
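For reference, a minimal sketch of where those settings live on the decoder side; the function name is made up, and it assumes FFmpeg 3.x where codec registration is still required:

    #include <libavcodec/avcodec.h>

    static AVCodecContext *open_h264_decoder_4threads(void)
    {
        AVCodec *dec;
        AVCodecContext *ctx;

        avcodec_register_all();                     /* not needed from FFmpeg 4.0 on */
        dec = avcodec_find_decoder(AV_CODEC_ID_H264);
        ctx = avcodec_alloc_context3(dec);
        if (!ctx)
            return NULL;
        ctx->thread_count = 4;                      /* 0 would mean auto-detect */
        ctx->thread_type  = FF_THREAD_FRAME | FF_THREAD_SLICE;  /* the default mask */
        if (avcodec_open2(ctx, dec, NULL) < 0) {
            avcodec_free_context(&ctx);
            return NULL;
        }
        return ctx;
    }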
[19:44:31 CET] <AreaScout> it's 8 core but on two CPU's
[19:44:49 CET] <kepstin> big/little cluster thing, so only 4 cores active at a time?
[19:45:08 CET] <AreaScout> no Heterogeneous Multi-Processing (HMP)
[19:45:16 CET] <BtbN> So, Ryzen?
[19:46:39 CET] <AreaScout> the HMP scheduler decides to move the most CPU intensive threads to the strongest CPU
[19:47:13 CET] <AreaScout> ODROID XU3/4
[19:47:30 CET] <AreaScout> arm exynos 5422
[19:49:27 CET] <kepstin> yeah, that's a big.LITTLE system, using the newer method where the linux cpu scheduler allocates tasks to cores, rather than the older method which swapped between core clusters.
[19:50:25 CET] <AreaScout> yep
[19:50:44 CET] <kepstin> but the end effect is that you only have 4 high performance cores for video decoding :)
[19:51:50 CET] <AreaScout> yes, that's enough for 1080p, and i am pretty much as fast as the ODROID C2 with its HW decoder
[19:52:28 CET] <AreaScout> but the C2 can do 2160p
[19:52:42 CET] <AreaScout> it's a different soc
[19:53:21 CET] <AreaScout> Amlogic with very low latency HW decoder
[19:53:22 CET] <kepstin> but yeah, weird that you're seeing a big difference between 2.8 and later versions, I don't think the frame threaded decoder handling has really changed much, if at all.
[19:54:03 CET] <AreaScout> strange yes, i have tried everything
[19:54:15 CET] <AreaScout> 3.0, 3.1, 3.2
[19:54:28 CET] <AreaScout> each with the same configure switches
[19:54:43 CET] <AreaScout> and only <= 2.8 works
[20:47:59 CET] <Kirito> What's the simplest way to "center crop" a 1080p video to 720p with ffmpeg? That is, crop out 1280x720 from the center of the video, instead of scaling it
[20:50:46 CET] <BtbN> Use the crop filter, and crop equal amounts of pixels from left/right and top/bottom
[21:07:37 CET] <llogan> i think crop automatically centers
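For reference, crop's x/y default to (in_w-out_w)/2 and (in_h-out_h)/2, i.e. centered, so a sketch of the requested command (with placeholder filenames) is just:

    ffmpeg -i input.mp4 -vf crop=1280:720 -c:a copy output.mp4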
[21:30:10 CET] <Zypho> Does anyone know of an easy way to calculate the bitrate of a video stream where ffprobe is returning N/A on bitrate for both the video and audio stream?
[21:30:40 CET] <Zypho> I believe the answer I am looking for is something around using the -show_packets flag in ffprobe
[21:35:27 CET] <alexpigment> Zypho: is this supposed to be an automated process, or is it a 1-time thing?
[21:35:38 CET] <alexpigment> i.e. do you just need to calculate the bitrate for one video?
[21:36:55 CET] <Zypho> @alexpigment automated
[21:37:00 CET] <alexpigment> k, nm
[21:37:09 CET] <Zypho> some videos are fine, others I need to expect they won't have bitrate defined
[21:38:20 CET] <alexpigment> well, i don't suppose a simple size divided by length is accurate enough for your needs?
[21:39:02 CET] <Zypho> unfortunately not, because the container could have both an audio and a video stream, both returning no bitrate, and I need the bitrate for each individual stream
[21:39:48 CET] <alexpigment> yeah, i hear you. i guess you're dealing with a sample set that is unknowable, so the bitrate could be 96kbps aac or 1.5mbps PCM?
[21:40:06 CET] <alexpigment> the audio bitrate, i mean
[21:40:30 CET] <Zypho> correct
[21:41:06 CET] <alexpigment> is ffprobe giving you the duration at all?
[21:41:16 CET] <alexpigment> i would think it would be weird that it knows the duration and yet gives N/A for the bitrate
[21:42:31 CET] <Zypho> Yeah I get the duration back for each stream
[21:43:10 CET] <Zypho> actually no, sorry
[21:43:13 CET] <Zypho> just double checked
[21:43:21 CET] <Zypho> no duration on the streams are returned
[21:43:33 CET] <alexpigment> ok, so i think that's the problem that needs to be addressed
[21:43:46 CET] <Zypho> I do have the combined duration, and the combined bitrate
[21:44:00 CET] <fengshaun> is there a way to calculate the time range of a media file given the byte-range? I'm trying to on-the-fly transcode a media file, but getting byte-range doesn't help when ffmpeg requires time range
[21:44:06 CET] <fengshaun> I'm looking for a way to calculate that
[21:44:17 CET] <fengshaun> mostly mp4 files
[21:44:46 CET] <fengshaun> ffprobe can do that, but it reads the entire file every time
[21:44:50 CET] <fengshaun> http://stackoverflow.com/questions/37134341/web-video-bytes-range-to-time
[21:45:55 CET] <alexpigment> zypho: half of the combined bitrate (assuming there are 1 video and 1 audio stream) would give you a rough per-stream number, for what it's worth
[21:46:05 CET] <alexpigment> i suspect that doesn't help though
[21:47:27 CET] <Zypho> :(
[21:47:47 CET] <alexpigment> https://trac.ffmpeg.org/wiki/FFprobeTips
[21:48:01 CET] <alexpigment> have you looked at the "Get duration by decoding" section?
[21:49:18 CET] <alexpigment> and i presume you're already using "-select_streams"?
[21:49:39 CET] <Zypho> yes I am
[21:49:48 CET] <alexpigment> i figured i'd ask just in case
[21:50:15 CET] <alexpigment> but the decoding method may be your only option
[21:52:01 CET] <Zypho> Still returns me back bitrate N/A
[21:52:30 CET] <alexpigment> but it does give you the definitive time, right?
[21:52:37 CET] <Zypho> it does yea
[21:52:54 CET] <alexpigment> and are you able to deduce elsewhere what the per-stream file size is?
[21:53:58 CET] <Zypho> I could probably sum the values from show_packets
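A sketch of that approach, assuming a shell with awk available: sum the packet sizes for one stream, then divide by the stream duration for an average bitrate; input.mp4 is a placeholder:

    # total bytes of the video stream's packets
    ffprobe -v error -select_streams v:0 -show_entries packet=size \
            -of default=noprint_wrappers=1:nokey=1 input.mp4 | awk '{b += $1} END {print b}'
    # average bitrate (bit/s) ~= total_bytes * 8 / duration_in_seconds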
[22:04:52 CET] <alexpigment> Zypho: i tested a random sample of videos, and the only ones that report a bitrate of n/a are ones that don't actually play in players
[22:05:01 CET] <alexpigment> WMP/VLC/etc
[22:05:56 CET] <alexpigment> what format(s) are you seeing that's giving you N/A for bitrate? i presume they play in your video players
[23:00:36 CET] <Maverick|MSG> when specifying ffmpeg's -to (to tell ffmpeg to stop recording at a certain timestamp), can the number of minutes in the timestamp be > 60?
[23:01:04 CET] <Maverick|MSG> so rather than writing 01:30:00 I could write 00:90:00?
[23:01:47 CET] <BtbN> try it I'd say
[23:02:04 CET] <Maverick|MSG> yeah, I would if I could right now but I'm away from my ffmpeg-installed computer
[23:02:35 CET] <Maverick|MSG> the documentation says the minutes must be two digits, so it might work
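For what it's worth, the duration syntax also accepts a plain number of seconds, which sidesteps the question; a sketch with placeholder filenames:

    ffmpeg -i input.mp4 -to 5400 -c copy output.mp4    # stop at 90 minutes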
[23:17:50 CET] <c7j8d9> when encoding a 30mbps bitrate h264 to hevc should i be telling it to do 15mbps to get similar quality (hypothetical bitrates) at half the size? Am I thinking of this in the right way?
[23:19:05 CET] <kepstin> c7j8d9: what encoder are you using? x265?
[23:19:40 CET] <kepstin> c7j8d9: if so, you probably want to do a crf encode with the highest crf you can which doesn't cause a noticable quality drop, and the slowest settings.
[23:20:06 CET] <kepstin> it will take a long time, but the video will probably be marginally to moderately smaller than the original h264.
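A sketch of the kind of CRF encode described; the CRF value, preset and filenames are placeholders to be tuned against the source:

    ffmpeg -i input.mkv -c:v libx265 -preset slow -crf 22 -c:a copy output.mkv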
[23:20:27 CET] <c7j8d9> trying to get h265 working with nvidia gpu acceleration
[23:20:40 CET] <BtbN> half the size is pretty doable with x265 vs. x264 at comparable qualities
[23:20:50 CET] <kepstin> lol hardware encoder
[23:20:51 CET] <BtbN> nvenc hevc is even below x264 quality levels
[23:21:16 CET] <kepstin> if the source h264 was made with x264, the nvidia hevc encoder will only make it worse, yeah
[23:22:07 CET] <haroldp> I have these crappy IP cams that will only do rtsp but I need people to be able to live stream them from a website.
[23:22:46 CET] <haroldp> Does it make sense to use ffmpeg to pull the video off the cams and then nginx to publish the live streams?
[23:23:04 CET] <haroldp> ....and rtmp for use with a flash player
[23:23:32 CET] <kepstin> haroldp: yeah, that should work fine. You can either use nginx-rtmp to run ffmpeg to pull the data, or have ffmpeg generated e.g. hls or dash segments and serve them in plain http for an html5 player
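A sketch of the second option, pulling from the camera and writing HLS segments for a plain HTTP server; the RTSP URL, output path and segment settings are placeholders:

    ffmpeg -rtsp_transport tcp -i rtsp://camera.local/stream1 \
           -c:v copy -c:a aac -f hls -hls_time 4 -hls_list_size 6 \
           -hls_flags delete_segments /var/www/html/live/cam1.m3u8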
[23:24:04 CET] <haroldp> thanks.
[23:24:33 CET] <haroldp> so I can use ffmpeg from the command line and just send the output to a .mp4 file, and that works fine
[23:25:36 CET] <haroldp> I was looking at using "exec_push" in nginx.  does that make sense?
[23:25:48 CET] <haroldp> (possibly off topic here, sorry)
[23:35:45 CET] <c7j8d9> so just because hardware encoding takes an hour instead of 18 hours, it's not worth it?
[23:36:05 CET] <c7j8d9> for hevc
[23:36:12 CET] <RossW> I know it's probably overkill - but I already have ffmpeg on a machine. Can I use it to overlay a PNG over the top of a jpeg, and output a jpeg? (Or better still, output to stdout?) If so, what incantation would I use?
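A sketch of that overlay; filenames are placeholders, and the second command writes the JPEG to stdout instead of a file:

    ffmpeg -i base.jpg -i overlay.png -filter_complex overlay -frames:v 1 out.jpg
    ffmpeg -i base.jpg -i overlay.png -filter_complex overlay -frames:v 1 -f image2pipe -c:v mjpeg -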
[00:00:00 CET] --- Fri Mar 10 2017

