[FFmpeg-user] How to copy all of 5 audio streams to mp4 file

Moritz Barsnick barsnick at gmx.net
Sun Feb 24 19:41:59 EET 2019


On Sun, Feb 24, 2019 at 01:19:20 +0100, Ulf Zibis wrote:
> > It's almost dead-easy extracting a changeset from those sources and
> > applying it on top of ffmpeg master HEAD.
> 
> Would you be capable of creating such a patch? I really would appreciate
> this.

Attached. Use at your own risk. I only merged the changes and fixed
obvious merge issues.
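
In case it helps, applying the attached patch on top of current master
should go roughly like this (assuming you save the attachment as
vgtmpeg-4.0.25.patch; the file name is only an example):

  git clone https://git.ffmpeg.org/ffmpeg.git
  cd ffmpeg
  git am /path/to/vgtmpeg-4.0.25.patch

"git am" creates a commit with the message from the attachment; plain
"git apply" only touches the working tree, if you prefer that.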

> Additionally you need to add libdvdread-vgtmpeg
> <http://github.com/concalma/libdvdread-vgtmpeg> for the new |dvdurl|
> protocol.

That's already part of vgtmpeg's changes. It's *you* who needs to
provide this library when building.
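
If your distribution does not package it, something along these lines
should do, assuming libdvdread-vgtmpeg builds with the usual autotools
steps (I have not checked its build system):

  git clone http://github.com/concalma/libdvdread-vgtmpeg
  cd libdvdread-vgtmpeg
  autoreconf -i && ./configure && make && sudo make install

The configure check in the patch goes through pkg-config
("require_pkg_config libdvdread dvdread ..."), so the installed
dvdread.pc needs to be somewhere pkg-config can find it, e.g. via
PKG_CONFIG_PATH.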

Moritz
-------------- next part --------------
From 037d5b4f4bbdcfc42bb699c597aad331186bf735 Mon Sep 17 00:00:00 2001
From: Moritz Barsnick <barsnick at gmx.net>
Date: Sat, 23 Feb 2019 14:03:17 +0100
Subject: [PATCH] merge from vgtmpeg4.0.25

---
 .gitattributes              |    1 +
 README.md                   |   90 +-
 VERSION                     |    1 +
 VGTMPEG_CHANGELOG           |   57 +
 ar.sh                       |    3 +
 configure                   |   39 +-
 doc/vgtmpeg.texi            | 1559 ++++++++++++++
 doc/vgtmpeg_dvd.texi        |   30 +
 ffbuild/version.sh          |    8 +-
 fftools/Makefile            |   27 +-
 fftools/cmdutils.c          |   18 +-
 fftools/ffmpeg.h            |    7 +
 fftools/ffmpeg_opt.c        |  103 +-
 fftools/nldump_format.h     |   37 +
 fftools/nlffmsg.h           |   91 +
 fftools/nlinput.h           |   53 +
 fftools/nljsonmsg.h         |   32 +
 fftools/nlreport.h          |   46 +
 fftools/vgtmpeg.c           | 4994 +++++++++++++++++++++++++++++++++++++++++++
 fftools/vgtmpeg.h           |   42 +
 fftools/vgtmpeg_opts.h      |    7 +
 fftools/vgtmpeg_support.c   | 1176 ++++++++++
 libavformat/Makefile        |    6 +
 libavformat/bdurl.c         | 1078 ++++++++++
 libavformat/bdurl.h         |   64 +
 libavformat/dvdurl.c        | 1657 ++++++++++++++
 libavformat/dvdurl.h        |  120 ++
 libavformat/dvdurl_common.c | 1644 ++++++++++++++
 libavformat/dvdurl_common.h |  838 ++++++++
 libavformat/dvdurl_lang.c   |  285 +++
 libavformat/dvdurl_lang.h   |   52 +
 libavformat/mpeg.c          |  221 +-
 libavformat/mpegts.c        |  104 +-
 libavformat/optmedia.h      |   53 +
 libavformat/protocols.c     |    5 +
 libavformat/utils.c         |    8 +
 36 files changed, 14504 insertions(+), 52 deletions(-)
 create mode 100644 VERSION
 create mode 100644 VGTMPEG_CHANGELOG
 create mode 100755 ar.sh
 create mode 100644 doc/vgtmpeg.texi
 create mode 100644 doc/vgtmpeg_dvd.texi
 create mode 100644 fftools/nldump_format.h
 create mode 100644 fftools/nlffmsg.h
 create mode 100644 fftools/nlinput.h
 create mode 100644 fftools/nljsonmsg.h
 create mode 100644 fftools/nlreport.h
 create mode 100644 fftools/vgtmpeg.c
 create mode 100644 fftools/vgtmpeg.h
 create mode 100644 fftools/vgtmpeg_opts.h
 create mode 100644 fftools/vgtmpeg_support.c
 create mode 100644 libavformat/bdurl.c
 create mode 100644 libavformat/bdurl.h
 create mode 100644 libavformat/dvdurl.c
 create mode 100644 libavformat/dvdurl.h
 create mode 100644 libavformat/dvdurl_common.c
 create mode 100644 libavformat/dvdurl_common.h
 create mode 100644 libavformat/dvdurl_lang.c
 create mode 100644 libavformat/dvdurl_lang.h
 create mode 100644 libavformat/optmedia.h

diff --git a/.gitattributes b/.gitattributes
index 5a19b963b6..f5827ea69b 100644
--- a/.gitattributes
+++ b/.gitattributes
@@ -1,2 +1,3 @@
+*   -merge
 *.pnm -diff -text
 tests/ref/fate/sub-scc eol=crlf
diff --git a/README.md b/README.md
index 447347c700..dc26564671 100644
--- a/README.md
+++ b/README.md
@@ -1,43 +1,77 @@
-FFmpeg README
-=============
+# About
 
-FFmpeg is a collection of libraries and tools to process multimedia content
-such as audio, video, subtitles and related metadata.
+`vgtmpeg` is an ffmpeg drop-in replacement that adds a number of features to the stock ffmpeg and libavformat/libavcodec libraries:
 
-## Libraries
+* ***DVD reading*** capability through the addition of a new `dvdurl` protocol using [libdvdread-vgtmpeg](http://github.com/concalma/libdvdread-vgtmpeg) 
+* ***Bluray reading*** capability through the use of libbluray
+* ***Rich metadata*** availability of DVD/Bluray information in transcoded streams: chapters, language info and subtitles are all passed on to the transcoded content if the output supports it
+* ***pipe control and reporting*** `vgtmpeg` adds a control interface to ffmpeg to start and stop transcodes as well as retrieve extra progress information. For example, `vgtmpeg` can output thumbnails of the ongoing transcode through the pipe. This allows simpler, richer integration of `vgtmpeg` into other applications
+* ***multiplatform releases*** Releases of vgtmpeg include precompiled binaries for the major platforms (Windows, OS X, Linux) in both 32 and 64 bit, with a vast array of built-in formats and codecs. [Download them here](http://godromo.com/gmt/vgtmpeg)
 
-* `libavcodec` provides implementation of a wider range of codecs.
-* `libavformat` implements streaming protocols, container formats and basic I/O access.
-* `libavutil` includes hashers, decompressors and miscellaneous utility functions.
-* `libavfilter` provides a mean to alter decoded Audio and Video through chain of filters.
-* `libavdevice` provides an abstraction to access capture and playback devices.
-* `libswresample` implements audio mixing and resampling routines.
-* `libswscale` implements color conversion and scaling routines.
 
-## Tools
 
-* [ffmpeg](https://ffmpeg.org/ffmpeg.html) is a command line toolbox to
-  manipulate, convert and stream multimedia content.
-* [ffplay](https://ffmpeg.org/ffplay.html) is a minimalistic multimedia player.
-* [ffprobe](https://ffmpeg.org/ffprobe.html) is a simple analysis tool to inspect
-  multimedia content.
-* Additional small tools such as `aviocat`, `ismindex` and `qt-faststart`.
+# Download binaries 
 
-## Documentation
+Precompiled binaries for multiple platforms (Windows 32/64, macOS and Linux) are available at the [vgtmpeg home page](http://godromo.com/gmt/vgtmpeg)
+### See it in action
 
-The offline documentation is available in the **doc/** directory.
+vgtmpeg is the underlying transcoding engine in all the native transcoding cloud apps available at [godromo.com](http://godromo.com/gmt)
 
-The online documentation is available in the main [website](https://ffmpeg.org)
-and in the [wiki](https://trac.ffmpeg.org).
+# Author
 
-### Examples
+  * Alberto Vigata [learn more](http://vigata.com/about)
 
-Coding examples are available in the **doc/examples** directory.
 
-## License
+--------
+
+## Compiling
+`vgtmpeg` uses most of the same standard libraries as ffmpeg, such as libx264 and libx265, so building the source tree is largely the same as building ffmpeg.
+
+
+For DVD support, though, `vgtmpeg` uses [libdvdread-vgtmpeg](http://github.com/concalma/libdvdread-vgtmpeg), which must be installed on your build system.
+
+## DVD/Bluray support
+vgtmpeg adds support for DVDs and BDs in its version of libavformat. DVD/BD support is implemented by adding a new ‘dvdurl’ protocol that can parse DVD folders, DVD ISO files, DVD devices and more. The ‘bdurl’ protocol can parse Bluray folders. All the regular features of vgtmpeg/ffmpeg remain available when a dvd or bd url is used, from direct stream copy to all sorts of filtering and transcoding possibilities.
+
+### Using DVDs with vgtmpeg
+Strictly speaking, one can open a DVD folder, ISO file, etc. by using a DVD url like this:
+
+``` vgtmpeg -i dvd://path_to_dvd  outfile```
+
+When using the above format, vgtmpeg will inspect the ‘path_to_dvd’ location looking for a DVD image in the form of an ISO file or a DVD folder. ‘path_to_dvd’ can also be any of the individual files inside the VIDEO_TS folder; ‘vgtmpeg’ will figure out the rest.
+
+By default, the title with the longest duration is opened when using the above syntax. If you want to rely on this behavior, the dvd:// prefix is not required and just specifying the path will suffice. One can, however, ask for a specific title to be used as the input using a url query var:
+
+``` vgtmpeg -i dvd://path_to_dvd?title=5 outfile```
+This will open title 5 (if available) of the DVD. If you want to know what is available on a DVD, simply type:
+
+``` vgtmpeg -i dvd://path_to_dvd```
+### Using Bluray folders with vgtmpeg
+Strictly speaking, one can open a Bluray folder by using a BD url like this:
+
+``` vgtmpeg -i bd://path_to_bd  outfile```
+When using the above format, vgtmpeg will inspect the ‘path_to_bd’ location looking for a Bluray folder image. The folder will be inspected for a Bluray-like structure and analyzed for titles, video and audio streams.
+
+By default, the title with the longest duration is opened when using the above syntax. If you want to rely on this behavior, the bd:// prefix is not required and just specifying the path will suffice. One can, however, ask for a specific title to be used as the input using a url query var:
+
+``` vgtmpeg -i bd://path_to_bd?title=5 outfile```
+This will open title 5 (if available) of the BD. If you want to know what is available on a BD, simply type:
+
+``` vgtmpeg -i bd://path_to_bd```
+### DVD and Bluray paths
+The path to use for the -i option is flexible. You can point to an IFO file, a VIDEO_TS folder, the root of a VIDEO_TS folder or an ISO file containing a VIDEO_TS folder. In any of these cases, vgtmpeg will try to figure out the root file of the DVD from this information and, if successful, will open the DVD and load the information in the IFO files.
+
+At the moment only Bluray folders are supported and you should point to the root of the Bluray folder.
+
+
+
+
+
+# License
+
+vgtmpeg is available under the terms of the GNU General Public License, Version 2. Please note that
+under the GPL, there is absolutely no warranty of any kind, to the extent permitted by the law.
 
-FFmpeg codebase is mainly LGPL-licensed with optional components licensed under
-GPL. Please refer to the LICENSE file for detailed information.
 
 ## Contributing
 
diff --git a/VERSION b/VERSION
new file mode 100644
index 0000000000..19a7345d51
--- /dev/null
+++ b/VERSION
@@ -0,0 +1 @@
+4.0.25
diff --git a/VGTMPEG_CHANGELOG b/VGTMPEG_CHANGELOG
new file mode 100644
index 0000000000..351b525958
--- /dev/null
+++ b/VGTMPEG_CHANGELOG
@@ -0,0 +1,57 @@
+4.0.25 (rc2):
+- libopus added for opus codec support (encode and decode)
+- fixed mac build 
+
+4.0.24 (rc1):
+- Merge with ffmpeg 4.0 series. vgtmpeg should now be more compatible with 4.0+ ffmpeg
+- Versioning scheme now makes it easier to identify upstream ffmpeg branch. {ffmpeg_ver}.{vgtmpeg_rev}
+- DVD/bluray urls no longer need explicit output mapping. Default titles, video and audio tracks are selected. This fixes the common error "Output file #0 does not contain any stream" when not specifying a mapping
+- libx265/libx264/libvpx updated to latest versions
+- libfaac AAC encoder dropped in favor of the much higher quality fdk-aac. Use encoder 'libfdk_aac' to use this aac encoder
+- 32bit binaries have been deprecated for all platforms
+
+2.2.38
+- Merge with ffmpeg 2.6.x series. In particular, this version is merged and compatible with 2.6.1
+- Download links to older versions are now provided on the site and should be used when compatibility with older versions is required
+
+2.0.35
+- fixed bug that was disabling keyboard interaction while in ffmpeg mode
+
+2.0.34
+- added h265 encoding support with x265 
+- added vp9 encoding support and libvpx-vp9 
+- vgtmpeg now comes with 384 codecs and 270 formats by default, without extra libraries. See 'vgtmpeg -codecs' and 'vgtmpeg -formats' for the actual list
+- this version is merged off ffmpeg version 2.5.4
+- the compatible ffmpeg version is now reported on the command line. System integrators can use this to gauge ffmpeg compatibility.
+- vgtmpeg no longer needs explicit '-map' command line settings to map input streams to output streams. Defaults are provided if not indicated.
+- updated libdvdread to the 5.x series and released the vgtmpeg version on github: http://github.com/concalma/libdvdread-vgtmpeg
+- miscellaneous bugfixes
+
+1.4.63
+- fixes a bug that would end the transcode quickly if not using server mode (i.e. using command line)
+
+1.4.62
+- fixed reported duration from DVD parsing; some DVDs were reporting the wrong length
+- libdvdread messages are now only seen when logging is set to verbose
+- fix important issue with some DVDs where a whole cell would be skipped
+- DVD urls are not reported with / slashes
+- speed improvements when opening DVDs with lots of titles (30+)
+- libvpx bumped up to 1.0.0
+- bluray support is now much richer, with chapters and language info on audio tracks
+
+1.4.02
+- Synchronized with ffmpeg/libavformat 0.10. All new filters and formats are supported
+- Updated libvpx to version 1.0. This seems to be the initial release of libvpx supporting faster encoding for vp8.
+
+1.3.22
+- Added experimental bluray support. Bluray support can be used through the bluray url protocol. bd://
+
+1.2.11
+- Added the xvid 1.3.2 encoder as a supported format. Xvid is supported in multithreaded mode on all platforms and architectures.
+- Fixed a character encoding issue between DVD audio languages and ffmpeg metadata. Now when converting to mp4 or other output muxers supporting language metadata, the language metadata is fully preserved from the DVD source.
+
+1.2.10
+- Fixed a bug that was reporting incorrect duration of streams and DVD titles
+
+1.2.9
+- Initial public release with DVD support
diff --git a/ar.sh b/ar.sh
new file mode 100755
index 0000000000..f9f0e38c08
--- /dev/null
+++ b/ar.sh
@@ -0,0 +1,3 @@
+#!/bin/sh
+
+~/tools/autorights/autorights.pl . --marker "@@--"  --recursive  --template gpl --years 2010-2018 --tagline "a Versed Generalist Transcoder" --authors "Alberto Vigata" --holders "Alberto Vigata" --program "vgtmpeg"
diff --git a/configure b/configure
index bf40c1dcb9..368cad5c31 100755
--- a/configure
+++ b/configure
@@ -1858,11 +1858,14 @@ LICENSE_LIST="
     version3
 "
 
+#>>vgtmpeg
 PROGRAM_LIST="
     ffplay
     ffprobe
     ffmpeg
+    vgtmpeg
 "
+#<<vgtmpeg
 
 SUBSYSTEM_LIST="
     dct
@@ -3574,6 +3577,10 @@ ffmpeg_select="aformat_filter anull_filter atrim_filter format_filter
                null_filter
                trim_filter"
 ffmpeg_suggest="ole32 psapi shell32"
+#>>vgtmpeg
+vgtmpeg_suggest="ole32 psapi shell32"
+#<<vgtmpeg
+
 ffplay_deps="avcodec avformat swscale swresample sdl2"
 ffplay_select="rdft crop_filter transpose_filter hflip_filter vflip_filter rotate_filter"
 ffplay_suggest="shell32"
@@ -6104,7 +6111,10 @@ enabled libaribb24        && { check_pkg_config libaribb24 "aribb24 > 1.0.3" "ar
 enabled lv2               && require_pkg_config lv2 lilv-0 "lilv/lilv.h" lilv_world_new
 enabled libiec61883       && require libiec61883 libiec61883/iec61883.h iec61883_cmp_connect -lraw1394 -lavc1394 -lrom1394 -liec61883
 enabled libass            && require_pkg_config libass libass ass/ass.h ass_library_init
-enabled libbluray         && require_pkg_config libbluray libbluray libbluray/bluray.h bd_open
+#>>vgtmpeg 
+# adding $ldl so libbluray presence checks pass during configure even if build is static 
+enabled libbluray  && require_pkg_config libbluray libbluray libbluray/bluray.h bd_open -lbluray $ldl
+#<<vgtmpeg 
 enabled libbs2b           && require_pkg_config libbs2b libbs2b bs2b.h bs2b_open
 enabled libcelt           && require libcelt celt/celt.h celt_decode -lcelt0 &&
                              { check_lib libcelt celt/celt.h celt_decoder_create_custom -lcelt0 ||
@@ -6313,6 +6323,12 @@ if enabled sdl2; then
     fi
 fi
 
+#>>vgtmpeg
+enabled dvd_protocol && require_pkg_config libdvdread dvdread dvdread/dvd_reader.h DVDOpenFile -ldvdread $ldl 
+enabled bd_protocol && require_pkg_config libbluray libbluray libbluray/bluray.h bd_open -lbluray $ldl 
+#<<vgtmpeg
+
+
 if enabled decklink; then
     case $target_os in
         mingw32*|mingw64*|win32|win64)
@@ -6470,6 +6486,17 @@ enabled vdpau &&
 enabled vdpau &&
     check_lib vdpau_x11 "vdpau/vdpau.h vdpau/vdpau_x11.h" vdp_device_create_x11 -lvdpau -lX11
 
+
+enabled vdpau && enabled xlib &&
+    check_func_headers "vdpau/vdpau.h vdpau/vdpau_x11.h" vdp_device_create_x11 -lvdpau &&
+#>>vgtmpeg
+    prepend ffmpeg_libs $($ldflags_filter "-lvdpau") &&
+    prepend vgtmpeg_libs $($ldflags_filter "-lvdpau") &&
+#<<vgtmpeg
+    enable vdpau_x11
+
+
+
 enabled crystalhd && check_lib crystalhd "stdint.h libcrystalhd/libcrystalhd_if.h" DtsCrystalHDVersion -lcrystalhd
 
 if enabled x86; then
@@ -6838,6 +6865,16 @@ haiku)
     ;;
 esac
 
+enabled_all dxva2 dxva2api_cobj CoTaskMemFree &&
+#>>vgtmpeg
+    prepend ffmpeg_libs $($ldflags_filter "-lole32" "-luser32") &&
+    prepend vgtmpeg_libs $($ldflags_filter "-lole32" "-luser32") &&
+    enable dxva2_lib
+#<<vgtmpeg
+
+! enabled_any memalign posix_memalign aligned_malloc &&
+    enabled simd_align_16 && enable memalign_hack
+
 flatten_extralibs(){
     nested_entries=
     list_name=$1
diff --git a/doc/vgtmpeg.texi b/doc/vgtmpeg.texi
new file mode 100644
index 0000000000..4053d44c9f
--- /dev/null
+++ b/doc/vgtmpeg.texi
@@ -0,0 +1,1559 @@
+\input texinfo @c -*- texinfo -*-
+
+ at settitle vgtmpeg Documentation
+ at titlepage
+ at center @titlefont{vgtmpeg Documentation}
+ at end titlepage
+
+ at top
+
+ at contents
+
+ at chapter Synopsis
+
+vgtmpeg [@var{global_options}] @{[@var{input_file_options}] -i @file{input_file}@} ... @{[@var{output_file_options}] @file{output_file}@} ...
+
+ at chapter Description
+ at c man begin DESCRIPTION
+vgtmpeg is an ffmpeg/avconv clone that adds a number of features to the stock ffmpeg and libavformat/libavcodec libraries, augmenting their functionality. One of the most important is support for DVD folder/ISO input.
+vgtmpeg is command compatible with ffmpeg; therefore the following document uses ffmpeg and vgtmpeg interchangeably when giving examples about their use.
+
+
+ at command{ffmpeg} reads from an arbitrary number of input "files" (which can be regular
+files, pipes, network streams, grabbing devices, etc.), specified by the
+ at code{-i} option, and writes to an arbitrary number of output "files", which are
+specified by a plain output filename. Anything found on the command line which
+cannot be interpreted as an option is considered to be an output filename.
+
+Each input or output file can, in principle, contain any number of streams of
+different types (video/audio/subtitle/attachment/data). The allowed number and/or
+types of streams may be limited by the container format. Selecting which
+streams from which inputs will go into which output is either done automatically
+or with the @code{-map} option (see the Stream selection chapter).
+
+To refer to input files in options, you must use their indices (0-based). E.g.
+the first input file is @code{0}, the second is @code{1}, etc. Similarly, streams
+within a file are referred to by their indices. E.g. @code{2:3} refers to the
+fourth stream in the third input file. Also see the Stream specifiers chapter.
+
+As a general rule, options are applied to the next specified
+file. Therefore, order is important, and you can have the same
+option on the command line multiple times. Each occurrence is
+then applied to the next input or output file.
+Exceptions from this rule are the global options (e.g. verbosity level),
+which should be specified first.
+
+Do not mix input and output files -- first specify all input files, then all
+output files. Also do not mix options which belong to different files. All
+options apply ONLY to the next input or output file and are reset between files.
+
+ at itemize
+ at item
+To set the video bitrate of the output file to 64 kbit/s:
+ at example
+ffmpeg -i input.avi -b:v 64k -bufsize 64k output.avi
+ at end example
+
+ at item
+To force the frame rate of the output file to 24 fps:
+ at example
+ffmpeg -i input.avi -r 24 output.avi
+ at end example
+
+ at item
+To force the frame rate of the input file (valid for raw formats only)
+to 1 fps and the frame rate of the output file to 24 fps:
+ at example
+ffmpeg -r 1 -i input.m2v -r 24 output.avi
+ at end example
+ at end itemize
+
+The format option may be needed for raw input files.
+
+ at c man end DESCRIPTION
+
+ at chapter Detailed description
+ at c man begin DETAILED DESCRIPTION
+
+The transcoding process in @command{ffmpeg} for each output can be described by
+the following diagram:
+
+ at example
+ _______              ______________
+|       |            |              |
+| input |  demuxer   | encoded data |   decoder
+| file  | ---------> | packets      | -----+
+|_______|            |______________|      |
+                                           v
+                                       _________
+                                      |         |
+                                      | decoded |
+                                      | frames  |
+                                      |_________|
+ ________             ______________       |
+|        |           |              |      |
+| output | <-------- | encoded data | <----+
+| file   |   muxer   | packets      |   encoder
+|________|           |______________|
+
+
+ at end example
+
+ at command{ffmpeg} calls the libavformat library (containing demuxers) to read
+input files and get packets containing encoded data from them. When there are
+multiple input files, @command{ffmpeg} tries to keep them synchronized by
+tracking lowest timestamp on any active input stream.
+
+Encoded packets are then passed to the decoder (unless streamcopy is selected
+for the stream, see further for a description). The decoder produces
+uncompressed frames (raw video/PCM audio/...) which can be processed further by
+filtering (see next section). After filtering, the frames are passed to the
+encoder, which encodes them and outputs encoded packets. Finally those are
+passed to the muxer, which writes the encoded packets to the output file.
+
+ at section Filtering
+Before encoding, @command{ffmpeg} can process raw audio and video frames using
+filters from the libavfilter library. Several chained filters form a filter
+graph. @command{ffmpeg} distinguishes between two types of filtergraphs:
+simple and complex.
+
+ at subsection Simple filtergraphs
+Simple filtergraphs are those that have exactly one input and output, both of
+the same type. In the above diagram they can be represented by simply inserting
+an additional step between decoding and encoding:
+
+ at example
+ _________                        ______________
+|         |                      |              |
+| decoded |                      | encoded data |
+| frames  |\                   _ | packets      |
+|_________| \                  /||______________|
+             \   __________   /
+  simple     _\||          | /  encoder
+  filtergraph   | filtered |/
+                | frames   |
+                |__________|
+
+ at end example
+
+Simple filtergraphs are configured with the per-stream @option{-filter} option
+(with @option{-vf} and @option{-af} aliases for video and audio respectively).
+A simple filtergraph for video can look for example like this:
+
+ at example
+ _______        _____________        _______        ________
+|       |      |             |      |       |      |        |
+| input | ---> | deinterlace | ---> | scale | ---> | output |
+|_______|      |_____________|      |_______|      |________|
+
+ at end example
+
+Note that some filters change frame properties but not frame contents. E.g. the
+ at code{fps} filter in the example above changes number of frames, but does not
+touch the frame contents. Another example is the @code{setpts} filter, which
+only sets timestamps and otherwise passes the frames unchanged.
+
+ at subsection Complex filtergraphs
+Complex filtergraphs are those which cannot be described as simply a linear
+processing chain applied to one stream. This is the case, for example, when the graph has
+more than one input and/or output, or when output stream type is different from
+input. They can be represented with the following diagram:
+
+ at example
+ _________
+|         |
+| input 0 |\                    __________
+|_________| \                  |          |
+             \   _________    /| output 0 |
+              \ |         |  / |__________|
+ _________     \| complex | /
+|         |     |         |/
+| input 1 |---->| filter  |\
+|_________|     |         | \   __________
+               /| graph   |  \ |          |
+              / |         |   \| output 1 |
+ _________   /  |_________|    |__________|
+|         | /
+| input 2 |/
+|_________|
+
+ at end example
+
+Complex filtergraphs are configured with the @option{-filter_complex} option.
+Note that this option is global, since a complex filtergraph, by its nature,
+cannot be unambiguously associated with a single stream or file.
+
+The @option{-lavfi} option is equivalent to @option{-filter_complex}.
+
+A trivial example of a complex filtergraph is the @code{overlay} filter, which
+has two video inputs and one video output, containing one video overlaid on top
+of the other. Its audio counterpart is the @code{amix} filter.
+
+ at section Stream copy
+Stream copy is a mode selected by supplying the @code{copy} parameter to the
+ at option{-codec} option. It makes @command{ffmpeg} omit the decoding and encoding
+step for the specified stream, so it does only demuxing and muxing. It is useful
+for changing the container format or modifying container-level metadata. The
+diagram above will, in this case, simplify to this:
+
+ at example
+ _______              ______________            ________
+|       |            |              |          |        |
+| input |  demuxer   | encoded data |  muxer   | output |
+| file  | ---------> | packets      | -------> | file   |
+|_______|            |______________|          |________|
+
+ at end example
+
+Since there is no decoding or encoding, it is very fast and there is no quality
+loss. However, it might not work in some cases because of many factors. Applying
+filters is obviously also impossible, since filters work on uncompressed data.
+
+ at c man end DETAILED DESCRIPTION
+
+ at chapter Stream selection
+ at c man begin STREAM SELECTION
+
+By default, @command{ffmpeg} includes only one stream of each type (video, audio, subtitle)
+present in the input files and adds them to each output file.  It picks the
+"best" of each based upon the following criteria: for video, it is the stream
+with the highest resolution, for audio, it is the stream with the most channels, for
+subtitles, it is the first subtitle stream. In the case where several streams of
+the same type rate equally, the stream with the lowest index is chosen.
+
+You can disable some of those defaults by using the @code{-vn/-an/-sn} options. For
+full manual control, use the @code{-map} option, which disables the defaults just
+described.
+
+ at c man end STREAM SELECTION
+
+ at chapter Options
+ at c man begin OPTIONS
+
+ at include fftools-common-opts.texi
+
+ at section Main options
+
+ at table @option
+
+ at item -f @var{fmt} (@emph{input/output})
+Force input or output file format. The format is normally auto detected for input
+files and guessed from the file extension for output files, so this option is not
+needed in most cases.
+
+ at item -i @var{filename} (@emph{input})
+input file name
+
+ at item -y (@emph{global})
+Overwrite output files without asking.
+
+ at item -n (@emph{global})
+Do not overwrite output files, and exit immediately if a specified
+output file already exists.
+
+ at item -c[:@var{stream_specifier}] @var{codec} (@emph{input/output,per-stream})
+ at itemx -codec[:@var{stream_specifier}] @var{codec} (@emph{input/output,per-stream})
+Select an encoder (when used before an output file) or a decoder (when used
+before an input file) for one or more streams. @var{codec} is the name of a
+decoder/encoder or a special value @code{copy} (output only) to indicate that
+the stream is not to be re-encoded.
+
+For example
+ at example
+ffmpeg -i INPUT -map 0 -c:v libx264 -c:a copy OUTPUT
+ at end example
+encodes all video streams with libx264 and copies all audio streams.
+
+For each stream, the last matching @code{c} option is applied, so
+ at example
+ffmpeg -i INPUT -map 0 -c copy -c:v:1 libx264 -c:a:137 libvorbis OUTPUT
+ at end example
+will copy all the streams except the second video, which will be encoded with
+libx264, and the 138th audio, which will be encoded with libvorbis.
+
+ at item -t @var{duration} (@emph{input/output})
+When used as an input option (before @code{-i}), limit the @var{duration} of
+data read from the input file.
+
+When used as an output option (before an output filename), stop writing the
+output after its duration reaches @var{duration}.
+
+ at var{duration} may be a number in seconds, or in @code{hh:mm:ss[.xxx]} form.
+
+-to and -t are mutually exclusive and -t has priority.
+
+ at item -to @var{position} (@emph{output})
+Stop writing the output at @var{position}.
+ at var{position} may be a number in seconds, or in @code{hh:mm:ss[.xxx]} form.
+
+-to and -t are mutually exclusive and -t has priority.
+
+ at item -fs @var{limit_size} (@emph{output})
+Set the file size limit, expressed in bytes.
+
+ at item -ss @var{position} (@emph{input/output})
+When used as an input option (before @code{-i}), seeks in this input file to
+ at var{position}. Note that in most formats it is not possible to seek exactly, so
+ at command{ffmpeg} will seek to the closest seek point before @var{position}.
+When transcoding and @option{-accurate_seek} is enabled (the default), this
+extra segment between the seek point and @var{position} will be decoded and
+discarded. When doing stream copy or when @option{-noaccurate_seek} is used, it
+will be preserved.
+
+When used as an output option (before an output filename), decodes but discards
+input until the timestamps reach @var{position}.
+
+ at var{position} may be either in seconds or in @code{hh:mm:ss[.xxx]} form.
+
+ at item -itsoffset @var{offset} (@emph{input})
+Set the input time offset.
+
+ at var{offset} must be a time duration specification,
+see @ref{time duration syntax,,the Time duration section in the ffmpeg-utils(1) manual,ffmpeg-utils}.
+
+The offset is added to the timestamps of the input files. Specifying
+a positive offset means that the corresponding streams are delayed by
+the time duration specified in @var{offset}.
+
+ at item -timestamp @var{date} (@emph{output})
+Set the recording timestamp in the container.
+
+ at var{date} must be a time duration specification,
+see @ref{date syntax,,the Date section in the ffmpeg-utils(1) manual,ffmpeg-utils}.
+
+ at item -metadata[:metadata_specifier] @var{key}=@var{value} (@emph{output,per-metadata})
+Set a metadata key/value pair.
+
+An optional @var{metadata_specifier} may be given to set metadata
+on streams or chapters. See @code{-map_metadata} documentation for
+details.
+
+This option overrides metadata set with @code{-map_metadata}. It is
+also possible to delete metadata by using an empty value.
+
+For example, for setting the title in the output file:
+ at example
+ffmpeg -i in.avi -metadata title="my title" out.flv
+ at end example
+
+To set the language of the first audio stream:
+ at example
+ffmpeg -i INPUT -metadata:s:a:0 language=eng OUTPUT
+ at end example
+
+ at item -target @var{type} (@emph{output})
+Specify target file type (@code{vcd}, @code{svcd}, @code{dvd}, @code{dv},
+ at code{dv50}). @var{type} may be prefixed with @code{pal-}, @code{ntsc-} or
+ at code{film-} to use the corresponding standard. All the format options
+(bitrate, codecs, buffer sizes) are then set automatically. You can just type:
+
+ at example
+ffmpeg -i myfile.avi -target vcd /tmp/vcd.mpg
+ at end example
+
+Nevertheless you can specify additional options as long as you know
+they do not conflict with the standard, as in:
+
+ at example
+ffmpeg -i myfile.avi -target vcd -bf 2 /tmp/vcd.mpg
+ at end example
+
+ at item -dframes @var{number} (@emph{output})
+Set the number of data frames to output. This is an alias for @code{-frames:d}.
+
+ at item -frames[:@var{stream_specifier}] @var{framecount} (@emph{output,per-stream})
+Stop writing to the stream after @var{framecount} frames.
+
+ at item -q[:@var{stream_specifier}] @var{q} (@emph{output,per-stream})
+ at itemx -qscale[:@var{stream_specifier}] @var{q} (@emph{output,per-stream})
+Use fixed quality scale (VBR). The meaning of @var{q}/@var{qscale} is
+codec-dependent.
+If @var{qscale} is used without a @var{stream_specifier} then it applies only
+to the video stream, this is to maintain compatibility with previous behavior
+and as specifying the same codec specific value to 2 different codecs that is
+audio and video generally is not what is intended when no stream_specifier is
+used.
+
+ at anchor{filter_option}
+ at item -filter[:@var{stream_specifier}] @var{filtergraph} (@emph{output,per-stream})
+Create the filtergraph specified by @var{filtergraph} and use it to
+filter the stream.
+
+ at var{filtergraph} is a description of the filtergraph to apply to
+the stream, and must have a single input and a single output of the
+same type of the stream. In the filtergraph, the input is associated
+to the label @code{in}, and the output to the label @code{out}. See
+the ffmpeg-filters manual for more information about the filtergraph
+syntax.
+
+See the @ref{filter_complex_option,,-filter_complex option} if you
+want to create filtergraphs with multiple inputs and/or outputs.
+
+ at item -filter_script[:@var{stream_specifier}] @var{filename} (@emph{output,per-stream})
+This option is similar to @option{-filter}, the only difference is that its
+argument is the name of the file from which a filtergraph description is to be
+read.
+
+ at item -pre[:@var{stream_specifier}] @var{preset_name} (@emph{output,per-stream})
+Specify the preset for matching stream(s).
+
+ at item -stats (@emph{global})
+Print encoding progress/statistics. It is on by default, to explicitly
+disable it you need to specify @code{-nostats}.
+
+ at item -progress @var{url} (@emph{global})
+Send program-friendly progress information to @var{url}.
+
+Progress information is written approximately every second and at the end of
+the encoding process. It is made of "@var{key}=@var{value}" lines. @var{key}
+consists of only alphanumeric characters. The last key of a sequence of
+progress information is always "progress".
+
+ at item -stdin
+Enable interaction on standard input. On by default unless standard input is
+used as an input. To explicitly disable interaction you need to specify
+ at code{-nostdin}.
+
+Disabling interaction on standard input is useful, for example, if
+ffmpeg is in the background process group. Roughly the same result can
+be achieved with @code{ffmpeg ... < /dev/null} but it requires a
+shell.
+
+ at item -debug_ts (@emph{global})
+Print timestamp information. It is off by default. This option is
+mostly useful for testing and debugging purposes, and the output
+format may change from one version to another, so it should not be
+employed by portable scripts.
+
+See also the option @code{-fdebug ts}.
+
+ at item -attach @var{filename} (@emph{output})
+Add an attachment to the output file. This is supported by a few formats
+like Matroska for e.g. fonts used in rendering subtitles. Attachments
+are implemented as a specific type of stream, so this option will add
+a new stream to the file. It is then possible to use per-stream options
+on this stream in the usual way. Attachment streams created with this
+option will be created after all the other streams (i.e. those created
+with @code{-map} or automatic mappings).
+
+Note that for Matroska you also have to set the mimetype metadata tag:
+ at example
+ffmpeg -i INPUT -attach DejaVuSans.ttf -metadata:s:2 mimetype=application/x-truetype-font out.mkv
+ at end example
+(assuming that the attachment stream will be third in the output file).
+
+ at item -dump_attachment[:@var{stream_specifier}] @var{filename} (@emph{input,per-stream})
+Extract the matching attachment stream into a file named @var{filename}. If
+ at var{filename} is empty, then the value of the @code{filename} metadata tag
+will be used.
+
+E.g. to extract the first attachment to a file named 'out.ttf':
+ at example
+ffmpeg -dump_attachment:t:0 out.ttf -i INPUT
+ at end example
+To extract all attachments to files determined by the @code{filename} tag:
+ at example
+ffmpeg -dump_attachment:t "" -i INPUT
+ at end example
+
+Technical note -- attachments are implemented as codec extradata, so this
+option can actually be used to extract extradata from any stream, not just
+attachments.
+
+ at end table
+
+ at section Video Options
+
+ at table @option
+ at item -vframes @var{number} (@emph{output})
+Set the number of video frames to output. This is an alias for @code{-frames:v}.
+ at item -r[:@var{stream_specifier}] @var{fps} (@emph{input/output,per-stream})
+Set frame rate (Hz value, fraction or abbreviation).
+
+As an input option, ignore any timestamps stored in the file and instead
+generate timestamps assuming constant frame rate @var{fps}.
+This is not the same as the @option{-framerate} option used for some input formats
+like image2 or v4l2 (it used to be the same in older versions of FFmpeg).
+If in doubt use @option{-framerate} instead of the input option @option{-r}.
+
+As an output option, duplicate or drop input frames to achieve constant output
+frame rate @var{fps}.
+
+ at item -s[:@var{stream_specifier}] @var{size} (@emph{input/output,per-stream})
+Set frame size.
+
+As an input option, this is a shortcut for the @option{video_size} private
+option, recognized by some demuxers for which the frame size is either not
+stored in the file or is configurable -- e.g. raw video or video grabbers.
+
+As an output option, this inserts the @code{scale} video filter to the
+ at emph{end} of the corresponding filtergraph. Please use the @code{scale} filter
+directly to insert it at the beginning or some other place.
+
+The format is @samp{wxh} (default - same as source).
+
+ at item -aspect[:@var{stream_specifier}] @var{aspect} (@emph{output,per-stream})
+Set the video display aspect ratio specified by @var{aspect}.
+
+ at var{aspect} can be a floating point number string, or a string of the
+form @var{num}:@var{den}, where @var{num} and @var{den} are the
+numerator and denominator of the aspect ratio. For example "4:3",
+"16:9", "1.3333", and "1.7777" are valid argument values.
+
+If used together with @option{-vcodec copy}, it will affect the aspect ratio
+stored at container level, but not the aspect ratio stored in encoded
+frames, if it exists.
+
+ at item -vn (@emph{output})
+Disable video recording.
+
+ at item -vcodec @var{codec} (@emph{output})
+Set the video codec. This is an alias for @code{-codec:v}.
+
+ at item -pass[:@var{stream_specifier}] @var{n} (@emph{output,per-stream})
+Select the pass number (1 or 2). It is used to do two-pass
+video encoding. The statistics of the video are recorded in the first
+pass into a log file (see also the option -passlogfile),
+and in the second pass that log file is used to generate the video
+at the exact requested bitrate.
+On pass 1, you may just deactivate audio and set output to null,
+examples for Windows and Unix:
+ at example
+ffmpeg -i foo.mov -c:v libxvid -pass 1 -an -f rawvideo -y NUL
+ffmpeg -i foo.mov -c:v libxvid -pass 1 -an -f rawvideo -y /dev/null
+ at end example
+
+ at item -passlogfile[:@var{stream_specifier}] @var{prefix} (@emph{output,per-stream})
+Set two-pass log file name prefix to @var{prefix}, the default file name
+prefix is ``ffmpeg2pass''. The complete file name will be
+ at file{PREFIX-N.log}, where N is a number specific to the output
+stream
+
+ at item -vf @var{filtergraph} (@emph{output})
+Create the filtergraph specified by @var{filtergraph} and use it to
+filter the stream.
+
+This is an alias for @code{-filter:v}, see the @ref{filter_option,,-filter option}.
+ at end table
+
+ at section Advanced Video options
+
+ at table @option
+ at item -pix_fmt[:@var{stream_specifier}] @var{format} (@emph{input/output,per-stream})
+Set pixel format. Use @code{-pix_fmts} to show all the supported
+pixel formats.
+If the selected pixel format can not be selected, ffmpeg will print a
+warning and select the best pixel format supported by the encoder.
+If @var{pix_fmt} is prefixed by a @code{+}, ffmpeg will exit with an error
+if the requested pixel format can not be selected, and automatic conversions
+inside filtergraphs are disabled.
+If @var{pix_fmt} is a single @code{+}, ffmpeg selects the same pixel format
+as the input (or graph output) and automatic conversions are disabled.
+
+ at item -sws_flags @var{flags} (@emph{input/output})
+Set SwScaler flags.
+ at item -vdt @var{n}
+Discard threshold.
+
+ at item -rc_override[:@var{stream_specifier}] @var{override} (@emph{output,per-stream})
+Rate control override for specific intervals, formatted as "int,int,int"
+list separated with slashes. Two first values are the beginning and
+end frame numbers, last one is quantizer to use if positive, or quality
+factor if negative.
+
+ at item -ilme
+Force interlacing support in encoder (MPEG-2 and MPEG-4 only).
+Use this option if your input file is interlaced and you want
+to keep the interlaced format for minimum losses.
+The alternative is to deinterlace the input stream with
+ at option{-deinterlace}, but deinterlacing introduces losses.
+ at item -psnr
+Calculate PSNR of compressed frames.
+ at item -vstats
+Dump video coding statistics to @file{vstats_HHMMSS.log}.
+ at item -vstats_file @var{file}
+Dump video coding statistics to @var{file}.
+ at item -top[:@var{stream_specifier}] @var{n} (@emph{output,per-stream})
+top=1/bottom=0/auto=-1 field first
+ at item -dc @var{precision}
+Intra_dc_precision.
+ at item -vtag @var{fourcc/tag} (@emph{output})
+Force video tag/fourcc. This is an alias for @code{-tag:v}.
+ at item -qphist (@emph{global})
+Show QP histogram
+ at item -vbsf @var{bitstream_filter}
+Deprecated, see -bsf
+
+ at item -force_key_frames[:@var{stream_specifier}] @var{time}[, at var{time}...] (@emph{output,per-stream})
+ at item -force_key_frames[:@var{stream_specifier}] expr:@var{expr} (@emph{output,per-stream})
+Force key frames at the specified timestamps, more precisely at the first
+frames after each specified time.
+
+If the argument is prefixed with @code{expr:}, the string @var{expr}
+is interpreted like an expression and is evaluated for each frame. A
+key frame is forced in case the evaluation is non-zero.
+
+If one of the times is "@code{chapters}[@var{delta}]", it is expanded into
+the time of the beginning of all chapters in the file, shifted by
+ at var{delta}, expressed as a time in seconds.
+This option can be useful to ensure that a seek point is present at a
+chapter mark or any other designated place in the output file.
+
+For example, to insert a key frame at 5 minutes, plus key frames 0.1 second
+before the beginning of every chapter:
+ at example
+-force_key_frames 0:05:00,chapters-0.1
+ at end example
+
+The expression in @var{expr} can contain the following constants:
+ at table @option
+ at item n
+the number of current processed frame, starting from 0
+ at item n_forced
+the number of forced frames
+ at item prev_forced_n
+the number of the previous forced frame, it is @code{NAN} when no
+keyframe was forced yet
+ at item prev_forced_t
+the time of the previous forced frame, it is @code{NAN} when no
+keyframe was forced yet
+ at item t
+the time of the current processed frame
+ at end table
+
+For example to force a key frame every 5 seconds, you can specify:
+ at example
+-force_key_frames expr:gte(t,n_forced*5)
+ at end example
+
+To force a key frame 5 seconds after the time of the last forced one,
+starting from second 13:
+ at example
+-force_key_frames expr:if(isnan(prev_forced_t),gte(t,13),gte(t,prev_forced_t+5))
+ at end example
+
+Note that forcing too many keyframes is very harmful for the lookahead
+algorithms of certain encoders: using fixed-GOP options or similar
+would be more efficient.
+
+ at item -copyinkf[:@var{stream_specifier}] (@emph{output,per-stream})
+When doing stream copy, copy also non-key frames found at the
+beginning.
+
+ at item -hwaccel[:@var{stream_specifier}] @var{hwaccel} (@emph{input,per-stream})
+Use hardware acceleration to decode the matching stream(s). The allowed values
+of @var{hwaccel} are:
+ at table @option
+ at item none
+Do not use any hardware acceleration (the default).
+
+ at item auto
+Automatically select the hardware acceleration method.
+
+ at item vda
+Use Apple VDA hardware acceleration.
+
+ at item vdpau
+Use VDPAU (Video Decode and Presentation API for Unix) hardware acceleration.
+
+ at item dxva2
+Use DXVA2 (DirectX Video Acceleration) hardware acceleration.
+ at end table
+
+This option has no effect if the selected hwaccel is not available or not
+supported by the chosen decoder.
+
+Note that most acceleration methods are intended for playback and will not be
+faster than software decoding on modern CPUs. Additionally, @command{ffmpeg}
+will usually need to copy the decoded frames from the GPU memory into the system
+memory, resulting in further performance loss. This option is thus mainly
+useful for testing.
+
+ at item -hwaccel_device[:@var{stream_specifier}] @var{hwaccel_device} (@emph{input,per-stream})
+Select a device to use for hardware acceleration.
+
+This option only makes sense when the @option{-hwaccel} option is also
+specified. Its exact meaning depends on the specific hardware acceleration
+method chosen.
+
+ at table @option
+ at item vdpau
+For VDPAU, this option specifies the X11 display/screen to use. If this option
+is not specified, the value of the @var{DISPLAY} environment variable is used
+
+ at item dxva2
+For DXVA2, this option should contain the number of the display adapter to use.
+If this option is not specified, the default adapter is used.
+ at end table
+ at end table
+
+ at section Audio Options
+
+ at table @option
+ at item -aframes @var{number} (@emph{output})
+Set the number of audio frames to output. This is an alias for @code{-frames:a}.
+ at item -ar[:@var{stream_specifier}] @var{freq} (@emph{input/output,per-stream})
+Set the audio sampling frequency. For output streams it is set by
+default to the frequency of the corresponding input stream. For input
+streams this option only makes sense for audio grabbing devices and raw
+demuxers and is mapped to the corresponding demuxer options.
+ at item -aq @var{q} (@emph{output})
+Set the audio quality (codec-specific, VBR). This is an alias for -q:a.
+ at item -ac[:@var{stream_specifier}] @var{channels} (@emph{input/output,per-stream})
+Set the number of audio channels. For output streams it is set by
+default to the number of input audio channels. For input streams
+this option only makes sense for audio grabbing devices and raw demuxers
+and is mapped to the corresponding demuxer options.
+ at item -an (@emph{output})
+Disable audio recording.
+ at item -acodec @var{codec} (@emph{input/output})
+Set the audio codec. This is an alias for @code{-codec:a}.
+ at item -sample_fmt[:@var{stream_specifier}] @var{sample_fmt} (@emph{output,per-stream})
+Set the audio sample format. Use @code{-sample_fmts} to get a list
+of supported sample formats.
+
+ at item -af @var{filtergraph} (@emph{output})
+Create the filtergraph specified by @var{filtergraph} and use it to
+filter the stream.
+
+This is an alias for @code{-filter:a}, see the @ref{filter_option,,-filter option}.
+ at end table
+
+ at section Advanced Audio options
+
+ at table @option
+ at item -atag @var{fourcc/tag} (@emph{output})
+Force audio tag/fourcc. This is an alias for @code{-tag:a}.
+ at item -absf @var{bitstream_filter}
+Deprecated, see -bsf
+ at item -guess_layout_max @var{channels} (@emph{input,per-stream})
+If some input channel layout is not known, try to guess only if it
+corresponds to at most the specified number of channels. For example, 2
+tells to @command{ffmpeg} to recognize 1 channel as mono and 2 channels as
+stereo but not 6 channels as 5.1. The default is to always try to guess. Use
+0 to disable all guessing.
+ at end table
+
+ at section Subtitle options
+
+ at table @option
+ at item -scodec @var{codec} (@emph{input/output})
+Set the subtitle codec. This is an alias for @code{-codec:s}.
+ at item -sn (@emph{output})
+Disable subtitle recording.
+ at item -sbsf @var{bitstream_filter}
+Deprecated, see -bsf
+ at end table
+
+ at section Advanced Subtitle options
+
+ at table @option
+
+ at item -fix_sub_duration
+Fix subtitles durations. For each subtitle, wait for the next packet in the
+same stream and adjust the duration of the first to avoid overlap. This is
+necessary with some subtitles codecs, especially DVB subtitles, because the
+duration in the original packet is only a rough estimate and the end is
+actually marked by an empty subtitle frame. Failing to use this option when
+necessary can result in exaggerated durations or muxing failures due to
+non-monotonic timestamps.
+
+Note that this option will delay the output of all data until the next
+subtitle packet is decoded: it may increase memory consumption and latency a
+lot.
+
+ at item -canvas_size @var{size}
+Set the size of the canvas used to render subtitles.
+
+ at end table
+
+ at section Advanced options
+
+ at table @option
+ at item -map [-]@var{input_file_id}[:@var{stream_specifier}][, at var{sync_file_id}[:@var{stream_specifier}]] | @var{[linklabel]} (@emph{output})
+
+Designate one or more input streams as a source for the output file. Each input
+stream is identified by the input file index @var{input_file_id} and
+the input stream index @var{input_stream_id} within the input
+file. Both indices start at 0. If specified,
+ at var{sync_file_id}:@var{stream_specifier} sets which input stream
+is used as a presentation sync reference.
+
+The first @code{-map} option on the command line specifies the
+source for output stream 0, the second @code{-map} option specifies
+the source for output stream 1, etc.
+
+A @code{-} character before the stream identifier creates a "negative" mapping.
+It disables matching streams from already created mappings.
+
+An alternative @var{[linklabel]} form will map outputs from complex filter
+graphs (see the @option{-filter_complex} option) to the output file.
+ at var{linklabel} must correspond to a defined output link label in the graph.
+
+For example, to map ALL streams from the first input file to output
+ at example
+ffmpeg -i INPUT -map 0 output
+ at end example
+
+For example, if you have two audio streams in the first input file,
+these streams are identified by "0:0" and "0:1". You can use
+ at code{-map} to select which streams to place in an output file. For
+example:
+ at example
+ffmpeg -i INPUT -map 0:1 out.wav
+ at end example
+will map the input stream in @file{INPUT} identified by "0:1" to
+the (single) output stream in @file{out.wav}.
+
+For example, to select the stream with index 2 from input file
+ at file{a.mov} (specified by the identifier "0:2"), and stream with
+index 6 from input @file{b.mov} (specified by the identifier "1:6"),
+and copy them to the output file @file{out.mov}:
+ at example
+ffmpeg -i a.mov -i b.mov -c copy -map 0:2 -map 1:6 out.mov
+ at end example
+
+To select all video and the third audio stream from an input file:
+ at example
+ffmpeg -i INPUT -map 0:v -map 0:a:2 OUTPUT
+ at end example
+
+To map all the streams except the second audio, use negative mappings
+ at example
+ffmpeg -i INPUT -map 0 -map -0:a:1 OUTPUT
+ at end example
+
+To pick the English audio stream:
+ at example
+ffmpeg -i INPUT -map 0:m:language:eng OUTPUT
+ at end example
+
+Note that using this option disables the default mappings for this output file.
+
+ at item -map_channel [@var{input_file_id}. at var{stream_specifier}. at var{channel_id}|-1][:@var{output_file_id}. at var{stream_specifier}]
+Map an audio channel from a given input to an output. If
+ at var{output_file_id}. at var{stream_specifier} is not set, the audio channel will
+be mapped on all the audio streams.
+
+Using "-1" instead of
+ at var{input_file_id}. at var{stream_specifier}. at var{channel_id} will map a muted
+channel.
+
+For example, assuming @var{INPUT} is a stereo audio file, you can switch the
+two audio channels with the following command:
+ at example
+ffmpeg -i INPUT -map_channel 0.0.1 -map_channel 0.0.0 OUTPUT
+ at end example
+
+If you want to mute the first channel and keep the second:
+ at example
+ffmpeg -i INPUT -map_channel -1 -map_channel 0.0.1 OUTPUT
+ at end example
+
+The order of the "-map_channel" option specifies the order of the channels in
+the output stream. The output channel layout is guessed from the number of
+channels mapped (mono if one "-map_channel", stereo if two, etc.). Using "-ac"
+in combination of "-map_channel" makes the channel gain levels to be updated if
+input and output channel layouts don't match (for instance two "-map_channel"
+options and "-ac 6").
+
+You can also extract each channel of an input to specific outputs; the following
+command extracts two channels of the @var{INPUT} audio stream (file 0, stream 0)
+to the respective @var{OUTPUT_CH0} and @var{OUTPUT_CH1} outputs:
+ at example
+ffmpeg -i INPUT -map_channel 0.0.0 OUTPUT_CH0 -map_channel 0.0.1 OUTPUT_CH1
+ at end example
+
+The following example splits the channels of a stereo input into two separate
+streams, which are put into the same output file:
+ at example
+ffmpeg -i stereo.wav -map 0:0 -map 0:0 -map_channel 0.0.0:0.0 -map_channel 0.0.1:0.1 -y out.ogg
+ at end example
+
+Note that currently each output stream can only contain channels from a single
+input stream; you can't for example use "-map_channel" to pick multiple input
+audio channels contained in different streams (from the same or different files)
+and merge them into a single output stream. It is therefore not currently
+possible, for example, to turn two separate mono streams into a single stereo
+stream. However splitting a stereo stream into two single channel mono streams
+is possible.
+
+If you need this feature, a possible workaround is to use the @emph{amerge}
+filter. For example, if you need to merge a media (here @file{input.mkv}) with 2
+mono audio streams into one single stereo channel audio stream (and keep the
+video stream), you can use the following command:
+ at example
+ffmpeg -i input.mkv -filter_complex "[0:1] [0:2] amerge" -c:a pcm_s16le -c:v copy output.mkv
+ at end example
+
+ at item -map_metadata[:@var{metadata_spec_out}] @var{infile}[:@var{metadata_spec_in}] (@emph{output,per-metadata})
+Set metadata information of the next output file from @var{infile}. Note that
+those are file indices (zero-based), not filenames.
+Optional @var{metadata_spec_in/out} parameters specify, which metadata to copy.
+A metadata specifier can have the following forms:
+ at table @option
+ at item @var{g}
+global metadata, i.e. metadata that applies to the whole file
+
+ at item @var{s}[:@var{stream_spec}]
+per-stream metadata. @var{stream_spec} is a stream specifier as described
+in the @ref{Stream specifiers} chapter. In an input metadata specifier, the first
+matching stream is copied from. In an output metadata specifier, all matching
+streams are copied to.
+
+ at item @var{c}:@var{chapter_index}
+per-chapter metadata. @var{chapter_index} is the zero-based chapter index.
+
+ at item @var{p}:@var{program_index}
+per-program metadata. @var{program_index} is the zero-based program index.
+ at end table
+If metadata specifier is omitted, it defaults to global.
+
+By default, global metadata is copied from the first input file,
+per-stream and per-chapter metadata is copied along with streams/chapters. These
+default mappings are disabled by creating any mapping of the relevant type. A negative
+file index can be used to create a dummy mapping that just disables automatic copying.
+
+For example to copy metadata from the first stream of the input file to global metadata
+of the output file:
+ at example
+ffmpeg -i in.ogg -map_metadata 0:s:0 out.mp3
+ at end example
+
+To do the reverse, i.e. copy global metadata to all audio streams:
+ at example
+ffmpeg -i in.mkv -map_metadata:s:a 0:g out.mkv
+ at end example
+Note that simple @code{0} would work as well in this example, since global
+metadata is assumed by default.
+
+ at item -map_chapters @var{input_file_index} (@emph{output})
+Copy chapters from input file with index @var{input_file_index} to the next
+output file. If no chapter mapping is specified, then chapters are copied from
+the first input file with at least one chapter. Use a negative file index to
+disable any chapter copying.
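+
+For example (an illustrative command with placeholder filenames), to copy all
+streams from the first input while taking the chapter marks from the second:
+ at example
+ffmpeg -i in.mkv -i chapters.mkv -map 0 -c copy -map_chapters 1 out.mkv
+ at end example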
+
+ at item -benchmark (@emph{global})
+Show benchmarking information at the end of an encode.
+Shows CPU time used and maximum memory consumption.
+Maximum memory consumption is not supported on all systems;
+it will usually display as 0 if not supported.
+ at item -benchmark_all (@emph{global})
+Show benchmarking information during the encode.
+Shows CPU time used in various steps (audio/video encode/decode).
+ at item -timelimit @var{duration} (@emph{global})
+Exit after ffmpeg has been running for @var{duration} seconds.
+ at item -dump (@emph{global})
+Dump each input packet to stderr.
+ at item -hex (@emph{global})
+When dumping packets, also dump the payload.
+ at item -re (@emph{input})
+Read input at native frame rate. Mainly used to simulate a grab device,
+or live input stream (e.g. when reading from a file). Should not be used
+with actual grab devices or live input streams (where it can cause packet
+loss).
+By default @command{ffmpeg} attempts to read the input(s) as fast as possible.
+This option will slow down the reading of the input(s) to the native frame rate
+of the input(s). It is useful for real-time output (e.g. live streaming).
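+
+For example (the destination address below is only a placeholder), to pace a
+file at its native frame rate while sending it over UDP:
+ at example
+ffmpeg -re -i input.mp4 -c copy -f mpegts udp://127.0.0.1:1234
+ at end example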
+ at item -loop_input
+Loop over the input stream. Currently it works only for image
+streams. This option is used for automatic FFserver testing.
+This option is deprecated, use -loop 1.
+ at item -loop_output @var{number_of_times}
+Repeatedly loop output for formats that support looping such as animated GIF
+(0 will loop the output infinitely).
+This option is deprecated, use -loop.
+ at item -vsync @var{parameter}
+Video sync method.
+For compatibility reasons old values can be specified as numbers.
+Newly added values will have to be specified as strings always.
+
+ at table @option
+ at item 0, passthrough
+Each frame is passed with its timestamp from the demuxer to the muxer.
+ at item 1, cfr
+Frames will be duplicated and dropped to achieve exactly the requested
+constant frame rate.
+ at item 2, vfr
+Frames are passed through with their timestamp or dropped so as to
+prevent 2 frames from having the same timestamp.
+ at item drop
+As passthrough but destroys all timestamps, making the muxer generate
+fresh timestamps based on frame-rate.
+ at item -1, auto
+Chooses between 1 and 2 depending on muxer capabilities. This is the
+default method.
+ at end table
+
+Note that the timestamps may be further modified by the muxer after this
+point, for example when the format option @option{avoid_negative_ts}
+is enabled.
+
+With -map you can select from which stream the timestamps should be
+taken. You can leave either video or audio unchanged and sync the
+remaining stream(s) to the unchanged one.
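+
+For example (placeholder filenames), to force a constant 25 fps output by
+duplicating or dropping frames as needed:
+ at example
+ffmpeg -i input.mkv -vsync cfr -r 25 output.mp4
+ at end example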
+
+ at item -async @var{samples_per_second}
+Audio sync method. "Stretches/squeezes" the audio stream to match the
+timestamps; the parameter is the maximum number of samples per second by
+which the audio is changed.
+-async 1 is a special case where only the start of the audio stream is corrected
+without any later correction.
+
+Note that the timestamps may be further modified by the muxer after this
+point, for example when the format option @option{avoid_negative_ts}
+is enabled.
+
+This option has been deprecated. Use the @code{aresample} audio filter instead.
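+
+For example (an illustrative equivalent with placeholder filenames), gentle
+audio timestamp correction can be requested through the @code{aresample}
+filter instead:
+ at example
+ffmpeg -i input.mkv -c:v copy -af aresample=async=1000 output.mkv
+ at end example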
+
+ at item -copyts
+Do not process input timestamps, but keep their values without trying
+to sanitize them. In particular, do not remove the initial start time
+offset value.
+
+Note that, depending on the @option{vsync} option or on specific muxer
+processing (e.g. in case the format option @option{avoid_negative_ts}
+is enabled) the output timestamps may mismatch with the input
+timestamps even when this option is selected.
+
+ at item -start_at_zero
+When used with @option{copyts}, shift input timestamps so they start at zero.
+
+This means that using e.g. @code{-ss 50} will make output timestamps start at
+50 seconds, regardless of what timestamp the input file started at.
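+
+For example (placeholder filenames), combining @code{-ss 50} with these
+options makes the output timestamps start at 50 seconds:
+ at example
+ffmpeg -ss 50 -i input.ts -copyts -start_at_zero -c copy output.ts
+ at end example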
+
+ at item -copytb @var{mode}
+Specify how to set the encoder timebase when stream copying.  @var{mode} is an
+integer numeric value, and can assume one of the following values:
+
+ at table @option
+ at item 1
+Use the demuxer timebase.
+
+The time base is copied to the output encoder from the corresponding input
+demuxer. This is sometimes required to avoid non-monotonically increasing
+timestamps when copying video streams with variable frame rate.
+
+ at item 0
+Use the decoder timebase.
+
+The time base is copied to the output encoder from the corresponding input
+decoder.
+
+ at item -1
+Try to make the choice automatically, in order to generate a sane output.
+ at end table
+
+Default value is -1.
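+
+For example (placeholder filenames), to keep the demuxer timebase while
+stream copying a variable frame rate video:
+ at example
+ffmpeg -i input.mkv -c copy -copytb 1 output.mkv
+ at end example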
+
+ at item -shortest (@emph{output})
+Finish encoding when the shortest input stream ends.
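+
+For example (an illustrative command with placeholder filenames), to stop the
+output when the video ends even though the audio track is longer:
+ at example
+ffmpeg -i video.mp4 -i audio.wav -map 0:v -map 1:a -c:v copy -shortest out.mp4
+ at end example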
+ at item -dts_delta_threshold
+Timestamp discontinuity delta threshold.
+ at item -muxdelay @var{seconds} (@emph{input})
+Set the maximum demux-decode delay.
+ at item -muxpreload @var{seconds} (@emph{input})
+Set the initial demux-decode delay.
+ at item -streamid @var{output-stream-index}:@var{new-value} (@emph{output})
+Assign a new stream-id value to an output stream. This option should be
+specified prior to the output filename to which it applies.
+For the situation where multiple output files exist, a streamid
+may be reassigned to a different value.
+
+For example, to set the stream 0 PID to 33 and the stream 1 PID to 36 for
+an output mpegts file:
+ at example
+ffmpeg -i infile -streamid 0:33 -streamid 1:36 out.ts
+ at end example
+
+ at item -bsf[:@var{stream_specifier}] @var{bitstream_filters} (@emph{output,per-stream})
+Set bitstream filters for matching streams. @var{bitstream_filters} is
+a comma-separated list of bitstream filters. Use the @code{-bsfs} option
+to get the list of bitstream filters.
+ at example
+ffmpeg -i h264.mp4 -c:v copy -bsf:v h264_mp4toannexb -an out.h264
+ at end example
+ at example
+ffmpeg -i file.mov -an -vn -bsf:s mov2textsub -c:s copy -f rawvideo sub.txt
+ at end example
+
+ at item -tag[:@var{stream_specifier}] @var{codec_tag} (@emph{input/output,per-stream})
+Force a tag/fourcc for matching streams.
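+
+For example (placeholder filenames, assuming the input video stream is HEVC),
+to tag the copied video stream as @code{hvc1}:
+ at example
+ffmpeg -i input.mp4 -c:v copy -tag:v hvc1 output.mp4
+ at end example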
+
+ at item -timecode @var{hh}:@var{mm}:@var{ss}SEP at var{ff}
+Specify timecode for writing. @var{SEP} is ':' for non-drop timecode and ';'
+(or '.') for drop timecode.
+ at example
+ffmpeg -i input.mpg -timecode 01:02:03.04 -r 30000/1001 -s ntsc output.mpg
+ at end example
+
+ at anchor{filter_complex_option}
+ at item -filter_complex @var{filtergraph} (@emph{global})
+Define a complex filtergraph, i.e. one with arbitrary number of inputs and/or
+outputs. For simple graphs -- those with one input and one output of the same
+type -- see the @option{-filter} options. @var{filtergraph} is a description of
+the filtergraph, as described in the ``Filtergraph syntax'' section of the
+ffmpeg-filters manual.
+
+Input link labels must refer to input streams using the
+ at code{[file_index:stream_specifier]} syntax (i.e. the same as @option{-map}
+uses). If @var{stream_specifier} matches multiple streams, the first one will be
+used. An unlabeled input will be connected to the first unused input stream of
+the matching type.
+
+Output link labels are referred to with @option{-map}. Unlabeled outputs are
+added to the first output file.
+
+Note that with this option it is possible to use only lavfi sources without
+normal input files.
+
+For example, to overlay an image over video
+ at example
+ffmpeg -i video.mkv -i image.png -filter_complex '[0:v][1:v]overlay[out]' -map
+'[out]' out.mkv
+ at end example
+Here @code{[0:v]} refers to the first video stream in the first input file,
+which is linked to the first (main) input of the overlay filter. Similarly the
+first video stream in the second input is linked to the second (overlay) input
+of overlay.
+
+Assuming there is only one video stream in each input file, we can omit input
+labels, so the above is equivalent to
+ at example
+ffmpeg -i video.mkv -i image.png -filter_complex 'overlay[out]' -map
+'[out]' out.mkv
+ at end example
+
+Furthermore we can omit the output label and the single output from the filter
+graph will be added to the output file automatically, so we can simply write
+ at example
+ffmpeg -i video.mkv -i image.png -filter_complex 'overlay' out.mkv
+ at end example
+
+To generate 5 seconds of pure red video using lavfi @code{color} source:
+ at example
+ffmpeg -filter_complex 'color=c=red' -t 5 out.mkv
+ at end example
+
+ at item -lavfi @var{filtergraph} (@emph{global})
+Define a complex filtergraph, i.e. one with arbitrary number of inputs and/or
+outputs. Equivalent to @option{-filter_complex}.
+
+ at item -filter_complex_script @var{filename} (@emph{global})
+This option is similar to @option{-filter_complex}; the only difference is that
+its argument is the name of the file from which a complex filtergraph
+description is to be read.
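+
+For example (placeholder filenames, assuming @file{graph.txt} contains a
+filtergraph description such as @code{overlay}):
+ at example
+ffmpeg -i video.mkv -i image.png -filter_complex_script graph.txt out.mkv
+ at end example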
+
+ at item -accurate_seek (@emph{input})
+This option enables or disables accurate seeking in input files with the
+ at option{-ss} option. It is enabled by default, so seeking is accurate when
+transcoding. Use @option{-noaccurate_seek} to disable it, which may be useful
+e.g. when copying some streams and transcoding the others.
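+
+For example (placeholder filenames), to cut quickly at the nearest seek point
+while stream copying:
+ at example
+ffmpeg -ss 00:01:00 -noaccurate_seek -i input.mkv -t 30 -c copy clip.mkv
+ at end example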
+
+ at item -override_ffserver (@emph{global})
+Overrides the input specifications from @command{ffserver}. Using this
+option you can map any input stream to @command{ffserver} and control
+many aspects of the encoding from @command{ffmpeg}. Without this
+option @command{ffmpeg} will transmit to @command{ffserver} what is
+requested by @command{ffserver}.
+
+The option is intended for cases where features are needed that cannot be
+specified to @command{ffserver} but can be to @command{ffmpeg}.
+
+ at item -discard (@emph{input})
+Allows discarding specific streams or frames of streams at the demuxer.
+Not all demuxers support this.
+
+ at table @option
+ at item none
+Discard no frame.
+
+ at item default
+Default, which discards no frames.
+
+ at item noref
+Discard all non-reference frames.
+
+ at item bidir
+Discard all bidirectional frames.
+
+ at item nokey
+Discard all frames except keyframes.
+
+ at item all
+Discard all frames.
+ at end table
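+
+For example (placeholder filenames, and only where the demuxer supports it),
+to keep only the keyframes of the input:
+ at example
+ffmpeg -discard nokey -i input.ts -c copy keyframes.ts
+ at end example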
+
+ at end table
+
+As a special exception, you can use a bitmap subtitle stream as input: it
+will be converted into a video with the same size as the largest video in
+the file, or 720x576 if no video is present. Note that this is an
+experimental and temporary solution. It will be removed once libavfilter has
+proper support for subtitles.
+
+For example, to hardcode subtitles on top of a DVB-T recording stored in
+MPEG-TS format, delaying the subtitles by 1 second:
+ at example
+ffmpeg -i input.ts -filter_complex \
+  '[#0x2ef] setpts=PTS+1/TB [sub] ; [#0x2d0] [sub] overlay' \
+  -sn -map '#0x2dc' output.mkv
+ at end example
+(0x2d0, 0x2dc and 0x2ef are the MPEG-TS PIDs of respectively the video,
+audio and subtitles streams; 0:0, 0:3 and 0:7 would have worked too)
+
+ at section Preset files
+A preset file contains a sequence of @var{option}=@var{value} pairs,
+one for each line, specifying a sequence of options which would be
+awkward to specify on the command line. Lines starting with the hash
+('#') character are ignored and are used to provide comments. Check
+the @file{presets} directory in the FFmpeg source tree for examples.
+
+Preset files are specified with the @code{vpre}, @code{apre},
+ at code{spre}, and @code{fpre} options. The @code{fpre} option takes the
+filename of the preset instead of a preset name as input and can be
+used for any kind of codec. For the @code{vpre}, @code{apre}, and
+ at code{spre} options, the options specified in a preset file are
+applied to the currently selected codec of the same type as the preset
+option.
+
+The argument passed to the @code{vpre}, @code{apre}, and @code{spre}
+preset options identifies the preset file to use according to the
+following rules:
+
+First ffmpeg searches for a file named @var{arg}.ffpreset in the
+directories @file{$FFMPEG_DATADIR} (if set), and @file{$HOME/.ffmpeg}, and in
+the datadir defined at configuration time (usually @file{PREFIX/share/ffmpeg})
+or in an @file{ffpresets} folder alongside the executable on win32,
+in that order. For example, if the argument is @code{libvpx-1080p}, it will
+search for the file @file{libvpx-1080p.ffpreset}.
+
+If no such file is found, then ffmpeg will search for a file named
+ at var{codec_name}- at var{arg}.ffpreset in the above-mentioned
+directories, where @var{codec_name} is the name of the codec to which
+the preset file options will be applied. For example, if you select
+the video codec with @code{-vcodec libvpx} and use @code{-vpre 1080p},
+then it will search for the file @file{libvpx-1080p.ffpreset}.
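+
+As an illustrative sketch (the file name and option values here are
+hypothetical), such a preset file @file{libvpx-1080p.ffpreset} could contain
+option=value pairs such as:
+ at example
+b=2000000
+g=120
+ at end example
+and would then be applied with:
+ at example
+ffmpeg -i input.mkv -c:v libvpx -vpre 1080p output.webm
+ at end example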
+ at c man end OPTIONS
+
+ at chapter Tips
+ at c man begin TIPS
+
+ at itemize
+ at item
+For streaming at very low bitrates, use a low frame rate
+and a small GOP size. This is especially true for RealVideo where
+the Linux player does not seem to be very fast, so it can miss
+frames. An example is:
+
+ at example
+ffmpeg -g 3 -r 3 -t 10 -b:v 50k -s qcif -f rv10 /tmp/b.rm
+ at end example
+
+ at item
+The parameter 'q', which is displayed while encoding, is the current
+quantizer. The value 1 indicates that a very good quality could
+be achieved. The value 31 indicates the worst quality. If q=31 appears
+too often, it means that the encoder cannot compress enough to meet
+your bitrate. You must either increase the bitrate, decrease the
+frame rate or decrease the frame size.
+
+ at item
+If your computer is not fast enough, you can speed up the
+compression at the expense of the compression ratio. You can use
+'-me zero' to speed up motion estimation, and '-g 0' to disable
+motion estimation completely (you have only I-frames, which means it
+is about as good as JPEG compression).
+
+ at item
+To have very low audio bitrates, reduce the sampling frequency
+(down to 22050 Hz for MPEG audio, 22050 or 11025 for AC-3).
+
+ at item
+To have a constant quality (but a variable bitrate), use the option
+'-qscale n' where 'n' is between 1 (excellent quality) and 31 (worst
+quality); a short example is shown after this list.
+
+ at end itemize
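+
+A minimal illustration of the constant-quality tip above (placeholder
+filenames):
+ at example
+ffmpeg -i input.avi -c:v mpeg4 -qscale:v 3 -c:a copy output.avi
+ at end example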
+ at c man end TIPS
+
+ at chapter Examples
+ at c man begin EXAMPLES
+
+ at section Preset files
+
+A preset file contains a sequence of @var{option=value} pairs, one for
+each line, specifying a sequence of options which can be specified also on
+the command line. Lines starting with the hash ('#') character are ignored and
+are used to provide comments. Empty lines are also ignored. Check the
+ at file{presets} directory in the FFmpeg source tree for examples.
+
+Preset files are specified with the @code{pre} option; this option takes a
+preset name as input.  FFmpeg searches for a file named @var{preset_name}.avpreset in
+the directories @file{$AVCONV_DATADIR} (if set), and @file{$HOME/.ffmpeg}, and in
+the data directory defined at configuration time (usually @file{$PREFIX/share/ffmpeg})
+in that order.  For example, if the argument is @code{libx264-max}, it will
+search for the file @file{libx264-max.avpreset}.
+
+ at section Video and Audio grabbing
+
+If you specify the input format and device then ffmpeg can grab video
+and audio directly.
+
+ at example
+ffmpeg -f oss -i /dev/dsp -f video4linux2 -i /dev/video0 /tmp/out.mpg
+ at end example
+
+Or with an ALSA audio source (mono input, card id 1) instead of OSS:
+ at example
+ffmpeg -f alsa -ac 1 -i hw:1 -f video4linux2 -i /dev/video0 /tmp/out.mpg
+ at end example
+
+Note that you must activate the right video source and channel, with any
+TV viewer such as
+ at uref{http://linux.bytesex.org/xawtv/, xawtv} by Gerd Knorr, before
+launching ffmpeg. You also
+have to set the audio recording levels correctly with a
+standard mixer.
+
+ at section X11 grabbing
+
+Grab the X11 display with ffmpeg via
+
+ at example
+ffmpeg -f x11grab -video_size cif -framerate 25 -i :0.0 /tmp/out.mpg
+ at end example
+
+0.0 is the display.screen number of your X11 server, the same as
+the DISPLAY environment variable.
+
+ at example
+ffmpeg -f x11grab -video_size cif -framerate 25 -i :0.0+10,20 /tmp/out.mpg
+ at end example
+
+0.0 is the display.screen number of your X11 server, the same as the DISPLAY
+environment variable. 10 is the x-offset and 20 the y-offset for the grabbing.
+
+ at section Video and Audio file format conversion
+
+Any supported file format and protocol can serve as input to ffmpeg:
+
+Examples:
+ at itemize
+ at item
+You can use YUV files as input:
+
+ at example
+ffmpeg -i /tmp/test%d.Y /tmp/out.mpg
+ at end example
+
+It will use the files:
+ at example
+/tmp/test0.Y, /tmp/test0.U, /tmp/test0.V,
+/tmp/test1.Y, /tmp/test1.U, /tmp/test1.V, etc...
+ at end example
+
+The Y files use twice the resolution of the U and V files. They are
+raw files, without header. They can be generated by all decent video
+decoders. You must specify the size of the image with the @option{-s} option
+if ffmpeg cannot guess it.
+
+ at item
+You can input from a raw YUV420P file:
+
+ at example
+ffmpeg -i /tmp/test.yuv /tmp/out.avi
+ at end example
+
+test.yuv is a file containing raw YUV planar data. Each frame is composed
+of the Y plane followed by the U and V planes at half vertical and
+horizontal resolution.
+
+ at item
+You can output to a raw YUV420P file:
+
+ at example
+ffmpeg -i mydivx.avi hugefile.yuv
+ at end example
+
+ at item
+You can set several input files and output files:
+
+ at example
+ffmpeg -i /tmp/a.wav -s 640x480 -i /tmp/a.yuv /tmp/a.mpg
+ at end example
+
+Converts the audio file a.wav and the raw YUV video file a.yuv
+to MPEG file a.mpg.
+
+ at item
+You can also do audio and video conversions at the same time:
+
+ at example
+ffmpeg -i /tmp/a.wav -ar 22050 /tmp/a.mp2
+ at end example
+
+Converts a.wav to MPEG audio at 22050 Hz sample rate.
+
+ at item
+You can encode to several formats at the same time and define a
+mapping from input stream to output streams:
+
+ at example
+ffmpeg -i /tmp/a.wav -map 0:a -b:a 64k /tmp/a.mp2 -map 0:a -b:a 128k /tmp/b.mp2
+ at end example
+
+Converts a.wav to a.mp2 at 64 kbits and to b.mp2 at 128 kbits. '-map
+file:index' specifies which input stream is used for each output
+stream, in the order of the definition of output streams.
+
+ at item
+You can transcode decrypted VOBs:
+
+ at example
+ffmpeg -i snatch_1.vob -f avi -c:v mpeg4 -b:v 800k -g 300 -bf 2 -c:a libmp3lame -b:a 128k snatch.avi
+ at end example
+
+This is a typical DVD ripping example; the input is a VOB file, the
+output an AVI file with MPEG-4 video and MP3 audio. Note that in this
+command we use B-frames so the MPEG-4 stream is DivX5 compatible, and
+GOP size is 300 which means one intra frame every 10 seconds for 29.97fps
+input video. Furthermore, the audio stream is MP3-encoded so you need
+to enable LAME support by passing @code{--enable-libmp3lame} to configure.
+The mapping is particularly useful for DVD transcoding
+to get the desired audio language.
+
+NOTE: To see the supported input formats, use @code{ffmpeg -formats}.
+
+ at item
+You can extract images from a video, or create a video from many images:
+
+For extracting images from a video:
+ at example
+ffmpeg -i foo.avi -r 1 -s WxH -f image2 foo-%03d.jpeg
+ at end example
+
+This will extract one video frame per second from the video and will
+output them in files named @file{foo-001.jpeg}, @file{foo-002.jpeg},
+etc. Images will be rescaled to fit the new WxH values.
+
+If you want to extract just a limited number of frames, you can use the
+above command in combination with the -vframes or -t option, or in
+combination with -ss to start extracting from a certain point in time.
+
+For creating a video from many images:
+ at example
+ffmpeg -f image2 -i foo-%03d.jpeg -r 12 -s WxH foo.avi
+ at end example
+
+The syntax @code{foo-%03d.jpeg} specifies to use a decimal number
+composed of three digits padded with zeroes to express the sequence
+number. It is the same syntax supported by the C printf function, but
+only formats accepting a normal integer are suitable.
+
+When importing an image sequence, -i also supports expanding
+shell-like wildcard patterns (globbing) internally, by selecting the
+image2-specific @code{-pattern_type glob} option.
+
+For example, for creating a video from filenames matching the glob pattern
+ at code{foo-*.jpeg}:
+ at example
+ffmpeg -f image2 -pattern_type glob -i 'foo-*.jpeg' -r 12 -s WxH foo.avi
+ at end example
+
+ at item
+You can put many streams of the same type in the output:
+
+ at example
+ffmpeg -i test1.avi -i test2.avi -map 1:1 -map 1:0 -map 0:1 -map 0:0 -c copy -y test12.nut
+ at end example
+
+The resulting output file @file{test12.nut} will contain the first four streams
+from the input files in reverse order.
+
+ at item
+To force CBR video output:
+ at example
+ffmpeg -i myfile.avi -b 4000k -minrate 4000k -maxrate 4000k -bufsize 1835k out.m2v
+ at end example
+
+ at item
+The four options lmin, lmax, mblmin and mblmax use 'lambda' units,
+but you may use the QP2LAMBDA constant to easily convert from 'q' units:
+ at example
+ffmpeg -i src.ext -lmax 21*QP2LAMBDA dst.ext
+ at end example
+
+ at end itemize
+ at c man end EXAMPLES
+
+ at include vgtmpeg_dvd.texi
+ at include config.texi
+ at ifset config-all
+ at ifset config-avutil
+ at include utils.texi
+ at end ifset
+ at ifset config-avcodec
+ at include codecs.texi
+ at include bitstream_filters.texi
+ at end ifset
+ at ifset config-avformat
+ at include formats.texi
+ at include protocols.texi
+ at end ifset
+ at ifset config-avdevice
+ at include devices.texi
+ at end ifset
+ at ifset config-swresample
+ at include resampler.texi
+ at end ifset
+ at ifset config-swscale
+ at include scaler.texi
+ at end ifset
+ at ifset config-avfilter
+ at include filters.texi
+ at end ifset
+ at end ifset
+
+ at chapter See Also
+
+ at ifhtml
+ at ifset config-all
+ at url{ffmpeg.html,ffmpeg}
+ at end ifset
+ at ifset config-not-all
+ at url{ffmpeg-all.html,ffmpeg-all},
+ at end ifset
+ at url{ffplay.html,ffplay}, @url{ffprobe.html,ffprobe}, @url{ffserver.html,ffserver},
+ at url{ffmpeg-utils.html,ffmpeg-utils},
+ at url{ffmpeg-scaler.html,ffmpeg-scaler},
+ at url{ffmpeg-resampler.html,ffmpeg-resampler},
+ at url{ffmpeg-codecs.html,ffmpeg-codecs},
+ at url{ffmpeg-bitstream-filters.html,ffmpeg-bitstream-filters},
+ at url{ffmpeg-formats.html,ffmpeg-formats},
+ at url{ffmpeg-devices.html,ffmpeg-devices},
+ at url{ffmpeg-protocols.html,ffmpeg-protocols},
+ at url{ffmpeg-filters.html,ffmpeg-filters}
+ at end ifhtml
+
+ at ifnothtml
+ at ifset config-all
+ffmpeg(1),
+ at end ifset
+ at ifset config-not-all
+ffmpeg-all(1),
+ at end ifset
+ffplay(1), ffprobe(1), ffserver(1),
+ffmpeg-utils(1), ffmpeg-scaler(1), ffmpeg-resampler(1),
+ffmpeg-codecs(1), ffmpeg-bitstream-filters(1), ffmpeg-formats(1),
+ffmpeg-devices(1), ffmpeg-protocols(1), ffmpeg-filters(1)
+ at end ifnothtml
+
+ at include authors.texi
+
+ at ignore
+
+ at setfilename ffmpeg
+ at settitle ffmpeg video converter
+
+ at end ignore
+
+ at bye
diff --git a/doc/vgtmpeg_dvd.texi b/doc/vgtmpeg_dvd.texi
new file mode 100644
index 0000000000..6e4266767d
--- /dev/null
+++ b/doc/vgtmpeg_dvd.texi
@@ -0,0 +1,30 @@
+ at chapter DVD support
+ at c man begin DVDSUPPORT
+
+vgtmpeg adds support for DVDs in its version of libavformat. DVD support is implemented by adding a new dvdurl protocol that can parse DVD folders, DVD ISO files, DVD devices and more. All the regular features available in vgtmpeg/ffmpeg remain available when a DVD URL is used, from direct stream copy to all sorts of filtering and transcoding possibilities.
+
+ at section Using DVDs with vgtmpeg
+Strictly speaking, one can open a DVD folder, ISO file, etc. by using a DVD URL like this:
+
+ at example
+> vgtmpeg -i dvd://path_to_dvd  outfile
+ at end example
+
+When using the above format, vgtmpeg will inspect the path_to_dvd location looking for a DVD image in the form of an ISO file or a DVD folder. path_to_dvd can also be any of the individual files inside the VIDEO_TS folder; vgtmpeg will figure out the rest.
+
+By default, the title with the longest duration is opened when using the above syntax. If you want to rely on this behavior, the dvd:// prefix is not required and just specifying the path will suffice. One can, however, ask for a specific title to be used as the input with a URL query variable:
+
+ at example
+> vgtmpeg -i dvd://path_to_dvd?title=5 outfile
+ at end example
+
+This will open title 5 of the DVD (if available). If you want to know what is available on a DVD, simply type:
+
+ at example
+> vgtmpeg -i dvd://path_to_dvd
+ at end example
+
+ at section DVD titles and vgtmpeg
+vgtmpeg handles DVD titles by mapping every DVD title as if it were a separate input file. This way, the user can use the powerful mapping techniques available in ffmpeg. All DVD titles are opened simultaneously and are available for reading and conversion.
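+
+For example (the path and stream indices here are placeholders, and the exact
+numbering depends on the disc), streams of a particular title can then be
+selected with the usual @code{-map} syntax:
+ at example
+> vgtmpeg -i dvd://path_to_dvd?title=3 -map 0:v -map 0:a:1 outfile
+ at end example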
+
+ at c man end DVDSUPPORT
diff --git a/ffbuild/version.sh b/ffbuild/version.sh
index edc4dd33c5..8fe0eb158b 100755
--- a/ffbuild/version.sh
+++ b/ffbuild/version.sh
@@ -11,6 +11,9 @@ if ! test "$revision"; then
     fi
 fi
 
+#vgtmpeg. just pick the latest 'n...' tag without extra info
+revision=$(cd "$1" && git describe --tags --match "n*" --abbrev=0 2> /dev/null)
+
 # Shallow Git clones (--depth) do not have the N tag:
 # use 'git-YYYY-MM-DD-hhhhhhh'.
 test "$revision" || revision=$(cd "$1" &&
@@ -35,7 +38,9 @@ test "$revision" || revision=$(cd "$1" && cat RELEASE 2> /dev/null)
 test "$revision" && test "$git_hash" && revision="$revision-$git_hash"
 
 # releases extract the version number from the VERSION file
-version=$(cd "$1" && cat VERSION 2> /dev/null)
+vgtmpeg_version=$(cd "$1" && cat VERSION 2> /dev/null)
+VGTMPEG_REVISION="#define VGTMPEG_VERSION \"$vgtmpeg_version\""
+
 test "$version" || version=$revision
 
 test -n "$3" && version=$version-$3
@@ -58,6 +63,7 @@ if test "$NEW_REVISION" != "$OLD_REVISION"; then
 #ifndef $GUARD
 #define $GUARD
 $NEW_REVISION
+$VGTMPEG_REVISION
 #endif /* $GUARD */
 EOF
 fi
diff --git a/fftools/Makefile b/fftools/Makefile
index c3a0ff340b..c8aef20236 100644
--- a/fftools/Makefile
+++ b/fftools/Makefile
@@ -1,15 +1,25 @@
 AVPROGS-$(CONFIG_FFMPEG)   += ffmpeg
 AVPROGS-$(CONFIG_FFPLAY)   += ffplay
 AVPROGS-$(CONFIG_FFPROBE)  += ffprobe
+# -- vgtmpeg
+AVPROGS-yes	+= vgtmpeg
+# -- vgtmpeg
+
+
 
 AVPROGS     := $(AVPROGS-yes:%=%$(PROGSSUF)$(EXESUF))
 PROGS       += $(AVPROGS)
 
-AVBASENAMES  = ffmpeg ffplay ffprobe
+# --vgtmpeg
+AVBASENAMES  = ffmpeg ffplay ffprobe vgtmpeg
+FF_EXTRALIBS += -ldvdread -lbluray
+# --vgtmpeg
+
 ALLAVPROGS   = $(AVBASENAMES:%=%$(PROGSSUF)$(EXESUF))
 ALLAVPROGS_G = $(AVBASENAMES:%=%$(PROGSSUF)_g$(EXESUF))
 
-OBJS-ffmpeg                        += fftools/ffmpeg_opt.o fftools/ffmpeg_filter.o fftools/ffmpeg_hw.o
+# --vgtmpeg
+OBJS-ffmpeg                        += fftools/ffmpeg_opt.o fftools/ffmpeg_filter.o fftools/ffmpeg_hw.o fftools/vgtmpeg_support.o
 OBJS-ffmpeg-$(CONFIG_CUVID)        += fftools/ffmpeg_cuvid.o
 OBJS-ffmpeg-$(CONFIG_LIBMFX)       += fftools/ffmpeg_qsv.o
 ifndef CONFIG_VIDEOTOOLBOX
@@ -17,13 +27,22 @@ OBJS-ffmpeg-$(CONFIG_VDA)          += fftools/ffmpeg_videotoolbox.o
 endif
 OBJS-ffmpeg-$(CONFIG_VIDEOTOOLBOX) += fftools/ffmpeg_videotoolbox.o
 
+OBJS-vgtmpeg                        += fftools/ffmpeg_opt.o fftools/ffmpeg_filter.o fftools/ffmpeg_hw.o fftools/vgtmpeg_support.o
+OBJS-vgtmpeg-$(CONFIG_CUVID)        += fftools/ffmpeg_cuvid.o
+OBJS-vgtmpeg-$(CONFIG_LIBMFX)       += fftools/ffmpeg_qsv.o
+ifndef CONFIG_VIDEOTOOLBOX
+OBJS-vgtmpeg-$(CONFIG_VDA)          += fftools/ffmpeg_videotoolbox.o
+endif
+OBJS-vgtmpeg-$(CONFIG_VIDEOTOOLBOX) += fftools/ffmpeg_videotoolbox.o
+# --vgtmpeg
+
 define DOFFTOOL
 OBJS-$(1) += fftools/cmdutils.o fftools/$(1).o $(OBJS-$(1)-yes)
 $(1)$(PROGSSUF)_g$(EXESUF): $$(OBJS-$(1))
 $$(OBJS-$(1)): | fftools
-$$(OBJS-$(1)): CFLAGS  += $(CFLAGS-$(1))
+$$(OBJS-$(1)): CFLAGS  += $(CFLAGS-$(1)) 
 $(1)$(PROGSSUF)_g$(EXESUF): LDFLAGS += $(LDFLAGS-$(1))
-$(1)$(PROGSSUF)_g$(EXESUF): FF_EXTRALIBS += $(EXTRALIBS-$(1))
+$(1)$(PROGSSUF)_g$(EXESUF): FF_EXTRALIBS += $(EXTRALIBS-$(1)) -ldvdread
 -include $$(OBJS-$(1):.o=.d)
 endef
 
diff --git a/fftools/cmdutils.c b/fftools/cmdutils.c
index 9cfbc45c2b..31a4ed5e56 100644
--- a/fftools/cmdutils.c
+++ b/fftools/cmdutils.c
@@ -1141,13 +1141,14 @@ static void print_program_info(int flags, int level)
 {
     const char *indent = flags & INDENT? "  " : "";
 
-    av_log(NULL, level, "%s version " FFMPEG_VERSION, program_name);
+/*-- vgtmpeg */
+    av_log(NULL, level, "%s version " VGTMPEG_VERSION, program_name);
     if (flags & SHOW_COPYRIGHT)
-        av_log(NULL, level, " Copyright (c) %d-%d the FFmpeg developers",
-               program_birth_year, CONFIG_THIS_YEAR);
+    av_log(NULL, level, " Copyright (c) %d-%d Alberto Vigata and the FFmpeg developers", program_birth_year, CONFIG_THIS_YEAR);
     av_log(NULL, level, "\n");
-    av_log(NULL, level, "%sbuilt with %s\n", indent, CC_IDENT);
-
+    av_log(NULL, level, "%sbuilt on %s %s with %s\n",indent, __DATE__, __TIME__, CC_IDENT);
+    av_log(NULL, level, "%sbased on ffmpeg version %s\n",indent, FFMPEG_VERSION);
+/*-- vgtmpeg */
     av_log(NULL, level, "%sconfiguration: " FFMPEG_CONFIGURATION "\n", indent);
 }
 
@@ -1183,6 +1184,13 @@ void show_banner(int argc, char **argv, const OptionDef *options)
     if (hide_banner || idx)
         return;
 
+    /* -- vgtmpeg */
+    /* don't show banner if output json dump info */
+    if( locate_option(argc, argv, options, "codecs_json") ) return;
+    if( locate_option(argc, argv, options, "formats_json") ) return;
+    if( locate_option(argc, argv, options, "options_json") ) return;
+    /* -- vgtmpeg */
+
     print_program_info (INDENT|SHOW_COPYRIGHT, AV_LOG_INFO);
     print_all_libs_info(INDENT|SHOW_CONFIG,  AV_LOG_INFO);
     print_all_libs_info(INDENT|SHOW_VERSION, AV_LOG_INFO);
diff --git a/fftools/ffmpeg.h b/fftools/ffmpeg.h
index eb1eaf6363..469921135a 100644
--- a/fftools/ffmpeg.h
+++ b/fftools/ffmpeg.h
@@ -558,6 +558,13 @@ typedef struct OutputFile {
     int64_t recording_time;  ///< desired length of the resulting file in microseconds == AV_TIME_BASE units
     int64_t start_time;      ///< start time in microseconds == AV_TIME_BASE units
     uint64_t limit_filesize; /* filesize limit expressed in bytes */
+    
+	/* >> vgtmpeg */
+    int wrote_header; /* flag indicating that the header was already written.  --vgtmpeg */
+    int wrote_trailer; /* flag indicating that trailer was written */
+	/* << vgtmpeg */
+
+
 
     int shortest;
 
diff --git a/fftools/ffmpeg_opt.c b/fftools/ffmpeg_opt.c
index 53d688b764..77a5553232 100644
--- a/fftools/ffmpeg_opt.c
+++ b/fftools/ffmpeg_opt.c
@@ -21,6 +21,14 @@
 #include <stdint.h>
 
 #include "ffmpeg.h"
+/* >> vgtmpeg */
+#include "vgtmpeg.h"
+int output_xml = 0;
+int server_mode = 0;
+int banner = 1;
+int default_program_id = -1;
+/* << vgtmpeg */
+
 #include "cmdutils.h"
 
 #include "libavformat/avformat.h"
@@ -994,7 +1002,13 @@ static void dump_attachment(AVStream *st, const char *filename)
     avio_close(out);
 }
 
-static int open_input_file(OptionsContext *o, const char *filename)
+/* --vgtmpeg start 
+ *
+ * The parsing of the input file is modified to support extra protocols on the input file handler.
+ * parse_input_file is the original open_input_file from ffmpeg.
+ *
+ **/
+static int parse_input_file(OptionsContext *o, const char *filename)
 {
     InputFile *f;
     AVFormatContext *ic;
@@ -1034,8 +1048,12 @@ static int open_input_file(OptionsContext *o, const char *filename)
     if (!strcmp(filename, "-"))
         filename = "pipe:";
 
+    /* --vgtmpeg */
     stdin_interaction &= strncmp(filename, "pipe:", 5) &&
-                         strcmp(filename, "/dev/stdin");
+                         strcmp(filename, "/dev/stdin") && 
+                         !server_mode;
+	/* --vgtmpeg */
+
 
     /* get default parameters from command line */
     ic = avformat_alloc_context();
@@ -1192,6 +1210,11 @@ static int open_input_file(OptionsContext *o, const char *filename)
     /* dump the file content */
     av_dump_format(ic, nb_input_files, filename, 0);
 
+  	/* --vgtmpeg */
+    if( output_xml )
+        dump_nlformat(ic, nb_input_files, filename, 0);
+	/* --vgtmpeg */
+
     GROW_ARRAY(input_files, nb_input_files);
     f = av_mallocz(sizeof(*f));
     if (!f)
@@ -1268,6 +1291,31 @@ static int open_input_file(OptionsContext *o, const char *filename)
     return 0;
 }
 
+/* called by input file parsers to select a default program */
+static void select_default_program(int programid) {
+    default_program_id = programid;
+}
+
+/* A veneer function to parse_input_file */
+static int parse_file_veneer(void *ctx, char *filename) {
+    return parse_input_file((OptionsContext *)ctx, filename);
+}
+
+static ff_input_func_t ff_input_funcs = {
+		parse_file_veneer,
+		select_default_program
+};
+
+static int open_input_file(OptionsContext *o, const char *filename) {
+#if CONFIG_DVD_PROTOCOL || CONFIG_BD_PROTOCOL
+	if (parse_optmedia_path(o, filename, &ff_input_funcs)) {
+		return 1;
+	}
+#endif
+	return parse_input_file(o, filename);
+}
+/* --vgtmpeg end */
+
 static uint8_t *get_line(AVIOContext *s)
 {
     AVIOContext *line;
@@ -2105,6 +2153,31 @@ static int init_complex_filters(void)
     return 0;
 }
 
+
+/* --vgtmpeg start*/
+/* returns the program id that this stream belongs to or -1 if the 
+ * stream doesn't belong to any program */
+static int get_programid_from_stream(InputStream *is) {
+    for( int k=0; k<nb_input_files; k++) {
+        AVFormatContext *ic = input_files[k]->ctx;
+        int i, j;
+
+        for(i=0; i<ic->nb_programs; i++){
+            AVProgram *p= ic->programs[i];
+            for(j=0; j<p->nb_stream_indexes; j++){
+                AVStream *st = ic->streams[p->stream_index[j]];
+                if( is->file_index==k && st->id == is->st->id && st->index == is->st->index ) {
+                    av_log(NULL,AV_LOG_VERBOSE, "stream %d:%d(%X) belongs to program %d (default: %d)\n", is->file_index, st->index, st->id, p->id, default_program_id);
+                    return p->id;
+                }
+            }
+        }
+    }
+    av_log(NULL,AV_LOG_VERBOSE, "stream %d:%d(%X) doesn't belong to any program  (default: %d)\n", is->file_index, is->st->index, is->st->id, default_program_id);
+    return -1;
+}
+/* --vgtmpeg stop */
+
 static int open_output_file(OptionsContext *o, const char *filename)
 {
     AVFormatContext *oc;
@@ -2191,6 +2264,18 @@ static int open_output_file(OptionsContext *o, const char *filename)
         char *subtitle_codec_name = NULL;
         /* pick the "best" stream of each type */
 
+        /* --vgtmpeg
+         * This is the path that open_output_file takes when there are no explicit mappings of streams.
+         *
+         * vgtmpeg uses the 'default_program_id' if set (if != -1) to force selection of those streams
+         * as the default for the output. 'default_program_id' is usually set by the dvd and bluray libraries.
+         *
+         * note that if no 'default_program_id' is selected its value is -1. Using the method get_programid_from_stream()
+ * returns -1 if a stream is not contained in a program, effectively coercing all orphaned streams into the default
+ * program and making them ready for selection. This is the common path when 'default_program_id' is not set.
+         *
+         * --vgtmpeg */
+
         /* video: highest resolution */
         if (!o->video_disable && av_guess_codec(oc->oformat, NULL, filename, NULL, AVMEDIA_TYPE_VIDEO) != AV_CODEC_ID_NONE) {
             int area = 0, idx = -1;
@@ -2204,7 +2289,10 @@ static int open_output_file(OptionsContext *o, const char *filename)
                 if((qcr!=MKTAG('A', 'P', 'I', 'C')) && (ist->st->disposition & AV_DISPOSITION_ATTACHED_PIC))
                     new_area = 1;
                 if (ist->st->codecpar->codec_type == AVMEDIA_TYPE_VIDEO &&
-                    new_area > area) {
+                    new_area > area 
+                    /* --vgtmpeg */
+                    && default_program_id == get_programid_from_stream(ist) ) {
+	                /* --vgtmpeg */
                     if((qcr==MKTAG('A', 'P', 'I', 'C')) && !(ist->st->disposition & AV_DISPOSITION_ATTACHED_PIC))
                         continue;
                     area = new_area;
@@ -2225,6 +2313,7 @@ static int open_output_file(OptionsContext *o, const char *filename)
                 if (ist->user_set_discard == AVDISCARD_ALL)
                     continue;
                 if (ist->st->codecpar->codec_type == AVMEDIA_TYPE_AUDIO &&
+                    default_program_id == get_programid_from_stream(ist) &&
                     score > best_score) {
                     best_score = score;
                     idx = i;
@@ -2238,7 +2327,10 @@ static int open_output_file(OptionsContext *o, const char *filename)
         MATCH_PER_TYPE_OPT(codec_names, str, subtitle_codec_name, oc, "s");
         if (!o->subtitle_disable && (avcodec_find_encoder(oc->oformat->subtitle_codec) || subtitle_codec_name)) {
             for (i = 0; i < nb_input_streams; i++)
-                if (input_streams[i]->st->codecpar->codec_type == AVMEDIA_TYPE_SUBTITLE) {
+                if (input_streams[i]->st->codecpar->codec_type == AVMEDIA_TYPE_SUBTITLE
+                    /* --vgtmpeg */
+                    && default_program_id == get_programid_from_stream(input_streams[i]) ) {
+	                /* --vgtmpeg */
                     AVCodecDescriptor const *input_descriptor =
                         avcodec_descriptor_get(input_streams[i]->st->codecpar->codec_id);
                     AVCodecDescriptor const *output_descriptor = NULL;
@@ -3362,6 +3454,9 @@ static int opt_progress(void *optctx, const char *opt, const char *arg)
 const OptionDef options[] = {
     /* main options */
     CMDUTILS_COMMON_OPTIONS
+  	/* --vgtmpeg */
+#include "vgtmpeg_opts.h"
+	/* --vgtmpeg */
     { "f",              HAS_ARG | OPT_STRING | OPT_OFFSET |
                         OPT_INPUT | OPT_OUTPUT,                      { .off       = OFFSET(format) },
         "force format", "fmt" },
diff --git a/fftools/nldump_format.h b/fftools/nldump_format.h
new file mode 100644
index 0000000000..d3d2425d9c
--- /dev/null
+++ b/fftools/nldump_format.h
@@ -0,0 +1,37 @@
+/* @@--
+ * 
+ * Copyright (C) 2010-2018 Alberto Vigata
+ *       
+ * This file is part of vgtmpeg
+ * 
+ * a Versed Generalist Transcoder
+ * 
+ * vgtmpeg is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2, or (at your option)
+ * any later version.
+ * 
+ * vgtmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ * 
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ */
+
+#ifndef __NLDUMP_FORMAT_H
+#define __NLDUMP_FORMAT_H
+
+#include "libavcodec/avcodec.h"
+#include "libavutil/avstring.h"
+#include "libavutil/dict.h"
+#include "libavutil/pixdesc.h"
+
+
+void dump_nlformat(AVFormatContext *ic,
+                 int index,
+                 const char *url,
+                 int is_output);
+#endif
diff --git a/fftools/nlffmsg.h b/fftools/nlffmsg.h
new file mode 100644
index 0000000000..cb96f46b0a
--- /dev/null
+++ b/fftools/nlffmsg.h
@@ -0,0 +1,91 @@
+/* @@--
+ * 
+ * Copyright (C) 2010-2018 Alberto Vigata
+ *       
+ * This file is part of vgtmpeg
+ * 
+ * a Versed Generalist Transcoder
+ * 
+ * vgtmpeg is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2, or (at your option)
+ * any later version.
+ * 
+ * vgtmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ * 
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ */
+
+#ifndef __NLFFMSG_H
+#define __NLFFMSG_H
+
+#include <inttypes.h>
+
+#define FFMSG_VERSION_MAJOR 0
+#define FFMSG_VERSION_MINOR 3
+
+#define FFMSG_MSGTYPE_STREAMINFO "streaminfo"
+#define FFMSG_MSGTYPE_PROGRESSINFO "progressinfo"
+
+
+#define FFMSG_START "<nlffmsg>\n"
+#define FFMSG_STOP "</nlffmsg>\n"
+
+
+#define FFMSG_NODE_START(x) "<" #x ">\n"
+#define FFMSG_NODE_STOP(x)   "</" #x ">\n" 
+#define FFMSG_NODE_START_FMT(x) "<" x ">\n"
+#define FFMSG_NODE_STOP_FMT(x)   "</" x ">\n" 
+
+
+#define FFMSG_STRING_FMT(name) "<" #name " type=\"string\" val=\"%s\"/>\n"
+#define FFMSG_INTEGER_FMT(name) "<" #name " type=\"integer\" val=\"%" PRIi64 "\"/>\n"
+#define FFMSG_INT32_FMT(name) "<" #name " type=\"integer\" val=\"%d\"/>\n"
+
+#define FFMSG_LOG(...)  av_log ( NULL, AV_LOG_INFO, __VA_ARGS__ )
+
+#define FFMSG_PICTURE_START(width,height,format) FFMSG_LOG("<nlpicmsg width=\"%d\" height=\"%d\" format=\"%d\">\n", width, height, format )
+#define FFMSG_PICTURE_DATA(b64data) fputs(b64data,stderr)
+#define FFMSG_PICTURE_STOP()   FFMSG_LOG("</nlpicmsg>\n")
+
+#define FFMSG_START_MSGTYPE( type, mainkey) \
+    FFMSG_LOG( FFMSG_START );   \
+    FFMSG_LOG( FFMSG_INT32_FMT(version_major), FFMSG_VERSION_MAJOR );\
+    FFMSG_LOG( FFMSG_INT32_FMT(version_minor), FFMSG_VERSION_MINOR );\
+    FFMSG_LOG( FFMSG_STRING_FMT(msgtype), type  ); \
+    FFMSG_LOG( FFMSG_NODE_START(mainkey) );
+
+#define FFMSG_STOP_MSGTYPE(type, mainkey) \
+    FFMSG_LOG( FFMSG_NODE_STOP(mainkey) ); \
+    FFMSG_LOG( FFMSG_STOP );
+
+char *xescape(char *buf, char *s);
+
+
+#define FFMSG_STRING_VALUE(name,value) {FFMSG_LOG("<%s type=\"string\" val=\"%s\"/>\n", xescape(xmlesc1,name), xescape(xmlesc2,value));}
+/* definitions 
+ *
+ * ffnlmsg.version.major        (integer)
+ * ffnlmsg.version.minor        (integer)
+ * ffnlmsg.muxinfo.direction (string)   [input|output]
+ * ffnlmsg.muxinfo.index      (integer) 
+ * ffnlmsg.muxinfo.timebase   (integer) 
+ * ffnlmsg.muxinfo.mux_format (string)  [mpeg4|...]
+ * ffnlmsg.muxinfo.program_count  (integer) 
+ * ffnlmsg.muxinfo.stream_count     (integer)
+ * ffnlmsg.muxinfo.duration                 (integer)
+ * ffnlmsg.muxinfo.start_time               (integer)
+ * ffnlmsg.muxinfo.bitrate                  (integer)
+ * ffnlmsg.muxinfo.programs                 (array)
+ * ffnlmsg.muxinfo.programs.id[n]           (program info)
+ * ffnlmsg.muxinfo.programs.id[n].id        (integer)
+ * ffnlmsg.muxinfo.programs.id[n].name      (string)
+ *  
+ *  ...
+ */
+#endif /* __NLFFMSG_H */
diff --git a/fftools/nlinput.h b/fftools/nlinput.h
new file mode 100644
index 0000000000..dcbef83a11
--- /dev/null
+++ b/fftools/nlinput.h
@@ -0,0 +1,53 @@
+/* @@--
+ * 
+ * Copyright (C) 2010-2018 Alberto Vigata
+ *       
+ * This file is part of vgtmpeg
+ * 
+ * a Versed Generalist Transcoder
+ * 
+ * vgtmpeg is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2, or (at your option)
+ * any later version.
+ * 
+ * vgtmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ * 
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ */
+
+#ifndef __NLINPUT_H
+#define __NLINPUT_H
+
+#include "nlffmsg.h"
+#include "config.h"
+
+
+
+#if HAVE_PTHREADS
+#include <pthread.h>
+#elif HAVE_W32THREADS
+#include "compat/w32pthreads.h"
+#elif HAVE_OS2THREADS
+#include "os2threads.h"
+#endif
+
+/* cross thread signal struct */
+typedef struct {
+    int exit;
+    int cancel_transcode;
+    pthread_t nlin_th;
+} nlinput_t;
+
+
+/* fires up input thread */
+nlinput_t *nlinput_prepare(void);
+void nlinput_cancel(nlinput_t *);
+
+
+#endif /* __NLINPUT_H */
diff --git a/fftools/nljsonmsg.h b/fftools/nljsonmsg.h
new file mode 100644
index 0000000000..97b304b27b
--- /dev/null
+++ b/fftools/nljsonmsg.h
@@ -0,0 +1,32 @@
+/* @@--
+ * 
+ * Copyright (C) 2010-2018 Alberto Vigata
+ *       
+ * This file is part of vgtmpeg
+ * 
+ * a Versed Generalist Transcoder
+ * 
+ * vgtmpeg is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2, or (at your option)
+ * any later version.
+ * 
+ * vgtmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ * 
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ */
+
+#ifndef __NLJSONMSG_H
+#define __NLJSONMSG_H
+
+
+#include <inttypes.h>
+
+
+
+#endif 
diff --git a/fftools/nlreport.h b/fftools/nlreport.h
new file mode 100644
index 0000000000..73ba5a9e23
--- /dev/null
+++ b/fftools/nlreport.h
@@ -0,0 +1,46 @@
+/* @@--
+ * 
+ * Copyright (C) 2010-2018 Alberto Vigata
+ *       
+ * This file is part of vgtmpeg
+ * 
+ * a Versed Generalist Transcoder
+ * 
+ * vgtmpeg is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2, or (at your option)
+ * any later version.
+ * 
+ * vgtmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ * 
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ */
+
+#ifndef __NLREPORT_H
+#define __NLREPORT_H
+
+#include "libavutil/opt.h"
+#include "libavutil/time.h"
+#include "ffmpeg.h"
+
+void print_nlreport( OutputFile **output_files,
+                         OutputStream **ost_table, int nb_ostreams,
+                         int is_last_report, int64_t timer_start, int nb_frames_dup, int nb_frames_drop );
+
+
+void c_strfree(char *str);  
+char *c_strescape (const char *source);
+
+
+void show_codecs_json(void);
+void show_formats_json(void);
+void show_options_json(void);
+
+
+
+#endif
diff --git a/fftools/vgtmpeg.c b/fftools/vgtmpeg.c
new file mode 100644
index 0000000000..127247908c
--- /dev/null
+++ b/fftools/vgtmpeg.c
@@ -0,0 +1,4994 @@
+/* @@--
+ * 
+ * Copyright (C) 2010-2018 Alberto Vigata
+ *       
+ * This file is part of vgtmpeg
+ * 
+ * a Versed Generalist Transcoder
+ * 
+ * vgtmpeg is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2, or (at your option)
+ * any later version.
+ * 
+ * vgtmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ * 
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ */
+
+#include "config.h"
+#include <ctype.h>
+#include <string.h>
+#include <math.h>
+#include <stdlib.h>
+#include <errno.h>
+#include <limits.h>
+#include <stdatomic.h>
+#include <stdint.h>
+
+#if HAVE_IO_H
+#include <io.h>
+#endif
+#if HAVE_UNISTD_H
+#include <unistd.h>
+#endif
+
+#include "libavformat/avformat.h"
+#include "libavdevice/avdevice.h"
+#include "libswresample/swresample.h"
+#include "libavutil/opt.h"
+#include "libavutil/channel_layout.h"
+#include "libavutil/parseutils.h"
+#include "libavutil/samplefmt.h"
+#include "libavutil/fifo.h"
+#include "libavutil/hwcontext.h"
+#include "libavutil/internal.h"
+#include "libavutil/intreadwrite.h"
+#include "libavutil/dict.h"
+#include "libavutil/display.h"
+#include "libavutil/mathematics.h"
+#include "libavutil/pixdesc.h"
+#include "libavutil/avstring.h"
+#include "libavutil/libm.h"
+#include "libavutil/imgutils.h"
+#include "libavutil/timestamp.h"
+#include "libavutil/bprint.h"
+#include "libavutil/time.h"
+#include "libavutil/thread.h"
+#include "libavutil/threadmessage.h"
+#include "libavcodec/mathops.h"
+#include "libavformat/os_support.h"
+
+# include "libavfilter/avfilter.h"
+# include "libavfilter/buffersrc.h"
+# include "libavfilter/buffersink.h"
+
+#if HAVE_SYS_RESOURCE_H
+#include <sys/time.h>
+#include <sys/types.h>
+#include <sys/resource.h>
+#elif HAVE_GETPROCESSTIMES
+#include <windows.h>
+#endif
+#if HAVE_GETPROCESSMEMORYINFO
+#include <windows.h>
+#include <psapi.h>
+#endif
+#if HAVE_SETCONSOLECTRLHANDLER
+#include <windows.h>
+#endif
+
+
+#if HAVE_SYS_SELECT_H
+#include <sys/select.h>
+#endif
+
+#if HAVE_TERMIOS_H
+#include <fcntl.h>
+#include <sys/ioctl.h>
+#include <sys/time.h>
+#include <termios.h>
+#elif HAVE_KBHIT
+#include <conio.h>
+#endif
+
+#include <time.h>
+
+#include "ffmpeg.h"
+#include "cmdutils.h"
+
+#include "libavutil/avassert.h"
+
+/* -- vgtmpeg*/
+const char program_name[] = "vgtmpeg";
+const int program_birth_year = 2010;
+/* -- vgtmpeg */
+static FILE *vstats_file;
+
+const char *const forced_keyframes_const_names[] = {
+    "n",
+    "n_forced",
+    "prev_forced_n",
+    "prev_forced_t",
+    "t",
+    NULL
+};
+
+static void do_video_stats(OutputStream *ost, int frame_size);
+static int64_t getutime(void);
+static int64_t getmaxrss(void);
+static int ifilter_has_all_input_formats(FilterGraph *fg);
+
+static int run_as_daemon  = 0;
+static int nb_frames_dup = 0;
+static unsigned dup_warning = 1000;
+static int nb_frames_drop = 0;
+static int64_t decode_error_stat[2];
+
+static int want_sdp = 1;
+
+static int current_time;
+AVIOContext *progress_avio = NULL;
+
+static uint8_t *subtitle_out;
+
+InputStream **input_streams = NULL;
+int        nb_input_streams = 0;
+InputFile   **input_files   = NULL;
+int        nb_input_files   = 0;
+
+OutputStream **output_streams = NULL;
+int         nb_output_streams = 0;
+OutputFile   **output_files   = NULL;
+int         nb_output_files   = 0;
+
+FilterGraph **filtergraphs;
+int        nb_filtergraphs;
+
+#if HAVE_TERMIOS_H
+
+/* init terminal so that we can grab keys */
+static struct termios oldtty;
+static int restore_tty;
+#endif
+
+#if HAVE_THREADS
+static void free_input_threads(void);
+#endif
+
+/* --vgtmpeg */
+#include "vgtmpeg.h" 
+
+//globals for vgtmpeg
+static nlinput_t *nli;
+/* --vgtmpeg */
+
+/* sub2video hack:
+   Convert subtitles to video with alpha to insert them in filter graphs.
+   This is a temporary solution until libavfilter gets real subtitles support.
+ */
+
+static int sub2video_get_blank_frame(InputStream *ist)
+{
+    int ret;
+    AVFrame *frame = ist->sub2video.frame;
+
+    av_frame_unref(frame);
+    ist->sub2video.frame->width  = ist->dec_ctx->width  ? ist->dec_ctx->width  : ist->sub2video.w;
+    ist->sub2video.frame->height = ist->dec_ctx->height ? ist->dec_ctx->height : ist->sub2video.h;
+    ist->sub2video.frame->format = AV_PIX_FMT_RGB32;
+    if ((ret = av_frame_get_buffer(frame, 32)) < 0)
+        return ret;
+    memset(frame->data[0], 0, frame->height * frame->linesize[0]);
+    return 0;
+}
+
+static void sub2video_copy_rect(uint8_t *dst, int dst_linesize, int w, int h,
+                                AVSubtitleRect *r)
+{
+    uint32_t *pal, *dst2;
+    uint8_t *src, *src2;
+    int x, y;
+
+    if (r->type != SUBTITLE_BITMAP) {
+        av_log(NULL, AV_LOG_WARNING, "sub2video: non-bitmap subtitle\n");
+        return;
+    }
+    if (r->x < 0 || r->x + r->w > w || r->y < 0 || r->y + r->h > h) {
+        av_log(NULL, AV_LOG_WARNING, "sub2video: rectangle (%d %d %d %d) overflowing %d %d\n",
+            r->x, r->y, r->w, r->h, w, h
+        );
+        return;
+    }
+
+    dst += r->y * dst_linesize + r->x * 4;
+    src = r->data[0];
+    pal = (uint32_t *)r->data[1];
+    for (y = 0; y < r->h; y++) {
+        dst2 = (uint32_t *)dst;
+        src2 = src;
+        for (x = 0; x < r->w; x++)
+            *(dst2++) = pal[*(src2++)];
+        dst += dst_linesize;
+        src += r->linesize[0];
+    }
+}
+
+static void sub2video_push_ref(InputStream *ist, int64_t pts)
+{
+    AVFrame *frame = ist->sub2video.frame;
+    int i;
+    int ret;
+
+    av_assert1(frame->data[0]);
+    ist->sub2video.last_pts = frame->pts = pts;
+    for (i = 0; i < ist->nb_filters; i++) {
+        ret = av_buffersrc_add_frame_flags(ist->filters[i]->filter, frame,
+                                           AV_BUFFERSRC_FLAG_KEEP_REF |
+                                           AV_BUFFERSRC_FLAG_PUSH);
+        if (ret != AVERROR_EOF && ret < 0)
+            av_log(NULL, AV_LOG_WARNING, "Error while add the frame to buffer source(%s).\n",
+                   av_err2str(ret));
+    }
+}
+
+void sub2video_update(InputStream *ist, AVSubtitle *sub)
+{
+    AVFrame *frame = ist->sub2video.frame;
+    int8_t *dst;
+    int     dst_linesize;
+    int num_rects, i;
+    int64_t pts, end_pts;
+
+    if (!frame)
+        return;
+    if (sub) {
+        pts       = av_rescale_q(sub->pts + sub->start_display_time * 1000LL,
+                                 AV_TIME_BASE_Q, ist->st->time_base);
+        end_pts   = av_rescale_q(sub->pts + sub->end_display_time   * 1000LL,
+                                 AV_TIME_BASE_Q, ist->st->time_base);
+        num_rects = sub->num_rects;
+    } else {
+        pts       = ist->sub2video.end_pts;
+        end_pts   = INT64_MAX;
+        num_rects = 0;
+    }
+    if (sub2video_get_blank_frame(ist) < 0) {
+        av_log(ist->dec_ctx, AV_LOG_ERROR,
+               "Impossible to get a blank canvas.\n");
+        return;
+    }
+    dst          = frame->data    [0];
+    dst_linesize = frame->linesize[0];
+    for (i = 0; i < num_rects; i++)
+        sub2video_copy_rect(dst, dst_linesize, frame->width, frame->height, sub->rects[i]);
+    sub2video_push_ref(ist, pts);
+    ist->sub2video.end_pts = end_pts;
+}
+
+static void sub2video_heartbeat(InputStream *ist, int64_t pts)
+{
+    InputFile *infile = input_files[ist->file_index];
+    int i, j, nb_reqs;
+    int64_t pts2;
+
+    /* When a frame is read from a file, examine all sub2video streams in
+       the same file and send the sub2video frame again. Otherwise, decoded
+       video frames could be accumulating in the filter graph while a filter
+       (possibly overlay) is desperately waiting for a subtitle frame. */
+    for (i = 0; i < infile->nb_streams; i++) {
+        InputStream *ist2 = input_streams[infile->ist_index + i];
+        if (!ist2->sub2video.frame)
+            continue;
+        /* subtitles seem to be usually muxed ahead of other streams;
+           if not, subtracting a larger time here is necessary */
+        pts2 = av_rescale_q(pts, ist->st->time_base, ist2->st->time_base) - 1;
+        /* do not send the heartbeat frame if the subtitle is already ahead */
+        if (pts2 <= ist2->sub2video.last_pts)
+            continue;
+        if (pts2 >= ist2->sub2video.end_pts ||
+            (!ist2->sub2video.frame->data[0] && ist2->sub2video.end_pts < INT64_MAX))
+            sub2video_update(ist2, NULL);
+        for (j = 0, nb_reqs = 0; j < ist2->nb_filters; j++)
+            nb_reqs += av_buffersrc_get_nb_failed_requests(ist2->filters[j]->filter);
+        if (nb_reqs)
+            sub2video_push_ref(ist2, pts2);
+    }
+}
+
+static void sub2video_flush(InputStream *ist)
+{
+    int i;
+    int ret;
+
+    if (ist->sub2video.end_pts < INT64_MAX)
+        sub2video_update(ist, NULL);
+    for (i = 0; i < ist->nb_filters; i++) {
+        ret = av_buffersrc_add_frame(ist->filters[i]->filter, NULL);
+        if (ret != AVERROR_EOF && ret < 0)
+            av_log(NULL, AV_LOG_WARNING, "Flush the frame error.\n");
+    }
+}
+
+/* end of sub2video hack */
+
+static void term_exit_sigsafe(void)
+{
+#if HAVE_TERMIOS_H
+    if(restore_tty)
+        tcsetattr (0, TCSANOW, &oldtty);
+#endif
+}
+
+void term_exit(void)
+{
+    av_log(NULL, AV_LOG_QUIET, "%s", "");
+    term_exit_sigsafe();
+}
+
+static volatile int received_sigterm = 0;
+static volatile int received_nb_signals = 0;
+static atomic_int transcode_init_done = ATOMIC_VAR_INIT(0);
+static volatile int ffmpeg_exited = 0;
+static int main_return_code = 0;
+
+static void
+sigterm_handler(int sig)
+{
+    int ret;
+    received_sigterm = sig;
+    received_nb_signals++;
+    term_exit_sigsafe();
+    if(received_nb_signals > 3) {
+        ret = write(2/*STDERR_FILENO*/, "Received > 3 system signals, hard exiting\n",
+                    strlen("Received > 3 system signals, hard exiting\n"));
+        if (ret < 0) { /* Do nothing */ };
+        exit(123);
+    }
+}
+
+#if HAVE_SETCONSOLECTRLHANDLER
+static BOOL WINAPI CtrlHandler(DWORD fdwCtrlType)
+{
+    av_log(NULL, AV_LOG_DEBUG, "\nReceived windows signal %ld\n", fdwCtrlType);
+
+    switch (fdwCtrlType)
+    {
+    case CTRL_C_EVENT:
+    case CTRL_BREAK_EVENT:
+        sigterm_handler(SIGINT);
+        return TRUE;
+
+    case CTRL_CLOSE_EVENT:
+    case CTRL_LOGOFF_EVENT:
+    case CTRL_SHUTDOWN_EVENT:
+        sigterm_handler(SIGTERM);
+        /* Basically, with these 3 events, when we return from this method the
+           process is hard terminated, so stall as long as we need to
+           to try and let the main thread(s) clean up and gracefully terminate
+           (we have at most 5 seconds, but should be done far before that). */
+        while (!ffmpeg_exited) {
+            Sleep(0);
+        }
+        return TRUE;
+
+    default:
+        av_log(NULL, AV_LOG_ERROR, "Received unknown windows signal %ld\n", fdwCtrlType);
+        return FALSE;
+    }
+}
+#endif
+
+void term_init(void)
+{
+#if HAVE_TERMIOS_H
+    if (!run_as_daemon && stdin_interaction) {
+        struct termios tty;
+        if (tcgetattr (0, &tty) == 0) {
+            oldtty = tty;
+            restore_tty = 1;
+
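+            /* switch the terminal to a raw-like mode (no echo, no canonical line
+               buffering) so read_key() can see single keypresses immediately */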
+            tty.c_iflag &= ~(IGNBRK|BRKINT|PARMRK|ISTRIP
+                             |INLCR|IGNCR|ICRNL|IXON);
+            tty.c_oflag |= OPOST;
+            tty.c_lflag &= ~(ECHO|ECHONL|ICANON|IEXTEN);
+            tty.c_cflag &= ~(CSIZE|PARENB);
+            tty.c_cflag |= CS8;
+            tty.c_cc[VMIN] = 1;
+            tty.c_cc[VTIME] = 0;
+
+            tcsetattr (0, TCSANOW, &tty);
+        }
+        signal(SIGQUIT, sigterm_handler); /* Quit (POSIX).  */
+    }
+#endif
+
+    signal(SIGINT , sigterm_handler); /* Interrupt (ANSI).    */
+    signal(SIGTERM, sigterm_handler); /* Termination (ANSI).  */
+#ifdef SIGXCPU
+    signal(SIGXCPU, sigterm_handler);
+#endif
+#ifdef SIGPIPE
+    signal(SIGPIPE, SIG_IGN); /* Broken pipe (POSIX). */
+#endif
+#if HAVE_SETCONSOLECTRLHANDLER
+    SetConsoleCtrlHandler((PHANDLER_ROUTINE) CtrlHandler, TRUE);
+#endif
+}
+
+/* read a key without blocking */
+static int read_key(void)
+{
+    unsigned char ch;
+#if HAVE_TERMIOS_H
+    int n = 1;
+    struct timeval tv;
+    fd_set rfds;
+
+    FD_ZERO(&rfds);
+    FD_SET(0, &rfds);
+    tv.tv_sec = 0;
+    tv.tv_usec = 0;
+    n = select(1, &rfds, NULL, NULL, &tv);
+    if (n > 0) {
+        n = read(0, &ch, 1);
+        if (n == 1)
+            return ch;
+
+        return n;
+    }
+#elif HAVE_KBHIT
+#    if HAVE_PEEKNAMEDPIPE
+    static int is_pipe;
+    static HANDLE input_handle;
+    DWORD dw, nchars;
+    if(!input_handle){
+        input_handle = GetStdHandle(STD_INPUT_HANDLE);
+        is_pipe = !GetConsoleMode(input_handle, &dw);
+    }
+
+    if (is_pipe) {
+        /* When running under a GUI, you will end up here. */
+        if (!PeekNamedPipe(input_handle, NULL, 0, NULL, &nchars, NULL)) {
+            // input pipe may have been closed by the program that ran ffmpeg
+            return -1;
+        }
+        //Read it
+        if(nchars != 0) {
+            read(0, &ch, 1);
+            return ch;
+        }else{
+            return -1;
+        }
+    }
+#    endif
+    if(kbhit())
+        return(getch());
+#endif
+    return -1;
+}
+
+static int decode_interrupt_cb(void *ctx)
+{
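+    /* before initialization completes (transcode_init_done == 0) any pending signal
+       aborts blocking I/O; once it is done a second signal is required, presumably so
+       the main loop can handle the first one and shut down gracefully */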
+    return received_nb_signals > atomic_load(&transcode_init_done);
+}
+
+const AVIOInterruptCB int_cb = { decode_interrupt_cb, NULL };
+
+static void ffmpeg_cleanup(int ret)
+{
+    int i, j;
+
+    if (do_benchmark) {
+        int maxrss = getmaxrss() / 1024;
+        av_log(NULL, AV_LOG_INFO, "bench: maxrss=%ikB\n", maxrss);
+    }
+
+
+    /* --vgtmpeg start */
+    if( output_xml ) {
+        FFMSG_START_MSGTYPE( FFMSG_MSGTYPE_PROGRESSINFO, progress );
+        FFMSG_LOG( FFMSG_INT32_FMT(is_last_report), 1 );
+        FFMSG_LOG( FFMSG_INT32_FMT(curtime), (int)(INT_MAX) );
+        FFMSG_STOP_MSGTYPE( FFMSG_MSGTYPE_PROGRESSINFO, progress );
+        fflush(stderr);
+    }
+    if( server_mode )
+        nlinput_cancel(nli);
+
+    /* write the trailers if not yet written */
+    for(i=0;i<nb_output_files;i++) {
+        if( output_files[i]->wrote_header && !(output_files[i]->wrote_trailer) ) {
+            av_write_trailer(output_files[i]->ctx);
+        }
+    }
+    /* --vgtmpeg end */
+
+    for (i = 0; i < nb_filtergraphs; i++) {
+        FilterGraph *fg = filtergraphs[i];
+        avfilter_graph_free(&fg->graph);
+        for (j = 0; j < fg->nb_inputs; j++) {
+            while (av_fifo_size(fg->inputs[j]->frame_queue)) {
+                AVFrame *frame;
+                av_fifo_generic_read(fg->inputs[j]->frame_queue, &frame,
+                                     sizeof(frame), NULL);
+                av_frame_free(&frame);
+            }
+            av_fifo_freep(&fg->inputs[j]->frame_queue);
+            if (fg->inputs[j]->ist->sub2video.sub_queue) {
+                while (av_fifo_size(fg->inputs[j]->ist->sub2video.sub_queue)) {
+                    AVSubtitle sub;
+                    av_fifo_generic_read(fg->inputs[j]->ist->sub2video.sub_queue,
+                                         &sub, sizeof(sub), NULL);
+                    avsubtitle_free(&sub);
+                }
+                av_fifo_freep(&fg->inputs[j]->ist->sub2video.sub_queue);
+            }
+            av_buffer_unref(&fg->inputs[j]->hw_frames_ctx);
+            av_freep(&fg->inputs[j]->name);
+            av_freep(&fg->inputs[j]);
+        }
+        av_freep(&fg->inputs);
+        for (j = 0; j < fg->nb_outputs; j++) {
+            av_freep(&fg->outputs[j]->name);
+            av_freep(&fg->outputs[j]->formats);
+            av_freep(&fg->outputs[j]->channel_layouts);
+            av_freep(&fg->outputs[j]->sample_rates);
+            av_freep(&fg->outputs[j]);
+        }
+        av_freep(&fg->outputs);
+        av_freep(&fg->graph_desc);
+
+        av_freep(&filtergraphs[i]);
+    }
+    av_freep(&filtergraphs);
+
+    av_freep(&subtitle_out);
+
+    /* close files */
+    for (i = 0; i < nb_output_files; i++) {
+        OutputFile *of = output_files[i];
+        AVFormatContext *s;
+        if (!of)
+            continue;
+        s = of->ctx;
+        if (s && s->oformat && !(s->oformat->flags & AVFMT_NOFILE))
+            avio_closep(&s->pb);
+        avformat_free_context(s);
+        av_dict_free(&of->opts);
+
+        av_freep(&output_files[i]);
+    }
+    for (i = 0; i < nb_output_streams; i++) {
+        OutputStream *ost = output_streams[i];
+
+        if (!ost)
+            continue;
+
+        for (j = 0; j < ost->nb_bitstream_filters; j++)
+            av_bsf_free(&ost->bsf_ctx[j]);
+        av_freep(&ost->bsf_ctx);
+
+        av_frame_free(&ost->filtered_frame);
+        av_frame_free(&ost->last_frame);
+        av_dict_free(&ost->encoder_opts);
+
+        av_freep(&ost->forced_keyframes);
+        av_expr_free(ost->forced_keyframes_pexpr);
+        av_freep(&ost->avfilter);
+        av_freep(&ost->logfile_prefix);
+
+        av_freep(&ost->audio_channels_map);
+        ost->audio_channels_mapped = 0;
+
+        av_dict_free(&ost->sws_dict);
+
+        avcodec_free_context(&ost->enc_ctx);
+        avcodec_parameters_free(&ost->ref_par);
+
+        if (ost->muxing_queue) {
+            while (av_fifo_size(ost->muxing_queue)) {
+                AVPacket pkt;
+                av_fifo_generic_read(ost->muxing_queue, &pkt, sizeof(pkt), NULL);
+                av_packet_unref(&pkt);
+            }
+            av_fifo_freep(&ost->muxing_queue);
+        }
+
+        av_freep(&output_streams[i]);
+    }
+#if HAVE_THREADS
+    free_input_threads();
+#endif
+    for (i = 0; i < nb_input_files; i++) {
+        avformat_close_input(&input_files[i]->ctx);
+        av_freep(&input_files[i]);
+    }
+    for (i = 0; i < nb_input_streams; i++) {
+        InputStream *ist = input_streams[i];
+
+        av_frame_free(&ist->decoded_frame);
+        av_frame_free(&ist->filter_frame);
+        av_dict_free(&ist->decoder_opts);
+        avsubtitle_free(&ist->prev_sub.subtitle);
+        av_frame_free(&ist->sub2video.frame);
+        av_freep(&ist->filters);
+        av_freep(&ist->hwaccel_device);
+        av_freep(&ist->dts_buffer);
+
+        avcodec_free_context(&ist->dec_ctx);
+
+        av_freep(&input_streams[i]);
+    }
+
+    if (vstats_file) {
+        if (fclose(vstats_file))
+            av_log(NULL, AV_LOG_ERROR,
+                   "Error closing vstats file, loss of information possible: %s\n",
+                   av_err2str(AVERROR(errno)));
+    }
+    av_freep(&vstats_filename);
+
+    av_freep(&input_streams);
+    av_freep(&input_files);
+    av_freep(&output_streams);
+    av_freep(&output_files);
+
+    uninit_opts();
+
+    avformat_network_deinit();
+
+    /* --vgtmpeg start */
+    if( nli && nli->cancel_transcode ) {
+        av_log(NULL, AV_LOG_INFO, "transcode was cancelled.\n");
+    }
+
+    if( nli && nli->exit ) {
+        av_log(NULL, AV_LOG_INFO, "Received exit signal from input: terminating.\n");
+    }
+    /* --vgtmpeg end */
+
+    if (received_sigterm) {
+        av_log(NULL, AV_LOG_INFO, "Exiting normally, received signal %d.\n",
+               (int) received_sigterm);
+    } else if (ret && atomic_load(&transcode_init_done)) {
+        av_log(NULL, AV_LOG_INFO, "Conversion failed!\n");
+    }
+    term_exit();
+    ffmpeg_exited = 1;
+}
+
+void remove_avoptions(AVDictionary **a, AVDictionary *b)
+{
+    AVDictionaryEntry *t = NULL;
+
+    while ((t = av_dict_get(b, "", t, AV_DICT_IGNORE_SUFFIX))) {
+        av_dict_set(a, t->key, NULL, AV_DICT_MATCH_CASE);
+    }
+}
+
+void assert_avoptions(AVDictionary *m)
+{
+    AVDictionaryEntry *t;
+    if ((t = av_dict_get(m, "", NULL, AV_DICT_IGNORE_SUFFIX))) {
+        av_log(NULL, AV_LOG_FATAL, "Option %s not found.\n", t->key);
+        exit_program(1);
+    }
+}
+
+static void abort_codec_experimental(AVCodec *c, int encoder)
+{
+    exit_program(1);
+}
+
+static void update_benchmark(const char *fmt, ...)
+{
+    if (do_benchmark_all) {
+        int64_t t = getutime();
+        va_list va;
+        char buf[1024];
+
+        if (fmt) {
+            va_start(va, fmt);
+            vsnprintf(buf, sizeof(buf), fmt, va);
+            va_end(va);
+            av_log(NULL, AV_LOG_INFO, "bench: %8"PRIu64" %s \n", t - current_time, buf);
+        }
+        current_time = t;
+    }
+}
+
+static void close_all_output_streams(OutputStream *ost, OSTFinished this_stream, OSTFinished others)
+{
+    int i;
+    for (i = 0; i < nb_output_streams; i++) {
+        OutputStream *ost2 = output_streams[i];
+        ost2->finished |= ost == ost2 ? this_stream : others;
+    }
+}
+
+static void write_packet(OutputFile *of, AVPacket *pkt, OutputStream *ost, int unqueue)
+{
+    AVFormatContext *s = of->ctx;
+    AVStream *st = ost->st;
+    int ret;
+
+    /*
+     * Audio encoders may split the packets --  #frames in != #packets out.
+     * But there is no reordering, so we can limit the number of output packets
+     * by simply dropping them here.
+     * Counting encoded video frames needs to be done separately because of
+     * reordering, see do_video_out().
+     * Do not count the packet when unqueued because it has been counted when queued.
+     */
+    if (!(st->codecpar->codec_type == AVMEDIA_TYPE_VIDEO && ost->encoding_needed) && !unqueue) {
+        if (ost->frame_number >= ost->max_frames) {
+            av_packet_unref(pkt);
+            return;
+        }
+        ost->frame_number++;
+    }
+
+    if (!of->header_written) {
+        AVPacket tmp_pkt = {0};
+        /* the muxer is not initialized yet, buffer the packet */
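+        /* grow the muxing FIFO by doubling, but never beyond max_muxing_queue_size;
+           running out of room is treated as a fatal error */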
+        if (!av_fifo_space(ost->muxing_queue)) {
+            int new_size = FFMIN(2 * av_fifo_size(ost->muxing_queue),
+                                 ost->max_muxing_queue_size);
+            if (new_size <= av_fifo_size(ost->muxing_queue)) {
+                av_log(NULL, AV_LOG_ERROR,
+                       "Too many packets buffered for output stream %d:%d.\n",
+                       ost->file_index, ost->st->index);
+                exit_program(1);
+            }
+            ret = av_fifo_realloc2(ost->muxing_queue, new_size);
+            if (ret < 0)
+                exit_program(1);
+        }
+        ret = av_packet_ref(&tmp_pkt, pkt);
+        if (ret < 0)
+            exit_program(1);
+        av_fifo_generic_write(ost->muxing_queue, &tmp_pkt, sizeof(tmp_pkt), NULL);
+        av_packet_unref(pkt);
+        return;
+    }
+
+    if ((st->codecpar->codec_type == AVMEDIA_TYPE_VIDEO && video_sync_method == VSYNC_DROP) ||
+        (st->codecpar->codec_type == AVMEDIA_TYPE_AUDIO && audio_sync_method < 0))
+        pkt->pts = pkt->dts = AV_NOPTS_VALUE;
+
+    if (st->codecpar->codec_type == AVMEDIA_TYPE_VIDEO) {
+        int i;
+        uint8_t *sd = av_packet_get_side_data(pkt, AV_PKT_DATA_QUALITY_STATS,
+                                              NULL);
+        ost->quality = sd ? AV_RL32(sd) : -1;
+        ost->pict_type = sd ? sd[4] : AV_PICTURE_TYPE_NONE;
+
+        for (i = 0; i<FF_ARRAY_ELEMS(ost->error); i++) {
+            if (sd && i < sd[5])
+                ost->error[i] = AV_RL64(sd + 8 + 8*i);
+            else
+                ost->error[i] = -1;
+        }
+
+        if (ost->frame_rate.num && ost->is_cfr) {
+            if (pkt->duration > 0)
+                av_log(NULL, AV_LOG_WARNING, "Overriding packet duration by frame rate, this should not happen\n");
+            pkt->duration = av_rescale_q(1, av_inv_q(ost->frame_rate),
+                                         ost->mux_timebase);
+        }
+    }
+
+    av_packet_rescale_ts(pkt, ost->mux_timebase, ost->st->time_base);
+
+    if (!(s->oformat->flags & AVFMT_NOTIMESTAMPS)) {
+        if (pkt->dts != AV_NOPTS_VALUE &&
+            pkt->pts != AV_NOPTS_VALUE &&
+            pkt->dts > pkt->pts) {
+            av_log(s, AV_LOG_WARNING, "Invalid DTS: %"PRId64" PTS: %"PRId64" in output stream %d:%d, replacing by guess\n",
+                   pkt->dts, pkt->pts,
+                   ost->file_index, ost->st->index);
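+            /* summing the three candidates and subtracting their min and max leaves
+               the median, which is used as the replacement timestamp */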
+            pkt->pts =
+            pkt->dts = pkt->pts + pkt->dts + ost->last_mux_dts + 1
+                     - FFMIN3(pkt->pts, pkt->dts, ost->last_mux_dts + 1)
+                     - FFMAX3(pkt->pts, pkt->dts, ost->last_mux_dts + 1);
+        }
+        if ((st->codecpar->codec_type == AVMEDIA_TYPE_AUDIO || st->codecpar->codec_type == AVMEDIA_TYPE_VIDEO) &&
+            pkt->dts != AV_NOPTS_VALUE &&
+            !(st->codecpar->codec_id == AV_CODEC_ID_VP9 && ost->stream_copy) &&
+            ost->last_mux_dts != AV_NOPTS_VALUE) {
+            int64_t max = ost->last_mux_dts + !(s->oformat->flags & AVFMT_TS_NONSTRICT);
+            if (pkt->dts < max) {
+                int loglevel = max - pkt->dts > 2 || st->codecpar->codec_type == AVMEDIA_TYPE_VIDEO ? AV_LOG_WARNING : AV_LOG_DEBUG;
+                av_log(s, loglevel, "Non-monotonous DTS in output stream "
+                       "%d:%d; previous: %"PRId64", current: %"PRId64"; ",
+                       ost->file_index, ost->st->index, ost->last_mux_dts, pkt->dts);
+                if (exit_on_error) {
+                    av_log(NULL, AV_LOG_FATAL, "aborting.\n");
+                    exit_program(1);
+                }
+                av_log(s, loglevel, "changing to %"PRId64". This may result "
+                       "in incorrect timestamps in the output file.\n",
+                       max);
+                if (pkt->pts >= pkt->dts)
+                    pkt->pts = FFMAX(pkt->pts, max);
+                pkt->dts = max;
+            }
+        }
+    }
+    ost->last_mux_dts = pkt->dts;
+
+    ost->data_size += pkt->size;
+    ost->packets_written++;
+
+    pkt->stream_index = ost->index;
+
+    if (debug_ts) {
+        av_log(NULL, AV_LOG_INFO, "muxer <- type:%s "
+                "pkt_pts:%s pkt_pts_time:%s pkt_dts:%s pkt_dts_time:%s size:%d\n",
+                av_get_media_type_string(ost->enc_ctx->codec_type),
+                av_ts2str(pkt->pts), av_ts2timestr(pkt->pts, &ost->st->time_base),
+                av_ts2str(pkt->dts), av_ts2timestr(pkt->dts, &ost->st->time_base),
+                pkt->size
+              );
+    }
+
+    ret = av_interleaved_write_frame(s, pkt);
+    if (ret < 0) {
+        print_error("av_interleaved_write_frame()", ret);
+        main_return_code = 1;
+        close_all_output_streams(ost, MUXER_FINISHED | ENCODER_FINISHED, ENCODER_FINISHED);
+    }
+    av_packet_unref(pkt);
+}
+
+static void close_output_stream(OutputStream *ost)
+{
+    OutputFile *of = output_files[ost->file_index];
+
+    ost->finished |= ENCODER_FINISHED;
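+    /* with -shortest, a stream that finishes caps the recording time of the whole
+       output file at its own end timestamp */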
+    if (of->shortest) {
+        int64_t end = av_rescale_q(ost->sync_opts - ost->first_pts, ost->enc_ctx->time_base, AV_TIME_BASE_Q);
+        of->recording_time = FFMIN(of->recording_time, end);
+    }
+}
+
+/*
+ * Send a single packet to the output, applying any bitstream filters
+ * associated with the output stream.  This may result in any number
+ * of packets actually being written, depending on what bitstream
+ * filters are applied.  The supplied packet is consumed and will be
+ * blank (as if newly-allocated) when this function returns.
+ *
+ * If eof is set, instead indicate EOF to all bitstream filters and
+ * therefore flush any delayed packets to the output.  A blank packet
+ * must be supplied in this case.
+ */
+static void output_packet(OutputFile *of, AVPacket *pkt,
+                          OutputStream *ost, int eof)
+{
+    int ret = 0;
+
+    /* apply the output bitstream filters, if any */
+    if (ost->nb_bitstream_filters) {
+        int idx;
+
+        ret = av_bsf_send_packet(ost->bsf_ctx[0], eof ? NULL : pkt);
+        if (ret < 0)
+            goto finish;
+
+        eof = 0;
+        idx = 1;
+        while (idx) {
+            /* get a packet from the previous filter up the chain */
+            ret = av_bsf_receive_packet(ost->bsf_ctx[idx - 1], pkt);
+            if (ret == AVERROR(EAGAIN)) {
+                ret = 0;
+                idx--;
+                continue;
+            } else if (ret == AVERROR_EOF) {
+                eof = 1;
+            } else if (ret < 0)
+                goto finish;
+
+            /* send it to the next filter down the chain or to the muxer */
+            if (idx < ost->nb_bitstream_filters) {
+                ret = av_bsf_send_packet(ost->bsf_ctx[idx], eof ? NULL : pkt);
+                if (ret < 0)
+                    goto finish;
+                idx++;
+                eof = 0;
+            } else if (eof)
+                goto finish;
+            else
+                write_packet(of, pkt, ost, 0);
+        }
+    } else if (!eof)
+        write_packet(of, pkt, ost, 0);
+
+finish:
+    if (ret < 0 && ret != AVERROR_EOF) {
+        av_log(NULL, AV_LOG_ERROR, "Error applying bitstream filters to an output "
+               "packet for stream #%d:%d.\n", ost->file_index, ost->index);
+        if(exit_on_error)
+            exit_program(1);
+    }
+}
+
+static int check_recording_time(OutputStream *ost)
+{
+    OutputFile *of = output_files[ost->file_index];
+
+    if (of->recording_time != INT64_MAX &&
+        av_compare_ts(ost->sync_opts - ost->first_pts, ost->enc_ctx->time_base, of->recording_time,
+                      AV_TIME_BASE_Q) >= 0) {
+        close_output_stream(ost);
+        return 0;
+    }
+    return 1;
+}
+
+static void do_audio_out(OutputFile *of, OutputStream *ost,
+                         AVFrame *frame)
+{
+    AVCodecContext *enc = ost->enc_ctx;
+    AVPacket pkt;
+    int ret;
+
+    av_init_packet(&pkt);
+    pkt.data = NULL;
+    pkt.size = 0;
+
+    if (!check_recording_time(ost))
+        return;
+
+    if (frame->pts == AV_NOPTS_VALUE || audio_sync_method < 0)
+        frame->pts = ost->sync_opts;
+    ost->sync_opts = frame->pts + frame->nb_samples;
+    ost->samples_encoded += frame->nb_samples;
+    ost->frames_encoded++;
+
+    av_assert0(pkt.size || !pkt.data);
+    update_benchmark(NULL);
+    if (debug_ts) {
+        av_log(NULL, AV_LOG_INFO, "encoder <- type:audio "
+               "frame_pts:%s frame_pts_time:%s time_base:%d/%d\n",
+               av_ts2str(frame->pts), av_ts2timestr(frame->pts, &enc->time_base),
+               enc->time_base.num, enc->time_base.den);
+    }
+
+    ret = avcodec_send_frame(enc, frame);
+    if (ret < 0)
+        goto error;
+
+    while (1) {
+        ret = avcodec_receive_packet(enc, &pkt);
+        if (ret == AVERROR(EAGAIN))
+            break;
+        if (ret < 0)
+            goto error;
+
+        update_benchmark("encode_audio %d.%d", ost->file_index, ost->index);
+
+        av_packet_rescale_ts(&pkt, enc->time_base, ost->mux_timebase);
+
+        if (debug_ts) {
+            av_log(NULL, AV_LOG_INFO, "encoder -> type:audio "
+                   "pkt_pts:%s pkt_pts_time:%s pkt_dts:%s pkt_dts_time:%s\n",
+                   av_ts2str(pkt.pts), av_ts2timestr(pkt.pts, &enc->time_base),
+                   av_ts2str(pkt.dts), av_ts2timestr(pkt.dts, &enc->time_base));
+        }
+
+        output_packet(of, &pkt, ost, 0);
+    }
+
+    return;
+error:
+    av_log(NULL, AV_LOG_FATAL, "Audio encoding failed\n");
+    exit_program(1);
+}
+
+static void do_subtitle_out(OutputFile *of,
+                            OutputStream *ost,
+                            AVSubtitle *sub)
+{
+    int subtitle_out_max_size = 1024 * 1024;
+    int subtitle_out_size, nb, i;
+    AVCodecContext *enc;
+    AVPacket pkt;
+    int64_t pts;
+
+    if (sub->pts == AV_NOPTS_VALUE) {
+        av_log(NULL, AV_LOG_ERROR, "Subtitle packets must have a pts\n");
+        if (exit_on_error)
+            exit_program(1);
+        return;
+    }
+
+    enc = ost->enc_ctx;
+
+    if (!subtitle_out) {
+        subtitle_out = av_malloc(subtitle_out_max_size);
+        if (!subtitle_out) {
+            av_log(NULL, AV_LOG_FATAL, "Failed to allocate subtitle_out\n");
+            exit_program(1);
+        }
+    }
+
+    /* Note: DVB subtitles need one packet to draw them and another
+       packet to clear them */
+    /* XXX: signal it in the codec context ? */
+    if (enc->codec_id == AV_CODEC_ID_DVB_SUBTITLE)
+        nb = 2;
+    else
+        nb = 1;
+
+    /* shift timestamp to honor -ss and make check_recording_time() work with -t */
+    pts = sub->pts;
+    if (output_files[ost->file_index]->start_time != AV_NOPTS_VALUE)
+        pts -= output_files[ost->file_index]->start_time;
+    for (i = 0; i < nb; i++) {
+        unsigned save_num_rects = sub->num_rects;
+
+        ost->sync_opts = av_rescale_q(pts, AV_TIME_BASE_Q, enc->time_base);
+        if (!check_recording_time(ost))
+            return;
+
+        sub->pts = pts;
+        // start_display_time is required to be 0
+        sub->pts               += av_rescale_q(sub->start_display_time, (AVRational){ 1, 1000 }, AV_TIME_BASE_Q);
+        sub->end_display_time  -= sub->start_display_time;
+        sub->start_display_time = 0;
+        if (i == 1)
+            sub->num_rects = 0;
+
+        ost->frames_encoded++;
+
+        subtitle_out_size = avcodec_encode_subtitle(enc, subtitle_out,
+                                                    subtitle_out_max_size, sub);
+        if (i == 1)
+            sub->num_rects = save_num_rects;
+        if (subtitle_out_size < 0) {
+            av_log(NULL, AV_LOG_FATAL, "Subtitle encoding failed\n");
+            exit_program(1);
+        }
+
+        av_init_packet(&pkt);
+        pkt.data = subtitle_out;
+        pkt.size = subtitle_out_size;
+        pkt.pts  = av_rescale_q(sub->pts, AV_TIME_BASE_Q, ost->mux_timebase);
+        pkt.duration = av_rescale_q(sub->end_display_time, (AVRational){ 1, 1000 }, ost->mux_timebase);
+        if (enc->codec_id == AV_CODEC_ID_DVB_SUBTITLE) {
+            /* XXX: the pts correction is handled here. Maybe handling
+               it in the codec would be better */
+            if (i == 0)
+                pkt.pts += av_rescale_q(sub->start_display_time, (AVRational){ 1, 1000 }, ost->mux_timebase);
+            else
+                pkt.pts += av_rescale_q(sub->end_display_time, (AVRational){ 1, 1000 }, ost->mux_timebase);
+        }
+        pkt.dts = pkt.pts;
+        output_packet(of, &pkt, ost, 0);
+    }
+}
+
+/*-- vgtmpeg */
+/* This code outputs thumbnail images through the pipe output in XML, with the
+ * binary payload (an RGB24-encoded image) encoded in base64.
+ */
+#include "libavutil/base64.h"
+#include "libswscale/swscale.h"
+static struct SwsContext *picmsgSws = NULL;
+static uint8_t *picmsgdata;
+static uint8_t *picmsgsrc[4];
+static int picmsgstride[4];
+static char *b64out;
+static int picmsg_srcw;
+static int picmsg_srch;
+static int picmsg_srcf;
+static int64_t picmsg_lasttime=-1;
+
+
+
+
+static void output_nlpicmsg(AVFrame *pic) {
+	int ow = 240;
+	int oh = 180;
+	int osize = 3*ow*oh;
+	int dst_format = AV_PIX_FMT_RGB24;
+	int64_t picmsg_delay = 5*1000000; // 5 seconds.
+    int64_t curtime;
+
+    if(!output_xml)
+    	return;
+
+    /* output an image at most once every picmsg_delay (5 s) */
+    curtime = av_gettime();
+
+    if (picmsg_lasttime == -1) {
+    	picmsg_lasttime = curtime;
+    } else if ((curtime - picmsg_lasttime) < picmsg_delay )
+        return;
+
+    picmsg_lasttime = curtime;
+
+
+	if(!picmsgSws) {
+		picmsgSws = sws_getContext( pic->width, pic->height, pic->format,
+				ow, oh, dst_format,
+				SWS_BILINEAR, NULL, NULL, NULL);
+		if(picmsgSws) {
+			picmsg_srcw = pic->width;
+			picmsg_srch = pic->height;
+			picmsg_srcf = pic->format;
+			picmsgdata = av_malloc(osize);
+			picmsgsrc[0] =  picmsgdata;
+			picmsgstride[0] = 3*ow;
+			b64out = av_malloc(2*osize);
+		}
+	}
+
+	if( picmsgSws && picmsg_srcw==pic->width && picmsg_srch==pic->height && picmsg_srcf==pic->format) {
+		sws_scale(picmsgSws, (const uint8_t * const *)pic->data, pic->linesize, 0, pic->height, picmsgsrc, picmsgstride);
+
+#if 0
+		FILE *o = fopen("out.bin","wb");
+		fwrite(picmsgdata, osize,1,o);
+		fclose(o);
+#endif
+		/* output pic message in BASE64 */
+		av_base64_encode(b64out, 2*osize, picmsgsrc[0], osize );
+		FFMSG_PICTURE_START(ow, oh, dst_format);
+		FFMSG_PICTURE_DATA(b64out);
+		FFMSG_LOG("\n");
+		FFMSG_PICTURE_STOP();
+	}
+
+}
+/*-- vgtmpeg */
+
+static void do_video_out(OutputFile *of,
+                         OutputStream *ost,
+                         AVFrame *next_picture,
+                         double sync_ipts)
+{
+    int ret, format_video_sync;
+    AVPacket pkt;
+    AVCodecContext *enc = ost->enc_ctx;
+    AVCodecParameters *mux_par = ost->st->codecpar;
+    AVRational frame_rate;
+    int nb_frames, nb0_frames, i;
+    double delta, delta0;
+    double duration = 0;
+    int frame_size = 0;
+    InputStream *ist = NULL;
+    AVFilterContext *filter = ost->filter->filter;
+
+    if (ost->source_index >= 0)
+        ist = input_streams[ost->source_index];
+
+    frame_rate = av_buffersink_get_frame_rate(filter);
+    if (frame_rate.num > 0 && frame_rate.den > 0)
+        duration = 1/(av_q2d(frame_rate) * av_q2d(enc->time_base));
+
+    if(ist && ist->st->start_time != AV_NOPTS_VALUE && ist->st->first_dts != AV_NOPTS_VALUE && ost->frame_rate.num)
+        duration = FFMIN(duration, 1/(av_q2d(ost->frame_rate) * av_q2d(enc->time_base)));
+
+    if (!ost->filters_script &&
+        !ost->filters &&
+        next_picture &&
+        ist &&
+        lrintf(next_picture->pkt_duration * av_q2d(ist->st->time_base) / av_q2d(enc->time_base)) > 0) {
+        duration = lrintf(next_picture->pkt_duration * av_q2d(ist->st->time_base) / av_q2d(enc->time_base));
+    }
+
+    if (!next_picture) {
+        //end, flushing
+        nb0_frames = nb_frames = mid_pred(ost->last_nb0_frames[0],
+                                          ost->last_nb0_frames[1],
+                                          ost->last_nb0_frames[2]);
+    } else {
+        delta0 = sync_ipts - ost->sync_opts; // delta0 is the "drift" between the input frame (next_picture) and where it would fall in the output.
+        delta  = delta0 + duration;
+
+        /* by default, we output a single frame */
+        nb0_frames = 0; // tracks the number of times the PREVIOUS frame should be duplicated, mostly for variable framerate (VFR)
+        nb_frames = 1;
+
+        format_video_sync = video_sync_method;
+        if (format_video_sync == VSYNC_AUTO) {
+            if(!strcmp(of->ctx->oformat->name, "avi")) {
+                format_video_sync = VSYNC_VFR;
+            } else
+                format_video_sync = (of->ctx->oformat->flags & AVFMT_VARIABLE_FPS) ? ((of->ctx->oformat->flags & AVFMT_NOTIMESTAMPS) ? VSYNC_PASSTHROUGH : VSYNC_VFR) : VSYNC_CFR;
+            if (   ist
+                && format_video_sync == VSYNC_CFR
+                && input_files[ist->file_index]->ctx->nb_streams == 1
+                && input_files[ist->file_index]->input_ts_offset == 0) {
+                format_video_sync = VSYNC_VSCFR;
+            }
+            if (format_video_sync == VSYNC_CFR && copy_ts) {
+                format_video_sync = VSYNC_VSCFR;
+            }
+        }
+        ost->is_cfr = (format_video_sync == VSYNC_CFR || format_video_sync == VSYNC_VSCFR);
+
+        if (delta0 < 0 &&
+            delta > 0 &&
+            format_video_sync != VSYNC_PASSTHROUGH &&
+            format_video_sync != VSYNC_DROP) {
+            if (delta0 < -0.6) {
+                av_log(NULL, AV_LOG_WARNING, "Past duration %f too large\n", -delta0);
+            } else
+                av_log(NULL, AV_LOG_DEBUG, "Clipping frame in rate conversion by %f\n", -delta0);
+            sync_ipts = ost->sync_opts;
+            duration += delta0;
+            delta0 = 0;
+        }
+
+        switch (format_video_sync) {
+        case VSYNC_VSCFR:
+            if (ost->frame_number == 0 && delta0 >= 0.5) {
+                av_log(NULL, AV_LOG_DEBUG, "Not duplicating %d initial frames\n", (int)lrintf(delta0));
+                delta = duration;
+                delta0 = 0;
+                ost->sync_opts = lrint(sync_ipts);
+            }
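+            /* fall through: after the initial-frame adjustment, VSCFR is handled like CFR */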
+        case VSYNC_CFR:
+            // FIXME set to 0.5 after we fix some dts/pts bugs like in avidec.c
+            if (frame_drop_threshold && delta < frame_drop_threshold && ost->frame_number) {
+                nb_frames = 0;
+            } else if (delta < -1.1)
+                nb_frames = 0;
+            else if (delta > 1.1) {
+                nb_frames = lrintf(delta);
+                if (delta0 > 1.1)
+                    nb0_frames = lrintf(delta0 - 0.6);
+            }
+            break;
+        case VSYNC_VFR:
+            if (delta <= -0.6)
+                nb_frames = 0;
+            else if (delta > 0.6)
+                ost->sync_opts = lrint(sync_ipts);
+            break;
+        case VSYNC_DROP:
+        case VSYNC_PASSTHROUGH:
+            ost->sync_opts = lrint(sync_ipts);
+            break;
+        default:
+            av_assert0(0);
+        }
+    }
+
+    nb_frames = FFMIN(nb_frames, ost->max_frames - ost->frame_number);
+    nb0_frames = FFMIN(nb0_frames, nb_frames);
+
+    memmove(ost->last_nb0_frames + 1,
+            ost->last_nb0_frames,
+            sizeof(ost->last_nb0_frames[0]) * (FF_ARRAY_ELEMS(ost->last_nb0_frames) - 1));
+    ost->last_nb0_frames[0] = nb0_frames;
+
+    if (nb0_frames == 0 && ost->last_dropped) {
+        nb_frames_drop++;
+        av_log(NULL, AV_LOG_VERBOSE,
+               "*** dropping frame %d from stream %d at ts %"PRId64"\n",
+               ost->frame_number, ost->st->index, ost->last_frame->pts);
+    }
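+    /* frames beyond the single regular output frame (and beyond a re-emission of the
+       previously dropped frame) are counted as duplicates */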
+    if (nb_frames > (nb0_frames && ost->last_dropped) + (nb_frames > nb0_frames)) {
+        if (nb_frames > dts_error_threshold * 30) {
+            av_log(NULL, AV_LOG_ERROR, "%d frame duplication too large, skipping\n", nb_frames - 1);
+            nb_frames_drop++;
+            return;
+        }
+        nb_frames_dup += nb_frames - (nb0_frames && ost->last_dropped) - (nb_frames > nb0_frames);
+        av_log(NULL, AV_LOG_VERBOSE, "*** %d dup!\n", nb_frames - 1);
+        if (nb_frames_dup > dup_warning) {
+            av_log(NULL, AV_LOG_WARNING, "More than %d frames duplicated\n", dup_warning);
+            dup_warning *= 10;
+        }
+    }
+    ost->last_dropped = nb_frames == nb0_frames && next_picture;
+
+  /* duplicate the frame if needed */
+  for (i = 0; i < nb_frames; i++) {
+    AVFrame *in_picture;
+    av_init_packet(&pkt);
+    pkt.data = NULL;
+    pkt.size = 0;
+
+    if (i < nb0_frames && ost->last_frame) {
+        in_picture = ost->last_frame;
+    } else
+        in_picture = next_picture;
+
+    if (!in_picture)
+        return;
+
+    in_picture->pts = ost->sync_opts;
+
+#if 1
+    if (!check_recording_time(ost))
+#else
+    if (ost->frame_number >= ost->max_frames)
+#endif
+        return;
+
+    {
+        int forced_keyframe = 0;
+        double pts_time;
+
+        if (enc->flags & (AV_CODEC_FLAG_INTERLACED_DCT | AV_CODEC_FLAG_INTERLACED_ME) &&
+            ost->top_field_first >= 0)
+            in_picture->top_field_first = !!ost->top_field_first;
+
+        if (in_picture->interlaced_frame) {
+            if (enc->codec->id == AV_CODEC_ID_MJPEG)
+                mux_par->field_order = in_picture->top_field_first ? AV_FIELD_TT:AV_FIELD_BB;
+            else
+                mux_par->field_order = in_picture->top_field_first ? AV_FIELD_TB:AV_FIELD_BT;
+        } else
+            mux_par->field_order = AV_FIELD_PROGRESSIVE;
+
+        in_picture->quality = enc->global_quality;
+        in_picture->pict_type = 0;
+
+        pts_time = in_picture->pts != AV_NOPTS_VALUE ?
+            in_picture->pts * av_q2d(enc->time_base) : NAN;
+        if (ost->forced_kf_index < ost->forced_kf_count &&
+            in_picture->pts >= ost->forced_kf_pts[ost->forced_kf_index]) {
+            ost->forced_kf_index++;
+            forced_keyframe = 1;
+        } else if (ost->forced_keyframes_pexpr) {
+            double res;
+            ost->forced_keyframes_expr_const_values[FKF_T] = pts_time;
+            res = av_expr_eval(ost->forced_keyframes_pexpr,
+                               ost->forced_keyframes_expr_const_values, NULL);
+            ff_dlog(NULL, "force_key_frame: n:%f n_forced:%f prev_forced_n:%f t:%f prev_forced_t:%f -> res:%f\n",
+                    ost->forced_keyframes_expr_const_values[FKF_N],
+                    ost->forced_keyframes_expr_const_values[FKF_N_FORCED],
+                    ost->forced_keyframes_expr_const_values[FKF_PREV_FORCED_N],
+                    ost->forced_keyframes_expr_const_values[FKF_T],
+                    ost->forced_keyframes_expr_const_values[FKF_PREV_FORCED_T],
+                    res);
+            if (res) {
+                forced_keyframe = 1;
+                ost->forced_keyframes_expr_const_values[FKF_PREV_FORCED_N] =
+                    ost->forced_keyframes_expr_const_values[FKF_N];
+                ost->forced_keyframes_expr_const_values[FKF_PREV_FORCED_T] =
+                    ost->forced_keyframes_expr_const_values[FKF_T];
+                ost->forced_keyframes_expr_const_values[FKF_N_FORCED] += 1;
+            }
+
+            ost->forced_keyframes_expr_const_values[FKF_N] += 1;
+        } else if (   ost->forced_keyframes
+                   && !strncmp(ost->forced_keyframes, "source", 6)
+                   && in_picture->key_frame==1) {
+            forced_keyframe = 1;
+        }
+
+        if (forced_keyframe) {
+            in_picture->pict_type = AV_PICTURE_TYPE_I;
+            av_log(NULL, AV_LOG_DEBUG, "Forced keyframe at time %f\n", pts_time);
+        }
+
+        update_benchmark(NULL);
+        if (debug_ts) {
+            av_log(NULL, AV_LOG_INFO, "encoder <- type:video "
+                   "frame_pts:%s frame_pts_time:%s time_base:%d/%d\n",
+                   av_ts2str(in_picture->pts), av_ts2timestr(in_picture->pts, &enc->time_base),
+                   enc->time_base.num, enc->time_base.den);
+        }
+
+        ost->frames_encoded++;
+
+        ret = avcodec_send_frame(enc, in_picture);
+        if (ret < 0)
+            goto error;
+
+        while (1) {
+            ret = avcodec_receive_packet(enc, &pkt);
+            update_benchmark("encode_video %d.%d", ost->file_index, ost->index);
+            if (ret == AVERROR(EAGAIN))
+                break;
+            if (ret < 0)
+                goto error;
+
+            if (debug_ts) {
+                av_log(NULL, AV_LOG_INFO, "encoder -> type:video "
+                       "pkt_pts:%s pkt_pts_time:%s pkt_dts:%s pkt_dts_time:%s\n",
+                       av_ts2str(pkt.pts), av_ts2timestr(pkt.pts, &enc->time_base),
+                       av_ts2str(pkt.dts), av_ts2timestr(pkt.dts, &enc->time_base));
+            }
+
+            if (pkt.pts == AV_NOPTS_VALUE && !(enc->codec->capabilities & AV_CODEC_CAP_DELAY))
+                pkt.pts = ost->sync_opts;
+
+            av_packet_rescale_ts(&pkt, enc->time_base, ost->mux_timebase);
+
+            if (debug_ts) {
+                av_log(NULL, AV_LOG_INFO, "encoder -> type:video "
+                    "pkt_pts:%s pkt_pts_time:%s pkt_dts:%s pkt_dts_time:%s\n",
+                    av_ts2str(pkt.pts), av_ts2timestr(pkt.pts, &ost->mux_timebase),
+                    av_ts2str(pkt.dts), av_ts2timestr(pkt.dts, &ost->mux_timebase));
+            }
+
+            frame_size = pkt.size;
+            output_packet(of, &pkt, ost, 0);
+
+            /* if two pass, output log */
+            if (ost->logfile && enc->stats_out) {
+                fprintf(ost->logfile, "%s", enc->stats_out);
+            }
+        }
+
+        /* --vgtmpeg */
+        output_nlpicmsg(in_picture);
+        /* --vgtmpeg */
+    }
+    ost->sync_opts++;
+    /*
+     * For video, number of frames in == number of packets out.
+     * But there may be reordering, so we can't throw away frames on encoder
+     * flush, we need to limit them here, before they go into encoder.
+     */
+    ost->frame_number++;
+
+    if (vstats_filename && frame_size)
+        do_video_stats(ost, frame_size);
+  }
+
+    if (!ost->last_frame)
+        ost->last_frame = av_frame_alloc();
+    av_frame_unref(ost->last_frame);
+    if (next_picture && ost->last_frame)
+        av_frame_ref(ost->last_frame, next_picture);
+    else
+        av_frame_free(&ost->last_frame);
+
+    return;
+error:
+    av_log(NULL, AV_LOG_FATAL, "Video encoding failed\n");
+    exit_program(1);
+}
+
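+/* d is the mean squared error already normalized by the squared peak value (255^2),
+   so -10*log10(d) is the usual PSNR in dB */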
+static double psnr(double d)
+{
+    return -10.0 * log10(d);
+}
+
+static void do_video_stats(OutputStream *ost, int frame_size)
+{
+    AVCodecContext *enc;
+    int frame_number;
+    double ti1, bitrate, avg_bitrate;
+
+    /* this is executed just the first time do_video_stats is called */
+    if (!vstats_file) {
+        vstats_file = fopen(vstats_filename, "w");
+        if (!vstats_file) {
+            perror("fopen");
+            exit_program(1);
+        }
+    }
+
+    enc = ost->enc_ctx;
+    if (enc->codec_type == AVMEDIA_TYPE_VIDEO) {
+        frame_number = ost->st->nb_frames;
+        if (vstats_version <= 1) {
+            fprintf(vstats_file, "frame= %5d q= %2.1f ", frame_number,
+                    ost->quality / (float)FF_QP2LAMBDA);
+        } else  {
+            fprintf(vstats_file, "out= %2d st= %2d frame= %5d q= %2.1f ", ost->file_index, ost->index, frame_number,
+                    ost->quality / (float)FF_QP2LAMBDA);
+        }
+
+        if (ost->error[0]>=0 && (enc->flags & AV_CODEC_FLAG_PSNR))
+            fprintf(vstats_file, "PSNR= %6.2f ", psnr(ost->error[0] / (enc->width * enc->height * 255.0 * 255.0)));
+
+        fprintf(vstats_file,"f_size= %6d ", frame_size);
+        /* compute pts value */
+        ti1 = av_stream_get_end_pts(ost->st) * av_q2d(ost->st->time_base);
+        if (ti1 < 0.01)
+            ti1 = 0.01;
+
+        bitrate     = (frame_size * 8) / av_q2d(enc->time_base) / 1000.0;
+        avg_bitrate = (double)(ost->data_size * 8) / ti1 / 1000.0;
+        fprintf(vstats_file, "s_size= %8.0fkB time= %0.3f br= %7.1fkbits/s avg_br= %7.1fkbits/s ",
+               (double)ost->data_size / 1024, ti1, bitrate, avg_bitrate);
+        fprintf(vstats_file, "type= %c\n", av_get_picture_type_char(ost->pict_type));
+    }
+}
+
+static int init_output_stream(OutputStream *ost, char *error, int error_len);
+
+static void finish_output_stream(OutputStream *ost)
+{
+    OutputFile *of = output_files[ost->file_index];
+    int i;
+
+    ost->finished = ENCODER_FINISHED | MUXER_FINISHED;
+
+    if (of->shortest) {
+        for (i = 0; i < of->ctx->nb_streams; i++)
+            output_streams[of->ost_index + i]->finished = ENCODER_FINISHED | MUXER_FINISHED;
+    }
+}
+
+/**
+ * Get and encode new output from any of the filtergraphs, without causing
+ * activity (i.e. without requesting new frames from the filtergraph inputs).
+ *
+ * @return  0 for success, <0 for severe errors
+ */
+static int reap_filters(int flush)
+{
+    AVFrame *filtered_frame = NULL;
+    int i;
+
+    /* Reap all buffers present in the buffer sinks */
+    for (i = 0; i < nb_output_streams; i++) {
+        OutputStream *ost = output_streams[i];
+        OutputFile    *of = output_files[ost->file_index];
+        AVFilterContext *filter;
+        AVCodecContext *enc = ost->enc_ctx;
+        int ret = 0;
+
+        if (!ost->filter || !ost->filter->graph->graph)
+            continue;
+        filter = ost->filter->filter;
+
+        if (!ost->initialized) {
+            char error[1024] = "";
+            ret = init_output_stream(ost, error, sizeof(error));
+            if (ret < 0) {
+                av_log(NULL, AV_LOG_ERROR, "Error initializing output stream %d:%d -- %s\n",
+                       ost->file_index, ost->index, error);
+                exit_program(1);
+            }
+        }
+
+        if (!ost->filtered_frame && !(ost->filtered_frame = av_frame_alloc())) {
+            return AVERROR(ENOMEM);
+        }
+        filtered_frame = ost->filtered_frame;
+
+        while (1) {
+            double float_pts = AV_NOPTS_VALUE; // this is identical to filtered_frame.pts but with higher precision
+            ret = av_buffersink_get_frame_flags(filter, filtered_frame,
+                                               AV_BUFFERSINK_FLAG_NO_REQUEST);
+            if (ret < 0) {
+                if (ret != AVERROR(EAGAIN) && ret != AVERROR_EOF) {
+                    av_log(NULL, AV_LOG_WARNING,
+                           "Error in av_buffersink_get_frame_flags(): %s\n", av_err2str(ret));
+                } else if (flush && ret == AVERROR_EOF) {
+                    if (av_buffersink_get_type(filter) == AVMEDIA_TYPE_VIDEO)
+                        do_video_out(of, ost, NULL, AV_NOPTS_VALUE);
+                }
+                break;
+            }
+            if (ost->finished) {
+                av_frame_unref(filtered_frame);
+                continue;
+            }
+            if (filtered_frame->pts != AV_NOPTS_VALUE) {
+                int64_t start_time = (of->start_time == AV_NOPTS_VALUE) ? 0 : of->start_time;
+                AVRational filter_tb = av_buffersink_get_time_base(filter);
+                AVRational tb = enc->time_base;
+                int extra_bits = av_clip(29 - av_log2(tb.den), 0, 16);
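+                /* temporarily widen the encoder time base denominator (by up to 16 bits)
+                   so the rescale below keeps fractional precision; float_pts is divided
+                   back down afterwards */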
+
+                tb.den <<= extra_bits;
+                float_pts =
+                    av_rescale_q(filtered_frame->pts, filter_tb, tb) -
+                    av_rescale_q(start_time, AV_TIME_BASE_Q, tb);
+                float_pts /= 1 << extra_bits;
+                // avoid exact midpoints to reduce the chance of rounding differences; this can be removed in case the fps code is changed to work with integers
+                float_pts += FFSIGN(float_pts) * 1.0 / (1<<17);
+
+                filtered_frame->pts =
+                    av_rescale_q(filtered_frame->pts, filter_tb, enc->time_base) -
+                    av_rescale_q(start_time, AV_TIME_BASE_Q, enc->time_base);
+            }
+            //if (ost->source_index >= 0)
+            //    *filtered_frame= *input_streams[ost->source_index]->decoded_frame; //for me_threshold
+
+            switch (av_buffersink_get_type(filter)) {
+            case AVMEDIA_TYPE_VIDEO:
+                if (!ost->frame_aspect_ratio.num)
+                    enc->sample_aspect_ratio = filtered_frame->sample_aspect_ratio;
+
+                if (debug_ts) {
+                    av_log(NULL, AV_LOG_INFO, "filter -> pts:%s pts_time:%s exact:%f time_base:%d/%d\n",
+                            av_ts2str(filtered_frame->pts), av_ts2timestr(filtered_frame->pts, &enc->time_base),
+                            float_pts,
+                            enc->time_base.num, enc->time_base.den);
+                }
+
+                do_video_out(of, ost, filtered_frame, float_pts);
+                break;
+            case AVMEDIA_TYPE_AUDIO:
+                if (!(enc->codec->capabilities & AV_CODEC_CAP_PARAM_CHANGE) &&
+                    enc->channels != filtered_frame->channels) {
+                    av_log(NULL, AV_LOG_ERROR,
+                           "Audio filter graph output is not normalized and encoder does not support parameter changes\n");
+                    break;
+                }
+                do_audio_out(of, ost, filtered_frame);
+                break;
+            default:
+                // TODO support subtitle filters
+                av_assert0(0);
+            }
+
+            av_frame_unref(filtered_frame);
+        }
+    }
+
+    return 0;
+}
+
+static void print_final_stats(int64_t total_size)
+{
+    uint64_t video_size = 0, audio_size = 0, extra_size = 0, other_size = 0;
+    uint64_t subtitle_size = 0;
+    uint64_t data_size = 0;
+    float percent = -1.0;
+    int i, j;
+    int pass1_used = 1;
+
+    for (i = 0; i < nb_output_streams; i++) {
+        OutputStream *ost = output_streams[i];
+        switch (ost->enc_ctx->codec_type) {
+            case AVMEDIA_TYPE_VIDEO: video_size += ost->data_size; break;
+            case AVMEDIA_TYPE_AUDIO: audio_size += ost->data_size; break;
+            case AVMEDIA_TYPE_SUBTITLE: subtitle_size += ost->data_size; break;
+            default:                 other_size += ost->data_size; break;
+        }
+        extra_size += ost->enc_ctx->extradata_size;
+        data_size  += ost->data_size;
+        if (   (ost->enc_ctx->flags & (AV_CODEC_FLAG_PASS1 | AV_CODEC_FLAG_PASS2))
+            != AV_CODEC_FLAG_PASS1)
+            pass1_used = 0;
+    }
+
+    if (data_size && total_size>0 && total_size >= data_size)
+        percent = 100.0 * (total_size - data_size) / data_size;
+
+    av_log(NULL, AV_LOG_INFO, "video:%1.0fkB audio:%1.0fkB subtitle:%1.0fkB other streams:%1.0fkB global headers:%1.0fkB muxing overhead: ",
+           video_size / 1024.0,
+           audio_size / 1024.0,
+           subtitle_size / 1024.0,
+           other_size / 1024.0,
+           extra_size / 1024.0);
+    if (percent >= 0.0)
+        av_log(NULL, AV_LOG_INFO, "%f%%", percent);
+    else
+        av_log(NULL, AV_LOG_INFO, "unknown");
+    av_log(NULL, AV_LOG_INFO, "\n");
+
+    /* print verbose per-stream stats */
+    for (i = 0; i < nb_input_files; i++) {
+        InputFile *f = input_files[i];
+        uint64_t total_packets = 0, total_size = 0;
+
+        av_log(NULL, AV_LOG_VERBOSE, "Input file #%d (%s):\n",
+               i, f->ctx->url);
+
+        for (j = 0; j < f->nb_streams; j++) {
+            InputStream *ist = input_streams[f->ist_index + j];
+            enum AVMediaType type = ist->dec_ctx->codec_type;
+
+            total_size    += ist->data_size;
+            total_packets += ist->nb_packets;
+
+            av_log(NULL, AV_LOG_VERBOSE, "  Input stream #%d:%d (%s): ",
+                   i, j, media_type_string(type));
+            av_log(NULL, AV_LOG_VERBOSE, "%"PRIu64" packets read (%"PRIu64" bytes); ",
+                   ist->nb_packets, ist->data_size);
+
+            if (ist->decoding_needed) {
+                av_log(NULL, AV_LOG_VERBOSE, "%"PRIu64" frames decoded",
+                       ist->frames_decoded);
+                if (type == AVMEDIA_TYPE_AUDIO)
+                    av_log(NULL, AV_LOG_VERBOSE, " (%"PRIu64" samples)", ist->samples_decoded);
+                av_log(NULL, AV_LOG_VERBOSE, "; ");
+            }
+
+            av_log(NULL, AV_LOG_VERBOSE, "\n");
+        }
+
+        av_log(NULL, AV_LOG_VERBOSE, "  Total: %"PRIu64" packets (%"PRIu64" bytes) demuxed\n",
+               total_packets, total_size);
+    }
+
+    for (i = 0; i < nb_output_files; i++) {
+        OutputFile *of = output_files[i];
+        uint64_t total_packets = 0, total_size = 0;
+
+        av_log(NULL, AV_LOG_VERBOSE, "Output file #%d (%s):\n",
+               i, of->ctx->url);
+
+        for (j = 0; j < of->ctx->nb_streams; j++) {
+            OutputStream *ost = output_streams[of->ost_index + j];
+            enum AVMediaType type = ost->enc_ctx->codec_type;
+
+            total_size    += ost->data_size;
+            total_packets += ost->packets_written;
+
+            av_log(NULL, AV_LOG_VERBOSE, "  Output stream #%d:%d (%s): ",
+                   i, j, media_type_string(type));
+            if (ost->encoding_needed) {
+                av_log(NULL, AV_LOG_VERBOSE, "%"PRIu64" frames encoded",
+                       ost->frames_encoded);
+                if (type == AVMEDIA_TYPE_AUDIO)
+                    av_log(NULL, AV_LOG_VERBOSE, " (%"PRIu64" samples)", ost->samples_encoded);
+                av_log(NULL, AV_LOG_VERBOSE, "; ");
+            }
+
+            av_log(NULL, AV_LOG_VERBOSE, "%"PRIu64" packets muxed (%"PRIu64" bytes); ",
+                   ost->packets_written, ost->data_size);
+
+            av_log(NULL, AV_LOG_VERBOSE, "\n");
+        }
+
+        av_log(NULL, AV_LOG_VERBOSE, "  Total: %"PRIu64" packets (%"PRIu64" bytes) muxed\n",
+               total_packets, total_size);
+    }
+    if(video_size + data_size + audio_size + subtitle_size + extra_size == 0){
+        av_log(NULL, AV_LOG_WARNING, "Output file is empty, nothing was encoded ");
+        if (pass1_used) {
+            av_log(NULL, AV_LOG_WARNING, "\n");
+        } else {
+            av_log(NULL, AV_LOG_WARNING, "(check -ss / -t / -frames parameters if used)\n");
+        }
+    }
+}
+
+static void print_report(int is_last_report, int64_t timer_start, int64_t cur_time)
+{
+    AVBPrint buf, buf_script;
+    OutputStream *ost;
+    AVFormatContext *oc;
+    int64_t total_size;
+    AVCodecContext *enc;
+    int frame_number, vid, i;
+    double bitrate;
+    double speed;
+    int64_t pts = INT64_MIN + 1;
+    static int64_t last_time = -1;
+    static int qp_histogram[52];
+    int hours, mins, secs, us;
+    const char *hours_sign;
+    int ret;
+    float t;
+
+    /* --vgtmpeg start */
+    if( output_xml )
+        print_nlreport( output_files, output_streams, nb_output_streams, is_last_report, timer_start, nb_frames_dup, nb_frames_drop );
+    /* --vgtmpeg end */
+
+
+    if (!print_stats && !is_last_report && !progress_avio)
+        return;
+
+    if (!is_last_report) {
+        if (last_time == -1) {
+            last_time = cur_time;
+            return;
+        }
+        if ((cur_time - last_time) < 500000)
+            return;
+        last_time = cur_time;
+    }
+
+    t = (cur_time-timer_start) / 1000000.0;
+
+
+    oc = output_files[0]->ctx;
+
+    total_size = avio_size(oc->pb);
+    if (total_size <= 0) // FIXME improve avio_size() so it works with non seekable output too
+        total_size = avio_tell(oc->pb);
+
+    vid = 0;
+    av_bprint_init(&buf, 0, AV_BPRINT_SIZE_AUTOMATIC);
+    av_bprint_init(&buf_script, 0, 1);
+    for (i = 0; i < nb_output_streams; i++) {
+        float q = -1;
+        ost = output_streams[i];
+        enc = ost->enc_ctx;
+        if (!ost->stream_copy)
+            q = ost->quality / (float) FF_QP2LAMBDA;
+
+        if (vid && enc->codec_type == AVMEDIA_TYPE_VIDEO) {
+            av_bprintf(&buf, "q=%2.1f ", q);
+            av_bprintf(&buf_script, "stream_%d_%d_q=%.1f\n",
+                       ost->file_index, ost->index, q);
+        }
+        if (!vid && enc->codec_type == AVMEDIA_TYPE_VIDEO) {
+            float fps;
+
+            frame_number = ost->frame_number;
+            fps = t > 1 ? frame_number / t : 0;
+            av_bprintf(&buf, "frame=%5d fps=%3.*f q=%3.1f ",
+                     frame_number, fps < 9.95, fps, q);
+            av_bprintf(&buf_script, "frame=%d\n", frame_number);
+            av_bprintf(&buf_script, "fps=%.1f\n", fps);
+            av_bprintf(&buf_script, "stream_%d_%d_q=%.1f\n",
+                       ost->file_index, ost->index, q);
+            if (is_last_report)
+                av_bprintf(&buf, "L");
+            if (qp_hist) {
+                int j;
+                int qp = lrintf(q);
+                if (qp >= 0 && qp < FF_ARRAY_ELEMS(qp_histogram))
+                    qp_histogram[qp]++;
+                for (j = 0; j < 32; j++)
+                    av_bprintf(&buf, "%X", av_log2(qp_histogram[j] + 1));
+            }
+
+            if ((enc->flags & AV_CODEC_FLAG_PSNR) && (ost->pict_type != AV_PICTURE_TYPE_NONE || is_last_report)) {
+                int j;
+                double error, error_sum = 0;
+                double scale, scale_sum = 0;
+                double p;
+                char type[3] = { 'Y','U','V' };
+                av_bprintf(&buf, "PSNR=");
+                for (j = 0; j < 3; j++) {
+                    if (is_last_report) {
+                        error = enc->error[j];
+                        scale = enc->width * enc->height * 255.0 * 255.0 * frame_number;
+                    } else {
+                        error = ost->error[j];
+                        scale = enc->width * enc->height * 255.0 * 255.0;
+                    }
+                    if (j)
+                        scale /= 4;
+                    error_sum += error;
+                    scale_sum += scale;
+                    p = psnr(error / scale);
+                    av_bprintf(&buf, "%c:%2.2f ", type[j], p);
+                    av_bprintf(&buf_script, "stream_%d_%d_psnr_%c=%2.2f\n",
+                               ost->file_index, ost->index, type[j] | 32, p);
+                }
+                p = psnr(error_sum / scale_sum);
+                av_bprintf(&buf, "*:%2.2f ", psnr(error_sum / scale_sum));
+                av_bprintf(&buf_script, "stream_%d_%d_psnr_all=%2.2f\n",
+                           ost->file_index, ost->index, p);
+            }
+            vid = 1;
+        }
+        /* compute min output value */
+        if (av_stream_get_end_pts(ost->st) != AV_NOPTS_VALUE)
+            pts = FFMAX(pts, av_rescale_q(av_stream_get_end_pts(ost->st),
+                                          ost->st->time_base, AV_TIME_BASE_Q));
+        if (is_last_report)
+            nb_frames_drop += ost->last_dropped;
+    }
+
+    secs = FFABS(pts) / AV_TIME_BASE;
+    us = FFABS(pts) % AV_TIME_BASE;
+    mins = secs / 60;
+    secs %= 60;
+    hours = mins / 60;
+    mins %= 60;
+    hours_sign = (pts < 0) ? "-" : "";
+
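+    /* pts is in AV_TIME_BASE units (microseconds): bits / (us/1000) gives kbit/s, and
+       (pts / AV_TIME_BASE) / t is output seconds per wall-clock second ("speed") */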
+    bitrate = pts && total_size >= 0 ? total_size * 8 / (pts / 1000.0) : -1;
+    speed = t != 0.0 ? (double)pts / AV_TIME_BASE / t : -1;
+
+    if (total_size < 0) av_bprintf(&buf, "size=N/A time=");
+    else                av_bprintf(&buf, "size=%8.0fkB time=", total_size / 1024.0);
+    if (pts == AV_NOPTS_VALUE) {
+        av_bprintf(&buf, "N/A ");
+    } else {
+        av_bprintf(&buf, "%s%02d:%02d:%02d.%02d ",
+                   hours_sign, hours, mins, secs, (100 * us) / AV_TIME_BASE);
+    }
+
+    if (bitrate < 0) {
+        av_bprintf(&buf, "bitrate=N/A");
+        av_bprintf(&buf_script, "bitrate=N/A\n");
+    }else{
+        av_bprintf(&buf, "bitrate=%6.1fkbits/s", bitrate);
+        av_bprintf(&buf_script, "bitrate=%6.1fkbits/s\n", bitrate);
+    }
+
+    if (total_size < 0) av_bprintf(&buf_script, "total_size=N/A\n");
+    else                av_bprintf(&buf_script, "total_size=%"PRId64"\n", total_size);
+    if (pts == AV_NOPTS_VALUE) {
+        av_bprintf(&buf_script, "out_time_ms=N/A\n");
+        av_bprintf(&buf_script, "out_time=N/A\n");
+    } else {
+        av_bprintf(&buf_script, "out_time_ms=%"PRId64"\n", pts);
+        av_bprintf(&buf_script, "out_time=%s%02d:%02d:%02d.%06d\n",
+                   hours_sign, hours, mins, secs, us);
+    }
+
+    if (nb_frames_dup || nb_frames_drop)
+        av_bprintf(&buf, " dup=%d drop=%d", nb_frames_dup, nb_frames_drop);
+    av_bprintf(&buf_script, "dup_frames=%d\n", nb_frames_dup);
+    av_bprintf(&buf_script, "drop_frames=%d\n", nb_frames_drop);
+
+    if (speed < 0) {
+        av_bprintf(&buf, " speed=N/A");
+        av_bprintf(&buf_script, "speed=N/A\n");
+    } else {
+        av_bprintf(&buf, " speed=%4.3gx", speed);
+        av_bprintf(&buf_script, "speed=%4.3gx\n", speed);
+    }
+
+    if (print_stats || is_last_report) {
+        const char end = is_last_report ? '\n' : '\r';
+        if (print_stats==1 && AV_LOG_INFO > av_log_get_level()) {
+            fprintf(stderr, "%s    %c", buf.str, end);
+        } else
+            av_log(NULL, AV_LOG_INFO, "%s    %c", buf.str, end);
+
+        fflush(stderr);
+    }
+    av_bprint_finalize(&buf, NULL);
+
+    if (progress_avio) {
+        av_bprintf(&buf_script, "progress=%s\n",
+                   is_last_report ? "end" : "continue");
+        avio_write(progress_avio, buf_script.str,
+                   FFMIN(buf_script.len, buf_script.size - 1));
+        avio_flush(progress_avio);
+        av_bprint_finalize(&buf_script, NULL);
+        if (is_last_report) {
+            if ((ret = avio_closep(&progress_avio)) < 0)
+                av_log(NULL, AV_LOG_ERROR,
+                       "Error closing progress log, loss of information possible: %s\n", av_err2str(ret));
+        }
+    }
+
+    if (is_last_report)
+        print_final_stats(total_size);
+}
+
+static void ifilter_parameters_from_codecpar(InputFilter *ifilter, AVCodecParameters *par)
+{
+    // We never got any input. Set a fake format, which will
+    // come from libavformat.
+    ifilter->format                 = par->format;
+    ifilter->sample_rate            = par->sample_rate;
+    ifilter->channels               = par->channels;
+    ifilter->channel_layout         = par->channel_layout;
+    ifilter->width                  = par->width;
+    ifilter->height                 = par->height;
+    ifilter->sample_aspect_ratio    = par->sample_aspect_ratio;
+}
+
+static void flush_encoders(void)
+{
+    int i, ret;
+
+    for (i = 0; i < nb_output_streams; i++) {
+        OutputStream   *ost = output_streams[i];
+        AVCodecContext *enc = ost->enc_ctx;
+        OutputFile      *of = output_files[ost->file_index];
+
+        if (!ost->encoding_needed)
+            continue;
+
+        // Try to enable encoding with no input frames.
+        // Maybe we should just let encoding fail instead.
+        if (!ost->initialized) {
+            FilterGraph *fg = ost->filter->graph;
+            char error[1024] = "";
+
+            av_log(NULL, AV_LOG_WARNING,
+                   "Finishing stream %d:%d without any data written to it.\n",
+                   ost->file_index, ost->st->index);
+
+            if (ost->filter && !fg->graph) {
+                int x;
+                for (x = 0; x < fg->nb_inputs; x++) {
+                    InputFilter *ifilter = fg->inputs[x];
+                    if (ifilter->format < 0)
+                        ifilter_parameters_from_codecpar(ifilter, ifilter->ist->st->codecpar);
+                }
+
+                if (!ifilter_has_all_input_formats(fg))
+                    continue;
+
+                ret = configure_filtergraph(fg);
+                if (ret < 0) {
+                    av_log(NULL, AV_LOG_ERROR, "Error configuring filter graph\n");
+                    exit_program(1);
+                }
+
+                finish_output_stream(ost);
+            }
+
+            ret = init_output_stream(ost, error, sizeof(error));
+            if (ret < 0) {
+                av_log(NULL, AV_LOG_ERROR, "Error initializing output stream %d:%d -- %s\n",
+                       ost->file_index, ost->index, error);
+                exit_program(1);
+            }
+        }
+
+        if (enc->codec_type == AVMEDIA_TYPE_AUDIO && enc->frame_size <= 1)
+            continue;
+
+        if (enc->codec_type != AVMEDIA_TYPE_VIDEO && enc->codec_type != AVMEDIA_TYPE_AUDIO)
+            continue;
+
+        for (;;) {
+            const char *desc = NULL;
+            AVPacket pkt;
+            int pkt_size;
+
+            switch (enc->codec_type) {
+            case AVMEDIA_TYPE_AUDIO:
+                desc   = "audio";
+                break;
+            case AVMEDIA_TYPE_VIDEO:
+                desc   = "video";
+                break;
+            default:
+                av_assert0(0);
+            }
+
+                av_init_packet(&pkt);
+                pkt.data = NULL;
+                pkt.size = 0;
+
+                update_benchmark(NULL);
+
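+                // If the encoder has nothing buffered yet (EAGAIN), switch it to
+                // draining mode by sending a NULL frame; avcodec_receive_packet()
+                // will then return the remaining packets until AVERROR_EOF.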
+                while ((ret = avcodec_receive_packet(enc, &pkt)) == AVERROR(EAGAIN)) {
+                    ret = avcodec_send_frame(enc, NULL);
+                    if (ret < 0) {
+                        av_log(NULL, AV_LOG_FATAL, "%s encoding failed: %s\n",
+                               desc,
+                               av_err2str(ret));
+                        exit_program(1);
+                    }
+                }
+
+                update_benchmark("flush_%s %d.%d", desc, ost->file_index, ost->index);
+                if (ret < 0 && ret != AVERROR_EOF) {
+                    av_log(NULL, AV_LOG_FATAL, "%s encoding failed: %s\n",
+                           desc,
+                           av_err2str(ret));
+                    exit_program(1);
+                }
+                if (ost->logfile && enc->stats_out) {
+                    fprintf(ost->logfile, "%s", enc->stats_out);
+                }
+                if (ret == AVERROR_EOF) {
+                    output_packet(of, &pkt, ost, 1);
+                    break;
+                }
+                if (ost->finished & MUXER_FINISHED) {
+                    av_packet_unref(&pkt);
+                    continue;
+                }
+                av_packet_rescale_ts(&pkt, enc->time_base, ost->mux_timebase);
+                pkt_size = pkt.size;
+                output_packet(of, &pkt, ost, 0);
+                if (ost->enc_ctx->codec_type == AVMEDIA_TYPE_VIDEO && vstats_filename) {
+                    do_video_stats(ost, pkt_size);
+                }
+        }
+    }
+}
+
+/*
+ * Check whether a packet from ist should be written into ost at this time
+ */
+static int check_output_constraints(InputStream *ist, OutputStream *ost)
+{
+    OutputFile *of = output_files[ost->file_index];
+    int ist_index  = input_files[ist->file_index]->ist_index + ist->st->index;
+
+    if (ost->source_index != ist_index)
+        return 0;
+
+    if (ost->finished)
+        return 0;
+
+    if (of->start_time != AV_NOPTS_VALUE && ist->pts < of->start_time)
+        return 0;
+
+    return 1;
+}
+
+static void do_streamcopy(InputStream *ist, OutputStream *ost, const AVPacket *pkt)
+{
+    OutputFile *of = output_files[ost->file_index];
+    InputFile   *f = input_files [ist->file_index];
+    int64_t start_time = (of->start_time == AV_NOPTS_VALUE) ? 0 : of->start_time;
+    int64_t ost_tb_start_time = av_rescale_q(start_time, AV_TIME_BASE_Q, ost->mux_timebase);
+    AVPacket opkt = { 0 };
+
+    av_init_packet(&opkt);
+
+    // EOF: flush output bitstream filters.
+    if (!pkt) {
+        output_packet(of, &opkt, ost, 1);
+        return;
+    }
+
+    if ((!ost->frame_number && !(pkt->flags & AV_PKT_FLAG_KEY)) &&
+        !ost->copy_initial_nonkeyframes)
+        return;
+
+    if (!ost->frame_number && !ost->copy_prior_start) {
+        int64_t comp_start = start_time;
+        if (copy_ts && f->start_time != AV_NOPTS_VALUE)
+            comp_start = FFMAX(start_time, f->start_time + f->ts_offset);
+        if (pkt->pts == AV_NOPTS_VALUE ?
+            ist->pts < comp_start :
+            pkt->pts < av_rescale_q(comp_start, AV_TIME_BASE_Q, ist->st->time_base))
+            return;
+    }
+
+    if (of->recording_time != INT64_MAX &&
+        ist->pts >= of->recording_time + start_time) {
+        close_output_stream(ost);
+        return;
+    }
+
+    if (f->recording_time != INT64_MAX) {
+        start_time = f->ctx->start_time;
+        if (f->start_time != AV_NOPTS_VALUE && copy_ts)
+            start_time += f->start_time;
+        if (ist->pts >= f->recording_time + start_time) {
+            close_output_stream(ost);
+            return;
+        }
+    }
+
+    /* force the input stream PTS */
+    if (ost->enc_ctx->codec_type == AVMEDIA_TYPE_VIDEO)
+        ost->sync_opts++;
+
+    if (pkt->pts != AV_NOPTS_VALUE)
+        opkt.pts = av_rescale_q(pkt->pts, ist->st->time_base, ost->mux_timebase) - ost_tb_start_time;
+    else
+        opkt.pts = AV_NOPTS_VALUE;
+
+    if (pkt->dts == AV_NOPTS_VALUE)
+        opkt.dts = av_rescale_q(ist->dts, AV_TIME_BASE_Q, ost->mux_timebase);
+    else
+        opkt.dts = av_rescale_q(pkt->dts, ist->st->time_base, ost->mux_timebase);
+    opkt.dts -= ost_tb_start_time;
+
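+    // For audio, recompute the timestamps from the frame duration with
+    // av_rescale_delta() so rounding errors do not accumulate across packets.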
+    if (ost->st->codecpar->codec_type == AVMEDIA_TYPE_AUDIO && pkt->dts != AV_NOPTS_VALUE) {
+        int duration = av_get_audio_frame_duration(ist->dec_ctx, pkt->size);
+        if(!duration)
+            duration = ist->dec_ctx->frame_size;
+        opkt.dts = opkt.pts = av_rescale_delta(ist->st->time_base, pkt->dts,
+                                               (AVRational){1, ist->dec_ctx->sample_rate}, duration, &ist->filter_in_rescale_delta_last,
+                                               ost->mux_timebase) - ost_tb_start_time;
+    }
+
+    opkt.duration = av_rescale_q(pkt->duration, ist->st->time_base, ost->mux_timebase);
+
+    opkt.flags    = pkt->flags;
+
+    if (pkt->buf) {
+        opkt.buf = av_buffer_ref(pkt->buf);
+        if (!opkt.buf)
+            exit_program(1);
+    }
+    opkt.data = pkt->data;
+    opkt.size = pkt->size;
+
+    av_copy_packet_side_data(&opkt, pkt);
+
+    output_packet(of, &opkt, ost, 0);
+}
+
+int guess_input_channel_layout(InputStream *ist)
+{
+    AVCodecContext *dec = ist->dec_ctx;
+
+    if (!dec->channel_layout) {
+        char layout_name[256];
+
+        if (dec->channels > ist->guess_layout_max)
+            return 0;
+        dec->channel_layout = av_get_default_channel_layout(dec->channels);
+        if (!dec->channel_layout)
+            return 0;
+        av_get_channel_layout_string(layout_name, sizeof(layout_name),
+                                     dec->channels, dec->channel_layout);
+        av_log(NULL, AV_LOG_WARNING, "Guessed Channel Layout for Input Stream "
+               "#%d.%d : %s\n", ist->file_index, ist->st->index, layout_name);
+    }
+    return 1;
+}
+
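+// decode_error_stat[0] counts successfully decoded frames, decode_error_stat[1]
+// counts decode errors; with -xerror (exit_on_error) any error or corrupt
+// decoded frame aborts the whole run.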
+static void check_decode_result(InputStream *ist, int *got_output, int ret)
+{
+    if (*got_output || ret<0)
+        decode_error_stat[ret<0] ++;
+
+    if (ret < 0 && exit_on_error)
+        exit_program(1);
+
+    if (exit_on_error && *got_output && ist) {
+        if (ist->decoded_frame->decode_error_flags || (ist->decoded_frame->flags & AV_FRAME_FLAG_CORRUPT)) {
+            av_log(NULL, AV_LOG_FATAL, "%s: corrupt decoded frame in stream %d\n", input_files[ist->file_index]->ctx->url, ist->st->index);
+            exit_program(1);
+        }
+    }
+}
+
+// Filters can be configured only if the formats of all inputs are known.
+static int ifilter_has_all_input_formats(FilterGraph *fg)
+{
+    int i;
+    for (i = 0; i < fg->nb_inputs; i++) {
+        if (fg->inputs[i]->format < 0 && (fg->inputs[i]->type == AVMEDIA_TYPE_AUDIO ||
+                                          fg->inputs[i]->type == AVMEDIA_TYPE_VIDEO))
+            return 0;
+    }
+    return 1;
+}
+
+static int ifilter_send_frame(InputFilter *ifilter, AVFrame *frame)
+{
+    FilterGraph *fg = ifilter->graph;
+    int need_reinit, ret, i;
+
+    /* determine if the parameters for this input changed */
+    need_reinit = ifilter->format != frame->format;
+    if (!!ifilter->hw_frames_ctx != !!frame->hw_frames_ctx ||
+        (ifilter->hw_frames_ctx && ifilter->hw_frames_ctx->data != frame->hw_frames_ctx->data))
+        need_reinit = 1;
+
+    switch (ifilter->ist->st->codecpar->codec_type) {
+    case AVMEDIA_TYPE_AUDIO:
+        need_reinit |= ifilter->sample_rate    != frame->sample_rate ||
+                       ifilter->channels       != frame->channels ||
+                       ifilter->channel_layout != frame->channel_layout;
+        break;
+    case AVMEDIA_TYPE_VIDEO:
+        need_reinit |= ifilter->width  != frame->width ||
+                       ifilter->height != frame->height;
+        break;
+    }
+
+    if (need_reinit) {
+        ret = ifilter_parameters_from_frame(ifilter, frame);
+        if (ret < 0)
+            return ret;
+    }
+
+    /* (re)init the graph if possible, otherwise buffer the frame and return */
+    if (need_reinit || !fg->graph) {
+        for (i = 0; i < fg->nb_inputs; i++) {
+            if (!ifilter_has_all_input_formats(fg)) {
+                AVFrame *tmp = av_frame_clone(frame);
+                if (!tmp)
+                    return AVERROR(ENOMEM);
+                av_frame_unref(frame);
+
+                if (!av_fifo_space(ifilter->frame_queue)) {
+                    ret = av_fifo_realloc2(ifilter->frame_queue, 2 * av_fifo_size(ifilter->frame_queue));
+                    if (ret < 0) {
+                        av_frame_free(&tmp);
+                        return ret;
+                    }
+                }
+                av_fifo_generic_write(ifilter->frame_queue, &tmp, sizeof(tmp), NULL);
+                return 0;
+            }
+        }
+
+        ret = reap_filters(1);
+        if (ret < 0 && ret != AVERROR_EOF) {
+            av_log(NULL, AV_LOG_ERROR, "Error while filtering: %s\n", av_err2str(ret));
+            return ret;
+        }
+
+        ret = configure_filtergraph(fg);
+        if (ret < 0) {
+            av_log(NULL, AV_LOG_ERROR, "Error reinitializing filters!\n");
+            return ret;
+        }
+    }
+
+    ret = av_buffersrc_add_frame_flags(ifilter->filter, frame, AV_BUFFERSRC_FLAG_PUSH);
+    if (ret < 0) {
+        if (ret != AVERROR_EOF)
+            av_log(NULL, AV_LOG_ERROR, "Error while filtering: %s\n", av_err2str(ret));
+        return ret;
+    }
+
+    return 0;
+}
+
+static int ifilter_send_eof(InputFilter *ifilter, int64_t pts)
+{
+    int ret;
+
+    ifilter->eof = 1;
+
+    if (ifilter->filter) {
+        ret = av_buffersrc_close(ifilter->filter, pts, AV_BUFFERSRC_FLAG_PUSH);
+        if (ret < 0)
+            return ret;
+    } else {
+        // the filtergraph was never configured
+        if (ifilter->format < 0)
+            ifilter_parameters_from_codecpar(ifilter, ifilter->ist->st->codecpar);
+        if (ifilter->format < 0 && (ifilter->type == AVMEDIA_TYPE_AUDIO || ifilter->type == AVMEDIA_TYPE_VIDEO)) {
+            av_log(NULL, AV_LOG_ERROR, "Cannot determine format of input stream %d:%d after EOF\n", ifilter->ist->file_index, ifilter->ist->st->index);
+            return AVERROR_INVALIDDATA;
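+        // A "chapters" entry, optionally followed by a time offset (for example
+        // "chapters-0.1"), expands into one forced key frame per chapter start.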
+        }
+    }
+
+    return 0;
+}
+
+// This does not quite work like avcodec_decode_audio4/avcodec_decode_video2.
+// There is the following difference: if you got a frame, you must call
+// it again with pkt=NULL. pkt==NULL is treated differently from pkt->size==0
+// (pkt==NULL means get more output, pkt->size==0 is a flush/drain packet)
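+// In process_input_packet() below this translates to: call decode() once with
+// the demuxed packet (an empty size==0 packet at EOF), then keep calling it
+// with pkt=NULL while frames keep coming out.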
+static int decode(AVCodecContext *avctx, AVFrame *frame, int *got_frame, AVPacket *pkt)
+{
+    int ret;
+
+    *got_frame = 0;
+
+    if (pkt) {
+        ret = avcodec_send_packet(avctx, pkt);
+        // In particular, we don't expect AVERROR(EAGAIN), because we read all
+        // decoded frames with avcodec_receive_frame() until done.
+        if (ret < 0 && ret != AVERROR_EOF)
+            return ret;
+    }
+
+    ret = avcodec_receive_frame(avctx, frame);
+    if (ret < 0 && ret != AVERROR(EAGAIN))
+        return ret;
+    if (ret >= 0)
+        *got_frame = 1;
+
+    return 0;
+}
+
+static int send_frame_to_filters(InputStream *ist, AVFrame *decoded_frame)
+{
+    int i, ret;
+    AVFrame *f;
+
+    av_assert1(ist->nb_filters > 0); /* ensure ret is initialized */
+    for (i = 0; i < ist->nb_filters; i++) {
+        if (i < ist->nb_filters - 1) {
+            f = ist->filter_frame;
+            ret = av_frame_ref(f, decoded_frame);
+            if (ret < 0)
+                break;
+        } else
+            f = decoded_frame;
+        ret = ifilter_send_frame(ist->filters[i], f);
+        if (ret == AVERROR_EOF)
+            ret = 0; /* ignore */
+        if (ret < 0) {
+            av_log(NULL, AV_LOG_ERROR,
+                   "Failed to inject frame into filter network: %s\n", av_err2str(ret));
+            break;
+        }
+    }
+    return ret;
+}
+
+static int decode_audio(InputStream *ist, AVPacket *pkt, int *got_output,
+                        int *decode_failed)
+{
+    AVFrame *decoded_frame;
+    AVCodecContext *avctx = ist->dec_ctx;
+    int ret, err = 0;
+    AVRational decoded_frame_tb;
+
+    if (!ist->decoded_frame && !(ist->decoded_frame = av_frame_alloc()))
+        return AVERROR(ENOMEM);
+    if (!ist->filter_frame && !(ist->filter_frame = av_frame_alloc()))
+        return AVERROR(ENOMEM);
+    decoded_frame = ist->decoded_frame;
+
+    update_benchmark(NULL);
+    ret = decode(avctx, decoded_frame, got_output, pkt);
+    update_benchmark("decode_audio %d.%d", ist->file_index, ist->st->index);
+    if (ret < 0)
+        *decode_failed = 1;
+
+    if (ret >= 0 && avctx->sample_rate <= 0) {
+        av_log(avctx, AV_LOG_ERROR, "Sample rate %d invalid\n", avctx->sample_rate);
+        ret = AVERROR_INVALIDDATA;
+    }
+
+    if (ret != AVERROR_EOF)
+        check_decode_result(ist, got_output, ret);
+
+    if (!*got_output || ret < 0)
+        return ret;
+
+    ist->samples_decoded += decoded_frame->nb_samples;
+    ist->frames_decoded++;
+
+#if 1
+    /* increment next_dts to use for the case where the input stream does not
+       have timestamps or there are multiple frames in the packet */
+    ist->next_pts += ((int64_t)AV_TIME_BASE * decoded_frame->nb_samples) /
+                     avctx->sample_rate;
+    ist->next_dts += ((int64_t)AV_TIME_BASE * decoded_frame->nb_samples) /
+                     avctx->sample_rate;
+#endif
+
+    if (decoded_frame->pts != AV_NOPTS_VALUE) {
+        decoded_frame_tb   = ist->st->time_base;
+    } else if (pkt && pkt->pts != AV_NOPTS_VALUE) {
+        decoded_frame->pts = pkt->pts;
+        decoded_frame_tb   = ist->st->time_base;
+    }else {
+        decoded_frame->pts = ist->dts;
+        decoded_frame_tb   = AV_TIME_BASE_Q;
+    }
+    if (decoded_frame->pts != AV_NOPTS_VALUE)
+        decoded_frame->pts = av_rescale_delta(decoded_frame_tb, decoded_frame->pts,
+                                              (AVRational){1, avctx->sample_rate}, decoded_frame->nb_samples, &ist->filter_in_rescale_delta_last,
+                                              (AVRational){1, avctx->sample_rate});
+    ist->nb_samples = decoded_frame->nb_samples;
+    err = send_frame_to_filters(ist, decoded_frame);
+
+    av_frame_unref(ist->filter_frame);
+    av_frame_unref(decoded_frame);
+    return err < 0 ? err : ret;
+}
+
+static int decode_video(InputStream *ist, AVPacket *pkt, int *got_output, int64_t *duration_pts, int eof,
+                        int *decode_failed)
+{
+    AVFrame *decoded_frame;
+    int i, ret = 0, err = 0;
+    int64_t best_effort_timestamp;
+    int64_t dts = AV_NOPTS_VALUE;
+    AVPacket avpkt;
+
+    // With fate-indeo3-2, we're getting 0-sized packets before EOF for some
+    // reason. This seems like a semi-critical bug. Don't trigger EOF, and
+    // skip the packet.
+    if (!eof && pkt && pkt->size == 0)
+        return 0;
+
+    if (!ist->decoded_frame && !(ist->decoded_frame = av_frame_alloc()))
+        return AVERROR(ENOMEM);
+    if (!ist->filter_frame && !(ist->filter_frame = av_frame_alloc()))
+        return AVERROR(ENOMEM);
+    decoded_frame = ist->decoded_frame;
+    if (ist->dts != AV_NOPTS_VALUE)
+        dts = av_rescale_q(ist->dts, AV_TIME_BASE_Q, ist->st->time_base);
+    if (pkt) {
+        avpkt = *pkt;
+        avpkt.dts = dts; // ffmpeg.c probably shouldn't do this
+    }
+
+    // The old code used to set dts on the drain packet, which does not work
+    // with the new API anymore.
+    if (eof) {
+        void *new = av_realloc_array(ist->dts_buffer, ist->nb_dts_buffer + 1, sizeof(ist->dts_buffer[0]));
+        if (!new)
+            return AVERROR(ENOMEM);
+        ist->dts_buffer = new;
+        ist->dts_buffer[ist->nb_dts_buffer++] = dts;
+    }
+
+    update_benchmark(NULL);
+    ret = decode(ist->dec_ctx, decoded_frame, got_output, pkt ? &avpkt : NULL);
+    update_benchmark("decode_video %d.%d", ist->file_index, ist->st->index);
+    if (ret < 0)
+        *decode_failed = 1;
+
+    // The following line may be required in some cases where there is no parser
+    // or the parser does not set has_b_frames correctly
+    if (ist->st->codecpar->video_delay < ist->dec_ctx->has_b_frames) {
+        if (ist->dec_ctx->codec_id == AV_CODEC_ID_H264) {
+            ist->st->codecpar->video_delay = ist->dec_ctx->has_b_frames;
+        } else
+            av_log(ist->dec_ctx, AV_LOG_WARNING,
+                   "video_delay is larger in decoder than demuxer %d > %d.\n"
+                   "If you want to help, upload a sample "
+                   "of this file to ftp://upload.ffmpeg.org/incoming/ "
+                   "and contact the ffmpeg-devel mailing list. (ffmpeg-devel at ffmpeg.org)\n",
+                   ist->dec_ctx->has_b_frames,
+                   ist->st->codecpar->video_delay);
+    }
+
+    if (ret != AVERROR_EOF)
+        check_decode_result(ist, got_output, ret);
+
+    if (*got_output && ret >= 0) {
+        if (ist->dec_ctx->width  != decoded_frame->width ||
+            ist->dec_ctx->height != decoded_frame->height ||
+            ist->dec_ctx->pix_fmt != decoded_frame->format) {
+            av_log(NULL, AV_LOG_DEBUG, "Frame parameters mismatch context %d,%d,%d != %d,%d,%d\n",
+                decoded_frame->width,
+                decoded_frame->height,
+                decoded_frame->format,
+                ist->dec_ctx->width,
+                ist->dec_ctx->height,
+                ist->dec_ctx->pix_fmt);
+        }
+    }
+
+    if (!*got_output || ret < 0)
+        return ret;
+
+    if(ist->top_field_first>=0)
+        decoded_frame->top_field_first = ist->top_field_first;
+
+    ist->frames_decoded++;
+
+    if (ist->hwaccel_retrieve_data && decoded_frame->format == ist->hwaccel_pix_fmt) {
+        err = ist->hwaccel_retrieve_data(ist->dec_ctx, decoded_frame);
+        if (err < 0)
+            goto fail;
+    }
+    ist->hwaccel_retrieved_pix_fmt = decoded_frame->format;
+
+    best_effort_timestamp= decoded_frame->best_effort_timestamp;
+    *duration_pts = decoded_frame->pkt_duration;
+
+    if (ist->framerate.num)
+        best_effort_timestamp = ist->cfr_next_pts++;
+
+    if (eof && best_effort_timestamp == AV_NOPTS_VALUE && ist->nb_dts_buffer > 0) {
+        best_effort_timestamp = ist->dts_buffer[0];
+
+        for (i = 0; i < ist->nb_dts_buffer - 1; i++)
+            ist->dts_buffer[i] = ist->dts_buffer[i + 1];
+        ist->nb_dts_buffer--;
+    }
+
+    if(best_effort_timestamp != AV_NOPTS_VALUE) {
+        int64_t ts = av_rescale_q(decoded_frame->pts = best_effort_timestamp, ist->st->time_base, AV_TIME_BASE_Q);
+
+        if (ts != AV_NOPTS_VALUE)
+            ist->next_pts = ist->pts = ts;
+    }
+
+    if (debug_ts) {
+        av_log(NULL, AV_LOG_INFO, "decoder -> ist_index:%d type:video "
+               "frame_pts:%s frame_pts_time:%s best_effort_ts:%"PRId64" best_effort_ts_time:%s keyframe:%d frame_type:%d time_base:%d/%d\n",
+               ist->st->index, av_ts2str(decoded_frame->pts),
+               av_ts2timestr(decoded_frame->pts, &ist->st->time_base),
+               best_effort_timestamp,
+               av_ts2timestr(best_effort_timestamp, &ist->st->time_base),
+               decoded_frame->key_frame, decoded_frame->pict_type,
+               ist->st->time_base.num, ist->st->time_base.den);
+    }
+
+    if (ist->st->sample_aspect_ratio.num)
+        decoded_frame->sample_aspect_ratio = ist->st->sample_aspect_ratio;
+
+    err = send_frame_to_filters(ist, decoded_frame);
+
+fail:
+    av_frame_unref(ist->filter_frame);
+    av_frame_unref(decoded_frame);
+    return err < 0 ? err : ret;
+}
+
+static int transcode_subtitles(InputStream *ist, AVPacket *pkt, int *got_output,
+                               int *decode_failed)
+{
+    AVSubtitle subtitle;
+    int free_sub = 1;
+    int i, ret = avcodec_decode_subtitle2(ist->dec_ctx,
+                                          &subtitle, got_output, pkt);
+
+    check_decode_result(NULL, got_output, ret);
+
+    if (ret < 0 || !*got_output) {
+        *decode_failed = 1;
+        if (!pkt->size)
+            sub2video_flush(ist);
+        return ret;
+    }
+
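+    // With -fix_sub_duration each subtitle is held back until the next one
+    // arrives so its end_display_time can be clipped to the actual gap; the
+    // FFSWAPs below implement that one-event delay.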
+    if (ist->fix_sub_duration) {
+        int end = 1;
+        if (ist->prev_sub.got_output) {
+            end = av_rescale(subtitle.pts - ist->prev_sub.subtitle.pts,
+                             1000, AV_TIME_BASE);
+            if (end < ist->prev_sub.subtitle.end_display_time) {
+                av_log(ist->dec_ctx, AV_LOG_DEBUG,
+                       "Subtitle duration reduced from %"PRId32" to %d%s\n",
+                       ist->prev_sub.subtitle.end_display_time, end,
+                       end <= 0 ? ", dropping it" : "");
+                ist->prev_sub.subtitle.end_display_time = end;
+            }
+        }
+        FFSWAP(int,        *got_output, ist->prev_sub.got_output);
+        FFSWAP(int,        ret,         ist->prev_sub.ret);
+        FFSWAP(AVSubtitle, subtitle,    ist->prev_sub.subtitle);
+        if (end <= 0)
+            goto out;
+    }
+
+    if (!*got_output)
+        return ret;
+
+    if (ist->sub2video.frame) {
+        sub2video_update(ist, &subtitle);
+    } else if (ist->nb_filters) {
+        if (!ist->sub2video.sub_queue)
+            ist->sub2video.sub_queue = av_fifo_alloc(8 * sizeof(AVSubtitle));
+        if (!ist->sub2video.sub_queue)
+            exit_program(1);
+        if (!av_fifo_space(ist->sub2video.sub_queue)) {
+            ret = av_fifo_realloc2(ist->sub2video.sub_queue, 2 * av_fifo_size(ist->sub2video.sub_queue));
+            if (ret < 0)
+                exit_program(1);
+        }
+        av_fifo_generic_write(ist->sub2video.sub_queue, &subtitle, sizeof(subtitle), NULL);
+        free_sub = 0;
+    }
+
+    if (!subtitle.num_rects)
+        goto out;
+
+    ist->frames_decoded++;
+
+    for (i = 0; i < nb_output_streams; i++) {
+        OutputStream *ost = output_streams[i];
+
+        if (!check_output_constraints(ist, ost) || !ost->encoding_needed
+            || ost->enc->type != AVMEDIA_TYPE_SUBTITLE)
+            continue;
+
+        do_subtitle_out(output_files[ost->file_index], ost, &subtitle);
+    }
+
+out:
+    if (free_sub)
+        avsubtitle_free(&subtitle);
+    return ret;
+}
+
+static int send_filter_eof(InputStream *ist)
+{
+    int i, ret;
+    /* TODO keep pts also in stream time base to avoid converting back */
+    int64_t pts = av_rescale_q_rnd(ist->pts, AV_TIME_BASE_Q, ist->st->time_base,
+                                   AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX);
+
+    for (i = 0; i < ist->nb_filters; i++) {
+        ret = ifilter_send_eof(ist->filters[i], pts);
+        if (ret < 0)
+            return ret;
+    }
+    return 0;
+}
+
+/* pkt = NULL means EOF (needed to flush decoder buffers) */
+static int process_input_packet(InputStream *ist, const AVPacket *pkt, int no_eof)
+{
+    int ret = 0, i;
+    int repeating = 0;
+    int eof_reached = 0;
+
+    AVPacket avpkt;
+    if (!ist->saw_first_ts) {
+        ist->dts = ist->st->avg_frame_rate.num ? - ist->dec_ctx->has_b_frames * AV_TIME_BASE / av_q2d(ist->st->avg_frame_rate) : 0;
+        ist->pts = 0;
+        if (pkt && pkt->pts != AV_NOPTS_VALUE && !ist->decoding_needed) {
+            ist->dts += av_rescale_q(pkt->pts, ist->st->time_base, AV_TIME_BASE_Q);
+            ist->pts = ist->dts; // unused, but better to set it to a value that's not totally wrong
+        }
+        ist->saw_first_ts = 1;
+    }
+
+    if (ist->next_dts == AV_NOPTS_VALUE)
+        ist->next_dts = ist->dts;
+    if (ist->next_pts == AV_NOPTS_VALUE)
+        ist->next_pts = ist->pts;
+
+    if (!pkt) {
+        /* EOF handling */
+        av_init_packet(&avpkt);
+        avpkt.data = NULL;
+        avpkt.size = 0;
+    } else {
+        avpkt = *pkt;
+    }
+
+    if (pkt && pkt->dts != AV_NOPTS_VALUE) {
+        ist->next_dts = ist->dts = av_rescale_q(pkt->dts, ist->st->time_base, AV_TIME_BASE_Q);
+        if (ist->dec_ctx->codec_type != AVMEDIA_TYPE_VIDEO || !ist->decoding_needed)
+            ist->next_pts = ist->pts = ist->dts;
+    }
+
+    // while we have more to decode or while the decoder did output something on EOF
+    while (ist->decoding_needed) {
+        int64_t duration_dts = 0;
+        int64_t duration_pts = 0;
+        int got_output = 0;
+        int decode_failed = 0;
+
+        ist->pts = ist->next_pts;
+        ist->dts = ist->next_dts;
+
+        switch (ist->dec_ctx->codec_type) {
+        case AVMEDIA_TYPE_AUDIO:
+            ret = decode_audio    (ist, repeating ? NULL : &avpkt, &got_output,
+                                   &decode_failed);
+            break;
+        case AVMEDIA_TYPE_VIDEO:
+            ret = decode_video    (ist, repeating ? NULL : &avpkt, &got_output, &duration_pts, !pkt,
+                                   &decode_failed);
+            if (!repeating || !pkt || got_output) {
+                if (pkt && pkt->duration) {
+                    duration_dts = av_rescale_q(pkt->duration, ist->st->time_base, AV_TIME_BASE_Q);
+                } else if(ist->dec_ctx->framerate.num != 0 && ist->dec_ctx->framerate.den != 0) {
+                    int ticks= av_stream_get_parser(ist->st) ? av_stream_get_parser(ist->st)->repeat_pict+1 : ist->dec_ctx->ticks_per_frame;
+                    duration_dts = ((int64_t)AV_TIME_BASE *
+                                    ist->dec_ctx->framerate.den * ticks) /
+                                    ist->dec_ctx->framerate.num / ist->dec_ctx->ticks_per_frame;
+                }
+
+                if(ist->dts != AV_NOPTS_VALUE && duration_dts) {
+                    ist->next_dts += duration_dts;
+                }else
+                    ist->next_dts = AV_NOPTS_VALUE;
+            }
+
+            if (got_output) {
+                if (duration_pts > 0) {
+                    ist->next_pts += av_rescale_q(duration_pts, ist->st->time_base, AV_TIME_BASE_Q);
+                } else {
+                    ist->next_pts += duration_dts;
+                }
+            }
+            break;
+        case AVMEDIA_TYPE_SUBTITLE:
+            if (repeating)
+                break;
+            ret = transcode_subtitles(ist, &avpkt, &got_output, &decode_failed);
+            if (!pkt && ret >= 0)
+                ret = AVERROR_EOF;
+            break;
+        default:
+            return -1;
+        }
+
+        if (ret == AVERROR_EOF) {
+            eof_reached = 1;
+            break;
+        }
+
+        if (ret < 0) {
+            if (decode_failed) {
+                av_log(NULL, AV_LOG_ERROR, "Error while decoding stream #%d:%d: %s\n",
+                       ist->file_index, ist->st->index, av_err2str(ret));
+            } else {
+                av_log(NULL, AV_LOG_FATAL, "Error while processing the decoded "
+                       "data for stream #%d:%d\n", ist->file_index, ist->st->index);
+            }
+            if (!decode_failed || exit_on_error)
+                exit_program(1);
+            break;
+        }
+
+        if (got_output)
+            ist->got_output = 1;
+
+        if (!got_output)
+            break;
+
+        // During draining, we might get multiple output frames in this loop.
+        // ffmpeg.c does not drain the filter chain on configuration changes,
+        // which means if we send multiple frames at once to the filters, and
+        // one of those frames changes configuration, the buffered frames will
+        // be lost. This can upset certain FATE tests.
+        // Decode only 1 frame per call on EOF to appease these FATE tests.
+        // The ideal solution would be to rewrite decoding to use the new
+        // decoding API in a better way.
+        if (!pkt)
+            break;
+
+        repeating = 1;
+    }
+
+    /* after flushing, send an EOF on all the filter inputs attached to the stream */
+    /* except when looping we need to flush but not to send an EOF */
+    if (!pkt && ist->decoding_needed && eof_reached && !no_eof) {
+        int ret = send_filter_eof(ist);
+        if (ret < 0) {
+            av_log(NULL, AV_LOG_FATAL, "Error marking filters as finished\n");
+            exit_program(1);
+        }
+    }
+
+    /* handle stream copy */
+    if (!ist->decoding_needed && pkt) {
+        ist->dts = ist->next_dts;
+        switch (ist->dec_ctx->codec_type) {
+        case AVMEDIA_TYPE_AUDIO:
+            ist->next_dts += ((int64_t)AV_TIME_BASE * ist->dec_ctx->frame_size) /
+                             ist->dec_ctx->sample_rate;
+            break;
+        case AVMEDIA_TYPE_VIDEO:
+            if (ist->framerate.num) {
+                // TODO: Remove work-around for c99-to-c89 issue 7
+                AVRational time_base_q = AV_TIME_BASE_Q;
+                int64_t next_dts = av_rescale_q(ist->next_dts, time_base_q, av_inv_q(ist->framerate));
+                ist->next_dts = av_rescale_q(next_dts + 1, av_inv_q(ist->framerate), time_base_q);
+            } else if (pkt->duration) {
+                ist->next_dts += av_rescale_q(pkt->duration, ist->st->time_base, AV_TIME_BASE_Q);
+            } else if(ist->dec_ctx->framerate.num != 0) {
+                int ticks= av_stream_get_parser(ist->st) ? av_stream_get_parser(ist->st)->repeat_pict + 1 : ist->dec_ctx->ticks_per_frame;
+                ist->next_dts += ((int64_t)AV_TIME_BASE *
+                                  ist->dec_ctx->framerate.den * ticks) /
+                                  ist->dec_ctx->framerate.num / ist->dec_ctx->ticks_per_frame;
+            }
+            break;
+        }
+        ist->pts = ist->dts;
+        ist->next_pts = ist->next_dts;
+    }
+    for (i = 0; i < nb_output_streams; i++) {
+        OutputStream *ost = output_streams[i];
+
+        if (!check_output_constraints(ist, ost) || ost->encoding_needed)
+            continue;
+
+        do_streamcopy(ist, ost, pkt);
+    }
+
+    return !eof_reached;
+}
+
+static void print_sdp(void)
+{
+    char sdp[16384];
+    int i;
+    int j;
+    AVIOContext *sdp_pb;
+    AVFormatContext **avc;
+
+    for (i = 0; i < nb_output_files; i++) {
+        if (!output_files[i]->header_written)
+            return;
+    }
+
+    avc = av_malloc_array(nb_output_files, sizeof(*avc));
+    if (!avc)
+        exit_program(1);
+    for (i = 0, j = 0; i < nb_output_files; i++) {
+        if (!strcmp(output_files[i]->ctx->oformat->name, "rtp")) {
+            avc[j] = output_files[i]->ctx;
+            j++;
+        }
+    }
+
+    if (!j)
+        goto fail;
+
+    av_sdp_create(avc, j, sdp, sizeof(sdp));
+
+    if (!sdp_filename) {
+        printf("SDP:\n%s\n", sdp);
+        fflush(stdout);
+    } else {
+        if (avio_open2(&sdp_pb, sdp_filename, AVIO_FLAG_WRITE, &int_cb, NULL) < 0) {
+            av_log(NULL, AV_LOG_ERROR, "Failed to open sdp file '%s'\n", sdp_filename);
+        } else {
+            avio_printf(sdp_pb, "SDP:\n%s", sdp);
+            avio_closep(&sdp_pb);
+            av_freep(&sdp_filename);
+        }
+    }
+
+fail:
+    av_freep(&avc);
+}
+
+static enum AVPixelFormat get_format(AVCodecContext *s, const enum AVPixelFormat *pix_fmts)
+{
+    InputStream *ist = s->opaque;
+    const enum AVPixelFormat *p;
+    int ret;
+
+    for (p = pix_fmts; *p != AV_PIX_FMT_NONE; p++) {
+        const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(*p);
+        const AVCodecHWConfig  *config = NULL;
+        int i;
+
+        if (!(desc->flags & AV_PIX_FMT_FLAG_HWACCEL))
+            break;
+
+        if (ist->hwaccel_id == HWACCEL_GENERIC ||
+            ist->hwaccel_id == HWACCEL_AUTO) {
+            for (i = 0;; i++) {
+                config = avcodec_get_hw_config(s->codec, i);
+                if (!config)
+                    break;
+                if (!(config->methods &
+                      AV_CODEC_HW_CONFIG_METHOD_HW_DEVICE_CTX))
+                    continue;
+                if (config->pix_fmt == *p)
+                    break;
+            }
+        }
+        if (config) {
+            if (config->device_type != ist->hwaccel_device_type) {
+                // Different hwaccel offered, ignore.
+                continue;
+            }
+
+            ret = hwaccel_decode_init(s);
+            if (ret < 0) {
+                if (ist->hwaccel_id == HWACCEL_GENERIC) {
+                    av_log(NULL, AV_LOG_FATAL,
+                           "%s hwaccel requested for input stream #%d:%d, "
+                           "but cannot be initialized.\n",
+                           av_hwdevice_get_type_name(config->device_type),
+                           ist->file_index, ist->st->index);
+                    return AV_PIX_FMT_NONE;
+                }
+                continue;
+            }
+        } else {
+            const HWAccel *hwaccel = NULL;
+            int i;
+            for (i = 0; hwaccels[i].name; i++) {
+                if (hwaccels[i].pix_fmt == *p) {
+                    hwaccel = &hwaccels[i];
+                    break;
+                }
+            }
+            if (!hwaccel) {
+                // No hwaccel supporting this pixfmt.
+                continue;
+            }
+            if (hwaccel->id != ist->hwaccel_id) {
+                // Does not match requested hwaccel.
+                continue;
+            }
+
+            ret = hwaccel->init(s);
+            if (ret < 0) {
+                av_log(NULL, AV_LOG_FATAL,
+                       "%s hwaccel requested for input stream #%d:%d, "
+                       "but cannot be initialized.\n", hwaccel->name,
+                       ist->file_index, ist->st->index);
+                return AV_PIX_FMT_NONE;
+            }
+        }
+
+        if (ist->hw_frames_ctx) {
+            s->hw_frames_ctx = av_buffer_ref(ist->hw_frames_ctx);
+            if (!s->hw_frames_ctx)
+                return AV_PIX_FMT_NONE;
+        }
+
+        ist->hwaccel_pix_fmt = *p;
+        break;
+    }
+
+    return *p;
+}
+
+static int get_buffer(AVCodecContext *s, AVFrame *frame, int flags)
+{
+    InputStream *ist = s->opaque;
+
+    if (ist->hwaccel_get_buffer && frame->format == ist->hwaccel_pix_fmt)
+        return ist->hwaccel_get_buffer(s, frame, flags);
+
+    return avcodec_default_get_buffer2(s, frame, flags);
+}
+
+static int init_input_stream(int ist_index, char *error, int error_len)
+{
+    int ret;
+    InputStream *ist = input_streams[ist_index];
+
+    if (ist->decoding_needed) {
+        AVCodec *codec = ist->dec;
+        if (!codec) {
+            snprintf(error, error_len, "Decoder (codec %s) not found for input stream #%d:%d",
+                    avcodec_get_name(ist->dec_ctx->codec_id), ist->file_index, ist->st->index);
+            return AVERROR(EINVAL);
+        }
+
+        ist->dec_ctx->opaque                = ist;
+        ist->dec_ctx->get_format            = get_format;
+        ist->dec_ctx->get_buffer2           = get_buffer;
+        ist->dec_ctx->thread_safe_callbacks = 1;
+
+        av_opt_set_int(ist->dec_ctx, "refcounted_frames", 1, 0);
+        if (ist->dec_ctx->codec_id == AV_CODEC_ID_DVB_SUBTITLE &&
+           (ist->decoding_needed & DECODING_FOR_OST)) {
+            av_dict_set(&ist->decoder_opts, "compute_edt", "1", AV_DICT_DONT_OVERWRITE);
+            if (ist->decoding_needed & DECODING_FOR_FILTER)
+                av_log(NULL, AV_LOG_WARNING, "Warning: using DVB subtitles for filtering and output at the same time is not fully supported, also see -compute_edt [0|1]\n");
+        }
+
+        av_dict_set(&ist->decoder_opts, "sub_text_format", "ass", AV_DICT_DONT_OVERWRITE);
+
+        /* Useful for subtitles retiming by lavf (FIXME), skipping samples in
+         * audio, and video decoders such as cuvid or mediacodec */
+        ist->dec_ctx->pkt_timebase = ist->st->time_base;
+
+        if (!av_dict_get(ist->decoder_opts, "threads", NULL, 0))
+            av_dict_set(&ist->decoder_opts, "threads", "auto", 0);
+        /* Attached pics are sparse, therefore we would not want to delay their decoding till EOF. */
+        if (ist->st->disposition & AV_DISPOSITION_ATTACHED_PIC)
+            av_dict_set(&ist->decoder_opts, "threads", "1", 0);
+
+        ret = hw_device_setup_for_decode(ist);
+        if (ret < 0) {
+            snprintf(error, error_len, "Device setup failed for "
+                     "decoder on input stream #%d:%d : %s",
+                     ist->file_index, ist->st->index, av_err2str(ret));
+            return ret;
+        }
+
+        if ((ret = avcodec_open2(ist->dec_ctx, codec, &ist->decoder_opts)) < 0) {
+            if (ret == AVERROR_EXPERIMENTAL)
+                abort_codec_experimental(codec, 0);
+
+            snprintf(error, error_len,
+                     "Error while opening decoder for input stream "
+                     "#%d:%d : %s",
+                     ist->file_index, ist->st->index, av_err2str(ret));
+            return ret;
+        }
+        assert_avoptions(ist->decoder_opts);
+    }
+
+    ist->next_pts = AV_NOPTS_VALUE;
+    ist->next_dts = AV_NOPTS_VALUE;
+
+    return 0;
+}
+
+static InputStream *get_input_stream(OutputStream *ost)
+{
+    if (ost->source_index >= 0)
+        return input_streams[ost->source_index];
+    return NULL;
+}
+
+static int compare_int64(const void *a, const void *b)
+{
+    return FFDIFFSIGN(*(const int64_t *)a, *(const int64_t *)b);
+}
+
+/* open the muxer when all the streams are initialized */
+static int check_init_output_file(OutputFile *of, int file_index)
+{
+    int ret, i;
+
+    for (i = 0; i < of->ctx->nb_streams; i++) {
+        OutputStream *ost = output_streams[of->ost_index + i];
+        if (!ost->initialized)
+            return 0;
+    }
+
+    of->ctx->interrupt_callback = int_cb;
+
+    ret = avformat_write_header(of->ctx, &of->opts);
+    if (ret < 0) {
+        av_log(NULL, AV_LOG_ERROR,
+               "Could not write header for output file #%d "
+               "(incorrect codec parameters ?): %s\n",
+               file_index, av_err2str(ret));
+        return ret;
+    }
+    //assert_avoptions(of->opts);
+    of->header_written = 1;
+
+    av_dump_format(of->ctx, file_index, of->ctx->url, 1);
+
+    if (sdp_filename || want_sdp)
+        print_sdp();
+
+    /* flush the muxing queues */
+    for (i = 0; i < of->ctx->nb_streams; i++) {
+        OutputStream *ost = output_streams[of->ost_index + i];
+
+        /* try to improve muxing time_base (only possible if nothing has been written yet) */
+        if (!av_fifo_size(ost->muxing_queue))
+            ost->mux_timebase = ost->st->time_base;
+
+        while (av_fifo_size(ost->muxing_queue)) {
+            AVPacket pkt;
+            av_fifo_generic_read(ost->muxing_queue, &pkt, sizeof(pkt), NULL);
+            write_packet(of, &pkt, ost, 1);
+        }
+    }
+
+    return 0;
+}
+
+static int init_output_bsfs(OutputStream *ost)
+{
+    AVBSFContext *ctx;
+    int i, ret;
+
+    if (!ost->nb_bitstream_filters)
+        return 0;
+
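+    // Chain the bitstream filters: each filter takes its input parameters and
+    // time base from the previous one's output (or from the stream for the
+    // first), and the last filter's output becomes the stream's parameters.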
+    for (i = 0; i < ost->nb_bitstream_filters; i++) {
+        ctx = ost->bsf_ctx[i];
+
+        ret = avcodec_parameters_copy(ctx->par_in,
+                                      i ? ost->bsf_ctx[i - 1]->par_out : ost->st->codecpar);
+        if (ret < 0)
+            return ret;
+
+        ctx->time_base_in = i ? ost->bsf_ctx[i - 1]->time_base_out : ost->st->time_base;
+
+        ret = av_bsf_init(ctx);
+        if (ret < 0) {
+            av_log(NULL, AV_LOG_ERROR, "Error initializing bitstream filter: %s\n",
+                   ost->bsf_ctx[i]->filter->name);
+            return ret;
+        }
+    }
+
+    ctx = ost->bsf_ctx[ost->nb_bitstream_filters - 1];
+    ret = avcodec_parameters_copy(ost->st->codecpar, ctx->par_out);
+    if (ret < 0)
+        return ret;
+
+    ost->st->time_base = ctx->time_base_out;
+
+    return 0;
+}
+
+static int init_output_stream_streamcopy(OutputStream *ost)
+{
+    OutputFile *of = output_files[ost->file_index];
+    InputStream *ist = get_input_stream(ost);
+    AVCodecParameters *par_dst = ost->st->codecpar;
+    AVCodecParameters *par_src = ost->ref_par;
+    AVRational sar;
+    int i, ret;
+    uint32_t codec_tag = par_dst->codec_tag;
+
+    av_assert0(ist && !ost->filter);
+
+    ret = avcodec_parameters_to_context(ost->enc_ctx, ist->st->codecpar);
+    if (ret >= 0)
+        ret = av_opt_set_dict(ost->enc_ctx, &ost->encoder_opts);
+    if (ret < 0) {
+        av_log(NULL, AV_LOG_FATAL,
+               "Error setting up codec context options.\n");
+        return ret;
+    }
+    avcodec_parameters_from_context(par_src, ost->enc_ctx);
+
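+    // Keep the source codec_tag only if the output format has no tag table,
+    // maps that tag back to the same codec, or has no tag of its own for this
+    // codec; otherwise leave it at 0 and let the muxer choose.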
+    if (!codec_tag) {
+        unsigned int codec_tag_tmp;
+        if (!of->ctx->oformat->codec_tag ||
+            av_codec_get_id (of->ctx->oformat->codec_tag, par_src->codec_tag) == par_src->codec_id ||
+            !av_codec_get_tag2(of->ctx->oformat->codec_tag, par_src->codec_id, &codec_tag_tmp))
+            codec_tag = par_src->codec_tag;
+    }
+
+    ret = avcodec_parameters_copy(par_dst, par_src);
+    if (ret < 0)
+        return ret;
+
+    par_dst->codec_tag = codec_tag;
+
+    if (!ost->frame_rate.num)
+        ost->frame_rate = ist->framerate;
+    ost->st->avg_frame_rate = ost->frame_rate;
+
+    ret = avformat_transfer_internal_stream_timing_info(of->ctx->oformat, ost->st, ist->st, copy_tb);
+    if (ret < 0)
+        return ret;
+
+    // copy timebase while removing common factors
+    if (ost->st->time_base.num <= 0 || ost->st->time_base.den <= 0)
+        ost->st->time_base = av_add_q(av_stream_get_codec_timebase(ost->st), (AVRational){0, 1});
+
+    // copy estimated duration as a hint to the muxer
+    if (ost->st->duration <= 0 && ist->st->duration > 0)
+        ost->st->duration = av_rescale_q(ist->st->duration, ist->st->time_base, ost->st->time_base);
+
+    // copy disposition
+    ost->st->disposition = ist->st->disposition;
+
+    if (ist->st->nb_side_data) {
+        for (i = 0; i < ist->st->nb_side_data; i++) {
+            const AVPacketSideData *sd_src = &ist->st->side_data[i];
+            uint8_t *dst_data;
+
+            dst_data = av_stream_new_side_data(ost->st, sd_src->type, sd_src->size);
+            if (!dst_data)
+                return AVERROR(ENOMEM);
+            memcpy(dst_data, sd_src->data, sd_src->size);
+        }
+    }
+
+    if (ost->rotate_overridden) {
+        uint8_t *sd = av_stream_new_side_data(ost->st, AV_PKT_DATA_DISPLAYMATRIX,
+                                              sizeof(int32_t) * 9);
+        if (sd)
+            av_display_rotation_set((int32_t *)sd, -ost->rotate_override_value);
+    }
+
+    switch (par_dst->codec_type) {
+    case AVMEDIA_TYPE_AUDIO:
+        if (audio_volume != 256) {
+            av_log(NULL, AV_LOG_FATAL, "-acodec copy and -vol are incompatible (frames are not decoded)\n");
+            exit_program(1);
+        }
+        if((par_dst->block_align == 1 || par_dst->block_align == 1152 || par_dst->block_align == 576) && par_dst->codec_id == AV_CODEC_ID_MP3)
+            par_dst->block_align= 0;
+        if(par_dst->codec_id == AV_CODEC_ID_AC3)
+            par_dst->block_align= 0;
+        break;
+    case AVMEDIA_TYPE_VIDEO:
+        if (ost->frame_aspect_ratio.num) { // overridden by the -aspect cli option
+            sar =
+                av_mul_q(ost->frame_aspect_ratio,
+                         (AVRational){ par_dst->height, par_dst->width });
+            av_log(NULL, AV_LOG_WARNING, "Overriding aspect ratio "
+                   "with stream copy may produce invalid files\n");
+            }
+        else if (ist->st->sample_aspect_ratio.num)
+            sar = ist->st->sample_aspect_ratio;
+        else
+            sar = par_src->sample_aspect_ratio;
+        ost->st->sample_aspect_ratio = par_dst->sample_aspect_ratio = sar;
+        ost->st->avg_frame_rate = ist->st->avg_frame_rate;
+        ost->st->r_frame_rate = ist->st->r_frame_rate;
+        break;
+    }
+
+    ost->mux_timebase = ist->st->time_base;
+
+    return 0;
+}
+
+static void set_encoder_id(OutputFile *of, OutputStream *ost)
+{
+    AVDictionaryEntry *e;
+
+    uint8_t *encoder_string;
+    int encoder_string_len;
+    int format_flags = 0;
+    int codec_flags = ost->enc_ctx->flags;
+
+    if (av_dict_get(ost->st->metadata, "encoder",  NULL, 0))
+        return;
+
+    e = av_dict_get(of->opts, "fflags", NULL, 0);
+    if (e) {
+        const AVOption *o = av_opt_find(of->ctx, "fflags", NULL, 0, 0);
+        if (!o)
+            return;
+        av_opt_eval_flags(of->ctx, o, e->value, &format_flags);
+    }
+    e = av_dict_get(ost->encoder_opts, "flags", NULL, 0);
+    if (e) {
+        const AVOption *o = av_opt_find(ost->enc_ctx, "flags", NULL, 0, 0);
+        if (!o)
+            return;
+        av_opt_eval_flags(ost->enc_ctx, o, e->value, &codec_flags);
+    }
+
+    encoder_string_len = sizeof(LIBAVCODEC_IDENT) + strlen(ost->enc->name) + 2;
+    encoder_string     = av_mallocz(encoder_string_len);
+    if (!encoder_string)
+        exit_program(1);
+
+    if (!(format_flags & AVFMT_FLAG_BITEXACT) && !(codec_flags & AV_CODEC_FLAG_BITEXACT))
+        av_strlcpy(encoder_string, LIBAVCODEC_IDENT " ", encoder_string_len);
+    else
+        av_strlcpy(encoder_string, "Lavc ", encoder_string_len);
+    av_strlcat(encoder_string, ost->enc->name, encoder_string_len);
+    av_dict_set(&ost->st->metadata, "encoder",  encoder_string,
+                AV_DICT_DONT_STRDUP_VAL | AV_DICT_DONT_OVERWRITE);
+}
+
+static void parse_forced_key_frames(char *kf, OutputStream *ost,
+                                    AVCodecContext *avctx)
+{
+    char *p;
+    int n = 1, i, size, index = 0;
+    int64_t t, *pts;
+
+    for (p = kf; *p; p++)
+        if (*p == ',')
+            n++;
+    size = n;
+    pts = av_malloc_array(size, sizeof(*pts));
+    if (!pts) {
+        av_log(NULL, AV_LOG_FATAL, "Could not allocate forced key frames array.\n");
+        exit_program(1);
+    }
+
+    p = kf;
+    for (i = 0; i < n; i++) {
+        char *next = strchr(p, ',');
+
+        if (next)
+            *next++ = 0;
+
+        if (!memcmp(p, "chapters", 8)) {
+
+            AVFormatContext *avf = output_files[ost->file_index]->ctx;
+            int j;
+
+            if (avf->nb_chapters > INT_MAX - size ||
+                !(pts = av_realloc_f(pts, size += avf->nb_chapters - 1,
+                                     sizeof(*pts)))) {
+                av_log(NULL, AV_LOG_FATAL,
+                       "Could not allocate forced key frames array.\n");
+                exit_program(1);
+            }
+            t = p[8] ? parse_time_or_die("force_key_frames", p + 8, 1) : 0;
+            t = av_rescale_q(t, AV_TIME_BASE_Q, avctx->time_base);
+
+            for (j = 0; j < avf->nb_chapters; j++) {
+                AVChapter *c = avf->chapters[j];
+                av_assert1(index < size);
+                pts[index++] = av_rescale_q(c->start, c->time_base,
+                                            avctx->time_base) + t;
+            }
+
+        } else {
+
+            t = parse_time_or_die("force_key_frames", p, 1);
+            av_assert1(index < size);
+            pts[index++] = av_rescale_q(t, AV_TIME_BASE_Q, avctx->time_base);
+
+        }
+
+        p = next;
+    }
+
+    av_assert0(index == size);
+    qsort(pts, size, sizeof(*pts), compare_int64);
+    ost->forced_kf_count = size;
+    ost->forced_kf_pts   = pts;
+}
+
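+// ost->enc_timebase is set from the -enc_time_base option: a positive value is
+// used as given, a negative value means "copy the input stream's time base",
+// and zero falls through to the caller-supplied default.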
+static void init_encoder_time_base(OutputStream *ost, AVRational default_time_base)
+{
+    InputStream *ist = get_input_stream(ost);
+    AVCodecContext *enc_ctx = ost->enc_ctx;
+    AVFormatContext *oc;
+
+    if (ost->enc_timebase.num > 0) {
+        enc_ctx->time_base = ost->enc_timebase;
+        return;
+    }
+
+    if (ost->enc_timebase.num < 0) {
+        if (ist) {
+            enc_ctx->time_base = ist->st->time_base;
+            return;
+        }
+
+        oc = output_files[ost->file_index]->ctx;
+        av_log(oc, AV_LOG_WARNING, "Input stream data not available, using default time base\n");
+    }
+
+    enc_ctx->time_base = default_time_base;
+}
+
+static int init_output_stream_encode(OutputStream *ost)
+{
+    InputStream *ist = get_input_stream(ost);
+    AVCodecContext *enc_ctx = ost->enc_ctx;
+    AVCodecContext *dec_ctx = NULL;
+    AVFormatContext *oc = output_files[ost->file_index]->ctx;
+    int j, ret;
+
+    set_encoder_id(output_files[ost->file_index], ost);
+
+    // Muxers use AV_PKT_DATA_DISPLAYMATRIX to signal rotation. On the other
+    // hand, the legacy API makes demuxers set "rotate" metadata entries,
+    // which have to be filtered out to prevent leaking them to output files.
+    av_dict_set(&ost->st->metadata, "rotate", NULL, 0);
+
+    if (ist) {
+        ost->st->disposition          = ist->st->disposition;
+
+        dec_ctx = ist->dec_ctx;
+
+        enc_ctx->chroma_sample_location = dec_ctx->chroma_sample_location;
+    } else {
+        for (j = 0; j < oc->nb_streams; j++) {
+            AVStream *st = oc->streams[j];
+            if (st != ost->st && st->codecpar->codec_type == ost->st->codecpar->codec_type)
+                break;
+        }
+        if (j == oc->nb_streams)
+            if (ost->st->codecpar->codec_type == AVMEDIA_TYPE_AUDIO ||
+                ost->st->codecpar->codec_type == AVMEDIA_TYPE_VIDEO)
+                ost->st->disposition = AV_DISPOSITION_DEFAULT;
+    }
+
+    if (enc_ctx->codec_type == AVMEDIA_TYPE_VIDEO) {
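+        // Frame rate precedence: an explicitly requested rate, then the
+        // filtergraph sink, the input -r value, the input stream's
+        // r_frame_rate, and finally a 25 fps fallback.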
+        if (!ost->frame_rate.num)
+            ost->frame_rate = av_buffersink_get_frame_rate(ost->filter->filter);
+        if (ist && !ost->frame_rate.num)
+            ost->frame_rate = ist->framerate;
+        if (ist && !ost->frame_rate.num)
+            ost->frame_rate = ist->st->r_frame_rate;
+        if (ist && !ost->frame_rate.num) {
+            ost->frame_rate = (AVRational){25, 1};
+            av_log(NULL, AV_LOG_WARNING,
+                   "No information "
+                   "about the input framerate is available. Falling "
+                   "back to a default value of 25fps for output stream #%d:%d. Use the -r option "
+                   "if you want a different framerate.\n",
+                   ost->file_index, ost->index);
+        }
+//      ost->frame_rate = ist->st->avg_frame_rate.num ? ist->st->avg_frame_rate : (AVRational){25, 1};
+        if (ost->enc->supported_framerates && !ost->force_fps) {
+            int idx = av_find_nearest_q_idx(ost->frame_rate, ost->enc->supported_framerates);
+            ost->frame_rate = ost->enc->supported_framerates[idx];
+        }
+        // reduce frame rate for mpeg4 to be within the spec limits
+        if (enc_ctx->codec_id == AV_CODEC_ID_MPEG4) {
+            av_reduce(&ost->frame_rate.num, &ost->frame_rate.den,
+                      ost->frame_rate.num, ost->frame_rate.den, 65535);
+        }
+    }
+
+    switch (enc_ctx->codec_type) {
+    case AVMEDIA_TYPE_AUDIO:
+        enc_ctx->sample_fmt     = av_buffersink_get_format(ost->filter->filter);
+        if (dec_ctx)
+            enc_ctx->bits_per_raw_sample = FFMIN(dec_ctx->bits_per_raw_sample,
+                                                 av_get_bytes_per_sample(enc_ctx->sample_fmt) << 3);
+        enc_ctx->sample_rate    = av_buffersink_get_sample_rate(ost->filter->filter);
+        enc_ctx->channel_layout = av_buffersink_get_channel_layout(ost->filter->filter);
+        enc_ctx->channels       = av_buffersink_get_channels(ost->filter->filter);
+
+        init_encoder_time_base(ost, av_make_q(1, enc_ctx->sample_rate));
+        break;
+
+    case AVMEDIA_TYPE_VIDEO:
+        init_encoder_time_base(ost, av_inv_q(ost->frame_rate));
+
+        if (!(enc_ctx->time_base.num && enc_ctx->time_base.den))
+            enc_ctx->time_base = av_buffersink_get_time_base(ost->filter->filter);
+        if (   av_q2d(enc_ctx->time_base) < 0.001 && video_sync_method != VSYNC_PASSTHROUGH
+           && (video_sync_method == VSYNC_CFR || video_sync_method == VSYNC_VSCFR || (video_sync_method == VSYNC_AUTO && !(oc->oformat->flags & AVFMT_VARIABLE_FPS)))){
+            av_log(oc, AV_LOG_WARNING, "Frame rate very high for a muxer not efficiently supporting it.\n"
+                                       "Please consider specifying a lower framerate, a different muxer or -vsync 2\n");
+        }
+        for (j = 0; j < ost->forced_kf_count; j++)
+            ost->forced_kf_pts[j] = av_rescale_q(ost->forced_kf_pts[j],
+                                                 AV_TIME_BASE_Q,
+                                                 enc_ctx->time_base);
+
+        enc_ctx->width  = av_buffersink_get_w(ost->filter->filter);
+        enc_ctx->height = av_buffersink_get_h(ost->filter->filter);
+        enc_ctx->sample_aspect_ratio = ost->st->sample_aspect_ratio =
+            ost->frame_aspect_ratio.num ? // overridden by the -aspect cli option
+            av_mul_q(ost->frame_aspect_ratio, (AVRational){ enc_ctx->height, enc_ctx->width }) :
+            av_buffersink_get_sample_aspect_ratio(ost->filter->filter);
+
+        enc_ctx->pix_fmt = av_buffersink_get_format(ost->filter->filter);
+        if (dec_ctx)
+            enc_ctx->bits_per_raw_sample = FFMIN(dec_ctx->bits_per_raw_sample,
+                                                 av_pix_fmt_desc_get(enc_ctx->pix_fmt)->comp[0].depth);
+
+        enc_ctx->framerate = ost->frame_rate;
+
+        ost->st->avg_frame_rate = ost->frame_rate;
+
+        if (!dec_ctx ||
+            enc_ctx->width   != dec_ctx->width  ||
+            enc_ctx->height  != dec_ctx->height ||
+            enc_ctx->pix_fmt != dec_ctx->pix_fmt) {
+            enc_ctx->bits_per_raw_sample = frame_bits_per_raw_sample;
+        }
+
+        if (ost->forced_keyframes) {
+            if (!strncmp(ost->forced_keyframes, "expr:", 5)) {
+                ret = av_expr_parse(&ost->forced_keyframes_pexpr, ost->forced_keyframes+5,
+                                    forced_keyframes_const_names, NULL, NULL, NULL, NULL, 0, NULL);
+                if (ret < 0) {
+                    av_log(NULL, AV_LOG_ERROR,
+                           "Invalid force_key_frames expression '%s'\n", ost->forced_keyframes+5);
+                    return ret;
+                }
+                ost->forced_keyframes_expr_const_values[FKF_N] = 0;
+                ost->forced_keyframes_expr_const_values[FKF_N_FORCED] = 0;
+                ost->forced_keyframes_expr_const_values[FKF_PREV_FORCED_N] = NAN;
+                ost->forced_keyframes_expr_const_values[FKF_PREV_FORCED_T] = NAN;
+
+            // Don't parse the 'forced_keyframes' in case of 'keep-source-keyframes',
+            // parse it only for static kf timings
+            } else if(strncmp(ost->forced_keyframes, "source", 6)) {
+                parse_forced_key_frames(ost->forced_keyframes, ost, ost->enc_ctx);
+            }
+        }
+        break;
+    case AVMEDIA_TYPE_SUBTITLE:
+        enc_ctx->time_base = AV_TIME_BASE_Q;
+        if (!enc_ctx->width) {
+            enc_ctx->width     = input_streams[ost->source_index]->st->codecpar->width;
+            enc_ctx->height    = input_streams[ost->source_index]->st->codecpar->height;
+        }
+        break;
+    case AVMEDIA_TYPE_DATA:
+        break;
+    default:
+        abort();
+        break;
+    }
+
+    ost->mux_timebase = enc_ctx->time_base;
+
+    return 0;
+}
+
+static int init_output_stream(OutputStream *ost, char *error, int error_len)
+{
+    int ret = 0;
+
+    if (ost->encoding_needed) {
+        AVCodec      *codec = ost->enc;
+        AVCodecContext *dec = NULL;
+        InputStream *ist;
+
+        ret = init_output_stream_encode(ost);
+        if (ret < 0)
+            return ret;
+
+        if ((ist = get_input_stream(ost)))
+            dec = ist->dec_ctx;
+        if (dec && dec->subtitle_header) {
+            /* ASS code assumes this buffer is null terminated so add extra byte. */
+            ost->enc_ctx->subtitle_header = av_mallocz(dec->subtitle_header_size + 1);
+            if (!ost->enc_ctx->subtitle_header)
+                return AVERROR(ENOMEM);
+            memcpy(ost->enc_ctx->subtitle_header, dec->subtitle_header, dec->subtitle_header_size);
+            ost->enc_ctx->subtitle_header_size = dec->subtitle_header_size;
+        }
+        if (!av_dict_get(ost->encoder_opts, "threads", NULL, 0))
+            av_dict_set(&ost->encoder_opts, "threads", "auto", 0);
+        if (ost->enc->type == AVMEDIA_TYPE_AUDIO &&
+            !codec->defaults &&
+            !av_dict_get(ost->encoder_opts, "b", NULL, 0) &&
+            !av_dict_get(ost->encoder_opts, "ab", NULL, 0))
+            av_dict_set(&ost->encoder_opts, "b", "128000", 0);
+
+        if (ost->filter && av_buffersink_get_hw_frames_ctx(ost->filter->filter) &&
+            ((AVHWFramesContext*)av_buffersink_get_hw_frames_ctx(ost->filter->filter)->data)->format ==
+            av_buffersink_get_format(ost->filter->filter)) {
+            ost->enc_ctx->hw_frames_ctx = av_buffer_ref(av_buffersink_get_hw_frames_ctx(ost->filter->filter));
+            if (!ost->enc_ctx->hw_frames_ctx)
+                return AVERROR(ENOMEM);
+        } else {
+            ret = hw_device_setup_for_encode(ost);
+            if (ret < 0) {
+                snprintf(error, error_len, "Device setup failed for "
+                         "encoder on output stream #%d:%d : %s",
+                     ost->file_index, ost->index, av_err2str(ret));
+                return ret;
+            }
+        }
+
+        if ((ret = avcodec_open2(ost->enc_ctx, codec, &ost->encoder_opts)) < 0) {
+            if (ret == AVERROR_EXPERIMENTAL)
+                abort_codec_experimental(codec, 1);
+            snprintf(error, error_len,
+                     "Error while opening encoder for output stream #%d:%d - "
+                     "maybe incorrect parameters such as bit_rate, rate, width or height",
+                    ost->file_index, ost->index);
+            return ret;
+        }
+        if (ost->enc->type == AVMEDIA_TYPE_AUDIO &&
+            !(ost->enc->capabilities & AV_CODEC_CAP_VARIABLE_FRAME_SIZE))
+            av_buffersink_set_frame_size(ost->filter->filter,
+                                            ost->enc_ctx->frame_size);
+        assert_avoptions(ost->encoder_opts);
+        if (ost->enc_ctx->bit_rate && ost->enc_ctx->bit_rate < 1000 &&
+            ost->enc_ctx->codec_id != AV_CODEC_ID_CODEC2 /* don't complain about 700 bit/s modes */)
+            av_log(NULL, AV_LOG_WARNING, "The bitrate parameter is set too low."
+                                         " It takes bits/s as argument, not kbits/s\n");
+
+        ret = avcodec_parameters_from_context(ost->st->codecpar, ost->enc_ctx);
+        if (ret < 0) {
+            av_log(NULL, AV_LOG_FATAL,
+                   "Error initializing the output stream codec context.\n");
+            exit_program(1);
+        }
+        /*
+         * FIXME: ost->st->codec shouldn't be needed here anymore.
+         */
+        ret = avcodec_copy_context(ost->st->codec, ost->enc_ctx);
+        if (ret < 0)
+            return ret;
+
+        if (ost->enc_ctx->nb_coded_side_data) {
+            int i;
+
+            for (i = 0; i < ost->enc_ctx->nb_coded_side_data; i++) {
+                const AVPacketSideData *sd_src = &ost->enc_ctx->coded_side_data[i];
+                uint8_t *dst_data;
+
+                dst_data = av_stream_new_side_data(ost->st, sd_src->type, sd_src->size);
+                if (!dst_data)
+                    return AVERROR(ENOMEM);
+                memcpy(dst_data, sd_src->data, sd_src->size);
+            }
+        }
+
+        /*
+         * Add global input side data. For now this is naive, and copies it
+         * from the input stream's global side data. All side data should
+         * really be funneled over AVFrame and libavfilter, then added back to
+         * packet side data, and then potentially using the first packet for
+         * global side data.
+         */
+        if (ist) {
+            int i;
+            for (i = 0; i < ist->st->nb_side_data; i++) {
+                AVPacketSideData *sd = &ist->st->side_data[i];
+                uint8_t *dst = av_stream_new_side_data(ost->st, sd->type, sd->size);
+                if (!dst)
+                    return AVERROR(ENOMEM);
+                memcpy(dst, sd->data, sd->size);
+                if (ist->autorotate && sd->type == AV_PKT_DATA_DISPLAYMATRIX)
+                    av_display_rotation_set((uint32_t *)dst, 0);
+            }
+        }
+
+        // copy timebase while removing common factors
+        if (ost->st->time_base.num <= 0 || ost->st->time_base.den <= 0)
+            ost->st->time_base = av_add_q(ost->enc_ctx->time_base, (AVRational){0, 1});
+
+        // copy estimated duration as a hint to the muxer
+        if (ost->st->duration <= 0 && ist && ist->st->duration > 0)
+            ost->st->duration = av_rescale_q(ist->st->duration, ist->st->time_base, ost->st->time_base);
+
+        ost->st->codec->codec= ost->enc_ctx->codec;
+    } else if (ost->stream_copy) {
+        ret = init_output_stream_streamcopy(ost);
+        if (ret < 0)
+            return ret;
+    }
+
+    // parse user provided disposition, and update stream values
+    if (ost->disposition) {
+        static const AVOption opts[] = {
+            { "disposition"         , NULL, 0, AV_OPT_TYPE_FLAGS, { .i64 = 0 }, INT64_MIN, INT64_MAX, .unit = "flags" },
+            { "default"             , NULL, 0, AV_OPT_TYPE_CONST, { .i64 = AV_DISPOSITION_DEFAULT           },    .unit = "flags" },
+            { "dub"                 , NULL, 0, AV_OPT_TYPE_CONST, { .i64 = AV_DISPOSITION_DUB               },    .unit = "flags" },
+            { "original"            , NULL, 0, AV_OPT_TYPE_CONST, { .i64 = AV_DISPOSITION_ORIGINAL          },    .unit = "flags" },
+            { "comment"             , NULL, 0, AV_OPT_TYPE_CONST, { .i64 = AV_DISPOSITION_COMMENT           },    .unit = "flags" },
+            { "lyrics"              , NULL, 0, AV_OPT_TYPE_CONST, { .i64 = AV_DISPOSITION_LYRICS            },    .unit = "flags" },
+            { "karaoke"             , NULL, 0, AV_OPT_TYPE_CONST, { .i64 = AV_DISPOSITION_KARAOKE           },    .unit = "flags" },
+            { "forced"              , NULL, 0, AV_OPT_TYPE_CONST, { .i64 = AV_DISPOSITION_FORCED            },    .unit = "flags" },
+            { "hearing_impaired"    , NULL, 0, AV_OPT_TYPE_CONST, { .i64 = AV_DISPOSITION_HEARING_IMPAIRED  },    .unit = "flags" },
+            { "visual_impaired"     , NULL, 0, AV_OPT_TYPE_CONST, { .i64 = AV_DISPOSITION_VISUAL_IMPAIRED   },    .unit = "flags" },
+            { "clean_effects"       , NULL, 0, AV_OPT_TYPE_CONST, { .i64 = AV_DISPOSITION_CLEAN_EFFECTS     },    .unit = "flags" },
+            { "attached_pic"        , NULL, 0, AV_OPT_TYPE_CONST, { .i64 = AV_DISPOSITION_ATTACHED_PIC      },    .unit = "flags" },
+            { "captions"            , NULL, 0, AV_OPT_TYPE_CONST, { .i64 = AV_DISPOSITION_CAPTIONS          },    .unit = "flags" },
+            { "descriptions"        , NULL, 0, AV_OPT_TYPE_CONST, { .i64 = AV_DISPOSITION_DESCRIPTIONS      },    .unit = "flags" },
+            { "dependent"           , NULL, 0, AV_OPT_TYPE_CONST, { .i64 = AV_DISPOSITION_DEPENDENT         },    .unit = "flags" },
+            { "metadata"            , NULL, 0, AV_OPT_TYPE_CONST, { .i64 = AV_DISPOSITION_METADATA          },    .unit = "flags" },
+            { NULL },
+        };
+        static const AVClass class = {
+            .class_name = "",
+            .item_name  = av_default_item_name,
+            .option     = opts,
+            .version    = LIBAVUTIL_VERSION_INT,
+        };
+        const AVClass *pclass = &class;
+
+        ret = av_opt_eval_flags(&pclass, &opts[0], ost->disposition, &ost->st->disposition);
+        if (ret < 0)
+            return ret;
+    }
+
+    /* initialize bitstream filters for the output stream
+     * needs to be done here, because the codec id for streamcopy is not
+     * known until now */
+    ret = init_output_bsfs(ost);
+    if (ret < 0)
+        return ret;
+
+    ost->initialized = 1;
+
+    ret = check_init_output_file(output_files[ost->file_index], ost->file_index);
+    if (ret < 0)
+        return ret;
+
+    return ret;
+}
+
+static void report_new_stream(int input_index, AVPacket *pkt)
+{
+    InputFile *file = input_files[input_index];
+    AVStream *st = file->ctx->streams[pkt->stream_index];
+
+    if (pkt->stream_index < file->nb_streams_warn)
+        return;
+    av_log(file->ctx, AV_LOG_WARNING,
+           "New %s stream %d:%d at pos:%"PRId64" and DTS:%ss\n",
+           av_get_media_type_string(st->codecpar->codec_type),
+           input_index, pkt->stream_index,
+           pkt->pos, av_ts2timestr(pkt->dts, &st->time_base));
+    file->nb_streams_warn = pkt->stream_index + 1;
+}
+
+static int transcode_init(void)
+{
+    int ret = 0, i, j, k;
+    AVFormatContext *oc;
+    OutputStream *ost;
+    InputStream *ist;
+    char error[1024] = {0};
+
+    for (i = 0; i < nb_filtergraphs; i++) {
+        FilterGraph *fg = filtergraphs[i];
+        for (j = 0; j < fg->nb_outputs; j++) {
+            OutputFilter *ofilter = fg->outputs[j];
+            if (!ofilter->ost || ofilter->ost->source_index >= 0)
+                continue;
+            if (fg->nb_inputs != 1)
+                continue;
+            for (k = nb_input_streams-1; k >= 0 ; k--)
+                if (fg->inputs[0]->ist == input_streams[k])
+                    break;
+            ofilter->ost->source_index = k;
+        }
+    }
+
+    /* init framerate emulation */
+    for (i = 0; i < nb_input_files; i++) {
+        InputFile *ifile = input_files[i];
+        if (ifile->rate_emu)
+            for (j = 0; j < ifile->nb_streams; j++)
+                input_streams[j + ifile->ist_index]->start = av_gettime_relative();
+    }
+
+    /* init input streams */
+    for (i = 0; i < nb_input_streams; i++)
+        if ((ret = init_input_stream(i, error, sizeof(error))) < 0) {
+            for (i = 0; i < nb_output_streams; i++) {
+                ost = output_streams[i];
+                avcodec_close(ost->enc_ctx);
+            }
+            goto dump_format;
+        }
+
+    /* open each encoder */
+    for (i = 0; i < nb_output_streams; i++) {
+        // skip streams fed from filtergraphs until we have a frame for them
+        if (output_streams[i]->filter)
+            continue;
+
+        ret = init_output_stream(output_streams[i], error, sizeof(error));
+        if (ret < 0)
+            goto dump_format;
+    }
+
+    /* discard unused programs */
+    for (i = 0; i < nb_input_files; i++) {
+        InputFile *ifile = input_files[i];
+        for (j = 0; j < ifile->ctx->nb_programs; j++) {
+            AVProgram *p = ifile->ctx->programs[j];
+            int discard  = AVDISCARD_ALL;
+
+            for (k = 0; k < p->nb_stream_indexes; k++)
+                if (!input_streams[ifile->ist_index + p->stream_index[k]]->discard) {
+                    discard = AVDISCARD_DEFAULT;
+                    break;
+                }
+            p->discard = discard;
+        }
+    }
+
+    /* write headers for files with no streams */
+    for (i = 0; i < nb_output_files; i++) {
+        oc = output_files[i]->ctx;
+        if (oc->oformat->flags & AVFMT_NOSTREAMS && oc->nb_streams == 0) {
+            ret = check_init_output_file(output_files[i], i);
+            if (ret < 0)
+                goto dump_format;
+        }
+    }
+
+ dump_format:
+    /* dump the stream mapping */
+    av_log(NULL, AV_LOG_INFO, "Stream mapping:\n");
+    for (i = 0; i < nb_input_streams; i++) {
+        ist = input_streams[i];
+
+        for (j = 0; j < ist->nb_filters; j++) {
+            if (!filtergraph_is_simple(ist->filters[j]->graph)) {
+                av_log(NULL, AV_LOG_INFO, "  Stream #%d:%d (%s) -> %s",
+                       ist->file_index, ist->st->index, ist->dec ? ist->dec->name : "?",
+                       ist->filters[j]->name);
+                if (nb_filtergraphs > 1)
+                    av_log(NULL, AV_LOG_INFO, " (graph %d)", ist->filters[j]->graph->index);
+                av_log(NULL, AV_LOG_INFO, "\n");
+            }
+        }
+    }
+
+    for (i = 0; i < nb_output_streams; i++) {
+        ost = output_streams[i];
+
+        if (ost->attachment_filename) {
+            /* an attached file */
+            av_log(NULL, AV_LOG_INFO, "  File %s -> Stream #%d:%d\n",
+                   ost->attachment_filename, ost->file_index, ost->index);
+            continue;
+        }
+
+        if (ost->filter && !filtergraph_is_simple(ost->filter->graph)) {
+            /* output from a complex graph */
+            av_log(NULL, AV_LOG_INFO, "  %s", ost->filter->name);
+            if (nb_filtergraphs > 1)
+                av_log(NULL, AV_LOG_INFO, " (graph %d)", ost->filter->graph->index);
+
+            av_log(NULL, AV_LOG_INFO, " -> Stream #%d:%d (%s)\n", ost->file_index,
+                   ost->index, ost->enc ? ost->enc->name : "?");
+            continue;
+        }
+
+        av_log(NULL, AV_LOG_INFO, "  Stream #%d:%d -> #%d:%d",
+               input_streams[ost->source_index]->file_index,
+               input_streams[ost->source_index]->st->index,
+               ost->file_index,
+               ost->index);
+        if (ost->sync_ist != input_streams[ost->source_index])
+            av_log(NULL, AV_LOG_INFO, " [sync #%d:%d]",
+                   ost->sync_ist->file_index,
+                   ost->sync_ist->st->index);
+        if (ost->stream_copy)
+            av_log(NULL, AV_LOG_INFO, " (copy)");
+        else {
+            const AVCodec *in_codec    = input_streams[ost->source_index]->dec;
+            const AVCodec *out_codec   = ost->enc;
+            const char *decoder_name   = "?";
+            const char *in_codec_name  = "?";
+            const char *encoder_name   = "?";
+            const char *out_codec_name = "?";
+            const AVCodecDescriptor *desc;
+
+            if (in_codec) {
+                decoder_name  = in_codec->name;
+                desc = avcodec_descriptor_get(in_codec->id);
+                if (desc)
+                    in_codec_name = desc->name;
+                if (!strcmp(decoder_name, in_codec_name))
+                    decoder_name = "native";
+            }
+
+            if (out_codec) {
+                encoder_name   = out_codec->name;
+                desc = avcodec_descriptor_get(out_codec->id);
+                if (desc)
+                    out_codec_name = desc->name;
+                if (!strcmp(encoder_name, out_codec_name))
+                    encoder_name = "native";
+            }
+
+            av_log(NULL, AV_LOG_INFO, " (%s (%s) -> %s (%s))",
+                   in_codec_name, decoder_name,
+                   out_codec_name, encoder_name);
+        }
+        av_log(NULL, AV_LOG_INFO, "\n");
+    }
+
+    if (ret) {
+        av_log(NULL, AV_LOG_ERROR, "%s\n", error);
+        return ret;
+    }
+
+    atomic_store(&transcode_init_done, 1);
+
+    return 0;
+}
+
+/* Return 1 if there remain streams where more output is wanted, 0 otherwise. */
+static int need_output(void)
+{
+    int i;
+
+    for (i = 0; i < nb_output_streams; i++) {
+        OutputStream *ost    = output_streams[i];
+        OutputFile *of       = output_files[ost->file_index];
+        AVFormatContext *os  = output_files[ost->file_index]->ctx;
+
+        if (ost->finished ||
+            (os->pb && avio_tell(os->pb) >= of->limit_filesize))
+            continue;
+        if (ost->frame_number >= ost->max_frames) {
+            int j;
+            for (j = 0; j < of->ctx->nb_streams; j++)
+                close_output_stream(output_streams[of->ost_index + j]);
+            continue;
+        }
+
+        return 1;
+    }
+
+    return 0;
+}
+
+/**
+ * Select the output stream to process.
+ *
+ * @return  selected output stream, or NULL if none available
+ */
+static OutputStream *choose_output(void)
+{
+    int i;
+    int64_t opts_min = INT64_MAX;
+    OutputStream *ost_min = NULL;
+
+    for (i = 0; i < nb_output_streams; i++) {
+        OutputStream *ost = output_streams[i];
+        int64_t opts = ost->st->cur_dts == AV_NOPTS_VALUE ? INT64_MIN :
+                       av_rescale_q(ost->st->cur_dts, ost->st->time_base,
+                                    AV_TIME_BASE_Q);
+        if (ost->st->cur_dts == AV_NOPTS_VALUE)
+            av_log(NULL, AV_LOG_DEBUG, "cur_dts is invalid (this is harmless if it occurs once at the start per stream)\n");
+
+        if (!ost->initialized && !ost->inputs_done)
+            return ost;
+
+        if (!ost->finished && opts < opts_min) {
+            opts_min = opts;
+            ost_min  = ost->unavailable ? NULL : ost;
+        }
+    }
+    return ost_min;
+}
+
+static void set_tty_echo(int on)
+{
+#if HAVE_TERMIOS_H
+    struct termios tty;
+    if (tcgetattr(0, &tty) == 0) {
+        if (on) tty.c_lflag |= ECHO;
+        else    tty.c_lflag &= ~ECHO;
+        tcsetattr(0, TCSANOW, &tty);
+    }
+#endif
+}
+
+static int check_keyboard_interaction(int64_t cur_time)
+{
+    int i, ret, key;
+    static int64_t last_time;
+    if (received_nb_signals)
+        return AVERROR_EXIT;
+    /* read_key() returns 0 on EOF */
+    if(cur_time - last_time >= 100000 && !run_as_daemon){
+        key =  read_key();
+        last_time = cur_time;
+    }else
+        key = -1;
+    if (key == 'q')
+        return AVERROR_EXIT;
+    if (key == '+') av_log_set_level(av_log_get_level()+10);
+    if (key == '-') av_log_set_level(av_log_get_level()-10);
+    if (key == 's') qp_hist     ^= 1;
+    if (key == 'h'){
+        if (do_hex_dump){
+            do_hex_dump = do_pkt_dump = 0;
+        } else if(do_pkt_dump){
+            do_hex_dump = 1;
+        } else
+            do_pkt_dump = 1;
+        av_log_set_level(AV_LOG_DEBUG);
+    }
+    if (key == 'c' || key == 'C'){
+        char buf[4096], target[64], command[256], arg[256] = {0};
+        double time;
+        int k, n = 0;
+        fprintf(stderr, "\nEnter command: <target>|all <time>|-1 <command>[ <argument>]\n");
+        i = 0;
+        set_tty_echo(1);
+        while ((k = read_key()) != '\n' && k != '\r' && i < sizeof(buf)-1)
+            if (k > 0)
+                buf[i++] = k;
+        buf[i] = 0;
+        set_tty_echo(0);
+        fprintf(stderr, "\n");
+        if (k > 0 &&
+            (n = sscanf(buf, "%63[^ ] %lf %255[^ ] %255[^\n]", target, &time, command, arg)) >= 3) {
+            av_log(NULL, AV_LOG_DEBUG, "Processing command target:%s time:%f command:%s arg:%s",
+                   target, time, command, arg);
+            for (i = 0; i < nb_filtergraphs; i++) {
+                FilterGraph *fg = filtergraphs[i];
+                if (fg->graph) {
+                    if (time < 0) {
+                        ret = avfilter_graph_send_command(fg->graph, target, command, arg, buf, sizeof(buf),
+                                                          key == 'c' ? AVFILTER_CMD_FLAG_ONE : 0);
+                        fprintf(stderr, "Command reply for stream %d: ret:%d res:\n%s", i, ret, buf);
+                    } else if (key == 'c') {
+                        fprintf(stderr, "Queuing commands only on filters supporting the specific command is unsupported\n");
+                        ret = AVERROR_PATCHWELCOME;
+                    } else {
+                        ret = avfilter_graph_queue_command(fg->graph, target, command, arg, 0, time);
+                        if (ret < 0)
+                            fprintf(stderr, "Queuing command failed with error %s\n", av_err2str(ret));
+                    }
+                }
+            }
+        } else {
+            av_log(NULL, AV_LOG_ERROR,
+                   "Parse error, at least 3 arguments were expected, "
+                   "only %d given in string '%s'\n", n, buf);
+        }
+    }
+    if (key == 'd' || key == 'D'){
+        int debug=0;
+        if(key == 'D') {
+            debug = input_streams[0]->st->codec->debug<<1;
+            if(!debug) debug = 1;
+            while(debug & (FF_DEBUG_DCT_COEFF
+#if FF_API_DEBUG_MV
+                                             |FF_DEBUG_VIS_QP|FF_DEBUG_VIS_MB_TYPE
+#endif
+                                                                                  )) //unsupported, would just crash
+                debug += debug;
+        }else{
+            char buf[32];
+            int k = 0;
+            i = 0;
+            set_tty_echo(1);
+            while ((k = read_key()) != '\n' && k != '\r' && i < sizeof(buf)-1)
+                if (k > 0)
+                    buf[i++] = k;
+            buf[i] = 0;
+            set_tty_echo(0);
+            fprintf(stderr, "\n");
+            if (k <= 0 || sscanf(buf, "%d", &debug)!=1)
+                fprintf(stderr,"error parsing debug value\n");
+        }
+        for(i=0;i<nb_input_streams;i++) {
+            input_streams[i]->st->codec->debug = debug;
+        }
+        for(i=0;i<nb_output_streams;i++) {
+            OutputStream *ost = output_streams[i];
+            ost->enc_ctx->debug = debug;
+        }
+        if(debug) av_log_set_level(AV_LOG_DEBUG);
+        fprintf(stderr,"debug=%d\n", debug);
+    }
+    if (key == '?'){
+        fprintf(stderr, "key    function\n"
+                        "?      show this help\n"
+                        "+      increase verbosity\n"
+                        "-      decrease verbosity\n"
+                        "c      Send command to first matching filter supporting it\n"
+                        "C      Send/Queue command to all matching filters\n"
+                        "D      cycle through available debug modes\n"
+                        "h      dump packets/hex press to cycle through the 3 states\n"
+                        "q      quit\n"
+                        "s      Show QP histogram\n"
+        );
+    }
+    return 0;
+}
+
+#if HAVE_THREADS
+static void *input_thread(void *arg)
+{
+    InputFile *f = arg;
+    unsigned flags = f->non_blocking ? AV_THREAD_MESSAGE_NONBLOCK : 0;
+    int ret = 0;
+
+    while (1) {
+        AVPacket pkt;
+        ret = av_read_frame(f->ctx, &pkt);
+
+        if (ret == AVERROR(EAGAIN)) {
+            av_usleep(10000);
+            continue;
+        }
+        if (ret < 0) {
+            av_thread_message_queue_set_err_recv(f->in_thread_queue, ret);
+            break;
+        }
+        ret = av_thread_message_queue_send(f->in_thread_queue, &pkt, flags);
+        if (flags && ret == AVERROR(EAGAIN)) {
+            flags = 0;
+            ret = av_thread_message_queue_send(f->in_thread_queue, &pkt, flags);
+            av_log(f->ctx, AV_LOG_WARNING,
+                   "Thread message queue blocking; consider raising the "
+                   "thread_queue_size option (current value: %d)\n",
+                   f->thread_queue_size);
+        }
+        if (ret < 0) {
+            if (ret != AVERROR_EOF)
+                av_log(f->ctx, AV_LOG_ERROR,
+                       "Unable to send packet to main thread: %s\n",
+                       av_err2str(ret));
+            av_packet_unref(&pkt);
+            av_thread_message_queue_set_err_recv(f->in_thread_queue, ret);
+            break;
+        }
+    }
+
+    return NULL;
+}
+
+static void free_input_threads(void)
+{
+    int i;
+
+    for (i = 0; i < nb_input_files; i++) {
+        InputFile *f = input_files[i];
+        AVPacket pkt;
+
+        if (!f || !f->in_thread_queue)
+            continue;
+        av_thread_message_queue_set_err_send(f->in_thread_queue, AVERROR_EOF);
+        while (av_thread_message_queue_recv(f->in_thread_queue, &pkt, 0) >= 0)
+            av_packet_unref(&pkt);
+
+        pthread_join(f->thread, NULL);
+        f->joined = 1;
+        av_thread_message_queue_free(&f->in_thread_queue);
+    }
+}
+
+/* --vgtmpeg */
+/* figures out if an input file is entirely discarded by stream selection */
+static int is_discarded(InputFile *f)
+{
+    AVFormatContext *ic = f->ctx;
+    int i;
+    int is_discarded = 1;
+
+    /* the file is considered discarded only if every one of its streams is discarded */
+    for (i = 0; i < ic->nb_streams; i++)
+        is_discarded = is_discarded && (ic->streams[i]->discard == AVDISCARD_ALL);
+
+    return is_discarded;
+}
+/* --vgtmpeg */
+
+static int init_input_threads(void)
+{
+    int i, ret;
+
+    if (nb_input_files == 1)
+        return 0;
+
+    for (i = 0; i < nb_input_files; i++) {
+        InputFile *f = input_files[i];
+
+        /* --vgtmpeg */
+        /* if the input file is not used at all, don't bother creating an
+         * input thread for it
+         */
+        if (is_discarded(f)) {
+            f->joined = 1;
+            continue;
+        }
+        /* --vgtmpeg */
+
+        if (f->ctx->pb ? !f->ctx->pb->seekable :
+            strcmp(f->ctx->iformat->name, "lavfi"))
+            f->non_blocking = 1;
+        ret = av_thread_message_queue_alloc(&f->in_thread_queue,
+                                            f->thread_queue_size, sizeof(AVPacket));
+        if (ret < 0)
+            return ret;
+
+        if ((ret = pthread_create(&f->thread, NULL, input_thread, f))) {
+            av_log(NULL, AV_LOG_ERROR, "pthread_create failed: %s. Try to increase `ulimit -v` or decrease `ulimit -s`.\n", strerror(ret));
+            av_thread_message_queue_free(&f->in_thread_queue);
+            return AVERROR(ret);
+        }
+    }
+    return 0;
+}
+
+static int get_input_packet_mt(InputFile *f, AVPacket *pkt)
+{
+    return av_thread_message_queue_recv(f->in_thread_queue, pkt,
+                                        f->non_blocking ?
+                                        AV_THREAD_MESSAGE_NONBLOCK : 0);
+}
+#endif
+
+static int get_input_packet(InputFile *f, AVPacket *pkt)
+{
+    if (f->rate_emu) {
+        int i;
+        for (i = 0; i < f->nb_streams; i++) {
+            InputStream *ist = input_streams[f->ist_index + i];
+            int64_t pts = av_rescale(ist->dts, 1000000, AV_TIME_BASE);
+            int64_t now = av_gettime_relative() - ist->start;
+            if (pts > now)
+                return AVERROR(EAGAIN);
+        }
+    }
+
+#if HAVE_THREADS
+    if (nb_input_files > 1)
+        return get_input_packet_mt(f, pkt);
+#endif
+    return av_read_frame(f->ctx, pkt);
+}
+
+static int got_eagain(void)
+{
+    int i;
+    for (i = 0; i < nb_output_streams; i++)
+        if (output_streams[i]->unavailable)
+            return 1;
+    return 0;
+}
+
+static void reset_eagain(void)
+{
+    int i;
+    for (i = 0; i < nb_input_files; i++)
+        input_files[i]->eagain = 0;
+    for (i = 0; i < nb_output_streams; i++)
+        output_streams[i]->unavailable = 0;
+}
+
+// set duration to max(tmp, duration) in a proper time base and return duration's time_base
+static AVRational duration_max(int64_t tmp, int64_t *duration, AVRational tmp_time_base,
+                                AVRational time_base)
+{
+    int ret;
+
+    if (!*duration) {
+        *duration = tmp;
+        return tmp_time_base;
+    }
+
+    ret = av_compare_ts(*duration, time_base, tmp, tmp_time_base);
+    if (ret < 0) {
+        *duration = tmp;
+        return tmp_time_base;
+    }
+
+    return time_base;
+}
+
+static int seek_to_start(InputFile *ifile, AVFormatContext *is)
+{
+    InputStream *ist;
+    AVCodecContext *avctx;
+    int i, ret, has_audio = 0;
+    int64_t duration = 0;
+
+    ret = av_seek_frame(is, -1, is->start_time, 0);
+    if (ret < 0)
+        return ret;
+
+    for (i = 0; i < ifile->nb_streams; i++) {
+        ist   = input_streams[ifile->ist_index + i];
+        avctx = ist->dec_ctx;
+
+        /* duration is the length of the last frame in a stream;
+         * when an audio stream is present we don't care about the
+         * last video frame's length because it is not defined exactly */
+        if (avctx->codec_type == AVMEDIA_TYPE_AUDIO && ist->nb_samples)
+            has_audio = 1;
+    }
+
+    for (i = 0; i < ifile->nb_streams; i++) {
+        ist   = input_streams[ifile->ist_index + i];
+        avctx = ist->dec_ctx;
+
+        if (has_audio) {
+            if (avctx->codec_type == AVMEDIA_TYPE_AUDIO && ist->nb_samples) {
+                AVRational sample_rate = {1, avctx->sample_rate};
+
+                duration = av_rescale_q(ist->nb_samples, sample_rate, ist->st->time_base);
+            } else {
+                continue;
+            }
+        } else {
+            if (ist->framerate.num) {
+                duration = av_rescale_q(1, av_inv_q(ist->framerate), ist->st->time_base);
+            } else if (ist->st->avg_frame_rate.num) {
+                duration = av_rescale_q(1, av_inv_q(ist->st->avg_frame_rate), ist->st->time_base);
+            } else {
+                duration = 1;
+            }
+        }
+        if (!ifile->duration)
+            ifile->time_base = ist->st->time_base;
+        /* the total duration of the stream, max_pts - min_pts is
+         * the duration of the stream without the last frame */
+        duration += ist->max_pts - ist->min_pts;
+        ifile->time_base = duration_max(duration, &ifile->duration, ist->st->time_base,
+                                        ifile->time_base);
+    }
+
+    if (ifile->loop > 0)
+        ifile->loop--;
+
+    return ret;
+}
+
+/*
+ * Return
+ * - 0 -- one packet was read and processed
+ * - AVERROR(EAGAIN) -- no packets were available for selected file,
+ *   this function should be called again
+ * - AVERROR_EOF -- this function should not be called again
+ */
+static int process_input(int file_index)
+{
+    InputFile *ifile = input_files[file_index];
+    AVFormatContext *is;
+    InputStream *ist;
+    AVPacket pkt;
+    int ret, i, j;
+    int64_t duration;
+    int64_t pkt_dts;
+
+    is  = ifile->ctx;
+    ret = get_input_packet(ifile, &pkt);
+
+    if (ret == AVERROR(EAGAIN)) {
+        ifile->eagain = 1;
+        return ret;
+    }
+    if (ret < 0 && ifile->loop) {
+        AVCodecContext *avctx;
+        for (i = 0; i < ifile->nb_streams; i++) {
+            ist = input_streams[ifile->ist_index + i];
+            avctx = ist->dec_ctx;
+            if (ist->decoding_needed) {
+                ret = process_input_packet(ist, NULL, 1);
+                if (ret>0)
+                    return 0;
+                avcodec_flush_buffers(avctx);
+            }
+        }
+        ret = seek_to_start(ifile, is);
+        if (ret < 0)
+            av_log(NULL, AV_LOG_WARNING, "Seek to start failed.\n");
+        else
+            ret = get_input_packet(ifile, &pkt);
+        if (ret == AVERROR(EAGAIN)) {
+            ifile->eagain = 1;
+            return ret;
+        }
+    }
+    if (ret < 0) {
+        if (ret != AVERROR_EOF) {
+            print_error(is->url, ret);
+            if (exit_on_error)
+                exit_program(1);
+        }
+
+        for (i = 0; i < ifile->nb_streams; i++) {
+            ist = input_streams[ifile->ist_index + i];
+            if (ist->decoding_needed) {
+                ret = process_input_packet(ist, NULL, 0);
+                if (ret>0)
+                    return 0;
+            }
+
+            /* mark all outputs that don't go through lavfi as finished */
+            for (j = 0; j < nb_output_streams; j++) {
+                OutputStream *ost = output_streams[j];
+
+                if (ost->source_index == ifile->ist_index + i &&
+                    (ost->stream_copy || ost->enc->type == AVMEDIA_TYPE_SUBTITLE))
+                    finish_output_stream(ost);
+            }
+        }
+
+        ifile->eof_reached = 1;
+        return AVERROR(EAGAIN);
+    }
+
+    reset_eagain();
+
+    if (do_pkt_dump) {
+        av_pkt_dump_log2(NULL, AV_LOG_INFO, &pkt, do_hex_dump,
+                         is->streams[pkt.stream_index]);
+    }
+    /* the following test is needed in case new streams appear
+       dynamically in the stream: we ignore them */
+    if (pkt.stream_index >= ifile->nb_streams) {
+        report_new_stream(file_index, &pkt);
+        goto discard_packet;
+    }
+
+    ist = input_streams[ifile->ist_index + pkt.stream_index];
+
+    ist->data_size += pkt.size;
+    ist->nb_packets++;
+
+    if (ist->discard)
+        goto discard_packet;
+
+    if (exit_on_error && (pkt.flags & AV_PKT_FLAG_CORRUPT)) {
+        av_log(NULL, AV_LOG_FATAL, "%s: corrupt input packet in stream %d\n", is->url, pkt.stream_index);
+        exit_program(1);
+    }
+
+    if (debug_ts) {
+        av_log(NULL, AV_LOG_INFO, "demuxer -> ist_index:%d type:%s "
+               "next_dts:%s next_dts_time:%s next_pts:%s next_pts_time:%s pkt_pts:%s pkt_pts_time:%s pkt_dts:%s pkt_dts_time:%s off:%s off_time:%s\n",
+               ifile->ist_index + pkt.stream_index, av_get_media_type_string(ist->dec_ctx->codec_type),
+               av_ts2str(ist->next_dts), av_ts2timestr(ist->next_dts, &AV_TIME_BASE_Q),
+               av_ts2str(ist->next_pts), av_ts2timestr(ist->next_pts, &AV_TIME_BASE_Q),
+               av_ts2str(pkt.pts), av_ts2timestr(pkt.pts, &ist->st->time_base),
+               av_ts2str(pkt.dts), av_ts2timestr(pkt.dts, &ist->st->time_base),
+               av_ts2str(input_files[ist->file_index]->ts_offset),
+               av_ts2timestr(input_files[ist->file_index]->ts_offset, &AV_TIME_BASE_Q));
+    }
+
+    if(!ist->wrap_correction_done && is->start_time != AV_NOPTS_VALUE && ist->st->pts_wrap_bits < 64){
+        int64_t stime, stime2;
+        // Correcting starttime based on the enabled streams
+        // FIXME this ideally should be done before the first use of starttime but we do not know which are the enabled streams at that point.
+        //       so we instead do it here as part of discontinuity handling
+        if (   ist->next_dts == AV_NOPTS_VALUE
+            && ifile->ts_offset == -is->start_time
+            && (is->iformat->flags & AVFMT_TS_DISCONT)) {
+            int64_t new_start_time = INT64_MAX;
+            for (i=0; i<is->nb_streams; i++) {
+                AVStream *st = is->streams[i];
+                if(st->discard == AVDISCARD_ALL || st->start_time == AV_NOPTS_VALUE)
+                    continue;
+                new_start_time = FFMIN(new_start_time, av_rescale_q(st->start_time, st->time_base, AV_TIME_BASE_Q));
+            }
+            if (new_start_time > is->start_time) {
+                av_log(is, AV_LOG_VERBOSE, "Correcting start time by %"PRId64"\n", new_start_time - is->start_time);
+                ifile->ts_offset = -new_start_time;
+            }
+        }
+
+        stime = av_rescale_q(is->start_time, AV_TIME_BASE_Q, ist->st->time_base);
+        stime2= stime + (1ULL<<ist->st->pts_wrap_bits);
+        ist->wrap_correction_done = 1;
+
+        if(stime2 > stime && pkt.dts != AV_NOPTS_VALUE && pkt.dts > stime + (1LL<<(ist->st->pts_wrap_bits-1))) {
+            pkt.dts -= 1ULL<<ist->st->pts_wrap_bits;
+            ist->wrap_correction_done = 0;
+        }
+        if(stime2 > stime && pkt.pts != AV_NOPTS_VALUE && pkt.pts > stime + (1LL<<(ist->st->pts_wrap_bits-1))) {
+            pkt.pts -= 1ULL<<ist->st->pts_wrap_bits;
+            ist->wrap_correction_done = 0;
+        }
+    }
+
+    /* add the stream-global side data to the first packet */
+    if (ist->nb_packets == 1) {
+        for (i = 0; i < ist->st->nb_side_data; i++) {
+            AVPacketSideData *src_sd = &ist->st->side_data[i];
+            uint8_t *dst_data;
+
+            if (src_sd->type == AV_PKT_DATA_DISPLAYMATRIX)
+                continue;
+
+            if (av_packet_get_side_data(&pkt, src_sd->type, NULL))
+                continue;
+
+            dst_data = av_packet_new_side_data(&pkt, src_sd->type, src_sd->size);
+            if (!dst_data)
+                exit_program(1);
+
+            memcpy(dst_data, src_sd->data, src_sd->size);
+        }
+    }
+
+    if (pkt.dts != AV_NOPTS_VALUE)
+        pkt.dts += av_rescale_q(ifile->ts_offset, AV_TIME_BASE_Q, ist->st->time_base);
+    if (pkt.pts != AV_NOPTS_VALUE)
+        pkt.pts += av_rescale_q(ifile->ts_offset, AV_TIME_BASE_Q, ist->st->time_base);
+
+    if (pkt.pts != AV_NOPTS_VALUE)
+        pkt.pts *= ist->ts_scale;
+    if (pkt.dts != AV_NOPTS_VALUE)
+        pkt.dts *= ist->ts_scale;
+
+    pkt_dts = av_rescale_q_rnd(pkt.dts, ist->st->time_base, AV_TIME_BASE_Q, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
+    if ((ist->dec_ctx->codec_type == AVMEDIA_TYPE_VIDEO ||
+         ist->dec_ctx->codec_type == AVMEDIA_TYPE_AUDIO) &&
+        pkt_dts != AV_NOPTS_VALUE && ist->next_dts == AV_NOPTS_VALUE && !copy_ts
+        && (is->iformat->flags & AVFMT_TS_DISCONT) && ifile->last_ts != AV_NOPTS_VALUE) {
+        int64_t delta   = pkt_dts - ifile->last_ts;
+        if (delta < -1LL*dts_delta_threshold*AV_TIME_BASE ||
+            delta >  1LL*dts_delta_threshold*AV_TIME_BASE){
+            ifile->ts_offset -= delta;
+            av_log(NULL, AV_LOG_DEBUG,
+                   "Inter stream timestamp discontinuity %"PRId64", new offset= %"PRId64"\n",
+                   delta, ifile->ts_offset);
+            pkt.dts -= av_rescale_q(delta, AV_TIME_BASE_Q, ist->st->time_base);
+            if (pkt.pts != AV_NOPTS_VALUE)
+                pkt.pts -= av_rescale_q(delta, AV_TIME_BASE_Q, ist->st->time_base);
+        }
+    }
+
+    duration = av_rescale_q(ifile->duration, ifile->time_base, ist->st->time_base);
+    if (pkt.pts != AV_NOPTS_VALUE) {
+        pkt.pts += duration;
+        ist->max_pts = FFMAX(pkt.pts, ist->max_pts);
+        ist->min_pts = FFMIN(pkt.pts, ist->min_pts);
+    }
+
+    if (pkt.dts != AV_NOPTS_VALUE)
+        pkt.dts += duration;
+
+    pkt_dts = av_rescale_q_rnd(pkt.dts, ist->st->time_base, AV_TIME_BASE_Q, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
+    if ((ist->dec_ctx->codec_type == AVMEDIA_TYPE_VIDEO ||
+         ist->dec_ctx->codec_type == AVMEDIA_TYPE_AUDIO) &&
+         pkt_dts != AV_NOPTS_VALUE && ist->next_dts != AV_NOPTS_VALUE &&
+        !copy_ts) {
+        int64_t delta   = pkt_dts - ist->next_dts;
+        if (is->iformat->flags & AVFMT_TS_DISCONT) {
+            if (delta < -1LL*dts_delta_threshold*AV_TIME_BASE ||
+                delta >  1LL*dts_delta_threshold*AV_TIME_BASE ||
+                pkt_dts + AV_TIME_BASE/10 < FFMAX(ist->pts, ist->dts)) {
+                ifile->ts_offset -= delta;
+                av_log(NULL, AV_LOG_DEBUG,
+                       "timestamp discontinuity %"PRId64", new offset= %"PRId64"\n",
+                       delta, ifile->ts_offset);
+                pkt.dts -= av_rescale_q(delta, AV_TIME_BASE_Q, ist->st->time_base);
+                if (pkt.pts != AV_NOPTS_VALUE)
+                    pkt.pts -= av_rescale_q(delta, AV_TIME_BASE_Q, ist->st->time_base);
+            }
+        } else {
+            if ( delta < -1LL*dts_error_threshold*AV_TIME_BASE ||
+                 delta >  1LL*dts_error_threshold*AV_TIME_BASE) {
+                av_log(NULL, AV_LOG_WARNING, "DTS %"PRId64", next:%"PRId64" st:%d invalid dropping\n", pkt.dts, ist->next_dts, pkt.stream_index);
+                pkt.dts = AV_NOPTS_VALUE;
+            }
+            if (pkt.pts != AV_NOPTS_VALUE){
+                int64_t pkt_pts = av_rescale_q(pkt.pts, ist->st->time_base, AV_TIME_BASE_Q);
+                delta   = pkt_pts - ist->next_dts;
+                if ( delta < -1LL*dts_error_threshold*AV_TIME_BASE ||
+                     delta >  1LL*dts_error_threshold*AV_TIME_BASE) {
+                    av_log(NULL, AV_LOG_WARNING, "PTS %"PRId64", next:%"PRId64" invalid dropping st:%d\n", pkt.pts, ist->next_dts, pkt.stream_index);
+                    pkt.pts = AV_NOPTS_VALUE;
+                }
+            }
+        }
+    }
+
+    if (pkt.dts != AV_NOPTS_VALUE)
+        ifile->last_ts = av_rescale_q(pkt.dts, ist->st->time_base, AV_TIME_BASE_Q);
+
+    if (debug_ts) {
+        av_log(NULL, AV_LOG_INFO, "demuxer+ffmpeg -> ist_index:%d type:%s pkt_pts:%s pkt_pts_time:%s pkt_dts:%s pkt_dts_time:%s off:%s off_time:%s\n",
+               ifile->ist_index + pkt.stream_index, av_get_media_type_string(ist->dec_ctx->codec_type),
+               av_ts2str(pkt.pts), av_ts2timestr(pkt.pts, &ist->st->time_base),
+               av_ts2str(pkt.dts), av_ts2timestr(pkt.dts, &ist->st->time_base),
+               av_ts2str(input_files[ist->file_index]->ts_offset),
+               av_ts2timestr(input_files[ist->file_index]->ts_offset, &AV_TIME_BASE_Q));
+    }
+
+    sub2video_heartbeat(ist, pkt.pts);
+
+    process_input_packet(ist, &pkt, 0);
+
+discard_packet:
+    av_packet_unref(&pkt);
+
+    return 0;
+}
+
+/**
+ * Perform a step of transcoding for the specified filter graph.
+ *
+ * @param[in]  graph     filter graph to consider
+ * @param[out] best_ist  input stream where a frame would allow to continue
+ * @return  0 for success, <0 for error
+ */
+static int transcode_from_filter(FilterGraph *graph, InputStream **best_ist)
+{
+    int i, ret;
+    int nb_requests, nb_requests_max = 0;
+    InputFilter *ifilter;
+    InputStream *ist;
+
+    *best_ist = NULL;
+    ret = avfilter_graph_request_oldest(graph->graph);
+    if (ret >= 0)
+        return reap_filters(0);
+
+    if (ret == AVERROR_EOF) {
+        ret = reap_filters(1);
+        for (i = 0; i < graph->nb_outputs; i++)
+            close_output_stream(graph->outputs[i]->ost);
+        return ret;
+    }
+    if (ret != AVERROR(EAGAIN))
+        return ret;
+
+    for (i = 0; i < graph->nb_inputs; i++) {
+        ifilter = graph->inputs[i];
+        ist = ifilter->ist;
+        if (input_files[ist->file_index]->eagain ||
+            input_files[ist->file_index]->eof_reached)
+            continue;
+        nb_requests = av_buffersrc_get_nb_failed_requests(ifilter->filter);
+        if (nb_requests > nb_requests_max) {
+            nb_requests_max = nb_requests;
+            *best_ist = ist;
+        }
+    }
+
+    if (!*best_ist)
+        for (i = 0; i < graph->nb_outputs; i++)
+            graph->outputs[i]->ost->unavailable = 1;
+
+    return 0;
+}
+
+/**
+ * Run a single step of transcoding.
+ *
+ * @return  0 for success, <0 for error
+ */
+static int transcode_step(void)
+{
+    OutputStream *ost;
+    InputStream  *ist = NULL;
+    int ret;
+
+    ost = choose_output();
+    if (!ost) {
+        if (got_eagain()) {
+            reset_eagain();
+            av_usleep(10000);
+            return 0;
+        }
+        av_log(NULL, AV_LOG_VERBOSE, "No more inputs to read from, finishing.\n");
+        return AVERROR_EOF;
+    }
+
+    if (ost->filter && !ost->filter->graph->graph) {
+        if (ifilter_has_all_input_formats(ost->filter->graph)) {
+            ret = configure_filtergraph(ost->filter->graph);
+            if (ret < 0) {
+                av_log(NULL, AV_LOG_ERROR, "Error reinitializing filters!\n");
+                return ret;
+            }
+        }
+    }
+
+    if (ost->filter && ost->filter->graph->graph) {
+        if (!ost->initialized) {
+            char error[1024] = {0};
+            ret = init_output_stream(ost, error, sizeof(error));
+            if (ret < 0) {
+                av_log(NULL, AV_LOG_ERROR, "Error initializing output stream %d:%d -- %s\n",
+                       ost->file_index, ost->index, error);
+                exit_program(1);
+            }
+        }
+        if ((ret = transcode_from_filter(ost->filter->graph, &ist)) < 0)
+            return ret;
+        if (!ist)
+            return 0;
+    } else if (ost->filter) {
+        int i;
+        for (i = 0; i < ost->filter->graph->nb_inputs; i++) {
+            InputFilter *ifilter = ost->filter->graph->inputs[i];
+            if (!ifilter->ist->got_output && !input_files[ifilter->ist->file_index]->eof_reached) {
+                ist = ifilter->ist;
+                break;
+            }
+        }
+        if (!ist) {
+            ost->inputs_done = 1;
+            return 0;
+        }
+    } else {
+        av_assert0(ost->source_index >= 0);
+        ist = input_streams[ost->source_index];
+    }
+
+    ret = process_input(ist->file_index);
+    if (ret == AVERROR(EAGAIN)) {
+        if (input_files[ist->file_index]->eagain)
+            ost->unavailable = 1;
+        return 0;
+    }
+
+    if (ret < 0)
+        return ret == AVERROR_EOF ? 0 : ret;
+
+    return reap_filters(0);
+}
+
+/*
+ * The following code is the main loop of the file converter
+ */
+static int transcode(void)
+{
+    int ret, i;
+    AVFormatContext *os;
+    OutputStream *ost;
+    InputStream *ist;
+    int64_t timer_start;
+    int64_t total_packets_written = 0;
+
+    ret = transcode_init();
+    if (ret < 0)
+        goto fail;
+
+    if (stdin_interaction) {
+        av_log(NULL, AV_LOG_INFO, "Press [q] to stop, [?] for help\n");
+    }
+
+    timer_start = av_gettime_relative();
+
+#if HAVE_THREADS
+    if ((ret = init_input_threads()) < 0)
+        goto fail;
+#endif
+
+    /* --vgtmpeg start */
+    while (!received_sigterm && (!nli || (!nli->exit && !nli->cancel_transcode ))) {
+    /* --vgtmpeg end */
+
+        int64_t cur_time= av_gettime_relative();
+
+        /* if 'q' pressed, exits */
+        if (stdin_interaction)
+            if (check_keyboard_interaction(cur_time) < 0)
+                break;
+
+        /* check if there's any stream where output is still needed */
+        if (!need_output()) {
+            av_log(NULL, AV_LOG_VERBOSE, "No more output streams to write to, finishing.\n");
+            break;
+        }
+
+        ret = transcode_step();
+        if (ret < 0 && ret != AVERROR_EOF) {
+            av_log(NULL, AV_LOG_ERROR, "Error while filtering: %s\n", av_err2str(ret));
+            break;
+        }
+
+        /* dump report by using the first output video and audio streams */
+        print_report(0, timer_start, cur_time);
+    }
+#if HAVE_THREADS
+    free_input_threads();
+#endif
+
+    /* at the end of stream, we must flush the decoder buffers */
+    for (i = 0; i < nb_input_streams; i++) {
+        ist = input_streams[i];
+        if (!input_files[ist->file_index]->eof_reached) {
+            process_input_packet(ist, NULL, 0);
+        }
+    }
+    flush_encoders();
+
+    term_exit();
+
+    /* write the trailer if needed and close file */
+    for (i = 0; i < nb_output_files; i++) {
+        os = output_files[i]->ctx;
+        if (!output_files[i]->header_written) {
+            av_log(NULL, AV_LOG_ERROR,
+                   "Nothing was written into output file %d (%s), because "
+                   "at least one of its streams received no packets.\n",
+                   i, os->url);
+            continue;
+        }
+        if ((ret = av_write_trailer(os)) < 0) {
+            av_log(NULL, AV_LOG_ERROR, "Error writing trailer of %s: %s\n", os->url, av_err2str(ret));
+            if (exit_on_error)
+                exit_program(1);
+        }
+        /* -- vgtmpeg */
+        output_files[i]->wrote_trailer = 1;
+        /* -- vgtmpeg */
+    }
+
+    /* dump report by using the first video and audio streams */
+    print_report(1, timer_start, av_gettime_relative());
+
+    /* close each encoder */
+    for (i = 0; i < nb_output_streams; i++) {
+        ost = output_streams[i];
+        if (ost->encoding_needed) {
+            av_freep(&ost->enc_ctx->stats_in);
+        }
+        total_packets_written += ost->packets_written;
+    }
+
+    if (!total_packets_written && (abort_on_flags & ABORT_ON_FLAG_EMPTY_OUTPUT)) {
+        av_log(NULL, AV_LOG_FATAL, "Empty output\n");
+        exit_program(1);
+    }
+
+    /* close each decoder */
+    for (i = 0; i < nb_input_streams; i++) {
+        ist = input_streams[i];
+        if (ist->decoding_needed) {
+            avcodec_close(ist->dec_ctx);
+            if (ist->hwaccel_uninit)
+                ist->hwaccel_uninit(ist->dec_ctx);
+        }
+    }
+
+    av_buffer_unref(&hw_device_ctx);
+    hw_device_free_all();
+
+    /* finished ! */
+    ret = 0;
+
+ fail:
+#if HAVE_THREADS
+    free_input_threads();
+#endif
+
+    if (output_streams) {
+        for (i = 0; i < nb_output_streams; i++) {
+            ost = output_streams[i];
+            if (ost) {
+                if (ost->logfile) {
+                    if (fclose(ost->logfile))
+                        av_log(NULL, AV_LOG_ERROR,
+                               "Error closing logfile, loss of information possible: %s\n",
+                               av_err2str(AVERROR(errno)));
+                    ost->logfile = NULL;
+                }
+                av_freep(&ost->forced_kf_pts);
+                av_freep(&ost->apad);
+                av_freep(&ost->disposition);
+                av_dict_free(&ost->encoder_opts);
+                av_dict_free(&ost->sws_dict);
+                av_dict_free(&ost->swr_opts);
+                av_dict_free(&ost->resample_opts);
+            }
+        }
+    }
+    return ret;
+}
+
+
+static int64_t getutime(void)
+{
+#if HAVE_GETRUSAGE
+    struct rusage rusage;
+
+    getrusage(RUSAGE_SELF, &rusage);
+    return (rusage.ru_utime.tv_sec * 1000000LL) + rusage.ru_utime.tv_usec;
+#elif HAVE_GETPROCESSTIMES
+    HANDLE proc;
+    FILETIME c, e, k, u;
+    proc = GetCurrentProcess();
+    GetProcessTimes(proc, &c, &e, &k, &u);
+    return ((int64_t) u.dwHighDateTime << 32 | u.dwLowDateTime) / 10;
+#else
+    return av_gettime_relative();
+#endif
+}
+
+static int64_t getmaxrss(void)
+{
+#if HAVE_GETRUSAGE && HAVE_STRUCT_RUSAGE_RU_MAXRSS
+    struct rusage rusage;
+    getrusage(RUSAGE_SELF, &rusage);
+    return (int64_t)rusage.ru_maxrss * 1024;
+#elif HAVE_GETPROCESSMEMORYINFO
+    HANDLE proc;
+    PROCESS_MEMORY_COUNTERS memcounters;
+    proc = GetCurrentProcess();
+    memcounters.cb = sizeof(memcounters);
+    GetProcessMemoryInfo(proc, &memcounters, sizeof(memcounters));
+    return memcounters.PeakPagefileUsage;
+#else
+    return 0;
+#endif
+}
+
+static void log_callback_null(void *ptr, int level, const char *fmt, va_list vl)
+{
+}
+
+int main(int argc, char **argv)
+{
+    int i, ret;
+    int64_t ti;
+
+    init_dynload();
+
+    register_exit(ffmpeg_cleanup);
+
+    setvbuf(stderr,NULL,_IONBF,0); /* win32 runtime needs this */
+
+    av_log_set_flags(AV_LOG_SKIP_REPEATED);
+    parse_loglevel(argc, argv, options);
+
+    if(argc>1 && !strcmp(argv[1], "-d")){
+        run_as_daemon=1;
+        av_log_set_callback(log_callback_null);
+        argc--;
+        argv++;
+    }
+
+#if CONFIG_AVDEVICE
+    avdevice_register_all();
+#endif
+    avformat_network_init();
+
+    show_banner(argc, argv, options);
+
+    /* parse options and open all input/output files */
+    ret = ffmpeg_parse_options(argc, argv);
+    if (ret < 0)
+        exit_program(1);
+
+    /* -- vgtmpeg */
+    /* startup the input processing thread */
+    if( server_mode ) {
+        nli = nlinput_prepare();
+        stdin_interaction = 0; // disable native stdin interaction
+        run_as_daemon = 1;
+    }
+
+    /* --vgtmpeg */
+
+
+    if (nb_output_files <= 0 && nb_input_files == 0) {
+        show_usage();
+        av_log(NULL, AV_LOG_WARNING, "Use -h to get full help or, even better, run 'man %s'\n", program_name);
+        exit_program(1);
+    }
+
+    /* file converter / grab */
+    if (nb_output_files <= 0) {
+        av_log(NULL, AV_LOG_FATAL, "At least one output file must be specified\n");
+        exit_program(1);
+    }
+
+//     if (nb_input_files == 0) {
+//         av_log(NULL, AV_LOG_FATAL, "At least one input file must be specified\n");
+//         exit_program(1);
+//     }
+
+    for (i = 0; i < nb_output_files; i++) {
+        if (strcmp(output_files[i]->ctx->oformat->name, "rtp"))
+            want_sdp = 0;
+    }
+
+    current_time = ti = getutime();
+    if (transcode() < 0)
+        exit_program(1);
+    ti = getutime() - ti;
+    if (do_benchmark) {
+        av_log(NULL, AV_LOG_INFO, "bench: utime=%0.3fs\n", ti / 1000000.0);
+    }
+    av_log(NULL, AV_LOG_DEBUG, "%"PRIu64" frames successfully decoded, %"PRIu64" decoding errors\n",
+           decode_error_stat[0], decode_error_stat[1]);
+    if ((decode_error_stat[0] + decode_error_stat[1]) * max_error_rate < decode_error_stat[1])
+        exit_program(69);
+
+    exit_program(received_nb_signals ? 255 : main_return_code);
+    return main_return_code;
+}
diff --git a/fftools/vgtmpeg.h b/fftools/vgtmpeg.h
new file mode 100644
index 0000000000..40f985f9db
--- /dev/null
+++ b/fftools/vgtmpeg.h
@@ -0,0 +1,42 @@
+/* @@--
+ * 
+ * Copyright (C) 2010-2018 Alberto Vigata
+ *       
+ * This file is part of vgtmpeg
+ * 
+ * a Versed Generalist Transcoder
+ * 
+ * vgtmpeg is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2, or (at your option)
+ * any later version.
+ * 
+ * vgtmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ * 
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ */
+
+#ifndef __VGTMPEG_H
+#define __VGTMPEG_H
+
+/* nl addons */
+#include "nlffmsg.h"
+#include "nlinput.h"
+#include "nldump_format.h"
+#include "nlreport.h"
+
+/* optical media public functions */
+#include "libavformat/optmedia.h"
+extern int output_xml;
+extern int server_mode;
+extern int banner;
+extern int default_program_id;
+
+/* running options */
+
+#endif
diff --git a/fftools/vgtmpeg_opts.h b/fftools/vgtmpeg_opts.h
new file mode 100644
index 0000000000..db90553968
--- /dev/null
+++ b/fftools/vgtmpeg_opts.h
@@ -0,0 +1,7 @@
+    { "output_xml", OPT_BOOL, {(void*)&output_xml}, "turn on xml output" },
+    { "server_mode", OPT_BOOL, {(void*)&server_mode}, "setup server mode" },
+    { "codecs_json", OPT_EXIT, {(void*)&show_codecs_json}, "show codecs in json format" },
+    { "formats_json", OPT_EXIT, {(void*)&show_formats_json}, "show formats  in json format" },
+    { "options_json", OPT_EXIT, {(void*)&show_options_json}, "show options in json format" },
+    { "banner", OPT_BOOL, {(void*)&banner}, "shows vgtmpeg banner" },
+
diff --git a/fftools/vgtmpeg_support.c b/fftools/vgtmpeg_support.c
new file mode 100644
index 0000000000..d2a7f55533
--- /dev/null
+++ b/fftools/vgtmpeg_support.c
@@ -0,0 +1,1176 @@
+/* @@--
+ * 
+ * Copyright (C) 2010-2018 Alberto Vigata
+ *       
+ * This file is part of vgtmpeg
+ * 
+ * a Versed Generalist Transcoder
+ * 
+ * vgtmpeg is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2, or (at your option)
+ * any later version.
+ * 
+ * vgtmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ * 
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ */
+
+#include <stdlib.h>
+#include <stdio.h>
+#include <string.h>
+#include "config.h"
+#include "nlffmsg.h"
+#include "nlinput.h"
+#include "nlreport.h"
+#include "nldump_format.h"
+#include "libavcodec/avcodec.h"
+#include "libavformat/avformat.h"
+#include "cmdutils.h"
+
+
+
+/****************************************************************/
+/* utils                                                        */
+/****************************************************************/
+/* buffer to use for escaped xml strings. this allows for a maximum of 2048 escaped characters
+ * FIXME the buffer needs to move into an app context to ensure thread safety  */
+#define MAX_FFGMT_STRING_LEN 2048
+static char xmlesc1[MAX_FFGMT_STRING_LEN*6];
+static char xmlesc2[MAX_FFGMT_STRING_LEN*6];
+
+
+char *xescape(char *buf, char *s) {
+	int i=MAX_FFGMT_STRING_LEN; 
+	char *o = buf;
+	while(i-- && *s ) {
+		switch(*s) {
+		case '<':
+			*o++ = '&'; *o++ = 'l'; *o++ = 't'; *o++ = ';';
+			break;
+		case '>':
+			*o++ = '&'; *o++ = 'g'; *o++ = 't'; *o++ = ';';
+			break;
+		case '"':
+			*o++ = '&'; *o++ = 'q'; *o++ = 'u'; *o++ = 'o'; *o++ = 't'; *o++ = ';';
+			break;
+		case '\'':
+			*o++ = '&'; *o++ = 'a'; *o++ = 'p'; *o++ = 'o'; *o++ = 's'; *o++ = ';';
+			break;
+		case '&':
+			*o++ = '&'; *o++ = 'a'; *o++ = 'm'; *o++ = 'p'; *o++ = ';';
+			break;        
+		default:
+            /* ignore non-ASCII characters */
+            if( (((unsigned char)(*s)) >= 0x20) && (((unsigned char)(*s)) <= 0x7f )  ) *o++ = *s;
+			break;
+		}
+		s++;
+	}
+	*o=0;
+	return buf;
+}
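+
+/* A minimal usage sketch (buffer name is illustrative): the escaper rewrites XML
+ * metacharacters into entities and silently drops non-printable bytes, e.g.
+ *
+ *   char out[MAX_FFGMT_STRING_LEN*6];
+ *   xescape(out, "<tag a=\"x&y\">");   // out == "&lt;tag a=&quot;x&amp;y&quot;&gt;"
+ */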
+
+
+/****************************************************************/
+/* nlinput                                                      */
+/****************************************************************/
+#define sl0(x) (x)
+#define sl8(x) ((x)<<8)
+#define sl16(x) ((x)<<16)
+#define sl24(x) ((x)<<24)
+
+#define VGTM_W  ( sl24('V') | sl16('G') | sl8('T') | sl0('M') )
+#define MTGV_W  ( sl24('M') | sl16('T') | sl8('G') | sl0('V') )
+
+#define CB2INT(x) ( sl24(x[0]) | sl16(x[1]) | sl8(x[2]) | sl0(x[3]) ) 
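+
+/* For reference, these helpers pack four bytes MSB-first, so the handshake words
+ * evaluate to VGTM_W == 0x5647544D and MTGV_W == 0x4D544756. */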
+static int nlinput_readbyte(unsigned char *val) {
+    int read = fread( val, 1, 1, stdin );
+    return read==1;
+}
+
+
+/* message code definitions */
+#define EXIT                101
+#define CANCEL_TRANSCODE    99
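+
+/* A minimal sketch of how a controlling process could drive this one-byte protocol
+ * over the child's stdin (the FILE* name is illustrative):
+ *
+ *   fputc(CANCEL_TRANSCODE, to_vgtmpeg);  // abort the running transcode
+ *   fputc(EXIT, to_vgtmpeg);              // make the input thread exit
+ *   fflush(to_vgtmpeg);
+ */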
+
+static void * nlinput_start(void *p) {
+    nlinput_t *ctx = (nlinput_t *)p;
+    unsigned char b;
+    int loop = 1;
+
+    while(loop) {
+
+       printf("nlinput: about to read\n");
+       if(!nlinput_readbyte(&b)) 
+           break;
+
+       printf("nlinput: read %x(%c)\n",b,b);
+       switch(b) {
+           case EXIT:
+               printf("nlinput: exiting\n");
+               ctx->exit = 1;
+               loop = 0;
+               break;
+           case CANCEL_TRANSCODE:
+               printf("nlinput: canceling transcode\n");
+               ctx->cancel_transcode = 1;
+               break;
+       }
+    }
+
+    return 0;
+}
+
+/* fires up input thread */
+nlinput_t *nlinput_prepare(void) {
+    /* starting nlinput */
+    nlinput_t *ret = malloc( sizeof(nlinput_t) );
+    memset( ret, 0, sizeof (nlinput_t) );
+
+    //pthread_attr_init(&nlin_attr);
+    //pthread_attr_setdetachstate(&nlin_attr, PTHREAD_CREATE_JOINABLE );
+    pthread_create( &ret->nlin_th, NULL /*nlin_attr*/, nlinput_start, (void *)ret );
+
+    return ret;
+}
+
+/* shutdown input thread */
+void nlinput_cancel(nlinput_t *ctx) {
+    void *status;
+    printf("nlinput: cancel\n");
+
+    pthread_join( ctx->nlin_th, &status );
+}
+
+
+
+
+
+
+
+/****************************************************************/
+/* JSON output                                                  */
+/****************************************************************/
+#define JSON_LOG(...)  av_log ( NULL, AV_LOG_INFO, __VA_ARGS__ )
+
+#define JSON_OBJECT(x) JSON_LOG("{"); {x}; JSON_LOG("}"); 
+#define JSON_PROPERTY( first, name, val) { if(!first) {JSON_LOG(",");}; JSON_LOG( "\""#name "\":"  ); {val}; }
+
+
+static char *tmpstrcptr;
+#define JSON_STRING_C(cstring)  { JSON_LOG("\"%s\"", tmpstrcptr=c_strescape(cstring) ); c_strfree(tmpstrcptr); }
+//#define JSON_STRING_C(cstring)  { JSON_LOG("\"%s\"", cstring) ;  }
+#define JSON_INT_C(val)  JSON_LOG("%d", (val));
+/* if the double is NaN, output 0 instead; the check relies on NaN != NaN evaluating to true */
+#define JSON_DOUBLE_C(val) JSON_LOG("%f", ((double)(val)!=(double)(val)) ? 0.0 : (double)(val));
+#define JSON_BOOLEAN_C(val)     JSON_LOG("%s", (val) ? "true" : "false");
+
+
+#define JSON_ARRAY(x) {JSON_LOG("["); {x}; JSON_LOG("]");}
+#define JSON_ARRAY_ITEM(first, x) {if(!first) {JSON_LOG(",");}; {x}; }
+
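+/* A minimal sketch of how the macros nest; the snippet below logs the single
+ * object {"name":"aac","decode":true} through av_log:
+ *
+ *   JSON_OBJECT(
+ *       JSON_PROPERTY( 1, name,   JSON_STRING_C("aac") );
+ *       JSON_PROPERTY( 0, decode, JSON_BOOLEAN_C(1) );
+ *   );
+ */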
+
+void show_codecs_json(void)
+{
+    AVCodec *p=NULL, *p2;
+    const char *last_name;
+    last_name= "000";
+    
+
+
+    JSON_OBJECT( 
+            JSON_PROPERTY( 1, codecs, JSON_ARRAY(
+                    int first = 1;
+                    for(;;)                    {
+                        int decode=0;
+                        int encode=0;
+                        int cap=0;
+                        const char *type_str;
+
+                        p2=NULL;
+                        while((p= av_codec_next(p))) {
+                            if((p2==NULL || strcmp(p->name, p2->name)<0) &&
+                                strcmp(p->name, last_name)>0){
+                                p2= p;
+                                decode= encode= cap=0;
+                            }
+                            if(p2 && strcmp(p->name, p2->name)==0){
+                                if(p->decode ) decode=1;
+                                if(p->encode2) encode=1;
+                                cap |= p->capabilities;
+                            }
+                        }
+                        if(p2==NULL)
+                            break;
+        
+                        last_name= p2->name;
+
+                        switch(p2->type) {
+                            case AVMEDIA_TYPE_VIDEO:
+                                type_str = "video";
+                                break;
+                            case AVMEDIA_TYPE_AUDIO:
+                                type_str = "audio";
+                                break;
+                            case AVMEDIA_TYPE_SUBTITLE:
+                                type_str = "subtitle";
+                                break;
+                            default:
+                                type_str = "?";
+                                break;
+                        }
+
+                        JSON_ARRAY_ITEM( first, JSON_OBJECT( 
+                            JSON_PROPERTY( 1, name,  JSON_STRING_C( p2->name ) );
+                            JSON_PROPERTY( 0, decode, JSON_BOOLEAN_C(decode) );
+                            JSON_PROPERTY( 0, encode, JSON_BOOLEAN_C(encode) );
+                            JSON_PROPERTY( 0, type, JSON_STRING_C(type_str) );
+                            JSON_PROPERTY( 0, caps, JSON_INT_C(cap) );
+                            JSON_PROPERTY( 0, long_name,  JSON_STRING_C( p2->long_name ) );
+                        ))
+                        first = 0;
+                    }
+        ))
+   );
+}
+
+void show_formats_json(void) {
+    AVInputFormat *ifmt=NULL;
+    AVOutputFormat *ofmt=NULL;
+    const char *last_name;
+    last_name= "000";
+
+    JSON_OBJECT( 
+            JSON_PROPERTY( 1, formats, JSON_ARRAY(
+                    int first = 1;
+                    for(;;)                    {
+                        int decode=0;
+                        int encode=0;
+                        const char *name=NULL;
+                        const char *long_name=NULL;
+
+                        while((ofmt= av_oformat_next(ofmt))) {
+                            if((name == NULL || strcmp(ofmt->name, name)<0) &&
+                                strcmp(ofmt->name, last_name)>0){
+                                name= ofmt->name;
+                                long_name= ofmt->long_name;
+                                encode=1;
+                            }
+                        }
+
+                        while((ifmt= av_iformat_next(ifmt))) {
+                            if((name == NULL || strcmp(ifmt->name, name)<0) &&
+                                strcmp(ifmt->name, last_name)>0){
+                                name= ifmt->name;
+                                long_name= ifmt->long_name;
+                                encode=0;
+                            }
+
+                            if(name && strcmp(ifmt->name, name)==0)
+                                decode=1;
+                        }
+                        if(name==NULL)
+                            break;
+                        last_name= name;
+        
+                        JSON_ARRAY_ITEM( first, JSON_OBJECT( 
+                            JSON_PROPERTY( 1, name,  JSON_STRING_C( name ) );
+                            JSON_PROPERTY( 0, decode, JSON_BOOLEAN_C(decode) );
+                            JSON_PROPERTY( 0, encode, JSON_BOOLEAN_C(encode) );
+                            JSON_PROPERTY( 0, long_name,  JSON_STRING_C( long_name ) );
+                        ))
+                        first = 0;
+
+                  }
+
+        ))
+   );
+}
+
+/**
+ * Given an AVOption, returns its default value as a double.
+ * This is useful for flattening a numeric default into a double for JSON output.
+ * Returns 0 if the value could not be coerced.
+ */
+static double show_avoptions_get_double_default(const AVOption *opt)
+{
+	double dblval = 0;
+
+    /* flatten numeric value to a double */
+	switch (opt->type) {
+	case AV_OPT_TYPE_FLAGS:
+	case AV_OPT_TYPE_DURATION:
+	case AV_OPT_TYPE_CHANNEL_LAYOUT:
+	case AV_OPT_TYPE_CONST:
+	case AV_OPT_TYPE_INT:
+	case AV_OPT_TYPE_INT64:
+	case AV_OPT_TYPE_PIXEL_FMT:
+	case AV_OPT_TYPE_SAMPLE_FMT:
+		dblval = (double) opt->default_val.i64;
+		break;
+	case AV_OPT_TYPE_DOUBLE:
+	case AV_OPT_TYPE_FLOAT:
+		dblval = opt->default_val.dbl;
+		break;
+	case AV_OPT_TYPE_RATIONAL:
+		dblval = (double) opt->default_val.q.num
+				/ (double) opt->default_val.q.den;
+		break;
+	case AV_OPT_TYPE_STRING:
+	case AV_OPT_TYPE_BINARY:
+	case AV_OPT_TYPE_COLOR:
+	case AV_OPT_TYPE_IMAGE_SIZE:
+	case AV_OPT_TYPE_VIDEO_RATE:
+		break;
+	}
+
+	return dblval;
+}
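+
+/* For example, a rational option defaulting to 30000/1001 comes back as roughly
+ * 29.97, while the string-like types deliberately fall through and report 0. */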
+
+/**
+ * Given an AVOption, returns its default value in string format.
+ * Returns "" if not relevant.
+ */
+static const char * show_avoptions_get_string_default(const AVOption *opt)
+{
+	const char *ret = "";
+
+	switch(opt->type) {
+	case AV_OPT_TYPE_COLOR:
+	case AV_OPT_TYPE_IMAGE_SIZE:
+	case AV_OPT_TYPE_STRING:
+	case AV_OPT_TYPE_VIDEO_RATE:
+		ret = opt->default_val.str;
+		break;
+	default:
+		ret = "";
+		break;
+	}
+
+	return ret;
+}
+static void show_avoptions_opt_list_enum(void *obj,  const char *unit,
+                     int req_flags, int rej_flags) {
+    const AVOption *opt=NULL;
+    //const char *class_name = (*(AVClass**)obj)->class_name;
+    int first=1;
+    double dblval = 0;
+
+
+    while ((opt= av_opt_next(obj, opt))) {
+        if (!(opt->flags & req_flags) || (opt->flags & rej_flags))
+            continue;
+
+        /* Don't print CONST's on level one.
+         * Don't print anything but CONST's on level two.
+         * Only print items from the requested unit.
+         */
+        if (!unit && opt->type==AV_OPT_TYPE_CONST)
+            continue;
+        else if (unit && opt->type!=AV_OPT_TYPE_CONST)
+            continue;
+        else if (unit && opt->type==AV_OPT_TYPE_CONST && strcmp(unit, opt->unit))
+            continue;
+
+        dblval = show_avoptions_get_double_default(opt);
+
+        JSON_ARRAY_ITEM( first, 
+                JSON_OBJECT( 
+                    JSON_PROPERTY( 1, name, JSON_STRING_C(opt->name) );
+                    JSON_PROPERTY( 0, value, JSON_DOUBLE_C(dblval) );
+                    if( opt->help ) JSON_PROPERTY( 0, help, JSON_STRING_C(opt->help) );
+                    );
+                );
+        first = 0;
+    }
+}
+
+
+static void show_avoptions_json(void *obj, int req_flags, int rej_flags, const char *klass, int add_flags )
+{
+    const AVOption *opt=NULL;
+    const char *type;
+    const char *unit=NULL;
+    //char *class_name = (*(AVClass**)obj)->class_name;
+    if (!obj)
+        return;
+
+
+    while ((opt= av_opt_next(obj, opt))) {
+        if (!(opt->flags & req_flags) || (opt->flags & rej_flags))
+            continue;
+
+        /* Don't print CONST's on level one.
+         * Don't print anything but CONST's on level two.
+         * Only print items from the requested unit.
+         */
+        if (!unit && opt->type==AV_OPT_TYPE_CONST)
+            continue;
+        else if (unit && opt->type!=AV_OPT_TYPE_CONST)
+            continue;
+        else if (unit && opt->type==AV_OPT_TYPE_CONST && strcmp(unit, opt->unit))
+            continue;
+        /* else if (unit && opt->type == FF_OPT_TYPE_CONST) */
+            /* av_log(av_log_obj, AV_LOG_INFO, "   %-15s ", opt->name); */
+        /* else */
+            /* av_log(av_log_obj, AV_LOG_INFO, "-%-17s ", opt->name); */
+
+        switch (opt->type) {
+            case AV_OPT_TYPE_FLAGS:
+                type = "flags";
+                break;
+            case AV_OPT_TYPE_INT:
+                type = "int";
+                break;
+            case AV_OPT_TYPE_INT64:
+                type = "int64";
+                break;
+            case AV_OPT_TYPE_DOUBLE:
+                type = "double";
+                break;
+            case AV_OPT_TYPE_FLOAT:
+                type = "float";
+                break;
+            case AV_OPT_TYPE_STRING:
+                type = "string";
+                break;
+            case AV_OPT_TYPE_RATIONAL:
+                type = "rational";
+                break;
+            case AV_OPT_TYPE_BINARY:
+                type = "binary";
+                break;
+            case AV_OPT_TYPE_IMAGE_SIZE:
+                type = "image_size";
+                break;
+            case AV_OPT_TYPE_VIDEO_RATE:
+                type = "video_rate";
+                break;
+            case AV_OPT_TYPE_PIXEL_FMT:
+                type = "pixel_fmt";
+                break;
+            case AV_OPT_TYPE_SAMPLE_FMT:
+                type = "sample_fmt";
+                break;
+            case AV_OPT_TYPE_DURATION:
+                type = "duration";
+                break;
+            case AV_OPT_TYPE_COLOR:
+                type = "color";
+                break;
+            case AV_OPT_TYPE_CHANNEL_LAYOUT:
+                type = "channel_layout";
+                break;
+            case AV_OPT_TYPE_CONST:
+                type = "const";
+                break;
+            default:
+                type = "unknown";
+                break;
+        }
+
+    JSON_ARRAY_ITEM( 0, 
+    JSON_OBJECT( 
+            JSON_PROPERTY( 1, domain,  JSON_STRING_C( "avcontext" ) );                    
+            JSON_PROPERTY( 0, flags, JSON_INT_C(opt->flags | add_flags) ); 
+            JSON_PROPERTY( 0, klass, JSON_STRING_C(klass) );
+            JSON_PROPERTY( 0, name, JSON_STRING_C(opt->name) );
+            JSON_PROPERTY( 0, type, JSON_STRING_C(type) );
+            JSON_PROPERTY( 0, typeint, JSON_INT_C(opt->type) );
+            JSON_PROPERTY( 0, def, JSON_DOUBLE_C( show_avoptions_get_double_default(opt)) );
+            JSON_PROPERTY( 0, defstr,JSON_STRING_C( show_avoptions_get_string_default(opt)) );
+
+
+            if( opt->help ) JSON_PROPERTY( 0, help, JSON_STRING_C(opt->help) );
+
+            {
+                int has_enum = opt->unit && opt->type!=AV_OPT_TYPE_CONST;
+                JSON_PROPERTY( 0, has_enum, JSON_BOOLEAN_C(has_enum) );
+
+                if ( has_enum ) {
+                    JSON_PROPERTY(0, enums, JSON_ARRAY(
+                                show_avoptions_opt_list_enum(obj,  opt->unit, req_flags, rej_flags);
+                                ));
+                }
+            }
+            )
+        );
+    }
+}
+
+
+
+static void show_help_options_json(const OptionDef *po)
+{
+    JSON_OBJECT(
+            JSON_PROPERTY( 1, domain,  JSON_STRING_C( "general" ) );                    
+            JSON_PROPERTY( 0, flags, JSON_INT_C(po->flags) );
+            JSON_PROPERTY( 0, name, JSON_STRING_C(po->name) );
+            JSON_PROPERTY( 0, has_arg, JSON_BOOLEAN_C(po->flags & HAS_ARG) );
+            if( po->flags &HAS_ARG ) JSON_PROPERTY(0, arg, JSON_STRING_C(po->argname));
+            JSON_PROPERTY( 0, help, JSON_STRING_C(po->help) );
+    );
+}
+
+#define AV_OPT_FLAG_FORMAT_PARAM (1<<17)
+static void avcontext_flagdef(void) {
+    JSON_OBJECT(
+            JSON_PROPERTY( 1, name, JSON_STRING_C("avflags") );
+            JSON_PROPERTY( 0, def,  JSON_OBJECT(
+                    JSON_PROPERTY(1, AV_OPT_FLAG_ENCODING_PARAM, JSON_INT_C(AV_OPT_FLAG_ENCODING_PARAM));
+                    JSON_PROPERTY(0, AV_OPT_FLAG_DECODING_PARAM, JSON_INT_C(AV_OPT_FLAG_DECODING_PARAM));
+                    JSON_PROPERTY(0, AV_OPT_FLAG_EXPORT, JSON_INT_C(AV_OPT_FLAG_EXPORT));
+                    JSON_PROPERTY(0, AV_OPT_FLAG_AUDIO_PARAM, JSON_INT_C(AV_OPT_FLAG_AUDIO_PARAM));
+                    JSON_PROPERTY(0, AV_OPT_FLAG_VIDEO_PARAM, JSON_INT_C(AV_OPT_FLAG_VIDEO_PARAM));
+                    JSON_PROPERTY(0, AV_OPT_FLAG_FORMAT_PARAM, JSON_INT_C(AV_OPT_FLAG_FORMAT_PARAM));
+                    JSON_PROPERTY(0, AV_OPT_FLAG_SUBTITLE_PARAM, JSON_INT_C(AV_OPT_FLAG_SUBTITLE_PARAM));
+                    ));
+            )
+}
+
+static void avcontext_typedef(void) {
+    JSON_OBJECT(
+            JSON_PROPERTY( 1, name, JSON_STRING_C("avtype") );
+            JSON_PROPERTY( 0, def,  JSON_OBJECT(
+                    JSON_PROPERTY(1, AV_OPT_TYPE_FLAGS, JSON_INT_C(AV_OPT_TYPE_FLAGS));
+                    JSON_PROPERTY(0, AV_OPT_TYPE_INT, JSON_INT_C(AV_OPT_TYPE_INT));
+                    JSON_PROPERTY(0, AV_OPT_TYPE_INT64, JSON_INT_C(AV_OPT_TYPE_INT64));
+                    JSON_PROPERTY(0, AV_OPT_TYPE_DOUBLE, JSON_INT_C(AV_OPT_TYPE_DOUBLE));
+                    JSON_PROPERTY(0, AV_OPT_TYPE_FLOAT, JSON_INT_C(AV_OPT_TYPE_FLOAT));
+                    JSON_PROPERTY(0, AV_OPT_TYPE_STRING, JSON_INT_C(AV_OPT_TYPE_STRING));
+                    JSON_PROPERTY(0, AV_OPT_TYPE_RATIONAL, JSON_INT_C(AV_OPT_TYPE_RATIONAL));
+                    JSON_PROPERTY(0, AV_OPT_TYPE_BINARY, JSON_INT_C(AV_OPT_TYPE_BINARY));
+                    JSON_PROPERTY(0, AV_OPT_TYPE_CONST, JSON_INT_C(AV_OPT_TYPE_CONST));
+                    JSON_PROPERTY(0, AV_OPT_TYPE_IMAGE_SIZE, JSON_INT_C(AV_OPT_TYPE_IMAGE_SIZE));
+                    JSON_PROPERTY(0, AV_OPT_TYPE_PIXEL_FMT, JSON_INT_C(AV_OPT_TYPE_PIXEL_FMT));
+                    JSON_PROPERTY(0, AV_OPT_TYPE_SAMPLE_FMT, JSON_INT_C(AV_OPT_TYPE_SAMPLE_FMT));
+                    JSON_PROPERTY(0, AV_OPT_TYPE_VIDEO_RATE, JSON_INT_C(AV_OPT_TYPE_VIDEO_RATE));
+                    JSON_PROPERTY(0, AV_OPT_TYPE_DURATION, JSON_INT_C(AV_OPT_TYPE_DURATION));
+                    JSON_PROPERTY(0, AV_OPT_TYPE_COLOR, JSON_INT_C(AV_OPT_TYPE_COLOR));
+                    JSON_PROPERTY(0, AV_OPT_TYPE_CHANNEL_LAYOUT, JSON_INT_C(AV_OPT_TYPE_CHANNEL_LAYOUT));
+                    ));
+            )
+}  
+static void show_help_options_json_flagdef(void) {
+    JSON_OBJECT(
+            JSON_PROPERTY( 1, name, JSON_STRING_C("gflags") );
+            JSON_PROPERTY( 0, def,  JSON_OBJECT(
+                    JSON_PROPERTY(1, HAS_ARG    ,JSON_INT_C(HAS_ARG));
+                    JSON_PROPERTY(0, OPT_BOOL   ,JSON_INT_C(OPT_BOOL));
+                    JSON_PROPERTY(0, OPT_EXPERT ,JSON_INT_C(OPT_EXPERT));
+                    JSON_PROPERTY(0, OPT_STRING ,JSON_INT_C(OPT_STRING));
+                    JSON_PROPERTY(0, OPT_VIDEO  ,JSON_INT_C(OPT_VIDEO));
+                    JSON_PROPERTY(0, OPT_AUDIO  ,JSON_INT_C(OPT_AUDIO));
+                    JSON_PROPERTY(0, OPT_INT    ,JSON_INT_C(OPT_INT));
+                    JSON_PROPERTY(0, OPT_FLOAT  ,JSON_INT_C(OPT_FLOAT));
+                    JSON_PROPERTY(0, OPT_SUBTITLE ,JSON_INT_C(OPT_SUBTITLE));
+                    JSON_PROPERTY(0, OPT_INT64  ,JSON_INT_C(OPT_INT64));
+                    JSON_PROPERTY(0, OPT_EXIT   ,JSON_INT_C(OPT_EXIT));
+                    JSON_PROPERTY(0, OPT_DATA   ,JSON_INT_C(OPT_DATA));
+                    JSON_PROPERTY(0, OPT_PERFILE   ,JSON_INT_C(OPT_PERFILE));
+                    JSON_PROPERTY(0, OPT_OFFSET   ,JSON_INT_C(OPT_OFFSET));
+                    JSON_PROPERTY(0, OPT_SPEC   ,JSON_INT_C(OPT_SPEC));
+                    JSON_PROPERTY(0, OPT_TIME   ,JSON_INT_C(OPT_TIME));
+                    JSON_PROPERTY(0, OPT_DOUBLE   ,JSON_INT_C(OPT_DOUBLE));
+                    JSON_PROPERTY(0, OPT_INPUT   ,JSON_INT_C(OPT_INPUT));
+                    JSON_PROPERTY(0, OPT_OUTPUT   ,JSON_INT_C(OPT_OUTPUT));
+                    ));
+            )
+}
+
+static void show_options_children(const AVClass *class, int flags, int addflags, const char  *context_name_def ) {
+    const AVClass *child = NULL;
+    const char *context_name = context_name_def;
+    AVCodec *c;
+    AVOutputFormat *of;
+    AVInputFormat *ifo;
+
+    /* retrieve context name for this class */
+    c = NULL;
+    while((c=av_codec_next(c))) {
+        if(c->priv_class && !strcmp(c->priv_class->class_name, class->class_name)) {
+            context_name = c->name;
+        }
+    }
+
+    /* maybe it's an output format */
+    of = NULL;
+    while((of=av_oformat_next(of))) {
+        if(of->priv_class && !strcmp(of->priv_class->class_name, class->class_name)) {
+            context_name = of->name;
+        }
+    }
+
+    /* maybe it's an input format */
+    ifo = NULL;
+    while((ifo=av_iformat_next(ifo))) {
+        if(ifo->priv_class && !strcmp(ifo->priv_class->class_name, class->class_name)) {
+            context_name = ifo->name;
+        }
+    }
+
+    show_avoptions_json(&class, flags, 0, context_name, addflags);
+    printf("\n");
+
+    while ((child = av_opt_child_class_next(class, child)))
+        show_options_children(child, flags, addflags, child->class_name );
+}
+
+/**
+ * show_options_json()
+ *
+ * shows all the available options for vgtmpeg on stdout
+ *
+ */
+void show_options_json(void) {
+    /* general options  */
+    const OptionDef *po;
+    int first = 1;
+
+    JSON_OBJECT(
+            JSON_PROPERTY(1, flagdef, JSON_ARRAY(
+                    JSON_ARRAY_ITEM(1, show_help_options_json_flagdef(); );
+                    JSON_ARRAY_ITEM(0, avcontext_flagdef(); );
+                    JSON_ARRAY_ITEM(0, avcontext_typedef(); );
+                    )); 
+            JSON_PROPERTY(0, options, JSON_ARRAY( 
+
+            for( po=options; po->name!=NULL; po++ ) {
+            JSON_ARRAY_ITEM( first, show_help_options_json(po); );
+            first = 0;
+            }
+
+            /* codec options */
+            show_options_children(avcodec_get_class(), AV_OPT_FLAG_ENCODING_PARAM|AV_OPT_FLAG_DECODING_PARAM,0, "" );
+            /* muxer options */
+            show_options_children(avformat_get_class(), AV_OPT_FLAG_ENCODING_PARAM|AV_OPT_FLAG_DECODING_PARAM,AV_OPT_FLAG_FORMAT_PARAM, "" );
+            /* sws */
+            show_options_children(sws_get_class(), AV_OPT_FLAG_ENCODING_PARAM|AV_OPT_FLAG_DECODING_PARAM,0, sws_get_class()->class_name );
+
+
+            //show_avoptions_json(avcodec_opts[1], , 2, "", 0);
+
+            /* individual codec options */
+            /* c = NULL; */
+            /* while ((c = av_codec_next(c))) { */
+                /* if (c->priv_class) { */
+                    /* show_avoptions_json(&c->priv_class, AV_OPT_FLAG_ENCODING_PARAM|AV_OPT_FLAG_DECODING_PARAM, 0, c->name, 0); */
+                /* } */
+            /* } */
+
+
+            /*muxer options */
+            //show_avoptions_json(avformat_opts, AV_OPT_FLAG_ENCODING_PARAM|AV_OPT_FLAG_DECODING_PARAM, 0, "", AV_OPT_FLAG_FORMAT_PARAM );
+
+            /* individual muxer options */
+            /* while ((oformat = av_oformat_next(oformat))) { */
+                /* if (oformat->priv_class) { */
+                /* show_avoptions_json(&oformat->priv_class, AV_OPT_FLAG_ENCODING_PARAM, 0, oformat->name, AV_OPT_FLAG_FORMAT_PARAM ); */
+                /* } */
+            /* } */
+/*  */
+            /* show_avoptions_json(sws_opts, AV_OPT_FLAG_ENCODING_PARAM|AV_OPT_FLAG_DECODING_PARAM, 0, "", AV_OPT_FLAG_FORMAT_PARAM ); */
+    ))
+        )
+
+}
+
+
+/****************************************************************/
+/* nldump format                                                */
+/****************************************************************/
+static int get_bit_rate(AVCodecContext *ctx)
+{
+    int bit_rate;
+    int bits_per_sample;
+
+    switch(ctx->codec_type) {
+    case AVMEDIA_TYPE_VIDEO:
+    case AVMEDIA_TYPE_DATA:
+    case AVMEDIA_TYPE_SUBTITLE:
+    case AVMEDIA_TYPE_ATTACHMENT:
+        bit_rate = ctx->bit_rate;
+        break;
+    case AVMEDIA_TYPE_AUDIO:
+        bits_per_sample = av_get_bits_per_sample(ctx->codec_id);
+        bit_rate = bits_per_sample ? ctx->sample_rate * ctx->channels * bits_per_sample : ctx->bit_rate;
+        break;
+    default:
+        bit_rate = 0;
+        break;
+    }
+    return bit_rate;
+}
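+
+/* For audio codecs with a fixed bits-per-sample the rate is derived rather than
+ * read back, e.g. 16-bit PCM at 48000 Hz in stereo yields
+ * 48000 * 2 * 16 = 1536000 bit/s; the remaining media types simply report
+ * ctx->bit_rate (0 for unknown types). */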
+
+static void nl_dump_metadata(AVDictionary *m)
+{
+    if(m ){
+        AVDictionaryEntry *tag=NULL;
+
+        FFMSG_LOG(FFMSG_NODE_START(metadata));
+
+        while((tag=av_dict_get(m, "", tag, AV_DICT_IGNORE_SUFFIX))) {
+            char tmp[1024];
+            int i;
+            av_strlcpy(tmp, tag->value, sizeof(tmp));
+            for(i=0; i<strlen(tmp); i++) if(tmp[i]==0xd) tmp[i]=' ';
+
+            FFMSG_STRING_VALUE(tag->key, tmp);
+        }
+        FFMSG_LOG(FFMSG_NODE_STOP(metadata));
+    }
+}
+
+static void avcodec_nlstring(char *buf, int buf_size, AVCodecContext *enc, int encode) {
+    const char *codec_name;
+    const char *profile = "";
+    const AVCodec *p;
+    // char tag_buf[32];
+    int bitrate;
+    AVRational display_aspect_ratio;
+
+
+    codec_name = avcodec_get_name( enc->codec_id );
+    if (enc->profile != FF_PROFILE_UNKNOWN) {
+        if (enc->codec)
+            p = enc->codec;
+        else
+            p = encode ? avcodec_find_encoder(enc->codec_id) :
+                        avcodec_find_decoder(enc->codec_id);
+        if (p)
+            profile = av_get_profile_name(p, enc->profile);
+    }
+
+    /*
+    if (enc->codec_tag) {        
+        av_get_codec_tag_string(tag_buf, sizeof(tag_buf), enc->codec_tag);
+        codec_name = tag_buf;
+    }*/
+
+
+    switch(enc->codec_type) {
+    case AVMEDIA_TYPE_VIDEO:
+        FFMSG_LOG( FFMSG_STRING_FMT(codectype), "video" );
+        FFMSG_LOG( FFMSG_STRING_FMT(codecname), codec_name );
+        FFMSG_LOG( FFMSG_STRING_FMT(profile), profile );
+
+        if (enc->pix_fmt != AV_PIX_FMT_NONE) {
+            FFMSG_LOG( FFMSG_STRING_FMT(picfmt), av_get_pix_fmt_name(enc->pix_fmt) );
+        }
+
+        if (enc->width) {
+            FFMSG_LOG( FFMSG_INT32_FMT(width), enc->width );
+            FFMSG_LOG( FFMSG_INT32_FMT(height),enc->height );
+
+            if (enc->sample_aspect_ratio.num) {
+                av_reduce(&display_aspect_ratio.num, &display_aspect_ratio.den,
+                          enc->width*enc->sample_aspect_ratio.num,
+                          enc->height*enc->sample_aspect_ratio.den,
+                          1024*1024);
+                FFMSG_LOG( FFMSG_INT32_FMT(darnum), display_aspect_ratio.num );
+                FFMSG_LOG( FFMSG_INT32_FMT(darden), display_aspect_ratio.den );
+                FFMSG_LOG( FFMSG_INT32_FMT(sarnum), enc->sample_aspect_ratio.num );
+                FFMSG_LOG( FFMSG_INT32_FMT(sarden), enc->sample_aspect_ratio.den );
+            }
+
+            /* if(av_log_get_level() >= AV_LOG_DEBUG){ */
+                /* int g= av_gcd(enc->time_base.num, enc->time_base.den); */
+                /* snprintf(buf + strlen(buf), buf_size - strlen(buf), */
+                     /* ", %d/%d", */
+                     /* enc->time_base.num/g, enc->time_base.den/g); */
+            /* } */
+        }
+        break;
+    case AVMEDIA_TYPE_AUDIO:
+        FFMSG_LOG( FFMSG_STRING_FMT(codectype), "audio" );
+        FFMSG_LOG( FFMSG_STRING_FMT(codecname), codec_name );
+        if (enc->sample_rate) {
+                FFMSG_LOG( FFMSG_INT32_FMT(samplerate), enc->sample_rate  );
+        }
+        //avcodec_get_channel_layout_string(buf , buf_size , enc->channels, enc->channel_layout);
+        FFMSG_LOG( FFMSG_INTEGER_FMT(channel_layout),  enc->channel_layout );
+
+        if (enc->sample_fmt != AV_SAMPLE_FMT_NONE) {
+            FFMSG_LOG( FFMSG_STRING_FMT(audfmt), av_get_sample_fmt_name(enc->sample_fmt));
+        }
+        break;
+    case AVMEDIA_TYPE_DATA:
+        FFMSG_LOG( FFMSG_STRING_FMT(codectype), "data" );
+        FFMSG_LOG( FFMSG_STRING_FMT(codecname), codec_name );
+        break;
+    case AVMEDIA_TYPE_SUBTITLE:
+        FFMSG_LOG( FFMSG_STRING_FMT(codectype), "subtitle" );
+        FFMSG_LOG( FFMSG_STRING_FMT(codecname), codec_name );
+        break;
+    case AVMEDIA_TYPE_ATTACHMENT:
+        FFMSG_LOG( FFMSG_STRING_FMT(codectype), "attachment" );
+        FFMSG_LOG( FFMSG_STRING_FMT(codecname), codec_name );
+        break;
+    default:
+        FFMSG_LOG( FFMSG_STRING_FMT(codectype), "invalid" );
+        FFMSG_LOG( FFMSG_INT32_FMT(codecname), enc->codec_type );
+        return;
+    }
+
+    bitrate = get_bit_rate(enc);
+    if (bitrate != 0) {
+        FFMSG_LOG( FFMSG_INT32_FMT(bitrate), bitrate );
+    }
+}
+
+
+
+static void dump_stream_nlformat(AVFormatContext *ic, int i, int index, int is_output)
+{
+    char buf[256];
+    AVStream *st = ic->streams[i];
+    AVDictionaryEntry *lang;
+
+    FFMSG_LOG( FFMSG_NODE_START_FMT("stream_%d_%d"), index, i );
+    FFMSG_LOG( FFMSG_INT32_FMT(index), index );
+    FFMSG_LOG( FFMSG_INT32_FMT(stid), i );
+
+    avcodec_nlstring(buf, sizeof(buf), st->codec, is_output);
+
+    nl_dump_metadata(st->metadata);
+    lang = av_dict_get(st->metadata, "language", 0, 0);
+    if (lang) {
+        FFMSG_LOG(FFMSG_STRING_FMT(lang), lang->value);
+    }
+ 
+    if(st->codec->codec_type == AVMEDIA_TYPE_VIDEO){
+        if(st->avg_frame_rate.den && st->avg_frame_rate.num) {
+            FFMSG_LOG( FFMSG_INT32_FMT(avg_framerate_num), st->avg_frame_rate.num );
+            FFMSG_LOG( FFMSG_INT32_FMT(avg_framerate_den), st->avg_frame_rate.den );
+        }
+
+        if(st->r_frame_rate.den && st->r_frame_rate.num) {
+            FFMSG_LOG( FFMSG_INT32_FMT(r_framerate_num), st->r_frame_rate.num );
+            FFMSG_LOG( FFMSG_INT32_FMT(r_framerate_den), st->r_frame_rate.den );
+        }
+        if(st->time_base.den && st->time_base.num) {
+            FFMSG_LOG( FFMSG_INT32_FMT(mux_timebase_num), st->time_base.num );
+            FFMSG_LOG( FFMSG_INT32_FMT(mux_timebase_den), st->time_base.den );
+        }
+
+        if(st->codec->time_base.den && st->codec->time_base.num) {
+            FFMSG_LOG( FFMSG_INT32_FMT(codec_timebase_num), st->codec->time_base.num );
+            FFMSG_LOG( FFMSG_INT32_FMT(codec_timebase_den), st->codec->time_base.den );
+        }
+    }
+
+    FFMSG_LOG( FFMSG_INTEGER_FMT(duration), st->duration != AV_NOPTS_VALUE ? st->duration : 0  );
+    FFMSG_LOG( FFMSG_INT32_FMT(inspected_frame_count), st->codec_info_nb_frames );
+
+    FFMSG_LOG( FFMSG_NODE_STOP_FMT("stream_%d_%d"), index, i );
+}
+
+
+void dump_nlformat(AVFormatContext *ic,
+                 int index,
+                 const char *url,
+                 int is_output)
+{
+    int i,rscount;
+    AVDictionaryEntry *srctype;
+    uint8_t *printed = av_mallocz(ic->nb_streams);
+    if (ic->nb_streams && !printed)
+        return;
+
+    /* stream info */
+    av_log(NULL, AV_LOG_INFO, FFMSG_START );
+    FFMSG_LOG( FFMSG_INT32_FMT(version_major), FFMSG_VERSION_MAJOR );
+    FFMSG_LOG( FFMSG_INT32_FMT(version_minor), FFMSG_VERSION_MINOR );
+    FFMSG_LOG( FFMSG_STRING_FMT(msgtype), FFMSG_MSGTYPE_STREAMINFO );
+
+    av_log(NULL, AV_LOG_INFO, FFMSG_NODE_START(muxinfo) );
+
+    av_log(NULL, AV_LOG_INFO, FFMSG_STRING_FMT(direction), is_output ? "output" : "input" );
+    av_log(NULL, AV_LOG_INFO, FFMSG_INTEGER_FMT(index), (int64_t)index );
+    av_log(NULL, AV_LOG_INFO, FFMSG_INTEGER_FMT(timebase), (int64_t)AV_TIME_BASE );
+    av_log(NULL, AV_LOG_INFO, FFMSG_STRING_FMT(mux_format), is_output ? ic->oformat->name : ic->iformat->name  );
+    srctype = av_dict_get(ic->metadata, "source_type", 0,0);
+    av_log(NULL, AV_LOG_INFO, FFMSG_STRING_FMT(source_type), srctype ? srctype->value : "file"  );
+
+    av_log(NULL, AV_LOG_INFO, FFMSG_INTEGER_FMT(program_count), (int64_t)ic->nb_programs );
+    FFMSG_LOG( FFMSG_INT32_FMT(stream_count), ic->nb_streams );
+
+    if (!is_output) {
+        av_log(NULL, AV_LOG_INFO, FFMSG_INTEGER_FMT(duration), ic->duration != AV_NOPTS_VALUE ? ic->duration : 0 );
+        av_log(NULL, AV_LOG_INFO, FFMSG_INTEGER_FMT(start_time), ic->start_time != AV_NOPTS_VALUE ? ic->start_time : 0 );
+        if( ic->bit_rate ) {
+            av_log(NULL, AV_LOG_INFO, FFMSG_INTEGER_FMT(bitrate), (int64_t)ic->bit_rate );
+        }        
+
+    }
+
+    if(ic->nb_programs) {
+        int j, k, total = 0;
+        FFMSG_LOG( FFMSG_NODE_START(programs) );
+        for(j=0; j<ic->nb_programs; j++) {
+            AVDictionaryEntry *name = av_dict_get(ic->programs[j]->metadata,  "name", NULL, 0);
+            FFMSG_LOG( FFMSG_NODE_START_FMT("id%d"), ic->programs[j]->id );
+            FFMSG_LOG( FFMSG_INTEGER_FMT(id), (int64_t) ic->programs[j]->id );
+            FFMSG_LOG( FFMSG_STRING_FMT(name), name ? name->value : ""  );
+
+            for(k=0; k<ic->programs[j]->nb_stream_indexes; k++) {
+                dump_stream_nlformat(ic, ic->programs[j]->stream_index[k], index, is_output);
+                printed[ic->programs[j]->stream_index[k]] = 1;
+            }
+            total += ic->programs[j]->nb_stream_indexes;
+
+            FFMSG_LOG( FFMSG_NODE_STOP_FMT("id%d"), ic->programs[j]->id );
+        }
+        /* if (total < ic->nb_streams) */
+            /* av_log(NULL, AV_LOG_INFO, "  No Program\n"); */
+        FFMSG_LOG( FFMSG_NODE_STOP(programs) );
+    }
+
+    FFMSG_LOG( FFMSG_NODE_START(streams) );
+    rscount = 0;
+    for(i=0;i<ic->nb_streams;i++)
+        if (!printed[i]) {
+            dump_stream_nlformat(ic, i, index, is_output);
+            rscount++;
+        }
+
+    FFMSG_LOG( FFMSG_NODE_STOP(streams) );
+ 
+    FFMSG_LOG( FFMSG_INT32_FMT(rawstream_count), rscount );
+
+    av_log(NULL, AV_LOG_INFO, FFMSG_NODE_STOP(muxinfo) );
+    av_log(NULL, AV_LOG_INFO, FFMSG_STOP );
+    av_free(printed); 
+}
+
+
+
+
+/****************************************************************/
+/* nlreport                                                     */
+/****************************************************************/
+//#define _XOPEN_SOURCE 600
+//#define STATS_DELAY 100000
+#define STATS_DELAY 200000  /* delay between progress info messages */
+void print_nlreport( OutputFile **output_files,
+                         OutputStream **ost_table, int nb_ostreams,
+                         int is_last_report, int64_t timer_start, int nb_frames_dup, int nb_frames_drop )
+{
+    //char buf[1024];
+    OutputStream *ost;
+    AVFormatContext *oc;
+    int64_t total_size;
+    AVCodecContext *enc;
+    int frame_number, vid, i;
+    double bitrate, ti1, pts;
+    static int64_t last_time = -1;
+    //static int qp_histogram[52];
+
+    if (!is_last_report) {
+        int64_t cur_time;
+        /* display the report every 0.5 seconds */
+        cur_time = av_gettime();
+        if (last_time == -1) {
+            last_time = cur_time;
+            return;
+        }
+        if ((cur_time - last_time) < STATS_DELAY )
+            return;
+        last_time = cur_time;
+    }
+
+    
+    FFMSG_LOG( FFMSG_START );
+    FFMSG_LOG( FFMSG_INT32_FMT(version_major), FFMSG_VERSION_MAJOR );
+    FFMSG_LOG( FFMSG_INT32_FMT(version_minor), FFMSG_VERSION_MINOR );
+    FFMSG_LOG( FFMSG_STRING_FMT(msgtype), FFMSG_MSGTYPE_PROGRESSINFO );
+
+    FFMSG_LOG( FFMSG_NODE_START(progress) );
+
+    oc = output_files[0]->ctx;
+
+    total_size = avio_size(oc->pb);
+    if (total_size < 0) { // FIXME improve avio_size() so it works with non seekable output too
+        total_size= avio_tell(oc->pb);
+        if (total_size < 0)
+            total_size = 0;
+    }
+
+    //buf[0] = '\0';
+    ti1 = 1e10;
+    vid = 0;
+    for(i=0;i<nb_ostreams;i++) {
+        ost = ost_table[i];
+        enc = ost->st->codec;
+//        if (vid && enc->codec_type == AVMEDIA_TYPE_VIDEO) {
+//            snprintf(buf + strlen(buf), sizeof(buf) - strlen(buf), "q=%2.1f ",
+//                     !ost->st->stream_copy ?
+//                     enc->coded_frame->quality/(float)FF_QP2LAMBDA : -1);
+//        }
+        if (!vid && enc->codec_type == AVMEDIA_TYPE_VIDEO) {
+            int fps;
+            float t = (av_gettime()-timer_start) / 1000000.0;
+
+            frame_number = ost->frame_number;
+            fps = (t>1)?(int)(frame_number/t+0.5) : 0;
+            /* snprintf(buf + strlen(buf), sizeof(buf) - strlen(buf), "frame=%5d fps=%3d q=%3.1f ", */
+                     /* frame_number, fps, */
+                     /* !ost->st->stream_copy ? */
+                     /* enc->coded_frame->quality/(float)FF_QP2LAMBDA : -1); */
+
+            FFMSG_LOG( FFMSG_INT32_FMT(curframe), frame_number );
+            FFMSG_LOG( FFMSG_INT32_FMT(fps), fps );
+
+//            if(is_last_report)
+//                snprintf(buf + strlen(buf), sizeof(buf) - strlen(buf), "L");
+            vid = 1;
+        }
+        /* compute min output value */
+        pts = (double)ost->st->cur_dts * av_q2d(ost->st->time_base);
+        if ((pts < ti1) && (pts > 0))
+            ti1 = pts;
+    }
+    if (ti1 < 0.01)
+        ti1 = 0.01;
+
+    if (1) {
+        bitrate = (double)(total_size * 8) / ti1 / 1000.0;
+
+//        snprintf(buf + strlen(buf), sizeof(buf) - strlen(buf),
+//            "size=%8.0fkB time=%0.2f bitrate=%6.1fkbits/s",
+//            (double)total_size / 1024, ti1, bitrate);
+
+        FFMSG_LOG( FFMSG_INTEGER_FMT(size), total_size );
+        FFMSG_LOG( FFMSG_INT32_FMT(bitrate), (int)(bitrate*1000.0) );
+        FFMSG_LOG( FFMSG_INT32_FMT(frames_dup), nb_frames_dup );
+        FFMSG_LOG( FFMSG_INT32_FMT(frames_drop), nb_frames_drop );
+        FFMSG_LOG( FFMSG_INT32_FMT(is_last_report), is_last_report );
+        FFMSG_LOG( FFMSG_INT32_FMT(curtime), (int)(ti1*1000.0) );
+
+//        if (nb_frames_dup || nb_frames_drop)
+//          snprintf(buf + strlen(buf), sizeof(buf) - strlen(buf), " dup=%d drop=%d",
+//                  nb_frames_dup, nb_frames_drop);
+
+        /* if (verbose >= 0) */
+            /* fprintf(stderr, "%s    \r", buf); */
+/*  */
+        /* fflush(stderr); */
+    }
+
+//    if (is_last_report && verbose >= 0){
+//        int64_t raw= audio_size + video_size + extra_size;
+//        /* fprintf(stderr, "\n"); */
+//        /* fprintf(stderr, "video:%1.0fkB audio:%1.0fkB global headers:%1.0fkB muxing overhead %f%%\n", */
+//                /* video_size/1024.0, */
+//                /* audio_size/1024.0, */
+//                /* extra_size/1024.0, */
+//                /* 100.0*(total_size - raw)/raw */
+//        /* ); */
+//    }
+
+
+    FFMSG_LOG( FFMSG_NODE_STOP(progress) );
+    FFMSG_LOG( FFMSG_STOP );
+}
+
+/* codec output report */
+/* outputs an array of codecs with the format:
+ *
+ * {
+ *  codecs: [
+ *      {
+ *          decode: true,
+ *          encode: false,
+ *          type: one of 'video','audio','subtitle'
+ *          caps: int
+ *      }
+ *  ]
+ * }
+ */
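+
+/* For reference, one entry of that array as emitted by show_codecs_json(), with
+ * illustrative values:
+ *
+ *   { "name":"aac", "decode":true, "encode":true, "type":"audio",
+ *     "caps":<capability flags>, "long_name":"AAC (Advanced Audio Coding)" }
+ */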
+
+
+/* escape C string so it can be used as a string literal. Make sure to call
+ * c_strfree to free the string returned by c_strescape  */
+void c_strfree(char *str) { 
+    free(str);
+}
+char *c_strescape (const char *source)
+{
+    const unsigned char *p;
+    char *dest;
+    char *q;
+    unsigned char excmap[256];
+
+    //g_return_val_if_fail (source != NULL, NULL);
+    if(!source) return NULL;
+
+    p = (const unsigned char *) source;
+    /* Each source byte needs maximally four destination chars (\777) */
+    q = dest = malloc (strlen (source) * 4 + 1);
+
+    memset (excmap, 0, 256);
+
+    while (*p)
+    {
+        if (excmap[*p])
+            *q++ = *p;
+        else
+        {
+            switch (*p)
+            {
+                case '\b':
+                    *q++ = '\\';
+                    *q++ = 'b';
+                    break;
+                case '\f':
+                    *q++ = '\\';
+                    *q++ = 'f';
+                    break;
+                case '\n':
+                    *q++ = '\\';
+                    *q++ = 'n';
+                    break;
+                case '\r':
+                    *q++ = '\\';
+                    *q++ = 'r';
+                    break;
+                case '\t':
+                    *q++ = '\\';
+                    *q++ = 't';
+                    break;
+                case '\\':
+                    *q++ = '\\';
+                    *q++ = '\\';
+                    break;
+                case '"':
+                    *q++ = '\\';
+                    *q++ = '"';
+                    break;
+                default:
+                    if ((*p < ' ') || (*p >= 0177))
+                    {
+                        *q++ = '\\';
+                        *q++ = '0' + (((*p) >> 6) & 07);
+                        *q++ = '0' + (((*p) >> 3) & 07);
+                        *q++ = '0' + ((*p) & 07);
+                    }
+                    else
+                        *q++ = *p;
+                    break;
+            }
+        }
+        p++;
+    }
+    *q = 0;
+    return dest;
+}
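+
+/* A minimal usage sketch:
+ *
+ *   char *esc = c_strescape("say \"hi\"\n");
+ *   // esc now holds: say \"hi\"\n   (ready to embed inside a quoted JSON string)
+ *   c_strfree(esc);
+ */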
+
+
+
+
diff --git a/libavformat/Makefile b/libavformat/Makefile
index c010fc83f9..5ead04cfe7 100644
--- a/libavformat/Makefile
+++ b/libavformat/Makefile
@@ -636,6 +636,12 @@ OBJS-$(CONFIG_IEC61883_INDEV)            += dv.o
 # Windows resource file
 SLIBOBJS-$(HAVE_GNU_WINDRES)             += avformatres.o
 
+
+# --vgtmpeg
+OBJS-$(CONFIG_DVD_PROTOCOL)              += dvdurl.o dvdurl_common.o dvdurl_lang.o
+OBJS-$(CONFIG_BD_PROTOCOL)               += dvdurl.o dvdurl_common.o dvdurl_lang.o bdurl.o
+# --vgtmpeg
+
 SKIPHEADERS-$(CONFIG_FFRTMPCRYPT_PROTOCOL) += rtmpdh.h
 SKIPHEADERS-$(CONFIG_NETWORK)            += network.h rtsp.h
 
diff --git a/libavformat/bdurl.c b/libavformat/bdurl.c
new file mode 100644
index 0000000000..3671873417
--- /dev/null
+++ b/libavformat/bdurl.c
@@ -0,0 +1,1078 @@
+/* @@--
+ * 
+ * Copyright (C) 2010-2018 Alberto Vigata
+ *       
+ * This file is part of vgtmpeg
+ * 
+ * a Versed Generalist Transcoder
+ * 
+ * vgtmpeg is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2, or (at your option)
+ * any later version.
+ * 
+ * vgtmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ * 
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ */
+
+#include <string.h>
+#include "avformat.h"
+#include "libavutil/avstring.h"
+#include "libavutil/opt.h"
+#include "dvdurl_lang.h"
+#include "bdurl.h"
+#include "url.h"
+
+
+static int gloglevel = HB_LOG_VERBOSE;
+
+#ifdef __GNUC__
+#define BDNOT_USED __attribute__ ((unused))
+#else
+#define BDNOT_USED
+#endif
+
+/***********************************************************************
+ * Local prototypes
+ **********************************************************************/
+static int           next_packet( BLURAY *bd, uint8_t *pkt );
+static int title_info_compare_mpls(const void *, const void *);
+
+static hb_bd_t     * hb_bd_init( char * path );
+static int           hb_bd_title_count( hb_bd_t * d );
+static hb_title_t  * hb_bd_title_scan( hb_bd_t * d, int t, uint64_t min_duration );
+static int           hb_bd_start( hb_bd_t * d, hb_title_t *title );
+static void          hb_bd_stop( hb_bd_t * d );
+static int           hb_bd_seek( hb_bd_t * d, float f );
+static int           hb_bd_seek_pts( hb_bd_t * d, uint64_t pts );
+static int           hb_bd_seek_chapter( hb_bd_t * d, int chapter );
+static hb_buffer_t * hb_bd_read( hb_bd_t * d );
+static int           hb_bd_chapter( hb_bd_t * d );
+static void          hb_bd_close( hb_bd_t ** _d );
+static void          hb_bd_set_angle( hb_bd_t * d, int angle );
+static int           hb_bd_main_feature( hb_bd_t * d, hb_list_t * list_title );
+
+/***********************************************************************
+ * hb_bd_init
+ ***********************************************************************
+ *
+ **********************************************************************/
+hb_bd_t * hb_bd_init( char * path )
+{
+    hb_bd_t * d;
+    int ii;
+
+    d = av_mallocz( sizeof( hb_bd_t ) );
+
+    /* Open device */
+    d->bd = bd_open( path, NULL );
+    if( d->bd == NULL )
+    {
+        /*
+         * Not an error, may be a stream - which we'll try in a moment.
+         */
+        hb_log_level(gloglevel, "bd: not a bd - trying as a stream/file instead" );
+        goto fail;
+    }
+
+    d->title_count = bd_get_titles( d->bd, TITLES_RELEVANT, 0 );  /* FIXME min duration */
+    if ( d->title_count == 0 )
+    {
+        hb_log_level(gloglevel, "bd: not a bd - trying as a stream/file instead" );
+        goto fail;
+    }
+    d->title_info = av_mallocz( sizeof( BLURAY_TITLE_INFO* ) * d->title_count );
+    for ( ii = 0; ii < d->title_count; ii++ )
+    {
+        d->title_info[ii] = bd_get_title_info( d->bd, ii, 0 );  /* FIXME 0 is correct angle? */
+    }
+    qsort(d->title_info, d->title_count, sizeof( BLURAY_TITLE_INFO* ), title_info_compare_mpls );
+
+    /* vgtmpeg */
+    /* allocate fixed hb_buffer_t for reads */
+    d->read_buffer = av_mallocz( sizeof(hb_buffer_t));
+    d->read_buffer->size = HB_DVD_READ_BUFFER_SIZE;
+    d->read_buffer->data = av_malloc(HB_DVD_READ_BUFFER_SIZE);
+
+    d->path = av_strdup( path );
+
+    return d;
+
+fail:
+    if( d->bd ) bd_close( d->bd );
+    av_free( d );
+    return NULL;
+}
+
+/***********************************************************************
+ * hb_bd_title_count
+ **********************************************************************/
+int hb_bd_title_count( hb_bd_t * d )
+{
+    return d->title_count;
+}
+
+static void add_audio(int track, hb_list_t *list_audio, BLURAY_STREAM_INFO *bdaudio, int substream_type, uint32_t codec, uint32_t codec_param)
+{
+    hb_audio_t * audio;
+    const iso639_lang_t * lang;
+    int stream_type;
+
+    audio = av_mallocz( sizeof( hb_audio_t ) );
+
+    audio->id = (substream_type << 16) | bdaudio->pid;
+    audio->config.in.stream_type = bdaudio->coding_type;
+    audio->config.in.substream_type = substream_type;
+    audio->config.in.codec = codec;
+    audio->config.in.codec_param = codec_param;
+    audio->config.lang.type = 0;
+
+    lang = lang_for_code2( (char*)bdaudio->lang );
+
+    stream_type = bdaudio->coding_type;
+    snprintf( audio->config.lang.description, 
+        sizeof( audio->config.lang.description ), "%s (%s)",
+        strlen(lang->native_name) ? lang->native_name : lang->eng_name,
+        audio->config.in.codec == HB_ACODEC_AC3 ? "AC3" : 
+        ( audio->config.in.codec == HB_ACODEC_DCA ? "DTS" : 
+        ( ( audio->config.in.codec & HB_ACODEC_FF_MASK ) ? 
+            ( stream_type == BLURAY_STREAM_TYPE_AUDIO_LPCM ? "BD LPCM" : 
+            ( stream_type == BLURAY_STREAM_TYPE_AUDIO_AC3PLUS ? "E-AC3" : 
+            ( stream_type == BLURAY_STREAM_TYPE_AUDIO_TRUHD ? "TrueHD" : 
+            ( stream_type == BLURAY_STREAM_TYPE_AUDIO_DTSHD ? "DTS-HD HRA" : 
+            ( stream_type == BLURAY_STREAM_TYPE_AUDIO_DTSHD_MASTER ? "DTS-HD MA" : 
+            ( stream_type == BLURAY_STREAM_TYPE_AUDIO_MPEG1 ? "MPEG1" : 
+            ( stream_type == BLURAY_STREAM_TYPE_AUDIO_MPEG2 ? "MPEG2" : 
+                                                           "Unknown FFmpeg" 
+            ) ) ) ) ) ) ) : "Unknown" 
+        ) ) );
+
+    snprintf( audio->config.lang.simple, 
+              sizeof( audio->config.lang.simple ), "%s",
+              strlen(lang->native_name) ? lang->native_name : 
+                                          lang->eng_name );
+
+    snprintf( audio->config.lang.iso639_2, 
+              sizeof( audio->config.lang.iso639_2 ), "%s", lang->iso639_2);
+
+    hb_log_level(gloglevel, "bd: audio id=0x%x, lang=%s, 3cc=%s", audio->id,
+            audio->config.lang.description, audio->config.lang.iso639_2 );
+
+    audio->config.in.track = track;
+    hb_list_add( list_audio, audio );
+    return;
+}
+
+static int bd_audio_equal( BLURAY_CLIP_INFO *a, BLURAY_CLIP_INFO *b )
+{
+    int ii, jj, equal;
+
+    if ( a->audio_stream_count != b->audio_stream_count )
+        return 0;
+
+    for ( ii = 0; ii < a->audio_stream_count; ii++ )
+    {
+        BLURAY_STREAM_INFO * s = &a->audio_streams[ii];
+        equal = 0;
+        for ( jj = 0; jj < b->audio_stream_count; jj++ )
+        {
+            if ( s->pid == b->audio_streams[jj].pid &&
+                 s->coding_type == b->audio_streams[jj].coding_type)
+            {
+                equal = 1;
+                break;
+            }
+        }
+        if ( !equal )
+            return 0;
+    }
+    return 1;
+}
+#define STR4_TO_UINT32(p) \
+    ((((const uint8_t*)(p))[0] << 24) | \
+     (((const uint8_t*)(p))[1] << 16) | \
+     (((const uint8_t*)(p))[2] <<  8) | \
+      ((const uint8_t*)(p))[3])
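+
+/* e.g. STR4_TO_UINT32("HDMV") == 0x48444D56, the registration descriptor stored
+ * in title->reg_desc below. */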
+
+
+/***********************************************************************
+ * hb_bd_title_scan
+ **********************************************************************/
+hb_title_t * hb_bd_title_scan( hb_bd_t * d, int tt, uint64_t min_duration )
+{
+
+    hb_title_t   * title;
+    hb_chapter_t * chapter;
+    int            ii, jj;
+    BLURAY_TITLE_INFO * ti = NULL;
+    BLURAY_STREAM_INFO * bdvideo;
+    char * p_cur, * p_last;
+    uint64_t pkt_count;
+
+
+    hb_log_level(gloglevel, "bd: scanning title %d", tt );
+
+    title = hb_title_init( d->path, tt );
+    title->demuxer = HB_MPEG_DEMUXER;
+    title->type = HB_BD_TYPE;
+    title->reg_desc = STR4_TO_UINT32("HDMV");
+
+    p_last = d->path;
+    for( p_cur = d->path; *p_cur; p_cur++ )
+    {
+        if( p_cur[0] == '/' && p_cur[1] )
+        {
+            p_last = &p_cur[1];
+        }
+    }
+    snprintf( title->name, sizeof( title->name ), "%s", p_last );
+    av_strlcpy( title->path, d->path, 1024 );
+    title->path[1023] = 0;
+
+    title->vts = 0;
+    title->ttn = 0;
+
+    ti = d->title_info[tt - 1];
+    if ( ti == NULL )
+    {
+        hb_log_level(gloglevel, "bd: invalid title" );
+        goto fail;
+    }
+    if ( ti->clip_count == 0 )
+    {
+        hb_log_level(gloglevel, "bd: stream has no clips" );
+        goto fail;
+    }
+    if ( ti->clips[0].video_stream_count == 0 )
+    {
+        hb_log_level(gloglevel, "bd: stream has no video" );
+        goto fail;
+    }
+
+    hb_log_level(gloglevel, "bd: playlist %05d.MPLS", ti->playlist );
+    title->playlist = ti->playlist;
+
+    pkt_count = 0;
+    for ( ii = 0; ii < ti->clip_count; ii++ )
+    {
+        pkt_count += ti->clips[ii].pkt_count;
+    }
+    title->block_start = 0;
+    title->block_end = pkt_count;
+    title->block_count = pkt_count;
+
+    title->angle_count = ti->angle_count;
+
+    /* Get duration */
+    title->duration = ti->duration;
+    title->hours    = title->duration / 90000 / 3600;
+    title->minutes  = ( ( title->duration / 90000 ) % 3600 ) / 60;
+    title->seconds  = ( title->duration / 90000 ) % 60;
+    hb_log_level(gloglevel, "bd: duration is %02d:%02d:%02d (%"PRId64" ms)",
+            title->hours, title->minutes, title->seconds,
+            title->duration / 90 );
+
+    /* ignore short titles because they're often stills */
+    if( ti->duration < min_duration )
+    {
+        hb_log_level(gloglevel, "bd: ignoring title (too short)" );
+        goto fail;
+    }
+
+    bdvideo = &ti->clips[0].video_streams[0];
+
+    title->video_id = bdvideo->pid;
+    title->video_stream_type = bdvideo->coding_type;
+
+    hb_log_level(gloglevel, "bd: video id=0x%x, stream type=%s, format %s", title->video_id,
+            bdvideo->coding_type == BLURAY_STREAM_TYPE_VIDEO_MPEG1 ? "MPEG1" :
+            bdvideo->coding_type == BLURAY_STREAM_TYPE_VIDEO_MPEG2 ? "MPEG2" :
+            bdvideo->coding_type == BLURAY_STREAM_TYPE_VIDEO_VC1 ? "VC-1" :
+            bdvideo->coding_type == BLURAY_STREAM_TYPE_VIDEO_H264 ? "H.264" :
+            "Unknown",
+            bdvideo->format == BLURAY_VIDEO_FORMAT_480I ? "480i" :
+            bdvideo->format == BLURAY_VIDEO_FORMAT_576I ? "576i" :
+            bdvideo->format == BLURAY_VIDEO_FORMAT_480P ? "480p" :
+            bdvideo->format == BLURAY_VIDEO_FORMAT_1080I ? "1080i" :
+            bdvideo->format == BLURAY_VIDEO_FORMAT_720P ? "720p" :
+            bdvideo->format == BLURAY_VIDEO_FORMAT_1080P ? "1080p" :
+            bdvideo->format == BLURAY_VIDEO_FORMAT_576P ? "576p" :
+            "Unknown"
+          );
+
+    if ( bdvideo->coding_type == BLURAY_STREAM_TYPE_VIDEO_VC1 &&
+       ( bdvideo->format == BLURAY_VIDEO_FORMAT_480I ||
+         bdvideo->format == BLURAY_VIDEO_FORMAT_576I ||
+         bdvideo->format == BLURAY_VIDEO_FORMAT_1080I ) )
+    {
+        hb_log_level(gloglevel, "bd: Interlaced VC-1 not supported" );
+        goto fail;
+    }
+
+    switch( bdvideo->coding_type )
+    {
+        case BLURAY_STREAM_TYPE_VIDEO_MPEG1:
+        case BLURAY_STREAM_TYPE_VIDEO_MPEG2:
+            title->video_codec = WORK_DECMPEG2;
+            title->video_codec_param = 0;
+            break;
+
+        case BLURAY_STREAM_TYPE_VIDEO_VC1:
+            title->video_codec = WORK_DECAVCODECV;
+            title->video_codec_param = AV_CODEC_ID_VC1;
+            break;
+
+        case BLURAY_STREAM_TYPE_VIDEO_H264:
+            title->video_codec = WORK_DECAVCODECV;
+            title->video_codec_param = AV_CODEC_ID_H264;
+            title->flags |= HBTF_NO_IDR;
+            break;
+
+        default:
+            hb_log_level(gloglevel, "scan: unknown video codec (0x%x)",
+                    bdvideo->coding_type );
+            goto fail;
+    }
+
+    switch ( bdvideo->aspect )
+    {
+        case BLURAY_ASPECT_RATIO_4_3:
+            title->container_aspect = 4. / 3.;
+            break;
+        case BLURAY_ASPECT_RATIO_16_9:
+            title->container_aspect = 16. / 9.;
+            break;
+        default:
+            hb_log_level(gloglevel, "bd: unknown aspect" );
+            goto fail;
+    }
+    hb_log_level(gloglevel, "bd: aspect = %g", title->container_aspect );
+
+    /* Detect audio */
+    // Not all BD clips are required to have the same audio, but clips
+    // that transition seamlessly must carry the same audio as the
+    // previous clip.  So pick the clip whose audio matches the most
+    // other clips.  The maximum number of primary BD audio streams is 32.
+	{
+		int matches;
+		int most_audio = 0;
+		int audio_clip_index = 0;
+		for (ii = 0; ii < ti->clip_count; ii++) {
+			matches = 0;
+			for (jj = 0; jj < ti->clip_count; jj++) {
+				if (bd_audio_equal(&ti->clips[ii], &ti->clips[jj])) {
+					matches++;
+				}
+			}
+			if (matches > most_audio) {
+				most_audio = matches;
+				audio_clip_index = ii;
+			}
+		}
+
+		// Add all the audios found in the above clip.
+		for (ii = 0; ii < ti->clips[audio_clip_index].audio_stream_count;
+				ii++) {
+			BLURAY_STREAM_INFO * bdaudio;
+
+			bdaudio = &ti->clips[audio_clip_index].audio_streams[ii];
+
+			switch (bdaudio->coding_type) {
+			case BLURAY_STREAM_TYPE_AUDIO_TRUHD:
+				// Add 2 audio tracks.  One for TrueHD and one for AC-3
+				add_audio(ii, title->list_audio, bdaudio,
+				HB_SUBSTREAM_BD_AC3, HB_ACODEC_AC3, 0);
+				add_audio(ii, title->list_audio, bdaudio,
+				HB_SUBSTREAM_BD_TRUEHD, HB_ACODEC_FFMPEG, AV_CODEC_ID_TRUEHD);
+				break;
+
+			case BLURAY_STREAM_TYPE_AUDIO_DTS:
+				add_audio(ii, title->list_audio, bdaudio, 0, HB_ACODEC_DCA, 0);
+				break;
+
+			case BLURAY_STREAM_TYPE_AUDIO_MPEG2:
+			case BLURAY_STREAM_TYPE_AUDIO_MPEG1:
+				add_audio(ii, title->list_audio, bdaudio, 0,
+				HB_ACODEC_FFMPEG, AV_CODEC_ID_MP2);
+				break;
+
+			case BLURAY_STREAM_TYPE_AUDIO_AC3PLUS:
+				add_audio(ii, title->list_audio, bdaudio, 0,
+				HB_ACODEC_FFMPEG, AV_CODEC_ID_EAC3);
+				break;
+
+			case BLURAY_STREAM_TYPE_AUDIO_LPCM:
+				add_audio(ii, title->list_audio, bdaudio, 0,
+				HB_ACODEC_FFMPEG, AV_CODEC_ID_PCM_BLURAY);
+				break;
+
+			case BLURAY_STREAM_TYPE_AUDIO_AC3:
+				add_audio(ii, title->list_audio, bdaudio, 0, HB_ACODEC_AC3, 0);
+				break;
+
+			case BLURAY_STREAM_TYPE_AUDIO_DTSHD_MASTER:
+			case BLURAY_STREAM_TYPE_AUDIO_DTSHD:
+				// Add 2 audio tracks.  One for DTS-HD and one for DTS
+				add_audio(ii, title->list_audio, bdaudio, HB_SUBSTREAM_BD_DTS,
+				HB_ACODEC_DCA, 0);
+				// DTS-HD is special.  The substreams must be concatenated:
+				// DTS core followed by DTS-HD extensions.  Setting a
+				// substream id of 0 says use all substreams.
+				add_audio(ii, title->list_audio, bdaudio, 0,
+				HB_ACODEC_DCA_HD, AV_CODEC_ID_DTS);
+				break;
+
+			default:
+				hb_log_level(gloglevel,
+						"scan: unknown audio pid 0x%x codec 0x%x",
+						bdaudio->pid, bdaudio->coding_type);
+				break;
+			}
+		}
+	}
+
+    /* Chapters */
+    for ( ii = 0; ii < ti->chapter_count; ii++ )
+    {
+    	int seconds;
+        chapter = av_mallocz( sizeof( hb_chapter_t ) );
+
+        chapter->index = ii + 1;
+        chapter->duration = ti->chapters[ii].duration;
+        chapter->block_start = ti->chapters[ii].offset;
+
+        seconds            = ( chapter->duration + 45000 ) / 90000;
+        chapter->hours     = seconds / 3600;
+        chapter->minutes   = ( seconds % 3600 ) / 60;
+        chapter->seconds   = seconds % 60;
+
+        hb_log_level(gloglevel, "bd: chap %d packet=%"PRIu64", %"PRId64" ms",
+                chapter->index,
+                chapter->block_start,
+                chapter->duration / 90 );
+
+        hb_list_add( title->list_chapter, chapter );
+    }
+    hb_log_level(gloglevel, "bd: title %d has %d chapters", tt, ti->chapter_count );
+
+    /* This title is ok so far */
+    goto cleanup;
+
+fail:
+    hb_title_close( &title );
+
+cleanup:
+
+    return title;
+}
+
+/***********************************************************************
+ * hb_bd_main_feature
+ **********************************************************************/
+int hb_bd_main_feature( hb_bd_t * d, hb_list_t * list_title )
+{
+    int longest = 0;
+    int ii;
+    uint64_t longest_duration = 0;
+    int highest_rank = 0;
+    int most_chapters = 0;
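+    /* rank[] orders the BLURAY_VIDEO_FORMAT_* codes (used as the index) so
+     * that larger frame sizes, and progressive over interlaced, win when
+     * picking the main feature */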
+    int rank[8] = {0, 1, 3, 2, 6, 5, 7, 4};
+    BLURAY_TITLE_INFO * ti;
+
+    for ( ii = 0; ii < hb_list_count( list_title ); ii++ )
+    {
+        hb_title_t * title = hb_list_item( list_title, ii );
+        ti = d->title_info[title->index - 1];
+        if ( ti ) 
+        {
+            BLURAY_STREAM_INFO * bdvideo = &ti->clips[0].video_streams[0];
+            if ( title->duration > longest_duration * 0.7 && bdvideo->format < 8 )
+            {
+                if (highest_rank < rank[bdvideo->format] ||
+                    ( title->duration > longest_duration &&
+                          highest_rank == rank[bdvideo->format]))
+                {
+                    longest = title->index;
+                    longest_duration = title->duration;
+                    highest_rank = rank[bdvideo->format];
+                    most_chapters = ti->chapter_count;
+                }
+                else if (highest_rank == rank[bdvideo->format] &&
+                         title->duration == longest_duration &&
+                         ti->chapter_count > most_chapters)
+                {
+                    longest = title->index;
+                    most_chapters = ti->chapter_count;
+                }
+            }
+        }
+        else if ( title->duration > longest_duration )
+        {
+            longest_duration = title->duration;
+            longest = title->index;
+        }
+    }
+    return longest;
+}
+
+/***********************************************************************
+ * hb_bd_start
+ ***********************************************************************
+ * Title and chapter start at 1
+ **********************************************************************/
+int hb_bd_start( hb_bd_t * d, hb_title_t *title )
+{
+    BD_EVENT event;
+
+    d->pkt_count = title->block_count;
+
+    // Calling bd_get_event initializes the libbluray event queue.
+    bd_select_title( d->bd, d->title_info[title->index - 1]->idx );
+    bd_get_event( d->bd, &event );
+    d->chapter = 1;
+//    d->stream = hb_bd_stream_open( title );
+//    if ( d->stream == NULL )
+//    {
+//        return 0;
+//    }
+    return 1;
+}
+
+/***********************************************************************
+ * hb_bd_stop
+ ***********************************************************************
+ *
+ **********************************************************************/
+void BDNOT_USED hb_bd_stop( hb_bd_t * d )
+{
+    //if( d->stream ) hb_stream_close( &d->stream );
+}
+
+/***********************************************************************
+ * hb_bd_seek
+ ***********************************************************************
+ *
+ **********************************************************************/
+int BDNOT_USED hb_bd_seek( hb_bd_t * d, float f )
+{
+    uint64_t packet = f * d->pkt_count;
+
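+    /* BD source packets are 192 bytes (4-byte arrival timestamp + 188-byte
+     * TS packet), so convert the packet index into a byte offset */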
+    bd_seek(d->bd, packet * 192);
+    d->next_chap = bd_get_current_chapter( d->bd ) + 1;
+    //hb_ts_stream_reset(d->stream);
+    return 1;
+}
+
+int BDNOT_USED hb_bd_seek_pts( hb_bd_t * d, uint64_t pts )
+{
+    bd_seek_time(d->bd, pts);
+    d->next_chap = bd_get_current_chapter( d->bd ) + 1;
+    //hb_ts_stream_reset(d->stream);
+    return 1;
+}
+
+int  BDNOT_USED hb_bd_seek_chapter( hb_bd_t * d, int c )
+{
+    d->next_chap = c;
+    bd_seek_chapter( d->bd, c - 1 );
+    //hb_ts_stream_reset(d->stream);
+    return 1;
+}
+
+/***********************************************************************
+ * hb_bd_read
+ ***********************************************************************
+ *
+ **********************************************************************/
+hb_buffer_t * hb_bd_read( hb_bd_t * d )
+{
+    int result;
+    int error_count = 0;
+    //uint8_t buf[192];
+    BD_EVENT event;
+    uint64_t pos;
+    uint8_t discontinuity;
+    int new_chap = 0;
+
+    discontinuity = 0;
+    while ( 1 )
+    {
+        if ( d->next_chap != d->chapter )
+        {
+            new_chap = d->chapter = d->next_chap;
+        }
+        result = next_packet( d->bd, d->read_buffer->data );
+        if ( result < 0 )
+        {
+            hb_error("bd: Read Error");
+            pos = bd_tell( d->bd );
+            bd_seek( d->bd, pos + 192 );
+            error_count++;
+            if (error_count > 10)
+            {
+                hb_error("bd: Error, too many consecutive read errors");
+                return 0;
+            }
+            continue;
+        }
+        else if ( result == 0 )
+        {
+            return 0;
+        }
+
+        error_count = 0;
+        while ( bd_get_event( d->bd, &event ) )
+        {
+            switch ( event.event )
+            {
+                case BD_EVENT_CHAPTER:
+                    // The muxers expect to see only chapter 2 and above;
+                    // they write chapter 1 when chapter 2 is detected.
+                    d->next_chap = event.param;
+                    break;
+
+                case BD_EVENT_PLAYITEM:
+                    discontinuity = 1;
+                    hb_log_level(gloglevel, "bd: Playitem %u", event.param);
+                    break;
+
+                case BD_EVENT_STILL:
+                    bd_read_skip_still( d->bd );
+                    break;
+
+                default:
+                    break;
+            }
+        }
+        // return the whole 192-byte source packet (4-byte timestamp included)
+        d->read_buffer->discontinuity = discontinuity;
+        d->read_buffer->new_chap = new_chap;
+        d->read_buffer->size = 192;
+        return d->read_buffer;
+
+//        b = hb_ts_decode_pkt( d->stream, buf+4 );
+//        if ( b )
+//        {
+//            b->discontinuity = discontinuity;
+//            b->new_chap = new_chap;
+//            return b;
+//        }
+    }
+    return NULL;
+}
+
+/***********************************************************************
+ * hb_bd_chapter
+ ***********************************************************************
+ * Returns the chapter in which the next block to be read lies.
+ * Chapter numbers start at 1.
+ **********************************************************************/
+int  BDNOT_USED hb_bd_chapter( hb_bd_t * d )
+{
+    return d->next_chap;
+}
+
+/***********************************************************************
+ * hb_bd_close
+ ***********************************************************************
+ * Closes and frees everything
+ **********************************************************************/
+void hb_bd_close( hb_bd_t ** _d )
+{
+    hb_bd_t * d = *_d;
+    int ii;
+
+    if ( d->title_info )
+    {
+        for ( ii = 0; ii < d->title_count; ii++ )
+            bd_free_title_info( d->title_info[ii] );
+        av_free( d->title_info );
+    }
+    //if( d->stream ) hb_stream_close( &d->stream );
+    if( d->bd ) bd_close( d->bd );
+    if( d->path ) av_free( d->path );
+
+    if(d->read_buffer) {
+        av_free(d->read_buffer->data);
+        av_free(d->read_buffer);
+    }
+
+    av_free( d );
+    *_d = NULL;
+}
+
+/***********************************************************************
+ * hb_bd_set_angle
+ ***********************************************************************
+ * Sets the angle to read
+ **********************************************************************/
+void  BDNOT_USED hb_bd_set_angle( hb_bd_t * d, int angle )
+{
+
+    if ( !bd_select_angle( d->bd, angle) )
+    {
+        hb_log_level(gloglevel,"bd_select_angle failed");
+    }
+}
+
+static int check_ts_sync(const uint8_t *buf)
+{
+    // must have initial sync byte, no scrambling & a legal adaptation ctrl
+    return (buf[0] == 0x47) && ((buf[3] >> 6) == 0) && ((buf[3] >> 4) > 0);
+}
+
+static int have_ts_sync(const uint8_t *buf, int psize)
+{
+    return check_ts_sync(&buf[0*psize]) && check_ts_sync(&buf[1*psize]) &&
+           check_ts_sync(&buf[2*psize]) && check_ts_sync(&buf[3*psize]) &&
+           check_ts_sync(&buf[4*psize]) && check_ts_sync(&buf[5*psize]) &&
+           check_ts_sync(&buf[6*psize]) && check_ts_sync(&buf[7*psize]);
+}
+
+#define MAX_HOLE 192*80
+
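+/*
+ * Scan forward (within MAX_HOLE bytes) for 8 consecutive 192-byte source
+ * packets with valid TS sync bytes, then reposition the stream at the first
+ * of them.  Returns the number of bytes skipped, or 0 if sync could not be
+ * re-established before EOF.
+ */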
+static uint64_t align_to_next_packet(BLURAY *bd, uint8_t *pkt)
+{
+    uint8_t buf[MAX_HOLE];
+    uint64_t pos = 0;
+    uint64_t start = bd_tell(bd);
+    uint64_t orig;
+    uint64_t off = 192;
+
+    memcpy(buf, pkt, 192);
+    if ( start >= 192 ) {
+        start -= 192;
+    }
+    orig = start;
+
+    while (1)
+    {
+        if (bd_read(bd, buf+off, sizeof(buf)-off) == sizeof(buf)-off)
+        {
+            const uint8_t *bp = buf;
+            int i;
+
+            for ( i = sizeof(buf) - 8 * 192; --i >= 0; ++bp )
+            {
+                if ( have_ts_sync( bp, 192 ) )
+                {
+                    break;
+                }
+            }
+            if ( i >= 0 )
+            {
+                pos = ( bp - buf );
+                break;
+            }
+            off = 8 * 192;
+            memcpy(buf, buf + sizeof(buf) - off, off);
+            start += sizeof(buf) - off;
+        }
+        else
+        {
+            return 0;
+        }
+    }
+    off = start + pos - 4;
+    // bd_seek() seeks to the nearest access unit *before* the requested
+    // position.  We don't want to seek backwards, so read until we get
+    // past that position.
+    bd_seek(bd, off);
+    while (off > bd_tell(bd))
+    {
+        if (bd_read(bd, buf, 192) != 192)
+        {
+            break;
+        }
+    }
+    return start - orig + pos;
+}
+
+static int next_packet( BLURAY *bd, uint8_t *pkt )
+{
+    int result;
+    uint64_t pos,pos2;
+
+    while ( 1 )
+    {
+        result = bd_read( bd, pkt, 192 );
+        if ( result < 0 )
+        {
+            return -1;
+        }
+        if ( result < 192 )
+        {
+            return 0;
+        }
+        // The sync byte is byte 4; bytes 0-3 are the arrival timestamp.
+        if (pkt[4] == 0x47)
+        {
+            return 1;
+        }
+        // lost sync - back up to where we started then try to re-establish.
+        pos = bd_tell(bd);
+        pos2 = align_to_next_packet(bd, pkt);
+        if ( pos2 == 0 )
+        {
+            hb_log_level(gloglevel, "next_packet: eof while re-establishing sync @ %"PRId64, pos );
+            return 0;
+        }
+        hb_log_level(gloglevel, "next_packet: sync lost @ %"PRId64", regained after %"PRId64" bytes",
+                 pos, pos2 );
+    }
+}
+
+static int title_info_compare_mpls(const void *va, const void *vb)
+{
+    BLURAY_TITLE_INFO *a, *b;
+
+    a = *(BLURAY_TITLE_INFO**)va;
+    b = *(BLURAY_TITLE_INFO**)vb;
+
+    return a->playlist - b->playlist;
+}
+
+
+static int64_t hb_bd_cur_title_size( hb_bd_t *e ) {
+	int64_t s = bd_get_title_size(e->bd);
+    hb_log_level(gloglevel, "hb_bd_cur_title_size: %"PRId64,s);
+    return s;
+}
+
+static int64_t hb_bd_seek_bytes( hb_bd_t *e, int64_t off, int mode ) {
+	int64_t r = bd_seek(e->bd, off);
+    hb_log_level(gloglevel, "hb_bd_seek_bytes: off %"PRId64"  ret %"PRId64, off, r);
+    return (int64_t)r;
+}
+
+/* optmedia exports */
+static om_handle_t    * __hb_bd_init( char * path ) { return (om_handle_t *) hb_bd_init(path); }
+static void     __hb_bd_close( om_handle_t  ** _d ) { hb_bd_close((hb_bd_t **)_d); }
+static int           __hb_bd_title_count( om_handle_t *d ) { return hb_bd_title_count((hb_bd_t *)d); }
+static hb_title_t  * __hb_bd_title_scan( om_handle_t * d, int t, uint64_t min_duration ) { return hb_bd_title_scan((hb_bd_t *)d,t,min_duration); }
+static int           __hb_bd_main_feature( om_handle_t * d, hb_list_t * list_title ) { return hb_bd_main_feature((hb_bd_t *)d,list_title);}
+
+static hb_optmedia_func_t bd_methods = {
+		__hb_bd_init,
+		__hb_bd_close,
+		__hb_bd_title_count,
+		__hb_bd_title_scan,
+		__hb_bd_main_feature
+} ;
+
+hb_optmedia_func_t *hb_optmedia_bd_methods(void) {
+	return &bd_methods;
+}
+
+
+/* libavformat glue */
+static const AVOption options[] = {
+    { "wide_support", "enable wide support", offsetof(bdurl_t, wide_support), AV_OPT_TYPE_INT, {1}, -1, 1, AV_OPT_FLAG_DECODING_PARAM},
+    { "min_title_duration", "minimum duration in ms to select a BD title", offsetof(bdurl_t, min_title_duration), AV_OPT_TYPE_INT, {0}, 0, INT_MAX, AV_OPT_FLAG_DECODING_PARAM},
+    { NULL }
+};
+
+static const AVClass bdurl_class = {
+    "BDURL protocol",
+    av_default_item_name,
+    options,
+    LIBAVUTIL_VERSION_INT,
+};
+
+#define bdurl_max(a,b) ((a)>(b)?(a):(b))
+#define bdurl_min(a,b) ((a)<(b)?(a):(b))
+
+
+
+static hb_buffer_t* bd_fragread(void *ctx) {
+    hb_bd_t *bd_ctx = ctx;
+    return hb_bd_read( bd_ctx );
+}
+
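+/* URLProtocol read callback: fragmented_read() (a helper added elsewhere in
+ * this patch) drains the currently buffered packet and pulls fresh 192-byte
+ * packets through bd_fragread() as needed to fill 'buf' */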
+static int bdurl_read(URLContext *h, unsigned char *buf, int size){
+    bdurl_t *ctx = (bdurl_t *)h->priv_data;
+    return fragmented_read(ctx->hb_bd, bd_fragread, &ctx->cur_read_buffer, buf, size);
+}
+
+static int  BDNOT_USED bdurl_readnew(URLContext *h, unsigned char *buf, int size)
+{
+    bdurl_t *ctx = (bdurl_t *)h->priv_data;
+
+    unsigned char *bufptr = buf;
+    unsigned char *bufend = buf + size;
+
+    while( bufptr < bufend ) {
+        /* if there is still a buffer we were reading */
+        if( ctx->cur_read_buffer ) {
+            int left_dstbytes = bufend - bufptr;
+            int left_srcbytes = ctx->cur_read_buffer->size - ctx->cur_read_buffer->cur;
+
+            int readmax = bdurl_min( left_srcbytes, left_dstbytes );
+
+            memcpy(bufptr, ctx->cur_read_buffer->data + ctx->cur_read_buffer->cur, readmax );
+
+            bufptr += readmax;
+            ctx->cur_read_buffer->cur += readmax;
+
+            if( ctx->cur_read_buffer->cur == ctx->cur_read_buffer->size ) {
+                ctx->cur_read_buffer = 0;
+            }
+        } else {
+            /* read fresh data from bdread; this must return a buffer if successful */
+            ctx->cur_read_buffer = hb_bd_read( ctx->hb_bd );
+            if(!ctx->cur_read_buffer) {
+                hb_log_level(gloglevel,"bd_read: EOF");
+                break;
+            }
+            ctx->cur_read_buffer->cur = 0;
+        }
+    }
+    return bufptr - buf;
+}
+
+static int bdurl_write(URLContext *h, const unsigned char *buf, int size)
+{
+    return 0;
+}
+
+static int bdurl_get_handle(URLContext *h)
+{
+    hb_log_level(gloglevel,"bd_get_handle");
+    return (intptr_t) h->priv_data;
+}
+
+static int bdurl_check(URLContext *h, int mask)
+{
+    int ret = mask&AVIO_FLAG_READ;
+    hb_error("bd_check: mask %d",  mask);
+    return ret;
+}
+
+
+static int bdurl_open(URLContext *h, const char *filename, int flags)
+{
+    const char *bdpath;
+    int i,title_count;
+    bdurl_t *ctx;
+    int64_t min_title_duration = 0*90000;
+    int urltitle = 0;
+    int loglevel =  gloglevel;
+    hb_title_t *t;
+
+
+    //ctx = av_malloc( sizeof(bdurl_t) );
+    ctx = h->priv_data;
+    ctx->class = &bdurl_class;
+    ctx->list_title = hb_list_init();
+    ctx->selected_chapter = 1;
+    ctx->min_title_duration = 15000;
+    min_title_duration = (((uint64_t)ctx->min_title_duration)*90000L)/1000L;
+
+    url_parse("bd",filename, &bdpath, &urltitle );
+
+
+    ctx->hb_bd = hb_bd_init((char *)bdpath);
+    if(!ctx->hb_bd) {
+        hb_log_level(loglevel, "bd_open: couldn't initialize bdread");
+        return -1;
+    }
+
+    title_count = hb_bd_title_count(ctx->hb_bd);
+    if( urltitle>0 && urltitle<=title_count ) {
+        hb_log_level(loglevel,"bd_open: opening title %d ", urltitle);
+        t= hb_bd_title_scan(ctx->hb_bd, urltitle, min_title_duration );
+        if(t) {
+            ctx->selected_title = t;
+        } else {
+            return -1;
+        }
+    } else {
+    	int selected_title_idx;
+        hb_log_level(loglevel,"bd_open: bd image has %d titles", title_count);
+        for (i = 0; i < title_count; i++) {
+            t = hb_bd_title_scan(ctx->hb_bd, i + 1, min_title_duration);
+            if (t) {
+                ctx->selected_title = t;
+                hb_list_add(ctx->list_title, t);
+            }
+        }
+
+        selected_title_idx = hb_bd_main_feature(ctx->hb_bd, ctx->list_title);
+        for (i = 0; i < hb_list_count(ctx->list_title); i++) {
+        	if( ((hb_title_t *)hb_list_item(ctx->list_title, i))->index == selected_title_idx ) {
+        		ctx->selected_title = hb_list_item(ctx->list_title,i);
+        		break;
+        	}
+        }
+    }
+
+    if( title_count<=0 || !ctx->selected_title ) {
+        hb_error("bd_open: no titles found");
+        return -1;
+    }
+
+    hb_log_level(loglevel,"bd_open: selected title %d", ctx->selected_title->index );
+
+    if( hb_bd_start(ctx->hb_bd, ctx->selected_title ) == 0 ) {
+        hb_error("bd_open: couldn't start reading title");
+        return -1;
+    }
+
+
+
+    h->priv_data = (void *)ctx;
+    return 0;
+}
+
+static int64_t bdurl_seek(URLContext *h, int64_t pos, int whence)
+{
+    bdurl_t *ctx = h->priv_data;
+
+    if (whence == AVSEEK_SIZE) {
+        return hb_bd_cur_title_size(ctx->hb_bd);
+    }
+    return hb_bd_seek_bytes( ctx->hb_bd, pos, whence );
+}
+
+static int bdurl_close(URLContext *h)
+{
+    hb_log_level(gloglevel,"bd_close: closing");
+    if( h->priv_data ) {
+        bdurl_t *ctx = h->priv_data;
+        if( ctx->list_title ) {
+            hb_list_close( &ctx->list_title );
+        }
+        hb_bd_close(&ctx->hb_bd);
+
+        //av_free(ctx);
+        //h->priv_data =0;
+    }
+    hb_log_level(gloglevel,"bd_close: closed");
+    return 0;
+}
+
+
+
+URLProtocol ff_bd_protocol = {
+    .name                = "bd",
+    .url_open            = bdurl_open,
+    .url_read            = bdurl_read,
+    .url_write           = bdurl_write,
+    .url_seek            = bdurl_seek,
+    .url_close           = bdurl_close,
+    .url_get_file_handle = bdurl_get_handle,
+    .url_check           = bdurl_check,
+    .priv_data_class	= &bdurl_class,
+    .priv_data_size 	= sizeof(bdurl_t)
+};
+
+
diff --git a/libavformat/bdurl.h b/libavformat/bdurl.h
new file mode 100644
index 0000000000..f4350712df
--- /dev/null
+++ b/libavformat/bdurl.h
@@ -0,0 +1,64 @@
+/* @@--
+ * 
+ * Copyright (C) 2010-2018 Alberto Vigata
+ *       
+ * This file is part of vgtmpeg
+ * 
+ * a Versed Generalist Transcoder
+ * 
+ * vgtmpeg is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2, or (at your option)
+ * any later version.
+ * 
+ * vgtmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ * 
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ */
+
+#ifndef HB_BDURL_H
+#define HB_BDURL_H
+
+#include "dvdurl_common.h"
+#include "libbluray/bluray.h"
+
+struct hb_bd_s
+{
+    char         * path;
+    BLURAY       * bd;
+    int            title_count;
+    BLURAY_TITLE_INFO  ** title_info;
+    uint64_t       pkt_count;
+//    hb_stream_t  * stream;
+    int            chapter;
+    int            next_chap;
+
+    /* vgtmpeg */
+    hb_buffer_t     *read_buffer;
+};
+
+typedef struct hb_bd_s hb_bd_t;
+
+hb_optmedia_func_t *hb_optmedia_bd_methods(void);
+
+typedef struct bdurl {
+	const AVClass *class;
+    hb_bd_t *hb_bd;
+    hb_list_t *list_title;
+    hb_title_t *selected_title;
+    //int selected_title_idx;
+    int selected_chapter;
+    hb_buffer_t *cur_read_buffer;
+    int wide_support;
+    int min_title_duration;
+} bdurl_t;
+
+
+#endif // HB_BDURL_H
+
+
diff --git a/libavformat/dvdurl.c b/libavformat/dvdurl.c
new file mode 100644
index 0000000000..e38ec7ad08
--- /dev/null
+++ b/libavformat/dvdurl.c
@@ -0,0 +1,1657 @@
+/* @@--
+ * 
+ * Copyright (C) 2010-2018 Alberto Vigata
+ *       
+ * This file is part of vgtmpeg
+ * 
+ * a Versed Generalist Transcoder
+ * 
+ * vgtmpeg is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2, or (at your option)
+ * any later version.
+ * 
+ * vgtmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ * 
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ */
+#include <string.h>
+#include "avformat.h"
+#include "libavutil/avstring.h"
+#include "libavutil/opt.h"
+#include "dvdurl_lang.h"
+#include "dvdurl.h"
+#include "url.h"
+
+#include "dvdread/ifo_read.h"
+#include "dvdread/ifo_print.h"
+#include "dvdread/nav_read.h"
+
+static hb_dvd_t    * hb_dvdread_init( char * path );
+static void          hb_dvdread_close( hb_dvd_t ** _d );
+static char        * hb_dvdread_name( char * path );
+static int           hb_dvdread_title_count( hb_dvd_t * d );
+static hb_title_t  * hb_dvdread_title_scan( hb_dvd_t * d, int t, uint64_t min_duration );
+static int           hb_dvdread_start( hb_dvd_t * d, hb_title_t *title, int chapter );
+static void          hb_dvdread_stop( hb_dvd_t * d );
+static int           hb_dvdread_seek( hb_dvd_t * d, float f );
+static hb_buffer_t * hb_dvdread_read( hb_dvd_t * d );
+static int           hb_dvdread_chapter( hb_dvd_t * d );
+static int           hb_dvdread_angle_count( hb_dvd_t * d );
+static void          hb_dvdread_set_angle( hb_dvd_t * d, int angle );
+static int           hb_dvdread_main_feature( hb_dvd_t * d, hb_list_t * list_title );
+static int           is_nav_pack( unsigned char *buf );
+
+static int gloglevel = HB_LOG_VERBOSE; 
+
+#if __GNUC__
+#pragma GCC diagnostic ignored "-Wformat-security"
+#endif
+
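+/* libdvdread hands us an already-formatted message; it is passed to av_log()
+ * as the format string, which is why -Wformat-security is silenced above */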
+static void dvdread_logger(const char *log){
+	av_log(NULL, gloglevel,  log );
+	return;
+}
+
+/***********************************************************************
+ * Local prototypes
+ **********************************************************************/
+static void FindNextCell( hb_dvdread_t * );
+static int  dvdtime2msec( dvd_time_t * );
+static int hb_dvdread_is_break( hb_dvdread_t * d );
+
+
+static hb_title_t *hb_dvdread_main_feature_title( hb_dvd_t * e, hb_list_t * list_title ) {
+    int ii;
+    uint64_t longest_duration = 0;
+    hb_title_t *longest = 0;
+
+    for ( ii = 0; ii < hb_list_count( list_title ); ii++ )
+    {
+        hb_title_t * title = hb_list_item( list_title, ii );
+        if ( title->duration > longest_duration )
+        {
+            longest_duration = title->duration;
+            longest = title;
+        }
+    }
+    return longest;
+}
+
+static int hb_dvdread_main_feature( hb_dvd_t * e, hb_list_t * list_title )
+{
+    hb_title_t *title = hb_dvdread_main_feature_title(e,list_title);
+    if(title) {
+        return title->index;
+    } else {
+        return -1;
+    }
+}
+
+static OPTMEDIA_NOT_USED char * hb_dvdread_name( char * path )
+{
+    static char name[1024];
+    unsigned char unused[1024];
+    dvd_reader_t * reader;
+
+    reader = DVDOpenEx( path, dvdread_logger, 0 );
+    if( !reader )
+    {
+        return NULL;
+    }
+
+    if( DVDUDFVolumeInfo( reader, name, sizeof( name ),
+                          unused, sizeof( unused ) ) )
+    {
+        DVDClose( reader );
+        return NULL;
+    }
+
+    DVDClose( reader );
+    return name;
+}
+
+
+/* gets the parent path. Caller must free the string eventually */
+static char *get_parent_path( const char *path ) {
+    char *p = strdup(path);
+    char *l = &p[strlen(p)-1];
+    while(l-->=p) {
+        if(*l=='/' || *l=='\\') {
+            *l=0;
+            break;
+        }
+    }
+    return p;
+}
+
+/* If a path to an individual file inside a DVD folder is passed,
+ * obtain the path to the root folder.  If it can't be identified,
+ * just return the path itself.
+ */
+static char *get_root_dvd_path(const char *path) {
+    if (av_stristr(&path[strlen(path) - 4], ".IFO") || av_stristr(&path[strlen(path) - 4], ".BUP") || av_stristr(&path[strlen(path) - 4], ".VOB")) {
+        return get_parent_path(path);
+    } else {
+        return strdup(path);
+    }
+}
+
+
+
+/***********************************************************************
+ * hb_dvdread_init
+ ***********************************************************************
+ *
+ **********************************************************************/
+hb_dvd_t * hb_dvdread_init( char * path )
+{
+    hb_dvd_t * e;
+    hb_dvdread_t * d;
+    int region_mask;
+    
+    // gloglevel = (verbose>1) ? HB_LOG_VERBOSE : HB_LOG_INFO;
+
+    path = get_root_dvd_path(path);
+
+    e = av_calloc( sizeof( hb_dvd_t ), 1 );
+    d = &(e->dvdread);
+
+	/* Log DVD drive region code */
+    if ( hb_dvd_region( path, &region_mask ) == 0 )
+    {
+        hb_log_level(gloglevel, "dvd: Region mask 0x%02x", region_mask );
+        if ( region_mask == 0xFF )
+        {
+            hb_log_level(gloglevel, "dvd: Warning, DVD device has no region set" );
+        }
+    }
+
+    /* Open device */
+    if( !( d->reader = DVDOpenEx( path, dvdread_logger, 0 ) ) )
+    {
+        /*
+         * Not an error, may be a stream - which we'll try in a moment.
+         */
+        hb_log_level(gloglevel, "dvd: not a dvd - trying as a stream/file instead" );
+        goto fail;
+    }
+
+    /* Open main IFO */
+    if( !( d->vmg = ifoOpen( d->reader, 0 ) ) )
+    {
+        hb_error( "dvd: ifoOpen failed" );
+        goto fail;
+    }
+
+    /* vgtmpeg */
+    /* allocate fixed hb_buffer_t for reads */
+    d->read_buffer = av_mallocz( sizeof(hb_buffer_t));
+    d->read_buffer->size = HB_DVD_READ_BUFFER_SIZE;
+    d->read_buffer->data = av_malloc(HB_DVD_READ_BUFFER_SIZE);
+
+    d->path = strdup( path );
+
+    return e;
+
+fail:
+    if( d->vmg )    ifoClose( d->vmg );
+    if( d->reader ) DVDClose( d->reader );
+    av_free( e );
+    return NULL;
+}
+
+/***********************************************************************
+ * hb_dvdread_title_count
+ **********************************************************************/
+static int hb_dvdread_title_count( hb_dvd_t * e )
+{
+    hb_dvdread_t *d = &(e->dvdread);
+    return d->vmg->tt_srpt->nr_of_srpts;
+}
+
+/***********************************************************************
+ * hb_dvdread_title_scan
+ **********************************************************************/
+static hb_title_t * hb_dvdread_title_scan( hb_dvd_t * e, int t, uint64_t min_duration )
+{
+
+    hb_dvdread_t *d = &(e->dvdread);
+    hb_title_t   * title;
+    ifo_handle_t * vts = NULL;
+    int            pgc_id, pgn, i;
+    hb_chapter_t * chapter;
+    int            c;
+    uint64_t       duration;
+    float          duration_correction;
+    unsigned char  unused[1024];
+    int loglevel =  gloglevel; 
+
+    hb_log_level( loglevel, "scan: scanning title %d", t );
+
+    title = hb_title_init( d->path, t );
+    title->type = HB_DVD_TYPE;
+
+    if( DVDUDFVolumeInfo( d->reader, title->name, sizeof( title->name ),
+                          unused, sizeof( unused ) ) )
+    {
+        char * p_cur, * p_last = d->path;
+        for( p_cur = d->path; *p_cur; p_cur++ )
+        {
+            if( p_cur[0] == '/' && p_cur[1] )
+            {
+                p_last = &p_cur[1];
+            }
+        }
+        snprintf( title->name, sizeof( title->name ), "%s", p_last );
+    }
+
+    /* VTS which our title is in */
+    title->vts = d->vmg->tt_srpt->title[t-1].title_set_nr;
+
+    if ( !title->vts )
+    {
+        /* A VTS of 0 means the title wasn't found in the title set */
+        hb_error("Invalid VTS (title set) number: %i", title->vts);
+        goto fail;
+    }
+
+    hb_log_level( loglevel, "scan: opening IFO for VTS %d", title->vts );
+    if( !( vts = ifoOpen( d->reader, title->vts ) ) )
+    {
+        hb_error( "scan: ifoOpen failed" );
+        goto fail;
+    }
+
+    /* ignore titles with bogus cell addresses so we don't abort later
+     * in libdvdread. */
+    for ( i = 0; i < vts->vts_c_adt->nr_of_vobs; ++i)
+    {
+        if( (vts->vts_c_adt->cell_adr_table[i].start_sector & 0xffffff ) ==
+            0xffffff )
+        {
+            hb_error( "scan: cell_adr_table[%d].start_sector invalid (0x%x) "
+                      "- skipping title", i,
+                      vts->vts_c_adt->cell_adr_table[i].start_sector );
+            goto fail;
+        }
+        if( (vts->vts_c_adt->cell_adr_table[i].last_sector & 0xffffff ) ==
+            0xffffff )
+        {
+            hb_error( "scan: cell_adr_table[%d].last_sector invalid (0x%x) "
+                      "- skipping title", i,
+                      vts->vts_c_adt->cell_adr_table[i].last_sector );
+            goto fail;
+        }
+        if( vts->vts_c_adt->cell_adr_table[i].start_sector >=
+            vts->vts_c_adt->cell_adr_table[i].last_sector )
+        {
+            hb_error( "scan: cell_adr_table[%d].start_sector (0x%x) "
+                      "is not before last_sector (0x%x) - skipping title", i,
+                      vts->vts_c_adt->cell_adr_table[i].start_sector,
+                      vts->vts_c_adt->cell_adr_table[i].last_sector );
+            goto fail;
+        }
+    }
+
+    if( hb_global_verbosity_level == 3 )
+    {
+        ifo_print( d->reader, title->vts );
+    }
+
+    /* Position of the title in the VTS */
+    title->ttn = d->vmg->tt_srpt->title[t-1].vts_ttn;
+    if ( title->ttn < 1 || title->ttn > vts->vts_ptt_srpt->nr_of_srpts )
+    {
+        hb_error( "invalid VTS PTT offset %d for title %d, skipping", title->ttn, t );
+        goto fail;
+    }
+
+    /* Get pgc */
+    pgc_id = vts->vts_ptt_srpt->title[title->ttn-1].ptt[0].pgcn;
+    if ( pgc_id < 1 || pgc_id > vts->vts_pgcit->nr_of_pgci_srp )
+    {
+        hb_error( "invalid PGC ID %d for title %d, skipping", pgc_id, t );
+        goto fail;
+    }
+    pgn    = vts->vts_ptt_srpt->title[title->ttn-1].ptt[0].pgn;
+    d->pgc = vts->vts_pgcit->pgci_srp[pgc_id-1].pgc;
+
+    hb_log_level( loglevel,"pgc_id: %d, pgn: %d: pgc: %p", pgc_id, pgn, d->pgc);
+
+    if( !d->pgc )
+    {
+        hb_error( "scan: pgc not valid, skipping" );
+        goto fail;
+    }
+
+    if( pgn <= 0 || pgn > 99 )
+    {
+        hb_error( "scan: pgn %d not valid, skipping", pgn );
+        goto fail;
+    }
+
+    /* Start cell */
+    title->cell_start  = d->pgc->program_map[pgn-1] - 1;
+    title->block_start = d->pgc->cell_playback[title->cell_start].first_sector;
+
+    /* End cell */
+    title->cell_end  = d->pgc->nr_of_cells - 1;
+    title->block_end = d->pgc->cell_playback[title->cell_end].last_sector;
+
+    /* Block count */
+    title->block_count = 0;
+    d->cell_cur = title->cell_start;
+    while( d->cell_cur <= title->cell_end )
+    {
+#define cp d->pgc->cell_playback[d->cell_cur]
+        title->block_count += cp.last_sector + 1 - cp.first_sector;
+#undef cp
+        FindNextCell( d );
+        d->cell_cur = d->cell_next;
+    }
+
+    hb_log_level( loglevel, "scan: vts=%d, ttn=%d, cells=%d->%d, blocks=%"PRIu64"->%"PRIu64", "
+            "%"PRIu64" blocks", title->vts, title->ttn, title->cell_start,
+            title->cell_end, title->block_start, title->block_end,
+            title->block_count );
+
+    /* Get duration */
+    title->duration = 90LL * dvdtime2msec( &d->pgc->playback_time );
+    title->hours    = title->duration / 90000 / 3600;
+    title->minutes  = ( ( title->duration / 90000 ) % 3600 ) / 60;
+    title->seconds  = ( title->duration / 90000 ) % 60;
+    hb_log_level( loglevel, "scan: duration is %02d:%02d:%02d (%"PRId64" ms)",
+            title->hours, title->minutes, title->seconds,
+            title->duration / 90 );
+
+    /* ignore titles shorter than min_duration because they're often
+     * stills or clips with no audio, and our preview code doesn't
+     * currently handle either of these. */
+    if( title->duration < min_duration )
+    {
+        hb_log_level( loglevel, "scan: ignoring title (too short)" );
+        goto fail;
+    }
+
+    /* video */
+    {
+        int height = 480;
+        if (vts->vtsi_mat->vtsm_video_attr.video_format  != 0)
+            height = 576;
+
+        switch (vts->vtsi_mat->vtsm_video_attr.picture_size) {
+        case 0:
+            title->width = 720;
+            title->height = height;
+            break;
+        case 1:
+            title->width = 704;
+            title->height = height;
+            break;
+        case 2:
+            title->width = 352;
+            title->height = height;
+            break;
+        case 3:
+            title->width = 352;
+            title->height = height / 2;
+            break;
+        default:
+            hb_error("width and height not available on IFO file");
+            break;
+        }
+
+        hb_log_level( loglevel, "scan: film mode is %s", vts->vtsi_mat->vts_video_attr.film_mode ? "on" : "off" );
+    }
+
+    /* Detect languages */
+    for( i = 0; i < vts->vtsi_mat->nr_of_vts_audio_streams; i++ )
+    {
+        hb_audio_t * audio, * audio_tmp;
+        int          audio_format, audio_control,
+                     position, j;
+        const iso639_lang_t * lang;
+        int lang_extension = 0;
+
+        hb_log_level( loglevel, "scan: checking audio %d", i + 1 );
+
+        audio = av_calloc( sizeof( hb_audio_t ), 1 );
+
+        audio_format  = vts->vtsi_mat->vts_audio_attr[i].audio_format;
+        //lang_code     = vts->vtsi_mat->vts_audio_attr[i].lang_code;
+        lang_extension = vts->vtsi_mat->vts_audio_attr[i].code_extension;
+        audio_control =
+            vts->vts_pgcit->pgci_srp[pgc_id-1].pgc->audio_control[i];
+
+        if( !( audio_control & 0x8000 ) )
+        {
+            hb_log_level( loglevel, "scan: audio channel is not active" );
+            av_free( audio );
+            continue;
+        }
+
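+        /* bits 8-14 of audio_control carry the stream number; bit 15 (tested
+         * above) marks the stream as active */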
+        position = ( audio_control & 0x7F00 ) >> 8;
+
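+        /* map the IFO audio format to the substream/stream id the demuxer
+         * will see: AC-3 at 0x80+n, MPEG audio at 0xc0+n, LPCM at 0xa0+n and
+         * DTS at 0x88+n, where all but MPEG audio live in private stream 1
+         * (0xbd) */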
+        switch( audio_format )
+        {
+            case 0x00:
+                audio->id    = ( ( 0x80 + position ) << 8 ) | 0xbd;
+                audio->config.in.codec = HB_ACODEC_AC3;
+                break;
+
+            case 0x02:
+            case 0x03:
+                audio->id    = 0xc0 + position;
+                audio->config.in.codec = HB_ACODEC_FFMPEG;
+                break;
+
+            case 0x04:
+                audio->id    = ( ( 0xa0 + position ) << 8 ) | 0xbd;
+                audio->config.in.codec = HB_ACODEC_LPCM;
+                break;
+
+            case 0x06:
+                audio->id    = ( ( 0x88 + position ) << 8 ) | 0xbd;
+                audio->config.in.codec = HB_ACODEC_DCA;
+                break;
+
+            default:
+                audio->id    = 0;
+                audio->config.in.codec = 0;
+                hb_log_level( loglevel, "scan: unknown audio codec (%x)",
+                        audio_format );
+                break;
+        }
+        if( !audio->id )
+        {
+            continue;
+        }
+
+        /* Check for duplicate tracks */
+        audio_tmp = NULL;
+        for( j = 0; j < hb_list_count( title->list_audio ); j++ )
+        {
+            audio_tmp = hb_list_item( title->list_audio, j );
+            if( audio->id == audio_tmp->id )
+            {
+                break;
+            }
+            audio_tmp = NULL;
+        }
+        if( audio_tmp )
+        {
+            hb_log_level( loglevel, "scan: duplicate audio track" );
+            av_free( audio );
+            continue;
+        }
+
+        audio->config.lang.type = lang_extension;
+
+        lang = lang_for_code( vts->vtsi_mat->vts_audio_attr[i].lang_code );
+
+        snprintf( audio->config.lang.description, sizeof( audio->config.lang.description ), "%s",
+            strlen(lang->native_name) ? lang->native_name : lang->eng_name);
+        snprintf( audio->config.lang.simple, sizeof( audio->config.lang.simple ), "%s",
+                  strlen(lang->native_name) ? lang->native_name : lang->eng_name );
+        snprintf( audio->config.lang.iso639_2, sizeof( audio->config.lang.iso639_2 ), "%s",
+                  lang->iso639_2);
+
+        switch( lang_extension )
+        {
+        case 0:
+        case 1:
+            break;
+        case 2:
+            av_strlcat( audio->config.lang.description, " (Visually Impaired)", sizeof( audio->config.lang.description ) );
+            break;
+        case 3:
+            av_strlcat( audio->config.lang.description, " (Director's Commentary 1)" , sizeof( audio->config.lang.description ) );
+            break;
+        case 4:
+            av_strlcat( audio->config.lang.description, " (Director's Commentary 2)" , sizeof( audio->config.lang.description ) );
+            break;
+        default:
+            break;
+        }
+
+        hb_log_level( loglevel, "scan: id=0x%x, lang=%s, 3cc=%s ext=%i", audio->id,
+                audio->config.lang.description, audio->config.lang.iso639_2,
+                lang_extension );
+
+        audio->config.in.track = i;
+        hb_list_add( title->list_audio, audio );
+    }
+
+    /* Check for subtitles */
+    for( i = 0; i < vts->vtsi_mat->nr_of_vts_subp_streams; i++ )
+    {
+        hb_subtitle_t * subtitle;
+        int spu_control;
+        int position;
+        const iso639_lang_t * lang;
+        int lang_extension = 0;
+
+        hb_log_level( loglevel, "scan: checking subtitle %d", i + 1 );
+
+        spu_control =
+            vts->vts_pgcit->pgci_srp[pgc_id-1].pgc->subp_control[i];
+
+        if( !( spu_control & 0x80000000 ) )
+        {
+            hb_log_level( loglevel, "scan: subtitle channel is not active" );
+            continue;
+        }
+
+        if( vts->vtsi_mat->vts_video_attr.display_aspect_ratio )
+        {
+            switch( vts->vtsi_mat->vts_video_attr.permitted_df )
+            {
+                case 1:
+                    position = spu_control & 0xFF;
+                    break;
+                case 2:
+                    position = ( spu_control >> 8 ) & 0xFF;
+                    break;
+                default:
+                    position = ( spu_control >> 16 ) & 0xFF;
+            }
+        }
+        else
+        {
+            position = ( spu_control >> 24 ) & 0x7F;
+        }
+
+        lang_extension = vts->vtsi_mat->vts_subp_attr[i].code_extension;
+
+        lang = lang_for_code( vts->vtsi_mat->vts_subp_attr[i].lang_code );
+
+        subtitle = av_calloc( sizeof( hb_subtitle_t ), 1 );
+        subtitle->track = i+1;
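+        /* VOB subtitles travel in MPEG private stream 1 (0xbd) with substream
+         * ids starting at 0x20 */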
+        subtitle->id = ( ( 0x20 + position ) << 8 ) | 0xbd;
+        snprintf( subtitle->lang, sizeof( subtitle->lang ), "%s",
+             strlen(lang->native_name) ? lang->native_name : lang->eng_name);
+        snprintf( subtitle->iso639_2, sizeof( subtitle->iso639_2 ), "%s",
+                  lang->iso639_2);
+        subtitle->format = PICTURESUB;
+        subtitle->source = VOBSUB;
+        subtitle->config.dest   = RENDERSUB;  // By default render (burn-in) the VOBSUB.
+
+        subtitle->type = lang_extension;
+        
+        memcpy( subtitle->palette,
+            vts->vts_pgcit->pgci_srp[pgc_id-1].pgc->palette,
+            16 * sizeof( uint32_t ) );
+
+
+        switch( lang_extension )
+        {  
+        case 0:
+            break;
+        case 1:
+            break;
+        case 2:
+            av_strlcat( subtitle->lang, " (Caption with bigger size character)" ,sizeof(subtitle->lang));
+            break;
+        case 3: 
+            av_strlcat( subtitle->lang, " (Caption for Children)" ,sizeof(subtitle->lang));
+            break;
+        case 4:
+            break;
+        case 5:
+            av_strlcat( subtitle->lang, " (Closed Caption)" ,sizeof(subtitle->lang));
+            break;
+        case 6:
+            av_strlcat( subtitle->lang, " (Closed Caption with bigger size character)" ,sizeof(subtitle->lang));
+            break;
+        case 7:
+            av_strlcat( subtitle->lang, " (Closed Caption for Children)" ,sizeof(subtitle->lang));
+            break;
+        case 8:
+            break;
+        case 9:
+            av_strlcat( subtitle->lang, " (Forced Caption)" ,sizeof(subtitle->lang));
+            break;
+        case 10:
+            break;
+        case 11:
+            break;
+        case 12:
+            break;
+        case 13:
+            av_strlcat( subtitle->lang, " (Director's Commentary)" ,sizeof(subtitle->lang));
+            break;
+        case 14:
+            av_strlcat( subtitle->lang, " (Director's Commentary with bigger size character)" ,sizeof(subtitle->lang));
+            break;
+        case 15:
+            av_strlcat( subtitle->lang, " (Director's Commentary for Children)" ,sizeof(subtitle->lang));
+            break;
+        default:
+            break;
+        }
+
+        hb_log_level( loglevel, "scan: id=0x%x, lang=%s, 3cc=%s", subtitle->id,
+                subtitle->lang, subtitle->iso639_2 );
+
+        hb_list_add( title->list_subtitle, subtitle );
+    }
+
+    /* Chapters */
+    hb_log_level( loglevel, "scan: title %d has %d chapters", t,
+            vts->vts_ptt_srpt->title[title->ttn-1].nr_of_ptts );
+    for( i = 0, c = 1;
+         i < vts->vts_ptt_srpt->title[title->ttn-1].nr_of_ptts; i++ )
+    {
+        chapter = av_calloc( sizeof( hb_chapter_t ), 1 );
+        /* remember the on-disc chapter number */
+        chapter->index = i + 1;
+
+        pgc_id = vts->vts_ptt_srpt->title[title->ttn-1].ptt[i].pgcn;
+        pgn    = vts->vts_ptt_srpt->title[title->ttn-1].ptt[i].pgn;
+        d->pgc = vts->vts_pgcit->pgci_srp[pgc_id-1].pgc;
+
+
+
+
+        /* Start cell */
+        chapter->cell_start  = d->pgc->program_map[pgn-1] - 1;
+        chapter->block_start =
+            d->pgc->cell_playback[chapter->cell_start].first_sector;
+
+
+
+        // if there are no more programs in this pgc, the end cell is the
+        // last cell. Otherwise it's the cell before the start cell of the
+        // next program.
+        if ( pgn == d->pgc->nr_of_programs )
+        {
+            chapter->cell_end = d->pgc->nr_of_cells - 1;
+        }
+        else
+        {
+            chapter->cell_end = d->pgc->program_map[pgn] - 2;
+        }
+        chapter->block_end = d->pgc->cell_playback[chapter->cell_end].last_sector;
+
+        /* Block count, duration */
+        chapter->block_count = 0;
+        chapter->duration = 0;
+
+        d->cell_cur = chapter->cell_start;
+        while( d->cell_cur <= chapter->cell_end )
+        {
+#define cp d->pgc->cell_playback[d->cell_cur]
+            chapter->block_count += cp.last_sector + 1 - cp.first_sector;
+            chapter->duration += 90LL * dvdtime2msec( &cp.playback_time );
+#undef cp
+            FindNextCell( d );
+            d->cell_cur = d->cell_next;
+        }
+        hb_list_add( title->list_chapter, chapter );
+        c++;
+    }
+
+    hb_log_level(gloglevel, "hb_title_scan: adjusting chapter durations");
+
+    /* The durations we get for chapters aren't precise. Scale them so
+       the total matches the title duration */
+    duration = 0;
+    for( i = 0; i < hb_list_count( title->list_chapter ); i++ )
+    {
+        chapter = hb_list_item( title->list_chapter, i );
+        duration += chapter->duration;
+    }
+    duration_correction = (float) title->duration / (float) duration;
+    for( i = 0; i < hb_list_count( title->list_chapter ); i++ )
+    {
+        int seconds;
+        chapter            = hb_list_item( title->list_chapter, i );
+        chapter->duration  = duration_correction * chapter->duration;
+        seconds            = ( chapter->duration + 45000 ) / 90000;
+        chapter->hours     = seconds / 3600;
+        chapter->minutes   = ( seconds % 3600 ) / 60;
+        chapter->seconds   = seconds % 60;
+
+        hb_log_level( loglevel, "scan: chap %d c=%d->%d, b=%"PRIu64"->%"PRIu64" (%"PRIu64"), %"PRId64" ms",
+                chapter->index, chapter->cell_start, chapter->cell_end,
+                chapter->block_start, chapter->block_end,
+                chapter->block_count, chapter->duration / 90 );
+    }
+
+    /* Get the aspect ratio.  We don't get width/height/rate info here
+       as it tends to be wrong */
+    switch( vts->vtsi_mat->vts_video_attr.display_aspect_ratio )
+    {
+        case 0:
+            title->container_aspect = 4. / 3.;
+            break;
+        case 3:
+            title->container_aspect = 16. / 9.;
+            break;
+        default:
+            hb_log_level( loglevel, "scan: unknown aspect" );
+            goto fail;
+    }
+
+    hb_log_level( loglevel, "scan: aspect = %g", title->aspect );
+
+    /* This title is ok so far */
+    goto cleanup;
+
+fail:
+    hb_title_close( &title );
+
+cleanup:
+    if( vts ) ifoClose( vts );
+
+    return title;
+}
+
+/***********************************************************************
+ * hb_dvdread_start
+ ***********************************************************************
+ * Title and chapter start at 1
+ **********************************************************************/
+static int hb_dvdread_start( hb_dvd_t * e, hb_title_t *title, int chapter )
+{
+    hb_dvdread_t *d = &(e->dvdread);
+    int pgc_id, pgn;
+    int i;
+    int t = title->index;
+
+    /* Open the IFO and the VOBs for this title */
+    d->vts = d->vmg->tt_srpt->title[t-1].title_set_nr;
+    d->ttn = d->vmg->tt_srpt->title[t-1].vts_ttn;
+    if( !( d->ifo = ifoOpen( d->reader, d->vts ) ) )
+    {
+        hb_error( "dvd: ifoOpen failed for VTS %d", d->vts );
+        return 0;
+    }
+    if( !( d->file = DVDOpenFile( d->reader, d->vts,
+                                  DVD_READ_TITLE_VOBS ) ) )
+    {
+        hb_error( "dvd: DVDOpenFile failed for VTS %d", d->vts );
+        return 0;
+    }
+
+    /* Get title first and last blocks */
+    pgc_id         = d->ifo->vts_ptt_srpt->title[d->ttn-1].ptt[0].pgcn;
+    pgn            = d->ifo->vts_ptt_srpt->title[d->ttn-1].ptt[0].pgn;
+    d->pgc         = d->ifo->vts_pgcit->pgci_srp[pgc_id-1].pgc;
+    d->cell_start  = d->pgc->program_map[pgn - 1] - 1;
+    d->cell_end    = d->pgc->nr_of_cells - 1;
+    d->title_start = d->pgc->cell_playback[d->cell_start].first_sector;
+    d->title_end   = d->pgc->cell_playback[d->cell_end].last_sector;
+
+    /* Block count */
+    d->title_block_count = 0;
+    for( i = d->cell_start; i <= d->cell_end; i++ )
+    {
+        d->title_block_count += d->pgc->cell_playback[i].last_sector + 1 -
+            d->pgc->cell_playback[i].first_sector;
+    }
+
+    /* Get pgc for the current chapter */
+    pgc_id = d->ifo->vts_ptt_srpt->title[d->ttn-1].ptt[chapter-1].pgcn;
+    pgn    = d->ifo->vts_ptt_srpt->title[d->ttn-1].ptt[chapter-1].pgn;
+    d->pgc = d->ifo->vts_pgcit->pgci_srp[pgc_id-1].pgc;
+
+    /* Get the two first cells */
+    d->cell_cur = d->pgc->program_map[pgn-1] - 1;
+    FindNextCell( d );
+
+    d->block     = d->pgc->cell_playback[d->cell_cur].first_sector;
+    d->next_vobu = d->block;
+    d->pack_len  = 0;
+    d->cell_overlap = 0;
+    d->in_cell = 0;
+    d->in_sync = 2;
+
+    return 1;
+}
+
+/***********************************************************************
+ * hb_dvdread_stop
+ ***********************************************************************
+ *
+ **********************************************************************/
+static void OPTMEDIA_NOT_USED hb_dvdread_stop( hb_dvd_t * e )
+{
+    hb_dvdread_t *d = &(e->dvdread);
+    if( d->ifo )
+    {
+        ifoClose( d->ifo );
+        d->ifo = NULL;
+    }
+    if( d->file )
+    {
+        DVDCloseFile( d->file );
+        d->file = NULL;
+    }
+}
+
+#define DVD_BLOCK_SIZE 2048
+static int64_t hb_dvdread_cur_title_size( hb_dvd_t *e ) {
+    hb_dvdread_t *d = &(e->dvdread);
+    return (int64_t)d->title_block_count * DVD_BLOCK_SIZE;
+}
+
+static int hb_dvdread_cur_title_sector( hb_dvd_t *e ) {
+    hb_dvdread_t *d = &(e->dvdread);
+    int i;
+    int cursector = 0;
+
+    for( i = d->cell_start; i < d->cell_cur; i++ )
+    {
+        cursector += d->pgc->cell_playback[i].last_sector + 1 - d->pgc->cell_playback[i].first_sector;
+    }
+
+    if( i == d->cell_cur  )
+    {
+        cursector += d->next_vobu - d->pgc->cell_playback[i].first_sector;
+    }
+
+    return cursector;
+}
+
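+/* Byte-level seek helper: convert the byte offset into a count of 2048-byte
+ * DVD sectors and walk the title's cells to find the VOBU to resume reading
+ * from. */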
+static int64_t hb_dvdread_seek_bytes( hb_dvd_t *e, int64_t off, int mode ) {
+    hb_dvdread_t *d = &(e->dvdread);
+    int count, sizeCell,sftell;
+    int i;
+
+    if( mode == SEEK_CUR ) {
+        int soff = off/DVD_BLOCK_SIZE;
+        sftell = hb_dvdread_cur_title_sector(e) + soff;
+    } else if( mode==SEEK_SET) {
+        sftell = off/DVD_BLOCK_SIZE;
+    } else {
+        hb_log_level(gloglevel,"dvdread_seek_bytes: asked to seek but no mode specified");
+        return -1;
+    }
+    count = sftell;
+
+    for( i = d->cell_start; i <= d->cell_end; i++ )
+    {
+        sizeCell = d->pgc->cell_playback[i].last_sector + 1 - d->pgc->cell_playback[i].first_sector;
+
+        if( count < sizeCell )
+        {
+            d->cell_cur = i;
+            d->cur_cell_id = 0;
+            FindNextCell( d );
+
+            /* Now let hb_dvdread_read find the next VOBU */
+            d->next_vobu = d->pgc->cell_playback[i].first_sector + count;
+            d->pack_len  = 0;
+            break;
+        }
+
+        count -= sizeCell;
+    }
+
+    if( i > d->cell_end )
+    {
+        return -1;
+    }
+
+    /*
+     * Assume that we are in sync, even though right after a seek it is
+     * quite possible that we are not.
+     */
+    d->in_sync = 2;
+    d->pack_len  = 0;
+    d->cell_overlap = 0;
+    d->in_cell = 0;
+
+    return (int64_t)sftell*DVD_BLOCK_SIZE;
+}
+
+/***********************************************************************
+ * hb_dvdread_seek
+ ***********************************************************************
+ *
+ **********************************************************************/
+static int OPTMEDIA_NOT_USED  hb_dvdread_seek( hb_dvd_t * e, float f )
+{
+    hb_dvdread_t *d = &(e->dvdread);
+    int count, sizeCell;
+    int i;
+
+    count = f * d->title_block_count;
+
+    for( i = d->cell_start; i <= d->cell_end; i++ )
+    {
+        sizeCell = d->pgc->cell_playback[i].last_sector + 1 - d->pgc->cell_playback[i].first_sector;
+
+        if( count < sizeCell )
+        {
+            d->cell_cur = i;
+            d->cur_cell_id = 0;
+            FindNextCell( d );
+
+            /* Now let hb_dvdread_read find the next VOBU */
+            d->next_vobu = d->pgc->cell_playback[i].first_sector + count;
+            d->pack_len  = 0;
+            break;
+        }
+
+        count -= sizeCell;
+    }
+
+    if( i > d->cell_end )
+    {
+        return 0;
+    }
+
+    /*
+     * Assume that we are in sync, even though right after a seek it is
+     * quite possible that we are not.
+     */
+    d->in_sync = 2;
+
+    return 1;
+}
+
+
+/***********************************************************************
+ * is_nav_pack
+ ***********************************************************************/
+int is_nav_pack( unsigned char *buf )
+{
+    /*
+     * The NAV Pack is comprised of the PCI Packet and DSI Packet, both
+     * of these start at known offsets and start with a special identifier.
+     *
+     * NAV = {
+     *  PCI = { 00 00 01 bf  # private stream header
+     *          ?? ??        # length
+     *          00           # substream
+     *          ...
+     *        }
+     *  DSI = { 00 00 01 bf  # private stream header
+     *          ?? ??        # length
+     *          01           # substream
+     *          ...
+     *        }
+     *
+     * The PCI starts at offset 0x26 into the sector, and the DSI starts at 0x400
+     *
+     * This information from: http://dvd.sourceforge.net/dvdinfo/
+     */
+    if( ( buf[0x26] == 0x00 &&      // PCI
+          buf[0x27] == 0x00 &&
+          buf[0x28] == 0x01 &&
+          buf[0x29] == 0xbf &&
+          buf[0x2c] == 0x00 ) &&
+        ( buf[0x400] == 0x00 &&     // DSI
+          buf[0x401] == 0x00 &&
+          buf[0x402] == 0x01 &&
+          buf[0x403] == 0xbf &&
+          buf[0x406] == 0x01 ) )
+    {
+        return ( 1 );
+    } else {
+        return ( 0 );
+    }
+}
+
+
+/***********************************************************************
+ * hb_dvdread_read
+ ***********************************************************************
+ *
+ **********************************************************************/
+static hb_buffer_t * hb_dvdread_read( hb_dvd_t * e )
+{
+    hb_dvdread_t *d = &(e->dvdread);
+    //hb_buffer_t *b = hb_buffer_init( HB_DVD_READ_BUFFER_SIZE );
+    hb_buffer_t *b = d->read_buffer;
+    b->new_chap = 0;
+ top:
+    if( !d->pack_len )
+    {
+        /* New pack */
+        dsi_t dsi_pack;
+        int   error = 0;
+        uint32_t next_ptr;
+
+        // if we've just finished the last cell of the title we don't
+        // want to read another block because our next_vobu pointer
+        // is probably invalid. Just return 'no data' & our caller
+        // should check and discover we're at eof.
+        if ( d->cell_cur > d->cell_end )
+        {
+            //hb_buffer_close( &b );
+            return NULL;
+        }
+
+        for( ;; )
+        {
+            int block, pack_len, next_vobu, read_retry;
+
+            for( read_retry = 1; read_retry < 1024; read_retry++ )
+            {
+                if( DVDReadBlocks( d->file, d->next_vobu, 1, b->data ) == 1 )
+                {
+                    /*
+                     * Successful read.
+                     */
+                    if( read_retry > 1 && !is_nav_pack( b->data) )
+                    {
+                        // But wasn't a nav pack, so carry on looking
+                        read_retry = 1;
+                        d->next_vobu++;
+                        continue;
+                    }
+                    break;
+                } else {  
+                    // First retry the same block, then try the next one, 
+                    // adjust the skip increment upwards so that we can skip
+                    // large sections of bad blocks more efficiently (at the
+                    // cost of some missed good blocks at the end).
+                    hb_log_level(gloglevel, "dvd: vobu read error blk %d - skipping to next blk incr %d",
+                            d->next_vobu, (read_retry * 10));
+                    d->next_vobu += (read_retry * 10);
+                }
+                
+            }
+
+            if( read_retry == 1024 )
+            {
+                // That's too many bad blocks, jump to the start of the
+                // next cell.
+                hb_log_level(gloglevel, "dvd: vobu read error blk %d - skipping to cell %d",
+                        d->next_vobu, d->cell_next );
+                d->cell_cur  = d->cell_next;
+                if ( d->cell_cur > d->cell_end )
+                {
+                    // hb_buffer_close( &b );
+                    return NULL;
+                }
+                d->in_cell = 0;
+                d->next_vobu = d->pgc->cell_playback[d->cell_cur].first_sector;
+                FindNextCell( d );
+                d->cell_overlap = 1;
+                continue;
+            }
+
+            if ( !is_nav_pack( b->data ) ) {
+                (d->next_vobu)++;
+                if( d->in_sync == 1 ) {
+                    hb_log_level(gloglevel,"dvd: Lost sync, searching for NAV pack at blk %d",
+                           d->next_vobu);
+                    d->in_sync = 0;
+                } 
+                continue;
+            }
+
+            navRead_DSI( &dsi_pack, &b->data[DSI_START_BYTE] );
+
+            if ( d->in_sync == 0 && d->cur_cell_id &&
+                 (d->cur_vob_id != dsi_pack.dsi_gi.vobu_vob_idn ||
+                  d->cur_cell_id != dsi_pack.dsi_gi.vobu_c_idn ) )
+            {
+                // We walked out of the cell we're supposed to be in.
+                // If we're not at the start of our next cell go there.
+                hb_log_level(gloglevel,"dvd: left cell %d (%u,%u) for (%u,%u) at block %u",
+                       d->cell_cur, d->cur_vob_id, d->cur_cell_id,
+                       dsi_pack.dsi_gi.vobu_vob_idn, dsi_pack.dsi_gi.vobu_c_idn,
+                       d->next_vobu );
+                if ( d->next_vobu != d->pgc->cell_playback[d->cell_next].first_sector )
+                {
+                    d->next_vobu = d->pgc->cell_playback[d->cell_next].first_sector;
+                    d->cur_cell_id = 0;
+                    continue;
+                }
+            }
+
+            block     = dsi_pack.dsi_gi.nv_pck_lbn;
+            pack_len  = dsi_pack.dsi_gi.vobu_ea;
+
+            // There are a total of 21 next vobu offsets (and 21 prev_vobu
+            // offsets) in the navpack SRI structure. The primary one is
+            // 'next_vobu' which is the offset (in dvd blocks) from the current
+            // block to the start of the next vobu. If the block at 'next_vobu'
+            // can't be read, 'next_video' is the offset to the vobu closest to it.
+            // The other 19 offsets are vobus at successively longer distances from
+            // the current block (this is so that a hardware player can do
+            // adaptive error recovery to skip over a bad spot on the disk). In all
+            // these offsets the high bit is set to indicate when it contains a
+            // valid offset. The next bit (2^30) is set to indicate that there's
+            // another valid offset in the SRI that's closer to the current block.
+            // A hardware player uses these bits to choose the closest valid offset
+            // and uses that as its next vobu. (Some mastering schemes appear to
+            // put a bogus offset in next_vobu with the 2^30 bit set & the
+            // correct offset in next_video. This works fine in hardware players
+            // but will mess up software that doesn't implement the full next
+            // vobu decision logic.) In addition to the flag bits there's a
+            // reserved value of the offset that indicates 'no next vobu' (which
+            // indicates the end of a cell). But having no next vobu pointer with a
+            // 'valid' bit set also indicates end of cell. Different mastering
+            // schemes seem to use all possible combinations of the flag bits
+            // and reserved values to indicate end of cell so we have to check
+            // them all or we'll get a disk read error from following an
+            // invalid offset.
+            next_ptr = dsi_pack.vobu_sri.next_vobu;
+            if ( ( next_ptr & ( 1 << 31 ) ) == 0  ||
+                 ( next_ptr & ( 1 << 30 ) ) != 0 ||
+                 ( next_ptr & 0x3fffffff ) == 0x3fffffff )
+            {
+                next_ptr = dsi_pack.vobu_sri.next_video;
+                if ( ( next_ptr & ( 1 << 31 ) ) == 0 ||
+                     ( next_ptr & 0x3fffffff ) == 0x3fffffff )
+                {
+                    // don't have a valid forward pointer - assume end-of-cell
+                    d->block     = block;
+                    d->pack_len  = pack_len;
+                    break;
+                }
+            }
+            next_vobu = block + ( next_ptr & 0x3fffffff );
+
+            if( pack_len >  0    &&
+                pack_len <  1024 &&
+                block    >= d->next_vobu &&
+                ( block <= d->title_start + d->title_block_count ||
+                  block <= d->title_end ) )
+            {
+                d->block     = block;
+                d->pack_len  = pack_len;
+                d->next_vobu = next_vobu;
+                break;
+            }
+
+            /* Wasn't a valid VOBU, try next block */
+            if( ++error > 1024 )
+            {
+                hb_log_level(gloglevel, "dvd: couldn't find a VOBU after 1024 blocks" );
+                //hb_buffer_close( &b );
+                return NULL;
+            }
+
+            (d->next_vobu)++;
+        }
+
+        if( d->in_sync == 0 || d->in_sync == 2 )
+        {
+            if( d->in_sync == 0 )
+            {
+                hb_log_level(gloglevel, "dvd: In sync with DVD at block %d", d->block );
+            }
+            d->in_sync = 1;
+        }
+
+        // Revert the cell overlap, and check for a chapter break
+        // If this cell is zero length (prev_vobu & next_vobu both
+        // set to END_OF_CELL) we need to check for beginning of
+        // cell before checking for end or we'll advance to the next
+        // cell too early and fail to generate a chapter mark when this
+        // cell starts a chapter.
+        if( ( dsi_pack.vobu_sri.prev_vobu & (1 << 31 ) ) == 0 ||
+            ( dsi_pack.vobu_sri.prev_vobu & 0x3fffffff ) == 0x3fffffff )
+        {
+            // A vobu that's not at the start of a cell can have an
+            // EOC prev pointer (this seems to be associated with some
+            // sort of drm). The rest of the content in the cell may be
+            // booby-trapped so treat this like an end-of-cell rather
+            // than a beginning of cell.
+            if ( d->pgc->cell_playback[d->cell_cur].first_sector < dsi_pack.dsi_gi.nv_pck_lbn &&
+                 d->pgc->cell_playback[d->cell_cur].last_sector >= dsi_pack.dsi_gi.nv_pck_lbn )
+            {
+                hb_log_level(gloglevel, "dvd: null prev_vobu in cell %d at block %d", d->cell_cur,
+                        d->block );
+                // treat like end-of-cell then go directly to start of next cell.
+                d->cell_cur  = d->cell_next;
+                d->in_cell = 0;
+                d->next_vobu = d->pgc->cell_playback[d->cell_cur].first_sector;
+                FindNextCell( d );
+                d->cell_overlap = 1;
+                goto top;
+            }
+            else
+            {
+                if ( d->block != d->pgc->cell_playback[d->cell_cur].first_sector )
+                {
+                    hb_log_level(gloglevel, "dvd: beginning of cell %d at block %d", d->cell_cur,
+                           d->block );
+                }
+                if( d->in_cell )
+                {
+                    hb_error( "dvd: assuming missed end of cell %d at block %d", d->cell_cur, d->block );
+                    d->cell_cur  = d->cell_next;
+                    d->next_vobu = d->pgc->cell_playback[d->cell_cur].first_sector;
+                    FindNextCell( d );
+                    d->cell_overlap = 1;
+                    d->in_cell = 0;
+                } else {
+                    d->in_cell = 1;
+                }
+                d->cur_vob_id = dsi_pack.dsi_gi.vobu_vob_idn;
+                d->cur_cell_id = dsi_pack.dsi_gi.vobu_c_idn;
+
+                if( d->cell_overlap )
+                {
+                    b->new_chap = hb_dvdread_is_break( d );
+                    d->cell_overlap = 0;
+                }
+            }
+        }
+
+        if( ( dsi_pack.vobu_sri.next_vobu & (1 << 31 ) ) == 0 ||
+            ( dsi_pack.vobu_sri.next_vobu & 0x3fffffff ) == 0x3fffffff )
+        {
+            if ( d->block <= d->pgc->cell_playback[d->cell_cur].first_sector ||
+                 d->block > d->pgc->cell_playback[d->cell_cur].last_sector )
+            {
+                hb_log_level(gloglevel, "dvd: end of cell %d at block %d", d->cell_cur,
+                        d->block );
+            }
+            d->cell_cur  = d->cell_next;
+            d->in_cell = 0;
+            d->next_vobu = d->pgc->cell_playback[d->cell_cur].first_sector;
+            FindNextCell( d );
+            d->cell_overlap = 1;
+
+        }
+    }
+    else
+    {
+        if( DVDReadBlocks( d->file, d->block, 1, b->data ) != 1 )
+        {
+            // this may be a real DVD error or may be DRM. Either way
+            // we don't want to quit because of one bad block so set
+            // things up so we'll advance to the next vobu and recurse.
+            hb_error( "dvd: DVDReadBlocks failed (%d), skipping to vobu %u",
+                      d->block, d->next_vobu );
+            d->pack_len = 0;
+            goto top;  /* XXX need to restructure this routine & avoid goto */
+        }
+        d->pack_len--;
+
+    }
+
+    d->block++;
+
+    return b;
+}
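+/*
+ * [Editor's illustrative sketch, not part of the vgtmpeg patch.] The SRI
+ * next-pointer rules spelled out in the long comment above, collected into
+ * one hypothetical helper: bit 31 says the offset is valid, bit 30 says a
+ * closer valid offset exists elsewhere in the SRI, and the reserved value
+ * 0x3fffffff (or a clear valid bit) means "no next VOBU", i.e. end of cell.
+ *
+ *     static int32_t sri_next_offset( uint32_t next_vobu, uint32_t next_video )
+ *     {
+ *         uint32_t p = next_vobu;
+ *         if( ( p & ( 1u << 31 ) ) == 0 ||
+ *             ( p & ( 1u << 30 ) ) != 0 ||
+ *             ( p & 0x3fffffff ) == 0x3fffffff )
+ *             p = next_video;          // fall back to the closest-vobu field
+ *         if( ( p & ( 1u << 31 ) ) == 0 || ( p & 0x3fffffff ) == 0x3fffffff )
+ *             return -1;               // end of cell
+ *         return (int32_t)( p & 0x3fffffff );
+ *     }
+ *
+ * hb_dvdread_read() above open-codes exactly this decision before computing
+ * next_vobu = block + offset.
+ */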
+
+/***********************************************************************
+ * hb_dvdread_chapter
+ ***********************************************************************
+ * Returns in which chapter the next block to be read is.
+ * Chapter numbers start at 1.
+ **********************************************************************/
+static  OPTMEDIA_NOT_USED int hb_dvdread_chapter( hb_dvd_t * e )
+{
+    hb_dvdread_t *d = &(e->dvdread);
+    int     i;
+    int     pgc_id, pgn;
+    int     nr_of_ptts = d->ifo->vts_ptt_srpt->title[d->ttn-1].nr_of_ptts;
+    pgc_t * pgc;
+
+    for( i = nr_of_ptts - 1;
+         i >= 0;
+         i-- )
+    {
+        /* Get pgc for chapter (i+1) */
+        pgc_id = d->ifo->vts_ptt_srpt->title[d->ttn-1].ptt[i].pgcn;
+        pgn    = d->ifo->vts_ptt_srpt->title[d->ttn-1].ptt[i].pgn;
+        pgc    = d->ifo->vts_pgcit->pgci_srp[pgc_id-1].pgc;
+
+        if( d->cell_cur - d->cell_overlap >= pgc->program_map[pgn-1] - 1 &&
+            d->cell_cur - d->cell_overlap <= pgc->nr_of_cells - 1 )
+        {
+            /* We are in this chapter */
+            return i + 1;
+        }
+    }
+
+    /* End of title */
+    return -1;
+}
+
+/***********************************************************************
+ * hb_dvdread_is_break
+ ***********************************************************************
+ * Returns chapter number if the current block is a new chapter start
+ **********************************************************************/
+static int hb_dvdread_is_break( hb_dvdread_t * d )
+{
+    int     i;
+    int     pgc_id, pgn;
+    int     nr_of_ptts = d->ifo->vts_ptt_srpt->title[d->ttn-1].nr_of_ptts;
+    pgc_t * pgc;
+    int     cell;
+
+    for( i = nr_of_ptts - 1;
+         i > 0;
+         i-- )
+    {
+        /* Get pgc for chapter (i+1) */
+        pgc_id = d->ifo->vts_ptt_srpt->title[d->ttn-1].ptt[i].pgcn;
+        pgn    = d->ifo->vts_ptt_srpt->title[d->ttn-1].ptt[i].pgn;
+        pgc    = d->ifo->vts_pgcit->pgci_srp[pgc_id-1].pgc;
+        cell   = pgc->program_map[pgn-1] - 1;
+
+        if( cell <= d->cell_start )
+            break;
+
+        // This must not match against the start cell.
+        if( pgc->cell_playback[cell].first_sector == d->block && cell != d->cell_start )
+        {
+            return i + 1;
+        }
+    }
+
+    return 0;
+}
+
+/***********************************************************************
+ * hb_dvdread_close
+ ***********************************************************************
+ * Closes and frees everything
+ **********************************************************************/
+static void hb_dvdread_close( hb_dvd_t ** _d )
+{
+    hb_dvdread_t * d = &((*_d)->dvdread);
+
+    if( d->vmg )
+    {
+        ifoClose( d->vmg );
+    }
+    if( d->reader )
+    {
+        DVDClose( d->reader );
+    }
+
+    if(d->read_buffer) {
+        av_free(d->read_buffer->data);
+        av_free(d->read_buffer);
+    }
+
+    av_free( d );
+    *_d = NULL;
+}
+
+/***********************************************************************
+ * hb_dvdread_angle_count
+ ***********************************************************************
+ * Returns the number of angles supported.  We do not support angles
+ * with dvdread
+ **********************************************************************/
+static OPTMEDIA_NOT_USED  int hb_dvdread_angle_count( hb_dvd_t * d )
+{
+    return 1;
+}
+
+/***********************************************************************
+ * hb_dvdread_set_angle
+ ***********************************************************************
+ * Sets the angle to read.  Not supported with dvdread
+ **********************************************************************/
+static OPTMEDIA_NOT_USED  void hb_dvdread_set_angle( hb_dvd_t * d, int angle )
+{
+}
+
+/***********************************************************************
+ * FindNextCell
+ ***********************************************************************
+ * Assumes pgc and cell_cur are correctly set, and sets cell_next to the
+ * cell to be read when we will be done with cell_cur.
+ **********************************************************************/
+static void FindNextCell( hb_dvdread_t * d )
+{
+    int i = 0;
+
+    if( d->pgc->cell_playback[d->cell_cur].block_type ==
+            BLOCK_TYPE_ANGLE_BLOCK )
+    {
+
+        while( d->pgc->cell_playback[d->cell_cur+i].block_mode !=
+                   BLOCK_MODE_LAST_CELL )
+        {
+             i++;
+        }
+        d->cell_next = d->cell_cur + i + 1;
+        hb_log_level(gloglevel, "dvd: Skipping multi-angle cells %d-%d",
+                d->cell_cur,
+                d->cell_next - 1 );
+    }
+    else
+    {
+        d->cell_next = d->cell_cur + 1;
+    }
+}
+
+/***********************************************************************
+ * dvdtime2msec
+ ***********************************************************************
+ * From lsdvd
+ **********************************************************************/
+static int dvdtime2msec(dvd_time_t * dt)
+{
+    double frames_per_s[4] = {-1.0, 25.00, -1.0, 29.97};
+    double fps = frames_per_s[(dt->frame_u & 0xc0) >> 6];
+    long   ms;
+    ms  = (((dt->hour &   0xf0) >> 3) * 5 + (dt->hour   & 0x0f)) * 3600000;
+    ms += (((dt->minute & 0xf0) >> 3) * 5 + (dt->minute & 0x0f)) * 60000;
+    ms += (((dt->second & 0xf0) >> 3) * 5 + (dt->second & 0x0f)) * 1000;
+
+    if( fps > 0 )
+    {
+        ms += ( ((dt->frame_u & 0x30) >> 3) * 5 +
+                (dt->frame_u & 0x0f) ) * 1000.0 / fps;
+    }
+
+    return ms;
+}
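+/*
+ * [Editor's note, illustrative only.] dvd_time_t stores hours, minutes,
+ * seconds and frames as BCD, which is what the ((x & 0xf0) >> 3) * 5 + (x & 0x0f)
+ * pattern decodes: e.g. dt->minute == 0x23 gives (0x20 >> 3) * 5 + 3 = 23.
+ * A title length of 01:23:45 therefore yields
+ *     3600000 + 23 * 60000 + 45 * 1000 = 5025000 ms
+ * plus the frame remainder, where the top two bits of frame_u select the
+ * frame rate used for that remainder (0b11 -> 29.97, 0b01 -> 25.00).
+ */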
+
+/* optmedia exports */
+static om_handle_t    * __hb_dvdread_init( char * path ) { return (om_handle_t *)hb_dvdread_init(path); }
+static void     __hb_dvdread_close( om_handle_t  ** _d ) { hb_dvdread_close((hb_dvd_t **)_d); }
+static int           __hb_dvdread_title_count( om_handle_t *d ) { return hb_dvdread_title_count((hb_dvd_t *)d); }
+static hb_title_t  * __hb_dvdread_title_scan( om_handle_t * d, int t, uint64_t min_duration ) { return hb_dvdread_title_scan((hb_dvd_t *)d,t,min_duration); }
+static int           __hb_dvdread_main_feature( om_handle_t * d, hb_list_t * list_title ) { return hb_dvdread_main_feature((hb_dvd_t *)d,list_title);}
+
+static hb_optmedia_func_t dvd_methods = {
+		__hb_dvdread_init,
+		__hb_dvdread_close,
+		__hb_dvdread_title_count,
+		__hb_dvdread_title_scan,
+		__hb_dvdread_main_feature
+} ;
+
+hb_optmedia_func_t *hb_optmedia_dvd_methods(void) {
+	return &dvd_methods;
+}
+
+
+/* libavformat glue */
+static const AVOption options[] = {
+    { "wide_support", "enable wide support", offsetof(dvdurl_t, wide_support), AV_OPT_TYPE_INT, {1}, -1, 1, AV_OPT_FLAG_DECODING_PARAM},
+    { "min_title_duration", "minimum duration in ms to select a DVD title", offsetof(dvdurl_t, min_title_duration), AV_OPT_TYPE_INT, {0}, 0, INT_MAX, AV_OPT_FLAG_DECODING_PARAM},
+    {0}
+};
+
+static const AVClass dvdurl_class = {
+    "DVDURL protocol",
+    av_default_item_name,
+    options,
+    LIBAVUTIL_VERSION_INT,
+};
+
+#define dvdurl_max(a,b) ((a)>(b)?(a):(b))
+#define dvdurl_min(a,b) ((a)<(b)?(a):(b))
+
+static int dvd_read(URLContext *h, unsigned char *buf, int size)
+{
+    dvdurl_t *ctx = (dvdurl_t *)h->priv_data;
+
+    unsigned char *bufptr = buf;
+    unsigned char *bufend = buf + size;
+
+    while( bufptr < bufend ) {
+        /* if there is still a buffer we were reading */        
+        if( ctx->cur_read_buffer ) {
+            int left_dstbytes = bufend - bufptr;
+            int left_srcbytes = ctx->cur_read_buffer->size - ctx->cur_read_buffer->cur;            
+            
+            int readmax = dvdurl_min( left_srcbytes, left_dstbytes );
+
+            memcpy(bufptr, ctx->cur_read_buffer->data + ctx->cur_read_buffer->cur, readmax );
+
+            bufptr += readmax;
+            ctx->cur_read_buffer->cur += readmax;
+
+            if( ctx->cur_read_buffer->cur == ctx->cur_read_buffer->size ) {
+                ctx->cur_read_buffer = 0;
+            }
+        } else {
+            /* reading fresh data from dvdread. this must return a buffer if successful */
+            ctx->cur_read_buffer = hb_dvdread_read( ctx->hb_dvd );
+            if(!ctx->cur_read_buffer) {
+                hb_log_level(gloglevel,"dvd_read: EOF");
+                return AVERROR_EOF;
+            }
+            ctx->cur_read_buffer->cur = 0;
+        }
+    }
+    return bufptr - buf;
+}
+
+static int dvd_write(URLContext *h, const unsigned char *buf, int size)
+{
+    return 0; 
+}
+
+static int dvd_get_handle(URLContext *h)
+{
+    hb_log_level(gloglevel,"dvd_get_handle");
+    return (intptr_t) h->priv_data;
+}
+
+static int dvd_check(URLContext *h, int mask)
+{
+    int ret = mask&AVIO_FLAG_READ;
+    hb_error("dvd_check: mask %d",  mask);
+    return ret;
+}
+
+
+static int dvd_open(URLContext *h, const char *filename, int flags)
+{
+    const char *dvdpath;
+    int i,title_count;
+    dvdurl_t *ctx;
+    int64_t min_title_duration = 0*90000;
+    int urltitle = 0;
+    int loglevel =  gloglevel;
+
+
+
+
+    //ctx = av_malloc( sizeof(dvdurl_t) );
+    ctx = h->priv_data;
+    ctx->class = &dvdurl_class;
+    ctx->list_title = hb_list_init();
+    ctx->selected_chapter = 1;
+    min_title_duration = (((uint64_t)ctx->min_title_duration)*90000L)/1000L;
+
+    url_parse("dvd",filename, &dvdpath, &urltitle );
+
+
+    ctx->hb_dvd = hb_dvdread_init((char *)dvdpath);
+    if(!ctx->hb_dvd) {
+        hb_log_level(loglevel, "dvd_open: couldn't initialize dvdread");
+        return -1;
+    }
+
+    title_count = hb_dvdread_title_count(ctx->hb_dvd);
+    if( urltitle>0 && urltitle<=title_count ) {
+        hb_title_t *t= hb_dvdread_title_scan(ctx->hb_dvd, urltitle, min_title_duration );
+        hb_log_level(loglevel,"dvd_open: opening title %d ", urltitle);
+        if(t) {
+            ctx->selected_title = t;
+            ctx->selected_title_idx = t->index;
+        } else {
+            return -1;
+        }
+    } else {
+        hb_log_level(loglevel,"dvd_open: dvd image has %d titles", title_count);
+        for (i = 0; i < title_count; i++) {
+            hb_title_t *t = hb_dvdread_title_scan(ctx->hb_dvd, i + 1, min_title_duration);
+            if (t) {
+                ctx->selected_title = t;
+                ctx->selected_title_idx = t->index;
+                hb_list_add(ctx->list_title, t);
+            }
+        }
+    }
+
+    if( title_count<=0 ) {
+        hb_error("dvd_open: no titles found");
+        return -1;
+    }
+
+
+    hb_log_level(loglevel,"dvd_open: selected title %d", ctx->selected_title_idx);
+
+
+    if( hb_dvdread_start(ctx->hb_dvd, ctx->selected_title, ctx->selected_chapter ) == 0 ) {
+        hb_error("dvd_open: couldn't start reading title");
+        return -1;
+    }
+
+
+
+    h->priv_data = (void *)ctx;
+    return 0;
+}
+
+static int64_t dvd_seek(URLContext *h, int64_t pos, int whence)
+{
+    dvdurl_t *ctx = h->priv_data;
+    //hb_error("dvd_seek: pos %"PRId64" whence %d", pos, whence);
+
+    if (whence == AVSEEK_SIZE) {
+        return hb_dvdread_cur_title_size(ctx->hb_dvd); 
+    }
+    return hb_dvdread_seek_bytes( ctx->hb_dvd, pos, whence );
+}
+
+static int dvd_close(URLContext *h)
+{
+    hb_log_level(gloglevel,"dvd_close: closing");
+    if( h->priv_data ) {        
+        dvdurl_t *ctx = h->priv_data;
+        if( ctx->list_title ) {
+            hb_list_close( &ctx->list_title );
+        }
+        hb_dvdread_close(&ctx->hb_dvd);
+
+        //av_free(ctx);
+        //h->priv_data =0;
+    }
+    hb_log_level(gloglevel,"dvd_close: closed");
+    return 0;
+}
+
+
+
+URLProtocol ff_dvd_protocol = {
+    .name                = "dvd",
+    .url_open            = dvd_open,
+    .url_read            = dvd_read,
+    .url_write           = dvd_write,
+    .url_seek            = dvd_seek,
+    .url_close           = dvd_close,
+    .url_get_file_handle = dvd_get_handle,
+    .url_check           = dvd_check,
+    .priv_data_class	= &dvdurl_class,
+    .priv_data_size 	= sizeof(dvdurl_t)
+};
+
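+/*
+ * [Editor's usage note, a sketch only and not part of the vgtmpeg patch.]
+ * With ff_dvd_protocol registered, a DVD image is opened as an input URL
+ * whose scheme is "dvd"; url_parse() (defined elsewhere in this patch)
+ * extracts the image path and an optional title number, and the two
+ * AVOptions declared above (wide_support, min_title_duration) are marked as
+ * decoding parameters, so something along these lines may be expected to
+ * work, with the exact URL syntax depending on url_parse():
+ *
+ *     ffmpeg -min_title_duration 120000 -i dvd:/path/to/dvd_image out.mkv
+ */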
diff --git a/libavformat/dvdurl.h b/libavformat/dvdurl.h
new file mode 100644
index 0000000000..560c4da058
--- /dev/null
+++ b/libavformat/dvdurl.h
@@ -0,0 +1,120 @@
+/* @@--
+ * 
+ * Copyright (C) 2010-2018 Alberto Vigata
+ *       
+ * This file is part of vgtmpeg
+ * 
+ * a Versed Generalist Transcoder
+ * 
+ * vgtmpeg is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2, or (at your option)
+ * any later version.
+ * 
+ * vgtmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ * 
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ */
+
+#ifndef HB_DVD_H
+#define HB_DVD_H
+
+#include "url.h"
+#include "dvdurl_common.h"
+#include "dvdread/ifo_read.h"
+#include "dvdread/nav_read.h"
+
+
+
+struct hb_dvdread_s
+{
+    char         * path;
+
+    dvd_reader_t * reader;
+    ifo_handle_t * vmg;
+
+    int            vts;
+    int            ttn;
+    ifo_handle_t * ifo;
+    dvd_file_t   * file;
+
+    pgc_t        * pgc;
+    int            cell_start;
+    int            cell_end;
+    int            title_start;
+    int            title_end;
+    int            title_block_count;
+    int            cell_cur;
+    int            cell_next;
+    int            cell_overlap;
+    int            block;
+    int            pack_len;
+    int            next_vobu;
+    int            in_cell;
+    int            in_sync;
+    uint16_t       cur_vob_id;
+    uint8_t        cur_cell_id;
+
+    /* vgtmpeg */
+    hb_buffer_t     *read_buffer;
+};
+
+
+typedef struct hb_dvdread_s hb_dvdread_t;
+
+union hb_dvd_s
+{
+    hb_dvdread_t dvdread;
+};
+
+typedef union  hb_dvd_s hb_dvd_t;
+
+//struct hb_dvd_func_s
+//{
+//    hb_dvd_t *    (* init)        ( char * );
+//    void          (* close)       ( hb_dvd_t ** );
+//    char        * (* name)        ( char * );
+//    int           (* title_count) ( hb_dvd_t * );
+//    hb_title_t  * (* title_scan)  ( hb_dvd_t *, int, uint64_t );
+//    int           (* start)       ( hb_dvd_t *, hb_title_t *, int );
+//    void          (* stop)        ( hb_dvd_t * );
+//    int           (* seek)        ( hb_dvd_t *, float );
+//    hb_buffer_t * (* read)        ( hb_dvd_t * );
+//    int           (* chapter)     ( hb_dvd_t * );
+//    int           (* angle_count) ( hb_dvd_t * );
+//    void          (* set_angle)   ( hb_dvd_t *, int );
+//    int           (* main_feature)( hb_dvd_t *, hb_list_t * );
+//};
+//typedef struct hb_dvd_func_s hb_dvd_func_t;
+//
+//hb_dvd_func_t * hb_dvdread_methods( void );
+hb_optmedia_func_t *hb_optmedia_dvd_methods(void);
+
+
+
+typedef struct dvdurl_s {
+	const AVClass *class;
+    hb_list_t *list_title;
+    hb_dvd_t *hb_dvd;
+    hb_title_t *selected_title;
+    int selected_title_idx;
+    int selected_chapter;
+    hb_buffer_t *cur_read_buffer;
+    int wide_support;
+    int min_title_duration;
+} dvdurl_t;
+
+/* returns 1 if the path indicated contains a valid path that will be opened
+ * with dvd url
+ */
+int parse_dvd_path(void *ctx, char *opt, const char *path,  int (* parse_file)(void *ctx, char *opt, char *filename), void (* select_default_program)(int programid) );
+
+
+#endif // HB_DVD_H
+
+
diff --git a/libavformat/dvdurl_common.c b/libavformat/dvdurl_common.c
new file mode 100644
index 0000000000..e56b3901cf
--- /dev/null
+++ b/libavformat/dvdurl_common.c
@@ -0,0 +1,1644 @@
+/* @@--
+ * 
+ * Copyright (C) 2010-2018 Alberto Vigata
+ *       
+ * This file is part of vgtmpeg
+ * 
+ * a Versed Generalist Transcoder
+ * 
+ * vgtmpeg is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2, or (at your option)
+ * any later version.
+ * 
+ * vgtmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ * 
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ */
+
+#include <stdarg.h>
+#include <time.h>
+#include <ctype.h>
+#include <sys/time.h>
+
+#include "avformat.h"
+#include "libavutil/avstring.h"
+#include "dvdurl_common.h"
+#include "dvdurl_lang.h"
+#include "dvdurl.h"
+#include "bdurl.h"
+
+int hb_global_verbosity_level=1; //Necessary for hb_deep_log
+
+
+/**********************************************************************
+ * Global variables
+ *********************************************************************/
+hb_rate_t hb_video_rates[] =
+{ { "5",  5400000 }, { "10",     2700000 }, { "12", 2250000 },
+  { "15", 1800000 }, { "23.976", 1126125 }, { "24", 1125000 },
+  { "25", 1080000 }, { "29.97",  900900  } };
+int hb_video_rates_count = sizeof( hb_video_rates ) /
+                           sizeof( hb_rate_t );
+
+hb_rate_t hb_audio_rates[] =
+{ { "22.05", 22050 }, { "24", 24000 }, { "32", 32000 },
+  { "44.1",  44100 }, { "48", 48000 } };
+int hb_audio_rates_count   = sizeof( hb_audio_rates ) /
+                             sizeof( hb_rate_t );
+int hb_audio_rates_default = 3; /* 44100 Hz */
+
+hb_rate_t hb_audio_bitrates[] =
+{ {  "32",  32 }, {  "40",  40 }, {  "48",  48 }, {  "56",  56 },
+  {  "64",  64 }, {  "80",  80 }, {  "96",  96 }, { "112", 112 },
+  { "128", 128 }, { "160", 160 }, { "192", 192 }, { "224", 224 },
+  { "256", 256 }, { "320", 320 }, { "384", 384 }, { "448", 448 },
+  { "512", 512 }, { "576", 576 }, { "640", 640 }, { "768", 768 } };
+int hb_audio_bitrates_count = sizeof( hb_audio_bitrates ) /
+                              sizeof( hb_rate_t );
+
+static hb_error_handler_t *error_handler = NULL;
+
+hb_mixdown_t hb_audio_mixdowns[] =
+{ { "None",               "HB_AMIXDOWN_NONE",      "none",   HB_AMIXDOWN_NONE      },
+  { "Mono",               "HB_AMIXDOWN_MONO",      "mono",   HB_AMIXDOWN_MONO      },
+  { "Stereo",             "HB_AMIXDOWN_STEREO",    "stereo", HB_AMIXDOWN_STEREO    },
+  { "Dolby Surround",     "HB_AMIXDOWN_DOLBY",     "dpl1",   HB_AMIXDOWN_DOLBY     },
+  { "Dolby Pro Logic II", "HB_AMIXDOWN_DOLBYPLII", "dpl2",   HB_AMIXDOWN_DOLBYPLII },
+  { "6-channel discrete", "HB_AMIXDOWN_6CH",       "6ch",    HB_AMIXDOWN_6CH       } };
+int hb_audio_mixdowns_count = sizeof( hb_audio_mixdowns ) /
+                              sizeof( hb_mixdown_t );
+
+hb_encoder_t hb_video_encoders[] =
+{ { "H.264 (x264)",       "x264",       HB_VCODEC_X264,         HB_MUX_MP4|HB_MUX_MKV },
+  { "MPEG-4 (FFmpeg)",    "ffmpeg4",    HB_VCODEC_FFMPEG_MPEG4, HB_MUX_MP4|HB_MUX_MKV },
+  { "MPEG-2 (FFmpeg)",    "ffmpeg2",    HB_VCODEC_FFMPEG_MPEG2, HB_MUX_MP4|HB_MUX_MKV },
+  { "VP3 (Theora)",       "theora",     HB_VCODEC_THEORA,                  HB_MUX_MKV } };
+int hb_video_encoders_count = sizeof( hb_video_encoders ) /
+                              sizeof( hb_encoder_t );
+
+hb_encoder_t hb_audio_encoders[] =
+{
+#ifdef __APPLE__
+  { "AAC (CoreAudio)",    "ca_aac",     HB_ACODEC_CA_AAC,       HB_MUX_MP4|HB_MUX_MKV },
+  { "HE-AAC (CoreAudio)", "ca_haac",    HB_ACODEC_CA_HAAC,      HB_MUX_MP4|HB_MUX_MKV },
+#endif
+  { "AAC (faac)",         "faac",       HB_ACODEC_FAAC,         HB_MUX_MP4|HB_MUX_MKV },
+  { "AAC (ffmpeg)",       "ffaac",      HB_ACODEC_FFAAC,        HB_MUX_MP4|HB_MUX_MKV },
+  { "AAC Passthru",       "copy:aac",   HB_ACODEC_AAC_PASS,     HB_MUX_MP4|HB_MUX_MKV },
+  { "AC3 (ffmpeg)",       "ffac3",      HB_ACODEC_AC3,          HB_MUX_MP4|HB_MUX_MKV },
+  { "AC3 Passthru",       "copy:ac3",   HB_ACODEC_AC3_PASS,     HB_MUX_MP4|HB_MUX_MKV },
+  { "DTS Passthru",       "copy:dts",   HB_ACODEC_DCA_PASS,     HB_MUX_MP4|HB_MUX_MKV },
+  { "DTS-HD Passthru",    "copy:dtshd", HB_ACODEC_DCA_HD_PASS,  HB_MUX_MP4|HB_MUX_MKV },
+  { "MP3 (lame)",         "lame",       HB_ACODEC_LAME,         HB_MUX_MP4|HB_MUX_MKV },
+  { "MP3 Passthru",       "copy:mp3",   HB_ACODEC_MP3_PASS,     HB_MUX_MP4|HB_MUX_MKV },
+  { "Vorbis (vorbis)",    "vorbis",     HB_ACODEC_VORBIS,                  HB_MUX_MKV },
+  { "FLAC (ffmpeg)",      "ffflac",     HB_ACODEC_FFFLAC,                  HB_MUX_MKV },
+  { "Auto Passthru",      "copy",       HB_ACODEC_AUTO_PASS,    HB_MUX_MP4|HB_MUX_MKV } };
+int hb_audio_encoders_count = sizeof( hb_audio_encoders ) /
+                              sizeof( hb_encoder_t );
+
+int hb_mixdown_get_mixdown_from_short_name( const char * short_name )
+{
+    int i;
+    for (i = 0; i < hb_audio_mixdowns_count; i++)
+    {
+        if (strcmp(hb_audio_mixdowns[i].short_name, short_name) == 0)
+        {
+            return hb_audio_mixdowns[i].amixdown;
+        }
+    }
+    return 0;
+}
+
+const char * hb_mixdown_get_short_name_from_mixdown( int amixdown )
+{
+    int i;
+    for (i = 0; i < hb_audio_mixdowns_count; i++)
+    {
+        if (hb_audio_mixdowns[i].amixdown == amixdown)
+        {
+            return hb_audio_mixdowns[i].short_name;
+        }
+    }
+    return "";
+}
+
+
+// Given an input bitrate, find closest match in the set of allowed bitrates
+int hb_find_closest_audio_bitrate(int bitrate)
+{
+    int ii;
+    int result;
+
+    // Check if bitrate mode was disabled
+    if( bitrate <= 0 )
+        return bitrate;
+
+    // result falls back to the lowest table rate if the search finds nothing;
+    // otherwise the rate returned is always <= the rate asked for.
+    result = hb_audio_bitrates[0].rate;
+    for (ii = hb_audio_bitrates_count-1; ii >= 0; ii--)
+    {
+        if (bitrate >= hb_audio_bitrates[ii].rate)
+        {
+            result = hb_audio_bitrates[ii].rate;
+            break;
+        }
+    }
+    return result;
+}
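+/*
+ * [Editor's worked example, illustrative only.] With the hb_audio_bitrates
+ * table above, hb_find_closest_audio_bitrate(200) walks down from 768 and
+ * returns 192, the largest table entry <= 200; a request below 32 falls
+ * through the loop and returns the table minimum of 32.
+ */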
+
+// Get the bitrate low and high limits for a codec/samplerate/mixdown triplet
+// The limits have been empirically determined through testing.  Max bitrates
+// in table below. Numbers in parenthesis are the target bitrate chosen.
+/*
+Encoder     1 channel           2 channels          6 channels
+
+faac
+24kHz       86 (128)            173 (256)           460 (768)
+48kHz       152 (160)           304 (320)           759 (768)
+
+Vorbis
+24kHz       97 (80)             177 (160)           527 (512)
+48kHz       241 (224)           465 (448)           783 (768)
+
+Lame
+24kHz       146 (768)           138 (768)
+48kHz       318 (768)           318 (768)
+
+ffac3
+24kHz       318 (320)           318 (320)           318 (320)
+48kHz       636 (640)           636 (640)           636 (640)
+
+Core Audio AAC (core audio api provides range of allowed bitrates)
+24kHz       16-64               32-128              80-320
+32kHz       24-96               48-192              128-448
+48kHz       32-256              64-320              160-768
+
+Core Audio HE-AAC (core audio api provides range of allowed bitrates)
+32kHz       12-40               24-80               64-192
+48kHz       16-40               32-80               80-192
+*/
+
+void hb_get_audio_bitrate_limits(uint32_t codec, int samplerate, int mixdown, int *low, int *high)
+{
+    int channels;
+
+    channels = HB_AMIXDOWN_GET_DISCRETE_CHANNEL_COUNT(mixdown);
+    if( codec & HB_ACODEC_PASS_FLAG )
+    {
+        // Bitrates don't apply to "lossless" audio (Passthru, FLAC)
+        *low = *high = -1;
+        return;
+    }
+    switch( codec )
+    {
+        case HB_ACODEC_FFFLAC:
+            // Bitrates don't apply to "lossless" audio (Passthru, FLAC)
+            *high = *low = -1;
+            break;
+
+        case HB_ACODEC_AC3:
+            *low = 32 * channels;
+            if (samplerate > 24000)
+            {
+                *high = 640;
+            }
+            else
+            {
+                *high = 320;
+            }
+            break;
+
+        case HB_ACODEC_CA_AAC:
+            if (samplerate > 32000)
+            {
+                *low = channels * 32;
+                if (channels == 1)
+                    *high = 256;
+                if (channels == 2)
+                    *high = 320;
+                if (channels == 6)
+                {
+                    *low = 160;
+                    *high = 768;
+                }
+            }
+            else if (samplerate > 24000)
+            {
+                *low = channels * 24;
+                *high = channels * 96;
+                if (channels == 6)
+                {
+                    *low = 128;
+                    *high = 448;
+                }
+            }
+            else
+            {
+                *low = channels * 16;
+                *high = channels * 64;
+                if (channels == 6)
+                {
+                    *low = 80;
+                    *high = 320;
+                }
+            }
+            break;
+
+        case HB_ACODEC_CA_HAAC:
+            if (samplerate > 32000)
+            {
+                *low = channels * 16;
+                *high = channels * 40;
+                if (channels == 6)
+                {
+                    *low = 80;
+                    *high = 192;
+                }
+            }
+            else
+            {
+                *low = channels * 12;
+                *high = channels * 40;
+                if (channels == 6)
+                {
+                    *low = 64;
+                    *high = 192;
+                }
+            }
+            break;
+
+        case HB_ACODEC_FAAC:
+            *low = 32 * channels;
+            if (samplerate > 24000)
+            {
+                *high = 160 * channels;
+                if (*high > 768)
+                    *high = 768;
+            }
+            else
+            {
+                *high = 96 * channels;
+                if (*high > 480)
+                    *high = 480;
+            }
+            break;
+
+        case HB_ACODEC_FFAAC:
+            *low = 32 * channels;
+            if (samplerate > 24000)
+            {
+                *high = 160 * channels;
+                if (*high > 768)
+                    *high = 768;
+            }
+            else
+            {
+                *high = 96 * channels;
+                if (*high > 480)
+                    *high = 480;
+            }
+            break;
+
+        case HB_ACODEC_VORBIS:
+            *high = channels * 80;
+            if (samplerate > 24000)
+            {
+                if (channels > 2)
+                {
+                    // Vorbis minimum is around 30kbps/ch for 6ch 
+                    // at rates > 24k (32k/44.1k/48k) 
+                    *low = 32 * channels;
+                    *high = 128 * channels;
+                }
+                else
+                {
+                    // Allow 24kbps mono and 48kbps stereo at rates > 24k 
+                    // (32k/44.1k/48k)
+                    *low = 24 * channels;
+                    if (samplerate > 32000)
+                        *high = channels * 224;
+                    else
+                        *high = channels * 160;
+                }
+            }
+            else
+            {
+                *low = channels * 16;
+                *high = 80 * channels;
+            }
+            break;
+
+        case HB_ACODEC_LAME:
+            *low = hb_audio_bitrates[0].rate;
+            if (samplerate > 24000)
+                *high = 320;
+            else
+                *high = 160;
+            break;
+        
+        default:
+            *low = hb_audio_bitrates[0].rate;
+            *high = hb_audio_bitrates[hb_audio_bitrates_count-1].rate;
+            break;
+    }
+}
+
+// Given an input bitrate, sanitize it.  Check low and high limits and
+// make sure it is in the set of allowed bitrates.
+int hb_get_best_audio_bitrate( uint32_t codec, int bitrate, int samplerate, int mixdown)
+{
+    int low, high;
+
+    hb_get_audio_bitrate_limits(codec, samplerate, mixdown, &low, &high);
+    if (bitrate > high)
+        bitrate = high;
+    if (bitrate < low)
+        bitrate = low;
+    bitrate = hb_find_closest_audio_bitrate(bitrate);
+    return bitrate;
+}
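+/*
+ * [Editor's worked example, illustrative only; assumes HB_AMIXDOWN_STEREO
+ * resolves to 2 discrete channels.] hb_get_best_audio_bitrate(HB_ACODEC_FAAC,
+ * 1000, 48000, HB_AMIXDOWN_STEREO): the limits above give low = 64 and
+ * high = 320 (160 * 2 channels), the request is clamped to 320, and 320 is
+ * already an allowed table entry, so 320 is returned.
+ */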
+
+// Get the default bitrate for a given codec/samplerate/mixdown triplet.
+int hb_get_default_audio_bitrate( uint32_t codec, int samplerate, int mixdown )
+{
+    int bitrate, channels;
+    int sr_shift;
+
+    if( codec & HB_ACODEC_PASS_FLAG )
+        return -1;
+
+    channels = HB_AMIXDOWN_GET_DISCRETE_CHANNEL_COUNT(mixdown);
+
+    // Min bitrate is established such that we get good quality
+    // audio as a minimum.
+    sr_shift = (samplerate <= 24000) ? 1 : 0;
+
+    switch ( codec )
+    {
+        case HB_ACODEC_FFFLAC:
+            bitrate = -1;
+            sr_shift = 0;
+            break;
+        case HB_ACODEC_AC3:
+            if (channels == 1)
+                bitrate = 96;
+            else if (channels <= 2)
+                bitrate = 224;
+            else
+                bitrate = 640;
+            break;
+        case HB_ACODEC_CA_HAAC:
+            bitrate = channels * 32;
+            break;
+        default:
+            bitrate = channels * 80;
+            break;
+    }
+    bitrate >>= sr_shift;
+    bitrate = hb_get_best_audio_bitrate( codec, bitrate, samplerate, mixdown );
+    return bitrate;
+}
+
+// Get limits and hints for the UIs.
+//
+// granularity sets the minimum step increments that should be used
+// (it's ok to round up to some nice multiple if you like)
+//
+// direction says whether 'low' limit is highest or lowest 
+// quality (direction 0 == lowest value is worst quality)
+void hb_get_audio_quality_limits(uint32_t codec, float *low, float *high, float *granularity, int *direction)
+{
+    switch( codec )
+    {
+        case HB_ACODEC_LAME:
+            *direction = 1;
+            *granularity = 0.5;
+            *low = 0.;
+            *high = 10.0;
+            break;
+
+        case HB_ACODEC_VORBIS:
+            *direction = 0;
+            *granularity = 0.05;
+            *low = 0.;
+            *high = 1.0;
+            break;
+
+        case HB_ACODEC_CA_AAC:
+            *direction = 0;
+            *granularity = 9;
+            *low = 0.;
+            *high = 127.0;
+            break;
+
+        default:
+            *direction = 0;
+            *granularity = 1;
+            *low = *high = -1.;
+            break;
+    }
+}
+
+float hb_get_best_audio_quality( uint32_t codec, float quality)
+{
+    float low, high, granularity;
+    int direction;
+
+    hb_get_audio_quality_limits(codec, &low, &high, &granularity, &direction);
+    if (quality > high)
+        quality = high;
+    if (quality < low)
+        quality = low;
+    return quality;
+}
+
+float hb_get_default_audio_quality( uint32_t codec )
+{
+    float quality;
+    switch( codec )
+    {
+        case HB_ACODEC_LAME:
+            quality = 2.;
+            break;
+
+        case HB_ACODEC_VORBIS:
+            quality = .5;
+            break;
+
+        case HB_ACODEC_CA_AAC:
+            quality = 91.;
+            break;
+
+        default:
+            quality = -1.;
+            break;
+    }
+    return quality;
+}
+
+// Get limits and hints for the UIs.
+//
+// granularity sets the minimum step increments that should be used
+// (it's ok to round up to some nice multiple if you like)
+//
+// direction says whether 'low' limit is highest or lowest 
+// compression level (direction 0 == lowest value is worst compression level)
+void hb_get_audio_compression_limits(uint32_t codec, float *low, float *high, float *granularity, int *direction)
+{
+    switch( codec )
+    {
+        case HB_ACODEC_FFFLAC:
+            *direction = 0;
+            *granularity = 1;
+            *high = 12;
+            *low = 0;
+            break;
+
+        case HB_ACODEC_LAME:
+            *direction = 1;
+            *granularity = 1;
+            *high = 9;
+            *low = 0;
+            break;
+
+        default:
+            *direction = 0;
+            *granularity = 1;
+            *low = *high = -1;
+            break;
+    }
+}
+
+float hb_get_best_audio_compression( uint32_t codec, float compression)
+{
+    float low, high, granularity;
+    int direction;
+
+    hb_get_audio_compression_limits( codec, &low, &high, &granularity, &direction );
+    if( compression > high )
+        compression = high;
+    if( compression < low )
+        compression = low;
+    return compression;
+}
+
+float hb_get_default_audio_compression( uint32_t codec )
+{
+    float compression;
+    switch( codec )
+    {
+        case HB_ACODEC_FFFLAC:
+            compression = 5;
+            break;
+
+        case HB_ACODEC_LAME:
+            compression = 2;
+            break;
+
+        default:
+            compression = -1;
+            break;
+    }
+    return compression;
+}
+
+int hb_get_best_mixdown( uint32_t codec, int layout, int mixdown )
+{
+
+    int best_mixdown;
+    
+    if (codec & HB_ACODEC_PASS_FLAG)
+    {
+        // Audio pass-thru.  No mixdown.
+        return HB_AMIXDOWN_NONE;
+    }
+    switch (layout & HB_INPUT_CH_LAYOUT_DISCRETE_NO_LFE_MASK)
+    {
+        // stereo input or something not handled below
+        default:
+        case HB_INPUT_CH_LAYOUT_STEREO:
+            // mono gets mixed up to stereo & more than stereo gets mixed down
+            best_mixdown = HB_AMIXDOWN_STEREO;
+            break;
+
+        // mono input
+        case HB_INPUT_CH_LAYOUT_MONO:
+            // everything else passes through
+            best_mixdown = HB_AMIXDOWN_MONO;
+            break;
+
+        // dolby (DPL1 aka Dolby Surround = 4.0 matrix-encoded) input
+        // the A52 flags don't allow for a way to distinguish between DPL1 and
+        // DPL2 on a DVD so we always assume a DPL1 source for A52_DOLBY.
+        case HB_INPUT_CH_LAYOUT_DOLBY:
+            best_mixdown = HB_AMIXDOWN_DOLBY;
+            break;
+
+        // 4 channel discrete
+        case HB_INPUT_CH_LAYOUT_2F2R:
+        case HB_INPUT_CH_LAYOUT_3F1R:
+            // a52dec and libdca can't upmix to 6ch, 
+            // so we must downmix these.
+            best_mixdown = HB_AMIXDOWN_DOLBYPLII;
+            break;
+
+        // 5, 6, 7, or 8 channel discrete
+        case HB_INPUT_CH_LAYOUT_4F2R:
+        case HB_INPUT_CH_LAYOUT_3F4R:
+        case HB_INPUT_CH_LAYOUT_3F2R:
+            if ( ! ( layout & HB_INPUT_CH_LAYOUT_HAS_LFE ) )
+            {
+                // we don't do 5 channel discrete so mixdown to DPLII
+                // a52dec and libdca can't upmix to 6ch, 
+                // so we must downmix this.
+                best_mixdown = HB_AMIXDOWN_DOLBYPLII;
+            }
+            else
+            {
+                switch (codec)
+                {
+                    case HB_ACODEC_LAME:
+                    case HB_ACODEC_FFAAC:
+                        best_mixdown = HB_AMIXDOWN_DOLBYPLII;
+                        break;
+
+                    default:
+                        best_mixdown = HB_AMIXDOWN_6CH;
+                        break;
+                }
+            }
+            break;
+    }
+    // return the best that is not greater than the requested mixdown
+    // 0 means the caller requested the best available mixdown
+    if( best_mixdown > mixdown && mixdown > 0 )
+        best_mixdown = mixdown;
+    
+    return best_mixdown;
+}
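+/*
+ * [Editor's worked example, illustrative only.] For a 5.1 source
+ * (HB_INPUT_CH_LAYOUT_3F2R with the LFE flag set), the switch above picks
+ * HB_AMIXDOWN_DOLBYPLII for HB_ACODEC_LAME or HB_ACODEC_FFAAC and
+ * HB_AMIXDOWN_6CH for everything else, while any passthru codec
+ * short-circuits to HB_AMIXDOWN_NONE; the final clamp then caps the result
+ * at whatever the caller requested (0 meaning "best available").
+ */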
+
+int hb_get_default_mixdown( uint32_t codec, int layout )
+{
+    int mixdown;
+    switch (codec)
+    {
+        // the AC3 encoder defaults to the best mixdown up to 6-channel
+        case HB_ACODEC_FFFLAC:
+        case HB_ACODEC_AC3:
+            mixdown = HB_AMIXDOWN_6CH;
+            break;
+        // other encoders default to the best mixdown up to DPLII
+        default:
+            mixdown = HB_AMIXDOWN_DOLBYPLII;
+            break;
+    }
+    // return the best available mixdown up to the selected default
+    return hb_get_best_mixdown( codec, layout, mixdown );
+}
+
+/**********************************************************************
+ * hb_reduce
+ **********************************************************************
+ * Given a numerator (num) and a denominator (den), reduce them to an
+ * equivalent fraction and store the result in x and y.
+ *********************************************************************/
+void hb_reduce( int *x, int *y, int num, int den )
+{
+    // find the greatest common divisor of num & den by Euclid's algorithm
+    int n = num, d = den;
+    while ( d )
+    {
+        int t = d;
+        d = n % d;
+        n = t;
+    }
+
+    // at this point n is the gcd. if it's non-zero remove it from num
+    // and den. Otherwise just return the original values.
+    if ( n )
+    {
+        *x = num / n;
+        *y = den / n;
+    }
+    else
+    {
+        *x = num;
+        *y = den;
+    }
+}
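+/*
+ * [Editor's worked example, illustrative only.] hb_reduce(&x, &y, 1920, 1080)
+ * finds the gcd 120 by Euclid's algorithm and stores x = 16, y = 9, i.e. the
+ * familiar 16:9 display aspect; 720/480 reduces to 3/2 the same way.
+ */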
+
+/**********************************************************************
+ * hb_reduce64
+ **********************************************************************
+ * Given a numerator (num) and a denominator (den), reduce them to an
+ * equivalent fraction and store the result in x and y.
+ *********************************************************************/
+void hb_reduce64( int64_t *x, int64_t *y, int64_t num, int64_t den )
+{
+    // find the greatest common divisor of num & den by Euclid's algorithm
+    int64_t n = num, d = den;
+    while ( d )
+    {
+        int64_t t = d;
+        d = n % d;
+        n = t;
+    }
+
+    // at this point n is the gcd. if it's non-zero remove it from num
+    // and den. Otherwise just return the original values.
+    if ( n )
+    {
+        num /= n;
+        den /= n;
+    }
+
+    *x = num;
+    *y = den;
+
+}
+
+void hb_limit_rational64( int64_t *x, int64_t *y, int64_t num, int64_t den, int64_t limit )
+{
+    hb_reduce64( &num, &den, num, den );
+    if ( num < limit && den < limit )
+    {
+        *x = num;
+        *y = den;
+        return;
+    }
+
+    if ( num > den )
+    {
+        double div = (double)limit / num;
+        num = limit;
+        den *= div;
+    }
+    else
+    {
+        double div = (double)limit / den;
+        den = limit;
+        num *= div;
+    }
+    *x = num;
+    *y = den;
+}
+
+/**********************************************************************
+ * hb_fix_aspect
+ **********************************************************************
+ * Given the output width (if HB_KEEP_WIDTH) or height
+ * (HB_KEEP_HEIGHT) and the current crop values, calculates the
+ * correct height or width in order to respect the DVD aspect ratio
+ *********************************************************************/
+void hb_fix_aspect( hb_job_t * job, int keep )
+{
+    hb_title_t * title = job->title;
+    int          i;
+    int  min_width;
+    int min_height;
+    int    modulus;
+    double par, cropped_sar, ar;
+
+    /* don't do anything unless the title has complete size info */
+    if ( title->height == 0 || title->width == 0 || title->aspect == 0 )
+    {
+        hb_log( "hb_fix_aspect: incomplete info for title %d: "
+                "height = %d, width = %d, aspect = %.3f",
+                title->index, title->height, title->width, title->aspect );
+        return;
+    }
+
+    // min_width and min_height should be multiples of modulus
+    min_width    = 32;
+    min_height   = 32;
+    modulus      = job->modulus ? job->modulus : 16;
+
+    for( i = 0; i < 4; i++ )
+    {
+        // Sanity check crop values are zero or positive multiples of 2
+        if( i < 2 )
+        {
+            // Top, bottom
+            job->crop[i] = MIN( EVEN( job->crop[i] ), EVEN( ( title->height / 2 ) - ( min_height / 2 ) ) );
+            job->crop[i] = MAX( 0, job->crop[i] );
+        }
+        else
+        {
+            // Left, right
+            job->crop[i] = MIN( EVEN( job->crop[i] ), EVEN( ( title->width / 2 ) - ( min_width / 2 ) ) );
+            job->crop[i] = MAX( 0, job->crop[i] );
+        }
+    }
+
+    par = (double)title->width / ( (double)title->height * title->aspect );
+    cropped_sar = (double)( title->height - job->crop[0] - job->crop[1] ) /
+                         (double)( title->width - job->crop[2] - job->crop[3] );
+    ar = par * cropped_sar;
+
+    // Dimensions must be greater than minimum and multiple of modulus
+    if( keep == HB_KEEP_WIDTH )
+    {
+        job->width  = MULTIPLE_MOD( job->width, modulus );
+        job->width  = MAX( min_width, job->width );
+        job->height = MULTIPLE_MOD( (uint64_t)( (double)job->width * ar ), modulus );
+        job->height = MAX( min_height, job->height );
+    }
+    else
+    {
+        job->height = MULTIPLE_MOD( job->height, modulus );
+        job->height = MAX( min_height, job->height );
+        job->width  = MULTIPLE_MOD( (uint64_t)( (double)job->height / ar ), modulus );
+        job->width  = MAX( min_width, job->width );
+    }
+}
+
+/**********************************************************************
+ * hb_list implementation
+ **********************************************************************
+ * Basic and slow, but enough for what we need
+ *********************************************************************/
+
+#define HB_LIST_DEFAULT_SIZE 20
+
+struct hb_list_s
+{
+    /* Pointers to items in the list */
+    void ** items;
+
+    /* How many (void *) allocated in 'items' */
+    int     items_alloc;
+
+    /* How many valid pointers in 'items' */
+    int     items_count;
+};
+
+/**********************************************************************
+ * hb_list_init
+ **********************************************************************
+ * Allocates an empty list ready for HB_LIST_DEFAULT_SIZE items
+ *********************************************************************/
+hb_list_t * hb_list_init(void)
+{
+    hb_list_t * l;
+
+    l              = av_mallocz( sizeof( hb_list_t ));
+    l->items       = av_mallocz( HB_LIST_DEFAULT_SIZE * sizeof( void * ));
+    l->items_alloc = HB_LIST_DEFAULT_SIZE;
+
+    return l;
+}
+
+/**********************************************************************
+ * hb_list_count
+ **********************************************************************
+ * Returns the number of items currently in the list
+ *********************************************************************/
+int hb_list_count( hb_list_t * l )
+{
+    return l->items_count;
+}
+
+/**********************************************************************
+ * hb_list_add
+ **********************************************************************
+ * Adds an item at the end of the list, making it bigger if necessary.
+ * Can safely be called with a NULL pointer to add, it will be ignored.
+ *********************************************************************/
+void hb_list_add( hb_list_t * l, void * p )
+{
+    if( !p )
+    {
+        return;
+    }
+
+    if( l->items_count == l->items_alloc )
+    {
+        /* We need a bigger boat */
+        l->items_alloc += HB_LIST_DEFAULT_SIZE;
+        l->items        = av_realloc( l->items,
+                                   l->items_alloc * sizeof( void * ) );
+    }
+
+    l->items[l->items_count] = p;
+    (l->items_count)++;
+}
+
+/**********************************************************************
+ * hb_list_insert
+ **********************************************************************
+ * Adds an item at the specified position in the list, making it bigger
+ * if necessary.
+ * Can safely be called with a NULL pointer to add, it will be ignored.
+ *********************************************************************/
+void hb_list_insert( hb_list_t * l, int pos, void * p )
+{
+    if( !p )
+    {
+        return;
+    }
+
+    if( l->items_count == l->items_alloc )
+    {
+        /* We need a bigger boat */
+        l->items_alloc += HB_LIST_DEFAULT_SIZE;
+        l->items        = av_realloc( l->items,
+                                   l->items_alloc * sizeof( void * ) );
+    }
+
+    if ( l->items_count != pos )
+    {
+        /* Shift all items after it sizeof( void * ) bytes later */
+        memmove( &l->items[pos+1], &l->items[pos],
+                 ( l->items_count - pos ) * sizeof( void * ) );
+    }
+
+
+    l->items[pos] = p;
+    (l->items_count)++;
+}
+
+/**********************************************************************
+ * hb_list_rem
+ **********************************************************************
+ * Remove an item from the list. Bad things will happen if called
+ * with a NULL pointer or if the item is not in the list.
+ *********************************************************************/
+void hb_list_rem( hb_list_t * l, void * p )
+{
+    int i;
+
+    /* Find the item in the list */
+    for( i = 0; i < l->items_count; i++ )
+    {
+        if( l->items[i] == p )
+        {
+            break;
+        }
+    }
+
+    /* Shift all items after it sizeof( void * ) bytes earlier */
+    memmove( &l->items[i], &l->items[i+1],
+             ( l->items_count - i - 1 ) * sizeof( void * ) );
+
+    (l->items_count)--;
+}
+
+/**********************************************************************
+ * hb_list_item
+ **********************************************************************
+ * Returns item at position i, or NULL if there are not that many
+ * items in the list
+ *********************************************************************/
+void * hb_list_item( hb_list_t * l, int i )
+{
+    if( i < 0 || i >= l->items_count )
+    {
+        return NULL;
+    }
+
+    return l->items[i];
+}
+
+
+/**********************************************************************
+ * hb_list_close
+ **********************************************************************
+ * Free memory allocated by hb_list_init. Does NOT free contents of
+ * items still in the list.
+ *********************************************************************/
+void hb_list_close( hb_list_t ** _l )
+{
+    hb_list_t * l = *_l;
+
+    av_free( l->items );
+    av_free( l );
+
+    *_l = NULL;
+}
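+
+/* Editor's illustrative sketch (not from the vgtmpeg sources): typical use of
+ * the hb_list API defined above. The payload type is arbitrary here; the list
+ * only stores pointers and never frees the items themselves. */
+#if 0
+static void example_hb_list_usage(void)
+{
+    hb_list_t *l = hb_list_init();
+    int a = 1, b = 2;
+
+    hb_list_add( l, &a );           /* append */
+    hb_list_insert( l, 0, &b );     /* insert at the front */
+
+    for( int i = 0; i < hb_list_count( l ); i++ )
+    {
+        int *item = hb_list_item( l, i );   /* &b, then &a */
+        (void)item;
+    }
+
+    hb_list_rem( l, &b );           /* items are not freed by the list */
+    hb_list_close( &l );            /* frees the list itself, l becomes NULL */
+}
+#endif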
+
+
+
+void hb_register_error_handler( hb_error_handler_t * handler )
+{
+    error_handler = handler;
+}
+
+/**********************************************************************
+ * hb_title_init
+ **********************************************************************
+ *
+ *********************************************************************/
+hb_title_t * hb_title_init( char * path, int index )
+{
+    hb_title_t * t;
+
+    t = av_mallocz( sizeof( hb_title_t ));
+
+    t->index         = index;
+    t->playlist      = -1;
+    t->list_audio    = hb_list_init();
+    t->list_chapter  = hb_list_init();
+    t->list_subtitle = hb_list_init();
+    t->list_attachment = hb_list_init();
+    av_strlcat( t->path, path, sizeof(t->path) );
+    // default to decoding mpeg2
+    t->video_id      = 0xE0;
+    t->video_codec   = WORK_DECMPEG2;
+    t->angle_count   = 1;
+    t->pixel_aspect_width = 1;
+    t->pixel_aspect_height = 1;
+
+    return t;
+}
+
+/**********************************************************************
+ * hb_title_close
+ **********************************************************************
+ *
+ *********************************************************************/
+void hb_title_close( hb_title_t ** _t )
+{
+    hb_title_t * t = *_t;
+    hb_audio_t * audio;
+    hb_chapter_t * chapter;
+    hb_subtitle_t * subtitle;
+    hb_attachment_t * attachment;
+
+    while( ( audio = hb_list_item( t->list_audio, 0 ) ) )
+    {
+        hb_list_rem( t->list_audio, audio );
+        av_free( audio );
+    }
+    hb_list_close( &t->list_audio );
+
+    while( ( chapter = hb_list_item( t->list_chapter, 0 ) ) )
+    {
+        hb_list_rem( t->list_chapter, chapter );
+        av_free( chapter );
+    }
+    hb_list_close( &t->list_chapter );
+
+    while( ( subtitle = hb_list_item( t->list_subtitle, 0 ) ) )
+    {
+        hb_list_rem( t->list_subtitle, subtitle );
+        if ( subtitle->extradata )
+        {
+            av_free( subtitle->extradata );
+            subtitle->extradata = NULL;
+        }
+        av_free( subtitle );
+    }
+    hb_list_close( &t->list_subtitle );
+    
+    while( ( attachment = hb_list_item( t->list_attachment, 0 ) ) )
+    {
+        hb_list_rem( t->list_attachment, attachment );
+        if ( attachment->name )
+        {
+            av_free( attachment->name );
+            attachment->name = NULL;
+        }
+        if ( attachment->data )
+        {
+            av_free( attachment->data );
+            attachment->data = NULL;
+        }
+        av_free( attachment );
+    }
+    hb_list_close( &t->list_attachment );
+
+    if( t->metadata )
+    {
+        if( t->metadata->coverart )
+        {
+            av_free( t->metadata->coverart );
+        }
+        av_free( t->metadata );
+    }
+
+    if ( t->video_codec_name )
+    {
+        av_free( t->video_codec_name );
+    }
+
+    av_free( t );
+    *_t = NULL;
+}
+
+
+/**********************************************************************
+ * hb_audio_copy
+ **********************************************************************
+ *
+ *********************************************************************/
+hb_audio_t *hb_audio_copy(const hb_audio_t *src)
+{
+    hb_audio_t *audio = NULL;
+
+    if( src )
+    {
+        audio = av_mallocz(sizeof(*audio));
+        memcpy(audio, src, sizeof(*audio));
+    }
+    return audio;
+}
+
+/**********************************************************************
+ * hb_audio_new
+ **********************************************************************
+ *
+ *********************************************************************/
+void hb_audio_config_init(hb_audio_config_t * audiocfg)
+{
+    /* Set read-only parameters to invalid values */
+    audiocfg->in.codec = 0xDEADBEEF;
+    audiocfg->in.bitrate = -1;
+    audiocfg->in.samplerate = -1;
+    audiocfg->in.channel_layout = 0;
+    audiocfg->in.channel_map = NULL;
+    audiocfg->in.version = 0;
+    audiocfg->in.mode = 0;
+    audiocfg->flags.ac3 = 0;
+    audiocfg->lang.description[0] = 0;
+    audiocfg->lang.simple[0] = 0;
+    audiocfg->lang.iso639_2[0] = 0;
+
+    /* Initialize some sensible defaults */
+    audiocfg->in.track = audiocfg->out.track = 0;
+    audiocfg->out.codec = HB_ACODEC_FAAC;
+    audiocfg->out.bitrate = -1;
+    audiocfg->out.quality = -1;
+    audiocfg->out.compression_level = -1;
+    audiocfg->out.samplerate = -1;
+    audiocfg->out.mixdown = -1;
+    audiocfg->out.dynamic_range_compression = 0;
+    audiocfg->out.name = NULL;
+
+}
+
+/**********************************************************************
+ * hb_audio_add
+ **********************************************************************
+ *
+ *********************************************************************/
+int hb_audio_add(const hb_job_t * job, const hb_audio_config_t * audiocfg)
+{
+    hb_title_t *title = job->title;
+    hb_audio_t *audio;
+
+    audio = hb_audio_copy( hb_list_item( title->list_audio, audiocfg->in.track ) );
+    if( audio == NULL )
+    {
+        /* We fail! */
+        return 0;
+    }
+
+    if( (audiocfg->in.bitrate != -1) && (audiocfg->in.codec != 0xDEADBEEF) )
+    {
+        /* This most likely means the client didn't call hb_audio_config_init
+         * so bail. */
+        return 0;
+    }
+
+    /* Set the job's "in track" to the value passed in audiocfg.
+     * HandBrakeCLI assumes this value is preserved in the job's
+     * audio list, but in.track in the title's audio list is not
+     * required to be the same. */
+    audio->config.in.track = audiocfg->in.track;
+
+    /* Really shouldn't ignore the passed out track, but there is currently no
+     * way to handle duplicates or out-of-order track numbers. */
+    audio->config.out.track = hb_list_count(job->list_audio) + 1;
+    audio->config.out.codec = audiocfg->out.codec;
+    if((audiocfg->out.codec & HB_ACODEC_PASS_FLAG) &&
+       ((audiocfg->out.codec == HB_ACODEC_AUTO_PASS) ||
+        (audiocfg->out.codec & audio->config.in.codec & HB_ACODEC_PASS_MASK)))
+    {
+        /* Pass-through, copy from input. */
+        audio->config.out.samplerate = audio->config.in.samplerate;
+        audio->config.out.bitrate = audio->config.in.bitrate;
+        audio->config.out.mixdown = 0;
+        audio->config.out.dynamic_range_compression = 0;
+        audio->config.out.gain = 0;
+    }
+    else
+    {
+        /* Non pass-through, use what is given. */
+        audio->config.out.codec &= ~HB_ACODEC_PASS_FLAG;
+        audio->config.out.samplerate = audiocfg->out.samplerate;
+        audio->config.out.bitrate = audiocfg->out.bitrate;
+        audio->config.out.compression_level = audiocfg->out.compression_level;
+        audio->config.out.quality = audiocfg->out.quality;
+        audio->config.out.dynamic_range_compression = audiocfg->out.dynamic_range_compression;
+        audio->config.out.mixdown = audiocfg->out.mixdown;
+        audio->config.out.gain = audiocfg->out.gain;
+    }
+    if (audiocfg->out.name && *audiocfg->out.name)
+    {
+        audio->config.out.name = audiocfg->out.name;
+    }
+
+    hb_list_add(job->list_audio, audio);
+    return 1;
+}
+
+hb_audio_config_t * hb_list_audio_config_item(hb_list_t * list, int i)
+{
+    hb_audio_t *audio = NULL;
+
+    if( (audio = hb_list_item(list, i)) )
+        return &(audio->config);
+
+    return NULL;
+}
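+
+/* Editor's illustrative sketch (not from the vgtmpeg sources): the intended
+ * call sequence for adding an audio track to a job. hb_audio_config_init()
+ * must be called first, otherwise hb_audio_add() rejects the config. */
+#if 0
+static void example_audio_add(hb_job_t *job)
+{
+    hb_audio_config_t cfg;
+
+    hb_audio_config_init( &cfg );     /* marks the 'in' side as unset */
+    cfg.in.track  = 0;                /* source track index in title->list_audio */
+    cfg.out.codec = HB_ACODEC_AC3_PASS;
+
+    if( !hb_audio_add( job, &cfg ) )
+    {
+        hb_error( "could not add audio track %d", cfg.in.track );
+    }
+}
+#endif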
+
+/**********************************************************************
+ * hb_subtitle_copy
+ **********************************************************************
+ *
+ *********************************************************************/
+hb_subtitle_t *hb_subtitle_copy(const hb_subtitle_t *src)
+{
+    hb_subtitle_t *subtitle = NULL;
+
+    if( src )
+    {
+        subtitle = av_mallocz(sizeof(*subtitle));
+        memcpy(subtitle, src, sizeof(*subtitle));
+        if ( src->extradata )
+        {
+            subtitle->extradata = av_malloc( src->extradata_size );
+            memcpy( subtitle->extradata, src->extradata, src->extradata_size );
+        }
+    }
+    return subtitle;
+}
+
+/**********************************************************************
+ * hb_subtitle_add
+ **********************************************************************
+ *
+ *********************************************************************/
+int hb_subtitle_add(const hb_job_t * job, const hb_subtitle_config_t * subtitlecfg, int track)
+{
+    hb_title_t *title = job->title;
+    hb_subtitle_t *subtitle;
+
+    subtitle = hb_subtitle_copy( hb_list_item( title->list_subtitle, track ) );
+    if( subtitle == NULL )
+    {
+        /* We fail! */
+        return 0;
+    }
+    subtitle->config = *subtitlecfg;
+    hb_list_add(job->list_subtitle, subtitle);
+    return 1;
+}
+
+int hb_srt_add( const hb_job_t * job, 
+                const hb_subtitle_config_t * subtitlecfg, 
+                const char *lang )
+{
+    hb_subtitle_t *subtitle;
+    const iso639_lang_t *language = NULL;
+    int retval = 0;
+
+    subtitle = av_mallocz( sizeof( *subtitle ) );
+    
+    subtitle->id = (hb_list_count(job->list_subtitle) << 8) | 0xFF;
+    subtitle->format = TEXTSUB;
+    subtitle->source = SRTSUB;
+
+    language = lang_for_code2( lang );
+
+    if( language )
+    {
+
+        strcpy( subtitle->lang, language->eng_name );
+        av_strlcpy( subtitle->iso639_2, lang, 4 );
+        
+        subtitle->config = *subtitlecfg;
+        subtitle->config.dest = PASSTHRUSUB;
+
+        hb_list_add(job->list_subtitle, subtitle);
+        retval = 1;
+    }
+    return retval;
+}
+
+/**********************************************************************
+ * hb_attachment_copy
+ **********************************************************************
+ *
+ *********************************************************************/
+hb_attachment_t *hb_attachment_copy(const hb_attachment_t *src)
+{
+    hb_attachment_t *attachment = NULL;
+
+    if( src )
+    {
+        attachment = av_mallocz(sizeof(*attachment));
+        memcpy(attachment, src, sizeof(*attachment));
+        if ( src->name )
+        {
+            attachment->name = strdup( src->name );
+        }
+        if ( src->data )
+        {
+            attachment->data = av_malloc( src->size );
+            memcpy( attachment->data, src->data, src->size );
+        }
+    }
+    return attachment;
+}
+
+/**********************************************************************
+ * hb_yuv2rgb
+ **********************************************************************
+ * Converts a YCrCb pixel to an RGB pixel.
+ * 
+ * This conversion is lossy (due to rounding and clamping).
+ * 
+ * Algorithm:
+ *   http://en.wikipedia.org/w/index.php?title=YCbCr&oldid=361987695#Technical_details
+ *********************************************************************/
+int hb_yuv2rgb(int yuv)
+{
+    double y, Cr, Cb;
+    int r, g, b;
+
+    y  = (yuv >> 16) & 0xff;
+    Cr = (yuv >>  8) & 0xff;
+    Cb = (yuv      ) & 0xff;
+
+    r = 1.164 * (y - 16)                      + 1.596 * (Cr - 128);
+    g = 1.164 * (y - 16) - 0.392 * (Cb - 128) - 0.813 * (Cr - 128);
+    b = 1.164 * (y - 16) + 2.017 * (Cb - 128);
+    
+    r = (r < 0) ? 0 : r;
+    g = (g < 0) ? 0 : g;
+    b = (b < 0) ? 0 : b;
+    
+    r = (r > 255) ? 255 : r;
+    g = (g > 255) ? 255 : g;
+    b = (b > 255) ? 255 : b;
+    
+    return (r << 16) | (g << 8) | b;
+}
+
+/**********************************************************************
+ * hb_rgb2yuv
+ **********************************************************************
+ * Converts an RGB pixel to a YCrCb pixel.
+ * 
+ * This conversion is lossy (due to rounding and clamping).
+ * 
+ * Algorithm:
+ *   http://en.wikipedia.org/w/index.php?title=YCbCr&oldid=361987695#Technical_details
+ *********************************************************************/
+int hb_rgb2yuv(int rgb)
+{
+    double r, g, b;
+    int y, Cr, Cb;
+    
+    r = (rgb >> 16) & 0xff;
+    g = (rgb >>  8) & 0xff;
+    b = (rgb      ) & 0xff;
+
+    y  =  16. + ( 0.257 * r) + (0.504 * g) + (0.098 * b);
+    Cb = 128. + (-0.148 * r) - (0.291 * g) + (0.439 * b);
+    Cr = 128. + ( 0.439 * r) - (0.368 * g) - (0.071 * b);
+    
+    y = (y < 0) ? 0 : y;
+    Cb = (Cb < 0) ? 0 : Cb;
+    Cr = (Cr < 0) ? 0 : Cr;
+    
+    y = (y > 255) ? 255 : y;
+    Cb = (Cb > 255) ? 255 : Cb;
+    Cr = (Cr > 255) ? 255 : Cr;
+    
+    return (y << 16) | (Cr << 8) | Cb;
+}
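+
+/* Editor's illustrative sketch (not from the vgtmpeg sources): the two
+ * conversions above are approximate inverses; because of rounding and
+ * clamping, a round trip is close to, but not always equal to, the input. */
+#if 0
+static void example_yuv_rgb_roundtrip(void)
+{
+    int rgb  = 0x336699;                    /* 0xRRGGBB */
+    int yuv  = hb_rgb2yuv( rgb );           /* 0x00YYCrCb */
+    int back = hb_yuv2rgb( yuv );           /* approximately 0x336699 again */
+    (void)back;
+}
+#endif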
+
+const char * hb_subsource_name( int source )
+{
+    switch (source)
+    {
+        case VOBSUB:
+            return "VOBSUB";
+        case SRTSUB:
+            return "SRT";
+        case CC608SUB:
+            return "CC";
+        case CC708SUB:
+            return "CC";
+        case UTF8SUB:
+            return "UTF-8";
+        case TX3GSUB:
+            return "TX3G";
+        case SSASUB:
+            return "SSA";
+        default:
+            return "Unknown";
+    }
+}
+
+int hb_dvd_region(char *device, int *region_mask)
+{
+#if defined( DVD_LU_SEND_RPC_STATE ) && defined( DVD_AUTH )
+    struct stat  st;
+    dvd_authinfo ai;
+    int          fd, ret;
+
+    fd = open( device, O_RDONLY );
+    if ( fd < 0 )
+        return -1;
+    if ( fstat( fd, &st ) < 0 )
+    {
+        close( fd );
+        return -1;
+    }
+    if ( !( S_ISBLK( st.st_mode ) || S_ISCHR( st.st_mode ) ) )
+    {
+        close( fd );
+        return -1;
+    }
+
+    ai.type = DVD_LU_SEND_RPC_STATE;
+    ret = ioctl(fd, DVD_AUTH, &ai);
+    close( fd );
+    if ( ret < 0 )
+        return ret;
+
+    *region_mask = ai.lrpcs.region_mask;
+    return 0;
+#else
+    return -1;
+#endif
+}
+
+/* Converts a hex character to its integer value */
+static char from_hex(char ch) {
+  return isdigit(ch) ? ch - '0' : tolower(ch) - 'a' + 10;
+}
+
+/* Converts an integer value to its hex character*/
+static char to_hex(char code) {
+  static char hex[] = "0123456789abcdef";
+  return hex[code & 15];
+}
+
+/* Returns a url-encoded version of str */ 
+/* IMPORTANT: be sure to av_free() the returned string after use */
+char *url_encode(const char *str) {
+  char *pstr = (char *)str, *buf = av_malloc(strlen(str) * 3 + 1), *pbuf = buf;
+  while (*pstr) {
+    if (isalnum(*pstr) || *pstr == '-' || *pstr == '_' || *pstr == '.' || *pstr == '~' || *pstr == '/' || *pstr == '#')
+      *pbuf++ = *pstr;
+    else if (*pstr == ' ')
+      *pbuf++ = '+';
+    else
+      *pbuf++ = '%', *pbuf++ = to_hex(*pstr >> 4), *pbuf++ = to_hex(*pstr & 15);
+    pstr++;
+  }
+  *pbuf = '\0';
+  return buf;
+}
+
+/* Returns a url-decoded version of str */
+/* IMPORTANT: be sure to av_free() the returned string after use */
+char *url_decode(const char *str) {
+  char *pstr = (char *)str, *buf = av_malloc(strlen(str) + 1), *pbuf = buf;
+  while (*pstr) {
+    if (*pstr == '%') {
+      if (pstr[1] && pstr[2]) {
+        *pbuf++ = from_hex(pstr[1]) << 4 | from_hex(pstr[2]);
+        pstr += 2;
+      }
+    } else if (*pstr == '+') {
+      *pbuf++ = ' ';
+    } else {
+      *pbuf++ = *pstr;
+    }
+    pstr++;
+  }
+  *pbuf = '\0';
+  return buf;
+}
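+
+/* Editor's illustrative sketch (not from the vgtmpeg sources): both helpers
+ * allocate with av_malloc(), so the results must be released with av_free(). */
+#if 0
+static void example_url_encode_decode(void)
+{
+    char *enc = url_encode( "/media/My Movies/disc 1" );  /* spaces become '+' */
+    char *dec = url_decode( enc );                        /* original string again */
+
+    av_free( enc );
+    av_free( dec );
+}
+#endif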
+
+/* Parses a 'filename' that can be a dvd://... or file://... URL, or a plain
+ * file name, and returns the decoded path to the resource and, optionally, a title.
+ */
+int url_parse(const char *proto, const char *filename, const char **urlpath, int *title ) {
+    const char *pathstart;
+    char *protourl = av_asprintf("%s://",proto);
+    *title = 0;
+
+    if( av_strstart(filename, protourl, &pathstart) ) {
+    	const char *path;
+    	char *query, *tq;
+        av_free(protourl);
+        path = strdup(pathstart);
+
+        /* remove query from path */
+        tq = strchr(path, '?');
+        if(tq) *tq = 0;
+        *urlpath = url_decode(path);
+
+
+        query = strchr(pathstart,'?');
+        // is url
+        // is title specified in query string
+        if( query && ((tq=strstr(query, "?title=")) || (tq=strstr(query, "&title=")))) {
+            tq+=7;
+            *title = atoi(tq);
+        }
+    } else if( av_strstart(filename, "file://", &pathstart) ) {
+        return 0;
+        //*urlpath = url_decode(pathstart);
+    } else {
+        /* assume raw file name */
+        *urlpath = filename;
+    }
+    return 1;
+}
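+
+/* Editor's illustrative sketch (not from the vgtmpeg sources): how url_parse()
+ * splits a dvd:// URL into a decoded device path and an optional title number.
+ * The paths used below are hypothetical. */
+#if 0
+static void example_url_parse(void)
+{
+    const char *urlpath = NULL;
+    int title = 0;
+
+    if( url_parse( "dvd", "dvd:///dev/sr0?title=3", &urlpath, &title ) )
+    {
+        /* urlpath == "/dev/sr0" (freshly allocated), title == 3 */
+    }
+
+    if( url_parse( "dvd", "/some/plain/file.vob", &urlpath, &title ) )
+    {
+        /* urlpath aliases the input string, title == 0 */
+    }
+}
+#endif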
+
+#define bdurl_max(a,b) ((a)>(b)?(a):(b))
+#define bdurl_min(a,b) ((a)<(b)?(a):(b))
+
+/* reads size bytes into buffer doing continuous reads if necessary.
+ * calls the read function passed with the context */
+int fragmented_read(void *ctx, fragread_t read, hb_buffer_t **cur_read_buffer, unsigned char *buf, int size)
+{
+    unsigned char *bufptr = buf;
+    unsigned char *bufend = buf + size;
+
+    while( bufptr < bufend ) {
+        /* if there is still a buffer we were reading */
+        if( *cur_read_buffer ) {
+            hb_buffer_t *cur_hb_buffer = *cur_read_buffer;
+            int left_dstbytes = bufend - bufptr;
+            int left_srcbytes = cur_hb_buffer->size - cur_hb_buffer->cur;
+
+            int readmax = bdurl_min( left_srcbytes, left_dstbytes );
+
+            memcpy(bufptr, cur_hb_buffer->data + cur_hb_buffer->cur, readmax );
+
+            bufptr += readmax;
+            cur_hb_buffer->cur += readmax;
+
+            if( cur_hb_buffer->cur == cur_hb_buffer->size ) {
+                *cur_read_buffer = 0;
+            }
+        } else {
+            /* reading fresh data from bdread. this must return a buffer if successful */
+            *cur_read_buffer = read(ctx); //hb_bd_read( ctx->hb_bd );
+            if(!(*cur_read_buffer)) {
+                hb_log_level(HB_LOG_VERBOSE,"fragmented_read: EOF");
+                return AVERROR_EOF;
+            }
+            (*cur_read_buffer)->cur = 0;
+        }
+    }
+    return bufptr - buf;
+}
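+
+/* Editor's illustrative sketch (not from the vgtmpeg sources): fragmented_read()
+ * pulls whole hb_buffer_t fragments from the callback and copies exactly 'size'
+ * bytes into 'buf', keeping any partially consumed fragment in *cur_read_buffer
+ * for the next call. The callback below is a hypothetical stand-in for a real
+ * reader such as hb_bd_read()/hb_dvd_read(). */
+#if 0
+static hb_buffer_t *example_read_cb(void *ctx)
+{
+    /* return the next block read from the source, or NULL at EOF */
+    return NULL;
+}
+
+static void example_fragmented_read(void *ctx)
+{
+    hb_buffer_t *leftover = NULL;
+    unsigned char sector[HB_DVD_READ_BUFFER_SIZE];
+    int n = fragmented_read( ctx, example_read_cb, &leftover, sector, sizeof(sector) );
+    /* n == sizeof(sector) on success, AVERROR_EOF when the callback runs dry */
+    (void)n;
+}
+#endif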
+
+static int __parse_optmedia_path(const char *proto, void *ctx, const char *path, ff_input_func_t *ff ){
+	om_handle_t *c;
+    int min_title_duration = 25*90000;
+    //const char *fpath;
+    // static char dfname[4096];
+    const char *urlpath;
+    int urltitle=0;
+
+    hb_optmedia_func_t *om;
+    if(!strcmp(proto,"dvd")) {
+        om = hb_optmedia_dvd_methods();
+    } else if(!strcmp(proto,"bd")){
+        om = hb_optmedia_bd_methods();
+    } else {
+        return 0;
+    }
+
+    if(!url_parse(proto, path, &urlpath, &urltitle))
+        return 0;
+
+    c = om->init((char *)urlpath);
+    if(c) {
+        int tc = om->title_count(c);
+        int i;
+        hb_title_t *longest_title = NULL;
+        int longest_title_idx;
+        if (ff->parse_file) {
+            char *efilename;
+
+            hb_list_t *list_title = hb_list_init();
+
+            if( urltitle && urltitle>0 && urltitle<=tc ) {
+                hb_title_t *t = om->title_scan(c, urltitle, min_title_duration);
+                if (t) {
+                    hb_list_add(list_title, t);
+                } else {
+                    hb_error("parse_optmedia_path: couldn't open title %d. Does title exist in %s?", urltitle, proto);
+                    return 0;
+                }
+            } else {
+                /* retrieve title information */
+                for (i = 1; i <= tc; i++) {
+                    hb_title_t *t = om->title_scan(c, i, min_title_duration);
+                    if (t) {
+                        hb_list_add(list_title, t);
+                    }
+                }
+            }
+            longest_title_idx = om->main_feature(c,list_title);
+            for (i = 0; i < hb_list_count(list_title); i++) {
+                if( ((hb_title_t *)hb_list_item(list_title, i))->index == longest_title_idx ) {
+                    longest_title = hb_list_item(list_title,i);
+                    break;
+                }
+            }
+
+            hb_log_level(HB_LOG_VERBOSE, "parse_optmedia_path: calling parse file");
+            /* call parse file */
+            efilename = url_encode(urlpath);
+            for (i = 0; i < hb_list_count(list_title); i++) {
+                hb_title_t *t = hb_list_item(list_title,i);
+                //dfname[0] = 0;
+                char *ppf = av_asprintf("%s://%s?title=%d",proto,efilename,t->index);
+                //av_strlcat(dfname, "dvd://", 7);
+                //av_strlcat(dfname, efilename, 2048 - 7);
+                //av_strlcatf(dfname, 2048, "?title=%d", t->index);
+                ff->parse_file(ctx, ppf );
+                av_free(ppf);
+            }
+            av_free(efilename);
+
+            if( urltitle && urltitle>0 && urltitle<=tc ) {
+                ff->select_default_program(urltitle);
+            }else if( longest_title ) {
+                ff->select_default_program(longest_title->index);
+            }
+        }
+        om->close(&c);
+        return tc;
+    }
+    return 0;
+}
+
+int parse_optmedia_path(void *ctx, const char *path, ff_input_func_t *ff ){
+    if(!__parse_optmedia_path("dvd",ctx,path,ff) &&
+       !__parse_optmedia_path("bd",ctx,path,ff) ) {
+        return 0;
+    }
+    return 1;
+}
+
diff --git a/libavformat/dvdurl_common.h b/libavformat/dvdurl_common.h
new file mode 100644
index 0000000000..3d1cd144ad
--- /dev/null
+++ b/libavformat/dvdurl_common.h
@@ -0,0 +1,838 @@
+/* @@--
+ * 
+ * Copyright (C) 2010-2018 Alberto Vigata
+ *       
+ * This file is part of vgtmpeg
+ * 
+ * a Versed Generalist Transcoder
+ * 
+ * vgtmpeg is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2, or (at your option)
+ * any later version.
+ * 
+ * vgtmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ * 
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ */
+
+#ifndef HB_COMMON_H
+#define HB_COMMON_H
+
+#include <math.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdarg.h>
+#include <string.h>
+#include <unistd.h>
+#include <inttypes.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <dirent.h>
+
+#include "optmedia.h"
+
+#if defined( __GNUC__ ) && !(defined( _WIN32 ) || defined( __MINGW32__ ))
+#   define HB_WPRINTF(s,v) __attribute__((format(printf,s,v)))
+#else
+#   define HB_WPRINTF(s,v)
+#endif
+
+#if defined( SYS_MINGW )
+#   define fseek fseeko64
+#   define ftell ftello64
+#   undef  fseeko
+#   define fseeko fseeko64
+#   undef  ftello
+#   define ftello ftello64
+#   define flockfile(...)
+#   define funlockfile(...)
+#   define getc_unlocked getc
+#   undef  off_t
+#   define off_t off64_t
+#endif
+
+#ifndef MIN
+#define MIN( a, b ) ( (a) > (b) ? (b) : (a) )
+#endif
+#ifndef MAX
+#define MAX( a, b ) ( (a) > (b) ? (a) : (b) )
+#endif
+
+#define EVEN( a )        ( (a) + ( (a) & 1 ) )
+#define MULTIPLE_16( a ) ( 16 * ( ( (a) + 8 ) / 16 ) )
+#define MULTIPLE_MOD( a, b ) ((b==1)?a:( b * ( ( (a) + (b / 2) - 1) / b ) ))
+#define MULTIPLE_MOD_DOWN( a, b ) ((b==1)?a:( b * ( (a) / b ) ))
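+/* Editor's note (illustrative values, derived from the definitions above):
+ *   MULTIPLE_MOD( 700, 16 )      == 704   (nearest multiple of 16)
+ *   MULTIPLE_MOD( 696, 16 )      == 688
+ *   MULTIPLE_MOD_DOWN( 700, 16 ) == 688   (always rounds down)
+ *   EVEN( 7 )                    == 8
+ *   MULTIPLE_16( 700 )           == 704
+ */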
+
+#define HB_DVD_READ_BUFFER_SIZE 2048
+
+typedef struct hb_handle_s hb_handle_t;
+typedef struct hb_list_s hb_list_t;
+typedef struct hb_rate_s hb_rate_t;
+typedef struct hb_mixdown_s hb_mixdown_t;
+typedef struct hb_encoder_s hb_encoder_t;
+typedef struct hb_job_s  hb_job_t;
+typedef struct hb_title_s hb_title_t;
+typedef struct hb_chapter_s hb_chapter_t;
+typedef struct hb_audio_s hb_audio_t;
+typedef struct hb_audio_config_s hb_audio_config_t;
+typedef struct hb_subtitle_s hb_subtitle_t;
+typedef struct hb_subtitle_config_s hb_subtitle_config_t;
+typedef struct hb_attachment_s hb_attachment_t;
+typedef struct hb_metadata_s hb_metadata_t;
+typedef struct hb_state_s hb_state_t;
+typedef union  hb_esconfig_u     hb_esconfig_t;
+typedef struct hb_work_private_s hb_work_private_t;
+typedef struct hb_work_object_s  hb_work_object_t;
+typedef struct hb_filter_private_s hb_filter_private_t;
+typedef struct hb_filter_object_s  hb_filter_object_t;
+typedef struct hb_buffer_s hb_buffer_t;
+typedef struct hb_fifo_s hb_fifo_t;
+typedef struct hb_lock_s hb_lock_t;
+
+#define __LIBHB__
+
+#ifdef __LIBHB__
+#define PRIVATE
+#else
+#define PRIVATE const
+#endif
+
+hb_list_t * hb_list_init(void);
+int         hb_list_count( hb_list_t * );
+void        hb_list_add( hb_list_t *, void * );
+void        hb_list_insert( hb_list_t * l, int pos, void * p );
+void        hb_list_rem( hb_list_t *, void * );
+void      * hb_list_item( hb_list_t *, int );
+void        hb_list_close( hb_list_t ** );
+
+void hb_reduce( int *x, int *y, int num, int den );
+void hb_reduce64( int64_t *x, int64_t *y, int64_t num, int64_t den );
+void hb_limit_rational64( int64_t *x, int64_t *y, int64_t num, int64_t den, int64_t limit );
+
+#define HB_KEEP_WIDTH  0
+#define HB_KEEP_HEIGHT 1
+void hb_fix_aspect( hb_job_t * job, int keep );
+
+hb_audio_t *hb_audio_copy(const hb_audio_t *src);
+void hb_audio_config_init(hb_audio_config_t * audiocfg);
+int hb_audio_add(const hb_job_t * job, const hb_audio_config_t * audiocfg);
+hb_audio_config_t * hb_list_audio_config_item(hb_list_t * list, int i);
+
+hb_subtitle_t *hb_subtitle_copy(const hb_subtitle_t *src);
+int hb_subtitle_add(const hb_job_t * job, const hb_subtitle_config_t * subtitlecfg, int track);
+int hb_srt_add(const hb_job_t * job, const hb_subtitle_config_t * subtitlecfg, 
+               const char *lang);
+
+hb_attachment_t *hb_attachment_copy(const hb_attachment_t *src);
+
+struct hb_rate_s
+{
+    const char * string;
+    const int    rate;
+};
+
+struct hb_mixdown_s
+{
+    const char * human_readable_name;
+    const char * internal_name;
+    const char * short_name;
+    const int    amixdown;
+};
+
+struct hb_encoder_s
+{
+    const char * human_readable_name; // note: used in presets
+    const char * short_name;          // note: used in CLI
+    const int    encoder;             // HB_*CODEC_* define
+    const int    muxers;              // supported muxers
+};
+
+struct hb_subtitle_config_s
+{
+    enum subdest { RENDERSUB, PASSTHRUSUB } dest;
+    int  force;
+    int  default_track; 
+    
+    /* SRT subtitle tracks only */
+    char src_filename[256];
+    char src_codeset[40];
+    int64_t offset;
+};
+
+#define HB_VIDEO_RATE_BASE   27000000
+
+extern hb_rate_t    hb_video_rates[];
+extern int          hb_video_rates_count;
+extern hb_rate_t    hb_audio_rates[];
+extern int          hb_audio_rates_count;
+extern int          hb_audio_rates_default;
+extern hb_rate_t    hb_audio_bitrates[];
+extern int          hb_audio_bitrates_count;
+extern hb_mixdown_t hb_audio_mixdowns[];
+extern int          hb_audio_mixdowns_count;
+extern hb_encoder_t hb_video_encoders[];
+extern int          hb_video_encoders_count;
+extern hb_encoder_t hb_audio_encoders[];
+extern int          hb_audio_encoders_count;
+int hb_mixdown_get_mixdown_from_short_name( const char * short_name );
+const char * hb_mixdown_get_short_name_from_mixdown( int amixdown );
+void hb_autopassthru_apply_settings( hb_job_t * job, hb_title_t * title );
+int hb_autopassthru_get_encoder( int in_codec, int copy_mask, int fallback, int muxer );
+int hb_get_best_mixdown( uint32_t codec, int layout, int mixdown );
+int hb_get_default_mixdown( uint32_t codec, int layout );
+int hb_find_closest_audio_bitrate(int bitrate);
+void hb_get_audio_bitrate_limits(uint32_t codec, int samplerate, int mixdown, int *low, int *high);
+int hb_get_best_audio_bitrate( uint32_t codec, int bitrate, int samplerate, int mixdown);
+int hb_get_default_audio_bitrate( uint32_t codec, int samplerate, int mixdown );
+void hb_get_audio_quality_limits(uint32_t codec, float *low, float *high, float *granularity, int *direction);
+float hb_get_best_audio_quality( uint32_t codec, float quality);
+float hb_get_default_audio_quality( uint32_t codec );
+void hb_get_audio_compression_limits(uint32_t codec, float *low, float *high, float *granularity, int *direction);
+float hb_get_best_audio_compression( uint32_t codec, float compression);
+float hb_get_default_audio_compression( uint32_t codec );
+
+/******************************************************************************
+ * hb_job_t: settings to be filled by the UI
+ *****************************************************************************/
+struct hb_job_s
+{
+    /* ID assigned by UI so it can group job passes together */
+    int             sequence_id;
+
+    /* Pointer to the title to be ripped */
+    hb_title_t    * title;
+    int             feature; // Detected DVD feature title
+
+    /* Chapter selection */
+    int             chapter_start;
+    int             chapter_end;
+
+	/* Include chapter marker track in mp4? */
+    int             chapter_markers;
+
+    /* Picture settings:
+         crop:                must be multiples of 2 (top/bottom/left/right)
+         deinterlace:         0 or 1
+         width:               must be a multiple of 2
+         height:              must be a multiple of 2
+         keep_ratio:          used by UIs
+         grayscale:           black and white encoding
+         pixel_ratio:         store pixel aspect ratio in the video
+         pixel_aspect_width:  numerator for pixel aspect ratio
+         pixel_aspect_height: denominator for pixel aspect ratio
+         modulus:             set a number for dimensions to be multiples of
+         maxWidth:            keep width below this
+         maxHeight:           keep height below this */
+    int             crop[4];
+    int             deinterlace;
+    hb_list_t     * filters;
+    int             width;
+    int             height;
+    int             keep_ratio;
+    int             grayscale;
+
+    struct
+    {
+        int             mode;
+        int             itu_par;
+        int             par_width;
+        int             par_height;
+        int             dar_width;  // 0 if normal
+        int             dar_height; // 0 if normal
+        int             keep_display_aspect;
+    } anamorphic;
+
+    int             modulus;
+    int             maxWidth;
+    int             maxHeight;
+
+    /* Video settings:
+         vcodec:            output codec
+         vquality:          output quality (0.0..1.0)
+                            if < 0.0 or > 1.0, bitrate is used instead,
+                            except with x264, to use its real QP/RF scale
+         vbitrate:          output bitrate (kbps)
+         vrate, vrate_base: output framerate is vrate / vrate_base
+         cfr:               0 (vfr), 1 (cfr), 2 (pfr) [see render.c]
+         pass:              0, 1 or 2 (or -1 for scan)
+         advanced_opts:     string of extra advanced encoder options
+         areBframes:        boolean to note if b-frames are included in advanced_opts */
+#define HB_VCODEC_MASK   0x0000FF
+#define HB_VCODEC_X264   0x000001
+#define HB_VCODEC_THEORA 0x000002
+#define HB_VCODEC_FFMPEG_MPEG4 0x000010
+#define HB_VCODEC_FFMPEG       HB_VCODEC_FFMPEG_MPEG4
+#define HB_VCODEC_FFMPEG_MPEG2 0x000020
+#define HB_VCODEC_FFMPEG_MASK  0x0000F0
+
+    int             vcodec;
+    float           vquality;
+    int             vbitrate;
+    int             pfr_vrate;
+    int             pfr_vrate_base;
+    int             vrate;
+    int             vrate_base;
+    int             cfr;
+    int             pass;
+    char            *advanced_opts;
+    char            *x264_profile;
+    char            *x264_preset;
+    char            *x264_tune;
+    int             areBframes;
+    int             color_matrix_code;
+    int             color_prim;
+    int             color_transfer;
+    int             color_matrix;
+
+    /* List of audio settings. */
+    hb_list_t     * list_audio;
+    int             acodec_copy_mask; // Auto Passthru allowed codecs
+    int             acodec_fallback;  // Auto Passthru fallback encoder
+
+    /* Subtitles */
+    hb_list_t     * list_subtitle;
+
+    /* Muxer settings
+         mux:  output file format
+         file: file path */
+#define HB_MUX_MASK 0xFF0000
+#define HB_MUX_MP4  0x010000
+#define HB_MUX_MKV  0x200000
+
+    int             mux;
+    const char          * file;
+
+    /* Allow MP4 files > 4 gigs */
+    int             largeFileSize;
+    int             mp4_optimize;
+    int             ipod_atom;
+
+    int                     indepth_scan;
+    hb_subtitle_config_t    select_subtitle_config;
+
+    int             angle;              // dvd angle to encode
+    int             frame_to_start;     // declare eof when we hit this frame
+    int64_t         pts_to_start;       // drop frames until  we pass this pts 
+                                        //  in the time-linearized input stream
+    int             frame_to_stop;      // declare eof when we hit this frame
+    int64_t         pts_to_stop;        // declare eof when we pass this pts in
+                                        //  the time-linearized input stream
+    int             start_at_preview;   // if non-zero, encoding will start
+                                        //  at the position of preview n
+    int             seek_points;        //  out of N previews
+    uint32_t        frames_to_skip;     // decode but discard this many frames
+                                        //  initially (for frame accurate positioning
+                                        //  to non-I frames).
+
+};
+
+/* Audio starts here */
+/* Audio Codecs */
+#define HB_ACODEC_MASK      0x001FFF00
+#define HB_ACODEC_FAAC      0x00000100
+#define HB_ACODEC_LAME      0x00000200
+#define HB_ACODEC_VORBIS    0x00000400
+#define HB_ACODEC_AC3       0x00000800
+#define HB_ACODEC_LPCM      0x00001000
+#define HB_ACODEC_DCA       0x00002000
+#define HB_ACODEC_CA_AAC    0x00004000
+#define HB_ACODEC_CA_HAAC   0x00008000
+#define HB_ACODEC_FFAAC     0x00010000
+#define HB_ACODEC_FFMPEG    0x00020000
+#define HB_ACODEC_DCA_HD    0x00040000
+#define HB_ACODEC_MP3       0x00080000
+#define HB_ACODEC_FFFLAC    0x00100000
+#define HB_ACODEC_FF_MASK   0x001f0000
+#define HB_ACODEC_PASS_FLAG 0x40000000
+#define HB_ACODEC_PASS_MASK (HB_ACODEC_MP3 | HB_ACODEC_FFAAC | HB_ACODEC_DCA_HD | HB_ACODEC_AC3 | HB_ACODEC_DCA)
+#define HB_ACODEC_AUTO_PASS (HB_ACODEC_PASS_MASK | HB_ACODEC_PASS_FLAG)
+#define HB_ACODEC_MP3_PASS  (HB_ACODEC_MP3 | HB_ACODEC_PASS_FLAG)
+#define HB_ACODEC_AAC_PASS  (HB_ACODEC_FFAAC | HB_ACODEC_PASS_FLAG)
+#define HB_ACODEC_AC3_PASS  (HB_ACODEC_AC3 | HB_ACODEC_PASS_FLAG)
+#define HB_ACODEC_DCA_PASS  (HB_ACODEC_DCA | HB_ACODEC_PASS_FLAG)
+#define HB_ACODEC_DCA_HD_PASS  (HB_ACODEC_DCA_HD | HB_ACODEC_PASS_FLAG)
+#define HB_ACODEC_ANY       (HB_ACODEC_MASK | HB_ACODEC_PASS_FLAG)
+
+#define HB_SUBSTREAM_BD_TRUEHD  0x72
+#define HB_SUBSTREAM_BD_AC3     0x76
+#define HB_SUBSTREAM_BD_DTSHD   0x72
+#define HB_SUBSTREAM_BD_DTS     0x71
+
+/* Audio Mixdown */
+/* define some masks, used to extract the various information from the HB_AMIXDOWN_XXXX values */
+#define HB_AMIXDOWN_DCA_FORMAT_MASK             0x00FFF000
+#define HB_AMIXDOWN_A52_FORMAT_MASK             0x00000FF0
+#define HB_AMIXDOWN_DISCRETE_CHANNEL_COUNT_MASK 0x0000000F
+/* define the HB_AMIXDOWN_XXXX values */
+#define HB_AMIXDOWN_NONE                        0x00000000
+#define HB_AMIXDOWN_MONO                        0x01000001
+// DCA_FORMAT of DCA_MONO                  = 0    = 0x000
+// A52_FORMAT of A52_MONO                  = 1    = 0x01
+// discrete channel count of 1
+#define HB_AMIXDOWN_STEREO                      0x02002022
+// DCA_FORMAT of DCA_STEREO                = 2    = 0x002
+// A52_FORMAT of A52_STEREO                = 2    = 0x02
+// discrete channel count of 2
+#define HB_AMIXDOWN_DOLBY                       0x042070A2
+// DCA_FORMAT of DCA_3F1R | DCA_OUT_DPLI   = 519  = 0x207
+// A52_FORMAT of A52_DOLBY                 = 10   = 0x0A
+// discrete channel count of 2
+#define HB_AMIXDOWN_DOLBYPLII                   0x084094A2
+// DCA_FORMAT of DCA_3F2R | DCA_OUT_DPLII  = 1033 = 0x409
+// A52_FORMAT of A52_DOLBY | A52_USE_DPLII = 74   = 0x4A
+// discrete channel count of 2
+#define HB_AMIXDOWN_6CH                         0x10089176
+// DCA_FORMAT of DCA_3F2R | DCA_LFE        = 137  = 0x089
+// A52_FORMAT of A52_3F2R | A52_LFE        = 23   = 0x17
+// discrete channel count of 6
+/* define some macros to extract the various information from the HB_AMIXDOWN_XXXX values */
+#define HB_AMIXDOWN_GET_DCA_FORMAT( a ) ( ( a & HB_AMIXDOWN_DCA_FORMAT_MASK ) >> 12 )
+#define HB_AMIXDOWN_GET_A52_FORMAT( a ) ( ( a & HB_AMIXDOWN_A52_FORMAT_MASK ) >> 4 )
+#define HB_AMIXDOWN_GET_DISCRETE_CHANNEL_COUNT( a ) ( ( a & HB_AMIXDOWN_DISCRETE_CHANNEL_COUNT_MASK ) )
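+/* Editor's note (illustrative values, following from the constants above):
+ *   HB_AMIXDOWN_GET_DCA_FORMAT( HB_AMIXDOWN_DOLBYPLII )             == 0x409
+ *   HB_AMIXDOWN_GET_A52_FORMAT( HB_AMIXDOWN_DOLBYPLII )             == 0x4A
+ *   HB_AMIXDOWN_GET_DISCRETE_CHANNEL_COUNT( HB_AMIXDOWN_DOLBYPLII ) == 2
+ */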
+
+/* Input Channel Layout */
+/* define some masks, used to extract the various information from the HB_AMIXDOWN_XXXX values */
+#define HB_INPUT_CH_LAYOUT_DISCRETE_FRONT_MASK  0x00F0000
+#define HB_INPUT_CH_LAYOUT_DISCRETE_REAR_MASK   0x000F000
+#define HB_INPUT_CH_LAYOUT_DISCRETE_LFE_MASK    0x0000F00
+#define HB_INPUT_CH_LAYOUT_DISCRETE_NO_LFE_MASK 0xFFFF0FF
+#define HB_INPUT_CH_LAYOUT_ENCODED_FRONT_MASK   0x00000F0
+#define HB_INPUT_CH_LAYOUT_ENCODED_REAR_MASK    0x000000F
+/* define the input channel layouts used to describe the channel layout of this audio */
+#define HB_INPUT_CH_LAYOUT_MONO    0x0110010
+#define HB_INPUT_CH_LAYOUT_STEREO  0x0220020
+#define HB_INPUT_CH_LAYOUT_DOLBY   0x0320031
+#define HB_INPUT_CH_LAYOUT_3F      0x0430030
+#define HB_INPUT_CH_LAYOUT_2F1R    0x0521021
+#define HB_INPUT_CH_LAYOUT_3F1R    0x0631031
+#define HB_INPUT_CH_LAYOUT_2F2R    0x0722022
+#define HB_INPUT_CH_LAYOUT_3F2R    0x0832032
+#define HB_INPUT_CH_LAYOUT_4F2R    0x0942042
+#define HB_INPUT_CH_LAYOUT_3F4R    0x0a34034
+#define HB_INPUT_CH_LAYOUT_HAS_LFE 0x0000100
+/* define some macros to extract the various information from the HB_AMIXDOWN_XXXX values */
+#define HB_INPUT_CH_LAYOUT_GET_DISCRETE_FRONT_COUNT( a ) ( ( a & HB_INPUT_CH_LAYOUT_DISCRETE_FRONT_MASK ) >> 16 )
+#define HB_INPUT_CH_LAYOUT_GET_DISCRETE_REAR_COUNT( a )  ( ( a & HB_INPUT_CH_LAYOUT_DISCRETE_REAR_MASK ) >> 12 )
+#define HB_INPUT_CH_LAYOUT_GET_DISCRETE_LFE_COUNT( a )   ( ( a & HB_INPUT_CH_LAYOUT_DISCRETE_LFE_MASK ) >> 8 )
+#define HB_INPUT_CH_LAYOUT_GET_DISCRETE_COUNT( a ) ( ( ( a & HB_INPUT_CH_LAYOUT_DISCRETE_FRONT_MASK ) >> 16 ) + ( ( a & HB_INPUT_CH_LAYOUT_DISCRETE_REAR_MASK ) >> 12 ) + ( ( a & HB_INPUT_CH_LAYOUT_DISCRETE_LFE_MASK ) >> 8 ) )
+#define HB_INPUT_CH_LAYOUT_GET_ENCODED_FRONT_COUNT( a )   ( ( a & HB_INPUT_CH_LAYOUT_ENCODED_FRONT_MASK ) >> 4 )
+#define HB_INPUT_CH_LAYOUT_GET_ENCODED_REAR_COUNT( a )   ( ( a & HB_INPUT_CH_LAYOUT_ENCODED_REAR_MASK ) )
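+/* Editor's note (illustrative): a 5.1 source would be described as
+ * HB_INPUT_CH_LAYOUT_3F2R | HB_INPUT_CH_LAYOUT_HAS_LFE, for which the macros
+ * above yield:
+ *   HB_INPUT_CH_LAYOUT_GET_DISCRETE_FRONT_COUNT() == 3
+ *   HB_INPUT_CH_LAYOUT_GET_DISCRETE_REAR_COUNT()  == 2
+ *   HB_INPUT_CH_LAYOUT_GET_DISCRETE_LFE_COUNT()   == 1
+ *   HB_INPUT_CH_LAYOUT_GET_DISCRETE_COUNT()       == 6
+ */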
+typedef struct
+{
+    int chan_map[10][2][8];
+    int inv_chan_map[10][2][8];
+} hb_chan_map_t;
+
+struct hb_audio_config_s
+{
+    /* Output */
+    struct
+    {
+            int track;      /* Output track number */
+            uint32_t codec;  /* Output audio codec.
+                              * When set to a pass-through codec such as
+                              * HB_ACODEC_AC3, bitrate and samplerate are ignored. */
+            int samplerate;         /* Output sample rate (Hz) */
+            int samples_per_frame;  /* Number of samples per frame */
+            int bitrate;            /* Output bitrate (kbps) */
+            float quality;          /* Output quality */
+            float compression_level;  /* Output compression level */
+            int mixdown;            /* The mixdown format to be used for this audio track (see HB_AMIXDOWN_*) */
+            double dynamic_range_compression; /* Amount of DRC that gets applied to this track */
+            double gain;    /* Gain in dB. negative is quieter */
+            char * name;    /* Output track name */
+    } out;
+
+    /* Input */
+    struct
+    {
+        int track;                /* Input track number */
+        PRIVATE uint32_t codec;   /* Input audio codec */
+        PRIVATE uint32_t reg_desc; /* registration descriptor of source */
+        PRIVATE uint32_t stream_type; /* stream type from source stream */
+        PRIVATE uint32_t substream_type; /* substream for multiplexed streams */
+        PRIVATE uint32_t codec_param; /* per-codec config info */
+        PRIVATE uint32_t version; /* Bitstream version */
+        PRIVATE uint32_t mode;    /* Bitstream mode, codec dependent encoding */
+        PRIVATE int samplerate; /* Input sample rate (Hz) */
+        PRIVATE int samples_per_frame; /* Number of samples per frame */
+        PRIVATE int bitrate;    /* Input bitrate (kbps) */
+        PRIVATE int channel_layout; /* channel_layout is the channel layout of this audio this is used to
+                                     * provide a common way of describing the source audio */
+        PRIVATE hb_chan_map_t * channel_map; /* source channel map, set by the audio decoder */
+    } in;
+
+    /* Misc. */
+    union
+    {
+        PRIVATE int ac3;    /* flags.ac3 is only set when the source audio format is HB_ACODEC_AC3 */
+        PRIVATE int dca;    /* flags.dca is only set when the source audio format is HB_ACODEC_DCA */
+    } flags;
+#define AUDIO_F_DOLBY (1 << 31)  /* set if source uses Dolby Surround */
+
+    struct
+    {
+        PRIVATE char description[1024];
+        PRIVATE char simple[1024];
+        PRIVATE char iso639_2[4];
+        PRIVATE uint8_t type; /* normal, visually impaired, directors */
+    } lang;
+};
+
+enum
+{
+    WORK_SYNC_VIDEO = 1,
+    WORK_SYNC_AUDIO,
+    WORK_DECMPEG2,
+    WORK_DECCC608,
+    WORK_DECVOBSUB,
+    WORK_DECSRTSUB,
+    WORK_DECUTF8SUB,
+    WORK_DECTX3GSUB,
+    WORK_DECSSASUB,
+    WORK_ENCVOBSUB,
+    WORK_RENDER,
+    WORK_ENCAVCODEC,
+    WORK_ENCX264,
+    WORK_ENCTHEORA,
+    WORK_DECA52,
+    WORK_DECDCA,
+    WORK_DECAVCODEC,
+    WORK_DECAVCODECV,
+    WORK_DECLPCM,
+    WORK_ENCFAAC,
+    WORK_ENCLAME,
+    WORK_ENCVORBIS,
+    WORK_ENC_CA_AAC,
+    WORK_ENC_CA_HAAC,
+    WORK_ENCAVCODEC_AUDIO,
+    WORK_MUX,
+    WORK_READER
+};
+
+
+struct hb_audio_s
+{
+    int id;
+
+    hb_audio_config_t config;
+
+};
+
+
+struct hb_chapter_s
+{
+    int      index;
+    int      pgcn;
+    int      pgn;
+    int      cell_start;
+    int      cell_end;
+    uint64_t block_start;
+    uint64_t block_end;
+    uint64_t block_count;
+
+    /* Visual-friendly duration */
+    int      hours;
+    int      minutes;
+    int      seconds;
+
+    /* Exact duration (in 1/90000s) */
+    uint64_t duration;
+
+    /* Optional chapter title */
+    char     title[1024];
+};
+
+/*
+ * A subtitle track.
+ * 
+ * Required fields when a demuxer creates a subtitle track are:
+ * > id
+ *     - ID of this track
+ *     - must be unique for all tracks within a single job,
+ *       since it is used to look up the appropriate in-FIFO with GetFifoForId()
+ * > format
+ *     - format of the packets the subtitle decoder work-object sends to sub->fifo_raw
+ *     - for passthru subtitles, is also the format of the final packets sent to sub->fifo_out
+ *     - PICTURESUB for banded 8-bit YAUV pixels; see decvobsub.c documentation for more info
+ *     - TEXTSUB for UTF-8 text marked up with <b>, <i>, or <u>
+ *     - read by the muxers, and by the subtitle burn-in logic in the hb_sync_video work-object
+ * > source
+ *     - used to create the appropriate subtitle decoder work-object in do_job()
+ * > config.dest
+ *     - whether to render the subtitle on the video track (RENDERSUB) or 
+ *       to pass it through its own subtitle track in the output container (PASSTHRUSUB)
+ *     - all newly created non-VOBSUB tracks should default to PASSTHRUSUB
+ *     - all newly created VOBSUB tracks should default to RENDERSUB, for legacy compatibility
+ * > lang
+ *     - user-readable description of the subtitle track
+ *     - may correspond to the language of the track (see the 'iso639_2' field)
+ *     - may correspond to the type of track (see the 'type' field; ex: "Closed Captions")
+ * > iso639_2
+ *     - language code for the subtitle, or "und" if unknown
+ */
+struct hb_subtitle_s
+{
+    int  id;
+    int  track;
+
+    hb_subtitle_config_t config;
+
+    enum subtype { PICTURESUB, TEXTSUB } format;
+    enum subsource { VOBSUB, SRTSUB, CC608SUB, /*unused*/CC708SUB, UTF8SUB, TX3GSUB, SSASUB } source;
+    char lang[1024];
+    char iso639_2[4];
+    uint8_t type; /* Closed Caption, Children's, Directors etc */
+    
+    // Color lookup table for VOB subtitle tracks. Each entry is in YCbCr format.
+    // Must be filled out by the demuxer for VOB subtitle tracks.
+    uint32_t    palette[16];
+    int         width;
+    int         height;
+    
+    // Codec private data for subtitles originating from FFMPEG sources
+    uint8_t *   extradata;
+    int         extradata_size;
+
+    int hits;     /* How many hits/occurrences of this subtitle */
+    int forced_hits; /* How many forced hits in this subtitle */
+
+};
+
+/*
+ * An attachment.
+ * 
+ * These are usually used for attaching embedded fonts to movies containing SSA subtitles.
+ */
+struct hb_attachment_s
+{
+    enum attachtype { FONT_TTF_ATTACH } type;
+    char *  name;
+    char *  data;
+    int     size;
+};
+
+struct hb_metadata_s 
+{
+    char  name[255];
+    char  artist[255];
+    char  composer[255];
+    char  release_date[255];
+    char  comment[1024];
+    char  album[255];
+    char  genre[255];
+    uint32_t coverart_size;
+    uint8_t *coverart;
+};
+
+struct hb_title_s
+{
+    enum { HB_DVD_TYPE, HB_BD_TYPE, HB_STREAM_TYPE, HB_FF_STREAM_TYPE } type;
+    uint32_t    reg_desc;
+    char        path[1024];
+    char        name[1024];
+    int         index;
+    int         playlist;
+    int         vts;
+    int         ttn;
+    int         cell_start;
+    int         cell_end;
+    uint64_t    block_start;
+    uint64_t    block_end;
+    uint64_t    block_count;
+    int         angle_count;
+    void        *opaque_priv;
+
+    /* Visual-friendly duration */
+    int         hours;
+    int         minutes;
+    int         seconds;
+
+    /* Exact duration (in 1/90000s) */
+    uint64_t    duration;
+
+    double      aspect;             // aspect ratio for the title's video
+    double      container_aspect;   // aspect ratio from container (0 if none)
+    int         has_resolution_change;
+    int         width;
+    int         height;
+    int         pixel_aspect_width;
+    int         pixel_aspect_height;
+    int         rate;
+    int         rate_base;
+    int         crop[4];
+    enum { HB_DVD_DEMUXER, HB_MPEG_DEMUXER, HB_NULL_DEMUXER } demuxer;
+    int         detected_interlacing;
+    int         pcr_pid;                /* PCR PID for TS streams */
+    int         video_id;               /* demuxer stream id for video */
+    int         video_codec;            /* worker object id of video codec */
+    uint32_t    video_stream_type;      /* stream type from source stream */
+    int         video_codec_param;      /* codec specific config */
+    char        *video_codec_name;
+    int         video_bitrate;
+    const char  *container_name;
+    int         data_rate;
+
+    hb_metadata_t *metadata;
+
+    hb_list_t * list_chapter;
+    hb_list_t * list_audio;
+    hb_list_t * list_subtitle;
+    hb_list_t * list_attachment;
+
+    /* Job template for this title */
+    hb_job_t  * job;
+
+    uint32_t    flags;
+                // set if video stream doesn't have IDR frames
+#define         HBTF_NO_IDR (1 << 0)
+};
+
+
+
+typedef void hb_error_handler_t( const char *errmsg );
+
+extern void hb_register_error_handler( hb_error_handler_t * handler );
+
+//char * hb_strdup_printf(const char *fmt, ...) HB_WPRINTF(1, 2);
+
+int hb_yuv2rgb(int yuv);
+int hb_rgb2yuv(int rgb);
+
+const char * hb_subsource_name( int source );
+
+// x264 preset/tune/profile helpers
+const char * const * hb_x264_presets(void);
+const char * const * hb_x264_tunes(void);
+const char * const * hb_x264_profiles(void);
+
+/************************************************************************
+ * DVD utils
+ ***********************************************************************/
+int hb_dvd_region(char *device, int *region_mask);
+
+hb_title_t *hb_title_init( char *path, int index );
+void hb_title_close( hb_title_t ** );
+
+
+/*
+ * Holds a packet of data that is moving through the transcoding process.
+ * 
+ * May have metadata associated with it via extra fields
+ * that are conditionally used depending on the type of packet.
+ */
+struct hb_buffer_s
+{
+    int           size;     // size of this packet
+    int           alloc;    // used internally by the packet allocator (hb_buffer_init)
+    uint8_t *     data;     // packet data
+    int           cur;      // used internally by packet lists (hb_list_t)
+
+    /*
+     * Corresponds to the order that this packet was read from the demuxer.
+     * 
+     * It is important that video decoder work-objects pass this value through
+     * from their input packets to the output packets they generate. Otherwise
+     * RENDERSUB subtitles (especially VOB subtitles) will break.
+     * 
+     * Subtitle decoder work-objects that output a renderable subtitle
+     * format (ex: PICTURESUB) must also be careful to pass the sequence number
+     * through for the same reason.
+     */
+    int64_t       sequence;
+
+    enum { AUDIO_BUF, VIDEO_BUF, SUBTITLE_BUF, OTHER_BUF } type;
+
+    int           id;           // ID of the track that the packet comes from
+    int64_t       start;        // Video and subtitle packets: start time of frame/subtitle
+    int64_t       stop;         // Video and subtitle packets: stop time of frame/subtitle
+    int64_t       pcr;
+    uint8_t       discontinuity;
+    int           new_chap;     // Video packets: if non-zero, is the index of the chapter whose boundary was crossed
+
+#define HB_FRAME_IDR    0x01
+#define HB_FRAME_I      0x02
+#define HB_FRAME_AUDIO  0x04
+#define HB_FRAME_P      0x10
+#define HB_FRAME_B      0x20
+#define HB_FRAME_BREF   0x40
+#define HB_FRAME_KEY    0x0F
+#define HB_FRAME_REF    0xF0
+    uint8_t       frametype;
+    uint16_t       flags;
+
+    /* Holds the output PTS from x264, for use by b-frame offsets in muxmp4.c */
+    int64_t     renderOffset;
+
+    // PICTURESUB subtitle packets:
+    //   Location and size of the subpicture.
+    int           x;
+    int           y;
+    int           width;
+    int           height;
+
+    //   A (copy of a) PICTURESUB subtitle packet that needs to be burned into this video packet by the hb_render work-object.
+    //   Subtitles that are simply passed thru are NOT attached to the associated video packets.
+    hb_buffer_t * sub;
+
+    // Packets in a list:
+    //   the next packet in the list
+    hb_buffer_t * next;
+};
+
+typedef void * om_handle_t;
+struct hb_optmedia_func_s
+{
+    om_handle_t * (* init)        ( char * );
+    void          (* close)       ( om_handle_t ** );
+    int           (* title_count) ( om_handle_t * );
+    hb_title_t  * (* title_scan)  ( om_handle_t *, int, uint64_t );
+    int           (* main_feature)( om_handle_t *, hb_list_t * );
+};
+
+typedef struct hb_optmedia_func_s hb_optmedia_func_t;
+
+extern int hb_global_verbosity_level;
+#define HB_LOG_INFO      AV_LOG_INFO
+#define HB_LOG_VERBOSE   AV_LOG_VERBOSE
+#define HB_LOG_ERROR     AV_LOG_ERROR
+
+/* Parses a 'filename' that can be a dvd://..., bd://... or file://... URL, or a
+ * plain file name, and returns the decoded path to the resource and, optionally,
+ * a title.
+ */
+char *url_encode(const char *str);
+char *url_decode(const char *str);
+int url_parse(const char *proto, const char *filename, const char **urlpath, int *title );
+
+typedef hb_buffer_t* (* fragread_t)(void *ctx);
+/* reads size bytes into buffer doing continuous reads if necessary.
+ * calls the read function passed with the context */
+int fragmented_read(void *ctx, fragread_t read, hb_buffer_t **cur_read_buffer, unsigned char *buf, int size);
+
+/* logging */
+#ifdef __GNUC__
+
+#define hb_log_level(level,fmt,...) { av_log(NULL, level,  fmt"\n", ##__VA_ARGS__ ); }
+#define hb_log(fmt,...)  hb_log_level( HB_LOG_INFO, fmt, ##__VA_ARGS__ )
+#define hb_error(fmt,...) hb_log_level( HB_LOG_VERBOSE, fmt, ##__VA_ARGS__ )
+
+#else
+
+#define hb_log(...) { av_log(NULL, AV_LOG_INFO,  "\n"__VA_ARGS__ ); }
+#define hb_error(...){av_log(NULL, AV_LOG_VERBOSE, "\n"__VA_ARGS__ ); }
+
+#endif
+
+/* foo(void) { */
+            /* hb_log( "dvd: Warning, DVD device has no region set" ); */
+/* } */
+
+#endif
+
diff --git a/libavformat/dvdurl_lang.c b/libavformat/dvdurl_lang.c
new file mode 100644
index 0000000000..06d860e9f4
--- /dev/null
+++ b/libavformat/dvdurl_lang.c
@@ -0,0 +1,285 @@
+/* @@--
+ * 
+ * Copyright (C) 2010-2018 Alberto Vigata
+ *       
+ * This file is part of vgtmpeg
+ * 
+ * a Versed Generalist Transcoder
+ * 
+ * vgtmpeg is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2, or (at your option)
+ * any later version.
+ * 
+ * vgtmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ * 
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ */
+
+#include "dvdurl_lang.h"
+#include <string.h>
+#include <ctype.h>
+
+static const iso639_lang_t languages[] =
+{ { "Unknown", "", "", "und" },
+  { "Afar", "", "aa", "aar" },
+  { "Abkhazian", "", "ab", "abk" },
+  { "Afrikaans", "", "af", "afr" },
+  { "Akan", "", "ak", "aka" },
+  { "Albanian", "", "sq", "sqi", "alb" },
+  { "Amharic", "", "am", "amh" },
+  { "Arabic", "", "ar", "ara" },
+  { "Aragonese", "", "an", "arg" },
+  { "Armenian", "", "hy", "hye", "arm" },
+  { "Assamese", "", "as", "asm" },
+  { "Avaric", "", "av", "ava" },
+  { "Avestan", "", "ae", "ave" },
+  { "Aymara", "", "ay", "aym" },
+  { "Azerbaijani", "", "az", "aze" },
+  { "Bashkir", "", "ba", "bak" },
+  { "Bambara", "", "bm", "bam" },
+  { "Basque", "", "eu", "eus", "baq" },
+  { "Belarusian", "", "be", "bel" },
+  { "Bengali", "", "bn", "ben" },
+  { "Bihari", "", "bh", "bih" },
+  { "Bislama", "", "bi", "bis" },
+  { "Bosnian", "", "bs", "bos" },
+  { "Breton", "", "br", "bre" },
+  { "Bulgarian", "", "bg", "bul" },
+  { "Burmese", "", "my", "mya", "bur" },
+  { "Catalan", "", "ca", "cat" },
+  { "Chamorro", "", "ch", "cha" },
+  { "Chechen", "", "ce", "che" },
+  { "Chinese", "", "zh", "zho", "chi" },
+  { "Church Slavic", "", "cu", "chu" },
+  { "Chuvash", "", "cv", "chv" },
+  { "Cornish", "", "kw", "cor" },
+  { "Corsican", "", "co", "cos" },
+  { "Cree", "", "cr", "cre" },
+  { "Czech", "", "cs", "ces", "cze" },
+  { "Danish", "Dansk", "da", "dan" },
+  { "Divehi", "", "dv", "div" },
+  { "Dutch", "Nederlands", "nl", "nld", "dut" },
+  { "Dzongkha", "", "dz", "dzo" },
+  { "English", "English", "en", "eng" },
+  { "Esperanto", "", "eo", "epo" },
+  { "Estonian", "", "et", "est" },
+  { "Ewe", "", "ee", "ewe" },
+  { "Faroese", "", "fo", "fao" },
+  { "Fijian", "", "fj", "fij" },
+  { "Finnish", "Suomi", "fi", "fin" },
+  { "French", "Francais", "fr", "fra", "fre" },
+  { "Western Frisian", "", "fy", "fry" },
+  { "Fulah", "", "ff", "ful" },
+  { "Georgian", "", "ka", "kat", "geo" },
+  { "German", "Deutsch", "de", "deu", "ger" },
+  { "Gaelic (Scots)", "", "gd", "gla" },
+  { "Irish", "", "ga", "gle" },
+  { "Galician", "", "gl", "glg" },
+  { "Manx", "", "gv", "glv" },
+  { "Greek, Modern", "", "el", "ell", "gre" },
+  { "Guarani", "", "gn", "grn" },
+  { "Gujarati", "", "gu", "guj" },
+  { "Haitian", "", "ht", "hat" },
+  { "Hausa", "", "ha", "hau" },
+  { "Hebrew", "", "he", "heb" },
+  { "Herero", "", "hz", "her" },
+  { "Hindi", "", "hi", "hin" },
+  { "Hiri Motu", "", "ho", "hmo" },
+  { "Hungarian", "Magyar", "hu", "hun" },
+  { "Igbo", "", "ig", "ibo" },
+  { "Icelandic", "Islenska", "is", "isl", "ice" },
+  { "Ido", "", "io", "ido" },
+  { "Sichuan Yi", "", "ii", "iii" },
+  { "Inuktitut", "", "iu", "iku" },
+  { "Interlingue", "", "ie", "ile" },
+  { "Interlingua", "", "ia", "ina" },
+  { "Indonesian", "", "id", "ind" },
+  { "Inupiaq", "", "ik", "ipk" },
+  { "Italian", "Italiano", "it", "ita" },
+  { "Javanese", "", "jv", "jav" },
+  { "Japanese", "", "ja", "jpn" },
+  { "Kalaallisut (Greenlandic)", "", "kl", "kal" },
+  { "Kannada", "", "kn", "kan" },
+  { "Kashmiri", "", "ks", "kas" },
+  { "Kanuri", "", "kr", "kau" },
+  { "Kazakh", "", "kk", "kaz" },
+  { "Central Khmer", "", "km", "khm" },
+  { "Kikuyu", "", "ki", "kik" },
+  { "Kinyarwanda", "", "rw", "kin" },
+  { "Kirghiz", "", "ky", "kir" },
+  { "Komi", "", "kv", "kom" },
+  { "Kongo", "", "kg", "kon" },
+  { "Korean", "", "ko", "kor" },
+  { "Kuanyama", "", "kj", "kua" },
+  { "Kurdish", "", "ku", "kur" },
+  { "Lao", "", "lo", "lao" },
+  { "Latin", "", "la", "lat" },
+  { "Latvian", "", "lv", "lav" },
+  { "Limburgan", "", "li", "lim" },
+  { "Lingala", "", "ln", "lin" },
+  { "Lithuanian", "", "lt", "lit" },
+  { "Luxembourgish", "", "lb", "ltz" },
+  { "Luba-Katanga", "", "lu", "lub" },
+  { "Ganda", "", "lg", "lug" },
+  { "Macedonian", "", "mk", "mkd", "mac" },
+  { "Marshallese", "", "mh", "mah" },
+  { "Malayalam", "", "ml", "mal" },
+  { "Maori", "", "mi", "mri", "mao" },
+  { "Marathi", "", "mr", "mar" },
+  { "Malay", "", "ms", "msa", "may" },
+  { "Malagasy", "", "mg", "mlg" },
+  { "Maltese", "", "mt", "mlt" },
+  { "Moldavian", "", "mo", "mol" },
+  { "Mongolian", "", "mn", "mon" },
+  { "Nauru", "", "na", "nau" },
+  { "Navajo", "", "nv", "nav" },
+  { "Ndebele, South", "", "nr", "nbl" },
+  { "Ndebele, North", "", "nd", "nde" },
+  { "Ndonga", "", "ng", "ndo" },
+  { "Nepali", "", "ne", "nep" },
+  { "Norwegian Nynorsk", "", "nn", "nno" },
+  { "Norwegian Bokmål", "", "nb", "nob" },
+  { "Norwegian", "Norsk", "no", "nor" },
+  { "Chichewa; Nyanja", "", "ny", "nya" },
+  { "Occitan (post 1500); Provençal", "", "oc", "oci" },
+  { "Ojibwa", "", "oj", "oji" },
+  { "Oriya", "", "or", "ori" },
+  { "Oromo", "", "om", "orm" },
+  { "Ossetian; Ossetic", "", "os", "oss" },
+  { "Panjabi", "", "pa", "pan" },
+  { "Persian", "", "fa", "fas", "per" },
+  { "Pali", "", "pi", "pli" },
+  { "Polish", "", "pl", "pol" },
+  { "Portuguese", "Portugues", "pt", "por" },
+  { "Pushto", "", "ps", "pus" },
+  { "Quechua", "", "qu", "que" },
+  { "Romansh", "", "rm", "roh" },
+  { "Romanian", "", "ro", "ron", "rum" },
+  { "Rundi", "", "rn", "run" },
+  { "Russian", "", "ru", "rus" },
+  { "Sango", "", "sg", "sag" },
+  { "Sanskrit", "", "sa", "san" },
+  { "Serbian", "", "sr", "srp", "scc" },
+  { "Croatian", "Hrvatski", "hr", "hrv", "scr" },
+  { "Sinhala", "", "si", "sin" },
+  { "Slovak", "", "sk", "slk", "slo" },
+  { "Slovenian", "", "sl", "slv" },
+  { "Northern Sami", "", "se", "sme" },
+  { "Samoan", "", "sm", "smo" },
+  { "Shona", "", "sn", "sna" },
+  { "Sindhi", "", "sd", "snd" },
+  { "Somali", "", "so", "som" },
+  { "Sotho, Southern", "", "st", "sot" },
+  { "Spanish", "Espanol", "es", "spa" },
+  { "Sardinian", "", "sc", "srd" },
+  { "Swati", "", "ss", "ssw" },
+  { "Sundanese", "", "su", "sun" },
+  { "Swahili", "", "sw", "swa" },
+  { "Swedish", "Svenska", "sv", "swe" },
+  { "Tahitian", "", "ty", "tah" },
+  { "Tamil", "", "ta", "tam" },
+  { "Tatar", "", "tt", "tat" },
+  { "Telugu", "", "te", "tel" },
+  { "Tajik", "", "tg", "tgk" },
+  { "Tagalog", "", "tl", "tgl" },
+  { "Thai", "", "th", "tha" },
+  { "Tibetan", "", "bo", "bod", "tib" },
+  { "Tigrinya", "", "ti", "tir" },
+  { "Tonga (Tonga Islands)", "", "to", "ton" },
+  { "Tswana", "", "tn", "tsn" },
+  { "Tsonga", "", "ts", "tso" },
+  { "Turkmen", "", "tk", "tuk" },
+  { "Turkish", "", "tr", "tur" },
+  { "Twi", "", "tw", "twi" },
+  { "Uighur", "", "ug", "uig" },
+  { "Ukrainian", "", "uk", "ukr" },
+  { "Urdu", "", "ur", "urd" },
+  { "Uzbek", "", "uz", "uzb" },
+  { "Venda", "", "ve", "ven" },
+  { "Vietnamese", "", "vi", "vie" },
+  { "Volapük", "", "vo", "vol" },
+  { "Welsh", "", "cy", "cym", "wel" },
+  { "Walloon", "", "wa", "wln" },
+  { "Wolof", "", "wo", "wol" },
+  { "Xhosa", "", "xh", "xho" },
+  { "Yiddish", "", "yi", "yid" },
+  { "Yoruba", "", "yo", "yor" },
+  { "Zhuang", "", "za", "zha" },
+  { "Zulu", "", "zu", "zul" },
+  { NULL, NULL, NULL } };
+
+const iso639_lang_t * lang_for_code( int code )
+{
+    char code_string[2];
+    const iso639_lang_t * lang;
+
+    code_string[0] = tolower( ( code >> 8 ) & 0xFF );
+    code_string[1] = tolower( code & 0xFF );
+
+    for( lang = (const iso639_lang_t*) languages; lang->eng_name; lang++ )
+    {
+        if( !strncmp( lang->iso639_1, code_string, 2 ) )
+        {
+            return lang;
+        }
+    }
+
+    return (const iso639_lang_t*) languages;
+}
+
+const iso639_lang_t * lang_for_code2( const char *code )
+{
+    char code_string[4];
+    const iso639_lang_t * lang;
+
+    code_string[0] = tolower( code[0] );
+    code_string[1] = tolower( code[1] );
+    code_string[2] = tolower( code[2] );
+    code_string[3] = 0;
+
+    for( lang = (const iso639_lang_t*) languages; lang->eng_name; lang++ )
+    {
+        if( !strcmp( lang->iso639_2, code_string ) )
+        {
+            return lang;
+        }
+        if( lang->iso639_2b && !strcmp( lang->iso639_2b, code_string ) )
+        {
+            return lang;
+        }
+    }
+
+    return (const iso639_lang_t*) languages;
+}
+
+int lang_to_code(const iso639_lang_t *lang)
+{
+    int code = 0;
+
+    if (lang)
+        code = (lang->iso639_1[0] << 8) | lang->iso639_1[1];
+
+    return code;
+}
+
+const iso639_lang_t * lang_for_english( const char * english )
+{
+    const iso639_lang_t * lang;
+
+    for( lang = (const iso639_lang_t*) languages; lang->eng_name; lang++ )
+    {
+        if( !strcmp( lang->eng_name, english ) )
+        {
+            return lang;
+        }
+    }
+
+    return (const iso639_lang_t*) languages;
+}
+
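A short usage sketch of the lookup helpers above; all three fall back to the first table
entry ("Unknown" / "und") rather than returning NULL when nothing matches:

    const iso639_lang_t *l;

    l = lang_for_code2("fre");            /* French, matched through its ISO-639-2/B code */
    l = lang_for_code(('d' << 8) | 'e');  /* German, matched through its ISO-639-1 code   */
    l = lang_for_english("Swedish");      /* exact English-name match: "sv" / "swe"       */
    /* lang_to_code(l) packs the ISO-639-1 letters back into an int: ('s' << 8) | 'v' */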
diff --git a/libavformat/dvdurl_lang.h b/libavformat/dvdurl_lang.h
new file mode 100644
index 0000000000..3cfa6e3602
--- /dev/null
+++ b/libavformat/dvdurl_lang.h
@@ -0,0 +1,52 @@
+/* @@--
+ * 
+ * Copyright (C) 2010-2018 Alberto Vigata
+ *       
+ * This file is part of vgtmpeg
+ * 
+ * a Versed Generalist Transcoder
+ * 
+ * vgtmpeg is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2, or (at your option)
+ * any later version.
+ * 
+ * vgtmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ * 
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ */
+#ifndef HB_LANG_H
+#define HB_LANG_H
+
+typedef struct iso639_lang_t
+{
+    const char * eng_name;        /* Description in English */
+    const char * native_name;     /* Description in native language */
+    const char * iso639_1;        /* ISO-639-1 (2-character) code */
+    const char * iso639_2;        /* ISO-639-2/T (3-character) code */
+    const char * iso639_2b;       /* ISO-639-2/B code (if different from the above) */
+
+} iso639_lang_t;
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+/* find language associated with ISO-639-1 language code */
+const iso639_lang_t * lang_for_code( int code );
+
+/* find language associated with ISO-639-2 language code */
+const iso639_lang_t * lang_for_code2( const char *code2 );
+
+/* ISO-639-1 code for language */
+int lang_to_code(const iso639_lang_t *lang);
+
+const iso639_lang_t * lang_for_english( const char * english );
+#ifdef __cplusplus
+}
+#endif
+#endif
diff --git a/libavformat/mpeg.c b/libavformat/mpeg.c
index c147fa72ed..27f8056466 100644
--- a/libavformat/mpeg.c
+++ b/libavformat/mpeg.c
@@ -24,6 +24,12 @@
 #include "internal.h"
 #include "mpeg.h"
 
+/* -- vgtmpeg */
+#include "dvdurl.h"
+#include "libavutil/dict.h"
+/* -- vgtmpeg */
+
+
 #if CONFIG_VOBSUB_DEMUXER
 # include "subtitles.h"
 # include "libavutil/bprint.h"
@@ -137,6 +143,202 @@ typedef struct MpegDemuxContext {
 #endif
 } MpegDemuxContext;
 
+/*--vgtmpeg start*/
+/* DVDURL support routines */
+static dvdurl_t *get_dvdurl_ctx(AVFormatContext *s) {
+	URLContext *uc = ffio_geturlcontext( s->pb );
+	dvdurl_t * du = uc->priv_data;
+	if(strcmp(uc->prot->name,"dvd")){
+	    return 0;
+	}
+
+	if(du) {
+		return strstr(du->class->class_name, "DVDURL")==NULL  ? 0 : du;
+	} else {
+		return 0;
+	}
+}
+
+static int is_dvdurl(AVFormatContext *s) {
+	return get_dvdurl_ctx(s) ? 1 : 0;
+}
+
+/* creates a new AVStream (via avformat_new_stream()) for a given startcode and es_type.
+ * es_type is either STREAM_TYPE_PRIVATE_DATA or a specific stream type; a stream is
+ * only created for the cases that set 'add' below, i.e. MPEG-1/2 video es_types and
+ * recognised 0x1bd private-stream-1 audio substreams.
+ */
+static AVStream * dvd_add_stream(AVFormatContext *s, int es_type, int startcode, int dvdaudio_substream_type, int av_id) {
+    int codec_id, type, add = 0;
+    MpegDemuxContext *m = s->priv_data;
+    AVStream *st = NULL;
+
+    //es_type = m->psm_es_type[startcode & 0xff];
+    if (es_type > 0 && es_type != STREAM_TYPE_PRIVATE_DATA) {
+        if (es_type == STREAM_TYPE_VIDEO_MPEG1) {
+            codec_id = AV_CODEC_ID_MPEG2VIDEO;
+            type = AVMEDIA_TYPE_VIDEO;
+            add = 1;
+        } else if (es_type == STREAM_TYPE_VIDEO_MPEG2) {
+            codec_id = AV_CODEC_ID_MPEG2VIDEO;
+            type = AVMEDIA_TYPE_VIDEO;
+            add = 1;
+        } else if (es_type == STREAM_TYPE_AUDIO_MPEG1 || es_type == STREAM_TYPE_AUDIO_MPEG2) {
+            codec_id = AV_CODEC_ID_MP3;
+            type = AVMEDIA_TYPE_AUDIO;
+        } else if (es_type == STREAM_TYPE_AUDIO_AAC) {
+            codec_id = AV_CODEC_ID_AAC;
+            type = AVMEDIA_TYPE_AUDIO;
+        } else if (es_type == STREAM_TYPE_VIDEO_MPEG4) {
+            codec_id = AV_CODEC_ID_MPEG4;
+            type = AVMEDIA_TYPE_VIDEO;
+        } else if (es_type == STREAM_TYPE_VIDEO_H264) {
+            codec_id = AV_CODEC_ID_H264;
+            type = AVMEDIA_TYPE_VIDEO;
+        } else if (es_type == STREAM_TYPE_AUDIO_AC3) {
+            codec_id = AV_CODEC_ID_AC3;
+            type = AVMEDIA_TYPE_AUDIO;
+        }
+    } else if (startcode >= 0x1e0 && startcode <= 0x1ef) {
+        static const unsigned char avs_seqh[4] = { 0, 0, 1, 0xb0 };
+        unsigned char buf[8];
+        avio_read(s->pb, buf, 8);
+        avio_seek(s->pb, -8, SEEK_CUR);
+        if (!memcmp(buf, avs_seqh, 4) && (buf[6] != 0 || buf[7] != 1))
+            codec_id = AV_CODEC_ID_CAVS;
+        type = AVMEDIA_TYPE_VIDEO;
+    } else if (startcode >= 0x1c0 && startcode <= 0x1df) {
+        type = AVMEDIA_TYPE_AUDIO;
+        codec_id = m->sofdec > 0 ? AV_CODEC_ID_ADPCM_ADX : AV_CODEC_ID_MP2;
+    } else if (startcode >= 0x80 && startcode <= 0x87) {
+        type = AVMEDIA_TYPE_AUDIO;
+        codec_id = AV_CODEC_ID_AC3;
+    } else if ((startcode >= 0x88 && startcode <= 0x8f) || (startcode >= 0x98 && startcode <= 0x9f)) {
+        /* 0x90 - 0x97 is reserved for SDDS in DVD specs */
+        type = AVMEDIA_TYPE_AUDIO;
+        codec_id = AV_CODEC_ID_DTS;
+    } else if (startcode >= 0xa0 && startcode <= 0xaf) {
+        type = AVMEDIA_TYPE_AUDIO;
+        /* 16 bit form will be handled as AV_CODEC_ID_PCM_S16BE */
+        codec_id = AV_CODEC_ID_PCM_DVD;
+    } else if (startcode >= 0xb0 && startcode <= 0xbf) {
+        type = AVMEDIA_TYPE_AUDIO;
+        codec_id = AV_CODEC_ID_TRUEHD;
+    } else if (startcode >= 0xc0 && startcode <= 0xcf) {
+        /* Used for both AC-3 and E-AC-3 in EVOB files */
+        type = AVMEDIA_TYPE_AUDIO;
+        codec_id = AV_CODEC_ID_AC3;
+    } else if (startcode >= 0x20 && startcode <= 0x3f) {
+        type = AVMEDIA_TYPE_SUBTITLE;
+        codec_id = AV_CODEC_ID_DVD_SUBTITLE;
+    } else if (startcode >= 0xfd55 && startcode <= 0xfd5f) {
+        type = AVMEDIA_TYPE_VIDEO;
+        codec_id = AV_CODEC_ID_VC1;
+    } else if (startcode == 0x1bd) {
+        // check dvd audio substream type
+        type = AVMEDIA_TYPE_AUDIO;
+        switch (dvdaudio_substream_type & 0xe0) {
+        case 0xa0:
+            codec_id = AV_CODEC_ID_PCM_DVD;
+            add = 1;
+            break;
+        case 0x80:
+            if ((dvdaudio_substream_type & 0xf8) == 0x88)
+                codec_id = AV_CODEC_ID_DTS;
+            else
+                codec_id = AV_CODEC_ID_AC3;
+            add = 1;
+            break;
+        default:
+            av_log(s, AV_LOG_ERROR, "Unknown 0x1bd sub-stream\n");
+            break;
+        }
+    }
+
+    if (add) {
+        st = avformat_new_stream(s, NULL);
+
+        if (st) {
+            st->id = av_id;
+            avpriv_set_pts_info(st, 33, 1, 90000);
+            st->codecpar->codec_type = type;
+            st->codecpar->codec_id = codec_id;
+            st->request_probe = 0;
+            if (codec_id != AV_CODEC_ID_PCM_S16BE)
+                st->need_parsing = AVSTREAM_PARSE_FULL;
+        }
+    }
+
+    return st;
+}
+
+#define DVD_AVID(title,startcode) (((title)<<16)|((startcode)&0xff))
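+/* packs the title index into the upper 16 bits and the low byte of the startcode
+ * into the lower bits, e.g. DVD_AVID(3, 0x1e0) == 0x300e0 */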
+static uint32_t get_current_privateid(AVFormatContext *s, uint32_t startcode) {
+    dvdurl_t *ctx = get_dvdurl_ctx(s);
+    if(ctx) {
+        return DVD_AVID(ctx->selected_title_idx, startcode);
+    } else {
+        return startcode;
+    }
+}
+
+#define DVDAUDIO_STARTCODE_FROM_HB_ID(x) ((x)>>8)
+/* if the source is a dvd:// url, pre-create the program, streams and chapters of the selected title */
+static void dvd_create_streams(AVFormatContext *s) {
+	dvdurl_t *ctx = get_dvdurl_ctx(s);
+	AVStream *st;
+    int64_t start;
+	if(!ctx) return;
+	
+	/* add all the streams contained in the selected title */
+	{
+	    hb_title_t *title = ctx->selected_title;
+		uint64_t duration = title->duration;
+		int j;
+
+        /* create new program entry in AVFormatContext */
+        av_new_program(s, title->index);
+
+		/* add video stream */
+		st = dvd_add_stream(s, STREAM_TYPE_VIDEO_MPEG2, title->video_id, 0, DVD_AVID(title->index,title->video_id) ); /* mpeg 2 stream type works for mpeg1/2 */
+		if(st) {
+			st->start_time = 0;
+		    st->duration = duration; // the 90khz base was set in dvd_add_stream
+		    st->codecpar->width = title->width;
+		    st->codecpar->height = title->height;
+		    av_program_add_stream_index(s, title->index, st->index);
+		}
+
+		/* add audio streams */
+		for( j=0; j<hb_list_count(title->list_audio); j++ ) {
+			hb_audio_t *as = hb_list_item(title->list_audio,j);
+
+			st = dvd_add_stream(s, 0, PRIVATE_STREAM_1, DVDAUDIO_STARTCODE_FROM_HB_ID(as->id), DVD_AVID(title->index,DVDAUDIO_STARTCODE_FROM_HB_ID(as->id)) ); /* audio substreams are carried in private stream 1 */
+			if(st) {
+			    av_dict_set(&st->metadata, "language", as->config.lang.iso639_2, 0);
+			    av_dict_set(&st->metadata, "language-iso639_2", as->config.lang.iso639_2, 0);
+			    av_dict_set(&st->metadata, "language-simple", as->config.lang.simple, 0);
+			    av_dict_set(&st->metadata, "language-description", as->config.lang.description, 0);
+			    st->start_time = 0;
+			    st->duration = duration; // the 90khz base was set in dvd_add_stream
+			    av_program_add_stream_index(s, title->index, st->index);
+			}
+		}
+
+		/* add subtitle streams */
+		/* add chapters */
+		start = 0;
+		for( j=0; j<hb_list_count(title->list_chapter); j++ ) {
+            int64_t end;
+			hb_chapter_t *c = hb_list_item(title->list_chapter,j);
+
+			end = start + c->duration;
+			avpriv_new_chapter(s,j, (AVRational){1,90000}, start, end, NULL );
+			start = end;
+		}
+	}
+}
+/*--vgtmpeg end*/
+
 static int mpegps_read_header(AVFormatContext *s)
 {
     MpegDemuxContext *m = s->priv_data;
@@ -154,6 +356,13 @@ static int mpegps_read_header(AVFormatContext *s)
     } else
        avio_seek(s->pb, last_pos, SEEK_SET);
 
+/* -- vgtmpeg */
+    if(is_dvdurl(s)) {
+        av_dict_set(&s->metadata, "source_type", "dvd", 0);
+    }
+    dvd_create_streams(s);
+/* -- vgtmpeg */
+
     /* no need to do more */
     return 0;
 }
@@ -520,7 +729,9 @@ redo:
     /* now find stream */
     for (i = 0; i < s->nb_streams; i++) {
         st = s->streams[i];
-        if (st->id == startcode)
+/* -- vgtmpeg */
+        if (st->id == get_current_privateid(s,startcode))
+/* -- vgtmpeg */
             goto found;
     }
 
@@ -619,7 +830,9 @@ skip:
     st = avformat_new_stream(s, NULL);
     if (!st)
         goto skip;
-    st->id                = startcode;
+/* -- vgtmpeg */
+    st->id = get_current_privateid(s,startcode);
+/* -- vgtmpeg */
     st->codecpar->codec_type = type;
     st->codecpar->codec_id   = codec_id;
     if (   st->codecpar->codec_id == AV_CODEC_ID_PCM_MULAW
@@ -674,7 +887,9 @@ static int64_t mpegps_read_dts(AVFormatContext *s, int stream_index,
                 av_log(s, AV_LOG_DEBUG, "none (ret=%d)\n", len);
             return AV_NOPTS_VALUE;
         }
-        if (startcode == s->streams[stream_index]->id &&
+/* -- vgtmpeg */
+        if ( get_current_privateid(s,startcode) == s->streams[stream_index]->id &&
+/* -- vgtmpeg */
             dts != AV_NOPTS_VALUE) {
             break;
         }
diff --git a/libavformat/mpegts.c b/libavformat/mpegts.c
index e7bbf3e488..ee480b2ce1 100644
--- a/libavformat/mpegts.c
+++ b/libavformat/mpegts.c
@@ -256,6 +256,88 @@ typedef struct PESContext {
 
 extern AVInputFormat ff_mpegts_demuxer;
 
+/* >> vgtmpeg */
+#include "bdurl.h"
+
+
+/* BDURL support routines */
+static bdurl_t *get_bdurl_ctx(AVFormatContext *s) {
+	URLContext *uc = ffio_geturlcontext( s->pb );
+	bdurl_t * du = uc->priv_data;
+	if(strcmp(uc->prot->name,"bd")){
+	    return 0;
+	}
+
+	if(du) {
+		return strstr(du->class->class_name, "BDURL")==NULL  ? 0 : du;
+	} else {
+		return 0;
+	}
+}
+
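+/* if the TS is being read from a bd:// url, tag a newly created audio AVStream with
+ * the language metadata and duration recorded for the matching title audio track */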
+static void bdurl_stream_init(MpegTSContext *ts, AVStream *st) {
+	bdurl_t *bdurl = get_bdurl_ctx(ts->stream);
+	if(bdurl && bdurl->selected_title ) {
+		/* add audio streams */
+	    hb_title_t *title = bdurl->selected_title;
+		uint64_t duration = title->duration;
+		int j;
+		for( j=0; j<hb_list_count(title->list_audio); j++ ) {
+			hb_audio_t *as = hb_list_item(title->list_audio,j);
+			//av_log(NULL,AV_LOG_INFO,"bdurl_stream_init: hb_audio_id %X   st_id %X\n", as->id, st->id);
+			// skip unless as->id is a Blu-ray audio id (0x11xx) equal to this stream's id
+			if((((as->id)&0xff00) != 0x1100) || ((as->id)!=(st->id)))
+				continue;
+
+			av_dict_set(&st->metadata, "language", as->config.lang.iso639_2, 0);
+			av_dict_set(&st->metadata, "language-iso639_2", as->config.lang.iso639_2, 0);
+			av_dict_set(&st->metadata, "language-simple", as->config.lang.simple, 0);
+			av_dict_set(&st->metadata, "language-description", as->config.lang.description, 0);
+			st->start_time = 0;
+			st->duration = duration; // duration is in the 90 kHz time base used by the TS demuxer
+			break;
+		}
+	}
+}
+
+static void bdurl_program_init(MpegTSContext *ts) {
+	bdurl_t *bdurl = get_bdurl_ctx(ts->stream);
+    if(bdurl) {
+        av_dict_set(&ts->stream->metadata, "source_type", "bluray", 0);
+    }
+	if(bdurl && bdurl->selected_title ) {
+	    hb_title_t *title = bdurl->selected_title;
+		int j=0;
+
+		/* add chapters */
+		int64_t start = 0;
+		for( j=0; j<hb_list_count(title->list_chapter); j++ ) {
+			hb_chapter_t *c = hb_list_item(title->list_chapter,j);
+
+			int64_t end = start + c->duration;
+			avpriv_new_chapter(ts->stream,j, (AVRational){1,90000}, start, end, NULL );
+			start = end;
+		}
+    }
+
+}
+
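+/* when reading from a bd:// url, collapse every TS service id onto the selected
+ * title's index so that all streams end up attached to a single AVProgram */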
+static unsigned int get_ff_program_id_from_sid(MpegTSContext *ts, unsigned int sid )
+{
+	bdurl_t *bdurl = get_bdurl_ctx(ts->stream);
+	if(bdurl && bdurl->selected_title ) {
+		return bdurl->selected_title->index;
+	} else {
+		return sid;
+	}
+}
+/* << vgtmpeg */
+
 static struct Program * get_program(MpegTSContext *ts, unsigned int programid)
 {
     int i;
@@ -2362,7 +2444,10 @@ static void pmt_cb(MpegTSFilter *filter, const uint8_t *section, int section_len
 
         add_pid_to_pmt(ts, h->id, pid);
 
-        av_program_add_stream_index(ts->stream, h->id, st->index);
+/* --vgtmpeg */
+        bdurl_stream_init(ts, st);
+        av_program_add_stream_index(ts->stream, get_ff_program_id_from_sid(ts, h->id), st->index);
+/* --vgtmpeg */
 
         desc_list_len = get16(&p, p_end);
         if (desc_list_len < 0)
@@ -2379,8 +2464,10 @@ static void pmt_cb(MpegTSFilter *filter, const uint8_t *section, int section_len
 
             if (pes && prog_reg_desc == AV_RL32("HDMV") &&
                 stream_type == 0x83 && pes->sub_st) {
-                av_program_add_stream_index(ts->stream, h->id,
-                                            pes->sub_st->index);
+/* --vgtmpeg */
+                av_program_add_stream_index(ts->stream, get_ff_program_id_from_sid(ts, h->id), 
+						pes->sub_st->index);
+/* --vgtmpeg */
                 pes->sub_st->codecpar->codec_tag = st->codecpar->codec_tag;
             }
         }
@@ -2439,7 +2526,10 @@ static void pat_cb(MpegTSFilter *filter, const uint8_t *section, int section_len
             /* NIT info */
         } else {
             MpegTSFilter *fil = ts->pids[pmt_pid];
-            program = av_new_program(ts->stream, sid);
+/* --vgtmpeg */
+            program = av_new_program(ts->stream, get_ff_program_id_from_sid(ts,sid));
+            bdurl_program_init(ts);
+/* --vgtmpeg */
             if (program) {
                 program->program_num = sid;
                 program->pmt_pid = pmt_pid;
@@ -2535,8 +2625,10 @@ static void sdt_cb(MpegTSFilter *filter, const uint8_t *section, int section_len
                     break;
                 name = getstr8(&p, p_end);
                 if (name) {
-                    AVProgram *program = av_new_program(ts->stream, sid);
-                    if (program) {
+/* --vgtmpeg */
+                    AVProgram *program = av_new_program(ts->stream, get_ff_program_id_from_sid(ts, sid));
+/* --vgtmpeg */
+                    if(program) {
                         av_dict_set(&program->metadata, "service_name", name, 0);
                         av_dict_set(&program->metadata, "service_provider",
                                     provider_name, 0);
diff --git a/libavformat/optmedia.h b/libavformat/optmedia.h
new file mode 100644
index 0000000000..0cafb9b30f
--- /dev/null
+++ b/libavformat/optmedia.h
@@ -0,0 +1,53 @@
+/* @@--
+ * 
+ * Copyright (C) 2010-2018 Alberto Vigata
+ *       
+ * This file is part of vgtmpeg
+ * 
+ * a Versed Generalist Transcoder
+ * 
+ * vgtmpeg is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2, or (at your option)
+ * any later version.
+ * 
+ * vgtmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ * 
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ */
+
+#ifndef OPTMEDIA_H
+#define OPTMEDIA_H
+
+/* public functions of optical media protocols */
+struct ff_input_func_s
+{
+	int (* parse_file)(void *ctx, char *filename);
+	void (* select_default_program)(int programid);
+};
+
+typedef struct ff_input_func_s ff_input_func_t;
+
+/* returns 0 if path is not an optical media supported
+ * if its an optical media path, calls parse_file with the right url for the optical media
+ *
+ * It will also call select_default_program
+ * */
+int parse_optmedia_path( void *ctx, const char *path, ff_input_func_t *ff_input_func );
+
+#ifdef __GNUC__
+#define BDNOT_USED __attribute__ ((unused))
+#else
+#define BDNOT_USED
+#endif
+
+#define OPTMEDIA_NOT_USED BDNOT_USED
+
+
+#endif //!OPTMEDIA_H
+
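For illustration, a front-end might wire up the two callbacks declared above roughly as
in the sketch below; the callback bodies and the names my_parse_file,
my_select_default_program, my_ctx and try_optical are made up for the example, and only
the prototypes and the zero-means-not-optical-media return value come from the header.

    static int my_parse_file(void *ctx, char *filename)
    {
        /* receives the url rewritten for the optical media (see the header comment
         * above) and would open it as a regular input */
        return 0;
    }

    static void my_select_default_program(int programid)
    {
        /* remembers which title/program should be mapped by default */
    }

    static int try_optical(void *my_ctx, const char *path)
    {
        ff_input_func_t funcs = { my_parse_file, my_select_default_program };

        /* non-zero only when 'path' was recognised as optical media, in which case
         * parse_file() has already been called with the rewritten url */
        return parse_optmedia_path(my_ctx, path, &funcs);
    }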
diff --git a/libavformat/protocols.c b/libavformat/protocols.c
index ad95659795..5d01e2e811 100644
--- a/libavformat/protocols.c
+++ b/libavformat/protocols.c
@@ -69,6 +69,11 @@ extern const URLProtocol ff_libsrt_protocol;
 extern const URLProtocol ff_libssh_protocol;
 extern const URLProtocol ff_libsmbclient_protocol;
 
+/*-- vgtmpeg --*/
+extern const  URLProtocol ff_bd_protocol;
+extern const  URLProtocol ff_dvd_protocol;
+/*-- vgtmpeg --*/
+
 #include "libavformat/protocol_list.c"
 
 const AVClass *ff_urlcontext_child_class_next(const AVClass *prev)
diff --git a/libavformat/utils.c b/libavformat/utils.c
index d113a16c80..4acd9f2360 100644
--- a/libavformat/utils.c
+++ b/libavformat/utils.c
@@ -2819,6 +2819,11 @@ static void estimate_timings_from_pts(AVFormatContext *ic, int64_t old_offset)
     }
 
     av_opt_set(ic, "skip_changes", "1", AV_OPT_SEARCH_CHILDREN);
+
+    /* --vgtmpeg */
+    if (!has_duration(ic)) {
+    /* --vgtmpeg */
+
     /* estimate the end time (duration) */
     /* XXX: may need to support wrapping */
     filesize = ic->pb ? avio_size(ic->pb) : 0;
@@ -2885,6 +2890,9 @@ static void estimate_timings_from_pts(AVFormatContext *ic, int64_t old_offset)
     } while (!is_end &&
              offset &&
              ++retry <= DURATION_MAX_RETRY);
+    /* --vgtmpeg */
+    }
+    /* --vgtmpeg */
 
     av_opt_set(ic, "skip_changes", "0", AV_OPT_SEARCH_CHILDREN);
 
-- 
2.14.5



More information about the ffmpeg-user mailing list