Video Encoding with ffmpeg

Motivations

The CM-TES project gathers various activities around private social network use in the context of massive online learning. MOOCs have driven an important evolution in the formats and uses of video in training; this evolution prompted us to think about how to offer project members and contributors an efficient and simple framework for video encoding that best fits their needs.

There are many tutorials, blogs and forum answers describing various uses of ffmpeg better than this one; this article is intended simply to keep a trace of our work and share our experience. As some tricks were not found elsewhere, I hope this article may help some visitors of my blog.


Table of contents

  1. A "chained scripts" framework to encode videos
  2. Some other problems ffmpeg can solve for you

Downloads

  1. low level script with ffmpeg command
  2. user defined script calling low level script
  3. low level script with ffmpeg commands for sar !=1
  4. resizing screencast to standard 4:3 or 16:9 values

A "chained scripts" framework to encode videos

The proposed solution described in this article shows two chained scripts to enable complex encoding operations on pedagogical video files.

If possible, make it simple and fast (you probably don't need to read this article!)

ffmpeg has been developed and maintained for years by a large team of advanced programmers. The number of options for the ffmpeg command is really huge, but fortunately those smart devs have provided default values for most parameters that are sufficient for "ordinary" operations. Do not hesitate to use the defaults if you have few or no constraints on what you want to produce. If you need clear and precise information, go to the official documentation.

To encode an avi video in mp4 format the command line is quite straightforward :

ffmpeg -i myrawfile.avi myconvertedfile.mp4

ffmpeg will manage to handle your avi (if it follows the standards) and produce a "standard" mp4 (with such a command you do not choose codec, bitrate, size or any other parameter). Of course you may want to choose the starting time and duration of the encoded video, for example to get rid of useless images at the beginning and end of your film; just add two parameters:

ffmpeg -ss 00:04:00.00 -i myrawfile.avi -t 00:13:10.85 myconvertedfile.mp4

ffmpeg will start encoding after 4 minutes and convert 13 minutes, 10 seconds and 85 hundredths of the original avi into an mp4. As you may know, mp4 is a container that can hold various codecs; you may then want to choose the codecs yourself, which brings two more parameters:

ffmpeg -ss 00:04:00.00 -i myrawfile.avi -t 00:13:10.85 -c:v thisVideoCodec -c:a thisAudioCodec myconvertedfile.mp4

A list of options is available on the official ffmpeg documentation and the full list with tool options is here.

The constraints that led us to build two chained scripts

As described in the previous chapter, even if encoding and transcoding can be done with simple commands, one can easily feel the need for more, and more, and more options... The number of parameters dramatically increases when you must do complicated operations on your raw videos: add overlays (institution logos for example), resample (whose quality is closely related to the algorithm used), change size (for example display old 4:3 in 16:9), change bitrate, change codec, etc. In this project we had to deal with two types of constraints:

  1. on the one hand some low level requirements
    • being able to handle various types of video source formats;
    • being able to produce various types of controlled output i.e. bitrate, type, ratio;
    • being able to add logos of institutions involved in the project in a user chosen order and position
  2. on the other hand the need to facilitate users and contributors activity
    • being able to prepare "batch executions"
    • being able to choose start time, duration, size and other parameters linked with the raw video content
    • being able to choose among predefined low level parameters

To address those questions, it was decided to write a first script that handles the low level parameters derived from editorial choices and creates variables meant to receive the user's choices. At first users were asked to call the script from the command line, and it worked!... But run after run, the number of "user requested parameters" grew with experience, to the point where typing mistakes and the complexity of the command line prevented good practice. It was then decided to produce a second script gathering all the "user side" variables and options. We nevertheless decided to keep a distinction between "management side" parameters, which control the framework of the produced videos, and "user side" parameters, which depend on the user's wishes (inside the project framework if possible) and on the properties of the user's raw video.

If you are in a hurry and want to use those scripts ASAP :

  1. check the parameters in low level script
  2. fill in your parameters in the user defined script
  3. check that your presets, logos and other dependencies are in the right path
  4. execute the user defined script
  5. enjoy your transcoded videos

The first script bearing low level choices

Encoding a video with ffmpeg without accepting the defaults is rather tricky because of the number of interactions between parameters, which may lead to impossible configurations. It is very frustrating to carefully select advanced parameters only to get an error message (which most of the time is not clear enough to trace the problem) and end up with no encoding or an empty file... To minimize the risk of such situations for our users, we tried to settle some parameters and restrict the possible choices for others. Building a script was also a good way to produce, with one command, several video files so that the result is viewable in any browser. At the beginning we encoded three files: ogg (ogv), mp4 and webm, but the theora codec is becoming outdated so we now only produce mp4 and webm.

First of all the script carries some fixed parameters:

nbthreads=0
fadeduration=3
freq=44100
ffmpeg=/usr/bin/ffmpeg
# oclOptions=platform_idx=0:device_idx=0

nbthreads set to 0 lets ffmpeg use multiple cores and threads automatically, fadeduration is the fading duration of the title or watermark encoded as an overlay at the beginning of the film, freq is the audio frequency used, ffmpeg is the path of the program in case it has been compiled by hand and placed in a non standard place, and oclOptions is used only if your ffmpeg has been compiled for opencl.

Fading timing is computed before constructing the filters, and a date is also computed according to the requested header-date format.

  endfade=$(( (( laptitle - 1 )) * ${fps%.*} ))
  startfade=$(( endfade - fadeduration * ${fps%.*} ))
  fade=" $startfade:$endfade "
  thisdate=`date --rfc-3339=seconds` 
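As a concrete check of this arithmetic, here is a standalone run with hypothetical values (fps=25.0, fadeduration=3, and a title displayed for laptitle=10 seconds):

```shell
# Hypothetical values: 25 fps, 3 s fade, title shown for 10 s
fps=25.0
fadeduration=3
laptitle=10
# ${fps%.*} strips the decimal part: 25.0 -> 25
endfade=$(( (( laptitle - 1 )) * ${fps%.*} ))        # (10-1)*25 = 225 frames
startfade=$(( endfade - fadeduration * ${fps%.*} ))  # 225 - 3*25 = 150 frames
echo "fade=$startfade:$endfade"
```

The fade boundaries are therefore expressed in frames, not seconds, which is why the frame rate enters the computation.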

An important point was to choose a set of sizes for our videos to keep a homogeneous look across our productions; it was necessary to set up a series of choices to handle all the possible cases. In each case a filter is computed from the user provided parameters:

  case $aspect in
   16:9 ) 
    case $type_out in
     ipod320 )
     ;;
     ipod480 )
     ;;
     ens384 )
     ;;
     ipod640 )
     ;;
     iphone4 )
     ;;
     ipad1024 )
     ;;
      720p )
     ;;
     1080p )
     ;;
    * )
      echo  $type_out is not recognized as type out
     ;;
    esac
   ;;
   16:9to4:3 )
     case $type_out in
     ipod320 )
     ;;
     ipod480 )
     ;;
     ens384 )
     ;;
     ipod640 )
     ;;
     iphone4 )
     ;;
     ipad1024 )
     ;;
     PAL )
      ;;
     qPAL )
     ;;
     qPAL+ )
     ;;
     * ) echo  $type_out not recognized as type out
     ;;
    esac
   ;;
   4:3 )      
     case $type_out in
       ipod320 )
     ;;
     ipod480 )
     ;;
     ens384 )
     ;;
     ipod640 )
     ;;
     iphone4 )
     ;;
     ipad1024 )
     ;;
     PAL )
      ;;
     qPAL )
      ;;
     qPAL+ )
     ;;
     * ) echo  $type_out not recognized as type out
     ;;
    esac
   ;;
   * ) echo  $aspect is not treated
   ;;
  esac

Each filter is built to encode an overlay with decorations and logos of the institutions and faculties participating in the project. Five png images with transparent background are used (.png gives a better rendering but the image2 option must then be set in the ffmpeg command):

  • a watermark decoration on the bottom-left corner of the image;
  • the institution's logo on the bottom-right of the image;
  • the faculty logo to the left of the institution's logo;
  • the other institution's logo to the left of our faculty logo;
  • the other faculty's logo to the left of its mother institution.

We describe here one example of a filter; go to [the script] to see all combinations. This example encodes a 16:9 video inside a 4:3 frame with an output size of 384 x 288.

filter="[0:0]pad=iw:iw*3/4:0:(iw*3/4-ih)/2[a];[a]scale=384:288[b];[b][1:v]overlay=0:main_h-overlay_h[c];[c][2:v]overlay=main_w-overlay_w-10:main_h-overlay_h-3[d];[d][3:v]overlay=main_w-3*overlay_w-20:main_h-overlay_h-3[e];[e][4:v]overlay=30:main_h-overlay_h-3[f];[5:v]fade=out:""$fade"":alpha=1[wm];[f][wm]overlay=(main_w-overlay_w)/2:5:enable=between(t\,0\,5)"

The first step, [0:0]pad=iw:iw*3/4:0:(iw*3/4-ih)/2[a], is a padding of the 16:9 image; parameters are separated by ":" and values are derived from internal ffmpeg parameters (see the official ffmpeg documentation): [input stream]width:height:Xposition:Yposition[output stream]. Each step of the filter is separated from the next by ";". The second step scales the image. The third step adds an overlay at a given position: [input stream][input overlay]overlay=Xposition:Yposition[output stream]. The four logos are added as overlays in the same way (if needed, one or more overlays can be removed).
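To make the padding arithmetic concrete, here is a quick standalone check with a hypothetical 640x360 (16:9) source: iw*3/4 gives the 4:3 frame height and (iw*3/4-ih)/2 centers the image vertically.

```shell
# Hypothetical 16:9 source: 640x360
iw=640
ih=360
padded_h=$(( iw * 3 / 4 ))           # 640*3/4 = 480 -> 640x480 is 4:3
y_offset=$(( (padded_h - ih) / 2 ))  # (480-360)/2 = 60 pixels of black above and below
echo "pad=${iw}x${padded_h}, image placed at y=${y_offset}"
```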

Title or watermark management comes next, with a fading process. The fade on the alpha channel is declared first, [5:v]fade=out:""$fade"":alpha=1[wm]: the input stream is displayed, then faded out from fadeStart to fadeEnd in the alpha channel, [input stream]fade=out:fadeStart:fadeEnd:alpha=1[output stream]. Then the fading title/watermark is encoded: [f][wm]overlay=(main_w-overlay_w)/2:5:enable=between(t\,0\,5). The input stream receives the faded title or watermark as an overlay, like the previous images, but this time the overlay is removed after a given duration thanks to the enable parameter: [input stream][input overlay]overlay=Xposition:Yposition:enable=between(t\,startOverlay\,endOverlay). No temporary output stream is produced since this is the last step of the filter; the output is handed over to the rest of the ffmpeg processing.

Encoding is a two pass encoding with reduced parameters (no audio) during the first pass to save processing time. We tried producing the two outputs with one single command to save even more time, but we dropped that solution because users sometimes prefer to simply comment out one of the ffmpeg commands to produce only one file, without changing parameters known to work.

#
# Pass1 extension1
#
echo "Codec1 Pass1 : "
date
$ffmpeg -y -ss $start -v $verbosity -threads $nbthreads -t $duration -i $file -f image2 -r 25 -loop 1 -i $logo1 -f image2 -r 25 -loop 1 -i $logo2 -f image2 -r 25 -loop 1 -i $logo3 -f image2 -r 25 -loop 1 -i $logo4  -f image2 -r 25 -loop 1 -i $logotitle -t $duration -r $fps -c:v $vcodec1 -vpre $vpreset1 -async 1 -pass 1 -b:v $vbr -maxrate $maxvbr -bufsize $bufsize -map $videomap -an  -f $extension1 /dev/null
#
# Pass2 extension1
#
echo "Codec1 Pass2 : "
date
$ffmpeg -y -ss $start -t $duration -v $verbosity -threads $nbthreads -i $file -f image2 -r 25 -loop 1 -i $logo1 -f image2 -r 25 -loop 1 -i $logo2 -f image2 -r 25 -loop 1 -i $logo3 -f image2 -r 25 -loop 1 -i $logo4 -f image2 -r 25 -loop 1 -i $logotitle -t $duration -r $fps -metadata title="$title" -metadata artist="$author" -metadata copyright="$copyright" -metadata creation_time="$thisdate" -metadata:s:a  language=$lang -c:v $vcodec1 -vpre $vpreset1 -async 1 -pass 2 -b:v $vbr -maxrate $maxvbr -bufsize $bufsize -map $videomap -c:a $acodec1 -ac 2 -ar $freq -b:a $abr -map $audiomap -map_channel $channel1map  -map_channel $channel2map -af $vol -filter_complex "$filter"  $fileout.$extension1 
date

Parameters are divided into 5 groups:

  1. General parameters
    • automatic yes answer to questions from the program -y
    • start time and duration -ss $start -t $duration
    • number of threads verbosity -v $verbosity -threads $nbthreads
  2. Files used
    • main file -i $file
    • overlay files with their parameters -f image2 -r 25 -loop 1 -i $logo1 (image2 is needed for png images; loop 1 is compulsory to get the fading effect, otherwise the overlay is only on/off; 25 fps matches the French standard)
  3. Encoding options
    • video options -r $fps -c:v $vcodec1 -async 1 -pass 2 -b:v $vbr -maxrate $maxvbr -bufsize $bufsize -map $videomap
    • audio options -c:a $acodec1 -ac 2 -ar $freq -b:a $abr -map $audiomap -map_channel $channel1map -map_channel $channel2map -af $vol
    • filtering -filter_complex "$filter"
  4. Metadata values
    • title -metadata title="$title"
    • teacher -metadata artist="$author"
    • copyright information -metadata copyright="$copyright"
    • date of creation -metadata creation_time="$thisdate"
    • language -metadata:s:a language=$lang
  5. presets
    • the preset system -vpre $vpreset1 enables fine tuned parameters linked with each codec used; it makes it possible to pinpoint an equilibrium between processing time and output quality. ffmpeg and other software (x264) provide default presets, but it is possible to build a preset with values fitting the properties of the original video.

All the code displayed in this chapter has been copied/pasted from the script bearing the ffmpeg command line and low level parameters.

In most cases this script can fit any situation, but some camcorders do not yield square pixels. Video players are used to such situations and will properly handle your productions, but you may get into trouble when the sample aspect ratio (SAR) value is not recognized, or when you want to overlay images with different SARs... To take care of this situation, here is a script for SAR != 1: it handles SAR values not equal to 1:1 and produces an output with SAR=1:1.

The second script, widely open to user's choices

Users have to provide all the parameters described in the previous chapter, and for most of them they may choose between numerous possibilities. We decided to reduce some of them: for example we recommend producing webm and mp4 only, and we recommend a reduced set of video and audio codecs. But even with few choices per parameter, the number of parameters makes up a huge number of possibilities, and in fact users tend to produce a script for each of their video sessions. The most advanced users (those who change filtering parameters) have ended up building a folder for each of their projects with the two amended scripts, the logos, and the presets... We are far from a generic command line that fits all, but we observed a great increase in user satisfaction!

The first part of the script contains, in comments, the full description of the parameters. You will find the parameters listed above plus some recommendations from the project managers:

# here are some proposals
# Apple World
# iPad          1024:768                 aspect = 4:3
# iPodTouch & iphone4    960:640                 aspect = 3:2  
# OldiPodTouch           480:320                 aspect = 3:2
# iPodClassic        320:240                 aspect = 4:3
# iPodNano               376:240                 aspect = 1.57
# Video world
# PAL            768:576                 aspect = 4:3
# qPAL (quarter)     384:288                 aspect = 4:3
# qPAL+          448:336                 aspect = 4:3
# 720p                  1280:720                 aspect = 16:9
# 1080p                 1920:1080                aspect = 16:9
#----------------------------------------------
# Values for iPods iPads have to be taken carefully
#----------------------------------------------
# It is important to choose properly the combination between profile and level
# Values have evolved with the devices 
# originally profile baseline level 3.0
# then profile Main then High and level 3.1 4.0 4.1 4.2 5.1 5.2
#-----------------
# Encoding options
#-----------------
# libx264   and libfaac    for mp4 
# libvpx    and libvorbis  for webM
# it is possible to modify audio rendering
# this can be used when multiple audio channels have been recorded
# here is a short explanation of some simple strategies in redirecting audio
# The work is done by channel_map parameter
# Basic assumption is that the source file has 2 channels in 1 stereo audio stream resource
# If the sound track is mono it works also with a warning 
# sound track is usually track number 1 ( 0 being the video track) 
# use ffprobe to check because exceptions exist change the track number accordingly
# do not mix up map that deals with the tracks and map_channel that deals with sound streams
# the order of the maps determines their action, video first audio second
# first map acts on first output stream,  value 0:0 if video comes on the first input stream
# second map acts on second output stream,  value 0:1  sound comes from the second input stream
# stream mapping is done in the following way for each channel:
# [input track num.].[input stream num.].[input channel num.]
# as many times as channels (2 for us as we infer that we have stereo stream)
# first map_channel for the first output channel,  second map_channel for the second output channel
# For example :
# -map_channel 0.1.0 -map_channel 0.1.1 keeps the same channel setup
# -map_channel 0.1.1 -map_channel 0.1.0 exchanges right and left channels
# -map_channel 0.1.0 -map_channel 0.1.0 copies first input channel in both output channels
# -map_channel 0.1.1 -map_channel 0.1.1 copies second input channel in both output channels
# many more complicated combinations are possible read the manual
#
# List of Parameters: 
# *******************
# - ScriptName 
# - requested_processing (videofile audiofile listTypes listSizes) 
# - [filename]
# - encoding starting time (HH:MM:SS.hh 00:00:00.00 if no heading crap) 
# - duration   HH:MM:SS.hh
# - verbosity = warning (info quantity is in the order : quiet panic fatal error warning info verbose debug)
# - image ratio (4:3 16:9 16:9to4:3)
# - output type type ipod320 ipod480 ens384 ipod640  iphone4 ipad1024 PAL qPAL qPAL+ 720p(16:9 only) 1080p(16:9 only)
# - framerate 25.0
# - used codec libx264 libvpx 
# - preset to use (preselected options)
#   2 values must be set for x264 and vpx  from the following default values or your own choices
#   1. ens384 ipod320 ipod320ens ipod480 ipod640 ipod640ens ipad1024 ultrafast superfast veryfast faster fast 
#   medium slow slower veryslow placebo
#   2. ipod possible but not applicable 1080p50_60 1080p 360p 720p50_60 720p
# - language code ISO 639 3 characters fra or eng or ...
# - mean rate 700k
# - max rate 750k
# - buffer size =  max_rate x number of seconds to preload in the buffer 1500k
# - title (metadata)                "My great title" 
# - author (metadata)                "The legendary Flanker"
# - copyright  "My institution CC BY SA ND"
# - video map  0:0 to encode video in output track 0 when it comes from input track 0 (this map parameter should come first)
#            0:1 to encode video in output track 0 when it comes from input track 1 (rare inverted situation)
# - audio encoding to perform  libfaac libvorbis libfdk_aac
# - audio level increase 1.0 identical 
# - audio bitrate 96k (some formats require a minimum value)
# - audio map  0:1 to encode audio in output track 1 when it comes from input track 1 (this map option should come second)
#              0:0 to encode audio in output track 1 when it comes from input track 0 (rare inverted situation)
# - map_channel 1 deals with the first audio channel 0.1.0  (standard situation see previous remarks)
# - map_channel 2 deals with the second audio channel  0.1.1 (standard situation see previous remarks)
# - logo image 1
# - logo image 2
# - logo image 3
# - logo image 4
# - logo/watermark overlay that will fade out 
# - display duration of watermark
# - output filename
# - 2 file extensions for output container mp4 webm  
# - redirect traces
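The list above maps directly onto positional arguments of the low level script. Here is a minimal sketch of how its first parameters might be picked up; the variable names mirror those used in the ffmpeg commands earlier, but the exact parsing in the real script may differ:

```shell
# Simulate a call with example arguments (hypothetical file name)
set -- videofile myraw.avi 00:04:00.00 00:00:20.00 warning
# Read the first positional parameters in the documented order
requested_processing=$1   # videofile | audiofile | listTypes | listSizes
file=$2                   # raw video file
start=$3                  # encoding start time HH:MM:SS.hh
duration=$4               # duration HH:MM:SS.hh
verbosity=$5              # ffmpeg log level
echo "mode=$requested_processing start=$start duration=$duration"
```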

Users then have to modify the paths of the external files they use: logos, presets...

decoleft="/home/vidal/Video/BashEncodingTest/logos/onglet_blanc.png"
logo1="/home/vidal/Video/BashEncodingTest/logos/logo_ens_blanc.png"
logo2="/home/vidal/Video/BashEncodingTest/logos/logo_ife_blanc.png"
logo3="/home/vidal/Video/BashEncodingTest/logos/logo_meteo+enm.png"
logotitle="/home/vidal/Video/BashEncodingTest/logos/titre-conf1.png"
#
# Directory to store encoded files (must have write rights on it)
location=/home/vidal/Video/BashEncodingTest/Encodages
cd $location
echo ""
echo processed files will be in $location
#
# Directory where ffmpeg will find presets (enables the use of custom values)
export FFMPEG_DATADIR="/home/vidal/Video/BashEncodingTest/FFpresets" 
echo "Using presets from"
printenv | grep FFMPEG_DATADIR
echo ""
#
# End of common parameters

After these parameters, common to a group of encodings, one block of lines is produced for each video to encode. In this default form the script produces two outputs (mp4 and webm, recommended but not compulsory). The parameters are given in the order described above.

#==================================================================================
# Variable parameter set
# Name and location of the video that will be encoded 
# "video" is the video that will be encoded
# Multiple encodings can be sent at the same time: copy/paste/modify the following block for each video
# variable i is the rank in the list
#-------------------------------------------------------------------------------------------
#
video="/media/vidal/1383AD63534957FA/Videos_MeteoAvril2014/Vidal/Tremplin/AtelierMichael/RawVideo/atelier.mp4"
execution="`basename ${video%.*}`"
#
echo Start encoding $execution
date
$script videofile $video  00:04:00.00 00:00:20.00 warning 16:9 ipod640 25.0 libx264 libvpx tremplin ipod fra 700k 750k 2800k "Atelier Observation Prévision : 1 imagerie infrarouge" "Michaël Kreist" "Météo-France, Ecole Normale Supérieure de Lyon - CC BY SA ND" 0:0 libfdk_aac libvorbis 1.5 92k 0:1 0.1.0 0.1.1 $decoleft $logo1 $logo2 $logo3 $logotitle 5 $execution mp4 webm &> "$execution.log"
echo Finished encoding $execution
date
echo -------------------------------------

As you may have seen, this script contains real path values and real names: I have written this article from a real situation to be sure to hand over operational material. I know this does not mean everything will work on your computer as soon as you have carefully replaced my values with yours, but I hope it will help.

Script with user defined parameters and call to the ffmpeg main script

Some other problems ffmpeg can solve for you

ffmpeg is really a multipurpose tool. Here are some simple situations where it has been of great help for our project...

This is a nice screencast, but how can I handle the size?

Many tools can capture computer screens, but very few provide an output aligned with standard video values. We faced this problem because colleagues wanted to demonstrate the use of software or online tools. We wrote this script to keep a trace of "the good" ffmpeg parameters used to solve the question.

Most of the problems come from the size of the video, which is neither 4:3 nor 16:9, and from the codec used, which is not controlled and may be old or configured with peculiar parameter choices. To deal with those problems we used ffprobe to extract the parameters needed for the encoding, but unfortunately in some cases the headers are empty and the information has to be typed manually. Here is the code used to do that:

tmpvar=$(ffprobe -show_streams  $video_file 2>/dev/null | grep sample_aspect_ratio)
export $(echo $tmpvar)
tmpvar=$(ffprobe -show_streams  $video_file 2>/dev/null | grep display_aspect_ratio)
export $(echo $tmpvar)
tmpvar=$(ffprobe -show_streams  $video_file 2>/dev/null | grep width)
export $(echo $tmpvar)
tmpvar=$(ffprobe -show_streams  $video_file 2>/dev/null | grep height)
export $(echo $tmpvar)
tmpvar=$(ffprobe -show_streams  $video_file 2>/dev/null | grep codec_name)
export $(echo $tmpvar | awk '{print $1}')
#
if [[ ${sample_aspect_ratio}  == "" || ${sample_aspect_ratio}  == "0:1" ]] ; then
  echo "We have a problem, program cannot retrieve videofile  properties you will have to provide some informations"
  sample_aspect_ratio=1:1
fi
if [[ ${display_aspect_ratio} == "" || ${display_aspect_ratio} == "0:1" ]] ; then
  echo "please enter the initial display aspect ratio considering SAR=1:1 (irreducible rational fraction)"
  read -p "a:b -> " display_aspect_ratio
fi
if [[ ! ${width}  || ${width} == "" ]] ; then
  echo "please enter the width :"
  read -p "nbpix -> " width
fi
if [[ ! ${height} ||  ${height} == "" ]] ; then
  echo "please enter the height :"
  read -p "nbpix -> " height
fi
#
sarup="${sample_aspect_ratio%:*}"
sardown="${sample_aspect_ratio#*:}"
darup="${display_aspect_ratio%:*}"
dardown="${display_aspect_ratio#*:}"
#
echo SAR    : $sample_aspect_ratio
echo DAR    : $display_aspect_ratio
echo Width  : $width
echo Height : $height
#
if [[ $sample_aspect_ratio != "1:1" ]] ; then 
  odar=$(echo "scale=3;$width*$sarup/$sardown/$height*1000" | bc)
  else
  odar=$(echo "scale=3;$darup/$dardown*1000" | bc)
fi
sar=$(echo "scale=3;$sarup/$sardown" | bc)
dar="${odar%.*}"
echo float DAR : $dar
echo float SAR : $sar
#
if [[ $codec_name == msvideo1 ]] ; then
echo "We have detected a microsoft video file We need to adjust parameters by hand"
# YUV444    3 bytes per pixel
# YUV422    4 bytes per 2 pixels
# YUV411    6 bytes per 4 pixels
# YUV420p   6 bytes per 4 pixels, reordered
echo "please enter the pixel format you want to use  yuv420p yuv411 yuv422 yuv444"
read -p "-> " pixel_format
echo "Please give the name of the preset that fits with your pixel_format choice" 
read -p "-> " newvpreset
vpreset=("$newvpreset" -pix_fmt "$pixel_format")
fi

The Sample Aspect Ratio (SAR) and Display Aspect Ratio (DAR) are then used to compute the filter parameters according to the desired output. The values of SAR and DAR are given explicitly to avoid rounding problems that may derive from the computation of lines or columns. The script tells you whether the chosen output ratio is the best possible.

case $1 in  
  4:3 ) 
    final_height=$(echo "$final_width*3/4" | bc)
    if [[ $dar -lt 1333 ]]; then
      filter="[0:0]pad=ih*sar*4/3:ih*sar:(ih*sar*4/3-iw)/2:0[a];[a]scale="$final_width":"$final_height"[b];[b]setdar=dar=$darup/$dardown[c];[c]setsar=sar=1/1"
    else
      if [[ $dar -lt 1555 ]]; then
        filter="[0:0]pad=iw*sar:iw*sar*3/4:0:(iw*sar*3/4-ih)/2[a];[a]scale="$final_width":"$final_height"[b];[b]setdar=dar=$darup/$dardown[c];[c]setsar=sar=1/1"
      else
      echo "!!--!!--!!--!!--!!--!!--!!--!!--!!"
        echo "You'd get better display with 16:9"
      echo "!!--!!--!!--!!--!!--!!--!!--!!--!!"
        filter="[0:0]pad=iw*sar:iw*sar*3/4:0:(iw*sar*3/4-ih)/2[a];[a]scale="$final_width":"$final_height"[b];[b]setdar=dar=$darup/$dardown[c];[c]setsar=sar=1/1"
      fi
    fi
    ;;
  16:9 ) 
    final_height=$(echo "$final_width*9/16" | bc)
    if [[ $dar -lt 1555 ]]; then
      echo "!!--!!--!!--!!--!!--!!--!!--!!--!!"
      echo "You'd get better display with 4:3"
      echo "!!--!!--!!--!!--!!--!!--!!--!!--!!"
      filter="[0:0]pad=ih*sar*16/9:ih*sar:(ih*sar*16/9-iw)/2:0[a];[a]scale="$final_width":"$final_height"[b];[b]setdar=dar=$darup/$dardown[c];[c]setsar=sar=1/1"
    else
      if [[ $dar -lt 1777 ]]; then
        filter="[0:0]pad=ih*sar*16/9:ih*sar:(ih*sar*16/9-iw)/2:0[a];[a]scale="$final_width":"$final_height"[b];[b]setdar=dar=$darup/$dardown[c];[c]setsar=sar=1/1"
      else 
        filter="[0:0]pad=iw*sar:iw*sar*9/16:0:(iw*sar*9/16-ih)/2[a];[a]scale="$final_width":"$final_height"[b];[b]setdar=dar=$darup/$dardown[c];[c]setsar=sar=1/1"
      fi
    fi
    ;;
esac
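As a worked check of the thresholds above, take the DV stream shown later in this article (720x576 with SAR 16:15): width*sarup/sardown/height*1000 gives a float DAR of 1333, i.e. exactly 4:3. Here is a pure-bash version of the computation (the script itself uses bc for the division):

```shell
# Worked example: DV frame 720x576, SAR 16:15 (values from the ffprobe listing below)
width=720; height=576
sarup=16; sardown=15
# Integer version of: width*sarup/sardown/height*1000
dar=$(( width * sarup * 1000 / (sardown * height) ))
echo "float DAR: $dar"
```

Since 1333 is not below the 1333 threshold, the 4:3 case takes the pad-top-and-bottom branch; with this particular source the computed pad amount is zero.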

To finish up, the ffmpeg command is issued:

#
date
$ffmpeg -y -ss $start -t $duration -v $verbosity -threads $nbthreads -i $video_file -t $duration -r $fps -c:v $vcodec -vpre $vpreset -async 1 -pass 1 -b:v $vbr -maxrate $maxvbr -bufsize $bufsize -map $videomap -an  -f $extension /dev/null
#
date
$ffmpeg -y -ss $start -t $duration -v $verbosity -threads $nbthreads -i $video_file -t $duration -r $fps -c:v $vcodec -vpre $vpreset -async 1 -pass 2 -b:v $vbr -maxrate $maxvbr -bufsize $bufsize -map $videomap -c:a $acodec -ac 2 -ar $freq -b:a $abr -map $audiomap -map_channel $channel1map  -map_channel $channel2map -af volume=$vol -filter_complex "$filter"  -f $extension $video_out 
date
#

The parameter names are identical to the ones previously described in this script.

What the hell is this video file ?

If you work with more than one colleague, even if you have enforced strict rules on codec/bitrate/size and as many other parameters as needed to exchange videos, one day or another you will surely receive some weird files that behave strangely in some player or browser and sometimes cannot even be viewed. ffmpeg provides the ffprobe command to check what is inside the metadata, which may help in finding a solution. With ffprobe you will collect a lot of interesting information on your file, like this:

esquel $: ffprobe 00003.MTS

Input #0, mpegts, from '00003.MTS':
  Duration: 00:10:04.40, start: 1813.900000, bitrate: 28131 kb/s
  Program 1 
    Stream #0:0[0x1011]: Video: h264 (High) (HDPR / 0x52504448), yuv420p, 1920x1080 [SAR 1:1 DAR 16:9], 50 fps, 50 tbr, 90k tbn, 100 tbc
    Stream #0:1[0x1100]: Audio: pcm_bluray (HDPR / 0x52504448), 48000 Hz, stereo, s16, 1536 kb/s
    Stream #0:2[0x1200]: Subtitle: hdmv_pgs_subtitle ([144][0][0][0] / 0x0090), 1920x1080

We have here one input with 3 streams:

  • Video stream: h264 codec, high profile (details here), color coding format yuv420p, image size 1920x1080, square pixels (SAR 1:1), image ratio 16:9, 50 frames/second and three values of the time base,
  • Audio stream: pcm_bluray codec, 48000 Hz sampling frequency, stereo, 16 bit encoding, 1.5 Mb/s
  • Subtitle stream in hdmv_pgs_subtitle format, with an image size of 1920x1080

In most cases this information will be sufficient to solve your problem, or at least to understand the origin of the bug; if you face more complex issues, go to the official ffprobe documentation.

ffprobe can also help you retrieve variables from the video file header with this parameter:

ffprobe -show_streams mooc-JC2013-02.dv

which provides a lot of information on top of what has been described in the previous paragraph. Each stream is described between [STREAM] tags:

[STREAM]
index=0
codec_name=dvvideo
codec_long_name=DV (Digital Video)
profile=unknown
codec_type=video
codec_time_base=1/25
codec_tag_string=[0][0][0][0]
codec_tag=0x0000
width=720
height=576
has_b_frames=0
sample_aspect_ratio=16:15
display_aspect_ratio=4:3
pix_fmt=yuv420p
level=-99
color_range=N/A
.../...
[/STREAM]

Values are given in the format name=value, which enables a script to fill in values for variables, like here:

tmpvar=$(ffprobe -show_streams  $video_file 2>/dev/null | grep sample_aspect_ratio)
export $(echo $tmpvar)

which assigns the value 16:15 to the variable sample_aspect_ratio.
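The same name=value trick can be tried without a video file by feeding the pattern a simulated line (the value comes from the [STREAM] listing above); the SAR can then be split with parameter expansion as in the script:

```shell
# Simulated ffprobe output line; the real script gets it from ffprobe | grep
tmpvar="sample_aspect_ratio=16:15"
export $(echo $tmpvar)               # creates the variable sample_aspect_ratio
sarup="${sample_aspect_ratio%:*}"    # keep the part before ":" -> 16
sardown="${sample_aspect_ratio#*:}"  # keep the part after ":" -> 15
echo "SAR $sample_aspect_ratio = $sarup / $sardown"
```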

I have 15 files grabbed during a full afternoon session of recording... I want all the films in one single file

ffmpeg makes it easy to concatenate and transcode files with a command like this one:

ffmpeg -f concat -i mardi.txt  -c:v copy -c:a libfdk_aac -b:a 1536k mardimatin.mp4

In this example the video codec is copied and the audio is encoded in aac at 1536 kb/s. The command uses a text file of the following form:

# list of tuesday morning files
file '00000.MTS'
file '00001.MTS'
file '00002.MTS'
file '00003.MTS'
file '00004.MTS'
file '00005.MTS'
file '00006.MTS'
file '00007.MTS'
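When there are many clips, the list itself can be generated instead of typed by hand. Here is a small sketch following the camcorder numbering shown above (the list file name mylist.txt is arbitrary):

```shell
# Build a concat list for clips 00000.MTS .. 00007.MTS
listfile="mylist.txt"
printf "# generated list of clips\n" > "$listfile"
for ((i=0; i<8; i++)); do
  printf "file '%05d.MTS'\n" "$i" >> "$listfile"
done
cat "$listfile"
```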
