Understanding MLT melt mixer luma mix (fade) duration?

Related to Understanding/controlling MLT melt slideshow?, I am trying to understand how melt's luma mixer works, especially in the context of short (small frame count) durations.
For instance, with a specification like tmppics/pic_01.jpg length=6 -mix 2 -mixer luma I would have expected 2 frames of fade of pic_01, then 2 frames of the full pic_01, then 2 frames of fade again - however, I've seen both results that look like this and results that don't.
To clarify this, I've written a bash script which uses ImageMagick convert to generate test images, melt to make a "slideshow" with fades out of these images, ffmpeg to convert that to an animated .gif, and ImageMagick convert and montage to obtain a film strip (sprite sheet) from the .gif (Ubuntu 18.04, melt 6.6.0, ffmpeg 3.4.4-0ubuntu0.18.04.1, ImageMagick 6.9.7-4 Q16 x86_64 20170114).
Here is the bash script, melt-test-strip.sh:
#!/usr/bin/env bash
FRAMERATE=25
echo "
description=DV PAL
frame_rate_num=$FRAMERATE
frame_rate_den=1
width=720
height=576
progressive=0
sample_aspect_num=59
sample_aspect_den=54
display_aspect_num=4
display_aspect_den=3
colorspace=601
" > my-melt.profile
mkdir -p tmppics
convert -background lightblue -fill blue -size 3840x2160 -pointsize 200 -gravity center label:"Test A" tmppics/pic_01.jpg
convert -background lightblue -fill blue -size 2160x3840 -pointsize 200 -gravity center label:"Test B" tmppics/pic_02.jpg
if [ -z "$IMGDURATIONF" ]; then
IMGDURATIONF=6 # picture duration, frames
fi
if [ -z "$FADEDURATIONF" ]; then
FADEDURATIONF=2 # single-end fade duration, frames
fi
melt -verbose -profile ./my-melt.profile \
tmppics/pic_01.jpg length=$IMGDURATIONF \
tmppics/pic_02.jpg length=$IMGDURATIONF -mix $FADEDURATIONF -mixer luma \
colour:black length=$IMGDURATIONF -mix $FADEDURATIONF -mixer luma \
-consumer avformat:meltout.mp4 vcodec=libx264 an=1
# auxiliary: just for creating sprite sheet/film strip:
melt -verbose -profile ./my-melt.profile tmppics/pic_01.jpg length=$IMGDURATIONF -consumer avformat:meltout-01.mp4 vcodec=libx264 an=1
melt -verbose -profile ./my-melt.profile tmppics/pic_02.jpg length=$IMGDURATIONF -consumer avformat:meltout-02.mp4 vcodec=libx264 an=1
melt -verbose -profile ./my-melt.profile colour:black length=$IMGDURATIONF -consumer avformat:meltout-b.mp4 vcodec=libx264 an=1
# convert to gif to obtain sprite sheet/film strip from:
ffmpeg \
-i meltout.mp4 \
-r $FRAMERATE \
-vf scale=256:-1 \
-y meltout.gif
# auxiliary: just for creating sprite sheet/film strip:
ffmpeg -i meltout-01.mp4 -r $FRAMERATE -vf scale=256:-1 -y meltout-01.gif
ffmpeg -i meltout-02.mp4 -r $FRAMERATE -vf scale=256:-1 -y meltout-02.gif
ffmpeg -i meltout-b.mp4 -r $FRAMERATE -vf scale=256:-1 -y meltout-b.gif
convert meltout.gif -coalesce meltoutc.gif
convert meltout-01.gif -coalesce meltoutc-01.gif
convert meltout-02.gif -coalesce meltoutc-02.gif
convert meltout-b.gif -coalesce meltoutc-b.gif
FRAMETHICK=5
#~ montage temp.gif -tile x1 -geometry '1x1+0+0<' -border 5 -bordercolor "rgb(200, 200, 200)" -label 'Image' -quality 100 meltout.png
# "%p index of image in current image list" is "t=> index of current image (s) in list" in fx:
montage -label 'Frame %[fx:t+1]/%n' meltoutc.gif -tile x1 -geometry '1x1+0+0<' -frame $FRAMETHICK -bordercolor "rgb(200, 200, 200)" -quality 100 meltout.png
# here from the .gif - otherwise for replicating images: montage in.jpg +clone +clone +clone -tile x4 -geometry +0+0 out.jpg
montage -label 'Frame %[fx:t+1]/%n' meltoutc-01.gif -tile x1 -geometry '1x1+0+0<' -frame $FRAMETHICK -bordercolor "rgb(200, 200, 200)" -quality 100 meltout-01.png
montage -label 'Frame %[fx:t+1]/%n' meltoutc-02.gif -tile x1 -geometry '1x1+0+0<' -frame $FRAMETHICK -bordercolor "rgb(200, 200, 200)" -quality 100 meltout-02.png
montage -label 'Frame %[fx:t+1]/%n' meltoutc-b.gif -tile x1 -geometry '1x1+0+0<' -frame $FRAMETHICK -bordercolor "rgb(200, 200, 200)" -quality 100 meltout-b.png
# for offsetting:
# (gif) frame width/height
fw=$(convert meltoutc-01.gif[0] -format "%w" info:)
fh=$(convert meltoutc-01.gif[0] -format "%h" info:)
# strip width/height
sw=$(convert meltout.png -format "%w" info:)
sh=$(convert meltout.png -format "%h" info:)
echo fw $fw fh $fh sw $sw sh $sh
convert -size $(( (IMGDURATIONF-FADEDURATIONF)*(fw+2*FRAMETHICK) ))x$sh xc:white meltout-02.png +append meltout-02B.png
# IMGDURATIONF-FADEDURATIONF to get to the start of second clip; +IMGDURATIONF from there to get to end of second clip, and -FADEDURATIONF from there to get to start of third clip
convert -size $(( (IMGDURATIONF-FADEDURATIONF+IMGDURATIONF-FADEDURATIONF)*(fw+2*FRAMETHICK) ))x$sh xc:white meltout-b.png +append meltout-bB.png
montage -geometry '+0+0' meltout.png meltout-01.png meltout-02B.png meltout-bB.png -tile 1x meltout-all.png
eog meltout-all.png
So, if you call bash melt-test-strip.sh, you get the defaults, IMGDURATIONF=6 and FADEDURATIONF=2, for which the output is this:
The only way the luma-mixed result (on top) makes sense to me is if the starting frame of the new clip participates in the mix at 0% - which would explain why, for the second image clip, we observe 1 frame of fade + 3 full frames + 1 frame of fade (instead of the 2 frames fade + 2 frames full + 2 frames fade that I'd expect).
Is this correct?
Reading https://www.mltframework.org/docs/melt/#mixes, I cannot really tell whether this interpretation is correct.
If I run the script with other parameters, like IMGDURATIONF=8 FADEDURATIONF=3 bash melt-test-strip.sh, then the output is:
... in which case the interpretation holds (if every new clip's first frame participates in the mix at 0%, that explains why we see 2 frames of fade + 3 full frames + 2 frames of fade for the second clip = 7 frames in all, instead of the requested 8) - but now I'm not sure whether this is just an artefact of my script (as opposed to the true behavior of melt's mix).
Can anyone confirm that this is how melt's mixer works - and if not, explain how it can be understood?
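In other words, my hypothesis reduces to simple frame arithmetic (this is only my interpretation of the strips above, not confirmed melt behaviour):

```shell
# Frame accounting under the hypothesis that the incoming clip's first
# frame participates in the mix at 0% opacity (interpretation only):
IMGDURATIONF=6
FADEDURATIONF=2
fade=$((FADEDURATIONF - 1))                  # visible fade frames per end
full=$((IMGDURATIONF - 2*FADEDURATIONF + 1)) # fully opaque frames
total=$((2*fade + full))                     # frames the second clip occupies
echo "fade=$fade full=$full total=$total"    # fade=1 full=3 total=5
```

With IMGDURATIONF=8 and FADEDURATIONF=3 the same formulas give 2 + 3 + 2 = 7, matching the second strip.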

What does this specific sed command do exactly? Using sed to parse full HTML pages in a bash script

subreddit=$(curl -sL "https://www.reddit.com/search/?q=${query}&type=sr"|tr "<" "\n"|
sed -nE 's#.*class="_2torGbn_fNOMbGw3UAasPl">r/([^<]*)#\1#p'|gum filter)
I've been learning bash and have been making pretty good progress. One thing that still seems far too daunting is these complex sed commands. It's unfortunate, because I really want to use them to do things like parse HTML, but it quickly becomes a mess. This is a little snippet of a script that queries Reddit, pipes the result through sed, and returns just the names of the subreddits found by the search, one per line.
My main question is: what is it that this is actually cutting/replacing, and what does the beginning part 's#.' mean?
What I tried:
I used curl to search for a subreddit name so that I could see the raw output from that command and then I tried to pipe it into sed using little snippets of the full command to see if I could reconstruct the logic behind the command and all I really figured out was that I am lacking in my knowledge of sed beyond basic replacements.
I'm trying to re-write this script (for learning purposes only, the script works just fine) that allows you to search reddit and view the image posts in your terminal using Kitty. Mostly everything is pretty readable but the sed commands just trip me up.
I'll attach the full script below in case anyone is interested and I welcome any advice or explanations that could help me fully understand and re-construct it.
I'm really curious about this. I'm also wondering whether it would just be better to call a Python script from bash that could return the images using Beautiful Soup... or maybe using "htmlq" would be a better idea?
Thanks!
#!/bin/sh
get_input() {
[ -z "$*" ] && query=$(gum input --placeholder "Search for a subreddit") || query=$*
query=$(printf "%s" "$query"|tr ' ' '+')
subreddit=$(curl -sL "https://www.reddit.com/search/?q=${query}&type=sr"|tr "<" "\n"|
sed -nE 's#.*class="_2torGbn_fNOMbGw3UAasPl">r/([^<]*)#\1#p'|gum filter)
xml=$(curl -s "https://www.reddit.com/r/$subreddit.rss" -A "uwu"|tr "<|>" "\n")
post_href=$(printf "%s" "$xml"|sed -nE '/media:thumbnail/,/title/{p;n;p;}'|
sed -nE 's_.*href="([^"]+)".*_\1_p;s_.*media:thumbnail[^>]+url="([^"]+)".*_\1_p; /title/{n;p;}'|
sed -e 'N;N;s/\n/\t/g' -e 's/&amp;/\&/g'|grep -vE '.*\.gif.*')
[ -z "$post_href" ] && printf "No results found for \"%s\"\n" "$query" && exit 1
}
readc() {
if [ -t 0 ]; then
saved_tty_settings=$(stty -g)
stty -echo -icanon min 1 time 0
fi
eval "$1="
while
c=$(dd bs=1 count=1 2> /dev/null; echo .)
c=${c%.}
[ -n "$c" ] &&
eval "$1=\${$1}"'$c
[ "$(($(printf %s "${'"$1"'}" | wc -m)))" -eq 0 ]'; do
continue
done
[ -t 0 ] && stty "$saved_tty_settings"
}
download_image() {
downloadable_link=$(curl -s -A "uwu" "$1"|sed -nE 's#.*class="_3Oa0THmZ3f5iZXAQ0hBJ0k".*<a href="([^"]+)".*#\1#p')
curl -s -A "uwu" "$downloadable_link" -o "$(basename "$downloadable_link")"
[ -z "$downloadable_link" ] && printf "No image found\n" && exit 1
tput clear && gum style \
--foreground 212 --border-foreground 212 --border double \
--align center --width 50 --margin "1 2" --padding "2 4" \
'Your image has been downloaded!' "Image saved to $(basename "$downloadable_link")"
# shellcheck disable=SC2034
printf "Press Enter to continue..." && read -r useless
}
cleanup() {
tput cnorm && exit 0
}
trap cleanup EXIT INT HUP
get_input "$#"
i=1 && tput clear
while true; do
tput civis
[ "$i" -lt 1 ] && i=$(printf "%s" "$post_href"|wc -l)
[ "$i" -gt "$(printf "%s" "$post_href"|wc -l)" ] && i=1
link=$(printf "%s" "$post_href"|sed -n "$i"p|cut -f1)
post_link=$(printf "%s" "$post_href"|sed -n "$i"p|cut -f2)
gum style \
--foreground 212 --border-foreground 212 --border double \
--align left --width 50 --margin "20 1" --padding "2 4" \
'Press (j) to go to next' 'Press (k) to go to previous' 'Press (d) to download' \
'Press (o) to open in browser' 'Press (s) to search for another subreddit' 'Press (q) to quit'
kitty +kitten icat --scale-up --place 60x40#69x3 --transfer-mode file "$link"
readc key
# shellcheck disable=SC2154
case "$key" in
j) i=$((i+1)) && tput clear ;;
k) i=$((i-1)) && tput clear ;;
d) download_image "$post_link" ;;
o) xdg-open "$post_link" || open "$post_link" ;;
s) get_input ;;
q) exit 0 && tput clear ;;
*) ;;
esac
done
"gum filter" is essentially a fuzzy finder like fzf and "gum style" draws pretty text and nice boxes that work kind of like css.
What does this specific sed command do exactly?
sed -nE 's#.*class="_2torGbn_fNOMbGw3UAasPl">r/([^<]*)#\1#p'
It does two things:
Select all lines that contain the literal string class="_2torGbn_fNOMbGw3UAasPl">r/.
For those lines, print only the part after ...>r/.
Basically, it translates to ... (written inefficiently on purpose)
grep 'class="_2torGbn_fNOMbGw3UAasPl">r/' |
sed 's/.*>r\///'
what does the beginning part mean 's#.'?
You are looking at the (beginning of the) substitution command. Normally, it is written as s/search/replace/ but the delimiter / can be chosen (mostly) freely. s/…/…/ and s#…#…# are equivalent.
Here, # has the benefit of not having to escape the / in …>r/.
The . belongs to the search pattern. The .* in the beginning selects everything from the start of the line, so that it can be deleted when doing the substitution. Here we delete the beginning of the line up to (and including) …>r/.
The \1 in the replacement pattern is a placeholder for the string that was matched by the group ([^<]*) (longest <-free substring directly after …>r/).
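Both points can be seen in a toy run (the input line below is fabricated; only the class name is copied from the question):

```shell
# Delimiter choice: these two substitutions are equivalent, but '#'
# avoids having to escape the slash inside the pattern:
echo 'a/b' | sed 's/a\//X/'    # prints: Xb
echo 'a/b' | sed 's#a/#X#'     # prints: Xb

# Capture group: \1 in the replacement is whatever ([^<]*) matched:
printf 'junk class="_2torGbn_fNOMbGw3UAasPl">r/linux\n' |
sed -nE 's#.*class="_2torGbn_fNOMbGw3UAasPl">r/([^<]*)#\1#p'
# prints: linux
```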
That part is unnecessarily complicated. Because sed is preceded by tr "<" "\n" there is no point in dealing with the < inside sed. It could be simplified to
sed -n 's#.*class="_2torGbn_fNOMbGw3UAasPl">r/##p'
Speaking about simplifications:
I really want to use them [sed commands] to do things like parse HTML
Advice: Don't. For one-off jobs where you know the exact formatting (!) of your HTML files, regexes are OK. But even then, they only make sense if you can write them faster than you could use a proper tool.
I'm also wondering if it would just be better to call a Python script from bash that could return the images using beautiful soup... or maybe using "htmlq" would be a better idea?
You are right! In general, regexes are not powerful enough to reliably parse html.
Whether you use Python or bash is up to you. Personally, I find it easier to use a "proper" language for bigger projects. But then I use only that; writing half in Python and half in bash only increases complexity, in my opinion.
If you stick with bash, I'd recommend something a bit more mature and widespread than htmlq (first released in Sep 2021, currently at version 0.4). E.g. install libxml2 and use an XPath expression with post-processing:
curl -sL "https://www.reddit.com/search/?q=QUERY&type=sr" |
xmllint --html --xpath '//h6[@class="_2torGbn_fNOMbGw3UAasPl"]/text()' - 2>/dev/null |
sed 's#^r/##'
But then again, parsing HTML isn't necessary in the first place, since reddit has an API that can return JSON, which you can process using jq.
curl -sL "https://www.reddit.com/subreddits/search.json?q=QUERY" |
jq -r '.data.children | map(.data.display_name) | .[]'

Meteorologist trying to create forecast pages

Let me preface this by saying my last computer class was in high school 1968-69. I'm sure I'm not using best practices and I always appreciate help there. Everything I do is self taught and this is the first truly original piece of code I've written.
In this case I'm trying to produce weather forecast pages. Here are samples for Honolulu. The data comes from NWS NDFD (national digital forecast database) via api.weather.gov in json. I pluck those variables and plug them into ImageMagick.
Two problems. The only way I could accommodate the forecast, which is a different length every time, was to use the caption: command. But a huge SUNNY next to a three- or four-line forecast is jarring. Is there a better way, or at least a way to limit the maximum font size?
Also, this takes a lot longer than I expected. Is there a way for me to speed the process?
Thanks in advance for your help. I learn a lot here.
#!/bin/bash
#process forecast json
#8Oct2019
##geofffox
cd /tmp/json
#curl -o kofk https://api.weather.gov/gridpoints/OAX/31,93/forecast
#curl -o kofk https://api.weather.gov/gridpoints/AFG/381,359/forecast
#curl -o kofk https://api.weather.gov/gridpoints/APX/36,23/forecast
#curl -o kofk https://api.weather.gov/gridpoints/HFO/153,144/forecast
curl -o kofk https://api.weather.gov/gridpoints/OKX/66,65/forecast
counter=0
while [ $counter -le 13 ]
do
number["$counter"]=$(cat kofk | jq -r '.properties.periods['$counter'].number')
name["$counter"]=$(cat kofk | jq -r '.properties.periods['$counter'].name')
start["$counter"]=$(cat kofk | jq -r '.properties.periods['$counter'].startTime')
end["$counter"]=$(cat kofk | jq -r '.properties.periods['$counter'].endTime')
swch["$counter"]=$(cat kofk | jq -r '.properties.periods['$counter'].isDaytime')
temp["$counter"]=$(cat kofk | jq -r '.properties.periods['$counter'].temperature')
wind["$counter"]=$(cat kofk | jq -r '.properties.periods['$counter'].windSpeed')
wdir["$counter"]=$(cat kofk | jq -r '.properties.periods['$counter'].windDirection')
shrt["$counter"]=$(cat kofk | jq -r '.properties.periods['$counter'].shortForecast')
long["$counter"]=$(cat kofk | jq -r '.properties.periods['$counter'].detailedForecast')
echo $counter
((counter++))
done
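On the speed question: each pass through the loop above spawns ten separate `cat kofk | jq` pipelines, so much of the runtime may be process startup rather than the image work. A hedged sketch of a single-pass alternative (the JSON below is a made-up stand-in for the real api.weather.gov payload; only the field names are taken from the script):

```shell
# One jq invocation can emit every field of every period as
# tab-separated rows, instead of one jq call per field per period.
cat > /tmp/kofk.sample <<'EOF'
{"properties":{"periods":[
{"number":1,"name":"Today","temperature":72,"windSpeed":"10 mph"},
{"number":2,"name":"Tonight","temperature":58,"windSpeed":"5 mph"}
]}}
EOF
jq -r '.properties.periods[] |
       [.number, .name, .temperature, .windSpeed] | @tsv' /tmp/kofk.sample
```

Each output row can then be split with `while IFS=$'\t' read -r num name temp wind; do ...; done`, filling all the arrays in one pass.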
innerLoop=0
rm /var/www/html/output/json/kofk/*.png
while [ $innerLoop -le 13 ]
do
echo $innerLoop
convert -size 1920x1080 xc:blue PNG32:/var/www/html/output/json/kofk/kofk.png
convert -background rgba\(0,0,0,0.001\) -fill white -stroke black -strokewidth 3 -gravity west -font Open-Sans-Extrabold -size 700x400 caption:"${shrt["$innerLoop"]^^}" \( +clone -shadow 70x12+5+5 \) +swap \
-flatten -trim +repage /var/www/html/output/json/kofk/shrt["$innerLoop"].png
convert /var/www/html/output/json/kofk/kofk.png -gravity northwest -pointsize 50 -fill white -font Open-Sans-Bold -stroke black -strokewidth 2 -draw "text 950,115 '${name["$innerLoop"]^^}'" /var/www/html/output/json/kofk/kofk["$innerLoop"].png
if [ "${swch["$innerLoop"]}" = "false" ]; then
convert /var/www/html/output/json/kofk/kofk["$innerLoop"].png -pointsize 50 -fill white -font Open-Sans-Bold -stroke black -strokewidth 2 -draw "text 950 700 'DAYTIME HIGH:'" /var/www/html/output/json/kofk/kofk["$innerLoop"].png
else
convert /var/www/html/output/json/kofk/kofk["$innerLoop"].png -pointsize 50 -fill white -font Open-Sans-Bold -stroke black -strokewidth 2 -draw "text 950 700 'OVERNIGHT LOW:'" /var/www/html/output/json/kofk/kofk["$innerLoop"].png
fi
convert /var/www/html/output/json/kofk/kofk["$innerLoop"].png -pointsize 200 -fill black -font Open-Sans-Extrabold -draw "text 1405 705 '${temp["$innerLoop"]^^}°'" -fill white -stroke black -strokewidth 5 -draw "text 1400 700 '${temp["$innerLoop"]^^}°'" /var/www/html/output/json/kofk/kofk["$innerLoop"].png
convert /var/www/html/output/json/kofk/kofk["$innerLoop"].png -pointsize 50 -fill white -font Open-Sans-Bold -stroke black -strokewidth 2 -draw "text 950 750 'WIND: ${wdir["$innerLoop"]} ${wind["$innerLoop"]^^}'" /var/www/html/output/json/kofk/kofk["$innerLoop"].png
convert -composite -gravity west -geometry +950-175 /var/www/html/output/json/kofk/kofk["$innerLoop"].png /var/www/html/output/json/kofk/shrt["$innerLoop"].png /var/www/html/output/json/kofk/kofk["$innerLoop"].png
rm /var/www/html/output/json/kofk/shrt["$innerLoop"].png
((innerLoop++))
done
exit

How to capture terminal screen output (with ansi color) to an image file?

I tried the following command to capture the output of a command (grep as an example) with color. But the result is shown as ^[[01;31m^[[Ka^[[m^[[K.
grep --color=always a <<< a |
a2ps -=book -B -q --medium=A4dj --borders=no -o out1.ps &&
gs \
-sDEVICE=png16m \
-dNOPAUSE -dBATCH -dSAFER \
-dTextAlphaBits=4 -q \
-r300x300 \
-sOutputFile=out2.png out1.ps
Is there a way to capture the color in the image? Thanks.

Split CSV to Multiple Files Containing a Set Number of Unique Field Values

As an awk beginner I am able to split the data by unique value with
awk -F, '{print >> $1".csv";close($1".csv")}' myfile.csv
But I would like to split a large CSV file based on additional condition which is the occurrences of unique values in a specific column.
Specifically, with input
111,1,0,1
111,1,1,1
222,1,1,1
333,1,0,0
333,1,1,1
444,1,1,1
444,0,0,0
555,1,1,1
666,1,0,0
I would like the output files to be
111,1,0,1
111,1,1,1
222,1,1,1
333,1,0,0
333,1,1,1
and
444,1,1,1
444,0,0,0
555,1,1,1
666,1,0,0
each of which contains three (in this case) unique values in the first column: 111, 222, 333 and 444, 555, 666 respectively.
Any help would be appreciated.
This will do the trick and I find it pretty readable and easy to understand:
awk -F',' 'BEGIN { count=0; filename=1 }
x[$1]++==0 {count++}
count==4 { count=1; filename++}
{print >> filename".csv"; close(filename".csv");}' file
We start with our count at 0 and our filename at 1. We then count each unique value we get from the first column, and whenever it's the 4th one, we reset our count and move on to the next filename.
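The counting idiom can be seen in isolation (toy input, unrelated to the files above): x[$1]++ evaluates to 0 only the first time a key is seen, so count goes up exactly once per unique first-column value.

```shell
# Count unique values in the first column with the x[$1]++ idiom:
printf '111\n111\n222\n333\n' |
awk '{ if (x[$1]++ == 0) count++ } END { print count }'
# prints: 3
```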
Here's some sample data I used, which is just yours with some additional lines.
~$ cat test.txt
111,1,0,1
111,1,1,1
222,1,1,1
333,1,0,0
333,1,1,1
444,1,1,1
444,0,0,0
555,1,1,1
666,1,0,0
777,1,1,1
777,1,0,1
777,1,1,0
777,1,1,1
888,1,0,1
888,1,1,1
999,1,1,1
999,0,0,0
999,0,0,1
101,0,0,0
102,0,0,0
And running the awk like so:
~$ awk -F',' 'BEGIN { count=0; filename=1 }
x[$1]++==0 {count++}
count==4 { count=1; filename++}
{print >> filename".csv"; close(filename".csv");}' test.txt
We see the following output files and content:
~$ cat 1.csv
111,1,0,1
111,1,1,1
222,1,1,1
333,1,0,0
333,1,1,1
~$ cat 2.csv
444,1,1,1
444,0,0,0
555,1,1,1
666,1,0,0
~$ cat 3.csv
777,1,1,1
777,1,0,1
777,1,1,0
777,1,1,1
888,1,0,1
888,1,1,1
999,1,1,1
999,0,0,0
999,0,0,1
~$ cat 4.csv
101,0,0,0
102,0,0,0
This one-liner would help:
awk -F, -v u=3 -v i=1 '{a[$1];
if (length(a)>u){close(i".csv");++i;delete a;a[$1]}print>i".csv"}' file
Change the u=3 value to x to get x unique values per file.
If you run this line with your input file, you should get 1.csv and 2.csv.
Edit (add some test output):
kent$ ll
total 4.0K
drwxr-xr-x 2 kent kent 60 Mar 25 18:19 ./
drwxrwxrwt 19 root root 580 Mar 25 18:18 ../
-rw-r--r-- 1 kent kent 90 Mar 25 17:57 f
kent$ cat f
111,1,0,1
111,1,1,1
222,1,1,1
333,1,0,0
333,1,1,1
444,1,1,1
444,0,0,0
555,1,1,1
666,1,0,0
kent$ awk -F, -v u=3 -v i=1 '{a[$1];if (length(a)>u){close(i".csv");++i;delete a;a[$1]}print > i".csv"}' f
kent$ head *.csv
==> 1.csv <==
111,1,0,1
111,1,1,1
222,1,1,1
333,1,0,0
333,1,1,1
==> 2.csv <==
444,1,1,1
444,0,0,0
555,1,1,1
666,1,0,0

Only one command line in PROJ.4

I would like to know if there is a way to write only one command line to obtain the expected results. Let me explain:
When you write this:
$ proj +proj=utm +zone=13 +ellps=WGS84 -f %12.6f
and you want to receive the output data:
500000.000000 4427757.218739
you must then write the input data on another line:
-105 40
Is it possible to concatenate this into a single command line, in this style?:
$ proj +proj=utm +zone=13 +ellps=WGS84 -f %12.6f | -105 40
Thank you
I also ran into this problem and found the solution:
echo -105 40 | proj +proj=utm +zone=13 +ellps=WGS84 -f %12.6f
That should do the trick.
If you need to do this e.g. from within C#, the command you'd use is this:
cmd.exe /c echo -105 40 | proj +proj=utm +zone=13 +ellps=WGS84 -f %12.6f
Note: you may need to double up the % signs, as the Windows command processor interprets % as introducing a variable.