Start B clip after mixer transition ends - video-editing

I'm trying to mix two clips, but I'd like clip2 to start playing when the mixer transition ends, not when it begins.
Essentially, this should mix clip1 with only frame 0 of clip2.
I was wondering if there is a better alternative to my current workaround:
melt \
clip1.mp4 \
clip2.mp4 in=0 out=0 length=300 \
-mix 300 -mixer luma \
clip2.mp4
Perhaps there is something to pause clip2 at frame 0 for 300 frames?
(I'm actually doing this with two .mlt clips, but disabling audio via audio_index doesn't seem to work on .mlt clips, so I get a small one-frame audio jump; that's why this workaround isn't ideal.)

You cannot set audio_index on .mlt virtual clips because audio_index is a property of the avformat producer, but MLT XML is read by the xml producer.
You can use the hold producer to hold a frame and mute the audio. It defaults to a duration of 25 frames, so use out to override that:
melt clip1.mp4 hold:clip2.mp4 frame=0 out=299 -mix 300 -mixer luma clip2.mp4
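I haven't tested this, but since hold just takes a resource path, it may accept your .mlt clips directly and sidestep the audio_index problem (hypothetical sketch, same frame counts as above):
# hold frame 0 of the .mlt clip for 300 frames, then cut to the clip itself
melt clip1.mlt hold:clip2.mlt frame=0 out=299 -mix 300 -mixer luma clip2.mlt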

Related

Understanding/controlling MLT melt slideshow?

Consider the following bash script (on Ubuntu 18.04, melt 6.6.0), which uses melt to make a slideshow and play it locally in a window (SDL consumer), mostly copied from https://mltframework.org/blog/making_nice_slideshows/. (Edit: I'm aware that it's possible to specify files individually as in https://superuser.com/questions/833232/create-video-with-5-images-with-fadein-out-effect-in-ffmpeg/834035#834035 - but that approach seems to scale images during the transition, and takes quite a while to "render" before playing in the SDL window, while this one has nearly instant playback.)
echo "
description=DV PAL
frame_rate_num=25
frame_rate_den=1
width=720
height=576
progressive=0
sample_aspect_num=59
sample_aspect_den=54
display_aspect_num=4
display_aspect_den=3
colorspace=601
" > my-melt.profile
mkdir tmppics
convert -background lightblue -fill blue -size 3840x2160 -pointsize 200 -gravity center label:"Test A" tmppics/pic_01.jpg
convert -background lightblue -fill blue -size 3840x2160 -pointsize 200 -gravity center label:"Test B" tmppics/pic_02.jpg
melt -verbose -profile ./my-melt.profile \
./tmppics/.all.jpg ttl=6 \
-attach crop center=1 \
-filter luma cycle=6 duration=4 \
-consumer sdl
When I run the above command, the video shows the two images looping, but the frame counter keeps going, increasing indefinitely. How do I make it stop after exactly as many frames as the loop is long?
As far as I can see, the size of the output video is controlled by a profile; that is, even if I don't specify -profile, a default one is assumed; is that correct?
The original images look like this:
... and the video looks like this:
... which means the aspect ratio is wrong; additionally I can see jagged edges, meaning the scaled image in the video is not antialiased.
How do I make the image fit in video size with correct aspect ratio, with antialiasing/smoothing? (I guess it has to do with -attach crop center=1, but I couldn't find documentation on that).
When viewing in SDL and stepping through frames, are frames numbered 0-based, or are they 1-based, with frame 0 simply showing the same frame as frame 1?
If I use ttl=6 and -filter luma cycle=6 duration=4, I get this:
... that is, visible transition starts at frame 7 (frame 6 is full image A), lasts for frames 7 and 8, and ends at frame 9 (which is full image B); then again at frames 13 and 14 (frame 15 is full image A)
However, if I use ttl=6 and -filter luma cycle=6 duration=2, then I get this:
... that is, there is no transition, image instantly changes at frame 7, then again at frame 13, etc.
So I'd call the first case a transition duration of 2 frames, and the second case a duration of 0 frames - yet the options are duration=4 and duration=2, respectively. Can anyone explain why? Where did those 2 frames of difference go?
Can I - and if so, how - do the same kind of slideshow, except with fade to black? I'd like to define a "time to live" (ttl) of 6 frames per image, and a transition of 4 frames, such that:
first, 4 frames are shown of image A;
then one frame image A faded, followed by one frame black (amounting to 6 frames TTL for image A, the last 2 transition);
then two frames image B faded (amounting to 4 frames transition with previous 2), followed by two more frames image B full (so 4 frames here of image B);
then one frame image B faded, followed by one frame black (amounting to 6 frames TTL for image B);
... etc.
Is it possible to persuade melt to use globbing to select images for the slideshow, instead of using .all.jpg? As far as I can tell from the MLT (Media Lovin' Toolkit) Photo Slide Video page, no - but maybe there is another approach...
OK, so I spent some time looking into melt's options, and it turns out there is actually a pretty effective way of handling a large batch of images (useful when the argument list is too long or has too many characters for your terminal to handle).
What you want to do is use -serialise <name of file>.melt, which will store your commands in a file (you can also create this file manually). Then, to execute that file, run melt <name of file>.melt along with any other options you have for your video file.
Example Format:
melt <images and what to do to them> -serialise <name of file>.melt
Example
Create the melt file (with Melt CLI)
melt image1.png out=50 image2.png out=75 -mix 25 -mixer luma image3.png out=75 -mix 25 -mixer luma image3.png out=75 -mix 25 -mixer luma image4.png out=75 -mix 25 -mixer luma <...> -serialise test.melt
.melt file format
test.melt
image1.png
out=50
image2.png
out=75
-mix
25
-mixer
luma
image3.png
out=75
-mix
25
-mixer
luma
image3.png
out=75
-mix
25
-mixer
luma
image4.png
out=75
-mix
25
-mixer
luma
<...>
Run
melt test.melt -profile atsc_1080p_60 -consumer avformat:output.mp4 vcodec=libx264 an=1
Additional Notes
There should be an extra return character at the end of the .melt file. If there isn't, the error Exceeded maximum line length (2048) while reading a melt file. will be printed. A quick way to append one is shown after these notes.
Notice that -serialise <name of file>.melt will not appear in the .melt file itself.
Melt will actually take some time to load the .melt file before the encoding process begins.
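If your editor didn't leave that final newline, one simple way to append it (plain POSIX shell; test.melt as in the example above) is:
# append the trailing newline melt expects at the end of a .melt file
printf '\n' >> test.melt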

POV-Ray: summing scenes with separate light sources vs. rendering all together

I have a scene with two street lights (area lights) illuminating a flagpole between them. If I render the scene once for each light and sum the results, I get less contrast in each shadow than if I render the scene with both lights on together. I export the scenes as 16-bit PNG. The signal is <1 everywhere with both lights on, and I don't normalize. The background is set to black.
Should summing the scenes with separate light sources give the same result as rendering the scene once with both light sources? I have set Display_Gamma=1.0 and assumed_gamma 1.0.
Some info on the scene (I can post an example POV-Ray file if necessary):
The floor material is:
pigment { color rgbft<1.0,1.0,1.0,0.0,0.0> }
finish {ambient 0 diffuse albedo 1.}
The flag pole is:
texture { pigment {color Black }}
The light sources are like:
color rgb<0.125,0.125,0.125>
area_light <5, 0, 0>, <0, 0, 5>, 5, 5
adaptive 5
circular
OK, I also had to set File_Gamma=1.0.
So, to keep linear intensity units throughout, set the global scene option assumed_gamma 1.0, and in the rendering command include the parameters File_Gamma=1.0 and Display_Gamma=1.0.
All three gammas need to be 1.0 if you want to measure shadow contrast in linear units.
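For example (a sketch with hypothetical file names, assuming POV-Ray 3.7 and ImageMagick; each .pov contains global_settings { assumed_gamma 1.0 }):
# render each single-light pass as linear 16-bit PNG (+FN16), all gammas at 1.0
povray +Ilight1.pov +Olight1.png +FN16 Display_Gamma=1.0 File_Gamma=1.0
povray +Ilight2.pov +Olight2.png +FN16 Display_Gamma=1.0 File_Gamma=1.0
# sum the two linear passes; this should now match the two-light render
convert light1.png light2.png -evaluate-sequence sum summed.png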

How to get collision between player's bat and ball and ignore player's body

I have a baseball player animation in which the player hits with a bat; the animation has 90 frames, plus one sprite for the ball. I added a circle physics body to the ball. How can I make it (what is the concept?) so that collisions are detected only between the ball and the bat, and not between the player's body and the ball?
The bat is part of every frame image, together with the player.
I am using Cocos2d-x and Chipmunk, but I can switch to Box2D easily if that makes this problem solvable.
What you are looking for is a way to distinguish between collisions in your physics world. With Box2D you can set user data on a physics body. When a collision happens, you get the two colliding bodies, read the user data set on each, and handle the collision appropriately.
Pseudo-code below to create the ball body:
b2BodyDef ballBodyDef;
ballBodyDef.type = b2_dynamicBody;
ballBodyDef.position.Set(tileNode->getPositionX() / PTM_RATIO,
                         tileNode->getPositionY() / PTM_RATIO);
b2CircleShape ballShape;
ballShape.m_radius = ballSize / PTM_RATIO; // pixel-to-meter ratio is 32.0 in my project
auto ballBody = physicsWorld->CreateBody(&ballBodyDef);
ballBody->CreateFixture(&ballShape, 1.0f); // the fixture is what actually collides
// This will be used later when you want different reactions for different collisions.
ballBody->SetUserData((void*)"BallBody");
Using similar code to the above, create your bat body (I won't put all the code below).
...
auto batBody = physicsWorld->CreateBody(&batBodyDef);
batBody->SetUserData((void*)"BatBody");
When doing the check in your contact listener, you can ignore all collisions whose user data does not match BallBody and BatBody. Something like the code below in your collision handler:
auto objectOne = contact->GetFixtureA()->GetBody();
auto objectTwo = contact->GetFixtureB()->GetBody();
auto objectOneUserData = static_cast<const char*>(objectOne->GetUserData());
auto objectTwoUserData = static_cast<const char*>(objectTwo->GetUserData());
// Bail out unless one body is the ball and the other is the bat.
if (!objectOneUserData || !objectTwoUserData) return;
if (!((strcmp(objectOneUserData, "BallBody") == 0 && strcmp(objectTwoUserData, "BatBody") == 0) ||
      (strcmp(objectOneUserData, "BatBody") == 0 && strcmp(objectTwoUserData, "BallBody") == 0))) return;
Do your checks on the user data as above and return out of the function otherwise. This should do the trick; it's what I use to handle or ignore specific types of collisions.
I hope this helps.

Melt: how to extract a frame every second?

If I use:
melt -profile square_ntsc movie.flv -consumer avformat:image%05d.jpg s=60x45
I get as many images as there are frames in the movie.
How do I extract one image per second (the frame rate is known)? Equivalently, how do I extract an image every n-th frame?
Found the answer: just add r=1 to the consumer, as in:
melt -profile square_pal movie.flv -consumer avformat:image%05d.jpg r=1
to get 2 images per second, write:
melt -profile square_pal movie.flv -consumer avformat:image%05d.jpg r=2
to get an image every 2 seconds, write:
melt -profile square_pal movie.flv -consumer avformat:image%05d.jpg r=0.5
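More generally, since r sets the consumer's output frame rate, one image every n-th source frame means r = source_fps / n. For instance, with a 25 fps profile like square_pal, every 5th frame is r=5:
# 25 fps source, every 5th frame: r = 25 / 5 = 5 images per second
melt -profile square_pal movie.flv -consumer avformat:image%05d.jpg r=5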

Walkcycles and timing in pygame

I have a pygame.Timer running in my game, calling a draw function 32 times per second. The drawing method gets the positions of all elements on my screen and blits them accordingly. However, I want the main character to walk around more slowly than the other objects move.
Should I set up a timer specifically for it or should I just blit the same frames several times? Is there any better way to do it? A push in the right direction would be awesome :)
(If anyone's interested, here is the code that currently controls what frames to send to the drawing: http://github.com/kallepersson/subterranean-ng/blob/master/Player.py#L88)
Your walk cycle frame (like all motion) should be a function of absolute time, not of frame count. e.g.:
def walk_frame(millis, frames_per_second, framecount, start_millis=0):
    # how long each animation frame stays on screen
    millis_per_frame = 1000 // frames_per_second
    elapsed_millis = millis - start_millis
    # whole animation frames elapsed since the walk started
    total_frames = elapsed_millis // millis_per_frame
    # wrap around to stay inside the walk cycle
    return total_frames % framecount