Best inverse kinematics algorithm with constraints on joint angles

I implemented the CCD algorithm for inverse kinematics. It works great, but it fails with constraints. I want a system in which, if the arm cannot reach the target, it tries to get as close to it as possible.
I tried to put the constraints into the CCD algorithm: if my rotation angle goes above or below a constraint, I limit it to the max or min. For example, if the rotation angle is 100 degrees and the constraint is 90, then I rotate 90 and calculate the other angles based on that. It works for some cases but fails for most.
Can anyone suggest a 3D IK algorithm that takes care of constraints?

CCD itself will work like a charm.
If you are working in 3D, first find the rotation you would apply about each axis without taking the bone limits into account. After that, the approach
expectedRotation.X = min(expectedRotation.X, maxLimit.X)
expectedRotation.X = max(expectedRotation.X, minLimit.X)
expectedRotation.Y = min(expectedRotation.Y, maxLimit.Y)
expectedRotation.Y = max(expectedRotation.Y, minLimit.Y)
expectedRotation.Z = min(expectedRotation.Z, maxLimit.Z)
expectedRotation.Z = max(expectedRotation.Z, minLimit.Z)
is wrong, because if you can no longer move about one of the axes, the other two axes will keep moving and you will get weird results.
The fix:
If any of the three axes violates its limit constraint, you must not change the rotation at all.
First convert all angles to degrees in the -180 to 180 range. Then the following will do:
vector3df angleDifference = expectedRotation - baseRotation; // baseRotation is the initial rotation from which the bone limits are measured
if (angleDifference.X < boneLimits.minRotation.X || angleDifference.X > boneLimits.maxRotation.X ||
    angleDifference.Y < boneLimits.minRotation.Y || angleDifference.Y > boneLimits.maxRotation.Y ||
    angleDifference.Z < boneLimits.minRotation.Z || angleDifference.Z > boneLimits.maxRotation.Z)
    return currentRotation; // at least one axis is out of range: keep the old rotation
return expectedRotation; // all axes are within their limits: accept the new rotation
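
For reference, here is a minimal sketch of the same reject-if-out-of-range step in TypeScript (the Vec3 type and all names are placeholders, not from any particular engine; angles are assumed pre-normalized to -180..180 degrees):

interface Vec3 { x: number; y: number; z: number; }

// Returns the rotation CCD should actually apply: the expected rotation if it
// respects the bone limits on every axis, otherwise the unchanged current one.
function applyBoneLimits(expected: Vec3, current: Vec3, base: Vec3,
                         minRot: Vec3, maxRot: Vec3): Vec3 {
  const diff = { x: expected.x - base.x,
                 y: expected.y - base.y,
                 z: expected.z - base.z };
  const outOfRange =
    diff.x < minRot.x || diff.x > maxRot.x ||
    diff.y < minRot.y || diff.y > maxRot.y ||
    diff.z < minRot.z || diff.z > maxRot.z;
  // Reject the whole step instead of clamping a single axis, so the other
  // two axes cannot keep moving on their own.
  return outOfRange ? current : expected;
}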


Cut ultrasound signal between specific values using Octave

I have an ultrasound wave (graph axes: volts vs. microseconds) and need to cut the signal/wave between two specific values to analyze the clipped part further. My idea is to clip the signal at 0.2 V on the y-axis. The wave is sine shaped, as shown in the figure, with the desired cutoff points in red.
In my current code I'm cutting the signal between 1900 and 4000 ms on the x-axis (Aa = A(1900:4000);), and then I want to apply the aforementioned clipping and proceed with the rest of the code.
Does anyone know how I could do this y-axis clipping?
Thanks!! :)
clear
clf
pkg load signal
for k = 1:2
  w = 1;
  filename = strcat("PCB 2.1 (", sprintf("%01d", k), ").mat");
  load(filename);
  Lthisrun = length(A);
  Pico(k, 1:Lthisrun) = A;
  Aa = A(1900:4000);        % horizontal window
  Ah = abs(hilbert(Aa));    % envelope of the windowed signal
  step = 100;
  hold on
  i = 1;
  Ac = 0;
  for index = 1:step:3601   % note: 'index' is unused; 'i' advances by one per iteration
    Ac(i+1) = Ac(i) + Ah(i);
    i = i + 1;
    r(k) = trapz(Ac);
  end
end
OK, so you want to look only at values "above the noise" in your data, or in this case, to "clip out" everything below 0.2 V. The easiest way to do this is with logical indexing: you can take an array and create a sub-array, eliminating everything that doesn't meet a certain logical condition. See this example:
f = @(x) sin(x)./x;    % anonymous function to generate some test data
x = [-100:.1:100];
y = f(x);
plot(x, y);
figure;
x_trim = x(y > 0.2);   % keep only the samples where y exceeds 0.2
y_trim = y(y > 0.2);
plot(x_trim, y_trim);
From your question it looks like you want to do the clipping after applying the horizontal windowing from 1900 to 4000. (You say that this is in milliseconds, but your image shows the pulse arriving much sooner than 1900 ms.) In any case, something like
Ab = Aa(Aa > 0.2);
will create another array Ab that contains only the portions of Aa with values above 0.2. You may need to do something similar (see the example) for the horizontal axis if your x-data is not just the element index.

ActionScript 3 - Random movement on stage? Also, boundaries?

I'm trying to code something where creatures run back and forth, up and down across the stage, and I, the player, have to go up to them and pick them up. There are also boundaries on stage:
The map constraints, a big rectangular box, are easy enough to accomplish. I've done this.
The boundaries within the map are also rectangles, but instead of bouncing the player back INSIDE the rectangle, I'm trying to do the opposite: keep the player out of it.
My code for it looks like this as of now:
// Conditions that check if player/monsters are hit-testing the boxes (rocks
// and stuff), then if so, bounce them away. The following code excludes
// the monsters for simplicity.
if ((mcPlayer.x - aBounceBox[b].x) < 0
    && mcPlayer.y <= (aBounceBox[b].y + aBounceBox[b].height/2)
    && mcPlayer.y >= (aBounceBox[b].y - aBounceBox[b].height/2))
{
    mcPlayer.x = aBounceBox[b].x - aBounceBox[b].width/2 - mcPlayer.width/2;
}
// Duplicate of the above code for the right side of the box goes here
if ((mcPlayer.y - (aBounceBox[b].y + aBounceBox[b].height/2)) < 0
    && (mcPlayer.x + mcPlayer.width/2) > (aBounceBox[b].x - aBounceBox[b].width/2)
    && (mcPlayer.x - mcPlayer.width/2) < (aBounceBox[b].x + aBounceBox[b].width/2))
{
    mcPlayer.y = aBounceBox[b].y + aBounceBox[b].height/2;
}
// Duplicate of the above code for the upper boundary of the box goes here
The above doesn't work very well, because the bounce code for the left and right sides of the box conflicts with the code for the upper and lower sides I'm hit-testing. Any ideas how to do this smoothly?
Also, another problem I'm having is the pathing for the monsters in the game. I'm trying to get them to do the following:
- Move around "organically", a little randomly: move a little, then stop. If they encounter a boundary, they stop and move elsewhere; I'm not concerned where to, as long as they stop moving into rocks, trees and things like that.
- Overlap each other as little as possible as they move around the stage.
- Push each other apart if they are overlapping, although I'd like to allow them to overlap very slightly.
I'm building that code slowly, but I thought I'd ask if anyone has ideas on how to do this.
To answer your first question, you could implement a new class/object that represents the xy-offset between two display objects. To illustrate the idea more clearly, you can have a function similar to this:
public function getOffset(source:DisplayObject, target:DisplayObject):Object {
    var dx:Number = target.x - source.x;
    var dy:Number = target.y - source.y;
    return { x:dx, y:dy };
}
First check whether the hero character is colliding with another object at all, using hitTestObject(displayObj) from the DisplayObject class, and proceed only if the result is true.
Suppose you pass in your hero character as the source object and an obstacle as the target object:
var offset:Object = getOffset(my_hero.mc, some_obstacle.mc);
After getting the resulting offset values, compare the magnitudes (absolute values) of offset.x and offset.y. Let absDx be Math.abs(offset.x) and absDy be Math.abs(offset.y); the outcome can be summarized as follows:
- absDx < absDy:
  - offset.y < 0: the target is above the source
  - offset.y > 0: the target is below the source
- absDx > absDy:
  - offset.x < 0: the target is to the left of the source
  - offset.x > 0: the target is to the right of the source
- absDx == absDy: refer to one of the above cases; it doesn't really matter
Then you can update the position of your hero character according to different situations.
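
As a rough sketch of that comparison in TypeScript (plain objects instead of Flash display objects; names are illustrative):

// Decide which side of the source the target is on, using the larger
// component of the offset, exactly as summarized in the list above.
function collisionSide(source: { x: number; y: number },
                       target: { x: number; y: number }): string {
  const dx = target.x - source.x;   // same values getOffset would return
  const dy = target.y - source.y;
  if (Math.abs(dx) < Math.abs(dy)) {
    return dy < 0 ? "above" : "below";  // resolve the collision vertically
  }
  return dx < 0 ? "left" : "right";     // resolve horizontally (ties land here too)
}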
For your second question, concerning a very simple AI algorithm for your creatures: you can reuse the strategy above so the creatures can verify whether they collide with anything. If they do collide, assign them another direction of movement, or, even simpler, just flip the signs (+/-) of their velocities so they travel in the opposite direction.
It is easier to implement simple algorithms first. Once they are working, you can apply whatever enhancements you like afterwards, for example changing direction at junctions or every 3 seconds.
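
A minimal sketch of that velocity-flip idea in TypeScript, assuming each creature carries vx/vy velocity fields (names are illustrative):

// On collision, reverse the creature's velocity so it wanders away again.
// The occasional random re-roll keeps the movement looking organic.
function bounceCreature(c: { vx: number; vy: number }): void {
  c.vx = -c.vx;
  c.vy = -c.vy;
  if (Math.random() < 0.3) {              // sometimes pick a fresh direction
    const speed = Math.hypot(c.vx, c.vy); // keep the same speed
    const angle = Math.random() * 2 * Math.PI;
    c.vx = speed * Math.cos(angle);
    c.vy = speed * Math.sin(angle);
  }
}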

AS3 Polar coordinate from Cartesian coordinate

I'm playing with body animation in AS3. I built a body with all its parts (excluding fingers) and made an XML "skeleton". The XML holds the instance of each part and the point where the next part articulates. I made it work with Cartesian coordinates (x, y): the body moves when I rotate a part and recalculate all the links again (each part at its articulation).
However, this demands some recalculation for every little modification of the body, so now I'm optimizing it. Since (x, y) is easier for the design, when the body instance is created the class rebuilds the XML, converting the coordinates to the polar system (r, t), like this ("Quadro" is the node with the coordinates):
dx = Quadro.@x;
dy = Quadro.@y;
Quadro.@r = Math.sqrt(Math.pow(dx,2) + Math.pow(dy,2));
Quadro.@t = (dy>0)? Math.asin(dx/Quadro.@r) : Math.acos(dy/Quadro.@r);
I made some changes to get it to work, but at least one quadrant is always wrong! In this case the upper left is wrong: the neck and the head should be in this place, but they end up in the upper right (mirrored).
Any tips for a correct conversion in AS3?
Try to use this:
Quadro.@t = Math.atan2(dy, dx);
From Wikipedia:
The Cartesian coordinates x and y can be converted to polar coordinates r and φ, with r ≥ 0 and φ in the interval (−π, π], by:
r = sqrt(x^2 + y^2)
φ = atan2(y, x)
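
A tiny TypeScript sketch of that conversion (the E4X attribute handling from the question is left out; names are illustrative):

// Cartesian -> polar using atan2, which picks the correct quadrant
// automatically because it looks at the signs of both arguments.
function toPolar(dx: number, dy: number): { r: number; t: number } {
  const r = Math.sqrt(dx * dx + dy * dy);
  const t = Math.atan2(dy, dx);   // angle in (-PI, PI]
  return { r: r, t: t };
}

// A point up and to the left (with y pointing up): the angle comes out
// at 135 degrees instead of being mirrored into the wrong quadrant.
console.log(toPolar(-1, 1));      // { r: 1.414..., t: 2.356... }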

How to control a kiwi drive robot?

I'm on the FIRST Robotics team at my high school, and we are working on a kiwi drive robot, where three omni wheels are mounted in an equilateral triangle configuration, like this:
The problem is programming the robot to drive the motors such that the robot moves in the direction of a given joystick input. For example, to move "up", motors 1 and 2 would be powered equally while motor 3 would be off. The joystick position is given as a vector, and I was thinking that if the motors were expressed as vectors too, vector projection might be what I need. However, I'm not sure if this is right, and if it is, how I would apply it. I also have a feeling that there may be multiple solutions for one joystick position. Any help would be greatly appreciated.
I've built 9 robots during my time at school (1 FIRST, 8 RoboCup), and we used the same omnidrive layout you do. Beta's answer looks correct, but add a rotation term to all wheels afterwards:
W1 = -1/2 X - sqrt(3)/2 Y + R
W2 = -1/2 X + sqrt(3)/2 Y + R
W3 = X + R
[This is Beta's formula with some added Rotation]
You need to think about the available ranges for your motors. I am guessing it can take a PWM signal of +/-255, so either the input or the output has to be adjusted somewhat. (It's not that hard...)
A good paper with details
To answer your specific questions: vector projection is essentially what you are doing here. You apply it by having a matrix M, your input from the joystick I, and your output to the motors O; thus O = M * I, with

M = [ -0.5  -sqrt(3)/2  +1
      -0.5  +sqrt(3)/2  +1
      +1     0          +1 ]
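
As a rough sketch of that O = M * I multiplication in TypeScript (the +/-255 PWM limit is the guess from above, passed in as a parameter):

// Joystick input x, y plus rotation r; output: three wheel speeds.
function kiwiWheelSpeeds(x: number, y: number, r: number,
                         limit: number = 255): [number, number, number] {
  const s = Math.sqrt(3) / 2;
  let w1 = -0.5 * x - s * y + r;  // row 1 of M applied to (x, y, r)
  let w2 = -0.5 * x + s * y + r;  // row 2
  let w3 = x + r;                 // row 3
  // If any wheel would exceed the assumed PWM range, scale all three
  // together so the direction of travel is preserved.
  const peak = Math.max(Math.abs(w1), Math.abs(w2), Math.abs(w3));
  if (peak > limit) {
    const k = limit / peak;
    w1 *= k; w2 *= k; w3 *= k;
  }
  return [w1, w2, w3];
}

For instance, kiwiWheelSpeeds(0, 200, 0) powers wheels 1 and 2 with equal magnitude and opposite sign and leaves wheel 3 off, matching the "move up" example from the question.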
First let's define some terms. In keeping with the usual convention, the X axis will point to the right and the Y axis will point up (so that the thrust of wheel 3 is along the X axis). We'll call the motions of the wheels W1, W2 and W3, each defined so that Wi > 0 means the wheel rotates in the clockwise direction. In your example, if W1 < 0, W2 = W1 and W3 = 0, the robot will move in the +Y direction.
If all three wheels rotated at the same rate (W1 = W2 = W3), the robot would rotate in place. I'm guessing you don't want that, so the sum of the rotations must be zero: W1 + W2 + W3 = 0.
The motion of each wheel contributes to the motion of the robot; they add as vectors:
W1 = -1/2 X - sqrt(3)/2 Y
W2 = -1/2 X + sqrt(3)/2 Y
W3 = X
So if you know the desired X and Y from the joystick, you have W1, W2 and W3. As we've already seen, the difference between W1 and W2 is what drives Y motion. Their sum drives motion in X.
Though this system can be solved mathematically, in 2002, FIRST Team 857 chose to solve it mechanically. Our control system used three joysticks mounted with their X-axes forming an equilateral triangle, and handles replaced with ball-socket arms connected with a Y-shaped yoke. Map the X-axis of each stick directly to a motor speed, and the control system has been solved. As an advantage, this system is very intuitive for laypeople to run--push the yoke in the direction you want to go, rotate it to turn.
As you have recognized, the first part of this is finding an appropriate equation to represent the resultant motion for any motor settings. Depending on the level of control and feedback you have over your motor speeds, I would suggest the process you go through should start with writing a vector equation (define positive X as straight ahead):
-M1*cos(30) + M2*cos(30) = X (the negative is because 1 and 2 must be powered with the same magnitude but opposite polarities for forward motion)
M1*sin(30) + M2*sin(30) - M3 = Y (as anticlockwise motion on 1 and 2 will move the robot left in Y, and anticlockwise motion on 3 will move it to the right)
The other input you need to add is the desired rotation of the robot; thankfully, M1 + M2 + M3 = W (rotational velocity).
Your joystick input will give you X, Y and W, so you have 3 equations with 3 unknowns.
From here it is simultaneous equations, so you may end up with multiple solutions, but these can generally be restricted based on possible motor speeds and the like.
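
For illustration, here is a small TypeScript sketch that solves those three simultaneous equations for M1, M2 and M3 with Cramer's rule (purely a sketch under the conventions defined above):

// Determinant of a 3x3 matrix, expanded along the first row.
function det3(a: number[][]): number {
  return a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
       - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
       + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]);
}

// Solve for [M1, M2, M3] given joystick X, Y and rotation W.
function solveMotors(x: number, y: number, w: number): number[] {
  const c = Math.cos(Math.PI / 6);  // cos(30 degrees)
  const s = Math.sin(Math.PI / 6);  // sin(30 degrees)
  const a = [[-c, c, 0], [s, s, -1], [1, 1, 1]];  // coefficients of the three equations
  const b = [x, y, w];
  const d = det3(a);
  // Cramer's rule: replace one column at a time with the right-hand side.
  return [0, 1, 2].map(col => {
    const ai = a.map((row, i) => row.map((v, j) => (j === col ? b[i] : v)));
    return det3(ai) / d;
  });
}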
An example of this is the rec::robotino::com::OmniDrive Class - the source code for this method is available too...

How does this work in computing the depth map?

From this site: http://www.catalinzima.com/?page_id=14
I've always been confused about how the depth map is calculated.
The vertex shader function calculates position as follows:
VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    float4 worldPosition = mul(input.Position, World);
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);
    output.TexCoord = input.TexCoord; // pass the texture coordinates further
    output.Normal = mul(input.Normal, World); // get normal into world space
    output.Depth.x = output.Position.z;
    output.Depth.y = output.Position.w;
    return output;
}
What are output.Position.z and output.Position.w? I'm not sure about the maths behind this.
And in the pixel shader there is this line: output.Depth = input.Depth.x / input.Depth.y;
So output.Depth is output.Position.z / output.Position.w? Why do we do this?
Finally, in the point light shader (http://www.catalinzima.com/?page_id=55), the code that converts this output back into a position is:
//read depth
float depthVal = tex2D(depthSampler,texCoord).r;
//compute screen-space position
float4 position;
position.xy = input.ScreenPosition.xy;
position.z = depthVal;
position.w = 1.0f;
//transform to world space
position = mul(position, InvertViewProjection);
position /= position.w;
Again, I don't understand this. I sort of see why we use InvertViewProjection, as we multiplied by the view projection previously, but setting z to the depth value and w to 1, and then dividing the whole position by w, confuses me quite a bit.
To understand this completely, you'll need to understand how the algebra that underpins 3D transforms works. This format doesn't really lend itself to matrix math, so it will have to be done without fancy formulas. Here is a high-level explanation, though:
If you look closely, you'll notice that all the transformations that happen to a vertex position (from model to world to view to clip coordinates) use 4D vectors. That's right: 4D. Why, when we live in a 3D world? Because in that 4D representation, all the transformations we usually want to apply to vertices are expressible as matrix multiplications, which is not the case if we stay in a 3D representation. And matrix multiplications are what a GPU is good at.
What does a 3D vertex correspond to in 4D? This is where it gets interesting. The point (x, y, z) corresponds to the line (a*x, a*y, a*z, a). We can grab any point on this line to do the math we need, and we usually pick the easiest one, a = 1 (that way we don't have to do any multiplication, just set w = 1).
So that answers pretty much all the math you're looking at: to project a 3D point into 4D we set w = 1, and to get back a component of a 4D vector that we want to compare against our standard sizes in 3D, we divide that component by w.
This coordinate system, if you want to dive deeper, is called homogeneous coordinates.
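
To make the round trip concrete, here is a small TypeScript sketch of the reconstruction step, assuming a 4x4 inverse view-projection matrix stored row-major and a column-vector convention (conventions differ between APIs, so treat this as illustrative rather than as the article's exact code):

// Rebuild a world-space position from screen-space xy plus the stored depth.
function reconstructWorldPos(screenX: number, screenY: number, depth: number,
                             invViewProj: number[]): [number, number, number] {
  // Lift the 3D point into 4D by setting w = 1, as described above.
  const p = [screenX, screenY, depth, 1];
  const out = [0, 0, 0, 0];
  for (let row = 0; row < 4; row++) {
    for (let col = 0; col < 4; col++) {
      out[row] += invViewProj[row * 4 + col] * p[col];
    }
  }
  // The result is some point (a*x, a*y, a*z, a) on the 4D line; dividing
  // by w picks the representative with w = 1, i.e. the actual 3D point.
  return [out[0] / out[3], out[1] / out[3], out[2] / out[3]];
}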