I'm on the FIRST robotics team at my high school, and we are working on developing a kiwi drive robot, where three omni wheels are mounted in an equilateral triangle configuration, like this:
The problem is programming the robot to drive the motors such that the robot moves in the direction of a given joystick input. For example, to move "up", motors 1 and 2 would be powered equally, while motor 3 would be off. The joystick position is given as a vector, and I was thinking that if the motors were expressed as vectors too, vector projection might be what I need. However, I'm not sure if this is right, and if it is, how I would apply it. I also have a feeling that there may be multiple solutions for one joystick position. Any help would be greatly appreciated.
I've built 9 robots during my time at school (1 FIRST, 8 RoboCup), and we used the same omnidrive layout as you do. Beta's answer looks correct, but add a rotation term to all wheels afterwards:
W1 = -1/2 X - sqrt(3)/2 Y + R
W2 = -1/2 X + sqrt(3)/2 Y + R
W3 = X + R
[This is Beta's formula with some added Rotation]
You need to think about the available ranges for your motors. I am guessing they take a PWM signal of +/-255, so either the input or the output has to be scaled accordingly. (It's not that hard...)
A good paper with details
To answer your specific questions: Vector projection is essentially what you are doing here. You apply it by having a matrix M, your input from the joystick I and your output to the motors O. Thus O = M * I;
M = [ -0.5   -sqrt(3)/2   +1 ]
    [ -0.5   +sqrt(3)/2   +1 ]
    [  1      0           +1 ]
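To make the mixing concrete, here is a minimal Python sketch of O = M * I (the function name, the [-1, 1] joystick range, and the uniform scaling into the +/-255 PWM range are assumptions for illustration, not part of the answer above):

import math

# Rows are wheels 1..3; columns are the (X, Y, R) coefficients from M above.
M = [[-0.5, -math.sqrt(3) / 2, 1.0],
     [-0.5,  math.sqrt(3) / 2, 1.0],
     [ 1.0,  0.0,              1.0]]

def mix(x, y, r, pwm_max=255):
    """Map joystick (x, y) and rotation r, each in [-1, 1], to wheel PWM values."""
    w = [row[0] * x + row[1] * y + row[2] * r for row in M]
    # Scale down uniformly if any wheel would exceed the PWM range,
    # so the direction of travel is preserved.
    peak = max(1.0, max(abs(v) for v in w))
    return [v / peak * pwm_max for v in w]

print(mix(0.0, 1.0, 0.0))  # pure +Y: wheels 1 and 2 oppose each other, wheel 3 is off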
First let's define some terms. In keeping with the usual convention, the X axis will point to the right and the Y axis will point up (so that the thrust of wheel 3 is along the X axis). We'll call the motion of the wheels W1, W2 and W3, each defined so that Wi > 0 means that the wheel rotates in the clockwise direction. In your example, if W1 < 0, W2 = -W1 and W3 = 0, the robot will move in the +Y direction.
If all three wheels rotated at the same rate (W1 = W2 = W3) the robot would rotate in place. I'm guessing you don't want that, so the sum of the rotations must be zero: W1 + W2 + W3 = 0.
The motion of each wheel contributes to the motion of the robot; they add as vectors:
W1 = -1/2 X - sqrt(3)/2 Y
W2 = -1/2 X + sqrt(3)/2 Y
W3 = X
So if you know the desired X and Y from the joystick, you have W1, W2 and W3. As we've already seen, the difference between W1 and W2 is what drives Y motion. Their sum drives motion in X.
Though this system can be solved mathematically, in 2002 FIRST Team 857 chose to solve it mechanically. Our control system used three joysticks mounted with their X-axes forming an equilateral triangle, with the handles replaced by ball-socket arms connected to a Y-shaped yoke. Map the X-axis of each stick directly to a motor speed, and the control problem is solved. As an advantage, this system is very intuitive for laypeople to run: push the yoke in the direction you want to go, rotate it to turn.
As you have recognized, the first part of this will be finding an appropriate equation to represent the resultant motion for any motor settings. Depending on the level of control and feedback you have on your motor speeds, I would suggest the process you go through should start with writing a vector equation (define positive X as straight ahead):
-M1*cos(30) + M2*cos(30) = X   (the negative sign is because motors 1 and 2 must be driven at the same magnitude but with opposite polarities for forward motion)
M1*sin(30) + M2*sin(30) - M3 = Y   (anticlockwise motion on motors 1 and 2 moves the robot left along Y, while anticlockwise motion on motor 3 moves it to the right)
The other input you need to add is the desired rotation of the robot; thankfully, M1 + M2 + M3 = W (the rotational velocity).
Your joystick input will give you X,Y and W, so you have 3 equations with 3 unknowns.
From here it is a set of simultaneous linear equations. Three independent equations in three unknowns give a unique solution, though you will generally still need to scale the result to respect the achievable motor speeds and the like.
An example of this is the rec::robotino::com::OmniDrive class; the source code for this method is available too...
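As a sketch of that solving step (NumPy and the function name are assumptions; the coefficients come straight from the three equations above):

import numpy as np

# Rows: the X, Y and W equations above; columns: M1, M2, M3.
c30, s30 = np.cos(np.radians(30)), np.sin(np.radians(30))
A = np.array([[-c30,  c30,  0.0],   # -M1*cos(30) + M2*cos(30)      = X
              [ s30,  s30, -1.0],   #  M1*sin(30) + M2*sin(30) - M3 = Y
              [ 1.0,  1.0,  1.0]])  #  M1 + M2 + M3                 = W

def motor_speeds(x, y, w):
    """Solve the 3x3 system for (M1, M2, M3)."""
    return np.linalg.solve(A, np.array([x, y, w]))

print(motor_speeds(1.0, 0.0, 0.0))  # straight ahead, no rotation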
I have an ultrasound wave (graph axes: volts vs. microseconds) and need to cut the signal between two specific values for further analysis. My idea is to clip the signal at 0.2 V on the y-axis. The wave is sine-shaped, as shown in the figure, with the desired cutoff points in red.
In my current code, I'm cutting the signal between 1900 and 4000 ms (x-axis) with Aa = A(1900:4000); and then I want to apply the aforementioned clipping and proceed with the code.
Does anyone know how I could do this y-axis clipping?
Thanks!! :)
clear
clf
pkg load signal

for k = 1:2
  w = 1
  filename = strcat("PCB 2.1 (", sprintf("%01d", k), ").mat")
  load(filename)
  Lthisrun = length(A);
  Pico(k, 1:Lthisrun) = A;
  Aa = A(1900:4000);        % x-axis window
  Ah = abs(hilbert(Aa));    % signal envelope via the analytic signal
  step = 100;
  hold on
  i = 1;
  Ac = 0;
  for index = 1:step:3601
    Ac(i+1) = Ac(i) + Ah(i);  % running sum of the envelope
    i = i+1
    r(k) = trapz(Ac)
  end
end
OK, you want to look only at values "above the noise" in your data, or, in this case, "clip out" everything below 0.2 V. The easiest way to do this is with logical indexing: you can take an array and create a sub-array that eliminates everything that doesn't meet a certain logical condition. See this example:
f = @(x) sin(x)./x;    % sample signal to demonstrate the idea
x = -100:.1:100;
y = f(x);
plot(x, y);
figure;
x_trim = x(y > 0.2);   % keep only samples where y exceeds 0.2
y_trim = y(y > 0.2);
plot(x_trim, y_trim);
From your question it looks like you want to do the clipping after applying the horizontal windowing from 1900 to 4000. (You say that range is in milliseconds, but your image shows the pulse arriving much sooner than 1900 ms.) In any case, something like
Ab = Aa(Aa > 0.2);
will create another array Ab that will only contain the portions of Aa with values above 0.2. You may need to do something similar (see the example) for the horizontal axis if your x-data is not just the element index.
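If your x-data is a real time vector rather than the element index, apply the same mask to both arrays so they stay aligned. A tiny NumPy sketch of the idea (the array names and the synthetic signal are illustrative only):

import numpy as np

t = np.arange(0, 10, 0.01)                    # time axis (illustrative)
v = np.exp(-0.2 * t) * np.sin(2 * np.pi * t)  # decaying wave (illustrative)

mask = v > 0.2                      # logical index: samples above 0.2 V
t_clip, v_clip = t[mask], v[mask]   # the same mask on both axes keeps them aligned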
When using ol3-cesium with the map in 3D mode, calling map.getView().getZoom() returns undefined. This might affect setZoom as well.
I understand we are in a 3D world, so there are no z-levels as in tiled maps. On the other hand, Google Maps calculates a z-equivalent when coming back from 3D to 2D.
How can I convert from height to a z-equivalent? Is there any formula, taking latitude and altitude into account, to get the z-equivalent?
There's no easy formula to get a 2D "Z" value from 3D, because the 3D camera can be tilted, can see different levels of tiles in the foreground vs the background, etc.
For individual tiles however, there are specific known "Level" values from the imagery quadtree. You can see these in Cesium Inspector by clicking the little + next to the word Terrain on the right side, and then put a checkmark on Show tile coordinates. The coordinates shown include L, X, and Y, where L is the tile's level (0 being the most zoomed-out, higher numbers more zoomed in), and X and Y are 2D positions within the imagery layer.
I posted an answer on GIS SE that shows how to reach in and grab these tiles, the same way Cesium Inspector does, along with a number of caveats involved. You could potentially look for the highest-level visible tile, and use that as your "Z" value.
I know this is not accurate, but I'm sharing it in case it is of use to anyone.
I moved to several altitudes in Google Maps, switching between the 2D and 3D maps, and wrote down the z or altitude shown in the address bar:
z        altitude (metres)
-----    -----------------
3              10311040
4               5932713
5               2966357
6               1483178
7                741589
8.6              243624
11.35             36310
13.85              6410
15.26              2411
17.01               717
18.27               214
19.6                119
20.77                50
21                   44
With the above correspondences, I have approximated the following function:
function altitudeToZoom(altitude) {
  // Constants of a logistic curve fitted to the measurements above
  var A = 40487.57;
  var B = 0.00007096758;
  var C = 91610.74;
  var D = -40467.74;
  return D + (A - D) / (1 + Math.pow(altitude / C, B));
}
Based on your formula, the reverse conversion should be:
altitude = C * Math.pow((A - D) / (zoomLevel - D) - 1, 1 / B);
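A quick sanity check of that inversion (a Python sketch that just re-derives the algebra of the fitted curve; the function names are mine):

# Fitted constants from altitudeToZoom above.
A, B, C, D = 40487.57, 0.00007096758, 91610.74, -40467.74

def altitude_to_zoom(altitude):
    return D + (A - D) / (1 + (altitude / C) ** B)

def zoom_to_altitude(zoom):
    return C * ((A - D) / (zoom - D) - 1) ** (1 / B)

# Round trip: converting a zoom to an altitude and back recovers the zoom.
print(altitude_to_zoom(zoom_to_altitude(15.0)))  # prints ~15.0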
I implemented the CCD algorithm for inverse kinematics. It works great, but it fails with constraints. I want to implement a system in which, if the arm cannot reach the target, it tries to get as close to it as possible.
I tried to put the constraints into the CCD algorithm: if my rotation angle goes above or below a constraint, I limit it to the max or min. For example, if the rotation angle is 100 degrees and the constraint is 90, then I rotate 90 and calculate the other angles based on that. It works for some cases but fails for most.
Can anyone tell me a 3D IK algorithm which takes care of constraints?
CCD itself will work like a charm.
If you are working in 3D, first find the rotation you would apply about each axis without enforcing the bone limits.
After that, the approach
expectedRotation.X = min(expectedRotation.X, maxLimit.X)
expectedRotation.X = max(expectedRotation.X, minLimit.X)
expectedRotation.Y = min(expectedRotation.Y, maxLimit.Y)
expectedRotation.Y = max(expectedRotation.Y, minLimit.Y)
expectedRotation.Z = min(expectedRotation.Z, maxLimit.Z)
expectedRotation.Z = max(expectedRotation.Z, minLimit.Z)
is wrong, because if you can't move any further about one of the axes, the other two axes will keep moving and you will get weird results.
The fix:
If any of the three axes violates its limit constraint, you must not change the rotation at all.
First convert all the angles into degrees in the -180 to 180 range. Then the following will do:
// baseRotation is just the initial rotation from which the bone limits are calculated.
vector3df angleDifference = expectedRotation - baseRotation;

if (angleDifference.X < boneLimits.minRotation.X ||
    angleDifference.Y < boneLimits.minRotation.Y ||
    angleDifference.Z < boneLimits.minRotation.Z ||
    angleDifference.X > boneLimits.maxRotation.X ||
    angleDifference.Y > boneLimits.maxRotation.Y ||
    angleDifference.Z > boneLimits.maxRotation.Z)
    return currentRotation;   // any violation rejects the whole step

return expectedRotation;      // all three axes are within their limits
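The same accept/reject rule as a self-contained Python sketch (the names, tuple representation, and degree wrapping are my assumptions; the logic is exactly the test above):

def apply_bone_limits(expected, current, base, min_limit, max_limit):
    """Return expected only if every axis stays within its limit;
    otherwise keep the current rotation (all angles in degrees)."""
    def wrap180(a):
        # normalize an angle to the -180..180 range mentioned above
        return (a + 180.0) % 360.0 - 180.0

    for axis in range(3):
        diff = wrap180(expected[axis] - base[axis])
        if diff < min_limit[axis] or diff > max_limit[axis]:
            return current    # reject the whole step
    return expected           # all three axes are within limits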
I'm playing with body animation in AS3. I built a body with all its parts (excluding fingers) and made an XML describing the "skeleton". The XML holds the instance of each part and the point where the next part articulates. I made it work with Cartesian coordinates (x, y), and the body moves when I rotate a part and recalculate all the links again (each part at each articulation).
However, this demands recalculation for every little modification of the body, so now I'm optimizing it. Since x, y is easier for the design, when the body instance is created the class rebuilds the XML, converting the coordinates to the polar system (r, t), like this ("Quadro" is the node with the coordinates):
dx = Quadro.@x;
dy = Quadro.@y;
Quadro.@r = Math.sqrt(Math.pow(dx, 2) + Math.pow(dy, 2));
Quadro.@t = (dy > 0) ? Math.asin(dx / Quadro.@r) : Math.acos(dy / Quadro.@r);
I did some changes to make it work, but at least one quadrant is always wrong! In this case, the upper left is wrong: the neck and the head should be in this place, but they end up in the upper right (mirrored).
Any tips for a correct conversion in AS3?
Try to use this:
Quadro.@t = Math.atan2(dy, dx);
From Wikipedia:
The Cartesian coordinates x and y can be converted to polar coordinates r and φ with r ≥ 0 and φ in the interval (−π, π] by:
r = sqrt(x^2 + y^2)
φ = atan2(y, x)
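A quick Python round trip showing that the atan2-based conversion recovers every quadrant correctly (note that atan2 measures the angle from the +x axis, so r*cos(t) and r*sin(t) invert it):

import math

for dx, dy in [(1, 1), (-1, 1), (-1, -1), (1, -1)]:   # one point per quadrant
    r = math.hypot(dx, dy)              # same as sqrt(dx^2 + dy^2)
    t = math.atan2(dy, dx)              # angle in (-pi, pi], no mirroring
    back = (r * math.cos(t), r * math.sin(t))
    print((dx, dy), "->", (round(back[0], 6), round(back[1], 6)))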
From this site: http://www.catalinzima.com/?page_id=14
I've always been confused about how the depth map is calculated.
The vertex shader function calculates position as follows:
VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;

    float4 worldPosition = mul(input.Position, World);
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);

    output.TexCoord = input.TexCoord;          // pass the texture coordinates further
    output.Normal = mul(input.Normal, World);  // get normal into world space

    output.Depth.x = output.Position.z;
    output.Depth.y = output.Position.w;

    return output;
}
What are output.Position.z and output.Position.w? I'm not sure about the maths behind this.
And in the pixel shader there is this line: output.Depth = input.Depth.x / input.Depth.y;
So output.Depth is output.Position.z / output.Position.w? Why do we do this?
Finally in the point light shader (http://www.catalinzima.com/?page_id=55) to convert this output to be a position the code is:
//read depth
float depthVal = tex2D(depthSampler,texCoord).r;
//compute screen-space position
float4 position;
position.xy = input.ScreenPosition.xy;
position.z = depthVal;
position.w = 1.0f;
//transform to world space
position = mul(position, InvertViewProjection);
position /= position.w;
Again, I don't understand this. I sort of see why we use InvertViewProjection, since we multiplied by the view projection previously, but setting z to the depth value and w to 1, and then dividing the whole position by w, confuses me quite a bit.
To understand this completely, you'll need to understand how the algebra that underpins 3D transforms works. SO does not really help (or I don't know how to use it) with matrix math, so it'll have to be without fancy formulae. Here is a high-level explanation though:
If you look closely, you'll notice that all the transformations that happen to a vertex position (from model to world to view to clip coordinates) use 4D vectors. That's right: 4D. Why, when we live in a 3D world? Because in that 4D representation, all the transformations we usually want to apply to vertices are expressible as a matrix multiplication. This is not the case if we stay in a 3D representation. And matrix multiplications are what a GPU is good at.
What does a vertex in 3D correspond to in 4D? This is where it gets interesting. The point (x, y, z) corresponds to the line (a*x, a*y, a*z, a) for any nonzero a. We can grab any point on this line to do the math we need, and we usually pick the easiest one, a = 1 (that way, we don't have to do any multiplication, just set w = 1).
So that answers pretty much all the math you're looking at: to lift a 3D point into 4D we set w = 1; to get back a component of a 4D vector that we want to compare against our standard sizes in 3D, we divide that component by w.
This coordinate system, if you want to dive deeper, is called homogeneous coordinates.
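To make the divide-by-w step concrete, here is a small NumPy sketch (the projection matrix is a generic perspective matrix, not necessarily the one in Catalin Zima's tutorial) that lifts a view-space point to 4D, projects it, and then reconstructs it the same way the point-light shader does:

import numpy as np

def perspective(fov_y, aspect, near, far):
    # A generic right-handed perspective matrix (column-vector convention).
    f = 1.0 / np.tan(fov_y / 2.0)
    return np.array([[f / aspect, 0.0, 0.0, 0.0],
                     [0.0, f, 0.0, 0.0],
                     [0.0, 0.0, (far + near) / (near - far),
                      2.0 * far * near / (near - far)],
                     [0.0, 0.0, -1.0, 0.0]])

P = perspective(np.radians(60.0), 16.0 / 9.0, 0.1, 100.0)

p = np.array([1.0, 2.0, -5.0, 1.0])  # a 3D point lifted to 4D with w = 1
clip = P @ p                         # after projection, w is no longer 1
ndc = clip / clip[3]                 # perspective divide; ndc.z is the stored depth

# The shader's trick: rebuild (ndc.x, ndc.y, depth, 1), multiply by the
# inverse matrix, then divide by w to restore the original scale.
rebuilt = np.linalg.inv(P) @ np.array([ndc[0], ndc[1], ndc[2], 1.0])
rebuilt /= rebuilt[3]
print(np.allclose(rebuilt, p))       # True: the original point is recovered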