Google Slides Rotate Rectangle 35 degrees - google-apps-script

I am trying to understand rotation in the Google Slides API.
3000000 is the object's width and height.
Suppose we want to rotate a rectangle 35 degrees counterclockwise.
I'm trying to understand the example parameters in the documentation on transforms, and how to rotate: https://developers.google.com/slides/samples/transform
What are the -0.5 and the 0.3 below? How are they derived?
Additionally, what are the -2000000 and -550000?
Lastly, is there any shorthand method of doing this? Three requests and 15 lines, just to rotate a rectangle?
{
  "requests": [
    {
      "updatePageElementTransform": {
        "objectId": pageElementId,
        "applyMode": "RELATIVE",
        "transform": {
          "scaleX": 1,
          "scaleY": 1,
          "translateX": -2000000 - 0.5 * 0.3 * 3000000,
          "translateY": -550000 - 0.5 * 0.12 * 3000000,
          "unit": "EMU"
        }
      }
    },
    {
      "updatePageElementTransform": {
        "objectId": pageElementId,
        "applyMode": "RELATIVE",
        "transform": {
          "scaleX": cos(35 * (pi/180)),
          "scaleY": cos(35 * (pi/180)),
          "shearX": sin(35 * (pi/180)),
          "shearY": -sin(35 * (pi/180)),
          "unit": "EMU"
        }
      }
    },
    {
      "updatePageElementTransform": {
        "objectId": pageElementId,
        "applyMode": "RELATIVE",
        "transform": {
          "scaleX": 1,
          "scaleY": 1,
          "translateX": 2000000 + 0.5 * 0.3 * 3000000,
          "translateY": 550000 + 0.5 * 0.12 * 3000000,
          "unit": "EMU"
        }
      }
    }
  ]
}
By the way, I just opened a feature request for a shorthand rotate; I will try to figure it out for now: https://issuetracker.google.com/u/2/issues/183986639

Answer:
When applying transformations to shapes in Slides, all operations are done in the frame of reference of the page origin, which is the top-left corner of the page.
More Information:
At the top of the page on Transform Operations, it states that the examples on the page assume the existence of a defined arrow shape:
For these examples, assume that there exists an example arrow shape page element with the following size and transform data (which can be found with a presentations.pages.get request):
{
  "objectId": pageElementId,
  "size": {
    "width": {
      "magnitude": 3000000,
      "unit": "EMU"
    },
    "height": {
      "magnitude": 3000000,
      "unit": "EMU"
    }
  },
  "transform": {
    "scaleX": 0.3,
    "scaleY": 0.12,
    "shearX": 0,
    "shearY": 0,
    "translateX": 2000000,
    "translateY": 550000,
    "unit": "EMU"
  },
  "shape": {
    "shapeType": "RIGHT_ARROW"
  }
}
So to answer your first two questions:
The 0.3 is taken from the scaling factor of the arrow (the scaleX in the transform above).
The -2000000 and -550000 are used to translate the shape to the origin of the page.
The -0.5 is used to take half of the element's rendered size (scale factor × size), so that the rotation happens about the shape's centre rather than its top-left vertex. The sketch below works these numbers through.
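As a rough worked example, using the values from the example arrow element above, the numbers in the first request fall out of the element's rendered centre:

// Working out the numbers in the first request, using the example arrow
// element above (values from the docs, not part of the API itself).
var translateX = 2000000, translateY = 550000; // element position on the page (EMU)
var scaleX = 0.3, scaleY = 0.12;               // element scaling factors
var width = 3000000, height = 3000000;         // intrinsic size (EMU)

// Rendered centre of the element on the page:
var centerX = translateX + 0.5 * scaleX * width;   // 2000000 + 450000 = 2450000
var centerY = translateY + 0.5 * scaleY * height;  // 550000 + 180000 = 730000

// The first request translates by (-centerX, -centerY), moving the element's
// centre to the page origin; the third request translates it back afterwards.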
Also, from the documentation on Sizing and Positioning Page Elements (emphasis my own):
Rotation transforms rotate a page element around a point, using the scaling and shear parameters. The basic rotation transform matrix has the following form, where the angle of rotation (in radians) is measured from the X-axis, moving counterclockwise:
As with scaling, you can use this matrix form directly as a RELATIVE transform to rotate an element, but this causes the element to be rotated about the origin of the page. To rotate the element about its center or a different point, shift to that reference frame.
So the short answer is: yes, you can do it in one request, but you will have to do the calculations for shifting to the element's reference frame yourself, then send that single combined transform to the API instead.
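For example, composing the three transforms above (translate to the origin, rotate, translate back) into one matrix gives a single RELATIVE request. This is only a sketch, assuming the example element's size and transform from the documentation; the helper name buildRotateAboutCenterRequest is mine, not part of the API:

// Rotate the example element about its centre in one request.
// Assumes the element data shown above (size 3000000 x 3000000 EMU,
// scaleX 0.3, scaleY 0.12, translateX 2000000, translateY 550000).
function buildRotateAboutCenterRequest(pageElementId, degrees) {
  var rad = degrees * Math.PI / 180;
  var cos = Math.cos(rad);
  var sin = Math.sin(rad);

  // Rendered centre of the element on the page, in EMU.
  var cx = 2000000 + 0.5 * 0.3 * 3000000;
  var cy = 550000 + 0.5 * 0.12 * 3000000;

  // translate(cx, cy) * rotate(degrees) * translate(-cx, -cy) collapsed
  // into a single affine transform.
  return {
    updatePageElementTransform: {
      objectId: pageElementId,
      applyMode: 'RELATIVE',
      transform: {
        scaleX: cos,
        scaleY: cos,
        shearX: sin,
        shearY: -sin,
        translateX: cx - cos * cx - sin * cy,
        translateY: cy + sin * cx - cos * cy,
        unit: 'EMU'
      }
    }
  };
}

With the Slides advanced service enabled in Apps Script, this could then be sent as a single call, something like Slides.Presentations.batchUpdate({requests: [buildRotateAboutCenterRequest(pageElementId, 35)]}, presentationId).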
References:
Transform Operations - Example arrow shape | Slides API | Google Developers
Sizing and Positioning Page Elements
Element reference frames

Related

Cocos Creator: changing rotation of parent, but children do not move

I rotate a node (with a green image sprite for easy viewing) containing child nodes (called Dot, red, each with a Collider2D and a RigidBody2D). When I change the Rotation property in the editor, all the Dots rotate with it. When I do the same in code, however, only the green part rotates.
I rotated both ways:
tween(new Vec3(0, 0, 0))
  .to(3, new Vec3(0, 0, 360), {
    easing: "cubicIn",
    onUpdate(target?: any) {
      wheelDot.eulerAngles = target;
    },
  })
  .start();
and
tween(this.wheelDot)
  .to(
    3,
    {
      angle: this.wheelDot.angle + 360,
    },
    {
      easing: "cubicIn",
    }
  )
  .start();
Both of the above snippets just rotate the green part; the red dots stay in place.
Edit: I found out that if I remove the RigidBody2D, the red dots move. How can I keep the RigidBody2D and still rotate them?

Convert grid positions to absolute positions

I have a big issue, but I don't know how to solve it.
I have a JSON file which contains elements and their positions in a grid.
Every element can have children, and the children are re-indexed from zero (0,0).
I need to convert the relative positions to 'absolute' positions.
An example JSON file:
{
  label: 'item 1',
  position: {x: 0, y: 0},
  childrens: [
    {
      label: 'item 1 children 1',
      position: {x: 1, y: 0}
    },
    {
      label: 'item 1 children 2',
      position: {x: 2, y: 0}
    }
  ]
},
{
  label: 'item 2',
  position: {x: 1, y: 0},
  childrens: [
    {
      label: 'item 2 children 1',
      position: {x: 0, y: 2}
    }
  ]
}
Keep an absolute offset.
Start with offset (0,0) for the root node. When you reach a child, compute the absolute coordinates of its top-left corner and call the recursive function with those coordinates as the new absolute offset.
For regular items, add the absolute offset to the item's coordinates to get its absolute coordinates.
In your example, you start with offset (0,0). Then you reach the child rectangle. You compute that its top-left corner should be at position (2,0) and pass that as the offset into the recursive call. When you reach the first block inside the child, you compute its absolute coordinates as the child-relative coordinates (0,0) plus the absolute offset (2,0), giving (2,0).
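A minimal sketch of that recursion in JavaScript, assuming the field names from the example above (label, position, childrens):

// Flattens the nested grid structure into absolute positions.
// Field names (label, position, childrens) follow the example JSON above.
function toAbsolute(items, offset) {
  offset = offset || { x: 0, y: 0 };
  var result = [];
  items.forEach(function (item) {
    // Absolute position = parent's absolute offset + item's relative position.
    var abs = { x: offset.x + item.position.x, y: offset.y + item.position.y };
    result.push({ label: item.label, position: abs });
    if (item.childrens) {
      // The item's absolute position becomes the offset for its own children.
      result = result.concat(toAbsolute(item.childrens, abs));
    }
  });
  return result;
}

Called as toAbsolute(data), where data is the array of top-level items, 'item 2 children 1' at relative (0,2) under 'item 2' at (1,0) comes out at absolute (1,2).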

TurfJs union - How to ignore point which are inside but with little bit difference in union?

I have two GeoJSON polygon objects. They are too large to post here. I am using Turf.js to compute the union of these polygons and plot it on the map, but it's not working properly.
I think a few points in the middle are slightly different between the two polygons. So is there any way to ignore these points in the middle in the Turf.js union?
See the images below for a better understanding.
Polygon 1 :
Polygon 2 :
Now the merged polygon for the code below:
polygons = {
  "type": "FeatureCollection",
  "features": [poly1, poly2]
};
Now the main union result:
union = turf.union(poly1, poly2);
So I want to ignore the points in the middle of the boundary. I know the points on the intersection boundary of the two polygons may not be exactly the same, but can I ignore points that are very close or differ only slightly, so the middle points are removed?
Or is there any alternative way to union polygons that ignores small discrepancies between nearby points and removes the middle points?
You can try running the resulting polygon through turf.buffer(result, 0, 'kilometers') (turf-buffer docs). If your result is invalid geojson then using a buffer of 0 should cleanse the geometry (remove the points/lines in the middle).
It is hard to say what will work for sure without seeing the actual GeoJSON of the result. Is there any way you can upload it to pastebin or something?
Update - Turf buffer did not work in this case. The only solution that I could get to work was doing this on the result of turf.union(p1, p2):
result.geometry.coordinates = [result.geometry.coordinates[0]]
You want to be careful with this solution, as it removes everything from the polygon other than the external ring.
To understand why/how this works, you will want to make sure you understand how the coordinates for GeoJSON polygons work. From the geojson.org GeoJSON polygon specification:
For type "Polygon", the "coordinates" member must be an array of LinearRing coordinate arrays. For Polygons with multiple rings, the first must be the external ring and any others must be internal rings or holes.
The external ring is essentially the outline of your polygon. Any internal rings are usually represented as holes. In your case, the internal rings were actually lines.
When looking at the coordinates for a geojson polygon, you will notice that all coordinates are contained within an outer array. Here is an example of a geojson polygon with only a single (external) ring.
{"type": "Feature", "properties": {}, "geometry": {"type": "Polygon", "coordinates": **[ [ [1, 1], [1, 2], [1, 3], [1, 1] ] ]**
Notice that the first coordinate and last coordinate of a ringe must always be the same. That ensure that we get a closed shape (ie: polygon).
Now here is an example with an external ring, and an internal ring
{"type": "Feature", "properties": {}, "geometry": {"type": "Polygon", "coordinates": **[ [ [1, 1], [1, 2], [1, 3], [1, 1] ], [ [1, 2], [1, 3], [1, 1] ] ]**
Now if we apply the suggested solution to the above example, we would get the same coordinates as the first example because we are grabbing only the first set of coordinates from the polygon, which will always be the external ring. Any subsequent elements in the coordinates array will represent internal rings (which is what the lines were, even though they are technically not valid internal rings).
{"type": "Feature", "properties": {}, "geometry": {"type": "Polygon", "coordinates": **[ [ [1, 1], [1, 2], [1, 3], [1, 1] ] ]**
As you can see, we are removing all internal rings from the polygon. That is the reason that you must be careful with how you use this. If you ever have valid internal rings, it will actually get rid of those.
I think that the reason this happens is because your polygons (p1 and p2) share a border.
Faced the same problem: after buffering by a small positive amount and then the same negative amount, the lines disappeared. But this left the polygon with more points than the original, so I did this workaround:
inner = [YOUR FEATURE COLLECTION]
var areas = []
for (var i = 0; i < inner.geometry.coordinates.length; i++) {
  let item = inner.geometry.coordinates[i]
  if (item.length > 10) areas.push(item)
}
inner = turf.polygon(areas)
As you can see, I am removing the "non-complex" polygons (assuming that a polygon with fewer than 10 points is not a real area).
This happens because the coordinates of both polygons are not 100% the same, creating a small gap when merging them together.
When faced with this problem, I had to use turf's distance method to check every vertex of the polygons, and if there was a small difference between them, I'd make them the same.
The implementation may vary depending on the map library you are using, but it should go something like this:
layers.forEach(layer => {
  layers.forEach(innerLayer => {
    if (layer === innerLayer) return;
    // Here you would check if the vertexes are close to each other, using distance.
    // If the vertexes are close, you would make them equal and update the layer.
  })
})
Only after making the vertices of the polygons the same would you merge them with the union method.
Since the implementation is pretty unique and depends on the project, I won't go into my exact project code, but I believe that with the insights above, you should be good to go.
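As a rough, generic sketch of that snapping step, working on plain GeoJSON coordinate arrays rather than any particular map library (the tolerance value and the poly1/poly2 variable names are assumptions; in older Turf versions the units argument to distance is a string instead of an options object):

// Snap nearly-coincident vertices of poly2 onto poly1 before the union.
var tolerance = 0.001; // kilometers - an assumed threshold, tune for your data

var coords1 = poly1.geometry.coordinates[0]; // outer ring of polygon 1
var coords2 = poly2.geometry.coordinates[0]; // outer ring of polygon 2

coords1.forEach(function (c1) {
  coords2.forEach(function (c2, j) {
    var d = turf.distance(turf.point(c1), turf.point(c2), { units: 'kilometers' });
    if (d > 0 && d < tolerance) {
      // Make the two vertices identical so the shared border lines up exactly.
      coords2[j] = [c1[0], c1[1]];
    }
  });
});

var merged = turf.union(poly1, poly2);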

Body and Sprite Positions

When I compile, my hero doesn't touch the floor but stops anyway a few pixels above it. I figured that if I traced both bodies and their respective sprites, I'd know which ones aren't coinciding.
trace("Hero: ", hero.position.y, "/", heroSprite.y);
trace("Floor: ", floor.position.y, "/", floorSprite.y);
I get the following,
Hero: 470.2(...) / 470.2
Floor: 0 / 0
Also, how is the floor position 0 in its y property when:
createWall(stage.stageWidth/2, 500, 100, 30); //(y = 500)
I read that while the Nape body's 'registration point' is in the middle, the sprite's is in the upper-left corner, so giving the sprite the same x and y as the body won't match. Below, the sprite will be out of position.
public function createWall(x:Number, y:Number, width:Number, height:Number):void
{
    wall.shapes.add(new Polygon(Polygon.rect(x, y, width, height)));
    wall.space = space;
    wallSprite.graphics.beginFill(0x000000);
    wallSprite.graphics.drawRect(x, y, width, height);
    wallSprite.graphics.endFill();
    addChild(wallSprite);
    wall.userData.sprite = (wallSprite);
    addChild(wall.userData.sprite);
}
I tried wallSprite.graphics.drawRect(-width/2, -height/2, width, height); but it didn't work, although I believe the problem is there: placing the sprite properly.
Drawing does not affect the position of an object. In your case the wall is at (0,0), and you draw at x: stage.stageWidth/2, y: 500, but that does not become the wall's coordinates; those are still (0,0).

Kinetic.JS - why does layer order change overwrite my colour?

I'm currently playing with Kinetic.JS. I have drawn a rather crude UFO-like shape in two parts, a hull (red) and a disc (grey).
Demo - JSBin
Question: how come when I later arrange the shape ordering so the hull is above the disc, the disc bizarrely goes from grey to the hull's red?
Uncomment the moveToTop() line at the bottom of my JSBin to see what I mean. Here's the pertinent (condensed) code.
//ship hull
var hull = new Kinetic.Shape({
  drawFunc: function(ctx) {
    ctx.arc(game_dims.w / 2, game_dims.h * 0.6, game_dims.h * 0.45, 0, Math.PI, true);
    this.fill(ctx);
  },
  fill: 'red'
});
//ship disc
var disc = new Kinetic.Circle({
  x: game_dims.w / 2,
  y: game_dims.h * 0.6,
  radius: {x: game_dims.w * 0.45, y: 30},
  fill: '#888'
});
//draw
layer.add(hull);
layer.add(disc);
stage.add(layer);
//post-production
hull.moveToTop(); // <-- weirdness - changes disc colour!?
layer.draw();
I am aware I could draw the two shapes in reverse order to get the desired order, but that is not what I want with this question - I'm interested in rearrangement of order after drawing.
Thanks in advance
Your draw function of the hull needs to tell the context it's drawing a new path:
function(ctx) {
  ctx.beginPath();
  ctx.arc(...);
  this.fill(ctx);
}
By adding the beginPath() command you are telling the context that you are not, in fact, adding to the previous path, but drawing a new one instead. The missing beginPath() is also what made this.fill() fill the previous shape with red: in your example the context was still referring to the disc's path when it attempted the fill.
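Putting the fix into the hull definition from the question (just a sketch; game_dims and the rest of the setup are assumed to be as in the original code):

//ship hull (with beginPath so the fill no longer bleeds into the disc's path)
var hull = new Kinetic.Shape({
  drawFunc: function(ctx) {
    ctx.beginPath();
    ctx.arc(game_dims.w / 2, game_dims.h * 0.6, game_dims.h * 0.45, 0, Math.PI, true);
    this.fill(ctx);
  },
  fill: 'red'
});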