How to calculate refraction-corrected RA/DEC coordinates with pyephem? - astronomy

I am trying to convert the J2000 RA/Dec coordinates of an object to the "observed position", i.e. topocentric RA/Dec coordinates including refraction effects. Following the docs (http://rhodesmill.org/pyephem/radec.html#how-the-three-positions-differ) I did this:
from math import pi
import ephem
from datetime import datetime
ra = 20.370473492 / 12. * pi
dec = 40.256674958 / 180. * pi
tt = datetime(2016, 7, 27, 23, 30, 0)
lowell = ephem.Observer()
lowell.lon = '-111:32.1'
lowell.lat = '35:05.8'
lowell.elevation = 2198
lowell.date = tt
lowell.pressure = 1000
bd = ephem.FixedBody()
bd._ra = ra
bd._dec = dec
bd.compute(lowell)
print "Pressure: ", lowell.pressure
print "Input: ", bd._ra, bd._dec
print "Astrometric Geocentric Position: ", bd.a_ra, bd.a_dec
print "Apparent Geocentric Position ", bd.g_ra, bd.g_dec
print "Apparent Topocentric Position: ", bd.ra, bd.dec
print "Horizontal Position: ", bd.alt, bd.az
lowell.pressure = 0
bd.compute(lowell)
print "Pressure: ", lowell.pressure
print "Input: ", bd._ra, bd._dec
print "Astrometric Geocentric Position: ", bd.a_ra, bd.a_dec
print "Apparent Geocentric Position ", bd.g_ra, bd.g_dec
print "Apparent Topocentric Position: ", bd.ra, bd.dec
print "Horizontal Position: ", bd.alt, bd.az
The first notable thing: the apparent geocentric and the apparent topocentric coordinates do not differ. If corrections for parallax and refraction were made as stated in the docs, they should differ.
Second thing: if I set the pressure to zero, refraction should disappear and something should change. However, only the horizontal coordinates change (by some 11 arcminutes, which looks right), while all RA/Dec coordinates stay the same.
What am I missing here?
Or, in other words: can I somehow get refraction-corrected RA/Dec coordinates from PyEphem?
(By the way, I use the newest PyEphem version, 3.7.6.0.)

The short answer is that refraction simply does not affect RA and declination.
There are two ways to visualize this. The first is to imagine the object you are interested in, sitting amidst the grid of RA and declination lines that makes up the part of the celestial sphere where it is located. Now, in your mind, look at the object and the grid around it as they are bent by the atmosphere. Because they are passing through the same atmosphere, they will be refracted by the same amount, and the object's refracted image will be in the same position relative to the image of the refracted RA and declination lines as it was before! So its RA and declination are still the same.
Another way to visualize it is to imagine that the planet's RA and declination tell you what little group of distant objects the planet will be in front of — which distant galaxies and quasars the object will be sitting among. Now, if you will imagine the patch of sky where the object is sitting, and imagine the image of that patch of sky being refracted through the atmosphere, you will see that the objects and planet all get refracted together. So, again, refraction will not change the object's RA and declination — it will appear in front of exactly the same galaxies as it would have if there were no refraction at all.
The long answer is that, to create the number you are imagining, you could probably take the horizontal coordinates, set the observer's atmosphere to zero pressure, and use the observer's .radec_of(...) method to turn the horizontal coordinates back into the RA and dec that would sit at that position on the sky if the atmosphere were not refracting them. But I hope that the above discussion makes clear that the numbers you get out, strictly speaking, will have no physical meaning or significance: RA and declination do not change because of refraction, and the changed RA and declination you get back will not refer to any true position on a sky chart.
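For concreteness, here is a minimal sketch of that recipe, reusing the observer and body from the question (Observer.radec_of(az, alt) is the PyEphem call in question; treat the output with the caveats above):
lowell.pressure = 1000
bd.compute(lowell)                 # bd.az / bd.alt now include refraction
az, alt = bd.az, bd.alt
lowell.pressure = 0                # keep radec_of from removing refraction again
ra_r, dec_r = lowell.radec_of(az, alt)
print("RA/Dec of the refracted image:", ra_r, dec_r)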

Related

How can I concatenate the 4 corners of the image quickly when loading image in deep learning?

What is the most effective way to concatenate the 4 corners, shown in this photo?
(this is done in __getitem__())
left_img = Image.open('image.jpg')
...
output = right_img
This is how I would do it.
Firstly, I would temporarily convert the image to a tensor image:
from torchvision import transforms
tensor_image = transforms.ToTensor()(image)
Now, assume you have a 3-channel image (similar principles apply to matrices with any number of channels, including 1-channel grayscale images).
You can find the Red channel with tensor_image[0], the Green channel with tensor_image[1], and the Blue channel with tensor_image[2].
You can make a for loop iterating through each channel, like
for i in range(tensor_image.size(0)):
    curr_channel = tensor_image[i]
Now, inside that for loop, for each channel you can extract the
Top-left corner pixel with float(curr_channel[0][0])
Top-right corner pixel with float(curr_channel[0][-1])
Bottom-left corner pixel with float(curr_channel[-1][0])
Bottom-right corner pixel with float(curr_channel[-1][-1])
Make sure to convert all the pixel values to float or double values before this next appending step
Now you have four values that correspond to the corner pixels of each channel
Then you can make a list called new_image = []
You can then append the above mentioned pixel values using
new_image.append([[curr_channel[0][0], curr_channel[0][-1]], [curr_channel[-1][0], curr_channel[-1][-1]]])
Now, after iterating through every channel, you should have a big list that contains three (or, in general, tensor_image.size(0)) lists of lists.
The next step is to convert this list of lists of lists to a torch.tensor by running
new_image = torch.tensor(new_image)
To make sure everything is right, new_image.size() should return torch.Size([3, 2, 2]).
If that is the case, you now have your wanted image, but in tensor format.
The way to convert it back to PIL is to run
final_pil_image = transforms.ToPILImage()(new_image)
If everything went well, you should have a PIL image that fulfills your task. The only code it uses is clever indexing and one for loop.
It may be possible, however, if you look into it more than I have, to avoid the for loop and perform the operations on all channels at once, as sketched below.
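For example, a loop-free sketch of that idea using advanced indexing might look like this (assuming the same hypothetical 'image.jpg' input; a sketch, not part of the original answer):
from PIL import Image
from torchvision import transforms

image = Image.open('image.jpg')
tensor_image = transforms.ToTensor()(image)             # shape (C, H, W)
corners = tensor_image[:, [0, -1], :][:, :, [0, -1]]    # shape (C, 2, 2): the four corner pixels
final_pil_image = transforms.ToPILImage()(corners)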
Sarthak Jain
I don't know how quick this is but here:
import numpy as np
from PIL import Image
img = np.array(Image.open('image.jpg'))
h, w = img.shape[0], img.shape[1]  # shape[0] is the height (rows), shape[1] the width (columns)
# the window size:
r = 4
upper_left = img[:r, :r]
lower_left = img[h-r:, :r]
upper_right = img[:r, w-r:]
lower_right = img[h-r:, w-r:]
upper_half = np.concatenate((upper_left, upper_right), axis=1)
lower_half = np.concatenate((lower_left, lower_right), axis=1)
img = np.concatenate((upper_half, lower_half))
or short:
upper_half = np.concatenate((img[:r, :r], img[:r, w-r:]), axis=1)
lower_half = np.concatenate((img[h-r:, :r], img[h-r:, w-r:]), axis=1)
img = np.concatenate((upper_half, lower_half))

Maps into Forge Viewer

Trying to follow the steps at https://forge.autodesk.com/blog/add-mapbox-google-maps-forge-viewer, but I can't place the model correctly on the map.
I am running the functions listed here: https://learn.microsoft.com/en-us/bingmaps/articles/bing-maps-tile-system:
LatLongToPixelXY(latitude, longitude, 7, out pixelX, out pixelY);
PixelXYToTileXY(pixelX, pixelY, out tileX, out tileY);
The result pixelX = 16225, pixelY = 12249, tileX = 63, tileY = 47.
I substitute the previous values:
map.position.set(16225,12249,-45);
class MapPlaneNode extends MapNode {
constructor(parentNode = null, mapView = null, location = MapNode.ROOT, level = 7, x = 63, y = 47)
The result is that the model comes out small and not positioned correctly. In the image, the red arrow is where the model is inserted, and the green arrow is where it should be.
image of result
What am I doing wrong?
Thank you very much
Positioning the model is a little tricky.
In the demo I created, I originally used world coordinates, where I set the root tile as level 0 and used the correct lat/long coordinate utils function to position the Revit model in the correct location.
Unfortunately, the precision caused a rendering problem with the post-renderer (line edges were missing, and there were some strange z-fighting precision issues),
so I decided to hack the level, move the map into the position I wanted, and center the Revit model at origin 0,0,0.
This made things a lot more manual and rather tricky, but it got around the rendering issue and also limited the user to a small area in the world, which I preferred.
I suggest changing the root tile back to zero and adjusting the model position globaloffset to the value from the lat/long WGS84 utils. See the blog post and also the coordinates section of the geo-three repo for more details: https://github.com/tentone/geo-three#coordinates
Found a trick to adjust the map. It is still manual, but it's fairly quick:
Calculate Tile X and Tile Y (you did that step already; it's just for reference):
Copy the TileSystem class from the link bing-maps-tile-system you posted into https://dotnetfiddle.net/
(you'll also need to add: using System.Text)
Change the Main method as follows:
public static void Main()
{
    int pixelX, pixelY, tileX, tileY;
    TileSystem.LatLongToPixelXY(YOUR LAT HERE, YOUR LONG HERE, 7, out pixelX, out pixelY);
    Console.WriteLine("LatLongToPixelXY: " + pixelX.ToString() + ", " + pixelY.ToString());
    TileSystem.PixelXYToTileXY(pixelX, pixelY, out tileX, out tileY);
    Console.WriteLine("PixelXYToTileXY: " + tileX.ToString() + ", " + tileY.ToString());
}
This will give you the TileX and Tile Y that you'll need to replace in the Extension.
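If you'd rather skip the .NET fiddle, here is a rough Python port of the two formulas from the linked Bing Maps article (a sketch without the input clamping the original performs; YOUR_LAT / YOUR_LONG are placeholders):
import math

def lat_long_to_pixel_xy(lat, lon, level):
    # the world map is 256 * 2**level pixels wide/high at a given level
    map_size = 256 << level
    sin_lat = math.sin(math.radians(lat))
    x = (lon + 180.0) / 360.0
    y = 0.5 - math.log((1 + sin_lat) / (1 - sin_lat)) / (4 * math.pi)
    return int(x * map_size), int(y * map_size)

def pixel_xy_to_tile_xy(pixel_x, pixel_y):
    # each tile is 256x256 pixels
    return pixel_x // 256, pixel_y // 256

pixel_x, pixel_y = lat_long_to_pixel_xy(YOUR_LAT, YOUR_LONG, 7)
print(pixel_xy_to_tile_xy(pixel_x, pixel_y))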
Calculate Position
In the Extension, set the X, Y position to 0,0 and adjust the Z so that the map is below your model:
map.position.set(0, 0, z);
Run the Extension and see where your project lands on the map. Now locate this landing point in Google Maps (I found it useful at this stage to search the map for a corner between two streets, for example: Parker St & Wilson Rd). When you've found it, click on the landing point in Google Maps to place a marker, then right-click on the marker and select Measure distance. You will have to measure the distance to your destination both vertically and horizontally (not directly to it). For example, you'll get dH = 43.5 km and dV = 17.8 km.
And this is where the magic happens: multiply both numbers by 3400 if your distance is in km (or by 2113 if your distance is in miles) and set the position with those values:
dH * 3400 = 147900
dV * 3400 = 60520
If your destination is to the E or S, use positive values.
If your destination is to the W or N, use negative values.
map.position.set(147900, -60520, z);
Now it won't be perfect, but it'll be close enough to finish adjusting the value manually.

Mimick photoshop/painter smooth draw on HTML5 canvas?

As many people know, HTML5 Canvas lineTo() gives you a very jaggy line at each corner. A more preferable solution is to implement quadraticCurveTo(), which is a great way to generate a smooth drawing. However, I want to create a smooth, yet accurate, drawing on the HTML5 canvas. The quadratic-curve approach works well in smoothing out the stroke, but it does not go through all the sample points. In other words, when I try to draw a quick curve using quadratic curves, the curve sometimes appears to be "corrected" by the application: instead of following my drawing path, some segments are curved away from their original path to follow a quadratic curve.
My application is intended for professional drawing on the HTML5 canvas, so it is crucial for the drawing to be both smooth and precise. I am not sure if I am asking for the impossible by trying to put the HTML5 canvas on the same level as Photoshop or any other painting application (SAI, painterX, etc.).
Thanks
What you want is a cardinal spline, as cardinal splines go through the actual points you draw.
Note: to get a professional result you will also need to implement a moving average for short thresholds, while using cardinal splines for larger thresholds, and use knee values to break the lines at sharp corners so you don't smooth the entire line. I won't be addressing moving average or knees here (nor taper), as these are outside the scope, but I will show a way to use a cardinal spline.
A side note as well: the effect that the app seems to modify the line is unavoidable, as the smoothing happens after the fact. There exist algorithms that smooth while you draw, but they do not preserve knee values, and the line seems to "wobble" while you draw. It's a matter of preference, I guess.
Here is a fiddle to demonstrate the following:
ONLINE DEMO
First, some prerequisites (I am using my easyCanvas library to set up the environment in the demo, as it saves me a lot of work, but this is not a requirement for this solution to work):
I recommend you draw the new stroke to a separate canvas that sits on top of the main one.
When the stroke is finished (mouse up), pass it through the smoother and store it in the stroke stack.
Then draw the smoothed line to the main canvas.
When you have the points in an array ordered by X/Y (i.e. [x1, y1, x2, y2, ... xn, yn]), you can use this function to smooth it:
The tension value (ts, default 0.5) is what smooths the curve. The higher the number, the rounder the curve becomes. You can go outside the normal interval [0, 1] to make curls.
The segments value (nos, or number-of-segments) is the resolution between each point pair. In most cases you will probably not need values higher than 9-10, but on slower computers, or where you draw fast, higher values are needed.
The function (optimized):
/// cardinal spline by Ken Fyrstenberg, CC-attribute
function smoothCurve(pts, ts, nos) {
    // use input values if provided, or fall back to defaults
    ts = (typeof ts === 'undefined') ? 0.5 : ts;
    nos = (typeof nos === 'undefined') ? 16 : nos;

    var _pts = [], res = [],          // clone / result arrays
        t1x, t2x, t1y, t2y,           // tension vectors
        c1, c2, c3, c4,               // cardinal points
        st, st2, st3, st23, st32,     // steps
        t, i,
        len = pts.length,
        pt1, pt2, pt3, pt4;

    _pts.push(pts[0]);                // copy first point and insert at beginning
    _pts.push(pts[1]);
    _pts = _pts.concat(pts);
    _pts.push(pts[len - 2]);          // copy last point and append
    _pts.push(pts[len - 1]);

    for (i = 2; i < len; i += 2) {
        pt1 = _pts[i];
        pt2 = _pts[i + 1];
        pt3 = _pts[i + 2];
        pt4 = _pts[i + 3];

        t1x = (pt3 - _pts[i - 2]) * ts;
        t2x = (_pts[i + 4] - pt1) * ts;
        t1y = (pt4 - _pts[i - 1]) * ts;
        t2y = (_pts[i + 5] - pt2) * ts;

        for (t = 0; t <= nos; t++) {
            // pre-calc steps
            st = t / nos;
            st2 = st * st;
            st3 = st2 * st;
            st23 = st3 * 2;
            st32 = st2 * 3;

            // calc cardinals
            c1 = st23 - st32 + 1;
            c2 = st32 - st23;
            c3 = st3 - 2 * st2 + st;
            c4 = st3 - st2;

            res.push(c1 * pt1 + c2 * pt3 + c3 * t1x + c4 * t2x);
            res.push(c1 * pt2 + c2 * pt4 + c3 * t1y + c4 * t2y);
        } // for t
    } // for i

    return res;
}
Then simply call it from the mouseup event, after the points have been stored:
stroke = smoothCurve(stroke, 0.5, 16);
strokes.push(stroke);
Short comments on knee values:
A knee value, in this context, is where the angle between points (as part of a line segment) in the line is greater than a certain threshold (typically between 45 and 60 degrees). When a knee occurs, the line is broken into a new line, so that only lines consisting of points with angles below the threshold between them are smoothed (the small curls you see in the demo are a result of not using knees).
Short comment on moving average:
Moving average is typically used for statistical purposes, but it is very useful for drawing applications as well. When you have a cluster of many points with short distances between them, splines don't work very well, so here you can use a moving average to smooth the points.
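As a rough illustration (sketched in Python for brevity, not part of the original answer), a moving average over the flat [x1, y1, x2, y2, ...] point format used above could look like this:
def moving_average(pts, window=3):
    # pts is a flat [x1, y1, x2, y2, ...] list; returns a smoothed copy
    half = window // 2
    n = len(pts) // 2
    out = []
    for i in range(n):
        xs = ys = 0.0
        count = 0
        for j in range(max(0, i - half), min(n, i + half + 1)):
            xs += pts[2 * j]
            ys += pts[2 * j + 1]
            count += 1
        out.extend([xs / count, ys / count])
    return out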
There are also point-reduction algorithms that can be used, such as Ramer-Douglas-Peucker, but they are more useful for storage purposes, to reduce the amount of data; a sketch follows.
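For completeness, a minimal Ramer-Douglas-Peucker sketch (again in Python for brevity, over a list of (x, y) pairs rather than the flat array; an illustration, not part of the original answer):
import math

def rdp(points, epsilon):
    # points: list of (x, y) tuples; epsilon: maximum allowed deviation from the chord
    if len(points) < 3:
        return points[:]
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = math.hypot(dx, dy) or 1e-12
    dmax, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        px, py = points[i]
        d = abs(dy * (px - x1) - dx * (py - y1)) / norm  # distance from the chord
        if d > dmax:
            dmax, idx = d, i
    if dmax > epsilon:
        # keep the farthest point and recurse on both halves
        return rdp(points[:idx + 1], epsilon)[:-1] + rdp(points[idx:], epsilon)
    return [points[0], points[-1]]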

Create a function to generate random points in a parallelogram

I hope someone can help me here. I have been asked to write some code for a Lua script for a game. Firstly, I am not a Lua scripter, and I am definitely no mathematician.
What I need to do is generate random points within a parallelogram, so that over time the entire parallelogram becomes filled. I have played with the scripting and had some success with the parallelogram (a rectangle) positioned straight up and down, i.e. at 90 degrees. My problem comes when the parallelogram is rotated.
As you can see in the image, things are made even worse by the coordinates originating at the centre of the map area, and the parallelogram can be positioned anywhere within the map area. The parallelogram itself is defined by three pairs of coordinates: Start_X and Start_Y, Height_X and Height_Y, and finally Width_X and Width_Y. The random points generated need to be within the bounds of these coordinates regardless of position or orientation.
Map coordinates and example parallelogram
An example of coordinates are...
Start_X = 122.226
Start_Y = -523.541
Height_X = 144.113
Height_Y = -536.169
Width_X = 128.089
Width_Y = -513.825
In my script testing I have trimmed the decimals down to .5, as anything smaller seems to have no effect on the final outcome. Also, in real terms, the start, width and height could be in any orientation in final use.
Is there anyone out there with the patience to explain what I need to do to get this working? My maths is pretty basic, so please be gentle.
Thanks for reading and in anticipation of a reply.
Ian
In Pseudocode
a= random number with 0<=a<=1
b= random number with 0<=b<=1
x= Start_X + a*(Width_X-Start_X) + b*(Height_X-Start_X)
y= Start_Y + a*(Width_Y-Start_Y) + b*(Height_Y-Start_Y)
This should give a random point with coordinates x,y within the parallelogram.
The idea is that each point inside the parallelogram can be specified by saying how far you go from Start in the direction of the first edge (a) and how far you go in the direction of the second edge (b).
For example, if you have a=0, and b=0, then you do not move at all and are still at Start.
If you have a=1, and b=0, then you move to Width.
If you have a=1, and b=1, then you move to the opposite corner.
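As an illustration of the pseudocode, here is a Python sketch using the sample coordinates from the question (random.random() returns values in [0, 1)):
import random

Start_X, Start_Y = 122.226, -523.541
Height_X, Height_Y = 144.113, -536.169
Width_X, Width_Y = 128.089, -513.825

a = random.random()
b = random.random()
x = Start_X + a * (Width_X - Start_X) + b * (Height_X - Start_X)
y = Start_Y + a * (Width_Y - Start_Y) + b * (Height_Y - Start_Y)
print(x, y)  # a random point inside the parallelogram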
You can use something like "texture coordinates", which are in the range [0,1], to generate X,Y for a point inside your parallelogram. You can then generate random numbers (u,v) in the range [0,1] and get the random point you want.
To explain this better, here is a picture:
The base is formed by vectors v1 and v2. The four points A,B,C,D represent the corners of the parallelogram. You can see the "texture coordinates" (which I will call u,v) of the points in parentheses, for example A is (0,0), D is (1,1). Every point inside the parallelogram will have coordinates within (0,0) and (1,1), for example the center of the parallelogram has coordinates (0.5,0.5).
To get the vectors v1,v2, you need to do vector subtraction: v1 = B - A, v2 = C - A. When you generate random coordinates u,v for a random point r, you can get back the X,Y using this vector formula: r = A + u*v1 + v*v2.
In Lua, you can do this as follows:
-- let's say that you have A,B,C,D defined as the four corners as {x=...,y=...}
-- (actually, you do not need D, as D = A + v1 + v2)
-- returns the vector a+b
function add(a,b)
    return {x = a.x + b.x, y = a.y + b.y}
end
-- returns the vector a-b
function sub(a,b)
    return {x = a.x - b.x, y = a.y - b.y}
end
-- returns the vector v1*u + v2*v
function combine(v1,u,v2,v)
    return {x = v1.x*u + v2.x*v, y = v1.y*u + v2.y*v}
end
-- returns a random point in the parallelogram defined by start s and 2 vectors
function randomPoint(s,v1,v2)
    local u,v = math.random(), math.random() -- these are in range [0,1]
    return add(s, combine(v1,u,v2,v))
end
v1 = sub(B,A) -- your basis vectors v1, v2
v2 = sub(C,A)
r = randomPoint(A,v1,v2) -- this will be in your parallelogram defined by A,B,C
Note that this will not work with your current layout - start, width, height. How do you want to handle rotation with these parameters?

Constructing a triangle based on Coordinates on a map

I'm constructing a geolocation-based application and I'm trying to figure out a way to make my application realise when a user is facing the direction of a given location (a particular long/lat coordinate). I've got the math figured out; I just have the triangle to construct.
//UPDATE
So I've figured out a good bit of this...
Below is a method which takes in a long/lat value and attempts to compute a triangle by finding a point 700 meters away and one to its left and right; it would then use these to construct the triangle. It computes the correct longitude, but the latitude ends up somewhere off the coast of East Africa (I'm in Ireland!).
public void drawtri(double currlng, double currlat, double bearing) {
    bearing = (bearing < 0 ? -bearing : bearing);
    System.out.println("RUNNING THE DRAW TRIANGLE METHOD!!!!!");
    System.out.println("CURRENT LNG" + currlng);
    System.out.println("CURRENT LAT" + currlat);
    System.out.println("CURRENT BEARING" + bearing);
    //Find point X(x,y)
    double distance = 0.7; //700 meters.
    double R = 6371.0;     //The radius of the earth.
    //Finding X's y value.
    Math.toRadians(currlng);
    Math.toRadians(currlat);
    Math.toRadians(bearing);
    distance = distance / R;
    Global.Alat = Math.asin(Math.sin(currlat) * Math.cos(distance)
            + Math.cos(currlat) * Math.sin(distance) * Math.cos(bearing));
    System.out.println("CURRENT ALAT!!: " + Global.Alat);
    //Finding X's x value.
    Global.Alng = currlng + Math.atan2(Math.sin(bearing) * Math.sin(distance) * Math.cos(currlat),
            Math.cos(distance) - Math.sin(currlat) * Math.sin(Global.Alat));
    Math.toDegrees(Global.Alat);
    Math.toDegrees(Global.Alng);
    //Co-ord of Point B(x,y)
    // Note: Lng = X axis, Lat = Y axis.
    Global.Blat = Global.Alat + 00.007931;
    Global.Blng = Global.Alng;
    //Co-ord of Point C(x,y)
    Global.Clat = Global.Alat - 00.007931;
    Global.Clng = Global.Alng;
}
From debugging, I've determined the problem lies with the computation of the latitude, done here:
Global.Alat = Math.asin(Math.sin(currlat)*Math.cos(distance)+
Math.cos(currlat)*Math.sin(distance)*Math.cos(bearing));
I have no idea why, though, and I don't know how to fix it. I got the formula from this site:
http://www.movable-type.co.uk/scripts/latlong.html
It appears correct, and I've tested multiple things.
I've tried converting to radians and then, after the computations, back to degrees, etc.
Does anyone have any ideas how to fix this method so that it maps the triangle ONLY 700 meters from my current location, in the direction that I am facing?
Thanks,
For long distances: http://www.dtcenter.org/met/users/docs/write_ups/gc_simple.pdf
But for short distances you can try simple 2D math to simulate a "classic" compass, using: http://en.wikipedia.org/wiki/Compass#Using_a_compass. For example, you can get pixel coordinates for points A and B and find the angle between the line connecting those points and a vertical line.
You should probably also consider magnetic declination: http://www.ngdc.noaa.gov/geomagmodels/Declination.jsp
//edit:
I was trying to give an intuitive solution. However, calculating screen coordinates from long/lat wouldn't be easy, so you should probably use the formulas provided in the links.
Maybe it's because I don't know Java, but don't you have to do something like
currlat = Math.toRadians(currlat);
to actually change the currlat value to radians?
The problem was that no matter what I fed in, Java would compute in radians. The trick was to convert everything to radians, do the computations (the output then comes out in radians), and convert back to degrees.
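For reference, here is a sketch of the destination-point formula from the linked movable-type page with the radian conversions applied correctly (in Python; the Dublin coordinates are just an example):
import math

def destination_point(lat_deg, lon_deg, bearing_deg, distance_km, R=6371.0):
    # convert all inputs to radians before computing
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    bearing = math.radians(bearing_deg)
    d = distance_km / R  # angular distance
    lat2 = math.asin(math.sin(lat) * math.cos(d)
                     + math.cos(lat) * math.sin(d) * math.cos(bearing))
    lon2 = lon + math.atan2(math.sin(bearing) * math.sin(d) * math.cos(lat),
                            math.cos(d) - math.sin(lat) * math.sin(lat2))
    # convert the result back to degrees
    return math.degrees(lat2), math.degrees(lon2)

print(destination_point(53.35, -6.26, 90, 0.7))  # ~700 m east of Dublin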