Public domain high resolution vector data

Can someone point me to where I can grab high-resolution vector data?
Preferably WGS84 vector data in Shapefile format (coastlines, nation boundaries, lakes...).

Have you checked GADM2 already?

Related

Support vector regression-based GIS analysis

I'm new here and I really want some help. I have a dataset including geographical information (longitude, latitude, ...) and I want to predict some attributes from this dataset with Support Vector Regression, but I don't know how to perform this task. I have the following inquiries:
Is there specific preprocessing I need to go through?
Does SVR treat a geographic dataset like a normal dataset, or are there specifics in terms of tools and handling?
Are there any recommended predictive analytics tools (including SVR) that handle geographical data?
The solution given here is for the situation where you want to extract the independent variable from a raster at the locations of your dependent variable.
If you already have all your dependent and independent data with their corresponding locations, you can simply use the svm function in R and then pass a raster or vector of new data to the predict function. Alternatively, you can take the estimated coefficients for the independent variables, multiply them by the corresponding rasters in a GIS raster calculator, and you will get your predicted raster.
For spatial data in R you can simply do the following.
First of all, support vector regression can be used to predict real values, and you can use library("e1071") in R to run this algorithm.
You can import your dataset as a CSV along with its lat and long columns, then transform your data.frame into a spatial data frame:
library(sp)      # SpatialPoints, CRS, SpatialPointsDataFrame
library(raster)  # extract()
library(e1071)   # svm()
# Read data
dat <- read.csv(choose.files())
# Convert the data to spatial points
dat_sp <- SpatialPoints(cbind(dat$x, dat$y))
# Add your coordinate reference system (use the CRS your coordinates are actually in)
dat_crs <- CRS("+proj=utm +zone=39 +datum=WGS84")
# Create a SpatialPointsDataFrame for dat
dat_spdf <- SpatialPointsDataFrame(coords = dat_sp, data = dat, proj4string = dat_crs)
plot(dat_spdf, col = 'blue', cex = 1, pch = 16, axes = TRUE)
# Extract the independent variable at each point (r is your raster)
dat_spdf$ref <- extract(r, dat_spdf)
Then you can extract your data from a raster, or from whatever holds your independent variable.
Finally, you can fit the model with the following code in R:
model <- svm(dependent ~ ., data = dat_spdf@data)
But you really need to have an intuition about what SVR is and how to evaluate the result.
You can also show your result as a final raster map; you can use the toolbox package or the raster package.
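For comparison, here is a minimal sketch of the same workflow in Python; scikit-learn and rasterio, along with the file and column names, are my assumptions here, not part of the original answer:

import pandas as pd
import rasterio
from sklearn.svm import SVR

# Point observations of the dependent variable with x/y coordinates
dat = pd.read_csv("observations.csv")  # assumed columns: x, y, dependent

# Sample the raster of the independent variable at each point location
with rasterio.open("independent.tif") as src:
    coords = list(zip(dat["x"], dat["y"]))
    dat["ref"] = [v[0] for v in src.sample(coords)]

# Fit SVR on the sampled values and predict the dependent variable
model = SVR()
model.fit(dat[["ref"]], dat["dependent"])
dat["predicted"] = model.predict(dat[["ref"]])

As in the R version, the fitted model can then be applied cell by cell to the full raster to produce a predicted raster map.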

Reload weights into fc layer after converting to csr_matrix

I am trying to store the weights of the fc layers in compressed sparse row format. When I retrieve the weights and convert them to CSR matrix format, their size in memory drops drastically, but when I load them back into Caffe my model size remains the same. Basically this is what I'm doing:
from scipy import sparse

temp2 = net.params['ip1'][0].data                        # dense fc weights
sparse_csr1 = sparse.csr_matrix(temp2)                   # small CSR copy
net.params['ip1'][0].data[...] = sparse_csr1.toarray()   # written back as dense
net.save('compressed.caffemodel')
Any suggestions will be appreciated.
Caffe does not support sparse matrices, therefore you cannot benefit from sparse compression of weights.
It looks like these repos do exactly this for both inner-product and conv layers:
https://github.com/IntelLabs/SkimCaffe
https://github.com/valeriifilevsc/caffe
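A small self-contained sketch (shapes and sparsity level are made up for illustration) of why this happens: the CSR copy is genuinely smaller, but writing it back into the blob densifies it again, so the saved model cannot shrink:

import numpy as np
from scipy import sparse

# Hypothetical fc weights, mostly zeros (e.g. after pruning)
dense = np.random.randn(1000, 4096).astype(np.float32)
dense[np.abs(dense) < 1.3] = 0.0

csr = sparse.csr_matrix(dense)
csr_bytes = csr.data.nbytes + csr.indices.nbytes + csr.indptr.nbytes
print("dense:", dense.nbytes, "bytes; csr:", csr_bytes, "bytes")

# Assigning back into a dense blob restores the original footprint,
# which is why the saved .caffemodel stays the same size
restored = csr.toarray()
assert restored.nbytes == dense.nbytes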

Quadtree for collisions with latitude/longitude (earth size)

I have a Google Map, and a server sends a list of objects that each have a position and a small radius (100 m max). I need to be able to quickly tell whether a position collides with something in the list, and to draw everything on the map.
I'm thinking I should use a quadtree (very useful for 2D collisions in games), but my issue is that I'm not bounded by a screen but by the whole earth!
Sure, if I have 100 objects it's not a problem, but at any time the server can send me new objects that I need to add to the list, so my quadtree could change drastically or become unbalanced.
What should I do? Should I still use a quadtree and rebuild the entire tree when a new element is added outside the current boundaries? Should I set the boundaries to the maximum latitude/longitude (though that could have double-precision issues)? Or does someone know a better data structure for this type of problem?
To avoid issues with double precision, especially at the splitting border of a quad cell, it is advisable to use integer coordinates in the quadtree.
Convert the double lat/lon values to int by multiplying by 1e6; this gives a precision of about 10 cm.
You can also use a space-filling curve, for example a Z-order curve (Morton code), as sketched below.
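A minimal Python sketch of both ideas (the scale factor and bit widths are assumptions): scale lat/lon to integer micro-degrees, then interleave the bits to get a Z-order key.

def to_fixed(deg: float) -> int:
    # Scale degrees to integer micro-degrees (~10 cm precision)
    return int(round(deg * 1_000_000))

def interleave(v: int) -> int:
    # Spread the bits of v so there is a zero bit between each original bit
    result = 0
    for i in range(32):
        result |= ((v >> i) & 1) << (2 * i)
    return result

def z_key(lat: float, lon: float) -> int:
    # Shift into the unsigned range so negative coordinates sort correctly
    x = to_fixed(lon) + 180_000_000
    y = to_fixed(lat) + 90_000_000
    return interleave(x) | (interleave(y) << 1)

print(z_key(48.8584, 2.2945))

Points that are close in space usually get nearby keys, so a sorted structure keyed this way can answer range queries without ever rebalancing a tree.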

UV mapping in Stage3D / AS3

I've written a little Wavefront .obj file parser (a 3D model format). I'm able to display the geometry correctly but am having problems texturing it correctly.
The only way I get a correct texture is by dividing the model in my 3D editor, exporting it and parsing it that way, i.e. no longer sharing vertex data: each triangle is on its own, so my index buffer's array looks like [0,1,2,3,4,5,6...], which I want to avoid.
The correct texture but inefficient geometry (no reuse of vertices, 36 vertices): http://imageshack.us/a/img29/2242/textureright.jpg
The wrong texture but right topology (shared data, only 8 vertices = efficient): http://imageshack.us/a/img443/6160/texturewrong.jpg
I thought to try separating the UV buffer from the index buffer destined for the vertices, but didn't find a way to do it, if indeed it is doable.
I also messed with the AGAL code but haven't achieved any results.
The desired end is being able to pass different UV coordinates to the same vertex depending on the triangle currently being drawn.
What to do?
Thanks. (I'm new to 3d programming)
It might seem like you need just one vertex per 'vertex location' of your model but, from what I understand of an .obj parser, you need to define your vertices around the FACES. This means you may have multiple vertices at some locations, depending on how many faces adjoin that location, but the payoff is that you can have different UV coordinates for those vertices in the same location.
I'd suggest altering your parser to create vertices based on the faces they define rather than solely on their positions. I know this bumps up the number of vertices but, from what I've read, it's unavoidable if you need different UVs for the same vertex location.
So, unfortunately, I'm pretty sure your first option is the way to go.
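A sketch of that parsing strategy, written here in Python (the data layout is an assumption): emit one output vertex per unique (position index, UV index) pair seen in the face records, so positions are only shared when their UVs match too.

def build_buffers(positions, uvs, faces):
    # positions: list of (x, y, z); uvs: list of (u, v)
    # faces: list of triangles, each a list of (pos_idx, uv_idx) pairs
    vertex_data = []   # interleaved x, y, z, u, v
    indices = []
    seen = {}          # (pos_idx, uv_idx) -> output vertex index
    for tri in faces:
        for pos_idx, uv_idx in tri:
            key = (pos_idx, uv_idx)
            if key not in seen:
                seen[key] = len(seen)
                vertex_data.extend(positions[pos_idx] + uvs[uv_idx])
            indices.append(seen[key])
    return vertex_data, indices

For a textured cube this typically yields 24 vertices instead of 36 (everything duplicated) or 8 (fully shared, which breaks the UVs).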
It seems like your welding operation is wrong. When welding vertices you must make sure that positions, UV coordinates, normals and tangents (if you need them) are all equal.

2D Open Street Map Data Representation in Meters

I am in the process of converting OSM data into an open-source Minecraft port (written in JavaScript: voxel.js). The JavaScript rendition is written such that each voxel (arbitrarily defined as a cubic meter) is created relative to a single point of origin (x,y,z) = (0,0,0).
As an example, if one wanted to create a cubic chunk of voxels, one would simply generate voxels relative to the origin (0,0,0): [(0,0,0), (1,0,0), (0,1,0), ...].
My question is this: I've exported OSM data, and the standard XML output (.osm) plots nodes in latitude and longitude. My initial thought is that I can create a map by calculating the distance of each node from an arbitrary point of origin (0,0,0) = (37.77559, -122.41392) using the haversine formula, converting the distance to meters, finding the bearing, and plotting it relative to (0,0,0).
I've noticed, however, that there are a number of other export formats available: (.osm.pbf, .osm2pgsql, .imposm). I'm assuming they plot nodes in a similar fashion (lat, lng), but some of them have the ability to import directly into a database (e.g. PostgreSQL).
I've heard of people using PG add-ons like PostGIS, but (as this is my first dive into GIS) I'm unfamiliar with their capabilities and whether something like PostGIS would help me in plotting OSM data into a 2D voxel grid.
Are there functions within add-ons like PostGIS that would enable me to dynamically calculate the distance between two Lat/Lng points, and plot them in an x,y fashion?
I guess, fundamentally, my question is: if I create a script that plots OSM data into an x,y grid would I be reinventing the wheel, or is there a more efficient way to do this?
You need to transform from spherical coordinates (lat/lon on WGS84) to Cartesian coordinates, like Google's spherical Mercator.
In pseudo code:
transform(double lat, double lon) {
    double wgs84radius = 6378137;
    double shift = PI * wgs84radius;
    double x = lon * shift / 180;
    double y = log(tan((90 + lat) * PI / 360)) / (PI / 180);
    y = y * shift / 180;
    return {x, y};
}
This is the simplest way. Keep in mind that lat/lon are angles, while x and y are distances in metres from the origin (0, 0).
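For reference, a runnable Python version of the same transform and its inverse; this is a sketch of the standard spherical-Mercator formulas, not code from any particular library:

import math

WGS84_RADIUS = 6378137.0              # metres
SHIFT = math.pi * WGS84_RADIUS        # half the projected world width

def lat_lon_to_meters(lat, lon):
    # Spherical ("Google") Mercator, EPSG:900913
    x = lon * SHIFT / 180.0
    y = math.log(math.tan((90.0 + lat) * math.pi / 360.0)) / (math.pi / 180.0)
    return x, y * SHIFT / 180.0

def meters_to_lat_lon(x, y):
    lon = x / SHIFT * 180.0
    t = y / SHIFT * 180.0
    lat = 180.0 / math.pi * (2.0 * math.atan(math.exp(t * math.pi / 180.0)) - math.pi / 2.0)
    return lat, lon

# Roughly the origin used in the question above
print(lat_lon_to_meters(37.77559, -122.41392))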
The OSM data is by default in the WGS84 (EPSG:4326) projection which is based on an ellipsoidal Earth and measures latitude and longitude in degrees.
Most map tiles are generated in the EPSG:900913 "Google" spherical Mercator projection. This projection is based on a spherical Earth, and coordinates are measured in metres from the origin.
It really seems like the 900913 projection will fit quite nicely with your requirements.
Here is some code for converting between the two.
You might like to consider using osm2pgsql. During the import process all of the OSM map data is converted to the 900913 projection. What you are left with is all the nodes, lines and polygons of the OSM map data in an easy-to-access Postgres database.
I was initially intimidated by this process but it is really quite straightforward and will give you lots of flexibility when it comes to using the OSM data.