I have an ASCII text file containing location data (column 9 is latitude, column 10 is longitude) and intensity (column 20):
200010 207 020311 40658.5 406593 52 344927.31 7100203.50 -26.2078720 127.4491855 345060.64 7100369.14 26.4 650.3 628.0 55471.293 20.168 55648.817 55637.523 -146.062
The text file has many lines (10k+).
I am trying to visualize this using GDAL, but I'm not sure how to proceed.
Any ideas?
Try QGIS. It is free software for making maps with data.
GDAL is for doing sophisticated data transformations.
If your file is named viz.txt, then you can extract
and plot the data using the following commands:
$ awk '{print $9, $10, $20}' < viz.txt > viz2.txt
$ gnuplot
...
gnuplot> plot "viz2.txt" using 1:2:3 with points palette
This will give you a chart, nicely coloured by intensity.
If you want a more interactive solution, or to overlay the
data on a map, then you will have to use GIS software such
as ArcView, MapInfo or the free tools Generic Mapping Tools (GMT) or QGIS.
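Since the question mentions GDAL: recent GDAL/OGR (2.0 or newer, if I remember the open options correctly) can read such a delimited file directly through its CSV driver, so you can turn the three columns into a point shapefile and load that into QGIS. A sketch, assuming the column order from your sample (latitude in column 9, longitude in column 10, intensity in column 20; viz_points.shp is just an example name):
$ echo "lat,lon,intensity" > viz2.csv
$ awk '{print $9","$10","$20}' < viz.txt >> viz2.csv
$ ogr2ogr -f "ESRI Shapefile" viz_points.shp viz2.csv \
      -oo X_POSSIBLE_NAMES=lon -oo Y_POSSIBLE_NAMES=lat -a_srs EPSG:4326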
I have a JSON file that contains a lot of data with polygons, lines, and points, but I can't manage to export the data to shapefiles. Can someone help me with how to get there? The data is here:
https://www.sia.aviation-civile.gouv.fr/produits-numeriques-en-libre-disposition/donnees-zones-geographiques-uas.html
You can't export all of it to one shapefile as it is not possible to mix geometry types in a shapefile, so you will need 3 shapefiles (points, lines, polygons).
I would make use of ogr2ogr, the Swiss Army knife of vector formats, and use something like:
ogr2ogr -nlt POINT -skipfailures points.shp geojsonfile.json
ogr2ogr -nlt LINESTRING -skipfailures linestrings.shp geojsonfile.json
ogr2ogr -nlt POLYGON -skipfailures polygons.shp geojsonfile.json
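Since the three commands differ only in the geometry type, a small bash loop does the same thing (a sketch; the ${t,,} lowercasing needs bash 4+):
for t in POINT LINESTRING POLYGON; do
    ogr2ogr -nlt $t -skipfailures "${t,,}s.shp" geojsonfile.json
done
Afterwards, something like ogrinfo -so points.shp points gives a quick summary of each result.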
I am relatively new to machine learning/Python/Ubuntu.
I have a set of images in .jpg format, where half contain a feature I want Caffe to learn and half don't. I'm having trouble finding a way to convert them to the required lmdb format.
I have the necessary text input files.
My question is: can anyone provide a step-by-step guide on how to use convert_imageset.cpp in the Ubuntu terminal?
Thanks
A quick guide to Caffe's convert_imageset
Build
The first thing you must do is build Caffe and Caffe's tools (convert_imageset is one of these tools).
After installing Caffe and running make, make sure you ran make tools as well.
Verify that a binary file convert_imageset is created in $CAFFE_ROOT/build/tools.
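For reference, with the classic Makefile build that is roughly (a sketch; a CMake build places the binary under the same relative path):
$ cd $CAFFE_ROOT
$ make all
$ make tools
$ ls build/tools/convert_imageset   # should list the binary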
Prepare your data
Images: put all images in a folder (I'll call it here /path/to/jpegs/).
Labels: create a text file (e.g., /path/to/labels/train.txt) with one line per input image. For example:
img_0000.jpeg 1
img_0001.jpeg 0
img_0002.jpeg 0
In this example the first image is labeled 1 while the other two are labeled 0.
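If you still need to generate such a file, and assuming (hypothetically) your positive and negative images sit in subfolders pos/ and neg/ under /path/to/jpegs/, a shell sketch:
$ cd /path/to/jpegs
$ find pos -name '*.jpeg' | sed 's/$/ 1/'  > /path/to/labels/train.txt
$ find neg -name '*.jpeg' | sed 's/$/ 0/' >> /path/to/labels/train.txt
convert_imageset prepends the root folder argument to each listed name, so relative paths like pos/img_0000.jpeg are fine.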
Convert the dataset
Run the binary in shell
~$ GLOG_logtostderr=1 $CAFFE_ROOT/build/tools/convert_imageset \
--resize_height=200 --resize_width=200 --shuffle \
/path/to/jpegs/ \
/path/to/labels/train.txt \
/path/to/lmdb/train_lmdb
Command line explained:
Setting GLOG_logtostderr=1 before calling convert_imageset tells the logging mechanism to redirect log messages to stderr.
--resize_height and --resize_width resize all input images to the same size, 200x200.
--shuffle randomly changes the order of the images and does not preserve the order in the /path/to/labels/train.txt file.
Following are the path to the images folder, the labels text file and the output name. Note that the output name should not exist prior to calling convert_imageset, otherwise you'll get a scary error message.
Other flags that might be useful:
--backend - allows you to choose between an lmdb dataset and levelDB.
--gray - convert all images to gray scale.
--encoded and --encoded_type - keep image data in encoded (jpg/png) compressed form in the database.
--help - shows some help; see all relevant flags under "Flags from tools/convert_imageset.cpp".
You can check out $CAFFE_ROOT/examples/imagenet/convert_imagenet.sh for an example of how to use convert_imageset.
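To sanity-check the output you can, for example, run mdb_stat (from the lmdb-utils package, if you have it) on the new database; the reported number of entries should match the number of lines in your labels file:
$ mdb_stat /path/to/lmdb/train_lmdb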
I have QGIS 2.18 (the latest version) installed for Windows (new user). Along with it came the OSGeo4W Shell. Using this shell, I want to convert a specific value from one CRS into another. For example, if I know coordinates in WGS84 (say, 91.7362, 26.1445, just to give an example), I would like to know how to convert them to Indian 1954 / UTM Zone 46N (which is in metres) using the OSGeo4W Shell.
PS: I know there is a way because I once successfully found it. I had copied the syntax of the command, but I deleted the file by mistake and I can't find the method on the net again, even after long searches. It was barely a two-line, simple command.
I think the command is:
osgeo4w
gdaltransform -s_srs EPSG:4326 -t_srs EPSG:XXXX < input.csv > output.txt
where the EPSG codes identify the coordinate reference systems (4326 is WGS84), input.csv holds the source coordinates, and output.txt receives the transformed ones. You have to find out the EPSG code for your target CRS, and then you can perform the transformation.
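For a single coordinate pair you can also pipe it in with echo; note that gdaltransform expects x (longitude) first. If I read the EPSG registry correctly, Indian 1954 / UTM zone 46N is EPSG:23946, but please double-check that code:
echo "91.7362 26.1445" | gdaltransform -s_srs EPSG:4326 -t_srs EPSG:23946
The output is the easting and northing in metres.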
I've been using ogr2ogr to do most of what I need with shapefiles (including dissolving them). However, I find that for big ones, it takes a REALLY long time.
Here's an example of what I'm doing:
ogr2ogr new.shp old.shp -dialect sqlite -sql "SELECT ST_Union(geometry) FROM old"
In certain instances, one might want to dissolve common neighbouring shapes (which is what I think is going on in the command above). However, in my case I simply want to flatten the entire file and every shape in it, regardless of the values (I've already isolated the shapes I need).
Is there a faster way to do this when you don't need to care about the values and just want a shape that outlines the array of shapes in the file?
If you have isolated the shapes, and they don't have any shared boundaries, they can be easily collected into a single MULTIPOLYGON using ST_Collect. This should be really fast and simple to do:
ogr2ogr gcol.shp old.shp -dialect sqlite -sql "SELECT ST_Collect(geometry) FROM old"
If the geometries overlap and the boundaries need to be "dissolved", then ST_Union must be used. Faster spatial unions are done with a cascaded union technique, described here for PostGIS. It is supported by OGR, but there doesn't seem to be an elegant way to invoke it.
Here is a two-step SQL query: first make a MULTIPOLYGON of everything with ST_Collect (this is fast), then do a self-union, which should trigger a UnionCascaded() call.
ogr2ogr new.shp old.shp -dialect sqlite -sql "SELECT ST_Union(gcol, gcol) FROM (SELECT ST_Collect(geometry) AS gcol FROM old) AS f"
Or to better view the actual SQL statement:
SELECT ST_Union(gcol, gcol)
FROM (
SELECT ST_Collect(geometry) AS gcol
FROM old
) AS f
I've had better success (i.e., faster runtimes) by converting the vector to raster and then back to vector. For example:
# convert the vector file old.shp to a raster file new.tif using a pixel size of XRES/YRES
gdal_rasterize -tr XRES YRES -burn 255 -ot Byte -co COMPRESS=DEFLATE old.shp new.tif
# convert the raster file new.tif to a vector file new.shp, using the same raster as a -mask speeds up the processing
gdal_polygonize.py -f 'ESRI Shapefile' -mask new.tif new.tif new.shp
# removes the DN attribute created by gdal_polygonize.py
ogrinfo new.shp -sql "ALTER TABLE new DROP COLUMN DN"
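The trade-off is that the result follows pixel edges, so the output is only as faithful as the resolution you pick. As a purely illustrative example, for a shapefile in a metre-based CRS a 10 m pixel would be:
gdal_rasterize -tr 10 10 -burn 255 -ot Byte -co COMPRESS=DEFLATE old.shp new.tif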
I'm trying to map some statistical data of Italy and I need the infrastructure (railway and motorway) on top of it.
The problem is that I'm not able to simplify the infrastructure json file.
I'm using the OpenStreetMap shapefiles of Italy by Geofabrik: http://download.geofabrik.de/europe/italy.html#
I've converted roads.shp to GeoJSON, selecting only motorway and primary roads, using this command:
ogr2ogr -f GeoJSON -where "type IN ('motorway', 'motorway_link', 'primary', 'primary_link')" -t_srs EPSG:4326 roads.json roads.shp
I get a 55 MB JSON file. You can download it here: http://www.danielepennati.com/prove/mapping/roads_mw_pr.zip
Then I tried to simplify it and convert it to TopoJSON.
With no -s option, the new TopoJSON file is about 13 MB.
If I use -s or --simplify-proportion with any value from 1 to 0, I always get a maximum simplification of 95% and a file size of 11 MB.
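For reference, the command I'm running looks something like this (topojson v1 CLI, flags from memory):
topojson -o roads.topojson --simplify-proportion 0.5 -- roads.json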
How can I get a more simplified TopoJSON?
Thanks
daniele