I'm trying to map some statistical data of Italy and I need the infrastructure (railway and motorway) on top of it.
The problem is that I'm not able to simplify the infrastructure json file.
I'm using the OpenStreetMap shapefiles of Italy from Geofabrik: http://download.geofabrik.de/europe/italy.html#
I've converted roads.shp to GeoJSON, selecting only motorway and primary roads, using this command:
ogr2ogr -f GeoJSON -where "type IN ('motorway', 'motorway_link', 'primary', 'primary_link')" -t_srs EPSG:4326 roads.json roads.shp
I get a 55MB json file. You can download it here: http://www.danielepennati.com/prove/mapping/roads_mw_pr.zip
Then I tried to simplify it and convert it to TopoJSON.
With no -s option the new json file is about 13MB.
If I use -s or --simplify-proportion with any value from 1 to 0, I always get a maximum simplification of 95% and a file size of 11MB.
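For reference, the conversion command looks roughly like this (a sketch assuming the v1 topojson CLI; the output name and flag values are examples, not my exact invocation):
topojson -o roads-topo.json -q 1e4 --simplify-proportion 0.1 -- roads.json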
How can I get a more simplified topojson?
Thanks
daniele
I've gone through Mike Bostock's excellent tutorials on Command-Line Cartography and I'm confused by his use of his ndjson-split utility. That program is used to split up an array of objects in a json file, putting each object in the array on a single line. (Reference: https://github.com/mbostock/ndjson-cli)
In Part Two of the tutorial (https://medium.com/@mbostock/command-line-cartography-part-2-c3a82c5c0f3#.624i8b4iy) Mike uses ndjson-split on a geojson file:
ndjson-split 'd.features' \
< ca-albers.json \
> ca-albers.ndjson
He explains:
The output here looks underwhelmingly similar to the ca-albers.json we
saw previously; the only difference is that there is one feature (one
census tract) per line.
However, it seems there is another big difference. The new file does not contain all of the data that was in the original file. Specifically, the start of the original JSON object, {"type":"FeatureCollection" ... is gone.
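A toy run shows the same behaviour (made-up one-line input, not from the tutorial; ndjson-split reads newline-delimited JSON on stdin):
echo '{"type":"FeatureCollection","features":[{"id":1},{"id":2}]}' \
  | ndjson-split 'd.features'
{"id":1}
{"id":2}
Only the elements of the features array survive; the enclosing object is dropped.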
Mike doesn't explain why this additional key is not needed in the geojson file (the resulting files work perfectly).
Anyone know why? Is this key not needed for valid GeoJSON?
What I am starting with is the postcode table from the Netherlands. I split it up into a couple of CSV files, containing for instance the city as subject, PartOf as predicate and municipality as object. This gives you a file like this:
city,PartOf,municipality
Meppel,PartOf,Meppel
Nijeveen,PartOf,Meppel
Rogat,PartOf,Meppel
Now I would like to get this data into MarkLogic. I can import CSV files and I can import triples, but I can't figure out the combination.
I would suggest rewriting it slightly so it conforms to the N-Triples format, giving it the .nt extension, and then using MLCP to load it as input_type rdf.
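For example, the rows above could be rewritten as triples like these (the IRIs are made-up placeholders; use whatever vocabulary you standardise on):
<http://example.org/place/Meppel> <http://example.org/partOf> <http://example.org/municipality/Meppel> .
<http://example.org/place/Nijeveen> <http://example.org/partOf> <http://example.org/municipality/Meppel> .
<http://example.org/place/Rogat> <http://example.org/partOf> <http://example.org/municipality/Meppel> .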
HTH!
You can use Google Refine to convert CSV data to RDF. After that, MLCP can be used to push that data. You can do something like this:
$ mlcp.sh import -username user -password password -host localhost \
-port 8000 -input_file_path /my/data -mode local \
-input_file_type rdf
For more on loading triples using MLCP, you can refer to this MarkLogic Community page.
What is the best way to load the following geojson file in Google Big Query?
http://storage.googleapis.com/velibs/stations/test.json
I have a lot of json files like this (much bigger) on Google Storage, and I cannot download/modify/upload them all (it would take forever). Note that the file is not newline-delimited, so I guess it needs to be modified online.
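I imagine it could be rewritten in a stream, something like this (an untested sketch using gsutil and jq; the bucket path is taken from the link above):
gsutil cat gs://velibs/stations/test.json | jq -c '.features[]' | gsutil cp - gs://velibs/stations/test.ndjson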
Thanks all.
Step by step 2019:
If you get the error "Error while reading data, error message: JSON parsing error in row starting at position 0: Nested arrays not allowed.", you might have a GeoJSON file.
Transform the GeoJSON into newline-delimited JSON with jq, then load it as CSV into BigQuery (the empty quote character and '|' delimiter below make bq load each whole JSON line into a single column named row):
jq -c '.features[]' \
  san_francisco_censustracts.json > sf_censustracts_201905.json
bq load --source_format=CSV \
  --quote='' --field_delimiter='|' \
  fh-bigquery:deleting.sf_censustracts_201905 \
  sf_censustracts_201905.json row
Parse the loaded file in BigQuery:
CREATE OR REPLACE TABLE `fh-bigquery.uber_201905.sf_censustracts`
AS
SELECT FORMAT('%f,%f', ST_Y(centroid), ST_X(centroid)) lat_lon, *
FROM (
SELECT *, ST_CENTROID(geometry) centroid
FROM (
SELECT
CAST(JSON_EXTRACT_SCALAR(row, '$.properties.MOVEMENT_ID') AS INT64) movement_id
, JSON_EXTRACT_SCALAR(row, '$.properties.DISPLAY_NAME') display_name
, ST_GeogFromGeoJson(JSON_EXTRACT(row, '$.geometry')) geometry
FROM `fh-bigquery.deleting.sf_censustracts_201905`
)
)
Alternative approaches:
With ogr2ogr:
https://medium.com/google-cloud/how-to-load-geographic-data-like-zipcode-boundaries-into-bigquery-25e4be4391c8
https://medium.com/@mentin/loading-large-spatial-features-to-bigquery-geography-2f6ceb6796df
With Node.js:
https://github.com/mentin/geoscripts/blob/master/geojson2bq/geojson2bqjson.js
The bucket in the question no longer exists. However, five years later there is a new answer.
In July 2018, Google announced an alpha (now beta) of BigQuery GIS.
The docs highlight a limitation that
BigQuery GIS supports only individual geometry objects in GeoJSON.
BigQuery GIS does not currently support GeoJSON feature objects,
feature collections, or the GeoJSON file format.
This means that any Feature or FeatureCollection properties would need to be added to separate columns, with a geography column to hold the GeoJSON geometry.
In this tutorial by a Google trainer, polygons in a shapefile are converted into GeoJSON strings inside rows of a CSV file using GDAL.
ogr2ogr -f csv -dialect sqlite -sql "select AsGeoJSON(geometry) AS geom, * from LAYER_NAME" output.csv inputfilename.shp
You want to end up with one column containing the geometry content, like this:
{"type":"Polygon","coordinates":[[....]]}
Other columns may contain feature properties.
The CSV can then be imported into BigQuery, and a query on the table can be viewed in BigQuery Geo Viz. You need to tell it which column contains the geometry.
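A minimal load-and-query sketch (the dataset and table names below are placeholders, not from the tutorial; the geom column name matches the ogr2ogr command above):
bq load --source_format=CSV --autodetect my_dataset.my_layer output.csv
bq query --use_legacy_sql=false 'SELECT ST_GeogFromGeoJson(geom) AS geog, * EXCEPT(geom) FROM my_dataset.my_layer'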
I'm trying to combine a topojson file (produced from a shapefile) with data so I can display the data for a relevant selection on the map, but no luck yet.
Shape file features/properties:
id, code, name
Data files (I've got both .csv and .json)
file 1 columns:
year1, year2, year3, ..., identifier, %change
(the 'identifier' column in the data files is equal to 'code' in the shapefile)
I have data in 5 json files.
I was hoping, by combining these two files, to get a topojson file with the properties
id, code, name, year1, year2, year3, ..., %change.
The idea is that I could use just one topojson file for displaying the map as well as the relevant data.
This is what I have tried so far.
Generating topojson:
1. ogr2ogr -f GeoJSON geojsonoutput.json shpefile.shp
2. topojson -o final.json -e *.json --id-property=identifier -p -- geojsonoutput.json
final.json :
{
"type":"Topology",
"objects":{"geojsonoutput":{"type":"GeometryCollection","geometries": [{"type":"Polygon","properties":{"id":"1","name":"some name"},"arcs":
, "file1" : [{id, code, name, year1, year2, year3,...%change}],
"file2" : [{id, code, name, year1, year2, year3,...%change}],
}
I can access the map information by using the following:
topojson.feature(data, data.objects.geojsonoutput).features
However, I'm not sure how I can access the data, for example in the "file1" or "file2" keys.
Actually, am I going in the right direction? Is what I have done so far correct? Is there a better way to achieve what I'm trying to do?
Any guidance would be great. I'm still kind of new to D3 but enjoying working with it so far.
Cheers
Thanks to this example http://bl.ocks.org/mbostock/5562380 I managed to get what I'm after. Here is the solution:
topojson -e data.csv --id-property id_in_shapefile,id_in_datafile -p -o final.json -- shapefile.shp
It added the properties correctly.
Cheers
I am trying to run the following code in a command window. The code executes, but it gives me no values in the .shp file. The table has GeographyCollections and Polygons stored in a field of type Geography. I have tried many variations for the Geography type in the SQL statement - Binary, Text etc. - but no luck. The output .dbf file has data, so the connection to the database works, but the .shp and .shx files have no data and are 17K and 11K in size, respectively.
Any suggestions?
ogr2ogr -f "ESRI Shapefile" -overwrite c:\temp -nln Zip_States -sql "SELECT [ID2],[STATEFP10],[ZCTA5CE10],GEOMETRY::STGeomFromWKB([Geography].STAsBinary(),4326).STAsText() AS [Geography] FROM [GeoSpatial].[dbo].[us_State_Illinois_2010]" ODBC:dbo/GeoSpatial#PPDULCL708504
ESRI Shapefiles can contain only a single type of geometry - Point, LineString, Polygon etc.
Your description suggests that your query returns multiple types of geometry, so restrict that first (using STGeometryType() = 'Polygon', for example).
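For instance, a filter appended to the -sql query from the question might look like this (a sketch, untested):
WHERE [Geography].STGeometryType() = 'Polygon'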
Secondly, you're currently returning the spatial field as a text string using STAsText(), but you're not telling OGR that it's a spatial field so it's probably just treating the WKT as a regular text column and adding it as an attribute to the dbf file.
To tell OGR which column contains your spatial information you can add the "Tables" parameter to the connection string. However, there's no reason to do all the casting from WKT/WKB if you're using SQL Server 2008 - OGR2OGR will load SQL Server's native binary format fine.
Are you actually using SQL Server 2008, or Denali? Because the serialisation format changed, and OGR2OGR can't read the new format. So, in that case it's safer (but slower) to convert to WKB first.
The following works for me to dump a table of polygons from SQL Server to Shapefile:
ogr2ogr -f "ESRI Shapefile" -overwrite c:\temp -nln Zip_States -sql "SELECT ID, geom26986.STAsBinary() FROM [Spatial].[dbo].[OUTLINE25K_POLY]" "MSSQL:server=.\DENALICTP3;database=Spatial;trusted_connection=yes;Tables=dbo.OUTLINE25K_POLY(geom26986)"
Try the following command
ogr2ogr shapeFileName.shp -overwrite -sql "select top 10 * from schema.table" "MSSQL:Server=serverIP;Database=dbname;Uid=userid;trusted_connection=no;Pwd=password" -s_srs EPSG:4326 -t_srs EPSG:4326