How to load OSM (GeoJSON) data into ArangoDB?

How can I load OSM data into ArangoDB?
I downloaded the data set named luxembourg-latest.osm.pbf from OSM, then converted it to JSON with osmtogeojson, and then tried to load the resulting GeoJSON into ArangoDB with the following command: arangoimp --file out.json --collection lux1 --server.database geodb. I got a huge list of errors:
...
2017-03-17T12:44:28Z [7712] WARNING at position 719386: invalid JSON type (expecting object, probably parse error), offending context: ],
2017-03-17T12:44:28Z [7712] WARNING at position 719387: invalid JSON type (expecting object, probably parse error), offending context: [
2017-03-17T12:44:28Z [7712] WARNING at position 719388: invalid JSON type (expecting object, probably parse error), offending context: 5.867441,
...
What am I doing wrong?
Update: it seems that the converter should be run as osmtogeojson --ndjson, which produces the items not as a single JSON document but in line-by-line mode.

As @dmitry-bubnenkov already found out, --ndjson is required to produce the right input for arangoimp.
One has to know here that arangoimp expects a JSON subset dubbed JSONL, since it doesn't fully parse the JSON on its own.
Thus, each line of the JSON file is expected to become one JSON document in the collection after the import. To maximize performance and simplify the implementation, the JSON is not completely parsed before it is sent to the server.
arangoimp chops the JSON into chunks of the maximum request size that the server permits, and it leans on the JSONL line endings to find safe chunk boundaries.
The server, however, expects valid JSON. Chopping the file at arbitrary points would send chunks containing incomplete JSON documents, which leads to the parse errors on the server that you saw in your output.
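For illustration, here is the difference between the two formats (a minimal sketch with made-up coordinates, not your actual Luxembourg data). A plain GeoJSON export is one big document spanning many lines, which cannot be chopped safely:

{
  "type": "FeatureCollection",
  "features": [
    {"type": "Feature", "geometry": {"type": "Point", "coordinates": [5.867441, 49.611621]}, "properties": {}}
  ]
}

With osmtogeojson --ndjson, each feature is one self-contained JSON document per line, which arangoimp can chunk cleanly:

{"type":"Feature","geometry":{"type":"Point","coordinates":[5.867441,49.611621]},"properties":{}}
{"type":"Feature","geometry":{"type":"Point","coordinates":[6.129799,49.815273]},"properties":{}}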

Related

Not able to successfully slice a JSON file with jq

I am trying to slice a JSON file but it's giving me this error:
jq: error (at result2.json:0): Cannot index array with string "url"
I am trying to parse the following JSON file:
[screenshot of the JSON file omitted]
I want to access six reference links from this list. I tried using the following code but it's giving me an error.
jq -r .result.CVE_Items[].cve.references.reference_data[0:5].url result2.json
I am fairly new to JSON parsing, so I am having a tough time overcoming these issues :(
.reference_data[0:5] produces a slice, i.e. an array, so in your query, you would need to itemize it, e.g. by appending []:
.reference_data[0:5][]
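Applied to your query (with the filter quoted so the shell leaves the brackets alone), this becomes:

jq -r '.result.CVE_Items[].cve.references.reference_data[0:5][].url' result2.json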

How can I read csv.deflate HDFS files into a Dask dataframe?

I am trying to read csv.deflate files from an HDFS path into a Dask dataframe. I tried read_csv and I am getting the error "UnicodeDecodeError: 'utf-8' codec can't decode byte 0x9c in position 1: invalid start byte". Then I set engine='python' and encoding='utf-8', but I am still getting the same error.
Perhaps the compression= keyword would help? How would you read this data locally with Pandas? I suspect that you need the same keyword arguments that you would need in that case.
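A minimal local sketch, assuming the files are zlib-wrapped DEFLATE streams (which is what Hadoop's DeflateCodec typically writes); the file name is hypothetical. Pandas has no built-in 'deflate' codec, so the sketch decompresses by hand first:

import io
import zlib

import pandas as pd

# Hypothetical part file pulled down from HDFS for a local test.
with open("part-00000.csv.deflate", "rb") as f:
    raw = zlib.decompress(f.read())  # zlib-wrapped DEFLATE stream

# If this parses locally, the same decompression step is what the
# Dask/HDFS read needs to perform for each file as well.
df = pd.read_csv(io.BytesIO(raw))
print(df.head())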

Loading multiple JSON records into BigQuery using the console

I'm trying to upload some data into BigQuery in JSON format using the BigQuery Console, as described here.
If I have a single record in a JSON file, I can upload it successfully. If I put two or more newline-delimited records in a JSON file, then I get this error:
Error while reading data, error message: JSON parsing error in row starting at position 0: Parser terminated before end of string
I tried searching Stack Overflow and Google but didn't have any luck finding any information. The two records I uploaded as newline-delimited JSON upload successfully as individual records in separate JSON files.
My editor must have been adding some other character on my newlines. I went back to my original JSON array of records and used:
cat test.json | jq -c '.[]' > testNDJSON.json
This fixed everything.
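For reference, jq -c '.[]' turns an array like [{"id": 1}, {"id": 2}] into one compact record per line, which is the newline-delimited JSON that BigQuery expects:

{"id":1}
{"id":2}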

How to import Google Maps API responses into PostgreSQL?

I am trying to transfer data from a JSON file produced by the Google Maps API onto my PostgreSQL database. This is done through cURL and I made sure that the permissions have been correctly set.
The url:
https://maps.googleapis.com/maps/api/distancematrix/json?units=imperial&origins=London&destinations=Paris&key=AIza-[key-redacted]-3z6ho-o
The query:
copy bookings.import(info) from program 'C:/temp/mycurl/curl "https://maps.googleapis.com/maps/api/distancematrix/json?units=imperial&origins=London&destinations=Paris&key=AIza-[key-redacted]-3z6ho-o" --insecure'
However, when I try to do this on my table with column 'info' of type 'json', I get the following error:
ERROR: invalid input syntax for type json
SQL state: 22P02
DETAIL: The input string ended unexpectedly.
CONTEXT: JSON data, line 1: {
COPY import, line 1, column info: "{"
I am trying not to involve things such as PHP or any other tools for now, but if that is the only option I would certainly consider it.
What exactly do you think I am doing wrong? Is it the syntax, the format, or am I missing something?
Thanks!
COPY assumes that each newline indicates a new record. Unfortunately, the Google Maps DistanceMatrix API pretty-prints its response, which means that it comes through as 23 rows, none of which is valid JSON on its own.
You can get around this by piping the curl response through something like jq.
copy imports(info) from program 'curl "https://maps.googleapis.com/maps/api/distancematrix/json?units=imperial&origins=London&destinations=Paris&key=<my_key>" --insecure | /usr/local/bin/jq "." -c'
jq has lots of useful features if you want to massage the response a bit more before stashing it in the database.
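Schematically (abridged), the pretty-printed response arrives as many lines:

{
   "destination_addresses" : [ "Paris, France" ],
   "status" : "OK",
   ...
}

jq "." -c collapses it to a single line, so COPY sees exactly one record per response.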

Trouble following Encrypted BigQuery tutorial document

I wanted to try out the Encrypted BigQuery client for Google BigQuery and I've been having some trouble.
I'm following the instructions outlined in this PDF:
https://docs.google.com/file/d/0B-WB8hYCrhZ6cmxfWFpBci1lOVE/edit
I get to the point where I'm running this command:
ebq load --master_key_filename="key_file" testdataset.cars cars.csv cars.schema
And I'm getting an error string which ends with:
raise ValueError("No JSON object could be decoded")
I've tried a few different formats for my .csv and .schema files but none have worked. Here are my latest versions.
cars.schema:
[{"name": "Year", "type": "integer", "mode": "required", "encrypt": "none"}
{"name": "Make", "type": "string", "mode": "required", "encrypt": "pseudonym"}
{"name": "Model", "type": "string", "mode": "required", "encrypt": "probabilistic_searchwords"}
{"name": "Description", "type": "string", "mode": "nullable", "encrypt": "searchwords"}
{"name": "Website", "type": "string", "mode": "nullable", "encrypt": "searchwords","searchwords_separator": "/"}
{"name": "Price", "type": "float", "mode": "required", "encrypt": "probabilistic"}
{"name": "Invoice_Price", "type": "integer", "mode": "required", "encrypt": "homomorphic"}
{"name": "Holdback_Percentage", "type": "float", "mode": "required", "encrypt":"homomorphic"}]
cars.csv:
1997,Ford,E350, "ac\xc4a\x87, abs, moon","www.ford.com",3000.00,2000,1.2
1999,Chevy,"Venture ""Extended Edition""","","www.cheverolet.com",4900.00,3800,2.3
1999,Chevy,"Venture ""Extended Edition, Very Large""","","www.chevrolet.com",5000.00,4300,1.9
1996,Jeep,Grand Cherokee,"MUST SELL! air, moon roof,loaded","www.chrysler.com/jeep/grand-cherokee",4799.00,3950,2.4
I believe the issue may be that you need to move the --master_key_filename argument before the load argument. If that doesn't work, can you send the output of adding --apilog=- as the first argument?
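Concretely, the reordered command would be:

ebq --master_key_filename="key_file" load testdataset.cars cars.csv cars.schema

and, for the debugging variant, with --apilog=- as the first argument:

ebq --apilog=- --master_key_filename="key_file" load testdataset.cars cars.csv cars.schema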
Also, there is an example script for running ebq here:
https://code.google.com/p/bigquery-e2e/source/browse/#git%2Fsamples%2Fch13