Oracle SQLcl: Spool to json, only include content in items array?

I'm making a query via Oracle SQLcl. I am spooling into a .json file.
The correct data is presented from the query, but the format is strange.
Starting off as:
SET ENCODING UTF-8
SET SQLFORMAT JSON
SPOOL content.json
Followed by a query, this produces a JSON file as requested.
However, how do I remove the outer structure, meaning this part:
{"results":[{"columns":[{"name":"ID","type":"NUMBER"},
{"name":"LANGUAGE","type":"VARCHAR2"},{"name":"LOCATION","type":"VARCHAR2"},{"name":"NAME","type":"VARCHAR2"}],"items": [
// Here is the actual data I want to see in the file exclusively
]
I only want to spool everything in the items array, not including that key itself.
Is it possible to set this as a parameter before querying? Reading the Oracle docs has not yielded any answers, hence asking here.

Here's how I handle this.
After spooling the output to a file, I use the jq command to recreate the file with only the items:
cat file.json | jq --compact-output --raw-output '.results[0].items' > items.json
Using this tool: https://stedolan.github.io/jq/
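For reference, a sketch of the complete flow, combining the SQLcl session from the question with the jq post-processing step (the SELECT statement and table name here are hypothetical placeholders):
SET ENCODING UTF-8
SET SQLFORMAT JSON
SPOOL content.json
SELECT * FROM my_table;
SPOOL OFF
Then, in a shell:
jq --compact-output '.results[0].items' content.json > items.json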

Related

iterating json to store key value pairs using shell script

I have a JSON file that is created at runtime by an sh script within Groovy code. The JSON file has the below contents.
cat.json
{
"user1":"pass1",
"user2":"pass2",
"user3":"pass3"
}
Now I want to create a file at runtime that stores the key/value pairs in the below format:
test
user1:pass1
user2:pass2
user3:pass3
Can someone help me out with shell code for writing this?
There are literally a dozen ways to convert that JSON document to a tabular data file (pretty much like CSV, or colon-SV here), since you mentioned Java and Groovy, including the Java-driven scripting engines (BeanShell, JavaScript, Groovy itself). But if you can use jq, then you can extract key/value pairs, at least for simple values that do not require any escaping:
#!/bin/sh
jq -r 'to_entries[] | "\(.key):\(.value)"' \
< cat.json
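Run against the cat.json above, this prints each pair in the requested format:
user1:pass1
user2:pass2
user3:pass3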
This answer was inspired by searching for how to extract entries using jq (or convert a JSON file to a CSV file), and especially by the answer https://stackoverflow.com/a/50496145/12232870 by @peak.

Replace value of object property in multiple JSON files

I'm working with multiple JSON files that are located in the same folder.
Files contain objects with the same properties and they are such as:
{
"identifier": "cameraA",
"alias": "a",
"rtsp": "192.168.1.1"
}
I want to replace a property for all the objects in the JSON files at the same time for a certain condition.
For example, let's say that I want to replace all the rtsp values of the objects with identifier equal to "cameraA".
I've been trying with something like:
jq 'if .identifier == \"cameraA" then .rtsp=\"cameraX" else . end' -c *.json
But it isn't working.
Is there a simple way to replace the property of an object among multiple JSON files?
jq can only read from STDIN and write to STDOUT, so the simplest approach would be to process one file at a time, e.g. by putting your jq program inside a shell loop; sponge is often used when employing this approach, as in the sketch below.
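A minimal sketch of that loop, reusing the example values from the question (sponge is from moreutils):
#!/bin/sh
# Rewrite each JSON file in place: jq transforms the document and sponge
# soaks up the full output before overwriting the input file.
for f in *.json; do
  jq 'if .identifier == "cameraA" then .rtsp = "cameraX" else . end' "$f" | sponge "$f"
done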
However, there is an alternative that has the advantage of efficiency. It requires only one invocation of jq, the output of which would include the filename information (obtained from input_filename). This output would then be the input of an auxiliary process, e.g. awk.
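A sketch of that single-invocation variant, again with the example values from the question: jq emits each file's name (via input_filename) followed by the rewritten object, and awk routes each object to a matching .tmp file (moving the .tmp files over the originals is left as a final step):
jq -c 'input_filename, (if .identifier == "cameraA" then .rtsp = "cameraX" else . end)' *.json |
awk 'NR % 2 { fn = $0; gsub(/"/, "", fn); next } { print > (fn ".tmp") }'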

How to remove a specific data from the beginning of the json/avro schema file and the last bracket from the end of the file?

I am extracting the schema of a table from an Oracle DB using Apache NiFi, which I need to use to create a table in BigQuery. The ExecuteSQL processor in NiFi is giving me a schema file which I am saving in my home directory. Now, to use this schema file in BigQuery, I need to remove a certain part of the schema file from the beginning and the end. How do I do this in Unix using sed/awk?
Here is the content of the output file:
Obj^A^D^Vavro.schema<88>^L{"type":"record","name":"NiFi_ExecuteSQL_Record","namespace":"any.data","fields":[{"name":"FEED_SEQUENCE","type":["null",{"type":"bytes","logicalType":"decimal","precision":38,"scale":0}]},{"name":"FEED_UNIQUE_NAME","type":["null","string"]},{"name":"COUNTRY_CODE","type":["null","string"]},{"name":"EXTRACTION_TYPE","type":["null","string"]},{"name":"PROJECT_SEQUENCE","type":["null",{"type":"bytes","logicalType":"decimal","precision":38,"scale":0}]},{"name":"CREATED_BY","type":["null","string"]},{"name":"CREATED_DATE","type":["null",{"type":"long","logicalType":"timestamp-millis"}]},{"name":"UPDATED_BY","type":["null","string"]},{"name":"UPDATED_DATE","type":["null",{"type":"long","logicalType":"timestamp-millis"}]},{"name":"FEED_DESC","type":["null","string"]}]}^Tavro.codec^Hnull^#àÂ<87>)[ù<8b><97><90>"õ^S<98>[<98>±
I want to remove the Initial part Obj^A^D^Vavro.schema<88>^L{"type":"record","name":"NiFi_ExecuteSQL_Record","namespace":"any.data","fields":
and the ending part }^Tavro.codec^Hnull^#àÂ<87>)[ù<8b><97><90>"õ^S<98>[<98>± from the above.
Considering that you want to remove everything outside the first [ and the last ]:
sed 's/^[^[]*//;s/[^]]*$//'
Test:
$ cat out.file
Obj^A^D^Vavro.schema<88>^L{"type":"record","name":"NiFi_ExecuteSQL_Record","namespace":"any.data","fields":[{"name":"FEED_SEQUENCE","type":["null",{"type":"bytes","logicalType":"decimal","precision":38,"scale":0}]},{"name":"FEED_UNIQUE_NAME","type":["null","string"]},{"name":"COUNTRY_CODE","type":["null","string"]},{"name":"EXTRACTION_TYPE","type":["null","string"]},{"name":"PROJECT_SEQUENCE","type":["null",{"type":"bytes","logicalType":"decimal","precision":38,"scale":0}]},{"name":"CREATED_BY","type":["null","string"]},{"name":"CREATED_DATE","type":["null",{"type":"long","logicalType":"timestamp-millis"}]},{"name":"UPDATED_BY","type":["null","string"]},{"name":"UPDATED_DATE","type":["null",{"type":"long","logicalType":"timestamp-millis"}]},{"name":"FEED_DESC","type":["null","string"]}]}^Tavro.codec^Hnull^#àÂ<87>)[ù<8b><97><90>"õ^S<98>[<98>±
$ sed 's/^[^[]*//;s/[^]]*$//' out.file
[{"name":"FEED_SEQUENCE","type":["null",{"type":"bytes","logicalType":"decimal","precision":38,"scale":0}]},{"name":"FEED_UNIQUE_NAME","type":["null","string"]},{"name":"COUNTRY_CODE","type":["null","string"]},{"name":"EXTRACTION_TYPE","type":["null","string"]},{"name":"PROJECT_SEQUENCE","type":["null",{"type":"bytes","logicalType":"decimal","precision":38,"scale":0}]},{"name":"CREATED_BY","type":["null","string"]},{"name":"CREATED_DATE","type":["null",{"type":"long","logicalType":"timestamp-millis"}]},{"name":"UPDATED_BY","type":["null","string"]},{"name":"UPDATED_DATE","type":["null",{"type":"long","logicalType":"timestamp-millis"}]},{"name":"FEED_DESC","type":["null","string"]}]
You can use the ExtractAvroMetadata processor to extract only the avro.schema from the Avro flowfile.
In the processor, specify avro.schema as the value of the Metadata Keys property; the processor then extracts the Avro metadata and keeps it as a flowfile attribute.
Use the attribute value (${avro.schema}) in a ReplaceText processor to overwrite the content of the flowfile and create the table.
With the data in file d, using GNU sed:
sed -E 's/^[^\[]+(\[\{.+\})[^\}]+/\1/' d
Consider using regexes in Perl if you need to do more JSON string manipulation.
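For example, a hypothetical Perl equivalent of the sed command shown earlier, keeping everything from the first [ through the last ]:
perl -pe 's/^[^\[]*//; s/[^\]]*$//' d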

docker and format json

I'm trying to get usable JSON from the Docker CLI; however, it seems it will only produce JSON for individual items, and not for the complete result as a whole.
For example, running docker container ls -a --format="{{ json .Names }}" produces:
"hopeful_payne"
"trusting_turing"
"stupefied_morse"
"unruffled_noyce"
"pensive_fermi"
"objective_neumann"
"confident_bhaskara"
"unruffled_cray"
"epic_newton"
"boring_bartik"
"priceless_sinoussi"
"naughty_grothendieck"
"hardcore_bose"
"sad_jones"
"optimistic_napier"
"trusting_stallman"
"xenodochial_dijkstra"
"pedantic_cocks"
The above is not json.
How can I produce a result that is, ideally, a json array?
I think you cannot do this using docker only.
The command-line's format function is effectively taking each input line (one for each container) and applying the Go template to it. So you need another tool to aggregate the lines into a JSON array.
One way that you can achieve your goal is using the excellent jq tool:
docker container ls --format="{\"name\":\"{{.Names}}\"}" --all | jq --slurp
This generates each container line as a JSON string: {"name": "[VALUE]"} and then uses jq to slurp them into a JSON array.
A challenge in doing this directly in bash is JSON's stricture that the last element in an array can't be followed by a comma. So the following simple bash one-liner generates invalid JSON, and you'd need extra logic to remove the trailing comma (or, better yet, not add the last one):
echo "[$(for CONTAINER in $(docker container ls --format="{{.Names}}" --all); do echo "{\"name\":\"${CONTAINER}\"},"; done;)]"
What are you trying to do with these JSON responses? It might be easier just to talk directly to the Docker API, which will give you JSON responses directly. E.g., to get a list of containers:
curl --unix-socket /var/run/docker.sock http://localhost/v1.24/containers/json
You can, as DazWilkin suggested, use jq for filtering JSON on the command line. E.g., if we want a list of container names:
curl --unix-socket /var/run/docker.sock http://localhost/v1.24/containers/json |
jq '[.[]|.Names]'
You can find Docker API documentation here.
One way to think of the output is that it's JSONL: http://jsonlines.org/
This Docker output is JSON, one document per line. Since you asked for a single attribute (just the name), you're simply getting a string back; but notice it's quoted: it's technically JSON. It may make more sense if you update your format to {{ json . }}, which will then output lines that look more like the JSON you're expecting.
However, it's still one JSON document per line, so you'd have to process each line as its own document, or aggregate the lines as shown below.
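For example, a sketch that combines the two ideas: serialize the whole object on each line, then slurp the lines into a single JSON array with jq:
docker container ls -a --format '{{ json . }}' | jq --slurp '.'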

Efficiently get the first record of a JSONL file

Is it possible to efficiently get the first record of a JSONL file without consuming the entire stream / file? One way I have been able to inefficiently do so is the following:
curl -s http://example.org/file.jsonl | jq -s '.[0]'
I realize that head could be used here to extract the first line, but assume that the file may not use a newline as the record separator and may simply be concatenated objects or arrays.
If I'm understanding correctly, the JSONL format is just a stream of JSON objects, which jq handles quite nicely. Since you only want the first item, you can use the input filter to grab it.
I think you could just do this:
$ curl -s http://example.org/file.jsonl | jq -n 'input'
You need the null input option -n so jq doesn't consume the input immediately; input then reads just one value from the stream, so there's no need to go through the rest of it. And since jq parses a stream of concatenated JSON values rather than lines, this works even when the records aren't newline-separated.
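A quick demonstration with a hypothetical stream of concatenated objects (no newlines at all):
printf '{"a":1}{"a":2}{"a":3}' | jq -n 'input'
# prints only the first object: { "a": 1 }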