I have a JSON file that is created at runtime by a sh script within Groovy code. The JSON file has the contents below.
cat.json
{
"user1":"pass1",
"user2":"pass2",
"user3":"pass3"
}
Now I want to create a file at runtime which stores the key-value pairs in the format below:
test
user1:pass1
user2:pass2
user3:pass3
Can someone help me out with shell code for writing this?
You have literally a dozen ways to convert that JSON document to a tabular data file (pretty much like CSV, or rather colon-SV), since you mentioned Java and Groovy, including the Java-driven scripting engines (BeanShell, JavaScript, Groovy itself). But if you can use jq, then you can extract key/value pairs, at least for simple values that do not require any escaping:
#!/bin/sh
jq -r 'to_entries[] | "\(.key):\(.value)"' \
< cat.json
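With the cat.json from the question, this produces exactly the desired output:
$ jq -r 'to_entries[] | "\(.key):\(.value)"' < cat.json
user1:pass1
user2:pass2
user3:pass3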
This answer is inspired by searching for extracting entries using jq (or converting a JSON file to a CSV file), and especially by the answer https://stackoverflow.com/a/50496145/12232870 by @peak.
I'm working with multiple JSON files that are located in the same folder.
The files contain objects with the same properties, such as:
{
"identifier": "cameraA",
"alias": "a",
"rtsp": "192.168.1.1"
}
I want to replace a property on all the objects in the JSON files at the same time, given a certain condition.
For example, let's say that I want to replace all the rtsp values of the objects with identifier equal to "cameraA".
I've been trying with something like:
jq 'if .identifier == \"cameraA" then .rtsp=\"cameraX" else . end' -c *.json
But it isn't working.
Is there a simple way to replace the property of an object among multiple JSON files?
jq cannot edit files in place; it only reads input and writes to STDOUT. So the simplest approach would be to process one file at a time, e.g. by putting your jq program inside a shell loop. sponge is often used with this approach.
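For example, a minimal sketch of that loop for your cameraA case (sponge is from the moreutils package; it soaks up all its input before overwriting the file):
#!/bin/sh
for f in *.json; do
  # jq reads $f and writes the transformed document; sponge waits for
  # jq to finish before rewriting $f in place
  jq 'if .identifier == "cameraA" then .rtsp = "cameraX" else . end' "$f" | sponge "$f"
done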
However, there is an alternative that has the advantage of efficiency. It requires only one invocation of jq, the output of which would include the filename information (obtained from input_filename). This output would then be the input of an auxiliary process, e.g. awk.
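A hedged sketch of that one-invocation pattern, again for the cameraA example; the .new output suffix is made up for illustration, GNU awk is assumed, and note that input_filename can behave inconsistently across jq versions due to input buffering:
jq -r '(if .identifier == "cameraA" then .rtsp = "cameraX" else . end) as $doc
  | "\(input_filename)\t\($doc | tojson)"' *.json |
awk -F'\t' '{ print $2 > ($1 ".new") }'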
I am extracting the schema of a table from an Oracle DB using Apache NiFi, which I need to use to create a table in BigQuery. The ExecuteSQL processor in NiFi gives me a schema file which I am saving in my home directory. Now, to use this schema file in BigQuery, I need to remove a certain part of the schema file from the beginning and the end. How do I do this in Unix using sed/awk?
Here is the content of the output file:
Obj^A^D^Vavro.schema<88>^L{"type":"record","name":"NiFi_ExecuteSQL_Record","namespace":"any.data","fields":[{"name":"FEED_SEQUENCE","type":["null",{"type":"bytes","logicalType":"decimal","precision":38,"scale":0}]},{"name":"FEED_UNIQUE_NAME","type":["null","string"]},{"name":"COUNTRY_CODE","type":["null","string"]},{"name":"EXTRACTION_TYPE","type":["null","string"]},{"name":"PROJECT_SEQUENCE","type":["null",{"type":"bytes","logicalType":"decimal","precision":38,"scale":0}]},{"name":"CREATED_BY","type":["null","string"]},{"name":"CREATED_DATE","type":["null",{"type":"long","logicalType":"timestamp-millis"}]},{"name":"UPDATED_BY","type":["null","string"]},{"name":"UPDATED_DATE","type":["null",{"type":"long","logicalType":"timestamp-millis"}]},{"name":"FEED_DESC","type":["null","string"]}]}^Tavro.codec^Hnull^#àÂ<87>)[ù<8b><97><90>"õ^S<98>[<98>±
I want to remove the initial part Obj^A^D^Vavro.schema<88>^L{"type":"record","name":"NiFi_ExecuteSQL_Record","namespace":"any.data","fields":
and the ending part }^Tavro.codec^Hnull^#àÂ<87>)[ù<8b><97><90>"õ^S<98>[<98>± from the above.
Considering that you want to remove everything outside the first [ and the last ]:
sed 's/^[^[]*//;s/[^]]*$//'
Test:
$ cat out.file
Obj^A^D^Vavro.schema<88>^L{"type":"record","name":"NiFi_ExecuteSQL_Record","namespace":"any.data","fields":[{"name":"FEED_SEQUENCE","type":["null",{"type":"bytes","logicalType":"decimal","precision":38,"scale":0}]},{"name":"FEED_UNIQUE_NAME","type":["null","string"]},{"name":"COUNTRY_CODE","type":["null","string"]},{"name":"EXTRACTION_TYPE","type":["null","string"]},{"name":"PROJECT_SEQUENCE","type":["null",{"type":"bytes","logicalType":"decimal","precision":38,"scale":0}]},{"name":"CREATED_BY","type":["null","string"]},{"name":"CREATED_DATE","type":["null",{"type":"long","logicalType":"timestamp-millis"}]},{"name":"UPDATED_BY","type":["null","string"]},{"name":"UPDATED_DATE","type":["null",{"type":"long","logicalType":"timestamp-millis"}]},{"name":"FEED_DESC","type":["null","string"]}]}^Tavro.codec^Hnull^#àÂ<87>)[ù<8b><97><90>"õ^S<98>[<98>±
$ sed 's/^[^[]*//;s/[^]]*$//' out.file
[{"name":"FEED_SEQUENCE","type":["null",{"type":"bytes","logicalType":"decimal","precision":38,"scale":0}]},{"name":"FEED_UNIQUE_NAME","type":["null","string"]},{"name":"COUNTRY_CODE","type":["null","string"]},{"name":"EXTRACTION_TYPE","type":["null","string"]},{"name":"PROJECT_SEQUENCE","type":["null",{"type":"bytes","logicalType":"decimal","precision":38,"scale":0}]},{"name":"CREATED_BY","type":["null","string"]},{"name":"CREATED_DATE","type":["null",{"type":"long","logicalType":"timestamp-millis"}]},{"name":"UPDATED_BY","type":["null","string"]},{"name":"UPDATED_DATE","type":["null",{"type":"long","logicalType":"timestamp-millis"}]},{"name":"FEED_DESC","type":["null","string"]}]
You can use the ExtractAvroMetadata processor to extract only the avro.schema from the Avro flowfile.
In the processor, set the Metadata Keys property to avro.schema; the processor then extracts the Avro metadata and keeps it as a flowfile attribute.
Use the attribute value (${avro.schema}) in a ReplaceText processor to overwrite the content of the flowfile and create the table.
With the data in file 'd', using GNU sed:
sed -E 's/^[^\[]+(\[\{.+\})[^\}]+/\1/' d
Consider using regexes in Perl if you are going to work on JSON string manipulation.
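For example, a minimal Perl sketch of the same trim (the greedy .* spans from the first [ to the last ]):
perl -ne 'print $1 if /(\[.*\])/' d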
I'm trying to get usable JSON from the Docker CLI; however, it seems it will only produce JSON for individual items, and not for the complete result as a whole.
For example, running docker container ls -a --format="{{ json .Names }}" produces:
"hopeful_payne"
"trusting_turing"
"stupefied_morse"
"unruffled_noyce"
"pensive_fermi"
"objective_neumann"
"confident_bhaskara"
"unruffled_cray"
"epic_newton"
"boring_bartik"
"priceless_sinoussi"
"naughty_grothendieck"
"hardcore_bose"
"sad_jones"
"optimistic_napier"
"trusting_stallman"
"xenodochial_dijkstra"
"pedantic_cocks"
The above is not JSON.
How can I produce a result that is, ideally, a JSON array?
I think you cannot do this using Docker alone.
The command line's --format option effectively takes each result (one per container) and applies the Go template to it, producing one line per container. So you need another tool to aggregate the lines into a JSON array.
One way that you can achieve your goal is using the excellent jq tool:
docker container ls --format="{\"name\":\"{{.Names}}\"}" --all | jq --slurp .
This generates each container line as a JSON string: {"name": "[VALUE]"} and then uses jq to slurp them into a JSON array.
A challenge in doing this directly in bash is JSON's stricture that the last element in a list can't be followed by a comma. So the following simple bash script generates invalid JSON, and you'd need extra logic to remove the trailing comma (or, better yet, not add the last one; see the sketch after the code):
echo "[$(for CONTAINER in $(docker container ls --format="{{.Names}}" --all); do echo "{\"name\":\"${CONTAINER}\"},"; done;)]"
What are you trying to do with these JSON responses? It might be easier just to talk directly to the Docker API, which will give you JSON responses directly. E.g., to get a list of containers:
curl --unix-socket /var/run/docker.sock http://localhost/v1.24/containers/json
You can, as DazWilkin suggested, use jq for filtering JSON on the command line. E.g., if we want a list of container names:
curl --unix-socket /var/run/docker.sock http://localhost/v1.24/containers/json |
jq '[.[]|.Names]'
You can find Docker API documentation here.
One way to think of the output is that it's JSONL: http://jsonlines.org/
This Docker output is JSON, per line. Since you asked for a single attribute (just the name), you're simply getting a string back. But notice it's quoted: it's technically JSON. It may make more sense if you update your format to {{ json . }}, which will then output lines that look more like the JSON you're expecting.
However, it's still a JSON document per line, so you'd have to process each line as its own document.
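For example, a sketch of that per-line processing (assuming jq is available; it happily consumes a stream of one-document-per-line input):
docker container ls -a --format '{{ json . }}' | jq '.Names'
# or gather every per-line document into one JSON array:
docker container ls -a --format '{{ json . }}' | jq -s '.'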
Is it possible to efficiently get the first record of a JSONL file without consuming the entire stream / file? One way I have been able to inefficiently do so is the following:
curl -s http://example.org/file.jsonl | jq -s '.[0]'
I realize that head could be used here to extract the first line, but assume that the file may not use a newline as the record separator and may simply be concatenated objects or arrays.
If I'm understanding correctly, the JSONL format just returns a stream of JSON objects, which jq handles quite nicely. In the best-case scenario, where you want only the first item, you can utilize the input filter to grab it.
I think you could just do this:
$ curl -s http://example.org/file.jsonl | jq -n 'input'
You need the null-input option -n so the input isn't consumed immediately; then input reads just one value from the stream. There's no need to go through the rest of the input stream.
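This also covers the concatenated-objects case from the question, since jq's parser consumes one complete JSON value at a time regardless of newlines. A quick sketch:
$ printf '{"a":1}{"a":2}{"a":3}' | jq -n 'input'
{
  "a": 1
}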