Is it possible to get Boto3 / Python output in tabular format?

In the AWS CLI we can set the output format to json or table. I can already get JSON output via json.dumps; is there any way I could achieve output in table format?
I tried PrettyTable but had no success.

Boto3 does not return data in tabular format. You will need to parse the response and use another Python library to render it as a table. PrettyTable works well for me; read the PrettyTable docs and debug your code.
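As a minimal sketch of that approach, assuming an EC2 describe_instances call and the prettytable package (adapt the fields to whatever API you are actually calling):

import boto3
from prettytable import PrettyTable

ec2 = boto3.client("ec2")
table = PrettyTable()
table.field_names = ["InstanceId", "Type", "State"]

# Boto3 returns plain dicts and lists, so pick out the fields you want as columns
for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        table.add_row([instance["InstanceId"],
                       instance["InstanceType"],
                       instance["State"]["Name"]])

print(table)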

Related

Convert JSON to Avro in NiFi

I want to convert JSON data to Avro.
I have used GenerateFlowFile and put in the dummy JSON value [{"firstname":"prathik","age":21},{"firstname":"arun","age":22}].
I have then used the ConvertRecord processor, with a JsonTreeReader and an AvroRecordSetWriter backed by an AvroSchemaRegistry that has the following schema: AvroSchema
But I am getting this as my output: Output (Avro Data)
I am new to Apache NiFi.
Thanks in advance.
But I am getting this as my output: Output (Avro Data)
That's to be expected. Avro is a binary file format, and what you are seeing is an attempt to render that binary data as text. It is supposed to look like that.
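If you want to confirm the records survived the conversion, one option is to save the FlowFile content to disk and read it back with an Avro deserializer; a minimal sketch, assuming Python and the fastavro package (the file name is a placeholder):

from fastavro import reader

# Read the Avro file NiFi produced; each record prints as a dict,
# which should match the original JSON input
with open("output.avro", "rb") as f:
    for record in reader(f):
        print(record)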

Where can I get the FFProbe JSON schema definition?

I am using FFProbe to get media file information in JSON format.
I am looking for a complete schema definition for the JSON output option in FFProbe.
See: https://ffmpeg.org/ffprobe.html#json
Without the schema I find that different files produce different output, and I have to add more serialization logic by hand as I discover more properties and more tags in the JSON.
Something equivalent to MkvToolNix's full JSON schema definition, but for FFProbe:
See: https://gitlab.com/mbunkus/mkvtoolnix/-/blob/master/doc/json-schema/mkvmerge-identification-output-schema-v12.json
Any ideas if such a schema exists for FFProbe?
There isn't one, but there is an XML schema (XSD) which you could try to convert. It's at https://github.com/FFmpeg/FFmpeg/blob/master/doc/ffprobe.xsd
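As a starting point for such a conversion, here is a rough sketch that walks the XSD and prints an inventory of the types and attributes ffprobe can emit, using only Python's standard library (the raw URL is an assumption derived from the link above):

import urllib.request
import xml.etree.ElementTree as ET

# Assumed raw version of the XSD linked above
XSD_URL = "https://raw.githubusercontent.com/FFmpeg/FFmpeg/master/doc/ffprobe.xsd"
XS = "{http://www.w3.org/2001/XMLSchema}"  # XML Schema namespace

with urllib.request.urlopen(XSD_URL) as resp:
    root = ET.fromstring(resp.read())

# List every complexType and its declared attributes: a rough inventory
# of the properties ffprobe may put in its JSON output
for ctype in root.iter(XS + "complexType"):
    print(ctype.get("name"))
    for attr in ctype.iter(XS + "attribute"):
        print("  %s: %s" % (attr.get("name"), attr.get("type")))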

Is the format always JSON when SELECTing from a stage?

Snowflake supports multiple file types via CREATE FILE_FORMAT (Avro, JSON, CSV, etc.).
Now I have tested SELECTing from a Snowflake stage (S3) with both:
*.avro files (generated from a NiFi processor batching 10k records from a source Oracle table).
*.json files (one JSON document per line).
When I SELECT $1 FROM @myStg, Snowflake expands as many rows as there are records in the Avro or JSON files (cool), but the $1 VARIANT is in JSON format in both cases. Now I wonder: whatever Snowflake FILE_FORMAT we use, do records always arrive as JSON in the $1 VARIANT?
I haven't tested CSV or other Snowflake file formats.
Or I wonder if I get JSON from the Avro files (from the Oracle table) because maybe the NiFi processor creates Avro files that internally use a JSON format.
Maybe I'm getting confused here. I know Avro files contain both:
an Avro schema, in a JSON-like key/value language.
compressed data (binary).
Thanks,
Emanuel O.
I tried with CSV. When it comes to CSV, Snowflake parses each record in the file into separate positional columns ($1, $2, ...).
When it comes to JSON, it treats one complete JSON document as one record, so it is displayed in JSON format in the $1 VARIANT.
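A quick way to see the difference for yourself; a sketch using the snowflake-connector-python package, where the connection details, stage names, and file format names are all placeholders:

import snowflake.connector

# Placeholder connection parameters
conn = snowflake.connector.connect(user="...", password="...", account="...")
cur = conn.cursor()

# Semi-structured formats (JSON, Avro) land in a single VARIANT column,
# which the client renders as JSON text
cur.execute("SELECT $1 FROM @myStg (FILE_FORMAT => 'my_json_format') LIMIT 5")
for (variant_col,) in cur:
    print(variant_col)

# CSV is split into positional columns instead: $1, $2, $3, ...
cur.execute("SELECT $1, $2 FROM @myCsvStg (FILE_FORMAT => 'my_csv_format') LIMIT 5")
for first_col, second_col in cur:
    print(first_col, second_col)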

How to convert a nested JSON file into CSV in Scala

I want to convert my nested JSON into CSV. I used
df.write.format("com.databricks.spark.csv").option("header", "true").save("mydata.csv")
but it works for flat JSON, not nested JSON. Is there any way I can convert my nested JSON to CSV? Help will be appreciated, thanks!
When you ask Spark to convert a JSON structure to a CSV, Spark can only map the first level of the JSON.
This happens because of the simplicity of the CSV format. It is just assigning a value to a name. That is why {"name1":"value1", "name2":"value2"...} can be represented as a CSV with this structure:
name1,name2, ...
value1,value2,...
In your case, you are converting a JSON with several levels, so the Spark exception is saying that it cannot figure out how to convert such a complex structure into a CSV.
If you try to add only a second level to your JSON, it will work, but be careful. It will remove the names of the second level to include only the values in an array.
You can have a look at this link about JSON datasets; it includes an example.
As I have no information about the nature of the data, I can't say much more about it. But if you need to write the information as a CSV you will need to simplify the structure of your data.
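To make "simplify the structure" concrete, here is what flattening a nested record looks like, sketched with pandas rather than Spark (the field names are invented for illustration):

import pandas as pd

# Nested input: the "address" object is a second level
nested = [{"name": "James", "address": {"town": "Maidenhead", "postcode": "sl72qw"}}]

# json_normalize flattens nested objects into dotted column names:
# name, address.town, address.postcode
flat = pd.json_normalize(nested)
flat.to_csv("mydata.csv", index=False)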
Read the JSON file in Spark and create a DataFrame:
// Path to the example JSON file shipped with Spark
val path = "examples/src/main/resources/people.json"
val people = sqlContext.read.json(path)
Save the DataFrame using spark-csv:
// Write the DataFrame out as CSV with a header row
people.write
  .format("com.databricks.spark.csv")
  .option("header", "true")
  .save("newcars.csv")
Source:
read json
save to csv

AWS Lambda output format - JSON

I am trying to format the output from a Lambda function as JSON. The Lambda function queries my Amazon Aurora RDS instance and returns an array of rows in the following format:
[[name,age,town,postcode]]
which gives an example output of:
[["James", 23, "Maidenhead","sl72qw"]]
I understand that mapping templates are designed to translate one format to another, but I don't understand how I can take the output above and map it into a JSON format using these mapping templates.
I have checked the documentation and it only covers converting one JSON to another.
Without seeing the code you're using, it's difficult to give a definitively correct answer, but I suspect what you're after is returning the data from Python as a dictionary and then converting that to JSON.
It looks like this thread contains the relevant details on how to do that.
More specifically, using the DictCursor:
cursor = connection.cursor(pymysql.cursors.DictCursor)
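Putting that together, a minimal sketch of a handler, where the connection details and the query are placeholders:

import json
import pymysql

# Placeholder connection details for the Aurora instance
connection = pymysql.connect(host="my-aurora-host", user="user",
                             password="password", db="mydb")

def lambda_handler(event, context):
    # DictCursor returns each row as a dict keyed by column name,
    # so json.dumps produces JSON objects instead of bare arrays
    with connection.cursor(pymysql.cursors.DictCursor) as cursor:
        cursor.execute("SELECT name, age, town, postcode FROM people")
        rows = cursor.fetchall()
    return {
        "statusCode": 200,
        "body": json.dumps(rows)
    }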