I have JSON data saved in BigQuery, and I want to split the JSON out into table columns.
Row 1, column jobStatistics (type JSON):
{
billingTier: 1
createTime: "2023-01-14T10:03:57.605Z"
endTime: "2023-01-14T10:04:10.476Z"
queryOutputRowCount: "417653"
referencedTables: [2]
startTime: "2023-01-14T10:03:57.659Z"
totalBilledBytes: "671088640"
totalProcessedBytes: "670635727"
totalSlotMs: "64184"
totalTablesProcessed: 2
}
I want a result set where each job appears in its own row, with the JSON fields split out into columns. Something like:
totalBilledBytes | totalTablesProcessed | createTime
----------------------------------------------------
671088640 | 2 | 2023-01-14T10:03:57.605Z
Or, is there a different way to return the desired result set?
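A minimal sketch, assuming the JSON sits in a column named jobStatistics in a table named your_table (both names are placeholders): BigQuery's JSON_VALUE function extracts individual scalar fields, one per output column.
select
json_value(jobStatistics, '$.totalBilledBytes') as totalBilledBytes,
json_value(jobStatistics, '$.totalTablesProcessed') as totalTablesProcessed,
json_value(jobStatistics, '$.createTime') as createTime
from your_table
JSON_VALUE returns STRING, so cast (e.g. CAST(... AS INT64)) wherever you need a numeric type.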
I want to convert all the rows of a BigQuery query output to a JSON array.
For example: I want to convert the following output rows
Col1 | Col2
------------
ex1a | ex1b
ex2a | ex2b
Convert this to the following JSON:
[
{
"Col1":"ex1a",
"Col2":"ex1b"
},
{
"Col1":"ex2a",
"Col2":"ex2b"
}
]
Use the approach below:
select format('[%s]', string_agg(to_json_string(t)))
from your_table t
If applied to the sample data in your question, the output is the JSON array shown above.
Another option (with the same output) is:
select to_json_string(array_agg(t))
from your_table t
I have a bunch of JSON files which have an array with the column names and a separate array for the rows.
I want a dynamic way of retrieving the column names and merging them with the rows for each JSON file.
I've been playing around with derived columns and column patterns, but I'm struggling to get it working.
I want the column names from data.columns[].shortText and the values from each corresponding data.rows[][].value, matched by position.
Example format
{
"messages":{
},
"data":{
"columns":[
{
"columnName":"SelectionCriteria1",
"shortText":"Case no."
},
{
"columnName":"SelectionCriteria2",
"shortText":"Period for periodical values",
},
{
"columnName":"SelectionCriteria3",
"shortText":"Location"
},
{
"columnName":"SelectionCriteriaAggregate",
"shortText":"Value"
}
],
"rows":[
[
{
"value":"23523"
},
{
"value":12342349
},
{
"value":"234234",
"code":3342
},
{
"value":234234234
}
]
]
}
}
First, you need to fix your JSON data: there is an extra comma after the second object in columns, and in rows the value field is sometimes an int and sometimes a string, so when I tried to parse it in ADF I got an error.
I don't quite understand why you're trying to merge by position, because normally there are more rows than columns; if you get 5 rows and 3 columns, the join will fail.
Here is my approach to your problem (a SQL sketch of the same join-by-index idea follows the links below):
The main idea is that I added an index column to both arrays and joined the JSONs with an inner join.
created the source data (I used two sources, but you can make it one to simplify your data flow)
added a Select activity to pick the relevant arrays out of the data
flattened the arrays (in order to add an index column)
added an index using the Rank activity (please read up on rank and dense rank and the difference between the two)
added a Join activity, inner join on the index column
a Select activity to remove the index column from the result
saved the output to the sink
(Screenshots omitted: the JSON data I worked with, the overall Data Flow, and the Select, Flatten, Rank, and Join activity configurations.)
Please check these links:
https://learn.microsoft.com/en-us/azure/data-factory/data-flow-expressions-usage#mapAssociation
https://learn.microsoft.com/en-us/azure/data-factory/data-flow-map-functions
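For illustration only, here is the same join-by-index idea expressed in plain SQL rather than ADF syntax; cols_flattened and rows_flattened are hypothetical tables holding the flattened columns and rows arrays.
-- Index each side, then inner join on the index to merge by position.
with cols as (
select shortText, row_number() over () as idx from cols_flattened
),
vals as (
select value, row_number() over () as idx from rows_flattened
)
select cols.shortText, vals.value
from cols
inner join vals on cols.idx = vals.idx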
Let's take a simple schema with two tables: one that describes a simple entity, item (id, name),
id | name
------------
1 | foo
2 | bar
and another, let's call it collection, that references an item inside a JSON object, something like
{
items: [
{
id: 1,
quantity: 2
}
]
}
I'm looking for a way to enrich this field (kind of like populate in Mongo) in the collection with the referenced item, to retrieve something like
{
...
items: [
{
item: {
id: 1,
name: foo
},
quantity: 2
}
]
...
}
If you have a solution for PostgreSQL, I'll take it as well.
If I understood correctly, your requirement is to convert input JSON data into a MySQL table so that you can work with JSON but leverage the power of SQL.
MySQL 8 introduced the JSON_TABLE function. Using this function, you can store your JSON in the table directly and then query it like any other SQL query.
It should serve your immediate case, but it means your table schema will have a JSON column instead of traditional MySQL columns. You will need to check if that serves your purpose.
This is a good tutorial for the same.
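A minimal sketch of JSON_TABLE, assuming the collection table stores the JSON from the question in a column named doc and item is the (id, name) table; all names here are placeholders:
-- Expand the items array into rows, then join to pull in the item name.
select jt.id, i.name, jt.quantity
from collection c
cross join json_table(
c.doc, '$.items[*]'
columns (
id int path '$.id',
quantity int path '$.quantity'
)
) as jt
join item i on i.id = jt.id
Since you mentioned PostgreSQL as well, the analogous expansion there uses jsonb_array_elements with a lateral join (same placeholder names):
-- PostgreSQL version of the same enrichment.
select (elem->>'id')::int as id, i.name, (elem->>'quantity')::int as quantity
from collection c
cross join lateral jsonb_array_elements(c.doc->'items') as elem
join item i on i.id = (elem->>'id')::int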
In my database table I have a column named data whose type is jsonb. Here is a sample value from that column.
{"query": {"end-date": "2016-01-31", "start-date": "2016-01-01", "max-results": 1000, "start-index": 1 }}
This is the same value, pretty-printed:
{
"query":{
"end-date":"2016-01-31",
"start-date":"2016-01-01",
"max-results":1000,
"start-index":1
}
}
I need to get the value of 'start-date' inside the 'query' element. How do I get it with a PostgreSQL query?
You can use the Postgres built-in function json_extract_path (see the documentation).
The first parameter of this function is the column (the JSON value), the second parameter is the JSON root element, and the third parameter is the key whose data you want.
select json_extract_path(data::json,'query','start-date') as test FROM "schema".tbl_name
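Since the column is already jsonb, a minimal equivalent (same table and column names as above) avoids the ::json cast by using the JSON operators; -> descends into 'query' and ->> returns the value as text:
select data->'query'->>'start-date' as start_date from "schema".tbl_name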
I'm using Postgrex in Elixir, and when it returns query results, it returns them in the following struct format:
%{columns: ["id", "email", "name"], command: :select, num_rows: 2, rows: [{1, "me#me.com", "Bobbly Long"}, {6, "email#tts.me", "Woll Smoth"}]}
It should be noted I am using Postgrex directly WITHOUT Ecto.
The columns (table headers) are returned as a list, but the results (rows) are returned as a list of tuples (which seems odd, as they could get very large).
I'm trying to find the best way to programmatically create JSON objects for each result in which the JSON key is the column title and the JSON value the corresponding value from the tuple.
I've tried creating maps from both, merging and then serialising to JSON objects but it seems there should be an easier/better way of doing this.
Has anyone dealt with this before? What is the best way of creating a JSON object from a separate collection and tuple?
Something like this should work, assuming Jason as the JSON library:
result = Postgrex.query!(...)
Enum.map(result.rows, fn row ->
  # Pair each column name with the matching value from the row tuple,
  # collect the pairs into a map, then encode the map as JSON.
  result.columns
  |> Enum.zip(Tuple.to_list(row))
  |> Enum.into(%{})
  |> Jason.encode!()
end)
This will result in a list of JSON strings, where each row of the result set becomes one JSON object.
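If you want a single JSON array instead of a list of separate JSON strings, a small variation (same assumptions) builds the maps first and encodes once:
# Build a list of maps, then serialize the whole list as one JSON array.
result.rows
|> Enum.map(fn row ->
  result.columns |> Enum.zip(Tuple.to_list(row)) |> Enum.into(%{})
end)
|> Jason.encode!()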