Data Factory Copy Data Source as Body of Sink Object - json

I've been trying to create an ADF pipeline to move data from one of our databases into an Azure Storage folder, but I can't seem to get the transform to work correctly.
I'm using a Copy Data task and have the source and sink set up as datasets and data is flowing from one to the other, it's just the format that's bugging me.
In our database we have a single field that contains a JSON object. This needs to be mapped into the sink object, but it doesn't have a column name; it is simply the base object.
So, for example, where the source holds the JSON as text in a single column, my output needs to look like this:
[
    {
        "ID": 123,
        "Field": "Hello World",
        "AnotherField": "Foo"
    },
    {
        "ID": 456,
        "Field": "Don't Panic",
        "AnotherField": "Bar"
    }
]
However, the Copy Data task only seems to accept direct Source -> Sink mapping, and it also treats the SQL Server field as VARCHAR (which, I suppose, it is). So as a result I'm getting this out the other side:
[
    {
        "Json": "{\"ID\": 123,\"Field\":\"Hello World\",\"AnotherField\":\"Foo\"}"
    },
    {
        "Json": "{\"ID\": 456,\"Field\":\"Don't Panic\",\"AnotherField\":\"Bar\"}"
    }
]
I've tried using the internal #json() parse function on the source field but this causes errors in the pipeline. I also can't get the sink to just map directly as an object inside the output array.
I have a feeling I just shouldn't be using Copy Data, or that Copy Data doesn't support the level of transformation I'm trying to do. Can anybody set me on the right path?

Using a JSON dataset as a source in your data flow lets you set five additional settings. These settings can be found under the JSON settings accordion on the Source Options tab. For the Document Form setting, you can select one of Single document, Document per line, or Array of documents.
Select Document Form as Array of documents.
Refer to https://learn.microsoft.com/en-us/azure/data-factory/format-json
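To make the target shape concrete, here is a minimal Python sketch (outside ADF, with a hypothetical output path) of the transformation that has to happen: each row's JSON string is parsed into an object, and the objects become the elements of a single array of documents.

import json

# Rows as the Copy activity currently lands them: the JSON is still an escaped string.
copied_rows = [
    {"Json": "{\"ID\": 123,\"Field\":\"Hello World\",\"AnotherField\":\"Foo\"}"},
    {"Json": "{\"ID\": 456,\"Field\":\"Don't Panic\",\"AnotherField\":\"Bar\"}"},
]

# Parse each stringified object so the objects themselves become the array elements.
documents = [json.loads(row["Json"]) for row in copied_rows]

# Write the result in the desired "array of documents" form.
with open("output.json", "w") as f:  # hypothetical output path
    json.dump(documents, f, indent=4)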

Related

Splitting set of JSON files using Azure Data Factory into multiple files

I am pulling data from an API at regular intervals (i.e. every 5 minutes or more frequently).
Example of the returned info:
{
    "timestamp": "2022-09-28T00:33:53Z",
    "data": [
        {
            "id": "bdcb2ad8-9e19-4468-a4f3-b440de3a7b40",
            "value": 3,
            "created": "2020-09-28T00:00:00Z"
        },
        {
            "id": "7f8d07eb-433b-404c-a9b3-f1832bdd780f",
            "value": 4,
            "created": "2020-09-28T00:00:00Z"
        },
        {
            "id": "7f8d07eb-433b-404c-a9b3-f1832bdd780f",
            "value": 6,
            "created": "2020-09-28T00:05:00Z"
        }
    ]
}
So after a certain amount of time I would have a set of files in the landing zone:
2022-09-28T00:33:53Z.json
2022-09-28T00:43:44Z.json
etc...
I would like to use ADF to take those files and split them on a certain property.
So it would end up in a hierarchy like this:
/bdcb2ad8-9e19-4468-a4f3-b440de3a7b40/2022-09-28T00:33:53Z.json
/bdcb2ad8-9e19-4468-a4f3-b440de3a7b40/2022-09-28T00:43:44Z.json
/7f8d07eb-433b-404c-a9b3-f1832bdd780f/2022-09-28T00:33:53Z.json
/7f8d07eb-433b-404c-a9b3-f1832bdd780f/2022-09-28T00:43:44Z.json
where each file only contains the data for that id.
Any thoughts on how to pull this off?
Bonus: if, for every run of the ADF pipeline, I could union the files into a single one, that would be great, but it is not absolutely necessary.
Read the source file.
Iterate over the data array of the source file with a ForEach activity.
Add FolderName and FileName parameters to the sink dataset.
Build the sink folder path and file name dynamically from those parameters.
Supply the values for those parameters in the Copy activity.
The result is the expected set of files; a sketch of the equivalent split logic is shown below.
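For reference, here is a minimal sketch of the same split in Python (the landing-zone and output folder names are assumptions; in ADF the equivalent work is done by the ForEach and Copy activities):

import json
import os
from collections import defaultdict

landing_zone = "landing"  # assumed folder holding the collected files
output_root = "split"     # assumed output root; in ADF this would be the sink container

for file_name in os.listdir(landing_zone):  # e.g. 2022-09-28T00:33:53Z.json
    with open(os.path.join(landing_zone, file_name)) as f:
        payload = json.load(f)

    # Group this file's records by the property we split on ("id").
    groups = defaultdict(list)
    for record in payload["data"]:
        groups[record["id"]].append(record)

    # Write one file per id, keeping the original file name: /<id>/<original name>.json
    for record_id, records in groups.items():
        target_dir = os.path.join(output_root, record_id)
        os.makedirs(target_dir, exist_ok=True)
        with open(os.path.join(target_dir, file_name), "w") as out:
            json.dump(records, out, indent=4)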

Transform Avro file with Wrangler into JSON in Cloud Data Fusion

I am trying to read an Avro file, apply a basic transformation (remove records with name = Ben) using Wrangler, and write the result as a JSON file to Google Cloud Storage.
The Avro file has the following schema:
{
    "type": "record",
    "name": "etlSchemaBody",
    "fields": [
        {
            "type": "string",
            "name": "name"
        }
    ]
}
The transformation in Wrangler is the following:
transformation
The following is the output schema for the JSON file:
output schema
When I run the pipeline, it completes successfully and the JSON file is created in Cloud Storage, but the JSON output is empty.
When trying a preview run I get the following message:
warning message
Why is the JSON output file in Cloud Storage empty?
When using Wrangler to make transformations, the default values for the GCS source are format: text and body: string (data type). However, to work properly with an Avro file in Wrangler you need to change that: set the format to blob and the body data type to bytes.
After that, the preview for your pipeline should produce output records. You can see my working example next:
Sample data
Transformations
Input records preview for GCS sink (final output)
Edit:
You need to set format: blob and the output schema to body: bytes if you want to parse the file to Avro within Wrangler, as described before, because Wrangler needs the content of the file in a binary format.
On the other hand, if you only want to apply filters (within Wrangler), you could do the following:
Open the file using format: avro, see img.
Set the output schema according to the fields that your Avro file has, in this case name with string data type, see img.
Use only filters on the Wrangler (no parsing to Avro here), see img.
And this way you can also get the desired result.
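Outside Data Fusion, the same read-filter-write step can be sketched in a few lines of Python (the fastavro library and the file names here are assumptions, not part of the pipeline):

import json
from fastavro import reader  # assumed third-party Avro reader

kept_records = []
with open("input.avro", "rb") as avro_file:  # hypothetical input file
    for record in reader(avro_file):
        # Drop unwanted records, mirroring the Wrangler filter (remove name = Ben).
        if record["name"] != "Ben":
            kept_records.append(record)

# Write the surviving records as JSON, one document per line, like the GCS sink.
with open("output.json", "w") as json_file:  # hypothetical output file
    for record in kept_records:
        json_file.write(json.dumps(record) + "\n")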

What does it mean when a KinematicBody2D is stored in a JSON file - Godot?

After writing some code for saving to a JSON file in Godot, I saved the information in a variable called LData, and it is working. LData looks like this:
{
    "ingredients": [
        "[KinematicBody2D:1370]"
    ],
    "collected": [
        {
            "iname": "Pineapple",
            "collected": true
        },
        {
            "iname": "Banana",
            "collected": false
        }
    ]
}
What does it mean when the file says KinematicBody2D:1370? I understand that it is saving the node in the file - or is it just saving a string? Is it saving the node's properties as well?
When I tried retrieving the data, I wanted to access a variable that is assigned to the saved KinematicBody2D.
Code:
for ingredient in LData.ingredients:
    print(ingredient.iname)
Error:
Invalid get index name 'iname' (on base: 'String')
I am assuming that the data is stored as a String and I need to put some code to get the exact node it saved. Using get_node is also throwing an error.
Code:
for ingredient in LData.ingredients:
    print(get_node(ingredient).iname)
Error:
Invalid get index 'iname' (on base: 'null instance')
What information exactly is it storing when it says [KinematicBody2D:1370]? How do I access the variable iname and any other variables that are assigned to the node when the game is loaded and are not changed through the entire game?
[KinematicBody2D:1370] is just the string representation of a Node, which comes from Object.to_string:
Returns a String representing the object. If not overridden, defaults to "[ClassName:RID]".
If you truly want to serialize an entire Object, you could use Marshalls.variant_to_base64 and put that string in your json file. However, this will likely bloat your json file and contain much more information than you actually need to save a game. Do you really need to save an entire KinematicBody, or can you figure out the few properties that need to be saved (position, type of object, etc.) and reconstruct the rest at runtime?
You can also save objects as Resources, which is more powerful and flexible than a JSON file, but tends to be better suited to game assets than save games. However, you could read the Resource docs and see if saving Resources seems like a more appropriate solution to you.

Losing data from Source to Sink in Copy Data

In my MS Azure Data Factory, I have a REST API connection to a nested JSON dataset.
The Source "Preview data" shows all data (7 orders from the online store).
In the Copy Data activity there is the "Mapping" tab, where I map JSON fields to the sink SQL table columns. If I select None under "Collection Reference", all 7 orders are copied over.
But if I want the nested metadata and select the meta field in "Collection Reference", I get my nested data as multiple order lines, each with one metadata point, but I only get data from 1 order, not all 7.
I think I know the reason for my problem: one of the fields in the nested metadata is both a string and an array. But I still don't have a solution.
screenshot of metadata
Your hunch is right; it is caused by your nested metadata structure. Based on the description of the Collection Reference property:
If you want to iterate and extract data from the objects inside an array field with the same pattern and convert to per row per object, specify the JSON path of that array to do cross-apply. This property is supported only when hierarchical data is source.
"Same pattern" is the key point here, I think. However, the data inside your metadata array does not follow the same pattern, as your screenshot shows.
My workaround is to use Azure Blob Storage as an intermediate stage: REST API ---> Azure Blob Storage ---> your sink dataset. Inside the Blob Storage dataset, you can flatten the incoming JSON data with the Cross-apply nested JSON array setting:
You can refer to this blog to learn about this feature, and then copy the flattened data into your destination. A sketch of what cross-apply does to the rows is shown below.
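To make that concrete, here is a minimal Python sketch of the cross-apply/flatten step (the order and metadata field names are assumptions, since the real schema is only visible in the screenshots):

# One row per order, each with a nested "meta" array (assumed field names).
orders = [
    {"order_id": 1, "customer": "A", "meta": [{"key": "colour", "value": "red"},
                                              {"key": "size", "value": "L"}]},
    {"order_id": 2, "customer": "B", "meta": [{"key": "colour", "value": "blue"}]},
]

# Cross-apply: emit one output row per element of the nested array,
# repeating the parent order's columns on every row.
flattened = []
for order in orders:
    for meta in order["meta"]:
        flattened.append({
            "order_id": order["order_id"],
            "customer": order["customer"],
            "meta_key": meta["key"],
            "meta_value": meta["value"],
        })

for row in flattened:
    print(row)  # three rows: two for order 1, one for order 2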

Efficient Portable Database for Hierarchical Dataset - Json, Sqlite or?

I need to make a file that contains a hierarchical dataset. The dataset in question is a file-system listing (directory names, file name/sizes in each directory, sub-directories, ...).
My first instinct was to use Json and flatten the hierarchy using paths so the parser doesn't have to recurse so much. As seen in the example below, each entry is a path ("/", "/child01", "/child01/gchild01", ...) and its files.
{
    "entries":
    [
        {
            "path": "/",
            "files":
            [
                {"name": "File1", "size": 1024},
                {"name": "File2", "size": 1024}
            ]
        },
        {
            "path": "/child01",
            "files":
            [
                {"name": "File1", "size": 1024},
                {"name": "File2", "size": 1024}
            ]
        },
        {
            "path": "/child01/gchild01",
            "files":
            [
                {"name": "File1", "size": 1024},
                {"name": "File2", "size": 1024}
            ]
        },
        {
            "path": "/child02",
            "files":
            [
                {"name": "File1", "size": 1024},
                {"name": "File2", "size": 1024}
            ]
        }
    ]
}
Then I thought that repeating the keys over and over ("name", "size") for each file kind of sucks. So I found this article about how to use Json as if it were a database - http://peter.michaux.ca/articles/json-db-a-compressed-json-format
Using that technique I'd have a Json table like "Entry" with columns "Id", "ParentId", "EntryType", "Name", "FileSize" where "EntryType" would be 0 for Directory and 1 for File.
So, at this point, I'm wondering if sqlite would be a better choice. I'm thinking that the file size would be a LOT smaller than a Json file, but it might only be negligible if I use Json-DB-compressed format from the article. Besides size, are there any other advantages that you can think of?
I think a JavaScript object as the data source, loaded as a file stream into the browser and then used in JavaScript logic in the browser, would consume the least time and give good performance, BUT only up to a limited hierarchy size.
Also, not storing the hierarchy anywhere else and keeping it only as a JSON file badly limits your data source's use in your project to client-side technologies, or forces conversions to other technologies.
If you are building a pure JavaScript based application (HTML, JS, CSS only), then you could keep it as a JSON object alone and limit your hierarchy sizes; you could split bigger hierarchies into multiple files linking JSON objects.
If you will have server-side code such as PHP in your project, then considering manageability of code and scaling, you should ideally store the data in a SQLite DB and, at runtime, create your JSON hierarchies for a limited number of levels as Ajax loads from your page. A minimal sketch of such a table is shown below.
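For instance, here is a sketch in Python of the Entry table the question proposes (Id, ParentId, EntryType, Name, FileSize), using the built-in sqlite3 module; the database file name and the sample rows are assumptions:

import sqlite3

conn = sqlite3.connect("listing.db")  # hypothetical database file
conn.execute("""
    CREATE TABLE IF NOT EXISTS Entry (
        Id        INTEGER PRIMARY KEY,
        ParentId  INTEGER,           -- NULL for the root directory
        EntryType INTEGER NOT NULL,  -- 0 = directory, 1 = file
        Name      TEXT NOT NULL,
        FileSize  INTEGER            -- NULL for directories
    )
""")

# A few rows from the example hierarchy: "/" and "/child01" with their files.
rows = [
    (1, None, 0, "/", None),
    (2, 1, 1, "File1", 1024),
    (3, 1, 1, "File2", 1024),
    (4, 1, 0, "child01", None),
    (5, 4, 1, "File1", 1024),
    (6, 4, 1, "File2", 1024),
]
conn.executemany("INSERT OR REPLACE INTO Entry VALUES (?, ?, ?, ?, ?)", rows)
conn.commit()

# List the files directly under /child01.
for name, size in conn.execute(
        "SELECT Name, FileSize FROM Entry WHERE ParentId = ? AND EntryType = 1", (4,)):
    print(name, size)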
If this is the only data your application stores then you can do something really simple like just store the data in an easy to parse/read text file like this:
File1:1024
File2:1024
child01
    File1:1024
    File2:1024
    gchild01
        File1:1024
        File2:1024
child02
    File1:1024
    File2:1024
Files get File:Size and directories get just their name; indentation gives structure (a tiny parser for this layout is sketched below). For something slightly more standard but just as easy to read, use YAML.
http://www.yaml.org/
Both can benefit from decreased file size (but decreased user readability) by gzipping the file.
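To show how little code the indented format needs, here is a minimal Python parser for it (the four-space indent and the in-memory layout are assumptions):

def parse_listing(text, indent="    "):
    """Parse the indented listing into {directory path: [(file name, size), ...]}."""
    stack = []                 # directory names leading down to the current line
    tree = {"/": []}
    for line in text.splitlines():
        if not line.strip():
            continue
        depth = (len(line) - len(line.lstrip())) // len(indent)
        entry = line.strip()
        del stack[depth:]      # climb back up when the indentation decreases
        if ":" in entry:       # file line: Name:Size
            name, size = entry.split(":")
            tree.setdefault("/" + "/".join(stack), []).append((name, int(size)))
        else:                  # directory line
            stack.append(entry)
            tree.setdefault("/" + "/".join(stack), [])
    return tree

# Example: print the files directly under /child01.
listing_text = open("listing.txt").read()  # hypothetical file in the format above
print(parse_listing(listing_text)["/child01"])  # [('File1', 1024), ('File2', 1024)]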
And if you have more data to store, then use SQLite. SQLite is great.
Don't use JSON for data persistence. It's wasteful.