I am new to Python, but have Java experience. Pretty different animals. I have a method that walks a directory structure and builds a JSON-like dictionary such as the one below.
I am trying to get another method to populate a Treeview from it. I have seen several examples here on Stack Overflow and have attempted to follow them. Below is what I have come up with, but it always errors out after going through the first directory, as if it lost track of where it was. The following error is returned:
Traceback (most recent call last):
File "temp.py", line 147, in <module>
load_treeview_from_project_dictionary(myData, tree_view)
File "temp.py", line 134, in load_treeview_from_project_dictionary
load_treeview_from_project_dictionary(c, tree_view, int(c['index']))
File "temp.py", line 136, in load_treeview_from_project_dictionary
tree_view.insert(parent_id, "end", text=c['name'], values=("", ""))
File "C:\Users\...\AppData\Local\Programs\Python\Python39\lib\tkinter\ttk.py", line 1364, in insert
res = self.tk.call(self._w, "insert", parent, index, *opts)
_tkinter.TclError: Item 1 not found
I have worked for several hours trying to refactor and figure this out, but have been unsuccessful. I am now reaching out to this great community to point out the error of my ways!
Please help.
import json
import tkinter
from tkinter import ttk
from tkinter.ttk import Label
myData = {
"name": "MyRoot",
"type": "directory",
"index": 0,
"children": [
{
"name": "SubDir1",
"type": "directory",
"index": 1,
"children": [
{
"name": "apic.png",
"type": "file"
}
]
},
{
"name": "SubDir2",
"type": "directory",
"index": 2,
"children": [
{
"name": "somefile.txt",
"type": "file"
},
{
"name": "anotherfile.txt",
"type": "file"
}
]
},
{
"name": "SubDir3",
"type": "directory",
"index": 3,
"children": []
},
{
"name": "SubDir4",
"type": "directory",
"index": 4,
"children": [
{
"name": "asomefile.txt",
"type": "file"
},
{
"name": "bsomefile.txt",
"type": "file"
},
{
"name": "csomefile.txt",
"type": "file"
},
{
"name": "dsomefile.txt",
"type": "file"
},
{
"name": "esomefile.txt",
"type": "file"
}
]
},
{
"name": "SubDir5",
"type": "directory",
"index": 5,
"children": [
{
"name": "NesterDir1",
"type": "directory",
"index": 6,
"children": []
}
]
},
{
"name": "SubDir6",
"type": "directory",
"index": 7,
"children": []
},
{
"name": "SubDir7",
"type": "directory",
"index": 8,
"children": []
},
{
"name": "SubDir8",
"type": "directory",
"index": 9,
"children": []
},
{
"name": "SubDir9",
"type": "directory",
"index": 10,
"children": []
},
{
"name": "SubDir10",
"type": "directory",
"index": 11,
"children": []
},
{
"name": "SubDir11",
"type": "directory",
"index": 12,
"children": []
}
]
}
def load_treeview_from_project_dictionary(data, my_tree_view, parent_id=None):
    print("this:" + data['name'] + " called function!")
    if parent_id is None:
        my_tree_view.insert("", "0", text=data['name'], values=("", ""))  # applies to first iteration only
    if data['children'] is not None:
        for c in data['children']:
            print("child: " + c['name'])
            if c['type'] == "directory":
                my_tree_view.insert('', int(c['index']), text=c['name'], values=("", ""))
                load_treeview_from_project_dictionary(c, my_tree_view, int(c['index']))
            else:
                my_tree_view.insert(parent_id, "end", text=c['name'], values=("", ""))
                load_treeview_from_project_dictionary(c, my_tree_view, parent_id)
root = tkinter.Tk()
main_label = Label(root, text="Directory Tree")
tree_view = ttk.Treeview(root, height=23)
tree_view.heading("#0", text="Directory Structure")
load_treeview_from_project_dictionary(myData, tree_view)
main_label.pack()
tree_view.pack()
root.mainloop()
Thanks in Advance!
So. After reviewing the tutorial link posted by D.L and then combing through my code and debugging over and over, I came to the conclusion that there was too much recursion going on. Watching the flow, I found that the method always stopped after the first file was added to the tree. Removing the recursive call after the file insert fixed a large part of the issue. I also scrutinized the insertion process and found that I could use the indices in the JSON as the Treeview's iids. I then decided it would be more efficient to use treeview.move to place each entry where I wanted it as it is inserted. Below is what I came up with, and it works great. I am posting it here for anyone else who runs into the same issue. After the code there is a screenshot of the resulting Treeview (or a link to it, due to my rank; I will try to fix that later).
import json
import tkinter
from tkinter import ttk
from tkinter.ttk import Label
myData = {
"name": "MyRoot",
"type": "directory",
"index": 0,
"children": [
{
"name": "SubDir1",
"type": "directory",
"index": 1,
"children": [
{
"name": "apic.png",
"type": "file"
}
]
},
{
"name": "SubDir2",
"type": "directory",
"index": 2,
"children": [
{
"name": "somefile.txt",
"type": "file"
},
{
"name": "anotherfile.txt",
"type": "file"
}
]
},
{
"name": "SubDir3",
"type": "directory",
"index": 3,
"children": []
},
{
"name": "SubDir4",
"type": "directory",
"index": 4,
"children": [
{
"name": "asomefile.txt",
"type": "file"
},
{
"name": "bsomefile.txt",
"type": "file"
},
{
"name": "csomefile.txt",
"type": "file"
},
{
"name": "dsomefile.txt",
"type": "file"
},
{
"name": "esomefile.txt",
"type": "file"
}
]
},
{
"name": "SubDir5",
"type": "directory",
"index": 5,
"children": [
{
"name": "NestedDir1",
"type": "directory",
"index": 6,
"children": [
{
"name": "yetAnotherfile.txt",
"type": "file"
}
]
}
]
},
{
"name": "SubDir6",
"type": "directory",
"index": 7,
"children": []
},
{
"name": "SubDir7",
"type": "directory",
"index": 8,
"children": []
},
{
"name": "SubDir8",
"type": "directory",
"index": 9,
"children": []
},
{
"name": "SubDir9",
"type": "directory",
"index": 10,
"children": []
},
{
"name": "SubDir10",
"type": "directory",
"index": 11,
"children": []
},
{
"name": "SubDir11",
"type": "directory",
"index": 12,
"children": []
}
]
}
def load_treeview_from_project_dictionary(data, my_tree_view, parent_id=None):
    print('this:' + data['name'] + ' called function!')
    if parent_id is None:
        my_tree_view.insert('', '0', text=data['name'], iid=0)  # applies to first iteration only
    for c in data['children']:
        print('child: ' + c['name'])
        if c['type'] == 'directory':
            my_tree_view.insert('', 'end', text=c['name'], iid=c['index'])
            my_tree_view.move(c['index'], data['index'], 'end')
            load_treeview_from_project_dictionary(c, my_tree_view, data['index'])
        else:
            file_index = my_tree_view.insert('', 'end', text=c['name'])
            my_tree_view.move(file_index, data['index'], 'end')
root = tkinter.Tk()
main_label = Label(root, text='Directory Tree')
tree_view = ttk.Treeview(root, height=23)
tree_view.heading('#0', text='Directory Structure')
load_treeview_from_project_dictionary(myData, tree_view)
main_label.pack()
tree_view.pack()
root.mainloop()
Treeview screenshot
We created a CloudFormation template for automated deployment of an AWS SiteWise monitoring dashboard. We would like to reference the asset's logical ID dynamically inside the dashboard definition below.
{\"widgets\":[{\"type\":\"sc-line-chart\",\"title\":\"power_all_plants_5m\",\"x\":0,\"y\":0,\"height\":3,\"width\":3,\"metrics\":[{\"type\":\"iotsitewise\",\"label\":\"power_all_plants_5m (All Power Plants)\",\"assetId\":\"0cd25cb9-89f9-4a93-b2bf-88050436f700\",\"propertyId\":\"fd34bba7-4ea2-4d62-9058-ab78b726b61a\",\"dataType\":\"DOUBLE\"}],\"alarms\":[],\"properties\":{\"colorDataAcrossThresholds\":true},\"annotations\":{\"y\":[]}},{\"type\":\"sc-line-chart\",\"title\":\"Generator-1\",\"x\":3,\"y\":0,\"height\":3,\"width\":3,\"metrics\":[{\"type\":\"iotsitewise\",\"label\":\"sum_watts_5m (Generator-1)\",\"assetId\":\"45b97aaa-3f0c-4312-a8a5-a00e4da8ec37\",\"propertyId\":\"e22d9a23-4ac8-432a-816b-cc4a2138b287\",\"dataType\":\"DOUBLE\"},{\"type\":\"iotsitewise\",\"label\":\"rpm (Generator-1)\",\"assetId\":\"45b97aaa-3f0c-4312-a8a5-a00e4da8ec37\",\"propertyId\":\"c6a40902-f07b-40ba-b6c5-3509b069dd4c\",\"dataType\":\"DOUBLE\"}],\"alarms\":[],\"properties\":{\"colorDataAcrossThresholds\":true},\"annotations\":{\"y\":[]}},{\"type\":\"sc-line-chart\",\"title\":\"Generator-2\",\"x\":0,\"y\":3,\"height\":3,\"width\":3,\"metrics\":[{\"type\":\"iotsitewise\",\"label\":\"sum_watts_5m (Generator-2)\",\"assetId\":\"b999319c-20ec-4060-b3b7-bc5ce7ef189c\",\"propertyId\":\"e22d9a23-4ac8-432a-816b-cc4a2138b287\",\"dataType\":\"DOUBLE\"},{\"type\":\"iotsitewise\",\"label\":\"rpm (Generator-2)\",\"assetId\":\"b999319c-20ec-4060-b3b7-bc5ce7ef189c\",\"propertyId\":\"c6a40902-f07b-40ba-b6c5-3509b069dd4c\",\"dataType\":\"DOUBLE\"}],\"alarms\":[],\"properties\":{\"colorDataAcrossThresholds\":true},\"annotations\":{\"y\":[]}}]}
This JSON literal is embedded in the template by escaping the double quotes with backslashes (\"), because we are using YAML as the language of the CloudFormation template. Unescaped, it looks like this:
{
"widgets": [
{
"type": "sc-line-chart",
"title": "power_all_plants_5m",
"x": 0,
"y": 0,
"height": 3,
"width": 3,
"metrics": [
{
"type": "iotsitewise",
"label": "power_all_plants_5m (All Power Plants)",
"assetId": "0cd25cb9-89f9-4a93-b2bf-88050436f700",
"propertyId": "fd34bba7-4ea2-4d62-9058-ab78b726b61a",
"dataType": "DOUBLE"
}
],
"alarms": [],
"properties": {
"colorDataAcrossThresholds": true
},
"annotations": {
"y": []
}
},
{
"type": "sc-line-chart",
"title": "Generator-1",
"x": 3,
"y": 0,
"height": 3,
"width": 3,
"metrics": [
{
"type": "iotsitewise",
"label": "sum_watts_5m (Generator-1)",
"assetId": "45b97aaa-3f0c-4312-a8a5-a00e4da8ec37",
"propertyId": "e22d9a23-4ac8-432a-816b-cc4a2138b287",
"dataType": "DOUBLE"
},
{
"type": "iotsitewise",
"label": "rpm (Generator-1)",
"assetId": "45b97aaa-3f0c-4312-a8a5-a00e4da8ec37",
"propertyId": "c6a40902-f07b-40ba-b6c5-3509b069dd4c",
"dataType": "DOUBLE"
}
],
"alarms": [],
"properties": {
"colorDataAcrossThresholds": true
},
"annotations": {
"y": []
}
},
{
"type": "sc-line-chart",
"title": "Generator-2",
"x": 0,
"y": 3,
"height": 3,
"width": 3,
"metrics": [
{
"type": "iotsitewise",
"label": "sum_watts_5m (Generator-2)",
"assetId": "b999319c-20ec-4060-b3b7-bc5ce7ef189c",
"propertyId": "e22d9a23-4ac8-432a-816b-cc4a2138b287",
"dataType": "DOUBLE"
},
{
"type": "iotsitewise",
"label": "rpm (Generator-2)",
"assetId": "b999319c-20ec-4060-b3b7-bc5ce7ef189c",
"propertyId": "c6a40902-f07b-40ba-b6c5-3509b069dd4c",
"dataType": "DOUBLE"
}
],
"alarms": [],
"properties": {
"colorDataAcrossThresholds": true
},
"annotations": {
"y": []
}
}
]
}
We would like to assign the asset ID dynamically using "!Ref" to the pre-created asset. We have tried the variations below, but no luck.
Existing value -> 0cd25cb9-89f9-4a93-b2bf-88050436f700
Variations tried:
[{\"Ref\":\"GeneratorAsset\"}]
!Ref GeneratorAsset
\"!Ref GeneratorAsset\"
{\"Ref\":GeneratorAsset}
{\"Ref\":\"GeneratorAsset\"}
\"{\"Ref\":\"GeneratorAsset\"}\"
\"{\"Fn::Sub\":${GeneratorAsset}}\"
\"{\"Fn::Sub\":${!GeneratorAsset}}\"
\"{\"Fn::Sub\":${!GeneratorAsset} }\"
Here GeneratorAsset is a resource that is created before the dashboard. We would appreciate help from any CloudFormation expert with replacing the ID value with the correct dynamic string.
Reference link: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iotsitewise-dashboard.html
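For what it's worth, a commonly used pattern for splicing a Ref value into an embedded JSON string is Fn::Sub combined with a YAML block scalar. Below is a minimal, untested sketch: the dashboard is trimmed to one widget, and the resource/property names are illustrative. If Ref on the asset resource does not return the plain asset ID, ${GeneratorAsset.AssetId} (i.e. Fn::GetAtt) is the usual alternative.
Resources:
  GeneratorDashboard:
    Type: AWS::IoTSiteWise::Dashboard
    Properties:
      DashboardName: power-plants
      DashboardDescription: auto-created dashboard
      # !Sub replaces ${GeneratorAsset} before the string reaches SiteWise; the
      # block scalar (|) avoids the backslash-escaped quotes entirely
      DashboardDefinition: !Sub |
        {"widgets":[{"type":"sc-line-chart","title":"Generator-1","x":3,"y":0,"height":3,"width":3,
          "metrics":[{"type":"iotsitewise","label":"sum_watts_5m (Generator-1)",
            "assetId":"${GeneratorAsset}",
            "propertyId":"e22d9a23-4ac8-432a-816b-cc4a2138b287","dataType":"DOUBLE"}],
          "alarms":[],"properties":{"colorDataAcrossThresholds":true},"annotations":{"y":[]}}]}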
Can someone please send me a solution, or a link, for PowerShell 5 and 7 showing how I can access child elements when a specific condition is fulfilled in a JSON file, which I have as output.json? I haven't found it on the net.
I want to retrieve the value of the "children" elements whose type element has the value FILE and put them into a list, so the final result should be [test1.txt, test2.txt].
Thank you!!!
{
"path": {
"components": [
"Packages"
],
"parent": "",
"name": "Packages",
},
"children": {
"values": [
{
"path": {
"components": [
"test1.txt"
],
"parent": "",
"name": "test1.txt",
},
"type": "FILE",
"size": 405
},
{
"path": {
"components": [
"test2.txt"
],
"parent": "",
"name": "test2.txt",
},
"type": "FILE",
"size": 409
},
{
"path": {
"components": [
"FOLDER"
],
"parent": "",
"name": "FOLDER",
},
"type": "DIRECTORY",
"size": 1625
}
]
"start": 0
}
}
1.) The JSON is incorrect; I assume that this is the correct one:
{
"path": {
"components": [
"Packages"
],
"parent": "",
"name": "Packages"
},
"children": {
"values": [
{
"path": {
"components": [
"test1.txt"
],
"parent": "",
"name": "test1.txt"
},
"type": "FILE",
"size": 405
},
{
"path": {
"components": [
"test2.txt"
],
"parent": "",
"name": "test2.txt"
},
"type": "FILE",
"size": 409
},
{
"path": {
"components": [
"FOLDER"
],
"parent": "",
"name": "FOLDER"
},
"type": "DIRECTORY",
"size": 1625
}
],
"start": 0
}
}
2.) The structure is not absolutely clear, but for your example this seems to be the correct solution:
$element = $json | ConvertFrom-Json
$result = @()
$element.children.values | foreach {
if ($_.type -eq 'FILE') { $result += $_.path.name }
}
$result | ConvertTo-Json
Be aware that the construct $result += $_.path.name is fine if you have up to ~10k items, but for very large collections it gets very slow, and you need to use an ArrayList instead (see the sketch below). https://adamtheautomator.com/powershell-arraylist/
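A minimal sketch of the ArrayList variant, using the same $json input as above:
$element = $json | ConvertFrom-Json
$result = [System.Collections.ArrayList]@()
$element.children.values | ForEach-Object {
    if ($_.type -eq 'FILE') {
        # Add() returns the index of the inserted item; [void] discards that output
        [void]$result.Add($_.path.name)
    }
}
$result | ConvertTo-Json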
I have an array of JSON objects formatted as follows:
[
{
"id": 1,
"names": [
{
"name": "Bulbasaur",
"language": {
"name": "en",
"url": "http://myserver.com:8000/api/v2/language/9/"
}
},
],
},
{
"id": 1,
"types": [
{
"slot": 1,
"type": {
"name": "grass",
"url": "http://myserver.com:8000/api/v2/type/12/"
}
},
{
"slot": 2,
"type": {
"name": "poison",
"url": "http://myserver.com:8000/api/v2/type/4/"
}
}
]
},
{
"id": 2,
"names": [
{
"name": "Ivysaur",
"language": {
"name": "en",
"url": "http://myserver.com:8000/api/v2/language/9/"
}
},
],
},
{
"id": 2,
"types": [
{
"slot": 1,
"type": {
"name": "ice",
"url": "http://myserver.com:8000/api/v2/type/10/"
}
},
{
"slot": 2,
"type": {
"name": "electric",
"url": "http://myserver.com:8000/api/v2/type/8/"
}
}
]
},
{
"id": 3,
"names": [
{
"name": "Venusaur",
"language": {
"name": "en",
"url": "http://myserver.com:8000/api/v2/language/9/"
}
},
],
},
{
"id": 3,
"types": [
{
"slot": 1,
"type": {
"name": "ground",
"url": "http://myserver.com:8000/api/v2/type/2/"
}
},
{
"slot": 2,
"type": {
"name": "rock",
"url": "http://myserver.com:8000/api/v2/type/3/"
}
}
]
}
]
Note that these are pairs of separate objects that appear sequentially in a JSON array, with each pair sharing an id field. This pattern repeats several hundred times in the array. What I need to accomplish is to "merge" each id-sharing pair into one object. So, the resultant output would be
[
{
"id": 1,
"names": [
{
"name": "Bulbasaur",
"language": {
"name": "en",
"url": "http://myserver.com:8000/api/v2/language/9/"
}
},
],
"types": [
{
"slot": 1,
"type": {
"name": "grass",
"url": "http://myserver.com:8000/api/v2/type/12/"
}
},
{
"slot": 2,
"type": {
"name": "poison",
"url": "http://myserver.com:8000/api/v2/type/4/"
}
}
]
},
{
"id": 2,
"names": [
{
"name": "Ivysaur",
"language": {
"name": "en",
"url": "http://myserver.com:8000/api/v2/language/9/"
}
},
],
"types": [
{
"slot": 1,
"type": {
"name": "ice",
"url": "http://myserver.com:8000/api/v2/type/10/"
}
},
{
"slot": 2,
"type": {
"name": "electric",
"url": "http://myserver.com:8000/api/v2/type/8/"
}
}
]
},
{
"id": 3,
"names": [
{
"name": "Venusaur",
"language": {
"name": "en",
"url": "http://myserver.com:8000/api/v2/language/9/"
}
},
],
"types": [
{
"slot": 1,
"type": {
"name": "ground",
"url": "http://myserver.com:8000/api/v2/type/2/"
}
},
{
"slot": 2,
"type": {
"name": "rock",
"url": "http://myserver.com:8000/api/v2/type/3/"
}
}
]
}
]
I've gotten these objects to appear next to each other via the group_by(.id) command, but I'm at a loss as to how I should actually combine them. I'm very much still a novice with jq, so I'm a bit overwhelmed by the number of possible solutions.
[Note: The following assumes that the data shown in the Q have been corrected so that they are valid JSON.]
The merging you want can be achieved by object addition (x + y). For example, given two of the JSON objects shown in the question (i.e., as a stream), you could write:
jq -s '.[0] + .[1]'
However, since the question also indicates these objects are actually in an array, let's next consider the case of an array with two objects. In that case, you could simply write:
jq add
Finally, if you have an array of arrays of objects, you could use map(add). Since you don't have a very large array, you could simply write:
group_by(.id) | map(add)
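For example, assuming the corrected array is saved in input.json (the filename is illustrative):
jq 'group_by(.id) | map(add)' input.json
This produces the merged array shown in the question.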
Please note that jq defines object addition in a non-commutative way: when both objects contain the same key, the value from the right-hand object wins.
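To see that bias concretely:
jq -n '{a: 1, b: 2} + {b: 3}'
# => {"a": 1, "b": 3}
In your data, the two objects in each pair share only the id key, and those values are equal, so the bias is harmless here.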
I want to convert my JSON to CSV so that I can upload it to Google Sheets and serve it as a JSON API. Whenever the data changes, I will just change it in Google Sheets. But I'm having problems converting my JSON file to CSV because the converter changes the variables whenever I convert it. I'm using https://toolslick.com/csv-to-json-converter to convert my JSON file to CSV.
What is the best way to convert nested JSON to CSV?
JSON
{
"options": [
{
"id": "1",
"value": "Jumbo",
"shortcut": "J",
"textColor": "#FFFFFF",
"backgroundColor": "#00000"
},
{
"id": "2",
"value": "Hot",
"shortcut": "D",
"textColor": "#FFFFFF",
"backgroundColor": "#FFFFFF"
}
],
"categories": [
{
"id": "1",
"order": 1,
"name": "First Category",
"active": true
},
{
"id": "2",
"order": 2,
"name": "Second Category",
"shortcut": "MT",
"active": true
}
],
"products": [
{
"id": "03c6787c-fc2a-4aa8-93a3-5e0f0f98cfb2",
"categoryId": "1",
"name": "First Product",
"shortcut": "First",
"options": [
{
"optionId": "1",
"price": 23
},
{
"optionId": "2",
"price": 45
}
],
"active": true
},
{
"id": "e8669cea-4c9c-431c-84ba-0b014f0f9bc2",
"categoryId": "2",
"name": "Second Product",
"shortcut": "Second",
"options": [
{
"optionId": "1",
"price": 11
},
{
"optionId": "2",
"price": 20
}
],
"active": true
}
],
"discounts": [
{
"id": "1",
"name": "S",
"type": 1,
"amount": 20,
"active": true
},
{
"id": "2",
"name": "P",
"type": 1,
"amount": 20,
"active": true
},
{
"id": "3",
"name": "G",
"type": 2,
"amount": 5,
"active": true
}
]
}
Using Python, this can be done easily, or almost. Maybe this code will help you understand the approach.
import csv
import json

# json.load reads the whole document at once; use one json.loads call per line
# only if the file is in JSON Lines format
with open('your_json_file_here.json') as file:
    data = json.load(file)

# newline='' avoids blank rows on Windows; the with-block closes the file,
# so no explicit f.close() is needed
with open('create_new_file.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['header1', 'header2'])
    for record in data:
        writer.writerow((record['specific_col_name1'], record['specific_col_name2']))
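The snippet above assumes one flat record per row. For the nested products in your JSON specifically, a hedged sketch (the file names are placeholders) is to emit one CSV row per product/option pair so nothing nested remains:
import csv
import json

with open('data.json') as f:  # placeholder: file containing the JSON from the question
    doc = json.load(f)

with open('products.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['productId', 'categoryId', 'name', 'optionId', 'price'])
    for product in doc['products']:
        for option in product['options']:
            # parent fields are repeated on every option row
            writer.writerow([product['id'], product['categoryId'],
                             product['name'], option['optionId'], option['price']])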
I have been working on this for a couple of days and cannot get past this error. I have two activities in this pipeline. The first activity copies data from an ODBC connection to an Azure database and succeeds. The second activity transfers the data from one Azure table to another Azure table and keeps failing.
The error message is:
Copy activity met invalid parameters: 'UnknownParameterName', Detailed message: An item with the same key has already been added..
I do not see any invalid parameters or unknown parameter names. I have rewritten this multiple times, both using their add-activity code template and by myself, but do not receive any errors when deploying or when it is running. Below is the JSON pipeline code.
Only the 2nd activity is receiving an error.
Thanks.
Source Data set
{
"name": "AnalyticsDB-SHIPUPS_06shp-01src_AZ-915PM",
"properties": {
"structure": [
{
"name": "UPSD_BOL",
"type": "String"
},
{
"name": "UPSD_ORDN",
"type": "String"
}
],
"published": false,
"type": "AzureSqlTable",
"linkedServiceName": "Source-SQLAzure",
"typeProperties": {},
"availability": {
"frequency": "Day",
"interval": 1,
"offset": "04:15:00"
},
"external": true,
"policy": {}
}
}
Destination Data set
{
"name": "AnalyticsDB-SHIPUPS_06shp-02dst_AZ-915PM",
"properties": {
"structure": [
{
"name": "SHIP_SYS_TRACK_NUM",
"type": "String"
},
{
"name": "SHIP_TRACK_NUM",
"type": "String"
}
],
"published": false,
"type": "AzureSqlTable",
"linkedServiceName": "Destination-Azure-AnalyticsDB",
"typeProperties": {
"tableName": "[olcm].[SHIP_Tracking]"
},
"availability": {
"frequency": "Day",
"interval": 1,
"offset": "04:15:00"
},
"external": false,
"policy": {}
}
}
Pipeline
{
"name": "SHIPUPS_FC_COPY-915PM",
"properties": {
"description": "copy shipments ",
"activities": [
{
"type": "Copy",
"typeProperties": {
"source": {
"type": "RelationalSource",
"query": "$$Text.Format('SELECT COMPANY, UPSD_ORDN, UPSD_BOL FROM \"orupsd - UPS interface Dtl\" WHERE COMPANY = \\'01\\'', WindowStart, WindowEnd)"
},
"sink": {
"type": "SqlSink",
"sqlWriterCleanupScript": "$$Text.Format('delete imp_fc.SHIP_UPS_IntDtl_Tracking', WindowStart, WindowEnd)",
"writeBatchSize": 0,
"writeBatchTimeout": "00:00:00"
},
"translator": {
"type": "TabularTranslator",
"columnMappings": "COMPANY:COMPANY, UPSD_ORDN:UPSD_ORDN, UPSD_BOL:UPSD_BOL"
}
},
"inputs": [
{
"name": "AnalyticsDB-SHIPUPS_03shp-01src_FC-915PM"
}
],
"outputs": [
{
"name": "AnalyticsDB-SHIPUPS_03shp-02dst_AZ-915PM"
}
],
"policy": {
"timeout": "1.00:00:00",
"concurrency": 1,
"executionPriorityOrder": "NewestFirst",
"style": "StartOfInterval",
"retry": 3,
"longRetry": 0,
"longRetryInterval": "00:00:00"
},
"scheduler": {
"frequency": "Day",
"interval": 1,
"offset": "04:15:00"
},
"name": "915PM-SHIPUPS-fc-copy->[imp_fc]_[SHIP_UPS_IntDtl_Tracking]"
},
{
"type": "Copy",
"typeProperties": {
"source": {
"type": "SqlSource",
"sqlReaderQuery": "$$Text.Format('select distinct ups.UPSD_BOL, ups.UPSD_BOL from imp_fc.SHIP_UPS_IntDtl_Tracking ups LEFT JOIN olcm.SHIP_Tracking st ON ups.UPSD_BOL = st.SHIP_SYS_TRACK_NUM WHERE st.SHIP_SYS_TRACK_NUM IS NULL', WindowStart, WindowEnd)"
},
"sink": {
"type": "SqlSink",
"writeBatchSize": 0,
"writeBatchTimeout": "00:00:00"
},
"translator": {
"type": "TabularTranslator",
"columnMappings": "UPSD_BOL:SHIP_SYS_TRACK_NUM, UPSD_BOL:SHIP_TRACK_NUM"
}
},
"inputs": [
{
"name": "AnalyticsDB-SHIPUPS_06shp-01src_AZ-915PM"
}
],
"outputs": [
{
"name": "AnalyticsDB-SHIPUPS_06shp-02dst_AZ-915PM"
}
],
"policy": {
"timeout": "1.00:00:00",
"concurrency": 1,
"executionPriorityOrder": "NewestFirst",
"style": "StartOfInterval",
"retry": 3,
"longRetryInterval": "00:00:00"
},
"scheduler": {
"frequency": "Day",
"interval": 1,
"offset": "04:15:00"
},
"name": "915PM-SHIPUPS-AZ-update->[olcm]_[SHIP_Tracking]"
}
],
"start": "2017-08-22T03:00:00Z",
"end": "2099-12-31T08:00:00Z",
"isPaused": false,
"hubName": "adf-tm-prod-01_hub",
"pipelineMode": "Scheduled"
}
}
Have you seen this link?
They get the same error message, and the suggestion there is to use AzureTableSink instead of SqlSink:
"sink": {
"type": "AzureTableSink",
"writeBatchSize": 0,
"writeBatchTimeout": "00:00:00"
}
It would make sense for you too, since your 2nd copy activity is Azure-to-Azure.
It could be a red herring, but I'm pretty sure "tableName" is a required entry in the typeProperties for a SqlSource. Yours is missing this for the input dataset. I appreciate you have a join in the sqlReaderQuery, so it's probably best to put a dummy (but real) table name in there, as sketched below.
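A sketch of that change on the input dataset, borrowing the table name from your sqlReaderQuery (any real table should do, since the reader query should take precedence over tableName); the rest of the dataset stays as you posted it:
{
    "name": "AnalyticsDB-SHIPUPS_06shp-01src_AZ-915PM",
    "properties": {
        "type": "AzureSqlTable",
        "linkedServiceName": "Source-SQLAzure",
        "typeProperties": {
            "tableName": "[imp_fc].[SHIP_UPS_IntDtl_Tracking]"
        }
    }
}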
Btw, it's not clear why you are using $$Text.Format and WindowStart/WindowEnd in your queries if you're not transposing those values into the query; you could just put the query between double quotes.