The API output from Smartsheet returns rows and columns as separate objects that are independent of each other.
This results in one set of records for the columns (a list of field names) and another set of records for the rows (each holding a single field of values from the various fields).
Is there a way to return a single JSON list (with rows and columns combined into a single list of records)?
This is the code I'm using in the Query Editor; it returns the separate rows and columns:
= Web.Contents(
"https://api.smartsheet.com/1.1/sheet/[SHEET_ID]",
[
Headers =
[
#"Authorization" = "Bearer YOUR_API_TOKEN"
]
]
)
I used the sample data on their site to come up with this set of transformations:
let
Source = Json.Document(File.Contents("D:\testdata\foo.json")),
ColumnIds = List.Transform(Source[columns], each Text.From([id])),
ColumnNames = List.Transform(Source[columns], each [title]),
Table = Table.FromList(Source[rows], Splitter.SplitByNothing(), null, null, ExtraValues.Error),
Expanded = Table.ExpandRecordColumn(Table, "Column1", {"rowNumber", "cells"}, {"rowNumber", "cells"}),
Mapped = Table.TransformColumns(Expanded, {"cells",
each Record.Combine(List.Transform(_, each Record.AddField([], Text.From([columnId]), [value])))}),
Result = Table.ExpandRecordColumn(Mapped, "cells", ColumnIds, ColumnNames)
in
Result
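For comparison, here is a minimal Python sketch of the same join, assuming the response shape the M code above relies on: columns is a list of records with id and title, and rows is a list of records whose cells carry columnId/value pairs. The file name is a placeholder.
import json

# Load a saved copy of the Smartsheet response (placeholder file name).
with open("foo.json") as f:
    sheet = json.load(f)

# Map each column id to its title, then rebuild every row as one flat record.
titles = {col["id"]: col["title"] for col in sheet["columns"]}
records = [
    {titles[cell["columnId"]]: cell.get("value") for cell in row["cells"]}
    for row in sheet["rows"]
]
print(json.dumps(records, indent=2))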
I currently have two lists: one from an external API (Splynx), which returns a list of all customers, and another that returns a list of all Account names from the Contacts module in Zoho CRM. For the moment I just want to write code that confirms whether the two lists contain matching entries (i.e. an entry in the Splynx list matches an entry in the CRM list).
What I actually want to achieve is, for each matching entry, to update the CRM record with the Customer ID field from Splynx, stored in a custom field called Splynx ID in the Accounts module in CRM (this ID is auto-generated, so it maintains consistency across both apps). I want to know if this is even achievable.
This is the code I have written so far
headersmap = Map();
headersmap.put("Authorization","Basic xxxxxxx");
response = invokeurl
[
url :"https://selfcare.dotmac.ng/api/2.0/admin/customers/customer?"
type :GET
headers:headersmap
];
AccountlistSplynx = List();
li1 = List();
li2 = List();
li3 = List();
rows = response.toJSONList();
rows1 = response.toJSONList();
rows2 = response.toJSONList();
for each row in rows
{
Name = row.getjson("name");
AccountlistSplynx.add(Name);
}
for each row in rows1
{
Address = row.getjson("street_1");
li1.add(Address);
}
for each row in rows2
{
CustomerID = row.getjson("id");
li2.add(CustomerID);
}
Accountlistzoho = List();
mp = Map();
contacts = zoho.crm.getRecords("Contacts");
for each contact in contacts
{
account = ifnull(contact.getJSON("Account_Name"),Map());
if(account.size() > 0)
{
accountname = account.getJSON("name");
Accountlistzoho.add(accountname);
}
}
if ( Accountlistzoho == AccountlistSplynx )
{
info "Matching records!";
}
else
{
info "No matching records!";
}
I also want to know if this is the best route to follow, because I had already imported these contacts from Splynx into CRM before I realized that I had not created the custom field for Accounts.
Take a look at the intersect list function:
<variable> = <listVariableOne>.intersect( <listVariableTwo> );
Note:
<listVariableOne>.intersect( <listVariableTwo> );
and
<listVariableTwo>.intersect( <listVariableOne> );
should return the same intersection set, but sometimes one of these calls returns a smaller set. To work around this, call intersect() both ways and, if the results differ, use the one that gives the expected set.
For this task intersect() would be used something like this:
headersmap = Map();
headersmap.put("Authorization","Basic xxxxxxx");
response = invokeurl
[
url:"https://selfcare.dotmac.ng/api/2.0/admin/customers/customer?"
type :GET
headers:headersmap
];
// Note: Using a Map to associate Splynx names and ids.
SplynxMap = Map();
rows = response.toJSONList();
for each row in rows
{
SplynxMap.put(row.getjson("name"), row.getjson("id"));
}
// Here make a list of Splynx names based on the map keys.
AccountlistSplynx = SplynxMap.keys();
// Intersect() function
ItemsToProcess = Accountlistzoho.intersect(AccountlistSplynx);
// Get Zoho record and update with Splynx Customer ID
// Here is one way to do this, but probably not the best or
// most efficient. There should be a way in CRM to request
// a specific record based on the "name" field and avoid
// looping through contacts for each item to process.
for each item in ItemsToProcess
{
for each contact in contacts
{
account = ifnull(contact.getJSON("Account_Name"),Map());
if(account.size() > 0)
{
if ( item == account.getJSON("name"))
{
account.put("Splynx_ID", SplynxMap.get(item));
}
}
}
}
Regarding your needs, do you want to update Zoho CRM records when they match Splynx records?
Step 1:
Store all Splynx records in a List variable in Deluge.
Store all Zoho records in a List variable in Deluge.
Step 2:
Assume that the API field names of the two lists do not match.
Step 3:
This is how to determine matched records in the Zoho data, assuming Account Name is the criterion for a match. Please note that the api_field_names (keys) will differ from your actual data.
SplynxData = {{"s_account_name":"Account A","ID":"s_ID_1"},{"s_account_name":"Account B","ID":"s_ID_2"},{"s_account_name":"Account C","ID":"s_ID_3"},{"s_account_name":"Account D","ID":"s_ID_4"}};
ZohoData = {{"z_account_name":"Account A","ID":"z_ID_1"},{"z_account_name":"Account B"},{"z_account_name":"Account C"}};
ZohoData_group_Name = Map();
for each ZohoData_item in ZohoData
{
ZohoData_group_Name.put(ZohoData_item.get('z_account_name'),ZohoData_item);
}
for each SplynxData_item in SplynxData
{
matched_zoho_item = ZohoData_group_Name.get(SplynxData_item.get('s_account_name'));
if (matched_zoho_item != null)
{
info matched_zoho_item;
// Do the zoho.crm.updateRecords call here
}
}
You can try this deluge script in https://deluge.zoho.com/tryout.
Please refer to this link on how to update Zoho records: https://www.zoho.com/deluge/help/crm/update-record.html
I'm trying to import a JSON file with the following format into Excel:
[
[1,2,3,4],
[5,6,7,8]
]
I want to get a spreadsheet with 2 rows and 4 columns, where each row contains the contents of the inner array as separate column values, e.g.
Column A  Column B  Column C  Column D
1         2         3         4
5         6         7         8
Although this would seem to be an easy problem to solve, I can't find the right Power Query syntax, or locate an existing answer that covers this scenario. I can easily import the file as a single column with 8 values, but I can't split the inner arrays into separate columns.
Assuming the JSON looks like
[
[1,2,3,4],
[5,6,7,8]
]
then this code in Power Query
let
Source = Json.Document(File.Contents("C:\temp\j.json")),
#"Converted to Table" = Table.FromList(Source, Splitter.SplitByNothing(), null, null, ExtraValues.Error),
#"Added Custom" = Table.AddColumn(#"Converted to Table", "Custom", each Text.Combine(List.Transform([Column1], each Text.From(_)),",")),
ColumnTitles = List.Transform({1 .. List.Max(Table.AddColumn(#"Added Custom", "count", each List.Count(Text.PositionOfAny([Custom],{","},Occurrence.All))+1)[count])}, each "Column." & Text.From(_)),
#"Split Column by Delimiter" = Table.SplitColumn(#"Added Custom", "Custom", Splitter.SplitTextByDelimiter(",", QuoteStyle.Csv), ColumnTitles),
#"Removed Columns" = Table.RemoveColumns(#"Split Column by Delimiter",{"Column1"})
in
#"Removed Columns"
generates the desired table of 2 rows and 4 columns.
It converts the JSON to a list of lists, converts that to a table of lists, combines each inner list into comma-separated text, dynamically creates column names for the new columns by counting the maximum number of commas, and then splits the text into those columns.
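If reshaping the file outside Power Query is an option, the same idea fits in a few lines of Python: derive the column names from the widest inner list, then write a CSV that Excel opens directly. The file names here are assumptions.
import csv
import json

# Read the array of arrays (placeholder file name).
with open("j.json") as f:
    rows = json.load(f)

# Name columns after the widest inner list, mirroring the dynamic
# column-title logic in the Power Query version above.
width = max(len(row) for row in rows)
header = ["Column." + str(i) for i in range(1, width + 1)]

# Write a CSV with the generated header followed by the data rows.
with open("j.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(header)
    writer.writerows(rows)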
I have searched a few sites, and none of the suggestions I'm finding fit my situation. In many cases, such as how to export an HTML table to Excel with pagination, there are only limited responses. I am able to pull data from a website using an API key, but the results are paginated. I have adjusted my query to pull 100 records per page (the default is 25) and can input a page number to pull a specific page, but I have been unsuccessful in pulling everything as one table. Currently one of the data sets is over 800 records, so my workaround is 8 queries, each pulling down a separate page, which I then group into one table with the query amalgamate function. I have a new project that will likely return several thousand line items, and I would prefer a simpler way to handle this.
This is current code
let
Source = Json.Document(Web.Contents("https://api.keeptruckin.com/v1/vehicles?access_token=xxxxxxxxxxxxxx&per_page=100&page_no=1", [Headers=[#"X-Api-Key"="f4f1f1f0-005b-4fbb-a525-3144ba89e1f2", #"Content-Type"="application/x-www-form-urlencoded"]])),
vehicles = Source[vehicles],
#"Converted to Table" = Table.FromList(vehicles, Splitter.SplitByNothing(), null, null, ExtraValues.Error),
#"Expanded Column1" = Table.ExpandRecordColumn(#"Converted to Table", "Column1", {"vehicle"}, {"Column1.vehicle"}),
#"Expanded Column1.vehicle" = Table.ExpandRecordColumn(#"Expanded Column1", "Column1.vehicle", {"id", "company_id", "number", "status", "ifta", "vin", "make", "model", "year", "license_plate_state", "license_plate_number", "metric_units", "fuel_type", "prevent_auto_odometer_entry", "eld_device", "current_driver"}, {"Column1.vehicle.id", "Column1.vehicle.company_id", "Column1.vehicle.number", "Column1.vehicle.status", "Column1.vehicle.ifta", "Column1.vehicle.vin", "Column1.vehicle.make", "Column1.vehicle.model", "Column1.vehicle.year", "Column1.vehicle.license_plate_state", "Column1.vehicle.license_plate_number", "Column1.vehicle.metric_units", "Column1.vehicle.fuel_type", "Column1.vehicle.prevent_auto_odometer_entry", "Column1.vehicle.eld_device", "Column1.vehicle.current_driver"})
in
#"Expanded Column1.vehicle"
I'm using wget to fetch several dozen JSON files on a daily basis that go like this:
{
"results": [
{
"id": "ABC789",
"title": "Apple",
},
{
"id": "XYZ123",
"title": "Orange",
}]
}
My goal is to find a row's position in each JSON file given a value or set of values (i.e. "In which row is XYZ123 located?"). In the previous example ABC789 is in row 1, XYZ123 in row 2, and so on.
As of now I use Google Refine to "quickly" visualize (using the Text Filter option) where XYZ123 stands (row 2).
But since it takes a while to do this manually for each file, I was wondering if there is a quick and efficient way to do it in one go.
What can I do, and how should I fetch the files and make the request? Thanks in advance! FoF0
In Python:
import json

# assume json_string holds your loaded data
data = json.loads(json_string)

mapped_vals = []
for ent in data['results']:  # iterate the "results" array, not the top-level dict
    mapped_vals.append(ent['id'])
The order of items in the list will match the order in the JSON data, since a list is a sequenced collection.
In PHP:
$data = json_decode($json_string);
$output = array();
foreach($data->results as $values){  // iterate the "results" array
    $output[] = $values->id;
}
Again, the ordered nature of PHP arrays ensures that the output will be ordered the same way with regard to indexes.
Either example could be modified to use a mapped dictionary (python) or an associative array (php) if needs demand.
You could adapt either example into a function that takes the id value as an argument, tracks how far into the array it is, and, once the id is found, breaks out and returns the current index.
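For example, a small Python sketch of that adaptation, using the "results" shape from the question:
import json

def find_row(data, target_id):
    # Return the 1-based row position of target_id, or None if absent.
    for pos, ent in enumerate(data['results'], start=1):
        if ent['id'] == target_id:
            return pos
    return None

# e.g. find_row(json.loads(json_string), 'XYZ123') -> 2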
Wow. I posted the original question 10 months ago, when I knew nothing about Python or computer programming whatsoever!
Answer
But I learned basic Python last December and came up with a solution that not only gets the rank order but also inserts the results into a MySQL database:
import urllib.request
import json
# Make connection and get the content
response = urllib.request.urlopen("http://whatever.com/search?=ids=1212,125,54,454")
content = response.read()
# Decode Json search results to type dict
json_search = json.loads(content.decode("utf8"))
# Get 'results' key-value pairs to a list
search_data_all = []
for i in json_search['results']:
search_data_all.append(i)
# Prepare MySQL list with ranking order for each id item
ranks_list_to_mysql = []
for i in range(len(search_data_all)):
d = {}
d['id'] = search_data_all[i]['id']
d['rank'] = i + 1
ranks_list_to_mysql.append(d)
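The post stops just before the database write. A possible continuation, assuming the mysql-connector-python package and a hypothetical rankings table (adjust names to your schema):
import mysql.connector

# Hypothetical connection details and table name.
conn = mysql.connector.connect(host="localhost", user="user",
                               password="password", database="searchdb")
cur = conn.cursor()
# Named placeholders pull the 'id' and 'rank' keys from each dict.
cur.executemany(
    "INSERT INTO rankings (id, rank_position) VALUES (%(id)s, %(rank)s)",
    ranks_list_to_mysql,
)
conn.commit()
conn.close()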
I'm trying to process the following with a JSON Input step:
{"address":[
{"AddressId":"1_1","Street":"A Street"},
{"AddressId":"1_101","Street":"Another Street"},
{"AddressId":"1_102","Street":"One more street", "Locality":"Buenos Aires"},
{"AddressId":"1_102","Locality":"New York"}
]}
However this seems not to be possible:
Json Input.0 - ERROR (version 4.2.1-stable, build 15952 from 2011-10-25 15.27.10 by buildguy) :
The data structure is not the same inside the resource!
We found 1 values for json path [$..Locality], which is different that the number retourned for path [$..Street] (3509 values).
We MUST have the same number of values for all paths.
The step provides an Ignore Missing Path flag, but it only works if all the rows miss the same path; in that case the step acts as expected and fills the missing values with null.
This limits the step's power to read uneven data, which was really one of my priorities.
My step's Fields are defined with the paths $..AddressId, $..Street and $..Locality.
Am I missing something? Is this the correct behavior?
What I have done is use a JSON Input step with $.address[*] to read the full map of each element into a jsonRow field, e.g.:
{"address":[
{"AddressId":"1_1","Street":"A Street"},
{"AddressId":"1_101","Street":"Another Street"},
{"AddressId":"1_102","Street":"One more street", "Locality":"Buenos Aires"},
{"AddressId":"1_102","Locality":"New York"}
]}
This results in 4 jsonRows, one for each element, e.g. jsonRow = {"AddressId":"1_101","Street":"Another Street"}. Then, using a JavaScript step, I map my values like this:
var AddressId = getFromMap('AddressId', jsonRow);
var Street = getFromMap('Street', jsonRow);
var Locality = getFromMap('Locality', jsonRow);
In a second script tab I inserted minified JSON parse code from https://github.com/douglascrockford/JSON-js and the getFromMap function:
function getFromMap(key,jsonRow){
try{
var map = JSON.parse(jsonRow);
}
catch(e){
var message = "Unparsable JSON: "+jsonRow+" Desc: "+e.message;
var nr_errors = 1;
var field = "jsonRow";
var errcode = "JSON_PARSE";
_step_.putError(getInputRowMeta(), row, nr_errors, message, field, errcode);
trans_Status = SKIP_TRANSFORMATION;
return null;
}
if(map[key] == undefined){
return null;
}
trans_Status = CONTINUE_TRANSFORMATION;
return map[key];
}
You can solve this by changing the JSONPath and splitting the work across two JSON Input steps. The following website explains a lot about JSONPath: http://goessner.net/articles/JsonPath/
$..AddressId
does in fact return all the AddressIds in the address array. BUT since Pentaho uses grid rows for input and output [4 rows x 3 columns], it can't handle a missing (null) value when you ask it to return all the Streets (3 values) and all the Localities (2 values): there are no null placeholders in the array itself, just as you can't drive out of your garage with 3 wheels on your car instead of the usual 4.
I guess your script returns null values (marked X below), like:
A S X
A S X
A S L
A X L
The scripting step can be avoided entirely by changing the Fields path of the first JSON Input step to:
$.address[*]
This retrieves all 4 address lines. Then create a second JSON Input step, based on the new source field containing the address line(s), to retrieve the address details per line:
$.AddressId
$.Street
$.Locality
This yields null values on the four address lines whenever an address detail is not available in a line.
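To see why the per-element approach preserves the missing values, here is a small Python sketch of the two read strategies against the sample document, with plain dict access standing in for the JSONPath expressions:
import json

doc = json.loads('''{"address":[
  {"AddressId":"1_1","Street":"A Street"},
  {"AddressId":"1_101","Street":"Another Street"},
  {"AddressId":"1_102","Street":"One more street","Locality":"Buenos Aires"},
  {"AddressId":"1_102","Locality":"New York"}]}''')

# Whole-document style ($..Street): only 3 values come back, so the grid
# rows no longer line up with the 4 AddressIds.
streets = [a["Street"] for a in doc["address"] if "Street" in a]
print(len(streets))  # 3

# Per-element style ($.address[*] first, then $.AddressId / $.Street /
# $.Locality per line): 4 rows, with None filling each missing detail.
rows = [(a.get("AddressId"), a.get("Street"), a.get("Locality"))
        for a in doc["address"]]
for row in rows:
    print(row)  # 4 rows; None appears where a detail is absent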