JSON Query for import.io - json

I'm using import.io and trying to figure out how to write code that uses multiple inputs to run a connector query. I've never used JSON before, but essentially what I'm trying to do is expand this existing query:
{
  "input": {
    "name": "Marin Academy",
    "city": "San Rafael"
  }
}
to include multiple names, so that when I run the query, import.io automatically searches a list of organizations. What would be the correct syntax to achieve this?
I tried
{
  "input": {
    "name": "Marin Academy",
    "city": "San Rafael"
  },
  {
    "name": "Mt. Hood Community College",
    "city": "Gresham"
  }
}
But it gives me a syntax error.
Thanks!

For an array, wrap the curly-bracketed objects in square brackets, like this:
{
  "input": [
    {
      "name": "Marin Academy",
      "city": "San Rafael"
    },
    {
      "name": "Mt. Hood Community College",
      "city": "Gresham"
    }
  ]
}
This will at least fix the JSON syntax error.

Related

Amadeus flights API error: carrier code is a 2 or 3 alphanum except YY and YYY

I am using the following SDK to search for and purchase flights via Amadeus:
https://github.com/autotune/amadeus/pull/1/files
This is a previously abandoned project that I have decided to take on and make work. As part of that project, I am trying to purchase a ticket in the sandbox environment and am getting the following error:
{
  "errors": [
    {
      "code": 477,
      "title": "INVALID FORMAT",
      "detail": "carrier code is a 2 or 3 alphanum except YY and YYY",
      "source": {
        "pointer": "/data/flightOffers[0]/itineraries[1]/segments[0]/operating/carrierCode",
        "example": "AF"
      },
      "status": 400
    }
  ]
}
Here is the JSON data being sent:
{
  "type": "flight-order",
  "travelers": [
    {
      "id": "1",
      "dateOfBirth": "1990-02-15",
      "name": {
        "firstName": "Foo",
        "lastName": "Bar"
      },
      "gender": "MALE",
      "contact": {
        "emailAddress": "foo@bar.com",
        "phones": [
          {
            "deviceType": "MOBILE",
            "countryCallingCode": "33",
            "number": "5555555555"
          }
        ]
      }
    }
  ],
  "ticketingAgreement": {
    "option": "DELAY_TO_CANCEL",
    "delay": "6D"
  },
  "remarks": {},
  "operating": {
    "carrierCode": "UA"
  }
}
Any help appreciated!
The error suggests that the payload being sent is invalid. I'd advise you to use a tool like curl or Postman to verify you're following the right API documentation before debugging the actual code.
After reading your PR further and checking the API reference at:
https://developers.amadeus.com/self-service/category/air/api-doc/flight-create-orders/api-reference
I think you need to confirm that the carrier code being passed is present in the segments under:
flightOffers > itineraries > segments
Although the API reference doesn't show operating > carrierCode where you placed it in the data sent, my guess, after seeing the API error response you shared, is that the carrier code is being validated against the flight offers you pass in.
I suggest you check the results returned when you call the flight offers search and also add them to the payload sent to the sandbox.
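For reference, the error pointer (/data/flightOffers[0]/itineraries[1]/segments[0]/operating/carrierCode) implies the operating carrier is checked per segment inside the flight offer you pass in. The following is only a hypothetical skeleton derived from that pointer, with every other required field omitted:
{
  "data": {
    "type": "flight-order",
    "flightOffers": [
      {
        "itineraries": [
          {
            "segments": [
              {
                "operating": {
                  "carrierCode": "UA"
                }
              }
            ]
          }
        ]
      }
    ]
  }
}
The "UA" value is just the code from your own payload; the point here is the nesting, not the value.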

replace "key" name in whole JSON python for bulk data in efficient way

I am pushing data to another system, but before pushing I have to change the keys throughout the whole JSON. The JSON may contain 200, 10,000, or 250,000 records.
Sample JSON:
{
  "insert": "table",
  "contacts": [
    {
      "testName": "testname",
      "ContactID": 212121
    },
    {
      "testName": "testname",
      "ContactID": 2146354564
    },
    {
      "testName": "testname",
      "ContactID": 12312
    },
    {
      "testName": "testname",
      "ContactID": 211221
    },
    {
      "testName": "testname",
      "ContactID": 10218550
    }
  ]
}
I need to change the keys in the contacts array. These contacts may come in bulk, so I need to handle this efficiently, with minimal complexity.
The above JSON should be converted to the following:
{
  "insert": "table",
  "contacts": [
    {
      "name": "testname",
      "phone": 212121
    },
    {
      "name": "testname",
      "phone": 2146354564
    },
    {
      "name": "testname",
      "phone": 12312
    },
    {
      "name": "testname",
      "phone": 211221
    },
    {
      "name": "testname",
      "phone": 10218550
    }
  ]
}
Here is my attempt using a loop:
ini_dict = request.data
contact_data = ini_dict['contacts']
for i in contact_data:
    i['name'] = i.pop('testName')
print(contact_data)
Please suggest how I can change the key names efficiently for bulk data, i.e. 50,000 items in contacts. A for loop will lead to a performance issue, so please let me know an efficient way to achieve this.
I don't know how fast you need it to be, nor how you are choosing to store your JSON. One simple solution is to just keep it as a string and then replace all instances of your attribute names.
# Something like this, assuming jsonstring holds the JSON text
jsonstring = jsonstring.replace('"testName":', '"name":')
jsonstring = jsonstring.replace('"ContactID":', '"phone":')
If you want to do this in bulk, you may need to create some batch process that can fetch multiple existing records and change them at once. I have done this before with the Java equivalent of https://pypi.org/project/JayDeBeApi/, but that was more for modifying existing records in a database.
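If the payload is already parsed into Python objects (as request.data suggests), a plain loop or a list comprehension over the contacts is usually fast enough even for a few hundred thousand entries; the string-replace trick above mainly helps when the data only ever exists as text. A minimal sketch that renames both keys in one pass, assuming every contact has both testName and ContactID:
# Rebuild each contact with the new key names in a single pass.
# Assumes every contact dict contains 'testName' and 'ContactID'.
ini_dict = request.data
ini_dict['contacts'] = [
    {'name': c['testName'], 'phone': c['ContactID']}
    for c in ini_dict['contacts']
]
print(ini_dict['contacts'])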

Azure Logic Apps - Map Json to Json with Liquid flatten array

Any help would be much appreciated. What I am trying to achieve is to request a record, from Dynamics 365 (cloud), against an on-premises system (exposed by MuleSoft). I have decided to use Azure Logic Apps to do the integration and Liquid to do the mapping; however, I am battling to flatten the array with Liquid. I'm getting a JSON payload from the on-premises system which I need to transform so that it loads readily into Dynamics 365. What I am getting is something like the following:
{
  "person": {
    "firstname": " Fred",
    "surname": "Smith",
    "age": 27,
    "phoneno": "123456789",
    "addresses": [
      {
        "address": {
          "AddressLine1": "1 milky way",
          "AddressLine2": "galaxy cresent",
          "city": "tempest",
          "state": "Idiho",
          "postcode": "12345"
        }
      },
      {
        "address": {
          "AddressLine1": "52 Saturn Drive",
          "AddressLine2": "Wharfridge",
          "city": "tempest",
          "state": "Idiho",
          "postcode": "12345"
        }
      }
    ]
  }
}
What I need is to flatten the array into the root node, like this:
{
  "person": {
    "firstname": " Fred",
    "surname": "Smith",
    "age": 27,
    "phoneno": "123456789",
    "addr1_AddressLine1": "1 milky way",
    "addr1_AddressLine2": "galaxy cresent",
    "addr1_city": "tempest",
    "addr1_state": "Idiho",
    "addr1_postcode": "12345",
    "addr2_AddressLine1": "52 Saturn Drive",
    "addr2_AddressLine2": "Wharfridge",
    "addr2_city": "tempest",
    "addr2_state": "Idiho",
    "addr2_postcode": "12345"
  }
}
If there are any other solutions/ideas, I am all ears.
Thanks in advance for your help,
Paul
So I found a solution, or rather a workaround. For some reason the Liquid connector in Logic Apps does not support the "increment" tag, and this was what was causing my issue. I was able to evaluate a property from the input JSON to decide where my fields would reside. But thanks for the help.
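If Liquid keeps getting in the way, another option is to do the flattening in code, for example in an Azure Function called from the Logic App. The following is only an illustrative Python sketch of the transformation itself; the function name and the idea of calling out to code are assumptions, not part of the original setup:
def flatten_person(payload):
    # Illustrative sketch: copy the person object, pull out the addresses
    # array, and re-attach each address field with an addrN_ prefix.
    person = dict(payload["person"])
    addresses = person.pop("addresses", [])
    for i, entry in enumerate(addresses, start=1):
        for key, value in entry["address"].items():
            person["addr{}_{}".format(i, key)] = value
    return {"person": person}
Applied to the payload above, this produces exactly the flattened shape shown in the question.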

Reading complex json data without iteration

I am working with data that is often nested, and I am required to perform some CRUD operations based on the structure of the data I have. For instance, I have this JSON structure:
{
  "_id": "KnNLkJEhrDsvWedLu",
  "createdAt": {
    "$date": "2016-10-13T11:24:13.843Z"
  },
  "services": {
    "password": {
      "bcrypt": "$2a$30$1/cniPwPNCuwZ/MQDPQkLej..cAATkoGX.qD1TS4iHgf/pwZYE.j."
    },
    "email": {
      "verificationTokens": [
        {
          "token": "qxe_T9IS7jW7gntpK0Q7UQ35RJ9jO9m2lclnokO3z87",
          "address": "drwho@gmail.com",
          "when": {
            "$date": "2016-10-13T11:24:14.428Z"
          }
        }
      ]
    },
    "resume": {
      "loginTokens": []
    }
  },
  "username": "doctorwho",
  "emails": [
    {
      "address": "drwho@gmail.com",
      "verified": false
    }
  ],
  "persodata": {
    "lastlogin": {
      "$date": "2016-10-13T11:29:36.816Z"
    },
    "fname": "Doctor",
    "lname": "Who",
    "mobile": "+4480000000",
    "identity": "1",
    "email": "drwho@gmail.com",
    "gender": null
  }
}
I have several data sets with such a complex structure. I need to read the data, edit it, and also delete parts of it. Before I resort to iteration, I was wondering how I can read the data without iterating, and iterate only when I absolutely have to.
What are the rules I should keep in mind when reading such complex JSON structures, so that I can read any complex structure I come across?
I am currently using JavaScript, but I am looking for rules that apply in other languages as well.
Parsing JSON in JavaScript should be easy: http://www.json.org/js.html.
"Since JSON is a proper subset of JavaScript, the compiler will correctly parse the text and produce an object structure." Just follow the examples on that page.
If you want to use another language: in Java you could use Jackson or Gson to map those JSON strings to objects, and then using them becomes easy. Both libraries are annotation-based and wouldn't be difficult to set up.
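The general rule, whatever the language: once the JSON text is parsed, objects become key lookups and arrays become index lookups, so any value can be reached directly by chaining them; you only need to iterate when you don't know the positions in advance. A short Python sketch against the document above (json_text is assumed to hold that document as a string; the same idea applies in JavaScript after JSON.parse):
import json

# json_text is assumed to be the JSON document shown above, as a string.
user = json.loads(json_text)

# Objects -> key access, arrays -> index access; chain them to go deep.
username = user["username"]
first_email = user["emails"][0]["address"]
token = user["services"]["email"]["verificationTokens"][0]["token"]
last_login = user["persodata"]["lastlogin"]["$date"]

# Editing and deleting follow the same paths.
user["persodata"]["mobile"] = "+4480000001"
del user["services"]["resume"]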

How to do Fulltext search on the below Document with ArangoDB?

{
  "rootElement": {
    "names": {
      "name": [
        "Haseb",
        "Anil",
        "Ajinkya",
        {
          "city": "mumbai",
          "state": "maharashtra",
          "job": {
            "second": "bosch",
            "first": "infosys"
          }
        }
      ]
    },
    "places": {
      "place": {
        "origin": "INDIA",
        "current": "GERMANY"
      }
    }
  }
}
If I have a document like the example above and I want to search for a value like "mumbai" or "infosys", how would I do the indexing and the search?
As we already discussed in other questions, you can only index one field in the document.
How about storing a YAML dump of the whole structure in another attribute and putting the index on that?
So, let's say that, parallel to rootElement, you add a wordTokens attribute with that dump, and put a fulltext index on it.
You would probably want to use some regular expressions to strip the keywords out of the YAML dump, and since you don't need to be able to de-serialize it, you can remove unneeded whitespace and line breaks too.
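A minimal Python sketch of that idea: walk the nested document, collect the leaf values into one search string, and store it next to rootElement as wordTokens (doc here stands for the parsed document shown above, and the attribute name just follows the suggestion; saving the document and creating the fulltext index with your ArangoDB driver is left out):
def collect_tokens(value, tokens):
    # Recursively gather every leaf value as a string.
    if isinstance(value, dict):
        for v in value.values():
            collect_tokens(v, tokens)
    elif isinstance(value, list):
        for v in value:
            collect_tokens(v, tokens)
    elif value is not None:
        tokens.append(str(value))

# doc is assumed to be the document shown above, already parsed/loaded.
tokens = []
collect_tokens(doc["rootElement"], tokens)
doc["wordTokens"] = " ".join(tokens)
# For the document above this yields something like:
# "Haseb Anil Ajinkya mumbai maharashtra bosch infosys INDIA GERMANY"
# A fulltext index on wordTokens then lets you search for "mumbai" or "infosys".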