I've been trying to handle a situation where an API call sometimes returns a value and sometimes a null, which makes the app crash. The shape of the response also changes: sometimes the field is missing entirely, sometimes it is present but null, and sometimes it holds an empty string. Do you know any workaround for this?
API response:
"valrep_landimp_lot_details": [
{
"valrep_landimp_propdesc_area": "1000",
"valrep_landimp_propdesc_block": null,
"valrep_landimp_propdesc_lot": "3",
"valrep_landimp_propdesc_lot_classification": null,
"valrep_landimp_propdesc_place": null,
"valrep_landimp_propdesc_registered_owner": "RANK A",
"valrep_landimp_propdesc_registry_date_day": "01",
"valrep_landimp_propdesc_registry_date_month": "01",
"valrep_landimp_propdesc_registry_date_year": "1980",
"valrep_landimp_propdesc_registry_of_deeds": "Makati City",
"valrep_landimp_propdesc_survey_nos": null,
"valrep_landimp_propdesc_tct_no": "T-33333"
}
]
Data class:
@field:SerializedName("valrep_landimp_propdesc_block")
val valrepLandimpPropdescBlock: String? = null
How the variable is called:
it?.valrepLandimpPropdescBlock!!
Error: kotlin.KotlinNullPointerException
Thank you in advance!
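In Kotlin the crash comes from !!, which throws whenever the value is null; a safe call with a default (it?.valrepLandimpPropdescBlock ?: "") avoids it, and a missing field simply stays null in the nullable data-class property. The general normalization pattern, sketched here in JavaScript with placeholder names (readField and the "N/A" fallback are illustrative, not part of the API above):

```javascript
// Normalize a field that may be absent, null, or an empty string.
function readField(record, fieldName, fallback = "") {
  const value = record?.[fieldName];
  // Treat missing, null, and "" the same way: return the fallback.
  return value === undefined || value === null || value === "" ? fallback : value;
}

const row = {
  valrep_landimp_propdesc_area: "1000",
  valrep_landimp_propdesc_block: null, // present but null
};

console.log(readField(row, "valrep_landimp_propdesc_area"));         // "1000"
console.log(readField(row, "valrep_landimp_propdesc_block", "N/A")); // "N/A"
console.log(readField(row, "valrep_landimp_propdesc_lot", "N/A"));   // "N/A" (field absent)
```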
I'm trying to save data to my MySql db from a Node method. This includes a field called attachments.
console.log(JSON.stringify(post.acf.attachments[0])); returns:
{
"ID": 4776,
"id": 4776,
"title": "bla",
"filename": "bla.pdf",
"filesize": 1242207,
"url": "https://example.com/wp-content/uploads/bla.pdf",
"link": "https://example.com/bla/",
"alt": "",
"author": "1",
"description": "",
"caption": "",
"name": "bla",
"status": "inherit",
"uploaded_to": 0,
"date": "2020-10-23 18:05:13",
"modified": "2020-10-23 18:05:13",
"menu_order": 0,
"mime_type": "application/pdf",
"type": "application",
"subtype": "pdf",
"icon": "https://example.com/wp-includes/images/media/document.png"
}
This is indeed the data I want to save to the db:
await existing_post.save({
...
attachments: post.acf.attachments[0],
});
However, the attachments field produces a 422 server error (if I comment out this field, the other fields save to the db without a problem). I can't work out what is causing this error. Any ideas?
I've also tried
await existing_post.save({
...
attachments: post.acf.attachments,
});
but then it seems to just save "[object Object]" to the database.
The field in the database is defined as text. I've also tried it by defining the field as json, but that made no difference.
exports.up = function (knex, Promise) {
return knex.schema.table("posts", function (table) {
table.text("attachments", "longtext");
});
};
The 422 error code means the server is unable to process the data you are sending to it. In your case, your table field is longtext whereas post.acf.attachments is an object. That's also why [object Object] gets saved to your db: it is the return value of the object's toString() method.
Try using
await existing_post.save({
...
attachments: JSON.stringify(post.acf.attachments),
});
MySQL and knex both support the JSON format, so I'd suggest changing the field to json (see the knex docs and the MySQL 8 docs). You'll still need to stringify your objects, though.
EDIT: I just saw that Knex supports jsonInsert (and plenty of other neat stuff) in its query builder, which should be useful for you.
MySQL also supports a wide range of functions for handling JSON.
In addition, when you fetch the results in the database, you'll need to parse the JSON result to get an actual JSON object:
const row = await knex('posts').select('attachments').first();
const attachments = JSON.parse(row.attachments);
Knex also provides jsonExtract, which should fit your needs (see also MySQL's JSON_EXTRACT).
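To make the coercion and the round trip concrete, here is a small illustration in plain Node (no knex; the object is a trimmed mock of the attachment from the question):

```javascript
const attachments = { ID: 4776, filename: "bla.pdf", mime_type: "application/pdf" };

// What happens when an object is written to a text column as-is:
// it gets coerced via toString(), losing the data.
console.log(String(attachments)); // "[object Object]"

// Stringify before saving, parse after fetching.
const stored = JSON.stringify(attachments); // what goes into the text/json column
const roundTripped = JSON.parse(stored);    // what you reconstruct on read
console.log(roundTripped.filename);         // "bla.pdf"
```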
I'm using Volley library to communicate with my API. I'm pretty new to Android and Kotlin and I'm really confused about extracting keys from the following JSON data
{
"message": {
"_id": "60bc7fa7abeedb25643fa692",
"hash": "3a54b415461a63abac1fc6dfa0e140584047bd15358e33a177f9505ed2faa4d4",
"blockchain": "ethereum",
"amount": 5000,
"amount_usd": 13352971,
"from": "d3d69228cb2292f933572399593617f574c70eb1",
"to": "fe9996da73d6bf5252f15024811954ae37ab68be",
"__v": 0
}
}
The Volley library returns all of this JSON data in a variable called response, and I'm using response.getString("message") to extract the message key, but I don't understand how to extract the inner data such as hash, blockchain, amount, etc.
I'm using the following code to get the JSON data from my backend.
val jsonRequest = JsonObjectRequest(
Request.Method.GET, url, null,
{ response ->
tweet_text.setText(response.getString("message"))
Log.d("resp", response.toString())
},
{
Log.d("err", it.localizedMessage)
})
Any help would be appreciated, Thanks!
I found it: I just used the getJSONObject() method to make it work.
val jsonRequest = JsonObjectRequest(
Request.Method.GET, url, null,
{ response ->
val txn = response.getJSONObject("message")
//txn object can be used to extract the internal data
},
{
Log.d("err", it.localizedMessage)
})
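For comparison, the same nested extraction in plain JavaScript: after parsing, getJSONObject and getString correspond to ordinary property access (the body below is a trimmed copy of the JSON from the question):

```javascript
// Trimmed mock of the response body from the question.
const body = `{
  "message": {
    "hash": "3a54b415461a63abac1fc6dfa0e140584047bd15358e33a177f9505ed2faa4d4",
    "blockchain": "ethereum",
    "amount": 5000
  }
}`;

const response = JSON.parse(body);
const txn = response.message; // like response.getJSONObject("message")
console.log(txn.hash);        // the inner hash string
console.log(txn.blockchain);  // "ethereum"
console.log(txn.amount);      // 5000
```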
I have a requirement to convert JSON into CSV (or a SQL table, or any other flat structure) using Data Flow in Azure Data Factory. I need to take the property names at one level of the hierarchy and the values of the child properties at a lower level from the source JSON, and add both as column/row values in the CSV or other flat structure.
Source Data Rules/Constraints :
Parent-level property names change dynamically (e.g. the names ABCDataPoints, CementUse, CoalUse, ABCUseIndicators are dynamic).
The hierarchy always remains the same as in the sample JSON below.
I need some help defining a JSON path/expression to get the names ABCDataPoints, CementUse, CoalUse, ABCUseIndicators, etc. I have already figured out how to retrieve the values of the properties Value, ValueDate, ValueScore, and AsReported.
Source Data Structure :
{
"ABCDataPoints": {
"CementUse": {
"Value": null,
"ValueDate": null,
"ValueScore": null,
"AsReported": [],
"Sources": []
},
"CoalUse": {
"Value": null,
"ValueDate": null,
"AsReported": [],
"Sources": []
}
},
"ABCUseIndicators": {
"EnvironmentalControversies": {
"Value": false,
"ValueDate": "2021-03-06T23:22:49.870Z"
},
"RenewableEnergyUseRatio": {
"Value": null,
"ValueDate": null,
"ValueScore": null
}
},
"XYZDataPoints": {
"AccountingControversiesCount": {
"Value": null,
"ValueDate": null,
"AsReported": [],
"Sources": []
},
"AdvanceNotices": {
"Value": null,
"ValueDate": null,
"Sources": []
}
},
"XYXIndicators": {
"AccountingControversies": {
"Value": false,
"ValueDate": "2021-03-06T23:22:49.870Z"
},
"AntiTakeoverDevicesAboveTwo": {
"Value": 4,
"ValueDate": "2021-03-06T23:22:49.870Z",
"ValueScore": "0.8351945854483925"
}
}
}
Expected flattened structure:
Background:
After multiple calls with ADF experts at Microsoft (our workplace has a Microsoft/Azure partnership), they concluded this is not possible with the out-of-the-box activities ADF provides as-is, neither via Data Flow (you need not use Data Flow, though) nor via the Flatten feature. The reason is that Data Flow/Flatten only unrolls array objects, and there are no mapping functions available to pick the property names; custom expressions are in internal beta testing and will be in PA in the near future.
Conclusion/Solution:
Based on the calls with the Microsoft employees, we agreed to pursue two approaches, but both need custom code; without custom code this is not possible using out-of-the-box activities.
Solution 1: Flatten as required using some code in an ADF Custom Activity. The downside is that you need an external compute (VM/Batch), and the supported options are not on-demand, so it is a little expensive, but it works best if you have continuous stream workloads. With this approach you also need to monitor whether the input sources vary in size, because the compute then needs to be elastic, or else you will get out-of-memory exceptions.
Solution 2: This still needs custom code, but in a function app:
Create a Copy activity with the files containing the JSON content (preferably in a storage account) as the source.
Use the function's REST endpoint as the target (not a Function activity, because that has a 90-second timeout when called from an ADF activity).
The function app takes JSON lines as input, then parses and flattens them.
This way you can scale the number of lines sent in each request to the function, and also scale the parallel requests.
The function flattens as required into one file or multiple files and stores them in blob storage.
The pipeline continues from there as needed.
One problem with this approach: if any of the ranges fails, the Copy activity will retry, but it will run the whole process again.
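Since the function app is only described at a high level, here is a rough, hypothetical sketch of what its flattening step could look like (plain JavaScript; the Category/PropertyName column names are assumptions based on the requirement, not something from ADF):

```javascript
// Flatten two levels of dynamic property names into rows:
// the outer key becomes Category, the inner key becomes PropertyName,
// and the known leaf fields become columns.
function flatten(source) {
  const rows = [];
  for (const [category, properties] of Object.entries(source)) {
    for (const [propertyName, leaf] of Object.entries(properties)) {
      rows.push({
        Category: category,
        PropertyName: propertyName,
        Value: leaf.Value ?? null,
        ValueDate: leaf.ValueDate ?? null,
        ValueScore: leaf.ValueScore ?? null,
        AsReported: leaf.AsReported ?? null,
      });
    }
  }
  return rows;
}

// A trimmed slice of the sample JSON from the question.
const sample = {
  ABCUseIndicators: {
    EnvironmentalControversies: { Value: false, ValueDate: "2021-03-06T23:22:49.870Z" },
  },
};
console.log(flatten(sample)); // one row per leaf property
```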
Trying something very similar, is there any other / native solution to address this?
As mentioned in the response above, has this been GA yet? If yes, any reference documentation / samples would be of great help!
Custom expression are in internal beta testing and will in PA in near future.
I receive an error from my SharePoint Get-Item action when I try to use data from the Service Bus message triggering my Logic App (inner XML omitted):
Unable to process template language expressions in action 'Get_items' inputs at line '1' and column '1641': 'The template language function 'json' parameter is not valid. The provided value '<?xml version="1.0" encoding="utf-8"?>
<Projektaufgabe id="b92d6817-694e-e611-80ca-005056a5e651" messagename="Update">
...
</Projektaufgabe>' cannot be parsed: 'Unexpected character encountered while parsing value: . Path '', line 0, position 0.'.
The decoded message XML looks okay, even as quoted in the error message.
The received queue message body seems okay - only the ContentType is empty:
(ContentData truncated)
{
"ContentData": "77u/PD94bWwgdmVyc2lvbj0iMS4wIiBl...=",
"ContentType": "",
"ContentTransferEncoding": "Base64",
"Properties": {
"DeliveryCount": "1",
"EnqueuedSequenceNumber": "20000001",
"EnqueuedTimeUtc": "2016-07-29T09:03:40Z",
"ExpiresAtUtc": "2016-08-12T09:03:40Z",
"LockedUntilUtc": "2016-07-29T09:04:10Z",
"LockToken": "67796ed8-a9f0-4f6a-952b-ccf4eda00071",
"MessageId": "f3ac2ce4e7b6417386611f6817bf5da1",
"ScheduledEnqueueTimeUtc": "0001-01-01T00:00:00Z",
"SequenceNumber": "31806672388304129",
"Size": "1989",
"State": "Active",
"TimeToLive": "12096000000000"
},
"MessageId": "f3ac2ce4e7b6417386611f6817bf5da1",
"To": null,
"ReplyTo": null,
"ReplyToSessionId": null,
"Label": null,
"ScheduledEnqueueTimeUtc": "0001-01-01T00:00:00Z",
"SessionId": null,
"CorrelationId": null,
"TimeToLive": "12096000000000"
}
My parsing function for the SharePoint Get-Item OData filter looks like this:
#{json(base64ToString(triggerBody().ContentData)).Projektaufgabe.id}
I already tried to separate decoding and casting to string:
#{json(string(decodeBase64(triggerBody().ContentData))).Projektaufgabe.id}
Since it seems to be an issue with decoding the message, I reckoned it wouldn't help much to receive a JSON message instead of XML from the Service Bus queue.
So from what I can see, you are trying to convert XML to JSON. You are very close; the only issue is that #json() expects either
A string that is a valid JSON object
An application/xml object to convert to JSON
Here, #base64ToString() is converting to a string, but you really need to let #json() know this is the second case and not the first, so changing the expression to this should work:
#{json(xml(base64ToBinary(triggerBody()['ContentData']))).Projektaufgabe.id}
Let me know
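One detail worth knowing if you ever decode this by hand: the ContentData in the question begins with 77u/, which is the Base64 encoding of a UTF-8 byte-order mark, and a BOM at position 0 can trip up parsers that expect < as the first character (it may well be the unexpected character the error message complains about at position 0, though the xml() fix above is the accepted solution). A small Node sketch, decoding only a short prefix since the message body is truncated:

```javascript
// "77u/" is the Base64 form of the UTF-8 BOM (EF BB BF);
// "PD94bWw=" is the Base64 form of "<?xml".
const decoded = Buffer.from("77u/PD94bWw=", "base64").toString("utf8");
console.log(decoded.charCodeAt(0).toString(16)); // "feff": the BOM survives the decode
console.log(decoded.slice(1));                   // "<?xml"

// Strip the BOM before handing the string to a parser that chokes on it.
const clean = decoded.replace(/^\uFEFF/, "");
console.log(clean); // "<?xml"
```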
I am getting JSON returned in this format:
{
"status": "success",
"data": {
"debtor": {
"debtor_id": 1301,
"key": value,
"key": value,
"key": value
}
}
}
Somehow, my RESTAdapter needs to provide my debtor model properties from "debtor" section of the JSON.
Currently, I am getting a successful callback from the server, but a console error says that Ember cannot find a model for "status". I can't find anything in the Ember Model Guide about how to deal with JSON nested like this.
So far, I have been able to do a few simple things like extending the RESTSerializer to accept "debtor_id" as the primaryKey, and also remove the pluralization of the GET URL request... but I can't find any clear guide to reach a deeply nested JSON property.
Extending the problem detail for clarity:
I need to somehow alter the default behavior of the Adapter/Serializer, because this JSON convention is being used for many purposes other than my Ember app.
My solution thus far:
With a friend, we were able to dissect the "extract API" (thanks @lame_coder for pointing me to it).
We came up with a way to extend the serializer on a case-by-case basis, but I'm not sure it is really an "Ember-approved" solution...
// app/serializers/debtor.js
export default DS.RESTSerializer.extend({
primaryKey: "debtor_id",
extract: function(store, type, payload, id, requestType) {
payload.data.debtor.id = payload.data.debtor.debtor_id;
return payload.data.debtor;
}
});
It seems that even though I was able to change my primaryKey for requesting data, Ember was still trying to use a hard coded ID to identify the correct record (rather than the debtor_id that I had set). So we just overwrote the extract method to force Ember to look for the correct primary key that I wanted.
Again, this works for me currently, but I have yet to see if this change will cause any problems moving forward....
I would still be looking for a different solution that might be more stable/reusable/future-proof/etc, if anyone has any insights?
From the description of the problem, it looks like your model definition and JSON structure don't match. You need to make them match exactly in order for the Serializer to map them correctly.
If you decide to change your REST API, the return statement would be something like this (I am using mock data):
//your Get method on service
public object Get()
{
return new {debtor= new { debtor_id=1301,key1=value1,key2=value2}};
}
The JSON that Ember is expecting needs to look like this:
"debtor": {
"id": 1301,
"key": value,
"key": value,
"key": value
}
Ember sees "status" as a model that it needs to load data for. The next problem is that it needs to have "id" in there and not "debtor_id".
If you need to return several objects you would do this:
"debtors": [{
"id": 1301,
"key": value,
"key": value,
"key": value
},{
"id": 1302,
"key": value,
"key": value,
"key": value
}]
Make sense?
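For comparison, the reshaping that the extract override performs can be sketched in plain JavaScript (the payload is mock data shaped like the question's response; the name field is just a placeholder):

```javascript
// Unwrap { status, data: { debtor } } and rename debtor_id to id,
// producing the shape Ember's RESTAdapter expects.
function normalizePayload(payload) {
  const { debtor_id, ...rest } = payload.data.debtor;
  return { debtor: { id: debtor_id, ...rest } };
}

const payload = {
  status: "success",
  data: { debtor: { debtor_id: 1301, name: "mock" } },
};
console.log(normalizePayload(payload));
// { debtor: { id: 1301, name: "mock" } }
```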