I am using Laravel 5.5.13.
I successfully tested everything on my localhost and have now uploaded to the server.
I did an export from phpMyAdmin with default settings on my localhost (XAMPP, Windows 10), then did an import on the remote phpMyAdmin with default settings.
When I hit the remote host, it now returns every field that I set up in my migrations like this:
$table->integer('extension_id')->unsigned();
as a string, which is weird, because on localhost it comes back as a number.
In the data below, notice that on localhost the displayname_id and extension_id values are not wrapped in quotes, while on the remote host they are. However, id is never quoted, which I don't understand, since it is also unsigned. My goal is to make even the id column a string (or, if that is not possible, to make the *_id columns numbers again).
Here it is from remote:
[
{
"id": 2,
"name": "Stencil",
"kind": "cws",
"created_at": "2017-11-11 00:26:52",
"updated_at": "2017-11-11 00:26:52",
"thumbs_count": "1",
"thumbs_yes_count": "0",
"latest_comment": {
"id": 1,
"body": "huh?",
"displayname_id": "1",
"extension_id": "2",
"created_at": "2017-11-11 00:26:56",
"updated_at": "2017-11-11 00:26:56"
}
}
]
Here it is from localhost:
[
{
"id": 2,
"name": "Stencil",
"kind": "cws",
"created_at": "2017-11-11 00:26:52",
"updated_at": "2017-11-11 00:26:52",
"thumbs_count": "1",
"thumbs_yes_count": "0",
"latest_comment": {
"id": 1,
"body": "huh?",
"displayname_id": 1,
"extension_id": 2,
"created_at": "2017-11-11 00:26:56",
"updated_at": "2017-11-11 00:26:56"
}
}
]
Here are screenshots of my remote and local phpMyAdmin setups, and of the table structure. Notice in the table structure that the id columns are also unsigned, yet they are not json_encode'd into strings.
Here is export screenshot: https://screenshots.firefoxusercontent.com/images/51d1e47f-fe78-4cdc-8de4-b113d4b576a9.png
Here is import screenshot: https://screenshots.firefoxusercontent.com/images/ff112f86-1c1c-4554-b03c-4b15307c042a.png
The difference is your MySQL client driver. Your local machine is using the MySQL Native Driver (mysqlnd), whereas your remote server is using the MySQL Client Library (libmysql).
The native driver (mysqlnd) will treat all integers from the database as integers in PHP. However, the client library (libmysql) will treat all fields as strings in PHP.
The reason that the id field shows up as an integer on both servers is because of some Laravel magic. Laravel uses the model's $casts property to cast specific fields to specific types when accessed. If your $incrementing property on your model is true (which it is by default), Laravel automatically adds the primary key field (default id) to the $casts property with the type defined by the $keyType property (default int). Because of this, whenever you access the id field, it will be a PHP integer.
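For reference, these are the Eloquent defaults in play - a minimal sketch, assuming a hypothetical Comment model (the base Model class already declares these values, so you don't need to add them):
use Illuminate\Database\Eloquent\Model;

class Comment extends Model
{
    public $incrementing = true;   // default: the primary key auto-increments
    protected $keyType = 'int';    // default: so 'id' is cast to int on access
}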
If you want the integer fields to be treated as integers, you could install the MySQL Native Driver (mysqlnd) on your remote server.
If that is not an option, or not desirable, you can specify that those fields be treated as integers using the $casts property:
protected $casts = [
'displayname_id' => 'int',
'extension_id' => 'int',
];
Now those two fields will be treated as integers regardless of the MySQL driver used.
If you wanted the id to be treated as a string, you have a couple options.
First, you could change the $keyType value to string, but that may have unintended consequences. For example, the relationHasIncrementingId method on the BelongsTo class checks whether the key is incrementing and whether the key type is int, so it will return false if you change $keyType to string.
Second, you could directly add 'id' => 'string' to your $casts array, as the $casts value takes priority over the $keyType value when accessing the attribute. This would be safer and more semantically correct than changing the $keyType value.
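For example, combining this with the $casts shown earlier (only the 'id' line is new):
protected $casts = [
    'id' => 'string',
    'displayname_id' => 'int',
    'extension_id' => 'int',
];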
And third, if you wanted the id to be treated as a string only for JSON conversions, you could override the jsonSerialize() method on your model.
public function jsonSerialize()
{
    $data = parent::jsonSerialize();

    if (isset($data[$this->primaryKey])) {
        // Cast only the serialized copy; normal attribute access still returns an int.
        $data[$this->primaryKey] = (string) $data[$this->primaryKey];
    }

    return $data;
}
This problem is surely coming from your database driver or an old PHP version (< 5.2.9).
But you can pass the JSON_NUMERIC_CHECK option to json_encode and you are done: it automatically turns strings representing numbers into numbers:
Native PHP solution:
echo json_encode($a, JSON_NUMERIC_CHECK);
Laravel solution:
return response()->json($a, 200, [], JSON_NUMERIC_CHECK);
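One caveat worth knowing: JSON_NUMERIC_CHECK converts any string that looks numeric, which can mangle values that are only incidentally numeric:
// A phone number stored as a string loses its leading zero:
echo json_encode(['phone' => '016532112'], JSON_NUMERIC_CHECK); // {"phone":16532112}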
I have a requirement to convert JSON into CSV (or a SQL table) or some other flattened structure using Data Flow in Azure Data Factory. I need to take the property names at one level of the hierarchy and the values of the child properties at a lower level from the source JSON, and add both as column/row values in the CSV or other flattened structure.
Source data rules/constraints:
Parent-level property names change dynamically (e.g. the names ABCDataPoints, CementUse, CoalUse, ABCUseIndicators are all dynamic)
The hierarchy always remains the same as in the sample JSON below.
I need help defining a JSON path/expression to get the names ABCDataPoints, CementUse, CoalUse, ABCUseIndicators, etc. I have already figured out how to retrieve the values of the properties Value, ValueDate, ValueScore, and AsReported.
Source data structure:
{
"ABCDataPoints": {
"CementUse": {
"Value": null,
"ValueDate": null,
"ValueScore": null,
"AsReported": [],
"Sources": []
},
"CoalUse": {
"Value": null,
"ValueDate": null,
"AsReported": [],
"Sources": []
}
},
"ABCUseIndicators": {
"EnvironmentalControversies": {
"Value": false,
"ValueDate": "2021-03-06T23:22:49.870Z"
},
"RenewableEnergyUseRatio": {
"Value": null,
"ValueDate": null,
"ValueScore": null
}
},
"XYZDataPoints": {
"AccountingControversiesCount": {
"Value": null,
"ValueDate": null,
"AsReported": [],
"Sources": []
},
"AdvanceNotices": {
"Value": null,
"ValueDate": null,
"Sources": []
}
},
"XYXIndicators": {
"AccountingControversies": {
"Value": false,
"ValueDate": "2021-03-06T23:22:49.870Z"
},
"AntiTakeoverDevicesAboveTwo": {
"Value": 4,
"ValueDate": "2021-03-06T23:22:49.870Z",
"ValueScore": "0.8351945854483925"
}
}
}
Expected flattened structure:
Background:
After multiple calls with ADF experts at Microsoft (our workplace has a Microsoft/Azure partnership), they concluded this is not possible with the out-of-the-box activities ADF provides as is, neither with Data Flow (though you need not use Data Flow) nor with the Flatten feature. The reason is that Data Flow/Flatten only unrolls array objects, and there are no mapping functions available to pick up the property names - custom expressions are in internal beta testing and will be in PA in the near future.
Conclusion/Solution:
Based on the calls with the Microsoft employees, we agreed to pursue two approaches, but both need custom code - without custom code this is not possible using out-of-the-box activities.
Solution 1: Use custom code to flatten as required, via an ADF Custom Activity. The downside is that you need external compute (VM/Batch), and the supported options are not on-demand, so it is a bit expensive; it works best if you have continuous streaming workloads. This approach also needs continuous monitoring when input sources vary in size, because the compute has to be elastic in that case or you will get out-of-memory exceptions.
Solution 2: Still requires custom code, but in a function app (a sketch of the kind of flattening code involved follows below).
Create a Copy activity with the JSON files as the source (preferably in a storage account).
Use the function's REST endpoint as the sink (not a Function activity, because that has a 90-second timeout when called from an ADF activity).
The function app takes JSON lines as input, parses them, and flattens them.
This way you can scale the number of lines sent in each request to the function, and also scale the parallel requests.
The function flattens as required into one or more files and stores them in blob storage.
The pipeline continues from there as needed.
One problem with this approach: if any of the ranges fails, the Copy activity will retry, but it reruns the whole process.
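For illustration, here is a minimal sketch of the kind of flattening code the function app could run, in C# with Newtonsoft.Json. The output columns (Group, Metric, Value, ValueDate, ValueScore) and the file paths are assumptions for the sketch, not anything prescribed by ADF:
using System;
using System.IO;
using System.Text;
using Newtonsoft.Json.Linq;

class Flattener
{
    static void Main()
    {
        // Parse the source document shown in the question.
        var root = JObject.Parse(File.ReadAllText("input.json"));
        var csv = new StringBuilder();
        csv.AppendLine("Group,Metric,Value,ValueDate,ValueScore");

        // Level 1: the dynamic group names (ABCDataPoints, ABCUseIndicators, ...).
        foreach (var group in root.Properties())
        {
            // Level 2: the dynamic metric names (CementUse, CoalUse, ...).
            foreach (var metric in ((JObject)group.Value).Properties())
            {
                var fields = (JObject)metric.Value;
                csv.AppendLine(string.Join(",",
                    group.Name,
                    metric.Name,
                    (string)fields["Value"] ?? "",
                    (string)fields["ValueDate"] ?? "",
                    (string)fields["ValueScore"] ?? ""));
            }
        }

        File.WriteAllText("output.csv", csv.ToString());
    }
}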
Trying something very similar - is there any other/native solution to address this?
As mentioned in the response above, has this gone GA yet? If yes, any reference documentation/samples would be of great help!
"Custom expressions are in internal beta testing and will be in PA in the near future."
An Azure Function with a complex (list of objects) configuration type works locally (with the complex type in local.settings.json) but fails to read/create the list of objects in Azure (with the complex type in the Azure Function's configuration settings). I'm looking for the recommended/optimal way to support this across both platforms/methods of access.
This works great with my local.settings.json, where I use the configuration builder and pull the data out like:
var myList = config.GetSection("ConfigurationList").Get<List<MyType>>();
However, this doesn't seem to work in Azure Functions. I think that is because local.settings.json is a JSON file and looks like:
"ConfigurationList" : [ { "Name": "A", "Value": 2 }, { "Name": "B", "Value": 3 }]
while in Azure Functions it is a setting "ConfigurationList" with the value
[ { "Name": "A", "Value": 2 }, { "Name": "B", "Value": 3 }]
(so there isn't really a "section" in Azure Functions?)
It seems like the "easy" solution is to just change the JSON to a quoted string and deserialize the string (then it would work the same in both places), but that doesn't seem like the "best" (or "recommended") solution.
i.e. something like
"ConfigurationList" : "[ { \"Name\": \"A\", \"Value\": 2 }, { \"Name\": \"B\", \"Value\": 3 }]"
var myList = (List<MyType>)JsonConvert.DeserializeObject(config["ConfigurationList"], typeof(List<MyType>));
This isn't the worst, but it makes the JSON a bit less nice and doesn't flow across the two platforms ... if it is what I have to do, fine, but I'm hoping for a more standard approach/recommendation.
As I mentioned in the comment, locally you can process local.settings.json as a JSON file, but on Azure the value in the configuration settings is an environment variable. There is no section; it is just a string.
Note that only string values are allowed, and anything nested will break. See how to use nested settings on an Azure web app (an Azure Function runs in the Azure App Service sandbox, so it works the same way):
https://learn.microsoft.com/en-us/archive/blogs/waws/asp-net-core-settings-for-azure-app-service
For example, if this is the json structure:
{
"Parent": {
"ChildOne": "C1 from secrets.json",
"ChildTwo": "C2 from secrets.json"
}
}
Then in the web app you should save each leaf value under a flattened key, using the configuration hierarchy separator:
Parent:ChildOne = "C1 from secrets.json"
Parent:ChildTwo = "C2 from secrets.json"
(On Linux, where colons are not allowed in app setting names, use a double underscore instead: Parent__ChildOne.)
Not sure if you are looking for something like this - yours seems to be a list, but if it were a simple JObject like
"ConfigurationList" : {
"Name": "A",
"Value": 2
}
then you could declare ConfigurationList:Name and ConfigurationList:Value in the function app's configuration settings.
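If the value really is a list, the same idea extends with numeric index segments, which the .NET configuration binder understands. A sketch (naming is ours, using the double-underscore separator mentioned above so the keys are valid Azure app setting names; the environment-variable provider maps __ back to :):
ConfigurationList__0__Name = A
ConfigurationList__0__Value = 2
ConfigurationList__1__Name = B
ConfigurationList__1__Value = 3
With those settings in place, config.GetSection("ConfigurationList").Get<List<MyType>>() should bind the same way it does locally.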
I am using the template that creates a Multi-AZ LAMP stack. The only things I am changing are the existing VPC ID, the two existing subnets, and the names of the RDS database, user, and password. The template validates OK when I click the check button, but when I try to launch the stack it fails with this error:
"Template contains errors.: Template format error: Every Description member must be a string."
I have been looking for example SIMPLE templates that do not use foo-bar style "everybody knows to fill this in with their own value" placeholders. I have put in hours of searching and testing. This is the first one I have ever done, and it just cannot be all that hard, right?
I am using the suggested list of AMIs, though in the future I will put in my customized AMI instead.
"Parameters" : {
"VpcId" : {
"Type" : "AWS::EC2::VPC::Id",
"Description" : "vpc-123456789456",
"ConstraintDescription" : "must be the VPC Id of an existing Virtual Private Cloud."
},
"Subnets" : {
"Type" : "List<AWS::EC2::Subnet::Id>",
"Description" : [
"subnet-12345621ff4c" ,
"subnet-1234562188d1"],
This is the only form I have found that doesn't throw errors saying "Expecting a ':' instead of a ','". Should I be listing the names as a "List"?
"Description" has to be a string. It's a textual description that shows up in the UI when you create the stack.
I think you're looking for either "Default" or "AllowedValues". The first will set the default value in case your template user doesn't specify anything. To put a list of values, you need to separate them by a comma. For example:
"Parameters": {
"VpcId": {
"Type": "AWS::EC2::VPC::Id",
"Default": "vpc-123456789456",
"ConstraintDescription": "must be the VPC Id of an existing Virtual Private Cloud."
},
"Subnets": {
"Type": "List<AWS::EC2::Subnet::Id>",
"Default": "subnet-12345621ff4c,subnet-1234562188d1"
}
}
The second is a list of allowed values the user can select. That one actually does take a list. For example:
"Parameters": {
"VpcId": {
"Type": "AWS::EC2::VPC::Id",
"AllowedValues": ["vpc-123456789456", "vpc-xxx"],
"ConstraintDescription": "must be the VPC Id of an existing Virtual Private Cloud."
}
}
I'm not sure if "ConstraintDescription" will show if the user selects a wrong one. I think that only applies to "AllowedPattern".
Yes, it can be that hard and very frustrating, but it does get easier over time. The learning curve is steep.
I have storage for a community set up in Firebase. Since I have a class defined in my Swift project, I need to know whether the data is an Array or a Dictionary when generating an object from it.
I downloaded the JSON file and it looks like this - two users are stored in different data formats in Firebase for the same "table":
[{"user": {
"User1": {
"adminOf": {
"3": true,
"5": true
},
"alias": "borussenpeter",
"communities": {
"3": true,
"5": true
}
},
"User2": {
"adminOf": [null, true, true, null, true],
"alias": "burkart",
"communities": [null, true, true, null, true]
}
}}]
I tried downloading the file, editing it so both users look the same, and uploading it again, but Firebase saves it this way again.
Of course initialising the object fails when using the wrong data type. Any thoughts on that? Thanks
The answer is in the Firebase documentation on arrays:
However, to help developers that are storing arrays in a Firebase database,... if the data looks like an array, Firebase clients will render it as an array. In particular, if all of the keys are integers, and more than half of the keys between 0 and the maximum key in the object have non-empty values, then Firebase clients will render it as an array.
In the JSON for User1 there are 2 non-empty values (at keys 3 and 5) for the 6 indices from 0 up to the maximum key 5. That is less than half of the keys, so Firebase doesn't render it as an array.
In the JSON for User2 there are 3 non-empty values (at keys 1, 2 and 4) for the 5 indices from 0 up to the maximum key 4. That is more than half of the keys, so Firebase renders it as an array.
A few ways to deal with this:
prefix the integers with a string, e.g. "group3": true (see the sketch after this list).
add a dummy non-integer to the "array": e.g. "NOTUSED": false.
store values for all indices, e.g. explicitly store "0": false for the unused ones.
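Applied to User2's data, the first workaround would look something like this (a sketch; the "group" prefix is arbitrary), and it will round-trip as a dictionary:
"adminOf": {
    "group1": true,
    "group2": true,
    "group4": true
}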
In general:
[ ] = array
{ } = dictionary
How the data appears in Firebase is directly related to how it was stored in Firebase.
My guess is that when User2 was written to Firebase, the keys were missing, so it was written as an array.
I am getting JSON returned in this format:
{
"status": "success",
"data": {
"debtor": {
"debtor_id": 1301,
"key": value,
"key": value,
"key": value
}
}
}
Somehow, my RESTAdapter needs to provide my debtor model properties from the "debtor" section of the JSON.
Currently, I am getting a successful callback from the server, but also a console error saying that Ember cannot find a model for "status". I can't find anything in the Ember model guide about how to deal with JSON that is nested like this.
So far, I have been able to do a few simple things like extending the RESTSerializer to accept "debtor_id" as the primaryKey, and also remove the pluralization of the GET URL request... but I can't find any clear guide to reach a deeply nested JSON property.
Extending the problem detail for clarity:
I need to somehow alter the default behavior of the Adapter/Serializer, because this JSON convention is being used for many purposes other than my Ember app.
My solution thus far:
With a friend we were able to dissect the "extract API" (thanks #lame_coder for pointing me to it)
We came up with a way to extend the serializer on a case-by-case basis, but I'm not sure if it is really an "Ember-approved" solution...
// app/serializers/debtor.js
import DS from 'ember-data';

export default DS.RESTSerializer.extend({
    primaryKey: "debtor_id",

    extract: function(store, type, payload, id, requestType) {
        // Unwrap the nested record and alias debtor_id to the id Ember expects.
        payload.data.debtor.id = payload.data.debtor.debtor_id;
        return payload.data.debtor;
    }
});
It seems that even though I was able to change my primaryKey for requesting data, Ember was still trying to use a hard-coded id to identify the correct record (rather than the debtor_id that I had set). So we just overrode the extract method to force Ember to look for the primary key that I wanted.
Again, this works for me currently, but I have yet to see if this change will cause any problems moving forward....
I would still be interested in a different solution that might be more stable/reusable/future-proof, if anyone has any insights.
From the description of the problem, it looks like your model definition and the JSON structure do not match. You need to make them exactly the same for the serializer to map them correctly.
If you decide to change your REST API, the return statement would be something like this (using mock data):
// Your Get method on the service
public object Get()
{
    return new { debtor = new { debtor_id = 1301, key1 = value1, key2 = value2 } };
}
The JSON that Ember expects needs to look like this:
"debtor": {
"id": 1301,
"key": value,
"key": value,
"key": value
}
Ember sees "status" as a model that it needs to load data for. The next problem is that it needs "id" in there, not "debtor_id".
If you need to return several objects you would do this:
"debtors": [{
"id": 1301,
"key": value,
"key": value,
"key": value
},{
"id": 1302,
"key": value,
"key": value,
"key": value
}]
Make sense?