Parse pointers in JSON file are blank after import - json

first time poster here.
I uploaded a JSON file to Parse. One of my "columns" is an array of Pointers, but the entries don't reference the objectId field, like so:
{ "Tags": [
{
"__type": "Pointer",
"className": "TAGS_Categories",
"TAGS": "Tag1"
},
{
"__type": "Pointer",
"className": "TAGS_Categories",
"TAGS": "Tag2"
},
{
"__type": "Pointer",
"className": "TAGS_Categories",
"TAGS": "Tag3"
}
]
}
But after I imported the file to Parse, this is what appears under "Tags":
[{},{},{}]
My questions are:
1) Is the data somehow hidden and just not appearing in the website's spreadsheet?
2) If it's truly gone, what would be the best way to fix my JSON file so that it will appear?
Help :-(

When uploading content you need to follow the required data format. Pointers are connected by a combination of the class name and the object id of the item to connect to. Without the object id the item in the data store can't be found (a name lookup will not be performed).
You need to update your JSON payload to include the object ids.
Each item must have the fields:
{"__type":"Pointer","className":"XXXX","objectId":"YYYYYYYYYY"}

Related

Pentaho Kettle: How to dynamically fetch JSON file columns

Background: I work for a company that basically sells passes. Every order placed by a customer will contain N passes.
Issue: I have these JSON event-transaction files coming into an S3 bucket on a daily basis from DocumentDB (MongoDB). Each JSON file is associated with the relevant type of event (insert, modify or delete) for every document key (which is an order in my case). The example below illustrates an "insert" event that came through to the S3 bucket:
{
  "_id": {
    "_data": "11111111111111"
  },
  "operationType": "insert",
  "clusterTime": {
    "$timestamp": {
      "t": 11111111,
      "i": 1
    }
  },
  "ns": {
    "db": "abc",
    "coll": "abc"
  },
  "documentKey": {
    "_id": {
      "$uuid": "abcabcabcabcabcabc"
    }
  },
  "fullDocument": {
    "_id": {
      "$uuid": "abcabcabcabcabcabc"
    },
    "orderNumber": "1234567",
    "externalOrderId": "12345678",
    "orderDateTime": "2020-09-11T08:06:26Z[UTC]",
    "attraction": "abc",
    "entryDate": {
      "$date": "2020-09-13"
    },
    "entryTime": {
      "$date": "04000000"
    },
    "requestId": "abc",
    "ticketUrl": "abc",
    "tickets": [
      {
        "passId": "1111111",
        "externalTicketId": "1234567"
      },
      {
        "passId": "222222222",
        "externalTicketId": "122442492"
      }
    ],
    "_class": "abc"
  }
}
As we see above, every JSON file might contain N passes, and every pass is, in turn, associated with an external ticket id in a separate column. I want to use Pentaho Kettle to read these JSON files and load the data into the DW. I am aware of the Json input step and Row Normalizer, which could transpose "PassID 1", "PassID 2", "PassID 3"..."PassID N" columns into one "Pass" column, and I would have to apply similar logic to the "External ticket id" column. The problem with that approach is that it is quite static: I need to "tell" Pentaho in advance, in the Json input step, how many passes are coming. But what if tomorrow I have an order with 10 different passes? How can I do this dynamically to ensure the job will not break?
If you want a tabular output like
TicketUrl   Pass            ExternalTicketID
---------   -------------   ----------------
abc         PassID1Value1   ExTicketIDvalue1
abc         PassID1Value2   ExTicketIDvalue2
abc         PassID1Value3   ExTicketIDvalue3
and make the incoming values dynamic based on the JSON input file, then you can download this transformation: Updated Link
I found everything works dynamically in the JSON input.
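The reason this can work dynamically is that the Json input step accepts JSONPath expressions, and a path with a [*] wildcard emits one output row per matching array element, so the number of passes never has to be declared in advance. A sketch of the step's Fields tab for the file above (the field names are just examples):
Name               Path
----               ----
orderNumber        $.fullDocument.orderNumber
ticketUrl          $.fullDocument.ticketUrl
passId             $.fullDocument.tickets[*].passId
externalTicketId   $.fullDocument.tickets[*].externalTicketId
One caveat: some PDI versions refuse to mix single-value paths with [*] paths in one step ("The data structure is not the same inside the resource"); if you hit that, read the order-level fields and the tickets array in two Json input steps and join the streams.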

Multiple inputs - trying to send several requests (HTTP method: PUT)

I want to send different JSONs to an endpoint:
{{URL_API}}/products/{sku}
I need to update information for several different products, so I need to specify the product within the endpoint, i.e.:
if you access this particular endpoint: {{URL_API}}/products/ you will get all the products, but I need to specify the product that I want to update:
{{URL_API}}/products/99RE345GT
Take a look at this; I want to send a JSON like this:
{
  "sku": "99RE345GT",
  "price": "56665.0000",
  "status": 1,
  "group_prices": [
    {
      "group": "CLASS A",
      "price": 145198.794
    },
    {
      "group": "CLASS B",
      "price": 145198.794
    },
    {
      "group": "CLASS C",
      "price": 145198.794
    }
  ]
}
AND another one like this (both JSONs share the same structure BUT with different information):
{
  "sku": "98PA345GT",
  "price": "17534.0000",
  "status": 1,
  "group_prices": [
    {
      "group": "CLASS A",
      "price": 145198.794
    },
    {
      "group": "CLASS B",
      "price": 145198.794
    },
    {
      "group": "CLASS C",
      "price": 145198.794
    }
  ]
}
How can I do that? I have already generated more than 200 JSONs, one for every product.
So, I have to update 200 products and I generated one JSON for each, do you get me?
Following my example I would need to edit (somehow) the endpoint for every product and send a JSON, i.e.:
since the first JSON has the SKU 99RE345GT, it should perform an HTTP PUT against this endpoint:
{{URL_API}}/products/99RE345GT
Then, since the second JSON has the SKU 98PA345GT, it should perform an HTTP PUT against this endpoint:
{{URL_API}}/products/98PA345GT
I have never done something like this before. I read something about CSV + Postman Runner but I did not understand how it works.
EDIT
I was working on an Excel file and I did this:
So now I have all the different JSONs for every product.
EDIT #2: It fails when it validates the Request_URL
I did this:
1) I created a new collection
2) I set the Request_url to: {{URL_API}}/products/{{sku}}
3) I saved the changes and then went to the Collection Runner
4) After clicking the Run button, I got this error message:
Invalid URL:
Have you tried adding those data sets to a CSV?
https://learning.postman.com/docs/postman/collection-runs/working-with-data-files/
If you have 2 column headers in a CSV file, one with sku and the other with requestBody, add that variable value to the request body of the PUT request instead of the raw JSON.
sku,requestBody
99RE345GT, {JSON Payload}
98PA345GT, {...}
Add a couple of values under those headings to start with, until you prove that it works in the Collection Runner.
Once you're happy, add the rest into the file. You may need to do some parsing of the JSON in the Pre-request Script, but it should work.
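For that parsing, a minimal Pre-request Script sketch, assuming the CSV column is named requestBody and holds each JSON payload as a single quoted string:
// Read this iteration's raw payload from the data file
const raw = pm.iterationData.get("requestBody");
// Parse and re-serialize it so {{requestBody}} resolves to clean JSON
// in the request body (CSV quoting/escaping is the usual failure point)
pm.variables.set("requestBody", JSON.stringify(JSON.parse(raw)));
The PUT body is then just {{requestBody}} and the URL stays {{URL_API}}/products/{{sku}}.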
Alternatively, use this template in the PUT request body and create a CSV with the same column headings as the values in the {{...}} syntax. The values in the data file will resolve to the values in the request body.
{
  "sku": "{{sku}}",
  "price": "{{price}}",
  "status": {{status}},
  "group_prices": [
    {
      "group": "{{groupA}}",
      "price": {{groupAPrice}}
    },
    {
      "group": "{{groupB}}",
      "price": {{groupBPrice}}
    },
    {
      "group": "{{groupC}}",
      "price": {{groupCPrice}}
    }
  ]
}
The CSV might look like this:
sku,price,status,groupA,groupAPrice,...
99RE345GT,1234,1,Group A, 555
98PA345GT,1235,1,Group A, 666

Reading Inconsistent Nested JSON in Athena

In Athena, I am reading some nested JSON files into a table. The field that actually contains the nested JSON has an inconsistent number of fields within it across the different files in the raw data.
Sometimes the data looks something like this:
{
  "id": "9f1e07b4",
  "date": "05/20/2018 02:30:53.110 AM",
  "data": {
    "a": "asd",
    "b": "adf",
    "body": {
      "sid": {
        "uif": "yes",
        "sidd": "no",
        "state": "idle"
      }
    },
    "category": "scene"
  }
}
Other times the data looks something like this:
{
  "id": "9f1e07b4",
  "date": "05/20/2018 02:30:45.436 AM",
  "data": {
    "a": "event",
    "b": "state",
    "body": {
      "persona": {
        "one": {
          "movement": "idle"
        }
      }
    },
    "category": "scene"
  }
}
Other times the "body" field contains both the "sid" struct and the "persona" struct.
As you can see, the fields given within "body" are not always consistent. I tried adding all of the possible fields and their structures to my CREATE EXTERNAL TABLE query. However, the "data" column that contains the "body" field still does not fill in and remains blank when I "preview table" in Athena.
In the CREATE TABLE DDL, is there a way to indicate that I want to fill all of the columns that aren't present in the nested JSON of each file with null values?
Furthermore, the 'names' given to the fields in the query do not have to correspond to the key values in the raw JSON; it seems Athena is simply reading the structure and nothing else. Is there a way to indicate directly which JSON key corresponds to which Athena field name, so that if some fields are missing from the "body" of one file, Athena can know which one is missing and fill it in as null?
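For reference, Athena's JSON SerDes match struct fields to JSON keys by name (case-insensitively) and return null for any declared field that is absent from a record, so declaring the union of all observed fields is the usual approach; the OpenX SerDe additionally lets you remap a column to a differently named JSON key through its mapping serde properties. A minimal DDL sketch under those assumptions (the table name, location, and types are placeholders):
CREATE EXTERNAL TABLE events (
  id string,
  `date` string,
  data struct<
    a:string,
    b:string,
    body:struct<
      sid:struct<uif:string,sidd:string,state:string>,
      persona:struct<one:struct<movement:string>>
    >,
    category:string
  >
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
LOCATION 's3://your-bucket/path/';
If the data column still comes back entirely blank, a common culprit is files that are not one JSON object per line, which the SerDe cannot split into records.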

How to get the full pointer data in Parse.com

I'm making a new application using Parse.com as a backend, and I'm trying to make fewer requests to Parse. I have a class with a pointer to an object of another class.
Class1 (things):
ObjectID    Name    Category (pointer)
JDFHSJFxv   Apple   QSGKqf343
Class2 (Categories):
ObjectID    Name     Number   Image
QSGKqf343   Fruits   45       http://myserver.com/fruits.jpeg
When I try to retrieve data for my first class, things, using the REST API, I get this JSON object:
{
  "results": [
    {
      "Name": "Apple",
      "createdAt": "2015-07-12T02:50:20.291Z",
      "objectId": "JDFHSJFxv",
      "category": {
        "__type": "Pointer",
        "className": "Teams",
        "objectId": "QSGKqf343"
      },
      "updatedAt": "2015-07-12T02:55:33.696Z"
    }
  ]
}
The JSON doesn't contain all the data included in the object I'm pointing to, so I will have to make another request to get the full data of that object.
Is there any way to fix that?
You need to tell Parse to return the related object in your query, via the include key.
e.g., add the following to your curl call: --data-urlencode 'include=category'
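A minimal sketch of the full request against the classic Parse.com REST API (the keys are placeholders; the class name comes from the example above):
curl -X GET \
  -H "X-Parse-Application-Id: ${APPLICATION_ID}" \
  -H "X-Parse-REST-API-Key: ${REST_API_KEY}" \
  -G \
  --data-urlencode 'include=category' \
  https://api.parse.com/1/classes/things
With include=category, the category field in the results comes back as a full object ("__type": "Object") with all of its fields, instead of a bare Pointer.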

Obtain a different JSON object structure in AngularJS

I'm working with AngularJS.
In this part of the project my goal is to obtain a JSON structure after filling a form with some particular values.
Here's the fiddle of my simple form: Fiddle
With the form I will query KairosDB, my NoSQL database; I will query data from it with a JSON object. The form is structured in this way:
a Name
a certain Number of Tags, with Tag Id ("ch" for example) and tag value ("932" for example)
a certain Number of Aggregators to manipulate data coming from the DB
Start Timestamp and End Timestamp (for now they are static and only included in the final JSON object)
After filling this form, with my code I'll obtain for example this JSON object:
{
  "metrics": [
    {
      "tags": [
        {
          "id": "ch",
          "value": "932"
        },
        {
          "id": "ch",
          "value": "931"
        }
      ],
      "aggregators": {
        "name": "sum",
        "sampling": [
          {
            "value": "1",
            "unit": "milliseconds",
            "type": "SUM"
          }
        ]
      }
    }
  ],
  "cache_time": 0,
  "start_absolute": 123,
  "end_absolute": 1234
}
Unfortunately, KairosDB accepts a different structure. As you can see below, the tag id "ch" appears as a key rather than as the value of an "id" field, and tag values coming from the same tag id are grouped together:
{
  "metrics": [
    {
      "tags": {
        "ch": [
          "932",
          "931"
        ]
      },
      "name": "AIENR",
      "aggregators": [
        {
          "name": "sum",
          "sampling": {
            "value": "1",
            "unit": "milliseconds"
          }
        }
      ]
    }
  ],
  "cache_time": 0,
  "start_absolute": 1367359200000,
  "end_absolute": 1386025200000
}
My question is: is there a way to obtain a JSON structure like the one accepted by KairosDB from an AngularJS form? Thanks to everyone.
I've seen this topic as the one most similar to mine, but it isn't in AngularJS.
Personally, I'd do the refactoring work in the backend: have whatever server interface sends and receives the data do the manipulation. Otherwise you'll end up needing to refactor your data inside Angular anywhere you want to use that dataset, whereas doing it in the backend puts it in a single access point.
Of course, you could do it in Angular: just replace userString in the submitData method with a copy of the array, replace the tags section with data in the new format, and likewise refactor the returned result back to the original format when you get a reply.
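A minimal sketch of that client-side conversion, written against the example objects above rather than the actual Fiddle code (the metric name "AIENR" and the single-metric form model are assumptions):
// Convert the form's metric structure into the shape KairosDB expects:
// tag values grouped under their tag id, and the aggregator object
// turned into a one-element array without the "type" field.
function toKairosQuery(formData) {
  var metric = formData.metrics[0];
  var tags = {};
  metric.tags.forEach(function (t) {
    (tags[t.id] = tags[t.id] || []).push(t.value);
  });
  var s = metric.aggregators.sampling[0];
  return {
    metrics: [{
      tags: tags,
      name: "AIENR", // assumed to come from the form's Name field
      aggregators: [{
        name: metric.aggregators.name,
        sampling: { value: s.value, unit: s.unit }
      }]
    }],
    cache_time: formData.cache_time,
    start_absolute: formData.start_absolute,
    end_absolute: formData.end_absolute
  };
}
Calling toKairosQuery with the first JSON above produces the second structure (with the example timestamps substituted in).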