Prevent change of 'copied' object affecting original using ng-repeat in angular - json

I have a setup in Angular which displays a JSON string called 'items'. Each item contains an array of field IDs. By matching the field IDs, it pulls information for the fields from a separate 'fields' JSON string.
{
    "data": [
        {
            "id": "1",
            "title": "Item 1",
            "fields": [1, 1]
        },
        {
            "id": "2",
            "title": "Item 2",
            "fields": [1, 3]
        },
        {
            "id": 3,
            "title": "fsdfs"
        }
    ]
}
You can copy or delete either the items or fields, which will modify the 'items' json.
Everything works except when I copy one item (and its fields), and then choose to delete a field for that specific item.
What happens is that it deletes the field for both the copied item AND the original.
Plunker -
http://plnkr.co/edit/hN8tQiBMBhQ1jwmPiZp3?p=preview
I've read that using 'track by' helps index each item as unique and prevents duplicate keys, but it seems to have no effect here.
Any help appreciated, thanks
Edit -
Credit to Eric McCormick for this one: using angular.copy with Array.push solved the issue.
Instead of -
$scope.copyItem = function (index) {
    $scope.items.data.push({
        id: $scope.items.data.length + 1,
        title: $scope.items.data[index].title,
        fields: $scope.items.data[index].fields // copies the array reference, not the array
    });
};
This worked -
$scope.copyItem = function (index) {
    $scope.items.data.push(angular.copy($scope.items.data[index]));
};
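The underlying problem in the first version is that fields: items.data[index].fields copies the array reference, not the array itself, so the original and the copy share one fields array. A minimal sketch of the difference (plain JavaScript, with angular.copy assumed available for the last part):

var original = { title: "Item 1", fields: [1, 1] };
// shallow copy: both objects now point at the SAME fields array
var shallow = { title: original.title, fields: original.fields };

shallow.fields.splice(0, 1);   // "delete a field" on the copy...
console.log(original.fields);  // [1] -- ...and the original lost it too

// deep copy: angular.copy clones the nested array as well
var deep = angular.copy(original);
deep.fields.splice(0, 1);
console.log(original.fields);  // still [1] -- the original is untouched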

I recommend using angular.copy, which performs a deep copy of the source object: the result is a new object, independent of the original.
It may seem slightly counter-intuitive, but a direct reference (as you're observing) still points at the original object. If you inspect the element's scope after it's instantiated in the DOM, you can see a $id property assigned to the object in memory. By using angular.copy(source, destination), you copy all of the properties and values into a genuinely unique object.
Example:
//inside the controller, a function to instantiate a new copy of the selected object
this.selectItem = function (item) {
    var copyOfItem = angular.copy(item);
    // copyOfItem has the same properties and values, but is a unique object
    return copyOfItem;
};
Egghead.io has a video on angular.copy.

Related

What JSON format does STRIP_OUTER_ARRAY support?

I have a file composed of a single array containing multiple records.
{
    "Client": [
        {
            "ClientNo": 1,
            "ClientName": "Alpha",
            "ClientBusiness": [
                {
                    "BusinessNo": 1,
                    "IndustryCode": "12345"
                },
                {
                    "BusinessNo": 2,
                    "IndustryCode": "23456"
                }
            ]
        },
        {
            "ClientNo": 2,
            "ClientName": "Bravo",
            "ClientBusiness": [
                {
                    "BusinessNo": 1,
                    "IndustryCode": "34567"
                },
                {
                    "BusinessNo": 2,
                    "IndustryCode": "45678"
                }
            ]
        }
    ]
}
I load it with the following code:
create or replace stage stage.test
    url='azure://xxx/xxx'
    credentials=(azure_sas_token='xxx');

create table if not exists stage.client_test (json_data variant not null);

copy into stage.client_test
    from @stage.test/client_test.json
    file_format = (type = 'JSON' strip_outer_array = true);
Snowflake imports the entire file as one row.
I would like the COPY INTO command to remove the outer array structure and load the records into separate table rows.
When I load larger files, I hit the size limit for VARIANT and get the error: Error parsing JSON: document is too large, max size 16777216 bytes.
If you can import the file into Snowflake as a single row, then you can use LATERAL FLATTEN on the Client field to generate one row per element in the array.
Here's a blog post on LATERAL and FLATTEN (or you could look them up in the snowflake docs):
https://support.snowflake.net/s/article/How-To-Lateral-Join-Tutorial
If the format of the file is, as shown, a single object with a single property containing an array with 500 MB worth of elements, then importing it may still work; if it does, LATERAL FLATTEN is exactly what you want. But that shape is not great for data processing, so you might want a text-processing script to massage the data first.
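For example, once the whole object has landed in a single row, a query along these lines would explode the Client array into one row per client (a sketch, assuming the file was loaded into stage.client_test with a single VARIANT column named json_data):

select
    c.value:ClientNo::integer  as client_no,
    c.value:ClientName::string as client_name
from stage.client_test t,
     lateral flatten(input => t.json_data:Client) c;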
RECOMMENDATION #1:
The problem with your JSON is that it doesn't have an outer array. It has a single outer object containing a property with an inner array.
If you can fix the JSON, that would be the best solution, and then STRIP_OUTER_ARRAY will work as you expected.
You could also try to recompose the JSON (an ugly business) after reading it line by line with:
CREATE OR REPLACE TABLE X (CLIENT VARCHAR);
COPY INTO X FROM (SELECT $1 CLIENT FROM @My_Stage/Client.json);
User Response to Recommendation #1:
Thank you. So from what I gather, COPY with STRIP_OUTER_ARRAY can handle a file starting and ending with square brackets, and parse the file as if they were not there.
The real files don't have line breaks, so I can't read the file line by line. I will see if the source system can change the export.
RECOMMENDATION #2:
Also, if you would like to see what the JSON parser does, you can experiment with the code below; I have parsed JSON in COPY commands using similar code. Working with your JSON data in a small project can help you shape the COPY command to work as intended.
CREATE OR REPLACE TABLE SAMPLE_JSON (
    ID INTEGER,
    DATA VARIANT
);
INSERT INTO SAMPLE_JSON (ID, DATA)
SELECT 1, parse_json('{
    "Client": [
        {
            "ClientNo": 1,
            "ClientName": "Alpha",
            "ClientBusiness": [
                { "BusinessNo": 1, "IndustryCode": "12345" },
                { "BusinessNo": 2, "IndustryCode": "23456" }
            ]
        },
        {
            "ClientNo": 2,
            "ClientName": "Bravo",
            "ClientBusiness": [
                { "BusinessNo": 1, "IndustryCode": "34567" },
                { "BusinessNo": 2, "IndustryCode": "45678" }
            ]
        }
    ]
}');
SELECT
     C.value:ClientNo                           AS ClientNo
    ,C.value:ClientName::STRING                 AS ClientName
    ,ClientBusiness.value:BusinessNo::INTEGER   AS BusinessNo
    ,ClientBusiness.value:IndustryCode::INTEGER AS IndustryCode
FROM SAMPLE_JSON f
    ,TABLE(FLATTEN(f.DATA, 'Client')) C
    ,TABLE(FLATTEN(C.value:ClientBusiness, '')) ClientBusiness;
User Response to Recommendation #2:
Thank you for the parse_json example!
Trouble is, the real files are sometimes 500 MB, so the parse_json function chokes.
Follow-up on Recommendation #2:
The JSON needs to be in the NDJSON (http://ndjson.org/) format; otherwise files this large cannot be parsed as a single document.
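For illustration, the same sample data reshaped as NDJSON would be one complete JSON object per line with no outer wrapper, so COPY loads each line as its own row and no single VARIANT value ever has to hold the whole file:

{"ClientNo": 1, "ClientName": "Alpha", "ClientBusiness": [{"BusinessNo": 1, "IndustryCode": "12345"}, {"BusinessNo": 2, "IndustryCode": "23456"}]}
{"ClientNo": 2, "ClientName": "Bravo", "ClientBusiness": [{"BusinessNo": 1, "IndustryCode": "34567"}, {"BusinessNo": 2, "IndustryCode": "45678"}]}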
Hope the above helps others running into similar questions!

Check if JSON string in Orbeon repeating grid contains a specific value

I'm working with repeating grids through Form Builder.
I have a custom control whose string value is represented as JSON:
{
    "data": {
        "type": "File",
        "itemID": "12345",
        "name": "Annual Summary",
        "parentFolderID": "fileID",
        "owner": "Owner",
        "lastModifiedDate": "2016-10-17 22:48:05Z"
    }
}
In controls outside of the repeating grid, I need to check whether name = "Annual Summary".
Previously, I had a dropdown control, and the Calculated Value expression $dropdownControl = "Annual Summary" returned true if any of the repeated rows contained that value. My understanding is that the = operator validates against all rows.
Now, with the JSON output of the control, I am attempting to use
contains($jsonStringValue, 'Annual Summary')
However, this only works with one entry and returns null when there are multiple rows.
Two questions:
How would I validate whether "Annual Summary" (or any other text) is present within any of the repeated rows?
Is there a way to navigate the JSON, or to parse it into XML and navigate that?
Constraints: the solution must fit within the Calculated Value or Visibility fields in Form Builder, or involve manipulating the source that Form Builder generates.
You probably want to parse the JSON string first. See also this other Stack Overflow question.
Until Orbeon Forms 2016.3 is released, you would write:
(
    for $v in $jsonStringValue
    return converter:jsonStringToXml($v)
)//name = 'Annual Summary'
With the above, you also need to scope the namespace:
xmlns:converter="org.orbeon.oxf.json.Converter"
Once Orbeon Forms 2016.3 is released you can switch to:
$jsonStringValue/xxf:json-to-xml()//name = 'Annual Summary'

MongoDB find() to return the sub document when a (field,value) is matched

This is a single collection containing two JSON documents. I am searching for a particular field/value pair in an object, and the entire sub-document must be returned on a match (that particular sub-document, out of the two in the following collection). Thanks in advance.
{
    "clinical_study": {
        "#rank": "379",
        "#comment": [],
        "required_header": {
            "download_date": "ClinicalTrials.gov processed this data on March 18, 2015",
            "link_text": "Link to the current ClinicalTrials.gov record.",
            "url": "http://clinicaltrials.gov/show/NCT00000738"
        },
        "id_info": {
            "org_study_id": "ACTG 162",
            "secondary_id": "11137",
            "nct_id": "NCT00000738"
        },
        "brief_title": "Randomized, Double-Blind, Placebo-Controlled Trial of Nimodipine for the Neurological Manifestations of HIV-1",
        "official_title": "Randomized, Double-Blind, Placebo-Controlled Trial of Nimodipine for the Neurological Manifestations of HIV-1",
    }
{
    "clinical_study": {
        "#rank": "381",
        "#comment": [],
        "required_header": {
            "download_date": "ClinicalTrials.gov processed this data on March 18, 2015",
            "link_text": "Link to the current ClinicalTrials.gov record.",
            "url": "http://clinicaltrials.gov/show/NCT00001292"
        },
        "id_info": {
            "org_study_id": "920106",
            "secondary_id": "92-C-0106",
            "nct_id": "NCT00001292"
        },
        "brief_title": "Study of Scaling Disorders and Other Inherited Skin Diseases",
        "official_title": "Clinical and Genetic Studies of the Scaling Disorders and Other Selected Genodermatoses",
    }
Your example documents are malformed - right now both clinical_study keys are part of the same object, and that object is missing a closing }. I assume you want them to be two separate documents, although you call them subdocuments. It doesn't make sense to have them be subdocuments of a document if they are both named under the same key. You cannot save the document that way, and in the mongo shell it will silently replace the first instance of the key with the second:
> var x = { "a" : 1, "a" : 2 }
> x
{ "a" : 2 }
If you just want to return the clinical_study part of the document when you match on clinical_study.#rank, use projection:
db.test.find({ "clinical_study.#rank" : "379" }, { "clinical_study" : 1, "_id" : 0 })
If instead you meant for the clinical_study documents to be elements of an array inside a larger document, then use $. Here, clinical_study is now the name of an array field which has as its elements the two values of the clinical_study key in your non-documents:
db.test.find({ "clinical_study.#rank" : "379" }, { "_id" : 0, "clinical_study.$" : 1 })
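For clarity, the array-based shape that this second query assumes would look roughly like the following (illustrative, with most fields omitted); with documents stored this way, the $ projection returns only the matching array element:

{
    "clinical_study" : [
        { "#rank" : "379", "id_info" : { "nct_id" : "NCT00000738" } },
        { "#rank" : "381", "id_info" : { "nct_id" : "NCT00001292" } }
    ]
}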

How to add nested json object to Lucene Index

I need a little help regarding Lucene index files; I thought maybe some of you guys could help me out.
I have json like this:
[
    {
        "Id": 4476,
        "UrlName": null,
        "PhoneData": [
            {
                "PhoneType": "O",
                "PhoneNumber": "0065898"
            },
            {
                "PhoneType": "F",
                "PhoneNumber": "0065898"
            }
        ],
        "Contact": [],
        "Services": [
            {
                "ServiceId": 10,
                "ServiceGroup": 2
            },
            {
                "ServiceId": 20,
                "ServiceGroup": 1
            }
        ]
    }
]
Adding the first two fields is relatively easy:
// add lucene fields mapped to db fields
doc.Add(new Field("Id", sampleData.Id.Value.ToString(), Field.Store.YES, Field.Index.NOT_ANALYZED));
// null-conditional avoids a NullReferenceException when UrlName is null
doc.Add(new Field("UrlName", sampleData.UrlName?.Value ?? "null", Field.Store.YES, Field.Index.ANALYZED));
But how can I add PhoneData and Services to the index so they stay connected to the unique Id?
For indexing JSON objects I would go this way:
Store the whole value in a payload field, named for example $json. This field would be stored but not indexed.
For each (indexable) property, possibly nested, create an indexable field whose name is an XPath-like expression identifying the property, for example PhoneData.PhoneType.
If it's OK for all nested properties to be indexed, then it's simple: just iterate over all of them, generating this indexable field (see the sketch after this list).
But if you don't want to index all of them (a more realistic case), knowing which properties are indexable is another problem; in this case you could:
Accept from the client the path expressions of the index fields to be created when storing the document, or
Put JSON Schema into play to describe your data (assuming your JSON records share a common schema), and extend it with a custom property that tags which properties are indexable.
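As a rough sketch of the "iterate and name by path" idea, here is what the simple all-properties case could look like with Json.NET walking the parsed record (JToken.Path produces XPath-like names such as PhoneData[0].PhoneType; AddJsonToDoc is an illustrative name, and the Field options mirror the ones in your snippet rather than any required choice):

// Recursively add one indexable field per scalar JSON property,
// named by its path, e.g. "Services[1].ServiceId".
static void AddJsonToDoc(Document doc, JToken token)
{
    if (token is JValue leaf)
    {
        doc.Add(new Field(leaf.Path, leaf.ToString(),
                          Field.Store.NO, Field.Index.NOT_ANALYZED));
    }
    else
    {
        foreach (JToken child in token.Children())
            AddJsonToDoc(doc, child);
    }
}

The raw record itself would still go into the stored-but-not-indexed $json field, so a hit on any of the path fields can hand back the full document.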
I have created a library that does this (and much more), which may help you.
You can check it at https://github.com/brutusin/flea-db

JSON List of Tags/Values

I am working on a project that requires an object to have multiple values. For instance, a list of 'things', all with a name, but each thing also has another category, "Tags", which could include words like round, circle, ball, sport, etc., all describing what the 'thing' is. I am trying to define this in JSON but I keep running into errors with jsonlint. Take a look:
{
    "things": [
        {
            "name": "thing1",
            "tags": [{"globe","circle","round","world"}]
        },
        {
            "name": "thing2",
            "tags": [{"cloud","weather","lightning","sky"}]
        },
        {
            "name": "thing3",
            "tags": [{"round","bullseye","target"}]
        },
        {
            "name": "thing4",
            "tags": [{"round","key","lock"}]
        },
        {
            "name": "thing5",
            "tags": [{"weather","sun","sky","summer"}]
        }
    ]
}
Hopefully you see what I am trying to accomplish.
Thanks for your help!
Anthony
Your "tags" objects seem off to me. The code:
[{"globe","circle","round","world"}]
would try to create an array of an object whose properties don't have any values, but I'm not sure that's even valid syntax. Do you want an array of words for "tags?" If so, you need to drop those curly braces:
var myThings = {
    "things": [{
        "name": "thing1",
        "tags": ["globe", "circle", "round", "world"]
    }]
};
That would give you an object with property "things" that contains an array of one object with a "name" and a "tags" property, where "tags" is an array of words.
You could access "tags" like so:
var thing1tag2 = myThings.things[0].tags[1]; // = "circle"
If each thing has a name, then why not use that name as the key, as follows:
var things = {
    "thing1": ["globe", "circle", "round", "world"],
    "thing2": ["cloud", "weather", "lightning", "sky"],
    "thing3": ["round", "bullseye", "target"],
    "thing4": ["round", "key", "lock"],
    "thing5": ["weather", "sun", "sky", "summer"]
};
Now you can refer to things.thing1 or things.thing2 and the tags as things.thing1[0] and things.thing2[2] etc.
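If you later need to search across tags (as in the original question), a small helper over that structure might look like this (a sketch; thingsWithTag is just an illustrative name):

// Return the names of all things whose tag list contains the given tag
function thingsWithTag(things, tag) {
    return Object.keys(things).filter(function (name) {
        return things[name].indexOf(tag) !== -1;
    });
}

thingsWithTag(things, "round"); // ["thing1", "thing3", "thing4"]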