Working with repeating grids through the form builder.
I have a custom control whose string value is represented as JSON:
{
"data": {
"type": "File",
"itemID": "12345",
"name": "Annual Summary",
"parentFolderID": "fileID",
"owner": "Owner",
"lastModifiedDate": "2016-10-17 22:48:05Z"
}
}
In a control outside of the repeating grid, I need to check whether name = "Annual Summary".
Previously, I had a dropdown control, and using the Calculated Value expression $dropdownControl = "Annual Summary" it returned true if any of the repeated rows contained the value. My understanding is that the = operator validates against all rows.
Now, with the JSON output of the control, I am attempting to use
contains($jsonStringValue, 'Annual Summary')
However, this only works with a single entry and is null if there are multiple rows.
Two questions:
How would I validate whether "Annual Summary" (or any other text) is present within any of the repeated rows?
Is there any way to navigate the JSON, or to parse it to XML and navigate that?
Constraint: this needs to be done either within the Calculated Value or Visibility fields in the form builder, or by manipulating the source that is generated by the form builder.
You probably want to parse the JSON string first. See also this other Stack Overflow question.
Until Orbeon Forms 2016.3 is released, you would write:
(
for $v in $jsonStringValue
return converter:jsonStringToXml($v)
)//name = 'Annual Summary'
With the above, you also need to scope the namespace:
xmlns:converter="org.orbeon.oxf.json.Converter"
Once Orbeon Forms 2016.3 is released you can switch to:
$jsonStringValue/xxf:json-to-xml()//name = 'Annual Summary'
I'll be getting JSON dynamically from a service as follows:
{"items":[[{"key": "SerialID","value": "P1.M1.T1"},{ "key": "Description", "value": "Dummy Desc 1"},{ "key": "Label", "value": "A123"}],[{"key": "SerialID","value": "P1.M1.T2"},{ "key": "Description", "value": "Dummy Desc 2"},{ "key": "Label", "value": "B123"}]]}
Here, two sample rows in the items array are shown; I would get many such rows. Also, the columns in each row may vary (here I am receiving 3 columns per row, but I might receive any number; however, it will be the same for all rows in a given input).
I want to pass this JSON as a data source to a PrimeNG table and render it in tabular format, so both columns and rows need to be generated dynamically. I tried parsing the JSON but somehow was not able to form appropriate inputs for the PrimeNG table.
Any help is highly appreciated.
I finally iterated over my JSON, formed column and row arrays in the format PrimeNG expects, and passed them to the component. It worked!
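For anyone after the general idea, a minimal sketch of that transformation could look like the following (buildTable is just an illustrative helper name, not part of PrimeNG):

// Turn the {"items": [[{key, value}, ...], ...]} payload into column and row arrays.
function buildTable(json) {
  // Columns come from the keys of the first row; per the input contract, all rows share the same columns.
  var cols = json.items[0].map(function (cell) {
    return { field: cell.key, header: cell.key };
  });
  // Each row becomes a flat object keyed by the cells' "key" properties.
  var rows = json.items.map(function (row) {
    var rowObj = {};
    row.forEach(function (cell) { rowObj[cell.key] = cell.value; });
    return rowObj;
  });
  return { cols: cols, rows: rows };
}

The resulting cols array can then be bound to the table's columns input and rows to its value input.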
I'm using Google Apps Script to migrate data through BigQuery, and I've run into an issue: the SQL I'm using to perform a WRITE_TRUNCATE load causes the destination table to be recreated with column modes of NULLABLE rather than their previous mode of REQUIRED.
Attempting to change the modes to REQUIRED after the data is loaded, using a metadata patch, causes an error even though the columns don't contain any null values.
I considered working around the issue by dropping the table and recreating it again with the same REQUIRED modes, then loading the data using WRITE_APPEND instead of WRITE_TRUNCATE. But this isn't possible because a user wants to have the same source and destination table in their SQL.
Does anyone know if it's possible to define a BigQuery.Jobs.insert request that includes the output schema information/metadata?
If it's not possible, the only alternative I can see is to use my original workaround of a WRITE_APPEND but add a temporary table into the process, to allow for the destination table appearing in the source SQL. But if that can be avoided, it would be nice.
Additional Information:
I did experiment with different ways of setting the schema information, but when they didn't return an error message the schema seemed to be ignored.
That is, this is the JSON I'm passing into BigQuery.Jobs.insert:
var jsnConfig = {
  "configuration": {
    "query": {
      "destinationTable": {
        "projectId": "my-project",
        "datasetId": "sandbox_dataset",
        "tableId": "hello_world"
      },
      "writeDisposition": "WRITE_TRUNCATE",
      "useLegacySql": false,
      "query": "SELECT COL_A, COL_B, '1' AS COL_C, COL_TIMESTAMP, COL_REQUIRED FROM `my-project.sandbox_dataset.hello_world_2` ",
      "allowLargeResults": true,
      "schema": {
        "fields": [
          {
            "description": "Desc of Column A",
            "type": "STRING",
            "mode": "NULLABLE",
            "name": "COL_A"
          },
          {
            "description": "Desc of Column B",
            "type": "STRING",
            "mode": "REQUIRED",
            "name": "COL_B"
          },
          {
            "description": "Desc of Column C",
            "type": "STRING",
            "mode": "REPEATED",
            "name": "COL_C"
          },
          {
            "description": "Desc of Column Timestamp",
            "type": "INTEGER",
            "mode": "NULLABLE",
            "name": "COL_TIMESTAMP"
          },
          {
            "description": "Desc of Column Required",
            "type": "STRING",
            "mode": "REQUIRED",
            "name": "COL_REQUIRED"
          }
        ]
      }
    }
  }
};
var job = BigQuery.Jobs.insert(jsnConfig, "my-project");
The result is that the new or existing hello_world table is truncated and loaded with the data specified in the query (so part of the JSON package is being read), but the column descriptions and modes aren't added as defined in the schema section; they're just blank and NULLABLE in the table.
More
When I tested the REST request above using Google's API page for BigQuery.Jobs.insert, it highlighted the "schema" property in the request as invalid. It appears the schema can be defined if you're loading the data from a file, i.e. BigQuery.Jobs.Load, but it doesn't seem to be supported if you're putting the data in using an SQL source.
See the documentation here: https://cloud.google.com/bigquery/docs/schemas#specify-schema-manual-python
You can pass a schema object with your load job, meaning you can set fields to mode=REQUIRED.
This is the command you should use:
bq --location=[LOCATION] load --source_format=[FORMAT] [PROJECT_ID]:[DATASET].[TABLE] [PATH_TO_DATA_FILE] [PATH_TO_SCHEMA_FILE]
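[PATH_TO_SCHEMA_FILE] points at a JSON file listing the fields, and that is where the REQUIRED modes go. A minimal example (field names here are placeholders):

[
  { "name": "COL_A", "type": "STRING", "mode": "REQUIRED", "description": "Desc of Column A" },
  { "name": "COL_B", "type": "STRING", "mode": "NULLABLE", "description": "Desc of Column B" }
]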
As @Roy answered, this is done via a load job only. Can you output the logs of this command?
Short
Can I pass additional data through JSON objects to DataTables that won't be rendered by DataTables?
Longer description
I'm having issues with Internet Explorer (sigh) rendering DataTables in a frustratingly slow manner. Having scoured the web for solutions, the best bet I've come up with is that I should transfer the creation of the table from HTML to JSON.
Our HTML rows currently look like this:
<tr their_ID="{$data[$i]['their_ID']}" class="$no_select $exists">
<td>$ii.</td>
<td surname>{$data[$i]['surname']}</td>
<td forename>{$data[$i]['forename']}</td>
<td title>{$data[$i]['title']}</td>
<td gender>{$data[$i]['gender']}</td>
<td email>{$data[$i]['email']}</td>
<td import class="centerText"><input type="checkbox" $checked ourID="$ourID" /></td>
</tr>
The JSON seems great if all you are passing is raw data. But for the rest of the functionality to work we need all the additional data for each row and table cell to be passed.
If I have the following:
{
"DT_RowId": "1234",
"ii": "$ii",
"surname": "Surname",
"forename": "Forename",
"title": "Title",
"gender": "M",
"email": "abc#xyz.com",
"import": "$ourID"
},
Using the createdRow callback I can set most of the row and cell data using JS. The ID could be taken and renamed to the custom attribute their_ID. The keynames could be used to add the custom attributes to the cells. I could convert the import cell into a checkbox using JS etc. Determining whether the row could be selected or not ($no_select) could also be done with JS.
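For instance, the createdRow idea could look roughly like this (the table selector and the checkbox column index are assumptions about our markup):

$('#userTable').DataTable({
  // ajax/columns configuration omitted
  createdRow: function (row, data, dataIndex) {
    // Move DT_RowId onto the custom their_ID attribute of the row.
    $(row).attr('their_ID', data.DT_RowId);
    // Convert the import cell (7th column here) into a checkbox.
    $('td', row).eq(6).html('<input type="checkbox" ourID="' + data.import + '" />');
  }
});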
The problem we're left with is:
How do we pass the server-side-determined data of whether the user exists in our database or not?
Is it possible to pass additional data through the JSON which won't be rendered by DataTables but will be available for us to use? Specifically $exists and $checked (which really could be a single variable as far as the JS is concerned).
E.g. theoretically:
{
"DT_RowId": "1234",
"ii": "$ii",
"surname": "Surname",
"forename": "Forename",
"title": "Title",
"gender": "M",
"email": "abc#xyz.com",
"import": $ourID,
"hiddenClientSideData": {
"exists": 1|0,
"checked": 1|0
}
},
Thanks for any help, and if you need any clarifications please ask.
You should be able to use columnDefs and set the visible property to false: create a column for that data, then target that column and make it invisible.
https://datatables.net/reference/option/columnDefs
You would create two additional th and two additional td cells in your table definition, then just send the data as you are doing with the other properties. To begin with, render it in the UI to make sure it is all being sent correctly. Then use columnDefs to target and hide those last two columns. You will still be able to use the DataTables API to get the data and act on it if needed.
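A rough sketch of that approach (the selector and the column indexes are placeholders for your own setup):

$('#userTable').DataTable({
  columns: [
    { data: 'ii' },
    { data: 'surname' },
    { data: 'forename' },
    { data: 'title' },
    { data: 'gender' },
    { data: 'email' },
    { data: 'import' },
    { data: 'exists' },   // extra server-side flag
    { data: 'checked' }   // extra server-side flag
  ],
  columnDefs: [
    { targets: [7, 8], visible: false }   // hide the two extra columns from the rendered table
  ]
});

The hidden values remain part of the row data, so something like table.row(tr).data().exists will still return them when you need to act on them.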
I have a setup in Angular which displays a JSON string called 'items'. Each item contains an array of field IDs. By matching the field IDs, it pulls information for the fields from a separate 'fields' JSON string.
{
"data": [
{
"id": "1",
"title": "Item 1",
"fields": [
1,
1
]
},
{
"id": "2",
"title": "Item 2",
"fields": [
1,
3
]
},
{
"id": 3,
"title": "fsdfs"
}
]
}
You can copy or delete either the items or the fields, which will modify the 'items' JSON.
Everything works except when I copy one item (and its fields), and then choose to delete a field for that specific item.
What happens is that it deletes the field for both the copied item AND the original.
Plunker -
http://plnkr.co/edit/hN8tQiBMBhQ1jwmPiZp3?p=preview
I've read that using 'track by' helps to index each item as unique and prevent duplicate keys, but this seems to be having no effect.
Any help appreciated, thanks
Edit -
Credit to Eric McCormick for this one: using angular.copy with array.push solved the issue.
Instead of -
$scope.copyItem = function (index) {
items.data.push({
id: $scope.items.data.length + 1,
title: items.data[index].title,
fields: items.data[index].fields
});
}
This worked -
$scope.copyItem = function (index) {
items.data.push(angular.copy(items.data[index]));
}
I recommend using angular.copy, which makes a "deep copy" of the source object; the result is a new object, distinct from the source.
It may seem slightly counter-intuitive, but a direct reference (as you're observing) still interacts with the original object. If you inspect the element's scope after it's instantiated in the DOM, you can see there's a $id property assigned to the object in memory. Basically, by using angular.copy(source, destination), you ensure all the properties/values are copied and you end up with a unique object.
Example:
//inside the controller, a function to instantiate new copy of selected object
this.selectItem = function(item){
var copyOfItem = angular.copy(item);
//new item with same properties and values but unique object!
}
Egghead.io has a video on angular.copy.
I need a little help regarding Lucene index files; I thought maybe some of you can help me out.
I have JSON like this:
[
  {
    "Id": 4476,
    "UrlName": null,
    "PhoneData": [
      {
        "PhoneType": "O",
        "PhoneNumber": "0065898"
      },
      {
        "PhoneType": "F",
        "PhoneNumber": "0065898"
      }
    ],
    "Contact": [],
    "Services": [
      {
        "ServiceId": 10,
        "ServiceGroup": 2
      },
      {
        "ServiceId": 20,
        "ServiceGroup": 1
      }
    ]
  }
]
Adding the first two fields is relatively easy:
// add lucene fields mapped to db fields
doc.Add(new Field("Id", sampleData.Id.Value.ToString(), Field.Store.YES, Field.Index.NOT_ANALYZED));
doc.Add(new Field("UrlName", sampleData.UrlName.Value ?? "null" , Field.Store.YES, Field.Index.ANALYZED));
But how can I add PhoneData and Services to the index so they can be connected to the unique Id?
For indexing JSON objects I would go this way:
Store the whole value under a payload field, named for example $json. This field would be stored but not indexed.
For each (possibly nested) indexable property, create an indexable field whose name is an XPath-like expression identifying the property, for example PhoneData.PhoneType.
If it is OK for all nested properties to be indexed, then it's simple: just iterate over all of them, generating one such indexable field per property (see the sketch after this list).
But if you don't want to index all of them (a more realistic case), knowing which properties are indexable is another problem; in this case you could:
Accept from the client the path expressions of the index fields to be created when storing the document, or
Put JSON Schema into play to describe your data (assuming your JSON records have a common schema), and extend it with a custom property that would allow you to tag which properties are indexable.
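As a language-agnostic illustration of the flattening step (sketched here in JavaScript; the actual Field creation stays in Lucene.NET, as in the question's snippet), nested properties can be turned into path/value pairs roughly like this:

function flatten(value, path, out) {
  if (Array.isArray(value)) {
    // Repeated values (e.g. the PhoneData entries) share the same path.
    value.forEach(function (item) { flatten(item, path, out); });
  } else if (value !== null && typeof value === 'object') {
    Object.keys(value).forEach(function (key) {
      flatten(value[key], path ? path + '.' + key : key, out);
    });
  } else {
    out.push({ field: path, value: String(value) });
  }
  return out;
}

// flatten(record, '', []) yields pairs such as
// { field: 'PhoneData.PhoneType', value: 'O' } and { field: 'Services.ServiceId', value: '10' },
// each of which becomes an indexed (not stored) field alongside the stored-but-not-indexed $json payload field.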
I have created a library that does this (and much more) which may help you.
You can check it at https://github.com/brutusin/flea-db