I have the following JSON:
{
"namespace": "monitoring",
"name": "alok",
"spec": {
"replicas": 1,
"template": {
"metadata": "aaa",
"spec": {
"containers": [
{
"image": "practodev/test:test",
"env": [
{
"name":"GF_SERVER_HTTP_PORT",
"value":"3000"
},
{
"name":"GF_SERVER_HTTPS_PORT",
"value":"443"
}
]
}
]
}
}
}
}
How do I add the following deployment_env.json to it using Jsonnet?
{
"env": [
{
"name":"GF_AUTH_DISABLE_LOGIN_FORM",
"value":"false"
},
{
"name":"GF_AUTH_BASIC_ENABLED",
"value":"false"
}
]
}
I need to add it under spec.template.spec.containers[0].env.
I wrote the Jsonnet below to do that, but it appends a new element; I need to modify the existing 0th container element in the JSON instead. Please suggest how to do it.
local grafana_envs = (import 'custom_grafana/deployment_env.json');
local grafanaDeployment = (import 'nested.json') + {
spec+: {
template+: {
spec+: {
containers+: [{
envs: grafana_envs.env,
}]
}
}
},
};
grafanaDeployment
See below for an implementation that adds env entries to an existing container, selected by its index in the containers[] array.
Note that Jsonnet is much better suited to working with objects (i.e. dictionaries/maps) than with arrays, so modifying an entry at a given index requires somewhat contrived handling via std.mapWithIndex().
local grafana_envs = (import 'deployment_env.json');
// Add extra_env to the container at index idx in the passed containers array
local override_env(containers, idx, extra_env) = (
local f(i, x) = (
if i == idx then x {env+: extra_env} else x
);
std.mapWithIndex(f, containers)
);
local grafanaDeployment = (import 'nested.json') + {
spec+: {
template+: {
spec+: {
containers: override_env(super.containers, 0, grafana_envs.env)
}
}
},
};
grafanaDeployment
An alternative implementation that selects the container by its image value rather than by its array index (which makes more sense here, as the env vars must be understood by the image's implementation):
local grafana_envs = (import 'deployment_env.json');
local TARGET_CONTAINER_IMAGE = 'practodev/test:test';
local grafanaDeployment = (import 'nested.json') + {
spec+: {
template+: {
spec+: {
containers: [
// TARGET_CONTAINER_IMAGE identifies which container to modify
if x.image == TARGET_CONTAINER_IMAGE
then x { env+: grafana_envs.env }
else x
for x in super.containers
],
},
},
},
};
grafanaDeployment
An alternative to std.mapWithIndex() is to iterate explicitly over the indices, based on the length of the list.
local grafana_envs = (import 'deployment_env.json');
local grafanaDeployment = (import 'nested.json') + {
spec+: {
template+: {
spec+: {
containers:
[super.containers[0] { env+: grafana_envs.env }]
+
[
super.containers[i]
for i in std.range(1, std.length(super.containers) - 1)
]
}
}
},
};
grafanaDeployment
If one needed to modify a specific index other than 0, say 5, they could do so by putting an if i == 5 condition inside the comprehension.
With jq, how can I transform the following:
{
"root": {
"branch1": {
"leaf": 1
},
"branch2": {
"leaf": 2
},
"branch3": {
"leaf": 3
}
},
"another-root": {
"branch": 123
},
"foo": "bar"
}
to this:
{
"root": {
"branch1": {
"leaf": "updated"
},
"branch2": {
"leaf": "updated"
},
"branch3": {
"leaf": "updated"
}
},
"another-root": {
"branch": 123
},
"foo": "bar"
}
🤦 Apparently [] can be used on objects too. I had thought it was only for lists.
The following was all I needed.
.root[].leaf="updated"
First you need to parse the JSON and then modify the resulting object as required using a for...in statement (example below):
const flatJSON = '{"root":{"branch1":{"leaf":1},"branch2":{"leaf":2},"branch3":{"leaf":3}},"another-root":{"branch":123},"foo":"bar"}';
const parsedJSON = JSON.parse(flatJSON);
const root = parsedJSON.root;
for (let property in root) {
root[property].leaf = "updated"; // or: root[property]["leaf"] = "updated";
}
If you want to use jQuery, replace the for...in statement with the jQuery.each() method, which iterates over both objects and arrays.
Don't forget to convert it back to JSON with the JSON.stringify() method (if required).
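For example, a minimal sketch of the jQuery variant, reusing the variable names from the snippet above:
// Same update as the for...in loop, but via jQuery.each()
jQuery.each(root, function (key, value) {
  value.leaf = "updated";
});
// Convert the object back to a JSON string if required
const updatedJSON = JSON.stringify(parsedJSON);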
Hope that this helps.
All the best.
I have imported the following JSON file:
[
{
"case_id": "1234",
"thread": [
{
"t_id": "1111",
"text": "test"
},
{
"t_id": "2222",
"text": "test"
}
]
},
{
"case_id": "5678",
"thread": [
{
"t_id": "9999",
"text": "test"
},
{
"t_id": "8888",
"text": "test"
},
{
"t_id": "777",
"text": "test"
}
]
}
]
using the following:
import cases from '../cases.json'
The whole JSON dataset is available in the cases variable and can be used in the template with v-if and v-for.
How can I create a separate dataset (thecase) that contains only threads for a given case_id? In the template I would only like to use v-for to display all threads for a given case_id.
Below is my export default section:
export default {
name: "details",
props: {
case_id: {
required: true,
type: String
}
},
data () {
return {
cases,
thecase: ### THIS IS THE PART I CANNOT FIGURE OUT ###
}
}
};
You can remove thecase from the data option and use a computed property for it instead. Inside the computed property, use the array .find() method to find the case whose case_id is the same as the case_id passed in the prop:
data () {
  return {
    cases,
  }
},
computed: {
thecase: function() {
return this.cases.find(c => c.case_id === (this.case_id || ''))
}
}
and then you can use v-for on thecase.thread just like you would for a data property:
<li v-for="item in thecase.thread" :key="item.t_id">
{{ item.text }}
</li>
You can further modify it and use v-if and v-else to show a message like "No cases were found with the given case id" when no match is found, as sketched below.
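For example, a small sketch of that template (the wrapper markup and message text are just placeholders):
<ul v-if="thecase">
  <li v-for="item in thecase.thread" :key="item.t_id">
    {{ item.text }}
  </li>
</ul>
<p v-else>No cases were found with the given case id.</p>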
I have a rather large JSON schema. The problematic part is a smaller schema within it, called "translations", which looks like this:
"translations": {
"bsonType": "object",
"patternProperties": {
"id": {
"bsonType": "string"
},
"^[a-z]{2}$": {
"anyOf": [
{
"bsonType": "object"
},
{
"bsonType": "array"
}
]
}
}
}
The object defined by the regex contains many more properties (a field called "text", for example) and the array is an array of these objects, but I only kept the parts that are important for understanding the structure.
My issue is that when I validate my files against this schema, it fails every single one of them, but when I remove the "bsonType": "object" from the first object in the anyOf array, it works properly.
All of my files are such that at least one of the values in the translations object whose key matches the regular expression is of type "object", so I don't understand why it fails them.
I use MongoDB 3.6.0.
Here is an example of a file that would fail:
"translations":{
"id":"12345",
"br":{
"text":"string1"
},
"en":{
"text":"string2"
},
"ja":[
{
"text":"string3"
},
{
"text":"string4"
}
],
"no":[
{
"text":"string6"
},
{
"text":"string7"
}
]
}
In case it wasn't clear: the problem is that files like this one fail when the schema is defined with "bsonType": "object" in the first object of the anyOf array, and pass when I take that out. The "bsonType": "array" in the second object of the anyOf array works fine.
I think your problem is that id collides with the regex. Try this:
let MongoClient = require('mongodb').MongoClient;
let collectionName = 'translations';
let scheme = {
$jsonSchema:{
"bsonType": "object",
"patternProperties": {
"^id$":{
"bsonType":"string"
},
"^(?!id)([a-z]{2})$": {
"anyOf": [
{
"bsonType": "object"
},
{
"bsonType": "array"
}
]
}
},
}
};
let goodJson ={
"id": "12345",
"br":{
"text":"string1"
},
"en":{
"text":"string2"
},
"ja":[
{
"text":"string3"
},
{
"text":"string4"
}
],
"no":[
{
"text":"string6"
},
{
"text":"string7"
}
]
};
let badJson ={
"id": "12345",
"br":{
"text":"string1"
},
"en":{
"text":"string2"
},
"ja":[
{
"text":"string3"
},
{
"text":"string4"
}
],
"no":[
{
"text":"string6"
},
{
"text":"string7"
}
],
"nt": "not_object_or_array"
};
async function run() {
let db = await MongoClient.connect('mongodb://localhost:27017/exampleDb');
let dbo = db.db('mydb');
let collections = await dbo.collections();
let collectionsNames = collections.map(c => c.s.name);
if (collectionsNames.includes(collectionName)) {
console.log('dropping collection');
await dbo.collection(collectionName).drop();
}
console.log('creating collection');
await dbo.createCollection(collectionName, {validator: scheme});
let translationCollection = dbo.collection(collectionName);
console.log('this will validate successfully');
await translationCollection.insertOne(goodJson);
console.log('this will raise validation error because: "nt": "not_object_or_array"');
try {
await translationCollection.insertOne(badJson);
} catch(error) {
console.log(error);
}
await db.close();
}
run();
{
"shop": {
"homebackground": "http://padmenu.s3.amazonaws.com/15/11/2014/05/08/2ec2ff61-d6a0-11e3-8857-10ddb1e6e201.jpg",
"name": {
"tr": "My Shop"
},
"menus": [{
"name": {
"en": "Menu"
},
"children": [{
"name": {
"en_US": "Category"
},
"images": [
"http://www.progressivedental-ellenlimdds.com/wp-content/uploads/2014/06/red-wine.jpg"
],
"children": [{
"name": {
"en_US": "Item"
},
"images": [
"http://res.cloudinary.com/finedine/image/upload/c_fill,g_center,h_600/v1435916818/WIne-Bottle_uz03a0.jpg",
"http://media.riepenau.com/wines/17973_b.jpg",
"http://lorempixel.com/400/400/food/3",
"http://lorempixel.com/400/400/food/4",
"http://lorempixel.com/400/400/food/5",
"http://lorempixel.com/400/400/food/6",
"http://lorempixel.com/400/400/food/7"
]
}]
}]
}]
}
}
I want to select all the "images" arrays from shop's "children" objects.
How can I do this using the Lodash library?
The output should be an array consisting of image URLs:
["url1","url2","url3"]
The easiest approach is to pluck through the children and their descendants recursively. The important points are in the getImages() function: it flattens all children arrays by one level, plucks each images array, compacts the items to remove undefined values (caused by children with no images), and then flattens the result so it is ready for concatenation. The recursion stops when there are no images for the current children, returning an empty array. If images are found, we recursively concatenate all potential descendant images. To get the descendants, we use the same chaining sequence we used for the images array, but with children as the plucking key.
DEMO
function getImages(children) {
var images = _(children).flatten().pluck('images').compact().flatten().value();
if(_.isEmpty(images)) {
return [];
}
var descendants = _(children).flatten().pluck('children').compact().flatten().value();
return images.concat(getImages(descendants));
}
function getShopImages(data) {
var children = _.pluck(data.shop.menus, 'children');
return getImages(children);
}
console.log(getShopImages(data));
Pseudo Code
You can solve this with a little bit of recursion:
Grab the children list.
Extract all the images from the children list with pluck.
Repeat step 1 with all descendants.
Concat all results and flatten.
Core Code
function deepExtract(collection, childKey, property) {
var exists = _.negate(_.isEmpty);
var children = _.chain(collection).pluck(childKey).filter(exists).flatten();
if (_.isEmpty(children.value())) {
return [];
}
var images = children.pluck(property).value();
var descendantImages = deepExtract(children.value(), childKey, property);
return _.flatten(images.concat(descendantImages));
};
var tree = _.chain(data).get('shop.menus').value();
var images = deepExtract(tree, 'children', 'images');
Demo
var data = {
"shop": {
"homebackground": "http://padmenu.s3.amazonaws.com/15/11/2014/05/08/2ec2ff61-d6a0-11e3-8857-10ddb1e6e201.jpg",
"name": {
"tr": "My Shop"
},
"menus": [{
"name": {
"en": "Menu"
},
"children": [{
"name": {
"en_US": "Category"
},
"images": [
"http://www.progressivedental-ellenlimdds.com/wp-content/uploads/2014/06/red-wine.jpg"
],
"children": [{
"name": {
"en_US": "Item"
},
"images": [
"http://res.cloudinary.com/finedine/image/upload/c_fill,g_center,h_600/v1435916818/WIne-Bottle_uz03a0.jpg",
"http://media.riepenau.com/wines/17973_b.jpg",
"http://lorempixel.com/400/400/food/3",
"http://lorempixel.com/400/400/food/4",
"http://lorempixel.com/400/400/food/5",
"http://lorempixel.com/400/400/food/6",
"http://lorempixel.com/400/400/food/7"
]
}]
}]
}]
}
};
function deepExtract(collection, childKey, property) {
var exists = _.negate(_.isEmpty);
var children = _.chain(collection).pluck(childKey).filter(exists).flatten();
if (_.isEmpty(children.value())) {
return [];
}
var images = children.pluck(property).value();
var descendantImages = deepExtract(children.value(), childKey, property);
return _.flatten(images.concat(descendantImages));
};
var tree = _.chain(data).get('shop.menus').value();
log(deepExtract(tree, 'children', 'images'));
// Helper method to output to screen
function log(value) {
document.getElementById("output").innerHTML += JSON.stringify(value, null, 2) + "\n"
}
<script src="https://cdnjs.cloudflare.com/ajax/libs/lodash.js/3.10.0/lodash.min.js"></script>
<pre id="output"></pre>
I found an alternative solution to my question here:
var children = _(shop.menus[0].children)
.thru(function(coll) {
return _.union(coll, _.pluck(coll, 'children'));
})
.flatten();
var images = _.chain(children).pluck('images').flattenDeep().compact().uniq().value();
The output "images" is an image array.
I have JSON data like the below.
{
"divisions": [{
"name": "division1",
"id": "div1",
"subdivisions": [{
"name": "Sub1Div1",
"id": "div1sub1",
"schemes": [{
"name": "Scheme1",
"id": "scheme1"
}, {
"name": "Scheme2",
"id": "scheme2"
}]
}, {
"name": "Sub2Div1",
"id": "div1sub2",
"schemes": [{
"name": "Scheme3",
"id": "scheme3"
}]
}
]
}]
}
I want to read this into a TreeStore, but I cannot change the subfields (divisions, subdivisions, schemes) to be the same (e.g., children).
How can I achieve this?
When nested JSON is loaded into a TreeStore, the child nodes are essentially loaded through recursive calls between the TreeStore.fillNode() method and NodeInterface.appendChild().
The actual retrieval of each node's children field is done within TreeStore.onNodeAdded() on this line:
dataRoot = reader.getRoot(data);
The reader's getRoot() is dynamically created in its buildExtractors() method, which is what you'll need to override in order to deal with varying children fields within nested JSON. Here is how it's done:
Ext.define('MyVariJsonReader', {
extend: 'Ext.data.reader.Json',
alias : 'reader.varijson',
buildExtractors : function()
{
var me = this;
me.callParent(arguments);
me.getRoot = function ( aObj ) {
// Special cases
switch( aObj.name )
{
case 'Bill': return aObj[ 'children' ];
case 'Norman': return aObj[ 'sons' ];
}
// Default root is `people`
return aObj[ 'people' ];
};
}
});
This will be able to interpret JSON such as:
{
"people":[
{
"name":"Bill",
"expanded":true,
"children":[
{
"name":"Kate",
"leaf":true
},
{
"name":"John",
"leaf":true
}
]
},
{
"name":"Norman",
"expanded":true,
"sons":[
{
"name":"Mike",
"leaf":true
},
{
"name":"Harry",
"leaf":true
}
]
}
]
}
See this JsFiddle for fully working code.
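For reference, here is a minimal sketch (not taken from the JsFiddle; the URL and field list are assumptions) of how the custom reader could be wired into a TreeStore through its varijson alias:
// Hypothetical wiring of the custom reader into a TreeStore;
// 'people.json' is an assumed URL serving the nested JSON shown above.
var store = Ext.create('Ext.data.TreeStore', {
    fields: ['name'],                 // implicit model with just the name field
    proxy: {
        type: 'ajax',
        url: 'people.json',
        reader: { type: 'varijson' }  // alias registered by MyVariJsonReader
    },
    root: { expanded: true }          // expanding the root triggers the initial load
});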