How to merge/update kubernetes configMaps with added or removed data?

I have an existing configMap with JSON data. The data could be anything that is allowed in JSON format - arrays, objects, strings, integers, etc.
For example:
{
  "channels": ["10", "20", "30"],
  "settings": "off",
  "expiry": 100,
  "metadata": {
    "name": "test",
    "action": "update"
  }
}
Now I want to update the configMap with newer data.
The catch is that I don't want to update any of the values, but just to add or remove any fields that have been added or removed in the new data.
The reason for this is that the values are defaults and might have been already updated in the configMap by other pods/services.
So for example, if the new data contains the below JSON data (expiry field removed and some values changed):
{
  "channels": ["10", "20", "30", "100", "10000"],
  "settings": "on",
  "metadata": {
    "name": "test",
    "action": "delete"
  }
}
Then I expect the configMap to be updated to look like this:
{
  "channels": ["10", "20", "30"],
  "settings": "off",
  "metadata": {
    "name": "test",
    "action": "update"
  }
}
So the values stayed as they were, but the 'expiry' field was removed.
I am using Ansible to deploy the Kubernetes resources, but I am open to other tools/scripts that could help me achieve what I need.
Thanks in advance

This is not supported by Kubernetes. As you said, the data is JSON-encoded, so to Kubernetes it's just a string. ConfigMaps (and Secrets) only understand strings, not nested data of any kind; that's why you have to encode the data before storing it. You'll need to fetch the ConfigMap, decode the data, make your changes, and then encode it again and update/patch it through the API.
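A minimal sketch of that fetch-decode-merge-encode loop, assuming the JSON lives under a config.json key of a ConfigMap named app-config in the default namespace, using the Node.js client @kubernetes/client-node (all of those names are placeholders, and the exact call shapes differ slightly between client versions):

const k8s = require('@kubernetes/client-node');

// The new defaults shipped with this deployment. Only their *keys* decide
// what gets added or removed; existing values always win.
const newDefaults = {
  channels: ['10', '20', '30', '100', '10000'],
  settings: 'on',
  metadata: { name: 'test', action: 'delete' },
};

async function reconcileConfigKeys() {
  const kc = new k8s.KubeConfig();
  kc.loadFromDefault();
  const api = kc.makeApiClient(k8s.CoreV1Api);

  // 1. Fetch the ConfigMap and decode the JSON string it stores.
  const { body: cm } = await api.readNamespacedConfigMap('app-config', 'default');
  const current = JSON.parse(cm.data['config.json']);

  // 2. Keep the current value for every key present in both, take the
  //    default for keys that are new, drop keys absent from the new defaults.
  const merged = {};
  for (const key of Object.keys(newDefaults)) {
    merged[key] = key in current ? current[key] : newDefaults[key];
  }

  // 3. Encode the result and write the ConfigMap back.
  cm.data['config.json'] = JSON.stringify(merged, null, 2);
  await api.replaceNamespacedConfigMap('app-config', 'default', cm);
}

reconcileConfigKeys().catch(console.error);

Note that this merge is shallow: nested objects such as metadata are kept wholesale. If the add/remove rule should also apply inside nested objects, the merge step needs to recurse.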

Related

Elasticsearch dynamic mapping for object within attribute

Wondering if I can create a "dynamic mapping" within an elasticsearch index. The problem I am trying to solve is the following: I have a schema that has an attribute that contains an object that can differ greatly between records. I would like to mirror this data within elasticsearch if possible but believe that automatic mapping may get in the way.
Imagine a scenario where I have a schema like the following:
{
  name: string
  origin: string
  payload: object // can be of any type / schema
}
Is it possible to create a mapping that supports this? I do not need to query the records by this payload attribute, but it would be great if I can.
Note that I have checked the documentation but am confused about whether what Elastic calls dynamic mapping is what I am looking for.
It's certainly possible to specify which queryable fields you expect the payload to contain and what those fields' mappings should be.
Let's say each doc will include the fields payload.livemode and payload.created_at. If these are the only two fields you'll want to perform queries on, and you'd like to disable dynamic, index-time mappings autogenerated by Elasticsearch for the rest of the fields, you can use dynamic templates like so:
PUT my-payload-index
{
  "mappings": {
    "dynamic_templates": [
      {
        "variable_payload": {
          "path_match": "payload",
          "mapping": {
            "type": "object",
            "dynamic": false,
            "properties": {
              "created_at": {
                "type": "date",
                "format": "yyyy-MM-dd HH:mm:ss"
              },
              "livemode": {
                "type": "boolean"
              }
            }
          }
        }
      }
    ],
    "properties": {
      "name": {
        "type": "text",
        "fields": {
          "keyword": {
            "type": "keyword"
          }
        }
      },
      "origin": {
        "type": "text",
        "fields": {
          "keyword": {
            "type": "keyword"
          }
        }
      }
    }
  }
}
Then, as you ingest your docs:
POST my-payload-index/_doc
{
  "name": "abc",
  "origin": "web.dev",
  "payload": {
    "created_at": "2021-04-05 08:00:00",
    "livemode": false,
    "abc": "def"
  }
}

POST my-payload-index/_doc
{
  "name": "abc",
  "origin": "web.dev",
  "payload": {
    "created_at": "2021-04-05 08:00:00",
    "livemode": true,
    "modified_at": "2021-04-05 09:00:00"
  }
}
and verify with
GET my-payload-index/_mapping
no new mappings will be generated for the fields payload.abc or payload.modified_at.
Not only that — the new fields will also be ignored, as per the documentation:
These fields will not be indexed or searchable, but will still appear in the _source field of returned hits.
Side note: if fields are neither stored nor searchable, they're effectively the opposite of enabled.
The Big Picture
Working with variable contents of a single, top-level object is quite standard. Take for instance the Stripe event object: each event has an id, an api_version and a few other shared params. Then there's the data object that's analogous to your payload field.
Now, all is fine, until you need to aggregate on the contents of your payload. See, since the content is variable, so are the data paths / accessors. But wildcards in aggregation paths don't work in Elasticsearch. Scripts do but are onerous to maintain.
Back to Stripe. They partially solved it through what they call polymorphic, typed hashes, as discussed in their blog on API design. A pretty neat approach that's worth emulating.
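The gist of the pattern, sketched here with hypothetical field names: rather than one free-form payload object, keep a type discriminator plus one concretely mapped sub-object per type, so the set of mappings stays finite and aggregation paths stay static:

POST my-payload-index/_doc
{
  "name": "abc",
  "payload_type": "order",
  "order": { "total": 100, "currency": "EUR" }
}

POST my-payload-index/_doc
{
  "name": "def",
  "payload_type": "refund",
  "refund": { "order_id": "o-1", "reason": "damaged" }
}

Each variant's sub-object gets explicit mappings of its own, so you can aggregate on order.total or refund.reason directly.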
P.S. I discuss dynamic templates in more detail in the chapter "Mapping Automation" of my ES Handbook.

postman duplicate collection / export + re-import

relatively new to Postman, having a problem with the following simple scenario: I have a collection of Postman requests that all point to a local IP where I am developing my application. Let's suppose I finished my local development, deployed the application on some other server, and want to repeat the requests I previously created against THAT server. I know that probably one way to do this would be to use variables.
Instead of that, though, I exported the collection and manually edited the exported JSON file, replacing all the old local IPs with the new server IP. I also changed the collection name and ID to something arbitrary. While importing back into Postman works and I see the requests, they all still have the old IP hanging there, as if my replace didn't work, or as if Postman somehow caches the requests and thinks the new collection is the same as the old one. I also tried "Duplicating" a collection and exporting the duplicate / replacing / importing again, but the behavior seems to be the same.
Did I miss something, or should I approach what I want to do differently?
Thank you.
Duh, I am dumb enough to have been substituting the "raw" URL, while right below it were the old values for "host" and "port", which are what Postman actually constructs the URL from:
{
  "info": {
    "_postman_id": "1499274a-07bc-4ed2-87d4-b10d0cef8f8f",
    "name": "some-collection-DEVSERVER",
    "schema": "https://schema.getpostman.com/json/collection/v2.1.0/collection.json"
  },
  "item": [
    {
      "name": "login (success - bad locale)",
      "request": {
        "method": "POST",
        "header": [
          {
            "key": "Content-Type",
            "name": "Content-Type",
            "value": "application/json",
            "type": "text"
          }
        ],
        "body": {
          "mode": "raw",
          "raw": "{\n\t\"username\" : \"TEST\",\n\t\"password\" : \"123456\",\n\t\"locale\" : \"asd\"\n}"
        },
        "url": {
          "raw": "http://SERVER-IP:SERVER-PORT/new-path/login",
          "protocol": "http",
          "host": [
            "127",
            "0",
            "0",
            "1"
          ],
          "port": "8081",
          "path": [
            "old-path",
            "login"
          ]
        }
      },
      "response": []
    },
    ...
  ]
}
So, after the suggestion to use variables, I ended up creating two collection variables, "base-URL-LOCAL" and "base-URL-SERVER", that play the role of constants, plus a third variable "base-url" which e.g. could have the value of {{base-URL-LOCAL}} (both the initial and current values have to be updated). In my exported JSON collection, I substituted all "url" elements with something like the following:
"url": {
"raw": "{{base-url}}/login",
"host": [
"{{base-url}}"
],
"path": [
"login"
]
}
That way somebody who gets my collection won't need pre-defined environments set up, and will only have to edit the collection variables, setting e.g. base-url to {{base-URL-SERVER}}.

Elastic Search + JSON import (ELK Stack)

I'm currently trying to do a basic JSON file import into my ELK stack. I tried importing it directly via a POST request like this:
curl -XPOST http://localhost:9200/kwd_results/TS_Cart -d @/home/local/TS_Cart.json
ES says ok for the import, but when I try to view the logs in Kibana, they are not indexed by the nodes of the JSON file. I'm guessing I need something like a template mapping to view them properly.
My JSON file looks like this:
{
  "testResults": {
    "FitNesseVersion": "v20160618",
    "rootPath": "K1System.CountryDe.DriverFirefox.TestCases.MainFolder.TestVariants.SmokeTests_B2C.TS_Cart",
    "result": [
      {
        "counts": {
          "right": "16",
          "wrong": "2",
          "ignores": "3",
          "exceptions": "1"
        },
        "date": "2017-05-10T00:01:11+02:00",
        "runTimeInMillis": "117242",
        "relativePageName": "TestCase_1",
        "pageHistoryLink": "K1System.CountryDe.DriverFirefox.TestCases.MainFolder.TestVariants.SmokeTests_B2C.TS_Cart.B2CFreeCatalogueOrder?pageHistory&resultDate=20170510000111",
        "tags": "de, at"
      },
      {
        "counts": {
          "right": "16",
          "wrong": "0",
          "ignores": "0",
          "exceptions": "0"
        },
        "date": "2017-05-10T00:03:08+02:00",
        "runTimeInMillis": "85680",
        "relativePageName": "TestCase_2",
        "pageHistoryLink": "K1System.CountryDe.DriverFirefox.TestCases.MainFolder.TestVariants.SmokeTests_B2C.TS_Cart.B2CGiftCardOrderWithAdvancePayment?pageHistory&resultDate=20170510000308",
        "tags": "at, de"
      }
    ],
    "finalCounts": {
      "right": "4",
      "wrong": "1",
      "ignores": "0",
      "exceptions": "0"
    },
    "totalRunTimeInMillis": "482346"
  }
}
Basically I would need rootPath to be used as the index, with the following children: counts, relativePageName, date and tags. Notice that I have two nodes that are children of the result[] array.
Any help would be greatly appreciated!
Thank you.
Well, it's one JSON document, so Elasticsearch treats it as such.
You'll need to (programmatically) split the document up into the right documents, and then you can store them (potentially with one _bulk request), as sketched below.
For the index name:
- It must be lowercase, so you'll need to lowercase that value.
- Will you have many different root paths with just a few docs each? Then you shouldn't make each of them an index, since there is an overhead for each one of them (actually for the underlying shards).
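A sketch of that split, assuming the JavaScript client @elastic/elasticsearch (the file path and the picked fields mirror the question; the exact bulk call shape varies by client version):

const fs = require('fs');
const { Client } = require('@elastic/elasticsearch');

const client = new Client({ node: 'http://localhost:9200' });
const { testResults } = JSON.parse(fs.readFileSync('/home/local/TS_Cart.json', 'utf8'));

// Index names must be lowercase (you may also want to replace the dots).
const index = testResults.rootPath.toLowerCase();

// One action line plus one document line per entry of result[].
const body = testResults.result.flatMap((tc) => [
  { index: { _index: index } },
  {
    counts: tc.counts,
    date: tc.date,
    relativePageName: tc.relativePageName,
    tags: tc.tags,
  },
]);

client.bulk({ body })
  .then(() => console.log(`indexed ${testResults.result.length} docs into ${index}`))
  .catch(console.error);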

Defining query parameters for basic CRUD operations in Loopback

We are using Loopback successfully so far, but we want to add query params to our API documentation.
In our swagger.json file, we might have something that looks like =>
{
  "swagger": "2.0",
  "info": {
    "version": "1.0.0",
    "title": "poc-discovery"
  },
  "basePath": "/api",
  "paths": {
    "/Users/{id}/accessTokens/{fk}": {
      "get": {
        "tags": [
          "User"
        ],
        "summary": "Find a related item by id for accessTokens.",
        "operationId": "User.prototype.__findById__accessTokens",
        "parameters": [
          {
            "name": "fk",
            "in": "path",
            "description": "Foreign key for accessTokens",
            "required": true,
            "type": "string",
            "format": "JSON"
          },
          {
            "name": "id",
            "in": "path",
            "description": "User id",
            "required": true,
            "type": "string",
            "format": "JSON"
          },
          {
            "name": "searchText",
            "in": "query",
            "description": "The Product that needs to be fetched",
            "required": true,
            "type": "string"
          },
          {
            "name": "ctrCode",
            "in": "query",
            "description": "The Product locale needs to be fetched. Example=en-GB, fr-FR, etc.",
            "required": true,
            "type": "string"
          }
        ],
I am 99% certain the swagger.json information gets generated dynamically via information from the .json files in the /server/models directory.
I am hoping that I can add the query params that we accept for each model in those .json files. What I want to avoid is having to modify swagger.json directly.
What is the best approach to add our query params so that they show up in our docs? Very confused as to how to best approach this.
After a few hours of tinkering, I'm afraid there is no straightforward way to achieve this, as the swagger spec generated here is a representation of the remoting metadata for model methods, along with model data from the model.json files.
Thus, updating remoting metadata for built-in model methods would be challenging and it might not be fully supported by method implementations.
The right approach, IMO, is to:
- create a remoteMethod wrapper around the built-in method for which you want additional params to be injected, with the required HTTP mapping data (see the sketch after this list);
- and disable the REST endpoint for the built-in method using
MyModel.disableRemoteMethod(<methodName>, <isStatic>).
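A rough sketch of that pattern in the model script, with made-up method and parameter names (the accepts/returns/http shapes follow Loopback's remote method metadata):

// common/models/user.js
module.exports = function (User) {
  // Wrapper around the built-in relation method, declaring the extra
  // query params so they end up in the generated swagger.json.
  User.findAccessTokenWithSearch = function (id, fk, searchText, ctrCode, cb) {
    // ...apply searchText / ctrCode as your logic requires, then delegate...
    User.findById(id, function (err, user) {
      if (err) return cb(err);
      user.accessTokens.findById(fk, cb);
    });
  };

  User.remoteMethod('findAccessTokenWithSearch', {
    accepts: [
      { arg: 'id', type: 'string', required: true, http: { source: 'path' } },
      { arg: 'fk', type: 'string', required: true, http: { source: 'path' } },
      { arg: 'searchText', type: 'string', required: true, http: { source: 'query' },
        description: 'The Product that needs to be fetched' },
      { arg: 'ctrCode', type: 'string', required: true, http: { source: 'query' },
        description: 'The Product locale, e.g. en-GB, fr-FR' },
    ],
    returns: { arg: 'data', type: 'object', root: true },
    http: { path: '/:id/accessTokens/:fk', verb: 'get' },
  });

  // Hide the built-in endpoint the wrapper replaces (prototype method,
  // hence isStatic = false).
  User.disableRemoteMethod('__findById__accessTokens', false);
};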

Creating Ext.data.Model using JSON config

In the app we're developing, we create all the JSON on the server side using dynamically generated configs (JSON objects). We use that for stores (and other stuff, like GUIs), with a dynamically generated list of data fields.
With a JSON like this:
{
  "proxy": {
    "type": "rest",
    "url": "/feature/163",
    "timeout": 600000
  },
  "baseParams": {
    "node": "163"
  },
  "fields": [
    { "name": "id", "type": "int" },
    { "name": "iconCls", "type": "auto" },
    { "name": "text", "type": "string" },
    { "name": "name", "type": "auto" }
  ],
  "xtype": "jsonstore",
  "autoLoad": true,
  "autoDestroy": true
}, ...
Ext will gently create an "implicit model" which I'll be able to work with, load into forms, save, delete, etc.
What I want is to specify through a JSON config not the fields, but the model itself. Is this possible?
Something like:
{
  model: {
    name: 'MiClass',
    extends: 'Ext.data.Model',
    "proxy": {
      "type": "rest",
      "url": "/feature/163",
      "timeout": 600000
    },
    etc...
  },
  "autoLoad": true,
  "autoDestroy": true
}, ...
That way I would be able to create a whole JSON from the server without having to glue stuff using JS statements on the client side.
Best regards,
I don't see why not. The syntax to create a model class is similar to that of store and components:
Ext.define('MyApp.model.MyClass', {
  extend: 'Ext.data.Model',
  fields: [..]
});
So if you take this apart, you could call Ext.define(className, config); where className is a string and config is a JSON object, both generated on the server.
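For example, a minimal sketch, where serverJson stands in for the config string returned by the server:

// Decode the server-generated config and register the model class from it.
var cfg = Ext.decode(serverJson); // e.g. { name: 'MiClass', proxy: {...}, fields: [...] }

Ext.define(cfg.name, {
  extend: 'Ext.data.Model',
  fields: cfg.fields,
  proxy: cfg.proxy
});

// The generated class is then usable like any hand-written model.
var store = Ext.create('Ext.data.Store', {
  model: cfg.name,
  autoLoad: true
});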
There's no way to achieve what I want.
The only way to do it is by defining the fields of the Ext.data.Store and having it generate the implicit model through the fields configuration.