How can I update my Node config file using Jenkins environment variables?

I integrated Jenkins with my node project.
While building the project on Jenkins, I want to update my node config variables using Jenkins environment variables.
Here is my template config file.
{
  "aws": {
    "s3": {
      "base_url": "AWS_S3_BASE_URL",
      "bucket": "AWS_S3_BUCKET",
      "region": "AWS_S3_REGION",
      "accessKeyId": "AWS_S3_ACCESS_KEY",
      "secretAccessKey": "AWS_S3_SECRET_ACCESS_KEY"
    }
  },
  "db": {
    "sequelize": {
      "dialect": "postgres",
      "logging": true,
      "database": "DB_DATABASE",
      "port": 5432,
      "minConnections": 1,
      "maxConnections": 4,
      "maxIdleTime": 1000,
      "write": {
        "endpoint": "DB_HOST",
        "username": "DB_USER",
        "password": "DB_PASSWORD"
      },
      "read": [
        {
          "endpoint": "DB_HOST",
          "username": "DB_USER",
          "password": "DB_PASSWORD"
        }
      ]
    }
  }
}
How can I update these variables using Jenkins environment variables?
Thanks in advance!
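One common approach (a sketch, not the only way): keep the file above as a template, and run a small Node script in the Jenkins build step that replaces each placeholder string with the environment variable of the same name. The script name, the placeholder list, and the file names below are assumptions based on the template above.

```javascript
// substitute-config.js — a hypothetical helper, assuming the template above is
// saved as config.template.json and Jenkins exports variables with the same
// names as the placeholders (AWS_S3_BUCKET, DB_HOST, ...).

// Every placeholder string used in the template.
const PLACEHOLDERS = [
  'AWS_S3_BASE_URL', 'AWS_S3_BUCKET', 'AWS_S3_REGION',
  'AWS_S3_ACCESS_KEY', 'AWS_S3_SECRET_ACCESS_KEY',
  'DB_DATABASE', 'DB_HOST', 'DB_USER', 'DB_PASSWORD',
];

// Replace each quoted placeholder with the matching environment variable.
// JSON.stringify escapes quotes/backslashes in the value, so the output stays
// valid JSON. Placeholders whose variable is unset are left untouched.
function renderTemplate(template, env) {
  let out = template;
  for (const name of PLACEHOLDERS) {
    if (env[name] !== undefined) {
      out = out.split('"' + name + '"').join(JSON.stringify(env[name]));
    }
  }
  return out;
}

// In a Jenkins "Execute shell" build step you would run `node substitute-config.js`
// with these two lines at the bottom of the script:
//   const fs = require('fs');
//   fs.writeFileSync('config.json',
//     renderTemplate(fs.readFileSync('config.template.json', 'utf8'), process.env));

// Quick demonstration on a fragment of the template:
console.log(renderTemplate('{"bucket": "AWS_S3_BUCKET"}', { AWS_S3_BUCKET: 'my-bucket' }));
// → {"bucket": "my-bucket"}
```

Because the replacement is plain string substitution on quoted tokens, it works regardless of how deeply the placeholders are nested.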

Related

Adding nested OPC-UA Variable results in "String cannot be coerced to a nodeId"

Error: String cannot be coerced to a nodeId
Hi,
I was busy setting up a connection between the Orion Broker and a PLC with an OPC-UA server, using the opcua iotagent.
I managed to set up all the parts and I am able to receive (test) data, but I am unable to follow the tutorial with regard to adding an entity to the Orion Broker using a JSON file:
curl http://localhost:4001/iot/devices -H "fiware-service: plcservice" -H "fiware-servicepath: /demo" -H "Content-Type: application/json" -d @add_device.json
The expected result would be an entity added to the Orion Broker with the supplied data, but this only results in an error message:
{"name":"Error","message":"String cannot be coerced to a nodeId : ns*4:s*MAIN.mainVar"}
Suspected error:
Is it possible that the iotagent does not work nicely with nested variables?
Steps taken:
- double-checked availability of OPC data: the OPC data changes every second and can be seen in the Broker log
- reduced complexity of the setup to only include the Broker and the IoT agent
Additional information:
add_device.json file:
{
  "devices": [
    {
      "device_id": "plc1",
      "entity_name": "PLC1",
      "entity_type": "plc",
      "attributes": [
        {
          "object_id": "ns*4:s*MAIN.mainVar",
          "name": "main",
          "type": "Number"
        }
      ],
      "lazy": [],
      "commands": []
    }
  ]
}
Config of the IoT agent (from localhost:4081/config):
{
  "config": {
    "logLevel": "DEBUG",
    "contextBroker": {
      "host": "orion",
      "port": 1026
    },
    "server": {
      "port": 4001,
      "baseRoot": "/"
    },
    "deviceRegistry": {
      "type": "memory"
    },
    "mongodb": {
      "host": "iotmongo",
      "port": "27017",
      "db": "iotagent",
      "retries": 5,
      "retryTime": 5
    },
    "types": {
      "plc": {
        "service": "plcservice",
        "subservice": "/demo",
        "active": [
          {
            "name": "main",
            "type": "Int16"
          },
          {
            "name": "test1",
            "type": "Int16"
          },
          {
            "name": "test2",
            "type": "Int16"
          }
        ],
        "lazy": [],
        "commands": []
      }
    },
    "browseServerOptions": null,
    "service": "plc",
    "subservice": "/demo",
    "providerUrl": "http://iotage:4001",
    "pollingExpiration": "200000",
    "pollingDaemonFrequency": "20000",
    "deviceRegistrationDuration": "P1M",
    "defaultType": null,
    "contexts": [
      {
        "id": "plc_1",
        "type": "plc",
        "service": "plcservice",
        "subservice": "/demo",
        "polling": false,
        "mappings": [
          {
            "ocb_id": "test1",
            "opcua_id": "ns=4;s=test.TestVar.test1",
            "object_id": null,
            "inputArguments": []
          },
          {
            "ocb_id": "test2",
            "opcua_id": "ns=4;s=test.TestVar.test2",
            "object_id": null,
            "inputArguments": []
          },
          {
            "ocb_id": "main",
            "opcua_id": "ns=4;s=MAIN.mainVar",
            "object_id": null,
            "inputArguments": []
          }
        ]
      }
    ]
  }
}
I'm one of the maintainers of the iotagent-opcua repo. We have identified and fixed the bug you were hitting; please update your agent to the latest version (1.4.0).
In case you haven't heard of it: starting from 1.3.8 we introduced a new configuration property called "relaxTemplateValidation", which lets you use previously forbidden characters (e.g. = and ;). I suggest you have a look at it in the configuration examples provided.

Create a JSON file from other JSON files

I have two JSON files:
User.json:
{
  "users": [
    {
      "username": "User1",
      "app": "Git",
      "role": "Manager"
    },
    {
      "username": "user2",
      "app": "Git",
      "role": "Developer"
    }
  ]
}
App.json:
{
  "apps": [
    {
      "appName": "Git",
      "repo": "http://repo1..."
    },
    {
      "appName": "Jenkins",
      "repo": "htpp://repo2..."
    }
  ]
}
I'm working on an Angular CLI application for the first time and I want to generate a new JSON file called Infos.json containing the content of the two files (User.json + App.json) without redundancy.
Expected file:
Infos.json:
{
  "infos": [
    {
      "username": "User1",
      "appName": "GIT",
      "role": "Manager",
      "repo": "http://repo1..."
    },
    {
      "username": "User2",
      "appName": "Jenkins",
      "role": "Developer",
      "repo": "htpp://repo2..."
    }
  ]
}
How can I do it in my Angular CLI app?
You can do this by creating a task in a task runner such as Grunt or gulp.
Grunt and gulp have different built-in packages for this:
Grunt: npm i grunt-merge-json
Gulp: npm i gulp-merge-json
If you are using webpack, there is a plugin called merge-webpack-plugin.
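If a task runner feels like overkill, the join can also be done in a few lines of plain Node. This is a sketch, not part of Angular CLI or any package; the function name is hypothetical, and the assumption that users are matched to apps via the `app`/`appName` fields comes from the files above.

```javascript
// mergeInfos: joins users to apps on user.app === app.appName.
// A hypothetical helper, not a library function.
function mergeInfos(usersDoc, appsDoc) {
  // Index apps by name so each user lookup is O(1).
  const appsByName = new Map(appsDoc.apps.map(a => [a.appName, a]));
  return {
    infos: usersDoc.users.map(u => {
      const app = appsByName.get(u.app) || {};
      return { username: u.username, appName: u.app, role: u.role, repo: app.repo };
    })
  };
}

// Demo on a fragment of the data above; in a build script you would
// JSON.parse the two files read with fs.readFileSync and write the
// result to Infos.json with fs.writeFileSync.
const merged = mergeInfos(
  { users: [{ username: 'User1', app: 'Git', role: 'Manager' }] },
  { apps: [{ appName: 'Git', repo: 'http://repo1...' }] }
);
console.log(JSON.stringify(merged, null, 2));
```

Note that this produces one entry per user, each carrying its app's repo, so shared apps are not duplicated as separate records.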

How to configure marathon-lb when the container runs in HOST network mode?

My Marathon app definition (JSON) is below:
{
  "id": "/storage/mysql",
  "cmd": null,
  "cpus": 1,
  "mem": 512,
  "disk": 0,
  "instances": 1,
  "constraints": [
    [
      "hostname",
      "UNIQUE"
    ]
  ],
  "container": {
    "type": "DOCKER",
    "volumes": [],
    "docker": {
      "image": "reg.xxxxx.cn/library/mysql:5.7",
      "network": "HOST",
      "portMappings": [],
      "privileged": true,
      "parameters": [],
      "forcePullImage": false
    }
  },
  "env": {
    "MYSQL_ROOT_PASSWORD": "123456"
  },
  "labels": {
    "HAPROXY_GROUP": "internal"
  },
  "portDefinitions": [
    {
      "port": 3306,
      "protocol": "tcp",
      "labels": {}
    }
  ]
}
I find that the generated HAProxy config (on 192.168.30.142) is:
frontend storage_mysql_3306
  bind *:3306
  mode tcp
  use_backend storage_mysql_3306

backend storage_mysql_3306
  balance roundrobin
  mode tcp
  server 192_168_30_144_31695 192.168.30.144:31695
The MySQL container runs at 192.168.30.144, so what I want is:
server 192_168_30_144_3306 192.168.30.144:3306
What should I do to solve this?
Thanks!
I think I found the answer:
http://mesosphere.github.io/marathon/docs/host-port.html
"Mesos agents do NOT offer all ports."
Am I right?
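That is likely the cause: by default Marathon assigns a port from the agent's offered port range (hence 31695). One way to pin the host port when using HOST networking is Marathon's `requirePorts` flag, which makes Marathon accept only offers that include the exact ports listed in `portDefinitions`. A sketch of the relevant fragment of the app definition above; verify the flag against your Marathon version:

```json
{
  "portDefinitions": [
    { "port": 3306, "protocol": "tcp", "labels": {} }
  ],
  "requirePorts": true
}
```

With this, the task only runs on agents where port 3306 is offered, and marathon-lb should then register the backend as 192.168.30.144:3306.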

How to convert a docker run command into a JSON file?

I was wondering if anyone knows how to create a JSON file that would be the same as running:
docker run -p 80:80 -p 443:443 starblade/pydio-v4
I'm trying something very ambitious: I want to start my Docker container in a Kubernetes-Mesos cluster, but I can't seem to get the ports correct in the JSON file; alas, I am still very new to this.
Thanks,
TT
Here are my json files:
{
  "id": "frontend-controller",
  "kind": "ReplicationController",
  "apiVersion": "v1beta1",
  "desiredState": {
    "replicas": 3,
    "replicaSelector": {"name": "frontend"},
    "podTemplate": {
      "desiredState": {
        "manifest": {
          "version": "v1beta1",
          "id": "frontend-controller",
          "containers": [{
            "name": "pydio-v4",
            "image": "starblade/pydio-v4",
            "ports": [{"containerPort": 10001, "protocol": "TCP"}]
          }]
        }
      },
      "labels": {"name": "frontend"}
    }
  },
  "labels": {"name": "frontend"}
}
{
  "id": "frontend",
  "kind": "Service",
  "apiVersion": "v1beta1",
  "port": 80,
  "port": 443,
  "targetPort": 10001,
  "selector": {
    "name": "frontend"
  },
  "publicIPs": [
    "${servicehost}"
  ]
}
Docker container env info, pulled from the docker inspect command:
"Env": [
  "FRONTEND_SERVICE_HOST=10.10.10.14",
  "FRONTEND_SERVICE_PORT=443",
  "FRONTEND_PORT=tcp://10.10.10.14:443",
  "FRONTEND_PORT_443_TCP=tcp://10.10.10.14:443",
  "FRONTEND_PORT_443_TCP_PROTO=tcp",
  "FRONTEND_PORT_443_TCP_PORT=443",
  "FRONTEND_PORT_443_TCP_ADDR=10.10.10.14",
  "KUBERNETES_SERVICE_HOST=10.10.10.2",
  "KUBERNETES_SERVICE_PORT=443",
  "KUBERNETES_PORT=tcp://10.10.10.2:443",
  "KUBERNETES_PORT_443_TCP=tcp://10.10.10.2:443",
  "KUBERNETES_PORT_443_TCP_PROTO=tcp",
  "KUBERNETES_PORT_443_TCP_PORT=443",
  "KUBERNETES_PORT_443_TCP_ADDR=10.10.10.2",
  "KUBERNETES_RO_SERVICE_HOST=10.10.10.1",
  "KUBERNETES_RO_SERVICE_PORT=80",
  "KUBERNETES_RO_PORT=tcp://10.10.10.1:80",
  "KUBERNETES_RO_PORT_80_TCP=tcp://10.10.10.1:80",
  "KUBERNETES_RO_PORT_80_TCP_PROTO=tcp",
  "KUBERNETES_RO_PORT_80_TCP_PORT=80",
  "KUBERNETES_RO_PORT_80_TCP_ADDR=10.10.10.1",
  "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
  "PYDIO_VERSION=6.0.5"
],
"ExposedPorts": {
  "443/tcp": {},
  "80/tcp": {}
},
The pod and service both start and run ok.
However, I am unable to access the running Pydio site on any of the master, minion, or frontend IPs.
Note:
I am running a modified version of this Docker container:
https://registry.hub.docker.com/u/kdelfour/pydio-docker/
My container has been tested and it runs as expected.
You should see the login screen once it is running.
Please let me know if I can provide any other information.
Thanks again.
So, I finally got this to work using the following .json files:
frontend-service.json
{
  "id": "frontend",
  "kind": "Service",
  "apiVersion": "v1beta1",
  "port": 443,
  "selector": {
    "name": "frontend"
  },
  "publicIPs": [
    "${servicehost}"
  ]
}
frontend-controller.json
{
  "id": "frontend-controller",
  "kind": "ReplicationController",
  "apiVersion": "v1beta1",
  "desiredState": {
    "replicas": 1,
    "replicaSelector": {"name": "frontend"},
    "podTemplate": {
      "desiredState": {
        "manifest": {
          "version": "v1beta1",
          "id": "frontend-controller",
          "containers": [{
            "name": "pydio-v4",
            "image": "starblade/pydio-v4",
            "ports": [{"containerPort": 443, "hostPort": 31000}]
          }]
        }
      },
      "labels": {"name": "frontend"}
    }
  },
  "labels": {"name": "frontend"}
}
I now have Pydio with SSL running in a Mesos-Kubernetes environment on GCE.
I'm going to run some tests using more hostPorts to see if I can get more than one replica running on one host. At this point I can resize up to 3.
Hope this helps someone.
Thanks,
TT
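For reference, the mapping from `docker run -p` flags to the v1beta1 manifest used above is mechanical: each HOST:CONTAINER pair becomes one entry in the container's ports list. A sketch for the original command's two flags (the working setup above ultimately kept only the 443 mapping, on hostPort 31000):

```json
"ports": [
  { "containerPort": 80, "hostPort": 80 },
  { "containerPort": 443, "hostPort": 443 }
]
```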

OrientDB Transformer 'jdbc' not found

I've recently installed OrientDB and am trying to create an import using the ETL module.
Running on OS X, I installed OrientDB using Homebrew.
I've created the following ETL script:
{
  "config": {
    "log": "debug"
  },
  "begin": [],
  "extractor": {
    "row": {}
  },
  "transformers": [
    {
      "jdbc": {
        "driver": "com.mysql.jdbc.Driver",
        "url": "jdbc:mysql://localhost/dev_database",
        "userName": "root",
        "userPassword": "",
        "query": "select * from users limit 20"
      }
    },
    { "vertex": { "class": "V" } }
  ],
  "loader": {
    "orientdb": {
      "dbURL": "plocal:../databases/ETLDemo",
      "dbUser": "admin",
      "dbPassword": "admin",
      "dbAutoCreate": true,
      "tx": false,
      "batchCommit": 1000,
      "dbType": "graph"
    }
  }
}
I followed the instructions here: http://www.orientechnologies.com/docs/2.0/orientdb-etl.wiki/Import-from-DBMS.html
I installed the JDBC driver for MySQL from here: http://dev.mysql.com/downloads/connector/j/
and set the classpath as described.
Running the command:
./oetl.sh ../import_mysql.json
Gives the following output:
OrientDB etl v.2.0.2 (build #BUILD#) www.orientechnologies.com
Exception in thread "main" com.orientechnologies.orient.core.exception.OConfigurationException: Error on creating ETL processor
    at com.orientechnologies.orient.etl.OETLProcessor.parse(OETLProcessor.java:278)
    at com.orientechnologies.orient.etl.OETLProcessor.parse(OETLProcessor.java:188)
    at com.orientechnologies.orient.etl.OETLProcessor.main(OETLProcessor.java:163)
Caused by: java.lang.IllegalArgumentException: Transformer 'jdbc' not found
    at com.orientechnologies.orient.etl.OETLComponentFactory.getTransformer(OETLComponentFactory.java:141)
    at com.orientechnologies.orient.etl.OETLProcessor.parse(OETLProcessor.java:260)
    ... 2 more
I did manage to create a working import using a CSV file, so I'm pretty sure the database is set up correctly.
Thoughts?
In ETL, jdbc is an extractor, not a transformer (which is exactly what the "Transformer 'jdbc' not found" error is saying), so move the jdbc block into the extractor section:
{
  "config": {
    "log": "debug"
  },
  "extractor": {
    "jdbc": {
      "driver": "com.mysql.jdbc.Driver",
      "url": "jdbc:mysql://localhost/dev_database",
      "userName": "root",
      "userPassword": "",
      "query": "select * from users limit 20"
    }
  },
  "transformers": [
    { "vertex": { "class": "V" } }
  ],
  "loader": {
    "orientdb": {
      "dbURL": "plocal:../databases/ETLDemo",
      "dbUser": "admin",
      "dbPassword": "admin",
      "dbAutoCreate": true,
      "tx": false,
      "batchCommit": 1000,
      "dbType": "graph"
    }
  }
}
Can you see if this solves the problem?