I have installed node-push-server. The configuration is loaded from a JSON file like this:
{
  "webPort": 8000,
  "mongodbUrl": "mongodb://username:password@localhost/database",
  "gcm": {
    "apiKey": "YOUR_API_KEY_HERE"
  },
  "apn": {
    "connection": {
      "gateway": "gateway.sandbox.push.apple.com",
      "cert": "/path/to/cert.pem",
      "key": "/path/to/key.pem"
    },
    "feedback": {
      "address": "feedback.sandbox.push.apple.com",
      "cert": "/path/to/cert.pem",
      "key": "/path/to/key.pem",
      "interval": 43200,
      "batchFeedback": true
    }
  }
}
How can I set the environment variables for my application in this JSON file?
I don't think it's possible. You should be able to change all these settings in code, though. For example, in Node you can read an environment variable with process.env.OPENSHIFT_VARIABLENAME.
Example for the MongoDB connection string, from the docs:
// provide a sensible default for local development
var db_name = 'database'; // assumed database name, not part of the original snippet
var mongodb_connection_string = 'mongodb://127.0.0.1:27017/' + db_name;
// take advantage of OpenShift env vars when available:
if (process.env.OPENSHIFT_MONGODB_DB_URL) {
    mongodb_connection_string = process.env.OPENSHIFT_MONGODB_DB_URL + db_name;
}
As an alternative, there is a quick and easy deployable gear called AeroGear Push that might serve your needs.
Config files can be awkward because including them in your source repo isn't always a good move.
OpenShift deployments are mostly git push-driven, so there are several options for helping you correctly resolve your configs on the server.
Configuring your service using ENV vars is the most common approach, but since this one requires a flat file, you'll need to find a way to update the file with the correct values.
If you know what keys and values are needed, you should be able to write a script that updates the example json, or merges two json objects to produce a flat config file including the strings node-pushserver will expect.
It looks like mongodbUrl, webPort, (and domain?) would need to be populated with OpenShift-provided values (when available). config-multipaas might be able to help with that.
I would probably implement the config bootstrapping / merging work as a build step, allowing you to prep the config file and start node-pushserver in its usual way, as sketched below.
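For example, a minimal merge script along these lines could run before startup. The file names and the database-name fallback here are assumptions, not node-pushserver requirements; OPENSHIFT_NODEJS_PORT and OPENSHIFT_MONGODB_DB_URL are the usual OpenShift-provided variables:
// merge-config.js -- a minimal sketch; file names and fallbacks are assumptions
var fs = require('fs');

// start from the example config checked into the repo
var config = JSON.parse(fs.readFileSync('config.example.json', 'utf8'));

// overlay OpenShift-provided values when they are available
if (process.env.OPENSHIFT_NODEJS_PORT) {
    config.webPort = process.env.OPENSHIFT_NODEJS_PORT;
}
if (process.env.OPENSHIFT_MONGODB_DB_URL) {
    config.mongodbUrl = process.env.OPENSHIFT_MONGODB_DB_URL + 'database';
}

// write the flat config file node-pushserver expects
fs.writeFileSync('config.json', JSON.stringify(config, null, 2));
Running something like this as a prestart step keeps credentials out of the repo while still producing the flat file.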
Related
I'm developing an Azure Function which has to consume JSON as input and then trigger a hybrid CI/CD pipeline split between on-prem and Azure DevOps. To split configuration from code I intend to use an Azure App Configuration store to retrieve configuration settings that the Function will use to trigger the correct pipeline depending on JSON input. I'm completely new to App Config but have tried to investigate how to properly use it. However, I have stumbled into a perplexing issue and can't find an explanation for it. I apologize if I have missed something obvious out there.
For the purpose of this question I have abstracted away any business-related terminology.
Imagine I have a JSON object stored in a file TestStructure.json that looks like this:
{
  "TestStructure": {
    "Repository1": {
      "RepositoryName": "Repository1",
      "RepositoryUrl": "https://url.repository1.com/"
    },
    "Repository2": {
      "RepositoryName": "Repository2",
      "RepositoryUrl": "https://url.repository2.com/"
    },
    "Repository3": {
      "RepositoryName": "Repository3",
      "RepositoryUrl": "https://url.repository3.com/"
    }
  }
}
I store this in my App Config using the Azure CLI with the following command:
az appconfig kv import -n <myAppConfigName> -s file --format json --path "C:\workspace\TestStructure.json" --content-type "application/json" --separator . --depth 2
The command yields the following key-value pairings:
---------------- Key Values Preview ----------------
Adding:
{"key": "TestStructure.Repository1", "value": "{\"RepositoryName\": \"Repository1\", \"RepositoryUrl\": \"https://url.repository1.com/\"}"}
{"key": "TestStructure.Repository2", "value": "{\"RepositoryName\": \"Repository2\", \"RepositoryUrl\": \"https://url.repository2.com/\"}"}
{"key": "TestStructure.Repository3", "value": "{\"RepositoryName\": \"Repository3\", \"RepositoryUrl\": \"https://url.repository3.com/\"}"}
These keys are what I expect to find in my App Config store.
Going to the App Config in the Azure Portal, I find that the JSON object has been stored correctly, i.e. the keys are TestStructure.Repository1, TestStructure.Repository2 and so forth, each with the corresponding value the Azure CLI command reported back.
Now, to the actual problem. When I try to fetch a key from my App Config I get some weird behavior.
I have put together a simple Console App in .NET 6 to test how to read from the App Config:
1 using Microsoft.Extensions.Configuration;
2
3 var config = new ConfigurationBuilder()
4     .AddAzureAppConfiguration("MyConnectionString")
5     .Build();
6
7 var repository = config["TestStructure.Repository1"]; // Returns null
It doesn't make sense to me why line 7 returns null, so I attached a debugger to inspect the ConfigurationRoot object further. What is going on here? Inspecting the config object reveals that the actual keys to index with are stored as TestStructure.Repository1:RepositoryName rather than TestStructure.Repository1, with the corresponding values under those flattened keys.
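If the values really are flattened with ':' as the debugger shows, then (as a sketch of what should work under that assumption, not necessarily the intended usage) indexing with the flattened key, or binding the section, should return the data:
// Sketch assuming keys were flattened to "TestStructure.Repository1:RepositoryName" etc.
var repositoryName = config["TestStructure.Repository1:RepositoryName"];
var repositoryUrl  = config["TestStructure.Repository1:RepositoryUrl"];

// Equivalently, treat the repository as a configuration section:
var section = config.GetSection("TestStructure.Repository1");
var nameFromSection = section["RepositoryName"];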
Thank you for taking the time to read my question. I hope I have clearly expressed what I am trying to achieve and what my problem is.
I'm doing all of this work in code. I want to do a simple task, editing and saving hostingstart.html as you would in the Kudu UI, but I don't know how to do it.
Currently, we have verified the connection through the Azure App Service deployment and DNS authentication with Terraform, and even confirmed that changes come through by checking hostingstart.html in the Kudu UI.
If possible, I wanted to do this with the Terraform code, so I wrote the configuration below and put the HTML file inside, but it didn't work.
(If Terraform can't do it, a YAML or shell-script approach is also fine.)
resource "azurerm_app_service" "service" {
  provider            = azurerm.generic
  name                = "${local.service_name}-service"
  app_service_plan_id = azurerm_app_service_plan.service_plan.id
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  https_only          = true

  source_control {
    repo_url = "https://git.a.git"
    branch   = "master"
  }
}
Or can we specify the default document as a path into a folder in the repo, like this?
tree: web
+ page
  - hostingstart.html
+ terraform
  - main.tf
  - app_service.tf
site_config {
  always_on         = true
  default_documents = ["../page/hostingstart.html"]
}
For the moment, it seems best to deploy and apply the content through Blob Storage (https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/storage_blob).
With Terraform you can't easily edit that file, because Terraform uses the management-plane APIs, which don't expose the site's content. Instead, you can deploy a minimal application with whatever you want to show. Here's an example of deploying code with an ARM template: https://github.com/JasonFreeberg/zip-deploy-arm-template.
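A minimal sketch of the Blob Storage route (the storage account, container, and source path here are assumptions, not resources from the question):
resource "azurerm_storage_blob" "hostingstart" {
  name                   = "hostingstart.html"
  storage_account_name   = azurerm_storage_account.static.name
  storage_container_name = azurerm_storage_container.web.name
  type                   = "Block"
  content_type           = "text/html"
  source                 = "../page/hostingstart.html"
}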
I am using Prometheus for our monitoring and I have a lot of configs (our prometheus.yml main config file is 8000+ lines long).
I would like to divide this into logical groupings so that it becomes much more readable.
I came to know that Prometheus doesn't support this natively and that we can use configuration management systems like Ansible instead.
Has anyone done this with their Prometheus config file? If so, how did you do it?
Assuming you have lots of nodes to scrape with different tags and such, Prometheus supports file-based service discovery, which you can use to organize targets according to your needs. I would go with the following.
In prometheus.yml:
- job_name: 'dummy'  # a job_name is mandatory
  file_sd_configs:
    - files:
        - /etc/prometheus/file_sd/*.json
and the JSON files can contain the logical groupings.
example.json
[
  {
    "targets": ["host:port"],
    "labels": {
      "job": "job_name",
      "environment": "test_env",
      "service": "test_service"
    }
  }
]
Here is a nice blog post about it: https://www.robustperception.io/using-json-file-service-discovery-with-prometheus
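Taking the grouping a step further, one job per logical group, each reading its own directory, is a common pattern (the job and directory names below are assumptions). Prometheus watches file_sd files and picks up target changes without a restart:
scrape_configs:
  - job_name: 'web'
    file_sd_configs:
      - files:
          - /etc/prometheus/file_sd/web/*.json
  - job_name: 'databases'
    file_sd_configs:
      - files:
          - /etc/prometheus/file_sd/databases/*.json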
I am using Artifactory to store my artifacts, in a generic repo (I named it 'generic-local') with a layout that I have customized based on the maven2 layout (I believe one of the default layouts).
The unchanged layout:
[orgPath]/[module]/[baseRev]/[module]-[baseRev](-[classifier]).[ext]
My versions are of the following format:
myartifact-1.0.0
myartifact-1.0.0-develop
myartifact-1.0.0-branch1234
To detect and flag release artifacts, I understand Artifactory relies on certain regexes:
the Folder Integration Revision RegExp and the File Integration Revision RegExp.
For both I have set the regexp to 'branch.*|develop.*'.
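For example, assuming the layout resolves myartifact-1.0.0-branch1234 into a baseRev of 1.0.0 and a file integration revision of branch1234, that revision matches branch.* and the version should be reported as an integration one.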
I would expect Artifactory to now flag as 'integration' any artifact matching the last two versions in my list above, but it isn't working so far.
http://myrepo.com/artifactory/api/search/versions?g=My.Applications&a=myartifact&repos=generic-local
returns
{
  "results": [
    {
      "version": "1.0.267-branch1234",
      "integration": false
    },
    {
      "version": "1.0.266-branch1234",
      "integration": false
    },
    {
      "version": "1.0.265-branch1234",
      "integration": false
    }
  ]
}
I tested the Test Artifact Path Resolution form in Artifactory; for each of the artifacts above, it returned:
Folder Integration Revision: branch1234
File Integration Revision: branch1234
This makes me think my regex is valid and the artifacts are resolved as integration revisions; however, the API returns false.
What am I doing wrong?
The above is working. I can finally see artifacts flagged with integration=true.
I can use this to, for example, run 'deploy latest stable version'.
The fix was to wait. It seems Artifactory does not apply the rule right away, even for new artifacts added after the rule change. Confusing, and I wish their documentation mentioned it.
I've just started experimenting with Azure functions and I'm trying to understand how to control the app settings depending on environment.
In .NET Core you can have appsettings.json, appsettings.Development.json, etc., and as you move between environments the config changes.
However, from looking at the Azure Functions documentation, all I can find is that you can set up config in the Azure Portal; I can't see anything about setting up config in the solution.
So what is the best way to manage configuration across build environments?
Thanks in advance :-)
The best way, in my opinion, is using a proper build and release system, like VSTS.
What I've done in one of my solutions is create an ARM template of my Function App and deploy it using a release pipeline with VSTS Release Management.
This way you can just add a value to the template.json, like the one below.
"appSettings": [
// other entries
{
"name": "MyValue",
"value": "[parameters('myValue')]"
}
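For completeness, the matching declaration in the template's parameters section might look like this (the type and description here are assumptions):
"parameters": {
  "myValue": {
    "type": "string",
    "metadata": {
      "description": "Value for the MyValue app setting"
    }
  }
}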
You will need another file, called parameters.json, which holds the values. This file looks like this (at the moment):
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "name": {},
    "storageName": {},
    "location": {},
    "subscriptionId": {}
  }
}
Back in VSTS you can just change/override the values of these parameters in the portal.
By using such a workflow you will get a professional CI/CD implementation where no one has to bother themselves with the actual secrets. They are only known to the system administrators.