I have a Node.js application that connects to MSSQL using a connection string defined in a JSON file. Different environments connect to different databases.
In Minishift, what is the proper way to pass the JSON config file to the different containers at runtime?
Regards, nww
First, define the JSON file as a ConfigMap. You can do this in the web UI (under Resources / Config Maps) or on the command line, for example:
oc create configmap mssql1 --from-file=json=/path/to/your/json
Second, mount the ConfigMap into your Node.js deployment. You can do this in the web UI (find your Node.js deployment, switch to the Configuration tab, and click the Add Config Files link).
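The same mount can also be done from the command line with oc set volume. A minimal sketch, assuming the deployment config is called nodejs-app and the app expects its config under /etc/config (both names are placeholders for your own):
oc set volume dc/nodejs-app --add --type=configmap --configmap-name=mssql1 --mount-path=/etc/config
The file then appears inside the container as /etc/config/json (the key name given in --from-file above), and a new rollout is triggered automatically.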
Related
I have a lot of configuration objects that get populated by reading an appsettings file. Similar to this:
builder.Services.Configure<VaRideshare.Service.Common.Provider.Google.GoogleApiConfiguration>(builder.Configuration.GetSection("GoogleApiConfiguration"));
I see where environment variables get defined in Elastic Beanstalk, but is it possible to use a production appsettings file to define the configuration, or do I need to break my appsettings file apart into individual environment variables in order to access them?
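For illustration only (not part of the original post): ASP.NET Core's default configuration providers let an environment variable override a nested appsettings section by using a double underscore as the separator, so the same values can in principle be supplied either from an appsettings.Production.json deployed with the app or as individual variables, for example with the EB CLI (the key names below are hypothetical):
eb setenv GoogleApiConfiguration__ApiKey=your-key GoogleApiConfiguration__BaseUrl=https://example.com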
I am new to OpenShift so apologies in advance if this question is not very clear.
I have a project starting in OpenShift and will use the Elasticsearch-provided Docker image as a data store.
Elasticsearch is bound only to localhost by default when installed. If I were running the app on a server I would keep this configuration so as not to expose the Elasticsearch interface, since connectivity is only required by the application and there is no need to expose it outside of the project.
If I make a route for Elasticsearch without changing its default config, it is accessible to other pods in the project, but also outside of the project, like the main application. Is it possible to make a route that is internal to the project only, so that the Elasticsearch interface is not accessible outside of the project or by other means? Or is there a way to have a common localhost address between pods/applications?
I tried to group the services, but it is still not available.
Any support to put me in the right direction is really appreciated.
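As an illustrative sketch (not from the original question): in OpenShift, a plain service with no route created for it is only reachable from inside the cluster, so pods in the same project can talk to Elasticsearch over the service name while nothing is exposed externally. Assuming the deployment config is named elasticsearch:
oc expose dc/elasticsearch --port=9200 --name=elasticsearch
Other pods in the project can then reach it at http://elasticsearch:9200 without any route existing.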
I have been trying to create a stream with Spring Cloud Dataflow but have not had much luck (mostly due to the lack of documentation).
Issue 1: Accessing web GUI of dockerized Spring Cloud Dataflow
I have a dockerized Spring Cloud server running with Kafka on a basic Ubuntu container. For some reason I can't access the web GUI from Windows (at < docker-machine ip >:9393/dashboard). However, I have a separate Docker Ubuntu container running an Nginx reverse proxy, which shows up when I go to < docker-machine ip >/index.html and so on. I don't think it is an issue with ports; I have the Spring Cloud container set up with -p 9393:9393 and the port is otherwise unused.
Issue 2: Routing by JSON Header
My ultimate goal is to get a file loaded in from Nginx, routed based on its JSON header (there are two different JSON headers), and then ingested into Cassandra with ingest queries.
I can do all of this except the sorting by JSON header. Which app would you recommend I use?
Issue 1: Accessing web GUI of dockerized Spring Cloud Dataflow
We might need a little more detail around this. Assuming this is the local server, perhaps you could share the Docker scripts/image, so we could try it out.
Issue 2: Routing by JSON Header
The router-sink application would come in handy for this type of use case. This application routes the payload to named destinations based on certain conditions, so you'd have the opportunity to route each payload to Cassandra with the respective ingest query.
stream 1:
stream create test --definition "file | router --expression=header.contains('a')?':foo':':bar'"
stream 2:
stream create baz --definition ":foo > cassandra --ingest-query=\"query1\""
stream 3:
stream create wiz --definition ":bar > cassandra --ingest-query=\"query2\""
(where :foo and :bar are the named destinations)
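One detail worth adding (not from the original answer): stream create only defines the streams; each one still has to be deployed, either by appending --deploy to the create command or afterwards from the Data Flow shell, for example:
stream deploy --name test
stream deploy --name baz
stream deploy --name wiz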
I have created a JSON template to create an Amazon AWS LAMP stack with RDS (free tier) and successfully created the stack. But when I tried to move files to the /var/www/html folder, it seems the ec2-user has no permission. I know how to change permissions with the help of SSH, but my intention is to create a template that sets up the stack (hosting environment) without using any SSH client.
Also, I know how to add a file or copy a zipped source to /var/www/html with the CloudFormation JSON templating. What I need to do is just create the environment, then later upload the files using an FTP client and the database using Workbench or something similar. Please help me attain my goal, which I will share publicly for AWS beginners who are not familiar with setting things up with SSH.
The JSON template is a bit lengthy, so here is the link to the code: http://pasted.co/803836f5
Use the AWS::CloudFormation::Init metadata instead of UserData.
That way you can run commands on the server such as pulling down files from S3 and then running gzip to expand them.
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-init.html
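A minimal sketch of what that metadata could look like, assuming the EC2 instance resource in your template is called WebServer (resource and path names are illustrative):
"WebServer": {
  "Type": "AWS::EC2::Instance",
  "Metadata": {
    "AWS::CloudFormation::Init": {
      "config": {
        "commands": {
          "01_chown_docroot": {
            "command": "chown -R ec2-user:apache /var/www/html"
          }
        }
      }
    }
  }
}
Note that cfn-init still has to be invoked (typically a one-line /opt/aws/bin/cfn-init call in UserData) for this metadata to be applied.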
Tar files and distribution-dependent packages like .deb or .rpm include the file permissions and ownership for directories, so you could set up a tar or custom .rpm file that includes ec2-user as the owner.
Alternatively, whatever scripting element installs Apache could also run a set of commands to set the owner of /var/www/html to ec2-user.
Of course, you might run into trouble with the user/group that Apache runs under, and end up able to upload with FTP but unable to read with Apache. It would need some thought: possibly adding ec2-user to the apache group, FTPing as the apache user, or some other combination that gives the httpd server read access and the SSH user write access.
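A sketch of that last combination, assuming the Amazon Linux httpd package (which creates the apache user and group):
usermod -a -G apache ec2-user
chown -R ec2-user:apache /var/www/html
chmod -R g+rX /var/www/html
This keeps the document root writable by ec2-user over FTP/SFTP while leaving it readable by the web server.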
I have a JSON file with initial admin user information in it. My startup.js uses that user data to create the first admin user on startup.
When I deploy to meteor, I use meteor deploy --settings settings.json
How can I perform this similarly when deploying to bluemix so I can access my application with my user credentials?
From the Bluemix documentation:
You can specify meteor settings by setting the METEOR_SETTINGS environment variable:
cf set-env [APP_NAME] METEOR_SETTINGS '{"herp":"derp"}'
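To pass the contents of your existing settings.json instead of an inline literal, one option (the app name here is a placeholder) is:
cf set-env my-meteor-app METEOR_SETTINGS "$(cat settings.json)"
cf restage my-meteor-app
The restage is needed so the running app picks up the new environment variable.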