Currently we maintain all our properties in the database, and applications reference them through their Spring profile name. We are now transitioning to Cloud Foundry. With this in focus, how can we build a Spring Cloud Config Server that uses the existing database to read application properties? So far I only see references to a Git repository in the documentation:
http://cloud.spring.io/spring-cloud-config/spring-cloud-config.html#_spring_cloud_config_server
Not currently; we are limited to Git and SVN. There is a pull request for MongoDB as an example, though.
This is no longer true; support is now available for a JDBC backend: http://cloud.spring.io/spring-cloud-config/single/spring-cloud-config.html#_jdbc_backend
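For reference, a minimal sketch of a JDBC-backed config server, assuming spring-cloud-config-server and your JDBC driver are on the classpath (the table layout and query mentioned in the comments are the documented defaults and can be overridden via spring.cloud.config.server.jdbc.sql):

    // A config server that reads properties from a relational database.
    // Activate the JDBC backend with spring.profiles.active=jdbc; by default
    // it runs "SELECT KEY, VALUE from PROPERTIES where APPLICATION=? and
    // PROFILE=? and LABEL=?" against the configured datasource, so the
    // existing table can be mapped onto that shape or the query adjusted.
    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.cloud.config.server.EnableConfigServer;

    @SpringBootApplication
    @EnableConfigServer
    public class ConfigServerApplication {
        public static void main(String[] args) {
            SpringApplication.run(ConfigServerApplication.class, args);
        }
    }

Clients then resolve their properties by application name and profile, which lines up with the existing per-Spring-profile layout described in the question.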
First, let me give some background:
We have our own VPS, so we do not wish to use Azure to host our web applications.
We have already successfully created a CI/CD pipeline to our VPS by installing an agent on it for a .NET Core project.
We use Azure DevOps (formerly known as VSTS) to host our code in GIT and handle our backlogs and CI/CD pipelines.
We have several .NET Framework projects where we use XDT transforms to transform our web.config files on delivery/deployment so they have the correct connection strings and other configuration properties.
This makes it possible to pull the master branch from our remote repo and have it working in seconds on a previously unused (for this application) development environment without the need for any configuration.
Now, to get to my question:
The master branch of the .NET Core project for which we already have the CI/CD pipeline in place holds its configuration in JSON files for the staging environment it is continuously delivered to. When a developer pulls the master branch, he/she first needs to configure these to suit the local debug environment.
This is an undesirable situation for us.
How can we set up a .NET Core project so that it works in a local debug environment without any manual configuration, while still working in the CI/CD pipeline?
What have we already tried?
We have found that we can have multiple versions of the appsettings.json file for the different environments, like appsettings.debug.json, and that in the static CreateWebHost method of the Program class we can load one or the other. But how to automate this is something we haven't been able to figure out or find documentation about.
Okay, so here are some options you can take advantage of today (I'm sure there are more options/approaches).
Option A
Configure the master branch to have an appsettings.development.json with the connection string to the DEV database (or lowest environment).
Remove any connection strings from appsettings.json.
Merge master accordingly.
Create environment variables on each of the backend servers for the connection string; for example, a system environment variable named ConnectionStrings:cartDB containing the connection string to the database for the environment that backend server serves (see the sketch after this option).
The result of this will be that when running with DEVELOPMENT as the environment, the application will connect to a database everyone can access.
However, since all OTHER web servers have environment variables with connection strings, those take the highest level of precedence and will therefore be the values returned when calling something such as
string connectionString = Configuration.GetConnectionString("cartDB");
This will satisfy the requirements you mentioned above.
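As a concrete sketch, the appsettings.development.json checked into master could look like the following (the cartDB key matches the call above; the connection string value is a hypothetical DEV string):

    {
      "ConnectionStrings": {
        "cartDB": "Server=dev-sql;Database=cart;Trusted_Connection=True;"
      }
    }

One note on the server-side variables: on Linux hosts, where ':' is not allowed in environment variable names, the same key is written with a double underscore (ConnectionStrings__cartDB), which .NET Core maps to ConnectionStrings:cartDB.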
Option B
Configure the master branch to have an appsettings.development.json with the connection string to the DEV database (or lowest environment).
Remove any connection strings from appsettings.json.
Place appsettings.staging.json and appsettings.prod.json in source control, and set the environment name variable on the web servers. :/ Not the best of options; advised against.
(It's worth mentioning since I have seen this happen; we all have.)
Option C
Add appsettings.staging.json and appsettings.prod.json to source control and use a token in place of each connection string value. Then leverage some type of tokenization task to replace those tokens with the appropriate values at release time.
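For illustration, a tokenized appsettings.staging.json might look like this; the #{...}# delimiters and the CartDbConnectionString token name are assumptions here, since the exact syntax depends on which tokenization task you choose:

    {
      "ConnectionStrings": {
        "cartDB": "#{CartDbConnectionString}#"
      }
    }

The release pipeline then substitutes the real staging/production connection string for the token during deployment, so no secret ever lives in source control.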
I have a Spring Boot project that currently consists of three microservices (all of them are maven children of the mentioned project), namely:
eureka-server : as the name says, it's simply a Eureka project that works as a server for registering other microservices
user-server : a project that holds a 'monolithic stack' (model, DAO, service and controller). Here is where the problem is. More on this later.
web-server : a project that contains the AngularJS application and a controller that is accessible from AngularJS and that communicates with the user-server module.
Eureka forces me to include an hsqldb dependency in the parent pom in order to launch the three mentioned applications.
The problem is that I was using MySQL in user-server, and hsqldb has somehow overridden the MySQL data source.
In other words, the database engine of user-server is now hsqldb, and I want to keep working with MySQL; if I remove the dependency, the application will obviously not launch.
Is there any way to solve this and work with, maybe, two databases in user-server?
Thank you everyone!
I finally figured out how to get it working. I'll just post it here in case someone faces a similar problem.
It seems that application.properties wasn't being read when launching the application: by telling Spring Boot which .yml configuration file should be read for Eureka, the application.properties file was being overridden.
I wasn't able to set the datasource to MySQL in the microservice's .yml file either, so the solution was to hardcode the datasource properties when launching the microservice, as follows:
System.setProperty("spring.datasource.platform","mysql");
System.setProperty("spring.datasource.url","jdbc:mysql...");
I was looking at a README file that raised some questions about database persistence on OpenShift.
Note: Every time you push, everything in your remote repo dir gets recreated
please store long term items (like an sqlite database) in the OpenShift
data directory, which will persist between pushes of your repo.
The OpenShift data directory is accessible relative to the remote repo
directory (../data) or via an environment variable OPENSHIFT_DATA_DIR.
https://github.com/ryanj/nodejs-custom-version-openshift/blob/master/README#L24
However, I could find no confirmation of this on the OpenShift website. Is this README out of date? I'd rather not test this, so it would be much appreciated if anyone had any firsthand knowledge they'd be willing to share.
Yep, that readme file is up to date regarding SQLite. All gears have SQLite installed on them. Data should be stored in the persistent storage directory on your gear. This does not apply to MySQL/MongoDB/PostgreSQL as those databases are add-on cartridges pre-configured to use persistent storage, whereas SQLite is simply installed and available for use.
See the first notice found in the OpenShift Origin documentation here: https://docs.openshift.org/origin-m4/oo_cartridge_guide.html
Specifically:
Cartridges and Persistent Storage: Every time you push, everything in
your remote repo directory is recreated. Store long term items (like
an sqlite database) in the OpenShift data directory, which will
persist between pushes of your repo. The OpenShift data directory can
be found via the environment variable $OPENSHIFT_DATA_DIR.
The official OpenShift Django QuickStart shows the design pattern you should follow for adding SQLite to your application via the deploy action hook. See: https://github.com/openshift/django-example/blob/master/.openshift/action_hooks/deploy
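In application code, using the data directory just means reading that environment variable and building the database path from it. A minimal Java sketch (the file name is hypothetical):

    // Resolve a SQLite file under the gear's persistent data directory,
    // which survives pushes of the repo (unlike the repo directory itself).
    String dataDir = System.getenv("OPENSHIFT_DATA_DIR"); // typically ends with '/'
    String dbPath = dataDir + "app.sqlite"; // hypothetical file name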
I'd like to create a build chain for open source projects I'm working on. I'm currently using GitHub, Travis, and Coveralls. This is working fine, but I'd like to add some kind of static code analysis.
I was thinking about hosting SonarQube on OpenShift, but the problem is that OpenShift does not allow remote connections to the database.
I have come up with the following solutions, but none of them seems easy to achieve:
1. Any REST API for Sonar that could be used instead of raw DB access
2. Any alternative to Sonar that could be hosted on OpenShift
3. Migrate from Travis to Jenkins hosted on OpenShift and use this
4. Any other (free) alternative to OpenShift which would allow raw DB access
5. Any other option
Option 1 would be the ideal solution, but I've searched all the Sonar plugins I could find and haven't found any. :/
Am I missing something? Is there really no easy way to host Sonar without exposing DB access?
It looks like at least one person has gotten SonarQube running on OpenShift using the DIY cartridge:
http://majecek.wordpress.com/2013/12/06/how-to-run-sonarqube-4-0-on-openshift/
I was able to get SonarQube to start following those instructions.
EDIT: Databases in OpenShift applications are only exposed publicly in scaled applications. You will want to create your Sonar app with the -s option if you need to populate your database from outside OpenShift.
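For example, creating a scaled DIY app with the rhc client tools might look like this (the app name is arbitrary, and the cartridge version can differ by release):

    rhc app create sonar diy-0.1 -s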
I am using IntelliJ IDEA to develop my applications, and I use GlassFish to run them.
When I want to run/debug my application, I can configure it from GlassFish Server -> Local and define arguments there. However, there is another section besides GlassFish Server: a Remote section for configuration, where I can easily configure and debug my application just by defining host and port variables.
So my question is: why is the GlassFish Server Local configuration needed (except for defining extra parameters), and what is the difference between them (in performance, etc.)?
There are a number of development workflow optimizations and automations that an IDE can perform when it is working with a local server. I don't have a strong background in IDEA, so I am not sure which of the following they may have implemented:
Using in-place/exploded/directory deployment can eliminate jar/war/ear creation in the IDE and deconstruction in the server. This can be a significant time saver.
Linked to the first point is smarter redeployment: in some cases, a file change (like changing a JSP or an HTML file) does not need to trigger redeployment.
JDBC driver integration allows users to configure their IDE to access a DB and then propagates that configuration (which usually includes driver jars, etc.) into the server's classpath as part of deploying an app.
Access to server log files during deployment and execution.
The ability to start and stop the server... even today, you do need to restart GlassFish sometimes.
Viewing the generated Java sources of a JSP.
Most of these features are not available with a remote server, and that has a negative effect on iterative development, since the gap between edit and validate can be fairly long.
This answer is based on my familiarity with the work that we have done for the NetBeans/GlassFish integration. The guys at IntelliJ are smart, so I would not be surprised if they have other features that are available when you are working with a local server.
Local starts GlassFish for you and performs the deployment; with Remote, you start GlassFish manually. Remote can be used to debug apps running on other machines, while Local is useful for development and testing.
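As a side note on the Remote configuration: the remote server has to be started with debugging enabled before the IDE can attach. With GlassFish that is typically done as below (domain1 is the default domain name, and 9009 is GlassFish's usual default JPDA port; verify both in your domain's JVM options):

    asadmin start-domain --debug=true domain1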