According to the official NestJS documentation, it is recommended to use a ConfigService in order to work with environment variables.
So in the code, we access all vars defined in an .env file with something like:
config.get('PORT')
But it is not recommended to use a .env file in a production environment. So how should we deploy in that case?
Why not just use the standard method with dotenv and process.env.PORT?
There are two problems that make the ConfigService less useful.
First
When no .env file is present in an environment, the readFileSync call in
dotenv.parse(fs.readFileSync(filePath))
will fail:
[Nest] 63403 [ExceptionHandler] path must be a string or Buffer
TypeError: path must be a string or Buffer
at Object.fs.openSync (fs.js:646:18)
at Object.fs.readFileSync (fs.js:551:33)
at new ConfigService (../config/config.service.ts:8:38)
Even if e.g. process.env.API_KEY is available
this.configService.get('API_KEY')
will not return anything. So the ConfigService forces you to use a prod.env file, which dotenv advocates against:
No. We strongly recommend against having a "main" .env file and an
"environment" .env file like .env.test. Your config should vary
between deploys, and you should not be sharing values between
environments.
https://github.com/motdotla/dotenv#should-i-have-multiple-env-files
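One way to sidestep this is to only read the file when it actually exists and fall back to process.env otherwise. A minimal sketch (this is not the ConfigService from the NestJS docs; the existence check and the fallback are assumptions):

import * as fs from 'fs';
import * as dotenv from 'dotenv';

export class ConfigService {
  private readonly envConfig: Record<string, string>;

  constructor(filePath: string) {
    // Parse the .env file only if it exists (local development); otherwise start empty.
    this.envConfig = fs.existsSync(filePath) ? dotenv.parse(fs.readFileSync(filePath)) : {};
  }

  get(key: string): string | undefined {
    // Values from the .env file win locally; the real environment wins when no file is present.
    return this.envConfig[key] ?? process.env[key];
  }
}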
Second
You have to import the config module and inject the service in order to use it. When you use env variables like this
imports: [
  MongooseModule.forRoot(process.env.MONGO_URI, { useNewUrlParser: true }),
  ConfigModule,
],
the config service is useless.
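To actually route that URI through the ConfigService you would have to switch to the async module variant, which is noticeably more ceremony. A sketch, assuming a ConfigModule that exports ConfigService and a MONGO_URI key:

imports: [
  MongooseModule.forRootAsync({
    imports: [ConfigModule],
    inject: [ConfigService],
    // The factory only runs once ConfigService can be injected,
    // unlike the plain forRoot() call above.
    useFactory: (config: ConfigService) => ({
      uri: config.get('MONGO_URI'),
      useNewUrlParser: true,
    }),
  }),
],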
Read more about config in the environment here: https://12factor.net/config
But it is not recommended to use .env in a production environment. So how to deploy that way?
Actually, it is not recommended to commit your .env files. It's perfectly fine to use them in production :-).
Why not use the standard method with dotenv and process.env.PORT?
It allows decoupling your core code from the code responsible for providing configuration data. Thus:
The core code is easier to test: manually changing or mocking process.env is a pain, whereas mocking a ConfigService is pretty easy (see the test sketch after this list).
You can imagine using something other than environment variables in the future by replacing a single method (or a few getters) in a dedicated class, instead of replacing every occurrence of process.env.* in your code. To be fair, this is unlikely to happen, since environment variables are the most common way to load configuration data, but still.
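To illustrate the first point, here is a minimal Jest-style sketch; the AppService and its PORT lookup are hypothetical, the point is only that the stub replaces any process.env juggling:

// Hypothetical service under test that reads its port through a ConfigService-like dependency.
class AppService {
  constructor(private readonly config: { get(key: string): string | undefined }) {}
  getPort(): number {
    return Number(this.config.get('PORT') ?? 3000);
  }
}

it('uses the configured port', () => {
  const configMock = { get: jest.fn().mockReturnValue('8080') }; // no process.env changes needed
  const service = new AppService(configMock);
  expect(service.getPort()).toBe(8080);
});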
Using @nestjs/config (a.k.a. ConfigModule) makes environment variables available to your app whether they come from a .env file or are set in the environment. Locally you use a .env file, and in production you use the environment.
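A common setup for that is to skip the file lookup entirely in production. A sketch (the NODE_ENV convention below is an assumption, not something @nestjs/config enforces):

import { Module } from '@nestjs/common';
import { ConfigModule } from '@nestjs/config';

@Module({
  imports: [
    ConfigModule.forRoot({
      // Locally, values come from .env; in production the file is ignored
      // and only real environment variables are read.
      ignoreEnvFile: process.env.NODE_ENV === 'production',
      isGlobal: true,
    }),
  ],
})
export class AppModule {}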
Related
I'm running a Node.js Express app.
Currently I have a dev env, a test env, and a prod env.
However, the DB connection settings are in the code. Is there a secure, best-practice way to store the DB config and all other configs in JSON files, declaring them in a module (separately for each environment, or all in one exported module, e.g. default.json, dev.json, prod.json, etc.), and then requiring the correct configuration for the current environment in app.js?
I would like to achieve this without depending on any third-party package like dotenv or nconf.
Most of the main Node.js hosting providers use a simple environment variable. You can use this:
process.env.NODE_ENV
To define it yourself, for example 'development' on your local machine, you can do:
NODE_ENV=development node yourapp.js
With this, I suggest using a config tool such as nconf (there are some good competitors). You can do something like this, for example:
const nconf = require('nconf');

nconf
  .argv()                                            // takes arguments from the CLI
  .file('./env.' + process.env.NODE_ENV + '.json')   // takes values from the environment-specific file
  .file('package', './package.json');                // takes values from package.json
Here the priority runs from most important to least important:
1) argv
2) specific environment file
3) package.json
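Once the stores are set up, values are read with nconf.get; nconf uses ':' as its key separator. The env.development.json contents and the db:host key below are only hypothetical examples:

// env.development.json (hypothetical): { "db": { "host": "localhost", "port": 5432 } }
const dbHost = nconf.get('db:host');   // 'localhost' in development
const appName = nconf.get('name');     // falls back to the name field of package.json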
You can require a file based on the environment.
const env = 'test'; // This value can be taken from config or .env
const configs = require(`../path/${env}`);
console.log('DB Config', configs.DB_PATH);
Depending on your environment you can load the right file, and the value for the environment can be taken from process.env or any other config source.
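A sketch of what one of those per-environment modules might contain; the file name and keys are hypothetical:

// ../path/test.js -- one plain module per environment, no third-party packages needed
module.exports = {
  DB_PATH: 'mongodb://localhost:27017/myapp-test', // hypothetical connection string
  LOG_LEVEL: 'debug',
};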
I have my page configuration done via JCR configuration.
I have the component configuration done using YAML configuration.
I want to make this component available to a template configured in the JCR.
The component config is under: /project-website-module/src/main/resources/website-module/components/linkList/linkList.yaml
I tried to reference this in the template's component availability in different ways:
website-module:components/linkList/linkList
website-module:components/linkList/linkList.yaml
/website-module/components/linkList/linkList
/website-module/components/linkList/linkList.yaml
src/main/resources/website-module/components/linkList/linkList
src/main/resources/website-module/components/linkList/linkList.yaml
But no luck, I keep getting the error:
"Caused by: info.magnolia.config.registry.Registry$NoSuchDefinitionException: <pathToComponentConfiguration>"
The component config is under:
/project-website-module/src/main/resources/website-module/dialogs/linkList/linkList.yaml
... that's the dialog config, where's the component config yaml?
It is the path to the component config, not the dialog config, that you need to use when referring to the component. That config file needs to physically live under src/main/resources/website-module/templates/components/..., and the reference is then website-module:components/....
Also, if you have specified a module descriptor for the module, the name in the module descriptor should match website-module.
I had to move the YAML component configuration under (notice the templates directory added):
/project-website-module/src/main/resources/website-module/templates
In my case, that meant moving linkList.yaml under:
/project-website-module/src/main/resources/website-module/templates/components/linkList/linkList.yaml
Then in the JCR config, use the following path:
website-module:components/linkList/linkList
Note: likewise, YAML dialog configurations must be under:
/project-website-module/src/main/resources/website-module/dialogs
We are building a large web app as several WAR files. Each WAR file is a Spring Boot application. For development and testing, we can run these WAR files independently. But in production, we want to run all of the WAR files together under one instance of Jetty (9.x).
The question we have is, what is the best way to deal with externalized configuration in this scenario? Each WAR file has its own set of configuration files (application.properties and others) that it needs. How can we set things up so that each WAR file reads its own configuration files and ignores the rest?
You can use spring.config.name and spring.config.location to give each application a distinct name and/or location for its external configuration files. I'd set these properties in the configure method that you've overridden in your SpringBootServletInitializer subclass.
Another option that might work out better is to use the @PropertySources annotation on the @SpringBootApplication class of each Spring Boot application.
For example, you can rename application.properties for each application: app1.properties, app2.properties, and so on.
Then you can start up Jetty, providing a common configuration folder:
-Dapplication.home=C:/apphome
And in each @SpringBootApplication, add a @PropertySources annotation that looks like this:
@SpringBootApplication
@PropertySources({
    @PropertySource("classpath:app1.properties"),
    @PropertySource(value = "file:${application.home}/app1/app1.properties", ignoreResourceNotFound = true)
})
public class App1Config {
    ...
}
In development, the appN.properties on the classpath will be read. Then in production, when you define application.home, ${application.home}/appN/appN.properties will override the one on the classpath.
In Moqui, I am trying to configure MySQL. I commented out Derby and uncommented MySQL in the default conf, copied the connector to the framework lib, and included the dependency in the framework build.gradle. On running load, I get this error: java.lang.reflect.InvocationTargetException javax.management.InstanceAlreadyExistsException: bitronix.tm:type=JDBC,UniqueName=DEFAULT_transactional_DS,Id=0 -- thanks for any help.
Can you post a snippet of the code you have modified in MoquiDefaultConf.xml and the build.gradle file?
A viable alternative for configuring MySQL with Moqui is to make the related settings in the configuration files (i.e. MoquiDevConf.xml for a development instance, MoquiStagingConf.xml for a staging instance and MoquiProductionConf.xml for a production instance). Follow the steps below to configure MySQL with Moqui.
Since you are probably doing development, you only need to make changes in the MoquiDevConf.xml file.
Replace the <entity-facade> code in MoquiDevConf.xml with the following code.
<entity-facade crypt-pass="MoquiDefaultPassword:CHANGEME">
    <datasource group-name="transactional" database-conf-name="mysql" schema-name="">
        <inline-jdbc jdbc-uri="jdbc:mysql://127.0.0.1:3306/MoquiTransactional?autoReconnect=true&amp;useUnicode=true&amp;characterEncoding=UTF-8"
                jdbc-username="MYSQL_USER_NAME" jdbc-password="MYSQL_PASSWORD" pool-minsize="2" pool-maxsize="50"/>
    </datasource>
</entity-facade>
In the code above, MoquiTransactional is the name of the database. Replace MYSQL_USER_NAME and MYSQL_PASSWORD with your MySQL username and password.
Create a database in MySQL (as per the code above, create the database with name MoquiTransactional).
Add the jdbc driver for MySQL in the runtime/lib directory.
In the MoquiInit.properties file, set the MoquiDevConf.xml file path in the "moqui.conf" property, i.e. moqui.conf=conf/MoquiDevConf.xml.
Now just simply build, load and run.
To answer your question about loading seed data:
you can simply run the Gradle command gradle load -Ptypes=seed; this loads only the seed-type data.
Without more details, my best guess is that you have another instance of Bitronix running on the machine; judging by the UniqueName, it is almost certainly another instance of Moqui. Make sure no other instance is running, killing background processes if there are any, before starting your new instance.
Grails 1.x allows using external configuration files by setting the grails.config.locations directive. Is there a similar approach available for externalizing the database configuration in DataSource.groovy (without setting up JNDI)?
It would prove helpful to be able to configure DB credentials in a simple configuration file outside the application.
Thanks in advance!
You can use a properties file specified in the grails.config.locations as a way to externalize the datasource configuration. Below is how I typically set up a Grails project:
In my DataSource.groovy I specify this for the production environment:
....
....
production {
    dataSource {
        dbCreate = "update"
        driverClassName = "com.myorg.jdbcDriverNotExists"
        url = ""
        username = ""
        password = ""
    }
}
....
....
I specify an external properties file in my Config.groovy:
grails.config.locations = [ "classpath:app-config.properties"]
In the properties file (stored in grails-app/conf/) I specify the actual datasource info:
dataSource.driverClassName=oracle.jdbc.OracleDriver
dataSource.url=jdbc:oracle:thin:@host:port:sid
dataSource.username=sa
dataSource.password=secret
I also use the properties file as a way to override other values that are in Config.groovy. When the app is deployed, if I have to modify the datasource info I just edit the /WEB-INF/classes/app-config.properties file and restart.
The answer above does not really externalize the configuration. It is close, but the configuration still resides inside the application.
I would use a JVM system property, set on startup of the application/server, to point to a location outside the application where the external configuration resides. Read out that property in Config.groovy and use it to get the external configuration file. Something like this:
def extConfig = System.properties.getProperty('ENVVAR');
grails.config.locations = [ "file:${extConfig}/${appName}-config.groovy"]
For me this doesn't work for getting an environment variable. Better to use:
System.getenv().get("ENVVAR").toString()
Just put the configuration file locations in the Config.groovy file as follows:
grails.config.locations = [
    "file:/yourDirectory/${appName}/${Environment.current.name}-datasource.properties",
    "file:/yourDirectory/${appName}/${Environment.current.name}-config.groovy",
    "classpath:${appName}-${Environment.current.name}-datasource.properties",
    "classpath:${appName}-${Environment.current.name}-config.groovy"
]
And put all the datasource details and other config values in the appropriate file. Hence you can externalize the configuration and need not restart the application to change values.