Configuring directory aliases in Starman (or other PSGI servers)

I am used to setting aliases to different directories in Apache's httpd.conf. For example, the following works for me:
Alias /lib /path/to/lib
Then I can include paths such as <script src="/lib/jquery/plugin/funky.js"></script> no matter what the application path is.
I am trying out Starman (and other PSGI servers such as HTTP::Server::PSGI), and can't figure out any way to set configuration parameters such as aliases to directories.
Can this be done? How?

It can be easily done by using Plack::Middleware::Static.
use Plack::Builder;
builder {
    # serve anything under /lib/ from /path/to/lib/,
    # stripping the /lib/ prefix before the filesystem lookup
    enable "Static", path => sub { s!^/lib/!! }, root => "/path/to/lib/";
    $app;
};
and you'll get "/lib/foo.js" loaded from "/path/to/lib/foo.js". This should work with Starman and any other PSGI-compatible web server.
More information is available in the online documentation.

Related

How can I remove the port from a URL so that image sources stored in a XAMPP database work properly?

I'm new to Angular and I have a problem with Angular and XAMPP. I'm trying to load images from a MySQL database where I stored their paths. The problem is that Angular tries to access each image through the dev server's port, i.e. through something like localhost:4200/bcPraca/php/upload/imageName, but the real URL to the image has no port :4200.
When I access localhost/bcPraca/php/upload/imageName directly, the image shows up, so the path itself works.
So how can I remove that port from the URL? Or what can I do to make it work?
Everything else is working properly except that image source.
You have to use a proxy in order to be able to communicate with your backend.
In order to use a proxy:
Create a proxy.conf.json in the root of your workspace (adapt the following to your needs):
{
  "/api": {
    "target": "http://localhost/api",
    "secure": false,
    "changeOrigin": true,
    "logLevel": "debug"
  }
}
Start your app with the following command:
ng serve --proxy-config proxy.conf.json
You can read more about the Angular dev-server proxy in the Angular CLI documentation.
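For instance, once the proxy context is adapted to the question's upload path (e.g. using "/bcPraca" instead of "/api" above), the component just binds a relative URL. A minimal sketch with hypothetical names:

import { Component } from '@angular/core';

@Component({
  selector: 'app-image',
  // a relative src has no host or port, so ng serve routes it through the proxy
  template: '<img [src]="imageUrl" alt="uploaded image">',
})
export class ImageComponent {
  // hypothetical path based on the question; the proxy target decides
  // which backend (e.g. http://localhost) actually serves the file
  imageUrl = '/bcPraca/php/upload/imageName';
}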
You can put a prefix like http:// before your image source and it will work.

nestjs configuration with dotenv

Referring to the official NestJS documentation, it is recommended to use the ConfigService in order to use environment variables.
So in the code, we access all vars defined in an .env file with something like:
config.get('PORT')
But it is not recommended to use .env files in a production environment. So how should we deploy in that case?
Why not just use the standard method with dotenv and process.env.PORT?
There are two problems that make the ConfigService less useful.
First
When no .env file is present in any environment, readFileSync in
dotenv.parse(fs.readFileSync(filePath))
will fail:
[Nest] 63403 [ExceptionHandler] path must be a string or Buffer
TypeError: path must be a string or Buffer
at Object.fs.openSync (fs.js:646:18)
at Object.fs.readFileSync (fs.js:551:33)
at new ConfigService (../config/config.service.ts:8:38)
Even if e.g. process.env.API_KEY is available
this.configService.get('API_KEY')
will not return anything. So the ConfigService forces you to use a prod.env file, which dotenv advocates against:
No. We strongly recommend against having a "main" .env file and an
"environment" .env file like .env.test. Your config should vary
between deploys, and you should not be sharing values between
environments.
https://github.com/motdotla/dotenv#should-i-have-multiple-env-files
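A hypothetical workaround (not part of the original answer) is to parse the .env file only when it exists and fall back to process.env otherwise. A sketch, assuming a custom ConfigService like the one in the stack trace above:

import * as fs from 'fs';
import * as dotenv from 'dotenv';

export class ConfigService {
  private readonly envConfig: Record<string, string>;

  constructor(filePath: string) {
    // only read the file when it is actually there,
    // so a missing .env no longer crashes the constructor
    this.envConfig = fs.existsSync(filePath)
      ? dotenv.parse(fs.readFileSync(filePath))
      : {};
  }

  get(key: string): string | undefined {
    // prefer the file value, fall back to the real environment
    return this.envConfig[key] ?? process.env[key];
  }
}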
Second
You have to import the config module and inject the service in order to use it. When you use env variables directly, like this:
imports: [
    MongooseModule.forRoot(process.env.MONGO_URI, { useNewUrlParser: true }),
    ConfigModule,
],
the config service is useless, because process.env.MONGO_URI is read at import time, before any ConfigService could be instantiated.
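One way around this (a sketch, assuming the @nestjs/config package discussed below, or any module exposing a ConfigService) is the async variant of forRoot, which lets Nest inject the service before the connection string is read:

import { Module } from '@nestjs/common';
import { ConfigModule, ConfigService } from '@nestjs/config';
import { MongooseModule } from '@nestjs/mongoose';

@Module({
  imports: [
    ConfigModule.forRoot(),
    MongooseModule.forRootAsync({
      imports: [ConfigModule],
      inject: [ConfigService],
      // the factory runs after ConfigService is available,
      // so the URI is no longer read at import time
      useFactory: (config: ConfigService) => ({
        uri: config.get<string>('MONGO_URI'),
        useNewUrlParser: true,
      }),
    }),
  ],
})
export class AppModule {}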
Read more about config in the environment here: https://12factor.net/config
But it is not recommended to use .env in a production environment. So how to deploy that way?
Actually, it is not recommended to commit your .env files. It's perfectly fine to use them in production :-).
Why not use the standard method with dotenv and process.env.PORT?
It allows decoupling your core code from the code responsible for providing configuration data. Thus:
The core code is easier to test: manually changing or mocking process.env is a pain, whereas mocking a "ConfigService" is pretty easy (see the sketch after this list).
You could switch to something other than environment variables in the future by replacing a single method (or a few getters) in a dedicated class, instead of replacing every occurrence of process.env.* in your code. To be fair, this is unlikely to happen, as environment variables are the most common way to load configuration data, but still.
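To illustrate the testing point, a Jest-style sketch with hypothetical values: a plain object stands in for the ConfigService, whereas the process.env variant would need careful setup and teardown in every test:

import { Test } from '@nestjs/testing';
import { ConfigService } from '@nestjs/config';

// hypothetical values; only get() is needed by the code under test
const values: Record<string, string> = { PORT: '3000', API_KEY: 'test-key' };
const configServiceMock = { get: (key: string) => values[key] };

// swap the real provider for the mock inside a Nest testing module
async function createTestingModule() {
  return Test.createTestingModule({ providers: [ConfigService] })
    .overrideProvider(ConfigService)
    .useValue(configServiceMock)
    .compile();
}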
Using @nestjs/config (a.k.a. ConfigModule) makes environment variables available to your app whether they come from a .env file or are set in the environment. Locally you use a .env file, and in production you use the environment itself.
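A minimal sketch of that setup: ConfigModule.forRoot reads a .env file when one is present and always merges in process.env, so the same get() call works in both places.

import { Module } from '@nestjs/common';
import { ConfigModule } from '@nestjs/config';

@Module({
  // isGlobal avoids re-importing ConfigModule in every feature module
  imports: [ConfigModule.forRoot({ isGlobal: true })],
})
export class AppModule {}

// anywhere ConfigService is injected:
// constructor(private config: ConfigService) {}
// const port = this.config.get<number>('PORT', 3000); // 3000 is a fallback default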

Specify JFROG_ACCESS home instead of ~/.jfrog_access (Artifactory 5.5.2)

I managed to set up Artifactory using our existing Tomcat. I have set ARTIFACTORY_HOME=/opt/artifactory, and that part works well. There is, however, also the JFrog access.war file, which needs to be running as well. I couldn't figure out which variable to use to specify its home, so it defaults to ~/.jfrog_access, which is not at all what I want.
I moved the content over to my $ARTIFACTORY_HOME/access and symlinked it, but that's not the way to go for sure. Any help appreciated.
In case someone is stumbling over this thread and struggles with the same problem:
The solution for me was to also extract the context files (access.xml and artifactory.xml, which are available in the zip file under <zip extract>/misc/tomcat) to the Tomcat configuration folder, e.g. $CATALINA_HOME/conf/Catalina/localhost/. After that, the $ARTIFACTORY_HOME env var is recognized on Access startup.
A previous answer finally put me on the right track for solving this problem on Amazon Linux.
In addition to copying access.xml and artifactory.xml to ${catalina.home}/host/MY_HOSTNAME, I found that some other changes were needed.
I modified the docBase attributes in the XML context files because my server has multiple hostnames:
/usr/share/tomcat8/conf/Catalina/repo.mydomain.org/access.xml
<Context path="/access" docBase="${catalina.home}/host/repo.mydomain.org/access.war">
    <Parameter name="jfrog.access.bundled" value="true" override="true"/>
    <!-- enable annotations scanning of access jar files -->
    <JarScanner scanClassPath="false">
        <JarScanFilter defaultPluggabilityScan="false" pluggabilityScan="access*" defaultTldScan="false"/>
    </JarScanner>
</Context>
/usr/share/tomcat8/conf/Catalina/repo.mydomain.org/artifactory.xml
<Context crossContext="true" path="/artifactory" docBase="${catalina.home}/host/repo.mydomain.org/artifactory.war">
</Context>
Important Note: In order to prevent the above two XML files from being deleted by Tomcat Manager during upgrades via Undeploy/Deploy WAR, make sure they are owned by root and not writable by the tomcat user:
chown root.root access.xml artifactory.xml
chmod 644 access.xml artifactory.xml
If you forget to do the above, you will likely end up missing these files, which will break the communication between the access and artifactory web applications, resulting in login failures ("Username or Password Are Incorrect"). In this case, these errors result from the lack of communication between the web applications, not a problem with the credentials themselves.
/usr/share/tomcat8/conf/Catalina/repo.mydomain.org/manager.xml
This gives me the ability to upload new versions of access.war and artifactory.war via https://repo.mydomain.org:8443/manager/html:
<Context docBase="${catalina.home}/webapps/manager" privileged="true" antiResourceLocking="false">
</Context>
Additionally, I created the following folder to serve as the artifactory.home:
sudo mkdir /usr/share/artifactory
sudo chown tomcat.tomcat /usr/share/artifactory
tomcat8.conf
Add (or modify) the following line:
JAVA_OPTS="-Dartifactory.home=/usr/share/artifactory -Djfrog.access.home=/usr/share/artifactory/access -Dartifactory.access.client.serverUrl.override=http://localhost:8080/access"
Note: The Access Client URL specified above must use localhost in order to prevent the Server HTTP header from being overwritten by Apache and its modules. For instance, if I use:
https://repo.mydomain.org/access/api/v1/system/ping
The Server HTTP header value in the response is:
Server: Apache/2.4.33 (Amazon) OpenSSL/1.0.2k-fips mod_jk/1.2.43
And the Access Client produces the following exception:
[ERROR] (o.j.a.c.AccessClientImpl:154) - Access client/server version mismatch. Client version: 4.1.5, Server version: 2.4.33 (Amazon) OpenSSL
This means the Access Client depends on the first string matching #.#.# in the Server header, which seems like a really fragile part of the Access Client; they should have used something like X-JFrog-Access-Server instead of relying on a value set by the web server. So, to reiterate, use http://localhost:8080/access to connect directly to the Tomcat server.
Artifactory 6.2.0 depends on Apache Derby (the specific version can be found in jfrog-artifactory-oss-6.2.0.zip\artifactory-oss-6.2.0\tomcat\lib). This should be added as a shared library to Tomcat:
mkdir /usr/share/tomcat8/shared
cd /usr/share/tomcat8/shared
wget http://central.maven.org/maven2/org/apache/derby/derby/10.11.1.1/derby-10.11.1.1.jar
Add or modify the following line in catalina.properties:
shared.loader=${catalina.home}/shared/*.jar
Since we want https://repo.mydomain.org to go to the Artifactory webapp:
mkdir /usr/share/tomcat8/host/repo.mydomain.org/ROOT
echo '<html><head><meta http-equiv="refresh" content="0;URL=/artifactory"></meta></head><body></body></html>' > /usr/share/tomcat8/host/repo.mydomain.org/ROOT/index.html
And make sure the services automatically start on reboot:
sudo chkconfig httpd on
sudo chkconfig tomcat8 on
Artifactory will then be available at the URL:
https://repo.mydomain.org/artifactory/webapp/

How to run several IPFS nodes on a single machine?

For testing, I want to be able to run several IPFS nodes on a single machine.
This is the scenario:
I am building small services on top of IPFS core library, following the Making your own IPFS service guide. When I try to put client and server on the same machine (note that each of them will create their own IPFS node), I will get the following:
panic: cannot acquire lock: Lock FcntlFlock of /Users/long/.ipfs/repo.lock failed: resource temporarily unavailable
Usually, when you start with IPFS, you run ipfs init, which creates a new node. The data and config for that particular node are stored at ~/.ipfs by default. Here is how you can create a second node and configure it so that it can run beside your default node.
1. Create a new node
For a new node you have to run ipfs init again, but with a different repo path, for instance:
IPFS_PATH=~/.ipfs2 ipfs init
This will create a new node at ~/.ipfs2 (not using the default path).
2. Change Address Configs
As both of your nodes would now bind to the same ports, you need to change the port configuration so that they can run side by side. For this, open ~/.ipfs2/config and find Addresses:
"Addresses": {
"API": "/ip4/127.0.0.1/tcp/5001",
"Gateway": "/ip4/127.0.0.1/tcp/8080",
"Swarm": [
"/ip4/0.0.0.0/tcp/4001",
"/ip6/::/tcp/4001"
]
}
and change it to, for example, the following:
"Addresses": {
"API": "/ip4/127.0.0.1/tcp/5002",
"Gateway": "/ip4/127.0.0.1/tcp/8081",
"Swarm": [
"/ip4/0.0.0.0/tcp/4002",
"/ip6/::/tcp/4002"
]
}
With this, you should be able to run both nodes, .ipfs and .ipfs2, on a single machine.
Notes:
Whenever you use .ipfs2, you need to set the env variable IPFS_PATH=~/.ipfs2
In your example you need to change either your client or server node from ~/.ipfs to ~/.ipfs2
You can also start the daemon for the second node using IPFS_PATH=~/.ipfs2 ipfs daemon &
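If your client and server talk to the daemons over their HTTP APIs rather than embedding nodes (an assumption; the question uses the core library directly), a sketch using the ipfs-http-client package against the two API ports configured above:

import { create } from 'ipfs-http-client';

async function main() {
  // one client per daemon, using the API ports configured above
  const nodeA = create({ url: 'http://127.0.0.1:5001/api/v0' });
  const nodeB = create({ url: 'http://127.0.0.1:5002/api/v0' });

  // add content on the first node...
  const { cid } = await nodeA.add('hello from node A');

  // ...and read it back through the second node
  const chunks: Uint8Array[] = [];
  for await (const chunk of nodeB.cat(cid)) {
    chunks.push(chunk);
  }
  console.log(Buffer.concat(chunks).toString());
}

main();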
Hello, I used .ipfs2; after running two daemons at the same time, I can indeed open localhost:5001/webui, but the second one at localhost:5002/webui shows an error, as shown in the attachment.
Here are some ways I've used to create multiple nodes/peer IDs. I use Windows 10.
1st node: go-ipfs (latest version)
2nd node: Siderus Orion IPFS (connects to an Orion node, not a local one) -- https://orion.siderus.io/
Use VirtualBox to run a minimal Ubuntu installation. (You can set up as many as you want.)
Repeat the process and you have 4 nodes, or as many as you want.
https://discuss.ipfs.io/t/ipfs-manager-download-install-manage-debug-your-ipfs-node/3534 is another GUI that installs and lets you manage all IPFS commands without the command line. It was released just a few days ago and looks well worth a review.
Disclaimer I am not a coder or computer professional. Just a huge fan of IPFS! I hope we can raise awareness and change the world.

How to configure debug mode for a production environment in CakePHP 3.2?

The latest CakePHP will produce this inside app.php:
'debug' => filter_var(env('DEBUG', true), FILTER_VALIDATE_BOOLEAN),
I need to set this up for production use.
How do I make debug false on the production server without changing this line?
Apache
You can set the DEBUG environment value to false via the .htaccess file on the production server.
You'll just have to add SetEnv DEBUG false to the .htaccess file you're using.
This StackOverflow post explains it a little more.
Nginx
If you are using Nginx you can set environment values in two different ways.
You can add an extra fastcgi_param to the location block with the desired name and value:
location / {
    ...
    fastcgi_param DEBUG false;
    ...
}
php-fpm
You can also edit the php-fpm or php-cgi config and add the following:
env[DEBUG] = false
According to CakePHP's documentation, env() requires one parameter: the environment variable's key. The optional second parameter is a default value, used in case the variable isn't set.