Gunicorn: No module named '/path/to/my/django/project'

I am using gunicorn with nginx on an Ubuntu 16.04 system to deploy a Django project and want to create a systemd service for gunicorn. In /lib/systemd/system/gunicorn-mywebsite.service, I wrote the following:
ExecStart=/home/myusername/sites/pythonEnv/bin/gunicorn --bind unix:/tmp/mywebsite.socket /path/to/my/django/project.wsgi:application
But when I ran service gunicorn-mywebsite start, I got the error No module named '/path/to/my/django/project'.
If I run the same command from my Django project directory with a relative path to wsgi:application, it works.
How can I fix this problem?

You can't give gunicorn a path to a file; it needs a Python module path plus the name of the application entry point, so just project.wsgi:application. If the directory containing project is not on your Python path, use gunicorn's --pythonpath option to tell it where it is.
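For example, a corrected ExecStart might look like this (a minimal sketch based on the placeholder paths in the question, assuming /path/to/my/django/project is the directory that contains manage.py and the project package):

# /lib/systemd/system/gunicorn-mywebsite.service (relevant part only)
[Service]
# --pythonpath points gunicorn at the directory containing the "project" package,
# so the WSGI module can be referenced by its Python path
ExecStart=/home/myusername/sites/pythonEnv/bin/gunicorn \
    --pythonpath /path/to/my/django/project \
    --bind unix:/tmp/mywebsite.socket \
    project.wsgi:application
# (setting WorkingDirectory= to the same directory is a common alternative)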

Related

WSGI path doesn't work with Amazon Linux 2

I'm trying to get going with Elastic Beanstalk and Amazon Linux 2. One thing I've noticed is that the WSGI path seems to be a little different.
For a Django app I would usually set <<app_name>>.wsgi.py
Instead, the new way to define it is with a namespace like this:
<<app_name>>.wsgi:application
This, however, does not seem to work from the eb config but only from .ebextensions.
I feel like I'm doing something wrong here or not getting this namespace concept.
Why do I suddenly have to add a namespace?
Since other people may run into this issue, here is the fix to my problem:
Amazon Linux 2 uses Gunicorn as its default web server. Gunicorn expects a certain syntax when specifying the path to the WSGI config: it includes not only the path to the module but also the name of the exported function (or class) that accepts the WSGI parameters.
This is why you have to use the syntax above.
There are several ways to specify the WSGI path for your project. You can do it via the AWS GUI, but my recommendation would be to add a Procfile to your project.
My Procfile looks like this:
web: gunicorn --bind :8000 --workers 3 --threads 2 <<my_app>>.wsgi:application
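If you prefer to keep the setting in .ebextensions instead of a Procfile, a minimal sketch might look like this (the file name django.config is hypothetical; the option namespace is the standard Elastic Beanstalk Python one, and <<my_app>> is the same placeholder as above):

# .ebextensions/django.config
option_settings:
  aws:elasticbeanstalk:container:python:
    WSGIPath: <<my_app>>.wsgi:application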

az webapp deployment source config: choose solution file

I am trying to deploy an app using the following:
az webapp deployment source config --branch master --manual-integration --name myapp --repo-url https://$GITUSERNAME:$GITUSERPASSWORD#dev.azure.com/<Company>/Project/_git/<repo> --resource-group my-windows-resources --repository-type git
The git repo contains 2 .sln solution files and this causes an error when attempting to deploy. Is there any way I can specify which solution file to use? I can't seem to find a way in the docs but wondered if there might be a workaround.
I found a solution where you create a .deployment file in the root of the solution with these contents:
[config]
project = <PATHTOPROJECT>
command = deploy.cmd
Then a deploy.cmd containing:
nuget.exe restore "<PATHTOSOLUTION>" -MSBuildPath "%MSBUILD_15_DIR%"
The -MSBuildPath may be optional for you
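For reference, a slightly fuller deploy.cmd might look like this (a sketch only; the MSBuild invocation and its switches are an assumption you will likely need to adapt for your project, and <PATHTOSOLUTION>/<PATHTOPROJECT> are the same placeholders as above - only the nuget restore line comes from the answer itself):

@echo off
rem restore packages for the chosen solution, then build just the project we want to deploy
nuget.exe restore "<PATHTOSOLUTION>" -MSBuildPath "%MSBUILD_15_DIR%"
"%MSBUILD_15_DIR%\MSBuild.exe" "<PATHTOPROJECT>" /p:Configuration=Release /p:DeployOnBuild=true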

How to create an image stream of JBoss Web Server in OpenShift Origin

How can I create and use the image stream of JBoss Web Server in OpenShift Origin?
The image YAML is available in this link. I see that it is automatically built in the OpenShift Enterprise version (link), but why not in Origin?
Thanks.
I expected it to pull the image itself during the build, but that did not happen.
D:\docker\apps>oc new-build --image-stream=jboss-webserver31-tomcat7-openshift:1.1 --name=newapp --binary=true
warning: Cannot find git. Ensure that it is installed and in your path. Git is required to work with git repositories.
error: unable to locate any images in image streams with name "jboss-webserver31-tomcat7-openshift:1.1"
The 'oc new-build' command will match arguments to the following types:
1. Images tagged into image streams in the current project or the 'openshift' project
- if you don't specify a tag, we'll add ':latest'
2. Images in the Docker Hub, on remote registries, or on the local Docker engine
3. Git repository URLs or local paths that point to Git repositories
--allow-missing-images can be used to force the use of an image that was not matched
See 'oc new-build -h' for examples.
So I tried to create the import YAML in the web console but got the error below with the YAML:
Failed to process the resource.
Resource is missing kind field.
Got it. Apparently one has to be logged in to Red Hat:
oc import-image my-jboss-webserver-3/webserver31-tomcat7-openshift --from=registry.access.redhat.com/jboss-webserver-3/webserver31-tomcat7-openshift --confirm
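Once the image has been imported into the current project, the original new-build command should be able to resolve it, e.g. (a sketch, assuming the import above created an image stream named webserver31-tomcat7-openshift in your project):

oc new-build --image-stream=webserver31-tomcat7-openshift:latest --name=newapp --binary=true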

How to set up/configure a Laravel project on a cloud server

I have this laravel application on /var/www/html/application-folder/public_html
When I enter the Apache server IP, it doesn't load the Laravel application; instead it displays the Apache home page.
How can I get the URL /var/www/html/application-folder/public_html to display?
When I type the full url I get the following error:
Forbidden
You don't have permission to access /folder/public_html/index.php on this server.
Apache/2.2.15 (CentOS) Server
If you have full root access to your server, then you can:
Step 1
Upload your Laravel project from the development machine (local) to /var/www on your server - upload all folders except vendor and node_modules.
Step 2
Once the project is uploaded, run composer install; if you need any of the node packages, run npm install.
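For example (a minimal sketch, assuming the project was uploaded to /var/www/yourProjectFolder):

cd /var/www/yourProjectFolder
composer install    # optionally add --no-dev on a production server
npm install         # only if the project needs node packages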
Step 3
Create a virtual host for your site with DocumentRoot /var/www/yourProjectFolder/public (a sample virtual host is sketched at the end of this answer).
Step 4
Ensure the storage folder has write permissions for your Apache/webserver user, recursively.
Step 5
Ensure that the public folder has appropriate permissions, recursively; if you have uploaded the project as root, then you will need to change the owner/permissions.
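A typical way to handle steps 4 and 5 (a sketch, assuming the web server user is apache, as is common on CentOS; adjust the user and paths to your setup):

cd /var/www/yourProjectFolder
# give the web server user ownership of the writable directories
chown -R apache:apache storage bootstrap/cache
chmod -R 775 storage bootstrap/cache
# if the project was uploaded as root, fix ownership of public as well
chown -R apache:apache public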
Following this workflow you will be able to get your Laravel site up and running.
An important distinction to make here is that this assumes you have full root access to your server and that you can install composer and/or npm on your server before proceeding with the workflow.
If you can't install composer and/or npm on your server, then you have to upload the vendor directory to your server as well (and if you need any node packages, then node_modules as well); following the documentation link in my comment above will help you.
Refer to Virtual Host on CentOS 6.
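A minimal virtual host for step 3 might look like this (a sketch for Apache 2.2, which the error message above indicates; ServerName and paths are placeholders):

<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/www/yourProjectFolder/public
    <Directory /var/www/yourProjectFolder/public>
        # let Laravel's .htaccess handle URL rewriting
        AllowOverride All
        # Apache 2.2 syntax; on Apache 2.4 use "Require all granted" instead
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>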

Separate Archiva configuration and installation directories

The Archiva documentation states:
The standalone installation of Archiva is capable of separating its configuration from installation
However, I didn't manage to do so with 1.3.6 and 1.4 (using Ubuntu 12.04).
$ARCHIVA_BASE is set to /var/archiva and exported, but the wrapper doesn't seem to take care of that and always launches in the installation directory.
Moreover, the 1.4 init script (bin/archiva) uses $BASEDIR instead of $ARCHIVA_BASE.
Does someone have a clue?
The post is old, but I faced this problem recently and wanted to share what worked for me to make Archiva 2.2.3 run as a service on RHEL 6.8:
Archiva installed at /opt/archiva
Archiva data directory created at /var/archiva_data using these instructions
Edited /opt/archiva/conf/wrapper.conf and made the following change:
set.default.ARCHIVA_BASE=/var/archiva_data
Edited /opt/archiva/bin/archiva and made the following change:
RUN_AS_USER=foo
Linked /etc/init.d/archiva with /opt/archiva/bin/archiva
Started the service using service archiva start
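In command form, those last steps look roughly like this (a sketch, assuming the paths above; the chkconfig registration assumes the script carries the usual init-script headers):

ln -s /opt/archiva/bin/archiva /etc/init.d/archiva
chkconfig --add archiva
service archiva start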
Here is how I do it personally.
My Archiva app is installed in /x1/archiva/archiva with
archiva -> /x1/archiva/apache-archiva-1.4-M4-SNAPSHOT
and all the data (archiva.xml, etc.) in /x1/archiva/archiva-base.
The archiva start script is modified with:
BASEDIR=/x1/archiva/archiva/bin
BASEDIR_CONF="/x1/archiva/archiva-base"
WRAPPER_CONF="$BASEDIR_CONF/conf/wrapper.conf"
PIDDIR="$BASEDIR_CONF/logs"
And it works fine like that :-)
This post might be a little old; I'd nonetheless like to share my experience using Archiva 2.2.1. Separating the base from the installation directory by simply setting $ARCHIVA_BASE (as described on http://archiva.apache.org/docs/1.4-M4/adminguide/standalone.html) still doesn't work. I did the following to get Archiva up and running:
My setup
Archiva binaries and installation files in /opt/archiva/current
(current being a symlink pointing to apache-archiva-2.2.1)
Directories conf/ data/ logs/ temp/ moved to /data/archiva_data
Adjustments in Archiva config files
Start script /opt/archiva/current/bin/archiva:
BASEDIR_CONFIG="/data/archiva_data"
WRAPPER_CONF="$BASEDIR_CONFIG/conf/wrapper.conf"
PIDDIR="$BASEDIR_CONFIG/logs"
Wrapper config file /data/archiva_data/conf/wrapper.conf:
#Manually set the Archiva Basedir
set.default.ARCHIVA_BASE=/data/archiva_data
So the steps pointed out by olamy did work; however, a further adjustment was needed in the wrapper config file to reflect the configuration being moved away from the installation directory.
After the changes, I was able to start and use Archiva.
1. Remember to use cp when you copy configuration files from the original conf folder to your folder.
2. If you're trying to run Archiva as a service, you need to modify wrapper.conf at line 14 to add your ARCHIVA_BASE folder.
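For example (a sketch, assuming Archiva is installed in /opt/archiva and your base folder is /var/archiva_data):

# copy the original configuration into the new base folder
cp -r /opt/archiva/conf /var/archiva_data/
# then point the wrapper at the new base folder in wrapper.conf:
# set.default.ARCHIVA_BASE=/var/archiva_data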