gunicorn: proc_name not working

I'm running gunicorn with Django and nginx and deploying with Salt.
I want to change gunicorn's proc name, so I made a Python config file, gunicorn_config.py:
bind = ...
workers = ...
proc_name = 'Name'
daemon = True
...
and in the Salt state file I manage it with:
gunicorn_config_file:
  file.managed:
    - name: /etc/gunicorn_config.py
    - source: salt://.../files/gunicorn_config.py
    - template: jinja
and start it with:
start_gunicorn:
  cmd.run:
    - name: '{{venv}}/gunicorn -c /etc/gunicorn_config.py myProject.wsgi'
    - cwd: /path/to/django/project
The minion reports that everything succeeded, but the process names of gunicorn are still
gunicorn: master and gunicorn: worker.
Other settings like workers are applied correctly, but proc_name is not.
How can I change the proc_name properly? I installed setproctitle with pip in the venv too.
Thank you.
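
As a quick sanity check (the paths below are illustrative, not taken from the question), one can confirm that setproctitle is importable from the exact virtualenv that launches gunicorn, and then inspect the resulting process titles:

# sketch only - adjust the venv path to your deployment
/path/to/venv/bin/pip show setproctitle            # should list the package
/path/to/venv/bin/python -c "import setproctitle"  # exit status 0 means it is importable
ps -eo pid,args | grep -i gunicorn                 # titles should include the configured proc_name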

Related

Use kubernetes with helm and provide my predefined user and password with bitnami correctly

I am using Kubernetes with Helm 3.
I need to create a Kubernetes pod with SQL, creating:
database name: my_database
user: root
password: 12345
port: 3306
The steps:
Create the chart with:
helm create test
After the chart is created, change the Chart.yaml file in the test folder by adding a dependencies section:
apiVersion: v2
name: test3
description: A Helm chart for Kubernetes
version: 0.1.0
appVersion: "1.16.0"
dependencies:
  - name: mysql
    version: 8.8.23
    repository: "https://charts.bitnami.com/bitnami"
Run:
helm dependencies build test
After that there is a compressed tgz file.
I extracted it, and there was a tar file inside - I extracted that too and kept only the final extracted folder.
I presume this isn't the best approach for changing parameters in the yaml for the bitnami chart
(and likewise for using the security.yaml) - I would like to know the better approach too.
I need to change the user + password and link them to the database,
so I changed the values.yaml directly (any better approach?) for the values auth:rootPassword and auth:my_database.
The next steps:
helm dependencies build test
helm install test --namespace test --create-namespace
After that, two pods are created.
I could check it with:
kubectl get pods -n test
and I see two pods running (maybe replication).
One of the pods is test-mysql-0 (the other has a random suffix).
Running:
kubectl exec --stdin --tty test-mysql-0 --namespace test -- /bin/sh
got me into the pod.
Then I ran:
mysql -uroot -p12345
and then:
show databases;
That showed all the databases, including the created database my_database, successfully.
When I tried opening the mysql database from MySQL Workbench and testing the connection (same user: root, same password, port 3306, localhost), it failed (the test connection button in the database properties returns: 'failed to connect to database').
Why can't I connect from MySQL Workbench, while inside the pod itself there is no problem at all?
Is there any better approach than extracting the tgz file as I described above, and can I pass the user + password in a better way (some secured yaml)?
(Right now it is only the root password.)
Thanks.
It sounds like you're trying to set the parameters in the dependent chart (please correct me if I'm wrong).
If this is right, all you need to do is add another section to your chart's values.yaml:
name-of-dependency:
  user-name: ABC
  password: abcdef
the "name-of-dependency" is specified in your Chart.yaml file when you declare your chart. For example, here's my redis dependency from one of my own charts
dependencies:
  - name: redis
    repository: https://charts.bitnami.com/bitnami/
    version: x.x.x
Then when I install the chart, I can override the redis chart's settings by doing this in my own chart's values.yaml
redis:
  architecture: standalone
  auth:
    password: "secret-password-here"

Use case of OpenShift + buildConfig + ConfigMaps

I am trying to create and run a buildconfig yml file.
C:\OpenShift>oc version
Client Version: 4.5.31
Kubernetes Version: v1.18.3+65bd32d
Background:
I have multiple Spring Boot WebUI applications which I need to deploy on OpenShift.
Having a separate set of config yml files (image stream, buildconfig, deployconfig, service, routes)
for each and every application seems to be very inefficient.
Instead I would like to have a single set of parameterized yml files
to which I can pass custom parameters to set up each individual application.
Solution so far:
Version One
Dockerfile:
FROM org/rhelImage
USER root
# Install Yum Packages
RUN yum -y install \
    net-tools \
    && yum -y install nmap-ncat
RUN curl -s --create-dirs --insecure -L ${ARTIFACTURL} -o ${APPPATH}/${ARTIFACT}
# Add docker-entrypoint.sh to the image
ADD docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod -Rf 775 /app && chmod 775 /docker-entrypoint.sh
RUN chmod +x /docker-entrypoint.sh
RUN chmod -R g+rx /app
# Expose port
EXPOSE $MY_PORT
# Set working directory when container starts
WORKDIR $APPPATH
# Starting the application using ENTRYPOINT
#ENTRYPOINT ["sh","/docker-entrypoint.sh"]
$ oc create configmap myapp-configmap --from-env-file=MyApp.properties
configmap/myapp-configmap created
$ oc describe cm myapp-configmap
Name: myapp-configmap
Namespace: 1234
Labels: <none>
Annotations: <none>
Data
====
APPPATH:
----
/app
ARTIFACT:
----
myapp.jar
ARTIFACTURL:
----
"https://myorg/1.2.3.4/myApp-1.2.3.4.jar"
MY_PORT:
----
12305
Events: <none>
buildconfig.yaml snippet
strategy:
  dockerStrategy:
    env:
      - name: GIT_SSL_NO_VERIFY
        value: "true"
      - name: ARTIFACTURL
        valueFrom:
          configMapKeyRef:
            name: "myapp-configmap"
            key: ARTIFACTURL
      - name: ARTIFACT
        valueFrom:
          configMapKeyRef:
            name: "myapp-configmap"
            key: ARTIFACT
This works fine. However, I somehow need to have those env: variables in a file.
I am doing this to have greater flexibility, i.e. let's say a new variable is introduced in the Dockerfile, I need NOT change the buildconfig.yml.
I just add the new key:value pair to the properties file, rebuild, and we are good to go.
This is what I do next:
Version Two
Dockerfile
FROM org/rhelImage
USER root
# Install Yum Packages
RUN yum -y install \
    net-tools \
    && yum -y install nmap-ncat
# Initializing the variables file
RUN ["sh", "-c", "source ./MyApp.properties"]
RUN curl -s --create-dirs --insecure -L ${ARTIFACTURL} -o ${APPPATH}/${ARTIFACT}
# Add docker-entrypoint.sh to the image
ADD docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod -Rf 775 /app && chmod 775 /docker-entrypoint.sh
RUN chmod +x /docker-entrypoint.sh
RUN chmod -R g+rx /app
# Expose port
EXPOSE $MY_PORT
# Set working directory when container starts
WORKDIR $APPPATH
# Starting the application using ENTRYPOINT
#ENTRYPOINT ["sh","/docker-entrypoint.sh"]
$ oc create configmap myapp-configmap --from-file=MyApp.properties=C:\MyRepo\MyTemplates\MyApp.properties
configmap/myapp-configmap created
C:\OpenShift>oc describe configmap myapp-configmap
Name: myapp-configmap
Namespace: 1234
Labels: <none>
Annotations: <none>
Data
====
MyApp.properties:
----
APPPATH=/app
ARTIFACTURL="https://myorg/1.2.3.4/myApp-1.2.3.4.jar"
ARTIFACT=myapp.jar
MY_PORT=12035
Events: <none>
buildconfig.yaml snippet
source:
  contextDir: "${param_source_contextdir}"
  configMaps:
    - configMap:
        name: "${param_app_name}-configmap"
However, the build fails:
STEP 9: RUN ls ./MyApp.properties
ls: cannot access ./MyApp.properties: No such file or directory
error: build error: error building at STEP "RUN ls ./MyApp.properties": error while running runtime: exit status 2
This means that the config map file didn't get copied to the folder.
Can you please suggest what to do next?
I think you are misunderstanding Openshift a bit.
The first thing you say is
To have separate set of config yml files ( image stream, buildconfig, deployconfig, service, routes), for each and every application seems to be very inefficient.
But that's how kubernetes/openshift works. If your resource files look the same, but only use a different git resource or image for example, then you probably are looking for Openshift Templates.
Instead i would like to have a single set of parameterized yml files to which i can pass on custom parameters to setup each individual application
Yep, I think Openshift Templates is what you are looking for. If you upload your template to the service catalog, whenever you have a new application to deploy, you can add some variables in a UI and click deploy.
An Openshift Template is just a parameterised file for all of your openshift resources (configmap, service, buildconfig, etc.).
If your application needs to be built from some git repo, using some credentials, you can parameterise those variables.
But also take a look at Openshift's Source-to-Image solution (I'm not sure what version you are using, so you'll have to google some resources). It can build and deploy your application without you having to write your own Resource files.
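
For illustration, a minimal Template sketch (the name and parameters below are made up for the example, not taken from the question) that wraps a ConfigMap so it can be stamped out once per application:

apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: myapp-template              # hypothetical template name
parameters:
  - name: APP_NAME
    required: true
  - name: ARTIFACTURL
    required: true
objects:
  - apiVersion: v1
    kind: ConfigMap
    metadata:
      name: ${APP_NAME}-configmap
    data:
      ARTIFACTURL: ${ARTIFACTURL}

It can then be rendered and applied per application with something like: oc process -f myapp-template.yaml -p APP_NAME=myapp -p ARTIFACTURL=https://... | oc apply -f -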

Vagrant, Ansible and MySql: ERROR! no action detected in task. This often indicates a misspelled module name, or incorrect module path

I'm dealing with a project in Symfony. It came with a Vagrantfile. When I run vagrant up, this comes out:
ERROR! no action detected in task. This often indicates a misspelled
module name, or incorrect module path.
The error appears to be in
'/home/chris/Projects/TechAnalyzePlatform/deploy/ansible/roles/db/tasks/mysql.yml':
line 16, column 5, but may be elsewhere in the file depending on the
exact syntax problem.
The offending line appears to be:
# http://ansible.cc/docs/modules.html#mysql-user
- name: update mysql root password for all root accounts
^ here
Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.
The mysql.yml file contains the following:
---
# MySQL setup
- name: Install MySQL/MariaDB
  action: yum name={{ item }}
  with_items:
    - MySQL-python
    - perl-Cache-Cache
    - mariadb
    - mariadb-server

- name: Start the MySQL service
  action: service name=mariadb state=started enabled=yes

# 'localhost' needs to be the last item for idempotency, see
# http://ansible.cc/docs/modules.html#mysql-user
- name: update mysql root password for all root accounts
  mysql_user: name=root host={{ item }} password=admin priv=*.*:ALL,GRANT
  with_items:
    - "{{ ansible_hostname }}"
    - 127.0.0.1
    - ::1
    - localhost

- name: create /.my.cnf
  template: src=my.cnf dest=~/.my.cnf
The module exists, so what kind of action should I insert there? What can cause this?
Many thanks.
Solution: Update Ansible. Solved.
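
As a rough usage sketch of that fix (assuming Ansible was installed with pip; adjust to however it was installed on your machine):

ansible --version                # check which version the provisioner is using
pip install --upgrade ansible    # upgrade, then re-run: vagrant provision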

How can I use Ansible when I only have read-only access?

I am using Ansible to automate some network troubleshooting tasks, but when I try to ping all my devices as a sanity check I get the following error:
"msg": "Authentication or permission failure. In some cases, you may have been able to authenticate and did not have permissions on the remote directory. Consider changing the remote temp path in ansible.cfg to a path rooted in \"/tmp\".
When I run the command in Ansible verbose mode, right before this error I get the following output:
<10.25.100.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "echo Cmd exec error./.ansible/tmp/ansible-tmp-1500330345.12-194265391907358" && echo ansible-tmp-1500330345.12-194265391907358="echo Cmd exec error./.ansible/tmp/ansible-tmp-1500330345.12-194265391907358" ) && sleep 0'
I am an intern and thus only have read-only access to all devices; therefore, I believe the error is occurring because of the mkdir command. My two questions are thus:
1) Is there any way to configure Ansible to not create any temp files on the devices?
2) Is there some other factor that may be causing this error that I might have missed?
I have tried searching through the Ansible documentation for any relevant configurations, but I do not have much experience working with Ansible so I have been unable to find anything.
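For reference, the setting that the error message alludes to lives in ansible.cfg; a minimal sketch (the path shown is only an example):

[defaults]
# where Ansible creates its temporary files on the managed host
remote_tmp = /tmp/.ansible/tmp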
The question does not make sense in a broader context. Ansible is a tool for server configuration automation. Without write access you can't configure anything on the target machine, so there is no use case for Ansible.
In a narrower context, although you did not post any code, you seem to be trying to ping the target server. Ansible ping module is not an ICMP ping. Instead, it is a component which connects to the target server, transfers Python scripts and runs them. The scripts produce a response which means the target system meets minimal requirements to run Ansible modules.
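For contrast, a bare ping-module task looks like this; it only verifies that Ansible can connect and run Python on the target, nothing more:

- hosts: all
  tasks:
    - name: verify Ansible connectivity (not an ICMP ping)
      ping: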
However you seem to want to run a regular ping command using Ansible command module on your control machine and check the status:
- hosts: localhost
  vars:
    target_host: 192.168.1.1
  tasks:
    - command: ping -c 4 {{ target_host }}
You might want to play with failed_when, ignore_errors, or changed_when parameters. See Error handling in playbook.
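A rough sketch of how those keywords could be combined with the task above (the values are illustrative):

- command: ping -c 4 {{ target_host }}
  register: ping_result
  ignore_errors: yes          # do not stop the play if the host is unreachable
  changed_when: false         # a ping never changes anything on the target
- debug:
    var: ping_result.rc       # 0 means the ping succeeded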
Note that I suggested running the whole play on localhost because, in your configuration, it doesn't make sense to keep target machines to which you have limited access rights in the inventory.
Additionally:
Is there any way to configure Ansible to not create any temp files on the devices?
Yes. Running commands through raw module will not create temporary files.
As you seem to have SSH access, you can use it to run a command and check its result:
- hosts: 192.168.1.1
  tasks:
    - raw: echo Hello World
      register: echo
    - debug:
        var: echo.stdout
If you have multiple nodes and sudo permission, and you want to bypass a read-only restriction, try using the raw module to remount the disk on the remote node with the read/write option; it was helpful for me.
Playbook example:
---
- hosts: bs
  gather_facts: no
  pre_tasks:
    - name: read/write
      raw: ansible bs -m raw -a "mount -o remount,rw /" -b --vault-password-file=vault.txt
      delegate_to: localhost
  tasks:
    - name: dns
      raw: systemctl restart dnsmasq
    - name: read only
      raw: mount -o remount,ro /

Migration doesn't execute when I push my Laravel app on Pagodabox.io?

When I push my Laravel app to Pagodabox, it seems to cancel migrations; it keeps saying "Command Cancelled! SUCCESS", and when I try to view the live app, I get an error message:
SQLSTATE[42S02]: Base table or view not found: 1146 Table 'gopagoda.posts' doesn't exist (SQL: select * from `posts`).
I did set up all the db credentials for production (for the mysql db).
The app works fine on my local server.
Also, it may be relevant that I have a free account. I am not sure if migrations are available for free accounts!?
<= :::::::::::::::::::::: END BUILD OUTPUT :::::::::::::::::::::::::::
""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
+> Uploading libs to storage warehouse
+> Uploading build to storage warehouse
+> Provisioning production servers (may take a while)
web1 :: BEFORE DEPLOY HOOK 1 ::
///////////////////////////////
$ php artisan -n migrate --seed
**************************************
* Application In Production! *
**************************************
Command Cancelled!
[✓] SUCCESS
My Boxfile:
####### PHP BOXFILE #######
# The Boxfile is a yaml config file that houses all configuration
# related to your app’s deployment and infrastructure. It allows
# you to custom-configure your app's environment specific to your
# project's needs.
# DOCUMENTATION LINKS
# The Boxfile : pagodabox.io/docs/boxfile_overview
# PHP Settings in the Boxfile : pagodabox.io/docs/boxfile_php_settings
# PHP on Pagoda Box : pagodabox.io/docs/php
# Build & Deploy Hooks : pagodabox.io/docs/build_deploy_hooks
global:
  env:
    - LARAVEL_ENV:production

build:
  type: php
  stability: production
  lib_dir: 'vendor'

web1:
  type: php
  name: laravel
  httpd_document_root: public
  php_extensions:
    - mcrypt
    - pdo_mysql
  network_dirs:
    storage1:
      - app/storage/cache
      - app/storage/logs
      - app/storage/meta
      - app/storage/sessions
      - app/storage/views
  before_deploy:
    - 'php artisan -n migrate --seed'
  after_deploy:
    - 'php artisan -n cache:clear'
    - 'rm -f app/storage/views/*'

database1:
  name: gopagoda
  type: mysql

storage1:
  type: nfs
  name: laravel-writables
With interactivity disabled, artisan automatically aborts migrations run in a "production" environment. You can force the migration to run by adding the --force flag to the migrate command.
web1:
  before_deploy:
    - 'php artisan -n migrate --seed --force'