I need to rotate my Resque logs on AWS Elastic Beanstalk on Amazon Linux 2 with Ruby. My Puma and Nginx logs rotate properly. I've added the config below, but the Resque logs are not getting rotated.
.ebextensions/03_publish-logs.config

files:
  "/opt/elasticbeanstalk/tasks/publishlogs.d/resque.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      /var/log/resque/rotated/*
.ebextensions/04_rotate-logs.config

files:
  "/etc/logrotate.elasticbeanstalk.hourly/logrotate.elasticbeanstalk.resque.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      /var/log/resque/* {
        su root root
        size 10M
        rotate 5
        missingok
        compress
        notifempty
        copytruncate
        dateext
        dateformat %s
        olddir /var/log/resque/rotated
      }
I'm following this documentation: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.logging.html#health-logs-logrotate
You're missing the cron job that actually runs logrotate against your config; the file in /etc/logrotate.elasticbeanstalk.hourly does nothing until something invokes it.
Add this to .ebextensions:
files:
  "/etc/cron.hourly/cron.logrotate.elasticbeanstalk.resque.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/sh
      test -x /usr/sbin/logrotate || exit 0
      /usr/sbin/logrotate /etc/logrotate.elasticbeanstalk.hourly/logrotate.elasticbeanstalk.resque.conf
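To verify the setup without waiting for the hourly cron, you can run logrotate by hand; a minimal sketch, using the paths from the config above:

# Dry run: print what logrotate would do without touching any files
sudo /usr/sbin/logrotate -d /etc/logrotate.elasticbeanstalk.hourly/logrotate.elasticbeanstalk.resque.conf

# Force a rotation and confirm the rotated files land in /var/log/resque/rotated
sudo /usr/sbin/logrotate -f /etc/logrotate.elasticbeanstalk.hourly/logrotate.elasticbeanstalk.resque.conf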
I'm trying to set up a MySQL DB with Ansible; however, I'm having trouble changing the initial root password.
- name: Get temporary root password from install log
  shell: cat /var/log/mysqld.log | grep "temporary password" | grep -oE '[^ ]+$'
  register: tmp_root_password

- name: Set new password from temporary password
  shell: 'mysql -e \"ALTER USER root IDENTIFIED BY("{{ mysql_root_password }}");\" --connect-expired-password -uroot -p"{{ tmp_root_password.stdout }}"'
Fails with the following error:
fatal: [mysqlhost.mydomain]: FAILED! => {"changed": true, "cmd": "mysql -e \\\"ALTER USER root IDENTIFIED BY(\" MyNewPassword\");\\\" --connect-expired-password -uroot -p\"MyTmpPassword\"", "delta": "0:00:00.003081", "end": "2021-11-28 08:40:52.000198", "msg": "non-zero return code", "rc": 1, "start": "2021-11-28 08:40:51.997117", "stderr": "/bin/sh: -c: line 0: syntax error near unexpected token `('\n/bin/sh: -c: line 0: `mysql -e \\\"ALTER USER root IDENTIFIED BY(\" MyNewPassword\");\\\" --connect-expired-password -uroot -p\"MyTmpPassword\"'", "stderr_lines": ["/bin/sh: -c: line 0: syntax error near unexpected token `('", "/bin/sh: -c: line 0: `mysql -e \\\"ALTER USER root IDENTIFIED BY(\" MyNewPassword\");\\\" --connect-expired-password -uroot -p\"MyTmpPassword\"'"], "stdout": "", "stdout_lines": []}
I've also tried to set the root password based on the guide below, without any luck.
https://docs.ansible.com/ansible/latest/collections/community/mysql/mysql_user_module.html#ansible-collections-community-mysql-mysql-user-module
Thanks!
The following is based on the Ansible role I created for MySQL/Percona and is idempotent.
This is the playbook you could use, taken from the repo described above.
This sets the 'debian-sys-maint' user as a root user of the database.
It also assumes you are setting up MySQL for the first time, not working against an instance that is already active/installed.
---
- name: root | stat to check whether /root/.my.cnf exists
  stat:
    path: /root/.my.cnf
  register: cnf_file

- block:
    - name: root | place temporary cnf file
      template:
        src: temp_cnf.j2
        dest: /etc/my.cnf
        mode: '0644'

    - name: root | start mysql to add the debian-sys-maint user
      systemd:
        name: mysql
        state: started
        enabled: true

    - name: root | get temp root password
      shell: >-
        grep 'temporary password' /var/log/mysqld.log |
        awk '{print $NF}' | tail -n 1
      register: temp_root_pw
      no_log: true

    - name: root | set root password
      shell: >-
        mysqladmin -u root
        --password="{{ temp_root_pw.stdout }}"
        password "{{ mysql_root_password }}"
      no_log: true

    - name: root | set debian-sys-maint user and password
      mysql_user:
        name: debian-sys-maint
        password: "{{ mysql_system_password }}"
        priv: '*.*:ALL,GRANT'
        update_password: always
        state: present
        login_unix_socket: /var/run/mysqld/mysqld.sock
        login_user: root
        login_password: "{{ mysql_root_password }}"
      no_log: true

    - name: root | copy root.cnf
      template:
        src: root.cnf.j2
        dest: /etc/mysql/root.cnf
        mode: '0600'
        owner: root
        group: root

    - name: root | make symlink of file for root db access
      file:
        state: link
        src: /etc/mysql/root.cnf
        path: /root/.my.cnf

    - name: root | delete anonymous connections
      mysql_user:
        name: ""
        host_all: true
        state: absent
      no_log: true

    - name: root | secure root user
      mysql_user:
        name: root
        host: "{{ item }}"
      no_log: true
      loop:
        - ::1
        - 127.0.0.1
        - localhost

    - name: root | ensure test database is removed
      mysql_db:
        name: test
        login_user: root
        state: absent

    - name: root | stop mysql again
      systemd:
        name: mysql
        state: stopped
        enabled: true

    - name: root | remove mysqld log file
      file:
        path: /var/log/mysqld.log
        state: absent
  when: not cnf_file.stat.exists
The temp_cnf.j2:
[client]
socket=/var/run/mysqld/mysqld.sock
[mysqld]
server-id=1
datadir=/var/lib/mysql
socket=/var/run/mysqld/mysqld.sock
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
And the root.cnf.j2:
{{ ansible_managed | comment }}
# This file is symlinked to /root/.my.cnf to allow passwordless login for the root user
[client]
socket = {{ mysqld.socket }}
user = debian-sys-maint
password = {{ mysql_system_password }}

[mysql_upgrade]
socket = {{ mysqld.socket }}
user = debian-sys-maint
password = {{ mysql_system_password }}
Some vars:
mysql_root_password: my_password
mysql_system_password: my_password
mysqld:
  socket: /var/run/mysqld/mysqld.sock
Should work for CentOS 8, Rocky Linux and Oracle Linux as well.
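A quick way to exercise the idempotence claim; a minimal sketch, assuming the tasks above are wrapped in a play (hosts, become: true) saved as mysql-root.yml, with an inventory file named inventory (all names are examples):

# The first run performs the full setup; the second should report no changes,
# because the stat check on /root/.my.cnf skips the whole block.
ansible-playbook -i inventory mysql-root.yml
ansible-playbook -i inventory mysql-root.yml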
Regarding the initial question: for RHEL 7 and MySQL Server 8.0.21, I've found the following approach to work in that environment.
- name: Delete all anonymous SQL user accounts
  mysql_user:
    user: ""
    host_all: yes
    state: absent

- name: Remove the SQL test database
  mysql_db:
    db: test
    state: absent

- name: Change root user password on first run
  mysql_user:
    login_user: root
    login_password: ''
    name: root
    password: "{{ SQL_ROOT_PASSWORD }}"
    priv: "*.*:ALL,GRANT"
    host: "{{ item }}"
  with_items:
    - "{{ ansible_hostname }}"
    - "127.0.0.1"
    - "::1"
    - "localhost"
I am trying to run migrations through Sequelize in Node.js on Google Cloud Run, connecting to a MySQL Google Cloud SQL database. I followed
https://stackoverflow.com/a/58441728/4487248 to set up the Cloud SQL proxy. Given this log, setting up the proxy connection to the database seems to have worked:
Step #2 - "migrate": Already have image (with digest): gcr.io/cloud-builders/yarn
Step #2 - "migrate": 2021/10/02 14:19:58 current FDs rlimit set to 1048576, wanted limit is 8500. Nothing to do here.
Step #2 - "migrate": 2021/10/02 14:19:58 Listening on /workspace/<MY-INSTANCE-NAME> for <MY-INSTANCE-NAME>
Step #2 - "migrate": 2021/10/02 14:19:58 Ready for new connections
Step #2 - "migrate": 2021/10/02 14:19:58 Generated RSA key in 74.706896ms
However, when I try to run migrations with yarn knex migrate:latest or ./node_modules/.bin/sequelize db:migrate, I run into:
getaddrinfo ENOTFOUND /workspace/<MY-INSTANCE-NAME>
This seems to imply that the host could not be found.
My cloudbuild.yaml (composed of https://stackoverflow.com/a/52366671/4487248 & https://stackoverflow.com/a/58441728/4487248):
steps:
  # Install Node.js dependencies
  - id: yarn-install
    name: gcr.io/cloud-builders/yarn
    waitFor: ["-"]

  # Install Cloud SQL proxy
  - id: proxy-install
    name: gcr.io/cloud-builders/yarn
    entrypoint: sh
    args:
      - "-c"
      - "wget https://storage.googleapis.com/cloudsql-proxy/v1.25.0/cloud_sql_proxy.linux.amd64 -O /workspace/cloud_sql_proxy && chmod +x /workspace/cloud_sql_proxy"
    waitFor: ["-"]

  # Start the proxy in the background, then run the migrations
  - id: migrate
    name: gcr.io/cloud-builders/yarn
    entrypoint: sh
    args:
      - "-c"
      - "(/workspace/cloud_sql_proxy -dir=/workspace -instances=<MY-INSTANCE-NAME> & sleep 2) && ./node_modules/.bin/sequelize db:migrate"
    timeout: "1200s"
    waitFor: ["yarn-install", "proxy-install"]

timeout: "1200s"
My .sequelizerc:
const path = require('path');

module.exports = {
  'config': path.resolve('config', 'config.js')
}
My config/config.js:
module.exports = {
production: {
username: process.env.PROD_DB_USERNAME,
password: process.env.PROD_DB_PASSWORD,
database: process.env.PROD_DB_NAME,
host: `/workspace/${process.env.INSTANCE_CONNECTION_NAME}`, // Replacing this line with `/workspace/cloudsql/${..}` or `/cloudsql/${..}` leads to the same error
dialect: 'mysql',
}
}
I did enable Public IP on the MySQL instance.
Setting the host to localhost and passing the instance path via socketPath in config.js fixed the issue:
module.exports = {
  production: {
    username: process.env.PROD_DB_USERNAME,
    password: process.env.PROD_DB_PASSWORD,
    database: process.env.PROD_DB_NAME,
    host: 'localhost',
    dialect: 'mysql',
    dialectOptions: {
      // Connect through the unix socket created by the Cloud SQL proxy
      socketPath: `/workspace/${process.env.INSTANCE_CONNECTION_NAME}`,
    },
  },
};
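To isolate whether the proxy socket itself works, a minimal check, assuming the mysql client is available in the build step and the same environment variables are set (that client is an assumption, not something the cloudbuild.yaml above installs):

# Connect through the unix socket created by the Cloud SQL proxy; if this
# succeeds, any remaining failure is in the Sequelize config, not the proxy.
mysql --socket="/workspace/${INSTANCE_CONNECTION_NAME}" \
  --user="${PROD_DB_USERNAME}" \
  --password="${PROD_DB_PASSWORD}" \
  --execute="SELECT 1;"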
When I run realize start for a Go program, I get this error:
[14:55:13][V2-USER-API.YUMMY.ID] : Watching 159 file/s 118 folder/s
[14:55:13][V2-USER-API.YUMMY.ID] : Install started
[14:55:13][V2-USER-API.YUMMY.ID] : Install
exec: not started
I have set up my .realize.yaml file like this:
settings:
  legacy:
    force: false
    interval: 0s
schema:
  - name: v2-user-api.yummy.id
    path: ./cmd/server
    commands:
      run:
        status: true
    watcher:
      extensions:
        - go
      paths:
        - ../../
      ignored_paths:
        - .git
        - .realize
        - vendor
but I get the error above after running realize start.
This worked for me:
#!/usr/bin/env bash
# Install realize in GOPATH mode (Go modules disabled)
export GO111MODULE=off
cd ~/
go get github.com/oxequa/realize
# Pin the checkout to the v2.0.2 tag and rebuild
cd /go/src/github.com/oxequa/realize && \
git fetch && \
git checkout v2.0.2 && \
go get github.com/oxequa/realize
RV=$(realize --version)
echo "Realize installed #: $RV"
export GO111MODULE=on
Use realize version v2.0.2.
For the past two weeks, I've been working on MySQL deployment with Ansible. I have to install MySQL on a logical volume (LV).
Before deploying MySQL, the Ansible script creates /var/lib/mysql, creates the LV, and mounts it on /var/lib/mysql. Then it creates the mysql user and group in order to set 0700 permissions on the MySQL directory. Once that's done, Ansible deploys MySQL 5.7.
Part of my Ansible code:
- name: "Group : mysql"
group:
name: "mysql"
state: "present"
tags:
- User mysql
- name: "user : mysql"
user:
name: "mysql"
shell: "mysql"
group: "mysql"
createhome: "no"
append: "True"
state: "present"
tags:
- User
- name: "Set rights on mysql dir "
file:
path: "/var/lib/mysql"
owner: "mysql"
group: "mysql"
mode: 0700
tags:
- mysql dir rights
- name: "mysql root password"
debconf:
name: "mysql-server"
question: "mysql-server/root_password"
value: "{{ password_root_mysql }}"
vtype: "password"
when: password_root_mysql is defined
tags:
- Install
- name: "mysql root password confirmation"
debconf:
name: "mysql-server"
question: "mysql-server/root_password_again"
value: "{{ password_root_mysql }}"
vtype: "password"
when: password_root_mysql is defined
tags:
- Install mysql
- name: "Install : MySQL Server"
apt:
update_cache: "True"
name: "mysql-server"
install_recommends: "True"
tags:
- Install mysql
notify:
- stop mysql
- name: "Copie du template root.cnf.j2 vers root/.my.cnf "
template:
src: "{{ mysql_template_rootcnf }}"
dest: "~/.my.cnf"
owner: "root"
mode: "0600"
tags:
- Install mysql
So when I try to install mysql-server without any LV or directory preparation, it works. But when I prepare the MySQL directory with the right permissions first, the installation fails, whether I deploy manually or automatically.
Any ideas?
Ubuntu 16.04 with MySQL 5.7.
Ansible v2.7
OK, I've found the problem: the lost+found directory in /var/lib/mysql (the LV is mounted on it) is treated as a database, and MySQL doesn't like that. In my code, I've just added:
- name: "Remove lost+found from {{ mysql_dir }}"
file:
path: "{{ mysql_dir }}/lost+found"
state: absent
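For reference, the one-off manual equivalent before installing mysql-server would be something like this, assuming the LV is mounted on /var/lib/mysql (mkfs creates lost+found at the root of every ext filesystem):

# Remove the directory mkfs created so mysqld does not mistake it for a database
sudo rm -rf /var/lib/mysql/lost+found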
I am trying to create an OpenShift template for a Job that passes the job's command-line arguments in a template parameter, using the following template:
apiVersion: v1
kind: Template
metadata:
  name: test-template
objects:
  - apiVersion: batch/v2alpha1
    kind: Job
    metadata:
      name: "${JOB_NAME}"
    spec:
      parallelism: 1
      completions: 1
      autoSelector: true
      template:
        metadata:
          name: "${JOB_NAME}"
        spec:
          containers:
            - name: "app"
              image: "batch-poc/sample-job:latest"
              args: "${{JOB_ARGS}}"
parameters:
  - name: JOB_NAME
    description: "Job Name"
    required: true
  - name: JOB_ARGS
    description: "Job command line parameters"
Because the 'args' need to be an array, I am trying to set the template parameter using JSON syntax, e.g. from the command line:
oc process -o=yaml test-template -v=JOB_NAME=myjob,JOB_ARGS='["A","B"]'
or programmatically through the Spring Cloud Launcher OpenShift Client:
OpenShiftClient client;
Map<String, String> templateParameters = new HashMap<String, String>();
templateParameters.put("JOB_NAME", jobId);
templateParameters.put("JOB_ARGS", "[ \"A\", \"B\", \"C\" ]");
KubernetesList processed = client.templates()
    .inNamespace(client.getNamespace())
    .withName("test-template")
    .process(templateParameters);
In both cases, it seems to fail because OpenShift interprets the comma after the first array element as a delimiter and does not parse the remainder of the string.
The oc process command sets the parameter value to '["A"' and reports an error: "invalid parameter assignment in "test-template": "\"B\"]"".
The Java version throws an exception:
Error executing: GET at: https://kubernetes.default.svc/oapi/v1/namespaces/batch-poc/templates/test-template. Cause: Can not deserialize instance of java.util.ArrayList out of VALUE_STRING token\n at [Source: N/A; line: -1, column: -1] (through reference chain: io.fabric8.openshift.api.model.Template[\"objects\"]->java.util.ArrayList[0]->io.fabric8.kubernetes.api.model.Job[\"spec\"]->io.fabric8.kubernetes.api.model.JobSpec[\"template\"]->io.fabric8.kubernetes.api.model.PodTemplateSpec[\"spec\"]->io.fabric8.kubernetes.api.model.PodSpec[\"containers\"]->java.util.ArrayList[0]->io.fabric8.kubernetes.api.model.Container[\"args\"])
I believe this is due to a known OpenShift issue.
I was wondering if anyone has a workaround or an alternative way of setting the job's parameters?
Interestingly, if I go to the OpenShift web console, click 'Add to Project' and choose test-template, it prompts me to enter a value for the JOB_ARGS parameter. If I enter a literal JSON array there, it works, so I figure there must be a way to do this programmatically.
We worked out how to do it; template snippet:
spec:
  securityContext:
    supplementalGroups: "${{SUPPLEMENTAL_GROUPS}}"
parameters:
  - description: Supplemental linux groups
    name: SUPPLEMENTAL_GROUPS
    value: "[14051, 14052, 48, 65533, 9050]"
In our case we have 3 files:
- environment configuration
- template yaml
- an sh file which runs oc process
And the working setup looks like this:
environment file:
#-- CORS ---------------------------------------------------------
cors_origins='["*"]'
cors_acceptable_headers='["*","Authorization"]'
template yaml:
- apiVersion: configuration.konghq.com/v1
  kind: KongPlugin
  metadata:
    name: plugin-common-cors
    annotations:
      kubernetes.io/ingress.class: ${ingress_class}
  config:
    origins: "${{origins}}"
    headers: "${{acceptable_headers}}"
    credentials: true
    max_age: 3600
  plugin: cors
sh file running oc:
if [ -f templates/kong-plugins-template.yaml ]; then
  echo "++ Applying Global Plugin Template ..."
  oc process -f templates/kong-plugins-template.yaml \
    -p ingress_class="${kong_ingress_class}" \
    -p origins=${cors_origins} \
    -p acceptable_headers=${cors_acceptable_headers} \
    -p request_per_second=${kong_throttling_request_per_second:-100} \
    -p request_per_minute=${kong_throttling_request_per_minute:-2000} \
    -p rate_limit_by="${kong_throttling_limit_by:-ip}" \
    -o yaml \
    > yaml.tmp && \
    cat yaml.tmp | oc $param_mode -f -
  [ $? -ne 0 ] && [ "$param_mode" != "delete" ] && exit 1
  rm -f *.tmp
fi
The sh file should source the environment file first, so the cors_* variables are defined when oc process runs.
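A minimal sketch of that sourcing step, assuming the environment file above is saved as environment.cfg next to the script (the file name is just an example):

#!/usr/bin/env bash
set -e

# Load cors_origins and cors_acceptable_headers so the -p flags above
# expand to the JSON arrays instead of empty strings.
. ./environment.cfg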