Will supervisor change the program's master process? (gunicorn)

Supervisor version: 3.3.3
I use supervisor to manage gunicorn processes. It works very well, but recently, when I have supervisor send HUP to gunicorn to reload it, the gunicorn logs occasionally show:
[2018-07-03 16:16:49 +0000] [25949] [INFO] Handling signal: hup
[2018-07-03 16:16:49 +0000] [25949] [INFO] Hang up: Master
[2018-07-03 16:16:52 +0000] [18459] [INFO] Parent changed, shutting down: <Worker 18459>
[2018-07-03 16:16:52 +0000] [18459] [INFO] Worker exiting (pid: 18459)
[2018-07-03 16:17:00 +0000] [18458] [INFO] Parent changed, shutting down: <Worker 18458>
[2018-07-03 16:17:00 +0000] [18458] [INFO] Worker exiting (pid: 18458)
[2018-07-03 16:17:00 +0000] [18456] [INFO] Parent changed, shutting down: <Worker 18456>
[2018-07-03 16:17:00 +0000] [18456] [INFO] Worker exiting (pid: 18456)
[2018-07-03 16:17:00 +0000] [18455] [INFO] Parent changed, shutting down: <Worker 18455>
[2018-07-03 16:17:00 +0000] [18455] [INFO] Worker exiting (pid: 18455)
[2018-07-03 16:17:00 +0000] [18457] [INFO] Parent changed, shutting down: <Worker 18457>
I have no idea how to handle this situation.
Did supervisor change the gunicorn master process?
And how can I deal with this situation?
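For reference, the usual pattern is to run gunicorn in the foreground under supervisor and reload it with supervisorctl signal rather than restart, since a restart replaces the master process entirely. A minimal sketch of a program block (paths, module name, and worker count are placeholders):

[program:gunicorn]
; run in the foreground; never pass --daemon under supervisor
command=/srv/app/venv/bin/gunicorn myapp.wsgi:application --workers 4 --bind 127.0.0.1:8000
directory=/srv/app
autostart=true
autorestart=true
stopsignal=TERM

; graceful reload without replacing the master (supervisor 3.2+):
;   supervisorctl signal HUP gunicorn

Gunicorn workers log "Parent changed, shutting down" when their parent PID no longer matches the master's, which typically means the master process was replaced or restarted rather than merely sent HUP.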

Related

Docker Spring Boot with MySQL database, connection refused

I am trying to connect from my Spring Boot container to a MySQL database, but the connection fails.
This is my compose file:
version: "3.8"
services:
  myapi:
    image: myapi:local
    container_name: myapi
    build:
      context: ./../
      dockerfile: ./docker/Dockerfile
    ports:
      - 5000:5000
    depends_on:
      - mysql
    environment:
      - SERVER_PORT=5000
      - SPRING_DATASOURCE_URL=jdbc:mysql://mysql:3306/testdb
      - SPRING_DATASOURCE_USERNAME=admindev
      - SPRING_DATASOURCE_PASSWORD=admindev
      - SPRING_DATASOURCE_DRIVER_CLASS_NAME=com.mysql.cj.jdbc.Driver
      - SPRING_JPA_HIBERNATE_DDL_AUTO=update
      - SPRING_JPA_DATABASE_PLATFORM=org.hibernate.dialect.MySQL5InnoDBDialect
    volumes:
      - mysql-data:/var/lib/mysql
  mysql:
    image: mysql:8
    container_name: mysql
    restart: always
    ports:
      - 5001:3306
    environment:
      - MYSQL_ROOT_PASSWORD=1234
      - MYSQL_USER=admindev
      - MYSQL_PASSWORD=admindev
      - MYSQL_DATABASE=testdb
volumes:
  mysql-data:
    driver: local
The MySQL container starts and waits for connections.
The application also starts, but it fails as soon as it tries to connect to the database.
I can't find anything wrong with the docker-compose file. I looked over the documentation and saw that I can even omit the port in the data source URL.
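As a quick sanity check (commands are illustrative and assume the container names above), verify that MySQL accepts the credentials and that the API container can resolve the mysql hostname on the compose network:

docker exec -it mysql mysql -uadmindev -padmindev testdb -e 'SELECT 1'
docker exec -it myapi getent hosts mysql   # getent ships with Debian-based images such as openjdk:11-jdk-slim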
UPDATE
mysql container logs:
2021-03-05 13:20:21+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.23-1debian10 started.
2021-03-05 13:20:21+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
2021-03-05 13:20:21+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.23-1debian10 started.
2021-03-05 13:20:21+00:00 [Note] [Entrypoint]: Initializing database files
2021-03-05T13:20:21.662023Z 0 [System] [MY-013169] [Server] /usr/sbin/mysqld (mysqld 8.0.23) initializing of server in progress as process 42
2021-03-05T13:20:21.675007Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
2021-03-05T13:20:22.643221Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
2021-03-05T13:20:24.871463Z 6 [Warning] [MY-010453] [Server] root@localhost is created with an empty password ! Please consider switching off the --initialize-insecure option.
2021-03-05 13:20:29+00:00 [Note] [Entrypoint]: Database files initialized
2021-03-05 13:20:29+00:00 [Note] [Entrypoint]: Starting temporary server
2021-03-05T13:20:29.854747Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.23) starting as process 87
2021-03-05T13:20:29.882866Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
2021-03-05T13:20:30.177247Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
2021-03-05T13:20:30.340904Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Socket: /var/run/mysqld/mysqlx.sock
2021-03-05T13:20:30.477364Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.
2021-03-05T13:20:30.477747Z 0 [System] [MY-013602] [Server] Channel mysql_main configured to support TLS. Encrypted connections are now supported for this channel.
2021-03-05T13:20:30.482566Z 0 [Warning] [MY-011810] [Server] Insecure configuration for --pid-file: Location '/var/run/mysqld' in the path is accessible to all OS users. Consider choosing a different directory.
2021-03-05T13:20:30.505587Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.23' socket: '/var/run/mysqld/mysqld.sock' port: 0 MySQL Community Server - GPL.
2021-03-05 13:20:30+00:00 [Note] [Entrypoint]: Temporary server started.
Warning: Unable to load '/usr/share/zoneinfo/iso3166.tab' as time zone. Skipping it.
Warning: Unable to load '/usr/share/zoneinfo/leap-seconds.list' as time zone. Skipping it.
Warning: Unable to load '/usr/share/zoneinfo/zone.tab' as time zone. Skipping it.
Warning: Unable to load '/usr/share/zoneinfo/zone1970.tab' as time zone. Skipping it.
2021-03-05 13:20:33+00:00 [Note] [Entrypoint]: Creating database thesis
2021-03-05 13:20:33+00:00 [Note] [Entrypoint]: Creating user admindev
2021-03-05 13:20:33+00:00 [Note] [Entrypoint]: Giving user admindev access to schema thesis
2021-03-05 13:20:33+00:00 [Note] [Entrypoint]: Stopping temporary server
2021-03-05T13:20:33.332383Z 13 [System] [MY-013172] [Server] Received SHUTDOWN from user root. Shutting down mysqld (Version: 8.0.23).
2021-03-05T13:20:35.763423Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.23) MySQL Community Server - GPL.
2021-03-05 13:20:36+00:00 [Note] [Entrypoint]: Temporary server stopped
2021-03-05 13:20:36+00:00 [Note] [Entrypoint]: MySQL init process done. Ready for start up.
2021-03-05T13:20:36.568558Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.23) starting as process 1
2021-03-05T13:20:36.580255Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
2021-03-05T13:20:36.776157Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
2021-03-05T13:20:36.885922Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Bind-address: '::' port: 33060, socket: /var/run/mysqld/mysqlx.sock
2021-03-05T13:20:36.950707Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.
2021-03-05T13:20:36.950893Z 0 [System] [MY-013602] [Server] Channel mysql_main configured to support TLS. Encrypted connections are now supported for this channel.
2021-03-05T13:20:36.956239Z 0 [Warning] [MY-011810] [Server] Insecure configuration for --pid-file: Location '/var/run/mysqld' in the path is accessible to all OS users. Consider choosing a different directory.
2021-03-05T13:20:36.971274Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.23' socket: '/var/run/mysqld/mysqld.sock' port: 3306 MySQL Community Server - GPL.
myapi container logs:
2021-03-05 13:23:09.481 INFO 1 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Bootstrapping Spring Data JPA repositories in DEFERRED mode.
2021-03-05 13:23:09.566 INFO 1 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Finished Spring Data repository scanning in 74ms. Found 1 JPA repository interfaces.
2021-03-05 13:23:10.186 INFO 1 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.security.access.expression.method.DefaultMethodSecurityExpressionHandler@2eee3069' of type [org.springframework.security.access.expression.method.DefaultMethodSecurityExpressionHandler] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2021-03-05 13:23:10.194 INFO 1 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'methodSecurityMetadataSource' of type [org.springframework.security.access.method.DelegatingMethodSecurityMetadataSource] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2021-03-05 13:23:10.714 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 5000 (http)
2021-03-05 13:23:10.732 INFO 1 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat]
2021-03-05 13:23:10.732 INFO 1 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/9.0.38]
2021-03-05 13:23:10.828 INFO 1 --- [ main] o.a.c.c.C.[Tomcat].[localhost].[/api] : Initializing Spring embedded WebApplicationContext
2021-03-05 13:23:10.828 INFO 1 --- [ main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 2738 ms
2021-03-05 13:23:11.152 INFO 1 --- [ main] o.s.s.concurrent.ThreadPoolTaskExecutor : Initializing ExecutorService 'applicationTaskExecutor'
2021-03-05 13:23:11.222 INFO 1 --- [ task-1] o.hibernate.jpa.internal.util.LogHelper : HHH000204: Processing PersistenceUnitInfo [name: default]
2021-03-05 13:23:11.289 WARN 1 --- [ main] JpaBaseConfiguration$JpaWebConfiguration : spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
2021-03-05 13:23:11.320 INFO 1 --- [ task-1] org.hibernate.Version : HHH000412: Hibernate ORM core version 5.4.21.Final
2021-03-05 13:23:11.606 INFO 1 --- [ task-1] o.hibernate.annotations.common.Version : HCANN000001: Hibernate Commons Annotations {5.1.0.Final}
2021-03-05 13:23:11.782 INFO 1 --- [ task-1] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Starting...
2021-03-05 13:23:12.386 INFO 1 --- [ main] o.s.s.web.DefaultSecurityFilterChain : Creating filter chain: any request, [org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@5649ec46, org.springframework.security.web.context.SecurityContextPersistenceFilter@3e598df9, org.springframework.security.web.header.HeaderWriterFilter@7f9e1534, org.springframework.web.filter.CorsFilter@78dc4696, org.springframework.security.web.authentication.logout.LogoutFilter@79b663b3, stefan.buciu.config.filter.AuthenticationFilter@502f8b57, stefan.buciu.config.filter.AuthorizationFilter@5652f555, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@99a65d3, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@6813a331, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@4fe01805, org.springframework.security.web.session.SessionManagementFilter@81ff872, org.springframework.security.web.access.ExceptionTranslationFilter@5f8890c2]
2021-03-05 13:23:13.038 ERROR 1 --- [ task-1] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Exception during pool initialization.
com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at com.mysql.cj.jdbc.exceptions.SQLError.createCommunicationsException(SQLError.java:174) ~[mysql-connector-java-8.0.21.jar!/:8.0.21]
at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:64) ~[mysql-connector-java-8.0.21.jar!/:8.0.21]
at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:836) ~[mysql-connector-java-8.0.21.jar!/:8.0.21]
at com.mysql.cj.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:456) ~[mysql-connector-java-8.0.21.jar!/:8.0.21]
at com.mysql.cj.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:246) ~[mysql-connector-java-8.0.21.jar!/:8.0.21]
at com.mysql.cj.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:197) ~[mysql-connector-java-8.0.21.jar!/:8.0.21]
at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:138) ~[HikariCP-3.4.5.jar!/:na]
at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:358) ~[HikariCP-3.4.5.jar!/:na]
at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:206) ~[HikariCP-3.4.5.jar!/:na]
at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:477) ~[HikariCP-3.4.5.jar!/:na]
at com.zaxxer.hikari.pool.HikariPool.checkFailFast(HikariPool.java:560) ~[HikariCP-3.4.5.jar!/:na]
at com.zaxxer.hikari.pool.HikariPool.<init>(HikariPool.java:115) ~[HikariCP-3.4.5.jar!/:na]
at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:112) ~[HikariCP-3.4.5.jar!/:na]
at org.hibernate.engine.jdbc.connections.internal.DatasourceConnectionProviderImpl.getConnection(DatasourceConnectionProviderImpl.java:122) ~[hibernate-core-5.4.21.Final.jar!/:5.4.21.Final]
at org.hibernate.engine.jdbc.env.internal.JdbcEnvironmentInitiator$ConnectionProviderJdbcConnectionAccess.obtainConnection(JdbcEnvironmentInitiator.java:180) ~[hibernate-core-5.4.21.Final.jar!/:5.4.21.Final]
at org.hibernate.engine.jdbc.env.internal.JdbcEnvironmentInitiator.initiateService(JdbcEnvironmentInitiator.java:68) ~[hibernate-core-5.4.21.Final.jar!/:5.4.21.Final]
at org.hibernate.engine.jdbc.env.internal.JdbcEnvironmentInitiator.initiateService(JdbcEnvironmentInitiator.java:35) ~[hibernate-core-5.4.21.Final.jar!/:5.4.21.Final]
at org.hibernate.boot.registry.internal.StandardServiceRegistryImpl.initiateService(StandardServiceRegistryImpl.java:101) ~[hibernate-core-5.4.21.Final.jar!/:5.4.21.Final]
at org.hibernate.service.internal.AbstractServiceRegistryImpl.createService(AbstractServiceRegistryImpl.java:263) ~[hibernate-core-5.4.21.Final.jar!/:5.4.21.Final]
at org.hibernate.service.internal.AbstractServiceRegistryImpl.initializeService(AbstractServiceRegistryImpl.java:237) ~[hibernate-core-5.4.21.Final.jar!/:5.4.21.Final]
at org.hibernate.service.internal.AbstractServiceRegistryImpl.getService(AbstractServiceRegistryImpl.java:214) ~[hibernate-core-5.4.21.Final.jar!/:5.4.21.Final]
at org.hibernate.id.factory.internal.DefaultIdentifierGeneratorFactory.injectServices(DefaultIdentifierGeneratorFactory.java:152) ~[hibernate-core-5.4.21.Final.jar!/:5.4.21.Final]
at org.hibernate.service.internal.AbstractServiceRegistryImpl.injectDependencies(AbstractServiceRegistryImpl.java:286) ~[hibernate-core-5.4.21.Final.jar!/:5.4.21.Final]
at org.hibernate.service.internal.AbstractServiceRegistryImpl.initializeService(AbstractServiceRegistryImpl.java:243) ~[hibernate-core-5.4.21.Final.jar!/:5.4.21.Final]
at org.hibernate.service.internal.AbstractServiceRegistryImpl.getService(AbstractServiceRegistryImpl.java:214) ~[hibernate-core-5.4.21.Final.jar!/:5.4.21.Final]
at org.hibernate.boot.internal.InFlightMetadataCollectorImpl.<init>(InFlightMetadataCollectorImpl.java:176) ~[hibernate-core-5.4.21.Final.jar!/:5.4.21.Final]
at org.hibernate.boot.model.process.spi.MetadataBuildingProcess.complete(MetadataBuildingProcess.java:118) ~[hibernate-core-5.4.21.Final.jar!/:5.4.21.Final]
at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl.metadata(EntityManagerFactoryBuilderImpl.java:1224) ~[hibernate-core-5.4.21.Final.jar!/:5.4.21.Final]
at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl.build(EntityManagerFactoryBuilderImpl.java:1255) ~[hibernate-core-5.4.21.Final.jar!/:5.4.21.Final]
at org.springframework.orm.jpa.vendor.SpringHibernateJpaPersistenceProvider.createContainerEntityManagerFactory(SpringHibernateJpaPersistenceProvider.java:58) ~[spring-orm-5.2.9.RELEASE.jar!/:5.2.9.RELEASE]
at org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean.createNativeEntityManagerFactory(LocalContainerEntityManagerFactoryBean.java:365) ~[spring-orm-5.2.9.RELEASE.jar!/:5.2.9.RELEASE]
at org.springframework.orm.jpa.AbstractEntityManagerFactoryBean.buildNativeEntityManagerFactory(AbstractEntityManagerFactoryBean.java:391) ~[spring-orm-5.2.9.RELEASE.jar!/:5.2.9.RELEASE]
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[na:na]
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[na:na]
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[na:na]
at java.base/java.lang.Thread.run(Thread.java:834) ~[na:na]
Caused by: com.mysql.cj.exceptions.CJCommunicationsException: Communications link failure
my application.yml file:
spring:
  datasource:
    username: admindev
    password: admindev
    url: jdbc:mysql://localhost:3306/testdb
    driver-class-name: com.mysql.cj.jdbc.Driver
  jpa:
    hibernate:
      ddl-auto: update
    database-platform: org.hibernate.dialect.MySQL5InnoDBDialect
server:
  port: 5000
  servlet:
    contextPath: /api
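Note that inside the container this localhost URL would point at the API container itself, not at MySQL; the SPRING_DATASOURCE_URL variable from the compose file is expected to override it through Spring Boot's relaxed binding. One illustrative way to confirm the variables actually reach the container:

docker exec myapi env | grep SPRING_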
I also thought of attaching the Dockerfile:
FROM openjdk:11-jdk-slim
RUN mkdir -p /home/app
COPY ./target/*.jar /home/app/application.jar
WORKDIR /home/app
CMD ["java", "-jar", "application.jar"]
In the COPY command I used a wildcard because Maven produces only one *.jar, and I didn't want to change my Dockerfile every time the version changes. That command works as long as there is only one matching file.
The artefacts are built using mvn package, and there were no errors there; if I run java -jar *.jar myself in the target directory, the server starts without any problems.
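One pitfall worth ruling out: in the version 3 compose format, depends_on only orders container startup; it does not wait for MySQL to actually accept connections. A sketch of a readiness gate using a healthcheck (supported by recent Compose releases; the ping command assumes the root password from the compose file above):

services:
  mysql:
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "127.0.0.1", "-uroot", "-p1234"]
      interval: 5s
      timeout: 3s
      retries: 10
  myapi:
    depends_on:
      mysql:
        condition: service_healthy

Alternatively, restart: on-failure on the API service lets it retry until the database is up, with less machinery.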

Not able to start MySQL service in Ubuntu 20.10

Issue Details:
I had been running MySQL 8 on one of my Raspberry Pi 4 (8 GB) boards for a few months; however, since this morning I have been unable to start MySQL using sudo service mysql start, and I receive the error message below:
Job for mysql.service failed because the control process exited with error code.
See "systemctl status mysql.service" and "journalctl -xe" for details.
Troubleshooting & Analysis
Re-installed MySQL 8
Tried to start the MySQL service using the above command, and the service started without any issues.
However, when I updated the data directory in the /etc/mysql/mysql.conf.d/mysqld.cnf file, the issue crept up again.
Does this tell me that there could be an issue with the data directory? Any solution to recover from this scenario?
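For reference, a typical sequence for relocating the MySQL data directory on Ubuntu looks roughly like the sketch below (the new path is a placeholder; AppArmor confinement is a frequent reason a relocated datadir refuses to start):

sudo systemctl stop mysql
sudo rsync -a /var/lib/mysql/ /mnt/data/mysql/
sudo chown -R mysql:mysql /mnt/data/mysql
# point datadir in /etc/mysql/mysql.conf.d/mysqld.cnf at the new path:
#   datadir = /mnt/data/mysql
# and declare the alias in /etc/apparmor.d/tunables/alias:
#   alias /var/lib/mysql/ -> /mnt/data/mysql/,
sudo systemctl restart apparmor
sudo systemctl start mysql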
Execution Result: systemctl status mysql.service
mysql.service - MySQL Community Server
Loaded: loaded (/lib/systemd/system/mysql.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Tue 2021-02-02 18:08:10 IST; 1min 16s ago
Process: 3285 ExecStartPre=/usr/share/mysql/mysql-systemd-start pre (code=exited, status=0/SUCCESS)
Process: 3294 ExecStart=/usr/sbin/mysqld (code=exited, status=2)
Main PID: 3294 (code=exited, status=2)
Status: "Server startup in progress"
error.log
2021-02-02T12:38:03.012089Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.23-0ubuntu0.20.10.1) starting as process 3185
2021-02-02T12:38:03.059589Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
2021-02-02T12:38:03.596555Z 1 [ERROR] [MY-013183] [InnoDB] Assertion failure: fil0fil.cc:10815:initial_fsize == (file->size * phy_page_size) thread 281473363472416
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/8.0/en/forcing-innodb-recovery.html
InnoDB: about forcing recovery.
12:38:03 UTC - mysqld got signal 6 ;
Most likely, you have hit a bug, but this error can also be caused by malfunctioning hardware.
Thread pointer: 0xaaaadfc38ef0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = ffff9fd7e690 thread_stack 0x46000
/usr/sbin/mysqld(my_print_stacktrace(unsigned char const*, unsigned long)+0x44) [0xaaaaceb08ae4]
/usr/sbin/mysqld(handle_fatal_signal+0x294) [0xaaaacdd1e484]
linux-vdso.so.1(__kernel_rt_sigreturn+0) [0xffffacbaf5d8]
/lib/aarch64-linux-gnu/libc.so.6(gsignal+0xd8) [0xffffac24ef58]
/lib/aarch64-linux-gnu/libc.so.6(abort+0xf4) [0xffffac23b50c]
/usr/sbin/mysqld(ut_dbg_assertion_failed(char const*, char const*, unsigned long)+0x26c) [0xaaaacedb993c]
/usr/sbin/mysqld(fil_tablespace_redo_extend(unsigned char*, unsigned char const*, page_id_t const&, unsigned long, bool)+0x4a8) [0xaaaaceeedd68]
/usr/sbin/mysqld(+0x1e79ba0) [0xaaaacec9aba0]
/usr/sbin/mysqld(+0x1e7aa44) [0xaaaacec9ba44]
/usr/sbin/mysqld(+0x1e7f670) [0xaaaaceca0670]
/usr/sbin/mysqld(recv_recovery_from_checkpoint_start(log_t&, unsigned long)+0x580) [0xaaaaceca1570]
/usr/sbin/mysqld(srv_start(bool)+0x1e68) [0xaaaaced6f7c8]
/usr/sbin/mysqld(+0x1dc8154) [0xaaaacebe9154]
/usr/sbin/mysqld(dd::bootstrap::DDSE_dict_init(THD*, dict_init_mode_t, unsigned int)+0x6c) [0xaaaace9205fc]
/usr/sbin/mysqld(dd::upgrade_57::do_pre_checks_and_initialize_dd(THD*)+0x164) [0xaaaacead8224]
/usr/sbin/mysqld(+0xf9d874) [0xaaaacddbe874]
/usr/sbin/mysqld(+0x21b9d84) [0xaaaacefdad84]
/lib/aarch64-linux-gnu/libpthread.so.0(+0x7f74) [0xffffac64ff74]
/lib/aarch64-linux-gnu/libc.so.6(+0xd73dc) [0xffffac2ee3dc]
Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (0): is an invalid pointer
Connection ID (thread ID): 1
Status: NOT_KILLED
Execution Result: sudo service mysql status
systemd[1]: mysql.service: Scheduled restart job, restart counter is at 5.
systemd[1]: Stopped MySQL Community Server.
systemd[1]: mysql.service: Start request repeated too quickly.
systemd[1]: mysql.service: Failed with result 'exit-code'.
systemd[1]: Failed to start MySQL Community Server.
It's a directory permission issue: make sure your data directory is owned by mysql and its permissions are set to drwxr-x--- (i.e. 750). You can do this with
chown -R mysql:mysql "data-dir"
chmod 750 "data-dir"
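If fixing ownership does not help, note that the assertion failure in the error log points at InnoDB tablespace corruption, and the log itself suggests forced recovery. A cautious sketch (back up the whole data directory first, and start with the lowest level that lets mysqld come up; levels above 4 risk data loss):

# /etc/mysql/mysql.conf.d/mysqld.cnf
[mysqld]
innodb_force_recovery = 1

# once the server starts, dump everything and rebuild from the dump:
mysqldump -u root -p --all-databases > all.sql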

Mysql server not running on EC2 instance created from AMI

I have an EC2 instance on which the MySQL server works fine. I created an AMI from it and launched a new instance from that AMI. Now the MySQL server is not able to start on this new instance at all.
Below is the output:
ubuntu@ip-172-31-66-160:~$ sudo service mysql status
● mysql.service - MySQL Community Server
Loaded: loaded (/lib/systemd/system/mysql.service; enabled; vendor preset: enabled)
Active: activating (start-post) (Result: exit-code) since Sun 2019-07-21 20:59:25 IST; 28s ago
Process: 1870 ExecStart=/usr/sbin/mysqld (code=exited, status=1/FAILURE)
Process: 1862 ExecStartPre=/usr/share/mysql/mysql-systemd-start pre (code=exited, status=0/SUCCESS)
Main PID: 1870 (code=exited, status=1/FAILURE); : 1871 (mysql-systemd-s)
Tasks: 2
Memory: 3.2M
CPU: 247ms
CGroup: /system.slice/mysql.service
└─control
├─1871 /bin/bash /usr/share/mysql/mysql-systemd-start post
└─2529 sleep 1
Jul 21 20:59:25 ip-172-31-66-160 systemd[1]: mysql.service: Service hold-off time over, scheduling restart.
Jul 21 20:59:25 ip-172-31-66-160 systemd[1]: Stopped MySQL Community Server.
Jul 21 20:59:25 ip-172-31-66-160 systemd[1]: Starting MySQL Community Server...
Jul 21 20:59:26 ip-172-31-66-160 systemd[1]: mysql.service: Main process exited, code=exited, status=1/FAILURE
ubuntu@ip-172-31-66-160:~$ sudo service mysql restart
Job for mysql.service failed because the control process exited with error code. See "systemctl status mysql.service" and "journalctl -xe" for details.
I tried everything but was not able to get it started.
The closest threads I came across and followed, without any luck:
https://askubuntu.com/questions/916009/mysql-wont-start-because-of-apparmor
https://support.plesk.com/hc/en-us/articles/360004185293-Unable-to-start-MySQL-on-Ubuntu-AVC-apparmor-DENIED-operation-open-
Kindly help, I am just clueless about this.
Finally, I was able to make the server run on my new instance. What I was missing is that MySQL has many config options which change over the course of time, and some of them depend on the EC2 instance's configuration.
In my case, my new instance has very limited RAM, and the my.cnf file was trying to allocate more memory than the machine had, so startup failed. Somehow,
journalctl -xe
gives a different error, which forced me to think in the wrong direction. I should have looked into the MySQL log file, which quickly pointed me to the real issue.
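On Ubuntu the server error log usually lives at /var/log/mysql/error.log (the exact path depends on the log_error setting), and inspecting it directly is often more informative than journalctl:

sudo tail -n 100 /var/log/mysql/error.log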
I found below in the error log file which helped me to isolate / fix the issue:
2019-07-21T16:00:55.658985Z 0 [ERROR] InnoDB: mmap(137428992 bytes) failed; errno 12
2019-07-21T16:00:55.658993Z 0 [ERROR] InnoDB: Cannot allocate memory for the buffer pool
2019-07-21T16:00:55.658998Z 0 [ERROR] InnoDB: Plugin initialization aborted with error Generic error
2019-07-21T16:00:55.659003Z 0 [ERROR] Plugin 'InnoDB' init function returned error.
2019-07-21T16:00:55.659620Z 0 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
2019-07-21T16:00:55.659628Z 0 [ERROR] Failed to initialize builtin plugins.
2019-07-21T16:00:55.659632Z 0 [ERROR] Aborting
So MySQL wasn't able to start because the requested amount of memory wasn't available on the system. I commented out all the memory allocation lines in the my.cnf file (relying on the default settings) and then it worked. These are the settings I commented out:
innodb_buffer_pool_size = 16G
key_buffer_size = 2G
max_allowed_packet = 128M
group_concat_max_len = 50000
query_cache_size = 2147483648
query_cache_limit =67108864
Hope someone finds this information useful.
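As a rule of thumb, check the instance's memory first and keep innodb_buffer_pool_size well below it; the values below are purely illustrative for a machine with about 1 GB of RAM:

free -m
# conservative my.cnf example:
[mysqld]
innodb_buffer_pool_size = 128M
key_buffer_size = 16M
max_allowed_packet = 64M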

Kubernetes Galera: WSREP: failed to open gcomm backend connection: 110:

I am trying to set up a three-replica Galera cluster on Kubernetes. I am using:
https://github.com/kubernetes/kubernetes/tree/master/test/e2e/testing-manifests/statefulset/mysql-galera
The first pod spins up fine, but the second pod gets stuck:
1 [ERROR] WSREP: failed to open gcomm backend connection: 110: failed to reach primary view: 110 (Connection timed out)
at gcomm/src/pc.cpp:connect():162
2018-07-21 18:24:40 1 [ERROR] WSREP: gcs/src/gcs_core.cpp:gcs_core_open():208: Failed to open backend connection: -110 (Connection timed out)
2018-07-21 18:24:40 1 [ERROR] WSREP: gcs/src/gcs.cpp:gcs_open():1379: Failed to open channel 'mysql' at 'gcomm://mysql-0.galera.mysql.svc.cluster.local,mysql-1.galera.mysql.svc.cluster.local': -110 (Connection timed out)
2018-07-21 18:24:40 1 [ERROR] WSREP: gcs connect failed: Connection timed out
2018-07-21 18:24:40 1 [ERROR] WSREP: wsrep::connect(gcomm://mysql-0.galera.mysql.svc.cluster.local,mysql-1.galera.mysql.svc.cluster.local) failed: 7
2018-07-21 18:24:40 1 [ERROR] Aborting
Do I need to have etcd setup for this cluster to work? Any suggestions would be appreciated.
Thank you!
Kubernetes Info:
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:53:20Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:43:26Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
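For reference: this StatefulSet example does not use etcd; the nodes discover each other through the gcomm:// address list. A common cause of this timeout is the headless Service not publishing addresses for pods that are not yet Ready, so peers cannot see each other while bootstrapping. A sketch of the relevant Service fields (names mirror the manifest linked above; verify them against your copy), plus making sure ports 3306, 4444, 4567, and 4568 are open between pods:

apiVersion: v1
kind: Service
metadata:
  name: galera
  namespace: mysql
spec:
  clusterIP: None
  publishNotReadyAddresses: true
  selector:
    app: mysql
  ports:
  - name: galera
    port: 4567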

mariadb, add 4th galera node failed

I have a three-node setup that has been running perfectly for the past few months.
Recently I wanted to add another node in a different location, but I keep getting errors.
I originally followed this tutorial when I set the cluster up a few months ago: https://www.howtoforge.com/tutorial/how-to-install-and-configure-galera-cluster-on-ubuntu-1604/ I did not start all the nodes again from the beginning; I just edited /etc/mysql/conf.d/galera.cnf on the existing three nodes and added the new node's IP to each of them. For the fourth node I set up /etc/mysql/conf.d/galera.cnf like this:
[mysqld]
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
bind-address=0.0.0.0
# Galera Provider Configuration
wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so
# Galera Cluster Configuration
wsrep_cluster_name="galera_cluster"
wsrep_cluster_address="gcomm://node1_ip,node2_ip,node3_ip,node4_ip"
# Galera Synchronization Configuration
wsrep_sst_method=rsync
# Galera Node Configuration
wsrep_node_address="xx.xx.xxx.xxx"
wsrep_node_name="Node4"
Somehow I am getting this huge error:
Group state: e3ade7e7-e682-11e7-8d16-be7d28cda90e:36273
Local state: 00000000-0000-0000-0000-000000000000:-1
[Note] WSREP: New cluster view: global state: e3ade7e7-e682-11e7-8d16-be7d28cda90e:36273, view# 122: Primary, number of nodes: 4, my
[Warning] WSREP: Gap in state sequence. Need state transfer.
[Note] WSREP: Running: 'wsrep_sst_rsync --role 'joiner' --address 'xxx.node.4.ip' --datadir '/var/lib/mysql/' --parent '22828' ''
rsyncd version 3.1.1 starting, listening on port 4444
[Note] WSREP: Prepared SST request: rsync|xxx.node.4.ip:4444/rsync_sst
[Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
[Note] WSREP: REPL Protocols: 7 (3, 2)
[Note] WSREP: Assign initial position for certification: 36273, protocol version: 3
[Note] WSREP: Service thread queue flushed.
[Warning] WSREP: Failed to prepare for incremental state transfer: Local state UUID (00000000-0000-0000-0000-000000000000) does not
at galera/src/replicator_str.cpp:prepare_for_IST():482. IST will be unavailable.
[Note] WSREP: Member 0.0 (Node4) requested state transfer from '*any*'. Selected 1.0 (Node1)(SYNCED) as donor.
[Note] WSREP: Shifting PRIMARY -> JOINER (TO: 36273)
[Note] WSREP: Requesting state transfer: success, donor: 1
[Note] WSREP: GCache history reset: 00000000-0000-0000-0000-000000000000:0 -> e3ade7e7-e682-11e7-8d16-be7d28cda90e:36273
[Note] WSREP: (7642cf37, 'tcp://0.0.0.0:4567') connection to peer 7642cf37 with addr tcp://xxx.node.4.ip:4567 timed out, no messages
[Note] WSREP: (7642cf37, 'tcp://0.0.0.0:4567') turning message relay requesting off
mariadb.service: Start operation timed out. Terminating.
Terminated
WSREP_SST: [INFO] Joiner cleanup. rsync PID: 22875
sent 0 bytes received 0 bytes total size 0
WSREP_SST: [INFO] Joiner cleanup done.
[ERROR] WSREP: Process was aborted.
[ERROR] WSREP: Process completed with error: wsrep_sst_rsync --role 'joiner' --address 'xxx.node.4.ip' --datadir '/var/lib/mysql/'
[ERROR] WSREP: Failed to read uuid:seqno and wsrep_gtid_domain_id from joiner script.
[ERROR] WSREP: SST failed: 2 (No such file or directory)
[ERROR] Aborting
Error in my_thread_global_end(): 1 threads didn't exit
mariadb.service: Main process exited, code=exited, status=1/FAILURE
Failed to start MariaDB 10.1.33 database server.
P.S. The three older nodes run MariaDB 10.1.29, while the new node runs 10.1.33.
Thanks in advance for any suggestions.
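For what it's worth, two things are usually worth checking when a joiner in another location times out like this: that the Galera ports are reachable from the new node in both directions, and that systemd gives the initial rsync SST enough time (the log above shows mariadb.service terminating the joiner with a start timeout). Illustrative commands; adjust the IP placeholder and firewall tooling to your setup:

# on each existing node, allow the new node (ufw example):
sudo ufw allow from <node4_ip> to any port 3306,4444,4567,4568 proto tcp
# on the new node, raise the systemd start timeout:
sudo systemctl edit mariadb
#   [Service]
#   TimeoutStartSec=900

A minor version skew within the same series (10.1.29 on the old nodes, 10.1.33 on the joiner) is normally tolerated for joining, though bringing the older nodes up to the same version afterwards is sensible.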