Payara 5.193.1 - Cannot purge JBatch Repository - mysql

I'm trying to purge the JBatch repository with ./asadmin purge-jbatch-repository server-config:jobs_/jobs, but I get the following error:
remote failure: com.ibm.jbatch.container.exception.PersistenceException: java.sql.SQLException: You can't specify target table 'STEPEXECUTIONINSTANCEDATA' for update in FROM clause
java.sql.SQLException: You can't specify target table 'STEPEXECUTIONINSTANCEDATA' for update in FROM clause
Command purge-jbatch-repository failed.
My datasource comes from a MySQL connection pool, connected to a server version 5.7.27-0ubuntu0.18.04.1
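(For context, this is a MySQL limitation rather than a Payara-specific bug: MySQL rejects any DELETE or UPDATE whose subquery reads from the target table itself. The usual workaround, sketched here on a hypothetical table t, is to wrap the subquery in a derived table so it is materialized first. Since the failing statement is generated by JBatch itself, this sketch only illustrates why MySQL rejects it:)

```sql
-- Fails with "You can't specify target table 't' for update in FROM clause":
DELETE FROM t WHERE id IN (SELECT id FROM t WHERE created < '2019-01-01');

-- Works: the inner SELECT is materialized as a derived table first:
DELETE FROM t WHERE id IN (
  SELECT id FROM (SELECT id FROM t WHERE created < '2019-01-01') AS tmp
);
```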

Related

MySQL Shell can't add instance due to 'unconnected' user

I am having an issue adding an instance to my ReplicaSet with MySQL 8.0.28 and MySQL Shell:
rs.addInstance('a.b.c.d:3306')
The response I get is:
Adding instance to the replicaset...
* Performing validation checks
This instance reports its own address as a.b.c.d:3306
a.b.c.d:3306: Instance configuration is suitable.
* Checking async replication topology...
* Checking transaction state of the instance...
WARNING: A GTID set check of the MySQL instance at 'a.b.c.d:3306' determined that it contains transactions that do not originate from the replicaset, which must be discarded before it can join the replicaset.
a.b.c.d:3306 has the following errant GTIDs that do not exist in the replicaset:
2b575744-e07d-11ec-ada9-00ff6b3adad4:1-67
WARNING: Discarding these extra GTID events can either be done manually or by completely overwriting the state of a.b.c.d:3306 with a physical snapshot from an existing replicaset member. To use this method by default, set the 'recoveryMethod' option to 'clone'.
Having extra GTID events is not expected, and it is recommended to investigate this further and ensure that the data can be removed prior to choosing the clone recovery method.
Please select a recovery method [C]lone/[A]bort (default Abort): C
* Updating topology
Waiting for clone process of the new member to complete. Press ^C to abort the operation.
* Waiting for clone to finish...
NOTE: a.b.c.d:3306 is being cloned from x.y.z.x:3306
ERROR: The clone process has failed: Clone Donor Error: 1184 : Aborted connection 554 to db: 'unconnected' user: 'mysql_innodb_rs_10' host: 'xxx' (init_connect command failed). (3862)
ERROR: Error adding instance to replicaset: Clone Donor Error: 1184 : Aborted connection 554 to db: 'unconnected' user: 'mysql_innodb_rs_10' host: xxx' (init_connect command failed).
Reverting topology changes...
Changes successfully reverted.
ERROR: a.b.c.d:3306 could not be added to the replicaset
ReplicaSet.addInstance: Clone Donor Error: 1184 : Aborted connection 554 to db: 'unconnected' user: 'mysql_innodb_rs_10' host: 'xxx' (init_connect command failed). (RuntimeError)
I can't find any information on how to proceed with this; any help would be appreciated.
Assuming that your user has all of the required permissions, the first thing to check, based on the (init_connect command failed) in your output, would be the init_connect variable on both the master and the slave:
SHOW GLOBAL VARIABLES LIKE 'init_connect';
It should be the same on both servers.
(The 'unconnected' in your subject line simply refers to the db, which is not an issue.)
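If init_connect differs between the servers, or is set to a statement that fails, a minimal sketch of clearing it (assuming an empty value is acceptable in your setup; MySQL 8.0):

```sql
-- SET PERSIST changes the running value and writes it to
-- mysqld-auto.cnf so it survives a restart (MySQL 8.0+).
SET PERSIST init_connect = '';
```

Note that init_connect is not executed for users with the CONNECTION_ADMIN (or SUPER) privilege, which is why an administrative session can connect fine while the internal clone user fails.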

Unable to start Hive metastore related to java exception

I installed Apache Hive 3, Apache Hadoop 3, Trino, and MySQL (mysql Ver 14.14 Distrib 5.7.20, for Linux (x86_64) using EditLine wrapper).
When I start the Hive metastore I get an exception that blocks it from starting:
Exception:
Exception in thread "main" java.lang.NoSuchMethodError: com/google/common/base/Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V (loaded from file:/root/trino_poc/trino-server-351/etc/catalog/apache-hive-3.1.2-bin/lib/guava-19.0.jar by jdk.internal.loader.ClassLoaders$AppClassLoader#8753f89b) called from class org.apache.hadoop.conf.Configuration (loaded from file:/root/trino_poc/hadoop-3.2.2/share/hadoop/common/hadoop-common-3.2.2.jar by jdk.internal.loader.ClassLoaders$AppClassLoader#8753f89b).
at org.apache.hadoop.conf.Configuration.set(Configuration.java:1357)
My solution (which I found on GitHub) was to replace guava-19.0.jar with guava-22.0.jar.
After that the metastore started well on localhost:9083.
But the port is not listening: netstat -na | grep :9083 returns nothing.
In the hive log I have this error:
Unable to open a test connection to the given database. JDBC url = jdbc:mysql://localhost:3306/metastore_dbs?useSSL=false;createDatabaseIfNotExist=true, username = root. Terminating connection pool (set lazyInit to true if you expect to start your database after your app). Original Exception: ------^M
java.sql.SQLException: The connection property 'useSSL' only accepts values of the form: 'true', 'false', 'yes' or 'no'. The value 'false;createDatabaseIfNotExist=true' is not in this set
So the issue is here: 'false;createDatabaseIfNotExist=true'. The ';' is not a valid separator, and since this URL lives in hive-site.xml I need to escape the characters properly.
I used '?' instead; the Hive metastore started, but port 9083 is still not listening, and I got almost the same exception: 'false?createDatabaseIfNotExist=true'.
I used '&' instead: the Hive metastore failed to start and I got this exception:
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.google.common.base.internal.Finalizer (file:/root/trino_poc/trino-server-351/etc/catalog/apache-hive-3.1.2-bin/lib/guava-22.0.jar) to field java.lang.Thread.inheritableThreadLocals
WARNING: Please consider reporting this to the maintainers of com.google.common.base.internal.Finalizer
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
MetaException(message:org.jets3t.service.security.ProviderCredentials)
To summarize: to resolve the initialization exception and start the Hive metastore, I replaced guava-19.0.jar with guava-22.0.jar, but port 9083 is not listening.
Then, to resolve the port issue:
I used '?' instead of ';': the Hive metastore still started well, but port 9083 is not listening.
I tried '&' instead of '?': the Hive metastore failed to start and I got the exception above related to guava-22.0.jar.
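For what it's worth, inside an XML file such as hive-site.xml the JDBC parameter separator '&' has to be written as the entity &amp;; a sketch (property name and database name taken from the log above):

```xml
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://localhost:3306/metastore_dbs?useSSL=false&amp;createDatabaseIfNotExist=true</value>
</property>
```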
I hope that I was clear. I really need your help, because I have spent many hours trying to resolve this issue.
Thanks in advance.

Unable to analyse MySQL error logs in OSSEC

I am trying to analyze MySQL error logs that are generated on my OSSEC agent and raise alerts using the OSSEC server.
Here is the block added to /var/ossec/etc/ossec.conf on the agent side to read MySQL's error log:
<localfile>
<log_format>mysql_log</log_format>
<location>/var/log/mysql/error.log</location>
</localfile>
After doing so I restarted the agent and the server, but I am unable to get alerts for error logs generated on the agent side, such as:
2020-09-15T04:09:24.164859Z 12 [Note] Access denied for user 'root'@'localhost' (using password: YES)
As per the doc https://ossec-docs.readthedocs.io/en/latest/docs/programs/ossec-logtest.html, under Caveats, we need to prepend "MySQL log: " to the log line passed to ossec-logtest.
This prefix is added automatically when the agent sends the logs to the OSSEC server for analysis.
ossec-logtest result for MySQL error log
ossec-logtest works fine after adding "MySQL log: " to the beginning, but the alerts are not raised in real time.
Can anyone please help me with this problem?
The fact that ossec-logtest triggers an alert means that the MySQL decoder and rules are working fine.
Check on the agent:
MySQL is running: systemctl status mysqld.service
The MySQL configuration (log level and output file) allows logging that kind of event. See here:
If the value is greater than 1, aborted connections are written to the
error log, and access-denied errors for new connection attempts are
written.
MySQL is effectively logging 'Access denied': grep "Access denied" /var/log/mysql/error.log
OSSEC and its processes are running OK: /var/ossec/bin/ossec-control status
Check on the manager:
The log_alert_level field in /var/ossec/etc/ossec.conf is lower than or equal to 9 (the level shown in your ossec-logtest output).
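A minimal sketch of that setting in /var/ossec/etc/ossec.conf (the value 1 is only an example; any value lower than or equal to the rule level lets the alert through):

```xml
<ossec_config>
  <alerts>
    <log_alert_level>1</log_alert_level>
  </alerts>
</ossec_config>
```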

Upgrade fails with mysql.user wrong column count and expired password

I moved from MariaDB to MySQL (which worked). Now I wanted to upgrade MySQL to 5.7, but it threw an error:
Running queries to upgrade MySQL server.
mysql_upgrade: (non fatal) [ERROR] 1728: Cannot load from mysql.proc. The table is probably corrupted
mysql_upgrade: (non fatal) [ERROR] 1545: Failed to open mysql.event
mysql_upgrade: [ERROR] 1072: Key column 'Id' doesn't exist in table
mysql_upgrade failed with exit status 5
I wanted to run mysqlcheck, but it threw an error:
Your password has expired. To log in you must change it using a client that supports expired passwords.
When I log in as root and try to SET PASSWORD, I get this error:
Column count of mysql.user is wrong. Expected 45, found 46.
When I try to start mysql ignoring the grant tables with
mysqld --skip-grant-tables
It fails silently.
What else can I try here? Reinstalling mysql results in the same
Key column 'Id' doesn't exist in table
error, followed by:
installed mysql-server-5.7 package post-installation script subprocess returned error exit status 1
No apport report written because the error message indicates its a followup error from a previous failure.
dpkg: dependency problems prevent configuration of mysql-server:
mysql-server depends on mysql-server-5.7; however:
Package mysql-server-5.7 is not configured yet.
dpkg: error processing package mysql-server (--configure):
dependency problems - leaving unconfigured
I added skip-grant-tables to my.cnf and issued:
ALTER TABLE mysql.user DROP COLUMN is_role;
That deleted the extra column, and I was able to update my password again.
The remaining problem is the following: apt is stuck on unconfigured packages. When apt tries to configure them, it runs mysql_upgrade, which fails with "Key column 'Id' doesn't exist in table" and provides no other information. How can I debug this?
Then I used logging of MySQL queries, as @Piemol suggested, to trace the last query.
The last query of mysql_upgrade is:
ALTER TABLE slave_worker_info ADD Channel_name CHAR(64) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL DEFAULT '' COMMENT 'The channel on which the slave is connected to a source. Used in Multisource Replication', DROP PRIMARY KEY, ADD PRIMARY KEY(Channel_name, Id)
Quit
slave_worker_info had no column Id, so I dropped the table (as it was empty) and created it again (https://dba.stackexchange.com/questions/54608/innodb-error-table-mysql-innodb-table-stats-not-found-after-upgrade-to-mys).
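(For reference, the MySQL query logging used above can be switched on at runtime, without a restart; the file path here is only an example:)

```sql
-- Log every statement the server receives to a file, then watch
-- which statement mysql_upgrade is running when it fails.
SET GLOBAL general_log_file = '/var/log/mysql/all-queries.log';
SET GLOBAL general_log = 'ON';
```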

Hive metastore configuration

I configured Hive with MySQL as the metastore repository. When I start the Hive server using my standard user (infa_hadoop), it gives me the error "can't connect to metastore using the URI provided".
But if I log in as root and start the Hive server, it starts fine.
command used:
hive --service hiveserver
But when I tried to execute an ETL job (Informatica), it gave me an access control exception!
Error :
Function [INFASQLExecute] failed in adapter [/u01/app/informatica/plugins/dynamic/hiveruntime/libhive.so] with error code [-1].
FnName: INFASQLExecute -- execute(). SQLException: Query returned non-zero code: 12, cause: FAILED: Hive Internal Error: java.lang.RuntimeException(org.apache.hadoop.security.AccessControlException: org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=WRITE, inode="scratchdir":Infa_Linux:supergroup:rwxr-xr-x)
Function [INFASQLGetDiagRecW] failed in adapter [/u01/app/informatica/plugins/dynamic/hiveruntime/libhive.so] with error code [100].
FnName: INFASQLExecute -- execute(). SQLException: Query returned non-zero code: 12, cause: FAILED: Hive Internal Error: java.lang.RuntimeException(org.apache.hadoop.security.AccessControlException: org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=WRITE, inode="scratchdir":Infa_Linux:supergroup:rwxr-xr-x)
Function [INFASQLGetDiagRecW] failed in adapter [/u01/app/informatica/plugins/dynamic/hiveruntime/libhive.so] with error code [100].].
But Hive is working fine at the command prompt. Any suggestions?
First, please check whether your Hive Thrift server is up and running.
It is advised to use the following command to start the Hive Thrift server:
hive --service hiveserver -p 10001
Telnet and check whether the server is listening on port 10001; if yes, I suppose your issue will be resolved.