Teiid doesn't connect to MySQL datasource

I have a problem connecting to a MySQL datasource from JBoss EAP 6.3 with Teiid 8.10.1 copied over it. I should mention that I've installed the MySQL connector driver as a module in order to use the MySQL datasource.
Also, when I try with plain JBoss EAP 6.3 without Teiid, the connection works.
Has anybody run into the same problem?
This is the error message I receive in the JBoss administration console:
"Unknown error
Unexpected HTTP response: 500
Request {
"address" => [
("subsystem" => "datasources"),
("data-source" => "database")
],
"operation" => "test-connection-in-pool" }
Response
Internal Server Error {
"outcome" => "failed",
"failure-description" => "JBAS010440: failed to invoke operation: JBAS010447: Connection is not valid",
"rolled-back" => true }"
I should also mention that the logs don't contain a more verbose message for this error, even with all log levels enabled (TRACE, DEBUG, ERROR, etc.).
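For reference, here is a minimal sketch of the usual MySQL driver/datasource wiring in standalone.xml (the JNDI name, URL, and credentials are illustrative placeholders, not taken from the original setup). Since test-connection-in-pool is what fails, the MySQL-specific validation classes are worth adding, as they often surface a more specific error than "Connection is not valid":

```xml
<!-- Illustrative datasource definition; adjust names, URL, and credentials -->
<datasource jndi-name="java:/mysqlDS" pool-name="mysqlDS" enabled="true">
    <connection-url>jdbc:mysql://localhost:3306/database</connection-url>
    <driver>mysql</driver>
    <security>
        <user-name>dbuser</user-name>
        <password>dbpass</password>
    </security>
    <validation>
        <!-- MySQL-specific checker/sorter shipped with the IronJacamar JDBC adapter -->
        <valid-connection-checker class-name="org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLValidConnectionChecker"/>
        <exception-sorter class-name="org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLExceptionSorter"/>
    </validation>
</datasource>
<drivers>
    <!-- "com.mysql" must match the module name used when installing the connector -->
    <driver name="mysql" module="com.mysql">
        <driver-class>com.mysql.jdbc.Driver</driver-class>
    </driver>
</drivers>
```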

Related

Handshake inactivity timeout while using VSCode SQL Tools

I'm using the SQL Tools extension for VS Code to connect to my database, which is hosted on a Debian VM, and whenever I try to connect I get a timeout error.
[1664037417336] INFO (ext): EXECUTING COMMAND => sqltools.getConnections
[1664037417336] INFO (ls): REQUEST RECEIVED => connection/GetConnectionsRequest
[1664037417337] INFO (ext): EXECUTING COMMAND => sqltools.getConnections
[1664037417337] INFO (ls): REQUEST RECEIVED => connection/GetConnectionsRequest
[1664037417338] INFO (ls): REQUEST RECEIVED => connection/GetConnectionPasswordRequest
[1664037422395] INFO (ls): REQUEST RECEIVED => connection/ConnectRequest
[1664037422395] INFO (ls): Connection instance created for Social Bot (dev).
ns: "conn-manager"
[1664037432398] ERROR (ls): {"code":-1,"data":{"driver":"MySQL","driverOptions":{"mysqlOptions":{"authProtocol":"default"}}},"name":"Error"}
ns: "conn"
[1664037432399] ERROR (ls): Connecting error: {"code":-1,"data":{"driver":"MySQL","driverOptions":{"mysqlOptions":{"authProtocol":"default"}}},"name":"Error"}
ns: "conn-manager"
[1664037432399] ERROR (ls): Open connection error
ns: "conn-manager"
[1664037432401] ERROR (ext): ERROR: Error opening connection Handshake inactivity timeout, {"code":-1,"data":{"driver":"MySQL","driverOptions":{"mysqlOptions":{"authProtocol":"default"}}}}
ns: "error-handler"
This is what my connection config looks like:
Why does this happen?
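For context, here is a hypothetical SQLTools connection entry in settings.json; all values below are placeholders, not the original config. A handshake timeout like this can sometimes be mitigated by raising the connection's connectionTimeout (in seconds):

```json
"sqltools.connections": [
  {
    "name": "Social Bot (dev)",
    "driver": "MySQL",
    "server": "debian-vm.example.com",
    "port": 3306,
    "database": "socialbot",
    "username": "bot",
    "askForPassword": true,
    "connectionTimeout": 30,
    "mysqlOptions": { "authProtocol": "default" }
  }
]
```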

Terraform 11 AWS RDS Upgrade Fails

I am using Terraform v0.11 to upgrade databases running on AWS RDS with the MySQL engine from v5.6 to v5.7.
Each of the DBs has its own dedicated options group and parameter group.
The upgrade process fails with error:
aws_db_parameter_group.manager: Error deleting DB parameter group: InvalidDBParameterGroupState: One or more database instances are still members of this parameter group foo-rds-mysql56, so the group cannot be deleted
Is there any workaround for this?
Reply to ydaetskcoR:
The plan output is:
-/+ aws_db_parameter_group.foo (new resource required)
id: "foo-mysql-56" => <computed> (forces new resource)
arn: "arn:aws:rds:eu-west-2:238425939713:pg:foo-mysql-56" => <computed>
description: "Managed by Terraform" => "Managed by Terraform"
family: "mysql5.6" => "mysql5.7" (forces new resource)
name: "foo-mysql-56" => "foo-mysql-57" (forces new resource)
name_prefix: "" => <computed>
parameter.#: "1" => "1"
parameter.2547865204.apply_method: "pending-reboot" => "pending-reboot"
parameter.2547865204.name: "max_connections" => "max_connections"
parameter.2547865204.value: "128" => "128"
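A commonly used workaround (a sketch, untested against this exact setup) is to add create_before_destroy, so Terraform creates the new v5.7 group and repoints the instance to it before attempting to delete the old one:

```hcl
resource "aws_db_parameter_group" "foo" {
  name   = "foo-mysql-57"
  family = "mysql5.7"

  parameter {
    name         = "max_connections"
    value        = "128"
    apply_method = "pending-reboot"
  }

  # Create the replacement group first so the DB instance can be moved over
  # before the old group is destroyed.
  lifecycle {
    create_before_destroy = true
  }
}
```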

Logstash: Unable to connect to external Amazon RDS Database

I'm relatively new to Logstash & Elasticsearch...
I installed Logstash and Elasticsearch with Homebrew on macOS Mojave (10.14.2):
brew install logstash
brew install elasticsearch
When I check the installed versions:
brew list --versions
I receive the following output:
elasticsearch 6.5.4
logstash 6.5.4
When I open Google Chrome and type this into the address bar:
localhost:9200
This is the JSON response that I receive:
{
"name" : "9oJAP16",
"cluster_name" : "elasticsearch_local",
"cluster_uuid" : "PgaDRw8rSJi-NDo80v_6gQ",
"version" : {
"number" : "6.5.4",
"build_flavor" : "oss",
"build_type" : "tar",
"build_hash" : "d2ef93d",
"build_date" : "2018-12-17T21:17:40.758843Z",
"build_snapshot" : false,
"lucene_version" : "7.5.0",
"minimum_wire_compatibility_version" : "5.6.0",
"minimum_index_compatibility_version" : "5.0.0"
},
"tagline" : "You Know, for Search"
}
Inside:
/usr/local/etc/logstash/logstash.yml
are the following settings:
path.data: /usr/local/Cellar/logstash/6.5.4/libexec/data
pipeline.workers: 2
path.config: /usr/local/etc/logstash/conf.d
log.level: info
path.logs: /usr/local/var/log
Inside:
/usr/local/etc/logstash/pipelines.yml
are the following settings:
- pipeline.id: main
path.config: "/usr/local/etc/logstash/conf.d/*.conf"
I have set up the following logstash_etl.conf file under:
/usr/local/etc/logstash/conf.d
Its contents:
input {
jdbc {
jdbc_connection_string => "jdbc:mysql://myapp-production.crankbftdpmc.us-west-2.rds.amazonaws.com:3306/products"
jdbc_user => "products_admin"
jdbc_password => "products123"
jdbc_driver_library => "/etc/logstash/mysql-connector/mysql-connector-java-5.1.21.jar"
jdbc_driver_class => "com.mysql.jdbc.driver"
schedule => "*/5 * * * *"
statement => "select * from products"
use_column_value => false
clean_run => true
}
}
# sudo /usr/share/logstash/bin/logstash-plugin install logstash-output-exec
output {
if ([purge_task] == "yes") {
exec {
command => "curl -XPOST 'localhost:9200/_all/products/_delete_by_query?conflicts=proceed' -H 'Content-Type: application/json' -d'
{
\"query\": {
\"range\" : {
\"#timestamp\" : {
\"lte\" : \"now-3h\"
}
}
}
}
'"
}
}
else {
stdout { codec => json_lines}
elasticsearch {
"hosts" => "localhost:9200"
"index" => "product_%{product_api_key}"
"document_type" => "%{[#metadata][index_type]}"
"document_id" => "%{[#metadata][index_id]}"
"doc_as_upsert" => true
"action" => "update"
"retry_on_conflict" => 7
}
}
}
When I run:
brew services start logstash
I receive the following in my /usr/local/var/log/logstash-plain.log file:
[2019-01-15T14:51:15,319][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x399927c7 run>"}
[2019-01-15T14:51:15,663][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-01-15T14:51:16,514][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2019-01-15T14:57:31,432][ERROR][logstash.inputs.jdbc ] Unable to connect to database. Tried 1 times {:error_message=>"Java::ComMysqlCjJdbcExceptions::CommunicationsException: Communications link failure\n\nThe last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server."}
[2019-01-15T14:57:31,435][ERROR][logstash.inputs.jdbc ] Unable to connect to database. Tried 1 times {:error_message=>"Java::ComMysqlCjJdbcExceptions::CommunicationsException: Communications link failure\n\nThe last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server."}
What am I possibly doing wrong?
Is there a way to obtain a dump (e.g. like mysqldump) from an Elasticsearch server (stage or production) and then reimport it into a local Elasticsearch instance without using Logstash?
This is the same configuration file that works inside an Amazon EC2 production instance, so I don't know why it's not working on my local macOS Mojave machine.
You may be running into the SSL issue with RDS, since:
If you use either the MySQL Java Connector v5.1.38 or later, or the MySQL Java Connector v8.0.9 or later to connect to your databases, even if you haven't explicitly configured your applications to use SSL/TLS when connecting to your databases, these client drivers default to using SSL/TLS. In addition, when using SSL/TLS, they perform partial certificate verification and fail to connect if the database server certificate is expired.
as described in the AWS RDS documentation.
To work around it, either set up a trust store for Logstash, which is described in the link above as well,
or accept the risk and disable SSL in the connection string, like:
jdbc_connection_string => "jdbc:mysql://myapp-production.crankbftdpmc.us-west-2.rds.amazonaws.com:3306/products?sslMode=DISABLED"
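If you'd rather keep SSL, the Connector/J 8.x trust-store properties can be passed in the same connection string (the keystore path and password below are placeholders):

```conf
jdbc_connection_string => "jdbc:mysql://myapp-production.crankbftdpmc.us-west-2.rds.amazonaws.com:3306/products?sslMode=VERIFY_CA&trustCertificateKeyStoreUrl=file:///path/to/rds-truststore.jks&trustCertificateKeyStorePassword=changeit"
```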

Use of undefined constant SIGKILL - assumed 'SIGKILL' in Laravel 5.7 queue

I'm getting the error "Use of undefined constant SIGKILL - assumed 'SIGKILL'" from my AJAX request, which starts this Artisan command:
Artisan::call('queue:work', [
'connection' => 'database',
'--memory' => '700',
'--tries' => '1',
'--timeout' => '35000',
'--queue' => 'updates'
]);
I'm using Laravel 5.7 as the framework for the application.
Jobs are managed from the database; the configuration is:
'database' => [
'driver' => 'database',
'table' => 'jobs',
'queue' => 'default',
'retry_after' => 18000,
],
The problem appeared recently, which is weird, because it wasn't there before and the whole "system" worked just fine. Now the worker gets some jobs done, but then it fails with an error and writes this to the failed_jobs table in the DB:
ErrorException: PDOStatement::execute(): MySQL server has gone away in /srv/migration-xxxxxx-xxxx-xxxxxx/www/vendor/laravel/framework/src/Illuminate/Database/Connection.php:458
As the DB I'm using a Microsoft Azure MySQL DB. A Microsoft specialist found nothing after a consultation; the server is working correctly, and the queries are fine and not big enough to fail.
Please help, I don't know what to do or what is wrong...
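One mismatch worth checking in the snippets above (a hedged observation, not a confirmed diagnosis): the worker runs with --timeout=35000 while the connection has retry_after=18000, and Laravel's queue documentation says retry_after should always be larger than the timeout, otherwise a job can be handed to a second worker while the first is still processing it:

```php
// config/queue.php — retry_after should exceed the worker's --timeout (35000 here)
'database' => [
    'driver'      => 'database',
    'table'       => 'jobs',
    'queue'       => 'default',
    'retry_after' => 36000,
],
```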

Error when migrating database using edeliver

I've always used edeliver to deploy my apps, but on my new app I'm getting a weird error.
When I run mix edeliver migrate production, I get this response:
EDELIVER MYPROJECT WITH MIGRATE COMMAND
-----> migrateing production servers
production node:
user : user
host : example.com
path : /home/user/app_release
response: RPC to 'myproject@127.0.0.1' failed: {'EXIT',
{#{'__exception__' => true,
'__struct__' =>
'Elixir.ArgumentError',
message => <<"argument error">>},
[{ets,lookup_element,
['Elixir.Ecto.Registry',nil,3],
[]},
{'Elixir.Ecto.Registry',lookup,1,
[{file,"lib/ecto/registry.ex"},
{line,18}]},
{'Elixir.Ecto.Adapters.SQL',sql_call,
6,
[{file,"lib/ecto/adapters/sql.ex"},
{line,251}]},
{'Elixir.Ecto.Adapters.SQL','query!',
5,
[{file,"lib/ecto/adapters/sql.ex"},
{line,198}]},
{'Elixir.Ecto.Adapters.MySQL',
'-execute_ddl/3-fun-0-',4,
[{file,"lib/ecto/adapters/mysql.ex"},
{line,107}]},
{'Elixir.Enum',
'-reduce/3-lists^foldl/2-0-',3,
[{file,"lib/enum.ex"},{line,1826}]},
{'Elixir.Ecto.Adapters.MySQL',
execute_ddl,3,
[{file,"lib/ecto/adapters/mysql.ex"},
{line,107}]},
{'Elixir.Ecto.Migrator',
'-migrated_versions/2-fun-0-',2,
[{file,"lib/ecto/migrator.ex"},
{line,44}]}]}}
But when I run mix edeliver restart production followed by the migration command, everything works fine. Why is this happening?