Error when migrating database using edeliver - mysql

I've always used edeliver to deploy my apps, but on my new app, I'm getting a weird error.
When I run mix edeliver migrate production, I'm getting this response:
EDELIVER MYPROJECT WITH MIGRATE COMMAND
-----> migrateing production servers
production node:
user : user
host : example.com
path : /home/user/app_release
response: RPC to 'myproject@127.0.0.1' failed: {'EXIT',
{#{'__exception__' => true,
'__struct__' =>
'Elixir.ArgumentError',
message => <<"argument error">>},
[{ets,lookup_element,
['Elixir.Ecto.Registry',nil,3],
[]},
{'Elixir.Ecto.Registry',lookup,1,
[{file,"lib/ecto/registry.ex"},
{line,18}]},
{'Elixir.Ecto.Adapters.SQL',sql_call,
6,
[{file,"lib/ecto/adapters/sql.ex"},
{line,251}]},
{'Elixir.Ecto.Adapters.SQL','query!',
5,
[{file,"lib/ecto/adapters/sql.ex"},
{line,198}]},
{'Elixir.Ecto.Adapters.MySQL',
'-execute_ddl/3-fun-0-',4,
[{file,"lib/ecto/adapters/mysql.ex"},
{line,107}]},
{'Elixir.Enum',
'-reduce/3-lists^foldl/2-0-',3,
[{file,"lib/enum.ex"},{line,1826}]},
{'Elixir.Ecto.Adapters.MySQL',
execute_ddl,3,
[{file,"lib/ecto/adapters/mysql.ex"},
{line,107}]},
{'Elixir.Ecto.Migrator',
'-migrated_versions/2-fun-0-',2,
[{file,"lib/ecto/migrator.ex"},
{line,44}]}]}}
But when I run mix edeliver restart production and then run the migration command again, everything works fine. Why is this happening?
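That ArgumentError from Ecto.Registry (an ets lookup with nil) usually means the repo process is not running in the node the RPC lands on, which is also why a restart makes the next migrate succeed. For reference, here is a minimal sketch of a release migration task that starts the repo explicitly before migrating, in the style of the common release-task examples for Ecto 2.x; the module and application names (Myproject, :myproject) are assumptions, not taken from the original project:
defmodule Myproject.ReleaseTasks do
  @app :myproject
  # :mariaex is the driver application used by Ecto.Adapters.MySQL on Ecto 2.x
  @start_apps [:crypto, :ssl, :mariaex, :ecto]

  def migrate do
    # Load the app and start Ecto plus the driver if they are not running yet
    Application.load(@app)
    Enum.each(@start_apps, &Application.ensure_all_started/1)

    # Start the repo ourselves in case it is not already started in this node
    case Myproject.Repo.start_link(pool_size: 2) do
      {:ok, _pid} -> :ok
      {:error, {:already_started, _pid}} -> :ok
    end

    path = Application.app_dir(@app, "priv/repo/migrations")
    Ecto.Migrator.run(Myproject.Repo, path, :up, all: true)
  end
end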

Related

Handshake inactivity timeout while using VSCode SQL Tools

I'm using the SQL Tools extension for VS Code to connect to my database, which is hosted on a Debian VM. Whenever I try to connect, I always get a timeout error.
[1664037417336] INFO (ext): EXECUTING COMMAND => sqltools.getConnections
[1664037417336] INFO (ls): REQUEST RECEIVED => connection/GetConnectionsRequest
[1664037417337] INFO (ext): EXECUTING COMMAND => sqltools.getConnections
[1664037417337] INFO (ls): REQUEST RECEIVED => connection/GetConnectionsRequest
[1664037417338] INFO (ls): REQUEST RECEIVED => connection/GetConnectionPasswordRequest
[1664037422395] INFO (ls): REQUEST RECEIVED => connection/ConnectRequest
[1664037422395] INFO (ls): Connection instance created for Social Bot (dev).
ns: "conn-manager"
[1664037432398] ERROR (ls): {"code":-1,"data":{"driver":"MySQL","driverOptions":{"mysqlOptions":{"authProtocol":"default"}}},"name":"Error"}
ns: "conn"
[1664037432399] ERROR (ls): Connecting error: {"code":-1,"data":{"driver":"MySQL","driverOptions":{"mysqlOptions":{"authProtocol":"default"}}},"name":"Error"}
ns: "conn-manager"
[1664037432399] ERROR (ls): Open connection error
ns: "conn-manager"
[1664037432401] ERROR (ext): ERROR: Error opening connection Handshake inactivity timeout, {"code":-1,"data":{"driver":"MySQL","driverOptions":{"mysqlOptions":{"authProtocol":"default"}}}}
ns: "error-handler"
This is what my connection config looks like:
Why does this happen?
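For context, a sqltools.connections entry in VS Code's settings.json typically looks roughly like the sketch below; the server, database, and username values are placeholders, and the field names (in particular connectionTimeout, which as far as I recall is given in seconds) are written from memory of the SQL Tools MySQL driver docs, so treat them as assumptions. A handshake inactivity timeout usually points at the network path to the VM (MySQL bound to 127.0.0.1, a firewall, or a slow tunnel) rather than at the credentials.
"sqltools.connections": [
  {
    // name and mysqlOptions match the log above; the rest are placeholders
    "name": "Social Bot (dev)",
    "driver": "MySQL",
    "server": "debian-vm.example.com",
    "port": 3306,
    "database": "socialbot",
    "username": "bot",
    "askForPassword": true,
    "connectionTimeout": 60,
    "mysqlOptions": { "authProtocol": "default" }
  }
]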

Node.js GraphQL API Stops working as soon as I deploy it: "Error validating datasource `db`: the URL must start with the protocol `mysql://`"

I built a GraphQL API with Apollo and Prisma ORM that is connected to my hosted MySQL database (the database already has content in it).
When I run it on localhost, everything works fine and I can query the database with GraphQL.
As soon as I deploy my Node.js project to DigitalOcean (auto-deployed via GitHub), it stops working and I get the following error:
{
"errors": [
{
"message": "\nInvalid `prisma.content.findMany()` invocation in\n/workspace/src/schema.js:36:29\n\n 33 const resolvers = {\n 34 Query: {\n 35 memes: (parent, args) => {\nā†’ 36 return prisma.content.findMany(\n error: Error validating datasource `db`: the URL must start with the protocol `mysql://`.\n --> schema.prisma:7\n | \n 6 | provider = \"mysql\"\n 7 | url = env(\"DATABASE_URL\")\n | \n\nValidation Error Count: 1",
"locations": [
{
"line": 2,
"column": 3
}
],
"path": [
"memes"
],
"extensions": {
"code": "INTERNAL_SERVER_ERROR",
"exception": {
"clientVersion": "3.6.0",
"stacktrace": [
"Error: ",
"Invalid `prisma.content.findMany()` invocation in",
"/workspace/src/schema.js:36:29",
"",
" 33 const resolvers = {",
" 34 Query: {",
" 35 memes: (parent, args) => {",
"ā†’ 36 return prisma.content.findMany(",
" error: Error validating datasource `db`: the URL must start with the protocol `mysql://`.",
" --> schema.prisma:7",
" | ",
" 6 | provider = \"mysql\"",
" 7 | url = env(\"DATABASE_URL\")",
" | ",
"",
"Validation Error Count: 1",
" at cb (/workspace/node_modules/#prisma/client/runtime/index.js:38689:17)",
" at processTicksAndRejections (internal/process/task_queues.js:97:5)"
]
}
}
}
],
"data": null
}
Here is my schema.prisma file:
generator client {
provider = "prisma-client-js"
}
datasource db {
provider = "mysql"
url = env("DATABASE_URL")
}
...
The only difference between the hosted project and the local project is that the .env file and node_modules are in the .gitignore file.
So it seems like the project is accessing the wrong DATABASE_URL, but how is the hosted project supposed to know the DATABASE_URL in my .env file when the .env file is in .gitignore?
Here is what I do:
Change the DATABASE_URL in my .env file to point at my local MySQL database running in a Docker container
Run npx prisma migrate dev --preview-feature to generate the migration files
Run git add .
Run git commit -m "New Commit"
Run DATABASE_URL=mysql://censored:censored@censored:3306/censored npx prisma migrate resolve --applied "my_migration_folder_name" --preview-feature which succeeds and tells me "Migration my_migration_folder_name marked as applied."
Run git push
I can see that the migration is successfully created on my MySQL database, but as soon as I run the app and try to query the database, it gives me that error.
The code has to be correct, because it works on localhost even when querying the hosted MySQL database.
I also double-checked that the model in the schema.prisma file is in sync with my hosted MySQL database schema.
I'm running out of ideas on what I could try.
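One cheap thing to check at runtime (a sketch, not part of the original setup) is whether the deployed process actually sees a DATABASE_URL with the expected protocol, without printing the credentials; if this logs false on DigitalOcean, the platform environment variable is not reaching the app:
// Startup sanity check (hypothetical snippet); place it before creating the PrismaClient
const url = process.env.DATABASE_URL || "";
console.log(
  "DATABASE_URL present:", url.length > 0,
  "| starts with mysql://:", url.startsWith("mysql://")
);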
EDIT
I actually think it has something to do with the environment variables I set in the settings of my DigitalOcean application.
Before it was set to:
envs:
- key: DATABASE_URL
scope: RUN_AND_BUILD_TIME
value: ${db.DATABASE_URL}
Now I set it to:
envs:
- key: DATABASE_URL
scope: RUN_AND_BUILD_TIME
value: mysql://censored:censored@censored:3306/censored
I thought this would fix the problem, but now it tells me that the connection fails because of wrong database credentials, even though it is the right URL with the right credentials.
I fixed it by clicking "Force rebuild and deploy" on my DigitalOcean app.
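For reference, the env entry in the App Platform app spec can take either of the two forms shown above; a commented sketch follows, where the bindable ${db.DATABASE_URL} form only resolves if a database component named db is attached to the app, and type: SECRET is an assumption based on the app spec fields as I recall them (it stores the value encrypted):
envs:
  # Option A: bindable reference, resolved by App Platform from an attached
  # database component named "db"
  - key: DATABASE_URL
    scope: RUN_AND_BUILD_TIME
    value: ${db.DATABASE_URL}
  # Option B: literal connection string for an external database; marking it
  # as a secret (assumed field) keeps it encrypted in the spec
  # - key: DATABASE_URL
  #   scope: RUN_AND_BUILD_TIME
  #   type: SECRET
  #   value: mysql://censored:censored@censored:3306/censored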

Logstash: Unable to connect to external Amazon RDS Database

I'm relatively new to Logstash and Elasticsearch.
I installed Logstash and Elasticsearch on macOS Mojave (10.14.2) with:
brew install logstash
brew install elasticsearch
When I check the versions:
brew list --versions
I receive the following output:
elasticsearch 6.5.4
logstash 6.5.4
When I open up Google Chrome and type this into the URL Address field:
localhost:9200
This is the JSON response that I receive:
{
"name" : "9oJAP16",
"cluster_name" : "elasticsearch_local",
"cluster_uuid" : "PgaDRw8rSJi-NDo80v_6gQ",
"version" : {
"number" : "6.5.4",
"build_flavor" : "oss",
"build_type" : "tar",
"build_hash" : "d2ef93d",
"build_date" : "2018-12-17T21:17:40.758843Z",
"build_snapshot" : false,
"lucene_version" : "7.5.0",
"minimum_wire_compatibility_version" : "5.6.0",
"minimum_index_compatibility_version" : "5.0.0"
},
"tagline" : "You Know, for Search"
}
Inside /usr/local/etc/logstash/logstash.yml are the following settings:
path.data: /usr/local/Cellar/logstash/6.5.4/libexec/data
pipeline.workers: 2
path.config: /usr/local/etc/logstash/conf.d
log.level: info
path.logs: /usr/local/var/log
Inside /usr/local/etc/logstash/pipelines.yml are the following settings:
- pipeline.id: main
path.config: "/usr/local/etc/logstash/conf.d/*.conf"
I have set up the following logstash_etl.conf file under:
/usr/local/etc/logstash/conf.d
Its contents:
input {
jdbc {
jdbc_connection_string => "jdbc:mysql://myapp-production.crankbftdpmc.us-west-2.rds.amazonaws.com:3306/products"
jdbc_user => "products_admin"
jdbc_password => "products123"
jdbc_driver_library => "/etc/logstash/mysql-connector/mysql-connector-java-5.1.21.jar"
jdbc_driver_class => "com.mysql.jdbc.driver"
schedule => "*/5 * * * *"
statement => "select * from products"
use_column_value => false
clean_run => true
}
}
# sudo /usr/share/logstash/bin/logstash-plugin install logstash-output-exec
output {
if ([purge_task] == "yes") {
exec {
command => "curl -XPOST 'localhost:9200/_all/products/_delete_by_query?conflicts=proceed' -H 'Content-Type: application/json' -d'
{
\"query\": {
\"range\" : {
\"#timestamp\" : {
\"lte\" : \"now-3h\"
}
}
}
}
'"
}
}
else {
stdout { codec => json_lines}
elasticsearch {
"hosts" => "localhost:9200"
"index" => "product_%{product_api_key}"
"document_type" => "%{[#metadata][index_type]}"
"document_id" => "%{[#metadata][index_id]}"
"doc_as_upsert" => true
"action" => "update"
"retry_on_conflict" => 7
}
}
}
When I do this:
brew services start logstash
I receive the following in my /usr/local/var/log/logstash-plain.log file:
[2019-01-15T14:51:15,319][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x399927c7 run>"}
[2019-01-15T14:51:15,663][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-01-15T14:51:16,514][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2019-01-15T14:57:31,432][ERROR][logstash.inputs.jdbc ] Unable to connect to database. Tried 1 times {:error_message=>"Java::ComMysqlCjJdbcExceptions::CommunicationsException: Communications link failure\n\nThe last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server."}
[2019-01-15T14:57:31,435][ERROR][logstash.inputs.jdbc ] Unable to connect to database. Tried 1 times {:error_message=>"Java::ComMysqlCjJdbcExceptions::CommunicationsException: Communications link failure\n\nThe last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server."}
What am I possibly doing wrong?
Is there a way to obtain a dump (like mysqldump) from an Elasticsearch server (stage or production) and then reimport it into a local Elasticsearch instance without using Logstash?
This is the same configuration file that works inside an Amazon EC2 production instance, so I don't know why it's not working on my local macOS Mojave machine.
You may be running into the SSL behavior of RDS, since:
If you use either the MySQL Java Connector v5.1.38 or later, or the MySQL Java Connector v8.0.9 or later to connect to your databases, even if you haven't explicitly configured your applications to use SSL/TLS when connecting to your databases, these client drivers default to using SSL/TLS. In addition, when using SSL/TLS, they perform partial certificate verification and fail to connect if the database server certificate is expired.
as described in the AWS RDS documentation.
To work around it, either set up a trust store for Logstash, which is also described in the link above (a sketch follows the snippet below), or take the risk of disabling SSL in the connection string, like:
jdbc_connection_string => "jdbc:mysql://myapp-production.crankbftdpmc.us-west-2.rds.amazonaws.com:3306/products?sslMode=DISABLED"
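For the trust-store route, here is a rough sketch of what that setup can look like; the file names, paths, and keystore password are assumptions, and the combined RDS CA bundle contains several certificates, so it may need to be split into individual PEM files before importing:
# Hypothetical sketch: import the RDS CA certificate into a Java truststore
keytool -importcert -alias rds-ca \
        -file rds-ca-cert.pem \
        -keystore /usr/local/etc/logstash/rds-truststore.jks \
        -storepass changeit -noprompt

# Then point Logstash's JVM at the truststore, e.g. via config/jvm.options:
# -Djavax.net.ssl.trustStore=/usr/local/etc/logstash/rds-truststore.jks
# -Djavax.net.ssl.trustStorePassword=changeit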

teiid doesn't connect to mysql datasource

I have a problem connecting to a MySQL datasource from JBoss EAP 6.3 with Teiid 8.10.1 installed on top of it. I should mention that I've added the MySQL connector driver as a module in order to use the MySQL datasource.
Also, when I try the same with JBoss EAP 6.3 without Teiid, the connection works.
Has anybody run into the same problem?
This is the error message I receive in the JBoss administration console:
"Unknown error
Unexpected HTTP response: 500
Request {
"address" => [
("subsystem" => "datasources"),
("data-source" => "database")
],
"operation" => "test-connection-in-pool" }
Response
Internal Server Error {
"outcome" => "failed",
"failure-description" => "JBAS010440: failed to invoke operation: JBAS010447: Connection is not valid",
"rolled-back" => true }"
I should mention that the logs don't show a more verbose message for this error, even with all log levels enabled (TRACE, DEBUG, ERROR, etc.).
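For comparison, here is a minimal sketch of the two pieces EAP 6 expects for a MySQL datasource, i.e. the module descriptor and the driver registration; the paths, connector version, and names below are assumptions rather than details from the original setup:
<!-- modules/com/mysql/main/module.xml (hypothetical layout) -->
<module xmlns="urn:jboss:module:1.1" name="com.mysql">
  <resources>
    <resource-root path="mysql-connector-java-5.1.38.jar"/>
  </resources>
  <dependencies>
    <module name="javax.api"/>
    <module name="javax.transaction.api"/>
  </dependencies>
</module>

<!-- standalone.xml, datasources subsystem: register the driver the datasource refers to -->
<driver name="mysql" module="com.mysql">
  <driver-class>com.mysql.jdbc.Driver</driver-class>
</driver>
Since test-connection-in-pool runs the configured validation logic rather than a plain connect, it is also worth checking the datasource's <validation> block (for example a <check-valid-connection-sql>SELECT 1</check-valid-connection-sql>) when it reports "Connection is not valid".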

How can one connect to remote hypertable in python

I'm trying to connect to a Hypertable master machine; Hypertable is deployed via Mesos. When I copy the hypertable.cfg file from the master machine to some arbitrary machine and run start-thriftbroker.sh, all I get is about ten lines of "Waiting for ThriftBroker to come up..." and then "ERROR: ThriftBroker did not come up". The ThriftBroker's logfile says:
1342340080 NOTICE ThriftBroker : (/root/src/hypertable/src/cc/Common/Config.cc:526) Initializing ThriftBroker (Hypertable 0.9.5.6 (v0.9.5.6-dirty))...
CPU cores count=1
CephBroker.MonAddr=10.0.1.245:6789
CephBroker.Port=38030
CephBroker.Workers=20
DfsBroker.Host=localhost
DfsBroker.Local.Port=38030
DfsBroker.Local.Root=fs/local
DfsBroker.Port=38030
HdfsBroker.Port=38030
HdfsBroker.Workers=20
HdfsBroker.fs.default.name=hdfs://<ip>:9010
Hyperspace.GracePeriod=200000
Hyperspace.KeepAlive.Interval=30000
Hyperspace.Lease.Interval=1000000
Hyperspace.Replica.Dir=hyperspace
Hyperspace.Replica.Host=[<ip>]
Hyperspace.Replica.Port=38040
Hyperspace.Replica.Workers=20
Hypertable.Master.Port=38050
Hypertable.Master.Workers=20
Hypertable.RangeServer.Port=38060
Hypertable.Verbose=true
ThriftBroker.Port=38080
pidfile=/opt/hypertable/current/run/ThriftBroker.pid
port=38080
reactors=1
verbose=true
1342340080 INFO ThriftBroker : (/root/src/hypertable/src/cc/Hyperspace/Session.cc:63) Hyperspace session setup to reconnect
1342340082 ERROR ThriftBroker : main (/root/src/hypertable/src/cc/ThriftBroker/ThriftBroker.cc:2404): Hypertable::Exception: Hyperspace 'mkdir' error, name=/hypertable/namemap/names - HYPERSPACE file exists
at void Hyperspace::Session::mkdir(const std::string&, bool, const std::vector<Hyperspace::Attribute, std::allocator<Hyperspace::Attribute> >*, Hypertable::Timer*) (/root/src/hypertable/src/cc/Hyperspace/Session.cc:1257)
It got solved by updating to a newer version of Hypertable.
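Since the title asks about connecting from Python: once a ThriftBroker is reachable, the connection is just a Thrift client pointed at the broker's host and port (38080 in the config above). Below is a rough sketch based on the Hypertable Thrift client examples for the 0.9.x releases; the module, class, and method names, the host, and the namespace/table names are assumptions about that client library rather than verified API:
# Hypothetical sketch: connect to a remote Hypertable ThriftBroker from Python.
# Assumes the Hypertable Python Thrift bindings are on the PYTHONPATH and a
# ThriftBroker is listening on the remote machine (port 38080 per the config above).
from hypertable.thriftclient import ThriftClient

client = ThriftClient("master.example.com", 38080)      # host is hypothetical
ns = client.namespace_open("test")                      # namespace name is hypothetical
result = client.hql_query(ns, "SELECT * FROM my_table LIMIT 5")
for cell in result.cells:
    print(cell)
client.namespace_close(ns)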