Optional Ecto repos - configuration

I have 3 optional repos (not all of the databases need to exist). If I am able to connect to any of the repos (I will check all of them), I need to query every successfully connected database and perform my action.
I have created the repo modules (repos.ex) and the corresponding configs.
After that, when I need to query, I start the repo like this:
{:ok, pid} = DataLayer.DevRepo.start_link(name: :atom)
Note: I have tried name: nil as well.
When I try to query something, I get the following error:
DataLayer.DevRepo.all DataLayer.User
** (ArgumentError) argument error
(stdlib) :ets.lookup_element(Ecto.Registry, nil, 3)
(ecto) lib/ecto/registry.ex:18: Ecto.Registry.lookup/1
(ecto) lib/ecto/adapters/sql.ex:251: Ecto.Adapters.SQL.sql_call/6
(ecto) lib/ecto/adapters/sql.ex:426: Ecto.Adapters.SQL.execute_and_cache/7
(ecto) lib/ecto/repo/queryable.ex:133: Ecto.Repo.Queryable.execute/5
(ecto) lib/ecto/repo/queryable.ex:37: Ecto.Repo.Queryable.all/4
Any help will be really appreciated. Thanks.
Ecto version: 2.2.11

Related

How can we run queries concurrently, using goroutines?

I am using gorm v1 (ORM), Go version 1.14.
The DB connection is created at the start of my app,
and that DB is passed throughout the app.
I have a complex and long piece of functionality.
Let's say I have 10 sets of queries to run, and the order doesn't matter.
So, what I did was
go queryset1(DB)
go queryset2(DB)
...
go queryset10(DB)
// here I have a wait, maybe via channel or WaitGroup.
Inside queryset1:
func queryset1(db *gorm.DB /*, wg or errChannel */) {
    db.Count() // basic count query
    wg.Done()  // or errChannel <- nil
}
Now, the problem is that I encounter MySQL error 1040, "too many connections".
Why is this happening? Does every goroutine create a new connection?
If so, is there a way to check this and the number of "live connections" in MySQL
(not the SHOW STATUS variables like Connections)?
How can I concurrently query the DB?
Edit:
This guy has the same problem
The error is not directly related to go-gorm, but to the underlying MySQL configuration and your initial connection configuration. In your code, you can manage the following parameters during your initial connection to the database.
maximum open connections (SetMaxOpenConns function)
maximum idle connections (SetMaxIdleConns function)
maximum timeout for idle connections (SetConnMaxLifetime function)
For more details, check the official docs or this article on how to get the maximum performance from your connection configuration.
If you want to prevent a situation where each goroutine uses a separate connection, you can do something like this:
// restrict goroutines to be executed 5 at a time
connCh := make(chan bool, 5)
var wg sync.WaitGroup
wg.Add(10)
go queryset1(DB, &wg, connCh)
go queryset2(DB, &wg, connCh)
...
go queryset10(DB, &wg, connCh)
wg.Wait()
close(connCh)
Inside your queryset functions:
func queryset1(db *gorm.DB, wg *sync.WaitGroup, connCh chan bool) {
    connCh <- true
    db.Count() // basic count query
    <-connCh
    wg.Done()
}
The connCh channel lets the first 5 goroutines write to it and blocks the rest until one of the running goroutines reads a value back out. This prevents the situation where each goroutine starts its own connection. Some of the connections should be reused, though that also depends on the initial connection configuration.

Writing from cascalog to MySQL does not work. How to debug this?

I'm trying to write the result of a Cascalog query into a MySQL database. For this, I'm using cascading-jdbc and following an example I found here. I'm using cascading-jdbc-core and cascading-jdbc-mysql in version 3.0.0.
I'm executing precisely this code from my REPL:
(let [data [["foo1" "bar1"]
            ["foo2" "bar2"]]
      query-params (into-array String ["?col1" "?col2"])
      column-names (into-array String ["col1" "col2"])
      update-params (into-array String ["?col1"])
      update-column-names (into-array String ["col1"])
      jdbc-tap (fn []
                 (let [scheme (JDBCScheme.
                                (Fields. query-params)
                                column-names
                                nil
                                (Fields. update-params)
                                update-column-names)
                       table-desc (TableDesc.
                                    "test_table"
                                    query-params
                                    column-names
                                    (into-array String []))
                       tap (JDBCTap.
                             "jdbc:mysql://192.168.99.101:3306/test_db?user=root&password=my-secret-pw"
                             "com.mysql.jdbc.Driver"
                             table-desc
                             scheme)]
                   tap))]
  (?<- (jdbc-tap)
       [?col1 ?col2]
       (data ?col1 ?col2)))
When I'm running the code, I'm seeing these logs inside the REPL:
15/12/11 11:08:44 INFO hadoop.FlowMapper: sinking to: JDBCTap{connectionUrl='jdbc:mysql://192.168.99.101:3306/test_db?user=root&password=my-secret-pw', driverClassName='com.mysql.jdbc.Driver', tableDesc=TableDesc{tableName='test_table', columnNames=[?col1, ?col2], columnDefs=[col1, col2], primaryKeys=[]}}
15/12/11 11:08:44 INFO mapred.Task: Task:attempt_local1324562503_0006_m_000000_0 is done. And is in the process of commiting
15/12/11 11:08:44 INFO mapred.LocalJobRunner:
15/12/11 11:08:44 INFO mapred.Task: Task 'attempt_local1324562503_0006_m_000000_0' done.
15/12/11 11:08:44 INFO mapred.LocalJobRunner: Finishing task: attempt_local1324562503_0006_m_000000_0
15/12/11 11:08:44 INFO mapred.LocalJobRunner: Map task executor complete.
Everything looks fine. However, no data is written. I checked with tcpdump that not even a connection to my local MySQL database is being established. Also, when I change the JDBC connection string to obviously wrong values (user names that do not exist, a non-existent DB name, and even a non-existent IP for the DB server), I get the same logs, which do not complain about anything.
Also, changing the jdbc-tap to stdout produces the expected values.
I do not know at all how to debug this. Is there a way to produce error output? Right now, I have no clue what is going wrong.
As it turns out, I was using the wrong version of cascading-jdbc. Cascalog 2.1.1 uses Cascading 2.5.3; switching to a 2.5.x version of cascading-jdbc fixed the problem.
I was not able to see this from the error messages, though (as there were none). One of the developers of cascading-jdbc was kind enough to point it out to me.

Go write unix /tmp/mysql.sock: broken pipe when sending a lot of requests

I have a Go API endpoint that makes several MySQL queries. When the endpoint receives a small number of requests, it works just fine. However, I am now testing it using ApacheBench with 100 requests. The first 100 all went through; however, the second batch of 100 caused this error to appear:
2014/01/15 12:08:03 http: panic serving 127.0.0.1:58602: runtime error: invalid memory address or nil pointer dereference
goroutine 973 [running]:
net/http.func·009()
/usr/local/Cellar/go/1.2/libexec/src/pkg/net/http/server.go:1093 +0xae
runtime.panic(0x402960, 0x9cf419)
/usr/local/Cellar/go/1.2/libexec/src/pkg/runtime/panic.c:248 +0x106
database/sql.(*Rows).Close(0x0, 0xc2107af540, 0x69)
/usr/local/Cellar/go/1.2/libexec/src/pkg/database/sql/sql.go:1576 +0x1e
store.findProductByQuery(0xc2107af540, 0x69, 0x0, 0xb88e80, 0xc21000ac70)
/Users/dennis.suratna/workspace/session-go/src/store/product.go:83 +0xe3
store.FindProductByAppKey(0xc210337748, 0x7, 0x496960, 0x6, 0xc2105eb1b0)
/Users/dennis.suratna/workspace/session-go/src/store/product.go:28 +0x11c
api.SessionHandler(0xb9eff8, 0xc2108ee200, 0xc2108f5750, 0xc2103285a0, 0x0, ...)
/Users/dennis.suratna/workspace/session-go/src/api/session_handler.go:31 +0x2fb
api.func·001(0xb9eff8, 0xc2108ee200, 0xc2108f5750, 0xc2103285a0)
/Users/dennis.suratna/workspace/session-go/src/api/api.go:81 +0x4f
reflect.Value.call(0x3ad9a0, 0xc2101ffdb0, 0x130, 0x48d520, 0x4, ...)
/usr/local/Cellar/go/1.2/libexec/src/pkg/reflect/value.go:474 +0xe0b
reflect.Value.Call(0x3ad9a0, 0xc2101ffdb0, 0x130, 0xc2103c4a00, 0x3, ...)
/usr/local/Cellar/go/1.2/libexec/src/pkg/reflect/value.go:345 +0x9d
github.com/codegangsta/inject.(*injector).Invoke(0xc2103379c0, 0x3ad9a0, 0xc2101ffdb0, 0x4311a0, 0x1db94e, ...)
It looks like it's not caused by the number of concurrent requests but, rather, by something that is not being properly closed. I am already closing every prepared statement that I create in my code. I am wondering if anyone has seen this before.
Edit:
This is how I am initializing my MySQL connection:
func InitStore(environment string) error {
    db, err := sql.Open("mysql", connStr(environment))
    ....
    S = &Store{
        Mysql:       db,
        Environment: environment,
    }
}
This happens only once, when I start the server.
OK, so I was able to solve this problem, and now I can send ~500 requests with concurrency 10 with no more "broken pipe" or "too many connections" errors.
I think it all comes down to following best practices. When you don't expect multiple rows to be returned, use QueryRow instead of Query, AND chain it with Scan:
db.QueryRow(...).Scan(...)
If you don't expect any rows to be returned and you're not going to reuse your statements, use Exec, not Prepare.
If you have a prepared statement or are querying multiple rows, don't forget to Close().
Got all of the above from
https://github.com/go-sql-driver/mysql/issues/111
If you use Go 1.2.x, you can use db.SetMaxOpenConns to tell the sql package not to open more than X connections. Queries that need a database connection once X connections are already open (and busy) will block until one becomes available.
That being said: what are the next lines of the "stack trace"? Line ~1093 in http/server.go is the recover code that runs when your serve function panics. It looks more like you are simply mishandling some data, or missing an error check and then trying to process data when you were actually returned an error.

python: sqlalchemy - how do I ensure connection not stale using new event system

I am using the sqlalchemy package in python. I have an operation that takes some time to execute after I perform an autoload on an existing table. This causes the following error when I attempt to use the connection:
sqlalchemy.exc.OperationalError: (OperationalError) (2006, 'MySQL server has gone away')
I have a simple utility function that performs an insert many:
def insert_data(data_2_insert, table_name):
    engine = create_engine('mysql://blah:blah123@localhost/dbname')
    # MetaData is a Table catalog.
    metadata = MetaData()
    mytable = Table(table_name, metadata, autoload=True, autoload_with=engine)
    for c in mytable.c:
        print c
    column_names = tuple(c.name for c in mytable.c)
    final_data = [dict(zip(column_names, x)) for x in data_2_insert]
    ins = mytable.insert()
    conn = engine.connect()
    conn.execute(ins, final_data)
    conn.close()
It is the following line that takes a long time to execute, since data_2_insert has 677,161 rows:
final_data = [dict(zip(column_names, x)) for x in data_2_insert]
I came across this question which refers to a similar problem. However I am not sure how to implement the connection management suggested by the accepted answer because robots.jpg pointed this out in a comment:
Note for SQLAlchemy 0.7 - PoolListener is deprecated, but the same solution can be implemented using the new event system.
If someone could give me a couple of pointers on how to integrate the suggestions into the way I use sqlalchemy, I would be very appreciative. Thank you.
I think you are looking for something like this:
from sqlalchemy import exc, event
from sqlalchemy.pool import Pool
@event.listens_for(Pool, "checkout")
def check_connection(dbapi_con, con_record, con_proxy):
    '''Listener for Pool checkout events that pings every connection before using.
    Implements pessimistic disconnect handling strategy. See also:
    http://docs.sqlalchemy.org/en/rel_0_8/core/pooling.html#disconnect-handling-pessimistic'''
    cursor = dbapi_con.cursor()
    try:
        cursor.execute("SELECT 1")  # could also be dbapi_con.ping(),
                                    # not sure what is better
    except exc.OperationalError, ex:
        if ex.args[0] in (2006,   # MySQL server has gone away
                          2013,   # Lost connection to MySQL server during query
                          2055):  # Lost connection to MySQL server at '%s', system error: %d
            # caught by pool, which will retry with a new connection
            raise exc.DisconnectionError()
        else:
            raise
If you wish to trigger this strategy conditionally, you should avoid the decorator here and instead register the listener using the listen() function:
# somewhere during app initialization
if config.check_connection_on_checkout:
    event.listen(Pool, "checkout", check_connection)
More info:
Connection Pool Events
Events API
There is a better way to handle it right now - pool_recycle:
engine = create_engine('mysql://...', pool_recycle=3600)
MySQL has a default timeout of 8 hours.
This leads to the connection being closed by MySQL while the engine above it (such as SQLAlchemy) does not know about it.
There are 2 ways to solve it:
Optimistic - using pool_recycle
Pessimistic - using pool_pre_ping=True
I prefer to go with pool_recycle, as it doesn't emit a SELECT 1 before each query, causing less stress on the DB.

Connecting to a remote MySQL server from a Delphi program through SSL

I don't have a good knowledge of SSL principles; I just want the encryption to work for me.
I have a DB and a user with "REQUIRE X509" specified.
The necessary certificates have been created as described in the MySQL docs, and they work well: I can connect to the server from the Windows command line.
The problem arises when I try to do the same from my program using the MySQL Client API (without SSL, the program also works fine).
The unit used is: http://www.audio-data.de/mysql.html.
Here are the paths I have tried:
1) If I just add a mysql_ssl_set() call (with the proper params) before mysql_real_connect(), the latter fails with a generic SSL Connection Error.
2) The MySQL docs in en/mysql-ssl-set.html say that the function always returns 0. But when I checked, the result turned out to be the number 11150848. Then I wrote it like this:
showmessage(inttostr(mysql_ssl_set(mys, '.\certs\client-key.pem', '.\certs\client-cert.pem', '.\certs\ca-cert.pem', nil)));
...and repeated the line 8 times.
Each time it returned a slightly greater number - 11158528, 11158784, 11159040, ... - and two zeroes for the last two calls.
After that, mysql_real_connect() finally succeeded! The program even managed to execute some queries and return proper results for them (I know the data), but then it crashed with an Access Violation: write of address ... at some place.
The crash point varied between runs and with slight changes to the code.
It looks much like a version incompatibility issue. I tried client libraries from both the MySQL 5.0 and 5.1 Windows installations (the server is 5.1 and runs remotely under Linux; however, the 5.0 mysql client programs have no trouble SSL-connecting to it), but with no success.
Is anybody familiar with this issue? Thanks a lot for the help, and sorry for the mistakes in the question.
As I see it, the mysql_ssl_set declaration is incorrect. It is declared as:
function mysql_ssl_set(_mysql: PMYSQL; key, cert, ca, capath: PAnsiChar): longint; stdcall;
But mysql.h contains:
my_bool STDCALL mysql_ssl_set(MYSQL *mysql, const char *key,
                              const char *cert, const char *ca,
                              const char *capath, const char *cipher);
That explains the garbage in the return value, the Access Violations, and so on: the Pascal declaration is missing the final cipher parameter and declares the one-byte my_bool result as a longint, so every call corrupts the stack.