What could this Redis (StackExchange.Redis) exception mean?

I am trying to understand what this Redis error means. We have a master and localhost as a slave. From time to time we receive the following error, which we think is produced by a call to a large cache, but as I said it does not happen all the time.
Exception message: Timeout performing EVAL, inst: 2, mgr: ExecuteSelect, err: never, queue: 0, qu: 0, qs: 0, qc: 0, wr: 0, wq: 0, in: 0, ar: 0, IOCP: (Busy=0,Free=1000,Min=32,Max=1000), WORKER: (Busy=5,Free=32762,Min=32,Max=32767), clientName: BSKKAYSIS-IIS01

Ethereum private testnet failing due to peers not connecting

The problem
I want to create a private Ethereum network, however my two peers refuse to connect. They always fail with the following error: Snapshot extension registration failed peer=af5dfeb7 err="peer connected on snap without compatible eth support".
My attempted fixes
Setting --snapshot=false produces the same error
Changing the --syncmode to full, fast or light made no difference
Adding the peers manually with admin.addPeer(${peer1.admin.nodeInfo.enode}) returns true, however net.peerCount and admin.peers.length return 0
Changing the chainId did not produce any different results
Waiting did not help
My setup process
Creating the peers:
geth init --datadir "peer1" genesis.json
geth init --datadir "peer2" genesis.json
Starting the peers and opening the console:
geth --datadir "peer1" --networkid 1111 --port 30401 console 2>peer1.log
geth --datadir "peer2" --networkid 1111 --port 30402 console 2>peer2.log
The genesis file (Created by puppeth)
{
  "config": {
    "chainId": 1111,
    "homesteadBlock": 0,
    "eip150Block": 0,
    "eip150Hash": "0x0000000000000000000000000000000000000000000000000000000000000000",
    "eip155Block": 0,
    "eip158Block": 0,
    "byzantiumBlock": 0,
    "constantinopleBlock": 0,
    "petersburgBlock": 0,
    "istanbulBlock": 0,
    "ethash": {}
  },
  "nonce": "0x0",
  "timestamp": "0x61af5dd9",
  "extraData": "0x0000000000000000000000000000000000000000000000000000000000000000",
  "gasLimit": "0x47b760",
  "difficulty": "0x1",
  "mixHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
  "coinbase": "0x0000000000000000000000000000000000000000",
  "alloc": {},
  "number": "0x0",
  "gasUsed": "0x0",
  "parentHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
  "baseFeePerGas": null
}
Geth version info
Geth
Version: 1.10.13-stable
Git Commit: 7a0c19f813e285516f4b525305fd73b625d2dec8
Architecture: amd64
Go Version: go1.17.2
Operating System: linux
GOPATH=
GOROOT=go

How to handle connection drops

I'm using a connection pool and I'm clueless about what to do when the MySQL server drops my client's connection due to inactivity, or when the MySQL server goes down. I'm calling the function below every time I have to make a query:
def getDbCnx():
    try:
        dbConn = mysql.connector.connect(pool_name="connectionPool",
                                         pool_size=3,
                                         pool_reset_session=True,
                                         **dbConfig)
    except mysql.connector.Error as err:
        # connect() failed, so there is no connection object to close here
        if err.errno == errorcode.ER_ACCESS_DENIED_ERROR:
            print("Something is wrong with your user name or password")
        elif err.errno == errorcode.ER_BAD_DB_ERROR:
            print("Database does not exist")
        else:
            print(err)
        return None
    else:
        return dbConn
As per my current understanding, the connection pool will be initialised on the first call of this function, and after that it will just return a free connection from the already initialised pool. Now, suppose the connection pool gets initialised successfully on the first call, and after some time the MySQL server goes down or drops the connection due to inactivity. What will happen when I query after such a situation? I suppose the older context would have gone stale.
Basically, how do I ensure that the connection pool refreshes its internal contexts every time it loses connectivity with the MySQL server?
When you invoke dbConn.close(), the connection is reset (see the source here: https://github.com/mysql/mysql-connector-python/.../mysql/connector/pooling.py#L118; session variables are deallocated, uncommitted transactions are lost, etc.). The connection is not fully closed, which can be checked by printing the connection id (it should not change if it is the same connection).
When you later retrieve another connection from the pool with mysql.connector.connect(pool_name = "connectionPool"), the connection is checked; if it cannot be reconnected, a new connection is opened (with a new session id), and only if that new connection also fails is an error raised. So, as long as the server is online, the user account you are using exists, and the pool is not exhausted, you are almost certain to get a connection, even if the server was restarted or upgraded after the pool was created, or if the server has closed the inactive session. Just make sure you close each connection so it can go back to the pool and be reused.
In the example below I shut down the server with the SHUTDOWN command from the MySQL console and then restart it with mysqladmin. You can see the connection id of each connection in the pool (some connections were reused), and that the variables are deallocated because the connection is reset when it goes back to the pool.
from time import sleep
import mysql.connector
from mysql.connector import errorcode
from mysql.connector import errors

dbConfig = {
    'host': '127.0.0.1',
    'user': 'some_user', 'password': 'some_pass',
    'port': 4824,
}

def getDbCnx():
    try:
        dbConn = mysql.connector.connect(
            pool_name="connectionPool",
            pool_size=3,
            pool_reset_session=True,
            **dbConfig
        )
        return dbConn
    except (AttributeError, errors.InterfaceError) as err:
        # Errors from bad configuration: unsupported or invalid options
        print(f"Something is wrong with the connection pool: {err}", flush=True)
    except errors.PoolError as err:
        # Errors from bad connection pool configuration or pool exhausted
        print(f"Something is wrong with the connection pool: {err}", flush=True)
    except errors.OperationalError as err:
        # Errors from MySQL like lost connection (2013, 2055)
        print(f"Something is wrong with the MySQL server: {err}", flush=True)
    except errors.ProgrammingError as err:
        # Errors from bad connection data
        if err.errno == errorcode.ER_ACCESS_DENIED_ERROR:
            print("Something is wrong with your user name or password", flush=True)
        elif err.errno == errorcode.ER_BAD_DB_ERROR:
            print("Database does not exist", flush=True)
    except mysql.connector.Error as err:
        print(f"{err}", flush=True)
        print(f"err type: {type(err)}")
    return None

def can_connect():
    print("Getting connections...")
    greetings = ["hello", "hi", "howdy", "hola"]
    for n in range(4):
        print(f"getting connection {n}")
        cnx = getDbCnx()
        if not cnx:
            print("No database connection!!!")
            return False
        cur = cnx.cursor()
        cur.execute("select connection_id()")
        res = cur.fetchall()
        print(f"connection id: {res}")
        cur.execute('show variables like "%greeting%"')
        res = cur.fetchall()
        print(f"greeting?: {res}")
        cur.execute("select @greeting")
        greet = cur.fetchall()
        print(f"greet: {greet}")
        cur.execute(f"SET @greeting='{greetings[n]}'")
        cur.execute("select @greeting")
        greet = cur.fetchall()
        print(f"greet: {greet}\n")
        cur.close()
        cnx.close()
    print("")
    return True

def pause(sleep_secs=30, count_down=29):
    sleep(sleep_secs)
    for s in range(count_down, 0, -1):
        print(f"{s}, ", end='')
        sleep(1)
    print()

def test():
    print("Initial test")
    assert can_connect()
    print("\nStop the server now...")
    pause(10, 20)
    print("\ntest with server stopped")
    print("\ngetting connections with server shutdown should fail")
    assert not can_connect()
    print("\nStart the server now...")
    pause()
    print("\ntest if we can get connections again")
    print("second test")
    assert can_connect()

if __name__ == "__main__":
    test()
Here is the output of the example above; even though the server was shut down, you can still retrieve connections once it comes back online:
Initial test
Getting connections...
getting connection 0
connection id: [(9,)]
greeting?: []
greet: [(None,)]
greet: [('hello',)]
getting connection 1
connection id: [(10,)]
greeting?: []
greet: [(None,)]
greet: [('hi',)]
getting connection 2
connection id: [(11,)]
greeting?: []
greet: [(None,)]
greet: [('howdy',)]
getting connection 3
connection id: [(9,)]
greeting?: []
greet: [(None,)]
greet: [('hola',)]
Stop the server now...
20, 19, 18, 17, 16, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1,
test with server stopped
getting connections with server shutdown should fail
Getting connections...
getting connection 0
Something is wrong with the connection pool: Can not reconnect to MySQL after 1 attempt(s): 2003: Can't connect to MySQL server on '127.0.0.1:4824' (10061 No connection could be made because the target machine actively refused it)
No database connection!!!
Start the server now...
29, 28, 27, 26, 25, 24, 23, 22, 21, 20, 19, 18, 17, 16, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1,
test if we can get connections again
second test
Getting connections...
getting connection 0
connection id: [(23,)]
greeting?: []
greet: [(None,)]
greet: [('hello',)]
getting connection 1
connection id: [(24,)]
greeting?: []
greet: [(None,)]
greet: [('hi',)]
getting connection 2
connection id: [(25,)]
greeting?: []
greet: [(None,)]
greet: [('howdy',)]
getting connection 3
connection id: [(23,)]
greeting?: []
greet: [(None,)]
greet: [('hola',)]
We can see that the first time we retrieve connections from the pool we get the connection ids [9, 10, 11], and connection 9 is reused. Later, while the server is shut down, the "No database connection!!!" text is printed; after I start the server again the connection ids are [23, 24, 25], with connection 23 reused. In addition, the greeting variable was deallocated on the server.
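Beyond relying on the pool's own reconnect logic, a client can also retry the connect call itself when the server is only briefly down. Below is a minimal sketch: connect_with_retry is a hypothetical helper (not part of mysql-connector-python) that wraps any zero-argument connect callable, such as a lambda around mysql.connector.connect(pool_name=...); the attempts and delay values are illustrative.

```python
import time

def connect_with_retry(connect, attempts=3, delay=1.0):
    """Call `connect()` until it returns a connection or attempts run out.

    `connect` is any zero-argument callable that returns a connection
    object or raises an exception, e.g.
    lambda: mysql.connector.connect(pool_name="connectionPool", **dbConfig)
    """
    last_err = None
    for n in range(attempts):
        try:
            return connect()
        except Exception as err:  # with mysql.connector, catch errors.Error here
            last_err = err
            print(f"connect attempt {n + 1}/{attempts} failed: {err}", flush=True)
            if n < attempts - 1:
                time.sleep(delay)  # wait before the next attempt
    raise last_err

# Hypothetical usage with the pool from the answer above:
# cnx = connect_with_retry(
#     lambda: mysql.connector.connect(pool_name="connectionPool",
#                                     pool_size=3, **dbConfig),
#     attempts=5, delay=2.0)
```

This keeps the retry policy in one place instead of scattering sleep/retry logic across every call site.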

Sequelize connection timeout while using Serverless Aurora, looking for a way to increase timeout duration or retry connection

I'm having an issue at the moment where I'm trying to make use of a Serverless Aurora database as part of my application.
The problem is essentially that when the database is cold, the time to establish a connection can be greater than 30 seconds (due to db spin-up). This seems to be longer than the default timeout in Sequelize (using mysql), and as far as I can see I can't find any other way to increase this timeout; perhaps I need some way of re-attempting the connection?
Here's my current config:
const sequelize = new Sequelize(DATABASE, DB_USER, DB_PASSWORD, {
  host: DB_ENDPOINT,
  dialect: "mysql",
  operatorsAliases: false,
  pool: {
    max: 2,
    min: 0,
    acquire: 120000, // This needs to be fairly high to account for a serverless db spinup
    idle: 120000,
    evict: 120000
  }
});
A couple of extra points:
Once the database is warm then everything works perfectly.
Keeping the database "hot", while it would technically work, kind of defeats the point of having it as a serverless db (Cost reasons).
I'm open to simply having my client re-try the API call in the event the timeout is a connection error.
Here's the logs in case they help at all.
{
  "name": "SequelizeConnectionError",
  "parent": {
    "errorno": "ETIMEDOUT",
    "code": "ETIMEDOUT",
    "syscall": "connect",
    "fatal": true
  },
  "original": {
    "errorno": "ETIMEDOUT",
    "code": "ETIMEDOUT",
    "syscall": "connect",
    "fatal": true
  }
}
So after some more digging it looks like you can use the dialectOptions prop on the options object to pass things down to the underlying connection.
dialectOptions: {
  connectTimeout: 60000
}
This seems to be doing the trick.
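For completeness, here is a sketch of how the dialectOptions block fits into the configuration from the question (the 60000 ms value is just a starting point and should be tuned to your Aurora spin-up time):

```javascript
const sequelize = new Sequelize(DATABASE, DB_USER, DB_PASSWORD, {
  host: DB_ENDPOINT,
  dialect: "mysql",
  pool: {
    max: 2,
    min: 0,
    acquire: 120000, // high to tolerate a serverless db spinup
    idle: 120000,
    evict: 120000
  },
  // Passed through to the underlying mysql driver:
  dialectOptions: {
    connectTimeout: 60000
  }
});
```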

Error when migrating database using edeliver

I've always used edeliver to deploy my apps, but on my new app, I'm getting a weird error.
When I run mix edeliver migrate production, I'm getting this response:
EDELIVER MYPROJECT WITH MIGRATE COMMAND
-----> migrateing production servers
production node:
user : user
host : example.com
path : /home/user/app_release
response: RPC to 'myproject#127.0.0.1' failed: {'EXIT',
{#{'__exception__' => true,
'__struct__' =>
'Elixir.ArgumentError',
message => <<"argument error">>},
[{ets,lookup_element,
['Elixir.Ecto.Registry',nil,3],
[]},
{'Elixir.Ecto.Registry',lookup,1,
[{file,"lib/ecto/registry.ex"},
{line,18}]},
{'Elixir.Ecto.Adapters.SQL',sql_call,
6,
[{file,"lib/ecto/adapters/sql.ex"},
{line,251}]},
{'Elixir.Ecto.Adapters.SQL','query!',
5,
[{file,"lib/ecto/adapters/sql.ex"},
{line,198}]},
{'Elixir.Ecto.Adapters.MySQL',
'-execute_ddl/3-fun-0-',4,
[{file,"lib/ecto/adapters/mysql.ex"},
{line,107}]},
{'Elixir.Enum',
'-reduce/3-lists^foldl/2-0-',3,
[{file,"lib/enum.ex"},{line,1826}]},
{'Elixir.Ecto.Adapters.MySQL',
execute_ddl,3,
[{file,"lib/ecto/adapters/mysql.ex"},
{line,107}]},
{'Elixir.Ecto.Migrator',
'-migrated_versions/2-fun-0-',2,
[{file,"lib/ecto/migrator.ex"},
{line,44}]}]}}
But when I type mix edeliver restart production followed by the migration command, everything goes normally. Why is this happening?

How to detect incorrect endpoint for my database using C3P0 pool and JDBC

A silly bug while copying the connection host made me point to an incorrect endpoint... this blocked the initialization process for 30 minutes...
and finally the exception:
Acquisition Attempt Failed!!! Clearing pending acquires. While trying to acquire needed new resource, we failed to succeed more than the maximum number of allowed acquisition attempts (30).
Trying to reproduce the error, I simply pointed to google.es with the following connection string:
jdbc:mysql://google.es/myDB
Initializing c3p0 pool... com.mchange.v2.c3p0.ComboPooledDataSource [acquireIncrement -> 1, acquireRetryAttempts -> 30, acquireRetryDelay -> 1000, autoCommitOnClose -> false, automaticTestTable -> null, breakAfterAcquireFailure -> false, checkoutTimeout -> 0, connectionCustomizerClassName -> null, connectionTesterClassName -> com.mchange.v2.c3p0.impl.DefaultConnectionTester, dataSourceName -> 1hgeksr8t1vk3sn21ui8jk0|53689fd0, debugUnreturnedConnectionStackTraces -> false, description -> null, driverClass -> com.mysql.jdbc.Driver, factoryClassLocation -> null, forceIgnoreUnresolvedTransactions -> false, identityToken -> 1hgeksr8t1vk3sn21ui8jk0|53689fd0, idleConnectionTestPeriod -> 0, initialPoolSize -> 3, jdbcUrl -> jdbc:mysql://google.es/myDB, maxAdministrativeTaskTime -> 0, maxConnectionAge -> 0, maxIdleTime -> 3600, maxIdleTimeExcessConnections -> 300, maxPoolSize -> 5, maxStatements -> 0, maxStatementsPerConnection -> 0, minPoolSize -> 1, numHelperThreads -> 3, preferredTestQuery -> null, properties -> {user=*, password=*}, propertyCycle -> 0, statementCacheNumDeferredCloseThreads -> 0, testConnectionOnCheckin -> false, testConnectionOnCheckout -> false, unreturnedConnectionTimeout -> 0, userOverrides -> {}, usesTraditionalReflectiveProxies -> false ]
and the initialization gets stuck for those 30 long minutes...
I'd like it to throw an exception faster, but I'm unsure which configuration values I should touch: c3p0's acquireRetryAttempts? or JDBC's socketTimeout? And, most important, what might break if I change them...
The default set-up should take about 30 secs (not 30 mins!) to detect a bad database: it makes acquireRetryAttempts=30 attempts with a delay of acquireRetryDelay=1000ms between them before concluding that a Connection cannot be acquired. If you wish faster detection of a bad endpoint, reduce either or both of those values. You can set acquireRetryAttempts to one, if you'd like, in which case any Exception on Connection acquisition will be interpreted as a problem with the endpoint.
See http://www.mchange.com/projects/c3p0/#configuring_recovery
The problem lies with the JDBC timeout configuration.
As specified on this blog: Understanding JDBC Internals & Timeout Configuration
JDBC defaults to 0 ms for the connection and socket timeouts, that is, no timeout.
If the target endpoint exists but does not answer (packets are probably swallowed by a firewall), connections stay trapped, and only after a whole minute (why one minute? still a mystery) does c3p0 attempt a connection retry; hence the exception appeared after far too long.
The solution lies in adding connectTimeout=XXXms to the JDBC URL (it can be passed as a parameter: mysql://google.es/myDB?connectTimeout=1000). With that in place the exception occurs after about a minute (30 tries at 1 sec for the timeout plus 1 sec for the retry delay).
Still, all parameters need to be tuned to your needs, as they have other implications and may disrupt normal functioning. It is also recommended to check the c3p0 forum thread about possible configurations, such as activating breakAfterAcquireFailure.
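Combining both suggestions, a faster-failing c3p0 set-up might look like the sketch below (a configuration sketch assuming the c3p0 and MySQL Connector/J jars are on the classpath; the attempt counts and timeouts are illustrative, not recommendations):

```java
import com.mchange.v2.c3p0.ComboPooledDataSource;

public class FastFailPool {
    public static ComboPooledDataSource create() throws Exception {
        ComboPooledDataSource ds = new ComboPooledDataSource();
        ds.setDriverClass("com.mysql.jdbc.Driver");
        // connectTimeout (ms) bounds each TCP connect attempt at the JDBC level
        ds.setJdbcUrl("jdbc:mysql://google.es/myDB?connectTimeout=1000");
        // Fail after 3 acquisition attempts instead of the default 30
        ds.setAcquireRetryAttempts(3);
        ds.setAcquireRetryDelay(1000); // ms between attempts
        // Optionally give up permanently once acquisition fails
        ds.setBreakAfterAcquireFailure(true);
        return ds;
    }
}
```

With these values a dead endpoint is reported after roughly 3 x (1 s timeout + 1 s delay) instead of minutes; whether breakAfterAcquireFailure is appropriate depends on how you want the pool to behave after a transient outage.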