flushing database cache in SWI-Prolog - mysql

We are using SWI-Prolog to run our test cases. Whenever a test starts, I open a connection to the MySQL database, store the name of the test that is being run, and then close the DB. These tests run for about 2 days continuously. After the tests are done, the results get stored in a folder on the server. There is a predicate in another Prolog file that is called to update the results to the MySQL database. The code is simple: I use the odbc library and just call odbc_* predicates to connect and update MySQL by issuing direct queries.
The actual problem is :
If I try to call the predicate from the same Prolog window where the test just completed, I get an error while updating the DB server, although I do not get any error on the connection. If I close that Prolog session with halt, close all the open Prolog windows, and then open a completely new instance of Prolog and run the predicate, the update goes fine.
I have a feeling that there is some connection reference to the MySQL DB left in the Prolog database. Is there any way to clear it in Prolog so that I can run the same predicate without closing any existing Prolog windows?
Any ideas appreciated.
Thanks.

If you open the connection and then do long processing, MySQL can drop the connection in between after a certain timeout (which I believe can be configured in my.cnf).
EDIT: SWI-Prolog has odbc_disconnect/1, which can be used to explicitly close the connection after using it, and an "aliasing" mode that can be used to obtain a previously opened connection when calling odbc_connect. In your case, try explicitly closing the connection after using it, and avoid using an alias when opening.
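For illustration, a minimal sketch of what that could look like; the DSN ('testdb_dsn'), credentials and the results table are placeholders, not taken from the question. The connection is opened without an alias, used for the update, and always released again:

:- use_module(library(odbc)).

% Hypothetical update predicate: open a fresh, un-aliased connection,
% run the update, and always close the handle, even if the query fails.
update_results(TestName, Status) :-
    setup_call_cleanup(
        odbc_connect('testdb_dsn', Conn,
                     [ user(test_user), password(secret) ]),
        ( format(atom(SQL),
                 "UPDATE results SET status = '~w' WHERE test = '~w'",
                 [Status, TestName]),
          odbc_query(Conn, SQL)
        ),
        odbc_disconnect(Conn)).   % no stale handle is left in the Prolog session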

Related

Load Testing Database with JMeter: force re-opening the connection to load-test queries together with connection opening

I need to validate a workload on a DB used to answer an HTTP API.
In this context, in production, a lot of connections are opened and closed. For each connection, only 2 or 3 small queries are launched. So connection 'activity' (open/close) has to be taken into account in our application.
I need to 'bench' / test the DB without the application stack, so I'd like JMeter to query the database directly, the way the web service would.
When using / configuring the JDBC connection pool through the JDBC Connection Configuration element, I only see a way to define a large pool of connections that will then be used to launch queries. That means the connections stay alive after a ThreadGroup scenario has been played and are reused. In the real application, each scenario would open a new connection and close it at the end.
Is there a way to do this (make a new connection for every ThreadGroup run) in JMeter with the JDBC 'components'?
As a workaround, I created a small script and asked JMeter to run it... but it is far heavier for the server (a new process is launched each time to execute the PHP script), and I couldn't load the server enough that way to reproduce the workload.
JMeter actually calls the Connection.close() function after executing the statement; under the hood the connection is returned to the pool, where it waits for the next thread that requires a connection.
If your application's behaviour is the same, you don't need to worry about anything. If it's different, you won't get such precise control with the JDBC Connection Configuration and JDBC Request sampler.
If you want to create and destroy connections manually you will have to switch to the JSR223 Sampler and implement the connection and query logic in Groovy; see the Working with a relational database chapter of the Groovy user manual for more details, code examples, etc.
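For example, a JSR223 Sampler with Groovy along these lines opens a brand-new connection on every sampler run and really closes it afterwards. The JDBC URL, credentials and the query below are placeholders, and the MySQL JDBC driver jar needs to be in JMeter's lib directory:

import groovy.sql.Sql

// open a new connection for this iteration instead of borrowing one from a pool
def sql = Sql.newInstance('jdbc:mysql://dbhost:3306/mydb', 'bench_user', 'secret',
        'com.mysql.cj.jdbc.Driver')
try {
    // the two or three small queries a real client connection would issue
    def rows = sql.rows('SELECT id, name FROM products WHERE id = ?', [42])
    log.info("fetched ${rows.size()} row(s)")
} finally {
    sql.close()   // really closes the connection, unlike the pooled JDBC Request sampler
}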

Check if mariadb can be connected to and fix if not

I am selecting and inserting data in a MariaDB database with my Node.js web app. The MariaDB Node.js documentation (https://mariadb.com/kb/en/getting-started-with-the-nodejs-connector/) begins by querying the database with "SELECT 1 as val", and only after that executes the actual query, which is an insert in the example provided. It is my understanding that "SELECT 1 as val" is used to check that a connection with the database can be established, because otherwise, if you were to query the database without checking and a connection could not be established, the entire web app would crash.
My question is: is "SELECT 1 as val" the best way to check whether a connection with the database can be established? Is it true that if "SELECT 1 as val" fails, the web app will not crash? Also, if a connection cannot be established, how do I fix it? Do I have to redefine the 'pool' block again? Or the pool.connection block? Is there something I can do to restart the database server?
Don't bother with such a "ping". Instead, always check for errors after running queries.
If the error says that you lost the connection, restart your transaction.
If the server is dead, you have no way to repair that.
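A rough sketch of that approach with the mariadb connector; the pool settings, table and column names are made up for illustration:

const mariadb = require('mariadb');
const pool = mariadb.createPool({ host: 'localhost', user: 'app', password: 'secret', database: 'mydb' });

async function saveReading(value) {
  let conn;
  try {
    conn = await pool.getConnection();
    // no "SELECT 1 as val" ping up front; just run the real query
    await conn.query('INSERT INTO readings (val) VALUES (?)', [value]);
  } catch (err) {
    // the catch is what keeps the app from crashing; a lost-connection error
    // means the statement (or transaction) should be retried
    console.error('query failed:', err.message);
  } finally {
    if (conn) conn.release(); // return the connection to the pool
  }
}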

MySQL Connection lost after successful query

This question is theoretical. I've no real use case; I'm just trying to understand the MySQL behaviour.
Suppose I send a query (or a transaction) to the server (using transactional tables, of course), and the query or transaction executes fine, but the connection is lost before the client (e.g., mysql, or an app connecting to a remote server through a C interface or any other framework like QtSQL) receives the answer from the server. So the server knows the transaction finished properly, but the client doesn't, because the answer didn't arrive.
What happens in this case? Does the server roll back the transaction even though it knows it finished successfully? Is there any option to control the behaviour in these scenarios?

Force to reconnect MySQL in Rails

How can I force MySQL to reconnect at will in a Rails application? I would like to do this either periodically or on DB exceptions like "MySQL server has gone away".
I found ActiveRecord::Base.remove_connection but as it is written, it should be called for some model, not the whole application.
It's a huge pain to restart the Rails console when I'm running it via Heroku with a bunch of objects in variables and then lose my database connection.
The following is code I would not consider "good" to put in your actual application but it temporarily gets over the oft encountered Mysql2::Error: closed MySQL connection in a console:
ActiveRecord::Base.connection.reconnect!
How about using reconnect: true in your database.yml, as described here?
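For reference, the relevant database.yml entry would look roughly like this (the adapter is assumed to be mysql2, and the other values are placeholders for your own settings):

production:
  adapter: mysql2
  database: myapp_production
  username: myapp
  password: secret
  reconnect: true   # mysql2 re-establishes a dropped connection automatically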

xp_cmdshell hangs after called exe has exited

I have a problem with a hang using xp_cmdshell.
The executable is called, performs its work, and exits. It is not hanging because of a UI prompt in the exe. The exe is not hanging at all: it disappears from the process list in Task Manager, and internal logging from the exe confirms that it executed the very last line of its main function.
But the call to xp_cmdshell does NOT return control in SQL. It hangs on that line (it is the last line of the batch). Killing the process is ineffective; it actually requires a restart of SQL Server to get rid of the hung process (ugh).
The hang only happens the first time it is run. Subsequent calls to the procedure with identical parameters work and exit correctly even while the first one is still hung. Once SQL Server is restarted, the first subsequent call will hang again.
If it makes any difference, I am trying to receive the return value from the exe -- my SQL procedure ends with:
exec @i = xp_cmdshell @cmd;
return @i;
Activity Monitor reports the process to be stuck on a wait type of PREEMPTIVE_OS_PROCESSOPS (what the other developer saw) or PREEMPTIVE_OS_PIPEOPS (what I'm seeing in my current testing).
Any ideas?
Just came across this situation myself, where I'd run an invalid command via xp_cmdshell.
I managed to kill it without restarting SQL Server; what I did was identify the process that ran the command and kill it from Task Manager.
Assuming your SQL Server is running on Windows Server 2008 or later:
In Task Manager, on the Processes tab, I enabled the column that shows the Command Line of each process (View -> Select Columns...).
If you are unsure what command you ran via xp_cmdshell, DBCC INPUTBUFFER(SPID) should give you a clue.
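If you first need to find the stuck session, something along these lines should work (the wait types come from the question above, and the session id passed to DBCC INPUTBUFFER is just an example):

SELECT r.session_id, r.wait_type, r.command, t.text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.wait_type IN ('PREEMPTIVE_OS_PROCESSOPS', 'PREEMPTIVE_OS_PIPEOPS');

DBCC INPUTBUFFER(53);  -- replace 53 with the session_id returned above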
We had the same issue with SQL Server 2008, also with calls involving xp_cmdshell and BCP. Killing the SQL process ID didn't help; it would just stay stuck in KILLED/ROLLBACK status.
The only way to kill it was to kill the bcp.exe process in the Windows Task Manager.
In the end we traced the issue down to wrong SQL in the sproc that was calling xp_cmdshell: it was mistakenly opening multiple transactions in a loop and not closing them. After the BEGIN/COMMIT TRAN issues were fixed, PREEMPTIVE_OS_PROCESSOPS never came back.
We actually did eventually figure out the problem here. The app being called was used to automatically dump some documents to a printer when certain conditions happened.
It turns out that a particular print driver popped up a weird little window in the notification tray on a print job. So it was hanging because of a UI window popping up -- but our app was exiting properly because it wasn't our window; it was a window triggered by the print driver.
That driver included an option to turn off that display window. Our problem went away when that option was set.