Persistent connection to MySQL

I have a database on a local machine that is queried repeatedly as fast as possible.
Currently I am executing mysql_real_connect() before each query and mysql_close() right after. Since speed is of the essence, connecting and reconnecting creates an unacceptable amount of overhead.
I have done some research and found a mysqli command to create a persistent connection (mysqli_pconnect). Unfortunately I am not using PHP (I am using the mysql50 library in FreePascal/Lazarus) and the mysqli library is not available to me; I have to settle for the standard mysql_* commands.
Does anyone have a solution?
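For reference, with the C-style mysql_* API that the FreePascal mysql50 unit wraps, a persistent connection is simply one you open once and reuse for every query. A minimal sketch in C (host, credentials, and the items table are placeholders), using MYSQL_OPT_RECONNECT and mysql_ping() to revive the link if the server ever drops it:

    #include <stdio.h>
    #include <mysql/mysql.h>

    int main(void)
    {
        MYSQL *conn = mysql_init(NULL);

        /* Ask the client library to reconnect automatically if the
           server drops the connection (e.g. after wait_timeout). */
        my_bool reconnect = 1;
        mysql_options(conn, MYSQL_OPT_RECONNECT, &reconnect);

        /* Connect ONCE, before the query loop. */
        if (!mysql_real_connect(conn, "localhost", "user", "password",
                                "mydb", 0, NULL, 0)) {
            fprintf(stderr, "connect failed: %s\n", mysql_error(conn));
            return 1;
        }

        for (int i = 0; i < 100000; i++) {
            mysql_ping(conn);   /* revives the link if it went away */
            if (mysql_query(conn, "SELECT id FROM items LIMIT 1") == 0) {
                MYSQL_RES *res = mysql_store_result(conn);
                if (res)
                    mysql_free_result(res);
            }
        }

        mysql_close(conn);      /* close ONCE, when done */
        return 0;
    }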

Related

Flyway does not handle implicitly committed statements when the Flyway process crashes

I ran into this situation recently using Spring Boot (1.2.3) and Flyway (3.1), and could not find much about how to handle it:
A server was spinning up and executing a long-running ALTER TABLE ... ADD COLUMN statement against a MySQL database (5.6), taking 20-30 minutes. While the script was running, the server process was hard-terminated because it was not responding to health checks within the given timeframe. Since the MySQL server was already processing the statement, it continued to completion, but the script was marked neither as failed nor as successful. When another server was spun up, it tried to execute the same script, which failed because the column already existed.
Given that the server could crash at any time, for any reason, during a long-running script, I would like to understand established patterns for handling this situation, other than idempotent scripts or a manual DB upgrade process.
Perhaps a setting that indicates the database platform uses implicit commits, so the script is marked as run as soon as it is sent to the server?
You bring up a good point, but unfortunately I don't think Flyway or Spring Boot has any native support for this.
One workaround, ugly as it is, is to implement the beforeEachMigrate and afterEachMigrate callbacks that Flyway provides. You could use them to maintain a separate migration table that keeps track of which migrations have been started and which ones have been completed. Then, if it contains unfinished migrations the next time your application starts, you can shut it down with a descriptive error message.
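For illustration, a minimal sketch of that workaround, assuming Flyway's SQL-based callbacks (beforeEachMigrate.sql and afterEachMigrate.sql placed alongside the migrations) and a hypothetical migration_progress tracking table:

    -- beforeEachMigrate.sql: record that a migration has started
    CREATE TABLE IF NOT EXISTS migration_progress (
      started_at  TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
      finished_at TIMESTAMP NULL
    );
    INSERT INTO migration_progress (finished_at) VALUES (NULL);

    -- afterEachMigrate.sql: mark the pending migration as finished
    UPDATE migration_progress
    SET finished_at = CURRENT_TIMESTAMP
    WHERE finished_at IS NULL;

At startup, a row with a NULL finished_at means a migration was started but never confirmed complete, and the application can refuse to boot with a descriptive error until someone inspects the schema.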
I recommend creating a feature request about it. If you do, please link us to it!
My approach would be to have separate migration scripts for any long-running SQL that has an implicit commit. Flyway makes it really easy to add minor-versioned scripts, so there's no good reason to overcomplicate the implementation with what you're suggesting. If you're using PostgreSQL you probably wouldn't need to do this, but Oracle and MySQL would require it.

Perl DBI connect to keep the session active after the script completes

Is there any way I could keep the DBI session active even after the script exits?
http://mysqlresources.com/documentation/perl-dbi/connect
Basically, I need to call the Perl (DBI) script multiple times with different parameters (deciding pass/fail after each run). Each time it is called, Perl makes a new connection to MySQL and destroys it on exit, which itself adds a considerable amount of delay.
Just wondering if there is any way I could store the session and reuse it in the future?
Your connection and its associated socket are process-specific, so there's no way of keeping them alive after your process terminates.
You should be able to tune your server so that connecting is faster. A common culprit is the reverse DNS lookup MySQL performs on each new connection; you can skip it by enabling the skip-name-resolve configuration parameter in my.cnf.
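For example, in my.cnf (note that with this enabled, privileges must be granted to IP addresses rather than hostnames):

    [mysqld]
    skip-name-resolve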
Barring that, what you might do is either use MySQL Proxy to keep a pool of warm connections, or combine all your various operations into a single script that can run several stages without terminating.

Use mysql command line interface with memcached

I'm trying to test whether using memcached with a MySQL server improves performance.
I want to be able to use the normal MySQL command line, but I can't seem to get it to connect to memcached, even when I specify the right port.
I'm running the MySQL command on the same machine as both the memcached process and the MySQL server.
I've looked around online, but I can't seem to find anything about using memcached other than with program APIs. Any ideas?
Memcached has its own protocol. The MySQL client cannot connect directly to a memcached server.
You may be thinking of the MySQL 5.6 feature that allows MySQL server to respond to connections using a memcached-compatible protocol, and read and write directly to InnoDB tables. See http://dev.mysql.com/doc/refman/5.6/en/innodb-memcached.html
But this does not allow MySQL clients to connect to memcached -- it's the opposite, allowing memcached clients to connect to mysqld.
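To make the direction concrete, here is a minimal sketch using the libmemcached C client, assuming the InnoDB memcached plugin is installed and listening on its default port 11211 (the key and value here are just examples):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <libmemcached/memcached.h>

    int main(void)
    {
        /* A memcached *client* talking to mysqld's memcached plugin port. */
        memcached_st *memc = memcached_create(NULL);
        memcached_server_add(memc, "127.0.0.1", 11211);

        /* The plugin maps get/set onto rows in the configured InnoDB table. */
        memcached_return_t rc = memcached_set(memc, "greeting", 8,
                                              "hello", 5, 0, 0);
        if (rc != MEMCACHED_SUCCESS)
            fprintf(stderr, "set failed: %s\n", memcached_strerror(memc, rc));

        size_t len;
        uint32_t flags;
        char *value = memcached_get(memc, "greeting", 8, &len, &flags, &rc);
        if (value != NULL) {
            printf("greeting = %.*s\n", (int)len, value);
            free(value);
        }

        memcached_free(memc);
        return 0;
    }

The MySQL command-line client, by contrast, speaks only the MySQL wire protocol and has no mode for talking to port 11211.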
Re your comment:
The InnoDB memcached interface is not really a caching solution per se; it's a solution for using a familiar key/value API for persistent data in InnoDB tables. InnoDB does transparently cache data pages in its buffer pool, but this is no different from conventional data reads with SQL. InnoDB also commits all changes to its transaction log synchronously on commit.
Here's a blog post from my colleague at Percona. He tested whether the MySQL 5.6 memcached API could be used as a caching layer, and found that actually using memcached is still superior.
http://www.mysqlperformanceblog.com/2013/03/29/mysql-5-6-innodb-memcached-plugin-as-a-caching-layer/
Here's one conclusion from that blog:
As expected, there is a slowdown for write operations when using the InnoDB version. But there is also a slight increase in the average fetch time.

Best practice for creating a daemon on a Linux server

Here is the scenario:
We have a site running on Node.js. Periodically, we pull some data from the internet, analyze it, and update a MySQL database.
My questions are:
What is the best practice for creating a Linux daemon? gcc? Can I do it in PHP or other languages?
Since Node.js will be accessing the same database, how can we create a mutex?
How can we manage the daemon? For example, if the daemon crashes, we want to restart it automatically.
You can use forever.js ... see How does one start a node.js server as a daemon process?. It answers your first and third questions. I guess you should have searched Stack Overflow or just googled a bit!
You can code a daemon in any language: C, C++, OCaml, Haskell, ... (but I won't code it in PHP).
The most important thing in coding a daemon is to make sure the code is robust and detects faults.
Concurrent access to the database should be handled by the MySQL server.
If you only share resources through a shared database, you can use its transaction isolation guarantees to stop other processes from seeing incomplete data.
This means that you need to either do your operation atomically in SQL (a single statement) or use a transaction.
In any case, it means you need to use a transactional engine in MySQL (probably InnoDB) and your application needs to be aware of and handle deadlocks correctly.
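As a rough sketch of that pattern with the C-style mysql_* API (the stats and audit_log tables are hypothetical), retrying on MySQL's deadlock error 1213 (ER_LOCK_DEADLOCK):

    #include <mysql/mysql.h>

    /* Run one unit of work atomically; retry if InnoDB reports a deadlock. */
    static int record_run(MYSQL *conn)
    {
        for (;;) {
            if (mysql_query(conn, "START TRANSACTION"))
                return -1;
            /* Both statements commit or roll back together, so the Node.js
               side never observes the data half-updated. */
            if (mysql_query(conn,
                    "UPDATE stats SET value = value + 1 WHERE name = 'runs'") ||
                mysql_query(conn,
                    "INSERT INTO audit_log (note) VALUES ('updated runs')") ||
                mysql_query(conn, "COMMIT")) {
                unsigned int err = mysql_errno(conn); /* capture before ROLLBACK */
                mysql_query(conn, "ROLLBACK");
                if (err == 1213)    /* ER_LOCK_DEADLOCK: safe to retry */
                    continue;
                return -1;
            }
            return 0;
        }
    }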

Slow query stops Apache from getting an answer

I have a problem where I cannot query my MySQL DB until other queries are done. This happens when I run a heavy SQL query (30 s) from Apache, or when I run a series of SQL queries from within the same Apache request.
Since my queries are only selects (no updates or modifications and no transactions), I think it should be possible to run simultaneous queries. How can I make this possible?
I am using Zend_Db::factory($config->db->adapter, $dbConfig);
I am not sure whether this limits the number of connections or always tries to reuse the same connection. I manually close the connection between each call in the "series of calls".
/Peter
You probably have a connection pool set up in Apache that is running out of connections. I'm not sure which module you're using in Apache, but if it's mod_dbd, check your DBDMax parameter.
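If it is mod_dbd, the pool size is controlled by directives like these in httpd.conf (the numbers are illustrative; note that a long-running query holds one pooled connection for its full duration):

    <IfModule mod_dbd.c>
        DBDriver   mysql
        DBDMin     4
        DBDKeep    8
        DBDMax     20
        DBDExptime 300
    </IfModule>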