I'm using Symfony 1.4 for my web application, which uses Doctrine 1.2 to connect to my MySQL database. I have a Symfony task that first fetches some data using Doctrine queries and then performs a very time-consuming operation that does not involve the database at all (for example, a long-running loop). Since that operation doesn't need the database connection to stay open, I want to close the connection once the database work is done. So I tried the following code to close the database connection.
sfContext::createInstance($configuration);
sfContext::getInstance()->getDatabaseConnection('doctrine')->setAttribute(Doctrine_Core::ATTR_AUTO_FREE_QUERY_OBJECTS, true );
// Do the doctrine queries and fetch data
sfContext::getInstance()->getDatabaseManager()->shutdown();
gc_collect_cycles();
// Do the non-database operation
While the non-database operation is executing, I checked the active MySQL processes with the SHOW PROCESSLIST command. It shows that the connection created by the Symfony application is not closed but sitting in the Sleep state.
Is there any way to close the database connection permanently in Symfony?
Note: I found that the Doctrine connection does close if I close it before executing the Doctrine queries.
This is most probably because Doctrine uses PDO, and all references to the PDO object have to be unset before the MySQL connection is actually closed; otherwise the connection stays alive. But I don't know how to clear the PDO references in Doctrine.
That is some really old stuff you are working with. I'm not sure why you are getting things from the sfContext singleton. You should be able to do this instead:
$conn = Doctrine_Manager::connection();
$conn->close();
This should close the connection, but leave it registered with the connection manager. This is basically right out of the Doctrine1 manual.
This will let you iterate through the connections and close them while also removing them from the manager:
$manager = Doctrine_Manager::getInstance();
foreach($manager as $conn) {
$manager->closeConnection($conn);
}
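If the MySQL thread still shows up in SHOW PROCESSLIST in the Sleep state after that, it is usually a leftover reference to the PDO handle keeping it alive. A rough sketch of how a task might order things, assuming the default Doctrine connection and a placeholder SomeModel model:
// Do the Doctrine queries while the connection is still open
$records = Doctrine_Query::create()
    ->from('SomeModel m')   // 'SomeModel' is just a placeholder model name
    ->execute();

// Close and unregister every connection before the long-running work
$manager = Doctrine_Manager::getInstance();
foreach ($manager as $conn) {
    $manager->closeConnection($conn);
}

// Drop any remaining references so PDO can actually disconnect
unset($conn, $records);
gc_collect_cycles();

// ... long-running, non-database work goes here ...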
Related
I was testing my Spring Boot app, which connects to a remote SQL database. I was also using MySQL Workbench to view the tables. Then, when I tried to run my app, it gave the following error message:
Data source rejected establishment of connection, message from server: "Too many connections"
I have tried restarting my PC but it still gives the same error. How can I solve it? I believe the previous connection was not properly closed. What can I do now?
The connections are automatically closed (or returned to the connection pool) if you are using a Spring Data repository or JdbcTemplate. Your application may genuinely need more connections than your database allows; in that case you should check your database configuration. You can also check the connection pool properties in application.properties (pool size, idle time, timeouts). Please add more details such as code or configuration.
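For reference, with Spring Boot 2's default HikariCP pool these are the kind of application.properties entries worth checking (the values below are only illustrative, not recommendations):
# Connection pool sizing and timeouts (HikariCP, Spring Boot 2 defaults)
spring.datasource.hikari.maximum-pool-size=10
spring.datasource.hikari.minimum-idle=2
spring.datasource.hikari.idle-timeout=30000
spring.datasource.hikari.max-lifetime=600000
spring.datasource.hikari.connection-timeout=20000
Whatever pool size you choose has to stay below the server's max_connections (check it with SHOW VARIABLES LIKE 'max_connections'), with some headroom left for MySQL Workbench and any other clients.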
I've read a ton about persistent database connections between PHP and MySQL (mysql_connect vs. mysql_pconnect). Same with PDO and MySQLi. It's definitely just my lack of understanding on this one, but how can a database connection be persistent between webpages? In this code:
$conn = mysql_pconnect( $server , $user, $pass );
mysql_select_db( $dbname );
If two users load this page at the same time, with two different $dbname variables, will PHP only make one connection to the database or two? I am fairly certain that
$conn = mysql_connect( $server , $user, $pass );
would make two connections.
If pconnect reuses the connection opened by the first user, will the mysql_select_db call work for the second user?
Ideally, what I am looking for is a way to have fewer database connections while still being able to set the default database in each PHP script. My clients all use the same PHP scripts, but each client's data is stored in their own database (hence $dbname is always different, while the MySQL connection parameters are the same: same MySQL IP address, user and password).
Hope that makes sense. We can use MySQL, MySQLi or PDO; we just need to know the best way to accomplish this without any possibility of clients accidentally writing data to someone else's database! Thanks in advance.
The persistence is done by the copy of PHP that's embedded in the webserver. Ordinarily you'd be right: if PHP were running in CGI mode, it would be impossible to have a persistent connection, because there'd be nothing left to persist when the request is done and PHP shuts down.
However, since there's a copy of PHP embedded in the webserver, and the webserver itself keeps running between requests, it is possible to maintain a pool of persistent connections within that "permanent" PHP.
However, note that on Apache multi-worker type server models, the connection pools are maintained PER-CHILD. If you set your pool limit to 10, you'll have 10 connections per Apache child. 20 children = 200 connections.
Persistent connections can also lead to deadlocks and other hard-to-debug problems in the long run. Remember: there's no guarantee that a user's HTTP requests will be serviced by the SAME apache child/mysql connection. If a script dies part-way through a database transaction, that transaction will NOT be rolled back, because MySQL does not see the HTTP side of things; all it sees is that the mysql<->apache connection is still open, so it assumes all's well.
The next user to hit that particular apache/mysql child/connection combination will now magically end up in the middle of that transaction, with no clue that the transaction is open. Basically, it's the Web equivalent of an unflushed toilet - all the "garbage" from the previous user is still there.
With non-persistent connections, you're guaranteed to have a 'clean' environment each time you connect.
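If you stick with non-persistent connections, selecting each client's database is simply part of building the connection, for example with PDO ($server, $user, $pass and $dbname being the same variables as in the question):
// One short-lived connection per request; the client's database is baked into the DSN
$dsn = "mysql:host=$server;dbname=$dbname;charset=utf8";
$pdo = new PDO($dsn, $user, $pass, array(
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
));

// ... run this client's queries ...

$pdo = null; // dropping the last reference closes the connection at the end of the request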
From my reading of documentation and comments, I see:
Docs on mysql_pconnect (deprecated method)
Second, the connection to the SQL server will not be closed when the execution of the script ends. Instead, the link will remain open for future use (mysql_close() will not close links established by mysql_pconnect()).
and a comment on that page
Persistent connections work well for CGI PHP managed by fastCGI, contrary to the suggestion above that they only work for the module version. That's because fastCGI keeps PHP processes running between requests. Persistent connections in this mode are easily made immune to connection limits too, because you can set PHP_FCGI_CHILDREN << mysql's max_connections <<< Apache's MaxClients. This also saves resources.
Docs on mysqli_connect (new method)
Prepending host by p: opens a persistent connection. mysqli_change_user() is automatically called on connections opened from the connection pool.
Docs for mysqli_change_user:
Changes the user of the specified database connection and sets the current database.
So my understanding is as follows: pconnect keeps the connection open after a script ends but while a process (or maybe group of processes) is still alive (like in a server with FCGI set up). Only one script at a time uses a connection, and when a new script grabs that connection the user and database are updated.
Thus if you use FCGI and persistent connections you can reduce the number of db connections open, but scripts running simultaneously will not be sharing the same connection. There is no problem with the connection being confused as to which database is selected.
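So if you do want pooled connections with mysqli, a minimal sketch (using the same variables as in the question) would look like this; when a pooled connection is handed out, mysqli_change_user() is what resets the session and selects the right database, which is why clients shouldn't leak into each other's databases:
// The "p:" prefix asks mysqli for a persistent (pooled) connection
$mysqli = mysqli_connect('p:' . $server, $user, $pass, $dbname);
if (mysqli_connect_errno()) {
    die('Connect failed: ' . mysqli_connect_error());
}

// Everything from here on runs against this client's database
$result = $mysqli->query('SELECT DATABASE()');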
If we don't close or end the MySQL connection in Node.js, will it cause any problems later on?
I am doing it like this:
var mysql = require('mysql');
var connection = mysql.createConnection(...);
connection.query('SELECT 1', function(err, rows) {
// connected! (unless `err` is set)
});
I am not ending the MySQL connection anywhere in my code. My questions are:
Is it necessary to close the MySQL connection?
If we don't close the MySQL connection, will I face any problems in the future?
Please help me; I am new to Node.js.
EDIT1
So I will not face any problems like being unable to connect, too many open connections, etc., i.e. any resource-related issues? Right?
EDIT2
At what point will the MySQL connection be closed if we don't end it manually with the end() function?
You should close the connection.
The point at which you do it depends on what your program does with that connection.
If the program is long lived and only needs a single connection but uses that connection continuously, then you can just leave that one connection open, but you should be prepared to re-open the connection if required, e.g. if the server gets restarted.
If the program just opens one connection, does some stuff, and then doesn't use the connection for some time you should close the connection while you're not using it.
If the program is short lived, i.e. it makes a connection, does some stuff, and then exits you can get away without closing the connection because it'll get closed automagically when your program exits.
Actually, you will run into problems such as this one from MySQL:
"[host] is blocked because of many connection errors; unblock with ' mysqladmin flush-hosts"
MySQL counts those abrupt disconnects as "connection errors", and your database then remains unreachable from that host until you execute FLUSH HOSTS on the MySQL server. I know this for a fact; it happened with our Node.js project.
Write some code to explicitly close the DB connection with connection.end() or connection.destroy() before any output or render statements.
No, you don't need to close it.
If your question is "is there any specific cleanup on the MySQL side when I tell the server that I'm closing the connection", then the answer is "no; if you just exit your client process or close the socket, it's the same as calling connection.end()".
I have developed an application which sends a large number of emails to different users in batches. The most common problem I face in this application is the MySQL connection timing out. Between batches, when no queries are executed on the previously opened connection and it stays idle for a long time, MySQL closes the connection on its own. After sending the current batch, when I try to execute another SQL query, I get a MySQL connection error.
Right now I am using the mysql_ping($conn) function to check whether the connection has timed out. If it has, I connect again with mysql_connect(). Now I am moving to Doctrine rather than the native PHP functions. Is there a reconnect() function in Doctrine as well?
I've never faced this problem with my batch actions, but I think you could do something like this in the places in your code where you think there's a risk the connection will have timed out:
// Fetch current connection
$conn = Doctrine_Manager::connection();
if(!$conn) {
// Open a new connection
$conn = Doctrine_Manager::connection('mysql://username:password@localhost/whatever', 'connection 1');
}
http://docs.doctrine-project.org/projects/doctrine1/en/latest/en/manual/connections.html
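Since Doctrine 1 has no direct equivalent of mysql_ping(), a rough alternative (an untested sketch against 1.2, so treat the class and method names with care) is to fire a cheap query between batches and reconnect when it fails:
$conn = Doctrine_Manager::connection();
try {
    // Cheap statement just to find out whether the server has dropped us
    $conn->execute('SELECT 1');
} catch (Doctrine_Connection_Exception $e) {
    // The idle connection was closed by MySQL: drop it and open a new one
    $conn->close();
    $conn->connect();
}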
There's a custom bundle that allows the connection to reopen:
https://github.com/facile-it/doctrine-mysql-come-back
You can set a maximum number of retries, or you can manually call the retry method on the Connection!
We are using SWI-Prolog to run our test cases. Whenever a test starts, I open a connection to the MySQL database, store the name of the test being run, and then close the DB. These tests run continuously for about 2 days. After the tests are done, the results get stored in a folder on the server. There is a predicate in another Prolog file that is called to update the results in the MySQL database. The code is simple: I use the ODBC library and just call the odbc_* predicates to connect and update MySQL by issuing direct queries.
The actual problem is:
If I call the predicate from the same Prolog window where the test just completed, I get an error while updating the DB server, although I do not get any error on the connection itself. If I close that Prolog session with halt, close all the open Prolog windows, and then open a completely new instance of Prolog and run the predicate, the update goes through fine.
I have a feeling that there is some stale reference to the MySQL connection in the Prolog database. Is there any way to clear it in Prolog so that I can run the same predicate without closing any of the existing Prolog windows?
Any ideas appreciated.
Thanks.
If you open the connection and then do some long processing, MySQL can drop the connection in between after a certain timeout (which I believe can be configured in my.cnf).
EDIT: SWI-Prolog has odbc_disconnect, which can be used to explicitly close the connection after using it, and an "aliasing" mode that returns a previously opened connection when you call odbc_connect with the same alias. In your case, you can try either closing the connection after using it or avoiding the alias when opening, so that a fresh connection is made each time.