I made a small app that connects to a MySQL database using dbExpress (dbx). It works fine with my local MySQL server, but it's supposed to work with a remote server.
Connecting to the remote server takes a few seconds, which freezes the app.
So my question is, how can I put the connection code in a different thread?
I'll have to pass that connection to the main thread somehow, so that the DBGrid I have on the main form works.
I've read that database components working in a different thread should have their own connections, so I'm not sure how to do what I want.
Any ideas? Anything to read about working with remote servers?
Thanks.
Edit: The components I'm using on the form are: TSQLConnection -> TSimpleDataSet -> TDataSource -> TDBGrid.
You only need a connection per thread if your threads are going to do simultaneous database access. Basically, what you want is for a thread to connect and come back to you when the connection has been established. You can do this in a thread, and when the thread is ready (i.e. the connection is established), it can send a message back to the main thread to let it know that the dbx connection is now available. See this tutorial for ideas on how to set up the thread and communicate between the thread and the main VCL thread.
Threading Tutorial
This has really helped me with doing multi-threaded apps in RAD Studio:
Writing multi-threaded applications Index
If there is anything else, post and I'll try to help.
I need to validate a workload on a database that is used to answer an HTTP API.
In this context, in production, a lot of connections are opened and closed. For each connection, only 2 or 3 small queries are run, so connection 'activity' (open/close) has to be taken into account in our test.
I need to bench/test the database without the application stack, so I'd like JMeter to query the database directly, the way the web service would.
When using/configuring the JDBC connection pool through "JDBC Connection Configuration", I only see a way to define a large pool of connections that is then used to run the queries. That means the connections stay alive after a ThreadGroup scenario has run, and are reused. In the real application, each scenario would open a new connection and close it at the end.
Is there a way to do this (make a new connection for every ThreadGroup run) in JMeter with the JDBC components?
As a workaround, I created a small script and asked JMeter to run it, but that is far heavier for the server (it launches a new process each time to execute the (PHP) script), and I couldn't load the server enough that way to reproduce the workload.
JMeter actually calls the Connection.close() function after executing the statement; under the hood the connection is returned to the pool, where it waits for the next thread that requires a connection.
If your application behaves the same way, you don't need to worry about anything. If it behaves differently, you won't get that precise a level of control with the JDBC Connection Configuration and JDBC Request sampler.
If you want to create and destroy connections manually you will have to switch to the JSR223 Sampler and implement the connection and query logic in Groovy; see the Working with a relational database chapter of the Groovy user manual for more details, code examples, etc.
Is it possible to cache database connections when using PHP like you would in a J2EE container? If so, how?
There is no connection pooling in PHP.
mysql_pconnect and connection pooling are two different things.
There are many problems connected with mysql_pconnect, and you should read the manual and use it carefully, but it is not connection pooling.
Connection pooling is a technique where the application server manages the connections. When the application needs a connection, it asks the application server for one, and the application server returns one of the pooled connections if there is one free.
You can do connection scaling in PHP; for that, please go through the following link: http://www.oracle.com/technetwork/articles/dsl/white-php-part1-355135.html
So, no connection pooling in PHP.
As Julio said, Apache releases all resources when the current request ends. You can use mysql_pconnect, but you are limited with that function and you must be very careful. Another choice is to use the singleton pattern, but none of this is pooling.
This is a good article: https://blogs.oracle.com/opal/highly-scalable-connection-pooling-in-php
Also read this one http://www.apache2.es/2.2.2/mod/mod_dbd.html
Persistent connections are nothing like connection pooling. A persistent connection in PHP will only be reused if you make multiple db connects within the same request/script execution context. In most typical web dev scenarios you'll max out your connections way faster if you use mysql_pconnect, because your script has no way to get a reference to any open connections on your next request. The best way to use db connections in PHP is to make a singleton instance of a db object so that the connection is reused within the context of your script execution. This still incurs at least one db connect per request, but it's better than making multiple db connects per request.
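As a rough sketch of that singleton approach (purely illustrative; the class name, DSN and credentials below are made up, and PDO is assumed, though mysqli works the same way):

<?php
// Minimal per-request singleton: every caller in the same script run shares
// one connection, so you only pay for one connect per request.
class Db
{
    private static ?PDO $conn = null;

    public static function connection(): PDO
    {
        if (self::$conn === null) {
            self::$conn = new PDO(
                'mysql:host=localhost;dbname=app;charset=utf8mb4', // assumed DSN
                'user',     // placeholder credentials
                'secret',
                [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]
            );
        }
        return self::$conn;
    }
}

// Anywhere else in the same request:
$rows = Db::connection()->query('SELECT 1')->fetchAll();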
There is no real db connection pooling in PHP due to the nature of PHP. PHP is not an application server that can sit there between requests and manage references to a pool of open connections, at least not without some kind of major hack. In theory you could write an app server in PHP, run it as a command-line script that just sits in the background keeping a bunch of db connections open, and pass references to them to your other scripts, but I don't know whether that would be possible in practice, how you'd pass the references from your command-line script to other scripts, and I rather doubt it would perform well even if you could pull it off. Anyway, that's mostly speculation. I did just notice the link someone else posted to an Apache module that allows connection pooling for prefork servers such as PHP. Looks interesting:
https://github.com/junamai2000/mod_namy_pool#readme
I suppose you're using mod_php, right?
When a PHP file finishes executing, all its state is destroyed, so there's no way (in PHP code) to do connection pooling. Instead you have to rely on extensions.
You can use mysql_pconnect so that your connections won't get closed after the page finishes; that way they get reused in the next request.
This might be all that you need, but it isn't the same as connection pooling, as there's no way to specify the number of connections to maintain open.
You can use MySQLi.
For more info, scroll down to the "Connection pooling" section at http://www.php.net/manual/en/mysqli.quickstart.connections.php#example-1622
Note that Connection pooling is also dependent on your server (i.e. Apache httpd) and its configuration.
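For reference, the "Connection pooling" example in the mysqli quickstart linked above boils down to prefixing the host name with p:; everything else is used as normal (host, credentials and database below are placeholders):

<?php
// Ask mysqli for a persistent connection by prefixing the host with "p:".
// If an open connection with the same host/user/password/database already
// exists in this PHP process, it is reused instead of opening a new one.
$mysqli = new mysqli('p:localhost', 'user', 'secret', 'app');

$result = $mysqli->query('SELECT CONNECTION_ID()');
var_dump($result->fetch_row()); // same ID when an existing connection was reused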
mysqli opens a new connection only if an unused persistent connection for a given combination of "host, username, password, socket, port and default database" cannot be found in the open connection pool; otherwise it reuses an already open persistent connection, which is in a way similar to the concept of connection pooling. The use of persistent connections can be enabled and disabled with the PHP directive mysqli.allow_persistent. The total number of connections opened by a script can be limited with mysqli.max_links (this may interest you if you are hitting your hosting server's max_user_connections limit). The maximum number of persistent connections per PHP process can be restricted with mysqli.max_persistent.
In a wider programming context this is a task for the web/app server, but here it is handled by PHP's mysqli directives themselves in a way that supports connection reuse. You may also implement a singleton class to get a static instance of a connection to reuse, just like in Java. As a reminder, Java also doesn't support connection pooling as part of standard JDBC; the pools are separate modules/layers on top of the JDBC drivers.
Coming to PHP, the good thing is that for the common databases in the PHP ecosystem it does support persistent database connections, which persist the connection for up to 500 requests (the max_requests config in php.ini) and so avoid creating a new connection on each request. Check the docs for the details; it solves most of your challenges. Note that PHP is not as sophisticated as Java in terms of extensive multi-threading, concurrent processing and powerful asynchronous event handling, so it is much less practical for PHP to have such a built-in pooling mechanism.
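Those directives are ordinary php.ini settings, so you can at least inspect them at runtime to see what your host allows; a small check script (the values in the comments are just the usual defaults):

<?php
// Persistence-related mysqli settings mentioned above (read-only at runtime).
var_dump(ini_get('mysqli.allow_persistent')); // e.g. "1": persistent connections allowed
var_dump(ini_get('mysqli.max_persistent'));   // e.g. "-1": no per-process limit
var_dump(ini_get('mysqli.max_links'));        // e.g. "-1": no per-script connection limit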
You cannot instantiate connection pools manually.
But you can use the "built-in" connection pooling provided by the mysql_pconnect function.
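A minimal example of that legacy API (ext/mysql, which was removed in PHP 7, so this only applies to old PHP versions; credentials are placeholders):

<?php
// Legacy persistent connection: reuses an existing open link with the same
// host/user/password if one exists, and does not close it at script end.
$link = mysql_pconnect('localhost', 'user', 'secret');
mysql_select_db('app', $link);
$result = mysql_query('SELECT NOW()', $link);
var_dump(mysql_fetch_row($result));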
I would like to suggest PDO::ATTR_PERSISTENT
Persistent connections are links that do not close when the execution of your script ends. When a persistent connection is requested, PHP checks if there's already an identical persistent connection (that remained open from earlier) - and if it exists, it uses it. If it does not exist, it creates the link.
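A minimal sketch of that (DSN and credentials are placeholders):

<?php
// PDO persistent connection: PDO::ATTR_PERSISTENT makes PHP look for an
// identical connection left open by an earlier request and reuse it.
$db = new PDO(
    'mysql:host=localhost;dbname=app', // assumed DSN
    'user',
    'secret',
    [PDO::ATTR_PERSISTENT => true]
);
var_dump($db->query('SELECT CONNECTION_ID()')->fetchColumn());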
Connection pooling works on the MySQL server side like this:
If persistent connections are enabled in the MySQL server config, MySQL keeps a connection open and in a sleep state after the requesting client (the PHP script) finishes its work and dies.
When a second request comes in with the same credentials (same user name, same password, same connection parameters, same database name, maybe from the same IP; I am not sure about the IP), MySQL moves the previous connection from the sleep state to the active state and lets the client use it. This helps MySQL save the setup time for a new connection and reduces the total number of connections.
So the connection pooling option is actually on the MySQL server side. On the PHP code side there is no option; mysql_pconnect() is just a wrapper that tells PHP not to send a connection close signal at the end of the script run.
For features such as connection pooling, you need to install the Swoole extension first: https://openswoole.com/
It adds async features to PHP.
After that it's trivial to add MySQL and Redis connection pooling:
https://github.com/open-smf/connection-pool
Some PHP frameworks come with pooling built-in: https://hyperf.wiki/2.2/#/en/pool
I'm running into an interesting threading problem while running a D program that uses the MySQL C API. I am getting error 2013, "Lost connection to MySQL server during query." The problem appears to occur when enough threads flood the network interface buffer while the server still has more to transfer. This is my best guess, based on some research and on running the program on two different computers. One computer has a 100Mb connection to the server and the other has a 1Gb connection. The computer with the 100Mb connection throws the error, while the 1Gb computer does not. I am wondering if I am running into what is described in the first paragraph of How to Write a Threaded Client in the MySQL documentation. If I am, what do I need to do with SIGPIPE, and how do I do it?
For those who are interested, I am calling mysql_library_init before any other library call, and I am creating a new MYSQL* for each thread with mysql_init and mysql_real_connect. Also of note, the queries I am executing are small SELECTs, only a few thousand records returned from each query, and all queries are executed against the same table.
Please try this before mysql_real_connect:
my_bool myb = 1;
// Enables automatic reconnection; must be set before calling mysql_real_connect.
mysql_options(conn, mysql_option.MYSQL_OPT_RECONNECT, &myb);
Also please check this MySQL troubleshooting page:
http://dev.mysql.com/doc/refman/5.5/en/gone-away.html
I'm getting a SQL Server error:
A transport-level error has occurred when receiving results from the server. (provider: Shared Memory Provider, error: 0 - The handle is invalid.)
I'm running SQL Server 2008 SP1 on Windows 2008 Standard 64-bit.
It's a .Net 4.0 web application. It happens when a request is made to the server. It's intermittent. Any idea how I can resolve it?
The database connection was closed by the database server, but the connection remains in your app's connection pool; as a result, when you pick up that pooled connection and try to execute a command, it's not able to reach the database. If you are developing in Visual Studio, simply close the temporary web server on your taskbar.
If it happens in production, resetting your application pool for your web site should recycle the connection pool.
Try the following command on the command prompt:
netsh interface tcp set global autotuninglevel=disabled
This turns off the TCP receive window auto-tuning of the network stack.
I had the same problem. I restarted Visual Studio and that fixed it.
Transport-level errors are often linked to the connection to SQL Server being broken, usually a network issue.
Timeout expired is usually thrown when a SQL query takes too long to run.
So a few options are:
Check the connection over the VPN (if one is used) or with any other tool
Restart IIS
Restart the machine
Optimize the SQL queries.
For those not using IIS: I had this issue when debugging with Visual Studio 2010. I ended all of the debugger processes (WebDev.WebServer40.EXE), which solved the issue.
All you need is to stop the ASP.NET Development Server and run the project again.
If you are connected to your database via Microsoft SQL Server Management Studio, close all your connections and retry.
I had this error when connected to another Azure database, and closing that connection worked for me.
I still don't know why.
Look at the MSDN blog post which details this error:
Removing Connections
The connection pooler removes a connection from the pool after it has been idle for a long time, or if the pooler detects that the connection with the server has been severed.
Note that a severed connection can be detected only after attempting to communicate with the server. If a connection is found that is no longer connected to the server, it is marked as invalid. Invalid connections are removed from the connection pool only when they are closed or reclaimed.
If a connection exists to a server that has disappeared, this connection can be drawn from the pool even if the connection pooler has not detected the severed connection and marked it as invalid. This is the case because the overhead of checking that the connection is still valid would eliminate the benefits of having a pooler by causing another round trip to the server to occur. When this occurs, the first attempt to use the connection will detect that the connection has been severed, and an exception is thrown.
Basically what you are seeing is the exception mentioned in that last sentence. A connection is taken from the connection pool, the application does not know that the physical connection is gone, and an attempt to use it is made under the assumption that the physical connection is still there. And you get your exception.
There are a few common reasons for this.
The server has been restarted; this will close the existing connections. In this case, have a look at the SQL Server log, usually found at: C:\Program Files\Microsoft SQL Server\\MSSQL\LOG
If the timestamp for startup is very recent, then we can suspect that this is what caused the error. Try to correlate this timestamp with the time of the exception.
2009-04-16 11:32:15.62 Server Logging SQL Server messages in file 'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\LOG\ERRORLOG'.
Someone or something has killed the SPID that is being used. Again, take a look in the SQL Server log. If you find a kill, try to correlate this timestamp with the time of the exception.
2009-04-16 11:34:09.57 spidXX Process ID XX was killed by hostname xxxxx, host process ID XXXX.
There is a failover (in a mirrored setup, for example). Again, take a look in the SQL Server log. If there is a failover, try to correlate this timestamp with the time of the exception.
2009-04-16 11:35:12.93 spidXX The mirrored database "" is changing roles from "PRINCIPAL" to "MIRROR" due to Failover.
I was getting this, always after about 5 minutes of operation. I investigated and found that a warning from e1iexpress always occurred before the failure. This apparently is an error having to do with certain TCP/IP adapters, but changing from WiFi to hardwired didn't affect it.
So I tried Plan B and restarted Visual Studio. Then it worked fine.
On closer study I noticed that, when working correctly, the message "The Thread '<No Name>' has exited with code 0" occurred at almost exactly the time the run crashed in previous attempts. Some Googling reveals that that message comes up when (among other things) the server is trimming the thread pool.
Presumably there was a bogus thread in the thread pool, and every time the server attempted to "trim" it, it took the app down.
You get this message when your script causes the SQL Server service to stop for some reason, so if you start the SQL Server service again, your problem will perhaps be resolved.
I know this may not help everyone (but who knows, maybe it will), but I had the same problem, and after some time we realized that the cause was something outside the code itself.
The computer trying to reach the server was on another network; the connection could be established but was then dropped.
The way we fixed it was to add a static route to the computer, allowing direct access to the server without passing through the firewall.
route add -p YourServerNetwork mask NetworkMask Router
Sample:
route add -p 172.16.12.0 mask 255.255.255.0 192.168.11.2
I hope it helps someone; it's better to have this at least as a clue, so if you face it, you know how to solve it.
I got the same error in the Visual Studio 2012 development environment; I stopped IIS Express and reran the application, and it started working.
I had the same issue. I solved it by truncating the SQL Server log.
Try doing that, and then tell us if this solution helped you.
For me the solution was totally different.
In my case I had an ObjectDataSource which required a datetime parameter. Even though the ODS parameter's ConvertEmptyStringToNull was true, 1/1/0001 was being passed to the SelectMethod. That in turn caused a SQL datetime overflow exception when that datetime was passed to SQL Server.
I added an additional check for datetime.Year != 0001, and that solved it for me.
Weird that it would throw a transport-level error and not a datetime overflow error.
Anyway...
In my case the "SQL Server" Server service stopped. When I restarted the service that enabled me to run the query and eliminate the error.
Its also a good idea to examine your query to find out why the query made this service stop
For me the answer was to upgrade the OS from 2008 R2 to 2012 R2; the iisreset and app-pool-restart solutions didn't work for me.
I also tried turning off the TCP Chimney Offload setting, but I didn't restart the server because it is a production server, and that didn't work either.
We encountered this error recently between our business server and our database server.
The solution for us was to disable "IP Offloading" on the network interfaces.
Then the error went away.
One of the reasons I found for this error is 'Packet Size=xxxxx' in the connection string. If the value of xxxxx is too large, you will see this error. Either remove this value and let SQL Server handle it, or keep it low, depending on the network capabilities.
It happened to me when I was trying to restore a SQL database and had checked the following check box in the Options tab.
As it's a stand-alone database server, just closing SSMS and reopening it solved the issue for me.
This occurs when the database is dropped and re-created while some shared resources still consider the database to exist. When you re-run the query to create tables in the database after it was re-created, the error will not show again; a "Command(s) completed successfully." message will show instead of the error "Msg 233, Level 20, State 0, Line 0: A transport-level error has occurred when sending the request to the server. (provider: Shared Memory Provider, error: 0 - No process is on the other end of the pipe.)".
Simply ignore this error when you are dropping and re-creating databases, and re-execute your DDL queries with no worries.
I faced the same issue recently, but I was not able to find an answer on Google.
So I thought of sharing it here, so that it can help someone in the future.
Error:
While executing the query, it returns a little output and then throws the error below:
"Transport-level error has occurred when receiving output from the server (TCP provider, error: 0 - the specified network name is no longer available)."
Solution:
Check the provider of that linked server.
In that provider's properties, enable the "Allow inprocess" option for that particular provider to fix the issue.
I have several server processes that once in a while respond to messages from the clients and perform read-only transactions.
After the servers have been running for a few days, they stop working correctly, and when I check it turns out that there's a whole bunch of messages about the connection being closed.
When I checked it out, it turned out that Hibernate by default works in some sort of development mode where connections are dropped after a few hours, so I started using c3p0 for connection pooling.
However, even with c3p0, I get that problem about 24 hours or so after the servers are started.
Has anyone encountered that problem and knows how to address it? I'm not familiar enough with the intricacies of configuring hibernate.
The MySQL JDBC driver times out after 8 hours of inactivity and drops the connection.
You can set autoReconnect=true in your JDBC URL, and this causes the driver to reconnect if you try to query after it has disconnected. But this has side effects; for instance session state and transactions cannot be maintained over a new connection.
If you use autoReconnect, the JDBC connection is reestablished, but it doesn't automatically re-execute your query that got the exception. So you do need to catch SQLException in your application and retry queries.
Read http://dev.mysql.com/doc/refman/5.0/en/connector-j-reference-configuration-properties.html for more details.
MySQL basically times out by default after 8 hours.
I got the same exception & resolved the issue after 3 hectic days.Check if you are using I hibernate3. In this version it is required to explicitly mention the connection class name. Also check if the jar is in classpath. Check steps & comments in below link
http://hibernatedb.blogspot.com/2009/05/automatic-reconnect-from-hibernate-to.html
Remove autoReconnect=true
I changed my Hibernate configuration file by adding those lines, and it works for now:
<property name="connection.autoReconnect">true</property>
<property name="connection.autoReconnectForPools">true</property>
<property name="connection.is-connection-validation-required">true</property>
I think that using a c3p0 pool is better and recommended, but this solution is working for now and doesn't present any problems.
I left Tomcat on for 24 hours and the connection wasn't lost.
Please try it.
I would suggest that, in almost any client/server set-up, it's a bad idea to leave connections open when they're not needed.
I'm thinking specifically about DB2/z connections but it applies equally to all servers (database and otherwise). These connections consume resources at the server that could be best utilized elsewhere.
If you were to hold connections open in a corporate environment where tens of thousands of clients connect to the database, you would probably even bring a mainframe to its knees.
I'm all for the idea of connection pooling, but not so much for the idea of trying to hold individual sessions open forever.
My advice would be as follows:
1/ Have three sorts of connections in your connection pool:
closed (so not actually in your pool).
ready, meaning open but not in use by a client.
active, meaning in use by a client.
2/ Have your connection pooling maintain a small number of ready connections, minimum of N and maximum of M. N can be adjusted depending on the peak speed at which your clients request connections. If the number of ready connections ever drops to zero, you need a bigger N.
3/ When a client wants a connection, give them one of the ready ones (making it active), then immediately open a new one if there are now fewer than N ready (but don't make the client wait for this to complete, or you'll lose the advantage of pooling). This ensures there will always be at least N ready connections. If none are ready when the client wants one, they will have to wait around while you create a new one.
4/ When the client finishes with an active connection, return it to the ready state if there are fewer than M ready connections. Otherwise close it. This prevents you from having more than M ready connections.
5/ Periodically recycle the ready connections to prevent stale connections. If there are more than N ready connections, just close the oldest one. Otherwise close the oldest and open a new one to replace it.
This has the advantage of keeping enough ready AND youthful connections available in your connection pool without overloading the server. A rough sketch of this logic follows below.
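The steps above are language-agnostic. As a purely illustrative sketch of the ready/active bookkeeping (written in PHP only because that is the main language discussed earlier in this thread; it assumes a long-running worker process, e.g. a CLI daemon or Swoole server, where connections can actually stay open between requests; the class name SimplePool and the $factory callable are made up):

<?php
// Illustrative sketch of the ready/active pool described above.
class SimplePool
{
    private array $ready = [];  // open connections not currently in use
    private array $active = []; // open connections handed out to callers

    public function __construct(
        private \Closure $factory, // creates a connection, e.g. fn() => new PDO(...)
        private int $minReady = 2, // N: always keep at least this many ready
        private int $maxReady = 10 // M: never keep more than this many ready
    ) {
        $this->topUp();
    }

    // 3/ Give the caller a ready connection (or make one), then top up to N.
    // (Done synchronously here for simplicity; the answer suggests doing the
    // top-up without making the caller wait.)
    public function acquire(): object
    {
        $conn = array_pop($this->ready) ?? ($this->factory)();
        $this->active[spl_object_id($conn)] = $conn;
        $this->topUp();
        return $conn;
    }

    // 4/ Put a finished connection back in the ready set, or drop it if we
    // already have M ready connections.
    public function release(object $conn): void
    {
        unset($this->active[spl_object_id($conn)]);
        if (count($this->ready) < $this->maxReady) {
            $this->ready[] = $conn;
        }
        // otherwise it simply goes out of scope and gets closed
    }

    // 5/ Periodically drop the oldest ready connection to avoid stale ones,
    // replacing it if that would leave fewer than N ready.
    public function recycle(): void
    {
        if ($this->ready) {
            array_shift($this->ready);
            $this->topUp();
        }
    }

    private function topUp(): void
    {
        while (count($this->ready) < $this->minReady) {
            $this->ready[] = ($this->factory)();
        }
    }
}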