I have a Delphi application which hits a database (usually MySQL) every 60 seconds through a TTimer. The application is more or less an unattended bulletin board. If the network drops, the application needs to keep running and reconnect to the database when the connection comes back. It often runs over broadband, so chances are the connection is not always the best.
I am using the TAdoConnection component. This is opened at application startup and remains open. Whenever I need to run a new query I set its Connection property to the open TAdoConnection. But I am finding this is not very reliable if there is a network drop.
What is the best way to connect to the database in this instance?
I have seen ways where you can build the connection string directly into the TAdoQuery. Would this be the proper way? Or is this excessively resource intensive? Sometimes I need to open 5-10 queries to get all the information.
Or how about doing this in the TTimer.OnTimer event:
Create TAdoConnection
Do All Queries
Free TAdoConnection
Thanks.
You should use a single TAdoConnection object to avoid setting the connection string on each component. Keep the connection object closed and open it only when you need to access data. Something like this:
procedure OnTimer;
begin
  MyAdoConnection.Open;
  try
    // Data access code here
    ...
  finally
    MyAdoConnection.Close;
  end;
end;
You can additionally put a try/except block around MyAdoConnection.Open to catch the situation where the network is not available.
About the second part of your question: it would be best to put all your data access components in a data module that you create when you need to run data access procedures. Then you can keep all your data access code in that data module and separate it from the rest of the code.
You could try to open the connection in the data module's OnCreate event, but be careful to handle possible exceptions when opening it. Close the connection in the OnDestroy event. Then you can use that data module like this:
procedure OnTimer;
var
  myDataModule: TMyDataModule;
begin
  myDataModule := TMyDataModule.Create(nil);
  try
    // Data access code here
    myDataModule.DoSomeDatabaseWork;
  finally
    myDataModule.Free;
  end;
end;
I have implemented a complex csv import script in Golang.
I use a Workerpool implementation for it. Inside that workerpool, workers run through 1000s of small csv files, categorizing, tagging and branding the products.
And they all write to the same database table. So far so good.
The problem I'm facing is that if I choose more than 2 workers, the process randomly crashes with the following message
The workflow is
foreach (csv) {
    workerPool.submit(csv)
}

func worker(csv) {
    foreach (line) {
        import(line)
    }
}

import(line) {
    product = get(line)
    product.category = determine_category(product)
    product.brand = determine_brand(product)
    save(brand)
    product.tags = determine_tags(product)
    // and after all
    save(product)
}
I tried to wrap the save() calls in transactions, but it didn't help.
Now I have the following questions:
Is MySQL suited to concurrent writes to a single table?
If transactions are needed to accomplish this, where should they be set?
Is the Go SQL driver (where the error ALWAYS happens in packets.go:1102) suited to do this?
Could anyone help me (maybe by being hired for a few hours)?
I'm completely stuck. I can also share the source code if that helps. But I first wanted to know whether you think it's my code or a general issue.
Open a new db connection in each goroutine (or thread, for languages that use threads).
MySQL's protocol is stateful, which means if multiple goroutines attempt to use the same connection, the requests and responses get very confused.
You would have the same problem trying to share any other kind of stateful protocol connection between goroutines.
For example, FTP is also a stateful protocol, and that may be easier to understand. A client goroutine might send a message like "get file x" and the response should be a series of messages containing the content of that file. If another goroutine tries to use the same connection while that request/response is in progress, both clients will be confused. The second goroutine will read packets that belong to a file it didn't request. The first goroutine, which requested the file, will find that some packets it was expecting have already been read.
Similarly, MySQL's protocol does not support multiple client goroutines sharing a single connection.
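If the import uses Go's database/sql package with the go-sql-driver/mysql driver, the simplest way to get this behaviour is to share one *sql.DB between the workers and let its internal pool hand each goroutine its own underlying connection. A minimal sketch of that pattern follows; the DSN, table and column names are invented placeholders, not taken from the question:

package main

import (
    "database/sql"
    "log"
    "sync"

    _ "github.com/go-sql-driver/mysql" // registers the "mysql" driver
)

func main() {
    // Open once; *sql.DB is a connection pool and is safe for concurrent use.
    db, err := sql.Open("mysql", "user:pass@tcp(127.0.0.1:3306)/shop")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    // Optionally cap the pool so many workers don't flood MySQL.
    db.SetMaxOpenConns(10)

    files := []string{"a.csv", "b.csv", "c.csv"} // placeholder work items
    var wg sync.WaitGroup
    for _, f := range files {
        wg.Add(1)
        go func(file string) {
            defer wg.Done()
            // Each Exec checks a connection out of the pool and returns it,
            // so goroutines never share a raw MySQL connection.
            if _, err := db.Exec(
                "INSERT INTO products (source_file) VALUES (?)", file,
            ); err != nil {
                log.Printf("%s: %v", file, err)
            }
        }(f)
    }
    wg.Wait()
}

If the code instead talks to MySQL through a lower-level client that exposes a single raw connection, then each worker should open its own connection, as described above.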
In the init phase of my MS Access app, I set some links to ODBC tables in a PostgreSQL db. I also set the application name with a statement "set application_name = ... ;".
So far it works well, but ...
After a phase of inactivity (or for some other reason) the connection is closed. After accessing a linked table, the connection is reopened automatically. That is pretty cool, but ...
=> the application_name is lost.
Question: Can I use a trigger function when MS Access opens a new connection, or is there any other solution?
If you're just using linked tables, then no, unfortunately, this is not possible.
Access manages the connection internally for tables, and doesn't fire any events when reconnecting.
If you're only using forms, you can manage the connection yourself using a predeclared self-healing object. In that case you can raise events when reconnecting, and let the object set the application name.
Another half-solution is to use passthrough queries instead of tables, and start them all off with set application_name.
I have a question regarding the flow of Go code.
In my main function, I am opening a MySQL connection and then using "defer" to close the connection at the end of main.
I have a route where a WebSocket is set up and used.
My question is: will the program open a connection every time the WebSocket is used to send and receive a message, or will it just open once when the page is loaded?
Here is what my code looks like:
package main

import (
    // Loading various package
)

func main() {
    // Opening DB connection -> *sql.DB
    db := openMySql()

    // Closing DB connection
    defer db.Close()

    // Route for "websocket" end point
    app.Get("/ws", wsHandler(db))

    // Another route using "WebSocket" endpoint.
    app.Get("/message", message(db))
}
Now, while a user is at the "/message" route, whenever he sends a message to other users, will the MySQL open-and-close-connection event happen every time a message is sent and received using the "/ws" route?
Or will it happen just once, when the "/message" route and the "/ws" endpoint are called for the first time?
My purpose in using "db" in the "wsHandler" function is to check whether the user has permission to send a message to the particular room.
But there is no point opening and closing a connection every second while the WebSocket emits message or typing events.
What would be the best way to handle permission checking in the "/ws" route if the above code is a horror, considering that there will be a few hundred thousand concurrent users?
Assuming db is a *sql.DB, your code seems fine. I'm also assuming that your example is incomplete and your main does not actually return right away.
The docs on Open state:
The returned DB is safe for concurrent use by multiple goroutines and maintains its own pool of idle connections. Thus, the Open function should be called just once. It is rarely necessary to close a DB.
So wsHandler and message should be OK to use it as they please, as long as they don't close the DB themselves.
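To make that concrete, here is a minimal sketch of the pattern the answer describes: open the pool once in main, pass it to the handlers, and let each request borrow a pooled connection for its permission check. It uses plain net/http because the question's framework isn't shown, and the table, columns and query parameters are invented for illustration:

package main

import (
    "database/sql"
    "log"
    "net/http"

    _ "github.com/go-sql-driver/mysql" // registers the "mysql" driver
)

// wsHandler checks room permission against the shared pool before it
// would upgrade the request to a websocket.
func wsHandler(db *sql.DB) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        userID := r.URL.Query().Get("user") // placeholder auth
        room := r.URL.Query().Get("room")

        var allowed bool
        // QueryRow borrows a connection from the pool and returns it after
        // Scan; the database is not opened or closed per message.
        err := db.QueryRow(
            "SELECT EXISTS(SELECT 1 FROM room_members WHERE user_id = ? AND room = ?)",
            userID, room,
        ).Scan(&allowed)
        if err != nil || !allowed {
            http.Error(w, "forbidden", http.StatusForbidden)
            return
        }
        // ... upgrade to websocket and handle messages here ...
    }
}

func main() {
    db, err := sql.Open("mysql", "user:pass@tcp(127.0.0.1:3306)/app")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close() // runs when main exits, not per request

    http.HandleFunc("/ws", wsHandler(db))
    log.Fatal(http.ListenAndServe(":8080", nil))
}

Nothing here opens or closes the database per message; db.Close only runs when main returns.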
We have a system with a master and multiple slaves.
Currently everything happens on the master and the slaves are just there for backup.
We use CodeIgniter as a development platform.
Now we have decided to use the slaves for the read queries and the master for the write queries.
I have been told that this is not doable without modifying the source code, because a proxy can't know the type of the query.
Any idea how to proceed with this without causing too much damage to a perfectly working system?
We will use this: http://dev.mysql.com/downloads/mysql-proxy/
It does exactly what we want:
More info here:
http://jan.kneschke.de/2007/8/1/mysql-proxy-learns-r-w-splitting/
http://www.infoq.com/news/2007/10/mysqlproxyrwsplitting
http://archive.oreilly.com/pub/a/databases/2007/07/12/getting-started-with-mysql-proxy.html
This is something I was also looking for. A few months back I did something similar, but I added three web servers in front of master/slave MySQL servers. The first web server has mod_proxy enabled and all requests come to it; it redirects them to the read or write server: if a POST, PUT or DELETE request comes in, it goes to the write server; all GET and other requests go to the read server.
Here you can find the mod_proxy settings I used:
http://pastebin.com/a30BRHFq
Here you can read about load balancing:
http://www.rackspace.com/knowledge_center/article/simple-load-balancing-with-apache
I'm still looking for a better solution with less hardware involved.
I figured out another solution through CI: create two database connections in the database.php file. Keep the slave MySQL server as the default database connection, and add another connection for the write-only server.
You can use this base model to extend:
https://github.com/jamierumbelow/codeigniter-base-model
You need to extend your models with this model. It has functionality for callbacks before and after insert, update, delete and get queries; you only need to add one custom method or callback, change_db_group.
// this method goes in MY_Model
function change_db_group()
{
    $this->_database = $this->load->database('writedb', TRUE);
}
Now your example model:
class Example_Model extends MY_Model {
    protected $_table = 'example_table';
    protected $before_create = array('change_db_group');
    protected $before_update = array('change_db_group');
    protected $before_delete = array('change_db_group');
}
Your database connection will be changed before insert, update or delete queries are executed.
In the Erlang mysql module, the exposed external functions are:
%% External exports
-export([start_link/5,
start_link/6,
start_link/7,
start_link/8,
start/5,
start/6,
start/7,
start/8,
connect/7,
connect/8,
connect/9,
fetch/1,
fetch/2,
fetch/3,
prepare/2,
execute/1,
execute/2,
execute/3,
execute/4,
unprepare/1,
get_prepared/1,
get_prepared/2,
transaction/2,
transaction/3,
get_result_field_info/1,
get_result_rows/1,
get_result_affected_rows/1,
get_result_reason/1,
encode/1,
encode/2,
asciz_binary/2
]).
From this, it is not apparent how to close a connection.
How is a connection closed?
I quickly browsed through the mysql_driver code. You're right - it doesn't seem to have a mechanism to close opened connections. In fact, I don't even see proper clean-up code to close the open sockets when a gen_server gets shut down, say, in the terminate callback.
{Type, Result} = mysql:start_link(P1, Host, User, Passwd, DB),
stop(Result) closes the connection