When I get new data from the database, does this mean a new connection? (MySQL)
Eg:
SELECT * FROM employees WHERE ID=1
Does this mean a new connection on the database server?
Do I need a connection to perform a query?
Yes, absolutely.
Do I need a fresh connection to get the latest data?
No, you only need to query again.
Does every query need a new connection?
No, you only need one connection, and then you can make several queries over it. Typically you open the connection once when your program starts and reuse it for whatever you need. Even closing the connection is optional; most modern drivers close it for you when the script exits, but it's good practice to take care of it yourself when you're done.
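A minimal sketch of that pattern, using the node mysql module that comes up later in this thread (host, credentials, and table are placeholders):

var mysql = require('mysql');

// Open one connection when the program starts.
var connection = mysql.createConnection({
  host: 'localhost',
  user: 'app_user',
  password: 'secret',
  database: 'company'
});
connection.connect();

// Reuse the same connection for as many queries as you like.
connection.query('SELECT * FROM employees WHERE ID = 1', function (err, rows) {
  if (err) throw err;
  console.log(rows);
});
connection.query('SELECT COUNT(*) AS total FROM employees', function (err, rows) {
  if (err) throw err;
  console.log(rows[0].total);
});

// Optional, but polite: close the connection when you are done.
connection.end();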
If you are not so familiar with SQL, this is an SQL query: a way of specifying which data you are requesting from the database. A database connection is a different thing.
I recommend digging into SQL to understand it a bit more; it is very easy to learn.
Does this mean a new connection on the database server?
If you are already connected, then no.
If you are not connected, then yes.
If your connection hit an inactivity timeout (or similar), then yes.
You can see your connections (as a normal user) using:
SHOW PROCESSLIST;
https://dev.mysql.com/doc/refman/8.0/en/show-processlist.html
It shows the connection ID, so if you want to check whether your connection is changing, that will help.
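If you want to check this from code rather than from SHOW PROCESSLIST, MySQL's CONNECTION_ID() function returns the ID of the current connection; a small sketch with the node mysql module (credentials are placeholders):

var mysql = require('mysql');
var connection = mysql.createConnection({ host: 'localhost', user: 'app_user', password: 'secret' });

// CONNECTION_ID() returns the same ID that SHOW PROCESSLIST displays.
connection.query('SELECT CONNECTION_ID() AS id', function (err, rows) {
  if (err) throw err;
  console.log('first query ran on connection', rows[0].id);
});
connection.query('SELECT CONNECTION_ID() AS id', function (err, rows) {
  if (err) throw err;
  // Same ID as above: both queries used the same connection.
  console.log('second query ran on connection', rows[0].id);
});
connection.end();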
Related
I am using this
var mysql = require('mysql');
in my node.js app. I want my app to perform as fast as possible. I have many functions that connect to SQL. There are two approaches I am familiar with:
For every request, I make a new connection, execute the query, and then close the connection.
Open the connection and make it a global variable, and then never close it. Then for every request that comes in, it just uses the opened connection saved globally.
Which is generally better to use? Also, for number 2: if the server closes unexpectedly, the SQL connection doesn't get closed. Is that bad?
Thanks
Approach 2 is faster, but to avoid the potential problem of connections dropping unexpectedly, you'll have to implement a testing mechanism for every segment that queries the database (e.g., check the number of returned rows).
To take this approach further, you can define a connection bank, or pool, where you deal with connection testing and distribution. The basic idea is to keep several connections to the database open and hand only healthy connections to consumers (functions or objects that query the database). As Andrew mentions in the comments, you can check this question: node.js + mysql connection pooling
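A minimal pooling sketch with the mysql module's built-in pool (connection details are placeholders; the linked question has more complete examples):

var mysql = require('mysql');

// The pool opens connections lazily, up to connectionLimit,
// and hands a free one to each query.
var pool = mysql.createPool({
  host: 'localhost',
  user: 'app_user',
  password: 'secret',
  database: 'company',
  connectionLimit: 10
});

// pool.query() checks a connection out, runs the query,
// and releases the connection back to the pool automatically.
pool.query('SELECT * FROM employees WHERE ID = ?', [1], function (err, rows) {
  if (err) throw err;
  console.log(rows);
});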
Since the database is an essential asset of a project, if this is not a homework or learning project, it might not be a bad idea to explore third-party libraries, where a lot of the connection and security details are covered and automated.
I am on a Linux platform with PostgreSQL 5.5. I am trying to monitor all PostgreSQL traffic between the master and the slave, so I used Wireshark. I started PostgreSQL and ran three operations (CREATE TABLE hello, CREATE TABLE bye, and an INSERT of an image into the database). While the queries ran, I ran Wireshark on the master to capture the traffic between master and slave.
But there is one problem with the PostgreSQL traffic captured using Wireshark: all of it is sent/received in TCP packets in an encoded form, and I can't read the data. I want to find, in the Wireshark capture, the three queries I ran against the PostgreSQL database.
What is the best way to go about finding queries of PostgreSQL?
On the other hand, I ran the same queries on a MySQL database and repeated the experiment above. There, I can easily read all three queries in the Wireshark dump because they are not encoded.
The Wireshark file of the PostgreSQL experiment is available at Wireshark-File. I need to find the three queries above in that Wireshark file.
About File:
192.168.50.11 is the source machine from which I sent queries to the remote PostgreSQL master server. 192.168.50.12 is the master's IP; 192.168.50.13 is the slave's IP. Queries were executed from .11, applied on .12, and then replicated to .13 using the master-slave approach.
Pointers will be very welcome.
You are probably using WAL-based replication (the default), which means you can't.
This involves shipping the transaction logs between machines; they are the actual on-disk representation of the data.
There are alternative trigger-based replication methods (Slony, etc.) and the newer logical replication.
Neither will let you recreate the complete original query, as I understand it, but they would let you get closer.
There are systems which duplicate the queries on nodes (like MySQL) but they aren't quite the same thing.
If you want to know exactly what queries are running on the master, turn on query logging and monitor the logs instead.
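For reference, a minimal way to switch that on is in postgresql.conf (the file's location varies by installation; reload the server afterwards, e.g. with SELECT pg_reload_conf();):

# Log every statement the server executes.
log_statement = 'all'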
Solution to my own problem:
I found the solution to my question.
I used Python code to insert queries into the remote PostgreSQL database, with the following line to connect:
con = psycopg2.connect(host="192.168.50.12", database="postgres", user="postgres", password="faban")
With the approach above, all the data is sent in encrypted form. With the approach below (note the explicit sslmode=disable), all the data is sent in plain text, and you can easily read every query in Wireshark.
con = psycopg2.connect("host=192.168.50.12 dbname=postgres user=postgres password=faban sslmode=disable")
The same applies to C code.
Plain text:
sprintf(conninfo, "dbname=postgres hostaddr=192.168.50.12 user=postgres password=faban sslmode=disable");
Encrypted:
sprintf(conninfo, "dbname=postgres hostaddr=192.168.50.12 user=postgres password=faban");
I am developing a server application that uses MySQL for some data storage. Should I create a connection to MySQL when the server starts and use it for all queries, or create a connection for each query? Which is better/faster?
If you ask me, it's better to use only one connection; that way you can use session variables without any issues. Besides, with static variables it's easy to keep the connection saved somewhere in the script.
Newbie question....sorry
I have a simple MySQL database running on our intranet (Windows server), which >20 people connect to for searching/inserting records, etc.
This is done with a simple Excel GUI.
Process is:
Search strings are typed into Excel cells
VBA opens a connection to MySQL and the query is run
Results retrieved are put into Excel
Connection to MySQL is closed with VBA
The above process takes in general 0-2 seconds. Records retrieved <100.
Everything runs fine so far.
In order to be able to connect more people in the future, I would like some feedback on whether it is OK to continuously connect and disconnect from MySQL the way I am doing.
Can it cause some type of crash, memory leaks, etc.?
Is there some better way to do this?
I am hoping to get <2000 users, but I understand that the more users are connected, the worse it gets.
By disconnecting after each search/insert, I am hoping to keep the number of live connections as low as possible.
thanks for your input
This constant connecting and disconnecting is an expensive process.
A better way would be to use server-side scripting to manage your connections. That way you would have a single persistent connection to each server, and the users would execute their queries through that single connection. You would also need to implement some sort of job queue for execution; a sketch follows.
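One way to sketch that, reusing the node mysql module discussed earlier in this thread: a single shared connection behind a tiny HTTP endpoint, with the module's own internal queue serializing the queries (the endpoint, table, and credentials are invented for illustration):

var http = require('http');
var mysql = require('mysql');

// One persistent connection, shared by all users.
var connection = mysql.createConnection({
  host: 'localhost',
  user: 'app_user',
  password: 'secret',
  database: 'intranet'
});
connection.connect();

// Each incoming request enqueues its query on the shared connection;
// the mysql module runs queued queries one at a time.
http.createServer(function (req, res) {
  var term = decodeURIComponent(req.url.slice(1));
  connection.query('SELECT * FROM records WHERE name LIKE ?', ['%' + term + '%'],
    function (err, rows) {
      if (err) { res.writeHead(500); res.end('query failed'); return; }
      res.writeHead(200, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify(rows));
    });
}).listen(8080);

The Excel VBA would then fetch results from this endpoint instead of opening its own MySQL connection.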
I've been thinking, why does Apache start a new connection to the MySQL server for each page request? Why doesn't it just keep ONE connection open at all times and send all sql queries through that one connection (obviously with client id attached to each req)?
It would cut down on handshake overhead, and I see a couple of other advantages.
It's like plugging in a computer every time you want to use it. Why go to the outlet each time when you can just leave it plugged in?
MySQL does not support multiple sessions over a single connection.
Oracle, for instance, allows this, and you can set up Apache to multiplex several logical sessions over a single TCP connection.
This is a limitation of MySQL, not of Apache or scripting languages.
There are modules that can do session pooling (a sketch follows the list):
Precreate a number of connections
Pick a free connection on demand
Create additional connections if no free connection is available
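With the node mysql module mentioned in the earlier question, that checkout/release cycle looks roughly like this (credentials are placeholders):

var mysql = require('mysql');
var pool = mysql.createPool({ host: 'localhost', user: 'app_user', password: 'secret', connectionLimit: 10 });

// Pick a free connection on demand; the pool creates a new one
// if none is free and the limit has not been reached.
pool.getConnection(function (err, connection) {
  if (err) throw err;
  connection.query('SELECT 1', function (err, rows) {
    // Return the connection to the pool instead of closing it.
    connection.release();
    if (err) throw err;
    console.log(rows);
  });
});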
The reason is: it's simpler.
To re-use connections, you have to invent and implement connection pooling. This adds another almost-layer of code that has to be developed, maintained, etc.
Pooled connections also invite a whole other class of bugs that you have to watch out for while developing your application. For example, if you define a user variable, but the next user of that connection goes down a code path that branches on the existence of that variable, then that user runs the wrong code. Other problems include temporary tables, transaction deadlocks, session variables, etc. All of these become very hard to reproduce because they depend on the subsequent actions of two different users who appear to have no ties to each other.
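To make the user-variable hazard concrete, here is a sketch with the node mysql module (connectionLimit is forced to 1 so the connection reuse is guaranteed; names are invented for illustration):

var mysql = require('mysql');
var pool = mysql.createPool({ host: 'localhost', user: 'app_user', password: 'secret', connectionLimit: 1 });

// User A sets a session-scoped user variable, then releases the connection.
pool.getConnection(function (err, conn) {
  if (err) throw err;
  conn.query('SET @discount = 0.5', function (err) {
    if (err) throw err;
    conn.release();

    // User B checks out the same physical connection:
    // @discount is still 0.5, even though user B never set it.
    pool.getConnection(function (err, conn2) {
      if (err) throw err;
      conn2.query('SELECT @discount AS d', function (err, rows) {
        conn2.release();
        if (err) throw err;
        console.log(rows[0].d); // 0.5, leaked from user A's session
        pool.end();
      });
    });
  });
});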
Besides, the connection overhead of a MySQL connection is tiny. In my experience, connection pooling does not increase the number of users a server can support by very much.
Because that's the purpose of the mod_dbd module.