I need to set up a server that can handle a very large number of simultaneous connections, approximately 1 million at the same time.
I need to know how to do it and what technologies to use.
I have messages from users (I'll use the XMPP protocol), and they have to pass through a server, so if 1 million people use the same server at the same time, it will crash. The users will also need a database (MySQL?) for registration.
So... how do I set up the server so that it doesn't crash? What kind of server should I use (Apache, MySQL...)? And what should I do so the database can handle all that traffic? Won't queries take too much time?
I've read the documents on configuring MySQL, and they don't suggest having more than 1000 connections.. link
Thank you!
You should set up load balancers (like nginx or HAProxy) that send users to the right server, and you can build a cloud: a web server cluster and a database cluster behind the load balancers. You can also set up a backup server with rsync, which can restore your data if anything goes wrong.
Check the DigitalOcean tutorials; they're worth a shot:
haproxy
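For a concrete starting point, a minimal HAProxy configuration that spreads incoming XMPP connections across two backend servers could look like the sketch below (host names and addresses are placeholders, not from the question):

# haproxy.cfg -- round-robin TCP load balancing across two app servers
frontend ft_xmpp
    bind *:5222
    mode tcp
    default_backend bk_xmpp

backend bk_xmpp
    mode tcp
    balance roundrobin
    server app1 10.0.0.11:5222 check   # 'check' enables health checks
    server app2 10.0.0.12:5222 check

XMPP is a plain TCP protocol, so mode tcp is used here; for HTTP traffic you would use mode http instead.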
I'm planning to create a system that tracks visitor clicks in a database. I'm expecting around 1M inserts/day into the database.
On the backend, I'll have an analytics system which will analyze all the data that's been collected over the days/weeks/months/years.
My question is: is it a practical approach to have 2 different MySQL servers + 1 web server? MySQL Server A would insert the clicks into its DB and would be connected to MySQL Server B by group replication, so whenever I create reports, etc. on MySQL Server B, it doesn't load Server A heavily.
These two database servers would then be connected to the web server, which would handle all the click requests and also display the backend reports.
Is that a practical solution, or is it better to have one bigger server handle all the MySQL data? Or to have multiple MySQL servers load balancing each other? Anything else, perhaps?
1M inserts/day is not a high load by modern standards. That's less than 12 per second on average (1,000,000 / 86,400 ≈ 11.6/s).
On sufficiently powerful servers with fast storage and proper tuning of MySQL options, you can expect to support at least 100x that load with a single MySQL server.
A better reason to use multiple MySQL servers is redundancy. Inevitably, any MySQL server needs to be upgraded, or you might have hardware failures and need to replace a disk or other components. To avoid downtime, you should have a standby database server that stays in sync with the primary server, using either MySQL replication or disk-level replication such as DRBD.
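As a rough illustration of what "proper tuning" means for an insert-heavy InnoDB workload, the settings that usually matter most are the buffer pool and redo log sizes. The values below are illustrative placeholders, not recommendations for any specific hardware:

# my.cnf -- example InnoDB tuning for a write-heavy workload
[mysqld]
innodb_buffer_pool_size        = 8G    # often 50-75% of RAM on a dedicated DB host
innodb_log_file_size           = 512M  # larger redo logs absorb write bursts
innodb_flush_log_at_trx_commit = 2     # flush the redo log once per second; trades up to
                                       # ~1s of durability for much higher write throughput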
I have 3 MySQL servers that I need to back up daily. Each server hosts just one database with multiple tables.
I've scripted a mysqldump job on each server, but this time I want each MySQL server backing up to a 4th server (the MASTER SERVER, which is in a remote location).
The master server will serve as a MIRROR for all 3 servers, so that we can view the data of the other servers even if one of them goes down, because the master server will be on a more reliable internet connection.
NOTES and LIMITATIONS:
1) EACH SERVER needs to "push" its backups to the MASTER SERVER, because the master server cannot make incoming connections to the slave servers (port forwarding is not supported on the slaves).
2) I'd prefer that only the "changes" are backed up, to keep things light on the network (synchronization? incremental backups?).
3) All machines are running Windows 7 at the moment, because for now I'm using Navicat for MySQL's synchronization features. I would prefer a PHP-script-based solution so I can migrate things to *nix. I've read about replication and all that stuff, but I wanted a ready-made solution, perhaps software I could download or buy. I have no time to code my own sync/replication scripts; I just want to get past this remote-sync hurdle and move on with the project.
regards to all
i've read about replication and all that stuff, but I kinda wanted a ready solution
But replication is a ready-made solution: you just type a few commands and change a little configuration. For example:
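Here is a minimal sketch of a classic master→slave setup (host names and credentials are placeholders; this is the classic pre-8.0 syntax, and newer versions use CHANGE REPLICATION SOURCE TO instead):

# my.cnf on the source (master) -- server-ids must be unique per server
[mysqld]
server-id = 1
log-bin   = mysql-bin

# my.cnf on the replica (slave)
[mysqld]
server-id = 2

-- on the master: create an account the slave can replicate through
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%' IDENTIFIED BY 'secret';

-- on the slave: point it at the master (take the log file/position
-- from SHOW MASTER STATUS on the master) and start replicating
CHANGE MASTER TO MASTER_HOST='master.example.com',
    MASTER_USER='repl', MASTER_PASSWORD='secret',
    MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=4;
START SLAVE;

Once running, only the changes (binary log events) travel over the wire, which addresses the "incremental" requirement in limitation #2.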
I'm trying to implement a proxy layer in front of the MySQL server that will catch redundant SQL queries and send them only once to the server. In other words, I have many clients (in PHP and Perl, on different web nodes) that talk to MySQL and very often repeat the same SELECT queries. When traffic goes up, MySQL very often goes down.
The question is: are you aware of any open source (or commercial) tool that can help? I tried MySQL Proxy, but it looks like it can't help.
Two suggestions:
MySQL Proxy
This is a front-end proxy from MySQL which, as far as I know, does what you want.
vtocc
From the Vitess project, used in YouTube's MySQL environment, this also does a similar thing. Query consolidation: the ability to reuse the results of an in-flight query for any subsequent requests that were received while the query was still executing.
You may want to look into HAProxy and how it works.
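If none of those pan out, a simpler stopgap (my sketch, not part of the suggestions above) is to cache SELECT results on the application side so repeated queries never reach MySQL at all. Unlike vtocc's consolidation of in-flight queries, this just reuses results for a short TTL. A minimal PHP example with memcached; all names and credentials are hypothetical:

<?php
// Serve repeated SELECTs from memcached instead of hitting MySQL every time.
function cached_query(mysqli $db, Memcached $cache, string $sql, int $ttl = 30): array {
    $key = 'q:' . md5($sql);
    $rows = $cache->get($key);
    if ($rows === false) {                                  // cache miss: query MySQL once
        $rows = $db->query($sql)->fetch_all(MYSQLI_ASSOC);
        $cache->set($key, $rows, $ttl);                     // identical queries now hit the cache
    }
    return $rows;
}

$db    = new mysqli('127.0.0.1', 'appuser', 'secret', 'mydb');
$cache = new Memcached();
$cache->addServer('127.0.0.1', 11211);
$users = cached_query($db, $cache, 'SELECT id, name FROM users');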
Here are two additional suggestions:
SUGGESTION #1: Set up a cluster
If your data is all InnoDB, you should try Percona XtraDB Cluster and use HAProxy in conjunction with it. You can load balance across all servers in the cluster, including the write master.
SUGGESTION #2: Set up a cluster via MySQL replication to 1 or more DB servers
Use HAProxy to load balance your reads across the read slaves, along the lines of the sketch below.
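Here is roughly what that piece could look like (slave addresses and the check user are placeholders):

# haproxy.cfg -- send read-only MySQL traffic to the slaves
listen mysql_read
    bind *:3307
    mode tcp
    balance leastconn
    option mysql-check user haproxy_check    # this user must exist on the slaves
    server slave1 10.0.0.21:3306 check
    server slave2 10.0.0.22:3306 check

Applications then point their read connections at port 3307 and keep sending writes directly to the master.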
If you are on a budget and your data is relatively small, set up multiple MySQL instances on one server.
I have a MySQL database running on our server at this location.
However, the internet connection at this location is slow (especially when several users are connected remotely).
We also have a remote web server on a very fast internet connection.
Can I run another MySQL server on the remote server and still be able to run queries and updates on it?
I want to have two servers because
- Users at this location can connect via LAN (fast)
- Users working remotely can connect to synced remote server (fast)
Is this possible? From what I understand, replication does not work this way. What is replication used for, then? Backups?
Thanks for your help!
[Edit]
After doing some more reading, I am a little worried about setting up multi-master replication, because I had not considered multi-master when designing the database, and conflicts could be an issue.
The good news, though, is that the most time-consuming operations are queries, not updates.
And I found out that there is a driver that handles master-slave connections.
http://dev.mysql.com/doc/refman/5.1/en/connector-j-reference-replication-connection.html
That way writes will be sent to the master and reads can come from the faster connection.
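For reference, with that Connector/J replication driver the routing is controlled by the JDBC URL and each connection's read-only flag (host names below are placeholders):

# the first host listed is the master, any further hosts are slaves
jdbc:mysql:replication://master.example.com,slave.example.com/mydb

# calling Connection.setReadOnly(true) routes statements to a slave;
# setReadOnly(false) sends them to the master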
Has anyone tried doing this before? My one concern is that if I send an update to the master and then run a query expecting to see that update on the slave, will it be there right away? Or will the slow connection make this solution just as slow as using the master for both reads and writes?
What you're asking about, I believe, is called multi-master replication, whereby both servers serve as replication masters to each other. Changes on either server are replicated back to the other as soon as possible. MySQL can be configured to do it; however, I'm not sure how the difference in connection speeds would affect your performance and data integrity.
http://dev.mysql.com/doc/refman/5.1/en/mysql-cluster-replication-multi-master.html
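If you do go multi-master, MySQL at least has built-in settings to keep AUTO_INCREMENT keys from colliding between the two masters, which addresses part of the conflict concern raised in the edit above (a sketch; adapt to your servers):

# my.cnf on server A
[mysqld]
auto_increment_increment = 2    # both masters step IDs by 2
auto_increment_offset    = 1    # server A generates 1, 3, 5, ...

# my.cnf on server B
[mysqld]
auto_increment_increment = 2
auto_increment_offset    = 2    # server B generates 2, 4, 6, ...

This avoids duplicate-key conflicts on inserts, but it does nothing for conflicting updates to the same row, so the caveat about data integrity still stands.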
One of the ERP applications I worked with was configured in such a way that there was only one user (for example, USER A) who connected to the database. Any user of the application (the workforce was in the thousands) who logged on to the system and tried to do anything was in effect calling USER A to connect to the database and execute queries on his behalf. The database was Oracle.
I was wondering how to achieve a similar thing with MySQL. I have a web application built with PHP and a MySQL database, and I expect different people to query the database via the web. Currently, when a user opens the web page, a connection to the database is made via a single DB user, and at the end of the query I close the connection. However, the database has a maximum user connection limit of 10, which in my understanding means one user can only establish a maximum of 10 connections. I do not want to create separate users for all the people who use my application (I do not even know how many people will use it, and I do not believe that would be a scalable solution).
You should look for a DB connection caching or pooling mechanism as a component of either your web server or your programming language. Such a mechanism will reuse connections transparently for you.
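In PHP, for instance, persistent connections give you a simple form of this: the driver keeps the connection open after the request ends and hands it back to the next request instead of opening a new one. A minimal sketch (credentials are placeholders):

<?php
// PDO: ask the driver to reuse an existing connection when possible
$pdo = new PDO('mysql:host=127.0.0.1;dbname=mydb', 'appuser', 'secret',
               [PDO::ATTR_PERSISTENT => true]);

// mysqli: the "p:" host prefix requests a persistent connection
$db = new mysqli('p:127.0.0.1', 'appuser', 'secret', 'mydb');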
If the database connection is refused, return HTTP error 502. If connections are closed at the end of each page load, they should only last ~100 ms, so concurrent connections will stay low in most situations.
Should you need to adjust it, edit my.cnf to increase concurrent connections:
max_connections = 150
max_user_connections = 150
If traffic is very high, you can enable persistent MySQL connections in PHP, or cache your content so as not to hammer the database.
Hope that helps!