Our production environments typically consist of 4-8 Apache web servers and 2 MySQL servers:
Each web server is affiliated with one SQL server
The SQL servers have a circular replication setup
All web servers are load balanced, for example by Pound
Every night a job backs up one of the SQL servers, locking the affiliated web servers for about 10-15 minutes.
Is there a way to configure the load balancing so that requests avoid the locked servers during that short window?
Is there another way to handle this lock, other than backing up a non-production third server?
PS: We are considering reloading the Pound configuration with an appropriate configuration file just before and after the backup, but that feels a bit odd...
How about using poundctl to disable and re-enable the back-end server? It must be run locally (the command protocol uses Unix sockets), but you could probably have it launched remotely through an SSH session; a sketch follows the man-page excerpt below.
From the man page:
OPTIONS
[...]
-B/-b n m r
    Enable/disable a back-end. A disabled back-end will not be passed requests to answer. Note however that existing sessions may still cause requests to be sent their way.
-n n m k
    Remove a session from service m in listener n. The session key is k.
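For example, the nightly backup job could drain the affected back-end before locking the database. This is only a minimal sketch: the control socket path, the listener/service/back-end indices (0 0 2), and the host names are assumptions; you can list the real indices first by running poundctl with just the -c option.

    #!/bin/sh
    # Hypothetical backup wrapper: drain the web server from Pound,
    # run the dump, then re-enable it. Indices and hosts are placeholders.
    LB_HOST=lb1.example.com            # assumed load-balancer host
    SOCKET=/var/run/poundctl.socket    # assumed Pound control socket

    # Disable back-end 2 of service 0 on listener 0
    ssh "$LB_HOST" poundctl -c "$SOCKET" -b 0 0 2

    mysqldump --all-databases --single-transaction > /backup/nightly.sql

    # Re-enable the back-end once the backup has finished
    ssh "$LB_HOST" poundctl -c "$SOCKET" -B 0 0 2

Note the caveat from the man page: existing sessions may still send requests to a disabled back-end, so you may want to wait a short grace period (or remove the sessions with -n) before the dump takes its locks.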
I have been building a web application for 50k users. My application will include:
APIs + Socket server: NestJS + SocketIO
Database server: MySQL
Frontend server: ReactJS
I'm going to choose EC2 instances for those. Could you help me choose appropriate instances for each server (e.g. t2.xlarge or ...)? My application will have 3 environments: development, staging & production.
Thanks!
Nobody can provide the information you seek.
Every application is different. Some apps are compute-intensive (e.g. video transcoding), some are memory-intensive (e.g. data manipulation) and some are network-intensive (e.g. video chat). Also, the way users interact with an app differs from app to app.
The only way you will know the "appropriate instances for each server" is to set up a test platform, select a particular server configuration, then simulate typical usage of your application with the desired number of users (e.g. 50k). Monitor each server (CPU, RAM) and find any bottlenecks. Then adjust the instance types and app configuration, and test again.
Yes, it's a lot of work, but that's the only way you'll really know what system sizes and configurations are required. Or, of course, you can simply get real users on your app, monitor it very closely, and make changes on the fly.
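As a concrete starting point for such a test, a simple HTTP load run plus live monitoring can be scripted as below. This is a rough sketch: the staging URL, request count, and concurrency are assumptions, and a Socket.IO workload would need a dedicated WebSocket load tool rather than plain HTTP benchmarking.

    #!/bin/sh
    # Hypothetical smoke-level load test against a staging endpoint.
    # 'ab' (Apache Bench) ships with the Apache httpd utilities.
    TARGET=https://staging.example.com/api/health   # assumed endpoint

    # 50,000 requests, 500 concurrent; reports latency percentiles
    ab -n 50000 -c 500 "$TARGET"

    # Meanwhile, on each server under test, watch CPU and memory:
    #   vmstat 5
    # and check MySQL thread pressure with:
    #   mysqladmin -u root -p extended-status | grep -i thread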
My client strongly desires highly available access to the data stored in their MySQL database. They want to be confident that a reliable solution exists to avert downtime caused by a database server failure.
Within the scope of MySQL, how can I provide a resilient data storage solution to my client?
There should be a means to ensure our app stays up and is not starved of the data it needs to operate when a DB server goes down. I googled and found this:
http://galeracluster.com/documentation-webpages/configuration.html
But I think there should be an easier way to switch between different DB servers, am I right?
In any case, my question is: what are the accepted practices for handling the situation when a DB server goes offline?
You are looking for a database cluster (probably with multi-master replication):
https://dev.mysql.com/doc/refman/5.7/en/mysql-cluster-replication-multi-master.html
This topic is WAY too deep for an SO post, but this is the direction you should be heading.
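To give a flavour of the direction, a Galera-style cluster (as in the documentation linked in the question) is driven by a handful of my.cnf settings. The sketch below assumes a Galera-enabled build (e.g. Percona XtraDB Cluster or MariaDB Galera); the node addresses, cluster name, and library path are placeholders to adapt from the Galera docs.

    # Hypothetical: append Galera settings to one node's my.cnf
    sudo tee -a /etc/mysql/my.cnf <<'EOF'
    [mysqld]
    wsrep_on               = ON
    wsrep_provider         = /usr/lib/galera/libgalera_smm.so
    wsrep_cluster_name     = my_ha_cluster
    wsrep_cluster_address  = gcomm://10.0.0.1,10.0.0.2,10.0.0.3
    binlog_format          = ROW
    default_storage_engine = InnoDB
    EOF
    sudo systemctl restart mysql

With every node writable, the application (or a small proxy in front of the cluster) can fail over to any surviving node rather than "switching" databases itself.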
The solution to the challenge you described is Database Replication.
Replication in computing involves sharing information so as to ensure consistency between redundant resources, such as software or hardware components, to improve reliability, fault-tolerance, or accessibility.
Database replication can be used on many database management systems (DBMS), usually with a master-slave relationship between the original and the copies. The master logs the updates, which then ripple through to the slaves. Each slave outputs a message stating that it has received the update successfully, thus allowing the sending of subsequent updates.
MySQL supports replication natively once you configure it, so you do not have to implement the actual replication process yourself.
See the official MySQL documentation on MySQL database replication.
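To give a flavour of what that configuration involves, a classic master-slave setup boils down to a few steps. This is a minimal sketch: the host name, replication account, and binary-log coordinates are assumptions; the MySQL manual is the authoritative reference.

    #!/bin/sh
    # --- On the master: my.cnf needs server-id=1 and log-bin enabled ---
    # Create a replication account and note the current log coordinates:
    mysql -u root -p -e "
      CREATE USER 'repl'@'%' IDENTIFIED BY 'secret';
      GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';
      SHOW MASTER STATUS;"

    # --- On the slave: my.cnf needs server-id=2; then point it at the master
    # (log file and position come from SHOW MASTER STATUS above):
    mysql -u root -p -e "
      CHANGE MASTER TO
        MASTER_HOST='master.example.com',
        MASTER_USER='repl',
        MASTER_PASSWORD='secret',
        MASTER_LOG_FILE='mysql-bin.000001',
        MASTER_LOG_POS=154;
      START SLAVE;"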
I have a Laravel website and a MySQL database for it.
My website has a few users, and each user has a MySQL database on his own computer; all of those databases have the same name, password, and configuration.
Now I want a user to log in to my website and, based on information from the website, make changes to the MySQL database on his computer.
How can I do this?
How can my website connect to a MySQL database on a user's computer?
The direct answer to your question is that if you know the details of your users' machines, you can create connection strings for them in your config file, and use those connections to open MySQL sessions on the client machines - see https://laravel.com/docs/5.5/database, section "Using Multiple Database Connections".
This assumes all the machines are accessible from your server - presumably because they are on the same, local, non-internet-accessible network.
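Purely to illustrate what such a connection amounts to, the server would be opening a session like the one below (a sketch; the address and shared credentials are hypothetical stand-ins, and Laravel's per-connection config entries carry exactly the same parameters):

    # Hypothetical test from the web server: can we reach one user's
    # local MySQL instance on the shared LAN?
    mysql -h 192.0.2.10 -u shareduser -p -e "SELECT VERSION();" shareddb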
If your users' machines are accessible from the Internet, please do not do this - they will get hacked. It's a question of "when", not "if".
It's also a pretty horrible solution from an application architecture point of view - presumably the application expects certain invariants to hold in each user's database, and your application would have to guarantee all of them. For instance, it might expect "all orders have a valid customer; all customers have a valid country code". That's hard enough to guarantee on a single database; on a distributed system it's really hard.
It's much better to use MySQL replication for scenarios like this.
I have two identically configured MySQL 5.6.10 servers and needed to move the data files quickly from one to the other. Is this an OK procedure?
Here is what I did:
1) Shut down both servers
2) Moved all the files from one box to the other (DATA is on a separate drive on both machines)
3) Turned the second server on
4) Connected it back to the app server
It took about 5 minutes to move all files (~50GB) and all seems to work. I just wonder if I missed anything?
Thanks much for your feedback.
If both server versions are the same, then I think it's perfectly fine, not just OK; I have done the same many times without any data loss. But this method comes with costs (see the sketch after this list):
You have to shut down the MySQL server (which is not good if it's a production server)
You have to make sure the permissions on the data (mysql) directory are the same as before
You will have to monitor the MySQL error log while starting the second server
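A minimal sketch of that cold-copy procedure, with the ownership fix and error-log check included (the paths, host name, and service name are assumptions for a typical Linux install):

    #!/bin/sh
    # Hypothetical cold copy of a MySQL data directory between two
    # identically configured servers.
    DATADIR=/var/lib/mysql
    TARGET=server2.example.com

    sudo systemctl stop mysql                           # cold copy needs a stopped server
    sudo rsync -a "$DATADIR/" "root@$TARGET:$DATADIR/"  # copy all files, preserving modes

    # Then, on the target machine:
    #   chown -R mysql:mysql /var/lib/mysql             # match ownership/permissions
    #   systemctl start mysql
    #   tail -f /var/log/mysql/error.log                # watch for startup errors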
You can use mysqldump; if you don't want to, you can use MySQL Workbench's migration wizard, which really takes care of everything.
A much safer and recommended approach is database backup and recovery.
Do a full backup from server1 and restore it to server2. From then on, you can go for differential backups.
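With plain mysqldump, the full round trip looks something like this (a sketch; the host name and file path are assumptions, and --single-transaction assumes InnoDB tables):

    # Full logical backup on server1
    mysqldump -u root -p --all-databases --single-transaction > /backup/full.sql

    # Restore on server2
    mysql -h server2.example.com -u root -p < /backup/full.sql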
Do you have any experience with this? I currently have 1900 MySQL databases under a single domain in my Plesk control panel, and I wonder whether my MySQL server will become overloaded or go out of service due to such a high number of databases in the system.
Do you have any suggestions? Each database is for a user of my service, by the way.
MySQL itself doesn't place any restrictions on the number of databases you can have, and I doubt Plesk does either; I'm sure it just displays all the databases present on the MySQL server.
However, your host may have a limit (which you'd have to ask them about), or if you start getting a huge number of databases, you may actually run into a filesystem limit. As the MySQL documentation says, each database is stored as a directory, so you could hypothetically hit the filesystem's upper limit for how many subdirectories are allowed.
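To see how close you are to any such limit, a few quick checks help (a sketch; the data directory path is an assumption and varies by distribution):

    # Count the databases MySQL knows about (output includes a header line)
    mysql -u root -p -e "SHOW DATABASES;" | wc -l

    # One directory per database in the data directory (plus system schemas)
    ls -ld /var/lib/mysql/*/ | wc -l

    # The open-files ceiling MySQL is currently running with
    mysql -u root -p -e "SHOW VARIABLES LIKE 'open_files_limit';"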
I've got well over 5000 databases running on a Linux-based Plesk cluster (one DB server, one web server) and it's running fine, though I have had to increase the open-files limits due to the huge number of files. I can't run the MySQL tuning primer any more, though - well, I can, but it takes about 4 hours.
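For reference, raising that limit usually means touching both MySQL and the service manager (a sketch for a systemd-based distribution; the value 65535 is an arbitrary assumption):

    # In my.cnf:  [mysqld]  open_files_limit = 65535
    # systemd also caps the process, so raise its limit too:
    sudo systemctl edit mysql      # add:  [Service]  LimitNOFILE=65535
    sudo systemctl restart mysql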