MySQL: Handling a high write rate + occasional reads on an analytics table (Master/Slave)

We will be providing analytics; multiple writes will be made per second. Currently the databases are MariaDB. My aim is to write to the database as fast as possible, and to be able to query the data occasionally on user request (through the web application). The reads don't have to return the very latest data; I could query the analytics data and parse it every 5 minutes.
As I understand it, if I set up a master/slave database relationship, I will be able to read from the slave database and write as fast as possible to the master, without locking the database. Is that right?
Are there any better ideas?
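For reference, the SQL side of a classic master/slave setup is small. This is a minimal sketch, assuming binlog-position replication; the 'repl' user, host name, and log file/position are placeholders you would take from your own master (via SHOW MASTER STATUS):

    -- On the master: create a replication user (name/password are placeholders).
    CREATE USER 'repl'@'%' IDENTIFIED BY 'repl_password';
    GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';

    -- The master also needs binary logging and a unique server-id in my.cnf:
    --   [mysqld]
    --   server-id = 1
    --   log-bin   = mysql-bin

    -- On the slave: point it at the master and start replicating.
    CHANGE MASTER TO
        MASTER_HOST = 'master.example.com',
        MASTER_USER = 'repl',
        MASTER_PASSWORD = 'repl_password',
        MASTER_LOG_FILE = 'mysql-bin.000001',
        MASTER_LOG_POS = 4;
    START SLAVE;

The web application then sends all INSERTs to the master and all analytics SELECTs to the slave. Replication is asynchronous, which fits the 5-minute freshness requirement.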

Related

Pretend to be a MySQL server

For a project, we are working with an external partner, and we need access to their MySQL database. The problem is, they can't grant that. Their database is hosted in a managed environment where they don't have many configuration options, and they don't want to give us access to all of their data. So the solution they came up with is the federated storage engine.
We now have one table for each table of their database. The problem is that the amount of data we get is huge and will only increase in the future. That means a lot of inserts are performed on our database. The optimal solution for us would be to intercept all incoming MySQL traffic, process it, and then store it in bulk. We also thought about using something like Redis to store the data.
Additionally, we plan to get more data from different partners, and they will potentially provide the data in different ways. Using Redis would allow us to have all our data in one place.
Copying the data to Redis after it is stored in the MySQL database is not an option: we just can't handle that many inserts, and we need the data as fast as possible.
TL;DR
Is there a way to pretend to be a MySQL server, so we can directly process the data received via the federated storage engine?
We also thought about using the BLACKHOLE engine in combination with binary logging on our side, so incoming data would only be written to the binary log and never stored in the database. But then performance would still be limited by disk I/O.
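For what it's worth, the BLACKHOLE variant you describe takes only a few statements. A minimal sketch (table and column names are invented), assuming binary logging is enabled on the receiving server:

    -- my.cnf on the receiving server:
    --   [mysqld]
    --   server-id = 1
    --   log-bin   = mysql-bin

    -- Rows written to a BLACKHOLE table are discarded, but the statements
    -- still land in the binary log, where a downstream consumer (a slave,
    -- or mysqlbinlog piped into a bulk loader) can pick them up.
    CREATE TABLE partner_events (
        id         BIGINT NOT NULL,
        payload    VARCHAR(255),
        created_at DATETIME
    ) ENGINE=BLACKHOLE;

    -- This insert costs no table storage; only the binlog write hits disk.
    INSERT INTO partner_events VALUES (1, 'click', NOW());

Note that binlog writes are sequential I/O, which is usually far cheaper than the random I/O of real inserts, so the disk bottleneck may be less severe than you expect.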

How to set up a script that automatically clones the DB every 2 hours?

I'm planning to build a system that will have 30+ tables, with 100+ million rows in a few of them. I'm going to use MySQL with InnoDB (is there a better alternative for this?).
My scripts are going to add a couple of hundred thousand clicks to the database every day. On the other hand, I'd like to run heavy database queries during the day as well.
What I came up with is to have two different servers: Server A would take all the clicks and store them, and Server B would handle retrieving the results.
Question A: Is this the right approach? Question B: Is it possible to set up a script that clones the database from Server A to Server B, so the data is semi-up-to-date?
Edit: LEMP stack
You should not do this via a batch process that runs a large update every so often. Instead, use MySQL’s built-in replication features.
In particular, use a master-slave configuration. This allows you to keep multiple servers current in (essentially) real-time, while splitting reads (fast) from writes (slow) to get maximum performance.
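To see how "semi-up-to-date" Server B actually is, check the replication status on the slave; Seconds_Behind_Master reports the lag:

    -- Run on the slave (Server B):
    SHOW SLAVE STATUS\G

    -- Relevant fields in the output:
    --   Slave_IO_Running:      Yes
    --   Slave_SQL_Running:     Yes
    --   Seconds_Behind_Master: 0    <- replication lag in seconds

Under a steady load of a few hundred thousand inserts a day, lag is typically seconds rather than hours, so this comfortably beats the 2-hour cloning requirement.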

Should I worry about the load on a MySQL database?

I'm developing a site that is heavily dynamic and uses a MySQL database constantly. My question is - should I worry about the load on the database?
For example, a part of the site has a live chat which uses AJAX to contact the database every second for each user. Depending on how many users are connected, that's a lot of queries!
Is this something a MySQL database can handle, or am I pushing it? Thanks.
You are pushing it, actually. Depending on your server and the number of online users, MySQL will hit its limit at some point.
MySQL and other database management systems are built for storing data, and here you are not really storing anything: you are just relaying data between clients through MySQL, and that is not efficient.
To speed things up, you can use MySQL MEMORY tables for instant messages and keep offline messages in a separate MyISAM or InnoDB table (which actually stores the data).
The best way to build a chat infrastructure, though, is a backend application that keeps all the messages in memory and, past some limit, flushes undelivered messages to MySQL as offline messages. That is very much like using MEMORY tables, except you have more control over the data. The problem is that you then need to implement efficient data structures with good memory management, which is a hard task, and it is unnecessary unless you are building a commercial chat product, so I recommend using MEMORY tables as described.
Update
MySQL MEMORY tables are volatile (their contents are lost on a service/server restart), so don't use them for permanent storage; use them only to hold instant messages for a short time.
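A minimal sketch of that split (all table and column names are invented for illustration): live messages go to a MEMORY table, and undelivered ones are periodically flushed to a durable InnoDB table.

    -- Live messages: fast, in-memory, lost on restart.
    CREATE TABLE chat_live (
        id        INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
        from_user INT NOT NULL,
        to_user   INT NOT NULL,
        body      VARCHAR(500) NOT NULL,
        sent_at   DATETIME NOT NULL
    ) ENGINE=MEMORY;

    -- Offline messages: same structure, durable storage.
    CREATE TABLE chat_offline LIKE chat_live;
    ALTER TABLE chat_offline ENGINE=InnoDB;

    -- Cron job or scheduled event: move messages older than a minute to
    -- durable storage (the one-minute window is an arbitrary choice here).
    INSERT INTO chat_offline
        SELECT * FROM chat_live WHERE sent_at < NOW() - INTERVAL 1 MINUTE;
    DELETE FROM chat_live WHERE sent_at < NOW() - INTERVAL 1 MINUTE;

Keep in mind that MEMORY tables don't support TEXT/BLOB columns and store VARCHARs at their full declared length, so keep message columns modest.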

SQLAlchemy MySQL Caching?

I am developing a heavily used financial MySQL database (Django + SQLAlchemy) which is queried and manipulated constantly. My DB contains a lot of date-value pairs. I keep loading more and more data as time progresses, but historical values don't change, which is why I think caching could really improve performance for me.
Is beaker really my best option, or should I implement my own caching over Redis? I would love to hear some ideas for caching architectures - thanks!
The MySQL query cache stores the text of a SELECT statement together with the corresponding result that was sent to the client.
If an identical statement is received later, the server retrieves the results from the query cache rather than parsing and executing the statement again. The query cache is shared among sessions, so a result set generated by one client can be sent in response to the same query issued by another client.
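You can check whether the cache is on and whether it is actually being hit with a couple of status queries. (Note: the query cache was deprecated in MySQL 5.7 and removed in 8.0, so this applies to older MySQL versions and to MariaDB.)

    -- Is the cache enabled, and how big is it?
    SHOW VARIABLES LIKE 'query_cache%';

    -- Hit counters: compare Qcache_hits against Com_select to see how
    -- often results are served from the cache.
    SHOW STATUS LIKE 'Qcache%';

    -- The cache can be resized at runtime (the size here is illustrative):
    SET GLOBAL query_cache_size = 67108864;  -- 64 MB

Since your historical date-value pairs never change, an application-side cache (beaker or Redis) keyed on the query parameters will usually beat the query cache anyway, because MySQL invalidates all cached results for a table whenever that table is written to.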

MySQL synchronization questions

I have a MySQL DB which manages users' account data.
Each user can only query his own data.
I have a script that on initial login gets the user data and inserts it to the DB.
I scheduled a cron process which updates all users’ data every 4 hours.
Here are my questions regarding it:
(1) - Do I need to implement some kind of locking mechanism in the initial login script? This script can be executed by a large number of users simultaneously, but every user has a dedicated place in the DB, so it does not affect other DB rows.
(2) - Same question for the cron process; should I handle the scenario where, while the cron process updates user i's data, user i tries to fetch his data from the DB? Or does MySQL already handle this scenario?
Any help would be appreciated.
Thanks.
No, you don't need to lock the database; the MySQL engine handles this for you. If you were building a database engine yourself, you would have to make sure that nothing conflicts with a data update, but since you are running something as mature as MySQL, you don't need to worry about it.
While a row is being updated, conflicting writes will wait in line until the update finishes; with InnoDB, plain reads are not blocked at all, because they read a consistent snapshot of the last committed data.
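As a concrete illustration (table and values invented), with InnoDB a reader is not even queued behind an in-flight update:

    -- Session A: update a user's row inside a transaction.
    START TRANSACTION;
    UPDATE accounts SET balance = balance + 100 WHERE user_id = 42;
    -- (not yet committed)

    -- Session B, at the same time: this SELECT does not wait; under the
    -- default REPEATABLE READ isolation it sees the last committed value.
    SELECT balance FROM accounts WHERE user_id = 42;

    -- Session A:
    COMMIT;
    -- A new transaction in session B now sees the updated balance.

Only a second writer touching the same row (or a SELECT ... FOR UPDATE) would block until session A commits.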