Data is stored, but after a database shutdown/restart the data is missing - mysql

I have a problem with my database. I am using MySQL Cluster to operate it. The MySQL Cluster has 1 management node and 3 data & SQL nodes. The databases are load balanced by HAProxy, and the 2 load balancers fail over via Keepalived. Here is the list of IPs:
192.168.1.11: virtual ip for failover
192.168.1.12: load balancer master
192.168.1.13: load balancer backup
192.168.1.14: data & SQL node 1
192.168.1.15: data & SQL node 2
192.168.1.16: data & SQL node 3
192.168.1.17: management node
The problem is: when the web server (PHP webpage) connects to the database through 192.168.1.11, or directly to a node, e.g. 192.168.1.14, the data is stored, and when I check with HeidiSQL the data is in the database too. But the problem comes when I shut down or restart the database server: when I start it again, the data that was already stored in the database is missing. I don't know what the problem is, so what must I do? Thanks for your attention, guys :D

Databases normally don't automatically commit your data.
Your data is stored temporarily, and only after your data is COMMITTED does the database actually make the changes permanent.
SQL would require you to type the keyword COMMIT.
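For example, with an explicit transaction (the table t here is hypothetical):

    -- Nothing below is made permanent until COMMIT.
    START TRANSACTION;
    INSERT INTO t (id, val) VALUES (1, 'hello');
    -- If the session ends before this line, the insert is rolled back.
    COMMIT;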

Related

Setting up MySQL (Master-Slave) replication with already configured databases/tables

I am trying to configure MySQL databases using Master-Slave replication. Before I realized that I had to set up my environment using this replication, I already had 2 separate servers running their own MySQL DBs. Each of these servers is configured exactly the same. The MySQL DBs are configured with hundreds of tables.
Is there a way that I can set up (Master-Slave) replication using the already configured DBs? Or will I have to start from scratch, configure the replication first, and then load in all the DB tables?
You can delete all data from one of the servers. The remaining one with the data will be your master. Then use mysqldump to back up all the data and load it into the slave.
Take a look at the detailed instructions on the page below:
https://livecaller.io/blog/how-to-set-up-mysql-master-slave-replication/
If the data is exactly the same in both MySQL databases then you can start master-slave replication, but you need to be sure the data really is identical. MySQL will not check that, and if there is some discrepancy in a primary key it will throw an error immediately after the next DML statement.
To be on the safe side, drop the database from one server and restore it from a MySQL dump of the other server. This guarantees that the database is the same on both servers.
Refer to the link below to establish replication between two MySQL servers.
https://www.digitalocean.com/community/tutorials/how-to-set-up-master-slave-replication-in-mysql
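A rough sketch of the dump-and-seed procedure both answers describe (host names, the repl user, and the binlog coordinates are placeholders; the real coordinates come from the CHANGE MASTER comment that --master-data=2 writes into the dump, and binary logging must already be enabled on the master):

    # On the master: dump all databases with the binlog coordinates embedded.
    mysqldump -u root -p --all-databases --master-data=2 --single-transaction > master.sql

    # Copy the dump to the slave and load it.
    scp master.sql slave-host:/tmp/master.sql
    mysql -u root -p < /tmp/master.sql

    -- Then, at the slave's mysql prompt: point it at the master and start.
    -- MASTER_LOG_FILE/MASTER_LOG_POS are copied from the comment in master.sql.
    CHANGE MASTER TO MASTER_HOST='master-host', MASTER_USER='repl',
        MASTER_PASSWORD='repl-password',
        MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=107;
    START SLAVE;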

Moving a large MySQL database from a limited-resource server

I have a Windows Server with MySQL Database Server installed.
Multiple databases exist; among them, database A contains a huge table named 'tlog', about 220 GB in size.
I would like to move over database A to another server for backup purposes.
I know I can do an SQL dump or use MySQL Workbench/SQLyog to copy the tables.
But due to the limited disk storage on the server (less than 50 GB), an SQL dump is not possible.
The server is serving other work, so the CPU & RAM are limited too. As a result, copying the tables without using up CPU & RAM is not possible either.
Is there any other method to move the huge database A over to another server?
Thanks in advance.
You have a few ways:
Method 1
Dump and compress at the same time: mysqldump ... | gzip > blah.sql.gz
This method is good because chances are your compressed dump will be less than 50 GB: the dump itself is plain text, and you're compressing it on the fly.
Method 2
You can use slave replication; this method will require a dump of the data.
Method 3
You can also use xtrabackup.
Method 4
You can shut down the database and rsync the data directory.
Note: You don't actually have to shut down the database; you can instead do multiple rsyncs, and eventually a pass will find nothing left to change (unlikely if the database is busy; do it during a slow time), which means the data directory will have synced over.
I've had to use this method with fairly large PostgreSQL databases (1 TB+). It takes a few rsyncs, but hey, it's the cost of zero downtime.
Method 5
If you're in a virtual environment you could:
Clone the disk image.
If you're in AWS you could create an AMI.
You could add another disk and just sync locally; then detach the disk, and re-attach to the new VM.
If you're worried about consuming resources during the dump or transfer, you can use ionice and renice to lower the priority of the dump/transfer; a sketch combining this with Method 1 follows below.
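A rough sketch under stated assumptions (the destination host dest-host, the /backups path, and database name A are placeholders; assumes ssh, gzip, ionice, and rsync are available):

    # Method 1 with resource limits: dump, compress, and stream straight to
    # the destination host, so the 220 GB table never touches local disk.
    # nice/ionice keep the dump from starving the server's other work.
    nice -n 19 ionice -c3 \
      mysqldump -u root -p --single-transaction A \
      | gzip \
      | ssh dest-host 'cat > /backups/A.sql.gz'

    # On the destination host: decompress and load.
    gunzip < /backups/A.sql.gz | mysql -u root -p A

    # Method 4 variant: repeated rsyncs of the data directory; rerun until a
    # pass copies almost nothing (do the final pass during a quiet period).
    rsync -a /var/lib/mysql/ dest-host:/var/lib/mysql/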

Linking different schemas from different MySQL servers into one MySQL server so it can manage the queries

I have 5 different schemas; eventually I want to separate them onto different servers, with specific RAM and CPU assignments depending on the load.
How can I configure things so that a schema from a different server shows up on a "front" MySQL server?
MySQL Proxy:
The MySQL Proxy is an application that communicates over the network using the MySQL network protocol and provides communication between one or more MySQL servers and one or more MySQL clients.
However, note this warning from the documentation:
Warning: MySQL Proxy is currently an Alpha release and should not be used within production environments.
The FEDERATED Storage Engine:
The FEDERATED storage engine lets you access data from a remote MySQL database without using replication or cluster technology. Querying a local FEDERATED table automatically pulls the data from the remote (federated) tables. No data is stored on the local tables.
Replication:
Replication enables data from one MySQL database server (the master) to be replicated to one or more MySQL database servers (the slaves).
However, note:
In this environment, all writes and updates must take place on the master server. Reads, however, may take place on one or more slaves.
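A minimal sketch of the FEDERATED approach (the remote host 192.168.1.20, the shop schema, the remote_user credentials, and the orders table are all hypothetical; the FEDERATED engine must be enabled on the front server):

    -- On the "front" server. The column definitions must match the remote
    -- table; no data is stored locally, queries are forwarded to the remote.
    CREATE TABLE orders (
        id INT NOT NULL AUTO_INCREMENT,
        amount DECIMAL(10,2),
        PRIMARY KEY (id)
    ) ENGINE=FEDERATED
    CONNECTION='mysql://remote_user:password@192.168.1.20:3306/shop/orders';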

How configure mysql for a heavy load ajax application which is using tomcat connection pooling?

I have a GWT Ajax application on Tomcat 7 which uses connection pooling to interact with its back-end MySQL 5 database (my database only contains InnoDB tables).
The application is heavily based on DB operations: for each little operation it needs to fetch or write data from/to the DB, call DB stored procedures and user-defined functions, and so on.
The program uses each connection for a short time and then returns it to the connection pool. The number of concurrent connections may rise to about 100-200 at peak load.
It works fine right after the MySQL server starts, but after one or two hours it becomes too slow, and the number of failed connections to the database rises to about 10% of all connections.
I've tried the my-large and my-huge default suggested configurations of MySQL server, but I think I need something more than these to achieve a stable situation for the MySQL server.
Any ideas?
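For what it's worth, the my-large/my-huge files are only starting points; the settings that usually matter for an InnoDB-only workload with a 100-200-connection pool look something like the sketch below (all values are illustrative assumptions, not recommendations):

    # my.cnf (illustrative values only - size to your RAM and workload)
    [mysqld]
    max_connections         = 300   # headroom above the 200-connection peak
    innodb_buffer_pool_size = 2G    # the key InnoDB memory setting
    innodb_log_file_size    = 256M
    thread_cache_size       = 64    # reuse threads across pooled connections
    wait_timeout            = 300   # reclaim abandoned connections sooner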

How to Get Transactional MySQL data into a SQL Server database

I'm working on a project that has a MySQL transactional database backing up a web application. The company uses SQL Server for back office and reporting applications. What is the best way to update SQL Server with the data from MySQL? Right now, we are performing a dump of the MySQL data and doing a full restore. This may not be feasible much longer due to the increasing size of the database.
I would prefer a solution that copies only newly inserted and updated rows. I also need the SQL Server database to be static after the updates are applied; basically, it should change once a day. I can update SQL Server from a local copy of MySQL (i.e. not production). Is there a way to apply MySQL replication to a slave server at specified intervals? A perfect solution would be a once-daily update on MySQL that syncs the database as of a point in time.
Can you find a way to snapshot the MySQL DB and then do the copy? It would give you an instant logical copy of the database, frozen in time.
http://aspiringsysadmin.com/blog/2007/08/13/consistent-mysql-backups-using-zfs-snapshots/
The ZFS filesystem can do this - but you haven't mentioned your hardware/OS.
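A rough sketch of the snapshot pattern described in that link (the tank/mysql dataset and backup-host are placeholders; assumes the data directory lives on ZFS and the mysql client's system command is available):

    # Hold a read lock, snapshot the filesystem, then release - all in one
    # client session, so the lock stays held while the snapshot is taken.
    mysql -u root -p <<'EOF'
    FLUSH TABLES WITH READ LOCK;
    system zfs snapshot tank/mysql@daily
    UNLOCK TABLES;
    EOF

    # The snapshot is an instant, frozen copy; stream it to the backup host.
    zfs send tank/mysql@daily | ssh backup-host zfs recv backuppool/mysql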
Also, perhaps you could restrict the data you are pulling - whatever is time-sensitive - so that your pull only gets data older than 1 hour if the pull takes 45 minutes. Or, to make things a little safer, how about just pulling the previous day's data?
I believe SSIS 2008 has a new "maintain table" component that does the common task of pulling updated/inserted records and optionally handling deletes.
Look into DTS, Microsoft's ETL tool. It's rather nice. Do the mapping, schedule it as a cron job, and Bob's your uncle.
Regardless of how you do the import to SqlServer from the MySQL clone, I don't think you need to worry about restricting MySQL replication to specific times.
MySQL replication only requires one thread in the master server and basically just transfers the transaction log to the slave. If you can, put the master and slave MySQL servers on a private LAN segment so that replication traffic does not impact the web traffic.
If you have SQL Server Standard or higher, SQL Server will take care of all of your needs.
Use SSIS to grab the data.
Use SQL Server Agent to schedule your timed tasks.
BTW, I'm doing the exact same thing that you are. SQL Server is awesome - it was easy to set up (I'm a noob to SSIS) and it worked on the first shot.
It sounds like what you need to do is set up a script to start and stop replication on a slave database. If you can do that via a script, then you can establish a workflow in SSIS such as the following (a rough script sketch follows the list):
1. Stop replication to the slave MySQL database.
2. Once replication has stopped, take a snapshot of the slave MySQL database.
3. Once the snapshot has been taken:
a. Restart replication to the slave MySQL database.
b. Import the slave MySQL database replica into SQL Server.
NB: 3a and 3b can run in parallel.
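A hedged sketch of steps 1-3a as a shell script (credentials, the appdb schema name, and paths are placeholders):

    #!/bin/sh
    # Step 1: stop applying replication events so the slave's data freezes.
    mysql -u root -p"$PASSWORD" -e "STOP SLAVE SQL_THREAD;"

    # Step 2: snapshot the frozen slave with a plain dump.
    mysqldump -u root -p"$PASSWORD" appdb > /backups/appdb-$(date +%F).sql

    # Step 3a: resume replication; the slave catches up from its relay log.
    mysql -u root -p"$PASSWORD" -e "START SLAVE SQL_THREAD;"

    # Step 3b runs in parallel from SSIS: import the dump into SQL Server.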
I think your best bet in such a scenario would be to use SSIS to enable and disable MySQL replication to the slave, as well as to take the snapshot of the slave database. Then you can drive the whole thing from the SQL Server Agent scheduling mechanism.
Hope this helps