Symmetric bandwidth usage in MySQL

I have 2 servers.
Server A is where I have scripts that download HTML files from websites and then insert those texts into a MySQL server that is on server B.
Server B, at least for now, is just for writing to MySQL; we don't read (SELECT) from that server.
Data is downloaded using a PHP script, and another PHP script opens a connection to server B and writes to MySQL using the mysqli library.
By the way, the only service (that matters) running on server B is MySQL.
My problem is that server B's bandwidth consumption is almost symmetric: whatever data comes in, roughly the same amount goes out. I assume it is going back to server A for some reason, but I can't understand why, or what is going back.
If you are doing only INSERTs, the received data should go up and the sent data should stay low, but this is not happening.
(See attached image showing the transferred data for MySQL.)
I have used vnStat and NetHogs to try to debug this, but I can't figure it out. For some reason MySQL is transferring data back to server A, which is costing me a lot of bandwidth.
Any ideas of why this could be happening?
Best Regards.

After a long night analyzing my PHP code, I found a loop with a MySQL query inside it. At first I thought it was a light query, and it was, but if you add up every cycle of the loop you get a lot of data being sent to the MySQL server.
I created a better way to do that, and now the bandwidth consumption is at least halved.
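As an illustration of the kind of change involved (the table and column names below are made up, not my actual code): every query sent inside a loop also triggers a response from the server, so collapsing the per-iteration queries into one multi-row INSERT cuts traffic in both directions.

<?php
$db = new mysqli('server-b.example', 'user', 'pass', 'scraper_db');

// Before: one round trip (and one server response) per downloaded page.
// After: accumulate the rows and send a single multi-row INSERT.
$values = [];
foreach ($pages as $p) {
    $values[] = sprintf("('%s', '%s')",
        $db->real_escape_string($p['url']),
        $db->real_escape_string($p['html']));
}
if ($values) {
    $db->query('INSERT INTO pages (url, html) VALUES ' . implode(',', $values));
}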


MySQL update remote server from local server

I am after some advice please. I am no developer and outsource my work requirements to various freelancers. I have a specific requirement but due to my lack of skills I’m not quite sure what to ask for, hence my question here.
I have a system where I have several Raspberry Pi "drones" that collect data. These drones are all connected to the web and at present instantly send the data via a live feed directly to a MySQL server hosted at Amazon. This server is accessible via a static IP address.
Each drone is given a unique ID and the data collected is tagged with that ID so we know where it comes from.
The existing MySQL server collects and processes all this data and we have a website that displays the stats. Nothing really complicated and the current system works very well.
The issue I have is that we occasionally have internet connection issues from the drones, so I want to make the whole system more robust. When the drones have a connection issue we lose data, as the drones do not store anything; this is what I want to resolve.
Just as a heads up… due to the data structure, the drones will not write to a file; they have to feed directly into a MySQL server.
To resolve this issue, my plan is to have a MySQL server run on each RPi with the same table structure etc. as the main server. Each RPi will write to its own local MySQL server, and I then need that server to "update" the main server at Amazon. Please note the data will only ever be sent in this direction; it will never come from Amazon back to the drones. When a drone can communicate with the main server, I would like the drone-based MySQL server to communicate pretty much instantly (or as close as I can get it), but where there is an internet connection issue, I need the drone to store its own data until the connection is restored, at which point it will update the main server.
As I have said, I am no developer, so I wouldn't be undertaking this work myself, but I would like to know what I need to ask for in order to get the right system.
If anyone can help, I would appreciate some pointers. In addition, if this is the type of work you could undertake, please feel free to let me know and maybe we could talk further via PM; after all… someone needs to do it
Many Thanks.
I recommend using a scheduled update to the Amazon database, in whatever programming language you are already using, with something that looks like:
while (gathering data) {
    store data into local MySQL
    for (each record in local MySQL) {
        if (there is an internet connection) {
            store record in remote MySQL
            optional: read the remote record back to check it was stored correctly
            delete record from local MySQL
        } else {
            break;
        }
    }
}
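A minimal PHP sketch of that store-and-forward loop, assuming mysqli and a hypothetical readings table with id, drone_id and payload columns (all names and credentials below are placeholders):

<?php
// Push locally buffered rows to the central server, deleting each local
// row only after the remote INSERT has succeeded.
$local  = new mysqli('127.0.0.1', 'user', 'pass', 'drone_db');
$remote = @new mysqli('amazon-host.example', 'user', 'pass', 'central_db');

if ($remote->connect_errno) {
    exit; // no internet: keep buffering locally and retry on the next run
}

$rows = $local->query('SELECT id, drone_id, payload FROM readings ORDER BY id');
$ins  = $remote->prepare('INSERT INTO readings (drone_id, payload) VALUES (?, ?)');
$del  = $local->prepare('DELETE FROM readings WHERE id = ?');

while ($row = $rows->fetch_assoc()) {
    $ins->bind_param('ss', $row['drone_id'], $row['payload']);
    if (!$ins->execute()) {
        break;           // connection dropped mid-run: retry on the next pass
    }
    $del->bind_param('i', $row['id']);
    $del->execute();     // safe to remove now that the remote write succeeded
}

In practice something like this would run from cron on each RPi every minute or so, which gets you close to the "as instant as I can get it" behaviour without a custom daemon.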

Is there a definitive answer to what causes "Catastrophic Failure" in Delphi?

I have read a few/lots of things on this but they don't seem to help much.
I have an app (it's called "TieUp", but that is irrelevant) that I run manually every day to collate data from several locations.
It is using as sources:
A) Data from a remote SOAP source and loaded into an in-memory TClientDataset via an XMLtransform setup.
B) CSV files downloaded daily and loaded into an in-memory TClientDataset
C) A MySQL database on the same computer as the program (it's a restored backup of the live source)
D) A remote MS-SQL (SQLServer 2008) database
E) A MySQL database on a remote server
Data is only read from sources A, B, C and D
Data source E is updated with the consolidated data.
There are between 800 and 2000 records daily, so the datasets are not vast, although the target (E) has grown to around 150,000 records and is increasing daily.
I can normally run this all happily and everything works as expected, if a little slowly because of all the individual remote lookups to the MS-SQL system, but some days it really screws up and the error is always "Catastrophic Failure!".
The failure does not occur during any particular phase or operation that I can see. The steps are:
1) Get the SOAP(A) data first.
2) Tie in with CSV/In Memory data(B).
3) Lookup References data on Sources C and D to collate
4) Write the consolidated data to source E
After reading the data into the in-memory datasets, everything is in TClientDatasets accessed via DataSetProviders linked to TSQLQueries (they are all on the same servers currently, but I did it that way to keep some flexibility for a future where it might go to a true three-tier setup). All queries are contained within the SQLQuery components, as they are actually quite simple; it's just a matter of tying things together.
I am using completely standard components from Delphi 2009 Enterprise. All updates and database update packs have been applied. Each data source has its own DataModule; these are auto-created at startup.
There is obviously quite a lot of data access going on here, but when it crashes (with "Catastrophic Failure") it gets stuck, completely stuck. Windows can't end the task from the normal "TieUp has stopped working" dialog; I have to go to the process and kill it.
There is so much going on and as this only happens once a week or so I really don't know where to start looking.
The reasons for asking the question are twofold: 1) I am trying to eliminate any manual steps and fully automate this, but I can't rely on it if it bombs every week or so. 2) If it happens in the update phase to E, I have to manually delete the new records for the day and start again, as I do not have (or haven't written yet) a mechanism to restart from an arbitrary point, and I would still have to query the DB manually to establish that point for certain.
My next step is to install Delphi on another computer and always run it under the debugger until I can catch it, if it does not freeze first. But that introduces yet another different network connection (instead of the local host one).
So: "Is there a definite answer?" or what is the most likely offending component/connection? Where is the favoured place to start looking?
Thanks in advance...

Database synchronization Server to Local

I have a local app that uses SQLite. Whenever it has internet access, it requests the whole database from the server and recreates the local one from that. The local and server databases have the same structure; basically, the point of the local one is to guarantee functionality even when no internet connection is available.
This is a very inefficient way of doing this.
My question is, how to ask for only data that is missing?
Should I send the last ID from each local table and have the server send data from that ID onward?
What happens if an existing ID was modified? This would mean that all data should be checked, but sending the whole database for checking and getting back the modifications or additions also seems stupid.
The configuration is Local SQLite, Server MySQL. I could probably change the server to SQLite if it's recommended.
EDIT:
Multiple clients make requests to the same server-side MySQL database; PHP processes the requests and replies.
How would you tackle this?
Thank you.
I'd either timestamp the rows in the database and fetch by date, or use rsync (or librsync or similar) to synchronize the database files.
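A minimal sketch of the timestamp approach on the PHP side, assuming a hypothetical items table with an updated_at column (none of these names come from the question):

<?php
// Delta-sync endpoint: the client sends the timestamp of its last
// successful sync and gets back only the rows changed since then.
// Assumes MySQL maintains updated_at automatically, e.g.:
//   ALTER TABLE items ADD updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
//       ON UPDATE CURRENT_TIMESTAMP;
$db = new mysqli('localhost', 'user', 'pass', 'app_db');

$since = $_GET['since'] ?? '1970-01-01 00:00:00';
$stmt  = $db->prepare('SELECT id, name, updated_at FROM items WHERE updated_at > ?');
$stmt->bind_param('s', $since);
$stmt->execute();

// The client upserts the returned rows by id, which covers both new rows
// and modifications to existing ones; fetching by timestamp rather than by
// last ID is what makes modified rows sync correctly.
echo json_encode($stmt->get_result()->fetch_all(MYSQLI_ASSOC));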

Ways of managing the data in a database

I'm new to databases and web servers and that kind of thing. So I am looking for information so I can begin to figure out a starting point and options open to me.
I need to have a database that can be accessed by an iPhone app. So logically it will be hosted on a webserver somewhere.
To get/insert the data from/into the database, the app would make an HTTP connection to a PHP file on the same server as the DB, which would then insert/return the relevant data. To stop random hackers messing with the DB, the app would have some validation code inside it to send to the PHP file, to check that it's not a hacker trying to mess with the database. Is this all making sense, or will that not be secure enough?
Now the most confusing part to get my head around is:
I need to check every minute whether any data in the database has become too old, and remove it if so. So something needs to be running on the server, constantly checking/managing the database. What would this be? What is commonly used to do this kind of thing? Is there some keyword I can start searching and reading about to see what options there are?
Thanks for your advice,
-Code
One way to do this is to have a purge script run via crontab. The script can run every minute and check for old data and remove it.
MySQL versions 5.1.6 and later have a built-in event scheduler, which can be used to schedule periodic jobs inside the MySQL server itself.
http://dev.mysql.com/doc/refman/5.1/en/events.html
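For example, a sketch of such an event, assuming a hypothetical sessions table with a created_at column and a ten-minute expiry (both assumptions, not from the question):

-- Requires the scheduler to be enabled: SET GLOBAL event_scheduler = ON;
CREATE EVENT purge_old_rows
    ON SCHEDULE EVERY 1 MINUTE
    DO
        DELETE FROM sessions
        WHERE created_at < NOW() - INTERVAL 10 MINUTE;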
Sounds to me like you need a cron job. Cron is the standard task-scheduling application for Unix-type systems.
You would have some sort of script that connects to the database and performs a cleanup query, and you would schedule that script via cron.
http://en.wikipedia.org/wiki/Cron
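For example, a crontab entry running a small PHP purge script every minute might look like this (paths, table and column names are assumptions for illustration):

* * * * * /usr/bin/php /var/www/scripts/purge.php

<?php
// purge.php: delete rows older than ten minutes (the threshold is an assumption)
$db = new mysqli('localhost', 'user', 'pass', 'app_db');
$db->query('DELETE FROM sessions WHERE created_at < NOW() - INTERVAL 10 MINUTE');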

MySQL ODBC timeout from R

I'm using R to read some data from a MySQL database using the RODBC package. The data is then processed and some results are sent back to the database. The problem is that the server closes the connection after about a minute due to inactivity, which is the time needed to process the data locally. It's a shared server, so the host won't bump up the timeout time.
I think there are two possibilities to get around this:
Open a connection before every database transaction and close it immediately after
Send some small 'ping' command to the server every 30 seconds or so to let the server know that I'm still there.
I can implement the first fairly easily, but it seems pretty slow to constantly open and close connections. Does anyone know an efficient command for the second? Or is there a better way altogether?
The first solution is the one I prefer. It's really hard to do the latter with a single-threaded program like R: if R is busy running an analysis, there's no way for it to handle the ping. Unless you are doing hundreds of reads/writes, the method of opening and closing the connection should not introduce an extreme amount of overhead.
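A minimal R sketch of that open-per-transaction pattern with RODBC (the DSN and table name below are placeholders, not from the question):

library(RODBC)

# Open a fresh connection for each transaction and close it right after,
# so the shared server's idle timeout never gets a chance to fire.
with_connection <- function(query) {
  ch <- odbcConnect("my_dsn")   # placeholder DSN
  on.exit(odbcClose(ch))        # closes even if the query throws an error
  sqlQuery(ch, query)
}

results <- with_connection("SELECT * FROM measurements")  # hypothetical table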