Run a MySQL query remotely from a PC without MySQL

I am trying to automate the upload of data to a MySQL database. I am using MySQL Workbench on a Windows PC to remotely access my database on AWS. I have a SQL file that I use to load a CSV file into the database using LOAD DATA LOCAL INFILE. My CSV file is created daily by a scheduled task, and I would like to load it into the database using a batch-type file and a scheduled task.
Is it possible?
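A typical LOAD DATA LOCAL INFILE statement of the kind mentioned above looks roughly like this (the path, table name and CSV options here are placeholders, not the actual values from the question):

    LOAD DATA LOCAL INFILE 'C:/data/daily.csv'
    INTO TABLE daily_data
    FIELDS TERMINATED BY ',' ENCLOSED BY '"'
    LINES TERMINATED BY '\r\n'
    IGNORE 1 LINES;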

On Windows you can use PHP from WampServer; it is a very straightforward installation. You don't need a MySQL server on your local PC to update the remote AWS database with the data, only a scripting language.
I would suggest installing MySQL on your local PC first, so you can check against that local MySQL instance whether the update does what you expect it to do. Once it meets your expectations, you just change the MySQL connection parameters to those of the AWS instance and you're set.
In MySQL Workbench you can add the local MySQL server as an additional connection, to check the local database and all the changes applied to it.
Perhaps this example can help you take the first steps in writing a PHP script that updates the database.
PHP scripts can be executed from the command line as well, so once you have written the script that updates the database you should be able to run it from the Windows CMD console this way:
php -f path-to-your-script.php
but in that case you need to write the PHP script so that it already knows where the CSV file is and reads its contents itself, perhaps with file_get_contents(). You can also give fgetcsv() a try, a function dedicated to CSV files, which is even more suitable because it reads your CSV file line by line; used in a loop, it lets you process even very big CSV files without running out of memory.
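A minimal sketch of such a script, assuming placeholder host, credentials, table and column names (none of these come from the question):

    <?php
    // Connect to the remote MySQL server (host, credentials and database
    // name below are placeholders, not real values).
    $pdo = new PDO(
        'mysql:host=your-aws-host.example.com;dbname=mydb;charset=utf8mb4',
        'dbuser',
        'dbpass',
        [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]
    );
    $stmt = $pdo->prepare('INSERT INTO daily_data (col1, col2, col3) VALUES (?, ?, ?)');

    // fgetcsv() returns one row per call, so even a very large CSV file
    // is never held in memory all at once.
    $handle = fopen('C:\\data\\daily.csv', 'r');
    while (($row = fgetcsv($handle)) !== false) {
        $stmt->execute($row);
    }
    fclose($handle);

Saved as, say, C:\scripts\load_csv.php, a Windows scheduled task can then run it daily with a command such as schtasks /create /tn "DailyCsvLoad" /tr "php -f C:\scripts\load_csv.php" /sc daily /st 06:00 (task name, path and time are again placeholders).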

Related

How can I transfer data from an AWS RDS PostgreSQL DB instance to a MySQL DB on a different server?

Until now I have been transferring the data manually, by exporting the data and then importing it into the MySQL DB, but now I have to automate the whole process.
I want to generate CSV files and FTP them to the MySQL server.
pgAdmin lets me download the file through Windows, but when I export via the COPY command I get an error saying that I need to be a superuser and that I should use \copy instead.
And I cannot access the operating system of the Postgres server.
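For reference, \copy runs client-side in psql, so it needs neither superuser rights nor file access on the RDS host; the table name and output path below are placeholders:

    \copy (SELECT * FROM my_table) TO 'C:/exports/my_table.csv' WITH (FORMAT csv, HEADER)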

How do I populate timezones for Google Cloud SQL (from Windows)?

We are using a Google Cloud SQL instance, and we need to populate the time zones without using a Linux computer.
The answer to this SO question outlines how to accomplish this using Linux: Change Google Cloud SQL CURRENT_TIMESTAMP time zone?. And this post demonstrates how to accomplish this with MySQL on Windows, if you have access to the Windows server files: http://www.geeksengine.com/article/populate-time-zone-data-for-mysql.html.
However, neither of these work for our situation; we need to populate the time zones remotely, using a Windows computer, without access to the remote file system. I think all we really need is the output of mysql_tzinfo_to_sql, except that we don't have a Linux computer to run this command. And MySQL's pre-populated download does us no good because we do not have access to the remote file system.
So can we populate the time zones for a Google Cloud SQL instance remotely using a Windows computer?
Two options.
Option 1: You can use mysql_tzinfo_to_sql /usr/share/zoneinfo to get the file. The file just contains a bunch of SQL statements that populate the time zone tables. Then you can pipe it through your Windows mysql client to the remote instance.
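Assuming some machine where mysql_tzinfo_to_sql is available, the pipeline looks roughly like this (the instance address is a placeholder):

    mysql_tzinfo_to_sql /usr/share/zoneinfo | mysql -h <instance-address> -u root -p mysql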
Option 2: You can launch a local MySQL server on your local Windows machine. Download the prepackaged MySQL time zone tables and populate your local MySQL server. After that, use mysqldump to generate a file containing the SQL statements for your time zone tables. Then use this file to populate your remote MySQL server.
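A sketch of that sequence, with a placeholder instance address; the time_zone* tables live in the mysql system schema:

    mysqldump -u root -p mysql time_zone time_zone_leap_second time_zone_name time_zone_transition time_zone_transition_type > tz_dump.sql
    mysql -h <instance-address> -u root -p mysql < tz_dump.sql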
For both options, you can optionally put the dumped SQL file into a Google Cloud Storage bucket and do an import from that file (https://cloud.google.com/sql/docs/import-export#importdatabase). It will import much faster than importing from your local machine.

How to convert a .bak file into a .sql file (SQL Server .bak to a MySQL-supported format) and then execute that file to restore automatically

I want to convert a SQL Server .bak file into a .sql/.csv/.xml file for a MySQL database.
Then I need a script that will automatically pick the file up from a server drive (suppose the file is on the D drive of my computer, and the script picks it up from there automatically) and execute the .sql/.csv file to restore it into my online MySQL database.
Actually I have two systems: one is written in VB, runs on localhost, and generates a backup file daily; now I am going to create a new web application for that same VB system.
In my web application I want to write a script that picks up the backup file, first converts it into a MySQL-supported format (.sql/.csv/.xml) if necessary, and then executes that file to update the online MySQL database.
Any kind of help will be highly appreciated.
Many thanks in advance.
This is possible using the MySQL Migration Toolkit.
First install the software, then follow the steps below.

mysqldump on remote server

If there are two machines, a client and a server: from the client, how do I run a mysqldump against the server such that the dump is available on the client and not stored on the server?
Thanks.
Here is a PHP script that generates a mysqldump. It outputs directly to the client, and does not create any files on the server.
https://github.com/tylerl/web-scripts/tree/master/mysqldump
Do this in two steps:
dump the data on the server
transfer it to the client (possibly compressing it first)
If you need to do it often, write a script on the server that dumps, compresses and copies the data to the client (don't forget to archive/delete old backups on the server, as needed).
You could write a simple script that runs from your crontab to create such a dump and move it to some particular area of your file system, like an HTTP-accessible folder or an FTP folder.
Then, if you need this part to be automatic too, you could write a script to run on your clients that fetches those dumps.
Either you do the backup server-side (if you have access to the server), using mysqldump to dump it, gzip or bzip2 to compress the file, and ftp/sftp/scp to transfer the file to the client afterwards. You can script this and then add it to crontab to have it run automatically at whatever interval you choose. Check out logrotate to avoid storing too many backups.
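A minimal server-side sketch of that approach; the database name, paths and hostnames are placeholders:

    #!/bin/sh
    # Dump, compress, and copy the backup to the client.
    DUMP=/backups/mydb-$(date +%F).sql.gz
    mysqldump mydb | gzip > "$DUMP"
    scp "$DUMP" user@client.example.com:/backups/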
Or you use a tool on the client to fetch the data. The default (free) MySQL Workbench can back up an entire database, or you can select which tables to back up (and, interestingly, afterwards which tables to restore; nice if you only need to reset one table).
See the answer to a similar question elsewhere:
https://stackoverflow.com/a/2990732/176623
In short, you can run mysqldump on the client, connecting to the server and writing the dump directly on the client.
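A minimal sketch of that, with placeholder host, user and database names:

    mysqldump -h server.example.com -u dbuser -p mydb > local_dump.sql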

How do I migrate a populated MySQL database from dev to a shared host?

The title pretty much says it all, but to elaborate: if I build a MySQL database on my local dev machine, populate it with data, and subsequently want to migrate the database to a shared host (in this case, SiteGround), how do I do so in a way that keeps structure and data intact?
In this case, I don't have file access to the database server.
Use mysqldump and dump your database (mysqldump [databasename] for a simple configuration) on your development machine to a dump file (a file containing the SQL statements needed to recreate both schema and data). Then import the dump on your shared host using the provided utilities (normally you get phpMyAdmin preinstalled by your host, which can import dumps).
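On the development machine that could look like this (user and database names are placeholders):

    mysqldump -u root -p mydatabase > mydatabase_dump.sql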
In addition to the response made by theomega (namely, do a dump of your development database and then insert the dump into your production database), be aware that you may need to enable large SQL insert statements if you have a lot of data. I would recommend you first FTP the file to the host and then do the insert from a file. Each host has its own way of doing it, but if you can connect to the remote server using SSH, you can likely run the insert from the command line.
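For example, once the dump file has been uploaded, an SSH session on the host could run something like this (user and database names are placeholders):

    mysql -u dbuser -p mydatabase < mydatabase_dump.sql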
Also, in addition to theomega's answer: most MySQL tools have dump/execute functions for SQL files.
If you're using Navicat, for example, you're just a right-click away:
Right-click on the database you want to export and choose "dump sql file". This will allow you to save the .sql file on your local drive in the folder of your choosing.
Then right-click on the destination database and choose "execute batch file". Browse to the newly created .sql file, and it will execute all the SQL commands from that file in the destination database; in other words, it creates a copy of the exported DB.