How do I populate timezones for Google Cloud SQL (from Windows)?

We are using a Google Cloud SQL instance, and we need to populate the time zones without using a Linux computer.
The answer to this SO question outlines how to accomplish this using Linux: Change Google Cloud SQL CURRENT_TIMESTAMP time zone?. And this post demonstrates how to accomplish this with MySQL on Windows, if you have access to the Windows server files: http://www.geeksengine.com/article/populate-time-zone-data-for-mysql.html.
However, neither of these works for our situation; we need to populate the time zones remotely, from a Windows computer, without access to the remote file system. I think all we really need is the output of mysql_tzinfo_to_sql, except that we don't have a Linux computer to run this command. And MySQL's pre-populated download does us no good because we do not have access to the remote file system.
So can we populate the time zones for a Google Cloud SQL instance remotely using a Windows computer?

Two options.
Option 1: You can run mysql_tzinfo_to_sql /usr/share/zoneinfo to get the file. The file just contains a bunch of SQL statements that populate the time zone tables. You can then pipe it to your remote MySQL server from your Windows machine.
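For example, on any Linux machine you can borrow (or a throwaway Docker container), a rough sketch of generating the file and loading it would be the following; the instance IP and credentials are placeholders:
mysql_tzinfo_to_sql /usr/share/zoneinfo > timezones.sql
mysql --host=<cloud-sql-ip> --user=root --password mysql < timezones.sql
Note that the time zone tables live in the mysql system database, which is why the target database here is mysql.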
Option 2: You can launch a local MySQL server on your Windows machine. Download the prepackaged MySQL time zone tables and populate your local server with them. After that, use mysqldump to generate a file containing the SQL statements for your time zone tables. Then use this file to populate your remote MySQL server.
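Roughly, assuming the local server is already populated, the dump-and-load could look like this (host and credentials are again placeholders):
mysqldump --user=root --password mysql time_zone time_zone_name time_zone_transition time_zone_transition_type time_zone_leap_second > tz_dump.sql
mysql --host=<cloud-sql-ip> --user=root --password mysql < tz_dump.sql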
For both options, you can optionally put the dumped SQL file into a Google Cloud Storage bucket and do an import from that file (https://cloud.google.com/sql/docs/import-export#importdatabase). It will import much faster than loading from your local machine.
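As a sketch (the bucket and instance names are made up, and the exact gcloud subcommand may vary with your SDK version):
gsutil cp tz_dump.sql gs://your-bucket/tz_dump.sql
gcloud sql import sql your-instance gs://your-bucket/tz_dump.sql --database=mysql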

Related

Run a mysql query remotely from a PC without mysql

I am trying to automate the upload of data to a MySQL database. I am using MySQL Workbench on a Windows PC to remotely access my database on AWS. I have a SQL file that I use to load a CSV file into the DB using LOAD DATA LOCAL INFILE. My CSV file is created daily using a scheduled task, and I would like to load it into the DB using a batch-type file and a scheduled task.
Is it possible?
On Windows you can use PHP from WampServer; it is a very straightforward installation. You don't need a MySQL server on your local PC to update the remote AWS database with the data, only a scripting language.
I would suggest installing MySQL on your local PC first, to check on that local MySQL that the update does what you expect it to do. Once it meets your expectations, you just change the MySQL connection parameters to those of AWS and you're set.
In MySQL Workbench you can add an additional MySQL server connection for the local server, to check the local database and all the changes applied to it.
Perhaps this example can help you take the first steps in writing a PHP script to update the database.
PHP scripts can be executed from the command line as well, so once you have written your script that updates the database, you should be able to run it from the Windows CMD console this way:
php -f path-to-your-script.php
If so, you need to edit the PHP script so that it already knows where the CSV file is and reads its contents, perhaps with the function file_get_contents(). You could also try fgetcsv(), which is dedicated to CSV files and even more suitable because it reads your CSV file line by line; if you use a loop, you can process even very big CSV files without running out of memory.
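A minimal sketch of such a script; the host, credentials, table, and columns below are illustrative placeholders, not from the original question:
<?php
// Connect to the remote AWS MySQL server (placeholder credentials).
$db = new mysqli('your-aws-host', 'user', 'password', 'your_database');

// Read the CSV line by line so large files do not exhaust memory.
$handle = fopen('C:/data/daily.csv', 'r');
$stmt = $db->prepare('INSERT INTO upload_table (col_a, col_b) VALUES (?, ?)');
while (($row = fgetcsv($handle)) !== false) {
    $stmt->bind_param('ss', $row[0], $row[1]);
    $stmt->execute();
}
fclose($handle);
$db->close();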

Import data from CSV file to Amazon Web Services RDS MySQL database

I have created a relational database (MySQL) hosted on Amazon Web Services. What I would like to do next is import the data in my local CSV files into this database. I would really appreciate it if someone provided me an outline on how to go about it. Thanks!
This is easiest and most hands-off using the MySQL command line. For large loads, consider spinning up a new EC2 instance, installing the MySQL CL tools, and transferring your file to that machine. Then, after connecting to your database via the CL, you'd do something like:
mysql> LOAD DATA LOCAL INFILE 'C:/upload.csv' INTO TABLE myTable;
There are also options to match your file's details and ignore the header (plenty more in the docs):
mysql> LOAD DATA LOCAL INFILE 'C:/upload.csv' INTO TABLE myTable
    -> FIELDS TERMINATED BY ',' ENCLOSED BY '"' IGNORE 1 LINES;
If you're hesitant to use the CL, download MySQL Workbench. It connects to AWS RDS without a problem.
Closing thoughts:
MySQL LOAD DATA Docs
AWS' Aurora RDS is MySQL-compatible so command works there too
"LOCAL" flag actually transfers the file from your client machine (where you're running the command) to the DB server. Without LOCAL, the file must be on the DB server (not possible to transfer it there in advance with RDS)
Works great on huge files too! Just sent a 8.2GB file via this method (260 million rows). Took just over 10 hours from a t2-medium EC2 to db.t2.small Aurora
Not a solution if you need to watch out for unique keys or read the CSV row-by-row and change the data before inserting/updating
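For instance, a LOCAL load run entirely from the client machine might look like this (the endpoint, credentials, and file path are placeholders; the client may also need local_infile enabled, hence the flag):
mysql --local-infile=1 --host=<rds-endpoint> --user=admin --password mydatabase
mysql> LOAD DATA LOCAL INFILE 'C:/upload.csv' INTO TABLE myTable
    -> FIELDS TERMINATED BY ',' ENCLOSED BY '"' IGNORE 1 LINES;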
I did some digging and found this official AWS documentation on how to import data from any source into MySQL hosted on RDS.
It is a very detailed step-by-step guide and includes an explanation of how to import CSV files.
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/MySQL.Procedural.Importing.AnySource.html
Basically, each table must have its own file. Data for multiple tables cannot be combined in the same file. Give each file the same name as the table it corresponds to. The file extension can be anything you like. For example, if the table name is "sales", the file name could be "sales.csv" or "sales.txt", but not "sales_01.csv".
Whenever possible, order the data by the primary key of the table being loaded. This drastically improves load times and minimizes disk storage requirements.
There is another option for importing data into a MySQL database: you can use the external tool Alooma, which can do the data import for you in real time.
It depends on how large your file is, but if it is under 1GB I found that DataGrip imports it without any issues: https://www.jetbrains.com/datagrip/
You get a nice mapping tool and a graphical IDE to play around in. DataGrip is available as a free 30-day trial.
I am experiencing RDS connection dropouts myself with bigger files (> 2GB). I am not sure whether that is on the DataGrip or the AWS side.
I think your best bet would be to develop a script in your language of choice to connect to the database and import it.
If your database is internet accessible, then you can run that script locally. If it is in a private subnet, then you can run that script either on an EC2 instance with access to the private subnet or on Lambda connected to your VPC. You should really only use Lambda if you expect the runtime to be less than 5 minutes or so.
Edit: Note that Lambda only supports a handful of languages.
AWS Lambda supports code written in Node.js (JavaScript), Python, Java
(Java 8 compatible), and C# (.NET Core).

How to convert '.bak' file into '.sql' file in order to import the database in MySQL phpMyAdmin?

I'm a PHP developer by profession. I'm using Ubuntu Linux on my machine.
I don't have any idea about .Net framework and MS SQL Server Express database.
I've received a file titled project_db.bak and I have to convert it into project_db.sql in order to import the same database into MySQL.
I searched the Internet for a solution. I found a couple of answers, but they ask me to use MS SQL Server tools, which I cannot. I have to achieve this conversion some other way.
Can someone please help me in this regard?
MS SQL Server typically generates binary backups, so what you have, I guess, is such a backup. To restore it to a queryable state you will need the MS tools, or a RESTORE statement executed somehow against the engine (which you will also need). Once it is restored (that is, the reverse of an MS backup), you can dump it (in MySQL terms) with a tool or with a script.
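For reference, the restore step on the SQL Server side is plain T-SQL; the paths and logical file names below are illustrative only:
-- Inspect the logical file names contained in the backup first.
RESTORE FILELISTONLY FROM DISK = 'C:\backups\project_db.bak';
-- Then restore; the MOVE targets depend on the names listed above.
RESTORE DATABASE project_db
FROM DISK = 'C:\backups\project_db.bak'
WITH MOVE 'project_db' TO 'C:\data\project_db.mdf',
     MOVE 'project_db_log' TO 'C:\data\project_db_log.ldf';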
Create a virtual machine running Windows 7 or later.
In the VM, make sure you have a second network card set to a private network with your host, so that you can connect to your host's MySQL; you will need a user in your MySQL server set up to allow connections from that network.
In this VM, install SQL Server, SQL Server Management Studio, and Navicat. With those you can restore the .bak file. Once you have it restored, you need another external tool that can export it in a different format; for this I use Navicat's export. You can then connect to your MySQL server and import the exported file.

Inspect a MySQL database

I have a MySQL database on my Windows PC. I'm pretty sure I've found the relevant files, namely the following:
formula.frm
formula.ibd
db.opt
What is the natural way to inspect, edit, and generally play with the contents of these files?
You do not view the binary database files directly. MySQL is a service that you connect to with a client and then run SQL commands against. You will need a client (such as MySQL Workbench) to work with the server.
MySQL Workbench is the GUI tool that allows you to connect to a MySQL database and perform actions on it including querying and creating/modifying the various parts of the database.
MySQL Workbench intro: http://dev.mysql.com/doc/workbench/en/wb-intro.html
Getting started with MySQL: http://dev.mysql.com/doc/refman/5.6/en/tutorial.html
There is also the command-line utility that is included when you install the server. It will be in the BIN folder of the MySQL install directory.
Command-line client info: http://dev.mysql.com/doc/refman/5.1/en/mysql.html
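For example, a quick session with the command-line client might look like this (assuming a local server; the database name is a placeholder, and the table name is taken from the .frm file above):
mysql --host=localhost --user=root --password
mysql> SHOW DATABASES;
mysql> USE your_database;
mysql> SELECT * FROM formula LIMIT 10;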
Use a tool like MySQL Workbench to connect to the DB. You do nothing directly to the files; you connect to the service and use the DB.
William, it sounds like your question is "how do I take MySQL binary files and turn them into something usable on my machine?". If that's the case, you'll want to first install MySQL on your machine if you haven't already. Then you might have a look here for how to recreate a database from a .ibd file.

How do I migrate a populated MySQL database from dev to a shared host?

The title pretty much says it all, but to elaborate: If I build a MySQL database on my local dev machine, populate it with data, and subsequently want to migrate the database to a shared host (in this case, Siteground), how do I do so in a way that keeps structure and data intact?
In this case, I don't have file access to the database server.
Use mysqldump (doc) and dump your database (mysqldump [databasename] for a simple configuration) on your development machine to a dump file (a file containing the SQL statements needed to recover both schema and data). Then import the dump on your shared host using the provided utilities (normally you get phpMyAdmin preinstalled by your host, which can import dumps).
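A minimal sketch, with placeholder names throughout (and assuming the host allows direct remote connections; otherwise use phpMyAdmin's import as described above):
mysqldump --user=root --password mydatabase > mydatabase.sql
mysql --host=<shared-host> --user=<user> --password mydatabase < mydatabase.sql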
In addition to the response made by theomega (namely, do a dump of your development database and then insert the dump into your production database), be aware that you may need to enable large SQL insert statements if you have a lot of data. I would recommend you first FTP the file to the host and then do the insert from a file. Each host has its own way of doing it, but if you can connect to the remote server using SSH, you can likely run the insert from the command line.
Also, in addition to theomega's answer: most tools for MySQL have dump/execute functions for SQL files.
If you're using Navicat, for example, you're just a right-click away:
Right-click the database you want to export and choose "Dump SQL File". This will allow you to save the .sql file on your local drive in the folder of your choosing.
Then right-click the destination database and choose "Execute Batch File". Browse to the newly created .sql file and it will execute all the SQL commands from that file in the destination database, in effect creating a copy of the exported DB.