I am a beginner GCP administrator. I have several applications running on one instance. Each application has its own database. I set up automatic instance backup via the GCP GUI.
I would like to prepare for a possible failure of one of the applications, i.e. of one database. I would like to prepare a procedure for restoring such a database, but the GCP GUI has no option to restore a single database; I can only restore the entire instance, which I cannot do because the other applications on the instance must keep running.
I also read in the documentation that a backup cannot be exported.
Is there any way to restore only one database from the entire instance backup?
Will I have to write a MySQL script that backs up each database separately and saves it to Cloud Storage?
As Daniel mentioned, you can use gcloud sql export/import to do this. You'll also need a Cloud Storage bucket.
First export a database to a file
gcloud sql export sql [instance-name] [gs://path-to-export-file.gz] --database=[database-name]
Create an empty database
gcloud sql databases create [new-database-name] --instance=[instance-name]
Use the export file to populate your fresh, empty database.
gcloud sql import sql [instance-name] [gs://path-to-export-file.gz] --database=[new-database-name]
I'm also a beginner here, but as an alternative, I think you could do the following:
Create a new instance with the same configuration
Restore the original backup into the new instance (this is possible)
Create a dump of the one database that you are interested in
Finally, import that dump into the production instance
In this way, you avoid messing around with data exports, limit the dump operation to the unlikely case of a restore, and save money on database instances.
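For illustration, the gcloud commands for this flow might look roughly like the following (instance names, tier and backup ID are placeholders; treat it as a sketch to verify against the docs):
gcloud sql instances create [recovery-instance] --tier=[same-tier-as-production] --region=[same-region]
gcloud sql backups list --instance=[production-instance]
gcloud sql backups restore [backup-id] --restore-instance=[recovery-instance] --backup-instance=[production-instance]
gcloud sql export sql [recovery-instance] gs://[bucket]/[database-name].gz --database=[database-name]
gcloud sql import sql [production-instance] gs://[bucket]/[database-name].gz --database=[database-name]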
Curious what people think about this approach?
As of now there is no way to restore only one database from an entire instance backup. As you can check in the documentation, the rest of the applications will also experience downtime during a restore (since the target instance will be unavailable for connections and existing connections will be lost).
Since there is no built-in method to restore only one database from the entire instance backup, you are correct: you will need to write a MySQL script that backs up each database separately, using import and export operations (here is the relevant documentation regarding import and export operations in the Cloud SQL MySQL context).
From an implementation point of view, though, I would recommend using a separate Cloud SQL instance for each application; then you could restore a database when one particular application fails without causing downtime or issues for the rest of the applications.
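Coming back to the per-database script idea: a nightly export loop might look like this (database names and the bucket are placeholders):
for db in app1_db app2_db app3_db; do
  gcloud sql export sql [instance-name] gs://[bucket]/$db-$(date +%F).gz --database=$db
done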
I see that the topic has been raised again. Below is a description of how I solved the problem of backing up individual databases from one instance and uploading them to Cloud Storage, without using the built-in instance backup mechanism in GCP.
To solve the problem, I used Google Cloud Functions written in Node.js 8.
Here is the step-by-step solution:
Create a Cloud Storage Bucket.
Create a Cloud Function using Node.js 8.
Edit the code below to match your instance and database parameters:
const {google} = require("googleapis");
const {auth} = require("google-auth-library");
var sqladmin = google.sqladmin("v1beta4");
exports.exportDatabase = (_req, res) => {
async function doBackup() {
const authRes = await auth.getApplicationDefault();
let authClient = authRes.credential;
var request = {
// Project ID
project: "",
// Cloud SQL instance ID
instance: "",
resource: {
// Contains details about the export operation.
exportContext: {
// This is always sql#exportContext.
kind: "sql#exportContext",
// The file type for the specified uri (e.g. SQL or CSV)
fileType: "SQL",
/**
* The path to the file in GCS where the export will be stored.
* The URI is in the form gs://bucketName/fileName.
* If the file already exists, the operation fails.
* If fileType is SQL and the filename ends with .gz, the contents are compressed.
*/
uri:``,
/**
* Databases from which the export is made.
* If fileType is SQL and no database is specified, all databases are exported.
* If fileType is CSV, you can optionally specify at most one database to export.
* If csvExportOptions.selectQuery also specifies the database, this field will be ignored.
*/
databases: [""]
}
},
// Auth client
auth: authClient
};
// Kick off export with requested arguments.
sqladmin.instances.export(request, function(err, result) {
if (err) {
console.log(err);
} else {
console.log(result);
}
res.status(200).send("Command completed");
});
}
doBackup();
};
Save and deploy this Cloud Function
Copy the trigger URL from the Cloud Function's configuration page.
In order for the function to run automatically at a specified frequency, use Cloud Scheduler with the following settings: Description: whatever you like; Frequency: a unix-cron expression (e.g. 0 2 * * * to run at 2 AM every day); Time zone: choose yours; Target: HTTP; URL: paste the trigger URL you copied earlier; HTTP method: POST.
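If you prefer the command line, a rough gcloud equivalent would be (the job name, schedule and time zone here are just examples):
gcloud scheduler jobs create http export-databases \
  --schedule="0 2 * * *" \
  --uri=[TRIGGER-URL] \
  --http-method=POST \
  --time-zone="Europe/Warsaw"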
That's all, it should work fine.
TL;DR: Vertical or Horizontal scaling for this system design?
I have NGINX running as a load balancer for my application. It distributes across 4 EC2 (t2.micro's cuz I'm cheap) to route traffic and those are all currently hitting one server for my MySQL database (also a t2.micro, totalling 6 separate EC2 instances for the whole system).
I'm thinking about horizontally scaling my database via Source/Replica distribution, and my thought is that I should route all read queries/GET requests (the highest traffic volume I'll get) to the Replicas and all write queries/POST requests to the Source DB.
I know that I'll have to programmatically choose which DB my servers point to based on request method, but I'm unsure of how best to approach that or if I'm better off vertically scaling my DB at that point and investing in a larger EC2 instance.
Currently I'm connecting to the Source DB using an express server and it's handling everything. I haven't implemented the Source/Replica configuration just yet because I want to get my server-side planned out first.
Here's the current static connection setup:
const mysql = require('mysql2');
const Promise = require('bluebird');
const connection = mysql.createConnection({
host: '****',
port: 3306,
user: '****',
password: '*****',
database: 'qandapi',
});
const db = Promise.promisifyAll(connection, { multiArgs: true });
db.connectAsync().then(() =>
console.log(`Connected to QandApi as ID ${db.threadId}`)
);
module.exports = db;
What I want is to either:
set up an express middleware function that looks at the request method and connects to the appropriate database by creating 2 configuration templates to put into the createConnection function (I'm unsure of how I would make sure it doesn't try to reconnect if a connection already exists, though)
if possible just open two connections simultaneously and route which database takes which method (I'm hopeful this option will work so that I can make things simpler)
Is this feasible? Am I going to see worse performance doing this than if I just vertically scaled my EC2 to something with more vCPUs?
Please let me know if any additional info is needed.
Simultaneous MySQL Database Connection
I would be hesitant to use any client input to connect to a server, but I understand how this could be something you would need to do in some scenarios. The simplest and quickest way around this issue would be to create a second database connection file. To make this dynamic, you can require the module conditionally in your code, so it is only loaded and promised at certain points, after certain conditions are met. This process could be risky, and requiring modules in the middle of your code isn't ideal, but it can get the job done. Ex:
const dbConnection = require("../utils/dbConnection");
//conditional {
const controlledDBConnection = require("../utils/controlledDBConnection");
var [row] = await controlledDBConnection.execute("SELECT * FROM `foo`;")
}
Using more files could potentially affect space constraints and could slow things down slightly while waiting for a new promise, but the overall effect will be minimal. controlledDBConnection.js would just be close to a duplicate of dbConnection.js, with slightly different parameters depending on your needs.
Another path you can take, if you want to avoid using multiple files, is to export a module with a dynamically set variable from your controller file and then import it into a standard connection file. This would allow you to change the connection without writing a duplicate, but you will need diligent error checks and a default.
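For the read/write split in the question specifically, a minimal sketch using mysql2 connection pools might look like this (host names are placeholders, and the middleware approach is just one option, not a drop-in solution):
const mysql = require('mysql2/promise');
// One pool per server, created once at startup. Each pool manages its own
// connections, which sidesteps the "reconnect if a connection already exists" concern.
const readPool = mysql.createPool({ host: 'replica-host', user: '****', password: '****', database: 'qandapi' });
const writePool = mysql.createPool({ host: 'source-host', user: '****', password: '****', database: 'qandapi' });
// Express middleware: GET requests read from the replica, everything else writes to the source.
module.exports = function pickDb(req, res, next) {
  req.db = req.method === 'GET' ? readPool : writePool;
  next();
};
// In a route handler: const [rows] = await req.db.execute('SELECT * FROM questions');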
Info on modules in JS : https://javascript.info/import-export
Some other points
Use environment variables for your database information (host, etc.), since this will allow you to easily change the information for your database all in one place, while also allowing you to include your .env file in .gitignore if you are using GitHub (see the sketch after these links).
Here is another great stack overflow question/answer that might help with setting up a dynamic connection file : How to create dynamically database connection in Node.js?
How to set up .env files : https://nodejs.dev/learn/how-to-read-environment-variables-from-nodejs
How to set up .gitignore : https://stackabuse.com/git-ignore-files-with-gitignore/
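As a quick sketch of the environment-variable point above (the variable names are examples):
const mysql = require('mysql2');
require('dotenv').config(); // loads the .env file into process.env
const connection = mysql.createConnection({
  host: process.env.DB_HOST,       // e.g. DB_HOST=localhost in your .env
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  database: process.env.DB_NAME,
});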
As part of a communications effort to a large user base, I need to send upwards of 75,000 emails per day. The emails of the users I'm contacting are stored in a CSV file. I've been using Postman Runner to send these requests via SendGrid (Email API), but with such a large volume, my computer either slows way down or Postman completely crashes before the batch completes. Even if it doesn't crash, it takes upwards of 3 hours to send this many POST requests via Runner.
I'd like to upload the CSV containing the emails into a Cloud Storage bucket and then access the file using Cloud Functions to send a POST request for each email. This way, all the processing can be handled by GCP and not by my personal machine. However, I can't seem to get the Cloud Function to read the CSV data line-by-line. I've tried using createReadStream() from the Cloud Storage NodeJS client library along with csv-parser, but can't get this solution to work. Below is what I tried:
const sendGridMail = require('@sendgrid/mail');
const { Storage } = require('@google-cloud/storage');
const fs = require('fs');
const csv = require('csv-parser');
exports.sendMailFromCSV = (file, context) => {
console.log(` Event: ${context.eventId}`);
console.log(` Event Type: ${context.eventType}`);
console.log(` Bucket: ${file.bucket}`);
console.log(` File: ${file.name}`);
console.log(` Metageneration: ${file.metageneration}`);
console.log(` Created: ${file.timeCreated}`);
console.log(` Updated: ${file.updated}`);
const storage = new Storage();
const bucket = storage.bucket(file.bucket);
const remoteFile = bucket.file(file.name);
console.log(remoteFile);
let emails = [];
fs.createReadStream(remoteFile)
.pipe(csv())
.on('data', function (row) {
console.log(`Email read: ${row.email}`);
emails.push(row.email);
//send email using the SendGrid helper library
const msg = {
to: [{
"email": row.email;
}],
from: "fakeemail#gmail.com",
template_id: "fakeTemplate",
};
sendGridMail.send(msg).then(() =>
context.status(200).send(file.body))
.catch(function (err) {
console.log(err);
context.status(400).send(file.body);
});
})
.on('end', function () {
console.table(emails);
});
};
The Cloud Function is currently triggered by an upload to the Cloud Storage bucket.
Is there a way to build a solution to this problem without loading the file into memory? Is Cloud Functions the right path to be moving down, or would it be better to use App Engine or some other tool? I'm willing to try any GCP solution that moves this process to the cloud.
A Cloud Function's memory can be used as a temporary directory, /tmp. Thus, you can download the csv file from the Cloud Storage bucket into that directory as a local file, and then process it as if it were on a local drive.
At the same time, you should keep in mind two main restrictions:
Memory - up to 2Gb for everything
Timeout - no more than 540 seconds per invocation.
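For example, the download step might look like this (a minimal, untested sketch reusing the question's event payload):
const path = require('path');
const os = require('os');
const fs = require('fs');
const csv = require('csv-parser');
const { Storage } = require('@google-cloud/storage');
exports.sendMailFromCSV = async (file, context) => {
  const storage = new Storage();
  const localPath = path.join(os.tmpdir(), path.basename(file.name));
  // Download the uploaded object into /tmp (which counts against the function's memory)
  await storage.bucket(file.bucket).file(file.name).download({ destination: localPath });
  // The file is now truly local, so fs.createReadStream works as expected
  fs.createReadStream(localPath)
    .pipe(csv())
    .on('data', row => console.log(`Email read: ${row.email}`))
    .on('end', () => console.log('Done'));
};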
I personally would create a solution based on a combination of a few GCP resources.
The first cloud function is triggered by a 'finalize' event - when the csv file is saved in the bucket. This cloud function reads the file and, for every record, composes a Pub/Sub message with relevant details (enough to send an email). That message is posted into a Pub/Sub topic.
The Pub/Sub topic is used to transfer all messages from the first cloud function to trigger the second cloud function.
The second cloud function is triggered by a Pub/Sub message, which contains all necessary details to process and send an email. As there may be 75K records in the source csv file (for example), you should expect 75K invocations of the second cloud function.
That may be enough at a high level. The Pub/Sub paradigm guarantees at-least-once delivery (which may mean more than once), so if you need no more than one email per address, some additional resources may be required to achieve idempotent behaviour.
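A rough sketch of that fan-out (the topic name and message shape are assumptions):
const { PubSub } = require('@google-cloud/pubsub');
const pubsub = new PubSub();
const topic = pubsub.topic('email-jobs'); // hypothetical topic name
// First function: inside the csv-parser 'data' handler, publish one small
// message per row instead of sending the email inline.
function enqueueEmail(row) {
  return topic.publish(Buffer.from(JSON.stringify({ email: row.email })));
}
// Second function: deployed with a Pub/Sub trigger on 'email-jobs', invoked once per message.
exports.sendOneEmail = async (message, context) => {
  const { email } = JSON.parse(Buffer.from(message.data, 'base64').toString());
  // ...compose and send one email here, e.g. with the SendGrid library
};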
Basically you will have to download the file locally in the Cloud Function machine to be able to read it in this way.
Now there are multiple options to work around this.
The most basic/simplest is to provision a Compute Engine machine and run this operation from it, if it is a one-off event.
If you need to do this more frequently (e.g. daily), you can use an online tool to convert your csv file into JSON and import it into Firestore; then you can read the emails from Firestore a lot faster.
We have a system with one Master and multiple Slaves.
Currently everything happens on the Master and the Slaves are just there for backup.
We use CodeIgniter as a development platform.
Now we have decided to use the Slaves for the reads and the Master for the write queries.
I have been told that this is not doable without modifying the source code, because a proxy can't know the type of the query.
Any idea how to proceed with this without causing too much damage to a perfectly working system?
We will use this: http://dev.mysql.com/downloads/mysql-proxy/
It does exactly what we want.
More info here:
http://jan.kneschke.de/2007/8/1/mysql-proxy-learns-r-w-splitting/
http://www.infoq.com/news/2007/10/mysqlproxyrwsplitting
http://archive.oreilly.com/pub/a/databases/2007/07/12/getting-started-with-mysql-proxy.html
This is something I was also looking for. A few months back I did something like this, but I added three web servers with master/slave MySQL servers. The first web server has mod_proxy enabled to redirect requests to the read and write servers; all requests come to this server. If a POST, PUT or DELETE request comes in, it goes to the write server; all GET or normal requests go to the read server.
Here you can find the mod_proxy settings I used:
http://pastebin.com/a30BRHFq
here you can read about load balancing
http://www.rackspace.com/knowledge_center/article/simple-load-balancing-with-apache
I'm still looking for a better solution with less hardware involved.
I figured out another solution through CI: create two database connections in the database.php file, keep the slave MySQL server as the default database connection, and add another connection for the write-only server.
You can use this base model extension:
https://github.com/jamierumbelow/codeigniter-base-model
You need to extend your models with this model; it has functionality for callbacks before and after insert, update, delete and get queries. You only need to add one custom method/callback, change_db_group:
//this method goes in MY_Model
function change_db_group()
{
    $this->_database = $this->load->database('writedb', TRUE);
}
Now, your example model:
class Example_Model extends MY_Model{
protected $_table = 'example_table';
protected $before_create = array('change_db_group');
protected $before_update = array('change_db_group');
protected $before_delete = array('change_db_group');
}
Your database connection will be changed before executing insert, update or delete queries.
My CodeIgniter app on Google App Engine is not able to connect to my database on Google Cloud SQL. I tried so many things.
My site loads when I leave the database username, password & database name empty, but pages that have database calls show an error saying that no database was selected.
I noticed that my database had not been created, so I created a new database and a user with all privileges. I entered these details in my app and now it doesn't even connect to the database server. No pages serve.
When I remove only the username & password fields in database.php, it connects to the database server but doesn't connect to the database.
I checked the mysql database for users and my user has all privileges. I checked all spellings and they are correct. The app works locally. How can I fix this? I just can't get it to connect.
Out of the box, CodeIgniter will not connect to a Google Cloud SQL instance; modifications to the CI database driver files are required. This is because CI expects its choices to be either connecting to localhost or to a remote TCP/IP host; the developers never anticipated that anybody would want to connect directly to a socket.
I chose to use the Mysqli driver instead of Mysql for performance reasons and here is how I did it:
Step 1) Edit the codeigniter/system/database/drivers/mysqli/mysqli_driver.php file and replace the db_connect function with the following code:
function db_connect()
{
if(isset($this->socket)){
return mysqli_connect(null, $this->username, null, $this->database, null, $this->socket);
}
elseif ($this->port != '')
{
return mysqli_connect($this->hostname, $this->username, $this->password, $this->database, $this->port);
}
else
{
return mysqli_connect($this->hostname, $this->username, $this->password, $this->database);
}
}
Step 2) Alter your application's config/database.php (or wherever you want to declare your database settings). Depending on your application, you may choose to add "database" to the autoload array in yourapp/config/autoload.php, or you may choose to manually call the load->database() function. This assumes your application name is "myappname". Replace APPENGINE-ID, DATABASE-INSTANCE-ID and YOUR_DATABASE_NAME appropriately.
$db['myappname']['hostname'] = 'localhost';
$db['myappname']['username'] = 'root';
$db['myappname']['password'] = null;
$db['myappname']['database'] = 'YOUR_DATABASE_NAME';
$db['myappname']['dbdriver'] = 'mysqli';
$db['myappname']['pconnect'] = FALSE;
$db['myappname']['dbprefix'] = '';
$db['myappname']['swap_pre'] = '';
$db['myappname']['db_debug'] = FALSE;
$db['myappname']['cache_on'] = FALSE;
$db['myappname']['autoinit'] = FALSE;
$db['myappname']['char_set'] = 'utf8';
$db['myappname']['dbcollat'] = 'utf8_general_ci';
$db['myappname']['cachedir'] = '';
$db['myappname']['socket'] = '/cloudsql/APPENGINE-ID:DATABASE-INSTANCE-ID';
Voila, your CodeIgniter application should now be able to connect and talk to your Google Cloud MySQL database!
Now if you want to get really fancy and enable database caching, either make alterations to the CI code to use memcache (fastest) or Google Cloud Storage (more guaranteed persistence), but I won't cover that in this blog...
Answer courtesy of http://arlogilbert.com/post/67855755252/how-to-connect-a-codeigniter-project-to-google-cloud
Have you authorized your appengine app for access to the Cloud SQL instance? Go to the access control panel on the console for the instance (at https://cloud.google.com/console#/project/{project name}/sql/instances/{instance name}/access-control). Look for authorized app engine applications.
Otherwise, if you're connecting to the instance successfully, you'll have to choose the database from your code or configuration (depending on the app). For example, from the "running wordpress" guide (https://developers.google.com/appengine/articles/wordpress) you have to define DB_NAME. If you're handling the connections in your own code you'll need to use mysql_select_db.
From skimming the codeigniter docs, it looks like you need something like:
$config['database'] = "mydatabase";
I'm not familiar with this framework though, so check the docs yourself (http://ellislab.com/codeigniter/user-guide/database/configuration.html).
I am about to develop an application where employees go to service repair machines at customer premises. They need to fill up a service card using a tablet or any other mobile device.
In case of no Internet connection, I am thinking about using HTML5 offline storage, mainly IndexedDB to store the service card (web form) data locally, and do a sync at the office where Internet exists. The sync is with a MySQL database.
So the question: is it possible to sync IndexedDB with MySQL? I have never worked with IndexedDB; I am only doing research and saw it has potential.
Web SQL is deprecated; otherwise, it could have been the closer solution.
Any other alternatives in case the above is difficult or outside the standard?
Your opinions are highly appreciated.
Thanks.
This is definitely doable. I have only just started learning IndexedDB in the last couple of days, but this is how I would see it working. Sorry, I don't have code to give you.
Website knows it's in offline mode somehow
Clicking submit on the form saves the data into IndexedDB
Later, when the laptop or whatever is back online or on the intranet and can talk to the main server, it sends all IndexedDB rows to the server to be stored in MySQL via an AJAX call
IndexedDB is cleared
Repeat
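To make those steps concrete, a rough browser-side sketch (untested; assumes a 'pending' object store and a hypothetical /api/sync endpoint that writes to MySQL) could be:
// Step 2: clicking submit saves the form data into IndexedDB
function openDb() {
  return new Promise((resolve, reject) => {
    const req = indexedDB.open('servicecards', 1);
    req.onupgradeneeded = () => req.result.createObjectStore('pending', { autoIncrement: true });
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
}
async function saveOffline(card) {
  const db = await openDb();
  db.transaction('pending', 'readwrite').objectStore('pending').add(card);
}
// Steps 3-5: back online, push everything to the server, then clear the store
async function syncToServer() {
  const db = await openDb();
  const records = await new Promise((resolve, reject) => {
    const req = db.transaction('pending').objectStore('pending').getAll();
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
  if (records.length === 0) return;
  await fetch('/api/sync', {   // hypothetical endpoint that inserts the rows into MySQL
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(records),
  });
  db.transaction('pending', 'readwrite').objectStore('pending').clear();
}
window.addEventListener('online', syncToServer);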
A little bit late, but I hope it helps.
This is possible, though I am not sure if it is the best choice. I can tell you that I am building a web app where I have a MySQL database and the app must work offline and keep track of the data. I tried using IndexedDB directly and it was very confusing for me, so I implemented DexieJs, a minimalistic and straightforward API to communicate with IndexedDB in an easy way.
Now the app works online; if the internet goes down, it works offline until it gets internet back and then uploads the data to the MySQL database. One of the solutions I read about for saving the data was to store the JSON object in a TEXT field after passing it to JSON.stringify(), and once you need the data back, JSON.parse() it.
This was my motivation to build the app that way, along with the fact that we couldn't change databases:
IndexedDB Tutorial
Sync IndexedDB with MySQL
Connect node to mysql
[Update for 2021]
For anyone reading this, I can recommend checking out AceBase.
AceBase is a realtime database that enables easy storage and synchronization between browser and server databases. It uses IndexedDB in the browser, and its own binary db format or SQL Server / SQLite storage on the server side. MySQL storage is also on the roadmap. Offline edits are synced upon reconnecting and clients are notified of remote database changes in realtime through a websocket (FAST!).
On top of this, AceBase has a unique feature called "live data proxies" that allow you to have all changes to in-memory objects to be persisted and synced to local and server databases, so you can forget about database coding altogether, and program as if you're only using local objects. No matter if you're online or offline.
The following example shows how to create a local IndexedDB database in the browser, how to connect to a remote database server that syncs with the local database, and how to create a live data proxy that eliminates further database coding altogether.
const { AceBaseClient } = require('acebase-client');
const { AceBase } = require('acebase');
// Create local database with IndexedDB storage:
const cacheDb = AceBase.WithIndexedDB('mydb-local');
// Connect to server database, use local db for offline storage:
const db = new AceBaseClient({ dbname: 'mydb', host: 'db.myproject.com', port: 443, https: true, cache: { db: cacheDb } });
// Wait for remote database to be connected, or ready to use when offline:
db.ready(async () => {
// Create live data proxy for a chat:
const emptyChat = { title: 'New chat', messages: {} };
const proxy = await db.ref('chats/chatid1').proxy(emptyChat); // Use emptyChat if chat node doesn't exist
// Get object reference containing live data:
const chat = proxy.value;
// Update chat's properties to save to local database,
// sync to server AND all other clients monitoring this chat in realtime:
chat.title = `Changing the title`;
chat.messages.push({
from: 'ewout',
sent: new Date(),
text: `Sending a message that is stored in the database and synced automatically was never this easy!` +
`This message might have been sent while we were offline. Who knows!`
});
// To monitor realtime changes to the chat:
chat.onChanged((val, prev, isRemoteChange, context) => {
if (val.title !== prev.title) {
console.log(`Chat title changed to ${val.title} by ${isRemoteChange ? 'someone else' : 'us'}`);
}
});
});
For more examples and documentation, see AceBase realtime database engine at npmjs.com