How to import an AWS Aurora cluster, including its instances, into Terraform

I need to import an existing Aurora cluster into Terraform. I tried the `terraform import aws_rds_cluster.sample_cluster cluster` statement.
I got the state file ready and I could also run `terraform show`. However, when I try to destroy the cluster, Terraform tries to delete the cluster without the instances under it, so the destroy command fails:

```
Error: error deleting RDS Cluster (test): InvalidDBClusterStateFault: Cluster cannot be deleted, it still contains DB instances in non-deleting state.
    status code: 400, request id: 15dfbae8-aa13-4838-bc42-8020a2c87fe9
```

Is there a way I can import the entire cluster, instances included? I need to have a single state file that can be used to manage the entire cluster (including the underlying instances).
Here is the main.tf that is being used for the import:
```hcl
provider "aws" {
  access_key = "***"
  secret_key = "*****"
  region     = "us-east-1"
}

resource "aws_rds_cluster" "test" {
  engine               = "aurora-postgresql"
  engine_version       = "11.9"
  instance_class       = "db.r5.2xlarge"
  name                 = "test"
  username             = "user"
  password             = "******"
  parameter_group_name = "test"
}
```

Based on the comments.
Importing just the `aws_rds_cluster` into TF is not enough. One must also import all `aws_rds_cluster_instance` resources which are part of the cluster.
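For example (a sketch; the instance identifier below is hypothetical, and the real ones can be listed with `aws rds describe-db-instances`), declare one aws_rds_cluster_instance per cluster member:

```hcl
# Hypothetical example: one resource block per existing cluster instance
resource "aws_rds_cluster_instance" "test_instance_1" {
  identifier         = "test-instance-1"
  cluster_identifier = aws_rds_cluster.test.id
  engine             = "aurora-postgresql"
  instance_class     = "db.r5.2xlarge"
}
```

and then import the cluster and each instance into the same state file:

```sh
terraform import aws_rds_cluster.test test
terraform import aws_rds_cluster_instance.test_instance_1 test-instance-1
```

Since each aws_rds_cluster_instance references the cluster, terraform destroy will then remove the instances before attempting to delete the cluster.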
If the existing infrastructure is complex, then instead of fully manual development of TF config files for the import procedure, an open-sourced third-party tool called former2 could be considered. The tool can generate TF config files from existing resources:
Former2 allows you to generate Infrastructure-as-Code outputs from your existing resources within your AWS account.
TF is one of the outputs supported.

Related

How can we modify the hostingstart.html of kudu in the azure app service?

I'm doing all of this work in code. I want to accomplish the simple task of editing and saving hostingstart.html from the Kudu UI, but I don't know how to do it.
Currently, we have verified the connection through the Azure App Service deployment and DNS authentication with Terraform, and have even checked that the change works via hostingstart.html in the Kudu UI.
If possible, I wanted to do this with Terraform code, so I wrote it as below and put the HTML file inside, but it didn't work.
(If it can't be Terraform, a YAML or sh approach is also fine.)
resource "azurerm_app_service" "service" {
provider = azurerm.generic
name = "${local.service_name}-service"
app_service_plan_id = azurerm_app_service_plan.service_plan.id
location = azurerm_resource_group.rg.location
resource_group_name = azurerm_resource_group.rg.name
https_only = true
source_control {
repo_url = "https://git.a.git"
branch = "master"
}
}
Or can we specify the default path in the internal folder in this way?
```
web
├── page
│   └── hostingstart.html
└── terraform
    ├── main.tf
    └── app_service.tf
```

```hcl
site_config {
  always_on         = true
  default_documents = "../page/hostingstart.html"
}
```
For the moment, it seems best to deploy and apply through Blob Storage (https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/storage_blob).
As for Terraform: you can't easily edit that file from the management-plane APIs, which is what Terraform uses. Instead, you can deploy a minimal application with whatever you want to show. Here's an example of deploying code with an ARM template: https://github.com/JasonFreeberg/zip-deploy-arm-template.
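As a rough sketch of the Blob Storage approach (the resource names are illustrative, and it assumes a storage account and container are declared elsewhere in the configuration), the page could be uploaded like this:

```hcl
# Illustrative sketch: upload the static page as a block blob
resource "azurerm_storage_blob" "hostingstart" {
  name                   = "hostingstart.html"
  storage_account_name   = azurerm_storage_account.static.name
  storage_container_name = azurerm_storage_container.static.name
  type                   = "Block"
  content_type           = "text/html"
  source                 = "../page/hostingstart.html"
}
```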

How to restore single database from instance backup on GCP?

I am a beginner GCP administrator. I have several applications running on one instance, and each application has its own database. I set up automatic instance backups via the GCP GUI.
I would like to prepare for a possible failure of one of the applications, i.e. of one database. I would like to prepare a procedure for restoring such a database, but the GCP GUI has no option to restore a single database; I would have to restore the entire instance, which I cannot do because other applications are running on it.
I also read in the documentation that a backup cannot be exported.
Is there any way to restore only one database from the entire instance backup?
Will I have to write a MySQL script that will backup each database separately and save it to Cloud Storage?
As Daniel mentioned, you can use gcloud sql export/import to do this. You'll also need a Google Cloud Storage bucket.
First, export a database to a file:
```sh
gcloud sql export sql [instance-name] [gs://path-to-export-file.gz] --database=[database-name]
```
Create an empty database:
```sh
gcloud sql databases create [new-database-name] --instance=[instance-name]
```
Use the export file to populate your fresh, empty database:
```sh
gcloud sql import sql [instance-name] [gs://path-to-export-file.gz] --database=[database-name]
```
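As a concrete example with hypothetical names (instance prod-instance, database app1, bucket my-sql-exports), the full round trip would look like:

```sh
# All names below are illustrative
gcloud sql export sql prod-instance gs://my-sql-exports/app1.sql.gz --database=app1
gcloud sql databases create app1_restored --instance=prod-instance
gcloud sql import sql prod-instance gs://my-sql-exports/app1.sql.gz --database=app1_restored
```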
I'm also a beginner here, but as an alternative, I think you could do the following:
Create a new instance with the same configuration
Restore the original backup into the new instance (this is possible)
Create a dump of the one database that you are interested in
Finally, import that dump into the production instance
In this way, you avoid messing around with data exports, limit the dump operation to the unlikely case of a restore, and save money on database instances.
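A rough gcloud sketch of those four steps (instance, bucket, and database names are illustrative; `gcloud sql backups list --instance=prod-instance` shows the available BACKUP_ID values):

```sh
# 1. Create a new instance with the same configuration (version/tier illustrative)
gcloud sql instances create restore-clone --database-version=MYSQL_5_7 --tier=db-n1-standard-1

# 2. Restore the original backup into the new instance
gcloud sql backups restore BACKUP_ID --restore-instance=restore-clone --backup-instance=prod-instance

# 3. Dump only the database you are interested in
gcloud sql export sql restore-clone gs://my-sql-exports/app1.sql.gz --database=app1

# 4. Import that dump into the production instance
gcloud sql import sql prod-instance gs://my-sql-exports/app1.sql.gz --database=app1
```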
Curious what people think about this approach?
As of now there is no way to restore only one database from an entire instance backup. As you can check in the documentation, the rest of the applications will also experience downtime (since the target instance will be unavailable for connections and existing connections will be lost).
Since there is no built-in method to restore only one database from an entire instance backup, you are correct: writing a MySQL script that backs up each database separately and uses import and export operations is the way to go (here is the relevant documentation regarding import and export operations in the Cloud SQL MySQL context).
But from an implementation point of view I would recommend using a separate Cloud SQL instance for each application; then you could restore one database in case a particular application fails, without causing downtime or issues for the rest of the applications.
I see that the topic has been raised again. Below is a description of how I solved the problem of backing up individual databases from one instance, without using the built-in instance backup mechanism in GCP, and uploading them to Cloud Storage.
To solve the problem, I used Google Cloud Functions written in Node.js 8.
Here is a step-by-step solution:
Create a Cloud Storage bucket.
Create a Cloud Function using Node.js 8.
Edit the code below to match your instance and database parameters:
```js
const { google } = require("googleapis");
const { auth } = require("google-auth-library");

var sqladmin = google.sqladmin("v1beta4");

exports.exportDatabase = (_req, res) => {
  async function doBackup() {
    const authRes = await auth.getApplicationDefault();
    let authClient = authRes.credential;
    var request = {
      // Project ID
      project: "",
      // Cloud SQL instance ID
      instance: "",
      resource: {
        // Contains details about the export operation.
        exportContext: {
          // This is always sql#exportContext.
          kind: "sql#exportContext",
          // The file type for the specified uri (e.g. SQL or CSV)
          fileType: "SQL",
          /**
           * The path to the file in GCS where the export will be stored.
           * The URI is in the form gs://bucketName/fileName.
           * If the file already exists, the operation fails.
           * If fileType is SQL and the filename ends with .gz, the contents are compressed.
           */
          uri: ``,
          /**
           * Databases from which the export is made.
           * If fileType is SQL and no database is specified, all databases are exported.
           * If fileType is CSV, you can optionally specify at most one database to export.
           * If csvExportOptions.selectQuery also specifies the database, this field will be ignored.
           */
          databases: [""]
        }
      },
      // Auth client
      auth: authClient
    };

    // Kick off export with requested arguments.
    sqladmin.instances.export(request, function (err, result) {
      if (err) {
        console.log(err);
      } else {
        console.log(result);
      }
      res.status(200).send("Command completed", err, result);
    });
  }

  doBackup();
};
```
Save and deploy this Cloud Function.
Copy the trigger URL from the configuration page of the Cloud Function.
In order for the function to run automatically at a specified frequency, use Cloud Scheduler: Description: "", Frequency: use unix-cron syntax, Time zone: choose yours, Target: HTTP, URL: paste the trigger URL you copied before, HTTP method: POST.
That's all, it should work fine.
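If you prefer the command line, an equivalent Cloud Scheduler job can be sketched like this (the job name and schedule are illustrative; paste your own trigger URL):

```sh
# Call the export function every day at 03:00 (unix-cron syntax)
gcloud scheduler jobs create http export-databases \
  --schedule="0 3 * * *" \
  --uri="<TRIGGER-URL-COPIED-ABOVE>" \
  --http-method=POST
```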

Disable Source/Destination Check AWS Python Boto

I am trying to automate the deployment of an AWS VPN [IPsec] instance using Python boto. I am launching a new instance using ec2.run_instances:
```python
reservations = ec2.run_instances(
    image_id,
    subnet_id=subnet_id,
    instance_type=instance_type,
    instance_initiated_shutdown_behavior='stop',
    key_name=key_name,
    security_group_ids=[security_group])
```
For this script to work, I need to disable the source/destination check for this instance. I couldn't find a way to disable it using Python boto. As per the boto documentation, I should be able to do this using modify_instance_attribute:
http://boto.likedoc.net/en/latest/ref/ec2.html
However, I couldn't find any sample script using this attribute. Please give me some examples so that I can complete this.
Thanks in advance.
From the boto3 documentation, the way you would do this is:
```python
import boto3
import requests

# Look up this instance's ID from the EC2 instance metadata service
response = requests.get('http://169.254.169.254/latest/meta-data/instance-id')
instance_id = response.text

ec2_client = boto3.client('ec2')
result = ec2_client.modify_instance_attribute(
    InstanceId=instance_id,
    SourceDestCheck={'Value': False})
```
You would have to use the modify_instance_attribute method after you have launched the instance with run_instances. In boto 2, run_instances returns a single Reservation object, so:
```python
instance = reservations.instances[0]
ec2.modify_instance_attribute(instance.id, attribute='sourceDestCheck', value=False)
```

Grails: changing dataSource url at runtime to achieve multi tenant database separation

I'm building a multi-tenant application with Grails and I want to keep separate databases.
I need to change the URL dynamically at runtime to point GORM to a different database.
I have a front end acting as a balancer, distributing requests to a cluster of backend hosts. Each backend host runs a Grails 2.3.5 instance and a MySQL server with several databases (one per tenant). I would like to change the dataSource dynamically so that GORM can access domain entities in the right database.
Any ideas ?
Thanks
You can configure multiple data sources in your DataSource.groovy; have a look at the blog post.
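As a minimal sketch of what that configuration could look like (all names, URLs, and credentials below are illustrative), each tenant gets its own dataSource_<name> block in grails-app/conf/DataSource.groovy:

```groovy
// grails-app/conf/DataSource.groovy -- illustrative values only
dataSource {
    pooled = true
    driverClassName = "com.mysql.jdbc.Driver"
    url = "jdbc:mysql://localhost/tenant_default"
    username = "app"
    password = "secret"
}

// Extra data sources are declared as dataSource_<name> and referenced
// by <name> in the domain mapping shown below.
dataSource_dataSource1 {
    url = "jdbc:mysql://localhost/tenant_one"
    username = "app"
    password = "secret"
}
```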
In your domains, declare which data sources the domain can interact with, e.g.:
```groovy
static mapping = {
    datasources(['dataSource1', 'dataSource2'])
}
```
or 'ALL' for all data sources, e.g.:
```groovy
static mapping = {
    datasource 'ALL'
}
```
and then you can run queries against the data source from which you want to get or set data, e.g.:
```groovy
def userClass = User.class
User user = userClass.dataSource1.findByName('username')
```
Ref: multipleDatasources, Querying on multiple datasources in Grails

How do I connect to a Google Cloud SQL database using CodeIgniter?

My CodeIgniter app on Google App Engine is not able to connect to my database on Google Cloud SQL. I have tried so many things.
My site loads when I leave the database username, password, and database name empty, but pages that make database calls show an error saying that no database was selected.
I noticed that my database had not been created, so I created a new database and a user with all privileges. I entered these details in my app and now it doesn't even connect to the database server; no pages are served.
When I remove only the username and password fields in database.php, it connects to the database server but doesn't connect to the database.
I checked the mysql database for users and my user has all privileges. I checked all spellings and they are correct. The app works locally. How can I fix this? I just can't get it to connect.
Out of the box, CodeIgniter will not connect to a Google Cloud SQL instance; modifications to the CI database driver files are required. This is because CI expects its choices to be either connecting to localhost or to a remote TCP/IP host; the developers never anticipated that anybody would want to connect directly to a socket.
I chose to use the mysqli driver instead of mysql for performance reasons, and here is how I did it:
Step 1) Edit the codeigniter/system/database/drivers/mysqli/mysqli_driver.php file and replace the db_connect function with the following code:
```php
function db_connect()
{
    if (isset($this->socket)) {
        // Connect through the Cloud SQL unix socket
        return mysqli_connect(null, $this->username, null, $this->database, null, $this->socket);
    } elseif ($this->port != '') {
        return mysqli_connect($this->hostname, $this->username, $this->password, $this->database, $this->port);
    } else {
        return mysqli_connect($this->hostname, $this->username, $this->password, $this->database);
    }
}
```
Step 2) Alter your application's config/database.php (or wherever you want to declare your database settings). Depending on your application, you may choose to add "database" to the autoload array in yourapp/config/autoload.php, or you may choose to manually call the load->database() function. This assumes your application name is "myappname". Replace APPENGINE-ID, DATABASE-INSTANCE-ID, and YOUR_DATABASE_NAME appropriately.
```php
$db['myappname']['hostname'] = 'localhost';
$db['myappname']['username'] = 'root';
$db['myappname']['password'] = null;
$db['myappname']['database'] = 'YOUR_DATABASE_NAME';
$db['myappname']['dbdriver'] = 'mysqli';
$db['myappname']['pconnect'] = FALSE;
$db['myappname']['dbprefix'] = '';
$db['myappname']['swap_pre'] = '';
$db['myappname']['db_debug'] = FALSE;
$db['myappname']['cache_on'] = FALSE;
$db['myappname']['autoinit'] = FALSE;
$db['myappname']['char_set'] = 'utf8';
$db['myappname']['dbcollat'] = 'utf8_general_ci';
$db['myappname']['cachedir'] = '';
$db['myappname']['socket'] = '/cloudsql/APPENGINE-ID:DATABASE-INSTANCE-ID';
```
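Then, if you don't autoload the group, it can be loaded explicitly where needed (a sketch; 'myappname' is the group name assumed above):

```php
// Load the 'myappname' group defined in config/database.php and run a test query
$db = $this->load->database('myappname', TRUE);
$query = $db->query('SELECT 1');
```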
Voilà, your CodeIgniter application should now be able to connect and talk to your Google Cloud MySQL database!
Now if you want to get really fancy and enable database caching, either make alterations to the CI code to use memcache (fastest) or Google Cloud Storage (more guaranteed persistence), but I won't cover that in this blog…
Answer courtesy of http://arlogilbert.com/post/67855755252/how-to-connect-a-codeigniter-project-to-google-cloud
Have you authorized your App Engine app for access to the Cloud SQL instance? Go to the access control panel on the console for the instance (at https://cloud.google.com/console#/project/{project name}/sql/instances/{instance name}/access-control) and look for authorized App Engine applications.
Otherwise, if you're connecting to the instance successfully, you'll have to choose the database from your code or configuration (depending on the app). For example, from the "running WordPress" guide (https://developers.google.com/appengine/articles/wordpress) you have to define DB_NAME. If you're handling the connections in your own code, you'll need to use mysql_select_db.
From skimming the CodeIgniter docs, it looks like you need something like:
```php
$config['database'] = "mydatabase";
```
I'm not familiar with this framework though, so check the docs yourself (http://ellislab.com/codeigniter/user-guide/database/configuration.html).