Unable to test AWS DMS endpoints in CloudFormation template - json

I am trying to build a CloudFormation template to automate the migration process from on-premises to the AWS Cloud. I have created all the required Database Migration Service (DMS) resources, including the Replication Instance, Endpoints and Tasks, through CloudFormation itself.
Now, in order to go further, I need to test the Endpoints from the Replication Instance. This should be done in an automated way. Is it possible to achieve this task in a CloudFormation template?

The Database Migration Service (DMS) exposes a service API called TestConnection. You can use the TestConnection API to validate connectivity to the endpoints that you've configured.
For the endpoint connectivity test to succeed, however, the DMS Replication Instance must be fully operational, according to the service documentation. In other words, you can only test connectivity after the replication instance has been created, because the replication instance is what performs the connection.
You could call the DMS TestConnection API from an AWS Lambda function. AWS Lambda has the AWS SDK built-in, so you can simply embed your Lambda code directly into the CloudFormation template. You don't need to worry about building a ZIP archive that includes the AWS SDK, unless you want to add other dependencies to your Lambda function.
Database Migration Service | API Reference | TestConnection
Boto3 | AWS Python SDK | Database Migration Service | test_connection() method
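For illustration, here is a minimal sketch of such a Lambda handler using boto3. Only test_connection and describe_connections are real DMS calls; the event field names and the polling interval are assumptions for the sketch:

import time
import boto3

dms = boto3.client("dms")

def handler(event, context):
    # Hypothetical event fields: the ARNs of the replication instance
    # and the endpoint created earlier in the stack.
    dms.test_connection(
        ReplicationInstanceArn=event["ReplicationInstanceArn"],
        EndpointArn=event["EndpointArn"],
    )
    # TestConnection is asynchronous: poll DescribeConnections until
    # the test leaves the "testing" state (bounded by the Lambda timeout).
    while True:
        connections = dms.describe_connections(
            Filters=[{"Name": "endpoint-arn",
                      "Values": [event["EndpointArn"]]}]
        )
        status = connections["Connections"][0]["Status"]
        if status != "testing":
            return {"Status": status}  # "successful" or "failed"
        time.sleep(10)

If you wire this up as a CloudFormation custom resource, the function also has to report success or failure back to the stack (for example with the cfnresponse module), so that a failed connection test fails the deployment.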

Related

Google Cloud Run: Could not find specified network to attach to app

I have a Cloud Run container that uses a Serverless Connector to connect to a Cloud SQL instance all in the same project. This configuration works just fine.
I have moved the Cloud SQL instance to another project in the same organisation and set up a Serverless Connector there as per the instructions. I have tested this Serverless Connector with a Cloud Function in the same project that accesses the database and reports the number of rows in a table; this works without problems.
I have now updated the Cloud Run instance to point to the new connector reference. I have used the specified format: projects/PROJECT_ID/locations/europe-west3/connectors/CONNECTOR_NAME. When I release a new revision of the container, I get the error message: "Could not find specified network to attach to app." I see the message "Ready condition status changed to False for Service {service name} with message: Deploying Revision." in the Cloud Run logs for this service.
Any ideas on how to get this working please?
Documentation:
Configuring Serverless VPC Access
Configure connectors in the Shared VPC host project
Info:
The command gcloud compute networks vpc-access connectors describe --region=europe-west3 projects/PROJECT_ID/locations/europe-west3/connectors/CONNECTOR_NAME gives the output:
connectedProjects:
- company-service-dev
- a-project-name
ipCidrRange: 10.8.0.0/28
machineType: f1-micro
maxInstances: 3
maxThroughput: 300
minInstances: 2
minThroughput: 200
name: projects/PROJECT_ID/locations/europe-west3/connectors/CONNECTOR_NAME
network: company-project-servicename
state: READY
The connector MUST be in the same region AND the same project as the Cloud Run service.
The wrong solution is to create a peering between the Cloud Run project VPC and the Cloud SQL project VPC. It won't work because of the network transitivity issue: Cloud SQL already creates one peering to its project's VPC, and peering the Cloud Run VPC to that VPC adds a second, and two peerings in a row are not transitive.
The correct solution is to create a Shared VPC architecture so that both projects share the same VPC, which removes the need for peering between the projects.
Another hack exists: you can create a VPN between the Cloud Run project VPC and the Cloud SQL project VPC. It's ugly, but it works.
Solved!
Problem: Configuration. There was a VPC created for the Cloud SQL db to get an IP address assigned in. The Serverless Connector was created and had access to the same network. I mistakenly thought that was all that was needed. As @guillaume-blaquiere points out, this works for a single project only.
To fix: Create a Shared VPC configuration in the host project. In the Google Cloud Console it was as easy as turning on Shared VPC (VPC Network > Shared VPC). Set up a configuration with pretty much the default options it gives you, and then you can use the Serverless Connector reference projects/PROJECT_ID/locations/europe-west3/connectors/CONNECTOR_NAME in your Cloud Run or Cloud Functions and all works just fine!
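For reference, the same setup can be scripted with gcloud; the project IDs, service name and image below are placeholders:

gcloud compute shared-vpc enable HOST_PROJECT_ID
gcloud compute shared-vpc associated-projects add SERVICE_PROJECT_ID --host-project HOST_PROJECT_ID
gcloud run deploy SERVICE_NAME --image IMAGE_URL --region europe-west3 --vpc-connector projects/HOST_PROJECT_ID/locations/europe-west3/connectors/CONNECTOR_NAME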

How to publish AWS SNS data to a MySQL database

I am new to AWS and databases.
Since I am a complete beginner, any suggestions will be appreciated.
The current plan for the project is that data from an AWS database will be pushed using SNS HTTP fanout to an external MySQL database.
NOTE:
1. The data will be pushed by the client using AWS SNS.
2. We have no access to the AWS account, nor are we planning to have an AWS account.
3. The external MySQL database is a private database running on a Linux server.
I have gone through the official documentation of AWS SNS, and also some websites. This is all I found:
Use external applications like Zapier to map the data.
Develop some application to map the data.
Is it like using a Servlet application on the receiver side to update the table, or are there other methods?
AWS DB -----> SNS -----> _________ -----> External MySql DB
Thanks
If you cannot have an AWS Account, you can have your own web server consume the SNS Messages. SNS can deliver messages to an HTTP/HTTPS endpoint in a predefined structure. Read more details here. You can enable such an endpoint on your own server and share your server URL with the AWS Account owner. They can create a subscription from their SNS topic to your endpoint.
For setting up this endpoint, there are many options. ExpressJS is one such popular framework to quickly implement HTTP APIs.
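As a rough sketch of what such an endpoint involves (shown here with Python's standard library rather than ExpressJS; save_to_mysql is a hypothetical placeholder for your database write):

import json
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def save_to_mysql(payload):
    # Placeholder: insert the payload into the private MySQL database,
    # e.g. with a driver such as PyMySQL.
    print("received:", payload)

class SnsHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        message = json.loads(self.rfile.read(length))
        # SNS labels each delivery with this header.
        if self.headers.get("x-amz-sns-message-type") == "SubscriptionConfirmation":
            # Confirm the subscription by fetching the SubscribeURL once.
            urllib.request.urlopen(message["SubscribeURL"])
        else:
            save_to_mysql(message["Message"])
        self.send_response(200)
        self.end_headers()

HTTPServer(("", 8080), SnsHandler).serve_forever()

In production you should also verify the SNS message signature before trusting the payload.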
Probably option two would be more suitable, or at least the first to be considered. For that option you would have to develop a Lambda function which receives data from SNS, re-formats it if needed, and uploads it to MySQL. So your architecture would look like:
Data ---> SNS ---> Lambda function ---> MySQL
Depending on the amount of incoming data to the SNS topic, you may add SQS queues to the mix as well, to buffer the records and enable a fan-out architecture. For example:
              /---> SQS queue 1 ---> Lambda function 1 ---> MySQL
Data ---> SNS
              \---> SQS queue 2 ---> Lambda function 2, EC2 instance, Container ---> Other destination
Other solutions are possible. But I would first consider the above, before looking into other ways.
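A minimal sketch of such a Lambda function follows. PyMySQL is not part of the Lambda runtime, so it has to be packaged with the function, and the connection details, table, and columns here are assumptions to adapt to your schema:

import json
import os
import pymysql  # must be packaged with the function

# Hypothetical connection details read from environment variables.
conn = pymysql.connect(host=os.environ["DB_HOST"],
                       user=os.environ["DB_USER"],
                       password=os.environ["DB_PASSWORD"],
                       database=os.environ["DB_NAME"])

def handler(event, context):
    # An SNS-triggered invocation delivers one or more records.
    with conn.cursor() as cur:
        for record in event["Records"]:
            payload = json.loads(record["Sns"]["Message"])
            # Hypothetical table and columns.
            cur.execute("INSERT INTO events (id, body) VALUES (%s, %s)",
                        (payload["id"], json.dumps(payload)))
    conn.commit()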

Server vs Serverless for REST API

I have a REST API that I was thinking about deploying using a serverless model. My data is in an AWS RDS server that needs to be put in a VPC for security reasons. To allow a Lambda function to access the RDS instance, I need to configure the Lambda to be in the VPC, but this makes cold starts an average of 8 seconds longer, according to articles I have read.
The REST API is for a website, so an 8 second page load is not acceptable.
Is there any way I can use a serverless model to implement my REST API, or should I just use a regular EC2 server?
Unfortunately, this has not been released yet, but let us hope it is a matter of weeks or months now. At re:Invent 2018, AWS announced Remote NAT for Lambda, to be available this year (2019).
For now, you either have to expose RDS to the outside (directly or through a tunnel), which is a security issue, or create Lambda ENIs in the VPC.
To keep your Lambdas "warm", you may create a scheduled "ping" mechanism. You can find an example of this pattern in Yan Cui's article.
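A sketch of the handler side of that pattern; the "source" check relies on the shape of scheduled CloudWatch Events, and handle_api_request stands in for your real logic:

def handler(event, context):
    # Scheduled CloudWatch Events rules send events with this source;
    # return early so warm-up pings skip the real work.
    if event.get("source") == "aws.events":
        return {"warmed": True}
    return handle_api_request(event)

def handle_api_request(event):
    # Hypothetical placeholder for the actual REST API logic.
    return {"statusCode": 200, "body": "ok"}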

WSO2 API manager API not displaying properly

I'm deploying WSO2 API Manager 2.6.0 with an external MySQL database, and I'm trying to have my APIs persist when I change my deployment.
Currently I have two deployments using the same external database, one local and the other hosted on an AWS EKS cluster. When I create an API on my local deployment, I can only view it on my AWS deployment if I'm logged in to the store, and vice versa for my localhost deployment.
The expected and desired behaviour is that all APIs created on both deployments should be displayed on the store whether or not I'm logged in. Are there any configurations I can change to make that happen?
Here is the doc I used to configure the external database: https://docs.wso2.com/display/AM260/Installing+and+Configuring+the+Databases

Bluemix using Node-RED bind to existing ClearDB MySQL service

I am using Node-RED on IBM Bluemix. I am trying to connect to MySQL hosted by ClearDB, but I cannot find a suitable node in the database category.
How can I bind to existing ClearDB service that I already have bound to another app?
You can take a look at this MySQL node for Node-RED in the flow and node library; it is an extension. The steps for adding additional node types to the editor are explained in the Node-RED documentation in general; however, they do not directly apply to Bluemix. For your Bluemix environment you would need to access and modify the environment. See this post on how to deploy your customized Node-RED environment to Bluemix.