I have an application on SQL Server that sends change data to a target database using SQL Service Broker. I capture the data in a trigger and push it into a Service Broker queue. Now I want to make my application compatible with MySQL. The problem is how to achieve exactly the same implementation in MySQL, because Service Broker is not supported there. If I use an external message broker like RabbitMQ, how can a MySQL table trigger communicate directly with RabbitMQ?
Thanks in Advance
If you can use Kafka as an alternative to RabbitMQ, there are some tutorials on how to accomplish a similar goal:
Using Kafka Connect
Using GCP services
Change data capture (CDC) is a term used to classify this pattern of reacting to data changes and then delivering those changes in real-time to a downstream process.
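If you do stick with RabbitMQ, note that a MySQL trigger cannot call an external broker directly; a common workaround is to have the trigger write each change into a staging table and run a small relay process that polls that table and publishes to the broker. Below is a minimal sketch of that idea (not taken from the tutorials above), assuming a hypothetical change_events table populated by the trigger and the pymysql and pika libraries; table/queue names, columns, and credentials are placeholders.

# Minimal relay sketch: the MySQL trigger writes each change into a
# hypothetical change_events table, and this process polls that table and
# publishes the rows to a RabbitMQ queue.
import time

import pika
import pymysql

db = pymysql.connect(host="localhost", user="app", password="secret",
                     database="appdb", cursorclass=pymysql.cursors.DictCursor)
mq = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = mq.channel()
channel.queue_declare(queue="change_events", durable=True)

while True:
    with db.cursor() as cur:
        cur.execute("SELECT id, payload FROM change_events WHERE published = 0")
        for row in cur.fetchall():
            # payload is assumed to be the JSON document the trigger stored
            channel.basic_publish(exchange="",
                                  routing_key="change_events",
                                  body=row["payload"])
            cur.execute("UPDATE change_events SET published = 1 WHERE id = %s",
                        (row["id"],))
    db.commit()
    time.sleep(1)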
I'm looking for the CLI call to create a Memorystore cluster that uses the service account keys JSON file.
It seems like this link/command authenticates the gcloud CLI with a service account credential. If the service account has an IAM policy granting access to Memorystore, and the main issue is just authentication when running the create command, this might work, but I'd like to confirm.
I've reviewed the docs and found this:
https://cloud.google.com/memorystore/docs/memcached/creating-managing-instances
and this
https://cloud.google.com/appengine/docs/standard/python/memcache/using
but I am struggling to put it all together.
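To make it concrete, here is roughly what I'm picturing in code form (just a sketch; it assumes Memorystore for Redis and the google-auth / google-api-python-client libraries, and the project, region, and instance settings are placeholders):

# Rough sketch: authenticate with the service account keys JSON file and
# create a Memorystore (Redis) instance. Project, region, and instance
# settings are placeholders.
from google.oauth2 import service_account
from googleapiclient import discovery

creds = service_account.Credentials.from_service_account_file(
    "keys.json",
    scopes=["https://www.googleapis.com/auth/cloud-platform"],
)
redis_api = discovery.build("redis", "v1", credentials=creds)

parent = "projects/my-project/locations/us-central1"
operation = redis_api.projects().locations().instances().create(
    parent=parent,
    instanceId="my-instance",
    body={"tier": "BASIC", "memorySizeGb": 1},
).execute()
print(operation["name"])  # long-running operation; poll it for completion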
I am trying to build a CloudFormation template to automate the migration process from on-premises to AWS Cloud. I have created all the required resources in Database Migration Service (DMS), including the Replication Instance, Endpoints and Tasks, through CloudFormation itself.
Now, in order to go further, I need to test the Endpoints from the Replication Instance. This should be done in an automated way. Is it possible to achieve this task in a CloudFormation template?
The Database Migration Service (DMS) exposes a service API called TestConnection. You can use the TestConnection API to validate connectivity to the endpoints that you've configured.
For the endpoint connectivity test to succeed, however, the DMS Replication Instance must be fully operational, according to the service documentation.
In other words, you can only test connectivity after the replication instance has been created, because the replication instance itself is used to make the connection.
You could call the DMS TestConnection API from an AWS Lambda function. AWS Lambda has the AWS SDK built-in, so you can simply embed your Lambda code directly into the CloudFormation template. You don't need to worry about building a ZIP archive that includes the AWS SDK, unless you want to add other dependencies to your Lambda function.
Database Migration Service | API Reference | TestConnection
Boto3 | AWS Python SDK | Database Migration Service | test_connection() method
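A minimal Lambda handler along these lines might look like the sketch below. It assumes the replication instance and endpoint ARNs are passed in the event; wiring it up as a CloudFormation custom resource (including the response signalling back to CloudFormation) is omitted for brevity.

# Sketch of a Lambda handler that calls the DMS TestConnection API and then
# polls DescribeConnections until the test reaches a terminal status.
import time

import boto3

dms = boto3.client("dms")

def handler(event, context):
    replication_instance_arn = event["ReplicationInstanceArn"]
    endpoint_arn = event["EndpointArn"]

    dms.test_connection(
        ReplicationInstanceArn=replication_instance_arn,
        EndpointArn=endpoint_arn,
    )

    # Poll until DMS reports success or failure for this connection test.
    while True:
        connections = dms.describe_connections(
            Filters=[{"Name": "endpoint-arn", "Values": [endpoint_arn]}]
        )["Connections"]
        status = connections[0]["Status"]
        if status in ("successful", "failed"):
            return {"Status": status}
        time.sleep(10)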
My company has a messaging system which sends real-time messages in JSON format; it's not built on AWS and will not have any VPN connection with AWS.
Our team is trying to use AWS SQS to receive these messages, which will then have DynamoDB process the JSON messages into TSV, then load them into RDS.
However, as per the FAQ, SQS can only receive messages from within AWS.
https://aws.amazon.com/sqs/faqs/
Q: Who can perform operations on a message queue?
Only an AWS account owner (or an AWS account that the account owner has delegated rights to) can perform operations on an Amazon SQS message queue.
In order to use SQS, one way I can think of is to create a public-facing EC2 instance, which receives messages and passes them over to SQS.
My questions here are:
Is my idea correct?
If it's correct, can you share any details on how to build an application on this EC2 instance to achieve this functionality? (I have no experience with application development, so your insights are really appreciated!)
Are there any easier/better options in AWS that can achieve the goal of receiving messages in my use case?
Is my idea correct?
No, it isn't.
You're misinterpreting the (admittedly somewhat unclear) information in the FAQ.
SQS is accessible and usable from anywhere on the Internet. Its only exposed interface is HTTP(S). In fact, from inside EC2, SQS is not accessible unless the EC2 instance actually has outbound access to the Internet.
The point being made in the documentation is not that you need to be "inside" AWS to use queues, but rather that you need to be in possession of an authorized set of AWS credentials in order to work with queues.¹
If you have an AWS account, you have credentials, and you can use SQS. There is no requirement that you access the queue from "inside" AWS.
Choose the endpoint closest to your servers (for lowest latency) and you should find it open and accessible, from anywhere.
¹Queues can be configured to allow anonymous access after they are created. (Don't do it, I'm just saying it is possible.) This section of the FAQ seems to be referring to a subset of operations, such as creating queues.
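For example, from any machine with Internet access and AWS credentials configured, sending a message is just a normal SDK call. A minimal boto3 sketch (the queue URL and region are placeholders):

# Minimal sketch: send a message to SQS from any host on the Internet.
# Credentials come from the normal AWS credential chain (environment
# variables, config file, etc.).
import json

import boto3

sqs = boto3.client("sqs", region_name="us-east-1")

response = sqs.send_message(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/my-queue",
    MessageBody=json.dumps({"event": "example", "value": 42}),
)
print(response["MessageId"])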
I was not able to write to SQS from an external service. I found some partial explanations but got stuck at the role creation.
The alternative I found is using the AWS services Lambda + API Gateway to write to SQS.
This tutorial was extremely helpful, explaining all the steps in great detail:
https://startupnextdoor.com/adding-to-sqs-queue-using-aws-lambda-and-a-serverless-api-endpoint/
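If you go that route, the Lambda function behind the API Gateway endpoint ends up being quite small; roughly something like this sketch (assuming a proxy integration and a placeholder queue URL):

# Sketch of a Lambda function behind API Gateway (proxy integration) that
# forwards the request body to an SQS queue.
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"

def handler(event, context):
    # event["body"] is the raw request body posted to the API Gateway endpoint
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=event["body"])
    return {"statusCode": 200, "body": "queued"}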
You can access SQS from anywhere once you have proper permissions through an access key & secret key or an IAM role.
SQS is not specific to a VPC.
It is clear that you are trying to do this:
Take messages from your company messaging system and send them to SQS.
Your method (using EC2 as a bridge) is not wrong. However, you don't need EC2 to connect to SQS.
All AWS services can be accessed using the AWS API (e.g. Python boto3, etc.) from the internet, as long as you provide the correct credentials. So you can put your "middleware" anywhere, as long as you are able to establish a connection to those services.
So there are many more options available to you, e.g. triggering from your messaging system, using AWS Lambda, etc.
Thanks for sharing the information and your insights with me!
I have tested the solution below, which works for my use case:
created an endpoint in AWS API Gateway, which is able to receive messages from the company messaging system, a system that does not carry AWS credentials
created a Lambda function triggered by API Gateway, so once a message arrives, Lambda digests the JSON message, converts it to TSV, and then loads it into RDS (roughly sketched below)
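For anyone curious, the Lambda side of the second step looks roughly like this (just a sketch; field names, table name, and connection details are placeholders for my actual setup):

# Rough sketch of the Lambda behind the API Gateway endpoint: parse the JSON
# message, flatten it to a TSV line, and insert it into RDS (MySQL assumed).
import json

import pymysql

conn = pymysql.connect(host="my-rds-host", user="app", password="secret",
                       database="appdb")

def handler(event, context):
    # Parse the JSON message forwarded by API Gateway (proxy integration).
    message = json.loads(event["body"])

    # Flatten the fields we care about into a single TSV line.
    fields = [str(message.get(k, "")) for k in ("id", "timestamp", "payload")]
    tsv_line = "\t".join(fields)

    # Load into RDS.
    with conn.cursor() as cur:
        cur.execute("INSERT INTO messages (id, ts, payload) VALUES (%s, %s, %s)",
                    fields)
    conn.commit()
    return {"statusCode": 200, "body": tsv_line}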
I have been trying to create a stream with Spring Cloud Dataflow but have not had much luck (mostly due to the lack of documentation).
Issue 1: Accessing web GUI of dockerized Spring Cloud Dataflow
I have a dockerized Spring Cloud server running with Kafka on a basic Ubuntu container. For some reason I can't access the web GUI in Windows (at < docker-machine ip >:9393/dashboard). However, I have a separate Docker Ubuntu container running an Nginx reverse proxy, which shows up when I go to < docker-machine ip >/index.html etc. I don't think it is an issue with ports; I have the Spring Cloud container set up with -p 9393:9393 and the port is otherwise unused.
Issue 2: Routing by JSON Header
My ultimate goal is to have a file loaded in from Nginx, routed based on its JSON header (there are two different JSON headers), and then ingested into Cassandra with the appropriate query.
I can do all of this except the sorting by JSON header. Which app would you recommend I use?
Issue 1: Accessing web GUI of dockerized Spring Cloud Dataflow
We might need a little more detail on this. Assuming this is the local server, perhaps you could share the docker scripts/image, so we could try it out.
Issue 2: Routing by JSON Header
The router-sink application would come in handy for this type of use case. This application routes the payload to named destinations based on certain conditions, so you'd have the opportunity to route the payload with the respective ingest query to Cassandra.
stream 1:
stream create test --definition "file | router --expression=header.contains('a')?':foo':':bar'"
stream 2:
stream create baz --definition ":foo > cassandra --ingest-query=\"query1\""
stream 3:
stream create wiz --definition ":bar > cassandra --ingest-query=\"query2\""
(where :foo and :bar are named destinations)