Looking for the CLI call to create a Memorystore cluster that uses the keys JSON file?

I'm looking for the CLI call to create a Memorystore cluster that uses the keys JSON file.
It seems like this link/command authenticates the gcloud CLI with a service account credential. If the service account has an IAM policy for Memorystore, and the main issue is just authentication when running the create command, this might work, but I'd like to confirm.
I've reviewed the documentation and found this:
https://cloud.google.com/memorystore/docs/memcached/creating-managing-instances
and this:
https://cloud.google.com/appengine/docs/standard/python/memcache/using
but I am struggling to put it all together.


How to publish AWS SNS data to a MySQL database

I am new to AWS and databases.
Since I am a complete beginner at this, any suggestions will be appreciated.
Currently the project plan is that data from an AWS database will be pushed to an external MySQL database using SNS HTTP fanout.
NOTE:
1. The data will be pushed by the client using AWS SNS.
2. We have no access to the AWS account, nor are we planning to have an AWS account.
3. The external MySQL database is a private database running on a Linux server.
I have gone through the official documentation of AWS SNS, and also some websites. This is all I found:
Use external applications like Zapier to map the data.
Develop some application to map the data.
Is it a matter of using a servlet application on the receiver side to update the table, or are there other methods?
AWS DB -----> SNS -----> _________ -----> External MySQL DB
Thanks
If you cannot have an AWS account, you can have your own web server consume the SNS messages. SNS can deliver messages to an HTTP/HTTPS endpoint in a predefined structure. Read more details here. You can enable such an endpoint on your own server and share your server URL with the AWS account owner. They can create a subscription from their SNS topic to your endpoint.
For setting up this endpoint, there are many options. ExpressJS is one such popular framework to quickly implement HTTP APIs.
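For illustration, a minimal sketch of such an endpoint in Python/Flask (rather than ExpressJS); the URL path, port, and the MySQL insert step are placeholders, not a definitive implementation:
import json
import requests
from flask import Flask, request

app = Flask(__name__)

@app.route("/sns", methods=["POST"])  # hypothetical path shared with the AWS account owner
def sns_endpoint():
    # SNS posts JSON with a text/plain content type, so parse the raw body.
    body = json.loads(request.data)
    msg_type = request.headers.get("x-amz-sns-message-type")
    if msg_type == "SubscriptionConfirmation":
        # Confirm the subscription once by visiting the SubscribeURL.
        requests.get(body["SubscribeURL"], timeout=10)
    elif msg_type == "Notification":
        payload = json.loads(body["Message"])  # the client's JSON payload
        # TODO: insert `payload` into the external MySQL database here
        # (e.g. with pymysql or SQLAlchemy).
        print(payload)
    return "", 200

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)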
Probably, option two would be more suitable, or at least the first to be considered. For that option you would have to develop a Lambda function which would receive data from SNS, reformat it if needed, and upload it to MySQL. So your architecture would look like:
Data ---> SNS ---> Lambda function ---> MySQL
Depending on the amount of data coming into SNS, you may also add an SQS queue to the mix, to buffer the records and enable a fan-out architecture. For example:
             /---> SQS queue 1 ---> Lambda function 1 ---> MySQL
Data ---> SNS
             \---> SQS queue 2 ---> Lambda function 2, EC2 instance, container ---> other destination
Other solutions are possible. But I would first consider the above, before looking into other ways.
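As a rough sketch (not the definitive implementation), the Lambda function in Python might look like this; the table name, columns, and connection settings are assumptions:
import json
import os
import pymysql

def lambda_handler(event, context):
    # Connection settings are assumed to come from Lambda environment variables.
    conn = pymysql.connect(
        host=os.environ["DB_HOST"],
        user=os.environ["DB_USER"],
        password=os.environ["DB_PASSWORD"],
        database=os.environ["DB_NAME"],
    )
    with conn.cursor() as cur:
        for record in event["Records"]:
            payload = json.loads(record["Sns"]["Message"])
            # Hypothetical table/columns; reformat the payload as needed.
            cur.execute(
                "INSERT INTO events (id, body) VALUES (%s, %s)",
                (payload.get("id"), json.dumps(payload)),
            )
    conn.commit()
    conn.close()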

Is there any way to log in to the Zabbix API without giving a username and password in the script?

I am using the pyzabbix module to use the Zabbix API, but is there any way to log in to the Zabbix API without giving the username and password in the Python script?
For example, an API token that serves the purpose.
There are no API tokens or similar access methods in Zabbix currently.
There is not, but you should use an environment variable (see environment variables in Python) to store the password/token anyway, in order to avoid having it inside the code in cleartext. The environment is visible to the user only, and is usually initialized from a protected file (0600 permissions in Unix style) or a masked CI/CD variable.
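For example, with pyzabbix the credentials can be read from environment variables instead of being hard-coded (the server URL below is a placeholder):
import os
from pyzabbix import ZabbixAPI

# Credentials come from the environment, not from the source code.
zapi = ZabbixAPI("https://zabbix.example.com")
zapi.login(os.environ["ZABBIX_USER"], os.environ["ZABBIX_PASSWORD"])
print(zapi.api_version())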
I am using Zabbix 5.4.7
There is a section API tokens under:
Administration -> General -> API tokens
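Once such a token is created, a script can authenticate with it instead of a username/password, for example by passing it in the auth field of a plain JSON-RPC request (the URL is a placeholder and the token is read from an environment variable; recent pyzabbix versions may also accept a token directly, but check the version you have):
import os
import requests

ZABBIX_URL = "https://zabbix.example.com/api_jsonrpc.php"  # placeholder
payload = {
    "jsonrpc": "2.0",
    "method": "host.get",
    "params": {"output": ["hostid", "host"]},
    "auth": os.environ["ZABBIX_API_TOKEN"],  # API token created in the frontend
    "id": 1,
}
resp = requests.post(ZABBIX_URL, json=payload, timeout=10)
print(resp.json())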

AWS authentication to Vault

We're using Vault to store our application secrets and config. When our app (Java) starts, a script does all the magic of getting the secrets and config from Vault and storing them locally for the application to read. The script authenticates to Vault using an AWS IAM role.
Now we're getting to a situation where the application needs to read secrets from Vault on the go, not just on startup. For that purpose, it needs to be able to authenticate on pretty much every request. It's worth mentioning that the app might also run on a developer machine, so whatever authentication is done needs to work on the EC2 instance as well as in the local development environment.
I'm currently leaning towards creating a username and password and storing them in Vault for the application to fetch at startup. The application could then use that username/password to authenticate to Vault whenever it needs to.
I'm also considering AppRole, but can't really see any real advantage to it over a simple user/password setup.
What's the best solution for this use case? Any advice would be highly appreciated!
Thanks,
Yosi
The AWS recommendation for storing secrets is to use AWS Systems Manager Parameter Store.
Software running on an Amazon EC2 instance with an assigned Role can use those credentials to access the Parameter Store to retrieve application secrets.
The Parameter Store can also be used outside of EC2, but some AWS credentials will still be needed to authenticate to the Parameter Store.
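A minimal sketch of reading such a secret with boto3 (the parameter name and region are placeholders):
import boto3

ssm = boto3.client("ssm", region_name="eu-west-1")
# SecureString parameters are decrypted transparently when WithDecryption=True.
resp = ssm.get_parameter(Name="/myapp/prod/db-password", WithDecryption=True)
secret = resp["Parameter"]["Value"]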

AWS SQS to receive messages from outside of AWS

My company has a messaging system which sends real-time messages in JSON format; it is not built on AWS and will not have any VPN connection with AWS.
Our team is trying to use AWS SQS to receive these messages, which will then have DynamoDB process the JSON messages to TSV, then load them into RDS.
However, as per the FAQ, SQS can only receive messages from within AWS:
https://aws.amazon.com/sqs/faqs/
Q: Who can perform operations on a message queue?
Only an AWS account owner (or an AWS account that the account owner has delegated rights to) can perform operations on an Amazon SQS message queue.
In order to use SQS, one way I can think of is to create a public-facing EC2 instance which receives messages and passes them over to SQS.
My questions here are:
Is my idea correct?
If it is correct, can you share any details on how to build an application on this EC2 instance to achieve this functionality? (I have no experience in application development, so your insights are really appreciated!)
Are there any easier/better options in AWS that can achieve the goal of receiving messages in my use case?
Is my idea correct?
No, it isn't.
You're misinterpreting the (admittedly somewhat unclear) information in the FAQ.
SQS is accessible and usable from anywhere on the Internet. Its only exposed interface is HTTP(S). In fact, from inside EC2, SQS is not accessible unless the EC2 instance actually has outbound access to the Internet.
The point being made in the documentation is not that you need to be "inside" AWS to use queues, but rather that you need to be in possession of an authorized set of AWS credentials in order to work with queues.¹
If you have an AWS account, you have credentials, and you can use SQS. There is no requirement that you access the queue from "inside" AWS.
Choose the endpoint closest to your servers (for lowest latency) and you should find it open and accessible, from anywhere.
¹Queues can be configured to allow anonymous access after they are created. (Don't do it; I'm just saying it is possible.) This section of the FAQ seems to be referring to a subset of operations, such as creating queues.
I was not able to write to SQS from an external service. I found some partial explanations but got stuck at the role creation.
The alternative I found is using AWS services Lambda + API Gateway to write to SQS.
This tutorial was extremely helpful, explaining all the steps in great details:
https://startupnextdoor.com/adding-to-sqs-queue-using-aws-lambda-and-a-serverless-api-endpoint/
You can access SQS from anywhere once you have proper permissions, through an access key & secret key or an IAM role.
SQS is not specific to a VPC.
It is clear that you are trying to do this:
Take messages from your company messaging system and send them to SQS.
Your method (using EC2 as a bridge) is not wrong. However, you don't need EC2 to connect to SQS.
All AWS services can be accessed using the AWS API (e.g. Python boto3, etc.) from the internet, as long as you provide the correct credentials. So you can put your "middleware" anywhere, as long as you are able to establish a connection to the said services.
So there are lots more options available to you, e.g. triggering from your messaging system, using AWS Lambda, etc.
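As an illustration, sending a message to SQS from any machine with boto3 only needs valid credentials and the queue URL (both are placeholders here; credentials can come from environment variables or ~/.aws/credentials):
import json
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
sqs.send_message(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/my-queue",  # placeholder
    MessageBody=json.dumps({"event": "example"}),
)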
Thanks for sharing the information and your insights with me!
I have tested the solution below, which works for my use case:
created an endpoint in AWS API Gateway, which is able to receive messages from the company messaging system, a system that does not carry AWS credentials
created a Lambda function triggered by API Gateway, so once a message arrives, Lambda will digest the JSON message, convert it to TSV, and then load it into RDS
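A simplified sketch of such an API Gateway-triggered Lambda handler, converting the JSON body to a TSV line (the field names and the RDS load step are assumptions):
import json

def lambda_handler(event, context):
    # With API Gateway proxy integration, the request body arrives as a string.
    message = json.loads(event["body"])
    fields = ["id", "timestamp", "value"]  # assumed field names
    tsv_line = "\t".join(str(message.get(f, "")) for f in fields)
    # TODO: load tsv_line into RDS here (e.g. via pymysql), as described above.
    return {"statusCode": 200, "body": "ok"}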

Spring Cloud Dataflow routing by JSON header

I have been trying to create a stream with Spring Cloud Dataflow but have not had much luck (mostly due to the lack of documentation).
Issue 1: Accessing web GUI of dockerized Spring Cloud Dataflow
I have a dockerized Spring Cloud server running with Kafka on a basic Ubuntu container. For some reason I can't access the web GUI from Windows (at <docker-machine ip>:9393/dashboard). However, I have a separate Docker Ubuntu container running an Nginx reverse proxy, which shows up when I go to <docker-machine ip>/index.html, etc. I don't think it is an issue with ports; I have the Spring Cloud container set up with -p 9393:9393 and the port is otherwise unused.
Issue 2: Routing by JSON Header
My ultimate goal is to get a file loaded in from Nginx, routed based on its JSON header (there are two different JSON headers), and then have ingest queries run against Cassandra.
I can do all of this except the routing by JSON header. Which app would you recommend I use?
Issue 1: Accessing web GUI of dockerized Spring Cloud Dataflow
We might need a little more detail on this. Assuming this is the local server, perhaps you could share the Docker scripts/image so we could try it out.
Issue 2: Routing by JSON Header
The router-sink application would come in handy for this type of use case. This application routes the payload to named destinations based on certain conditions, so you'd have the opportunity to route the payload with the respective ingest-query to Cassandra.
stream 1:
stream create test --definition "file | router --expression=header.contains('a')?':foo':':bar'"
stream 2:
stream create baz --definition ":foo > cassandra --ingest-query=\"query1\""
stream 3:
stream create wiz --definition ":bar > cassandra --ingest-query=\"query2\""
(where: foo and bar are named destinations)