How to use Vault as a Boundary credential store for hosts' static SSH credentials - integration

I am integrating Vault as a Boundary credential store in order to log in to an AWS EC2 instance, but I still have to provide the credential manually when logging in to EC2.
./boundary connect ssh -target-id ttcp_1234567890 -addr=http://XXXXXXXXXXX -username hello
Credentials:
  Credential Source ID:   clvlt_jnEOdYQyew
  Credential Source Name: new ec2
  Credential Store ID:    csvlt_4lWBv8Wke7
  Credential Store Type:  vault
  Secret:
    null
hello#hst_1234567890’s password:
It is not picking up the secret created in Vault, which it should. So is there an alternative way to log in to AWS resources using a credential stored in Vault? The documentation shows the analyst role connecting to a Postgres database without providing a credential:
boundary connect postgres -target-id ttcp_1r9XGCXdwE -dbname northwind
Is there a similar approach for logging in to an EC2 instance?
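For the secret to show up instead of "null", the path configured on the Vault credential library has to return data that Boundary can read; with a KV v2 secrets engine the read path needs an extra "data/" segment. A minimal sketch of what that wiring might look like (the secret path and password here are hypothetical; the store ID is taken from the output above):

# Store the SSH credential in Vault (KV v2)
vault kv put secret/ssh/ec2 username=hello password=example-password

# Point a Boundary credential library at it; note the extra "data/"
# segment that KV v2 requires in the read path
boundary credential-libraries create vault \
  -credential-store-id csvlt_4lWBv8Wke7 \
  -vault-path "secret/data/ssh/ec2" \
  -name "new ec2"

Also note that brokered credentials are displayed to the client rather than injected into the session, so unlike the postgres helper (which can pass the brokered username/password to psql), boundary connect ssh may still prompt for the password even once the secret is returned correctly.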

Related

Serverless Node.js MySQL handshake error after deploy

I am working on a basic serverless Node.js application that performs CRUD operations against MySQL. The MySQL database was created in RDS and is publicly accessible. From my local system it connects and works fine, but after running serverless deploy I get a "Handshake inactivity timeout" error in the response body.
Note: the serverless deployment and the RDS MySQL instance are in the same AWS account.
Even if both are deployed in the same account and the RDS instance is public, the Lambda might be deployed in a private subnet that has no internet access. Without seeing your serverless deployment script, my guess is that you have to configure the correct security groups and subnets for your Lambda function to be able to reach this public database.
Here's a Medium article that may help with the setup. Under the vpc configuration of your Lambda function, you may have to modify the following:
provider:
  name: aws
  stage: prod
  runtime: nodejs6.10
  region: us-east-1
  vpc:
    securityGroupIds:
      - HERE_YOUR_SECURITY_GROUP
    subnetIds:
      - HERE_YOUR_SUBNET_1
      - HERE_YOUR_SUBNET_2
      - HERE_YOUR_SUBNET_3
  environment:
    MYSQLHOST: 'xxxxx.rds.amazonaws.com'
    MYSQLPORT: 'xxx'
    MYSQLUSER: 'xxx'
    MYSQLPASS: 'xxxxx'
    MYSQLDATABASE: 'xxxx'
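If you are not sure which IDs to put in, the AWS CLI can list the candidates; a quick sketch, assuming the CLI is configured for the same account and region as the deployment:

# List security group and subnet IDs to fill the HERE_YOUR_* placeholders
aws ec2 describe-security-groups --query 'SecurityGroups[].[GroupId,GroupName]' --output table
aws ec2 describe-subnets --query 'Subnets[].[SubnetId,AvailabilityZone]' --output table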

Error connecting to GCP Cloud SQL using Sequelize through Node.js in App Engine?

When I try to connect a second-generation Cloud SQL (MySQL) instance to App Engine using Sequelize, it gives me the following error:
{"name":"SequelizeConnectionError","parent":{"errno":"ENOENT","code":"ENOENT","syscall":"connect","address":"/cloudsql/phrasal-charger-215107:asia-south1:newdish123","fatal":true},"original":{"errno":"ENOENT","code":"ENOENT","syscall":"connect","address":"/cloudsql/phrasal-charger-215107:asia-south1:newdish123","fatal":true}}
Sequelize options:
const db = new Sequelize(
  config.database.db,
  config.database.username,
  config.database.password,
  {
    host: 'localhost',
    dialect: "mysql",
    port: 3306,
    dialectOptions: {
      socketPath: '/cloudsql/phrasal-charger-215107:asia-south1:newdish123'
    }
  }
);
app.yaml:
runtime: nodejs
env: flex
beta_settings:
  cloud_sql_instances: phrasal-charger-215107:asia-south1:newdish123
MORE INFO
I tried putting '/cloudsql/phrasal-charger-215107:asia-south1:newdish123' in host; it didn't work.
When I put the public IP address given by Cloud SQL in host and set my own IP as an authorized network in GCP, Sequelize runs perfectly and performs operations as expected.
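Before changing the code, it may be worth confirming that the socket path matches the instance's actual connection name and that the Cloud SQL Admin API is enabled, since the flex environment's /cloudsql socket depends on both. A quick check, assuming gcloud is authenticated against the same project:

# Enable the Cloud SQL Admin API used by the App Engine flex /cloudsql socket
gcloud services enable sqladmin.googleapis.com

# Print the connection name that socketPath and app.yaml must match exactly
gcloud sql instances describe newdish123 --format='value(connectionName)'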
I was getting the exact same issue even though my code base hadn't changed at all. What I did to fix it was create a new project, then create a new App Engine instance and a new Cloud SQL database in that project.
Then I enabled the SQL Admin API for the project and created a new gcloud profile for the new project by running
gcloud init
Then I deployed the app again:
gcloud app deploy
You can find the setup of my project here:
Node.JS on Google App Engine with Cloud SQL Error: connect ENOENT /cloudsql/
Manually add the IP addresses of the machines/networks that need access to Cloud SQL to the Authorized Networks section. Also, check your proxy settings when making the connection.

OpenShift monitoring with ManageIQ

I am new to the ManageIQ monitoring tool and am trying to connect ManageIQ to OpenShift. My development environment is Windows 10 with a Docker container running the manageiq/manageiq:fine-1 image. I made a successful connection to Hawkular on localhost by typing in the actual IP address of Hawkular, but in the case of OpenShift I can hardly establish a connection to the OpenShift provider. My OpenShift provider configuration in ManageIQ is:
openshift host : https://api.starter-us-east-1.openshift.com
API port : 443
Verify TLS Certificates : deactivated.
username and password : Red Hat sign in username and password
But it throws an error message.
As with the Hawkular connection, do I have to type in the actual IP address? Where can I find the IP address of the OpenShift cloud?
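One way to rule out name-resolution or network problems before digging into ManageIQ itself is to probe the API endpoint by hostname from the ManageIQ container; a hypothetical check:

# If this returns version JSON, the hostname resolves and no raw IP is needed
# (-k skips TLS verification, matching the provider setting above)
curl -k https://api.starter-us-east-1.openshift.com:443/version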
I chose the wrong menu. The correct menu is "Compute" => "Container" => "Provider"; with the provider added there, the configuration above works.

Can't connect to Google Cloud SQL (2nd gen) from GCE (Google Compute Engine)

I can't connect to Google Cloud SQL from GCE even though I added the public IP (external IP) of my GCE instance as an authorized network. It works when I add "0.0.0.0" to the authorized networks, but obviously I don't want to do that. The authorized network setting may be the cause, but I can't figure it out. Does anyone know about this?
I'm using Google Cloud SQL Second Generation (beta) and am trying to connect from the GCP cloud console. Although it may not be necessary, I changed the external IP setting from ephemeral to static, but it didn't work.
mysql -u root -p -h xxxx <--- I can log in normally if I add "0.0.0.0" to the authorized networks.
I've double-checked this similar question:
Linking Google Compute Engine and Google Cloud SQL
1. Ensure your Cloud SQL instance has an IPv4 address.
2. Find out the public IP address of your GCE instance and add it as an authorized network on your Cloud SQL instance (see the gcloud sketch after this list).
3. Add a MySQL username and password for your instance with remote access.
4. When connecting from GCE, use your standard MySQL connection system (e.g. mysqli_connect) with the username and password you just set up, connecting to the IPv4 address of your Cloud SQL instance.
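For step 2, the authorized network can also be added from the command line; a sketch with placeholder values (INSTANCE_NAME and the address are hypothetical, and the IP must be in CIDR notation):

# Authorize a single external IP (/32) to reach the Cloud SQL instance
gcloud sql instances patch INSTANCE_NAME --authorized-networks=203.0.113.10/32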
Edit 1
I noticed this description:
Note: Connecting to Cloud SQL from Compute Engine using the Cloud SQL Proxy is currently available only for Cloud SQL Second Generation instances.
https://cloud.google.com/sql/docs/compute-engine-access
Does this mean that I have to use the proxy?
Edit 2
$ mysql -u root -p -h (Cloud SQL Instance's IP)
Enter password:
ERROR 2003 (HY000): Can't connect to MySQL server on '(Cloud SQL Instance's IP)' (110)
Edit 3
Does this mean that I have to use the proxy?
According to the official documentation, as Vadim said, the Cloud SQL Proxy seems to be optional, but it sounds better for security, flexibility, and also price (a static IP is charged; however, the proxy setup may be complicated for me).
https://cloud.google.com/sql/docs/compute-engine-access
If you are connecting to a Cloud SQL First Generation instance, then you must use its IP address to connect. However, if you are using a Cloud SQL Second Generation instance, you can also use the Cloud SQL Proxy or the Cloud SQL Proxy Docker image.
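If you do go the proxy route, the basic invocation is small; a sketch assuming the proxy binary has been downloaded onto the GCE instance and PROJECT:REGION:INSTANCE is your instance's connection name:

# Run the proxy locally, then point mysql at 127.0.0.1:3306
./cloud_sql_proxy -instances=PROJECT:REGION:INSTANCE=tcp:3306 &
mysql -u root -p -h 127.0.0.1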
Edit 4
I found the reason... my mistake: I was trying to connect from Google Cloud Shell, which is not my GCE instance. It works when I connect from my GCE instance.
Did you add the public IP of the GCE VM under authorized networks?
From your post:
2. Find out the public IP address of your GCE instance and add it as an authorized network on your Cloud SQL instance.
The official documentation is here:
https://cloud.google.com/sql/docs/external#appaccessIP

How to obtain service credentials for a service instance created on IBM Bluemix without binding the instance to an application on Bluemix?

I have created a ClearDB MySQL instance on IBM Bluemix. Can I see the credentials (hostname, username, password, etc.) without binding the instance to an application running on Bluemix?
Thank you, Sandhya
It depends on whether the service provider has implemented the Service Keys feature. If they have, you can generate new credentials by clicking "Service Credentials" on the service dashboard page.
ClearDB currently requires you to bind it to a Cloud Foundry application to obtain service credentials.
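For providers that do implement service keys, the Cloud Foundry CLI can create and read credentials without binding an app; a sketch with hypothetical names (per the answer above, this will not work for ClearDB if it lacks the feature):

# Create a service key, then print its credentials (hostname, username, password, ...)
cf create-service-key my-cleardb my-key
cf service-key my-cleardb my-key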