Connect to Google Cloud MySQL from Cloud Run (Knative)

For a sample project I am working on (https://gitlab.com/connorbutch/reading-comprehension-ws), I am having issues connecting to a Google Cloud MySQL database from Google Cloud Run. When I run locally with the same arguments (in both Docker and Kubernetes), the application works fine.
The steps I followed in setting up my Cloud Run application are listed here (https://cloud.google.com/sql/docs/mysql/connect-run). I included the MySQL database in the Cloud SQL connection info for the service. Things I have tried:
connecting using the IP address in the JDBC connection string (this works locally, but the following statement on the page suggests it might not on Cloud Run: "Cloud Run (fully managed) does not support connecting to the Cloud SQL instance using TCP. Your code should not try to access the instance using an IP address such as 127.0.0.1 or 172.17.0.1.")
connecting using the Unix socket as suggested; with this, the server does not even start
When I start the application on Google Cloud with the IP address in the JDBC URL, the app appears to start successfully:
2020-02-12T02:51:01.733606Z 2020-02-12 02:51:01.733 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
2020-02-12T02:51:01.740162Z 2020-02-12 02:51:01.739 INFO 1 --- [ main] com.connor.Application : Started Application in 15.717 seconds (JVM running for 17.715)
However, when I make the first request, I see the following:
Caused by: com.mysql.cj.exceptions.CJCommunicationsException: Communications link failure
Caused by: java.net.SocketTimeoutException: connect timed out
Caused by: com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
Are there any suggestions? I am wondering if it may be related to having to configure our DataSource as shown here: https://github.com/GoogleCloudPlatform/java-docs-samples/blob/master/cloud-sql/mysql/servlet/src/main/java/com/example/cloudsql/ConnectionPoolContextListener.java
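For reference, the linked sample boils down to pointing JDBC at the Cloud SQL socket factory instead of a TCP address. A minimal sketch of that pattern (HikariCP as in the sample; the env var names match the deploy command in the answer below, and the mysql-socket-factory-connector-j-8 artifact is assumed to be on the classpath):
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import javax.sql.DataSource;

// Sketch based on the linked ConnectionPoolContextListener sample.
HikariConfig config = new HikariConfig();
// No host or IP in the URL; the socket factory handles the connection.
config.setJdbcUrl(String.format("jdbc:mysql:///%s", System.getenv("DB_NAME")));
config.setUsername(System.getenv("DB_USER"));
config.setPassword(System.getenv("DB_PASS"));
// Route the connection through the Cloud SQL unix socket rather than TCP.
config.addDataSourceProperty("socketFactory", "com.google.cloud.sql.mysql.SocketFactory");
config.addDataSourceProperty("cloudSqlInstance", System.getenv("CLOUD_SQL_CONNECTION_NAME"));
DataSource pool = new HikariDataSource(config);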

Following the documentation steps:
Build and deploy container
Connect from Cloud Run
On the last step, "Connecting to Cloud SQL", of the second link, instead of using the code snippet I used the following command from the GitHub instructions:
gcloud run services update helloworld --add-cloudsql-instances [INSTANCE_CONNECTION_NAME] --set-env-vars CLOUD_SQL_CONNECTION_NAME=[INSTANCE_CONNECTION_NAME],DB_USER=[MY_DB_USER],DB_PASS=[MY_DB_PASS],DB_NAME=[MY_DB]
where "helloworld" is the name of my service.
Please be meticulous with the spacing on this command, as it can easily throw errors.
Having said that, after performing a curl command I did not receive any errors, so my application runs successfully.
Additionally, during my investigation of the error you received, I came across this link, which contains a list of possible causes for this error.
Finally, since the error indicates a timeout, you could also try modifying the request timeout of Cloud Run by following this.
Some further links that could come in handy for you are:
Diagnosing issues with Cloud SQL instances
Troubleshooting Cloud Run (fully managed)
I hope this information helps.

Sadly, the https://cloud.google.com/sql/docs/mysql/connect-run document currently does not cover the Knative instructions. Any time you see "Cloud Run (fully managed)", that does not refer to the Kubernetes implementation.
If you are using Cloud Run on a Kubernetes/GKE cluster, this is probably more applicable: https://cloud.google.com/sql/docs/mysql/connect-kubernetes-engine. That said, this approach uses a "Cloud SQL Proxy" sidecar container, which is not a notion Knative supports today.
However, using those instructions, you can connect to the Cloud SQL instance over a private IP, since GKE clusters can be in a VPC (whereas "Cloud Run fully managed" applications currently cannot).
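For readers on plain GKE, the sidecar from those instructions looks roughly like this inside a Deployment's pod spec (image tag, project, region, and instance names are placeholders):
# Sketch of the Cloud SQL Proxy sidecar pattern from the GKE docs.
containers:
- name: my-app
  image: gcr.io/my-project/my-app
  env:
  - name: DB_HOST
    value: "127.0.0.1"  # the app talks to the proxy over localhost
- name: cloudsql-proxy
  image: gcr.io/cloudsql-docker/gce-proxy:1.16
  command: ["/cloud_sql_proxy",
            "-instances=my-project:us-central1:my-instance=tcp:3306"]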

Related

Zabbix Mattermost notification integrations - Timeout exceeded while connecting to 'localhost' when testing Mattermost Media Type

I am trying to integrate our Mattermost with Zabbix to receive notifications on alerts. I've followed the instructions on this link. We are using Zabbix 4.4 with Mattermost 5.19.
After enabling the integration, no alerts are being posted on Mattermost. I tried testing the media type via Administration > Media Types > Mattermost > Test.
I've added the following as the parameters, but it throws the error: Connection timeout of 3 seconds exceeded when connecting to Zabbix server "localhost".
bot_token : {Token generated for the bot in Mattermost}
mattermost_url : {https://mattermost.our-company.com}
send_mode : alarm
I tried changing {ZABBIX_URL} to both http://127.0.0.1 and http://zabbix.our-company.com (the DNS is resolved only internally, but our Mattermost is available on the public network), but neither of them works.
I checked the logs inside /var/log/zabbix but found no errors. I even switched the Zabbix logs to debug mode, but had no luck in any case; the only debug log I got is the following:
2063:20200216:090224.146 trapper got '{"request":"alert.send","sid":"74095b240dd6783618571516f029187a","data":{"parameters":{"zabbix_url":"{$ZABBIX.URL}","send_mode":"alarm","send_to":"{ALERT.SENDTO}","event_tags":"{EVENT.TAGS}","event_name":"{EVENT.NAME}","event_nseverity":"{EVENT.NSEVERITY}","event_ack_status":"{EVENT.ACK.STATUS}","event_value":"{EVENT.VALUE}","event_update_status":"{EVENT.UPDATE.STATUS}","event_date":"{EVENT.DATE}","event_time":"{EVENT.TIME}","event_severity":"{EVENT.SEVERITY}","event_opdata":"{EVENT.OPDATA}","event_id":"{EVENT.ID}","event_update_message":"{EVENT.UPDATE.MESSAGE}","trigger_id":"{TRIGGER.ID}","trigger_description":"{TRIGGER.DESCRIPTION}","host_name":"{HOST.NAME}","host_ip":"{HOST.IP}","event_update_date":"{EVENT.UPDATE.DATE}","event_update_time":"{EVENT.UPDATE.TIME}","event_recovery_date":"{EVENT.RECOVERY.DATE}","event_recovery_time":"{EVENT.RECOVERY.TIME}","bot_token":"qs3rkqdappy6i8gs3a8871phxc","mattermost_url":"https:\/\/mattermost.our-company.com"},"mediatypeid":"7"}}'
What can be the issue? Is there a way to "debug" and find the root cause of this error? Any help is appreciated! Note that right now we have integrated Slack with Zabbix and it's working fine, but we are moving to Mattermost and therefore we need to migrate the integrations as well.
We found the issue with the help of our network admin. The problem was that our Zabbix server was trying to resolve the Mattermost name through a local network route (i.e. 192.168.x.x) and kept failing; therefore, no SSL connection could be initiated.
It seems that the error messages from Zabbix integration tests are quite generic and sometimes misleading. Thorough investigation is needed to find the root cause.
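If you hit something similar, a quick sanity check from the Zabbix server itself can show which address the name resolves to and whether HTTPS is reachable (commands assume a Linux host; the hostname is from our setup):
# What does this host actually resolve the Mattermost name to?
getent hosts mattermost.our-company.com
# Can this host complete an HTTPS request to Mattermost?
curl -v https://mattermost.our-company.com/api/v4/system/ping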

ENOENT when connecting to Google Cloud SQL from App Engine

I'm trying to deploy my Node.js app on Google App Engine. It deployed fine, but it can't connect to Google Cloud SQL for some reason. Here's what it throws:
Error: connect ENOENT /cloudsql/my-project-id:asia-east1:my-sql-instance
Here's how I configured the connection:
if (process.env.INSTANCE_CONNECTION_NAME) {
  // Running on App Engine: connect through the /cloudsql unix socket
  exports.mysqlConfig = {
    user: process.env.GCLOUD_SQL_USERNAME,
    password: process.env.GCLOUD_SQL_PASSWORD,
    socketPath: '/cloudsql/' + process.env.INSTANCE_CONNECTION_NAME
  };
} else {
  // Use settings for localhost
}
I'm using node-mysql module to connect to the database.
The App Engine and Cloud SQL are already in the same project.
My theory is that App Engine and Cloud SQL have to be in the same project and the same region, but I'm not sure.
Check your logs for any errors during startup, either with gcloud app logs tail -s default or with the log viewer: https://console.cloud.google.com/logs/viewer
Chances are that you have not enabled the Cloud SQL API for your project: https://console.developers.google.com/apis/api/sqladmin/overview
Make sure you have added the following setting in app.yaml:
beta_settings:
  # The connection name of your instance, available by using
  # 'gcloud beta sql instances describe [INSTANCE_NAME]' or from
  # the Instance details page in the Google Cloud Platform Console.
  cloud_sql_instances: YOUR_INSTANCE_CONNECTION_NAME
Ref: https://cloud.google.com/appengine/docs/flexible/nodejs/using-cloud-sql-postgres
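You can confirm the exact connection name to use with a command like this (the instance name is a placeholder):
# Prints e.g. my-project-id:asia-east1:my-sql-instance
gcloud sql instances describe my-sql-instance --format='value(connectionName)'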
Apparently the order you do things matters...
enable Cloud SQL API
then (re)deploy your app (gcloud app deploy)
When I did deploy -> create databases -> enable SQL API, I got the ENOENT error.
For anyone using 2nd gen Cloud Functions - they added a portion in the documentation:
If you're using Cloud Functions (2nd gen) and not Cloud Functions (1st gen), the following are required (also see Configure Cloud Run):
They go on to list the steps required. They're a bit scary, but do work eventually.
(If you find yourself looking for the SQL Connection in the new Cloud Run revision, notice there is a separate "Connections" tab for this)

Connection to AWS Database fails with Mule app in Runtime Manager

I've recently created a Mule application (3.7.0 CE) on a laptop. It connects to an AWS RDS instance when running locally in Anypoint Studio using Maven. I started with a local MySQL DB and migrated it to AWS because my application, "proofofconcept", is just that, a proof of concept, and I would like to show the application online (public URL) instead of on my laptop for a presentation. I added the database.url=... property to the application properties when I deployed to Anypoint Runtime Manager in the cloud. I'm currently getting a:
communications link failure
I've tried several things and nothing has worked. I first tried a basic database connection in the database config, and then I created a JDBC datasource in Spring beans. Both methods worked locally and when communicating with AWS (remote). When I deploy to Runtime Manager, the application deploys, and I get the console that's generated at runtime from the RAML. When I call a URL, e.g. api/v1/orders, it runs and runs, and after a timeout produces the communication error.
Does anyone 1) know if the communication is allowed? 2) know how to fix this? I would like to demo the POC online for my client.
Thanks in advance
My issue was with the Amazon VPC and the default security group assigned to my RDS instance. By default, all outbound activity is allowed on any protocol and any port for any IP (0.0.0.0/0). Inbound routing, however, was specifying only port 3306 and a custom source IP that was my home network's public IP. I changed the IP specification to 0.0.0.0/0. This now means that any IP can send a request through port 3306 to my Amazon MySQL instance.
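For anyone scripting the same change, the equivalent AWS CLI call looks roughly like this (the security group ID is a placeholder; note that 0.0.0.0/0 is fine for a demo but too permissive for production, where a tighter rule would use your Runtime Manager workers' IP range):
# Allow inbound MySQL traffic from anywhere on the RDS security group
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 3306 --cidr 0.0.0.0/0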

JMeter - trouble configuring payload for jmeter-server test connecting over SSH

I'm tearing my hair out over a JMeter config issue. I'm running JMeter on a dedicated injection server, using the GUI on my local box to control the tests [EDIT: the connection is SSH; the client is Windows 7 and the server is Linux]. I've run the tests from my local box and confirmed that they work correctly from there. I put the payload (text files containing one JSON object each) onto the injection server and changed the Publisher configuration in the message source section so the path pointed to the files there and...nothing.
This is the only output I get:
2012/09/24 14:26:50 INFO - jmeter.engine.ClientJMeterEngine: running clientengine run method
2012/09/24 14:26:50 INFO - jmeter.samplers.StandardSampleSender: Using StandardSampleSender for this test run
2012/09/24 14:26:50 INFO - jmeter.samplers.StandardSampleSender: Using StandardSampleSender for this test run
2012/09/24 14:26:50 INFO - jmeter.engine.ClientJMeterEngine: sent test to <IP_ADDRESS_OBSCURED> basedir='.'
2012/09/24 14:26:50 INFO - jmeter.engine.ClientJMeterEngine: Sending properties {}
2012/09/24 14:26:50 INFO - jmeter.engine.ClientJMeterEngine: sent run command to <IP_ADDRESS_OBSCURED>
I don't know what I'm doing wrong. I tried Apache's highly comprehensive documentation, but surprisingly there's nothing at all about this. How should I be configuring the path to the payload on the server?
Coincidentally, I solved this one today and was on my way home to post the answer. The important thing to note is that the tests weren't running at all: the server reported start/stop, but the tests never ran. This is why:
I was using a JMS Producer sampler and connecting over SSH. This was part of the problem. In order to connect to a remote SSH server, it's necessary first to create an SSH tunnel, then start the JMeter server and client with special parameters. The process is described in this helpful and concise blog post:
http://blog.ionelmc.ro/2012/02/16/how-to-run-jmeter-over-ssh-tunnel/
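The gist, as I applied it, is roughly the following (ports and hostnames are illustrative; check the post for the exact tunnels and property names your JMeter version needs):
# 1. Forward the RMI registry port to the injection server, and reverse-tunnel
#    the port the server uses to call back into the client.
ssh -N -f user@injection-server -L 1099:127.0.0.1:1099 -R 25000:127.0.0.1:25000

# 2. On the server, bind RMI to localhost so traffic stays inside the tunnel.
jmeter-server -Djava.rmi.server.hostname=127.0.0.1

# 3. On the client, fix the callback port and point JMeter at the tunnel.
jmeter -Djava.rmi.server.hostname=127.0.0.1 -Jclient.rmi.localport=25000 -R 127.0.0.1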
The second mistake I was making was that I was running the server on a Linux box (CentOS) and the client on a Windows 7 desktop. It's not recommended to do this, but I didn't realise it would stop the test from running. I dropped a Linux VM onto my Windows box, ran the tests from there, and everything worked perfectly.

Sqoop authenticates but fails to start a MapReduce job

I am trying to transfer data using Sqoop from HDFS to MSSQL Server, but for some reason Sqoop hangs at:
tool.BaseSqoopTool: Enabled debug logging.
sqoop.ConnFactory: Added factory com.microsoft.sqoop.SqlServer.MSSQLServerManagerFactory specified by /usr/lib/sqoop/conf/managers.d/mssqoop-sqlserver
DEBUG sqoop.ConnFactory: Loaded manager factory: com.microsoft.sqoop.SqlServer.MSSQLServerManagerFactory
DEBUG sqoop.ConnFactory: Loaded manager factory: com.cloudera.sqoop.manager.DefaultManagerFactory
DEBUG sqoop.ConnFactory: Trying ManagerFactory: com.microsoft.sqoop.SqlServer.MSSQLServerManagerFactory
INFO SqlServer.MSSQLServerManagerFactory: Using Microsoft's SQL Server - Hadoop Connector
INFO manager.SqlManager: Using default fetchSize of 1000
DEBUG sqoop.ConnFactory: Instantiated ConnManager com.microsoft.sqoop.SqlServer.MSSQLServerManager#45db05b2
INFO tool.CodeGenTool: Beginning code generation
DEBUG manager.SqlManager: No connection paramenters specified. Using regular API for making connection.
I checked the firewall and it is allowing connections without any restrictions. Sqoop gets authenticated but doesn't initiate a MapReduce job afterwards. Has anyone faced similar problems before?
Try using --verbose to print more information.
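For example (the connection string, credentials, and table are placeholders; this assumes an export from HDFS to SQL Server as described above):
sqoop export --verbose \
    --connect "jdbc:sqlserver://dbhost:1433;databaseName=mydb" \
    --username sqoop_user -P \
    --table orders \
    --export-dir /user/hadoop/orders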
Is your SQL Server running on a Virtual Machine? I had a similar problem with Oracle. I was running Oracle on a VM with a static IP and a Bridged network adapter. Servers within the same network as the Oracle server could connect fine, but servers outside the network showed these same symptoms. The solution was to change from a Bridged Interface to a NAT'd interface. Then you need to set up a port forwarding rule on the host machine to your database server, and make your Sqoop connection to the host machine IP rather than the VM's IP. It took me several days to get this figured out. Hope it helps.
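If your hypervisor happens to be VirtualBox (an assumption; other hypervisors have equivalent settings), the NAT port-forwarding rule looks something like this, after which Sqoop should target the host machine's IP on port 1433:
# Forward host port 1433 to the guest's SQL Server port on the NAT adapter
VBoxManage modifyvm "db-vm" --natpf1 "mssql,tcp,,1433,,1433"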
We had MSSQL Server running on our machines. The problem was that the particular JVM version (Java(TM) SE Runtime Environment, build 1.6.0_29-b11) had a bug that caused the client to hang in the getConnection method.
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=7103725
We upgraded to a newer version and things worked fine.