MCF: Can't provision mysql service

Running MCF on VMware Fusion 5.0.1 on OS X 10.7.4; VMC is version 0.3.19.
When I attempt to create a mysql service on the MCF, vmc reports:
maguro:Desktop darrellberry$ vmc create-service
1: redis
2: mongodb
3: postgresql
4: mysql
5: rabbitmq
Which service would you like to provision?: 4
Creating Service [mysql-eaca7]:
Error 503: Unexpected response from service gateway
On the MCF instance, /var/vcap/sys/log/mysql_gateway/mysql_gateway.log shows:
[2012-09-06 09:14:56] mysql_gateway - 3249 c74f 72e9 INFO -- Sending info to cloud controller: http://api.xx.cloudfoundry.me/services/v1/offerings
[2012-09-06 09:14:56] mysql_gateway - 3249 c74f 72e9 INFO -- Successfully registered with cloud controller
[2012-09-06 09:15:55] mysql_gateway - 3249 c74f 72e9 DEBUG -- Provision request for label=mysql-5.1 plan=free
[2012-09-06 09:15:55] mysql_gateway - 3249 c74f 72e9 DEBUG -- [MyaaS-Provisioner] Attempting to provision instance (request={:label=>"mysql-5.1", :name=>"mysql-83457", :email=>"xx@xx.com", :plan=>"free"})
[2012-09-06 09:15:56] mysql_gateway - 3249 c74f 72e9 INFO -- Sending info to cloud controller: http://api.xx.cloudfoundry.me/services/v1/offerings
[2012-09-06 09:15:56] mysql_gateway - 3249 c74f 72e9 INFO -- Successfully registered with cloud controller
[2012-09-06 09:15:57] mysql_gateway - 3249 c74f 72e9 DEBUG -- [MyaaS-Provisioner] Found the following nodes: []
[2012-09-06 09:16:05] mysql_gateway - 3249 c74f 72e9 WARN -- Request timeout in 10 seconds.
(URLs and email obfuscated here -- the ones in the logs look correct.)
This is 100% repeatable. I can, however, provision services of the other types (postgresql, rabbitmq, etc.) without error. All help appreciated.

For whatever reason, the mysql node is not starting (each service has a node and a gateway). Have a look at the last 100 lines of /var/vcap/sys/log/mysql/mysqld.err.log and see if there is anything glaringly obvious.
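For example:
tail -n 100 /var/vcap/sys/log/mysql/mysqld.err.log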
Better yet, double-check the service is running by installing telnet and connecting to port 3306 on the VM itself:
sudo apt-get install telnet
telnet localhost 3306
If the connection opens immediately and you see something like:
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
<
5.1.54-rel12.5?xaZ!5%Wh?'%&,Ks%Xn#4"^]
then mysql is definitely running. mysqld should also appear in the process list.
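For example (the [m] stops grep from matching itself):
ps aux | grep [m]ysqld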

Related

Change default path to chrome execution in karate-chrome to use chromium [duplicate]

I want to set up a headless Chrome driver for UI test automation in Jenkins.
But to run the test command
sudo -E java -jar karate-0.9.3.jar karate_GUI.feature
I have to run as root, and Chrome then requires --no-sandbox, which, if I'm not wrong, is still not supported in v0.9.3.
If possible, how can I include the --no-sandbox option?
I checked https://intuit.github.io/karate/karate-core/ and there is no --no-sandbox option.
My feature configuration:
Feature: message end-point
Background:
* configure driver = { type: 'chrome', executable: '/usr/bin/google-chrome', headless: true }
# Login Url
* def browserManagementUrl = 'http://localhost:8000/login/'
Scenario: GUI Testing for Login page
Given driver browserManagementUrl
And eval driver.input('input[name=name]', 'admin')
And eval driver.input('input[name=password]', 'adminadmin')
And driver.submit('#login-button')
When driver.submit('#login-button')
Then match driver.location == 'http://localhost:8000/select/'
The Linux command and its results:
sudo -E java -jar karate-0.9.3.jar karate_GUI.feature
07:15:56.296 [main] INFO com.intuit.karate.Main - Karate version: 0.9.3
07:15:57.345 [ForkJoinPool-1-worker-1] WARN com.intuit.karate - skipping bootstrap configuration: could not find or read file: classpath:karate-config.js
07:15:57.418 [chrome_1560323757416] DEBUG c.i.k.driver.chrome_1560323757416 - command: [/usr/bin/google-chrome, --remote-debugging-port=9222, --no-first-run, --user-data-dir=/var/jenkins_home/workspace/my-karate_GUI#2/integrations/target/chrome_1560323757416, --disable-popup-blocking, --headless]
07:15:57.419 [ForkJoinPool-1-worker-1] DEBUG c.i.k.driver.chrome_1560323757416 - poll attempt #0 for port to be ready - localhost:9222
07:15:57.420 [chrome_1560323757416] DEBUG c.i.k.driver.chrome_1560323757416 - env PATH: /sbin:/bin:/usr/sbin:/usr/bin
07:15:57.423 [ForkJoinPool-1-worker-1] DEBUG c.i.k.driver.chrome_1560323757416 - sleeping for millis: 250
07:15:57.674 [ForkJoinPool-1-worker-1] DEBUG c.i.k.driver.chrome_1560323757416 - poll attempt #1 for port to be ready - localhost:9222
07:15:57.675 [ForkJoinPool-1-worker-1] DEBUG c.i.k.driver.chrome_1560323757416 - sleeping for millis: 250
07:15:57.793 [chrome_1560323757416] DEBUG c.i.k.driver.chrome_1560323757416 - [0612/071557.791933:ERROR:zygote_host_impl_linux.cc(89)] Running as root without --no-sandbox is not supported. See https://crbug.com/638180.
07:15:57.810 [chrome_1560323757416] DEBUG c.intuit.karate.shell.CommandThread - command complete, exit code: 1 - [/usr/bin/google-chrome, --remote-debugging-port=9222, --no-first-run, --user-data-dir=/var/jenkins_home/workspace/my-karate_GUI#2/integrations/target/chrome_1560323757416, --disable-popup-blocking, --headless]
07:15:57.926 [ForkJoinPool-1-worker-1] DEBUG c.i.k.driver.chrome_1560323757416 - poll attempt #2 for port to be ready - localhost:9222
07:15:57.927 [ForkJoinPool-1-worker-1] DEBUG c.i.k.driver.chrome_1560323757416 - sleeping for millis: 250
07:15:58.178 [ForkJoinPool-1-worker-1] DEBUG c.i.k.driver.chrome_1560323757416 - poll attempt #3 for port to be ready - localhost:9222
[...]
07:16:02.206 [ForkJoinPool-1-worker-1] DEBUG c.i.k.driver.chrome_1560323757416 - poll attempt #19 for port to be ready - localhost:9222
07:16:02.207 [ForkJoinPool-1-worker-1] DEBUG c.i.k.driver.chrome_1560323757416 - sleeping for millis: 250
07:16:02.848 [ForkJoinPool-1-worker-1] DEBUG c.i.k.driver.chrome_1560323757416 - request:
1 > GET http://localhost:9222/json
1 > Accept-Encoding: gzip,deflate
1 > Connection: Keep-Alive
1 > Host: localhost:9222
1 > User-Agent: Apache-HttpClient/4.5.5 (Java/1.8.0_212)
07:16:02.862 [ForkJoinPool-1-worker-1] ERROR c.i.k.driver.chrome_1560323757416 - org.apache.http.conn.HttpHostConnectException: Connect to localhost:9222 [localhost/127.0.0.1] failed: Connection refused (Connection refused), http call failed after 13 milliseconds for URL: http://localhost:9222/json
07:16:02.863 [ForkJoinPool-1-worker-1] ERROR c.i.k.driver.chrome_1560323757416 - http request failed:
org.apache.http.conn.HttpHostConnectException: Connect to localhost:9222 [localhost/127.0.0.1] failed: Connection refused (Connection refused)
07:16:02.918 [pool-1-thread-1] INFO com.intuit.karate.Runner - <<fail>> feature 1 of 1: karate_GUI.feature
---------------------------------------------------------
feature: karate_GUI.feature
report: target/karate_GUI.json
scenarios: 1 | passed: 0 | failed: 1 | time: 5.4993
---------------------------------------------------------
Karate version: 0.9.3
======================================================
elapsed: 6.39 | threads: 1 | thread time: 5.50
features: 1 | ignored: 0 | efficiency: 0.86
scenarios: 1 | passed: 0 | failed: 1
======================================================
failed features:
karate_GUI: karate_GUI.feature:8 -
org.apache.http.conn.HttpHostConnectException: Connect to localhost:9222 [localhost/127.0.0.1] failed: Connection refused (Connection refused)
Exception in thread "main" picocli.CommandLine$ExecutionException: there are test failures
at com.intuit.karate.Main$1.handleExecutionException(Main.java:133)
at picocli.CommandLine.parseWithHandlers(CommandLine.java:1157)
at com.intuit.karate.Main.main(Main.java:139)
I guess you do know that the UI automation pieces are still experimental, and yes, I don't think we support --no-sandbox - feel free to open a ticket and help us with some links to why this is needed, what it does, etc.
A suggested workaround: you can pass a batch file (on Linux, a shell script) as the executable key in the configure driver call. In this script you can then call the real chrome executable with whatever custom parameters or arguments you need.
Do let us know if that works. It also sounds to me like a way to pass custom flags is a needed feature, so do add this to your feature request.
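A minimal sketch of such a wrapper, assuming a hypothetical path /usr/local/bin/chrome-wrapper.sh (remember to chmod +x it):
#!/bin/bash
# Hypothetical wrapper: pass through whatever arguments Karate supplies, adding --no-sandbox
exec /usr/bin/google-chrome --no-sandbox "$@"
Then point the driver config at the wrapper instead of the browser binary:
* configure driver = { type: 'chrome', executable: '/usr/local/bin/chrome-wrapper.sh', headless: true }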
EDIT: for those landing here in future, I'm not 100% sure, but maybe the info here will help: https://github.com/intuit/karate/issues/1134#issuecomment-638990087

My Google Compute Engine startup script has CommunicationsLinkFailures (SQL State 08S01 Error 0) even though it runs fine locally?

I'm deploying my Java 8 Spring Boot app to a Google Compute Engine instance and trying to connect it to a Debian 9 Cloud SQL instance. I'm trying to get the instance to run with my startup-script.sh, but when it tries to boot the Spring Boot application, the startup fails, according to daemon.log, when the .war is run with "java -jar order-routing-0.0.1-SNAPSHOT.war": "Unable to obtain connection from database: Communications link failure", with SQL state 08S01 and error code 0.
I mapped the GCE instance to a static external IP and whitelisted that IP in the Cloud SQL instance's connection settings. I also verified that the war file runs locally with "java -jar order-routing.war".
Here is my startup-script.sh:
#!/usr/bin/env bash
# This script is passed to the GCE instance by the setup script. It is run on the instance when it is spun up.
# Derived from GCE Tutorial at https://cloud.google.com/java/docs/tutorials/bookshelf-on-compute-engine
# [START script]
set -e
set -v
# Talk to the metadata server to get the project id
PROJECTID=$(curl -s "http://metadata.google.internal/computeMetadata/v1/project/project-id" -H "Metadata-Flavor: Google")
BUCKET=$(curl -s "http://metadata.google.internal/computeMetadata/v1/instance/attributes/BUCKET" -H "Metadata-Flavor: Google")
echo "Project ID: ${PROJECTID}"
# get our file(s)
gsutil cp "gs://order-routing-install/gce/"** .
# Install dependencies from apt
apt-get update
apt-get install mysql-client -y
wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64
mv cloud_sql_proxy.linux.amd64 cloud_sql_proxy
chmod +x cloud_sql_proxy
./cloud_sql_proxy -instances=instance-qa1:us-central1:instance-qa1-cloudsql-0=tcp:3307 &
apt-get install -yq default-jre
apt-get install -yq default-jdk
java -jar order-routing-0.0.1-SNAPSHOT.war
# [END script]
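For reference, the cloud_sql_proxy line above makes the proxy listen on 127.0.0.1:3307 and forward to the Cloud SQL instance, so a datasource that is meant to go through it would point at that port, something like this (hypothetical database name and credentials):
spring.datasource.url=jdbc:mysql://127.0.0.1:3307/orderrouting
spring.datasource.username=appuser
spring.datasource.password=<password>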
Here is the failing error log from the daemon.log:
ConfigServletWebServerApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'flywayInitializer' defined in class path resource [org/springframework/boot/autoconfigure/flyway/FlywayAutoConfiguration$FlywayConfiguration.class]: Invocation of init method failed; nested exception is org.flywaydb.core.internal.exception.FlywaySqlException:
Jul 29 03:39:30 order-routing-group-hk66 startup-script: INFO startup-script: Unable to obtain connection from database: Communications link failure
Jul 29 03:39:30 order-routing-group-hk66 startup-script: INFO startup-script: 2019-07-29 03:39:30.435 INFO 9051 --- [ main] ConditionEvaluationReportLoggingListener :
Jul 29 03:39:30 order-routing-group-hk66 startup-script: INFO startup-script:n from database: Communications link failure
Jul 29 03:39:30 order-routing-group-hk66 startup-script: INFO startup-script: The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
Jul 29 03:39:30 order-routing-group-hk66 startup-script: INFO startup-script: SQL State : 08S01
Jul 29 03:39:30 order-routing-group-hk66 startup-script: INFO startup-script: Error Code : 0
Jul 29 03:39:30 order-routing-group-hk66 startup-script: INFO startup-script: Message : Communications link failure
Expected Result:
The GCE instance starts up the war file successfully and I can access my app using the external IP.
Actual Result:
Getting CommunicationsLinkFailures upon starting up Spring Boot Java app.

Unable to set endpoint using the Azure CLI

I used docker-machine with Azure as the driver to spin up a VM. I then deployed a simple nginx test container onto the host. My issue is that when I try to set an endpoint, I get the following error:
azure vm endpoint create huldra 80 32769
info: Executing command vm endpoint create
+ Getting virtual machines
+ Reading network configuration
+ Updating network configuration
error: Parameter 'ConsoleScreenshotBlobUri' should not be set.
info: Error information has been recorded to /Users/ryan/.azure/azure.err
error: vm endpoint create command failed
When I look at the error log, it pretty much repeats what the console said: Parameter 'ConsoleScreenshotBlobUri' should not be set.
Here are my docker and azure environment details:
❯ docker info
Containers: 1
Running: 1
Paused: 0
Stopped: 0
Images: 3
Server Version: 1.10.2
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 21
Dirperm1 Supported: true
Execution Driver: native-0.2
Logging Driver: json-file
Plugins:
Volume: local
Network: bridge null host
Kernel Version: 4.2.0-18-generic
Operating System: Ubuntu 15.10
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 1.636 GiB
Name: huldra
ID: PHUY:JRE3:DOJO:NNWO:JBBH:42H2:56ZO:HVSB:MZDE:QLOI:GO6F:SCC5
WARNING: No swap limit support
Labels:
provider=azure
~/Projects/dockerswarm master*
❯ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ce51127b2bb8 nginx "nginx -g 'daemon off" 11 minutes ago Up 11 minutes 0.0.0.0:32769->80/tcp, 0.0.0.0:32768->443/tcp machinenginx
❯ azure --version
0.9.17 (node: 5.8.0)
❯ azure vm list
info: Executing command vm list
+ Getting virtual machines
data: Name Status Location DNS Name IP Address
data: ------ --------- -------- ------------------- -------------
data: huldra ReadyRole West US huldra.cloudapp.net x.x.x.x
info: vm list command OK

hadoop examples not running on amazon ec2

I am using hadoop-1.0.4 on Amazon EC2 with 3 Ubuntu 12.10 instances, 1 master and 2 slaves, installed just under the ~ directory.
Now start-all.sh and stop-all.sh run OK, but when I run jps on the master or slaves, it prints nothing. Then I tested the hadoop examples:
~/hadoop$ bin/hadoop jar hadoop-examples-1.0.4.jar pi 10 10000
It shows
Exception in thread "main" java.io.IOException: Permission denied
at java.io.UnixFileSystem.createFileExclusively(Native Method)
at java.io.File.createTempFile(File.java:1879)
at org.apache.hadoop.util.RunJar.main(RunJar.java:115)
However, I have already run chmod -R 777 on the tmp folders.
~/hadoop$ sudo bin/hadoop jar hadoop-examples-1.0.4.jar pi 10 10000
With sudo, it produces
13/05/12 03:58:11 WARN conf.Configuration: DEPRECATED: hadoop-site.xml
found in the classpath. Usage of hadoop-site.xml is deprecated.
Instead use core-site.xml, mapred-site.xml and hdfs-site.xml to
override properties of core-default.xml, mapred-default.xml
and hdfs-default.xml respectively
Number of Maps = 10
Samples per Map = 10000
13/05/12 03:58:12 WARN fs.FileSystem: "54.235.101.85:50001" is a deprecated
filesystem name. Use "hdfs://54.235.101.85:50001/" instead.
13/05/12 03:58:13 INFO ipc.Client: Retrying connect to server:
hdmaster/54.235.101.85:50001. Already tried 0 time(s).
13/05/12 03:58:14 INFO ipc.Client: Retrying connect to server:
hdmaster/54.235.101.85:50001. Already tried 1 time(s).
13/05/12 03:58:15 INFO ipc.Client: Retrying connect to server:
hdmaster/54.235.101.85:50001. Already tried 2 time(s).
Then it failed to connect. So what is the problem? Should I use sudo to run the examples? Thanks a lot.
I think the problem is that 54.235.101.85 is a public IP address. Use ifconfig on all the nodes to get a list of IP addresses and check for ones beginning with 10.x.x.x/172.x.x.x/192.x.x.x. If you find any, modify your configuration files on all the nodes accordingly.
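For example, in conf/core-site.xml on every node (substituting the master's actual private address for the hypothetical 10.0.0.5, and keeping your port 50001):
<property>
  <name>fs.default.name</name>
  <value>hdfs://10.0.0.5:50001</value>
</property>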

MySQL Cluster - [ndbd] ERROR -- Couldn't start as daemon, error: 'Failed to open logfile'

Recently I wanted to set up a MySQL Cluster: one mgmt node, one SQL node and two data nodes.
It seems to have installed successfully and the mgmt node started, but when I try to start a data node I hit a problem...
Here is the error message when I try to start the data node:
Does anyone know what's going wrong?
Basically I followed the step-by-step tutorial on this site and this site.
It would be much appreciated if you could give me some advice!
Thanks
Okay, I came up with a solution to fix this issue: 2013-01-18 09:26:10 [ndbd] ERROR -- Couldn't start as daemon, error: 'Failed to open logfile'
I was stuck with the same issue, and after exploring I opened $MY_CLUSTER_INSTALLATION/ndb_data/ndb_1_cluster.log.
1. I found the following message in the log:
2013-01-18 09:24:50 [MgmtSrvr] INFO -- Got initial configuration
from 'conf/config.ini',
will try to set it when all ndb_mgmd(s) started
2013-01-18 09:24:50 [MgmtSrvr] INFO -- Node 1: Node 1 Connected
2013-01-18 09:24:54 [MgmtSrvr] ERROR -- Unable to bind management
service port: *:1186!
Please check if the port is already used,
(perhaps a ndb_mgmd is already running),
and if you are executing on the correct computer
2013-01-18 09:24:54 [MgmtSrvr] ERROR -- Failed to start mangement service!
2. I checked the services running on that port on my Mac using the following command:
lsof -i :1186
And sure enough, I found the ndb_mgmd(s):
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
ndb_mgmd 418 8u IPv4 0x33a882b4d23b342d 0t0 TCP *:mysql-cluster (LISTEN)
ndb_mgmd 418 9u IPv4 0x33a882b4d147fe85 0t0 TCP localhost:50218->localhost:mysql-cluster (ESTABLISHED)
ndb_mgmd 418 10u IPv4 0x33a882b4d26901a5 0t0 TCP localhost:mysql-cluster->localhost:50218 (ESTABLISHED)
3. To kill the processes on the specific port (for me: 1186) I ran the following command:
lsof -P | grep '1186' | awk '{print $2}' | xargs kill -9
4. I repeated the steps listed in the MySQL Cluster installation PDF:
$PATH/mysqlc/bin/ndb_mgmd -f conf/config.ini --initial --configdir=/$PATH/my_cluster/conf/
$PATH/mysqlc/bin/ndbd -c localhost:1186
Hope this helps!
Hope this will be useful.
In my case, two data nodes were already connected.
You can check this in your management node:
[root@ab0]# ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
What I did was:
ndb_mgm> shutdown
and then executed the restart commands; it works for me.
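For example, reusing the start commands from the answer above (dropping --initial, since this is a plain restart):
$PATH/mysqlc/bin/ndb_mgmd -f conf/config.ini --configdir=/$PATH/my_cluster/conf/
$PATH/mysqlc/bin/ndbd -c localhost:1186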
Check that the datadir exists and is writeable with "ls -ld /home/netdb/mysql_cluster/data" on datanode1.
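If it is missing or not writeable, something along these lines should fix it (assuming the cluster processes run as user netdb; adjust to your setup):
mkdir -p /home/netdb/mysql_cluster/data
chown -R netdb:netdb /home/netdb/mysql_cluster/data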