I followed the instructions in the sample and ran:
vertx run eventbus_pointtopoint/receiver.rb -cluster
vertx run eventbus_pointtopoint/sender.rb -cluster
Then I only got:
➜ ruby vertx run eventbus_pointtopoint/receiver.rb -cluster
Starting clustering...
No cluster-host specified so using address 192.168.56.1
Succeeded in deploying verticle
➜ ruby vertx run eventbus_pointtopoint/sender.rb -cluster
Starting clustering...
No cluster-host specified so using address 192.168.56.1
Succeeded in deploying verticle
But no message was received. What am I doing wrong?
If you just want to see how event bus messaging works, you can move both pieces of code into one file and run it.
require "vertx"
include Vertx
EventBus.register_handler('ping-address') do |msg|
puts "Received message: #{msg.body}"
# Now reply to it
msg.reply('pong!')
end
Vertx::set_periodic(1000) do
EventBus.send('ping-address', 'ping') do |reply|
puts "Received reply: #{reply.body}"
end
end
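For example, assuming the combined code above is saved as ping_pong.rb (the file name is just an assumption), you can run it with:
vertx run ping_pong.rb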
To run clustered
If you want to run the example with clustering, you may need to change your cluster.xml config file, which is in the conf folder of your vert.x install. I had to change two settings. First, change
<multicast enabled="true">
to false:
<multicast enabled="false">
then change
<tcp-ip enabled="false">
to true:
<tcp-ip enabled="true">
and make sure the interface tag has the correct IP address. Then the commands you specified above should run.
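If clustering still binds to the wrong interface (192.168.56.1 is typically a VirtualBox host-only adapter), you can also pass the cluster host explicitly. A sketch, where 192.168.1.10 stands in for your machine's actual LAN address:
vertx run eventbus_pointtopoint/receiver.rb -cluster -cluster-host 192.168.1.10
vertx run eventbus_pointtopoint/sender.rb -cluster -cluster-host 192.168.1.10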
I built the input file (decoded a base64 file into a .p12 file) as CERTIFICATE_PATH; P12_PASSWORD is a password stored in secrets, and KEYCHAIN_PATH is defined. When I run the command on the CLI, I get a "1 item imported" success message, but when I run it from a *.yml file in a GitHub Action, I get the error "security: SecKeychainItemImport: One or more parameters passed to a function were not valid." Any suggestions?
security import $CERTIFICATE_PATH -P $P12_PASSWORD -A -t cert -f pkcs12 -k $KEYCHAIN_PATH
CERTIFICATE_PATH is the file that contains the cert.p12 data,
KEYCHAIN_PATH is TEMP/app-signing.keychain-db
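For context, this is roughly the sequence that usually precedes the import in such workflows (a sketch; BUILD_CERTIFICATE_BASE64 and KEYCHAIN_PASSWORD are hypothetical names, not taken from the question):
# decode the base64 secret back into a binary .p12 file (secret name is hypothetical)
echo -n "$BUILD_CERTIFICATE_BASE64" | base64 --decode > "$CERTIFICATE_PATH"
# create the temporary keychain, then import the certificate into it
security create-keychain -p "$KEYCHAIN_PASSWORD" "$KEYCHAIN_PATH"
security import "$CERTIFICATE_PATH" -P "$P12_PASSWORD" -A -t cert -f pkcs12 -k "$KEYCHAIN_PATH"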
Another possible cause in GitHub Actions is that you are using the wrong environment.
Take a look at this: Difference between GitHub's "Environment" and "Repository" secrets?
Set the right environment:
environment: production
Found the issue: I was passing the wrong cert file. Once I added the correct file in the security import step, I was able to get it working.
Here's my problem:
I would like to connect to a GCP instance. When I run the Google Cloud SDK Shell as an administrator with the command:
gcloud compute ssh my_instance --zone=europe-west1-b -- -L=8081:locahost:8081
...I get this error: ERROR (gcloud.compute.ssh) [..../putty.exe] exited with return code [1]
My instance is running with the enable-oslogin metadata set to TRUE, as is the project.
Do you have any idea what the problem is?
When using -- in the command, you are passing SSH flags after the dashes and not gcloud command flags. To explain, gcloud compute ssh is a thin wrapper around the ssh(1) command that takes care of authentication and the translation of the instance name into an IP address.
In this case, -- is equivalent to --ssh-flag as per this SDK reference. It seems that PuTTY outputs an error that is not passed back to the command line (SDK shell); the actual error should be visible in the dialog window before PuTTY exits.
I have tried the command myself on Windows and the exact error was unknown option "L=8081:localhost:8081". The SSH flag is not accepted because of the = sign there (a typo).
According to the linuxcommand.org manual, the flag should be in this format:
-L [bind_address:]port:host:hostport
Hence, you should run the command like this:
gcloud compute ssh my_instance --zone=europe-west1-b -- -L 8081:localhost:8081
Note also that you may have to create a firewall rule to allow Ingress to the instance on port 8081.
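For completeness, a sketch of how such a rule could be created (the rule name allow-ingress-8081 and the wide-open source range are assumptions; restrict them as appropriate for your project):
gcloud compute firewall-rules create allow-ingress-8081 --direction=INGRESS --action=ALLOW --rules=tcp:8081 --source-ranges=0.0.0.0/0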
I managed to set up Artifactory using our existing Tomcat. I have set ARTIFACTORY_HOME=/opt/artifactory, and that part works well. There is, however, also the JFrog access.war file, which needs to be running as well. I couldn't figure out which variable to use to specify its home, so it defaults to ~/.jfrog_access, which is not at all what I want.
I moved the content over to my $ARTIFACTORY_HOME/access and symlinked it, but that's surely not the way to go. Any help appreciated.
In case someone stumbles over this thread and struggles with the same problem:
The solution for me was to also extract the context files (access.xml and artifactory.xml, which are available in the zip file under <zip extract>/misc/tomcat) to the Tomcat configuration folder, e.g. $CATALINA_HOME/conf/Catalina/localhost/. After that, the $ARTIFACTORY_HOME env variable is recognized on Access startup.
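For illustration, the copy step might look like this (the extraction path /tmp/artifactory-oss is an assumption):
# copy the bundled context files into Tomcat's per-host configuration folder
cp /tmp/artifactory-oss/misc/tomcat/access.xml $CATALINA_HOME/conf/Catalina/localhost/
cp /tmp/artifactory-oss/misc/tomcat/artifactory.xml $CATALINA_HOME/conf/Catalina/localhost/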
A previous answer finally put me on the right track for solving this problem on Amazon Linux.
In addition to copying access.xml and artifactory.xml to ${catalina.home}/host/MY_HOSTNAME, I found that some other changes were needed.
I modified the docBase attributes in the XML context files because my server has multiple hostnames:
/usr/share/tomcat8/conf/Catalina/repo.mydomain.org/access.xml
<Context path="/access" docBase="${catalina.home}/host/repo.mydomain.org/access.war">
    <Parameter name="jfrog.access.bundled" value="true" override="true"/>
    <!-- enable annotations scanning of access jar files -->
    <JarScanner scanClassPath="false">
        <JarScanFilter defaultPluggabilityScan="false" pluggabilityScan="access*" defaultTldScan="false"/>
    </JarScanner>
</Context>
/usr/share/tomcat8/conf/Catalina/repo.mydomain.org/artifactory.xml
<Context crossContext="true" path="/artifactory" docBase="${catalina.home}/host/repo.mydomain.org/artifactory.war">
</Context>
Important Note: In order to prevent the above two XML files from being deleted by Tomcat Manager during upgrades via Undeploy/Deploy WAR, make sure they are owned by root and not writable by the tomcat user:
chown root.root access.xml artifactory.xml
chmod 644 access.xml artifactory.xml
If you forget to do the above, you will likely end up missing these files, which breaks the communication between the access and artifactory web applications and results in login failures ("Username or Password Are Incorrect"). Those errors come from the broken communication between the web applications, not from the credentials themselves.
/usr/share/tomcat8/conf/Catalina/repo.mydomain.org/manager.xml
This gives me the ability to upload new versions of access.war and artifactory.war via https://repo.mydomain.org:8443/manager/html:
<Context docBase="${catalina.home}/webapps/manager" privileged="true" antiResourceLocking="false">
</Context>
Additionally, I created the following folder to serve as the artifactory.home:
sudo mkdir /usr/share/artifactory
sudo chown tomcat.tomcat /usr/share/artifactory
tomcat8.conf
Add (or modify) the following line:
JAVA_OPTS="-Dartifactory.home=/usr/share/artifactory -Djfrog.access.home=/usr/share/artifactory/access -Dartifactory.access.client.serverUrl.override=http://localhost:8080/access"
Note: The Access Client URL specified above must use localhost in order to prevent the Server HTTP header from being overwritten by Apache and its modules. For instance, if I use:
https://repo.mydomain.org/access/api/v1/system/ping
The Server HTTP header value in the response is:
Server: Apache/2.4.33 (Amazon) OpenSSL/1.0.2k-fips mod_jk/1.2.43
And the Access Client produces the following exception:
[ERROR] (o.j.a.c.AccessClientImpl:154) - Access client/server version mismatch. Client version: 4.1.5, Server version: 2.4.33 (Amazon) OpenSSL
Which means the Access Client depends on the first string matching #.#.# in the Server header. This seems like a really fragile part of the Access Client; they should have used X-JFrog-Access-Server or something instead of relying on a value that is set by the web server. So, to reiterate, use http://localhost:8080/access to connect directly to the Tomcat server.
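As a quick sanity check (a sketch; it assumes Tomcat is listening on port 8080 on the same host), you can hit the ping endpoint directly and confirm the Server header now comes from Tomcat rather than Apache:
# -i prints the response headers along with the body
curl -i http://localhost:8080/access/api/v1/system/ping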
Artifactory 6.2.0 depends on Apache Derby (the specific version can be found in jfrog-artifactory-oss-6.2.0.zip\artifactory-oss-6.2.0\tomcat\lib). This should be added as a shared library to Tomcat:
mkdir /usr/share/tomcat8/shared
cd /usr/share/tomcat8/shared
wget http://central.maven.org/maven2/org/apache/derby/derby/10.11.1.1/derby-10.11.1.1.jar
Add or modify the following line in catalina.properties:
shared.loader=${catalina.home}/shared/*.jar
Since we want https://repo.mydomain.org to go to the Artifactory webapp:
mkdir /usr/share/tomcat8/host/repo.mydomain.org/ROOT
echo '<html><head><meta http-equiv="refresh" content="0;URL=/artifactory"></meta></head><body></body></html>' > /usr/share/tomcat8/host/repo.mydomain.org/ROOT/index.html
And make sure the services automatically start on reboot:
sudo chkconfig httpd on
sudo chkconfig tomcat8 on
Artifactory will then be available at the url:
https://repo.mydomain.org/artifactory/webapp/
I am getting the following error when (re)starting my Yesod app on openshift:
server: InvalidYaml (Just (YamlException "Yaml file not found: xxx.xxx.xxx.xxx"))
Where xxx.xxx.xxx.xxx is an IP address. I did find a link to a Heroku+Yesod issue saying something about "removing an argument" but it didn't say from where, and of course the scripts/settings are going to be different in the case of OpenShift. Any ideas what this error is and how to get past it?
I'm assuming, based on the question, that you're using the standard scaffolding. If you look in the code, you'll find that it uses loadAppSettingsArgs, which is described as:
Same as loadAppSettings, but get the list of runtime config files from the command line arguments.
If you don't want to pay attention to command line arguments, just replace the call to loadAppSettingsArgs with loadAppSettings [].
We are using BirtActuate in our application for showing reports.
Actuate -----> JDBC driver --------> MysqlDB
We are aiming to TRACE errors that appear while connecting to MySQL via JDBC.
We have followed the instructions available at http://dev.mysql.com/doc/connector-j/en/connector-j-reference-configuration-properties.html
and tried making connection using following connection string:
jdbc:mysql://192.168.0.1/TestDB?interactiveClient=true&autoReconnect=true&profileSQL=true&traceProtocol=true
As per the documentation of the logger parameter in the link mentioned above, we found that:
The name of a class that implements "com.mysql.jdbc.log.Log" that will
be used to log messages to. (default is
"com.mysql.jdbc.log.StandardLogger", which logs to STDERR)
We want to capture all errors in a file so we can send it to the support people to help us solve the issue. I do not really know how to do that.
Adding &profileSQL=true&traceProtocol=true to JDBC connection URL will cause extra traces to be logged by the BirtActuate's default's logger in directory which in present birtActuateServer is $BIRT_HOME/server/data/logs
Go to the logs directory and run on command prompt
> grep -rl com.mysql.jdbc.exceptions .
This command should list the files in which it found the string "com.mysql.jdbc.exceptions".
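To gather the matching lines into a single file that can be sent to support (a sketch; the output file name mysql-jdbc-errors.txt is arbitrary):
# recurse through the logs and record file name plus line number for each match
grep -rn "com.mysql.jdbc.exceptions" . > mysql-jdbc-errors.txt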