I am trying to find the best way to terminate an OCI instance using the OCI Python SDK. I know that I can stop an instance with
base_compute.instance_action(instance_id, 'STOP')
However, I do not see a TERMINATE action in the documentation for the method: https://docs.oracle.com/en-us/iaas/tools/oci-cli/2.21.5/oci_cli_docs/cmdref/compute/instance/action.html. Any help with this would be great.
Please refer to this document; it has the OCI CLI command to terminate the instance.
Command: oci compute instance terminate [OPTIONS]
You can terminate an instance by calling compute_client.terminate_instance, or compute_client_composite_operations.terminate_instance_and_wait_for_state if you want your code to wait for it to finish.
Here's an example from the Python SDK:
def terminate_instance(compute_client_composite_operations, instance):
    print('Terminating Instance: {}'.format(instance.id))
    compute_client_composite_operations.terminate_instance_and_wait_for_state(
        instance.id,
        wait_for_states=[oci.core.models.Instance.LIFECYCLE_STATE_TERMINATED]
    )
    print('Terminated Instance: {}'.format(instance.id))
    print()
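For context, here is a minimal sketch of how the two clients might be constructed before calling the helper above (it assumes the default config file at ~/.oci/config; the instance OCID is a placeholder):

import oci

# Load credentials from the default OCI config file (~/.oci/config)
config = oci.config.from_file()
compute_client = oci.core.ComputeClient(config)
composite_ops = oci.core.ComputeClientCompositeOperations(compute_client)

# Placeholder OCID -- replace with your instance's OCID
instance = compute_client.get_instance('ocid1.instance.oc1..example').data
terminate_instance(composite_ops, instance)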
A Jenkins pipeline is building Docker images; the OpenShift plugin(s) are used for this.
An example command:
openshift.selector(BUILD_CONFIG_NAME, "${appBcName}").startBuild("--from-dir=${artifactPath}", '--wait','--follow')
While this works smoothly most of the time, whenever this command fails due to some underlying platform issues, almost no information is seen in the Jenkins build job console:
[Pipeline] }
[start-build:buildconfig/amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd] ............................................................
[start-build:buildconfig/amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd] Uploading finished
[start-build:buildconfig/amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd] Error from server (BadRequest): unable to wait for build amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd-857 to run: timed out waiting for the condition
[Pipeline] }
ERROR: Error running start-build on at least one item: [buildconfig/amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd];
{err=, verb=start-build, cmd=oc --server=https://api.scp-west-zone02-z01.net:6443 --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt --namespace=sb-1166-amld5-car-service-se --token=XXXXX start-build buildconfig/amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd --from-dir=./build/libs --wait --follow -o=name , out=Uploading directory "build/libs" as binary input for the build ...
............................................................
Uploading finished
Error from server (BadRequest): unable to wait for build amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd-857 to run: timed out waiting for the condition
, status=1}
[Pipeline] // catchError
I need more verbosity and detailed error information. I checked the start-build command reference, and I thought --build-loglevel [0-5] might help here. When I used it, I got a warning that, since I am using source type 'Binary' in the BuildConfig, logging isn't supported (seriously???)
NOTE: the selector returned when -F/--follow is supplied to startBuild() will be inoperative for the various selector operations.
Consider removing those options from startBuild and using the logs() command to follow the build output.
[start-build:buildconfig/casc-docs-spacetime-ubi-openshift-java-runtimeadoptopenjdk] WARNING: Specifying --build-loglevel with binary builds is not supported.
[start-build:buildconfig/casc-docs-spacetime-ubi-openshift-java-runtimeadoptopenjdk] WARNING: Specifying environment variables with binary builds is not supported.
[start-build:buildconfig/casc-docs-spacetime-ubi-openshift-java-runtimeadoptopenjdk] Uploading directory "build/libs" as binary input for the build ...
[start-build:buildconfig/casc-docs-spacetime-ubi-openshift-java-runtimeadoptopenjdk] ..
How do I get more logs and info while executing the start-build command?
I was facing the same problem; I just used something like:
def build = openshift.selector(BUILD_CONFIG_NAME, "${appBcName}").startBuild("--from-dir=${artifactPath}")
// --wait/--follow dropped: the plugin's NOTE above says the returned selector is
// inoperative when --follow is supplied, so follow the output with logs() instead
build.logs('-f')
And so far it seems to work: I got the logs from my OpenShift build in my Jenkins pipeline. Now I'll try to get the logs only if the build does not complete, to reduce the overall log output.
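In case it helps, here is a sketch of that conditional variant (untested; it assumes the OpenShift client plugin's object() and untilEach() APIs and the standard Build phase names):

def build = openshift.selector(BUILD_CONFIG_NAME, "${appBcName}").startBuild("--from-dir=${artifactPath}")
timeout(30) {
    // Wait until the build reaches a terminal phase
    build.untilEach(1) {
        def phase = it.object().status.phase
        return phase in ["Complete", "Failed", "Error", "Cancelled"]
    }
}
if (build.object().status.phase != "Complete") {
    build.logs()  // print logs only for builds that did not complete
    error "OpenShift build did not complete"
}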
(for future searchers like me ^^)
I have the following code that I would like to execute. I have tried requiring mysql and node-mysql and they both give me the same error:
Code:
var AWS = require("aws-sdk");
var mysql = require("mysql");
exports.handler = (event, context, callback) => {
    try {
        console.log("GOOD");
    }
    catch (error) {
        context.fail(`Exception: ${error}`);
    }
};
Error:
{
  "errorMessage": "Cannot find module 'mysql'",
  "errorType": "Error",
  "stackTrace": [
    "Function.Module._load (module.js:417:25)",
    "Module.require (module.js:497:17)",
    "require (internal/module.js:20:19)",
    "Object.<anonymous> (/var/task/index.js:2:13)",
    "Module._compile (module.js:570:32)",
    "Object.Module._extensions..js (module.js:579:10)",
    "Module.load (module.js:487:32)",
    "tryModuleLoad (module.js:446:12)",
    "Function.Module._load (module.js:438:3)"
  ]
}
How do I import mysql into node using lambda or get this to work?
OK, so this is expected to happen.
The problem is that AWS Lambda runs on a different machine, and there is no way to configure that particular machine to run a custom environment. You can, however, package the mysql (or node-mysql) Node module in a zip and upload it to AWS Lambda. The steps are (a sketch of the exact commands follows the list):
npm install mysql --save
Zip your folder, including your node_modules directory.
Upload this zip file as your code in AWS Lambda.
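As a concrete sketch of those steps (index.js and my-function are placeholders; the last step can also be done through the console):

# from the folder that contains your handler (index.js here)
npm install mysql --save
zip -r function.zip index.js node_modules package.json
# or upload function.zip through the AWS console instead
aws lambda update-function-code --function-name my-function --zip-file fileb://function.zip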
You can also take a better approach by using the Serverless Framework. More info here. In this approach, you write a YAML file which contains all the details and configuration you want to deploy your lambda with. Under your lambda configuration, specify the path to your node module (say, node_modules/**) under the package -> include section. This will package your required module along with your code. Later, using the command line, you can deploy this lambda. It uses the AWS CloudFormation service and is one of the most preferred ways of deploying resources.
More information on packaging using Serverless Framework can be found here.
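For illustration, a minimal serverless.yml along those lines (service, runtime, and handler names are placeholders; the include syntax here matches older Framework versions):

service: my-lambda-service
provider:
  name: aws
  runtime: nodejs12.x
functions:
  handler:
    handler: index.handler
package:
  include:
    - node_modules/**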
Note: To use the Serverless Framework there are a couple of setup steps, like getting API keys for your user, setting the right permissions in IAM, etc. These are just initial setup and won't be needed later. Do perform them prior to deploying with the Serverless Framework.
Hope this helps!
In case anybody needs an alternative:
You can use the Cloud9 IDE (which is free) to open the Lambda function and run npm init from the terminal window against the Lambda function's folder. This generates the Node package file (package.json), which can then be used to install dependencies.
If using package.json, simply add the below and run "npm install":
{
  "dependencies": {
    "mysql": "2.12.0"
  }
}
I experienced this when using knex, although I had mysql in my package.json.
I had to require('mysql') in my Lambda (or a file it references) so that Serverless would package it during deployment.
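For example (a sketch; the explicit require just needs to live in a file your handler loads):

// handler.js (or any file your handler requires)
const knex = require('knex');
require('mysql'); // reference the driver explicitly so Serverless detects and packages it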
I installed Apache Drill on a cluster with 3 nodes.
When I use the following command to start it, it does not actually run:
bin/drillbit.sh start
I don't know how to solve it and would appreciate your help.
ZooKeeper is running without problems.
When I check the log, it shows the following info:
Exception in thread "main" org.apache.drill.exec.exception.DrillbitStartupException: Failure while initializing values in Drillbit.
at org.apache.drill.exec.server.Drillbit.start(Drillbit.java:287)
at org.apache.drill.exec.server.Drillbit.start(Drillbit.java:271)
at org.apache.drill.exec.server.Drillbit.main(Drillbit.java:267)
Caused by: org.apache.drill.exec.exception.DrillbitStartupException: Problem in finding the native library of JPAM (Pluggable Authenticator Module API). Make sure to set Drillbit JVM option 'java.library.path' to point to the directory where the native JPAM exists.:no jpam in java.library.path
I checked the java.library.path; it is the following:
/home/hadoop/bigdata/hadoop-2.7.2/lib/native/::/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
So I added the following setting:
declare -x DRILL_JAVA_LIB_PATH="/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib"
However, it does not work, and the same problem occurs as before.
The declare -x DRILL_JAVA_LIB_PATH snippet you provided will not point Drill to the PAM library. Please follow all the instructions in the Drill docs here: https://drill.apache.org/docs/using-jpam-as-the-pam-authenticator/
Note: you will have to perform those steps on all 3 nodes of your cluster.
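For reference, the core of those instructions is to place the native libjpam.so in a directory on each node and point the Drillbit JVM at it in conf/drill-env.sh (/opt/pam/ is an example path; see the docs for the full steps):

# conf/drill-env.sh, on each of the 3 nodes
export DRILLBIT_JAVA_OPTS="$DRILLBIT_JAVA_OPTS -Djava.library.path=/opt/pam/"

# then restart the Drillbit on each node
bin/drillbit.sh restart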
Does Vibe.D have a built-in terminate function, for when the library is run through a static initializer? I want to terminate the application when vibe.d throws an exception, for example when opening a file.
I have a server listening using the listenHTTP function.
Try getEventDriver().exitEventLoop(), from here and here.
EDIT: There's a simpler version, the standalone function vibe.core.core.exitEventLoop.
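For illustration, a rough sketch of that shutdown path under the static-initializer setup from the question (untested; it assumes the default vibe.d main, and greeting.txt is a placeholder file):

import vibe.core.core : exitEventLoop;
import vibe.core.file : readFileUTF8;
import vibe.http.server;

shared static this()
{
    auto settings = new HTTPServerSettings;
    settings.port = 8080;
    listenHTTP(settings, (req, res) {
        try {
            // any vibe.d I/O error (e.g. a missing file) lands in the catch block
            res.writeBody(readFileUTF8("greeting.txt"));
        } catch (Exception e) {
            exitEventLoop(); // stop the event loop so the application exits
        }
    });
}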
I'm trying to write a script that is executed with the jruby-complete.jar like so:
java -cp derby.jar; -Djdbc.drivers=org.apache.derby.jdbc.EmbeddedDriver -jar jruby-complete.jar -S my_script.rb
I'm using JVM 1.6.0_11 and JRuby 1.4.
In my jruby script I attempt to connect to the database like this.
connection = java.sql.DriverManager.getConnection("jdbc:derby:path_to_my_DB")
This throws a java.sql.SQLException: "No suitable driver found" exception.
I've tried manually loading the driver into the class loader using Class.forName which gives me the same error.
It looks to me like the class loader being used by the DriverManager is not the same as the current thread's. I've tried setting the current thread's class loader using:
JThread = java.lang.Thread
...
class_loader = JavaLang::URLClassLoader.new(
  [JavaLang::URL.new("jar:file:/derby.jar!/")].to_java(JavaLang::URL),
  JRuby.runtime.jruby_class_loader)
JThread.currentThread().setContextClassLoader(class_loader )
But this doesn't help.
Any ideas?
OK, I downloaded jruby-complete.jar and had a go...
This seems to work for me:
java -classpath c:\ruby\db-derby-10.5.3.0-bin\lib\derby.jar;jruby-complete-1.4.0.jar org.jruby.Main -S derby.rb
When using the -jar switch, the -classpath option is ignored (maybe the CLASSPATH shell variable is too). But in the line above, we put both required jars on the class path and pass the class name to execute (i.e. org.jruby.Main). The script being passed in is as per my other answer.
Another option (which I have not tried) would be to alter the jruby-complete.jar manifest file to specify a classpath, as described here:
Adding Classes to the JAR File's Classpath
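If you go that route, the addition would look roughly like this (a sketch; it assumes derby.jar sits next to jruby-complete.jar so a relative entry works). Create a small text file, say additions.mf, containing a single line (manifest files must end with a newline):

Class-Path: derby.jar

Then merge it into the jar's manifest:

jar ufm jruby-complete-1.4.0.jar additions.mf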
First, make sure your driver jar is not corrupted (this made me waste a couple of days one time).
Second, read this about JRuby/Java classloaders: JRuby Wiki
Third (because I haven't played with "jruby-complete"), try this simple script and then see if you can adapt it as you need:
require 'java'
require 'C:\ruby\db-derby-10.5.3.0-bin\lib\derby.jar' # adjust for your machine
include_class "java.sql.DriverManager"

# Instantiating the embedded driver registers it with the DriverManager
derby = org.apache.derby.jdbc.EmbeddedDriver.new
connection = DriverManager.getConnection("jdbc:derby:derbyDB;create=true")
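From there, a quick smoke test might look like this (a sketch; SYSIBM.SYSDUMMY1 is Derby's built-in one-row table):

stmt = connection.create_statement
rs = stmt.execute_query("SELECT 1 FROM SYSIBM.SYSDUMMY1")
puts rs.get_int(1) if rs.next
connection.close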