I am trying to automate API testing. For that I use Newman. It works in my shell, but when I run it from Jenkins it fails. I am using the command below to run my test cases in Jenkins:
newman -c /home/soham/Desktop/api.json --insecure
Jenkins gives the error below:
Started by user anonymous
Building in workspace /var/lib/jenkins/jobs/api_automation/workspace
[workspace] $ /bin/sh -xe /tmp/hudson4223675241512864410.sh
+ newman -c /home/soham/Desktop/api.json --insecure
The collection file /home/soham/Desktop/api.json could not be parsed.
Build step 'Execute shell' marked build as failure
Finished: FAILURE
How can I solve this problem?
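For what it's worth, a hedged first check: this parse error usually means the file is not readable by the user Jenkins runs as (often jenkins, not soham) or is not valid JSON, and on Newman 3.x and later the -c flag was removed in favor of newman run. A diagnostic sketch for the Execute shell step:
# hedged diagnostic sketch: confirm the jenkins user can read and parse the collection
ls -l /home/soham/Desktop/api.json
python -m json.tool /home/soham/Desktop/api.json > /dev/null && echo "valid JSON"
# on Newman 3.x and later the equivalent invocation is:
newman run /home/soham/Desktop/api.json --insecure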
I have a Kafka consumer application written in Go. I'm trying to deploy it to a PKS cluster. Here is the Dockerfile I have defined:
FROM golang:1.19-alpine as c-bindings
RUN apk update && apk upgrade && apk add pkgconf git bash build-base sudo
FROM c-bindings as app-builder
WORKDIR /go/app
COPY . .
RUN go mod download
RUN go mod verify
RUN apk add librdkafka-dev pkgconf
RUN go build -race -tags dynamic --ldflags "-extldflags -static -s -w" -o main ./main.go
FROM scratch AS app-runner
WORKDIR /go/app/
COPY --from=app-builder /go/app/main ./main
CMD ["/go/app/main"]
I need GSSAPI as the SASL mechanism, hence I added this line to the Dockerfile (above):
RUN apk add librdkafka-dev pkgconf
However, building the image fails with the error below:
ERROR [app-builder 6/6] RUN go build -race -tags dynamic --ldflags "-extldflags -static -s -w" -o main ./main.go 9.9s
------
 > [app-builder 6/6] RUN go build -race -tags dynamic --ldflags "-extldflags -static -s -w" -o main ./main.go:
#13 4.598 # github.com/confluentinc/confluent-kafka-go/kafka
#13 4.598 ../pkg/mod/github.com/confluentinc/confluent-kafka-go@v1.9.2/kafka/00version.go:44:2: error: #error "confluent-kafka-go requires librdkafka v1.9.0 or later. Install the latest version of librdkafka from the Confluent repositories, see http://docs.confluent.io/current/installation.html"
#13 4.598 44 | #error "confluent-kafka-go requires librdkafka v1.9.0 or later. Install the latest version of librdkafka from the Confluent repositories, see http://docs.confluent.io/current/installation.html"
#13 4.598 | ^~~~~
------
executor failed running [/bin/sh -c go build -race -tags dynamic --ldflags "-extldflags -static -s -w" -o main ./main.go]: exit code: 2
Apparently
RUN apk add librdkafka-dev pkgconf
is not able to pull a recent enough version of librdkafka for the golang:1.19-alpine base. Am I missing something here?
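One hedged explanation: the librdkafka-dev package on the stable Alpine branch behind golang:1.19-alpine can be older than the v1.9.0 minimum that confluent-kafka-go v1.9.2 enforces. A common workaround (an assumption here, not verified against this image) is to install it from Alpine's edge/community repository instead:
# hedged sketch: pull a newer librdkafka-dev from Alpine's edge/community repo,
# since the stable branch may ship a build older than the required v1.9.0
RUN apk add --no-cache --repository https://dl-cdn.alpinelinux.org/alpine/edge/community librdkafka-dev pkgconf
Note also that --ldflags "-extldflags -static" pulls against -tags dynamic: the dynamic build links librdkafka as a shared library, which the FROM scratch runtime stage will not contain, so a fully static build or a non-scratch runtime image may be needed as well.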
I'm trying to add a predeploy hook for AWS Elastic Beanstalk.
The file layout is:
+-- .platform
    +-- hooks
        +-- predeploy
            +-- 01_npm_install_and_build.sh
With the following contents:
curl --silent --location https://rpm.nodesource.com/setup_16.x | sudo bash -
sudo yum -y install nodejs
cd /var/app/current/
sudo npm install
sudo npm run build
I've tested that the script works by SSHing into the instance and running sh 01_npm_install_and_build.sh. I can see the hook failing during deployment by looking at the log file with tail -f /var/log/eb-engine.log.
I also tried postdeploy and hit the same issue; here's that error:
[ERROR] An error occurred during execution of command [app-deploy] - [RunAppDeployPostDeployHooks]. Stop running the command. Error: Command .platform/hooks/postdeploy/01_npm_install_and_build.sh failed with error fork/exec .platform/hooks/postdeploy/01_npm_install_and_build.sh: exec format error
The problem was that I was missing a shebang at the top of the sh script.
The script should start with:
#!/bin/bash
...or see "What is the preferred Bash shebang ("#!")?" to check which shebang you should be using; which sh should give you an idea.
Furthermore, /var/app/current/ isn't available at predeploy, so use /var/app/staging/ instead.
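Putting both fixes together, a hedged sketch of the corrected hook (Beanstalk also requires the hook file to be executable, e.g. chmod +x .platform/hooks/predeploy/01_npm_install_and_build.sh):
#!/bin/bash
# hedged sketch of the fixed predeploy hook: shebang added, and the build runs
# against /var/app/staging/, which exists at predeploy time (set -e is an
# assumption so a failed step fails the deploy instead of continuing silently)
set -e
curl --silent --location https://rpm.nodesource.com/setup_16.x | sudo bash -
sudo yum -y install nodejs
cd /var/app/staging/
sudo npm install
sudo npm run build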
I use a shutdown-script to back up the files on an instance before it is shut down.
In this shutdown-script, the gsutil tool is used to send files to a bucket in Google Cloud Storage:
/snap/bin/gsutil -m rsync -d -r /home/ganjin/notebook gs://ganjin-computing/XXXXXXXXXXX/TEST-202104/notebook
It worked well for a long time, but recently the error below started occurring.
If I run the command manually, it works fine. It seems that something is wrong with systemd's job management.
Could anyone give me a hint?
INFO shutdown-script: /snap/bin/gsutil -m rsync -d -r /home/ganjin/notebook gs://ganjin-computing/XXXXXXXXXXX/TEST-202104/notebook
Apr 25 03:00:41 instance-XXXXXXXXXXX systemd[1]: Requested transaction contradicts existing jobs: Transaction for snap.google-cloud-sdk.gsutil.d027e14e-3905-4c96-9e42-c1f5ee9c6b1d.scope/start is destructive (poweroff.target has 'start' job queued, but 'stop' is included in transaction).
Apr 25 03:00:41 instance-XXXXXXXXXXX shutdown-script: INFO shutdown-script: internal error, please report: running "google-cloud-sdk.gsutil" failed: cannot create transient scope: DBus error "org.freedesktop.systemd1.TransactionIsDestructive": [Transaction for snap.google-cloud-sdk.gsutil.d027e14e-3905-4c96-9e42-c1f5ee9c6b1d.scope/start is destructive (poweroff.target has 'start' job queued, but 'stop' is included in transaction).]
Update gsutil with the -f option:
gsutil update -f
If the above command doesn’t work, then try the command below:
sudo apt-get update && sudo apt-get --only-upgrade install google-cloud-sdk
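If the SDK is snap-managed, which the log suggests since gsutil runs from /snap/bin, a hedged alternative is to refresh the snap itself:
sudo snap refresh google-cloud-sdk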
Update the guest environment and try to shut down the instance. Use the link below as a reference for updating the guest environment:
https://cloud.google.com/compute/docs/images/install-guest-environment#update-guest
If you are still facing issues, do a forceful shutdown:
sudo poweroff -f
I am trying to send a prediction request as JSON to a Docker image of an AutoML model running in a container. I exported the image from the AutoML UI and stored it in Google Cloud Storage.
I am running the following to launch the Docker image:
CPU_DOCKER_GCS_PATH="gcr.io/automl-vision-ondevice/gcloud-container-1.12.0:latest"
YOUR_MODEL_PATH="gs://../../saved_model.pb"
PORT=8501
CONTAINER_NAME="my_random_name"
sudo docker run --rm --name ${CONTAINER_NAME} -p ${PORT}:8501 -v ${YOUR_MODEL_PATH}:/tmp/mounted_model/0001 -t ${CPU_DOCKER_GCS_PATH}
When I run this command, I get the following error, but the program keeps running:
2019-05-09 11:29:06.810470: E tensorflow_serving/sources/storage_path/file_system_storage_path_source.cc:369] FileSystemStoragePathSource encountered a file-system access error: Could not find base path /tmp/mounted_model/ for servable default
I am running the following command to send the prediction request.
curl -d @/home/arkanil/saved_model/cloud_output.json -X POST http://localhost:8501/v1/models/default:predict
This returns
curl: (52) Empty reply from server.
I have tried to follow the steps described in the Google docs below:
https://cloud.google.com/vision/automl/docs/containers-gcs-tutorial#install-docker
https://docs.docker.com/install/linux/docker-ce/debian/
I am still getting the output:
curl: (52) Empty reply from server.
The expected result should be a JSON response with the prediction scores from the AutoML model running in the container.
It seems you are passing the Google Cloud Storage path of your model directly; Docker's -v flag can only mount a local path, not a gs:// URI. You should download saved_model.pb from GCS to your local machine and point the YOUR_MODEL_PATH variable at its local path.
To download the model, use:
gsutil cp ${YOUR_MODEL_PATH} ${YOUR_LOCAL_MODEL_PATH}/saved_model.pb
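A hedged sketch of the corrected launch, where YOUR_LOCAL_MODEL_PATH is a hypothetical local directory for the downloaded model; the -v mount must point at that local directory, not the gs:// URI:
# hypothetical local directory holding the downloaded saved_model.pb
YOUR_LOCAL_MODEL_PATH="/home/arkanil/model"
gsutil cp ${YOUR_MODEL_PATH} ${YOUR_LOCAL_MODEL_PATH}/saved_model.pb
sudo docker run --rm --name ${CONTAINER_NAME} -p ${PORT}:8501 -v ${YOUR_LOCAL_MODEL_PATH}:/tmp/mounted_model/0001 -t ${CPU_DOCKER_GCS_PATH}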
I am working through the Digital Asset quickstart guide.
I am able to run:
curl -X GET http://localhost:8080/iou
And:
curl -X GET http://localhost:8080/iou/0
Without a problem. However, I am having trouble running:
curl -X PUT -d '{"issuer":"Alice","owner":"Alice","currency":"AliceCoin","amount":1.0,"observers":[]}' http://localhost:8080/iou
And:
curl -X POST -d '{ "newOwner":"Bob" }' http://localhost:8080/iou/ID/transfer
I get the output:
<html><body><h2>500 Internal Server Error</h2></body></html>
Is there a log somewhere that allows me to see what error occurred? How can I debug this issue?
First I stopped the mvn, navigator, and sandbox processes. Then I re-ran:
da run damlc -- package daml/Main.daml target/daml/iou
Then I restarted the sandbox and re-entered:
mvn clean compile exec:java
Now it works fine...
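For reference, a hedged recap of the sequence that fixed it, assuming the quickstart's default layout:
# stop the running mvn, navigator, and sandbox processes first, then:
da run damlc -- package daml/Main.daml target/daml/iou   # rebuild the DAML package
# restart sandbox, then relaunch the application:
mvn clean compile exec:java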