CloudBees AWS Elastic Beanstalk deployment - application not found error - amazon-elastic-beanstalk

I am trying to deploy an application from a Jenkins build on DEV@cloud to AWS
using the instructions given at
https://developer.cloudbees.com/bin/view/DEV/ElasticBeanstalk
However, I am stuck because "cloudbees-deployer:elastic-beanstalk" is not
able to locate my application on AWS.
Here is the console output from the Jenkins build:
[cloudbees-deployer:elastic-beanstalk] Checking if S3 bucket 'photoid-reports-aws' exists...
[cloudbees-deployer:elastic-beanstalk] Checking if S3 bucket 'photoid-reports-aws' location...
[cloudbees-deployer:elastic-beanstalk] S3 bucket 'photoid-reports-aws' location matches: us-east-1
[cloudbees-deployer:elastic-beanstalk] Uploading application to S3 bucket 'photoid-reports-aws/jenkins-photoid-reports-aws-9'...
[cloudbees-deployer:elastic-beanstalk] Application uploaded to S3 bucket 'photoid-reports-aws' with key 'jenkins-photoid-reports-aws-9/deploytest', version id 'null' and eTag '427d78c1e5bfbaa7a1d10f46280236cc-8'
[cloudbees-deployer:elastic-beanstalk] Checking if application version 'prod-build' exists...
[cloudbees-deployer:elastic-beanstalk] Creating application version 'prod-build'...
com.cloudbees.plugins.deployer.exceptions.DeployException: No Application named 'deploytest' found. (Service: AWSElasticBeanstalk; Status Code: 400; Error Code: InvalidParameterValue; Request ID: 0cc70036-470e-11e4-90e5-1717b7862a74)
at com.cloudbees.plugins.deployer.engines.Engine.process(Engine.java:185)
at com.cloudbees.plugins.deployer.engines.Engine.perform(Engine.java:119)
at com.cloudbees.plugins.deployer.DeployBuilder.perform(DeployBuilder.java:104)
at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:825)
at hudson.model.Build$BuildExecution.build(Build.java:199)
at hudson.model.Build$BuildExecution.doRun(Build.java:160)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:606)
at hudson.model.Run.execute(Run.java:1684)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:88)
at hudson.model.Executor.run(Executor.java:232)
Caused by: com.amazonaws.AmazonServiceException: No Application named 'deploytest' found. (Service: AWSElasticBeanstalk; Status Code: 400; Error Code: InvalidParameterValue; Request ID: 0cc70036-470e-11e4-90e5-1717b7862a74)
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:820)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:439)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:245)
at com.amazonaws.services.elasticbeanstalk.AWSElasticBeanstalkClient.invoke(AWSElasticBeanstalkClient.java:1679)
at com.amazonaws.services.elasticbeanstalk.AWSElasticBeanstalkClient.createApplicationVersion(AWSElasticBeanstalkClient.java:540)
at com.cloudbees.plugins.deployer.impl.amazon.EngineImpl$DeployFileCallable.invoke(EngineImpl.java:355)
at com.cloudbees.plugins.deployer.impl.amazon.EngineImpl$DeployFileCallable.invoke(EngineImpl.java:224)
at com.cloudbees.plugins.deployer.engines.Engine$FingerprintingWrapper.invoke(Engine.java:271)
at com.cloudbees.plugins.deployer.engines.Engine$FingerprintingWrapper.invoke(Engine.java:259)
at hudson.FilePath$FileCallableWrapper.call(FilePath.java:2462)
at hudson.remoting.UserRequest.perform(UserRequest.java:118)
at hudson.remoting.UserRequest.perform(UserRequest.java:48)
at hudson.remoting.Request$2.run(Request.java:328)
at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Build step 'Deploy applications' marked build as failure
Finished: FAILURE

Interesting. It looks like CloudBees is assuming that you already have an application named "deploytest". The log shows it is only trying to create a new application version: after the S3 upload succeeds, it checks that the application version doesn't already exist and then tries to create it.
What happens if you go through the Elastic Beanstalk console to set up a new application with the name "deploytest"? Just select the desired Environment Tier, Platform, and Environment Type, and try running the build again. When it asks for an application version, you can just use the sample app, which should be selected by default.
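If you prefer the command line, the missing application can also be created with the AWS CLI; a minimal sketch, assuming the us-east-1 region reported for your S3 bucket in the log above:

# Create the empty Elastic Beanstalk application the plugin expects to find
aws elasticbeanstalk create-application --application-name deploytest --region us-east-1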
Let me know if that helps.

Related

Autodesk Forge design automation [workitems request] fail in heroku deployment "Failed to create a workitem"

I've downloaded the Design Automation sample from GitHub: https://github.com/Autodesk-Forge/learn.forge.designautomation/tree/nodejs
I followed the tutorial to get it working locally.
My issue is that after deploying the app to Heroku and filling in the secret data, everything works correctly except for the workitems request, which fails with a 500 Internal Server Error and the response:
{"diagnostic":"Failed to create a workitem"}
The error log file: https://mega.nz/file/b2Y3wTJD#FqyWubUvewk175j_Y75TfpwfkzZHNXhSH1Tt5NY4HPc
You need to update FORGE_WEBHOOK_URL to https://test-f-daa.herokuapp.com on Heroku as well. From the error log:
{"url":["Failed to create URL for 'undefined/api/forge/callback/designautomation?....

GCP deployment fails on "Updating service"

I have an ASP.NET Core application hosted on GCP App Engine. When I try to deploy the application, it fails on the last step:
Updating service [name] (this may take several minutes)... ...failed
ERROR: (gcloud.app.deploy) Error Response: [9] An internal error occurred while processing task /app-engine-flex/flex_await_healthy/flex_await_healthy>blablabla.wm.1
The exception stack trace shows that a service running in the background couldn't find a MySQL table (that table obviously exists).
My app.yaml file:
service: XXX
runtime: custom
env: flex
automatic_scaling:
  max_concurrent_requests: 80
  min_num_instances: 1
  max_num_instances: 1
resources:
  cpu: XXX
  memory_gb: XXX
beta_settings:
  cloud_sql_instances: "XXX:XXXX:XXXX=tcp:3306"
It looks like the application is deployed properly despite the error. This is the only error, and the background service doesn't throw any exceptions at any later point. In fact, it works properly and can connect to the database.
My guess was that maybe GCP checks health while the application is not yet connected to the database. So I tried to add liveness_check and readiness_check to app.yaml and configured a dedicated /healthcheck endpoint in my application, but it didn't make any difference. (A sketch of that health-check config is below.)
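For reference, a minimal sketch of split health checks for the flexible environment; the /healthcheck path matches my endpoint, and the intervals and timeouts here are assumptions:

liveness_check:
  path: "/healthcheck"
  check_interval_sec: 30
  timeout_sec: 4
readiness_check:
  path: "/healthcheck"
  check_interval_sec: 5
  timeout_sec: 4
  app_start_timeout_sec: 300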
Any ideas how to fix it, and what might be the cause?
Deploying the app with a new version fixed the issue.
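For example, with the gcloud CLI; a minimal sketch where the version label v2 is an assumption:

# Redeploy the same code under a fresh version id and route traffic to it
gcloud app deploy app.yaml --version v2 --promote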

Pyspark read all JSON files from a subdirectory of S3 bucket

I am trying to read JSON files from a subdirectory called world of an S3 bucket named hello. When I list all the objects of that directory using boto3, I can see several part files (which were possibly created by a Spark job), like below:
world/
world/_SUCCESS
world/part-r-00000-....json
world/part-r-00001-....json
world/part-r-00002-....json
world/part-r-00003-....json
world/part-r-00004-....json
world/part-r-00005-....json
world/part-r-00006-....json
world/part-r-00007-....json
I have written the following code to read all these files:

from pyspark import SparkConf
from pyspark.sql import SparkSession

spark_session = (
    SparkSession.builder
    .config(conf=SparkConf().setAll(spark_config).setAppName(app_name))
    .getOrCreate()
)

hadoop_conf = spark_session._jsc.hadoopConfiguration()
hadoop_conf.set("fs.s3a.server-side-encryption-algorithm", "AES256")
hadoop_conf.set("fs.s3a.aws.credentials.provider", "org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider")
hadoop_conf.set("fs.s3a.access.key", "my-aws-access-key")
hadoop_conf.set("fs.s3a.secret.key", "my-aws-secret-key")
hadoop_conf.set("com.amazonaws.services.s3a.enableV4", "true")

df = spark_session.read.json("s3a://hello/world/")

and I am getting the following error:
py4j.protocol.Py4JJavaError: An error occurred while calling o98.json.
: com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 403, AWS Service: Amazon S3, AWS Request ID: , AWS Error Code: null, AWS Error Message: Forbidden, S3 Extended Request ID:
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:798)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:421)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3528)
at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:976)
at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:956)
at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:892)
at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:77)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1426)
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$org$apache$spark$sql$execution$datasources$DataSource$$checkAndGlobPathIfNecessary$1.apply(DataSource.scala:557)
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$org$apache$spark$sql$execution$datasources$DataSource$$checkAndGlobPathIfNecessary$1.apply(DataSource.scala:545)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
at scala.collection.immutable.List.foreach(List.scala:392)
at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
at scala.collection.immutable.List.flatMap(List.scala:355)
at org.apache.spark.sql.execution.datasources.DataSource.org$apache$spark$sql$execution$datasources$DataSource$$checkAndGlobPathIfNecessary(DataSource.scala:545)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:359)
at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:223)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211)
at org.apache.spark.sql.DataFrameReader.json(DataFrameReader.scala:392)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.base/java.lang.Thread.run(Thread.java:834)
I have tried with "s3a://hello/world/*" and "s3a://hello/world/*.json" as well, but I am still getting the same error.
FYI, I am using the following versions of the tools:
pyspark 2.4.5
com.amazonaws:aws-java-sdk:1.7.4
org.apache.hadoop:hadoop-aws:2.7.1
org.apache.hadoop:hadoop-common:2.7.1
Can anyone help me with this?
It seems that the credentials you are using to access the bucket/folder don't have the required access rights.
Please check the following things:
Credentials or role specified in your application code
Policy attached to the Amazon Elastic Compute Cloud (Amazon EC2) instance profile role
Amazon S3 VPC endpoint policy
Amazon S3 source and destination bucket policies
A few things you can use to debug quickly:
On the master node of the cluster, try to access the bucket using
aws s3 ls s3://hello/world/
If this throws an access error, try to resolve it by following this link:
https://aws.amazon.com/premiumsupport/knowledge-center/emr-s3-403-access-denied/
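You can also exercise the same key pair outside Spark; a minimal sketch with boto3, assuming the placeholder keys from your Spark config:

import boto3

# Use the exact same key pair the Spark job is configured with
s3 = boto3.client(
    "s3",
    aws_access_key_id="my-aws-access-key",
    aws_secret_access_key="my-aws-secret-key",
)

# A 403 (AccessDenied) here points at the IAM/bucket policy, not at Spark or Hadoop
resp = s3.list_objects_v2(Bucket="hello", Prefix="world/")
for obj in resp.get("Contents", []):
    print(obj["Key"])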

How to upload a file to OCI Object storage

I am trying to use the UploadObjectExample.java code to upload a file to OCI Object Storage. I am running into a connection timeout error while connecting to the Object Storage URL. The same config file is used by the OCI CLI to successfully upload files to OCI Object Storage.
Here is the Error log:
Exception in thread "main" com.oracle.bmc.model.BmcException: (-1, null, true) Timed out while communicating to: https://objectstorage.us-ashburn-1.oraclecloud.com (outbound opc-request-id: 1EB5AA4A7FD64D58A54F876AD0C9E83B)
at com.oracle.bmc.http.internal.RestClient.convertToBmcException(RestClient.java:572)
at com.oracle.bmc.http.internal.RestClient.put(RestClient.java:380)
at com.oracle.bmc.objectstorage.ObjectStorageClient.putObject(ObjectStorageClient.java:1053)
at com.oracle.bmc.objectstorage.transfer.internal.SimpleRetry$1.apply(SimpleRetry.java:34)
at com.oracle.bmc.objectstorage.transfer.internal.SimpleRetry$1.apply(SimpleRetry.java:26)
at com.oracle.bmc.objectstorage.transfer.UploadManager.singleUpload(UploadManager.java:111)
at com.oracle.bmc.objectstorage.transfer.UploadManager.upload(UploadManager.java:73)
at UploadObjectExample.main(UploadObjectExample.java:74)
Caused by: javax.ws.rs.ProcessingException: java.net.SocketTimeoutException: connect timed out
at org.glassfish.jersey.client.internal.HttpUrlConnector.apply(HttpUrlConnector.java:284)
at org.glassfish.jersey.client.ClientRuntime.invoke(ClientRuntime.java:278)
at org.glassfish.jersey.client.JerseyInvocation.lambda$invoke$0(JerseyInvocation.java:753)
at org.glassfish.jersey.internal.Errors.process(Errors.java:316)
at org.glassfish.jersey.internal.Errors.process(Errors.java:298)
at org.glassfish.jersey.internal.Errors.process(Errors.java:229)
at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:414)
at org.glassfish.jersey.client.JerseyInvocation.invoke(JerseyInvocation.java:752)
at org.glassfish.jersey.client.JerseyInvocation$Builder.method(JerseyInvocation.java:445)
at org.glassfish.jersey.client.JerseyInvocation$Builder.put(JerseyInvocation.java:334)
at com.oracle.bmc.http.internal.ForwardingInvocationBuilder.put(ForwardingInvocationBuilder.java:141)
at com.oracle.bmc.http.internal.RestClient.put(RestClient.java:377)
Please test curl -v https://objectstorage.us-ashburn-1.oraclecloud.com from the same machine where the Java client times out, just to make sure there are no connection issues. If that works fine, you may try to change the timeout value in ClientConfiguration. You can see more details here: https://github.com/oracle/oci-java-sdk/issues/92
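A minimal sketch of raising the timeouts through ClientConfiguration, assuming the standard ~/.oci/config file and DEFAULT profile; the timeout values are assumptions:

import com.oracle.bmc.ClientConfiguration;
import com.oracle.bmc.auth.ConfigFileAuthenticationDetailsProvider;
import com.oracle.bmc.objectstorage.ObjectStorageClient;

public class TimeoutConfigExample {
    public static void main(String[] args) throws Exception {
        // Raise both timeouts well above the defaults while debugging
        ClientConfiguration clientConfig = ClientConfiguration.builder()
                .connectionTimeoutMillis(30_000)
                .readTimeoutMillis(60_000)
                .build();

        // The same config file the OCI CLI uses
        ConfigFileAuthenticationDetailsProvider provider =
                new ConfigFileAuthenticationDetailsProvider("~/.oci/config", "DEFAULT");

        ObjectStorageClient client = ObjectStorageClient.builder()
                .configuration(clientConfig)
                .build(provider);
        // ... reuse this client for putObject/UploadManager as in UploadObjectExample
    }
}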
Before creating a support ticket, you might also try to create a new issue on github/oci-java-sdk.
Without knowing more about the config file (I do not suggest you post it here), your home region, and other elements, it is very hard to help.
I would suggest you open a support ticket at https://support.oracle.com, making sure that you select the Cloud tab and the Service as "Oracle Cloud Infrastructure".
Are you using a proxy? If so, you may need to use the OCI Java SDK ApacheConnector.
This was an issue with the proxy; it was resolved by using the ash7 proxy.

axis2/c server could not be opened

I'm trying to use Axis2/C on Windows to build a program, but when I click "axis2_http_server.exe" to start up the Axis2/C server, nothing happens except a flashing window, and I can't see the message "Started Simple Axis2 HTTP Server..." shown in the official Apache tutorial. I haven't changed anything other than setting the system environment variable as the instructions required. How can I get the server to start?
I've looked in the axis2c/logs directory; the details in axis2.log are as below:
....\src\core\deployment\conf_builder.c(903) Transport sender is NULL for transport http, unable to continue
....\src\core\deployment\conf_builder.c(262) Processing transport senders failed, unable to continue
....\src\core\deployment\dep_engine.c(752) Populating Axis2 Configuration failed
....\src\core\deployment\conf_init.c(64) Loading deployment engine failed for repository ../.
....\src\core\transport\http\receiver\http_receiver.c(126) unable to create private configuration contextfor repo path ../
....\src\core\transport\http\server\simple_axis2_server\http_server_main.c(215) Server creation failed: Error code: 103 :: Failed in creating DLL
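One way to keep the console output visible instead of a flashing window is to start the server from a command prompt and point it at the repository explicitly; a minimal sketch, where C:\axis2c is an assumed install location:

REM Run from cmd.exe so the window stays open and errors remain readable
set AXIS2C_HOME=C:\axis2c
cd /d %AXIS2C_HOME%\bin
REM -r sets the repository path; the log above shows the default repo "../" failed to load
axis2_http_server.exe -r %AXIS2C_HOME%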