Cloud Function Build Failed

I just changed two lines of code in my Google Cloud Functions source using the inline editor; the two lines parse a date string with the datetime library, and nothing else was updated. This same deployment has been working for more than a year.
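The change was roughly along these lines (a hypothetical sketch only; the field name and format string below are illustrative, not the actual code):

from datetime import datetime

# Illustrative only: parse an incoming date string into a datetime object
payload = {"event_date": "2020-02-20"}
parsed_date = datetime.strptime(payload["event_date"], "%Y-%m-%d")
print(parsed_date.date())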
All of a sudden I get two errors -
Error 1 -
(gcloud.functions.deploy) OperationError: code=3, message=Build
failed: {"error": {"canonicalCode": "INTERNAL", "errorMessage":
"pip_install_from_wheels had stderr
output:\n/opt/python3.7/bin/python3.7: No module named pip\n\nerror:
pip_install_from_wheels returned code: 1", "errorType":
"InternalError", "errorId": "ECB5F712"}}
I resolved that by removing pip from requirements.txt (again, not sure why this is a problem now when it hasn't been for over a year).
If I address error 1, I get the following error:
Error 2 -
(gcloud.functions.deploy) OperationError: code=3, message=Build
failed: {"error": {"canonicalCode": "INTERNAL", "errorMessage":
"gzip_tar_runtime_package gzip /tmp/tmpGLHQx9.tar -1\nexited with
error [Errno 12] Cannot allocate memory\ngzip_tar_runtime_package is
likely not on the path", "errorType": "InternalError", "errorId":
"2A1581FF"}}
Memory is already at 2048 MB, and nothing has changed other than the two lines of code above.
Let me know if this has been happening to others and what the resolution is.

It looks like this has more to do with packages than anything else. I deployed a dummy function and added each package from requirements.txt one at a time until the deployment failed (a sketch of the dummy function is below). It turns out the problem packages were
a. gpflow
b. tensorflow
The last deployment with these packages succeeded as of Feb 20. I'm not sure why I can't install them without these errors anymore. Regardless, I tried using the versions that would have been consistent with the Feb 20 timeline, with no luck. So I refactored my code, removed all the functionality that was using them, and deployed successfully.
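For reference, the dummy function used for the bisection was essentially a no-op HTTP function like the one below (an illustrative sketch; the function name and deploy command are placeholders, not taken from the actual project):

# main.py - a minimal HTTP Cloud Function used only to check whether the requirements install
def dummy(request):
    return "ok"

# Redeployed after adding one requirements.txt entry at a time, e.g.:
#   gcloud functions deploy dummy --runtime python37 --trigger-http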
Request to the Google Cloud folks: why this behavior? Also, the "Invalid ARGUMENT" in the logs (Stackdriver, or Google Cloud Logging, whatever you call it) is misleading.

The first error, as explained in this post, is due to pip being defined in your requirements.txt file. Specifying pip as a dependency for the function causes this message to appear. You did the right thing by removing it from the requirements.txt file.
The second error usually appears when the number of files or the size of the content being uploaded is too large and the instance used to deploy your code runs out of memory. Perhaps you were using too many dependencies or static files, as explained here.
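If the function really does need heavy dependencies, one way to at least shrink what gets uploaded is a .gcloudignore file next to main.py, so source control files, caches, and test assets are excluded from the build (the entries below are only an example):

# .gcloudignore - files matching these patterns are not uploaded with the function source
.git/
.gitignore
__pycache__/
*.pyc
tests/
docs/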

Related

Getting logs/more information during start-build command execution

A Jenkins pipeline is building Docker images; the OpenShift plugin(s) are used for this.
An example command:
openshift.selector(BUILD_CONFIG_NAME, "${appBcName}").startBuild("--from-dir=${artifactPath}", '--wait','--follow')
While this works smoothly most of the time, whenever this command fails due to some underlying platform issues, almost no information is seen in the Jenkins build job console:
[Pipeline] }
[start-build:buildconfig/amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd] ............................................................
[start-build:buildconfig/amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd] Uploading finished
[start-build:buildconfig/amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd] Error from server (BadRequest): unable to wait for build amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd-857 to run: timed out waiting for the condition
[Pipeline] }
ERROR: Error running start-build on at least one item: [buildconfig/amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd];
{err=, verb=start-build, cmd=oc --server=https://api.scp-west-zone02-z01.net:6443 --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt --namespace=sb-1166-amld5-car-service-se --token=XXXXX start-build buildconfig/amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd --from-dir=./build/libs --wait --follow -o=name , out=Uploading directory "build/libs" as binary input for the build ...
............................................................
Uploading finished
Error from server (BadRequest): unable to wait for build amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd-857 to run: timed out waiting for the condition
, status=1}
[Pipeline] // catchError
I need more verbosity and detailed error information. I checked the start-build command reference and thought --build-loglevel [0-5] might help here. When I used it, I got a warning that, since I am using the source type 'Binary' in the BuildConfig, this logging isn't supported (seriously???):
NOTE: the selector returned when -F/--follow is supplied to startBuild() will be inoperative for the various selector operations.
Consider removing those options from startBuild and using the logs() command to follow the build output.
[start-build:buildconfig/casc-docs-spacetime-ubi-openshift-java-runtimeadoptopenjdk] WARNING: Specifying --build-loglevel with binary builds is not supported.
[start-build:buildconfig/casc-docs-spacetime-ubi-openshift-java-runtimeadoptopenjdk] WARNING: Specifying environment variables with binary builds is not supported.
[start-build:buildconfig/casc-docs-spacetime-ubi-openshift-java-runtimeadoptopenjdk] Uploading directory "build/libs" as binary input for the build ...
[start-build:buildconfig/casc-docs-spacetime-ubi-openshift-java-runtimeadoptopenjdk] ..
How do I get more logs and information while executing the start-build command?
I was facing the same problem; I just used something like:
def build = openshift.selector(BUILD_CONFIG_NAME, "${appBcName}").startBuild("--from-dir=${artifactPath}", '--wait','--follow')
build.logs('-f')
So far it seems to work: I get the logs from my OpenShift build in my Jenkins pipeline. Next I'll try to fetch the logs only if the build does not reach Complete, to reduce the overall log volume.
(for future searchers like me ^^)

An unexpected error happened during phase Publishing of job

I am trying to use a Forge app with Revit. To do this, I execute the workitem from Postman. During execution I get the error below.
An unexpected error happened during phase Publishing of job.
The relevant parts of the actual report (after removing some of the sensitive info) are as follows:
[10/15/2020 05:45:24] Finished running. Process will return: Success
[10/15/2020 05:45:24] ====== Revit finished running: revitcoreconsole ======
[10/15/2020 05:45:25] End Revit Core Engine standard output dump.
[10/15/2020 05:45:25] End script phase.
[10/15/2020 05:45:25] Start upload phase.
[10/15/2020 05:45:25] Error: Non-optional output [result.json] is missing.
[10/15/2020 05:45:25] Error: An unexpected error happened during phase Publishing of job.
[10/15/2020 05:45:25] Job finished with result FailedMissingOutput
[10/15/2020 05:45:25] Job Status:
From the error it is clear that the plugin has failed during processing, but it doesn't give much idea as to why. After receiving this error, I tried to debug locally by following https://forge.autodesk.com/blog/design-automation-debug-revit-plugin-locally
But during debugging it fails with the error below while executing the plugin itself. It executes OnStartup without any issues, but after that it never reaches HandleDesignAutomationReadyEvent.
Managed Debugging Assistant 'FatalExecutionEngineError' : 'The runtime has encountered a fatal error. The address of the error was at 0xdb9b8a8d, on thread 0x3784*
So I am not sure how to go ahead to resolve this. If I can get this working somehow, either locally with the debugger or through Postman, it would help.
Further update: I have now found the root cause. Even though it was complaining about the missing output file, leading me to believe the code was not running properly, the actual root cause was that it could not find the model from the inputFile parameter. This became clear once I put Console.Write calls in the plugin code, after which debugging became easy. At first I wasn't sure the Console output would appear in the report, so I had left the verbose logging off. But as I didn't have any other means of verbose logging, I put in lots of Console.Write calls and finally found the root cause. Thanks

Xcode Server Bot, ipatool error

We have an Xcode Server bot set up for CI for our project, using Xcode 7.1. It's set to produce an IPA. We only recently noticed, but a few weeks ago, it started giving this warning every build:
Bot Issue for CareConsult Bot (develop) (build service warning)
Assertion: exportArchive: ipatool failed with an exception:
File: (null):(null)
This prevents it from producing an IPA, which is a problem.
I've tried:
- Creating a new bot
- Updating gems (saw a similar issue that was resolved this way)
Doing an archive & export on my local machine gives the same error if I choose to "Export for specific device". So the problem is not specific to the build server.
Any ideas?
My suspicion is that this has to do with enabling bitcode, and the build bot using the "compile from bitcode" option by default. I'm still digging into this, but figured I'd share what I have found thus far.

Play TypeSafe Activator fails to start - IllegalArgumentException "Failed to download new template catalog properties"

I moved from Play 2.2.x to the latest Activator last night. I downloaded the minimal 1.2.10 package, extracted it in Program Files (x86)\Typesafe... and put the directory into the system PATH variable. I cloned my repository, and when I executed activator run it downloaded the required modules and my app came up and ran. All great so far; run works!
Then I tried to create a new app, and Activator fails with the following trace:
Checking for a newer version of Activator (current version 1.2.10)...
... our current version 1.2.10 looks like the latest.
Found previous process id: 9632
FOUND REPO = activator-local # file:////C:/Program%20Files%20(x86)/Typesafe/activator-1.2.10-minimal/repository
Play server process ID is 9760
[info] play - Application started (Prod)
[info] play - Listening for HTTP on /127.0.0.1:8888
[info] a.e.s.Slf4jLogger - Slf4jLogger started
[WARN] [10/30/2014 10:47:13.972] [default-akka.actor.default-dispatcher-2] [ActorSystem(default)] Failed to download new template catalog properties: java.lang.IllegalArgumentException: requirement failed: Source file 'C:\Users\admin\.activator\1.2.10\templates\index.db_6e0565f0c8826b17.tmp' is a directory.
[ERROR] [10/30/2014 10:47:13.972] [default-akka.actor.default-dispatcher-2] [akka://default/user/template-cache] Could not find a template catalog. (activator.templates.repository.RepositoryException: We don't have C:\Users\admin\.activator\1.2.10\templates\cache.properties with an index hash in it, even though we should have downloaded one
activator.templates.repository.RepositoryException: We don't have C:\Users\admin\.activator\1.2.10\templates\cache.properties with an index hash in it, even though we should have downloaded one
at activator.cache.TemplateCacheActor.preStart(TemplateCacheActor.scala:184)
at akka.actor.Actor$class.aroundPreStart(Actor.scala:470)
at activator.cache.TemplateCacheActor.aroundPreStart(TemplateCacheActor.scala:25)
at akka.actor.ActorCell.create(ActorCell.scala:580)
at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:456)
at akka.actor.ActorCell.systemInvoke(ActorCell.scala:478)
at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:263)
at akka.dispatch.Mailbox.run(Mailbox.scala:219)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
I've taken a look at several similar issues on SO and elsewhere. I've deleted the .activator directory and retried; I've tried this process from behind a proxy and without one, as well as offline (surely offline should work!), but it consistently gives the above error. activator ui gives the same error. I'm stuck, and any suggestions would be appreciated. (Edit: tried with the full Activator download, rather than minimal, and I get the same error.)
Look for reasons it might be impossible to create or access 'C:\Users\admin\.activator\1.2.10\templates\index.db_6e0565f0c8826b17.tmp' ... maybe a permissions issue?
The failed check is for "is a directory" but that also fails if it just doesn't exist or can't be accessed.

drive.files.get error 500 from one server

We are having problems listing and getting files using the Google Drive SDK. The problem started today without any changes on our end. The same code runs fine on our local test server, so it seems to be related to the IP of the server.
The exception is quite vague:
Backend Error [500]
Errors [
Message[Backend Error] Location[ - ] Reason[backendError] Domain[global]
]
Could it be an IP ban due to quota? Although I would imagine a more informative error would have been returned if that were the case.