An unexpected error happened during phase Publishing of job - autodesk-forge

I am trying to use a Forge app (Design Automation) with Revit, and I am executing the workitem from Postman. During execution I get the error below:
An unexpected error happened during phase Publishing of job.
The relevant parts of the actual report (with some sensitive info removed) are as follows:
[10/15/2020 05:45:24] Finished running. Process will return: Success
[10/15/2020 05:45:24] ====== Revit finished running: revitcoreconsole ======
[10/15/2020 05:45:25] End Revit Core Engine standard output dump.
[10/15/2020 05:45:25] End script phase.
[10/15/2020 05:45:25] Start upload phase.
[10/15/2020 05:45:25] Error: Non-optional output [result.json] is missing.
[10/15/2020 05:45:25] Error: An unexpected error happened during phase Publishing of job.
[10/15/2020 05:45:25] Job finished with result FailedMissingOutput
[10/15/2020 05:45:25] Job Status:
From the error it is clear that the plugin failed during processing, but it does not give much idea as to why it failed. After receiving this error, I tried to debug locally by following https://forge.autodesk.com/blog/design-automation-debug-revit-plugin-locally
But during debugging it fails with the error below while executing the plugin itself. OnStartup executes without any issues, but after that it never reaches HandleDesignAutomationReadyEvent.
Managed Debugging Assistant 'FatalExecutionEngineError': 'The runtime has encountered a fatal error. The address of the error was at 0xdb9b8a8d, on thread 0x3784.'
So I am not sure how to proceed to resolve this. If I can get this working somehow, either locally with the debugger or through Postman, it would help.

Further update: I have now found the root cause. Even though the report complained about the missing output file, leading me to believe the code was not running properly, the actual root cause was that the plugin could not find the model from the inputFile parameter. This became clear once I put Console.Write calls in the plugin code, after which debugging became easy. At first I was not sure the Console output would appear in the report, so I had turned the debug logging off; but since I had no other means of verbose logging, I added lots of Console writes and that revealed the root cause. Thanks
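For reference, the kind of logging I added looks roughly like this. It is only a minimal sketch modeled on the Design Automation for Revit sample structure, so the class name and the way the input path is read are illustrative rather than my exact code:
using System;
using System.IO;
using Autodesk.Revit.ApplicationServices;
using Autodesk.Revit.DB;
using DesignAutomationFramework;

public class DebugApp : IExternalDBApplication
{
    public ExternalDBApplicationResult OnStartup(ControlledApplication app)
    {
        Console.WriteLine("OnStartup reached");
        DesignAutomationBridge.DesignAutomationReadyEvent += HandleDesignAutomationReadyEvent;
        return ExternalDBApplicationResult.Succeeded;
    }

    private void HandleDesignAutomationReadyEvent(object sender, DesignAutomationReadyEventArgs e)
    {
        // Anything written to Console shows up in the workitem report,
        // which is how the missing input model became visible.
        string inputPath = e.DesignAutomationData.FilePath;
        Console.WriteLine("Input model path: '" + inputPath + "'");
        Console.WriteLine("Input model exists: " + File.Exists(inputPath));

        // ... actual processing that produces result.json goes here ...

        e.Succeeded = true;
    }

    public ExternalDBApplicationResult OnShutdown(ControlledApplication app)
    {
        return ExternalDBApplicationResult.Succeeded;
    }
}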

Related

Revit Design Automation failedExecution with code -536852669

I was running a workitem on Design Automation for Revit, and it crashed with code -536852669. Looking at the log, it seems like it had almost finished the job when it crashed:
[06/08/2022 20:55:21] End Revit Core Engine standard output dump.
[06/08/2022 20:55:21] Error: Application revitcoreconsole.exe exits with code -536852669 which indicates an error.
[06/08/2022 20:55:21] End script phase.
[06/08/2022 20:55:21] Error: An unexpected error happened during phase CoreEngineExecution of job.
[06/08/2022 20:55:21] Job finished with result FailedExecution
The addon is a custom exporter (based on IExportContext) and was processing a relatively large model (7GB of Revit files). Any idea what may have caused the crash?

Getting logs/more information during start-build command execution

A Jenkins pipeline builds Docker images; the OpenShift plugin is used for this.
An example command:
openshift.selector(BUILD_CONFIG_NAME, "${appBcName}").startBuild("--from-dir=${artifactPath}", '--wait','--follow')
While this works smoothly most of the time, whenever this command fails due to some underlying platform issues, almost no information is seen in the Jenkins build job console:
[Pipeline] }
[start-build:buildconfig/amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd] ............................................................
[start-build:buildconfig/amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd] Uploading finished
[start-build:buildconfig/amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd] Error from server (BadRequest): unable to wait for build amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd-857 to run: timed out waiting for the condition
[Pipeline] }
ERROR: Error running start-build on at least one item: [buildconfig/amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd];
{err=, verb=start-build, cmd=oc --server=https://api.scp-west-zone02-z01.net:6443 --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt --namespace=sb-1166-amld5-car-service-se --token=XXXXX start-build buildconfig/amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd --from-dir=./build/libs --wait --follow -o=name , out=Uploading directory "build/libs" as binary input for the build ...
............................................................
Uploading finished
Error from server (BadRequest): unable to wait for build amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd-857 to run: timed out waiting for the condition
, status=1}
[Pipeline] // catchError
I need more verbosity and detailed error information. I checked the start-build command reference and thought --build-loglevel [0-5] might help here. When I used it, I got a warning that, because the BuildConfig uses the 'Binary' source type, that log level isn't supported (seriously?):
NOTE: the selector returned when -F/--follow is supplied to startBuild() will be inoperative for the various selector operations.
Consider removing those options from startBuild and using the logs() command to follow the build output.
[start-build:buildconfig/casc-docs-spacetime-ubi-openshift-java-runtimeadoptopenjdk] WARNING: Specifying --build-loglevel with binary builds is not supported.
[start-build:buildconfig/casc-docs-spacetime-ubi-openshift-java-runtimeadoptopenjdk] WARNING: Specifying environment variables with binary builds is not supported.
[start-build:buildconfig/casc-docs-spacetime-ubi-openshift-java-runtimeadoptopenjdk] Uploading directory "build/libs" as binary input for the build ...
[start-build:buildconfig/casc-docs-spacetime-ubi-openshift-java-runtimeadoptopenjdk] ..
How do I get more logs and information while executing the start-build command?
I was facing the same problem; I just used something like:
def build = openshift.selector(BUILD_CONFIG_NAME, "${appBcName}").startBuild("--from-dir=${artifactPath}", '--wait','--follow')
build.logs('-f')
So far it seems to work: I get the logs from my OpenShift build in my Jenkins pipeline. Next I'll try to fetch the logs only when the build does not complete, to reduce the overall log volume.
(for future searchers like me ^^)
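If it helps, a rough (untested) sketch of that follow-up idea: start the build without --wait/--follow so the returned selector stays usable, then dump logs only when the build did not complete. The untilEach call and the phase names come from the OpenShift client plugin documentation; the timeout value is arbitrary:
def build = openshift.selector(BUILD_CONFIG_NAME, "${appBcName}")
                     .startBuild("--from-dir=${artifactPath}")   // no --wait/--follow
timeout(30) {                                                    // minutes, adjust to your builds
    build.untilEach(1) {
        def phase = it.object().status.phase
        if (phase in ['Failed', 'Error', 'Cancelled']) {
            it.logs()                                            // dump logs only on failure
            error "Build ended in phase ${phase}"
        }
        return phase == 'Complete'
    }
}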

Azure pipeline getting error: [error]The read operation failed, see inner exception on mac hosted agent

I'm getting this error and I'm trying to find out why it suddenly happened,
and, more importantly, how to debug such an error.
What does this line mean:
Error The read operation failed, see inner exception.
Where is this inner exception?
2020-09-30T18:47:22.0199830Z ##[section]Starting: Initialize job
2020-09-30T18:47:22.0201330Z Agent name: 'Hosted Agent'
2020-09-30T18:47:22.0201750Z Agent machine name: 'Mac-1601490664598'
2020-09-30T18:47:22.0202040Z Current agent version: '2.175.2'
2020-09-30T18:47:22.0219900Z Current image version: '20200904.1'
2020-09-30T18:47:22.0229850Z Agent running as: 'runner'
2020-09-30T18:47:22.0293150Z Prepare build directory.
2020-09-30T18:47:22.0595770Z Set build variables.
2020-09-30T18:47:22.0631220Z Download all required tasks.
2020-09-30T18:47:22.0751440Z Downloading task: CmdLine (2.164.2)
2020-09-30T18:48:02.2372880Z Downloading task: UseRubyVersion (0.165.2)
2020-09-30T18:48:48.2651220Z Downloading task: DownloadBuildArtifacts (0.167.2)
2020-09-30T18:51:03.2405560Z ##[warning]Failed to download task 'DownloadBuildArtifacts'. Error The read operation failed, see inner exception.
2020-09-30T18:51:03.2423990Z ##[warning]Inner Exception: {ex.InnerException.Message}
2020-09-30T18:51:03.2428450Z ##[warning]Back off 23.799 seconds before retry.
2020-09-30T18:53:07.4698560Z ##[warning]Failed to download task 'DownloadBuildArtifacts'. Error The read operation failed, see inner exception.
2020-09-30T18:53:07.4701220Z ##[warning]Inner Exception: {ex.InnerException.Message}
2020-09-30T18:53:07.4704340Z ##[warning]Back off 13.329 seconds before retry.
2020-09-30T18:57:08.7191850Z ##[error]The read operation failed, see inner exception.
2020-09-30T18:57:08.7198800Z ##[section]Finishing: Initialize job
You are not the only one who encountered this interruption; see this post.
I reviewed our internal service telemetry log, and the issue you encountered should be caused by a service event on our side: https://status.dev.azure.com/_history
Some exceptions occurred on our backend starting from 15:23:27 CST, which caused the pipeline interruption you encountered.
how to debug such an error
Normally it's hard for users to check the inner exception when using a hosted pool; the detailed exception messages are recorded in our backend telemetry log. If you are blocked again in the future and would like to know the detailed message, you can contact our team by clicking the Report outage button on the status page.
Since the event has now been mitigated, your pipelines should work fine if you re-run them.

Cloud Function Build Failed

I just changed two lines of code in a Google Cloud Function's source using the inline editor; the two lines involve parsing a date string with the datetime library, and nothing else was updated. This same deployment has been working for more than a year.
All of a sudden I get two errors:
Error 1 -
(gcloud.functions.deploy) OperationError: code=3, message=Build
failed: {"error": {"canonicalCode": "INTERNAL", "errorMessage":
"pip_install_from_wheels had stderr
output:\n/opt/python3.7/bin/python3.7: No module named pip\n\nerror:
pip_install_from_wheels returned code: 1", "errorType":
"InternalError", "errorId": "ECB5F712"}}
I resolved that by removing pip from requirements.txt (again, not sure why this is a problem now when it wasn't for over a year).
Once I address error 1, I get the following error:
Error 2 -
(gcloud.functions.deploy) OperationError: code=3, message=Build
failed: {"error": {"canonicalCode": "INTERNAL", "errorMessage":
"gzip_tar_runtime_package gzip /tmp/tmpGLHQx9.tar -1\nexited with
error [Errno 12] Cannot allocate memory\ngzip_tar_runtime_package is
likely not on the path", "errorType": "InternalError", "errorId":
"2A1581FF"}}
Memory is already at 2048 MB, and nothing changed other than the two lines of code above.
Let me know if this has been happening and what is the resolution.
It looks like this has more to do with packages than anything else. I deployed a dummy function and added each package from requirements.txt until it failed. It turns out the problem packages were:
a. gpflow
b. tensorflow
The last deployment with these packages was successful as of Feb 20. I'm not sure why I can't install them without those errors anymore. Regardless, I tried using the versions that would have been consistent with the Feb 20 timeline, with no luck. So I refactored my code, removed all the functionality that used those packages, and deployed successfully.
Request to the Google Cloud folks: why this behavior? Also, "Invalid ARGUMENT" in the logs (Stackdriver, or Google Cloud Logging, whatever you call it) is misleading.
The first error, as explained in this post, is due to pip being defined in your requirements.txt file. Specifying pip as a dependency of the function causes this message to appear. You did the right thing by removing it from the requirements.txt file.
The second error usually appears if the number of files or the size of the content that is being uploaded is too big and the instance used to deploy your code runs out of memory. You perhaps were using too many dependencies or static files, as explained here.
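If the upload size is indeed the problem, one possible mitigation (my assumption, not something from the original question) is to add a .gcloudignore file next to the function source so large local artifacts are not sent with the deployment. The entries below are only placeholders:
# .gcloudignore uses the same pattern syntax as .gitignore
.gcloudignore
.git
.gitignore
__pycache__/
*.pyc
# hypothetical local folders the function does not need at runtime
data/
notebooks/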

popen2("unix") not working in octave

I'm trying to get Octave to run a two-way subprocess, in order to communicate with the shell 'online' while processing data acquired from the shell.
The normal popen is not good for me because it waits for the subprocess to return before I'm able to process the data.
So I tried all kinds of approaches, and I've read the Octave example for using popen2("sort"), but it didn't help me get popen2("unix") working.
The error I get is:
error: popen2: popen2 (child): unable to start process -- No such file or directory
I get this error for other popen2 commands such as popen2("help") as well. Maybe I'm missing something.
The error message
error: popen2: popen2 (child): unable to start process -- No such file or directory
is trying to tell you that there is no command or program called "unix". Which OS are you using, and why do you expect a command "unix" to be available? By the way, have you had a look at system?
If you really want two-way communication with a shell, try:
[in, out, pid] = popen2 ("bash");
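As a minimal sketch of the two-way usage, assuming bash is the shell you want and "ls" stands in for whatever command you actually need, the read loop below mirrors the EAGAIN handling from the popen2 example in the Octave manual:
[in, out, pid] = popen2 ("bash");
fputs (in, "ls\n");            # send a command to the shell
fclose (in);                   # closing stdin lets bash exit when it is done
EAGAIN = errno ("EAGAIN");
done = false;
do
  s = fgets (out);
  if (ischar (s))
    fputs (stdout, s);         # process the shell output here
  elseif (errno () == EAGAIN)
    pause (0.1);               # nothing available yet; wait and retry
    fclear (out);
  else
    done = true;               # EOF: bash has exited
  endif
until (done)
fclose (out);
waitpid (pid);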