popen2("unix") not working in octave - octave

I'm trying to get Octave to execute a two-way sub-process, in order to communicate with the shell 'online' while processing data acquired from the shell.
The normal popen is no good for me because it waits for the sub-process to return before I'm able to process the data.
So I tried all kinds of ways, and I've read the Octave example for using popen2("sort"), but it didn't help me get popen2("unix") working.
The error I get is:
error: popen2: popen2 (child): unable to start process -- No such file or directory
I get this error for other popen2 commands too, such as popen2("help"). Maybe I'm missing something.

The error message
error: popen2: popen2 (child): unable to start process -- No such file or directory
is telling you that there is no command or program called "unix". Which OS are you using, and why are you expecting a command named "unix" to be available? Btw, have you had a look at system?
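For one-shot commands, system can already capture the output for you; a minimal sketch (the command itself is just an example):
[status, output] = system ("ls /tmp");  # run a command and capture its stdout
disp (output)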
If you really want two-way communication with a shell, try
[in, out, pid] = popen2 ("bash");
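For instance, here is a minimal sketch adapted from the popen2 ("sort") example in the Octave manual, with bash as the child; the commands sent to bash are just an illustration:
[in, out, pid] = popen2 ("bash");
fputs (in, "echo hello from bash; date\n");  # feed commands to bash's stdin
fclose (in);                                 # EOF makes bash run them and exit
EAGAIN = errno ("EAGAIN");
done = false;
do
  s = fgets (out);
  if (ischar (s))
    fputs (stdout, s);          # print each line bash produced
  elseif (errno () == EAGAIN)
    sleep (0.1);                # no data yet; wait and retry
    fclear (out);
  else
    done = true;                # EOF: bash has exited
  endif
until (done)
fclose (out);
waitpid (pid);                  # reap the child process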


Getting logs/more information during start-build command execution

A Jenkins pipeline is building Docker images; the OpenShift plugin(s) are used for this.
An example command:
openshift.selector(BUILD_CONFIG_NAME, "${appBcName}").startBuild("--from-dir=${artifactPath}", '--wait','--follow')
While this works smoothly most of the time, whenever this command fails due to some underlying platform issue, almost no information is shown in the Jenkins build job console:
[Pipeline] }
[start-build:buildconfig/amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd] ............................................................
[start-build:buildconfig/amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd] Uploading finished
[start-build:buildconfig/amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd] Error from server (BadRequest): unable to wait for build amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd-857 to run: timed out waiting for the condition
[Pipeline] }
ERROR: Error running start-build on at least one item: [buildconfig/amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd];
{err=, verb=start-build, cmd=oc --server=https://api.scp-west-zone02-z01.net:6443 --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt --namespace=sb-1166-amld5-car-service-se --token=XXXXX start-build buildconfig/amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd --from-dir=./build/libs --wait --follow -o=name , out=Uploading directory "build/libs" as binary input for the build ...
............................................................
Uploading finished
Error from server (BadRequest): unable to wait for build amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd-857 to run: timed out waiting for the condition
, status=1}
[Pipeline] // catchError
I need more verbosity and detailed error information. I checked the start-build command reference and thought --build-loglevel [0-5] might help here. When I used it, I got a warning that since the BuildConfig uses the 'Binary' source type, logging isn't supported (seriously???)
NOTE: the selector returned when -F/--follow is supplied to startBuild() will be inoperative for the various selector operations.
Consider removing those options from startBuild and using the logs() command to follow the build output.
[start-build:buildconfig/casc-docs-spacetime-ubi-openshift-java-runtimeadoptopenjdk] WARNING: Specifying --build-loglevel with binary builds is not supported.
[start-build:buildconfig/casc-docs-spacetime-ubi-openshift-java-runtimeadoptopenjdk] WARNING: Specifying environment variables with binary builds is not supported.
[start-build:buildconfig/casc-docs-spacetime-ubi-openshift-java-runtimeadoptopenjdk] Uploading directory "build/libs" as binary input for the build ...
[start-build:buildconfig/casc-docs-spacetime-ubi-openshift-java-runtimeadoptopenjdk] ..
How do I get more logs and information while executing the start-build command?
I was facing the same problem; I just used something like:
def build = openshift.selector(BUILD_CONFIG_NAME, "${appBcName}").startBuild("--from-dir=${artifactPath}", '--wait','--follow')
build.logs('-f')
And so far it seems to work: I got the logs from my OpenShift build in my Jenkins pipeline. Now I'll try to fetch the logs only if the build does not complete, to reduce the overall log volume; see the sketch below.
(for future searchers like me ^^)
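A rough sketch of that follow-up idea, assuming the OpenShift Client plugin's object() accessor and the standard Build status.phase field (check both against your plugin version); it also drops --follow, per the plugin's own note above:
def build = openshift.selector(BUILD_CONFIG_NAME, "${appBcName}")
                     .startBuild("--from-dir=${artifactPath}", '--wait')
def phase = build.object().status.phase  // 'Complete', 'Failed', 'Cancelled', ...
if (phase != 'Complete') {
    build.logs()                         // dump the build log only on failure
    error "Build finished in phase ${phase}"
}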

An unexpected error happened during phase Publishing of job

I am trying to use a Forge app with Revit, and I am executing the work item from Postman. During execution I get the error below.
An unexpected error happened during phase Publishing of job.
The relevant parts of the actual report (after removing some of the sensitive info) are as follows:
[10/15/2020 05:45:24] Finished running. Process will return: Success
[10/15/2020 05:45:24] ====== Revit finished running: revitcoreconsole ======
[10/15/2020 05:45:25] End Revit Core Engine standard output dump.
[10/15/2020 05:45:25] End script phase.
[10/15/2020 05:45:25] Start upload phase.
[10/15/2020 05:45:25] Error: Non-optional output [result.json] is missing.
[10/15/2020 05:45:25] Error: An unexpected error happened during phase Publishing of job.
[10/15/2020 05:45:25] Job finished with result FailedMissingOutput
[10/15/2020 05:45:25] Job Status:
From the error it is clear that the plugin failed during processing, but it doesn't give much idea as to why. After receiving this error, I tried to debug locally by following https://forge.autodesk.com/blog/design-automation-debug-revit-plugin-locally
But during debugging it fails with the error below while executing the plugin itself. OnStartup executes without any issues, but after that it never reaches HandleDesignAutomationReadyEvent.
Managed Debugging Assistant 'FatalExecutionEngineError': 'The runtime has encountered a fatal error. The address of the error was at 0xdb9b8a8d, on thread 0x3784.'
So I am not sure how to proceed to resolve this. If I can get this working somehow, either locally with the debugger or through Postman, it would help.
Further update: I have now found the root cause. Even though it was complaining about the missing output file, leading me to believe the code was not running properly, the actual root cause was that it was not finding the model from the inputFile parameter. This became clear once I put Console.Write calls in the plugin code, after which debugging became easy. I wasn't sure at first that Console output would show up in the report, which is why I had turned the debug logs off. But as I didn't have any other means of verbose logging, I put in lots of Console writes and got to the root cause. Thanks.
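For illustration, the Console tracing looked roughly like this; this is a hypothetical sketch following the DesignAutomationFramework samples, so the handler signature and the DesignAutomationData.FilePath usage should be checked against your own plugin:
private void HandleDesignAutomationReadyEvent(object sender, DesignAutomationReadyEventArgs e)
{
    Console.WriteLine("Design Automation ready; resolving input model...");
    string path = e.DesignAutomationData.FilePath;    // model supplied via the inputFile parameter
    Console.WriteLine($"Input model path: '{path}'"); // an empty/invalid path was the root cause here
    // ... actual plugin logic, with a Console.WriteLine after each step ...
    e.Succeeded = true;
}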

Cloud Function Build Failed

I just changed two lines of code in the Google Cloud Functions source code using the inline editor; the two lines involve parsing a date string using the datetime library, and nothing else was updated. This same deployment has been working for more than a year now.
All of a sudden I get two errors -
Error 1 -
(gcloud.functions.deploy) OperationError: code=3, message=Build
failed: {"error": {"canonicalCode": "INTERNAL", "errorMessage":
"pip_install_from_wheels had stderr
output:\n/opt/python3.7/bin/python3.7: No module named pip\n\nerror:
pip_install_from_wheels returned code: 1", "errorType":
"InternalError", "errorId": "ECB5F712"}}
Resolved that by removing pip from requirements.txt (again, not sure why this is a problem now when it wasn't for over a year)
If I address error 1, I get the following error -
Error 2 -
(gcloud.functions.deploy) OperationError: code=3, message=Build
failed: {"error": {"canonicalCode": "INTERNAL", "errorMessage":
"gzip_tar_runtime_package gzip /tmp/tmpGLHQx9.tar -1\nexited with
error [Errno 12] Cannot allocate memory\ngzip_tar_runtime_package is
likely not on the path", "errorType": "InternalError", "errorId":
"2A1581FF"}}
Memory is already at 2048 MB, and nothing changed other than the two lines of code above.
Let me know if this has been happening to others and what the resolution is.
It looks like this has more to do with packages than anything else. I deployed a dummy function and added each package from requirements.txt until it failed. It turns out the problem packages were
a. gpflow
b. tensorflow
The last deployment with these packages was successful as of Feb 20. Not sure why I can't install them without those errors anymore. Regardless, I tried using the versions that would have been consistent with the Feb 20 timeline, with no luck. So I refactored my code, removed all the functionality that used those packages, and deployed successfully.
Request to the Google Cloud folks: why this behavior? Also, "Invalid ARGUMENT" in the logs (Stackdriver, or Google Cloud Logging, whatever you call it) is misleading.
The first error, as explained in this post, is due to pip being listed in your requirements.txt file. Specifying pip as a dependency of the function causes this message to appear. You did the right thing by removing it from the requirements.txt file.
The second error usually appears when the number of files or the size of the content being uploaded is too big and the instance used to deploy your code runs out of memory. Perhaps you were using too many dependencies or static files, as explained here.
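For illustration, a requirements.txt along these lines avoids the first error; the pinned versions are hypothetical, not taken from the question:
# requirements.txt -- do not list pip itself; the build image provides it
# pip              <- a line like this triggers the "No module named pip" failure
gpflow==2.0.5      # hypothetical pins for the packages named above
tensorflow==2.3.1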

Is it possible to use virt-manager both for Openvz and KVM?

My connection attempt throws the error:
Could not connect to libvirt.
End of file while reading data: sh: nc: command not found: Input/output error
Make sure that the remote node is running libvirtd.
You should be able to do this with OpenVZ 7; I do not believe it is possible with any other version of OpenVZ.
If you continue having issues, providing additional information, such as your OpenVZ kernel version and how you arrived at the error you posted, would be helpful.

How can I check if a Perl module (DBD::mysql) is properly installed?

I am using XAMPP on Mac OS X Yosemite, and I am trying to communicate with my MySQL database using Perl.
This requires two things: (1) DBI and (2) the mysql driver module, DBD::mysql.
I ran into a lot of trouble installing the DBD::mysql portion. However, after following some instructions online, it now looks like DBD::mysql is installed, but I am skeptical that it installed correctly.
In Terminal, when I load up cpan and then type "install DBD::mysql", it responds, "DBD::mysql is up to date (4.032)".
From the looks of it, then, it is installed. However, I am worried that what I've installed is enough for it to say, "Hey, I am installed!", but not enough for it to actually be functional, which would explain why errors show up when I try to connect to my database with Perl:
install_driver(mysql) failed: Can't load '/Library/Perl/5.18/darwin-thread-multi-2level/auto/DBD/mysql/mysql.bundle' for module DBD::mysql: dlopen(/Library/Perl/5.18/darwin-thread-multi-2level/auto/DBD/mysql/mysql.bundle, 1): Library not loaded: libmysqlclient.18.dylib
Referenced from: /Library/Perl/5.18/darwin-thread-multi-2level/auto/DBD/mysql/mysql.bundle
Reason: image not found at /System/Library/Perl/5.18/darwin-thread-multi-2level/DynaLoader.pm line 194.
at (eval 6) line 3.
Compilation failed in require at (eval 6) line 3.
Perhaps a required shared library or dll isn't installed where expected
at login.pl line 9.
Relevant Perl code snippet:
use DBI;

my $dbh = DBI->connect(
    "dbi:mysql:dbname=TEST",
    "root",
    "",
    { RaiseError => 1 },
) or die $DBI::errstr;
I am trying to troubleshoot whether this is a problem with my installation of DBD::mysql, or if it is my Perl code.
How can I verify whether my installation of DBD::mysql is all good? Better yet, how can I stop getting this error?
Thank you.
You could wrap the connect() call in an eval block to catch this error; however, you don't have many options left if it fails, probably just producing a more user-friendly error message and then dying anyway.
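A minimal sketch of that eval wrapper, reusing the connect arguments from the question:
use strict;
use warnings;
use DBI;

# Trap driver-load/connect failures instead of dying with the raw error.
my $dbh = eval {
    DBI->connect("dbi:mysql:dbname=TEST", "root", "",
                 { RaiseError => 1, PrintError => 0 });
};
if (!$dbh) {
    # $@ holds the exception, e.g. the install_driver(mysql) failure above
    die "Could not connect to MySQL: " . ($@ || $DBI::errstr) . "\n";
}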
As for why you are getting the error, try checking where libmysqlclient.18.dylib lives on your system. It seems it was somewhere it could be found during compilation of the DBD::mysql driver, but not during your script's runtime. Assuming it hasn't been accidentally uninstalled, adding its directory to the DYLD_LIBRARY_PATH environment variable should work. I don't know which config file to add the path to to make that addition permanent on OS X, though.
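For example, something along these lines; the XAMPP library path is a guess based on a typical install, so adjust it to wherever the file actually lives:
# Quick check: does the driver load at all outside the script?
perl -MDBD::mysql -e 'print "DBD::mysql $DBD::mysql::VERSION loaded OK\n"'
# Locate the missing library, then point the dynamic loader at its directory
mdfind -name libmysqlclient.18.dylib
export DYLD_LIBRARY_PATH="/Applications/XAMPP/xamppfiles/lib:$DYLD_LIBRARY_PATH"
perl login.pl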