Logging in Revit Design Automation add-in - autodesk-forge

I want to send some diagnostic output to the default report.txt file.
In some posts it is shown that exceptions are logged to this report.txt file somehow (automatically or not?).
Also, I see in some samples that people do the logging with
System.Console.WriteLine().
I've tried this, but I still can't see the output in the report file.
Could you tell me how to achieve this?
I understand there is an option to create another log file and send it back with the result, but I think it would be easier to use this existing report.txt.
Thanks!
UPDATE: System.Console.WriteLine() works.
The reason why I didn't see the output was that my add-in failed to load.
So, it simply didn't reach this line of code.

Logging in Design Automation for Revit appbundles can indeed be done with System.Console.WriteLine: anything sent to standard output is captured in the workitem's report.txt. For example, the following code:
System.Console.WriteLine("Hello World!");
will generate the following line in report.txt:
[04/23/2020 19:20:59] Hello World!
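For context, here is a minimal sketch of where such calls typically live in a Design Automation add-in, following the DesignAutomationFramework bridge pattern used in Autodesk's samples (the class name and the messages are illustrative, not the tutorial's exact code):

using Autodesk.Revit.ApplicationServices;
using Autodesk.Revit.DB;
using DesignAutomationFramework;

public class App : IExternalDBApplication
{
    public ExternalDBApplicationResult OnStartup(ControlledApplication application)
    {
        // If the add-in fails to load, execution never reaches this line,
        // so nothing appears in report.txt (as in the update above).
        System.Console.WriteLine("Add-in loaded.");
        DesignAutomationBridge.DesignAutomationReadyEvent += HandleDesignAutomationReadyEvent;
        return ExternalDBApplicationResult.Succeeded;
    }

    private void HandleDesignAutomationReadyEvent(object sender, DesignAutomationReadyEventArgs e)
    {
        System.Console.WriteLine("Design Automation ready; doing the work...");
        e.Succeeded = true;
    }

    public ExternalDBApplicationResult OnShutdown(ControlledApplication application)
    {
        return ExternalDBApplicationResult.Succeeded;
    }
}

Each WriteLine shows up in report.txt prefixed with a timestamp, as in the line above.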

Related

Error while executing Work Item "Cannot find the addin file"

I am new to the Design Automation API, so please excuse and correct me if I am using the wrong terms. I am setting up the wiring for my very first Design Automation AppBundle, and I have almost all of it working. I followed the patterns in the "Delete Walls" tutorial.
I have a working add-in DLL that I can test locally and it runs under the "design.automation-csharp-revit.local.debug.tool".
I also have all of the REST API connections set up, and I can successfully submit a WorkItem that downloads a Revit file from BIM 360 and starts processing it in the Design Automation sandbox. But I am getting an error during execution in the sandbox where it seems it can't find my add-in file. Here is an excerpt from the WorkItem log:
[07/21/2020 18:02:26] Resolving location of Revit/RevitCoreEngine installation...
[07/21/2020 18:02:26] Running user application....
[07/21/2020 18:02:31] Cannot find the addin file:
[07/21/2020 18:02:31] Fail to deploy Addon DLL(s) in AppPackages.
[07/21/2020 18:02:31] RESULT: Failure
I have looked through the "bundle" ZIP file many times looking for typos that could cause this, but I can't find anything; it looks identical to the "delete walls" example. So I'm wondering if there is somewhere else I need to look, or any other way I could debug this to find out where the connection is missing. I can only assume that the AppBundle and Activity items are set up correctly, since I am getting this far and the error doesn't mention either of those items.
Any tips on where to look?
This turned out to be triggered by a misspelling of the .bundle folder extension.
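For reference, this is the layout Design Automation expects inside the uploaded ZIP (the names mirror the Delete Walls tutorial and are only illustrative; a typo anywhere in the ".bundle" folder name reproduces exactly this error):

DeleteWalls.zip
    DeleteWallsApp.bundle          (folder extension must be exactly ".bundle")
        PackageContents.xml
        Contents
            DeleteWalls.dll

The PackageContents.xml inside the bundle must also reference the DLL via a path relative to the .bundle folder.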

Upload file to HTML input type='file' that does not belong to a form

I am attempting to upload a file via curl that basically should imitate how a user would upload a file to https://lutzroeder.github.io/netron/
I can see there is a:
<input type="file" id="open-file-dialog" style="display:none" multiple="false" accept=".onnx, .pb, .meta, .tflite, .lite, .tfl, .bin, .keras, .h5, .hd5, .hdf5, .json, .model, .mar, .params, .param, .armnn, .mnn, .ncnn, .dnn, .cmf, .mlmodel, .caffemodel, .pbtxt, .prototxt, .pkl, .pt, .pth, .t7, .joblib, .cfg, .xml">
But the input does not belong to any form, which I haven't seen before. When I try doing a traditional POST like:
curl -X POST -F 'data=@example.h5' https://lutzroeder.github.io/netron/
It is not permitted. How should I approach uploading a file to that input programmatically? I am trying to automate the creation of these Netron figures, as manually selecting e.g. 100 files to get 100 figures would be very cumbersome.
Thanks!
Judging by your comment and others', the HTML approach is probably (1) not feasible and (2) not going to completely solve your goal of automating the creation of figures anyway: filling in the input is only the first step; you still need to automate the export process, right?
Therefore I suggest that the easiest solution is to run your own instance of Netron viewer. Netron is an open-source project, and there are many ways to run it on your own computer as given in its documentation.
The approach you are looking at uses the browser version hosted on github.io. The documentation gives all sorts of other ways to run the viewer (macOS, Linux, Windows, Python server); pick the one that's most suitable for your situation (depending on your OS and programming experience) and then write a wrapper script (or hack the initialisation process, since you have the source code) to feed the viewer with files and collect outputs.

Forge model derivative job failed. What now?

I ran a model derivative job and the status came back: failed. After drilling through the return values, it said that two of the linked dwg files were missing. I added the dwg files, re-zipped and re-uploaded the zip. When I try to run the job, it keeps coming back with the initial failed status. Am I missing something?
Assuming you are using buckets: on the POST job endpoint, use the x-ads-force header; if you pass true, it will translate the file again.
In hindsight, one could say this is obvious, but it isn't spelled out in any documentation anywhere. Essentially, one needs to DELETE the failed manifest and run a new job. There doesn't seem to be any retry mechanism.
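Putting the two answers together, here is a rough sketch of the retry flow using plain HttpClient (the token, URN, and rootFilename are placeholders, and this assumes a compressed ZIP input like the one in the question; the Forge .NET SDK wraps the same endpoints):

using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

public static async Task RetranslateAsync(HttpClient http, string token, string urn)
{
    http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", token);

    // Clear the failed translation first; there is no built-in retry.
    await http.DeleteAsync(
        $"https://developer.api.autodesk.com/modelderivative/v2/designdata/{urn}/manifest");

    // Re-submit the job; "x-ads-force: true" forces a fresh translation
    // even if a manifest already exists for this URN.
    var body = "{\"input\":{\"urn\":\"" + urn + "\",\"compressedUrn\":true,\"rootFilename\":\"master.dwg\"}," +
               "\"output\":{\"formats\":[{\"type\":\"svf\",\"views\":[\"2d\",\"3d\"]}]}}";
    var request = new HttpRequestMessage(HttpMethod.Post,
        "https://developer.api.autodesk.com/modelderivative/v2/designdata/job")
    {
        Content = new StringContent(body, Encoding.UTF8, "application/json")
    };
    request.Headers.Add("x-ads-force", "true");
    (await http.SendAsync(request)).EnsureSuccessStatusCode();
}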

How to display all failures and error messages in JMeter HTML reports - JMeter

I have a question about JMeter reports.
I run my test plan in non-GUI mode and get a CSV file and an HTML dashboard from JMeter.
The problem is that the HTML dashboard is not informative enough. When the JMeter run finishes, I open the CSV and can see all the steps and all the thread groups, and for each step whether it passed or not, along with the error message.
The HTML dashboard is not informative: I can see the top 5 errors, but not which thread group they happened in. Moreover, I want to see all the errors and exactly where each one failed. Is there a way to display the whole CSV as HTML? All the reports are about performance and give no functional data, so after the run I still need to open the CSV, filter success rows from failure rows, and check the errors and assertions.
Is there any solution to see the full picture of errors in the reports?
My purpose is that, when opening the HTML report, a manual QA can see exactly which errors occurred, in which step and in which thread group, exactly like in the CSV: all of them, without grouping, just the full row data.
Pictures of the CSV and the HTML dashboard are provided:
[Screenshot: the CSV, where each step gets a line with results]
[Screenshot: the dashboard, where it is unclear which error occurred in which test and full error results are not shown]
This "Top 5 errors" is hard-coded so it isn't something you can easily configure. There is report-template folder under "bin" folder of your JMeter installation where default report template lives, you can amend FreeMarker configuration starting from here
An easier solution would be switching to JMeter Ant Task which contains very simplified test report in HTML format with verbose error information on each and every failure, it should be a good substitution for you as manual QAs normally don't need performance-related metrics and charts. See Five Ways To Launch a JMeter Test without Using the JMeter GUI article for more detailed explanation and example configuration.

Verify a Tif with ApprovalTests

I have been asked to update a system where header information gets injected into a tif via a 3rd party console application. I don't need to worry about that bit.
The part I have been asked to look at is the merge process that generates the header information.
The current file generated by the process is assumed to be correct before I make any changes, so I want to add it as an approved result; from that I can then check that the changes I make alter the file as expected.
I thought this would be a good opportunity to look at using ApprovalTests
The problem I have is that, for whatever reason, the links to the videos are considered corruptible here (possibly they show me kittens jumping into boxes or something, which would stop me working; ironically, this slows my work down because I cannot see any of the help videos).
What I have been looking at is the Approvals.Verify and Approvals.VerifyFile extensions.
But what appears to be happening is confusing me.
Using VerifyFile creates a received file, but the contents of that file are just a single line containing the name of the file I asked it to verify.
Using Verify(new FileInfo("FileNameHere")) does not appear to generate the received file that I need to flag as approved, but the test does fail saying that it cannot find the approved tif file.
I am probably using VerifyFile completely wrong and might be looking at using Verify wrong as well.
Useful info:
It might be useful to know that, as this is a legacy application running as a Windows service, I have wrapped the service in a harness that allows me to call the routines, so the files are physically written elsewhere on the machine outside of my control (well, there is a config, but the service I call generates a file in a fixed location when it succeeds). I have tried copying that file into the unit test project, but that doesn't appear to help.
Verify(FileInfo) and VerifyFile(string) are both meant to verify an existing file. As such, they merely set the received file to the file you pass in. You will still need to move/approve/create the approved file.
Here is the pseudo code and process.
using ApprovalTests;
using ApprovalTests.Reporters;

[UseReporter(typeof(DiffReporter), typeof(ClipboardReporter))]
public void TestTiff()
{
    // YourProcessToCreateTifFile() stands in for whatever produces the tif.
    string tif = YourProcessToCreateTifFile();
    Approvals.VerifyFile(tif);
}
[Note: if you don't have an image diff installed, like TortoiseDiff, you might want to use the FileLauncherReporter]
Run this; once you get the result, move the file over by pasting your clipboard into a cmd window.
It will move the temporary tif to your test directory with the name ClassName.TestTiff.approved.tif
After that the test should pass until something changes.
Happy Testing!