I want to debug a new layer I am writing for Caffe (.cpp, .cu, .hpp) in Nsight. Normally, would you write a test for the layer and debug a test .cpp that has main() in it and calls the new layer? And should I write something similar to the layer test files in Caffe?
You should definitely write a test for your layer, testing both the forward pass and the gradient computation (using the automatic numeric gradient test utility).
Once you have this test it is straightforward to ascertain that your layer works properly, and if not, it is easier to debug it without running the entire framework.
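Caffe's own gradient checker is a C++ test utility, but the idea behind it can be sketched in a few lines of NumPy: compare the layer's analytic gradient against a central-difference numeric estimate. The toy "layer" below (a summed ReLU) and all names are illustrative, not Caffe API:

```python
import numpy as np

def numeric_gradient(f, x, eps=1e-5):
    """Central-difference estimate of df/dx for a scalar-valued f."""
    grad = np.zeros_like(x)
    it = np.nditer(x, flags=['multi_index'])
    while not it.finished:
        idx = it.multi_index
        orig = x[idx]
        x[idx] = orig + eps
        fp = f(x)
        x[idx] = orig - eps
        fm = f(x)
        x[idx] = orig          # restore the perturbed entry
        grad[idx] = (fp - fm) / (2 * eps)
        it.iternext()
    return grad

# Example: check the analytic gradient of a toy "layer",
# loss = sum(relu(x)), whose gradient is the indicator (x > 0).
x = np.array([[-1.0, 2.0], [0.5, -3.0]])
loss = lambda v: np.maximum(v, 0).sum()
analytic = (x > 0).astype(float)
numeric = numeric_gradient(loss, x.copy())
assert np.allclose(analytic, numeric, atol=1e-4)
```

Caffe's GradientChecker does essentially this perturbation for every bottom blob element, which is why such a test catches most Backward bugs before you ever run a full net.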
Related
How do I do a simple smoke test with Vorto?
I use Vorto to integrate Ditto and devices.
In the topology device-Hono-Vorto-Ditto, we want the simplest way to test the workflow. Is there any method?
I assume you are using the Vorto Semantic Middleware between Eclipse Hono and Eclipse Ditto, similar to https://github.com/eclipse/vorto-examples/tree/master/vorto-dashboard/docs/AssetTracking.md
The following tutorial describes how to set up the pipeline as well as how to test it: https://github.com/eclipse/vorto/blob/development/docs/tutorials/create_mapping_pipeline.md
This tutorial does not mention Eclipse Ditto as a consumer of the pipeline, because it is important that the basic (Device-Hono-Vorto) pipeline works first before adding Eclipse Ditto to the equation.
Does the mapped / semantic data of your helmet appear correctly in the middleware logs?
I have seen that researchers add functionality to the original version of Caffe, use those layers and functions according to their needs, and then share these versions through GitHub. If I am not mistaken, there are two ways: 1) recompiling Caffe after adding C++ and CUDA versions of the layers, or 2) writing the functionality in Python and calling it as a Python layer in Caffe.
I want to add a new layer to Caffe based on my research problem. I really do not know from which point I should start writing the new layer and which steps I should consider.
My questions are:
1) Is there any documentation or learning material that I can use for writing the layer?
2) Which of the above-mentioned methods of adding a new layer is preferred?
I really appreciate any help and guidance
Thanks a lot
For research purposes, for "playing around", it is usually more convenient to write a Python layer: it saves you the hassle of compiling, etc.
You can find a short tutorial on "Python" layer here.
On the other hand, if you want better performance you should write your layer natively in C++ (and CUDA for the GPU path).
You can find a short explanation about it here.
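To give a feel for what a "Python" layer involves, here is a minimal sketch of the setup/reshape/forward/backward contract. The layer and the Blob stand-in below are hypothetical; in a real Caffe setup the class would subclass caffe.Layer, live in a module on your PYTHONPATH, and be referenced from the prototxt via a python_param block:

```python
import numpy as np

# Sketch of the "Python" layer contract (hypothetical layer; in real
# Caffe this class would subclass caffe.Layer and be wired up from the
# prototxt, e.g. python_param { module: "my_layers" layer: "EltwiseScaleLayer" }).
class EltwiseScaleLayer(object):
    """Multiplies its input by a fixed scalar -- a toy stand-in."""
    def setup(self, bottom, top):
        self.scale = 2.0  # in real Caffe, often parsed from param_str

    def reshape(self, bottom, top):
        top[0].data = np.zeros_like(bottom[0].data)
        top[0].diff = np.zeros_like(bottom[0].data)

    def forward(self, bottom, top):
        top[0].data[...] = self.scale * bottom[0].data

    def backward(self, top, propagate_down, bottom):
        # Chain rule: d(loss)/d(bottom) = scale * d(loss)/d(top)
        bottom[0].diff[...] = self.scale * top[0].diff

# Minimal stand-in for a Caffe blob so the sketch runs without Caffe.
class Blob(object):
    def __init__(self, data):
        self.data = np.asarray(data, dtype=np.float64)
        self.diff = np.zeros_like(self.data)

bottom, top = [Blob([1.0, -2.0, 3.0])], [Blob([0.0, 0.0, 0.0])]
layer = EltwiseScaleLayer()
layer.setup(bottom, top)
layer.reshape(bottom, top)
layer.forward(bottom, top)  # top[0].data is now [2., -4., 6.]
```

The appeal for research is clear from the sketch: you only implement these four methods in NumPy, with no recompilation, at the cost of staying on the CPU for that layer.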
Usually when using WebGL one writes most of the graphics code in a function bound to window.onload. For the sake of REPL-style graphics development, is it possible to write that WebGL code interactively in the JavaScript console?
Of course it is, but WebGL is a very verbose API. You have to upload, compile, and link shaders; look up attributes and uniforms; create buffers and textures and upload data into each; then bind all textures and buffers, set attributes and uniforms, and finally call one of the draw functions.
Doing all of that from a REPL would be pretty tedious and error prone.
That said, when I'm debugging I often paste something like this into the devtools REPL:
gl = document.querySelector("canvas").getContext("webgl");
That gives me the WebGLRenderingContext for the first canvas on the page (which is usually what I want). I can then, for example, check if there's an error:
gl.getError();
Another common thing I do is check the available extensions:
document.createElement("canvas").getContext("webgl").getSupportedExtensions().join("\n");
Otherwise, if you're looking to edit WebGL in real time, that's usually limited to things like glslsandbox.com or vertexshaderart.com, where you're editing a single shader that's used in a single way, not driving the entire WebGL API from a REPL. There's also shdr, which gives you a single model and both a vertex and a fragment shader to work with.
If you really want a REPL you probably need some engine above it in which case it would be a name-of-engine REPL and not a WebGL REPL.
I tried to use a pretrained model (VGG 19) with DIGITS but I got this error:
ERROR: Your deploy network is missing a Softmax layer! Read the
documentation for custom networks and/or look at the standard networks
for examples
I am testing with my own dataset, which has only two classes.
I read this and this and tried to modify the last layer, but I still got an error. How can I modify the layers for a new dataset?
When I modify the last layer I get this error:
ERROR: Layer 'softmax' references bottom 'fc8' at the TRAIN stage however this blob is not included at that stage. Please consider using an include directive to limit the scope of this layer.
You're having a problem because you're trying to upload a "train/val" network when you really need to be uploading an "all-in-one" network. Unfortunately, we don't document this very well. I've created an RFE to remind us to improve the documentation.
Try to adjust the last layers in your network to look something like this: https://github.com/NVIDIA/DIGITS/blob/v4.0.0/digits/standard-networks/caffe/lenet.prototxt#L162-L184
For more information, here is how I've proposed updating Caffe's example networks to all-in-one nets, and here is how I updated the default DIGITS networks to be all-in-one nets.
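For reference, the all-in-one pattern in the linked lenet.prototxt ends with final layers along these lines (assuming, as in your error message, that your last inner-product blob is named fc8 and your label blob is named label; adjust the names to your network):

```
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "fc8"
  bottom: "label"
  top: "loss"
  exclude { stage: "deploy" }
}
layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "fc8"
  bottom: "label"
  top: "accuracy"
  include { stage: "val" }
}
layer {
  name: "softmax"
  type: "Softmax"
  bottom: "fc8"
  top: "softmax"
  include { stage: "deploy" }
}
```

The include/exclude stage directives are what resolve the "this blob is not included at that stage" error: the loss layer is kept out of the deploy network, while the plain Softmax layer exists only there.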
Can we use JUnit to test Java batch jobs? Since JUnit runs locally and Java batch jobs run on the server, I am not sure how to start a job from JUnit test cases (I tried using the JobOperator class).
If JUnit is not the right tool, how can we unit test Java batch code?
I am using IBM's implementation of JSR 352 running on WAS Liberty.
JUnit is first of all a test automation and test monitoring framework. Meaning: you can use it to drive all kinds of @Test methods.
From a conceptual point of view, the definition of unit tests is pretty vague; if you follow Wikipedia, "everything you do to test something" can be seen as a unit test. Following that perspective, of course you can "unit test" batch code that runs on a batch framework.
But: most people think that "true", "helpful" unit tests do not require the presence of any external thing. Such tests can be run "locally" at build time. No need for servers, file systems, networking, ...
Keeping that in mind, I think there are two things you can work with:
You can use JUnit to drive "integration" or "functional" tests. Meaning: you can define test suites that do the "full thing": define batches, have them processed, and check for the expected results in the end. As said, these would be integration tests that make sure the end-to-end flow works as expected.
You look into "normal" JUnit unit-testing. Meaning: you focus on those aspects of your code that are unrelated to the batch framework (in other words: look out for POJOs) and unit-test those. Locally; maybe with mocking frameworks; without relying on a real batch service running your code.
Building on the answer from @GhostCat, it seems you're asking how to drive the full job (his first bullet) in your tests. (Of course, unit testing the reader/processor/writer components individually can also be useful.)
Your basic options are:
Use Arquillian (see here for a link on getting started with Arquillian and Liberty) to run your tests in the server but to let Arquillian handle the tasks of deploying the app to the server and collecting the results.
Write your own servlet harness driving your job through the JobOperator interface. See the answer by @aguibert to this question for a starting point. Note you'll probably want to write a simple routine that polls the JobExecution for one of the "finished" states (COMPLETED, FAILED, or STOPPED) unless your jobs have some other means of notifying the submitter.
Another technique to keep in mind is the startup bean. You can run your jobs simply by starting the server with a startup bean like:
@Startup
@Singleton
public class StartupBean {
    @PostConstruct
    public void runJobs() {
        JobOperator jobOp = BatchRuntime.getJobOperator();
        // Drive job(s) on startup.
        jobOp.start(...);
    }
}
This can be useful if you have a way to check the job results separate from using the JobOperator interface (for which you need to be in the server). Your tests can simply poll and check for the job results. You don't even have to open an HTTP port, and the server startup overhead is only a few seconds.