Keras Data Generator, show generated input and output - deep-learning

I implemented my own simple data generator based on https://stanford.edu/~shervine/blog/keras-how-to-generate-data-on-the-fly
My data generator uses images as both its input and its output.
Now I want to see the images that the generator will produce.
Does anyone know how to get them?
datagen = DataGenerator(ids_training, input_dir, output_dir, **params)
X,y = datagen.next()
Calling datagen.next() doesn't work; I get the error "'dataGenerator' object has no attribute 'next'".
Thanks in advance!
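A minimal sketch of one likely fix, assuming your class subclasses keras.utils.Sequence as in the linked tutorial: a Sequence exposes batches through indexing (__getitem__), not through a next() method. The plotting lines are illustrative and assume the batches hold image arrays.
import matplotlib.pyplot as plt

X, y = datagen[0]  # first batch: (input images, output images)

plt.subplot(1, 2, 1)
plt.imshow(X[0])   # first input image of the batch
plt.subplot(1, 2, 2)
plt.imshow(y[0])   # corresponding output image
plt.show()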

Related

How can I use the AllenNLP framework to embed a label?

Just as a TextFieldEmbedder embeds TextFieldTensors, I want to embed labels and convert them into tensors with the same dimension as the embedded input texts.
Do you know about LabelField? Just put the appropriate LabelFields into your instances, and AllenNLP will add the labels to the vocabulary for you. When Model.__init__() runs, you can query the size of the label namespace to find out how big to make your embedding matrix.
Just create a normal torch.nn.Embedding in your model, and as Dirk suggests, set its size based on the vocabulary's size for the label field during Model.__init__.
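A minimal sketch of that approach, assuming the labels were put into instances via LabelField under the default "labels" vocabulary namespace (the model name and embedding size are illustrative):
import torch
from allennlp.models import Model

class MyModel(Model):  # hypothetical model
    def __init__(self, vocab, label_embedding_dim=300):
        super().__init__(vocab)
        # LabelField entries land in the "labels" namespace by default
        num_labels = vocab.get_vocab_size("labels")
        self.label_embedder = torch.nn.Embedding(num_labels, label_embedding_dim)

    def forward(self, label):
        # label: tensor of label indices produced by the LabelField
        embedded_label = self.label_embedder(label)
        return {"embedded_label": embedded_label}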

Enabling 'perspective with ortho faces' in the viewer

I can currently do:
this.viewer.navigation.toOrthographic()
and
this.viewer.navigation.toPerspective()
Is there a way that I could also use the 'perspective with ortho faces' mode and switch the current viewer to it on the fly?
I get that there are VIEW_TYPES (https://forge.autodesk.com/en/docs/viewer/v7/reference/globals/VIEW_TYPES/) I could set at initialization, but I would like to change this option after the model loads, without having to use the view cube.
Thank you all in advance!
If I understood correctly what you are after, it's possible to drive the ViewCube programmatically through its extension (since v7.x onwards), for example:
// wait until after the model has rendered ...
const viewCubeUI = NOP_VIEWER.getExtension("Autodesk.ViewCubeUi");
viewCubeUI.setViewType(Autodesk.Viewing.Private.VIEW_TYPES.PERSPECTIVE_ORTHO_FACES);
See documentation here for details...

three.js cannot parse material in JSON object

Dear all, I am using Angular 2.4.10 and three.js 0.85.0. After downloading a JSON file from the server, I have the following JSON object at my disposal:
https://drive.google.com/file/d/0B7IcIHZN137RdmVRTXpLZmlPaDg/view?usp=sharing
I am trying to load the object without passing a URL to the loader, using the following code suggested in another Stack Overflow post:
loadModel(aJSONObject) {
    console.log(aJSONObject);
    let loader = new THREE.ObjectLoader();
    let model = loader.parse( aJSONObject );
    console.log(model);
    .....
}
It works, but it does not pick up the materials. How can I get the material from the JSON file?
I tried to load your model, and its single material was imported correctly.
However, there is no texture or color defined for this material in the JSON model, so the object will look white, the default color for three.js materials.
Fiddle here.
To see your model in three.js, make sure to scale it appropriately based on your camera position (I had to scale it down). Moreover, as the model uses a THREE.MeshPhongMaterial, don't forget to add some lights.
By the way, there are individual X, Y, and Z attributes defined in your model that needlessly increase its size (three.js won't use them).
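A minimal sketch of those two fixes, assuming an existing scene and camera (the scale factor and light settings are illustrative):
let loader = new THREE.ObjectLoader();
let model = loader.parse(aJSONObject);
model.scale.set(0.01, 0.01, 0.01);  // illustrative factor; tune it to your camera
scene.add(model);
// A MeshPhongMaterial is only visible with lighting in the scene
scene.add(new THREE.DirectionalLight(0xffffff, 1));
scene.add(new THREE.AmbientLight(0x404040));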

THREE.js - morphTargetInfluences on an imported JSON mesh not getting results

I have a basic three.js scene in which I am attempting to get objects exported from Blender (as JSON files with embedded morphs) to function and update their shapes with user input.
Here is a test scene
http://onthez.com/temphosting/three-js-morph-test/morph-test.html
The slab is being resized without morphs by simply scaling a box, which is working just fine.
I must be missing something fundamental with the little monument on top. It has 3 morphs (width, depth, height) that are intended to allow it to resize.
I am using this code to implement the morphs based on the user's dat.GUI input:
folder1.add( params, 'width', 12, 100 ).step(1).name("Width").onChange( function () {
    updateFoundation();
    building.morphTargetInfluences['width'] = params.width/100;
    roofL.morphTargetInfluences['width'] = params.width/100;
    roofR.morphTargetInfluences['width'] = params.width/100;
    building.updateMorphs();
});
The materials for building, roofL, and roofR each have morphTargets set to true.
I've been going over the three.js examples here:
http://threejs.org/examples/?q=morph#webgl_morphtargets_human
as well as #webgl_morphtargets and #webgl_morphtargets_horse
Any thoughts or input would be much appreciated!
I believe I've reached a solution to my question. I was under the impression that the JSON loader preserved the morph target names so they could be used in place of an index number with morphTargetInfluences,
something like morphTargetInfluences['myMorphTargetName'],
but after closer inspection in the console, it seems they should be referred to by number, like morphTargetInfluences[0].
Not the most intuitive, but I can work with it.
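A short sketch of that fix, assuming 'width' was exported as the first morph target (index 0); many three.js builds also expose a name-to-index map on the mesh, which avoids hard-coding the order:
building.morphTargetInfluences[0] = params.width / 100;

// If your build populates it, morphTargetDictionary maps names to indices:
if (building.morphTargetDictionary) {
    var idx = building.morphTargetDictionary['width'];
    building.morphTargetInfluences[idx] = params.width / 100;
}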

D3 JSON diagram shapes

I'm creating a diagram with D3 and JSON which is based on this:
http://bl.ocks.org/mbostock/4063550
The difference is, I also want to have different shapes for the nodes (not just circles but, for example, triangles).
I have an attribute in the JSON file that says something like "shape": "triangle".
How do I update that index.html file so that I can get different shapes to be displayed?
I really urgently need assistance... any help/advice is much appreciated.
Try modifying this line, which currently draws a circle, to be what you want:
node.append("circle")
.attr("r", 4.5);
D3 has some built-in SVG helpers for drawing symbols: d3.svg.symbol. As @pfrank suggests, you should be able to append a path instead of a circle and set its d attribute to the output of the symbol helper, configured to whatever shape you want.
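A minimal sketch using the D3 v3 API from the linked example, assuming each node's datum carries the "shape" attribute described above (the mapping to d3.svg.symbol's built-in type names is illustrative):
// Append a path and let d3.svg.symbol generate its "d" attribute from the
// node's "shape" field ("triangle" maps to the built-in "triangle-up" type).
node.append("path")
    .attr("d", d3.svg.symbol()
        .type(function(d) { return d.shape === "triangle" ? "triangle-up" : "circle"; })
        .size(64));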